DS T1 Report - Load Balancing in Cloud Computing
The paper addresses challenges in load balancing in cloud computing, including the
need to reduce response time, ensure fault tolerance, improve task migration, minimize
waiting times, and implement energy-aware task allocation for critical applications.
Cloud computing provides diverse services, but load balancing is a common challenge
affecting performance and compliance. This paper reviews static, dynamic, and
nature-inspired load balancing techniques for cloud environments to optimize Data
Center Response Time and overall performance. It identifies research gaps, introduces
a fault-tolerant framework, and explores existing frameworks in recent literature.
Insufficient Focus on Response Time Optimization: emphasized the need for efficient
load balancing to reduce response times and enhance system performance.
Conclusion:
The paper "A novel load balancing technique for cloud computing platform based on
PSO" addresses the challenge of load balancing and task scheduling in cloud
computing environments. Load balancing involves sharing tasks among multiple
machines to speed up job completion and ensure VMs perform well. The primary focus
is on optimizing resource utilization by minimizing makespan and balancing the load
among virtual machines (VMs) through efficient task scheduling.
Before the proposed work, existing systems encountered limitations like inefficient
resource usage, lengthy makespan, and suboptimal load balancing algorithms. Past
approaches, including PSO-based task scheduling, load-balancing algorithms like
PSOBTS and L-PSO, and dynamic load balancing, aimed to address these issues.
PSO:
● Optimizes task allocation in cloud computing
● Balances loads among VMs for optimal resource utilization
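A minimal sketch of the objective that PSO-based schedulers such as LBMPSO minimize, the makespan of a candidate task-to-VM assignment (the task lengths and VM speeds below are hypothetical, not the paper's data):

```python
# Makespan of a candidate task-to-VM assignment: the fitness value
# that PSO-based task schedulers try to minimize (illustrative values).

def makespan(assignment, task_lengths, vm_mips):
    """assignment[i] = index of the VM that executes task i."""
    finish = [0.0] * len(vm_mips)
    for task, vm in enumerate(assignment):
        finish[vm] += task_lengths[task] / vm_mips[vm]
    return max(finish)

tasks = [4000, 2000, 6000, 1000]   # instruction counts (hypothetical)
vms = [1000, 2000]                 # VM speeds in MIPS (hypothetical)

print(makespan([1, 0, 1, 0], tasks, vms))  # 5.0 (load spread over both VMs)
print(makespan([1, 1, 1, 1], tasks, vms))  # 6.5 (everything on the fast VM)
```

A PSO swarm would encode such assignments as particle positions and move them toward the assignment with the smallest makespan.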
Result:
Conclusion:
The LBMPSO technique effectively reduces makespan, increases resource utilization,
and balances loads in cloud computing environments, outperforming existing methods.
Future work aims to enhance quality-of-service parameters for further improvements.
Paper 3: Hybridization of meta-heuristic algorithm for load balancing in cloud
computing environment
The problem addressed in the paper is the need for efficient load balancing in cloud
computing environments. Specifically, the challenge of distributing tasks among virtual
machines (VMs) to optimize performance, reduce response times, and maximize
throughput is highlighted. The goal is to achieve better resource utilization and overall
system efficiency in a cloud network.
Existing System:
● Hierarchical load balancing schemes: organize tasks into a hierarchical structure
for efficient distribution across nodes.
● Dynamic load balancing through heat diffusion: adjust task distribution dynamically
by simulating workload diffusion across nodes.
The paper concludes that the QMPSO methodology offers a promising solution for
improving load balancing in cloud computing environments. By leveraging the
hybridization of MPSO and Q-learning, the proposed algorithm demonstrates enhanced
performance in terms of load optimization, response time reduction, and overall system
efficiency. The validation of the algorithm through simulations and real scenarios
confirms its effectiveness in achieving better load distribution and resource utilization in
cloud networks.
● Inefficient Load Balancing Algorithms: QMPSO combines MPSO and Q-learning for
dynamic load balancing, optimizing task distribution among VMs.
● Increased Response Times and Task Delays: QMPSO minimizes response times and task
delays through optimized load distribution and task prioritization.
Result:
Conclusion:
QMPSO enhances cloud load balancing effectively. It is stable, efficient, and
addresses dynamic-environment challenges; it overcomes existing limitations by
optimizing resource allocation, and it is promising for improving system performance
in cloud computing.
Name - Kunjan Bharade
C No. - C22020441610
Roll No. - 4610
BTech IT
Introduction :
Summary:
The paper addresses multiple load balancing strategies in cloud computing aimed at
optimizing various performance metrics. It introduces a taxonomy for load balancing
algorithms in the cloud and provides a concise overview of performance parameters
studied in existing literature and their impacts. Performance evaluation of
heuristic-based algorithms is conducted through simulations using the CloudSim
simulator, with detailed results presented.
● In a cloud data center, a finite number of diverse physical hosts are present, each
identified by a host identification number and characterized by processing
elements, processing speed in terms of Million Instructions Per Second (MIPS),
memory size, bandwidth, and other attributes. These hosts accommodate
several VMs, each possessing attributes similar to those of a host.
● Tasks originating from various users are directed to the central load balancer or
serial scheduler for resource mapping within the cloud environment. Each
computing node (VM) executes a single task at a time, with the load balancer
assigning incoming requests to VMs if sufficient resources are available to meet
deadlines.
● Tasks that cannot be immediately executed wait based on Service Level
Agreement (SLA) conditions. Upon task completion, the resources utilized by the
task on a specific VM are released, potentially creating new VMs to handle
additional requests.
● The scheduling model within the cloud data center necessitates load balancing
due to the vast array of heterogeneous input tasks with varying resource
requirements. Input tasks (T1, T2, ..., Tn) are submitted to the cloud system's
task queue, where the VM manager assesses resource availability, active VMs,
and task queue lengths across hosts. If the available active VMs can
accommodate the input tasks, the VM manager assigns the tasks to the task
scheduler. Otherwise, the VM manager creates necessary VMs in hosts with
suitable resource availability.
● The task scheduler functions as a load balancer, orchestrating the mapping of
tasks to VMs based on their resource demands, with each host in the cloud
supporting a finite number of active VMs.
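The VM-manager flow described above can be sketched as a toy dispatcher (the class names, capacities, and first-fit policy are illustrative assumptions, not the paper's model):

```python
# Toy model of the VM-manager flow: assign each task to an active VM
# with enough free capacity, creating a new VM when none fits.

class VM:
    def __init__(self, mips):
        self.mips = mips
        self.free = mips

def dispatch(tasks, vms, new_vm_mips=1000):
    """tasks: list of required MIPS per task; returns task -> VM mapping."""
    mapping = {}
    for i, need in enumerate(tasks):
        vm = next((v for v in vms if v.free >= need), None)
        if vm is None:                       # no active VM fits: create one
            vm = VM(max(need, new_vm_mips))
            vms.append(vm)
        vm.free -= need                      # reserve resources for the task
        mapping[i] = vm
    return mapping

vms = [VM(1000)]
mapping = dispatch([600, 600, 300], vms)
print(len(vms))  # 2: a second VM was created for the second task
```

On task completion the reserved capacity would be released again (`vm.free += need`), matching the resource-release step in the model.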
Predictability (PR): the degree to which task allocation, task execution, and task
completion can be predicted according to the available cloud resources (virtual
machines).
Static strategy: acts on VMs without any load information. Typically, static load
balancing strategies in cloud computing rest on two assumptions: the initial arrival
of tasks, and the availability of physical machines at the beginning. The resource
update is carried out after each task is scheduled.
Dynamic strategy: an allocation policy in which the current load information of the
VMs is available before allocation. The load is distributed among physical machines
at run-time; the arrival times of tasks are irregular, and virtual machines are
created according to the type of input tasks. Dynamic strategies can be classified
into two categories: off-line (batch) mode, where a task is allocated only at
predefined moments, and on-line mode, where a user request (task) is mapped onto a
computing node as soon as it enters the scheduler.
OLB (Opportunistic Load Balancing): a heuristic used in both static and dynamic
(on-line mode) strategies in a cloud environment. It always allocates tasks to
virtual machines arbitrarily and then checks for the next available machine.
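A rough sketch of the OLB idea, arbitrary assignment to the next available machine with no regard for expected execution time (the VM identifiers are hypothetical):

```python
import random

# OLB sketch: pick an arbitrary machine order, then hand each task to
# the next available VM without estimating its execution time.

def olb(num_tasks, vms):
    """vms: list of VM identifiers; returns a list of (task, vm) pairs."""
    available = list(vms)
    random.shuffle(available)                  # arbitrary machine order
    schedule = []
    for task in range(num_tasks):
        vm = available[task % len(available)]  # next available machine
        schedule.append((task, vm))
    return schedule

print(olb(4, ["vm0", "vm1"]))
```

Because OLB ignores execution times, it keeps machines busy but can produce long makespans, which is consistent with heuristics like MCT outperforming it.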
Simulation results:
● The authors have analyzed the load balancing algorithms (MCT, MET, Min-Min,
Max–Min, and Min–Max) through simulation with generated datasets.
● The experiments were performed using the CloudSim-3.0.3 simulator on a system
with an Intel Core i7 4th-generation processor (3.4 GHz CPU) and 8 GB RAM, running
the Microsoft Windows 8 platform. The arrival rate of tasks follows the Pareto
distribution.
● Here, to analyze the algorithms, the authors have considered makespan and
energy consumption of the system as performance metrics.
● The authors have conducted two sets of simulation scenarios as follows.
❖ Scenario-1: For this scenario, the total number of tasks is 500 which is
fixed. The number of VMs varies from 20 to 200 in intervals of 20.
❖ Scenario-2: For this scenario, the total number of VMs is 100 which is
fixed. The number of input tasks varies from 100 to 1000 in intervals of
100.
● A comparative report for Scenario-1 is shown in Figs. 4 and 5: the makespan and
energy consumption are lowest for the MCT load balancing algorithm among the five
compared algorithms.
● A comparative report for Scenario-2 is shown in Figs. 6 and 7, with the same
outcome: MCT again yields the lowest makespan and energy consumption.
● In both scenarios, the Max–Min load balancing algorithm did not perform better
than the MCT, MET, Min-Min, and Min–Max algorithms.
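The best-performing MCT heuristic can be sketched as follows (the task lengths and MIPS values are illustrative, not the paper's generated datasets):

```python
# MCT (Minimum Completion Time): assign each task to the VM whose
# ready time plus the task's execution time on that VM is smallest.

def mct(task_lengths, vm_mips):
    ready = [0.0] * len(vm_mips)          # time at which each VM frees up
    assignment = []
    for length in task_lengths:
        # completion time of this task on each VM
        ct = [ready[v] + length / vm_mips[v] for v in range(len(vm_mips))]
        best = ct.index(min(ct))
        ready[best] = ct[best]
        assignment.append(best)
    return assignment, max(ready)         # schedule and resulting makespan

assignment, makespan = mct([3000, 3000, 1000], [1000, 1000])
print(assignment)  # [0, 1, 0]
print(makespan)    # 4.0
```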
Conclusion :
The paper discusses the importance of load balancing in cloud computing to enhance
system stability and performance. It categorizes load balancing algorithms into static
and dynamic approaches, highlighting their impact on system efficiency. Key
performance metrics such as makespan and energy consumption are identified as
crucial indicators of load balancing effectiveness. The study emphasizes the
significance of task allocation and execution in optimizing resource utilization. Overall,
the paper provides valuable insights into load balancing strategies in cloud
environments and suggests areas for future research and improvement in load
balancing algorithms.
Paper 2 : Resource scheduling algorithm with load balancing for cloud service
provisioning
Summary :
The paper addresses the challenges of resource scheduling, response time
optimization, and load balancing in cloud data centers by proposing a novel fuzzy-based
approach and multidimensional queuing network model for efficient resource allocation
and improved performance.
4. Efficient load balancing and resource utilization: The F-MRSQN method aims to
achieve maximum resource utilization and minimum processing time by
optimizing resource allocation and balancing loads across cloud servers [T4].
Result :
● Average Success Rate (ASR) is the ratio of the number of users' requests addressed
by the virtual machine grid through the resource manager at a particular time in the
cloud environment.
● The F-MRSQN method increases the average success rate by 6% compared to the
existing STM method and by 12% compared to SWDP.
● Resource scheduling efficiency is defined as the ratio of the number of resources
scheduled based on user requests to the total number of resources in the cloud.
● F-MRSQN improves resource scheduling efficiency by 6% compared to STM and by 7%
compared to SWDP.
● Response time is defined as the time taken by the MQLO algorithm to respond in a
distributed manner while scheduling resources in the cloud.
● The proposed F-MRSQN method reduces response time by 26% compared to the existing
STM method and by 45% compared to SWDP.
Conclusion :
The Fuzzy-based Multidimensional Resource Scheduling and Queuing Network (F-MRSQN)
method is proposed for efficient scheduling of resources and load optimization for
each cloud user request as data centers evolve. Resource scheduling efficiency is
improved by performing fuzzy-based multidimensional resource scheduling in the
proposed F-MRSQN method. With the subsequent application of the multidimensional
queuing network, efficient balancing of loads on the scheduled resources is
achieved, enhancing the average success rate for each cloud user request. The
results show that the F-MRSQN method provides better performance, improving the
average success ratio by 9% and reducing the response time by 20% compared to
existing methods.
Paper 3 : Hybridization of firefly and Improved Multi-Objective Particle Swarm
Optimization algorithm for energy efficient load balancing in Cloud Computing
environments
Summary :
The paper addresses the optimization of load balancing in Cloud Computing
environments through the proposal of a new hybrid algorithm called FIMPSO (a
combination of Firefly algorithm and Improved Multi-Objective Particle Swarm
Optimization technique). The FIMPSO algorithm aims to distribute resources effectively
among computers, networks, or servers to manage workload demands and application
demands in a cloud environment.
FF algorithm :
Generally, the Firefly Algorithm (FF) is categorized into three premises:
1. All fireflies are unisex, so a firefly is attracted to any other firefly
regardless of sex.
2. In fireflies, attraction is proportional to brightness, meaning a lower-brightness
firefly will be attracted to a brighter one. Consequently, as the distance between
fireflies increases, both attraction and brightness diminish. If a firefly is not
brighter than a specific individual, it moves randomly to avoid attraction.
3. An objective function is utilized to calculate the brightness of fireflies.
The core principles of the FF algorithm revolve around light intensity and
attraction: a firefly's attractiveness is quantified through its light intensity,
while brightness is determined by the objective function being optimized.
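The attraction rule at the heart of FF can be sketched as a single movement step (the parameter values beta0, gamma, and alpha are common defaults, not taken from the paper):

```python
import math, random

# Firefly movement sketch: a dimmer firefly at xi moves toward a
# brighter firefly at xj; attractiveness decays with distance as
# beta0 * exp(-gamma * r^2), plus a small random perturbation.

def move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.2):
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    beta = beta0 * math.exp(-gamma * r2)     # attraction fades with distance
    return [a + beta * (b - a) + alpha * (random.random() - 0.5)
            for a, b in zip(xi, xj)]

dim = move([0.0, 0.0], [1.0, 1.0])           # dimmer firefly steps toward brighter
```

Brightness itself comes from evaluating the objective function at each firefly's position; fireflies not brighter than any neighbor move randomly, as premise 2 states.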
IMPSO algorithm :
Here are the steps involved in the IMPSO algorithm summarized in points:
1. Initialization of PSO:
- Determine the swarm set number (J) and particle dimension (D).
- Set computation range for variable values (U(l)mn and U(l)mx).
- Control particle speed and initialize swarm position and speed randomly.
2. Parameter evolution:
- Set maximum iterations (zmx).
- Define higher and lower inertia weights (ωmn = 0.4, ωmx = 0.9).
- Establish training factors (t1 = t2 = g = 2).
3. Evaluation of objective function:
- Compute objective function values for each particle.
- Perform particle generalization.
4. Particle position initialization:
- Set personal best position (pbest(l)) and global best position (gbest(l)).
5. Archive initialization:
- Save non-dominated solutions into the archive.
6. Iterative process:
- If maximum iterations are not reached:
(a) Explore gbest(l) from the archive.
(b) Update particle position and speed.
(c) Perform mutation.
(d) Update archive.
(e) Upgrade personal best solutions.
(f) Update iteration value.
7. Repeat cycle until iteration requirements are met.
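Steps 1-7 can be sketched as a standard PSO loop with the stated parameters. This single-objective version omits the archive and mutation steps, so it is only an approximation of IMPSO:

```python
import random

# Minimal PSO loop mirroring steps 1-7: inertia weight decays linearly
# from w_mx = 0.9 to w_mn = 0.4; training factors t1 = t2 = 2.

def pso(f, dim, bounds, swarm=20, z_mx=100, w_mn=0.4, w_mx=0.9, t1=2, t2=2):
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for z in range(z_mx):
        w = w_mx - (w_mx - w_mn) * z / z_mx      # linearly decreasing inertia
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + t1 * random.random() * (pbest[i][d] - pos[i][d])
                             + t2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if f(pos[i]) < f(pbest[i]):          # upgrade personal best
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=f)[:]             # update global best
    return gbest

best = pso(lambda x: sum(v * v for v in x), dim=2, bounds=(-5, 5))
```

On this toy sphere objective the swarm converges near the origin; IMPSO additionally maintains an archive of non-dominated solutions for the multi-objective case.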
Results :
The FIMPSO algorithm produced favorable results, achieving an average response time
of 13.58 ms, the highest CPU utilization of 98%, and the highest memory utilization of
93% under extra-large tasks. Additionally, the proposed method demonstrated a
maximum reliability of 67% and a makespan of 148, along with the highest average
throughput of 72% under extra-large tasks.
Hence, it can be deduced that the FIMPSO algorithm maintains a high average
throughput across all task sizes, although the average throughput decreases as the
number of tasks increases.
Conclusion :
The simulation outcome showed that the proposed FIMPSO model outperformed the
compared methods: the FIMPSO algorithm achieved the least average response time of
13.58 ms, the maximum CPU utilization of 98%, memory utilization of 93%, reliability
of 67%, and throughput of 72%, along with a makespan of 148, which was superior to
all other compared methods.
Name -Ashlesha Ahirwadi
C No-C22020441608
Roll No-4608
BTech IT
Introduction :
Existing Solution :
The existing solution presented in the paper involves the use of conventional load
balancing algorithms in Cloud environments. However, due to the high volume of
requests and servers available at any given time, these traditional algorithms may not
be as effective. The ones mentioned in the paper are Genetic Algorithm and Simulated
Annealing.
Simulated Annealing:
Simulated annealing is a probabilistic optimization technique inspired by the annealing
process in metallurgy. The algorithm mimics the annealing process where a material is
heated and then slowly cooled to reach a low-energy state. In simulated annealing, a
system starts at a high temperature where random moves are accepted even if they
increase the objective function value. As the temperature decreases over time, the
algorithm becomes more selective, accepting only moves that decrease the objective
function value. This balance between exploration and exploitation allows simulated
annealing to escape local optima and converge towards a global optimum.
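The acceptance rule described above can be sketched as follows (the objective, step size, and cooling schedule are illustrative assumptions):

```python
import math, random

# Simulated annealing sketch: worse moves are accepted with probability
# exp(-delta / T); T cools geometrically, so early exploration gives
# way to exploitation, as described above.

def anneal(f, x0, step, t0=10.0, cooling=0.95, iters=1000):
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        fc = f(cand)
        delta = fc - fx
        if delta < 0 or random.random() < math.exp(-delta / t):
            x, fx = cand, fc              # accept (possibly worse) move
            if fx < fbest:
                best, fbest = x, fx       # remember the best point seen
        t *= cooling                      # slow cooling
    return best

best = anneal(lambda x: (x - 3) ** 2, x0=0.0, step=1.0)
```

At high temperature nearly every move is accepted (random exploration); as `t` shrinks, `exp(-delta / t)` approaches zero for worsening moves and the search becomes greedy, which is how the algorithm escapes local optima early on yet still converges.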
Proposed solution:
The proposed solution in the paper is a Hybrid Fuzzy-Ant Colony Algorithm designed to
enhance load balancing in Cloud Computing environments. This novel approach
combines Fuzzy logic and ant colony optimization to address the challenges of
managing a large number of virtual machines and servers in the Cloud. The algorithm
aims to optimize load balancing, response time, and processing time by introducing a
Fuzzy module for pheromone calculation and improving the parameters of the ant
colony optimization algorithm. By adapting the pheromone formula to the context of
Cloud computing and conducting simulations using the CloudAnalyst platform, the
proposed algorithm demonstrates its effectiveness compared to traditional algorithms.
The study focuses on optimizing multiple objectives while leveraging the strengths of
Fuzzy logic and ant colony optimization to achieve efficient load balancing in Cloud
environments.
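For orientation, a generic ant-colony selection and pheromone-update step is sketched below. The paper's Fuzzy pheromone-calculation module is not reproduced here, and all parameter values are assumptions:

```python
import random

# Generic ACO step for VM selection: an ant picks a VM with probability
# proportional to pheromone^alpha * heuristic^beta. The Hybrid
# Fuzzy-Ant Colony Algorithm replaces the pheromone formula with a
# Fuzzy module; this sketch uses the plain ACO rule instead.

def choose_vm(pheromone, heuristic, alpha=1.0, beta=2.0):
    weights = [(p ** alpha) * (h ** beta)
               for p, h in zip(pheromone, heuristic)]
    r = random.uniform(0, sum(weights))
    for vm, w in enumerate(weights):
        r -= w
        if r <= 0:
            return vm
    return len(weights) - 1

def evaporate_and_deposit(pheromone, deposits, rho=0.1):
    """Standard update: tau = (1 - rho) * tau + delta_tau."""
    return [(1 - rho) * p + d for p, d in zip(pheromone, deposits)]
```

Here `heuristic` would encode per-VM desirability (for example, inverse load), and `deposits` the pheromone laid down by ants whose assignments achieved good response times.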
The comparison of results between the Hybrid Fuzzy-Ant Colony Algorithm (ACO) and
the Round Robin algorithm in the paper shows significant improvements in performance
metrics, particularly in response time and processing time. The results indicate that the
ACO algorithm outperforms the Round Robin algorithm in terms of response time and
processing time across different scenarios. Here is a summary of the performance
improvements achieved by the ACO algorithm over Round Robin:
- Scenario S1: ACO reduced response time by 12.39% and processing time by 28.8%
compared to Round Robin.
- Scenario S2: ACO reduced response time by 9.27% and processing time by 87.9%
compared to Round Robin.
- Scenario S3: ACO reduced response time by 82.93% and processing time by 90.5%
compared to Round Robin.
- Scenario S4: ACO reduced response time by 9.55% and processing time by 35.9%
compared to Round Robin.
- Scenario S5: ACO reduced response time by 37.62% and processing time by 53.2%
compared to Round Robin.
These results demonstrate the superior performance of the Hybrid Fuzzy-Ant Colony
Algorithm over the Round Robin algorithm in optimizing load balancing and achieving
significant improvements in response time and processing time in Cloud Computing
environments.
Conclusion :
The Hybrid Fuzzy-Ant Colony Algorithm proposed in the study offers a novel approach
to enhancing load balancing in Cloud Computing environments. By combining Fuzzy
logic and ant colony optimization, the algorithm demonstrates superior performance in
optimizing response time and processing time compared to traditional algorithms like
Round Robin. The results highlight the effectiveness of the Hybrid Fuzzy-Ant Colony
Algorithm in improving load balancing efficiency without compromising cost objectives.
Overall, the study showcases the potential of this algorithm to significantly enhance
performance metrics in Cloud environments, making it a promising solution for
optimizing load balancing strategies.
Paper 2 - A Secure and Optimized Load Balancing for Multi-tier IoT
and Edge-Cloud Computing Systems
Introduction :
This paper focuses on the increasing demand for efficient and secure computation
offloading techniques in Mobile-edge computing (MEC) systems. The study addresses
the challenges faced by Mobile Device Users (MDUs) in executing resource-intensive
tasks by proposing a novel load balancing algorithm. Additionally, a new security layer
based on electrocardiogram (ECG) signal encryption is introduced to enhance data
security during transmission. The integration of load balancing, computation
offloading, and security measures aims to optimize system performance and energy
consumption in multitier IoT and edge-cloud computing environments.
Existing solution :
Proposed solution :
The proposed solution in the IEEE Internet of Things Journal article introduces a
comprehensive approach that combines load balancing and computation offloading
(CO) for multitier Mobile-edge cloud computing systems. The key components of
the proposed solution are outlined below:
By integrating the load balancing algorithm, security layer, and optimization problem
formulation, the proposed solution offers a comprehensive framework for enhancing
the performance, efficiency, and security of multi-tier IoT and edge-cloud computing
systems in the context of Mobile-edge computing (MEC) environments.
The results and observations of the study presented in the IEEE Internet of Things
Journal article highlight the performance improvements and benefits of the proposed
load balancing and computation offloading (CO) technique with the integrated security
layer for multitier Mobile-edge cloud computing systems. The key findings and
observations are detailed below:
3. Security Enhancement:
- The introduction of the new security layer, which incorporates AES cryptographic
strategy with encryption and decryption keys derived from ECG signals, enhances data
security during different stages of data transmission. By infusing ECG signal
parameters into the encryption process, the system ensures secure communication
and protects sensitive information from potential security threats.
5. Performance Comparison:
- The simulation-based experiments demonstrate that the proposed load balancing
and CO algorithm, with or without the additional security layer, significantly reduces
system consumption by about 68.2% to 72.4% compared to local execution (LE).
This highlights the effectiveness of the proposed approach in optimizing system
performance and resource utilization in multitier Mobile-edge cloud computing
systems.
Conclusion :
Overall, the results and observations of the study emphasize the effectiveness of the
proposed solution in enhancing system efficiency, load balancing, security, and
resource allocation in multitier IoT and edge-cloud computing environments, ultimately
leading to improved performance and reduced system overhead.
Paper 3 - A Novel Weight-Assignment Load Balancing Algorithm for Cloud
Application
Introduction :
The problem addressed in the research is the inefficiency of common load balancing
and auto-scaling strategies in combating flash crowds and resource failures in
cloud-deployed applications. Despite the use of these strategies by cloud providers,
performance degradation still occurs due to the limitations of existing load balancers
in adapting to dynamic workload changes caused by flash crowds and resource
failures
Existing Solution :
Conclusion :
Overall, the results and observations highlight the efficacy of the novel
weight-assignment load balancing algorithm in enhancing system performance,
reducing communication overhead, and adapting to changing workload conditions in
cloud environments.
Overall Conclusion
There is an utmost need to optimize present load balancing techniques in order to
provide QoS to clients using cloud services. Classical solutions and algorithms do
not suffice for real-life scenarios; therefore, hybrid algorithms and optimization
techniques are necessary. These techniques must also take care of response time,
task migration, resource utilization, makespan, and security.