This document proposes a new hybrid multi-swarm optimization (HMSO) algorithm for load balancing in cloud computing. It aims to minimize response time and costs while improving resource utilization and customer satisfaction. The HMSO algorithm uses multi-level particle swarm optimization to find an optimal resource allocation solution. Simulation results show that the proposed HMSO technique reduces response time and datacenter costs compared to other algorithms. It also achieves a more balanced load distribution across resources.
Scheduling of Heterogeneous Tasks in Cloud Computing using Multi Queue (MQ) A... - IRJET Journal
This document proposes a Multi Queue (MQ) task scheduling algorithm for heterogeneous tasks in cloud computing. It aims to improve upon the Round Robin and Weighted Round Robin algorithms by overcoming their drawbacks. The MQ algorithm splits tasks and resources into separate queues based on size/length and speed. Small tasks are scheduled on slower resources and large tasks on faster resources. The document compares the performance of MQ to Round Robin and Weighted Round Robin algorithms based on makespan, average resource utilization, and load balancing level using CloudSim simulations. The results show that MQ scheduling performs better than the other algorithms in most cases in terms of these metrics.
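As a rough illustration of the queue-splitting idea summarized above, the Python sketch below separates tasks by length and VMs by speed and then places each group round-robin; the thresholds, field names and placement rule are assumptions for illustration, not the paper's exact scheme.

```python
# Minimal sketch of the Multi Queue (MQ) idea: tasks are split by length and
# resources by speed, then small tasks go to slower VMs and large tasks to faster VMs.
# Thresholds and data shapes are illustrative assumptions, not the paper's exact rules.

def mq_schedule(tasks, vms, length_threshold=10_000, mips_threshold=1_000):
    small_tasks = [t for t in tasks if t["length"] <= length_threshold]
    large_tasks = [t for t in tasks if t["length"] > length_threshold]
    slow_vms = [v for v in vms if v["mips"] <= mips_threshold] or vms
    fast_vms = [v for v in vms if v["mips"] > mips_threshold] or vms

    assignment = {}
    for i, task in enumerate(sorted(small_tasks, key=lambda t: t["length"])):
        assignment[task["id"]] = slow_vms[i % len(slow_vms)]["id"]
    for i, task in enumerate(sorted(large_tasks, key=lambda t: t["length"])):
        assignment[task["id"]] = fast_vms[i % len(fast_vms)]["id"]
    return assignment

tasks = [{"id": 0, "length": 4_000}, {"id": 1, "length": 25_000}]
vms = [{"id": "vm0", "mips": 500}, {"id": "vm1", "mips": 2_000}]
print(mq_schedule(tasks, vms))   # small task -> slow VM, large task -> fast VM
```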
This document proposes a new task scheduling algorithm called Dynamic Heterogeneous Shortest Job First (DHSJF) for heterogeneous cloud computing systems. DHSJF aims to improve performance metrics like reduced makespan and low energy consumption by considering the heterogeneity of resources and workloads. It discusses existing scheduling algorithms like Round Robin, First Come First Serve and their limitations. The proposed DHSJF algorithm prioritizes tasks with the shortest estimated completion time to optimize resource utilization and improve overall performance of the cloud computing system. Simulation results show that DHSJF provides better results for metrics like average waiting time and turnaround time as compared to Round Robin and First Come First Serve scheduling algorithms.
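The shortest-estimated-completion-time rule behind DHSJF can be sketched as follows; the completion-time estimate (task length divided by VM MIPS plus the VM's ready time) and the data layout are assumptions, not the paper's exact model.

```python
# Sketch of the DHSJF idea: repeatedly pick the pending task with the shortest
# estimated completion time across the heterogeneous VMs, then update that VM's
# ready time. Field names and the time estimate are illustrative assumptions.

def dhsjf_schedule(tasks, vms):
    ready = {v["id"]: 0.0 for v in vms}          # time at which each VM becomes free
    schedule = []
    pending = list(tasks)
    while pending:
        best = None
        for task in pending:
            for vm in vms:
                finish = ready[vm["id"]] + task["length"] / vm["mips"]
                if best is None or finish < best[2]:
                    best = (task, vm, finish)
        task, vm, finish = best
        ready[vm["id"]] = finish
        schedule.append((task["id"], vm["id"], finish))
        pending.remove(task)
    return schedule
```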
Time and Reliability Optimization Bat Algorithm for Scheduling Workflow in Cloud - IRJET Journal
This document describes using a meta-heuristic optimization algorithm called the Bat Algorithm (BA) to schedule workflows in cloud computing environments. The BA is applied to optimize a multi-objective function that minimizes workflow execution time and maximizes reliability while keeping costs within a user-specified budget. The BA is compared to a basic randomized evolutionary algorithm (BREA) that uses greedy approaches. Experimental results show the BA performs better by finding schedules that have lower execution times and higher reliability within the given budget constraints. The BA is well-suited for this problem because it can efficiently search large solution spaces and automatically focus on optimal regions like other metaheuristics.
IRJET- Time and Resource Efficient Task Scheduling in Cloud Computing Environ... - IRJET Journal
This document summarizes a research paper that proposes a Task Based Allocation (TBA) algorithm to efficiently schedule tasks in a cloud computing environment. The algorithm aims to minimize makespan (completion time of all tasks) and maximize resource utilization. It first generates an Expected Time to Complete (ETC) matrix that estimates the time each task will take on different virtual machines. It then sorts tasks by length and allocates each task to the VM that minimizes its completion time, updating the VM wait times. The algorithm is evaluated using CloudSim simulation and is shown to reduce makespan, execution time and costs compared to random and first-come, first-served scheduling approaches.
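The TBA flow described above (ETC matrix, sort by length, assign each task to the VM with the earliest finish, update wait times) can be sketched in a few lines; the data shapes and the length/MIPS time estimate are illustrative assumptions.

```python
# Sketch of the Task Based Allocation (TBA) flow: build an Expected Time to Complete
# (ETC) matrix, sort tasks by length, assign each task to the VM that finishes it
# earliest, and update that VM's accumulated wait time.

def tba_schedule(task_lengths, vm_mips):
    etc = [[length / mips for mips in vm_mips] for length in task_lengths]
    wait = [0.0] * len(vm_mips)                    # accumulated wait time per VM
    order = sorted(range(len(task_lengths)), key=lambda i: task_lengths[i])
    plan = {}
    for i in order:
        completion = [wait[j] + etc[i][j] for j in range(len(vm_mips))]
        j = min(range(len(vm_mips)), key=completion.__getitem__)
        wait[j] = completion[j]
        plan[i] = j
    return plan, max(wait)                         # mapping and resulting makespan

plan, makespan = tba_schedule([8_000, 2_000, 12_000], [500, 1_000])
print(plan, makespan)
```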
This document proposes a fair scheduling algorithm with dynamic load balancing for grid computing. It begins by introducing grid computing and the need for efficient load balancing algorithms to distribute tasks. It then describes dynamic load balancing approaches, including information, triggering, resource type, location, and selection policies. The proposed algorithm uses a fair scheduling approach that assigns tasks to processors based on their estimated fair completion times to ensure tasks receive equal shares of computing resources. It also includes a dynamic load balancing component that migrates tasks between processors to maintain balanced loads across all resources. Simulation results demonstrated the algorithm achieved balanced loads across processors and reduced overall task completion times.
LOAD BALANCING ALGORITHM TO IMPROVE RESPONSE TIME ON CLOUD COMPUTING - ijccsa
Load balancing techniques in cloud computing can be applied at two main levels: load balancing on physical servers and load balancing on virtual machines. Load balancing on a physical server is the policy of allocating physical servers to virtual machines, while load balancing on virtual machines is the policy of allocating resources from a physical server to the virtual machines for the tasks or applications running on them. Depending on whether the user's request targets SaaS (Software as a Service), PaaS (Platform as a Service) or IaaS (Infrastructure as a Service), a suitable load balancing policy is applied. When tasks arrive, the cloud data center must allocate them efficiently so that response time is minimized and congestion is avoided. Load balancing should also be performed between different datacenters in the cloud to ensure minimum transfer time. In this paper, we propose a virtual machine-level load balancing algorithm that aims to improve the average response time and average processing time of the system in the cloud environment. The proposed algorithm is compared to the Avoid Deadlocks [5], Max-min [6] and Throttled [8] algorithms, and the results show that our algorithm achieves optimized response times.
Iaetsd improved load balancing model based on... - Iaetsd Iaetsd
This document proposes an improved load balancing model for cloud computing based on partitioning. It analyzes static and dynamic load balancing schemes using the CloudAnalyst tool. Static schemes like round robin performed similarly regardless of system load. Dynamic schemes analyzed current system status and allocated jobs accordingly. Analysis showed dynamic schemes had better response times than static schemes, with throttled and equally spread current execution performing best by balancing load based on system conditions. The proposed model implements multiple dynamic algorithms to further reduce response times and improve user satisfaction in cloud systems.
An Enhanced Throttled Load Balancing Approach for Cloud Environment - IRJET Journal
The document proposes an enhanced throttled load balancing approach for cloud environments. It discusses existing load balancing techniques like round robin, weighted round robin, and throttled approaches. It identifies that existing throttled approaches can lead to overloading as they do not consider task size when assigning tasks to virtual machines. The proposed approach aims to improve performance for cloud users by enhancing the basic throttled mapping approach to better distribute tasks among resources. The approach is evaluated using the CloudAnalyst simulator and results show it performs better than original techniques.
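A plausible reading of such a size-aware throttled mapping is sketched below: only VMs flagged available are considered, and the task goes to the available VM whose free capacity best fits the task size. The capacity model and the best-fit rule are assumptions, since the paper's exact enhancement is not spelled out here.

```python
# Sketch of a size-aware throttled mapping: like basic throttled, only VMs marked
# available are candidates, but the task is placed on the available VM whose remaining
# capacity best fits the task size. The capacity model is an illustrative assumption.

def enhanced_throttled(task_size, vm_table):
    # vm_table: {vm_id: {"available": bool, "free_capacity": float}}
    candidates = [(vm_id, info) for vm_id, info in vm_table.items()
                  if info["available"] and info["free_capacity"] >= task_size]
    if not candidates:
        return None                                # queue the task until a VM frees up
    vm_id, info = min(candidates, key=lambda kv: kv[1]["free_capacity"] - task_size)
    info["available"] = False
    info["free_capacity"] -= task_size
    return vm_id
```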
Hybrid Task Scheduling Approach using Gravitational and ACO Search Algorithm - IRJET Journal
The document proposes a hybrid task scheduling approach for cloud computing called ACGSA that combines ant colony optimization and gravitational search algorithms. It describes using the Cloudsim simulator to test the performance of ACGSA and comparing it to ant colony optimization. The results show that ACGSA achieves better performance than the basic ant colony approach on relevant parameters like task scheduling time and resource utilization.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Proactive Scheduling in Cloud Computing - journalBEEI
Autonomic fault-aware scheduling is an important feature for cloud computing, as it is related to adapting to workload variation. In this context, this paper proposes a fault-aware, pattern-matching autonomic scheduling approach for cloud computing based on autonomic computing concepts. To validate the proposed solution, two experiments were performed: one with a traditional approach and the other with the pattern-recognition fault-aware approach. The results show the effectiveness of the scheme.
Job Resource Ratio Based Priority Driven Scheduling in Cloud Computing - ijsrd.com
Cloud computing is an emerging technology in the area of parallel and distributed computing that is developing rapidly. Clouds consist of a collection of virtualized resources, including both computational and storage facilities, that can be provisioned on demand depending on the users' needs. Job scheduling is one of the major activities performed in all computing environments, and in cloud computing it is performed to use the environment efficiently and gain maximum profit. In this paper we propose a new scheduling algorithm based on priority, where the priority is derived from the ratio of job to resource and is calculated using the analytic hierarchy process. We also compare the results with other algorithms such as first come first serve and round robin.
Optimization of energy consumption in cloud computing datacenters - IJECEIAES
Cloud computing has emerged as a practical paradigm for providing IT resources, infrastructure and services. This has led to the establishment of datacenters that have substantial energy demands for their operation. This work investigates the optimization of energy consumption in cloud datacenter using energy efficient allocation of tasks to resources. The work seeks to develop formal optimization models that minimize the energy consumption of computational resources and evaluates the use of existing optimization solvers in testing these models. Integer linear programming (ILP) techniques are used to model the scheduling problem. The objective is to minimize the total power consumed by the active and idle cores of the servers’ CPUs while meeting a set of constraints. Next, we use these models to carry out a detailed performance comparison between a selected set of Generic ILP and 0-1 Boolean satisfiability based solvers in solving the ILP formulations. Simulation results indicate that in some cases the developed models have saved up to 38% in energy consumption when compared to common techniques such as round robin. Furthermore, results also showed that generic ILP solvers had superior performance when compared to SAT-based ILP solvers especially as the number of tasks and resources grow in size.
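A schematic version of that kind of objective, in our own notation rather than the paper's, might look like the following, where x_{ij} indicates whether task i runs on core j, t_{ij} is its execution time there, and T is the scheduling horizon:

```latex
% Illustrative ILP sketch (notation is ours, not the paper's formulation):
% x_{ij} = 1 if task i runs on core j; P^{active}_j and P^{idle}_j are the
% active and idle power draws of core j; T is the scheduling horizon.
\min \; \sum_{j} \Big( P^{\text{active}}_{j} \sum_{i} x_{ij}\, t_{ij}
      \;+\; P^{\text{idle}}_{j} \big( T - \sum_{i} x_{ij}\, t_{ij} \big) \Big)
\quad \text{subject to} \quad \sum_{j} x_{ij} = 1 \;\; \forall i, \qquad x_{ij} \in \{0,1\}
```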
A New Approach for Dynamic Load Balancing Using Simulation In Grid Computing - IRJET Journal
This document proposes a new dynamic load balancing approach for grid computing using simulation. It discusses how dynamic load balancing algorithms can improve performance by reallocating tasks from heavily loaded nodes to lightly loaded nodes. The proposed approach implements a dynamic load balancing algorithm in a simulated grid environment. The algorithm uses information about current resource loads to schedule tasks in a way that aims to optimize resource usage and achieve high performance computing across the distributed grid resources.
A hybrid approach for scheduling applications in cloud computing environment - IJECEIAES
Cloud computing plays an important role in our daily life. It has a direct and positive impact on sharing and updating data, knowledge, storage and scientific resources between various regions. Cloud computing performance depends heavily on the job scheduling algorithms used to manage queue waiting in modern scientific applications, and researchers consider cloud computing a popular platform for new applications. These scheduling algorithms help design efficient queue lists in the cloud and play a vital role in reducing waiting and processing time. A novel job scheduling algorithm is proposed in this paper to enhance the performance of cloud computing and reduce the delay time of jobs waiting in the queue. The proposed algorithm tries to avoid some significant challenges that hinder the development of cloud computing applications, and a smart scheduling technique is proposed to improve processing performance in cloud applications. Our experimental results show that the proposed job scheduling algorithm achieves notable improvements, with a reduction in waiting time for jobs in the queue.
Deadline and Suffrage Aware Task Scheduling Approach for Cloud Environment - IRJET Journal
The document proposes a deadline and suffrage aware task scheduling approach for cloud environments. It discusses limitations of existing approaches that can cause system imbalances. The proposed approach considers both task deadlines and priorities assigned by user votes ("suffrage") to schedule tasks. It was tested using CloudSim simulator and found to outperform the basic min-min approach in reducing completion times and improving resource utilization and provider profits while still meeting task deadlines.
This document discusses various load balancing algorithms that can be applied in cloud computing. It begins with an introduction to cloud computing models including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). It then discusses the goals of load balancing in cloud computing. The main part of the document describes and provides examples of several load balancing algorithms: Round Robin, Opportunistic Load Balancing, Minimum Completion Time, and Minimum Execution Time. For each algorithm, it explains the basic approach and provides an example to illustrate how it works.
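The difference between two of those heuristics, Minimum Execution Time (MET) and Minimum Completion Time (MCT), is easy to show in code: MET ignores how busy a machine already is, while MCT adds the machine's ready time. The inputs below are made up for illustration.

```python
# Minimum Execution Time (MET) picks the machine where the task runs fastest,
# ignoring current load; Minimum Completion Time (MCT) adds the machine's ready time.

def met_pick(task_exec_times):
    # task_exec_times[j] = execution time of the task on machine j
    return min(range(len(task_exec_times)), key=task_exec_times.__getitem__)

def mct_pick(task_exec_times, ready_times):
    return min(range(len(task_exec_times)),
               key=lambda j: ready_times[j] + task_exec_times[j])

exec_times = [4.0, 6.0]             # the task runs faster on machine 0
ready = [10.0, 0.0]                 # but machine 0 is busy for 10 time units
print(met_pick(exec_times))         # -> 0
print(mct_pick(exec_times, ready))  # -> 1
```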
This document discusses adaptive system-level scheduling under fluid traffic flow conditions in multiprocessor systems. It proposes a scheduling mechanism that accounts for traffic-centric system design. The mechanism evaluates scheduling methods based on effectiveness, robustness, and flexibility. It also introduces a processor-FPGA scheduling approach that reduces schedule length by taking advantage of FPGA reconfiguration. Simulation results show that processor-FPGA scheduling outperforms multiprocessor-only scheduling under certain traffic conditions. Future work will focus on formulating a traffic-centric scheduling method.
Profit based unit commitment for GENCOs using Parallel PSO in a distributed c... - IDES Editor
In the deregulated electricity market, each generating company has to maximize its own profit by committing to a suitable generation schedule, termed profit-based unit commitment (PBUC). This article proposes a Parallel Particle Swarm Optimization (PPSO) solution to the PBUC problem. This method has better convergence characteristics in obtaining the optimum solution. The proposed approach uses a cluster of computers performing parallel operations in a distributed environment to obtain the PBUC solution. The time complexity and the solution quality with respect to the number of processors in the cluster are thoroughly tested. The method has been applied to a 10-unit system, and the results show that the proposed PPSO in a distributed cluster consistently outperforms the other methods available in the literature.
Service Request Scheduling in Cloud Computing using Meta-Heuristic Technique:... - IRJET Journal
This document discusses using the Teaching Learning Based Optimization (TLBO) meta-heuristic technique for service request scheduling between users and cloud service providers. TLBO is a nature-inspired algorithm that mimics the teacher-student learning process. It is compared to other meta-heuristic algorithms like Genetic Algorithm. The key steps of TLBO involve initializing a population, evaluating fitness, selecting the best solution as teacher, and updating the population through teacher and learner phases until termination criteria is met. The document proposes using number of users and virtual machines as parameters for TLBO scheduling in cloud computing. MATLAB simulation results show the initial and final iterations converging to an optimal scheduling solution.
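Those teacher and learner phases can be sketched compactly in Python (the paper uses a MATLAB simulation; the fitness function, bounds and parameters below are placeholders, not the paper's scheduling model):

```python
# Compact TLBO sketch: a teacher phase pulls the population toward the best solution,
# and a learner phase lets pairs of solutions learn from each other (minimization).
import random

def tlbo(fitness, dim, pop_size=20, iters=100, low=0.0, high=1.0):
    pop = [[random.uniform(low, high) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        teacher = min(pop, key=fitness)
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i, x in enumerate(pop):                        # teacher phase
            tf = random.choice((1, 2))                     # teaching factor
            cand = [x[d] + random.random() * (teacher[d] - tf * mean[d]) for d in range(dim)]
            if fitness(cand) < fitness(x):
                pop[i] = cand
        for i, x in enumerate(pop):                        # learner phase
            j = random.randrange(pop_size)
            if j == i:
                continue
            other = pop[j]
            sign = 1 if fitness(x) < fitness(other) else -1
            cand = [x[d] + random.random() * sign * (x[d] - other[d]) for d in range(dim)]
            if fitness(cand) < fitness(x):
                pop[i] = cand
    return min(pop, key=fitness)

# Placeholder fitness: distance to 0.3 in each dimension, standing in for a scheduling cost.
best = tlbo(lambda v: sum((x - 0.3) ** 2 for x in v), dim=3)
print(best)
```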
Intelligent Workload Management in Virtualized Cloud Environment - IJTET Journal
Abstract: Cloud computing is an emerging high-performance computing environment with a large-scale, heterogeneous collection of autonomous systems and an elastic computational design. To improve the overall performance of the cloud environment under a deadline constraint, a task scheduling model is established that reduces the system's power consumption and execution time while improving the profit of service providers. For this scheduling model, a solving technique based on a multi-objective genetic algorithm (MO-GA) is designed, and the study focuses on encoding rules, crossover operators, mutation operators and the method of sorting Pareto solutions. The model is implemented on the open-source cloud simulation platform CloudSim; compared to existing scheduling algorithms, the results show that the proposed algorithm obtains an improved solution, balancing the load across multiple objectives.
Cloud Computing Load Balancing Algorithms Comparison Based Survey - INFOGAIN PUBLICATION
Cloud computing is an Internet-based computing paradigm that has increased the use of networks, where the capacity of one node can be used by another node. The cloud provides on-demand services for distributed resources such as databases, servers, software and infrastructure on a pay-as-you-go basis. Load balancing is one of the vexing problems in such a distributed environment: the service provider's resources have to balance the load of client requests. Different load balancing algorithms have been proposed to manage the service provider's resources efficiently and effectively. This paper presents a comparison of various policies used for load balancing.
This work proposes an optimization algorithm to control the speed of a permanent magnet synchronous motor (PMSM) during starting and speed reversal of the motor, as well as under load disturbance conditions. The objective is to minimize the integral absolute control error of the PMSM shaft speed to achieve a fast and accurate speed response under load disturbance and speed reversal. The maximum overshoot, peak time, settling time and rise time of the motor are also minimized to obtain an efficient transient speed response. Optimum speed control of the PMSM is obtained with the aid of a PID speed controller. Modified Particle Swarm Optimization (MPSO) and Ant Colony Optimization (ACO) techniques have been employed for tuning the PID speed controller, to determine its gain coefficients (proportional, integral and derivative). Simulation results demonstrate that with the use of the MPSO and ACO techniques, improved control performance of the PMSM can be achieved in comparison to the classical Ziegler-Nichols (Z-N) method of PID tuning.
REAL-TIME ADAPTIVE ENERGY-SCHEDULING ALGORITHM FOR VIRTUALIZED CLOUD COMPUTING - ijdpsjournal
Cloud computing has become an ideal computing paradigm for scientific and commercial applications. The increased availability of cloud models and allied development models creates an easier cloud computing environment. Energy consumption and effective energy management are two important challenges in virtualized computing platforms. Energy consumption can be minimized by allocating computationally intensive tasks to a resource at a suitable frequency. An optimal Dynamic Voltage and Frequency Scaling (DVFS) based strategy of task allocation can minimize the overall consumption of energy and meet the required QoS. However, such strategies do not control the internal and external switching of server frequencies, which causes performance degradation. In this paper, we propose the Real-Time Adaptive Energy-Scheduling (RTAES) algorithm, which exploits the reconfiguration capability of Cloud Computing Virtualized Data Centers (CCVDCs) for computationally intensive applications. The RTAES algorithm minimizes the consumption of energy and time during computation, reconfiguration and communication. Our proposed model confirms its effectiveness in terms of implementation, scalability, power consumption and execution time with respect to other existing approaches.
Scalable scheduling of updates in streaming data warehouses - IRJET Journal
This document discusses scheduling updates in streaming data warehouses. It proposes a scheduling framework to handle complications like view hierarchies, data consistency, inability to preempt updates, heterogeneous update jobs from different data sources, and transient overload. It models the update problem as a scheduling problem where the objective is to minimize data staleness over time. It then presents several update scheduling algorithms and discusses how performance is affected by different factors based on simulation experiments.
Cloud computing has become an important topic in the area of high-performance distributed computing. At the same time, task scheduling is considered one of the most significant issues in cloud computing, where the user has to pay for resource usage based on time. Therefore, distributing the cloud resources among the users' applications should maximize resource utilization and minimize task execution time. The goal of task scheduling is to assign tasks to appropriate resources so as to optimize one or more performance parameters (e.g., completion time, cost, resource utilization). In addition, scheduling belongs to the category of problems known as NP-complete, so heuristic algorithms can be applied to solve it. In this paper, an enhanced dependent task scheduling algorithm based on a Genetic Algorithm (DTGA) is introduced for mapping and executing an application's tasks. The aim of this proposed algorithm is to minimize the completion time. The performance of the proposed algorithm has been evaluated using the WorkflowSim toolkit and the Standard Task Graph Set (STG) benchmark.
IJERD (www.ijerd.com) International Journal of Engineering Research and Develop... - IJERD Editor
This document presents a fuzzy-logic based approach to solve the unit commitment problem in power generation systems. The unit commitment problem aims to determine the optimal on/off schedule of generating units to minimize operating costs while meeting demand and constraints. The proposed approach models key factors like generator load capacity, fuel costs, and startup costs as fuzzy variables. It then uses fuzzy logic techniques to determine a commitment schedule. The approach is demonstrated on a case study of a 4-unit thermal power plant in Turkey. Results are compared to dynamic programming to show the fuzzy logic approach provides preferable solutions with less computational time.
This document discusses using a particle swarm algorithm to enhance dynamic load balancing in a cloud computing environment. It begins with introducing centralized and decentralized load balancing approaches. It then describes using a particle swarm optimization technique, which identifies the least loaded, available virtual machine to distribute workload to in order to minimize energy usage and processing time. The document reviews several related works applying genetic algorithms, particle swarms, ant colony optimization and other approaches to optimize load balancing. It suggests a particle swarm algorithm can distribute load more efficiently compared to centralized and simple decentralized methods.
Performance Comparision of Dynamic Load Balancing Algorithm in Cloud Computing - Eswar Publications
This document compares the performance of two dynamic load balancing algorithms - the Honey Bee algorithm and the Throttled Load Balancing algorithm - in a cloud computing environment. It first describes both algorithms and other related concepts. It then discusses results from simulations run using the CloudAnalyst tool. The simulations show that the Honey Bee algorithm has lower average, minimum, and maximum response times compared to the Throttled algorithm. Additionally, the Honey Bee algorithm results in lower data center processing times and costs. Therefore, the document concludes the Honey Bee algorithm performs better than the Throttled algorithm for load balancing in cloud computing.
Task Scheduling using Hybrid Algorithm in Cloud Computing Environments - iosrjce
The document summarizes a proposed hybrid task scheduling algorithm called PSOCS that combines particle swarm optimization (PSO) and cuckoo search (CS) for scheduling tasks in cloud computing environments. The PSOCS algorithm aims to minimize task completion time (makespan) and improve resource utilization. It was tested in a simulation using CloudSim and showed reductions in makespan and increases in utilization compared to PSO and random scheduling algorithms.
This document summarizes a research paper that proposes a hybrid task scheduling algorithm for cloud computing environments called PSOCS. PSOCS combines the Particle Swarm Optimization (PSO) algorithm and Cuckoo Search (CS) algorithm to optimize task scheduling and minimize completion time while increasing resource utilization. The paper describes PSO and CS algorithms individually, then defines the proposed PSOCS algorithm. It evaluates PSOCS using a simulation and finds it reduces makespan and increases utilization compared to PSO and random allocation algorithms.
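The PSO half of such a hybrid can be sketched as follows: each particle encodes a task-to-VM mapping as a real-valued vector that is rounded into VM indices, and fitness is the resulting makespan. The cuckoo-search step of the hybrid is omitted, and all parameters and data shapes are assumptions rather than the paper's settings.

```python
# Sketch of the PSO component of a PSOCS-style scheduler: particles are real vectors,
# rounded into VM indices; fitness is the makespan of the implied task-to-VM mapping.
import random

def makespan(position, task_lengths, vm_mips):
    load = [0.0] * len(vm_mips)
    for i, p in enumerate(position):
        j = int(round(p)) % len(vm_mips)
        load[j] += task_lengths[i] / vm_mips[j]
    return max(load)

def pso_schedule(task_lengths, vm_mips, particles=15, iters=50, w=0.7, c1=1.5, c2=1.5):
    n, m = len(task_lengths), len(vm_mips)
    pos = [[random.uniform(0, m - 1) for _ in range(n)] for _ in range(particles)]
    vel = [[0.0] * n for _ in range(particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: makespan(p, task_lengths, vm_mips))
    for _ in range(iters):
        for k in range(particles):
            for d in range(n):
                vel[k][d] = (w * vel[k][d]
                             + c1 * random.random() * (pbest[k][d] - pos[k][d])
                             + c2 * random.random() * (gbest[d] - pos[k][d]))
                pos[k][d] += vel[k][d]
            if makespan(pos[k], task_lengths, vm_mips) < makespan(pbest[k], task_lengths, vm_mips):
                pbest[k] = pos[k][:]
        gbest = min(pbest, key=lambda p: makespan(p, task_lengths, vm_mips))
    return [int(round(p)) % m for p in gbest]

print(pso_schedule([4_000, 9_000, 2_500], [500, 1_000]))
```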
Optimizing Task Scheduling in Mobile Cloud Computing Using Particle Swarm Opt... - IRJET Journal
The document discusses optimizing task scheduling in mobile cloud computing using the particle swarm optimization algorithm. It proposes using PSO to develop a task scheduling optimization model that reduces task transmission time, execution time, and costs. PSO is a dynamic scheduling algorithm that could help speed up task execution and decrease costs compared to other algorithms. The document reviews background on task scheduling and cloud computing and analyzes related work on using algorithms like genetic algorithms and ant colony optimization for task scheduling.
Multi-objective load balancing in cloud infrastructure through fuzzy based de... - IAESIJAI
Cloud computing has become a popular technology that influences not only product development but also makes technology businesses easier to run. Services such as infrastructure, platform and software can reduce the complexity of the technology requirements of any ecosystem. As the number of users of cloud-based services increases, the complexity of the back-end technologies also increases. The heterogeneous requirements of users, in terms of various configurations, create different load imbalance issues. Hence, effective load balancing in a cloud system with respect to time and space becomes crucial, as imbalance adversely affects system performance. Since user requirements and expected performance are multi-objective, the use of decision-making tools such as fuzzy logic yields good results, because it incorporates human procedural knowledge into decision making. The overall system performance can be further improved by dynamic resource scheduling using an optimization technique such as a genetic algorithm.
Load balancing aims to distribute workloads across multiple computing resources like servers, networks, databases or other technologies. It helps to optimize resource use, maximize throughput, minimize response time and avoid overload of any single resource. Some common load balancing techniques include round-robin, least connection, weighted least connection and shortest expected delay. Effective load balancing is important for cloud computing environments to ensure efficient use of resources and good performance.
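For example, least connection and weighted least connection reduce to a one-line selection over the server pool; the server fields below are assumptions for illustration.

```python
# Least connection picks the server with the fewest active connections; weighted least
# connection divides that count by a capacity weight so bigger servers take more load.

def least_connection(servers):
    return min(servers, key=lambda s: s["active_connections"])

def weighted_least_connection(servers):
    return min(servers, key=lambda s: s["active_connections"] / s["weight"])

servers = [
    {"name": "s1", "active_connections": 12, "weight": 1.0},
    {"name": "s2", "active_connections": 20, "weight": 4.0},
]
print(least_connection(servers)["name"])           # s1 (fewest raw connections)
print(weighted_least_connection(servers)["name"])  # s2 (more capacity per connection)
```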
Three types of service model: SaaS, PaaS, IaaS
Four types of deployment model: Public, Private, Hybrid and Community Cloud.
During the load balancing process, a few issues are yet to be fully addressed. Some of them are:
Some nodes are overutilized while other nodes are underutilized
Improper workload distribution in the cloud environment results in overhead in resource utilization and, in turn, inefficient usage of energy
Response time of jobs
Communication cost of servers
Maintenance cost of VMs
Throughput and overload of any single node
By addressing the concern of load balancing, we aim to improve multiple facets of the cloud, viz. (a) resource utilization, (b) CPU time and (c) migration time.
Problem statement
Problems raised while dealing with load balancing:
How to minimize the CPU time
How to increase the resource utilization
How to decrease the energy consumption and migration time
LoadAwareDistributor: An Algorithmic Approach for Cloud Resource Allocation - IRJET Journal
This document summarizes research on load balancing algorithms for cloud resource allocation. It proposes a new LoadAwareDistributor algorithm that prioritizes virtual machines with lower CPU utilization to improve efficiency. A literature review covers existing load balancing techniques and their goals. The proposed algorithm is evaluated through simulation and shown to improve metrics like VM utilization and task completion time over round-robin methods. The study advocates for future algorithm advances incorporating machine learning to better address dynamic load balancing challenges in cloud computing environments.
IRJET- An Efficient Energy Consumption Minimizing Based on Genetic and Power ... - IRJET Journal
This document discusses techniques for minimizing energy consumption in cloud computing. It proposes using a genetic algorithm-based power aware scheduling (G-PARS) method along with a Dynamic Single Threshold (DST) virtual machine consolidation approach to dynamically reallocate VMs and reduce the number of active physical nodes. The paper also reviews related work on resource optimization algorithms like genetic algorithms, ant colony optimization, and particle swarm optimization. It finds that using DST and G-PARS can minimize power consumption compared to other existing algorithms under different workload conditions.
This document discusses and compares various load balancing techniques in cloud computing. It begins by introducing load balancing as an important issue in cloud computing for efficiently scheduling user requests and resources. Several load balancing algorithms are then described, including honeybee foraging algorithm, biased random sampling, active clustering, OLB+LBMM, and Min-Min. Metrics for evaluating and comparing load balancing techniques are defined, such as throughput, overhead, fault tolerance, migration time, response time, resource utilization, scalability, and performance. The algorithms are then analyzed based on these metrics.
PROPOSED LOAD BALANCING ALGORITHM TO REDUCE RESPONSE TIME AND PROCESSING TIME... - IJCNCJournal
Cloud computing is a new technology that brings new challenges to organizations around the world. Improving the response time of user requests on cloud computing is a critical issue in combating bottlenecks, and bandwidth to and from cloud service providers is one such bottleneck. With the rapid growth in the scale and number of applications, this access is often threatened by overload. Therefore, in this paper we propose a Throttled Modified Algorithm (TMA) for improving the response time of VMs on cloud computing in order to improve performance for end users. We have simulated the proposed algorithm with the CloudAnalyst simulation tool, and the algorithm has improved the response times and processing time of the cloud data center.
ANALYSIS ON LOAD BALANCING ALGORITHMS IMPLEMENTATION ON CLOUD COMPUTING ENVIR... - AM Publications
Cloud computing means storing and accessing data and programs over the Internet instead of your computer's hard drive; the cloud is just a metaphor for the Internet. The elements involved in cloud computing are clients, data centers and distributed servers. One of the main problems in cloud computing is load balancing. Balancing the load means distributing the workload evenly among several nodes so that no single node is overloaded. The load can be of any type: CPU load, memory capacity or network load. In this paper we present a load balancing architecture and an algorithm that further improves on the load balancing problem by minimizing the response time. We propose an enhanced version of the existing regulated load balancing approach for cloud computing by combining randomization and greedy load balancing algorithms. To check the performance of the proposed approach, we have used the CloudAnalyst simulator. Through simulation analysis, it has been found that the proposed improved version of the regulated load balancing approach shows better performance in terms of cost, response time and data processing time.
A Result on Novel Approach for Load Balancing in Cloud Computing - ijtsrd
Cloud computing is a large pool of systems in which private or public networks are interconnected to provide scalable infrastructure for applications, data and file storage. It is considered a computing paradigm in which large amounts of information are stored, and it helps significantly reduce the cost of computation, application hosting, content storage and delivery. In order to realize direct cost benefits, cloud computing is considered a practical approach, and it can potentially transform a data center from a capital-intensive setup into a variable-priced environment. It gives customers the flexibility to access their information from anywhere, so the cloud overcomes the limitation of location constraints. Compared to traditional concepts, cloud computing builds on the ideas of grid computing, distributed computing, utility computing and autonomic computing. When any virtual machine gets overloaded, a fault may occur in the cloud environment. With the help of the BFO algorithm, a technique of adaptive task scheduling is proposed; using this method, it becomes easy to transfer a task to the most reliable virtual machine, whose reliability is calculated on the basis of the weight computed at the virtual machine. The proposed and existing algorithms have been implemented in CloudSim, and on the basis of the simulation results it is concluded that the proposed method reduces the execution time compared to the existing technique. Sukhdeep Kaur | Preeti Sondhi "A Result on Novel Approach for Load Balancing in Cloud Computing" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-5, August 2019, URL: https://www.ijtsrd.com/papers/ijtsrd26362.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/26362/a-result-on-novel-approach-for-load-balancing-in-cloud-computing/sukhdeep-kaur
Hybrid Scheduling Algorithm for Efficient Load Balancing In Cloud Computing - Eswar Publications
This document presents a hybrid scheduling algorithm for efficient load balancing in cloud computing. The algorithm uses both round robin and priority-based scheduling approaches. It first assigns priorities to incoming job requests and then executes them in a round robin fashion. The algorithm aims to minimize overall response time and data center processing time. It is evaluated through simulation and found to perform better than round robin, priority-based, and equally spread current execution algorithms alone in terms of optimized response time and data center service time.
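A minimal sketch of that hybrid (group requests by an assigned priority, then serve each group round-robin across VMs) is given below; the priority rule used here, shorter jobs get higher priority, is an assumption for illustration.

```python
# Sketch of a priority-plus-round-robin hybrid: requests are first grouped by an assigned
# priority, then each group is dispatched round-robin across the VMs.
from collections import deque
from itertools import cycle

def hybrid_schedule(jobs, vm_ids):
    # jobs: list of dicts with "id" and "length"; shorter jobs get higher priority (0).
    queues = {}
    for job in jobs:
        priority = 0 if job["length"] < 5_000 else 1
        queues.setdefault(priority, deque()).append(job)
    rr = cycle(vm_ids)
    plan = []
    for priority in sorted(queues):
        while queues[priority]:
            job = queues[priority].popleft()
            plan.append((job["id"], next(rr)))
    return plan

print(hybrid_schedule([{"id": "a", "length": 9_000}, {"id": "b", "length": 1_000}],
                      ["vm0", "vm1"]))
```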
A SURVEY ON STATIC AND DYNAMIC LOAD BALANCING ALGORITHMS FOR DISTRIBUTED MULT... - IRJET Journal
This document summarizes a survey of static and dynamic load balancing algorithms for distributed multicore systems. It discusses how efficient load balancing is essential for distributing work across cores in large supercomputers. Both static and dynamic algorithms are reviewed. Static algorithms allocate work deterministically or probabilistically without considering runtime conditions, while dynamic algorithms can adapt based on network conditions and core capabilities. The paper evaluates various performance metrics for different load balancing algorithms and concludes that modern distributed multicore systems require more reliable dynamic algorithms to optimize performance.
LOAD BALANCING ALGORITHM ON CLOUD COMPUTING FOR OPTIMIZE RESPONSE TIME - ijccsa
To improve the performance of cloud computing, many parameters and issues must be considered, including resource allocation, resource responsiveness, connectivity to resources, discovery of unused resources, resource mapping and resource planning. Planning the use of resources can be based on many kinds of parameters, and service response time is one of them.
Users can easily observe the response time of their requests, so it has become one of the important QoS metrics. Explored further, response time can drive solutions for distributing and load-balancing resources more efficiently, which makes it one of the most promising research directions for improving cloud technology. This paper therefore proposes a load balancing algorithm based on the response time of requests in the cloud, named APRA (ARIMA Prediction of Response Time Algorithm). The main idea is to use ARIMA models to predict the upcoming response time, giving a better way of resolving resource allocation against a threshold value. The experimental results are promising for load balancing with predicted response time and show that prediction is a useful direction for load balancing.
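The exact ARIMA order, threshold policy and re-allocation step of APRA are not given in this summary, so the following sketch shows only the prediction side under assumptions: a (1, 1, 1) ARIMA model (via `statsmodels`) is fitted to a window of recent response times for one VM, and the one-step forecast is compared against an assumed threshold to flag the VM as a candidate for shedding load.

```python
# Requires: pip install statsmodels
from statsmodels.tsa.arima.model import ARIMA

RESPONSE_TIME_THRESHOLD_MS = 250.0   # assumed threshold, not taken from the paper

def predict_next_response_time(history_ms, order=(1, 1, 1)):
    """Fit an ARIMA model on recent response times and forecast the next value."""
    model = ARIMA(history_ms, order=order)
    fitted = model.fit()
    return float(fitted.forecast(steps=1)[0])

def should_shed_load(history_ms):
    """Return (flag, predicted) where flag is True if the forecast exceeds the threshold."""
    predicted = predict_next_response_time(history_ms)
    return predicted > RESPONSE_TIME_THRESHOLD_MS, predicted

if __name__ == "__main__":
    # Synthetic response-time history (ms) observed on one VM.
    history = [180, 190, 200, 215, 230, 240, 245, 252, 260, 268]
    overloaded, predicted = should_shed_load(history)
    print(f"predicted next response time: {predicted:.1f} ms, shed load: {overloaded}")
```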
An Optimized-Throttled Algorithm for Distributing Load in Cloud ComputingIRJET Journal
This document proposes an optimized-throttled algorithm for distributing load in cloud computing. It summarizes existing load balancing algorithms like round robin and throttled, and then describes the proposed optimized-throttled algorithm in more detail. The algorithm prioritizes distributing load to minimize virtual machine overload and underload. It is evaluated through simulation and is shown to improve response times and processing times compared to round robin and throttled algorithms.
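The classic throttled algorithm keeps an availability index of VMs and allocates each request to the first available VM; the precise optimization used in this paper is not spelled out in the summary, so the sketch below shows plain throttled allocation plus one commonly used tweak, assumed purely for illustration: scanning from the VM after the previous allocation so load is spread rather than always piling onto the lowest-indexed VM.

```python
class ThrottledBalancer:
    def __init__(self, num_vms: int):
        self.available = [True] * num_vms   # availability index table
        self.last_index = -1                # where the previous scan stopped

    def allocate(self) -> int | None:
        """Return the id of an available VM, or None if all VMs are busy."""
        n = len(self.available)
        for offset in range(1, n + 1):
            vm = (self.last_index + offset) % n   # start just after the last allocation
            if self.available[vm]:
                self.available[vm] = False
                self.last_index = vm
                return vm
        return None

    def release(self, vm: int) -> None:
        """Mark a VM as available again once its request completes."""
        self.available[vm] = True

if __name__ == "__main__":
    lb = ThrottledBalancer(3)
    print([lb.allocate() for _ in range(4)])   # [0, 1, 2, None]
    lb.release(1)
    print(lb.allocate())                       # 1
```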
A Comparative Study of Load Balancing Algorithms for Cloud ComputingIJERA Editor
Cloud computing is a fast-growing technology in both industrial research and academia. Users can access cloud services and pay based on their resource usage. Balancing the load with minimum response time, maximum throughput and good resource utilization is a major task for the cloud service provider, and many load balancing algorithms have been proposed to assign user requests to cloud resources efficiently. In this paper, three load balancing algorithms are simulated in Cloud Analyst and the results are compared.
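Cloud Analyst is a GUI tool built on CloudSim, so the snippet below is not a Cloud Analyst run; it is a small standalone sketch, with assumed request counts, work sizes and VM speeds, of how one might compare the average response time of three illustrative policies (round robin, random, least-loaded) on the same synthetic workload, all requests being assumed to arrive at time zero.

```python
import random
from itertools import cycle

def simulate(policy, num_requests=5000, vm_speeds=(1.0, 0.8, 0.5), seed=0):
    """Average response time when each request carries a random amount of work.
    Response time of a request = completion time on its VM (queueing + service)."""
    rng = random.Random(seed)
    busy_until = [0.0] * len(vm_speeds)   # time at which each VM becomes free
    rr = cycle(range(len(vm_speeds)))
    total = 0.0
    for _ in range(num_requests):
        work = rng.uniform(1.0, 10.0)
        if policy == "round_robin":
            vm = next(rr)
        elif policy == "random":
            vm = rng.randrange(len(vm_speeds))
        else:  # least_loaded: pick the VM that frees up earliest
            vm = min(range(len(vm_speeds)), key=lambda i: busy_until[i])
        busy_until[vm] += work / vm_speeds[vm]
        total += busy_until[vm]
    return total / num_requests

if __name__ == "__main__":
    for policy in ("round_robin", "random", "least_loaded"):
        print(f"{policy:>12}: {simulate(policy):10.1f} (arbitrary time units)")
```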
1) Load balancing is an important issue in cloud computing to improve performance and resource utilization. It aims to distribute tasks evenly among nodes to prevent overloading some nodes while leaving others idle.
2) There are two main categories of load balancing algorithms: static and dynamic. Static algorithms do not consider current system state while dynamic algorithms react to changing system states.
3) Prior research on load balancing in cloud computing has proposed approaches such as using a genetic algorithm to optimize load balancing and addressing delays in dynamic load balancing.