Task Scheduling Using Firefly algorithm with cloudsim - AqilIzzuddin
The document discusses implementing a firefly algorithm for task scheduling in a cloud computing environment using Cloudsim. It begins with background on cloud computing and scheduling, and notes that Cloudsim will be used to simulate scheduling algorithms. The firefly algorithm is proposed as a new scheduling algorithm to implement in Cloudsim and compare with the default FCFS algorithm. Objectives are to study scheduling algorithms in Cloudsim, implement the firefly algorithm, and analyze performance versus FCFS in terms of resource utilization. Methodology will involve analyzing and comparing the firefly and FCFS algorithms implemented in Cloudsim simulations.
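For a rough sense of how a firefly-based scheduler like the one described above can be structured, the sketch below encodes each firefly as a task-to-VM assignment and uses makespan as the (inverse) brightness. The population size, attractiveness/randomness parameters and the plain-Java setting are assumptions for illustration only, not the project's actual CloudSim implementation.

```java
import java.util.Arrays;
import java.util.Random;

/** Illustrative discrete firefly search over task-to-VM assignments (not the author's code). */
public class FireflySchedulerSketch {
    static final Random RND = new Random(42);

    static double makespan(int[] assign, double[] taskMi, double[] vmMips) {
        double[] finish = new double[vmMips.length];
        for (int t = 0; t < assign.length; t++) {
            finish[assign[t]] += taskMi[t] / vmMips[assign[t]];   // execution time of task t on its VM
        }
        double max = 0;
        for (double f : finish) max = Math.max(max, f);
        return max;
    }

    static int[] schedule(double[] taskMi, double[] vmMips, int fireflies, int iterations) {
        int n = taskMi.length, m = vmMips.length;
        int[][] pop = new int[fireflies][n];
        for (int[] f : pop) for (int t = 0; t < n; t++) f[t] = RND.nextInt(m);   // random initial assignments

        int[] best = pop[0].clone();
        for (int it = 0; it < iterations; it++) {
            for (int i = 0; i < fireflies; i++) {
                for (int j = 0; j < fireflies; j++) {
                    if (makespan(pop[j], taskMi, vmMips) < makespan(pop[i], taskMi, vmMips)) {
                        // move firefly i toward the brighter firefly j: copy genes with prob. beta, mutate with prob. alpha
                        double beta = 0.7, alpha = 0.1;
                        for (int t = 0; t < n; t++) {
                            if (RND.nextDouble() < beta) pop[i][t] = pop[j][t];
                            if (RND.nextDouble() < alpha) pop[i][t] = RND.nextInt(m);
                        }
                    }
                }
                if (makespan(pop[i], taskMi, vmMips) < makespan(best, taskMi, vmMips)) best = pop[i].clone();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        double[] tasks = {4000, 12000, 7000, 3000, 9000};  // cloudlet lengths in MI
        double[] vms   = {1000, 2500};                     // VM speeds in MIPS
        int[] best = schedule(tasks, vms, 10, 50);
        System.out.println("assignment = " + Arrays.toString(best)
                + ", makespan = " + makespan(best, tasks, vms));
    }
}
```

In a CloudSim experiment, such an assignment would typically be applied by binding each cloudlet to its chosen VM through the broker before the simulation starts, which is also how the FCFS baseline comparison would be set up.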
This document discusses scheduling in cloud computing environments and summarizes an experimental study comparing different task scheduling policies in virtual machines. It begins with introductions to cloud computing, architectures, and virtualization. It then presents the problem statement of improving application performance under varying resource demands through efficient scheduling. The document outlines simulations conducted using the CloudSim toolkit to evaluate scheduling algorithms like shortest job first, round robin, and a proposed algorithm incorporating machine processing speeds. It presents the implementation including a web interface and concludes that round robin scheduling distributes jobs equally but can cause fragmentation, while the proposed algorithm aims to overcome limitations of existing approaches.
REVIEW PAPER on Scheduling in Cloud Computing - Jaya Gautam
This document reviews scheduling algorithms for workflow applications in cloud computing. It discusses characteristics of cloud computing, deployment and service models, and the importance of scheduling in cloud computing. The document analyzes several scheduling algorithms proposed in literature that consider parameters like makespan, cost, load balancing, and priority. It finds that algorithms like Max-Min, Min-Min, and HEFT perform better than traditional algorithms in optimizing these parameters for workflow scheduling in cloud environments.
Cloud computing Review over various scheduling algorithms - IJEEE
Cloud computing has taken an important position in the field of research as well as in government organisations. Cloud computing uses virtual network technology to provide computing resources to end users and customers. Due to the complex computing environment, the use of complex logic and task-scheduler algorithms increases, which results in costly operation of the cloud network. Researchers are attempting to build job scheduling algorithms that are compatible with and applicable in the cloud computing environment. In this paper, we review research work recently proposed by researchers on energy-saving scheduling techniques. We also study various scheduling algorithms and the issues related to them in cloud computing.
This document provides an overview of task scheduling algorithms for load balancing in cloud computing. It begins with introductions to cloud computing and load balancing. It then surveys several existing task scheduling algorithms, including Min-Min, Max-Min, Resource Awareness Scheduling Algorithm, QoS Guided Min-Min, and others. It discusses the goals, workings, results and problems of each algorithm. It identifies the need for an optimized task scheduling algorithm. It also discusses tools like CloudSim that can be used to simulate scheduling algorithms and evaluate performance.
This document discusses scheduling in cloud computing. It proposes a priority-based scheduling protocol to improve resource utilization, server performance, and minimize makespan. The protocol assigns priorities to jobs, allocates jobs to processors based on completion time, and processes jobs in parallel queues to efficiently schedule jobs in cloud computing. Future work includes analyzing time complexity and completion times through simulation to validate the protocol's efficiency.
This document provides a summary of a student's seminar paper on resource scheduling algorithms. The paper discusses the need for resource scheduling algorithms in cloud computing environments. It then describes several types of algorithms commonly used for resource scheduling, including genetic algorithms, bee algorithms, ant colony algorithms, workflow algorithms, and load balancing algorithms. For each algorithm type, it provides a brief introduction, overview of the basic steps or concepts, and some examples of applications where the algorithm has been used. The paper was submitted by a student named Shilpa Damor to fulfill requirements for a degree in information technology.
A survey of various scheduling algorithm in cloud computing environment - eSAT Publishing House
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
This document presents an overview of cloud computing concepts including cloud architecture, deployment models, service models, characteristics, job scheduling, virtualization, energy conservation, and network security. It discusses key cloud computing topics such as Infrastructure as a Service, Platform as a Service, Software as a Service, public clouds, private clouds, hybrid clouds, community clouds, resource pooling, broad network access, on-demand self-service, and measured service. Virtualization concepts like hypervisors, virtual machine monitors, and virtual network models are also covered.
Genetic Algorithm for task scheduling in Cloud Computing Environment - Swapnil Shahade
This document proposes a modified genetic algorithm to schedule tasks in cloud computing environments. It begins with an introduction and background on cloud computing and task scheduling. It then describes the standard genetic algorithm approach and introduces the modified genetic algorithm which uses Longest Cloudlet to Fastest Processor and Smallest Cloudlet to Fastest Processor scheduling algorithms to generate the initial population. The implementation and results show that the modified genetic algorithm reduces makespan and cost compared to the standard genetic algorithm.
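As an illustration of the "Longest Cloudlet to Fastest Processor" seeding mentioned above, the sketch below builds one LCFP-style individual for the initial population. The round-robin pairing of sorted cloudlets with sorted VMs is one plausible reading of the heuristic and, like the class and variable names, is an assumption rather than the paper's exact procedure.

```java
import java.util.*;

/** Sketch of seeding a GA population with an LCFP-style individual (illustrative, not the paper's code). */
public class LcfpSeedSketch {
    /** One plausible LCFP reading: longest cloudlets are handed to the fastest VMs in round-robin order. */
    static int[] lcfpIndividual(double[] cloudletMi, double[] vmMips) {
        Integer[] cl = new Integer[cloudletMi.length];
        Integer[] vm = new Integer[vmMips.length];
        for (int i = 0; i < cl.length; i++) cl[i] = i;
        for (int i = 0; i < vm.length; i++) vm[i] = i;
        Arrays.sort(cl, (a, b) -> Double.compare(cloudletMi[b], cloudletMi[a])); // longest first
        Arrays.sort(vm, (a, b) -> Double.compare(vmMips[b], vmMips[a]));         // fastest first

        int[] assign = new int[cloudletMi.length];     // assign[cloudlet] = vm index
        for (int k = 0; k < cl.length; k++) assign[cl[k]] = vm[k % vm.length];
        return assign;
    }

    public static void main(String[] args) {
        double[] cloudlets = {20000, 5000, 12000, 8000};
        double[] vms = {500, 2000, 1000};
        System.out.println(Arrays.toString(lcfpIndividual(cloudlets, vms)));
        // The remaining individuals of the initial population would typically be random assignments.
    }
}
```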
An optimized scientific workflow scheduling in cloud computing - DIGVIJAY SHINDE
The document discusses optimizing scientific workflow scheduling in cloud computing. It begins with definitions of workflow and cloud computing. Workflow is a group of repeatable dependent tasks, while cloud computing provides applications and hardware resources over the Internet. There are three cloud service models: SaaS, PaaS, and IaaS. The document explores how to efficiently schedule workflows in the cloud to reduce makespan, cost, and energy consumption. It reviews different scheduling algorithms like FCFS, genetic algorithms, and discusses optimizing objectives like time and cost. The document provides a literature review comparing various workflow scheduling methods and algorithms. It concludes with discussing open issues and directions for future work in optimizing workflow scheduling for cloud computing.
dynamic resource allocation using virtual machines for cloud computing enviro... - Kumar Goud
Abstract: Cloud computing allows business customers to scale their resource usage up and down based on need. We present a system that uses virtualization technology to allocate data center resources dynamically based on application demands and to support green computing by optimizing the number of servers in use. We introduce the concept of "skewness" to measure the unevenness in the multi-dimensional resource utilization of a server. By minimizing skewness, we can combine different types of workloads effectively and improve the overall utilization of server resources. We develop a set of heuristics that prevent overload in the system effectively while saving energy. Many of the touted gains in the cloud model come from resource multiplexing through virtualization technology. Trace-driven simulation and experiment results demonstrate that our algorithm achieves good performance.
Index Terms—Cloud computing, resource management, virtualization, green computing.
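The "skewness" measure referenced in the abstract above can be pictured with a small helper that scores how uneven a server's per-resource utilisations are. The formula below (root of the squared deviations of each utilisation from the mean) is an assumed illustration and may differ in detail from the paper's definition.

```java
/** Illustrative skewness metric: how uneven a server's multi-dimensional utilisation is. */
public class SkewnessSketch {
    // Assumed form: sqrt( sum_i (u_i / mean(u) - 1)^2 ) over the resource utilisations u_i.
    static double skewness(double[] utilisation) {
        double mean = 0;
        for (double u : utilisation) mean += u;
        mean /= utilisation.length;
        double s = 0;
        for (double u : utilisation) s += Math.pow(u / mean - 1.0, 2);
        return Math.sqrt(s);
    }

    public static void main(String[] args) {
        System.out.printf("balanced server: %.3f%n", skewness(new double[]{0.5, 0.5, 0.5}));   // 0.000
        System.out.printf("skewed server:   %.3f%n", skewness(new double[]{0.9, 0.2, 0.1}));   // larger value
    }
}
```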
TASK SCHEDULING USING AMALGAMATION OF MET HEURISTICS SWARM OPTIMIZATION ALGOR... - Journal For Research
Cloud Computing is the latest networking technology and a popular archetype for hosting applications and delivering services over the network. The foremost technology of cloud computing is virtualization, which enables building applications, dynamically sharing resources and providing diverse services to cloud users. With virtualization, a service provider can guarantee Quality of Service to the user while achieving higher server utilisation and energy efficiency. One of the most important challenges in the cloud computing environment is the VM placement and task scheduling problem. This paper focuses on a Metaheuristic Swarm Optimisation Algorithm (MSOA) that deals with the problem of VM placement and task scheduling in a cloud environment. The MSOA is a simple parallel algorithm that can be applied in different ways to resolve task scheduling problems. The proposed algorithm is an amalgamation of the SO algorithm and the Cuckoo Search (CS) algorithm, called MSOACS. The proposed algorithm is evaluated using the CloudSim simulator. The results show that the proposed MSOACS algorithm reduces makespan and increases the utilization ratio compared with SOA algorithms and Randomised Allocation (RA).
Application of selective algorithm for effective resource provisioning in clo... - ijccsa
Modern-day continued demand for resource-hungry services and applications in the IT sector has led to the development of cloud computing. A cloud computing environment involves high-cost infrastructure on one hand and needs large-scale computational resources on the other. These resources need to be provisioned (allocated and scheduled) to end users in the most efficient manner so that the tremendous capabilities of the cloud are utilized effectively and efficiently. In this paper we discuss a selective algorithm for allocation of cloud resources to end users on an on-demand basis. The algorithm is based on the min-min and max-min algorithms, two conventional task scheduling algorithms, and uses certain heuristics to select between them so that the overall makespan of tasks on the machines is minimized. The tasks are scheduled on machines in either a space-shared or time-shared manner. We evaluate our provisioning heuristics using a cloud simulator called CloudSim and compare our approach with the statistics obtained when resources were provisioned in a First-Come-First-Serve (FCFS) manner. The experimental results show that the overall makespan of tasks on a given set of VMs decreases significantly in different scenarios.
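One way to picture the selective idea described above is to run both conventional heuristics on an expected-completion-time model and keep whichever gives the smaller makespan, as in the sketch below. The paper's actual selection heuristic may differ, so the decision rule, data model and names here are assumptions.

```java
import java.util.*;

/** Illustrative min-min / max-min comparison for task-to-VM mapping (assumed model, not the paper's code). */
public class SelectiveSchedulerSketch {

    static double[] runHeuristic(double[] taskMi, double[] vmMips, boolean maxMin) {
        int n = taskMi.length, m = vmMips.length;
        double[] ready = new double[m];            // when each VM becomes free
        boolean[] done = new boolean[n];
        for (int step = 0; step < n; step++) {
            int pickTask = -1, pickVm = -1;
            double pickCt = maxMin ? -1 : Double.MAX_VALUE;
            for (int t = 0; t < n; t++) {
                if (done[t]) continue;
                // earliest completion time of task t over all VMs
                int bestVm = 0;
                double bestCt = Double.MAX_VALUE;
                for (int v = 0; v < m; v++) {
                    double ct = ready[v] + taskMi[t] / vmMips[v];
                    if (ct < bestCt) { bestCt = ct; bestVm = v; }
                }
                boolean better = maxMin ? bestCt > pickCt : bestCt < pickCt;
                if (better) { pickCt = bestCt; pickTask = t; pickVm = bestVm; }
            }
            done[pickTask] = true;
            ready[pickVm] = pickCt;                // schedule the chosen task on its best VM
        }
        return ready;
    }

    static double makespan(double[] ready) {
        double max = 0;
        for (double r : ready) max = Math.max(max, r);
        return max;
    }

    public static void main(String[] args) {
        double[] tasks = {2000, 3000, 15000, 1000, 2500, 18000};
        double[] vms = {1000, 3000};
        double minMin = makespan(runHeuristic(tasks, vms, false));
        double maxMin = makespan(runHeuristic(tasks, vms, true));
        // "Selective" step: keep whichever heuristic gives the smaller overall makespan.
        System.out.printf("min-min=%.2f  max-min=%.2f  selected=%s%n",
                minMin, maxMin, minMin <= maxMin ? "min-min" : "max-min");
    }
}
```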
An Efficient Decentralized Load Balancing Algorithm in Cloud Computing - Aisha Kalsoom
This document proposes a new efficient decentralized load balancing algorithm for cloud computing. It consists of two phases: 1) a request sequencing phase where incoming user requests are sequenced to minimize wait times, and 2) a load transferring phase where a load balancer calculates resource utilization of each VM and transfers tasks to less utilized VMs. This algorithm aims to improve load balancing performance and achieve more efficient resource utilization in cloud computing environments.
Task Scheduling using Tabu Search algorithm in Cloud Computing Environment us... - AzarulIkhwan
1. The document proposes using the Tabu Search algorithm for task scheduling in cloud computing environments using the CloudSim simulator. It aims to maximize throughput and minimize turnaround time compared to traditional algorithms like FCFS.
2. The methodology section describes how the CloudSim simulator works and the components involved in task scheduling. It also provides an overview of how the Tabu Search algorithm guides the search process to avoid getting stuck at local optima (see the sketch after this list).
3. The expected result is that the Tabu Search algorithm will provide higher throughput and lower turnaround times for cloud tasks compared to FCFS, as Tabu Search is designed to escape local optima and find better solutions.
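The sketch below shows one way such a tabu search over task-to-VM assignments could look: the neighbourhood moves one task to a different VM, recently reversed moves are held in a tabu list, and an aspiration rule admits tabu moves that improve the best solution. The tenure, iteration count and plain-Java setting are illustrative assumptions, not the document's actual CloudSim code.

```java
import java.util.*;

/** Illustrative tabu search over task-to-VM assignments (assumed parameters, not the document's code). */
public class TabuSchedulerSketch {

    static double makespan(int[] assign, double[] taskMi, double[] vmMips) {
        double[] finish = new double[vmMips.length];
        for (int t = 0; t < assign.length; t++) finish[assign[t]] += taskMi[t] / vmMips[assign[t]];
        double max = 0;
        for (double f : finish) max = Math.max(max, f);
        return max;
    }

    static int[] tabuSearch(double[] taskMi, double[] vmMips, int iterations, int tenure) {
        int n = taskMi.length, m = vmMips.length;
        int[] current = new int[n];                       // start from a round-robin (FCFS-like) assignment
        for (int t = 0; t < n; t++) current[t] = t % m;
        int[] best = current.clone();
        Map<String, Integer> tabu = new HashMap<>();      // move "task->vm" -> iteration until which it is tabu

        for (int it = 0; it < iterations; it++) {
            int bestTask = -1, bestVm = -1;
            double bestNeighbour = Double.MAX_VALUE;
            for (int t = 0; t < n; t++) {
                for (int v = 0; v < m; v++) {
                    if (v == current[t]) continue;
                    int old = current[t];
                    current[t] = v;
                    double cost = makespan(current, taskMi, vmMips);
                    current[t] = old;
                    boolean isTabu = tabu.getOrDefault(t + "->" + v, -1) > it;
                    boolean aspiration = cost < makespan(best, taskMi, vmMips);
                    if ((!isTabu || aspiration) && cost < bestNeighbour) {
                        bestNeighbour = cost; bestTask = t; bestVm = v;
                    }
                }
            }
            if (bestTask < 0) break;                      // every move tabu and none improving
            tabu.put(bestTask + "->" + current[bestTask], it + tenure);  // forbid moving back for a while
            current[bestTask] = bestVm;
            if (bestNeighbour < makespan(best, taskMi, vmMips)) best = current.clone();
        }
        return best;
    }

    public static void main(String[] args) {
        double[] tasks = {4000, 9000, 3000, 12000, 6000};
        double[] vms = {1000, 2000, 1500};
        int[] plan = tabuSearch(tasks, vms, 200, 5);
        System.out.println(Arrays.toString(plan) + "  makespan=" + makespan(plan, tasks, vms));
    }
}
```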
task scheduling in cloud datacentre using genetic algorithm - Swathi Rampur
Task scheduling and resource provisioning are core and challenging issues in the cloud environment. Processes running in the cloud environment race for available resources in order to complete their tasks with the minimum execution time, so it is clear that we need an efficient scheduling technique for mapping between running processes and available resources. In this research paper, we present a non-traditional optimization technique that mimics the process of evolution and is based on the mechanics of natural selection and natural genetics, called the Genetic Algorithm (GA), which minimizes the execution time and in turn reduces computation cost. We compared it with the Round Robin algorithm and used the CloudSim toolkit for our tests; the results show that the metaheuristic GA gives better performance than the other scheduling algorithm.
1) The document proposes a bandwidth-aware virtual machine migration policy for cloud data centers that considers both the bandwidth and computing power of resources when scheduling tasks of varying sizes.
2) It presents an algorithm that binds tasks to virtual machines in the current data center if the load is below the saturation threshold, and migrates tasks to the next data center if the load is above the threshold, in order to minimize completion time.
3) Experimental results show that the proposed algorithm has lower completion times compared to an existing single data center scheduling algorithm, demonstrating the benefits of considering bandwidth and utilizing multiple data centers.
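The bind-or-migrate rule in item 2 above reduces to a simple threshold check plus an estimate of the migrated completion time (transfer over the inter-data-centre link plus execution). The sketch below shows that decision with assumed threshold, bandwidth and task-size values, not the paper's actual parameters.

```java
/** Illustrative bind-or-migrate decision for a bandwidth-aware policy (threshold and cost model are assumptions). */
public class BandwidthAwareSketch {

    /** Returns true if the task should be bound locally, false if it should be migrated to the next data centre. */
    static boolean bindLocally(double localLoad, double saturationThreshold) {
        return localLoad < saturationThreshold;
    }

    /** Estimated completion time if migrated: transfer time over the inter-DC link plus execution time. */
    static double migratedCompletionTime(double taskSizeMb, double linkBandwidthMbps, double taskMi, double remoteMips) {
        return taskSizeMb * 8 / linkBandwidthMbps + taskMi / remoteMips;
    }

    public static void main(String[] args) {
        double load = 0.92, threshold = 0.85;
        if (bindLocally(load, threshold)) {
            System.out.println("bind task to a VM in the current data centre");
        } else {
            double t = migratedCompletionTime(500, 1000, 20000, 2500);
            System.out.printf("migrate task to next data centre (estimated completion %.2f s)%n", t);
        }
    }
}
```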
Simulation of Heterogeneous Cloud Infrastructures - CloudLightning
In recent years, apart from traditional CPU-based hardware servers, hardware accelerators have become widely used in various HPC application areas. More specifically, Graphics Processing Units (GPUs), Many Integrated Cores (MICs) and Field-Programmable Gate Arrays (FPGAs) have shown great potential in HPC and have been widely adopted in supercomputing and in HPC clouds. This presentation focuses on the development of a cloud simulation framework that supports hardware accelerators. The design and implementation of the framework are also discussed.
This presentation was given by Dr. Konstantinos Giannoutakis (CERTH) at the CloudLightning Conference on 11th April 2017.
IRJET- Time and Resource Efficient Task Scheduling in Cloud Computing Environ... - IRJET Journal
This document summarizes a research paper that proposes a Task Based Allocation (TBA) algorithm to efficiently schedule tasks in a cloud computing environment. The algorithm aims to minimize makespan (completion time of all tasks) and maximize resource utilization. It first generates an Expected Time to Complete (ETC) matrix that estimates the time each task will take on different virtual machines. It then sorts tasks by length and allocates each task to the VM that minimizes its completion time, updating the VM wait times. The algorithm is evaluated using CloudSim simulation and is shown to reduce makespan, execution time and costs compared to random and first-come, first-served scheduling approaches.
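The TBA steps summarised above translate fairly directly into code: build the ETC matrix, sort tasks by length, then greedily place each task on the VM with the smallest completion time while updating VM wait times. The sketch below follows that description; the sort direction (longest first) and all concrete numbers are assumptions.

```java
import java.util.*;

/** Sketch of an ETC-matrix based greedy allocation in the spirit of the TBA description above (illustrative only). */
public class TbaSketch {
    public static void main(String[] args) {
        double[] taskMi = {8000, 2000, 16000, 4000};      // cloudlet lengths (MI)
        double[] vmMips = {1000, 2000};                   // VM speeds (MIPS)
        int n = taskMi.length, m = vmMips.length;

        // 1. Expected Time to Complete matrix: etc[t][v] = length / speed
        double[][] etc = new double[n][m];
        for (int t = 0; t < n; t++)
            for (int v = 0; v < m; v++)
                etc[t][v] = taskMi[t] / vmMips[v];

        // 2. Sort task indices by length (longest first is assumed here)
        Integer[] order = new Integer[n];
        for (int i = 0; i < n; i++) order[i] = i;
        Arrays.sort(order, (a, b) -> Double.compare(taskMi[b], taskMi[a]));

        // 3. Assign each task to the VM that minimises its completion time, updating VM wait times
        double[] wait = new double[m];
        int[] assign = new int[n];
        for (int t : order) {
            int bestVm = 0;
            for (int v = 1; v < m; v++)
                if (wait[v] + etc[t][v] < wait[bestVm] + etc[t][bestVm]) bestVm = v;
            assign[t] = bestVm;
            wait[bestVm] += etc[t][bestVm];
        }
        System.out.println("assignment=" + Arrays.toString(assign)
                + "  makespan=" + Arrays.stream(wait).max().getAsDouble());
    }
}
```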
A Survey on Resource Allocation & Monitoring in Cloud Computing - Mohd Hairey
This document provides an overview of a survey on resource allocation and monitoring in cloud computing. It discusses (1) cloud computing and its key characteristics, (2) elements of resource management including allocation, monitoring, discovery and provisioning, (3) existing mechanisms for resource allocation and monitoring, and (4) gaps in current approaches. The survey aims to study resource allocation and monitoring in cloud computing and describe issues and current solutions to help develop a better resource management framework.
The document discusses various scheduling techniques in cloud computing. It begins with an introduction to scheduling and its importance in cloud computing. It then covers traditional scheduling approaches like FCFS, priority queue, and shortest job first. The document also presents job scheduling frameworks, dynamic and fault-tolerant scheduling, deadline-constrained scheduling, and inter-cloud meta-scheduling. It concludes with the benefits of effective scheduling in improving service quality and resource utilization in cloud environments.
This document discusses and compares various load balancing techniques in cloud computing. It begins by introducing load balancing as an important issue in cloud computing for efficiently scheduling user requests and resources. Several load balancing algorithms are then described, including honeybee foraging algorithm, biased random sampling, active clustering, OLB+LBMM, and Min-Min. Metrics for evaluating and comparing load balancing techniques are defined, such as throughput, overhead, fault tolerance, migration time, response time, resource utilization, scalability, and performance. The algorithms are then analyzed based on these metrics.
This document proposes a new Ranking Chaos Optimization (RCO) algorithm to solve the dual scheduling problem of cloud services and computing resources (DS-CSCR) in private clouds. It introduces the DS-CSCR concept and models the characteristics of cloud services and computing resources. The RCO algorithm uses ranking selection, individual chaos, and dynamic heuristic operators. Experimental results show RCO has better searching ability, time complexity, and stability compared to other algorithms for solving DS-CSCR. Future work is needed to study additional quality of service properties and improve RCO for other optimization problems.
Dynamic Cloud Partitioning and Load Balancing in Cloud - Shyam Hajare
Cloud computing is an emerging and transformational paradigm in the field of information technology. It mostly focuses on providing various services on demand; resource allocation and secure data storage are some of them. Storing huge amounts of data and accessing data from such metadata is a new challenge. Distributing and balancing the load over a cloud using cloud partitioning can ease the situation. Implementing load balancing by considering static as well as dynamic parameters can improve the performance of the cloud service provider and improve user satisfaction. Implementing the model can provide a dynamic way of selecting resources depending upon the situation of the cloud environment at the time of accessing cloud provisions, based on cloud partitioning. This model can provide an effective load balancing algorithm over the cloud environment, better refresh-time methods and better load-status evaluation methods.
Dynamic resource allocation using virtual machines for cloud computing enviro... - IEEEFINALYEARPROJECTS
To Get any Project for CSE, IT ECE, EEE Contact Me @ 09849539085, 09966235788 or mail us - [email protected] - Visit Our Website: www.finalyearprojects.org
Energy Efficient Heuristic Base Job Scheduling Algorithms in Cloud Computing - IOSRjournaljce
The cloud computing environment provides a cost-efficient solution to customers through resource provisioning and flexible, customized configuration. Interest in cloud computing is growing around the globe at a very fast pace because it provides a scalable virtualized infrastructure by means of which extensive computing capabilities can be used by cloud clients to execute their submitted jobs. It becomes a challenge for the cloud infrastructure to manage and schedule these jobs, originated by different cloud users, on the available resources in such a manner as to strengthen the overall performance of the system. As the number of users increases, job scheduling becomes an intensive task. Energy-efficient job scheduling is one constructive solution to streamline resource utilization as well as to reduce energy consumption. Though there are several scheduling algorithms available, this paper presents job scheduling based on two heuristic approaches, Efficient MQS (Multi-queue job scheduling) and ACO (Ant colony optimization), and evaluates the effectiveness of both approaches by considering energy consumption and time in cloud computing.
Survey on Dynamic Resource Allocation Strategy in Cloud Computing Environment - Editor IJCATR
Cloud computing has become quite popular among cloud users by offering a variety of resources. It is an on-demand service because it offers dynamic, flexible resource allocation and guaranteed services in a pay-as-you-use manner to the public. In this paper, we present several dynamic resource allocation techniques and their performance. The paper provides a detailed description of dynamic resource allocation techniques in the cloud for cloud users, and the comparative study provides clear detail about the different techniques.
A Survey of Job Scheduling Algorithms With Hierarchical Structure to Load Ba... - Editor IJCATR
Due to advances in human civilization, problems in science and engineering are becoming more complicated than ever before. To solve these complicated problems, grid computing has become a popular tool. A grid environment collects, integrates, and uses heterogeneous or homogeneous resources scattered around the globe via a high-speed network. Scheduling problems are at the heart of any grid-like computational system: a good scheduling algorithm can assign jobs to resources efficiently and can balance the system load. In this paper we survey three algorithms for grid scheduling and compare their benefits and disadvantages based on makespan.
A Randomized Load Balancing Algorithm In Grid using Max Min PSO Algorithm - IJORCS
Grid computing is a new paradigm for next-generation computing; it enables the sharing and selection of geographically distributed heterogeneous resources for solving large-scale problems in science and engineering. Grid computing requires special software that is unique to the computing project for which the grid is being used. In this paper, the proposed algorithm, namely a dynamic load balancing algorithm, is created for job scheduling in grid computing. Particle Swarm Optimization (PSO) is one of the latest evolutionary optimization techniques in swarm intelligence. It has good global search performance and has been successfully applied to many areas. The performance measures used for scheduling are Quality of Service (QoS) metrics such as makespan, cost and deadline. The Max PSO and Min PSO algorithms have been partially integrated with PSO, and finally the load on the resources is balanced.
The document proposes a novel VM-assign load balancing algorithm for efficiently allocating incoming requests to virtual machines in a cloud computing environment. It aims to avoid underutilization of resources. The algorithm maintains a table of VMs and their current load. When a request arrives, it selects the least loaded VM for processing. The experimental results using a CloudSim simulator show the algorithm balances load well across VMs, fully utilizing them without over or underloading. Future work could consider improving the algorithm's handling of mixed static and dynamic loads.
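A minimal sketch of the least-loaded selection described above: a table maps each VM to its current load, the arriving request goes to the minimum entry, and the table is updated on assignment and completion. The map-based bookkeeping and tie-breaking are illustrative assumptions rather than the proposed algorithm's exact implementation.

```java
import java.util.*;

/** Sketch of a least-loaded VM selection table (illustrative, not the proposed algorithm's actual code). */
public class VmAssignSketch {
    private final Map<String, Integer> load = new HashMap<>();   // VM id -> number of requests currently assigned

    VmAssignSketch(List<String> vmIds) {
        for (String id : vmIds) load.put(id, 0);
    }

    /** Pick the VM with the fewest active requests and record the new assignment. */
    String assignRequest() {
        String best = Collections.min(load.entrySet(), Map.Entry.comparingByValue()).getKey();
        load.merge(best, 1, Integer::sum);
        return best;
    }

    /** Release a slot when the request finishes. */
    void completeRequest(String vmId) {
        load.merge(vmId, -1, Integer::sum);
    }

    public static void main(String[] args) {
        VmAssignSketch lb = new VmAssignSketch(Arrays.asList("vm-0", "vm-1", "vm-2"));
        for (int i = 0; i < 6; i++) System.out.println("request " + i + " -> " + lb.assignRequest());
    }
}
```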
This document discusses client-side load balancing in a cloud computing environment. It describes how a client-side load balancer can distribute requests across backend web servers in a scalable way without requiring control of the infrastructure. The proposed architecture uses static anchor pages hosted on Amazon S3 that contain JavaScript code to select a web server based on its reported load. The JavaScript then proxies the request to that server and updates the page content. This approach achieves high scalability and adaptiveness without hardware load balancers or layer 2 optimizations.
An efficient approach for load balancing using dynamic ab algorithm in cloud ... - bhavikpooja
This document outlines a proposed approach for efficient load balancing using a dynamic Ant-Bee algorithm in cloud computing. It discusses limitations of existing ant colony and bee colony algorithms for load balancing. The author aims to develop a new AB algorithm approach that combines aspects of ant colony optimization and bee colony algorithms to improve load balancing optimization and overcome issues like slow convergence and tendency to stagnate in ant colony algorithms. The proposed approach would leverage both the dynamic path finding of ants and pheromone updating of bees for more effective load balancing in cloud environments.
This document discusses load balancing, which is a technique for distributing work across multiple computing resources like CPUs, disk drives, and network links. The goals of load balancing are to maximize resource utilization, throughput, and response time while avoiding overloads and crashes. Static load balancing involves preset mappings, while dynamic load balancing distributes workload in real-time. Common load balancing algorithms are round robin, least connections, and response time-based. Server load balancing distributes client requests to multiple backend servers and can operate in centralized or distributed architectures using network address translation or direct routing.
A study on dynamic load balancing in grid environment - IJSRD
Grid computing is a collection of computer resources from multiple locations working toward a common goal. Grid computing is distinguished from conventional high-performance computing systems in that its resources are more heterogeneous and geographically dispersed than those of a cluster computer. One of the major issues in grid computing is load balancing. Load balancing can be classified as: Static – Dynamic, Centralized – Decentralized, Homogeneous – Heterogeneous. Techniques such as Ant Colony Optimization, threshold-based and Optimal Heterogeneous approaches are used by some researchers to balance the load. This survey paper discusses a set of parameters to be used for comparing the performance of each of them. In addition, it indicates which technique is more useful for a grid environment.
Server load balancing (SLB) distributes network traffic across multiple servers to optimize resource utilization and maximize throughput. It intercepts traffic destined for a website and redirects requests to various backend servers using techniques like network address translation. SLB aims to improve performance, increase scalability, and maintain high availability by monitoring servers and routing traffic around failures to keep applications running if servers go down. Both hardware and software-based solutions exist, with hardware providing higher performance but at greater cost than software-based options.
Load Balancing In Distributed Computing - Richa Singh
Load Balancing In Distributed Computing
The goal of load balancing algorithms is to maintain the load on each processing element so that no processing element becomes either overloaded or idle; ideally, each processing element has an equal load at any moment during execution, so as to obtain the maximum performance (minimum execution time) of the system.
This document discusses load balancing in distributed systems. It provides definitions of static and dynamic load balancing, compares their approaches, and describes several dynamic load balancing algorithms. Static load balancing assigns tasks at compile time without migration, while dynamic approaches migrate tasks at runtime based on current system state. Dynamic approaches have overhead from migration but better utilize resources. Specific dynamic algorithms discussed include nearest neighbor, random, adaptive contracting with neighbor, and centralized information approaches.
This is a presentation for Chapter 7 Distributed system management
Book: DISTRIBUTED COMPUTING, Sunita Mahajan & Seema Shah
Prepared by Students of Computer Science, Ain Shams University - Cairo - Egypt
Max Min Fair Scheduling Algorithm using In Grid Scheduling with Load Balancing - IJORCS
This paper shows the importance of fair scheduling in a grid environment, such that all tasks get an equal amount of time for their execution and none is starved. Load balancing of the available resources in the computational grid is another important factor; this paper considers a uniform load to be given to the resources, and to achieve this, load balancing is applied after scheduling the jobs. It also considers the execution cost and bandwidth cost of the algorithms used here, because in a grid environment the resources are geographically distributed. In the implementation of this approach, the proposed algorithm reaches an optimal solution and minimizes the makespan as well as the execution cost and bandwidth cost.
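To illustrate the max-min fairness criterion underlying the entry above, the sketch below performs a standard progressive-filling allocation of a shared capacity among demands. The grid paper's full scheduling and load-balancing procedure is more involved, so this only demonstrates the fairness rule itself with made-up numbers.

```java
import java.util.*;

/** Standard max-min fair allocation by progressive filling (illustrates the fairness criterion only). */
public class MaxMinFairSketch {
    static double[] allocate(double[] demand, double capacity) {
        int n = demand.length;
        double[] share = new double[n];
        boolean[] satisfied = new boolean[n];
        double remaining = capacity;
        int unsatisfied = n;
        while (unsatisfied > 0 && remaining > 1e-9) {
            double equal = remaining / unsatisfied;               // split what is left equally
            boolean changed = false;
            for (int i = 0; i < n; i++) {
                if (!satisfied[i] && demand[i] - share[i] <= equal) {
                    remaining -= demand[i] - share[i];            // small demands are fully satisfied...
                    share[i] = demand[i];
                    satisfied[i] = true;
                    unsatisfied--;
                    changed = true;
                }
            }
            if (!changed) {                                       // ...and the rest share the remainder equally
                for (int i = 0; i < n; i++) if (!satisfied[i]) share[i] += equal;
                remaining = 0;
            }
        }
        return share;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(allocate(new double[]{2, 8, 4, 10}, 16))); // [2.0, 5.0, 4.0, 5.0]
    }
}
```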
Load balancing distributes network traffic across multiple servers to optimize resource utilization, maximize throughput, minimize response time, and avoid overload. It improves availability and reliability. In Windows Server 2003, Network Load Balancing allows multiple servers to be grouped together and appear as a single virtual server to clients. Requests are distributed to servers using round-robin DNS or a hardware load balancer which rewrites requests and forwards them to cluster nodes based on performance metrics. Servers detect failures and new additions to ensure high availability.
Grid computing is a model of distributed computing that uses geographically and administratively disparate resources to solve large problems. It involves sharing computing power, data, and other resources across organizational boundaries. Key aspects include applying resources from many computers to a single problem, combining resources from multiple administrative domains for tasks requiring large processing power or data, and using middleware to coordinate resources as a virtual system. The document then discusses definitions of grid computing from various organizations and the core functional requirements and characteristics needed for grid applications and users.
This document discusses architectural and security management for grid computing. It begins by defining grid computing as an environment that enables sharing of distributed resources across organizations to achieve common goals. It then describes the key components of a grid, including computation resources, storage, communications, software/licenses, and special equipment. The document outlines a four-level grid architecture including a fabric level, core middleware level, user middleware level, and application level. It also discusses important aspects of grid computing such as resource balancing, reliability through distribution, parallel CPU capacity, and management of different projects. Finally, it emphasizes that security is a major concern for grid computing due to the open nature of sharing resources across organizational boundaries.
Introduction to grid computing by gargi shankar verma - gargishankar1981
Grid computing allows for sharing and coordination of distributed computer resources to address large-scale computation problems. It enables dynamic, scalable, and inexpensive access to computing power by connecting computers and other resources together with open standards. Key aspects of grid computing include dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities through coordination of distributed and often heterogeneous resources not subject to centralized control.
This document summarizes a review paper on grid computing. It begins with an introduction to grid computing, describing it as a system that combines distributed computing resources to solve large-scale computational problems. It then discusses the layered grid architecture, including the fabric, connectivity, resource, and collective layers. Next, it outlines different types of grids like computational, data, service, and collaborative grids. It proceeds to examine challenges in grid computing such as security, resource discovery, and heterogeneity. It also describes characteristics of grids like their heterogeneous and user-centric nature. The document concludes by covering topics like grid resource management and security issues in grids.
Grid computing, or network computing, was developed to make computing power available in much the same way that electric power is available from the power grid: we just plug in, and whoever needs power may use it. In grid computing, if a system needs more power than it has available, it can share the computation with other machines connected in the grid. In this way we can use the power of a supercomputer without a huge cost, and the CPU cycles that were previously wasted can also be utilized. To perform grid computation on computers joined through the Internet, software that supports grid computation must be installed on each computer inside the VO. The software handles information queries, storage management, processing scheduling, authentication and data encryption to ensure information security.
Grid computing is the sharing of computer resources from multiple administrative domains to achieve common goals. It allows for independent, inexpensive access to high-end computational capabilities. Grid computing federates resources like computers, data, software and other devices. It provides a single login for users to access distributed resources for tasks like drug discovery, climate modeling and other data-intensive applications. Current grids are used for distributed supercomputing, high-throughput computing, on-demand computing and other methods. Grids benefit scientists, engineers and other users who need to solve large problems or collaborate globally.
This document discusses grid computing and provides an overview of the topic. It begins with an introduction to grid computing, explaining that it utilizes distributed resources over a network to solve large computational problems. It then covers aspects of grid computing such as data, computation, types of grids, how grid computing works, and the grid architecture with different layers. The document also discusses applications of grid computing, advantages, limitations, and provides a case study on using a grid-like approach for weather prediction.
This document provides a review of grid computing. It begins with definitions and explanations of grid computing and its key characteristics including decentralized control, open standards, and coordinated resource sharing across organizations. The document then discusses the types of grids, architectures, benefits including improved resource utilization and fault tolerance techniques like checkpointing and replication. It also reviews the evolution of grid technologies like Globus Toolkit and the Open Grid Services Architecture (OGSA). The challenges of programming and managing resources across administrative domains in grid environments are also summarized.
This document evaluates the performance of the First Come First Serve (FCFS) and Easy Backfilling (EBF) resource allocation algorithms in grid computing systems. It compares the resource utilization and throughput of the two algorithms when gridlet size increases linearly and non-linearly. The results show that EBF achieves better resource utilization and throughput than FCFS in both linear and non-linear cases. EBF is more efficient at scheduling jobs to maximize resource usage and the amount of work completed per time period.
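A simplified picture of the EASY-backfilling idea evaluated above: when the head of the queue cannot start, it gets a reservation at the earliest time enough processors free up, and a later job may jump ahead only if it fits in the idle processors and finishes before that reservation. The sketch below uses an assumed job model (processor count and runtime) rather than gridlets in a simulator.

```java
import java.util.*;

/** Simplified EASY-backfilling check: may a later queued job start now without delaying the head job's
 *  reservation? (Illustrative model with made-up job fields, not the evaluated simulator's code.) */
public class EasyBackfillSketch {
    record Job(String name, int procs, double runtime) {}
    record Running(int procs, double endTime) {}

    /** Earliest time at which `needed` processors will be free, given currently free procs and running jobs. */
    static double reservationTime(int freeNow, int needed, List<Running> running, double now) {
        List<Running> byEnd = new ArrayList<>(running);
        byEnd.sort(Comparator.comparingDouble(Running::endTime));
        int free = freeNow;
        for (Running r : byEnd) {
            if (free >= needed) break;
            free += r.procs();
            now = r.endTime();
        }
        return now;
    }

    /** Conservative EASY rule: the candidate must fit in the idle processors and finish before the reservation. */
    static boolean canBackfill(Job candidate, int freeNow, double now, double headReservation) {
        return candidate.procs() <= freeNow && now + candidate.runtime() <= headReservation;
    }

    public static void main(String[] args) {
        int totalProcs = 8;
        List<Running> running = List.of(new Running(5, 40.0), new Running(1, 10.0));
        int freeNow = totalProcs - 6;                        // 2 processors idle right now
        Job head = new Job("head", 6, 30.0);                 // cannot start: needs 6, only 2 free
        double reservation = reservationTime(freeNow, head.procs(), running, 0.0);   // = 40.0 here
        Job small = new Job("small", 2, 25.0);
        System.out.println("head reserved at t=" + reservation
                + ", backfill 'small' now? " + canBackfill(small, freeNow, 0.0, reservation));
    }
}
```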
A Prolific Scheme for Load Balancing Relying on Task Completion Time - IJECEIAES
In networks with a lot of computation, load balancing gains increasing significance. To offer various resources, services and applications, the ultimate aim is to facilitate the sharing of services and resources on the network over the Internet. A key issue to be focused on and addressed in networks with a large amount of computation is load balancing. Load is the number of tasks 't' performed by a computation system, and it can be categorized as network load and CPU load. For an efficient load balancing strategy, the process of assigning the load between the nodes should enhance resource utilization and minimize computation time, which can be accomplished by a uniform distribution of the load to all the nodes. A load balancing method should guarantee that each node in a network performs an almost equal amount of work relative to its capacity and availability of resources. Relying on task subtraction, this work presents a pioneering algorithm termed E-TS (Efficient Task Subtraction). This algorithm selects appropriate nodes for each task. The proposed algorithm improves the utilization of computing resources and preserves neutrality in assigning the load to the nodes in the network.
Service oriented cloud architecture for improved performance of smart grid ap... - eSAT Journals
Abstract: An effective and flexible computational platform is needed for the data coordination and processing associated with real-time operational and application services in the smart grid. A server environment where multiple applications are hosted by a common pool of virtualized server resources demands an open source structure to ensure operational flexibility. In this paper, an open source architecture is proposed for real-time services that involve data coordination and processing. The architecture enables secure and reliable exchange of information and transactions with users over the internet to support various services. Prioritizing the applications based on complexity enhances the efficiency of resource allocation in such situations. A priority-based scheduling algorithm is proposed in this work for application-level performance management in the structure. An analytical model based on queuing theory is developed for evaluating the performance of the test bed. The implementation is done using an OpenStack cloud, and the test results show a significant gain of 8% with the algorithm. Index Terms: Service Oriented Architecture, Smart grid, Mean response time, Open stack, Queuing model.
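As a generic illustration of the kind of queuing-theory result such an analysis builds on (the abstract does not state the exact model, so a single M/M/1 queue is assumed here), the mean response time follows from the arrival rate and the service rate:

```latex
% M/M/1 mean response time, assuming arrival rate \lambda and service rate \mu with \lambda < \mu
T = \frac{1}{\mu - \lambda}, \qquad
\text{e.g. } \mu = 10\ \text{req/s},\ \lambda = 8\ \text{req/s} \;\Rightarrow\; T = 0.5\ \text{s}
```

The priority classes and the OpenStack test bed described in the paper would refine this basic relation rather than replace it.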
Development of a Suitable Load Balancing Strategy In Case Of a Cloud Computi...IJMER
Cloud computing is an attracting technology in the field of computer science. In
Gartner’s report, it says that the cloud will bring changes to the IT industry. The cloud is changing
our life by providing users with new types of services. Users get service from a cloud without paying
attention to the details. NIST gave a definition of cloud computing as a model for enabling
ubiquitous, convenient, on-demand network access to a shared pool of configurable computing
resources (e.g., networks, servers, storage, applications, and services) that can be rapidly
provisioned and released with minimal management effort or service provider interaction. More
and more people pay attention to cloud computing. Cloud computing is efficient and scalable, but maintaining the stability of processing so many jobs in the cloud computing environment is a very complex problem, with load balancing receiving much attention from researchers. Since the job arrival pattern is not predictable and the capacities of the nodes in the cloud differ, workload control is crucial for the load balancing problem in order to improve system performance and maintain stability. Load balancing schemes, depending on whether the system dynamics are important, can be
either static or dynamic. Static schemes do not use the system information and are less complex
while dynamic schemes will bring additional costs for the system but can change as the system
status changes. A dynamic scheme is used here for its flexibility. The model has a main controller
and balancers to gather and analyze the information. Thus, the dynamic control has little influence
on the other working nodes. The system status then provides a basis for choosing the right load
balancing strategy. The load balancing model given in this research article is aimed at the public
cloud which has numerous nodes with distributed computing resources in many different
geographic locations. Thus, this model divides the public cloud into several cloud partitions. When
the environment is very large and complex, these divisions simplify the load balancing. The cloud
has a main controller that chooses the suitable partitions for arriving jobs while the balancer for
each cloud partition chooses the best load balancing strategy.
A STUDY ON JOB SCHEDULING IN CLOUD ENVIRONMENTpharmaindexing
This document discusses job scheduling algorithms in cloud computing environments. It begins with an introduction to cloud computing and job scheduling challenges. It then reviews several existing job scheduling algorithms that aim to minimize completion time and costs while improving performance and quality of service. These algorithms use approaches like genetic algorithms, priority queues, and workload prediction. The document also discusses issues like priority-based scheduling and balancing mixed workloads. Overall, the document analyzes the problem of job scheduling in clouds and surveys different proposed scheduling algorithms and their objectives.
Grid computing allows for the sharing of distributed computing resources over a network. It provides users with access to high-end computing facilities in a dependable, consistent, and inexpensive manner. A grid aggregates distributed computing power to solve large-scale problems. It enables virtual organizations through coordinated sharing of resources across locations, organizations, and hardware/software boundaries. Grid computing provides computational utility to consumers by managing resource identification, allocation, and consolidation through middleware software. It allows under-utilized resources to be dynamically distributed in an equitable manner.
Cloud Computing: A Perspective on Next Basic Utility in IT World IRJET Journal
This document discusses cloud computing and its architecture. It begins with an introduction to cloud computing, defining it as a model that provides infrastructure, platforms, and software as services. The key characteristics and service models of cloud computing are described.
The document then discusses the architecture of cloud computing, including the layers of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). It also describes the deployment models of private cloud, public cloud, community cloud, and hybrid cloud.
The document outlines several challenges of cloud computing, such as resource allocation and scheduling, cost optimization, processing time and speed, memory management, load balancing, security issues, fault
Grid computing can involve a lot of computational tasks, which require trustworthy computational nodes. Load balancing in grid computing is a technique which optimizes the whole process of assigning computational tasks to processing nodes. Grid computing is a form of distributed computing, but it differs from conventional distributed computing in that it tends to be heterogeneous, more loosely coupled and geographically dispersed. Optimization of this process must include the overall maximization of resource utilization, with a balanced load on each processing unit, while also decreasing the overall completion time. Evolutionary algorithms such as genetic algorithms have been studied for the implementation of load balancing across grid networks. The problem with these genetic algorithms is that they are quite slow in cases where a large number of tasks needs to be processed. In this paper we give a novel approach of parallel genetic algorithms for enhancing the overall performance and optimization of managing the whole process of load balancing across the grid nodes.
The Open Grid Services Architecture (OGSA) defines a set of standards for building grid systems. It has four main layers:
1) The application layer which includes physical resources like servers and storage, and logical resources like database and workflow managers.
2) A web services layer which defines how resources and services can interact using Open Grid Services Infrastructure (OGSI) and grid services.
3) OGSI specifies five interfaces for grid services: Factory, Life Cycle, State Management, Service Groups, and Notification.
4) Together these layers define a standardized architecture for building grid systems using web services and interfaces to manage resources and their interactions.
‘Grids’ are an approach for building dynamically constructed problem-solving environments using geographically and organizationally dispersed, high-performance computing and data handling resources. Grids also provide important infrastructure supporting multi-institutional collaboration.
A Reconfigurable Component-Based Problem Solving EnvironmentSheila Sinclair
This technical report describes a reconfigurable component-based problem solving environment called DISCWorld. The key features discussed are:
1) DISCWorld uses a data flow model represented as directed acyclic graphs (DAGs) of operators to integrate distributed computing components across networks.
2) It supports both long running simulations and parameter search applications by allowing complex processing requests to be composed graphically or through scripting and executed on heterogeneous platforms.
3) Operators can be simple "pure Java" implementations or wrappers to fast platform-specific implementations, and some operators may represent sub-graphs that can be reconfigured to run across multiple servers for faster execution.
An efficient scheduling policy for load balancing model for computational grid system
Computer Engineering and Intelligent Systems www.iiste.org
ISSN 2222-1719 (Paper) ISSN 2222-2863 (Online)
Vol 3, No.7, 2012

An Efficient Scheduling Policy for Load Balancing Model for Computational Grid System

Mukul Pathak 1, Ajeet Kumar Bhartee 2, Vinay Tandon 3
1, 2. Department of Computer Science & Engineering, Galgotias College of Engineering & Technology, Greater Noida (U.P.), India
[email protected], [email protected]
3. Department of Master of Computer Application, Aligarh College of Engineering & Technology, Greater Noida (U.P.), India
[email protected]
Abstract
Workload and resource management are two essential functions provided at the service level of a Grid system. To improve global throughput, effective and efficient load balancing is fundamentally important. We also examine which scheduling policy each algorithm uses, because an efficient scheduling policy can exploit the computational resources effectively by allowing multiple independent jobs to run over a network of heterogeneous clusters. In this paper, a dynamic grid model, organized as a collection of clusters, is proposed. An efficient scheduling policy is applied, and its comparison with other scheduling policies is presented.
1. INTRODUCTION
In order to fulfil user expectations in terms of performance and efficiency, a Grid system needs efficient load balancing algorithms for the distribution of tasks [1]. A load balancing algorithm attempts to improve the response time of users' submitted applications by ensuring maximal utilization of the available resources. The main goal is to prevent, where possible, the condition in which some processors are overloaded with a set of tasks while others are lightly loaded or even idle [2]. Although the load balancing problem in conventional distributed systems has been studied intensively, new challenges in Grid computing still make it an interesting topic, and many research projects are under way. This is due to the characteristics of Grid computing and the complex nature of the problem itself. Load balancing algorithms for classical distributed systems, which usually run on homogeneous and dedicated resources, do not work well in Grid architectures. In this section we define the motivation of this research, identify the research questions, and outline the organization of the paper.
2. CHARACTERISTICS OF GRID
The following main issues characterize computational Grids [3, 4]:
• Heterogeneity: A Grid involves a multiplicity of resources that are heterogeneous in nature and might span numerous administrative domains across wide geographical distances. In particular:
 - Resources are heterogeneous.
 - Resources are administratively disparate.
 - Resources are geographically disparate.
 - Users do not have to worry about system details (e.g., location, operating system, accounts).
 - Resources are numerous.
 - Resources have different resource management policies.
 - Resources are owned and managed by different, potentially mutually distrustful organizations and individuals that likely have different security policies and practices.
• Scalability: A Grid might grow from a few resources to millions. This raises the problem of potential performance degradation as a Grid's size increases. Consequently, applications that require a large number of geographically distributed resources must be designed to be extremely latency tolerant.
• Dynamicity or Adaptability: In a Grid, a resource failure is the rule, not the exception [5]. In fact, with so many
resources in a Grid, the probability of some resource failing is naturally high. The resource managers or applications
must tailor their behaviour dynamically so as to extract the maximum performance from the available resources and
services.
• Parallel CPU execution: One of the most important features of a Grid is its scope for massive parallel CPU capacity. The common attribute among such uses is that the applications have been written to use algorithms that can be partitioned into independently running parts. A CPU-intensive Grid application can be thought of as many smaller “sub-jobs,” each executing on a different machine in the Grid [5]. To the extent that these sub-jobs do not need to communicate with each other, the application becomes more “scalable”. A perfectly scalable application will, for example, finish 10 times faster if it uses 10 times the number of processors.
• Virtual organizations: The users of the Grid can be organized dynamically into a number of virtual organizations,
each with different policy requirements [6, 7]. These virtual organizations can share their resources collectively as a
larger Grid.
• Resource balancing: A Grid contains a large number of resources contributed by individual machines into a greater
total virtual resource. For applications that are Grid enabled, the Grid can offer a resource balancing effect by
scheduling Grid jobs on machines with low utilization [8].
• Reliability and Management: High-end conventional computing systems use expensive hardware to increase reliability. A Grid is an alternative approach to reliability that relies more on software technology than on expensive hardware [51]. The goal of virtualizing the resources on the Grid and handling heterogeneous systems more uniformly will create new opportunities to better manage a larger, more dispersed IT infrastructure.
3. GRID ARCHITECTURE
Architecture identifies the fundamental system components, specifies the purpose and function of these components, and indicates how these components interact with each other. Grid architecture is a protocol architecture, with protocols defining the basic mechanisms by which VO [9, 10, 11] users and resources negotiate, establish, manage and exploit sharing relationships. It is also a service- and standards-based open architecture that facilitates extensibility, interoperability, portability and code sharing. The following components are necessary to form a Grid.
Grid Fabric: It comprises all the resources geographically distributed (across the globe) and accessible from
anywhere on the Internet. They could be computers (such as PCs or Workstations running operating systems such as
UNIX or NT), clusters (running cluster operating systems or resource management systems such as LSF, Condor or
PBS), storage devices, databases, and special scientific instruments such as a radio telescope.
Grid Middleware: It offers core services such as remote process management, collocation of resources, storage
access, information (registry), security, authentication, and Quality of Service (QoS) such as resource reservation and
trading.
Grid Development Environments and Tools: These offer high-level services that allow programmers to develop
applications and brokers that act as user agents that can manage or schedule computations across global resources.
Grid Applications and Portals: These are developed using Grid-enabled languages such as HPC++ and message-passing systems such as MPI. Applications such as parameter simulations and grand-challenge problems often require considerable computational power, need access to remote data sets, and may need to interact with scientific instruments. Grid portals offer web-enabled application services, i.e., users can submit jobs to remote resources and collect their results through a web interface.
4. LOAD BALANCING APPROACHES
The load balancing problem has been discussed in the traditional distributed systems literature for more than two decades. Various algorithms, strategies and policies have been proposed, implemented and classified [12].
4.1 STATIC LOAD BALANCING ALGORITHM
Static load balancing algorithms allocate the tasks of a parallel program to workstations based either on the load at the time nodes are allocated to a task, or on the average load of the workstation cluster. The decisions related to load balance are made at compile time, when resource requirements are estimated. The advantage of this sort of algorithm is its simplicity in terms of both implementation and overhead, since there is no need to constantly monitor the workstations for performance statistics.
4.2 DYNAMIC LOAD BALANCING ALGORITHM
Dynamic load balancing algorithms make changes to the distribution of work among workstations at run-time; they use current or recent load information when making distribution decisions. Multicomputers with dynamic load balancing allocate or reallocate resources at runtime without a priori task information, determining when and whose tasks can be migrated (a schematic contrast with the static case is sketched below).
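As an illustration only (no code is given in the original), the contrast between the two schemes can be sketched in Java as follows; the class and method names are hypothetical:

```java
import java.util.Comparator;
import java.util.List;

// Illustrative contrast between static and dynamic assignment; all names are hypothetical.
class AssignmentSchemes {

    // Static: the mapping is fixed up front (here, simple round-robin over the
    // known nodes) and never revisits run-time load information.
    static int[] staticAssign(int taskCount, int nodeCount) {
        int[] placement = new int[taskCount];
        for (int t = 0; t < taskCount; t++) {
            placement[t] = t % nodeCount;   // task t always runs on node t mod nodeCount
        }
        return placement;
    }

    // Dynamic: each task goes to the node that is least loaded right now,
    // so the decision changes as the system status changes.
    static Node dynamicAssign(List<Node> nodes) {
        return nodes.stream()
                    .min(Comparator.comparingDouble(Node::currentLoad))
                    .orElseThrow(() -> new IllegalStateException("no nodes available"));
    }

    interface Node {
        double currentLoad();   // refreshed at run-time by a monitoring component
    }
}
```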
5. LOAD BALANCING STRATEGIES
There are two major strategies that a load balancing algorithm will usually employ [17].
SENDER-INITIATED VS RECEIVER-INITIATED STRATEGIES
The question of who makes the load balancing decision is answered based on whether a sender-initiated or receiver-initiated policy is employed [13]. In sender-initiated policies, congested nodes attempt to move work to lightly loaded nodes. In receiver-initiated policies, lightly loaded nodes look for heavily loaded nodes from which work may be received.
Figure 4 shows the relative performance of a sender-initiated and a receiver-initiated load balancing algorithm. As can be seen, both the sender-initiated and receiver-initiated policies perform substantially better than a system which has no load sharing.
The sender-initiated policy performs better than the receiver-initiated policy at low to moderate system loads. The reason is that at these loads, the probability of finding a lightly loaded node is higher than that of finding a heavily loaded node. Similarly, at high system loads, the receiver-initiated policy performs better, since it is much easier to find a heavily loaded node. As a result, adaptive policies have been proposed which behave like sender-initiated policies at low to moderate system loads, while at high system loads they behave like receiver-initiated policies (a minimal sketch of such an adaptive rule follows).
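The adaptive behaviour described above is not given as code in the paper; a minimal sketch, assuming a single switch-over threshold, might look like this:

```java
// Hypothetical adaptive rule: sender-initiated at low/moderate system load,
// receiver-initiated at high load. The threshold value is an assumption.
enum InitiationMode { SENDER_INITIATED, RECEIVER_INITIATED }

class AdaptiveInitiation {
    private static final double HIGH_LOAD = 0.75;  // assumed switch-over point

    static InitiationMode choose(double systemLoad) {
        // At low load it is easy to find a lightly loaded receiver, so senders probe;
        // at high load it is easy to find an overloaded sender, so receivers probe.
        return systemLoad < HIGH_LOAD
                ? InitiationMode.SENDER_INITIATED
                : InitiationMode.RECEIVER_INITIATED;
    }
}
```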
6. LOAD BALANCING POLICIES
Load balancing algorithms can be defined by their implementation of the following policies [14-15] (a schematic interface sketch follows the list):
• Information policy: specifies what workload information is to be collected, when it is to be collected and from where.
• Triggering policy: determines the appropriate moment to start a load balancing operation.
• Resource type policy: classifies a resource as a server or a receiver of tasks according to its availability status.
• Location policy: uses the results of the resource type policy to find a suitable partner for a server or receiver.
• Selection policy: defines the tasks that should be migrated from overloaded resources (sources) to the most idle resources (receivers).
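The paper does not give concrete interfaces for these policies; purely as an illustration, the five policies might be expressed in Java as follows (all type and method names are hypothetical):

```java
import java.util.List;

// Hypothetical interfaces for the five load balancing policies listed above;
// signatures are illustrative only, not taken from the paper.
interface InformationPolicy {
    LoadInfo collect(Node node);                 // what to collect, when, and from where
}

interface TriggeringPolicy {
    boolean shouldTrigger(List<LoadInfo> load);  // is now the right moment to balance?
}

interface ResourceTypePolicy {
    Role classify(LoadInfo info);                // server (source) or receiver of tasks
}

interface LocationPolicy {
    Node findPartner(Node node, Role role, List<Node> candidates);  // suitable partner
}

interface SelectionPolicy {
    List<Job> selectJobs(Node overloaded, Node receiver);  // which tasks to migrate
}

enum Role { SERVER, RECEIVER, NEUTRAL }

// Minimal supporting types so the sketch compiles.
class Node { String id; }
class Job { String id; }
class LoadInfo { double cpuUtilization; double cpuSpeed; int queueLength; }
```

Under this view, a concrete load balancing algorithm is simply a particular choice of implementations for these five interfaces.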
7. PROBLEM STATEMENT
In grid environments, the shared resources are dynamic in nature, which in turn affects application performance.
Workload and resource management are two essential functions provided at the service level of the Grid software
infrastructure. To improve the global throughput of these environments, effective and efficient load balancing
algorithms are fundamentally important. The focus of our study is to consider factors which can be used as
characteristics for decision making to initiate Load Balancing. Load Balancing is one of the most important factors
which can affect the performance of the grid application.
This work analyzes existing load balancing modules and tries to find the performance bottlenecks in them. All load balancing algorithms implement five policies [16]. The efficient implementation of these policies decides the overall performance of a load balancing algorithm.
The main objective of this paper is to propose an efficient load balancing algorithm for the Grid environment. The main difference between the existing load balancing algorithm and the proposed one lies in the implementation of the scheduling policy selected by the Selection Policy. In this work we construct a scheduling policy that can be used more reliably to decide which job to select for migration from a heavily loaded node to a lightly loaded node.
8. PROPOSED METHODOLOGY
Load balancing is defined as the allocation of the work of a single application to processors at run-time so that the
execution time of the application is minimized. This section discusses the design of the proposed load balancing algorithm.
9. PROPOSED GRID MODEL COMPONENTS
Cluster-Level consists of a collection of computing nodes. The Cluster-Level manager (CM) can fully control the computing nodes within its cluster, but cannot operate the computing nodes of other clusters directly. The computing nodes within the cluster are referred to as friends. The CM maintains the load information along with the registration information of its computing nodes. In Cluster-Level, each friend runs a CM, whose role is to balance the intra-cluster workload. A designated friend with the highest CPU speed in each cluster is treated as the cluster server or master. The Cluster System Monitor (CS) determines the load index of the computing nodes and provides this information to the CM.
Grid-Level consists of a collection of interconnected clusters. The Grid-Level manager (GM) is responsible for load control among its clusters, as shown in the accompanying figures. The GM maintains the load information along with the registration information of the neighbouring masters in the grid. Neighbours for each cluster are formed in terms of communication costs. The GM calculates the minimum communication cost of sending or receiving jobs to/from remote clusters based on the information collected in the last exchange interval. The master of each cluster also runs the GM. K denotes the number of clusters at Grid-Level. The Grid System Monitor (GS) determines the load index of the masters and provides this information to the GM.
The Client Interface provides a graphical user interface to the user for the submission of jobs. The Scheduler is responsible for scheduling the submitted jobs. The Dispatcher performs the dispatching of jobs to other masters. The Collector is in charge of capturing jobs from other masters. Each neighbour of a cluster is responsible for completing the jobs assigned to it by its master.
A decentralized job scheduling approach is used, since the jobs generated by users are submitted directly to a master. The Scheduler runs as a sub-component of the CM (an illustrative skeleton of these components is sketched below).
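For orientation only, the components named in this section could be sketched as plain Java types like this; the field and class names are illustrative, not the authors' implementation:

```java
import java.util.List;
import java.util.Map;

// Purely illustrative skeleton of the model components named above
// (CM, GM, monitors, scheduler, dispatcher, collector).
class GridModelSketch {

    static class ComputingNode { String id; double loadIndex; }

    // Cluster-Level manager: balances intra-cluster load among its "friends".
    static class ClusterManager {
        List<ComputingNode> friends;            // NLIST
        ComputingNode master;                   // friend with the highest CPU speed
        Map<String, Double> friendLoad;         // maintained via the Cluster System Monitor
    }

    // Grid-Level manager: runs on each master and balances load among clusters.
    static class GridManager {
        List<ClusterManager> neighbourMasters;  // CLIST, ordered by communication cost
        Map<String, Double> clusterLoad;        // maintained via the Grid System Monitor
    }

    // Front-end and transport roles mentioned in the text.
    static class Scheduler  { /* schedules jobs submitted directly to a master */ }
    static class Dispatcher { /* ships selected jobs to other masters */ }
    static class Collector  { /* receives jobs arriving from other masters */ }
}
```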
10. DESIGN OF LOAD BALANCING MODEL
Load balancing should take place when the load situation has changed. There are particular activities which change the load configuration in a Grid environment. These activities can be categorized as follows:
• Arrival of any new job and queuing of that job to a particular node.
• Completion of execution of any job.
• Arrival of any new resource.
• Withdrawal of any existing resource.
Whenever any of these four activities happens, the activity is communicated to the master node; load information is then collected and the load balancing condition is checked. If the load balancing condition is fulfilled, the actual load balancing activity is performed (a schematic sketch of this trigger follows).
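Schematically, this event-driven trigger could be written as follows; the event names and the shape of the check are assumptions for illustration:

```java
// Schematic trigger for the four load-changing activities listed above;
// event names and method signatures are illustrative, not from the paper.
class LoadBalanceTrigger {

    enum GridEvent { JOB_ARRIVED, JOB_COMPLETED, RESOURCE_JOINED, RESOURCE_WITHDRAWN }

    interface Master {
        double collectLoadInformation();            // gather load from all nodes
        boolean balancingConditionMet(double load); // e.g. imbalance above a threshold
        void balance();                             // perform the actual balancing step
    }

    // Every one of the four activities is reported to the master node, which then
    // collects load information and checks the load balancing condition.
    static void onEvent(GridEvent event, Master master) {
        double load = master.collectLoadInformation();
        if (master.balancingConditionMet(load)) {
            master.balance();
        }
    }
}
```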
The SI-LB algorithm obtains the load information of the cluster and communicates this information to the GM via mutual information feedback. Based on the load information, SI-LB chooses the most suitable node for each job, thereby minimizing job execution time and maximizing system throughput. Load information, generally defined in terms of a load index, is a necessary input to the SI-LB algorithm. The load at each computing node contributes to the overall load of the cluster and can be determined from its CPU utilization, CPU speed and queue length. The load index is determined dynamically, and the weighted sum of squares method is used to calculate the load at each computing node. Figure 6 represents the logical structure of a processing cluster; any cluster in the grid can be a processing cluster.
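The paper names the three load components (CPU utilization, CPU speed and queue length) and the weighted sum of squares method, but not the exact formula or weights; the sketch below is one plausible reading, and the weights and normalization are assumptions:

```java
// Hypothetical load index: weighted sum of squares over normalized CPU utilization,
// relative CPU speed and queue length. Weights and normalization are assumed.
public class LoadIndex {

    private static final double W_UTIL  = 0.5;  // illustrative weights only
    private static final double W_SPEED = 0.3;
    private static final double W_QUEUE = 0.2;

    /**
     * @param cpuUtilization fraction of CPU in use, in [0, 1]
     * @param cpuSpeed       node CPU speed (e.g. MIPS)
     * @param maxCpuSpeed    fastest CPU in the cluster, used for normalization
     * @param queueLength    jobs waiting at the node
     * @param maxQueueLength largest queue observed in the cluster
     */
    public static double compute(double cpuUtilization,
                                 double cpuSpeed, double maxCpuSpeed,
                                 int queueLength, int maxQueueLength) {
        // A faster, idler node with a shorter queue should get a lower index.
        double speedTerm = 1.0 - (cpuSpeed / maxCpuSpeed);
        double queueTerm = maxQueueLength == 0 ? 0.0
                         : (double) queueLength / maxQueueLength;
        return W_UTIL  * cpuUtilization * cpuUtilization
             + W_SPEED * speedTerm * speedTerm
             + W_QUEUE * queueTerm * queueTerm;
    }

    public static void main(String[] args) {
        // Example: 80% utilized node, half the speed of the fastest machine, 4 of 10 queued jobs.
        System.out.println(compute(0.8, 1500, 3000, 4, 10));
    }
}
```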
In Cluster-Level load balancing, each Cluster-Level manager (CM) decides whether or not to start a load balancing operation, depending on the current workload of its associated cluster as estimated from its own friends in NLIST. If it decides to start a load balancing operation, it tries to balance the workload among its under-loaded friends in NLIST. If any friend in the cluster is under-loaded or overloaded at any instant, it requests jobs from, or allots jobs to, other friends in NLIST with minimal load, using the symmetrically initiated approach to load balancing. If the master is unable to balance the workload among its friends, the jobs are transferred to the under-loaded masters in CLIST with minimum load and minimum communication delay.
In Grid-Level load balancing, balancing is performed only if a CM fails to balance the workload among its associated friends. In this case, jobs of overloaded clusters are transferred to under-loaded ones in CLIST, taking into account the minimum communication delay and load. The chosen under-loaded clusters are those which require minimal communication delay for transferring jobs from the overloaded clusters (see the sketch below).
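Read as pseudocode, the two-level decision described in the last two paragraphs might look like the following sketch; the thresholds, the NLIST/CLIST handling and the method names are assumptions rather than the authors' implementation:

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of the two-level (intra-cluster, then inter-cluster) decision.
class TwoLevelBalancer {

    static final double OVERLOAD  = 0.8;  // assumed overload threshold
    static final double UNDERLOAD = 0.3;  // assumed under-load threshold

    // Cluster-Level: the CM first tries to move work between friends in NLIST;
    // only if that fails is the job handed to an under-loaded cluster in CLIST.
    void balance(List<Friend> nlist, List<Cluster> clist) {
        for (Friend f : nlist) {
            if (f.loadIndex() > OVERLOAD) {
                Friend target = nlist.stream()
                        .filter(g -> g.loadIndex() < UNDERLOAD)
                        .min(Comparator.comparingDouble(Friend::loadIndex))
                        .orElse(null);
                if (target != null) {
                    f.migrateJobsTo(target);        // symmetrically initiated transfer
                } else {
                    // Grid-Level: pick the under-loaded cluster with the smallest
                    // communication delay for receiving the overloaded friend's jobs.
                    clist.stream()
                         .filter(c -> c.loadIndex() < UNDERLOAD)
                         .min(Comparator.comparingDouble(Cluster::communicationDelay))
                         .ifPresent(f::migrateJobsTo);
                }
            }
        }
    }

    interface Target { }
    interface Friend extends Target {
        double loadIndex();
        void migrateJobsTo(Target t);
    }
    interface Cluster extends Target {
        double loadIndex();
        double communicationDelay();
    }
}
```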
11. IMPLEMENTATION DETAILS AND EXPERIMENTAL RESULTS
To implement the proposed load balancing algorithm, an application has been developed and executed in a simulated grid environment. The application was developed using J2EE and the Alea 3 simulator.
EXPERIMENTAL RESULTS
In this section we show the performance of the Alea 3 simulator through several experiments. All experiments were performed on an Intel Core i3 2.27 GHz laptop with 3 GB of RAM. Unless otherwise indicated, the JVM (Java Virtual Machine) was limited to 1 GB of available RAM.
The experiment involved 103,656 jobs and 14 clusters with 806 CPUs.
In the experiment, we compared three different scheduling algorithms: FCFS, EASY-BF and the Random+CONS scheduling policy. Figures 7–9 present graphs depicting the average machine usage per cluster (left) and the number of waiting and running jobs per day (right), as generated by Alea 3 during the experiment. These graphs clearly demonstrate the major differences among the algorithms. Concerning machine usage, FCFS, as expected, generates very poor results. FCFS is not able to utilize the available resources when the first job in the queue requires specific machine(s) that are currently unavailable. At such points, other more “flexible” jobs in the queue could be executed, increasing the machine utilization; this is the main goal of the EASY-BF algorithm. As we can see, EASY-BF is able to increase the machine usage by using the backfilling approach. Still, EASY-BF does not allow delaying the execution start of the first job in the queue, which prevents it from making more aggressive decisions that would increase utilization even further. The Random+CONS algorithm does not apply such restrictions, and thanks to its more efficient schedule-based approach it produces the best results.
For the second criterion, similar reasons as in the previous example mean that FCFS is not able to schedule jobs fluently, generating a huge peak of waiting jobs over time. For the same reason, the resulting makespan is also much higher than for the remaining algorithms. EASY-BF is capable of higher resource utilization and of reducing the number of waiting jobs over time. ESG again produces the best results. As can be seen, these and several other graphical outputs, such as those presented in the figures, help the user to understand and compare the scheduling process of different scheduling algorithms.
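For readers unfamiliar with backfilling, the admission rule that EASY-BF applies (and that the discussion above relies on) can be sketched as follows; this is a generic illustration, not code from Alea 3:

```java
// Minimal illustration of the EASY backfilling rule: a waiting job may jump the
// queue only if it does not delay the reserved start of the first queued job.
class EasyBackfillRule {

    /**
     * @param freeCpusNow       CPUs idle at the current time
     * @param freeCpusAtReserve CPUs still free at the head job's reserved start time
     * @param headReservedStart reservation time of the first job in the queue
     * @param now               current (simulation) time
     * @param candidateCpus     CPUs requested by the candidate backfill job
     * @param candidateRuntime  user-estimated runtime of the candidate job
     */
    static boolean canBackfill(int freeCpusNow, int freeCpusAtReserve,
                               long headReservedStart, long now,
                               int candidateCpus, long candidateRuntime) {
        boolean fitsNow = candidateCpus <= freeCpusNow;
        boolean finishesBeforeReservation = now + candidateRuntime <= headReservedStart;
        boolean leavesReservedCpusAlone = candidateCpus <= freeCpusAtReserve;
        // Either the candidate ends before the head job's reservation, or it only
        // uses CPUs that the reservation does not need.
        return fitsNow && (finishesBeforeReservation || leavesReservedCpusAlone);
    }
}
```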
12. CONCLUSION AND SCOPE OF FUTURE WORK
Every load balancing algorithm implements five policies, and the efficient implementation of these policies decides the overall performance of the algorithm. In this work we analyzed an existing load balancing algorithm and proposed an enhanced algorithm which more efficiently implements three of the five policies: the Information Policy, the Triggering Policy and the Selection Policy. The proposed algorithm was executed in a simulated Grid environment.
13. FUTURE DIRECTIONS
• More complex models such as nesting of clusters need to be investigated.
• Additional factors like network bandwidth that may affect the performance of the algorithm need to be
studied.
• Experiments could be tried in a real environment.
REFERENCES
[1] Krishnaram Kenthapadi, Stanford University, [email protected], and Gurmeet Singh Manku, Google Inc., [email protected], Decentralized Algorithms using both Local and Random Probes for P2P Load Balancing.
[2] B. Yagoubi, Department of Computer Science, Faculty of Sciences, University of Oran, and Y. Slimani, Department of Computer Science, Faculty of Sciences of Tunis, Task Load Balancing Strategy for Grid Computing.
[3] Hans-Ulrich Heiss and Michael Schmitz, Decentralized Dynamic Load Balancing: The Particles Approach.
[4] Junwei Cao, Daniel P. Spooner, Stephen A. Jarvis, and Graham R. Nudd, Grid Load Balancing Using Intelligent Agents.
[5] Ian Foster, Argonne National Laboratory & University of Chicago, What is the Grid? A Three Point Checklist.
[6] Jennifer M. Schopf, Mathematics and Computer Science Division, Argonne National Laboratory, and Department of Computer Science, Northwestern University, Grids: The Top Ten Questions.
[7] Karl Czajkowski, Ian Foster and Carl Kesselman, Resource Co-Allocation in Computational Grids.
[8] Ann Chervenak, Ian Foster, Carl Kesselman, Charles Salisbury and Steven Tuecke, The Data Grid: Towards an Architecture for the Distributed Management and Analysis of Large Scientific Datasets.
[9] Foster, I., C. Kesselman, and S. Tuecke, “The Anatomy of the Grid: Enabling Scalable Virtual Organizations”, International Journal of Supercomputer Applications, 2001.
[10] Jean-Christophe Durand, “Grid Computing: a Conceptual and Practical Study”, November 8, 2004.
[11] Clovis Chapman, Paul Wilson, “Condor services for the Global Grid: Interoperability between Condor and OGSA”, Proceedings of the 2004 UK e-Science All Hands Meeting, ISBN 1-904425-21-6, pages 870-877, Nottingham, UK, August 2004. http://www.cs.wisc.edu/condor/doc/condor-ogsa-2004.pdf
[12] Javier Bustos Jimenez, Robin Hood: An Active Objects Load Balancing Mechanism for Intranet.
[13] Shahzad Malik, Dynamic Load Balancing in a Network of Workstations, 95.515F Research Report, November 29, 2000.
[14] Menno Dobber, Ger Koole, and Rob van der Mei, Dynamic Load Balancing for a Grid Application, http://www.cs.vu.nl/~amdobber
[15] Guy Bernard, A Decentralized and Efficient Algorithm for Load Sharing in Networks of Workstations.
[16] Francois Grey, Matti Heikkurinen, Rosy Mondardini, Robindra Prabhu, “Brief History of Grid”, http://Gridcafe.web.cern.ch/Gridcafe/Gridhistory/history.html
[17] Gregor von Laszewski, Ian Foster, Argonne National Laboratory, Designing Grid Based Problem Solving Environments, www-unix.mcs.anl.gov/~laszewsk/papers/cogpse-final.pdf
[51] D. Klusáček, L. Matyska, and H. Rudová, Alea – Grid scheduling simulation environment, in 7th International Conference on Parallel Processing and Applied Mathematics (PPAM 2007), volume 4967 of LNCS, pages 1029-1038, Springer, 2008.
Mukul Pathak
The author is pursuing post-graduation in engineering at Galgotias College of Engineering & Technology, Greater Noida (U.P.). He completed his engineering degree at the College of Engineering & Technology, Moradabad (U.P.), affiliated to Gautam Buddh Technical University, in 2010.
Ajeet Kumar Bhartee
The author is currently working at Galgotias College of Engineering & Technology, Greater Noida (U.P.), and has more than nine years of experience. He completed his engineering degree at Madan Mohan Malaviya Engineering College, Gorakhpur (U.P.), affiliated to Uttar Pradesh Technical University, in 2001 and completed his Masters at CDAC Noida (U.P.) in 2007.
Vinay Tandon
The author is currently working at Aligarh College of Engineering and Technology, Aligarh (U.P.), and has 13 years of experience. He completed his Master in Computer Application and is pursuing his M.Tech.
Figure-1: Grid Architecture
Figure-2: Static Load Balancing [12]
Figure 3: Dynamic Load Balancing [12]
Figure 4: System Utilization of Sender-initiated, Receiver-initiated and No Load Sharing
Figure 5: Design of Load Balancing Model
Figure 6: Logical Structure of Processing Cluster
Figure 7: FCFS Scheduling Result
Figure 8: EASY-BF Scheduling Result
Figure 9: CONS+RANDOM Scheduling Result