Cloud computing has become an important topic in high-performance distributed computing. Task scheduling is considered one of the most significant issues in cloud computing, where the user pays for resource use based on time. Distributing cloud resources among users' applications should therefore maximize resource utilization and minimize task execution time. The goal of task scheduling is to assign tasks to appropriate resources so as to optimize one or more performance parameters (e.g., completion time, cost, resource utilization). Moreover, scheduling belongs to the class of problems known as NP-complete, so heuristic algorithms can be applied to solve it. In this paper, an enhanced dependent task scheduling algorithm based on a Genetic Algorithm (DTGA) is introduced for mapping and executing an application's tasks. The aim of the proposed algorithm is to minimize completion time. Its performance has been evaluated using the WorkflowSim toolkit and the Standard Task Graph Set (STG) benchmark.
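As a rough illustration of the GA machinery such abstracts describe (encoding, fitness, selection, crossover, mutation), here is a minimal sketch. The task lengths, VM speeds, and GA parameters are invented for illustration, task dependencies are ignored for brevity, and this is not the paper's DTGA implementation:

```python
import random

random.seed(42)

TASK_LENGTHS = [400, 250, 310, 120, 500, 270, 150, 330]  # illustrative task sizes (MI)
VM_SPEEDS = [100, 150, 250]                              # illustrative VM speeds (MIPS)

def makespan(chromosome):
    """chromosome[i] = VM assigned to task i; makespan = latest VM finish time."""
    loads = [0.0] * len(VM_SPEEDS)
    for task, vm in enumerate(chromosome):
        loads[vm] += TASK_LENGTHS[task] / VM_SPEEDS[vm]
    return max(loads)

def tournament(pop, k=3):
    # Pick the fittest (lowest makespan) of k random individuals
    return min(random.sample(pop, k), key=makespan)

def crossover(a, b):
    # One-point crossover of two task-to-VM mappings
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(c, rate=0.1):
    # Randomly reassign some tasks to a different VM
    return [random.randrange(len(VM_SPEEDS)) if random.random() < rate else g for g in c]

def evolve(generations=50, pop_size=30):
    pop = [[random.randrange(len(VM_SPEEDS)) for _ in TASK_LENGTHS] for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(pop_size)]
    return min(pop, key=makespan)

best = evolve()
```

The fitness here is simply the makespan of the induced schedule; a real dependent-task version would respect precedence constraints when computing finish times.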
This document proposes a genetic algorithm called Workflow Scheduling for Public Cloud Using Genetic Algorithm (WSGA) to optimize the cost of executing workflows in the public cloud. It discusses how genetic algorithms can be applied to the workflow scheduling problem to generate optimal schedules. The WSGA represents potential scheduling solutions as chromosomes, uses a fitness function to evaluate scheduling costs, and applies genetic operators like selection, crossover and mutation to evolve new schedules over multiple iterations. The goal is to minimize total execution cost while meeting workflow dependencies and deadline constraints. An experimental setup is described and the WSGA approach is claimed to reduce costs more than other heuristic scheduling algorithms for communication-intensive workflows.
The document discusses optimization of resource allocation in cloud environments using a modified particle swarm optimization (PSO) approach. It proposes a Modified Resource Allocation Mutation PSO (MRAMPSO) strategy that uses an Extended Multi Queue Scheduling algorithm to schedule tasks based on resource availability and reschedules failed tasks. The MRAMPSO strategy is compared to standard PSO and other algorithms to show it can reduce execution time, makespan, transmission cost, and round trip time.
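The PSO mechanics the summary refers to can be sketched roughly as follows: each particle encodes a candidate task-to-VM mapping as a real vector, velocities are pulled toward personal and global bests, and fitness is the resulting makespan. All constants and the rounding-based encoding here are illustrative assumptions, not the MRAMPSO strategy itself:

```python
import random

random.seed(7)

TASKS = [300, 120, 450, 200, 350, 180]  # illustrative task lengths
VM_SPEED = [100, 200, 300]              # illustrative VM speeds
N_VMS = len(VM_SPEED)

def cost(position):
    """Round each dimension to a VM index and compute the resulting makespan."""
    loads = [0.0] * N_VMS
    for t, x in enumerate(position):
        vm = min(N_VMS - 1, max(0, int(round(x))))
        loads[vm] += TASKS[t] / VM_SPEED[vm]
    return max(loads)

def pso(n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    dim = len(TASKS)
    X = [[random.uniform(0, N_VMS - 1) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Standard velocity update: inertia + cognitive + social terms
                V[i][d] = (w * V[i][d] + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            if cost(X[i]) < cost(pbest[i]):
                pbest[i] = X[i][:]
        gbest = min(pbest + [gbest], key=cost)
    return gbest
```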
OPTIMIZED RESOURCE PROVISIONING METHOD FOR COMPUTATIONAL GRID (ijgca)
Grid computing is an accumulation of heterogeneous, dynamic, geographically distributed resources from multiple administrative domains that can be utilized to reach a common goal. Resource provisioning-based scheduling in large-scale distributed environments such as grids introduces new requirements and challenges that do not arise in traditional distributed computing environments. A computational grid applies the resources of many systems in a network to a single problem at the same time. Grid scheduling is the method by which specified work is assigned to the resources that complete it, in an environment that cannot otherwise fully satisfy user requirements. Satisfying users while provisioning resources can also increase the benefit to resource suppliers. Resource scheduling has to satisfy the multiple constraints specified by the user, and selecting a resource that meets all of them is a tedious process. This problem is addressed by a particle swarm optimization based heuristic scheduling algorithm that attempts to select the most suitable resource from the set of available resources. The primary parameters taken in this work for selecting the most suitable resource are makespan and cost. Experimental results show that the proposed method yields optimal schedules while satisfying all user requirements.
A novel scheduling algorithm for cloud computing environment (Souvik Pal)
The document describes a proposed genetic algorithm-based scheduling approach for cloud computing environments. It aims to minimize waiting time and queue length. The algorithm first permutes task burst times and finds minimum waiting times using FCFS and genetic algorithms. It then applies a queuing model to the sequences with minimum waiting time from each approach. Experimental results on 4 sample tasks show the genetic algorithm reduces waiting time compared to FCFS. The genetic operators of selection, crossover and mutation are applied to evolve optimal task scheduling sequences.
Service Request Scheduling in Cloud Computing using Meta-Heuristic Technique:... (IRJET Journal)
This document discusses using the Teaching Learning Based Optimization (TLBO) meta-heuristic technique for service request scheduling between users and cloud service providers. TLBO is a nature-inspired algorithm that mimics the teacher-student learning process. It is compared to other meta-heuristic algorithms like Genetic Algorithm. The key steps of TLBO involve initializing a population, evaluating fitness, selecting the best solution as teacher, and updating the population through teacher and learner phases until termination criteria is met. The document proposes using number of users and virtual machines as parameters for TLBO scheduling in cloud computing. MATLAB simulation results show the initial and final iterations converging to an optimal scheduling solution.
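The teacher and learner phases described above can be sketched on a toy continuous objective standing in for a scheduling cost; the population size, dimension, and bounds are illustrative assumptions, not the paper's MATLAB setup:

```python
import random

random.seed(1)

def fitness(x):
    """Toy objective to minimise (sphere function), standing in for a scheduling cost."""
    return sum(v * v for v in x)

def tlbo(pop_size=15, dim=4, iters=40, lo=-5.0, hi=5.0):
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        teacher = min(pop, key=fitness)  # best learner acts as the teacher
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i in range(pop_size):
            x = pop[i]
            # Teacher phase: shift toward the teacher relative to the class mean
            tf = random.choice([1, 2])  # teaching factor
            cand = [x[d] + random.random() * (teacher[d] - tf * mean[d]) for d in range(dim)]
            if fitness(cand) < fitness(x):
                pop[i] = cand
            # Learner phase: learn from a randomly chosen classmate
            x, peer = pop[i], pop[random.randrange(pop_size)]
            if fitness(x) < fitness(peer):
                cand = [x[d] + random.random() * (x[d] - peer[d]) for d in range(dim)]
            else:
                cand = [x[d] + random.random() * (peer[d] - x[d]) for d in range(dim)]
            if fitness(cand) < fitness(x):
                pop[i] = cand
    return min(pop, key=fitness)
```

Greedy acceptance in both phases (keep a candidate only if it improves fitness) is what drives convergence toward the teacher.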
GROUPING BASED JOB SCHEDULING ALGORITHM USING PRIORITY QUEUE AND HYBRID ALGOR... (ijgca)
This document describes a proposed grouping based job scheduling algorithm for grid computing that aims to maximize resource utilization and minimize job processing times. It discusses related work on job scheduling algorithms and then presents the steps of the proposed algorithm. The algorithm uses shortest job first, first-in first-out, and round robin scheduling to process jobs in groups. The algorithm is evaluated experimentally in MATLAB and shown to reduce total job processing time compared to using only first-in first-out scheduling. Graphs demonstrate the processing time improvements achieved by the combined scheduling approach.
An enhanced adaptive scoring job scheduling algorithm with replication strate... (eSAT Publishing House)
This document describes an enhanced adaptive scoring job scheduling algorithm with replication strategy for grid environments. The algorithm aims to improve upon an existing adaptive scoring job scheduling algorithm by identifying whether jobs are data-intensive or computation-intensive. It then divides large jobs into subtasks, replicates the subtasks, and allocates the replicas to clusters based on a computed cluster score in order to improve resource utilization and job completion times. The algorithm is evaluated through simulation using the GridSim toolkit.
The document discusses using a genetic algorithm to schedule tasks in a cloud computing environment. It aims to minimize task execution time and reduce computational costs compared to the traditional Round Robin scheduling algorithm. The proposed genetic algorithm mimics natural selection and genetics to evolve optimal task schedules. It was tested using the CloudSim simulation toolkit and results showed the genetic algorithm provided better performance than Round Robin scheduling.
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Sharing of cluster resources among multiple Workflow Applications (ijcsit)
Many computational solutions can be expressed as workflows. A cluster of processors is a shared resource among several users, hence the need for a scheduler that deals with multi-user jobs presented as workflows. The scheduler must find the number of processors to be allotted to each workflow and schedule tasks on the allotted processors. In this work, a new method to find the optimal and maximum number of processors that can be allotted to a workflow is proposed. Regression analysis is used to find the best possible way to share available processors among a suitable number of submitted workflows. An instance of a scheduler is created for each workflow, which schedules tasks on the allotted processors. Towards this end, a new framework to receive online submissions of workflows, allot processors to each workflow, and schedule tasks is proposed and evaluated using a discrete-event simulator. This space-sharing of processors among multiple workflows shows better performance than other methods found in the literature. Because of space-sharing, an instance of a scheduler must be used for each workflow within its allotted processors. Since the number of processors for each workflow is known only at runtime, a static schedule cannot be used; hence a hybrid scheduler that combines the advantages of static and dynamic schedulers is proposed. The proposed framework is thus a promising solution to scheduling multiple workflows on a cluster.
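The regression-based allotment idea can be illustrated with a small sketch: fit makespan ≈ a + b/p to observed (processors, makespan) points, then stop allotting processors once the predicted marginal saving drops below a threshold. The model form, threshold, and data are illustrative assumptions, not the paper's actual regression method:

```python
def fit_inverse_model(ps, makespans):
    """Least-squares fit of makespan ≈ a + b/p (a simple Amdahl-style model)."""
    xs = [1.0 / p for p in ps]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(makespans) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, makespans)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

def max_useful_processors(a, b, gain_threshold=1.0, p_max=64):
    """Largest p worth allotting: stop once adding a processor saves < threshold."""
    for p in range(1, p_max):
        gain = (a + b / p) - (a + b / (p + 1))  # predicted saving of one more processor
        if gain < gain_threshold:
            return p
    return p_max
```

Fitting on synthetic measurements that exactly follow makespan = 10 + 100/p recovers a ≈ 10 and b ≈ 100, and the marginal-gain rule then caps the allotment at a finite processor count.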
SURVEY ON SCHEDULING AND ALLOCATION IN HIGH LEVEL SYNTHESIS (cscpconf)
This paper presents a detailed survey of scheduling and allocation techniques in High Level Synthesis (HLS) from the research literature, along with the methodologies and techniques reported there for improving speed, (silicon) area, and power in HLS.
Load balancing functionality is crucial for optimal Grid performance and utilization. Accordingly, this paper presents a new meta-scheduling method called TunSys, inspired by the natural phenomena of heat propagation and thermal equilibrium. TunSys is based on a Grid polyhedron model with a sphere-like structure, used to ensure load balancing through a local neighborhood propagation strategy. Experimental comparisons with FCFS, DGA, and HGA show encouraging results in terms of system performance, scalability, and load balancing efficiency.
Scheduling Algorithm Based Simulator for Resource Allocation Task in Cloud Co... (IRJET Journal)
This document proposes a scheduling algorithm for allocating resources in cloud computing based on the Project Evaluation and Review Technique (PERT). It aims to address issues like starvation of lower priority tasks. The algorithm models task allocation as a directed acyclic graph and uses PERT to schedule critical and non-critical tasks, prioritizing higher priority tasks. The algorithm is evaluated against other scheduling methods and shows improvements in reducing completion time and optimizing resource allocation for all tasks.
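The PERT-style DAG analysis referred to above boils down to computing earliest-finish times over the precedence graph; the longest such chain is the critical path. A minimal sketch follows, where the example durations and edges are invented:

```python
# Illustrative task durations and precedence edges (preds[t] = tasks t waits for)
DURATION = {"A": 3, "B": 2, "C": 4, "D": 2, "E": 1}
PREDS = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["D"]}

def earliest_finish(durations, preds):
    """Earliest-finish time of every task, by memoised recursion over the DAG."""
    ef = {}
    def finish(task):
        if task not in ef:
            start = max((finish(p) for p in preds[task]), default=0)
            ef[task] = start + durations[task]
        return ef[task]
    for t in durations:
        finish(t)
    return ef

def critical_path_length(durations, preds):
    """Project makespan = the latest earliest-finish time."""
    return max(earliest_finish(durations, preds).values())
```

For this example the critical path is A → C → D → E with length 3 + 4 + 2 + 1 = 10; tasks off that path (here B) have slack and can be delayed without affecting completion time, which is the property PERT-based schedulers exploit.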
1. The document proposes a new framework for scheduling multiple DAG applications on a cluster of processors. It involves finding the optimal and maximum number of processors that can be allotted to each DAG.
2. Regression analysis is used to model the reduction in makespan for each additional processor allotted to a DAG. This information helps determine the best way to share available processors among submitted DAGs.
3. The framework receives DAG submissions, allocates processors to each DAG, and schedules tasks on the allotted processors. The goal is to maximize resource utilization and minimize overall completion time. Experiments show this approach performs better than other methods in literature.
Reinforcement learning based multi core scheduling (RLBMCS) for real time sys... (IJECEIAES)
This document summarizes a reinforcement learning based multi-core scheduling (RLBMCS) algorithm for real-time systems. The algorithm uses reinforcement learning to dynamically assign task priorities and place tasks in a multi-level feedback queue to schedule tasks across multiple processor cores. It aims to optimize metrics like CPU utilization, throughput, turnaround time, waiting time, response time and deadline meet ratio. Tasks can transition between four states - initial, objective degradation, objective progression, and objective stabilization - based on changes to a multi-objective optimization function. The scheduler acts as the agent and assigns tasks to queues/actions based on task and system states to maximize the optimization function over time.
This document presents a scheduling strategy that performs dynamic job grouping at runtime to optimize the execution of applications with many fine-grained tasks on global grids. The strategy groups individual jobs into larger "job groups" based on the processing requirements of each job, the capabilities of available grid resources, and a defined granularity size. It aims to minimize overall job execution time and cost while maximizing resource utilization. The strategy is evaluated through simulations using the GridSim toolkit, which models grid resources and application scheduling.
The document presents a novel hyper-heuristic scheduling algorithm called HHSA for cloud computing systems. HHSA aims to find better scheduling solutions than traditional rule-based algorithms by employing diversity and improvement detection operators to dynamically determine which low-level heuristic to use. The performance of HHSA is evaluated on CloudSim and Hadoop and shown to significantly reduce makespan compared to other algorithms.
A HYPER-HEURISTIC METHOD FOR SCHEDULING THE JOBS IN CLOUD ENVIRONMENT (ieijjournal)
The document proposes a hyper-heuristic method for scheduling jobs in a cloud environment. It combines two low-level heuristics - Ant Colony Optimization and Particle Swarm Optimization - and uses two operators, intensification and diversity revealing, to select the heuristics. It also uses a conditional revealing operator to identify job failures while allocating resources. The hyper-heuristic aims to achieve better results than individual heuristics in terms of lower makespan time.
A hybrid approach for scheduling applications in cloud computing environment (IJECEIAES)
Cloud computing plays an important role in our daily life, with a direct and positive impact on sharing and updating data, knowledge, storage, and scientific resources across regions. Cloud computing performance depends heavily on the job scheduling algorithms used to manage waiting queues in modern scientific applications, and researchers consider the cloud a popular platform for new applications. These scheduling algorithms help design efficient queue lists in the cloud and play a vital role in reducing waiting and processing time. A novel job scheduling algorithm is proposed in this paper to enhance cloud performance and reduce queue-waiting delay for jobs. The proposed algorithm tries to avoid some significant challenges that hinder the development of cloud applications, and a smart scheduling technique is proposed to improve processing performance in cloud applications. Experimental results show that the proposed scheme achieves outstanding improvement rates, with a reduction in job waiting time in the queue.
Deadline and Suffrage Aware Task Scheduling Approach for Cloud Environment (IRJET Journal)
The document proposes a deadline and suffrage aware task scheduling approach for cloud environments. It discusses limitations of existing approaches that can cause system imbalances. The proposed approach considers both task deadlines and priorities assigned by user votes ("suffrage") to schedule tasks. It was tested using CloudSim simulator and found to outperform the basic min-min approach in reducing completion times and improving resource utilization and provider profits while still meeting task deadlines.
QoS aware scientific application scheduling algorithm in cloud environment (Alexander Decker)
This document summarizes a research paper that proposes a scheduling algorithm for scientific applications in cloud environments. The algorithm aims to schedule tasks in workflows based on user preferences for quality of service (QoS), like time and cost. It ranks tasks and uses an UPFF function to select resources that meet the user's desired QoS. The algorithm is compared to other similar algorithms through scenarios, and results show it has better efficiency. The full paper provides more details on scientific workflows, cloud computing, related work on workflow scheduling algorithms, and defines the problem of scheduling tasks to resources while considering costs and times.
IRJET- Optimization of Completion Time through Efficient Resource Allocation ... (IRJET Journal)
This document discusses optimizing task completion time in cloud computing through efficient resource allocation using genetic and differential evolutionary algorithms. It aims to reduce makespan (completion time) by combining a genetic algorithm with differential evolutionary algorithms. The genetic algorithm uses selection, crossover and mutation to allocate tasks to resources. The outputs are then input to the differential evolutionary algorithm, which has the same operations in reverse order. This double process refines the allocation to provide the best allocation minimizing completion time. The document outlines the related work in genetic algorithms for resource allocation and task scheduling in cloud computing.
DYNAMIC TASK SCHEDULING BASED ON BURST TIME REQUIREMENT FOR CLOUD ENVIRONMENT (IJCNCJournal)
Cloud computing has an indispensable role in the modern digital scenario. The fundamental challenge of cloud systems is to accommodate user requirements which keep on varying. This dynamic cloud environment demands the necessity of complex algorithms to resolve the trouble of task allotment. The overall performance of cloud systems is rooted in the efficiency of task scheduling algorithms. The dynamic property of cloud systems makes it challenging to find an optimal solution satisfying all the evaluation metrics. The new approach is formulated on the Round Robin and the Shortest Job First algorithms. The Round Robin method reduces starvation, and the Shortest Job First decreases the average waiting time. In this work, the advantages of both algorithms are incorporated to improve the makespan of user tasks.
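A minimal sketch of how the two policies above can be combined: admit tasks to the ready queue in shortest-job-first order, then execute them round-robin with a fixed quantum. The burst values and quantum are illustrative, and this is not the paper's exact formulation:

```python
from collections import deque

def hybrid_rr_sjf(bursts, quantum=4):
    """SJF ordering of the ready queue, then round-robin execution.
    Returns the completion time of each task, indexed by original position."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])  # SJF admission
    remaining = list(bursts)
    queue = deque(order)
    clock = 0
    completion = [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])  # run one quantum (or less, to finish)
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)               # round-robin: rejoin the back of the queue
        else:
            completion[i] = clock
    return completion
```

For bursts [6, 2, 8] with quantum 4, the short task finishes at time 2 while the longer tasks alternate quanta, finishing at 12 and 16: SJF keeps average waiting time low, while the quantum prevents the long task from starving.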
A cloud computing scheduling and its evolutionary approaches (nooriasukmaningtyas)
Despite the increasing use of cloud computing technology, which offers unique features to serve its customers, exploiting its full potential is very difficult due to many problems and challenges; resource scheduling is one of them. Researchers still find it difficult to determine which scheduling algorithms are appropriate and effective and which help increase system performance. This paper provides a broad and detailed examination of resource scheduling algorithms in the cloud computing environment and highlights the advantages and disadvantages of several algorithms, to help researchers select the best algorithm to schedule a particular workload so as to satisfy quality of service, guarantee good utilization of cloud resources, and minimize makespan.
Independent tasks scheduling based on genetic (ambitlick)
An independent task scheduling algorithm based on genetic algorithm is proposed for cloud computing. The algorithm uses genetic algorithm techniques like encoding, initialization, fitness function, selection, crossover and mutation to schedule independent tasks to heterogeneous computing resources dynamically. The tasks have varying computation and memory requirements. The algorithm aims to adapt to resource heterogeneity and optimize performance under memory and deadline constraints in cloud computing.
An efficient cloudlet scheduling via bin packing in cloud computing (IJECEIAES)
In this ever-developing technological world, one way to manage and deliver services is through cloud computing, a massive web of heterogeneous autonomous systems comprising an adaptable computational design. Cloud computing can be improved through task scheduling, albeit its most challenging aspect. Better task scheduling can improve response time, reduce power consumption and processing time, enhance makespan and throughput, and increase profit by reducing operating costs and raising system reliability. This study aims to improve job scheduling by transforming the job scheduling problem into a bin packing problem. Three modified implementations of bin packing algorithms are proposed for task scheduling (MBPTS), based on the minimisation of makespan. Results from the open-source simulator CloudSim demonstrate that the proposed MBPTS is adequate to optimise load balance, reduce waiting time and makespan, and improve resource utilisation in comparison with current scheduling algorithms such as particle swarm optimisation (PSO) and first come first serve (FCFS).
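The reduction of scheduling to bin packing can be sketched as follows: treat each VM as a bin whose capacity is a trial makespan bound, pack tasks first-fit decreasing (FFD), and shrink the bound until packing fails. This is a generic FFD sketch under invented task times, not the paper's MBPTS algorithms:

```python
def first_fit_decreasing(task_times, n_vms, bound):
    """Pack tasks into n_vms 'bins' of capacity `bound` (a trial makespan).
    Returns {task: vm}, or None if the bound is infeasible under FFD."""
    loads = [0.0] * n_vms
    assignment = {}
    for task in sorted(range(len(task_times)), key=lambda i: -task_times[i]):
        vm = next((v for v in range(n_vms)
                   if loads[v] + task_times[task] <= bound), None)
        if vm is None:
            return None
        loads[vm] += task_times[task]
        assignment[task] = vm
    return assignment

def min_makespan_ffd(task_times, n_vms, step=1.0):
    """Shrink the bound until FFD fails; the smallest feasible bound
    approximates the minimum makespan achievable by this packing rule."""
    bound = float(sum(task_times))          # trivially feasible starting bound
    best = first_fit_decreasing(task_times, n_vms, bound)
    while True:
        trial = first_fit_decreasing(task_times, n_vms, bound - step)
        if trial is None:
            return bound, best
        bound -= step
        best = trial
```

Note FFD is a heuristic: for tasks [4, 3, 3, 2, 2, 2] on two VMs it settles at a bound of 9, although an optimal split ([4, 2, 2] vs [3, 3, 2]) achieves 8; the attraction is its speed and simplicity.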
Current Perspective in Task Scheduling Techniques in Cloud Computing: A Review (ijfcstjournal)
Cloud computing is a development of parallel, distributed, and grid computing which provides computing potential as a service to clients rather than a product. Clients can access software resources, valuable information, and hardware devices as a subscribed and monitored service over a network through cloud computing. Due to the large number of requests for access to resources and the service level agreements between cloud service providers and clients, several burning issues in the cloud environment, such as QoS, power, privacy and security, VM migration, and resource allocation and scheduling, need the attention of the research community. Resource allocation among multiple clients has to be ensured as per service level agreements. Several techniques have been invented and tested by the research community for the generation of optimal schedules in cloud computing. A few promising approaches, such as metaheuristic, greedy, heuristic, and genetic techniques, are applied for task scheduling in several parallel and distributed systems. This paper presents a review of scheduling proposals in the cloud environment.
TASK SCHEDULING USING AMALGAMATION OF META-HEURISTICS SWARM OPTIMIZATION ALGOR... (Journal For Research)
Cloud computing is the latest networking technology and a popular archetype for hosting applications and delivering services over the network. The foremost technology of cloud computing is virtualization, which enables building applications, dynamically sharing resources, and providing diverse services to cloud users. With virtualization, a service provider can guarantee Quality of Service to the user while achieving higher server consolidation and energy efficiency. One of the most important challenges in the cloud computing environment is the VM placement and task scheduling problem. This paper focuses on Metaheuristic Swarm Optimisation Algorithms (MSOA) to deal with the problem of VM placement and task scheduling in the cloud environment. MSOA is a simple parallel algorithm that can be applied in different ways to resolve task scheduling problems. The proposed algorithm is an amalgamation of the SO algorithm and the Cuckoo Search (CS) algorithm, called MSOACS, and is evaluated using the CloudSim simulator. The results show a reduction in makespan and an increase in utilization ratio for the proposed MSOACS algorithm compared with SOA algorithms and Randomised Allocation (RA).
A customized task scheduling in cloud using genetic algorithm (eSAT Journals)
Abstract: Cloud computing is an emerging technology in distributed computing which provides pay-per-use service according to user demand and requirements. The primary aim of cloud computing is to provide efficient access to distributed resources. Task scheduling is a critical issue in cloud computing because the cloud serves many users. An approach is applied that categorizes tasks, before they are scheduled, as Hard Real-Time Tasks (critical tasks that must be completed on time with high rates of confidentiality) and Soft Real-Time Tasks (tasks that can be completed with a certain delay and still be effective in their own way). From the results observed, the most efficient processor for a particular combination of tasks is determined, producing customized results for each of the tasks. Efficient task scheduling is highly critical for obtaining high performance in heterogeneous multiprocessor systems. Since task scheduling is an NP-hard problem, a Genetic Algorithm is used: an evolutionary algorithm that makes use of techniques inspired by evolutionary biology, such as inheritance, mutation, selection, and crossover, and is capable of producing optimal solutions. Keywords: Task Scheduling, NP-hard Problem, Genetic Algorithm, Hard Real-Time Tasks, Soft Real-Time Tasks
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Sharing of cluster resources among multiple Workflow Applicationsijcsit
Many computational solutions can be expressed as workflows. A Cluster of processors is a shared
resource among several users and hence the need for a scheduler which deals with multi-user jobs
presented as workflows. The scheduler must find the number of processors to be allotted for each workflow
and schedule tasks on allotted processors. In this work, a new method to find optimal and maximum
number of processors that can be allotted for a workflow is proposed. Regression analysis is used to find
the best possible way to share available processors, among suitable number of submitted workflows. An
instance of a scheduler is created for each workflow, which schedules tasks on the allotted processors.
Towards this end, a new framework to receive online submission of workflows, to allot processors to each
workflow and schedule tasks, is proposed and experimented using a discrete-event based simulator. This
space-sharing of processors among multiple workflows shows better performance than the other methods
found in literature. Because of space-sharing, an instance of a scheduler must be used for each workflow
within the allotted processors. Since the number of processors for each workflow is known only during
runtime, a static schedule can not be used. Hence a hybrid scheduler which tries to combine the advantages
of static and dynamic scheduler is proposed. Thus the proposed framework is a promising solution to
multiple workflows scheduling on cluster.
SURVEY ON SCHEDULING AND ALLOCATION IN HIGH LEVEL SYNTHESIScscpconf
This paper presents the detailed survey of scheduling and allocation techniques in the High Level Synthesis (HLS) presented in the research literature. It also presents the methodologies and techniques to improve the Speed, (silicon) Area and Power in High Level Synthesis, which are presented in the research literature.
Load balancing functionalities are crucial for best Grid performance and utilization. Accordingly,this paper presents a new meta-scheduling method called TunSys. It is inspired from the natural phenomenon of heat propagation and thermal equilibrium. TunSys is based on a Grid polyhedron model with a spherical like structure used to ensure load balancing through a local neighborhood propagation strategy. Furthermore, experimental results compared to FCFS, DGA and HGA show encouraging results in terms of system performance and scalability and in terms of load balancing efficiency.
Scheduling Algorithm Based Simulator for Resource Allocation Task in Cloud Co...IRJET Journal
This document proposes a scheduling algorithm for allocating resources in cloud computing based on the Project Evaluation and Review Technique (PERT). It aims to address issues like starvation of lower priority tasks. The algorithm models task allocation as a directed acyclic graph and uses PERT to schedule critical and non-critical tasks, prioritizing higher priority tasks. The algorithm is evaluated against other scheduling methods and shows improvements in reducing completion time and optimizing resource allocation for all tasks.
1. The document proposes a new framework for scheduling multiple DAG applications on a cluster of processors. It involves finding the optimal and maximum number of processors that can be allotted to each DAG.
2. Regression analysis is used to model the reduction in makespan for each additional processor allotted to a DAG. This information helps determine the best way to share available processors among submitted DAGs.
3. The framework receives DAG submissions, allocates processors to each DAG, and schedules tasks on the allotted processors. The goal is to maximize resource utilization and minimize overall completion time. Experiments show this approach performs better than other methods in literature.
Reinforcement learning based multi core scheduling (RLBMCS) for real time sys...IJECEIAES
This document summarizes a reinforcement learning based multi-core scheduling (RLBMCS) algorithm for real-time systems. The algorithm uses reinforcement learning to dynamically assign task priorities and place tasks in a multi-level feedback queue to schedule tasks across multiple processor cores. It aims to optimize metrics like CPU utilization, throughput, turnaround time, waiting time, response time and deadline meet ratio. Tasks can transition between four states - initial, objective degradation, objective progression, and objective stabilization - based on changes to a multi-objective optimization function. The scheduler acts as the agent and assigns tasks to queues/actions based on task and system states to maximize the optimization function over time.
This document presents a scheduling strategy that performs dynamic job grouping at runtime to optimize the execution of applications with many fine-grained tasks on global grids. The strategy groups individual jobs into larger "job groups" based on the processing requirements of each job, the capabilities of available grid resources, and a defined granularity size. It aims to minimize overall job execution time and cost while maximizing resource utilization. The strategy is evaluated through simulations using the GridSim toolkit, which models grid resources and application scheduling.
The document presents a novel hyper-heuristic scheduling algorithm called HHSA for cloud computing systems. HHSA aims to find better scheduling solutions than traditional rule-based algorithms by employing diversity and improvement detection operators to dynamically determine which low-level heuristic to use. The performance of HHSA is evaluated on CloudSim and Hadoop and shown to significantly reduce makespan compared to other algorithms.
A HYPER-HEURISTIC METHOD FOR SCHEDULING THE JOBS IN CLOUD ENVIRONMENT
The document proposes a hyper-heuristic method for scheduling jobs in a cloud environment. It combines two low-level heuristics - Ant Colony Optimization and Particle Swarm Optimization - and uses two operators, intensification and diversity revealing, to select the heuristics. It also uses a conditional revealing operator to identify job failures while allocating resources. The hyper-heuristic aims to achieve better results than individual heuristics in terms of lower makespan time.
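The selection logic described, applying one low-level heuristic until an improvement-detection operator signals stagnation and then diversifying to another, can be sketched generically. All names, the patience threshold, and the round-robin switching rule below are illustrative assumptions rather than details from the paper:

```python
import random

def hyper_heuristic(initial, heuristics, cost, iters=200, patience=10, seed=0):
    """Minimal hyper-heuristic loop: apply the currently selected low-level
    heuristic; if no improvement is seen for `patience` steps (improvement
    detection), switch to another heuristic (diversification)."""
    rng = random.Random(seed)
    current = initial
    best_cost = cost(current)
    h = 0                      # index of the active low-level heuristic
    stale = 0                  # steps since the last improvement
    for _ in range(iters):
        candidate = heuristics[h](current, rng)
        c = cost(candidate)
        if c < best_cost:
            current, best_cost, stale = candidate, c, 0
        else:
            stale += 1
            if stale >= patience:              # stuck: diversify
                h = (h + 1) % len(heuristics)
                stale = 0
    return current, best_cost
```

Here ACO and PSO would be plugged in as the entries of `heuristics`, each wrapped as a function that perturbs the current schedule.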
A hybrid approach for scheduling applications in cloud computing environment
Cloud computing plays an important role in daily life, with a direct, positive impact on sharing and updating data, knowledge, storage, and scientific resources across regions. Cloud computing performance depends heavily on the job scheduling algorithms used to manage queue waiting in modern scientific applications, and researchers consider cloud computing a popular platform for new applications. These scheduling algorithms help design efficient queue lists in the cloud and play a vital role in reducing waiting and processing time. A novel job scheduling algorithm is proposed in this paper to enhance cloud computing performance and reduce queue-waiting delay for jobs. The proposed algorithm tries to avoid some significant challenges that hinder the development of cloud computing applications; a smart scheduling technique is proposed to improve processing performance in cloud applications. Experimental results show that the proposed job scheduling algorithm achieves outstanding improvement rates, with a reduction in job waiting time in the queue.
Deadline and Suffrage Aware Task Scheduling Approach for Cloud Environment
The document proposes a deadline and suffrage aware task scheduling approach for cloud environments. It discusses limitations of existing approaches that can cause system imbalances. The proposed approach considers both task deadlines and priorities assigned by user votes ("suffrage") to schedule tasks. It was tested using CloudSim simulator and found to outperform the basic min-min approach in reducing completion times and improving resource utilization and provider profits while still meeting task deadlines.
QoS aware scientific application scheduling algorithm in cloud environment
This document summarizes a research paper that proposes a scheduling algorithm for scientific applications in cloud environments. The algorithm aims to schedule tasks in workflows based on user preferences for quality of service (QoS), like time and cost. It ranks tasks and uses an UPFF function to select resources that meet the user's desired QoS. The algorithm is compared to other similar algorithms through scenarios, and results show it has better efficiency. The full paper provides more details on scientific workflows, cloud computing, related work on workflow scheduling algorithms, and defines the problem of scheduling tasks to resources while considering costs and times.
IRJET- Optimization of Completion Time through Efficient Resource Allocation ...
This document discusses optimizing task completion time in cloud computing through efficient resource allocation using genetic and differential evolutionary algorithms. It aims to reduce makespan (completion time) by combining a genetic algorithm with differential evolutionary algorithms. The genetic algorithm uses selection, crossover and mutation to allocate tasks to resources. The outputs are then input to the differential evolutionary algorithm, which has the same operations in reverse order. This double process refines the allocation to provide the best allocation minimizing completion time. The document outlines the related work in genetic algorithms for resource allocation and task scheduling in cloud computing.
DYNAMIC TASK SCHEDULING BASED ON BURST TIME REQUIREMENT FOR CLOUD ENVIRONMENT
Cloud computing has an indispensable role in the modern digital scenario. The fundamental challenge of cloud systems is to accommodate user requirements, which keep varying. This dynamic environment demands sophisticated algorithms to solve the task-allotment problem, and the overall performance of cloud systems is rooted in the efficiency of their task scheduling algorithms. The dynamic nature of cloud systems makes it challenging to find an optimal solution satisfying all evaluation metrics. The new approach is built on the Round Robin and Shortest Job First algorithms: Round Robin reduces starvation, while Shortest Job First decreases the average waiting time. In this work, the advantages of both algorithms are combined to improve the makespan of user tasks.
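The combination of the two policies can be sketched as follows, assuming the simplest reading of the approach: tasks are first ordered by burst time (the SJF part) and then served in fixed time slices (the RR part). Function and parameter names here are my own, not the paper's:

```python
from collections import deque

def hybrid_rr_sjf(burst_times, quantum):
    """Schedule tasks in Shortest-Job-First order, then serve them
    Round-Robin with a fixed time quantum. Returns completion times."""
    # Sort task ids by burst time (SJF reduces average waiting time).
    order = sorted(range(len(burst_times)), key=lambda t: burst_times[t])
    remaining = list(burst_times)
    queue = deque(order)
    clock = 0
    completion = {}
    while queue:
        task = queue.popleft()
        run = min(quantum, remaining[task])   # RR slice avoids starvation
        clock += run
        remaining[task] -= run
        if remaining[task] > 0:
            queue.append(task)                # unfinished: back of the queue
        else:
            completion[task] = clock
    return completion
```

For example, `hybrid_rr_sjf([6, 2, 4], quantum=2)` completes the shortest task first while still interleaving the longer ones.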
A cloud computing scheduling and its evolutionary approaches
Despite the increasing use of cloud computing technology, which offers unique features to serve its customers, exploiting its full potential is difficult due to many problems and challenges; resource scheduling is one of these challenges. Researchers still find it difficult to determine which scheduling algorithms are appropriate and effective at increasing system performance. This paper provides a broad and detailed examination of resource scheduling algorithms in the cloud computing environment and highlights the advantages and disadvantages of several algorithms, to help researchers select the best algorithm for scheduling a particular workload so as to satisfy quality-of-service requirements, guarantee good utilization of cloud resources, and minimize the makespan.
Independent tasks scheduling based on genetic
An independent task scheduling algorithm based on genetic algorithm is proposed for cloud computing. The algorithm uses genetic algorithm techniques like encoding, initialization, fitness function, selection, crossover and mutation to schedule independent tasks to heterogeneous computing resources dynamically. The tasks have varying computation and memory requirements. The algorithm aims to adapt to resource heterogeneity and optimize performance under memory and deadline constraints in cloud computing.
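A minimal sketch of such a GA-based scheduler, under the common encoding where gene i holds the VM assigned to task i and fitness is the makespan. Population size, rates, and operator choices below are illustrative assumptions, and the memory and deadline constraints mentioned above are omitted for brevity:

```python
import random

def makespan(chrom, task_len, vm_speed):
    """Fitness: completion time of the busiest VM under this mapping."""
    load = [0.0] * len(vm_speed)
    for task, vm in enumerate(chrom):
        load[vm] += task_len[task] / vm_speed[vm]
    return max(load)

def ga_schedule(task_len, vm_speed, pop=30, gens=100, pm=0.1, seed=0):
    rng = random.Random(seed)
    n, m = len(task_len), len(vm_speed)
    # Chromosome encoding: chrom[i] is the VM assigned to task i.
    popn = [[rng.randrange(m) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            # Tournament selection of two parents.
            p1 = min(rng.sample(popn, 2),
                     key=lambda c: makespan(c, task_len, vm_speed))
            p2 = min(rng.sample(popn, 2),
                     key=lambda c: makespan(c, task_len, vm_speed))
            cut = rng.randrange(1, n)            # single-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n):                   # mutation: reassign a task
                if rng.random() < pm:
                    child[i] = rng.randrange(m)
            nxt.append(child)
        popn = nxt
    return min(popn, key=lambda c: makespan(c, task_len, vm_speed))
```

A penalty term for violated memory or deadline constraints could be added to `makespan` to recover the constrained variant.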
An efficient cloudlet scheduling via bin packing in cloud computing
In this ever-developing technological world, one way to manage and deliver services is through cloud computing, a massive web of heterogeneous autonomous systems that comprise an adaptable computational design. Cloud computing can be improved through task scheduling, albeit the most challenging aspect to improve. Better task scheduling can improve response time, reduce power consumption and processing time, enhance makespan and throughput, and increase profit by reducing operating costs and raising system reliability. This study aims to improve job scheduling by transforming the job scheduling problem into a bin packing problem. Three modified implementations of bin packing algorithms were proposed for task scheduling (MBPTS) based on minimisation of makespan. The results, based on the open-source simulator CloudSim, demonstrated that the proposed MBPTS was able to optimise load balance, reduce waiting time and makespan, and improve resource utilisation in comparison to current scheduling algorithms such as particle swarm optimisation (PSO) and first come first serve (FCFS).
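The transfer of task scheduling into bin packing can be illustrated with First-Fit Decreasing, a classic bin packing heuristic. The summary does not specify which three modifications MBPTS uses, so this is only an assumed baseline where each bin stands for one VM's time budget:

```python
def first_fit_decreasing(task_times, capacity):
    """Pack task execution times into the fewest 'bins' (VM time budgets)
    of a given capacity using First-Fit Decreasing."""
    bins = []  # each entry is the summed load placed on one VM
    for t in sorted(task_times, reverse=True):
        for i, load in enumerate(bins):
            if load + t <= capacity:   # first bin with room wins
                bins[i] += t
                break
        else:
            bins.append(t)             # no bin fits: open a new bin/VM
    return bins
```

Packing into few bins whose capacity is close to the ideal makespan (total load divided by VM count) corresponds to a balanced, makespan-minimizing assignment.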
Current Perspective in Task Scheduling Techniques in Cloud Computing: A Review
Cloud computing is a development of parallel, distributed and grid computing which provides computing potential as a service to clients rather than a product. Clients can access software resources, valuable information and hardware devices as a subscribed and monitored service over a network through cloud computing. Due to the large number of requests for access to resources and the service level agreements between cloud service providers and clients, several pressing issues in the cloud environment, such as QoS, power, privacy and security, VM migration, and resource allocation and scheduling, need the attention of the research community. Resource allocation among multiple clients has to be ensured as per service level agreements. Several techniques have been invented and tested by the research community for the generation of optimal schedules in cloud computing. A few promising approaches, such as metaheuristic, greedy, heuristic, and genetic techniques, are applied to task scheduling in several parallel and distributed systems. This paper presents a review of scheduling proposals in the cloud environment.
TASK SCHEDULING USING AMALGAMATION OF METAHEURISTIC SWARM OPTIMIZATION ALGOR...
Cloud computing is the latest networking technology and a popular archetype for hosting applications and delivering services over the network. The foremost technology of cloud computing is virtualization, which enables building applications, dynamically sharing resources, and providing diverse services to cloud users. With virtualization, a service provider can guarantee quality of service to the user while achieving higher server consolidation and energy efficiency. One of the most important challenges in the cloud computing environment is the VM placement and task scheduling problem. This paper focuses on a Metaheuristic Swarm Optimisation Algorithm (MSOA) that deals with the problem of VM placement and task scheduling in the cloud environment. The MSOA is a simple parallel algorithm that can be applied in different ways to resolve task scheduling problems. The proposed algorithm is an amalgamation of the SO algorithm and the Cuckoo Search (CS) algorithm, called MSOACS, and is evaluated using the CloudSim simulator. The results show a reduction in makespan and an increase in utilization ratio for the proposed MSOACS algorithm compared with SOA algorithms and Randomised Allocation (RA).
A customized task scheduling in cloud using genetic algorithm
Abstract: Cloud computing is an emerging technology in distributed computing which provides pay-per-use service according to user demand and requirements. The primary aim of cloud computing is to provide efficient access to distributed resources. Task scheduling is a critical issue in cloud computing because the cloud serves many users. An approach is applied that categorizes tasks, before they are scheduled, as Hard Real-Time Tasks (critical tasks that need to be completed on time with high rates of confidentiality) and Soft Real-Time Tasks (tasks that can be completed with a certain delay and still be effective in their own way). From the results, the most efficient processor for a particular combination of tasks is determined, producing customized results for each task. Efficient task scheduling is highly critical for obtaining high performance in heterogeneous multiprocessor systems. Since task scheduling is an NP-hard problem, the Genetic Algorithm, an evolutionary algorithm that makes use of techniques inspired by evolutionary biology such as inheritance, mutation, selection and crossover, is capable of producing optimal solutions. Keywords: Task Scheduling, NP-hard problem, Genetic Algorithm, Hard Real-Time Tasks, Soft Real-Time Tasks
Task Scheduling using Hybrid Algorithm in Cloud Computing Environments
This document summarizes a proposed hybrid task scheduling algorithm for cloud computing environments called PSOCS, which combines the Particle Swarm Optimization (PSO) and Cuckoo Search (CS) algorithms. PSOCS aims to minimize task completion time (makespan) and improve resource utilization. The paper describes the PSO and CS algorithms individually, then defines the proposed PSOCS algorithm. Evaluated in a CloudSim simulation, PSOCS reduces makespan and increases utilization compared to PSO and random allocation algorithms.
Multi-objective tasks scheduling using bee colony algorithm in cloud computing
This document presents a new approach for scheduling multi-objective tasks in cloud computing using an artificial bee colony algorithm. The proposed algorithm aims to optimize response time, schedule length ratio, and efficiency. It models tasks as bees that are assigned to processing elements in data centers to minimize completion time while balancing resource loads. The results showed the bee colony algorithm achieved better performance than other scheduling methods in cloud computing environments.
Task scheduling is a key process in large-scale distributed systems like cloud computing infrastructures and can have a significant impact on system performance. The problem is NP-hard for reasons such as heterogeneous and dynamic features and dependencies among the requests. Here, we propose a bi-objective method called DWSGA to obtain a proper solution for allocating requests to resources. The purpose of this algorithm is to obtain a good solution quickly, through goal-oriented operations. First, it builds a good initial population in a special way that uses bidirectional task prioritization. The algorithm then moves toward the most appropriate possible solution in a conscious manner by focusing on optimizing the makespan while achieving a good distribution of workload on resources, using parameters that are effective in such systems. Experiments indicate that DWSGA improves the results, with respect to the mentioned objectives, as the number of tasks in the application graph increases; the results are compared with other studied algorithms.
This document discusses using genetic algorithms for job scheduling in cloud computing environments. It begins with an introduction to cloud computing and genetic algorithms. It then discusses the challenges of genetic scheduling, including reducing makespan time, uniform load balancing, and minimizing user cost. It reviews various genetic algorithm approaches that have been proposed to address these challenges, such as approaches aimed at reducing makespan time alone, reducing cost alone, or reducing both cost and makespan time simultaneously. The document concludes that no single algorithm solves all problems, and that combining algorithms can better satisfy complex constraints in job scheduling.
The Optimization-based Approaches for Task Scheduling to Enhance the Resource...
This document summarizes optimization-based approaches for task scheduling in cloud computing. It discusses how task scheduling is an NP-hard problem due to the large number of possible solutions. Optimization techniques can help obtain optimal scheduling to improve resource utilization and reduce task completion time. The document reviews several existing task scheduling strategies like fuzzy theory and machine learning approaches. It analyzes optimization-based task scheduling methods based on metrics like execution time, cost, energy usage, and overhead. Swarm intelligence and bio-inspired algorithms are discussed as meta-heuristic approaches to distributed task scheduling in cloud computing.
IRJET- Time and Resource Efficient Task Scheduling in Cloud Computing Environ...
This document summarizes a research paper that proposes a Task Based Allocation (TBA) algorithm to efficiently schedule tasks in a cloud computing environment. The algorithm aims to minimize makespan (completion time of all tasks) and maximize resource utilization. It first generates an Expected Time to Complete (ETC) matrix that estimates the time each task will take on different virtual machines. It then sorts tasks by length and allocates each task to the VM that minimizes its completion time, updating the VM wait times. The algorithm is evaluated using CloudSim simulation and is shown to reduce makespan, execution time and costs compared to random and first-come, first-served scheduling approaches.
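The allocation steps in the summary can be sketched directly. The longest-first sort order and the names below are assumptions layered on what the summary states (an ETC matrix, length-sorted tasks, and per-VM wait times):

```python
def tba_schedule(task_lengths, vm_mips):
    """Task Based Allocation sketch: build an ETC matrix, sort tasks by
    length, and assign each task to the VM that gives the earliest
    completion time given the current VM wait times."""
    # ETC[i][j]: expected time to complete task i on VM j
    etc = [[length / mips for mips in vm_mips] for length in task_lengths]
    wait = [0.0] * len(vm_mips)           # accumulated wait time per VM
    plan = {}
    for i in sorted(range(len(task_lengths)),
                    key=lambda t: task_lengths[t], reverse=True):
        # Completion time on VM j = current wait + ETC entry.
        j = min(range(len(vm_mips)), key=lambda v: wait[v] + etc[i][v])
        plan[i] = j
        wait[j] += etc[i][j]
    return plan, max(wait)                # task->VM mapping and makespan
```

Each assignment greedily minimizes that task's completion time given the wait times accumulated so far.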
A Review on Scheduling in Cloud Computing
Cloud computing provides software, infrastructure and platform as a service, based on client requirements, on a pay-per-use basis. The main goal of scheduling is to achieve accuracy and correctness in task completion. Scheduling in the cloud environment enables the various cloud services to help framework implementation. This survey covers a comprehensive range of scheduling algorithms in the cloud computing environment, including workflow scheduling and grid scheduling, and gives an elaborate idea of grid, cloud, and workflow scheduling aimed at minimizing energy cost and improving the efficiency and throughput of the system.
5 Reasons cheap WordPress hosting is costing you more | Reversed OutReversed Out Creative
Cheap WordPress hosting may seem budget-friendly, but it often comes with hidden costs like poor performance, security risks, and limited support. This article breaks down the true impact of low-cost hosting and why investing wisely can benefit your website in the long run.
This presentation explores the collaboration between advanced cybersecurity tools and human expertise. While automated tools enhance vulnerability detection, skilled professionals are essential for understanding complex attacks and adapting to emerging threats. Combining both elements strengthens an organization's defense, improving overall cybersecurity resilience.
Essential Tech Stack for Effective Shopify Dropshipping Integration.pdfCartCoders
Looking to connect AliExpress or other platforms with your Shopify store? Our Shopify Dropshipping Integration service helps automate orders, manage inventory, and improve delivery time. Start syncing your suppliers and scale your dropshipping business.
Network Efficiency: The LLM Advantage
In today's complex IT environment, network professionals face rising demands for efficiency and reliability. This presentation explores how Large Language Models (LLMs) are transforming network management by automating routine tasks, enhancing threat detection, and optimizing performance.
We demonstrate how LLMs streamline operations through intelligent log analysis, dynamic performance tuning, and natural language query handling a turning questions like Why is my network slow into actionable insights. Real-world examples will show how LLMs can function as expert assistants, delivering rapid, precise recommendations.
Attendees will gain practical knowledge on integrating LLMs into network workflows, unlocking the power of generative AI and machine learning to build smarter, more proactive, and self-optimizing network infrastructures.
This presentation discusses the deployment of an IPv6 Mostly network environment at APRICOT conferences, highlighting its core concept, configuration examples, and operational insights. Key challenges encountered during implementation and lessons learned will be also discussed, offering practical guidance for future IPv6 network deployments.
This is an introduction to the Internet Service Providers and Connectivity Providers of ICANN. The Internet Corporation for Assigned Names and Numbers is a global multistakeholder group and nonprofit organization, which ensures the secure and stable operations of the Internet unique identifiers system.
As networks increasingly demand faster convergence and enhanced resilience, Segment Routing over MPLS (SR-MPLS) has emerged as a robust framework to simplify traffic engineering and improve failure recovery. This technical session will delve into Fast Reroute (FRR) mechanisms within SR-MPLS, with a focus on Topology Independent Loop-Free Alternate (TI-LFA). I will explore how TI-LFA enables sub-50ms protection against link and node failures while ensuring optimal coverage across any topology.
The talk will also address key challenges like microloop formation during convergence and discuss practical strategies for microloop prevention using SR policies and ordered FIB updates. Through real-world examples and lab-tested topologies, attendees will gain a deeper understanding of how to design and deploy scalable, fast-converging SR-MPLS networks with high availability and minimal service disruption.
The session will cover some recent DDoS trend and vulnerable ports in BD. It recommends some strategies to protect ISPs and network operators against DDoS attacks while dealing as a victim as well as as part of the attack.
The operational environments of ISPs and service providers—particularly Network Operations Centers (NOCs) and support teams—are increasingly overwhelmed by repetitive communication, documentation, and content creation tasks. At BdREN, we encountered similar challenges while managing high volumes of client emails, drafting incident communications, and facilitating digital learning across our network. In response, we developed AI-powered tools not only for the education sector but also to streamline our internal operations—challenges shared by many ISPs.
This talk presents a practical and ISP-relevant perspective on how BdREN is integrating Artificial Intelligence to automate repetitive yet critical tasks. Key use cases include:
An AI-based email assistant that intelligently generates replies, summarizes conversations, and drafts new messages to support overloaded NOC and helpdesk teams.
A quiz generation system that transforms documents into ready-to-use assessments in seconds, addressing one of the most time-consuming tasks in training and academic operations.
In addition to showcasing these innovations, the session will outline our roadmap for AI-assisted assessments, content analytics, and collaboration opportunities with ISPs and research networks alike. Whether you're managing clients, students, or support workflows, these solutions offer replicable and scalable models for operational efficiency.
The session includes live demonstrations and real-world examples aimed at inspiring local ISPs to explore how AI can be embedded into everyday technical workflows—beyond the buzzwords.
Paper: QFS: World Game (s) Great Redesign.pdfSteven McGee
THESIS: All artifacts internet, programmable net of money are formed using:
1) Epoch time cycle temporal intervals ex: created by silicon microchip oscillations
2) Syntax parsed, processed during epoch time cycle epoch temporal intervals
3) All things internet, internet of money, blockchains (time chains) are formed with unicast, multicast, anycast protocols. workflow logic, procedures, described by process filters, propagated by wave form motion described by nature, natural law I.e., Tesla describe electro - gravity - magnetic wave forms (standing, scaler)
DATA COMMUNICATION components, modes of transmission & communication devices ...samina khan
This presentation offers a clear and structured exploration of core concepts in Data Communication, covering everything from the building blocks of communication systems to the various modes and devices involved in the process.
It begins with an overview of the key components that make communication possible, followed by an explanation of how data flows through different transmission modes. The presentation also highlights the role of essential networking devices in facilitating efficient communication across networks.
Designed for clarity and comprehension, the slides use visuals and real-world analogies to help learners grasp technical ideas intuitively. Whether you're a student new to computer systems or an instructor looking for ready-to-use teaching material, this presentation provides a solid foundation in:
The primary elements involved in any data communication setup
How data is transmitted across channels under different modes
The function and purpose of devices like hubs, switches, routers, and gateways
A comparison between asynchronous and synchronous transmission styles
How different components and methods interact to ensure reliable data exchange
Perfect for introductory lessons or revision sessions, this resource simplifies complex networking concepts without compromising depth.
The technology and internet industry is a fascinating, fast-paced environment that drives innovation and shapes the world. However, behind the glamorous fasade of startups, tech giants, and digital pioneers, there is often a reality filled with immense pressure, high expectations, and mental health challenges.
In my presentation, I want to share my personal story of an honest look at my life and career in the tech industry. I will highlight the challenges I've encountered, the problems I've faced firsthand, and the impact workplace culture has had on my mental health. It's not just about the difficulties but also about potential solutions and ways to create a more people-friendly industry.
Every individual experiences their career in this industry differently. However, there are recurring patterns and systemic issues that affect many of us. With my presentation, I aim to raise awareness, encourage reflection, and spark discussions: What is wrong? What is working well? Where can we collaborate to create positive change?
As part of this initiative/ presentation, I will also introduce my passion project "Open Ears" a platform dedicated to active listening and open exchange within the tech industry. Through this initiative, I hope to encourage colleagues to share their experiences, seek support, and collectively contribute to a healthier workplace culture.
My goal is not only to provide a personal perspective but also to initiate a dialogue about the urgent need for change in our industry.
The Domain Name System (DNS) is a critical part of the Internet infrastructure. DNS translates the domain names of websites and email addresses that people can remember to the IP addresses that computers can understand. It is a large distributed system with many moving parts.
KINDNS is simple framework for stable and secure DNS operations. The KINDNS guidelines are current best practices for DNS operators to improve the security and reliability of their operations.
Cyber threats are becoming more complex for modern businesses, necessitating the use of advanced security solutions that go beyond firewalling. In order to accomplish Next-Generation Enterprise Firewalling with strong threat detection, deep packet inspection, and adaptive policy enforcement, this proposal investigates the combination of OPNsense, Suricata, and Zenarmor. In order to show how this integrated strategy improves enterprise security posture against changing cyber threats, I describe deployment methodologies, performance optimization, and real-world use cases. The results demonstrate the increased protection capabilities, scalability, and affordability of utilizing OPNsense in conjunction with Suricata and Zenarmor for next-generation firewall deployments.
Concept and purpose of community diagnosisfelixsakwa55
Objectives of the session
• By the end of this class, you will be able to:
• Describe the concept and purpose of community
diagnosis
• Explain how to plan a community diagnosis
survey
• Describe how to develop and pre-test tools for
data collection
• Explain how to execute a survey
• State how to write and disseminate a community
diagnosis report and plan community action
Concept and Purpose of Community
Diagnosis
Introduction
When you care for an individual patient, you make
a patient diagnosis and organize the appropriate
treatment.
Similarly, in order to look after a community, you
must make a community diagnosis and organise
appropriate community health programmes.
It is therefore important for you to learn the
approaches to community diagnosis and what its
purpose is, and how it differs from patient
diagnosis.
The Concept of Community
Diagnosis
• Community diagnosis is a process through
which health workers together with members
of the community identify the community’s
priority health problems, and together make
plans of action and implement them.
• It points out where the health services should
put their main efforts and resources.
The Concept of Community
Diagnosis…
• The community diagnosis concept therefore
stresses that the community must identify its
problems, prioritize them and draw a plan of
action to address the identified problems.
• The community then implements this plan to resolve
the problems.
• It emphasizes total community involvement. This is
because the community knows its problems and
priorities better than the health worker.
• When they actively participate in solving
these issues, they become bound by the
decisions they make and feel motivated to
see the plans through.
Community diagnosis…
• In community diagnosis, you follow the
same basic steps as the ones you do in
patient diagnosis.
• The only difference is that the amount of
data is much greater and requires more
lengthy analysis and processing.
• In community diagnosis you start by
collecting basic information.
Community diagnosis…
• You collect information about the following:
Local people and their environment
The number of people and their distribution
The diseases the local people suffer from
The organization of local health services
Community diagnosis…
You then make a community diagnosis by
identifying the main health problems and the
reasons for them.
Identify priority health problems and plan a
community health programme or treatment to
solve these problems.
Importance of selecting priority health needs/
problems.
This is because health centres often have limited
resources and many demands on those resources.
There are simply not enough resources to solve all
the health problems in the community.
Therefore, you as the health care worker together
with the community must select priorities for
health action.
• It is important to choose only those problems
that the
Concept and Purpose of Community Diagnosis — felixsakwa55
A Modified GA-based Workflow Scheduling for Cloud Computing Environment
Safwat A. Hamad
Department of Computer Science,
Faculty of Computers & Information, Cairo University,
Cairo, Egypt
[email protected]
Fatma A. Omara
Department of Computer Science,
Faculty of Computers & Information, Cairo University,
Cairo, Egypt
[email protected]
Abstract— Cloud computing has become an important topic in the
area of high-performance distributed computing. Task scheduling is
considered one of the most significant issues in Cloud computing,
where the user pays for resource usage based on time. Therefore,
distributing the Cloud resources among the users' applications
should maximize resource utilization and minimize task execution
time. The goal of task scheduling is to assign tasks to
appropriate resources so as to optimize one or more performance
parameters (e.g., completion time, cost, and resource
utilization). In addition, scheduling belongs to the category of
problems known as NP-complete. Therefore, heuristic algorithms can
be applied to solve this problem. In this paper, an enhanced
dependent task scheduling algorithm based on the Genetic Algorithm
(DTGA) is introduced for mapping and executing an application's
tasks. The aim of the proposed algorithm is to minimize the
completion time. Its performance has been evaluated using the
WorkflowSim toolkit and the Standard Task Graph Set (STG)
benchmark.
Keywords—Cloud Computing; Task Scheduling; Genetic
Algorithm; Directed Acyclic Graph; Optimization Algorithm
I. INTRODUCTION
Cloud computing is an emerging technology that has gained great
popularity in recent years; it grants users high scalability,
reliability, security, cost effectiveness, group collaboration,
and ease of access to various applications [1]. In addition,
Cloud computing provides dynamic services such as Software as a
Service (SaaS), Platform as a Service (PaaS), and Infrastructure
as a Service (IaaS) via the internet [2].
Cloud computing faces several challenges (e.g., security,
performance, and resource management), and task scheduling is
considered one of the main challenges related to resource
management [3]. In general, task scheduling is the problem of
assigning tasks to machines to complete their work. In the Cloud
computing environment, scheduling means that a large number of
tasks are executed on the available resources in a suitable way
according to several objectives (e.g., minimizing the completion
time, minimizing the cost of executing the tasks, and maximizing
resource utilization) [3]. Therefore, task scheduling in the
Cloud computing environment is considered one of the main factors
affecting the reliability and performance of Cloud services [2].
Generally, the problem of assigning tasks to apparently unlimited
computing resources in the Cloud computing environment is
NP-complete. In the task scheduling process, the user's jobs are
submitted to the Cloud scheduler. In turn, the Cloud scheduler
queries the Cloud information service about the status of the
available resources, and then allocates the various tasks to
different resources (i.e., virtual machines) as per the task
requirements [2]. A good task scheduler must assign the virtual
machines in an optimal way [3].
Task scheduling is therefore considered a key challenge in the
Cloud computing environment. Researchers try to apply heuristic
methods to solve this problem and obtain an optimal solution [4].
Meta-heuristic techniques deal with this problem by providing
near-optimal solutions, and they have gained huge popularity in
past years due to their efficiency and effectiveness in solving
large and complex problems. There are many meta-heuristic
algorithms (e.g., the Genetic Algorithm (GA), Particle Swarm
Optimization (PSO), and Ant Colony Optimization (ACO)) [5].
Further, task scheduling algorithms differ based on the
dependency among the tasks to be scheduled. In dependent task
scheduling, a precedence order exists among the tasks, where any
task can only be scheduled after all of its parent tasks have
finished executing. Otherwise, the tasks are independent of each
other and can be scheduled in any sequence. Dependent task
scheduling is known as workflow scheduling, and independent task
scheduling is known as independent scheduling [5].
The aim of this paper is to develop a workflow scheduling
algorithm in the Cloud computing environment based on
Genetic Algorithm for allocating and executing dependent
tasks to improve task completion time.
The rest of the paper is organized as follows: in Section 2,
related work is discussed. In Section 3, a model of the task
scheduling problem is described. In Section 4, the principles of
the modified GA-based dependent task scheduling are described.
The configuration of the WorkflowSim simulator, the
implementation of the proposed Genetic Algorithm, and the
performance evaluation are discussed in Section 5. Finally, the
conclusion and future work are given in Section 6.
International Journal of Computer Science and Information Security (IJCSIS),
Vol. 15, No. 8, August 2017
https://sites.google.com/site/ijcsis/
ISSN 1947-5500
II. RELATED WORK
In recent years, the problem of task scheduling in the Cloud
computing environment has caught the attention of researchers.
One family of solutions to the task scheduling problem uses
meta-heuristic algorithms. Task scheduling in the Cloud computing
environment is considered a critical issue when different factors
are considered, such as the completion time, the total cost of
executing the user's tasks, resource utilization, power
consumption, and fault tolerance.
In this paper, a modified Genetic Algorithm is introduced to
schedule dependent tasks.
In many studies, different GA-based task scheduling algorithms
have been introduced; each proposes some modifications to the
default Genetic Algorithm. In [6], a fixed-length bit string
representation is used, where solutions are encoded as binary
strings of fixed length. Other approaches use a direct
representation [7]. A permutation-based representation uses a 2D
vector to represent a chromosome, where one dimension represents
the resources and the other shows the order of the tasks on each
resource [8-10]. In addition, a tree representation has been used
to map the relationship between virtual machines and physical
machines [11, 12].
On the other hand, in the basic Genetic Algorithm the initial
population is generated randomly. Therefore, some approaches have
been applied to improve the results and increase the convergence
of the Genetic Algorithm. In [9], the Minimum Execution Time
(MET) and Min-min heuristics are used to generate the initial
population. The Genetic Algorithm has also been used to solve the
workflow scheduling problem, where the precedence of the tasks is
considered during the generation of the initial population.
Further, crossover and mutation are among the main steps of the
Genetic Algorithm, so modifications of the basic crossover have
been applied to enhance its performance. In [3], a new crossover
model was used that differs from the crossover of the default
Genetic Algorithm: the two chromosomes selected for the crossover
process, which generate two offspring, are also considered as
offspring themselves. After the offspring are produced, the two
best of them are chosen. In [12], the crossover and mutation
operators were adapted to a tree representation of the
chromosome.
On the other side, many studies have employed the Genetic
Algorithm to solve the task scheduling problem so as to minimize
the makespan, improve the load balance among virtual machines,
minimize the total cost of executing the tasks, maximize resource
utilization, and save energy. In [13], an immune Genetic
Algorithm was proposed for workflow scheduling to minimize the
makespan and cost; it considered five objectives and solved the
constraint satisfaction problem associated with the task
scheduling constraints. A task scheduling algorithm based on the
Genetic Algorithm was proposed with the aim of minimizing the
makespan and improving the load balance among virtual machines
[7]. The Genetic Algorithm has also been used to achieve good
load balance among virtual machines [6, 8, 14, 15].
In [16], an algorithm combining the Genetic Algorithm and Fuzzy
Theory, called FUGA, was introduced to minimize the makespan and
cost and to reduce the load imbalance in the Cloud during task
scheduling. Fuzzy Theory is used to compute the fitness value of
a solution and in the crossover operation.
Energy efficiency is considered one of the most important
parameters of the task scheduling process, so approaches based on
the Genetic Algorithm have been introduced to reduce the energy
consumption of datacenters. In [17], an energy-aware task
scheduling algorithm was presented based on a shadow-price-guided
Genetic Algorithm (SGA), where the shadow price is used within
the Genetic Algorithm to improve a solution's fitness value; in
addition, the gene encoding was modified to increase the
probability of producing better solutions. In [18], a
Pareto-solution-based Genetic Algorithm approach for workflow
scheduling was introduced to optimize multiple objectives.
In addition, many studies have been proposed using other
meta-heuristic approaches such as Particle Swarm Optimization
(PSO), Cuckoo Search (CS), and Tabu Search. In [19, 20], the
authors introduced a modified task scheduling algorithm that
merges the PSO and Cuckoo Search algorithms to minimize the
execution time as well as maximize resource utilization. Two
hybrid task scheduling algorithms have been introduced to enhance
the default PSO algorithm: a Best-Fit algorithm is used to
initialize the population, instead of the random initialization
of the default PSO, and the Tabu Search algorithm is used to
improve the local search by avoiding the traps of local optima
that can occur in the default PSO algorithm [21, 22]. Further, a
modified PSO algorithm has been proposed to allocate dependent
tasks to the available resources so as to minimize the execution
time as well as the computation cost [23].
III. MODEL OF THE TASK SCHEDULING PROBLEM
The model of task scheduling for Cloud computing in this work is
defined as follows. The Cloud resources are provided to the user
as a number of heterogeneous Virtual Machines (VMs) through
virtualization technology. The user's application is submitted to
the Cloud service center and is split into several tasks with
known data dependences. Generally, dependent task scheduling is
defined over a Directed Acyclic Graph (DAG) composed of nodes
(n1, n2, ..., nN). Each node in the graph represents a task that
must be executed sequentially, without preemption, on the same
VM. A node of the DAG with no parent node is called an entry
node, and a node with no child node is called an exit node [24].
In addition, the graph has directed edges E representing a
partial order among the task nodes. The partial order introduces
a precedence-constrained DAG and implies that if ni → nj, then nj
is a child of ni and cannot start until its parent ni finishes.
After all task nodes have been scheduled, the schedule length is
defined as the Completion Time of the last task. The objective of
the task scheduling problem is to find an optimal
assignment of the tasks to the available VMs that minimizes the
completion time while the precedence constraints are preserved.
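The DAG model above can be sketched as a small data structure. The edge set below is a hypothetical seven-task graph (the exact edges of Figure 2 are not listed in the text), used only to illustrate entry/exit node detection:

```python
# A minimal sketch of the DAG task model: each task maps to the list of
# its parents (precedence constraints). The edges are hypothetical.
parents = {
    1: [],         # no parents -> entry node
    2: [1],
    3: [1],
    4: [1],
    5: [2],
    6: [2, 3],
    7: [4, 5, 6],  # no children -> exit node
}

def entry_nodes(parents):
    """Tasks with no parent node (inputs of the DAG)."""
    return [t for t, ps in parents.items() if not ps]

def exit_nodes(parents):
    """Tasks that are nobody's parent (outputs of the DAG)."""
    referenced = {p for ps in parents.values() for p in ps}
    return [t for t in parents if t not in referenced]

print(entry_nodes(parents), exit_nodes(parents))  # [1] [7]
```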
IV. SCHEDULING ALGORITHM
The task scheduling problem is considered one of the main issues
in the Cloud computing environment, from the perspective of both
the Cloud provider and the Cloud user. The Cloud provider should
guarantee optimal scheduling of the user's tasks according to the
SLA. At the same time, the provider should guarantee throughput
and good utilization of the Cloud resources, and therefore needs
a good algorithm to schedule the tasks in the Cloud. As a result,
task scheduling is classified as an optimization problem, and
heuristic algorithms such as the Genetic Algorithm (GA), Particle
Swarm Optimization (PSO), and Ant Colony Optimization (ACO) can
be used to solve it.
In this work, a workflow scheduling algorithm has been proposed
based on the default GA with some modifications. According to
these modifications, the parents are kept in each population
alongside the children produced by the crossover process. Also,
Tournament Selection is used to select the better chromosomes, to
overcome the limitation of the population size. The proposed
algorithm is therefore called the Dependent Task Genetic
Algorithm (DTGA).
A- Default Genetic Algorithm (DGA)
The Genetic Algorithm (GA) is based on the biological concept of
generating populations and is considered a rapidly growing area
of Artificial Intelligence [25, 26]. GAs were inspired by
Darwin's theory of evolution; following the principle of
"survival of the fittest", tasks are mapped to resources
according to the value of a fitness function over the parameters
of the task scheduling process [27]. Generally, the default
Genetic Algorithm consists of five steps: initial population,
fitness function, selection, crossover, and mutation (see Figure
1) [5].
B- The Proposed Genetic Based Dependent Task
Scheduling
In this work, a Genetic-based Dependent Task scheduling (DTGA)
algorithm has been proposed for the Cloud environment. The
proposed algorithm is an extension of our previous GA algorithm
[3], now addressing the scheduling of dependent tasks instead of
independent ones. Considering a DAG with seven tasks to be
executed on 4 VMs, the steps of the proposed DTGA algorithm are
illustrated as follows (see Figure 2):
1. Representation of Chromosome
According to the proposed DTGA algorithm, the chromosome
representation is divided into two parts: a mapping part (for the
VMs) and a schedule part (for the tasks), as shown in Figure 3.
2. Initial Population
The population is generated randomly. The first part of the
chromosome (the VM mapping) is chosen randomly from 1 to No_VMs,
where No_VMs is the number of virtual machines in the Cloud
system. The second part (the task schedule) is generated randomly
such that the topological order of the DAG is preserved.
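A minimal sketch of this initialization step, assuming a hypothetical seven-task DAG given as a parent list: the schedule part is built by repeatedly picking a random task whose parents are already scheduled, which yields a random topological order.

```python
import random

# Hypothetical seven-task DAG: parents[t] lists the tasks that must
# finish before task t can start.
parents = {1: [], 2: [1], 3: [1], 4: [1], 5: [2], 6: [2, 3], 7: [4, 5, 6]}

def random_chromosome(parents, no_vms, rng=random):
    """Build one random chromosome (mapping, schedule): the mapping part
    draws a VM in 1..no_vms per task; the schedule part is a random
    topological order of the DAG, so precedence is preserved."""
    remaining = {t: set(ps) for t, ps in parents.items()}
    ready = [t for t, ps in remaining.items() if not ps]
    schedule = []
    while ready:
        t = rng.choice(ready)            # any ready task may come next
        ready.remove(t)
        schedule.append(t)
        for c in remaining:              # t is done: unblock its children
            remaining[c].discard(t)
            if not remaining[c] and c not in schedule and c not in ready:
                ready.append(c)
    mapping = [rng.randint(1, no_vms) for _ in schedule]
    return mapping, schedule
```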
3. Fitness Function Representation
In the GA, each chromosome in the population has a value, called
its fitness, that measures the quality of the solution. The
fitness function for the task scheduling problem in the Cloud
computing environment is taken to be the Completion Time of all
tasks on the available VMs.
In dependent task scheduling, a task may have more than one
parent; therefore, the maximum Completion Time among a task's
parents is taken as its start execution time.
According to Figures 2 and 3, suppose task 2 completes its work
on VM1 at time unit 4, and task 3 completes its work at time unit 5
Figure 1. Pseudo Code of the Default Genetic Algorithm [5].
Procedure GA
1. Initialization: Generate initial population P consisting of
chromosomes.
2. Fitness: Calculate the fitness value of each chromosome using
fitness function.
3. Selection: Select the chromosomes for producing next
generation using selection operator.
4. Crossover: Perform the crossover operation on the pair of
chromosomes obtained in step 3.
5. Mutation: Perform the mutation operation on the chromosomes.
6. Fitness: Calculate the fitness value of the newly generated
chromosomes, known as offspring.
7. Replacement: Update the population P by replacing bad
solutions with better chromosomes from the offspring.
8. Repeat steps 3 to 7 until stopping condition is met. Stopping
condition may be the maximum number of iterations or no change
in fitness value of chromosomes for consecutive iterations.
9. Output best chromosome as the final solution.
End Procedure
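The pseudocode above can be sketched as a generic minimization loop; the operators are passed in as functions, and the toy demo problem (driving a digit vector toward all zeros) is purely illustrative, not the scheduling problem itself:

```python
import random

def genetic_algorithm(init_pop, fitness, select, crossover, mutate,
                      generations=100):
    """Skeleton of the default GA of Figure 1 for a minimization problem.
    The problem-specific operators are passed in as functions; elitist
    truncation implements the replacement step."""
    population = init_pop()                              # step 1
    for _ in range(generations):                         # step 8
        p1, p2 = select(population), select(population)  # steps 2-3
        children = [mutate(c) for c in crossover(p1, p2)]  # steps 4-5
        # steps 6-7: score the offspring and keep the best individuals
        population = sorted(population + children,
                            key=fitness)[:len(population)]
    return min(population, key=fitness)                  # step 9

# Toy demo (not the scheduling problem): drive a vector of 8 digits
# toward all zeros, with fitness = sum of the digits.
rng = random.Random(42)
pop0 = [[rng.randint(0, 9) for _ in range(8)] for _ in range(20)]
fit = sum
select = lambda pop: min(rng.sample(pop, 3), key=fit)    # tournament of 3
def crossover(a, b):
    p = rng.randint(1, 7)                                # one-point
    return [a[:p] + b[p:], b[:p] + a[p:]]
def mutate(c):
    c = c[:]
    c[rng.randrange(8)] = rng.randint(0, 9)              # reset one gene
    return c
best = genetic_algorithm(lambda: [c[:] for c in pop0], fit, select,
                         crossover, mutate, generations=200)
```

Because the replacement step is elitist (the sorted pool is truncated), the best individual found never worsens across generations.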
Figure 2. DAG with seven tasks (nodes 1-7).
Mapping:  VM3 VM1 VM4 VM1 VM2 VM3 VM4
Schedule: T1  T5  T7  T3  T2  T6  T4
Figure 3. Representation of a chromosome (mapping part and schedule part).
Crossover point = 4
Parent 1:    VM3 VM1 VM4 VM1 VM2 VM3 VM4
             T1  T5  T7  T3  T2  T6  T4
Parent 2:    VM2 VM3 VM1 VM2 VM4 VM3 VM2
             T1  T4  T6  T5  T2  T7  T3
Offspring 1: VM3 VM1 VM4 VM1 VM4 VM3 VM2
             T1  T5  T7  T3  T2  T6  T4
Offspring 2: VM2 VM3 VM1 VM2 VM2 VM3 VM4
             T1  T4  T6  T5  T2  T7  T3
Figure 4. One-point crossover operator.
on VM4; then the execution of task 6 will start from time unit 5
on VM3, provided no task is active on VM3 at that time.
Therefore, the starting time (ST) of a task is calculated using
equation (1):

STi = max { CTp : Tp is a parent of Ti } . . . . . . . . . . (1)

where STi is the starting time of task Ti.

The completion time of task Ti on VMj is calculated using
equation (2):

CTij = STi + execution time of Ti on VMj . . . . . . . . . . (2)

Therefore, the Completion Time of all tasks on all VMs is
calculated using equation (3):

Completion Time = max { CTij : 1 ≤ i ≤ n, 1 ≤ j ≤ No_VMs } . . (3)

where n is the number of tasks, No_VMs is the number of VMs, and
CTij is the completion time of task Ti on VMj.
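Equations (1)-(3) can be sketched as follows. The DAG and the uniform execution times are illustrative assumptions; following the task-6 example in the text, a task also waits until its assigned VM is free:

```python
# Sketch of the completion-time fitness of equations (1)-(3).
parents = {1: [], 2: [1], 3: [1], 4: [1], 5: [2], 6: [2, 3], 7: [4, 5, 6]}
exec_time = {t: 2.0 for t in parents}    # assumed cost of each task

def completion_time(mapping, schedule, parents, exec_time):
    """Makespan of a chromosome: schedule is a topological order and
    mapping[i] is the VM assigned to schedule[i]."""
    ct = {}        # completion time of each task, equation (2)
    vm_free = {}   # time at which each VM becomes available
    for t, vm in zip(schedule, mapping):
        st = max((ct[p] for p in parents[t]), default=0.0)  # equation (1)
        st = max(st, vm_free.get(vm, 0.0))  # a VM runs tasks sequentially
        ct[t] = st + exec_time[t]           # equation (2)
        vm_free[vm] = ct[t]
    return max(ct.values())                 # equation (3)

# Example assignment: tasks 1..7 in order, VMs 3,1,4,1,2,3,4.
print(completion_time([3, 1, 4, 1, 2, 3, 4], [1, 2, 3, 4, 5, 6, 7],
                      parents, exec_time))  # 8.0
```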
4. Reproduction
• Tournament selection: the selection method is applied to choose
two chromosomes from the available solutions, according to their
fitness values, for producing the next generation. Different
approaches can be applied in the selection phase; in the proposed
DTGA algorithm, Tournament Selection is used to select the pairs
of parents for the crossover process.
• Crossover: after the selection process, crossover is applied to
the two chromosomes to generate new solutions while respecting
the dependency of the tasks.
In the proposed DTGA algorithm, the crossover is implemented in
two steps:
a. Apply a crossover point
A single crossover point is applied to the mapping (VM) part of
the chromosome at a randomly generated position. As an example,
parents 1 and 2 are crossed with a crossover point of 4 (see
Figure 4). This crossover generates new offspring and, at the
same time, preserves the dependency of the tasks.
b. Apply the new model of crossover
In this model, the two parents selected for the crossover that
generates two offspring are also considered as offspring
themselves, so the proposed crossover model produces four
children (see Figure 5). After that, the two best children are
chosen from these four [3].
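Steps (a) and (b) together can be sketched as below. The toy fitness function (sum of VM indices) is an assumption for illustration; in DTGA the fitness would be the completion time of equations (1)-(3):

```python
import random

def dtga_crossover(parent1, parent2, fitness, rng=random):
    """One-point crossover on the mapping part only (the task order is
    untouched, so precedence is preserved); the two parents are kept as
    offspring too, and the best two of the four candidates are returned
    (minimization)."""
    m1, s1 = parent1
    m2, s2 = parent2
    point = rng.randint(1, len(m1) - 1)          # random crossover point
    child1 = (m1[:point] + m2[point:], s1)       # swap the mapping tails
    child2 = (m2[:point] + m1[point:], s2)
    pool = [parent1, parent2, child1, child2]    # 4 candidates
    return sorted(pool, key=fitness)[:2]         # keep the best two

# Parents of Figure 4; the toy fitness prefers low VM indices.
p1 = ([3, 1, 4, 1, 2, 3, 4], [1, 5, 7, 3, 2, 6, 4])
p2 = ([2, 3, 1, 2, 4, 3, 2], [1, 4, 6, 5, 2, 7, 3])
best_two = dtga_crossover(p1, p2, lambda ch: sum(ch[0]), random.Random(7))
```

Because the parents stay in the candidate pool, the returned pair is never worse than the better of the two parents.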
• Mutation
The mutation is applied at two randomly generated points, after
checking whether there is a dependency between the tasks at these
points. If there is no dependency, their VM assignments are
swapped; otherwise, new mutation points that allow mutation are
generated. As an example, suppose the mutation points for parent
1 in Figure 4 are 2 and 5. Because there is no dependency between
the tasks at these points, they are swapped, generating a new
solution (see Figure 6).
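A sketch of this mutation, under the interpretation shown in Figure 6 (the VM assignments of two mutually independent tasks are swapped; the task order is unchanged); the DAG is a hypothetical assumption:

```python
import random

def is_ancestor(a, b, parents):
    """True if task a must complete before task b can start."""
    stack = list(parents[b])
    while stack:
        p = stack.pop()
        if p == a:
            return True
        stack.extend(parents[p])
    return False

def dtga_mutation(chrom, parents, rng=random, tries=100):
    """Pick two positions at random; if the tasks there are mutually
    independent, swap their VM assignments (as in Figure 6), otherwise
    redraw the points. Gives up after `tries` attempts."""
    mapping, schedule = list(chrom[0]), list(chrom[1])
    for _ in range(tries):
        i, j = rng.sample(range(len(schedule)), 2)
        a, b = schedule[i], schedule[j]
        if not is_ancestor(a, b, parents) and not is_ancestor(b, a, parents):
            mapping[i], mapping[j] = mapping[j], mapping[i]
            break
    return mapping, schedule

# Hypothetical DAG and the chromosome of Figure 3.
parents = {1: [], 2: [1], 3: [1], 4: [1], 5: [2], 6: [2, 3], 7: [4, 5, 6]}
mapping, schedule = dtga_mutation(
    ([3, 1, 4, 1, 2, 3, 4], [1, 5, 7, 3, 2, 6, 4]), parents,
    random.Random(2))
```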
5. Enhancement Population
Two modifications have been introduced to enhance the population.
According to the first modification, bad solutions are kept
alongside the good ones instead of being replaced, as they would
be in the default GA; this helps in reaching an optimal solution.
According to the second modification, new chromosomes are
generated randomly and added to the population after each
iteration to enhance its diversity; these random chromosomes
amount to 5% of the chromosomes in the population. Tuning this
percentage could be considered as future work.
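The two modifications, together with the tournament-based survivor selection mentioned in Section IV, can be sketched as one population-update step. The 5% injection rate follows the text; everything else (binary tournaments, the toy integer chromosomes in the usage example) is an assumption:

```python
import random

def enhance_population(population, offspring, new_random, fitness, size,
                       rng=random):
    """Sketch of the enhanced population update: parents and offspring
    are pooled (bad solutions are not discarded outright), about 5%
    fresh random chromosomes are injected for diversity, and binary
    tournaments select `size` survivors (minimization)."""
    pool = population + offspring
    pool += [new_random() for _ in range(max(1, int(0.05 * size)))]
    survivors = []
    for _ in range(size):
        a, b = rng.sample(pool, 2)           # binary tournament
        survivors.append(min(a, b, key=fitness))
    return survivors

# Toy usage: chromosomes abbreviated to single integers, fitness = value.
rng = random.Random(5)
out = enhance_population([9, 7, 5], [2, 4],
                         lambda: rng.randint(0, 9), lambda x: x, 3, rng)
```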
Figure 5. New model of the crossover process [3]: the crossover
of Parent 1 and Parent 2 produces Child 1 and Child 2, and copies
of the two parents are kept as Child 3 and Child 4.
Before mutation: VM3 VM1 VM4 VM1 VM2 VM3 VM4
                 T1  T5  T7  T3  T2  T6  T4
After mutation:  VM3 VM2 VM4 VM1 VM1 VM3 VM4
                 T1  T5  T7  T3  T2  T6  T4
Figure 6. New chromosome after mutation.
TABLE 2. THE COMPLETION TIME (SECONDS) OF THE RRA, RA, GA, AND DTGA ALGORITHMS USING 4 VMS
No. Tasks | RRA      | RA       | GA       | DTGA
50        | 295.32   | 261.25   | 233.05   | 167.23
100       | 564.59   | 505.3    | 455.6    | 322.29
300       | 1,511.02 | 1,369.48 | 1,211.29 | 801.93

TABLE 4. THE COMPLETION TIME (SECONDS) OF THE RRA, RA, GA, AND DTGA ALGORITHMS USING 12 VMS
No. Tasks | RRA    | RA     | GA     | DTGA
50        | 107.22 | 99.82  | 80.51  | 56.07
100       | 236.7  | 192.43 | 172.39 | 110.16
300       | 702.95 | 584.1  | 499.07 | 335.46
Figure 7. Comparison of the completion times (in seconds) of the
RRA, RA, GA, and DTGA algorithms versus the number of tasks (50,
100, 300).
V. PERFORMANCE EVALUATION
In this section, the experimental evaluation of the proposed DTGA
algorithm relative to the default GA, the Random Algorithm (RA),
and the Round-Robin Algorithm (RRA) is presented.
A. The Experimental Environment
A workflow can be composed of a large number of tasks, and
executing these tasks may require many complex modules and
software. Also, evaluating the performance of workflow
optimization techniques on real infrastructure is complex and
time consuming. As a result, simulation-based studies have become
a widely accepted way to evaluate workflow systems.
The WorkflowSim simulator is the most commonly used simulator for
implementing and evaluating the performance of task scheduling
algorithms in the Cloud. It is an extension of the existing
CloudSim simulator that provides a higher layer of workflow
management [28].
B. Experimental Results
Using the WorkflowSim toolkit, the proposed DTGA algorithm has
been implemented, and a comparative study has been made among
four algorithms: the Round-Robin Algorithm (RRA), the Random
Algorithm (RA), the default GA, and the developed DTGA algorithm,
using benchmark programs [29]. The completion time is used to
evaluate the performance. The benchmark programs used are listed
in Table 1.
The completion times of the RRA, RA, default GA, and proposed
DTGA algorithms using 4, 8, and 12 VMs with the random-graph
tasks are presented in Tables 2, 3, and 4 and Figures 7, 8, and 9.
Tables 5, 6, and 7 illustrate the completion time improvement
TABLE 1. SELECTED BENCHMARK PROGRAMS [29].
No. Tasks | Notes
50        | Random graph
100       | Random graph
300       | Random graph
88        | Robot control program
96        | Sparse matrix solver
TABLE 3. THE COMPLETION TIME (SECONDS) OF THE RRA, RA, GA, AND DTGA ALGORITHMS USING 8 VMS
No. Tasks | RRA      | RA     | GA     | DTGA
50        | 175.71   | 157.91 | 138.47 | 87.4
100       | 349.77   | 325.17 | 291.94 | 171.9
300       | 1,009.31 | 945.51 | 862.82 | 537.01
Figure 8. Comparison of the completion times (in seconds) of the
RRA, RA, GA, and DTGA algorithms versus the number of tasks (50,
100, 300).
Figure 9. Comparison of the completion times (in seconds) of the
RRA, RA, GA, and DTGA algorithms versus the number of tasks (50,
100, 300).
TABLE 8. THE COMPLETION TIME (SECONDS) OF THE RRA, RA, GA, AND DTGA ALGORITHMS USING 4 VMS
Task      | RRA    | RA     | GA     | DTGA
Robot 88  | 629.4  | 581.02 | 529.33 | 375.24
Sparse 97 | 734.05 | 679.35 | 588.11 | 401.52
Figure 10. Comparison of the completion times (in seconds) of the
RRA, RA, GA, and DTGA algorithms on the Robot and Sparse tasks.
TABLE 9. THE COMPLETION TIME (SECONDS) OF THE RRA, RA, GA, AND DTGA ALGORITHMS USING 8 VMS
Task      | RRA    | RA    | GA     | DTGA
Robot 88  | 301.12 | 299.5 | 241.76 | 159.2
Sparse 97 | 415.87 | 374.2 | 299.47 | 195.69

TABLE 10. THE COMPLETION TIME (SECONDS) OF THE RRA, RA, GA, AND DTGA ALGORITHMS USING 12 VMS
Task      | RRA    | RA     | GA     | DTGA
Robot 88  | 215.45 | 190.4  | 167.65 | 113.87
Sparse 97 | 300    | 271.02 | 199.05 | 134.1
Figure 12. Comparison of the completion times (in seconds) of the
RRA, RA, GA, and DTGA algorithms on the Robot and Sparse tasks.
for the proposed DTGA algorithm relative to the RRA, RA, and
default GA algorithms using 4, 8, and 12 VMs.
Table 8 and Figure 10 present the completion times of the RRA,
RA, default GA, and proposed DTGA algorithms using 4 VMs with the
tasks of the Robot control program and the Sparse matrix solver.
Table 9 and Figure 11 present the corresponding results using 8
VMs, and Table 10 and Figure 12 the results using 12 VMs.
TABLE 5. THE COMPLETION TIME IMPROVEMENT (%) OF DTGA VS. RRA, RA, AND GA ON 4 VMS.
No. Tasks | DTGA vs. RRA | DTGA vs. RA | DTGA vs. GA
50        | 43.37        | 35.98       | 28.24
100       | 42.91        | 36.21       | 29.26
300       | 46.92        | 41.44       | 33.79
Average   | 44.4%        | 37.87%      | 30.43%

TABLE 6. THE COMPLETION TIME IMPROVEMENT (%) OF DTGA VS. RRA, RA, AND GA ON 8 VMS.
No. Tasks | DTGA vs. RRA | DTGA vs. RA | DTGA vs. GA
50        | 50.25        | 44.65       | 36.88
100       | 50.85        | 47.13       | 41.11
300       | 46.79        | 43.2        | 37.76
Average   | 49.29%       | 44.99%      | 38.58%

TABLE 7. THE COMPLETION TIME IMPROVEMENT (%) OF DTGA VS. RRA, RA, AND GA ON 12 VMS.
No. Tasks | DTGA vs. RRA | DTGA vs. RA | DTGA vs. GA
50        | 47.7         | 43.82       | 30.35
100       | 53.46        | 42.75       | 36.09
300       | 52.27        | 42.56       | 32.78
Average   | 51.14%       | 43.04%      | 33.07%
Figure 11. Comparison of the completion times (in seconds) of the
RRA, RA, GA, and DTGA algorithms on the Robot and Sparse tasks.
TABLE 12. THE COMPLETION TIME IMPROVEMENT (%) OF DTGA VS. RRA, RA, AND GA ON 8 VMS.
Task    | DTGA vs. RRA | DTGA vs. RA | DTGA vs. GA
Robot   | 47.13        | 46.84       | 34.14
Sparse  | 52.94        | 47.7        | 34.65
Average | 50.03%       | 47.27%      | 34.39%
Tables 11, 12, and 13 illustrate the completion time improvement
of the proposed DTGA algorithm relative to the RRA, RA, and
default GA algorithms using 4, 8, and 12 VMs, respectively.
According to the results in Table 5, the completion time of the
proposed DTGA algorithm is reduced by 44.4%, 37.87%, and 30.43%
with respect to the RRA, RA, and default GA algorithms,
respectively. For the results in Table 6, the reductions are
49.29%, 44.99%, and 38.58%, and for those in Table 7 they are
51.14%, 43.04%, and 33.07%.
According to the results in Table 11, the completion time of the
proposed DTGA algorithm is reduced by 42.84%, 38.15%, and 30.41%
relative to the RRA, RA, and default GA algorithms, respectively.
For the results in Table 12, the reductions are 50.03%, 47.27%,
and 34.39%, and for those in Table 13 they are 51.22%, 45.35%,
and 32.34%.
VI. CONCLUSION AND FUTURE WORK
According to the work in this paper, an improved Genetic
Algorithm (DTGA) for the dependent task scheduling problem has
been proposed for the Cloud computing environment. The proposed
algorithm targets minimizing the completion time. A comparative
study has been conducted to evaluate the performance of the
proposed algorithm with respect to the RRA, RA, and default GA
algorithms using the STG benchmark (three random graphs, the
Robot graph, and the Sparse graph). According to the comparative
results using the three random graphs and 4, 8, and 12 VMs, the
completion time of the proposed DTGA algorithm has been reduced
on average by 48.28%, 42%, and 33.98% with respect to the RRA,
RA, and default GA algorithms, respectively. According to the
comparative results using the Robot and Sparse graphs and 4, 8,
and 12 VMs, the completion time of the proposed DTGA algorithm
has been reduced on average by 48%, 43.59%, and 32.38% with
respect to the RRA, RA, and default GA algorithms, respectively.
Generally, the proposed DTGA algorithm outperforms the RRA, RA,
and default GA algorithms by 48.14%, 42.8%, and 33.18% on
average, respectively, with respect to the completion time.
For future work, the proposed algorithm can be extended to
consider the dynamic characteristics of VMs. Moreover, the users'
QoS requirements would be considered.
REFERENCES
[1] R. Nallakumar and K. Sruthi Priya, "A survey on
scheduling and the attributes of task scheduling in the
cloud," Int. J. Adv. Res. Comput. Commun. Eng, vol. 3,
pp. 8167-8171, 2014.
[2] T. Mathew, K. C. Sekaran, and J. Jose, "Study and
analysis of various task scheduling algorithms in the
cloud computing environment," in Advances in
Computing, Communications and Informatics (ICACCI), 2014
International Conference on, 2014, pp. 658-664.
[3] S. A. Hamad and F. A. Omara, "Genetic-Based Task
Scheduling Algorithm in Cloud Computing
Environment," International Journal of Advanced
Computer Science & Applications, vol. 1, pp. 550-556,
2016.
[4] D. C. Vegda and H. B. Prajapati, "Scheduling of
dependent tasks application using random search
technique," in Advance Computing Conference (IACC),
2014 IEEE International, 2014, pp. 825-830.
[5] M. Kalra and S. Singh, "A review of metaheuristic
scheduling techniques in cloud computing," Egyptian
Informatics Journal, vol. 16, pp. 275-295, 2015.
[6] K. Dasgupta, B. Mandal, P. Dutta, J. K. Mandal, and S.
Dam, "A genetic algorithm (ga) based load balancing
strategy for cloud computing," Procedia Technology, vol.
10, pp. 340-347, 2013.
TABLE 11. THE COMPLETION TIME IMPROVEMENT (%) OF DTGA VS. RRA, RA, AND GA ON 4 VMS.
Task    | DTGA vs. RRA | DTGA vs. RA | DTGA vs. GA
Robot   | 40.38        | 35.41       | 29.11
Sparse  | 45.3         | 40.89       | 31.72
Average | 42.84%       | 38.15%      | 30.41%

TABLE 13. THE COMPLETION TIME IMPROVEMENT (%) OF DTGA VS. RRA, RA, AND GA ON 12 VMS.
Task    | DTGA vs. RRA | DTGA vs. RA | DTGA vs. GA
Robot   | 47.14        | 40.19       | 32.07
Sparse  | 55.3         | 50.52       | 32.62
Average | 51.22%       | 45.35%      | 32.34%
[7] T. Wang, Z. Liu, Y. Chen, Y. Xu, and X. Dai, "Load
balancing task scheduling based on genetic algorithm in
cloud computing," in Dependable, Autonomic and Secure
Computing (DASC), 2014 IEEE 12th International
Conference on, 2014, pp. 146-152.
[8] F. Pop, C. Dobre, and V. Cristea, "Genetic algorithm for
DAG scheduling in grid environments," in Intelligent
Computer Communication and Processing, 2009. ICCP
2009. IEEE 5th International Conference on, 2009, pp.
299-305.
[9] K. Kaur, A. Chhabra, and G. Singh, "Heuristics based
genetic algorithm for scheduling static tasks in
homogeneous parallel system," International Journal of
Computer Science and Security, vol. 4, pp. 183-198, 2010.
[10] J. Yu and R. Buyya, "Scheduling scientific workflow
applications with deadline and budget constraints using
genetic algorithms," Scientific Programming, vol. 14, pp.
217-230, 2006.
[11] J. Gu, J. Hu, T. Zhao, and G. Sun, "A new resource
scheduling strategy based on genetic algorithm in cloud
computing environment," Journal of Computers, vol. 7,
pp. 42-52, 2012.
[12] S. Sawant, "A genetic algorithm scheduling approach for
virtual machine resources in a cloud computing
environment," 2011.
[13] K. Nishant, P. Sharma, V. Krishna, C. Gupta, K. P. Singh,
and R. Rastogi, "Load balancing of nodes in cloud using
ant colony optimization," in Computer Modelling and
Simulation (UKSim), 2012 UKSim 14th International
Conference on, 2012, pp. 3-8.
[14] A. G. Delavar and Y. Aryan, "HSGA: a hybrid heuristic
algorithm for workflow scheduling in cloud systems,"
Cluster computing, vol. 17, pp. 129-137, 2014.
[15] K. Zhu, H. Song, L. Liu, J. Gao, and G. Cheng, "Hybrid
genetic algorithm for cloud computing applications," in
Services Computing Conference (APSCC), 2011 IEEE
Asia-Pacific, 2011, pp. 182-187.
[16] M. Shojafar, S. Javanmardi, S. Abolfazli, and N.
Cordeschi, "FUGE: A joint meta-heuristic approach to
cloud job scheduling algorithm using fuzzy theory and a
genetic method," Cluster Computing, vol. 18, pp. 829-
844, 2015.
[17] G. Shen and Y.-Q. Zhang, "A shadow price guided
genetic algorithm for energy aware task scheduling on
cloud computers," in International Conference in Swarm
Intelligence, 2011, pp. 522-529.
[18] F. Tao, Y. Feng, L. Zhang, and T. Liao, "CLPS-GA: A
case library and Pareto solution-based hybrid genetic
algorithm for energy-aware cloud service scheduling,"
Applied Soft Computing, vol. 19, pp. 264-279, 2014.
[19] A. Al-maamari and F. A. Omara, "Task Scheduling Using
PSO Algorithm in Cloud Computing Environments,"
International Journal of Grid and Distributed Computing,
vol. 8, pp. 245-256, 2015.
[20] A. Al-maamari and F. A. Omara, "Task Scheduling using
Hybrid Algorithm in Cloud Computing Environments,"
IOSR Journal of Computer Engineering, vol. 17, pp. 96-
106, 2015.
[21] H. M. Alkhashai and F. A. Omara, "BF-PSO-TS: Hybrid
Heuristic Algorithms for Optimizing Task Scheduling on
Cloud Computing Environment," International Journal of
Advanced Computer Science & Applications, vol. 1, pp.
207-212, 2016.
[22] H. M. Alkhashai and F. A. Omara, "An Enhanced Task
Scheduling Algorithm on Cloud Computing
Environment," International Journal of Grid and
Distributed Computing, vol. 9, pp. 91-100, 2016.
[23] M. Zahraa Tarek and F. Omara, "PSO optimization
algorithm for task scheduling on the cloud computing
environment," Int. J. Comput. Technol, vol. 13, 2014.
[24] D. G. Amalarethinam and G. J. Mary, "A new DAG
based Dynamic Task Scheduling Algorithm (DYTAS)
for Multiprocessor Systems," International Journal of
Computer Applications, 2011.
[25] S. H. Jang, T. Y. Kim, J. K. Kim, and J. S. Lee, "The
study of genetic algorithm-based task scheduling for
cloud computing," International Journal of Control and
Automation, vol. 5, pp. 157-162, 2012.
[26] T. Goyal and A. Agrawal, "Host scheduling algorithm
using genetic algorithm in cloud computing
environment," International Journal of Research in
Engineering & Technology (IJRET) Vol, vol. 1, pp. 7-
12, 2013.
[27] R. Buyya, R. Ranjan, and R. N. Calheiros, "Modeling
and simulation of scalable Cloud computing
environments and the CloudSim toolkit: Challenges and
opportunities," in High Performance Computing &
Simulation, 2009. HPCS'09. International Conference
on, 2009, pp. 1-11.
[28] W. Chen and E. Deelman, "Workflowsim: A toolkit for
simulating scientific workflows in distributed
environments," in E-Science (e-Science), 2012 IEEE 8th
International Conference on, 2012, pp. 1-8.
[29] Standard Task Graph Set (STG). (2016). Available:
http://www.kasahara.elec.waseda.ac.jp/schedule/