Cloud computing consumes a large amount of energy; in 2016, 289 data centers in Europe reached a total energy consumption of 3,735,735 MWh [1]. Thus, it is inevitable for data centers to grow in power consumption and in number to meet the high demand from users. This causes rising concern for the environment, since 66.8% of the world's electricity in 2017 was generated from coal, gas, and oil [2], and it encourages the community to embrace green cloud computing technology.

This section discusses the related works regarding task scheduling in cloud computing and their approaches, as well as the proposed algorithms used for the simulation.

2.1 Related Works

Table 1 contains the list of summaries from the previous works. This study highlights four promising algorithms to solve the task scheduling problem, namely GA, PSO, CSA, and BA, which have great potential to satisfy task scheduling for the data center while optimizing the makespan, energy consumption, and load balancing. The previous studies have not tried to find the best single meta-heuristic algorithm for optimizing makespan, energy consumption, and load balancing.
Table 1: Summary of Related Works

No | Author | Approach | Input | Objective
1 | [4] | Hybrid dynamic voltage scaling and greedy random algorithm | Current resource data and CPU | Load balancing
2 | [5] | DOTS | Tasks and resources | Makespan and load balancing
3 | [6] | Probabilistic | Tasks and VM | Load balancing
4 | [7] | PSO | Tasks | Makespan
5 | [8] | PSO | Tasks | Makespan
6 | [9] | Chaotic symbiotic organisms search | Tasks | Makespan and cost
7 | [3] | Intelligence Water Drop | Integer: time and cost of the task | Decrease the task execution time
8 | [10] | Hybrid PSO and HC | Directed Acyclic Graph (DAG) | Decrease the makespan
9 | [11] | GWO | Tasks and resources | Decrease the makespan and energy optimization
10 | [12] | A hybrid of GA and ILP | Resources, storage, tasks | Minimize energy usage
11 | [13] | Hybrid Evolutionary Algorithm | Execution time and shared resources | Decrease the makespan
12 | [14] | CSA mapping the resources and tasks | Tasks and resources | Decrease the makespan and energy optimization
13 | [15] | Stochastic-HC | Tasks and VM | Decrease the energy usage
14 | [16] | Multiple-Workflows-Slack-Time-Reclaiming | DAG | Decrease the makespan and energy optimization
15 | [17] | Non-DVFS and global DVFS | DAG | Energy optimization
16 | [18] | GA | Tasks | Makespan and energy optimization
17 | [19] | Hybrid of greedy and PSO | Integer: the time required to execute the task | Reduce execution time and optimize resources
18 | [20] | The BA with a budget constraint | Task execution time, task cost, VM cost, reliability, budget | Optimization of execution time and reliability
19 | [21] | ACO | CPU, task, and budget cost | Faster computation within budget
20 | [22] | Greedy | Tasks | Makespan
21 | [23] | GA | Tasks | Makespan
22 | [24] | Greedy | Tasks and resources | Makespan and energy consumption

2.2 Proposed Algorithms

This section discusses the four meta-heuristics, namely the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), the Clonal Selection Algorithm (CSA), and the Bat Algorithm (BA), which have been used to solve task scheduling optimization in the previous studies.

A. Genetic Algorithm

The genetic algorithm is one of the meta-heuristic algorithms inspired by the genome. The starting solution is generated randomly, and the fitness of each solution is counted. From the fitness result the algorithm determines the parents of the solution; from the parents, the crossover function is executed to generate the child solution, and the child then undergoes some mutation to be considered as the next solution. The process is repeated until the stopping condition is satisfied [25]. Figure 1 shows the flowchart of the Genetic Algorithm.

Figure 1: GA flowchart (initialize the population, count fitness, select the parents, cross over the parents, mutate the child, decrease N_iteration by one, and finish when N_iteration = 0)
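To make the GA steps above concrete, the following minimal Python sketch (not the authors' implementation) encodes a schedule as a task-to-VM assignment and applies two-parent selection, half-point crossover, and swap mutation, the operators used later in this study; the fitness here is only a makespan placeholder.

```python
import random

def makespan(schedule, task_len, vm_mips):
    """Rough makespan of a task->VM assignment (placeholder fitness)."""
    load = [0.0] * len(vm_mips)
    for task, vm in enumerate(schedule):
        load[vm] += task_len[task] / vm_mips[vm]
    return max(load)

def genetic_algorithm(task_len, vm_mips, pop_size=20, iterations=200):
    n_tasks, n_vms = len(task_len), len(vm_mips)
    # Initialize population: random task->VM assignments.
    pop = [[random.randrange(n_vms) for _ in range(n_tasks)] for _ in range(pop_size)]
    best = min(pop, key=lambda s: makespan(s, task_len, vm_mips))
    for _ in range(iterations):
        # Parent selection: the two fittest schedules (2 chromosomes).
        pop.sort(key=lambda s: makespan(s, task_len, vm_mips))
        p1, p2 = pop[0], pop[1]
        # Half-point crossover.
        half = n_tasks // 2
        child = p1[:half] + p2[half:]
        # Swap mutation: exchange the VM assignments of two random tasks.
        i, j = random.sample(range(n_tasks), 2)
        child[i], child[j] = child[j], child[i]
        # Replace the worst individual with the child.
        pop[-1] = child
        if makespan(child, task_len, vm_mips) < makespan(best, task_len, vm_mips):
            best = child[:]
    return best

random.seed(42)
tasks = [random.randint(200, 15000) for _ in range(100)]   # instruction lengths
vms = [random.randint(500, 2500) for _ in range(10)]       # VM MIPS
print("GA best makespan:", makespan(genetic_algorithm(tasks, vms), tasks, vms))
```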
B. Particle Swarm Optimization (PSO)

Figure 2: PSO flowchart (initialize the population, move randomly, evaluate the solution, update the local and global optimum solution, and repeat for N_iteration iterations)

v_{n\_bird}[t+1] = w \cdot v_{n\_bird}[t] + \varphi_1 r_1 (X1_{n\_bird}[t] - x_{n\_bird}[t]) + \varphi_2 r_2 (X2_{n\_bird}[t] - x_{n\_bird}[t]) \quad (1)

x[t+1] = x_{n\_bird}[t] + v_{n\_bird}[t+1] \quad (2)

Where x_{n\_bird}[t] is the position of the bird, v_{n\_bird}[t] is the speed of the bird, w, \varphi_1, \varphi_2 are assigned weight coefficients, r_1, r_2 are random vectors, X1_{n\_bird}[t] is the local optimum solution, and X2_{n\_bird}[t] is the global optimum solution.
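Equations (1) and (2) are defined for continuous positions; for task scheduling, this study maps the velocity to a number of swap moves (see the discussion in Section 5.3). The sketch below is a loose, illustrative discrete adaptation under that assumption, not the exact update used in the simulation.

```python
import random

def fitness(schedule, task_len, vm_mips):
    """Placeholder fitness: makespan of a task->VM assignment (lower is better)."""
    load = [0.0] * len(vm_mips)
    for task, vm in enumerate(schedule):
        load[vm] += task_len[task] / vm_mips[vm]
    return max(load)

def pso_schedule(task_len, vm_mips, n_birds=20, iterations=300, w=1.0, p1=0.8, p2=0.8):
    n_tasks, n_vms = len(task_len), len(vm_mips)
    birds = [[random.randrange(n_vms) for _ in range(n_tasks)] for _ in range(n_birds)]
    local_best = [b[:] for b in birds]                      # X1: per-bird best position
    global_best = min(birds, key=lambda b: fitness(b, task_len, vm_mips))[:]  # X2
    for _ in range(iterations):
        for i, bird in enumerate(birds):
            # Attraction toward the local (X1) and global (X2) optimum solutions:
            # copy a few task assignments from each, in the spirit of eq. (1).
            for t in random.sample(range(n_tasks), max(1, n_tasks // 4)):
                bird[t] = random.choice((local_best[i][t], global_best[t]))
            # "Velocity" interpreted as a swap count, in the spirit of eq. (2).
            n_swaps = round(w + p1 * random.random() + p2 * random.random())
            for _ in range(max(1, n_swaps)):
                a, b = random.sample(range(n_tasks), 2)
                bird[a], bird[b] = bird[b], bird[a]
            # Update the local and global optimum solutions.
            f = fitness(bird, task_len, vm_mips)
            if f < fitness(local_best[i], task_len, vm_mips):
                local_best[i] = bird[:]
            if f < fitness(global_best, task_len, vm_mips):
                global_best = bird[:]
    return global_best
```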
C. Clonal Selection Algorithm (CSA)

CSA is a meta-heuristic algorithm inspired by the antibody system, using B cells and T cells for its cloning, selection, and memory set. At the beginning, the clonal selection algorithm (CLONALG) was proposed for machine learning and pattern recognition, where it emphasizes the ability to store the several solutions that provided the best outcome, not all solutions, so that this information can be reused rather than starting from the start again. However, for its potential, CLONALG is implemented for optimization with three adjustments: first, there is no explicit antigen, so the antibody population does not need to maintain a separate memory; second, n antibodies are selected rather than only the best individual; and third, all antibodies selected for cloning (N) are assumed to be cloned in the same number. The number of cloned antibodies is counted with equation (3) [27][28]. Figure 3 shows the flow of CSA.

N_c = \sum_{i=1}^{N_{Ab}} \mathrm{round}\left(\frac{CSA \cdot N_{Ab}}{i}\right) \quad (3)

Where N_c is the number of cloned antibodies, P_{Ab} is a list of antibodies, N_{Ab} is the number of antibodies in Ab, CSA is the multiplying factor, D is a constant containing how many antibodies need to be replaced, nbest is the number of best antibodies to be selected, and F_{Ab} is the affinities of Ab.
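A small illustrative sketch of equation (3) inside a CLONALG-style loop is given below; the schedule encoding, the hypermutation scheme, and the `fitness` argument (a function mapping an assignment to a value where lower is better) are assumptions for illustration, not the exact procedure of this study.

```python
import random

def clone_count(n_ab, csa_factor):
    """Eq. (3): Nc = sum over i = 1..N_Ab of round(CSA * N_Ab / i)."""
    return sum(round(csa_factor * n_ab / i) for i in range(1, n_ab + 1))

def clonal_selection(fitness, n_tasks, n_vms, n_ab=10, csa_factor=1.0,
                     d_replace=2, iterations=600):
    """Minimal CLONALG-style loop over task->VM assignments (illustrative only)."""
    antibodies = [[random.randrange(n_vms) for _ in range(n_tasks)] for _ in range(n_ab)]
    for _ in range(iterations):
        antibodies.sort(key=fitness)                 # rank by affinity (F_Ab)
        clones = []
        for rank, ab in enumerate(antibodies, start=1):
            # Better-ranked antibodies receive more clones (per-antibody share of eq. (3)).
            for _ in range(max(1, round(csa_factor * n_ab / rank))):
                clone = ab[:]
                # Hypermutation: worse-ranked clones are mutated more heavily.
                for _ in range(rank):
                    clone[random.randrange(n_tasks)] = random.randrange(n_vms)
                clones.append(clone)
        # Keep the n_ab best of parents and clones (the "nbest" selection).
        antibodies = sorted(antibodies + clones, key=fitness)[:n_ab]
        # Replace the D worst antibodies with fresh random ones for diversity.
        for k in range(1, d_replace + 1):
            antibodies[-k] = [random.randrange(n_vms) for _ in range(n_tasks)]
    return antibodies[0]

print(clone_count(n_ab=10, csa_factor=1.0))  # total clones for a population of 10
```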
D. Bat Algorithm (BA)

Figure 4: BA flowchart (for each bat i ≤ N_BA, update x_i; check rand < r_i, and rand < A_i with f(x_i) < f(x*), to accept a new solution; rank the bats and find the best x*; increase t by one and repeat while t ≤ N_iteration)
The loudness and pulse emission rate are updated with equation (8):

A_i^{t+1} = \alpha_{BA} A_i^{t}, \qquad r_i^{t+1} = r_i^{0}\left[1 - \exp(-\gamma_{BA} t)\right] \quad (8)

Where v is the velocity, x is the position or solution, F is the frequency, f is the objective function, A is the loudness, R is the pulse emission rate, \beta_{BA} is a random vector in [0,1], \alpha_{BA} and \gamma_{BA} are constants, r_i^0 is the initial emission rate, and x* is the best solution.
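A minimal continuous sketch of the standard Bat Algorithm of Yang [29], combining the frequency, velocity, and position updates with the loudness and pulse-rate update of equation (8), is shown below; it is an illustration only, not the discrete scheduler evaluated in this study.

```python
import math
import random

def bat_algorithm(objective, dim, n_bats=20, iterations=450,
                  f_min=0.0, f_max=10.0, loud0=1.0, rate0=0.5,
                  alpha_ba=0.9, gamma_ba=0.9, lower=0.0, upper=1.0):
    """Continuous Bat Algorithm following Yang [29]; eq. (8) updates A and r."""
    x = [[random.uniform(lower, upper) for _ in range(dim)] for _ in range(n_bats)]
    v = [[0.0] * dim for _ in range(n_bats)]
    loud = [loud0] * n_bats          # loudness A_i
    rate = [rate0] * n_bats          # pulse emission rate r_i
    best = min(x, key=objective)[:]
    for t in range(1, iterations + 1):
        for i in range(n_bats):
            beta = random.random()                       # beta_BA in [0, 1]
            freq = f_min + (f_max - f_min) * beta        # frequency F_i
            v[i] = [v[i][d] + (x[i][d] - best[d]) * freq for d in range(dim)]
            cand = [min(max(x[i][d] + v[i][d], lower), upper) for d in range(dim)]
            if random.random() > rate[i]:
                # Local random walk around the current best solution.
                cand = [min(max(best[d] + 0.01 * random.gauss(0, 1) * loud[i], lower), upper)
                        for d in range(dim)]
            if random.random() < loud[i] and objective(cand) < objective(x[i]):
                x[i] = cand
                # Eq. (8): A_i^{t+1} = alpha_BA * A_i^t and
                #          r_i^{t+1} = r_i^0 * (1 - exp(-gamma_BA * t)).
                loud[i] = alpha_ba * loud[i]
                rate[i] = rate0 * (1.0 - math.exp(-gamma_ba * t))
            if objective(x[i]) < objective(best):
                best = x[i][:]
    return best

# Example: minimize the sphere function in 5 dimensions.
print(bat_algorithm(lambda p: sum(c * c for c in p), dim=5, lower=-5.0, upper=5.0))
```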
3. METHODOLOGY

The workflow of this study is presented in Figure 5; it starts from the mathematical model, followed by algorithm implementation, evaluation of each of the parameters, and reporting.

Figure 5: Research Workflow (Mathematical model, Algorithms Implementation, Evaluation, Reporting)

Table 2: Experiment Settings

Parameters | Value
Number of data centers | 5
Number of hosts | 10
Number of VMs | 50
VM MIPS | [500-2500] MIPS
VM cores | [1-5]
Number of tasks | [100-3000]
Task instruction length | [200-15000]M
Number of tests for each dataset | 10
Number of iterations | 50-1000
GA parents | 2 chromosomes
Crossover | Half point
Mutation type | Swap mutation
CSA number of cloning and number of multiplication | [3-10]
CSA cloning constant and n-best constant | 0.1
PSO weight | 1
PSO p1, p2 | 0.8
BA frequency max | 10
BA frequency min | 0
BA amplitude | 1
Fitness α, β, γ | 1/3
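For reproduction purposes, the ranges in Table 2 can be materialized as a simple configuration; the helper below is hypothetical (the names and the uniform sampling are assumptions, and the actual simulator used is not specified in this section).

```python
import random

# Hypothetical helper that materializes one experiment scenario from Table 2's ranges.
SETTINGS = {
    "data_centers": 5, "hosts": 10, "vms": 50,
    "vm_mips": (500, 2500), "vm_cores": (1, 5),
    "tasks": (100, 3000), "task_length": (200, 15000),
    "repeats": 10, "iterations": (50, 1000),
}

def build_scenario(n_tasks, seed=0):
    """Draw VM speeds/cores and task lengths uniformly from the ranges in Table 2."""
    rng = random.Random(seed)
    vms = [{"mips": rng.randint(*SETTINGS["vm_mips"]),
            "cores": rng.randint(*SETTINGS["vm_cores"])} for _ in range(SETTINGS["vms"])]
    tasks = [rng.randint(*SETTINGS["task_length"]) for _ in range(n_tasks)]
    return vms, tasks

vms, tasks = build_scenario(n_tasks=1000)
print(len(vms), len(tasks))
```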
4. MATHEMATICAL MODEL

4.1 Makespan

The decision variable for the makespan function is C_{ij}, the computation time, with C_{ij} ≥ 0, while the makespan is the time required for all the tasks to be executed [11][9]. By calculating the maximum time required by the VMs, which run at the same time, the overall time at which the tasks will finish is obtained.

MS = \max\{C_{ij}\}, \; \forall i, j \quad (9)

Where MS is the makespan and C_{ij} is the computation time to solve the j-th task assigned to the i-th VM.
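A direct reading of equation (9), under the assumption that the time required by VM i is the sum of the computation times C_ij of the tasks assigned to it, can be sketched as follows.

```python
def makespan(C):
    """Eq. (9): MS = max over VMs of the time required by that VM.

    C[i][j] is the computation time of task j on VM i, and 0 when task j is
    not assigned to VM i; each VM's time is assumed to be the sum of the
    computation times of the tasks assigned to it.
    """
    return max(sum(times_on_vm) for times_on_vm in C)

# Example: 2 VMs, 3 tasks; tasks 0 and 2 run on VM 0, task 1 on VM 1.
C = [[4.0, 0.0, 2.5],
     [0.0, 3.0, 0.0]]
print(makespan(C))  # 6.5
```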
4.2 Energy Consumption

Assume that the power used by the host to stay up is equal to that of the VMs registered inside it. Furthermore, the power consumption during idle time is counted as 50% of peak power; this assumption is based on the claim of a previous study that the CPU still consumes power while idle [31].

A previous study stated that, compared to the energy used in other sectors of computing, most of the energy in a computer is used to power the CPU [32]. Therefore, the energy usage of the computing process can be represented by CPU energy usage. Since the VMs have identical cores, the energy of each core is represented by 1 J/s.

A previous study counts the energy consumption based only on the energy used in task execution, so an idle VM is killed directly [14]. This study also counts the energy used during VM idle time [16]. The mathematical model for idle power can be eliminated depending on the policy applied in the data center; for data centers that turn off the VM and host when they are no longer in service, it can be removed.

The decision variables for energy consumption are E_c and E_idle:

E_{c_i} = C_{ij} \cdot P_i \quad (10)

E_{idle_i} = (MS - C_{ij}) \cdot P_{idle_i} \quad (11)

E_{total} = \sum_{i=1}^{m} \left(E_{c_i} + E_{idle_i}\right) \quad (12)

Where MS is the makespan and C_{ij} is the computation time needed by VM index i to solve task j.
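Under the same assumption that the computation time is aggregated per VM, equations (10)-(12) can be sketched as follows, with the idle power taken as 50% of peak power as stated above.

```python
def total_energy(C, P, P_idle):
    """Eqs. (10)-(12): computing energy plus idle energy over all VMs.

    C[i] is the total computation time of VM i, P[i] its power while computing,
    and P_idle[i] its idle power (assumed to be 50% of peak power).
    """
    MS = max(C)                                                 # makespan, eq. (9)
    E_c = [C[i] * P[i] for i in range(len(C))]                  # eq. (10)
    E_idle = [(MS - C[i]) * P_idle[i] for i in range(len(C))]   # eq. (11)
    return sum(E_c) + sum(E_idle)                               # eq. (12)

# Example: 3 VMs busy for 6, 4, and 5 seconds at 1 J/s, idling at 0.5 J/s.
print(total_energy([6.0, 4.0, 5.0], [1.0, 1.0, 1.0], [0.5, 0.5, 0.5]))  # 16.5
```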
4.3 Load Balancing

The goal of load balancing is to have every task distributed equally across the existing resources. Assuming that all tasks are equally distributed, the standard deviation should equal zero; therefore, a lower standard deviation means that the tasks are closer to being equally distributed. The standard deviation function used in this study is

\overline{Task} = \frac{\sum_{j=1}^{n} Task_j}{m}, \quad j = 1, 2, 3, \ldots, n \quad (13)

LB = \sqrt{\frac{\sum_{i=1}^{m}\left(\#Task_{ij} - \overline{Task}\right)^2}{m}}, \quad i = 1, 2, \ldots, m, \; j = 1, 2, 3, \ldots, n \quad (14)

Where LB is the standard deviation of the load balancing and \#Task_{ij} is the sum of the instruction lengths of the tasks j in VM i.
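Equations (13) and (14) reduce to a population standard deviation over the per-VM instruction loads, for example:

```python
import math

def load_balance_std(assigned_len, m):
    """Eqs. (13)-(14): standard deviation of the load (instruction length) per VM.

    assigned_len[i] is #Task_ij, the summed instruction length of the tasks on VM i,
    and m is the number of VMs; a lower value means a more balanced schedule.
    """
    mean = sum(assigned_len) / m                                        # eq. (13)
    return math.sqrt(sum((x - mean) ** 2 for x in assigned_len) / m)    # eq. (14)

print(load_balance_std([3000, 3000, 3000], 3))  # 0.0 -> perfectly balanced
print(load_balance_std([9000, 0, 0], 3))        # large value -> unbalanced
```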
4.4 Fitness Function

Reducing the makespan, energy consumption, and load imbalance using the task scheduling approach is the main goal of this study. Therefore, a function addressing these three objectives needs to be delivered. One may find one objective more important than the others; therefore, the values of α, β, and γ are used to set the priorities in the fitness function.

\min F = \alpha \cdot MS + \beta \cdot E_{total} + \gamma \cdot LB, \quad \alpha + \beta + \gamma = 1 \quad (15)
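Equation (15) combines the three objectives with equal weights α = β = γ = 1/3 (Table 2). Since makespan, energy, and load deviation have different units, the sketch below assumes normalization constants (MS_ref, E_ref, LB_ref) that are not specified in this section.

```python
def fitness(MS, E_total, LB, alpha=1/3, beta=1/3, gamma=1/3,
            MS_ref=1.0, E_ref=1.0, LB_ref=1.0):
    """Eq. (15): F = alpha*MS + beta*E_total + gamma*LB with alpha + beta + gamma = 1.

    The *_ref values are assumed normalization constants so that the three
    objectives (time, energy, load deviation) are combined on a comparable scale.
    """
    assert abs(alpha + beta + gamma - 1.0) < 1e-9
    return alpha * MS / MS_ref + beta * E_total / E_ref + gamma * LB / LB_ref

print(fitness(MS=6.0, E_total=16.5, LB=0.0, MS_ref=10.0, E_ref=20.0, LB_ref=5.0))
```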
5. RESULT AND DISCUSSION

This section discusses the optimum iteration for each algorithm and then the comparison of GA, PSO, CSA, and BA for fitness, makespan, energy consumption, load balancing, and running time. This study implements four algorithms: PSO, GA, CSA, and BA. The testing process is repeated ten times for each dataset to determine each algorithm's best, average, and worst results. The parameters that are evaluated are fitness and algorithm running time. To find the best condition for the four algorithms, each algorithm is tested with different numbers of iterations to find the optimum iteration; the chosen iteration for each meta-heuristic is then used for the comparison across the four algorithms.

5.1 Optimum Iteration

Figure 6: Four Meta-Heuristic Fitness for Each Iteration (fitness vs. iteration, 50-950, for the four algorithms)
To have fair treatment conditions for the comparison, one should find the optimum number of iterations used by each algorithm to solve the task scheduling problem on one dataset. In this section, the study uses the 1000-task dataset with ten repeated tests for 50-1000 iterations. From the data shown in Figure 6 and Figure 7, the optimum iteration has been chosen for each algorithm: GA will use 200 iterations, PSO 300 iterations, CSA 600 iterations, and BA will run for 450 iterations.

Figure 7: Average Running Time (s) vs Iteration for the four meta-heuristics

For the average running time, GA gives the fastest running time for four datasets, and for all of the datasets for the best maximum and minimum. The detailed results for the small dataset are presented in Table 3.

Table 4: Fitness Comparison between the Four Meta-Heuristics for the Medium Dataset

Tasks | Aggregate | GA | PSO | CSA | BA
400 | min | 0.7912 | 0.8141 | 0.8291 | 0.7899
400 | avg | 0.8176 | 0.8229 | 0.8714 | 0.8039
400 | max | 0.8352 | 0.8295 | 0.9004 | 0.8211
500 | min | 0.7867 | 0.7966 | 0.8518 | 0.8043
500 | avg | 0.8114 | 0.805 | 0.8654 | 0.8129
C. Large Dataset

The large dataset group is made up of 1000-3000 tasks, with a difference of 500 tasks between datasets; therefore, there are five large datasets. For the average fitness results on the large datasets, PSO gives the best result for three datasets and BA for two datasets, while for the best maximum and minimum, PSO and BA yield the best results for four datasets and GA for two datasets. For the average, maximum, and minimum running time, BA gives the smallest running time for all of the large datasets. The detailed results for the large dataset are presented in Table 5.
Table 6: Average Fitness and Running Time (s) from the Four Meta-Heuristics for Each Dataset Group

Dataset group | Aggregate | GA Fitness | GA Time | PSO Fitness | PSO Time | CSA Fitness | CSA Time | BA Fitness | BA Time
Avg small dataset | min | 0.826 | 1.316 | 0.817 | 28.313 | 0.896 | 32.222 | 0.817 | 1.968
Avg small dataset | avg | 0.854 | 1.59 | 0.848 | 31.082 | 0.931 | 35.17 | 0.845 | 2.145
Avg small dataset | max | 0.886 | 1.826 | 0.878 | 35.77 | 0.975 | 38.668 | 0.863 | 2.566
Percentage from optimum result (small) | min | 1.09% | 0.00% | 0.00% | 95.35% | 8.82% | 95.92% | 0.07% | 33.13%
Percentage from optimum result (small) | avg | 1.09% | 0.00% | 0.37% | 94.88% | 9.27% | 95.48% | 0.00% | 25.86%
Percentage from optimum result (small) | max | 2.61% | 0.00% | 1.71% | 94.90% | 11.49% | 95.28% | 0.00% | 28.87%
Avg medium dataset | min | 0.788 | 31.732 | 0.794 | 96.306 | 0.825 | 100.857 | 0.789 | 11.622
Avg medium dataset | avg | 0.806 | 34.452 | 0.805 | 101.763 | 0.849 | 106.344 | 0.801 | 13.622
Avg medium dataset | max | 0.819 | 37.181 | 0.813 | 109.697 | 0.866 | 112.063 | 0.813 | 16.131
Percentage from optimum result (medium) | min | 0.00% | 63.38% | 0.74% | 87.93% | 4.41% | 88.48% | 0.04% | 0.00%
Percentage from optimum result (medium) | avg | 0.61% | 60.46% | 0.47% | 86.61% | 5.64% | 87.19% | 0.00% | 0.00%
Percentage from optimum result (medium) | max | 0.76% | 56.61% | 0.09% | 85.29% | 6.19% | 85.61% | 0.00% | 0.00%
Avg large dataset | min | 0.781 | 958.049 | 0.779 | 339.405 | 0.804 | 287.486 | 0.781 | 98.813
Avg large dataset | avg | 0.791 | 1010.441 | 0.788 | 354.992 | 0.817 | 300.82 | 0.789 | 118.094
Avg large dataset | max | 0.802 | 1042.02 | 0.797 | 377.742 | 0.826 | 309.298 | 0.794 | 141.999
Percentage from optimum result (large) | min | 0.25% | 89.69% | 0.00% | 70.89% | 3.07% | 65.63% | 0.28% | 0.00%
Percentage from optimum result (large) | avg | 0.39% | 88.31% | 0.00% | 66.73% | 3.55% | 60.74% | 0.08% | 0.00%
Percentage from optimum result (large) | max | 0.91% | 86.37% | 0.37% | 62.41% | 3.89% | 54.09% | 0.00% | 0.00%
5.3 Discussion

To ensure that the meta-heuristics give their optimum solution, the number of iterations is tested for each algorithm so that further increasing the number of iterations does not have a large influence on the result, combining the fitness result and the running time needed to solve the task scheduling. The study used 200 iterations for GA, 300 for PSO, 600 for CSA, and 450 for BA.

Some information can be derived from the results, such as the optimum makespan results resembling the energy consumption results. This is caused by the fact that the energy consumption uses the makespan in its calculation, especially during idle time. Load balancing shows the opposite behavior: its goal is to have the tasks equally distributed across the VMs, but the VMs have different numbers of cores as their specification, which makes them run at different speeds. Therefore, load balancing has an inverse relationship with makespan. Table 6 summarizes the fitness results of the four meta-heuristics.

The Genetic Algorithm (GA) in this study uses a half-point crossover with swap mutation. GA requires a short running time for small datasets; however, as the number of tasks increases, the time required expands. In its best condition, GA is able to outperform the other algorithms in makespan, energy consumption, and load balancing, but not in fitness.

PSO uses its velocity and position to determine how many swap moves are applied to the previous solutions to yield a better result. At its best performance, PSO is able to yield the best fitness results for the small and medium datasets, while for the large datasets PSO is able to beat BA in three datasets. In this simulation PSO shows its competitive side; however, PSO requires a rather large running time compared to the other algorithms.

During the simulation, CSA requires a large amount of memory for the cloning process; therefore, this study limits the cloning number to three to ten times to avoid large memory usage during the scheduling process.
Even though CSA does not give the closest result to the best solution and requires a longer running time for the small and medium datasets, the cloning limitation makes its running time for the larger datasets much more stable, which is why CSA has a faster running time than GA on the large datasets.

The Bat Algorithm (BA) shows competitive performance, especially on the small and medium datasets, and for the large datasets BA loses only once to PSO. BA is known for its fast convergence rate, and this is shown in this simulation, especially for the medium and large datasets, since BA gives the fastest running time, and it comes in second place after GA for the small dataset.

On several occasions BA yields the best maximum and minimum in almost all of the datasets. This is caused by BA's behavior, which does not rely only on the fitness result to determine the next solution; BA adopts a random flying technique that gives a larger solution space, which makes it able to give the optimum solution for the maximum and minimum. The detailed performance of the fitness and running time comparison can be seen in Table 6.

6. CONCLUSION

Meta-heuristic algorithms have been implemented to solve NP-hard problems like task scheduling, whether for computation processes, industry, or employee scheduling. Based on the previous studies, there are four potential meta-heuristic algorithms for solving the scheduling process: GA, PSO, CSA, and BA. In addition, the latest task scheduling research takes an interest in multiple objectives. Based on the study that has been conducted, BA and PSO show good performance in solving makespan, energy consumption, and load balancing.

For future reference, several points might be used for future work on task scheduling in data center cloud computing, such as:
1) For task scheduling that requires a fast running time, GA is a suitable choice for small datasets, while for larger datasets BA is a better option.
2) For the optimum result, task scheduling can use GA with 200 iterations, PSO with 300 iterations, CSA with 600 iterations, and BA with 450 iterations.
3) PSO and BA are promising algorithms for hybridization.
4) Find an objective that has not been used before, or find a combination of two or more objectives for task scheduling optimization.

REFERENCES

[1] M. Avgerinou, P. Bertoldi, and L. Castellazzi, “Trends in Data Centre Energy Consumption under the Energy Efficiency,” Energies, vol. 10, no. 1470, pp. 1–18, 2017.
[2] IEA, “Electricity Statistics,” 2017. [Online]. Available: https://www.iea.org/statistics/electricity/. [Accessed: 03-Oct-2019].
[3] S. Elsherbiny, E. Eldaydamony, M. Alrahmawy, and A. E. Reyad, “An extended Intelligent Water Drops algorithm for workflow scheduling in cloud computing environment,” Egypt. Informatics J., vol. 19, pp. 33–55, 2018. https://doi.org/10.1016/j.eij.2017.07.001
[4] M. N. Prasadhu and M. Mehfooza, “An Efficient Hybrid Load Balancing Algorithm for Heterogeneous Data Centers in Cloud Computing,” Int. J. Adv. Trends Comput. Sci. Eng., vol. 9, no. 3, pp. 3078–3085, 2020. https://doi.org/10.30534/ijatcse/2020/89932020
[5] A. Qadir and G. Ravi, “Dual Objective Task Scheduling Algorithm in Cloud Environment,” Int. J. Adv. Trends Comput. Sci. Eng., vol. 9, no. 3, pp. 2527–2534, 2020. https://doi.org/10.30534/ijatcse/2020/07932020
[6] S. K. Panda and P. K. Jana, “Load balanced task scheduling for cloud computing: a probabilistic approach,” Knowl. Inf. Syst., 2019.
[7] J. A. Jennifa, S. T. Revathi, and T. S. S. Priya, “Smart PSO-based secured scheduling approaches for scientific workflows in cloud computing,” Soft Comput., vol. 23, no. 5, pp. 1745–1765, 2019.
[8] H. Saleh, H. Nashaat, W. Saber, and H. M. Harb, “IPSO Task Scheduling Algorithm for Large Scale Data in Cloud Computing Environment,” IEEE Access, vol. 7, pp. 5412–5420, 2019.
[9] M. Abdullahi, M. A. Ngadi, S. I. Dishing, S. M. Abdulhamid, and B. I. Ahmad, “An efficient symbiotic organisms search algorithm with chaotic optimization strategy for multi-objective task scheduling problems in cloud computing environment,” J. Netw. Comput. Appl., vol. 133, pp. 60–74, 2019.
[10] N. Dordaie and N. J. Navimipour, “A hybrid particle swarm optimization and hill climbing algorithm for task scheduling in the cloud environments,” ICT Express, vol. 4, no. 4, pp. 199–202, 2018. https://doi.org/10.1016/j.icte.2017.08.001
[11] G. Natesan and A. Chokkalingam, “Task scheduling in heterogeneous cloud environment using mean grey wolf optimization algorithm,” ICT Express, vol. 5, 2018.
[12] H. Ibrahim, R. O. Aburukba, and K. El-Fakih, “An Integer Linear Programming model and Adaptive Genetic Algorithm approach to minimize energy consumption of Cloud computing data centers,” Comput. Electr. Eng., vol. 67, pp. 551–565, 2018.
[13] L. Teylo, U. de Paula, Y. Frota, D. de Oliveira, and L. M. M. A. Drummond, “A hybrid evolutionary algorithm for task scheduling and data assignment of data-intensive scientific workflows on clouds,” Futur. Gener. Comput. Syst., vol. 76, pp. 1–17, 2017.
[14] R. K. Jena, “Energy Efficient Task Scheduling in Cloud Environment,” Energy Procedia, vol. 141, pp. 222–227, 2017.
[15] S. Rashmi and A. Basu, “Resource optimised workflow scheduling in Hadoop using stochastic hill climbing technique,” IET Softw., vol. 11, no. 5, pp. 239–244, 2017.
[16] J. Jiang, Y. Lin, G. Xie, and L. Fu, “Time and Energy Optimization Algorithms for the Static Scheduling of Multiple Workflows in Heterogeneous Computing System,” J. Grid Comput., vol. 15, no. 4, pp. 435–456, 2017. https://doi.org/10.1007/s10723-017-9391-5
[17] G. Xie, G. Zeng, X. Xiao, and R. Li, “Energy-efficient Scheduling Algorithms for Real-time Parallel Applications on Heterogeneous Distributed Embedded Systems,” IEEE Trans. Parallel Distrib. Syst., vol. 28, no. 12, pp. 3426–3442, 2017.
[18] Y. Shen, Z. Bao, X. Qin, and J. Shen, “Adaptive task scheduling strategy in cloud: when energy consumption meets performance guarantee,” World Wide Web, vol. 20, no. 2, pp. 155–173, 2017.
[19] Z. Zhong, K. Chen, X. Zhai, and S. Zhou, “Virtual machine-based task scheduling algorithm in a cloud computing environment,” Tsinghua Sci. Technol., vol. 21, no. 6, pp. 660–667, 2016.
[20] N. Kaur and S. Singh, “A Budget-constrained Time and Reliability Optimization BAT Algorithm for Scheduling Workflow Applications in Clouds,” Procedia Comput. Sci., vol. 98, pp. 199–204, 2016.
[21] L. Zuo, L. Shu, S. Dong, C. Zhu, and T. Hara, “A multi-objective optimization scheduling method based on the ant colony algorithm in cloud computing,” IEEE Access, vol. 3, pp. 2687–2699, 2015.
[22] Z. Dong, N. Liu, and R. Rojas-Cessa, “Greedy scheduling of tasks with time constraints for energy-efficient cloud-computing data centers,” J. Cloud Comput., vol. 8, no. 8, pp. 1–14, 2015. https://doi.org/10.1186/s13677-015-0031-y
[23] Y. Xu, K. Li, J. Hu, and K. Li, “A genetic algorithm for task scheduling on heterogeneous computing systems using multiple priority queues,” Inf. Sci., vol. 270, pp. 255–287, 2014.
[24] P. Lindberg, J. Leingang, D. Lysaker, S. U. Khan, and J. Li, “Comparison and analysis of eight scheduling heuristics for the optimization of energy consumption and makespan in large-scale distributed systems,” J. Supercomput., vol. 59, no. 1, pp. 323–360, 2012.
[25] O. Kramer, Genetic Algorithm Essentials. Springer International Publishing AG, 2017.
[26] M. Couceiro and P. Ghamisi, Fractional Order Darwinian Particle Swarm Optimization: Applications and Evaluation of an Evolutionary Algorithm. Springer, 2016.
[27] W. Luo, X. Lin, T. Zhu, and P. Xu, “A clonal selection algorithm for dynamic multimodal function optimization,” Swarm Evol. Comput., 2018. https://doi.org/10.1016/j.swevo.2018.10.010
[28] L. N. De Castro and F. J. Von Zuben, “Learning and Optimization Using the Clonal Selection Principle,” IEEE Trans. Evol. Comput. Spec. Issue Artif. Immune Syst., vol. 6, no. 3, pp. 239–251, 2002.
[29] X.-S. Yang, “A New Metaheuristic Bat-Inspired Algorithm,” in Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), J. R. Gonz, D. A. Pelta, C. Cruz, G. Terrazas, and N. Krasnogor, Eds. 2010, pp. 65–74.
[30] X.-S. Yang and X. He, “Bat algorithm: literature review and applications,” Int. J. Bio-Inspired Comput., vol. 5, no. 3, 2013.
[31] C. Yang, K. Wang, H. Cheng, C. Kuo, and W. C. C. Chu, “Green Power Management with Dynamic Resource Allocation for Cloud Virtual Machines,” in IEEE International Conference on High Performance Computing and Communications, 2011, pp. 726–733.
[32] A. Beloglazov, J. Abawajy, and R. Buyya, “Energy-aware resource allocation heuristics for efficient management of data centers for Cloud computing,” Futur. Gener. Comput. Syst., vol. 28, no. 5, pp. 755–768, 2012. https://doi.org/10.1016/j.future.2011.04.017