Cluster Computing
https://doi.org/10.1007/s10586-017-1479-y

Abstract  The optimization of task scheduling in cloud computing aims to improve its working efficiency. To resolve the deficiencies that arise during method deployment, supporting algorithms are introduced. This paper proposes a particle swarm optimization algorithm combined with ant colony optimization, which incorporates parameter determination into the particle swarm algorithm. The integrated algorithm is capable of keeping the particles' fitness at a certain concentration and guaranteeing the diversity of the population. Further, the global best solution can be obtained with highly accurate convergence by adjusting the learning factor. After the proposed method is applied to task scheduling, the optimized scheme shows better performance in fitness, cost and running time, which offers a more reliable and efficient approach to optimal task scheduling.

Keywords  Cloud computing · Particle swarm algorithm · Ant colony optimization · Task scheduling

Corresponding author: Xuan Chen, [email protected]
1 Zhejiang Industry Polytechnic College, Shaoxing 312006, China
2 Zhejiang University, Hangzhou 310007, China

1 Introduction

Cloud computing has been studied for several years, beginning with Ramnath Chellappa's work [1], and has been developed in several different fields by a number of researchers over the past decade [2]. Cloud computing in commercial use must balance the requirements of fast response, cost control and algorithm reliability [3]. The trend towards highly integrated cloud computing systems is gradually being implemented to back up or replace traditional data analysis strategies [4,5]. The task scheduling step is therefore deemed best able to meet the efficiency and stability demands. Nowadays, the promises of cloud computing have led to the development of task scheduling schemes that employ intelligent algorithms as well as statistical experience [6]. The architecture of task scheduling must be sufficiently robust to huge numbers of tasks so as to ensure the computing capability [7]. Devising task scheduling calls for more efficient, more reliable and more convenient allocation techniques, along with ever shorter times to commercial use and an increasing demand for quality and advanced functionality. As a result, researchers continue to pursue better task scheduling methodologies.

The task scheduling algorithm is based on filtering a given set of mutually exclusive constrained tasks {1, ..., p} with a mapping from start_i to Conflict_i for each task i, where start_i is the start time and Conflict_i is the conflict set of task i. In addition, deal_ij is the time needed by the j-th resource point to process the i-th task in the node set. The start time is defined as

    start_i = 0                       if Conflict_i = φ
    start_i = Σ_{k=1}^{p} deal_kj     if Conflict_i ≠ φ        (1)

Each task is performed after the completion of the previous tasks, except when Conflict_i is empty. A specific task scheduling time is thus divided into two parts: the suspending time for all previous tasks in the processing sequence and the performing time of the current task. For the working-period representation, the finishing time end_i combines the two aforementioned parts:
    end_i = start_i + deal_ij · b_ij        (2)

together with

    deal_ij = length_i / capacity_j         (3)

where length_i and capacity_j are the time duration of the task and the processing capacity of the node, respectively. The variable b_ij stands for the assignment of processing and takes values in {0, 1} in this research. If b_ij = 1, task i is carried out at resource point j; if b_ij = 0, the task is suspended. Specifically, each task corresponds to only one resource node and vice versa.

Let m stand for the number of resources and n for the number of tasks. The assignment across different resources is constrained by the following formula:

    Σ_{j=1}^{m} b_ij = 1        (4)

On the basis of Eqs. (2) and (3), the processing order can be determined. In general, the time delay for resource switching and instruction transmission, from the resource node to the task, can be ignored. Thus, the total processing time at node j is

    over_j = Σ_{i=1}^{n} deal_ij · b_ij        (5)

The processing proceeds until all tasks in the system are traversed and addressed. Thereby, the time identification of task scheduling depends on the maximum value of the processing duration. Seeing that dealing with a resource point includes conducting its tasks, we also define

    time = max_{1≤i≤n} {end_i} = max_{1≤j≤m} {over_j}        (6)

The task scheduling time basis described above is easy to implement based on Eq. (5). However, the task scheduling algorithm developed here is not obtained as a stand-alone solution in cloud computing; it is also based on the integration of other methods. By optimizing the working performance of task scheduling, a better computing result can be obtained. Therefore, the improvement of task scheduling plays an important role in ongoing research [8–11]. Starting from the analysis of this issue in [12], a possible solution is proposed (related work can also be seen in [13]). With the application of virtual machine resources, the tasks are re-allocated and the experimental outcomes are presented. Therefore, the employment of such algorithms provides an opportunity for further development and analysis of task scheduling, which can be helpful in cloud computing.

The remainder of this paper is organized as follows: Section 2 gives a brief description of the theory of the particle swarm algorithm and the ant colony algorithm. Section 3 presents the methodology for task scheduling based on the integrated particle swarm optimization algorithm. The experiment established to verify the proposed scheme is presented in Sect. 4. Concluding remarks are given in Sect. 5.

2 Related work

In this section, we briefly describe the theory of the particle swarm algorithm and its adjustment, as well as ant colony behavior.

2.1 Particle swarm optimization (PSO)

The particle swarm optimization (PSO) model was originally defined for the study of the group hunting of birds [14]. Under iterative evolution, each particle corresponds to a potential solution with a certain position and speed, and the particle keeps flying to increase its fitness until an optimal position is found [15]. The moving position and speed are determined by two extreme values, i.e., the individual extreme value and the global extreme value.

Let x be the position of a particle and v be its speed. The evolution of PSO can be described as

    v_t(k+1) = w·v_t(k) + c1·r1·(x_t^PB(k) − x_t(k)) + c2·r2·(x_t^GB(k) − x_t(k))        (7)

together with

    x_t(k+1) = x_t(k) + v_t(k+1)        (8)

where the subscript t is the serial number of the particle sample, and the superscripts PB and GB denote the individual extreme value and the global extreme value, respectively. The inertia weight coefficient w is in the range [0.4, 0.9], and k refers to the iteration index. c1 and c2 represent acceleration constants, while r1 and r2 are random factors ranging from 0 to 1.

2.2 Improvement of PSO with the Gaussian copula function

The particle swarm algorithm is often combined with the genetic algorithm (GA) to improve its performance. While the PSO algorithm is running, once a particle reaches an optimal position, other particles will soon move close to it. If the current optimal position is only a local optimum, the particle swarm will sink into that local optimum and become unable to search the rest of the solution space. Consequently, the genetic algorithm is applied to resolve the issues of convergence rate and inertia weight and to maintain the diversity of the outcome [16]. Nevertheless, the selection of GA parameters generally depends on experience, which may make the convergence stage vulnerable [17]. In this research, we employ the Gaussian copula function to optimize the solution globally.

Let (u, v) ∈ [0, 1]^2. A two-dimensional Gaussian copula is expressed as

    C_ρ(u, v) = Φ_ρ(Φ^{-1}(u), Φ^{-1}(v))        (9)

where Φ and Φ_ρ represent the standard normal distribution function and a two-dimensional normal joint distribution function, respectively, while ρ is a linear dependence coefficient varying from −1 to 1. In line with Eq. (9), the two-dimensional normal density is

    f(x1, x2) = (2π σ1 σ2 √(1 − ρ²))^{-1} exp{ −1/(2(1 − ρ²)) [ (x1 − a)²/σ1² − 2ρ(x1 − a)(x2 − b)/(σ1 σ2) + (x2 − b)²/σ2² ] }

Based on this characteristic, the Gaussian copula can be computed through Eq. (10):

    C_ρ(u, v) = ∫_{−∞}^{Φ^{-1}(u)} ∫_{−∞}^{Φ^{-1}(v)} (2π √(1 − ρ²))^{-1} exp{ −(s² − 2ρst + t²)/(2(1 − ρ²)) } ds dt        (10)

Similarly, the symmetry of the normal distribution gives exactly

    C_ρ(r1, r2) = Φ_ρ(Φ^{-1}(r1), Φ^{-1}(r2)) = C_ρ(r2, r1)        (11)

2.3 Ant colony behavior

[…] supporting manner, which is a probabilistic technique for solving computational problems via path choosing [19]. Ant colony optimization was initially proposed by Marco Dorigo in 1992 [20]. The algorithm originally aimed at choosing the shortest route leading from a colony to a food source. After that, the capability of ant colony optimization was recognized and diversified to solve a wider class of numerical problems. For particle swarm optimization, it provides rapid discovery of a good solution while avoiding premature convergence in the early stages [21]. The integration of these two methods utilizes the mechanism of a food source that the birds are attracted to. All birds are aware of the location of the target and tend to move towards it. Similarly, each member can also learn the globally best position that one of the members has found. By adjusting the speed of each member, they synchronize rhythmically and finally land on the target.

3 Implementation of the PSO algorithm in task scheduling based on ant colony behavior

Task execution in cloud computing is based on "decomposition" and "scheduling", which means a larger task is decomposed into many smaller subtasks. Afterwards, a certain number of resource nodes are allocated to perform these subtasks. Finally, the running results are sent back to the users. We suppose that the runs of the tasks are mutually independent. The aim of task scheduling is to distribute reasonable resources to every task and minimize the total task finishing time.
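The timing model of Eqs. (3)–(6) can be sketched in a few lines; the task lengths, node capacities and the random assignment below are illustrative values, not the paper's experimental data.

```python
import random

# Sketch of the timing model from Eqs. (3)-(6); sizes and values are illustrative.
random.seed(0)

n_tasks, m_nodes = 6, 3
length = [random.uniform(10.0, 50.0) for _ in range(n_tasks)]   # length_i: task durations
capacity = [random.uniform(1.0, 4.0) for _ in range(m_nodes)]   # capacity_j: node speeds

# deal[i][j]: time the j-th node needs to process task i (Eq. (3))
deal = [[length[i] / capacity[j] for j in range(m_nodes)] for i in range(n_tasks)]

# b[i][j] = 1 iff task i runs on node j; each task gets exactly one node (Eq. (4))
assignment = [random.randrange(m_nodes) for _ in range(n_tasks)]
b = [[1 if assignment[i] == j else 0 for j in range(m_nodes)] for i in range(n_tasks)]

# over_j: total processing time at node j (Eq. (5)); the schedule time is their max (Eq. (6))
over = [sum(deal[i][j] * b[i][j] for i in range(n_tasks)) for j in range(m_nodes)]
total_time = max(over)
print(over, total_time)
```

Minimizing `total_time` over all feasible assignment matrices `b` is exactly the objective the PSO-ACO hybrid searches for.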
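For reference, the PSO update of Sect. 2.1 (Eqs. (7) and (8)) can be sketched for a single particle in a continuous search space; the coefficient values and sample positions are illustrative assumptions, not the paper's settings.

```python
import random

# Minimal sketch of one PSO velocity/position update (Eqs. (7) and (8)).
random.seed(1)
w, c1, c2 = 0.7, 2.0, 2.0   # inertia weight and acceleration constants (illustrative)

def step(x, v, x_pb, x_gb):
    """One update for a particle with position x, speed v, personal best x_pb,
    and global best x_gb."""
    r1, r2 = random.random(), random.random()
    v_new = [w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, x_pb, x_gb)]      # Eq. (7)
    x_new = [xi + vi for xi, vi in zip(x, v_new)]              # Eq. (8)
    return x_new, v_new

x, v = [5.0, -3.0], [0.0, 0.0]
x_pb, x_gb = [1.0, 1.0], [0.0, 0.0]   # personal-best and global-best positions
x, v = step(x, v, x_pb, x_gb)
print(x, v)
```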
Fig. 1  Variation of the random factor based on the Gaussian copula (panels a–d)

The matrix RTC_ij is defined as follows to record the actual execution time of task i at resource j:

    RTC = [ RTC_11  RTC_12  ···  RTC_1r
            RTC_21  RTC_22  ···  RTC_2r
              ···     ···   ···    ···
            RTC_a1  RTC_a2  ···  RTC_ar ]

The total task completion time at resource j is given in Eq. (12), where i indexes the i-th task at resource j and a represents the total number of tasks executed at resource j:

    resourceTime(j) = Σ_{i=1}^{a} RTC_ij      (1 ≤ j ≤ r)        (12)

The completion time of all tasks, after all the scheduling is finished, is calculated by Eq. (13):

    taskTime = max_j resourceTime(j)          (1 ≤ j ≤ r)        (13)

3.2 Particle swarm initialization

Suppose that a represents the number of tasks in a cloud computing system, r represents the number of resources, and s is the number of particles. Let the position of particle i be the a × r matrix

    x_i = [ x_11  x_12  ···  x_1r
            x_21  x_22  ···  x_2r
             ···   ···  ···   ···
            x_a1  x_a2  ···  x_ar ]

where x_ij ∈ {0, 1} and Σ_{j=1}^{r} x_ij = 1 for all i, with 1 ≤ i ≤ a and 1 ≤ j ≤ r. The speed of particle i is

    v_i = [ v_11  v_12  ···  v_1r
            v_21  v_22  ···  v_2r
             ···   ···  ···   ···
            v_a1  v_a2  ···  v_ar ]

with v_ij ∈ [−v_max, v_max], 1 ≤ i ≤ a and 1 ≤ j ≤ r. At initialization, s position matrices (x_ij)_{a×r} and s speed matrices (v_ij)_{a×r} are randomly generated as the initial positions and speeds of the particles.
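The initialization of Sect. 3.2 can be sketched as follows; the sizes a, r, s and the speed bound v_max are illustrative values.

```python
import random

# Sketch of Sect. 3.2: each particle is an a x r 0/1 position matrix with
# exactly one 1 per row (one resource per task) plus an a x r speed matrix
# with entries in [-vmax, vmax]. Sizes below are illustrative.
random.seed(2)
a, r, s, vmax = 4, 3, 5, 2.0   # tasks, resources, particles, speed bound

def init_particle():
    x = [[0] * r for _ in range(a)]
    for i in range(a):
        x[i][random.randrange(r)] = 1     # x_ij in {0, 1}, sum_j x_ij = 1
    v = [[random.uniform(-vmax, vmax) for _ in range(r)] for _ in range(a)]
    return x, v

swarm = [init_particle() for _ in range(s)]
positions = [p[0] for p in swarm]
speeds = [p[1] for p in swarm]
```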
3.3 Fitness function

Each particle has a fitness value used to judge its quality [22,23]. The choice of fitness function is not fixed; it should be selected reasonably according to the problem at hand, the optimization objective, the computational costs and other requirements. In general, the fitness function should be monotonic, non-negative and suitable for maximization. In this paper, the total task completion time is the quantity to be optimized and serves as the judgment standard for task scheduling in cloud computing. Therefore, the fitness function is defined as Eq. (14), where s is the population size:

    fitness(i) = 1 / taskTime(i)      (i = 1, 2, ..., s)        (14)

3.4 Combination of ant colony behavior and the PSO algorithm

To further improve the generation quality of the initial pheromone and the convergence speed in the last stage of the ant colony algorithm, partial crossover optimization can be applied to the preceding particle swarm algorithm [24]. A selection probability is assigned to every particle according to its fitness value, and crossover and mutation operations are performed on the selected particles. The positions of the two children generated by the crossover operation are

    x_child,1(t) = p · x_parent,1(t) + (1 − p) · x_parent,2(t)        (15)
    x_child,2(t) = p · x_parent,2(t) + (1 − p) · x_parent,1(t)        (16)

where x_child represents the positions of the child particles, x_parent represents the positions of the parent particles, and p is a random number from 0 to 1.

The speeds of the child particles are

    v_child,1(t) = (v_parent,1(t) + v_parent,2(t)) / |v_parent,1(t) + v_parent,2(t)| · |v_parent,1(t)|        (17)
    v_child,2(t) = (v_parent,1(t) + v_parent,2(t)) / |v_parent,1(t) + v_parent,2(t)| · |v_parent,2(t)|        (18)

In these formulas, v_child represents the speed of the child particles and v_parent represents the speed of the parent particles.

3.5 Implementation steps

Step 1: Define the related parameter values of the algorithm and randomly generate the initial population.
Step 2: Encode the particles and initialize the population.
Step 3: Randomly divide the group into two subgroups, and adopt different processing methods for the different subgroups.
Step 4: For each subgroup, a specific algorithm is used to update the positions and speeds of the particles. After that, the two subgroups are merged.
Step 5: Judge whether the iteration satisfies the stop conditions or has reached the maximum number of iterations.
Step 6: If the stop conditions are not met and the iteration limit has not been reached, go back to Step 3. Otherwise, go to Step 7.
Step 7: Select the top 10% of individuals with the best fitness values from Step 4 and generate the initial pheromone of the ant colony algorithm.
Step 8: Establish the task scheduling model of the ant colony algorithm and initialize the parameters of the algorithm.
Step 9: Every ant selects transfer nodes, updates the local pheromone and adds the selected nodes to its tabu list.
Step 10: When all the ants complete a cycle, update the global pheromone.
Step 11: Evaluate whether the iteration satisfies the stop conditions or has reached the maximum number of iterations.
Step 12: If the stop conditions are not met and the iteration limit has not been reached, go back to Step 9. Otherwise, the global best solution is obtained.

4 Case analysis

4.1 Experimental setup

To illustrate the advantages of the PSO-ACO algorithm, the proposed algorithm is tested in comparison with traditional scheduling algorithms. In the laboratory setup, 5 computers with Intel Core i3 processors, 4 GB DDR3 memory and 500 GB hard drives are used as research objects. One computer is set up as a server, and the other four computers are used to imitate cloud-based clients. All experimental parameters are shown in Tables 1, 2, and 3.

4.2 Solution of optimal task scheduling based on ACO-PSO

(1) Iteration frequency determination. Figure 2 shows the iteration time comparison of the PSO-ACO algorithm with the PSO and ACO algorithms under different task sizes. When the task scale is less than 30,000, the iteration time of the algorithm in this paper does not differ much from that of the other algorithms. When the task scale is more than 150,000, the actual iteration time of the PSO-ACO algorithm is between 85 and 90, i.e., the best solution can also be gained without the execution of […]
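Returning to Sects. 3.3 and 3.4, the fitness function (Eq. (14)) and the crossover operations (Eqs. (15)–(18)) can be sketched as follows; treating positions and speeds as flat real-valued vectors and the sample parent values are simplifying assumptions for illustration.

```python
import math
import random

# Sketch of the fitness (Eq. (14)) and the crossover of Sect. 3.4 (Eqs. (15)-(18)).
random.seed(3)

def fitness(task_time):
    return 1.0 / task_time                                    # Eq. (14)

def crossover(x1, x2, v1, v2):
    p = random.random()                                       # p ~ U(0, 1)
    xc1 = [p * a + (1 - p) * b for a, b in zip(x1, x2)]       # Eq. (15)
    xc2 = [p * b + (1 - p) * a for a, b in zip(x1, x2)]       # Eq. (16)
    s = [a + b for a, b in zip(v1, v2)]                       # combined parent speed
    norm = math.sqrt(sum(c * c for c in s))                   # assumes v1 + v2 != 0
    unit = [c / norm for c in s]                              # its unit direction
    m1 = math.sqrt(sum(c * c for c in v1))                    # |v_parent,1|
    m2 = math.sqrt(sum(c * c for c in v2))                    # |v_parent,2|
    vc1 = [u * m1 for u in unit]                              # Eq. (17)
    vc2 = [u * m2 for u in unit]                              # Eq. (18)
    return xc1, xc2, vc1, vc2

xc1, xc2, vc1, vc2 = crossover([1.0, 0.0], [0.0, 1.0], [1.0, 2.0], [2.0, 1.0])
print(xc1, xc2, vc1, vc2)
```

Note that each child keeps a parent's speed magnitude while pointing along the parents' combined direction, so diversity in position is introduced without inflating velocities.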
[Fig. 2: iteration time of the ACO, PSO and PSO-ACO algorithms versus the number of tasks]

[Figure: variance (×1000) of RA and PSO-ACO versus the number of tasks (ten thousand)]

[Figure: fitness function value of the ACO, PSO and PSO-ACO algorithms]

[…] able for the solution of the cloud computing resource scheduling problem for large-scale tasks by improving the particle swarm algorithm.

(3) Execution costs evaluation. An important standard to […]
Fig. 6  Execution costs comparison of the three algorithms (execution cost versus the number of tasks, ten thousand)

5 Conclusions

This work has presented and extended task scheduling with the PSO-ACO algorithm in cloud computing. Though much remains to be done in establishing a comprehensive task scheduling framework, the nature of the PSO algorithm has to be preserved while its convergence and iteration behavior are optimized. The analysis also makes it clear that ACO and the Gaussian copula function can be integrated in view of the PSO algorithm's vulnerability. The task scheduling optimization experiment is carried out in a laboratory environment in order to obtain a practical and feasible improvement scheme based on the combined PSO-ACO algorithm. Experimental results indicate that the improved PSO algorithm in this paper better addresses the fitness, cost and running time issues. The optimization results are clearly better than those obtained with the traditional PSO algorithm, thus providing an efficient method for optimal task scheduling.

References

[…]
5. Rochwerger, B., Breitgand, D., Levy, E., et al.: The reservoir model and architecture for open federated cloud computing. IBM J. Res. Dev. 53(4), 1–17 (2009)
6. Nurmi, D., Wolski, R., Grzegorczyk, C., et al.: The eucalyptus open-source cloud-computing system. In: Proceedings of the CRID, pp. 124–131 (2009)
7. Rochwerger, B., Breitgand, D., Levy, E., et al.: The reservoir model and architecture for open federated cloud computing. IBM J. Res. Dev. 53(4), 1–17 (2009)
8. Li, J.Y., Mei, K.Q., Zhong, M., et al.: Online optimization for scheduling preemptable tasks on IaaS cloud systems. J. Parallel Distrib. Comput. 72(2), 666–677 (2012)
9. Etminani, K., Naghibzadeh, M.A.: A min-min max-min selective algorithm for grid task scheduling. In: 3rd IEEE/IFIP International Conference in Central Asia on Internet, pp. 1–7. IEEE Computer Society, Washington (2007)
10. Xie, L.X.: Analysis of service scheduling and resource allocation based on cloud computing. Appl. Res. Comput. 32(2), 528–531 (2015)
11. Shi-yang, Y.: SLA-oriented virtual resources scheduling in cloud computing environment. Comput. Appl. Softw. 32(4), 11–14 (2015)
12. Guo, L., Zhao, S., Shen, S., et al.: Task scheduling optimization in cloud computing based on heuristic algorithm. J. Netw. 7(3), 547–553 (2012)
13. Li, J., Peng, J., Cao, X., et al.: A task scheduling algorithm based on improved ant colony optimization in cloud computing environment. Energy Proc. 10(13), 6833–6840 (2011)
14. Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948 (1995)
15. Graham, J.K.: Combining particle swarm optimization and genetic programming utilizing LISP. Master's thesis, Utah State University, Logan (2005)
16. Juang, C.F.: A hybrid of genetic algorithm and particle swarm optimization for recurrent network design. IEEE Trans. Syst. Man Cybern. 34(2), 997–1006 (2004)
17. Eberhart, R., Shi, Y.: Comparing inertia weights and constriction factors in particle swarm optimization. In: Proceedings of the 2000 Congress on Evolutionary Computation, pp. 84–88 (2000)
18. Zuo, L., Shu, L., Dong, S., Zhu, C., Hara, T.: A multi-objective optimization scheduling method based on the ant colony algorithm in cloud computing. IEEE Access 3, 2687–2699 (2015)
19. Deneubourg, J.L., Pasteels, J.M., Verhaeghe, J.C.: Probabilistic behaviour in ants: a strategy of errors. J. Theor. Biol. 105(2), 259–271 (1983)
20. Dorigo, M.: Optimization, learning and natural algorithms. Doctoral dissertation, Politecnico di Milano, Italy (1992)
21. Prakasam, A., Savarimuthu, N.: Metaheuristic algorithms and probabilistic behaviour: a comprehensive analysis of ant colony optimization and its variants. Artif. Intell. Rev. 45(1), 97–130 (2016)
22. Cha, An-min: Research on task scheduling based on particle swarm and ant colony algorithm for cloud computing. Master's thesis, Nanjing University of Aeronautics and Astronautics, Nanjing (2016)
23. Jiang, M., Luo, Y.P., Yang, S.Y.: Stochastic convergence analysis and parameter selection of the standard particle swarm optimization algorithm. Inf. Process. Lett. 102(1), 8–16 (2007)
24. Gutjahr, W.J.: A graph-based ant system and its convergence. Future Gener. Comput. Syst. 16(8), 873–888 (2000)

Dan Long holds a Ph.D. and works at the Medical Image R&D Center, Faculty of Science, Zhejiang University. He obtained his doctoral degree from Zhejiang University in 2012 and has participated in two national-level natural science fund research projects. His research directions are algorithm design and image processing.