
Cluster Comput
https://doi.org/10.1007/s10586-017-1479-y

Task scheduling of cloud computing using integrated particle swarm algorithm and ant colony algorithm

Xuan Chen(1) · Dan Long(2)

Received: 29 September 2017 / Revised: 25 November 2017 / Accepted: 3 December 2017
© Springer Science+Business Media, LLC, part of Springer Nature 2017

Abstract  The optimization of task scheduling in cloud computing is built with the purpose of improving its working efficiency. Aiming at resolving the deficiencies of method deployment, supporting algorithms are introduced. This paper proposes a particle swarm optimization algorithm combined with ant colony optimization, which introduces the parameter determination into the particle swarm algorithm. The integrated algorithm is capable of keeping the fitness of particles at a certain concentration and guarantees the diversity of the population. Further, the global best solution can be gained with high convergence accuracy by adjusting the learning factor. After the implementation of the proposed method in task scheduling, the scheme shows better performance in fitness, cost and running time, which presents a more reliable and efficient approach to optimal task scheduling.

Keywords  Cloud computing · Particle swarm algorithm · Ant colony optimization · Task scheduling

Corresponding author: Xuan Chen, [email protected]
(1) Zhejiang Industry Polytechnic College, Shaoxing 312006, China
(2) Zhejiang University, Hangzhou 310007, China

1 Introduction

Cloud computing has been studied for several years, since Ramnath Chellappa's work [1], and has been developed in several different fields by a number of researchers in the past decade [2]. Cloud computing in commercial use must balance the requirements of fast response, cost control and algorithm reliability [3]. The trend towards highly integrated cloud computing systems is gradually being implemented to back up or replace traditional data analysis strategies [4,5]. Thus, the task scheduling step is deemed best able to meet the efficiency and stability demands. Nowadays, the promises of cloud computing have led to the development of task scheduling schemes that employ intelligent algorithms as well as statistical experience [6]. The architecture of task scheduling must be sufficiently robust to a huge number of tasks so as to ensure the computing capability [7]. The devising of task scheduling calls for more efficient, more reliable and more convenient allocation techniques, along with ever shorter times to commercial use and an increasing demand for quality and advanced functionality. As a result, researchers continue to pursue better task scheduling methodologies.

The idea of the task scheduling algorithm is based on filtering a given set of mutually exclusive constraint tasks {1, ···, p} with a mapping from start_i to Conflict_i for each task i, where start_i is the start time and Conflict_i describes the conflicting (preceding) tasks of task i. On the other hand, deal_ij is the time needed by the jth resource point to process the ith task in the node sets. The start time is defined as

    start_i = 0                          if Conflict_i = φ
    start_i = Σ_{k=1}^{p} deal_kj        if Conflict_i ≠ φ          (1)

Each task is performed after the completion of the previous tasks, except when Conflict_i is empty.

In this case, a specific task scheduling time is divided into two parts: the suspending time for all the previous tasks in the processing sequence, and the performing time for the current task. For the working period representation, the finishing time end_i is the sum of the aforementioned parts:

    end_i = start_i + deal_ij · b_ij                                (2)

together with

    deal_ij = length_i / capacity_j                                 (3)

where length_i and capacity_j are the time duration of the task and the processing capacity of the node, respectively. The variable b_ij stands for the assignment of processing and is normalized in [0, 1] within this research. If b_ij = 1, task i is carried out at resource point j; if b_ij = 0, the task is suspended. Specifically, each task corresponds to only one resource node and vice versa.

Let m stand for the number of resources and n for the number of tasks. The computation of different resources is constrained by the following formula:

    Σ_{j=1}^{m} b_ij = 1                                            (4)

On the basis of Eqs. (2) and (3), the processing order can be determined. In general, the time delay for resource switching and instruction transmitting, from the resource node to the task, can be ignored. Thus, the processing time from the initial state to node j is

    over_j = Σ_{i=1}^{n} deal_ij · b_ij                             (5)

The processing proceeds until all tasks in the system are traversed and addressed. Thereby, the time identification of task scheduling depends on the maximum value of the processing duration. Seeing that the handling of a resource point contains the conducting of its tasks, we also define:

    time = max_{1≤i≤n} {end_i} = max_{1≤j≤m} {over_j}               (6)

The processing time for the task scheduling basis mentioned above is easy to implement based on Eq. (5). However, the task scheduling algorithm developed here is not obtained as a stand-alone solution in cloud computing; it is also based on the integration of other methods. By optimizing the working performance of the task scheduling, a better computing result can be obtained. Therefore, the improvement of task scheduling plays an important role in ongoing research [8–11]. In [12], a possible solution for this issue is proposed (related work is also seen in [13]): with the application of virtual machine resources, the tasks are re-allocated and the experimental outcomes are presented. Therefore, the employment of such algorithms provides an opportunity for further development and analysis of task scheduling, which can be helpful in cloud computing.

The remainder of this paper is organized as follows: Section 2 gives a brief description of the theory of the particle swarm algorithm and the ant colony algorithm. Section 3 presents the methodology for task scheduling based on the integrated particle swarm optimization algorithm. The experiment established to verify the proposed scheme is presented in Sect. 4. Concluding remarks are given in Sect. 5.

2 Related work

In this section, we briefly describe the theory of the particle swarm algorithm and its adjustment, as well as ant colony behavior.

2.1 Particle swarm optimization (PSO)

The particle swarm optimization (PSO) model was originally defined for the study of the group hunting of birds [14]. With the definition of iterative evolution, each particle corresponds to a potential solution with a certain position and speed, while the particle keeps flying to increase the function fitness until an optimal position is selected [15]. The moving position and speed are determined by two extreme values, i.e., the individual extreme value and the global extreme value.

Let x be the position of a particle and v be its speed. The evolution of PSO can be described as

    v_t(k+1) = w·v_t(k) + c1·r1·(x_t^PB(k) − x_t(k)) + c2·r2·(x_t^GB(k) − x_t(k))   (7)

together with

    x_t(k+1) = x_t(k) + v_t(k+1)                                    (8)

where the subscript t is the serial number of the particle sample, and the superscripts PB and GB denote the individual extreme value and the global extreme value, respectively. The inertia weight coefficient w is in the range [0.4, 0.9] and k refers to the iteration index. c1 and c2 represent acceleration constants, while r1 and r2 are random factors ranging from 0 to 1.

2.2 Improvement of PSO with the Gaussian copula function

The particle swarm algorithm is sometimes combined with a genetic algorithm (GA) to improve its performance. During the running of the PSO algorithm, when a particle reaches an optimal position, other particles will soon get close to it. Supposing the current optimal position is a local optimum, the particle swarm will sink into the local optimum and become unable to search the solution space. Consequently, the genetic algorithm is applied to resolve the issues of convergence rate and inertia weight and to obtain diversity in the outcome [16]. Nevertheless, the selection of GA parameters generally depends on experience, which may result in a vulnerable convergence stage [17]. In this research, we employ the Gaussian copula function for globally optimizing the solution.

Let (u, v) ∈ [0, 1]^2. A two-dimensional Gaussian copula is expressed as

    C_ρ(u, v) = Φ_ρ(Φ^{-1}(u), Φ^{-1}(v))                           (9)

where Φ and Φ_ρ represent a standard normal distribution function and a two-dimensional normal joint distribution function, respectively, while ρ is a linear dependence coefficient varying from −1 to 1. In line with Eq. (9), the corresponding two-dimensional normal density is

    f(x1, x2) = (2π·σ1·σ2·√(1−ρ^2))^{-1}
                · exp{ −1/(2(1−ρ^2)) · [ (x1−a)^2/σ1^2 − 2ρ(x1−a)(x2−b)/(σ1·σ2) + (x2−b)^2/σ2^2 ] }

Based on this characteristic, the Gaussian copula can be calculated through Eq. (10):

    C_ρ(u, v) = ∫_{−∞}^{Φ^{-1}(u)} ∫_{−∞}^{Φ^{-1}(v)} 1/(2π·(1−ρ^2)^{1/2})
                · exp( −(s^2 − 2ρ·s·t + t^2) / (2(1−ρ^2)) ) ds dt   (10)

Similarly, the symmetry of the normal distribution can be expressed exactly by the following formula:

    C_ρ(r1, r2) = Φ_ρ(Φ^{-1}(r1), Φ^{-1}(r2)) = C_ρ(r2, r1)         (11)

We set r1 = r2 in the scope [0, 1] for initialization, whereas the variation of these parameters is not constrained but depends on the optimization outcome. The correlation coefficient ρ is set to 0, 0.5, 0.8 and 1, respectively. The values of r1 and r2 are plotted on the X axis and Y axis, respectively, and the samples approach the diagonal with the increase of ρ (Fig. 1). Thus, the trend of PB and GB is in conformity with the change of r1 and r2, which can be controlled through the regulation of ρ.

2.3 Ant colony optimization (ACO)

Ant colony behavior is a state-of-the-art model applied in swarm behavior [18]. Ants appear to move randomly and without purpose. However, when all behaviors in the ant colony are considered, a kind of collective intelligence for problem resolution emerges. Due to the development of swarm intelligence, ant colony optimization is becoming a supporting method: a probabilistic technique for solving computational problems via path choosing [19]. Ant colony optimization was initially proposed by Marco Dorigo in 1992 [20]. The algorithm originally aimed at choosing the shortest route leading from a colony to a food source. After that, the capability of ant colony optimization was recognized and diversified to solve a wider class of numerical problems. For particle swarm optimization, it provides rapid discovery of a good solution while avoiding premature convergence in the early stages [21]. The integration of these two methods utilizes the mechanism of a food source that birds are attracted to. All birds are aware of the location of the target and tend to move towards it. Similarly, each member also gets to know the globally best position that one of the members has found. By changing the speed of each member, they move in rhythmic synchronization and finally land on the target.

3 Implementation of the PSO algorithm in task scheduling based on ant colony behavior

Task execution in cloud computing is based on "decomposition" and "scheduling", which means a larger task is decomposed into many smaller subtasks. Afterwards, a certain number of resource nodes are distributed for these subtasks to perform them. Finally, the running results are sent back to the users. We suppose that the runs of the tasks are mutually independent. The aim of task scheduling is to distribute reasonable resources to every task and minimize the total task finishing time.

3.1 Particle coding

In order to employ the particle swarm algorithm and the ant colony algorithm to complete task scheduling, the particles should be coded first. We suppose that there are a tasks and r resources, with a > r. The coded sequence is (x1, x2, ···, xa) (xk ∈ [1, r]), i.e., a particle where each task has a corresponding resource. Then all tasks run by every resource are obtained by decoding the particles.

The matrix ETC_ij is defined as follows to represent the expected execution time of task i at resource j:

    ETC = [ ETC_11  ETC_12  ···  ETC_1r
            ETC_21  ETC_22  ···  ETC_2r
            ···     ···     ···  ···
            ETC_a1  ETC_a2  ···  ETC_ar ]
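As a concrete illustration of the coding scheme in Sect. 3.1, the following minimal sketch (assumed names, not the authors' code) decodes a coded sequence into the set of tasks assigned to each resource and totals their expected execution times from an ETC matrix:

```python
# Hypothetical sketch of Sect. 3.1: decode a particle (one resource index
# per task) into per-resource task lists, then total the expected
# execution times from the ETC matrix. All names are illustrative.

def decode(particle, r):
    """particle[k] in 1..r is the resource assigned to task k (0-based task index)."""
    tasks_of = {j: [] for j in range(1, r + 1)}
    for task, resource in enumerate(particle):
        tasks_of[resource].append(task)
    return tasks_of

def expected_load(particle, etc):
    """Sum ETC[i][j-1] over the tasks i assigned to each resource j."""
    r = len(etc[0])
    return {j: sum(etc[i][j - 1] for i in tasks)
            for j, tasks in decode(particle, r).items()}

etc = [[4.0, 8.0],   # ETC[i][j]: expected time of task i at resource j (made up)
       [6.0, 3.0],
       [2.0, 5.0]]
print(expected_load([1, 2, 1], etc))   # tasks 0 and 2 on resource 1, task 1 on resource 2
```

Decoding the particle (1, 2, 1) gives resource 1 the tasks {0, 2} with expected load 4.0 + 2.0 = 6.0, and resource 2 the task {1} with load 3.0.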
Fig. 1 Variation of random factor based on Gaussian copula (four panels a–d)

The matrix RTC_ij is defined as follows to record the actual execution time of task i at resource j:

    RTC = [ RTC_11  RTC_12  ···  RTC_1r
            RTC_21  RTC_22  ···  RTC_2r
            ···     ···     ···  ···
            RTC_a1  RTC_a2  ···  RTC_ar ]

The total task completion time at resource j is denoted in Eq. (12), where i represents the ith task at resource j, and a represents the total number of tasks executed at resource j:

    resourceTime(j) = Σ_{i=1}^{a} RTC_ij   (1 ≤ j ≤ r)              (12)

The completion time of all tasks after the whole schedule is completed is calculated by Eq. (13):

    taskTime = max_j resourceTime(j)   (1 ≤ j ≤ r)                  (13)

3.2 Particle swarm initialization

Suppose that a represents the number of tasks in a cloud computing system, r represents the number of resources, and s is the number of particles. Let the position of particle i be

    x_i = [ x_11  x_12  ···  x_1r
            x_21  x_22  ···  x_2r
            ···   ···   ···  ···
            x_a1  x_a2  ···  x_ar ]

where x_ij ∈ {0, 1}, Σ_{j=1}^{r} x_ij = 1 for all i, 1 ≤ i ≤ a and 1 ≤ j ≤ r. The speed of particle i is

    v_i = [ v_11  v_12  ···  v_1r
            v_21  v_22  ···  v_2r
            ···   ···   ···  ···
            v_a1  v_a2  ···  v_ar ]

with v_ij ∈ [−vmax, vmax], 1 ≤ i ≤ a and 1 ≤ j ≤ r. When initialization is executed, s matrices (x_ij)_{a×r} and s matrices (v_ij)_{a×r} are randomly generated as the initial positions and speeds of the particles.
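The initialization of Sect. 3.2 and the completion-time measures of Eqs. (12)–(13) can be sketched as follows; the function and variable names are assumptions for illustration, and the RTC values are made up:

```python
import random

# Sketch of Sect. 3.2 and Eqs. (12)-(13): a particle's position is an
# a x r 0/1 matrix with exactly one resource per task, and taskTime is the
# largest per-resource total of actual execution times RTC.

def init_position(a, r, rng=random):
    x = [[0] * r for _ in range(a)]
    for i in range(a):
        x[i][rng.randrange(r)] = 1          # enforce sum_j x[i][j] = 1
    return x

def init_speed(a, r, vmax, rng=random):
    return [[rng.uniform(-vmax, vmax) for _ in range(r)] for _ in range(a)]

def task_time(rtc, x):
    a, r = len(rtc), len(rtc[0])
    resource_time = [sum(rtc[i][j] * x[i][j] for i in range(a))
                     for j in range(r)]     # Eq. (12)
    return max(resource_time)               # Eq. (13)

rtc = [[4.0, 8.0], [6.0, 3.0], [2.0, 5.0]]  # made-up RTC: 3 tasks, 2 resources
x = [[1, 0], [0, 1], [1, 0]]
print(task_time(rtc, x))
```

For this assignment, resource 1 finishes at 4.0 + 2.0 = 6.0 and resource 2 at 3.0, so taskTime is 6.0.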
3.3 Fitness function

Each particle has a fitness to judge its quality [22,23]. The selection of the fitness function is not fixed: it should be reasonably chosen according to the problem to be solved, the optimization objective, the computational costs and other requirements. In general, the fitness function should possess characteristics such as monotonicity, non-negativity and maximization. In this paper, the total task completion time is optimized and serves as the judgment standard for task scheduling in cloud computing. Therefore, the fitness function is defined as in Eq. (14), where s is the population size:

    fitness(i) = 1 / taskTime(i)   (i = 1, 2, ···, s)               (14)

3.4 Combination of ant colony behavior and the PSO algorithm

To further improve the generation quality of the initial pheromone and the convergence speed in the last stage of the ant colony algorithm, partial crossover optimization can be applied to the preceding particle swarm algorithm [24]. A selection probability is given to every particle according to its fitness value, and crossover and mutation operations are performed on the selected particles. The positions of the two children generated by the crossover operation are

    x_child,1(t) = p·x_parent,1(t) + (1−p)·x_parent,2(t)            (15)
    x_child,2(t) = p·x_parent,2(t) + (1−p)·x_parent,1(t)            (16)

where x_child represents the positions of the child particles, x_parent represents the positions of the parent particles, and p is a random number from 0 to 1.

The speeds of the progeny particles are

    v_child,1(t) = (v_parent,1(t) + v_parent,2(t)) / |v_parent,1(t) + v_parent,2(t)| · v_parent,1(t)   (17)
    v_child,2(t) = (v_parent,1(t) + v_parent,2(t)) / |v_parent,1(t) + v_parent,2(t)| · v_parent,2(t)   (18)

In these formulas, v_child represents the speeds of the child particles, and v_parent represents the speeds of the parent particles.

3.5 Implementation steps

Step 1: Define the related parameter values in the algorithm and randomly generate the initial population.
Step 2: Code the particles and initialize the population.
Step 3: Randomly divide the group into two subgroups, and adopt different processing methods for the different subgroups.
Step 4: For each subgroup, use the corresponding algorithm to update the positions and speeds of the particles. After that, merge the two subgroups.
Step 5: Judge whether the stop conditions are met or the iteration limit has been reached.
Step 6: If the stop conditions are not met and the iteration limit has not been reached, execute Step 3. Otherwise, execute Step 7.
Step 7: Select the top 10% of individuals with the best fitness values from Step 4 and generate the initial pheromone of the ant colony algorithm.
Step 8: Establish the task scheduling model of the ant colony algorithm and initialize the parameters of the algorithm.
Step 9: Every ant selects transfer nodes, updates the local pheromone and adds the selected nodes to its tabu list.
Step 10: When all ants have completed a cycle, update the global pheromone.
Step 11: Evaluate whether the stop conditions are met or the iteration limit has been reached.
Step 12: If the stop conditions are not met and the iteration limit has not been reached, execute Step 9. Otherwise, the global best solution is obtained.

4 Case analysis

4.1 Experimental setup

To illustrate the advantages of the PSO–ACO algorithm, the proposed algorithm is tested in comparison with traditional scheduling algorithms. In the laboratory setup, five computers with Intel Core i3 processors, 4 GB DDR3 memory and 500 GB hard-drive capacity are used as research objects. One computer is set up as a server, and the other four computers imitate cloud-based clients. All experimental parameters are shown in Tables 1, 2 and 3.

Table 1 Parameters of the PSO–ACO algorithm

    Parameter name                                Symbol   Value
    Population size                               s        100
    Number of resources                           r        5
    Number of tasks                               a        [2, 110000]
    Controlling factor                            k        3.0
    Maximum inertia weight                        wmax     0.9
    Minimum inertia weight                        wmin     0.4
    Initial iterative value of c1                 c1s      2.0
    Last iterative value of c1                    c1e      0.5
    Initial iterative value of c2                 c2s      1.5
    Last iterative value of c2                    c2e      3.0
    Crossover probability                         Pc       0.9
    Mutation probability                          Pm       0.02
    Maximum iteration time in the first stage     t1max    50
    Maximum iteration time in the second stage    t2max    150
    Heuristic factor                              α        1.0
    Heuristic factor                              β        1.0
    Volatilization factor                         ρ        0.2

Table 2 Parameters of the ACO algorithm

    Parameter name                Symbol   Value
    Number of resources           r        5
    Number of tasks               a        [2, 110000]
    Heuristic factor              α        1.0
    Heuristic factor              β        1.0
    Volatilization coefficient    ρ        0.2
    Maximum iteration time        tmax     200

Table 3 Parameters of the PSO algorithm

    Parameter name                 Symbol   Value
    Population size                s        100
    Number of resources            r        5
    Number of tasks                a        [2, 1100000]
    Controlling factor             k        3.0
    Maximum inertia weight         wmax     0.9
    Minimum inertia weight         wmin     0.4
    Initial iterative value of c1  c1s      2.0
    Last iterative value of c1     c1e      0.5
    Initial iterative value of c2  c2s      1.5
    Last iterative value of c2     c2e      3.0
    Maximum iteration time         tmax     200

4.2 Solution of optimal task scheduling based on ACO–PSO

(1) Iteration frequency determination. Figure 2 describes the iteration time comparison of the PSO–ACO algorithm with the PSO and ACO algorithms under different task sizes. When the task scale is less than 30,000, the iteration time of the algorithm in this paper does not differ much from the other algorithms. When the task scale is more than 150,000, the actual iteration time of the PSO–ACO algorithm is between 85 and 90, i.e., the best solution can be gained without approximately 10% of the optimization. This analysis demonstrates that the PSO–ACO algorithm has certain advantages for large-scale task scheduling.

Fig. 2 Iteration time comparison of three algorithms (iteration time vs. number of tasks, in ten thousands, for PSO, ACO and PSO–ACO)

(2) Variance determination. To better verify the superiority of the PSO–ACO algorithm over the other two algorithms, we executed the three algorithms 100 times each and then performed a variance statistical analysis on the best solutions gained. Figure 3 shows that the best solution is mainly the minimum value of the solution execution costs over the previous iterations. Because different tasks have different lengths, the calculated variances differ. The variance of the best solution of the PSO–ACO algorithm remains more stable than that of the other algorithms as the task size increases, which shows that it effectively adapts to task distribution in cloud computing.

Fig. 3 Variance comparison of three algorithms (variance ×1000 vs. number of tasks, in ten thousands)

4.3 Experimental results analysis

Experiments are conducted to better illustrate the efficiency of the algorithm. A certain number of nodes are established, while the number of tasks is twice the number of nodes. The PSO–ACO algorithm, the ACO algorithm and the PSO algorithm are adopted for contrast experiments under the same conditions.

(1) Fitness function evaluation. With the number of cloud computing resources equal to 20 and the number of tasks equal to 20,000, the solution curves of the best scheduling schemes of the three algorithms are shown in Fig. 4. The PSO–ACO algorithm reaches the best scheme at time 130, the PSO algorithm at time 150, and the ACO algorithm at time 170. At the same time, the extreme disturbance, inertia weight and learning factor are improved to increase the local convergence speed of the algorithm.

Fig. 4 Iteration curves of three algorithms (fitness function value vs. iteration time for ACO, PSO and PSO–ACO)

(2) Completion time evaluation. In the cloud computing system with the number of cloud resources at 50 and the number of tasks from 20,000 to 100,000, the completion times of the best cloud computing resource scheduling schemes of the ACO algorithm, the PSO algorithm and the PSO–ACO algorithm are shown in Fig. 5. As the number of tasks increases, the competition between tasks becomes fiercer, the conflict probability of the tasks increases, and the task completion times of all the algorithms increase. The results show that, by improving the particle swarm algorithm, the PSO–ACO algorithm can improve the convergence speed, avoid local best solutions, and is more suitable for solving the cloud computing resource scheduling problem for large-scale tasks.

(3) Execution costs evaluation. An important standard for measuring algorithm performance in cloud computing is how effectively the execution costs are reduced. Execution costs mainly refer to the costs of every task consumed in the distributed virtual machines. Figure 6 shows that there is not much difference between the execution costs of the PSO–ACO algorithm and the execution cost rates of the other algorithms when the task scale is not large. The differences between the three curves increase with the task scale, which indicates that the proposed algorithm can adapt to tasks of larger scale.
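To tie the pieces above together, here is a minimal, self-contained sketch of the PSO stage of the scheduling loop: standard PSO updates (Eqs. 7–8) minimizing taskTime, i.e., maximizing the fitness of Eq. (14). The continuous-to-discrete rounding, the tiny made-up instance, and all names are assumptions for illustration; the ACO refinement stage and the crossover of Sect. 3.4 are omitted.

```python
import random

# Illustrative-only sketch: plain PSO searching for a task-to-resource
# assignment that minimizes the makespan (taskTime). Positions are
# continuous and rounded to resource indices; this discretization is an
# assumption, not necessarily the authors' exact encoding.

def task_time(assign, etc):
    """Makespan: largest per-resource sum of execution times."""
    r = len(etc[0])
    return max(sum(etc[i][j] for i, a in enumerate(assign) if a == j)
               for j in range(r))

def pso_schedule(etc, swarm=20, iters=60, w=0.7, c1=2.0, c2=2.0, seed=1):
    rng = random.Random(seed)
    a, r = len(etc), len(etc[0])
    decode = lambda x: [min(r - 1, max(0, round(p))) for p in x]
    X = [[rng.uniform(0, r - 1) for _ in range(a)] for _ in range(swarm)]
    V = [[0.0] * a for _ in range(swarm)]
    P = [x[:] for x in X]                                  # personal bests (PB)
    g = min(P, key=lambda x: task_time(decode(x), etc))[:] # global best (GB)
    for _ in range(iters):
        for s in range(swarm):
            for d in range(a):                             # Eq. (7), then Eq. (8)
                V[s][d] = (w * V[s][d]
                           + c1 * rng.random() * (P[s][d] - X[s][d])
                           + c2 * rng.random() * (g[d] - X[s][d]))
                X[s][d] += V[s][d]
            if task_time(decode(X[s]), etc) < task_time(decode(P[s]), etc):
                P[s] = X[s][:]
                if task_time(decode(P[s]), etc) < task_time(decode(g), etc):
                    g = P[s][:]
    return decode(g), task_time(decode(g), etc)

etc = [[4.0, 8.0], [6.0, 3.0], [2.0, 5.0], [5.0, 4.0]]   # made-up instance
best, t = pso_schedule(etc)
print(best, t)
```

On this four-task, two-resource instance, the assignment (0, 1, 0, 1) achieves the optimal makespan of 7.0; the PSO search can do no better than that bound.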
Fig. 5 Completion time comparison of three algorithms (completion time in seconds vs. number of tasks, in ten thousands, for ACO, PSO and PSO–ACO)

Fig. 6 Execution costs comparison of three algorithms (execution cost rate in % vs. number of tasks, in ten thousands)

5 Conclusions

This work has presented and extended task scheduling with the PSO–ACO algorithm in cloud computing. Though a lot remains to be done in establishing a comprehensive task scheduling framework, the nature of PSO algorithms has to be preserved while the convergence and iteration of such algorithms are optimized. From the analysis it is also clear that ACO and the Gaussian copula function can be integrated in consideration of the PSO algorithm's vulnerability. The experiment on task scheduling optimization is carried out in a laboratory environment in order to obtain a practical and feasible improvement scheme based on the combined PSO–ACO algorithm. Experimental results indicate that the improved PSO algorithm in this paper can better solve the fitness, cost and running time issues. The optimization results are clearly better than those gained by the traditional PSO algorithm, thus providing an efficient method for optimal task scheduling.

Future work will address more complex situations where multiple tasks are present, to explore whether the proposed algorithm can be extended to a multi-scheduling method. We would also like to investigate the impact of the PSO–ACO algorithm on other issues.

References

1. Chellappa, R.K.: Intermediaries in cloud-computing: a new computing paradigm. INFORMS Annual Meeting, Dallas, 26–29 Oct 1997
2. Takabi, H.: A semantic based policy management framework for cloud computing environments. Doctoral Dissertation, University of Pittsburgh, Pittsburgh (2013)
3. Yi, P.: Peer-to-peer based trading and file distribution for cloud computing. Doctoral Dissertation, University of Kentucky, Lexington (2014)
4. Egedigwe, E.: Service quality and perceived value of cloud computing-based service encounters: evaluation of instructor perceived service quality in higher education in Texas. Doctoral Dissertation, Nova Southeastern University, Fort Lauderdale (2015)
5. Rochwerger, B., Breitgand, D., Levy, E., et al.: The reservoir model and architecture for open federated cloud computing. IBM J. Res. Dev. 53(4), 1–17 (2009)
6. Nurmi, D., Wolski, R., Grzegorczyk, C., et al.: The eucalyptus open-source cloud-computing system. In: Proceedings of the CRID, pp. 124–131 (2009)
7. Ochwerger, B., Breitgand, D., Levy, E., et al.: The reservoir model and architecture for open federated cloud computing. IBM J. Res. Dev. 53(4), 1–17 (2009)
8. Li, J.Y., Mei, K.Q., Zhong, M., et al.: Online optimization for scheduling preemptable tasks on IaaS cloud systems. J. Parallel Distrib. Comput. 72(2), 666–677 (2012)
9. Etminani, K., Naghibzadeh, M.A.: A min-min max-min selective algorithm for grid task scheduling. In: 3rd IEEE/IFIP International Conference in Central Asia on Internet, pp. 1–7. IEEE Computer Society, Washington (2007)
10. Xie, L.X.: Analysis of service scheduling and resource allocation based on cloud computing. Appl. Res. Comput. 32(2), 528–531 (2015)
11. Shi-yang, Y.: SLA-oriented virtual resources scheduling in cloud computing environment. Comput. Appl. Softw. 32(4), 11–14 (2015)
12. Guo, L., Zhao, S., Shen, S., et al.: Task scheduling optimization in cloud computing based on heuristic algorithm. J. Netw. 7(3), 547–553 (2012)
13. Li, J., Peng, J., Cao, X., et al.: A task scheduling algorithm based on improved ant colony optimization in cloud computing environment. Energy Proc. 10(13), 6833–6840 (2011)
14. Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948 (1995)
15. Graham, J.K.: Combining particle swarm optimization and genetic programming utilizing LISP. Master's Dissertation, Utah State University, Logan (2005)
16. Juang, C.F.: A hybrid of genetic algorithm and particle swarm optimization for recurrent network design. IEEE Trans. Syst. Man Cybern. 34(2), 997–1006 (2004)
17. Eberhart, R., Shi, Y.: Comparing inertia weights and constriction factors in particle swarm optimization. In: Proceedings of the 2000 Congress on Evolutionary Computation, pp. 84–88 (2000)
18. Zuo, L., Shu, L., Dong, S., Zhu, C., Hara, T.: A multi-objective optimization scheduling method based on the ant colony algorithm in cloud computing. IEEE Access 3, 2687–2699 (2015)
19. Deneubourg, J.L., Pasteels, J.M., Verhaeghe, J.C.: Probabilistic behaviour in ants: a strategy of errors. J. Theor. Biol. 105(2), 259–271 (1983)
20. Dorigo, M.: Optimization, learning and natural algorithms. Doctoral Dissertation, Politecnico di Milano, Italy (1992)
21. Prakasam, A., Savarimuthu, N.: Metaheuristic algorithms and probabilistic behaviour: a comprehensive analysis of ant colony optimization and its variants. Artif. Intell. Rev. 45(1), 97–130 (2016)
22. Cha, A.M.: Research on task scheduling based on particle swarm and ant colony algorithm for cloud computing. Master's Dissertation, Nanjing University of Aeronautics and Astronautics, Nanjing (2016)
23. Jiang, M., Luo, Y.P., Yang, S.Y.: Stochastic convergence analysis and parameter selection of the standard particle swarm optimization algorithm. Inf. Process. Lett. 102(1), 8–16 (2007)
24. Gutjahr, W.J.: A graph-based ant system and its convergence. Future Gener. Comput. Syst. 16(8), 873–888 (2000)

Xuan Chen is an associate professor at the School of Design and Art at Zhejiang Industry Polytechnic College. He received a master's degree in Computer Science from the University of Electronic Science and Technology of China in 2013 and has held and presided over 8 projects at the department and municipal level. His research directions are cloud computing, wireless sensing and algorithm design.

Dan Long is a Ph.D. at the Medical Image R&D Center, Faculty of Science, Zhejiang University. He obtained his doctoral degree from Zhejiang University in 2012 and has participated in 2 national-level natural science fund research projects. His research directions are algorithm design and image processing.