Discrete sparrow search algorithm for symmetric traveling salesman problem

Z. Zhang and Y. Han, Applied Soft Computing 118 (2022) 108469
https://doi.org/10.1016/j.asoc.2022.108469

Article history: Received 18 August 2021; Received in revised form 29 November 2021; Accepted 11 January 2022; Available online 25 January 2022.

Keywords: TSP; Swarm intelligence; Sparrow search algorithm; Global perturbation; 2-opt

Abstract: The traveling salesman problem (TSP) is one of the most intensively studied problems in computational mathematics. This paper proposes a swarm intelligence approach using a discrete sparrow search algorithm (DSSA) with a global perturbation strategy to solve the problem. First, the initial solutions in the population are generated by roulette-wheel selection. Second, an order-based decoding method is introduced to complete the update of the sparrow positions. Then, a global perturbation mechanism combining Gaussian mutation and a swap operator is adopted to balance exploration and exploitation. Finally, a 2-opt local search is integrated to further improve solution quality. Together, these strategies enhance solution quality and accelerate convergence. Experiments on 34 TSP benchmark datasets are conducted to investigate the performance of the proposed DSSA, and statistical tests are used to verify the significant differences between the proposed DSSA and other state-of-the-art methods. Results show that the proposed method is more competitive and robust in solving the TSP.

© 2022 Elsevier B.V. All rights reserved.
In line with the above research, this paper focuses on the innovative application of a new swarm intelligence algorithm to the TSP. The sparrow search algorithm (SSA) is a recently proposed swarm intelligence optimization algorithm inspired by the foraging and anti-predation behavior of sparrow populations [33]. Compared with other algorithms, it has fewer parameters and is easy to tune, and it has been successfully applied in many fields [34–37]. When the literature is examined, no studies applying it to the TSP have been found. The lack of similar works and the increasing enthusiasm for the SSA served as the main motivation for this work.

The basic SSA suffers from reduced convergence speed and loss of population diversity in later iterations, and it easily falls into local optima [33]. To this end, we propose a discrete SSA combined with a global disturbance strategy, coined the DSSA. First, roulette-wheel selection is used to generate the initial solutions and improve their quality. Second, an order-based decoding method is introduced for decoding, while the position-update formulas of the basic SSA are retained during the iterations. Third, a variation perturbation method combining long and short steps helps maintain population diversity and balance diversification and intensification. Finally, the 2-opt operator is used for local search to speed up convergence and further improve solution quality.

The main contributions of this paper are as follows:
a. A novel swarm-based algorithm for solving the TSP, named the DSSA, is proposed by combining several improvement strategies.
b. In the proposed DSSA, a global disturbance strategy consisting of Gaussian mutation and a swap operator is adopted to balance exploration and exploitation.
c. Experimental results and analysis show that the proposed method is more competitive and robust in solving the TSP.

The rest of the paper is organized as follows: A taxonomy-based literature review is given in Section 2. In Section 3, the mathematical model of the TSP is formulated. Section 4 gives an overview of the basic SSA. The proposed DSSA for the TSP is described in Section 5. Section 6 presents the experimental results of the proposed DSSA and compares its performance with other metaheuristic algorithms. Finally, Section 7 concludes the study and highlights future work.

2. Literature review

As a well-known problem in the field of discrete combinatorial optimization, the TSP was proposed in 1930, became popular after 1950 [38], and has been studied intensively and extensively in academia and industry. The main reasons are twofold. One is its inherent practical basis: problems such as vehicle routing [39,40], circuit board printing [41,42], and X-ray crystallography [43] are all practical applications of the TSP. The other is its academic value, i.e., its NP nature. Being NP-hard, the resolution of these problems is a major challenge for the scientific community [10]. There are many heuristic methods in the literature for solving the TSP. They can be categorized as single-solution-based methods and population-based methods. The population-based methods can be mainly grouped into two categories: evolutionary algorithms and swarm intelligence-based (SI) algorithms.

Among single-solution-based methods, Geng et al. [44] combined SA with a greedy search (ASA-GS). Three mutation mechanisms generate feasible neighbor solutions, adaptively adjusting the greedy search time and the acceptance probability of new solutions according to the problem size. Chen et al. [45] designed a variable neighborhood search (VNS) algorithm combined with a stochastic approach, which coupled two variable-neighborhood search mechanisms, implemented in the R language and tested on both symmetric and asymmetric TSP.

In terms of evolutionary algorithms, GA and differential evolution (DE) are two typical examples. Wang et al. [46] improved the basic GA with multiple offspring generation, thus improving the performance of the merit search. Designing different offspring generation mechanisms is also one of the directions for improving GA; different operators are summarized in the literature [47,48]. Hussain et al. [49] designed a cyclic crossover operator for the discrete problem characteristics of the TSP and compared it experimentally with two other classical crossover operators. The experimental results demonstrated the excellence of the proposed operator. Recently, Tripathy et al. [50] developed a multi-objective algorithm based on NSGA-II to solve the covering salesman problem, a generalization of the TSP. Ali et al. [51] proposed a novel design of DE (NDDE) for enhancing the performance of the basic differential evolution algorithm. They compared the technique with other algorithms in solving the TSP and obtained competitive results.

The literature review puts more focus on SI approaches since the DSSA is an SI algorithm. Ahamed et al. [52,53] proposed a heterogeneous adaptive ant colony optimization with a 3-opt local search (HAACO). The proposed algorithm has an advantage over other ACO variants for medium-scale problems. In another study, Yu et al. [54] presented a heterogeneous guided ant colony algorithm (DELSACO) based on two methods, long-short memory and space explosion. Experimental outcomes showed that the algorithm has good stability and high precision for solving the TSP. By introducing a new flight equation and velocity, and relying on the Metropolis acceptance criterion to avoid premature convergence, Zhong et al. [55] discretized the classical PSO (D-CLPSO) and applied it to the large-scale TSP. Osaba et al. [56] improved the structure of the basic BA, introduced 2-opt and 3-opt operators, and proposed an improved BA algorithm (IBA) for the TSP. Similarly, the same authors also discretized the water cycle algorithm (DWCA) [57]. By introducing new explorer spiders and novice spiders, BAŞ and Ülker [38] proposed a discrete social spider algorithm (DSSpA), which also improves solution quality with a 2-opt operator. Ezugwu et al. [16] designed a discrete symbiotic organisms search (DSOS) for the TSP. The approach introduces three mutation operators (exchange, inversion, and insertion) to reconstruct subgroups, thereby improving the algorithm. Zhong et al. [30] developed a discrete version of ABC, which used a threshold acceptance method to balance its intensification and diversification. Panwar et al. [58] proposed a discrete gray wolf optimization (D-GWO), which is applied to discrete optimization problems through simple improvements; its optimization results are better than IBA in some cases. Gunduz et al. [59] made two improvements to the Jaya algorithm: it generates the initial solution through random perturbation and nearest-neighbor insertion, and uses eight transformation operators to replace the previous location-update method. In another study, Ouaarab et al. [6] presented an improved discrete cuckoo search algorithm to solve the TSP. They developed the basic algorithm by restructuring its population and adding a new category of cuckoos. Wang et al. [60] proposed a discrete shuffled frog-leaping algorithm (DSFLA) based on heuristic information, using the TSP as a test problem to evaluate the performance of the algorithm. Indadul et al. [61] proposed a new variant of ABC (ABCSS) to solve the TSP, which improved the performance of the algorithm by combining it with a variety of update rules and k-opt operators. Ahmet et al. [62] integrated the exchange, movement, and symmetric operators with the tree-seed algorithm to form a discrete version of the tree-seed algorithm (DTSA). Akhand et al. [63] proposed a discrete version of the spider monkey optimization (SMO) called
DSMO for solving the TSP efficiently. In DSMO, each spider monkey represents a TSP solution, and related operations, including the swap sequence (SS) and swap operator (SO), are employed to obtain the optimal solution. In another study, Zhong et al. [64] proposed a discrete pigeon-inspired optimization (DPIO) algorithm for the TSP. The DPIO's exploration and exploitation capabilities are improved by redefined operators.

Fig. 1 demonstrates the classification of the above methods. The literature proves the great potential of SI algorithms in solving the TSP. When the literature of recent years is examined, the SSA has rarely been applied to the TSP. This status serves as the main motivation for the research in this article.

3. The traveling salesman problem (TSP)

The TSP is a well-known combinatorial discrete optimization problem and has been extensively studied throughout past years in computer science and operations research [28]. So far, there is still no method that solves this problem efficiently in the general case [58]. The problem aims to find a Hamiltonian cycle through the cities for a salesman to visit. The TSP is described on a complete undirected graph G = (N, E) with a set of nodes N = {1, 2, ..., n} representing the cities. Each edge {i, j} ∈ E has a non-negative cost d_{ij}. Generally, the cost between two cities is the Euclidean distance. The Euclidean distance d between c1 and c2 is calculated by:

d = \sqrt{(c_{1x} - c_{2x})^2 + (c_{1y} - c_{2y})^2}  (1)

whereby (c_{1x}, c_{1y}) and (c_{2x}, c_{2y}) denote respectively the coordinates of cities c1 and c2.

It is noteworthy that the object of this paper is the symmetric TSP, namely the cost of traveling between any two cities is the same in either direction, i.e., d_{ij} = d_{ji}. The mathematical model of the symmetric TSP is presented below [59,60]:

Minimize:

Z = \sum_{i=1}^{n} \sum_{j=1}^{n} d_{ij} x_{ij}  (2)

Subject to:

\sum_{j=1}^{n} x_{ij} = 1, \quad i \in \{1, 2, 3, \ldots, n\}  (3)

\sum_{i=1}^{n} x_{ij} = 1, \quad j \in \{1, 2, 3, \ldots, n\}  (4)

\sum_{i,j \in S} x_{ij} \le |S| - 1, \quad 2 \le |S| \le n - 2, \; S \subset \{1, 2, 3, \ldots, n\}  (5)

x_{ij} \in \{0, 1\}, \quad \forall i, j \in \{1, 2, 3, \ldots, n\}, \; i \ne j  (6)

Eq. (2) is the objective function, which minimizes the total distance traveled by the salesman. If the salesman travels from city i directly to city j, then x_{ij} = 1; otherwise x_{ij} = 0. Eqs. (3) and (4) guarantee that each city is visited exactly once. The constraint in Eq. (5) avoids the formation of sub-tours and ensures that the final route is a single complete closed loop; it is a supplementary condition to Eqs. (3) and (4). Eq. (6) defines the binary decision variables.

4. Overview of sparrow search algorithm (SSA)

The SSA, a new nature-inspired algorithm first proposed by Xue [33] in 2020, is mainly inspired by sparrows' behavior. In nature, many animals search for food and avoid predators through their special swarm intelligence, and the sparrow population is no different. Sparrows divide into two groups according to their fitness, which is defined by each individual's position. Individuals with better fitness are producers; the remaining sparrows are scroungers. Within the population, different individuals exhibit different foraging behaviors. Furthermore, some sparrows are in charge of watching for predators during the foraging process. When confronting danger, they choose to fly far away or move closer to other sparrows. In summary, the sparrow colony obtains more food safely by constantly updating its positions.

The SSA is proposed by imitating the sparrow group's foraging and anti-predation behavior. It has the advantages of few adjustable parameters, strong search capacity, and high efficiency. Thanks to its novel design, the algorithm and its variants achieve remarkable performance on continuous optimization problems compared with other metaheuristic algorithms [34]. The main procedures of the basic SSA are as follows:

Step 1: Initialization and production of the initial solutions. The population size, the maximum number of iterations, the proportion of producers (PD), and the proportion of strongly vigilant sparrows (PV) are set in this step. The initial positions of the sparrow population are generated randomly and arranged as in Eq. (7):

X = \begin{bmatrix} x_{1,1} & \cdots & x_{1,d} \\ \vdots & \ddots & \vdots \\ x_{n,1} & \cdots & x_{n,d} \end{bmatrix}  (7)

In the matrix, n is the population size and d indicates the dimension of the decision parameters. The fitness of every individual is calculated for later operations and stored as an n × 1 matrix.
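For a permutation-encoded route, the objective of Eq. (2) reduces to summing the Euclidean distances between consecutive cities and closing the loop. A minimal sketch, not taken from the paper (note that TSPLIB's EUC_2D instances round each pairwise distance to the nearest integer, which is why the optima reported in Section 6 are integers):

```python
import math

def tour_length(tour, coords):
    """Objective of Eq. (2): total Euclidean length of the closed route
    that visits every city exactly once and returns to the start."""
    n = len(tour)
    return sum(math.dist(coords[tour[k]], coords[tour[(k + 1) % n]])
               for k in range(n))
```

For example, a tour around the four corners of the unit square has length 4.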
Step 2: Based on the fitness values, the whole group is divided into producers and scroungers. Producers update their positions by Eq. (8) and the others by Eq. (9). Table 1 gives the specific meaning of the notations used in Steps 2 and 3.

x_i^{g+1} = \begin{cases} x_i^g \cdot \exp\left(\dfrac{-i}{\alpha \cdot g_{max}}\right), & R < ST \\ x_i^g + Q \cdot L, & R \ge ST \end{cases}  (8)

x_i^{g+1} = \begin{cases} Q \cdot \exp\left(\dfrac{G_{worst} - x_i^g}{i^2}\right), & i > \dfrac{n}{2} \\ S_{best} + \left|x_i^g - S_{best}\right| \cdot A^{+} \cdot L, & \text{otherwise} \end{cases}  (9)

Step 3: After the whole population has updated its positions, some sparrows are selected as scouters responsible for detection and warning; they usually make up 10%–20% of the whole population. Their locations are updated according to Eq. (10).

x_i^{g+1} = \begin{cases} G_{best} + \eta \cdot \left|x_i^g - G_{best}\right|, & f\left(x_i^g\right) > f\left(G_{best}\right) \\ x_i^g + K \cdot \dfrac{\left|x_i^g - G_{worst}\right|}{\left(f\left(x_i^g\right) - f\left(G_{worst}\right)\right) + \sigma}, & f\left(x_i^g\right) = f\left(G_{best}\right) \end{cases}  (10)

Table 1
The meaning of relevant notations in SSA.

Step 2:
  i         i = 1, 2, 3, ..., n.
  g         The current iteration.
  α         A random value in the range [0, 1].
  Q         A random number that follows a normal distribution.
  L         A 1 × d matrix in which every element is 1.
  R         The alarm value.
  ST        The safety threshold.
  G_worst   The current worst location among the sparrows.
  S_best    The current best location among the producers.
  A^+       A is a 1 × d matrix whose elements are randomly assigned 1 or −1; then A^+ = A^T (A A^T)^{−1}.

Step 3:
  G_best    The current global best location.
  η         A random number that obeys the normal distribution (E = 0, σ² = 1).
  K         A random number in the range [−1, 1].
  σ         A small constant (to avoid division by zero).
  f(x_i)    The fitness value of the present sparrow.
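The three update rules above can be sketched as one iteration of the continuous SSA. This is an illustrative reading of Eqs. (8)–(10), not the authors' code; the scouter fraction, the re-sorting by fitness, and the simplification A^+ = A^T / d (valid for a 1 × d row vector A of ±1 entries, since A A^T = d) are assumptions:

```python
import math
import random

def ssa_iteration(X, fit, g_max, ST=0.8, pd_ratio=0.4, eps=1e-8, rng=None):
    """One iteration of the continuous SSA position updates (Eqs. (8)-(10)).
    X: list of d-dimensional positions (lists), fit: fitness values
    (lower is better). Fitness is re-evaluated by the caller afterwards."""
    rng = rng or random.Random(0)
    n, d = len(X), len(X[0])
    order = sorted(range(n), key=lambda k: fit[k])      # best first
    X = [X[k][:] for k in order]
    fit = [fit[k] for k in order]
    g_best, g_worst, s_best = X[0][:], X[-1][:], X[0][:]
    n_prod = int(pd_ratio * n)

    R = rng.random()                                    # alarm value
    for i in range(n_prod):                             # producers, Eq. (8)
        if R < ST:
            alpha = rng.random() + eps
            X[i] = [x * math.exp(-(i + 1) / (alpha * g_max)) for x in X[i]]
        else:
            Q = rng.gauss(0.0, 1.0)
            X[i] = [x + Q for x in X[i]]
    for i in range(n_prod, n):                          # scroungers, Eq. (9)
        if i + 1 > n / 2:
            Q = rng.gauss(0.0, 1.0)
            X[i] = [Q * math.exp((w - x) / (i + 1) ** 2)
                    for x, w in zip(X[i], g_worst)]
        else:
            A = [rng.choice((-1.0, 1.0)) for _ in range(d)]
            # |x - S_best| * A^+ * L with A^+ = A^T (A A^T)^{-1} = A^T / d
            s = sum(abs(x - b) * a for x, b, a in zip(X[i], s_best, A)) / d
            X[i] = [b + s for b in s_best]
    scouts = rng.sample(range(n), max(1, n // 10))      # ~10-20% scouters
    for i in scouts:                                    # Eq. (10)
        if fit[i] > fit[0]:
            eta = rng.gauss(0.0, 1.0)
            X[i] = [b + eta * abs(x - b) for x, b in zip(X[i], g_best)]
        else:
            K = rng.uniform(-1.0, 1.0)
            denom = (fit[i] - fit[-1]) + eps
            X[i] = [x + K * abs(x - w) / denom for x, w in zip(X[i], g_worst)]
    return X
```

The small constant `eps` plays the role of σ in Eq. (10), keeping the denominator nonzero when the current sparrow's fitness equals the worst fitness.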
5.2. Initialization

The initial population plays a vital role in conditioning the speed and convergence of the proposed algorithm. To maintain the diversity of the population, roulette-wheel selection is used to initialize the routes in this study. Roulette-wheel selection [66] is a frequently used method in genetic and evolutionary algorithms. The first city visited is generated by a random function. Each next city is then selected with a probability: the shorter its distance from the previous city, the higher its probability of selection. Cities that have already been visited are added to a tabu list and not visited again. The process repeats until all cities are included in the tabu list.

5.3. Fitness function

In the basic SSA, the degree of adaptation is determined by the specific location of the individual. Sparrows occupying better positions have higher fitness values, and vice versa. To solve the TSP, the length of the route corresponding to a solution is taken as its fitness value; it can be calculated according to Eq. (2) in Section 3. In the TSP, the shorter the route length, the higher the quality of the solution; that is, the route length is inversely proportional to the fitness of the solution.

5.4. Position updating

Unlike other improved algorithms that make drastic changes to the original position-update formulas [8,27,58], this paper keeps the formulas of the basic algorithm. The decoding method described in Section 5.1 is applied to decode the solution after each position update is completed, thus producing a feasible TSP solution. The execution steps are as follows:

(1) First, the initial solution is generated through the roulette-wheel selection introduced in Section 5.2. The coding strategy uses path representation; if the number of cities to be visited is 10, a solution can be expressed as [4 1 10 8 5 3 7 9 2 6].
(2) Calculate the fitness of the individuals in the population according to the fitness function; the higher the quality of the solution, the greater the fitness. According to the fitness, the population is divided into producers and scroungers.
(3) Update the producers' positions, using a generated random number to choose the update rule: if the random number is greater than the alarm value, use the first formula in Eq. (8); otherwise, use the second formula.
(4) Update the scroungers' positions: if the individual's index is less than n/2, the first formula in Eq. (9) is used to complete the update; otherwise, the second formula is used.
(5) Update the scouters' positions: randomly select a certain number of sparrows as scouters. If the fitness of such an individual is greater than the global optimum, use the first formula in Eq. (10); if not, use the second formula to complete the position update.

With the above, the update of the sparrows' positions is completed. However, since the original SSA was designed for continuous optimization problems, the solutions obtained by the above position-update formulas may fall into local optima and converge slowly. To improve this situation, some effective strategies need to be integrated.

5.5. Global disturbance mechanism

Related studies have shown that the SSA easily falls into a local optimum in the later iterations, resulting in slower convergence [34]. To improve the diversity of the population and balance diversification and intensification, the DSSA performs two types of mutation operations within the population: Gaussian mutation and the swap operator. The producer with the worst fitness is called the critical sparrow. In each iteration, the critical sparrow is introduced as the boundary index for the two types of disturbance: if an individual's fitness value is higher than the critical sparrow's, a short-step perturbation is performed; otherwise, a Gaussian mutation perturbation is performed.

Gaussian mutation (GM) [67] forms a new solution by generating random numbers that obey the normal distribution and combining them with the old solution. The Gaussian probability density is:

f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right)  (11)

where µ represents the mean (expectation) of the distribution and σ the standard deviation; here µ and σ are set to 0 and 1, respectively. The dimension of the Gaussian vector is determined by the actual problem scale. The Gaussian mutation is expressed as:

X_{new} = Decoding\{X_i + G(\zeta)\}  (12)

where X_i is the current sparrow position, X_{new} represents the mutated sparrow location, G(ζ) is a Gaussian D-dimensional vector drawn from the probability density of Eq. (11), and Decoding denotes decoding the solution.

In the second mutation, two cities are selected randomly from the current solution and swapped. An illustration of the swap operator is given in Fig. 3. This variation changes only the cities at two positions and is a small-scale perturbation compared with the Gaussian mutation. The pseudo-code of the global disturbance is illustrated in Algorithm 2.
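The initialization and the two perturbations described above can be sketched as follows. This is an illustrative sketch, not the authors' code: the order-based decoding is interpreted here in the common random-key fashion (rank the continuous position values to obtain a permutation), which is an assumption about a step this excerpt does not fully specify:

```python
import math
import random

def roulette_init(coords, rng=None):
    """Roulette-wheel route construction (Section 5.2): start from a
    random city, then repeatedly pick an unvisited city with probability
    inversely proportional to its distance from the current city."""
    rng = rng or random.Random(0)
    n = len(coords)
    tour = [rng.randrange(n)]
    tabu = {tour[0]}
    while len(tour) < n:
        cur = tour[-1]
        cand = [c for c in range(n) if c not in tabu]
        weights = [1.0 / (math.dist(coords[cur], coords[c]) + 1e-12)
                   for c in cand]
        nxt = rng.choices(cand, weights=weights, k=1)[0]
        tour.append(nxt)
        tabu.add(nxt)
    return tour

def decode(position):
    """Order-based decoding (random-key reading, assumed here): rank the
    continuous position values to obtain a city permutation."""
    return sorted(range(len(position)), key=lambda k: position[k])

def gaussian_mutation(position, rng):
    """Long-step perturbation of Eq. (12): add a standard-normal vector
    to the position, then decode the result into a tour."""
    mutated = [x + rng.gauss(0.0, 1.0) for x in position]
    return mutated, decode(mutated)

def swap_mutation(tour, rng):
    """Short-step perturbation: exchange two randomly chosen cities."""
    i, j = rng.sample(range(len(tour)), 2)
    new = list(tour)
    new[i], new[j] = new[j], new[i]
    return new
```

Both perturbations always yield valid permutations: the Gaussian move re-decodes the perturbed keys, while the swap only exchanges two entries.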
Table 2
Parameter settings of DSSA for the TSP.

Parameter        Value    Parameter               Value
Population size  50       Maximum of iterations   200
PD               0.4      PV                      0.2
Alarm value      0.8

6. Results and discussion

In this study, we used a PC with an Intel i5-9600K CPU at 3.7 GHz and 16 GB RAM on 64-bit Windows 10. The DSSA is implemented in MATLAB R2018b (MathWorks, USA). To validate the proposed approach's performance, the DSSA is run on benchmark instances of the symmetric TSP taken from the TSPLIB library [71]. TSPLIB provides the city coordinates of each instance and the best known solution.

To verify the effectiveness of the three improvement strategies in the study, three instances of different sizes were chosen for testing. Table 3 shows the experimental results of our proposed DSSA and its reduced-strategy variants: SSA is the basic sparrow search algorithm; DSSA-1 is the improved algorithm with 2-opt optimization; DSSA-2 adds the roulette-wheel selection strategy to 2-opt; DSSA-3 adds the global disturbance strategy to 2-opt. To ensure the reliability of the results, each
algorithm runs independently 20 times. The value I, calculated by Eq. (13), is introduced to measure the closeness of the average value to the known optimal solution: the smaller the average value obtained over the 20 runs, the larger the I value, and the higher the quality of the solutions. Fig. 7(a) and (b) respectively show the convergence curves and the I-value comparison of these algorithms.

I = \frac{f(BKS)}{f(Average)} \times 100\%  (13)

where f(BKS) indicates the best-known solution and f(Average) indicates the average cost of the solutions found by the algorithm over 20 runs.

Table 3
Comparison of results between DSSA and SSA integrating different strategies on TSP instances.

Instance  Metric  SSA          DSSA-1     DSSA-2     DSSA-3     DSSA
Ch130     Best    26 933.00    6182.00    6128.00    6110.00    6110.00
          Avg.    28 288.10    6223.70    6210.57    6162.40    6153.65
          SD      666.28       27.23      29.88      29.95      26.38
Tsp225    Best    8942.00      3954.00    3940.00    3916.00    3916.00
          Avg.    9438.40      4011.67    4002.57    3953.73    3926.05
          SD      239.18       22.20      28.86      24.87      12.53
Lin318    Best    112 871.00   43 340.00  43 250.00  42 554.00  42 495.00
          Avg.    114 329.90   43 748.07  43 618.17  42 890.87  42 742.70
          SD      606.37       165.35     159.34     203.00     184.42

It can be found from Table 3 that the basic SSA performs poorly in solving the TSP. The addition of the three improvement strategies enhances the performance of the original algorithm in terms of both the mean and the best metrics. Although the three strategies work at different stages, they all have a positive effect on the results. The 2-opt eliminates intersections between paths, thereby reducing the path length. The global disturbance strategy balances exploration and exploitation and increases the probability of jumping out of local optima. The roulette-wheel selection strategy improves the chance of selecting a nearby city in the initialization phase, which also contributes to the improvement of the solution. Fig. 7(a) shows the convergence of these algorithms when solving Tsp225. As the number of iterations increases, the advantages of DSSA become more obvious: the convergence speed is faster and the quality of the solution is higher. Similarly, the changing trend of the value I in Fig. 7(b) also illustrates the effectiveness of the proposed strategies. Although the improvement effects differ, each strategy improves the solution quality of the original algorithm. Among them, DSSA, which integrates all three strategies, gives the best improvement. In addition, we further validate the effectiveness of the global perturbation strategy.

Before comparing the proposed algorithm with other methods, the global perturbation strategy proposed in this paper is further validated. Twelve typical TSP instances were selected for testing, ranging from 51 to 318 cities. The proposed method (AL 3) is compared with the method without perturbation (AL 1) and with the method incorporating a traditional perturbation strategy (AL 2). The traditional perturbation strategy used in AL 2 is derived from the literature [55] and includes the swap, inversion, and insertion operators. Except for the perturbation strategy, all other parameters and structures were kept the same; the maximum number of iterations was set to 200 and 10 independent runs were performed. Table 4 and Fig. 8 show the comparison of the experimental results and the convergence curves, respectively.

Table 4 compares the results of the three methods on the 12 TSP instances using four metrics (mean, best, standard deviation, and average running time), with the best values marked in bold. It shows that including a perturbation strategy improves solution quality: AL 2 improves 9 of the 12 solutions compared with AL 1, and AL 3 improves all solutions compared with AL 1. Comparing AL 2 and AL 3, the new perturbation strategy outperforms the traditional one in terms of the average over all solutions. Moreover, the rate of finding the optimal solution is improved: AL 1 finds the optimal value only on KroA100, the traditional perturbation strategy finds it only on KroA100 and Pr152, while the proposed method finds the optimal value on six instances.

Not only is the quality of the solution improved, but the running time is also reduced. The data in the "Time(s)" column show that as the problem size increases, the time advantage of the new perturbation strategy becomes more pronounced, being slower than the other two algorithms in only four instances. This conclusion can also be drawn from Fig. 8, which shows the convergence curves of the three algorithms for nine instances. The proposed algorithm converges in about 100 generations, and the optimized route lengths are smaller than those of the other two methods. Thanks to the combination of two
Table 4
Performance comparisons of three methods on 12 TSP instances.
Instance AL 1 AL 2 Proposed method (AL 3)
Name Optimal Avg. Best SD Time(s) Avg. Best SD Time(s) Avg. Best SD Time(s)
Eil51 426 428.4 427 1.08 1.21 427.5 427 0.53 1.34 427 426 0.47 1.46
Eil76 538 550.4 547 2.37 3.37 549.3 546 1.57 3.44 544.7 541 2.83 4.33
KroA100 21 282 21 323.9 21 282 35.13 8.46 21 296.8 21 282 13.84 8.68 21 288.9 21 282 11.11 7.19
Ch130 6110 6203.8 6169 18.24 18.11 6182.7 6147 25.30 18.26 6159.5 6128 20.83 14.77
Pr136 96 772 98 243.4 97 862 259.51 17.65 98 002.4 97 674 257.20 17.72 97 238.3 96 874 178.79 20.65
Ch150 6528 6671.1 6630 26.07 26.80 6649.5 6574 35.93 26.94 6579.5 6553 16.26 23.06
Pr152 73 682 73 796.1 73 686 76.05 25.51 73 835.4 73 682 102.19 26.02 73 722.8 73 682 65.69 23.46
U159 42 080 42 714.1 42 396 204.67 2.60 42 639 42 287 200.26 2.94 42 348.7 42 080 168.57 6.15
KroB200 29 437 30 286.5 30 022 153.86 69.22 30 192.9 29 980 152.20 69.96 29 749.1 29 558 139.93 59.73
Tsp225 3916 4005.1 3965 17.34 61.63 3978.8 3951 14.12 65.70 3931.8 3916 14.44 55.20
Pr264 49 135 50 669.5 49 975 322.26 97.33 50 706 50 430 231.10 98.19 49 363.5 49 135 254.47 77.32
Lin318 42 029 43 336.8 43 150 170.70 232.13 43 463.6 43 286 98.76 228.66 42 734.9 42 101 281.77 195.12
perturbations with long and short steps, the algorithm balances its ability to explore and exploit during the optimization process. The new perturbation strategy improves the algorithm's ability to jump out of local optima and accelerates convergence.

6.2. Simulation results of TSP

Table 5 shows the numerical results when DSSA is used to solve TSP instances ranging from 30 up to 1002 cities. To obtain reliable data, 20 independent runs were carried out on each instance. The second column shows the instance's name, which encodes the number of cities: Oliver30 and St70, for example, have 30 and 70 cities. The columns "Optimal", "Best", and "Avg." respectively give the optimal value reported in TSPLIB and the best and average values found by DSSA. The column "Worst" denotes the worst solution value, and "SD" the standard deviation of the obtained solutions. The columns "PDA(%)" and "PDB(%)" respectively give the percentage deviation of the average solution from the optimal value over 20 runs and the percentage deviation of the best solution from the optimal value over 20 runs. They are calculated according to Eqs. (14) and (15). The last column, "Time(s)", is the average elapsed time in seconds over the 20 runs.

PDB(\%) = \frac{Best - Optimal}{BKS} \times 100\%  (14)

PDA(\%) = \frac{Average - Optimal}{BKS} \times 100\%  (15)

From Table 5, we can see the good performance of DSSA on the TSP. Overall, DSSA found the optimal solution in 26 of the 34 test cases. In 94.12% of the instances the PDB is within 1%; only Lin318 and Pr1002 have deviations of 1.11% and 1.99%. In terms of average deviation, except for Pr1002, the PDA of all instances is within 2%. This indicates that as the problem scale increases, DSSA still maintains good solution stability. In the "Time(s)" column, all execution times are within an acceptable range, further illustrating the effectiveness of the proposed algorithm.

Fig. 9 demonstrates the convergence curves and optimized routes of six instances (Att48, Pr76, KroA100, U159, Tsp225, Pr264). The algorithm execution was terminated after 1000 iterations for each TSP instance. Clearly, DSSA completes convergence in a short time, and the quality of the solution reaches the best known level from the TSPLIB library.
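The quality metrics of Eqs. (13)–(15) are straightforward to compute; in all three, BKS and the "Optimal" column denote the same best-known TSPLIB value. A small sketch:

```python
def i_value(average, bks):
    """Closeness metric of Eq. (13): best-known cost over average cost (%)."""
    return bks / average * 100.0

def pdb(best, optimal):
    """Eq. (14): percentage deviation of the best solution from the optimum."""
    return (best - optimal) / optimal * 100.0

def pda(average, optimal):
    """Eq. (15): percentage deviation of the average solution from the optimum."""
    return (average - optimal) / optimal * 100.0
```

For Lin318 (best 42 495 against the optimum 42 029), pdb gives approximately 1.11, matching the PDB(%) entry in Table 5.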
Table 5
Numerical results of proposed DSSA for 34 TSP instances.
No Instance Optimal Avg. Best Worst PDA (%) PDB (%) SD Time (s)
1 Oliver30 420 420 420 420 0.00 0.00 0.00 0.53
2 Dantzig42 699 675 675 675 −3.43 −3.43 0.00 0.61
3 Att48 33 522 33 522 33 522 33 522 0.00 0.00 0.00 1.11
4 Rand50 5553 5553 5553 5553 0.00 0.00 0.00 1.22
5 Eil51 426 426.60 426 428 0.14 0.00 0.68 1.41
6 Berlin52 7542 7542 7542 7542 0.00 0.00 0.00 1.48
7 St70 675 675.15 675 678 0.02 0.00 0.67 2.74
8 Eil76 538 543.10 538 548 0.95 0.00 2.83 3.87
9 Pr76 108 159 108 159 108 159 108 159 0.00 0.00 0.00 2.48
10 KroA100 21 282 21 290.20 21 282 21 353 0.04 0.00 17.14 6.37
11 KroB100 22 141 22 173.10 22 141 22 258 0.14 0.00 36.30 6.88
12 KroC100 20 749 20 770.50 20 749 20 880 0.10 0.00 42.54 6.70
13 KroD100 21 294 21 319.05 21 294 21 410 0.12 0.00 44.15 6.70
14 KroE100 22 068 22 091.90 22 068 22 174 0.11 0.00 32.85 6.64
15 Eil101 629 641.50 634 646 1.99 0.79 3.62 8.39
16 Lin105 14 379 14 379 14 379 14 379 0.00 0.00 0.00 6.57
17 Pr107 44 303 44 322 44 303 44 387 0.04 0.00 34.75 5.26
18 Pr124 59 030 59 030 59 030 59 030 0.00 0.00 0.00 7.92
19 Ch130 6110 6153.65 6110 6205 0.71 0.00 26.38 14.28
20 Pr136 96 772 97 302.35 96 920 97 956 0.55 0.15 265.94 18.76
21 Pr144 58 537 58 537 58 537 58 537 0.00 0.00 0.00 11.51
22 Ch150 6528 6590.15 6528 6658 0.95 0.00 34.53 21.84
23 KroA150 26 524 26 699.85 26 525 27 031 0.66 0.00 144.19 20.73
24 KroB150 26 130 26 220.40 26 130 26 325 0.35 0.00 69.01 21.58
25 Pr152 73 682 73 731.35 73 682 73 841 0.07 0.00 67.91 21.61
26 U159 42 080 42 262.75 42 080 42 438 0.43 0.00 157.55 7.73
27 kroA200 29 368 29 682.15 29 459 29 882 1.07 0.31 130.55 55.16
28 kroB200 29 437 29 850.55 29 564 30 126 1.40 0.43 178.27 56.63
29 Tsp225 3916 3926.05 3916 3952 0.26 0.00 12.53 52.36
30 Pr226 80 369 80 369.20 80 369 80 373 0.00 0.00 0.89 47.40
31 Pr264 49 135 49 271.85 49 135 49 564 0.28 0.00 155.73 73.40
32 Lin318 42 029 42 742.70 42 495 43 298 1.70 1.11 184.42 171.65
33 Pr439 107 217 107 844.90 107 494 108 376 0.59 0.26 263.11 384.19
34 Pr1002 259 047 266 352.35 264 212 268 693 2.82 1.99 1146.43 4575.52
6.3. Comparison with the classic metaheuristic algorithms

In this section, we compare the proposed algorithm experimentally with five classical heuristic algorithms that are implemented: ACO, GA, SA, the Artificial Fish Swarm Algorithm (AFSA) [72], and hybrid PSO (HPSO). These five methods have been established for many years and are widely used for solving optimization problems.

Before proceeding to the analysis of specific experimental results, the basic principles of the five methods used are summarized as follows. The ACO draws on the biological behavior of ants during foraging and has a high degree of self-organization and parallelism. The GA is one of the most classical heuristics, inspired by the evolutionary process of ''survival of the fittest'' in biology, where the use of crossover and mutation operators increases the probability of producing good offspring. In 1983, Kirkpatrick realized the similarity between combinatorial optimization problems and physical annealing processes and, inspired by the Metropolis criterion, proposed the SA algorithm. The PSO is a global optimization method based on the theory of swarm intelligence, which guides the optimization search through the collective intelligence generated by the cooperation and competition between particles in the population; to improve its ability to optimize discrete problems, crossover and mutation operators are added to form HPSO. The biological behavior of fish is equally relevant: AFSA searches for the optimal value in the solution space by simulating fish behaviors such as preying, swarming, tail-chasing, and random movement.

To obtain a fair and valid comparison of the results, all experiments were conducted on the same computer as previously mentioned, and each algorithm was run independently 10 times. The maximum number of iterations for both DSSA and the other five algorithms was set to 500, and their specific parameter settings are summarized in Table 6. A total of six test cases were chosen, with the number of cities ranging from 48 to 226.

Table 7 shows the comprehensive performance comparison of these six algorithms in solving the TSP. At a glance, it is clear that DSSA produces more efficient and higher-quality results. The experimental data show that DSSA is better at solving the TSP than ACO, SA, GA, HPSO, and AFSA. Comparisons of mean values and best values all demonstrate the superiority of DSSA. DSSA found the optimal values for all instances with city sizes of 48, 70, 100, 130, and 226.

In terms of convergence performance, the proposed DSSA also performs better than the other five classical algorithms. Since roulette-wheel selection is used to generate the initial solution and the 2-opt operator is used for local optimization, the route length of DSSA decreases faster in the early stage of optimization, and the length is not on the same scale as that of the other five algorithms. The convergence of DSSA in the first 100 generations is shown in the form of subplots, and the convergence curves of the last 15 iterations are also enlarged to better discriminate the optimization results. Figs. 10 and 11 show the convergence of the six methods when solving TSP problems of different scales. As can be seen from the corresponding convergence curves, DSSA completes convergence in about 100 generations for the same population size. However, both ACO and HPSO require a large number of iterations to converge. The fast convergence of DSSA is mainly due to the global perturbation mechanism, which allows the algorithm to quickly jump out of local optima. Moreover, the combination of short and long mutation steps allows the algorithm to search quickly around the current solution, with mutation occurring not only within the fitter solutions but also exploiting inferior solutions to improve the quality of the solution. Good
Fig. 9. The best solution (blue route) and convergence curve (red curve) are obtained by DSSA.
Fig. 10. Convergence analysis of the proposed and conventional methods (GA, ACO, HPSO, SA, AFSA) on solving Att48.
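The components discussed in Section 6.3 (roulette-wheel construction of the initial tour, short/long-step swap perturbation, and 2-opt local refinement) can be sketched as follows. This is an illustrative reconstruction under our own simplifications: all helper names are ours, and DSSA's Gaussian step-length control is reduced here to a plain short/long swap choice.

```python
import math
import random

def tour_length(tour, pts):
    """Total length of a closed tour over 2-D city coordinates."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def roulette_tour(pts, rng):
    """Initial tour: each next city is drawn with probability proportional to 1/distance."""
    n = len(pts)
    tour = [rng.randrange(n)]
    remaining = set(range(n)) - {tour[0]}
    while remaining:
        cand = sorted(remaining)
        weights = [1.0 / (math.dist(pts[tour[-1]], pts[c]) + 1e-9) for c in cand]
        nxt = rng.choices(cand, weights=weights)[0]
        tour.append(nxt)
        remaining.remove(nxt)
    return tour

def swap_perturb(tour, long_step=False, rng=random):
    """Swap two cities: nearby positions (short step) or arbitrary ones (long step)."""
    n = len(tour)
    i = rng.randrange(n)
    j = rng.randrange(n) if long_step else (i + rng.randint(1, 3)) % n
    out = tour[:]
    out[i], out[j] = out[j], out[i]
    return out

def two_opt(tour, pts):
    """2-opt local search: keep reversing segments while that shortens the tour."""
    best, improved = tour[:], True
    while improved:
        improved = False
        for i in range(1, len(best) - 1):
            for j in range(i + 1, len(best)):
                cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                if tour_length(cand, pts) < tour_length(best, pts) - 1e-12:
                    best, improved = cand, True
    return best

# Unit square: the crossing tour [0, 2, 1, 3] is repaired to the perimeter, length 4
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
best = two_opt([0, 2, 1, 3], pts)
```

On larger instances the O(n²) candidate scan in `two_opt` dominates the running time, which is consistent with the growth of the Time column in Table 5.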
Table 6
Parameter settings of ACO, GA, SA, AFSA and HPSO for TSP.
ACO: Maximum of iterations: 500; Population size: 50; Information heuristic factor: 1; Information evaporation factor: 0.1; Expectation heuristic factor: 5
GA: Maximum of iterations: 500; Population size: scale of problem (200–1000); Crossover probability: 0.9; Mutation probability: 0.5
SA: Maximum of iterations: 500; Operators: swap, insertion, reversion; Initial temperature: 0.025; Cooling rate: 0.99
AFSA: Maximum of iterations: 500; Population size: 50; Try times: 2000; Visual: 16; Crowded operator: 0.8
HPSO: Maximum of iterations: 500; Population size: 1000; Operators: crossover, mutation
Table 7
Comparison of results obtained by DSSA with five traditional algorithms (ACO, GA, SA, HPSO, and AFSA).
Algorithm Index Att48 St70 KroA100 Ch130 Pr152 Pr226
Optima 33 522 675 21 282 6110 73 682 80 369
Avg 35 018.01 711.03 22 694.27 6422.08 77 907.12 86 192.06
ACO Best 34 425.01 703.22 22 387.6 6356.33 77 153.89 85 216.91
PDAvg(%) 4.46 5.34 6.64 5.11 5.73 7.25
Avg 33 729.81 685.46 21 538.82 6255.18 74 990.46 82 459.75
SA Best 33 523.71 677.20 21 285.44 6209.55 74 031.17 81 244.68
PDAvg(%) 0.62 1.55 1.21 2.38 1.78 2.60
Avg 35 472.72 852.65 41 097.34 16 202.26 326 555.34 703 177.89
GA Best 34 498.46 799.70 36 607.16 14 020.48 286 236.36 626 010.88
PDAvg(%) 5.82 26.32 93.11 165.18 343.20 774.94
Avg 34 583.29 738.22 26 165.14 9045.43 117 869.61 280 667.45
HPSO Best 33 975.02 695.29 23 991.53 8121.63 102 788.50 254 764.62
PDAvg(%) 3.17 9.37 22.94 48.04 59.97 249.22
Avg 39 285.70 968.25 38 339.85 12 031.56 230 293.32 386 446.20
AFSA Best 37 858.82 926.85 36 412.86 11 288.25 208 825.31 372 000.13
PDAvg(%) 17.19 43.44 80.15 96.92 212.55 380.84
Avg 33 522 675 21 284.3 6129.2 73 710.4 80 369
DSSA Best 33 522 675 21 282 6110 73 822 80 369
PDAvg(%) 0 0 0.01 0.31 0.04 0
Fig. 11. Convergence analysis of the proposed algorithm and conventional methods (GA, ACO, HPSO, SA, AFSA) on solving KroA100.
convergence performance is demonstrated for TSP instances of different sizes.

Combining Table 7 and Figs. 10 and 11, it can be concluded that DSSA has an advantage over ACO, GA, SA, HPSO, and AFSA in solving TSP problems.

6.4. Comparisons with state-of-the-art algorithms from the literature

In this section, we conduct a comparative evaluation involving eleven methods proposed in recent literature, including
Fig. 12. Comparisons of rank values for DSSA with eight well-known methods.
Table 8
Comparison of results obtained by DSSA with NDDE.
Instance DSSA NDDE
Name Optimal Avg Best SD PDA (%) Avg Best SD PDA (%)
Eil51 426 426.73 426 0.58 0.17 426.7 426 2.21 0.16
Berlin52 7542 7542 7542 0.00 0.00 7559.10 7542 54.07 0.23
St70 675 675.00 675 0.00 0.00 677.6 675 5.8 0.39
Eil76 538 543.30 538 3.09 0.99 540.4 538 7.59 0.45
KroA100 21 282 21 288.23 21 282 10.40 0.03 21 327.10 21 282 142.62 0.21
KroC100 20 749 20 758.10 20 749 20.71 0.04 20 807.90 20 749 186.26 0.28
KroD100 21 294 21 305.20 21 294 27.62 0.05 21 356.90 21 294 196.06 0.30
Eil101 629 642.93 630 4.45 2.22 633.1 629 12.97 0.65
Pr144 58 537 58 538.77 58 537 9.68 0.01 58 550.60 58 537 43.01 0.02
Pr152 73 682 73 733.43 73 682 66.89 0.07 74 005.20 73 682 1022.05 0.44
Table 9
Comparison of results obtained by DSSA with DSFLA.
Instance DSSA DSFLA
Name Optimal Avg Best PDA (%) PDB (%) Avg Best PDA (%) PDB (%)
Eil51 426 426.73 426 0.17 0.00 426.43 426 0.10 0.00
St70 675 675.00 675 0.00 0.00 675 675 0 0.00
Eil76 538 543.30 538 0.99 0.00 539.07 538 0.20 0.00
Rat99 1211 1212.63 1211 0.13 0.00 1216.8 1211 0.48 0.00
KroA100 21 282 21 288.23 21 282 0.03 0.00 21 312.03 21 282 0.14 0.00
KroB100 22 141 22 161.43 22 141 0.10 0.00 22 305.13 22 199 0.75 0.27
KroC100 20 749 20 758.10 20 749 0.04 0.00 20 772.80 20 749 0.11 0.00
KroD100 21 294 21 305.20 21 294 0.05 0.00 21 401.67 21 294 0.51 0.00
KroE100 22 068 22 096.03 22 068 0.13 0.00 22 175.60 22 068 0.49 0.00
Eil101 629 642.93 630 2.22 0.16 632.90 629 0.62 0.00
Lin105 14 379 14 380.47 14 379 0.01 0.00 14 423.93 14 379 0.31 0.00
Pr107 44 303 44 330.50 44 303 0.06 0.00 44 362.33 44 303 0.13 0.00
Pr124 59 030 59 040.73 59 030 0.02 0.00 59 503.43 59 030 0.8 0.00
Pr136 96 772 97 307.97 96 785 0.55 0.01 100 420.93 98 506 3.77 1.79
Pr144 58 537 58 538.77 58 537 0.00 0.00 58 632.10 58 537 0.16 0.00
Pr152 73 682 73 733.43 73 682 0.07 0.00 73 970.97 73 682 0.39 0.00
KroA200 29 368 29 685.13 29 440 1.08 0.25 29 671.37 29 368 1.03 0.45
IBA, DWCA, ABCSS, D-GWO, DSSpA, DSFLA, DSOS, NDDE, the Discrete Firefly Algorithm (DFA) [28], the Discrete Imperialist Competitive Algorithm (DICA) [28], and DSMO. It is worth mentioning that the comparison data are all taken from the literature, and the programming languages and hardware platforms used differ, so the running time is not within the scope of the comparison. To ensure fairness, the population size involved in the experiments remains the same. The number of sparrows in DSSA is 50, and the maximum number of iterations is 200. In this study, three sets of experiments were carried out, with 30 runs, 20 runs, and 10 runs, respectively, to compare with the above-mentioned algorithms.

In the first experiment, the performance of DSSA was compared with DSFLA and NDDE, and 30 independent runs were carried out. The results are shown in Tables 8 and 9, respectively. In terms of the average value, DSSA performs better than NDDE on 70% of the instances and worse on only 30%. At the same time, the standard deviation of DSSA is much lower than that of NDDE, which shows that DSSA has better solution stability than NDDE. In comparison with DSFLA, DSSA shows its advantages in solving large-scale problems: on instances ranging from 105 to 200 cities, DSSA surpassed DSFLA in 6 of 7 instances, and DSFLA only outperformed DSSA on KroA200.

In the second experiment, we compared the proposed algorithm with D-GWO, DWCA, DFA, IBA, DICA, DSMO, and ABCSS. The experimental results are shown in Tables 10, 12, 13, and 14, which report the average value, the best solution, the standard deviation, and the gap from the optimal value. It can be found from Table 10 that DSSA performs better than D-GWO in both average and optimal values. The ratio of finding or exceeding the optimal value is 13/17 for DSSA, while for D-GWO it is only 5/17. Table 12 shows the comparison of the results of DSSA, DWCA, and DICA. The proportions of optimal values found are 88.2%, 47.1%, and 35.3%, respectively. DSSA dominates at a ratio of 100%. At the same time, its standard deviation is also lower than that of the other two methods, which reflects the stability of the solution quality. The same conclusion can be obtained
Table 10
Comparison of results obtained by DSSA with D-GWO.
No Instance DSSA D-GWO
Name Optimal Avg. Best PDA (%) PDB (%) Diff Avg. Best PDA (%) PDB (%) Diff
1 Dantzig42 699 675.3 675 −3.398 −3.433 −24 680.0 679 −2.710 −2.860 −20
2 Att48 33 522 33 522.0 33 522 0.000 0.000 0 33 600.0 33 523 0.230 0.002 1
3 Pr76 108 159 108 163.3 108 159 0.004 0.000 0 108 900.0 108 159 0.680 0.000 0
4 KroB100 22 141 22 173.1 22 141 0.149 0.000 0 22 444.6 22 159 1.370 0.085 19
5 KroC100 20 749 20 757.8 20 749 0.042 0.000 0 21 078.0 20 749 1.580 0.000 0
6 KroE100 22 068 22 094.0 22 068 0.118 0.000 0 22 410.0 22 131 1.540 0.280 63
7 Lin105 14 379 14 383.4 14 379 0.031 0.000 0 14 520.0 14 382 0.980 0.020 3
8 Pr107 44 303 44 377.6 44 303 0.168 0.000 0 44 685.1 44 301 0.860 −0.004 −2
9 Pr124 59 030 59 036.9 59 030 0.012 0.000 0 59 390.9 59 030 0.610 0.000 0
10 Pr136 96 772 97 042.8 96 785 0.280 0.013 13 99 310.5 97 826 2.620 1.080 1054
11 Pr144 58 537 58 537.0 58 537 0.000 0.000 0 58 600.5 58 535 0.100 −0.003 −2
12 KroB150 26 130 26 243.1 26 135 0.433 0.019 5 26 756.2 26 320 2.390 0.720 190
13 Pr152 73 682 73 751.5 73 682 0.094 0.000 0 74 230.0 73 690 0.740 0.010 8
14 U159 42 080 42 137.1 42 080 0.136 0.000 0 42 563.3 42 142 1.140 0.140 62
15 Pr226 80 369 80 369.4 80 369 0.000 0.000 0 81 135.7 80 648 0.950 0.340 279
16 Pr439 107 217 107 451.9 107 261 0.219 0.041 44 112 850.3 110 415 5.250 2.980 3198
17 Pr1002 259 047 265 370.5 263 649 2.441 1.777 4602 267 713.2 264 922 3.340 2.260 5875
from Tables 13 and 14. Table 13 shows the experimental results of DSSA, IBA, and DFA on 20 instances. Table 14 shows the experimental results of DSSA, DSMO, and ABCSS on 23 benchmark instances. Except for KroB100, where DSSA failed to find the lower value reached by IBA, the mean and optimal values of DSSA are better than those of IBA and DFA on all instances. In Table 14, the standard deviation of DSSA is within 300, which is much smaller than that of DSMO and ABCSS, and the value on four instances (Berlin52, Pr76, Lin105, and Pr124) is 0.

In the third experiment, we compared the proposed algorithm with DSSpA and DSOS. The experimental results are shown in Tables 11 and 15. DSSpA also chooses 2-opt as a local optimization strategy, while DSOS uses mutation operators to improve the quality of the solution. First of all, regarding the number of iterations, it can be obtained from the literature that the two comparison algorithms use 1000 and 100 000 iterations, respectively, which is much greater than the 200 of DSSA. The population size of DSOS is the same as that of DSSA. DSSpA adjusts its population size according to the problem scale: when the number of cities is greater than 50, its population size exceeds the size set by DSSA. In terms of solution quality, in Table 11 DSSA's PDA is within 2% except for Pr1002, where it is 2.85%; the largest PDA of DSSpA is 6.91%. In terms of average value, DSSA dominates in all instances. It can be found from Table 15 that DSSA is better than DSOS in 21 instances, and only one instance is worse than DSOS. In terms of both PDA and PDB, DSSA is lower than DSOS in most cases (except for Rat783) and has a better optimization result.

Based on the above comparison and analysis, DSSA has better solution performance and robustness than the 11 comparative methods from the literature. As the size of the instances increases, it can still maintain a deviation within 3%. At the same time, the low standard deviation reflects the stability of the solution quality, with no major fluctuations.

6.5. Statistical analysis

To obtain a fair and unbiased comparison, this section uses three statistical testing methods to assess the differences between the algorithms. The literature [73] introduces several methods suitable for this purpose. First, a non-parametric method, the Friedman test, is used to verify whether there are significant differences between the algorithms and, if so, to reject the null hypothesis. For the sake of fairness, each test is carried out on the three algorithms of one table, and the algorithms in Tables 11 and 15 are combined. Table 16 shows the average rank values obtained through the five non-parametric tests; the lower the value, the better the performance. Taking the algorithms in Table 12 as an example, the Friedman statistic for the TSP (distributed according to χ² with 2 degrees of freedom) was equal to 30. Furthermore, the confidence level was set to 99%, 9.21 being the critical point of a χ² distribution with 2 degrees of freedom. Since 30 is greater than 9.21 and the observed p-value is 3.059E−7 < 0.05, it can be concluded that there are significant differences among the results, DSSA being the best technique as it has the lowest rank. Fig. 12 shows the average ranking of each method in its group with a significance level of 0.05. Similar conclusions can be obtained from the other parts of Table 16. Following the above principles, we can infer that DSSA is the best method among the comparative algorithms.

Additionally, two post hoc analysis methods, the Bonferroni–Dunn test and Holm's test, were used to evaluate the statistical significance of the better performance of DSSA; Holm's test further confirms the Bonferroni–Dunn results. DSSA is considered the control algorithm, and the null hypothesis is that all the algorithms are equivalent. Tables 17 and 18 gather the unadjusted and adjusted p-values obtained through the application of the two procedures. All the p-values are less than 0.05. Analyzing these data, it can be concluded that DSSA is significantly better than DSSpA, DSOS, IBA, DFA, DWCA, DICA, DSMO, and ABCSS for the TSP at the 95% confidence level.

7. Conclusion

In this study, a DSSA combined with a global perturbation strategy is proposed to solve the TSP. Firstly, to improve the quality of the solution, the roulette-wheel mechanism is used to generate the initial solution. Secondly, a sequence-based coding and decoding strategy is introduced to complete the update of the position of the sparrow. Then, the global perturbation mechanism combining long and short steps not only improves the ability of the algorithm to jump out of local optima but also accelerates the convergence. The 2-opt local search is integrated to further improve the quality of the solution. The effectiveness of the proposed algorithm was tested on 34 TSP instances, and the results were compared with classic algorithms and newly proposed methods. Compared with the classic algorithms, DSSA shows good convergence characteristics and robustness. Considering measurement criteria such as the best-found solution, the average solution, and the standard deviation, DSSA's performance on the TSP outperforms D-GWO, DSFLA, DSMO, and other state-of-the-art techniques from the literature. In addition, the Friedman test and two post hoc methods (Holm's test and the Bonferroni–Dunn test) verify the significant differences between the proposed algorithm and other
Table 11
Comparison of results obtained by DSSA with DSSpA.
No Instance DSSA DSSpA
Name Optimal Avg. Best Worst PDA (%) SD Avg. Best Worst PDA (%) SD
1 Eil51 426 427.0 426 428 0.23 0.5 431.9 431.9 183 530 0.65 2.8
2 Berlin52 7542 7542.0 7542 7542 0.00 0.0 7659.0 7659.0 31 432 1.55 117.0
3 Eil76 538 544.7 541 549 1.25 2.8 559.3 559.3 2720.4 3.96 0.4
4 KroA100 21 282 21 288.9 21 282 21 305 0.03 11.1 21 363.0 21 363.0 189 380 0.38 81.0
5 KroB100 22 141 22 156.6 22 141 22 239 0.07 34.2 22 347.0 22 347.0 188 300 0.93 206.0
6 KroC100 20 749 20 767.3 20 749 20 852 0.09 34.7 20 997.0 20 997.0 191 830 1.19 248.0
7 KroD100 21 294 21 320.2 21 294 21 401 0.12 40.2 21 552.0 21 552.0 147 160 1.21 258.0
8 KroE100 22 068 22 080.5 22 068 22 106 0.06 17.9 22 407.0 22 407.0 198 040 1.53 339.0
9 Pr107 44 303 44 303.0 44 303 44 303 0.00 0.0 44 346.0 44 346.0 68 720 0.09 43.0
10 Pr124 59 030 59 030.0 59 030 59 030 0.00 0.0 59 087.0 59 087.0 76 722 0.09 57.0
11 Pr136 96 772 97 238.3 96 874 97 441 0.48 178.8 103 460.0 103 460.0 914 060 6.91 6688.0
12 Pr144 58 537 58 537.0 58 537 58 537 0.00 0.0 58 669.0 58 669.0 87 320 0.22 132.0
13 KroA150 26 524 26 757.5 26 587 26 934 0.88 112.8 27 027.0 27 027.0 359 350 1.59 503.0
14 Pr152 73 682 73 722.8 73 682 73 818 0.06 65.7 74 462.0 74 462.0 112 480 1.05 780.0
15 kroA200 29 368 29 696.6 29 507 29 869 1.12 121.5 29 666.0 29 666.0 373 590 1.01 298.0
16 Tsp225 3916 3931.8 3916 3962 0.33 14.4 3933.0 3933.0 43 032 0.35 14.0
17 Pr226 80 369 80 369.0 80 369 80 369 0.00 0.0 82 186.0 82 186.0 180 840 2.26 1817.0
18 Pr264 49 135 49 363.5 49 135 49 913 0.47 254.5 50 739.0 50 739.0 61 355 3.26 1604.0
19 Lin318 42 029 42 734.9 42 101 43 100 1.68 281.8 42 686.0 42 686.0 62 734 3.24 1341.0
20 Pr439 107 217 107 718.5 107 325 107 979 0.47 208.5 111 450.0 111 450.0 2 021 400 3.94 4233.0
21 Pr1002 259 047 266 418.1 264 493 268 162 2.85 1151.1 266 440.0 266 440.0 312 240 2.85 7395.0
Table 12
Comparison of results obtained by DSSA with DWCA, DICA.
No Instance DSSA DWCA DICA
Name Optimal Avg. Best SD Avg. Best SD Avg. Best SD
1 Oliver30 420 420.00 420 0.00 420.00 420 0.00 420.00 420 0.00
2 Eil51 426 426.60 426 0.68 428.40 426 2.00 432.30 426 3.10
3 Berlin52 7542 7542.00 7542 0.00 7542.00 7542 0.00 7542.00 7542 0.00
4 St70 675 675.15 675 0.67 678.60 675 2.20 684.70 675 3.70
5 Eil76 538 543.10 538 2.83 547.90 543 3.30 557.60 544 5.80
6 KroA100 21 282 21 290.20 21 282 17.14 21 348.10 21 282 47.90 21 500.30 21 282 183.40
7 KroB100 22 141 22 173.10 22 141 36.30 22 450.70 22 178 164.40 22 599.70 22 180 244.90
8 KroC100 20 749 20 770.50 20 749 42.54 20 934.70 20 769 124.60 21 103.90 20 756 161.10
9 KroD100 21 294 21 319.05 21 294 44.15 21 529.60 21 361 113.90 21 666.80 21 399 174.00
10 KroE100 22 068 22 091.90 22 068 32.85 22 246.20 22 130 66.10 22 453.30 22 083 196.90
11 Eil101 629 641.50 634 1.99 645.90 639 2.90 663.80 644 9.60
12 Pr107 44 303 44 322.00 44 303 34.75 44 647.10 44 442 117.60 44 803.30 44 303 302.70
13 Pr124 59 030 59 030.00 59 030 0.00 59 338.90 59 030 163.60 59 436.90 59 030 299.40
14 Pr136 96 772 97 302.35 96 920 265.94 98 761.40 97 488 741.20 99 583.70 97 736 848.90
15 Pr144 58 537 58 537.00 58 537 0.00 58 734.60 58 537 167.20 59 070.90 58 563 323.00
16 Pr152 73 682 73 731.35 73 682 67.91 74 202.60 73 682 309.30 74 886.70 74 052 513.90
17 Pr264 49 135 49 271.85 49 135 155.73 49 528.60 49 310 1302.70 51 934.60 50 553 863.70
Table 13
Comparison of results obtained by DSSA with IBA, DFA.
No Instance DSSA IBA DFA
Name Optimal Avg. Best SD Avg. Best SD Avg. Best SD
1 Oliver30 420 420.00 420 0.00 420.00 420.00 0.00 420.00 420 0.00
2 Eil51 426 426.60 426 0.68 428.10 426.00 1.60 430.80 426 2.30
3 Berlin52 7542 7542.00 7542 0.00 7542.00 7542.00 0.00 7542.00 7542 0.00
4 St70 675 675.15 675 0.67 679.10 675.00 2.80 685.30 675 4.00
5 Eil76 538 543.10 538 2.83 548.10 539.00 3.80 556.80 543 4.90
6 KroA100 21 282 21 290.20 21 282 17.14 21 445.30 21 282.00 116.50 21 483.60 21 282 163.70
7 KroB100 22 141 22 173.10 22 141 36.30 22 506.40 22 140.00 221.30 22 604.80 22 183 243.90
8 KroC100 20 749 20 770.50 20 749 42.54 21 050.00 20 749.00 164.70 21 096.30 20 756 148.30
9 KroD100 21 294 21 319.05 21 294 44.15 21 593.40 21 294.00 141.60 21 683.80 21 408 163.70
10 KroE100 22 068 22 091.90 22 068 32.85 22 349.60 22 068.00 169.60 22 413.00 22 079 183.00
11 Eil101 629 641.50 634 8.39 646.40 634.00 4.90 659.00 643 8.10
12 Pr107 44 303 44 322.00 44 303 34.75 44 793.80 44 303.00 232.40 44 790.40 44 303 227.30
13 Pr124 59 030 59 030.00 59 030 0.00 59 412.10 59 030.00 265.90 59 404.30 59 030 257.90
14 Pr136 96 772 97 302.35 96 920 265.94 99 351.20 97 547.00 707.20 99 683.70 97 716 831.30
15 Pr144 58 537 58 537.00 58 537 0.00 58 876.20 58 537.00 295.60 58 993.30 58 546 300.10
16 Pr152 73 682 73 731.35 73 682 67.91 74 676.90 73 921.00 426.50 74 934.30 74 033 483.70
17 Pr264 49 135 49 271.85 49 135 155.73 50 908.30 49 756.00 887.00 51 837.00 50 491 760.80
18 Pr299 48 191 48 605.05 48 409 184.42 49 674.10 48 310.00 1200.10 49 839.70 48 579 1305.40
19 Pr439 107 217 107 844.90 107 494 263.11 115 256.40 11 153.00 3825.80 115 558.20 111 967 4009.10
20 Pr1002 259 047 266 352.35 264 212 1146.43 274 419.70 270 016.00 3617.80 277 344.70 272 003 4731.60
Table 14
Comparison of results obtained by DSSA with DSMO, ABCSS.
No Instance DSSA DSMO ABCSS
Name Optimal Avg. Best SD Avg. Best SD Avg. Best SD
1 Eil51 426 426.60 426 0.68 436.96 428.86 4.73 437.01 428.98 4.98
2 Berlin52 7542 7542.00 7542 0.00 7633.60 7544.37 85.40 7807.86 7544.37 177.55
3 St70 675 675.15 675 0.67 702.64 677.11 15.04 690.50 682.57 5.35
4 Eil76 538 543.10 538 2.83 572.70 558.68 7.56 561.48 550.24 7.21
5 Pr76 108 159 108 159.00 108 159 0.00 111 299.30 108 159.40 2050.48 109 758.57 108 879.70 850.64
6 KroA100 21 282 21 290.20 21 282 17.14 22 024.27 21 298.21 508.89 21 878.83 21 299.00 455.63
7 KroB100 22 141 22 173.10 22 141 36.30 23 022.37 22 308.00 277.32 22 707.96 22 229.71 259.83
8 Eil101 629 641.50 634 3.62 674.40 648.66 10.97 662.63 646.05 7.13
9 Lin105 14 379 14 379.00 14 379 0.00 15 114.00 14 383.00 500.76 14 766.55 14 406.12 263.01
10 Pr107 44 303 44 322.00 44 303 34.75 45 666.99 44 385.86 1300.43 44 927.27 44 525.68 319.03
11 Pr124 59 030 59 030.00 59 030 0.00 62 443.49 60 285.21 1644.93 59 772.68 59 030.74 516.56
12 Pr136 96 772 97 302.35 96 920 265.94 102 872.00 97 583.68 2855.28 101 795.57 97 853.91 1916.45
13 KroA150 26 524 26 699.85 26 525 144.19 28 354.09 27 591.44 524.91 27 971.36 26 981.98 554.50
14 KroB150 26 130 26 220.40 26 130 69.01 27 576.16 26 601.94 625.26 27 653.49 26 760.79 509.24
15 Pr152 73 682 73 731.35 73 682 67.91 76 526.77 74 243.91 1663.08 76 097.48 74 337.62 904.45
16 U159 42 080 42 262.75 42 080 157.55 42 598.30 42 598.30 0.00 45 234.92 42 862.51 1212.54
17 KroA200 29 368 29 682.15 29 459 130.55 31 828.64 30 481.35 652.32 31 938.81 16 270.22 162.08
18 KroB200 29 437 29 850.55 29 564 178.27 31 781.62 30 716.50 483.79 32 208.73 30 701.86 656.84
19 Tsp225 3916 3926.05 3916 12.53 4162.79 4013.68 66.08 4276.92 4140.24 72.80
20 Pr226 80 369 80 369.20 80 369 0.89 85 935.69 83 587.98 2105.30 87 400.60 82 266.00 3482.64
21 Pr299 48 191 48 605.05 48 409 118.87 51 747.99 50 579.82 863.32 67 620.95 64 464.76 2092.63
22 Lin318 42 029 42 742.70 42 495 184.42 45 460.25 44 118.66 660.47 61 902.84 55 744.52 3824.65
23 Pr439 107 217 107 844.90 107 494 263.11 116 379.20 112 105.20 2462.82 244 192.44 206 233.14 21 576.50
Table 15
Comparison of results obtained by DSSA with DSOS.
No Instance DSSA DSOS
Name Optimal Avg. Best SD PDA (%) PDB (%) Avg. Best SD PDA (%) PDB (%)
1 Eil51 426 427 426 0.50 0.23 0.00 427.90 426 1.20 0.45 0.00
2 Berlin52 7542 7542 7542 0.00 0.00 0.00 7542.60 7542 0.00 0.01 0.00
3 St70 675 675 675 0.00 0.00 0.00 679.20 675 2.80 0.62 0.00
4 Eil76 538 544.7 541 2.80 1.25 0.56 547.40 542 3.90 1.75 0.74
5 Rat99 1211 1213.3 1211 3.33 0.19 0.00 1228.37 1224 14.32 1.43 1.07
6 KroA100 21 282 21 288.9 21 282 11.10 0.03 0.00 21 409.50 21 282 149.15 0.60 0.00
7 KroB100 22 141 22 156.6 22 141 34.20 0.07 0.00 22 339.20 22 140 230.18 0.90 0.00
8 KroC100 20 749 20 767.3 20 749 34.70 0.09 0.00 20 881.60 20 749 189.51 0.64 0.00
9 KroD100 21 294 21 320.2 21 294 40.20 0.12 0.00 21 493.10 21 294 152.83 0.94 0.00
10 KroE100 22 068 22 080.5 22 068 17.90 0.06 0.00 22 231.10 22 068 170.43 0.74 0.00
11 Eil101 629 641.1 635 4.48 1.92 0.95 650.60 640 4.57 3.43 1.75
12 Pr107 44 303 44 303 44 303 0.00 0.00 0.00 44 445.10 44 314 181.35 0.32 0.02
13 Pr124 59 030 59 030 59 030 0.00 0.00 0.00 59 429.10 59 030 264.08 0.68 0.00
14 Pr136 96 772 97 238.3 96 874 178.80 0.48 0.11 97 673.20 97 437 709.91 0.93 0.69
15 Pr144 58 537 58 537 58 537 0.00 0.00 0.00 58 817.10 58 565 228.06 0.48 0.05
16 Pr152 73 682 73 722.8 73 682 65.70 0.06 0.00 74 785.70 74 013 428.31 1.50 0.45
17 Pr264 49 135 49 363.5 49 135 254.50 0.47 0.00 52 798.90 50 454 424.94 7.46 2.68
18 Pr299 48 191 48 484.1 48 318 134.43 0.61 0.26 50 335.20 49 162 905.42 4.45 2.02
19 Lin318 42 029 42 734.9 42 101 281.80 1.68 0.17 42 972.42 42 201 2037.43 2.24 0.41
20 Rat575 6773 6961.7 6938 20.06 2.79 2.44 7117.32 7073 171.65 5.08 4.43
21 Rat783 8806 9163 9097 40.19 4.05 3.30 9102.67 9045 37.28 3.37 2.71
22 Pr1002 259 047 266 418.1 264 493 1151.10 2.85 2.10 278 381.51 272 381 4328.62 7.46 5.15
Table 17
P-value (adjusted and unadjusted) obtained by Bonferroni–Dunn post hoc procedure for TSP.
Algorithm Adjusted p Unadjusted p Algorithm Adjusted p Unadjusted p
Tables 11 & 15 DSSpA 3.019E–04 1.006E–04 DSOS 1.436E–04 4.785E–05
Table 12 DWCA 3.029E–02 1.000E–02 DICA 8.027E–07 2.676E–07
Table 13 IBA 4.696E–03 2.000E–03 DFA 2.286E–07 7.621E–08
Table 14 DSMO 1.607E–06 5.358E–07 ABCSS 7.390E–07 2.463E–07
Table 18
P-value (adjusted and unadjusted) obtained by Holm’s post hoc procedure for TSP.
Algorithm Adjusted p Unadjusted p Algorithm Adjusted p Unadjusted p
Tables 11 & 15 DSSpA 1.006E–04 1.006E–04 DSOS 9.571E–05 4.785E–05
Table 12 DWCA 1.010E–02 1.010E–02 DICA 5.352E–07 2.676E–07
Table 13 IBA 1.565E–03 1.565E–03 DFA 1.524E–07 7.621E–08
Table 14 DSMO 5.358E–07 5.358E–07 ABCSS 4.927E–07 2.463E–07
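The Friedman statistic quoted in Section 6.5 and the Holm adjustments in Table 18 follow the standard formulas and are easy to reproduce; a self-contained sketch with helper names of our own:

```python
def friedman_stat(results):
    """Friedman chi-square statistic; rows = instances, columns = algorithms.

    Lower result values rank better; tied values get the average of their ranks.
    """
    n, k = len(results), len(results[0])
    rank_sums = [0.0] * k
    for row in results:
        order = sorted(range(k), key=lambda c: row[c])
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg_rank = (i + j) / 2 + 1  # average rank of the tie group
            for m in range(i, j + 1):
                rank_sums[order[m]] += avg_rank
            i = j + 1
    return (12.0 / (n * k * (k + 1))) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)

def holm_adjust(pvalues):
    """Holm step-down adjustment of the unadjusted post hoc p-values."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted, running = [0.0] * m, 0.0
    for rank, i in enumerate(order):
        running = max(running, (m - rank) * pvalues[i])
        adjusted[i] = min(1.0, running)
    return adjusted
```

For the two comparisons of Table 12, `holm_adjust([2.676e-7, 1.010e-2])` yields `[5.352e-7, 1.010e-2]`, matching the DICA and DWCA rows of Table 18: the smallest p-value is multiplied by the number of remaining hypotheses (here 2), the next by 1.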
References

[1] T. Mostafaie, F. Modarres Khiyabani, N.J. Navimipour, A systematic study on meta-heuristic approaches for solving the graph coloring problem, Comput. Oper. Res. 120 (2020) http://dx.doi.org/10.1016/j.cor.2019.104850.
[2] M.A. Şahman, A discrete spotted hyena optimizer for solving distributed job shop scheduling problems, Appl. Soft Comput. 106 (2021) http://dx.doi.org/10.1016/j.asoc.2021.107349.
[3] V.K. Patel, B.D. Raja, Comparative performance of recent advanced optimization algorithms for minimum energy requirement solutions in water pump switching network, Arch. Comput. Methods Eng. 28 (2021) 1545–1559, http://dx.doi.org/10.1007/s11831-020-09429-x.
[4] C. Ammari, D. Belatrache, B. Touhami, S. Makhloufi, Sizing, optimization, control and energy management of hybrid renewable energy system - a review, Energy Built Environ. (2021) http://dx.doi.org/10.1016/J.ENBENV.2021.04.002.
[5] E.L. Lawler, J.K. Lenstra, A.H.G.R. Kan, D.B. Shmoys, The traveling salesman problem: A guided tour of combinatorial optimization, J. Oper. Res. Soc. 37 (1986) 535–536, http://dx.doi.org/10.1057/jors.1986.93.
[6] A. Ouaarab, B. Ahiod, X.S. Yang, Discrete cuckoo search algorithm for the travelling salesman problem, Neural Comput. Appl. 24 (2014) 1659–1669, http://dx.doi.org/10.1007/s00521-013-1402-2.
[7] S. Arora, Polynomial time approximation schemes for Euclidean traveling salesman and other geometric problems, J. ACM 45 (1998) 753–782, http://dx.doi.org/10.1145/290179.290180.
[8] Y. Saji, M. Barkatou, A discrete bat algorithm based on Lévy flights for Euclidean traveling salesman problem, Expert Syst. Appl. 172 (2021) http://dx.doi.org/10.1016/j.eswa.2021.114639.
[9] G. Laporte, Y. Nobert, A cutting planes algorithm for the m-salesmen problem, J. Oper. Res. Soc. 31 (1980) 1017–1023, http://dx.doi.org/10.1057/jors.1980.188.
[10] M. Padberg, G. Rinaldi, Optimization of a 532-city symmetric traveling salesman problem by branch and cut, Oper. Res. Lett. 6 (1987) 1–7, http://dx.doi.org/10.1016/0167-6377(87)90002-2.
[11] Gerhard Reinelt, The Traveling Salesman Computational Solutions for TSP
[20] Y. Wang, The hybrid genetic algorithm with two local optimization strategies for traveling salesman problem, Comput. Ind. Eng. 70 (2014) 124–133, http://dx.doi.org/10.1016/J.CIE.2014.01.015.
[21] L. Wang, R. Cai, M. Lin, Y. Zhong, Enhanced list-based simulated annealing algorithm for large-scale traveling salesman problem, IEEE Access 7 (2019) 144366–144380, http://dx.doi.org/10.1109/ACCESS.2019.2945570.
[22] X.-S. Yang, A new metaheuristic bat-inspired algorithm, in: D.A. Pelta, C. Cruz, G. Terrazas, N. Krasnogor, J.R. González (Eds.), Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), Springer, Berlin, Heidelberg, 2010, pp. 65–74, http://dx.doi.org/10.1007/978-3-642-12538-6_6.
[23] D. Karaboga, An Idea Based on Honey Bee Swarm for Numerical Optimization, Technical Report TR06, Erciyes University, 2005.
[24] S. Mirjalili, S.M. Mirjalili, A. Lewis, Grey wolf optimizer, Adv. Eng. Softw. 69 (2014) 46–61, http://dx.doi.org/10.1016/J.ADVENGSOFT.2013.12.007.
[25] M.Y. Cheng, D. Prayogo, Symbiotic Organisms Search: A new metaheuristic optimization algorithm, Comput. Struct. 139 (2014) 98–112, http://dx.doi.org/10.1016/J.COMPSTRUC.2014.03.007.
[26] T.-T. Nguyen, Y. Qiao, J.-S. Pan, S.-C. Chu, K.-C. Chang, X. Xue, T.-K. Dao, A hybridized parallel bats algorithm for combinatorial problem of traveling salesman, J. Intell. Fuzzy Systems 38 (2020) 5811–5820, http://dx.doi.org/10.3233/JIFS-179668.
[27] Y. Saji, M.E. Riffi, A novel discrete bat algorithm for solving the travelling salesman problem, Neural Comput. Appl. 27 (2016) 1853–1866, http://dx.doi.org/10.1007/s00521-015-1978-9.
[28] E. Osaba, X.S. Yang, F. Diaz, P. Lopez-Garcia, R. Carballedo, An improved discrete bat algorithm for symmetric and asymmetric Traveling Salesman Problems, Eng. Appl. Artif. Intell. 48 (2016) 59–71, http://dx.doi.org/10.1016/j.engappai.2015.10.006.
[29] J. Faigl, GSOA: Growing Self-Organizing Array - Unsupervised learning for the Close-Enough Traveling Salesman Problem and other routing problems, Neurocomputing 312 (2018) 120–134, http://dx.doi.org/10.1016/J.NEUCOM.2018.05.079.
[30] Y. Zhong, J. Lin, L. Wang, H. Zhang, Hybrid discrete artificial bee colony
Applications, Springer Berlin Heidelberg, Berlin, Heidelberg, 1994, http: algorithm with threshold acceptance criterion for traveling salesman prob-
//dx.doi.org/10.1007/3-540-48661-5. lem, Inform. Sci. 421 (2017) 70–84, http://dx.doi.org/10.1016/j.ins.2017.08.
[12] Ö. Ergun, J.B. Orlin, A dynamic programming methodology in very large 067.
scale neighborhood search applied to the traveling salesman problem, [31] B.H. Abed-alguni, F. Alkhateeb, Novel selection schemes for cuckoo search,
Discrete Optim. 3 (2006) 78–85, http://dx.doi.org/10.1016/J.DISOPT.2005. Arab. J. Sci. Eng. 42 (2017) 3635–3654, http://dx.doi.org/10.1007/s13369-
10.002. 017-2663-3.
[13] R.E. Bellman, S.E. Dreyfus, Applied Dynamic Programming, Princeton [32] A.E.S. Ezugwu, A.O. Adewumi, M.E. Frîncu, Simulated annealing based
University Press, 2015, http://dx.doi.org/10.1515/9781400874651. symbiotic organisms search optimization algorithm for traveling salesman
[14] G. Laporte, The traveling salesman problem: An overview of exact and problem, Expert Syst. Appl. 77 (2017) 189–210, http://dx.doi.org/10.1016/
approximate algorithms, European J. Oper. Res. 59 (1992) 231–247, http: j.eswa.2017.01.053.
//dx.doi.org/10.1016/0377-2217(92)90138-Y.
[33] J. Xue, B. Shen, A novel swarm intelligence optimization approach: sparrow
[15] X. Zhou, D.Y. Gao, C. Yang, W. Gui, Discrete state transition algorithm for
search algorithm, Syst. Sci. Control Eng. 8 (2020) http://dx.doi.org/10.1080/
unconstrained integer optimization problems, Neurocomputing 173 (2016)
21642583.2019.1708830.
864–874, http://dx.doi.org/10.1016/J.NEUCOM.2015.08.041.
[16] A.E.S. Ezugwu, A.O. Adewumi, Discrete symbiotic organisms search algo- [34] C. Zhang, S. Ding, A stochastic configuration network based on chaotic
rithm for travelling salesman problem, Expert Syst. Appl. 87 (2017) 70–78, sparrow search algorithm, Knowl.-Based Syst. 220 (2021) http://dx.doi.org/
http://dx.doi.org/10.1016/j.eswa.2017.06.007. 10.1016/j.knosys.2021.106924.
[17] H. Eldem, E. Ülker, The application of ant colony optimization in the [35] Y. Zhu, N. Yousefi, Optimal parameter identification of PEMFC stacks using
solution of 3D traveling salesman problem on a sphere, Eng. Sci. Technol. Adaptive Sparrow Search Algorithm, Int. J. Hydrogen Energy 46 (2021)
Int. J. 20 (2017) 1242–1248, http://dx.doi.org/10.1016/J.JESTCH.2017.08. http://dx.doi.org/10.1016/j.ijhydene.2020.12.107.
005. [36] Z. Zhang, R. He, K. Yang, A bioinspired path planning approach for mobile
[18] F. Dahan, K. el Hindi, H. Mathkour, H. Alsalman, Dynamic flying ant colony robot based on improved sparrow search algorithm, Adv. Manuf. (2021)
optimization (DFACO) for solving the traveling salesman problem, Sensors http://dx.doi.org/10.1007/s40436-021-00366-x.
(Switzerland) 19 (2019) http://dx.doi.org/10.3390/s19081837. [37] Z. Xing, C. Yi, J. Lin, Q. Zhou, Multi-component fault diagnosis of wheelset-
[19] K. Tang, Z. Li, L. Luo, B. Liu, Multi-strategy adaptive particle swarm bearing using shift-invariant impulsive dictionary matching pursuit and
optimization for numerical optimization, Eng. Appl. Artif. Intell. 37 (2015) sparrow search algorithm, Measurement 178 (2021) http://dx.doi.org/10.
9–19, http://dx.doi.org/10.1016/J.ENGAPPAI.2014.08.002. 1016/j.measurement.2021.109375.
[38] E. Baş, E. Ülker, Dıscrete socıal spıder algorıthm for the travelıng salesman problem, Artif. Intell. Rev. 54 (2021) 1063–1085, http://dx.doi.org/10.1007/s10462-020-09869-8.
[39] P. Kitjacharoenchai, B.C. Min, S. Lee, Two echelon vehicle routing problem with drones in last mile delivery, Int. J. Prod. Econ. 225 (2020) http://dx.doi.org/10.1016/j.ijpe.2019.107598.
[40] T. Vidal, G. Laporte, P. Matl, A concise guide to existing and emerging vehicle routing problem variants, European J. Oper. Res. 286 (2020) 401–416, http://dx.doi.org/10.1016/J.EJOR.2019.10.010.
[41] K.H. Shin, H.A.D. Nguyen, J. Park, D. Shin, D. Lee, Roll-to-roll gravure printing of thick-film silver electrode micropatterns for flexible printed circuit board, J. Coat. Technol. Res. 14 (2017) http://dx.doi.org/10.1007/s11998-016-9844-y.
[42] A. Alexandridis, E. Paizis, E. Chondrodima, M. Stogiannos, A particle swarm optimization approach in printed circuit board thermal design, Integr. Comput.-Aided Eng. 24 (2017) http://dx.doi.org/10.3233/ICA-160536.
[43] R. Matai, S. Singh, M. Lal, Traveling salesman problem: an overview of applications, formulations, and solution approaches, in: Traveling Salesman Problem, Theory and Applications, 2010, http://dx.doi.org/10.5772/12909.
[44] X. Geng, Z. Chen, W. Yang, D. Shi, K. Zhao, Solving the traveling salesman problem based on an adaptive simulated annealing algorithm with greedy search, Appl. Soft Comput. 11 (2011) 3680–3689, http://dx.doi.org/10.1016/j.asoc.2011.01.039.
[45] S. Hore, A. Chatterjee, A. Dewanji, Improving variable neighborhood search to solve the traveling salesman problem, Appl. Soft Comput. 68 (2018) 83–91, http://dx.doi.org/10.1016/j.asoc.2018.03.048.
[46] J. Wang, O.K. Ersoy, M. He, F. Wang, Multi-offspring genetic algorithm and its application to the traveling salesman problem, Appl. Soft Comput. 43 (2016) http://dx.doi.org/10.1016/j.asoc.2016.02.021.
[47] P. Kora, P. Yadlapalli, Crossover operators in genetic algorithms: A review, Int. J. Comput. Appl. 162 (2017) http://dx.doi.org/10.5120/ijca2017913370.
[48] A.J. Umbarkar, P.D. Sheth, Crossover operators in genetic algorithms: A review, ICTACT J. Soft Comput. (2015) 1, http://dx.doi.org/10.21917/ijsc.2015.0150.
[49] A. Hussain, Y.S. Muhammad, M. Nauman Sajid, I. Hussain, A. Mohamd Shoukry, S. Gani, Genetic algorithm for traveling salesman problem with modified cycle crossover operator, Comput. Intell. Neurosci. 2017 (2017) http://dx.doi.org/10.1155/2017/7430125.
[50] S.P. Tripathy, A. Biswas, T. Pal, A multi-objective covering salesman problem with 2-coverage, Appl. Soft Comput. (2021) 108024, http://dx.doi.org/10.1016/J.ASOC.2021.108024.
[51] I.M. Ali, D. Essam, K. Kasmarik, A novel design of differential evolution for solving discrete traveling salesman problems, Swarm Evol. Comput. 52 (2020) 100607, http://dx.doi.org/10.1016/j.swevo.2019.100607.
[52] A.F. Tuani, E. Keedwell, M. Collett, H-ACO: A Heterogeneous Ant Colony Optimisation Approach with Application to the Travelling Salesman Problem, Springer International Publishing, 2018, http://dx.doi.org/10.1007/978-3-319-78133-4_11.
[53] A.F. Tuani, E. Keedwell, M. Collett, Heterogenous adaptive ant colony optimization with 3-opt local search for the travelling salesman problem, Appl. Soft Comput. 97 (2020) http://dx.doi.org/10.1016/j.asoc.2020.106720.
[54] J. Yu, X. You, S. Liu, A heterogeneous guided ant colony algorithm based on space explosion and long–short memory, Appl. Soft Comput. 113 (2021) 107991, http://dx.doi.org/10.1016/J.ASOC.2021.107991.
[55] Y. Zhong, J. Lin, L. Wang, H. Zhang, Discrete comprehensive learning particle swarm optimization algorithm with Metropolis acceptance criterion for traveling salesman problem, Swarm Evol. Comput. 42 (2018) 77–88, http://dx.doi.org/10.1016/j.swevo.2018.02.017.
[56] E. Osaba, X.S. Yang, F. Diaz, P. Lopez-Garcia, R. Carballedo, An improved discrete bat algorithm for symmetric and asymmetric Traveling Salesman Problems, Eng. Appl. Artif. Intell. 48 (2016) 59–71, http://dx.doi.org/10.1016/j.engappai.2015.10.006.
[57] E. Osaba, J. del Ser, A. Sadollah, M.N. Bilbao, D. Camacho, A discrete water cycle algorithm for solving the symmetric and asymmetric traveling salesman problem, Appl. Soft Comput. 71 (2018) 277–290, http://dx.doi.org/10.1016/j.asoc.2018.06.047.
[58] K. Panwar, K. Deep, Discrete Grey Wolf Optimizer for symmetric travelling salesman problem, Appl. Soft Comput. 105 (2021) http://dx.doi.org/10.1016/j.asoc.2021.107298.
[59] M. Gunduz, M. Aslan, DJAYA: A discrete jaya algorithm for solving traveling salesman problem, Appl. Soft Comput. 105 (2021) http://dx.doi.org/10.1016/j.asoc.2021.107275.
[60] Y. Huang, X.N. Shen, X. You, A discrete shuffled frog-leaping algorithm based on heuristic information for traveling salesman problem, Appl. Soft Comput. 102 (2021) http://dx.doi.org/10.1016/j.asoc.2021.107085.
[61] I. Khan, M.K. Maiti, A swap sequence based Artificial Bee Colony algorithm for Traveling Salesman Problem, Swarm Evol. Comput. 44 (2019) 428–438, http://dx.doi.org/10.1016/j.swevo.2018.05.006.
[62] A.C. Cinar, S. Korkmaz, M.S. Kiran, A discrete tree-seed algorithm for solving symmetric traveling salesman problem, Eng. Sci. Technol. Int. J. 23 (2020) 879–890, http://dx.doi.org/10.1016/j.jestch.2019.11.005.
[63] M.A.H. Akhand, S.I. Ayon, S.A. Shahriyar, N. Siddique, H. Adeli, Discrete spider monkey optimization for travelling salesman problem, Appl. Soft Comput. 86 (2020) http://dx.doi.org/10.1016/j.asoc.2019.105887.
[64] Y. Zhong, L. Wang, M. Lin, H. Zhang, Discrete pigeon-inspired optimization algorithm with Metropolis acceptance criterion for large-scale traveling salesman problem, Swarm Evol. Comput. 48 (2019) 134–144, http://dx.doi.org/10.1016/J.SWEVO.2019.04.002.
[65] P. Larrañaga, C.M.H. Kuijpers, R.H. Murga, I. Inza, S. Dizdarevic, Genetic algorithms for the travelling salesman problem: A review of representations and operators, Artif. Intell. Rev. 13 (1999) http://dx.doi.org/10.1023/A:1006529012972.
[66] A. Lipowski, D. Lipowska, Roulette-wheel selection via stochastic acceptance, Physica A 391 (2012) http://dx.doi.org/10.1016/j.physa.2011.12.004.
[67] J. Luo, H. Chen, Q. Zhang, Y. Xu, H. Huang, X. Zhao, An improved grasshopper optimization algorithm with application to financial stress prediction, Appl. Math. Model. 64 (2018) http://dx.doi.org/10.1016/j.apm.2018.07.044.
[68] S. Lin, Computer solutions of the traveling salesman problem, Bell Syst. Tech. J. 44 (1965) http://dx.doi.org/10.1002/j.1538-7305.1965.tb04146.x.
[69] P. Cortés, R.A. Gómez-Montoya, J. Muñuzuri, A. Correa-Espinal, A tabu search approach to solving the picking routing problem for large- and medium-size distribution centres considering the availability of inventory and K heterogeneous material handling equipment, Appl. Soft Comput. 53 (2017) 61–73, http://dx.doi.org/10.1016/J.ASOC.2016.12.026.
[70] T. Hintsch, S. Irnich, Large multiple neighborhood search for the clustered vehicle-routing problem, European J. Oper. Res. 270 (2018) 118–131, http://dx.doi.org/10.1016/J.EJOR.2018.02.056.
[71] G. Reinelt, TSPLIB - A traveling salesman problem library, ORSA J. Comput. 3 (1991) http://dx.doi.org/10.1287/ijoc.3.4.376.
[72] X.L. Li, Z.J. Shao, J.X. Qian, Optimizing method based on autonomous animats: Fish-swarm algorithm, Xitong Gongcheng Lilun Yu Shijian/Syst. Eng. Theory Pract. 22 (2002).
[73] J. Derrac, S. García, D. Molina, F. Herrera, A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms, Swarm Evol. Comput. 1 (2011) 3–18, http://dx.doi.org/10.1016/j.swevo.2011.02.002.