Spider Monkey Optimization Algorithm For Numerical Optimization
Abstract Swarm intelligence is a fascinating area for researchers in the field of optimization. Researchers have developed many algorithms by simulating the swarming behavior of various creatures such as ants, honey bees, fish, and birds, and their findings are very motivating. In this paper, a new approach to optimization is proposed by modeling the social behavior of spider monkeys. Spider monkeys have been categorized as animals with a fission-fusion social structure. Animals that follow fission-fusion social systems initially work in a large group and, based on need, after some time divide themselves into smaller groups led by an adult female for foraging. The proposed strategy can therefore broadly be classified as inspired by the intelligent foraging behavior of animals with a fission-fusion social structure.
1 Introduction
The name swarm is used for an accumulation of creatures such as ants, fish, birds, termites and honey bees that behave collectively. The definition of swarm intelligence given by Bonabeau is “any attempt to design algorithms or distributed problem-solving devices inspired by the collective behaviour of social insect colonies and other animal societies” [2].
Swarm intelligence is a meta-heuristic approach in the field of nature-inspired techniques that is used to solve optimization problems. It is based on the collective behavior of social creatures, which utilize their ability of social learning to solve complex tasks. Researchers have analyzed such behaviors and designed algorithms that can be used to solve nonlinear, nonconvex or combinatorial optimization problems in many science and engineering domains. Previous research [7, 12, 19, 29] has shown that algorithms based on swarm intelligence have great potential to find solutions of real-world optimization problems. The algorithms that have emerged in recent years include Ant Colony Optimization (ACO) [7], Particle Swarm Optimization (PSO) [12], Bacterial Foraging Optimization (BFO) [17], Artificial Bee Colony Optimization (ABC) [10], etc.
As shown in Fig. 1, the necessary and sufficient properties for obtaining intelligent swarming behaviors of animals are self-organization and division of labour. Each of the properties is explained as follows:
Fig. 1: Necessary and sufficient conditions for swarm intelligence: self-organization and division of labour.
1. Self-organization: an important feature of a swarm structure, which results in a global-level response by means of interactions among its low-level components, without a central authority or external element enforcing it through planning. The globally coherent pattern therefore emerges from the local interactions of the components that build up the structure; the organization is achieved in parallel, as all the elements act at the same time, and in a distributed manner, as no element is a central coordinator. Bonabeau et al. [2] have defined the following four important characteristics on which self-organization is based:
(i) Positive feedback: information extracted from the output of a system and reapplied to the input to promote the creation of convenient structures. In the field of swarm intelligence, positive feedback provides diversity and accelerates the system toward a new stable state.
(ii) Negative feedback: compensates for the effect of positive feedback and helps to stabilize the collective pattern.
(iii) Fluctuations: the rate or magnitude of random changes in the system. Randomness is often crucial for emergent structures since it allows the discovery of new solutions; in the foraging process, it helps to escape stagnation.
(iv) Multiple interactions: provide a way of learning from the individuals within a society and thus enhance the combined intelligence of the swarm.
2. Division of labour: cooperative labour in specific, circumscribed tasks and roles. In a group, there are various tasks, which are performed simultaneously by specialized individuals. Simultaneous task performance by cooperating specialized individuals is believed to be more efficient than sequential task performance by unspecialized individuals [6, 9, 16].
A fission-fusion swarm is a social grouping pattern in which individuals form temporary small parties (also called subgroups) whose members belong to a larger community (or unit-group) of stable membership; there can be fluid movement between subgroups and unit-groups, such that group composition and size change frequently [27].
The fission-fusion social system of a swarm may minimize direct foraging competition among group members, so they divide themselves into subgroups in order to search for food. The group members interact among themselves and with other group members to maintain social bonds and territorial boundaries. In this society, the social group sleeps in one habitat together but forages in small subgroups going off in different directions during the day. This form of social organization occurs in several species of primates such as hamadryas and gelada baboons, bonobos, chimpanzees, and spider monkeys. These societies change frequently in size and composition, making up a strong social group called the ‘parent group’. All the individual members of a faunal community comprise permanent social networks, and their capability to track changes in the environment varies according to their individual dynamics. In a fission-fusion society, the main parent group can fission into smaller subgroups or individuals to adapt to environmental or social circumstances. For example, members of a group separate from the main group in order to hunt or forage for food during the day, but at night they return to join (fusion) the primary group to share food and take part in other activities [27].
The society of spider monkeys is one example of a fission-fusion social structure. In the subsequent subsections, a brief overview of the swarming of spider monkeys is presented.
Fig. 2: Social Organization and Behavior: (a) Spider Monkey (b) Spider Monkey Group (c) Spider Monkey Sub-group (d) Food Foraging [23]

The social organization of spider monkeys is related to the fission-fusion social system; Fig. 2 shows their social organization [23]. They are social animals and live in groups of up to 50 individuals. Spider monkeys break up into small foraging groups that travel together and forage throughout the day within a core area of the larger group’s home
range [24]. Spider monkeys find their food in a very particular way: a female leads the group and is responsible for finding food sources. If she does not find sufficient food for the group, she divides the group into smaller subgroups that forage separately [14]. The subgroups within the band are temporary and may vary in composition frequently throughout the day, but average three members [28, 14]. When two different bands of spider monkeys come close to each other, the males in each band display aggressive and territorial behavior such as calling and barking. These exchanges occur with considerable distance between the two subgroups and do not involve any physical contact, showing that the groups respect distinct territorial boundaries [28]. Members of a society might never all be observed together in one place, but their mutual tolerance of each other when they come into contact reflects that they are part of the larger group [28]. The main reason behind the emergence of the fission-fusion social system is food competition among group members when food availability is low due to seasonal variation [14]. When a big group finds food at a particular location, there is likely to be less food per group member compared with a small group. When food scarcity is at its peak, the average subgroup size is smallest, and during periods of highest food availability, the subgroup size is largest, indicating that competition for scarce resources necessitates breaking into smaller foraging groups [28, 13]. One reason spider monkeys break into smaller foraging groups but still remain part of a larger social unit is the advantage to individual group members in terms of increased mating chances and protection from predators.
2.2 Communication
Spider monkeys share their intentions and observations using postures and positions, such as postures of sexual receptivity and of attack. During traveling, they interact with each other over long distances using a particular call which sounds like a horse’s whinny. Each individual has its own discernible sound, so that other members of the group can easily identify who is calling. This long-distance communication permits spider monkeys to come together, stay away from enemies, share food and gossip. In order to interact with other group members, they generally use visual and vocal communication [20].
4 Jagdish Chand Bansal et al.
The social behavior of spider monkeys inspired the authors to develop a stochastic optimization technique that mimics their foraging behavior. The foraging behavior of spider monkeys shows that these monkeys fall into the category of animals with a fission-fusion social structure (FFSS). Thus, the proposed optimization algorithm, which is based on the foraging behavior of spider monkeys, can be explained better in terms of FFSS. The key features of the FFSS are as follows.
1. The fission-fusion social structure based animals are social and live in groups of 40-50 individuals. The FFSS of a swarm may reduce the foraging competition among group members by dividing them into subgroups in order to search for food.
2. A female (global leader) generally leads the group and is responsible for searching for food sources. If she is not able to get enough food for the group, she divides the group into smaller subgroups (whose size varies from 3 to 8 members) that forage independently.
3. Sub-groups are also supposed to be led by a female (local leader) who becomes the decision-maker for planning an efficient foraging route each day.
4. The group members communicate among themselves and with other group members to maintain social bonds and territorial boundaries.
In the developed strategy, the foraging behavior of FFSS-based animals (e.g. spider monkeys) is divided into four steps. First, the group starts food foraging and evaluates its distance from the food. In the second step, based on the distance from the food, group members update their positions and again evaluate the distance from the food sources. In the third step, the local leader updates its best position within the group; if the position is not updated for a specified number of times, then all members of that group start searching for food in different directions. In the fourth step, the global leader updates its ever-best position and, in case of stagnation, splits the group into smaller subgroups. All four steps mentioned above are executed repeatedly until the desired output is achieved.
Two important control parameters are introduced in the proposed strategy: GlobalLeaderLimit and LocalLeaderLimit, which help the global and local leaders to take appropriate decisions.
The control parameter LocalLeaderLimit is used to avoid stagnation: if a local group leader does not update herself within a specified number of times, then that group is redirected to a different direction for foraging. Here, the term ‘specified number of times’ is referred to as LocalLeaderLimit. The other control parameter, GlobalLeaderLimit, is used for the same purpose for the global leader: the global leader breaks the group into smaller subgroups if she does not update within a specified number of times.
The proposed strategy employs the self-organization and division-of-labour properties required for intelligent swarming behavior. As animals update their positions by learning from the local leader, the global leader, and their own experience in the first and second steps of the algorithm, these steps exhibit the positive feedback mechanism of self-organization. In the third step, the stagnated group members are redirected to different directions for food searching, which shows the fluctuation property. In the fourth step, when the global leader gets stuck, it divides the group into smaller subgroups for foraging; this phenomenon represents the division-of-labour property. LocalLeaderLimit and GlobalLeaderLimit provide negative feedback to help the local and global leaders in their decisions.
Similar to other population-based algorithms, SMO is a trial-and-error based collaborative iterative process. The SMO process consists of six phases: Local Leader phase, Global Leader phase, Local Leader Learning phase, Global Leader Learning phase, Local Leader Decision phase and Global Leader Decision phase. The position update process in the Global Leader phase is inspired by the Gbest-guided ABC [32] and a modified version of ABC [11]. The details of each step of the SMO implementation are explained below.
Initially, SMO generates a uniformly distributed initial population of N spider monkeys, where each monkey SMi (i = 1, 2, ..., N) is a D-dimensional vector. Here D is the number of variables in the optimization problem and SMi represents
the ith spider monkey (SM) in the population. Each spider monkey SM corresponds to a potential solution of the problem under consideration. Each SMi is initialized as follows:

SMij = SMminj + U(0, 1) × (SMmaxj − SMminj)    (1)

where SMminj and SMmaxj are the bounds of SMi in the jth direction and U(0, 1) is a uniformly distributed random number in the range [0, 1].
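The initialization rule above can be sketched in Python; this is a minimal illustration, and the function and variable names are ours, not the paper's:

```python
import numpy as np

# Sketch of SMO initialization: N spider monkeys, each a D-dimensional
# point drawn uniformly within per-dimension bounds.
def init_population(n, sm_min, sm_max, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    sm_min = np.asarray(sm_min, dtype=float)
    sm_max = np.asarray(sm_max, dtype=float)
    # SM_ij = SM_minj + U(0,1) * (SM_maxj - SM_minj)
    return sm_min + rng.random((n, sm_min.size)) * (sm_max - sm_min)
```

For example, `init_population(50, [-5.12]*30, [5.12]*30)` produces a 50 × 30 swarm lying within the bounds.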
In the Local Leader phase, an SM modifies its current position based on the information of the local leader's experience as well as the local group members' experience. The fitness value of the newly obtained position is calculated; if the fitness value of the new position is higher than that of the old position, then the SM updates its position with the new one. The position update equation for the ith SM (which is a member of the kth local group) in this phase is

SMnewij = SMij + U(0, 1) × (LLkj − SMij) + U(−1, 1) × (SMrj − SMij)    (2)

where SMij is the jth dimension of the ith SM, LLkj represents the jth dimension of the kth local group leader's position, and SMrj is the jth dimension of the rth SM, which is chosen randomly within the kth group such that r ≠ i; U(0, 1) is a uniformly distributed random number between 0 and 1. Algorithm 1 shows the position update process in the Local Leader phase. In Algorithm 1, pr is the perturbation rate, which controls the amount of perturbation in the current position.
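A minimal vectorized sketch of this update, with our own naming; the subsequent greedy selection between the old and new positions is omitted for brevity:

```python
import numpy as np

# Sketch of one Local Leader phase update for monkey i of group k.
# Each dimension is perturbed only when U(0,1) >= pr, pulling toward the
# local leader ll_k and adding a signed difference with a random mate sm_r.
def local_leader_update(sm_i, ll_k, sm_r, pr, rng):
    u = rng.random(sm_i.size)
    new = sm_i + rng.random(sm_i.size) * (ll_k - sm_i) \
               + rng.uniform(-1, 1, sm_i.size) * (sm_r - sm_i)
    # dimensions with U(0,1) < pr are left unchanged (perturbation rate)
    return np.where(u >= pr, new, sm_i)
```

With pr = 1 the position is left untouched; with pr = 0 every dimension is perturbed.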
After completion of the Local Leader phase, the Global Leader phase (GLP) starts. In the GLP, all the SMs update their positions using the experience of the global leader and of the local group members. The position update equation for this phase is as follows:

SMnewij = SMij + U(0, 1) × (GLj − SMij) + U(−1, 1) × (SMrj − SMij)    (3)

where GLj represents the jth dimension of the global leader's position and j ∈ {1, 2, ..., D} is a randomly chosen index.
In this phase, the position of SMi is updated based on a probability probi, which is calculated using the fitness values, so that a better candidate has a higher chance to improve itself. The probability probi may be calculated using the following expression (other expressions are possible, but it must be a function of fitness):

probi = fitnessi / Σ(i=1 to N) fitnessi    (4)
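The fitness-proportional selection and the single-dimension update described above can be sketched as follows (a simplified illustration with our own function names):

```python
import numpy as np

# Sketch of the Global Leader phase: selection probabilities are
# proportional to fitness, and a selected monkey updates one randomly
# chosen dimension j toward the global leader gl.
def glp_probabilities(fitness):
    fitness = np.asarray(fitness, dtype=float)
    return fitness / fitness.sum()

def global_leader_update(sm_i, gl, sm_r, rng):
    j = rng.integers(sm_i.size)          # randomly chosen index j
    new = sm_i.copy()
    new[j] = sm_i[j] + rng.random() * (gl[j] - sm_i[j]) \
                     + rng.uniform(-1, 1) * (sm_r[j] - sm_i[j])
    return new
```

Note that, unlike the Local Leader phase, only a single dimension is modified per selected monkey.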
In the Global Leader Learning phase, the position of the global leader is updated by applying greedy selection in the population, i.e. the position of the SM having the best fitness in the population is selected as the updated position of the global leader. Further, it is checked whether the position of the global leader has been updated; if not, the GlobalLimitCount is incremented by 1.
In the Local Leader Learning phase, the position of the local leader is updated by applying greedy selection within that group, i.e. the position of the SM having the best fitness in the group is selected as the updated position of the local leader. Next, the updated position of the local leader is compared with the old one; if the local leader has not been updated, the LocalLimitCount is incremented by 1.
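The greedy selection with a stagnation counter can be sketched for one group as follows (names are ours; fitness is maximized, as in the text):

```python
import numpy as np

# Sketch of the Local Leader Learning step for one group: the group's
# fittest member becomes the local leader; if the leader did not improve,
# the group's stagnation counter is incremented.
def learn_local_leader(group_pos, group_fit, leader, leader_fit, limit_count):
    best = int(np.argmax(group_fit))          # greedy selection (max fitness)
    if group_fit[best] > leader_fit:
        return group_pos[best].copy(), group_fit[best], 0
    return leader, leader_fit, limit_count + 1
```

The Global Leader Learning step is identical in shape, applied to the whole population and to GlobalLimitCount.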
If any local leader's position is not updated for a predetermined number of iterations, called LocalLeaderLimit, then all the members of that group update their positions either by random initialization or by using combined information from the global leader and the local leader through equation (5), depending on pr:

SMnewij = SMij + U(0, 1) × (GLj − SMij) + U(0, 1) × (SMij − LLkj)    (5)

It is clear from equation (5) that the updated dimension of this SM is attracted toward the global leader and repelled from the local leader. The position update process of the Local Leader Decision (LLD) phase is shown in Algorithm 3. Further, the fitness of the updated SM is calculated.
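The two per-dimension alternatives of this phase, combined information via equation (5) or random re-initialization, can be sketched as (our naming):

```python
import numpy as np

# Sketch of the Local Leader Decision update: each dimension uses the
# combined-information rule of eq. (5) when U(0,1) >= pr, and is
# re-initialized uniformly within the bounds otherwise.
def lld_update(sm_i, gl, ll_k, sm_min, sm_max, pr, rng):
    u = rng.random(sm_i.size)
    combined = sm_i + rng.random(sm_i.size) * (gl - sm_i) \
                    + rng.random(sm_i.size) * (sm_i - ll_k)
    random_init = sm_min + rng.random(sm_i.size) * (sm_max - sm_min)
    return np.where(u >= pr, combined, random_init)
```

Note that both difference terms use U(0, 1), so the member is attracted toward the global leader and pushed away from the stale local leader.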
In the Global Leader Decision (GLD) phase, the position of the global leader is monitored; if it is not updated for a predetermined number of iterations, called GlobalLeaderLimit, then the global leader divides the population into smaller groups. First the population is divided into two groups, then three groups, and so on, until the maximum number of groups (MG) is formed, as shown in Figs. 3-6. Each time the GLD phase divides the population, the Local Leader Learning process is initiated to elect the local leaders of the newly formed groups. If the maximum number of groups has been formed and the position of the global leader is still not updated, then the global leader combines all the groups to form a single group. Thus the proposed algorithm mimics the fission-fusion structure of spider monkeys. The working of this phase is shown in Algorithm 4.
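The fission/fusion bookkeeping can be sketched as follows; the function name and the contiguous-split policy are our own simplification of Algorithm 4:

```python
import numpy as np

# Sketch of the fission step of the Global Leader Decision phase: on
# stagnation the population is split into one more group, up to MG
# groups, after which all groups fuse back into a single group.
def next_grouping(n, num_groups, mg):
    num_groups = 1 if num_groups >= mg else num_groups + 1  # fuse or fission
    # contiguous index ranges of (nearly) equal size, one per group
    return np.array_split(np.arange(n), num_groups)
```

After each call, the Local Leader Learning step would elect a leader in every returned index group.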
It is clear from the above discussion that there are four control parameters in the SMO algorithm: LocalLeaderLimit, GlobalLeaderLimit, the maximum number of groups MG, and the perturbation rate pr. Some settings of the control parameters are suggested as follows:
– MG = N/10, i.e. it is chosen such that the minimum number of SMs in a group is 10,
– pr ∈ [0.1, 0.8].
In order to establish SMO as a swarm intelligence algorithm for optimization, it is tested over well-known optimization test problems as well as some popular real-world optimization problems. Sensitivity analyses of the different parameters and statistical analyses of the results, with comparison to some other well-established optimization algorithms, have been carried out.
In order to analyze the performance of the SMO algorithm, 21 different global optimization problems (f1 to f21) are selected (listed in Table 3). These are continuous, non-biased optimization problems with different degrees of complexity and multimodality. Test problems f1-f8 and f15-f21 are taken from [1], and test problems f9-f14 are taken from [26] with the associated offset values.
To assess the robustness of the proposed strategy, four real-world global optimization problems, namely Pressure Vessel [30], Parameter Estimation for Frequency-Modulated (FM) Sound Waves [5], Compression Spring [15, 22, 3] and Gear Train [15, 22], have been solved.
Pressure Vessel design (without granularity): The pressure vessel design problem is to minimize the total cost of the material, forming, and welding of a cylindrical vessel [30]. There are four design variables involved: x1 (Ts, shell thickness), x2 (Th, spherical head thickness), x3 (R, radius of cylindrical shell), and x4 (L, shell length). The mathematical formulation of this typical constrained optimization problem is as follows:
subject to
g1(X) = 0.0193x3 − x1 ≤ 0,
g2(X) = 0.00954x3 − x2 ≤ 0,
g3(X) = 750 × 1728 − πx3²(x4 + (4/3)x3) ≤ 0.
The search boundaries for the variables are
1.125 ≤ x1 ≤ 12.5,  0.625 ≤ x2 ≤ 12.5,  1.0 × 10⁻⁸ ≤ x3 ≤ 240,  and  1.0 × 10⁻⁸ ≤ x4 ≤ 240.
The best known global optimum solution is f(1.125, 0.625, 58.29016, 43.69266) = 7197.729 [30]. For a successful run, the minimum error criterion is fixed at 1.0 × 10⁻⁵, i.e. an algorithm is considered successful if it attains an error less than the acceptable error within the specified maximum number of function evaluations.
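The cost function itself was elided above; the standard cost expression for this benchmark is reproduced here as an assumption, together with the three constraints (function names are ours):

```python
import numpy as np

# Pressure vessel evaluation: standard cost function from the literature
# (assumed, since the objective was elided in the text) and the three
# constraints g1-g3, each required to be <= 0.
def pressure_vessel(x):
    x1, x2, x3, x4 = x
    cost = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)
    g = (0.0193 * x3 - x1,                                    # g1 <= 0
         0.00954 * x3 - x2,                                   # g2 <= 0
         750 * 1728 - np.pi * x3 ** 2 * (x4 + 4.0 * x3 / 3))  # g3 <= 0
    return cost, g
```

At the best known solution quoted above, g1 and g3 are numerically active (close to zero), which is typical of this benchmark.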
Frequency-Modulated (FM) sound wave: Frequency-modulated (FM) sound wave synthesis plays an important role in several modern music systems, and optimizing the parameters of an FM synthesizer is a six-dimensional optimization problem where the vector to be optimized is X = {a1, w1, a2, w2, a3, w3} of the sound wave given in equation (6). The problem is to generate a sound (6) similar to the target (7). This problem is highly complex and multimodal, with strong epistasis, and has minimum value f(Xsol) = 0. The expressions for the estimated sound wave and the target sound wave are given as:
y(t) = a1 sin(w1 tθ + a2 sin(w2 tθ + a3 sin(w3 tθ))) (6)
y0 (t) = (1.0)sin((5.0)tθ − (1.5)sin((4.8)tθ + (2.0)sin((4.9)tθ))) (7)
respectively, where θ = 2π/100 and the parameters are defined in the range [−6.4, 6.35]. The fitness function is the summation of squared errors between the estimated wave (6) and the target wave (7) and is given below:

f23(X) = Σ(t=0 to 100) (y(t) − y0(t))²

The acceptable error for this problem is fixed at 1.0 × 10⁻⁵, i.e. an algorithm is considered successful if it finds an error less than the acceptable error within the specified maximum number of function evaluations.
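Since equations (6) and (7) fully specify the fitness, it can be written directly (the function name is ours):

```python
import numpy as np

# Fitness for FM parameter estimation: summed squared error between the
# estimated wave (eq. 6) and the target wave (eq. 7) over t = 0..100.
def fm_fitness(x):
    a1, w1, a2, w2, a3, w3 = x
    theta = 2 * np.pi / 100
    t = np.arange(101)
    y = a1 * np.sin(w1 * t * theta
                    + a2 * np.sin(w2 * t * theta
                                  + a3 * np.sin(w3 * t * theta)))
    y0 = 1.0 * np.sin(5.0 * t * theta
                      - 1.5 * np.sin(4.8 * t * theta
                                     + 2.0 * np.sin(4.9 * t * theta)))
    return np.sum((y - y0) ** 2)
```

The vector X = (1.0, 5.0, −1.5, 4.8, 2.0, 4.9) reproduces the target exactly, giving fitness 0.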
Compression Spring: The third real-world problem considered is the compression spring problem [15, 22, 3]. This problem minimizes the weight of a compression spring subject to constraints on minimum deflection, shear stress, surge frequency, and limits on the outside diameter and on the design variables. There are three design variables: the wire diameter x1, the mean coil diameter x2, and the number of active coils x3. This is a simplified version of a more difficult problem. The mathematical formulation of this problem is:
Gear Train: The gear train problem [15, 22] seeks four integer tooth counts whose gear ratio matches a target ratio of 1/6.931 as closely as possible. The search space is {12, 13, ..., 60}⁴. There are several solutions, depending on the required precision; here we used 10⁻¹³. A possible solution is f* = f(19, 16, 43, 49) = 2.7 × 10⁻¹². For this problem, a run is said to be successful if it finds a fitness f such that |f − f*| ≤ 10⁻¹³.
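The gear train objective is well known in the literature and is reproduced here as an assumption, since the formulation was elided above:

```python
# Gear train objective: squared difference between the desired ratio
# 1/6.931 and the ratio of tooth products x1*x2 / (x3*x4).
def gear_train(x):
    x1, x2, x3, x4 = x
    return (1.0 / 6.931 - (x1 * x2) / (x3 * x4)) ** 2
```

Evaluating the quoted solution (19, 16, 43, 49) yields a fitness on the order of 10⁻¹², consistent with f* above.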
Swarm size, perturbation rate (pr), LLL, GLL and the maximum number of groups (MG) are the parameters that significantly affect the performance of SMO. To fine-tune these parameters (i.e. to find their most suitable values), sensitivity analyses with different values of these parameters have been carried out. The swarm size is varied from 40 to 160 with step size 20, pr is varied from 0.1 to 0.9 with step size 0.1, MG is varied from 1 to 6 with step size 1, LLL is varied from 100 to 2500 with step size 200, and GLL is varied from 10 to 220 with step size 30. Only one parameter is varied at a time, while all others are kept fixed. This fine-tuning is done under the following settings:
– pr is varied from 0.1 to 0.9 while MG, LLL, GLL and the swarm size are fixed at 5, 1500, 50 and 50 respectively.
– MG is varied from 1 to 6 while LLL, GLL and the swarm size are fixed at 1500, 50 and 50 respectively; pr increases linearly from 0.1 to 0.4 over the iterations.
– GLL is varied from 10 to 220 while LLL, MG and the swarm size are fixed at 1500, 5 and 50 respectively; pr increases linearly from 0.1 to 0.4 over the iterations.
– LLL is varied from 100 to 2500 while MG, GLL and the swarm size are fixed at 5, 50 and 50 respectively; pr increases linearly from 0.1 to 0.4 over the iterations.
– The swarm size is varied from 40 to 160 while LLL, GLL and MG are fixed at 1500, 50 and 5 respectively; pr increases linearly from 0.1 to 0.4 over the iterations.
For the purpose of the sensitivity analysis, 6 problems are considered, and each problem is simulated 30 times. The effects of these parameters are shown in Figures 7(a)-7(f). It is clear from Fig. 7(a) that the test problems are very sensitive to pr: some problems perform better at lower values of pr, while others perform better at higher values. Therefore, the value of pr is linearly increased over the iterations to balance all the test problems. Further, by analyzing Fig. 7(b), it can be stated that MG = 5 gives comparatively better results for the given set of test problems. The sensitivity to GlobalLeaderLimit (GLL) and LocalLeaderLimit (LLL) can be analyzed from Fig. 7(c) and Fig. 7(d); it is found that GLL = 50 and LLL = 1500 give better results on the considered benchmark optimization problems. Further, the swarm size is analyzed in Fig. 7(e) and Fig. 7(f). It is clear from Fig. 7(e) that functions f9, f13 and f21 are sensitive to the swarm size and give better results at 40. Since the success rates of the remaining three benchmark functions do not vary significantly with the swarm size, the average number of function evaluations is analyzed instead. Fig. 7(f) shows that for swarm size 40 the average number of function evaluations is minimum for all the considered benchmark functions.
To evaluate the efficiency of the SMO algorithm, it is compared with three popular nature-inspired algorithms, namely PSO (based on Standard PSO 2006 [3], but with a linearly decreasing inertia weight and a different parameter setting), ABC [10] and DE (DE/rand/bin/1) [25]. These algorithms are selected because of the similarity in their position update procedures, which are based on difference vectors, i.e. the variation component [32]. For the comparison, the same stopping criteria, number of simulations, maximum number of function evaluations, and random number generator (KISS [21]) are used for all the algorithms (ABC, DE, PSO) as in the SMO algorithm. The parameter values for the considered algorithms are as follows:
SMO parameter settings:
– swarm size N = 50,
– MG = 5,
– GlobalLeaderLimit = 50,
– LocalLeaderLimit = 1500,
– pr ∈ [0.1, 0.4], linearly increasing over iterations.
Fig. 7: Effect of parameters on success rate for functions f4 , f7 , f9 , f13 , f21 and f23 (a) for pr (b) for M G (c) for GlobalLimit (d) for
LocalLimit (e) for Swarm Size v/s Successful Run (f) for Swarm Size v/s Average Function Evaluations.
Numerical results with the experimental settings of Subsection 4.3 are given in Table 1, which reports the success rate (SR), mean error (ME), average number of function evaluations (AFE) and standard deviation (SD). Table 1 shows that most of the time SMO outperforms the other algorithms in terms of reliability, efficiency and accuracy. Some more intensive statistical analyses based on the t-test and boxplots have been carried out for the results of SMO, PSO, DE and ABC.
Fig. 8 shows the convergence characteristics, in terms of the error of the median run of each algorithm, for the functions on which all the considered algorithms achieved a 100% success rate within the specified maximum number of function evaluations (to allow a fair comparison of convergence rate). It can be observed that the convergence of SMO is relatively better than that of PSO, DE and ABC.
In order to compare the performance of SMO, PSO, DE and ABC, statistical analyses have been carried out using the t-test and boxplots.
The t-test is quite popular among researchers in the field of evolutionary computation. In this paper, Student's t-test is applied according to the description given in [4] for a confidence level of 0.95. Table 2 shows the results of the t-test for the null hypothesis that there is no difference in the mean number of function evaluations over 100 runs of SMO, PSO, DE and ABC. Here, '+' indicates a significant difference (the null hypothesis is rejected) at the 0.05 level of significance, '-' implies that there is no significant difference, and '=' indicates that the comparison is not possible. In Table 2, SMO is compared with PSO, DE and ABC. Significant differences are observed in 55 out of 75 comparisons. Therefore, it can be concluded that the results of SMO are significantly better than those of ABC, DE and PSO.
A boxplot analysis is also carried out for all the considered algorithms. The empirical distribution of data is efficiently represented graphically by the boxplot analysis tool [31]. The boxplots for SMO, PSO, DE and ABC are shown in Figure 9. It is clear from this figure that strategy (1), i.e. SMO, is the best among all the mentioned strategies, as its interquartile range and median are the lowest.
SMO, PSO, DE and ABC are compared through SR, ME and AFE. First, SR is compared for all these algorithms; if the algorithms cannot be distinguished on the basis of SR, the comparison is made on the basis of ME. AFE is used for comparison when it is not possible to distinguish the algorithms on the basis of both SR and ME. It is observed from Table 1 that SMO outperforms all the considered algorithms on 13 problems. It is the better algorithm on 17, 21 and 19 problems compared to PSO, DE and ABC, respectively.
Fig. 8: Convergence characteristics of SM O, P SO, DE and ABC for functions; (a) (f23 ), (b) (f24 ), (c) (f29 ), (d) (f33 ), (e) (f36 ), (f)
(f37 ), (g) (f41 ).
Fig. 9: Boxplot graph for Average Function Evaluation: (1) SM O (2) P SO (3) DE (4) ABC
5 Conclusion
In this paper, a new meta-heuristic algorithm for optimization, inspired by the social behavior of spider monkeys, is proposed. The proposed algorithm proves to be very flexible within the category of swarm-intelligence-based algorithms. Generally, in the solution search process, the exploration and exploitation capabilities contradict each other; therefore, to obtain better performance on optimization problems, the two capabilities should be well balanced. It may be noted that the solution search equations of SMO in the Local Leader phase, the Global Leader phase and the Local Leader Decision phase use the local best solution, the global best solution, and both, respectively. Therefore, SMO exploits the search space efficiently. Further, the use of the perturbation rate (pr) and of random difference vectors in the said phases explores the search space. Experiments on test problems and real-world problems show that, for most of the problems, the reliability (in terms of success rate), efficiency (in terms of average number of function evaluations) and accuracy (in terms of mean objective function value) of the SMO algorithm are higher than those of ABC, PSO and DE. Hence, it may be concluded that SMO is a competitive candidate in the field of swarm-based optimization algorithms.
References
1. M.M. Ali, C. Khompatraporn, and Z.B. Zabinsky. A numerical evaluation of several stochastic algorithms on
selected continuous global optimization test problems. Journal of Global Optimization, 31(4):635–672, 2005.
2. E. Bonabeau, M. Dorigo, and G. Theraulaz. Swarm intelligence: from natural to artificial systems. Number 1.
Oxford University Press, USA, 1999.
3. M. Clerc. Particle swarm optimization. Wiley-ISTE, 2006.
4. C. Croarkin and P. Tobias. Nist/sematech e-handbook of statistical methods. Retrieved March, 1:2010, 2010.
5. S. Das and P.N. Suganthan. Problem definitions and evaluation criteria for CEC 2011 competition on testing evolutionary algorithms on real world optimization problems. Jadavpur University, Kolkata, India, and Nanyang Technological University, Singapore, Tech. Rep., 2010.
6. L.N. De Castro and F.J. Von Zuben. Artificial immune systems: Part i–basic theory and applications. Universidade
Estadual de Campinas, Dezembro de, Tech. Rep, 1999.
7. M. Dorigo and T. Stützle. Ant colony optimization. the MIT Press, 2004.
8. R. Gamperle, S.D. Muller, and A. Koumoutsakos. A parameter study for differential evolution. Advances in
Intelligent Systems, Fuzzy Systems, Evolutionary Computation, 10:293–298, 2002.
9. RL Jeanne. The evolution of the organization of work in social insects. Monitore zoologico italiano, 20(2):119–133,
1986.
10. D. Karaboga. An idea based on honey bee swarm for numerical optimization. Techn. Rep. TR06, Erciyes Univ.
Press, Erciyes, 2005.
20 Jagdish Chand Bansal et al.
11. D. Karaboga and B. Akay. A modified artificial bee colony (abc) algorithm for constrained optimization problems.
Applied Soft Computing, 2010.
12. J. Kennedy and R. Eberhart. Particle swarm optimization. In Neural Networks, 1995. Proceedings., IEEE Inter-
national Conference on, volume 4, pages 1942–1948. IEEE, 1995.
13. K. Milton. Diet and social organization of a free-ranging spider monkey population: The development of species-
typical behavior in the absence of adults. Juvenile primates: life history, development, and behavior, pages 173–181,
1993.
14. M.A. Norconk and W.G. Kinzey. Challenge of neotropical frugivory: travel patterns of spider monkeys and bearded
sakis. American Journal of Primatology, 34(2):171–183, 1994.
15. G.C. Onwubolu and BV Babu. New optimization techniques in engineering, volume 141. Springer Verlag, 2004.
16. G.F. Oster and E.O. Wilson. Caste and ecology in the social insects. Princeton Univ Pr, 1979.
17. K.M. Passino. Bacterial foraging optimization. International Journal of Swarm Intelligence Research (IJSIR),
1(1):1–16, 2010.
18. K.V. Price. Differential evolution: a fast and simple numerical optimizer. In Fuzzy Information Processing Society,
1996. NAFIPS. 1996 Biennial Conference of the North American, pages 524–527. IEEE, 1996.
19. K.V. Price, R.M. Storn, and J.A. Lampinen. Differential evolution: a practical approach to global optimization.
Springer Verlag, 2005.
20. G. Ramos-Fernandez. Patterns of association, feeding competition and vocal communication in spider monkeys,
ateles geoffroyi. 2001.
21. G. Rose. Kiss: A bit too simple. http://eprint.iacr.org/2011/007.pdf, 18 April 2011.
22. E. Sandgren. Nonlinear integer and discrete programming in mechanical design optimization. Journal of Mechanical
Design, 112:223, 1990.
23. J. Sartore. Spider monkey images. http://animals.nationalgeographic.com/animals/mammals/spider-monkey. Retrieved on 21 December 2011.
24. B. Simmen and D. Sabatier. Diets of some french guianan primates: food composition and food choices. International
Journal of Primatology, 17(5):661–693, 1996.
25. R. Storn and K. Price. Differential evolution: a simple and efficient adaptive scheme for global optimization over continuous spaces. International Computer Science Institute Publications, Tech. Rep., 1997.
26. P.N. Suganthan, N. Hansen, J.J. Liang, K. Deb, YP Chen, A. Auger, and S. Tiwari. Problem definitions and
evaluation criteria for the cec 2005 special session on real-parameter optimization. KanGAL Report, 2005005, 2005.
27. M.M.F. Symington. Fission-fusion social organization in Ateles and Pan. International Journal of Primatology, 11(1):47–61, 1990.
28. M.G.M. van Roosmalen and Instituto Nacional de Pesquisas da Amazônia. Habitat preferences, diet, feeding
strategy and social organization of the black spider monkey (ateles paniscus paniscus linnaeus 1758) in surinam,
1985.
29. J. Vesterstrom and R. Thomsen. A comparative study of differential evolution, particle swarm optimization, and evo-
lutionary algorithms on numerical benchmark problems. In Evolutionary Computation, 2004. CEC2004. Congress
on, volume 2, pages 1980–1987. IEEE, 2004.
30. X. Wang, XZ Gao, and SJ Ovaska. A simulated annealing-based immune optimization method. In Proceedings of
the International and Interdisciplinary Conference on Adaptive Knowledge Representation and Reasoning, Porvoo,
Finland, pages 41–47, 2008.
31. D.F. Williamson, R.A. Parker, and J.S. Kendrick. The box plot: a simple visual method to interpret data. Annals
of internal medicine, 110(11):916, 1989.
32. G. Zhu and S. Kwong. Gbest-guided artificial bee colony algorithm for numerical function optimization. Applied
Mathematics and Computation, 2010.
Table 3: Test problems
(Each entry: objective function; search range; optimum value; dimension D; acceptable error.)

Beale: f6(x) = [1.5 − x1(1 − x2)]^2 + [2.25 − x1(1 − x2^2)]^2 + [2.625 − x1(1 − x2^3)]^2; [-4.5, 4.5]; f(3, 0.5) = 0; D = 2; 1.0E−05.

Kowalik: f7(x) = Σ_{i=1}^{11} [a_i − x1(b_i^2 + b_i x2)/(b_i^2 + b_i x3 + x4)]^2; [-5, 5]; f(0.192833, 0.190836, 0.123117, 0.135766) = 0.000307486; D = 4; 1.0E−05.

2D Tripod: f8(x) = p(x2)(1 + p(x1)) + |x1 + 50 p(x2)(1 − 2 p(x1))| + |x2 + 50(1 − 2 p(x2))|; [-100, 100]; f(0, −50) = 0; D = 2; 1.0E−04.

Shifted Rosenbrock: f9(x) = Σ_{i=1}^{D−1} (100(z_i^2 − z_{i+1})^2 + (z_i − 1)^2) + f_bias, z = x − o + 1, x = [x1, x2, ..., xD], o = [o1, o2, ..., oD]; [-100, 100]; f(o) = f_bias = 390; D = 10; 1.0E−01.

Shifted Sphere: f10(x) = Σ_{i=1}^{D} z_i^2 + f_bias, z = x − o; [-100, 100]; f(o) = f_bias = −450; D = 10; 1.0E−05.

Shifted Rastrigin: f11(x) = Σ_{i=1}^{D} (z_i^2 − 10 cos(2πz_i) + 10) + f_bias, z = x − o; [-5, 5]; f(o) = f_bias = −330; D = 10; 1.0E−02.

Shifted Schwefel: f12(x) = Σ_{i=1}^{D} (Σ_{j=1}^{i} z_j)^2 + f_bias, z = x − o; [-100, 100]; f(o) = f_bias = −450; D = 10; 1.0E−05.

Shifted Griewank: f13(x) = Σ_{i=1}^{D} z_i^2/4000 − Π_{i=1}^{D} cos(z_i/√i) + 1 + f_bias, z = x − o; [-600, 600]; f(o) = f_bias; D = 10; 1.0E−05.
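The definitions above can be checked numerically against the tabulated optima. A minimal sketch in plain Python (function names chosen here, not from the paper) for the Beale and Shifted Sphere problems:

```python
def beale(x1, x2):
    # f6 from Table 3; global minimum f(3, 0.5) = 0.
    return ((1.5 - x1 * (1 - x2)) ** 2
            + (2.25 - x1 * (1 - x2 ** 2)) ** 2
            + (2.625 - x1 * (1 - x2 ** 3)) ** 2)

def shifted_sphere(x, o, f_bias=-450.0):
    # f10 from Table 3: sum of squared shifted coordinates plus
    # the bias, so f(o) = f_bias at the shifted optimum o.
    return sum((xi - oi) ** 2 for xi, oi in zip(x, o)) + f_bias
```

Evaluating each function at its listed optimum reproduces the tabulated optimum value, e.g. beale(3, 0.5) gives 0 and shifted_sphere(o, o) gives −450 for any shift vector o.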
Table 3: Test problems (Cont.)