J Glob Optim
DOI 10.1007/s10898-011-9736-8




Clonal selection: an immunological algorithm for global
optimization over continuous spaces

Mario Pavone · Giuseppe Narzisi · Giuseppe Nicosia




Received: 7 October 2009 / Accepted: 23 May 2011
© Springer Science+Business Media, LLC. 2011


Abstract In this research paper we present an immunological algorithm (IA) to solve
global numerical optimization problems for high-dimensional instances. Such optimization
problems are a crucial component for many real-world applications. We designed two ver-
sions of the IA: the first based on binary-code representation and the second based on real
values, called opt-IMMALG01 and opt-IMMALG, respectively. A large set of experiments
is presented to evaluate the effectiveness of the two proposed versions of IA. Both opt-
IMMALG01 and opt-IMMALG were extensively compared against several nature inspired
methodologies including a set of Differential Evolution algorithms whose performance is
known to be superior to many other bio-inspired and deterministic algorithms on the same
test bed. Also hybrid and deterministic global search algorithms (e.g., DIRECT, LeGO,
PSwarm) are compared with both IA versions, for a total of 39 optimization algorithms. The
results suggest that the proposed immunological algorithm is effective, in terms of accu-
racy, and capable of solving large-scale instances for well-known benchmarks. Experimental
results also indicate that both IA versions are comparable, and often outperform, the state-
of-the-art optimization algorithms.

Keywords Nonlinear optimization · Global optimization · Derivative-free optimization ·
Black-box optimization · Immunological algorithms · Evolutionary algorithms




M. Pavone · G. Nicosia
Department of Mathematics and Computer Science, University of Catania, Viale A. Doria 6,
95125 Catania, Italy
e-mail: nicosia@dmi.unict.it

M. Pavone
e-mail: mpavone@dmi.unict.it

G. Narzisi
Computer Science Department, Courant Institute of Mathematical Sciences,
New York University, New York, NY 10012, USA
e-mail: narzisi@nyu.edu




1 Introduction

Artificial Immune Systems (AIS) is a paradigm in biologically inspired computing, which
has been successfully applied to several real-world applications in computer science and
engineering [17,23,37–39]. AIS are bio-inspired algorithms that take their inspiration from
the natural immune system, whose function is to detect and protect the organism against
foreign organisms, such as viruses, bacteria, fungi and parasites, that can cause disease.
Research on AIS has concentrated primarily on three immunological theories: (1) immune
networks, (2) negative selection and (3) clonal selection. Such algorithms have been
successfully employed in a variety of application areas [18,35]. All algorithms based on the
simulation of the clonal selection principle belong to a special class called Clonal Selection
Algorithms (CSA), and represent an effective mechanism for search and optimization
[13,15,16]. The core components of CSAs are the cloning and hypermutation operators: the
former triggers the growth of a new population of high-value B cells (the candidate
solutions) centered on a higher affinity value, whereas the latter can be seen as a local
search procedure that leads to a faster maturation during the learning phase.
    We designed and implemented an Immunological Algorithm (IA) to tackle the global
numerical optimization problems, based on CSAs. We give two different versions of the
proposed IA, using either binary-code or real values representations, called respectively opt-
IMMALG01 and opt-IMMALG.
    Global optimization is the task of finding the best set of parameters to optimize a given
objective function; global optimization problems are typically quite difficult to solve because
of the presence of many locally optimal solutions [22]. In many real-world applications
analytical solutions, even for simple problems, are not always available, so numerical
continuous optimization by approximate methods is often the only viable alternative [22,33].
    Global optimization consists of finding a variable (or a set of variables) x =
(x1, x2, . . . , xn) ∈ S, where S ⊆ Rn is a bounded set, such that a certain n-dimensional
objective function f : S → R is optimized. Specifically, the goal for a global minimization
problem is to find a point xmin ∈ S such that f (xmin) is a global minimum on S, i.e.
∀x ∈ S : f (xmin) ≤ f (x). Continuous optimization is a difficult task for three main
reasons [33]: (1) it is difficult to decide when a global (or local) optimum has been
reached; (2) there could be many local optimal solutions in which the search algorithm can
get trapped; (3) the number of suboptimal solutions grows dramatically with the dimension
of the search space [22].
    In this research, we consider the following numerical minimization problem:

                                     min( f (x)),  Bl ≤ x ≤ Bu                              (1)

where x = (x1, x2, . . . , xn) is the variable vector in Rn, f (x) denotes the objective function
to minimize, and Bl = (Bl1, Bl2, . . . , Bln), Bu = (Bu1, Bu2, . . . , Bun) represent,
respectively, the lower and the upper bounds of the variables, such that xi ∈ [Bli, Bui]
(i = 1, . . . , n).
   To evaluate the performance and convergence ability of the proposed IAs compared to the
state-of-the-art optimization algorithms [22], we have used the classic benchmark proposed
in Yao et al. [43], that includes twenty-three functions (see Table 1 in Sect. 3.1). These func-
tions belong to three different categories: unimodal, multimodal with many local optima,
and multimodal with few local optima. Moreover we compare both IA versions with several
immunological algorithms. For some of these experiments we tackled the functions proposed
in Timmis and Kelsey [40] (Table 2, described in Sect. 3.1).
   The paper is structured as follows: in Sect. 2 we describe the proposed immunological
algorithm and its main features; in Sect. 3 we describe the benchmark and the metrics used



Table 1 First class of functions to optimize [43]

Test function                                                              n    S

f1(x) = Σ_{i=1}^n xi^2                                                     30   [−100, 100]^n
f2(x) = Σ_{i=1}^n |xi| + Π_{i=1}^n |xi|                                    30   [−10, 10]^n
f3(x) = Σ_{i=1}^n (Σ_{j=1}^i xj)^2                                         30   [−100, 100]^n
f4(x) = max_i {|xi|, 1 ≤ i ≤ n}                                            30   [−100, 100]^n
f5(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − xi^2)^2 + (xi − 1)^2]                 30   [−30, 30]^n
f6(x) = Σ_{i=1}^n (⌊xi + 0.5⌋)^2                                           30   [−100, 100]^n
f7(x) = Σ_{i=1}^n i xi^4 + random[0, 1)                                    30   [−1.28, 1.28]^n
f8(x) = Σ_{i=1}^n −xi sin(√|xi|)                                           30   [−500, 500]^n
f9(x) = Σ_{i=1}^n [xi^2 − 10 cos(2π xi) + 10]                              30   [−5.12, 5.12]^n
f10(x) = −20 exp(−0.2 √((1/n) Σ_{i=1}^n xi^2))
         − exp((1/n) Σ_{i=1}^n cos(2π xi)) + 20 + e                        30   [−32, 32]^n
f11(x) = (1/4000) Σ_{i=1}^n xi^2 − Π_{i=1}^n cos(xi/√i) + 1                30   [−600, 600]^n
f12(x) = (π/n) {10 sin^2(π y1)
         + Σ_{i=1}^{n−1} (yi − 1)^2 [1 + 10 sin^2(π y_{i+1})] + (yn − 1)^2}
         + Σ_{i=1}^n u(xi, 10, 100, 4),                                    30   [−50, 50]^n
  where yi = 1 + (1/4)(xi + 1) and
                     ⎧ k(xi − a)^m,     if xi > a,
  u(xi, a, k, m) =   ⎨ 0,               if −a ≤ xi ≤ a,
                     ⎩ k(−xi − a)^m,    if xi < −a.
f13(x) = 0.1 {sin^2(3π x1)
         + Σ_{i=1}^{n−1} (xi − 1)^2 [1 + sin^2(3π x_{i+1})]
         + (xn − 1)^2 [1 + sin^2(2π xn)]} + Σ_{i=1}^n u(xi, 5, 100, 4)     30   [−50, 50]^n
f14(x) = [1/500 + Σ_{j=1}^{25} 1/(j + Σ_{i=1}^2 (xi − a_{ij})^6)]^{−1}     2    [−65.536, 65.536]^n
f15(x) = Σ_{i=1}^{11} [ai − x1(bi^2 + bi x2)/(bi^2 + bi x3 + x4)]^2        4    [−5, 5]^n
f16(x) = 4x1^2 − 2.1x1^4 + (1/3)x1^6 + x1 x2 − 4x2^2 + 4x2^4               2    [−5, 5]^n
f17(x) = (x2 − (5.1/(4π^2)) x1^2 + (5/π) x1 − 6)^2
         + 10 (1 − 1/(8π)) cos(x1) + 10                                    2    [−5, 10] × [0, 15]
f18(x) = [1 + (x1 + x2 + 1)^2 (19 − 14x1 + 3x1^2 − 14x2
         + 6x1 x2 + 3x2^2)] × [30 + (2x1 − 3x2)^2 (18 − 32x1
         + 12x1^2 + 48x2 − 36x1 x2 + 27x2^2)]                              2    [−2, 2]^n
f19(x) = −Σ_{i=1}^4 ci exp(−Σ_{j=1}^4 a_{ij} (xj − p_{ij})^2)              4    [0, 1]^n
f20(x) = −Σ_{i=1}^4 ci exp(−Σ_{j=1}^6 a_{ij} (xj − p_{ij})^2)              6    [0, 1]^n
f21(x) = −Σ_{i=1}^5 [(x − ai)(x − ai)^T + ci]^{−1}                         4    [0, 10]^n
f22(x) = −Σ_{i=1}^7 [(x − ai)(x − ai)^T + ci]^{−1}                         4    [0, 10]^n
f23(x) = −Σ_{i=1}^{10} [(x − ai)(x − ai)^T + ci]^{−1}                      4    [0, 10]^n

We indicate with n the number of variables employed and with S ⊆ Rn the variable bounds




Table 2 Second class of numerical functions [40], with S ⊆ Rn the variable bounds
Test function                                                              S

g1(x) = 2(x − 0.75)^2 + sin(5π x − 0.4π) − 0.125                           0 ≤ x ≤ 1
g2(x, y) = (4 − 2.1x^2 + x^4/3) x^2 + x y + (−4 + 4y^2) y^2                −3 ≤ x ≤ 3, −2 ≤ y ≤ 2
g3(x) = −Σ_{j=1}^5 [j sin((j + 1)x + j)]                                   −10 ≤ x ≤ 10
g4(x, y) = a(y − b x^2 + c x − d)^2 + h(1 − f) cos(x) + h                  a = 1, b = 5.1/(4π^2), c = 5/π,
                                                                           d = 6, f = 1/(8π), h = 10,
                                                                           −5 ≤ x ≤ 10, 0 ≤ y ≤ 15
g5(x, y) = Σ_{j=1}^5 j cos[(j + 1)x + j]                                   −10 ≤ x ≤ 10,
                                                                           −10 ≤ y ≤ 10, β = 0.5
g6(x, y) = Σ_{j=1}^5 j cos[(j + 1)x + j]
           + β [(x + 1.4513)^2 + (y + 0.80032)^2]                          −10 ≤ x ≤ 10,
                                                                           −10 ≤ y ≤ 10, β = 1
g7(x, y) = x sin(4π x) − y sin(4π y + π) + 1                               −10 ≤ x ≤ 10,
                                                                           −10 ≤ y ≤ 10
g8(x) = sin^6(5π x)                                                        −10 ≤ x ≤ 10,
                                                                           −10 ≤ y ≤ 10
g9(x, y) = x^4/4 − x^2/2 + x/10 + y^2/2                                    −10 ≤ x ≤ 10,
                                                                           −10 ≤ y ≤ 10
g10(x, y) = Σ_{j=1}^5 j cos[(j + 1)x + j] · Σ_{j=1}^5 j cos[(j + 1)y + j]  −10 ≤ x ≤ 10,
                                                                           −10 ≤ y ≤ 10
g11(x) = 418.9829 n − Σ_{i=1}^n xi sin(√|xi|)                              −512.03 ≤ xi ≤ 511.97, n = 3
g12(x) = 1 + Σ_{i=1}^n xi^2/4000 − Π_{i=1}^n cos(xi/√i)                    −600 ≤ xi ≤ 600,
                                                                           n = 20




to compare opt-IMMALG01 and opt-IMMALG algorithms with the state-of-the-art opti-
mization algorithms; in the same section we show the influence of the different potential
mutations on the dynamics of both IAs; Sect. 4 presents a large set of experiments, compar-
ing the two IA versions with several nature inspired methodologies; finally, Sect. 5 contains
the concluding remarks.


2 The immunological algorithm

In this section we describe the IA based on the clonal selection principle. The main fea-
tures of the algorithm are: (i) cloning, (ii) inversely proportional hypermutation and (iii)
aging operator. The cloning operator clones each candidate solution in order to explore its
neighbourhood in the search space; the inversely proportional hypermutation perturbs each
candidate solution using an inversely proportional law to its objective function value; and
the aging operator eliminates old candidate solutions from the current population in order to
introduce diversity and to avoid local minima during the evolutionary search process.
    We present two versions of the IA: the first one is based on binary code representation
(opt-IMMALG01), and the second on real values (opt-IMMALG). Both algorithms model
antigens (Ag) and B cells; the Ag represents the problem to tackle, i.e. the function to opti-
mize, while the B cell receptors are points (candidate solutions) in the search space for the
problem. At each time step t the algorithm maintains a population of B cells P (t) of size d
(i.e., d candidate solutions). Algorithm 1 shows the pseudo-code of the algorithm.



2.1 Initialize population

The population is initialized at time t = 0 (steps 1–4 in Algorithm 1) by randomly generating
each solution using uniform distribution in the corresponding domains for each function (see
last column of Tables 1, 2). For binary string representation, each real value xi is coded using
bit strings of length k = 32. The mapping from the binary string b = (b1, b2, . . . , bk) into a
real number x consists of two steps: (1) convert the bit string b from base 2 to base 10,
obtaining x' = Σ_{i=1}^k bi · 2^{k−i}; (2) find the corresponding real value:

                                  x = Bli + x'(Bui − Bli) / (2^k − 1)                       (2)

Bli and Bui are the lower and upper bounds of the ith variable, respectively. In the case of
real value representation, each variable is randomly initialized as follows:

                                 xi = Bli + β · (Bu i − Bli )                               (3)

where β is a random number in [0, 1] and Bli , Bu i are the lower and upper bounds of the real
coded variable xi respectively. The strategy used to initialize the population plays a crucial
role in evolutionary algorithms, since it influences the later performance of the algorithm.
In traditional evolutionary computing, the initial population is generated using a random
numbers distribution or chaotic sequences [4]. After the population is initialized, the objec-
tive function value is computed for each candidate solution x ∈ P (t) , using the function
Compute_Fitness(P (t) ) (step 5 in Algorithm 1).
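As an illustration, the two initialization schemes of Eqs. 2 and 3 can be sketched as follows; this is a minimal Python sketch, and the function names are our own, not from the original implementation:

```python
import random

def decode(bits, lower, upper):
    """Map a bit string (most significant bit first) to a real value in
    [lower, upper], following Eq. 2: x = Bl + x'(Bu - Bl) / (2^k - 1)."""
    k = len(bits)
    x_prime = sum(b * 2 ** (k - i) for i, b in enumerate(bits, start=1))  # base 2 -> base 10
    return lower + x_prime * (upper - lower) / (2 ** k - 1)

def init_real(lower, upper):
    """Random real-coded variable in [lower, upper] (Eq. 3)."""
    beta = random.random()  # uniform in [0, 1]
    return lower + beta * (upper - lower)
```

Note that the all-zeros string decodes to the lower bound and the all-ones string to the upper bound, so the k-bit encoding covers the whole interval.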

2.2 Cloning operator

The cloning operator (step 8 in Algorithm 1) clones each candidate solution dup times pro-
ducing an intermediate population P (clo) of size d × dup and assigns to each clone a random
age chosen in the range [0, τ B ]. The age for a candidate solution determines its life time in
the population: when a candidate solution reaches the maximum age (τ B ) it is discarded,
i.e. it dies. This strategy reduces the premature convergence of the algorithm and keeps high
diversity in the population. An improvement in performance can be obtained by choosing
the age of each clone in the range [0, (2/3)τB], as shown in Sect. 4. The cloning operator,
coupled with the hypermutation operator, performs a local search around the cloned
solutions. The introduction of blind mutations can produce individuals with higher
affinities (higher objective function values), which will then be selected to form the
improved mature progenies.
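A sketch of the cloning step, using a simple dictionary representation for B cells (a convention of ours, not from the paper):

```python
import random

def cloning(population, dup, tau_B):
    """Clone each candidate solution `dup` times, producing an intermediate
    population of size d * dup; each clone receives a random age in [0, tau_B]."""
    clones = []
    for cell in population:
        for _ in range(dup):
            clones.append({"x": list(cell["x"]),        # independent copy of the point
                           "age": random.randint(0, tau_B)})
    return clones
```

Each clone gets its own copy of the coordinate vector, so the subsequent hypermutation step can perturb clones independently of their parent.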

2.3 Hypermutation operator

The hypermutation operator (step 9 in Algorithm 1) acts on each candidate solution of popu-
lation P (clo) . Although there are different ways of implementing this operator (see [11,12]),
in this research work we use an inversely proportional strategy where each candidate
solution is subject to M mutations, without explicitly using a mutation probability. The
number of mutations M is determined by an inversely proportional law: the better the
objective function value of the candidate solution, the fewer mutations are performed. In
this work we employ two different potential mutations to determine the number of
mutations M:


                                         α = e^(− fˆ(x)) / ρ,                               (4)

and
                                         α = e^(−ρ fˆ(x)),                                  (5)

where α represents the mutation rate, ρ determines the shape of the mutation rate, and fˆ(x)
is the objective function value normalized in [0, 1].
   Thus the number of mutations M is given by

                                      M = ⌊α × ℓ⌋ + 1,                                      (6)

where ℓ is the length of any candidate solution: (1) ℓ = kn for opt-IMMALG01, with k the
number of bits used to code each real variable and n the dimension of the function; whilst
(2) ℓ = n for opt-IMMALG, that is, the dimension of the problem. By this equation at least
one mutation is guaranteed on any candidate solution; this happens exactly when the
solution represented by a candidate is very close to the optimal one in the solution space.
Once the objective function is normalized into the range [0, 1], the best solutions are those
whose values are closer to 1, whilst the worst ones are closer to 0. During normalization of
the objective function value we use the best current objective function value, decreased by
a user-defined threshold θ, rather than the global optimum. In this way we do not use any a
priori knowledge about the problem. In opt-IMMALG01, the hypermutation operator is
based on the classical bit-flip mutation without redundancy: in any candidate solution x the
operator randomly chooses a position xi and inverts its value (from 0 to 1 or from 1 to 0).
Since M mutations are performed on any candidate solution, the xi are randomly chosen
without repetition. In
opt-IMMALG, instead, the mutation operator randomly chooses two indexes 1 ≤ i, j ≤ ℓ,
such that i ≠ j, and replaces xi(t) with a new value according to the following rule:

                              xi(t+1) = (1 − β) xi(t) + β xj(t)                             (7)

where β ∈ [0, 1] is a random number generated with uniform distribution.
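Equations 5-7 can be sketched in Python as follows (our own naming; `f_hat` stands for the normalized objective value fˆ(x), with 1 meaning the current best):

```python
import math
import random

def num_mutations(f_hat, rho, ell):
    """M = floor(alpha * ell) + 1 with alpha = exp(-rho * f_hat) (Eqs. 5-6):
    the closer f_hat is to 1 (the better the candidate), the fewer mutations."""
    alpha = math.exp(-rho * f_hat)
    return int(alpha * ell) + 1

def hypermutate(x, f_hat, rho):
    """Real-coded hypermutation (Eq. 7): pick indexes i != j and replace
    x_i with the convex combination (1 - beta) * x_i + beta * x_j."""
    x = list(x)
    for _ in range(num_mutations(f_hat, rho, len(x))):
        i, j = random.sample(range(len(x)), 2)  # i != j by construction
        beta = random.random()
        x[i] = (1 - beta) * x[i] + beta * x[j]
    return x
```

Since Eq. 7 is a convex combination of two coordinates of the same point, each mutated coordinate always stays within the range spanned by the current coordinates.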
  Immunological Algorithm (d, dup, ρ, τB, Tmax);
  t ← 0;
  FFE ← 0;
  Nc ← d · dup;
  P(t) ← Initialize_Population(d);
  Compute_Fitness(P(t));
  FFE ← FFE + d;
  while FFE < Tmax do
      P(clo) ← Cloning(P(t), dup);
      P(hyp) ← Hypermutation(P(clo), ρ);
      Compute_Fitness(P(hyp));
      FFE ← FFE + Nc;
      (Pa(t), Pa(hyp)) ← Aging(P(t), P(hyp), τB);
      P(t+1) ← (μ + λ)-Selection(Pa(t), Pa(hyp));
      t ← t + 1;
  end
                Algorithm 1: Pseudo-code of the Immunological Algorithm
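Putting the operators together, Algorithm 1 can be sketched in Python for the real-coded version. This is a minimal sketch under our own simplifying assumptions: the objective values are normalized over the current population (without the threshold θ of Sect. 2.3), and the elitist exception of the aging operator is approximated by tracking the best solution separately.

```python
import math
import random

def opt_immalg(f, n, lb, ub, d=20, dup=2, rho=3.5, tau_B=15, t_max=5000):
    """Minimal sketch of Algorithm 1 (real-coded version, minimization)."""
    def new_cell():
        return {"x": [lb + random.random() * (ub - lb) for _ in range(n)], "age": 0}

    pop = [new_cell() for _ in range(d)]
    for p in pop:
        p["f"] = f(p["x"])
    ffe = d
    best = min(pop, key=lambda p: p["f"])

    while ffe < t_max:
        lo = min(p["f"] for p in pop)
        hi = max(p["f"] for p in pop)
        span = (hi - lo) or 1.0
        hyp = []
        for p in pop:
            f_hat = (hi - p["f"]) / span              # normalize: best -> 1, worst -> 0
            M = int(math.exp(-rho * f_hat) * n) + 1   # Eqs. 5-6
            for _ in range(dup):                      # cloning + hypermutation
                x = list(p["x"])
                for _ in range(M):                    # Eq. 7 mutation
                    i, j = random.sample(range(n), 2)
                    beta = random.random()
                    x[i] = (1 - beta) * x[i] + beta * x[j]
                hyp.append({"x": x, "age": random.randint(0, tau_B), "f": f(x)})
        ffe += len(hyp)

        for p in pop:                                 # static aging
            p["age"] += 1
        survivors = [p for p in pop + hyp if p["age"] <= tau_B]
        survivors.sort(key=lambda p: p["f"])          # (mu + lambda)-selection
        pop = survivors[:d]
        while len(pop) < d:                           # refill if too few survived
            p = new_cell()
            p["f"] = f(p["x"])
            ffe += 1
            pop.append(p)
        if pop[0]["f"] < best["f"]:
            best = pop[0]
    return best
```

For example, `opt_immalg(lambda x: sum(v * v for v in x), 30, -100.0, 100.0)` runs the sketch on the sphere function f1 and returns the best B cell found within the evaluation budget.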




2.4 Aging operator

The aging operator (step 12 in Algorithm 1) eliminates all old candidate solutions in the
populations P(t) and P(hyp). The main goal of this operator is to produce high diversity in
the current population and to avoid premature convergence. Each candidate solution is
allowed to remain in the population for a fixed number of generations, according to the
parameter τB. Hence, τB indicates the maximum number of generations allowed; when a
candidate solution is τB + 1 generations old it is discarded from the current population,
independently of its objective function value. This kind of operator is called a static aging
operator. The algorithm makes only one exception: when generating a new population the
selection mechanism always keeps the best candidate solution, i.e. the solution with the
best objective function value so far, even if it is older than τB. This variant is called an
elitist aging operator.
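The static aging step, with the elitist exception, might look like this (a sketch; the dictionary representation of B cells is our own):

```python
def aging(pop, hyp, tau_B):
    """Static aging: a candidate that is tau_B + 1 generations old is discarded,
    whatever its objective value; the elitist variant always spares the current
    best (minimization assumed here)."""
    best = min(pop + hyp, key=lambda p: p["f"])
    keep = lambda p: p["age"] <= tau_B or p is best   # elitist exception
    return [p for p in pop if keep(p)], [p for p in hyp if keep(p)]
```

The operator returns the two surviving populations Pa(t) and Pa(hyp) separately, as the selection step that follows consumes them as distinct sets.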

2.5 (μ + λ)-Selection operator

After performing the aging operator, the best candidate solutions that have survived the
aging step are selected to generate the new population P(t+1) of d candidate solutions from
the populations Pa(t) and Pa(hyp). If only d1 < d candidate solutions have survived, then
the (μ + λ)-Selection operator randomly selects d − d1 candidate solutions among those
“dead”, i.e. from the set

                              (P(t) \ Pa(t)) ∪ (P(hyp) \ Pa(hyp)).

The (μ + λ)-Selection operator, with μ = d and λ = Nc, reduces the offspring population of
size λ ≥ μ, created by the cloning and hypermutation operators, to a new parent population
of size μ = d. The selection operator identifies the d best elements from the offspring set
and the old parent candidate solutions, thus guaranteeing monotonicity in the evolution
dynamics.
   Both algorithms terminate their execution when the number of fitness function
evaluations (FFE) is greater than or equal to Tmax, the maximum number of objective
function evaluations.
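The selection step might be sketched as follows (our own naming; `dead` stands for the set of candidates discarded by the aging operator):

```python
import random

def mu_plus_lambda_selection(pop_a, hyp_a, dead, d):
    """(mu + lambda)-selection sketch: keep the d best survivors of the aging
    step; if only d1 < d survived, refill with d - d1 randomly chosen 'dead'
    candidates (minimization assumed)."""
    survivors = sorted(pop_a + hyp_a, key=lambda p: p["f"])[:d]
    missing = d - len(survivors)
    if missing > 0:
        survivors += random.sample(dead, missing)
    return survivors
```

Because the new parent population is drawn from both the offspring and the old parents, the best objective value in the population can never worsen from one generation to the next, which is the monotonicity property mentioned above.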


3 Benchmarks and metrics

Before presenting the comparative performance analysis against the state-of-the-art
(Sect. 4), we explore some of the features of the two IAs described in this work. We first
present the test
functions and the experimental protocol used in our tests. We then explore the influence of
different mutation schemes on the performance of the IA. Next we show the experimen-
tal tuning of some of the parameters of the algorithm. Finally the dynamics and learning
capabilities of both algorithms are explored.

3.1 Test functions and experimental protocol

We have used a large benchmark of test functions belonging to different classes and with
different features. Specifically, we combined two benchmarks proposed respectively in Yao
et al. [43] (23 functions, shown in Table 1) and in [40] (12 functions, shown in Table 2). These
functions can be divided into two categories of different complexity: unimodal and
multimodal (with many and few local optima) functions. Although their complexity grows
as the dimension of the search space increases, optimizing unimodal functions is not a
major issue, so for this kind of function the convergence rate becomes the main interest.
Moreover, we have used



Table 3 Number of the objective function evaluations (Tmax ) used for each test function of Table 1, as
proposed in Yao et al. [43]

Function            Tmax                Function             Tmax                Function             Tmax

f1                  150,000              f9                 500,000              f 17                 10,000
f2                  200,000              f 10               150,000              f 18                 10,000
f3                  500,000              f 11               200,000              f 19                 10,000
f4                  500,000              f 12               150,000              f 20                 20,000
f5                  2 × 10^6             f 13               150,000              f 21                 10,000
f6                  150,000              f 14                10,000              f 22                 10,000
f7                  300,000              f 15               400,000              f 23                 10,000
f8                  900,000              f 16                10,000




another set of functions taken from Cassioli et al. [5], which includes 8 functions with number
of variables n ∈ {10, 20}. The main goal when applying optimization algorithms to these
functions is to get a picture of their convergence speed. Multimodal functions are instead
characterized by a rugged fitness landscape difficult to explore, so the quality of the result
obtained by any optimization method is crucial since it reflects the ability of the algorithm to
escape from local optima. This last category of functions represents the most difficult class
of problems for many optimization algorithms. Using a very large benchmark is necessary
in order to reduce biases and analyze the overall robustness of evolutionary algorithms [24].
We have also tested our IAs using different dimensions: from small (1 variable) to very
large (5000 variables).
   We use the same experimental protocol proposed in Yao et al. [43]: 50 independent runs
were performed for each test function. For all runs we compute both the mean value of the
best candidate solutions and the standard deviation. The dimension was fixed as follows:
n = 30 for functions from f 1 , to f 13 ; n = 2 for functions ( f 14 , f 16 , f 17 , f 18 ); n = 4 for
functions ( f 15 , f 19 , f 21 , . . . , f 23 ); and n = 6 for function f 20 . Finally, for these experiments
we used the same stopping criteria, Tmax value, proposed in Yao et al. [43] and shown in
Table 3.

3.2 Influence of different mutation potentials

Two different potential mutations (Eqs. 4, 5) are used in opt-IMMALG01 and opt-IMMALG
to determine the number of mutations M (Eq. 6). In this section we present a comparison of
their relative performances. Table 4 shows for each function the mean of the best candidate
solutions and the standard deviation for all runs (the best result is highlighted in boldface).
    These results were obtained using the experimental protocol described previously in
Sect. 3.1. Moreover, we fixed for opt-IMMALG01 d ∈ {10, 20}, dup = 2, τ B ∈
{5, 10, 15, 20, 50}, while for opt-IMMALG d = 100, dup = 2, τ B = 15. For both ver-
sions we used ρ in the set {50, 75, 100, 125, 150, 175, 200} for the mutation rate of Eq. 4,
and ρ in the set {4, 5, 6, 7, 8, 9, 10, 11} for the mutation rate of Eq. 5. After inspecting the
table it is easy to conclude that the second potential mutation yields overall better
performance for both versions of the algorithm. The improvements obtained using the
mutation rate of Eq. 5 are more evident for opt-IMMALG than for opt-IMMALG01. In
fact, for opt-IMMALG01 the potential mutation of Eq. 4 reaches better solutions on the
second class of functions, i.e. the ones with many local optima.




Table 4 Comparison of the results obtained by both versions, opt-IMMALG01 and opt-IMMALG, using
the two potential mutations (Eqs. 4, 5); for each function, the first row reports the mean of the best
candidate solutions over all runs and the second row the standard deviation

               opt-IMMALG01                                       opt-IMMALG

               α = e^(− fˆ(x))/ρ        α = e^(−ρ fˆ(x))          α = e^(− fˆ(x))/ρ          α = e^(−ρ fˆ(x))

f1             1.7 × 10−8               9.23 × 10−12              4.663 × 10−19              0.0
               3.5 × 10−15              2.44 × 10−11              7.365 × 10−19              0.0
f2             7.1 × 10−8               0.0                       3.220 × 10−17              0.0
               0.0                      0.0                       1.945 × 10−17              0.0
f3             1.9 × 10−10              0.0                       3.855                      0.0
               2.63 × 10−10             0.0                       5.755                      0.0
f4             4.1 × 10−2               1.0 × 10−2                8.699 × 10−3               0.0
               5.3 × 10−2               5.3 × 10−3                3.922 × 10−2               0.0
f5             28.4                     3.02                      22.32                      16.29
               0.42                     12.2                      11.58                      13.96
f6             0.0                      0.2                       0.0                        0.0
               0.0                      0.44                      0.0                        0.0
f7             3.9 × 10−3               3.0 × 10−3                1.143 × 10−4               1.995 × 10−5
               1.3 × 10−3               1.2 × 10−3                1.411 × 10−4               2.348 × 10−5
f8             −12568.27                −12508.38                 −12559.69                  −12535.15
               0.23                     155.54                    34.59                      62.81
f9             2.66                     19.98                     0.0                        0.596
               2.39                     7.66                      0.0                        4.178
f 10           1.1 × 10−4               18.98                     1.017 × 10−10              0.0
               3.1 × 10−5               0.35                      5.307 × 10−11              0.0
f 11           4.55 × 10−2              7.7 × 10−2                2.066 × 10−2               0.0
               4.46 × 10−2              8.63 × 10−2               5.482 × 10−2               0.0
f 12           3.1 × 10−2               0.137                     7.094 × 10−21              1.770 × 10−21
               5.7 × 10−2               0.23                      5.621 × 10−21              8.774 × 10−24
f 13           3.20                     1.51                      1.122 × 10−19              1.687 × 10−21
               0.13                     0.1                       2.328 × 10−19              5.370 × 10−24
f 14           1.21                     1.02                      0.999                      0.998
               0.54                     7.1 × 10−2                7.680 × 10−3               1.110 × 10−3
f 15           7.7 × 10−3               7.1 × 10−4                3.27 × 10−4                3.2 × 10−4
               1.4 × 10−2               1.3 × 10−4                3.651 × 10−5               2.672 × 10−5
f 16           −1.02                    −1.032                    −1.017                     −1.013
               1.1 × 10−2               1.5 × 10−4                2.039 × 10−2               2.212 × 10−2
f 17           0.450                    0.398                     0.425                      0.423
               0.21                     2.0 × 10−4                4.987 × 10−2               3.217 × 10−2
f 18           3.0                      3.0                       6.106                      5.837
               0.0                      0.0                       4.748                      3.742
f 19           −3.72                    −3.72                     −3.72                      −3.72
               1.1 × 10−2               1.1 × 10−4                8.416 × 10−3               7.846 × 10−3
f 20           −3.31                    −3.31                     −3.293                     −3.292
               5.9 × 10−3               7.4 × 10−2                3.022 × 10−2               3.097 × 10−2
f 21           −5.36                    −9.11                     −10.153                    −10.153
               2.20                     1.82                      (7.710 × 10−8 )            1.034 × 10−7
f 22           −5.34                    −9.86                     −10.402                    −10.402
               2.11                     1.88                      (1.842 × 10−6 )            1.082 × 10−5
f 23           −6.03                    −9.96                     −10.536                    −10.536
               2.66                     1.46                      7.694 × 10−7               1.165 × 10−5
Each result indicates the mean of the best solutions (in the first line of each table entry), and the standard
deviation (in the second line). The best result for each function is highlighted in boldface






3.3 The parameters of the immunological algorithms

In this section we present an analysis of the parameter settings that influence the performance of the algorithms. Independently of the experimental protocol, we fixed d ∈ {10, 20}, dup = 2, τB ∈ {5, 10, 15, 20, 50} for opt-IMMALG01, and d = 100, dup = 2, τB = 15 for opt-IMMALG. These values were chosen after a thorough investigation of the parameter tuning of each algorithm, not shown in this work (see [8,9,14] for details). In the first set of experiments the values of the parameter ρ were fixed as follows: {50, 75, 100, 125, 150, 175, 200} for the mutation potential of Eq. 4 and {4, 5, 6, 7, 8, 9, 10, 11} for the mutation potential of Eq. 5. Since opt-IMMALG has one extra parameter θ compared to opt-IMMALG01 (see
Sect. 2), we first analyzed the best tuning for θ. After several experiments (not shown in
this work), the best value found was θ = 75% for both potential mutations; this setting
yields better performance on 14 functions out of 23. These experiments were performed
over 50 independent runs. We have also tested opt-IMMALG using different ranges for
randomly choosing the age of each clone, and we have found that choosing the age in the
range [0, (2/3)τB] improves its performance. For this new variant of opt-IMMALG we used
only the potential mutation of Eq. 5, because it appears to be the best (as shown in Sect. 4).
We will call this new version opt-IMMALG∗. After several experiments, the best tuning for
opt-IMMALG∗ was: dup = 2, τB = 10, θ = 50%, and d = 1000 for all n ≥ 30, d = 100
otherwise.
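As a concrete illustration, the age assignment of the opt-IMMALG∗ variant can be sketched in a few lines. This is our own sketch, not the authors' code: the function name and the use of a continuous uniform draw are assumptions.

```python
import random

def initial_clone_age(tau_b: float) -> float:
    """opt-IMMALG* variant: draw the clone's initial age uniformly at
    random from [0, (2/3)*tau_B], so every clone retains at least one
    third of its maximum lifespan tau_B before aging out."""
    return random.uniform(0.0, (2.0 / 3.0) * tau_b)
```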
    Next we explored the performance of the parameter ρ when tackling functions of different
dimension, with the goal of finding the best setting of ρ for each dimension. Figure 1 shows
the dynamics of the number of mutations for different dimensions and ρ values. Using this
figure we fixed ρ as follows: ρ = 3.5 for dimension n = 30; ρ = 4.0 for dimension n = 50;
ρ = 6.0 for dimension n = 100; and ρ = 7.0 for dimension n = 200. For dimensions n = 2
and n = 4 (not shown in the figure) we found the best values to be ρ = 0.8 and ρ = 1.5,
respectively.
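The inversely proportional hypermutation behaviour shown in Figs. 1 and 2 can be sketched as follows. This is a minimal sketch under our own assumptions: the number of mutations is taken to be the potential α = e^(−ρ f̂(x)) of Eq. 5 scaled by the dimension n and rounded, with at least one mutation per clone; the function names are ours.

```python
import math

def mutation_potential(f_norm: float, rho: float) -> float:
    """Potential mutation of Eq. 5: alpha = e^(-rho * f_hat), with the
    fitness f_hat normalized into [0, 1] (1 = best)."""
    return math.exp(-rho * f_norm)

def num_mutations(f_norm: float, rho: float, n: int) -> int:
    """Assumed rule: scale alpha by the dimension n and round, keeping
    at least one mutation per clone (cf. the M axis of Figs. 1 and 2)."""
    return max(1, round(mutation_potential(f_norm, rho) * n))

# Inversely proportional behaviour: the better (higher) the normalized
# fitness, the fewer mutations a clone receives.
assert num_mutations(0.0, 3.5, 30) >= num_mutations(1.0, 3.5, 30)
```

For instance, with ρ = 3.5 and n = 30, a worst-case solution (f̂ = 0) receives 30 mutations, while a near-optimal one (f̂ = 1) receives a single mutation, matching the decay visible in Fig. 1.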
    Moreover, we considered functions of very large dimension: n = 1000 and n = 5000.
In these cases we tuned ρ = 9.0 and ρ = 11.5, respectively (see Fig. 2). From the figure we


[Plot: number of mutations M of the inversely proportional hypermutation operator versus normalized fitness, for dim = 30 (ρ = 3.5), dim = 50 (ρ = 4.0), dim = 100 (ρ = 6.0) and dim = 200 (ρ = 7.0); the inset zooms in on normalized fitness values in [0.4, 1].]
Fig. 1 Number of mutations M obtained on several dimensions



[Plot: number of mutations M of the inversely proportional hypermutation operator versus normalized fitness, for dim = 1000 (ρ = 9.0) and dim = 5000 (ρ = 11.5); the inset zooms in on normalized fitness values in [0.7, 1].]
Fig. 2 Number of mutations M obtained for high dimension values


[Plot: potential mutation α = e^(−ρ f̂(x)) versus normalized fitness, for ρ ∈ {3.5, 4.0, 6.0, 7.0, 9.0, 11.5}.]
Fig. 3 Potential mutation α (Eq. 5) used by opt-IMMALG∗




can conclude that low mutation rates correspond to improved objective values, whereas high
mutation rates correspond to poor objective function values (which agrees with the behaviour
of B cells in the natural immune system). The inset plot zooms in on mutation rates in the
range [0.7, 1].
   Finally, Fig. 3 shows the curves produced by the mutation potential α of Eq. 5 for the
different ρ values. This figure also exhibits an inversely proportional behaviour with respect
to the normalized objective function: higher α values correspond to worse solutions, whose
normalized objective function values are closer to zero, while lower α values are obtained
for good normalized objective function values (i.e. closer to one).



Fig. 4 Evolution curves of opt-IMMALG01 and opt-IMMALG algorithms on two unimodal functions f 1
(left plot) and f 6 (right plot)




Fig. 5 Evolution curves of opt-IMMALG01 and opt-IMMALG algorithms on two multimodal functions f 8
(left plot) and f 10 (right plot) with many local optima




3.4 The convergence and learning processes

Two important features that have an impact on the performance of any optimization algo-
rithm are the convergence speed and the learning ability. In this section we examine the
performance of the two versions of the IA according to these two properties. For this purpose
we tested the IAs on two functions for each class from Table 1: f 1 and f 6 for the unimodal
functions; f 8 and f 10 for the multimodal functions with many local optima, and f 18 and f 21
for the multimodal functions with a few local optima. All the results are averaged over 50
independent runs.
    Figures 4, 5 and 6 show the evolution curves produced by opt-IMMALG01 (labelled
binary) and opt-IMMALG (labelled real) on these test functions. Inspecting the plots, it is
clear that opt-IMMALG exhibits faster convergence and better solution quality than
opt-IMMALG01 on all instances.
    The analysis of the learning process of the algorithm is performed using an entropic
function, the information gain. This function measures the quantity of information the sys-
tem discovers during the learning phase [10,13]. For this purpose we define the candidate
solutions distribution function f_m^(t) as the ratio between the number B_m^t of candidate
solutions at time step t with objective function value m, and the total number of candidate



Fig. 6 Convergence process of opt-IMMALG01 and opt-IMMALG algorithms on two multimodal
functions f 18 (left plot) and f 21 (right plot) with few local optima


solutions:

    f_m^(t) = B_m^t / (Σ_{m=0}^{h} B_m^t) = B_m^t / d.                    (8)

It follows that the information gain K(t, t_0) and the entropy E(t) can be defined as:

    K(t, t_0) = Σ_m f_m^(t) log( f_m^(t) / f_m^(t_0) )                    (9)

    E(t) = − Σ_m f_m^(t) log f_m^(t).                                     (10)

The gain is the amount of information the system has already learnt about the given problem
instance, compared to the randomly generated initial population P^(t=0) (the initial distribu-
tion). Once the learning process begins, the information gain increases monotonically until it
reaches a final steady state (see Fig. 7). This is consistent with the maximum information-gain
principle: dK/dt ≥ 0. Figure 7 shows the dynamics of the information gain of opt-IMMALG∗

when applied to the functions f5, f7, and f10. The algorithm quickly gains high information
on functions f7 and f10, reaching a steady state at generation 20. More generations are
required for function f5, however, as the information gain starts growing only after generation
22. This behaviour is correlated with the search space of function f5, whose complexity
is higher than that of functions f7 and f10. This response is consistent with the experimental
results: both the opt-IMMALG and opt-IMMALG∗ algorithms require a greater number of
objective function evaluations to achieve good solutions (see the experimental protocol in
Table 3). The plot in Fig. 8 shows the monotonic behaviour of the information gain for
function f5, together with the standard deviation (inset plot); the standard deviation increases
quickly (the spike in the inset plot) when the algorithm begins to learn information, then it
rapidly decreases towards zero as the algorithm approaches the steady state of the information
gain. The algorithm converges to the best solution in this temporal window: the highest
amount of information learned corresponds to the lowest value of uncertainty, i.e. of the
standard deviation. Finally, Fig. 9 shows the curves of the information gain K(t, t_0) and of
the entropy E(t) for opt-IMMALG∗ on the function f5. The inset plot shows the average
objective function value versus the best objective function value for the first 10 generations
on the same function f5; the algorithm quickly moves from solutions of the order of 10^9
to solutions of the order of 10^1 − 1. The best solution in the results presented in Figs. 8
and 9 was 0.0, and the mean of the best solutions was 15.6


Fig. 7 Learning of the problem. Information gain curves of the opt-IMMALG∗ algorithm on the functions
f5, f7, and f10. Each curve was obtained over 50 independent runs, with the following parameters: d =
100, dup = 2, τB = 15, ρ = 3.5 and Tmax = 5 × 10^5



Fig. 8 Information gain curves of the opt-IMMALG∗ algorithm on function f5. The inset plot shows the
standard deviation


(with 14.07 as standard deviation). The experiments were performed with the following parameter setting:
d = 100, dup = 2, τB = 15, ρ = 3.5 and Tmax = 5 × 10^5.
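The quantities of Eqs. 8–10 used throughout this section can be transcribed directly. The sketch below is ours: in particular, how objective values are discretized into the levels m is not specified in the text, so the values passed in are assumed to be already discretized (e.g. rounded).

```python
import math
from collections import Counter

def distribution(values):
    """Eq. 8: f_m^(t) = B_m^t / d, the fraction of the d candidate
    solutions whose (discretized) objective value equals m."""
    d = len(values)
    return {m: b / d for m, b in Counter(values).items()}

def information_gain(values_t, values_t0):
    """Eq. 9: K(t, t0) = sum_m f_m^(t) log(f_m^(t) / f_m^(t0)).
    Levels absent from the initial distribution are skipped here to
    avoid division by zero (a smoothing choice of ours)."""
    f_t, f_t0 = distribution(values_t), distribution(values_t0)
    return sum(f * math.log(f / f_t0[m]) for m, f in f_t.items() if m in f_t0)

def entropy(values_t):
    """Eq. 10 (standard Shannon form): E(t) = -sum_m f_m^(t) log f_m^(t)."""
    return -sum(f * math.log(f) for f in distribution(values_t).values())
```

As a sanity check, a population identical to the initial one yields K(t, t0) = 0, and the gain grows as the population concentrates on fewer objective levels, which is the monotonic behaviour visible in Figs. 7 and 8.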

3.5 Time-to-target analysis

Time-To-Target plots [2,20] are a way to characterize the running time of a stochastic algo-
rithm on a given combinatorial optimization problem. They display the probability that the
algorithm finds a solution at least as good as a given target within a given running time. Nowadays




Fig. 9 Information gain K(t, t_0) and entropy E(t) curves of opt-IMMALG∗ on the function f5. The
inset plot shows the average objective function value versus the best objective function value for the first
10 generations. All curves are averaged over 50 independent runs with the following parameter setting:
d = 100, dup = 2, τB = 15, ρ = 3.5 and Tmax = 5 × 10^5




they are a standard graphical methodology for data analysis [6], used to compare empirical and
theoretical distributions.1
    Aiex et al. [1] present a Perl program (called tttplots.pl) to create time-to-target
plots, a useful tool for comparing different stochastic algorithms or, in general, different
strategies for solving a given problem. The program can be downloaded from
http://www2.research.att.com/~mgcr/tttplots/. tttplots.pl produces two kinds of plots:
a QQ-plot with superimposed variability information, and superimposed empirical and
theoretical distributions.
    Following the example presented in Aiex et al. [1], we ran opt-IMMALG∗ on the first 13
functions of Table 1 (for n = 30), i.e. those where the obtained mean equals the optimal solu-
tion and hence the success rate is 100%. For these experiments, of course, the termination
criterion was changed accordingly: each run stops upon finding the target, i.e. the optimal
solution. Moreover, since the empirical distribution approaches the theoretical distribution
as the number of runs grows, we include in this work only the plots produced after 200 runs.
For each of the 200 runs (as for all the experiments and results presented in this article) the
random number generator is initialized with a distinct seed; that is, each run is independent.
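The empirical distribution plotted by tttplots.pl is built by a simple recipe [1]: sort the n measured times-to-target and associate the i-th smallest time with probability p_i = (i − 1/2)/n. A minimal sketch (the function and variable names are ours):

```python
def empirical_ttt(times):
    """Points (t_i, p_i) of the empirical run-time distribution:
    p_i = (i - 1/2) / n for the i-th smallest time-to-target t_i."""
    n = len(times)
    return [(t, (i + 0.5) / n) for i, t in enumerate(sorted(times))]

# Four hypothetical independent runs hitting the target at these times:
points = empirical_ttt([1.2, 0.8, 2.0, 1.5])
# Plotting these points, together with a fitted shifted exponential,
# gives the distribution plots in the left column of Figs. 10-12.
```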
    Figures 10, 11 and 12 show the convergence process of opt-IMMALG∗, analysed using
tttplots.pl, on the functions f1, . . . , f6 and f9, . . . , f13. The left plots show the
comparison between empirical and theoretical distributions, whilst the right plots display
the QQ-plots with variability information. Inspecting the plots, it is possible to see that the
empirical and theoretical distributions are often the same, except for function f6, which
seems to be the easiest of the given benchmark functions for opt-IMMALG∗.


1 For more details on this methodology see [1,2].




[Plots: time-to-target analysis of opt-IMMALG∗ for functions f1, f2 and f3 (200 runs, n = 30). Left column: cumulative probability versus time to target solution, with empirical and theoretical distributions; right column: QQ-plots of measured times versus exponential quantiles, with the estimated fit and ±1 standard deviation ranges.]




                                                                                                                        24.5

                                    0.6
                                                                                                                          24

                                                                                                                        23.5
                                    0.4

                                                                                                                          23
                                    0.2
                                                                                                                        22.5                                          empirical
                                                                                                                                                                     estimated
                                                                            empirical                                                                         +1 std dev range
                                                                          theoretical                                                                         -1 std dev range
                                     0                                                                                    22
                                          0       5          10      15          20           25                               0       0.5 1     1.5 2   2.5 3     3.5 4          4.5 5
                                                      time to target solution                                                                   exponential quantiles
                                                     function4_runs200_dim30                                                                   function4_runs200_dim30
                                     1                                                                                  32.2

                                                                                                                         32
                                    0.8
           cumulative probability




                                                                                                                        31.8
                                                                                                    measured times




                                                                                                                        31.6
                                    0.6
                                                                                                                        31.4

                                                                                                                        31.2
                                    0.4
                                                                                                                         31

                                    0.2                                                                                 30.8
                                                                                                                                                                     empirical
                                                                                                                        30.6                                        estimated
                                                                            empirical                                                                        +1 std dev range
                                                                          theoretical                                                                        -1 std dev range
                                     0                                                                                  30.4
                                      0       5       10     15     20    25       30         35                               0       0.5 1    1.5 2    2.5 3    3.5 4           4.5 5
                                                     time to target solution                                                                   exponential quantiles

Fig. 10 Empirical versus theoretical distributions (left plot) and Q Q-plots with variability information (right
plot). The curves have been obtained for the functions: f 1 , f 2 , f 3 and f 4
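The Q-Q plots in these figures can be reproduced from the raw times-to-target of the 200 runs: sort the measured times, pair each with the corresponding quantile of an exponential distribution, and check how close the points lie to a straight line. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def exponential_qq(times):
    """Return (theoretical, empirical) quantile pairs for an
    exponential Q-Q plot of measured times-to-target."""
    t = np.sort(np.asarray(times, dtype=float))
    n = len(t)
    # empirical CDF plotting positions p_i = (i - 0.5) / n
    p = (np.arange(1, n + 1) - 0.5) / n
    # standard exponential quantiles: -ln(1 - p)
    theo = -np.log(1.0 - p)
    return theo, t

# toy usage: 200 simulated run times with mean 2.0
rng = np.random.default_rng(0)
times = rng.exponential(scale=2.0, size=200)
theo, emp = exponential_qq(times)
# for a good exponential fit the points (theo, emp) lie near a
# straight line whose slope approximates the mean time-to-target
slope = np.polyfit(theo, emp, 1)[0]
```

Plotting `emp` against `theo` gives the right-hand panels of the figures; the ±1 std dev envelopes in the paper additionally quantify the sampling variability of the fit.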


123
J Glob Optim

[Figure 11: four panel pairs (function5, function6, function9, function10; runs = 200, dim = 30). Each left panel plots cumulative probability against time to target solution (empirical vs. theoretical curves); each right panel plots measured times against exponential quantiles, with empirical, estimated and ±1 std dev range curves.]

Fig. 11 Empirical versus theoretical distributions (left plots) and Q-Q plots with variability information (right plots). The curves have been obtained for the functions f5, f6, f9 and f10


[Figure 12: three panel pairs (function11, function12, function13; runs = 200, dim = 30). Each left panel plots cumulative probability against time to target solution (empirical vs. theoretical curves); each right panel plots measured times against exponential quantiles, with empirical, estimated and ±1 std dev range curves.]

Fig. 12 Empirical versus theoretical distributions (left plots) and Q-Q plots with variability information (right plots). The curves have been obtained for the functions f11, f12 and f13






4 Comparisons and results

In this section we present an exhaustive comparative study of opt-IMMALG01, opt-
IMMALG and opt-IMMALG∗ against 39 state-of-the-art optimization algorithms from the
literature. Such a large simulation protocol is required to fairly compare the IA to the current
best nature-inspired, deterministic and hybrid optimization algorithms, and to demonstrate
its ability to outperform many of these techniques.


4.1 IA versus FEP and I-FEP

In the first experiment we compare opt-IMMALG01 and opt-IMMALG with the FEP algo-
rithm (Fast Evolutionary Programming) proposed in Yao et al. [43]. FEP is based on
Conventional Evolutionary Programming (CEP [7]) and uses a mutation operator based
on Cauchy random numbers, which helps the algorithm escape from local optima. The
results of this comparison are shown in Table 5. Both opt-IMMALG01 and opt-IMMALG
outperform FEP in the majority of the instances. In particular opt-IMMALG reaches the
best values in 16 functions out of 23; 12 using the potential mutation of Eq. 5, and only
5 with Eq. 4. Comparing the two IA versions we can observe that opt-IMMALG, using
both potential mutations, outperforms opt-IMMALG01 on 18 out of 23 functions. The
best results are obtained using the second potential mutation (Eq. 5). It is important to
mention that opt-IMMALG outperforms opt-IMMALG01 mainly on the multimodal func-
tions. This result reflects its ability to escape from local optima. The analysis presented
in Yao et al. [43] shows that Cauchy mutations perform better when the current search
point is far away from the global optimum, whilst Gaussian mutations are better when
the search points are in the neighbourhood of the global optimum. Based on these obser-
vations, the authors of [43] proposed an improved version of FEP. This algorithm, called
I-FEP, is based on both Cauchy and Gaussian mutations, and it differs from FEP in the way
offspring are created. Two new offspring are generated as follows: the first using Cauchy
mutation and the second using Gaussian mutation; only the best offspring is chosen. We
therefore also compared opt-IMMALG and opt-IMMALG01 with I-FEP, and the results
are reported in Table 6. We used functions f 1 , f 2 , f 10 , f 11 , f 21 , f 22 and f 23 from Table 1,
and for each function we show the mean of the best candidate solutions averaged over all
runs (as proposed in Yao et al. [43]). Inspecting the results, we can see that both versions
of the IA achieve better performance (i.e., better solution quality) than I-FEP on all
functions.
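The contrast between the two mutation schemes, and I-FEP's best-of-two offspring rule, can be sketched as follows. This is a simplified illustration: the fixed step size `eta` stands in for FEP's self-adaptive variances, which is an assumption of this sketch.

```python
import math
import random

def gaussian_mutation(x, eta=0.1):
    # small local steps: effective near the global optimum
    return [xi + eta * random.gauss(0.0, 1.0) for xi in x]

def cauchy_mutation(x, eta=0.1):
    # heavy-tailed steps: occasional long jumps help escape local optima;
    # a standard Cauchy variate is tan(pi * (U - 0.5)) for uniform U
    return [xi + eta * math.tan(math.pi * (random.random() - 0.5)) for xi in x]

def ifep_offspring(x, f, eta=0.1):
    """I-FEP style step: create one Cauchy and one Gaussian offspring,
    keep the better of the two (minimization)."""
    a = cauchy_mutation(x, eta)
    b = gaussian_mutation(x, eta)
    return min((a, b), key=f)

# toy usage on the sphere function f1(x) = sum(x_i^2)
sphere = lambda x: sum(xi * xi for xi in x)
x = [1.0] * 5
child = ifep_offspring(x, sphere)
```

The best-of-two rule lets the search exploit Cauchy jumps far from the optimum while retaining Gaussian refinement near it, matching the analysis of [43] summarized above.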
    Finally, since FEP is based on Conventional Evolutionary Programming (CEP), we present
in Table 7 a comparison between the two versions of IA and the CEP algorithm. CEP uses
three different mutation operators (as proposed in Chellapilla [7]): the Gaussian Mutation
Operator (GMO), the Cauchy Mutation Operator (CMO), and the Mean Mutation Operator
(MMO). For this set of experiments, we used the same functions and the same experimental
protocol proposed in Chellapilla [7], i.e. Tmax = 2.5 × 10^5 for all functions, except for
functions f1 and f10, where Tmax = 1.5 × 10^5 was used. The results obtained by opt-
IMMALG01 and opt-IMMALG indicate that both IA versions outperform CEP on most of
the instances. Moreover, opt-IMMALG shows an overall better performance than
opt-IMMALG01 and CEP.
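For illustration, the mean mutation operator can be sketched as perturbing each variable by the average of a Gaussian and a Cauchy variate, combining the behaviour of GMO and CMO. The fixed scale `eta` below replaces CEP's self-adaptive step sizes and is an assumption of this sketch:

```python
import math
import random

def mean_mutation(x, eta=0.1):
    """Illustrative sketch of a mean mutation operator (MMO):
    each variable moves by the average of a Gaussian and a
    Cauchy perturbation."""
    out = []
    for xi in x:
        g = random.gauss(0.0, 1.0)                       # GMO-style component
        c = math.tan(math.pi * (random.random() - 0.5))  # CMO-style component
        out.append(xi + eta * 0.5 * (g + c))
    return out

# toy usage
x = [0.5] * 10
y = mean_mutation(x)
```

Averaging the two variates keeps most steps local while preserving a heavy tail, which is the intuition behind combining the operators.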






Table 5 Comparison between opt-IMMALG (real-valued representation), opt-IMMALG01 (binary repre-
sentation) and FEP (Fast Evolutionary Programming), using the same experimental protocol proposed in
Yao et al. [43]

        α = e^(−ρ f̂(x))                               FEP [43]        α = e^(−f̂(x)/ρ)
        opt-IMMALG            opt-IMMALG01                            opt-IMMALG            opt-IMMALG01

f1      0.0                   9.23 × 10−12           5.7 × 10−4      4.663 × 10−19         1.7 × 10−8
        (0.0)                 2.44 × 10−11           1.3 × 10−4      (7.365 × 10−19 )      3.5 × 10−15
f2      0.0                   0.0                    8.1 × 10−3      3.220 × 10−17         7.1 × 10−8
        (0.0)                 (0.0)                  7.7 × 10−4      (1.945 × 10−17 )      (0.0)
f3      0.0                   0.0                    1.6 × 10−2      3.855                 1.9 × 10−10
        (0.0)                 (0.0)                  1.4 × 10−2      (5.755)               (2.63 × 10−10 )
f4      0.0                   1.0 × 10−2             0.3             8.699 × 10−3          4.1 × 10−2
        (0.0)                 (5.3 × 10−3 )          0.5             (3.922 × 10−2 )       (5.3 × 10−2 )
f5      16.29                 3.02                   5.06            22.32                 28.4
        (13.96)               (12.2)                 5.87            (11.58)               (0.42)
f6      0.0                   0.2                    0.0             0.0                   0.0
        (0.0)                 (0.44)                 0.0             (0.0)                 (0.0)
f7      1.995 × 10−5          3.0 × 10−3             7.6 × 10−3      1.143 × 10−4          3.9 × 10−3
        (2.348 × 10−5 )       (1.2 × 10−3 )          2.6 × 10−3      (1.411 × 10−4 )       (1.3 × 10−3 )
f8      −12535.15             −12508.38              −12554.5        −12559.69             −12568.27
        (62.81)               (155.54)               52.6            (34.59)               (0.23)
f9      0.596                 19.98                  4.6 × 10−2      0.0                   2.66
        (4.178)               (7.66)                 1.2 × 10−2      (0.0)                 (2.39)
f 10    0.0                   18.98                  1.8 × 10−2      1.017 × 10−10         1.1 × 10−4
        (0.0)                 (0.35)                 2.1 × 10−3      (5.307 × 10−11 )      (3.1 × 10−5 )
f 11    0.0                   7.7 × 10−2             1.6 × 10−2      2.066 × 10−2          4.55 × 10−2
        (0.0)                 (8.63 × 10−2 )         2.2 × 10−2      (5.482 × 10−2 )       (4.46 × 10−2 )
f 12    1.770 × 10−21         0.137                  9.2 × 10−6      7.094 × 10−21         3.1 × 10−2
        (8.774 × 10−24 )      (0.23)                 3.6 × 10−6      (5.621 × 10−21 )      (5.7 × 10−2 )
f 13    1.687 × 10−21         1.51                   1.6 × 10−4      1.122 × 10−19         3.20
        (5.370 × 10−24 )      (0.10)                 7.3 × 10−5      (2.328 × 10−19 )      (0.13)
f 14    0.998                 1.02                   1.22            0.999                 1.21
        (1.110 × 10−3 )       (7.1 × 10−2 )          0.56            (7.680 × 10−3 )       (0.54)
f 15    3.200 × 10−4          7.1 × 10−4             5.0 × 10−4      3.270 × 10−4          7.7 × 10−3
        (2.672 × 10−5 )       (1.3 × 10−4 )          3.2 × 10−4      (3.651 × 10−5 )       (1.4 × 10−2 )
f 16    −1.013                −1.032                 −1.031          −1.017                −1.02
        (2.212 × 10−2 )       (1.5 × 10−4 )          4.9 × 10−7      (2.039 × 10−2 )       (1.1 × 10−2 )
f 17    0.423                 0.398                  0.398           0.425                 0.450
        (3.217 × 10−2 )       (2.0 × 10−4 )          1.5 × 10−7      (4.987 × 10−2 )       (0.21)
f 18    5.837                 3.0                    3.02            6.106                 3.0
        (3.742)               (0.0)                  0.11            (4.748)               (0.0)
f 19    −3.72                 −3.72                  −3.86           −3.72                 −3.72
        (7.846 × 10−3 )       (1.1 × 10−4 )          1.4 × 10−5      (8.416 × 10−3 )       (1.1 × 10−2 )
f 20    −3.292                −3.31                  −3.27           −3.293                −3.31
        (3.097 × 10−2 )       (7.4 × 10−2 )          5.9 × 10−2      (3.022 × 10−2 )       (5.9 × 10−3 )
f 21    −10.153               −9.11                  −5.52           −10.153               −5.36
        (1.034 × 10−7 )       (1.82)                 1.59            (7.710 × 10−8 )       (2.20)
f 22    −10.402               −9.86                  −5.52           −10.402               −5.34
        (1.082 × 10−5 )       (1.88)                 2.12            (1.842 × 10−6 )       (2.11)
f 23    −10.536               −9.96                  −6.57           −10.536               −6.03
        (1.165 × 10−5 )       (1.46)                 3.14            (7.694 × 10−7 )       (2.66)
For opt-IMMALG and opt-IMMALG01 we show the results obtained using both potential mutations. For
each algorithm we report the mean of the best candidate solutions on all runs (in the first line of each table
entry) and the standard deviation (in the second line). The best results are highlighted in boldface


123
J Glob Optim


Table 6 Comparison between opt-IMMALG (real values representation), opt-IMMALG01 (binary values
representation) and I-FEP (Improved Fast Evolutionary Algorithm), on functions f 1 , f 2 , f 10 , f 11 , f 21 , f 22
and f 23 from Table 1

            α = e^(−ρ f̂(x))                              I-FEP [43]        α = e^(−f̂(x))/ρ
            opt-IMMALG              opt-IMMALG01                             opt-IMMALG              opt-IMMALG01

f1           0.0                    9.23 × 10−12          4.16 × 10−5        4.663 × 10−19           1.7 × 10−8
f2           0.0                    0.0                   2.44 × 10−2        3.220 × 10−17           7.1 × 10−8
f 10         0.0                    18.98                 4.83 × 10−3        1.017 × 10−10           1.1 × 10−4
f 11         0.0                    7.7 × 10−2            4.54 × 10−2        2.066 × 10−2            4.55 × 10−2
f 21       −10.153                  −9.11                 −6.46              −10.153                 −5.36
f 22       −10.402                  −9.86                 −7.10              −10.402                 −5.34
f 23       −10.536                  −9.96                 −7.80              −10.536                 −6.03
For each algorithm we report the mean of the best candidate solutions averaged over all runs. The best results
are highlighted in boldface



Table 7 Comparison between opt-IMMALG (real values representation), opt-IMMALG01 (binary values
representation) and the version of CEP (Conventional Evolutionary Programming) based on three different
mutation operators [7]: GMO (Gaussian Mutation Operator), CMO (Cauchy Mutation Operator), and MMO
(Mean Mutation Operator)
       α = e^(−ρ f̂(x))                    CEP [7]                                  α = e^(−f̂(x))/ρ

       opt-IMMALG opt-IMMALG01 GMO                       CMO           MMO           opt-IMMALG opt-IMMALG01

f1     0.0                 9.23 × 10−12    3.09 × 10−7   3.07 × 10−7   9.81 × 10−7   4.663 × 10−19   1.7 × 10−8
f2     0.0                 0.0             1.99 × 10−3   5.87 × 10−3   3.23 × 10−3   3.220 × 10−17   7.1 × 10−8
f3     0.0                 0.0             17.60         5.78          11.80         3.855           1.9 × 10−10
f4     0.0                 1.0 × 10−2      5.18          0.66          1.88          8.699 × 10−3    4.1 × 10−2
f5     16.29               3.02            86.70         114.0         63.8          22.32           28.4
f7     1.995 × 10−5        12.20           9.42          9.53          7.6 × 10−3    1.143 × 10−4    3.9 × 10−3
f9     0.596               19.98           120.0         4.73          9.52          0.0             2.66
f 10   0.0                 18.98           9.10          1.3 × 10−3    7.49 × 10−4   1.017 × 10−10   1.1 × 10−4
f 11   0.0                 7.7 × 10−2      2.52 × 10−7   2.2 × 10−6    6.99 × 10−7   2.066 × 10−2    4.55 × 10−2

For each algorithm the mean of the best candidate solutions on all runs is presented. The best results are
highlighted in boldface




4.2 IA versus DIRECT, PSO and EO

Next we compared opt-IMMALG01 and opt-IMMALG with two other well-known
biologically inspired algorithms: Particle Swarm Optimization (PSO) and Evolutionary Opti-
mization (EO) [3]. For this other set of experiments we used functions f 1 , f 5 , f 9 and f 11 as
proposed in Angeline [3], and we fixed the maximum number of objective function evalua-
tions Tmax = 2.5 × 105 . The results presented in Table 8 strongly demonstrate the superior
performance of opt-IMMALG and opt-IMMALG01 both in terms of convergence and qual-
ity of the solutions.
    Table 9 presents the comparison between both versions of the IA and DIRECT [21,
28], a deterministic global search algorithm for bound constrained optimization based on
Lipschitz constant estimation. Since the results by DIRECT are not available for all func-
tions of Table 1, we used only a subset of such functions { f 5 , f 7 , f 8 , f 12 , . . . , f 23 }. The



Table 8 Comparison between opt-IMMALG (real values representation), opt-IMMALG01 (binary values
representation), PSO (particle swarm optimization), and EO (Evolutionary Optimization) [3]

        α = e^(−ρ f̂(x))                        PSO [3]      EO [3]       α = e^(−f̂(x))/ρ

       opt-IMMALG         opt-IMMALG01                                   opt-IMMALG           opt-IMMALG01

f1     0.0                9.23 × 10−12         11.75        9.8808       4.663 × 10−19        1.7 × 10−8
       (0.0)              2.44 × 10−11         1.3208       0.9444       (7.365 × 10−19 )     3.5 × 10−15
f5     16.29              3.02                 1911.598     1610.39      22.32                28.4
       (13.96)            (12.2)               374.2935     293.5783     (11.58)              (0.42)
f9     0.596              19.98                47.1354      46.4689      0.0                  2.66
       (4.178)            (7.66)               1.8782       2.4545       (0.0)                (2.39)
f 11   0.0                7.7 × 10−2           0.4498       0.4033       2.066 × 10−2         4.55 × 10−2
       (0.0)              (8.63 × 10−2 )       0.0566       0.0436       (5.482 × 10−2 )      (4.46 × 10−2 )
For each algorithm we report the mean of the best candidate solutions on all runs (in the first line of each table
entry) and the standard deviation (in the second line). The best results are highlighted in boldface



Table 9 Comparison between opt-IMMALG (real values representation), opt-IMMALG01 (binary values
representation) and DIRECT, a deterministic global search algorithm for bound constrained optimization based
on Lipschitz constant estimation [21,28]

         α = e^(−ρ f̂(x))                           DIRECT [21,28]         α = e^(−f̂(x))/ρ

        opt-IMMALG          opt-IMMALG01                                  opt-IMMALG          opt-IMMALG01

f5      16.29               3.02                   27.89                  22.32               28.4
f7      1.995 × 10−5        3.0 × 10−3             8.9 × 10−3             1.143 × 10−4        3.9 × 10−3
f8      −12535.15           −12508.38              −4093.0                −12559.69           −12568.27
f 12    1.770 × 10−21       0.137                  0.03                   7.094 × 10−21       3.1 × 10−2
f 13    1.687 × 10−21       1.51                   0.96                   1.122 × 10−19       3.20
f 14    0.998               1.02                   1.0                    0.999               1.21
f 15    3.2 × 10−4          7.1 × 10−4             1.2 × 10−3             3.27 × 10−4         7.7 × 10−3
f 16    −1.013              −1.032                 −1.031                 −1.017              −1.02
f 17    0.423               0.398                  0.398                  0.425               0.450
f 18    5.837               3.0                    3.01                   6.106               3.0
f 19    −3.72               −3.72                  −3.86                  −3.72               −3.72
f 20    −3.292              −3.31                  −3.30                  −3.293              −3.31
f 21    −10.153             −9.11                  −6.84                  −10.153             −5.36
f 22    −10.402             −9.86                  −7.09                  −10.402             −5.34
f 23    −10.536             −9.96                  −7.22                  −10.536             −6.03
For each algorithm we report the mean of the best candidate solutions averaged over all runs. The best results
are highlighted in boldface




reason why some functions could not be tested with DIRECT is that the optimum of these
functions lies at the centre of the variable bounds, which is exactly the point from which
DIRECT starts its search. For these tests we used the same values for Tmax as shown in Table 3
(Sect. 3.1).
   By inspecting the results in the table we can claim that, except for function f 19 , both
opt-IMMALG and opt-IMMALG01 show again superior performance, in particular in the
presence of rugged landscapes (multimodal functions).
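Why such functions had to be excluded follows directly from DIRECT's initialization: the algorithm always evaluates the centre of the search box first. A minimal sketch of the degenerate case (the sphere function and its bounds here are illustrative, not the excluded benchmark instances):

```python
# Toy illustration: DIRECT's very first sample is the midpoint of the
# bounding box, so a symmetric function whose global optimum sits at that
# midpoint is "solved" before any actual search happens.

def sphere(x):
    """Sphere function: global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def direct_first_sample(lower, upper):
    """The first point DIRECT evaluates: the centre of the box."""
    return [(lo + hi) / 2.0 for lo, hi in zip(lower, upper)]

n = 30
lower, upper = [-5.12] * n, [5.12] * n
centre = direct_first_sample(lower, upper)
print(sphere(centre))  # 0.0: the optimum is hit at the first evaluation
```

Any benchmark with this symmetry therefore says nothing about DIRECT's search behaviour, which is why only the subset { f 5 , f 7 , f 8 , f 12 , . . . , f 23 } is used.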




4.3 IA versus CLONALG and BCA

We have compared opt-IMMALG01 and opt-IMMALG with two well-known immuno-
logical inspired algorithms, both based on the clonal selection principle: CLONALG [19]
and BCA [40]. Two populations characterize CLONALG: a population of antigens Ag and
a population of antibodies Ab. The individual antibody, Ab, and antigen, Ag, are repre-
sented by string attributes m = m L , . . . , m 1 , that is, a point in an L-dimensional shape space
S, with m ∈ S L . Two different strategies were adopted by CLONALG, labelled as CLO-
NALG1 and CLONALG2 [9], based on different selection schemes: in CLONALG1 each
Ab at time step t will be replaced for the new generation (time step t + 1) with the best
mutated clone; whilst in CLONALG2 the new population for generation t + 1 will be pro-
duced by the n best Ab’s of the mutated clones at time step t (n is the population size).
Both schemes of CLONALG are based on the same potential mutation, produced by both
Eqs. 4 and 5. Also for these experiments we used the same values of Tmax as shown in
Table 3.
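The two replacement schemes can be sketched as follows (binary antibodies, minimisation; the clone count `dup` and the fitness normalisation are simplified placeholders rather than the exact setup of [9,19]):

```python
import math
import random

def potential_mutation(rho, f_norm):
    """Eq. 5: alpha = e^(-rho * f_hat(x)), with f_hat the fitness
    normalised into [0, 1]; Eq. 4 would instead be e^(-f_hat(x)) / rho."""
    return math.exp(-rho * f_norm)

def hypermutate(ab, rho, f_norm):
    """Flip each bit of a binary antibody with probability alpha."""
    p = potential_mutation(rho, f_norm)
    return [1 - bit if random.random() < p else bit for bit in ab]

def clonalg1_step(pop, fitness, rho, dup):
    """CLONALG1: each antibody is replaced by the best of its own clones."""
    return [min((hypermutate(ab, rho, fitness(ab)) for _ in range(dup)),
                key=fitness)
            for ab in pop]

def clonalg2_step(pop, fitness, rho, dup):
    """CLONALG2: the n best among all mutated clones form generation t+1."""
    clones = [hypermutate(ab, rho, fitness(ab))
              for ab in pop for _ in range(dup)]
    return sorted(clones, key=fitness)[:len(pop)]
```

Note how CLONALG1 preserves one survivor per lineage, while CLONALG2 lets a single good lineage take over the whole population, which explains the different convergence behaviour of the two variants.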
    Table 10 presents the comparative analysis between both versions of the IA and
CLONALG [19]: opt-IMMALG01, opt-IMMALG, CLONALG1 , and CLONALG2 . The
potential mutation of Eq. 4 was used for all four algorithms. The table shows the mean
of the best candidate solutions on all runs and the standard deviation. All the results pre-
sented for the two versions of CLONALG were previously reported in Cutello et al. [9]. The
results indicate that opt-IMMALG outperforms both versions of CLONALG on all classes
of functions, except for functions f 11 , f 16 and f 17 . If we compare the algorithms only on the
multimodal functions with many local optima ( f 8 − f 13 ), we can claim that opt-IMMALG
is capable of reaching the best solutions more easily than the other clonal selection algorithm
CLONALG. Table 11 presents the same comparison between the IAs and CLONALG but
this time using the potential mutation of Eq. 5. The results show that both opt-IMMALG
and opt-IMMALG01 again outperform both versions of CLONALG, in particular in the
unimodal and multimodal (with many local optima) classes.
So far the experimental results have shown opt-IMMALG to be the superior imple-
mentation of the two IAs, so we next compare only this version with another immunological
inspired optimization algorithm, BCA [40], and a Hybrid Genetic Algorithm, HGA. For this
comparison we used the functions listed in Table 2 and we set the Tmax value as proposed in
Timmis and Kelsey [40]. 50 independent runs were performed. Table 12 compares these three
algorithms. opt-IMMALG outperforms both BCA and HGA on 8 out of 12 test functions.
In particular the results for functions g7 , g8 , g11 , and g12 are significant.

4.4 IA versus PSO, SEA, and RCMA

Using a different experimental protocol, we have compared opt-IMMALG, using both
potential mutations (Eqs. 4, 5), with other evolutionary algorithms proposed in Versterstrøm
and Thomsen [42]: Particle Swarm Optimization (PSO) and Simple Evolutionary Algorithm
(SEA). In addition to the classical PSO, the authors in Versterstrøm and Thomsen [42] pro-
posed the attractive and repulsive PSO (arPSO), which uses a modified scheme for PSO to
avoid premature convergence. We performed the comparisons on all functions from Table 1,
except for the functions f 19 and f 20 according to Versterstrøm and Thomsen [42]. For each
experiment, the maximum number of objective function evaluations (Tmax ) was fixed to
5 × 105 for dimensions ≤30 and we performed 30 independent runs for each instance. For
functions f 1 − f 13 the comparison was performed using 100 dimensions. In this case, Tmax




Table 10 Comparison between opt-IMMALG (real values representation), opt-IMMALG01 (binary values
representation) and the two versions of CLONALG [9,19], using potential mutation 4 (α = e^(−f̂(x))/ρ)

           opt-IMMALG               opt-IMMALG01             CLONALG 1 [9,19]            CLONALG 2 [9,19]

f1         4.663 × 10−19            1.7 × 10−8               3.7 × 10−3                  5.5 × 10−4
           (7.365 × 10−19 )         (3.5 × 10−15 )           (2.6 × 10−3 )               (2.4 × 10−4 )
f2         3.220 × 10−17            7.1 × 10−8               2.9 × 10−3                  2.7 × 10−3
           (1.945 × 10−17 )         (0.0)                    (6.6 × 10−4 )               (7.1 × 10−4 )
f3         3.855                    1.9 × 10−10              1.5 × 10+4                  5.9 × 10+3
           (5.755)                  (2.63 × 10−10 )          (1.8 × 10+3 )               (1.8 × 10+3 )
f4         8.699 × 10−3             4.1 × 10−2               4.91                        8.7 × 10−3
           (3.922 × 10−2 )          (5.3 × 10−2 )            (1.11)                      (2.1 × 10−3 )
f5         22.32                    28.4                     27.6                        2.35 × 10+2
           (11.58)                  (0.42)                   (1.034)                     (4.4 × 10+2 )
f6         0.0                      0.0                      2.0 × 10−2                  0.0
           (0.0)                    (0.0)                    (1.4 × 10−1 )               (0.0)
f7         1.143 × 10−4             3.9 × 10−3               7.8 × 10−2                  5.3 × 10−3
           (1.411 × 10−4 )          (1.3 × 10−3 )            (1.9 × 10−2 )               (1.4 × 10−3 )
f8         −12559.69                −12568.27                −11044.69                   −12533.86
           (34.59)                  (0.23)                   (186.73)                    (43.08)
f9         0.0                      2.66                     37.56                       22.41
           (0.0)                    (2.39)                   (4.88)                      (6.70)
f 10       1.017 × 10−10            1.1 × 10−4               1.57                        1.2 × 10−1
           (5.307 × 10−11 )         (3.1 × 10−5 )            (3.9 × 10−1 )               (4.1 × 10−1 )
f 11       2.066 × 10−2             4.55 × 10−2              1.7 × 10−2                  4.6 × 10−2
           (5.482 × 10−2 )          (4.46 × 10−2 )           (1.9 × 10−2 )               (7.0 × 10−2 )
f 12       7.094 × 10−21            3.1 × 10−2               0.336                       0.573
           (5.621 × 10−21 )         (5.7 × 10−2 )            (9.4 × 10−2 )               (2.6 × 10−1 )
f 13       1.122 × 10−19            3.20                     1.39                        1.69
           (2.328 × 10−19 )         (0.13)                   (1.8 × 10−1 )               (2.4 × 10−1 )
f 14       0.999                    1.21                     1.0021                      2.42
           (7.680 × 10−3 )          (0.54)                   (2.8 × 10−2 )               (2.60)
f 15       3.270 × 10−4             7.7 × 10−3               1.5 × 10−3                  7.2 × 10−3
           (3.651 × 10−5 )          (1.4 × 10−2 )            (7.8 × 10−4 )               (8.1 × 10−3 )
f 16       −1.017                   −1.02                    −1.0314                     −1.0210
           (2.039 × 10−2 )          (1.1 × 10−2 )            (5.7 × 10−4 )               (1.9 × 10−2 )
f 17       0.425                    0.450                    0.399                       0.422
           (4.987 × 10−2 )          (0.21)                   (2.0 × 10−3 )               (2.7 × 10−2 )
f 18       6.106                    3.0                      3.0                         3.46
           (4.748)                  (0.0)                    (1.3 × 10−5 )               (3.28)
f 19       −3.72                    −3.72                    −3.71                       −3.68
           (8.416 × 10−3 )          (1.1 × 10−2 )            (1.5 × 10−2 )               (6.9 × 10−2 )
f 20       −3.293                   −3.31                    −3.23                       −3.18
           (3.022 × 10−2 )          (5.9 × 10−3 )            (5.9 × 10−2 )               (1.2 × 10−1 )
f 21       −10.153                  −5.36                    −5.92                       −3.98
           (7.710 × 10−8 )          (2.20)                   (1.77)                      (2.73)
f 22       −10.402                  −5.34                    −5.90                       −4.66
           (1.842 × 10−6 )          (2.11)                   (2.09)                      (2.55)
f 23       −10.536                  −6.03                    −5.98                       −4.38
           (7.694 × 10−7 )          (2.66)                   (1.98)                      (2.66)
Each result reports the mean of the best candidate solutions on all runs (in the first line of each table entry),
and the standard deviation (in the second line). The best results are highlighted in boldface






Table 11 Comparison between opt-IMMALG (real values representation), opt-IMMALG01 (binary values
representation) and the two versions of CLONALG [9,19], using potential mutation 5 (α = e^(−ρ f̂(x))).
Each result indicates the mean of the best candidate solutions on all runs (in the first line of each table
entry), and the standard deviation (in the second line)

           opt-IMMALG               opt-IMMALG01              CLONALG 1 [9,19]             CLONALG 2 [9,19]

f1         0.0                      9.23 × 10−12              9.6 × 10−4                   3.2 × 10−6
           (0.0)                    (2.44 × 10−11 )           (1.6 × 10−3 )                (1.5 × 10−6 )
f2         0.0                      0.0                       7.7 × 10−5                   1.2 × 10−4
           (0.0)                    (0.0)                     (2.5 × 10−5 )                (2.1 × 10−5 )
f3     0.0                      0.0                       2.2 × 10+4                   2.4 × 10+4
           (0.0)                    (0.0)                     (1.3 × 10−4 )                (5.7 × 10+3 )
f4         0.0                      1.0 × 10−2                9.44                         5.9 × 10−4
           (0.0)                    (5.3 × 10−3 )             (1.98)                       (3.5 × 10−4 )
f5         16.29                    3.02                      31.07                        4.67 × 10+2
           (13.96)                  (12.2)                    (13.48)                      (6.3 × 10+2 )
f6         0.0                      0.2                       0.52                         0.0
           (0.0)                    (0.44)                    (0.49)                       (0.0)
f7         1.995 × 10−5             3.0 × 10−3                1.3 × 10−1                   4.6 × 10−3
           (2.348 × 10−5 )          (1.2 × 10−3 )             (3.5 × 10−2 )                (1.6 × 10−3 )
f8         −12535.15                −12508.38                 −11099.56                    −1228.39
           (62.81)                  (155.54)                  (112.05)                     (41.08)
f9         0.596                    19.98                     42.93                        21.75
           (4.178)                  (7.66)                    (3.05)                       (5.03)
f 10       0.0                      18.98                     18.96                        19.30
           (0.0)                    (0.35)                    (2.2 × 10−1 )                (1.9 × 10−1 )
f 11       0.0                      7.7 × 10−2                3.6 × 10−2                   9.4 × 10−2
           (0.0)                    (8.63 × 10−2 )            (3.5 × 10−2 )                (1.4 × 10−1 )
f 12       1.770 × 10−21            0.137                     0.632                        0.738
           (8.774 × 10−24 )         (0.23)                    (2.2 × 10−1 )                (5.3 × 10−1 )
f 13       1.687 × 10−21            1.51                      1.83                         1.84
           (5.370 × 10−24 )         (0.10)                    (2.7 × 10−1 )                (2.7 × 10−1 )
f 14       0.998                    1.02                      1.0062                       1.45
           (1.110 × 10−3 )          (7.1 × 10−2 )             (4.0 × 10−2 )                (0.95)
f 15       3.2 × 10−4               7.1 × 10−4                1.4 × 10−3                   8.3 × 10−3
           (2.672 × 10−5 )          (1.3 × 10−4 )             (5.4 × 10−4 )                (8.5 × 10−3 )
f 16       −1.013                   −1.032                    −1.0315                      −1.0202
           (2.212 × 10−2 )          (1.5 × 10−4 )             (1.8 × 10−4 )                (1.8 × 10−2 )
f 17       0.423                    0.398                     0.401                        0.462
           (3.217 × 10−2 )          (2.0 × 10−4 )             (8.8 × 10−3 )                (2.0 × 10−1 )
f 18       5.837                    3.0                       3.0                          3.54
           (3.742)                  (0.0)                     (1.3 × 10−7 )                (3.78)
f 19       −3.72                    −3.72                     −3.71                        −3.67
           (7.846 × 10−3 )          (1.1 × 10−4 )             (1.1 × 10−2 )                (6.6 × 10−2 )
f 20       −3.292                   −3.31                     −3.30                        −3.21
           (3.097 × 10−2 )          (7.4 × 10−2 )             (1.0 × 10−2 )                (8.6 × 10−2 )
f 21       −10.153                  −9.11                     −7.59                        −5.21
           (1.034 × 10−7 )          (1.82)                    (1.89)                       (1.78)
f 22       −10.402                  −9.86                     −8.41                        −7.31
           (1.082 × 10−5 )          (1.88)                    (1.4)                        (2.67)
f 23       −10.536                  −9.96                     −8.48                        −7.12
           (1.165 × 10−5 )          (1.46)                    (1.51)                       (2.48)
The best results are highlighted in boldface






Table 12 Comparison between opt-IMMALG, BCA and HGA [40]
       opt-IMMALG using α = e^(−f̂(x))/ρ    opt-IMMALG using α = e^(−ρ f̂(x))       BCA [40]      HGA [40]

g1     −1.12 ± 1.17 × 10−3                 −1.12 ± 1.62 × 10−3                    −1.08         −1.12
g2     −1.03 ± 8.82 × 10−4                 −1.03 ± 7.129 × 10−4                   −1.03         −0.99
g3     −12.03 ± 8.196 × 10−4               −12.03 ± 9.28 × 10−4                  −12.03        −12.03
g4     0.3984 ± 6.73 × 10−4                0.3985 ± 8.859 × 10−4                   0.40          0.40
g5     −178.51 ± 11.49                     −178.88 ± 9.83                       −186.73       −186.73
g6     −179.27 ± 11.498                    −179.12 ± 10.02                      −186.73       −186.73
g7     −2.529 ± 0.2026                     −2.571±0.253                            0.92          0.92
g8     1.314 × 10−12 ± 4.668 × 10−12       1.314 × 10−12 ± 4.668 × 10−12           1.0           1.0
g9     −3.51 ± 1.464 × 10−3                −0.351 ± 1.62 × 10−3                   −0.91         −0.99
g10    −186.67 ± 8.17 × 10−2               −186.65 ± 0.1158                     −186.73       −186
g11    3.81 × 10−5 ± 5.58 × 10−15          3.81 × 10−5 ± 6.98 × 10−14              0.04          0.04
g12    0.0 ± 0.0                           0.0 ± 0.0                               1             1
The functions used are listed in Table 2. For opt-IMMALG we show the mean of the best candidate solutions
on all runs, and standard deviation values (mean ± sd). The best results are highlighted in boldface



was set to 5 × 106 . For each instance we report the mean of the best solutions averaged over
all runs and the standard deviation.
    Table 13 presents the comparison between opt-IMMALG, PSO (particle swarm opti-
mization), arPSO (attractive and repulsive particle swarm optimization) and SEA (simple
evolutionary algorithm), obtained using n = 30 dimensions. The results indicate that
opt-IMMALG performs better than the above-cited algorithms, outperforming them on
the majority of the functions. In Table 14 instead we show the same comparisons but with
n = 100 dimensions. Again, opt-IMMALG outperforms SEA, PSO and its arPSO variant on
all functions. Therefore, from these experiments, we can claim that opt-IMMALG is capable
of tackling high-dimensional functions better than these evolutionary algorithms.
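The premature-convergence fix in arPSO hinges on a diversity measure over the swarm: when diversity falls below a low threshold, the sign of the attraction terms in the velocity update is inverted so the particles repel each other until diversity recovers. A sketch of the switching rule (the thresholds here are illustrative placeholders, not the settings of [42]):

```python
import math

def diversity(swarm, diag_length):
    """Average distance of the particles from the swarm centroid,
    normalised by the diagonal length of the search box."""
    n, dim = len(swarm), len(swarm[0])
    centroid = [sum(p[d] for p in swarm) / n for d in range(dim)]
    avg_dist = sum(math.dist(p, centroid) for p in swarm) / n
    return avg_dist / diag_length

def arpso_phase(div, phase, d_low=5.0e-6, d_high=0.25):
    """Return +1 (attraction) or -1 (repulsion); the sign multiplies the
    cognitive and social terms of the PSO velocity update."""
    if div < d_low:
        return -1       # swarm has collapsed: start repelling
    if div > d_high:
        return +1       # swarm has spread out again: resume attraction
    return phase        # otherwise keep the current phase
```

The hysteresis between the two thresholds prevents the swarm from oscillating rapidly between attraction and repulsion.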
Recent developments in the evolutionary algorithms field have shown that, in order to
tackle complex search spaces, pure genetic algorithms (GA) need to use local search opera-
tors and specialized crossover [25]. Such algorithms are called Memetic Algorithms
(MA) [26]. Table 15 shows the comparisons of opt-IMMALG with several real coded
memetic algorithms (RCMA) [30,32]: the CHC algorithm, Generalized Generation Gap (G3-1),
hybrid steady state RCMA (SW-100), Family Competition (FC) and RCMA with crossover
Hill Climbing (RCMA-XHC). The detailed descriptions for these algorithms can be found
in Lozano et al. [30], whilst the reported results were extracted from Noman and Iba [32].
Such experiments were performed using n = 25 dimensions, Tmax = 105 maximum number
of objective function evaluations and 30 independent runs. For this comparison we used the
potential mutation from Eq. 5. As proposed in Noman and Iba [32], the tests were performed
only on functions f 5 , f 9 and f 11 . By looking at the results reported in the table it is clear that
opt-IMMALG outperforms all RCMAs. Although RCMA-XHC obtains the best result for
function f 5 , the proposed IA presents notably better results than the other RCMAs.
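The defining structure of a memetic algorithm — a global evolutionary loop with local refinement applied to the offspring — can be sketched as below. This is a generic illustration with placeholder operators, not the CHC, G3-1, SW-100, FC or RCMA-XHC procedures compared in the table:

```python
import random

def hill_climb(x, fitness, step=0.1, iters=20):
    """First-improvement local search on a real-valued vector."""
    best = list(x)
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in best]
        if fitness(cand) < fitness(best):
            best = cand
    return best

def memetic_step(pop, fitness, mutate, local_search, elite=1):
    """One generation: truncation selection, mutation, local refinement."""
    pop = sorted(pop, key=fitness)
    parents = pop[: max(2, len(pop) // 2)]
    offspring = [mutate(random.choice(parents))
                 for _ in range(len(pop) - elite)]
    offspring = [local_search(child, fitness) for child in offspring]
    return pop[:elite] + offspring      # elitism: the best individual survives

# Minimise the sphere function with the memetic loop.
random.seed(7)
sphere = lambda x: sum(v * v for v in x)
mutate = lambda x: [v + random.gauss(0.0, 0.5) for v in x]
pop = [[random.uniform(-5.0, 5.0) for _ in range(3)] for _ in range(10)]
start = min(sphere(p) for p in pop)
for _ in range(15):
    pop = memetic_step(pop, sphere, mutate, hill_climb)
print(min(sphere(p) for p in pop) <= start)  # True: elitism never loses ground
```

The local-search call on every offspring is what distinguishes this loop from a plain GA, and it is the main cost driver: each refinement spends extra objective-function evaluations, which is why RCMA comparisons fix a common evaluation budget Tmax.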

4.5 opt- IMMALG versus opt- IMMALG∗

The analysis of the experiments reported so far has shown that opt-IMMALG, using the
second potential mutation (5), performs better in terms of solution quality and ability to
escape from local optima. While performing the parameter tuning of the algorithm we



Table 13 Comparison between opt-IMMALG, PSO (particle swarm optimization), arPSO (attractive and
repulsive particle swarm optimization) and SEA (simple evolutionary algorithm) [42], using 30 dimensions

        opt-IMMALG                                   PSO [42]              arPSO [42]            SEA [42]
        α = e^(−f̂(x))/ρ         α = e^(−ρ f̂(x))

f1       0.0                    0.0                  0.0                   6.8 × 10−13           1.79 × 10−3
         0.0                    0.0                  0.0                   5.3 × 10−13           2.77 × 10−4
f2       0.0                    0.0                  0.0                   2.09 × 10−2           1.72 × 10−2
         0.0                    0.0                  0.0                   1.48 × 10−1           1.7 × 10−3
f3       0.0                    0.0                  0.0                   0.0                   1.59 × 10−2
         0.0                    0.0                  0.0                   2.13 × 10−25          4.25 × 10−3
f4      5.6 × 10−4              0.0                  2.11 × 10−16          1.42 × 10−5           1.98 × 10−2
        2.18 × 10−3             0.0                  8.01 × 10−16          8.27 × 10−6           2.07 × 10−3
f5      21.16                  12                    4.026                 3.55 × 10+2           31.32
        11.395                 13.22                 4.99                  2.15 × 10+3           17.4
f6       0.0                    0.0                  4 × 10−2              18.98                 0.0
         0.0                    0.0                  1.98 × 10−1           63                    0.0
f7      3.7 × 10−5              1.52 × 10−5          1.91 × 10−3           3.89 × 10−4           7.11 × 10−4
        5.62 × 10−5             2.05 × 10−5          1.14 × 10−3           4.78 × 10−4           3.27 × 10−4
f8      −1.257 × 10+4          −1.256 × 10+4         −7.187 × 10+3         −8.598 × 10+3         −1.167 × 10+4
        8.369                  25.912                6.72 × 10+2           2.07 × 10+3           2.34 × 10+2
f9       0.0                    0.0                  49.17                 2.15                  7.18 × 10−1
         0.0                    0.0                  16.2                  4.91                  9.22 × 10−1
f 10    4.74 × 10−16           0.0                   1.4                   1.84 × 10−7           1.05 × 10−2
        1.21 × 10−15           0.0                   7.91 × 10−1           7.15 × 10−8           9.08 × 10−4
f 11     0.0                    0.0                  2.35 × 10−2           9.23 × 10−2           4.64 × 10−3
         0.0                    0.0                  3.54 × 10−2           3.41 × 10−1           3.96 × 10−3
f 12    1.787 × 10−21          1.77 × 10−21          3.819 × 10−1          8.559 × 10−3          4.56 × 10−6
        5.06 × 10−23           7.21 × 10−24          8.4 × 10−1            4.79 × 10−2           8.11 × 10−7
f 13    1.702 × 10−21          1.686 × 10−21         −5.969 × 10−1         −9.626 × 10−1         −1.143
        4.0628 × 10−23         1.149 × 10−24         5.17 × 10−1           5.14 × 10−1           1.34 × 10−5
f 14    9.98 × 10−1            9.98 × 10−1           1.157                 9.98 × 10−1           9.98 × 10−1
        5.328 × 10−4           2.719 × 10−4          3.68 × 10−1           2.13 × 10−8           4.33 × 10−8
f 15    3.26 × 10−4             3.215 × 10−4         1.338 × 10−3          1.248 × 10−3          3.704 × 10−4
        3.64 × 10−5             2.56 × 10−5          3.94 × 10−3           3.96 × 10−3           8.78 × 10−5
f 16    −1.023                 −1.017                −1.032                −1.032                −1.032
        1.52 × 10−2            3.625 × 10−2          3.84 × 10−8           3.84 × 10−8           3.16 × 10−8
f 17    4.19 × 10−1            4.2 × 10−1            3.98 × 10−1           3.98 × 10−1           3.98 × 10−1
        2.9 × 10−2             3.5158 × 10−2         5.01 × 10−9           5.01 × 10−9           2.20 × 10−8
f 18    4.973                  5.371                 3.0                   3.516                 3.0
        2.9366                 3.0449                0.0                   3.65                  0.0
f 21     −10.15                 −10.15               −5.4                  −8.18                 −8.41
        1.81 × 10−6             1.018 × 10−7         3.40                  2.60                  3.16
f 22     −10.4                 −10.4                 −6.946                −8.435                −8.9125
        1.19 × 10−6            9.3 × 10−6            3.70                  2.83                  2.86
f 23     −10.54                −10.54                −6.71                 −8.616                −9.8
        6.788 × 10−7           7.29 × 10−6           3.77                  2.88                  2.24
For opt-IMMALG we show the results obtained using both potential mutations (Eqs. 4, 5). For all algorithms
we report the mean of the best candidate solutions on all runs (in the first line of each table entry), and the standard
deviation (in the second line). The best results are highlighted in boldface. Results have been averaged over
30 independent runs and Tmax = 5 × 10⁵




                                                                                                     123
J Glob Optim


Table 14 Comparison between opt-IMMALG, PSO (particle swarm optimization), arPSO (attractive and
repulsive particle swarm optimization) and SEA (simple evolutionary algorithm) [42], using 100 dimensions

        opt-IMMALG            opt-IMMALG            PSO [42]               arPSO [42]            SEA [42]
        α = e^(−f̂(x)/ρ)       α = e^(−ρf̂(x))

f1      0.0                   0.0                   0.0                    7.4869 × 10+2         5.229 × 10−4
        0.0                   0.0                   0.0                    2.31 × 10+3           5.18 × 10−5
f2      0.0                   0.0                   1.804 × 10+1           3.9637 × 10+1         1.737 × 10−2
        0.0                   0.0                   6.52 × 10+1            2.45 × 10+1           9.43 × 10−4
f3      0.0                   0.0                   3.666 × 10+3           1.817 × 10+1          3.68 × 10−2
        0.0                   0.0                   6.94 × 10+3            2.50 × 10+1           6.06 × 10−3
f4      7.32 × 10−4           6.447 × 10−7          5.312                  2.4367                7.6708 × 10−3
        2.109 × 10−3          3.338 × 10−6          8.63 × 10−1            3.80 × 10−1           5.71 × 10−4
f5      97.02                 74.99                 2.02 × 10+2            2.36 × 10+2           9.249 × 10+1
        54.73                 38.99                 7.66 × 10+2            1.25 × 10+2           1.29 × 10+1
f6      0.0                   0.0                   2.1                    4.118 × 10+2          0.0
        0.0                   0.0                   3.52                   4.21 × 10+2           0.0
f7      1.763 × 10−5          1.59 × 10−5           2.784 × 10−2           3.23 × 10−3           7.05 × 10−4
        2.108 × 10−5          3.61 × 10−5           7.31 × 10−2            7.87 × 10−4           9.70 × 10−5
f8      −4.176 × 10+4         −4.16 × 10+4          −2.1579 × 10+4         −2.1209 × 10+4        −3.943 × 10+4
        2.08 × 10+2           2.06 × 10+2           1.73 × 10+3            2.98 × 10+3           5.36 × 10+2
f9      0.0                   0.0                   2.4359 × 10+2          4.809 × 10+1          9.9767 × 10−2
        0.0                   0.0                   4.03 × 10+1            9.54                  3.04 × 10−1
f 10    1.18 × 10−16          0.0                   4.49                   5.628 × 10−2          2.93 × 10−3
        6.377 × 10−16         0.0                   1.73                   3.08 × 10−1           1.47 × 10−4
f 11    0.0                   0.0                   4.17 × 10−1            8.53 × 10−2           1.89 × 10−3
        0.0                   0.0                   6.45 × 10−1            2.56 × 10−1           4.42 × 10−3
f 12    5.34 × 10−22          5.3169 × 10−22        1.77 × 10−1            9.219 × 10−2          2.978 × 10−7
        9.81 × 10−24          5.0655 × 10−24        1.75 × 10−1            4.61 × 10−1           2.76 × 10−8
f 13    1.712 × 10−21         1.689 × 10−21         −3.86 × 10−1           3.301 × 10+2          −1.142810
        9.379 × 10−23         9.877 × 10−24         9.47 × 10−1            1.72 × 10+3           2.41 × 10−8
For opt-IMMALG we show the results obtained using both potential mutations (Eqs. 4, 5). For all algorithms
we report the mean of the best candidate solutions on all runs (in the first line of each table entry), and the standard
deviation (in the second line). The best results are highlighted in boldface. Results have been averaged over
30 independent runs and Tmax = 5 × 10⁶




Table 15 Comparison between opt-IMMALG and several Real Coded Memetic Algorithms (RCMA) proposed in Noman and Iba [32]

Algorithm                            f 11                             f9                               f5

opt-IMMALG                          0.0                              0.0                              4.68
CHC                                 6.5 × 10−3                       1.6 × 10+1                       1.9 × 10+1
G3-1                                5.1 × 10−1                       7.4 × 10+1                       2.8 × 10+1
SW-100                              2.7 × 10−2                       7.6                              1 × 10+1
FC                                  3.5 × 10−4                       5.5                              2.3 × 10+1
RCMA-XHC                            1.3 × 10−2                       1.4                              2.2
We report the mean of the best individuals on all runs. The best results are highlighted in boldface. Results
have been averaged over 30 independent runs, using Tmax = 10⁵ and n = 25 dimensions






Table 16 Comparison between opt-IMMALG and opt-IMMALG∗ , with a maximum number of objective
function evaluations Tmax = 5 × 10⁵ for dimension n = 30, and Tmax = 5 × 10⁶ for dimension n = 100

             n = 30                                                    n = 100
             opt-IMMALG                     opt-IMMALG∗                opt-IMMALG                opt-IMMALG∗

f1            0.0                           0.0                         0.0                       0.0
              0.0                           0.0                         0.0                       0.0
f2            0.0                           0.0                         0.0                       0.0
              0.0                           0.0                         0.0                       0.0
f3            0.0                           0.0                         0.0                       0.0
              0.0                           0.0                         0.0                       0.0
f4            0.0                           0.0                        6.447 × 10−7               0.0
              0.0                           0.0                        3.338 × 10−6               0.0
f5           12                             0.0                        74.99                      22.116
             13.22                          0.0                        38.99                      39.799
f6            0.0                           0.0                         0.0                       0.0
              0.0                           0.0                         0.0                       0.0
f7           1.521 × 10−5                   7.4785 × 10−6              1.59 × 10−5               1.2 × 10−6
             2.05 × 10−5                    6.463 × 10−6               3.61 × 10−5               1.53 × 10−6
f8           −1.256041 × 10+4               −9.05 × 10+3               −4.16 × 10+4              −2.727 × 10+4
             25.912                         1.91 × 10+4                2.06 × 10+2               7.627 × 10−4
f9            0.0                           0.0                         0.0                       0.0
              0.0                           0.0                         0.0                       0.0
f 10          0.0                           0.0                         0.0                       0.0
              0.0                           0.0                         0.0                       0.0
f 11          0.0                           0.0                         0.0                       0.0
              0.0                           0.0                         0.0                       0.0
f 12          0.0                           0.0                         0.0                       0.0
              0.0                           0.0                         0.0                       0.0
f 13          0.0                           0.0                         0.0                       0.0
              0.0                           0.0                         0.0                       0.0
We report the mean of the best candidate solutions on all runs (in the first line of each table entry), and the standard
deviation (in the second line). The best results are highlighted in boldface



noticed that randomly choosing the age of the candidate solutions in the range [0, (2/3)τB ] and
fixing θ = 50%, opt-IMMALG improves its performance. We call this new variant
opt-IMMALG∗ .
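The age-handling rule of opt-IMMALG∗ can be sketched as follows. This is a minimal illustration, not the authors' code: the function names are ours, and the use of a static aging threshold τB for removal is an assumption based on the life-span discussion above.

```python
import random

def initial_age(tau_b: float) -> float:
    """opt-IMMALG* variant: a new candidate solution receives a
    random age drawn uniformly from [0, (2/3)*tau_b]."""
    return random.uniform(0.0, (2.0 / 3.0) * tau_b)

def survives(age: float, tau_b: float) -> bool:
    """Static aging (assumed): a candidate is removed from the
    population once its age exceeds the threshold tau_b."""
    return age <= tau_b
```

Starting candidates at a random age below (2/3)τB, rather than at zero, staggers their removal times and keeps the population from being refreshed all at once.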
   Table 16 shows the improved performance of opt-IMMALG∗ on the first 13 functions.
For these experiments we fixed Tmax = 5 × 10⁵ for 30 dimensions, and Tmax = 5 × 10⁶ for
100 dimensions, which corresponds to the experimental protocol used in the previous subsection
and proposed in Versterstrøm and Thomsen [42]. All results with value ≤ 10⁻²⁵
were reported as 0.0. The improved performance is particularly evident for function f 5 with
n = 30, where opt-IMMALG∗ now reaches the best solution while the previous
variants failed.
   In Tables 17 and 18 we present again the comparison with FEP, this time including
the new variant of opt-IMMALG. Table 17 shows the results obtained by opt-IMMALG∗
on the first 13 functions, whilst Table 18 shows the results on the multimodal functions with a few
local optima ( f 14 − f 23 ). The new variant opt-IMMALG∗ improves the overall quality of
the results, in particular for functions f 5 and f 9 . The opposite behaviour is obtained
in Table 18, where the new variant (opt-IMMALG∗ ) is comparable to, but does not outperform,
opt-IMMALG. Most likely, for this class of functions, each candidate
solution needs a longer life span.



Table 17 Comparison between opt-IMMALG, opt-IMMALG∗ , and FEP (Fast Evolutionary Programming)
[43], on the first 13 functions
       opt-IMMALG opt-IMMALG∗ FEP [43]                         opt-IMMALG opt-IMMALG∗ FEP [43]

f 1 0.0               0.0                5.7 × 10−4     f8     −12535.15       −8707.04            −12554.5
    0.0               0.0                1.3 × 10−4            62.81           1.7 × 103           52.6
f 2 0.0               0.0                8.1 × 10−3     f9     0.596           0.0                 4.6 × 10−2
    0.0               0.0                7.7 × 10−4            4.178           0.0                 1.2 × 10−2
f 3 0.0               0.0                1.6 × 10−2     f 10   0.0             0.0                 1.8 × 10−2
    0.0               0.0                1.4 × 10−2            0.0             0.0                 2.1 × 10−3
f 4 0.0               0.0                0.3            f 11   0.0             0.0                 1.6 × 10−2
    0.0               0.0                0.5                   0.0             0.0                 2.2 × 10−2
f 5 16.29             0.0                5.06           f 12   0.0             0.0                 9.2 × 10−6
    13.96             0.0                5.87                  0.0             0.0                 3.6 × 10−6
f 6 0.0               0.0                0.0            f 13   0.0             0.0                 1.6 × 10−4
    0.0               0.0                0.0                   0.0             0.0                 7.3 × 10−5
f 7 1.995 × 10−5      1.6 × 10−5         7.6 × 10−3
    2.348 × 10−5      1.37 × 10−5        2.6 × 10−3
The experimental protocol used is the one described in Sect. 4.1. For all algorithms we report the mean of the
best candidate solutions on all runs (in the first line of each table entry), and the standard
deviation (in the second line). The best results are highlighted in boldface

Table 18 Comparison between opt-IMMALG, opt-IMMALG∗ , and FEP (Fast Evolutionary Programming)
[43], on all functions in the last category, i.e. multimodal functions with a few local optima

                        opt-IMMALG                             opt-IMMALG∗                        FEP [43]

f 14                    0.998                                  1.255                              1.22
                        1.11 × 10−3                            1.14                               0.56
f 15                    3.20 × 10−4                            3.22 × 10−4                        5.0 × 10−4
                        2.672 × 10−5                           2.23 × 10−5                        3.2 × 10−4
f 16                    −1.013                                 −1.0033                            −1.031
                        2.212 × 10−2                           4.9 × 10−2                         4.9 × 10−7
f 17                    0.423                                  0.452                              0.398
                        3.217 × 10−2                           7.58 × 10−2                        1.5 × 10−7
f 18                    5.837                                  7.097                              3.02
                        3.742                                  5.61                               0.11
f 19                    −3.72                                  −3.65                              −3.86
                        7.846 × 10−3                           4.82 × 10−2                        1.4 × 10−5
f 20                    −3.29                                  −3.026                             −3.27
                        3.097 × 10−2                           0.12                               5.9 × 10−2
f 21                    −10.153                                −10.153                            −5.52
                        1.034 × 10−7                           1.46 × 10−7                        1.59
f 22                    −10.402                                −10.403                            −5.52
                        1.082 × 10−5                           1.75 × 10−5                        2.12
f 23                    −10.536                                −10.536                            −6.57
                        1.165 × 10−5                           1.76 × 10−5                        3.14
The experimental protocol used is the one described in Sect. 4.1. For all algorithms we report the mean of the
best candidate solutions on all runs (in the first line of each table entry), and the standard
deviation (in the second line). The best results are highlighted in boldface

4.6 IA versus differential evolution algorithms

Among the many evolutionary methodologies able to effectively tackle global numerical optimization problems, differential evolution (DE) has shown better performance on complex



Table 19 Comparison between opt-IMMALG, opt-IMMALG∗ , and several DE variants, proposed in
Mezura-Montes et al. [31]
Algorithm                   Unimodal functions
                             f1           f2            f3            f4          f6            f7

opt-IMMALG∗                 0.0           0.0           0.0          0.0          0.0           2.79 × 10−5
opt-IMMALG                  0.0           0.0           0.0          0.0          0.0           4.89 × 10−5
DE rand/1/bin               0.0           0.0           0.02         1.9521       0.0           0.0
DE rand/1/exp               0.0           0.0           0.0          3.7584       0.84          0.0
DE best/1/bin               0.0           0.0           0.0          0.0017       0.0           0.0
DE best/1/exp               407.972       3.291         10.6078      1.701872     2737.8458     0.070545
DE current-to-best/1        0.54148       4.842         0.471730     4.2337       1.394         0.0
DE current-to-rand/1        0.69966       3.503         0.903563     3.298563     1.767         0.0
DE current-to-rand/1/bin    0.0           0.0           0.000232     0.149514     0.0           0.0
DE rand/2/dir               0.0           0.0           30.112881    0.044199     0.0           0.0
Algorithm                   Multimodal functions
                             f5           f9            f 10          f 11        f 12          f 13
opt-IMMALG∗                 16.2          0.0           0.0          0.0          0.0           0.0
opt-IMMALG                  11.69         0.0           0.0          0.0          0.0           0.0
DE rand/1/bin               19.578        0.0           0.0          0.001117     0.0           0.0
DE rand/1/exp               6.696         97.753938     0.080037     0.000075     0.0           0.0
DE best/1/bin               30.39087      0.0           0.0          0.000722     0.0           0.000226
DE best/1/exp               132621.5      40.003971     9.3961       5.9278       1293.0262     2584.85
DE current-to-best/1        30.984666     98.205432     0.270788     0.219391     0.891301      0.038622
DE current-to-rand/1        31.702063     92.263070     0.164786     0.184920     0.464829      5.169196
DE current-to-rand/1/bin    24.260535     0.0           0.0          0.0          0.001007      0.000114
DE rand/2/dir               30.654916     0.0           0.0          0.0          0.0           0.0
We report the mean of the best individuals on all runs. The best results are highlighted in boldface. Results
have been averaged over 100 independent runs, using Tmax = 1.2 × 10⁵ , and n = 30 dimensions. For
opt-IMMALG∗ we fixed d = 100



and continuous search spaces [34,36]. For this purpose, we compared opt-IMMALG and
opt-IMMALG∗ with several DE variants [31,42], and their memetic versions [32], using the
first 13 functions from Table 1. As previously described, for this class of experiments we used
only the second potential mutation (Eq. 5) because it gives better performance. Several
dimensions were used, from small (n = 30) to high values (n = 200). For these instances we
fixed ρ as described in Sect. 3.1. In the first experiment, opt-IMMALG and opt-IMMALG∗
are compared with 8 DE variants proposed in Mezura-Montes et al. [31], where Tmax was
fixed to 1.2 × 10⁵ [31]. For each function 100 independent runs were performed, and the
dimension was fixed to n = 30. Results are shown in Table 19. Since the authors of
Mezura-Montes et al. [31] modified the function f 8 to have its minimum at zero (rather than
−12569.5), this function is not included in the table. Inspecting the comparison in the table,
we can observe that the new variant opt-IMMALG∗ outperforms all DE variants except on
the functions f 5 and f 7 .
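The two potential mutations (Eqs. 4 and 5) mentioned above, whose form appears in the header of Table 14, can be sketched as follows. This is an illustrative sketch with our own function names; it assumes f̂(x) denotes the normalized objective value of candidate x and ρ the decay parameter fixed in Sect. 3.1.

```python
import math

def alpha_eq4(f_hat: float, rho: float) -> float:
    """First potential mutation (Eq. 4): alpha = exp(-f_hat(x) / rho)."""
    return math.exp(-f_hat / rho)

def alpha_eq5(f_hat: float, rho: float) -> float:
    """Second potential mutation (Eq. 5): alpha = exp(-rho * f_hat(x))."""
    return math.exp(-rho * f_hat)
```

Both laws shrink the mutation potential α as the (normalized) objective value improves, but Eq. 5 decays much faster for ρ > 1, which is consistent with its better performance reported above.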
   In Table 20, opt-IMMALG and opt-IMMALG∗ are compared to the rand/1/bin
variant, one of the best DE variants, based on a different experimental protocol proposed in
Versterstrøm and Thomsen [42]. For each experiment two different dimension values were
used: n = 30 with Tmax = 5 × 10⁵ , and n = 100 with Tmax = 5 × 10⁶ . Thirty independent
runs were performed for each benchmark function. In this table we present the mean of the
best candidate solutions on all runs and the standard deviation (in a new line). All results



Table 20 Comparison between opt-IMMALG, opt-IMMALG∗ and the rand/1/bin variant, proposed in
Versterstrøm and Thomsen [42]

       n = 30 dimensions                                      n = 100 dimensions

       opt-IMMALG         opt-IMMALG∗     DE                  opt-IMMALG opt-IMMALG∗ DE
                                          rand/1/bin                                 rand/1/bin
                                          [42]                                       [42]

f1     0.0                0.0             0.0                 0.0             0.0               0.0
       0.0                0.0             0.0                 0.0             0.0               0.0
f2     0.0                0.0             0.0                 0.0             0.0               0.0
       0.0                0.0             0.0                 0.0             0.0               0.0
f3     0.0                0.0             2.02 × 10−9         0.0             0.0               5.87 × 10−10
       0.0                0.0             8.26 × 10−10        0.0             0.0               1.83 × 10−10
f4     0.0                0.0             3.85 × 10−8         6.447 × 10−7    0.0               1.128 × 10−9
       0.0                0.0             9.17 × 10−9         3.338 × 10−6    0.0               1.42 × 10−10
f5     12                 0.0              0.0                74.99           22.116            0.0
       13.22              0.0              0.0                38.99           39.799            0.0
f6     0.0                0.0             0.0                 0.0             0.0               0.0
       0.0                0.0             0.0                 0.0             0.0               0.0
f7     1.521 × 10−5       7.48 × 10−6     4.939 × 10−3        1.59 × 10−5     1.2 × 10−6        7.664 × 10−3
       2.05 × 10−5        6.46 × 10−6     1.13 × 10−3         3.61 × 10−5     1.53 × 10−6       6.58 × 10−4
f8     −1.256041 × 10+4   −9.05 × 10+3    −1.256948 × 10+4    −4.16 × 10+4    −2.727 × 10+4     −4.1898 × 10+4
       25.912             1.91 × 104      2.3 × 10−4          2.06 × 10+2     7.63 × 10−4       1.06 × 10−3
f9     0.0                0.0             0.0                 0.0             0.0               0.0
       0.0                0.0             0.0                 0.0             0.0               0.0
f 10   0.0                0.0             −1.19 × 10−15       0.0             0.0               8.023 × 10−15
       0.0                0.0             7.03 × 10−16        0.0             0.0               1.74 × 10−15
f 11   0.0                0.0             0.0                 0.0             0.0               5.42 × 10−20
       0.0                0.0             0.0                 0.0             0.0               5.42 × 10−20
f 12   0.0                0.0             0.0                 0.0             0.0               0.0
       0.0                0.0             0.0                 0.0             0.0               0.0
f 13   0.0                0.0             −1.142824           0.0             0.0               −1.142824
       0.0                0.0             4.45 × 10−8         0.0             0.0               2.74 × 10−8

We report the mean of the best individuals on all runs (in the first line of each table entry), and the standard
deviation (in the second line). The best results are highlighted in boldface. The results are obtained using n = 30
and n = 100 dimensions




≤ 10⁻²⁵ were reported as 0.0 [42]. This is the same experimental protocol used for the results
in Table 16, hence the two tables are similar. The results indicate that the overall performance
of opt-IMMALG and opt-IMMALG∗ is comparable to that of the rand/1/bin
variant, in both 30 and 100 dimensions. Two memetic versions of DE variants, based
on crossover local search (XLS) and called DEfirDE and DEfirSPX, were proposed in Noman
and Iba [32]. As a final experiment, in Tables 21 and 22 we compare opt-IMMALG∗
and opt-IMMALG with these two DE algorithms, rand/1/exp and best/1/exp, and their memetic
versions, DEfirDE and DEfirSPX [32], using n = {50, 100, 200} dimensions. For each
test, the maximum number of objective function evaluations Tmax was fixed to 5 × 10⁵ , and
30 independent runs were performed. We used only the functions f 1 , f 5 , f 9 , f 10 and f 11 ,
the same used in Noman and Iba [32].
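For context, the DE schemes against which the IAs are compared build each trial vector by perturbing a random base vector with a scaled difference of two others, followed by crossover. A minimal sketch of trial-vector construction for DE/rand/1/bin follows; F = 0.5 and CR = 0.9 are common defaults, not necessarily the settings of [31,42], and selection against the parent is omitted.

```python
import random

def de_rand1_bin(pop, f=0.5, cr=0.9, rng=random):
    """One generation of trial-vector construction for DE/rand/1/bin
    (illustrative sketch). pop is a list of NP real-valued vectors."""
    num, n = len(pop), len(pop[0])
    trials = []
    for i in range(num):
        # rand/1 mutation: three distinct individuals, all different from i
        r1, r2, r3 = rng.sample([j for j in range(num) if j != i], 3)
        mutant = [pop[r1][k] + f * (pop[r2][k] - pop[r3][k]) for k in range(n)]
        # binomial (bin) crossover: take each gene from the mutant with
        # probability cr, forcing at least one mutant gene per trial vector
        j_rand = rng.randrange(n)
        trial = [mutant[k] if (rng.random() < cr or k == j_rand) else pop[i][k]
                 for k in range(n)]
        trials.append(trial)
    return trials
```

The rand/1/exp and best/1/exp variants in Tables 21 and 22 differ only in the choice of base vector (random vs. population best) and in using exponential rather than binomial crossover.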
   For the two DE algorithms and their memetic versions, in Tables 21 and 22 we report the
results obtained varying the population size over n, 5n and 10n (first three lines, respectively),
where n indicates the dimension of the search space [32]. Both tables demonstrate that the two
variants of opt-IMMALG achieve higher quality solutions than the two DE algorithms



Table 21 Comparison between opt-IMMALG∗ , opt-IMMALG and two of the best DE variants, rand/1/exp
and best/1/exp, proposed in Noman and Iba [32]

       opt-IMMALG∗       opt-IMMALG         DE rand/1/exp [32]                DE best/1/exp [32]

n = 50 dimensional search space
f1   0±0               0±0                  0±0                               309.74 ± 481.05
                                            0±0                               0±0
                                            0.0535 ± 0.0520                   0.0027 ± 0.0013
f5     1.64 ± 8.7        30 ± 21.7          79.8921 ± 102.611                 3.69 × 10+5 ± 5.011 × 10+5
                                            52.4066 ± 19.9109                 54.5985 ± 25.6652
                                            90.0213 ± 33.8734                 58.1931 ± 9.4289
f9     0±0               0±0                0±0                               0.61256 ± 1.1988
                                            0±0                               0±0
                                            0±0                               0±0
f 10   0±0               0±0                0±0                               0.2621 ± 0.5524
                                            9.36 × 10−6 ± 3.67 × 10−6         6.85 × 10−6 ± 6.06 × 10−6
                                            0.0104 ± 0.0015                   0.0067 ± 0.0015
f 11   0±0               0±0                0±0                               0.1651 ± 0.2133
                                            9.95 × 10−7 ± 4.3 × 10−7          0±0
                                            0.0053 ± 0.010                    0.0012 ± 0.0028
n = 100 dimensional search space
f1   0±0               0±0                  1.58 × 10−6 ± 3.75 × 10−6         0.0046 ± 0.0247
                                            59.926 ± 16.574                   30.242 ± 5.93
                                            2496.82 ± 246.55                  1729.40 ± 172.28
f5     26.7 ± 43         85.6 ± 31.758      120.917 ± 41.8753                 178.465 ± 60.938
                                            12312.16 ± 3981.44                7463.633 ± 2631.92
                                            3.165 × 10+6 ± 6.052 × 10+5       1.798 × 10+6 ± 3.304 × 10+5
f9     0±0               0±0                0±0                               0±0
                                            2.6384 ± 0.7977                   0.7585 ± 0.2524
                                            234.588 ± 13.662                  198.079 ± 18.947
f 10   0±0               0±0                1.02 × 10−6 ± 1.6 × 10−7          9.5 × 10−7 ± 1.1 × 10−7
                                            1.6761 ± 0.0819                   1.2202 ± 0.0965
                                            7.7335 ± 0.1584                   6.7251 ± 0.1373
f 11   0±0               0±0                0±0                               0±0
                                            1.1316 ± 0.0124                   1.0530 ± 0.0100
                                            20.037 ± 0.9614                   13.068 ± 0.8876
n = 200 dimensional search space
f1   0±0               0±0                  50.005 ± 16.376                   26.581 ± 7.4714
                                            5.45 × 10+4 ± 2605.73             4.84 × 10+4 ± 1891.24
                                            1.82 × 10+5 ± 6785.18             1.74 × 10+5 ± 6119.01
f5     88.65 ± 91.85     165.1 ± 71.2       9370.17 ± 3671.11                 6725.48 ± 1915.38
                                            4.22 × 10+8 ± 3.04 × 10+7         3.54 × 10+8 ± 3.54 × 10+7
                                            3.29 × 10+9 ± 2.12 × 10+8         3.12 × 10+9 ± 1.65 × 10+8
f9     0±0               0±0                0.4245 ± 0.2905                   0.2255 ± 0.1051
                                            1878.61 ± 60.298                  1761.55 ± 43.3824
                                            5471.35 ± 239.67                  5094.97 ± 182.77
f 10   0±0               0±0                0.5208 ± 0.0870                   0.4322 ± 0.0427
                                            15.917 ± 0.1209                   15.46 ± 0.1205
                                            19.253 ± 0.0698                   19.138 ± 0.0772
f 11   0±0               0±0                0.7687 ± 0.0768                   0.5707 ± 0.0651
                                            490.29 ± 21.225                   441.97 ± 15.877
                                            1657.93 ± 47.142                  1572.51 ± 53.611
We report the mean of the best individuals on all runs and the standard deviation (mean ± sd). The best results
are highlighted in boldface. The results are obtained using n = {50, 100, 200} dimensions






Table 22 Comparison between opt-IMMALG∗ , opt-IMMALG and the memetic versions of rand/1/exp and
best/1/exp DE variants, called DEfirDE and DEfirSPX [32]

       opt- IMMALG ∗        opt-IMMALG        DEfirDE [32]                       DEfirSPX [32]

n = 50 dimensional search space
f1   0±0                 0±0                  0±0                                0±0
                                              0±0                                0±0
                                              0.0026 ± 0.0023                    1 × 10−4 ± 4.75 × 10−5
f5     1.64 ± 8.7           30 ± 21.7         72.0242 ± 47.1958                  65.8951 ± 37.8933
                                              53.1894 ± 26.1913                  45.8367 ± 10.2518
                                              66.9674 ± 23.7196                  52.0033 ± 13.6881
f9     0±0                  0±0               0±0                                0±0
                                              0±0                                0±0
                                              0±0                                0±0
f 10   0±0                  0±0               0±0                                0±0
                                              2.28 × 10−5 ± 1.45 × 10−5          3.0 × 10−6 ± 1.07 × 10−6
                                              0.0060 ± 0.0015                    0.0019 ± 4.32 × 10−4
f 11   0±0                  0±0               0±0                                0±0
                                              0±0                                0±0
                                              4.96 · 10−4 ± 6.68 · 10−4          5.27 × 10−4 ± 0.0013
n = 100 dimensional search space
f1   0±0                 0±0                  0±0                                0±0
                                              11.731 ± 5.0574                    1.2614 ± 0.4581
                                              358.57 ± 108.12                    104.986 ± 22.549
f5     26.7 ± 43            85.6 ± 31.758     107.5604 ± 28.2529                 99.1086 ± 18.5735
                                              2923.108 ± 1521.085                732.85 ± 142.22
                                              2.822 × 10+5 ± 3.012 × 10+5        16621.32 ± 6400.43
f9     0±0                  0±0               0±0                                0±0
                                              0.1534 ± 0.1240                    0.0094 ± 0.0068
                                              17.133 ± 7.958                     27.0537 ± 20.889
f 10   0±0                  0±0               1.2 × 10−6 ± 6.07 × 10−7           0±0
                                              0.5340 ± 0.1101                    0.3695 ± 0.0734
                                              3.7515 ± 0.2773                    3.4528 ± 0.1797
f 11   0±0                  0±0               0±0                                0±0
                                              0.7725 ± 0.1008                    0.5433 ± 0.1331
                                              3.7439 ± 0.7651                    2.2186 ± 0.3010
n = 200 dimensional search space
f1   0±0                 0±0                  17.678 ± 9.483                     0.8568 ± 0.2563
                                              9056.0 ± 1840.45                   2782.32 ± 335.69
                                              44090.5 ± 6122.35                  9850.45 ± 1729.9
f5     88.65 ± 91.85        165.1 ± 71.2      5302.79 ± 2363.74                  996.69 ± 128.483
                                              2.39 × 10+7 ± 6.379 × 10+6         1.19 × 10+6 ± 4.10 × 10+5
                                              3.48 × 10+8 ± 1.75 × 10+8          1.21 × 10+7 ± 4.73 × 10+6
f9     0±0                  0±0               0.1453 ± 0.2771                    0.0024 ± 0.0011
                                              352.93 ± 46.11                     369.88 ± 136.87
                                              1193.83 ± 145.477                  859.03 ± 99.76
f 10   0±0                  0±0               0.3123 ± 0.0426                    0.1589 ± 0.0207
                                              9.2373 ± 0.4785                    6.6861 ± 0.3286
                                              14.309 ± 0.3706                    9.4114 ± 0.4581
f 11   0±0                  0±0               0.5984 ± 0.1419                    0.1631 ± 0.0314
                                              78.692 ± 11.766                    28.245 ± 4.605
                                              368.90 ± 41.116                    85.176 ± 12.824
We report the mean of the best individuals on all runs and the standard deviation (mean ± sd). The best results
are highlighted in boldface. The results shown were obtained using n = {50, 100, 200} dimensional search spaces






and their memetic versions, especially for function f5. In both tables, the difference in solution quality obtained
by opt-IMMALG∗ on function f5 compared to the other algorithms is striking: none of the compared algorithms
was able to reach solutions comparable to those of opt-IMMALG∗ on this function. Moreover, both tables indicate
that both variants of opt-IMMALG outperform the other algorithms as the function dimension increases.
Finally, it is important to highlight that the two variants of opt-IMMALG were run with a smaller population
size, particularly for high dimensions (n = {100, 200}).

4.7 IA versus swarm intelligence algorithms

Recently, artificial immune systems have been related to swarm systems, since many immuno-
logical algorithms operate in a very similar manner: as distributed systems that display emergent
behaviour at the system level, arising from low-level interactions between agents and the
environment. Therefore, several swarm intelligence algorithms proposed in Karaboga
and Basturk [29] were compared with opt-IMMALG∗ only, since the latter showed the best
performance among our variants: particle swarm optimization (PSO), the particle swarm
inspired evolutionary algorithm (PS-EA), and the artificial bee colony (ABC) algorithm.
For these experiments we used the same experimental protocol as Karaboga and
Basturk [29]: the problem dimension was set to n = {10, 20, 30}, whilst the termination
criterion was fixed at 500 (for n = 10), 750 (for n = 20), and 1000 (for n = 30) generations.
   Similarly to Karaboga and Basturk [29], in this comparison (shown in Table 23) we consid-
ered only functions f5, f9, f10 and f11 of the benchmark in Table 1, with the
addition of the following new function:
                      H(x) = (418.9829 × n) + Σ_{i=1}^{n} −x_i sin(√|x_i|)                    (11)
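Eq. 11 can be coded directly; the following is a minimal sketch, where the usual search range [−500, 500]^n and optimum location x_i ≈ 420.9687 for this Schwefel-type benchmark are assumptions not stated in the text above:

```python
import numpy as np

def schwefel_H(x):
    """Eq. 11: H(x) = 418.9829*n + sum_i -x_i * sin(sqrt(|x_i|)).

    A Schwefel-type benchmark; H(x*) ~ 0 at x_i ~ 420.9687
    (assumed from the standard definition of this function).
    """
    x = np.asarray(x, dtype=float)
    return 418.9829 * x.size - np.sum(x * np.sin(np.sqrt(np.abs(x))))

# at the origin every sine term vanishes, leaving the constant offset
print(schwefel_H(np.zeros(5)))            # 418.9829 * 5
print(schwefel_H(np.full(10, 420.9687)))  # near-zero residual
```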

    Table 23 shows that opt-IMMALG∗ outperforms all the swarm system
algorithms on all functions used, except for function H. The superior performance of opt-
IMMALG∗ over the swarm intelligence algorithms is also confirmed as the
problem dimension increases. Conversely, for function H, PS-EA reaches better solutions with
increasing problem dimension. Finally, the results reported for ABC2 were obtained
using a different experimental protocol (see Karaboga and Basturk [29]): the termination
criterion was increased to 1000 (for n = 10), 1500 (for n = 20) and 2000 (for n = 30)
generations, respectively. Although opt-IMMALG∗ was tested with a smaller number of
generations, its results are comparable to, and often better than, those of ABC2. This exper-
iment shows that opt-IMMALG∗ reaches competitive solutions, close to the global optima, in
less time than the artificial bee colony (ABC) algorithm.

4.8 IA versus LeGO and PSwarm

In this section we compare opt-IMMALG∗ with two of the best
optimization algorithms in the literature: LeGO [5] and PSwarm [41]. For this
comparison we used a different set of functions, taken from Cassioli et al. [5], which
includes 8 functions with n = 10 variables, except for function
mgw20 , where n = 20. These functions are a subset of the wider benchmark proposed
in Vaz and Vicente [41], which can be downloaded from http://www.norg.uminho.pt/aivaz/



Table 23 Comparison between opt-IMMALG∗ and some Swarm Intelligence algorithms

Algorithm             f 11               f9                f5             f 10                 H

10 variables
PSO                  0.079393           2.6559            4.3713         9.8499 × 10−13        161.87
                     0.033451           1.3896            2.3811         9.6202 × 10−13        144.16
PS- EA               0.222366           0.43404           25.303         0.19209               0.32037
                     0.0781             0.2551            29.7964        0.1951                1.6185
opt-IMMALG∗          0.0                0.0               0.0            0.0                   1.27 × 10−4
                     0.0                0.0               0.0            0.0                   1.268 × 10−14
ABC1                 0.00087            0.0               0.034072       7.8 × 10−11           1.27 × 10−9
                     0.002535           0.0               0.045553       1.16 × 10−9           4 × 10−12
ABC2                 0.000329           0.0               0.012522       4.6 × 10−11           1.27 × 10−9
                     0.00185            0.0               0.01263        5.4 × 10−11           4 × 10−12
20 variables
PSO                  0.030565           12.059            77.382         1.1778 × 10−6         543.07
                     0.025419           3.3216            94.901         1.5842 × 10−6         360.22
PS- EA               0.59036            1.8135            72.452         0.32321               1.4984
                     0.2030             0.2551            27.3441        0.097353              0.84612
opt-IMMALG∗          0.0                0.0               0.0            0.0                   237.5652
                     0.0                0.0               0.0            0.0                   710.4036
ABC1                 2.01 × 10−8        1.45 × 10−8       0.13614        1.6 × 10−11           19.83971
                     6.76 × 10−8        5.06 × 10−8       0.132013       1.9 × 10−11           45.12342
ABC2                 0.0                0.0               0.014458       0.0                   0.000255
                     0.0                0.0               0.010933       1 × 10−12             0
30 variables
PSO                  0.011151           32.476            402.54         1.4917 × 10−6         990.77
                     0.014209           6.9521            633.65         1.8612 × 10−6         581.14
PS- EA               0.8211             3.0527            98.407         0.3771                3.272
                     0.1394             0.9985            35.5791        0.098762              1.6185
opt-IMMALG∗          0.0                0.0               0.0            0.0                   2766.804
                     0.0                0.0               0.0            0.0                   2176.288
ABC1                 2.87 × 10−9        0.033874          0.219626       3 × 10−12             146.8568
                     8.45 × 10−10       0.181557          0.152742       5 × 10−12             82.3144
ABC2                 0.0                0.0               0.020121       0.0                   0.000382
                     0.0                0.0               0.021846       0.0                   1 × 10−12
For each function we show the mean of the best candidate solutions over 30 independent runs (first line
of each table entry) and the standard deviation (second line). The best results are highlighted in boldface


pswarm/. Table 24 presents the comparison among opt-IMMALG∗ , LeGO and PSwarm,
showing for each function the minimum, median and maximum values found, except for PSwarm,
for which only the minimum value found is known. Precisely, the results for PSwarm were
taken from Vaz and Vicente [41], where the optimality gap is given. For this com-
parison, as proposed in Vaz and Vicente [41] and Cassioli et al. [5], 30 independent runs
were performed for each test function, with Tmax fixed to 104 . The mean of the
best solutions obtained over the 30 runs was therefore also included in the table. For the LeGO
algorithm we show the best solutions found from 5000 accepted points (first line of each entry)
and from 5000 rejected ones (second line).
   Since a small number of objective function evaluations was set for these experiments,
a smaller population size (with respect to the previous comparisons) was used for
opt-IMMALG∗ : d = 40.






Table 24 Comparison between opt-IMMALG∗ , LeGO [5] and PSwarm [41] algorithms

Algorithm               Min                   Mean                  Median                Max

ack (global minimum at f (x) = 0)
  PSwarm               0.217164             n. a.                   n. a.                 n. a.
  LeGO                 2.04                 4.74                    4.85                  5.36
                       4.59                 6.06                    6.03                  7.97
  opt-IMMALG∗          0                    0                       0                     0
em 10 (global minimum at f (x) = −9.660152)
  PSwarm               −8.275452            n. a.                   n. a.                 n. a.
  LeGO                 −8.88                −5.16                   −5.24                 −0.488
                       −8.68                −4.12                   −4.18                 0.002
  opt-IMMALG∗          −6.3086              −5.22                   −5.225                −4.446
f x10 (global minimum at f (x) = −10.2088)
  PSwarm               −2.131509            n. a.                   n. a.                 n. a.
  LeGO                 −10.21               −1.97                   −1.48                 −1.28
                       −10.21               −1.58                   −1.48                 −1.15
  opt-IMMALG∗          −2.2                 −0.4676                 −0.3887               −0.3784
mgw10 (global minimum at f (x) = 0)
  PSwarm               1.1078 × 10−2        n. a.                   n. a.                 n. a.
  LeGO                 4.4 × 10−16          1.9 × 10−2              8.9 × 10−3            3.63
                       4.4 × 10 −16         4.0 × 10−2              3.2 × 10−2            3.63
  opt-IMMALG∗          0                    4.23 × 10−6             1.13 × 10−6           2.56 × 10−5
mgw20 (global minimum at f (x) = 0)
  PSwarm               5.3904 × 10−2        n. a.                   n. a.                 n. a.
  LeGO                 −1.3 × 10−15         7.4 × 10−2              2.5 × 10−2            7.80
                       −1.3 × 10−15         8.4 × 10−2              4.4 × 10−2            9.42
  opt-IMMALG     ∗     0                    7.28 × 10−4             3.64 × 10−5           1.6686 × 10−2
ml10 (global minimum at f (x) = −0.965)
  PSwarm               −0.965               n. a.                   n. a.                 n. a.
  LeGO                 −1.7 × 10−22         1.7 × 10−22             −1.0 × 10−132         8.6 × 10−19
                       −8.3 × 10−81         1.6 × 10−74             1.9 × 10−279          8.2 × 10−71
  opt-IMMALG∗          −7.917 × 10−2        −3.088 × 10−3           0                     0
rg10 (global minimum at f (x) = 0)
  PSwarm               0                    n. a.                   n. a.                 n. a.
  LeGO                 6.96                 57.54                   57.71                 127.40
                       9.95                 81.15                   80.59                 224.90
  opt-IMMALG∗          0                    0                       0                     0
sal10 (global minimum at f (x) = 0)
  PSwarm               0.399873             n. a.                   n. a.                 n. a.
  LeGO                 2.1 × 10−16          14.47                   15.10                 20.90
                       1.2 × 10−14          18.65                   18.90                 26.60
  opt-IMMALG∗          0                    0.113                   9.987 × 10−2          0.19987
For each function we show the minimum value found, the mean over all independent runs, and the median and
maximum values found. The best results are highlighted in boldface




   Inspecting these results, we see that opt-IMMALG∗ outperforms the other two
optimization algorithms on 5 of the 8 functions, whilst on 2 of the remaining 3,
namely f x10 and ml10 , opt-IMMALG∗ does not exhibit the worst solutions. Moreover, ana-
lyzing the solutions obtained on function em 10 by all three algorithms, we see that,
although opt-IMMALG∗ is not able to reach a minimum comparable to those of the
other two algorithms, it shows better performance with respect to the mean value
of the best solutions found, indicating overall a better search strategy. We think that



Table 25 Results obtained by opt-IMMALG using large dimensional search space, n = {1000, 5000}

                   f1                 f5                  f9                 f 10                f 11

Tmax = 104
n = 1000          1.93 × 10−1         1.01 × 10+3        2.29 × 10−2         1.21 × 10−3        1.27 × 10−2
                  2.44 × 10−2         2.94 × 10+2        5.09 × 10−3         7.76 × 10−5        1.7 × 10−3
n = 5000          16                  9.11 × 10+3        1.83                2.76 × 10−3        3.26 × 10−1
                  28.6                3.56 × 10+3        8.13                2.31 × 10−3        5.61 × 10−1
Tmax = 105
n = 1000          3.35 × 10−3         9.54 × 10+2        7.06 × 10−4         3.76 × 10−8        6.66 × 10−12
                  2.22 × 10−2         1.54 × 10+2        4.72 × 10−3         2.63 × 10−7        4.56 × 10−11
n = 5000          3.52                5.95 × 10+3        3.64 × 10−1         8.14 × 10−4        8.99 × 10−2
                  5.14                1.98 × 10+3        6.34 × 10−1         1.59 × 10−3        3.33 × 10−1
We performed 50 independent runs for each test function using different maximum numbers of objective
function evaluations, Tmax = {104 , 105 }. We fixed ρ = 9 for n = 1000 and ρ = 11.5 for n = 5000. The mean
of the best individuals over all runs (first line of each table entry) and the standard deviation (second line) are
presented

finding better solutions in the mean as well can be useful in real optimization tasks, where
one often needs a good alternative solution to the optimal one.

4.8.1 IA for high dimensional search spaces

The final set of experiments, completing this exhaustive study of the performance of
the proposed IA, consists of tackling global numerical optimization problems with very
high dimensions (n = 1000 and n = 5000). We present only the results obtained by opt-
IMMALG using the mutation potential of Eq. 5. Table 25 shows the results obtained by
opt-IMMALG for large dimensions using different Tmax values: 104 and 105 . As
expected, the proposed algorithm has more difficulty reaching optimal solutions for
the given functions as the dimensionality increases. However, as the number of
objective function evaluations increases, the algorithm begins to reach acceptable
solutions, showing better performance. This suggests that, given more time
for evolution, the algorithm also performs well in large-scale dimensions.

5 Conclusion

In this research paper we presented an extensive comparative study illustrating the perfor-
mance of two immunological optimization algorithms against 39 state-of-the-art optimization
algorithms (deterministic and nature-inspired methodologies): FEP; IFEP; three versions of
CEP; two versions of PSO and arPSO; PS-EA; two versions of ABC; EO; SEA; HGA;
immune-inspired algorithms such as BCA and two versions of CLONALG; the CHC algo-
rithm; Generalized Generation Gap (G3 − 1); hybrid steady-state RCMA (SW-100); Family
Competition (FC); RCMA with crossover hill climbing (RCMA-XHC); eleven variants of DE
and two of their memetic versions; artificial bee colony (ABC); learning for global optimization
(LeGO); and PSwarm.
    Two different versions were designed to solve the global numerical optimization problem: opt-
IMMALG01, based on binary-code representation, and opt-IMMALG, based on real
values. Moreover, two variants of opt-IMMALG were presented in this work.
    The main features of the designed immunological algorithm can be summarized as: (1)
the cloning operator, which explores the neighbourhood of a given solution, (2) the inversely



proportional hypermutation operator, which perturbs each candidate solution as a function
of its objective function value (inversely proportionally), and (3) the aging operator, which
eliminates the oldest candidate solutions from the current population in order to introduce
diversity and thus avoid local minima during the search process.
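The three operators just listed can be sketched in a few lines. The following is a minimal, illustrative implementation under stated assumptions: the mutation schedule exp(−ρ·quality), clones inheriting their parent's age, and every parameter value are our choices for illustration, not the paper's exact Eq. 5 or tuned settings.

```python
import math
import random

def opt_ia_sketch(objective, n, bounds, pop_size=20, dup=2,
                  tau_b=20, rho=3.5, t_max=10_000, seed=0):
    """Sketch of (1) cloning, (2) inversely proportional
    hypermutation and (3) aging for minimisation."""
    rng = random.Random(seed)
    lo, hi = bounds
    evals = 0

    def f(x):
        nonlocal evals
        evals += 1
        return objective(x)

    # population of (point, age, fitness) triples
    pop = [([rng.uniform(lo, hi) for _ in range(n)], 0, None) for _ in range(pop_size)]
    pop = [(x, a, f(x)) for x, a, _ in pop]
    best = min(pop, key=lambda s: s[2])

    while evals < t_max:
        fs = [s[2] for s in pop]
        f_min, f_max = min(fs), max(fs)
        offspring = []
        for x, age, fx in pop:
            # quality in [0, 1]: 1 = current best, 0 = current worst
            q = 1.0 if f_max == f_min else (f_max - fx) / (f_max - f_min)
            alpha = math.exp(-rho * q)           # (2) better -> smaller step
            for _ in range(dup):                 # (1) cloning
                c = [min(hi, max(lo, xi + rng.gauss(0.0, 0.1 * alpha * (hi - lo))))
                     for xi in x]
                offspring.append((c, age, f(c)))
        # (3) aging: everyone gets older; too-old individuals are purged
        merged = [(x, age + 1, fx) for x, age, fx in pop + offspring
                  if age + 1 <= tau_b]
        merged.sort(key=lambda s: s[2])
        pop = merged[:pop_size]
        while len(pop) < pop_size:               # refill with random newcomers
            x = [rng.uniform(lo, hi) for _ in range(n)]
            pop.append((x, 0, f(x)))
        cand = min(pop, key=lambda s: s[2])
        if cand[2] < best[2]:
            best = cand
    return best[0], best[2]

# usage: minimise the sphere function f1(x) = sum(x_i^2)
best_x, best_f = opt_ia_sketch(lambda x: sum(v * v for v in x),
                               n=3, bounds=(-5.0, 5.0))
```

Note that the aging step can discard the current best lineage; the all-time best is therefore tracked separately, while the purge itself is what reintroduces diversity.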
   For our experiments, we used a large set of test beds and numerical functions from
Cassioli et al. [5], Timmis and Kelsey [40], Vaz and Vicente [41] and Yao et al. [43]. Further-
more, the dimensionality of the problems was varied from small to high (up to 5000
variables). Our results suggest that the proposed immunological algorithm is an effective
numerical optimization algorithm (in terms of solution quality), particularly for the most
challenging high-dimensional search spaces. In particular, the relative performance of the IA
improves as the dimension of the solution space increases. Moreover, the experimental results
indicate that our IA using real-value coding reaches better solutions than the binary-code version.
   All experimental comparisons show that opt-IMMALG is comparable to, and often outper-
forms, all 39 state-of-the-art optimization algorithms.

Acknowledgments       The anonymous reviewers provided helpful feedback that measurably improved the
manuscript.



References

 1. Aiex, R.M., Resende, M.G.C., Ribeiro, C.C.: TTTPLOTS: a perl program to create time-to-target
    plots. Optim. Lett. 1, 355–366 (2007)
 2. Aiex, R.M., Resende, M.G.C., Ribeiro, C.C.: Probability distribution of solution time in GRASP: an
    experimental investigation. J. Heuristics 8, 343–373 (2002)
 3. Angeline, P.J.: Evolutionary optimization versus particle swarm optimization: philosophy and perfor-
    mance differences. In: Porto, V.W., Saravanan, N., Waagen, D., Eiben, A.E. (eds.) Evolutionary program-
    ming, vol. 7, pp. 601–610. Springer-Verlag, Berlin (1998)
 4. Caponetto, R., Fortuna, L., Fazzino, S., Xibilia, M.G.: Chaotic sequences to improve the performance of
    evolutionary algorithms. IEEE Trans. Evolut. Comput. 7(3), 289–304 (2003)
 5. Cassioli, A., Di Lorenzo, D., Locatelli, M., Schoen, F., Sciandrone, M.: Machine Learning for Global
    Optimization. Comput. Optim. Appl. doi:10.1007/s10589-010-9330-x accepted August (2010)
 6. Chambers, J.M., Cleveland, W.S., Kleiner, B., Tukey, P.A.: Graphical Models for Data Analysis. Chapman
    & Hall, London (1983)
 7. Chellapilla, K.: Combining mutation operators in evolutionary programming. IEEE Trans. Evolut. Com-
    put. 2, 91–96 (1998)
 8. Cutello, V., Narzisi, G., Nicosia, G., Pavone, M.: An immunological algorithm for global numerical opti-
    mization. In: Proceedings of the of the Seventh International Conference on Artificial Evolution (EA’05),
    vol. 3871, 284–295. LNCS (2005)
 9. Cutello, V., Narzisi, G., Nicosia, G., Pavone, M.: Clonal selection algorithms: a comparative case study
    using effective mutation potentials. In: Proceedings of the Fourth International Conference on Artificial
    Immune Systems (ICARIS’05), vol. 3627, pp. 13–28. LNCS (2005)
10. Cutello, V., Nicosia, G., Pavone, M.: A hybrid immune algorithm with information gain for the graph
    coloring problem. In: Proceedings of Genetic and Evolutionary Computation COnference (GECCO’03),
    vol. 2723, pp. 171–182. LNCS (2003)
11. Cutello, V., Nicosia, G., Pavone, M.: Exploring the capability of immune algorithms: a characterization
    of hypermutation operators. In: Proceedings of the Third International Conference on Artificial Immune
    Systems (ICARIS’04), vol. 3239, pp. 263–276. LNCS (2004)
12. Cutello, V., Nicosia, G., Pavone, M.: An immune algorithm with hyper-macromutations for the Dill’s 2D
    hydrophobic–hydrophilic model. In: Proceedings of Congress on Evolutionary Computation (CEC’04),
    vol. 1, pp. 1074–1080. IEEE Press, New York (2004)
13. Cutello, V., Nicosia, G., Pavone, M.: An immune algorithm with stochastic aging and Kullback entropy
    for the chromatic number problem. J. Comb. Optim. 14(1), 9–33 (2007)
14. Cutello, V., Nicosia, G., Pavone, M., Narzisi, G.: Real coded clonal selection algorithm for uncon-
    strained global numerical optimization using a hybrid inversely proportional hypermutation operator. In:





      Proceedings of the 21st Annual ACM Symposium on Applied Computing (SAC’06), vol. 2, pp. 950–954
      (2006)
15.   Cutello, V., Nicosia, G., Pavone, M., Timmis, J.: An immune algorithm for protein structure prediction
      on lattice models. IEEE Trans. Evolut. Comput. 11(1), 101–117 (2007)
16.   Dasgupta, D.: Advances in artificial immune systems. IEEE Comput. Intell. Mag. 40–49 (2006)
17.   Dasgupta, D., Niño, F.: Immunological Computation: Theory and Applications. CRC Press, Taylor &
      Francis Group, Boca Raton (2009)
18.   Davies, M., Secker, A., Freitas, A., Timmis, J., Clark, E., Flower, D.: Alignment-independent techniques
      for protein classification. Curr. Proteomics 5(4), 217–223 (2008)
19.   De Castro, L.N., Von Zuben, F.J.: Learning and optimization using the clonal selection principle. IEEE
      Trans. Evolut. Comput. 6(3), 239–251 (2002)
20.   Feo, T.A., Resende, M.G.C., Smith, S.H.: A greedy randomized adaptive search procedure for maximum
      independent set. Oper. Res. 42, 860–878 (1994)
21.   Finkel, D.E.: DIRECT optimization algorithm user guide. Technical report, CRSC N.C. State University.
      ftp://ftp.ncsu.edu/pub/ncsu/crsc/pdf/crsc-tr03-11.pdf (March 2003)
22.   Floudas, C.A., Pardalos, P.M. (eds.): Encyclopedia of Optimization. Springer, Berlin (2009)
23.   Garrett, S.: How do we evaluate artificial immune systems? Evolut. Comput. 13(2), 145–178 (2005)
24.   Goldberg, D.E.: The Design of Innovation: Lessons from and for Competent Genetic Algorithms, vol. 7.
      Kluwer Academic Publisher, Boston (2002)
25.   Goldberg, D.E., Voessner, S.: Optimizing global-local search hybrids. In: Genetic and Evolutionary Com-
      putation Conference (GECCO’99), pp. 220–228 (1999)
26.   Hart, W.E., Krasnogor, N., Smith, J.E.: Recent Advances in Memetic Algorithms, Series in Studies in
      Fuzziness and Soft Computing. Springer, Berlin (2005)
27.   http://www2.research.att.com/~mgcr/tttplots/
28.   Jones, D.R., Perttunen, C.D., Stuckman, B.E.: Lipschitzian optimization without the Lipschitz constant.
      J. Optim. Theory Appl. 79, 157–181 (1993)
29.   Karaboga, D., Basturk, B.: A powerful and efficient algorithm for numerical function optimization: artifi-
      cial bee colony (ABC) algorithm. J. Global Optim. 39, 459–471 (2007)
30.   Lozano, M., Herrera, F., Krasnogor, N., Molina, D.: Real-coded Memetic algorithms with crossover
      hill-climbing. Evolut. Comput. 12(3), 273–302 (2004)
31.   Mezura-Montes, E., Velázquez-Reyes, J., Coello Coello, C.: A comparative study of differential evolution
      variants for global optimization. In: Genetic and Evolutionary Computation Conference (GECCO’06),
      vol. 1, pp. 485–492 (2006)
32.   Noman N., Iba H.: Enhancing differential evolution performance with local search for high dimensional
      function optimization. In: Genetic and Evolutionary Computation Conference (GECCO’05), pp. 967–974
      (2005)
33.   Pardalos, P.M., Resende, M.: Handbook of Applied Optimization. Oxford University Press,
      Oxford (2002)
34.   Price, K.V., Storn, R.M., Lampinen, J.A.: Differential Evolution: A Practical Approach to Global Optimiza-
      tion. Springer, Berlin (2005)
35.   Smith, S., Timmis, J.: Immune network inspired evolutionary algorithm for the diagnosis of Parkinson's
      disease. Biosystems 94(1–2), 34–46 (2008)
36.   Storn, R., Price, K.V.: Differential evolution – a simple and efficient heuristic for global optimization over
      continuous spaces. J. Global Optim. 11(4), 341–359 (1997)
37.   Timmis, J.: Artificial immune systems—today and tomorrow. Nat. Comput. 6(1), 1–18 (2007)
38.   Timmis, J., Hart, E.: Application areas of AIS: the past, present and the future. J. Appl. Soft
      Comput. 8(1), 191–201 (2008)
39.   Timmis, J., Hart, E., Hone, A., Neal, M., Robins, A., Stepney, S., Tyrrell A.: Immuno-engineering. In:
      Proceedings of the international conference on Biologically Inspired Collaborative Computing (IFIP’09),
      vol. 268, pp. 3–17. IEEE Press, New York (2008)
40.   Timmis, J., Kelsey J.: Immune inspired somatic contiguous hypermutation for function optimization. In:
      Proceedings of Genetic and Evolutionary Computation Conference (GECCO’03), vol. 2723, pp. 207–218.
      LNCS (2003)
41.   Vaz, A.I.F., Vicente, L.N.: A particle swarm pattern search method for bound constrained global optimi-
      zation. J. Global Optim. 39, 197–219 (2007)
42.   Vesterstrøm, J., Thomsen, R.: A comparative study of differential evolution, particle swarm optimization,
      and evolutionary algorithms on numerical benchmark problems. In: Congress on Evolutionary Computing
      (CEC’04), vol. 1, pp. 1980–1987 (2004)
43.   Yao, X., Liu, Y., Lin, G.M.: Evolutionary programming made faster. IEEE Trans. Evolut. Com-
      put. 3(2), 82–102 (1999)



Emat 213 midterm 2 fall 2005
akabaka12
 

Viewers also liked (6)

Using uml to model immune system
Using uml to model immune systemUsing uml to model immune system
Using uml to model immune system
Ayi Purbasari
 
Central dogma
Central dogmaCentral dogma
Central dogma
RISHAV DROLIA
 
2005: A Matlab Tour on Artificial Immune Systems
2005: A Matlab Tour on Artificial Immune Systems2005: A Matlab Tour on Artificial Immune Systems
2005: A Matlab Tour on Artificial Immune Systems
Leandro de Castro
 
2001: An Introduction to Artificial Immune Systems
2001: An Introduction to Artificial Immune Systems2001: An Introduction to Artificial Immune Systems
2001: An Introduction to Artificial Immune Systems
Leandro de Castro
 
Genetic algorithm raktim
Genetic algorithm raktimGenetic algorithm raktim
Genetic algorithm raktim
Raktim Halder
 
Artificial immune system
Artificial immune systemArtificial immune system
Artificial immune system
Tejaswini Jitta
 
Using uml to model immune system
Using uml to model immune systemUsing uml to model immune system
Using uml to model immune system
Ayi Purbasari
 
2005: A Matlab Tour on Artificial Immune Systems
2005: A Matlab Tour on Artificial Immune Systems2005: A Matlab Tour on Artificial Immune Systems
2005: A Matlab Tour on Artificial Immune Systems
Leandro de Castro
 
2001: An Introduction to Artificial Immune Systems
2001: An Introduction to Artificial Immune Systems2001: An Introduction to Artificial Immune Systems
2001: An Introduction to Artificial Immune Systems
Leandro de Castro
 
Genetic algorithm raktim
Genetic algorithm raktimGenetic algorithm raktim
Genetic algorithm raktim
Raktim Halder
 
Artificial immune system
Artificial immune systemArtificial immune system
Artificial immune system
Tejaswini Jitta
 
Ad

Similar to Clonal Selection: an Immunological Algorithm for Global Optimization over Continuous Spaces (20)

Introduction to inverse problems
Introduction to inverse problemsIntroduction to inverse problems
Introduction to inverse problems
Delta Pi Systems
 
Gaussian Integration
Gaussian IntegrationGaussian Integration
Gaussian Integration
Reza Rahimi
 
Amth250 octave matlab some solutions (1)
Amth250 octave matlab some solutions (1)Amth250 octave matlab some solutions (1)
Amth250 octave matlab some solutions (1)
asghar123456
 
Engr 371 final exam august 1999
Engr 371 final exam august 1999Engr 371 final exam august 1999
Engr 371 final exam august 1999
amnesiann
 
Jacobi and gauss-seidel
Jacobi and gauss-seidelJacobi and gauss-seidel
Jacobi and gauss-seidel
arunsmm
 
Numerical Methods: curve fitting and interpolation
Numerical Methods: curve fitting and interpolationNumerical Methods: curve fitting and interpolation
Numerical Methods: curve fitting and interpolation
Nikolai Priezjev
 
Engr 371 final exam april 1999
Engr 371 final exam april 1999Engr 371 final exam april 1999
Engr 371 final exam april 1999
amnesiann
 
Lesson 25: The Definite Integral
Lesson 25: The Definite IntegralLesson 25: The Definite Integral
Lesson 25: The Definite Integral
Mel Anthony Pepito
 
Lesson 25: The Definite Integral
Lesson 25: The Definite IntegralLesson 25: The Definite Integral
Lesson 25: The Definite Integral
Matthew Leingang
 
Newton-Raphson Method
Newton-Raphson MethodNewton-Raphson Method
Newton-Raphson Method
Jigisha Dabhi
 
ตัวอย่างข้อสอบเก่า วิชาคณิตศาสตร์ ม.6 ปีการศึกษา 2553
ตัวอย่างข้อสอบเก่า วิชาคณิตศาสตร์ ม.6 ปีการศึกษา 2553ตัวอย่างข้อสอบเก่า วิชาคณิตศาสตร์ ม.6 ปีการศึกษา 2553
ตัวอย่างข้อสอบเก่า วิชาคณิตศาสตร์ ม.6 ปีการศึกษา 2553
Destiny Nooppynuchy
 
Gaussseidelsor
GaussseidelsorGaussseidelsor
Gaussseidelsor
uis
 
Matlab
MatlabMatlab
Matlab
gueste03a52
 
Quadratic form and functional optimization
Quadratic form and functional optimizationQuadratic form and functional optimization
Quadratic form and functional optimization
Junpei Tsuji
 
Approximate Integration
Approximate IntegrationApproximate Integration
Approximate Integration
Silvius
 
Assignment6
Assignment6Assignment6
Assignment6
asghar123456
 
Session 6
Session 6Session 6
Session 6
vivek_shaw
 
Math report
Math reportMath report
Math report
last4ever
 
Electronic and Communication Engineering 4th Semester (2012-june) Question Pa...
Electronic and Communication Engineering 4th Semester (2012-june) Question Pa...Electronic and Communication Engineering 4th Semester (2012-june) Question Pa...
Electronic and Communication Engineering 4th Semester (2012-june) Question Pa...
BGS Institute of Technology, Adichunchanagiri University (ACU)
 
Lesson18 Double Integrals Over Rectangles Slides
Lesson18   Double Integrals Over Rectangles SlidesLesson18   Double Integrals Over Rectangles Slides
Lesson18 Double Integrals Over Rectangles Slides
Matthew Leingang
 
Introduction to inverse problems
Introduction to inverse problemsIntroduction to inverse problems
Introduction to inverse problems
Delta Pi Systems
 
Gaussian Integration
Gaussian IntegrationGaussian Integration
Gaussian Integration
Reza Rahimi
 
Amth250 octave matlab some solutions (1)
Amth250 octave matlab some solutions (1)Amth250 octave matlab some solutions (1)
Amth250 octave matlab some solutions (1)
asghar123456
 
Engr 371 final exam august 1999
Engr 371 final exam august 1999Engr 371 final exam august 1999
Engr 371 final exam august 1999
amnesiann
 
Jacobi and gauss-seidel
Jacobi and gauss-seidelJacobi and gauss-seidel
Jacobi and gauss-seidel
arunsmm
 
Numerical Methods: curve fitting and interpolation
Numerical Methods: curve fitting and interpolationNumerical Methods: curve fitting and interpolation
Numerical Methods: curve fitting and interpolation
Nikolai Priezjev
 
Engr 371 final exam april 1999
Engr 371 final exam april 1999Engr 371 final exam april 1999
Engr 371 final exam april 1999
amnesiann
 
Lesson 25: The Definite Integral
Lesson 25: The Definite IntegralLesson 25: The Definite Integral
Lesson 25: The Definite Integral
Mel Anthony Pepito
 
Lesson 25: The Definite Integral
Lesson 25: The Definite IntegralLesson 25: The Definite Integral
Lesson 25: The Definite Integral
Matthew Leingang
 
Newton-Raphson Method
Newton-Raphson MethodNewton-Raphson Method
Newton-Raphson Method
Jigisha Dabhi
 
ตัวอย่างข้อสอบเก่า วิชาคณิตศาสตร์ ม.6 ปีการศึกษา 2553
ตัวอย่างข้อสอบเก่า วิชาคณิตศาสตร์ ม.6 ปีการศึกษา 2553ตัวอย่างข้อสอบเก่า วิชาคณิตศาสตร์ ม.6 ปีการศึกษา 2553
ตัวอย่างข้อสอบเก่า วิชาคณิตศาสตร์ ม.6 ปีการศึกษา 2553
Destiny Nooppynuchy
 
Gaussseidelsor
GaussseidelsorGaussseidelsor
Gaussseidelsor
uis
 
Quadratic form and functional optimization
Quadratic form and functional optimizationQuadratic form and functional optimization
Quadratic form and functional optimization
Junpei Tsuji
 
Approximate Integration
Approximate IntegrationApproximate Integration
Approximate Integration
Silvius
 
Lesson18 Double Integrals Over Rectangles Slides
Lesson18   Double Integrals Over Rectangles SlidesLesson18   Double Integrals Over Rectangles Slides
Lesson18 Double Integrals Over Rectangles Slides
Matthew Leingang
 
Ad

More from Mario Pavone (16)

A Hybrid Immunological Search for theWeighted Feedback Vertex Set Problem
A Hybrid Immunological Search for theWeighted Feedback Vertex Set ProblemA Hybrid Immunological Search for theWeighted Feedback Vertex Set Problem
A Hybrid Immunological Search for theWeighted Feedback Vertex Set Problem
Mario Pavone
 
The Influence of Age Assignments on the Performance of Immune Algorithms
The Influence of Age Assignments on the Performance of Immune AlgorithmsThe Influence of Age Assignments on the Performance of Immune Algorithms
The Influence of Age Assignments on the Performance of Immune Algorithms
Mario Pavone
 
Multi-objective Genetic Algorithm for Interior Lighting Design
Multi-objective Genetic Algorithm for Interior Lighting DesignMulti-objective Genetic Algorithm for Interior Lighting Design
Multi-objective Genetic Algorithm for Interior Lighting Design
Mario Pavone
 
DENSA:An effective negative selection algorithm with flexible boundaries for ...
DENSA:An effective negative selection algorithm with flexible boundaries for ...DENSA:An effective negative selection algorithm with flexible boundaries for ...
DENSA:An effective negative selection algorithm with flexible boundaries for ...
Mario Pavone
 
How long should Offspring Lifespan be in order to obtain a proper exploration?
How long should Offspring Lifespan be in order to obtain a proper exploration?How long should Offspring Lifespan be in order to obtain a proper exploration?
How long should Offspring Lifespan be in order to obtain a proper exploration?
Mario Pavone
 
O-BEE-COL: Optimal BEEs for COLoring Graphs
O-BEE-COL: Optimal BEEs for COLoring GraphsO-BEE-COL: Optimal BEEs for COLoring Graphs
O-BEE-COL: Optimal BEEs for COLoring Graphs
Mario Pavone
 
Swarm Intelligence Heuristics for Graph Coloring Problem
Swarm Intelligence Heuristics for Graph Coloring ProblemSwarm Intelligence Heuristics for Graph Coloring Problem
Swarm Intelligence Heuristics for Graph Coloring Problem
Mario Pavone
 
O-BEE-COL
O-BEE-COLO-BEE-COL
O-BEE-COL
Mario Pavone
 
12th European Conference on Artificial Life - ECAL 2013
12th European Conference on Artificial Life - ECAL 201312th European Conference on Artificial Life - ECAL 2013
12th European Conference on Artificial Life - ECAL 2013
Mario Pavone
 
Swarm Intelligence Heuristics for Graph Coloring Problem
Swarm Intelligence Heuristics for Graph Coloring ProblemSwarm Intelligence Heuristics for Graph Coloring Problem
Swarm Intelligence Heuristics for Graph Coloring Problem
Mario Pavone
 
CFP: Optimiation on Complex Systems
CFP: Optimiation on Complex SystemsCFP: Optimiation on Complex Systems
CFP: Optimiation on Complex Systems
Mario Pavone
 
Joco pavone
Joco pavoneJoco pavone
Joco pavone
Mario Pavone
 
Immunological Multiple Sequence Alignments
Immunological Multiple Sequence AlignmentsImmunological Multiple Sequence Alignments
Immunological Multiple Sequence Alignments
Mario Pavone
 
An Immune Algorithm for Protein Structure Prediction on Lattice Models
An Immune Algorithm for Protein Structure Prediction on Lattice ModelsAn Immune Algorithm for Protein Structure Prediction on Lattice Models
An Immune Algorithm for Protein Structure Prediction on Lattice Models
Mario Pavone
 
Robust Immunological Algorithms for High-Dimensional Global Optimization
Robust Immunological Algorithms for High-Dimensional Global OptimizationRobust Immunological Algorithms for High-Dimensional Global Optimization
Robust Immunological Algorithms for High-Dimensional Global Optimization
Mario Pavone
 
An Information-Theoretic Approach for Clonal Selection Algorithms
An Information-Theoretic Approach for Clonal Selection AlgorithmsAn Information-Theoretic Approach for Clonal Selection Algorithms
An Information-Theoretic Approach for Clonal Selection Algorithms
Mario Pavone
 
A Hybrid Immunological Search for theWeighted Feedback Vertex Set Problem
A Hybrid Immunological Search for theWeighted Feedback Vertex Set ProblemA Hybrid Immunological Search for theWeighted Feedback Vertex Set Problem
A Hybrid Immunological Search for theWeighted Feedback Vertex Set Problem
Mario Pavone
 
The Influence of Age Assignments on the Performance of Immune Algorithms
The Influence of Age Assignments on the Performance of Immune AlgorithmsThe Influence of Age Assignments on the Performance of Immune Algorithms
The Influence of Age Assignments on the Performance of Immune Algorithms
Mario Pavone
 
Multi-objective Genetic Algorithm for Interior Lighting Design
Multi-objective Genetic Algorithm for Interior Lighting DesignMulti-objective Genetic Algorithm for Interior Lighting Design
Multi-objective Genetic Algorithm for Interior Lighting Design
Mario Pavone
 
DENSA:An effective negative selection algorithm with flexible boundaries for ...
DENSA:An effective negative selection algorithm with flexible boundaries for ...DENSA:An effective negative selection algorithm with flexible boundaries for ...
DENSA:An effective negative selection algorithm with flexible boundaries for ...
Mario Pavone
 
How long should Offspring Lifespan be in order to obtain a proper exploration?
How long should Offspring Lifespan be in order to obtain a proper exploration?How long should Offspring Lifespan be in order to obtain a proper exploration?
How long should Offspring Lifespan be in order to obtain a proper exploration?
Mario Pavone
 
O-BEE-COL: Optimal BEEs for COLoring Graphs
O-BEE-COL: Optimal BEEs for COLoring GraphsO-BEE-COL: Optimal BEEs for COLoring Graphs
O-BEE-COL: Optimal BEEs for COLoring Graphs
Mario Pavone
 
Swarm Intelligence Heuristics for Graph Coloring Problem
Swarm Intelligence Heuristics for Graph Coloring ProblemSwarm Intelligence Heuristics for Graph Coloring Problem
Swarm Intelligence Heuristics for Graph Coloring Problem
Mario Pavone
 
12th European Conference on Artificial Life - ECAL 2013
12th European Conference on Artificial Life - ECAL 201312th European Conference on Artificial Life - ECAL 2013
12th European Conference on Artificial Life - ECAL 2013
Mario Pavone
 
Swarm Intelligence Heuristics for Graph Coloring Problem
Swarm Intelligence Heuristics for Graph Coloring ProblemSwarm Intelligence Heuristics for Graph Coloring Problem
Swarm Intelligence Heuristics for Graph Coloring Problem
Mario Pavone
 
CFP: Optimiation on Complex Systems
CFP: Optimiation on Complex SystemsCFP: Optimiation on Complex Systems
CFP: Optimiation on Complex Systems
Mario Pavone
 
Immunological Multiple Sequence Alignments
Immunological Multiple Sequence AlignmentsImmunological Multiple Sequence Alignments
Immunological Multiple Sequence Alignments
Mario Pavone
 
An Immune Algorithm for Protein Structure Prediction on Lattice Models
An Immune Algorithm for Protein Structure Prediction on Lattice ModelsAn Immune Algorithm for Protein Structure Prediction on Lattice Models
An Immune Algorithm for Protein Structure Prediction on Lattice Models
Mario Pavone
 
Robust Immunological Algorithms for High-Dimensional Global Optimization
Robust Immunological Algorithms for High-Dimensional Global OptimizationRobust Immunological Algorithms for High-Dimensional Global Optimization
Robust Immunological Algorithms for High-Dimensional Global Optimization
Mario Pavone
 
An Information-Theoretic Approach for Clonal Selection Algorithms
An Information-Theoretic Approach for Clonal Selection AlgorithmsAn Information-Theoretic Approach for Clonal Selection Algorithms
An Information-Theoretic Approach for Clonal Selection Algorithms
Mario Pavone
 

Recently uploaded (20)

Palo Alto Networks Cybersecurity Foundation
Palo Alto Networks Cybersecurity FoundationPalo Alto Networks Cybersecurity Foundation
Palo Alto Networks Cybersecurity Foundation
VICTOR MAESTRE RAMIREZ
 
SDG 9000 Series: Unleashing multigigabit everywhere
SDG 9000 Series: Unleashing multigigabit everywhereSDG 9000 Series: Unleashing multigigabit everywhere
SDG 9000 Series: Unleashing multigigabit everywhere
Adtran
 
GDG Cloud Southlake #43: Tommy Todd: The Quantum Apocalypse: A Looming Threat...
GDG Cloud Southlake #43: Tommy Todd: The Quantum Apocalypse: A Looming Threat...GDG Cloud Southlake #43: Tommy Todd: The Quantum Apocalypse: A Looming Threat...
GDG Cloud Southlake #43: Tommy Todd: The Quantum Apocalypse: A Looming Threat...
James Anderson
 
Droidal: AI Agents Revolutionizing Healthcare
Droidal: AI Agents Revolutionizing HealthcareDroidal: AI Agents Revolutionizing Healthcare
Droidal: AI Agents Revolutionizing Healthcare
Droidal LLC
 
Introducing the OSA 3200 SP and OSA 3250 ePRC
Introducing the OSA 3200 SP and OSA 3250 ePRCIntroducing the OSA 3200 SP and OSA 3250 ePRC
Introducing the OSA 3200 SP and OSA 3250 ePRC
Adtran
 
Cognitive Chasms - A Typology of GenAI Failure Failure Modes
Cognitive Chasms - A Typology of GenAI Failure Failure ModesCognitive Chasms - A Typology of GenAI Failure Failure Modes
Cognitive Chasms - A Typology of GenAI Failure Failure Modes
Dr. Tathagat Varma
 
Measuring Microsoft 365 Copilot and Gen AI Success
Measuring Microsoft 365 Copilot and Gen AI SuccessMeasuring Microsoft 365 Copilot and Gen AI Success
Measuring Microsoft 365 Copilot and Gen AI Success
Nikki Chapple
 
End-to-end Assurance for SD-WAN & SASE with ThousandEyes
End-to-end Assurance for SD-WAN & SASE with ThousandEyesEnd-to-end Assurance for SD-WAN & SASE with ThousandEyes
End-to-end Assurance for SD-WAN & SASE with ThousandEyes
ThousandEyes
 
Offshore IT Support: Balancing In-House and Offshore Help Desk Technicians
Offshore IT Support: Balancing In-House and Offshore Help Desk TechniciansOffshore IT Support: Balancing In-House and Offshore Help Desk Technicians
Offshore IT Support: Balancing In-House and Offshore Help Desk Technicians
john823664
 
Multistream in SIP and NoSIP @ OpenSIPS Summit 2025
Multistream in SIP and NoSIP @ OpenSIPS Summit 2025Multistream in SIP and NoSIP @ OpenSIPS Summit 2025
Multistream in SIP and NoSIP @ OpenSIPS Summit 2025
Lorenzo Miniero
 
Jira Administration Training – Day 1 : Introduction
Jira Administration Training – Day 1 : IntroductionJira Administration Training – Day 1 : Introduction
Jira Administration Training – Day 1 : Introduction
Ravi Teja
 
Jeremy Millul - A Talented Software Developer
Jeremy Millul - A Talented Software DeveloperJeremy Millul - A Talented Software Developer
Jeremy Millul - A Talented Software Developer
Jeremy Millul
 
Cyber Security Legal Framework in Nepal.pptx
Cyber Security Legal Framework in Nepal.pptxCyber Security Legal Framework in Nepal.pptx
Cyber Security Legal Framework in Nepal.pptx
Ghimire B.R.
 
Supercharge Your AI Development with Local LLMs
Supercharge Your AI Development with Local LLMsSupercharge Your AI Development with Local LLMs
Supercharge Your AI Development with Local LLMs
Francesco Corti
 
Evaluation Challenges in Using Generative AI for Science & Technical Content
Evaluation Challenges in Using Generative AI for Science & Technical ContentEvaluation Challenges in Using Generative AI for Science & Technical Content
Evaluation Challenges in Using Generative AI for Science & Technical Content
Paul Groth
 
AI Trends - Mary Meeker
AI Trends - Mary MeekerAI Trends - Mary Meeker
AI Trends - Mary Meeker
Razin Mustafiz
 
Agentic AI Explained: The Next Frontier of Autonomous Intelligence & Generati...
Agentic AI Explained: The Next Frontier of Autonomous Intelligence & Generati...Agentic AI Explained: The Next Frontier of Autonomous Intelligence & Generati...
Agentic AI Explained: The Next Frontier of Autonomous Intelligence & Generati...
Aaryan Kansari
 
Dr Jimmy Schwarzkopf presentation on the SUMMIT 2025 A
Dr Jimmy Schwarzkopf presentation on the SUMMIT 2025 ADr Jimmy Schwarzkopf presentation on the SUMMIT 2025 A
Dr Jimmy Schwarzkopf presentation on the SUMMIT 2025 A
Dr. Jimmy Schwarzkopf
 
Fortinet Certified Associate in Cybersecurity
Fortinet Certified Associate in CybersecurityFortinet Certified Associate in Cybersecurity
Fortinet Certified Associate in Cybersecurity
VICTOR MAESTRE RAMIREZ
 
Grannie’s Journey to Using Healthcare AI Experiences
Grannie’s Journey to Using Healthcare AI ExperiencesGrannie’s Journey to Using Healthcare AI Experiences
Grannie’s Journey to Using Healthcare AI Experiences
Lauren Parr
 
Palo Alto Networks Cybersecurity Foundation
Palo Alto Networks Cybersecurity FoundationPalo Alto Networks Cybersecurity Foundation
Palo Alto Networks Cybersecurity Foundation
VICTOR MAESTRE RAMIREZ
 
SDG 9000 Series: Unleashing multigigabit everywhere
SDG 9000 Series: Unleashing multigigabit everywhereSDG 9000 Series: Unleashing multigigabit everywhere
SDG 9000 Series: Unleashing multigigabit everywhere
Adtran
 
GDG Cloud Southlake #43: Tommy Todd: The Quantum Apocalypse: A Looming Threat...
GDG Cloud Southlake #43: Tommy Todd: The Quantum Apocalypse: A Looming Threat...GDG Cloud Southlake #43: Tommy Todd: The Quantum Apocalypse: A Looming Threat...
GDG Cloud Southlake #43: Tommy Todd: The Quantum Apocalypse: A Looming Threat...
James Anderson
 
Droidal: AI Agents Revolutionizing Healthcare
Droidal: AI Agents Revolutionizing HealthcareDroidal: AI Agents Revolutionizing Healthcare
Droidal: AI Agents Revolutionizing Healthcare
Droidal LLC
 
Introducing the OSA 3200 SP and OSA 3250 ePRC
Introducing the OSA 3200 SP and OSA 3250 ePRCIntroducing the OSA 3200 SP and OSA 3250 ePRC
Introducing the OSA 3200 SP and OSA 3250 ePRC
Adtran
 
Cognitive Chasms - A Typology of GenAI Failure Failure Modes
Cognitive Chasms - A Typology of GenAI Failure Failure ModesCognitive Chasms - A Typology of GenAI Failure Failure Modes
Cognitive Chasms - A Typology of GenAI Failure Failure Modes
Dr. Tathagat Varma
 
Measuring Microsoft 365 Copilot and Gen AI Success
Measuring Microsoft 365 Copilot and Gen AI SuccessMeasuring Microsoft 365 Copilot and Gen AI Success
Measuring Microsoft 365 Copilot and Gen AI Success
Nikki Chapple
 
End-to-end Assurance for SD-WAN & SASE with ThousandEyes
End-to-end Assurance for SD-WAN & SASE with ThousandEyesEnd-to-end Assurance for SD-WAN & SASE with ThousandEyes
End-to-end Assurance for SD-WAN & SASE with ThousandEyes
ThousandEyes
 
Offshore IT Support: Balancing In-House and Offshore Help Desk Technicians
Offshore IT Support: Balancing In-House and Offshore Help Desk TechniciansOffshore IT Support: Balancing In-House and Offshore Help Desk Technicians
Offshore IT Support: Balancing In-House and Offshore Help Desk Technicians
john823664
 
Multistream in SIP and NoSIP @ OpenSIPS Summit 2025
Multistream in SIP and NoSIP @ OpenSIPS Summit 2025Multistream in SIP and NoSIP @ OpenSIPS Summit 2025
Multistream in SIP and NoSIP @ OpenSIPS Summit 2025
Lorenzo Miniero
 
Jira Administration Training – Day 1 : Introduction
Jira Administration Training – Day 1 : IntroductionJira Administration Training – Day 1 : Introduction
Jira Administration Training – Day 1 : Introduction
Ravi Teja
 
Jeremy Millul - A Talented Software Developer
Jeremy Millul - A Talented Software DeveloperJeremy Millul - A Talented Software Developer
Jeremy Millul - A Talented Software Developer
Jeremy Millul
 
Cyber Security Legal Framework in Nepal.pptx
Cyber Security Legal Framework in Nepal.pptxCyber Security Legal Framework in Nepal.pptx
Cyber Security Legal Framework in Nepal.pptx
Ghimire B.R.
 
Supercharge Your AI Development with Local LLMs
Supercharge Your AI Development with Local LLMsSupercharge Your AI Development with Local LLMs
Supercharge Your AI Development with Local LLMs
Francesco Corti
 
Evaluation Challenges in Using Generative AI for Science & Technical Content
Evaluation Challenges in Using Generative AI for Science & Technical ContentEvaluation Challenges in Using Generative AI for Science & Technical Content
Evaluation Challenges in Using Generative AI for Science & Technical Content
Paul Groth
 
AI Trends - Mary Meeker
AI Trends - Mary MeekerAI Trends - Mary Meeker
AI Trends - Mary Meeker
Razin Mustafiz
 
Agentic AI Explained: The Next Frontier of Autonomous Intelligence & Generati...
Agentic AI Explained: The Next Frontier of Autonomous Intelligence & Generati...Agentic AI Explained: The Next Frontier of Autonomous Intelligence & Generati...
Agentic AI Explained: The Next Frontier of Autonomous Intelligence & Generati...
Aaryan Kansari
 
Dr Jimmy Schwarzkopf presentation on the SUMMIT 2025 A
Dr Jimmy Schwarzkopf presentation on the SUMMIT 2025 ADr Jimmy Schwarzkopf presentation on the SUMMIT 2025 A
Dr Jimmy Schwarzkopf presentation on the SUMMIT 2025 A
Dr. Jimmy Schwarzkopf
 
Fortinet Certified Associate in Cybersecurity
Fortinet Certified Associate in CybersecurityFortinet Certified Associate in Cybersecurity
Fortinet Certified Associate in Cybersecurity
VICTOR MAESTRE RAMIREZ
 
Grannie’s Journey to Using Healthcare AI Experiences
Grannie’s Journey to Using Healthcare AI ExperiencesGrannie’s Journey to Using Healthcare AI Experiences
Grannie’s Journey to Using Healthcare AI Experiences
Lauren Parr
 

Clonal Selection: an Immunological Algorithm for Global Optimization over Continuous Spaces

  • 2. 1 Introduction

Artificial Immune Systems (AIS) is a paradigm in biologically inspired computing which has been successfully applied to several real-world applications in computer science and engineering [17,23,37–39]. AIS are bio-inspired algorithms that take their inspiration from the natural immune system, whose function is to detect and protect the organism against foreign organisms, such as viruses, bacteria, fungi and parasites, that can cause disease. The main research work on AIS has concentrated primarily on three immunological theories: (1) immune networks, (2) negative selection and (3) clonal selection. Such algorithms have been successfully employed in a variety of application areas [18,35]. All algorithms based on the simulation of the clonal selection principle belong to a special class called Clonal Selection Algorithms (CSA), and represent an effective mechanism for search and optimization [13,15,16]. The core components of CSAs are the cloning and hypermutation operators: the former triggers the growth of a new population of high-value B cells (the candidate solutions) centered on a higher affinity value, whereas the latter can be seen as a local search procedure that leads to a faster maturation during the learning phase.

We designed and implemented an Immunological Algorithm (IA), based on CSAs, to tackle global numerical optimization problems. We give two versions of the proposed IA, using either binary-code or real-valued representations, called opt-IMMALG01 and opt-IMMALG respectively. Global optimization is the task of finding the best set of parameters to optimize a given objective function; global optimization problems are typically quite difficult to solve because of the presence of many locally optimal solutions [22]. In many real-world applications analytical solutions are not available even for simple problems, so numerical continuous optimization by approximate methods is often the only viable alternative [22,33].

Global optimization consists of finding a variable (or a set of variables) x = (x_1, x_2, ..., x_n) ∈ S, where S ⊆ R^n is a bounded set on R^n, such that a certain n-dimensional objective function f : S → R is optimized. Specifically, the goal of a global minimization problem is to find a point x_min ∈ S such that f(x_min) is a global minimum on S, i.e. ∀x ∈ S : f(x_min) ≤ f(x). Continuous optimization is a difficult task for three main reasons [33]: (1) it is difficult to decide when a global (or local) optimum has been reached; (2) there may be many locally optimal solutions in which the search algorithm can get trapped; (3) the number of suboptimal solutions grows dramatically with the dimension of the search space [22]. In this research we consider the following numerical minimization problem:

    min( f(x) ),   B_l ≤ x ≤ B_u                                (1)

where x = (x_1, x_2, ..., x_n) is the variable vector in R^n, f(x) denotes the objective function to minimize, and B_l = (B_{l_1}, B_{l_2}, ..., B_{l_n}) and B_u = (B_{u_1}, B_{u_2}, ..., B_{u_n}) represent, respectively, the lower and the upper bounds of the variables, such that x_i ∈ [B_{l_i}, B_{u_i}] (i = 1, ..., n).

To evaluate the performance and convergence ability of the proposed IAs against the state-of-the-art optimization algorithms [22], we use the classic benchmark proposed in Yao et al. [43], which includes twenty-three functions (see Table 1 in Sect. 3.1). These functions belong to three different categories: unimodal, multimodal with many local optima, and multimodal with few local optima. Moreover, we compare both IA versions with several immunological algorithms. For some of these experiments we tackled the functions proposed in Timmis and Kelsey [40] (Table 2, described in Sect. 3.1).
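The box-constrained minimization task of Eq. (1) can be sketched in a few lines of Python. This is an illustrative setup only (the sphere function stands in for f, and `random_search` is a naive baseline, not one of the paper's algorithms); all names are ours.

```python
import random

def sphere(x):
    """Stand-in objective f(x) = sum of x_i^2; global minimum f(0, ..., 0) = 0."""
    return sum(xi * xi for xi in x)

def random_search(f, lower, upper, evaluations=10000, seed=0):
    """Minimize f over the box [lower_i, upper_i] by uniform random sampling."""
    rng = random.Random(seed)
    n = len(lower)
    best_x, best_f = None, float("inf")
    for _ in range(evaluations):
        x = [rng.uniform(lower[i], upper[i]) for i in range(n)]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

n = 5
x_min, f_min = random_search(sphere, [-100.0] * n, [100.0] * n)
print(f_min)
```

Any of the benchmark functions below can be plugged in for `sphere`, together with the bounds S listed in Tables 1 and 2.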
The paper is structured as follows: in Sect. 2 we describe the proposed immunological algorithm and its main features; in Sect. 3 we describe the benchmark and the metrics used
Table 1 First class of functions to optimize [43]

Test function                                                                     n    S
f1(x) = Σ_{i=1}^n xi^2                                                            30   [−100, 100]^n
f2(x) = Σ_{i=1}^n |xi| + Π_{i=1}^n |xi|                                           30   [−10, 10]^n
f3(x) = Σ_{i=1}^n (Σ_{j=1}^i xj)^2                                                30   [−100, 100]^n
f4(x) = max_i {|xi|, 1 ≤ i ≤ n}                                                   30   [−100, 100]^n
f5(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − xi^2)^2 + (xi − 1)^2]                        30   [−30, 30]^n
f6(x) = Σ_{i=1}^n (⌊xi + 0.5⌋)^2                                                  30   [−100, 100]^n
f7(x) = Σ_{i=1}^n i·xi^4 + random[0, 1)                                           30   [−1.28, 1.28]^n
f8(x) = Σ_{i=1}^n −xi sin(√|xi|)                                                  30   [−500, 500]^n
f9(x) = Σ_{i=1}^n [xi^2 − 10 cos(2π xi) + 10]                                     30   [−5.12, 5.12]^n
f10(x) = −20 exp(−0.2 √((1/n) Σ_{i=1}^n xi^2))
         − exp((1/n) Σ_{i=1}^n cos(2π xi)) + 20 + e                               30   [−32, 32]^n
f11(x) = (1/4000) Σ_{i=1}^n xi^2 − Π_{i=1}^n cos(xi/√i) + 1                       30   [−600, 600]^n
f12(x) = (π/n) {10 sin^2(π y1) + Σ_{i=1}^{n−1} (yi − 1)^2 [1 + 10 sin^2(π y_{i+1})]
         + (yn − 1)^2} + Σ_{i=1}^n u(xi, 10, 100, 4),
         yi = 1 + (1/4)(xi + 1),
         u(xi, a, k, m) = k(xi − a)^m if xi > a; 0 if −a ≤ xi ≤ a;
                          k(−xi − a)^m if xi < −a                                 30   [−50, 50]^n
f13(x) = 0.1 {sin^2(3π x1) + Σ_{i=1}^{n−1} (xi − 1)^2 [1 + sin^2(3π x_{i+1})]
         + (xn − 1)^2 [1 + sin^2(2π xn)]} + Σ_{i=1}^n u(xi, 5, 100, 4)            30   [−50, 50]^n
f14(x) = [1/500 + Σ_{j=1}^{25} 1/(j + Σ_{i=1}^2 (xi − aij)^6)]^{−1}               2    [−65.536, 65.536]^n
f15(x) = Σ_{i=1}^{11} [ai − x1(bi^2 + bi x2)/(bi^2 + bi x3 + x4)]^2               4    [−5, 5]^n
f16(x) = 4x1^2 − 2.1x1^4 + (1/3)x1^6 + x1 x2 − 4x2^2 + 4x2^4                      2    [−5, 5]^n
f17(x) = (x2 − (5.1/(4π^2)) x1^2 + (5/π) x1 − 6)^2
         + 10 (1 − 1/(8π)) cos(x1) + 10                                           2    [−5, 10] × [0, 15]
f18(x) = [1 + (x1 + x2 + 1)^2 (19 − 14x1 + 3x1^2 − 14x2 + 6x1 x2 + 3x2^2)]
         × [30 + (2x1 − 3x2)^2 (18 − 32x1 + 12x1^2 + 48x2 − 36x1 x2 + 27x2^2)]    2    [−2, 2]^n
f19(x) = −Σ_{i=1}^4 ci exp(−Σ_{j=1}^4 aij (xj − pij)^2)                           4    [0, 1]^n
f20(x) = −Σ_{i=1}^4 ci exp(−Σ_{j=1}^6 aij (xj − pij)^2)                           6    [0, 1]^n
f21(x) = −Σ_{i=1}^5 [(x − ai)(x − ai)^T + ci]^{−1}                                4    [0, 10]^n
f22(x) = −Σ_{i=1}^7 [(x − ai)(x − ai)^T + ci]^{−1}                                4    [0, 10]^n
f23(x) = −Σ_{i=1}^{10} [(x − ai)(x − ai)^T + ci]^{−1}                             4    [0, 10]^n

We indicate with n the number of variables employed and with S ⊆ R^n the variable bounds
Table 2 Second class of numerical functions [40], with S ⊆ R^n the variable bounds

Test function                                                                     S
g1(x) = 2(x − 0.75)^2 + sin(5π x − 0.4π) − 0.125                                  0 ≤ x ≤ 1
g2(x, y) = (4 − 2.1x^2 + x^4/3) x^2 + x y + (−4 + 4y^2) y^2                       −3 ≤ x ≤ 3, −2 ≤ y ≤ 2
g3(x) = −Σ_{j=1}^5 [j sin((j + 1)x + j)]                                          −10 ≤ x ≤ 10
g4(x, y) = a(y − b x^2 + c x − d)^2 + h(1 − f) cos(x) + h,
   a = 1, b = 5.1/(4π^2), c = 5/π, d = 6, f = 1/(8π), h = 10                      −5 ≤ x ≤ 10, 0 ≤ y ≤ 15
g5(x, y) = Σ_{j=1}^5 j cos[(j + 1)x + j] + β[(x + 1.4513)^2 + (y + 0.80032)^2]    −10 ≤ x ≤ 10, −10 ≤ y ≤ 10, β = 0.5
g6(x, y) = Σ_{j=1}^5 j cos[(j + 1)x + j] + β[(x + 1.4513)^2 + (y + 0.80032)^2]    −10 ≤ x ≤ 10, −10 ≤ y ≤ 10, β = 1
g7(x, y) = x sin(4π x) − y sin(4π y + π) + 1                                      −10 ≤ x ≤ 10, −10 ≤ y ≤ 10
g8(x) = sin^6(5π x)                                                               −10 ≤ x ≤ 10, −10 ≤ y ≤ 10
g9(x, y) = x^4/4 − x^2/2 + x/10 + y^2/2                                           −10 ≤ x ≤ 10, −10 ≤ y ≤ 10
g10(x, y) = [Σ_{j=1}^5 j cos((j + 1)x + j)] [Σ_{j=1}^5 j cos((j + 1)y + j)]       −10 ≤ x ≤ 10, −10 ≤ y ≤ 10
g11(x) = 418.9829 n − Σ_{i=1}^n xi sin(√|xi|)                                     −512.03 ≤ xi ≤ 511.97, n = 3
g12(x) = 1 + Σ_{i=1}^n xi^2/4000 − Π_{i=1}^n cos(xi/√i)                           −600 ≤ xi ≤ 600, n = 20

to compare the opt-IMMALG01 and opt-IMMALG algorithms with the state-of-the-art optimization algorithms; in the same section we show the influence of the different potential mutations on the dynamics of both IAs; Sect. 4 presents a large set of experiments, comparing the two IA versions with several nature inspired methodologies; finally, Sect. 5 contains the concluding remarks.

2 The immunological algorithm

In this section we describe the IA based on the clonal selection principle. The main features of the algorithm are: (i) cloning, (ii) inversely proportional hypermutation and (iii) aging operator.
The cloning operator clones each candidate solution in order to explore its neighbourhood in the search space; the inversely proportional hypermutation perturbs each candidate solution using a law inversely proportional to its objective function value; and the aging operator eliminates old candidate solutions from the current population in order to introduce diversity and to avoid local minima during the evolutionary search process. We present two versions of the IA: the first is based on a binary-code representation (opt-IMMALG01), and the second on real values (opt-IMMALG). Both algorithms model antigens (Ag) and B cells; the Ag represents the problem to tackle, i.e. the function to optimize, while the B cell receptors are points (candidate solutions) in the search space of the problem. At each time step t the algorithm maintains a population of B cells P(t) of size d (i.e., d candidate solutions). Algorithm 1 shows the pseudo-code of the algorithm.
2.1 Initialize population

The population is initialized at time t = 0 (steps 1–4 in Algorithm 1) by randomly generating each solution using a uniform distribution in the corresponding domain of each function (see the last column of Tables 1, 2). For the binary string representation, each real value xi is coded using a bit string of length k = 32. The mapping from the binary string b = (b1, b2, ..., bk) into a real number x consists of two steps: (1) convert the bit string b from base 2 to base 10:

x' = Σ_{i=1}^{k} bi · 2^{k−i},

(2) find the corresponding real value:

x = Bli + x' (Bui − Bli) / (2^k − 1)    (2)

where Bli and Bui are the lower and upper bounds of the ith variable, respectively. In the case of the real value representation, each variable is randomly initialized as follows:

xi = Bli + β · (Bui − Bli)    (3)

where β is a random number in [0, 1] and Bli, Bui are the lower and upper bounds of the real coded variable xi, respectively. The strategy used to initialize the population plays a crucial role in evolutionary algorithms, since it influences the later performance of the algorithm. In traditional evolutionary computing, the initial population is generated using a random numbers distribution or chaotic sequences [4]. After the population is initialized, the objective function value is computed for each candidate solution x ∈ P(t), using the function Compute_Fitness(P(t)) (step 5 in Algorithm 1).

2.2 Cloning operator

The cloning operator (step 8 in Algorithm 1) clones each candidate solution dup times, producing an intermediate population P(clo) of size d × dup, and assigns to each clone a random age chosen in the range [0, τB]. The age of a candidate solution determines its lifetime in the population: when a candidate solution reaches the maximum age (τB) it is discarded, i.e. it dies. This strategy reduces premature convergence and keeps high diversity in the population.
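The two-step decoding of Eq. 2 can be sketched in a few lines of Python (the function and variable names are ours):

```python
def decode(bits, bl, bu):
    """Map a bit string (most significant bit first) into a real value in [bl, bu]:
    first convert from base 2 to base 10, then rescale to the variable bounds (Eq. 2)."""
    k = len(bits)
    x_prime = sum(b * 2 ** (k - 1 - i) for i, b in enumerate(bits))  # base 2 -> base 10
    return bl + x_prime * (bu - bl) / (2 ** k - 1)
```

By construction, the all-zeros string maps to the lower bound Bli and the all-ones string to the upper bound Bui, with 2^k evenly spaced values in between.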
An improvement of the performance can be obtained by choosing the age of each clone in the range [0, (2/3)τB], as shown in Sect. 4. The cloning operator, coupled with the hypermutation operator, performs a local search around the cloned solutions. The introduction of blind mutations can produce individuals with higher affinities (better objective function values), which will then be selected to form the improved mature progenies.

2.3 Hypermutation operator

The hypermutation operator (step 9 in Algorithm 1) acts on each candidate solution of the population P(clo). Although there are different ways of implementing this operator (see [11,12]), in this research work we use an inversely proportional strategy, where each candidate solution is subject to M mutations without explicitly using a mutation probability. The number of mutations M is determined by an inversely proportional law: the better the objective function value of the candidate solution, the lower the number of mutations performed. In this work we employ two different potential mutations to determine the number of mutations M:
α = e^(−f̂(x)/ρ),    (4)

and

α = e^(−ρ f̂(x)),    (5)

where α represents the mutation rate, ρ determines the shape of the mutation rates, and f̂(x) is the objective function value normalized in [0, 1]. Thus the number of mutations M is given by

M = ⌊α × ℓ⌋ + 1,    (6)

where ℓ is the length of any candidate solution: (1) ℓ = k · n for opt-IMMALG01, with k the number of bits used to code each real variable and n the dimension of the function; (2) ℓ = n for opt-IMMALG, that is, the dimension of the problem. By this equation at least one mutation is guaranteed for any candidate solution; this happens exactly when the solution represented by a candidate solution is very close to the optimal one in the solution space. Once the objective function is normalized into the range [0, 1], the best solutions are those whose values are closer to 1, whilst the worst ones are closer to 0. During the normalization of the objective function value we use the best current objective function value, decreased by a user-defined threshold θ, rather than the global optimum; in this way we do not use any a priori knowledge about the problem. In opt-IMMALG01 the hypermutation operator is based on the classical bit-flip mutation without redundancy: in any candidate solution x the operator randomly chooses a bit xi and inverts its value (from 0 to 1 or from 1 to 0). Since M mutations are performed on any candidate solution, the bits xi are randomly chosen without repetition. In opt-IMMALG, instead, the mutation operator randomly chooses two indexes 1 ≤ i, j ≤ ℓ such that i ≠ j, and replaces xi(t) with a new value according to the following rule:

xi(t+1) = (1 − β) xi(t) + β xj(t)    (7)

where β ∈ [0, 1] is a random number generated with uniform distribution.
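For the real-coded representation, Eqs. 5–7 can be sketched as follows (a simplified illustration; the function names are ours):

```python
import math
import random

def num_mutations(f_norm, rho, length):
    """Eqs. 5 and 6: mutation rate alpha = e^(-rho * f_norm), then
    M = floor(alpha * length) + 1, so at least one mutation is always applied."""
    alpha = math.exp(-rho * f_norm)
    return int(alpha * length) + 1

def hypermutate(x, f_norm, rho):
    """Eq. 7: replace x_i with the convex combination (1 - beta) * x_i + beta * x_j
    for M randomly chosen pairs of distinct indexes i != j."""
    x = list(x)
    for _ in range(num_mutations(f_norm, rho, len(x))):
        i, j = random.sample(range(len(x)), 2)  # two distinct indexes
        beta = random.random()
        x[i] = (1.0 - beta) * x[i] + beta * x[j]
    return x
```

A well-ranked candidate (normalized fitness close to 1) thus receives a single mutation, while a poor one (close to 0) is perturbed on roughly every coordinate.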
Immunological Algorithm(d, dup, ρ, τB, Tmax)
  t ← 0;
  FFE ← 0;
  Nc ← d · dup;
  P(t) ← Initialize_Population(d);
  Compute_Fitness(P(t));
  FFE ← FFE + d;
  while FFE < Tmax do
    P(clo) ← Cloning(P(t), dup);
    P(hyp) ← Hypermutation(P(clo), ρ);
    Compute_Fitness(P(hyp));
    FFE ← FFE + Nc;
    (Pa(t), Pa(hyp)) ← Aging(P(t), P(hyp), τB);
    P(t+1) ← (μ + λ)-Selection(Pa(t), Pa(hyp));
    t ← t + 1;
  end

Algorithm 1: Pseudo-code of the Immunological Algorithm
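A runnable Python transcription of Algorithm 1 might look as follows. This is a simplified sketch, not the authors' implementation: it applies a single convex-combination mutation per clone and omits elitism and the resurrection of dead candidates; all names are ours:

```python
import random

def immunological_algorithm(f, lower, upper, d=100, dup=2, tau_b=15, t_max=150000):
    """Simplified skeleton of Algorithm 1. FFE counts fitness function
    evaluations and is the only termination criterion, as in the paper."""
    n = len(lower)
    pop = [[bl + random.random() * (bu - bl) for bl, bu in zip(lower, upper)]
           for _ in range(d)]
    fit = [f(x) for x in pop]
    age = [0] * d
    ffe = d
    while ffe < t_max:
        # Cloning: dup copies of every candidate, each with a random age in [0, tau_b].
        clones = [list(x) for x in pop for _ in range(dup)]
        clone_age = [random.randint(0, tau_b) for _ in clones]
        # Hypermutation (here reduced to one convex-combination mutation per clone).
        for x in clones:
            i, j = random.sample(range(n), 2)
            beta = random.random()
            x[i] = (1 - beta) * x[i] + beta * x[j]
        clone_fit = [f(x) for x in clones]
        ffe += len(clones)
        # Static aging followed by (mu + lambda)-selection of the d best survivors.
        age = [a + 1 for a in age]
        clone_age = [a + 1 for a in clone_age]
        survivors = [(fx, x, a) for fx, x, a in
                     zip(fit + clone_fit, pop + clones, age + clone_age) if a <= tau_b]
        survivors.sort(key=lambda e: e[0])
        survivors = survivors[:d]
        fit = [e[0] for e in survivors]
        pop = [e[1] for e in survivors]
        age = [e[2] for e in survivors]
    best = min(range(len(pop)), key=lambda i: fit[i])
    return fit[best], pop[best]
```

Note how the evaluation budget is charged once per newly produced clone (Nc = d · dup per generation), matching the FFE bookkeeping in the pseudo-code.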
2.4 Aging operator

The aging operator (step 12 in Algorithm 1) eliminates all old candidate solutions in the populations P(t) and P(hyp). The main goal of this operator is to produce high diversity in the current population and to avoid premature convergence. Each candidate solution is allowed to remain in the population for a fixed number of generations, according to the parameter τB. Hence, τB indicates the maximum number of generations allowed; when a candidate solution is τB + 1 generations old it is discarded from the current population, independently of its objective function value. This kind of operator is called a static aging operator. The algorithm makes only one exception: when generating a new population the selection mechanism always keeps the best candidate solution, i.e. the solution with the best objective function value found so far, even if it is older than τB. This variant is called an elitist aging operator.

2.5 (μ + λ)-Selection operator

After the aging step, the best candidate solutions that have survived are selected to generate the new population P(t+1) of d candidate solutions from the populations Pa(t) and Pa(hyp). If only d1 < d candidate solutions have survived, then the (μ + λ)-Selection operator randomly selects d − d1 candidate solutions among those "dead", i.e. from the set (P(t) \ Pa(t)) ∪ (P(hyp) \ Pa(hyp)). The (μ + λ)-Selection operator, with μ = d and λ = Nc, reduces the offspring population of size λ ≥ μ, created by the cloning and hypermutation operators, to a new parent population of size μ = d. The selection operator identifies the d best elements from the offspring set and the old parent candidate solutions, thus guaranteeing monotonicity in the evolution dynamics. Both algorithms terminate the execution when the number of fitness function evaluations (FFE) is greater than or equal to Tmax, the maximum number of objective function evaluations.
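The interplay of elitist aging and (μ + λ)-selection, including the refill from "dead" candidates when fewer than d survive, can be sketched as follows (the tuple representation (fitness, age, x) and all names are ours):

```python
import random

def aging_and_selection(parents, offspring, tau_b, d):
    """Each candidate is a (fitness, age, x) tuple (our hypothetical representation).
    Static aging discards candidates older than tau_b, except the current best
    (elitism); if fewer than d candidates survive, the population is refilled
    with randomly chosen dead ones, as the (mu + lambda)-Selection operator does."""
    pool = parents + offspring
    best = min(pool, key=lambda c: c[0])           # elitism: the best never dies
    alive = [c for c in pool if c[1] <= tau_b or c is best]
    dead = [c for c in pool if not (c[1] <= tau_b or c is best)]
    alive.sort(key=lambda c: c[0])                 # minimization: lower is better
    survivors = alive[:d]
    if len(survivors) < d:                         # refill from the dead candidates
        survivors += random.sample(dead, d - len(survivors))
    return survivors
```

Selecting the d best from parents plus offspring (rather than from offspring alone) is what makes the best objective value monotonically non-increasing over generations.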
3 Benchmarks and metrics

Before presenting the comparative performance analysis against the state-of-the-art (Sect. 4), we explore some of the features of the two IAs described in this work. We first present the test functions and the experimental protocol used in our tests. We then explore the influence of different mutation schemes on the performance of the IA. Next we show the experimental tuning of some of the parameters of the algorithm. Finally, the dynamics and learning capabilities of both algorithms are explored.

3.1 Test functions and experimental protocol

We have used a large benchmark of test functions belonging to different classes and with different features. Specifically, we combined two benchmarks proposed respectively in Yao et al. [43] (23 functions, shown in Table 1) and in Timmis and Kelsey [40] (12 functions, shown in Table 2). These functions can be divided into two categories of different complexity: unimodal and multimodal (with many and few local optima) functions. Although their complexity increases as the dimension of the space grows, optimizing unimodal functions is not a major issue, so for this kind of function the convergence rate becomes the main interest. Moreover, we have used
another set of functions taken from Cassioli et al. [5], which includes 8 functions with the number of variables n ∈ {10, 20}. The main goal when applying optimization algorithms to these functions is to get a picture of their convergence speed. Multimodal functions are instead characterized by a rugged fitness landscape that is difficult to explore, so the quality of the result obtained by any optimization method is crucial, since it reflects the ability of the algorithm to escape from local optima. This last category of functions represents the most difficult class of problems for many optimization algorithms. Using a very large benchmark is necessary in order to reduce biases and analyze the overall robustness of evolutionary algorithms [24]. We have also tested our IAs using different dimensions: from small (1 variable) to very large values (5000 variables). We use the same experimental protocol proposed in Yao et al. [43]: 50 independent runs were performed for each test function. For all runs we compute both the mean value of the best candidate solutions and the standard deviation. The dimension was fixed as follows: n = 30 for functions f1 to f13; n = 2 for functions f14, f16, f17, f18; n = 4 for functions f15, f19, f21, . . . , f23; and n = 6 for function f20. Finally, for these experiments we used the same stopping criterion, the Tmax value, proposed in Yao et al. [43] and shown in Table 3.

Table 3 Number of objective function evaluations (Tmax) used for each test function of Table 1, as proposed in Yao et al. [43]

Function   Tmax        Function   Tmax        Function   Tmax
f1         150,000     f9         500,000     f17        10,000
f2         200,000     f10        150,000     f18        10,000
f3         500,000     f11        200,000     f19        10,000
f4         500,000     f12        150,000     f20        20,000
f5         2 × 10^6    f13        150,000     f21        10,000
f6         150,000     f14        10,000      f22        10,000
f7         300,000     f15        400,000     f23        10,000
f8         900,000     f16        10,000

3.2 Influence of different mutation potentials

Two different potential mutations (Eqs.
4, 5) are used in opt-IMMALG01 and opt-IMMALG to determine the number of mutations M (Eq. 6). In this section we present a comparison of their relative performances. Table 4 shows, for each function, the mean of the best candidate solutions and the standard deviation over all runs (the best result is highlighted in boldface). These results were obtained using the experimental protocol described previously in Sect. 3.1. Moreover, we fixed for opt-IMMALG01 d ∈ {10, 20}, dup = 2, τB ∈ {5, 10, 15, 20, 50}, while for opt-IMMALG d = 100, dup = 2, τB = 15. For both versions we used ρ in the set {50, 75, 100, 125, 150, 175, 200} for the mutation rate of Eq. 4, and ρ in the set {4, 5, 6, 7, 8, 9, 10, 11} for the mutation rate of Eq. 5. Inspecting the table, it is easy to conclude that the second potential mutation gives overall better performance for both versions of the algorithm. For opt-IMMALG the improvements obtained using the mutation rate of Eq. 5 are more evident than for opt-IMMALG01. In fact, for opt-IMMALG01 the potential mutation of Eq. 4 reaches better solutions on the second class of functions, i.e. the ones with many local optima.
Table 4 Comparison of the results obtained by both versions, opt-IMMALG01 and opt-IMMALG, using the two potential mutations (Eqs. 4, 5). Each entry reports the mean of the best solutions with the standard deviation in parentheses. The best result for each function is highlighted in boldface

       opt-IMMALG01                                              opt-IMMALG
       α = e^(−f̂(x)/ρ)            α = e^(−ρ f̂(x))               α = e^(−f̂(x)/ρ)                α = e^(−ρ f̂(x))
f1     1.7×10^−8 (3.5×10^−15)      9.23×10^−12 (2.44×10^−11)     4.663×10^−19 (7.365×10^−19)    0.0 (0.0)
f2     7.1×10^−8 (0.0)             0.0 (0.0)                     3.220×10^−17 (1.945×10^−17)    0.0 (0.0)
f3     1.9×10^−10 (2.63×10^−10)    0.0 (0.0)                     3.855 (5.755)                  0.0 (0.0)
f4     4.1×10^−2 (5.3×10^−2)       1.0×10^−2 (5.3×10^−3)         8.699×10^−3 (3.922×10^−2)      0.0 (0.0)
f5     28.4 (0.42)                 3.02 (12.2)                   22.32 (11.58)                  16.29 (13.96)
f6     0.0 (0.0)                   0.2 (0.44)                    0.0 (0.0)                      0.0 (0.0)
f7     3.9×10^−3 (1.3×10^−3)       3.0×10^−3 (1.2×10^−3)         1.143×10^−4 (1.411×10^−4)      1.995×10^−5 (2.348×10^−5)
f8     −12568.27 (0.23)            −12508.38 (155.54)            −12559.69 (34.59)              −12535.15 (62.81)
f9     2.66 (2.39)                 19.98 (7.66)                  0.0 (0.0)                      0.596 (4.178)
f10    1.1×10^−4 (3.1×10^−5)       18.98 (0.35)                  1.017×10^−10 (5.307×10^−11)    0.0 (0.0)
f11    4.55×10^−2 (4.46×10^−2)     7.7×10^−2 (8.63×10^−2)        2.066×10^−2 (5.482×10^−2)      0.0 (0.0)
f12    3.1×10^−2 (5.7×10^−2)       0.137 (0.23)                  7.094×10^−21 (5.621×10^−21)    1.770×10^−21 (8.774×10^−24)
f13    3.20 (0.13)                 1.51 (0.1)                    1.122×10^−19 (2.328×10^−19)    1.687×10^−21 (5.370×10^−24)
f14    1.21 (0.54)                 1.02 (7.1×10^−2)              0.999 (7.680×10^−3)            0.998 (1.110×10^−3)
f15    7.7×10^−3 (1.4×10^−2)       7.1×10^−4 (1.3×10^−4)         3.27×10^−4 (3.651×10^−5)       3.2×10^−4 (2.672×10^−5)
f16    −1.02 (1.1×10^−2)           −1.032 (1.5×10^−4)            −1.017 (2.039×10^−2)           −1.013 (2.212×10^−2)
f17    0.450 (0.21)                0.398 (2.0×10^−4)             0.425 (4.987×10^−2)            0.423 (3.217×10^−2)
f18    3.0 (0.0)                   3.0 (0.0)                     6.106 (4.748)                  5.837 (3.742)
f19    −3.72 (1.1×10^−2)           −3.72 (1.1×10^−4)             −3.72 (8.416×10^−3)            −3.72 (7.846×10^−3)
f20    −3.31 (5.9×10^−3)           −3.31 (7.4×10^−2)             −3.293 (3.022×10^−2)           −3.292 (3.097×10^−2)
f21    −5.36 (2.20)                −9.11 (1.82)                  −10.153 (7.710×10^−8)          −10.153 (1.034×10^−7)
f22    −5.34 (2.11)                −9.86 (1.88)                  −10.402 (1.842×10^−6)          −10.402 (1.082×10^−5)
f23    −6.03 (2.66)                −9.96 (1.46)                  −10.536 (7.694×10^−7)          −10.536 (1.165×10^−5)
3.3 The parameters of the immunological algorithms

In this section we present an analysis of the parameter settings that influence the performance of the algorithms. Independently of the experimental protocol, we fixed d ∈ {10, 20}, dup = 2, τB ∈ {5, 10, 15, 20, 50} for opt-IMMALG01 and d = 100, dup = 2, τB = 15 for opt-IMMALG. These values were chosen after a deep investigation of the parameter tuning for each algorithm, not shown in this work (see [8,9,14] for details). In the first set of experiments the values for parameter ρ were fixed as follows: {50, 75, 100, 125, 150, 175, 200} using the mutation rate of Eq. 4 and {4, 5, 6, 7, 8, 9, 10, 11} for the mutation rate of Eq. 5. Since opt-IMMALG presents one more parameter θ than opt-IMMALG01 (see Sect. 2), we first analyzed the best tuning for θ. After several experiments (not shown in this work), the best value found was θ = 75% for both potential mutations. Such a setting allows both algorithms to perform better on 14 functions out of 23. These experiments were made over 50 independent runs. We have also tested opt-IMMALG using different ranges to randomly choose the age of each clone, and we have discovered that choosing the age in the range [0, (2/3)τB] improves its performance. For this new variant of opt-IMMALG we used only the potential mutation from Eq. 5, because it appears to be the best (as shown in Sect. 4). We will call this new version opt-IMMALG∗. After several experiments, the best tuning for opt-IMMALG∗ was: dup = 2, τB = 10, θ = 50%, and d = 1000 for all n ≥ 30, d = 100 otherwise. Next we explored the behaviour of parameter ρ when tackling functions with different dimension values, with the goal of finding the best setting of ρ for each dimension. Figure 1 shows the dynamics of the number of mutations for different dimensions and ρ values.
Using this figure we have fixed ρ as follows: ρ = 3.5 for dimension n = 30; ρ = 4.0 for dimension n = 50; ρ = 6.0 for dimension n = 100; and ρ = 7 for dimension n = 200. For dimensions n = 2 and n = 4 (not shown in the figure) we found the best values to be ρ = 0.8 and ρ = 1.5, respectively. Moreover, we considered functions with very large dimension: n = 1000 and n = 5000. In these cases we have tuned ρ = 9.0 and ρ = 11.5, respectively (see Fig. 2).

Fig. 1 Number of mutations M obtained on several dimensions (n = 30, ρ = 3.5; n = 50, ρ = 4.0; n = 100, ρ = 6.0; n = 200, ρ = 7.0)

Fig. 2 Number of mutations M obtained for high dimension values (n = 1000, ρ = 9.0; n = 5000, ρ = 11.5)

Fig. 3 Potential mutation α (Eq. 5) used by opt-IMMALG∗

From Fig. 2 we can conclude that when the mutation rate is low the corresponding objective values improve, whereas high mutation rates correspond to bad objective function values (which agrees with the behaviour of the B cells in the natural immune system). The inset plot shows a zoom of the mutation rates in the range [0.7, 1]. Finally, Fig. 3 shows the curves produced by the α mutation potential of Eq. 5 at the different ρ values. Also in this figure it is possible to see an inversely proportional behaviour with respect to the normalized objective function: higher α values correspond to worse solutions, whose normalized objective function value is closer to zero; vice versa, lower α values are obtained for good normalized objective function values (i.e. closer to one).
Fig. 4 Evolution curves of the opt-IMMALG01 and opt-IMMALG algorithms on two unimodal functions, f1 (left plot) and f6 (right plot)

Fig. 5 Evolution curves of the opt-IMMALG01 and opt-IMMALG algorithms on two multimodal functions with many local optima, f8 (left plot) and f10 (right plot)

3.4 The convergence and learning processes

Two important features that have an impact on the performance of any optimization algorithm are the convergence speed and the learning ability. In this section we examine the performance of the two versions of the IA according to these two properties. For this purpose we tested the IAs on two functions from each class of Table 1: f1 and f6 for the unimodal functions; f8 and f10 for the multimodal functions with many local optima; and f18 and f21 for the multimodal functions with few local optima. All the results are averaged over 50 independent runs. Figures 4, 5 and 6 show the evolution curves produced by opt-IMMALG01 (labelled as binary) and opt-IMMALG (labelled as real) on the full set of test functions. Inspecting the plots, it is clear that opt-IMMALG presents faster and better quality convergence than opt-IMMALG01 in all instances.
The analysis of the learning process of the algorithm is performed using an entropic function, the information gain. This function measures the quantity of information the system discovers during the learning phase [10,13]. For this purpose we define the candidate solutions distribution function fm(t) as the ratio between the number Bm(t) of candidate solutions at time step t with objective function value m and the total number of candidate
solutions:

fm(t) = Bm(t) / Σ_m Bm(t) = Bm(t) / d.    (8)

It follows that the information gain K(t, t0) and the entropy E(t) can be defined as:

K(t, t0) = Σ_m fm(t) log(fm(t) / fm(t0)),    (9)

E(t) = −Σ_m fm(t) log fm(t).    (10)

Fig. 6 Convergence process of the opt-IMMALG01 and opt-IMMALG algorithms on two multimodal functions with few local optima, f18 (left plot) and f21 (right plot)

The gain is the amount of information the system has already learnt about the given problem instance compared to the randomly generated initial population P(t=0) (the initial distribution). Once the learning process begins, the information gain increases monotonically until it reaches a final steady state (see Fig. 7). This is consistent with the maximum information-gain principle: dK/dt ≥ 0. Figure 7 shows the dynamics of the information gain of opt-IMMALG∗ when applied to the functions f5, f7 and f10. The algorithm quickly gains high information on functions f7 and f10 and reaches a steady state at generation 20. However, more generations are required for function f5, as the information gain starts growing only after generation 22. This behaviour is correlated to the different search space of function f5, whose complexity is higher than the search spaces of functions f7 and f10. This response is consistent with the experimental results: both the opt-IMMALG and opt-IMMALG∗ algorithms require a greater number of objective function evaluations to achieve good solutions (see the experimental protocol in Table 3). The plot in Fig.
8 shows the monotonic behaviour of the information gain for function f5, together with the standard deviation (inset plot); the standard deviation increases quickly (the spike in the inset plot) when the algorithm begins to learn information, then it rapidly decreases towards zero as the algorithm approaches the steady state of the information gain. The algorithm converges to the best solution in this temporal window. Thus, the highest point of information learned corresponds to the lowest value of uncertainty, i.e. of the standard deviation. Finally, Fig. 9 shows the curves of the information gain K(t, t0) and entropy E(t) of opt-IMMALG∗ on the function f5. The inset plot instead shows the average objective function values versus the best objective function value for the first 10 generations on the same function f5; the algorithm quickly descends from solutions of the order of 10^9 to solutions of the order of 10^1 − 1. The best solution for the results presented in Figs. 8 and 9 was 0.0, and the mean of the best solutions was 15.6
(14.07 as standard deviation). The experiments were performed fixing the parameters as follows: d = 100, dup = 2, τB = 15, ρ = 3.5 and Tmax = 5 × 10^5.

Fig. 7 Learning of the problem: information gain curves of the opt-IMMALG∗ algorithm on the functions f5, f7 and f10. Each curve was obtained over 50 independent runs, with the following parameters: d = 100, dup = 2, τB = 15, ρ = 3.5 and Tmax = 5 × 10^5

Fig. 8 Information gain curves of the opt-IMMALG∗ algorithm on function f5. The inset plot shows the standard deviation

3.5 Time-to-target analysis

Time-To-Target plots [2,20] are a way to characterize the running time of stochastic algorithms for solving a given combinatorial optimization problem. They display the probability that an algorithm will find a solution as good as a target within a given running time. Nowadays
they are a standard graphical methodology for data analysis [6] used to compare empirical and theoretical distributions.¹

Fig. 9 Information gain K(t, t0) and entropy E(t) curves of opt-IMMALG∗ on the function f5. The inset plot shows the average objective function values versus the best objective function value for the first 10 generations. All curves are averaged over 50 independent runs with the following parameter setting: d = 100, dup = 2, τB = 15, ρ = 3.5 and Tmax = 5 × 10^5

Aiex et al. [1] present a Perl program (called tttplots.pl) to create time-to-target plots, a useful tool for comparing different stochastic algorithms or, in general, strategies for solving a given problem. The program can be downloaded from http://www2.research.att.com/~mgcr/tttplots/. tttplots.pl produces two kinds of plots: QQ-plots with superimposed variability information, and superimposed empirical and theoretical distributions. Following the example presented in Aiex et al. [1], we ran opt-IMMALG∗ on the first 13 functions of Table 1 (for n = 30) for which the obtained mean is equal to the optimal solution, that is, those with a 100% success rate. For these experiments, of course, the termination criterion was changed accordingly: each run stops on finding the target, i.e. the optimal solution. Moreover, since the larger the number of runs, the closer the empirical distribution is to the theoretical one, we include in this work only the plots produced after 200 runs. For each of the 200 runs (as for all the experiments and results presented in this article) the random number generator is initialized with a distinct seed, that is, each run is independent. Figs.
10, 11 and 12 show the convergence process produced by opt-IMMALG∗ using tttplots.pl on the functions f1, . . . , f6, f9, . . . , f13. The left plots show the comparison between the empirical and theoretical distributions, whilst the right plots display the QQ-plots with variability information. Inspecting the plots, it is possible to see that the empirical and theoretical distributions are often the same, except for function f6, which seems to be the easiest of the given benchmark for opt-IMMALG∗.

¹ For major details on this methodology see [1,2].
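The empirical run-time distributions plotted by tttplots.pl are built by sorting the n measured times to target and pairing the i-th sorted time with the plotting position p_i = (i − 1/2)/n, as in Aiex et al. [1]. A minimal sketch (function and variable names are ours):

```python
def ttt_points(run_times):
    """Empirical time-to-target distribution: pair the i-th smallest measured
    time t_(i) with the cumulative probability p_i = (i - 1/2) / n."""
    times = sorted(run_times)
    n = len(times)
    return [(t, (i + 0.5) / n) for i, t in enumerate(times)]
```

Plotting these points gives the empirical curve, which is then compared against the fitted (shifted exponential) theoretical distribution.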
Fig. 10 Empirical versus theoretical distributions (left plots) and QQ-plots with variability information (right plots). The curves have been obtained for the functions f1, f2, f3 and f4
  • 17. J Glob Optim function5_runs200_dim30 function5_runs200_dim30 1 20 cumulative probability 0.8 19.5 measured times 19 0.6 18.5 18 0.4 17.5 0.2 17 empirical estimated empirical +1 std dev range theoretical -1 std dev range 0 16.5 0 2 4 6 8 10 12 14 16 18 20 0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5 time to target solution exponential quantiles function6_runs200_dim30 function6_runs200_dim30 1 0.23 cumulative probability 0.22 0.8 measured times 0.21 0.6 0.2 0.19 0.4 0.18 0.2 0.17 empirical estimated empirical +1 std dev range theoretical -1 std dev range 0 0.16 0 0.05 0.1 0.15 0.2 0.25 0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5 time to target solution exponential quantiles function9_runs200_dim30 function9_runs200_dim30 1 13.8 13.7 cumulative probability 0.8 13.6 measured times 13.5 0.6 13.4 13.3 0.4 13.2 13.1 0.2 13 empirical 12.9 estimated empirical +1 std dev range theoretical -1 std dev range 0 12.8 0 2 4 6 8 10 12 14 0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5 time to target solution exponential quantiles function10_runs200_dim30 function10_runs200_dim30 1 1.8 1.75 cumulative probability 0.8 1.7 measured times 1.65 0.6 1.6 1.55 0.4 1.5 1.45 0.2 1.4 empirical 1.35 estimated empirical +1 std dev range theoretical -1 std dev range 0 1.3 0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5 time to target solution exponential quantiles Fig. 11 Empirical versus theoretical distributions (left plot) and Q Q-plots with variability information (right plot). The curves have been obtained for the functions: f 5 , f 6 , f 9 and f 10 123
  • 18. J Glob Optim function11_runs200_dim30 function11_runs200_dim30 1 1.4 1.35 0.8 cumulative probability measured times 1.3 0.6 1.25 1.2 0.4 1.15 0.2 1.1 empirical estimated empirical +1 std dev range theoretical -1 std dev range 0 1.05 0 0.2 0.4 0.6 0.8 1 1.2 1.4 0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5 time to target solution exponential quantiles function12_runs200_dim30 function12_runs200_dim30 1 18 17.5 0.8 cumulative probability measured times 17 0.6 16.5 0.4 16 0.2 15.5 empirical estimated empirical +1 std dev range theoretical -1 std dev range 0 15 0 2 4 6 8 10 12 14 16 18 0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5 time to target solution exponential quantiles function13_runs200_dim30 function13_runs200_dim30 1 1.4 1.38 0.8 1.36 cumulative probability measured times 1.34 0.6 1.32 1.3 0.4 1.28 1.26 0.2 1.24 empirical 1.22 estimated empirical +1 std dev range theoretical -1 std dev range 0 1.2 0 0.2 0.4 0.6 0.8 1 1.2 1.4 0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5 time to target solution exponential quantiles Fig. 12 Empirical versus theoretical distributions (left plot) and Q Q-plots with variability information (right plot). The curves have been obtained for the functions: f 11 , f 12 and f 13 123
4 Comparisons and results

In this section we present an exhaustive comparative study of opt-IMMALG01, opt-IMMALG and opt-IMMALG∗ against 39 state-of-the-art optimization algorithms from the literature. Such a large simulation protocol is required to fairly compare the IA to the current best nature-inspired, deterministic and hybrid optimization algorithms, and to demonstrate its ability to outperform many of these techniques.

4.1 IA versus FEP and I-FEP

In the first experiment we compare opt-IMMALG01 and opt-IMMALG with the FEP algorithm (Fast Evolutionary Programming) proposed in Yao et al. [43]. FEP is based on Conventional Evolutionary Programming (CEP [7]) and uses a mutation operator based on Cauchy random numbers that helps the algorithm escape from local optima. The results of this comparison are shown in Table 5. Both opt-IMMALG01 and opt-IMMALG outperform FEP on the majority of the instances. In particular, opt-IMMALG reaches the best values on 16 functions out of 23; 12 using the potential mutation of Eq. 5, and only 5 with Eq. 4. Comparing the two IA versions, we observe that opt-IMMALG, using both potential mutations, outperforms opt-IMMALG01 on 18 out of 23 functions. The best results are obtained using the second potential mutation (Eq. 5). It is important to note that opt-IMMALG outperforms opt-IMMALG01 mainly on the multimodal functions, which reflects its ability to escape from local optima. The analysis presented in Yao et al. [43] shows that Cauchy mutations perform better when the current search point is far away from the global optimum, whilst Gaussian mutations are better when the search points are in the neighbourhood of the global optimum. Based on these observations, the authors of [43] proposed an improved version of FEP. This algorithm, called I-FEP, is based on both Cauchy and Gaussian mutations, and it differs from FEP in the way offspring are created.
Two new offspring are generated as follows: the first using Cauchy mutation and the second using Gaussian mutation; only the best offspring is kept. We therefore compared opt-IMMALG and opt-IMMALG01 also with I-FEP; the results are reported in Table 6. We used functions f1, f2, f10, f11, f21, f22 and f23 from Table 1, and for each function we show the mean of the best candidate solutions averaged over all runs (as proposed in Yao et al. [43]). Inspecting the results, we can infer that both versions of the IA obtain better performance (i.e. better solution quality) than I-FEP on all functions. Finally, since FEP is based on Conventional Evolutionary Programming (CEP), we present in Table 7 a comparison between the two versions of the IA and the CEP algorithm. CEP is based on three different mutation operators (as proposed in Chellapilla [7]): the Gaussian Mutation Operator (GMO), the Cauchy Mutation Operator (CMO), and the Mean Mutation Operator (MMO). For this set of experiments we used the same functions and the same experimental protocol proposed in Chellapilla [7], i.e. Tmax = 2.5 × 10^5 for all functions, except for functions f1 and f10, where Tmax = 1.5 × 10^5 was used. The results obtained by opt-IMMALG01 and opt-IMMALG indicate that both IA versions outperform CEP on most of the instances. Moreover, opt-IMMALG shows an overall better performance than opt-IMMALG01 and CEP.
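The I-FEP offspring rule just described can be sketched as follows; this is a minimal illustration under our own naming (for a minimization problem, with a simple bound clamp as an assumption):

```python
import math
import random

def ifep_offspring(parent, sigma, bounds, f):
    """Sketch of I-FEP offspring creation: generate one Cauchy-mutated and
    one Gaussian-mutated child, and keep only the better (lower f) one."""
    lo, hi = bounds
    clamp = lambda v: max(lo, min(hi, v))
    # Cauchy mutation: heavy tails produce occasional long jumps,
    # useful when the search point is far from the global optimum.
    cauchy = [clamp(x + sigma * math.tan(math.pi * (random.random() - 0.5)))
              for x in parent]
    # Gaussian mutation: small local steps, useful near the optimum.
    gauss = [clamp(x + sigma * random.gauss(0.0, 1.0)) for x in parent]
    return min(cauchy, gauss, key=f)
```

The standard Cauchy variate is obtained here by the inverse-CDF transform tan(π(u − 1/2)) of a uniform u.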
Table 5 Comparison between opt-IMMALG (real-values representation), opt-IMMALG01 (binary-values representation) and FEP (Fast Evolutionary Programming), using the same experimental protocol proposed in Yao et al. [43]

| Function | opt-IMMALG (α = e^(−ρ f̂(x))) | opt-IMMALG01 (α = e^(−ρ f̂(x))) | FEP [43] | opt-IMMALG (α = e^(−f̂(x))/ρ) | opt-IMMALG01 (α = e^(−f̂(x))/ρ) |
|---|---|---|---|---|---|
| f1 | 0.0 (0.0) | 9.23 × 10^−12 (2.44 × 10^−11) | 5.7 × 10^−4 (1.3 × 10^−4) | 4.663 × 10^−19 (7.365 × 10^−19) | 1.7 × 10^−8 (3.5 × 10^−15) |
| f2 | 0.0 (0.0) | 0.0 (0.0) | 8.1 × 10^−3 (7.7 × 10^−4) | 3.220 × 10^−17 (1.945 × 10^−17) | 7.1 × 10^−8 (0.0) |
| f3 | 0.0 (0.0) | 0.0 (0.0) | 1.6 × 10^−2 (1.4 × 10^−2) | 3.855 (5.755) | 1.9 × 10^−10 (2.63 × 10^−10) |
| f4 | 0.0 (0.0) | 1.0 × 10^−2 (5.3 × 10^−3) | 0.3 (0.5) | 8.699 × 10^−3 (3.922 × 10^−2) | 4.1 × 10^−2 (5.3 × 10^−2) |
| f5 | 16.29 (13.96) | 3.02 (12.2) | 5.06 (5.87) | 22.32 (11.58) | 28.4 (0.42) |
| f6 | 0.0 (0.0) | 0.2 (0.44) | 0.0 (0.0) | 0.0 (0.0) | 0.0 (0.0) |
| f7 | 1.995 × 10^−5 (2.348 × 10^−5) | 3.0 × 10^−3 (1.2 × 10^−3) | 7.6 × 10^−3 (2.6 × 10^−3) | 1.143 × 10^−4 (1.411 × 10^−4) | 3.9 × 10^−3 (1.3 × 10^−3) |
| f8 | −12535.15 (62.81) | −12508.38 (155.54) | −12554.5 (52.6) | −12559.69 (34.59) | −12568.27 (0.23) |
| f9 | 0.596 (4.178) | 19.98 (7.66) | 4.6 × 10^−2 (1.2 × 10^−2) | 0.0 (0.0) | 2.66 (2.39) |
| f10 | 0.0 (0.0) | 18.98 (0.35) | 1.8 × 10^−2 (2.1 × 10^−3) | 1.017 × 10^−10 (5.307 × 10^−11) | 1.1 × 10^−4 (3.1 × 10^−5) |
| f11 | 0.0 (0.0) | 7.7 × 10^−2 (8.63 × 10^−2) | 1.6 × 10^−2 (2.2 × 10^−2) | 2.066 × 10^−2 (5.482 × 10^−2) | 4.55 × 10^−2 (4.46 × 10^−2) |
| f12 | 1.770 × 10^−21 (8.774 × 10^−24) | 0.137 (0.23) | 9.2 × 10^−6 (3.6 × 10^−6) | 7.094 × 10^−21 (5.621 × 10^−21) | 3.1 × 10^−2 (5.7 × 10^−2) |
| f13 | 1.687 × 10^−21 (5.370 × 10^−24) | 1.51 (0.10) | 1.6 × 10^−4 (7.3 × 10^−5) | 1.122 × 10^−19 (2.328 × 10^−19) | 3.20 (0.13) |
| f14 | 0.998 (1.110 × 10^−3) | 1.02 (7.1 × 10^−2) | 1.22 (0.56) | 0.999 (7.680 × 10^−3) | 1.21 (0.54) |
| f15 | 3.200 × 10^−4 (2.672 × 10^−5) | 7.1 × 10^−4 (1.3 × 10^−4) | 5.0 × 10^−4 (3.2 × 10^−4) | 3.270 × 10^−4 (3.651 × 10^−5) | 7.7 × 10^−3 (1.4 × 10^−2) |
| f16 | −1.013 (2.212 × 10^−2) | −1.032 (1.5 × 10^−4) | −1.031 (4.9 × 10^−7) | −1.017 (2.039 × 10^−2) | −1.02 (1.1 × 10^−2) |
| f17 | 0.423 (3.217 × 10^−2) | 0.398 (2.0 × 10^−4) | 0.398 (1.5 × 10^−7) | 0.425 (4.987 × 10^−2) | 0.450 (0.21) |
| f18 | 5.837 (3.742) | 3.0 (0.0) | 3.02 (0.11) | 6.106 (4.748) | 3.0 (0.0) |
| f19 | −3.72 (7.846 × 10^−3) | −3.72 (1.1 × 10^−4) | −3.86 (1.4 × 10^−5) | −3.72 (8.416 × 10^−3) | −3.72 (1.1 × 10^−2) |
| f20 | −3.292 (3.097 × 10^−2) | −3.31 (7.4 × 10^−2) | −3.27 (5.9 × 10^−2) | −3.293 (3.022 × 10^−2) | −3.31 (5.9 × 10^−3) |
| f21 | −10.153 (1.034 × 10^−7) | −9.11 (1.82) | −5.52 (1.59) | −10.153 (7.710 × 10^−8) | −5.36 (2.20) |
| f22 | −10.402 (1.082 × 10^−5) | −9.86 (1.88) | −5.52 (2.12) | −10.402 (1.842 × 10^−6) | −5.34 (2.11) |
| f23 | −10.536 (1.165 × 10^−5) | −9.96 (1.46) | −6.57 (3.14) | −10.536 (7.694 × 10^−7) | −6.03 (2.66) |

For opt-IMMALG and opt-IMMALG01 we show the results obtained using both potential mutations. For each algorithm we report the mean of the best candidate solutions over all runs and, in parentheses, the standard deviation. The best results are highlighted in boldface
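The two potential mutations compared throughout these tables map the normalized fitness f̂(x) ∈ [0, 1] of a candidate to a mutation rate α. A minimal sketch follows; the conversion of α into an integer mutation count, ⌊α·L⌋ + 1 for a candidate of length L, is a common clonal-selection convention and is our assumption here, not necessarily the paper's exact rule:

```python
import math

def alpha_eq4(f_norm, rho):
    """Potential mutation of Eq. 4: alpha = e^(-f_hat(x)) / rho."""
    return math.exp(-f_norm) / rho

def alpha_eq5(f_norm, rho):
    """Potential mutation of Eq. 5: alpha = e^(-rho * f_hat(x))."""
    return math.exp(-rho * f_norm)

def mutation_count(alpha, length):
    """Assumed rule: number of mutations M = floor(alpha * L) + 1."""
    return int(alpha * length) + 1
```

Both laws decrease monotonically with f̂(x), so candidates with higher normalized fitness undergo fewer mutations.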
Table 6 Comparison between opt-IMMALG (real-values representation), opt-IMMALG01 (binary-values representation) and I-FEP (Improved Fast Evolutionary Programming), on functions f1, f2, f10, f11, f21, f22 and f23 from Table 1

| Function | opt-IMMALG (α = e^(−ρ f̂(x))) | opt-IMMALG01 (α = e^(−ρ f̂(x))) | I-FEP [43] | opt-IMMALG (α = e^(−f̂(x))/ρ) | opt-IMMALG01 (α = e^(−f̂(x))/ρ) |
|---|---|---|---|---|---|
| f1 | 0.0 | 9.23 × 10^−12 | 4.16 × 10^−5 | 4.663 × 10^−19 | 1.7 × 10^−8 |
| f2 | 0.0 | 0.0 | 2.44 × 10^−2 | 3.220 × 10^−17 | 7.1 × 10^−8 |
| f10 | 0.0 | 18.98 | 4.83 × 10^−3 | 1.017 × 10^−10 | 1.1 × 10^−4 |
| f11 | 0.0 | 7.7 × 10^−2 | 4.54 × 10^−2 | 2.066 × 10^−2 | 4.55 × 10^−2 |
| f21 | −10.153 | −9.11 | −6.46 | −10.153 | −5.36 |
| f22 | −10.402 | −9.86 | −7.10 | −10.402 | −5.34 |
| f23 | −10.536 | −9.96 | −7.80 | −10.536 | −6.03 |

For each algorithm we report the mean of the best candidate solutions averaged over all runs. The best results are highlighted in boldface

Table 7 Comparison between opt-IMMALG (real-values representation), opt-IMMALG01 (binary-values representation) and the version of CEP (Conventional Evolutionary Programming) based on three different mutation operators [7]: GMO (Gaussian Mutation Operator), CMO (Cauchy Mutation Operator), and MMO (Mean Mutation Operator)

| Function | opt-IMMALG (α = e^(−ρ f̂(x))) | opt-IMMALG01 (α = e^(−ρ f̂(x))) | GMO | CMO | MMO | opt-IMMALG (α = e^(−f̂(x))/ρ) | opt-IMMALG01 (α = e^(−f̂(x))/ρ) |
|---|---|---|---|---|---|---|---|
| f1 | 0.0 | 9.23 × 10^−12 | 3.09 × 10^−7 | 3.07 × 10^−7 | 9.81 × 10^−7 | 4.663 × 10^−19 | 1.7 × 10^−8 |
| f2 | 0.0 | 0.0 | 1.99 × 10^−3 | 5.87 × 10^−3 | 3.23 × 10^−3 | 3.220 × 10^−17 | 7.1 × 10^−8 |
| f3 | 0.0 | 0.0 | 17.60 | 5.78 | 11.80 | 3.855 | 1.9 × 10^−10 |
| f4 | 0.0 | 1.0 × 10^−2 | 5.18 | 0.66 | 1.88 | 8.699 × 10^−3 | 4.1 × 10^−2 |
| f5 | 16.29 | 3.02 | 86.70 | 114.0 | 63.8 | 22.32 | 28.4 |
| f7 | 1.995 × 10^−5 | 3.0 × 10^−3 | 12.20 | 9.42 | 9.53 | 1.143 × 10^−4 | 3.9 × 10^−3 |
| f9 | 0.596 | 19.98 | 120.0 | 4.73 | 9.52 | 0.0 | 2.66 |
| f10 | 0.0 | 18.98 | 9.10 | 1.3 × 10^−3 | 7.49 × 10^−4 | 1.017 × 10^−10 | 1.1 × 10^−4 |
| f11 | 0.0 | 7.7 × 10^−2 | 2.52 × 10^−7 | 2.2 × 10^−6 | 6.99 × 10^−7 | 2.066 × 10^−2 | 4.55 × 10^−2 |

For each algorithm the mean of the best candidate solutions over all runs is presented.
The best results are highlighted in boldface.

4.2 IA versus DIRECT, PSO and EO

Next we compared opt-IMMALG01 and opt-IMMALG with two other well-known biologically inspired algorithms: Particle Swarm Optimization (PSO) and Evolutionary Optimization (EO) [3]. For this set of experiments we used functions f1, f5, f9 and f11, as proposed in Angeline [3], and we fixed the maximum number of objective function evaluations to Tmax = 2.5 × 10^5. The results presented in Table 8 strongly demonstrate the superior performance of opt-IMMALG and opt-IMMALG01, both in terms of convergence and quality of the solutions.

Table 8 Comparison between opt-IMMALG (real-values representation), opt-IMMALG01 (binary-values representation), PSO (particle swarm optimization), and EO (Evolutionary Optimization) [3]

| Function | opt-IMMALG (α = e^(−ρ f̂(x))) | opt-IMMALG01 (α = e^(−ρ f̂(x))) | PSO [3] | EO [3] | opt-IMMALG (α = e^(−f̂(x))/ρ) | opt-IMMALG01 (α = e^(−f̂(x))/ρ) |
|---|---|---|---|---|---|---|
| f1 | 0.0 (0.0) | 9.23 × 10^−12 (2.44 × 10^−11) | 11.75 (1.3208) | 9.8808 (0.9444) | 4.663 × 10^−19 (7.365 × 10^−19) | 1.7 × 10^−8 (3.5 × 10^−15) |
| f5 | 16.29 (13.96) | 3.02 (12.2) | 1911.598 (374.2935) | 1610.39 (293.5783) | 22.32 (11.58) | 28.4 (0.42) |
| f9 | 0.596 (4.178) | 19.98 (7.66) | 47.1354 (1.8782) | 46.4689 (2.4545) | 0.0 (0.0) | 2.66 (2.39) |
| f11 | 0.0 (0.0) | 7.7 × 10^−2 (8.63 × 10^−2) | 0.4498 (0.0566) | 0.4033 (0.0436) | 2.066 × 10^−2 (5.482 × 10^−2) | 4.55 × 10^−2 (4.46 × 10^−2) |

For each algorithm we report the mean of the best candidate solutions over all runs and, in parentheses, the standard deviation. The best results are highlighted in boldface

Table 9 presents the comparison between both versions of the IA and DIRECT [21,28], a deterministic global search algorithm for bound-constrained optimization based on Lipschitz constant estimation. Since the results of DIRECT are not available for all functions of Table 1, we used only the subset {f5, f7, f8, f12, …, f23}.

Table 9 Comparison between opt-IMMALG (real-values representation), opt-IMMALG01 (binary-values representation) and DIRECT, a deterministic global search algorithm for bound-constrained optimization based on Lipschitz constant estimation [21,28]

| Function | opt-IMMALG (α = e^(−ρ f̂(x))) | opt-IMMALG01 (α = e^(−ρ f̂(x))) | DIRECT [21,28] | opt-IMMALG (α = e^(−f̂(x))/ρ) | opt-IMMALG01 (α = e^(−f̂(x))/ρ) |
|---|---|---|---|---|---|
| f5 | 16.29 | 3.02 | 27.89 | 22.32 | 28.4 |
| f7 | 1.995 × 10^−5 | 3.0 × 10^−3 | 8.9 × 10^−3 | 1.143 × 10^−4 | 3.9 × 10^−3 |
| f8 | −12535.15 | −12508.38 | −4093.0 | −12559.69 | −12568.27 |
| f12 | 1.770 × 10^−21 | 0.137 | 0.03 | 7.094 × 10^−21 | 3.1 × 10^−2 |
| f13 | 1.687 × 10^−21 | 1.51 | 0.96 | 1.122 × 10^−19 | 3.20 |
| f14 | 0.998 | 1.02 | 1.0 | 0.999 | 1.21 |
| f15 | 3.2 × 10^−4 | 7.1 × 10^−4 | 1.2 × 10^−3 | 3.27 × 10^−4 | 7.7 × 10^−3 |
| f16 | −1.013 | −1.032 | −1.031 | −1.017 | −1.02 |
| f17 | 0.423 | 0.398 | 0.398 | 0.425 | 0.450 |
| f18 | 5.837 | 3.0 | 3.01 | 6.106 | 3.0 |
| f19 | −3.72 | −3.72 | −3.86 | −3.72 | −3.72 |
| f20 | −3.292 | −3.31 | −3.30 | −3.293 | −3.31 |
| f21 | −10.153 | −9.11 | −6.84 | −10.153 | −5.36 |
| f22 | −10.402 | −9.86 | −7.09 | −10.402 | −5.34 |
| f23 | −10.536 | −9.96 | −7.22 | −10.536 | −6.03 |

For each algorithm we report the mean of the best candidate solutions averaged over all runs. The best results are highlighted in boldface

The reason why some functions could not be tested with DIRECT is that their optimum lies at the centre of the variable bounds, which is exactly the point from which DIRECT starts its search. For these tests we used the same values of Tmax shown in Table 3 (Sect. 3.1). Inspecting the results in the table, we can claim that, except for function f19, both opt-IMMALG and opt-IMMALG01 again show superior performance, in particular in the presence of rugged landscapes (multimodal functions).
4.3 IA versus CLONALG and BCA

We compared opt-IMMALG01 and opt-IMMALG with two well-known immunological inspired algorithms, both based on the clonal selection principle: CLONALG [19] and BCA [40]. CLONALG is characterized by two populations: a population of antigens Ag and a population of antibodies Ab. Each antibody Ab and antigen Ag is represented by a string of attributes m = m_L, …, m_1, that is, a point in an L-dimensional shape space S, with m ∈ S^L. Two different strategies were adopted for CLONALG, labelled CLONALG1 and CLONALG2 [9], based on different selection schemes: in CLONALG1 each Ab at time step t is replaced in the new generation (time step t + 1) by its best mutated clone; whilst in CLONALG2 the new population for generation t + 1 is produced by the n best Ab's among the mutated clones at time step t (n is the population size). Both schemes of CLONALG were run with the same potential mutations, given by Eqs. 4 and 5. Also for these experiments we used the same values of Tmax shown in Table 3. Table 10 presents the comparative analysis between both versions of the IA and CLONALG [19]: opt-IMMALG01, opt-IMMALG, CLONALG1, and CLONALG2. The potential mutation of Eq. 4 was used for all four algorithms. The table shows the mean of the best candidate solutions over all runs and the standard deviation. All the results presented for the two versions of CLONALG were previously reported in Cutello et al. [9]. The results indicate that opt-IMMALG outperforms both versions of CLONALG on all classes of functions, except for functions f11, f16 and f17. If we compare the algorithms only on the multimodal functions with many local optima (f8–f13), we can claim that opt-IMMALG reaches the best solutions more easily than the other clonal selection algorithm, CLONALG.
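The difference between the two CLONALG selection schemes can be sketched as follows (a minimal illustration under our own naming; fitness f is minimized):

```python
def clonalg1_selection(clones_per_ab, f):
    """CLONALG1: each antibody is replaced by the best of its own mutated
    clones, preserving a one-to-one parent/offspring correspondence."""
    return [min(clones, key=f) for clones in clones_per_ab]

def clonalg2_selection(clones_per_ab, f, n):
    """CLONALG2: the next population consists of the n best antibodies among
    all mutated clones, regardless of which parent produced them."""
    pool = [clone for clones in clones_per_ab for clone in clones]
    return sorted(pool, key=f)[:n]
```

CLONALG1 keeps diversity by giving every lineage a survivor, while CLONALG2 applies stronger selection pressure across the whole clone pool.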
Table 11 presents the same comparison between the IAs and CLONALG, this time using the potential mutation of Eq. 5. The results show that both opt-IMMALG and opt-IMMALG01 again outperform both versions of CLONALG, in particular on the unimodal and multimodal (with many local optima) classes. Since the experimental results so far have demonstrated that opt-IMMALG is the superior implementation of the two IAs, we next compare only this version with another immunological inspired optimization algorithm, BCA [40], and a Hybrid Genetic Algorithm (HGA). For this comparison we used the functions listed in Table 2 and we set the Tmax value as proposed in Timmis and Kelsey [40]; 50 independent runs were performed. Table 12 compares these three algorithms: opt-IMMALG outperforms both BCA and HGA on 8 out of 12 test functions. The results for functions g7, g8, g11 and g12 are particularly significant.

4.4 IA versus PSO, SEA, and RCMA

Using a different experimental protocol, we compared opt-IMMALG, using both potential mutations (Eqs. 4, 5), with other evolutionary algorithms proposed in Versterstrøm and Thomsen [42]: Particle Swarm Optimization (PSO) and a Simple Evolutionary Algorithm (SEA). In addition to the classical PSO, the authors of [42] proposed the attractive and repulsive PSO (arPSO), which uses a modified scheme to avoid premature convergence. We performed the comparisons on all functions from Table 1, except functions f19 and f20, following Versterstrøm and Thomsen [42]. For each experiment, the maximum number of objective function evaluations (Tmax) was fixed to 5 × 10^5 for dimensions ≤ 30, and we performed 30 independent runs for each instance. For functions f1–f13 the comparison was also performed using 100 dimensions. In this case, Tmax
Table 10 Comparison between opt-IMMALG (real-values representation), opt-IMMALG01 (binary-values representation) and the two versions of CLONALG [9,19], using potential mutation 4 (α = e^(−f̂(x))/ρ)

| Function | opt-IMMALG | opt-IMMALG01 | CLONALG1 [9,19] | CLONALG2 [9,19] |
|---|---|---|---|---|
| f1 | 4.663 × 10^−19 (7.365 × 10^−19) | 1.7 × 10^−8 (3.5 × 10^−15) | 3.7 × 10^−3 (2.6 × 10^−3) | 5.5 × 10^−4 (2.4 × 10^−4) |
| f2 | 3.220 × 10^−17 (1.945 × 10^−17) | 7.1 × 10^−8 (0.0) | 2.9 × 10^−3 (6.6 × 10^−4) | 2.7 × 10^−3 (7.1 × 10^−4) |
| f3 | 3.855 (5.755) | 1.9 × 10^−10 (2.63 × 10^−10) | 1.5 × 10^+4 (1.8 × 10^+3) | 5.9 × 10^+3 (1.8 × 10^+3) |
| f4 | 8.699 × 10^−3 (3.922 × 10^−2) | 4.1 × 10^−2 (5.3 × 10^−2) | 4.91 (1.11) | 8.7 × 10^−3 (2.1 × 10^−3) |
| f5 | 22.32 (11.58) | 28.4 (0.42) | 27.6 (1.034) | 2.35 × 10^+2 (4.4 × 10^+2) |
| f6 | 0.0 (0.0) | 0.0 (0.0) | 2.0 × 10^−2 (1.4 × 10^−1) | 0.0 (0.0) |
| f7 | 1.143 × 10^−4 (1.411 × 10^−4) | 3.9 × 10^−3 (1.3 × 10^−3) | 7.8 × 10^−2 (1.9 × 10^−2) | 5.3 × 10^−3 (1.4 × 10^−3) |
| f8 | −12559.69 (34.59) | −12568.27 (0.23) | −11044.69 (186.73) | −12533.86 (43.08) |
| f9 | 0.0 (0.0) | 2.66 (2.39) | 37.56 (4.88) | 22.41 (6.70) |
| f10 | 1.017 × 10^−10 (5.307 × 10^−11) | 1.1 × 10^−4 (3.1 × 10^−5) | 1.57 (3.9 × 10^−1) | 1.2 × 10^−1 (4.1 × 10^−1) |
| f11 | 2.066 × 10^−2 (5.482 × 10^−2) | 4.55 × 10^−2 (4.46 × 10^−2) | 1.7 × 10^−2 (1.9 × 10^−2) | 4.6 × 10^−2 (7.0 × 10^−2) |
| f12 | 7.094 × 10^−21 (5.621 × 10^−21) | 3.1 × 10^−2 (5.7 × 10^−2) | 0.336 (9.4 × 10^−2) | 0.573 (2.6 × 10^−1) |
| f13 | 1.122 × 10^−19 (2.328 × 10^−19) | 3.20 (0.13) | 1.39 (1.8 × 10^−1) | 1.69 (2.4 × 10^−1) |
| f14 | 0.999 (7.680 × 10^−3) | 1.21 (0.54) | 1.0021 (2.8 × 10^−2) | 2.42 (2.60) |
| f15 | 3.270 × 10^−4 (3.651 × 10^−5) | 7.7 × 10^−3 (1.4 × 10^−2) | 1.5 × 10^−3 (7.8 × 10^−4) | 7.2 × 10^−3 (8.1 × 10^−3) |
| f16 | −1.017 (2.039 × 10^−2) | −1.02 (1.1 × 10^−2) | −1.0314 (5.7 × 10^−4) | −1.0210 (1.9 × 10^−2) |
| f17 | 0.425 (4.987 × 10^−2) | 0.450 (0.21) | 0.399 (2.0 × 10^−3) | 0.422 (2.7 × 10^−2) |
| f18 | 6.106 (4.748) | 3.0 (0.0) | 3.0 (1.3 × 10^−5) | 3.46 (3.28) |
| f19 | −3.72 (8.416 × 10^−3) | −3.72 (1.1 × 10^−2) | −3.71 (1.5 × 10^−2) | −3.68 (6.9 × 10^−2) |
| f20 | −3.293 (3.022 × 10^−2) | −3.31 (5.9 × 10^−3) | −3.23 (5.9 × 10^−2) | −3.18 (1.2 × 10^−1) |
| f21 | −10.153 (7.710 × 10^−8) | −5.36 (2.20) | −5.92 (1.77) | −3.98 (2.73) |
| f22 | −10.402 (1.842 × 10^−6) | −5.34 (2.11) | −5.90 (2.09) | −4.66 (2.55) |
| f23 | −10.536 (7.694 × 10^−7) | −6.03 (2.66) | −5.98 (1.98) | −4.38 (2.66) |

Each entry reports the mean of the best candidate solutions over all runs and, in parentheses, the standard deviation. The best results are highlighted in boldface
Table 11 Comparison between opt-IMMALG (real-values representation), opt-IMMALG01 (binary-values representation) and the two versions of CLONALG [9,19], using potential mutation 5 (α = e^(−ρ f̂(x)))

| Function | opt-IMMALG | opt-IMMALG01 | CLONALG1 [9,19] | CLONALG2 [9,19] |
|---|---|---|---|---|
| f1 | 0.0 (0.0) | 9.23 × 10^−12 (2.44 × 10^−11) | 9.6 × 10^−4 (1.6 × 10^−3) | 3.2 × 10^−6 (1.5 × 10^−6) |
| f2 | 0.0 (0.0) | 0.0 (0.0) | 7.7 × 10^−5 (2.5 × 10^−5) | 1.2 × 10^−4 (2.1 × 10^−5) |
| f3 | 0.0 (0.0) | 0.0 (0.0) | 2.2 × 10^4 (1.3 × 10^−4) | 2.4 × 10^+4 (5.7 × 10^+3) |
| f4 | 0.0 (0.0) | 1.0 × 10^−2 (5.3 × 10^−3) | 9.44 (1.98) | 5.9 × 10^−4 (3.5 × 10^−4) |
| f5 | 16.29 (13.96) | 3.02 (12.2) | 31.07 (13.48) | 4.67 × 10^+2 (6.3 × 10^+2) |
| f6 | 0.0 (0.0) | 0.2 (0.44) | 0.52 (0.49) | 0.0 (0.0) |
| f7 | 1.995 × 10^−5 (2.348 × 10^−5) | 3.0 × 10^−3 (1.2 × 10^−3) | 1.3 × 10^−1 (3.5 × 10^−2) | 4.6 × 10^−3 (1.6 × 10^−3) |
| f8 | −12535.15 (62.81) | −12508.38 (155.54) | −11099.56 (112.05) | −1228.39 (41.08) |
| f9 | 0.596 (4.178) | 19.98 (7.66) | 42.93 (3.05) | 21.75 (5.03) |
| f10 | 0.0 (0.0) | 18.98 (0.35) | 18.96 (2.2 × 10^−1) | 19.30 (1.9 × 10^−1) |
| f11 | 0.0 (0.0) | 7.7 × 10^−2 (8.63 × 10^−2) | 3.6 × 10^−2 (3.5 × 10^−2) | 9.4 × 10^−2 (1.4 × 10^−1) |
| f12 | 1.770 × 10^−21 (8.774 × 10^−24) | 0.137 (0.23) | 0.632 (2.2 × 10^−1) | 0.738 (5.3 × 10^−1) |
| f13 | 1.687 × 10^−21 (5.370 × 10^−24) | 1.51 (0.10) | 1.83 (2.7 × 10^−1) | 1.84 (2.7 × 10^−1) |
| f14 | 0.998 (1.110 × 10^−3) | 1.02 (7.1 × 10^−2) | 1.0062 (4.0 × 10^−2) | 1.45 (0.95) |
| f15 | 3.2 × 10^−4 (2.672 × 10^−5) | 7.1 × 10^−4 (1.3 × 10^−4) | 1.4 × 10^−3 (5.4 × 10^−4) | 8.3 × 10^−3 (8.5 × 10^−3) |
| f16 | −1.013 (2.212 × 10^−2) | −1.032 (1.5 × 10^−4) | −1.0315 (1.8 × 10^−4) | −1.0202 (1.8 × 10^−2) |
| f17 | 0.423 (3.217 × 10^−2) | 0.398 (2.0 × 10^−4) | 0.401 (8.8 × 10^−3) | 0.462 (2.0 × 10^−1) |
| f18 | 5.837 (3.742) | 3.0 (0.0) | 3.0 (1.3 × 10^−7) | 3.54 (3.78) |
| f19 | −3.72 (7.846 × 10^−3) | −3.72 (1.1 × 10^−4) | −3.71 (1.1 × 10^−2) | −3.67 (6.6 × 10^−2) |
| f20 | −3.292 (3.097 × 10^−2) | −3.31 (7.4 × 10^−2) | −3.30 (1.0 × 10^−2) | −3.21 (8.6 × 10^−2) |
| f21 | −10.153 (1.034 × 10^−7) | −9.11 (1.82) | −7.59 (1.89) | −5.21 (1.78) |
| f22 | −10.402 (1.082 × 10^−5) | −9.86 (1.88) | −8.41 (1.4) | −7.31 (2.67) |
| f23 | −10.536 (1.165 × 10^−5) | −9.96 (1.46) | −8.48 (1.51) | −7.12 (2.48) |

Each entry indicates the mean of the best candidate solutions over all runs and, in parentheses, the standard deviation. The best results are highlighted in boldface
was set to 5 × 10^6. For each instance we report the mean of the best solutions averaged over all runs and the standard deviation.

Table 12 Comparison between opt-IMMALG, BCA and HGA [40]

| Function | opt-IMMALG (α = e^(−f̂(x))/ρ) | opt-IMMALG (α = e^(−ρ f̂(x))) | BCA [40] | HGA [40] |
|---|---|---|---|---|
| g1 | −1.12 ± 1.17 × 10^−3 | −1.12 ± 1.62 × 10^−3 | −1.08 | −1.12 |
| g2 | −1.03 ± 8.82 × 10^−4 | −1.03 ± 7.129 × 10^−4 | −1.03 | −0.99 |
| g3 | −12.03 ± 8.196 × 10^−4 | −12.03 ± 9.28 × 10^−4 | −12.03 | −12.03 |
| g4 | 0.3984 ± 6.73 × 10^−4 | 0.3985 ± 8.859 × 10^−4 | 0.40 | 0.40 |
| g5 | −178.51 ± 11.49 | −178.88 ± 9.83 | −186.73 | −186.73 |
| g6 | −179.27 ± 11.498 | −179.12 ± 10.02 | −186.73 | −186.73 |
| g7 | −2.529 ± 0.2026 | −2.571 ± 0.253 | 0.92 | 0.92 |
| g8 | 1.314 × 10^−12 ± 4.668 × 10^−12 | 1.314 × 10^−12 ± 4.668 × 10^−12 | 1.0 | 1.0 |
| g9 | −3.51 ± 1.464 × 10^−3 | −0.351 ± 1.62 × 10^−3 | −0.91 | −0.99 |
| g10 | −186.67 ± 8.17 × 10^−2 | −186.65 ± 0.1158 | −186.73 | −186 |
| g11 | 3.81 × 10^−5 ± 5.58 × 10^−15 | 3.81 × 10^−5 ± 6.98 × 10^−14 | 0.04 | 0.04 |
| g12 | 0.0 ± 0.0 | 0.0 ± 0.0 | 1 | 1 |

The functions used are listed in Table 2. For opt-IMMALG we show the mean of the best candidate solutions over all runs and the standard deviation (mean ± sd). The best results are highlighted in boldface

Table 13 presents the comparison between opt-IMMALG, PSO (particle swarm optimization), arPSO (attractive and repulsive particle swarm optimization) and SEA (simple evolutionary algorithm), obtained using n = 30 dimensions. The results indicate a better performance of opt-IMMALG with respect to the above-cited algorithms, which it outperforms on the majority of the functions. In Table 14 we show the same comparison with n = 100 dimensions. Again, opt-IMMALG outperforms SEA, PSO and its modification on all functions. From these experiments we can therefore claim that opt-IMMALG tackles high-dimensional functions better than these evolutionary algorithms.
Recent developments in the evolutionary algorithms field have shown that, in order to tackle complex search spaces, pure genetic algorithms (GAs) need to use local search operators and specialized crossover [25]. Such algorithms are called Memetic Algorithms (MAs) [26]. Table 15 shows the comparison of opt-IMMALG with several real-coded memetic algorithms (RCMAs) [30,32]: the CHC algorithm, Generalized Generation Gap (G3-1), hybrid steady-state RCMA (SW-100), Family Competition (FC) and RCMA with crossover Hill Climbing (RCMA-XHC). Detailed descriptions of these algorithms can be found in Lozano et al. [30], whilst the reported results were extracted from Noman and Iba [32]. These experiments were performed using n = 25 dimensions, a maximum of Tmax = 10^5 objective function evaluations and 30 independent runs. For this comparison we used the potential mutation of Eq. 5. As proposed in Noman and Iba [32], the tests were performed only on functions f5, f9 and f11. Looking at the results reported in the table, it is clear that opt-IMMALG outperforms all RCMAs on f9 and f11. Although RCMA-XHC obtains the best result on the last function, f5, the proposed IA presents notably better results than the other RCMAs.

4.5 opt-IMMALG versus opt-IMMALG∗

The analysis of the experiments reported so far has shown that opt-IMMALG, using the second potential mutation (Eq. 5), performs better in terms of solution quality and ability to escape from local optima. While performing the parameter tuning of the algorithm we
Table 13 Comparison between opt-IMMALG, PSO (particle swarm optimization), arPSO (attractive and repulsive particle swarm optimization) and SEA (simple evolutionary algorithm) [42], using 30 dimensions

| Function | opt-IMMALG (α = e^(−f̂(x))/ρ) | opt-IMMALG (α = e^(−ρ f̂(x))) | PSO [42] | arPSO [42] | SEA [42] |
|---|---|---|---|---|---|
| f1 | 0.0 (0.0) | 0.0 (0.0) | 0.0 (0.0) | 6.8 × 10^−13 (5.3 × 10^−13) | 1.79 × 10^−3 (2.77 × 10^−4) |
| f2 | 0.0 (0.0) | 0.0 (0.0) | 0.0 (0.0) | 2.09 × 10^−2 (1.48 × 10^−1) | 1.72 × 10^−2 (1.7 × 10^−3) |
| f3 | 0.0 (0.0) | 0.0 (0.0) | 0.0 (0.0) | 0.0 (2.13 × 10^−25) | 1.59 × 10^−2 (4.25 × 10^−3) |
| f4 | 5.6 × 10^−4 (2.18 × 10^−3) | 0.0 (0.0) | 2.11 × 10^−16 (8.01 × 10^−16) | 1.42 × 10^−5 (8.27 × 10^−6) | 1.98 × 10^−2 (2.07 × 10^−3) |
| f5 | 21.16 (11.395) | 12 (13.22) | 4.026 (4.99) | 3.55 × 10^+2 (2.15 × 10^+3) | 31.32 (17.4) |
| f6 | 0.0 (0.0) | 0.0 (0.0) | 4 × 10^−2 (1.98 × 10^−1) | 18.98 (63) | 0.0 (0.0) |
| f7 | 3.7 × 10^−5 (5.62 × 10^−5) | 1.52 × 10^−5 (2.05 × 10^−5) | 1.91 × 10^−3 (1.14 × 10^−3) | 3.89 × 10^−4 (4.78 × 10^−4) | 7.11 × 10^−4 (3.27 × 10^−4) |
| f8 | −1.257 × 10^+4 (8.369) | −1.256 × 10^+4 (25.912) | −7.187 × 10^+3 (6.72 × 10^+2) | −8.598 × 10^+3 (2.07 × 10^+3) | −1.167 × 10^+4 (2.34 × 10^+2) |
| f9 | 0.0 (0.0) | 0.0 (0.0) | 49.17 (16.2) | 2.15 (4.91) | 7.18 × 10^−1 (9.22 × 10^−1) |
| f10 | 4.74 × 10^−16 (1.21 × 10^−15) | 0.0 (0.0) | 1.4 (7.91 × 10^−1) | 1.84 × 10^−7 (7.15 × 10^−8) | 1.05 × 10^−2 (9.08 × 10^−4) |
| f11 | 0.0 (0.0) | 0.0 (0.0) | 2.35 × 10^−2 (3.54 × 10^−2) | 9.23 × 10^−2 (3.41 × 10^−1) | 4.64 × 10^−3 (3.96 × 10^−3) |
| f12 | 1.787 × 10^−21 (5.06 × 10^−23) | 1.77 × 10^−21 (7.21 × 10^−24) | 3.819 × 10^−1 (8.4 × 10^−1) | 8.559 × 10^−3 (4.79 × 10^−2) | 4.56 × 10^−6 (8.11 × 10^−7) |
| f13 | 1.702 × 10^−21 (4.0628 × 10^−23) | 1.686 × 10^−21 (1.149 × 10^−24) | −5.969 × 10^−1 (5.17 × 10^−1) | −9.626 × 10^−1 (5.14 × 10^−1) | −1.143 (1.34 × 10^−5) |
| f14 | 9.98 × 10^−1 (5.328 × 10^−4) | 9.98 × 10^−1 (2.719 × 10^−4) | 1.157 (3.68 × 10^−1) | 9.98 × 10^−1 (2.13 × 10^−8) | 9.98 × 10^−1 (4.33 × 10^−8) |
| f15 | 3.26 × 10^−4 (3.64 × 10^−5) | 3.215 × 10^−4 (2.56 × 10^−5) | 1.338 × 10^−3 (3.94 × 10^−3) | 1.248 × 10^−3 (3.96 × 10^−3) | 3.704 × 10^−4 (8.78 × 10^−5) |
| f16 | −1.023 (1.52 × 10^−2) | −1.017 (3.625 × 10^−2) | −1.032 (3.84 × 10^−8) | −1.032 (3.84 × 10^−8) | −1.032 (3.16 × 10^−8) |
| f17 | 4.19 × 10^−1 (2.9 × 10^−2) | 4.2 × 10^−1 (3.5158 × 10^−2) | 3.98 × 10^−1 (5.01 × 10^−9) | 3.98 × 10^−1 (5.01 × 10^−9) | 3.98 × 10^−1 (2.20 × 10^−8) |
| f18 | 4.973 (2.9366) | 5.371 (3.0449) | 3.0 (0.0) | 3.516 (3.65) | 3.0 (0.0) |
| f21 | −10.15 (1.81 × 10^−6) | −10.15 (1.018 × 10^−7) | −5.4 (3.40) | −8.18 (2.60) | −8.41 (3.16) |
| f22 | −10.4 (1.19 × 10^−6) | −10.4 (9.3 × 10^−6) | −6.946 (3.70) | −8.435 (2.83) | −8.9125 (2.86) |
| f23 | −10.54 (6.788 × 10^−7) | −10.54 (7.29 × 10^−6) | −6.71 (3.77) | −8.616 (2.88) | −9.8 (2.24) |

For opt-IMMALG we show the results obtained using both potential mutations (Eqs. 4, 5). For all algorithms we report the mean of the best candidate solutions over all runs and, in parentheses, the standard deviation. The best results are highlighted in boldface. Results have been averaged over 30 independent runs and Tmax = 5 × 10^5
Table 14 Comparison between opt-IMMALG, PSO (particle swarm optimization), arPSO (attractive and repulsive particle swarm optimization) and SEA (simple evolutionary algorithm) [42], using 100 dimensions

| Function | opt-IMMALG (α = e^(−f̂(x))/ρ) | opt-IMMALG (α = e^(−ρ f̂(x))) | PSO [42] | arPSO [42] | SEA [42] |
|---|---|---|---|---|---|
| f1 | 0.0 (0.0) | 0.0 (0.0) | 0.0 (0.0) | 7.4869 × 10^+2 (2.31 × 10^+3) | 5.229 × 10^−4 (5.18 × 10^−5) |
| f2 | 0.0 (0.0) | 0.0 (0.0) | 1.804 × 10^+1 (6.52 × 10^+1) | 3.9637 × 10^+1 (2.45 × 10^+1) | 1.737 × 10^−2 (9.43 × 10^−4) |
| f3 | 0.0 (0.0) | 0.0 (0.0) | 3.666 × 10^+3 (6.94 × 10^+3) | 1.817 × 10^+1 (2.50 × 10^+1) | 3.68 × 10^−2 (6.06 × 10^−3) |
| f4 | 7.32 × 10^−4 (2.109 × 10^−3) | 6.447 × 10^−7 (3.338 × 10^−6) | 5.312 (8.63 × 10^−1) | 2.4367 (3.80 × 10^−1) | 7.6708 × 10^−3 (5.71 × 10^−4) |
| f5 | 97.02 (54.73) | 74.99 (38.99) | 2.02 × 10^+2 (7.66 × 10^+2) | 2.36 × 10^+2 (1.25 × 10^+2) | 9.249 × 10^+1 (1.29 × 10^+1) |
| f6 | 0.0 (0.0) | 0.0 (0.0) | 2.1 (3.52) | 4.118 × 10^+2 (4.21 × 10^+2) | 0.0 (0.0) |
| f7 | 1.763 × 10^−5 (2.108 × 10^−5) | 1.59 × 10^−5 (3.61 × 10^−5) | 2.784 × 10^−2 (7.31 × 10^−2) | 3.23 × 10^−3 (7.87 × 10^−4) | 7.05 × 10^−4 (9.70 × 10^−5) |
| f8 | −4.176 × 10^+4 (2.08 × 10^+2) | −4.16 × 10^+4 (2.06 × 10^+2) | −2.1579 × 10^+4 (1.73 × 10^+3) | −2.1209 × 10^+4 (2.98 × 10^+3) | −3.943 × 10^+4 (5.36 × 10^+2) |
| f9 | 0.0 (0.0) | 0.0 (0.0) | 2.4359 × 10^+2 (4.03 × 10^+1) | 4.809 × 10^+1 (9.54) | 9.9767 × 10^−2 (3.04 × 10^−1) |
| f10 | 1.18 × 10^−16 (6.377 × 10^−16) | 0.0 (0.0) | 4.49 (1.73) | 5.628 × 10^−2 (3.08 × 10^−1) | 2.93 × 10^−3 (1.47 × 10^−4) |
| f11 | 0.0 (0.0) | 0.0 (0.0) | 4.17 × 10^−1 (6.45 × 10^−1) | 8.53 × 10^−2 (2.56 × 10^−1) | 1.89 × 10^−3 (4.42 × 10^−3) |
| f12 | 5.34 × 10^−22 (9.81 × 10^−24) | 5.3169 × 10^−22 (5.0655 × 10^−24) | 1.77 × 10^−1 (1.75 × 10^−1) | 9.219 × 10^−2 (4.61 × 10^−1) | 2.978 × 10^−7 (2.76 × 10^−8) |
| f13 | 1.712 × 10^−21 (9.379 × 10^−23) | 1.689 × 10^−21 (9.877 × 10^−24) | −3.86 × 10^−1 (9.47 × 10^−1) | 3.301 × 10^+2 (1.72 × 10^+3) | −1.142810 (2.41 × 10^−8) |

For opt-IMMALG we show the results obtained using both potential mutations (Eqs. 4, 5). For all algorithms we report the mean of the best candidate solutions over all runs and, in parentheses, the standard deviation. The best results are highlighted in boldface. Results have been averaged over 30 independent runs and Tmax = 5 × 10^6

Table 15 Comparison between opt-IMMALG and several real-coded memetic algorithms (RCMAs) proposed in Noman and Iba [32]

| Algorithm | f11 | f9 | f5 |
|---|---|---|---|
| opt-IMMALG | 0.0 | 0.0 | 4.68 |
| CHC | 6.5 × 10^−3 | 1.6 × 10^+1 | 1.9 × 10^+1 |
| G3-1 | 5.1 × 10^−1 | 7.4 × 10^+1 | 2.8 × 10^+1 |
| SW-100 | 2.7 × 10^−2 | 7.6 | 1 × 10^+1 |
| FC | 3.5 × 10^−4 | 5.5 | 2.3 × 10^+1 |
| RCMA-XHC | 1.3 × 10^−2 | 1.4 | 2.2 |

We report the mean of the best individuals over all runs. The best results are highlighted in boldface. Results have been averaged over 30 independent runs, using Tmax = 10^5 and n = 25 dimensions
noticed that by randomly choosing the age of the candidate solutions in the range [0, (2/3)τB] and fixing θ = 50%, opt-IMMALG improves its own performance. We call this new variant opt-IMMALG∗. Table 16 shows the improved performance of opt-IMMALG∗ on the first 13 functions. For these experiments we fixed Tmax = 5 × 10^5 for 30 dimensions and Tmax = 5 × 10^6 for 100 dimensions, which corresponds to the experimental protocol used in the previous subsection and proposed in Versterstrøm and Thomsen [42]. All results with value ≤ 10^−25 are reported as 0.0. The improved performance is particularly evident for function f5 with n = 30, where opt-IMMALG∗ is now able to reach the best solution, whereas the previous variants failed. In Tables 17 and 18 we present again the comparison with FEP, this time including the new variant of opt-IMMALG.

Table 16 Comparison between opt-IMMALG and opt-IMMALG∗, with maximum number of objective function evaluations Tmax = 5 × 10^5 for dimension n = 30, and Tmax = 5 × 10^6 for dimension n = 100

| Function | opt-IMMALG (n = 30) | opt-IMMALG∗ (n = 30) | opt-IMMALG (n = 100) | opt-IMMALG∗ (n = 100) |
|---|---|---|---|---|
| f1 | 0.0 (0.0) | 0.0 (0.0) | 0.0 (0.0) | 0.0 (0.0) |
| f2 | 0.0 (0.0) | 0.0 (0.0) | 0.0 (0.0) | 0.0 (0.0) |
| f3 | 0.0 (0.0) | 0.0 (0.0) | 0.0 (0.0) | 0.0 (0.0) |
| f4 | 0.0 (0.0) | 0.0 (0.0) | 6.447 × 10^−7 (3.338 × 10^−6) | 0.0 (0.0) |
| f5 | 12 (13.22) | 0.0 (0.0) | 74.99 (38.99) | 22.116 (39.799) |
| f6 | 0.0 (0.0) | 0.0 (0.0) | 0.0 (0.0) | 0.0 (0.0) |
| f7 | 1.521 × 10^−5 (2.05 × 10^−5) | 7.4785 × 10^−6 (6.463 × 10^−6) | 1.59 × 10^−5 (3.61 × 10^−5) | 1.2 × 10^−6 (1.53 × 10^−6) |
| f8 | −1.256041 × 10^+4 (25.912) | −9.05 × 10^+3 (1.91 × 10^+4) | −4.16 × 10^+4 (2.06 × 10^+2) | −2.727 × 10^+4 (7.627 × 10^−4) |
| f9 | 0.0 (0.0) | 0.0 (0.0) | 0.0 (0.0) | 0.0 (0.0) |
| f10 | 0.0 (0.0) | 0.0 (0.0) | 0.0 (0.0) | 0.0 (0.0) |
| f11 | 0.0 (0.0) | 0.0 (0.0) | 0.0 (0.0) | 0.0 (0.0) |
| f12 | 0.0 (0.0) | 0.0 (0.0) | 0.0 (0.0) | 0.0 (0.0) |
| f13 | 0.0 (0.0) | 0.0 (0.0) | 0.0 (0.0) | 0.0 (0.0) |

We report the mean of the best candidate solutions over all runs and, in parentheses, the standard deviation. The best results are highlighted in boldface
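The opt-IMMALG∗ tweak amounts to a different age initialization for newly created candidate solutions; a minimal sketch (the function name is ours, and the staggering rationale in the comment is our interpretation):

```python
import random

def initial_age(tau_b):
    """opt-IMMALG* variant: instead of starting every new candidate at
    age 0, draw its age uniformly at random from [0, (2/3) * tau_B], so
    individuals approach the aging threshold tau_B at staggered times."""
    return random.uniform(0.0, (2.0 / 3.0) * tau_b)
```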
Table 17 shows the results obtained by opt-IMMALG∗ on the first 13 functions, whilst Table 18 shows the results on the multimodal functions with a few local optima (f14–f23). The new variant opt-IMMALG∗ improves the overall quality of the results, in particular for functions f5 and f9. The opposite behaviour is obtained in Table 18, where the new variant (opt-IMMALG∗) is comparable to, but does not outperform, opt-IMMALG. Most likely, for this class of functions, each candidate solution still needs a longer life span.
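The aging behaviour discussed here (each candidate carries an age, the oldest candidates are removed, and opt-IMMALG∗ draws the initial age uniformly in [0, (2/3)τB] rather than starting at zero) can be sketched as follows. The value of τB and the dictionary representation of candidates are illustrative assumptions, not the paper's actual settings:

```python
import random

TAU_B = 20  # assumed maximum age tau_B; illustrative value only

def initial_age(tau_b=TAU_B):
    # opt-IMMALG* assigns a random age in [0, (2/3)*tau_b] to new
    # candidate solutions instead of starting them at age 0.
    return random.uniform(0.0, (2.0 / 3.0) * tau_b)

def aging(population, tau_b=TAU_B):
    # Increment every candidate's age and drop those whose age exceeds
    # tau_b; discarding old solutions re-introduces diversity and helps
    # the search escape local minima.
    survivors = []
    for cand in population:
        cand["age"] += 1
        if cand["age"] <= tau_b:
            survivors.append(cand)
    return survivors
```

A shorter initial age (as in opt-IMMALG∗) lets good candidates survive longer before aging removes them, which matches the life-span remark above.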
Table 17  Comparison between opt-IMMALG, opt-IMMALG∗ and FEP (Fast Evolutionary Programming) [43] on the first 13 functions (mean of the best candidate solutions over all runs ± standard deviation)

       opt-IMMALG                      opt-IMMALG∗                    FEP [43]
f1     0.0 ± 0.0                       0.0 ± 0.0                      5.7 × 10^−4 ± 1.3 × 10^−4
f2     0.0 ± 0.0                       0.0 ± 0.0                      8.1 × 10^−3 ± 7.7 × 10^−4
f3     0.0 ± 0.0                       0.0 ± 0.0                      1.6 × 10^−2 ± 1.4 × 10^−2
f4     0.0 ± 0.0                       0.0 ± 0.0                      0.3 ± 0.5
f5     16.29 ± 13.96                   0.0 ± 0.0                      5.06 ± 5.87
f6     0.0 ± 0.0                       0.0 ± 0.0                      0.0 ± 0.0
f7     1.995 × 10^−5 ± 2.348 × 10^−5   1.6 × 10^−5 ± 1.37 × 10^−5     7.6 × 10^−3 ± 2.6 × 10^−3
f8     −12535.15 ± 62.81               −8707.04 ± 1.7 × 10^3          −12554.5 ± 52.6
f9     0.596 ± 4.178                   0.0 ± 0.0                      4.6 × 10^−2 ± 1.2 × 10^−2
f10    0.0 ± 0.0                       0.0 ± 0.0                      1.8 × 10^−2 ± 2.1 × 10^−3
f11    0.0 ± 0.0                       0.0 ± 0.0                      1.6 × 10^−2 ± 2.2 × 10^−2
f12    0.0 ± 0.0                       0.0 ± 0.0                      9.2 × 10^−6 ± 3.6 × 10^−6
f13    0.0 ± 0.0                       0.0 ± 0.0                      1.6 × 10^−4 ± 7.3 × 10^−5

The experimental protocol was the same as described in Sect. 4.1. The best results are highlighted in boldface.

Table 18  Comparison between opt-IMMALG, opt-IMMALG∗ and FEP [43] on all functions of the last category, i.e. multimodal functions with a few local optima (mean ± standard deviation)

       opt-IMMALG                      opt-IMMALG∗                    FEP [43]
f14    0.998 ± 1.11 × 10^−3            1.255 ± 1.14                   1.22 ± 0.56
f15    3.20 × 10^−4 ± 2.672 × 10^−5    3.22 × 10^−4 ± 2.23 × 10^−5    5.0 × 10^−4 ± 3.2 × 10^−4
f16    −1.013 ± 2.212 × 10^−2          −1.0033 ± 4.9 × 10^−2          −1.031 ± 4.9 × 10^−7
f17    0.423 ± 3.217 × 10^−2           0.452 ± 7.58 × 10^−2           0.398 ± 1.5 × 10^−7
f18    5.837 ± 3.742                   7.097 ± 5.61                   3.02 ± 0.11
f19    −3.72 ± 7.846 × 10^−3           −3.65 ± 4.82 × 10^−2           −3.86 ± 1.4 × 10^−5
f20    −3.29 ± 3.097 × 10^−2           −3.026 ± 0.12                  −3.27 ± 5.9 × 10^−2
f21    −10.153 ± 1.034 × 10^−7         −10.153 ± 1.46 × 10^−7         −5.52 ± 1.59
f22    −10.402 ± 1.082 × 10^−5         −10.403 ± 1.75 × 10^−5         −5.52 ± 2.12
f23    −10.536 ± 1.165 × 10^−5         −10.536 ± 1.76 × 10^−5         −6.57 ± 3.14

The experimental protocol was the same as described in Sect. 4.1.
For all algorithms we report the mean of the best candidate solutions over all runs and the standard deviation. The best results are highlighted in boldface.

4.6 IA versus differential evolution algorithms

Among the many evolutionary methodologies able to effectively tackle global numerical optimization problems, differential evolution (DE) has shown superior performance on complex
continuous search spaces [34,36]. For this purpose, we compared opt-IMMALG and opt-IMMALG∗ with several DE variants [31,42] and their memetic versions [32], using the first 13 functions from Table 1. As previously described, for this class of experiments we used only the second potential mutation (Eq. 5), because it shows better performance. Several dimensions were used, from small (n = 30) to high values (n = 200).

Table 19  Comparison between opt-IMMALG, opt-IMMALG∗ and several DE variants proposed in Mezura-Montes et al. [31]

Unimodal functions
Algorithm                  f1         f2        f3          f4          f6          f7
opt-IMMALG∗                0.0        0.0       0.0         0.0         0.0         2.79 × 10^−5
opt-IMMALG                 0.0        0.0       0.0         0.0         0.0         4.89 × 10^−5
DE rand/1/bin              0.0        0.0       0.02        1.9521      0.0         0.0
DE rand/1/exp              0.0        0.0       0.0         3.7584      0.84        0.0
DE best/1/bin              0.0        0.0       0.0         0.0017      0.0         0.0
DE best/1/exp              407.972    3.291     10.6078     1.701872    2737.8458   0.070545
DE current-to-best/1       0.54148    4.842     0.471730    4.2337      1.394       0.0
DE current-to-rand/1       0.69966    3.503     0.903563    3.298563    1.767       0.0
DE current-to-rand/1/bin   0.0        0.0       0.000232    0.149514    0.0         0.0
DE rand/2/dir              0.0        0.0       30.112881   0.044199    0.0         0.0

Multimodal functions
Algorithm                  f5          f9          f10        f11        f12         f13
opt-IMMALG∗                16.2        0.0         0.0        0.0        0.0         0.0
opt-IMMALG                 11.69       0.0         0.0        0.0        0.0         0.0
DE rand/1/bin              19.578      0.0         0.0        0.001117   0.0         0.0
DE rand/1/exp              6.696       97.753938   0.080037   0.000075   0.0         0.0
DE best/1/bin              30.39087    0.0         0.0        0.000722   0.0         0.000226
DE best/1/exp              132621.5    40.003971   9.3961     5.9278     1293.0262   2584.85
DE current-to-best/1       30.984666   98.205432   0.270788   0.219391   0.891301    0.038622
DE current-to-rand/1       31.702063   92.263070   0.164786   0.184920   0.464829    5.169196
DE current-to-rand/1/bin   24.260535   0.0         0.0        0.0        0.001007    0.000114
DE rand/2/dir              30.654916   0.0         0.0        0.0        0.0         0.0

We report the mean of the best individuals over all runs; the best results are highlighted in boldface. Results have been averaged over 100 independent runs, using Tmax = 1.2 × 10^5 and n = 30 dimensions. For opt-IMMALG∗ we fixed d = 100.
For these instances we fixed ρ as described in Sect. 3.1. In the first experiment, opt-IMMALG and opt-IMMALG∗ are compared with the 8 DE variants proposed in Mezura-Montes et al. [31], where Tmax was fixed to 1.2 × 10^5 [31]. For each function 100 independent runs were performed, and the dimension was fixed to n = 30. Results are shown in Table 19. Since the authors of Mezura-Montes et al. [31] modified function f8 so that its minimum lies at zero (rather than −12569.5), this function is not included in the table. Inspecting the comparison in the table, we can observe that the new variant opt-IMMALG∗ outperforms all DE variants except on functions f5 and f7. In Table 20, opt-IMMALG and opt-IMMALG∗ are compared to the rand/1/bin variant, one of the best DE variants, based on a different experimental protocol proposed in Vesterstrøm and Thomsen [42]. For each experiment two different dimension values were used: n = 30 with Tmax = 5 × 10^5, and n = 100 with Tmax = 5 × 10^6. Thirty independent runs were performed for each benchmark function. In this table we present the mean of the best candidate solutions over all runs and the standard deviation. All results
≤ 10^−25 were reported as 0.0 [42]. This is the same experimental protocol used for the results in Table 16, hence the two tables are similar. The results indicate that the overall performances of opt-IMMALG and opt-IMMALG∗ are comparable to those produced by the rand/1/bin variant, in both 30 and 100 dimensions.

Table 20  Comparison between opt-IMMALG, opt-IMMALG∗ and the rand/1/bin variant proposed in Vesterstrøm and Thomsen [42] (mean of the best individuals over all runs ± standard deviation)

n = 30 dimensions
       opt-IMMALG                     opt-IMMALG∗                  DE rand/1/bin [42]
f1     0.0 ± 0.0                      0.0 ± 0.0                    0.0 ± 0.0
f2     0.0 ± 0.0                      0.0 ± 0.0                    0.0 ± 0.0
f3     0.0 ± 0.0                      0.0 ± 0.0                    2.02 × 10^−9 ± 8.26 × 10^−10
f4     0.0 ± 0.0                      0.0 ± 0.0                    3.85 × 10^−8 ± 9.17 × 10^−9
f5     12 ± 13.22                     0.0 ± 0.0                    0.0 ± 0.0
f6     0.0 ± 0.0                      0.0 ± 0.0                    0.0 ± 0.0
f7     1.521 × 10^−5 ± 2.05 × 10^−5   7.48 × 10^−6 ± 6.46 × 10^−6  4.939 × 10^−3 ± 1.13 × 10^−3
f8     −1.256041 × 10^4 ± 25.912      −9.05 × 10^3 ± 1.91 × 10^4   −1.256948 × 10^4 ± 2.3 × 10^−4
f9     0.0 ± 0.0                      0.0 ± 0.0                    0.0 ± 0.0
f10    0.0 ± 0.0                      0.0 ± 0.0                    −1.19 × 10^−15 ± 7.03 × 10^−16
f11    0.0 ± 0.0                      0.0 ± 0.0                    0.0 ± 0.0
f12    0.0 ± 0.0                      0.0 ± 0.0                    0.0 ± 0.0
f13    0.0 ± 0.0                      0.0 ± 0.0                    −1.142824 ± 4.45 × 10^−8

n = 100 dimensions
       opt-IMMALG                     opt-IMMALG∗                  DE rand/1/bin [42]
f1     0.0 ± 0.0                      0.0 ± 0.0                    0.0 ± 0.0
f2     0.0 ± 0.0                      0.0 ± 0.0                    0.0 ± 0.0
f3     0.0 ± 0.0                      0.0 ± 0.0                    5.87 × 10^−10 ± 1.83 × 10^−10
f4     6.447 × 10^−7 ± 3.338 × 10^−6  0.0 ± 0.0                    1.128 × 10^−9 ± 1.42 × 10^−10
f5     74.99 ± 38.99                  22.116 ± 39.799              0.0 ± 0.0
f6     0.0 ± 0.0                      0.0 ± 0.0                    0.0 ± 0.0
f7     1.59 × 10^−5 ± 3.61 × 10^−5    1.2 × 10^−6 ± 1.53 × 10^−6   7.664 × 10^−3 ± 6.58 × 10^−4
f8     −4.16 × 10^4 ± 2.06 × 10^2     −2.727 × 10^4 ± 7.63 × 10^−4 −4.1898 × 10^4 ± 1.06 × 10^−3
f9     0.0 ± 0.0                      0.0 ± 0.0                    0.0 ± 0.0
f10    0.0 ± 0.0                      0.0 ± 0.0                    8.023 × 10^−15 ± 1.74 × 10^−15
f11    0.0 ± 0.0                      0.0 ± 0.0                    5.42 × 10^−20 ± 5.42 × 10^−20
f12    0.0 ± 0.0                      0.0 ± 0.0                    0.0 ± 0.0
f13    0.0 ± 0.0                      0.0 ± 0.0                    −1.142824 ± 2.74 × 10^−8

The best results are highlighted in boldface.
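The inversely proportional hypermutation used by the IA (the second potential mutation, Eq. 5, per the text) can be sketched generically as follows. Eq. 5 itself is defined earlier in the paper and not reproduced in this section, so the exponential potential exp(−ρ f̂) used here, together with the values of ρ and the search bounds, is an assumption modelled on the authors' earlier clonal selection work rather than a transcription:

```python
import math
import random

def hypermutate(candidate, f_norm, rho=3.5, low=-5.12, high=5.12):
    # f_norm is the normalised fitness in [0, 1], with 1 for the best
    # candidate: the mutation potential alpha decreases as quality grows,
    # so better solutions receive fewer perturbations (inversely
    # proportional hypermutation).  rho and the bounds are illustrative.
    alpha = math.exp(-rho * f_norm)
    num_mutations = max(1, int(alpha * len(candidate)))
    clone = list(candidate)
    for _ in range(num_mutations):
        i = random.randrange(len(clone))
        clone[i] = random.uniform(low, high)
    return clone
```

Under this scheme a poor candidate (f_norm near 0) has nearly all of its variables resampled, while a near-optimal one receives a single perturbation.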
Two memetic versions of DE variants, based on crossover local search (XLS) and called DEfirDE and DEfirSPX, were proposed in Noman and Iba [32]. As a last set of experiments, in Tables 21 and 22 we compared opt-IMMALG∗ and opt-IMMALG with these two DE algorithms, rand/1/exp and best/1/exp, and their memetic versions, DEfirDE and DEfirSPX [32], using n = {50, 100, 200} dimensions. For each test, the maximum number of objective function evaluations Tmax was fixed to 5 × 10^5, and 30 independent runs were performed. We used only the functions f1, f5, f9, f10 and f11, the same used in Noman and Iba [32]. For the two DE algorithms and their memetic versions, in Tables 21 and 22 we report the results obtained varying the population size among n, 5n and 10n (first three lines, respectively), where n indicates the dimension of the search space [32]. Both tables demonstrate that the two variants of opt-IMMALG achieve higher quality solutions than the two DE algorithms
Table 21  Comparison between opt-IMMALG∗, opt-IMMALG and two of the best DE variants, rand/1/exp and best/1/exp, proposed in Noman and Iba [32]

      opt-IMMALG∗                opt-IMMALG                 DE rand/1/exp [32]              DE best/1/exp [32]

n = 50 dimensional search space
f1    0 ± 0                      0 ± 0                      0 ± 0                           309.74 ± 481.05
      0 ± 0                      0 ± 0                      0.0535 ± 0.0520                 0.0027 ± 0.0013
f5    1.64 ± 8.7                 30 ± 21.7                  79.8921 ± 102.611               3.69 × 10^5 ± 5.011 × 10^5
      52.4066 ± 19.9109          54.5985 ± 25.6652          90.0213 ± 33.8734               58.1931 ± 9.4289
f9    0 ± 0                      0 ± 0                      0 ± 0                           0.61256 ± 1.1988
      0 ± 0                      0 ± 0                      0 ± 0                           0 ± 0
f10   0 ± 0                      0 ± 0                      0 ± 0                           0.2621 ± 0.5524
      9.36 × 10^−6 ± 3.67 × 10^−6   6.85 × 10^−6 ± 6.06 × 10^−6   0.0104 ± 0.0015           0.0067 ± 0.0015
f11   0 ± 0                      0 ± 0                      0 ± 0                           0.1651 ± 0.2133
      9.95 × 10^−7 ± 4.3 × 10^−7    0 ± 0                   0.0053 ± 0.010                  0.0012 ± 0.0028

n = 100 dimensional search space
f1    0 ± 0                      0 ± 0                      1.58 × 10^−6 ± 3.75 × 10^−6     0.0046 ± 0.0247
      59.926 ± 16.574            30.242 ± 5.93              2496.82 ± 246.55                1729.40 ± 172.28
f5    26.7 ± 43                  85.6 ± 31.758              120.917 ± 41.8753               178.465 ± 60.938
      12312.16 ± 3981.44         7463.633 ± 2631.92         3.165 × 10^6 ± 6.052 × 10^5     1.798 × 10^6 ± 3.304 × 10^5
f9    0 ± 0                      0 ± 0                      0 ± 0                           0 ± 0
      2.6384 ± 0.7977            0.7585 ± 0.2524            234.588 ± 13.662                198.079 ± 18.947
f10   0 ± 0                      0 ± 0                      1.02 × 10^−6 ± 1.6 × 10^−7      9.5 × 10^−7 ± 1.1 × 10^−7
      1.6761 ± 0.0819            1.2202 ± 0.0965            7.7335 ± 0.1584                 6.7251 ± 0.1373
f11   0 ± 0                      0 ± 0                      0 ± 0                           0 ± 0
      1.1316 ± 0.0124            1.0530 ± 0.0100            20.037 ± 0.9614                 13.068 ± 0.8876

n = 200 dimensional search space
f1    0 ± 0                      0 ± 0                      50.005 ± 16.376                 26.581 ± 7.4714
      5.45 × 10^4 ± 2605.73      4.84 × 10^4 ± 1891.24      1.82 × 10^5 ± 6785.18           1.74 × 10^5 ± 6119.01
f5    88.65 ± 91.85              165.1 ± 71.2               9370.17 ± 3671.11               6725.48 ± 1915.38
      4.22 × 10^8 ± 3.04 × 10^7  3.54 × 10^8 ± 3.54 × 10^7  3.29 × 10^9 ± 2.12 × 10^8       3.12 × 10^9 ± 1.65 × 10^8
f9    0 ± 0                      0 ± 0                      0.4245 ± 0.2905                 0.2255 ± 0.1051
      1878.61 ± 60.298           1761.55 ± 43.3824          5471.35 ± 239.67                5094.97 ± 182.77
f10   0 ± 0                      0 ± 0                      0.5208 ± 0.0870                 0.4322 ± 0.0427
      15.917 ± 0.1209            15.46 ± 0.1205             19.253 ± 0.0698                 19.138 ± 0.0772
f11   0 ± 0                      0 ± 0                      0.7687 ± 0.0768                 0.5707 ± 0.0651
      490.29 ± 21.225            441.97 ± 15.877            1657.93 ± 47.142                1572.51 ± 53.611

The successive lines of each entry correspond to the different population sizes used for the DE variants (n, 5n and 10n), as described in the text. We
report the mean of the best individuals over all runs and the standard deviation (mean ± sd). The best results are highlighted in boldface. The results are obtained using n = {50, 100, 200} dimensions.
Table 22  Comparison between opt-IMMALG∗, opt-IMMALG and the memetic versions of the rand/1/exp and best/1/exp DE variants, called DEfirDE and DEfirSPX [32]

      opt-IMMALG∗                opt-IMMALG                 DEfirDE [32]                    DEfirSPX [32]

n = 50 dimensional search space
f1    0 ± 0                      0 ± 0                      0 ± 0                           0 ± 0
      0 ± 0                      0 ± 0                      0.0026 ± 0.0023                 1 × 10^−4 ± 4.75 × 10^−5
f5    1.64 ± 8.7                 30 ± 21.7                  72.0242 ± 47.1958               65.8951 ± 37.8933
      53.1894 ± 26.1913          45.8367 ± 10.2518          66.9674 ± 23.7196               52.0033 ± 13.6881
f9    0 ± 0                      0 ± 0                      0 ± 0                           0 ± 0
      0 ± 0                      0 ± 0                      0 ± 0                           0 ± 0
f10   0 ± 0                      0 ± 0                      0 ± 0                           0 ± 0
      2.28 × 10^−5 ± 1.45 × 10^−5   3.0 × 10^−6 ± 1.07 × 10^−6   0.0060 ± 0.0015           0.0019 ± 4.32 × 10^−4
f11   0 ± 0                      0 ± 0                      0 ± 0                           0 ± 0
      0 ± 0                      0 ± 0                      4.96 × 10^−4 ± 6.68 × 10^−4     5.27 × 10^−4 ± 0.0013

n = 100 dimensional search space
f1    0 ± 0                      0 ± 0                      0 ± 0                           0 ± 0
      11.731 ± 5.0574            1.2614 ± 0.4581            358.57 ± 108.12                 104.986 ± 22.549
f5    26.7 ± 43                  85.6 ± 31.758              107.5604 ± 28.2529              99.1086 ± 18.5735
      2923.108 ± 1521.085        732.85 ± 142.22            2.822 × 10^5 ± 3.012 × 10^5     16621.32 ± 6400.43
f9    0 ± 0                      0 ± 0                      0 ± 0                           0 ± 0
      0.1534 ± 0.1240            0.0094 ± 0.0068            17.133 ± 7.958                  27.0537 ± 20.889
f10   0 ± 0                      0 ± 0                      1.2 × 10^−6 ± 6.07 × 10^−7      0 ± 0
      0.5340 ± 0.1101            0.3695 ± 0.0734            3.7515 ± 0.2773                 3.4528 ± 0.1797
f11   0 ± 0                      0 ± 0                      0 ± 0                           0 ± 0
      0.7725 ± 0.1008            0.5433 ± 0.1331            3.7439 ± 0.7651                 2.2186 ± 0.3010

n = 200 dimensional search space
f1    0 ± 0                      0 ± 0                      17.678 ± 9.483                  0.8568 ± 0.2563
      9056.0 ± 1840.45           2782.32 ± 335.69           44090.5 ± 6122.35               9850.45 ± 1729.9
f5    88.65 ± 91.85              165.1 ± 71.2               5302.79 ± 2363.74               996.69 ± 128.483
      2.39 × 10^7 ± 6.379 × 10^6 1.19 × 10^6 ± 4.10 × 10^5  3.48 × 10^8 ± 1.75 × 10^8       1.21 × 10^7 ± 4.73 × 10^6
f9    0 ± 0                      0 ± 0                      0.1453 ± 0.2771                 0.0024 ± 0.0011
      352.93 ± 46.11             369.88 ± 136.87            1193.83 ± 145.477               859.03 ± 99.76
f10   0 ± 0                      0 ± 0                      0.3123 ± 0.0426                 0.1589 ± 0.0207
      9.2373 ± 0.4785            6.6861 ± 0.3286            14.309 ± 0.3706                 9.4114 ± 0.4581
f11   0 ± 0                      0 ± 0                      0.5984 ± 0.1419                 0.1631 ± 0.0314
      78.692 ± 11.766            28.245 ± 4.605             368.90 ± 41.116                 85.176 ± 12.824

The successive lines of each entry correspond to the different population sizes used for the memetic DE variants (n, 5n and 10n), as described in the text. We report the mean of the best individuals over all runs and the standard deviation (mean ± sd). The best results are highlighted in boldface.
The results shown are obtained using n = {50, 100, 200} dimensional search spaces.
and their memetic versions, especially for function f5. In both tables, the difference in solution quality obtained by opt-IMMALG∗ on function f5 compared to the other algorithms is significant: none of the compared algorithms was able to reach solutions comparable to those of opt-IMMALG∗ on this function. Moreover, both tables indicate that both variants of opt-IMMALG outperform the other algorithms as the function dimension increases. Finally, it is important to highlight that the two variants of opt-IMMALG were run using smaller population sizes, in particular for high dimensions (n = {100, 200}).

4.7 IA versus swarm intelligence algorithms

Recently, artificial immune systems have been related to swarm systems, since many immunological algorithms operate in a very similar manner: the design of distributed systems which display emergent behaviour at the system level, based on low-level interactions between agents and the environment. Therefore, some swarm intelligence algorithms proposed in Karaboga and Basturk [29] have been compared with opt-IMMALG∗ only, since the latter shows better performance than the other variants: particle swarm optimization (PSO), particle swarm inspired evolutionary algorithm (PS-EA), and artificial bee colony (ABC). For these experiments we used the same experimental protocol as in Karaboga and Basturk [29], that is: the problem dimension was set to n = {10, 20, 30}, whilst the termination criterion was fixed to 500 (for n = 10), 750 (for n = 20), and 1000 (for n = 30) generations.
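A further benchmark used in this swarm intelligence comparison is a Schwefel-type function H (Eq. 11 below), H(x) = 418.9829·n + Σᵢ (−xᵢ sin(√|xᵢ|)); a direct transcription is:

```python
import math

def H(x):
    # Eq. 11: H(x) = 418.9829 * n + sum(-x_i * sin(sqrt(|x_i|))).
    # The global minimum value is approximately 0, attained near
    # x_i = 420.9687 for every coordinate (standard Schwefel function).
    n = len(x)
    return 418.9829 * n - sum(xi * math.sin(math.sqrt(abs(xi))) for xi in x)
```

At the origin the sine terms vanish, so H grows linearly with the dimension n, which is why the tabulated errors on H increase sharply from 10 to 30 variables.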
Similarly to Karaboga and Basturk [29], in this comparison (shown in Table 23) we considered only the functions f5, f9, f10 and f11 of the benchmark in Table 1, with the addition of the following new function:

    H(x) = (418.9829 × n) + Σ_{i=1}^{n} (−x_i sin(√|x_i|))        (11)

From Table 23 it is possible to affirm that opt-IMMALG∗ outperforms all swarm system algorithms on all functions used, except for function H. The superior performance of opt-IMMALG∗ over the swarm intelligence algorithms is also confirmed for increasing problem dimension. Conversely, for function H, PS-EA reaches better solutions with increasing problem dimension. Finally, the results reported for ABC2 were obtained using a different experimental protocol (see Karaboga and Basturk [29]): the termination criterion was increased to 1000 (for n = 10), 1500 (for n = 20) and 2000 (for n = 30), respectively. Although opt-IMMALG∗ was run for a smaller number of generations, its results are comparable to, and often outperform, those of ABC2. This experiment shows that opt-IMMALG∗ reaches competitive solutions, close to the global optima, in less time than the artificial bee colony (ABC) algorithm.

4.8 IA versus LeGO and PSwarm

In this section we present a comparison between opt-IMMALG∗ and two of the best optimization algorithms in the literature, LeGO [5] and PSwarm [41]. For this comparison we used a different set of functions taken from Cassioli et al. [5], which includes 8 functions with dimensionality n = 10, except for function mgw20, where n = 20. These functions are a subset of the wider benchmark proposed in Vaz and Vicente [41], which can be downloaded from http://www.norg.uminho.pt/aivaz/
pswarm/.

Table 23  Comparison between opt-IMMALG∗ and some swarm intelligence algorithms (mean of the best candidate solutions over 30 independent runs ± standard deviation)

n = 10 variables
Algorithm     f11                           f9                            f5                    f10                                 H
PSO           0.079393 ± 0.033451           2.6559 ± 1.3896               4.3713 ± 2.3811       9.8499 × 10^−13 ± 9.6202 × 10^−13   161.87 ± 144.16
PS-EA         0.222366 ± 0.0781             0.43404 ± 0.2551              25.303 ± 29.7964      0.19209 ± 0.1951                    0.32037 ± 1.6185
opt-IMMALG∗   0.0 ± 0.0                     0.0 ± 0.0                     0.0 ± 0.0             0.0 ± 0.0                           1.27 × 10^−4 ± 1.268 × 10^−14
ABC1          0.00087 ± 0.002535            0.0 ± 0.0                     0.034072 ± 0.045553   7.8 × 10^−11 ± 1.16 × 10^−9         1.27 × 10^−9 ± 4 × 10^−12
ABC2          0.000329 ± 0.00185            0.0 ± 0.0                     0.012522 ± 0.01263    4.6 × 10^−11 ± 5.4 × 10^−11         1.27 × 10^−9 ± 4 × 10^−12

n = 20 variables
PSO           0.030565 ± 0.025419           12.059 ± 3.3216               77.382 ± 94.901       1.1778 × 10^−6 ± 1.5842 × 10^−6     543.07 ± 360.22
PS-EA         0.59036 ± 0.2030              1.8135 ± 0.2551               72.452 ± 27.3441      0.32321 ± 0.097353                  1.4984 ± 0.84612
opt-IMMALG∗   0.0 ± 0.0                     0.0 ± 0.0                     0.0 ± 0.0             0.0 ± 0.0                           237.5652 ± 710.4036
ABC1          2.01 × 10^−8 ± 6.76 × 10^−8   1.45 × 10^−8 ± 5.06 × 10^−8   0.13614 ± 0.132013    1.6 × 10^−11 ± 1.9 × 10^−11         19.83971 ± 45.12342
ABC2          0.0 ± 0.0                     0.0 ± 0.0                     0.014458 ± 0.010933   0.0 ± 1 × 10^−12                    0.000255 ± 0

n = 30 variables
PSO           0.011151 ± 0.014209           32.476 ± 6.9521               402.54 ± 633.65       1.4917 × 10^−6 ± 1.8612 × 10^−6     990.77 ± 581.14
PS-EA         0.8211 ± 0.1394               3.0527 ± 0.9985               98.407 ± 35.5791      0.3771 ± 0.098762                   3.272 ± 1.6185
opt-IMMALG∗   0.0 ± 0.0                     0.0 ± 0.0                     0.0 ± 0.0             0.0 ± 0.0                           2766.804 ± 2176.288
ABC1          2.87 × 10^−9 ± 8.45 × 10^−10  0.033874 ± 0.181557           0.219626 ± 0.152742   3 × 10^−12 ± 5 × 10^−12             146.8568 ± 82.3144
ABC2          0.0 ± 0.0                     0.0 ± 0.0                     0.020121 ± 0.021846   0.0 ± 0.0                           0.000382 ± 1 × 10^−12

The best results are highlighted in boldface.

Table 24 presents the comparison among opt-IMMALG∗, LeGO and PSwarm, showing for each function the minimum, median and maximum values found, except for PSwarm, for which only the minimum value found is available. Precisely, the results for PSwarm were taken from Vaz and Vicente [41], where the optimality gap is given.
For this comparison, as proposed in Vaz and Vicente [41] and Cassioli et al. [5], 30 independent runs were performed for each test function, with Tmax fixed to 10^4. Thus, the mean of the best solutions obtained over the 30 runs was also included in the table. For the LeGO algorithm we show the best solutions found from 5000 accepted points (first line) and from 5000 refused ones (second line). Since a small number of objective function evaluations was set for these experiments, a smaller population size (with respect to the previous comparisons) was used for opt-IMMALG∗: d = 40.
Table 24  Comparison between the opt-IMMALG∗, LeGO [5] and PSwarm [41] algorithms

Algorithm      Min               Mean              Median             Max

ack (global minimum at f(x) = 0)
PSwarm         0.217164          n.a.              n.a.               n.a.
LeGO           2.04              4.74              4.85               5.36
               4.59              6.06              6.03               7.97
opt-IMMALG∗    0                 0                 0                  0

em10 (global minimum at f(x) = −9.660152)
PSwarm         −8.275452         n.a.              n.a.               n.a.
LeGO           −8.88             −5.16             −5.24              −0.488
               −8.68             −4.12             −4.18              0.002
opt-IMMALG∗    −6.3086           −5.22             −5.225             −4.446

fx10 (global minimum at f(x) = −10.2088)
PSwarm         −2.131509         n.a.              n.a.               n.a.
LeGO           −10.21            −1.97             −1.48              −1.28
               −10.21            −1.58             −1.48              −1.15
opt-IMMALG∗    −2.2              −0.4676           −0.3887            −0.3784

mgw10 (global minimum at f(x) = 0)
PSwarm         1.1078 × 10^−2    n.a.              n.a.               n.a.
LeGO           4.4 × 10^−16      1.9 × 10^−2       8.9 × 10^−3        3.63
               4.4 × 10^−16      4.0 × 10^−2       3.2 × 10^−2        3.63
opt-IMMALG∗    0                 4.23 × 10^−6      1.13 × 10^−6       2.56 × 10^−5

mgw20 (global minimum at f(x) = 0)
PSwarm         5.3904 × 10^−2    n.a.              n.a.               n.a.
LeGO           −1.3 × 10^−15     7.4 × 10^−2       2.5 × 10^−2        7.80
               −1.3 × 10^−15     8.4 × 10^−2       4.4 × 10^−2        9.42
opt-IMMALG∗    0                 7.28 × 10^−4      3.64 × 10^−5       1.6686 × 10^−2

ml10 (global minimum at f(x) = −0.965)
PSwarm         −0.965            n.a.              n.a.               n.a.
LeGO           −1.7 × 10^−22     1.7 × 10^−22      −1.0 × 10^−132     8.6 × 10^−19
               −8.3 × 10^−81     1.6 × 10^−74      1.9 × 10^−279      8.2 × 10^−71
opt-IMMALG∗    −7.917 × 10^−2    −3.088 × 10^−3    0                  0

rg10 (global minimum at f(x) = 0)
PSwarm         0                 n.a.              n.a.               n.a.
LeGO           6.96              57.54             57.71              127.40
               9.95              81.15             80.59              224.90
opt-IMMALG∗    0                 0                 0                  0

sal10 (global minimum at f(x) = 0)
PSwarm         0.399873          n.a.              n.a.               n.a.
LeGO           2.1 × 10^−16      14.47             15.10              20.90
               1.2 × 10^−14      18.65             18.90              26.60
opt-IMMALG∗    0                 0.113             9.987 × 10^−2      0.19987

For each function we show the minimum value found, the mean over all independent runs, the median, and the maximum. For LeGO, the first line refers to the 5000 accepted points and the second to the 5000 refused ones. The best results are highlighted in boldface.

Inspecting these results, it is possible to see that opt-IMMALG∗ outperforms the other two optimization algorithms on 5 of the 8 functions, whilst on two of the remaining three functions, namely fx10 and ml10, opt-IMMALG∗ does not exhibit the worst solutions.
Moreover, analyzing the solutions obtained on function em10 by all three algorithms, it is possible to see that, although opt-IMMALG∗ is not able to reach a minimal point comparable with those of the other two algorithms, it shows better performance with respect to the mean value of the best solutions found, thus exhibiting overall a better search strategy. We think that
finding better solutions also in the mean can be useful in any real optimization task, where one often needs a good alternative solution to the optimal one.

4.8.1 IA for high dimensional search spaces

The final set of experiments that completes this exhaustive study of the performance of the proposed IA consists of tackling global numerical optimization problems with very high dimensions (n = 1000 and n = 5000). We present only the results obtained by opt-IMMALG using the potential mutation of Eq. 5. Table 25 shows the results obtained by opt-IMMALG on large dimensions using different Tmax values: 10^4 and 10^5.

Table 25  Results obtained by opt-IMMALG using large dimensional search spaces, n = {1000, 5000} (mean of the best individuals over all runs ± standard deviation)

            f1                             f5                           f9                            f10                             f11
Tmax = 10^4
n = 1000    1.93 × 10^−1 ± 2.44 × 10^−2    1.01 × 10^3 ± 2.94 × 10^2    2.29 × 10^−2 ± 5.09 × 10^−3   1.21 × 10^−3 ± 7.76 × 10^−5     1.27 × 10^−2 ± 1.7 × 10^−3
n = 5000    16 ± 28.6                      9.11 × 10^3 ± 3.56 × 10^3    1.83 ± 8.13                   2.76 × 10^−3 ± 2.31 × 10^−3     3.26 × 10^−1 ± 5.61 × 10^−1
Tmax = 10^5
n = 1000    3.35 × 10^−3 ± 2.22 × 10^−2    9.54 × 10^2 ± 1.54 × 10^2    7.06 × 10^−4 ± 4.72 × 10^−3   3.76 × 10^−8 ± 2.63 × 10^−7     6.66 × 10^−12 ± 4.56 × 10^−11
n = 5000    3.52 ± 5.14                    5.95 × 10^3 ± 1.98 × 10^3    3.64 × 10^−1 ± 6.34 × 10^−1   8.14 × 10^−4 ± 1.59 × 10^−3     8.99 × 10^−2 ± 3.33 × 10^−1

We performed 50 independent runs for each test function, using maximum numbers of objective function evaluations Tmax = {10^4, 10^5}. We fixed ρ = 9 for n = 1000 and ρ = 11.5 for n = 5000.

As we expected, the proposed algorithm encounters more obstacles in reaching optimal solutions for the given functions once the dimensionality is increased. However, by increasing the number of objective function evaluations, the algorithm begins to reach acceptable solutions, showing better performance.
This makes us think that, given more time for the evolution, the algorithm performs well also on large-scale dimensions.

5 Conclusion

In this research paper we presented an extensive comparative study illustrating the performance of two immunological optimization algorithms against 39 state-of-the-art optimization algorithms (deterministic and nature-inspired methodologies): FEP; IFEP; three versions of CEP; two versions of PSO and arPSO; PS-EA; two versions of ABC; EO; SEA; HGA; immune-inspired algorithms, such as BCA and two versions of CLONALG; the CHC algorithm; Generalized Generation Gap (G3-1); hybrid steady-state RCMA (SW-100); Family Competition (FC); RCMA with crossover hill-climbing (RCMA-XHC); eleven variants of DE and two of its memetic versions; artificial bee colony (ABC); learning for global optimization (LeGO); and PSwarm.

Two different versions are given to solve the global numerical optimization problem: opt-IMMALG01, based on binary-code representation, and opt-IMMALG, based on real values. Moreover, two variants of opt-IMMALG are presented in this work. The main features of the designed immunological algorithm can be summarized as: (1) the cloning operator, which explores the neighbourhood of a given solution, (2) the inversely
proportional hypermutation operator, which perturbs each candidate solution as a function of its objective function value (inversely proportionally), and (3) the aging operator, which eliminates the oldest candidate solutions from the current population in order to introduce diversity and thus avoid local minima during the search process.

For our experiments, we used a large set of test beds and numerical functions from Cassioli et al. [5], Timmis and Kelsey [40], Vaz and Vicente [41] and Yao et al. [43]. Furthermore, the dimensionality of the problems was varied from small to high dimensions (5000 variables). Our results suggest that the proposed immunological algorithm is an effective numerical optimization algorithm (in terms of solution quality), particularly for the most challenging high-dimensional search spaces. In particular, increasing the dimension of the solution space improves the performance of the IA. Moreover, the experimental results indicate that our IA using real-valued coding reaches better solutions than the binary-code version. All experimental comparisons show that opt-IMMALG is comparable to, and often outperforms, all 39 state-of-the-art optimization algorithms.

Acknowledgments  The anonymous reviewers provided helpful feedback that measurably improved the manuscript.

References

1. Aiex, R.M., Resende, M.G.C., Ribeiro, C.C.: TTTPLOTS: a perl program to create time-to-target plots. Optim. Lett. 1, 355–366 (2007)
2. Aiex, R.M., Resende, M.G.C., Ribeiro, C.C.: Probability distribution of solution time in GRASP: an experimental investigation. J. Heuristics 8, 343–373 (2002)
3. Angeline, P.J.: Evolutionary optimization versus particle swarm optimization: philosophy and performance differences. In: Porto, V.W., Saravanan, N., Waagen, D., Eiben, A.E. (eds.) Evolutionary Programming, vol. 7, pp. 601–610. Springer-Verlag, Berlin (1998)
4.
Caponetto, R., Fortuna, L., Fazzino, S., Xibilia, M.G.: Chaotic sequences to improve the performance of evolutionary algorithms. IEEE Trans. Evolut. Comput. 7(3), 289–304 (2003)
5. Cassioli, A., Di Lorenzo, D., Locatelli, M., Schoen, F., Sciandrone, M.: Machine learning for global optimization. Comput. Optim. Appl. (2010). doi:10.1007/s10589-010-9330-x
6. Chambers, J.M., Cleveland, W.S., Kleiner, B., Tukey, P.A.: Graphical Methods for Data Analysis. Chapman & Hall, London (1983)
7. Chellapilla, K.: Combining mutation operators in evolutionary programming. IEEE Trans. Evolut. Comput. 2, 91–96 (1998)
8. Cutello, V., Narzisi, G., Nicosia, G., Pavone, M.: An immunological algorithm for global numerical optimization. In: Proceedings of the Seventh International Conference on Artificial Evolution (EA'05), vol. 3871, pp. 284–295. LNCS (2005)
9. Cutello, V., Narzisi, G., Nicosia, G., Pavone, M.: Clonal selection algorithms: a comparative case study using effective mutation potentials. In: Proceedings of the Fourth International Conference on Artificial Immune Systems (ICARIS'05), vol. 3627, pp. 13–28. LNCS (2005)
10. Cutello, V., Nicosia, G., Pavone, M.: A hybrid immune algorithm with information gain for the graph coloring problem. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'03), vol. 2723, pp. 171–182. LNCS (2003)
11. Cutello, V., Nicosia, G., Pavone, M.: Exploring the capability of immune algorithms: a characterization of hypermutation operators. In: Proceedings of the Third International Conference on Artificial Immune Systems (ICARIS'04), vol. 3239, pp. 263–276. LNCS (2004)
12. Cutello, V., Nicosia, G., Pavone, M.: An immune algorithm with hyper-macromutations for the Dill's 2D hydrophobic–hydrophilic model. In: Proceedings of the Congress on Evolutionary Computation (CEC'04), vol. 1, pp. 1074–1080. IEEE Press, New York (2004)
13.
Cutello, V., Nicosia, G., Pavone, M.: An immune algorithm with stochastic aging and Kullback entropy for the chromatic number problem. J. Comb. Optim. 14(1), 9–33 (2007)
14. Cutello, V., Nicosia, G., Pavone, M., Narzisi, G.: Real coded clonal selection algorithm for unconstrained global numerical optimization using a hybrid inversely proportional hypermutation operator. In:
Proceedings of the 21st Annual ACM Symposium on Applied Computing (SAC'06), vol. 2, pp. 950–954 (2006)
15. Cutello, V., Nicosia, G., Pavone, M., Timmis, J.: An immune algorithm for protein structure prediction on lattice models. IEEE Trans. Evolut. Comput. 11(1), 101–117 (2007)
16. Dasgupta, D.: Advances in artificial immune systems. IEEE Comput. Intell. Mag. 40–49 (2006)
17. Dasgupta, D., Niño, F.: Immunological Computation: Theory and Applications. CRC Press, Taylor & Francis Group, Boca Raton (2009)
18. Davies, M., Secker, A., Freitas, A., Timmis, J., Clark, E., Flower, D.: Alignment-independent techniques for protein classification. Curr. Proteomics 5(4), 217–223 (2008)
19. De Castro, L.N., Von Zuben, F.J.: Learning and optimization using the clonal selection principle. IEEE Trans. Evolut. Comput. 6(3), 239–251 (2002)
20. Feo, T.A., Resende, M.G.C., Smith, S.H.: A greedy randomized adaptive search procedure for maximum independent set. Oper. Res. 42, 860–878 (1994)
21. Finkel, D.E.: DIRECT optimization algorithm user guide. Technical report, CRSC, N.C. State University. ftp://ftp.ncsu.edu/pub/ncsu/crsc/pdf/crsc-tr03-11.pdf (March 2003)
22. Floudas, C.A., Pardalos, P.M. (eds.): Encyclopedia of Optimization. Springer, Berlin (2009)
23. Garrett, S.: How do we evaluate artificial immune systems? Evolut. Comput. 13(2), 145–178 (2005)
24. Goldberg, D.E.: The Design of Innovation: Lessons from and for Competent Genetic Algorithms, vol. 7. Kluwer Academic Publishers, Boston (2002)
25. Goldberg, D.E., Voessner, S.: Optimizing global-local search hybrids. In: Genetic and Evolutionary Computation Conference (GECCO'99), pp. 220–228 (1999)
26. Hart, W.E., Krasnogor, N., Smith, J.E.: Recent Advances in Memetic Algorithms. Series in Studies in Fuzziness and Soft Computing. Springer, Berlin (2005)
27. http://www2.research.att.com/~mgcr/tttplots/
28.
Jones, D.R., Perttunen, C.D., Stuckman, B.E.: Lipschitzian optimization without the Lipschitz constant. J. Optim. Theory Appl. 79, 157–181 (1993)
29. Karaboga, D., Basturk, B.: A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J. Global Optim. 39, 459–471 (2007)
30. Lozano, M., Herrera, F., Krasnogor, N., Molina, D.: Real-coded memetic algorithms with crossover hill-climbing. Evolut. Comput. 12(3), 273–302 (2004)
31. Mezura-Montes, E., Velázquez-Reyes, J., Coello Coello, C.: A comparative study of differential evolution variants for global optimization. In: Genetic and Evolutionary Computation Conference (GECCO'06), vol. 1, pp. 485–492 (2006)
32. Noman, N., Iba, H.: Enhancing differential evolution performance with local search for high dimensional function optimization. In: Genetic and Evolutionary Computation Conference (GECCO'05), pp. 967–974 (2005)
33. Pardalos, P.M., Resende, M.: Handbook of Applied Optimization. Oxford University Press, Oxford (2002)
34. Price, K.V., Storn, R.M., Lampinen, J.A.: Differential Evolution: A Practical Approach to Global Optimization. Springer, Berlin (2005)
35. Smith, S., Timmis, J.: Immune network inspired evolutionary algorithm for the diagnosis of Parkinson's disease. Biosystems 94(1–2), 34–46 (2008)
36. Storn, R., Price, K.V.: Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. J. Global Optim. 11(4), 341–359 (1997)
37. Timmis, J.: Artificial immune systems – today and tomorrow. Nat. Comput. 6(1), 1–18 (2007)
38. Timmis, J., Hart, E.: Application areas of AIS: the past, present and the future. J. Appl. Soft Comput. 8(1), 191–201 (2008)
39. Timmis, J., Hart, E., Hone, A., Neal, M., Robins, A., Stepney, S., Tyrrell, A.: Immuno-engineering. In: Proceedings of the International Conference on Biologically Inspired Collaborative Computing (IFIP'09), vol. 268, pp. 3–17. IEEE Press, New York (2008)
40.
Timmis, J., Kelsey, J.: Immune inspired somatic contiguous hypermutation for function optimization. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'03), vol. 2723, pp. 207–218. LNCS (2003)
41. Vaz, A.I.F., Vicente, L.N.: A particle swarm pattern search method for bound constrained global optimization. J. Global Optim. 39, 197–219 (2007)
42. Vesterstrøm, J., Thomsen, R.: A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems. In: Congress on Evolutionary Computation (CEC'04), vol. 1, pp. 1980–1987 (2004)
43. Yao, X., Liu, Y., Lin, G.M.: Evolutionary programming made faster. IEEE Trans. Evolut. Comput. 3(2), 82–102 (1999)