$$f''(\vec{x}) = \begin{bmatrix}
\frac{\partial^2 f}{\partial x_1\,\partial x_1} & \frac{\partial^2 f}{\partial x_2\,\partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_n\,\partial x_1} \\
\frac{\partial^2 f}{\partial x_1\,\partial x_2} & \frac{\partial^2 f}{\partial x_2\,\partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_n\,\partial x_2} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial^2 f}{\partial x_1\,\partial x_n} & \frac{\partial^2 f}{\partial x_2\,\partial x_n} & \cdots & \frac{\partial^2 f}{\partial x_n\,\partial x_n}
\end{bmatrix}$$
7:     if f(~x) > f(~x*) then
8:         ~x* ← ~x
9:     ~x ← random value
10: until we have run out of time
11: return ~x*
A global optimization algorithm is guaranteed to find the global optimum if it runs long enough. The above algorithm is really global only in theory: we'll likely never have f'(x) precisely equal to 0. So we'll have to fudge it: if −ε < f'(x) < ε for some very small value of ε, we'll consider that close enough to zero.
6 There is a gotcha with the algorithms described here: what happens when part of the function is totally flat? There's no gradient to ascend, which leads to some problems. Let's say you're in a perfectly flat valley (a local minimum) of the function. All around you the slope is 0, so Gradient Ascent won't move at all. It's stuck. Even worse: the second derivative is 0 as well, so for Newton's Method, f'(x)/f''(x) = 0/0. Eesh. And to top it off, f'(x) = f''(x) = 0 for flat minima, flat saddle points, and flat maxima. Perhaps adding a bit of randomness might help in some of these situations: but that's for the next section....
2 Single-State Methods
Gradient-based optimization makes a big assumption: that you can compute the first (or even the second) derivative. That's a big assumption. If you are optimizing a well-formed, well-understood mathematical function, it's reasonable. But in most cases, you can't compute the gradient of the function because you don't even know what the function is. All you have is a way of creating or modifying inputs to the function, testing them, and assessing their quality.
For example, imagine that you have a humanoid robot simulator, and you're trying to find an optimal loop of timed operations to keep the robot walking forward without falling over. You have some n different operations, and your candidate solutions are arbitrary-length strings of these operations. You can plug a string into the simulator and get a quality out (how far the robot moved forward before it fell over). How do you find a good solution?
All you're given is a black box (in this case, the robot simulator) describing a problem that you'd like to optimize. The box has a slot where you can submit a candidate solution to the problem (here, a string of timed robot operations). Then you press the big red button and out comes the assessed quality of that candidate solution. You have no idea what kind of surface the quality assessment function looks like when plotted. Your candidate solution doesn't even have to be a vector of numbers: it could be a graph structure, or a tree, or a set of rules, or a string of robot operations! Whatever is appropriate for the problem.
To optimize a candidate solution in this scenario, you need to be able to do four things:

Provide one or more initial candidate solutions. This is known as the initialization procedure.
Assess the Quality of a candidate solution. This is known as the assessment procedure.
Make a Copy of a candidate solution.
Tweak a candidate solution, which produces a randomly slightly different candidate solution. This, plus the Copy operation, are collectively known as the modification procedure.
To these the algorithm will typically add a selection procedure that decides which candidate solutions to retain and which to reject as it wanders through the space of possible solutions to the problem.
2.1 Hill-Climbing
Let's begin with a simple technique, Hill-Climbing. This technique is related to gradient ascent, but it doesn't require you to know the strength of the gradient or even its direction: you just iteratively test new candidate solutions in the region of your current candidate, and adopt the new ones if they're better. This enables you to climb up the hill until you reach a local optimum.
Algorithm 4 Hill-Climbing
1: S ← some initial candidate solution        ⊳ The Initialization Procedure
2: repeat
3:     R ← Tweak(Copy(S))        ⊳ The Modification Procedure
4:     if Quality(R) > Quality(S) then        ⊳ The Assessment and Selection Procedures
5:         S ← R
6: until S is the ideal solution or we have run out of time
7: return S
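To make the four procedures concrete, here is a minimal Python sketch of Algorithm 4. The function names, the time-based stopping rule, and the toy usage example are my own illustrative choices, not part of the pseudocode above.

import random, time

def hill_climb(init, tweak, quality, seconds=1.0):
    """Minimal sketch of Algorithm 4: Hill-Climbing.

    init()     -> a new candidate solution
    tweak(s)   -> a slightly different copy of s (must not modify s)
    quality(s) -> a number; higher is better
    """
    s = init()
    deadline = time.time() + seconds
    while time.time() < deadline:           # "until we have run out of time"
        r = tweak(s)                        # Tweak(Copy(S))
        if quality(r) > quality(s):         # keep whichever of the two is better
            s = r
    return s

# Hypothetical usage: maximize -x^2 over a single real value.
best = hill_climb(init=lambda: random.uniform(-10, 10),
                  tweak=lambda x: x + random.uniform(-0.1, 0.1),
                  quality=lambda x: -x * x)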
Notice the strong resemblance between Hill-Climbing and Gradient Ascent. The only real difference is that Hill-Climbing's more general Tweak operation must instead rely on a stochastic (partially random) approach to hunting around for better candidate solutions. Sometimes it finds worse ones nearby, sometimes it finds better ones.

We can make this algorithm a little more aggressive: create n tweaks to a candidate solution all at one time, and then possibly adopt the best one. This modified algorithm is called Steepest Ascent Hill-Climbing, because by sampling all around the original candidate solution and then picking the best, we're essentially sampling the gradient and marching straight up it.
Algorithm 5 Steepest Ascent Hill-Climbing
1: n ← number of tweaks desired to sample the gradient
2: S ← some initial candidate solution
3: repeat
4:     R ← Tweak(Copy(S))
5:     for n − 1 times do
6:         W ← Tweak(Copy(S))
7:         if Quality(W) > Quality(R) then
8:             R ← W
9:     if Quality(R) > Quality(S) then
10:         S ← R
11: until S is the ideal solution or we have run out of time
12: return S
A popular variation, which I dub Steepest Ascent Hill-Climbing with Replacement, is to not bother comparing R to S: instead, we just replace S directly with R. Of course, this runs the risk of losing our best solution of the run, so we'll augment the algorithm to keep the best-discovered-so-far solution stashed away, in a reserve variable called Best. At the end of the run, we return Best. In nearly all future algorithms we'll use the store-in-Best theme, so get used to seeing it!
Algorithm 6 Steepest Ascent Hill-Climbing With Replacement
1: n ← number of tweaks desired to sample the gradient
2: S ← some initial candidate solution
3: Best ← S
4: repeat
5:     R ← Tweak(Copy(S))
6:     for n − 1 times do
7:         W ← Tweak(Copy(S))
8:         if Quality(W) > Quality(R) then
9:             R ← W
10:     S ← R
11:     if Quality(S) > Quality(Best) then
12:         Best ← S
13: until Best is the ideal solution or we have run out of time
14: return Best
2.1.1 The Meaning of Tweak
The initialization, Copy, Tweak, and (to a lesser extent) fitness assessment functions collectively define the representation of your candidate solution. Together they stipulate what your candidate solution is made up of and how it operates.

What might a candidate solution look like? It could be a vector; or an arbitrary-length list of objects; or an unordered set or collection of objects; or a tree; or a graph. Or any combination of these. Whatever seems to be appropriate to your problem. If you can create the four functions above in a reasonable fashion, you're in business.

One simple and common representation for candidate solutions, which we'll stick to for now, is the same as the one used in the gradient methods: a fixed-length vector of real-valued numbers. Creating a random such vector is easy: just pick random numbers within your chosen bounds. If the bounds are min and max inclusive, and the vector length is l, we could do this:
Algorithm 7 Generate a Random Real-Valued Vector
1: min ← minimum desired vector element value
2: max ← maximum desired vector element value
3: ~v ← a new vector ⟨v1, v2, ..., vl⟩
4: for i from 1 to l do
5:     vi ← random number chosen uniformly between min and max inclusive
6: return ~v
To Tweak a vector we might (as one of many possibilities) add a small amount of random noise to each number: in keeping with our present definition of Tweak, let's assume for now that this noise is no larger than a small value. Here's a simple way of adding bounded, uniformly distributed random noise to a vector. For each slot in the vector, if a coin-flip of probability p comes up heads, we find some bounded uniform random noise to add to the number in that slot. In most cases we keep p = 1.
Algorithm 8 Bounded Uniform Convolution
1: ~v ← vector ⟨v1, v2, ..., vl⟩ to be convolved
2: p ← probability of adding noise to an element in the vector        ⊳ Often p = 1
3: r ← half-range of uniform noise
4: min ← minimum desired vector element value
5: max ← maximum desired vector element value
6: for i from 1 to l do
7:     if p ≥ random number chosen uniformly from 0.0 to 1.0 then
8:         repeat
9:             n ← random number chosen uniformly from −r to r inclusive
10:         until min ≤ vi + n ≤ max
11:         vi ← vi + n
12: return ~v
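A minimal Python sketch of Algorithms 7 and 8 might look as follows; the function names and parameter defaults are illustrative assumptions, not fixed by the pseudocode.

import random

def random_vector(length, lo, hi):
    """Algorithm 7: a random real-valued vector with elements in [lo, hi]."""
    return [random.uniform(lo, hi) for _ in range(length)]

def bounded_uniform_convolution(v, r, lo, hi, p=1.0):
    """Algorithm 8: add bounded uniform noise (half-range r) to a copy of v."""
    w = list(v)                                   # Copy: never modify the parent
    for i in range(len(w)):
        if random.random() <= p:                  # coin-flip of probability p
            while True:
                n = random.uniform(-r, r)
                if lo <= w[i] + n <= hi:          # resample until the result stays in bounds
                    break
            w[i] += n
    return w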
We now have a knob we can turn: r, the size of the bound on Tweak. If the size is very small, then Hill-Climbing will march right up a local hill and be unable to make the jump to the next hill because the bound is too small for it to jump that far. Once it's on the top of a hill, everywhere it jumps will be worse than where it is presently, so it stays put. Further, the rate at which it climbs the hill will be bounded by its small size. On the other hand, if the size is large, then Hill-Climbing will bounce around a lot. Importantly, when it is near the top of a hill, it will have a difficult time converging to the peak, as most of its moves will be so large as to overshoot the peak.

Thus small sizes of the bound move slowly and get caught in local optima; and large sizes of the bound bounce around too frenetically and cannot converge rapidly to finesse the very tops of peaks. Notice how similar this is to the step size α used in Gradient Ascent. This knob is one way of controlling the degree of Exploration versus Exploitation in our Hill-Climber. Optimization algorithms which make largely local improvements are exploiting the local gradient, and algorithms which mostly wander about randomly are thought to explore the space. As a rule of thumb: you'd like to use a highly exploitative algorithm (it's fastest), but the uglier the space, the more you will have no choice but to use a more explorative algorithm.
2.2 Single-State Global Optimization Algorithms
A global optimization algorithm is one which, if we run it long enough, will eventually find the global optimum. Almost always, the way this is done is by guaranteeing that, at the limit, every location in the search space will be visited. The single-state algorithms we've seen so far cannot guarantee this. This is because of our definition (for the moment) of Tweak: to make a small, bounded, but random change. Tweak wouldn't ever make big changes. If we're stuck in a sufficiently broad local optimum, Tweak may not be strong enough to get us out of it. Thus the algorithms so far have been local optimization algorithms.

There are many ways to construct a global optimization algorithm instead. Let's start with the simplest one possible: Random Search.
Algorithm 9 Random Search
1: Best ← some initial random candidate solution
2: repeat
3:     S ← a random candidate solution
4:     if Quality(S) > Quality(Best) then
5:         Best ← S
6: until Best is the ideal solution or we have run out of time
7: return Best
Random Search is the extreme in exploration (and global optimization); in contrast, Hill-Climbing (Algorithm 4), with Tweak set to just make very small changes and never make large ones, may be viewed as the extreme in exploitation (and local optimization). But there are ways to achieve reasonable exploitation and still have a global algorithm. Consider the following popular technique, called Hill-Climbing with Random Restarts, half-way between the two. We do Hill-Climbing for a certain random amount of time. Then when time is up, we start over with a new random location and do Hill-Climbing again for a different random amount of time. And so on. The algorithm:

7 Compare to Gradient Ascent with Restarts (Algorithm 3) and consider why we're doing random restarts now rather than gradient-based restarts. How do we know we're on the top of a hill now?
Figure 6 Four example quality functions: Unimodal, Noisy (or Hilly or Rocky), Needle in a Haystack, and Deceptive.
Algorithm 10 Hill-Climbing with Random Restarts
1: T ← distribution of possible time intervals
2: S ← some initial random candidate solution
3: Best ← S
4: repeat
5:     time ← random time in the near future, chosen from T
6:     repeat
7:         R ← Tweak(Copy(S))
8:         if Quality(R) > Quality(S) then
9:             S ← R
10:     until S is the ideal solution, or time is up, or we have run out of total time
11:     if Quality(S) > Quality(Best) then
12:         Best ← S
13:     S ← some random candidate solution
14: until Best is the ideal solution or we have run out of total time
15: return Best
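Here is one possible Python sketch of Algorithm 10; the interval distribution T is modeled, purely as an assumption, by a small function returning random time slices in seconds.

import random, time

def hill_climb_random_restarts(init, tweak, quality, total_seconds=5.0,
                               interval=lambda: random.uniform(0.01, 0.5)):
    """Sketch of Algorithm 10. interval() draws a hill-climbing time budget from T."""
    s = init()
    best = s
    end = time.time() + total_seconds
    while time.time() < end:
        deadline = min(end, time.time() + interval())    # this restart's time slice
        while time.time() < deadline:                     # ordinary hill-climbing
            r = tweak(s)
            if quality(r) > quality(s):
                s = r
        if quality(s) > quality(best):                    # stash the best found so far
            best = s
        s = init()                                        # random restart
    return best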
If the randomly-chosen time intervals are generally extremely long, this algorithm is basically one big Hill-Climber. Likewise, if the intervals are very short, we're basically doing random search (by resetting to random new locations each time). Moderate interval lengths run the gamut between the two. That's good, right?

It depends. Consider Figure 6. The first figure, labeled Unimodal, is a situation where Hill-Climbing is close to optimal, and where Random Search is a very bad pick. But for the figure labelled Noisy, Hill-Climbing is quite bad; and in fact Random Search is expected to be about
as good as you can do (not knowing anything about the functions beforehand). The difference is that in Unimodal there is a strong relationship between the distance (along the x axis) of two candidate solutions and their relationship in quality: similar solutions are generally similar in quality, and dissimilar solutions don't have any relationship per se. In the Noisy situation, there's no relationship like this: even similar solutions are very dissimilar in quality. This is often known as the smoothness criterion for local search to be effective.
This isn't sufficient though. Consider the figure labeled Needle in a Haystack, for which Random Search is the only real way to go, and Hill-Climbing is quite poor. What's the difference between this and Unimodal? After all, Needle in a Haystack is pretty smooth. For local search to be effective there must be an informative gradient which generally leads towards the best solutions. In fact, you can make highly uninformative gradients for which Hill-Climbing is spectacularly bad! In the figure labeled Deceptive, Hill-Climbing not only will not easily find the optimum, but it is actively led away from the optimum.
Thus there are some kinds of problems where making small local greedy changes does best; and other problems where making large, almost random changes does best. Global search algorithms run this gamut, and we've seen it before: Exploration versus Exploitation. Once again, as a rule of thumb: you'd like to use a highly exploitative algorithm (it's fastest), but the uglier the space, the more you will have no choice but to use a more explorative algorithm.

Here are some ways to create a global search algorithm, plus approaches to tweaking exploration vs. exploitation within that algorithm:
Adjust the Modification procedure: Tweak occasionally makes large, random changes.
    Why this is Global: If you run the algorithm long enough, this randomness will cause Tweak to eventually try every possible solution.
    Exploration vs. Exploitation: The more large, random changes, the more exploration.

Adjust the Selection procedure: Change the algorithm so that you can go down hills at least some of the time.
    Why this is Global: If you run the algorithm long enough, you'll go down enough hills that you'll eventually find the right hill to go up.
    Exploration vs. Exploitation: The more often you go down hills, the more exploration.

Jump to Something New: Every once in a while start from a new location.
    Why this is Global: If you try enough new locations, eventually you'll hit a hill which has the highest peak.
    Exploration vs. Exploitation: The more frequently you restart, the more exploration.

Use a Large Sample: Try many candidate solutions in parallel.
    Why this is Global: With enough parallel candidate solutions, one of them is bound to be on the highest peak.
    Exploration vs. Exploitation: More parallel candidate solutions, more exploration.
Let's look at some additional global optimizers. We'll focus on what I'm calling single-state optimizers which only keep around one candidate solution at a time. That is: no large sample.
2.3 Adjusting the Modification Procedure: (1+1), (1+λ), and (1, λ)

These three oddly named algorithms are forms of our Hill-Climbing procedures with variations of the Tweak operation to guarantee global optimization. They're actually degenerate cases of the more general (μ, λ) and (μ + λ) evolutionary algorithms discussed later (in Section 3.1).

The goal is simple: construct a Tweak operation which tends to tweak in small ways but occasionally makes larger changes, and can make any possible change. We'll mostly hill-climb, but also have the ability to, occasionally, jump far enough to land on other peaks. And there is a chance, however small, that the Hill-Climber will get lucky and Tweak will land right on the optimum.
Figure 7 Three Normal or Gaussian distributions N(μ, σ²) with the mean μ = 0 and the variance σ² set to 0.005, 0.02, and 0.1.
For example, imagine that we're back to representing solutions in the form of fixed-length vectors of real numbers. Previously our approach to Tweaking vectors was Bounded Uniform Convolution (Algorithm 8). The key word is bounded: it required you to choose between being small enough to finesse local peaks and being large enough to escape local optima. But a Gaussian (or Normal, or bell curve) distribution N(μ, σ²) lets you do both: usually it makes small numbers but sometimes it makes large numbers. Unless bounded, a Gaussian distribution will occasionally make very large numbers indeed. The distribution requires two parameters: the mean μ (usually 0) and the variance σ². The degree to which we emphasize small numbers over large ones can be controlled by simply changing the variance σ² of the distribution.

We can do this by adding to each number in the vector some random noise under a Gaussian distribution with a mean μ = 0. This is called Gaussian convolution. Most noise will be near 0, so the vector values won't change much. But occasional values could be quite large.
Algorithm 11 Gaussian Convolution
1: ~v ← vector ⟨v1, v2, ..., vl⟩ to be convolved
2: p ← probability of adding noise to an element in the vector        ⊳ Often p = 1
3: σ² ← variance of Normal distribution to convolve with        ⊳ Normal = Gaussian
4: min ← minimum desired vector element value
5: max ← maximum desired vector element value
6: for i from 1 to l do
7:     if p ≥ random number chosen uniformly from 0.0 to 1.0 then
8:         repeat
9:             n ← random number chosen from the Normal distribution N(0, σ²)
10:         until min ≤ vi + n ≤ max
11:         vi ← vi + n
12: return ~v
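A Python sketch of Algorithm 11 might look like this; note that Python's random.gauss takes a standard deviation, so the variance σ² is converted first. Names and defaults are illustrative.

import random

def gaussian_convolution(v, sigma2, lo, hi, p=1.0):
    """Sketch of Algorithm 11: Gaussian Convolution (sigma2 is the variance)."""
    sigma = sigma2 ** 0.5                    # random.gauss expects a standard deviation
    w = list(v)                              # work on a copy, never the parent
    for i in range(len(w)):
        if random.random() <= p:
            while True:
                n = random.gauss(0.0, sigma) # noise ~ N(0, sigma^2)
                if lo <= w[i] + n <= hi:     # resample until the result stays in bounds
                    break
            w[i] += n
    return w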
8 Karl Friedrich Gauss, 1777–1855, kid genius, physicist, and possibly the single most important mathematician ever.
9 A popular competitor with Gaussian convolution is polynomial mutation, from Kalyanmoy Deb and Samir Agrawal, 1999, A niched-penalty approach for constraint handling in genetic algorithms, in Proceedings of the International Conference on Artificial Neural Networks and Genetic Algorithms, pages 235–243, Springer. Warning: polynomial mutation has many variants. A popular one is from Kalyanmoy Deb, 2001, Multi-objective Optimization Using Evolutionary Algorithms, Wiley.
(1+1) is the name we give to basic Hill-Climbing (Algorithm 4) with this probabilistic-modified Tweak. (1+λ) is the name we give to a similarly modified Steepest Ascent Hill-Climbing (Algorithm 5). And (1, λ) is the name we give to the modified Steepest Ascent Hill-Climbing with Replacement (Algorithm 6). These names may seem cryptic now but will make more sense later (in Section 3.1).
Table 1 Simplistic description of the interaction of two factors and their effect on exploration versus exploitation. The factors are: the degree of noise in the Tweak operation; and the number of samples taken before adopting a new candidate solution. Few samples with high noise in Tweak push towards exploration; many samples with low noise in Tweak push towards exploitation.
As it turns out, Gaussian Convolution doesn't give us just one new knob (σ²) to adjust exploration vs. exploitation, but two knobs. Consider the Steepest Ascent Hill-Climbing with Replacement algorithm (Algorithm 6), where the value n specified how many children are generated from the parent candidate solution through Tweak. In the global version of this algorithm, (1, λ), the value of n interacts with σ² in an important way: if σ² is large (noisy), then the algorithm will search crazier locations: but a high value of n will aggressively weed out the poor candidates discovered at those locations. This is because if n is low, a poor quality candidate may still be the best of the n examined; but if n is high, this is much less likely. Thus while σ² is pushing for more exploration (at the extreme: random search), a high value of n is pushing for more exploitation. n is an example of what will later be called selection pressure. Table 1 summarizes this interaction.
Many random number generators provide facilities for selecting random numbers under Normal (Gaussian) distributions. But if yours doesn't, you can make two Gaussian random numbers at a time using the Box-Muller-Marsaglia Polar Method.
Algorithm 12 Sample from the Gaussian Distribution (Box-Muller-Marsaglia Polar Method)
1: μ ← desired mean of the Normal distribution        ⊳ Normal = Gaussian
2: σ² ← desired variance of the Normal distribution
3: repeat
4:     x ← random number chosen uniformly from −1.0 to 1.0
5:     y ← random number chosen uniformly from −1.0 to 1.0        ⊳ x and y should be independent
6:     w ← x² + y²
7: until 0 < w < 1        ⊳ Else we could divide by zero or take the square root of a negative number!
8: g ← μ + xσ √((−2 ln w) / w)        ⊳ It's σ, that is, √σ². Also, note that ln is log_e
9: h ← μ + yσ √((−2 ln w) / w)        ⊳ Likewise.
10: return g and h        ⊳ This method generates two random numbers at once. If you like, just use one.
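Here is a Python sketch of Algorithm 12; it also folds in the rescaling N(μ, σ²) = μ + σ N(0, 1) discussed below, so the caller can request any mean and variance. The function name and defaults are my own.

import math, random

def gaussian_pair(mu=0.0, sigma2=1.0):
    """Sketch of Algorithm 12: two independent N(mu, sigma2) samples via the polar method."""
    sigma = math.sqrt(sigma2)
    while True:
        x = random.uniform(-1.0, 1.0)
        y = random.uniform(-1.0, 1.0)
        w = x * x + y * y
        if 0.0 < w < 1.0:                    # reject points outside the unit circle (or at the origin)
            break
    scale = math.sqrt(-2.0 * math.log(w) / w)
    return mu + x * sigma * scale, mu + y * sigma * scale

# Usage: one pair of samples from N(0, 0.02)
g, h = gaussian_pair(mu=0.0, sigma2=0.02)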
Some random number generators (such as java.util.Random) only provide Gaussian random numbers from the standard normal distribution N(0, 1). You can convert these numbers to a Gaussian distribution for any mean μ and variance σ² or standard deviation σ you like very simply:

N(μ, σ²) = μ + √σ² N(0, 1) = μ + σ N(0, 1)
10 The method was first described in George Edward Pelham Box and Mervin Muller, 1958, A note on the generation of random normal deviates, The Annals of Mathematical Statistics, 29(2), 610–611. However the polar form of the method, as shown here, is usually ascribed to the mathematician George Marsaglia. There is a faster, but not simpler, method with a great, and apt, name: the Ziggurat Method.
2.4 Simulated Annealing
Simulated Annealing was developed by various researchers in the mid 1980s, but it has a famous lineage, being derived from the Metropolis Algorithm, developed by the ex-Manhattan Project scientists Nicholas Metropolis, Arianna and Marshall Rosenbluth, and Augusta and Edward Teller in 1953. The algorithm varies from Hill-Climbing (Algorithm 4) in its decision of when to replace S, the original candidate solution, with R, its newly tweaked child. Specifically: if R is better than S, we'll always replace S with R as usual. But if R is worse than S, we may still replace S with R with a certain probability P(t, R, S):

P(t, R, S) = e^((Quality(R) − Quality(S)) / t)

where t ≥ 0. That is, the algorithm sometimes goes down hills. This equation is interesting in two ways. Note that the fraction in the exponent is negative because R is worse than S. First, if R is much worse than S, the fraction is a larger negative number, and so the probability is close to 0. If R is very close to S, the probability is close to 1. Thus if R isn't much worse than S, we'll still select R with a reasonable probability.

Second, we have a tunable parameter t. If t is close to 0, the fraction is again a large negative number, and so the probability is close to 0. If t is high, the probability is close to 1. The idea is to initially set t to a high number, which causes the algorithm to move to every newly-created solution regardless of how good it is. We're doing a random walk in the space. Then t decreases slowly, eventually to 0, at which point the algorithm is doing nothing more than plain Hill-Climbing.
Algorithm 13 Simulated Annealing
1: t ← temperature, initially a high number
2: S ← some initial candidate solution
3: Best ← S
4: repeat
5:     R ← Tweak(Copy(S))
6:     if Quality(R) > Quality(S) or if a random number chosen from 0 to 1 < e^((Quality(R) − Quality(S)) / t) then
7:         S ← R
8:     Decrease t
9:     if Quality(S) > Quality(Best) then
10:         Best ← S
11: until Best is the ideal solution, we have run out of time, or t ≤ 0
12: return Best
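A compact Python sketch of Algorithm 13 follows; the initial temperature and the geometric cooling rule standing in for "Decrease t" are illustrative assumptions, since the schedule is left open here.

import math, random, time

def simulated_annealing(init, tweak, quality, t0=1.0, cooling=0.99, seconds=1.0):
    """Sketch of Algorithm 13. t0 and the geometric cooling schedule are illustrative choices."""
    s = init()
    best = s
    t = t0
    end = time.time() + seconds
    while time.time() < end and t > 0:
        r = tweak(s)
        # Always accept improvements; accept worse solutions with probability e^((q(r)-q(s))/t)
        if quality(r) > quality(s) or random.random() < math.exp((quality(r) - quality(s)) / t):
            s = r
        t *= cooling                         # one possible schedule for decreasing t
        if quality(s) > quality(best):
            best = s
    return best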
The rate at which we decrease t is called the algorithm's schedule. The longer we stretch out the schedule, the longer the algorithm resembles a random walk and the more exploration it does.
11 Nicholas Metropolis, Arianna Rosenbluth, Marshall Rosenbluth, Augusta Teller, and Edward Teller, 1953, Equation of state calculations by fast computing machines, Journal of Chemical Physics, 21, 1087–1091. And yes, Arianna and Marshall were married, as were Augusta and Edward. Now that's a paper! This gang also developed the Monte Carlo Method widely used in simulation. Edward Teller later became a major advocate for nuclear testing and is believed to be one of the inspirations for Dr. Strangelove. To make this Gordian knot even more convoluted, Augusta and Edward's grandson Eric Teller, who goes by Astro Teller, did a fair bit of early work in Genetic Programming (Section 4.3)! Astro also developed the graph-structured Neural Programming: see Footnote 55.
A later paper on Simulated Annealing which established it as a real optimization algorithm is Scott Kirkpatrick, Charles Daniel Gelatt Jr., and Mario Vecchi, 1983, Optimization by simulated annealing, Science, 220(4598), 671–680.
Simulated Annealing gets its name from annealing, a process of cooling molten metal. If you let metal cool rapidly, its atoms aren't given a chance to settle into a tight lattice and are frozen in a random configuration, resulting in brittle metal. If we decrease the temperature very slowly, the atoms are given enough time to settle into a strong crystal. Not surprisingly, t means temperature.
2.5 Tabu Search
Tabu Search, by Fred Glover, employs a different approach to doing exploration: it keeps around a history of recently considered candidate solutions (known as the tabu list) and refuses to return to those candidate solutions until they're sufficiently far in the past. Thus if we wander up a hill, we have no choice but to wander back down the other side because we're not permitted to stay at or return to the top of the hill.

The simplest approach to Tabu Search is to maintain a tabu list L, of some maximum length l, of candidate solutions we've seen so far. Whenever we adopt a new candidate solution, it goes in the tabu list. If the tabu list is too large, we remove the oldest candidate solution and it's no longer taboo to reconsider. Tabu Search is usually implemented as a variation on Steepest Ascent with Replacement (Algorithm 6). In the version below, we generate n tweaked children, but only consider the ones which aren't presently taboo. This requires a few little subtle checks:
Algorithm 14 Tabu Search
1: l ← desired maximum tabu list length
2: n ← number of tweaks desired to sample the gradient
3: S ← some initial candidate solution
4: Best ← S
5: L ← {} a tabu list of maximum length l        ⊳ Implemented as a first-in, first-out queue
6: Enqueue S into L
7: repeat
8:     if Length(L) > l then
9:         Remove oldest element from L
10:     R ← Tweak(Copy(S))
11:     for n − 1 times do
12:         W ← Tweak(Copy(S))
13:         if W ∉ L and (Quality(W) > Quality(R) or R ∈ L) then
14:             R ← W
15:     if R ∉ L and Quality(R) > Quality(S) then
16:         S ← R
17:         Enqueue R into L
18:     if Quality(S) > Quality(Best) then
19:         Best ← S
20: until Best is the ideal solution or we have run out of time
21: return Best
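The following Python sketch of Algorithm 14 tests tabu membership with plain equality, which is reasonable for discrete solutions; as noted below, real-valued vectors would need an approximate similarity test instead. The function name and defaults are my own choices.

import random, time
from collections import deque

def tabu_search(init, tweak, quality, n=8, max_tabu=50, seconds=1.0):
    """Sketch of Algorithm 14. Membership in the tabu list is tested with ==."""
    s = init()
    best = s
    tabu = deque([s], maxlen=max_tabu)        # bounded first-in, first-out tabu list
    end = time.time() + seconds
    while time.time() < end:
        r = tweak(s)
        for _ in range(n - 1):
            w = tweak(s)
            if w not in tabu and (quality(w) > quality(r) or r in tabu):
                r = w
        if r not in tabu and quality(r) > quality(s):
            s = r
            tabu.append(r)                    # adopted solutions become taboo; oldest drops off
        if quality(s) > quality(best):
            best = s
    return best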
12 Tabu is an alternate spelling for taboo. Glover also coined the word metaheuristics, and developed Scatter Search with Path Relinking (Section 3.3.5). Tabu Search showed up first in Fred Glover, 1986, Future paths for integer programming and links to artificial intelligence, Computers and Operations Research, 5, 533–549.
Tabu Search really only works in discrete spaces. What if your search space is real-valued numbers? Only in truly exceptional situations will you visit the same real-valued point in space twice, making the tabu list worthless. In this situation, one approach is to consider a solution to be a member of a list if it is sufficiently similar to an existing member of the list. The similarity distance measure will be up to you. See Section 6.4 for some ideas.

Even so, the big problem with Tabu Search is that if your search space is very large, and particularly if it's of high dimensionality, it's easy to stay around in the same neighborhood, indeed on the same hill, even if you have a very large tabu list. There may be just too many locations. An alternative approach is to create a tabu list not of candidate solutions you've considered before, but of changes you've made recently to certain features. For example, imagine if you're finding a solution to a graph problem like the Traveling Salesman Problem (see Section 8). You tweak a candidate solution to create a new one, by deleting edge A and adding edges B and C, and decide to adopt the new solution. Instead of placing the solution into the tabu list, you place the changes you made into the list. A, B, and C each go into the list. Now for a while, while you're thinking about new tweaks, you're not allowed to even consider adding or deleting A, B, or C. They're taboo for now.

To implement this, the big change we'll need to make is in the nature of the queue acting as our tabu list. No longer can the queue be a simple first-in first-out queue, because variable numbers of things will enter the queue at any time step. Instead we'll implement it as a set of tuples ⟨X, d⟩ where X is a feature we changed (for example Edge A), and d is the timestamp of when we made the change. Also, we can no longer simply test for membership in the queue. Instead, we'll have to hand the queue to the Tweak operation, so it knows which changes it's not allowed to make. Thus our revised version: Tweak(Copy(...), L). I call the new algorithm Feature-based Tabu Search.
Algorithm 15 Feature-based Tabu Search
1: l ← desired queue length
2: n ← number of tweaks desired to sample the gradient
3: S ← some initial candidate solution
4: Best ← S
5: L ← {}        ⊳ L will hold tuples of the form ⟨X, d⟩ where X is a feature and d is a timestamp
6: c ← 0
7: repeat
8:     c ← c + 1
9:     Remove from L all tuples of the form ⟨X, d⟩ where c − d > l        ⊳ The old ones
10:     R ← Tweak(Copy(S), L)        ⊳ Tweak will not shift to a feature in L
11:     for n − 1 times do
12:         W ← Tweak(Copy(S), L)
13:         if Quality(W) > Quality(R) then
14:             R ← W
15:     S ← R
16:     for each feature X modified by Tweak to produce R from S do
17:         L ← L ∪ {⟨X, c⟩}
18:     if Quality(S) > Quality(Best) then
19:         Best ← S
20: until Best is the ideal solution or we have run out of time
21: return Best
Feature-based Tabu Search is somewhat different from the other techniques described here in that it relies on the identifiability and separability of features found in candidate solutions, rather than considering each candidate solution as an atomic element except for Tweak purposes. We'll see this notion put to more heavy use in Combinatorial Optimization (Section 8).
2.6 Iterated Local Search
This is the present name for a concept which has been around, in many guises, since at least the 1980s. It's essentially a more clever version of Hill-Climbing with Random Restarts. Each time you do a random restart, the hill-climber then winds up in some (possibly new) local optimum. Thus we can think of Hill-Climbing with Random Restarts as doing a sort of random search through the space of local optima. We find a random local optimum, then another, then another, and so on, and eventually return the best optimum we ever discovered (ideally, it's a global optimum!)

Iterated Local Search (ILS) tries to search through this space of local optima in a more intelligent fashion: it tries to stochastically hill-climb in the space of local optima. That is, ILS finds a local optimum, then looks for a nearby local optimum and possibly adopts that one instead, then finds a new nearby local optimum, and so on. The heuristic here is that you can often find better local optima near to the one you're presently in, and walking from local optimum to local optimum in this way often outperforms just trying new locations entirely at random.
ILS pulls this off with two tricks. First, ILS doesn't pick new restart locations entirely at random. Rather, it maintains a home base local optimum of sorts, and selects new restart locations that are somewhat, though not excessively, in the vicinity of the home base local optimum. We want to restart far enough away from our current home base to wind up in a new local optimum, but not so far as to be picking new restart locations essentially at random. We want to be doing a walk rather than a random search.

Second, when ILS discovers a new local optimum, it decides whether to retain the current home base local optimum, or to adopt the new local optimum as the home base. If we always pick the new local optimum, we're doing a random walk (a sort of meta-exploration). If we only pick the new local optimum if it's better than our current one, we're doing hill-climbing (a sort of meta-exploitation). ILS often picks something in-between the two, as discussed later.

If you abstract these two tricks, ILS is very simple. The only complexity lies in determining when a local optimum has been discovered. Since this is often difficult, I will instead employ the same approach here as was used in random restarts: to set a timer. Hill-climb for a while, and then when the timer goes off, it's time to restart. This obviously doesn't guarantee that we've found the local optimum while hill-climbing, but if the timer is long enough, we're likely to be in the vicinity.

The algorithm is very straightforward: do hill-climbing for a while; then (when time is up) determine whether to adopt the newly discovered local optimum or to retain the current home base one (the NewHomeBase function); then from our new home base, make a very big Tweak (the Perturb function), which is ideally just large enough to likely jump to a new hill. The algorithm looks like this:
13 A good current summary of the technique can be found in Helena Lourenço, Olivier Martin, and Thomas Stützle, 2003, Iterated local search, in Fred Glover and Gary Kochenberger, editors, Handbook of Metaheuristics, pages 320–353, Springer. They trace the technique back as far as John Baxter, 1981, Local optima avoidance in depot location, Journal of the Operational Research Society, 32, 815–819.
14 I made up that name.
Algorithm 16 Iterated Local Search (ILS) with Random Restarts
1: T ← distribution of possible time intervals
2: S ← some initial random candidate solution
3: H ← S        ⊳ The current home base local optimum
4: Best ← S
5: repeat
6:     time ← random time in the near future, chosen from T
7:     repeat
8:         R ← Tweak(Copy(S))
9:         if Quality(R) > Quality(S) then
10:             S ← R
11:     until S is the ideal solution, or time is up, or we have run out of total time
12:     if Quality(S) > Quality(Best) then
13:         Best ← S
14:     H ← NewHomeBase(H, S)
15:     S ← Perturb(H)
16: until Best is the ideal solution or we have run out of total time
17: return Best
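A Python sketch of Algorithm 16 might look like this; the default NewHomeBase hill-climbs among local optima, and Perturb is supplied by the caller, since "big enough" is problem-specific. Names and time-interval defaults are assumptions.

import random, time

def iterated_local_search(init, tweak, perturb, quality, total_seconds=5.0,
                          interval=lambda: random.uniform(0.05, 0.5),
                          new_home_base=lambda h, s, q: s if q(s) >= q(h) else h):
    """Sketch of Algorithm 16. perturb(h) should make a much larger jump than tweak(s)."""
    s = init()
    home = s
    best = s
    end = time.time() + total_seconds
    while time.time() < end:
        deadline = min(end, time.time() + interval())
        while time.time() < deadline:             # ordinary hill-climbing phase
            r = tweak(s)
            if quality(r) > quality(s):
                s = r
        if quality(s) > quality(best):
            best = s
        home = new_home_base(home, s, quality)    # keep or adopt the new local optimum
        s = perturb(home)                         # big Tweak away from the home base
    return best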
Much of the thinking behind the choices of Perturb and NewHomeBase functions is a black art, determined largely by the nature of the particular problem being tackled. Here are some hints. The goal of the Perturb function is to make a very large Tweak, big enough to likely escape the current local optimum, but not so large as to be essentially a randomization. Remember that we'd like to fall onto a nearby hill. The meaning of big enough varies wildly from problem to problem.

The goal of the NewHomeBase function is to intelligently pick new starting locations. Just as global optimization algorithms in general lie between the extremes of exploration (random search and random walks) and exploitation (hill-climbing), the NewHomeBase should lie somewhere between these extremes when considering among local optima. At one extreme, the algorithm could always adopt the new local optimum, that is,

NewHomeBase(H, S) = S

This results in essentially a random walk from local optimum to local optimum. At the other extreme, the algorithm could only use the new local optimum if it's of equal or higher quality than the old one, that is,

NewHomeBase(H, S) = S if Quality(S) ≥ Quality(H), and H otherwise

This results, more or less, in a kind of hill-climbing among the local optima. Most ILS heuristics try to strike a middle-ground between the two. For example, ILS might hill-climb unless it hasn't seen a new and better solution in a while, at which point it starts doing random walks for a bit. There are other options of course: we could apply a Simulated Annealing approach to NewHomeBase, or a Tabu Search procedure of sorts.
15 Thus this function truly is a meta-heuristic. Finally a valid use of the term!
Mixing and Matching   The algorithms described in this section are not set in stone. There are lots of ways to mix and match them, or develop other approaches entirely. For example, it's not unreasonable to use Hill-Climbing with Random Restarts mixed with a (1 + 1)-style Tweak operation. You could also construct Steepest Ascent versions of Random Restarts. Tabu Search could be done in (1, λ) style. Or construct a Tweak procedure which slowly decreases Gaussian convolution's σ² according to a Simulated Annealing-style temperature. And so on. Be imaginative.
3 Population Methods
Population-based methods differ from the previous methods in that they keep around a sample of candidate solutions rather than a single candidate solution. Each of the solutions is involved in tweaking and quality assessment, but what prevents this from being just a parallel hill-climber is that candidate solutions affect how other candidates will hill-climb in the quality function. This could happen either by good solutions causing poor solutions to be rejected and new ones created, or by causing them to be Tweaked in the direction of the better solutions.

It may not be surprising that most population-based methods steal concepts from biology. One particularly popular set of techniques, collectively known as Evolutionary Computation (EC), borrows liberally from population biology, genetics, and evolution. An algorithm chosen from this collection is known as an Evolutionary Algorithm (EA). Most EAs may be divided into generational algorithms, which update the entire sample once per iteration, and steady-state algorithms, which update the sample a few candidate solutions at a time. Common EAs include the Genetic Algorithm (GA) and Evolution Strategies (ES); and there are both generational and steady-state versions of each. There are quite a few more alphabet-soup subalgorithms.

Because they are inspired by biology, EC methods tend to use (and abuse) terms from genetics and evolution. Because the terms are so prevalent, we'll use them in this and most further sections.
Definition 1 Common Terms Used in Evolutionary Computation

individual: a candidate solution
child and parent: a child is the Tweaked copy of a candidate solution (its parent)
population: set of candidate solutions
fitness: quality
fitness landscape: quality function
fitness assessment or evaluation: computing the fitness of an individual
selection: picking individuals based on their fitness
mutation: plain Tweaking. This is often thought of as asexual breeding.
recombination or crossover: a special Tweak which takes two parents, swaps sections of them, and (usually) produces two children. This is often thought of as sexual breeding.
breeding: producing one or more children from a population of parents through an iterated process of selection and Tweaking (typically mutation or recombination)
genotype or genome: an individual's data structure, as used during breeding
chromosome: a genotype in the form of a fixed-length vector
gene: a particular slot position in a chromosome
allele: a particular setting of a gene
phenotype: how the individual operates during fitness assessment
generation: one cycle of fitness assessment, breeding, and population reassembly; or the population produced each such cycle
Evolutionary Computation techniques are generally resampling techniques: new samples (populations) are generated or revised based on the results from older ones. In contrast, Particle Swarm Optimization, in Section 3.5, is an example of a directed mutation method, where candidate solutions in the population are modified, but no resampling occurs per se.
The basic generational evolutionary computation algorithm first constructs an initial population, then iterates through three procedures. First, it assesses the fitness of all the individuals in the population. Second, it uses this fitness information to breed a new population of children. Third, it joins the parents and children in some fashion to form a new next-generation population, and the cycle continues.
Algorithm 17 An Abstract Generational Evolutionary Algorithm (EA)
1: P ← Build Initial Population
2: Best ← □        ⊳ □ means nobody yet
3: repeat
4:     AssessFitness(P)
5:     for each individual Pi ∈ P do
6:         if Best = □ or Fitness(Pi) > Fitness(Best) then        ⊳ Remember, Fitness is just Quality
7:             Best ← Pi
8:     P ← Join(P, Breed(P))
9: until Best is the ideal solution or we have run out of time
10: return Best
Notice that, unlike the Single-State methods, we now have a separate AssessFitness function. This is because typically we need all the fitness values of our individuals before we can Breed them. So we have a certain location in the algorithm where their fitnesses are computed.

Evolutionary algorithms differ from one another largely in how they perform the Breed and Join operations. The Breed operation usually has two parts: Selecting parents from the old population, then Tweaking them (usually Mutating or Recombining them in some way) to make children. The Join operation usually either completely replaces the parents with the children, or includes fit parents along with their children to form the next generation.
Population Initialization   All the algorithms described here basically use the same initialization procedures, so it's worthwhile giving some tips. Initialization is typically just creating some n individuals at random. However, if you know something about the likely initial good regions of the space, you could bias the random generation to tend to generate individuals in those regions. In fact, you could seed the initial population partly with individuals of your own design. Be careful about such techniques: often, though you think you know where the good areas are, there's a good chance you don't. Don't put all your eggs in one basket: include a significant degree of uniform randomness in your initialization. More on this later on when we talk about representations (in Section 4.1.1).

It's also worthwhile to enforce diversity by guaranteeing that every individual in the initial population is unique. Each time you make a new individual, don't scan through the whole population to see if that individual's already been created: that's O(n²) and foolish. Instead, create a hash table which stores individuals as keys and anything arbitrary as values. Each time you make an individual, check to see if it's already in the hash table as a key. If it is, throw it away and make another one. Else, add the individual to the population, and hash it in the hash table. That's O(n).
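As a sketch of this hash-based initialization in Python (using a set in place of the hash table, and assuming the search space has far more than popsize distinct individuals, so the loop terminates):

import random

def unique_random_population(popsize, new_individual, key=tuple):
    """Build a population of distinct individuals using a hash-based membership test.
    key() must map an individual to something hashable (here: a tuple of its elements)."""
    population = []
    seen = set()                               # plays the role of the hash table
    while len(population) < popsize:
        ind = new_individual()
        k = key(ind)
        if k not in seen:                      # O(1) duplicate check instead of scanning
            seen.add(k)
            population.append(ind)
    return population

# Hypothetical usage with random boolean vectors of length 10:
pop = unique_random_population(20, lambda: [random.random() < 0.5 for _ in range(10)])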
16 Though it's usually simpler than this, the Join operation can be thought of as a kind of selection procedure, choosing from among the children and the parents to form the next generation. This general view of the Join operation is often called survival selection, while the selection portion of the Breed operation is often called parent selection.
3.1 Evolution Strategies
The family of algorithms known as Evolution Strategies (ES) was developed by Ingo Rechenberg and Hans-Paul Schwefel at the Technical University of Berlin in the mid 1960s. ES employs a simple procedure for selecting individuals called Truncation Selection, and (usually) only uses mutation as the Tweak operator.

Among the simplest ES algorithms is the (μ, λ) algorithm. We begin with a population of (typically) λ individuals, generated randomly. We then iterate as follows. First we assess the fitness of all the individuals. Then we delete from the population all but the μ fittest ones (this is all there is to Truncation Selection). Each of the μ fittest individuals gets to produce λ/μ children through an ordinary Mutation. All told we've created λ new children. Our Join operation is simple: the children just replace the parents, who are discarded. The iteration continues anew.

In short, μ is the number of parents which survive, and λ is the number of kids that the μ parents make in total. Notice that λ should be a multiple of μ. ES practitioners usually refer to their algorithm by the choice of μ and λ. For example, if μ = 5 and λ = 20, then we have a (5, 20) Evolution Strategy. Here's the algorithm pseudocode:
Algorithm 18 The (μ, λ) Evolution Strategy
1: μ ← number of parents selected
2: λ ← number of children generated by the parents
3: P ← {}
4: for λ times do        ⊳ Build Initial Population
5:     P ← P ∪ {new random individual}
6: Best ← □
7: repeat
8:     for each individual Pi ∈ P do
9:         AssessFitness(Pi)
10:         if Best = □ or Fitness(Pi) > Fitness(Best) then
11:             Best ← Pi
12:     Q ← the μ individuals in P whose Fitness( ) are greatest        ⊳ Truncation Selection
13:     P ← {}        ⊳ Join is done by just replacing P with the children
14:     for each individual Qj ∈ Q do
15:         for λ/μ times do
16:             P ← P ∪ {Mutate(Copy(Qj))}
17: until Best is the ideal solution or we have run out of time
18: return Best
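Here is a minimal Python sketch of Algorithm 18; mutate is assumed to return a fresh child rather than modifying its argument, and the generation-count stopping rule stands in for "until we have run out of time."

def mu_lambda_es(mu, lam, new_individual, mutate, fitness, generations=100):
    """Sketch of Algorithm 18, the (mu, lambda) Evolution Strategy. lam must be a multiple of mu."""
    population = [new_individual() for _ in range(lam)]
    best = None
    for _ in range(generations):
        for ind in population:
            if best is None or fitness(ind) > fitness(best):
                best = ind
        # Truncation Selection: keep only the mu fittest individuals
        parents = sorted(population, key=fitness, reverse=True)[:mu]
        # Join: the children completely replace the parents
        population = [mutate(p) for p in parents for _ in range(lam // mu)]
    return best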
Note the use of the function Mutate instead of Tweak. Recall that population-based methods have a variety of ways to perform the Tweak operation. The big two are mutation, which is just like the Tweaks we've seen before: convert a single individual into a new individual through a (usually small) random change; and recombination or crossover, in which multiple (typically two) individuals are mixed and matched to form children. We'll be using these terms in the algorithms from now on to indicate the Tweak performed.
17 Ingo Rechenberg, 1973, Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution, Frommann-Holzboog, Stuttgart, Germany. In German!
The (μ, λ) algorithm has three knobs with which we may adjust exploration versus exploitation. Figure 8 shows the effect of variations with these operations.

The size of λ. This essentially controls the sample size for each population, and is basically the same thing as the n variable in Steepest-Ascent Hill-Climbing With Replacement. At the extreme, as λ approaches ∞, the algorithm approaches exploration (random search).

The size of μ. This controls how selective the algorithm is; low values of μ with respect to λ push the algorithm more towards exploitative search as only the best individuals survive.

The degree to which Mutation is performed. If Mutate has a lot of noise, then new children fall far from the tree and are fairly random regardless of the selectivity of μ.
The second Evolution Strategy algorithm is called (μ + λ). It differs from (μ, λ) in only one respect: the Join operation. Recall that in (μ, λ) the parents are simply replaced with the children in the next generation. But in (μ + λ), the next generation consists of the μ parents plus the λ new children. That is, the parents compete with the kids next time around. Thus the next and all successive generations are μ + λ in size. The algorithm looks like this:
Algorithm 19 The (μ + λ) Evolution Strategy
1: μ ← number of parents selected
2: λ ← number of children generated by the parents
3: P ← {}
4: for λ times do
5:     P ← P ∪ {new random individual}
6: Best ← □
7: repeat
8:     for each individual Pi ∈ P do
9:         AssessFitness(Pi)
10:         if Best = □ or Fitness(Pi) > Fitness(Best) then
11:             Best ← Pi
12:     Q ← the μ individuals in P whose Fitness( ) are greatest
13:     P ← Q        ⊳ The Join operation is the only difference with (μ, λ)
14:     for each individual Qj ∈ Q do
15:         for λ/μ times do
16:             P ← P ∪ {Mutate(Copy(Qj))}
17: until Best is the ideal solution or we have run out of time
18: return Best
Generally speaking, (μ + λ) may be more exploitative than (μ, λ) because high-fitness parents persist to compete with the children. This has risks: a sufficiently fit parent may defeat other population members over and over again, eventually causing the entire population to prematurely converge to immediate descendants of that parent, at which point the whole population has been trapped in the local optimum surrounding the parent.
Figure 8 Three (μ, λ) Evolution Strategy variations: (1, 2), (1, 8), and (4, 8). Each generation, μ individuals are selected to breed, and each gets to create λ/μ children, resulting in λ children in total.
If you think about it, (μ + λ) resembles Steepest Ascent Hill-Climbing in that both of them allow the parent to compete against the children for supremacy in the next iteration. Whereas (μ, λ) resembles Steepest Ascent Hill-Climbing with Replacement in that the parents are replaced with the best children. This is more than a coincidence: the hill-climbers are essentially degenerate cases of the ES algorithms. Recall that with the right Tweak operator, plain Hill-Climbing becomes the (1 + 1) algorithm, Steepest Ascent Hill-Climbing with Replacement becomes (1, λ), and Steepest Ascent Hill-Climbing becomes (1 + λ). Armed with the explanation of the algorithms above, it should be a bit clearer why this is.
3.1.1 Mutation and Evolutionary Programming
Evolution Strategies historically employ a representation in the form of a fixed-length vector of real-valued numbers. Typically such vectors are initialized using something along the lines of Algorithm 7. Mutation is typically performed using Gaussian Convolution (Algorithm 11).

Gaussian Convolution is controlled largely by the distribution variance σ². The value of σ² is known as the mutation rate of an ES, and determines the noise in the Mutate operation. How do you pick a value for σ²? You might pre-select its value; or perhaps you might slowly decrease the value; or you could try to adaptively change σ² based on the current statistics of the system. If the system seems to be too exploitative, you could increase σ² to force some more exploration (or likewise decrease it to produce more exploitation). This notion of changing σ² is known as an adaptive mutation rate. In general, such adaptive breeding operators adjust themselves over time, in response to statistics gleaned from the optimization run.
18 Evolution Strategies have also long been associated with self-adaptive operators which are stochastically optimized along with individuals. For example, individuals might contain their own mutation procedures which can themselves be mutated along with the individual.
One old rule for changing σ² adaptively is known as the One-Fifth Rule, by Ingo Rechenberg, and it goes like this:

If more than 1/5 of children are fitter than their parents, then we're exploiting local optima too much, and we should increase σ².
If less than 1/5 of children are fitter than their parents, then we're exploring too much, and we should decrease σ².
If exactly 1/5 of children are fitter than their parents, don't change anything.

This rule was derived from the results of experiments with the (1 + 1) ES on certain simple test problems. It may not be optimal for more complex situations: but it's a good starting point.
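A tiny Python sketch of the rule follows; the multiplicative update factor is an illustrative assumption, since the rule itself only says to increase or decrease σ².

def one_fifth_rule(sigma2, success_ratio, factor=1.22):
    """Adjust the mutation variance after a batch of children, per the One-Fifth Rule.
    success_ratio is the fraction of children fitter than their parents; the update
    factor is an illustrative choice, not part of the rule itself."""
    if success_ratio > 0.2:
        return sigma2 * factor      # too exploitative: widen the mutation
    if success_ratio < 0.2:
        return sigma2 / factor      # too explorative: narrow the mutation
    return sigma2                   # exactly one fifth: leave it alone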
You don't have to do ES just with vectors. In fact, a little earlier than ES, an almost identical approach was developed by Larry Fogel at the National Science Foundation (Washington DC) and later developed in San Diego. The technique, called Evolutionary Programming (EP), differs from ES in two respects. First, it historically only used a (μ + λ) strategy with μ = λ. That is, half the population was eliminated, and that half was then filled in with children. Second, EP was applied to most any representation. From the very start Fogel was interested in evolving graph structures (specifically finite state automata, hence the programming). Thus the Mutate operation took the form of adding or deleting an edge, adding or deleting a node, relabeling an edge or a node, etc.

Such operations are reasonable as long as they have two features. First, to guarantee that the algorithm remains global, we must guarantee that, with some small probability, a parent can produce any child. Second, we ought to retain the feature that usually we make small changes likely to not deviate significantly in fitness; and only occasionally make large changes to the individual. The degree to which we tend to make small changes could be adjustable, like σ² was. We'll get to such representational issues for candidate solutions in detail in Section 4.
3.2 The Genetic Algorithm
The Genetic Algorithm (GA), often referred to as genetic algorithms, was invented by John Holland at the University of Michigan in the 1970s. It is similar to a (μ, λ) Evolution Strategy in many respects: it iterates through fitness assessment, selection and breeding, and population reassembly. The primary difference is in how selection and breeding take place: whereas Evolution Strategies select all the parents and then create the children, the Genetic Algorithm little-by-little selects a few parents and generates children until enough children have been created.

To breed, we begin with an empty population of children. We then select two parents from the original population, copy them, cross them over with one another, and mutate the results. This forms two children, which we then add to the child population. We repeat this process until the child population is entirely filled. Here's the algorithm in pseudocode.
19 Also in his evolution strategies text (see Footnote 17, p. 33).
20 Larry Fogel's dissertation was undoubtedly the first such thesis, if not the first major work, in the field of evolutionary computation. Lawrence Fogel, 1964, On the Organization of Intellect, Ph.D. thesis, University of California, Los Angeles.
21 Holland's book is one of the more famous in the field: John Holland, 1975, Adaptation in Natural and Artificial Systems, University of Michigan Press.
Algorithm 20 The Genetic Algorithm (GA)
1: popsize ← desired population size        ⊳ This is basically λ. Make it even.
2: P ← {}
3: for popsize times do
4:     P ← P ∪ {new random individual}
5: Best ← □
6: repeat
7:     for each individual Pi ∈ P do
8:         AssessFitness(Pi)
9:         if Best = □ or Fitness(Pi) > Fitness(Best) then
10:             Best ← Pi
11:     Q ← {}        ⊳ Here's where we begin to deviate from (μ, λ)
12:     for popsize/2 times do
13:         Parent Pa ← SelectWithReplacement(P)
14:         Parent Pb ← SelectWithReplacement(P)
15:         Children Ca, Cb ← Crossover(Copy(Pa), Copy(Pb))
16:         Q ← Q ∪ {Mutate(Ca), Mutate(Cb)}
17:     P ← Q        ⊳ End of deviation
18: until Best is the ideal solution or we have run out of time
19: return Best
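A Python sketch of Algorithm 20 might look as follows; it assumes individuals are list-like vectors, that select performs SelectWithReplacement (for example a tournament selection, discussed later), and that crossover returns two children. All names and defaults are illustrative.

def genetic_algorithm(popsize, new_individual, select, crossover, mutate, fitness,
                      generations=100):
    """Sketch of Algorithm 20. popsize should be even."""
    population = [new_individual() for _ in range(popsize)]
    best = None
    for _ in range(generations):
        for ind in population:
            if best is None or fitness(ind) > fitness(best):
                best = ind
        children = []
        while len(children) < popsize:
            pa = select(population, fitness)            # parent selection, with replacement
            pb = select(population, fitness)
            ca, cb = crossover(list(pa), list(pb))      # crossover works on copies
            children.extend([mutate(ca), mutate(cb)])
        population = children                           # the children replace the parents
    return best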
Though it can be applied to any kind of vector (and indeed many representations), the GA classically operated over fixed-length vectors of boolean values, just like ES were often applied to ones of floating-point values. For a moment, let's be pedantic about generation of new individuals. If the individual is a vector of floating-point values, creating a new random vector could be done just like in ES (that is, via Algorithm 7). If our representation is a boolean vector, we could do this:
Algorithm 21 Generate a Random Bit-Vector
1: ~v ← a new vector ⟨v1, v2, ..., vl⟩
2: for i from 1 to l do
3:     if 0.5 > a random number chosen uniformly between 0.0 and 1.0 inclusive then
4:         vi ← true
5:     else
6:         vi ← false
7: return ~v
3.2.1 Crossover and Mutation
Note how similar the Genetic Algorithm is to (μ, λ), except during the breeding phase. To perform breeding, we need two new functions we've not seen before: SelectWithReplacement and Crossover; plus of course Mutate. We'll start with Mutate. Mutating a real-valued vector could be done with Gaussian Convolution (Algorithm 11). How might you Mutate a boolean vector? One simple way is bit-flip mutation: march down the vector, and flip a coin of a certain probability (often 1/l, where l is the length of the vector). Each time the coin comes up heads, flip the bit:
Algorithm 22 Bit-Flip Mutation
1: p ← probability of flipping a bit        ⊳ Often p is set to 1/l
2: ~v ← boolean vector ⟨v1, v2, ..., vl⟩ to be mutated
3: for i from 1 to l do
4:     if p ≥ random number chosen uniformly from 0.0 to 1.0 inclusive then
5:         vi ← ¬(vi)
6: return ~v
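Minimal Python sketches of Algorithms 21 and 22, with the per-bit flip probability defaulting to 1/l as an illustrative choice:

import random

def random_bit_vector(length):
    """Algorithm 21: a random boolean vector."""
    return [random.random() < 0.5 for _ in range(length)]

def bit_flip_mutation(v, p=None):
    """Sketch of Algorithm 22: flip each bit independently with probability p (default 1/l)."""
    if p is None:
        p = 1.0 / len(v)
    return [(not bit) if random.random() <= p else bit for bit in v]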
Figure 9 One-Point Crossover.
Crossover is the Genetic Algorithm's distinguishing feature. It involves mixing and matching parts of two parents to form children. How you do that mixing and matching depends on the representation of the individuals. There are three classic ways of doing crossover in vectors: One-Point, Two-Point, and Uniform Crossover.

Let's say the vector is of length l. One-point crossover picks a number c between 1 and l, inclusive, and swaps all the indexes < c, as shown in Figure 9. The algorithm:
Algorithm 23 One-Point Crossover
1: ~v ← first vector ⟨v1, v2, ...vl⟩ to be crossed over
2: ~w ← second vector ⟨w1, w2, ...wl⟩ to be crossed over
3: c ← random integer chosen uniformly from 1 to l inclusive
4: if c ≠ 1 then
5:    for i from 1 to c − 1 do
6:        Swap the values of vi and wi
7: return ~v and ~w
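A corresponding Python sketch of one-point crossover, under the same caveats (hypothetical function name, Python's random module), might look like this. It returns crossed-over copies and leaves the parents untouched.

import random

def one_point_crossover(v, w):
    l = len(v)
    c = random.randint(1, l)        # c chosen uniformly from 1 to l inclusive
    v, w = v[:], w[:]               # work on copies
    # swap the first c-1 positions (the indexes below c, 1-based); c == 1 swaps nothing
    v[:c - 1], w[:c - 1] = w[:c - 1], v[:c - 1]
    return v, w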
Figure 10 Two-Point Crossover.

If c = 1 no crossover happens. This empty crossover occurs with 1/l probability. If you'd like to instead control this probability, you can pick c from between 2 to l inclusive and decide on your own when crossover will occur.
The problem with one-point crossover lies in the possible linkage (also called epistasis) among the elements in the vector. Notice that the probability is high that v1 and vl will be broken up due to crossover, as almost any choice of c will do it. Similarly, the probability that v1 and v2 will be broken up is quite small, as c must be equal to 2. If the organization of your vector was such that elements v1 and vl had to work well in tandem in order to get a high fitness, you'd be constantly breaking up good pairs that the system discovered. Two-point crossover is one way to clean up the linkage problem: just pick two numbers c and d, and swap the indexes between them. Figure 10 gives the general idea, and the pseudocode is below:
22 Though it's long since been used in various ways with Evolution Strategies as well.
Algorithm 24 Two-Point Crossover
1: ~v ← first vector ⟨v1, v2, ...vl⟩ to be crossed over
2: ~w ← second vector ⟨w1, w2, ...wl⟩ to be crossed over
3: c ← random integer chosen uniformly from 1 to l inclusive
4: d ← random integer chosen uniformly from 1 to l inclusive
5: if c > d then
6:    Swap c and d
7: if c ≠ d then
8:    for i from c to d − 1 do
9:        Swap the values of vi and wi
10: return ~v and ~w
As was the case for one-point crossover, when c = d you get an empty crossover (with 1/l probability). If you'd like to control the probability of this yourself, just force d to be different from c, and decide on your own when crossover happens.
It's not immediately obvious that two-point crossover would help things. But think of the vectors not as vectors but as rings (that is, vl is right next to v1). Two-point crossover breaks the rings at two spots and trades pieces. Since vl is right next to v1, the only way they'd break up is if c or d sliced right between them. The same situation as v1 and v2.23

Figure 11 Uniform Crossover.
Even so, there's still a further linkage problem. v1 and vl are now being treated fairly, but how about v1 and vl/2? Long distances like that are still more likely to be broken up than short distances like v1 and v2 (or indeed v1 and vl). We can treat all genes fairly with respect to linkage by crossing over each point independently of one another, using Uniform crossover. Here we simply march down the vectors, and swap individual indexes if a coin toss comes up heads with probability p.24
Algorithm 25 Uniform Crossover
1: p ← probability of swapping an index    ▷ Often p is set to 1/l. At any rate, p ≤ 0.5
2: ~v ← first vector ⟨v1, v2, ...vl⟩ to be crossed over
3: ~w ← second vector ⟨w1, w2, ...wl⟩ to be crossed over
4: for i from 1 to l do
5:    if p ≥ random number chosen uniformly from 0.0 to 1.0 inclusive then
6:        Swap the values of vi and wi
7: return ~v and ~w
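Uniform crossover is just as short in Python; again the name and the default p here are illustrative only (the pseudocode's p is often 1/l, and p should not exceed 0.5).

import random

def uniform_crossover(v, w, p=0.5):
    v, w = v[:], w[:]               # work on copies
    for i in range(len(v)):
        if random.random() <= p:    # swap this position with probability p
            v[i], w[i] = w[i], v[i]
    return v, w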
23 We can generalize two-point crossover into a Multi-Point Crossover: pick n random points and sort them, smallest first: c1, c2, ..., cn. Now swap indexes in the region between c1 and c2, and between c3 and c4, and likewise c5 and c6, etc.
24 The original uniform crossover assumed p = 1/2, and was first proposed in David Ackley, 1987, A Connectionist Machine for Genetic Hillclimbing, Kluwer Academic Publishers. The more general form, for arbitrary p, is sometimes called parameterized uniform crossover.
Crossover is not a global mutation. If you cross over two vectors you can't get every conceivable vector out of it. Imagine your vectors were points in space. Now imagine the hypercube formed with those points at its extreme corners. For example, if your vectors were 3-dimensional, they'd form the corners of a cube (or box) in space, as shown in Figure 12. All the crossovers so far are very constrained: they will result in new vectors which lie at some other corner of the hypercube.

Figure 12 A box in space formed by two three-dimensional vectors (black circles). The dashed line connects the two vectors.

By extension, imagine an entire population P as points in space (such as the three-dimensional space in Figure 12). Crossover done on P can only produce children inside the bounding box surrounding P in space. Thus P's bounding box can never increase: you're doomed to only search inside it. As we repeatedly perform crossover and selection on a population, it may reach the situation where certain alleles (values for certain positions in the vector) have been eliminated, and the bounding box will collapse in that dimension. Eventually the population will converge, and often (unfortunately) prematurely converge, to copies of the same individual. At this stage there's no escape: when an individual crosses over with itself, nothing new is generated.25 Thus to make the Genetic Algorithm global, you also need to have a Mutate operation.
What's the point of crossover then? Crossover was originally based on the premise that highly fit individuals often share certain traits, called building blocks, in common. For fixed-length vector individuals a building block was often defined as a collection of genes set to certain alleles. For example, in the boolean individual 10110101, perhaps ***101*1 might be a building block (where the * positions aren't part of the building block). In many problems for which crossover was helpful, the fitness of a given individual is often at least partly correlated to the degree to which it contains various of these building blocks, and so crossover works by spreading building blocks quickly throughout the population. Building blocks were the focus of much early genetic algorithm analysis, formalized in an area known as schema theory.
That's the idea anyway. But, hand-in-hand with this building-block hypothesis, crossover methods also assume that there is some degree of linkage26 between genes on the chromosome: that is, settings for certain genes in groups are strongly correlated to fitness improvement. For example, genes A and B might contribute to fitness only when they're both set to 1: if either is set to 0, then the fact that the other is set to 1 doesn't do anything. One- and Two-point Crossover also make the even more tenuous assumption that your vector is structured such that highly linked genes are located near to one another on the vector, because such crossovers are unlikely to break apart closely-located gene groups. Unless you have carefully organized your vector, this assumption is probably a bug, not a feature. Uniform Crossover also makes some linkage assumptions but does not have this linkage-location bias. Is the general linkage assumption true for your problem? Or are your genes essentially independent of one another? For most problems of interest, it's the former: but it's dicey. Be careful.
25 Crossovers which don't make anything new when an individual crosses over with itself are called homologous.
26 One special kind of linkage effect has its own term stolen straight from biology: epistasis. Here, genes A and B are linked because gene B has an effect on the expression of gene A (on the other hand, A may not affect B). The term epistasis can also be used more generally as a synonym for linkage.
In theory, you could perform uniform crossover with several vectors at once to produce children which are the combination of all of them.27 To avoid sheer randomization, probably you'd want only a bit of mixing to occur, so the probability of swapping any given index shouldn't be spectacularly high. Something like this is very rare in practice though. To do it, we first need to define how to uniformly randomly shuffle a vector. Surprisingly, it's not as obvious as you'd think.

Algorithm 26 Randomly Shuffle a Vector
1: ~p ← elements to shuffle ⟨p1, ..., pl⟩
2: for i from l down to 2 do    ▷ Note we don't go to 1
3:    j ← integer chosen at random from 1 to i inclusive
4:    Swap pi and pj
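Algorithm 26 is the classic Fisher-Yates shuffle. A Python sketch follows (Python's built-in random.shuffle does the same job); note the loop stops before the first element, just as the pseudocode stops at 2.

import random

def shuffle_in_place(p):
    for i in range(len(p) - 1, 0, -1):   # i runs from the last index down to 1; we don't go to 0
        j = random.randint(0, i)          # j chosen uniformly from 0 to i inclusive
        p[i], p[j] = p[j], p[i]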
Armed with a random shuffler (we'll use it in future algorithms too), we can now cross over k vectors at a time, trading pieces with one another, and producing k children as a result.

Algorithm 27 Uniform Crossover among K Vectors
1: p ← probability of swapping an index    ▷ Ought to be very small
2: W ← {W1, ..., Wk} vectors to cross over, each of length l
3: ~v ← vector ⟨v1, ..., vk⟩
4: for i from 1 to l do
5:    if p ≥ random number chosen uniformly from 0.0 to 1.0 inclusive then
6:        for j from 1 to k do    ▷ Load ~v with the ith elements from each vector in W
7:            ~w ← Wj
8:            vj ← wi
9:        Randomly Shuffle ~v
10:       for j from 1 to k do    ▷ Put back the elements, all mixed up
11:           ~w ← Wj
12:           wi ← vj
13:           Wj ← ~w
14: return W
3.2.2 More Recombination
So far we've been doing crossovers that are just swaps: but if the vectors are of floating-point values, our recombination could be something fuzzier, like averaging the two values rather than swapping them. Imagine if our two vectors were points in space. We draw a line between the two points and choose two new points between them. We could extend this line somewhat beyond the points as well, as shown in the dashed line in Figure 12, and pick along the line. This algorithm, known as Line Recombination, here presented in the form given by Heinz Mühlenbein and Dirk Schlierkamp-Voosen, depends on a variable p which determines how far out along the line we'll allow children to be. If p = 0 then the children will be located along the line within the hypercube (that is, between the two points). If p > 0 then the children may be located anywhere on the line, even somewhat outside of the hypercube.
27 There's nothing new under the sun: this was one of the early ES approaches tried by Hans-Paul Schwefel.
Algorithm 28 Line Recombination
1: p ← positive value which determines how far along the line a child can be located    ▷ Try 0.25
2: ~v ← first vector ⟨v1, v2, ...vl⟩ to be crossed over
3: ~w ← second vector ⟨w1, w2, ...wl⟩ to be crossed over
4: α ← random value from −p to 1 + p inclusive
5: β ← random value from −p to 1 + p inclusive
6: for i from 1 to l do
7:    t ← αvi + (1 − α)wi
8:    s ← βwi + (1 − β)vi
9:    if t and s are within bounds then
10:       vi ← t
11:       wi ← s
12: return ~v and ~w
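Here is a hedged Python sketch of line recombination. The bounds check is reduced to a single assumed per-gene range [low, high], which is my simplification; in practice you would test each gene against its own legal bounds.

import random

def line_recombination(v, w, p=0.25, low=0.0, high=1.0):
    a = random.uniform(-p, 1.0 + p)      # alpha, shared by every position
    b = random.uniform(-p, 1.0 + p)      # beta, shared by every position
    v, w = v[:], w[:]
    for i in range(len(v)):
        t = a * v[i] + (1.0 - a) * w[i]
        s = b * w[i] + (1.0 - b) * v[i]
        if low <= t <= high and low <= s <= high:   # keep only in-bounds results, as in the pseudocode
            v[i], w[i] = t, s
    return v, w

Intermediate recombination (Algorithm 29 below) differs only in drawing fresh a and b inside the loop for every position, retrying until the results are in bounds.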
We could extend this further by picking random α and β values for each position in the vector. This would result in children that are located within the hypercube or (if p > 0) slightly outside of it. Mühlenbein and Schlierkamp-Voosen call this Intermediate Recombination.28

Algorithm 29 Intermediate Recombination
1: p ← positive value which determines how far along the line a child can be located    ▷ Try 0.25
2: ~v ← first vector ⟨v1, v2, ...vl⟩ to be crossed over
3: ~w ← second vector ⟨w1, w2, ...wl⟩ to be crossed over
4: for i from 1 to l do
5:    repeat
6:        α ← random value from −p to 1 + p inclusive    ▷ We just moved these two lines!
7:        β ← random value from −p to 1 + p inclusive
8:        t ← αvi + (1 − α)wi
9:        s ← βwi + (1 − β)vi
10:   until t and s are within bounds
11:   vi ← t
12:   wi ← s
13: return ~v and ~w
Since we're using different values of α and β for each element, instead of rejecting recombination if the elements go out of bounds, we can now just repeatedly pick a new α and β.
Why bother with values of p > 0? Imagine that you have no Mutate operation, and are just doing Intermediate or Line Recombination. Each time you select parents and generate a child, that child is located somewhere within the cube formed by the parents (recall Figure 12). Thus it's impossible to generate a child outside the bounding box of the population. If you want to explore in those unknown regions, you need a way to generate children further out than your parents are.
28 Okay, they called them Extended Line and Extended Intermediate Recombination, in Heinz Mühlenbein and Dirk Schlierkamp-Voosen, 1993, Predictive models for the breeder genetic algorithm: I. continuous parameter optimization, Evolutionary Computation, 1(1). These methods have long been in evolutionary computation, but the terms are hardly standardized: notably Hans-Paul Schwefel's original Evolution Strategies work used (among others) line recombination with p = 0.5, but he called it intermediate recombination, as do others. Schwefel also tried a different variation: for each gene of the child, two parents were chosen at random, and their gene values at that gene were averaged.
Other Representations    So far we've focused on vectors. In Section 4 we'll get to other representations. For now, remember that if you can come up with a reasonable notion of Mutate, any representation is plausible. How might we do graph structures? Sets? Arbitrary-length lists? Trees?
3.2.3 Selection
In Evolution Strategies, we just lopped off all but the μ best individuals, a procedure known as Truncation Selection. Because the Genetic Algorithm performs iterative selection, crossover, and mutation while breeding, we have more options. Unlike Truncation Selection, the GA's SelectWithReplacement procedure can (by chance) pick certain Individuals over and over again, and it also can (by chance) occasionally select some low-fitness Individuals. In an ES an individual is the parent of a fixed and predefined number of children, but not so in a GA.

Figure 13 Array of individual ranges in Fitness-Proportionate Selection.
The original SelectWithReplacement technique for GAs was called Fitness-Proportionate Selection, sometimes known as Roulette Selection. In this algorithm, we select individuals in proportion to their fitness: if an individual has a higher fitness, it's selected more often.29 To do this we size the individuals according to their fitness as shown in Figure 13.30 Let s = ∑i fi be the sum fitness of all the individuals. A random number from 0 to s falls within the range of some individual, which is then selected.
Algorithm 30 Fitness-Proportionate Selection
1: perform once per generation
2:    global ~p ← population copied into a vector of individuals ⟨p1, p2, ..., pl⟩
3:    global ~f ← ⟨f1, f2, ..., fl⟩ fitnesses of individuals in ~p in the same order as ~p    ▷ Must all be ≥ 0
4:    if ~f is all 0.0s then    ▷ Deal with all 0 fitnesses gracefully
5:        Convert ~f to all 1.0s
6:    for i from 2 to l do    ▷ Convert ~f to a CDF. This will also cause fl = s, the sum of fitnesses
7:        fi ← fi + fi−1
8: perform each time
9:    n ← random number from 0 to fl inclusive
10:   for i from 2 to l do    ▷ This could be done more efficiently with binary search
11:       if fi−1 < n ≤ fi then
12:           return pi
13:   return p1
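The two-phase structure (build a CDF once per generation, then sample from it repeatedly) translates directly into Python. This sketch uses the standard bisect module for the binary search the text notes would be smarter than a linear scan; the names are mine.

import random
import bisect

def make_cdf(fitnesses):
    # Convert fitnesses (all >= 0) into a running sum; all-zero fitnesses become uniform
    if all(f == 0.0 for f in fitnesses):
        fitnesses = [1.0] * len(fitnesses)
    cdf, total = [], 0.0
    for f in fitnesses:
        total += f
        cdf.append(total)
    return cdf

def fitness_proportionate_select(population, cdf):
    n = random.uniform(0.0, cdf[-1])                  # random point in the total fitness range
    return population[bisect.bisect_left(cdf, n)]     # first individual whose range contains n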
29 We presume here that fitnesses are ≥ 0. As usual, higher is better.
30 Also due to John Holland. See Footnote 21, p. 36.

Notice that Fitness-Proportionate Selection has a preprocessing step: converting all the fitnesses (or really copies of them) into a cumulative distribution. This only needs to be done once per generation. Additionally, though the code searches linearly through the fitness array to find the one we want, it'd be smarter to do that in O(lg n) time by doing a binary search instead.
Figure 14 Array of individual ranges, start range, and chosen points in Stochastic Universal Sampling.

One variant on Fitness-Proportionate Selection is called Stochastic Universal Sampling (or SUS), by James Baker. In SUS, we select in a fitness-proportionate way but biased so that fit individuals always get picked at least once. This is known as a low variance resampling algorithm and I include it here because it is now popular in other venues than just evolutionary computation (most famously, Particle Filters).31
SUS selects n individuals at a time (typically n is the size of the next generation, so in our case n = l). To begin, we build our fitness array as before. Then we select a random position from 0 to s/n. We then select the individual which straddles that position. We then increment the position by s/n and repeat (up to n times total). Each increment, we select the individual in whose fitness region we landed. This is shown in Figure 14. The algorithm is:
Algorithm 31 Stochastic Universal Sampling
1: perform once per n individuals produced    ▷ Usually n = l, that is, once per generation
2:    global ~p ← copy of vector of individuals (our population) ⟨p1, p2, ..., pl⟩, shuffled randomly    ▷ To shuffle a vector randomly, see Algorithm 26
3:    global ~f ← ⟨f1, f2, ..., fl⟩ fitnesses of individuals in ~p in the same order as ~p    ▷ Must all be ≥ 0
4:    global index ← 0
5:    if ~f is all 0.0s then
6:        Convert ~f to all 1.0s
7:    for i from 2 to l do    ▷ Convert ~f to a CDF. This will also cause fl = s, the sum of fitnesses.
8:        fi ← fi + fi−1
9:    global value ← random number from 0 to fl/n inclusive
10: perform each time
11:   while findex < value do
12:       index ← index + 1
13:   value ← value + fl/n
14:   return pindex
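A Python sketch of SUS follows. As in the pseudocode, the population should be shuffled first (Algorithm 26); the function below simply assumes it already has been, and the names are mine.

import random

def stochastic_universal_sampling(population, fitnesses, n):
    if all(f == 0.0 for f in fitnesses):
        fitnesses = [1.0] * len(fitnesses)
    cdf, total = [], 0.0
    for f in fitnesses:
        total += f
        cdf.append(total)
    step = cdf[-1] / n                      # s/n
    value = random.uniform(0.0, step)       # one random starting offset
    chosen, index = [], 0
    for _ in range(n):
        while cdf[index] < value:           # advance to the individual straddling this position
            index += 1
        chosen.append(population[index])
        value += step
    return chosen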
There are basically two advantages to SUS. First, it's O(n) to select n individuals, rather than O(n lg n) for Fitness-Proportionate Selection. That used to be a big deal but it isn't any more, since the lion's share of time in most optimization algorithms is spent in assessing the fitness of individuals, not in the selection or breeding processes. Second and more interesting, SUS guarantees that if an individual is fairly fit (over s/n in size), it'll get chosen for sure, sometimes multiple times. In Fitness-Proportionate Selection even the fittest individual may never be selected.
31 And they never seem to cite him. Here it is: James Edward Baker, 1987, Reducing bias and inefficiency in the selection algorithm, in John Grefenstette, editor, Genetic Algorithms and Their Applications: Proceedings of the Second International Conference on Genetic Algorithms (ICGA), pages 14-21, Lawrence Erlbaum Associates, Hillsdale.
There is a big problem with the methods described so far: they presume that the actual fitness value of an individual really means something important. But often we choose a fitness function such that higher ones are better than smaller ones, and don't mean to imply anything else. Even if the fitness function was carefully chosen, consider the following situation, where a fitness function goes from 0 to 10. Near the end of a run, all the individuals have values like 9.97, 9.98, 9.99, etc. We want to finesse the peak of the fitness function, and so we want to pick the 9.99-fitness individual. But to Fitness-Proportionate Selection (and to SUS), all these individuals will be selected with nearly identical probability. The system has converged to just doing random selection.
To fix this we could scale the fitness function to be more sensitive to the values at the top end of the function. But to really remedy the situation we need to adopt a non-parametric selection algorithm which throws away the notion that fitness values mean anything other than bigger is better, and just considers their rank ordering. Truncation Selection does this, but the most popular technique by far is Tournament Selection,32 an astonishingly simple algorithm:
Algorithm 32 Tournament Selection
1: P ← population
2: t ← tournament size, t ≥ 1
3: Best ← individual picked at random from P with replacement
4: for i from 2 to t do
5:    Next ← individual picked at random from P with replacement
6:    if Fitness(Next) > Fitness(Best) then
7:        Best ← Next
8: return Best
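In Python, Tournament Selection is a few lines; here fitness is assumed to be a function you supply that maps an individual to its (already assessed) fitness.

import random

def tournament_select(population, fitness, t=2):
    best = random.choice(population)            # pick with replacement
    for _ in range(t - 1):
        challenger = random.choice(population)
        if fitness(challenger) > fitness(best):
            best = challenger
    return best

# e.g. tournament_select(pop, lambda ind: ind.fitness, t=7), assuming (hypothetically)
# that your individuals carry a .fitness attribute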
We return the fittest individual of some t individuals picked at random, with replacement, from the population. That's it! Tournament Selection has become the primary selection technique used for the Genetic Algorithm and many related methods, for several reasons. First, it's not sensitive to the particulars of the fitness function. Second, it's dead simple, requires no preprocessing, and works well with parallel algorithms. Third, it's tunable: by setting the tournament size t, you can change how selective the technique is. At the extremes, if t = 1, this is just random search. If t is very large (much larger than the population size itself), then the probability that the fittest individual in the population will appear in the tournament approaches 1.0, and so Tournament Selection just picks the fittest individual each time (put another way, it approaches Truncation Selection with μ = 1).
In the Genetic Algorithm, the most popular setting is t = 2. For certain representations (such as those in Genetic Programming, discussed later in Section 4.3), it's common to be more selective (t = 7). To be less selective than t = 2, but not be totally random, we'd need some kind of trick. One way I do it is to also allow real-numbered values of t from 1.0 to 2.0. In this range, with probability t − 1.0, we do a tournament selection of size t = 2, else we select an individual at random (t = 1).33
32 Tournament Selection may be a folk algorithm: but the earliest usage I'm aware of is Anne Brindle, 1981, Genetic Algorithms for Function Optimization, Ph.D. thesis, University of Alberta. She used binary tournament selection (t = 2).
33 You could generalize this to any real-valued t ≥ 1.0: with probability t − ⌊t⌋ select with size ⌈t⌉, else with size ⌊t⌋.
3.3 Exploitative Variations
It seems the trend in new algorithms is to be more exploitative. Some variations such as Elitism, the Steady-State Genetic Algorithm (and Generation Gap methods), and the Genetic Algorithm with a Tree-Style Genetic Programming Pipeline, are exploitative because highly-fit parents can linger in the population and compete with their children, like (μ + λ). Other variations are exploitative because they directly augment evolution with hill-climbing: for example, certain kinds of Hybrid Optimization Algorithms, and a method called Scatter Search with Path Relinking. We discuss all these next.
3.3.1 Elitism
Elitism is simple: we augment the Genetic Algorithm to directly inject into the next population the fittest individual or individuals from the previous population.34 These individuals are called the elites. By keeping the best individual (or individuals) around in future populations, this algorithm begins to resemble (μ + λ), and has similar exploitation properties. This exploitation can cause premature convergence if not kept in check: perhaps by increasing the mutation and crossover noise, or weakening the selection pressure, or reducing how many elites are being stored.
A minor catch. If you want to maintain a population size of popsize, and you're doing crossover, you'll need to have popsize, minus the number of elites, be divisible by two, as in this algorithm:
Algorithm 33 The Genetic Algorithm with Elitism
1: popsize ← desired population size
2: n ← desired number of elite individuals    ▷ popsize − n should be even
3: P ← {}
4: for popsize times do
5:    P ← P ∪ {new random individual}
6: Best ← □
7: repeat
8:    for each individual Pi ∈ P do
9:        AssessFitness(Pi)
10:       if Best = □ or Fitness(Pi) > Fitness(Best) then
11:           Best ← Pi
12:   Q ← {the n fittest individuals in P, breaking ties at random}
13:   for (popsize − n)/2 times do
14:       Parent Pa ← SelectWithReplacement(P)
15:       Parent Pb ← SelectWithReplacement(P)
16:       Children Ca, Cb ← Crossover(Copy(Pa), Copy(Pb))
17:       Q ← Q ∪ {Mutate(Ca), Mutate(Cb)}
18:   P ← Q
19: until Best is the ideal solution or we have run out of time
20: return Best
34 Elitism was coined by Ken De Jong in his thesis (see Footnote 36, p. 48).
Or you can just throw away an extra crossed-over child if it'd put you over the population size, as is done in The Genetic Algorithm (Tree-Style Genetic Programming Pipeline) (Algorithm 35, Section 3.3.3).
Elitism is very common. For example, most major multiobjective algorithms (Section 7) are strongly elitist. Many recent Ant Colony Optimization algorithms (ACO, Section 8.3) are also elitist. And of course anything resembling (μ + λ), including Scatter Search (Section 3.3.5), is heavily elitist. Even Particle Swarm Optimization (PSO, Section 3.5) has a kind of elitism in its own regard.
3.3.2 The Steady-State Genetic Algorithm
An alternative to a traditional generational approach to the Genetic Algorithm is to use a steady-state approach, updating the population in a piecemeal fashion rather than all at one time. This approach was popularized by Darrell Whitley and Joan Kauth's GENITOR system. The idea is to iteratively breed a new child or two, assess their fitness, and then reintroduce them directly into the population itself, killing off some preexisting individuals to make room for them. Here's a version which uses crossover and generates two children at a time:
Algorithm 34 The Steady-State Genetic Algorithm
1: popsize ← desired population size
2: P ← {}
3: for popsize times do
4:    P ← P ∪ {new random individual}
5: Best ← □
6: for each individual Pi ∈ P do
7:    AssessFitness(Pi)
8:    if Best = □ or Fitness(Pi) > Fitness(Best) then
9:        Best ← Pi
10: repeat
11:   Parent Pa ← SelectWithReplacement(P)    ▷ We first breed two children Ca and Cb
12:   Parent Pb ← SelectWithReplacement(P)
13:   Children Ca, Cb ← Crossover(Copy(Pa), Copy(Pb))
14:   Ca ← Mutate(Ca)
15:   Cb ← Mutate(Cb)
16:   AssessFitness(Ca)    ▷ We next assess the fitness of Ca and Cb
17:   if Fitness(Ca) > Fitness(Best) then
18:       Best ← Ca
19:   AssessFitness(Cb)
20:   if Fitness(Cb) > Fitness(Best) then
21:       Best ← Cb
22:   Individual Pd ← SelectForDeath(P)
23:   Individual Pe ← SelectForDeath(P)    ▷ Pd must be ≠ Pe
24:   P ← P − {Pd, Pe}    ▷ We then delete Pd and Pe from the population
25:   P ← P ∪ {Ca, Cb}    ▷ Finally we add Ca and Cb to the population
26: until Best is the ideal solution or we have run out of time
27: return Best
The Steady-State Genetic Algorithm has two important features. First, it uses half the memory of a traditional genetic algorithm because there is only one population at a time (no Q, only P). Second, it is fairly exploitative compared to a generational approach: the parents stay around in the population, potentially for a very long time, and thus, like μ + λ and Elitism, this runs the risk of causing the system to prematurely converge to largely copies of a few highly fit individuals. This may be exaggerated by how we decide to SelectForDeath. If we tend to select unfit individuals for death (using, for example, a Tournament Selection based on the least fit in the tournament), this can push diversity out of the population even faster. More commonly, we might simply select individuals at random for death. Thus the fit culprits in premature convergence can eventually be shoved out of the population.35 If we want less exploitation, we may do the standard tricks: use a relatively unselective operator for SelectWithReplacement, and make Crossover and Mutate noisy.
We could of course generalize this algorithm to replace not just two individuals but some n individuals all at once. Methods using large values of n (perhaps 50% of the total population size or more) are often known as Generation Gap Algorithms,36 after Ken De Jong. As n approaches 100%, we get closer and closer to a plain generational algorithm.
3.3.3 The Tree-Style Genetic Programming Pipeline
Genetic Programming (discussed in Section 4.3) is a community interested in using metaheuristics to find highly fit computer programs. The most common form of Genetic Programming, Tree-Style Genetic Programming, uses trees as its representation. When doing Tree-Style Genetic Programming it's traditional, but hardly required, to use a variant of The Genetic Algorithm with a special breeding technique due to John Koza.37 Rather than performing crossover and then mutation, this algorithm first flips a coin. With 90% probability it selects two parents and performs only crossover. Otherwise, it selects one parent and directly copies the parent into the new population. It's this direct copying which makes this a strongly exploitative variant.
A few items of note. First, there's no mutation: this is not a global algorithm. However the peculiar version of crossover used in Tree-Style Genetic Programming is so mutative that in practice mutation is rarely needed. Second, this algorithm could produce one more child than is needed: just discard it. Third, traditionally the selection procedure is one that is highly selective: Genetic Programming usually employs Tournament Selection with a tournament size t = 7. Here we go:
35 An interesting question to ask: assuming we have enough memory, why bother deleting individuals at all?
36 There's a lot of history here. Early ES work employed the now-disused (μ + 1) evolution strategy, where μ parents (the population) work together to create one new child (see Footnote 17, p. 33). Ken De Jong did early studies of generation gap methods in Kenneth De Jong, 1975, An Analysis of the Behaviour of a Class of Genetic Adaptive Systems, Ph.D. thesis, University of Michigan. GENITOR later popularized the notion of steady-state algorithms. Darrell Whitley and Joan Kauth, 1988, GENITOR: A different genetic algorithm, Technical Report CS-88-101, Colorado State University.
37 John R. Koza, 1992, Genetic Programming: On the Programming of Computers by Means of Natural Selection, MIT Press.
Algorithm 35 The Genetic Algorithm (Tree-Style Genetic Programming Pipeline)
1: popsize ← desired population size
2: r ← probability of performing direct reproduction    ▷ Usually r = 0.1
3: P ← {}
4: for popsize times do
5:    P ← P ∪ {new random individual}
6: Best ← □
7: repeat
8:    for each individual Pi ∈ P do
9:        AssessFitness(Pi)
10:       if Best = □ or Fitness(Pi) > Fitness(Best) then
11:           Best ← Pi
12:   Q ← {}
13:   repeat    ▷ Here's where we begin to deviate from The Genetic Algorithm
14:       if r ≥ a random number chosen uniformly from 0.0 to 1.0 inclusive then
15:           Parent Pi ← SelectWithReplacement(P)
16:           Q ← Q ∪ {Copy(Pi)}
17:       else
18:           Parent Pa ← SelectWithReplacement(P)
19:           Parent Pb ← SelectWithReplacement(P)
20:           Children Ca, Cb ← Crossover(Copy(Pa), Copy(Pb))
21:           Q ← Q ∪ {Ca}
22:           if ||Q|| < popsize then
23:               Q ← Q ∪ {Cb}
24:   until ||Q|| = popsize    ▷ End Deviation
25:   P ← Q
26: until Best is the ideal solution or we have run out of time
27: return Best
3.3.4 Hybrid Optimization Algorithms
There are many, many ways in which we can create hybrids of various metaheuristic algorithms, but perhaps the most popular approach is a hybrid of evolutionary computation and a local improver such as hill-climbing.
The EA could go in the inner loop and the hill-climber outside: for example, we could extend Iterated Local Search (ILS, Section 2.6) to use a population method in its inner loop, rather than a hill-climber, but retain the Perturb hill-climber in the outer loop.
But by far the most common approach is the other way around: augment an EA with some hill-climbing during the fitness assessment phase to revise each individual as it is being assessed. The revised individual replaces the original one in the population. Any EA can be so augmented: below is the abstract EA from Algorithm 17 converted into a Hybrid Algorithm.
Algorithm 36 An Abstract Hybrid Evolutionary and Hill-Climbing Algorithm
1: t ← number of iterations to Hill-Climb
2: P ← Build Initial Population
3: Best ← □
4: repeat
5:    AssessFitness(P)
6:    for each individual Pi ∈ P do
7:        Pi ← Hill-Climb(Pi) for t iterations    ▷ Replace Pi in P
8:        if Best = □ or Fitness(Pi) > Fitness(Best) then
9:            Best ← Pi
10:   P ← Join(P, Breed(P))
11: until Best is the ideal solution or we have run out of time
12: return Best
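The Hill-Climb step in line 7 might be realized as below, a minimal Python sketch assuming you supply Tweak, Copy, and fitness functions for your representation; it is not tied to any particular one.

def hill_climb(individual, fitness, tweak, copy, t):
    best = copy(individual)
    for _ in range(t):                       # t is the exploitation knob discussed below
        candidate = tweak(copy(best))
        if fitness(candidate) > fitness(best):
            best = candidate
    return best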
The length of t, of course, is a knob that adjusts the degree of exploitation in the algorithm. If t is very long, then we're doing more hill-climbing and thus more exploiting; whereas if t is very short, then we're spending more time in the outer algorithm and thus doing more exploring.
There are many other ways to mix an exploitative (and likely local) algorithm with an explorative (usually global) algorithm. We've already seen one example: Hill-Climbing with Random Restarts (Algorithm 10), which combines a local searching algorithm (Hill-Climbing) with a global algorithm (Random Search). Another hybrid, Iterated Local Search (Algorithm 16), places Hill-Climbing inside another, more explorative Hill-Climber. Indeed, the local-improvement algorithm doesn't even have to be a metaheuristic: it could be a machine learning or heuristic algorithm, for example. In general, the overall family of algorithms that combines some kind of global optimization algorithm with some kind of local improvement algorithm in some way... is often saddled with an ill-considered name: Memetic Algorithms.38 Though this term encompasses a fairly broad category of stuff, the lion's share of memetic algorithms in the literature have been hybrids of global search (often evolutionary computation) and hill-climbing: and that's usually how it's thought of, I think.
Perhaps a better term we might use to describe such algorithms could be Lamarckian Algorithms. Jean-Baptiste Lamarck was a French biologist around the time of the American revolution who proposed an early but mistaken notion of evolution. His idea was that after individuals improved themselves during their lifetimes, they then passed those traits genetically to their offspring. For example, horse-like animals in Africa might strain to reach fruit in trees, stretching their necks. These slightly longer necks were then passed to their offspring. After several generations of stretching, behold the giraffe. Similarly, these kinds of hybrid algorithms often work by individuals improving themselves during fitness assessment and then passing on their improvements to their children.
38 In my opinion, Memetic Algorithms have little to do with memes, a Richard Dawkins notion which means ideas that replicate by causing their recipients to forward them to others. Examples include everything from religions to email chain letters. The term memetic algorithms was notionally justified because memetic algorithm individuals are improved locally, just as memes might be improved by humans before passing them on. But I think the distinguishing feature of memes isn't local improvement: it's replication, even parasitic replication. Nothing in memetic algorithms gets at this. Richard Dawkins first coined the term meme in Richard Dawkins, 1976, The Selfish Gene, Oxford University Press. The term memetic algorithms was coined in Pablo Moscato, 1989, On evolution, search, optimization, genetic algorithms and martial arts: Towards memetic algorithms, Technical Report 15879, Caltech Concurrent Computation Program, California Institute of Technology.
Another reasonable name would be a Baldwin Effect Algorithm, named after a more plausible variation of Lamarckianism that has found its place in real evolutionary theory. Much later on we'll see another example of a Lamarckian algorithm in SAMUEL, an algorithm for optimizing policies in Section 10.3 with special local-improvement operators.
Another approach to hybridization is to alternate between two disjoint algorithms. For example, the Learnable Evolution Model (LEM), discussed later in Section 9.1, alternates between evolution and a machine-learning classification technique.
Still another kind of hybrid algorithm, perhaps less aimed at exploitation, is to have one metaheuristic optimize the runtime parameters of another metaheuristic. For example, we could use a genetic algorithm to search for the optimal mutation rate, crossover type, etc., for a second genetic algorithm running on a problem of interest.39 These methods were originally studied under the name Meta-Genetic Algorithms,40 or more generally Meta-Optimization, techniques in the oddly-named family of Hyperheuristics.41 Some hyperheuristics focus not just on optimizing parameters for another optimization procedure, but on optimizing which optimization procedure should be used in the first place.
If you're thinking that hyperheuristics are absurdly expensive, you'd be right. The original thinking behind these techniques was that researchers nearly always do some optimization by hand anyway: if you're going to do a whole lot of runs using a genetic algorithm and a particular problem family, you're likely to play around with the settings up front to get the genetic algorithm tuned well for those kinds of problems. And if this is the case, why not automate the process?
This thinking suggests that the end product of a hyperheuristic would be a set of parameter settings which you can then use later on. But in some limited situations it might make sense to apply a hyperheuristic to obtain an optimal end solution. For example, suppose you had a moderate number of computers available to you and were planning on running a great many optimization runs on them and then returning the best result you discover. You don't know the best settings for these runs. But if you're going to do all those runs anyway, perhaps you might consider a meta-evolutionary run: create an initial population of individuals in the form of parameter settings, try each on a different computer a few times, then evolve and repeat.42
39 An interesting question: what are the parameter settings for your hyperheuristic, and can you optimize those with another algorithm? How far down the rabbit hole do you go?
40 The earliest notion of the idea that I am aware of is Daniel Joseph Cavicchio Jr., 1970, Adaptive Search Using Simulated Evolution, Ph.D. thesis, Computer and Communication Sciences Department, University of Michigan. This was then expanded on significantly in Robert Ernest Mercer and Jeffrey R. Sampson, 1978, Adaptive search using a reproductive meta-plan, Kybernetes, 7(3), 215-228. The most famous early presentation of the concept is John Grefenstette, 1986, Optimization of control parameters for genetic algorithms, IEEE Transactions on Systems, Man, and Cybernetics, SMC-16(1), 122-128. Grefenstette also coined the term (he called it a meta-level GA).
41 What an ill-conceived name: hyper is simply the wrong word. Metaheuristics, Hyperheuristics, Memetic Algorithms: we have a lot of unfortunate terms.
42 Parallelizing these runs is probably best done by using a combination of hyperheuristics and Master-Slave Fitness Assessment (see Section 5.3). Also: if you were testing each parameter-setting individual with a single run, perhaps its fitness would be set to the best fitness discovered in that run. But since metaheuristics are stochastic, such a fitness would be very noisy of course. To get a better handle on the true quality of a parameter-setting individual, you might need to run multiple times with those parameter settings, and use the mean best fitness of the runs. This stuff can get complicated fast.
3.3.5 Scatter Search
Fred Glover's Scatter Search with Path Relinking43 combines a hybrid evolutionary and hill-climbing algorithm, line recombination, (μ + λ), and an explicit procedure to inject some diversity (exploration) into the mix! Standard Scatter Search with Path Relinking is complex and baroque, but we can describe a simplified version here. The algorithm combines exploitative mechanisms (hybrid methods, steady-state evolution) with an explicit attempt to force diversity (and hopefully exploration) into the system. The algorithm starts with a set of initial seeded individuals provided by you. Then the algorithm tries to produce a large number of random individuals that are very different from one another and from the seeds. These, plus the seeds, form the population. Then we do some hill-climbing on each of the individuals to improve them.
We then do the following loop. First, we truncate the population to a small size consisting of some very fit individuals and some very diverse individuals (to force diversity). Then we perform some kind of pairing up and crossover (usually using line recombination) on that smaller population: in our version, we do line recombination on every pair of individuals in the population, plus some mutating for good measure. Then we do hill-climbing on these new individuals to improve them, add them to the population, and repeat the loop.
To do the ProduceDiverseIndividual function and the procedure to determine the most diverse individuals in Q (line 17), you'll need a distance measure among individuals: for example, if two individuals were real-valued vectors ~v and ~u, use Euclidean distance, that is, √(∑i (vi − ui)²). These are often metric distances (discussed later in Niching, Section 6.4). From there you could define the diversity of an individual as its sum distance from everyone else, that is, for Population P, the diversity of Pi is ∑j distance(Pi, Pj).
Now we have a way to select based on who's the most diverse. But producing a diverse individual is mostly ad-hoc: I'd generate a lot of individuals, then select a subset of them using a tournament selection based on maximum diversity from the seeds. Or you could find gene values uncommon among the seeds and build an individual with them. The simplified algorithm:
43 Glover also invented Tabu Search (Section 2.5). And coined the term metaheuristics. It's tough to pin down the first papers in Scatter Search. But a good later tutorial is Fred Glover, Manuel Laguna, and Rafael Martí, 2003, Scatter search, in Ashish Ghosh and Shigeyoshi Tsutsui, editors, Advances in Evolutionary Computing: Theory and Applications, pages 519-538, Springer. Glover also attempted a full, detailed template of the process in Fred Glover, 1998, A template for scatter search and path relinking, in Proceedings of the Third European Conference on Artificial Evolution, pages 1-51, Springer. The algorithm shown here is approximately derived from these papers.
Algorithm 37 A Simplified Scatter Search with Path Relinking
1: Seeds ← initial collection of individuals, defined by you
2: initsize ← initial sample size    ▷ The size of the initial population before truncation
3: t ← number of iterations to Hill-Climb
4: n ← number of individuals to be selected based on fitness
5: m ← number of individuals to be selected based on diversity
6: P ← Seeds
7: for initsize − ||Seeds|| times do
8:    P ← P ∪ {ProduceDiverseIndividual(P)}    ▷ Make an individual very different from what's in P
9: Best ← □
10: for each individual Pi ∈ P do    ▷ Do some hill-climbing
11:   Pi ← Hill-Climb(Pi) for t iterations    ▷ Replace Pi in P
12:   AssessFitness(Pi)
13:   if Best = □ or Fitness(Pi) > Fitness(Best) then
14:       Best ← Pi
15: repeat    ▷ The main loop
16:   B ← the fittest n individuals in P
17:   D ← the most diverse m individuals in P    ▷ Those as far from others in the space as possible
18:   P ← B ∪ D
19:   Q ← {}
20:   for each individual Pi ∈ P do
21:       for each individual Pj ∈ P where j ≠ i do
22:           Children Ca, Cb ← Crossover(Copy(Pi), Copy(Pj))    ▷ Line Recombination, Algorithm 28
23:           Ca ← Mutate(Ca)    ▷ Scatter Search wouldn't do this normally: but I would
24:           Cb ← Mutate(Cb)    ▷ Likewise
25:           Ca ← Hill-Climb(Ca) for t iterations
26:           Cb ← Hill-Climb(Cb) for t iterations
27:           AssessFitness(Ca)    ▷ We next assess the fitness of Ca and Cb
28:           if Fitness(Ca) > Fitness(Best) then
29:               Best ← Ca
30:           AssessFitness(Cb)
31:           if Fitness(Cb) > Fitness(Best) then
32:               Best ← Cb
33:           Q ← Q ∪ {Ca, Cb}
34:   P ← Q ∪ P
35: until Best is the ideal solution or we have run out of time
36: return Best
3.4 Differential Evolution
Differential Evolution (DE) determines the size of Mutates largely based on the current variance in the population. If the population is spread out, Mutate will make major changes. If the population is condensed in a certain region, Mutates will be small. It's an adaptive mutation algorithm (like the one-fifth rule in Evolution Strategies). DE was developed by Kenneth Price and Rainer Storn.44

Figure 15 Differential Evolution's primary mutation operator. A copy of individual A is mutated by adding to it the vector between two other individuals B and C, producing a child.
DE's mutation operators employ vector addition and subtraction, so it really only works in metric vector spaces (booleans, metric integer spaces, reals). DE has a variety of mutation operators, but the early one described here is common and easy to describe. For each member i of the population, we generate a new child by picking three individuals from the population and performing some vector additions and subtractions among them. The idea is to mutate away from one of the three individuals (~a) by adding a vector to it. This vector is created from the difference between the other two individuals, ~b − ~c. If the population is spread out, ~b and ~c are likely to be far from one another and this mutation vector is large, else it is small. This way, if the population is spread throughout the space, mutations will be much bigger than when the algorithm has later converged on fit regions of the space. The child is then crossed over with ~i. (Differential Evolution has lots of other mutation variations not shown here.)
Finally, after we have built up a new group of children, we compare each child with the parent which created it (each parent created a single child). If the child is better than the parent, it replaces the parent in the original population.
The new locations of children are entirely based on the existing parents and which combinations we can make of adding and subtracting them. This means that this algorithm isn't global in the sense that any point in the space is possible: though through successive choices of individuals, and mutating them, we can home in on certain spots in the space. Also, oddly, this algorithm traditionally mutates each individual in turn. Perhaps better would be either to mutate all of them in parallel (in a generational fashion) or to pick i at random each time (steady-state style).
It's crucial to note that Differential Evolution selects individuals in a way quite different from what we've seen so far. A child is created by mutating existing individuals largely picked at random from the population. So where's the selection? It comes after generating a child, when it competes for survival with a specific individual already in the population. If the child is fitter, it replaces that individual, else the child is thrown away. This hill-climbing-ish approach to selection is a variation of survival selection (as opposed to parent selection).45
Below we show one simple implementation of Differential Evolution, as described above. Note that in this code we will treat the population as a vector, not a collection: this is to make the pseudocode a bit more clear. Also, note that since Differential Evolution always uses vector representations for individuals, we'll treat individuals both as individuals (such as Qi) and as vectors (such as ~a) interchangeably. Here we go:
44 DE grew out of a series of papers as it evolved, but one of its better known papers, if not the earliest, is Rainer Storn and Kenneth Price, 1997, Differential evolution: A simple and efficient heuristic for global optimization over continuous spaces, Journal of Global Optimization, 11(4), 341-359. Price, Storn, and Jouni Lampinen later wrote a pretty big book on the subject: Kenneth Price, Rainer Storn, and Jouni Lampinen, 2005, Differential Evolution: A Practical Approach to Global Optimization, Springer.
45 See Footnote 16.
Algorithm 38 Differential Evolution (DE)
1: α ← mutation rate    ▷ Commonly between 0.5 and 1.0, higher is more explorative
2: popsize ← desired population size
3: P ← ⟨⟩    ▷ Empty population (it's convenient here to treat it as a vector), of length popsize
4: Q ← □    ▷ The parents. Each parent Qi was responsible for creating the child Pi
5: for i from 1 to popsize do
6:    Pi ← new random individual
7: Best ← □
8: repeat
9:    for each individual Pi ∈ P do
10:       AssessFitness(Pi)
11:       if Q ≠ □ and Fitness(Qi) > Fitness(Pi) then
12:           Pi ← Qi    ▷ Retain the parent, throw away the kid
13:       if Best = □ or Fitness(Pi) > Fitness(Best) then
14:           Best ← Pi
15:   Q ← P
16:   for each individual Qi ∈ Q do    ▷ We treat individuals as vectors below
17:       ~a ← a copy of an individual other than Qi, chosen at random with replacement from Q
18:       ~b ← a copy of an individual other than Qi or ~a, chosen at random with replacement from Q
19:       ~c ← a copy of an individual other than Qi, ~a, or ~b, chosen at random with replacement from Q
20:       ~d ← ~a + α(~b − ~c)    ▷ Mutation is just vector arithmetic
21:       Pi ← one child from Crossover(~d, Copy(Qi))
22: until Best is the ideal solution or we ran out of time
23: return Best
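The child-construction loop (lines 16-21) looks like this in Python. This sketch assumes real-valued individuals stored as lists, a population of at least four, and a crossover function such as the parameterized uniform crossover sketched earlier; alpha is the mutation rate.

import random

def de_children(Q, alpha, crossover):
    children = []
    for i, parent in enumerate(Q):
        # pick three distinct individuals, none of them the parent itself
        a, b, c = random.sample([q for j, q in enumerate(Q) if j != i], 3)
        d = [ai + alpha * (bi - ci) for ai, bi, ci in zip(a, b, c)]   # mutation is just vector arithmetic
        child, _ = crossover(d, parent[:])                             # keep one child of the crossover
        children.append(child)
    return children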
Crossover can be anything: but one common approach is to do a uniform crossover (Algorithm 25), but guarantee that at least one gene from Qi (the gene is chosen at random) survives in Pi.
3.5 Particle Swarm Optimization
Particle Swarm Optimization (PSO) is a stochastic optimization technique somewhat similar to evolutionary algorithms but different in an important way. It's modeled not after evolution per se, but after swarming and flocking behaviors in animals. Unlike other population-based methods, PSO does not resample populations to produce new ones: it has no selection of any kind. Instead, PSO maintains a single static population whose members are Tweaked in response to new discoveries about the space. The method is essentially a form of directed mutation. The technique was developed by James Kennedy and Russell Eberhart in the mid-1990s.46
46 Among the earliest papers on PSO is James Kennedy and Russell Eberhart, 1995, Particle swarm optimization, in Proceedings of IEEE International Conference on Neural Networks, pages 1942-1948. Eberhart, Kennedy, and Yuhui Shi later wrote a book on the topic: James Kennedy, Russell Eberhart, and Yuhui Shi, 2001, Swarm Intelligence, Morgan Kaufmann.
Like Differential Evolution, PSO operates almost exclusively in multidimensional metric, and usually real-valued, spaces. This is because PSO's candidate solutions are Mutated towards the best discovered solutions so far, which really necessitates a metric space (it's nontrivial to Mutate, say, a tree towards another tree in a formal, rigorous fashion).
Because of its use in real-valued spaces, and because PSO is inspired by flocks and swarms, PSO practitioners tend to refer to candidate solutions not as a population of individuals but as a swarm of particles. These particles never die (there is no selection). Instead, the directed mutation moves the particles about in the space. A particle consists of two parts:

The particle's location in space, ~x = ⟨x1, x2, ...⟩. This is the equivalent, in evolutionary algorithms, of the individual's genotype.

The particle's velocity, ~v = ⟨v1, v2, ...⟩. This is the speed and direction at which the particle is traveling each timestep. Put another way, if ~x(t−1) and ~x(t) are the locations in space of the particle at times t − 1 and t respectively, then at time t, ~v = ~x(t) − ~x(t−1).

Each particle starts at a random location and with a random velocity vector, often computed by choosing two random points in the space and using half the vector from one to the other (other options are a small random vector or a zero vector). We must also keep track of a few other things:
The fittest known location ~x∗ that ~x has discovered so far.
The fittest known location ~x+ that the informants of ~x have discovered so far.
The fittest known location ~x! that any particle has discovered so far.

Algorithm 39 Particle Swarm Optimization (PSO)
...
Best ← □
11: repeat
12:   for each particle ~x ∈ P with velocity ~v do
13:       AssessFitness(~x)
14:       if Best = □ or Fitness(~x) > Fitness(Best) then
15:           Best ← ~x
16:   for each particle ~x ∈ P with velocity ~v do    ▷ Determine how to Mutate
17:       ~x∗ ← previous fittest location of ~x
          ...
          vi ← αvi + b(x∗i − xi) + c(x+i − xi) + d(x!i − xi)
25:   for each particle ~x ∈ P with velocity ~v do    ▷ Mutate
26:       ~x ← ~x + ε~v
27: until Best is the ideal solution or we have run out of time
28: return Best
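As a hedged Python sketch of the velocity update and the move, for one particle: x is the particle's location, v its velocity, and x_star, x_plus, x_bang the personal, informants', and global best locations. I assume here, consistent with the parameter descriptions below, that the blend weights b, c, d are drawn uniformly from 0 to β, γ, and δ for every dimension; the default constants are illustrative only.

import random

def pso_step(x, v, x_star, x_plus, x_bang,
             alpha=0.9, beta=1.5, gamma=1.5, delta=0.0, epsilon=1.0):
    for i in range(len(x)):
        b = random.uniform(0.0, beta)     # pull toward the personal best
        c = random.uniform(0.0, gamma)    # pull toward the informants' best
        d = random.uniform(0.0, delta)    # pull toward the global best (often 0, see below)
        v[i] = alpha * v[i] + b * (x_star[i] - x[i]) + c * (x_plus[i] - x[i]) + d * (x_bang[i] - x[i])
    for i in range(len(x)):
        x[i] += epsilon * v[i]            # move the particle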
This implementation of the algorithm relies on five parameters:

α: how much of the original velocity is retained.

β: how much of the personal best is mixed in. If β is large, particles tend to move more towards their own personal bests rather than towards global bests. This breaks the swarm into a lot of separate hill-climbers rather than a joint searcher.

γ: how much of the informants' best is mixed in. The effect here may be a mid-ground between β and δ. The number of informants is also a factor (assuming they're picked at random): more informants is more like the global best and less like the particle's local best.

δ: how much of the global best is mixed in. If δ is large, particles tend to move more towards the best known region. This converts the algorithm into one large hill-climber rather than separate hill-climbers. Perhaps because this threatens to make the system highly exploitative, δ is often set to 0 in modern implementations.

ε: how fast the particle moves. If ε is large, the particles make big jumps towards the better areas and can jump over them by accident. Thus a big ε allows the system to move quickly to best-known regions, but makes it hard to do fine-grained optimization. Just like in hill-climbing. Most commonly, ε is set to 1.
4 Representation
Most techniques discussed later are typically done with population-based algorithms. So from now on we will usually use Evolutionary Computation versions of terms: individual instead of candidate solution; fitness instead of quality, etc.

The representation of an individual is the approach you take to constructing, tweaking, and presenting the individual for fitness assessment. Although often we'll refer to the representation as the data structure used to define the individual (a vector, a tree, etc.), it's useful to think of the representation not as the data type but instead simply as two functions:

The initialization function used to generate a random individual.
The Tweak function, which takes one individual (or more) and slightly modifies it.

To this we might add...

The fitness assessment function.
The Copy function.

These functions are the only places where many optimization algorithms deal with the internals of individuals. Otherwise the algorithms treat individuals as black boxes. By handling these functions specially, we can separate the entire concept of representation from the system.
Much of the success or failure of a metaheuristic lies in the design of the representation of the individuals, because their representation, and particularly how they Tweak, has such a strong impact on the trajectory of the optimization procedure as it marches through the fitness landscape (that is, the quality function). A lot of the black magic involved in constructing an appropriate representation lies in finding one which improves (or at least doesn't worsen) the smoothness of the landscape. As mentioned earlier, the smoothness criterion was approximately defined as: individuals which are similar to each other tend to behave similarly (and thus tend to have similar fitness), whereas individuals dissimilar from one another make no such promise.
Figure 16 Four fitness landscapes (Unimodal; Noisy, or Hilly or Rocky; Needle in a Haystack; Deceptive). Repeats Figure 6.
The smoother a landscape, the fewer hills it has and the more it begins to resemble a unimodal landscape, as shown in Figure 16. Recall that this isn't a sufficient criterion though, as needle-in-a-haystack or (worse) deceptive environments are highly smooth, yet can be extremely challenging for an optimization algorithm.
When we refer to individuals being similar, we mean that they have similar genotypes, and when we refer to individuals as behaving similarly, we mean that they have similar phenotypes.47 What do we mean by similar genotypes? Generally genotype A is similar to genotype B if the probability is high that Tweaking A will result in B (or vice versa).
47 Recall that, in evolutionary computation at least, the phrase genotype refers to how the individual appears to the genetic operators (perhaps it's a vector, or a tree), and the phrase phenotype refers to how (not how well) the individual performs when evaluated for fitness assessment.
A will result in B (or vice versa). Thus things are similar not because their genotypes look similar,
but because they are near each other in the space with respect to your choice of the Tweak operation.
Its tempting to think of a stochastic optimization system as largely working in genotype space,
then translating the genotypes to phenotypes for purposes of evaluation. But when thinking about
the effect of representations, its better to consider the other way around: an individuals natural
arrangement is its phenotype, and when the algorithm needs to make a new individual, it translates
the phenotype to a genotype, Tweaks it, then translates back to the phenotype. Commonly we refer
to phenotypegenotype translation as encoding, and the reverse as decoding. Thus we can think
of this process as:
Parent Phenotype Encode Tweak Decode Child Phenotype
This view helps us see the perils of poor encoding choices. Imagine that your individuals take the phenotypical form, for some reason, of Rubik's Cube configurations. You'd like that Tweak operator to make small changes like rotating a side, etc. If you used a genotype in the form of a Rubik's Cube, you're all set: the Tweak operator already does exactly what you want. But imagine if your encoding operation was as follows:

Parent → Do 20 specific unusual moves → Tweak → Undo those 20 moves → Child

You can imagine that after doing the twenty moves, a single twist of one side (the Tweak) will have huge consequences after you undo those twenty moves. It causes almost total randomization from parent to child. Lesson: you want an encoding/decoding mechanism which doesn't cause your carefully-selected, smooth Tweak operations to send the phenotype space haywire.
This isn't just of academic concern. In the past, Genetic Algorithm folks used to encode everything as a binary vector of fixed length. The reasoning was: if there's only one genotype, we could develop a canonical Genetic Algorithm as a library function, and the only differences of significance would be the encoding procedure. As it turns out, this wasn't all that good of an idea. Consider the situation where an individual consists of a single integer from 0 to 15. We'd represent it as a vector of 4 bits. The fitness function is shown in Table 2. Notice that it increases until 8, and then falls off the cliff at 9. This fitness function abuses a bad feature in the genotype: what is known in the Genetic Algorithm community as a Hamming cliff, located at the jump from 7 to 8. A Hamming cliff is where, to make a small change in the phenotype or fitness, you must make a very large change in the genotype. For example, to mutate 7 (0111) into 8 (1000), you have to make four bit-flips in succession. This function is hard to optimize because to get to 8, notionally you could approach from 7 (requiring four lucky mutations) or you could approach from 9 or 10 (which aren't often going to be selected, because of bad fitness).

Table 2 A fitness function that exploits a Hamming Cliff.

Phenotype   Genotype   Gray Code   Fitness
    0         0000       0000         0
    1         0001       0001         1
    2         0010       0011         2
    3         0011       0010         3
    4         0100       0110         4
    5         0101       0111         5
    6         0110       0101         6
    7         0111       0100         7
    8         1000       1100         8
    9         1001       1101         0
   10         1010       1111         0
   11         1011       1110         0
   12         1100       1010         0
   13         1101       1011         0
   14         1110       1001         0
   15         1111       1000         0
Now consider instead representing the individual not by the binary encoding genotype shown above but rather its Gray code48 encoding shown next to it. This encoding has an interesting property: each successive number differs from its previous number by only one bit flip. And 15 differs from 0 by only one bit flip. Thus if we're at 7 (Gray code 0100) we can easily mutate to 8 (Gray code 1100). Hamming cliff problem solved. By the way, Gray-coding is easy to do:
Algorithm 40 A Gray Coding
1: ~v ← boolean vector encoding a standard binary number ⟨v1, v2, ..., vl⟩ to be converted to Gray code
2: ~w ← Copy(~v)
3: for i from 2 to l do
4:     if v_{i-1} is true then
5:         w_i ← ¬(v_i)
6: return ~w

48 After Frank Gray, who developed it in 1947 at Bell Labs to reduce errors in the output of phone system switches.
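As a concrete illustration, here is a minimal Python sketch of Algorithm 40; the function name, the boolean-list representation (most significant bit first), and the example values are my own choices, not the book's:

def to_gray(bits):
    """Convert a list of booleans encoding a standard binary number into Gray code."""
    gray = list(bits)                       # w <- Copy(v)
    for i in range(1, len(bits)):           # i from 2 to l (0-indexed here)
        if bits[i - 1]:                     # if v_{i-1} is true
            gray[i] = not bits[i]           #     w_i <- not(v_i)
    return gray

# Example: 7 = 0111 -> Gray 0100, 8 = 1000 -> Gray 1100 (as in Table 2)
print(to_gray([False, True, True, True]))    # [False, True, False, False]
print(to_gray([True, False, False, False]))  # [True, True, False, False]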
The point of this exercise is not to convince you to use Gray codes: indeed, we can construct nasty fitness functions which cause problems for Gray codes as well, and Gray coding is somewhat old fashioned now. The point is to illustrate the notion of smoothness and its value. If you encode your individual such that small changes in the genotype (like one bit flip) are somewhat more likely to result in small changes in the fitness, you can help your optimizer.

One heuristic approach to smooth fitness landscapes is to make the genotype as similar to the phenotype as possible: if your phenotype is a graph structure, let the genotype be a graph structure as well. That way your fitness function may still be hilly but at least you're not making it even hillier by running it through an unfortunate encoding. But remember that this is thinking of the representation as if it's a data structure, when it's not. It's largely two functions: the initialization function and the Tweak function.

Much of Representation Is an Art, Not a Science How are you going to Tweak a graph structure in a smooth way? No, seriously. Certain representations (notably fixed-length vectors of booleans or of floating-point values) are very well understood and there's a bunch of good theory around them. But many representations are still basically ad-hoc. Many of the algorithms and ideas in this section should not be taken as directions, or even recommendations, but suggestions of one particular possible way to do representations that maintain smoothness properties. We'll first take care of the easy, well-understood one that we've seen before a lot: vectors.
4.1 Vectors

Just to be clear, by vectors we mean fixed-length one-dimensional arrays. We'll get to arbitrary-length lists in Section 4.4. Vectors usually come in three flavors: boolean, real-valued, and integer.49

49 There's no reason you couldn't have a vector of trees, or a vector of rules, or a vector where some elements were reals and others were booleans, etc. (In fact, we'll see vectors of trees and rules later on in Section 4.3.4!) You just need to be more careful with your mutation and initialization mechanisms.

The first two, boolean and real-valued vectors, we've seen a lot so far. As a result we've built up several initialization, mutation, and crossover algorithms for them. In summary:
Boolean Vectors
    Initialization
        Generate a Random Bit-Vector             Algorithm 21    Page 37
    Mutation
        Bit-Flip Mutation                        Algorithm 22    Page 38

Floating-Point Vectors
    Initialization
        Generate a Random Real-Valued Vector     Algorithm 7     Page 19
    Mutation
        Bounded Uniform Convolution              Algorithm 8     Page 19
        Gaussian Convolution                     Algorithm 11    Page 23
    Floating-Point-Specific Crossover
        Line Recombination                       Algorithm 28    Page 42
        Intermediate Recombination               Algorithm 29    Page 42

Vector Crossover (applies to any vector type)
        One-Point Crossover                      Algorithm 23    Page 38
        Two-Point Crossover                      Algorithm 24    Page 39
        Uniform Crossover                        Algorithm 25    Page 39
        Uniform Crossover among K Vectors        Algorithm 27    Page 41
Integer Vectors We've not seen integer vectors yet, and integer vectors have a twist to consider. What do the integers in your vector represent? Do they define a set of unordered objects (1=China, 2=England, 3=France, ...) or do they form a metric space (IQ scores, or street addresses, or final course grades) where the distance between, say, 4 and 5 is smaller than the distance between 1 and 5? Mutation decisions often center on whether the space is a metric space.

The remainder of this section will focus on integer vectors, but it also gives some discussion relevant to initialization and mutation of all vector types.
4.1.1 Initialization and Bias

Creating random initial vectors is usually just a matter of picking each vector position vi uniformly among all possible values. If you have some knowledge about your problem, however, you could bias the system by tending to pick values in certain regions of the space. For example, if you believe that better solutions usually lie in the regions where v1 = v2 × v3, you could emphasize generating vectors in those regions.

Another way to bias the initial configuration of your population is to seed the initial population with pre-chosen individuals of your own design. For example, my students were trying to optimize vectors which defined how a bipedal robot walked, kicked, etc. These vectors translated into joint angles and movements for the many motors on the robot. Rather than start with random values, the vast majority of which were nonsense, they instead chose to wire a student up to a 3D tracker and have him perform the motions. They then converted the resulting data into joint angle movements, which they used to seed the initial population.

Some suggestions. First, biasing is dangerous. You may think you know where the best solutions are, but you probably don't. So if you bias the initial configuration, you may actually make it harder for the system to find the right answer. Know what you're getting into. Second, even if you choose to bias the system, it may be wise to start with values that aren't all or exactly based on your heuristic bias. Diversity is useful, particularly early on.
4.1.2 Mutation

It's rare that you'd mutate floating-point vectors with anything other than Gaussian convolution (or some similar distribution-based noise procedure). Likewise, bit-vectors are typically mutated using bit-flip mutation. For integer vectors, it depends. If your representation treats integers as members of a set (for example, red=1, blue=2, ...), the best you may be able to do is randomize each slot with a given probability:
Algorithm 41 Integer Randomization Mutation
1: ~v ← integer vector ⟨v1, v2, ..., vl⟩ to be mutated
2: p ← probability of randomizing an integer    ⊲ Perhaps you might set p to 1/l or lower
3: for i from 1 to l do
4:     if p ≥ random number chosen uniformly from 0.0 to 1.0 inclusive then
5:         v_i ← new random legal integer
6: return ~v
If instead your integers represent a metric space, you might wish to mutate them in a manner similar to Gaussian convolution, so that the changes to integers tend to be small. One of a great many ways to do this is to keep flipping a coin until it comes up heads, and do a random walk of that length.50 This creates noise centered around the original value, and is global.

50 Note: I just made up this mutator, but it's probably not bad. And someone else probably already invented it.
Algorithm 42 Random Walk Mutation
1: ~v ← integer vector ⟨v1, v2, ..., vl⟩ to be mutated
2: p ← probability of randomizing an integer    ⊲ Perhaps you might set p to 1/l or lower
3: b ← coin-flip probability    ⊲ Make b bigger if you have many legal integer values so the random walks are longer
4: for i from 1 to l do
5:     if p ≥ random number chosen uniformly from 0.0 to 1.0 inclusive then
6:         repeat
7:             n ← either a 1 or -1, chosen at random
8:             if v_i + n is within bounds for legal integer values then
9:                 v_i ← v_i + n
10:            else if v_i − n is within bounds for legal integer values then
11:                v_i ← v_i − n
12:        until b < random number chosen uniformly from 0.0 to 1.0 inclusive
13: return ~v
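Here is a minimal Python sketch of Algorithm 42 for a bounded integer vector; the function name, the default parameter values, and the bounds are assumptions of mine, not taken from the text:

import random

def random_walk_mutation(v, lo, hi, p=None, b=0.7):
    """Random-walk mutation of a bounded integer vector (a sketch of Algorithm 42)."""
    if p is None:
        p = 1.0 / len(v)                      # default per-gene mutation probability
    v = list(v)
    for i in range(len(v)):
        if random.random() <= p:              # mutate this gene?
            while True:                       # walk until the coin comes up heads
                n = random.choice((1, -1))
                if lo <= v[i] + n <= hi:
                    v[i] += n
                elif lo <= v[i] - n <= hi:
                    v[i] -= n
                if random.random() >= b:      # coin came up heads: stop walking
                    break
    return v

# Example with hypothetical bounds:
print(random_walk_mutation([3, 7, 2, 9], lo=0, hi=10))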
Point Mutation The mutation methods discussed so far all have the same property: every gene in the genome has an independent probability of being mutated. Perhaps you may have thought of a different approach: pick a single random gene, then mutate that gene, and you're done. (Or perhaps pick n genes at random and mutate them.) Such point mutation methods are sometimes useful but are often dangerous.

First the useful part: there exist some problems where you can make progress through the space by changing a single gene, but if you change several genes at a time, even by a small amount, it's tougher to make progress. The Mona Lisa picture on the front page is an example of this: the genome consists of some m polygons with random colors. Change one polygon at a time, by a fair bit, and you can eventually eke out a Mona Lisa. Change n polygons (or even all m polygons) at one time, even through small perturbation, and it turns out to be quite difficult to get a better child.
Table 3 A trivial boolean fitness function which is hostile to point mutation.

          x = 0    x = 1
  y = 0     5      -100
  y = 1   -100       10

But beware: it's very easy to construct problems where point mutation is quite bad indeed. Consider simple boolean individuals of the form ⟨x, y⟩, where x and y can each be 1 or 0, and we're doing a simple hill-climber (or (1 + 1) if you will). The problem uses the fitness function shown in Table 3, and our intrepid initial candidate solution starts at ⟨0, 0⟩, which at present has a fitness of 5. Our mutation function flips a single gene. If we flipped gene x, we'd wind up in ⟨1, 0⟩, with a fitness of -100, which would get promptly rejected. On the other hand, if we flipped gene y, we'd wind up in ⟨0, 1⟩, also with a fitness of -100. There's no way to get to the optimum ⟨1, 1⟩ without flipping both genes at the same time. But our mutation operator won't allow that. The issue is that point mutation is not a global operator: it can only make horizontal moves through the space, and so cannot reach all possible points in one jump. In summary: point mutation can sometimes be useful, but know what you're getting into.
4.1.3 Recombination

So far we've seen three kinds of general-purpose vector recombination: One- and Two-Point Crossover, and Uniform Crossover. Additionally we've seen two kinds of recombination designed for real-valued number recombination: Line Recombination and Intermediate Recombination. Of course you could do a similar thing as these last two algorithms with metric-space integers:
Algorithm 43 Line Recombination for Integers
1: ~v ← first vector ⟨v1, v2, ..., vl⟩ to be crossed over
2: ~w ← second vector ⟨w1, w2, ..., wl⟩ to be crossed over
3: p ← positive value which determines how far along the line a child can be located
4: α ← random value from −p to 1 + p inclusive
5: β ← random value from −p to 1 + p inclusive
6: for i from 1 to l do
7:     repeat
8:         t ← αv_i + (1 − α)w_i
9:         s ← βw_i + (1 − β)v_i
10:    until ⌊t + 1/2⌋ and ⌊s + 1/2⌋ are within bounds    ⊲ The ⌊... + 1/2⌋ bit is rounding
11:    v_i ← ⌊t + 1/2⌋
12:    w_i ← ⌊s + 1/2⌋
13: return ~v and ~w
Algorithm 44 Intermediate Recombination for Integers
1: ~v ← first vector ⟨v1, v2, ..., vl⟩ to be crossed over
2: ~w ← second vector ⟨w1, w2, ..., wl⟩ to be crossed over
3: p ← positive value which determines how far along the line a child can be located
4: for i from 1 to l do
5:     repeat
6:         α ← random value from −p to 1 + p inclusive
7:         β ← random value from −p to 1 + p inclusive
8:         t ← αv_i + (1 − α)w_i
9:         s ← βw_i + (1 − β)v_i
10:    until ⌊t + 1/2⌋ and ⌊s + 1/2⌋ are within bounds
11:    v_i ← ⌊t + 1/2⌋
12:    w_i ← ⌊s + 1/2⌋
13: return ~v and ~w
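A minimal Python sketch of Algorithm 44 follows, assuming hypothetical integer bounds lo..hi; the function name and default parameter values are mine:

import math, random

def int_intermediate_recombination(v, w, p=0.25, lo=0, hi=10):
    """Intermediate Recombination for integer vectors (a sketch of Algorithm 44)."""
    v, w = list(v), list(w)
    for i in range(len(v)):
        while True:
            alpha = random.uniform(-p, 1 + p)    # re-drawn per gene, per attempt
            beta = random.uniform(-p, 1 + p)
            t = math.floor(alpha * v[i] + (1 - alpha) * w[i] + 0.5)   # floor(x + 1/2) rounding
            s = math.floor(beta * w[i] + (1 - beta) * v[i] + 0.5)
            if lo <= t <= hi and lo <= s <= hi:
                break
        v[i], w[i] = t, s
    return v, w

# Example with hypothetical parents and bounds:
print(int_intermediate_recombination([1, 5, 9], [8, 2, 4]))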
4.1.4 Heterogeneous Vectors

A vector doesn't have to be all real values or all integer values or all booleans. It could be a mixture of stuff. For example, the first ten genes might be booleans, the next twenty genes might be integers, and so on. The naive way to handle this would be to make everything real-valued numbers and then just interpret each gene appropriately at evaluation time. But if certain genes are to be interpreted as integers or booleans, you'll want to make mutation and initialization procedures appropriate to them. It may be unwise to rely on standard real-valued mutation methods.

For example, imagine if a gene has three values red, blue, and green, and you've decided to map these to 1.0, 2.0, and 3.0. You're using Gaussian Convolution (Algorithm 11) for mutation. This will produce numbers like 1.6: is this a 2.0, that is, blue? Let's presume that during evaluation you're rounding to the nearest integer to deal with that issue. Now you're faced with more subtle problems: applying Gaussian Convolution to a value of 1.0 (red) is more likely to produce something near to 2.0 (blue) than it is to produce something near to 3.0 (green). Do you really want mutation from red to more likely be blue than green? Probably not! Along the same vein, if you don't pick an appropriate variance, a whole lot of mutations from 1.0 (red) will be things like 1.001 or 1.02, which of course will still be red.

This kind of nonsense arises from shoehorning integers, or unordered sets (red, green, blue), into real-valued metric spaces. Instead it's probably smarter to just permit each gene to have its own mutation and initialization procedure. You could still have them all be real-valued numbers, but the per-gene mutators and initializers would understand how to properly handle a real-valued number that's actually an integer, or actually a boolean.

Using a real-valued vector, plus per-gene initializers and mutators, probably works fine if your genes are all interpreted as reals, integers, set members (red, green, blue), and booleans.51 But if some of your genes need to be, say, trees or strings, then you'll probably have no choice but to make a vector of objects rather than real numbers, and do everything in a custom fashion.

51 You do need to keep an eye on crossover. Most crossover methods will work fine, but some crossover methods, such as Line Recombination (Algorithm 28) or Intermediate Recombination (Algorithm 29), assume that your genes operate as real numbers. It'd probably make things easier if you avoided them.
4.1.5 Phenotype-Specific Mutation or Crossover

Last but not least, you might try instead to perform mutation or crossover on your representations in a manner that makes sense with regard to their phenotype. For example, what if your phenotype is a matrix, and you're using vectors to represent those matrices? Perhaps your recombination operators should take into consideration the two-dimensional nature of the phenotype. You might design an operator which does two one-point crossovers to slice out a rectangular region, for example:

Parent A       Parent B        Child
1  4  7        21 99 46        1  4  46
9  2  3        31 42 84        9  2  84
8  5  6        23 67 98        23 67 98
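Below is a rough Python sketch of one such phenotype-aware operator, in the spirit of the illustration above: one cut point over rows and one over columns, with the cells at or beyond either cut swapped between the parents. The function name and the exact region rule are my own reading of the figure, not a definition from the text:

import random

def region_crossover(a, b):
    """Swap the region at or beyond a random row cut and column cut (matrices as 2D lists)."""
    rows, cols = len(a), len(a[0])            # assumes both parents are the same (>= 2x2) size
    r = random.randint(1, rows - 1)           # row cut point
    c = random.randint(1, cols - 1)           # column cut point
    child1 = [row[:] for row in a]
    child2 = [row[:] for row in b]
    for i in range(rows):
        for j in range(cols):
            if i >= r or j >= c:              # cell lies beyond one of the cuts: swap it
                child1[i][j], child2[i][j] = b[i][j], a[i][j]
    return child1, child2

a = [[1, 4, 7], [9, 2, 3], [8, 5, 6]]
b = [[21, 99, 46], [31, 42, 84], [23, 67, 98]]
print(region_crossover(a, b)[0])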
This leads us to using representations more apropos to your problem: so on to more complex
representations. Remember all that talk about the value of smoothness? Hold onto your hat because
when you get to nastier representations, guaranteeing smoothness becomes very hard indeed.
4.2 Direct Encoded Graphs

Graphs are just about the most complex of the various representations, but it's useful to discuss them next. Why would you want to come up with an optimal graph structure? Graphs are used to represent many things: neural networks, finite-state automata or Petri nets or other simple computational devices, electrical circuits, relationships among people, etc. Correspondingly, there are lots of kinds of graph structures, such as directed graphs, undirected graphs, graphs with labels on the edges or nodes, graphs with weights (numbers) on the edges rather than labels, recurrent graphs, feed-forward (non-recurrent) graphs, sparse or dense graphs, planar graphs, etc. It depends on your problem. A lot of the decisions with regard to Tweaking must work within the constraints of the graph structure you've decided on.

First note that you don't need a special representation if your graph structure is fixed and you're just finding weights or labels. For example, if you're developing a neural network with a fixed collection of edges, there's no need to discover the structure of this network (it's fixed!). Just discover the weights of the edges. If you have 100 edges, just optimize a vector of 100 real-valued numbers, one per edge weight, and you're done. Thus most graph representations of interest here are really arbitrary-structured graph representations. Such structures have been around for a very long time. Larry Fogel developed Evolutionary Programming, probably the earliest evolutionary algorithm, specifically to discover graph structures in the form of finite-state automata.52

52 For Fogel's thesis, in which these ideas were advanced, see Footnote 20, p. 36.

There are generally two approaches to developing graph structures (and certain other complex structures): direct encoding and indirect (or developmental) encoding. Direct encoding stores the exact edge-for-edge, node-for-node description of the graph structure in the representation itself. Indirect encoding has the representation define a small program or set of rules of some kind which, when executed, grow a graph structure.

Why would you do an indirect encoding? Perhaps when you wish to cross over certain traits in your graph structure described by subsets of those rules which are bundled together. Or perhaps if your rules recursively cause other rules to fire, you may view certain sets of rules as functions or modules which always produce the same subgraph. Thus if your optimal graph structures are highly repetitive, you can take advantage of this by evolving a single function which produces that repetitive element rather than having to rediscover the subgraph over and over again during the search process. If the graph has little repetition in it (for example, neural network weights tend to have little repetition among them) and is very dense, a direct encoding might be a better choice. Because indirect encodings represent the graph in a non-graph way (as a tree, or a set of rules, or a list of instructions to build the graph, etc.), we'll discuss them later (in Sections 4.3.6 and 4.5). For now, we consider direct encodings.
The simplest direct encoding is a full adjacency matrix. Here we have settled on an absolute maximum size for our graph. Let's say we need to create a recurrent directed graph structure and have decided that our graph will contain no more than 5 nodes and have no more than one edge between any two nodes. Let's also say that self-edges are allowed, and we need to find weights for the edges. We could simply represent the graph structure as a 5 × 5 adjacency matrix describing the edges from every node to every other node.

Off in position ⟨i, j⟩ means there is no edge connecting j to i. If we want fewer than 5 nodes, we could just assign all the weights going in or out of a node to be Off. We could represent this matrix in many ways. Here are two. First, we might have a single vector of length 25 which stores all the weights, with Off being represented as 0.0. Or we could represent the matrix as two vectors, a real-valued one which stores all the weights, and a boolean one which stores whether or not an edge is On or Off. Either way, we could use standard crossover and mutation operators, though we might want to be careful about changing Off values. If we used the two-vector version, that's done for us for free. If we just use a single real-valued vector, we could create a modified Gaussian Convolution algorithm which only sometimes turns edges on or off:
Algorithm 45 Gaussian Convolution Respecting Zeros
1: ~v ← vector ⟨v1, v2, ..., vl⟩ to be convolved
2: p ← probability of changing an edge from On to Off or vice versa
3: σ² ← variance of gaussian distribution to convolve with
4: min ← minimum desired vector element value
5: max ← maximum desired vector element value
6: for i from 1 to l do
7:     if p ≥ random number chosen uniformly from 0.0 to 1.0 inclusive then
8:         if v_i = 0.0 then    ⊲ Turn On: pick a random edge weighting
9:             v_i ← random number chosen uniformly from 0.0 to 1.0 inclusive
10:        else    ⊲ Turn Off
11:            v_i ← 0.0
12:    else if v_i ≠ 0.0 then    ⊲ Mutate an existing On weight
13:        repeat
14:            n ← random number chosen from the Normal distribution N(0, σ²)    ⊲ See Algorithm 12
15:        until min ≤ v_i + n ≤ max
16:        v_i ← v_i + n
17: return ~v
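Here is a minimal Python sketch of Algorithm 45; the default parameter values are mine, and sigma is the standard deviation (the square root of the algorithm's σ²):

import random

def gaussian_convolution_respecting_zeros(v, p=0.1, sigma=0.1, lo=-1.0, hi=1.0):
    """Mutate a weight vector in which 0.0 means an edge is Off (a sketch of Algorithm 45)."""
    v = list(v)
    for i in range(len(v)):
        if random.random() <= p:                 # toggle this edge On/Off
            if v[i] == 0.0:                      # turn On: pick a random weight
                v[i] = random.random()
            else:                                # turn Off
                v[i] = 0.0
        elif v[i] != 0.0:                        # mutate an existing On weight
            while True:
                n = random.gauss(0, sigma)
                if lo <= v[i] + n <= hi:
                    break
            v[i] += n
    return v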
The disadvantage of this approach is that once an edge is turned Off, when it's turned back On, its previously carefully-optimized weight is lost. Perhaps the two-vector approach might yield better results.

If we don't have a maximum size for our graph, we might need to use an arbitrary directed graph structure, an approach done very early on (in EP) but popularized by Peter Angeline, Greg Saunders, and Jordan Pollack's GNARL.53 Here our representation isn't a vector: it's an actual graph, stored however we like. To do this, we need to create custom initialization and mutation or crossover operators to add and delete nodes, add and delete edges, relabel nodes and edges, etc.

A similar approach is taken in NEAT,54 Ken Stanley and Risto Miikkulainen's method for optimizing feed-forward neural networks. NEAT represents a graph as two sets, one of nodes and one of edges. Each node is simply a node number and a declaration of the purpose of the node (in neural network parlance: an input, output, or hidden unit). Edges are more interesting: each edge contains, among other things, the nodes the edge connects (by number), the weight of the edge, and the birthday of the edge: a unique counter value indicating when the edge was created. The birthday turns out to be useful in keeping track of which edges should merge during crossover, as discussed in Section 4.2.3.

53 Peter J. Angeline, Gregory M. Saunders, and Jordan P. Pollack, 1994, An evolutionary algorithm that constructs recurrent neural networks, IEEE Transactions on Neural Networks, 5(1), 54–65.
54 Kenneth O. Stanley and Risto Miikkulainen, 2002, Evolving neural networks through augmenting topologies, Evolutionary Computation, 10(2), 99–127.
4.2.1 Initialization

Creating an initial graph structure is mostly informed by the kind of graphs you think you need. First, we might decide on how many nodes and edges we want. We could pick these from some distribution, perhaps a uniform distribution from 1 to some large value. Or we might choose them from a distribution which heavily favors small numbers, such as the Geometric Distribution. This distribution is formed by repeatedly flipping a coin with probability p until it comes up heads:

Algorithm 46 Sample from the Geometric Distribution
1: p ← probability of picking a bigger number
2: m ← minimum legal number
3: n ← m − 1
4: repeat
5:     n ← n + 1
6: until p < random number chosen uniformly from 0.0 to 1.0 inclusive
7: return n

The larger the value of p, the larger the value of n on average, using the equation E(n) = m + p/(1 − p). For example, if m = 0 and p = 3/4, then n will be 3 on average, while if p = 19/20, then n will be 19 on average. Beware that this distribution has a strong tendency to make lots of small values. It's easy to compute, but you may wish to use a less skewed distribution.
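A quick Python sketch of Algorithm 46, with a small check of the expected value (the function name is mine):

import random

def sample_geometric(p, m=0):
    """Sample from the geometric distribution by repeated coin flips (Algorithm 46).
    The expected value is m + p/(1 - p)."""
    n = m - 1
    while True:
        n += 1
        if p < random.random():
            break
    return n

# With p = 3/4 and m = 0 the average result is 3:
print(sum(sample_geometric(0.75) for _ in range(10000)) / 10000.0)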
Once we have our node and edge counts, we can build a graph by laying out the nodes first, then filling in the edges:
Algorithm 47 Build A Simple Graph
1: n ← chosen number of nodes
2: e ← chosen number of edges
3: f(j, k, Nodes, Edges) ← function which returns true if an edge from j to k is allowed
4: set of nodes N ← {N1, ..., Nn}    ⊲ Brand new nodes
5: set of edges E ← {}
6: for each node Ni ∈ N do
7:     ProcessNode(Ni)    ⊲ Label it, etc., whatever
8: for i from 1 to e do
9:     repeat
10:        j ← random number chosen uniformly from 1 to n inclusive
11:        k ← random number chosen uniformly from 1 to n inclusive
12:    until f(j, k, N, E) returns true
13:    g ← new edge from Nj to Nk
14:    ProcessEdge(g)    ⊲ Label it, weight it, undirect it, whatever
15:    E ← E ∪ {g}
16: return N, E
Note the ProcessNode and ProcessEdge functions, which give you a place to label and weight edges and nodes. A difficulty with this approach is that we could wind up with a disjoint graph: you may need to adjust this algorithm to guarantee connectedness. Another very common graph representation is a directed acyclic graph, where all edges go from later nodes to earlier ones:
Algorithm 48 Build a Simple Directed Acyclic Graph
1: n ← chosen number of nodes
2: D(m) ← probability distribution of the number of edges out of a node, given number of in-nodes m
3: f(j, k, Nodes, Edges) ← function which returns true if an edge from j to k is allowed
4: set of nodes N ← {N1, ..., Nn}    ⊲ Brand new nodes
5: set of edges E ← {}
6: for each node Ni ∈ N do
7:     ProcessNode(Ni)    ⊲ Label it, etc., whatever
8: for i from 2 to n do
9:     p ← random integer ≥ 1 chosen using D(i − 1)
10:    for j from 1 to p do
11:        repeat
12:            k ← random number chosen uniformly from 1 to i − 1 inclusive
13:        until f(i, k, N, E) returns true
14:        g ← new edge from Ni to Nk
15:        ProcessEdge(g)
16:        E ← E ∪ {g}
17: return N, E
69
This representation is connected but of course there are no loops. Anyway, these algorithms are
only to give you ideas: denitely dont rely on them! Do it right. There are tons of (much better)
randomized graph-building algorithms: consult any general algorithms text.
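In that spirit, here is a minimal Python sketch along the lines of Algorithm 47, and like the algorithms above it is only illustrative; the integer-node and tuple-edge representation and the 'allowed' predicate are my own choices:

import random

def build_simple_graph(n, e, allowed=lambda j, k, nodes, edges: True):
    """Lay out n nodes, then add e random directed edges, retrying rejected edges."""
    nodes = list(range(n))                    # nodes are plain integers here
    edges = []                                # edges are (from, to) tuples
    for _ in range(e):
        while True:
            j, k = random.randrange(n), random.randrange(n)
            if allowed(j, k, nodes, edges):
                break
        edges.append((j, k))                  # a ProcessEdge step could weight or label it here
    return nodes, edges

# Example: a 5-node graph with 6 edges, disallowing self-loops (a hypothetical constraint)
print(build_simple_graph(5, 6, allowed=lambda j, k, N, E: j != k))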
4.2.2 Mutation

One of many ways to mutate an arbitrary graph is to pick some number n of mutations, then n times do any of:

With λ1 probability, delete a random edge.
With λ2 probability, add a random edge (if using NEAT, this edge would get a brand new birthday number; see Section 4.2.3 next).
With λ3 probability, delete a node and all its edges (yeesh!)
With λ4 probability, add a node.
With λ5 probability, relabel a node.
With λ6 probability, relabel an edge.
... etc. ...

... where ∑i λi = 1.0. Obviously some of these operations are very mutative, and thus perhaps should have a smaller probability. Keep in mind that small, common changes should result in small fitness changes, that is, more mutative operations should be done less often. Last, how do we pick a value for n? Perhaps we might pick uniformly between some values 1...M. Or we might choose a value from the Geometric Distribution again.
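A sketch of this scheme in Python might look as follows; the probabilities and the graph-editing methods (delete_random_edge and so on) are hypothetical stand-ins for whatever your graph data structure actually provides:

import random

def mutate_graph(graph, n, ops):
    """Apply n mutations, each chosen from (probability, operation) pairs summing to 1.0."""
    for _ in range(n):
        r, total = random.random(), 0.0
        for prob, op in ops:                 # roulette-wheel choice among the operations
            total += prob
            if r <= total:
                op(graph)
                break
    return graph

# Hypothetical operation table; the graph methods are placeholders, not a real API:
ops = [(0.30, lambda g: g.delete_random_edge()),
       (0.30, lambda g: g.add_random_edge()),
       (0.05, lambda g: g.delete_random_node()),
       (0.15, lambda g: g.add_random_node()),
       (0.10, lambda g: g.relabel_random_node()),
       (0.10, lambda g: g.relabel_random_edge())]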
4.2.3 Recombination

Crossover in graphs is such a mess that many people don't do it at all. How do you cross over graphs in a meaningful way? That is, transferring essential and useful elements from individual to individual without having crossover basically be randomization?
To cross over nodes and edges we often need to get subsets of such things. To select a subset:

Algorithm 49 Select a Subset
1: S ← original set
2: p ← probability of being a member of the subset
3: subset S′ ← {}
4: for each element Si ∈ S do
5:     if p ≥ random number chosen uniformly from 0.0 to 1.0 inclusive then
6:         S′ ← S′ ∪ {Si}
7: return S′
This is basically the same general notion as was used in Uniform Crossover or Bit-Flip Mutation. But you might not like this distribution of subsets. An alternative would be to pick a random number under some distribution of your choosing and select a subset of that size:
Algorithm 50 Select a Subset (Second Technique)
1: S ← original set
2: n ← number of elements in the subset
3: subset S′ ← {}
4: for i from 1 to n do
5:     S′ ← S′ ∪ {random element from S chosen without replacement}
6: return S′
Note that unlike most situations here, we're picking without replacement, that is, an element can't be picked more than once.

So back to crossover. One naive approach might be to pick some subset of nodes and subset of edges in each graph, and exchange subsets. But what if graph A hands graph B an edge i → j but B doesn't have i or j among its nodes? Back to the drawing board. An alternative might be to swap nodes, then swap edges with the constraint that an edge can only be swapped to the other graph if the other graph received the relevant nodes as well. The difficulty here is, of course, that the swapped-in subgraph will be disjoint with the existing nodes in that individual's graph. And you might miss some important edges that connected the nodes in the original graph.

A third choice is to pick whole subgraphs and swap them. To pick a subgraph, here is one of a great many possible algorithms:
Algorithm 51 Select a Subgraph
1: N ← nodes in the original graph
2: E ← edges in the original graph
3: N′ ⊆ N ← nodes in the subgraph (chosen with a subset selection operation)
4: subset E′ ← {}
5: for each edge Ei ∈ E do
6:     j, k ← nodes connected by Ei
7:     if j ∈ N′ and k ∈ N′ then
8:         E′ ← E′ ∪ {Ei}
9: return N′, E′
Again, the problem is that the swapped-in subgraph is disjoint with the graph that's already there. At this point you may need to merge some nodes in the original graph with those in the newly-swapped-in subgraph. As nodes get merged together, certain edges need to be renamed since they're pointing to things that don't exist any more. It's still possible that the two graphs will be disjoint, but unlikely. We can force at least one node to merge, thus guaranteeing that the graphs won't be disjoint. The algorithm would then look something like this:
Algorithm 52 Randomly Merge One Graph Into Another
1: N ← nodes in the first graph, shuffled randomly    ⊲ To shuffle an array randomly, see Algorithm 26
2: N′ ← nodes in the second graph
3: E ← edges in the first graph
4: E′ ← edges in the second graph
5: p ← probability of merging a given node from N into a node from N′
6: for l from 1 to ||N|| do
7:     if l = 1 or p ≥ random number chosen uniformly from 0.0 to 1.0 inclusive then
8:         n′ ← random node chosen uniformly from N′    ⊲ We'll merge Nl with n′
9:         for i from 1 to ||E|| do
10:            j, k ← nodes connected by Ei
11:            if j = Nl then
12:                Change j to n′ in Ei
13:            if k = Nl then
14:                Change k to n′ in Ei
15:    else    ⊲ No merge, just add Nl into the new graph directly
16:        N′ ← N′ ∪ {Nl}
17: E′ ← E′ ∪ E
18: return N′, E′
Figure 17 Neural Programming encoding of the Fibonacci Sequence (1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, etc.). See if you can work it out. The node 1 always emits a 1. The node + : 1 emits a 1 on the first timestep, then later emits the sum of its inputs. The node × : 0 emits a 0 on the first timestep, then later emits the product of its inputs. The sequence is read at the node Out.
Another strategy, used in the NEAT algorithm, merges all the edges of two parents into one child. But if edges have the same birthday (that is, originally they were the same edge), NEAT throws one of them out. Thus subgraphs don't just get arbitrarily merged during crossover: they're merged back in the way they used to be. The idea is to retain subgraph structures and reduce the randomness of crossover.

Sometimes you might be able to use internal running statistics to guess which subgraphs would be good to cross over or mutate. For example, Astro Teller's Neural Programming (NP) was a direct graph encoding for computer programs in which graph nodes were functions connected by directed edges. In the first timestep, each node emitted a certain value. Thereafter in timestep t each node would read (via incoming edges) the emitted values of other nodes at t − 1, then use those values as arguments to the node's function, and emit the result. Figure 17 shows a simple example from Teller's thesis55 which computes the Fibonacci Sequence. NP was notable for its use of internal reinforcement to determine the degree to which various nodes and edges were beneficial to the program. NP would then make it more likely that less desirable nodes or edges would be swapped out via crossover, or mutated.

55 Though it's best lumped in with other genetic programming methods (notably Cartesian Genetic Programming, see Section 4.4), I include NP here because it's a true direct graph encoding with an interesting approach to dealing with the mess of graph crossover and mutation. For more hints on how to interpret and evaluate individuals of this kind, see Sections 4.3 and 4.4. Probably the best place to learn about NP and its internal reinforcement strategy is Astro Teller's thesis: Astro Teller, 1998, Algorithm Evolution with Internal Reinforcement for Signal Understanding, Ph.D. thesis, School of Computer Science, Carnegie Mellon University, Technical Report Number CMU-CS-98-132. As it so happens, Astro Teller is related to Edward Teller, of the Metropolis Algorithm (see Footnote 11).
We've not even gotten to how to make sure that your particular graph constraints (no self-loops, no multiple edges, etc.) are kept consistent over crossover or mutation. What a mess. As a representation, graphs usually involve an awful lot of ad-hoc hacks and domain specificity. The complete opposite of vectors.
4.3 Trees and Genetic Programming

Genetic Programming (GP) is a research community more than a technique per se. The community focuses on how to use stochastic methods to search for and optimize small computer programs or other computational devices. Note that to optimize a computer program, we must allow for the notion of suboptimal programs rather than programs which are simply right or wrong.56 GP is thus generally interested in the space where there are lots of possible programs (usually small ones) but it's not clear which ones outperform the others and to what degree. For example, finding team soccer robot behaviors, or fitting arbitrary mathematical equations to data sets, or finding finite-state automata which match a given language.
Figure 18 A Symbolic Regression parse tree.
Because computer programs are variable in size, the representations used by this community are also variable in size, mostly lists and trees. In GP, such lists and trees are typically formed from basic functions or CPU operations (like + or if or kick-towards-goal). Some of these operations cannot be performed in the context of other operations. For example, 4 + kick-towards-goal() makes no sense unless kick-towards-goal returns a number. In a similar vein, certain nodes may be restricted to having a certain number of children: for example, if a node is matrix-multiply, it might be expecting exactly two children, representing the matrices to multiply together. For this reason, GP's initialization and Tweaking operators are particularly concerned with maintaining closure, that is, producing valid individuals from previous ones.

One of the nifty things about optimizing computer programs is how you assess their fitness: run them and see how they do! This means that the data used to store the genotypes of the individuals might be made to conveniently correspond to the code of the phenotypes when run. It's not surprising that the early implementations of GP all employed a language in which code and data were closely related: Lisp.

The most common form of GP employs trees as its representation, and was first proposed by Nichael Cramer,57 but much of the work discussed here was invented by John Koza, to whom a lot of credit is due.58
56 John Koza proposed exactly this notion in his book Genetic Programming: "...you probably assumed I was talking about writing a correct computer program to solve this problem.... In fact, this book focuses almost entirely on incorrect programs. In particular, I want to develop the notion that there are gradations in performance among computer programs. Some incorrect programs are very poor; some are better than others; some are approximately correct; occasionally, one may be 100% correct." (p. 130 of John R. Koza, 1992, Genetic Programming: On the Programming of Computers by Means of Natural Selection, MIT Press.)
57 In a single paper Cramer proposed both tree-based GP and a list-based GP similar to that discussed in Section 4.4. He called the list-based version the JB Language, and the tree-based version the TB Language. Nichael Lynn Cramer, 1985, A representation for the adaptive generation of simple sequential programs, in John J. Grefenstette, editor, Proceedings of an International Conference on Genetic Algorithms and the Applications, pages 183–187.
58 Except as noted, the material in Section 4.3 is all due to John Koza. For the primary work, see Footnote 56.
Consider the tree in Figure 18, containing the mathematical expression sin(cos(x − sin x) + x√x). This is the parse tree of a simple program which performs this expression. In a parse tree, a node is a function or if statement etc., and the children of a node are the arguments to that function. If we used only functions and no operators (for example, using a function subtract(x, y) instead of x − y), we might write this in pseudo-C-ish syntax such as:

sin(
    add(
        cos(subtract(x, sin(x))),
        multiply(x, sqrt(x))));

The Lisp family of languages is particularly adept at this. In Lisp, the function names are tucked inside the parentheses, and commas are removed, so the function foo(bar, baz(quux)) appears as (foo bar (baz quux)). In Lisp, objects of the form ( ... ) are actually singly-linked lists, so Lisp can manipulate code as if it were data. Perfect for tree-based GP. In Lisp, Figure 18 is:

(sin
    (+
        (cos (- x (sin x)))
        (* x (sqrt x))))
How might we evaluate the fitness of the individual in Figure 18? Perhaps this expression is meant to fit some data as closely as possible. Let's say the data is twenty pairs of the form ⟨xi, f(xi)⟩. We could test this tree against a given pair i by setting the return value of the x operator to be xi, then executing the tree, getting the value vi it evaluates to, and computing the squared error from f(xi), that is, εi = (vi − f(xi))². The fitness of an individual might be the square root of the total error, √(ε1 + ε2 + ... + εn). The family of GP problems like this, where the objective is to fit an arbitrarily complex curve to a set of data, is called symbolic regression.
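Here is a minimal Python sketch of this kind of fitness assessment, assuming trees stored as nested tuples; the protected sqrt (taking an absolute value) and the made-up data set are my own details, not from the text:

import math

def evaluate(tree, x):
    """Evaluate a nested-tuple parse tree at the input value x."""
    if tree == 'x':
        return x
    op, args = tree[0], [evaluate(a, x) for a in tree[1:]]
    return {'sin': lambda a: math.sin(a[0]),
            'cos': lambda a: math.cos(a[0]),
            'sqrt': lambda a: math.sqrt(abs(a[0])),      # protected sqrt (my choice)
            '+': lambda a: a[0] + a[1],
            '-': lambda a: a[0] - a[1],
            '*': lambda a: a[0] * a[1]}[op](args)

def fitness(tree, data):
    """Square root of the total squared error over the data pairs."""
    return math.sqrt(sum((evaluate(tree, x) - fx) ** 2 for x, fx in data))

figure18 = ('sin', ('+', ('cos', ('-', 'x', ('sin', 'x'))),
                         ('*', 'x', ('sqrt', 'x'))))
data = [(x / 10.0, math.sin(x / 10.0)) for x in range(20)]   # hypothetical target data
print(fitness(figure18, data))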
Figure 19 An Artificial Ant tree.
Programs don't have to be equations: they can actually do things rather than simply return values. An example is the tree shown in Figure 19, which represents a short program to move an ant about a field strewn with food. The operator if-food-ahead takes two children, the one to evaluate if there is food straight ahead, and the one to evaluate if there isn't. The do operator takes two children and evaluates the left one, then the right one. The left and right operators turn the ant 90° to the left or right respectively.

Figure 20 A tree with ERC placeholders inserted. See Figure 21.
Ephemeral Random Constants It's often useful to include in the function set a potentially infinite number of constants (like 0.2462 or ⟨0.9, 2.34, 3.14⟩ or 2924056792 or "s%&e:m") which get sprinkled into your trees. For example, in the Symbolic Regression problem, it might be nice to include in the equations constants such as -2.3129. How can we do this? Well, function sets don't have to be fixed in size if you're careful. Instead you might include in the function set a special node (often a leaf node) called an ephemeral random constant (or ERC). Whenever an ERC is selected from the function set and inserted into the tree, it automatically transforms itself into a randomly-generated constant of your choosing. From then on, that particular constant never changes its value again (unless mutated by a special mutation operator). Figure 20 shows ERCs inserted into the tree, and Figure 21 shows their conversion to constants.
Figure 21 The tree in Figure 20 with ERC placeholders replaced with permanent constants.

4.3.2 Recombination

GP usually does recombination using subtree crossover. The idea is straightforward: in each individual, select a random subtree (which can possibly be the root). Then swap those two subtrees. It's common, but hardly necessary, to select random subtrees by picking leaf nodes 10% of the time and non-leaf nodes 90% of the time. Algorithm 57 shows how to select a subtree of a given type.
Algorithm 57 Subtree Selection
1: r ← root node of tree
2: f(node) ← a function which returns true if the node is of the desired type
3: global c ← 0
4: CountNodes(r, f)
5: if c = 0 then    ⊲ Uh oh, no nodes were of the desired type!
6:     return ✷    ⊲ null or failure or something
7: else
8:     a ← random integer from 1 to c inclusive
9:     c ← 0
10:    return PickNode(r, a, f)

11: procedure CountNodes(r, f)    ⊲ This is just depth-first search
12:     if f(r) is true then
13:         c ← c + 1
14:     for each child i of r do
15:         CountNodes(i, f)

16: procedure PickNode(r, a, f)    ⊲ More depth-first search!
17:     if f(r) is true then
18:         c ← c + 1
19:         if c ≥ a then
20:             return r
21:     for each child i of r do
22:         v ← PickNode(i, a, f)
23:         if v ≠ ✷ then
24:             return v
25:     return ✷    ⊲ You shouldn't be able to reach here
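For illustration, here is a functionally similar Python sketch of subtree selection and subtree crossover; it gathers candidate nodes into a list instead of using the book's counting traversal, and the small Node class and helper names are mine:

import random

class Node:
    def __init__(self, value, children=()):
        self.value, self.children = value, list(children)

def collect(node, wanted, out):
    """Depth-first search gathering every node for which wanted(node) is true."""
    if wanted(node):
        out.append(node)
    for child in node.children:
        collect(child, wanted, out)
    return out

def select_subtree(root, wanted=lambda n: True):
    candidates = collect(root, wanted, [])
    return random.choice(candidates) if candidates else None

def subtree_crossover(root_a, root_b):
    """Swap the contents of one randomly chosen node (and hence its subtree) in each parent."""
    a, b = select_subtree(root_a), select_subtree(root_b)
    a.value, b.value = b.value, a.value
    a.children, b.children = b.children, a.children

a = Node('+', [Node('x'), Node('sin', [Node('x')])])
b = Node('*', [Node('x'), Node('x')])
subtree_crossover(a, b)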
4.3.3 Mutation

GP doesn't often do mutation, because the crossover operator is non-homologous61 and is highly mutative. Even so, there are many possibilities for mutation. Here are just a few:

Subtree mutation: pick a random subtree and replace it with a randomly-generated subtree using the algorithms above. Commonly Grow is used with a max-depth of 5. Again, leaf nodes are often picked 10% of the time and non-leaf nodes 90% of the time.
Replace a random non-leaf node with one of its subtrees.
Pick a random non-leaf node and swap its subtrees.
If nodes in the trees are ephemeral random constants, mutate them with some noise.
Select two subtrees in the individual such that neither is contained within the other, and swap them with one another.

61 Recall that with homologous crossover, an individual crossing over with itself will just make copies of itself.
Again, we can use Algorithm 57 to select subtrees for use in these techniques. Algorithm 57 is called Subtree Selection but it could have just as well been called node selection: we're just picking a node. First we count all the nodes of a desired type in the tree: perhaps we want to just select a leaf node, for example. Then we pick a random number a no greater than the number of nodes counted. Then we go back into the tree and do a depth-first traversal, counting off each node of the desired type, until we reach a. That's our node.
4.3.4 Forests and Automatically Defined Functions

Genetic Programming isn't constrained to a single tree: it's perfectly reasonable to have a genotype in the form of a vector of trees (commonly known as a forest). For example, I once developed simple soccer robot team programs where an individual was an entire robot team. Each robot program was two trees: a tree called when the robot was far from the ball (it returned a vector indicating where to run), and another tree called when the robot was close enough to a ball to kick it (it would return a vector indicating the direction to kick). The individual consisted of some n of these tree pairs, perhaps one per robot, or one per robot class (goalies, forwards, etc.), or one for every robot to use (a homogeneous team). So a soccer individual might have from 2 to 22 trees!

Trees can also be used to define automatically defined functions (ADFs)62 which can be called by a primary tree. The heuristic here is one of modularity. Modularity lets us search very large spaces if we know that good solutions in them are likely to be repetitive: instead of requiring the individual to contain all of the repetitions perfectly (having all its ducks in order), a very unlikely result, we can make it easier on the individual by breaking the individual into modules, with an overarching section of the genotype defining how those modules are arranged.

In the case of ADFs, if we notice that ideal solutions are likely to be large trees with often-repeated subtrees within them, we'd prefer that the individual consist of one or two subfunctions which are then called repeatedly from a main tree. We do that by adding to the individual a second tree (the ADF) and including special nodes in the original parent tree's function set63 which are just function calls to that second tree. We can add further ADFs if needed.

For purposes of illustration, let's say that a good GP solution to our problem will likely need to develop a certain subfunction of two arguments. We don't know what it will look like but we believe this to be the case. We could apply this heuristic belief by using a GP individual representation consisting of two trees: the main tree and a two-argument ADF tree called, say, ADF1. We add a new non-leaf node to the main tree's function set: ADF1(child1, child2). The ADF1 tree can have whatever function set we think is appropriate to let GP build this subelement. But it will need to have two additional leaf-node functions added to its function set as well. Let's call them ARG1 and ARG2.

62 Automatically Defined Functions are also due to John Koza, but are found in his second book, John R. Koza, 1994, Genetic Programming II: Automatic Discovery of Reusable Programs, MIT Press.
63 Every tree has its own, possibly unique, function set.
Figure 22 An ADF example: a Main Tree containing ADF1 calls, and an ADF1 Tree whose leaves include ARG1 and ARG2 nodes.

Figure 22 shows an example individual. Here's how it works. We first evaluate the main tree. When it's time to call an ADF1 node, we first call its two children and store away their results (call them result1 and result2). We then call the ADF1 tree. When its ARG1 function is called, it automatically returns result1. Likewise ARG2 automatically returns result2. When the ADF1 tree is finished, we store away its return value (let's call it final). We then return to the Main tree: the ADF1 node returns the value final, and we continue execution where we left off in the Main tree.
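Here is a rough Python sketch of this evaluation scheme, reusing the nested-tuple trees from the earlier symbolic regression sketch; the primitive set and the example trees are made up for illustration:

def eval_tree(tree, primitives, adf_tree=None, args=None):
    """Evaluate a nested-tuple tree; 'ADF1' nodes call adf_tree, whose ARG1/ARG2
    leaves return the already-evaluated children of the calling node."""
    if tree == 'ARG1':
        return args[0]
    if tree == 'ARG2':
        return args[1]
    if isinstance(tree, str):                        # a leaf primitive
        return primitives[tree]()
    op, children = tree[0], tree[1:]
    if op == 'ADF1':
        result1 = eval_tree(children[0], primitives, adf_tree)
        result2 = eval_tree(children[1], primitives, adf_tree)
        return eval_tree(adf_tree, primitives, adf_tree, (result1, result2))
    vals = [eval_tree(c, primitives, adf_tree, args) for c in children]
    return primitives[op](*vals)

# Hypothetical example: ADF1 computes (a + a) * b, and the main tree calls it twice
primitives = {'+': lambda a, b: a + b, '*': lambda a, b: a * b,
              'one': lambda: 1.0, 'two': lambda: 2.0}
adf1 = ('*', ('+', 'ARG1', 'ARG1'), 'ARG2')
main = ('+', ('ADF1', 'one', 'two'), ('ADF1', 'two', 'two'))
print(eval_tree(main, primitives, adf1))   # (1+1)*2 + (2+2)*2 = 12.0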
Note that you could have more than one ADF tree. And you can have ADF trees which call other ADF trees! There's no reason you can't have nested function calls, right? In theory you could have recursive calls, that is, ADF trees which call each other. But your individuals won't be smart enough to build a base case automatically, so to keep the system from going into an infinite recursive loop, you'll need to have some maximum call depth built in.

One last variation: automatically defined macros (ADMs), due to Lee Spector.64 Here, when the ADF1 node is called, we jump immediately to the ADF1 tree without bothering to call the children of the ADF1 node first. Instead, whenever ARG1 is called, we jump back to the main tree for a second, call the first child, get its result, come back to the ADF1 tree, and have ARG1 return that value. This happens each time ARG1 is called. Likewise for ARG2. The idea is that this gives us a limited ability to selectively, or repeatedly, call children, in a manner similar to if-then constructs, while-loops, etc. (Lisp implements these as macros, hence the name).
4.3.5 Strongly-Typed Genetic Programming

Strongly-Typed Genetic Programming is a variant of Genetic Programming due initially to David Montana.65 Recall that in the examples shown earlier, each node returns the same kind of thing (for example, in symbolic regression, all nodes return floating-point values). But in more complex programs, this isn't really an option. For example, what if we wanted to add to symbolic regression a special operator, If, which takes three arguments: a boolean test, the then-value to return if the test is true, and the else-value to return if the test is false. If returns floating-point values like the other nodes, but it requires a node which returns a boolean value. This means we'll need to add some nodes which return only boolean values; both leaf nodes and perhaps some non-leaf node operators like And or Not.

The problem here is that in order to maintain closure, we can no longer just build trees, cross them over, or mutate them, without paying attention to which nodes are permitted to be children of which other nodes and where. What happens, for example, if we try to multiply sin(x) by false? Instead, we need to assign type constraints to nodes to specify which nodes are permitted to hook up with which others and in what way.

64 Lee Spector, 1996, Simultaneous evolution of programs and their control structures, in Peter J. Angeline and K. E. Kinnear, Jr., editors, Advances in Genetic Programming 2, chapter 7, pages 137–154, MIT Press.
65 David Montana, 1995, Strongly typed genetic programming, Evolutionary Computation, 3(2), 199–230.
There are a variety of approaches to typing. In the simplest approach, atomic typing, each type is just a symbol or integer. The return value of each node, the expected child types for each node, and the expected return type for the tree as a whole, each get assigned one of these types. A node may attach as the child of another, or act as the root node of the tree, only if the types match. In set typing, the types aren't simple symbols but are sets of symbols. Two types would match if their intersection is nonempty. Set typing can be used to provide sufficient typing information for a lot of things, including the class hierarchies found in object-oriented programming.
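A tiny Python sketch of the two matching rules, with invented type names:

def atomic_match(child_type, slot_type):
    return child_type == slot_type                    # atomic typing: exact symbol match

def set_match(child_types, slot_types):
    return bool(set(child_types) & set(slot_types))   # set typing: nonempty intersection

print(atomic_match('float', 'bool'))                  # False: can't plug sin(x) into If's test
print(set_match({'number', 'float'}, {'number'}))     # True: the sets intersect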
But even this may not be enough. Atomic and set typing presume a finite number of symbols. How would we handle the situation where nodes operate over matrices? For example, consider a matrix-multiply node which takes two children (providing matrices) and multiplies them, returning a new matrix. The dimensions of the returned matrix are functions of the two children matrices. What if we change one of the children to a subtree which returns a new, differently-sized matrix? It's possible to do this if we can reconcile it by changing the return type of the parent. This may trigger a cascade of changes to return types, or to the types of children, as the tree readjusts itself. Such typing is commonly known as polymorphic typing and relies on type resolution algorithms similar to those found in polymorphically typed programming languages like Haskell or ML. It's complex.
4.3.6 Cellular Encoding

Figure 23 The double Edge Encoding operator.
Trees can also be used as short programs to instruct an interpreter how to create a second data structure (usually a graph). This second data structure is then used as the phenotype. This technique is commonly known as Cellular Encoding (by Frédéric Gruau).66 The general idea is to take a seed (perhaps a graph consisting of a single node or a single edge) and hand it to the root of the tree. The root operator modifies and expands the graph, then hands certain expanded elements off to its children. They then expand the graph further, handing expanded pieces to their children, and so on, until the tree is exhausted. The fully expanded graph is then used as the phenotype.

Gruau's original formulation, which was used mostly for neural networks, operated on graph nodes, which requires a fairly complicated mechanism. An alternative would be to operate on graph edges, which doesn't allow all possible graphs, but is fairly useful for sparse or planar graphs such as are often found in electrical circuits or finite-state automata. Early on, Lee Spector and I dubbed this Edge Encoding.67 Edge Encoding is easier to describe, so that's what I'll show off here.
66 Frédéric Gruau, 1992, Genetic synthesis of boolean neural networks with a cell rewriting developmental process, in J. D. Schaffer and D. Whitley, editors, Proceedings of the Workshop on Combinations of Genetic Algorithms and Neural Networks (COGANN92), pages 55–74, IEEE Computer Society Press.
67 Lee Spector and I wrote an early paper which named it Edge Encoding: Sean Luke and Lee Spector, 1996, Evolving graphs and networks with edge encoding: Preliminary report, in John R. Koza, editor, Late Breaking Papers at the Genetic Programming 1996 Conference, pages 117–124, Stanford Bookstore. But I doubt we're the inventors: when the paper came out, John Koza, Forrest Bennett, David Andre, and Martin Keane were already using a related representation to evolve computer circuits. See John R. Koza, Forrest H Bennett III, David Andre, and Martin A. Keane, 1996, Automated WYWIWYG design of both the topology and component values of electrical circuits using genetic programming, in John R. Koza, et al., editors, Genetic Programming 1996: Proceedings of the First Annual Conference, pages 123–131, MIT Press.
Figure 25 Expansion of a finite-state automaton using the Edge Encoding in Figure 24. (a) The initial edge. (b) After applying double. (c) After applying reverse. (d) After applying loop, e, start, and 0. The white circle is a starting state. (e) After applying bud and 1. (f) After applying split, 0, accept, and 1. The black circle is an accepting state.
Edge and Cellular Encoding tree nodes work differently from, say, the ones used for Symbolic Regression: they take things from parents, operate on them, and then hand them to their children. As an example, Figure 23 shows an Edge Encoding operator called double. It takes an edge handed to it by its parent (Edge E in Figure 23b), and creates a duplicate edge connecting the same two nodes (Edge F in Figure 23c). It then hands one edge each to its two children.

Figure 24 An Edge Encoding.
Figure 24 shows an edge encoding tree which will construct a finite-state automaton. Besides double, the main operators are: reverse, which reverses an edge; loop, which creates a self-loop edge on the head node of loop's edge; bud, which creates a new node and then a new edge from the head node of bud's edge out to the new node; split, which splits its edge into an edge from split's tail node out to a new node, and then another edge back to split's head node. Other finite-state automaton-specific operators (e, 1, 0) label their edge or (start, accept) label the head node of their edge.

Confused at this point? I would be! Perhaps this will help. Figure 25 shows the expansion of Figure 24, starting with a single edge, and eventually growing into a full finite-state automaton which interprets the regular language (1|0)*01.
Cellular and Edge Encoding are examples of an indirect or developmental encoding: a representation which contains a set of rules to develop a secondary data structure which is then used as the phenotype. Indirect encodings are a popular research topic for two reasons. First there's the biological attraction: DNA is an indirect encoding, as it creates RNA and protein which then go on to do the heavy lifting in living organisms. Second, there's the notion of compactness and modularity discussed earlier: many indirect encoding rules make repeated calls to sub-rules of some form. In Cellular and Edge Encoding there's no modularity, but you can add it trivially by including some Automatically Defined Functions. Likewise, unless you use an ADF, there's little compactness: Edge Encoding trees will have at least as many nodes as the graph has edges!
4.3.7 Stack Languages
An alternative to Lisp is the family of stack languages, in which code takes the form of a stream of instructions, usually in postfix notation. Real-world stack languages include FORTH and PostScript. These languages assume the presence of a stack onto which temporary variables, and in some cases chunks of code, can be pushed and popped. Rather than say 5 × (3 + 4), a stack language might say 5 3 4 + ×. This pushes 5, 3, and 4 on the stack; then pops the last two numbers (4 and 3) off the stack, adds them, and pushes the result (7) on the stack; then pops off of the stack the remaining two numbers (7 and 5), multiplies them, and pushes the result (35) back on.
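To make that evaluation order concrete, here's a minimal Python sketch of a postfix evaluator (the function name and the tiny operator table are mine, not part of any particular stack language):

import operator

def eval_postfix(tokens):
    """Evaluate a postfix token stream such as [5, 3, 4, '+', '*']."""
    stack = []
    ops = {'+': operator.add, '*': operator.mul}
    for tok in tokens:
        if tok in ops:
            b = stack.pop()            # topmost operand
            a = stack.pop()            # next operand down
            stack.append(ops[tok](a, b))
        else:
            stack.append(tok)          # a number: just push it
    return stack.pop()

print(eval_postfix([5, 3, 4, '+', '*']))   # 5 * (3 + 4) = 35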
Figure 26 The expression ((a b) ( ) ((c))) as rooted parentheses.
Stack languages often create subroutines by pushing chunks of code onto the stack, then executing them from the stack multiple times. For example, we might generalize the procedure above, a × (b + c), into a subroutine by wrapping its operators in parentheses and subjecting them to a special code-pushing operator like this: push (+ ×). Given another special operator do, which pops a subroutine off the stack, executes it n times, and pushes it back on the stack, we can do stuff like 5 7 9 2 4 3 6 5 9 push (+ ×) 4 do, which computes 5 × (7 + 9) × (2 + 4) × (3 + 6) × (5 + 9).
Figure 27 The expression ((a b) ( ) ((c))) in cons cells.
Stack languages have long been used in genetic programming. Among the most well-known is Lee Spector's GP stack language, Push.68 Push maintains multiple stacks, one for each data type, allowing code to operate over different kinds of data cleanly. Push also includes special stacks for storing, modifying, and executing code. This allows Push programs to modify their own code as they are executing it. This makes possible, for example, the automatic creation of self-adaptive breeding operators.
The use of stack languages in optimization presents some representational decisions. If the language simply forms a stream of symbols with no constraints, just use a list representation (see the next Section, 4.4). But most stack languages at least require that the parentheses used to delimit code must be paired. There are many ways to guarantee this constraint. In some stack languages a left parenthesis must always be followed by a non-parenthesis. This is easy to do: it's exactly like the earlier Lisp expressions (see Figures 18 and 19). If instead your language allows parentheses immediately after left parentheses, as in ((a b) ( ) ((c))), you could just use the left parenthesis as the root node of a subtree and the elements inside the parentheses as the children of that node, as shown in Figure 26. Both approaches will require that tree nodes have arbitrary arity. Or, as is the case for Push, you could use the traditional internal format of Lisp: nested linked lists. Each parenthesized expression (like (a b)) forms one linked list, and elements in the expression can be other linked lists. Nodes in each linked list are called cons cells, represented in Figure 27 as dots. The left child of a cons cell holds a list element, and the right child points to the next cons cell in the list, or to ▢, indicating the end of the list.
4.4 Lists
Parse trees aren't the only way to represent programs: they could also be represented as arbitrary-length lists (or strings) of machine language instructions. Individuals are evaluated by converting the lists into functions and executing them. This is known as Linear Genetic Programming, and
68
The basics: Lee Spector and Alan Robinson, 2002, Genetic programming and autoconstructive evolution with the push programming language, Genetic Programming and Evolvable Machines, 3(1), 7–40. Then for the latest version of the language, check out: Lee Spector, Jon Klein, and Maarten Keijzer, 2005, The Push3 execution stack and the evolution of control, in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2005), pages 1689–1696, Springer.
Grammar:  tree → n + n | n − n      n → n × m | sin m      m → 1 | 2
An Arbitrary Individual:  false false true true false true false true true false false
Figure 28 A Grammatical Evolution grammar, and an individual with a list representation.
Interpreting... [start] false false true true false true false
Expansion: tree ⇒ n + n ⇒ (n × m) + n ⇒ (sin m × m) + n ⇒ (sin 2 × m) + n ⇒ (sin 2 × 1) + n ⇒ (sin 2 × 1) + sin m ⇒ (sin 2 × 1) + sin 1
Figure 29 Expansion of the individual shown in Figure 28.
the most well-known practitioners of this approach are Wolfgang Banzhaf, Peter Nordin, Robert Keller, and Frank Francone. They sell a GP system called Discipulus based on this notion, and also wrote a well-regarded book on both tree-based and linear Genetic Programming.69
Executing arbitrary machine code strings can be dangerous if closure isn't maintained. But how to maintain closure in such a situation? Certainly your individual wouldn't be just a bit-string, because that would allow all sorts of machine language instructions, even undesirable ones or nonsense ones.70 Clearly it'd have to be a list of instructions chosen from a carefully-selected set.
If the instruction set is finite in length, we could just assign a unique integer to each instruction and represent a genotype as a list of integers. Usually schemes employ a finite set of registers as well: this allows the machine code lists to operate essentially like directed acyclic graphs (DAGs), with early instructions affecting instructions much further down in the list due to their shared register. Additionally we might find it desirable to include some special instructions that operate on constants (Add 2, etc.).
Stack languages bear a strong resemblance to machine code, so it shouldn't be too terribly surprising that, as mentioned in Section 4.3.7, some stack languages are straightforwardly applied to list representations, particularly if the language has no particular syntactic constraints.
Lists can be used to generate trees as well: consider Grammatical Evolution (GE), invented by Conor Ryan, J. J. Collins, and Michael O'Neill.71 Grammatical Evolution's representation is a list of integers or boolean values. It then uses this list as the decision points in a pre-defined tree grammar to build a GP Tree. The tree is then evaluated in GP style to assess fitness. This somewhat complex approach is yet another example of an indirect encoding, and though it doesn't have the modularity common in many indirect encodings, it does have a method to its madness: it can straightforwardly define any tree for any desired language.
As an example, consider the ridiculous grammar and an individual represented as a list, shown in Figure 28. To interpret this, we start with tree, and use the first element in the list to decide how
69
Wolfgang Banzhaf, Peter Nordin, Robert E. Keller, and Frank D. Francone, 1998, Genetic Programming: An Introduction,
Morgan Kaufmann.
70
You probably don't want to call the infamous HCF (Halt and Catch Fire) instruction. Look for it on Wikipedia.
71
Conor Ryan, J. J. Collins, and Michael O'Neill, 1998, Grammatical evolution: Evolving programs for an arbitrary language, in EuroGP 1998, pages 83–96.
Node:   2     3     4     5     6     7     8     9     10    11 (F1)  12 (F2)  13 (F3)
Gene:   510*  101   611*  002   223   203   055   026   473*  5        9        10
Function genes:  0: +   1: −   2: ×   3: /   4: sin   5: cos   6: sqrt
F1 = x + cos(y)      F2 = cos(y) + cos(y) × (x − y)      F3 = sin(x × (x − y))
Figure 30 A Cartesian Genetic Programming example. Note that node 13 (F3) has a single gene value (10), and not two gene values 1 and 0. In all other cases genes are single-digit. See text for interpretation of this figure.
to expand that (we'll assume that false expands to the first item, and true expands to the second item). Once we expand, we expand the remaining undefined variables in a depth-first fashion. Figure 29 shows the expansion of the Individual in Figure 28.
Now we have a tree we can evaluate! Notice that we wound up not using the last 4 bits in the individual (true true false false). What if the list is too short and we don't have enough decision points? Typically one just wraps around to the beginning of the list again. It's not a great solution but it's workable.72 GE is clever in that it allows us to construct any valid tree for a given grammar, which is a lot more flexible than standard Tree-based GP: indeed it negates the need to even bother with strong typing. The downside is that this representation is naturally un-smooth in certain places: tiny changes early in the list result in gigantic changes in the tree. This can be a problem.
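To make the decoding concrete, here's a small Python sketch of a GE-style decoder using the toy grammar of Figure 28; the dictionary encoding of the grammar, the wrap-around indexing, and the function names are illustrative assumptions rather than a description of any particular GE implementation:

# Toy grammar from Figure 28: each nonterminal maps to its list of alternatives.
GRAMMAR = {
    'tree': [['n', '+', 'n'], ['n', '-', 'n']],
    'n':    [['n', '*', 'm'], ['sin', 'm']],
    'm':    [['1'], ['2']],
}

def decode(decisions):
    """Expand 'tree' depth-first, consuming booleans to pick alternatives.
    Wraps around to the start of the list if we run out of decisions."""
    pos = 0
    def expand(symbol):
        nonlocal pos
        if symbol not in GRAMMAR:              # terminal symbol: emit it
            return [symbol]
        choice = 1 if decisions[pos % len(decisions)] else 0
        pos += 1
        out = []
        for s in GRAMMAR[symbol][choice]:      # expand children left to right
            out.extend(expand(s))
        return out
    return ' '.join(expand('tree'))

# The individual from Figure 28: false false true true false true false true true false false
genome = [False, False, True, True, False, True, False, True, True, False, False]
print(decode(genome))   # sin 2 * 1 + sin 1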
I include one final encoding here: Cartesian Genetic Programming (CGP) by Julian Miller.73 Consider Figure 30, with a fixed-length vector of 30 numbers, found in the row labeled Gene. (Note that the final gene is a 10 and not two genes 1 and 0.) Cartesian Genetic Programming will interpret the genes in this vector to build a graph of function nodes similar to those in genetic programming. These nodes come in three categories: input nodes, output nodes, and hidden (function) nodes. The experimenter pre-defines how many of each of these nodes there are, and their layout, as appropriate to his problem. In Figure 30 there are two input nodes (x and y), nine hidden nodes, and three output nodes F1, F2, and F3.
Each node has a unique number, which is shown inside a diamond at that node. Notice that genes are grouped together with a certain Node. These genes are responsible for defining the inputs to that node and its function label. For example, the first group, 510, is responsible for node 2. Some genes are bunched together into groups. The size of the group is the maximum number of arguments to any function in the function set, plus one. In our function set, the functions +, −, ×, and / all take two arguments, so the group size is 3. The first gene in a group defines the function the node
72
I don't like that approach: instead, I'd bypass evaluation and just assign the individual the worst possible fitness.
73
Note that this encoding is not a list encoding, but more properly a fixed-length vector encoding. I include it here because it's more at home with the other list-style GP methods.
will be assigned: for example, the 5 in 510 refers to the function cos. The remaining genes in the group specify the nodes from which there are incoming edges. For example, the 1 in 510 indicates that there is an incoming edge from node 1 (the y). The final gene, 0, is marked with an asterisk (*) to indicate that it's unused (because the function cos only needs one incoming edge).
The nodes 11 (F1), 12 (F2), and 13 (F3) have only a single gene each, which indicates the node from which there is an incoming edge. For example, node 11 (F1) has an incoming edge from node 5. Nodes 0 (x) and 1 (y) do not have any associated gene values. Also, you'll need to restrict the possible values of each gene as appropriate. In this example, the genes defining functions are restricted to 0–6, and the genes defining connections are restricted to refer only to node numbers less than their node. CGP traditionally does the second bit using a constraint that requires that connection genes for a given node can only refer to nodes at most some M columns prior.
After defining the graph, we can now run it, just like a genetic programming individual: in this case, we have a symbolic-regression solution, so we provide values for x and y, and feed values through edges to function nodes, finally reading the results at F1, F2, and F3. Unlike in tree-based genetic programming, CGP is capable of defining multiple functions at once. The three functions that have been defined by this graph are shown at the bottom of Figure 30. Note that nodes 4 and 8 don't contribute to the final solution at all. These are examples of introns, a term we'll get to in Section 4.6.
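To illustrate how those gene groups are read, here's a small Python sketch that decodes and runs a CGP-style genome; the layout follows the description above (groups of three genes per hidden node, one gene per output node), but the tiny function table and the example genome are made up for this sketch:

import math

# Assumed function table for this sketch: gene value -> (name, arity, implementation).
FUNCS = {0: ('+', 2, lambda a, b: a + b),
         4: ('sin', 1, lambda a, b: math.sin(a)),
         5: ('cos', 1, lambda a, b: math.cos(a))}

def decode_cgp(genome, n_inputs, n_hidden, max_arity=2):
    """Split a flat CGP genome into hidden-node records and output connections."""
    group = max_arity + 1
    hidden = []
    for i in range(n_hidden):
        f, *ins = genome[i * group:(i + 1) * group]
        hidden.append((f, ins))                   # (function gene, source node ids)
    outputs = genome[n_hidden * group:]           # one source-node gene per output
    return hidden, outputs

def evaluate(node, inputs, hidden, n_inputs, cache):
    """Recursively compute the value produced at a node id."""
    if node < n_inputs:
        return inputs[node]                       # an input node (x, y, ...)
    if node not in cache:
        f, ins = hidden[node - n_inputs]
        name, arity, fn = FUNCS[f]
        args = [evaluate(i, inputs, hidden, n_inputs, cache) for i in ins[:arity]]
        cache[node] = fn(*(args + [0.0] * (2 - arity)))
    return cache[node]

# A made-up genome: inputs x (node 0) and y (node 1), two hidden nodes, one output.
# Node 2: cos(y); node 3: x + node2; output reads node 3, so F = x + cos(y).
genome = [5, 1, 0,   0, 0, 2,   3]
hidden, outputs = decode_cgp(genome, n_inputs=2, n_hidden=2)
print(evaluate(outputs[0], [1.0, 0.0], hidden, 2, {}))   # 1.0 + cos(0.0) = 2.0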
There are obviously other reasons why you might want to use a list as a representation, besides
alternative genetic programming techniques. For example, lists could be used to represent sets or
collections, or other direct graph encodings, or strings.
Warning Lists aren't particularly compatible with heterogeneous genomes (Section 4.1.4), where each gene has its own mutation and initialization mechanisms. This is because list crossover and mutation methods change not only the value of genes but also their location.
4.4.1 Initialization
How new lists are generated largely depends on the domain-specific needs of the method involved. But generally speaking there are two issues: specifying the length of the list, and populating it. One simple way to do the former is to sample a length from the geometric distribution (Algorithm 46, perhaps with the minimum list size being 1). Beware again that the distribution will have a very high number of small lists: you may wish to use a flatter distribution.
To populate the list, just march through the list and set each of its values to something random but appropriate. Remember that for some problems this isn't sufficient, as there may be constraints on which elements may appear after other elements, so you'll need to be more clever there.
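Here's a minimal Python sketch of that recipe, assuming a geometric-style length distribution and a user-supplied generator for random elements (both parameter names are mine):

import random

def random_list(new_element, min_size=1, p=0.3):
    """Pick a length from a geometric-style distribution, then fill the list."""
    size = min_size
    while random.random() < p:        # each "heads" extends the list by one
        size += 1
    return [new_element() for _ in range(size)]

# Example: a random-length list of random integers in 0..9
print(random_list(lambda: random.randint(0, 9)))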
4.4.2 Mutation
Like initialization, mutation in lists has two parts: changing the size of the list, and changing the contents of the list. Contents may be changed in exactly the same way that you do for fixed-length vectors: using a bit-flip mutation or integer randomization, etc. Remember that you may not be able to change some elements without changing others due to certain constraints among the elements.
Changing the length likewise depends on the problem: for example, some problems prefer to only add to the end of a list. One simple approach is to sample from some distribution, then add (or subtract, if it so happens) that amount to the list length. For example, we could do a random walk starting at 0, flipping a coin until it comes up tails. The number we arrive at is what you add to (or delete from, if it's negative) the end of the list. This should look familiar:
Algorithm 58 Random Walk
1: b ← coin-flip probability    ▷ Make b bigger to make the random walks longer and more diffuse
2: m ← 0
3: if p ≥ random number chosen uniformly from 0.0 to 1.0 inclusive then
4:   repeat
5:     n ← either a 1 or a −1, chosen at random
6:     if m + n is an acceptable amount then
7:       m ← m + n
8:     else if m − n is an acceptable amount then
9:       m ← m − n
10:   until b < random number chosen uniformly from 0.0 to 1.0 inclusive
11: return m
Don't confuse this with Algorithm 42 (Random Walk Mutation), which uses a similar random walk to determine the noise with which to mutate. Beware that because lists can't be any smaller than 1, but can be arbitrarily large, a random walk like this may cause the individual lists to become fairly large: you may need to add some countering force to keep your population from growing simply due to your mutation operator (see the bloat discussion below for other reasons for growth).
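Here's a short Python sketch of that coin-flip walk, with the acceptability check supplied as a predicate (the parameter names are illustrative):

import random

def random_walk(b, acceptable):
    """Walk m up or down by 1 until a coin with 'heads' probability b comes up tails."""
    m = 0
    while True:
        n = random.choice((1, -1))
        if acceptable(m + n):
            m += n
        elif acceptable(m - n):
            m -= n
        if random.random() >= b:      # "tails": stop walking
            return m

# Example: how much to add to (or delete from) a list of length 10,
# never letting the list shrink below one element.
length = 10
print(random_walk(0.7, lambda delta: length + delta >= 1))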
Warning In some list-representation problems, such as Grammatical Evolution, the early elements in the list are far more important than the later elements. In GE this is because the early elements determine the early choices in the tree grammar, and changing them radically changes the tree; whereas the later elements only change small subtrees or individual elements (or if the list is too long, they don't change anything at all!). This has a huge effect on the smoothness of the landscape, and you want to make sure your mutation procedure reflects this. For example, you might only occasionally change the elements at the beginning of the list, and much more often change the elements near the end of the list. Linear GP may or may not have this property depending on the nature of your problem, and in fact it can actually have the opposite situation if the final machine code elements in the list get to make the last and most important changes.
4.4.3 Recombination
Figure 31 One-Point List Crossover.
Like mutation, crossover also may depend on constraints, but ignoring that, there are various ways you could do crossover among variable-length lists. Two easy ones are one-point and two-point list crossover, variations on the standard one- and two-point vector crossovers. In one-point list crossover, shown in Figure 31, we pick a (possibly different) point in each list, then cross over the segments to the right of the points. The segments are always non-zero in length, and need not be the same length. The algorithm should look eerily familiar:
Algorithm 59 One-Point List Crossover
1: ~v ← first list ⟨v_1, v_2, ..., v_l⟩ to be crossed over
2: ~w ← second list ⟨w_1, w_2, ..., w_k⟩ to be crossed over
3: c ← random integer chosen uniformly from 1 to l inclusive
4: d ← random integer chosen uniformly from 1 to k inclusive
5: ~x ← snip out v_c through v_l from ~v
6: ~y ← snip out w_d through w_k from ~w
7: Insert ~y into ~v where ~x was snipped out
8: Insert ~x into ~w where ~y was snipped out
9: return ~v and ~w
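A direct Python transcription of Algorithm 59, operating on plain lists (the only departure from the pseudocode is that the crossover points are 0-indexed):

import random

def one_point_list_crossover(v, w):
    """Swap the tails of two variable-length lists at independently chosen points."""
    c = random.randrange(len(v))      # tail of v starts here
    d = random.randrange(len(w))      # tail of w starts here
    return v[:c] + w[d:], w[:d] + v[c:]

a, b = one_point_list_crossover([0, 1, 0, 0], [0, 0, 1, 0, 1, 1, 0, 0])
print(a, b)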
Figure 32 Two-Point List Crossover.
Two-point list crossover, shown in Figure 32, is similar: we pick two points in each individual and swap the mid-sections. Again, note that the points don't have to be the same. Think carefully about your list representation to determine if one- or two-point list crossover makes sense. They have quite different dynamics. Is your representation reliant on the particulars of what's going on in the middle, and sensitive to disruption there, for example?
Another Warning Just as mentioned for mutation, certain elements of the list may be more important than others and more sensitive to being messed up via crossover. So in Grammatical Evolution, for example, you might want to consider picking two-point crossover points near the end of the list more often than ones near the front. Or stick with one-point crossover.
The two-point list crossover algorithm should likewise feel familiar to you:
Algorithm 60 Two-Point List Crossover
1: ~v ← first list ⟨v_1, v_2, ..., v_l⟩ to be crossed over
2: ~w ← second list ⟨w_1, w_2, ..., w_k⟩ to be crossed over
3: c ← random integer chosen uniformly from 1 to l inclusive
4: d ← random integer chosen uniformly from 1 to l inclusive
5: e ← random integer chosen uniformly from 1 to k inclusive
6: f ← random integer chosen uniformly from 1 to k inclusive
7: if c > d then
8:   Swap c and d
9: if e > f then
10:   Swap e and f
11: ~x ← snip out v_c through v_d from ~v
12: ~y ← snip out w_e through w_f from ~w
13: Insert ~y into ~v where ~x was snipped out
14: Insert ~x into ~w where ~y was snipped out
15: return ~v and ~w
4.5 Rulesets
A set is, of course, a collection of objects, possibly empty, where all the objects are different. Sets can be used for all sorts of stuff, but the big item seems to be sets of rules which either form a computer program of sorts (perhaps to direct a robot about in a simulated environment) or which define an indirect encoding which grows a graph structure from a simple initial seed.
Rules in rulesets usually take a form which looks like if→then. The if part is commonly called the body of the rule and the then part is commonly called the head of the rule. There are two common kinds of rulesets, which I will call state-action and production rulesets. State-action rules are designed to perform some action (the then) when some situation or event has occurred in the world (the if). For example, a robot's sensors might trigger a rule which causes the robot to turn left. Production rules are different in that some rules' then actions trigger other rules' if portions. For example, if a rule a → b fires, it would then cause some other rule b → c to fire. Production rules are mostly used to construct indirect encodings which grow graph structures and the like. The interconnection among the rules in production rulesets means that they bear more than a passing resemblance, representation-wise, to directed graph structures.
The first question is: what data structure would you use to hold a set of objects? We could use a variable-sized vector structure like a list. Or we could use a hash table which stores the elements as keys and arbitrary things as values. In my experience, most people implement sets with lists.
The basic closure constraint in a set is its uniqueness property: often you have to make sure that when you create sets, mutate them, or cross them over, the rules remain all different. Unless you have a mutation or crossover operation which does this naturally, you may need to go back into the set after the fact and remove duplicates. This is a trivial procedure:
Algorithm 61 Duplicate Removal
1: ~v ← collection of elements converted into a vector ⟨v_1, v_2, ..., v_l⟩
2: h ← {}    ▷ Represent h with a hash table, it's faster
3: l′ ← l
4: for i from l down to 1 do
5:   if v_i ∈ h then    ▷ A duplicate!
6:     Swap v_i and v_l′
7:     l′ ← l′ − 1
8:   else    ▷ Not a duplicate!
9:     h ← h ∪ {v_i}
10: ~v′ ← blank vector ⟨v′_1, v′_2, ..., v′_l′⟩
11: for i from 1 to l′ do
12:   v′_i ← v_i
13: return ~v′ converted back into a collection
converted back into a collection
Note that this modies the order of the original list ~v. You can represent h with a hash table
easily: to add an element to h, you just add it as the key to the hash table (the value can be anything:
for example, the element itself). To test to see if v
l
h, you just check to see if v
l
is a key in the hash
table already. Piece of cake.
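In a language with built-in hash sets this is only a few lines; here's a Python sketch which, incidentally, also preserves the original order of the elements (something Algorithm 61 does not promise):

def remove_duplicates(elements):
    """Keep the first occurrence of each element, dropping later duplicates."""
    seen = set()
    result = []
    for e in elements:
        if e not in seen:
            seen.add(e)
            result.append(e)
    return result

print(remove_duplicates([3, 1, 3, 2, 1]))   # [3, 1, 2]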
4.5.1 State-Action Rules
An agent is an autonomous computational entity, that is, one which manipulates the world on its own, in response to feedback it receives from the world. Agents include autonomous robots, game agents, entities in simulations, etc. One common kind of program an agent might follow is a policy: a collection of simple rules to tell the agent what to do in each possible situation it may find itself in. These rules are often called state-action rules. Here are some state-action rules for an agent to get around in a city: are you downtown? Then get on the train. Are you on the train? Then take the train to the wharf. Did you miss your stop? Then get off the train and get on the return train. Etc.
State-action rules take on various guises, but a typical form is a ∧ b ∧ ... ∧ y → z, where the a, b, ..., y are state descriptions and z is an action or class. A state description is some feature about the current world that might or might not be true. An action is what we should do if that feature is true. For example, a robot might have rules like:
Left Sonar Value > 3.2 ∧ Forward Sonar Value ≤ 5.0 → Turn Left to 50°
We might test our ruleset by plopping a simulated robot down in an environment and using these rules to guide it. Each time the robot gets sensor information, it gathers the rules whose bodies are true given its current sensor values. The matching rules are collectively known as the match set. Then the robot decides what to do based on what the heads of these rules suggest (suggestions like turn left to 50°).
One way to think of the rule bodies is as describing regions in the state space of the robot, and the heads as what to do in those regions. In the case of the rule above, the rule body has roped off a region that's more than 3.2 in one dimension and no more than 5.0 in another dimension, and doesn't cut out any portions along any other dimensions.
There are two interesting issues involved here. First, what if no rules match the current condition? This is commonly known as under-specification of the state space: there are holes in the space which no rule covers. This is often handled by requiring a default rule which fires when no other rule fires. More interestingly, what if more than one rule matches the current condition, but those rules disagree in their heads in an incompatible way (one says Turn Left and one says Turn Right, say)? This is known as over-specification of the state space. We'll need to employ some kind of arbitration scheme to decide what to do. Most commonly, if we have lots of rules, we might have a vote. Another way is to pick a rule at random. And yes: a state space can be simultaneously under- and over-specified.
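Here's a tiny Python sketch of the match-and-arbitrate loop described above, with rule bodies as predicates over a dictionary of sensor readings and voting (with random tie-breaking) as the arbitration scheme; all the rule and sensor names are made up:

from collections import Counter
import random

# Each rule: (body predicate over the sensor readings, suggested action).
rules = [
    (lambda s: s['left_sonar'] > 3.2 and s['forward_sonar'] <= 5.0, 'turn_left'),
    (lambda s: s['forward_sonar'] > 5.0,                            'go_forward'),
    (lambda s: s['left_sonar'] <= 3.2,                              'turn_right'),
]

def act(sensors):
    """Gather the match set, then arbitrate among the suggested actions by voting."""
    match_set = [action for body, action in rules if body(sensors)]
    if not match_set:
        return 'stop'                   # default action for under-specified states
    votes = Counter(match_set)
    best = max(votes.values())
    return random.choice([a for a, n in votes.items() if n == best])  # break ties at random

print(act({'left_sonar': 4.0, 'forward_sonar': 2.0}))   # turn_left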
State-action rulesets often introduce a twist to the fitness assessment process. Specifically, as we move the agent around, we may not only assess the fitness of the individual itself but also assess the fitness of the individual rules inside the ruleset individual. At the very least this can be done by breaking the rules into those which fired during the course of running the individual and those which never fired (and thus aren't responsible for the wonderful/terrible outcome that resulted). We can then punish or reward only the rules which fired. Or if, after turning left, the robot received an electric shock, we might penalize the series of rules whose firings led up to that shock, but not penalize later rules. We might be more inclined to mutate or eliminate (by crossover) the more-penalized rules.
Metaheuristics designed for optimizing policies using state-action rules, Michigan-Approach Learning Classifier Systems and Pitt-Approach Rule Systems, are discussed in Section 10.
4.5.2 Production Rules
Production rules are similar to state-action rules except that the actions are used to trigger the states of other rules. Production rules are sort of backwards looking: they tend to look like this: a → b, c, ..., z. This is because production rules are fired (typically) by a single event (triggered by some other rule usually) and then this causes them to trigger multiple downstream rules in turn. A lot of the use for production rules is to enable modular indirect encodings which can describe large complex solutions with lots of repetitions, but do so in a small, compact rule space which is more easily searched. This of course assumes that good solutions will have lots of repetitions; this in turn depends largely on the kind of problem you're trying to solve.
Typically the symbols which appear in the heads (the right side) of production rules are of two forms: nonterminal symbols, which may also appear in the bodies of rules, and terminal symbols, which often may not. Terminal symbols basically don't expand any further. Note that for most production systems, there's a fixed number of rules, one per nonterminal.
An early example of applying evolutionary computation to production rules was developed by Hiroaki Kitano to find certain optimal graph structures for recurrent neural networks and the like.74 Imagine that you're trying to create an 8-node directed, unlabeled graph structure. Our ruleset might look like this (numbers are terminal symbols):
a → [b c; c d]    b → [1 0; d c]    c → [1 1; 1 0]
d → [0 1; 0 0]    0 → [0 0; 0 0]    1 → [1 1; 1 1]
This is an indirect encoding of the graph structure, believe it or not. We start with the 1 × 1 matrix [a]. Applying the a rule expands it into the 2 × 2 matrix [b c; c d]. From there we apply rules to each of the elements in that matrix, expanding them into their 2 × 2 elements, resulting in the 4 × 4 matrix

1 0 1 1
d c 1 0
1 1 0 1
1 0 0 0

and, after one more expansion, the 8 × 8 matrix

1 1 0 0 1 1 1 1
1 1 0 0 1 1 1 1
0 1 1 1 1 1 0 0
0 0 1 0 1 1 0 0
1 1 1 1 0 0 1 1
1 1 1 1 0 0 1 1
1 1 0 0 0 0 0 0
1 1 0 0 0 0 0 0

At this point we're out of nonterminal symbols. (Since we made up expansion rules like 1 → [1 1; 1 1] for our terminal symbols, we could have either expanded until we ran out of nonterminals, or expanded some number of predefined times.) This is our adjacency matrix for the graph, where a 1 at position ⟨i, j⟩ means there's an edge from i to j and a 0 means no edge. I won't bother drawing this sucker for you!
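Here's a small Python sketch of that expansion process: each symbol maps to a 2 × 2 block, and we repeatedly replace every cell by its block until only 0s and 1s remain (the dictionary below just encodes the ruleset given above):

# Each symbol expands into a 2x2 block; '0' and '1' are (self-expanding) terminals.
RULES = {
    'a': [['b', 'c'], ['c', 'd']],
    'b': [['1', '0'], ['d', 'c']],
    'c': [['1', '1'], ['1', '0']],
    'd': [['0', '1'], ['0', '0']],
    '0': [['0', '0'], ['0', '0']],
    '1': [['1', '1'], ['1', '1']],
}

def expand_once(matrix):
    """Replace every cell by its 2x2 expansion, doubling both dimensions."""
    out = []
    for row in matrix:
        top, bottom = [], []
        for cell in row:
            block = RULES[cell]
            top.extend(block[0])
            bottom.extend(block[1])
        out.append(top)
        out.append(bottom)
    return out

matrix = [['a']]
for _ in range(3):            # 1x1 -> 2x2 -> 4x4 -> 8x8 adjacency matrix
    matrix = expand_once(matrix)
for row in matrix:
    print(' '.join(row))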
74
This paper was one of the seminal papers in indirect encodings. Hiroaki Kitano, 1990, Designing neural networks using a genetic algorithm with a graph generation system, Complex Systems, 4, 461–476.
Figure 33 Plant patterns created by a Lindenmayer System.
A more recent example of indirect encoding with production rules is in finding optimal Lindenmayer Systems (or L-Systems). These are sets of production rules which produce a string of symbols. That string is then interpreted as a small computer program of sorts to produce some final object such as a plant or tree, fractal or pattern, or machine of some sort. L-Systems were made popular by Aristid Lindenmayer, a biologist who developed them to describe plant growth patterns.75
A simple example of an L-System is one which creates the Koch Curve, a fractal pattern. The rule system consists of the single rule F → F+F-F-F+F. It works like this: we start with a single F. Applying this rule, this expands to F+F-F-F+F. Expanding each of these Fs using the rule, we get:

F+F-F-F+F+F+F-F-F+F-F+F-F-F+F-F+F-F-F+F+F+F-F-F+F

Expanding yet again, we get:

F+F-F-F+F+F+F-F-F+F-F+F-F-F+F-F+F-F-F+F+F+F-F-F+F+
F+F-F-F+F+F+F-F-F+F-F+F-F-F+F-F+F-F-F+F+F+F-F-F+F-
F+F-F-F+F+F+F-F-F+F-F+F-F-F+F-F+F-F-F+F+F+F-F-F+F-
F+F-F-F+F+F+F-F-F+F-F+F-F-F+F-F+F-F-F+F+F+F-F-F+F+
F+F-F-F+F+F+F-F-F+F-F+F-F-F+F-F+F-F-F+F+F+F-F-F+F

The + and - are terminal symbols. What do you do with such a string? Well, if you interpreted the F as draw a line forward and + and - as turn left and turn right respectively, you would wind up with the Koch Curve shown in Figure 34. Further expansions create more complex patterns.
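String rewriting of this kind takes only a few lines of code; here's a Python sketch that applies the Koch Curve rule, with the rule table and the number of expansions as parameters (the function name is mine):

def expand(axiom, rules, depth):
    """Rewrite the string 'depth' times, replacing each symbol by its rule (if any)."""
    s = axiom
    for _ in range(depth):
        s = ''.join(rules.get(ch, ch) for ch in s)   # symbols without a rule are terminals
    return s

koch = {'F': 'F+F-F-F+F'}
print(expand('F', koch, 2))
# F+F-F-F+F+F+F-F-F+F-F+F-F-F+F-F+F-F-F+F+F+F-F-F+F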
Figure 34 A Quadratic Koch Curve.
These rules can get really complicated. Figure 35 shows an actual L-System used by biologists to describe the branching pattern of the red seaweed Bostrychia radicans.76
One interesting use of L-Systems with evolutionary computation, by Greg Hornby, was in discovering useful designs such as novel chairs or tables. Hornby also applied L-Systems together with Edge Encoding to discover animal body forms and finite-state automata-like graph structures.77 The L-System ruleset expanded into a string, which was then interpreted as a series of Edge Encoding instructions (double, split, etc.) to produce the final graph.
75
Przemyslaw Prusinkiewicz and Aristid Lindenmayer produced a beautiful book on L-Systems: Przemyslaw Prusinkiewicz and Aristid Lindenmayer, 1990, The Algorithmic Beauty of Plants, Springer. It's out of print but available online now, at http://algorithmicbotany.org/papers/#abop
76
From Ligia Collado-Vides, Guillermo Gómez-Alcaraz, Gerardo Rivas-Lechuga, and Vinicio Gómez-Gutierrez, 1997, Simulation of the clonal growth of Bostrychia radicans (Ceramiales-Rhodophyta) using Lindenmayer systems, Biosystems, 42(1), 19–27.
77
Greg gave what I consider the best paper presentation ever at GECCO. He did a regular presentation on using L-systems to evolve walking creatures. But at the end of the presentation he dumped out of a canvas sack a mass of tinkertoys and servos. He pressed a button, and it came to life and began to walk across the table. It was a tinkertoy version of his best-fitness-of-run individual. For more information on Greg's work, his thesis is a good pick: Gregory Hornby, 2003, Generative Representations for Evolutionary Design Automation, Ph.D. thesis, Brandeis University.
4.5.3 Initialization
O → FGD
D → G[+++FGFGRG][-GF]GFGA
A → FGFGFGFG[+++FGR][-GF]GFGB
B → FGFGFGFG[+++FGR][-GF]GFGC
C → FGFGFGFG[+++FGR][-GF]GFGK
R → FG[+FGFGU]GFGFGE
E → [-FGFGX]GFGFGH
H → [+FGFGW]GFGFGZFG
K → FGFGFG[+++FGR][-FGA]GFGL
L → FGFGFG[+++FGR][-GF]GFGP
P → FGFGFG[+++FGR][-GF]GFGQ
Q → FGFGFGT
T → FGFGFG[+++FGR][+FGA]GFGA
U → [+FGFGF]GFG
X → [-FGFGF]GFG
W → [+FGFGF]GFG
Z → [-FGFGF]GFG
Figure 35 Another L-System.
Like direct-encoded graph structures, building rulesets is mostly a matter of determining how many elements you want, and then creating them. We begin by picking a desired ruleset size n, using some distribution (the Geometric Distribution, Algorithm 46, is probably fine). We then create a ruleset out of n randomly-generated elements.
When doing production rules, there are some additional constraints. Specifically, the various symbols which appear in the heads of the rules need to match symbols in the bodies of the rules. Otherwise, how would you match up an event triggered by a rule with the follow-on rule which is fired as a result? Likewise, you probably won't want two rules that have the same body, that is, two production rules of the form a → b, c and a → d, e, f. Which one should fire? Arbitration doesn't make much sense in production rules, unlike state-action rules, unless perhaps your production rules are probabilistic.
In some production rule systems, the number of rules is fixed to the size of the nonterminal set. In other rule systems you might have a variable number of symbols. In the second case you will need to make sure that all the symbols in rule heads have a corresponding rule with that symbol in the body. And rules with symbols in their bodies but appearing nowhere in any other rules' heads are essentially orphans (this can happen in the fixed case as well). Additionally, you may or may not allow recursion among your rules: can rule A trigger rule B, which then triggers rule A again? For example, imagine if letters are our expansion variable symbols and numbers are our terminals. Here's a ruleset with some potential problems:
a → b, c      There's no c rule! What gets triggered from the c event?
b → d, 0
a → d, b      Um, do we want duplicate rule bodies?
d → b         Is recursion allowed in this ruleset?
e → 1         There's nothing that will ever trigger this rule! It's just junk!
During initialization you'll need to handle some of these situations. You could generate rules at random and then try to fix things. Or you could create some n nonterminal symbols and then construct rules for each of them. Here's an algorithm along those lines: it's not particularly uniform, but it does let you choose whether to allow recursive rules or not, and whether or not to permit disconnected rules (that is, ones never triggered). It should give you the general idea: but if you used this, you'd probably need to heavily modify it for your purposes.
Algorithm 62 Simple Production Ruleset Generation
1: ~t ← pre-defined set of terminal symbols (that don't expand)
2: p ← approximate probability of picking a terminal symbol
3: r ← flag: true if we want to allow recursion, else false
4: d ← flag: true if we want to allow disconnected rules, else false
5: n ← a random integer > 0 chosen from some distribution
6: ~v ← vector of unique symbols ⟨v_1, ..., v_n⟩    ▷ The symbol in v_1 will be our start symbol
Algorithm 88 An Abstract Sequential N-Population Cooperative Coevolutionary Algorithm
1: P^(1), ..., P^(n) ← Build n Initial Populations
2: Best ← ▢
3: repeat
4:   for i from 1 to n do
5:     AssessJointFitness(⟨i⟩, P^(1), ..., P^(n))    ▷ Computes fitness values for only population P^(i)
6:     for each vector ~s of individuals ⟨P^(1)_a, ..., P^(n)_z⟩: P^(1)_a ∈ P^(1), etc., assessed in Line 5 do
7:       if Best = ▢ or JointFitness(~s) > JointFitness(Best) then
8:         Best ← ~s
9:     P^(i) ← Join(P^(i), Breed(P^(i)))
10: until Best is the ideal solution or we have run out of time
11: return Best
Note that in the For-loop we assess some joint fitnesses but only apply them to the individuals in population P^(i). We could do that with a variant of Algorithm 83, which works like this. For each individual in P^(i) we perform some k tests by grouping that individual with randomly-chosen individuals from the other populations to form a complete solution:
Algorithm 89 K-fold Joint Fitness Assessment with N − 1 Collaborating Populations
1: P^(1), ..., P^(n) ← populations
2: i ← index number of the Population to be Tested
3: k ← desired minimum number of tests per individual
4: ~s ← ⟨s_1, ..., s_n⟩ an (empty for now) complete solution    ▷ We'll fill it up with individuals
5: for each individual P^(i)_j ∈ P^(i) do    ▷ For each individual to test...
6:   for w from 1 to k do    ▷ Do k tests...
7:     for l from 1 to n do    ▷ Build a complete solution including the individual to test
8:       if l = i then    ▷ It's the individual to test
9:         s_l ← P^(l)_j
10:      else    ▷ Pick a random collaborator
11:        s_l ← individual chosen at random from P^(l)
12:    Test(~s)    ▷ Test the complete solution
13:  AssessFitness(P^(i)_j)    ▷ Using the results of all Tests involving P^(i)_j
14: return P^(1), ..., P^(n)
We've abandoned here any attempt at using unique collaborators: but you can do that if you really want to try it. I don't think it's that valuable because the space is so much larger. The Sequential approach is the original method proposed by Potter and De Jong, and it still remains popular. But, in the formulation described above, it's wasteful because we do many tests but only use them to assess the fitness of a single individual; the collaborators are forgotten about. We could fix that by keeping around the previous tests and including them when we get around to testing the collaborating individuals for their fitness assessment. Or we could just do the Parallel approach. Specifically, we test everyone together, then breed everyone at once:
Algorithm 90 An Abstract Parallel N-Population Cooperative Coevolutionary Algorithm
1: P^(1), ..., P^(n) ← Build n Initial Populations
2: Best ← ▢
3: repeat
4:   AssessJointFitness(⟨1, ..., n⟩, P^(1), ..., P^(n))    ▷ Computes fitness values for all populations
5:   for each vector ~s of individuals ⟨P^(1)_a, ..., P^(n)_z⟩: P^(1)_a ∈ P^(1), etc., assessed in Line 4 do
6:     if Best = ▢ or JointFitness(~s) > JointFitness(Best) then
7:       Best ← ~s
8:   for i from 1 to n do
9:     P^(i) ← Join(P^(i), Breed(P^(i)))
10: until Best is the ideal solution or we have run out of time
11: return Best
This doesn't look like a big change, but it is. Because we can group all the joint fitnesses together at one time, we can save some testing time by not doing further tests on collaborators who've been involved in a sufficient number of tests already. We could do this with a variation of Algorithm 85, but with N > 2 it might suffice to just pick collaborators at random, even if some by chance get tested more than others, hence:
Algorithm 91 K-fold Joint Fitness Assessment of N Populations
1: P^(1), ..., P^(n) ← populations
2: k ← desired minimum number of tests per individual
3: ~s ← ⟨s_1, ..., s_n⟩ an (empty for now) complete solution    ▷ We'll fill it up with individuals
4: for i from 1 to n do    ▷ For each population...
5:   for each individual P^(i)_j ∈ P^(i) do    ▷ For each individual in that population...
6:     m ← number of tests individual P^(i)_j has been involved in so far
7:     for w from m + 1 to k do    ▷ Do at most k tests...
8:       for l from 1 to n do    ▷ Build a complete solution including the individual to test
9:         if l = i then    ▷ It's the individual to test
10:          s_l ← P^(l)_j
11:        else    ▷ Pick a random collaborator
12:          s_l ← individual chosen at random from P^(l)
13:      Test(~s)    ▷ Test the complete solution
14: for i from 1 to n do
15:   for each individual P^(i)_j ∈ P^(i) do
16:     AssessFitness(P^(i)_j)    ▷ Using the results of all Tests involving P^(i)_j
17: return P^(1), ..., P^(n)
Pathological Conditions in Testing So what could go wrong? For one, there's the theoretical possibility of laziness. If certain populations are doing impressively, other populations may just come along for the ride. For example, let's say you're trying to find an optimal team of basketball players. You've got a population of centers, of forwards, of guards, etc. Your guard population has converged largely to consist of copies of Michael Jordan. The Michael Jordans are so impressive that the population of (say) forwards doesn't need to do any work for the team to be near optimal. In essence, all the forwards' fitnesses look the same to the system: regardless of the forward selected, the team does really really well. So the system winds up selecting forwards at random and the forwards don't improve. This condition is the cooperative equivalent of the Loss of Gradient pathology discussed earlier. The basic solution to this is to change your fitness function to be more sensitive to how the forwards are doing. For example, you might apply some kind of credit assignment scheme to assign the fitness differently to different cooperating individuals. Be careful: the system is now likely no longer cooperative, that is, coordinating individuals no longer receive the same fitness, and this can result in unexpected dynamics.
Figure 45 Relative Overgeneralization. The axes are Population A and Population B in the joint space; the figure shows a broad suboptimum, a narrow optimum, and two individuals A1 and A2 from Population A.
Laziness is the tip of the iceberg though. How do you assess the fitness of a cooperative coevolutionary individual based on tests? Early on it was thought that you might base it on the average of the test results with various collaborators from the other population(s). Let's say that there is one optimal joint solution, but the hill leading to it is very small; whereas there's a large suboptimal peak elsewhere, as in Figure 45. If we tested individuals A1 and A2 with many individuals from Population B and took the average, A1 would appear fitter on average even though A2 was actually a collaborator in the optimum. A1 is a jack-of-all-trades-but-master-of-none individual which is never phenomenal anywhere, but most of the time it's involved in a joint solution that's better than average.
This situation leads to a pathological condition called relative overgeneralization, where the populations converge to joint solutions which are suboptimal, but involve lots of jacks-of-all-trades. Paul Wiegand discovered this unfortunate situation.104 The way to fix this is to assess fitness as the maximum of the tests rather than their average. However to get good results you may need to do a lot of tests, perhaps even against the entire other population. It turns out that usually there are just a few special collaborators in the other population(s) which, if you tested just with them, would compute fitness orderings for your entire population in exactly the same way as testing against everyone. Liviu Panait, a former student of mine, developed a 2-population cooperative algorithm, iCCEA, which computes this archive of special collaborators, resulting in far fewer tests.105
Finally, if your fitness function has multiple global optima, or near-optima, you could also wind up victim to miscoordination.106 Let's say you have two cooperating populations, A and B,
104
See Paul's thesis: R. Paul Wiegand, 2004, An Analysis of Cooperative Coevolutionary Algorithms, Ph.D. thesis, George Mason University, Fairfax, Virginia.
105
See his thesis: Liviu Panait, 2006, The Analysis and Design of Concurrent Learning Algorithms for Cooperative Multiagent Systems, Ph.D. thesis, George Mason University, Fairfax, Virginia.
106
Miscoordination isn't a disaster: an explorative enough system will find its way out. But it's worthwhile mentioning that it is a disaster in a sister technique in artificial intelligence, multiagent reinforcement learning.
Figure 46 Miscoordination. The axes are Population A and Population B in the joint space; the figure shows Global Optimum 1, Global Optimum 2, a suboptimal region, and the individuals A1, A2, B1, and B2.
and two global optima, 1 and 2. The two optima are offset from one another as shown in Figure 46. Population A has discovered an individual A1 who is part of global optimum 1 (yay!), and likewise Population B has discovered an individual B2 who is part of global optimum 2 (yay!). But neither of these individuals will survive, because Population A hasn't yet discovered individual A2 who, when collaborating with B2, would help B2 shine. Likewise Population B hasn't yet found individual B1 who would make A1 look great. In the worst case, these populations are trying out A1 and B2 in combination, which winds up in a quite suboptimal region of the joint space. Thus, though A1 and B2 are optimal for their respective populations, the populations can't tell: they look bad.
6.4 Niching: Diversity Maintenance Methods
To add exploration to your system, perhaps to prevent it from converging too rapidly to suboptimal solutions, there are many options available. So far we've considered:
Increasing your sample (population) size
Adding noise to your Tweak procedure
Being less selective among individuals (picking less fit ones more often)
Adding random restarts to your system
Adding explicit separation constraints in your population (as is done in various parallel stochastic optimization approaches like Island Models or Spatially-embedded Models)
Explicitly trying to add individuals different from the current ones in the population (as is done in Scatter Search with Path Relinking)
One approach we've not yet considered is to punish individuals in some way for being too similar to one another. For example, we might explicitly lower the fitness of individuals if they're too close to other individuals (fitness sharing). Or we could pick individuals to die based on how similar they are to new incoming children in a steady-state or generation-gap algorithm (crowding). These approaches all affect the survivability of individual A (versus individual B) based on whether or not there exists some individual C (which is similar to A) in the population already, or being introduced new to the population. Thus these methods are coevolutionary in nature.107
Before we examine techniques, we need to consider what similar means. Two individuals can
be similar in at least three ways:
107
One additional diversity maintenance approach we won't really discuss here (it's not coevolutionary in nature) is incest prevention. Here, individuals are not permitted to cross over with other individuals if they share a parent (or a grandparent, or however deep you'd like to go). There has also been a bit of work on what I call explicit speciation, where each individual has a small tag which indicates its species (the tag can be mutated), and selection or breeding is constrained in some way to be mostly within species. This usually is for other purposes than diversity maintenance.
Phenotypically: they behave similarly.
Genotypically: they have roughly the same makeup when it comes to breeding.
Individuals may have similar fitness.
Ideally we're looking for phenotypical similarity: but often it's not easy to determine what that is exactly, or perhaps your phenotypes and genotypes are basically identical. So often one settles on some notion of genotypical similarity. Fitness similarity makes no sense in this context: but when we get to multi-objective algorithms (which have more than one fitness measure), it will suddenly make lots of sense!
To determine how similar individuals are, we'll need some kind of distance measure which ideally defines a metric distance108 in the phenotypical (or genotypical) space. If your individuals already reside in a metric space, you're in luck. For example, if your individuals are vectors of real-valued numbers (individual i has the genotype ⟨i_1, ..., i_n⟩ and individual j has the genotype ⟨j_1, ..., j_n⟩), and you're making the assumption that genotype distance is the same as phenotype distance, then you might use the sum squared genotype distance, that is, d(i, j) = √(Σ_k (i_k − j_k)²). For boolean vectors, you could use the Hamming distance, which counts the number of times that two genes are different, that is, d(i, j) = Σ_k i_k ⊕ j_k, where ⊕ is the XOR (exclusive OR) operator. If your individuals are more complex (trees, say), have a lot of fun defining a distance measure among them!
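For concreteness, here are the two distance measures just mentioned, written as plain Python functions:

import math

def euclidean_distance(i, j):
    """Distance between two real-valued genotypes of equal length."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(i, j)))

def hamming_distance(i, j):
    """Number of positions at which two boolean genotypes differ."""
    return sum(a != b for a, b in zip(i, j))

print(euclidean_distance([0.0, 1.0], [3.0, 5.0]))                    # 5.0
print(hamming_distance([True, False, True], [True, True, False]))   # 2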
6.4.1 Fitness Sharing
The idea behind fitness sharing is to encourage diversity in individuals by reducing their fitness for being too similar to one another.109 The most common form of fitness sharing, proposed by David Goldberg and Jon Richardson, requires you to define a neighborhood radius σ. We punish a given individual's fitness if there are other individuals within that radius. The more individuals inside that radius, and the closer the individuals are to the given individual, the worse its fitness. Given our distance function d(i, j), we compute a sharing function s between two individuals i and j, which tells us how much punishment i will receive for j being near it:

s(i, j) = 1 − (d(i, j)/σ)^α    if d(i, j) < σ
s(i, j) = 0                    otherwise

α > 0 is a tuning parameter you can set to change the degree of punishment i receives for j being particularly close by. The size of σ is tricky: too small and the force for diversity is weak; but
108
A metric space is a space where we can construct a distance measure which obeys the triangle inequality. More specifically, the distance function d(i, j) must have the following properties. First, it should always be ≥ 0 (what's a negative distance?). Second, it should be 0 only if i = j. Third, the distance from i to j should be the same as the distance from j to i. And last, the triangle inequality: for any three points i, j, and k, it must always be true that d(i, k) ≤ d(i, j) + d(j, k). That is, going from point i to point k directly is always at least as short as taking a detour through j. Metric spaces include ordinary multi-dimensional real-valued Euclidian space and the space of boolean vectors (using Hamming distance). But what's the metric space of trees? Does one even exist?
109
The term fitness sharing is unfortunate: they're not sharing fitness with one another. They're all just having their fitnesses reduced because they're too close to one another. The technique was first discussed, I believe, in David Goldberg and Jon Richardson, 1987, Genetic algorithms with sharing for multimodal function optimization, in John J. Grefenstette, editor, Proceedings of the Second International Conference on Genetic Algorithms, pages 41–49, Lawrence Erlbaum Associates.
it shouldn't be so large that multiple optima fall in the same neighborhood (or even close to that). Now we adjust the fitness as follows:

f_i = (r_i)^β / Σ_j s(i, j)

r_i is the actual (raw) fitness of individual i and f_i is the adjusted fitness we will use for the individual instead. β > 1 is a scaling factor which you'll need to tune carefully. If it's too small, individuals won't move towards optima out of fear of crowding too near one another. If it's too large, crowding will have little effect. Of course you probably don't know much about the locations of your optima (which is why you're using an optimization algorithm!), hence the problem. So there you have it, three parameters to fiddle with: α, β, and σ.
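Putting the two equations together, a sketch of the fitness adjustment pass over a population might look like the following; σ, α, and β are the three parameters just discussed, and the packaging of the population as (genotype, raw fitness) pairs is my own illustrative choice:

def shared_fitnesses(population, distance, sigma, alpha, beta):
    """Return adjusted fitnesses: raw fitness**beta divided by the sharing sum."""
    adjusted = []
    for gi, ri in population:
        sharing_sum = 0.0
        for gj, _ in population:
            d = distance(gi, gj)
            if d < sigma:
                sharing_sum += 1.0 - (d / sigma) ** alpha   # includes i itself (d = 0)
        adjusted.append(ri ** beta / sharing_sum)
    return adjusted

pop = [([0.0, 0.0], 4.0), ([0.1, 0.0], 4.0), ([5.0, 5.0], 3.0)]
dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
print(shared_fitnesses(pop, dist, sigma=1.0, alpha=1.0, beta=2.0))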
If your fitness assessment is based on testing an individual against a bank of test problems (for example, seeing which of 300 test problems it's able to solve), you have another, simpler way to do all this. Robert Smith, Stephanie Forrest, and Alan Perelson have proposed an implicit fitness sharing:110 if an individual can perform well on a certain test case and few other individuals can do so, then the individual gets a big boost in fitness. The approach Smith, Forrest, and Perelson took was to repeatedly sample from the population over and over again, and base fitness on those samples. In Implicit Fitness Sharing, you must divide the spoils with everyone else who did as well as you did on a given test.
Algorithm 92 Implicit Fitness Sharing
1: P ← population
2: k ← number of times we should sample    ▷ Should be much bigger than ||P||
3: σ ← how many individuals per sample
4: T ← test problems used to assess fitness
5: C ← ||P|| by ||T|| matrix, initially all zeros    ▷ C_i,j is how often individual P_i was in a sample for T_j
6: R ← ||P|| by ||T|| matrix, initially all zeros    ▷ R_i,j is individual P_i's sum total reward for T_j
7: for each T_j ∈ T do
8:   for k times do
9:     Q ← σ unique individuals chosen at random from P
10:    for each individual Q_l ∈ Q do
11:      i ← index of Q_l in P
12:      C_i,j ← C_i,j + 1
13:    S ← individual(s) in Q which performed best on T_j    ▷ Everyone in S performed the same
14:    for each individual S_l ∈ S do
15:      i ← index of S_l in P
16:      R_i,j ← R_i,j + 1/||S||
17: for each individual P_i in P do
18:   Fitness(P_i) ← Σ_j R_i,j / C_i,j
19: return P
110
This was part of a larger effort to develop optimization algorithms fashioned as artificial immune systems. The authors first suggested it in Robert Smith, Stephanie Forrest, and Alan Perelson, 1992, Population diversity in an immune system model: Implications for genetic search, in L. Darrell Whitley, editor, Proceedings of the Second Workshop on Foundations of Genetic Algorithms, pages 153–165, Morgan Kaufmann.
Note that it's possible that an individual will never get tested with this algorithm, especially if k is too small: you will want to check for this and include the individual in a few tests.
Believe it or not, this is quite similar to fitness sharing: the neighborhood of an individual is phenotypical: those individuals who solved similar test problems. You'll again need a neighborhood radius σ. But this time, instead of defining an explicit radius in phenotype space, the radius σ is a sample size of individuals that compete for a given test problem t. You'll need to fiddle with the new σ as well, but it's likely not as sensitive. k is a parameter which should be as large as you can afford (time-wise) to get a good sample.
6.4.2 Crowding
Crowding doesn't reduce the fitness of individuals for being too similar; rather it makes them more likely to be picked for death in a steady-state system. Though steady-state evolution is usually exploitative, the diversity mechanism of crowding counters at least some of that. The original version of crowding, by Ken De Jong,111 was similar to a steady-state mechanism: each generation we breed some n new individuals. Then one by one we insert the individuals in the population, replacing some individual already there. The individual selected to die is chosen using Tournament Selection not based on fitness but on similarity with the individual to insert. Note that because of the one-by-one insertion, some of the individuals chosen to die might be some of those n children; so this isn't quite a steady-state algorithm. But it's fine to do crowding by using a plain-old steady-state algorithm with selection for death based on similarity to the inserted child.
As it turns out, crowding doesn't perform all that well. But we can augment it further by requiring that the child only replace the individual chosen to die if the child is fitter than that individual. This approach, by Georges Harik, is called Restricted Tournament Selection,112 and seems to work pretty well.
Samir Mahfoud proposed an entirely different mechanism, Deterministic Crowding,113 in which we randomly pair off parents in the population, then each pair produces two children. Each child is matched with the parent to which it is most similar. If the child is fitter than its matched parent, it replaces the parent in the population. The idea here is to push children to replace individuals (in this case, their own parents) which are similar to them and aren't as fit as they are. Mahfoud's formulation is an entire generational evolutionary algorithm instead of simply a fitness assessment mechanism:
111
From his thesis, Kenneth De Jong, 1975, An Analysis of the Behaviour of a Class of Genetic Adaptive Systems, Ph.D. thesis,
University of Michigan. The thesis is available online at http://cs.gmu.edu/eclab/kdj thesis.html
112
Georges Harik, 1995, Finding multimodal solutions using restricted tournament selection, in Larry J. Eshelman, editor, Proceedings of the 6th International Conference on Genetic Algorithms, pages 24–31, Morgan Kaufmann.
113
Mahfoud first mentioned this in Samir Mahfoud, 1992, Crowding and preselection revisited, in Reinhard Männer and Bernard Manderick, editors, Parallel Problem Solving From Nature II, pages 27–36, North-Holland. But it actually got fleshed out in his thesis, Samir Mahfoud, 1995, Niching Methods for Genetic Algorithms, Ph.D. thesis, University of Illinois at Urbana-Champaign.
This is somewhat related to an early notion of niching called preselection, where an individual would simply replace its direct parent if it was fitter than the parent. There's no need to compute a distance or similarity measure at all: we just run on the heuristic assumption that parents are usually very similar to their children. Preselection is an old concept, dating at least from Daniel Joseph Cavicchio Jr., 1970, Adaptive Search Using Simulated Evolution, Ph.D. thesis, Computer and Communication Sciences Department, University of Michigan.
Algorithm 93 Deterministic Crowding
1: popsize ← desired population size
2: P ← {}
3: for popsize times do
4:   P ← P ∪ {new random individual}
5: Best ← ▢
6: for each individual P_i ∈ P do
7:   AssessFitness(P_i)
8:   if Best = ▢ or Fitness(P_i) > Fitness(Best) then
9:     Best ← P_i
10: repeat
11:   Shuffle P randomly    ▷ To shuffle an array randomly, see Algorithm 26
12:   for i from 1 to ||P|| by 2 do
13:     Children C_a, C_b ← Crossover(Copy(P_i), Copy(P_{i+1}))
14:     C_a ← Mutate(C_a)
15:     C_b ← Mutate(C_b)
16:     AssessFitness(C_a)
17:     AssessFitness(C_b)
18:     if Fitness(C_a) > Fitness(Best) then
19:       Best ← C_a
20:     if Fitness(C_b) > Fitness(Best) then
21:       Best ← C_b
22:     if d(C_a, P_i) + d(C_b, P_{i+1}) > d(C_a, P_{i+1}) + d(C_b, P_i) then
23:       Swap C_a and C_b    ▷ Determine which child should compete with which parent
24:     if Fitness(C_a) > Fitness(P_i) then    ▷ Replace the parent if the child is better
25:       P_i ← C_a
26:     if Fitness(C_b) > Fitness(P_{i+1}) then    ▷ Replace the parent if the child is better
27:       P_{i+1} ← C_b
28: until Best is the ideal solution or we have run out of time
29: return Best
7 Multiobjective Optimization
It's often the case that we're not interested in optimizing a single fitness or quality function, but rather multiple functions. For example, imagine that a building engineer wants to come up with an optimal building. He wants to find buildings that are cheap, tall, resistant to earthquakes, and energy efficient. Wouldn't that be a great building? Unfortunately, it might not exist.
Figure 47 Region of solutions Pareto dominated by solution A, including the solutions on the border. The axes are the two objectives (Cheaper and More Energy Efficient). Keep in mind that this is not a depiction of the phenotype space, but rather results for the two objectives.
Each of these functions to optimize is known as an objective. Sometimes you can find solutions which are optimal for every objective. But more often than not, objectives are at odds with one another, and your solutions are thus often trade-offs of various objectives. The building engineer knows he can't find the perfect building: cheap, tall, strong, green. Rather, he might be interested in all the best options he has available. There are lots of ways of defining a set of best options, but there's one predominant way: the Pareto¹¹⁴ front of your space of candidate solutions.
Let's say you have two candidate buildings, M and N. M is said to Pareto dominate N if M is at least as good as N in all objectives, and superior to N in at least one objective. If this were the case, why would you ever pick N instead of M? M is at least as good everywhere and better in something. If we have just two objectives (Cheaper, More Energy Efficient), Figure 47 shows the region of space dominated by a given building solution A. The region is nearly closed: the border is also dominated by A, except the corner (individuals identical to A in all objectives).
Figure 48 The Pareto front of nondominated solutions (axes: Cheaper and More Energy Efficient).
Neither M nor N dominates the other if they're identical in all objectives, or if N is better in some things but M is better in other things. In those cases, both M and N are of interest to our building engineer. So another way of saying "the best options" is: the set of buildings which are dominated by no other building. We say that these buildings are nondominated. This set of buildings is the Pareto nondominated front (or just Pareto front) of the space of solutions. Figure 48 shows the Pareto front of the possible solutions in our two-objective space. Pareto fronts define outer borders. In a two-objective situation the Pareto front is often a curve demarcating that outer border. In a three-objective situation it's a skin of sorts. If you have one solution which is clearly superior to all the others (a superman, so to speak), the front collapses to that single individual.
As shown in Figure 49, Pareto fronts come in different flavors. Convex fronts are curved outwards, towards better solutions. Concave fronts are curved inwards, away from better solutions.
114 Vilfredo Pareto (1848–1923) was an Italian economist responsible for a lot of important mathematical concepts in economics, including Pareto's Law of income distribution, the 80–20 Rule (80% of events happen from only 20% of causes, so you can fix most of your problems by focusing on just a few issues), and Pareto Efficiency and Pareto Optimality, which is what we're discussing here.
Nonconvex fronts aren't entirely convex, and they include concave fronts as a subcategory. Fronts can also be discontinuous, meaning that there are regions along the front which are simply impossible for individuals to achieve: they'd be dominated by another solution elsewhere in the valid region of the front. There also exist locally Pareto-optimal fronts in the space, where a given point, not on the global Pareto front, happens to be Pareto-optimal relative to everyone near the point. This is the multiobjective optimization equivalent of local optima.
Figure 49 Four kinds of Pareto fronts: convex, concave, non-convex, and discontinuous (also non-convex). Axes: Cheaper and More Energy Efficient.
Spread It's not enough to offer our building engineer 100 points that lie on the Pareto front. What if they're all in one far corner of the front? That doesn't tell him much at all about the options he has available. More likely he wants samples that are spread evenly across the entire front. Thus many of the algorithms that optimize for Pareto fronts also try to enforce diversity measures. But interestingly, the distance measures used are rarely with regard to genotypical or phenotypical distance; rather they're distances in fitness: how far are the candidate solutions away from each other in the multiobjective space? This turns out to be much simpler to compute than genotypical or phenotypical distance.
The Problem of Too Many Objectives As the number of objectives grows, the population sizes needed to accurately sample the Pareto front grow exponentially. All the methods in this section face certain challenges when scaling to large numbers of objectives (and by "large" I mean perhaps more than 4). It's a difficulty stemming from the nature of the problem itself. To counter this, researchers have lately been turning to more exotic techniques, particularly ones centering around the hypervolume covered by the Pareto front; but these techniques are both complex and generally of high computational cost. We'll focus on the more basic methods here.
A Note on Defining Fitness It is traditional in the multiobjective optimization literature to define fitness in terms of error. That is, the lower the objective value, the better. Thus in most Pareto optimization diagrams you come across, the front will be those individuals closest to the origin. I try to be consistent throughout this text, and so in this section we'll continue to assume that larger objective values are superior. Hence the organization of figures and algorithms in this chapter.
7.1 Naive Methods
Before we get to the Pareto methods, let's start with the more naive (but sometimes pretty good) methods used to shoehorn multiobjective problems into a form usable by most traditional metaheuristic algorithms.
The simplest way to do this is to bundle all the objectives into a single fitness using some kind of linear function. For example, maybe you feel that one unit of Cheap is worth ten units of Tall, five units of Earthquake Resistant, and four units of Energy Efficient. Thus we might define the quality of a solution as a weighted sum of how well it met the various objectives:
Fitness(i) = Cheapness(i) + (1/10) Height(i) + (1/5) EarthquakeResistance(i) + (1/4) EnergyEfficiency(i)
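To make the weighted-sum idea concrete, here is a minimal Python sketch. The objective functions, the building representation, and the example numbers are invented for illustration; only the weights come from the formula above.

# A minimal sketch of weighted-sum ("linear scalarization") fitness.
# The objective functions and the example building below are invented for illustration.

def weighted_sum_fitness(solution, objectives, weights):
    """Collapse several objective values into one fitness via a weighted sum."""
    return sum(w * f(solution) for f, w in zip(objectives, weights))

# Hypothetical objective functions for a candidate building (a dict of attributes).
objectives = [
    lambda b: b["cheapness"],
    lambda b: b["height"],
    lambda b: b["earthquake_resistance"],
    lambda b: b["energy_efficiency"],
]
weights = [1.0, 1/10, 1/5, 1/4]   # one unit of Cheap = 10 Tall = 5 Quake-Resistant = 4 Efficient

building = {"cheapness": 3.0, "height": 20.0, "earthquake_resistance": 5.0, "energy_efficiency": 8.0}
print(weighted_sum_fitness(building, objectives, weights))   # 3 + 2 + 1 + 2 = 8.0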
Figure 50 A may be considered superior, but B has a higher total. (The curve shown is the true Pareto front, the theoretical optimum.)
We've seen this theme a number of times so far, for example in linear parsimony pressure and in the average of various test cases. There are three problems with it. First, you're required to come up with the degree to which one objective is worth another objective. This is likely hard to do, and may be close to impossible if your objectives are nonlinear (that is, if the difference between 9 and 10 height is much greater than the difference between 2 and 3 height, say). It's the same basic problem discussed regarding linear parsimony pressure in Section 4.6 (Bloat). Second, realize that if M Pareto dominates N, it's already the case that Fitness(M) ≥ Fitness(N), assuming your weights are positive. So a Pareto method in some sense gives you some of this stuff for free already. Third, a weighted sum may not match the goal of moving towards the Pareto front. Consider the simplest scenario, where we're just adding up the objectives (that is, all weights are 1). We have two objectives, and Figure 50 shows the true Pareto front. Individual A is very close to the front, and so is the more desirable individual. But Individual B sums to a higher value, and so would be selected over A using this fitness strategy.
To solve the first problem (having to come up with weights), we could instead abandon linear functions and simply treat the objectives as incomparable functions. For example, perhaps we simply invent preferences among the objectives in order to perform a lexicographic ordering: M is better than N if it is superior in Height. If they're the same Height, M is better if it's superior in Cheapness. Then Earthquake Resistance. Then Energy Efficiency. We can provide a selection procedure by extending Algorithm 63 (Lexicographic Tournament Selection) to the case of more than two objectives. Basically, when comparing two individuals, we run through the objectives (most important to least important) until we find one for which one individual is clearly superior to the other. Assuming we have an ObjectiveValue(objective, individual) function which tells us the quality of individual with regard to the given objective, we might perform a tournament selection like this:
Algorithm 94 Multiobjective Lexicographic Tournament Selection
1: Best ← individual picked at random from population with replacement
2: O ← {O_1, ..., O_n} objectives to assess with    ▷ In lexicographic order, most to least preferred
3: t ← tournament size, t ≥ 1
4: for i from 2 to t do
5:     Next ← individual picked at random from population with replacement
6:     for j from 1 to n do
7:         if ObjectiveValue(O_j, Next) > ObjectiveValue(O_j, Best) then    ▷ Clearly superior
8:             Best ← Next
9:             break from inner for
10:        else if ObjectiveValue(O_j, Next) < ObjectiveValue(O_j, Best) then    ▷ Clearly inferior
11:            break from inner for
12: return Best
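Below is a small Python sketch of the same idea, assuming each individual carries a tuple of objective scores ordered from most to least preferred (higher is better, as in this text). It is an illustration, not the book's reference code.

import random

def lexicographic_tournament(population, objective_values, t=2):
    """Tournament selection comparing objectives in order of preference."""
    best = random.choice(population)
    for _ in range(t - 1):
        nxt = random.choice(population)
        for o_next, o_best in zip(objective_values(nxt), objective_values(best)):
            if o_next > o_best:        # clearly superior on this objective
                best = nxt
                break
            if o_next < o_best:        # clearly inferior: stop comparing
                break
    return best

# Example: individuals scored on (Height, Cheapness), most important first.
pop = [("a", (3, 9)), ("b", (5, 1)), ("c", (5, 4))]
print(lexicographic_tournament(pop, objective_values=lambda ind: ind[1], t=3))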
We could also pick an objective at random each time, and use it as the fitness objective for this selection only:
Algorithm 95 Multiobjective Ratio Tournament Selection
1: Best ← individual picked at random from population with replacement
2: O ← {O_1, ..., O_n} objectives to assess with
3: t ← tournament size, t ≥ 1
4: j ← random number picked uniformly from 1 to n
5: for i from 2 to t do
6:     Next ← individual picked at random from population with replacement
7:     if ObjectiveValue(O_j, Next) > ObjectiveValue(O_j, Best) then
8:         Best ← Next
9: return Best
Or we could use voting: an individual is preferred if it is ahead in more objectives:
Algorithm 96 Multiobjective Majority Tournament Selection
1: Best ← individual picked at random from population with replacement
2: O ← {O_1, ..., O_n} objectives to assess with, more important objectives first
3: t ← tournament size, t ≥ 1
4: for i from 2 to t do
5:     Next ← individual picked at random from population with replacement
6:     c ← 0
7:     for each objective O_j ∈ O do
8:         if ObjectiveValue(O_j, Next) > ObjectiveValue(O_j, Best) then
9:             c ← c + 1
10:        else if ObjectiveValue(O_j, Next) < ObjectiveValue(O_j, Best) then
11:            c ← c − 1
12:    if c > 0 then
13:        Best ← Next
14: return Best
Finally, we could extend Algorithm 64 (Double Tournament Selection) to the case of more than two objectives. Here we perform a tournament based on one objective. The entrants to that tournament are selected using tournament selections based on a second objective. The entrants to those tournaments are selected using tournament selections based on a third objective, and so on. Thus the winner is more often than not a jack-of-all-trades which is pretty good in all objectives.
Algorithm 97 Multiple Tournament Selection
1: O ← {O_1, ..., O_n} objectives to assess with
2: T ← {T_1, ..., T_n} tournament sizes for the objectives in O, all ≥ 1    ▷ Allows different weights
3: return ObjectiveTournament(O, T)

4: procedure ObjectiveTournament(O, T)
5:     Best ← individual picked at random from population with replacement
6:     n ← ||O||    ▷ O and T change in size. The current last elements are O_n and T_n
7:     if O − {O_n} is empty then    ▷ O_n is the last remaining objective!
8:         Best ← individual picked at random from population with replacement
9:     else
10:        Best ← ObjectiveTournament(O − {O_n}, T − {T_n})    ▷ Delete the current objective
11:    for i from 2 to T_n do
12:        if O − {O_n} is empty then    ▷ This is the remaining objective!
13:            Next ← individual picked at random from population with replacement
14:        else
15:            Next ← ObjectiveTournament(O − {O_n}, T − {T_n})    ▷ Delete the current objective
16:        if ObjectiveValue(O_n, Next) > ObjectiveValue(O_n, Best) then
17:            Best ← Next
18:    return Best
7.2 Non-Dominated Sorting
The previous algorithms attempt to merge objectives into one single fitness value by trading off one objective for another in some way. But a lot of current algorithms instead use notions of Pareto domination to get a little closer to what "better" means in a multiobjective sense.
One simple way to do this is to construct a tournament selection operator based on Pareto domination. But first, let's review the definition. Individual A Pareto dominates individual B if A is at least as good as B in every objective and better than B in at least one objective. Here's an algorithm which computes that:
Algorithm 98 Pareto Domination
1: A ← individual A    ▷ We'll determine: does A dominate B?
2: B ← individual B
3: O ← {O_1, ..., O_n} objectives to assess with
4: a ← false
5: for each objective O_i ∈ O do
6:     if ObjectiveValue(O_i, A) > ObjectiveValue(O_i, B) then
7:         a ← true    ▷ A might dominate B
8:     else if ObjectiveValue(O_i, B) > ObjectiveValue(O_i, A) then
9:         return false    ▷ A definitely does not dominate B
10: return a
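In a conventional language this test is only a few lines. Here is a minimal Python sketch, assuming individuals are represented by tuples of objective values with higher being better:

def pareto_dominates(a, b):
    """True if objective vector a Pareto dominates b (higher is better):
    at least as good everywhere and strictly better somewhere."""
    at_least_one_better = False
    for ai, bi in zip(a, b):
        if ai < bi:
            return False              # a is worse somewhere: no domination
        if ai > bi:
            at_least_one_better = True
    return at_least_one_better

print(pareto_dominates((3, 5), (3, 4)))   # True
print(pareto_dominates((3, 5), (4, 4)))   # False (neither dominates the other)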
Now we can build a binary tournament selection procedure based on Pareto domination:
Algorithm 99 Pareto Domination Binary Tournament Selection
1: P ← population
2: P_a ← individual picked at random from P with replacement
3: P_b ← individual picked at random from P with replacement
4: if P_a Pareto Dominates P_b then
5:     return P_a
6: else if P_b Pareto Dominates P_a then
7:     return P_b
8: else
9:     return either P_a or P_b, chosen at random
Figure 51 Pareto ranks (Rank 1 through Rank 5). Axes: Cheaper and More Energy Efficient.
Unfortunately, even if two individuals don't Pareto-dominate one another, and thus are equally attractive to the experimenter, one individual might still be preferred for optimization purposes. Specifically, if A has many individuals in the population who Pareto-dominate it, and B has none, then we're interested in selecting B, because we'll probably select individuals better than A in the next generation anyway. Sure, B doesn't Pareto dominate A. But A is part of the rabble.
To get at this notion, we need a notion of how close an individual is to the Pareto front. There are various ways to do this, and we'll discuss an additional one (strength) in the next section. But we start here with a new concept called a Pareto Front Rank. Individuals in the Pareto front are in Rank 1. If we removed these individuals from the population, then computed a new front, the individuals in that front would be in Rank 2. If we removed those individuals too, then computed a new front, we'd get Rank 3, and so on. It's like peeling an onion. Figure 51 shows the notion of ranks.
Let's start by defining how to compute a Pareto front. The trick is to go through the population and add an individual to the front if it isn't dominated by anyone presently in the front, and to remove individuals from the front if they are dominated by this new individual. It's fairly straightforward:
Algorithm 100 Computing a Pareto Non-Dominated Front
1: G ← {G_1, ..., G_m} group of individuals to compute the front among    ▷ Often the population
2: O ← {O_1, ..., O_n} objectives to assess with
3: F ← {}    ▷ The front
4: for each individual G_i ∈ G do
5:     F ← F ∪ {G_i}    ▷ Assume G_i's gonna be in the front
6:     for each individual F_j ∈ F other than G_i do
7:         if F_j Pareto Dominates G_i given O then    ▷ Oh well, guess it's not gonna stay in the front
8:             F ← F − {G_i}
9:             break out of inner for-loop
10:        else if G_i Pareto Dominates F_j given O then    ▷ An existing front member knocked out!
11:            F ← F − {F_j}
12: return F
Computing the ranks is easy: figure out the first front, then remove those individuals, then figure out the front again, and so on. If we pre-process all the individuals with this procedure, we could then simply use the Pareto Front Rank of an individual as its fitness. Since lower ranks are better, we could convert a rank into a fitness like this:

Fitness(i) = 1 / (1 + ParetoFrontRank(i))

The algorithm to compute the ranks builds two results at once: first, it partitions the population P into ranks, with each rank (a group of individuals) stored in the vector R. Second, it assigns a rank number to each individual (perhaps the individual gets it written internally somewhere). That way, later on we can ask both: (1) which individuals are in rank i, and (2) what rank is individual j in? This procedure is called Non-Dominated Sorting, by N. Srinivas and Kalyanmoy Deb.¹¹⁵
Algorithm 101 Front Rank Assignment by Non-Dominated Sorting
1: P ← population
2: O ← {O_1, ..., O_n} objectives to assess with
3: P' ← P    ▷ We'll gradually remove individuals from P'
4: R ← ⟨⟩    ▷ Initially empty ordered vector of Pareto Front Ranks
5: i ← 1
6: repeat
7:     R_i ← Pareto Non-Dominated Front of P' using O
8:     for each individual A ∈ R_i do
9:         ParetoFrontRank(A) ← i
10:        P' ← P' − {A}    ▷ Remove the current front from P'
11:    i ← i + 1
12: until P' is empty
13: return R
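Here is a compact Python sketch of both steps: computing a non-dominated front, and then peeling the population into ranks, onion-style. Individuals are assumed to be tuples of objective values (higher is better); this is an illustration rather than a reference implementation.

def pareto_dominates(a, b):
    better = False
    for ai, bi in zip(a, b):
        if ai < bi:
            return False
        if ai > bi:
            better = True
    return better

def nondominated_front(group):
    """Individuals (objective-value tuples) not dominated by anyone in group."""
    front = []
    for g in group:
        if not any(pareto_dominates(f, g) for f in front):
            front = [f for f in front if not pareto_dominates(g, f)]  # evict newly dominated members
            front.append(g)
    return front

def front_ranks(population):
    """Peel the population into Pareto Front Ranks (rank 1 first)."""
    remaining = list(population)
    ranks = []
    while remaining:
        front = nondominated_front(remaining)
        ranks.append(front)
        remaining = [p for p in remaining if p not in front]
    return ranks

pop = [(1, 9), (5, 5), (9, 1), (4, 4), (2, 2)]
for i, rank in enumerate(front_ranks(pop), start=1):
    print(i, rank)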
Figure 52 The sparsity of individual B is higher than that of individual A because A_1 + A_2 < B_1 + B_2. Axes: Cheaper and More Energy Efficient.
Sparsity We'd also like to push the individuals in the population towards being spread more evenly across the front. To do this we could assign a distance measure of some sort among individuals in the same Pareto Front Rank. Let's define the sparsity of an individual: an individual is in a sparser region if the closest individuals on either side of it in its Pareto Front Rank aren't too close to it.
Figure 52 illustrates the notion we're more or less after. We'll define sparsity as the Manhattan distance,¹¹⁶ summed over every objective, between an individual's left and right neighbors along its Pareto Front Rank.
115 First published in N. Srinivas and Kalyanmoy Deb, 1994, Multiobjective optimization using nondominated sorting in genetic algorithms, Evolutionary Computation, 2, 221–248. This paper also introduced Algorithm 100.
116 Manhattan lies on a grid, so you can't go directly from point A to point B unless you're capable of leaping tall buildings in a single bound. Instead you must walk horizontally so many blocks, then vertically so many blocks. That's the Manhattan distance from A to B.
Individuals at the far ends of the Pareto Front Rank will be assigned an infinite sparsity. To compute sparsity, you'll likely need to know the range of possible values that any given objective can take on (from min to max). If you don't know this, you may be forced to assume that the range equals 1 for all objectives.
Algorithm 102 Multiobjective Sparsity Assignment
1: F ← ⟨F_1, ..., F_m⟩ a Pareto Front Rank of individuals
2: O ← {O_1, ..., O_n} objectives to assess with
3: Range(O_i) ← function providing the range (max − min) of possible values for a given objective O_i
4: for each individual F_j ∈ F do
5:     Sparsity(F_j) ← 0
6: for each objective O_i ∈ O do
7:     F' ← F sorted by ObjectiveValue given objective O_i
8:     Sparsity(F'_1) ← ∞
9:     Sparsity(F'_||F||) ← ∞    ▷ Each end is really really sparse!
10:    for j from 2 to ||F'|| − 1 do
11:        Sparsity(F'_j) ← Sparsity(F'_j) + (ObjectiveValue(O_i, F'_j+1) − ObjectiveValue(O_i, F'_j−1)) / Range(O_i)
12: return F with Sparsities assigned
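A Python sketch of the same computation follows. The rank is assumed to be a list of objective-value tuples, and the per-objective ranges default to 1 when unknown, as suggested above; this is an illustrative sketch, not the book's code.

import math

def assign_sparsity(rank, num_objectives, obj_range=None):
    """Sparsity of each individual in one Pareto Front Rank (higher = more isolated)."""
    if obj_range is None:
        obj_range = [1.0] * num_objectives       # assume range 1 when unknown
    sparsity = [0.0] * len(rank)
    for i in range(num_objectives):
        order = sorted(range(len(rank)), key=lambda idx: rank[idx][i])
        sparsity[order[0]] = math.inf            # boundary individuals are maximally sparse
        sparsity[order[-1]] = math.inf
        for pos in range(1, len(order) - 1):
            idx = order[pos]
            left, right = rank[order[pos - 1]][i], rank[order[pos + 1]][i]
            sparsity[idx] += (right - left) / obj_range[i]
    return sparsity

rank = [(1.0, 9.0), (4.0, 6.0), (5.0, 5.0), (9.0, 1.0)]
print(assign_sparsity(rank, num_objectives=2))   # ends get inf; interior points get finite sums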
To compute the sparsities of the whole population, use Algorithm 101 to break it into Pareto Front Ranks, then for each rank, call Algorithm 102 to assign sparsities to the individuals in that rank.
We can now use sparsity to do a kind of crowding, but one which operates in the multiobjective space rather than in a genotype or phenotype space. We define a tournament selection that selects first based on Pareto Front Rank, but breaks ties by using sparsity. The idea is to get individuals which are not only close to the true Pareto front, but also nicely spread out along it.
Algorithm 103 Non-Dominated Sorting Lexicographic Tournament Selection With Sparsity
1: P ← population with Pareto Front Ranks assigned
2: Best ← individual picked at random from P with replacement
3: t ← tournament size, t ≥ 1
4: for i from 2 to t do
5:     Next ← individual picked at random from P with replacement
6:     if ParetoFrontRank(Next) < ParetoFrontRank(Best) then    ▷ Lower ranks are better
7:         Best ← Next
8:     else if ParetoFrontRank(Next) = ParetoFrontRank(Best) then
9:         if Sparsity(Next) > Sparsity(Best) then
10:            Best ← Next    ▷ Higher sparsities are better
11: return Best
This alone does a good job. But the Non-Dominated Sorting Genetic Algorithm II (or NSGA-II, by Kalyanmoy Deb, Amrit Pratap, Sameer Agarwal, and T. Meyarivan)¹¹⁷ goes a bit further: it also keeps around all the best known individuals so far, in a sort of (μ + λ) or elitist fashion.
Algorithm 104 An Abstract Version of the Non-Dominated Sorting Genetic Algorithm II (NSGA-II)
1: m ← desired population size
2: a ← desired archive size    ▷ Typically a = m
3: P ← {P_1, ..., P_m} Build Initial Population
4: A ← {} archive
5: repeat
6:     AssessFitness(P)    ▷ Compute the objective values for the Pareto front ranks
7:     P ← P ∪ A    ▷ Obviously on the first iteration this has no effect
8:     BestFront ← Pareto Front of P
9:     R ← Compute Front Ranks of P
10:    A ← {}
11:    for each Front Rank R_i ∈ R do
12:        Compute Sparsities of Individuals in R_i    ▷ Just for R_i, no need for the others
13:        if ||A|| + ||R_i|| ≥ a then    ▷ This will be our last front rank to load into A
14:            A ← A ∪ the sparsest a − ||A|| individuals in R_i, breaking ties arbitrarily
15:            break from the for loop
16:        else
17:            A ← A ∪ R_i    ▷ Just dump it in
18:    P ← Breed(A), using Algorithm 103 for selection (typically with a tournament size of 2)
19: until BestFront is the ideal Pareto front or we have run out of time
20: return BestFront
The general idea is to hold in A an archive of the best individuals discovered so far (up to the archive size a). We then breed a new population P from A, and everybody in A and P competes for a place in the archive. Such algorithms are sometimes known as archive algorithms. Ordinarily an approach like this would be considered highly exploitative. But in multiobjective optimization things are a little different, because we're not looking for just a single point in space. Instead we're looking for an entire Pareto front which is spread throughout the space, and that front alone imposes a bit of exploration on the problem.
Note that we only compute sparsities for a select collection of Pareto Front Ranks. This is because they're the only ones that ever use them: the other ranks get thrown away. You can compute sparsities for the whole population if you want to; it's no big deal.
7.3 Pareto Strength
Pareto Front Ranks are not the only way we can use Pareto domination to compute fitness. We could also identify the strength of an individual, defined as the number of individuals in the population that the individual Pareto dominates.
117 Kalyanmoy Deb, Amrit Pratap, Sameer Agarwal, and T. Meyarivan, 2000, A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II, in Marc Schoenauer, et al., editors, Parallel Problem Solving from Nature (PPSN VI), pages 849–858, Springer. This paper also introduced Algorithm 102.
We could use an individual's strength as its fitness. There's a problem with this, however. Strength doesn't necessarily correspond with how close an individual is to the Pareto front. Indeed, individuals near the corners of the front are likely to not be very strong compared to individuals fairly distant from the front, as shown in Figure 53. Alternatively, we may define the weakness of an individual to be the number of individuals which dominate it. Obviously individuals on the Pareto front have a weakness of 0, and individuals far from the front are likely to have a high weakness. A slightly more refined version of weakness is the wimpiness¹¹⁸ of an individual: the sum total strength of everyone who dominates the individual. That is, for an individual i and a group G,

Wimpiness(i) = Σ_{g ∈ G that Pareto dominate i} Strength(g)

Ideally we'd like the wimpiness of an individual to be as low as possible. A non-dominated individual has a wimpiness of 0. We could use some kind of non-wimpiness as a fitness too. To do this, we could convert wimpiness such that wimpier individuals have lower values. Perhaps:

Fitness(i) = 1 / (1 + Wimpiness(i))
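As a quick illustration, here is a Python sketch that computes strength, wimpiness, and the resulting fitness for a small population of objective-value tuples (higher objective values are better); the helper function and the example data are mine, not the text's.

def pareto_dominates(a, b):
    better = False
    for ai, bi in zip(a, b):
        if ai < bi:
            return False
        if ai > bi:
            better = True
    return better

def strengths_and_wimpiness(population):
    """Strength = how many individuals each one dominates; wimpiness = total
    strength of everyone who dominates it; fitness = 1 / (1 + wimpiness)."""
    strength = [sum(pareto_dominates(p, q) for q in population) for p in population]
    wimpiness = [
        sum(strength[j] for j, q in enumerate(population) if pareto_dominates(q, p))
        for p in population
    ]
    fitness = [1.0 / (1.0 + w) for w in wimpiness]
    return strength, wimpiness, fitness

pop = [(1, 9), (5, 5), (4, 4), (2, 2)]
print(strengths_and_wimpiness(pop))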
Figure 53 Individual A is closer to the Pareto front, but individual B is stronger. Axes: Cheaper and More Energy Efficient.
Eckart Zitzler, Marco Laumanns, and Lothar Thiele built an archive-based algorithm around the notion of strength (or more correctly, wimpiness), called the Strength Pareto Evolutionary Algorithm (or SPEA). The current version, SPEA2, competes directly with NSGA-II and various other multiobjective stochastic optimization algorithms.¹¹⁹ Like NSGA-II, SPEA2 maintains an archive of the best known Pareto front members plus some others. SPEA2 also similarly employs both a Pareto measure and a crowding measure in its fitness procedure. However, SPEA2's Pareto measure is wimpiness, and its crowding measure is based on distance to other individuals in the multiobjective space, rather than distance along ranks.
SPEA2's similarity measure computes a distance to other individuals in the population, and specifically, to the kth closest individual in the population. There are many fancy ways of computing this in a reasonably efficient manner. Here I'm just going to suggest a grotesquely inefficient, but simple, approach.¹²⁰ Basically we compute the distance from everyone to everyone. Then, for each individual in the population, we sort the population by distance to that individual, and take the kth closest individual. This is O(n² lg n), where n is the population size. That's not great.
118 Of course I made up these names (except for strength).
119 They're sort of intertwined. SPEA was introduced in Eckart Zitzler and Lothar Thiele, 1999, Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach, IEEE Transactions on Evolutionary Computation, 3(4), 257–271. NSGA-II then came out in 2000, and SPEA2 then came out as Eckart Zitzler, Marco Laumanns, and Lothar Thiele, 2002, SPEA2: Improving the strength Pareto evolutionary algorithm for multiobjective optimization, in K. Giannakoglou, et al., editors, Evolutionary Methods for Design, Optimization, and Control, pages 19–26.
120 Hey, fitness assessment time is the dominant factor timewise nowadays anyway!
Algorithm 105 Compute the Distance of the Kth Closest Individual
1: P ← {P_1, ..., P_m} population
2: O ← {O_1, ..., O_n} objectives to assess with
3: P_l ← individual to compute the kth closest individual to
4: k ← desired individual index (the kth individual from l)
5: global D ← m vectors, each of size m    ▷ D_i holds a vector of distances of various individuals to i
6: global S ← {S_1, ..., S_m}    ▷ S_i will be true if D_i has already been sorted
7: perform once only
8:     for each individual P_i ∈ P do
9:         V ← {}    ▷ Our distances
10:        for each individual P_j ∈ P do
11:            V ← V ∪ { √( Σ_{m=1}^{n} (ObjectiveValue(O_m, P_i) − ObjectiveValue(O_m, P_j))² ) }
12:        D_i ← V
13:        S_i ← false
14: perform each time
15:     if S_l is false then    ▷ Need to sort
16:         Sort D_l, smallest first
17:         S_l ← true
18:     W ← D_l
19:     return W_k+1    ▷ It's W_k+1 because W_1 is always 0: the distance to the same individual
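Here is the same brute-force idea as a short Python sketch, using Euclidean distance in objective space; the function and the example are illustrative only.

import math

def kth_closest_distances(population, k):
    """For each individual, the distance to its kth closest neighbor in
    objective space.  population is a list of objective-value tuples.
    The same O(n^2 log n) brute-force approach as above, written plainly."""
    dists = []
    for p in population:
        row = sorted(math.dist(p, q) for q in population)
        dists.append(row[k])          # row[0] is 0: the distance to itself
    return dists

pop = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 3.0)]
print(kth_closest_distances(pop, k=2))   # k = ceil(sqrt(len(pop))) = 2 is the usual choice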
Given the wimpiness of an individual and the kth closest individual to it, we can finally define a fitness. Define a pre-fitness value G_i as follows:

G_i ← Wimpiness(i) + 1 / (2 + d_i)

d_i is the distance to the kth closest individual to i, where typically k = ⌈√||P||⌉.¹²¹ The smaller the value of G_i, the better. The idea is that a big distance d_i makes G_i smaller (because i is far away from other individuals, and we want diversity!), and likewise a small wimpiness makes G_i smaller. SPEA2 in reality uses G_i as the fitness of individual i; but in keeping with our tradition (higher fitness is better), let's convert it into a final fitness like we've done before:

Fitness(i) = 1 / (1 + G_i)
Each iteration, SPEA2 builds an archive consisting of the current Pareto front of the population. The archive is supposed to be of size a. If there aren't enough individuals in the front to fill all of those a slots, SPEA2 fills the rest with other fit individuals selected from the population. If instead there are too many individuals in the Pareto front to fit into a, SPEA2 needs to trim some individuals. It does this by iteratively deleting individuals who have the smallest kth closest distance (starting with k = 1, breaking ties with k = 2, and so on). The goal is to get into the archive those individuals in the Pareto front which are furthest away from one another and from other individuals in the population. The algorithm for constructing the archive looks like this:
121 Actually, Zitzler and Thiele don't say how you should round it: you could just as well do k = ⌊√||P||⌋, I suppose.
Algorithm 106 SPEA2 Archive Construction
1: P ← {P_1, ..., P_m} population
2: O ← {O_1, ..., O_n} objectives to assess with
3: a ← desired archive size
4: A ← Pareto non-dominated front of P    ▷ The archive
5: Q ← P − A    ▷ All individuals not in the front
6: if ||A|| < a then    ▷ Too small! Pack with some more individuals
7:     Sort Q by fitness
8:     A ← A ∪ the a − ||A|| fittest individuals in Q, breaking ties arbitrarily
9: while ||A|| > a do    ▷ Too big! Remove some k-closest individuals
10:    Closest ← A_1
11:    c ← index of A_1 in P
12:    for each individual A_i ∈ A except A_1 do
13:        l ← index of A_i in P
14:        for k from 1 to m − 1 do    ▷ Start with k = 1, break ties with larger values of k
15:            if DistanceOfKthNearest(k, P_l) < DistanceOfKthNearest(k, P_c) then
16:                Closest ← A_i
17:                c ← l
18:                break from inner for
19:            else if DistanceOfKthNearest(k, P_l) > DistanceOfKthNearest(k, P_c) then
20:                break from inner for
21:    A ← A − {Closest}
22: return A
Now we're ready to describe the SPEA2 top-level algorithm. It's very similar to NSGA-II (Algorithm 104): the primary difference is that the archive construction mechanism, which is more complex in SPEA2, has been broken out into a separate algorithm, which simplifies the top level:
Algorithm 107 An Abstract Version of the Strength Pareto Evolutionary Algorithm 2 (SPEA2)
1: m ← desired population size
2: a ← desired archive size    ▷ Typically a = m
3: P ← {P_1, ..., P_m} Build Initial Population
4: A ← {} archive
5: repeat
6:     AssessFitness(P)
7:     P ← P ∪ A    ▷ Obviously on the first iteration this has no effect
8:     BestFront ← Pareto Front of P
9:     A ← Construct SPEA2 Archive of size a from P
10:    P ← Breed(A), using tournament selection of size 2    ▷ Fill up to the old size of P
11: until BestFront is the ideal Pareto front or we have run out of time
12: return BestFront
In short: given a population P and an (initially empty) archive A, we build a new archive consisting of the Pareto front of P ∪ A, trimmed if necessary of overly close individuals, plus some other fit individuals from P to fill in any gaps. Then we create a new population P by breeding from A (a process which eventually comes close to random selection as the Pareto front improves). Note that unlike in NSGA-II, in SPEA2 you can specify the archive size, though usually it's set to the same value as in NSGA-II anyway (a = m).
SPEA2 and NSGA-II are both basically versions of (μ + λ) in multiobjective space, coupled with a diversity mechanism and a procedure for selecting individuals that are closer to the Pareto front. Both SPEA2 and NSGA-II are fairly impressive algorithms,¹²² though NSGA-II is a bit simpler and has lower computational complexity in unsophisticated versions.
122 Believe me, I know. Zbigniew Skolicki and I once constructed a massively parallel island model for doing multiobjective optimization. If there were n objectives, the islands were organized in a grid with n corners, one per objective. For example, with 2 objectives the grid was a line. If there were 3 objectives, the grid was a triangle mesh. If there were 4 objectives, the grid was a mesh filling the volume of a tetrahedron (a three-sided pyramid). Each island assessed fitness as a weighted sum of the objectives. The closer an island was to a corner, the more it weighted that corner's objective. Thus islands in the corners or ends were 100% a certain objective, while (for example) islands near the center weighted each objective evenly. Basically each island was searching for its own part of the Pareto front, resulting in (hopefully) a nicely distributed set of points along the front. We got okay results. But SPEA2, on a single machine, beat our pants off.
8 Combinatorial Optimization

Figure 54 A knapsack problem: fill the knapsack (of height 157/60") with as much value ($$$) as possible without exceeding the knapsack's height. Available blocks, shown with heights and values: 1", $3; 1/2", $13; 2/3", $1; 1/3", $7; 4/5", $9; 2", $11; 1/5", $5; 5/4", $4.
So far the kinds of problems we've tackled have been very general: any arbitrary search space. We've seen spaces in the form of permutations of variables (fixed-length vectors); spaces that have reasonable distance metrics defined for them; and even spaces of trees or sets of rules.
One particular kind of space deserves special consideration. A combinatorial optimization problem¹²³ is one in which the solution consists of a combination of unique components selected from a typically finite, and often small, set. The objective is to find the optimal combination of components.
A classic combinatorial optimization problem is a simple form of the knapsack problem: we're given n blocks of different heights and worth different amounts of money (unrelated to the heights) and a knapsack¹²⁴ of a certain larger height, as shown in Figure 54. The objective is to fill the knapsack with blocks worth the most $$$ (or €€€) without overfilling the knapsack.¹²⁵ Blocks are the components. Figure 55 shows various combinations of blocks in the knapsack. As you can see, just because the knapsack is maximally filled doesn't mean it's optimal: what counts is how much value can be packed into the knapsack without going over. Overfull solutions are infeasible (or illegal or invalid).
Figure 55 Filling the knapsack: an overfull packing (not legal), a 100% filled packing worth only $23, and an underfull packing (not 100% filled) that is nonetheless optimal, at $35.
This isn't a trivial or obscure problem. It's got a lot of literature behind it. And lots of real-world problems can be cast into this framework: knapsack problems show up in the processor queues of operating systems; in allocations of delivery trucks along routes; and in determining how to get exactly $15.05 worth of appetizers in a restaurant.¹²⁶
Another example is the classic traveling salesman problem (or TSP), which has a set of cities with some number of routes (plane flights, say) between various pairs of cities. Each route has a cost. The salesman must construct a tour starting at city A, visiting all the cities at least once, and finally returning to A.
123 Not to be confused with combinatorics, an overall field of problems which could reasonably include, as a small subset, practically everything discussed so far.
124 Related are various bin packing problems, where the objective is to figure out how to arrange blocks so that they will fit correctly in a multi-dimensional bin.
125 There are various knapsack problems. For example, another version allows you to have as many copies of a given block size as you need.
126 http://xkcd.com/287/
Crucially, this tour must have the lowest cost possible. Put another way, the cities are nodes and the routes are edges in a graph, labelled by cost, and the objective is to find a minimum-cost cycle which visits every node at least once. Here the components aren't blocks but are rather the edges in the graph. And the arrangement of these edges matters: there are lots of sets of edges which are nonsense because they don't form a cycle.
Costs and Values While the TSP has costs (the edge weights) which must be minimized, Knapsack instead has value ($$$) which must be maximized. These are really just the same thing: simply negate or invert the costs to create values. Most combinatorial optimization algorithms traditionally assume costs, but we'll include both cases. At any rate, one of many ways you might convert the cost of a component C_i into a value (or vice versa) would be something along the lines of:
Value(C_i) = 1 / Cost(C_i)
That's the relationship we'll assume in this section. This of course assumes that your costs (and values) are > 0, which is the usual case. If your costs or values are both positive and negative, some of the upcoming methods do a kind of value-proportional selection, so you'll need to add some amount to make them all positive. Finally, there exist problems in which the components all have exactly the same value or cost. Or perhaps you might be able to provide your algorithm with a heuristic¹²⁷ that you as a user have designed to favor certain components over others. In this case you could use Value(C_i) = Heuristic(C_i).
Knapsack does have one thing the TSP doesn't have: it has additional weights¹²⁸ (the block heights) and a maximum weight which must not be exceeded. The TSP has a different notion of infeasible solutions than simply ones which exceed a certain bound.
8.1 General-Purpose Optimization and Hard Constraints
Combinatorial optimization problems can be solved by most general-purpose metaheuristics such as those we've seen so far, and in fact certain techniques (Iterated Local Search, Tabu Search, etc.) are commonly promoted as combinatorial-problem methods. But some care must be taken, because most metaheuristics are really designed to search much more general, wide-open spaces than the constrained ones found in most combinatorial optimization problems. We can adapt them, but we need to take into consideration the restrictions special to these kinds of problems.¹²⁹
As an example, consider the use of a boolean vector in combination with a metaheuristic such as simulated annealing or the genetic algorithm. Each slot in the vector represents a component, and if the slot is true, then the component is used in the candidate solution. For example, in Figure 54 we have blocks of height 2, 1/3, 5/4, 1/5, 4/5, 1, 2/3, and 1/2. A candidate solution to the problem in this Figure would be a vector of eight slots. The optimal answer shown in Figure 55 would be ⟨false, true, false, true, true, false, true, true⟩, representing the blocks 1/3, 1/5, 4/5, 2/3, and 1/2.
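For concreteness, here is a Python sketch of this boolean-vector encoding for the Figure 54 instance, with one of several possible penalty schemes for overfull solutions; the penalty choice is mine, not the text's prescription.

# A sketch of the boolean-vector encoding for the knapsack instance in Figure 54.
from fractions import Fraction as F

HEIGHTS = [F(2), F(1, 3), F(5, 4), F(1, 5), F(4, 5), F(1), F(2, 3), F(1, 2)]
VALUES  = [11, 7, 4, 5, 9, 3, 1, 13]
KNAPSACK_HEIGHT = F(157, 60)

def quality(bits):
    """Total value of the chosen blocks; overfull solutions score negatively,
    in proportion to how overfull they are (an illustrative penalty choice)."""
    height = sum(h for h, b in zip(HEIGHTS, bits) if b)
    value = sum(v for v, b in zip(VALUES, bits) if b)
    if height > KNAPSACK_HEIGHT:
        return -float(height - KNAPSACK_HEIGHT)   # infeasible: punish by the overflow
    return value

optimal = [False, True, False, True, True, False, True, True]
print(quality(optimal))    # 35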
The problem with this approach is that it's easy to create solutions which are infeasible. In the knapsack problem we have declared that solutions which are larger than the knapsack are simply illegal.
127 A heuristic is a rule of thumb provided by you to the algorithm. It can often be wrong, but it is right often enough that it's useful as a guide.
128 Yeah, confusing. TSP edge weights vs. combinatorial component weights. That's just the terminology, sorry.
129 A good overview article on the topic, by two greats in the field, is Zbigniew Michalewicz and Marc Schoenauer, 1996, Evolutionary algorithms for constrained parameter optimization problems, Evolutionary Computation, 4(1), 1–32.
In Knapsack, it's not a disaster to have candidate solutions like that, as long as the final solution is feasible: we could just declare the quality of such infeasible solutions to be their distance from the optimum (in this case, perhaps how overfull the knapsack is). We might punish them further for being infeasible. But in a problem like the Traveling Salesman Problem, our boolean vector might consist of one slot per edge in the TSP graph. It's easy to create infeasible solutions for the TSP which are simply nonsense: how do we assess the quality of a candidate solution whose TSP solution isn't even a tour?
The issue here is that these kinds of problems, as configured, have hard constraints: there are large regions in the search space which are simply invalid. Ultimately we want a solution which is feasible; and during the search process it'd be nice to have feasible candidate solutions so we can actually think of a way to assign them quality assessments! There are two parts to this: initialization (construction) of a candidate solution from scratch, and Tweaking a candidate solution into a new one.
Construction Iterative construction of components within hard constraints is sometimes straightforward and sometimes not. Often it's done like this:
1. Choose a component. For example, in the TSP, pick an edge between two cities A and B. In Knapsack, it's an initial block. Let our current (partial) solution start with just that component.
2. Identify the subset of components that can be concatenated to the components in our partial solution. In the TSP, this might be the set of all edges going out of A or B. In Knapsack, this is all blocks that can still be added into the knapsack without going over.
3. Tend to discard the less desirable components. In the TSP, we might emphasize edges that go to cities we've not yet visited, if possible.
4. Add to the partial solution a component chosen from among those components not yet discarded.
5. Quit when there are no components left to add. Else go to step 2.
This is an intentionally vague description, because iterative construction is almost always highly problem-specific and often requires a lot of thought.
Tweaking The Tweak operator can be even harder to do right, because in the solution space feasible solutions may be surrounded on all sides by infeasible ones. Four common approaches:
Invent a closed Tweak operator which automatically creates feasible children. This can be a challenge, particularly if you're including crossover. And if you do create a closed operator, can it generate all possible feasible children? Is there a bias? Do you know what it is?
Repeatedly try various Tweaks until you create a child which is feasible. This is relatively easy to do, but it may be computationally expensive.
Allow infeasible solutions, but construct a quality assessment function for them based on their distance to the nearest feasible solution or to the optimum. This is easier to do for some problems than others. For example, in the Knapsack problem it's easy: the quality of an overfull solution could simply be based on how overfull it is (just like underfull solutions).
Assign infeasible solutions a poor quality. This essentially eliminates them from the population; but of course it makes your effective population size that much smaller. It has another problem too: moving just over the edge between the feasible and infeasible regions in the space results in a huge decrease in quality: it's a Hamming cliff (see Representation, Section 4). In Knapsack, for example, the best solutions are very close to infeasible ones because they're close to filled. So one little mutation near the best solutions and, whammo, you're infeasible and get a big quality punishment. This makes optimizing near the best solutions a bit like walking on a tightrope.
None of these is particularly inviting. While it's often easy to create a valid construction operator, making a good Tweak operator that's closed can be pretty hard. And the other methods are expensive or allow infeasible solutions into your population.
Component-Oriented Methods The rest of this section concerns itself with methods specially designed for certain kinds of spaces often found in combinatorial optimization, taking advantage of the fact that solutions in these spaces consist of combinations of components drawn from a typically fixed set. It's the presence of this fixed set that we can exploit in a greedy, local fashion, by maintaining "historical quality" values, so to speak, of individual components rather than (or in addition to) complete solutions. There are two reasons you might want to do this:
While constructing, to tend to select from components which have proven to be better choices.
While Tweaking, to modify those components which appear to be getting us stuck in a local optimum.
We'll begin with a straightforward metaheuristic called Greedy Randomized Adaptive Search Procedures (or GRASP), which embodies the basic notion of constructing combinatorial solutions out of components, then Tweaking them. From there we will move to a related technique, Ant Colony Optimization, which assigns historical quality values to these components in order to more aggressively construct solutions from the historically better components. Finally, we'll examine a variation of Tabu Search called Guided Local Search which focuses instead on the Tweak side of things: it's designed to temporarily punish those components which have gotten the algorithm into a rut.
Some of these methods take advantage of the historical quality values of individual components, but use them in quite different ways. Ant Colony Optimization tries to favor the best-performing components; whereas Guided Local Search gathers this information to determine which low-performing components appear to show up often in local optima.
The Meaning of Quality or Fitness Because combinatorial problems can be cast either as cost or as value, the meaning of the quality or fitness of a candidate solution is a bit shaky. If your problem is in terms of value (such as Knapsack), it's easy to define quality or fitness simply as the sum total value, that is, Σ_i Value(C_i), over all the components C_i which appear in the candidate solution. If your problem is in terms of cost (such as the TSP), it's not so easy: you want the presence of many low-cost components to collectively result in a high-quality solution. A common approach is to define quality or fitness as 1 / (Σ_i Cost(C_i)), over each component C_i that appears in the solution.
8.2 Greedy Randomized Adaptive Search Procedures
At any rate, let's start easy, with a single-state metaheuristic which is built on the notions of constructing and Tweaking feasible solutions, but which doesn't use any notion of component-level historical quality: Greedy Randomized Adaptive Search Procedures, or GRASP, by Thomas Feo and Mauricio Resende.¹³⁰ The overall algorithm is really simple: we create a feasible solution by constructing it from among the highest-value (lowest-cost) components (basically using the approach outlined earlier) and then do some hill-climbing on the solution.
Algorithm 108 Greedy Randomized Adaptive Search Procedures (GRASP)
1: C ← {C_1, ..., C_n} components
2: p ← percentage of components to include each iteration
3: m ← length of time to do hill-climbing
4: Best ← ☐
5: repeat
6:     S ← {}    ▷ Our candidate solution
7:     repeat
8:         C' ← components in C − S which could be added to S without being infeasible
9:         if C' is empty then
10:            S ← {}    ▷ Try again
11:        else
12:            C'' ← the p% highest-value (or lowest-cost) components in C'
13:            S ← S ∪ {component chosen uniformly at random from C''}
14:    until S is a complete solution
15:    for m times do
16:        R ← Tweak(Copy(S))    ▷ Tweak must be closed, that is, it must create feasible solutions
17:        if Quality(R) > Quality(S) then
18:            S ← R
19:    if Best = ☐ or Quality(S) > Quality(Best) then
20:        Best ← S
21: until Best is the ideal solution or we have run out of time
22: return Best
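Below is a rough Python sketch of GRASP specialized to the knapsack instance from Figure 54. The greedy criterion, the particular closed Tweak (drop a random block, then greedily refill), and the parameter values are illustrative choices, not required by the algorithm.

import random
from fractions import Fraction as F

HEIGHTS = [F(2), F(1, 3), F(5, 4), F(1, 5), F(4, 5), F(1), F(2, 3), F(1, 2)]
VALUES  = [11, 7, 4, 5, 9, 3, 1, 13]
CAP = F(157, 60)

def value(sol):                       # sol is a set of block indices
    return sum(VALUES[i] for i in sol)

def height(sol):
    return sum(HEIGHTS[i] for i in sol)

def construct(p=0.5):
    """Greedy randomized construction: repeatedly pick at random from the
    p-fraction of most valuable blocks that still fit."""
    sol = set()
    while True:
        feasible = [i for i in range(len(VALUES))
                    if i not in sol and height(sol) + HEIGHTS[i] <= CAP]
        if not feasible:
            return sol
        feasible.sort(key=lambda i: VALUES[i], reverse=True)
        best_slice = feasible[:max(1, int(p * len(feasible)))]
        sol.add(random.choice(best_slice))

def tweak(sol):
    """A closed Tweak: drop one block at random, then greedily refill."""
    sol = set(sol)
    if sol:
        sol.remove(random.choice(list(sol)))
    extras = [i for i in range(len(VALUES)) if i not in sol]
    random.shuffle(extras)
    for i in extras:
        if height(sol) + HEIGHTS[i] <= CAP:
            sol.add(i)
    return sol

def grasp(iterations=50, climbs=20):
    best = None
    for _ in range(iterations):
        s = construct()
        for _ in range(climbs):               # hill-climb the constructed solution
            r = tweak(s)
            if value(r) > value(s):
                s = r
        if best is None or value(s) > value(best):
            best = s
    return best

random.seed(0)
best = grasp()
print(sorted(best), value(best))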
Instead of picking the p% best available components, some versions of GRASP pick from among the components whose value is no less than (or cost is no higher than) some amount. GRASP is more or less using a truncation selection among components to do its initial construction of candidate solutions. You could do something else, like a tournament selection among the components, or a fitness-proportionate selection procedure (see Section 3 for these methods).
GRASP illustrates one way to construct candidate solutions by iteratively picking components. But it's still got the same conundrum that faces evolutionary computation when it comes to the Tweak step: you have to come up with some way of guaranteeing closure.
130 The first GRASP paper was Thomas A. Feo and Mauricio G. C. Resende, 1989, A probabilistic heuristic for a computationally difficult set covering problem, Operations Research Letters, 8, 67–71. Many of Resende's current publications on GRASP may be found at http://www.research.att.com/mgcr/doc/
8.3 Ant Colony Optimization
Marco Dorigo's Ant Colony Optimization (or ACO)¹³¹ is an approach to combinatorial optimization which sidesteps the Tweak issue by making Tweaking optional. Rather, it simply assembles candidate solutions by selecting components which compete with one another for attention.
ACO is population-oriented. But there are two different kinds of populations in ACO. First, there is the set of components that make up candidate solutions to the problem. In the Knapsack problem, this set would consist of all the blocks. In the TSP, it would consist of all the edges. The set of components never changes: but we will adjust the "fitness" (called the pheromone) of the various components in the population as time goes on.
Each generation we build one or more candidate solutions, called ant trails in ACO parlance, by selecting components one by one based, in part, on their pheromones. This constitutes the second population in ACO: the collection of trails. Then we assess the fitness of each trail. For each trail, each of the components in that trail is then updated based on that fitness: a bit of the trail's fitness is rolled into each component's pheromone. Does this sound like some kind of one-population cooperative coevolution?
The basic abstract ACO algorithm:
Algorithm 109 An Abstract Ant Colony Optimization Algorithm (ACO)
1: C ← {C_1, ..., C_n} components
2: popsize ← number of trails to build at once    ▷ "Ant trails" is ACO-speak for candidate solutions
3: ~p ← ⟨p_1, ..., p_n⟩ pheromones of the components, initially zero
4: Best ← ☐
5: repeat
6:     P ← popsize trails built by iteratively selecting components based on pheromones and costs or values
7:     for each P_i ∈ P do
8:         P_i ← Optionally Hill-Climb P_i
9:         if Best = ☐ or Fitness(P_i) > Fitness(Best) then
10:            Best ← P_i
11:    Update ~p for components based on the fitness results for each P_i ∈ P in which they participated
12: until Best is the ideal solution or we have run out of time
13: return Best
I set this up to highlight its similarities to GRASP: both algorithms iteratively build candidate solutions, then hill-climb them. There are obvious differences though. First, ACO builds some popsize candidate solutions all at once. Second, ACO's hill-climbing is optional, and indeed it's often not done at all. If you're finding it difficult to construct a closed Tweak operator for your particular representation, you can skip the hill-climbing step entirely if need be.
Third, and most importantly, components are selected not just based on component value or cost, but also on pheromones. A pheromone is essentially the historical quality of a component:
131 ACO has been around since about 1992, when Dorigo proposed it in his dissertation: Marco Dorigo, 1992, Optimization, Learning and Natural Algorithms, Ph.D. thesis, Politecnico di Milano, Milan, Italy. The algorithms here are loosely adapted from Dorigo and Thomas Stützle's excellent recent book: Marco Dorigo and Thomas Stützle, 2004, Ant Colony Optimization, MIT Press.
often approximately the sum total (or mean, etc.) fitness of all the trails that the component has been a part of. Pheromones tell us how good a component would be to select regardless of its (possibly low) value or (high) cost. After assessing the fitness of the trails, we update the pheromones in some way to reflect the new fitness values we've discovered, so that those components become more or less likely to be selected in the future.
So where are the ants? Well, here's the thing. ACO was inspired by earlier research work in pheromone-based ant foraging and trail formation algorithms: but the relationship between ACO and actual ants is... pretty thin. ACO practitioners like to weave the following tale: to solve the Traveling Salesman Problem, we place an Ant in Seattle and tell it to go wander about the graph, from city to city, eventually forming a cycle. The ant does so by picking edges (trips to other cities from the ant's current city) that presently have high pheromones and relatively good (low) edge costs. After the ant has finished, it lays a fixed amount of pheromone on the trail. If the trail is shorter (lower cost), then of course that pheromone will be distributed more densely among its edges, making them more desirable for future ants.
That's the story anyway. The truth is, there are no ants. There are just components with historical qualities (pheromones), and candidate solutions formed from those components (the "trails"), with fitness assessed to those candidate solutions and then divvied up among the components forming them.
8.3.1 The Ant System
The first version of ACO was the Ant System, or AS. It's not used as often nowadays, but it is a good starting point to illustrate these notions. In the Ant System, we select components based on a fitness-proportionate selection procedure of sorts, employing both costs or values and pheromones (we'll get to that). We then always add fitnesses into the component pheromones. Since this could cause the pheromones to go sky-high, we also always reduce (or evaporate) all the pheromones a bit each time.
The Ant System has five basic steps:
1. Construct some trails (candidate solutions) by selecting components.
2. (Optionally) Hill-climb the trails to improve them.
3. Assess the fitness of the final trails.
4. Evaporate all the pheromones a bit.
5. Update the pheromones involved in trails based on the fitness of those solutions.
In the original AS algorithm there's no hill-climbing: I've added it here. Later versions of ACO include it. Here's a version of the algorithm (note certain similarities with GRASP):
Algorithm 110 The Ant System (AS)
1: C ← {C_1, ..., C_n} components
2: e ← evaporation constant, 0 < e ≤ 1
3: popsize ← number of trails to construct at once
4: β ← initial value for pheromones
5: t ← iterations to Hill-Climb
6: ~p ← ⟨p_1, ..., p_n⟩ pheromones of the components, all set to β
7: Best ← ☐
8: repeat
9:     P ← {}    ▷ Our trails (candidate solutions)
10:    for popsize times do    ▷ Build some trails
11:        S ← {}
12:        repeat
13:            C' ← components in C − S which could be added to S without being infeasible
14:            if C' is empty then
15:                S ← {}    ▷ Try again
16:            else
17:                S ← S ∪ {component selected from C' based on pheromones and values or costs}
18:        until S is a complete trail
19:        S ← Hill-Climb(S) for t iterations    ▷ Optional. By default, not done.
20:        AssessFitness(S)
21:        if Best = ☐ or Fitness(S) > Fitness(Best) then
22:            Best ← S
23:        P ← P ∪ {S}
24:    for each p_i ∈ ~p do    ▷ Decrease all pheromones a bit (evaporation)
25:        p_i ← (1 − e) p_i
26:    for each P_j ∈ P do    ▷ Update pheromones in components used in trails
27:        for each component C_i do
28:            if C_i was used in P_j then
29:                p_i ← p_i + Fitness(P_j)
30: until Best is the ideal solution or we have run out of time
31: return Best
Component Values or Costs, and Selecting Components We construct trails by repeatedly selecting from those components which, if added to the trail, wouldn't make it infeasible. Knapsack is easy: keep selecting blocks until it's impossible to select one without going over. But the TSP is more complicated. For example, in the TSP we could just keep selecting edges until we have a complete tour. But we might wind up with edges we didn't need, or a bafflingly complex tour. Another approach might be to start with a city, then select from among those edges going out of the city to some city we've not seen yet (unless we have no choice), then select from among edges going out of that city, and so on. However, it may be the case that the optimal tour requires that we go through certain cities repeatedly. Or what if the only possible tours require that you go from Salt Lake City to Denver, yet that's got a high cost (low value), so we keep avoiding it and picking other cities, only to be forced to backtrack? We could have some pretty ugly tours. Anyway: the point is, trail construction can require some forethought.
AS selects using what I'll call a component's desirability, combining values and pheromones:

Desirability(C_i) = p_i^δ × (Value(C_i))^ε

...or, if your problem is using costs...

Desirability(C_i) = p_i^δ × (1 / Cost(C_i))^ε

δ and ε are tuning parameters.¹³² Note that as the pheromone goes up, the desirability goes up. Likewise, if a component has a higher value (or lower cost), the desirability goes up. AS then simply does a desirability-proportionate selection among the components we're considering, similar to Algorithm 30. If you like, you could perform some other selection procedure among your components, like tournament selection, or GRASP-style truncation to the top p% based on desirability.
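A minimal Python sketch of desirability-proportionate component selection follows; the pheromone and value arrays, and the default exponents, are invented for the example.

import random

def desirability(pheromone, value, delta=1.0, eps=1.0):
    """Desirability of a component from its pheromone and value; delta and eps
    play the role of the tuning exponents in the formula above."""
    return (pheromone ** delta) * (value ** eps)

def select_component(candidates, pheromones, values, delta=1.0, eps=1.0):
    """Desirability-proportionate selection among candidate component indices,
    in the spirit of the Ant System.  A sketch, not the reference code."""
    weights = [desirability(pheromones[i], values[i], delta, eps) for i in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

pheromones = [1.0, 1.0, 2.0, 0.5]
values     = [3.0, 7.0, 1.0, 5.0]
print(select_component([0, 1, 3], pheromones, values))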
Initializing the Pheromones You could just set them all to β = 1. For the TSP, the ACO folks often set them to β = popsize × (1/Cost(D)), where D is some costly, absurd tour like the Nearest Neighbor Tour (construct a TSP tour greedily by always picking the lowest-cost edge).
Evaporating Pheromones The Ant System evaporates pheromones because otherwise the pheromones would keep piling up. But there's perhaps a better way to do it: adjust the pheromones up or down based on how well they've performed on average. Instead of evaporating and updating as was shown in the Ant System, we could just take each pheromone p_i and adjust it as follows:
Algorithm 111 Pheromone Updating with a Learning Rate
1: C ← {C_1, ..., C_n} components
2: ~p ← ⟨p_1, ..., p_n⟩ pheromones of the components
3: P ← {P_1, ..., P_m} population of trails
4: α ← learning rate
5: ~r ← ⟨r_1, ..., r_n⟩ total desirability of each component, initially 0
6: ~c ← ⟨c_1, ..., c_n⟩ component usage counts, initially 0
7: for each P_j ∈ P do    ▷ Compute the average fitness of trails which employed each component
8:     for each component C_i do
9:         if C_i was used in P_j then
10:            r_i ← r_i + Desirability(P_j)
11:            c_i ← c_i + 1
12: for each p_i ∈ ~p do
13:    if c_i > 0 then
14:        p_i ← (1 - α) p_i + α (r_i / c_i)    ▷ r_i / c_i is the average fitness computed earlier
15: return ~p
132 This isn't set in stone. For example, we could do Desirability(C_i) = p_i + (Value(C_i))^ε. Or we could do Desirability(C_i) = λ p_i + (1 - λ) Value(C_i).
0 ≤ α ≤ 1 is the learning rate. For each component, we're computing the average fitness of every trail which used that component. Then we're throwing out a small amount of what we know so far ((1 - α)'s worth) and rolling in a little bit of what we've just learned this iteration about how good a component is (α's worth). If α is large, we quickly adopt new information at the expense of our historical knowledge. It's probably best if α is small.133
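Here is a minimal Python sketch of this learning-rate update, assuming each trail is a set of component indices and that we use each trail's fitness as its desirability; all names are illustrative.

    def update_pheromones(pheromones, trails, fitnesses, alpha=0.1):
        # pheromones: list of floats, one per component
        # trails:     list of sets of component indices (one set per trail)
        # fitnesses:  fitness of each trail, parallel to trails
        n = len(pheromones)
        totals = [0.0] * n   # summed fitness of the trails using each component
        counts = [0] * n     # how many trails used each component
        for trail, fit in zip(trails, fitnesses):
            for i in trail:
                totals[i] += fit
                counts[i] += 1
        for i in range(n):
            if counts[i] > 0:
                average = totals[i] / counts[i]
                # throw out (1 - alpha)'s worth of the old pheromone,
                # roll in alpha's worth of what we just learned
                pheromones[i] = (1 - alpha) * pheromones[i] + alpha * average
        return pheromones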
Optional Hill-Climbing: More Exploitation AS doesn't have hill-climbing by default. But we could hill-climb the ant trail S right after the AssessFitness step, just like we do in GRASP. And just like in GRASP we're going to have the same issue: guaranteeing that each time we Tweak an ant trail, the child is still a valid ant trail. For some problems this is easy, for others, not so easy. Anyway, hill-climbing adds more exploitation to the problem, directly moving towards the locally best solutions we can find. Often this is a good approach for problems like TSP, which tend to benefit from a high dose of exploitation.
8.3.2 The Ant Colony System: A More Exploitative Algorithm
There have been a number of improvements on AS since it was first proposed (some of which were mentioned earlier). Here I'll mention one particularly well-known one: the Ant Colony System (ACS).134 ACS works like the Ant System but with the following changes:
1. The use of an elitist approach to updating pheromones: only increase pheromones for components used in the best trail discovered so far. In a sense this starts to approach (1 + λ).
2. The use of a learning rate in pheromone updates.
3. A slightly different approach for evaporating pheromones.
4. A strong tendency to select components that were used in the best trail discovered so far.
Elitism ACS only improves the pheromones of components that were used in the best-so-far trail (the trail we store in Best), using the learning rate method stolen from Algorithm 111. That is, if a component is part of the best-so-far trail, we increase its pheromones as p_i ← (1 - α) p_i + α Fitness(Best).
This is very strongly exploitative, so all pheromones are also decreased whenever they're used in a solution, notionally to make them less desirable for making future solutions in order to push the system to explore a bit more in solution space. More specifically, whenever a component C_i is used in a solution, we adjust its pheromone as p_i ← (1 - γ) p_i + γ p_0, where γ is a sort of evaporation or unlearning rate, and p_0 is the value we initialized the pheromones to originally. Left alone, this would eventually reset the pheromones to all be p_0.
Elitist Component Selection Component selection is also pretty exploitative. We flip a coin of probability q. If it comes up heads, we select the component which has the highest Desirability. Otherwise we select in the same way as AS selected, though ACS simplifies the selection mechanism by getting rid of δ (setting it to 1).
133 We'll see the 1 - α vs. α learning rate metaphor again in discussion of Learning Classifier Systems. It's a common notion in reinforcement learning too.
134 Again by Marco Dorigo and Luca Gambardella. No, there are plenty of people doing ACO besides Marco Dorigo!
Now we're ready to do the Ant Colony System. It's not all that different from AS in structure:

Algorithm 112 The Ant Colony System (ACS)
1: C ← {C_1, ..., C_n} components
2: popsize ← number of trails to construct at once
3: α ← elitist learning rate
4: γ ← evaporation rate
5: p_0 ← initial value for pheromones
6: δ ← tuning parameter for heuristics in component selection    ▷ Usually δ = 1
7: ε ← tuning parameter for pheromones in component selection
8: t ← iterations to Hill-Climb
9: q ← probability of selecting components in an elitist way
10: ~p ← ⟨p_1, ..., p_n⟩ pheromones of the components, all set to p_0
11: Best ← ☐
12: repeat
13:    P ← {}    ▷ Our candidate solutions
14:    for popsize times do    ▷ Build some trails
15:        S ← {}
16:        repeat
17:            C′ ← components in C - S which could be added to S without being infeasible
18:            if C′ is empty then
19:                S ← {}    ▷ Try again
20:            else
21:                S ← S ∪ {component selected from C′ using Elitist Component Selection}
22:        until S is a complete trail
23:        S ← Hill-Climb(S) for t iterations    ▷ Optional. By default, not done.
24:        AssessFitness(S)
25:        if Best = ☐ or Fitness(S) > Fitness(Best) then
26:            Best ← S
27:    for each p_i ∈ ~p do    ▷ Decrease all pheromones a bit (evaporation)
28:        p_i ← (1 - γ) p_i + γ p_0
29:    for each component C_i do    ▷ Update pheromones only of components in Best
30:        if C_i was used in Best then
31:            p_i ← (1 - α) p_i + α Fitness(Best)
32: until Best is the ideal solution or we have run out of time
33: return Best
As before, we might be wise to do some hill-climbing right after the AssessFitness step.
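A rough Python sketch of the two elitist pieces of ACS follows, using the symbols from the reconstruction above (alpha for the elitist learning rate, gamma for the evaporation rate, p0 for the initial pheromone); these helpers are illustrative, not a standard API.

    import random

    def acs_select(allowed, pheromones, values, q=0.9, epsilon=1.0):
        # With probability q, greedily take the most desirable allowed component;
        # otherwise fall back on desirability-proportionate selection.
        desir = {i: pheromones[i] * (values[i] ** epsilon) for i in allowed}
        if random.random() < q:
            return max(allowed, key=lambda i: desir[i])
        total = sum(desir.values())
        r = random.uniform(0.0, total)
        for i in allowed:
            r -= desir[i]
            if r <= 0.0:
                return i
        return allowed[-1]

    def acs_evaporate(pheromones, gamma=0.1, p0=0.5):
        # Nudge every pheromone back toward its initial value p0 (the prose above
        # applies this only to components just used in a solution).
        for i in range(len(pheromones)):
            pheromones[i] = (1 - gamma) * pheromones[i] + gamma * p0

    def acs_reward_best(pheromones, best_trail, best_fitness, alpha=0.1):
        # Elitism: only components in the best-so-far trail get their pheromones increased.
        for i in best_trail:
            pheromones[i] = (1 - alpha) * pheromones[i] + alpha * best_fitness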
At this point you may have picked up on an odd feature about ACO. The selection of components in candidate solutions is greedily based on how well a component has appeared in high-quality solutions (or perhaps even the best solution so far). It doesn't consider the possibility that a component needs to always appear with some other component in order to be good, and without the second component it's terrible. That is, ACO completely disregards linkage among components.
That's a pretty bold assumption. This could, in theory, lead to the same problems that cooperative coevolution has: jacks-of-all-trades. ACS tries to get around this by pushing hard for the best-so-far result, just as cooperative coevolution's best-of-n approaches and archive methods try to view components in the light of their best situation. I think ACO has a lot in common with coevolution, although it's not been well studied. In some sense we may view ACO as a one-population pseudo-cooperative coevolution algorithm.
It's possible to surmount this by trying a population not of components but of (say) all possible pairs of components. We could select pairs that have been performing well. This would move up the chain a little bit as far as linkage is concerned, though it'd make a much bigger population. Pheromones for pairs or triples, etc., of components are known as higher-order pheromones.
ACO also has a lot in common with Univariate Estimation of Distribution Algorithms (discussed in Section 9.2).135 Here's how to look at it: the components' fitnesses may be viewed as probabilities, and the whole population is thus one probability distribution on a per-component basis. Contrast this to the evolutionary model, where the population may also be viewed as a sample distribution over the joint space of all possible candidate solutions, that is, all possible combinations of components. It should be obvious that ACO is searching a radically simpler (perhaps simplistic) space compared to the evolutionary model. For general problems that may be an issue. But for many combinatorial problems, it's proven to be a good tradeoff.
8.4 Guided Local Search
There's another way we can take advantage of the special component-based space found in combinatorial optimization problems: by marking certain components which tend to cause local optima and trying to avoid them.
Recall that Feature-based Tabu Search (Algorithm 15, in Section 2.5) operated by identifying features found in good solutions, and then making those features taboo, temporarily banned from being returned to by later Tweaks. The idea was to prevent the algorithm from revisiting, over and over again, those local optima in which those features tended to be commonplace.
If you can construct a good, closed Tweak operator, it turns out that Feature-based Tabu Search can be nicely adapted to the combinatorial optimization problem. Simply define features to be the components of the problem. For example, Feature-based Tabu Search might hill-climb through the space of Traveling Salesman Problem solutions, temporarily making certain high-performing edges taboo to force it out of local optima in the TSP.
A variant of Feature-based Tabu Search called Guided Local Search (GLS) seems to be particularly apropos for combinatorial optimization: it assigns historical quality measures to components, like Ant Colony Optimization does. But interestingly, it uses this quality information not to home in on the best components to use, but rather to make troublesome components taboo and force more exploration.
GLS is by Chris Voudouris and Edward Tsang.136 The algorithm is basically a variation of Hill-Climbing that tries to identify components which appear too often in local optima, and penalizes later solutions which use those components so as to force exploration elsewhere.
135 This has been noted before, and not just by me: see p. 57 of Marco Dorigo and Thomas Stützle, 2004, Ant Colony Optimization, MIT Press. So we've got similarities to coevolution and to EDAs... hmmm....
136 Among the earlier appearances of the algorithm is Chris Voudouris and Edward Tsang, 1995, Guided local search, Technical Report CSM-247, Department of Computer Science, University of Essex. This technical report was later updated as Chris Voudouris and Edward Tsang, 1999, Guided local search, European Journal of Operational Research, 113(2), 469-499.
To do this, Guided Local Search maintains a vector of pheromones,137 one per component, which reflect how often each component has appeared in high-quality solutions. Instead of hill-climbing by Quality, GLS hill-climbs by an AdjustedQuality function which takes both Quality and the presence of these pheromones into account.138 Given a candidate solution S, a set of components C for the problem, and a vector ~p of current pheromones, one per component, the adjusted quality of S is defined as:

    AdjustedQuality(S, C, ~p) = Quality(S) - β ∑_i { p_i   if component C_i is found in S
                                                   { 0     otherwise

Thus the hill-climber is looking for solutions both of high quality but also ones which are relatively novel: they use components which haven't been used much in high-quality solutions before. High pheromones are bad in this context. The parameter β determines the degree to which novelty figures in the final quality computation, and it will need to be tuned carefully.
After doing some hill-climbing in this adjusted quality space, the algorithm then takes its current candidate solution S, which is presumably at or near a local optimum, and increases the pheromones on certain components which can be found in this solution. To be likely to have its pheromones increased, a component must have three qualities. First, it must appear in the current solution; that is, it's partly responsible for the local optimum and should be avoided. Second, it will tend to have lower value or higher cost: we wish to move away from the least important components in the solution first. Third, it will tend to have lower pheromones. This is because GLS doesn't just want to penalize the same components forever: it'd like to turn its attention to other components for some exploration. Thus when a component's pheromone has increased sufficiently, it's not chosen for further increases. Spread the love!
To determine the components whose pheromones should be increased, GLS first computes the penalizability of each component C_i with current pheromone p_i as follows:139

    Penalizability(C_i, p_i) = 1 / ((1 + p_i) Value(C_i))

...or if your problem is using costs...

    Penalizability(C_i, p_i) = Cost(C_i) / (1 + p_i)

Guided Local Search then picks the most penalizable component presently found in the current solution S and increments its pheromone p_i by 1. If there's more than one such component (they're tied), their pheromones are all increased.
Compare the Penalizability function with the Desirability function in Section 8.3.1: note that components with high Desirability generally have low Penalizability and vice versa. While ACO seeks to build new candidate solutions from historically desirable components, GLS punishes components which have often appeared in local optima, though the ones it punishes the most are the least desirable such components.
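As a small illustration, here is one way the value-based versions of these two pieces might be sketched in Python; beta is the novelty-weighting parameter from the AdjustedQuality formula above, and all the names are mine.

    def adjusted_quality(quality, components_in_solution, pheromones, beta=0.3):
        # AdjustedQuality(S) = Quality(S) - beta * (sum of pheromones of components in S)
        return quality - beta * sum(pheromones[i] for i in components_in_solution)

    def penalize(components_in_solution, pheromones, values):
        # Penalizability(C_i, p_i) = 1 / ((1 + p_i) * Value(C_i));
        # bump the pheromone of every maximally penalizable component in S by 1.
        pen = {i: 1.0 / ((1.0 + pheromones[i]) * values[i])
               for i in components_in_solution}
        best = max(pen.values())
        for i, score in pen.items():
            if score == best:       # ties: all of them get penalized
                pheromones[i] += 1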
137 I'm borrowing ACO terminology here: GLS calls them penalties.
138 In the name of consistency I'm beginning to deviate from the standard GLS formulation: the algorithm traditionally is applied to minimization rather than maximization problems.
139 GLS traditionally uses the term utility rather than my made-up word penalizability. Utility is a highly loaded term that usually means something quite different (see Section 10 for example), so I'm avoiding it.

Now that we have a way to adjust the quality of solutions based on pheromones, and a way to increase pheromones for components commonly found in local optima, the full algorithm is quite straightforward: it's just hill-climbing with an additional, occasional adjustment of the current pheromones of the components. There's no evaporation (which is quite surprising!).
Guided Local Search doesn't specify how we determine that we're stuck in a local optimum and must adjust pheromones to get ourselves out. Usually there's no test for local optimality. Thus the approach I've taken below is borrowed from Algorithm 10 (Hill-Climbing with Random Restarts, Section 2.2), where we hill-climb until a random timer goes off, then update pheromones under the presumption that we've hill-climbed long enough to roughly get ourselves trapped in a local optimum.
Algorithm 113 Guided Local Search (GLS) with Random Updates
1: C ← {C_1, ..., C_n} set of possible components a candidate solution could have
2: T ← distribution of possible time intervals
3: ~p ← ⟨p_1, ..., p_n⟩ pheromones of the components, initially zero
4: S ← some initial candidate solution
5: Best ← S
6: repeat
7:     time ← random time in the near future, chosen from T
8:     repeat    ▷ First do some hill-climbing in the pheromone-adjusted quality space
9:         R ← Tweak(Copy(S))
10:        if Quality(R) > Quality(Best) then
11:            Best ← R
12:        if AdjustedQuality(R, C, ~p) > AdjustedQuality(S, C, ~p) then
13:            S ← R
14:    until Best is the ideal solution, time is up, or we have run out of time
15:    C′ ← {}
16:    for each component C_i ∈ C appearing in S do    ▷ Find the most penalizable components
17:        if for all C_j ∈ C appearing in S, Penalizability(C_i, p_i) ≥ Penalizability(C_j, p_j) then
18:            C′ ← C′ ∪ {C_i}
19:    for each component C_i ∈ C appearing in S do    ▷ Penalize them by increasing their pheromones
20:        if C_i ∈ C′ then
21:            p_i ← p_i + 1
22: until Best is the ideal solution or we have run out of time
23: return Best
The general idea behind Guided Local Search doesn't have to be restricted to hill-climbing: it could be used for population-based methods as well (and indeed is, where one version is known as the Guided Genetic Algorithm).
9 Optimization by Model Fitting
Most of the methods we've examined so far sample the space of candidate solutions and select the high-quality ones. Based on the samples, new samples are generated through Tweaking. Eventually the samples (if we're lucky) start migrating towards the fitter areas in the space.
But there's an alternative to using selection and Tweak. Instead, from our samples we might build a model (or update an existing one) which gives us an idea of where the good areas of the space are. From that model we could then generate a new set of samples.
Models can take many forms. They could be neural networks or decision trees describing how good certain regions of the space are. They could be sets of rules delineating regions in the space. They could be distributions over the space suggesting where most of the population should go. The process of fitting a model (sometimes known as a hypothesis) to a sample of data is commonly known as induction, and is one of the primary tasks of machine learning.
This model building and sample generation is really just an elaborate way of doing selection and Tweaking, only we're not generating children directly from other individuals, but instead creating them from the region in which the fitter individuals generally reside.
Much of the model-fitting literature in the metaheuristics community has focused on models in the form of distributions, especially simplified distributions known as marginal distributions. This literature is collectively known as Estimation of Distribution Algorithms (EDAs). But there are other approaches, largely cribbed from the machine learning community. We'll begin with one such alternative, then get to EDAs afterwards.
9.1 Model Fitting by Classification

Figure 56 Model fitting by classification via a decision tree. Black circles are fit and white circles are unfit individuals in the population. The learned model delimits fit and unfit regions of the genotype space.

A straightforward way to fit a model to a population is to simply divide the population into the fit individuals and the unfit individuals, then tell a learning method to use this information to identify the fitter regions of the space as opposed to the unfit regions. This is basically a binary classification problem.140
One of the better-known variations on model-fitting by classification is the Learnable Evolution Model (LEM), by Ryszard Michalski.141 The overall technique is very simple: first, do some evolution. Then when your population has run out of steam, break it into two groups: the fit and unfit individuals (and possibly a third group of middling individuals). Use a classification algorithm to identify the regions of the space containing the fit individuals but not containing the unfit individuals. Replace the unfit individuals with individuals sampled at random from those identified regions. Then go back to do some more evolution.
There are plenty of binary classification algorithms available in the machine learning world: for example, decision trees, Support Vector Machines (SVMs), k-Nearest-Neighbor (kNN), even Michalski's own AQ142 algorithm. LEM doesn't care all that much. Figure 56 shows the results of applying a decision tree to divide up the fit from unfit regions. Note some portions of the space could have been fit better: part of this is due to the particular learning bias of the decision tree algorithm, which emphasizes rectangles. Every learning method has a bias: pick your poison.

140 Classification is the task of identifying the regions of space which belong to various classes (or categories). Here, we happen to be dividing the genotype space into two classes: the fit individuals class and the unfit individuals class. Hence the term binary classification.
141 Ryszard Michalski, 2000, Learnable evolution model: Evolutionary processes guided by machine learning, Machine Learning, 38(1-2), 9-40.

The algorithm:
Algorithm 114 An Abstract Version of the Learnable Evolution Model (LEM)
1: b ← number of best individuals
2: w ← number of worst individuals    ▷ b + w ≤ ||P||. If you wish, you can make b + w = ||P||.
3: P ← Build Initial Population
4: Best ← ☐
5: repeat
6:     repeat    ▷ Do some evolution
7:         AssessFitness(P)
8:         for each individual P_i ∈ P do
9:             if Best = ☐ or Fitness(P_i) > Fitness(Best) then
10:                Best ← P_i
11:        P ← Join(P, Breed(P))
12:    until neither P nor Best seem to be improving by much any more
13:    P+ ← fittest b individuals in P    ▷ Fit a model
14:    P- ← least fit w individuals in P
15:    M ← learned model which describes the region of space containing members of P+ but not P-
16:    Q ← w children generated randomly from the region described in M    ▷ Generate children
17:    P ← Join(P, Q)    ▷ Often P ← (P - P-) ∪ Q
18: until Best is the ideal solution or we have run out of time
19: return Best
Some notes. First, the Join operation in Line 17 is often done by simply replacing the w worst individuals in P, that is, P ← (P - P-) ∪ Q. But you could do Join in other ways as well. Second, M could also be based not on P but on all previously tested individuals: why waste information?
Third, it's plausible, and in fact common, to do no evolution at all, and do only model building: that is, eliminate Lines 6, 11, and 12. This model-building-only approach will be used in later algorithms in this Section. Or, since it's sometimes hard to determine if things are improving, you could just run the evolution step for some n times and then head into model-building, or apply a timer a la Hill-Climbing with Random Restarts (Algorithm 10).
Generating Children from the Model The models produced by classification algorithms fall into two common categories: generative models and discriminative models. Generative models can easily generate random children for you. Discriminative models cannot. But many common classification algorithms (including all mentioned so far) produce discriminative models! What to do? We could apply rejection sampling to our discriminative models: repeatedly generate random individuals until one falls in the high-fitness region according to our model.
142 Originally called A_q, later restyled as AQ. I don't know why.
Algorithm 115 Simple Rejection Sampling
1: n ← desired number of samples
2: M ← learned model
3: P ← {}
4: for n times do
5:     repeat
6:         S ← individual generated uniformly at random
7:     until S is in a fit region as defined by M
8:     P ← P ∪ {S}
9: return P
As the run progresses and the population homes in on the optima in the space, the regions of fit individuals become very small, and rejection sampling starts getting expensive. Alternatively, you could try to gather the list of regions that are considered valid, and sample from them according to their size. Imagine that you've gone through the model (a decision tree, say) and have gathered a list of fit regions. For each region you have computed a volume. You could perform a kind of region-based sampling where you first pick a region proportional to their volumes (using Fitness Proportionate Selection, but with volumes rather than fitnesses), and then select a point uniformly at random within the chosen region. This would also create an entirely uniform selection.
Algorithm 116 Region-based Sampling
1: n ← desired number of samples
2: M ← learned model
3: P ← {}
4: R ← {R_1, ..., R_m} fit regions from M, each with computed volumes
5: for n times do
6:     R_i ← selected from R using Volume-Proportionate Selection    ▷ (Like Algorithm 30, so to speak)
7:     P ← P ∪ {individual generated uniformly from within the bounds of R_i}
8: return P
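A minimal Python sketch of region-based sampling follows, assuming the fit regions are axis-aligned boxes (the kind a decision tree would produce); the representation is only illustrative.

    import random

    def region_based_sample(regions, n):
        # regions: list of axis-aligned boxes, each a list of (low, high) bounds per gene.
        # Pick a region in proportion to its volume, then a uniform point inside it.
        volumes = []
        for box in regions:
            v = 1.0
            for low, high in box:
                v *= (high - low)
            volumes.append(v)
        total = sum(volumes)
        samples = []
        for _ in range(n):
            r = random.uniform(0.0, total)
            for box, v in zip(regions, volumes):
                r -= v
                if r <= 0.0:
                    break
            samples.append([random.uniform(low, high) for low, high in box])
        return samples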
It turns out that many discriminative models don't just create boundaries delimiting regions, but really define fuzzy functions specifying the probability that a given point belongs to one class or another. Deep in the low-fitness regions, the probability of a point being high-fitness is very small; while deep in the high-fitness regions it's quite big. On the borders, it's half/half. Furthermore, there exist approximate probability estimation functions even for those algorithms which are notionally boundary-oriented, such as k-Nearest-Neighbor, SVMs, and decision trees. For example, in a decision tree, the probability of a region belonging to the high-fitness class could be assumed to be proportional to the number of high-fitness individuals, from the population from which we built the model, which were located in that region.
Assuming we have this probability, we could apply a weighted rejection sampling, where we keep kids only with a probability matching the model:
Figure 57 The distribution of a population of candidate solutions, using samples of 5, 20, and 75, plus an infinite population distribution. (a) A population of 5 individuals. (b) A population of 20 individuals. (c) A population of 75 individuals. (d) A distribution of an infinite number of individuals, with Subfigure (c) overlaid for reference.

Algorithm 117 Weighted Rejection Sampling
1: n ← desired number of samples
2: M ← learned model
3: P ← {}
4: for n times do
5:     repeat
6:         S ← individual generated uniformly at random
7:         p ← probability that S is fit, according to M
8:     until p ≥ random number chosen uniformly from 0.0 to 1.0 inclusive
9:     P ← P ∪ {S}
10: return P
Algorithm 115 (simple rejection sampling) is just a degenerate version of weighted rejection sampling, where the probability is 1.0 if you're in the fit region and 0.0 if you're in the unfit region.
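Sketched generically in Python, weighted rejection sampling only needs some function that estimates the probability that a point is fit; in practice that estimate could come from a decision tree's per-leaf class proportions or a similar discriminative model. The names and the toy probability function below are made up for illustration.

    import random

    def weighted_rejection_sample(fit_probability, n, dimensions, low=0.0, high=1.0):
        # fit_probability: a function mapping a candidate (list of floats) to the
        # model's estimate of the probability that it lies in a fit region.
        samples = []
        for _ in range(n):
            while True:
                candidate = [random.uniform(low, high) for _ in range(dimensions)]
                p = fit_probability(candidate)
                if random.random() <= p:     # keep the candidate with probability p
                    break
            samples.append(candidate)
        return samples

    # Example: a made-up model that considers points near the center of [0,1]^d fitter.
    centerish = lambda x: max(0.0, 1.0 - 2.0 * sum(abs(xi - 0.5) for xi in x) / len(x))
    new_population = weighted_rejection_sample(centerish, n=10, dimensions=3)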
9.2 Model Fitting with a Distribution
An alternative form of model is a distribution describing an infinite-sized population using some mathematical function. This is the basic idea behind Estimation of Distribution Algorithms (EDAs). To conceptualize this, let's begin with an approach to a distribution which in fact no EDAs (to my knowledge) use, but which is helpful for illustration. Figure 57(a) shows a population of 5 individuals sampling the space roughly in proportion to the fitness of those regions. Figure 57(b) has increased this to 20 individuals, and Figure 57(c) to 75 individuals. Now imagine that we keep increasing the population clear to an infinite number of individuals. At this point our infinite population has become a distribution of the sort shown in Figure 57(d), with different densities in the space. Thus in some sense we may view Figures 57(a), (b), and (c) as sample distributions of the true underlying infinite distribution shown in Figure 57(d).
That's basically what a population actually is: in an ideal world we'd have an infinite number of individuals to work with. But we can't, because, well, our computers can't hold that many. So we work with a sample distribution instead.
The idea behind an Estimation of Distribution Algorithm is to represent that infinite population in some way other than with a large number of samples. From this distribution we will typically sample a set of individuals, assess them, then adjust the distribution to reflect the new fitness results we've discovered. This adjustment imagines that the entire distribution is undergoing selection143 such that fitter regions of the space increase in their proportion of the distribution, and the less fit regions decrease in proportion. Thus the next time we sample from the distribution, we'll be sampling more individuals from the fitter areas of the space (hopefully).
Algorithm 118 An Abstract Estimation of Distribution Algorithm (EDA)
1: D ← Build Initial Infinite Population Distribution
2: Best ← ☐
3: repeat
4:     P ← a sample of individuals generated from D
5:     AssessFitness(P)
6:     for each individual P_i ∈ P do
7:         if Best = ☐ or Fitness(P_i) > Fitness(Best) then
8:             Best ← P_i
9:     D ← UpdateDistribution(D, P)    ▷ Using P's fitness results, D undergoes selection
10: until Best is the ideal solution or we have run out of time
11: return Best
At this point you may have noticed that estimation of distribution algorithms are really just a fancy way of fitting generative models to your data. Such models are often essentially telling you the probability that a given point in space is going to be highly fit. Because they're generative, we don't need to do rejection sampling etc.: we can just produce random values under the models. In theory.
Figure 58 Approximating the distribution in Figure 57(d) with a histogram.

Representing Distributions So far we've assumed our space is real-valued and multidimensional. Let's go with that for a while. How could you represent a distribution over such a monster? One way is to represent the distribution as an n-dimensional histogram. That is, we discretize the space into a grid, and for each grid point we indicate the proportion of the population which resides at that grid point. This approach is shown in Figure 58. The difficulty with this method is twofold. First, we may need a fairly high-resolution grid to accurately represent the distribution (though we could do better by allowing the grid squares to vary in size, as in a kd-tree or quadtree). Second, if we have a high dimensional space, we're going to need a lot of grid points. Specifically, if we have n genes in our genome, and each has been discretized into a pieces, we'll need a^n numbers. Eesh.

143 As it's an infinite population, Tweaking is not actually necessary. Just selection.
Figure 59 Approximating the distribution in Figure 57(d) with three multivariate Gaussian curves.

Another way to represent our infinite population is with some kind of parametric distribution. For example, we could use some m number of gaussian curves to approximate the real distribution as shown in Figure 59 (with m = 3). This has the advantage of not requiring a massive number of grid squares. But it too has some problems. First off, how many gaussian curves do we need to accurately describe this population? Second, gaussian curves may not give you the cost savings you were expecting. A one-dimensional gaussian, like everyone's seen in grade school, just needs a mean μ and variance σ² to define it. But in an n-dimensional space, a multivariate gaussian which can be stretched and tilted in any dimension requires a mean vector ~μ of size n and a covariance matrix Σ,144 which is n² in size. So if you have 1000 genes, you need a covariance matrix of size 1,000,000 for a single gaussian.
Still though, n² is lots better than a^n. But it's not nearly good enough. Thus most estimation of distribution algorithms cheat and use a different representation which is radically simpler but at a huge cost: a set of marginal distributions.
A marginal distribution is a projection of the full distribution onto (usually) a single dimension. For example, Figure 60 shows the projection of the full joint distribution in two different directions, one for x and one for y. If we just use the marginal distributions in each dimension, then instead of a joint distribution of n dimensions, we just have n 1-dimensional distributions. Thus a marginal distribution contains the proportions of an infinite population which contain the various possible values for a single gene. There is one marginal distribution per gene.
Figure 60 Marginalized versions of the distribution in Figure 57(d): the joint distribution of the vector genotype ⟨x, y⟩, plus the marginal distribution of the x gene and the marginal distribution of the y gene. Since each distribution has two peaks, each could probably be reasonably approximated with two gaussians per distribution (four in total).
We've not come up with a new representation: just a way to reduce the dimensionality of the space. So we'll still need to have some way of representing each of the marginal distributions. As usual, we could use (for example) a parametric representation like one or more 1-dimensional gaussians; or we could use a 1-dimensional array as a histogram, as shown in Figure 61.
From Figure 60 it appears that we could probably get away with representing each marginal distribution with roughly two 1-dimensional gaussians. Each such gaussian requires a mean and a variance: that's just 8 numbers (a mean and variance for each gaussian, two gaussians per marginal distribution, two marginal distributions). In general, if we needed b gaussians per dimension, we'd need 2bn numbers. A tiny amount compared to n². Or if we chose to use a histogram, discretizing our one-dimensional distributions each into b buckets, that's still bn numbers, instead of the a^n we needed for the joint histogram. Great! (Actually, there's an ugly problem, but we'll get to that in a bit.)
144 Yes, Σ is classically used to represent covariance matrices. Not to be confused with the summation symbol Σ. Ugh. Try summing covariance matrices some time: ∑_i Σ_ij. Wonderful.
Now that we've burned out on real-valued spaces, consider (finite145) discrete spaces. Representing a joint discrete space is exactly like the grid in Figure 58, except (of course) we don't need to discretize: we're already discrete. However we still have a potentially huge number of points, which makes the marginal distributions attractive again. Each marginal distribution is, as usual, a description of the fractions of the population which have a particular value for their gene. Each gene thus has a marginal distribution consisting of just an array of fractions, one for every possible gene value. Similar to the marginalized histogram example.
In fact, if you have w possible gene values, you don't really need an array of size w. You just need the first w - 1 elements. The array must sum to 1 (it's a distribution), so it's clear what the last element value is.
Figure 61 Gaussian and histogram representations of a 1-dimensional marginal distribution.
We can get even simpler still: what if our space is simply multidimensional boolean? That is, each point in space is just a vector of booleans? You couldn't get simpler: the marginal distribution for each gene is represented by just a single number: the fraction of the population which has a 1 in that gene position (as opposed to a 0). Thus you can think of all marginal distributions for an n-dimensional boolean problem as a single real-valued vector of length n, with each value between 0.0 and 1.0.
9.2.1 Univariate Estimation of Distribution Algorithms
Now that we have a way of reducing the space complexity through marginalization, and can represent marginal distributions in various ways, we can look at some actual EDAs. The first EDAs were univariate EDAs: they used the marginalizing trick described earlier. Most of them also operated over discrete or even boolean spaces.
Among the earliest such EDAs was Population-Based Incremental Learning (PBIL), by Shumeet Baluja.146 PBIL assumes a finite discrete space. This algorithm begins with n marginal distributions, one per gene. Each distribution is initially uniform, but that'll change soon. The algorithm then repeatedly samples individuals by picking one gene from each distribution. It then assesses the fitness of the individuals, and applies truncation selection to throw out the worst ones. It then updates each marginal distribution by throwing out a little of its old probabilities and rolling in a little of the proportions of values for that gene which exist among the remaining (fitter) individuals. We then throw away the individuals and go back to making new ones from the revised distribution.
145 Countably infinite spaces, like the space of all integers or the space of trees or graphs, present a much yuckier problem and typically aren't handled by EDAs.
146 The first PBIL document was Shumeet Baluja, 1994, Population-based incremental learning: A method for integrating genetic search based function optimization and competitive learning, Technical Report CMU-CS-94-163, Carnegie Mellon University. The first formal publication, with Rich Caruana, was Shumeet Baluja and Rich Caruana, 1995, Removing the genetics from the standard genetic algorithm, in Armand Prieditis and Stuart Russell, editors, Proceedings of the Twelfth International Conference on Machine Learning (ICML), pages 38-46, Morgan Kaufmann.
Algorithm 119 Population-Based Incremental Learning (PBIL)
1: popsize ← number of individuals to generate each time
2: b ← how many individuals to select out of the generated group
3: α ← learning rate: how rapidly to update the distribution based on new sample information
4: D ← {D_1, ..., D_n} marginal distributions, one per gene    ▷ Each uniformly distributed
5: Best ← ☐
6: repeat
7:     P ← {}    ▷ Sample from D
8:     for i from 1 to popsize do
9:         S ← individual built by choosing the value for each gene j at random under distribution D_j
10:        AssessFitness(S)
11:        if Best = ☐ or Fitness(S) > Fitness(Best) then
12:            Best ← S
13:        P ← P ∪ {S}
14:    P ← the fittest b individuals in P    ▷ Truncation selection
15:    for each gene j do    ▷ Update D
16:        N_j ← distribution over the possible values for gene j found among the individuals in P
17:        D_j ← (1 - α) D_j + α N_j
18: until Best is the ideal solution or we have run out of time
19: return Best
That last equation (D_j ← (1 - α) D_j + α N_j) deserves some explanation. Keep in mind that because PBIL operates over discrete spaces, each distribution D_j is just a vector of fractions, one for each value that gene j can be. We multiply each of these fractions by 1 - α, and add in α's worth of fractions from N_j. N_j is the vector, one per value that gene j can be, of the fraction of members of P that have that particular value for gene j. So α helps us to gradually change the distribution.
In short: we sampled from D, threw out the least fit samples, and rolled their resulting distributions back into D. As a result D has shifted to be closer to the fitter parts of the space.
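To make the loop concrete, here is a compact PBIL sketch for a boolean space, where each marginal distribution D_j is simply the probability of a 1 in gene j; the parameter names follow the algorithm above, but the code itself is only illustrative.

    import random

    def pbil(fitness, n, popsize=50, b=10, alpha=0.05, generations=200):
        # One marginal distribution per gene: the probability of a 1 in that position.
        D = [0.5] * n
        best, best_fit = None, float("-inf")
        for _ in range(generations):
            population = []
            for _ in range(popsize):
                s = [1 if random.random() < D[j] else 0 for j in range(n)]
                f = fitness(s)
                if f > best_fit:
                    best, best_fit = s, f
                population.append((f, s))
            population.sort(reverse=True)              # truncation selection:
            fittest = [s for _, s in population[:b]]   # keep the b fittest samples
            for j in range(n):
                Nj = sum(s[j] for s in fittest) / float(b)   # proportion of 1s among the fittest
                D[j] = (1 - alpha) * D[j] + alpha * Nj       # roll a little of it into D
        return best

    # Example: maximize the number of 1s (the classic "max ones" problem).
    solution = pbil(fitness=sum, n=20)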
A variation on PBIL is the Univariate Marginal Distribution Algorithm (UMDA), by Heinz Mühlenbein.147 UMDA differs from PBIL only in two small respects. First, UMDA doesn't specify truncation selection as the way to reduce P: any selection procedure is allowed. Second, UMDA entirely replaces the distribution D each time around. That is, α = 1. Because there's no gradualness, if our latest sample doesn't contain a given value for a certain gene, that value is lost forever, just like using crossover without mutation in the genetic algorithm. As a result, to maintain diversity we will require a large sample each time if the number of discrete values each gene can take on is large. Perhaps for this reason, UMDA is most often applied to boolean spaces.
Next, we consider the Compact Genetic Algorithm (cGA) by Georges Harik, Fernando Lobo, and David Goldberg, which operates solely over boolean spaces.148 cGA is different from PBIL in important ways. Once again, we have a distribution and use it to generate some collection of individuals, but rather than do selection on those individuals, we instead compare every pair of individuals P_i and P_k in our sample. Assume P_i is fitter. For each gene j, if P_i and P_k differ in value at gene j, we shift D_j so that it will generate P_i's gene value more often in the future. Since cGA works only with booleans, gene values can only be 1 or 0, and each distribution D_j is represented by just a real-valued number (how often we pick a 1 versus a 0). If P_i was 1 and P_k was 0, we increase D_j by a small amount. Thus not only do the fit individuals have a say in how the distribution changes, but the unfit individuals do as well: they're telling the distribution: don't be like me!

147 Heinz Mühlenbein, 1997, The equation for response to selection and its use for prediction, Evolutionary Computation, 5(3), 303-346.
148 It's never been clear to me why it's cGA and not CGA. Georges Harik, Fernando Lobo, and David Goldberg, 1999, The compact genetic algorithm, IEEE Transactions on Evolutionary Computation, 3(4), 287-297.
The cGA doesn't model an infinite population, but rather a very large but finite population. Thus the cGA has steps for incrementing or decrementing distributions, each step 1/discretization in size. Moving one step up in a discretization represents one more member of that large population taking on that particular gene value. Though I'm not sure why you couldn't just say

    D_j ← (1 - α) D_j + α (value of gene j in P_i - value of gene j in P_k)

(or in the notation of the algorithm below, use U and V instead of P_i and P_k).
Algorithm 120 The Compact Genetic Algorithm (cGA)
1: popsize ← number of individuals to generate each time
2: discretization ← number of discrete values our distributions can take on    ▷ Should be odd, ≥ 3
3: D ← {D_1, ..., D_n} marginal boolean distributions, one per gene    ▷ Each uniform: set to 0.5
4: gameover ← false
5: Best ← ☐
6: repeat
7:     if for all genes j, D_j = 1 or D_j = 0 then    ▷ D has converged, so let's quit after this loop
8:         gameover ← true
9:     P ← {}    ▷ Sample from D
10:    for i from 1 to popsize do
11:        S ← individual built by choosing the value for each gene j at random under distribution D_j
12:        AssessFitness(S)
13:        if Best = ☐ or Fitness(S) > Fitness(Best) then
14:            Best ← S
15:        P ← P ∪ {S}
16:    for i from 1 to ||P|| do    ▷ For all pairs P_i and P_k, i ≠ k...
17:        for k from i + 1 to ||P|| do
18:            U ← P_i
19:            V ← P_k
20:            if Fitness(V) > Fitness(U) then    ▷ Make sure U is the fitter individual of the two
21:                Swap U and V
22:            for each gene j do    ▷ Update each D_j only if U and V are different
23:                if the value of gene j in U > the value of gene j in V and D_j < 1 then    ▷ 1 vs. 0
24:                    D_j ← D_j + 1/discretization    ▷ Push closer to a 1
25:                else if the value of gene j in U < the value of gene j in V and D_j > 0 then    ▷ 0 vs. 1
26:                    D_j ← D_j - 1/discretization    ▷ Push closer to a 0
27: until Best is the ideal solution, or gameover = true, or we have run out of time
28: return Best
I augmented this with our standard Best mechanism: though in fact the cGA doesn't normally include that gizmo. Instead the cGA normally runs until its distributions are all 1s or 0s, which indicates that the entire population has converged to a given point in the space. Then it just returns that point (this is easily done by just sampling from the D_j distributions one last time). To augment with the Best mechanism, I'm just running the loop one final time (using the gameover flag) to give this final sampling a chance to compete for the Best slot.
The version of cGA shown here is the more general round robin tournament version, in which every individual is compared against every other individual. A more common version of cGA just generates two individuals at a time and compares them. This can be implemented simply by setting the size of P to 2 in the round-robin tournament version.
In the round robin tournament version, we have to ensure that 0 ≤ D_j ≤ 1; but in the ||P|| = 2 version, that happens automagically. When D_j reaches (say) 0, then 100% of the individuals sampled from it will have 0 in that gene slot. That includes U and V. U and V will now always have the same value in that slot and the if-statements (lines 23 and 25) will be turned off.
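Here is a sketch of that common two-individuals-at-a-time version of cGA in Python; the fitness function and parameter defaults are illustrative only.

    import random

    def cga(fitness, n, discretization=101, max_steps=20000):
        # One number per gene: the probability of generating a 1 in that position.
        D = [0.5] * n
        step = 1.0 / discretization
        best, best_fit = None, float("-inf")
        for _ in range(max_steps):
            # Generate two individuals at a time (the ||P|| = 2 version).
            u = [1 if random.random() < D[j] else 0 for j in range(n)]
            v = [1 if random.random() < D[j] else 0 for j in range(n)]
            fu, fv = fitness(u), fitness(v)
            if fv > fu:
                u, v, fu, fv = v, u, fv, fu          # make u the fitter of the two
            if fu > best_fit:
                best, best_fit = u[:], fu
            for j in range(n):                       # shift D toward the winner where they differ
                if u[j] > v[j]:
                    D[j] = min(1.0, D[j] + step)
                elif u[j] < v[j]:
                    D[j] = max(0.0, D[j] - step)
            if all(d in (0.0, 1.0) for d in D):      # converged
                break
        return best

    solution = cga(fitness=sum, n=20)   # again the "max ones" problem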
Real-Valued Representations So far we've seen algorithms for boolean and discrete marginal distributions. How about real-valued ones?
Once we've marginalized a real-valued distribution, we're left with m separate 1-dimensional real-valued distributions. As discussed earlier, we could just discretize those distributions, so each gene would have some n (discrete) gene values. At this point we could just use PBIL: generate an individual by, for each gene, first picking one of those discrete gene values, then picking a random real-valued number within that discretized region. Likewise, to determine if a (discretized) gene value is found in a given individual, you just discretize the current value and see if it matches.
There are other approaches too. For example, you could represent each marginal distribution with a single gaussian. This would require two numbers, the mean μ and variance σ², per distribution. To create an individual, for each gene you just pick a random number under the gaussian distribution defined by μ and σ², that is, the Normal distribution N(μ, σ²) (see Algorithm 12).
In PBIL, to adjust the distribution to new values of μ and σ² based on the fitness results, we first need to determine the mean μ_{N_j} and variance σ²_{N_j} of the distribution N_j described by the fit individuals stored in P. The mean is obvious:

    μ_{N_j} = (1 / ||P||) ∑_{P_i ∈ P} (value of gene j of P_i)

We could use the unbiased estimator149 for our variance:

    σ²_{N_j} = (1 / (||P|| - 1)) ∑_{P_i ∈ P} (value of gene j of P_i - μ_{N_j})²

Now we just update the distribution D_j. Instead of using this line:

    D_j ← (1 - α) D_j + α N_j

We could do:

    μ_{D_j} ← (1 - α) μ_{D_j} + α μ_{N_j}
    σ²_{D_j} ← (1 - β) σ²_{D_j} + β σ²_{N_j}

149 I think this is what we want. If it isn't, then it's 1/||P|| rather than 1/(||P|| - 1).
The idea is to make the distribution in D_j more similar to the sample distribution we gathered in N_j. To be maximally general, σ² has its own learning rate β, but if you like you could set β = α.
Of course, in Figure 60 the distributions weren't described easily with a single gaussian, but rather would be okay with two gaussians each. Updating a multimodal distribution like that is perfectly doable but trickier, involving an iterative statistical procedure called Expectation Maximization, or the EM algorithm. That's a whole topic in and of itself, so I'll just leave it there. But in truth, I'd use several gaussians per marginal distribution in most cases.
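For the single-gaussian case, the per-gene update is simple enough to sketch directly in Python (alpha and beta are the two learning rates discussed above; everything else is illustrative):

    import random

    def update_gaussian_marginal(mu, var, fit_values, alpha=0.1, beta=0.1):
        # fit_values: the values of this gene among the fitter individuals kept in P
        # (needs at least two samples for the unbiased variance estimator).
        m = len(fit_values)
        sample_mean = sum(fit_values) / m
        sample_var = sum((x - sample_mean) ** 2 for x in fit_values) / (m - 1)
        mu = (1 - alpha) * mu + alpha * sample_mean
        var = (1 - beta) * var + beta * sample_var
        return mu, var

    def sample_gene(mu, var):
        # Draw a new gene value from N(mu, var).
        return random.gauss(mu, var ** 0.5)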
9.2.2 Multivariate Estimation of Distribution Algorithms
There is a very big problem with using marginal distributions, and it turns out it is the exact same problem that is faced by Cooperative Coevolution: it assumes that there is no linkage at all between genes. Each gene can be relegated to its own separate distribution without considering the joint distribution between the genes. We're throwing information away. As a result, marginal distributions suffer from essentially the same maladies that Cooperative Coevolution does.150 Consequently, univariate EDAs may easily get sucked into local optima for many nontrivial problems.
Recognizing this problem, recent EDA research has focused on coming up with more sophisticated EDAs which don't just use simple marginal distributions. But we can't just go to the full joint distribution: it's too huge. Instead they've moved a little towards the joint by using bivariate distributions: one distribution for every pair of genes in the individual. If you have n genes, this results in n² - n distributions, and that's prohibitively expensive. And if we go to triples or quadruples of genes per distribution, it gets uglier still.
Various algorithms have been proposed to deal with this. The prevailing approach seems to be to find the pairs (or triples, or quadruples, etc.) of genes which appear to have the strongest linkage, rather than computing all the combinations. We can then represent the joint distribution of the space as a collection of univariate distributions and the most strongly-linked bivariate (or multivariate) distributions. This sparse approximate representation of the space is known as a Bayes Network. Now instead of building the distribution D from our samples, we build a Bayes Network N which approximates the true distribution D as well as possible. N is likewise used to generate our new collection of samples.
Here I'm going to disappoint you: I'm not going to explain how to build a Bayes Network from a collection of data (in our case, a small population), nor explain how to generate a new data point (individual) from the same. There is an entire research field devoted to these topics. It's complex! And depending on the kind of data (real-valued, etc.), and the models used to represent them (gaussians, histograms, whatnot), it can get much more complex still. Instead, it might be wise to rely on an existing Bayes Network or Graphical Model package to do the hard work for you.
With such a package in hand, the procedure is pretty easy. We begin with a random sample (population) and cut it down to just the fitter samples. We then build a network from those samples which approximates their distribution in the space. From this distribution we generate a bunch of new data points (the children). Then the children get joined into the population. This is the essence of the Bayesian Optimization Algorithm (BOA) by Martin Pelikan, David Goldberg, and Erick Cantú-Paz. A more recent version, called the Hierarchical Bayesian Optimization Algorithm (hBOA),151 is presently the current cutting edge, but BOA suffices for our purposes here:

150 The model behind Cooperative Coevolution is basically identical to univariate estimation of distribution algorithms in its use of marginalization. The only difference is that Cooperative Coevolution uses samples (individuals in populations) for its marginal distributions, while univariate EDAs use something else: gaussians, histograms, what have you. Compare Figures 45 and 46 in the Coevolution Section with Figure 60 showing marginalized distributions: they're very similar. Christopher Vo, Liviu Panait, and I had a paper on all this: Christopher Vo, Liviu Panait, and Sean Luke, 2009, Cooperative coevolution and univariate estimation of distribution algorithms, in FOGA '09: Proceedings of the Tenth ACM SIGEVO Workshop on the Foundations of Genetic Algorithms, pages 141-150, ACM. It's not a giant result but it was fun to write.
Algorithm 121 An Abstract Version of the Bayesian Optimization Algorithm (BOA)
1: p ← desired initial population size
2: μ ← desired parent subset size
3: λ ← desired child subset size
4: Best ← ☐
5: P ← {P_1, ..., P_p} Build Initial Random Population
6: AssessFitness(P)
7: for each individual P_i ∈ P do
8:     if Best = ☐ or Fitness(P_i) > Fitness(Best) then
9:         Best ← P_i
10: repeat
11:    Q ← Select μ fit individuals from P    ▷ Truncation selection is fine
12:    N ← construct a Bayesian Network distribution from Q
13:    R ← {}
14:    for λ times do
15:        R ← R ∪ {individual generated at random under N}
16:    AssessFitness(R)
17:    for each individual R_j ∈ R do
18:        if Fitness(R_j) > Fitness(Best) then
19:            Best ← R_j
20:    P ← Join(P, R)    ▷ You could do P ← Q ∪ R, for example
21: until Best is the ideal solution or we have run out of time
22: return Best
So what's really going on with algorithms like these? They're actually little more than extravagant methods for doing population resampling. But they're different in an important way: the Bayes Network isn't just finding highly fit individuals to resample into a new population, it's trying to identify why they're highly fit. What features do they appear to have in common? Which elements in the individuals appear to matter and which ones don't?
This is a big deal: it can home in on the best parts of the space fairly rapidly. But it comes at a considerable cost: algorithms along these lines can get very complex due to manipulation of the Bayes Network, particularly if the space isn't something simple like a boolean space.
151 I have no idea why it's not HBOA. The BOA algorithm was introduced in Martin Pelikan, David E. Goldberg, and Erick Cantú-Paz, 1999, BOA: The Bayesian optimization algorithm, in Wolfgang Banzhaf, et al., editors, Proceedings of the Genetic and Evolutionary Computation Conference GECCO-1999, pages 525-532, Morgan Kaufmann. Two years later, hBOA was published in Martin Pelikan and David E. Goldberg, 2001, Escaping hierarchical traps with competent genetic algorithms, in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2001), pages 511-518, Morgan Kaufmann. Warning: hBOA is patented.
10 Policy Optimization
Section 4.5.1 introduced the notion of an agent which follows a simple program called a policy. Much of this section concerns methods for an agent to learn or optimize its policy.152 To do so, the agent will wander about doing what an agent does, and occasionally receive a reward (or reinforcement) to encourage or discourage the agent from doing various things. This reward ultimately trickles back through earlier actions the agent did, eventually teaching the agent which actions help to lead to good rewards and away from bad ones.
In the machine learning community, non-metaheuristic methods for learning policies are well established in a subfield called reinforcement learning. But those methods learn custom rules for every single state of the world. In contrast, there are evolutionary techniques, known as Michigan-Approach Learning Classifier Systems (LCS) or Pitt-Approach Rule Systems, which find much smaller, sparse descriptions of the entire state space. We'll begin by examining reinforcement learning because it is so closely associated with the evolutionary methods both historically and theoretically. Specifically, we'll spend quite a few pages on a non-metaheuristic reinforcement learning method called Q-Learning. Then we'll move to the evolutionary techniques.
I won't kid you. This topic can be very challenging to understand. You've been warned.153
10.1 Reinforcement Learning: Dense Policy Optimization
We begin with a non-metaheuristic set of techniques for learning dense policies, collectively known as reinforcement learning, partly to put the metaheuristic methods (in Section 10.2) in context, and partly because it teaches some concepts we'll need as we go on.
Reinforcement learning is a strange term. Generally speaking, it refers to any method that learns or adapts based on receiving quality assessments (the rewards or punishments, that is, the reinforcement). Thus every single topic discussed up to this point could be considered reinforcement learning. Unfortunately this very general term has been co-opted by a narrow sub-community interested in learning policies consisting of sets of if→then rules. Recall that in Section 4.5 such rules were called state-action rules, and collectively described what to do in all situations the agent might find itself in. The reinforcement learner figures out what the optimal state-action ruleset is for a given environment, based solely on reinforcement received when trying out various rulesets in the environment.
What kinds of environments are we talking about? Here's an example: a cockroach robot's world is divided into grid squares defined by GPS coordinates. When the robot tries to move from grid square to grid square (say, going north, south, east, or west), sometimes it succeeds, but with a certain probability it winds up in a different neighboring square by accident. Some grid squares block the robot's path in certain directions (perhaps there's a wall). In some grid locations there are yummy things to eat. In other places the robot gets an electric shock. The robot does not know which squares provide the food or the shocks. It's just trying to figure out, for each square in its world, what direction it should go so as to maximize the yummy food and minimize the shocks over the robot's lifetime.

Figure 62 Robot cockroach world with rewards (all unlabeled states have zero reward).

152 Unlike most other topics discussed so far, this is obviously a specific application to which metaheuristics may be applied, rather than a general area. But it's included here because this particular application has spawned unusual and important metaheuristics special to it; and it's a topic of some pretty broad impact. So we're going with it.
153 If you want to go deeper into Q-Learning and related methods, a classic text on reinforcement learning is Richard Sutton and Andrew Barto, 1998, Reinforcement Learning: an Introduction, MIT Press. This excellent book is available online at http://www.cs.ualberta.ca/sutton/book/the-book.html for free.
At right is a possible robot cockroach world, where if the cockroach stumbles into one area it gets a yummy treat (+1), and if it stumbles into another area it gets an electric shock (-2).
In this example, the cockroach robot is our agent. The grid squares are the external states (or just states) the agent may find itself in. The directions the cockroach tries to move are the actions available to the agent; different states may have different actions (in this case, because of the presence of walls). The yummy things to eat are positive reinforcement or positive reward, and the electric shocks are likewise negative reinforcement, or punishment, or negative reward (so to speak). The agent's attempt to maximize positive reinforcement over its lifetime is also known as trying to maximize the agent's utility154 (or value). The probability of winding up in a new state based on the current state and chosen action is known as the transition model. Our agent usually doesn't know the transition model, but one exists.
The reason each if→then rule is called a state-action rule in this context is because the if side indicates a possible external state, and the then side indicates what action to take when in that state. The agent is trying to construct a set of such rules, one for each possible external state, which collectively describe all the actions to take in the world. This collection of rules is known as a policy, and it is traditionally155 denoted as a function π(s) which returns the action a to take when in a given state s. Figure 63 shows a likely optimal policy for the cockroach world.
Figure 63 An optimal policy for the cockroach robot world.
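In a small discrete world like this one, a policy really is nothing more than a lookup table from states to actions. A toy Python sketch (with made-up states and actions) might be:

    # A policy for a tiny grid world: one action per state (grid square).
    policy = {
        (0, 0): "east",  (1, 0): "east", (2, 0): "south",
        (0, 1): "north", (1, 1): "east", (2, 1): "south",
    }

    def pi(state):
        # pi(s) returns the action a to take in state s.
        return policy[state]

    action = pi((1, 0))   # -> "east"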
Let's do another example. We want to learn how to play Tic-Tac-Toe (as X) against a random opponent based entirely on wins and losses. Each possible board situation where X is about to play may be considered a state. For each such state, there are some number of moves X could make; these are the available actions for the state. Then our opponent plays a random move against us and we wind up in a new state: the probability that playing a given action in a given state will wind up in a given new state is the transition model. Doing actions in certain states winds up punishing us or rewarding us because they cause us to immediately win or lose. Those are our reinforcements.
For example, if X plays at the location + in the state

    X X +
    O X *
    O 4 O

then X receives a positive reinforcement because X wins the game. If X plays at the location *, X probably loses immediately and receives negative reinforcement, provided the opponent isn't stupid156 (keep in mind, the next state is after the opponent makes his move too). And if X plays at 4 then X doesn't get any reinforcement immediately as the game must still continue (for a bit). Not getting reinforcement is also a kind of reinforcement: it's just a reinforcement of zero. Ultimately we're trying to learn a policy which tells us what to do in each board configuration.
154 Not to be confused with utility in Section 8.4.

155 Yes, using π as a function name is stupid.

156 Of course, to get in this situation in the first place, our random opponent wasn't the sharpest knife in the drawer.
Here's a third example, stolen from Minoru Asada's157 work in robot soccer. A robot is trying
to learn to push a ball into a goal. The robot has a camera and has boiled down what it sees into the
following simple information: the ball is either not visible or it is in the left, right, or center of the
field of view. If the ball is visible, it's also either small (far away), medium, or large (near). Likewise
the goal is either not visible, on the left, right, or center, and if visible it's either small, medium, or
large. All told there are ten ball situations (not visible, left small, left medium, left large, center
small, center medium, center large, right small, right medium, right large) and likewise ten goal
situations. A state is a pair of goal and ball situations: so there are 100 states. The robot can move
forward, curve left, curve right, move backward, back up to the left, and back up to the right. So
there are 6 actions for each state. The robot receives a positive reward for getting the ball in the
goal and zero for everything else.

It's not just robots and games: reinforcement learning is in wide use in everything from factory-floor
decision making to gambling to car engines deciding when and how to change fuel injection to
maximize efficiency to simulations of competing countries or businesses. It's used a lot.
All these examples share certain common traits. First, we have a fixed number of states. Second,
each state has a fixed number of actions, though the number and makeup of actions may differ
from state to state. Third, we're assuming that performing an action in a given state transfers
to other states with a fixed probability. That's nonsense but it's necessary nonsense to make
the problem tractable. Fourth, we're also assuming that we receive rewards for doing certain
actions in certain states, and that these rewards are either deterministic or also occur with a fixed
probability on a per state/action basis. That's also a somewhat ridiculous assumption but keeps
things tractable. And now the final nonsense assumption: the transition probabilities are based
entirely on our current state and action: earlier actions or states do not influence the probabilities
except through the fact that they helped us to land in our current state and action. That is, to figure
out what the best possible action is for a given state, we don't need to have any memory of what
we did a while back. We just need a simple if→then describing what to do given the situation
we are in now. This last assumption is commonly known as an assumption of a Markovian158
environment. Very few real situations are Markovian: but this assumption truly makes the problem
tractable, so we try to make it whenever possible if it's not totally crazy.
10.1.1 Q-Learning

Q-Learning is a popular reinforcement learning algorithm which is useful to understand before we
get to the evolutionary models. In Q-Learning, the agent maintains a current policy π(s) (the best
policy it's figured out so far) and wanders about its environment following that policy. As it learns
that some actions aren't very good, the agent updates and changes its policy. The goal is ultimately
to figure out the optimal (smartest possible) policy, that is, the policy which brings in the highest
expected rewards over the agent's lifetime. The optimal policy is denoted with π*(s).

The agent doesn't actually store the policy: in fact the agent stores something more general than
that: a Q-table. A Q-table is a function Q(s, a) over every possible state s and action a that could
be performed in that state.
157 Among lots of other things, Minoru Asada is the co-founder of the RoboCup robot soccer competition.

158 Andrey Andreyevich Markov was a Russian mathematician from 1856-1922, and was largely responsible for Markov
chains, which are lists of states s1, s2, ... the agent finds itself in as it performs various actions in a Markovian environment.
This field, a major area in probability theory, is a large part of what are known as stochastic processes, not to be confused
with stochastic optimization.
Figure 64 The Q-Learning state-action model. We are presently in some state s and decide to perform an action a. With
a certain probability P(s′|s, a), doing that action a while in s leads to a state s′ (here there are three possible s′ we could
land in: s′(1), s′(2), and s′(3)). We presume that from then on out we make the smartest possible action π*(s′) for each
state s′, leading to still further states and smartest possible actions for them, and so on. Note that in this model the first
action we do (a) may not be the smartest action for s.
The Q-table tells us how good it would be to be presently in s, then perform action a, and then
follow the optimal policy from then on. Thus the Q-value tells us the utility of doing action a when
in s if we were a perfect agent (other than our initial choice of a). The agent starts with crummy
Q-tables with lots of incorrect information, and then tries to update them until they approach the
optimal Q-table, denoted Q*(s, a). If we had the optimal Q-table, the optimal policy would simply
be to pick, in each state s, whichever action has the higher Q-value, that is, π*(s) = argmax_a Q*(s, a),
and to keep doing so thereafter.

In a perfect world, where we actually knew P(s′|s, a), there's a magic equation which we can
use to compute Q*(s, a):

    Q*(s, a) = R(s, a) + γ Σ_{s′} P(s′|s, a) max_{a′} Q*(s′, a′)        (1)

(The constant γ, 0 < γ < 1, cuts down future rewards so they count less; more on it in the
derivation at the end of this Section.)
This equation says that the utility of doing a in s is the immediate reward plus the (cut-down,
expected) utility of behaving optimally from wherever we land next. To see why such a recursion
makes sense, ignore γ and the transition probabilities for a moment, and imagine the agent
performing actions a_0, a_1, a_2, ... from states s_0, s_1, s_2, .... The Q*(s_2, a_2) value at time 2 is equal
to the sum total rewards from then on, that is, R(s_2, a_2) + R(s_3, a_3) + .... Similarly the Q*(s_1, a_1)
value at time 1 is equal to R(s_1, a_1) + R(s_2, a_2) + R(s_3, a_3) + .... Thus
Q*(s_1, a_1) = R(s_1, a_1) + Q*(s_2, a_2). Similarly, Q*(s_0, a_0) = R(s_0, a_0) + Q*(s_1, a_1). See the
similarity with Equation 1? That equation had the additional term γ Σ_{s′} P(s′|s, a) max_{a′} Q*(s′, a′),
rather than just Q*(s′, a′). This is because of the transition probability P. The term tells us the
weighted average Q* value over the various states s′ we might land in, weighted by how likely we
are to land in each.159

159 Equation 1 is derived from a famous simpler equation by Richard Bellman called the Bellman Equation.
That equation doesn't have actions explicitly listed, but rather assumes that the agent is performing some (possibly
suboptimal) hard-coded policy π. The Bellman equation looks like this:

    U(s) = R(s) + γ max_a Σ_{s′} P(s′|s, a) U(s′)

The U(s) bit is the equivalent of Q*(s, a), but it assumes that the a we do is always π(s). By the way, it's U for Utility,
just as it's R for Reward or Reinforcement. Sometimes instead of U you'll see V (for the synonymous Value). The probability
function isn't usually denoted P(s′|s, a) (I wrote it that way to be consistent with probability theory) but is rather
usually written T(s, a, s′). That is, T for Transition Probability. Hmm, I wonder if we could use Q for Q-tility...
Algorithm 122 Q-Learning with a Model
1: R(S, A) ← reward function for doing a while in s, for all states s ∈ S and actions a ∈ A
2: P(S′|S, A) ← probability distribution that doing a while in s results in s′, for all s, s′ ∈ S and a ∈ A
3: γ ← cut-down constant        ▷ 0 < γ < 1. 0.5 is fine.
4: Q*(S, A) ← table of utility values for all s ∈ S and a ∈ A, initially all zero
5: repeat
6:     Q′(S, A) ← Q*(S, A)        ▷ Copy the current table
7:     for each state s ∈ S do
8:         for each action a ∈ A performable in s do
9:             Q*(s, a) ← R(s, a) + γ Σ_{s′} P(s′|s, a) max_{a′} Q′(s′, a′)
10: until Q*(S, A) isn't changing much any more
11: return Q*(S, A)
That is, we start with absurd notions of Q*, and repeatedly plug our current (wrong) estimates
back into the equation to produce new estimates, until the Q* values don't change any more. This
notion is called bootstrapping, and it may seem crazy but it's perfectly doable because of a
peculiarity of Q-learning made possible by Markovian environments: the Q-learning world has no
local optima. Just one big global optimum. Basically this is an obfuscated way of doing hill-climbing.
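To make the model-based procedure concrete, here is a minimal sketch in Python of repeatedly
applying Equation 1, in the spirit of Algorithm 122. The tiny two-state, two-action reward and
transition tables, the value of γ, and the convergence threshold are made up purely for illustration.

    # A sketch of Q-Learning with a Model: repeatedly apply Equation 1 until the table settles.
    # The toy states, actions, rewards, and transition probabilities are invented for illustration.

    states = [0, 1]
    actions = [0, 1]
    gamma = 0.5                                   # cut-down constant

    # R[s][a]: reward for doing action a while in state s
    R = {0: {0: 0.0, 1: 1.0},
         1: {0: -2.0, 1: 0.0}}

    # P[s][a][s2]: probability that doing a while in s lands us in s2
    P = {0: {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}},
         1: {0: {0: 0.5, 1: 0.5}, 1: {0: 0.1, 1: 0.9}}}

    Q = {s: {a: 0.0 for a in actions} for s in states}     # initially all zero

    while True:
        Qold = {s: dict(Q[s]) for s in states}             # Q', a copy of the table
        for s in states:
            for a in actions:
                Q[s][a] = R[s][a] + gamma * sum(
                    P[s][a][s2] * max(Qold[s2].values()) for s2 in states)
        # stop once the table isn't changing much any more
        if max(abs(Q[s][a] - Qold[s][a]) for s in states for a in actions) < 1e-9:
            break

    policy = {s: max(Q[s], key=Q[s].get) for s in states}  # pi*(s) = argmax_a Q*(s, a)
    print(Q, policy)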
Q-Learning as Reinforcement Learning    The algorithm just discussed is an example of what is
known in engineering and operations research circles as dynamic programming. This isn't to be
confused with the use of the same term in computer science.160 In computer science, dynamic
programming is an approach to solve certain kinds of problems faster because they can be broken
into subproblems which overlap. In engineering, dynamic programming usually refers to figuring
out policies for agents in Markovian environments where the transition probability P and reward
function R are known beforehand.
From an artificial intelligence perspective, if we have P and R, this isn't a very interesting
algorithm. Instead, what we really want is an algorithm which discovers Q* just by wandering
about in the world and experiencing rewards, without being told P or R beforehand. Without P we
can no longer weight the various Q*(s′, a′) according to how often they occur. Instead, now we'll
just add them in as we wind up in various s′. Wander around enough and the distribution of these
s′ approaches P(s′|s, a). So: we'll build up an approximation of Q(S, A) little by little. Each time we
perform an action a in a state s, receive a reward r, and land in a new state s′, we fold the new
information into our current estimate using a learning rate 0 < α < 1:

    Q(s, a) ← (1 − α) Q(s, a) + α (r + γ max_{a′} Q(s′, a′))        (2)

How does the agent decide what action to make? The algorithm will converge, slowly, to the
optimum if the action is picked entirely at random. Alternatively, you could pick the best action
possible for the state s, that is, use π*(s). But of course we don't know π* yet: that's what we're
trying to learn. Well, we could fake it by picking the best action we've discovered so far with our
(crummy) Q-table, that is, argmax_a Q(s, a).
That seems like a nice answer. But it's got a problem. Let's go back to our cockroach example.
The cockroach is wandering about and discovers a small candy. Yum! As the cockroach wanders
about in the local area, nothing's as good as that candy; and eventually for every state in the local
area the cockroach's Q-table tells it to go back to the candy. That'd be great if the candy was
the only game in town: but if the cockroach just wandered a bit further, it'd discover a giant pile
of sugar! Unfortunately it'll never find that, as it's now happy with its candy. Recognize this
problem? It's Exploration versus Exploitation all over again. If we use the best action a that we've
discovered so far, Q-learning is 100% exploitative. The problem is that the model-free version of the
algorithm, unlike the dynamic programming version, has local optima. We're getting trapped in a
local optimum. And the solution is straight out of stochastic optimization: force more exploration.
We can do this by adding some randomness to our choices of action. Sometimes we do the best
action we know about so far. Sometimes we just go crazy. This approach is called ε-greedy action
selection, and is guaranteed to escape local optima, though if the randomness is low, we may be
waiting a long time. Or we might do a Simulated Annealing kind of approach and initially just do
crazy things all the time, then little by little only do the best thing we know about.

Last, it's fine to have α be a constant throughout the run. Though you may get better results if
you reduce α for those Q(s, a) entries which have been updated many times.
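Below is a minimal sketch of the model-free update (Equation 2) with ε-greedy action selection.
The env object, with its states, actions, start(), and step(s, a) → (reward, next state), is an assumed
stand-in for whatever black-box environment you have; the parameter values and the episode
structure are likewise illustrative choices, not part of the text.

    import random

    # A sketch of model-free Q-learning with epsilon-greedy action selection.
    # 'env' is an assumed stand-in for any environment offering step(s, a) -> (reward, next_state),
    # a list of states, a list of actions, and a start() method.

    def q_learning(env, alpha=0.1, gamma=0.5, epsilon=0.1, episodes=1000, steps=100):
        Q = {s: {a: 0.0 for a in env.actions} for s in env.states}
        for _ in range(episodes):
            s = env.start()
            for _ in range(steps):
                # epsilon-greedy: usually exploit the best known action, sometimes go crazy
                if random.random() < epsilon:
                    a = random.choice(env.actions)
                else:
                    a = max(Q[s], key=Q[s].get)
                r, s2 = env.step(s, a)
                # Equation 2: fold the new estimate into the old one with learning rate alpha
                Q[s][a] = (1 - alpha) * Q[s][a] + alpha * (r + gamma * max(Q[s2].values()))
                s = s2
        return Q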
Generalization    Believe it or not, there was a reason we covered all this. Reinforcement Learning
would be the end of the story except for a problem with the technique: it doesn't generalize.
Ordinarily a learner should be able to make general statements about the entire environment based
on just a few samples of the environment. That's the whole point of a learning algorithm. If you
have to examine every point in the space, what's the point of using a learning algorithm? You've
already got knowledge of the entire universe.

Reinforcement Learning learns a separate action for every point in the entire space (every
single state). Actually it's worse than that: Q-learning develops a notion of utility for every possible
combination of state and action. Keep in mind that in the Soccer Robot example, there were 100 states
and 6 actions. That's a database of 600 elements! And that's a small environment. Reinforcement
Learning doesn't scale very well.

Many approaches to getting around this problem are basically versions of discretizing the space
to reduce its size and complexity. Alternatively you could embed a second learning algorithm,
typically a neural network, into the reinforcement learning framework to try to learn a simple set
of state-action rules which describe the entire environment.

Another approach is to use a metaheuristic to learn a simple set of rules to describe the
environment in a general fashion. Such systems typically use an evolutionary algorithm to cut up
the space of states into regions all of which are known to require the same action. Then each rule is
simply of the form region description → action. Instead of having one rule per state, we have one rule
per region, and we can have as few regions as it takes to describe the entire space properly. We'll
cover those next. But first...
A Final Derivation    You can skip this if you like. The goal is to show where the magic equation

    Q*(s, a) = R(s, a) + γ Σ_{s′} P(s′|s, a) max_{a′} Q*(s′, a′)

came from. We're going to go through the derivation of Q*. We defined Q*(s, a) as telling us, for
any given state s and action a, how good it would be to start in state s, then perform action a, and
then perform the smartest possible actions thereafter (that is, thereafter, we use π*). Written as an
expected sum of future rewards, that is:

    Q*(s, a) = E[ Σ_{t=0}^{∞} R(s_t, a_t) | s_0 = s, a_0 = a, a_{t≥1} = π*(s_t) ]

There's a problem. Imagine that there are two actions A and B, and if you always do action A,
regardless of your state, you get a reward of 1. But if you always do action B, you always get a
reward of 2. If our agent's lifetime is infinite, both of these sum to infinity. But clearly B is preferred.
We can solve this by cutting down future rewards so they don't count as much. We do this by
adding a multiplier γ, 0 < γ < 1, raised to the power of t so it makes future rewards worth less. This
causes the sums to always be finite, and B's sum to be higher than A's sum.
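For instance (a quick sanity check using the geometric series): with γ = 0.9, always doing A sums to
Σ_{t=0}^{∞} 0.9^t × 1 = 1/(1 − 0.9) = 10, while always doing B sums to 2/(1 − 0.9) = 20. Both sums are
now finite, and B's is still the higher of the two.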
    Q*(s, a) = E[ Σ_{t=0}^{∞} γ^t R(s_t, a_t) | s_0 = s, a_0 = a, a_{t≥1} = π*(s_t) ]        (3)
Now let's pull our first actions s and a out of the sum. In the sum they're known as s_0 and a_0.
They'll come out with their associated γ, which happens to be γ^0.

    Q*(s, a) = E[ γ^0 R(s_0, a_0) + Σ_{t=1}^{∞} γ^t R(s_t, a_t) | s_0 = s, a_0 = a, a_{t≥1} = π*(s_t) ]
From now on out, the goal is going to be to massage the stuff inside the expectation so that it
looks like the expectation in Equation 3 again. Let's get going on that. Obviously γ^0 = 1 so we can
get rid of it. Now there's nothing in the expectation that R(s_0, a_0) relies on, so it can be pulled
straight out, at which time we can rename s_0 and a_0 back to s and a.

    Q*(s, a) = R(s, a) + E[ Σ_{t=1}^{∞} γ^t R(s_t, a_t) | s_0 = s, a_0 = a, a_{t≥1} = π*(s_t) ]
Next comes the most complex part of the derivation. We'd like to get rid of the s_0 and a_0
still inside the expectation. So we'll create a new state s′ to be the next state s_1. But recall from
Figure 64, there are actually many possible states s′(1), s′(2), ..., each with an associated probability
P(s′(1)|s, a), P(s′(2)|s, a), ... that the given s′ state will be the one we wind up landing in after doing
action a in state s. So if we pull s′ out of the expectation, nothing in the expectation will reflect
this fact, and we'll have to explicitly state that the old expectation has been broken into multiple
expectations, one per s′, and we're adding them up, multiplied by the probabilities that they'd
occur. Here we go:

    Q*(s, a) = R(s, a) + Σ_{s′} P(s′|s, a) E[ Σ_{t=1}^{∞} γ^t R(s_t, a_t) | s_1 = s′, a_{t≥1} = π*(s_t) ]
Now we can change the inner sum back to t = 0, because there's nothing inside the expectation
that relies on timestep 0 anymore. So inside the expectation we'll just redefine t = 1 to be t = 0.
This will cause everything to be multiplied by one fewer γ, so we'll need to add a γ as well:

    Q*(s, a) = R(s, a) + Σ_{s′} P(s′|s, a) E[ γ Σ_{t=0}^{∞} γ^t R(s_t, a_t) | s_0 = s′, a_{t≥0} = π*(s_t) ]
That γ isn't dependent on anything, so we can pull it clear out of the expectation and the sum:

    Q*(s, a) = R(s, a) + γ Σ_{s′} P(s′|s, a) E[ Σ_{t=0}^{∞} γ^t R(s_t, a_t) | s_0 = s′, a_{t≥0} = π*(s_t) ]
Notice that inside the expectation we now have a new s_0 but no a_0. We remedy that by breaking
our a_{t≥0} up again. Instead of defining a_0 to be π*(s_0), we're going to invent a new symbol a′ to
represent the action we perform when we're in s′, that is, a_0 = a′. This allows us to move the a_0
definition outside of the expectation. But once again to do this we have to keep around the notion
that a′ is the smartest possible action to perform when in a given s′. We do this by introducing the
operator max to select the a′ that yields the highest possible expectation (that is, it's the smartest
pick, and so is clearly π*(s′)):

    Q*(s, a) = R(s, a) + γ Σ_{s′} P(s′|s, a) max_{a′} E[ Σ_{t=0}^{∞} γ^t R(s_t, a_t) | s_0 = s′, a_0 = a′, a_{t≥1} = π*(s_t) ]
And now the payoff for all this manipulation. Notice that the expectation (everything after the
max) now looks very similar to Equation 3. The only difference is that we're using s′ instead of s and
a′ instead of a. This allows us to just say:

    Q*(s, a) = R(s, a) + γ Σ_{s′} P(s′|s, a) max_{a′} Q*(s′, a′)

Ta-da! A recursive definition pops out by magic!
10.2 Sparse Stochastic Policy Optimization

As mentioned before, the primary issue with reinforcement learning is that it constructs a unique
rule for every possible state. Q-learning is even worse, as it builds a table for every state/action
combination. If there are lots of states (and lots of actions), then there are going to be a lot of slots
in that table. There are a variety of ways to counter this, including simplifying the state space or
trying to use a learning method like a neural network to learn which states all have the same action.
Popular current methods include Ronald Williams's REINFORCE161 algorithms and Andrew Ng
and Michael Jordan's PEGASUS,162 techniques collectively known as policy search.
Figure 66 A sparse version of the optimal policy for the cockroach robot world, with five rules
(a...e). Compare to Figure 63. The state marked in the figure is covered by three different rules
(a, c, and d), with d being the most specific.
We could also use metaheuristics to learn a sparse representation of this rule space. The idea is to
learn a set of rules, each of which attach an action not to a single state but to a collection of states
with some feature in common. Rather than have one rule per state, we search for a small set of
rules which collectively explain the space in some general fashion.

Imagine that states describe a point in N-dimensional space. For example, in our soccer robot
example, we might have four dimensions: ball size, ball position (including not there), goal size,
and goal position (including not there). In the cockroach example, we might have two dimensions:
the x and y values of the grid location of the cockroach. In the Tic-Tac-Toe example we might have
nine dimensions: each of the board positions. Given an N-dimensional space, one kind of rule
might describe a box or rectangular region in that space rather than a precise location. For example,
here's a possible rule for the cockroach robot:

    x ≥ 4 and x ≤ 5 and y ≥ 1 and y ≤ 9 → go up

Such a rule is called a classification rule, as it has classified (or labelled) the rectangular region
from ⟨4, 1⟩ to ⟨5, 9⟩ with the action go up. The rule is said to cover this rectangular region. The
objective is to find a set of rules which cover the entire state space and properly classify the states
in their covered regions with the actions from the optimal policy. For example, in Figure 66 we
have a small set of rules which collectively define exactly the same policy as shown in Figure 63.

If rules overlap (if the problem is over-specified), we may need an arbitration scheme. Were I
to hand-code such a ruleset, the arbitration scheme I'd pick would be based on specificity: rules
covering smaller regions defeat rules covering larger regions. Figure 66 does exactly that. But the
methods discussed later use different approaches to arbitration.

161 Ronald J. Williams, 1992, Simple statistical gradient-following algorithms for connectionist reinforcement learning,
in Machine Learning, pages 229-256.

162 This is a nontrivial paper to read. Andrew Ng and Michael Jordan, 2000, PEGASUS: A policy search method for large
MDPs and POMDPs, in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, pages 406-415.
There are two basic ways we could use a metaheuristic to learn rulesets of these kinds:

A candidate solution (or individual) is a complete set of rules. Evolving rulesets is known as
Pitt Approach Rule Systems.

An individual is a single rule: and the whole population is the complete set of rules. Evolving
individual rules and having them participate collectively is known as the Michigan Approach
to Learning Classifier Systems, or just simply Learning Classifier Systems (LCS).163
10.2.1 Rule Representation

State-action rules in Q-learning took the following form:

    If I am in the following state... → Then do this...

The first part is the rule body, which defines the kinds of world states which would trigger the
rule. The second part is the rule head, which defines the action to take when the rule is triggered.
We can generalize the rule body in two different ways to cover more than one state. First, rule
bodies might require exact matches:

    If I am in a state which exactly fits the following features... → Then do this...

Or we can have rules which describe imprecise matches:

    If I am in a state which sort of looks like this, even with a few errors... → Then do this...

In the first case, we have the issue of under-specification: we need to make sure that for every
possible state, there's some rule which covers that state. To guarantee this we might need to rely
on some kind of default rule which is assumed to match when no others do. Alternatively, the
algorithm might generate a rule on-the-fly, and insert it into the ruleset, to match a state if it
suddenly shows up.

In the second case, we don't need to worry about under-specification, since every rule matches
every state to some degree. But we will need to define a notion of how well a rule matches. This is
known as a rule's match score. The rule with the best match score might be selected.
In either case, we'll still need to worry about over-specification, requiring an arbitration scheme.
Instead of specificity, the later methods use some combination of:

The utility of the rule: essentially its Q-value, determined by the agent as it has tried out
the rule in various situations. Recall that utility is a measure of how often the rule led to high
rewards. Higher-utility rules might be preferred over lower-utility rules.

The variance in the rule's utility: if the rule is consistent in yielding high rewards, it might be
preferred over more tenuous rules which occasionally get lucky.

The error in the rule's utility: the difference between the rule's utility and the utilities of rules
which it leads to.

The match score of the rule: rules more apropos to the current situation would be preferred
over ones whose bodies don't match the situation very well.

163 Don't confuse these with classification algorithms from machine learning, such as those mentioned in Section 9.1.
Those algorithms find classifications for whole regions of space based on provided samples in the space which have
been pre-labelled for them (part of an area called supervised learning). Whereas the metaheuristics described here find
classifications for regions based solely on reinforcement information gleaned while wandering about in the space.
Much of rule representation concerns itself with the rule body, which can take on many forms,
so it's worth considering them:

Real-Valued or Integer Metric Spaces    This state space is particularly common in Pitt-approach
rule systems, though it's being increasingly studied in Michigan-approach methods too. There are
lots of ways you could describe the space, though boxes are the most common. Here are a few:

Boxes    We've seen these already.

    Example:        x ≥ 20 and x ≤ 30 and y ≥ 1 and y ≤ 9 → go up
    Match Score:    If the point's in the box, it's got a perfect match score (1.0 maybe?);
                    else perhaps its match score is equal to the percentage of dimensions
                    in whose ranges it lies. For example, ⟨40, 5⟩ is covered by y but not by
                    x, which might result in a match score of 0.5. Another approach: the
                    match score decreases with distance from the box boundary.

Toroidal Boxes    If your state space is bounded, a box could go off of one side and wrap
around to the other. Imagine if the space was toroidal in the x direction and bounded from 0
to 360. The rule below would be true either when 60 ≤ x ≤ 360 or 0 ≤ x ≤ 20 (assuming y is
in the right region). This isn't totally nuts: it's useful if x described an angle, for example.

    Example:        x ≥ 60 and x ≤ 20 and y ≥ 1 and y ≤ 9 → go up
    Match Score:    Same as regular boxes.

Hyperspheres or Hyperellipsoids    A rule might be defined as a point (the center of the
sphere) and a radius. Or a rule might be defined as a point and associated data describing a
rotated multidimensional ellipsoid (perhaps a covariance matrix like those used to describe
multidimensional Gaussian curves). Here's an example of a simple hypersphere:

    Example:        If the state ⟨x, y, z⟩ lies within a sphere centered at ⟨4, 7, 2⟩ and of
                    radius 9.3 → go up
    Match Score:    Same notion as regular boxes.

Exemplars    Specific points in the space which serve as examples for the regions around
them. A ruleset of exemplars divides the environment up into a Voronoi tessellation: regions
of space delimited by which exemplar each region is closest to.164 Such rules rely entirely on
match scores, so certain techniques (often Michigan approach methods) might not be able to
use them. You may think of exemplars as infinitely small hyperspheres.

    Example:        If the state is nearest to ⟨4, 7, 2⟩ → go up
    Match Score:    The further from the exemplar, the lower the match score.

Hyperplanes165    The rule cuts a plane through the space, dividing an area on which we
have an opinion from an area in which the rule has no opinion. Hyperplanes may likewise be
problematic for some Michigan approach methods.

    Example:        If 2.3x + 9.2y − 7.3z > 4.2 → go up
    Match Score:    If the point is on the matching side of the hyperplane, it matches
                    perfectly (or its match score improves if further away from the plane).
                    If the point is on the non-matching side of the hyperplane, its match
                    score is worse, but improves as it approaches the hyperplane.

164 After Georgy Feodosevich Voronoi, 1868-1908, a Russian mathematician. Voronoi tessellations (sometimes called
Voronoi diagrams) are widely used in lots of areas of computational geometry, everything from graphics to wireless
networks to robotics. The notion of dividing space up by exemplars also forms the basis of the k-Nearest-Neighbor
(kNN) machine learning algorithm.
Non-Metric Integer Spaces    As we've seen earlier in Section 4 (Representation), integer spaces
might describe metric spaces or simply define unordered sets of objects (0 = red, 1 = blue, etc.).
Integer-space rule bodies are no different. An unordered integer rule might look like this:

    x = red and y = soft and z = hollow → go up

Here the rule, like exemplars, describes an exact point in the (unordered) space. A match score
might be defined in terms of the number of variables which exactly match the given state.
Unordered set rules might also have disjunctions:

    x = red and y = soft and z = (hollow or solid) → go up

A disjunction would be considered a single condition, and it'd be true if any of its parts were true.

Boolean Spaces    Though they have lately been generalized to other kinds of rules, Michigan
Approach classifier systems have traditionally focused on a single kind of rule: one involving
boolean conditions.

Because they're so simple, boolean rules tend to take on a certain standard pattern: combinations
of yes, no, and doesn't matter. Let's say each state in your state space is described by three
boolean values, x, y, and z. Thus your space has eight states. A boolean rule over three dimensions
might look like this:

    x = 1 and y = 0 (and z doesn't matter) → go up

In the parlance of Michigan Approach classifier systems, such a rule is usually written like this:

    10# → go up

Note that the # sign means this one doesn't matter. The more doesn't matter dimensions in
the rule, the less specific it is. Match scores might again be defined in terms of the number of values
(that matter) which exactly match the state.

Could rule bodies be trees or graphs? More complex functions? Who knows?
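As a concrete illustration of the rule bodies just described, here is a small sketch of a box rule and
a boolean #-pattern rule, each with the simple match-score notions suggested above. The specific
scoring choices (fraction of dimensions in range, fraction of matching non-# positions) are just the
illustrative options mentioned in the text, not the only possibilities.

    # A sketch of two rule-body styles and their match scores; scoring schemes are illustrative only.

    def box_match(lows, highs, state):
        # Box rule: fraction of dimensions whose range covers the state (1.0 = perfect match).
        inside = sum(1 for lo, hi, x in zip(lows, highs, state) if lo <= x <= hi)
        return inside / len(state)

    def boolean_match(pattern, state):
        # Boolean rule like '10#': fraction of non-# positions that agree with the state.
        cares = [(p, s) for p, s in zip(pattern, state) if p != '#']
        if not cares:
            return 1.0
        return sum(1 for p, s in cares if p == s) / len(cares)

    # The cockroach-style box rule "x >= 4 and x <= 5 and y >= 1 and y <= 9 -> go up":
    print(box_match([4, 1], [5, 9], [4, 5]))    # 1.0: fully covered
    print(box_match([4, 1], [5, 9], [40, 5]))   # 0.5: covered by y but not by x

    # The boolean rule "10# -> go up" against the state x=1, y=0, z=1:
    print(boolean_match("10#", "101"))          # 1.0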
165 There's a clever way of converting hyperplanes into more complex subregions of space, called kernelization, a
technique made popular by Support Vector Machines (SVMs) in machine learning. I've not had much luck with
kernelization in the context of rule systems though.
10.3 Pitt Approach Rule Systems

The Pitt Approach166 applies an evolutionary algorithm to find a set of rules which best describes
the optimal policy. A candidate solution is simply a set of such rules. Section 4.5 introduced the
notions of rulesets popularly used in Pitt Approach rule systems and suggested approaches to
initializing, recombining, and mutating them. Here we will discuss a particularly well-known Pitt
Approach algorithm, SAMUEL.167

SAMUEL was developed by John Grefenstette, Connie Ramsey, and Alan Schultz at the Naval
Research Laboratory.168 The idea is to employ a Pitt Approach to optimizing rulesets as entire
candidate solutions in stochastic optimization, and to also use reinforcement learning ideas to
improve the rules within a candidate solution. SAMUEL traditionally uses a genetic algorithm,
but most any optimization method is plausible. All the actual magic is in the fitness assessment
function, where rule utilities are computed in addition to the fitness of the whole ruleset, and in
the breeding operators. SAMUEL iterates through four basic steps:

1. Each individual is tested n times and the results are used to update the utilities of its rules.

2. Using the updated utility information, each individual's rules are improved in a special rule
mutation procedure.

3. Each individual is tested again some m additional times and the results are used to update
the fitness of the individual (ruleset) as a whole.

4. After all individuals have undergone the first three steps, we perform traditional evolutionary
algorithm style breeding and selection on the individuals based on fitness.

Fitness and Utility Assessment    The two assessment steps (1 and 3 above) are nearly identical
except for the statistics they update: so we'll treat them together here, and in fact Algorithm 124 is
used to describe both steps.
Both assessment procedures involve placing the agent in the world and having it follow the
policy as dictated by the ruleset being tested. As the agent is wandering about, we'll need to decide
which action the agent will choose at any given step. This is first done by computing a match set
consisting of rules which best match the current state, that is, those with the highest match score.
Next, only the highest-scoring rules for each action are retained. SAMUEL then chooses a rule to
perform from the match set using some kind of score-based selection procedure. For example, we
might simply choose the rule with the highest score; or select with a probability proportional to
the rule's score (as in fitness-proportionate selection, Algorithm 30). This two-level mechanism
(truncation followed by score-based selection) is intended to prevent large numbers of identical
crummy rules from being selected over a few high-quality ones.

166 Ken De Jong and students developed the Pitt Approach at the University of Pittsburgh. Hence the name.

167 SAMUEL is an acronym for Strategy Acquisition Method Using Empirical Learning. Yes, it's pushing it. In reality,
Grefenstette, Ramsey, and Schultz were looking for a way to name the algorithm after Arthur Samuel, a famous machine
learning pioneer who (coincidentally I believe) died the same year as the seminal SAMUEL paper. While at IBM in
the 1950s, Arthur Samuel developed a program which learned on its own how to play checkers, and this program is
considered a major landmark in artificial intelligence history. Hmm, I seem to have a lot of footnotes about checkers....
SAMUEL was first defined in John Grefenstette, Connie Ramsey, and Alan Schultz, 1990, Learning sequential decision
rules using simulation models and competition, Machine Learning, 5(4), 355-381. Though you can get a roughly current
version of the manual online via CiteSeerX, presently at http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.9876

168 NRL was instrumental in the development of GPS and much of modern radar.

The first fitness assessment procedure updates utility information about the rules. Recall that
Q-learning assumes that rewards occur throughout the agent's life. In contrast, SAMUEL assumes
that rewards tend to happen at the end of an agent's life. This leads to different strategies for
distributing rewards. In Q-learning, when a reward is received, it is stored in the Q-value for that
state-action combination; and later on when another state-action combination leads to this state,
the Q-value is then partially distributed to the earlier combination. We'll see this assumption again
in Michigan Approach methods, in Section 10.4. But SAMUEL instead directly and immediately
distributes rewards to all state-action rules which led to the reward. Such rules are called active.
More specifically: if a rule contained an action which was used at some time in the past, prior to a
reward r appearing, then when r is finally received, the utility of the rule is updated as:
    Utility(R_i) ← (1 − α) Utility(R_i) + α r

SAMUEL also maintains an approximation of the variance of the utilities of each rule, because
we want to have rules which both lead to high rewards and are consistent in leading to them. Each
time the utility is updated, the variance in utility is also updated as:

    UtilityVariance(R_i) ← (1 − α) UtilityVariance(R_i) + α (Utility(R_i) − r)²

Finally, SAMUEL uses this information to build up a quality of sorts of each rule, called the
rule's strength,169 which is a combination of its utility and utility variance. Strength affects how
likely the rule is to be mutated later on.

    Strength(R_i) ← Utility(R_i) + δ UtilityVariance(R_i)

We commonly set δ to a low value less than 1, as utility is more important than variance.

Distributing reward evenly among all rules is an odd choice. I would have personally distributed
so that later rules received more reward than earlier rules. Interestingly, SAMUEL maintains
information about how long ago a rule was active, though it uses it only to determine which rules
to delete. This value is called the activity level of a rule. Rules start with an activity level of 1/2, and
are updated each time the agent performs an action. Rules which had that particular action in their
heads are increased like this:

    Activity(R_i) ← (1 − β) Activity(R_i) + β

Given a β with 0 ≤ β ≤ 1, this has the effect of shifting a rule's activity towards 1 when the rule's
action is chosen. Rules without that action in their heads have their activity levels decreased:

    Activity(R_i) ← γ Activity(R_i)

for 0 ≤ γ ≤ 1. This has the effect of slowly decreasing the rule's activity level towards zero.

The second assessment procedure in SAMUEL is used to compute the fitness of the entire
individual (the ruleset). This is simply defined as the sum of rewards received by the individual
during testing. The following algorithm describes both fitness procedures: the particular procedure
being done (utility or fitness) is determined by the dofitness variable.

169 Not to be confused with Pareto strength (Section 7.3).
Algorithm 124 SAMUEL Fitness Assessment
1: S ← individual being assessed
2: α ← learning and decay rate
3: β ← activity level increase rate
4: δ ← how much variance to include
5: γ ← activity level decay rate
6: dofitness ← are we assessing to compute fitness (as opposed to rule strength)?
7: n ← number of times to test the agent
8: f ← 0
9: R ← {R_1, ..., R_l} rules in the ruleset of the individual S
10: for n times do
11:     s ← an initial state of agent
12:     Z ← {}        ▷ Active Rule Set
13:     for each rule R_i ∈ R do        ▷ All rules which were in an action set this time around
14:         Activity(R_i) ← 0.5
15:     repeat
16:         for each rule R_i ∈ R do        ▷ No matter how badly they match the state
17:             ComputeMatchScore(R_i, s)
18:         N ← all actions which appear in the head of any rule in R
19:         M ← {}        ▷ Match Set
20:         for each action N_j ∈ N do        ▷ Find the highest-scoring rule for each action
21:             R′ ⊆ R ← all rules in R whose heads are action N_j
22:             M ← M ∪ { the rule R′_i ∈ R′ whose match score is highest }
23:         R_a ← SelectWithReplacement(M)        ▷ Select among the highest-scoring rules
24:         A ⊆ R ← all rules whose heads (actions) are the same as the head of R_a        ▷ Action Set
25:         for each rule A_i ∈ A do        ▷ Increase activity
26:             Activity(A_i) ← (1 − β) Activity(A_i) + β
27:             if A_i ∉ Z then
28:                 Z ← Z ∪ {A_i}
29:         for each rule R_i ∈ R − A do        ▷ Decrease activity
30:             Activity(R_i) ← γ Activity(R_i)
31:         Perform action R_a, transitioning to a new state s        ▷ Notice no reward
32:     until the agent's life is over
33:     r ← cumulative reward (assessment) of the agent        ▷ Ah, here's the reward. Only at the end.
34:     if dofitness is false then        ▷ We're doing runs to update the strengths of the rules
35:         for each rule Z_i ∈ Z do
36:             Utility(Z_i) ← (1 − α) Utility(Z_i) + α r
37:             UtilityVariance(Z_i) ← (1 − α) UtilityVariance(Z_i) + α (Utility(Z_i) − r)²
38:             Strength(Z_i) ← Utility(Z_i) + δ UtilityVariance(Z_i)
39:     else        ▷ We're doing runs to update fitness
40:         f ← f + r
41: if dofitness is true then
42:     fitness of S ← f
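Here is a minimal sketch of the per-rule bookkeeping that the algorithm above performs: the
utility, utility-variance, strength, and activity updates described earlier. The Rule class and the
parameter values are illustrative assumptions, and the Greek-letter names follow the ones used in
the text above.

    # A sketch of SAMUEL's per-rule statistics, using the update rules given above.
    # The Rule class and the parameter settings are illustrative assumptions.

    class Rule:
        def __init__(self):
            self.utility = 0.0
            self.utility_variance = 0.0
            self.strength = 0.0
            self.activity = 0.5          # rules start with an activity level of 1/2

    def update_strength(rule, r, alpha=0.2, delta=0.1):
        # Fold the end-of-run reward r into the rule's utility, variance, and strength.
        rule.utility = (1 - alpha) * rule.utility + alpha * r
        rule.utility_variance = ((1 - alpha) * rule.utility_variance
                                 + alpha * (rule.utility - r) ** 2)
        rule.strength = rule.utility + delta * rule.utility_variance

    def update_activity(rule, chosen, beta=0.3, gamma=0.9):
        # Shift activity toward 1 if the rule's action was just chosen, else decay it toward 0.
        if chosen:
            rule.activity = (1 - beta) * rule.activity + beta
        else:
            rule.activity = gamma * rule.activity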
Mutation    SAMUEL has two mutation steps, each following one of the assessment steps. After
the first assessment procedure (which determines rule strength), the rules in the individual are
modified. Hopefully this improves the individual for the second fitness assessment (whose purpose
is to compute the actual fitness of the individual). After the second fitness procedure, we do regular
breeding of the population with more bulk-style, traditional operations.

Let's start with the first mutation step: improving the rules. SAMUEL performs any of the
following mutations on the individual to try to improve it for the second stage:

Rule Deletion    If a rule is sufficiently old (brand new rules are never deleted), has a
sufficiently low activity value (it's not fired recently), or its strength is sufficiently low, or if
the rule is subsumed by another rule with greater strength, then the rule is a candidate for
deletion. We may also delete a few rules randomly. It's up to you to decide these thresholds
and how many deletions occur. We say that a rule A is subsumed by another rule B if every
state that A covers is also covered by B, and B covers some additional states as well, and the
two rules have the same actions in their heads.

Rule Specialization    If a rule is not very strong and covers a large number of states, it is a
candidate for specialization, since it may be crummy because of the large region it's covering.
We add to the ruleset a new rule subsumed by the old rule (and thus more specific) and which
has the same action in its head. The original rule is retained. For example, the rule

    x ≥ 4 and x ≤ 5 and y ≥ 1 and y ≤ 9 → go up

might be specialized to

    x = 5 and y ≥ 6 and y ≤ 9 → go up

Rule Generalization    This is the opposite of rule specialization. If a rule is very strong and
covers a small number of states, it is a candidate for generalization because it might do well
with more states. We add to the ruleset a new rule which subsumes the old rule (and thus is
more general) and has the same action in its head. The original rule is retained.

Rule Covering    Covering is similar to generalization, but is based on information we
gleaned from the assessment process. Let's say that during assessment we discovered that a
certain rule had often fired but was fairly consistent in not completely matching the state. For
example, returning to our rule

    x ≥ 4 and x ≤ 5 and y ≥ 1 and y ≤ 9 → go up

Imagine that this rule had been selected a number of times when y = 4, x = 6. Obviously
x = 6 is out of bounds for the rule, but the y = 4 match was good enough, and the rule was
strong enough, for it to win even with only a partial match. Rule covering would select this
rule and create a new one more likely to match, for example:

    x ≥ 4 and x ≤ 6 and y ≥ 1 and y ≤ 9 → go up

The original rule is retained.

Rule Merging    If two rules are sufficiently strong, share the same action in their heads, and
overlap sufficiently in the number of states they cover, they're candidates for merging into a
single rule which is the union of them. The original rules are retained.

Notice that all these mutation mechanisms are directed, that is, they're explicitly exploitative,
aimed at pushing the rules so that they perform better next time. For this reason, John Grefenstette
refers to this mutation step as Lamarckian (see Section 3.3.4): it improves the individuals during
the course of assessment.
The remaining mutation operators occur during breeding just like any other evolutionary
algorithm, and have more of an explorative nature to them:

Plain Old Mutation    Make random mutations to some rules. The original rules are not
retained. This is the more explorative mutation.

Creep Mutation170    Make a very small, local random change to a few rules. The objective
here is to push a little bit for hill-climbing.

Recombination    Section 4.5.5 mentioned various approaches to crossing over rulesets. SAMUEL
offers other possibilities:

A version of Uniform Crossover    Some n times, the two individuals trade a rule at random.

Clustered Crossover    From the fitness assessment procedure we gather some statistics:
specifically, we want to know which sequences of rules led to a reward. From this we identify
pairs of rules which often led to a reward when they both appeared in a sequence. We then do
a uniform crossover, but at the end try to ensure that these pairs don't get split up: if one rule
winds up in individual A and the other in individual B, we move one of the two rules to the
other individual (swapping over some other rule instead). The idea is to recognize that there
is very strong linkage among rules in rulesets, and we want to cross over whole teams of rules
which have performed well as a group.

Notice that both of these recombination operators don't change the size of either ruleset. Nor
do the mutation operators during breeding. SAMUEL appears to restrict ruleset size changes to the
exploitative Lamarckian mutation operators which occur after the first assessment procedure.

Selection    You can use any old fitness-based selection procedure. Though SAMUEL traditionally
uses an odd combination of truncation selection and Stochastic Universal Sampling. Specifically,
we compute the mean fitness over the whole population, as well as the variance in the fitness. We
then update a baseline fitness as follows:

    baseline ← (1 − α) baseline + α (mean fitness − δ × variance in fitness)

... where 0 ≤ α ≤ 1, and δ is a parameter indicating how important variance is. Once we have our
baseline fitness, the only individuals which are even considered for selection are those whose fitness
is higher than the baseline. We then use a standard selection procedure (SAMUEL used Stochastic
Universal Sampling) to select among those individuals.

In truth, I wonder if just doing plain-old truncation selection would do just as well.

170 My vote for creepiest mutation name.
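A small sketch of this baseline-filtered selection is below. The rate and weight values are
illustrative, fitnesses are assumed to be non-negative, and Python's random.choices stands in for
the Stochastic Universal Sampling step described in the text.

    import random
    import statistics

    # A sketch of SAMUEL-style baseline-filtered selection. Assumes non-negative fitnesses;
    # random.choices stands in for Stochastic Universal Sampling.

    def select_parent(population, fitnesses, baseline, rate=0.1, weight=0.1):
        mean = statistics.mean(fitnesses)
        var = statistics.pvariance(fitnesses)
        baseline = (1 - rate) * baseline + rate * (mean - weight * var)
        eligible = [(ind, f) for ind, f in zip(population, fitnesses) if f > baseline]
        if not eligible:                      # fall back if nobody beats the baseline
            eligible = list(zip(population, fitnesses))
        inds, fits = zip(*eligible)
        return random.choices(inds, weights=fits, k=1)[0], baseline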
Initialization    There are lots of ways to initialize the ruleset. In SAMUEL three are common:

Create a set of random rules.

Seed the rules in each individual with rules you believe to be helpful to the agent.

Perform adaptive initialization. Each individual starts with a set of rules that are totally
general, one for each possible action:

    In all cases → go up        In all cases → go down        ...etc...

Run this for a while to get an idea of the strength of each rule. As you're doing this, apply
a fair number of Rule Specialization operators, as described earlier, to make these general
rules more specific. The idea is to gracefully let SAMUEL find good initial operators based on
a bit of initial experience in a sandbox.
Self-Adaptive Operators    SAMUEL has an optional gimmick for adjusting the probability that
various mutation operators will occur (particularly the Lamarckian ones). Each individual contains
its own operator probabilities. Let's say that P(O_i, I_j) is the probability that operator O_i is
performed on individual I_j. This probability is stored in individual I_j itself, and children receive
the same set of probabilities that their parents had. Each timestep all the operator probabilities in
all individuals are decreased like this:

    P(O_i, I_j) ← (1 − β) P(O_i, I_j)

... where 0 ≤ β ≤ 1. This eventually pushes the probabilities towards 0. But when an individual
is mutated or crossed over using an operator, the probability of that operator is increased for the
resulting individual(s), perhaps something like:

    P(O_i, I_j) ← (1 − β) P(O_i, I_j) + β

This pushes this probability, eventually, towards 1.

This is an example of self-adaptive operators, where the individuals contain their own mutation
and crossover probabilities. Self-adaptive operators have been around for a long time, since early
work in Evolution Strategies. But in my personal experience they're finicky.171 I wouldn't bother.
10.4 Michigan Approach Learning Classifier Systems

After John Holland172 developed the Genetic Algorithm around 1973, he turned his attention to
a related topic: how to use an evolutionary process to discover a set of rules which describe, for
each situation an agent finds himself in, what to do in that situation. I think Holland pitched
it more generally than this, as a general machine learning classifier rather than one used for
agent actions, and this is where the name Learning Classifier Systems (LCS) came from. Rather
than having individuals being whole solutions (rulesets), Holland envisioned a population of
individual rules which would fight for survival based on how effective they were in helping the
classifier as a whole. Thus, like Ant Colony Optimization, Learning Classifier Systems have a very
one-population coevolutionary feel to them.

171 My proposed dissertation work was originally going to be using self-adaptive operators. Let's just say I wound up
doing something else.

172 John Holland is at the University of Michigan. Hence the name. Holland's earliest work on the topic is John Holland,
1975, Adaptation in Natural and Artificial Systems, University of Michigan Press. But the notion of learning classifier
systems wasn't formalized until a later paper, John Holland, 1980, Adaptive algorithms for discovering and using
general patterns in growing knowledge bases, International Journal of Policy Analysis and Information Systems, 4(3), 245-268.
Holland's original formulation was somewhat baroque. Since then, Stewart Wilson has created
a streamlined version called the Zeroth Level Classifier System (ZCS).173 ZCS is a steady-state
evolutionary computation technique. The evolutionary computation loop iterates only occasionally.
Instead, most of the time is spent updating the fitness values of the entire generation based on their
collective participation, as rules, in a reinforcement learning setting. Then after a while a few new
rules are bred from the population and reinserted into it, displacing some existing low-fitness rules.

ZCS maintains a population of sparse if→then rules. Each rule is associated with a current
fitness which reflects the utility of the rule. To test the rules, the agent is placed in a starting state,
and then begins performing actions chosen from the population. This is done by first selecting
all the rules which cover the current state of the agent. This set of rules forms the match set M. If
there is more than one such rule, ZCS's arbitration scheme selects from among the match set using
a fitness-based selection method (traditionally fitness-proportionate selection).

One way in which ZCS differs from SAMUEL is that it expects a complete match rather than
allowing partial matches. Match scores are never used. If the match set is in fact empty (not
a single rule covers the current state), ZCS creates a random rule which covers the state (and
possibly others), and which has a random action. The fitness of the rule is set to the average fitness
of the population at present. ZCS then marks an existing rule for death in the population and
replaces it with this new rule. Rules are usually marked for death via a fitness-based selection
method, tending to select less-fit rules more often.
Once ZCS has a winning rule, it extracts the action from the head of the rule, then creates a
subset of the match set called the action set A, consisting of all the rules whose head was also that
action. The action is performed, and the agent receives a reward r and transitions to a new state s′,
at which point ZCS constructs the next match set M′ and action set A′. Each rule A_i ∈ A then has
its fitness updated as:

    Fitness(A_i) ← (1 − α) Fitness(A_i) + α (1/||A||) (r + γ Σ_{A′_j ∈ A′} Fitness(A′_j))        (4)

Look familiar? Hint: let's define a function G, consisting of the combined fitness (utility) of
all the rules in the present action set A. That is, G(A) = Σ_i Fitness(A_i). Equation 4 above would
result in the equivalent equation for G:

    G(A) ← (1 − α) G(A) + α (1/||A||) (r + γ G(A′))

Compare this to Equation 2. Unlike SAMUEL, ZCS updates utility (ZCS's rule fitness) in basically
a Q-learning fashion. ZCS also punishes rules for not getting picked (that is, the rules in M − A).
Let B = M − A. Then the fitness of each rule B_i ∈ B is decreased as:

    Fitness(B_i) ← β Fitness(B_i)

This has basically the same effect as evaporation did in Ant Colony Optimization (see Section
8.3.1). β can be a value between 0 and 1, and shouldn't be very large. All told, the algorithm for
updating fitnesses in the match set is:

173 Introduced in Stewart Wilson, 1994, ZCS: A zeroth level classifier system, Evolutionary Computation, 2(1), 1-18.
Algorithm 125 Zeroth Classifier System Fitness Updating
1: M ← previous match set
2: M′ ← next match set        ▷ Unused. We keep it here to be consistent with Algorithm 131.
3: A ← previous action set
4: A′ ← next action set
5: r ← reward received by previous action
6: α ← learning rate        ▷ 0 < α < 1. Make it small.
7: β ← evaporation constant        ▷ 0 < β < 1. Make it large.
8: γ ← cut-down constant        ▷ 0 < γ < 1. 0.5 is fine.
9: for each A_i ∈ A do
10:     Fitness(A_i) ← (1 − α) Fitness(A_i) + α (1/||A||) (r + γ Σ_{A′_j ∈ A′} Fitness(A′_j))
11: B ← M − A
12: for each B_i ∈ B do
13:     Fitness(B_i) ← β Fitness(B_i)
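Here is a minimal sketch of Algorithm 125 in Python. Rules are assumed to be simple objects
carrying a fitness field, and the parameter values are placeholders rather than recommendations.

    # A sketch of ZCS fitness updating (Algorithm 125). Rules are assumed to be
    # objects with a 'fitness' field; parameter values are placeholders.

    def zcs_update_fitnesses(M, A, A_next, r, alpha=0.2, beta=0.9, gamma=0.5):
        payoff = r + gamma * sum(rule.fitness for rule in A_next)
        for rule in A:                        # rules whose action was just performed
            rule.fitness = ((1 - alpha) * rule.fitness
                            + alpha * (1.0 / len(A)) * payoff)
        for rule in set(M) - set(A):          # matched but not picked: evaporate
            rule.fitness = beta * rule.fitness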
Because ZCS uses fitness as utility, when ZCS produces children as a result of steady-state
breeding, it needs to assign them an initial fitness: otherwise they would never even be considered
for match sets. To this end, half the fitness of each parent is removed from the parent and added into
each child (because we want to approximately maintain the sum total fitness in our population):
Algorithm 126 Zeroth Classifier System Fitness Redistribution
1: P_a, P_b ← parents
2: C_a, C_b ← children
3: crossedover ← are the children the result of crossover?
4: if crossedover = true then
5:     Fitness(C_a), Fitness(C_b) ← (1/4) ( Fitness(P_a) + Fitness(P_b) )
6: else
7:     Fitness(C_a) ← (1/2) Fitness(P_a)
8:     Fitness(C_b) ← (1/2) Fitness(P_b)
9: Fitness(P_a) ← (1/2) Fitness(P_a)
10: Fitness(P_b) ← (1/2) Fitness(P_b)
Now we can examine the top-level ZCS loop. The loop has two parts:

1. We update the utilities (fitnesses) of the rules by testing them with the agent: we repeatedly
create a match set, pick an action from the match set, determine the action set, perform the
action and receive reward, and update the fitness values of the rules in the match set. Fitness
values are updated with Algorithm 125.

2. After doing this some n times, we then perform a bit of steady-state breeding, producing
a few new rules and inserting them into the population. The fitness of the new children is
initialized using Algorithm 126.
Algorithm 127 The Zeroth Level Classifier System (ZCS)
1: popsize ← desired population size
2: n ← agent runs per evolutionary loop        ▷ Make it large.
3: c ← probability of crossover occurring        ▷ Make it small.
4: P ← Generate Initial Population, given popsize        ▷ See Text
5: repeat        ▷ First we do the reinforcement stage to build up fitness values
6:     for n times do
7:         s ← an initial state of agent
8:         r ← 0
9:         M ← {}
10:        A ← {}
11:        repeat
12:            M′ ⊆ P ← match set for state s        ▷ That is, all P_i ∈ P which cover s
13:            if M′ is empty then        ▷ Oops, nothing's covering s, make something
14:                M′ ← { Create New Individual Covering s }        ▷ See Text
15:                if ||P|| = popsize then        ▷ We're full, delete someone
16:                    P ← P − {SelectForDeath(P)}
17:                P ← P ∪ M′
18:            a ← best action from M′        ▷ The action of the winner of SelectWithReplacement(M′)
19:            A′ ⊆ M′ ← action set for action a        ▷ That is, all M′_j ∈ M′ whose action is a
20:            UpdateFitnesses with M, M′, A, A′ and r
21:            Have agent perform action a, resulting in new reward r and transitioning to new state s
22:            M ← M′
23:            A ← A′
24:        until the agent's life is over
25:        UpdateFitnesses with M, M′, A, {} and r        ▷ Final iteration. Note M = M′, and A′ = {}
26:     Parent P_a ← SelectWithReplacement(P)        ▷ And now we begin the breeding stage
27:     Parent P_b ← SelectWithReplacement(P)
28:     Child C_a ← Copy(P_a)
29:     Child C_b ← Copy(P_b)
30:     if c ≥ random number chosen uniformly from 0.0 to 1.0 then
31:         C_a, C_b ← Crossover(C_a, C_b)
32:         RedistributeFitnesses(P_a, P_b, C_a, C_b, true)
33:     else
34:         RedistributeFitnesses(P_a, P_b, C_a, C_b, false)
35:     C_a ← Mutate(C_a)
36:     C_b ← Mutate(C_b)
37:     if ||P|| = popsize then        ▷ Make room for at least 2 new kids
38:         P ← P − {SelectForDeath(P)}
39:     if ||P|| + 1 = popsize then
40:         P ← P − {SelectForDeath(P)}
41:     P ← P ∪ {C_a, C_b}
42: until we have run out of time
43: return P
The parameter n specifies the number of fitness updates performed before another iteration of
steady-state evolution. If n is too small, the algorithm starts doing evolution on sketchy information,
and becomes unstable. If n is too large, the algorithm wastes time getting very high-quality fitness
information when it could be spending it searching further. Usually, n needs to be large.

There are various ways to generate the initial population. One obvious way is to fill P with
popsize random individuals, each assigned a small initial fitness (like 1). Another common approach
is to keep P initially empty. P will then fill with individuals generated on-the-fly as necessary.
Given a state s, ZCS creates an individual on-the-fly at random with the constraint that its
condition must cover s. The fitness of this individual is typically set to the population mean; or if
there is no population yet, then it is set to an arbitrary initial fitness (again something small, like 1).

In ZCS, crossover is optional. This is of course the case in many algorithms, but in ZCS it's
particularly important because crossover is often highly destructive. The parameter c reflects how
often crossover is done in creating children (usually not often). If crossover occurs, the redistributor
is informed so as to average out the fitness values between them.

The ZCS algorithm is the first metaheuristic covered so far which doesn't return a best result:
rather the entire population is the result. The population itself is the solution to the problem.
The XCS Algorithm    Building on ZCS, Stewart Wilson developed a next-generation version which
he called XCS.174 XCS has since gone through a number of iterations, including additions from Pier
Luca Lanzi and Martin Butz. Basically XCS differs from ZCS in four primary places:

How the action is selected

How UpdateFitnesses is performed

How SelectWithReplacement is done in the evolutionary portion of the algorithm

How RedistributeFitnesses is performed

The big change is that XCS has four measures of quality, rather than just fitness:

XCS has an explicit measure of rule utility175 separate from fitness. It is essentially a rough
notion of Q-value, and, when weighted by fitness, is used to select actions.

XCS maintains a rule utility error measure, a historical estimate of the difference between the
current utility of the rule and the current utility of the rules in the next time step. This is used
in calculating the fitness, not in selecting actions. We'll use the 1 − α trick to fold in newer
results, so recent utility errors count more than older ones.

From the rule's utility error, XCS derives an accuracy measure: lower error, higher accuracy.
Below a certain amount of error, the accuracy is thresholded to 1 (perfect).

XCS's rule fitness isn't utility, but an estimate of the historical accuracy of the rule. Beyond its
role in evolution, fitness is used in weighting the utility when determining action selection.

174 XCS doesn't appear to stand for anything! The earliest version of the algorithm appeared in Stewart Wilson, 1995,
Classifier fitness based on accuracy, Evolutionary Computation, 3(2), 149-175.
XCS is complex. For a more accurate description of the algorithm, see Martin Butz and Stewart Wilson, 2001, An
algorithmic description of XCS, in Advances in Learning Classifier Systems, volume 1996/2001, pages 267-274, Springer.
Much of the code in these lecture notes was derived from this paper. Note that my version has some simplifying syntactic
changes (no prediction array for example) but it should operate the same (knock on wood).

175 What I am calling utility and utility error of a rule, XCS calls the prediction and prediction error.
Picking an Action   XCS picks an action from the match set M by first determining the best action in M. To do this it gathers all the rules in M which propose the same action. XCS then adds up their utilities, probabilistically weighted by their fitnesses (fitter rules get to contribute more to the utility of the action).
Algorithm 128 XCS Fitness-Weighted Utility of an Action
1: M ← match set
2: N_i ← action
3: R ⊆ M ← all rules in M whose heads are N_i
4: if Σ_{r ∈ R} Fitness(r) ≠ 0 then
5:    return ( Σ_{r ∈ R} Utility(r) × Fitness(r) ) / ( Σ_{r ∈ R} Fitness(r) )
6: else
7:    return 0
Now we can determine which of the actions is the best one:
Algorithm 129 XCS Best Action Determination
1: M ← match set
2: N ← all actions which appear in the head of any rule in M
3: Best ← ☐
4: bestc ← 0
5: for each action N_i ∈ N do
6:    c ← XCS Fitness-Weighted Utility of action N_i
7:    if Best = ☐ or c > bestc then
8:       Best ← N_i
9:       bestc ← c
10: return Best
Now we either pick a random action (with ε probability), or we choose our best action. This approach should look familiar: it's once again ε-greedy action selection, just like in Q-learning.^176
Algorithm 130 XCS Action Selection
1: M ← match set
2: ε ← exploration probability      ⊳ 0 ≤ ε ≤ 1
3: N ← all actions which appear in the head of any rule in M
4: if ε ≥ random number chosen uniformly from 0.0 to 1.0 inclusive then
5:    return a member of N chosen uniformly at random
6: else
7:    return the action provided by XCS Best Action Determination given M and N
176 This was first proposed for XCS in Pier Luca Lanzi, 1999, An analysis of generalization in the XCS classifier system, Evolutionary Computation, 7(2), 125-149.
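As a concrete illustration, here is a minimal Python sketch of Algorithms 128-130. It assumes each rule is an object with utility, fitness, and action attributes; those names are mine, purely for illustration, and not part of the XCS specification:

import random

def fitness_weighted_utility(match_set, action):          # Algorithm 128
    rules = [r for r in match_set if r.action == action]
    total_fitness = sum(r.fitness for r in rules)
    if total_fitness == 0:
        return 0.0
    return sum(r.utility * r.fitness for r in rules) / total_fitness

def best_action(match_set):                                # Algorithm 129
    actions = {r.action for r in match_set}
    return max(actions, key=lambda a: fitness_weighted_utility(match_set, a))

def select_action(match_set, epsilon):                     # Algorithm 130: epsilon-greedy
    actions = list({r.action for r in match_set})
    if random.random() < epsilon:                          # explore
        return random.choice(actions)
    return best_action(match_set)                          # exploit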
Updating Fitness   During testing we no longer have just a fitness to update: we'll need to update all three elements: the utility, the utility error, and the fitness. The utility is updated Q-style:

    Utility(A_i) ← (1 − β) Utility(A_i) + β (r + γ b)

What is b? It's the XCS Fitness-Weighted Utility (Algorithm 128) of the best action (Algorithm 129) the next time around, so you'll need to delay fitness updating of this iteration until you have gone one more iteration. Again, compare this to Equation 2.
The utility error is updated similarly, by rolling in the new error, computed by subtracting the utility from the likely best utility of the next action set:

    UtilityError(A_i) ← (1 − β) UtilityError(A_i) + β ||b − Utility(A_i)||

To compute the fitness, we first convert the error into an accuracy a_i. If the error is less than or equal to some small value e, the accuracy a_i is considered to be perfect, that is, 1. Otherwise, the accuracy a_i is set to

    a_i = α ( e / UtilityError(A_i) )^ν

Each rule's fitness then rolls in its accuracy, normalized by the total accuracy Σ_{A_j ∈ A} a_j over the action set, as shown in the last line of Algorithm 131.
Utility, Utility Error, and Fitness are initially set to something small, like 1. There's no evaporation. Here's the algorithm in full:
Algorithm 131 XCS Fitness Updating
1: M ← previous match set      ⊳ Note: for the final iteration of the ZCS/XCS top loop, M = M′
2: M′ ← next match set
3: A ← previous action set
4: A′ ← next action set      ⊳ Unused. We keep it here to be consistent with Algorithm 125.
5: r ← reward received by previous action
6: e ← the highest error in utility that should still warrant full fitness
7: β ← learning rate      ⊳ 0 < β < 1. Make it small.
8: ν ← fitness adjustment parameter      ⊳ ν > 1
9: γ ← cut-down constant      ⊳ 0 < γ < 1. 0.5 is fine.
10: α ← fitness adjustment parameter      ⊳ Presumably 0 ≤ α ≤ 1. I'm guessing 1 is fine.
11: n ← the action returned by XCS Best Action Selection on M′
12: b ← the XCS Fitness-Weighted Utility of action n
13: ~a ← ⟨a_1, ..., a_||A||⟩ vector of accuracies, one per rule in A
14: for each rule A_i ∈ A do
15:    Utility(A_i) ← (1 − β) Utility(A_i) + β (r + γ b)
16:    UtilityError(A_i) ← (1 − β) UtilityError(A_i) + β ||b − Utility(A_i)||
17:    if UtilityError(A_i) > e then      ⊳ Convert error into accuracy (big error, low accuracy)
18:       a_i ← α ( e / UtilityError(A_i) )^ν
19:    else
20:       a_i ← 1      ⊳ Why it's not α I have no idea
21: for each rule A_i ∈ A do
22:    Fitness(A_i) ← (1 − β) Fitness(A_i) + β a_i / Σ_{A_j ∈ A} a_j      ⊳ Normalize the accuracies
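The update in Algorithm 131 is compact enough to sketch directly. The Python fragment below is a rough rendering under my own assumptions: rules are objects with utility, utility_error, accuracy, and fitness attributes (hypothetical names), and b has already been computed as the fitness-weighted utility of the next match set's best action:

def xcs_update_fitnesses(action_set, r, b, beta, gamma, nu, alpha, e):
    # Roll the discounted reward into each rule's utility and utility error.
    for rule in action_set:
        rule.utility = (1 - beta) * rule.utility + beta * (r + gamma * b)
        rule.utility_error = ((1 - beta) * rule.utility_error
                              + beta * abs(b - rule.utility))
        if rule.utility_error > e:                    # big error, low accuracy
            rule.accuracy = alpha * (e / rule.utility_error) ** nu
        else:
            rule.accuracy = 1.0                       # error under threshold: perfect
    total = sum(rule.accuracy for rule in action_set)
    for rule in action_set:                           # normalize accuracies into fitness
        rule.fitness = (1 - beta) * rule.fitness + beta * rule.accuracy / total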
Redistributing Fitness   In addition to fitness, XCS now also needs to redistribute utility and utility error. And unlike ZCS, rather than redistribute fitness from the parents, XCS just cuts down the fitness of the child. Specifically:
Algorithm 132 XCS Fitness Redistribution
1: P_a, P_b ← parents
2: C_a, C_b ← children
3: δ ← fitness cut-down      ⊳ Use 0.1
4: crossedover ← are the children the result of crossover?
5: if crossedover = true then
6:    Fitness(C_a), Fitness(C_b) ← δ (1/4)(Fitness(P_a) + Fitness(P_b))
7:    Utility(C_a), Utility(C_b) ← (1/4)(Utility(P_a) + Utility(P_b))
8:    UtilityError(C_a), UtilityError(C_b) ← (1/4)(UtilityError(P_a) + UtilityError(P_b))
9: else
10:   Fitness(C_a) ← δ (1/2) Fitness(P_a)
11:   Fitness(C_b) ← δ (1/2) Fitness(P_b)
12:   Utility(C_a) ← (1/2) Utility(P_a)
13:   Utility(C_b) ← (1/2) Utility(P_b)
14:   UtilityError(C_a) ← (1/2) UtilityError(P_a)
15:   UtilityError(C_b) ← (1/2) UtilityError(P_b)
Performing SelectWithReplacement   SelectWithReplacement is not performed over the whole population as it was in ZCS. Rather, it's just performed over the action set. That is, lines 28 and 29 of Algorithm 127 should look like this:

    Parent P_a ← SelectWithReplacement(A)
    Parent P_b ← SelectWithReplacement(A)
Other Gizmos   To this basic algorithm, XCS normally adds some other gizmos. First, there's the notion of microclassifiers. XCS considers each individual not just as one rule, but actually as a whole lot of rules that are exactly the same. This is done by including with each individual a count variable which indicates how many copies of the rule are considered to be in the individual. When we do fitness updating (Algorithm 131), the very last line includes this count variable so that each of those embedded rules gets a voice:

    Fitness(A_i) ← (1 − β) Fitness(A_i) + β a_i Count(A_i) / Σ_{A_j ∈ A} a_j Count(A_j)

Counts also figure when we're creating new rules or selecting rules for deletion. If we create a new rule, we check first to see if it's identical to an existing rule. If so, the existing rule has its count increased, and the new rule isn't actually added to the population. When we delete a rule, and its count is higher than 1, we just decrease the count and retain the rule; only when its count is 1 do we delete it. Note that this could result in the population size changing a bit. This gizmo is largely a mechanism to cut down on the total number of classifiers, but it doesn't really affect the results.
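A small Python sketch of that count bookkeeping (the rule attributes and helper names are mine, just for illustration):

def add_rule(population, new_rule):
    # If an identical rule already exists, bump its count instead of adding a duplicate.
    for rule in population:
        if rule.condition == new_rule.condition and rule.action == new_rule.action:
            rule.count += 1
            return
    new_rule.count = 1
    population.append(new_rule)

def delete_rule(population, rule):
    # Rules with count > 1 just lose one embedded copy; otherwise remove them outright.
    if rule.count > 1:
        rule.count -= 1
    else:
        population.remove(rule)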
Because initial fitness and utility is arbitrarily set, XCS also grants new rules a bit of leeway, to give them a chance to get their utilities and utility errors ramped up. This is done by maintaining an experience counter for each rule which is incremented each time that rule appears in an action set. The learning rate is decreased little by little until the experience exceeds 1/β, at which point the learning rate is β thereafter.
Putting this all together, we can extend the XCS Fitness Updating algorithm (Algorithm 131) to include these additional gizmos:
Algorithm 133 XCS Fitness Updating (Extended)
1: M ← previous match set      ⊳ Note: for the final iteration of the ZCS/XCS top loop, M = M′
2: M′ ← next match set
3: A ← previous action set
4: A′ ← next action set      ⊳ Unused. We keep it here to be consistent with Algorithm 125.
5: r ← reward received by previous action
6: e ← the highest error in utility that should still warrant full fitness
7: β ← learning rate      ⊳ 0 < β < 1. Make it small.
8: ν ← fitness adjustment parameter      ⊳ ν > 1
9: γ ← cut-down constant      ⊳ 0 < γ < 1. 0.5 is fine.
10: α ← fitness adjustment parameter      ⊳ Presumably 0 ≤ α ≤ 1. I'm guessing 1 is fine.
11: n ← the action returned by XCS Best Action Selection on M′
12: b ← the XCS Fitness-Weighted Utility of action n
13: ~a ← ⟨a_1, ..., a_||A||⟩ vector of accuracies, one per rule in A
14: for each rule A_i ∈ A do
15:    Experience(A_i) ← Experience(A_i) + 1
16:    β′ ← max( 1 / Experience(A_i), β )
17:    Utility(A_i) ← (1 − β′) Utility(A_i) + β′ (r + γ b)
18:    UtilityError(A_i) ← (1 − β′) UtilityError(A_i) + β′ ||b − Utility(A_i)||
19:    if UtilityError(A_i) > e then      ⊳ Convert error into accuracy (big error, low accuracy)
20:       a_i ← α ( e / UtilityError(A_i) )^ν
21:    else
22:       a_i ← 1      ⊳ Why it's not α I have no idea
23: for each rule A_i ∈ A do
24:    Fitness(A_i) ← (1 − β) Fitness(A_i) + β a_i Count(A_i) / Σ_{A_j ∈ A} a_j Count(A_j)
The big changes are on lines 15, 16, and 24.
Finally, XCS has optional subsumption procedures: it checks for a subsumed rule whose covered states are entirely covered by some other rule which is both reasonably fit and sufficiently old. The goal is, once again, to force diversity and eliminate redundancy. Subsumption could show up in two places. First, when a brand-new rule is created, XCS may refuse to include it in the population if it's subsumed by some other rule; instead, the subsuming rule has its count increased by one. Second, after building an action set A, XCS could check A to see if any rules subsume any others. If so, the subsumed rules are removed from the population.
10.5 Regression with the Michigan Approach

And now for a twist. Ordinarily algorithms like SAMUEL, ZCS, and XCS (and Q-learning) are used to find a policy π(s) → a which produces the right action (a) for a given state s for an agent under various Markovian assumptions. But there's another, distantly related use for XCS: regression. That is, fitting a real-valued function y(s) to various states s.
The most common algorithm in this vein is XCSF, which hijacks XCS to do real-valued regression: the "states", so to speak, are sample points drawn from a multidimensional real-valued space, and the "actions" are real-valued numbers.^177 y(s) is the function which maps the "states" to "actions". I put everything in quotes because although XCSF uses XCS to do its dirty work, it's not really learning a state-action policy at all. XCSF is not interested in agents and Markovian state-to-state transitions. Instead, it's just trying to learn y(s).^178
As a result, XCSF makes some big simplifications. XCSF doesn't have a utility per se: instead, each rule M_i in the match set M for a given s ∈ S simply makes a prediction,^179 or guess, of y(s) which we will call p(M_i, s) (this is essentially the rule's action). XCSF's estimate of y(s) is the fitness-weighted average prediction among all the rules in the match set. Rules are gradually modified so that their predictions will more closely match y(s) in the future, and so XCSF's estimate will as well.
In XCSF each state s ∈ S (and for consistency with XCS I'll keep referring to it as s) is represented internally by a real-valued multidimensional point: let's call it ~x. The condition part of a rule will be a region in this space; and the action will be some function over this region which explains how the rule predicts the value of those s which fall in this region. One classical way to define a rule in XCSF is as a real-valued box region with a gradient running through it from one corner to the other. The gradient is the action. We define the rule in the form:

    { ~l = ⟨l_1, ..., l_n⟩,  ~u = ⟨u_1, ..., u_n⟩,  ~w = ⟨w_0, w_1, ..., w_n⟩ }

Notice that ~w has an extra value w_0 but ~l and ~u do not. This rule defines a box with a lower corner at ~l and an upper corner at ~u. The rule predicts that y(~l) = w_0, and that y(~u) = w_0 + Σ_{i=1}^n w_i (u_i − l_i).
In general a point ~x within this box is predicted to have a y(~x) value of:^180

    y(~x) = w_0 + Σ_{i=1}^n w_i (x_i − l_i)
To be consistent with previous XCS notation we'll define the prediction abstractly as p(M_j, s), where M_j is the rule in question, and s is an input point. In this case, M_j is {~l, ~u, ~w} and s is ~x.
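To make the box-with-gradient representation concrete, here is a minimal Python sketch of such a rule and its prediction p(M_j, s). The class and attribute names are my own, not XCSF's:

class BoxRule:
    def __init__(self, lower, upper, weights):
        self.lower = lower          # ~l = (l_1, ..., l_n)
        self.upper = upper          # ~u = (u_1, ..., u_n)
        self.weights = weights      # ~w = (w_0, w_1, ..., w_n), note the extra w_0

    def covers(self, x):
        # The rule matches any point inside its box.
        return all(l <= xi <= u for l, xi, u in zip(self.lower, x, self.upper))

    def predict(self, x):
        # p(M_j, s): a linear gradient anchored at the box's lower corner.
        w0, ws = self.weights[0], self.weights[1:]
        return w0 + sum(w * (xi - l) for w, xi, l in zip(ws, x, self.lower))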
Given this representation, XCSF estimates y(s) using a piecewise linear function: it approximates y(s) using a bunch of overlapping linear regions, one per rule. Multiple rules may cover a given point s (these are the match set for s), and in this case the prediction of y(s) is the fitness-weighted average of the p(...) values for each of these rules. Which leads us to...
177 This also makes the "C" in XCSF a misnomer, though inexplicably the XCSF folks still refer to all this as "classification"! A good introduction to XCSF may be found in Stewart W. Wilson, 2002, Classifiers that approximate functions, Natural Computing, 1(2-3), 211-234. Like XCS, XCSF doesn't seem to stand for anything.
178 This isn't to say you couldn't retain these features. See for example Pier Luca Lanzi, Daniele Loiacono, Stewart W. Wilson, and David E. Goldberg, 2005, XCS with computed prediction in continuous multistep environments, in Congress on Evolutionary Computation, pages 2032-2039.
179 Recall from footnote 175 that XCS used the term prediction in a similar way: but recall that I opted for utility to be consistent with reinforcement learning. But here, stripped of agents and states, the term prediction is a good choice.
180 In early papers, y(~x) = w_0 + Σ_{i=1}^n w_i x_i (no l_i). This works but boxes far from the origin will be very sensitive to ~w.
Algorithm 134 XCSF Fitness-Weighted Collective Prediction
1: M ← match set
2: if Σ_{M_i ∈ M} Fitness(M_i) ≠ 0 then
3:    return ( Σ_{M_i ∈ M} p(M_i, s) × Fitness(M_i) ) / ( Σ_{M_i ∈ M} Fitness(M_i) )
4: else
5:    return 0
Compare to Algorithm 128 (XCS Fitness-Weighted Utility of an Action). Once our population has converged to a good set of rules, now we have a way of interpreting them as a function which predicts y(s) for any point s. Of course there are other ways of representing rules besides as boxes with linear gradients. For example, you could represent them as hyperellipsoids with radial gradients inside them. Or you could use neural networks, or a tile coding of some sort.
Eventually you'll want to use your learned XCSF model in the real world. But the model will probably be underspecified, and have regions that it doesn't cover: what if the match set M is empty? Returning 0 in this case isn't very satisfying. Instead, XCSF folks suggest that you pick a value θ > 0. If during usage ||M|| < θ, then M gets bulked up with the θ − ||M|| rules closest, in some measure, to the testing point s, but not already in M. This can go for XCS and ZCS too.
The intuition behind XCSF is to adapt its rules so as to concentrate more rules on the complex parts of the space. It does this through evolution: but during Fitness Updating it also applies a special gradient descent operation which directly modifies a rule so that it is more likely to produce the right prediction next time. This works as follows. When a rule doesn't predict the correct value during XCSF's fitness updating, the rule is revised a bit so that next time it's more likely to be closer to the correct value. Recall that our s is represented by the point ~x. Our rule is {~l, ~u, ~w}. Let r = y(~x). Recall that the rule's prediction of r is w_0 + Σ_{i=1}^n w_i (x_i − l_i). So the difference b between the correct value r and the predicted value is simply b = r − w_0 − Σ_{i=1}^n w_i (x_i − l_i).
Now we need an equation for updating ~w so that b is lessened next time. Let's use the delta rule^181 from neural networks:

    ~w ← ~w + β ⟨ b, b(x_1 − l_1), ..., b(x_n − l_n) ⟩
Now to the fitness. Recall that XCS didn't base fitness on utility, but rather utility error, a historical average estimate of how the utility differed from the utility at the next state. But we don't have a next state any more, nor any notion of utility any more: we're not doing state-to-state transitions. Instead, XCSF just keeps a historical average estimate of the error b using the 1 − β trick. We'll call this the Prediction Error, but note that it's used identically to the old Utility Error in computing fitness via accuracy (in Algorithms 131 and 133).

    PredictionError(M_i) ← (1 − β) PredictionError(M_i) + β ||b||

181 Where did this magic rule come from? It's simple. We want to minimize the error: to do this we need some error function E which is zero when b = 0 and is more and more positive as b gets further from 0. Because it makes the math work out nicely, let's use E = (1/2) b² = (1/2) (r − w_0 − Σ_{i=1}^n w_i (x_i − l_i))². We want to update ~w so as to reduce E, and will use gradient descent to do it (recall Algorithm 1). Thus ~w ← ~w − β ∇E(~w). This means that each w_i will be updated as w_i ← w_i − β ∂E/∂w_i. Taking the derivative of E with respect to w_0 gets us ∂E/∂w_0 = (r − w_0 − Σ_{i=1}^n w_i (x_i − l_i)) × (−1) = −b. Okay, that was weirdly easy. For any other w_j, ∂E/∂w_j = (r − w_0 − Σ_{i=1}^n w_i (x_i − l_i)) × (−(x_j − l_j)) = −b (x_j − l_j). Since we're multiplying everything by −β, thus ~w ← ~w + β ⟨ b, b(x_1 − l_1), ..., b(x_n − l_n) ⟩. Ta da!
While the delta rule is easy to implement, much of the XCSF community has since moved to estimation using the more complex recursive least squares, as it's considered stabler. For more information, see Pier Luca Lanzi, Daniele Loiacono, Stewart W. Wilson, and David E. Goldberg, 2007, Generalization in the XCSF classifier system: Analysis, improvement, and extension, Evolutionary Computation, 15(2), 133-168.
At this point the algorithm below should make more sense. Compare to Algorithm 131:
Algorithm 135 XCSF Fitness Updating
1: M ← match set
2: s ← input data point
3: r ← desired output for the input data point
4: β ← learning rate      ⊳ 0 < β < 1. Make it small.
5: ν ← fitness adjustment parameter      ⊳ ν > 1
6: α ← fitness adjustment parameter      ⊳ Presumably 0 ≤ α ≤ 1. I'm guessing 1 is fine.
7: ~a ← ⟨a_1, ..., a_||M||⟩ vector of accuracies, one per rule in M
8: for each rule M_i ∈ M do
9:    ⟨x_1, ..., x_n⟩ ← the point ~x represented by s
10:   {⟨l_1, ..., l_n⟩, ⟨u_1, ..., u_n⟩, ⟨w_0, ..., w_n⟩} ← lower points, upper points, weights in M_i      ⊳ Note w_0
11:   b ← r − (w_0 + Σ_{i=1}^n w_i (x_i − l_i))      ⊳ Error between correct value and prediction
12:   ~w ← ~w + β ⟨ b, b(x_1 − l_1), ..., b(x_n − l_n) ⟩      ⊳ Delta rule
13:   Revise M_i to new ~w values
14:   PredictionError(M_i) ← (1 − β) PredictionError(M_i) + β ||b||
15:   if PredictionError(M_i) > e then      ⊳ Convert error into accuracy (big error, low accuracy)
16:      a_i ← α ( e / PredictionError(M_i) )^ν
17:   else
18:      a_i ← 1
19: for each rule M_i ∈ M do
20:   Fitness(M_i) ← (1 − β) Fitness(M_i) + β a_i / Σ_{M_j ∈ M} a_j      ⊳ Normalize the accuracies
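Here is a rough Python sketch of Algorithm 135 for the box-and-gradient rules above, reusing the hypothetical BoxRule class from the earlier sketch; e is the error threshold carried over from Algorithm 131, and the attribute names are mine:

def xcsf_update_fitnesses(match_set, x, r, beta, nu, alpha, e):
    for rule in match_set:
        b = r - rule.predict(x)                       # error between truth and prediction
        # Delta rule: nudge w_0 and each w_i toward the correct value.
        rule.weights[0] += beta * b
        for i, (xi, li) in enumerate(zip(x, rule.lower), start=1):
            rule.weights[i] += beta * b * (xi - li)
        rule.prediction_error = ((1 - beta) * rule.prediction_error
                                 + beta * abs(b))
        if rule.prediction_error > e:
            rule.accuracy = alpha * (e / rule.prediction_error) ** nu
        else:
            rule.accuracy = 1.0
    total = sum(rule.accuracy for rule in match_set)
    for rule in match_set:                            # normalize accuracies into fitness
        rule.fitness = (1 - beta) * rule.fitness + beta * rule.accuracy / total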
Evolution Details   Selection, Crossover, and Mutation are basically the same as in XCS. However you decide to represent your rules (as an array of numbers, say), you'll want to take care that crossover and mutation don't produce invalid rule conditions. XCSF can also use XCS's fitness redistribution (Algorithm 132), though obviously utility doesn't exist any more, and utility error should be changed to prediction error.
Initialization is more or less the same as in XCS or ZCS (see the text discussing Algorithm 127 for reminders), though XCSF usually generates initial populations by starting with an empty population rather than a fully randomly-generated one. Also, because the population starts out empty, XCSF usually adds new individuals in response to an uncovered state s. To do this, XCSF traditionally defines the box defining the condition of the rule as follows. Let's say that s is the point ~x in the space. For each dimension k of the box, XCSF creates two random numbers i_k and j_k, each between 0 and some maximum value q (which you have to define). Then the box is defined as running from the lower point ⟨x_1 − i_1, x_2 − i_2, ..., x_n − i_n⟩ to the upper point ⟨x_1 + j_1, x_2 + j_2, ..., x_n + j_n⟩.
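A quick Python sketch of that covering step, under the same hypothetical BoxRule representation; q, the initial fitness, and the zero weights are illustrative choices of mine:

import random

def cover(x, q, initial_fitness=1.0):
    # Build a random box that is guaranteed to contain the uncovered point x.
    lower = [xi - random.uniform(0, q) for xi in x]
    upper = [xi + random.uniform(0, q) for xi in x]
    rule = BoxRule(lower, upper, weights=[0.0] * (len(x) + 1))
    rule.fitness = initial_fitness
    rule.prediction_error = initial_fitness
    return rule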
Now we're ready to describe the main loop. It's basically ZCS, but with a slightly different inner loop because rather than dealing with action sets, actions, rewards, state transitions, and so on, XCSF picks a state s, determines the Match Set for it, computes and reports a collective predicted value, and then revises the rules and updates their fitnesses. There is no action set at all.
Here's the revised top-level algorithm. Notice the strong relationship with ZCS (Algorithm 127):
Algorithm 136 The XCSF Algorithm
1: S ← {s_1, ..., s_z} input data points
2: y(s) ← function which returns the desired output for input data point s ∈ S
3: popsize ← desired population size
4: f ← fitness value to be assigned to initial population members      ⊳ Can be whatever. Say, 1.
5: n ← agent runs per evolutionary loop      ⊳ Make it large.
6: c ← probability of crossover occurring      ⊳ Make it small.
7: P ← Generate Initial Population, given f and popsize
8: repeat
9:    for n times do
10:      for each s ∈ S do      ⊳ Do these in randomly shuffled order
11:         M ⊆ P ← match set for state s      ⊳ That is, all P_i ∈ P which cover s
12:         if M is empty then      ⊳ Oops, nothing's covering s, make something
13:            M ← { Create New Individual Covering s }      ⊳ See Text
14:            if ||P|| = popsize then      ⊳ We're full, delete someone
15:               P ← P − {SelectForDeath(P)}
16:            P ← P ∪ M
17:         Report the collective prediction of s by the members of M
18:         r ← y(s)
19:         UpdateFitnesses with M, s, and r
20:   Parent P_a ← SelectWithReplacement(P)      ⊳ And now we begin the breeding stage
21:   Parent P_b ← SelectWithReplacement(P)
22:   Child C_a ← Copy(P_a)
23:   Child C_b ← Copy(P_b)
24:   if c ≥ random number chosen uniformly from 0.0 to 1.0 then
25:      C_a, C_b ← Crossover(C_a, C_b)
26:      RedistributeFitnesses(P_a, P_b, C_a, C_b, true)
27:   else
28:      RedistributeFitnesses(P_a, P_b, C_a, C_b, false)
29:   C_a ← Mutate(C_a)
30:   C_b ← Mutate(C_b)
31:   if ||P|| = popsize then      ⊳ Make room for at least 2 new kids
32:      P ← P − {SelectForDeath(P)}
33:   if ||P|| + 1 = popsize then
34:      P ← P − {SelectForDeath(P)}
35:   P ← P ∪ {C_a, C_b}
36: until we have run out of time
37: return P
10.6 Is this Genetic Programming?
[Figure 67 A robot world with three rooms, a door, and a switch. Available actions for each room are shown (Room A: Go to B, Exit Door; Room B: Go to A, Go to C; Room C: Go to B, Flick Switch). The robot can only exit if the door is opened. Flicking the switch opens the door.]
Back to XCS and SAMUEL. In some important sense, policies are programs which control agents. These programs consist of if→then rules where the if side consists of the current state of the world. Even without control structures, this is often a lot more sophisticated than the lion's share of programs that tree-structured or machine-code genetic programming develops (see Sections 4.3 and 4.4). But is this sufficient to be called programming?
Well, in lots of environments, you need more than just the state of the world to decide what to do. You also need a memory where you store some form of information gleaned from the history of what's happened. That memory is typically called the internal state of the agent (as opposed to the world state, or external state).
Consider Figure 67 at right. The robot starts in room A and wants to go out the door. We would like to develop a policy that enables the robot to go to room C, flick the switch (which opens the door), return to A, and go out the door. The policy might be:

    In A and door closed → go to B
    In B → go to C
    In C and switch off → flick switch
    In C and switch on → go to B
    In B → um....
    In A and door open → go out the door

The problem is that we already have a rule for B! Go to C. We need two rules for B: if I'm headed to flick the switch, go to C, but if I'm headed out the door, go to A. Trouble is, in room B we have nothing to go on, no external state information, which can help us distinguish these features. The two B situations are aliased: they require different actions but exhibit the same external state.
We need some memory: specifically, we need memory of whether we flicked the switch or not. Let's give the agent a single bit of memory. Initially the bit is 0. Now we might construct this policy:

    In A and door closed → go to B
    In B and memory bit is 0 → go to C
    In C and switch off → flick switch and set memory bit to 1
    In C and switch on → go to B
    In B and memory bit is 1 → go to A
    In A and door open → go out the door

Problem solved! Here's the thing: by adding a single bit of memory, we've potentially doubled our state space. A single bit isn't too bad, but several bits and we radically increase the complexity of our world. Techniques for handling these issues are fairly cutting-edge. I personally view policy optimization methods as the closest thing we have to successful genetic programming at present: but we're still a long ways from true automatic programming. Your job is safe.
11 Miscellany

Always the most interesting section of a book.^182
11.1 Experimental Methodology
11.1.1 Random Number Generators, Replicability, and Duplicability
Random Number Generators   Metaheuristics employ randomness to some degree. Like all stochastic techniques, the validity of your results may rely on the quality of your random number generator. Unfortunately, there are a lot of very, very bad random number generators in common use. Many of the more infamous generators come from a family of linear congruential random number generators, where the next random number is a function of the previous one: x_{t+1} = (a x_t + c) mod m. The values for a, c, and m must be very carefully chosen in order for this generator to be even adequate to use. But bad choices of these constants have led to some truly infamous results. The RANDU generator, for example, ruined experimental results as far back as the 1960s. A mistake in the ANSI C specification led to the propagation of a horrible generator in C and C++'s rand() function even to this day. And Java's java.util.Random produces such non-random results that there's an entire web page devoted to making fun of it.^183 When I examine new Java metaheuristics toolkits, the first thing I check is whether they're using java.util.Random or not.
The revelation of a poor generator has cast doubt on more than one research paper in the literature. You ought to pick a high-grade generator. My own personal choice is the Mersenne Twister, a highly respected generator with very good statistical properties and an ultra-long period (the amount of time before it starts repeating its sequence), but there are other very good ones out there as well.
Generators need to be seeded and used properly. Too many times have I seen beginners repeatedly instantiating a new java.util.Random instance, generating one integer from it, then throwing it away, seemingly blissfully unaware that this is grotesquely nonrandom. This awful approach gives you a sequence of numbers loosely following your computer's wall clock time. A good way to use a random number generator in your experiments is:

1. Choose a very high grade random number generator.
2. Pick a unique seed for each and every experimental run you do.
3. Seed your generator based on the seed you picked.
4. If you're using a language like Java in which generators are objects, create only one generator per experimental run and continue to use it throughout the run, never creating a new one.

Unless you know exactly what you're doing, it'd be wise to not deviate from this procedure.
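For instance, a minimal Python version of that procedure might look like the sketch below; the seed scheme and the run_experiment stub are mine, just to illustrate one-generator-per-run with a unique, recorded seed:

import random

def run_experiment(run_number):
    seed = 1000 + run_number          # unique, recorded seed for this run
    rng = random.Random(seed)         # one generator object for the whole run
    # ... pass rng to every routine that needs randomness; never create another one ...
    return rng.random()

results = [run_experiment(i) for i in range(100)]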
Replicability   When you perform your experiments and write them up for a conference or journal, you must strive for replicability. You should report your results in such a way that a competent coder could replicate your experiments, using a different programming language and operating system, and still get more or less the same results. Otherwise, who's to know if you just didn't make this stuff up? To make replicable experiments you'll need to describe your algorithm and relevant parameters in sufficient detail. Pseudocode would be nice.
Even if you have described your algorithm in detail, if the algorithm is gigantic and absurdly complex, it's not considered replicable. You can't just thumb your nose at your readers and say "replicate this giant monster, I dare you." Instead, you'll probably need to provide actual code somewhere for your readers to access so they don't have to write it themselves. People are scared of providing code so others can examine it, mostly because they're ashamed of their code quality. Be brave.^184

182 Compare to Footnote 3, p. 11.
183 Sun Refines Randomness: http://alife.co.uk/nonrandom/
Duplicability   If you are performing experiments and making claims, it's helpful to strive not just for replicability but for the higher standard of duplicability. Here you're enabling others to exactly duplicate your results, ideally in environments other than your particular computer. The difference between replicability and duplicability is fundamental when dealing with a stochastic system: replicable experiments can be more or less repeated, with results which are statistically equivalent. Duplicable experiments are exactly the same when run elsewhere. For example, a good metaheuristics toolkit should be able to enable you to move to a new operating system and a new CPU and repeat the identical experiment.^185 To get duplicability, you'll need to think about your language and environment choice.
Why is this important? Let's say you've published some experiments, and Person X approaches you telling you he can't replicate your results. Uh oh. No problem, you say, and you hand him your code. Then he tries to run the code on his system and gets... a different result. How do you prove your claims are still valid? Could it be a bug in his operating system, compiler, or CPU? Or yours? Did you forget to give him the specific random number generator seeds that produce the given result? It's for these reasons that duplicability provides a bit of peace of mind. Replicability is crucial; duplicability would be nice. Consider it.
11.1.2 Comparing Techniques
By far the most common kind of experiment you'll find yourself doing in metaheuristics is comparing two different techniques. For example, let's say you want to show that, on some problem Foo, if you apply Particle Swarm Optimization with α = 0.9, β = 0.1, γ = 0.1, δ = 0, ε = 1, and with a population of size 10, you'll get better results than if you use the (5 + 1) Evolution Strategy using Gaussian Convolution with σ² = 0.1. How do you do this?
By What Yardstick Should We Compare our Techniques?   This is the first question that needs to be answered. At the end of a run, you often are left with a single best solution (or at least one which isn't worse than any of the others). The quality or fitness of this solution is known as the best of run. In most cases you'd like this best of run quality to be as good as possible.
For most metaheuristics comparisons your goal is to demonstrate that technique A in some sense performs better than technique B with regard to best of run quality. Nowadays evaluations are the primary cost in metaheuristics, so most researchers tend to ask the following question: if you could do a single run with a fixed budget of m evaluations, and needed a solution of the highest quality possible, which technique should you pick? This is exactly the same thing as asking: which technique has the highest expected (or mean) best of run?^186

184 I must admit, I often am not. But I try to be.
185 And now we come to the delicate point where I suggest that you may wish to consider a language other than C++: it's not a language which makes duplicability easy. C++ and C depend critically on the specifics of your CPU: how large is a long? How is cos performed? How about sqrt? Is your CPU big-endian, little-endian, or something else? Does compiling with certain floating-point optimizations turned on change the results? It can be frustrating to get results running on Machine A, only to recompile on Machine B and get something subtly, but importantly, different. Perhaps with everyone using the same Intel processors these days, it's less of a concern. But still, consider picking a safe language: Java in particular can provide precise duplicable results if you need it to.
An alternative question that has been asked before is: how many evaluations do I need to run before I reach some level q of quality? Often q is simply defined as the optimum. Or: if I run my technique n times, how often do I reach this level? Such formulations have taken many guises in the past, but the most common one, found in the genetic programming world, is the so-called computational effort measure.
It is my opinion that this alternative question usually isn't a good question to ask. Metaheuristics are applied to hard problems. If you're gauging techniques by how quickly they solve a problem, then your problem is trivial and your claims may be unhelpful for more realistic problems. Further, such measures are somewhat challenging to establish statistical significance for, and computational effort in particular may be less accurate than hoped for.^187
A third question comes from the machine learning community: if I find a candidate solution which does well for some set T of test cases, how well is this solution likely to perform in the real world? This is a question of generalizability: we're asking how well technique A learns about the world from a small sample (T) of inputs. One simple approach to gauging this is to create two disjoint sets of test cases T and S. You can make T however large you like, but I'd make S relatively large, perhaps 100. T will be the test cases used to develop our solution (commonly called the training set). Once we have a final solution, we gauge its quality by applying it to the test cases in S, which it has never seen before, and seeing how well it performs. S is called the test set. There exist more nuanced methods for doing train/test methodologies, such as k-fold cross validation, but the one described is very common.
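As a small, self-contained illustration of that train/test idea, here is a hedged Python sketch; the toy data, the quality function, and the trivial hill-climber are stand-ins of mine for your real problem and metaheuristic:

import random

random.seed(42)
# Toy data: noisy samples of y = 3x + 1; a "candidate solution" is a slope guess.
cases = [(x, 3 * x + 1 + random.gauss(0, 0.5)) for x in range(300)]
random.shuffle(cases)
T, S = cases[:200], cases[200:]            # disjoint training set T and test set S

def quality(slope, test_cases):            # higher is better (negated squared error)
    return -sum((y - (slope * x + 1)) ** 2 for x, y in test_cases) / len(test_cases)

best = 0.0                                 # develop the solution using T only
for _ in range(1000):
    candidate = best + random.gauss(0, 0.1)
    if quality(candidate, T) > quality(best, T):
        best = candidate

print("quality on training set T:", quality(best, T))
print("reported quality on held-out test set S:", quality(best, S))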
Finally, multiobjective problems pose special difficulties, because the result of a multiobjective run is not a single solution but a whole set of solutions which lie along the Pareto front. As a result, there really is no satisfactory way to compare multiobjective optimization techniques. Still though, researchers have to do something. Eckart Zitzler, Kalyanmoy Deb, and Lothar Thiele proposed various measures for comparing techniques^188 which are still in wide use today. Many of these techniques assume that you know beforehand what the true Pareto front is: this probably will not be true for real problems. Much research is now turning towards comparing techniques based on which has the largest hypervolume: the volume of the multiobjective space dominated by the front discovered by the technique. Hypervolume is, unfortunately, nontrivial and expensive to compute.

186 What if you could run a technique five times and take the best result of the five? Which is better then? It turns out, it's not necessarily A. If A had a mean of 5 but a variance of 0.01, while B had a mean of 4 (worse) but a variance of 20, you'd pick A if you ran just once, but you'd prefer B if you could run more than once and take the maximum of the runs.
187 Liviu Panait and I wrote a paper attacking the philosophy behind computational effort and similar measures and noting its poor correlation with expected-quality measures: Sean Luke and Liviu Panait, 2002, Is the perfect the enemy of the good?, in W. B. Langdon, et al., editors, GECCO 2002: Proceedings of the Genetic and Evolutionary Computation Conference, pages 820-828, Morgan Kaufmann Publishers, New York.
Steffen Christensen and Franz Oppacher have also been tough on the computational effort measure: they've established that it significantly underestimates the true effort: Steffen Christensen and Franz Oppacher, 2002, An analysis of Koza's computational effort statistic for genetic programming, in James A. Foster, et al., editors, Proceedings of the 5th European Conference on Genetic Programming (EuroGP 2002), pages 182-191, Springer.
Matthew Walker, Howard Edwards, and Chris Messom have been establishing methods to compute statistical significance for the computational effort measure. If you're interested in going after the alternative question, you should definitely try to use a method like theirs to add some rigor to any claims. Their latest work is Matthew Walker, Howard Edwards, and Chris Messom, 2007, The reliability of confidence intervals for computational effort comparisons, in Dirk Thierens, et al., editors, GECCO '07: Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation, volume 2, pages 1716-1723, ACM Press.
Statistical Significance   Okay, so you've settled on a question to ask and a way of getting results out of your Particle Swarm Optimization and Evolution Strategy techniques. You run PSO once and get a 10.5. You run your Evolution Strategy once and get a 10.2. So PSO did better, right?
Nope. How do you know that your results aren't due to the random numbers you happened to get from your generator? What happens if you run a second time with a different random number generator seed? Will PSO still beat ES then, or will it be the other way around? Keep in mind that this is a stochastic technique, not a deterministic one. To determine that PSO really is better than ES for problem Foo, you'll need to run each some n times and take the average. To eliminate the possibility of randomness messing with your results, n needs to be large.
You could do this trivially by running your techniques A and B, say, a billion times each, and comparing their means. But who has time to do a billion runs? We need a way to state with some definiteness that A is better than B after testing A and B each some smaller number of times: perhaps 50 or 100. To do this, we need a hypothesis test.
The literature on hypothesis tests is huge, and there are many options. Here my goal is to suggest a couple of approaches which I think will serve you well for the large majority of situations you may find yourself in. Before we get to hypothesis tests, let's begin with some strong suggestions:
• Unless you know what you're doing, always run each technique at least 30 times. I strongly suggest 50 or 100 times per technique. The more runs you do, the easier it is to prove that the techniques produce different expected results.
• Each run should be independent: there should be no relationship between the runs. In particular, each run should employ a unique random number seed.
• Be as conservative as you possibly can with regard to your claim. Don't just compare your newfangled Particle Swarm method against a specific Evolution Strategy. Instead, try Evolution Strategies with lots of different parameter settings to find the one which performs the best. Compare your new method against that best-performing one. Make it as hard as possible for your claim to succeed.
Okay, so you've done all these things. You now have 100 independent results for technique A and 100 independent results for technique B. The mean of the A results is better (let's say, higher) than the mean of the B results. What do you do now?
Your hypothesis is that A is better than B. The null hypothesis, your enemy, claims that there's no difference between the two, that is, the perceived difference is just due to your random numbers. You need to compute what the probability is that the null hypothesis is wrong.

188 Eckart Zitzler, Kalyanmoy Deb, and Lothar Thiele, 2000, Comparison of multiobjective evolutionary algorithms: Empirical results, Evolutionary Computation, 8(2), 125-148
Desired Probability
dof 95% 98% 99% 99.8%
1 12.706 31.821 63.657 318.313
2 4.303 6.965 9.925 22.327
3 3.182 4.541 5.841 10.215
4 2.776 3.747 4.604 7.173
5 2.571 3.365 4.032 5.893
6 2.447 3.143 3.707 5.208
7 2.365 2.998 3.499 4.782
8 2.306 2.896 3.355 4.499
9 2.262 2.821 3.250 4.296
10 2.228 2.764 3.169 4.143
11 2.201 2.718 3.106 4.024
12 2.179 2.681 3.055 3.929
13 2.160 2.650 3.012 3.852
14 2.145 2.624 2.977 3.787
15 2.131 2.602 2.947 3.733
16 2.120 2.583 2.921 3.686
17 2.110 2.567 2.898 3.646
18 2.101 2.552 2.878 3.610
19 2.093 2.539 2.861 3.579
20 2.086 2.528 2.845 3.552
21 2.080 2.518 2.831 3.527
22 2.074 2.508 2.819 3.505
23 2.069 2.500 2.807 3.485
24 2.064 2.492 2.797 3.467
25 2.060 2.485 2.787 3.450
26 2.056 2.479 2.779 3.435
27 2.052 2.473 2.771 3.421
28 2.048 2.467 2.763 3.408
29 2.045 2.462 2.756 3.396
30 2.042 2.457 2.750 3.385
31 2.040 2.453 2.744 3.375
32 2.037 2.449 2.738 3.365
33 2.035 2.445 2.733 3.356
34 2.032 2.441 2.728 3.348
Desired Probability
dof 95% 98% 99% 99.8%
35 2.030 2.438 2.724 3.340
36 2.028 2.434 2.719 3.333
37 2.026 2.431 2.715 3.326
38 2.024 2.429 2.712 3.319
39 2.023 2.426 2.708 3.313
40 2.021 2.423 2.704 3.307
41 2.020 2.421 2.701 3.301
42 2.018 2.418 2.698 3.296
43 2.017 2.416 2.695 3.291
44 2.015 2.414 2.692 3.286
45 2.014 2.412 2.690 3.281
46 2.013 2.410 2.687 3.277
47 2.012 2.408 2.685 3.273
48 2.011 2.407 2.682 3.269
49 2.010 2.405 2.680 3.265
50 2.009 2.403 2.678 3.261
51 2.008 2.402 2.676 3.258
52 2.007 2.400 2.674 3.255
53 2.006 2.399 2.672 3.251
54 2.005 2.397 2.670 3.248
55 2.004 2.396 2.668 3.245
56 2.003 2.395 2.667 3.242
57 2.002 2.394 2.665 3.239
58 2.002 2.392 2.663 3.237
59 2.001 2.391 2.662 3.234
60 2.000 2.390 2.660 3.232
61 2.000 2.389 2.659 3.229
62 1.999 2.388 2.657 3.227
63 1.998 2.387 2.656 3.225
64 1.998 2.386 2.655 3.223
65 1.997 2.385 2.654 3.220
66 1.997 2.384 2.652 3.218
67 1.996 2.383 2.651 3.216
68 1.995 2.382 2.650 3.214
Desired Probability
dof 95% 98% 99% 99.8%
69 1.995 2.382 2.649 3.213
70 1.994 2.381 2.648 3.211
71 1.994 2.380 2.647 3.209
72 1.993 2.379 2.646 3.207
73 1.993 2.379 2.645 3.206
74 1.993 2.378 2.644 3.204
75 1.992 2.377 2.643 3.202
76 1.992 2.376 2.642 3.201
77 1.991 2.376 2.641 3.199
78 1.991 2.375 2.640 3.198
79 1.990 2.374 2.640 3.197
80 1.990 2.374 2.639 3.195
81 1.990 2.373 2.638 3.194
82 1.989 2.373 2.637 3.193
83 1.989 2.372 2.636 3.191
84 1.989 2.372 2.636 3.190
85 1.988 2.371 2.635 3.189
86 1.988 2.370 2.634 3.188
87 1.988 2.370 2.634 3.187
88 1.987 2.369 2.633 3.185
89 1.987 2.369 2.632 3.184
90 1.987 2.368 2.632 3.183
91 1.986 2.368 2.631 3.182
92 1.986 2.368 2.630 3.181
93 1.986 2.367 2.630 3.180
94 1.986 2.367 2.629 3.179
95 1.985 2.366 2.629 3.178
96 1.985 2.366 2.628 3.177
97 1.985 2.365 2.627 3.176
98 1.984 2.365 2.627 3.175
99 1.984 2.365 2.626 3.175
100 1.984 2.364 2.626 3.174
∞ 1.960 2.326 2.576 3.090
Table 4 Table of t-values by degrees of freedom (dof) and desired probability that the Null Hypothesis is wrong (2-tailed t-tests only). To verify that the Null Hypothesis is wrong with the given probability, you need to have a t-value larger than the given value. If your degrees of freedom exceed 100, be conservative: use 100, unless they're huge, and so you can justifiably use ∞. 95% is generally an acceptable minimum probability, but higher probabilities are preferred.
You want that probability to be as high as possible. To be accepted in the research community, you usually need to achieve at least a 95% probability; and ideally a 99% or better probability.
A hypothesis test estimates this probability for you. Hypothesis tests come in various flavors: some more often claim that A is better than B when in fact there's no difference. Others will more conservatively claim that there's no difference between A and B when in fact there is a difference. You always want to err on the side of conservatism.
The most common hypothesis test, mostly because it's easy to do, is Student's t-Test.^189 Among the most conservative such t-Tests is one which doesn't presume that the results of A and B come from distributions with the same variance.^190 We'll use the two-tailed version of the test. To do the test, you first need to compute the means μ_A, μ_B, variances σ²_A, σ²_B, and number of results (n_A, n_B; in our example, n_A = n_B = 100) for technique A and technique B respectively. With these you determine the t statistic and the degrees of freedom.
    t = |μ_A − μ_B| / √( σ²_A/n_A + σ²_B/n_B )

    degrees of freedom = ( σ²_A/n_A + σ²_B/n_B )² / ( (σ²_A/n_A)²/(n_A − 1) + (σ²_B/n_B)²/(n_B − 1) )
Let's say your degrees of freedom came out to 100 and you have chosen 95% as your probability. From Table 4, we find that you must have a t value of 1.984 or greater. Imagine that your t value came out as, oh, let's say, 0.523. This tells us that you have failed to disprove the Null Hypothesis with an adequate probability. Thus you have no evidence that PSO is actually better than ES for the Foo problem.
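If you'd rather not compute t and the degrees of freedom by hand, it is a few lines of Python. The sketch below implements the two formulas above directly (scipy.stats.ttest_ind with equal_var=False would give you the same t plus a p-value):

from statistics import mean, variance   # variance() is the sample variance (n-1 denominator)

def welch_t_and_dof(a, b):
    na, nb = len(a), len(b)
    va, vb = variance(a) / na, variance(b) / nb
    t = abs(mean(a) - mean(b)) / (va + vb) ** 0.5
    dof = (va + vb) ** 2 / (va ** 2 / (na - 1) + vb ** 2 / (nb - 1))
    return t, dof

# Example usage: 100 results per technique; compare t against Table 4 at the computed dof.
# t, dof = welch_t_and_dof(results_a, results_b)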
As you can see from the table, if you want to make it easier to pass the t-test, the way to do it is to increase your degrees of freedom. This translates into doing more runs (that is, increasing n_A and n_B). More runs is always good! But beware: if you need a very large number of runs to do this, it's likely the case that though your techniques are different, the difference is very small. Now you'll run up against the "so what?" question: so what if PSO ekes out just barely better results than ES on problem Foo? Thus what you usually want to be able to argue is both (1) that the difference between your two techniques is statistically significant, that is, that a hypothesis test agrees with you that it actually exists; and (2) that the difference is also considerable and likely to be important.
The t-Test should be viewed as the absolute minimum you should do for published work. Anything less and you should be ashamed of yourself. The problem with the t-Test, and it's a big problem, is that it is parametric: that is, it relies solely on the mean, variance, and sample count of your results. This is because the t-Test makes a huge assumption: that the results produced by your techniques A and B are each drawn from a normal (Gaussian) distribution.
In metaheuristics scenarios, that's almost never true.
A great many metaheuristics problems produce results which are fairly skewed. Now the t-Test is pretty robust even with relatively skewed data. But if the data is too skewed, the t-Test starts being less accurate than it should. Also very bad for the t-Test is data with multiple peaks.
To compensate for this, there's a better approach: a nonparametric hypothesis test. This kind of test ignores the actual values of your data and only considers their rank ordering with respect to one another.^191 As a result, such tests are much less sensitive, but they are not fooled by assumptions about how your results are distributed. If you pass a non-parametric test, few can criticize you.
189 It's called this because it's based on work by William Sealy Gosset around 1908, who worked at Guinness Brewery and secretly published under the pseudonym "Student". He did so because Guinness wouldn't allow its workers to publish anything out of fear of leaking trade secrets. The t-Test itself was, however, mostly derived by Ronald Aylmer Fisher, a famous statistician who conversed with Gosset and made his work popular.
190 This t-Test variant is known as Welch's t-Test, after Bernard Lewis (B. L.) Welch, who developed it.
191 Sound familiar? Think: fitness-proportionate selection versus tournament selection.
There are various nonparametric tests, notably the Mann-Whitney U Test, but Mark Wineberg and Steffen Christensen^192 suggest a particularly simple and effective alternative:

1. Throw all the results of techniques A and B together into one vector.
2. Sort the vector by result value.
3. Replace the result values with their rank values (that is, their locations in the vector).
4. Results with the same value are assigned the average of their combined ranks.
5. Break the results back into the technique-A results and the technique-B results.
6. Using the rank values rather than the original result values, do a t-Test.
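Here is a rough Python sketch of that rank-transform step, to be followed by the Welch t-test sketch shown earlier; the function name is mine, and scipy.stats.rankdata would do the ranking for you:

def rank_transform(a, b):
    pooled = sorted((value, which) for which, values in (("A", a), ("B", b))
                    for value in values)
    positions = {}                                 # value -> all positions it occupies
    for position, (value, _) in enumerate(pooled, start=1):
        positions.setdefault(value, []).append(position)
    avg_rank = {value: sum(ps) / len(ps) for value, ps in positions.items()}
    return [avg_rank[v] for v in a], [avg_rank[v] for v in b]

pso = [0.1, 0.5, 0.8, 0.9, 0.9]
es  = [0.2, 0.3, 0.5, 0.7, 0.9]
print(rank_transform(pso, es))    # A ranks 1, 4.5, 7, 9, 9 and B ranks 2, 3, 4.5, 6, 9, as in the worked example below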
Let's do an example. Imagine that, against good judgement and the recommendations of this text, you have decided only to do five runs of each technique (PSO and ES). Your results were:

    PSO: 0.1 0.5 0.8 0.9 0.9      ES: 0.2 0.3 0.5 0.7 0.9

We put them together into one vector and sort it.

    0.1  0.2  0.3  0.5  0.5  0.7  0.8  0.9  0.9  0.9
    PSO  ES   ES   ES   PSO  ES   PSO  ES   PSO  PSO

Next we include ranks.

    1    2    3    4    5    6    7    8    9    10
    0.1  0.2  0.3  0.5  0.5  0.7  0.8  0.9  0.9  0.9
    PSO  ES   ES   ES   PSO  ES   PSO  ES   PSO  PSO

Next we average ranks for results with the same values.

    1    2    3    4.5  4.5  6    7    9    9    9
    0.1  0.2  0.3  0.5  0.5  0.7  0.8  0.9  0.9  0.9
    PSO  ES   ES   ES   PSO  ES   PSO  ES   PSO  PSO

Next we replace the values with just the ranks.

    1    2    3    4.5  4.5  6    7    9    9    9
    PSO  ES   ES   ES   PSO  ES   PSO  ES   PSO  PSO

Finally, we break the results back out into their groups again. The ranks are all that are left.

    PSO: 1 4.5 7 9 9      ES: 2 3 4.5 6 9

We can now do a plain old t-Test on these revised values instead. Note that we're no longer testing whether the means of the two techniques are different from one another. Instead, since we're looking at rank orderings, it's somewhat closer to saying that the medians of the two techniques differ. It's still a better measure than a plain t-Test by a long shot.
192 See the very last entry in Section 11.3.1 for pointers to their excellent lecture slides. A number of suggestions here were inspired from those slides.
Comparing More than Two Techniques   t-Tests only compare two techniques. Let's say you have five techniques, A, B, C, D, and E. You want to prove that A does better than the rest. How do you compare them? One approach is to compare A against B (with a hypothesis test), then A against C, then A against D, then A against E. If you do this, remember that it's critical that each time you compare A against another technique, you should do a new set of independent runs for A, with new random number generator seeds. Don't reuse your old runs. Or perhaps you want to compare each method against every other method: that is, A versus B, A versus C, A versus D, A versus E, B versus C, B versus D, B versus E, C versus D, C versus E, and finally D versus E. Phew! Again, remember that each comparison should use new, independent runs.
Doing individual pairwise hypothesis tests isn't sufficient though. Keep in mind that the point of a hypothesis test is to compute the probability that your claim is valid. If you do a single comparison (A versus B) at 95% probability, there is a 5% chance that your claim is false. But if you compare A against four other techniques (A versus B, A versus C, A versus D, A versus E), each at 95% probability, you have an approximately 20% chance that one of them is false. If you compared each method against the others, resulting in ten comparisons, you have an approximately 50% chance that one of them is false! It's pretty common that you'll do a lot of experiments in your paper. And so with a high probability one of your hypothesis tests will come up false.
It's better style to try to fix this probability, and ideally get it back up to 95% (or whatever value you had originally chosen). The simplest way to do this is to apply the Bonferroni correction. Specifically, if you have m comparisons to do, and the desired probability of one of them being wrong is p total, then revise each individual probability of being wrong to be p/m, and thus the probability of being right is 1 − p/m. In our examples above, if we wish to compare A against the other techniques (four comparisons), and want to retain a 95% probability of being right, that is, a 1/20 chance of being wrong, then each of our comparisons should be done with a (1/20)/4 = 1/80 probability of being wrong. That translates into using a 1 − 1/80 = 0.9875 (that is, 98.75%) probability for each hypothesis test. Similarly, if you're comparing all the techniques (ten comparisons), you'll have 1 − 1/200 = 0.995 (99.5%). Not easy to beat!
A much less extreme method, in terms of how high your probability has to go, is the ANOVA, a fairly complex method which compares m techniques at one time and tells you if any one of them is different from the others. Interestingly, the ANOVA doesn't tell you which techniques are different from which others: for that you apply a so-called post-hoc comparison, the most conservative of which (always be conservative!) is the Tukey comparison.^193 One difficulty with the ANOVA is that, like the original t-Test, it assumes that your distributions are normal. Which is rarely the case. There exist non-parametric ANOVA methods as well. The ANOVA (and related tests) are far too complex to describe here: consult a good statistics book.
One of the strange effects you'll get when comparing m techniques is nontransitivity among your results. For example, let's say that, looking at their means, A > B > C > D > E. But when you run the ANOVA, it tells you that A and B aren't statistically different, and B and C aren't statistically different, but A and C are statistically significantly different! Furthermore, D and E aren't statistically different, but A, B, and C are all statistically significantly different from D and E. Eesh. How do you report something like this? Usually, with overbars connecting groups with no significant difference among them:

    ─────
        ─────      ─────
    A   B   C      D   E

Be sure to notice the overlapping but unconnected overbars over A, B, and C.

193 Named after the statistician John Tukey.
11.2 Simple Test Problems

The test problems below are common, and sometimes trivial, fitness or quality functions suitable for small experiments and projects. Problems are provided for fixed-length boolean and real-valued vectors, multiobjective scenarios, and Genetic Programming (and Grammatical Evolution).
Many of these problems have been overused in the field and are a bit dated: if you're working on a scientific research paper, you ought to spend some time examining the current benchmarks applied to techniques like yours. Also: if you're using test problems as benchmarks to compare techniques, be wary of the temptation to shop for benchmarks, that is, to hunt for that narrow set of benchmark problems that happens to make your technique look good. You can always find one, but what have you gained? Instead, try to understand how your technique performs on a wide range of well-understood problems from the literature, or on problems of strong interest to a specific community.^194
11.2.1 Boolean Vector Problems

Max Ones   Max Ones, sometimes called OneMax, is a trivial boolean problem: it's the total number of ones in your vector. This is the classic example of a linear problem, where there is no linkage between any of the vector values at all. Simple Hill-Climbing can solve this problem easily. Max Ones is due to David Ackley:^195

    f(⟨x_1, ..., x_n⟩) = Σ_{i=1}^n x_i
Leading Ones   This problem is also quite simple: it counts the number of ones in your vector, starting at the beginning, until a zero is encountered. Put another way, it returns the position of the first zero found in your vector (minus one). The equation below is a clever way of describing this mathwise, but you wouldn't implement it like that (too expensive). Just count the ones up to the first zero. Leading Ones is not a linear problem: the contribution of a slot x_i in the vector depends critically on the values of the slots x_1, ..., x_{i−1}. Nonetheless, it's pretty simple to solve.

    f(⟨x_1, ..., x_n⟩) = Σ_{i=1}^n Π_{j=1}^i x_j
Leading Ones Blocks   This variant of Leading Ones is somewhat more challenging. Given a value b, we count the number of strings of ones, each b long, until we see a zero. For example, if b = 3, then f(⟨1, 1, 0, 0, 0, 1, 1, 0, 1⟩) = 0 because we don't have a string of 3 at the beginning yet. But f(⟨1, 1, 1, 0, 0, 0, 1, 0, 1⟩) = 1. Furthermore, f(⟨1, 1, 1, 1, 0, 1, 1, 0, 1⟩) = 1 but f(⟨1, 1, 1, 1, 1, 1, 0, 1, 0⟩) = 2, and ultimately f(⟨1, 1, 1, 1, 1, 1, 1, 1, 1⟩) = 3. A simple way to do this is to do Leading Ones, then divide the result by b, and floor it to the nearest integer:

    f(⟨x_1, ..., x_n⟩) = ⌊ (1/b) Σ_{i=1}^n Π_{j=1}^i x_j ⌋

194 At this point it's worth bringing up the infamous No Free Lunch Theorem, or NFL, by David Wolpert and William Macready. The NFL stated that within certain constraints, over the space of all possible problems, every optimization technique will perform as well as every other one on average (including Random Search). That is, if there exists a set of problems P for which technique A beats technique B by a certain amount, there also exists an equal-sized set of problems P′ for which the opposite is true. This is of considerable theoretical interest but, I think, of limited practical value, because the space of all possible problems likely includes many extremely unusual and pathological problems which are rarely if ever seen in practice. In my opinion, of more interest is what kinds of techniques perform well on the typical problems faced by practitioners, and why. For more on the NFL, see David Wolpert and William Macready, 1997, No free lunch theorems for optimization, IEEE Transactions on Evolutionary Computation, 1(1), 67-82.
195 David Ackley, 1987, A Connectionist Machine for Genetic Hillclimbing, Kluwer Academic Publishers.
Trap   The so-called Trap Problems are classic examples of deceptive functions. Here's a simple
one which is easily described: the fitness of your vector is the number of zeros in the vector, unless
you have no zeros at all, in which case the fitness of the vector is suddenly the optimally high n + 1.
Thus this problem sets up a gradient to lead you gently away from the optimal all-ones (no zeros)
case, and deep into the trap. For example, f(⟨0, 0, 0, 0⟩) = 4, f(⟨0, 0, 1, 0⟩) = 3, f(⟨1, 0, 1, 0⟩) = 2,
f(⟨1, 0, 1, 1⟩) = 1, but boom, f(⟨1, 1, 1, 1⟩) = 5. A clever math formulation of this has two terms:
the sum part is the number of zeros in the vector, and the product part only comes into play when you
have all ones. Various trap functions were originally due to David Ackley.196

$f(\langle x_1, ..., x_n \rangle) = \left( n - \sum_{i=1}^{n} x_i \right) + (n + 1) \prod_{i=1}^{n} x_i$
11.2.2 Real-Valued Vector Problems
Many classic real-valued vector problems are minimization problems rather than maximization
ones. To convert them to a maximization problem, the simplest solution is to negate the result. If
you're using Fitness Proportionate Selection or SUS, you'll also need to add a big enough number
that there aren't any negative values. I'd use Tournament Selection instead.
Most of the problems described below are shown, in the trivial 2-dimensional case, in Figure 68.
Sum   Sum is the trivial real-valued version of Max Ones. It's just the sum of your vector. As
would be expected, Sum is a linear problem and so has no linkage.
$f(\langle x_1, ..., x_n \rangle) = \sum_{i=1}^{n} x_i \qquad x_i \in [0.0, 1.0]$
Linear   Linear functions are the generalization of Sum, and again have no linkage at all. They're
just the weighted sum of your vector, where each weight is given by a constant $a_i$. Given a vector
of constants $\langle a_0, ..., a_n \rangle$, which you provide, we weight each element, then add them up:
$f(\langle x_1, ..., x_n \rangle) = a_0 + \sum_{i=1}^{n} a_i x_i \qquad x_i \in [0.0, 1.0]$
196 As was Max Ones. See Footnote 195, p. 213.
[Figure 68   Real-valued problems in two dimensions ⟨x_1, x_2⟩. Top row: Sum, Step, Sphere, Rosenbrock. Bottom row: Rastrigin, Schwefel, Griewank, Griewank (Detail).]
Step   Another no-linkage function, but this time it's got a wrinkle. Because it uses the floor
function, there are regions where small mutations in any given floating-point value don't change
fitness at all. This function is part of a popular test suite by Ken De Jong,197 and so has traditional
bounds on the $x_i$ values (between −5.12 and +5.12 inclusive). The function is usually minimized,
though it doesn't matter much: you can search for the maximum too, it's about the same.
(Minimize) $f(\langle x_1, ..., x_n \rangle) = 6n + \sum_{i=1}^{n} \lfloor x_i \rfloor \qquad x_i \in [-5.12, 5.12]$
Sphere   Our last no-linkage problem, due to Ingo Rechenberg.198 Here we're summing the squares
of the individual elements. This is again a minimization problem, and is part of De Jong's test suite
(note the bounds). Maximization is also interesting, as there are global maxima at the corners.
(Minimize) $f(\langle x_1, ..., x_n \rangle) = \sum_{i=1}^{n} x_i^2 \qquad x_i \in [-5.12, 5.12]$
197 Perhaps too popular. Ken De Jong has been waging a campaign to get people to stop using it! The test suite was
proposed in De Jong's PhD thesis: Kenneth De Jong, 1975, An Analysis of the Behaviour of a Class of Genetic Adaptive
Systems, Ph.D. thesis, University of Michigan. The thesis is available online at http://cs.gmu.edu/~eclab/kdj_thesis.html

198 Ingo Rechenberg, 1973, Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution,
Frommann-Holzboog, Stuttgart, Germany.
Rosenbrock   A classic optimization problem well predating the field, from Howard Rosenbrock.199
In two dimensions, this function creates a little valley bent around a low hill, with
large wings on each side. The minimum is at ⟨1, 1, ..., 1⟩, in the valley on one side of the low hill, and
individuals often get stuck on the other side. The traditional bounds are shown. It's a minimization
problem.
(Minimize) $f(\langle x_1, ..., x_n \rangle) = \sum_{i=1}^{n-1} \left( (1 - x_i)^2 + 100 (x_{i+1} - x_i^2)^2 \right) \qquad x_i \in [-2.048, 2.048]$
Rastrigin   Originally proposed by Leonard Andreevich Rastrigin200 in 1974 as a two-dimensional
function, and later extended by Heinz Mühlenbein, M. Schomisch, and Joachim Born to more
variables.201 This function is essentially a large egg carton bent under a basketball: it's a combination
of Sphere and a sine wave which creates a great many local optima. It's a minimization problem.
Some literature has $x_i \in [-5.12, 5.12]$, following De Jong's tradition (that's what I'm doing here),
but others use different bounds.
(Minimize) $f(\langle x_1, ..., x_n \rangle) = 10n + \sum_{i=1}^{n} \left( x_i^2 - 10 \cos(2 \pi x_i) \right) \qquad x_i \in [-5.12, 5.12]$
Schwefel   This function, due to Hans-Paul Schwefel,202 has many local optima like Rastrigin, but
is organized so that the local optima are close to one another (and thus easier to jump to) the further
you get from the global optima. It's thus described as a deceptive problem. Again, minimization.
Notice the larger traditional bounds than we've seen so far.
(Minimize) $f(\langle x_1, ..., x_n \rangle) = \sum_{i=1}^{n} -x_i \sin\left( \sqrt{|x_i|} \right) \qquad x_i \in [-512.03, 511.97]$

Some variations add 418.9829n to the function to set the minimum to about 0.
Griewank   Not to be outdone by Rastrigin, Andreas Griewank's similar function has a zillion
local optima.203 The function is minimized, and traditionally has bounds from −600 to +600, which
creates massive numbers of local optima.
(Minimize) $f(\langle x_1, ..., x_n \rangle) = 1 + \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right) \qquad x_i \in [-600, 600]$
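As a quick reference, here is a Python sketch of several of these functions, written as minimization problems with the traditional bounds noted in comments. This is illustrative code rather than anything from the original test suites; negate the results if your selection procedure expects maximization.

    import math

    def step(x):        # bounds: x_i in [-5.12, 5.12]
        return 6 * len(x) + sum(math.floor(xi) for xi in x)

    def sphere(x):      # bounds: x_i in [-5.12, 5.12]
        return sum(xi * xi for xi in x)

    def rosenbrock(x):  # bounds: x_i in [-2.048, 2.048]
        return sum((1 - x[i]) ** 2 + 100 * (x[i + 1] - x[i] ** 2) ** 2
                   for i in range(len(x) - 1))

    def rastrigin(x):   # bounds: x_i in [-5.12, 5.12]
        return 10 * len(x) + sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

    def schwefel(x):    # bounds: x_i in [-512.03, 511.97]
        return sum(-xi * math.sin(math.sqrt(abs(xi))) for xi in x)

    def griewank(x):    # bounds: x_i in [-600, 600]
        s = sum(xi ** 2 for xi in x) / 4000.0
        p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
        return 1 + s - p

    print(rosenbrock([1.0, 1.0, 1.0]))   # 0.0: the global minimum is at (1, ..., 1)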
199 Howard Rosenbrock, 1960, An automatic method for finding the greatest or least value of a function, The Computer
Journal, 3(3), 174–184.
200 I believe this was from Leonard Andreevich Rastrigin, 1974, Systems of Extremal Control, Nauka, in Russian. Nearly
impossible to get ahold of, so don't bother.
201 Heinz Mühlenbein, M. Schomisch, and Joachim Born, 1991, The parallel genetic algorithm as function optimizer, in
Richard Belew and Lashon Booker, editors, Proceedings of the Fourth International Conference on Genetic Algorithms, pages
271–278.
202 Hans-Paul Schwefel, 1977, Numerische Optimierung von Computer-Modellen mittels der Evolutionsstrategie, Birkhäuser.
203 Andreas Griewank, 1981, Generalized descent for global optimization, Journal of Optimization Theory and Applications,
34, 11–39.
Rotated Problems   Many of the real-valued test problems described above consist of linear combinations
of each of the variables. This often makes them susceptible to techniques which assume
low linkage among genes, and so it's considered good practice to rotate204 them by an orthonormal
matrix M. If your original fitness function was $f(\vec{x})$, you'd instead use a new rotated fitness
function $g(\vec{x}) = f(M\vec{x})$ (assuming that $\vec{x}$ is a column vector). This has the effect of creating linkages
among variables which were previously largely unlinked, and thus making a more challenging
problem for algorithms which assume low linkage.

Ideally you'd draw M randomly and uniformly from the space of rotations or reflections. If the
problem is two-dimensional, it's easy to just do a rotation: choose a random value of $\theta$ from $[0, 2\pi)$,
and set $M = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}$. But that only works because there's a single possible rotation axis.
For a dimensionality higher than two, doing this stuff quickly becomes non-obvious.

As it turns out, rotation in an n-dimensional space is more or less equivalent to choosing a new
orthonormal basis in your vector space. The following algorithm uses the Gram-Schmidt process
to transform a set of randomly chosen vectors into an orthonormal basis.
Algorithm 137 Create a Uniform Orthonormal Matrix
1: n ← desired number of dimensions
2: M ← n × n matrix, all zeros
3: for i from 1 to n do
4:     for j from 1 to n do
5:         $M_{ij}$ ← random number chosen from the Normal distribution $N(\mu = 0, \sigma^2 = 1)$
6: for i from 1 to n do
7:     Row vector $\vec{M}_i \leftarrow \vec{M}_i - \sum_{j=1}^{i-1} \langle \vec{M}_i \cdot \vec{M}_j \rangle \vec{M}_j$    ▷ Subtract out projections of previously built bases
8:     Row vector $\vec{M}_i \leftarrow \vec{M}_i / ||\vec{M}_i||$    ▷ Normalize
9: return M
As a reminder, $\langle \vec{M}_i \cdot \vec{M}_j \rangle$ is a dot product.
This algorithm is a very old method indeed, but the earliest adaptation to metaheuristics I
am aware of is due to Nikolaus Hansen, Andreas Ostermeier, and Andreas Gawelczyk.205 The
algorithm above is based on their adaptation.

Important note: rotation will produce vectors $M\vec{x}$ which potentially lie outside your original
bounds for $\vec{x}$: you'll need to make sure that $f(M\vec{x})$ can return rational quality assessments for these
vectors, or otherwise change the original bounds for $\vec{x}$ to prevent this from happening.
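Here is one way this might look in practice: a NumPy sketch that builds an orthonormal matrix M using the Gram-Schmidt procedure of Algorithm 137, then wraps an existing fitness function so the search sees g(x) = f(Mx). The function names and the use of NumPy are my own choices for illustration, not part of the algorithm; note, per the warning above, that M x may land outside the original bounds.

    import numpy as np

    def random_orthonormal_matrix(n, rng=None):
        # Fill an n x n matrix with Normal(0, 1) samples, then Gram-Schmidt its rows
        rng = rng or np.random.default_rng()
        M = rng.normal(0.0, 1.0, size=(n, n))
        for i in range(n):
            for j in range(i):
                M[i] -= np.dot(M[i], M[j]) * M[j]   # subtract projections onto earlier rows
            M[i] /= np.linalg.norm(M[i])            # normalize to unit length
        return M

    def rotated(f, M):
        # Wrap a fitness function f so that the rotated fitness is g(x) = f(Mx)
        return lambda x: f(M @ np.asarray(x, dtype=float))

    # Hypothetical usage: rotating a Rastrigin-style function
    rastrigin = lambda x: 10 * len(x) + float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))
    g = rotated(rastrigin, random_orthonormal_matrix(5))
    print(g(np.zeros(5)))   # still 0.0, since M times the zero vector is the zero vector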
204 Okay, not quite rotate. Picking a new orthonormal basis will also add reflections. It's still good.
205 Nikolaus Hansen, Andreas Ostermeier, and Andreas Gawelczyk, 1995, On the adaptation of arbitrary normal
mutation distributions in evolution strategies: the generating set adaptation, in L. J. Eshelman, editor, Proceedings of the
Sixth International Conference on Genetic Algorithms, pages 57–64, Morgan Kaufmann. A more straightforward description
of the algorithm is in Nikolaus Hansen and Andreas Ostermeier, 2001, Completely derandomized self-adaptation in
evolution strategies, Evolutionary Computation, 9(2), 159–195.
[Figure 69   Pareto fronts of four multiobjective problems (ZDT1, ZDT2, ZDT3, and ZDT4) as described in Section 11.2.3, plotted as Objective 1 versus Objective 2. All four problems are minimization problems, so lower objective values are preferred. ZDT3's Pareto front is discontinuous. For ZDT4, a thin line indicates the highest local suboptimal Pareto front; other local suboptimal Pareto fronts are not shown.]
11.2.3 Multiobjective Problems

The problems described below are all from a classic multiobjective comparison paper by Eckart
Zitzler, Kalyanmoy Deb, and Lothar Thiele.206 Like many multiobjective test problems, they're all
set up for minimization: you can change this to maximization by negating (for example). All four
problems have two objectives $O_1$ and $O_2$. The problems are all designed such that $O_2$ is a function
of two auxiliary functions g and h. The global Pareto fronts for all four problems, and in one case a
strong local Pareto front, are all shown in Figure 69.

206 Eckart Zitzler, Kalyanmoy Deb, and Lothar Thiele, 2000, Comparison of multiobjective evolutionary algorithms:
Empirical results, Evolutionary Computation, 8(2), 125–148.
ZDT1   This is a basic multiobjective problem with a convex Pareto front for real-valued vector
individuals n = 30 genes long. The problem has no local optima.

(Minimize) $O_1(\langle x_1, ..., x_n \rangle) = x_1 \qquad x_i \in [0, 1]$
$O_2(\langle x_1, ..., x_n \rangle) = g(\langle x_1, ..., x_n \rangle) \, h(\langle x_1, ..., x_n \rangle)$
$g(\langle x_1, ..., x_n \rangle) = 1 + \frac{9}{n-1} \sum_{i=2}^{n} x_i$
$h(\langle x_1, ..., x_n \rangle) = 1 - \sqrt{x_1 / g(\langle x_1, ..., x_n \rangle)}$
ZDT2   This function is like ZDT1, but is concave. Again, n = 30. The problem has no local optima.

(Minimize) $O_1(\langle x_1, ..., x_n \rangle) = x_1 \qquad x_i \in [0, 1]$
$O_2(\langle x_1, ..., x_n \rangle) = g(\langle x_1, ..., x_n \rangle) \, h(\langle x_1, ..., x_n \rangle)$
$g(\langle x_1, ..., x_n \rangle) = 1 + \frac{9}{n-1} \sum_{i=2}^{n} x_i$
$h(\langle x_1, ..., x_n \rangle) = 1 - \left( x_1 / g(\langle x_1, ..., x_n \rangle) \right)^2$
ZDT3   This function has a discontinuous Pareto front. Again, n = 30. The problem has no local
optima.

(Minimize) $O_1(\langle x_1, ..., x_n \rangle) = x_1 \qquad x_i \in [0, 1]$
$O_2(\langle x_1, ..., x_n \rangle) = g(\langle x_1, ..., x_n \rangle) \, h(\langle x_1, ..., x_n \rangle)$
$g(\langle x_1, ..., x_n \rangle) = 1 + \frac{9}{n-1} \sum_{i=2}^{n} x_i$
$h(\langle x_1, ..., x_n \rangle) = 1 - \sqrt{x_1 / g(\langle x_1, ..., x_n \rangle)} - \left( x_1 / g(\langle x_1, ..., x_n \rangle) \right) \sin(10 \pi x_1)$
ZDT4   This function has a convex Pareto front but has many local suboptimal Pareto fronts to
trap individuals, making this a moderately challenging problem. The problem is defined for a
smaller value of n than the others: n = 10. The value $x_1$ ranges in [0, 1], but the other $x_i$ all range in
[−5, 5].

(Minimize) $O_1(\langle x_1, ..., x_n \rangle) = x_1 \qquad x_1 \in [0, 1], \; x_{i>1} \in [-5, 5]$
$O_2(\langle x_1, ..., x_n \rangle) = g(\langle x_1, ..., x_n \rangle) \, h(\langle x_1, ..., x_n \rangle)$
$g(\langle x_1, ..., x_n \rangle) = 1 + 10(n-1) + \sum_{i=2}^{n} \left( x_i^2 - 10 \cos(4 \pi x_i) \right)$
$h(\langle x_1, ..., x_n \rangle) = 1 - \sqrt{x_1 / g(\langle x_1, ..., x_n \rangle)}$
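A compact Python sketch of two of these objective pairs may help make the structure clear. Returning the two objectives as a tuple is my own convention for illustration; ZDT2 and ZDT3 follow the same pattern with their own h.

    import math

    def zdt1(x):
        # x is a vector of n = 30 genes, each in [0, 1]; both objectives are minimized
        n = len(x)
        g = 1.0 + 9.0 / (n - 1) * sum(x[1:])
        h = 1.0 - math.sqrt(x[0] / g)
        return x[0], g * h                  # (O1, O2)

    def zdt4(x):
        # x[0] in [0, 1]; x[1:] each in [-5, 5]; n = 10
        n = len(x)
        g = 1.0 + 10.0 * (n - 1) + sum(xi ** 2 - 10.0 * math.cos(4.0 * math.pi * xi)
                                       for xi in x[1:])
        h = 1.0 - math.sqrt(x[0] / g)
        return x[0], g * h

    # Vectors whose trailing genes are all 0 lie on the global Pareto front of each problem
    print(zdt1([0.25] + [0.0] * 29))        # (0.25, 0.5)
    print(zdt4([0.25] + [0.0] * 9))         # (0.25, 0.5)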
11.2.4 Genetic Programming Problems

As they're optimizing small computer programs, genetic programming problems are somewhat
more colorful, and detailed, than the mathematical functions we've seen so far. The problems
described here aren't very complex: they're often tackled with a population of 1000 or so, run for 51
generations (including the initial generation). The problems described here are from John Koza.207
Function    Arity   Description
(+ i j)     2       Returns i + j
(− i j)     2       Returns i − j
(∗ i j)     2       Returns i × j
(% i j)     2       If j is 0, returns 1, else returns i/j
(sin i)     1       Returns sin(i)
(cos i)     1       Returns cos(i)
(exp i)     1       Returns e^i
(rlog i)    1       If i is 0, returns 0, else returns log(|i|)
x           0       Returns the value of the independent variable (x).
ERCs        0       (Optional) Ephemeral random constants chosen from floating-point values from −1 to 1 inclusive.
Table 5 Symbolic Regression Function Set
Symbolic Regression   This is the canonical example problem for genetic programming,
and is perhaps overused. The objective is to find a mathematical expression
which best fits a set of data points of the form ⟨x, f(x)⟩ for some unknown
(to the optimization algorithm) function f. The traditional function to fit is
$f(x) = x^4 + x^3 + x^2 + x$, though Koza also suggested the functions $g(x) = x^5 - 2x^3 + x$
and $h(x) = x^6 - 2x^4 + x^2$. These functions are shown in Figure 70.

We begin by creating twenty random values $x_1, ..., x_{20}$, each between −1 and 1,
which will be used throughout the duration of the run. An individual is assessed as
follows. For each of the 20 $x_i$ values, we set the leaf-node function x to return the value of $x_i$, then
evaluate the individual's tree. The return value from the tree will be called, say, $y_i$. The fitness of
the individual is how close those 20 $y_i$ matched their expected $f(x_i)$, usually using simple distance.
That is, the fitness is $\sum_{i=1}^{20} | f(x_i) - y_i |$.
[Figure 70   The functions f(x), g(x), and h(x) discussed in the Symbolic Regression section, plotted over [−1, 1]. f(1) = 4.]
Obviously this is a minimization problem. It's easily converted to maximization with $\frac{1}{1 + \text{fitness}}$. An
example ideal solution is: (+ (* x (* (+ x (* x x)) x)) (* (+ x (cos (- x x))) x))
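A minimal Python sketch of this fitness assessment is shown below. For illustration only, the individual is assumed to have already been compiled down to an ordinary Python callable mapping x to a number, rather than being a GP tree.

    import random

    def target(x):
        # Koza's traditional quartic polynomial
        return x ** 4 + x ** 3 + x ** 2 + x

    # Twenty random sample points in [-1, 1], fixed at the start of the run
    sample_points = [random.uniform(-1.0, 1.0) for _ in range(20)]

    def regression_fitness(individual):
        # Sum of absolute errors over the sample points; lower is better, 0 is ideal
        return sum(abs(target(x) - individual(x)) for x in sample_points)

    # A candidate equal to the target polynomial has fitness 0
    print(regression_fitness(lambda x: x ** 4 + x ** 3 + x ** 2 + x))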
11-bit Boolean Multiplexer   The objective of the 11-bit Boolean Multiplexer problem is to find a
boolean function which performs multiplexing over a 3-bit address. There are three boolean-valued
address variables (A0, A1, and A2) and eight corresponding boolean-valued data variables (D0, D1,
D2, D3, D4, D5, D6, D7). The 11-bit Boolean Multiplexer problem must return the value of the data
variable at the address described by the binary values of A0, A1, and A2. For example, if A2 is false
and A1 is true and A0 is true, the address is 3 (binary 011), and so the optimal individual would
return the value stored in D3. Since there are eleven boolean variables altogether, there are 2048
permutations of these variables and hence 2048 test cases. A trivial variant, the 6-bit Boolean
Multiplexer, has two address variables (A0 and A1), four data variables (D0, D1, D2, D3), and 64
test cases.

207 Adapted from John R. Koza, 1992, Genetic Programming: On the Programming of Computers by Means of Natural Selection,
MIT Press and from John R. Koza, 1994, Genetic Programming II: Automatic Discovery of Reusable Programs, MIT Press.
Function               Arity   Description
(and i j)              2       Returns i ∧ j
(or i j)               2       Returns i ∨ j
(not i)                1       Returns ¬i
(if test then else)    3       If test is true, then then is returned, else else is returned.
a0, a1, and a2         0       Return the values of variables A0, A1, and A2 respectively.
d0, d1, d2, d3, d4, d5, d6, and d7    0    Return the values of variables D0, D1, D2, D3, D4, D5, D6, and D7 respectively.
Table 6 11-bit Boolean Multiplexer Function Set
A Multiplexer individual consists of a single tree. To assess the fitness of an individual,
for each test case, the data and address variables are set to return that test
case's permutation of boolean values, and the individual's tree is then evaluated. The
fitness is the number of test cases for which the individual returned the correct value
for the data variable expected, given the current setting of the address variables.
An example of an ideal 11-bit Boolean Multiplexer solution is:
(if (not a0) (if (not a0) (if (not a1) (if a2 (if a2 d4 d6) d0) (if a2 d6 (if a2 d4 d2))) (if (or a2 a2) (if a1 (or (if (not (if a2 d5 d0))
(and (and d4 d0) (and a2 d5)) (or (and d7 d0) (not a1))) (if (not a1) (if (if d4 d1 d5) d0 d5) (or d6 (or (and (and d4 d0) (or (and
d5 d1) (and d6 d6))) (and d7 (or (if a0 (or a2 a2) d4) (and d1 (and d5 a2)))))))) d5) (if a1 (or d3 (and d7 d0)) (if a0 d1 d0)))) (if
(or a2 a2) (if a1 (if (not a1) (if (and d7 d0) (if a2 d5 d0) (if a2 d6 d3)) (and d7 (or (if a0 a2 (or d1 a1)) (not a1)))) d5) (if a1 (or
(if (not a0) (if a2 d6 (if a2 d4 d2)) (if a1 d3 (or (or d3 (if a1 d3 d1)) (not a2)))) (not a1)) (if a0 d1 d0))))
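To make the test-case bookkeeping concrete, here is a Python sketch that computes the correct answer for a test case and scores a candidate over all 2048 cases. The dictionary-based representation of a test case, and the function names, are assumptions made here for clarity; they are not part of Koza's definition.

    from itertools import product

    VARIABLES = ["a2", "a1", "a0"] + ["d%d" % i for i in range(8)]

    def mux_target(case):
        # The address is the binary number A2 A1 A0; return the data bit stored there
        address = (case["a2"] << 2) | (case["a1"] << 1) | case["a0"]
        return case["d%d" % address]

    def mux_fitness(candidate):
        # Number of the 2048 test cases for which the candidate returns the right value
        score = 0
        for bits in product([0, 1], repeat=11):
            case = dict(zip(VARIABLES, bits))
            if bool(candidate(case)) == bool(mux_target(case)):
                score += 1
        return score

    # A perfect multiplexer scores 2048
    print(mux_fitness(mux_target))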
Function            Arity   Description
(and i j)           2       Returns i ∧ j
(or i j)            2       Returns i ∨ j
(nand i j)          2       Returns ¬(i ∧ j)
(nor i j)           2       Returns ¬(i ∨ j)
d0, d1, d2, etc.    0       Return the values of variables D0, D1, D2, ... respectively. The number of dx nodes in the function set is the number of bits in the particular Parity problem being run.
Table 7 Even N-Parity Function Set
Even N-Parity   The Even N-Parity problems are, like 11-bit Boolean Multiplexer,
also boolean problems over some number n of data variables. In the Even N-Parity problems,
the objective is to return true if, for the current boolean settings of these variables,
there is an even number of variables whose value is true. There are thus $2^n$ test cases.
Fitness assessment is basically the same as 11-bit Boolean Multiplexer.

Even N-Parity varies in difficulty depending on N, due to the number of test
cases. Bill Langdon notes that Parity doesn't have any building blocks.208 An ideal Even
4-Parity solution:
(nand (or (or (nor d3 d0) (nand (or d3 d1) (nor d2 d3))) d3) (nor (nor (and (or (and (or (or (nor d1 d2) (and d3 d0)) (and d1 d2))
(nand (and d0 d3) (nand (or d0 d1) (or d2 d1)))) (and (or d0 d2) (and d1 d1))) (nand (and (nor d3 d0) (and (and (nand (nand (nor
d3 d3) (or (or d0 d0) (nor (and d3 d0) (nor d1 (nand d3 d2))))) d2) (nor d1 d1)) (or (or d0 d1) (nor d3 d2)))) (nand (or d0 d1)
(nor d3 d3)))) (or (and (nand d1 d1) (and d1 d3)) (nor (nand (or d1 d2) (nor d3 d0)) d0))) (and (or (or (or (and (nand d1 d1) (and
d1 d3)) (nor (nand (or d1 d2) (nor d3 d0)) (and (nand d1 d3) (and d3 d0)))) (and d3 d0)) (and d3 d2)) (and (and d1 d2) (or (or
d0 (nor (or d0 d0) (and d2 d3))) d0)))))
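The target function itself is easy to state in code. A Python sketch of fitness assessment for Even N-Parity follows, again assuming (purely for illustration) that the individual has been compiled to a callable over a tuple of booleans.

    from itertools import product

    def even_parity_fitness(candidate, n):
        # Count the 2^n test cases the candidate gets right
        score = 0
        for bits in product([False, True], repeat=n):
            target = (sum(bits) % 2 == 0)    # true iff an even number of inputs are true
            if candidate(bits) == target:
                score += 1
        return score

    # XNOR is a perfect Even 2-Parity function: it scores 4 out of 4
    print(even_parity_fitness(lambda b: b[0] == b[1], 2))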
208 William Langdon, 1999, Scaling of program tree fitness spaces, Evolutionary Computation, 7(4), 399–428.
Function                     Arity   Description
(progn3 a b c)               3       a, b, then c are executed.
(progn2 a b)                 2       a, then b are executed.
(if-food-ahead then else)    2       If food is immediately in front of the ant, then is executed, else else is executed.
move                         0       Moves the ant forward one square, eating food if it is there.
left                         0       Rotates the ant ninety degrees to the left.
right                        0       Rotates the ant ninety degrees to the right.
Table 8 Artificial Ant Function Set
Artificial Ant   Artificial Ant is an oddly challenging problem209 for genetic programming.
The Artificial Ant problem attempts to find a simple robotic ant algorithm
which will find and eat the most food pellets within 400 time steps.210 The ant
may move forward, turn left, and turn right. If when moving forward it chances across
a pellet, it eats it. The ant can also sense if there is a pellet in the square directly in
front of it. The grid world in which the Artificial Ant lives is shown in Figure 71. The
pellet trail shown is known as the Santa Fe Trail. The world is toroidal: walking off an
edge moves the ant to the opposite edge.
[Figure 71   The Santa Fe Trail, a toroidal grid world. Black squares indicate pellet locations. The ant starts in the upper-left corner, oriented to the right.]
An Artificial Ant individual consists of a single tree.
Fitness assessment works as follows. The ant starts on the
upper-left corner cell, facing right. The tree is executed:
as each sensory or movement node is executed, the Ant
senses or moves as told. When the tree has completed execution,
it is re-executed again and again. Each movement
counts as one time step. Assessment finishes when the
Ant has eaten all the pellets in the world or when the 400
time steps have expired. The Ant's fitness is the number of
pellets it ate.

The Artificial Ant problem is different from the Symbolic
Regression and the boolean problems in that the return
value of each tree node is ignored. The only thing
that matters is each node's action in the world, that is, each
node's side effect: moving the ant, turning it, etc. This
means that in Artificial Ant, the order in which the nodes
are executed determines the operation of the individual,
whereas in the previous problems, it doesn't matter in what
order subtrees are evaluated. A (highly parsimonious) example
of an optimal Artificial Ant solution is: (progn3 (if-food-ahead move (progn2 left (progn2 (progn3 right right
right) (if-food-ahead move right)))) move right).
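The fitness loop itself is easy to sketch. The following Python fragment interprets an ant tree represented as nested tuples over a hypothetical pellet trail given as a set of (x, y) coordinates. The representation, the grid size, and the choice to charge one time step for turns as well as moves are assumptions of this sketch, not details taken from Koza, and the actual Santa Fe Trail layout is not reproduced here.

    HEADINGS = [(1, 0), (0, 1), (-1, 0), (0, -1)]   # right, down, left, up

    def ant_fitness(tree, trail, width=32, height=32, max_steps=400):
        state = {"x": 0, "y": 0, "dir": 0, "steps": 0,
                 "food": set(trail), "eaten": 0}

        def ahead():
            dx, dy = HEADINGS[state["dir"]]
            return ((state["x"] + dx) % width, (state["y"] + dy) % height)

        def run(node):
            if state["steps"] >= max_steps:
                return
            if node == "move":
                state["x"], state["y"] = ahead()
                state["steps"] += 1
                if (state["x"], state["y"]) in state["food"]:
                    state["food"].remove((state["x"], state["y"]))
                    state["eaten"] += 1
            elif node == "left":
                state["dir"] = (state["dir"] - 1) % 4
                state["steps"] += 1
            elif node == "right":
                state["dir"] = (state["dir"] + 1) % 4
                state["steps"] += 1
            elif node[0] == "if-food-ahead":
                run(node[1] if ahead() in state["food"] else node[2])
            else:   # progn2 or progn3: execute the children in order
                for child in node[1:]:
                    run(child)

        # Execute the tree over and over until time runs out or all pellets are eaten
        while state["steps"] < max_steps and state["food"]:
            run(tree)
        return state["eaten"]

    # The parsimonious optimal individual from the text, written as nested tuples:
    best = ("progn3",
            ("if-food-ahead", "move",
             ("progn2", "left",
              ("progn2", ("progn3", "right", "right", "right"),
               ("if-food-ahead", "move", "right")))),
            "move", "right")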
209 One of my all-time favorite papers, mostly due to its Knuth-like excessive attention to detail, is exactly on this topic:
W. B. Langdon and R. Poli, 1998, Why ants are hard, in John R. Koza, et al., editors, Genetic Programming 1998: Proceedings
of the Third Annual Conference, pages 193–201, Morgan Kaufmann.
210 400 may be due to a misprint that has since established itself. John Koza is believed to have actually used 600.
Function        Arity   Description
(progn2 a b)    2       a, then b are executed. Returns the return value of b.
(v8a i j)       2       Evaluates i and j, adds the vectors they return, modulo 8, and returns the result.
(frog i)        1       Evaluates i. Let ⟨x, y⟩ be i's return value. Then frog moves ⟨x, y⟩ squares relative to its present rotation, where the positive X axis points in the present forward direction of the lawnmower, and the positive Y axis points in the present heading-left direction. Returns ⟨x, y⟩.
mow             0       Moves the lawnmower forward one square, mowing that square of lawn if it is not already mown. Returns ⟨0, 0⟩.
left            0       Rotates the lawnmower ninety degrees to the left. Returns ⟨0, 0⟩.
ERCs            0       Ephemeral random constants of the form ⟨x, y⟩, where x is an integer chosen from the range (0, ..., x_max − 1) and y is an integer chosen from the range (0, ..., y_max − 1), where x_max and y_max are the width and height of the lawn in squares, respectively.
Table 9 Lawnmower Function Set.
Lawnmower   In the Lawnmower problem, the individual directs a lawnmower to mow
a toroidal grid lawn, much as the Artificial Ant domain directs an ant to move about
its toroidal grid world. In the Lawnmower domain, an individual may turn left, mow
forwards, or hop some ⟨x, y⟩ units away. Lawnmower has no sensor information: it
must be hard-coded to mow the lawn blind. The standard lawn size is 8 by 8.

Koza proposed this domain originally to demonstrate the advantages of automatically
defined functions (ADFs).211 Lawnmower is difficult without ADFs but fairly
trivial when using ADFs. When not using ADFs, a Lawnmower individual consists of
a single tree, and the function set is shown in Table 9. When using ADFs, a Lawnmower
individual consists of three trees: the main tree, an ADF1 tree and an ADF2
tree; and the function set is augmented as described in Table 10.

To assess fitness, the lawnmower is placed somewhere on the lawn, and the individual's
tree is executed once. Each mow and frog command moves the lawnmower
and mows the lawn in its new location. Once the tree has been executed, the fitness
is the number of squares of lawn mown. An example optimal individual with ADFs:
Additional ADF functions for Main Tree
Function       Arity   Description
(adf1 arg1)    1       Automatically defined function which calls the ADF1 tree.
adf2           0       Automatically defined function which calls the ADF2 tree.

Additional ADF functions for ADF1 Tree
Function       Arity   Description
adf2           0       Automatically defined function which calls the ADF2 tree.
arg1           0       The value of argument arg1 passed when the ADF1 tree is called.

Removed ADF functions for ADF2 Tree
Function
(frog i)       Removed from the ADF2 function set.

Table 10 Additions to the Lawnmower Function Set when set up with two additional ADF trees (ADF1 and ADF2). All three trees have the same function set except where noted above.
Main Tree: (progn2 (progn2 (adf1 (progn2 (adf1 left) (v8a ⟨7,0⟩ ⟨0,4⟩))) (progn2 left ⟨3,4⟩)) (v8a (progn2 (adf1 (v8a left left)) (progn2 (frog mow) (adf1 adf2))) (adf1 (progn2 (v8a ⟨6,7⟩ adf2) (progn2 ⟨1,1⟩ mow)))))

ADF1: (v8a (v8a (v8a (progn2 (v8a adf2 mow) (v8a adf2 mow)) (frog (v8a mow arg1))) (v8a (v8a (frog arg1) (progn2 ⟨1,4⟩ ⟨2,6⟩)) (progn2 (v8a ⟨1,5⟩ adf2) (frog mow)))) (v8a (v8a (v8a (progn2 adf2 adf2) (v8a adf2 mow)) (v8a (progn2 arg1 adf2) (frog left))) (frog (v8a (v8a arg1 left) (v8a ⟨7,0⟩ mow)))))

ADF2: (progn2 (v8a (progn2 (v8a (v8a mow mow) (v8a mow ⟨5,1⟩)) (v8a (v8a mow left) (progn2 left mow))) (v8a (progn2 (v8a mow mow) (progn2 ⟨1,3⟩ ⟨2,1⟩)) (v8a (progn2 ⟨3,6⟩ mow) (progn2 left ⟨3,4⟩)))) (v8a (progn2 (progn2 (v8a mow left) (progn2 ⟨6,6⟩ ⟨1,4⟩)) (progn2 (v8a mow left) (v8a mow ⟨7,7⟩))) (progn2 (v8a (progn2 left left) (v8a mow left)) (v8a (progn2 left ⟨2,1⟩) (v8a ⟨1,7⟩ mow)))))
211 I've reordered/renamed Koza's original ADFs.
Although this individual looks imposing, in fact with ADFs Lawnmower is fairly easy for
genetic programming to solve. Much of this individual is junk. The reason ADFs work so much
better in this domain is simple and unfair: a Lawnmower individual is executed only once, and has
no iteration or recursion, and so within its tree must exist enough commands to move the lawnmower
to every spot of lawn. To do this with a single tree demands a big tree. But when using ADF
trees, the main tree can repeatedly call ADFs (and ADF1 can repeatedly call ADF2), so the total size
of the individual can be much smaller and still take advantage of many more total moves.

Like Artificial Ant, Lawnmower operates via side effects and so execution order is important.
11.3 Where to Go Next

This is a woefully inadequate collection of resources that I've personally found useful.

11.3.1 Bibliographies, Surveys, and Websites

It's an open secret that computer science researchers put a great many of their papers online,
where they're often accessible from CiteSeerX. Google Scholar is also useful, but usually points to
documents behind publishers' firewalls.

http://citeseerx.ist.psu.edu
http://scholar.google.com

The Hitchhiker's Guide to Evolutionary Computation was the FAQ for the Usenet group comp.ai.genetic.
It's fairly dated: for example its software collection doesn't include anything current. Still, there's a
lot there, especially older work.
http://code.google.com/p/hhg2ec/
The single biggest bibliography in the field is the Genetic Programming Bibliography, by Bill
Langdon, Steven Gustafson, and John Koza. I cannot overstate how useful this huge, immaculately
maintained bibliography has been to me (much of my work has been in genetic programming).
http://www.cs.bham.ac.uk/~wbl/biblio/
Bill Langdon also maintains an extensive collection of bibliographies of EC conferences, etc.
http://www.cs.bham.ac.uk/~wbl/biblio/ec-bibs.html

Carlos Coello Coello maintains a very large collection of multiobjective optimization resources.
http://www.lania.mx/~ccoello/EMOO/

Tim Kovacs maintains a fairly complete bibliography on Learning Classifier Systems.
http://www.cs.bris.ac.uk/~kovacs/lcs/search.html

Jarmo Alander built a bibliography of practically all Genetic Algorithm publications up to 1993.
ftp://ftp.cs.bham.ac.uk/pub/Mirrors/ftp.de.uu.net/EC/refs/2500GArefs.ps.gz
Many other bibliographies can be found at the Collection of Computer Science Bibliographies. Look
under the Artificial Intelligence, Neural Networks, and Parallel Processing subtopics.
http://liinwww.ira.uka.de/bibliography/

Liviu Panait and I wrote a large survey of cooperative multiagent learning, which includes a lot of
stuff on coevolution and its relationships to other techniques (like multiagent Q-learning).
http://cs.gmu.edu/~eclab/papers/panait05cooperative.pdf
Liviu Panait and Sean Luke, 2005, Cooperative multi-agent learning: The state of the art,
Autonomous Agents and Multi-Agent Systems, 11, 2005
A good Particle Swarm Optimization website, with lots of resources, is Particle Swarm Central.
http://www.particleswarm.info

Marco Dorigo maintains one of the best Ant Colony Optimization websites out there, including
pointers to software, publications, and venues.
http://www.aco-metaheuristic.org

Paula Festa and Mauricio Resende maintain an annotated bibliography of GRASP literature.
http://www.research.att.com/~mgcr/grasp/gannbib/gannbib.html

Lee Spector has a website on the Push language and publications.
http://hampshire.edu/lspector/push.html

Julian Miller runs a website on Cartesian Genetic Programming.
http://cartesiangp.co.uk/

Michael O'Neill maintains a website on Grammatical Evolution resources.
http://www.grammatical-evolution.org

Rainer Storn also maintains a website on Differential Evolution.
http://www.icsi.berkeley.edu/~storn/code.html

Various papers on Guided Local Search may be found at Edward Tsang's laboratory website:
http://www.bracil.net/CSP/gls-papers.html

Mark Wineberg and Steffen Christensen regularly do a lecture on statistics specifically for metaheuristics
researchers. Mark keeps a PDF of the lecture slides on his home page.
http://www.cis.uoguelph.ca/~wineberg/publications/ECStat2004.pdf
http://www.cis.uoguelph.ca/~wineberg/
ACM SIGEvo is the ACM's special interest group on evolutionary computation. In addition to
sponsoring various major conferences and journals, they also have a newsletter, SIGEvolution.
The IEEE Computational Intelligence Society's Evolutionary Computation Technical Committee
(IEEE-CIS-ECTC, phew) is the approximate equivalent for the IEEE.
http://www.sigevo.org
http://www.sigevolution.org
http://www.ieee-cis.org/technical/ectc/
11.3.2 Publications

Ready for lots more? Thomas Weise's 800-page, free open text Global Optimization Algorithms: Theory
and Application goes in-depth into a number of the topics covered here. It's got a lot of formalism,
with analysis and descriptive applications, and well over 2000 references. Did I mention it's free?
http://www.it-weise.de
As far as books go, I think the single best guide to the craft of stochastic optimization is How to Solve
It: Modern Heuristics,212 by Zbigniew Michalewicz and David Fogel. Fun to read, filled with stories
and examples, and covering a very broad collection of issues and topics.
Zbigniew Michalewicz and David Fogel, 2004, How to Solve It: Modern Heuristics, Springer

The best book on Ant Colony Optimization is Marco Dorigo and Thomas Stützle's Ant Colony
Optimization.
Marco Dorigo and Thomas Stützle, 2004, Ant Colony Optimization, MIT Press

If you are interested in genetic programming, check out Genetic Programming: An Introduction by
Wolfgang Banzhaf, Peter Nordin, Robert Keller, and Frank Francone. It's aging but still good.
Wolfgang Banzhaf, Peter Nordin, Robert E. Keller, and Frank D. Francone, 1998, Genetic
Programming: An Introduction, Morgan Kaufmann

A much newer Genetic Programming work is A Field Guide to Genetic Programming by Riccardo
Poli, Bill Langdon, and Nick McPhee, which has the added benefit of being free online if you're too
cheap to buy the print copy! (Buy the print copy.)
Riccardo Poli, William B. Langdon, and Nicholas Freitag McPhee, 2008, A Field Guide to
Genetic Programming, Available in print from lulu.com
http://www.gp-field-guide.org.uk/

Kalyanmoy Deb's Multi-Objective Optimization Using Evolutionary Algorithms is a good text for
multiobjective optimization, but it's expensive.
Kalyanmoy Deb, 2001, Multi-Objective Optimization using Evolutionary Algorithms, Wiley
212 This book's name is adapted from a very famous book which revolutionized the use of algorithmic methods for
solving complex problems: George Pólya, 1945, How to Solve It, Princeton University Press.
Kenneth Price, Rainer Storn, and Jouni Lampinen's Differential Evolution is likewise good but
expensive.
Kenneth Price, Rainer Storn, and Jouni Lampinen, 2005, Differential Evolution: A Practical
Approach to Global Optimization, Springer

James Kennedy, Russell Eberhart, and Yuhui Shi's seminal book on Particle Swarm Optimization is
Swarm Intelligence. Unfortunately this was a very poor choice of name: there was already a Swarm
Intelligence, published two years earlier, largely about Ant Colony Optimization. That one was by
Eric Bonabeau, Marco Dorigo, and Guy Theraulaz.213
James Kennedy, Russell Eberhart, and Yuhui Shi, 2001, Swarm Intelligence, Morgan Kaufmann
Eric Bonabeau, Marco Dorigo, and Guy Theraulaz, 1999, Swarm Intelligence: From Natural to
Artificial Systems, Oxford University Press

Though it is getting somewhat long in the tooth, Melanie Mitchell's An Introduction to Genetic
Algorithms is still quite a good, well, introduction to genetic algorithms.
Melanie Mitchell, 1996, An Introduction to Genetic Algorithms, MIT Press

David Fogel's Blondie24 recounts the development of a one-population competitive coevolutionary
algorithm to learn how to play checkers very strongly, and casts it in the context of artificial
intelligence in general.
David Fogel, 2001, Blondie24: Playing at the Edge of AI, Morgan Kaufmann

Last, but far from least, Ken De Jong's Evolutionary Computation: A Unified Approach puts not only
most of the population methods but a significant chunk of all of metaheuristics under one unifying
framework. It covers a lot of what we don't cover here: the theory and analysis behind these topics.
Kenneth De Jong, 2006, Evolutionary Computation: A Unified Approach, MIT Press
11.3.3 Tools

There's lots of stuff out there. Here's just a few:

So let's get the obvious one out of the way first. ECJ214 is a popular population-based
toolkit with facilities for parallel optimization, multiobjective optimization, and most representations,
including genetic programming. ECJ is designed for large projects and so it has a somewhat
steep learning curve. But its author is very responsive, and unusually handsome as well. If
you meet this person in the street, you should give him a big hug. ECJ also dovetails with a
multiagent simulation toolkit called MASON. Both are in Java. ECJ's web page points to a lot of
other Java-based systems, if ECJ's too heavyweight for you.
http://cs.gmu.edu/~eclab/projects/ecj/
http://cs.gmu.edu/~eclab/projects/mason/
213 Believe it or not, there's now a third book which has foolishly been titled Swarm Intelligence!
214 ECJ doesn't actually stand for anything. Trust me on this. Though people have made up things like "Evolutionary
Computation in Java" or whatnot.
If you prefer C++, here are two particularly good systems. EO is an evolutionary computation
toolkit, and an extension, ParadisEO, adds single-state, parallel, and multiobjective optimization
facilities. A competitor, Open BEAGLE, also provides good evolutionary and parallel tools.
http://eodev.sourceforge.net/
http://paradiseo.gforge.inria.fr
http://beagle.gel.ulaval.ca

If you're looking for more general-purpose metaheuristics frameworks (single-state optimization,
combinatorial optimization methods, etc.), you might consider the ones examined in a recent survey
by José Antonio Parejo, Antonio Ruiz-Cortés, Sebastián Lozano, and Pablo Fernandez.215 Besides
some of the above frameworks (ECJ, EO/ParadisEO), they looked at EasyLocal++, EvA2, FOM,
HeuristicLab, JCLEC, MALLBA, OAT, and Opt4j.
http://tabu.diegm.uniud.it/EasyLocal++/
http://www.ra.cs.uni-tuebingen.de/software/EvA2/
http://www.isa.us.es/fom/
http://dev.heuristiclab.com
http://jclec.sourceforge.net
http://neo.lcc.uma.es/mallba/easy-mallba/
http://optalgtoolkit.sourceforge.net
http://opt4j.sourceforge.net
If you need a good XCS library, Martin Butz has an XCSF library in Java, and Pier Luca Lanzi has
XCS implementations in C and C++.
http://www.wsi.uni-tuebingen.de/lehrstuehle/cognitive-modeling/code/
http://illigal.org/2003/10/01/xcs-tournament-selection-classifier-system-implementation-in-c-version-12/
http://illigal.org/2009/03/24/xcslib-the-xcs-classifier-system-library/
The Particle Swarm Optimization folks have coalesced around a single C file as a kind of reference
standard. It's well written and documented. As of this printing, the latest was the SPSO-2011
version. You can find this and lots of other PSO systems here:
http://www.particleswarm.info/Programs.html
Genetic Programming Systems   Because of its complexity, GP tends to encourage systems built
just for it. ECJ, EO, and Open BEAGLE all have strong support for tree-style GP and, in some cases,
variations like Grammatical Evolution or Push. They're popular tools if you're doing Java or C++.
Besides these systems, you should also check out...

If you're looking to do GP in straight C, lil-gp is a bit long in the tooth nowadays but still handy.
http://garage.cse.msu.edu/software/lil-gp/
215 José Antonio Parejo, Antonio Ruiz-Cortés, Sebastián Lozano, and Pablo Fernandez, 2012, Metaheuristics optimization
frameworks: a survey and benchmarking, Soft Computing, 16, 527–561.
Likewise, if you'd like to do GP in MATLAB, check out Sara Silva's GPlab.
http://gplab.sourceforge.net/

Lee Spector maintains a list of Push implementations. The big one is Clojush, written in Clojure.
http://faculty.hampshire.edu/lspector/push.html

There are several Grammatical Evolution systems, all listed here, including the seminal libGE.
http://www.grammatical-evolution.org/software.html

The best-known implementation of Linear Genetic Programming is Discipulus. Note: it is not free.
http://www.rmltech.com/

Julian Miller's Cartesian Genetic Programming website lists all the current CGP implementations.
http://cartesiangp.co.uk/resources.html

Eureqa is a well-regarded system for using Genetic Programming to analyze, visualize, and solve
nontrivial Symbolic Regression problems.
http://creativemachines.cornell.edu/eureqa/
11.3.4 Conferences

The big kahuna is the Genetic and Evolutionary Computation Conference, or GECCO, run by ACM
SIGEvo (http://www.sigevo.org). GECCO is the merging of the former GP and ICGA conferences.
It's usually held in the United States, and has lots of smaller workshops attached to it.

If you're an undergraduate student, I highly recommend that you submit to the GECCO
Undergraduate Student Workshop. It's a great venue to show off your stuff, and they're friendly and
encouraging. If you're a graduate student and would like some tough feedback on your proposed
thesis work, a great pick is the GECCO Graduate Student Workshop, where you present your work in
front of a panel of luminaries who then critique it (and they're not nice!). This is a good thing: better
to hear it in a friendly workshop than when you're doing your proposal or thesis defense! Both
workshops are specially protected from the rest of the conference and run by people who really
care about you as a student.

The primary European conference is the International Conference on Parallel Problem Solving from
Nature, or PPSN. It's not historically been very large but is of unusually high quality.

The third major conference is the IEEE Congress on Evolutionary Computation, or CEC, held in various
spots around the world. It's often quite large.

The three conferences above are dominated by evolutionary computation techniques. An alternative
conference for other methods is the Metaheuristics International Conference or MIC.
The oldest theory workshop, and almost certainly the most respected venue in the field,216 is the
venerable Foundations of Genetic Algorithms workshop, or FOGA, run by ACM SIGEvo, and usually
in the United States. It's not just about the Genetic Algorithm any more, but rather about all kinds
of metaheuristics theory: indeed, in 2009 there wasn't a single Genetic Algorithm paper in the whole
workshop! FOGA is held every other year. The year that FOGA's not held, an alternative theory
workshop has lately been hosted at Schloss Dagstuhl (http://www.dagstuhl.de) in Germany.

Europe is also host to the European Conference on Genetic Programming, or EuroGP, an alternative
conference focused, not surprisingly, on genetic programming.

Not to be outdone, the invitation-only Genetic Programming Theory and Practice Workshop, or GPTP,
is held each year at the University of Michigan.

Ant Colony Optimization also has its own conference apart from the big ones above: the International
Conference on Ant Colony Optimization and Swarm Intelligence or ANTS.217

Particle Swarm Optimization and Ant Colony Optimization folks, among others, have also lately
been attending the IEEE Swarm Intelligence Symposium or SIS.

The area of Evolvable Hardware (EH)218 concerns itself with the optimization of hardware designs:
circuits, antennas, and the like. This field often has a prominent showing at the NASA/ESA
Conference on Adaptive Hardware and Systems.

I would be remiss in not mentioning conferences in Artificial Life (ALife),219 the simulation of abstract
biological processes. ALife has long been strongly associated with metaheuristics, and particularly
with evolutionary computation.220 Major ALife conferences include the International Conference
on the Simulation and Synthesis of Living Systems (or ALife), the European Conference on Artificial Life (or
ECAL), and From Animals to Animats: the International Conference on Simulation of Adaptive Behavior
(or SAB). ALife and ECAL are run by the International Society of Artificial Life (http://alife.org).
SAB is run by the International Society for Adaptive Behavior (http://www.isab.org.uk).
216 For example, I have twice chosen to publish at FOGA rather than in even our best journals. That's not atypical.
217 Annoyingly, this is not an acronym.
218 Evolvable Hardware is notable in that the fitness function is often done in real hardware. Here's a famous story.
Adrian Thompson was an early Evolvable Hardware researcher who worked on optimizing computer circuits using
evolutionary algorithms. Adrian had access to early releases of the Xilinx XC6216 FPGA, a chip which was capable of
forming arbitrary circuits on-chip through the deft use of a grid of programmable gates. The evolutionary algorithm
performed fitness assessment by actually programming the chip with the given circuit, then testing its performance
on an oscilloscope. Problem is, when Adrian received the final optimized circuits, they sometimes consisted of
disconnected circuits with various vestigial sections that didn't do anything. But when he deleted these regions, the
circuit stopped working on the chip! It turns out that the early Xilinx chips given to Adrian had bugs on them, and
the evolutionary algorithm was finding solutions which identified and took advantage of the bugs. Not generalizable! See
Adrian's homepage for various literature: http://www.informatics.sussex.ac.uk/users/adrianth/ade.html
219 ALife lies at the intersection of computer scientists interested in stealing ideas from biology, and biologists interested
in using computers for modeling. Since you're probably in the former camp, allow me to suggest a recent text which
romps all over the area, everything from evolutionary neural networks to swarms to Lindenmayer systems: Dario
Floreano and Claudio Mattiussi, 2008, Bio-Inspired Artificial Intelligence: Theories, Methods, and Technologies, MIT Press.
220 ALife is so strongly associated with evolutionary computation that the journal Evolutionary Computation has a sister
journal, Artificial Life, which is among the primary journals in ALife.
11.3.5 Journals

At this point, I think the three primary journals in the field are all evolutionary computation
journals: but they accept papers on all topics in metaheuristics (and indeed many of the seminal
non-EC metaheuristics papers are in these journals).

The oldest and (I think) the most respected journal in the field is Evolutionary Computation (MIT
Press), often nicknamed ECJ.221 Originally founded by Ken De Jong, Evolutionary Computation has a
long track record of strong theoretical publication and good empirical work.222

As artificial life and metaheuristics have long been closely associated, Evolutionary Computation has
a sister journal, also by MIT Press: Artificial Life.

IEEE Transactions on Evolutionary Computation (IEEE TransEC) is a first-rate, highly ranked journal
which has a bit more of an application and technical emphasis. My first solo journal publication was
in IEEE TransEC and it was a most pleasant publication experience. Because it's an IEEE journal,
IEEE TransEC also benefits from a high Impact Factor, which isn't something to be dismissed!

Genetic Programming and Evolvable Machines (GPEM) is a newer journal which emphasizes genetic
programming and evolvable hardware, but takes a wide range of papers. It's well regarded and is
published by Springer.223 The GPEM editor also maintains a blog, listed below.
http://gpemjournal.blogspot.com/
11.3.6 Email Lists

There are plenty of email lists, but let me single out three in particular.

EC-Digest is a long-running mailing list for announcements of interest to the metaheuristics
community. It's moderated and low-bandwidth.
http://ec-digest.research.ucf.edu/

The Genetic Programming Mailing List is an active discussion list covering GP.
http://tech.groups.yahoo.com/group/genetic_programming/

The Ant Colony Optimization Mailing List is a relatively light discussion list mostly for announcements
regarding ACO.
https://iridia.ulb.ac.be/cgi-bin/mailman/listinfo/aco-list
http://iridia.ulb.ac.be/~mdorigo/ACO/mailing-list.html
221 I'm not sure if Ken De Jong has yet forgiven me for giving my software the same acronym. I just didn't know that
Evolutionary Computation sometimes had a "Journal" after it!
222 Truth in advertising: I'm presently on the Evolutionary Computation editorial board.
223 More truth in advertising: I'm on the editorial board of Genetic Programming and Evolvable Machines.
11.4 Example Course Syllabi for the Text

Weeks are numbered, and each week is assumed to be approximately four hours of lecture time.
Topics are organized in approximate order of significance and dependency. Note that the Combinatorial
Optimization, Coevolution, and Model Fitting sections make fleeting, nonessential reference
to one another. Rough chapter dependencies are shown in the Table of Contents (page 1).

Simple Syllabus   A lightweight one-semester course covering common algorithms and topics.
1. Introduction, Gradient-based Optimization (Sections 0, 1)
2. Single-State Methods (Sections 2–2.4)
3. Population Methods (Sections 3–3.2, 3.6)
4. Representation (Sections 4–4.1, 4.3–4.3.3)
Optional:
5. Multiobjective Optimization (Section 7)
6. Combinatorial Optimization (Sections 8.1–8.3)
7. Parallel Methods (Sections 5–5.3)
8. Coevolution (Sections 6–6.3)

Firehose Syllabus   An intensive one-semester senior-level or master's-level course.
1. Introduction, Gradient-based Optimization (Sections 0, 1)
2. Single-State Methods (Section 2)
3. Population Methods (Section 3)
4. Representation (Sections 4–4.1, 4.3, and 4.4)
5. Representation (Sections 4.2, 4.5, and 4.6)
6. Multiobjective Optimization (Section 7)
7. Combinatorial Optimization (Section 8)
8. Parallel Methods (Section 5)
9. Coevolution (Section 6)
10. Model Fitting (Section 9)
11. Policy Optimization (Sections 10–10.2) (presuming no prior knowledge of Q-Learning)
12. Policy Optimization (Sections 10.3–10.5)
Errata

The errata224 omits a great many minor typo fixes and other insignificant changes.
Errata for Online Version 0.1 → Online Version 0.2
Page 0 Fixed the URL: it was "books" rather than "book". It's a long story.
Page 0 Updated the Thanks. I wasn't sufficiently thankful.
Page 14 Added a thing on Newton.
Page 23 Added a thing on Gauss.
Page 24 Tweaked Table 1 in the hopes that it's a bit less obtuse now.
Page 35 Added mention of adaptive and self-adaptive operators to Evolution Strategies.
Page 57 Algorithm 39 (Particle Swarm Optimization), line 3, should read:
    proportion of personal best to be retained
Page 82 New representation added (stack languages and Push). I'd meant to include them but hadn't yet figured out
how to do it (they're both trees and lists).
Page 225 Added Push to the Miscellany.
Page 233 Added the Errata. Let no one say I'm afraid of self-reference.
Thanks to Lee Spector, Markus, Don Miner, Brian Ross, Mike Fadock, Ken Oksanen, Asger Ottar Alstrup, and James
O'Beirne.
Errata for Online Version 0.2 → Online Version 0.3
Page 0 Updated the Thanks. Again, not nearly thankful enough.
Page 15 Algorithm 3 (Newton's Method with Restarts (One Dimensional Version)), line 6, should read:
    until $f'(x) = 0$ and $f''(x) < 0$
Page 15 Added a footnote on zero gradient.
Page 123 Added a footnote on Alternating Optimization and its relationship to N-Population Cooperative Coevolution.
Page 224 Revised the URLs for the Hitchhiker's Guide to Evolutionary Computation (the Usenet FAQ), and for Encore.
Thanks to Joerg Heitkoetter, Don Sofge, and Akhil Shashidhar.
Errata for Online Version 0.3 → Online Version 0.4
Page 32 New footnote distinguishing between survival and parent selection.
Page 138 Algorithm 100 (Computing a Pareto Non-Dominated Front), after line 8, insert:
    break out of inner for-loop
Page 140 Algorithm 102 (Multiobjective Sparsity Assignment), added a comment to make it clear how to use the
algorithm on a single Pareto Front Rank.
Page 140 Algorithm 102 (Multiobjective Sparsity Assignment), line 12 should read:
    return R with Sparsities assigned
Page 141 Algorithm 104 (An Abstract Version of the Non-Dominated Sorting Genetic Algorithm II (NSGA-II)), added
some comments to make it more clear what AssessFitness does.
Page 224 Further updates to the Hitchhiker's Guide to Evolutionary Computation URL.
Page 232 Added PSO to the Simple Syllabus.
Thanks to Jeff Bassett, Guillermo Calderón-Meza, and Joerg Heitkoetter.

224 Man, you are really bored, aren't you. Reading the errata. I mean, come on.
Errata for Online Version 0.4 → Online Version 0.5
Page 26 Scatter Search should have been Section 3.3.5.
Page 38 Added footnote to note Crossover's use with ES.
Page 41 Added a footnote to mention Schwefel's early work in K-vector uniform crossover.
Page 42 Modified Footnote 28 to give more credit to Schwefel and discuss terminology.
Page 49 Complete revision of section to broaden definition of memetic algorithms.
Page 106 Added a bit more on Asynchronous Evolution.
Page 148 Added Footnote 129 on further reading on constrained stochastic optimization.
Page 161 Changed the Section name from "Model Fitting" to "Optimization by Model Fitting". It's more fitting.
Thanks to Hans-Paul Schwefel, Pablo Moscato, Mark Coletti, and Paul Wiegand.
Errata for Online Version 0.5 → Online Version 0.6
Page 12 Added notation on functions.
Page 40 Added some further discussion about the perils of crossover.
Page 45 Added a footnote on tournament selection variations.
Page 47 Minor extension in discussion about elitism.
Page 49 More discussion about memetic algorithms.
Page 76 Split the footnote.
Pages 120–125 Changed "K-fold" to "K-fold" in all algorithm names.
Page 121 Because it's not formally elitist, changed the name of Algorithm 87 (Elitist Relative Fitness Assessment
with an Alternative Population) to K-fold Relative Fitness Assessment with the Fittest of an Alternative
Population.
Page 125 Changed "k more tests" to simply "k tests" in the comments (Algorithm 91, K-fold Joint Fitness Assessment
of N Populations).
Page 209 Changed recommendation to use 100 rather than ∞ for large degrees of freedom.
Page 226 Added Global Optimization Algorithms: Theory and Application. How did I manage to not include this?
Thanks to Yury Tsoy and Vittorio Ziparo.
Errata for Online Version 0.6 → Online Version 0.7
Page 27 Improved the code of Algorithm 15 (Feature-based Tabu Search) (no corrections).
Page 77 Improved the code of Algorithm 56 (The PTC2 Algorithm) (no corrections).
Page 102 Algorithm 68 (Simple Parallel Genetic Algorithm-style Breeding), line 2, should read:
    T ← set of threads $\{T_1, ..., T_n\}$
Page 178 Text should read:
    We need to replace the $\sum_{s'} P(s'|s, a)$ portion of Equation 1.
Page 182 Text should read:
    ⟨4, 1⟩
Page 181 Equation should read:
    $Q^*(s, a) = R(s, a) + \gamma \sum_{s'} P(s'|s, a) \max_{a'} E\Big[ \sum_{t=0}^{\infty} \gamma^t R(s_t, a_t) \,\Big|\, s_0 = s', a_0 = a', a_{t \geq 1} = \pi^*(s_t) \Big]$
Page 184 Added another reference to K-Nearest Neighbor.
Page 202 Algorithm 131 (XCS Fitness Updating), line 21, should read:
    for each rule $A_i \in A$ do
Page 198 Line number references (28 and 29) corrected.
Page 199 Algorithm 133 (XCS Fitness Updating (Extended)), line 15, should read:
    Experience($A_i$) ← Experience($A_i$) + 1
Page 199 Algorithm 133 (XCS Fitness Updating (Extended)), line 23, should read:
    for each rule $A_i \in A$ do
Page 199 Line number references (15, 16, and 24) corrected.
Thanks to Yury Tsoy.
Errata for Online Version 0.7 Online Version 0.8
Page 27 Algorithm 15 (Feature-based Tabu Search), line 9, should read:
Remove from L all tuples of the form hX, di where c d > l
Page 90 Associated agents with policies.
Page 94 Algorithm 62 (Simple Production Ruleset Generation), completely updated to fix a few minor bugs and to give the user the option to prevent disconnected rules. The revised algorithm now reads:
1: ~t ← pre-defined set of terminal symbols (that don't expand)
2: p ← approximate probability of picking a terminal symbol
3: r ← flag: true if we want to allow recursion, else false
4: d ← flag: true if we want to allow disconnected rules, else false
5: n ← a random integer > 0 chosen from some distribution
6: ~v ← vector of unique symbols ⟨v_1, ..., v_n⟩    ▷ The symbol in v_1 will be our start symbol
7: rules ← empty vector of rules ⟨rules_1, ..., rules_n⟩
8: for i from 1 to n do    ▷ Build rules
9:     l ← a random integer ≥ 1 chosen from some distribution
10:    ~h ← empty vector of symbols ⟨h_1, ..., h_l⟩
11:    for j from 1 to l do
12:        if (r = false and i = n) or p < random value chosen uniformly from 0.0 to 1.0 inclusive then
13:            h_j ← a randomly chosen terminal symbol from ~t not yet appearing in ~h
14:        else if r = false then
15:            h_j ← a randomly chosen nonterminal from v_{i+1}, ..., v_n not yet appearing in ~h
16:        else
17:            h_j ← a randomly chosen nonterminal symbol from ~v not yet appearing in ~h
18:    rules_i ← rule of the form v_i → h_1 h_2 ... h_l
19: if d = false then    ▷ Fix disconnected rules
20:    for i from 2 to n do
21:        if v_i does not appear in the head of any of the rules rules_1, ..., rules_{i−1} then
22:            l ← a random integer chosen uniformly from 1 to i − 1 inclusive
23:            Change rules_l from the form v_l → h_1 ... to the form v_l → v_i h_1 ...
24: return rules
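A rough Python sketch of the revised generator may help; the terminal set, the probability p, and the size distributions below are arbitrary illustrative choices (the algorithm leaves them to the user), and the handling of exhausted symbol pools is a simplification.

import random

def generate_ruleset(terminals, p=0.5, recursion=True, disconnected=False,
                     n=None, max_head=3):
    # number of nonterminal symbols / rules; the distribution is an arbitrary choice
    n = n if n is not None else random.randint(2, 5)
    v = ["v%d" % i for i in range(1, n + 1)]          # v[0] is the start symbol
    rules = []
    for i in range(1, n + 1):
        l = random.randint(1, max_head)               # length of this rule's head
        h = []
        for _ in range(l):
            if (not recursion and i == n) or p < random.random():
                pool = [t for t in terminals if t not in h]
            elif not recursion:
                pool = [s for s in v[i:] if s not in h]   # only later nonterminals
            else:
                pool = [s for s in v if s not in h]
            if pool:                                  # simplification: skip if exhausted
                h.append(random.choice(pool))
        rules.append((v[i - 1], h))                   # rule v_i -> h_1 ... h_l
    if not disconnected:                              # splice unreachable symbols in
        for i in range(2, n + 1):
            heads = [sym for _, head in rules[:i - 1] for sym in head]
            if v[i - 1] not in heads:
                l = random.randrange(i - 1)           # uniform over rules 1 .. i-1
                lhs, head = rules[l]
                rules[l] = (lhs, [v[i - 1]] + head)
    return rules

print(generate_ruleset(["a", "b", "c"]))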
Page 126 Michael Jordan was a guard, not a center. That was embarrassing!
Page 140 Algorithm 102 (Multiobjective Sparsity Assignment) tweaked to permit objectives with different ranges; this was added by Deb et al. in later implementations of the algorithm. To do this, insert after line 2 the following line:
Range(O_i) ← function providing the range (max − min) of possible values for a given objective O_i
Then change line 12 to read:
Sparsity(F′_j) ← Sparsity(F′_j) + (ObjectiveValue(O_i, F′_{j+1}) − ObjectiveValue(O_i, F′_{j−1})) / Range(O_i)
Page 173 Again associated agents with policies.
Page 207 Moved Footnote 186.
Page 225 Moved Mark and Steffen's resource entry to the end of the list to be consistent with Footnote 192.
Page 230 Added Footnote 218 about Adrian Thompson and computer circuits.
Page 230 Added Footnote 219 about Dario Floreanos book.
Thanks to Ivan Krasilnikov, Yuri Tsoy, Uday Kamath, Faisal Abidi, and Yow Tzu Lim.
Errata for Online Version 0.8 → Online Version 0.9
All Pages Added hyperlinks to all chapter, page, remote footnote, or index references. They appear as little red rectangles. Footnote marks do not have them: it's too distracting. The hyperrefs do not appear when the publication is printed. We'll see if people can stand them or not.
Page 0 Noted Roger Alsing and added him to the index. He wasn't there due to a bug in LaTeX: you can't add index entries for pages less than 0.
Page 9 Footnote 1 expanded to discuss more alternate names.
Page 11 Reformatted the Notation to be entirely in itemized form.
Page 21 Line 14 of Algorithm 10 (Hill Climbing with Random Restarts) should read:
Until Best is the ideal solution or we have run out of total time
Page 24 It's the Box-Muller-Marsaglia Polar Method.
Page 24 Fixes to Algorithm 12 (Sample from the Gaussian Distribution (Box-Muller-Marsaglia Polar Method)). We were doing normal distribution transforms using variance instead of standard deviation. Specifically, Lines 8 and 9 should read:
g ← μ + σx √(−2 ln w / w)
and
h ← μ + σy √(−2 ln w / w)
Page 24 We were doing normal distribution transforms using variance instead of standard deviation. Equation should read:
N(μ, σ²) = μ + √(σ²) N(0, 1) = μ + σ N(0, 1)
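Both fixes amount to scaling unit normals by the standard deviation σ rather than the variance σ². A minimal Python sketch of the corrected polar method, with illustrative parameters:

# Marsaglia polar method sketch: returns two samples from N(mu, sigma^2).
# Note the corrected lines scale by sigma (standard deviation), not sigma^2.
import math, random

def polar_gaussian(mu=0.0, sigma=1.0):
    while True:
        x = random.uniform(-1.0, 1.0)
        y = random.uniform(-1.0, 1.0)
        w = x * x + y * y
        if 0.0 < w < 1.0:                   # reject points outside the unit disc
            break
    scale = math.sqrt(-2.0 * math.log(w) / w)
    g = mu + sigma * x * scale              # corrected line 8
    h = mu + sigma * y * scale              # corrected line 9
    return g, h

print(polar_gaussian(mu=5.0, sigma=2.0))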
Page 25 Line 11 of Algorithm 12 (Simulated Annealing) should read:
until Best is the ideal solution, we have run out of time, or t ≤ 0
Page 29 Line 16 of Algorithm 16 (Iterated Local Search with Random Restarts) should read:
Until Best is the ideal solution or we have run out of total time
Page 37 Line 16 of Algorithm 20 (The Genetic Algorithm) should read:
Q ← Q ∪ {Mutate(C_a), Mutate(C_b)}
Page 40 Footnote 26 added to discuss epistasis.
Page 40 Expansion of the paragraph discussing crossover and linkage.
Page 53 Line 2 of Algorithm 37 (Simplified Scatter Search with Path Relinking) is deleted (it defined an unused variable).
Page 88 Line 7 of Algorithm 9 (One Point List Crossover) should read:
~y ← snip out w_d through w_k from ~w
Thanks to Daniel Carrera, Murilo Pontes, Maximilian Ernestus, Ian Barfield, Forrest Stonedahl, and Yuri Tsoy.
Errata for Online Version 0.9 → Online Version 0.10
Page 63 Line 12 of Algorithm 42 (Random Walk Mutation) should read:
until b < random number chosen uniformly from 0.0 to 1.0 inclusive
Page 69 Algorithm 47 renamed to Build a Simple Graph.
Page 69 Algorithm 48 completely replaced and renamed to Build a Simple Directed Acyclic Graph. The revised algorithm now reads:
1: n ← chosen number of nodes
2: D(m) ← probability distribution of the number of edges out of a node, given number of in-nodes m
3: f(j, k, Nodes, Edges) ← function which returns true if an edge from j to k is allowed
4: set of nodes N ← {N_1, ..., N_n}    ▷ Brand new nodes
5: set of edges E ← {}
6: for each node N_i ∈ N do
7:     ProcessNode(N_i)    ▷ Label it, etc., whatever
8: for i from 2 to n do
9:     p ← random integer ≥ 1 chosen using D(i − 1)
10:    for j from 1 to p do
11:        repeat
12:            k ← random number chosen uniformly from 1 to i − 1 inclusive
13:        until f(i, k, N, E) returns true
14:        g ← new edge from N_i to N_k
15:        ProcessEdge(g)
16:        E ← E ∪ {g}
17: return N, E
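As a sanity check, here is a small Python sketch of the same construction; the edge-count distribution and the edge-permission function are hypothetical defaults standing in for D(m) and f(j, k, Nodes, Edges):

# Sketch of the revised DAG builder: node i (i >= 2) gets a random number of edges,
# each pointing back to some earlier node, so the result is acyclic by construction.
import random

def build_dag(n, edge_count=lambda m: random.randint(1, m),
              allowed=lambda j, k, nodes, edges: True):
    nodes = list(range(1, n + 1))
    edges = set()
    for i in range(2, n + 1):
        p = edge_count(i - 1)                  # how many out-edges node i gets
        for _ in range(p):
            while True:
                k = random.randint(1, i - 1)   # candidate earlier node
                if allowed(i, k, nodes, edges):
                    break
            edges.add((i, k))
    return nodes, edges

nodes, edges = build_dag(6)
print(nodes, sorted(edges))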
Page 76 Line 11 of Algorithm 54 (The Full Algorithm) should read:
Child i of n ← DoFull(depth + 1, max, FunctionSet)
Page 77 Line 9 of Algorithm 56 (The PTC2 Algorithm) should read:
for each child argument slot b of r
Page 87 Line 10 of Algorithm 58 (Random Walk) should read:
until b < random number chosen uniformly from 0.0 to 1.0 inclusive
Page 100 Line 37 of Algorithm 65 (Thread Pool Functions) should read:
Wait on l
Page 107 Line 21 of Algorithm 73 (Asynchronous Evolution) should read:
if ||P|| = popsize
Page 116 Line 7 of Algorithm 81 (Single-Elimination Tournament Relative Fitness Assessment) should read:
Q_j defeated Q_{j+1} in that last Test
Page 122 Text changed to clarify that Potter and DeJong proposed cooperative coevolution, not competitive coevolution.
Page 230 Footnote 220 added about the relationship between ALife and Evolutionary Computation.
Thanks to Gabriel Balan and Muhammad Iqbal.
Errata for Online Version 0.10 → Online Version 0.11
Page 54 Added some new discussion on survival selection.
Page 55 Algorithm 38 (Differential Evolution) completely revised to fix a bug and to make the resulting algorithm simpler. The new version is:
1: α ← mutation rate    ▷ Commonly between 0.5 and 1.0, higher is more explorative
2: popsize ← desired population size
3: P ← ⟨⟩    ▷ Empty population (it's convenient here to treat it as a vector), of length popsize
4: Q ← ☐    ▷ The parents. Each parent Q_i was responsible for creating the child P_i
5: for i from 1 to popsize do
6:     P_i ← new random individual
7: Best ← ☐
8: repeat
9:     for each individual P_i ∈ P do
10:        AssessFitness(P_i)
11:        if Q ≠ ☐ and Fitness(Q_i) > Fitness(P_i) then
12:            P_i ← Q_i    ▷ Retain the parent, throw away the kid
13:        if Best = ☐ or Fitness(P_i) > Fitness(Best) then
14:            Best ← P_i
15:    Q ← P
16:    for each individual Q_i ∈ Q do    ▷ We treat individuals as vectors below
17:        ~a ← a copy of an individual other than Q_i, chosen at random with replacement from Q
18:        ~b ← a copy of an individual other than Q_i or ~a, chosen at random with replacement from Q
19:        ~c ← a copy of an individual other than Q_i, ~a, or ~b, chosen at random with replacement from Q
20:        ~d ← ~a + α(~b − ~c)    ▷ Mutation is just vector arithmetic
21:        P_i ← one child from Crossover(~d, Copy(Q_i))
22: until Best is the ideal solution or we ran out of time
23: return Best
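A compact Python sketch of the revised loop, maximizing a toy fitness function; the uniform crossover and all constants are illustrative choices, not the book's:

# Sketch of the revised Differential Evolution loop above, maximizing a hypothetical
# fitness function.  Any crossover between the mutant d and a copy of the parent works;
# simple per-gene uniform crossover is used here.
import random

def fitness(x):                                   # hypothetical: maximize -sum(x_i^2)
    return -sum(v * v for v in x)

def differential_evolution(dim=5, popsize=20, alpha=0.8, generations=200):
    P = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(popsize)]
    Q, best = None, None
    for _ in range(generations):
        for i in range(popsize):
            if Q is not None and fitness(Q[i]) > fitness(P[i]):
                P[i] = Q[i]                       # retain the parent, throw away the kid
            if best is None or fitness(P[i]) > fitness(best):
                best = P[i]
        Q = [list(ind) for ind in P]              # Q <- P (copies)
        for i in range(popsize):
            a, b, c = random.sample([Q[j] for j in range(popsize) if j != i], 3)
            d = [a[k] + alpha * (b[k] - c[k]) for k in range(dim)]   # mutation
            P[i] = [d[k] if random.random() < 0.5 else Q[i][k] for k in range(dim)]
    return best

print(fitness(differential_evolution()))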
Page 108 Algorithm 75 renamed to Random Walk Selection. Text immediately before the algorithm changed to properly reflect the description of the algorithm. Finally, Line 11 of the algorithm should read:
return the individual located at ~l in the space
Page 129 Comment on Line 6 of Algorithm 92 (Implicit Fitness Sharing) should read:
R_{i,j} is individual R_i's sum total reward for T_j
Page 129 Lines 10 and 11 of Algorithm 92 (Implicit Fitness Sharing) should read:
for each individual Q_l ∈ Q do
i ← index of Q_l in P
Page 135 Line 6 of Algorithm 94 (Multiobjective Lexicographic Tournament Selection) should read:
for j from 1 to n do
Page 144 For clarity, added a line and revised a comment in Algorithm 107 (An Abstract Version of the Strength Pareto
Evolutionary Algorithm 2 (SPEA2)).
Page 180 The wrong equation had been labelled Equation 3.
Page 176 Reference to Figure 65 changed to Figure 64.
Page 188 Line 41 of Algorithm 124 (SAMUEL Fitness Assessment) should read:
if dofitness is true then
Page 196 Comment on Line 2 of Algorithm 130 (XCS Action Selection) should read:
0 ≤ ε ≤ 1
Page 204 Caption to Figure 67 should read:
A robot world with three rooms, a door, and a switch. Available actions for each room are shown. The robot can only exit if the door is opened. Flicking the switch opens the door.
Page 223 Changed the Lawnmower example to more clearly indicate that (frog i) is not in ADF2; and that the ADF2 and ADF1 are reordered and renamed with respect to Koza's originals.
Thanks to Joseph Zelibor and Muhammad Iqbal.
Errata for Online Version 0.11 → Online Version 0.12
All Pages Made the term Evolution Strategies (and ES) plural. I always view it as a mass noun, and thus singular, but I'm in the minority there.
Page 41 Additional line inserted at Line 11 of Algorithm 27 (Uniform Crossover among K Vectors), which reads:
~w ← W_j
Page 43 Explained what it means to select with replacement.
Page 54 It's Jouni Lampinen.
Page 62 Added summary of vector representation functions discussed so far.
Page 74 Expanded the C and Lisp code, removing the value of a footnote.
Page 99 Added a reference to Zbigniew Skolicki's thesis.
Page 219 The problems ZDT1, ZDT2, and ZDT3 should have the range:
x_i ∈ [0, 1]
ZDT4 should have the range:
x_1 ∈ [0, 1], x_{i>1} ∈ [−5, 5]
Thanks to Petr Pošík and Faisal Abidi.
Errata for Online Version 0.12 → Online Version 1.0 (First Print Edition)
Only minor modifications. Buy the Book! http://cs.gmu.edu/~sean/book/metaheuristics/
Thanks to Kevin Molloy.
Errata for Online Version 1.0 (First Print Edition) → Online Version 1.1
(Note: many of these errata found their way into later versions of the first Print Edition, after January 1, 2011)
Page 46 Added small item about handling odd population sizes with Elitism.
Page 62 Added page numbers to algorithm references in table.
Page 85 Added footnote on better handling of Grammatical Evolution.
Page 216 Modified traditional bounds for Rosenbrock to x_i ∈ [−2.048, 2.048]. Also adjusted the figures in Figure 68 to make Rosenbrock more easily understood with the revised bounds.
Page 216 Rastrigin's function should read:
f(⟨x_1, ..., x_n⟩) = 10n + ∑_{i=1}^{n} (x_i² − 10 cos(2πx_i))
Additionally, Rastrigin wasn't part of the DeJong Test Suite: though it often has traditional bounds of x_i ∈ [−5.12, 5.12] in the literature.
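For reference, the corrected function is easy to check numerically; a minimal Python sketch:

# The corrected Rastrigin function: f(x) = 10n + sum_i (x_i^2 - 10 cos(2 pi x_i)).
# Its minimum value is 0 at x = (0, ..., 0); the points below are illustrative.
import math

def rastrigin(x):
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

print(rastrigin([0.0, 0.0, 0.0]))      # 0.0
print(rastrigin([5.12, -5.12]))        # a point at the traditional bounds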
Page 216 Modified traditional bounds for Schwefel to x_i ∈ [−512.03, 511.97].
Page 227 Revised Journi Lampinen to Jouni Lampinen (again).
Thanks to Keith Sullivan, Matthew Molineaux, and Brian Olson.
Errata for Online Version 1.1 → Online Version 1.2
Page 0 Updated the URL to the NIST/SEMATECH handbook.
Page 15 Somehow typed a double-prime when I meant a single-prime. Text in Footnote 6 should read:
And to top it off, f′(x) = f″(x) = 0 for flat minima, flat saddle points, and flat maxima.
Page 129 Comment on Line 6 of Algorithm 92 (Implicit Fitness Sharing) should read:
R_{i,j} is individual P_i's sum total reward for T_j
This is the second time this same comment has been revised. Oops.
Page 190 Fixed incorrect link to properly refer to Section 3.3.4.
Page 223 Text should read:
(adf1 arg1) 1 Automatically defined function which calls the ADF1 tree.
...and...
arg1 0 The value of argument arg1 passed when the ADF1 tree is called.
Page 224 CiteSeer is gone: only CiteSeerX remains.
Page 224 Encore is gone.
Page 225 Pablo Moscato's Memetic Algorithms page is gone.
Thanks to Matthew Molineaux, Kevin Molloy, Adam Szkoda, and Bill Barksdale.
Errata for Online Version 1.2 → Online Version 1.3
Page 121 Lines 16 through 18 of Algorithm 86 (The Compact Genetic Algorithm) should read:
else if the value of gene j in U < the value of gene j and D_j > 0 then
Page 140 Revised Algorithm 102 (Multiobjective Sparsity Assignment) to assign the sparsities of a single Pareto Front Rank, rather than a set of Pareto Front Ranks. This isn't a bugfix but a modification to make the usage of the algorithm simpler and clearer with respect to the rest of the NSGA-II code. The revised algorithm is:
1: F ← ⟨F_1, ..., F_m⟩ a Pareto Front Rank of Individuals
2: O ← {O_1, ..., O_n} objectives to assess with
3: Range(O_i) ← function providing the range (max − min) of possible values for a given objective O_i
4: for each individual F_j ∈ F do
5:     Sparsity(F_j) ← 0
6: for each objective O_i ∈ O do
7:     F′ ← F sorted by ObjectiveValue given objective O_i
8:     Sparsity(F′_1) ← ∞
9:     Sparsity(F′_||F||) ← ∞    ▷ Each end is really really sparse!
10:    for j from 2 to ||F′|| − 1 do
11:        Sparsity(F′_j) ← Sparsity(F′_j) + (ObjectiveValue(O_i, F′_{j+1}) − ObjectiveValue(O_i, F′_{j−1})) / Range(O_i)
12: return F with Sparsities assigned
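A small Python sketch of the revised per-rank assignment, representing each individual simply as a tuple of objective values (an illustrative stand-in), with objective_ranges standing in for Range(O_i):

# Sketch of the revised sparsity (crowding) assignment for one Pareto front rank.
def assign_sparsities(front, objective_ranges):
    sparsity = {i: 0.0 for i in range(len(front))}        # index -> sparsity
    for obj in range(len(objective_ranges)):
        order = sorted(range(len(front)), key=lambda i: front[i][obj])
        sparsity[order[0]] = float("inf")                  # each end is really really sparse!
        sparsity[order[-1]] = float("inf")
        for rank in range(1, len(order) - 1):
            i, lo, hi = order[rank], order[rank - 1], order[rank + 1]
            sparsity[i] += (front[hi][obj] - front[lo][obj]) / objective_ranges[obj]
    return sparsity

front = [(0.1, 0.9), (0.4, 0.5), (0.8, 0.2)]
print(assign_sparsities(front, objective_ranges=[1.0, 1.0]))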
Page 141 Fixed Algorithm 104 (An Abstract Version of the Non-Dominated Sorting Genetic Algorithm II (NSGA-II)) so that on the first pass breeding is only done from the archive. Also generalized the archive size. Now it's:
1: m ← desired population size
2: a ← desired archive size    ▷ Typically a = m
3: P ← {P_1, ..., P_m} Build Initial Population
4: A ← {} archive
5: repeat
6:     AssessFitness(P)    ▷ Compute the objective values for the Pareto front ranks
7:     P ← P ∪ A    ▷ Obviously on the first iteration this has no effect
8:     BestFront ← Pareto Front of P
9:     R ← Compute Front Ranks of P
10:    A ← {}
11:    for each Front Rank R_i ∈ R do
12:        Compute Sparsities of Individuals in R_i    ▷ Just for R_i, no need for others
13:        if ||A|| + ||R_i|| ≥ a then    ▷ This will be our last front rank to load into A
14:            A ← A ∪ the Sparsest a − ||A|| individuals in R_i, breaking ties arbitrarily
15:            break from the for loop
16:        else
17:            A ← A ∪ R_i    ▷ Just dump it in
18:    P ← Breed(A), using Algorithm 103 for selection (typically with tournament size of 2)
19: until BestFront is the ideal Pareto front or we have run out of time
20: return BestFront
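The archive-filling step (lines 10 through 17) is the part most often gotten wrong; a minimal Python sketch, assuming the front ranks and per-individual sparsities have already been computed by helpers such as the sparsity sketch above:

# Sketch of the archive-filling step of the revised NSGA-II loop.
# `ranks` is assumed to be a list of front ranks (best first) and `sparsity_of`
# a function giving each individual's precomputed sparsity -- both stand-ins.
def fill_archive(ranks, sparsity_of, archive_size):
    archive = []
    for rank in ranks:
        if len(archive) + len(rank) >= archive_size:       # last rank we can load
            remaining = archive_size - len(archive)
            sparsest = sorted(rank, key=sparsity_of, reverse=True)[:remaining]
            archive.extend(sparsest)
            break
        archive.extend(rank)                               # just dump it in
    return archive

ranks = [["a", "b"], ["c", "d", "e"]]
sparsities = {"a": float("inf"), "b": float("inf"), "c": 0.3, "d": 0.9, "e": float("inf")}
print(fill_archive(ranks, sparsities.get, archive_size=4))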
Page 144 Slight revision (no bug fixes) to Algorithm 107 (An Abstract Version of the Strength Pareto Evolutionary Algorithm 2 (SPEA2)) so that it's parallel to the NSGA-II code. The revised code is:
1: m ← desired population size
2: a ← desired archive size    ▷ Typically a = m
3: P ← {P_1, ..., P_m} Build Initial Population
4: A ← {} archive
5: repeat
6:     AssessFitness(P)
7:     P ← P ∪ A    ▷ Obviously on the first iteration this has no effect
8:     BestFront ← Pareto Front of P
9:     A ← Construct SPEA2 Archive of size a from P
10:    P ← Breed(A), using tournament selection of size 2    ▷ Fill up to the old size of P
11: until BestFront is the ideal Pareto front or we have run out of time
12: return BestFront
Page 155 Revisions to the Ant System's method of selecting pheromones. Methods should read:
Desirability(C_i) = p_i (Value(C_i))^ε
and
Desirability(C_i) = p_i (1 / Cost(C_i))^ε
The original methods have been added to a new footnote.
Page 158 De-speculated older speculative text about higher-order pheromones. There's nothing new under the sun.
Page 169 Line 25 of Algorithm 120 (An Abstract Parallel Previous 2-Population Competitive Coevolutionary Algorithm) should read:
for each individual P′_i ∈ P′ do
    if ExternalFitness(P′_i) > ExternalFitness(Best) then
        Best ← P′_i
Page 170 Text should read:
(lines 23 and 25)
Page 216 Added new section on creating rotated problems.
Page 230 Lindemayer → Lindenmayer.
Thanks to Daniel Rothman, Khaled Ahsan Talukder, and Yuri Tsoy.
Errata for Online Version 1.3 → Online Version 2.0 (Second Print Edition)
Page 0 Minor tweaks to the frontmatter.
Page 10 Additional text and footnote discussing inverse problems.
Page 14 Fixed error in Newton's method and also modified discussion to make it clear that Newton's Method converges not just to maxima, but to minima and to saddle points. Algorithm 2 renamed to Newton's Method (Adapted for Optima Finding), and should now read:
1: ~x ← random initial vector
2: repeat
3:     ~x ← ~x − [H_f(~x)]^{−1} ∇f(~x)    ▷ In one dimension: x ← x − f′(x) / f″(x)
4: until ~x is the ideal solution or we have run out of time
5: return ~x
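In one dimension the corrected update is just x ← x − f′(x)/f″(x); a tiny Python sketch with an illustrative function (note that, as the revised discussion points out, it homes in on whatever critical point is nearby, be it a maximum, a minimum, or a saddle point):

# One-dimensional sketch of the corrected update x <- x - f'(x)/f''(x).
def newton_optimum(fprime, fprime2, x=0.5, steps=50):
    for _ in range(steps):
        x = x - fprime(x) / fprime2(x)
    return x

# f(x) = -(x - 3)^2 has a maximum at x = 3
print(newton_optimum(fprime=lambda x: -2 * (x - 3), fprime2=lambda x: -2.0))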
Page 15 Newton's Method with Restarts (Algorithm 3) is replaced with Gradient Ascent with Restarts, plus some tweaks to the surrounding text discussing it. Algorithm 3 should now read:
1: ~x ← random initial value
2: ~x* ← ~x
⋮
9: ~x ← random value
10: until we have run out of time
11: return ~x*
|μ_A − μ_B| / √(σ²_A/n_A + σ²_B/n_B)
Page 210 Added a bit about Welch's t-Test.
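For concreteness, Welch's t statistic divides the difference of sample means by √(s²_A/n_A + s²_B/n_B), using unbiased sample variances; a minimal Python sketch with hypothetical samples:

# Minimal sketch of Welch's t statistic.
import math

def welch_t(a, b):
    mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
    var_a = sum((x - mean_a) ** 2 for x in a) / (len(a) - 1)   # unbiased variance
    var_b = sum((x - mean_b) ** 2 for x in b) / (len(b) - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))

print(welch_t([1.1, 0.9, 1.3, 1.0], [0.7, 0.8, 0.6, 0.9, 0.7]))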
Page 217 Clarifications (no bug fixes) to the clarity of Algorithm 137 (Create a Uniform Orthonormal Matrix), which should now read:
1: n ← desired number of dimensions
2: M ← n × n matrix, all zeros
3: for i from 1 to n do
4:     for j from 1 to n do
5:         M_ij ← random number chosen from the Normal distribution N(μ = 0, σ² = 1)
6: for i from 1 to n do
7:     Row vector ~M_i ← ~M_i − ∑_{j=1}^{i−1} ⟨~M_i · ~M_j⟩ ~M_j    ▷ Subtract out projections of previously built bases
8:     Row vector ~M_i ← ~M_i / ||~M_i||    ▷ Normalize
9: return M
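A short NumPy sketch of the clarified procedure, drawing standard normals and then Gram-Schmidt-orthonormalizing the rows:

# Fill an n x n matrix with standard normal draws, then orthonormalize its rows
# (subtract projections onto earlier rows, then normalize).
import numpy as np

def uniform_orthonormal_matrix(n, rng=np.random.default_rng()):
    M = rng.normal(0.0, 1.0, size=(n, n))
    for i in range(n):
        for j in range(i):                     # subtract out projections of earlier rows
            M[i] -= np.dot(M[i], M[j]) * M[j]
        M[i] /= np.linalg.norm(M[i])           # normalize
    return M

M = uniform_orthonormal_matrix(4)
print(np.round(M @ M.T, 6))                    # should be (near) the identity matrix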
Page 227 Significant revision to tools section.
Thanks to Pier Luca Lanzi, Khaled Ahsan Talukder, Andrew Reeves, Liang Liu, and Len Matsuyama.
Index
ε-greedy action selection, 173, 190
(μ + λ), 28
(μ + 1), 42
(μ, λ), 27
(1 + λ), 18
(1 + 1), 17
(1, λ), 18
Ackley, David, 33, 207, 208
action set, 186
actions, 168
activity level, 181
Agarwal, Sameer, 135
agent, 85, 167, 168
Agrawal, Samir, 17
Alander, Jarmo, 218
aliased states, 198
allele, 25
Alsing, Roger, 0
Alternating Optimization (AO), 117
Andre, David, 76
Angeline, Peter, 62
annealing, 19
ANOVA, 206
Ant Colony Optimization (ACO), 146
Ant Colony System (ACS), 150
Ant System (AS), 147
ant trails, 146
AQ, 156
arbitration scheme, 85, 176
archive, 120, 135
arity, 69
arms race, 116
arrays, 6
artificial immune systems, 123
Artificial Life (ALife), 224
Asada, Minoru, 169
assessment procedure, 11
Asynchronous Evolution, 100
automatically defined functions (ADFs), 73
automatically defined macros (ADMs), 74
Baker, James, 38
Baldwin Effect, 44
Baluja, Shumeet, 161
Banzhaf, Wolfgang, 78, 220
Baxter, John, 22
Bayes Network, 165
Bayesian Optimization Algorithm (BOA), 165
Bellman Equation, 171
Bellman, Richard, 171
Bennett, Forrest, 76
best of run, 201
biasing, 26, 56
bin packing, 141
black box optimization, 3
bloat, 82, 90
Blondie24, 106
Bonabeau, Eric, 221
Bonferroni correction, 206
bootstrapping, 172
Born, Joachim, 210
Box, George Edward Pelham, 18
Box-Muller-Marsaglia Polar Method, 18
breeding, 25
Brindle, Anne, 39
building blocks, 34, 215
Butz, Martin, 189, 222
candidate solution, see individual, 11
Cantú-Paz, Eric, 166
Cartesian Genetic Programming (CGP), 80
Caruana, Rich, 161
Cavicchio, Daniel Joseph Jr., 45, 124
Cellular Encoding, 76
Chellapilla, Kumar, 105, 106
child, 25
Chinook, 107
Christensen, Steffen, 204, 219
chromosome, 25
classification, 155, 176
closure, see operator, closed
co-adaptive, 104
Coello Coello, Carlos, 218
coevolution, 103
N-Population Cooperative, 104, 116
1-Population Competitive, 103, 105
2-Population Competitive, 103, 111
parallel, 113
parallel previous, 114
sequential, 112
serial, 112
compositional, 103
test-based, 103
collections, 5
Collins, J. J., 79
combinatorial optimization problem, 141
Compact Genetic Algorithm (cGA), 162
compactness, 77
components, 141
computational effort, 201
cons cells, 78
convergence, 34
convergence time, 7
Copy, 11, 53
copy-forward, 97
cost, 142
covariance matrix, 160
cover, 176
Cramer, Nichael, 67
credit assignment, 120
crossover, 25, 27, 32
Clustered, 184
Intermediate Recombination, 36
for Integers, 58
Line Recombination, 35
for Integers, 58
Multi-Point, 33
One-Point, 32
for Lists, 82
Subtree, 72
Two-Point, 32
for Lists, 82
Uniform, 32
among K Vectors, 35
Crowding, 124
Deterministic, 124
cycles, 107
Dawkins, Richard, 44
De Jong, Kenneth, 42, 116, 118, 208, 221, 225
Deb, Kalyanmoy, 17, 133, 135, 201, 212, 220
deceptive functions, 16, 53, 208
decision trees, 155
decoding, 54
delta rule, 195
demes, 97, 103, 111
desirability, 149
Differential Evolution (DE), 48
diploid, 112
directed acyclic graph, 63
directed mutation, 25, 49
Discipulus, 78
distance measure, 122
distributions, 155
bivariate, 165
Gaussian, 17
marginal, 155, 160
normal, 17
standard normal, 18
Diversity Maintenance, see niching
Dorigo, Marco, 146, 150, 219–221
duplicability, 200
dynamic programming, 172
Eberhart, Russell, 49, 221
Edge Encoding, 76
Edwards, Howard, 201
elites, 40
elitism, 150
encoding, 54
developmental, 60, 77
direct, 60
indirect, 60, 77, 79, 86
ephemeral random constant, 71
epistasis, 32, 34
Estimation of Distribution Algorithms (EDAs), 155, 158
Multivariate, 165
Univariate, 161
evaluation, 25
evaporation, 147
Evolution Strategies (ES), 27
Evolutionary Algorithm (EA), 25
Evolutionary Computation (EC), 25
Evolutionary Programming (EP), 30
Evolvable Hardware (EH), 224
Expectation Maximization (EM), 117, 165
explicit speciation, 121
Exploration versus Exploitation, 14, 16, 173
external state, 168, 198
Feature-based Tabu Search, 21, 152
Feo, Thomas, 145
Fernandez, Pablo, 222
Festa, Paola, 219
Fisher, Ronald Aylmer, 203
fitness, see quality, 25
absolute, 104
baseline, 184
external, 105
internal, 105
joint, 117
relative, 104
fitness assessment, 25
relative, 107
fitness functions, see problems
fitness landscape, 25
fitness scaling, 39
fitness sharing, 122
implicit, 123
Floreano, Dario, 224
Fogel, David, 105, 106, 220
Fogel, Lawrence, 30, 105
forest, 73
Forrest, Stephanie, 123
FORTH, 77
Francone, Frank, 78, 220
full adjacency matrix, 61
Full algorithm, 69
function set, 69
functions, 6
Gambardella, Luca, 150
Gauss, Karl Friedrich, 17
Gawelczyk, Andreas, 211
Gelatt, Charles Daniel Jr., 19
gene, 25
generalizability, 201, 224
generation, 25
Generation Gap Algorithms, 42
generational, 25
generative models, 159
Genetic Algorithm (GA), 30
Genetic Programming (GP), 67, 198
GENITOR, 41, 42
genome, 25
genotype, 25, 53
Gibbs Sampling, 3
global optima, 8
global optimization algorithm, 9, 14
Glover, Fred, 20, 45
GNARL, 62
Goldberg, David, 122, 162, 165, 194, 195
Gosset, William Sealy, 203
Gradient Ascent, 4, 7
Gradient Ascent with Restarts, 9
Gradient Descent, 7
Gram-Schmidt process, 211
Grammatical Evolution (GE), 79
graphs, 60
Gray code, 55
Gray, Frank, 55
Greedy Randomized Adaptive Search Procedures (GRASP),
145
Grefenstette, John, 45, 180, 184
Griewank, Andreas, 210
Grow algorithm, 69
Guided Genetic Algorithm, 154
Guided Local Search (GLS), 152
Gustafson, Steven, 218
Hamming cliff, 54
Hamming distance, 122
Hansen, Nikolaus, 211
hard constraints, 143
Harik, Georges, 124, 162
Hessian, 8
heuristic, 142
Hierarchical Bayesian Optimization Algorithm (hBOA),
166
Hill-Climbing, 4, 11
Hill-Climbing with Random Restarts, 14
Hillis, Daniel, 111
history, 198
Holland, John, 30, 185
homologous, 34, 72
Hornby, Gregory, 87
hyperellipsoids, 195
Hyperheuristics, 45
hypervolume, 128, 201
hypothesis, see model
hypothesis test, 202
nonparametric, 204
iCCEA, 120
illegal solutions, 141
incest prevention, 121
individual, see candidate solution, 25
induction, 155
infeasible solutions, 141
informant, 50
informative gradient, 16
initialization, 26, 53
adaptive, 185
initialization procedure, 11
internal state, 198
introns, 81, 90
invalid solutions, 141
inverse problems, 4
inviable code, 90
island models, 97
asynchronous, 98
synchronous, 98
island topology, 97
fully-connected, 97
injection model, 97
toroidal grid, 97
Iterated Local Search (ILS), 22, 142
Jaśkowski, Wojciech, 110
Join, 26
Jordan, Michael
the basketball player, 120
the professor, 176
k-fold cross validation, 201
k-Means Clustering, 117
k-Nearest-Neighbor (kNN), 155, 178
Kauth, Joan, 41, 42
kd-tree, 159
Keane, Martin, 76
Keijzer, Martin, 78
Keller, Robert, 78
Kennedy, James, 49, 221
kernelization, 179
Kirkpatrick, Scott, 19
Kitano, Hiroaki, 86
Klein, Jon, 78
Koch Curve, 87
Kovaks, Timothy, 218
Koza, John, 42, 67, 76, 214, 216, 218
Krawiec, Krysztof, 110
L-Systems, see Lindenmayer Systems
Laguna, Manuel, 45
Lamarck, Jean-Baptiste, 44
Lamarckian Algorithms, 44
Lampinen, Jouni, 48, 221
Langdon, William, 215, 216, 218, 220
Lanzi, Pier Luca, 189, 190, 194, 195, 222
laziness, 120
Learnable Evolution Model (LEM), 45, 155
learning bias, 156
Learning Classifier Systems (LCS), 167, 177, 185
learning gradient, 105
learning rate, 150
Lindenmayer Systems, 87, 224
Lindenmayer, Aristid, 87
Linear Genetic Programming, 78
linear problem, 207
linkage, 32, 34, 207
lists, 67, 78
Lobo, Fernando, 162
local optima, 8
local optimization algorithm, 8, 14
Loiacono, Daniele, 194, 195
loss of gradient, 116
Lourenco, Helena, 22
Lozano, Sebastián, 222
Lucas, Simon, 91
machine learning, 155
Mahfoud, Samir, 124
Manhattan distance, 133
Manhattan Project, 19
Mann-Whitney U Test, 204
Markov Chain Monte Carlo (MCMC), 3
Markov, Andrey Andreyevich, 169
Markovian environment, 169
Marsaglia, George, 18
Martí, Rafael, 45
Martin, Olivier, 22
master-slave tness assessment, 99
match score, 177
match set, 85, 180, 186
matrices, 6, 60, 67, 76
Mattiussi, Claudio, 224
maxima, 7
McPhee, Nicholas, 220
mean vector, 160
Memetic Algorithms, 44
memory, 198
Mercer, Robert Ernest, 45
Messom, Chris, 201
Meta-Genetic Algorithms, 45
Meta-Optimization, 45
metaheuristics, 3
metric distance, 122
Metropolis Algorithm, 19
Metropolis, Nicholas, 19
Meyarivan, T., 135
Michalewicz, Zbigniew, 142, 220
Michalski, Ryszard, 155
Michigan-Approach Learning Classifier Systems, 85, 167,
177, 185
microclassifiers, 192
Miikkulainen, Risto, 62, 107
Miller, Julian, 80
Miller, Julian, 219
minima, 7
miscoordination, 120
Mitchell, Melanie, 221
model, 6, 155
discriminative, 156
generative, 156
modication procedure, 11
modularity, 73, 77, 86
Mona Lisa, 0
Montana, David, 74
Monte Carlo Method, 19
Moscato, Pablo, 44
Mühlenbein, Heinz, 35, 162, 210
Muller, Mervin, 18
mutation, 25, 27
Bit-Flip, 31
Creep, 184
Duplicate Removal, 84
Gaussian Convolution, 17, 29
Gaussian Convolution Respecting Zeros, 61
Integer Randomization, 57
Point, 57
Polynomial, 17
Random Walk, 57
Subtree, 72
mutation rate, 29
adaptive, 29
NEAT, 62, 107
Needle in a Haystack style functions, 16, 53
neighbors, 102
NERO, 107
neural networks, 155
Neural Programming (NP), 66
Newtons Method, 8
Newton, Sir Isaac, 8
Ng, Andrew, 176
niches, 104
niching, 104
No Free Lunch Theorem (NFL), 207
noisy functions, 16
Non-Dominated Sorting, 133
Non-Dominated Sorting Genetic Algorithm II (NSGA-II),
135
non-homologous, see homologous
Nordin, Peter, 78, 220
null hypothesis, 202
O'Neill, Michael, 79, 219
objective, 127
objective functions, see problems
One-Fifth Rule, 30
operator
adaptive, 29
closed, 67, 78, 143
self-adaptive, 29, 78, 185
Opportunistic Evolution, 100
Ostermeier, Andreas, 211
over-specification, 4, 85, 176
Pólya, George, 220
Panait, Liviu, 91, 120, 165, 201, 219
Parejo, José Antonio, 222
parent, 25
Pareto domination, 127
Pareto front, 127
concave, 127
convex, 127
discontinuous, 128
local, 128
nonconvex, 128
Pareto Front Rank, 132
Pareto nondominated, 127
Pareto strength, 135
Pareto weakness, 136
Pareto wimpiness, 136
parse trees, 67
parsimony pressure, 91
double tournament, 91
lexicographic, 91
linear, 91
non-parametric, 91
Particle Filters, 38
Particle Swarm Optimization (PSO), 25, 49
particles, 50
Pelikan, Martin, 165
penalties, 152
Perelson, Alan, 123
phenotype, 25, 53
pheromone, 146
higher-order, 152
piecewise linear function, 194
Pitt-Approach Rule Systems, 85, 167, 180
Poli, Riccardo, 216, 220
policy, 85, 167, 168
policy search, 176
Pollack, Jordan, 62
population, 25
alternative, 111
collaborating, 111
foil, 111
primary, 111
Population-Based Incremental Learning (PBIL), 161
PostScript, 77
Potter, Mitchell, 116, 118
Pratap, Amrit, 135
prediction, 189, 194
prediction error, 189
premature convergence, 28, 34
preselection, 124
Price, Kenneth, 48, 221
probability distributions, 6
problem, 11
problems
11-bit Boolean Multiplexer, 214
Artificial Ant, 68, 216
De Jong test suite, 208, 209
Even N-Parity, 215
Griewank, 210
Knapsack, 141
Lawnmower, 217
Leading Ones, 207
Leading Ones Blocks, 207
Linear Problems, 208
Max Ones, 207
OneMax, 207
Rastrigin, 209
rotated, 210
Schwefel, 210
Sphere, 208
Step, 208
Sum, 208
Symbolic Regression, 68, 69, 71, 74, 214
Traveling Salesman (TSP), 21, 141
ZDT1, 212
ZDT2, 213
ZDT3, 213
ZDT4, 213
Prusinkiewicz, Przemyslaw, 87
PTC2, 70
Push, 78
Q-learning, 167, 169, 172
Q-table, 169
Q-value, 170
quadtree, 159
quality, see fitness, 11
queues, 6
Ramped Half-and-Half algorithm, 70
Ramsey, Connie, 180
random number generator, 199
java.util.Random, 199
linear congruential, 199
Mersenne Twister, 199
RANDU, 199
Random Search, 4, 14
random walk, 19
Rastrigin, Leonard Andreevich, 210
Rechenberg, Ingo, 27, 209
recombination, see crossover
recursive least squares, 195
REINFORCE, 176
reinforcement, 167
negative, 168
positive, 168
reinforcement learning, 167, 172
multiagent, 120
relative overgeneralization, 120
Resende, Mauricio, 219
replicability, 199
representation, 13, 53
resampling techniques, 25
Resende, Mauricio, 145
reward, see reinforcement
Richardson, Jon, 122
Robert Keller, 220
Robinson, Alan, 78
robustness, 105
Rosenbluth, Arianna and Marshall, 19
Rosenbrock, Howard, 209
Ruiz-Cortés, Antonio, 222
rule
active, 181
default, 85, 177
production, 84
state-action, 84, 85, 167
rule body, 84, 177
rule covering, 183
rule deletion, 183
rule generalization, 183
rule head, 84, 177
rule merging, 183
rule specialization, 183
rule strength, 181
Ryan, Conor, 79, 91
saddle points, 7
sample, 25
sample distributions, 158
sampling
region-based, 157
rejection, 156
weighted rejection, 157
Sampson, Jeffrey R., 45
SAMUEL, 45, 180
Saunders, Gregory, 62
Scatter Search with Path Relinking, 45
schedule, 19
schema theory, 34
Schlierkamp-Voosen, Dirk, 35
Schoenauer, Marc, 142
Schomisch, M., 210
Schultz, Alan, 180
Schwefel, Hans-Paul, 27, 35, 36, 210
seeding, 26, 56
selection, 25
Fitness-Proportionate, 37
Fitnessless, 110
non-parametric, 39
parent, 26, 48
Roulette, 37
Stochastic Universal Sampling, 38
survival, 26, 48
Tournament, 39
Restricted, 124
Truncation, 37
selection pressure, 18
selection procedure, 11
Shi, Yuhui, 221
Sigvardsson, Oskar, 0
Silva, Sara, 222
Simulated Annealing, 19
Skolicki, Zbigniew, 93, 139
Smith, Robert, 123
smoothness, 16, 53
Solkoll, 0
sorting networks, 111
sparsity, 133
spatially embedded models, 101
species, 104
specificity, 176
Spector, Lee, 74, 76, 78, 219
spread, 128
Srinivas, N., 133
Stützle, Thomas, 22, 146, 220
stack languages, 77, 79
Stanley, Kenneth, 62, 107
state space, 85
states, 168
statistically significant, 204
steady-state, 25, 41
Steepest Ascent Hill-Climbing, 12
Steepest Ascent Hill-Climbing with Replacement, 12
Stewart, Potter, 3
stochastic, 12
stochastic optimization, 3
stochastic processes, 169
stochastic search, 3
Storn, Rainer, 48, 219, 221
Strength Pareto Evolutionary Algorithm 2 (SPEA2), 136
strings, see lists
Student's t-Test, 203
subpopulations, 97, 103
subsolution, 117
subsumption, 193
subtree selection, 73
supervised learning, 177
Support Vector Machines (SVMs), 155, 179
swarm, 50
symbols, 88
nonterminal, 86
terminal, 86
tabu list, 20
Tabu Search, 20, 142
Teller, Augusta and Edward, 19
Teller, Eric (Astro), 19, 66
temperature, 20
test cases, 105, 111, 201
test problems, see problems
test set, 201
tests, 105
Theraulaz, Guy, 221
Thiele, Lothar, 201, 212
Thompson, Adrian, 224
threads, 95
Tinsley, Marion, 107
tournament size, 39
training set, 201
transition model, 168
Tree-Style Genetic Programming, 42
trees, 67
Truncation Selection, 27
Tsang, Edward, 152, 219
Tukey, John, 206
tuples, 6
Tweak, 11, 14, 53
type constraints, 75
typing
atomic, 76
polymorphic, 76
set, 76
unbiased estimator, 164
under-specification, 4, 85, 177
unimodal functions, 15
Univariate Marginal Distribution Algorithm (UMDA), 162
utility, 168
utility error, 177
utility variance, 177, 181
value, 142, 168
Vecchi, Mario, 19
vector processor, 102
vectors, 6
Vo, Christopher, 165
Voronoi tessellation, 178
vote, 85
Voudouris, Chris, 152
Walker, Matthew, 201
weak methods, 3
weights, 142
Welch's t-Test, 204
Welch, Bernard Lewis, 204
Whitley, Darrell, 41, 42
Wiegand, Paul, 120
Wieloch, Bartosz, 110
Williams, Ronald, 176
Wilson, Stewart, 186, 194, 195
Wineberg, Mark, 204, 219
XCS, 189
XCSF, 194
Xilinx, 224
Zeroth Level Classifier System (ZCS), 186
Ziggurat Method, 18
Zitzler, Eckart, 201, 212