Parallel Distributed Processing
by David Rumelhart and James L. McClelland
Preface v
1 Introduction 1
1.1 WELCOME TO THE NEW PDP HANDBOOK . . . . . . . . . 1
1.2 MODELS, PROGRAMS, CHAPTERS AND EXERCISES . . . 2
1.2.1 Key Features of PDP Models . . . . . . . . . . . . . . . . 2
1.3 SOME GENERAL CONVENTIONS AND CONSIDERATIONS 4
1.3.1 Mathematical Notation . . . . . . . . . . . . . . . . . . . 4
1.3.2 Pseudo-MATLAB Code . . . . . . . . . . . . . . . . . . . 5
1.3.3 Program Design and User Interface . . . . . . . . . . . . . 5
1.3.4 Exploiting the MATLAB Environment . . . . . . . . . . 6
1.4 BEFORE YOU START . . . . . . . . . . . . . . . . . . . . . . . 6
1.5 MATLAB MINI-TUTORIAL . . . . . . . . . . . . . . . . . . . . 7
1.5.1 Basic Operations . . . . . . . . . . . . . . . . . . . . . . . 7
1.5.2 Vector Operations . . . . . . . . . . . . . . . . . . . . . . 8
1.5.3 Logical operations . . . . . . . . . . . . . . . . . . . . . . 10
1.5.4 Control Flow . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.5.5 Vectorized Code . . . . . . . . . . . . . . . . . . . . . . . 12
Preface
version of the PDP software a reality. Most important are Sindy John, a pro-
grammer who has been working with me for nearly 5 years, and Brenden Lake,
a former Stanford Symbolic Systems major. Sindy had done the vast major-
ity of the coding in the current version of the pdptool software, and wrote the
User’s Guide. Brenden helped convert several chapters, and added the mate-
rial on Kohonen networks in Chapter 6. He has also helped tremendously with
the implementation of the on-line version of the handbook. Two other Sym-
bolic Systems undergraduates also contributed quite a bit: David Ho wrote the
MATLAB tutorial in Chapter 1, and Anna Schapiro did the initial conversion
of Chapter 3.
It is tragic that David Rumelhart is no longer able to contribute, leaving me in the position of sole author of this work. I have been blessed and honored,
however, to work with many wonderful collaborators, post-docs, and students
over the years, and to have benefited from the insights of many others. All
these people are the authors of the ideas presented here, and their names will
be found in references cited throughout this handbook.
Jay McClelland
Stanford, CA
September, 2011
Chapter 1
Introduction
in conjunction with additional readings from the PDP books and other sources.
In particular, those unfamiliar with the PDP framework should read Chapter 1
of the first PDP volume (Rumelhart et al., 1986) to understand the motivation
and the nature of the approach.
This chapter provides some general information about the software and the
handbook. The chapter also describes some general conventions and design
decisions we have made to help the reader make the best possible use of the
handbook and the software that comes with it. Information on how to set
up the software (Appendix A) and a user's guide (Appendix C) are provided in the appendices. At the end of the chapter we provide a brief tutorial on the
MATLAB computing environment, within which the software is implemented.
Figure 1.1: A simple PDP network consisting of one pool of units, and one projection, such that each unit receives connections from all other units in the same pool. Each unit also can receive external input (shown coming in from the left). If this were a pool in a larger network, the units could receive additional projections from other pools (not shown) and could project to units in other pools (as illustrated by the arrows proceeding out of the units to the right). (From Figure 1, p. 162 in McClelland, J. L. & Rumelhart, D. E. (1985). Distributed memory and the representation of general and specific information. Journal of Experimental Psychology: General, 114, 159-197. Copyright 1985 by the American Psychological Association. Permission Pending.)
of each receiving unit for the next processing step, according to a specified ac-
tivation function. A train process presents a series of input patterns (or sets
of input patterns), processes them using a process similar to the test process,
then possibly compares the results generated to the values specified in target
patterns (or sets of provided target patterns) and then carries out further pro-
cessing steps that result in the adjustment of connections among the processing
units.
The exact nature of the processes that take place in both training and testing is an essential ingredient of specific PDP models and will be considered as we
work through the set of models described in this book. The models described in
Chapters 2 and 3 explore processing in networks with modeler-specified connec-
tion weights, while the models described in most of the later chapters involve
learning as well as processing.
Scalars. Scalar (single-valued) variables are given in italic typeface. The names
of parameters are chosen to be mnemonic words or abbreviations where
possible. For example, the decay parameter is called decay.
Vectors. Vector (multivalued) variables are given in boldface; for example, the
external input pattern is called extinput. An element of such a vector
is given in italic typeface with a subscript. Thus, the ith element of the
external input is denoted extinputi . Vectors are often members of larger
sets of vectors; in this case, a whole vector may be given a subscript.
For example, the jth input pattern in a set of patterns would be denoted
ipatternj .
Weight matrices. Matrix variables are given in uppercase boldface; for exam-
ple, a weight matrix might be denoted W. An element of a weight matrix
is given in lowercase italic, subscripted first by the row index and then by
the column index. The row index corresponds to the index of the receiving
unit, and the column index corresponds to the index of the sending unit.
Thus the weight to unit i from unit j would be found in the jth column
of the ith row of the matrix, and is written wij .
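As a minimal illustration of this convention (using a hypothetical two-unit network, not one from the handbook), the weight to unit i from unit j sits in row i, column j, so multiplying the weight matrix by a column vector of sending-unit outputs gives the summed input to each receiving unit at once:

W = [0   0.5;     % W(1,2) = 0.5 is the weight to unit 1 from unit 2
     0.3 0  ];    % W(2,1) = 0.3 is the weight to unit 2 from unit 1
output = [1; 2];  % outputs of units 1 and 2
W*output          % summed input to each receiving unit: [1.0; 0.3]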
not to experiment with changing them if losing the state of a program would be
costly.
% This is a comment.
y = 2*x + 1 % So is this.
Note that MATLAB performs actual floating-point division, not integer di-
vision. Also note that MATLAB is case sensitive.
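For instance, 7/2 evaluates to 3.5, and the variables A and a are treated as distinct. The indexing examples below also refer to several vectors defined earlier in the tutorial; definitions consistent with the outputs shown would be as follows (these are reconstructions for convenience, so the statements actually used earlier may differ):

A = 1; a = 2;               % case matters: A and a are different variables
7/2                         % 3.5, not 3
v = 3:10;                   % [3 4 5 6 7 8 9 10]
w = 1:2:9;                  % [1 3 5 7 9]
x = [4 3 2];
y = [-6 -4.5 -3 -1.5 0];
z = [];                     % an empty vector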
length(v) % 8
length(x) % 3
length(z) % 0
y(2) % -4.5
w(end) % 9
x(1) % 4
We can use colon notation in this context to select a range of values from
the vector.
v(2:5) % [4 5 6 7]
w(1:end) % [1 3 5 7 9]
w(end:-1:1) % [9 7 5 3 1]
y(1:2:5) % [-6 -3 0]
In fact, we can specify any arbitrary “index vector” to select arbitrary ele-
ments of the vector.
y([2 4 5]) % [-4.5 -1.5 0]
v(x) % [6 5 4]
w([5 5 5 5 5]) % [9 9 9 9 9]
Furthermore, we can change a vector by replacing the selected elements with
a vector of the same size. We can even delete elements from a vector by assigning
the empty matrix “[]” to the selected elements.
y([2 4 5]) = [42 420 4200] % y = [-6 42 -3 420 4200]
v(x) = [0 -1 -2] % v = [3 -2 -1 0 7 8 9 10]
w([3 4]) = [] % w = [1 3 9]
Mathematical vector operations We can easily add (“+”), subtract (“-”),
multiply (“*”), divide (“/”), or exponentiate (“.^”) each element in a vector by
a scalar. The operation simply gets performed on each element of the vector,
returning a vector of the same size.
a = [8 6 1 0]
a/2 - 3 % [1 0 -2.5 -3]
3*a.^2 + 5 % [197 113 8 5]
Similarly, we can perform “element-wise” mathematical operations between
two vectors of the same size. The operation is simply performed between ele-
ments in corresponding positions in the two vectors, again returning a vector of
the same size. We use “+” for adding two vectors, and “-” to subtract two vec-
tors. To avoid conflicts with different types of vector multiplication and division,
we use “.*” and “./” for element-wise multiplication and division, respectively.
We use “.^” for element-wise exponentiation.
b = [4 3 2 9]
a+b % [12 9 3 9]
a-b % [4 3 -1 -9]
a.*b % [32 18 2 0]
a./b % [2 2 0.5 0]
a.^b % [4096 216 1 0]
Finally, we can perform a dot product (or inner product) between a row
vector and a column vector of the same length by using (“*”). The dot product
multiplies the elements in corresponding positions in the two vectors, and then
takes the sum, returning a scalar value. To perform a dot product, the row vector
must be listed before the column vector (otherwise MATLAB will perform an
outer product, returning a matrix).
r = [9 4 0]
c = [8; 7; 5]
r*c % 100
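For comparison, listing the column vector first produces the outer product mentioned above rather than the dot product:

c*r    % a 3x3 matrix: [72 32 0; 63 28 0; 45 20 0]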
1 == 2 % 0
1 ~= 2 % 1
2 < 2 % 0
2 <= 3 % 1
(2*2) > 3 % 1
3 >= (5+1) % 0
3/2 == 1.5 % 1
To test whether a binary vector contains any 1s, we use “any()”. To test
whether a binary vector contains all 1s, we use “all()”.
We can use the binary vectors as a different kind of “index vector” to select
elements from a vector; this is called “logical indexing”, and it returns all of the
elements in the vector where the corresponding element in the binary vector is
1. This gives us a powerful way to select all elements from a vector that meet
certain criteria.
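For example, a brief sketch reusing a vector defined above:

a = [8 6 1 0];
a > 2          % [1 1 0 0]
any(a > 2)     % 1
all(a > 2)     % 0
a(a > 2)       % [8 6]: logical indexing keeps only the elements greater than 2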
While loops. A while loop works the same way as an if statement, except that, when the MATLAB interpreter reaches the end keyword, it returns to the beginning of the while block and tests the condition again. MATLAB executes the statements in the while block repeatedly, as long as the condition is true. A break statement within the while loop will cause MATLAB to exit the loop immediately, skipping any remaining iterations.

i = 3
while i > 0
    disp(i)
    i = i - 1;
end
disp('Blastoff!')
For loops. To execute a block of code a specific number of times, we can use a
for loop. A for loop takes a counter variable and a vector. MATLAB executes
the statements in the block once for each element in the vector, with the counter
variable set to that element.
r = [9 4 0];
c = [8 7 5];
sum = 0;
for i = 1:3                      % The counter is 'i', and the range is '1:3'
    sum = sum + r(i) * c(i);     % This will be executed 3 times
end

my_favorite_primes = [2 3 5 7 11]
for order = [2 4 3 1 5]
    disp(my_favorite_primes(order))
end
r = [9 4 0];
c = [8;7;5];
We have seen two ways to perform a dot product between these two vectors.
We can use a for loop:
sum = 0;
for i = 1:3
    sum = sum + r(i) * c(i);
end
% After the loop, sum = 100
However, the following “vectorized” code is more concise, and it takes ad-
vantage of MATLAB’s optimization for vector and matrix operations:
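sum = r*c;    % After this statement, sum = 100

Vectorized code can also replace explicit loops over the elements of a vector. For example, suppose we want to double every element of r. We could use a for loop: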
for i = 1:3
    r(i) = r(i) * 2;
end
% After the loop, r = [18 8 0]
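The same result can be obtained with a single vectorized statement (starting again from r = [9 4 0]):

r = r * 2;    % After this statement, r = [18 8 0]

Similarly, suppose we want to multiply each element of c by a different amount, using a for loop: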
multiplier = [2;3;4];
for j = 1:3
    c(j) = c(j) * multiplier(j);
end
% After the loop, c = [16 21 20]
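Again, the loop can be replaced by a single element-wise operation (starting again from c = [8;7;5]):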
multiplier = [2;3;4];
c = c .* multiplier; % After this statement, c = [16 21 20]
Chapter 2

Interactive Activation and Competition
Our own explorations of parallel distributed processing began with the use of
interactive activation and competition mechanisms of the kind we will exam-
ine in this chapter. We have used these kinds of mechanisms to model visual
word recognition (McClelland and Rumelhart, 1981; Rumelhart and McClel-
land, 1982) and to model the retrieval of general and specific information from
stored knowledge of individual exemplars (McClelland, 1981), as described in
PDP:1. In this chapter, we describe some of the basic mathematical observa-
tions behind these mechanisms, and then we introduce the reader to a specific
model that implements the retrieval of general and specific information using
the “Jets and Sharks” example discussed in PDP:1 (pp. 25-31).
After describing the specific model, we will introduce the program in which
this model is implemented: the iac program (for interactive activation and com-
petition). The description of how to use this program will be quite extensive; it
is intended to serve as a general introduction to the entire package of programs
since the user interface and most of the commands and auxiliary files are com-
mon to all of the programs. After describing how to use the program, we will
present several exercises, including an opportunity to work with the Jets and
Sharks example and an opportunity to explore an interesting variant of the basic
model, based on dynamical assumptions used by Grossberg (e.g., Grossberg, 1978).
2.1 BACKGROUND
The study of interactive activation and competition mechanisms has a long
history. They have been extensively studied by Grossberg. A useful introduction
to the mathematics of such systems is provided in Grossberg (1978). Related
mechanisms have been studied by a number of other investigators, including
Levin (1976), whose work was instrumental in launching our exploration of
PDP mechanisms.
An interactive activation and competition network (hereafter, IAC network )
consists of a collection of processing units organized into some number of com-
petitive pools. There are excitatory connections among units in different pools
and inhibitory connections among units within the same pool. The excitatory
connections between pools are generally bidirectional, thereby making the pro-
cessing interactive in the sense that processing in each pool both influences and
is influenced by processing in other pools. Within a pool, the inhibitory con-
nections are usually assumed to run from each unit in the pool to every other
unit in the pool. This implements a kind of competition among the units such
that the unit or units in the pool that receive the strongest activation tend to
drive down the activation of the other units.
The units in an IAC network take on continuous activation values between
a maximum and minimum value, though their output—the signal that they
transmit to other units—is not necessarily identical to their activation. In our
work, we have tended to set the output of each unit to the activation of the unit
minus the threshold as long as the difference is positive; when the activation
falls below threshold, the output is set to 0. Without loss of generality, we can
set the threshold to 0; we will follow this practice throughout the rest of this
chapter. A number of other output functions are possible; Grossberg (1978)
describes a number of other possibilities and considers their various merits.
The activations of the units in an IAC network evolve gradually over time.
In the mathematical idealization of this class of models, we think of the acti-
vation process as completely continuous, though in the simulation modeling we
approximate this ideal by breaking time up into a sequence of discrete steps.
Units in an IAC network change their activation based on a function that takes
into account both the current activation of the unit and the net input to the
unit from other units or from outside the network. The net input to a particular
unit (say, unit i) is the same in almost all the models described in this volume:
it is simply the sum of the influences of all of the other units in the network
plus any external input from outside the network. The influence of some other
unit (say, unit j) is just the product of that unit’s output, outputj , times the
strength or weight of the connection to unit i from unit j. Thus the net input
to unit i is given by
neti = Σj wij outputj + extinputi.        (2.1)
In the IAC model, outputj = [aj ]+ . Here, aj refers to the activation of unit j,
and the expression [aj ]+ has value aj for all aj > 0; otherwise its value is 0.
The index j ranges over all of the units with connections to unit i. In general
the weights can be positive or negative, for excitatory or inhibitory connections,
respectively.
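As a minimal sketch of how Equation 2.1 might be computed in MATLAB for a hypothetical three-unit network (the actual iac implementation is described later in this chapter), the rectified output [a]+ is max(a, 0), and a matrix product sums the weighted outputs for every receiving unit at once:

W = [ 0   1  -1 ;        % W(i,j) is the weight to unit i from unit j
      1   0  -1 ;
     -1  -1   0 ];
a = [0.5; -0.1; 0.2];    % current activations
extinput = [1; 0; 0];    % external inputs
output = max(a, 0);      % [a]+ : activations below 0 produce no output
net = W*output + extinput   % net input to each unit (Equation 2.1)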
Human behavior is highly variable and IAC models as described thus far are
completely deterministic. In some IAC models, such as the interactive activation
model of letter perception (McClelland and Rumelhart, 1981) these determin-
istic activation values are mapped to probabilities. However, it became clear in
detailed attempts to fit this model to data that intrinsic variability in processing
and/or variability in the input to a network from trial to trial provided better
mechanisms for allowing the models to provide detailed fits to data. McClel-
land (1991) found that injecting normally distributed random noise into the net
input to each unit on each time cycle allowed such networks to fit experimental
data from experiments on the joint effects of context and stimulus information
on phoneme or letter perception. Including this in the equation above, we have:
neti = Σj wij outputj + extinputi + normal(0, noise)        (2.2)

where normal(0, noise) is a sample drawn from a normal distribution with mean 0 and standard deviation noise. For simplicity, noise is set to zero in many IAC network models.
Once the net input to a unit has been computed, the resulting change in the
activation of the unit is as follows:
If (neti > 0),
∆ai = (max − ai )neti − decay(ai − rest).
Otherwise,
∆ai = (ai − min)neti − decay(ai − rest).
Note that in this equation, max, min, rest, and decay are all parameters. In
general, we choose max = 1, min ≤ rest ≤ 0, and decay between 0 and 1. Note
also that ai is assumed to start, and to stay, within the interval [min, max].
Suppose we imagine the input to a unit remains fixed and examine what will
happen across time in the equation for ∆ai . For specificity, let’s just suppose
the net input has some fixed, positive value. Then we can see that ∆ai will get
smaller and smaller as the activation of the unit gets greater and greater. For
some values of the unit’s activation, ∆ai will actually be negative. In particular,
suppose that the unit’s activation is equal to the resting level. Then ∆ai is
simply (max − rest)neti . Now suppose that the unit’s activation is equal to
max, its maximum activation level. Then ∆ai is simply (−decay)(max − rest).
Between these extremes there is an equilibrium value of ai at which ∆ai is 0.
We can find what the equilibrium value is by setting ∆ai to 0 and solving for
ai :
0 = (max − ai)neti − decay(ai − rest)
  = (max)(neti) + (rest)(decay) − ai(neti + decay)

ai = [(max)(neti) + (rest)(decay)] / (neti + decay)        (2.3)
Using max = 1 and rest = 0, this simplifies to
ai = neti / (neti + decay)        (2.4)
What the equation indicates, then, is that the activation of the unit will reach
equilibrium when its value becomes equal to the ratio of the net input divided by
the net input plus the decay. Note that in a system where the activations of other
units—and thus of the net input to any particular unit—are also continually
changing, there is no guarantee that activations will ever completely stabilize—
although in practice, as we shall see, they often seem to.
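As a quick numeric illustration (a sketch with hypothetical parameter values, not taken from the handbook), iterating the update rule with a fixed, positive net input converges to the equilibrium given by Equation 2.4:

net = 0.1; decay = 0.1;            % hold the net input fixed at the value of decay
maxact = 1; rest = 0; a = rest;    % max = 1 and rest = 0, as in Equation 2.4
for t = 1:100
    a = a + (maxact - a)*net - decay*(a - rest);
end
a                                   % approaches net/(net + decay) = 0.5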
Equation 2.3 indicates that the equilibrium activation of a unit will always
increase as the net input increases; however, it can never exceed 1 (or, in the
general case, max) as the net input grows very large. Thus, max is indeed the
upper bound on the activation of the unit. For small values of the net input,
the equation is approximately linear since x/(x + c) is approximately equal to
x/c for x small enough.
We can see the decay term in Equation 2.3 as acting as a kind of restoring
force that tends to bring the activation of the unit back to 0 (or to rest, in the
general case). The larger the value of the decay term, the stronger this force
is, and therefore the lower the activation level will be at which the activation
of the unit will reach equilibrium. Indeed, we can see the decay term as scaling
the net input if we rewrite the equation as
ai = (neti/decay) / ((neti/decay) + 1)        (2.5)
When the net input is equal to the decay, the activation of the unit is 0.5 (in the
general case, the value is (max + rest)/2). Because of this, we generally scale
the net inputs to the units by a strength constant that is equal to the decay.
Increasing the value of this strength parameter or decreasing the value of the
decay increases the equilibrium activation of the unit.
In the case where the net input is negative, we get entirely analogous results:
ai = [(min)(neti) − (decay)(rest)] / (neti − decay)        (2.6)

With rest = 0, this simplifies to

ai = (min)(neti) / (neti − decay)        (2.7)

This equation is a bit confusing because neti and min are both negative quantities. It becomes somewhat clearer if we use amin (the absolute value of min) and aneti (the absolute value of neti). Then we have

ai = −(amin)(aneti) / (aneti + decay)        (2.8)
What this last equation brings out is that the equilibrium activation value ob-
tained for a negative net input is scaled by the magnitude of the minimum
(amin). Inhibition both acts more quickly and drives activation to a lower final
level when min is farther below 0.
and
netb = eb − γaa (2.12)
From these equations we can easily see that b will tend to be at a disadvantage
since the stronger excitation to a will tend to give a a larger initial activation,
thereby allowing it to inhibit b more than b inhibits a. The end result is a
phenomenon that Grossberg (1976) has called “the rich get richer” effect: Units
with slight initial advantages, in terms of their external inputs, amplify this
advantage over their competitors.
2.1.2 Resonance
Another effect of the interactive activation process has been called “resonance”
by Grossberg (1978). If unit a and unit b have mutually excitatory connections,
then once one of the units becomes active, they will tend to keep each other
active. Activations of units that enter into such mutually excitatory interactions
are therefore sustained by the network, or “resonate” within it, just as certain
frequencies resonate in a sound chamber. In a network model, depending on
parameters, the resonance can sometimes be strong enough to overcome the
effects of decay. For example, suppose that two units, a and b, have bidirectional,
excitatory connections with strengths of 2 × decay. Suppose that we set each unit's activation at 0.5 and then remove all external input and see what happens. The activations will stay at 0.5 indefinitely because (with max = 1 and rest = 0)

∆ai = (1 − 0.5)(2)(decay)(0.5) − (decay)(0.5)
    = (0.5)(2)(decay)(0.5) − (decay)(0.5)
    = 0
Thus, IAC networks can use the mutually excitatory connections between units
in different pools to sustain certain input patterns that would otherwise decay
away rapidly in the absence of continuing input. The interactive activation
process can also activate units that were not activated directly by external
input. We will explore these effects more fully in the exercises that are given
later.
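The sustained state in the resonance example above can also be checked numerically (a sketch assuming max = 1, rest = 0, no noise, and the standard update rule):

decay = 0.1; w = 2*decay;      % mutual excitatory weights of strength 2 x decay
a = [0.5 0.5];                 % both units start at 0.5; external input removed
for t = 1:100
    net = [w*a(2), w*a(1)];    % each unit's net input comes only from the other unit
    a = a + (1 - a).*net - decay*a;
end
a                               % remains at [0.5 0.5]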
Grossberg (1978) uses a slightly different activation equation than the one we have presented
here (taken from our earlier work with the interactive activation model of word
recognition). In Grossberg’s formulation, the excitatory and inhibitory inputs
to a unit are treated separately. The excitatory input (e) drives the activation
of the unit up toward the maximum, whereas the inhibitory input (i) drives the
activation back down toward the minimum. As in our formulation, the decay
tends to restore the activation of the unit to its resting level.
In this formulation, the change in activation is

∆a = (max − a)e − (a − min)i − decay(a − rest).

For a unit sitting at its resting activation of 0 (with max = 1 and rest = 0), this reduces to ∆a = e − (amin)i, where amin is the absolute value of min as above. This is in balance only if i = e/amin.
Our use of the net input rule was based primarily on the fact that we found
it easier to follow the course of simulation events when the balance of excitatory
and inhibitory influences was independent of the activation of the receiving
unit. However, this by no means indicates that our formulation is superior
computationally. Therefore we have made Grossberg’s update rule available
as an option in the iac program. Note that in the Grossberg version, noise is
added into the excitatory input, when the noise standard deviation parameter
is greater than 0.
2.2 THE IAC MODEL

The PDPTool User Guide should be consulted to get a general understanding of the structure of the PDPTool system.
Here we describe key characteristics of the IAC model software implemen-
tation. Specifics on how to run exercises using the IAC model are provided as
the exercises are introduced below.
2.2.1 Architecture
The IAC model consists of several units, divided into pools. In each pool, all the
units are assumed to be mutually inhibitory. Between pools, units may have
excitatory connections. In iac models, the connections are generally bidirection-
ally symmetric, so that whenever there is an excitatory connection from unit i
to unit j, there is also an equal excitatory connection from unit j back to unit
i. IAC networks can, however, be created in which connections violate these
characteristics of the model.
2.2.4 Parameters
In the IAC model there are several parameters under the user’s control. Most
of these have already been introduced. They are
max The maximum activation parameter.
rest The resting activation level to which activations tend to settle in the ab-
sence of external input.
decay The decay rate parameter, which determines the strength of the ten-
dency to return to resting level.
estr This parameter stands for the strength of external input (i.e., input to
units from outside the network). It scales the influence of external signals
relative to internally generated inputs to units.
alpha This parameter scales the strength of the excitatory input to units from
other units in the network.
gamma This parameter scales the strength of the inhibitory input to units
from other units in the network.
net.pool(2).projection(2).weight(i, j)
net.pool(3).projection(2).weight(j, i)
function cycle
    for cy = 1:ncycles
        cycleno = cycleno + 1;
        getnet();
        update();
        % what follows is concerned with
        % pausing and updating the display
        if guimode && display_granularity == cycle
            update_display();
        end
    end
The getnet and update routines are somewhat different for the standard
version and Grossberg version of the program. We first describe the standard
versions of each, then turn to the Grossberg versions.
Standard getnet. The standard getnet routine computes the net input for
each pool. The net input consists of three things: the external input, scaled by
estr; the excitatory input from other units, scaled by alpha; and the inhibitory
input from other units, scaled by gamma. For each pool, the getnet routine first
accumulates the excitatory and inhibitory inputs from other units, then scales
the inputs and adds them to the scaled external input to obtain the net input. If
the pool-specific noise parameter is non-zero, a sample from the standard normal
distribution is taken, then multiplied by the value of the ’noise’ parameter, then
added to the excitatory input.
Whether a connection is excitatory or inhibitory is determined by its sign. The vector of connection weights from a given sending unit to the units of a pool (called wts in the code) is examined. For all positive values of wts, the corresponding excitation terms are incremented by pool(sender).activation(index) * wts(wts > 0). This operation uses MATLAB logical indexing to apply the computation to only those elements of the array that satisfy the condition. Similarly, for all negative values of wts, pool(sender).activation(index) * wts(wts < 0) is added into the inhibition terms. These operations are only performed for sending units that have positive activations. The code that implements these calculations is as follows:
function getnet
    for i = 1:numpools
        pool(i).excitation = 0.0;
        pool(i).inhibition = 0.0;
        for sender = 1:numprojections_into_pool(i)
            positive_acts_indices = find(pool(sender).activation > 0);
            if ~isempty(positive_acts_indices)
                for k = 1:numelements(positive_acts_indices)
                    index = positive_acts_indices(k);
                    wts = projection_weight(:,index);
                    pool(i).excitation(wts>0) = pool(i).excitation(wts>0) ...
                        + pool(sender).activation(index) * wts(wts>0);
                    pool(i).inhibition(wts<0) = pool(i).inhibition(wts<0) ...
                        + pool(sender).activation(index) * wts(wts<0);
                end
            end
        end
        % scale the accumulated inputs and combine them with the external input
        pool(i).excitation = pool(i).excitation * alpha;
        pool(i).inhibition = pool(i).inhibition * gamma;
        if (pool(i).noise)
            pool(i).excitation = pool(i).excitation ...
                + Random('Normal', 0, pool(i).noise, size(pool(i).excitation));
        end
        pool(i).netinput = pool(i).excitation + pool(i).inhibition ...
            + estr * pool(i).extinput;
    end
Standard update. The update routine increments the activation of each unit,
based on the net input and the existing activation value. The vector pns is
a logical array (of 1s and 0s), 1s representing those units that have positive
netinput and 0s for the rest. This is then used to index into the activation and
netinput vectors and compute the new activation values. Here is what it looks
like:
function update
    for i = 1:numpools
        pns = pool(i).netinput > 0;    % logical index of units with positive net input
        if any(pns)
            pool(i).activation(pns) = pool(i).activation(pns) ...
                + (max - pool(i).activation(pns)) .* pool(i).netinput(pns) ...
                - decay * (pool(i).activation(pns) - rest);
        end
        nps = ~pns;                    % the remaining units
        if any(nps)
            pool(i).activation(nps) = pool(i).activation(nps) ...
                + (pool(i).activation(nps) - min) .* pool(i).netinput(nps) ...
                - decay * (pool(i).activation(nps) - rest);
        end
        pool(i).activation(pool(i).activation > max) = max;
        pool(i).activation(pool(i).activation < min) = min;
    end
The last two statements are included to guard against the anomalous behavior that would result if the user had set the estr, alpha, gamma, and decay parameters to values that allow activations to change so rapidly that the approximation to continuity is seriously violated and activations have a chance to escape the bounds set by the values of max and min.
Grossberg versions. The Grossberg versions of these two routines are struc-
tured like the standard versions. In the getnet routine, the only difference is
that the net input for each pool is not computed; instead, the excitation and
inhibition are scaled by alpha and gamma, respectively, and scaled external
input is added to the excitation if it is positive or is added to the inhibition if
it is negative.
In the update routine the two different versions of the standard activation rule
are replaced by a single expression. The routine then becomes
function update
    for i = 1:numpools
        pool(i).activation = pool(i).activation ...
            + (max - pool(i).activation) .* pool(i).excitation ...
            + (pool(i).activation - min) .* pool(i).inhibition ...
            - decay * (pool(i).activation - rest);
        pool(i).activation(pool(i).activation > max) = max;
        pool(i).activation(pool(i).activation < min) = min;
    end
2.3 EXERCISES
In this section we suggest several different exercises. Each will stretch your
understanding of IAC networks in a different way. Ex. 2.1 focuses primarily on
basic properties of IAC networks and their application to various problems in
memory retrieval and reconstruction. Ex. 2.2 suggests experiments you can do
to examine the effects of various parameter manipulations. Ex. 2.3 fosters the
exploration of Grossberg’s update rule as an alternative to the default update
rule used in the iac program. Ex. 2.4 suggests that you develop your own task
and network to use with the iac program.
If you want to cement a basic understanding of IAC networks, you should
probably do several parts of Ex. 2.1, as well as Ex. 2.2. The first few parts of
Ex. 2.1 also provide an easy tutorial example of the general use of the programs
in this book.
The “data base” for this exercise is the Jets and Sharks data base shown in
Figure 10 of PDP:1 and reprinted here for convenience in Figure 2.1. You are
28 CHAPTER 2. INTERACTIVE ACTIVATION AND COMPETITION
to use the iac program in conjunction with this data base to run illustrative
simulations of these basic properties of memory. In so doing, you will observe
behaviors of the network that you will have to explain using the analysis of IAC
networks presented earlier in the “Background section”.
Starting up. In MATLAB, make sure your path is set to your pdptool folder,
and set your current directory to be the iac folder. Enter ‘jets’ at the MATLAB
command prompt. Every label on the display you see corresponds to a unit
in the network. Each unit is represented as two squares in this display. The
square to the left of the label indicates the external input for that unit (initially,
all inputs are 0). The square to the right of the label indicates the activation
of that unit (initially, all activation values are equal to the value of the rest
parameter, which is -0.1).
If the colorbar is not on, click the ‘colorbar’ menu at the top left of the
display. Select ‘on’. To select the correct ‘colorbar’ for the jets and sharks
exercise, click the colorbar menu item again, click ‘load colormap’ and then
select the jmap colormap file in the iac directory. With this colormap, an
activation of 0 looks gray, -.2 looks blue, and 1.0 looks red. Note that when you
hold the mouse over a colored tile, you will see the numeric value indicated by
the color (and you get the name of the unit, as well). Try right-clicking on the
colorbar itself and choosing other mappings from ‘Standard Colormaps’ to see
if you prefer them over the default.
The units are grouped into seven pools: a pool of name units, a pool of gang
units, a pool of age units, a pool of education units, a pool of marital status
units, a pool of occupation units, and a pool of instance units. The name pool
contains a unit for the name of each person; the gang pool contains a unit for
each of the gangs the people are members of (Jets and Sharks); the age pool
contains a unit for each age range; and so on. Finally, the instance pool contains
a unit for each individual in the set.
The units in the first six pools can be called visible units, since all are
assumed to be accessible from outside the network. Those in the gang, age,
education, marital status, and occupation pools can also be called property
units. The instance units are assumed to be inaccessible, so they can be called
hidden units.
Each unit has an inhibitory connection to every other unit in the same pool.
In addition, there are two-way excitatory connections between each instance
unit and the units for its properties, as illustrated in Figure 2.2 (Figure 11 from
PDP:1 ). Note that the figure is incomplete, in that only some of the name and
instance units are shown. These names are given only for the convenience of
the user, of course; all actual computation in the network occurs only by way
of the connections.
Note: Although conceptually there are six distinct visible pools, and they
have been grouped separately on the display, internal to the program they form
a single pool, called pool(2). Within pool(2), inhibition occurs only among units
within the same conceptual pool. The pool of instance units is a separate pool
(pool(3)) inside the network. All units in this pool are mutually inhibitory.
The values of the parameters for the model are:
max = 1.0
min = −0.2
rest = −0.1
decay = 0.1
estr = 0.4
alpha = 0.1
gamma = 0.1
The program produces the display shown in Figure 2.3. The display shows
the names of all of the units. Unit names are preceded by a two-digit unit
number for convenience in some of the exercises below. The visible units are on
Figure 2.2: The units and connections for some of the individuals in Figure
2.1. (Two slight errors in the connections depicted in the original of this figure
have been corrected in this version.) (From “Retrieving General and Specific
Knowledge From Stored Knowledge of Specifics” by J. L. McClelland, 1981,
Proceedings of the Third Annual Conference of the Cognitive Science Society.
Copyright 1981 by J. L. McClelland. Reprinted by permission.)
the left in the display, and the hidden units are on the right. To the right of
each visible unit name are two squares. The first square indicates the external
input to the unit (which is initially 0). The second one indicates the activation
of the unit, which is initially equal to the value of the rest parameter.
Since the hidden units do not receive external input, there is only one square
to the right of the unit name for these units, for the unit’s activation. These
units too have an initial activation level equal to rest.
On the far right of the display is the current cycle number, which is initialized
to 0.
Since everything is set up for you, you are now ready to do each of the sep-
arate parts of the exercise. Each part is accomplished by using the interactive
activation and competition process to do pattern completion, given some probe
that is presented to the network. For example, to retrieve an individual’s prop-
erties from his name, you simply provide external input to his name unit, then
allow the IAC network to propagate activation first to the name unit, then from
there to the instance units, and from there to the units for the properties of the
instance.

Figure 2.3: The initial display produced by the iac program for Ex. 2.1.
the curve for the activation of unit 36-Ken. Most of the other curves are still
at or near rest. (Explain to yourself why some have already gone below rest
at this point.) A confusing fact about these graphs is that if lines fall on top
of each other you only see the last one plotted, and at this point many of the
lines do fall on top of each other. In the instance unit panels, you will see one
curve that rises above the others, this one for hidden unit 22-Ken. Explain to
yourself why this rises more slowly than the name unit for Ken, shown in the
Name panel.
Two variables that you need to understand are the update after variable in
the test panel and the ncycles variable in the testing options popup window. The
former (update after) tells the program how frequently to update the display
while running. The latter (ncycles) tells the program how many cycles to run
when you hit run. So, if ncycles is 10 and update after is 1, the program will
run 10 cycles when you click the little running man, and will update the display
after each cycle. With the above in mind you can now understand what happens
when you click the stepping icon. This is just like hitting run except that the
program stops after each screen update, so you can see what has changed. To
continue, hit the stepping icon again, or hit run and the program will run to
the next stopping point (i.e., the next cycle number divisible by ncycles).
As you will observe, activations continue to change for many cycles of pro-
cessing. Things slow down gradually, so that after a while not much seems to
be happening on each trial. Eventually things just about stop changing. Once
you’ve run 100 cycles, stop and consider these questions.
A picture of the screen after 100 cycles is shown in Figure 2.4. At this
point, you can check to see that the model has indeed retrieved the pattern
for Ken correctly. There are also several other things going on that are worth
understanding. Try to answer all of the following questions (you’ll have to refer
to the properties of the individuals, as given in Figure 2.1).
Q.2.1.1.
None of the visible name units other than Ken were activated, yet
a few other instance units are active (i.e., their activation is greater
than 0). Explain this difference.
Q.2.1.2.
Save the activations of all the units for future reference by typing: saveVis
= net.pool(2).activation and saveHid = net.pool(3).activation. Also, save the
Figure in a file, through the ‘File’ menu in the upper left corner of the Figure
panel. The contents of the figure will be reset when you reset the network, and
it will be useful to have the saved Figure from the first run so you can compare
it to the one you get after the next run.
Figure 2.4: The display screen after 100 cycles with external input to the name
unit for Ken.
Retrieval from a partial description. Next, we will use the iac program
to illustrate how it can retrieve an instance from a partial description of its
properties. We will continue to use Ken, who, as it happens, can be uniquely
described by two properties, Shark and in20s. Click the reset button in the
network window. Make sure all units have input of 0. (You will have to right-
click on Ken and set that unit back to 0). Set the external input of the 02-Sharks
unit and the 03-in20s unit to 1.00. Run a total of 100 cycles again, and take a
look at the state of the network.
Q.2.1.3.
Describe the differences between this state and the state after 100
cycles of the previous run, using saveHid and saveVis for reference.
What are the main differences?
Q.2.1.4.
net.pool(3).proj(2).weight(10, 13) = 0;
net.pool(2).proj(2).weight(13, 10) = 0;
Run the network again for 100 cycles and observe what happens.
Q.2.1.5.
Describe how the model was able to fill in what in this instance turns
out to be the correct occupation for Lance. Also, explain why the
model tends to activate the Divorced unit as well as the Married
unit.
net.pool(3).proj(2).weight(10, 13) = 1;
Set the external input of Jets to 1.00. Run the network for 100 cycles and
observe what happens. Reset the network and set the external input of Jets
back to 0.00. Now, set the input to in20s and JH to 1.00. Run the network
again for 100 cycles; you can ask it to generalize about the people in their 20s
with a junior high education by providing external input to the in20s and JH
units.
Q.2.1.6.
Consider the activations of units in the network after settling for
100 cycles with Jets activated and after settling for 100 cycles with
in20s and JH activated. How do the resulting activations compare
with the characteristics of individuals who share the specified prop-
erties? You will need to consult the data in Figure 2.1 to answer
this question.
Now that you have completed all of the exercises discussed above, write a
short essay of about 250 words in response to the following question.
Q.2.1.7.
Describe the strengths and weaknesses of the IAC model as a model
of retrieval and generalization. How does it compare with other
models you are familiar with? What properties do you like, and
what properties do you dislike? Are there any general principles you
can state about what the model is doing that are useful in gaining
an understanding of its behavior?
Q.2.2.1.
What effects do you observe from decreasing the values of estr,
alpha, gamma, and decay by a factor of 2? What happens if you
set them to twice their original values? See if you can explain what
is happening here. For this exercise, you should consider both the
asymptotic activations of units, and the time course of activation.
What do you expect for these based on the discussion in the “Back-
ground” section? What happens to the time course of the activation?
Why?
Q.2.2.2.
Q.2.3.1.
What happens when you repeat some of the simulations suggested
in Ex. 2.1 with gb mode on? Can these effects be compensated for
by adjusting the strengths of any of the parameters? If so, explain
why. Do any subtle differences remain, even after compensatory
adjustments? If so, describe them.
Hint.
In considering the issue of compensation, you should consider the
difference in the way the two update rules handle inhibition and the
differential role played by the minimum activation in each update
rule.
Q.2.4.1.
Describe your task, why it is interesting, your knowledge base, and
the experiments you run on it. Discuss the adequacy of the IAC
model to do the task you have set it.
Hint.
You might bear in mind if you undertake this exercise that you
can specify virtually any architecture you want in an IAC network,
including architectures involving several layers of units. You might
also want to consider the fact that such networks can be used in
low-level perceptual tasks, in perceptual mechanisms that involve
an interaction of stored knowledge with bottom-up information, as
in the interactive activation model of word perception, in memory
tasks, and in many other kinds of tasks. Use your imagination, and
you may discover an interesting new application of IAC networks.
Chapter 3
Constraint Satisfaction in
PDP Systems
In the previous chapter we showed how PDP networks could be used for content-
addressable memory retrieval, for prototype generation, for plausibly making
default assignments for missing variables, and for spontaneously generalizing to
novel inputs. In fact, these characteristics are reflections of a far more general
process that many PDP models are capable of, namely, finding near-optimal
solutions to problems with a large set of simultaneous constraints. This chapter
introduces this constraint satisfaction process more generally and discusses two
models for solving such problems. The specific models are the schema model,
described in PDP:14, and the Boltzmann machine, described in PDP:7. These
models are embodied in the cs (constraint satisfaction) program. We begin with
a general discussion of constraint satisfaction and some general results. We then
turn to the schema model. We describe the general characteristics of the schema
model, show how it can be accessed from cs, and offer a number of examples
of it in operation. This is followed in turn by a discussion of the Boltzmann
machine model.
3.1 BACKGROUND
Consider a problem whose solution involves the simultaneous satisfaction of a
very large number of constraints. To make the problem more difficult, suppose
that there may be no perfect solution in which all of the constraints are com-
pletely satisfied. In such a case, the solution would involve the satisfaction of
as many constraints as possible. Finally, imagine that some constraints may be
more important than others. In particular, suppose that each constraint has an
importance value associated with it and that the solution to the problem in-
volves the simultaneous satisfaction of as many of the most important of these
constraints as possible. In general, this is a very difficult problem. It is what
Minsky and Papert (1969) have called the best match problem. It is a problem
Thus, the net input to a unit provides the unit with information as to its con-
tribution to the goodness of the entire network state. Consider any particular
unit in the network. That unit can always behave so as to increase its contribu-
tion to the overall goodness of fit if, whenever its net input is positive, the unit
moves its activation toward its maximum activation value, and whenever its net
input is negative, it moves its activation toward its minimum value. Moreover,
since the global goodness of fit is simply the sum of the individual goodnesses,
a whole network of units behaving in such a way will always increase the global
goodness measure. This can be demonstrated more formally by examining the
partial derivative of the overall goodness with respect to the state of unit i. If
we take this derivative, all terms in which ai is not a factor drop out, and we
are simply left with the net input:
∂G/∂ai = neti = Σj wij aj + inputi + biasi        (3.4)
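Equation 3.1 itself is not reproduced here, but for the standard pairwise form of the goodness (each symmetric pair of weights counted once, plus input and bias terms), the relation in Equation 3.4 can be checked numerically. This is only a sketch with made-up values, not part of the cs program:

n = 4;
W = rand(n); W = (W + W')/2; W(1:n+1:end) = 0;   % symmetric weights, zero diagonal
input = rand(1,n); bias = rand(1,n); a = rand(1,n);
G = @(a) 0.5*a*W*a' + input*a' + bias*a';         % goodness: each pair counted once
net = a*W + input + bias;                          % net inputs (Equation 3.4)
delta = 1e-6; grad = zeros(1,n);
for i = 1:n
    ap = a; ap(i) = ap(i) + delta;
    grad(i) = (G(ap) - G(a)) / delta;              % finite-difference dG/da_i
end
max(abs(grad - net))                               % close to 0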
peak. The problem of local maxima is difficult for many systems. We address it
at length in a later section. Suffice it to say that different PDP systems differ
in the difficulty they have with this problem.
The development thus far applies to both of the models under discussion in
this chapter. It can also be noted that if the weight matrix in an IAC network
is symmetric, it too is an example of a constraint satisfaction system. Clearly,
there is a close relation between constraint satisfaction systems and content-
addressable memories. We turn, at this point, to a discussion of the specific
models and some examples with each. We begin with the schema model of
PDP:14.
if neti > 0
∆ai = neti (1 − ai )
otherwise,
∆ai = neti ai
Note that in this second case, since neti is negative and ai is positive, we
are decreasing the activation of the unit. This rule has two virtues: it conforms
to the requirements of our goodness function and it naturally constrains the
activations between 0 and 1. As usual in these models, the net input comes
from three sources: a unit’s neighbors, its bias, and its external inputs. These
sources are added. Thus, we have
neti = istr(Σj wij aj + biasi) + estr(inputi).        (3.5)
Here the constants istr and estr are parameters that allow the relative contri-
butions of the input from external sources and that from internal sources to be
readily manipulated.
3.3 IMPLEMENTATION
The cs program implementing the schema model is much like iac in structure. It
differs in that it does asynchronous updates using a slightly different activation
rule, as specified above. cs consists of essentially two routines: (a) an update
routine called rupdate (for random update), which selects units at random and
computes their net inputs and then their new activation values, and (b) a control
routine, cycle, which calls rupdate in a loop for the specified number of cycles.
Thus, in its simplest form, cycle is as follows:
function cycle
    for i = 1:ncycles
        cycleno = cycleno + 1;
        rupdate();
    end
Thus, each time cycle is called, the system calls rupdate ncycles times, and
updates the current cycle number (a second call to cycle will continue cycling
where the first one left off). Note that the actual code includes checks to see if
the display should be updated and/or if the process should be interrupted. We
have suppressed those aspects here to focus on the key ideas.
The rupdate routine itself does all of the work. It randomly selects a unit,
computes its net input, and assigns the new activation value to the unit. It does
this nupdates times. Typically, nupdates is set equal to nunits, so a single call
to rupdate, on average, updates each unit once:
function rupdate
    for updateno = 1:nupdates
        i = randi(nunits);                       % select a unit at random
        netinput = weight(i,:) * activation';    % summed input from all units
                                                 % (activation is a row vector here)
        netinput = istr*(netinput + bias(i)) + estr*input(i);
        if netinput > 0
            activation(i) = activation(i) + netinput*(1 - activation(i));
        else
            activation(i) = activation(i) + netinput*activation(i);
        end
    end
The code shown here not only suppresses the checks for interrupts and display
updates; it also suppresses the fact that units are organized into pools and
projections. Instead it represents in simple form the processing that would occur
in a network with a single pool of units and a single matrix of connections. It
is a constraint of the model, not enforced in the code, that the weight matrix
must be symmetric and its diagonal elements should all be 0.
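A simple way to check this constraint before running (a sketch, not part of the cs code itself) is:

% the goodness analysis assumes symmetric weights with a zero diagonal
ok = isequal(weight, weight') && all(diag(weight) == 0);
if ~ok
    warning('weight matrix should be symmetric with a zero diagonal');
end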
3.4 RUNNING THE PROGRAM
actfunction. The two models we will consider are available within the cs program: 'Schema' and 'Boltzmann'. The user can select whether the network follows the schema model (already described) or the Boltzmann model (to be described below) via the actfunction dropdown menu.
ncycles. Determines the number of cycles to run when the run button is clicked
or the runprocess('test') command is entered.
istr. This parameter scales the effect of the internal inputs (the bias input
to each unit and the input coming from other units via the connection
weights).
estr. Determines via a dropdown menu whether the external input is clamped
or scaled. If clamp is selected, estr is ignored, and external inputs to
units specify the activation value to which the unit will be set. If scale
is selected, external inputs are treated as one contributing factor entering
into a unit’s net input, and are scaled by the value of estr, which can be
entered in the numeric box to the right.
annealsched. This command will be described later when the concept of an-
nealing has been introduced in the Boltzmann machine section.
testset. The user may choose to load one or more pattern files specifying patterns of external inputs to units. When such a file has been loaded, a checkbox called 'pat' is added to the test window. When it is checked, one of the patterns in the current pattern file can be selected; its values will be applied as clamps or external inputs as specified by the Ext Input selector.
Create/Edit logs This allows the user to create logs and graphs of network
variables as described in the PDPTool User’s Guide.
Q.3.1.1.
Using Equation 3.1, explain quantitatively how the exact value of goodness
comes to be 6.4 when the network has reached the state shown in
the display. Remember that all weights and biases are scaled by the
istr parameter, which is set to .4. Thus each excitatory weight can
be treated as having value .4, each inhibitory weight -.6, and the
positive bias as having value .2.
You can run the cube example again by issuing the newstart command and
then hitting the run button. Do this until you find a case where, after 20 cycles,
there are four units on in cube A and four on in cube B. The goodness will be
4.8.
Q.3.1.2.
Using Equation 3.1, explain why the state you have found in this case corre-
sponds to a goodness value of 4.8.
Continue again until you find a case where, after 20 cycles, there are two
units on in one of the two cubes and six units on in the other.
Q.3.1.3.
Using Equation 3.1, explain why the state you have found in this
case also corresponds to a goodness value of 4.8.
Now run about 20 more cases of newstart followed by run, and record for each
the number of units on in each subnetwork after 20 cycles, making a simple tally
of cases in which the result was [8 0] (all eight units in the left cube activated,
none in the right), [6 2], [4 4], [2 6], and [0 8]. Examine the states where there
are units on in both subnetworks.
To facilitate this process, we have provided a little function called onecube(n)
that you can execute from the command line. This function issues one newstart
and then runs n cycles, showing the final state only. To enter the command
again, you can use ctrl-p, followed by enter. You can change the value of n by
editing the command before you hit enter. For present purposes, you should
simply leave n set at 20. Standalone users must follow the directions in this
footnote.1
Q.3.1.4.
How many times was each of the two valid interpretations found?
How many times did the system settle into a local maximum? What
were the local maxima the system found? To what extent do they
correspond to reasonable interpretations of the cube?
Now that you have a feeling for the range of final states that the system can
reach, try to see if you can understand the course of processing leading up to
the final state.
Q.3.1.5.
What causes the system to reach one interpretation or the other?
How early in the processing cycle does the eventual interpretation
become clear? What happens when the system reaches a local max-
imum? Is there a characteristic of the early stages of processing that
leads the system to move toward a local maximum?
Hint.
Note that if you wish to study how the network evolved to a particu-
lar solution you obtained at the end of 20 cycles following a newstart,
you can use reset to prepare the network to run through the very
same sequence of unit updates again. If at that point you set Update
after to 1 update, you can then follow the steps to the solution
1 Standalone users should use the command ’runscript onecbscript.m’ to achieve the same
effect as calling the onecube function. The command can be re-executed using the up-arrow
key followed by return. To change the number of cycles, you will need to edit the onecbscript.m
file using your preferred text editor such as wordpad or emacs.
Figure 3.3: The state of the system 20 cycles after the network was initialized
at startup.
Q.3.1.6.
How does the distribution of final states vary as the value of istr
is varied? Report your results in table form for each value of istr,
showing the number of times the network settles to [8 0], [6 2], [4 4],
[2 6], and [0 8] in each case. Consider the distribution of different
types of local maxima for different values of istr carefully. Do your best to explain the results you obtain.
Hint.
At low values of istr you will want to increase the value of the
ncycles argument to onecube; 80 works well for values like 0.1. Do
not be disturbed by the fact that the values of goodness are differ-
ent here than in the previous runs. Since istr effectively scales the
weights and biases, it also multiplies the goodness so that goodness
is proportional to istr.
Biasing the Necker cube toward one of the two possible solutions. Although we do not provide an exercise associated with this, we note that it is possible to
use external inputs to bias the network in favor of one of the two interpretations.
Study the effects of adding an input of 1.0 to some of the units in one of the
subnetworks, using a command like the following at the command prompt:
net.pool(2).extinput = [1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0];
The first eight units are for interpretation A, so this should provide external
input (scaled by estr, which is set to the same .4 value as istr) to all eight of the
units corresponding to that interpretation. If you wish, you can explore how the
distribution of interpretations changes as a result of varying the number of units
receiving external input of 1. You can also vary the strength of the external
inputs, either by supplying values different from 1 in the external input vector,
or by adjusting the value of the estr parameter.
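For readers who prefer to script this exploration rather than typing each command by hand, the following sketch (ours, not part of the distributed scripts) shows one way to proceed. It assumes the cube network is loaded so that net and onecube are available as described above, and that the external inputs you set persist when onecube issues its newstart.

nbias = 4;                          % how many interpretation-A units to bias
ext = zeros(1,16);
ext(1:nbias) = 1;                   % external input of 1 to the first nbias A units
net.pool(2).extinput = ext;         % bias toward interpretation A
onecube(20);                        % one newstart followed by 20 cycles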
3.6 GOODNESS AND PROBABILITY
In this section we consider the relationship between goodness and probability, and how a stochastic update rule can help with the problem of local maxima in the goodness function. To understand how this
is done, it will be useful to begin with an example of a local maximum and
try to understand in some detail why it occurs and what can be done about it.
Figure 3.6.1 illustrates a typical example of a local maximum with the Necker
cube. Here we see that the system has settled to a state in which the upper
four vertices were organized according to interpretation A and the lower four
vertices were organized according to interpretation B. Local maxima are always
blends of parts of the two global maxima. We never see a final state in which
the points are scattered randomly across the two interpretations.
All of the local maxima are cases in which one small cluster of adjacent
vertices is organized in one way and the rest are organized in another. This is
because the constraints are local. That is, a given vertex supports and receives
support from its neighbors. The units in the cluster mutually support one
another. Moreover, the two clusters are always arranged so that none of the
inhibitory connections are active. Note in this case, Afur is on and the two units
it inhibits, Bfur and Bbur, are both off. Similarly, Abur is on and Bbur and
Bfur are both off. Clearly the system has found little coalitions of units that
hang together and conflict minimally with the other coalitions. In Ex. 3.1, we
had the opportunity to explore the process of settling into one of these local
maxima. What happens is this. First a unit in one subnetwork comes on. Then
a unit in the other subnetwork, which does not interact directly with the first, is
updated, and, since it has a positive bias and at that time no conflicting inputs,
it also comes on. Now the next unit to come on may be a unit that supports
either of the two units already on or possibly another unit that doesn’t interact
directly with either of the other two units. As more units come on, they will
fit into one or another of these two emerging coalitions. Units that are directly
inconsistent with active units will not come on or will come on weakly and then
probably be turned off again. In short, local maxima occur when units that
don’t interact directly set up coalitions in both of the subnetworks; by the time
interaction does occur, it is too late, and the coalitions are set.
Interestingly, the coalitions that get set up in the Necker cube are analogous
to the bonding of atoms in a crystalline structure. In a crystal the atoms interact
in much the same way as the vertices of our cube. If a particular atom is oriented
in a particular way, it will tend to influence the orientation of nearby atoms so
that they fit together optimally. This happens over the entire crystal so that
some atoms, in one part of the crystal, can form a structure in one orientation
while atoms in another part of the crystal can form a structure in another ori-
entation. The points where these opposing orientations meet constitute flaws in
the crystal. It turns out that there is a strong mathematical similarity between
our network models and these kinds of processes in physics. Indeed, the work
of Hopfield (1982, 1984) on so-called Hopfield nets, of Hinton and Sejnowski
(1983), PDP:7, on the Boltzmann machine, and of Smolensky (1983), PDP:6,
on harmony theory were strongly inspired by just these kinds of processes. In
physics, the analogs of the goodness maxima of the above discussion are energy
minima. There is a tendency for all physical systems to evolve from highly ener-
getic states to states of minimal energy. In 1982, Hopfield (who is a physicist)
observed that symmetric networks using deterministic update rules behave in
such a way as to minimize an overall measure he called energy defined over the
whole network. Hopfield’s energy measure was essentially the negative of our
goodness measure. We use the term goodness because we think of our system
as a system for maximizing the goodness of fit of the system to a set of con-
straints. Hopfield, however, thought in terms of energy, because his networks
behaved very much as thermodynamical systems, which seek minimum energy
states. In physics the stable minimum energy states are called attractor states.
This analogy of networks falling into energy minima just as physical systems
do has provided an important conceptual tool for analyzing parallel distributed
processing mechanisms.
Hopfield’s original networks had a problem with local “energy minima” that
was much worse than in the schema model described earlier. His units were
binary. (Hopfield (1984) subsequently proposed a version in which units take
on a continuum of values to help deal with the problem of local minima in his
original model. The schema model is similar to Hopfield’s 1984 model, and
with small values of istr we have seen that it is less likely to settle to a local
minimum). For binary units, if the net input to a unit is positive, the unit
takes on its maximum value; if it is negative, the unit takes on its minimum
value (otherwise, it doesn’t change value). Binary units are more prone to local
minima because the units do not get an opportunity to communicate with one
another before committing to one value or the other. In Ex. 3.1, we gave you
the opportunity to run a version close to the binary Hopfield model by setting
istr to 2.0 in the Necker cube example. In this case the units are always at
either their maximum or minimum values. Under these conditions, the system
reaches local goodness maxima (energy minima in Hopfield’s terminology) much
more frequently.
Once the problem has been cast as an energy minimization problem and
the analogy with crystals has been noted, the problem of local goodness maxima
can be solved in essentially the same way that flaws are dealt
with in crystal formation. One standard method involves annealing. Annealing
is a process whereby a material is heated and then cooled very slowly. The idea
is that as the material is heated, the bonds among the atoms weaken and the
atoms are free to reorient relatively freely. They are in a state of high energy.
As the material is cooled, the bonds begin to strengthen, and as the cooling
continues, the bonds eventually become sufficiently strong that the material
freezes. If we want to minimize the occurrence of flaws in the material, we must
cool slowly enough so that the effects of one particular coalition of atoms have
time to propagate from neighbor to neighbor throughout the whole material
before the material freezes. The cooling must be especially slow as the freezing
temperature is approached. During this period the bonds are quite strong so
that the clusters will hold together, but they are not so strong that atoms in
one cluster cannot change state so as to line up with those in an adjacent
cluster, even if it means moving into a momentarily more energetic state. In this
way annealing can move a material toward a global energy minimum.
The solution then is to add an annealing-like process to our network models
and have them employ a kind of simulated annealing. The basic idea is to add
a global parameter analogous to temperature in physical systems and therefore
called temperature. This parameter should act in such a way as to decrease the
strength of connections at the start and then change so as to strengthen them
as the network is settling. Moreover, the system should exhibit some random
behavior so that instead of always moving uphill in goodness space, when the
temperature is high it will sometimes move downhill. This will allow the system
to “step down from” goodness peaks that are not very high and explore other
parts of the goodness space to find the global peak. This is just what Hinton and
Sejnowski have proposed in the Boltzmann machine, what Geman and Geman
(1984) have proposed in the Gibbs sampler, and what Smolensky has proposed
in harmony theory. The essential update rule employed in all of these models is
probabilistic and is given by what we call the logistic function:
p(a_i = 1) = \frac{e^{net_i/T}}{e^{net_i/T} + 1}    (3.6)
where T is the temperature. Dividing the numerator and denominator by e^{net_i/T}
gives the following version of this function, which is the one most typically used:
p(a_i = 1) = \frac{1}{1 + e^{-net_i/T}}    (3.7)
This differs from the basic schema model in several important ways. First, like
Hopfield’s original model, the units are binary. They can only take on values of 0
and 1. Second, they are stochastic – that is, their value is subject to uncertainty.
The update rule specifies only a probability that the units will take on one or the
other of their values. This means that the system need not necessarily go uphill
in goodness; it can move downhill as well. Third, the behavior of the system
depends on a global parameter, T , which determines the relative likelihood of
different states in the network.
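To make the update rule concrete, here is a minimal sketch (ours, not the pdptool implementation) of a single asynchronous update of one binary stochastic unit according to Equation 3.7; the variable names are illustrative.

function a = stochastic_update(a, w, bias, extinput, T, i)
% a: current vector of 0/1 activations; w: symmetric weight matrix;
% bias, extinput: bias and external input terms; i: index of the unit to update.
neti = w(i,:) * a(:) + bias(i) + extinput(i);  % net input to unit i
p = 1 / (1 + exp(-neti / T));                  % Equation 3.7
a(i) = double(rand < p);                       % unit comes on with probability p
end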
In fact, in networks of this type, a very important relationship holds between
the equilibrium probability of a state and the state’s goodness:
p(S_i) = \frac{e^{G_i/T}}{\sum_{i'} e^{G_{i'}/T}}    (3.8)
The denominator of this expression is a sum over all possible states and is often
difficult to compute, but we can now see that the likelihood ratio of being in either of
two states S1 or S2 is given by
\frac{p(S_1)}{p(S_2)} = \frac{e^{G_1/T}}{e^{G_2/T}},    (3.9)
or alternatively,
\frac{p(S_1)}{p(S_2)} = e^{(G_1 - G_2)/T}.    (3.10)
A final way of looking at this relationship that is sometimes useful comes if we
take the log of both sides of this expression:
\log\left(\frac{p(S_1)}{p(S_2)}\right) = (G_1 - G_2)/T.    (3.11)
At equilibrium, the log odds of the two states is equal to the difference in
goodness, divided by the temperature.
These simple expressions above can serve two important purposes for neural
network theory. First, they allow us to predict what a network will do from
knowledge of the constraints encoded in its weights, biases, and inputs, when
the network is run at a fixed temperature. This allows a mathematical derivation
of aspects of a network's behavior, and it allows us to relate the network's behavior
to theories of optimal inference.
Second, these expressions allow us to prove that we can, in fact, find a way
to have networks settle to one of their global maxima. In essence, the reason for
this is that, as T grows small, the probability ratios of particular pairs of states
become more and more extreme. Consider two states, one with goodness 5 and
one with goodness 4. When T is 1, the ratio of the probabilities is e, or about 2.72:1. But when
the temperature is .1, the ratio is e^10, or about 22,026:1. In general, as the temperature goes
down, we can make the ratio of the probabilities of two states as large as we like,
even if the goodness difference between the states is small.
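You can check these numbers, and see how quickly the ratio grows as T falls, with a couple of lines at the MATLAB prompt (the goodness values 5 and 4 are simply the ones used in the example above):

G1 = 5; G2 = 4;
for T = [1 0.5 0.25 0.1]
    fprintf('T = %4.2f   p(S1)/p(S2) = %10.1f\n', T, exp((G1 - G2)/T));
end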
However, there is one caveat: the above is true only at equilibrium, and
provided that the system is ergodic.
The equilibrium probability of a state is a slightly tricky concept. It is best
understood by thinking of a very large number of copies of the very same net-
work, each evolving randomly over time (with a different random seed, so each
one is different). Then we can ask: What fraction of these networks are in any
given state at any given time? They might all start out in the same state, and
they may all tend to evolve toward better states, but at some point the tendency
to move into a good state is balanced by the tendency to move out of it again,
and at this point we say that the probability distribution has reached equilib-
rium. At moderate temperatures, the flow between states occurs readily, and
the networks tend to follow the equilibrium distribution as they jump around
from state to state. At low temperatures, however, jumping between states
becomes very unlikely, and so the networks may be more likely to be found in
local maxima than in states that are actually better but are also neighbors of
even better states. When the flow is possible, we say that the system is ergodic.
When the system is ergodic, the equilibrium is independent of the starting state.
When the flow is not completely open, it is not possible to get from some states
to some other states.
In practice “ergodicity” is a matter of degree. If nearby relatively bad states
have a very low probability of being entered from nearby better states, it can
simply take more time than seems practical to wait for much in the way of
flow to occur. This is where simulated annealing comes in. We can start with
the temperature high, allowing a lot of flow out of relatively good states and
gradually lower the temperature to some desired level. At this temperature, the
distribution of states can in some cases approximate the equilibrium distribu-
tion, even though there is not really much ongoing movement between different
states.
The goodness of each of the two global maxima is 16: 12 from the mutually
supporting connections among the eight units of the cube
in which all of the units are active, plus 8 times .5 for the bias inputs to the
eight units representing the corners of that cube. The local maxima we have
considered all have goodness of 12.
The annealing schedule can be set through the options popup (annealsched
pushbutton) but is in fact easier to specify at the command prompt via the
settestopts command. Throughout this exercise, we will work with a final tem-
perature of 0.5. The initial annealing schedule is set in the script by a settestopts
command that tells the network to initialize the temperature at 2, then linearly reduce it
over 20 cycles to .5. In general the schedule consists of a set of time value pairs,
separated by a semicolon. Times must increase, and values must be greater
than 0 (use .001 if you want effectively 0 temperature).
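The exact calling syntax for setting the schedule is given in the user's guide; for intuition, here is a small hypothetical helper (ours, not part of pdptool) that treats a schedule of [time value] rows as a piecewise-linear temperature profile, which is how the schedule described above behaves.

function T = temp_at_cycle(sched, cycle)
% sched: rows of [time value] with increasing times, e.g. [0 2; 20 0.5]
times = sched(:,1);
vals  = sched(:,2);
cycle = min(max(cycle, times(1)), times(end));  % clamp to the schedule range
T = interp1(times, vals, cycle);                % linear interpolation between milestones
end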
Each time you run an ensemble of networks, you will do so using the many-
cubes command. This command takes two arguments. The first is the number
of instances of the network settling process to run and the second is the number
of cycles to use in running each instance. Enter the command shown below now
at the command prompt with arguments 100 and 20, to run 100 instances of
the network each for 20 cycles. Standalone users must follow the directions in
this footnote.2
histvals = manycubes(100,20)
If you have the Network Viewer window up on your screen, you’ll see the initial
and final states for each instance of the settling process flash by one by one.
At the end of 100 instances, a bargraph will pop up, showing the distribution
of goodness values of the states reached after 20 cycles. The actual numbers
corresponding to the bars on the graph are stored in the histvals variable; there
are 33 values corresponding to goodnesses from 0 to 16 in steps of 0.5. Enter
histvals(17:33) to display the entries corresponding to goodness values from 8
to 16. In one run of manycubes(100,20), 62 of the 100 instances ended in a state
with goodness 16, 14 ended with goodness 13.5, 2 with goodness 12.5, and 10 with
goodness 12; the remaining instances ended in states of lower goodness.
We want to know whether we are getting about the right equilibrium distribution
of states, given that our final temperature is .5. We can calculate the expected
distribution once we know how many states there are at each of these goodness values.
2 Standalone users should use the command runscript manycbscript.m. The variable hist-
vals will be created and its contents can be dumped to the command window by simply typing
the variable name ’histvals’, followed by enter, at the command prompt. The parameters of
the command, called ncubes and mccycles, are set at the top of the manycbscript.m file, and
can be changed by editing their values using your preferred text editor.
Q.3.2.1.
See if you can work out how many distinct states of the network have
goodness 16, how many have goodness 13.5, how many have goodness
12.5, and how many have goodness 12.
Since we need to have the right numbers to proceed, we give the answers to
the question above:
• 2 states with goodness 16: the two global maxima, one for each interpretation.
• 16 states with goodness 13.5, which are near misses with an extra unit on
in the other cube.
• 16 states with goodness 12.5, which are near misses with a unit off in the
active cube.
• 12 states with goodness 12: the local maxima.
If your answer differs with regard to the number of goodness 12 states, here
is an explanation. Competition occurs between pairs of units in the two cubes
corresponding to the two alternative interpretations of each of the four front-
to-back edges. Either pair of units can be on without direct mutual inhibitory
conflict. Other edges of the cubes do not have this property. There are thus four
ways to have an edge missing from one cube, and four ways to have an edge
missing from the other cube. Thus there are four 6-2 and four 2-6 maxima.
Similarly, the top surface of cube A can coexist with the bottom surface of
cube B (or vice versa) without mutual inhibition, and the left surface of cube
A can coexist with the right surface of cube B (or vice versa) without mutual
inhibition, giving four possible 4-4 local maxima. Note that the four units
corresponding to the front or back surface of cube A cannot coexist with the
four units corresponding to either the front or the back surface of cube B due
to the mutual inhibition.
So now we can finally ask, what is the relative probability of being in a state
with one of these four goodnesses, given the final temperature achieved in the
network?
We can calculate this as follows. For each Goodness Value (GV), we have:
p(GV) = N_{GV}\,\frac{e^{GV/T}}{Z}    (3.12)
Here N_GV represents the number of different states having the goodness value
in question, and e^{GV/T} is proportional to the probability of being in any one
of these states. We use Z to represent the denominator, which we will not need
to calculate. Consider first the value of this expression for the highest goodness
value, GV = 16, corresponding to the global maxima. There are two such max-
ima, so NGV = 2. So to calculate the numerator of this expression (disregarding
Z) for our temperature T = .5, we enter the following at the MATLAB command
prompt:
2*exp(16/.5)
We see this is a very large number. To see it in somewhat more compact format
enter format short g at the MATLAB prompt then enter the above expression
again. The number is 1.58 times 10 to the 14th power. Things come back into
perspective when we look at the ratio of the probability of being in a state with
goodness 16 to the probability of being in a state with goodness 13.5. There
are 16 such states, so the ratio is given by:
(2*exp(16/.5))/(16*exp(13.5/.5))
The ratio is manageable: it is 18.6 or so. Thus we should see a little more
than 18 times as many instances ending in states of goodness 16 as in states of goodness 13.5.
In fact we are quite far off from this in my run; we have 62 cases of states of
goodness 16, and 14 cases of states of goodness 13.5, so the ratio is 4.4 and is
too low.
Let’s now do the same thing for the ratio of states of goodness 12 to states
of goodness 16. There are 12 states of goodness 12, so the ratio is entered as
(2*exp(16/.5))/(12*exp(12/.5))
Calculating, we find this ratio is 496.8. Since I observed 10 states of goodness
12, and 62 of goodness 16, the observed ratio is hugely off: 62/10 is only 6.2.
Looking at the probability ratio for states of goodness 12.5 vs. 12, we have:
(16*exp(12.5/.5))/(12*exp(12/.5))
The ratio is 3.6. Thus, at equilibrium the network should be in a 12.5 goodness
state more often than a 12 goodness state. However, we have the opposite
pattern, with 10 instances of goodness 12 and 2 of goodness 12.5. Clearly, we
have not achieved an approximation of the equilibrium distribution; it appears
that many instances of the network are stuck in one of the local maxima, i.e.
the states with goodness of 12.
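The three ratio calculations above can be packaged into a few lines at the MATLAB prompt, which makes it easy to recompute them for other final temperatures when you work on the next question:

T = 0.5;                                             % final temperature
ratio_16_vs_135 = (2*exp(16/T)) / (16*exp(13.5/T))   % about 18.6
ratio_16_vs_12  = (2*exp(16/T)) / (12*exp(12/T))     % about 496.8
ratio_125_vs_12 = (16*exp(12.5/T)) / (12*exp(12/T))  % about 3.6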
Q.3.2.2.
Try to find an annealing schedule that runs over 100 cycles with
a final temperature of 0.5 that ends up as close as possible to the
right final distribution over the states with goodness values of 16,
13.5, 12.5, and 12. (Some lower goodness values may also appear.)
In particular, try to find a schedule that produces 1 or fewer states
with a goodness value of 12. You’ll need to do several runs of 100
cycles, 100 instances each using the manycubes command. Report
the results of your best annealing schedule and three other sched-
ules in a table, showing the annealing schedules used as well as the
number of states per goodness value (between 8 and 16) at the end
of each run. For your best schedule, report the results of two runs,
since there is variability from run to run. Explain the adjustments
you make to the annealing schedule and the thoughts that led you
to make them.
Hint.
A higher initial temperature with two intermediate milestones pro-
duces results that come close to matching the correct equilibrium
distribution. It is hard to get a perfect match – see what you come
up with.
The next question harks back to the earlier discussion of the physical process of
annealing and the physics analogy:
Q.3.2.3.
Discuss the likelihood of a network escaping from a [4 4] local max-
imum to another local maximum at T = 1. Provide an example of
two sequences of events by which this might occur. Picking one such
sequence, calculate the probability that such a series of states would
occur. Once such a maximum is escaped to a global maximum, what
sequence of events would have to happen to get back to this [4 4] local
maximum? Also consider escape and return to [6 2] and [2 6] type
local maxima. We don’t expect exact answers here, but the ques-
tion will hopefully elicit reasonable intuitions. With these thoughts
in hand, discuss your understanding of why your best schedule re-
ported above works as well or poorly as it does, and how you might
improve on it further.
Q.3.2.4.
Consider how you might change the network used in the cube exam-
ple to avoid the problem of local maxima. Assume you still have the
same sixteen vertex units, and the same bias inputs making each
of the two interpretations equally likely. Briefly explain (with an
optional drawing) one example in which adding connections would
help and one example in which adding hidden units would help.
Your answer may help to illustrate both that local maxima are not neces-
sarily inevitable and that hidden units (units representing important clusters of
inputs that tend to occur together in experience) may play a role in solving the
local maximum problem. More generally, the point here is to suggest that the
relationship between constraints, goodness, and probability may be a useful one
even beyond avoiding the problem of getting stuck in local maxima.
Chapter 4
Learning in PDP Models: The Pattern Associator
In previous chapters we have seen how PDP models can be used as content-
addressable memories and constraint-satisfaction mechanisms. PDP models are
also of interest because of their learning capabilities. They learn, naturally and
incrementally, in the course of processing. In this chapter, we will begin to
explore learning in PDP models. We will consider two “classical” procedures
for learning: the so-called Hebbian, or correlational learning rule, described by
Hebb (1949) and before him by William James (1950), and the error-correcting
or “delta” learning rule, as studied in slightly different forms by Widrow and
Hoff (1960) and by Rosenblatt (1959).
We will also explore the characteristics of one of the most basic network
architectures that has been widely used in distributed memory modeling with
the Hebb rule and the delta rule. This is the pattern associator. The pattern
associator has a set of input units connected to a set of output units by a single
layer of modifiable connections that are suitable for training with the Hebb rule
and the delta rule. Models of this type have been extensively studied by James
Anderson (see Anderson, 1983), Kohonen (1977), and many others; a number of
the papers in the Hinton and Anderson (1981) volume describe models of this
type. The models of past-tense learning and of case-role assignment in PDP:18
and PDP:19 are pattern associators trained with the delta rule. An analysis of
the delta rule in pattern associator models is described in PDP:11.
As these works point out, one-layer pattern associators have several sug-
gestive properties that have made them attractive as models of learning and
memory. They can learn to act as content-addressable memories; they gener-
alize the responses they make to novel inputs that are similar to the inputs
that they have been trained on; they learn to extract the prototype of a set of
repeated experiences in ways that are very similar to the concept learning char-
acteristics seen in human cognitive processes; and they degrade gracefully with
damage and noise. In this chapter our aim is to help you develop a basic
understanding of these properties and of the learning rules that give rise to them.
4.1 BACKGROUND
4.1.1 The Hebb Rule
In Hebb’s own formulation, this learning rule was described eloquently but only
in words. He proposed that when one neuron participates in firing another, the
strength of the connection from the first to the second should be increased. This
has often been simplified to ‘cells that fire together wire together’, and this in
turn has often been represented mathematically as:
\Delta w_{ij} = \epsilon\, a_i a_j    (4.1)
where \epsilon is a learning rate constant and a_i and a_j are the activations of the receiving and sending units, respectively.
In studying this rule, we will assume that activations are distributed around
0 and that the units in the network have activations that can be set in either
of two ways: They may be clamped to particular values by external inputs or
they may be determined by inputs via their connections to other units in the
network. In the latter case, we will initially focus on the case where the units
are completely linear; that is, on the case in which the activation and the output
of the unit are simply set equal to the net input:
a_i = \sum_j a_j w_{ij}    (4.3)
In this formulation, with the activations distributed around 0, the wij as-
signed by Equation 4.2 will be proportional to the correlation between the acti-
vations of units i and j; normalizations can be used to preserve this correlational
property when units have mean activations that vary from 0.
The correlational character of the Hebbian learning rule is at once the
strength of the procedure and its weakness. It is a strength because these
Figure 4.1: Two simple associative networks and the patterns used in training
them.
correlations can sometimes produce useful associative learning; that is, partic-
ular units, when active, will tend to excite other units whose activations have
been correlated with them in the past. It can be a weakness, though, since
correlations between unit activations often are not sufficient to allow a network
to learn even very simple associations between patterns of activation.
First let’s examine a positive case: a simple network consisting of two input
units and one output unit (Figure 4.1A). Suppose that we arrange things so
that by means of inputs external to this network we are able to impose patterns
of activation on these units, and suppose that we use the Hebb rule (Equation
4.1 above) to train the connections from the two input units to the output unit.
Suppose further that we use the four patterns shown in Figure 4.1B; that is, we
present each pattern, forcing the units to the correct activation, then we adjust
the strengths of the connections between the units. According to Equation 4.1,
w20 (the weight on the connection to unit 2 from unit 0) will be increased in
strength for each pattern by amount ε, which in this case we will set to 1.0. On
the other hand, w21 will be increased by amount ε in two of the cases (first and
last pattern) and reduced by ε in the other cases, for a net change of 0.
As a result of this training, then, this simple network would have acquired
a positive connection weight to unit 2 from unit 0. This connection will now
allow unit 0 to make unit 2 take on an activation value correlated with that of
unit 0. At the same time, the network would have acquired a null connection
from unit 1 to unit 2, capturing the fact that the activation of unit 1 has no
predictive relation to the activation of unit 2. In this way, it is possible to use
Hebbian learning to learn associations that depend on the correlation between
activations of units in a network.
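A few lines of MATLAB make the arithmetic of this example concrete. The four training patterns below are our own reconstruction from the description in the text (unit 0 always agrees with unit 2, while unit 1 agrees with it on only the first and last patterns), so they may differ from Figure 4.1B in detail:

% columns: unit 0, unit 1, unit 2 (the output unit)
patterns = [ 1  1  1
             1 -1  1
            -1  1 -1
            -1 -1 -1 ];
epsilon = 1.0;                                        % learning rate, as in the text
w20 = epsilon * sum(patterns(:,1) .* patterns(:,3))   % ends up +4
w21 = epsilon * sum(patterns(:,2) .* patterns(:,3))   % ends up 0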
Unfortunately, the correlational learning that is possible with a Hebbian
learning rule is a “unitwise” correlation, and sometimes, these unitwise cor-
relations are not sufficient to learn correct associations between whole input
patterns and appropriate responses. To see that this is so, suppose we change
our network so that there are now four input units and one output unit, as
shown in Figure 4.1C. And suppose we want to train the connections in the
network so that the output unit takes on the values given in Figure 4.1D for
each of the four input patterns shown there. In this case, the Hebbian learning
procedure will not produce correct results. To see why, we need to examine the
values of the weights (equivalently, the pairwise correlations of the activations
of each sending unit with the receiving unit). What we see is that three of the
connections end up with 0 weights because the activation of the corresponding
input unit is uncorrelated with the activation of the output unit. Only one of
the input units, unit 2, has a positive correlation with unit 4 over this set of
patterns. This means that the output unit will make the same response to the
first three patterns since in all three of these cases the third unit is on, and this
is the only unit with a nonzero connection to the output unit.
Before leaving this example, we should note that there are values of the
connection strengths that will do the job. One such set is shown in Figure 4.1E.
The reader can check that this set produces the correct results for each of the
four input patterns by using Equation 4.3.
Apparently, then, successful learning may require finding connection strengths
that are not proportional to the correlations of activations of the units. How
can this be done?
The answer lies in the error-correcting, or delta, learning rule, in which the change in each weight is proportional to the activation of the sending unit times the error at the receiving unit:
\Delta w_{ij} = \epsilon\, e_i a_j    (4.4)
Here the error e_i is
e_i = t_i - a_i    (4.5)
the difference between the teaching input to unit i and its obtained activation.
To see how this rule works, let’s use it to train the five-unit network in Figure
4.1C on the patterns in Figure 4.1D. The training regime is a little different here:
For each pattern, we turn the input units on, then we see what effect they have
on the output unit; its activation reflects the effects of the current connections
in the network. (As before we assume the units are linear.) We compute the
difference between the obtained output and the teaching input (Equation 4.5).
Then, we adjust the strengths of the connections according to Equation 4.4. We
will follow this procedure as we cycle through the four patterns several times,
and look at the resulting strengths of the connections as we go. The network is
started with initial weights of 0. The results of this process for the first cycle
through all four patterns are shown in the first four rows of Figure 4.2.
The first time pattern 0 is presented, the response (that is, the obtained
activation of the output unit) is 0, so the error is +1. This means that the
changes in the weights are proportional to the activations of the input units. A
value of 0.25 was used for the learning rate parameter, so each ∆w is ±0.25.
These are added to the existing weights (which are 0), so the resulting weights
are equal to these initial increments. When pattern 1 is presented, it happens
to be uncorrelated with pattern 0, and so again the obtained output is 0. (The
output is obtained by summing up the pairwise products of the inputs on the
current trial with the weights obtained at the end of the preceding trial.) Again
the error is +1, and since all the input units are on in this case, the change in
the weight is +0.25 for each input. When these increments are added to the
original weights, the result is a value of +0.5 for w04 and w24 , and 0 for the
other weights. When the next pattern is presented, these weights produce an
output of +1. The error is therefore −2, and so relatively larger ∆w terms
result. Even so, when the final pattern is presented, it produces an output of
+1 as well. When the weights are adjusted to take this into account, the weight
from input unit 0 is negative and the weight from unit 2 is positive; the other
weights are 0. This completes the first sweep through the set of patterns. At
this point, the values of the weights are far from perfect; if we froze them at
these values, the network would produce 0 output to the first three patterns. It
would produce the correct answer (an output of −1) only for the last pattern.
The correct set of weights is approached asymptotically if the training pro-
cedure is continued for several more sweeps through the set of patterns. Each
of these sweeps, or training epochs, as we will call them henceforth, results in
a set of weights that is closer to a perfect solution. To get a measure of the
closeness of the approximation to a perfect solution, we can calculate an error
measure for each pattern as that pattern is being processed. For each pattern,
the error measure is the value of the error (t − a) squared. This measure is then
summed over all patterns to get a total sum of squares or tss measure. The
resulting error measure, shown for each of the illustrated epochs in Figure 4.2,
Figure 4.2: Learning with the delta rule. See text for explanation.
gets smaller over epochs, as do the changes in the strengths of the connections.
The weights that result at the end of 20 epochs of training are very close to
the perfect solution values. With more training, the weights converge to these
values.
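The procedure just described is easy to express in a few lines of MATLAB. The sketch below (ours, not the pa program's code) runs one epoch of delta-rule training for a single linear output unit; called repeatedly, it reproduces the kind of trajectory shown in Figure 4.2 for whatever patterns you supply.

function [w, tss] = delta_epoch(w, inputs, targets, lrate)
% inputs: npatterns-by-n matrix of +1/-1 input patterns (one row per pattern)
% targets: npatterns-by-1 vector of targets; w: n-by-1 weight vector
tss = 0;
for p = 1:size(inputs, 1)
    a   = inputs(p,:) * w;                 % obtained activation (Equation 4.3)
    e   = targets(p) - a;                  % error (Equation 4.5)
    w   = w + lrate * e * inputs(p,:)';    % delta rule weight change
    tss = tss + e^2;                       % accumulate the squared error
end
end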
The error-correcting learning rule, then, is much more powerful than the
Hebb rule. In fact, it can be proven rather easily that the error-correcting rule
will find a set of weights that drives the error as close to 0 as we want for each
and every pattern in the training set, provided such a set of weights exists.
Many proofs of this theorem have been given; a particularly clear one may be
found in Minsky and Papert (1969) (one such proof may be found in PDP:11 ).
Consider, for example, a network with four input units and one output unit, and the following two training patterns:

input          output
+ + + +          +
- - - +          -
In this case, we see that three of the input units are perfectly correlated with the
output and one is uncorrelated with it. If we present these two input patterns
repeatedly with a small learning rate, the connection weights will converge to
1/3 for each of the three correlated input units and 0 for the uncorrelated unit.
You can verify that with these connection weights, the network will produce the
correct output for both inputs. Now, consider what would happen if the input
patterns were:
input          output
+ + + +          +
- + + +          -
Here, only one input unit is correlated with the output unit. What will the
connection weights converge to in this case? The second, third, and fourth
unit cannot help predict the output, so these weights will all be 0. The first
unit will have to do all the work, and so the first weight will be 1. While
this set of weights would work for the first set of input patterns, the learning
rule tends to spread the responsibility or divide the labor among the units that
best predict the output. This tendency to ’divide the labor’ among the input
units is a characteristic of error correcting learning, and does not occur with
the simple Hebbian learning rule because that rule is only sensitive to pairwise
input-output correlations.
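You can check the divide-the-labor claim numerically. Starting from zero weights and using a sufficiently small learning rate, the delta rule keeps the weight vector in the space spanned by the input patterns, so it converges to the minimum-norm solution, which MATLAB's pinv computes directly:

inputs1  = [ 1  1  1  1;  -1 -1 -1  1];   % first pattern set
inputs2  = [ 1  1  1  1;  -1  1  1  1];   % second pattern set
targets  = [ 1; -1];
w1 = pinv(inputs1) * targets              % approximately [1/3 1/3 1/3 0]'
w2 = pinv(inputs2) * targets              % approximately [1 0 0 0]'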
Readers who want more background on the linear algebra used here may find it worthwhile to read PDP:9. An in-depth analysis of the delta rule in pattern
associators is in PDP:11.
4.2 THE PATTERN ASSOCIATOR
Assuming all the weights in the network are initially 0, we can express the
value of each weight after a learning trial with input pattern i_l and output pattern o_l as
w_{ij} = \epsilon\, i_{jl} o_{il}    (4.7)
At test, the output of each unit is the sum of its inputs weighted by these connection strengths, o_{it} = \sum_j w_{ij} i_{jt}, which is equivalent to
o = W * i';
in MATLAB, where o is a column vector. Substituting for w_{ij} from Equation
4.7 yields
o_{it} = \epsilon \sum_j i_{jl} o_{il} i_{jt}    (4.9)
Since we are summing with respect to j in this last equation, we can pull out
ε and o_il:
o_{it} = \epsilon\, o_{il} \sum_j i_{jl} i_{jt}    (4.10)
Equation 4.10 says that the output at the time of test will be proportional to
the output at the time of learning times the sum of the elements of the input
pattern at the time of learning, each multiplied by the corresponding element
of the input pattern at the time of test.
This sum of products of corresponding elements is called the dot product.
It is very important to our analysis because it expresses the similarity of the
two patterns il and it . It is worth noting that we have already encountered
an expression similar to this one in Equation 4.2. In that case, though, the
In MATLAB, this is
ot = k * ol * sum(it .* il) / length(it);
This result is very basic to thinking in terms of patterns since it demonstrates
that what is crucial for the performance of the network is the similarity relations
among the input patterns–their correlations–rather than their specific properties
considered as individuals.2 Thus Equation 4.12 says that the output pattern
produced by our network at test is a scaled version of the pattern stored on the
learning trial. The magnitude of the pattern is proportional to the similarity of
the learning and test patterns. In particular, if k = 1 and if the test pattern is
identical to the training pattern, then the output at test will be identical to the
output at learning.
An interesting special case occurs when the normalized dot product between
the learned pattern and the test pattern is 0. In this case, the output is 0:
There is no response whatever. Patterns that have this property are called
orthogonal or uncorrelated ; note that this is not the same as being opposite or
anticorrelated.
2 Technically, performance depends on the similarity relations among the patterns and on
their overall strength or magnitude. However, among vectors of equal strength (e.g., the
vectors consisting of all +1s and −1s), only the similarity relations are important.
ndp_ab = sum(a.*b)/length(a);
You will see that patterns b, c, and d are all orthogonal to pattern a; in
fact, they are all orthogonal to each other. Pattern e, on the other hand, is not
orthogonal to pattern a, but is anticorrelated with it. Interestingly, it forms
an orthogonal set with patterns b, c, and d. When all the members of a set of
patterns are orthogonal to each other, we call them an orthogonal set.
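If you would rather explore this with patterns of your own, a few lines at the prompt suffice; the vectors below are our own illustrative choices, not the handbook's patterns a through e:

ndp = @(u,v) sum(u .* v) / length(u);     % normalized dot product
a = [ 1  1  1  1];
b = [ 1 -1  1 -1];
c = [ 1  1 -1 -1];
e = -a;                                   % anticorrelated with a
[ndp(a,b)  ndp(a,c)  ndp(b,c)  ndp(a,e)]  % gives 0  0  0  -1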
Now let us consider what happens when an entire ensemble of patterns is
presented during learning. In the Hebbian learning situation, the set of weights
resulting from an ensemble of patterns is just the sum of the sets of weights
resulting from each individual pattern. Note that, in the model we are con-
sidering, the output pattern, when provided, is always thought of as clamping
the state of the output units to the indicated values, so that the existing values of
the weights actually play no role in setting the activations of the output units.
Given this, after learning trials on a set of input patterns il each paired with
an output pattern ol , the value of each weight will be
w_{ij} = \epsilon \sum_l i_{jl} o_{il}    (4.13)
Substituting these weights into the expression for the test output, we find that the output of the network in response to input pattern t is the sum
of the output patterns that occurred during learning, with each pattern’s con-
tribution weighted by the similarity of the corresponding input pattern to the
test pattern. Three important facts follow from this:
1. If a test input pattern is orthogonal to all training input patterns, the
output of the network will be 0; there will be no response to an input
pattern that is completely orthogonal to all of the input patterns that
occurred during learning.
2. If a test input pattern is similar to one of the learned input patterns
and is uncorrelated with all the others, then the test output will be a
scaled version of the output pattern that was paired with the similar input
pattern during learning. The magnitude of the output will be proportional
to the similarity of the test input pattern to the learned input pattern.
3. For other test input patterns, the output will always be a blend of the
training outputs, with the contribution of each output pattern weighted
by the similarity of the corresponding input pattern to the test input
pattern.
In the exercises, we will see how these properties lead to several desirable
features of pattern associator networks, particularly their ability to generalize
based on similarity between test patterns and patterns presented during train-
ing.
These properties also reflect the limitations of the Hebbian learning rule;
when the input patterns used in training the network do not form an orthogonal
set, it is not in general possible to avoid contamination, or “cross-talk,” between
the response that is appropriate to one pattern and the response that occurs to
the others. This accounts for the failure of Hebbian learning with the second
set of training patterns considered in Figure 4.1. The reader can check that the
input patterns we used in our first training example in Figure 4.1 (which was
successful) were orthogonal but that the patterns used in the second example
were not orthogonal.
It is interesting to compare this to the Hebb rule. Consider first the case
where each of the learned patterns is orthogonal to every other one and is
presented exactly once during learning. Then the obtained output o_l will be 0 (a vector of all zeros)
for every learned pattern l, and the delta-rule expression for the weights reduces to
w_{ij} = \epsilon \sum_l t_{il} i_{jl}    (4.17)
In this case, the delta rule produces the same results as the Hebb rule; the
teaching input simply replaces the output pattern from Equation 4.13. As long
as the patterns remain orthogonal to each other, there will be no cross-talk
between patterns. Learning will proceed independently for each pattern. There
is one difference, however. If we continue learning beyond a single epoch, the
delta rule will stop learning when the weights are such that they allow the
network to produce the target patterns exactly. In the Hebb rule, the weights
will grow linearly with each presentation of the set of patterns, getting stronger
without bound.
In the case where the input patterns i_l are not orthogonal, the results of the
two learning procedures are more distinct. In this case, though, we can observe
the following interesting fact: We can read Equation 4.15 as indicating that the
change in the weights that occurs on a learning trial is storing an association of
the input pattern with the error pattern; that is, we are adding to each weight
an increment that can be thought of as an association between the error for
the output unit and the activation of the input unit. To see the implications
of this, let’s examine the effects of a learning trial with input pattern il paired
with output pattern tl on the output produced by test pattern it . The effect of
the change in the weights due to this learning trial (as given by Equation 4.15)
will be to change the output of some output unit i by an amount proportional
to the error that occurred for that unit on the learning trial, ei , times the dot
product of the learned pattern with the test pattern:
\Delta o_{it} = k\, e_{il}\, (i_l \cdot i_t)_n
Here k is again equal to ε times the number of input units n. In vector notation,
the change in the output pattern ot can be expressed as
\Delta o_t = k\, e_l\, (i_l \cdot i_t)_n
Thus, the change in the output pattern at test is proportional to the error
vector times the normalized dot product of the input pattern that occurred
during learning and the input pattern that occurred during test. Two facts
follow from this:
1. If the input on the learning trial is identical to the input on the test trial
so that the normalized dot product is 1.0 and if k = 1.0, then the change
in the output pattern will be exactly equal to the error pattern. Since
the error pattern is equal to the difference between the target and the
obtained output on the learning trial, this amounts to one trial learning
of the desired association between the input pattern on the training trial
and the target on this trial.
2. However, if i_t is different from i_l but not completely different, so that
(i_l · i_t)_n is not equal to either 1 or 0, then the output produced by i_t
will be affected by the learning trial. The magnitude of the effect will be
proportional to the magnitude of (i_l · i_t)_n.
The second effect–the transfer from learning one pattern to performance on
another–may be either beneficial or interfering. Importantly, for patterns of all
+1s and −1s, the transfer is always less than the effect on the pattern used
on the learning trial itself, since the normalized dot product of two different
patterns must be less than the normalized dot product of a pattern with itself.
This fact plays a role in several proofs concerning the convergence of the delta
rule learning procedure (see Kohonen, 1977, and PDP:11 for further discussion).
for all output units i for all target-input pattern pairs p. A consequence of this
constraint for the sets of input-output patterns that can be learned by a pattern
associator is something we will call the linear independence requirement:
2. The learning process converges with the delta rule as long as there is a set
of weights that will solve the learning problem.
3. A set of weights that will solve the problem does not always exist.
• Linear. Here the activation of output unit i is simply equal to the net
input.
• Stochastic. Here the activation of output unit i is set to 1 with a probability
given by the logistic function:
p(o_i = 1) = \frac{1}{1 + e^{-net_i/T}}    (4.20)
This is the same activation function used in Boltzmann machines.
• The Hebb rule. Hebbian learning in the pattern associator model works
as follows. Activations of input units are clamped based on an externally
supplied input pattern, and activations of the output units are clamped
to the values given by some externally supplied target pattern. Learning
then occurs by adjusting the strengths of the connections according to the
Hebbian rule:
\Delta w_{ij} = \epsilon\, o_i i_j    (4.22)
and the vector correlation (also called the cosine of the angle between two vec-
tors) is the dot product of the two vectors divided by the product of their
lengths:
vcor(u, v) = \frac{u \cdot v}{\|u\|\,\|v\|}
The normalized vector length is obtained by dividing the length by the square
root of the number of elements. Given these definitions, we can now consider the
relationships between the various measures. When the target pattern consists of
+1s and −1s, the normalized dot product of the output pattern and the target
pattern is equal to the normalized vector length of the output pattern times the
vector correlation of the output pattern and the target:
ndp(o, t) = nvl(o) × vcor(o, t)
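Expressed as MATLAB one-liners (our own paraphrase of these definitions, using row vectors), the three measures and the relationship just stated look like this:

ndp  = @(u,v) sum(u .* v) / length(u);            % normalized dot product
nvl  = @(u)   norm(u) / sqrt(length(u));          % normalized vector length
vcor = @(u,v) sum(u .* v) / (norm(u) * norm(v));  % vector correlation (cosine)

o = [0.5 -0.25 1 0];  t = [1 -1 1 -1];            % t consists of +1s and -1s
[ndp(o,t)   nvl(o) * vcor(o,t)]                   % the two values agree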
4.4 IMPLEMENTATION
The pa program implements the pattern associator models in a very straight-
forward way. The program is initialized by defining a network, as in previous
chapters. A PA network consists of a pool of input units (pool(2)) and a pool of
output units (pool(3)). pool(1) contains the bias unit which is always on but is
not used in these exercises. Connections are allowed from input units to output
units only. The network specification file (pa.net) defines the number of input
units and output units, as well as the total number of units, and indicates which
connections exist. It is also generally necessary to read in a file specifying the
set of pattern pairs that make up the environment of the model.
Once the program is initialized, learning occurs through calls to a routine
called train. This routine carries out nepochs of training, where the training
mode can be selected in the Train options window. strain trains the network
with patterns in sequential order, while ptrain permutes the order. The number
of epochs can also be set in that window. The routine exits if the total sum of
squares measure, tss, is less than some criterion value, ecrit which can also be
set in Train options. Here is the train routine:
function train()
for iter = 1:nepochs
patn = getpatternrange(data,options);
for p = 1:npatterns
pno = patn(p);
setinput(data,pno,options);
compute_output(data,pno,options);
compute_error;
sumstats;
if (options.lflag)
change_weights(options);
end
if (net.tss < options.ecrit)
return;
end
end
end
This calls four other routines: one that sets the input pattern (setinput),
one that computes the activations of the output units from the activations of
the input units (compute output), one that computes the error measure (com-
pute error ), and one that computes the various summary statistics (sumstats).
Below we show the compute output and the compute error routines. First,
compute output:
function compute_output(pattern,patnum,opts)
Hebb and Delta are the two possible values of the lrule field under Train
options. The lr variable in the code corresponds to the learning rate, which is
set by the lrate field in Train options.
Note that for Hebbian learning, we use the target pattern directly in the
learning rule, since this is mathematically equivalent to clamping the activations
of the output units to equal the target pattern and then using these activations.
nvl ; the vector correlation measure, vcor ; the pattern sum of squares, pss; and
the total sum of squares, tss.
newstart Button on the Network Viewer, in the train panel. It seeds the
random number generator with a new random seed, and then returns the program
to its initial state before any learning occurred. That is, sets all weights to
0, and sets nepochs to 0. Also clears activations and updates the display.
ptrain Option under trainmode in the Train options. This option, when the
network is trained, presents each pattern pair in the pattern list once in
each epoch. Order of patterns is rerandomized for each epoch.
reset Button on the main network window. Same as newstart, but reseeds the
random number generator with the same seed that was used last time the
network was initialized.
strain Option under trainmode in the Train options. With this option, when the
network is trained, pattern pairs are presented in the same fixed order in each
epoch. The order is simply the order in which the pattern pairs are en-
countered in the list.
Test all Radio button on the test panel on the Network Viewer. If this option
is checked and testing is run, the network will test each testing pattern in
sequence. Pressing the step button will present each one by one for better
viewing. If it is not checked, the network will test just the selected test
pattern. To select a pattern, click on it in the Testing Patterns frame.
ecrit Parameter in Train options. Error criterion for stopping training. If the
tss at the end of an epoch of training is less than this, training stops.
lflag Check box in Train options. Normally checked, it enables weight updates
during learning.
nepochs Number of training epochs conducted each time the run button is
pressed.
Update After Field in the train and test windows of Network Viewer. Values
in the menu are cycle, pattern, and epoch. If the value is cycle, the screen
is updated after processing each pattern and then updated again after the
weights are changed. This only applies for training. If the value is pattern,
the screen is only updated after the weights are changed. If the value is
epoch, the screen is updated at the end of each epoch. The number field
to the left of this option controls how many cycles, patterns, or epochs
occur before an update is made.
actfunction Field in Train options or Test options. Select from linear, linear
threshold, stochastic, or continuous sigmoid.
lrule Field in Train options. Select between the Hebb and Delta update rules.
lrate Parameter in Train options. Scales the size of the changes made to the
weights. Generally, if there are n input units, the learning rate should be
less than or equal to 1/n.
noise Parameter in Train and Test options. Range of the random distortion
added to each input and target pattern specification value during training
and testing. The value added is uniformly distributed in the interval
[−noise, +noise].
temp Denominator used in the logistic function to scale net inputs in both the
continuous sigmoid and stochastic modes. Generally, temp can be set to
1. Note that there is only one cycle of processing in pa, so there is no
annealing.
pss Pattern sum of squares, equal to the sum over all output units of the squared
difference between the target for each unit and the obtained activation of
the unit.
target Vector of target values for output units, based on the current target
pattern, subject to effects of noise.
tss Total sum of squares, equal to the sum, over all patterns so far presented
during the current epoch, of the pattern sum of squares.
vcor Vector correlation of the obtained activation vector over the output units
and the target vector.
Figure 4.4: Display layout for the first pa exercise while processing pattern a,
before any learning has occurred.
Now you can train the network on this first pattern pair for one epoch. Select
the train panel, and then select cycle in the train panel. With this option, the
program will present the first (and, in this case, only) input pattern, compute
the output based on the current weights, and then display the input, output,
and target patterns, as well as some summary statistics. If you click “step” in
the train panel, the network will pause after the pattern presentation.
In the upper left corner of the display area, you will see some summary
information, including the current ndp, or normalized dot product, of the output
obtained by the network with the target pattern; the nvl, or normalized vector
length, of the obtained output pattern; and the vcor, or vector correlation, of
the output with the target. All of these numbers are 0 because the weights are
0, so the input produces no output at all. Below these numbers are the pss, or
pattern sum of squares, and the tss, or total sum of squares. They are the sum
of squared differences between the target and the actual output patterns. The
first is summed over all output units for the current pattern, and the second
is summed over all patterns so far encountered within this epoch (they are,
therefore, identical at this point).
Below these entries you will see the weight matrix on the left, with the
input vector that was presented for processing below it and the output and
target vectors to the right. The display uses shades of red for positive values and
shades of blue for negative values, as in previous models. A value of +1 or −1
is not very saturated, so that values over a larger range can be distinguished.
The window on the right of the screen shows the patterns in use for training
or test, whichever is selected. Input and target patterns are separated by a
vertical separator. You will see that the input pattern shown below the weights
matches the single input pattern shown on the right panel and that the target
pattern shown to the right of the weights matches the single target pattern to
the right of the vertical separator.
If you click step a second time, the target will first be clamped onto the
output units, then the weights will be updated according to the Hebbian learning
rule:
\Delta w_{ij} = (\mathrm{lrate})\, o_i i_j    (4.25)
Q.4.1.1.
Now, with just this one trial of learning, the network will have “mastered”
this particular association, so that if you test it at this point, you will find that,
given the learned input, it perfectly reproduces the target. You can test the
network using the test command. Simply select the test panel, then click step.
In this particular case the display will not change much because in the previous
display the output had been clamped to reflect the very target pattern that
the network has now computed. The only thing that actually changes in the
display are the ndp, vcor, and nvl fields; these will now reflect the normalized
dot product and correlation of the computed output with the target and the
normalized length of the output. They should all be equal to 1.0 at this point.
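The one-trial mastery you have just observed follows directly from the arithmetic of Equation 4.25. The sketch below illustrates it with a made-up input and target pattern (not the pattern in one.pat); with lrate set to 1/n, as suggested in the parameter descriptions above, testing with the trained input reproduces the target exactly:

input  = [ 1 -1  1 -1];                   % a hypothetical +1/-1 input pattern
target = [ 1  1 -1 -1];                   % a hypothetical target pattern
lrate  = 1 / length(input);               % 1/n
W = lrate * target(:) * input;            % one Hebbian trial: W(i,j) = lrate*target(i)*input(j)
output = W * input'                       % equals target(:) exactly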
You are now ready to test the generalization performance of the network.
You can enter patterns into a file. Start by opening the “one.pat” file, then copy
the existing pattern and paste it several times into a new .pat file. Save this file as
“gen.pat”. Edit the input pattern entries for the patterns and give each pattern
its own name. See Q.4.1.2 for information on the patterns to enter. Leave the
target part of the patterns the same. Then, click Test options, click Load new,
and load the new patterns for testing.
Q.4.1.2.
Try at least 4 different input patterns, testing each against the orig-
inal target. Include in your set of patterns one that is orthogonal to
the training pattern and one that is perfectly anticorrelated with it,
as well as one or two others with positive normalized dot products
with the input pattern. Report the input patterns, the output pat-
tern produced, and the ndp, vcor, and nvl in each case. Relate the
obtained output to the specifics of the weights and the input pat-
terns used and to the discussion in the “Background” section (4.1)
about the test output we should get from a linear Hebbian associa-
tor, as a function of the normalized dot product of the input vector
used at test and the input vector used during training.
If you understand the results you have obtained in this exercise, you under-
stand the basis of similarity-based generalization in one-layer associative net-
works. In the process, you should come to develop your intuitions about vector
similarity and to clearly be able to distinguish uncorrelated patterns from anti-
correlated ones.
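As a quick check on the patterns you construct, a sketch like the following verifies that one candidate test pattern is orthogonal to a hypothetical training input (ndp of 0) and that another is perfectly anticorrelated with it (ndp of −1):

train = [ 1 -1  1 -1  1 -1  1 -1];     % hypothetical training input
ortho = [ 1  1 -1 -1  1  1 -1 -1];     % candidate orthogonal test pattern
anti  = -train;                        % perfectly anticorrelated test pattern
n = length(train);
ndp_ortho = (ortho * train') / n       % should be 0 for the orthogonal pattern
ndp_anti  = (anti  * train') / n       % should be -1 for the anticorrelated pattern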
first 1.0 -1.0 1.0 -1.0 1.0 -1.0 1.0 -1.0 1.0 1.0 -1.0 -1.0 1.0 1.0 -1.0 -1.0
We provide sets of patterns that meet these conditions in the two files or-
tho.pat and li.pat. However, we want you to make up your own patterns. Save
both sets for your use in the exercises in files called myortho.pat and myli.pat.
For each set of patterns, display the patterns in a table, then answer each of the
next two questions.
Q.4.2.1.
Read in the patterns using the “Load New” option in both the Train
and Test options, separately. Reset the network (this clears the
weights to 0s). Then run one epoch of training using the Hebbian
learning rule by pressing the “Run” button. What happens with
each pattern? Run three additional epochs of training (one at a
time), testing all the patterns after each epoch. What happens? In
what ways do things change? In what ways do they stay the same?
Why?
Q.4.2.2.
Turn off Hebb mode in the program by enabling the delta rule un-
der Train options, and try the above experiment again. Make sure
to reset the weights before training. Describe the similarities and
differences between the results obtained with the various measures
(concentrate on ndp and tss) and explain in terms of the differential
characteristics of the Hebbian and delta rule learning schemes.
For the next question, reset your network, and load the pattern set in the
file li.pat for both training and testing. Run one epoch of training using the
Hebb rule, and save the weights, using a command like:
liHebbwts = net.pool(3).proj(1).weight
Then press reset again, and switch to the delta rule. Run one epoch of training
at a time, and examine performance at the end of each epoch by testing all
patterns.
Q.4.2.3.
In li.pat, one of the input patterns is orthogonal to both of the others,
which are partially correlated with each other. When you test the
network at the end of one epoch of training, the network exhibits
perfect performance on two of the three patterns. Which pattern
is not perfectly correct? Explain why the network is not perfectly
correct on this pattern and why it is perfectly correct on the other
two patterns.
Keep running training epochs using the delta rule until the tss measure drops
below 0.01. Store the weights in a variable, such as liDeltawts, so that you can
display them numerically.
Q.4.2.4.
Examine and explain the resulting weight matrix, contrasting it with
the weight matrix obtained after one cycle of Hebbian learning with
the same patterns (these are the weights you saved before). What are
the similarities between the two matrices? What are the differences?
For one thing, take note of the weight to output unit 1 from input
unit 1, and the weight to output unit 8 from input unit 8. These are
the same under the Hebb rule, but different under the Delta rule.
Why? Make sure you find other differences, and explain them as
well. For all of the differences you notice, try to explain rather than
just describe the differences.
Hint.
To answer this question fully, you will need to refer to the patterns.
Remember that in the Hebb rule, each weight is just the sum of the
co-products of corresponding input and output activations, scaled by
the learning rate parameter. But this is far from the case with the
Delta rule, where weights can compensate for one another, and where
such things as a division of labor can occur. You can fully explain the
weights learned by the Delta rule, if you take note of the fact that all
eight input units contribute to the activation of each of the output
units. You can consider each output unit independently, however,
since the error measure treats each output unit independently.
As the final exercise in this set, construct a set of input-output pattern pairs that
cannot be learned by a delta rule network, referring to the linear independence
requirement and the text in Section 4.2.3 to help you construct an unlearnable
set of patterns. Full credit will be given for a set of three (or more) patterns
such that the set with any one pattern removed can be learned, but the full set
cannot be learned.
Q.4.2.5.
Hint.
This means that each element in each input pattern and in each target pattern
will have its activation distorted by a random amount uniformly distributed
between −0.5 and +0.5.
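A minimal sketch of how such noise could be generated (the simulator adds the noise for you when the noise option is on; the pattern here is illustrative):

pattern = [1 -1 1 -1 1 -1 1 -1];        % a clean pattern (illustrative)
noise   = rand(size(pattern)) - 0.5;    % uniform on the interval [-0.5, +0.5]
noisy_pattern = pattern + noise;        % each element distorted independently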
Next, load in a set of patterns (your orthogonal set from Ex. 4.2 or the
patterns in ortho.pat). You can then see how well the model does at pulling
the “signals” out of the “noise.” The clearest way to see this is by studying
the weights themselves and comparing them to the weights acquired with the
same patterns without noise added. You can also test with noise turned off; in
fact, as loaded, noise is turned off for testing, so running a test lets you see
how well the network does on patterns without added noise.
Q.4.3.1.
Hint.
You may find it useful to rerun the relevant part of Ex. 4.2 (Q.
4.2.2). You can save the weights you obtain in the different runs as
before, e.g.
nonoisewts = net.pool(3).proj(1).weight;
For longer runs, remember that you can set Epochs in Train options
to a number larger than the default value to run more epochs for
each press of the “Run” button.
The results of this simulation are relevant to the theoretical analyses de-
scribed in PDP:11 and are very similar to those described under “central ten-
dency learning” in PDP:25, where the effects of amnesia (taken as a reduction
in connection strength) are considered.
Q.4.4.1.
At the end of the 10th epoch, the tss should be in the vicinity
of 30, or about 1.5 errors per pattern. Given the values of the
weights and the fact that Temp is set to 1, calculate the net input
to the last output unit for the first two input patterns, and calculate
the approximate probability that this last output unit will receive
the correct activation in each of these two patterns. MATLAB will do the
arithmetic for you if you enter an expression such as:
p = 1/(1+exp(-net.pool(3).netinput(8)))
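If you prefer to work from the weights directly, a sketch like the following computes the same quantity by hand. It assumes the network is loaded so that net.pool(3).proj(1).weight holds the weights into the output units with rows indexing the receiving units, as in the examples above; the input pattern is made up:

w     = net.pool(3).proj(1).weight;   % weights into the output units, saved as above
input = [1 0 0 1 0 0 1 0];            % made-up input pattern (input units 1, 4, and 7 on)
netin = w * input';                   % net input to each output unit
p8 = 1 / (1 + exp(-netin(8)))         % probability that output unit 8 comes on (Temp = 1)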
At this point you should be able to see the solution to the rule of 78 patterns
emerging. Generally, there are large positive weights between input units and
corresponding output units, with unit 7 exciting unit 8 and unit 8 exciting unit
7. You’ll also see rather large inhibitory weights from each input unit to each
other unit within the same subgroup (i.e., 1, 2, and 3; 4, 5, and 6; and 7 and
8). Run another 40 or so epochs, and a subtler pattern will begin to emerge.
Q.4.4.2.
Generally there will be slightly negative weights from input units
to output units in other subgroups. See if you can understand why
this happens. Note that this does not happen reliably for weights
coming into output units 7 and 8. Your explanation should explain
this too.
At this point, you have watched a simple PDP network learn to behave
in accordance with a simple rule, using a simple, local learning scheme; that
is, it adjusts the strength of each connection in response to its errors on each
particular learning experience, and the result is a system that exhibits lawful
behavior in the sense that it conforms to the rule.
For the next part of the exercise, you can explore the way in which this
kind of pattern associator model captures the three-stage learning phenomenon
exhibited by young children learning the past tense in the course of learning
English as their first language. To briefly summarize this phenomenon: Early
on, children know only a few words in the past tense. Many of these words
happen to be exceptions, but at this point children tend to get these words
correct. Later in development, children begin to use a much larger number of
words in the past tense, and these are predominantly regular. At this stage,
they tend to overregularize exceptions. Gradually, over the course of many
years, these exceptions become less frequent, but adults have been known to
say things like ringed or taked, and lower-frequency exceptions tend to lose
their exceptionality (i.e., to become regularized) over time.
The 78 model can capture this pattern of results; it is interesting to see it do
this and understand how and why this happens. For this part of the exercise,
you will want to reset the weights, and read in the file hf.pat, which contains
an exception pattern (147 −→ 147) and one regular pattern (258 −→ 257). If
we imagine that the early experience of the child consists mostly of exposure to
high-frequency words, a large fraction of which are irregular (8 of the 10 most
frequent verbs are irregular), this approximates the early experience the child
might have with regular and irregular past-tense forms. If you run 30 epochs
of training using ptrain with these two patterns, you will see a set of weights
that allows the model to often set each output bit correctly, but not reliably.
At this point, you can read in the file all.pat, which contains these two pattern
pairs, plus all of the other pairs that are consistent with the rule of 78. This file
differs from the 78.pat file only in that the input pattern 147 is associated with
the “exceptional” output pattern 147 instead of what would be the “regular”
corresponding pattern 148. Save the weights that resulted from learning hf.pat.
Then read in all.pat and run 10 more epochs.
Q.4.4.3.
Given the weights that you see at this point, what is the network’s
most probable response to 147 ? Can you explain why the network
has lost the ability to produce 147 as its response to this input
pattern? What has happened to the weights that were previously
involved in producing 147 from 147 ?
One way to think about what has happened in learning the all.pat stimuli
is that the 17 regular patterns are driving the weights in one direction and the
single exception pattern is fighting a lonely battle to try to drive the weights
in a different direction, at least with respect to the activation of units 7 and 8.
Since eight of the input patterns have unit 7 on and “want” output unit 8 to
be on and unit 7 to be off and only one input pattern has input unit 7 on and
wants output unit 7 on and output unit 8 off, it is hardly a fair fight.
If you run more epochs (upwards of 300), though, you will find that the
network eventually finds a compromise solution that satisfies all of the patterns.
Q.4.4.4.
Q.4.5.1.
Chapter 5
Training Hidden Units with Back Propagation
In this chapter, we introduce the back propagation learning procedure for learn-
ing internal representations. We begin by describing the history of the ideas and
problems that make clear the need for back propagation. We then describe the
procedure, focusing on the goal of helping the student gain a clear understand-
ing of gradient descent learning and how it is used in training PDP networks.
The exercises are constructed to allow the reader to explore the basic features of
the back propagation paradigm. At the end of the chapter, there is a separate
section on extensions of the basic paradigm, including three variants we call cas-
caded back propagation networks, recurrent networks, and sequential networks.
Exercises are provided for each type of extension.
5.1 BACKGROUND
The pattern associator described in the previous chapter has been known since
the late 1950s, when variants of what we have called the delta rule were first
proposed. In one version, in which output units were linear threshold units, it
was known as the perceptron (cf. Rosenblatt, 1959, 1962). In another version,
in which the output units were purely linear, it was known as the LMS or
least mean square associator (cf. Widrow and Hoff, 1960). Important theorems
were proved about both of these versions. In the case of the perceptron, there
was the so-called perceptron convergence theorem. The paradigm addressed by
this theorem is pattern classification. There is a set of binary input vectors, each
of which can be said to belong to one of two classes. The system is to learn a set
of connection strengths and a threshold value so that it can correctly classify
each of the input vectors. The basic structure of the perceptron is illustrated
in Figure 5.1. The perceptron learning procedure is the following: An input
vector is presented to the system (i.e., the input units are given an activation of
1 if the corresponding value of the input vector is 1 and are given 0 otherwise).
Figure 5.1: The one-layer perceptron analyzed by Minsky and Papert. (From
Perceptrons by M. L. Minsky and S. Papert, 1969, Cambridge, MA: MIT Press.
Copyright 1969 by MIT Press. Reprinted by permission.)
The net input to the output unit is computed: net = Σi wi ii . If net is greater
than the threshold θ, the unit is turned on, otherwise it is turned off. Then the
response is compared with the actual category of the input vector. If the vector
was correctly categorized, then no change is made to the weights. If, however,
the output turns on when the input vector is in category 0, then the weights
and thresholds are modified as follows: The threshold is incremented by 1 (to
make it less likely that the output unit will come on if the same vector were
presented again). If input ii is 0, no change is made in the weight wi (that
weight could not have contributed to its having turned on). However, if ii = 1,
then wi is decremented by 1. In this way, the output will not be as likely to
turn on the next time this input vector is presented. On the other hand, if the
output unit does not come on when it is supposed to, the opposite changes are
made. That is, the threshold is decremented, and those weights connecting the
output units to input units that are on are incremented.
Mathematically, this amounts to the following: The output, o, is given by
o = 1 if net > θ
o = 0 otherwise
The change in the threshold is given by
∆θ = −(tp − op ) = −δp
where p indexes the particular pattern being tested, tp is the target value indicating
the correct classification of that input pattern, and δp is the difference
between the target and the actual output of the network. Finally, the changes
in the weights, ∆wi , are given by
∆wi = (tp − op ) iip = δp iip .
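A compact way to see one step of this procedure is the following sketch; the weights, threshold, and pattern are made up for illustration, but the update is the one just described:

w = zeros(1, 2);  theta = 0;             % initial weights and threshold
ii = [1 0];  t = 1;                      % one input vector and its target category
o = double((w * ii') > theta);           % output is 1 if the net input exceeds the threshold
delta = t - o;                           % +1, 0, or -1
w = w + delta * ii;                      % change only weights from input units that are on
theta = theta - delta;                   % threshold moves opposite to the weights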
The remarkable thing about this procedure is that, in spite of its simplic-
ity, such a system is guaranteed to find a set of weights that correctly classifies
the input vectors if such a set of weights exists. Moreover, since the learning
procedure can be applied independently to each of a set of output units, the
perceptron learning procedure will find the appropriate mapping from a set of
input vectors onto a set of output vectors if such a mapping exists. Unfortu-
nately, as indicated in Chapter 4, such a mapping does not always exist, and
this is the major problem for the perceptron learning procedure.
In their famous book Perceptrons, Minsky and Papert (1969) document the
limitations of the perceptron. The simplest example of a function that cannot
be computed by the perceptron is the exclusive-or (XOR), illustrated in Fig-
ure 5.1. It should be clear enough why this problem is impossible. In order
for a perceptron to solve this problem, the following four inequalities must be
satisfied:
0 × w1 + 0 × w2 < θ → 0 < θ
0 × w1 + 1 × w2 > θ → w2 > θ
1 × w1 + 0 × w2 > θ → w1 > θ
1 × w1 + 1 × w2 < θ → w1 + w2 < θ
Obviously, we can’t have both w1 and w2 greater than θ while their sum,
w1 + w2 , is less than θ. There is a simple geometric interpretation of the class of
problems that can be solved by a perceptron: It is the class of linearly separable
functions. This can easily be illustrated for two-dimensional problems such as
XOR. Figure 5.3 shows a simple network with two inputs and a single output
and illustrates three two-dimensional functions: the AND, the OR, and the
XOR. The first two can be computed by the network; the third cannot. In these
geometrical representations, the input patterns are represented as coordinates
in the plane, and a function is computable by the network only if a straight line
can separate the inputs that should map to 1 from those that should map to 0.
Figure 5.3: A. A simple network that can solve the AND and OR problems but
cannot solve the XOR problem. B. Geometric representations of these problems.
See text for explanation.
Figure 5.4: Adding an extra input makes it possible to solve the XOR problem.
(From PDP:8, p. 319.)
As Figure 5.4 suggests, if we add the appropriate third dimension, that is, the appropriate new feature,
the problem is solvable. Moreover, as indicated in Figure 5.6, if you allow a
multilayered perceptron, it is possible to take the original two-dimensional prob-
lem and convert it into the appropriate three-dimensional problem so it can be
solved. Indeed, as Minsky and Papert knew, it is always possible to convert
any unsolvable problem into a solvable one in a multilayer perceptron. In the
more general case of multilayer networks, we categorize units into three classes:
input units, which receive the input patterns directly; output units, which have
associated teaching or target inputs; and hidden units, which neither receive
inputs directly nor are given direct feedback. This is the stock of units from
which new features and new internal representations can be created. The prob-
lem is to know which new features are required to solve the problem at hand. In
short, we must be able to learn intermediate layers. The question is, how? The
original perceptron learning procedure does not apply to more than one layer.
Minsky and Papert believed that no such general procedure could be found.
To examine how such a procedure can be developed it is useful to consider the
other major one-layer learning system of the 1950s and early 1960s, namely, the
least-mean-square (LMS) learning procedure of Widrow and Hoff (1960).
In the LMS procedure, the output of the (linear) output unit is simply the
weighted sum of its inputs plus a bias term: op = Σj wj ijp + bias.
Note the introduction of the bias term, which serves the same function as the
threshold θ in the Perceptron. Providing a bias equal to −θ and setting the
threshold to 0 is equivalent to having a threshold of θ. The bias is also equivalent
to a weight to the output unit from an input unit that is always on.
The error measure being minimized by the LMS procedure is the summed
squared error. That is, the total error, E, is defined to be
E = Σp Ep = Σp Σi (tip − oip )²
where the index p ranges over the set of input patterns, i ranges over the set of
output units, and Ep represents the error on pattern p. The variable tip is the
desired output, or target, for the i th output unit when the pth pattern has been
presented, and oip is the actual output of the i th output unit when pattern
p has been presented. The object is to find a set of weights that minimizes
this function. It is useful to consider how the error varies as a function of any
given weight in the system. Figure 5.7 illustrates the nature of this dependence.
In the case of the simple single-layered linear system, we always get a smooth
error function such as the one shown in the figure. The LMS procedure finds the
values of all of the weights that minimize this function using a method called
gradient descent. That is, after each pattern has been presented, the error on
that pattern is computed and each weight is moved “down” the error gradient
toward its minimum value for that pattern. Since we cannot map out the entire
error function on each pattern presentation, we must find a simple procedure
for determining, for each weight, how much to increase or decrease it.
The idea of gradient descent is to make the change in each weight proportional to
the negative of the derivative of the error, as measured on the current pattern,
with respect to that weight; that is, ∆wij = −k ∂Ep /∂wij . For the linear unit,
this works out to
∆wij = ε δip ijp
where ε = 2k and δip = (tip − oip ) is the difference between the target for unit
i on pattern p and the actual output produced by the network. This is exactly
the delta learning rule described in Equation 15 from Chapter 4. It should
also be noted that this rule is essentially the same as that for the perceptron.
In the perceptron the learning rate was 1 (i.e., we made unit changes in the
weights) and the units were binary, but the rule itself is the same: the weights
are changed proportionally to the difference between target and output times
the input. If we change each weight according to this rule, each weight is moved
toward its own minimum and we think of the system as moving downhill in
weight-space until it reaches its minimum error value. When all of the weights
have reached their minimum points, the system has reached equilibrium. If the
system is able to solve the problem entirely, the system will reach zero error
1 It should be clear from Figure 5.7 why we want the negation of the derivative. If the weight
is above the minimum value, the slope at that point is positive and we want to decrease the
weight; thus when the slope is positive we add a negative amount to the weight. On the other
hand, if the weight is too small, the error curve has a negative slope at that point, so we want
to add a positive amount to the weight.
Figure 5.7: Typical curve showing the relationship between overall error and
changes in a single weight in the network.
and the weights will no longer be modified. If the network is unable to get the
problem exactly right, it will find a set of weights that produces as small an
error as possible.
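To see this gradient descent process concretely, here is a minimal sketch of one epoch of LMS (delta rule) learning for a single linear output unit; the learning rate and the patterns (those of the OR function) are chosen only for illustration:

lrate = 0.1;
inputs  = [0 0; 0 1; 1 0; 1 1];           % four input patterns (rows), here for OR
targets = [0; 1; 1; 1];
w = zeros(1, 2);  bias = 0;
for p = 1:size(inputs, 1)
    o = w * inputs(p, :)' + bias;         % linear output for this pattern
    delta = targets(p) - o;               % delta (error) for this pattern
    w = w + lrate * delta * inputs(p, :); % move each weight down the error gradient
    bias = bias + lrate * delta;
end
tss = sum((targets - (inputs * w' + bias)).^2)   % summed squared error after one epoch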
In order to get a fuller understanding of this process it is useful to care-
fully consider the entire error space rather than a one-dimensional slice. In
general this is very difficult to do because of the difficulty of depicting and vi-
sualizing high-dimensional spaces. However, we can usefully go from one to two
dimensions by considering a network with exactly two weights. Consider, as an
example, a linear network with two input units and one output unit with the
task of finding a set of weights that comes as close as possible to performing
the function OR. Assume the network has just two weights and no bias terms,
like the network in Figure 5.3A. We can then give some idea of the shape
of the space by making a contour map of the error surface. Figure 5.8 shows
the contour map. In this case the space is shaped like a kind of oblong bowl.
It is relatively flat on the bottom and rises sharply on the sides. Each equal
error contour is elliptically shaped. The arrows around the ellipses represent
the derivatives of the two weights at those points and thus represent the di-
rections and magnitudes of weight changes at each point on the error surface.
The changes are relatively large where the sides of the bowl are relatively steep
and become smaller and smaller as we move into the central minimum. The
long, curved arrow represents a typical trajectory in weight-space from a start-
ing point far from the minimum down to the actual minimum in the space. The
weights trace a curved trajectory following the arrows and crossing the contour
lines at right angles.
The figure illustrates an important aspect of gradient descent learning. This
Figure 5.8: A contour map illustrating the error surface with respect to the two
weights w1 and w2 for the OR problem in a linear network with two weights
and no bias term. Note that the OR problem cannot be solved perfectly in a
linear system. The minimum sum squared error over the four input-output pairs
occurs when w1 = w2 = 0.75. (The input-output pairs are 00 − 0,01 − 1,10 − 1,
and 11 − 1.)
is the fact that gradient descent involves making larger changes to parameters
that will have the biggest effect on the measure being minimized. In this case,
the LMS procedure makes changes to the weights proportional to the effect
they will have on the summed squared error. The resulting total change to the
weights is a vector that points in the direction in which the error drops most
steeply.
∆wij = −k ∂Ep /∂wij
−∂Ep /∂wij ∝ δip ajp
This generalizes the delta rule from the LMS procedure to the case where there
is a non-linearity applied to the output units, with the δ terms now defined so
as to take this non-linearity into account.
Now let us consider a weight that projects from an input unit k to a hidden
unit j, which in turn projects to an output unit, i in a very simple network
consisting of only one unit at each of these three layers (see Figure 5.9). We
can ask, what is the partial derivative of the error on the output unit i with
respect to a change in the weight wjk to the hidden unit from the input unit? It
may be helpful to talk yourself informally through the series of effects changing
2 In the networks we will be considering in this chapter, the output of a unit is equal to its
activation. We use the symbol a to designate this variable. This symbol can be used for any
unit, be it an input unit, an output unit, or a hidden unit.
Figure 5.9: A 1:1:1 network, consisting of one input unit, one hidden unit,
and one output unit. In the text discussing the chain of effects of changing the
weight from the input unit to the hidden unit on the error at the output unit,
the index i is used for the output unit, j for the hidden unit, and k for the
input unit.
such a weight would have on the error in this case. It should be obvious that
if you increase the weight to the hidden unit from the input unit, that will
increase the net input to the hidden unit j by an amount that depends on the
activation of the input unit k. If the input unit were inactive, the change in
the weight would have no effect; the stronger the activation of the input unit,
the stronger the effect of changing the weight on the net input to the hidden
unit. This change, you should also see, will in turn increase the activation of the
hidden unit; the amount of the increase will depend on the slope (derivative)
of the unit’s activation function evaluated at the current value of its net input.
This change in the activation will then affect the net input to the output unit
i by an amount depending on the current value of the weight to unit i from
unit j. This change in the net input to unit i will then affect the activation
of unit i by an amount proportional to the derivative of its activation function
evaluated at the current value of its net input. This change in the activation
of the output unit will then affect the error by an amount proportional to the
difference between the target and the current activation of the output unit.
Expressing this chain of effects as a product of partial derivatives, we would write
∂Ep /∂wjk = (∂Ep /∂aip )(∂aip /∂netip )(∂netip /∂ajp )(∂ajp /∂netjp )(∂netjp /∂wjk )
The factors in the chain are given in the reverse order from the verbal description
above since this is how they will actually be calculated using back propagation.
The first two factors on the right correspond to the last two links of the chain
described above, and are equal to the δ term for output unit i as previously
discussed. The third factor is equal to the weight to output unit i from hidden
unit j; and the fourth factor corresponds to the derivative of the activation
function of the hidden unit, evaluated at its net input given the current pattern
p, f ′(netjp ). Taking these four factors together, they correspond to (minus) the
partial derivative of the error at output unit i with respect to the net input to
hidden unit j.
Now, if there is more than one output unit, the partial derivative of the error
across all of the output units is just equal to the sum of the partial derivatives
of the error with respect to each of the output units:
δjp = f ′(netjp ) Σi wij δip .        (BP Equation)
The equation above is the core of the back propagation process and we call it
the BP Equation for future reference.
Because ∂netjp /∂wjk equals akp , the partial derivative of the error with respect to
the weight then becomes:
−∂Ep /∂wjk = δjp akp .
The application of the back propagation rule, then, involves two phases:
During the first phase the input is presented and propagated forward through
the network to compute the output value aip for each unit. This output is
then compared with the target, and scaled by the derivative of the activation
function, resulting in a δ term for each output unit. The second phase involves
a backward pass through the network (analogous to the initial forward pass)
during which the δ term is computed for each unit in the network. This second,
backward pass allows the recursive computation of δ as indicated above. Once
these two phases are complete, we can compute, for each weight, the product of
the δ term associated with the unit it projects to times the activation of the unit
it projects from. Henceforth we will call this product the weight error derivative
since it is proportional to (minus) the derivative of the error with respect to the
weight. As will be discussed later, these weight error derivatives can then be
used to compute actual weight changes on a pattern-by-pattern basis, or they
may be accumulated over the ensemble of patterns, with each weight’s accumulated
sum of weight error derivatives then being applied to it.
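The two phases can be summarized in a short sketch for a tiny two-layer network with logistic units; the weight matrices and the pattern are made up, and the code is an illustration of the procedure rather than the bp program’s actual data structures:

% Illustrative 2-3-1 network with logistic hidden and output units.
a_in = [1 0];                            % input activations
W_hi = 0.1 * [1 -1; -1 1; 1 1];          % weights to 3 hidden units from 2 inputs
W_oh = 0.1 * [1 -1 1];                   % weights to 1 output unit from 3 hidden units
t    = 1;                                % target for the output unit

% Forward phase: propagate activations layer by layer.
a_hid = 1 ./ (1 + exp(-(W_hi * a_in'))); % hidden activations (3x1)
a_out = 1 ./ (1 + exp(-(W_oh * a_hid))); % output activation (scalar)

% Backward phase: compute delta terms, output layer first.
delta_out = (t - a_out) * a_out * (1 - a_out);            % delta for the output unit
delta_hid = (W_oh' * delta_out) .* a_hid .* (1 - a_hid);  % BP Equation for the hidden units

% Weight error derivatives: delta of the receiving unit times activation of the sender.
wed_oh = delta_out * a_hid';             % for the hidden-to-output weights
wed_hi = delta_hid * a_in;               % for the input-to-hidden weights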
Adjusting bias weights. Of course, the generalized delta rule can also be
used to learn biases, which we treat as weights from a special “bias unit” that is
always on. A bias weight can project from this unit to any unit in the network,
and can be adjusted like any other weight, with the further stipulation that the
activation of the sending unit in this case is always fixed at 1.
The activation function. As stated above, the derivation of the back prop-
agation learning rule requires that the derivative of the activation function,
f ′(neti ), exists. It is interesting to note that the linear threshold function, on
which the perceptron is based, is discontinuous and hence will not suffice for
back propagation. Similarly, since a network with linear units achieves no ad-
vantage from hidden units, a linear activation function will not suffice either.
Thus, we need a continuous, nonlinear activation function. In most of our work
on back propagation, and in the program presented in this chapter, we have used
the logistic activation function, aip = 1/(1 + exp(−netip )).
In order to apply our learning rule, we need to know the derivative of this
function with respect to its net input. It is easy to show that this derivative is
equal to aip (1 − aip ). This expression can simply be substituted for f ′(net) in
the derivations above.
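A two-line sketch makes the shape of this derivative easy to inspect (purely illustrative):

a = 0:0.01:1;                 % possible activation values of a logistic unit
d = a .* (1 - a);             % derivative of the logistic with respect to its net input
% plot(a, d)                  % peaks at 0.25 when a = 0.5 and falls to 0 as a approaches 0 or 1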
It should be noted that aip (1 − aip ) reaches its maximum when aip = 0.5
and goes to 0 as aip approaches 0 or 1 (see Figure 5.1.2). Since the amount
of change in a given weight is proportional to this derivative, weights will be
changed most for those units that are near their midrange and, in some sense,
not yet committed to being either on or off. This feature can sometimes lead
to problems for backpropagation learning, and the problem can be especially
serious at the output layer. If the weights in a network at some point during
learning are such that a unit that should be on is completely off (or a unit that
should be off is completely on) the error at that unit is large but paradoxically
the delta term at that unit is very small, and so no error signal is propagated
back through the network to correct the problem.
An improved error measure. There are various ways around the problem
just noted above. One is to simply leave the f ′(neti ) term out of the calculation
of delta terms at the output units. In practice, this solves the problem, but it
seems like a bit of a hack.
Interestingly, however, if the error measure E is replaced by a different mea-
sure, called the ’cross-entropy’ error, here called CE, we obtain an elegant result.
The cross-entropy error for pattern p is defined as
CEp = −Σi [tip log(aip ) + (1 − tip ) log(1 − aip )]
If the target value tip is thought of as a binary random variable having value
one with probability pip , and the activation of the output unit aip is construed
as representing the network’s estimate of that probability, the cross-entropy
measure corresponds to the negative logarithm of the probability of the observed
target values, given the current estimates of the pip ’s. Minimizing the cross-
entropy error corresponds to maximizing the probability of the observed target
values. The maximum is reached when for all i and p, aip = pip .
Now, very neatly, it turns out that the derivative of CEp with respect to aip
is
−tip /aip + (1 − tip )/(1 − aip ).
When this is multiplied by the derivative of the logistic function evaluated
at the net input, aip (1 − aip ), and the sign is flipped to obtain the corresponding
δ term δip , several things cancel out and we are left with
δip = tip − aip .
This is the same expression for the δ term we would get using the standard
sum squared error measure E, if we simply ignored the derivative of the activa-
tion function! Because using cross entropy error seems more appropriate than
summed squared error in many cases and also because it often works better,
we provide the option of using the cross entropy error in the pdptool back
propagation simulator.
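The cancellation is easy to check numerically. The following sketch compares the δ term obtained from the cross-entropy error with the simple difference t − a, using made-up values:

a = 0.2;  t = 1;                          % activation and target for one output unit
dCE_da   = -t/a + (1 - t)/(1 - a);        % derivative of the cross-entropy error w.r.t. the activation
delta_CE = -dCE_da * a * (1 - a);         % times the logistic derivative, with the sign flipped
delta_simple = t - a;                     % the same value, 0.8 in this example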
Even using cross-entropy instead of sum squared error, it sometimes happens
that hidden units have strong learned input weights that ’pin’ their activation
against 1 or 0, and in that case it becomes effectively impossible to propagate
error back through these units. Different solutions to this problem have been
proposed. One is to use a small amount of weight decay to prevent weights from
growing too large. Another is to add a small constant to the derivative of the
activation function of the hidden unit. This latter method works well, but is
often considered a hack, and so is not implemented in the pdptool software.
Weight decay is available in the software, however, and is described below.
Local minima. Like the simpler LMS learning paradigm, back propagation
is a gradient descent procedure. Essentially, the system will follow the contour
of the error surface, always moving downhill in the direction of steepest descent.
This is no particular problem for the single-layer linear model. These systems
always have bowl-shaped error surfaces. However, in multilayer networks there
is the possibility of rather more complex surfaces with many minima. Some of
the minima constitute complete solutions to the error minimization problem, in
the sense that at these minima the system has reached a completely errorless state.
All such minima are global minima. However, it is possible for there to be some
residual error at the bottom of some of the minima. In this case, a gradient
descent method may not find the best possible solution to the problem at hand.
Part of the study of back propagation networks and learning involves a study
of how frequently and under what conditions local minima occur. In networks
with many hidden units, local minima seem quite rare. However, with few hidden
units, local minima can occur. The simple 1:1:1 network shown in Figure 5.9
can be used to demonstrate this phenomenon. The problem posed to this network
is to copy the value of the input unit to the output unit. There are two basic
ways in which the network can solve the problem. It can have positive biases on
the hidden unit and on the output unit and large negative connections from the
input unit to the hidden unit and from the hidden unit to the output unit, or it
can have large negative biases on the two units and large positive weights from
the input unit to the hidden unit and from the hidden unit to the output unit.
These solutions are illustrated in Figure 5.11. In the first case, the solution
works as follows: Imagine first that the input unit takes on a value of 0. In this
case, there will be no activation from the input unit to the hidden unit, but
the bias on the hidden unit will turn it on. Then the hidden unit has a strong
negative connection to the output unit so it will be turned off, as required in
this case. Now suppose that the input unit is set to 1. In this case, the strong
inhibitory connection from the input to the hidden unit will turn the hidden
unit off. Thus, no activation will flow from the hidden unit to the output unit.
In this case, the positive bias on the output unit will turn it on and the problem
will be solved. Now consider the second class of solutions. For this case, the
connections among units are positive and the biases are negative. When the
input unit is off, it cannot turn on the hidden unit. Since the hidden unit has
a negative bias, it too will be off. The output unit, then, will not receive any
Figure 5.11: Solutions to the 1:1:1 identity problem, with trainable biases and with biases fixed at 0. See text for explanation.
input from the hidden unit and since its bias is negative, it too will turn off
as required for zero input. Finally, if the input unit is turned on, the strong
positive connection from the input unit to the hidden unit will turn on the
hidden unit. This in turn will turn on the output unit as required. Thus we
have, it appears, two symmetric solutions to the problem. Depending on the
random starting state, the system will end up in one or the other of these global
minima.
Interestingly, it is a simple matter to convert this problem to one with one
local and one global minimum simply by setting the biases to 0 and not allowing
them to change. In this case, the minima correspond to roughly the same two
solutions as before. In one case, which is the global minimum as it turns out,
both connections are large and negative. These minima are also illustrated in
Figure 5.11. Consider first what happens with both weights negative. When the
input unit is turned off, the hidden unit receives no input. Since the bias is 0,
the hidden unit has a net input of 0. A net input of 0 causes the hidden unit to
take on a value of 0.5. The 0.5 input from the hidden unit, coupled with a large
negative connection from the hidden unit to the output unit, is sufficient to
turn off the output unit as required. On the other hand, when the input unit is
turned on, it turns off the hidden unit. When the hidden unit is off, the output
unit receives a net input of 0 and takes on a value of 0.5 rather than the desired
value of 1.0. Thus there is an error of 0.5 and a squared error of 0.25. This, it
turns out, is the best the system can do with zero biases. Now consider what
happens if both connections are positive. When the input unit is off, the hidden
unit takes on a value of 0.5. Since the output is intended to be 0 in this case,
there is pressure for the weight from the hidden unit to the output unit to be
small. On the other hand, when the input unit is on, it turns on the hidden unit.
Since the output unit is to be on in this case, there is pressure for the weight to
be large so it can turn on the output unit. In fact, these two pressures balance
off and the system finds a compromise value of about 0.73. This compromise
yields a summed squared error of about 0.45—a local minimum.
Usually, it is difficult to see why a network has been caught in a local min-
imum. However, in this very simple case, we have only two weights and can
produce a contour map for the error space. The map is shown in Figure 5.12.
It is perhaps difficult to visualize, but the map roughly shows a saddle shape.
It is high on the upper left and lower right and slopes down toward the center.
It then slopes off on each side toward the two minima. If the initial values of
the weights begin in one part of the space, the system will follow the contours
down and to the left into the minimum in which both weights are negative. If,
however, the system begins in another part of the space, the system will fol-
low the slope into the upper right quadrant in which both weights are positive.
Eventually, the system moves into a gently sloping valley in which the weight
from the hidden unit to the output unit is almost constant at about 0.73 and
the weight from the input unit to the hidden unit is slowly increasing. It is
slowly being sucked into a local minimum. The directed arrows superimposed
on the map show the lines of force and illustrate these dynamics. The long
arrows represent two trajectories through weight space for two different starting
points.
It is rare that we can create such a simple illustration of the dynamics of
weight-spaces and see how clearly local minima come about. However, it is likely
that many of our spaces contain these kinds of saddle-shaped error surfaces.
Sometimes, as when the biases are free to move, there is a global minimum on
either side of the saddle point. In this case, it doesn’t matter which way you
move off. At other times, such as in Figure 5.12, the two sides are of different
depths. There is no way the system can sense the depth of a minimum from the
edge, and once it has slipped in there is no way out. Importantly, however, we
find that high-dimensional spaces (with many weights) have relatively few local
minima.
Momentum. Our learning procedure requires only that the change in weight
be proportional to the weight error derivative. True gradient descent requires
that infinitesimal steps be taken. The constant of proportionality, ε, is the
learning rate in our procedure. The larger this constant, the larger the changes
in the weights. The problem with this is that it can lead to steps that overshoot
the minimum, resulting in a large increase in error. For practical purposes we
choose a learning rate that is as large as possible without leading to oscillation.
This offers the most rapid learning. One way to increase the learning rate
without leading to oscillation is to modify the back propagation learning rule
to include a momentum term. This can be accomplished by the following rule:
∆wij (n + 1) = ε(δip ajp ) + α∆wij (n)
where the subscript n indexes the presentation number and α is a constant that
determines the effect of past weight changes on the current direction of move-
ment in weight space. This provides a kind of momentum in weight-space that
Figure 5.12: A contour map for the 1:1:1 identity problem with biases fixed
at 0. The map shows a local minimum in the positive quadrant and a global
minimum in the lower left-hand negative quadrant. Overall the error surface is
saddle-shaped. See the text for further explanation.
effectively filters out high-frequency variations of the error surface in the weight-
space. This is useful in spaces containing long ravines that are characterized by
steep walls on both sides of the ravine and a gently sloping floor. Such situations
tend to lead to divergent oscillations across the ravine. To prevent these it is
necessary to take very small steps, but this causes very slow progress along the
ravine. The momentum tends to cancel out the tendency to jump across the
ravine and thus allows the effective weight steps to be bigger. In most of the
simulations reported in PDP:8, α was about 0.9. Our experience has been that
we get the same solutions by setting α = 0 and reducing the size of ε, but the
system learns much faster overall with larger values of α and ε.
Symmetry breaking. Our learning procedure has one more problem that can
be readily overcome and this is the problem of symmetry breaking. If all weights
start out with equal values and if the solution requires that unequal weights be
developed, the system can never learn. This is because error is propagated back
through the weights in proportion to the values of the weights. This means that
all hidden units connected directly to the output units will get identical error
signals, and, since the weight changes depend on the error signals, the weights
from those units to the output units must always be the same. The system
is starting out at a kind of unstable equilibrium point that keeps the weights
equal, but it is higher than some neighboring points on the error surface, and
once it moves away to one of these points, it will never return. We counteract
this problem by starting the system with small random weights. Under these
conditions symmetry problems of this kind do not arise. This can be seen in
Figure 5.12. If the system starts at exactly (0,0), there is no pressure for it to
move at all and the system will not learn; if it starts virtually anywhere else, it
will eventually end up in one minimum or the other.
Weight decay. One additional extension of the back propagation model that
we will consider here is the inclusion of weight decay. Weight decay is simply a
tendency for weights to be reduced very slightly every time they are updated.
If weight decay is non-zero, then the full equation for the change to each weight
becomes the following:
∆wij (n + 1) = ε(δip ajp ) − (wdecay)wij + α∆wij (n)
3 When moving between these options, it is important to note that weight decay is applied
each time weights are updated. If weights are updated after each pattern, a smaller value of
weight decay should be used than if they are updated after a batch of n patterns or a whole
epoch.
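Putting the learning rate, weight decay, and momentum together, the change applied to a single weight on each update could be sketched as follows; the parameter values and the variable names (wed for the accumulated weight error derivative, dw_prev for the previous delta weight) are illustrative:

lrate = 0.05;  wdecay = 0.001;  momentum = 0.9;   % illustrative parameter values
wed = 0.2;  w = 0.5;  dw_prev = 0.0;              % made-up state for one weight

dw = lrate * wed - wdecay * w + momentum * dw_prev;   % the new delta weight
w  = w + dw;                                          % apply it to the weight
dw_prev = dw;  wed = 0;                               % remember it and clear the accumulator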
5.2 IMPLEMENTATION
The bp program implements the back propagation process just described. Net-
works are assumed to be feedforward only, with no recurrence. An implementa-
tion of backpropagation for recurrent networks is described in a later chapter.
The network is specified in terms of a set of pools of units. By convention,
pool(1) contains the single bias unit, which is always on. Subsequent pools
are declared in an order that corresponds to the feed-forward structure of the
network. Since activations at later layers depend on the activations at earlier
layers, the activations of units must be processed in correct order, and therefore
the order of specification of pools of units is important. Indeed, since deltas
at each layer depend on the delta terms from the layers further forward, the
backward pass must also be carried out in the correct order. Each pool has a
type: it can be an input pool, an output pool, or a hidden pool. There can be
more than one input pool and more than one output pool and there can be 0
or more hidden pools. Input pools must all be specified before any other pools
and all hidden pools must be specified before any output pools.
Connections among units are specified by projections. Projections may be
from any pool to any higher numbered pool; since the bias pool is pool(1) it
may project to any other pool, although bias projections to input pools will
have no effect since activations of input units are clamped to the value specified
by the external input. Projections from a layer to itself are not allowed.
Weights in a projection can be constrained to be positive or negative. These
constraints are imposed both at initialization and after each time the weights are
incremented during processing. Two other constraints are imposed only when
weights are initialized; these constraints specify either a fixed value to which the
weight is initialized, or a random value. For weights that are random, if they
are constrained to be positive, they are initialized to a value between 0 and the
value of a parameter called wrange; if the weights are constrained to be negative,
the initialization value is between -wrange and 0; otherwise, the initialization
value is between wrange/2 and -wrange/2. Weights that are constrained to a
fixed value are initialized to that value.
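The initialization rules just described could be sketched as follows for a single projection; wrange, the matrix size, and the constraint flag are illustrative variables, not the pdptool source:

wrange = 1.0;  nrows = 4;  ncols = 3;          % receiving units by sending units
constraint = 'none';                           % 'positive', 'negative', or 'none'
switch constraint
    case 'positive'
        w = wrange * rand(nrows, ncols);              % values in (0, wrange)
    case 'negative'
        w = -wrange * rand(nrows, ncols);             % values in (-wrange, 0)
    otherwise
        w = wrange * (rand(nrows, ncols) - 0.5);      % values in (-wrange/2, wrange/2)
end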
The program also allows the user to set an individual learning rate for each
projection via a layer-specific lrate parameter. If the value of this layer-specific
lrate is unspecified, the network-wide lrate variable is used.
The bp program also makes use of a list of pattern pairs, each pair consisting
of a name, an input pattern, and a target pattern. The number of elements in
the input pattern should be equal to the total number of units summed across
all input pools. Similarly, the number of elements of the target pattern should
be equal to the total number of output units summed across all output pools.
Processing of a single pattern occurs as follows: A pattern pair is chosen,
and the pattern of activation specified by the input pattern is clamped on the
input units; that is, their activations are set to whatever numerical values are
specified in the input pattern. These are typically 0’s and 1’s but may take any
real value.
Next, activations are computed. For each noninput pool, the net inputs
to each unit are computed and then the activations of the units are set. This
occurs in the order that the pools are specified in the network specification,
which must be specified correctly so that by the time each unit is encountered,
the activations of all of the units that feed into it have already been set. The
routine performing this computation is called compute output. Once the output
has been computed some summary statistics are computed in a routine called
sumstats. First it computes the pattern sum of squares (pss), equal to the
squared error terms summed over all of the output units. Analogously, the pce
or pattern cross entropy, the sum of the cross entropy terms across all the
output units, is calculated. Then the routine adds the pss to the total sum of
squares (tss), which is just the cumulative sum of the pss for all patterns thus
far processed within the current epoch. Similarly the pce is added to the tce, or
total cross entropy measure.
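In the spirit of the sumstats routine, the two per-pattern measures and their running totals could be sketched as follows, using made-up targets and activations:

tss = 0;  tce = 0;                                   % running totals, reset each epoch
targets = [1 0 1];  acts = [0.9 0.2 0.6];            % made-up output targets and activations

pss = sum((targets - acts).^2);                      % pattern sum of squares
pce = -sum(targets .* log(acts) + (1 - targets) .* log(1 - acts));   % pattern cross entropy

tss = tss + pss;                                     % accumulate into the epoch totals
tce = tce + pce;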
Next, error and delta terms are computed in a routine called compute error.
The error for a unit is equivalent to (minus) the partial derivative of the error
with respect to a change in the activation of the unit. The delta for the unit is
(minus) the partial derivative of the error with respect to a change in the net
input to the unit. First, the error terms are calculated for each output unit.
For these units, error is the difference between the target and the obtained
activation of the unit. After the error has been computed for each output unit,
we get to the “heart” of back propagation: the recursive computation of error
and delta terms for hidden units. The program iterates backward over the layers,
starting with the last output layer. The first thing it does in each layer is set the
value of delta for the units in the current layer; this is equal to the error for the
unit times the derivative of the activation function as described above. Then,
once it has the delta terms for the current pool, the program passes this back
to all pools that project to the current pool; this is the actual back propagation
process. By the time a particular pool becomes the current pool, all of the units
that it projects to will have already been processed and its total error will have
been accumulated, so it is ready to have its delta computed.
After the backward pass, the weight error derivatives are then computed
from the deltas and activations in a routine called compute weds. Note that
this routine adds the weight error derivatives occasioned by the present pattern
into an array where they can potentially be accumulated over patterns.
Weight error derivatives actually lead to changes in the weights when a
routine called change weights is called. This may be called after each pattern
has been processed, or after each batch of n patterns, or after all patterns in
the training set have been processed. When this routine is called, it cycles
through all the projections in the network. For each, the new delta weight is
first calculated. The delta weight is equal to (1) the accumulated weight error
derivative scaled by the lrate, minus the weight decay scaled by wdecay, plus
a fraction of the previous delta weight where the fraction is the value of the
momentum parameter. Then, this delta weight is added into the weight, so
that the weight’s new value is equal to its old value plus the delta weight. At
the end of processing each projection, the weight error derivative terms are all
set to 0, and constraints on the values of the weights are imposed in the routine
constrain weights.
Generally, learning is accomplished through a sequence of epochs, in which
all pattern pairs are presented for one trial each during each epoch. The pre-
sentation is either in sequential or permuted order. It is also possible to test the
processing of patterns, either individually or by sequentially cycling through the
whole list, with learning turned off. In this case, compute output, compute error,
and sumstats are called, but compute wed and change weights are not called.
This memory management issue has not been observed on Windows. We are
currently investigating it and will update the documentation as soon as we find a
fix.
In the bp program the principal measures of performance are the pattern
sum of squares (pss) and the total sum of squares (tss), and the pattern cross-
entropy pce and the total cross entropy tce. The user can specify whether the
error measure used in computing error derivatives is the sum squared error or
the cross entropy. Because of its historical precedence, the sum squared error is
used by default. The user may optionally also compute an additional measure,
the vector correlation of the present weight error derivatives with the previous
weight error derivatives. The set of weight error derivatives can be thought of
as a vector pointing in the steepest direction downhill in weight space; that
is, it points down the error gradient. Thus, the vector correlation of these
derivatives across successive epochs indicates whether the gradient is staying
relatively stable or shifting from epoch to epoch. For example, a negative value
of this correlation measure (called gcor for gradient correlation) indicates that
the gradient is changing in direction. Since the gcor can be thought of as
following changes in the direction of the gradient, the check box for turning on
this computation is called follow.
Control over testing is straightforward. With the “test all” box checked, the
user may either click run to carry out a complete pass through the test set, or
click step to step pattern by pattern, or the user may uncheck the “test all”
button and select an individual pattern by clicking on it in the network viewer
window and then clicking run or step.
There is a special mode available in the bp program called cascade mode.
This mode allows activation to build up gradually rather than being computed
in a single step as is usually the case in bp. A discussion of the implementation
and use of this mode is provided later in this chapter.
As with other pdptool programs the user may adjust the frequency of dis-
play updating in the train and test windows. It is also possible to log and create
graphs of the state of the network at the pattern or epoch level using create/edit
logs within the training and testing options panels.
5.4 EXERCISES
We present four exercises using the basic back propagation procedure. The first
one takes you through the XOR problem and is intended to allow you to test
and consolidate your basic understanding of the back propagation procedure and
the gradient descent process it implements. The second allows you to explore
the wide range of different ways in which the XOR problem can be solved; as
you will see, the solution found varies from run to run when the network is initialized with different
starting weights. The third exercise suggests minor variations of the basic back
propagation procedure, such as whether weights are changed pattern by pattern
or epoch by epoch, and also proposes various parameters that may be explored.
The fourth exercise suggests other possible problems that you might want to
Figure 5.13: Architecture of the XOR network used in the exercises (From
PDP:8, p.332.)
Figure 5.14: The display produced by the bp program, initialized for XOR.
number and the total sum of squares (tss) resulting from testing all four
patterns. The next line contains the value of the gcor variable, currently 0 since
no error derivatives have yet been calculated. Below that is a line containing
the current pattern name and the pattern sum of squares pss associated with
this pattern. To the right in the “patterns” panel is the set of input and target
patterns for XOR. Back in the main network viewer window, we now turn our
attention to the area to the right and below the label “sender acts”. The colored
squares in this row show the activations of units that send their activations
forward to other units in the network. The first two are the two input units,
and the next two are the two hidden units. Below each set of sender activations
are the corresponding projections, first from the input to the hidden units,
and below and to the right of that, from the hidden units to the single output
unit. The weight in a particular column and row represents the strength of the
connection from a particular sender unit indexed by the column to the particular
receiver indexed by the row.
To the right of the weights is a column vector indicating the values of the
bias terms for the receiver units, that is, all the units that receive input from
other units. In this case, the receivers are the two hidden units and the output
unit.
To the right of the biases is a column for the net input to each receiving unit.
There is also a column for the activations of each of these receiver units. (Note
that the hidden units’ activations appear twice, once in the row of senders and
once in this column of receivers.) The next column contains the target vector,
which in this case has only one element since there is only one output unit.
Finally, the last column contains the delta values for the hidden and output
units.
Note that shades of red are used to represent positive values, shades of blue
are used for negative values, and a neutral gray color is used to represent 0.
The color scale for weights, biases, and net inputs ranges over a very broad
range, and values less than about .5 are very faint in color. The color scale
for activations ranges over somewhat less of a range, since activations can only
range from 0 to 1. The color scale for deltas ranges over a very small range
since delta values are very small. Even so, the delta values at the hidden level
show very faintly compared with those at the output level, indicating just how
small these delta values tend to be, at least at this early stage of training. You
can inspect the actual numerical values of each variable by moving your mouse
over the corresponding colored square.
The display shows what happened when the last pattern pair in the file
xor.pat was processed. This pattern pair consists of the input pattern (1 1)
and the target pattern (0). This input pattern was clamped on the two input
units. This is why they both have activation values of 1.0, shown as a fairly
saturated red in the first two entries of the sender activation vector. With these
activations of the input units, coupled with the weights from these units to the
hidden units, and with the values of the bias terms, the net inputs to the hidden
units were set to 0.60 and -0.40, as indicated in the net column. Plugging these
values into the logistic function, the activation values of 0.64 and 0.40 were
obtained for these units. These values are shown both in the sender activation
vector and in the receiver activation vector (labeled act, next to the net input
vector). Given these activations for the hidden units, coupled with the weights
from the hidden units to the output unit and the bias on the output unit, the
net input to the output unit is 0.48, as indicated at the bottom of the net
column. This leads to an activation of 0.61, as shown in the last entry of the
act column. Since the target is 0.0, as indicated in the target column, the error,
or (target - activation) is -0.61; this error, times the derivative of the activation
function (that is, activation (1 - activation)) results in a delta value of -0.146,
as indicated in the last entry of the final column. The delta values of the hidden
units are determined by back propagating this delta term to the hidden units,
using the back-propagation equation.
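To make the computation just described concrete, here is a minimal MATLAB sketch using the numbers quoted above. The hidden-to-output weights shown are hypothetical placeholders (they are not the values in xor.wts), so the hidden deltas computed here are only illustrative.

% Forward pass for the input pattern (1 1), as described in the text.
net_h  = [0.60 -0.40];               % net inputs to the two hidden units
act_h  = 1 ./ (1 + exp(-net_h));     % logistic function, about [0.64 0.40]
net_o  = 0.48;                       % net input to the output unit
act_o  = 1 ./ (1 + exp(-net_o));     % about 0.61
target = 0;
% Backward pass: delta for the output unit, then back-propagated hidden deltas.
delta_o = (target - act_o) .* act_o .* (1 - act_o);   % about -0.146
w_ho    = [0.20 -0.10];              % hypothetical hidden-to-output weights
delta_h = (delta_o .* w_ho) .* act_h .* (1 - act_h);  % note how small these are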
Q.5.1.1.
Show the calculations of the values of delta for each of the two hidden
units, using the activations and weights as given in this initial screen
display, and the BP Equation. Explain why these values are so small.
At this point, you will notice that the total sum of squares before any learning
has occurred is 1.0507. Run another test all to understand more about what is
happening.
Q.5.1.2.
Report the output the network produces for each input pattern and
explain why the values are all so similar, referring to the strengths of
the weights, the logistic function, and the effects of passing activation
forward through the hidden units before it reaches the output units.
Now you are ready to begin learning. Activate the training panel. If you
click run (don’t do that yet), this will run 30 epochs of training, presenting each
pattern sequentially in the order shown in the patterns window within each
epoch, and adjusting the weights at the end of the epoch. If you click step, you
can follow the tss and gcor measures as they change from epoch to epoch. A
graph will also appear showing the tss. If you click run after clicking step a few
times, the network will run to the 30 epoch milestone, then stop.
You may find in the course of running this exercise that you need to go
back and start again. To do this, you should use the reset command, followed
by clicking on the load weights button, and selecting the file xor.wts. This file
contains the initial weights used for this exercise. This method of reinitializing
guarantees that all users will get the same starting weights.
After completing the first 30 epochs, stop and answer this question.
Q.5.1.3.
The total sum of squares is smaller at the end of 30 epochs, but is
only a little smaller. Describe what has happened to the weights
and biases and the resulting effects on the activation of the output
units. Note the small sizes of the deltas for the hidden units and
explain. Do you expect learning to proceed quickly or slowly from
this point? Why?
Run another 90 epochs of training (for a total of 120) and see if your pre-
dictions are confirmed. As you go along, watch the progression of the tss in the
graph that should be displayed (or keep track of this value at each 30 epoch
milestone by recording it manually). You might find it interesting to observe
the results of processing each pattern rather than just the last pattern in the
four-pattern set. To do this, you can set the update after selection to 1 pat-
tern rather than 1 epoch, and use the step button for an epoch or two at the
beginning of each set of 30 epochs.
At the end of another 60 epochs (total: 180), some of the weights in the
network have begun to build up. At this point, one of the hidden units is
providing a fairly sensitive index of the number of input units that are on. The
other is very unresponsive.
Q.5.1.4.
Explain why the more responsive hidden unit will continue to change
its incoming weights more rapidly than the other unit over the next
few epochs.
Run another 30 epochs. At this point, after a total of 210 epochs, one of the
hidden units is now acting rather like an OR unit: its output is about the same
for all input patterns in which one or more input units is on.
Q.5.1.5.
Explain this OR unit in terms of its incoming weights and bias term.
What is the other unit doing at this point?
Now run another 30 epochs. During these epochs, you will see that the
second hidden unit becomes more differentiated in its response.
Q.5.1.6.
Describe what the second hidden unit is doing at this point, and
explain why it is leading the network to activate the output unit
most strongly when only one of the two input units is on.
Run another 30 epochs. Here you will see the tss drop very quickly.
Q.5.1.7.
Explain the rapid drop in the tss, referring to the forces operating
on the second hidden unit and the change in its behavior. Note that
the size of the delta for this hidden unit at the end of 270 epochs is
about as large in absolute magnitude as the size of the delta for the
output unit. Explain.
Click the run button one more time. Before the end of the 30 epochs, the
value of tss drops below ecrit, and so training stops. The XOR problem is solved
at this point.
Q.5.1.8.
Q.5.2.1.
At the end of each run, record the random seed, the final
epoch number, and the final tss. Create a table of these results to
turn in as part of your homework. Then, run through a test, inspect-
ing the activations of each hidden unit and the single output unit
obtained for each of the four patterns. Choose two successful runs
that seem to have reached different solutions than the one reached
in Exercise 5.1, as evidenced by qualitative differences in the hidden
unit activation patterns. For these two runs, record the hidden and
output unit activations for each of the four patterns, and include
these results in a second table as part of what you turn in. For each
case, state what logical predicate each hidden unit appears to be cal-
culating, and how these predicates are then combined to determine
the activation of the output unit. State this in words for each case,
and also use the notation described in the hint below to express this
information succinctly.
Hint. The question above may seem hard at first, but should become easier
as you consider each case. In Excercise 5.1, one hidden unit comes on when
either input unit is on (i.e., it acts as an OR unit), and the other comes on
when both input units are on (i.e., it acts as an AND unit). For this exercise,
we are looking for qualitatively different solutions. You might find that one
hidden unit comes on when the first input unit is on and the second is off (This
could be called ‘A and not B’), and the other comes on when the second is on and
the first is off (‘B and not A’). The weights from the hidden units to the output
unit will be different from what they were in Exercise 5.1, reflecting a difference
in the way the predicates computed by each hidden unit are combined to solve
the XOR problem. In each case you should be able to describe the way the
problem is being solved using logical expressions. Use A for input unit 1, B for
input unit 2, and express the whole operation as a compound logical statement,
using the additional logical terms ’AND’, ’OR’, and ’NOT’. Use square brackets
around the expression computed by each hidden unit, then use logical terms to
express how the predicates computed by each hidden unit are combined. For
example, for the case in Exercise 5.1, we would write: [A OR B] AND NOT [A
AND B].
trying to examine its effects on the rate and outcome of learning. We don’t
want to prescribe your experiments too specifically, but one thing you could do
would be the following. Re-run each of the eight runs that you carried out in
the previous exercise under the variation that you have chosen. To do this, you
first set the training option for your chosen variation, then you set the random
seed to the value from your first run above, then click reset. The network will
now be initialized exactly as it was for that first run, and you can now test the
effect of your chosen variation by examining whether it affects the time course
or the outcome of learning. You could repeat these steps for each of your runs,
exploring how the time course and outcome of learning are affected.
Q.5.3.1.
Describe what you have chosen to vary, how you chose to vary it,
and present the results you obtained in terms of the rate of learning,
the evolution of the weights, and the eventual solution achieved.
Explain as well as you can why the change you made had the effects
you found.
Q.5.4.1.
Describe the problem you have chosen, and why you find it interest-
ing. Explain the network architecture that you have selected for the
problem and the set of training patterns that you have used. De-
scribe the results of your learning experiments. Evaluate the back
propagation method for learning and explain your feelings as to its
adequacy, drawing on the results you have obtained in this experi-
ment and any other observations you have made from the readings
or from this exercise.
Hints. To create your own network, you will need to create the necessary
.net, .tem, and .pat files yourself; once you’ve done this, you can create a script
file (with .m extension) that reads these files and launches your network. The
steps you need to take to do this are described in Appendix B, How to create
your own network. More details are available in the PDPTool User’s Guide,
Appendix C.
In general, if you design your own network, you should strive to keep it
simple. You can learn a lot with a network that contains as few as five units (the
XOR network considered above), and as networks become larger they become
harder to understand.
To achieve success in training your network, there are many parameters that
you may want to consider. The exercises above should provide you with some
understanding of the importance of some of these parameters. The learning rate
(lrate) of your network is important; if it is set either too high or too low, it
can hinder learning. The default 0.1 is fine for some simple networks (e.g., the
838 encoder example discussed in Appendix B), but smaller rates such as 0.05,
0.01 or 0.001 are often used, especially in larger networks. Other parameters to
consider are momentum, the initial range of the weights (wrange), the weight
update frequency variable, and the order of pattern presentation during training
(all these are set through the train options window).
If you are having trouble getting your network to learn, the following ap-
proach may not lead to the fastest learning but it seems fairly robust: Set
momentum to 0, set the learning rate fairly low (.01), set the update frequency
to 1 pattern, set the training regime to permuted (ptrain), and use cross-entropy
error. If your network still doesn’t learn, make sure your network and training
patterns are specified correctly. Sometimes it may also be necessary to add
hidden units, though it is surprising how few you can get away with in many
cases; with the minimum number, as we know from XOR, you can sometimes
get stuck.
The range of the initial random weights can hinder learning if it is set too
high or too low. A range that is too high (such as 20) will push the hidden units
to extreme activation values (0 or 1) before the network has started learning,
which can harm learning (why?). If this parameter is too small (such as .01),
learning can also be very slow since the weights dilute the back propagation of
error. The default wrange of 1 is ok for smaller networks, but it may be too big
for larger networks. Also, it may be worth noting that, while a smaller wrange
and learning rate tends to lead to slower learning, it tends to produce more
consistent results across different runs (using different initial random weights).
Other pre-defined bp networks. In addition to XOR, there are two further
examples provided in the PDPTool/bp directory. One of these is the 4-2-4
encoder problem described in PDP:8. The files 424.tem, 424.net, 424.pat, and
FourTwoFour.m are already set up for this problem; just type FourTwoFour at
the command prompt to start up this network. The network viewer window is
laid out as with XOR, such that the activations of the input and hidden units
are shown across the top, and the bias, net input, activations, targets and deltas
for the hidden and output units are shown in vertical columns to the right of
the two arrays of weights.
Another network that is also ready to run is Rumelhart’s Semantic Network,
described in Rumelhart and Todd (1993), Rogers and McClelland (2004) (Chap-
ters 2 and 3), and McClelland and Rogers (2003). The files for this are called
Chapter 6
Competitive Learning
In Chapter 5 we showed that multilayer, nonlinear networks are essential for the
solution of many problems. We showed one way, the back propagation of error,
that a system can learn appropriate features for the solution of these difficult
problems. This represents the basic strategy of pattern association—to search
out a representation that will allow the computation of a specified function.
There is a second way to find useful internal features: through the use of a
regularity detector, a device that discovers useful features based on the stimu-
lus ensemble and some a priori notion of what is important. The competitive
learning mechanism described in PDP:5 is one such regularity detector. In this
section we describe the basic concept of competitive learning, show how it is
implemented in the cl program, describe the basic operations of the program,
and give a few exercises designed to familiarize the reader with these ideas.
having each unit activate itself and inhibit its neighbors. Such a network can readily be
employed to choose the maximum value of a set of units. In our simulations, we do not use
this mechanism. We simply compute the maximum value directly.
2 Note that for consistency with the other chapters in this book we have adopted terminol-
ogy here that is different from that used in the PDP:5. Here we use where g was used in
PDP:5. Also, here the weight to unit i from unit j is designated wij . In PDP:5, i indexed
the sender not the receiver, so wij referred to the weight from unit i to unit j.
number of active lines, then all vectors are the same length and each can be
viewed as a point on an N -dimensional hypersphere, where N is the number of
units in the lower level, and therefore, also the number of input lines received
by each unit in the upper level. Each × in Figure 6.2A represents a particular
pattern. Those patterns that are very similar are near one another on the sphere,
and those that are very different are far from one another on the sphere. Note
that since there are N input lines to each unit in the upper layer, its weights
can also be considered a vector in N -dimensional space. Since all units have the
same total quantity of weight, we have N -dimensional vectors of approximately
fixed length for each unit in the cluster.3 Thus, properly scaled, the weights
themselves form a set of vectors that (approximately) fall on the surface of
the same hypersphere. In Figure 6.2B, the ’s represent the weights of two
units superimposed on the same sphere with the stimulus patterns. Whenever
a stimulus pattern is presented, the unit that responds most strongly is simply
the one whose weight vector is nearest that for the stimulus. The learning
rule specifies that whenever a unit wins a competition for a stimulus pattern,
it moves a fraction of the way from its current location toward the location
of the stimulus pattern on the hypersphere. Suppose that the input patterns
fell into some number, M , of “natural” groupings. Further, suppose that an
inhibitory cluster receiving inputs from these stimuli contained exactly M units
(as in Figure 6.2C). After sufficient training, and assuming that the stimulus
groupings are sufficiently distinct, we expect to find one of the vectors for the
M units placed roughly in the center of each of the stimulus groupings. In this
case, the units have come to detect the grouping to which the input patterns
belong. In this sense, they have “discovered” the structure of the input pattern
sets.
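The learning rule just described can be sketched in a couple of lines. This is purely illustrative (the stimulus x, the winner's weight vector w, and the fraction frac are made-up values), but it shows how the winner moves toward the stimulus while its total weight stays fixed:

x    = [0.5 0.5 0 0];            % hypothetical stimulus pattern (sums to 1)
w    = [0.25 0.25 0.25 0.25];    % hypothetical winner's weight vector (sums to 1)
frac = 0.05;                     % fraction of the way to move toward the stimulus
w    = w + frac .* (x - w);      % w still sums to 1, so its length remains
                                 % approximately fixed on the hypersphere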
• If the stimuli are highly structured, the classifications are highly stable. If
the stimuli are less well structured, the classifications are more variable,
and a given stimulus pattern will be responded to first by one and then
by another member of the cluster. In our experiments, we started the
weight vectors in random directions and presented the stimuli randomly.
In this case, there is rapid movement as the system reaches a relatively
stable configuration (such as one with a unit roughly in the center of
each cluster of stimulus patterns). These configurations can be more or
less stable. For example, if the stimulus points do not actually fall into
nice clusters, then the configurations will be relatively unstable and the
presentation of each stimulus will modify the pattern of responding so
that the system will undergo continual evolution. On the other hand, if
the stimulus patterns fall rather nicely into clusters, then the system will
become very stable in the sense that the same units will always respond
to the same stimuli.4
6.1.3 Implementation
The competitive learning model is implemented in the cl program. The model
implements a single input (or lower level) layer of units, each connected to all
members of a single output (or upper level) layer of units. The basic strategy for
the cl program is the same as for bp and the other learning programs. Learning
occurs as follows: A pattern is chosen and the pattern of activation specified
by the input pattern is clamped on the input units. Next, the net input into
each of the output units is computed. The output unit with the largest input
4 Grossberg (1976) has addressed this problem in his very similar system. He has proved
that if the patterns are sufficiently sparse and/or when there are enough units in the cluster,
then a system such as this will find a perfectly stable classification. He also points out that
when these conditions do not hold, the classification can be unstable. Most of our work is
with cases in which there is no perfectly stable classification and the number of patterns is
much larger than the number of units in the inhibitory clusters.
is determined to be the winner and its activation value is set to 1. All other
units have their activation values set to 0. The routine that carries out this
computation is
function compute_output()
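The body of the routine is not reproduced here. A minimal sketch of the computation it performs, written in terms of the field names that appear in the code fragments elsewhere in this chapter (the details are an assumption, not the literal pdptool implementation), is:

% Net input to each output unit is the weighted sum of the input activations;
% rows of the weight matrix index receivers and columns index senders.
input = net.pool('input').activation;                  % row vector of input activations
netin = input * net.pool('output').proj(1).weight';    % one net input per output unit
[~, winner] = max(netin);                              % the largest net input wins
net.pool('output').winner = winner;
act = zeros(size(netin));
act(winner) = 1;                                       % winner takes all
net.pool('output').activation = act;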
After the activation values are determined for each of the output units, the
weights must be adjusted according to the learning rule. This involves increasing
the weights from the active input lines to the winner and decreasing the weights
from the inactive lines to the winner. This routine assumes that each input
pattern sums to 1.0 across units, keeping the total amount of weight equal to
1.0 for a given output unit. If we do not want to make this assumption, the
routine could easily be modified by implementing Equation 6.1 instead.
function change_weights()
% find the weight vector to be updated (belonging to the winning output unit)
% ------------------------------------------------------------------------
wt = net.pool('output').proj(1).weight(net.pool('output').winner,:);
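Only the first line of the routine is shown above. A sketch of the remainder of the update it describes, under the stated assumption that each input pattern sums to 1.0 (again an illustration, not the literal pdptool code; lrate is the learning rate parameter):

% Move the winner's weights a fraction lrate toward the input pattern. This
% increases weights from active input lines and decreases weights from inactive
% lines, keeping the winner's total weight at 1.0.
input = net.pool('input').activation;
wt = wt + lrate .* (input - wt);
net.pool('output').proj(1).weight(net.pool('output').winner,:) = wt;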
Figure 6.3: Initial screen display for the cl program running the Jets and Sharks
example with two output units.
gang to which the individual belongs. In the case of Art, we have a .2 on the left
and a 0 on the right. This represents the fact that Art is a Jet and not a Shark.
Note that there is at most one .2 in each row. This results from the fact that
the values on the various dimensions are mutually exclusive. Art has a .2 for
the third value of the Age row, indicating that Art is in his 40s. The rest of the
values are similarly interpreted. The weights are in the same configuration as
the inputs. The corresponding weight value is displayed below each of the two
output unit labels (unit 1 and unit 2 ). Each cell contains the weight from the
corresponding input unit to that output unit. Thus the upper left-hand value
for the weights is the initial weight from the Jet unit to output unit 1. Similarly,
the lower right-hand value of the weight matrix is the initial weight from bookie
to unit 2. The initial values of the weights are random, with the constraint
that the weights for each unit sum to 1.0. (Due to scaling and roundoff, the
actual values displayed should sum to a value somewhat less than 1.0.) The
lrate parameter is set to 0.05. This means that on any trial 5% of the winner’s
weight is redistributed to the active lines.
Now try running the program by clicking the run button in the train window.
Since nepochs is set to 20, the system will stop after 20 epochs. Look at the
new values of the weights. Try several more runs, using the newstart command
to reinitialize the system each time. In each case, note the configuration of the
weights. You should find that usually one unit gets about 20% of its weight on
the jets line and none on the sharks line, while the other unit shows the opposite
pattern.
Q.6.1.1.
Hint.
You can find out how the system responds to each subpattern by
stepping through the set of patterns in test mode — noting each
time which unit wins on that pattern (this is indicated by the output
activation values displayed on the screen).
Q.6.1.2.
Examine the values of the weights in the other rows of the weight
matrix. Explain the pattern of weights in each row. Explain, for
example, why the unit with a large value on the Jet input line has
the largest weight for the 20s value of age, whereas the unit with a
large value on the Shark input line has its largest weight for the 30s
value of the age row.
Now repeat the problem and run it several more times until it reaches a
rather different weight configuration. (This may take several tries.) You might
be able to find such a state faster by reducing lrate to a smaller value, perhaps
0.02.
Q.6.1.3.
Thus far the system has used two output units and it therefore classified the
patterns into two classes. We have prepared a version with three output units.
First, close the pdptool windows. Then access the program by the command:
jets3
Q.6.1.5.
Thus, as with simple competitive learning, the output weights are pulled towards
the input pattern. This pull is proportional to the activation of the particular
output unit and the learning rate.
• In the SOM presented, the net input for an output unit is the inner prod-
uct of the input vector and the weight vector, as is common for neural
networks. This type of model is discussed in Hertz et al. (1991). However,
SOMs usually use Euclidean distance between the vectors instead. In this
case, the winning output unit would have the smallest distance from the
input pattern instead of the largest inner product.
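The difference between the two choices can be seen in a few lines of MATLAB. In this sketch (all names and values are illustrative; W holds one row of weights per output unit and x is the input vector), the inner-product rule picks the unit with the largest net input, while the Euclidean rule picks the unit whose weight vector lies closest to the input:

x = [0.2 0.0 0.2 0.6];                              % hypothetical input pattern
W = rand(5, 4); W = W ./ repmat(sum(W, 2), 1, 4);   % 5 output units; rows sum to 1
netin = W * x(:);                                   % inner products
[~, winner_ip] = max(netin);                        % inner-product winner
d = sqrt(sum((W - repmat(x, 5, 1)).^2, 2));         % Euclidean distances to x
[~, winner_eu] = min(d);                            % Euclidean-distance winner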
6.2.3 Implementation
The SOM is also implemented in the cl program. Learning is very similar to
simple competitive learning. A pattern is chosen and clamped onto the input
units. Using the same routine as simple competitive learning, the output unit
with the largest net input is chosen as the winner. Unique to the SOM, the
activation of each output unit is set according to the Gaussian function based
on distance from the winner. The routine that carries out this computation is:
function compute_output()
Figure 6.4: The evolution of the network is illustrated at 1 input pattern (A),
250 input patterns (B), and 1000 input patterns (C). In the plots, the blue
points are the 1000 input points to be presented. The red points are the weights
for each of the 5x5 output units, and adjacent output units are connected by
green lines. At initialization in A, there is little order, with neighboring output
units in the grid spread to opposite ends of the space. After 250 patterns in
B, the map is compressed between the two Gaussians. Order is starting to
emerge since neighboring grid units seem to be nearby in input space. However,
coverage of the Gaussians is still poor. In C, the output units form a clear grid,
illustrating the elastic net analogy. The output units are crowded in the center
of each Gaussian where the density of input patterns is concentrated, avoiding
the sparse gap between the Gaussians. This illustrates the constraints on the
model: concentrating clusters in areas of high density, maximizing between-
cluster distance, and retaining the input topology by keeping neighboring output
units as neighbors in input space.
% ---------------------------------------------
[xwin, ywin] = ind2sub(net.pool('output').geometry, net.pool('output').winner);
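Continuing from the fragment above, a sketch of how the remaining activations might be set. The use of lrange as the Gaussian width and the exact normalization are assumptions based on the description in the text, not the literal pdptool code:

geom  = net.pool('output').geometry;          % grid dimensions of the output layer
sigma = net.pool('output').lrange;            % assumed neighborhood width
[xg, yg] = ind2sub(geom, 1:prod(geom));       % grid coordinates of every output unit
d2 = (xg - xwin).^2 + (yg - ywin).^2;         % squared grid distance to the winner
net.pool('output').activation = exp(-d2 ./ (2*sigma^2)) ./ (2*pi*sigma^2);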
After the activation values are determined for each of the units, the weights
are updated. In contrast with simple competitive learning, not just the winner’s
weights are updated. Each of the output units is pulled towards the input
pattern in proportion to its activation and the learning rate. This is done with
the following routine:
function change_weights()
% for each output unit, in proportion to the activation of that output unit,
% adjust the weights in the direction of the input pattern
% (wt is assumed to hold the full weight matrix, net.pool('output').proj(1).weight)
% ---------------------------------------------------------------------
for k = 1:size(wt,1)
    wt(k,:) = wt(k,:) + (lrate .* (net.pool('output').activation(k) * ...
        (net.pool('input').activation - wt(k,:))));
end
net.pool('output').proj(1).weight = wt;
end
Figure 6.5: Initial screen display for the cl program running the topographic
map.
1. Start MATLAB, make sure the pdptool path is set, and change to the
pdptool/cl directory.
2. At the MATLAB prompt, type “topo”. This will bring up two square arrays
of units, the upper one representing an input layer (like the skin surface)
and the lower one representing an internal representation (like the cortical
sheet). This window is displayed in Figure 6.5.
3. Start by running a test to get your bearings. Note that there are training
and testing windows, train on the left and testing on the right. To test,
click the selector button next to ‘options’ under test. Then select test all
(so that it is checked) and click run.
The program will step through 100 input patterns, each producing a blob
of activity at a point in the input space. The edges of the input space are used
only for the flanks of the blobs; their centers are restricted to a region of 10x10
units. The centers of the blobs will progress across the screen from left to right,
then down one row and across again, etc. In the representation layer you will
see a large blob of activity that will jump around from point to point based on
the relatively random initial weights (more on this in Part 3).
Note that the input patterns are specified in the pattern file with a name,
the letter x, then three numerical entries. The first is the x position on the
input grid (patx ), the second is the y position (paty ), and the third is a spread
parameter (σ), determining the width (standard deviation) of the Gaussian
blob. All spreads have been set to 1. The activation of an input unit i, at grid
coordinates (i_x, i_y), is determined by:

active_i = \frac{1}{2\pi\sigma^2} \exp\left( -\frac{(i_x - pat_x)^2 + (i_y - pat_y)^2}{2\sigma^2} \right)     (6.4)
which is the same Gaussian function (Equation 6.2) that determines the output
activations, depending on the winning unit’s grid position.
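In MATLAB form, Equation 6.4 can be computed for every unit on the input grid at once. The grid size below is a made-up example (it is not necessarily the size used by the topo model); patx, paty, and sigma follow the pattern-file entries described above:

patx = 5; paty = 7; sigma = 1;                 % example blob center and spread
[ix, iy] = meshgrid(1:16, 1:16);               % hypothetical input grid coordinates
active = exp(-((ix - patx).^2 + (iy - paty).^2) ./ (2*sigma^2)) ./ (2*pi*sigma^2);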
The pool structure of the network is as follows:
Pool(1) is not used in this model.
Pool(2) is the input pool.
Pool(3) is the representation pool.
There is only one projection in the network, net.pool(3).proj(1), which contains
the weights in the network.
the network with the patterns presented in a randomly permuted order within
each epoch (each pattern is presented once per epoch). The display will update
once per epoch, showing the last pattern presented in the epoch in the display.
You can reduce the frequency of updating, if you like, to once per 10 or 20
epochs using the update after window.
Now if you test again, you may see some order beginning to emerge. That is,
as the input blob progresses across and then down when you run a test all, the
activation blob will also follow the same course. It will probably be jerky and
coarse at this point, and sometimes the map comes out twisted. If it is twisted
at this stage it is probably stuck.
If it is not twisted, you can proceed to refining the map. This is done by
a process akin to annealing, in which you gradually reduce the lrange variable.
A reasonable schedule is to reduce it every 200 epochs of training in the following
steps: 2.8 for the first 200 epochs, 2.1 for the second 200 epochs, 1.4 for
the third 200 epochs, and 0.7 for the last 200 epochs. So, once you have trained for
200 epochs with lrange = 2.8, set net.pool(3).lrange = 2.1 at the command prompt.
Then, run 200 more epochs (just click run) and test again. At this stage the
network seems to favor the edges too much (a problem that lessens but often
remains throughout the rest of training). Then set net.pool(3).lrange to 1.4 at
the command prompt; then run another 200 epochs, then test again, then set
it to 0.7, run another 200, then finally test again.
You may or may not have a nice orderly net at this stage. To get a sense of
how orderly, you can log your output in the following manner. In test options,
click “write output” then “set write options.” Click “start new log” and use
the name "part1_log.mat". Click "Save" and you will return to the set write
output panel. In this panel, go into network variables, click net, it will open,
click pool(3), it will open, click “winner” in pool(3), then click “add.” The
line “pool(3).winner” will then appear under selected. Click “OK.” NOTE: you
must also click OK on the Testing options popup for the log to actually be
opened for use.
Now run a test again. The results will be logged as a vector showing the
winners for each of the 100 input patterns. At the matlab command window
you can now load this information into matlab:
mywinners = load('part1_log');
reshape(mywinners.pool3_winner, 10, 10)'
you will get a 10x10 array of the integer indexes of the winners in your command
window. The numbers in the array correspond to the winning output unit. The
output unit array (the array of colored squares you see on the gui) is column
major, meaning that you count vertically 1-10 first and then 11 starts from the
next column, so that 1, 11, 21, 31, 41 etc. are on the same horizontal line. In
the matrix printed in your command window, the spatial position corresponds
to position of the test pattern centers. Thus, a perfect map will be numbered
down then across such that it would have 1-10 in the first column, 11-20 in the
second column, etc.
1 11 21 31 ...
2 12 22 32 ...
3 13 23 33 ...
4 14 24 34 ...
... ... ... ... ...
** The above array is your first result. Bring this (in printed form) to class for
discussion. If your results are not perfect, which is quite likely, what is “wrong”
with yours?
NOTE: The log currently stays open and logs all subsequent tests until you
shut it off. To do this, click “test options” / “set write options” / and then click
“log status off.” You should probably start a new log file each time you want
to examine the results, since the contents of the log file will require parsing
otherwise. Also the file will then be available to reload and you can examine its
contents easily.
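If you would like a rough numerical check of how orderly the map is, the following sketch (not part of the exercise; it simply follows the logging example above) compares the logged winners against the ideal column-major numbering described earlier. Note that an orderly but rotated or reflected map would score poorly on this particular check:

mywinners = load('part1_log');
map   = reshape(mywinners.pool3_winner, 10, 10)';   % as in the example above
ideal = reshape(1:100, 10, 10);                     % 1-10 down the first column, etc.
fraction_in_place = mean(map(:) == ideal(:));       % 1.0 for a perfect map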
help create an orderly response grid, but rotated 90 degrees, twisted, or perhaps
worse, a jumbled mess. We will explore this initial topographic weight bias in
this exercise.
Note that when the network is initialized, the weights are assigned according
to the explanation below:
Create a set of weights, weight_rand(r,i), which are drawn randomly
from a uniform distribution from 0 to 1. Then, normalize the weights
such that

sum(weight_rand(r,:)) = 1

A second set of weights, weight_topo(r,i), is also created, falling off
as a Gaussian function of d, where ‘d’ is the distance between unit r
and unit i in their respective grids (aligned such that the middle 10x10
square of the input grid aligns with the 10x10 output grid). Thus, the
weights have a Gaussian shape, such that the input units connect most
strongly with output units that share a similar position in their
respective grids. Also,

sum(weight_topo(r,:)) = 1 %approximately

The weights in the network are then set to a mixture of the two,
weighted by the topbias parameter:

net.pool(3).proj(1).weight(r,i) =
(1. - topbias).* weight_rand(r,i) +
topbias .* weight_topo(r,i);
row numbered 6, 16, 26, . . . , 96. If the ones digits in the row are numbers other
than 6, this would be indicative of reorganization. Specifically, if the ones digits
are 4 or 5, that means the representation of this input row has crept upwards,
taking over territory that would previously have responded to the upper half of
the patterns.
** This array is your third result. Bring this also to class for discussion.
NOTE: You can either test with the same patterns you train with, or with
the original set of 100 patterns. The program generally allows different sets of
patterns for training and testing.
Chapter 7
Since the publication of the original pdp books (Rumelhart et al., 1986; Mc-
Clelland et al., 1986) and the back-propagation algorithm, the bp framework has
been developed extensively. Two of the extensions that have attracted the most
attention among those interested in modeling cognition have been the Simple
Recurrent Network (SRN) and the recurrent back-propagation (RBP) network.
In this and the next chapter, we consider the cognitive science and cognitive
neuroscience issues that have motivated each of these models, and discuss how
to run them within the PDPTool framework.
7.1 BACKGROUND
7.1.1 The Simple Recurrent Network
The Simple Recurrent Network (SRN) was conceived and first used by Jeff
Elman, and was first published in a paper entitled Finding structure in time
(Elman, 1990). The paper was ground-breaking for many cognitive scientists
and psycholinguists, since it was the first to completely break away from a
prior commitment to specific linguistic units (e.g. phonemes or words), and to
explore the vision that these units might be emergent consequences of a learning
process operating over the latent structure in the speech stream. Elman had
actually implemented an earlier model in which the input and output of the
network was a very low-level spectrogram-like representation, trained using
spectral information extracted from a recording of his own voice saying ‘This is
Figure 7.1: The SRN network architecture. Each box represents a pool of units
and each forward arrow represents a complete set of trainable connections from
each sending unit to each receiving unit in the next pool. The backward arrow,
from the hidden layer to the context layer denotes a copy operation. To see how
this architecture is represented in the PDPTool implementation, see Figure 7.7.
Reprinted from Figure 2, p. 163, of Servan-Schreiber et al. (1991).
the voice of the neural network’. We will not discuss the details of this network,
except to note that it learned to produce this utterance after repeated training,
and contained no explicit feature, phoneme, syllable, morpheme, or word-level
units.
In Elman’s subsequent work he stepped back a little from the raw-stimulus
approach used in this initial unpublished simulation, but he retained the funda-
mental commitment to the notion that the real structure is not in the symbols we
as researchers use but in the input stream itself. In Finding structure in time,
Elman presented several simulations, one addressing the emergence of words
from a stream of sub-lexical elements (he actually used the letters making up
the words as the elements for this), and the other addressing the emergence of
sentences from a stream of words. In both models, the input at any given time
slice comes from a small fixed alphabet; interest focuses on what can be learned
in a very simple network architecture, in which the task posed to the network is
to predict the next item in the string, using the item at time t, plus an internal
representation of the state of a set of hidden units from the previous time step.
An SRN of the kind Elman employed is illustrated in Figure 7.1. We actually
show the network used in an early follow-up study by Servan-Schreiber et al.
(1991), in which a very small alphabet of elements is used (this is the particular
network provided with the PDPTool software, and it will be described in more
detail later).
The beauty of the SRN is its simplicity. In fact, it is really just a three-layer,
feed-forward back propagation network. The only proviso is that one of the two
parts of the input to the network is the pattern of activation over the network’s
Figure 7.2: Root mean squared error in predicting each of the indicated letters
from Elman’s letter-in-word prediction experiment. The letters shown are the
first 55 letters in the text used for training the network. Reprinted from Figure
6, p. 194, of Elman (1990).
to places where something ends and something else begins. One such place
might be between ‘fifteen’ and ‘men’ in a sentence like ‘Fifteen men sat down at
a long table’, although there is unlikely to be a clear boundary between these
words in running speech.
Elman’s approach to these issues, as previously mentioned, was to break
utterances down into a sequence of elements, and present them to an SRN. In
his letter-in-word simulation, he actually used a stream of sentences generated
from a vocabulary of 15 words. The words were converted into a stream of
elements corresponding to the letters that spelled each of the words, with no
spaces. Thus, the network was trained on an unbroken stream of letters. After
the network had looped repeatedly through a stream of about 5,000 elements,
he tested its predictions for the first 50 or so elements of the training sequence.
The results are seen in Figure 7.2.
What we see is that the network tends to have relatively high prediction error
for the first letter of each word. The error tends to drop throughout the word,
and then suddenly to shoot up again at the first letter of the next word. This
is not always true – sometimes, after a few downward steps, there is an uptick
within a word, but such upticks generally correspond to places where there might
be the end of what we ordinarily call a word. Thus, the network has learned
something that corresponds at least in part with our intuitive notion of ‘word’,
without building in the concept of word or ever making a categorical decision
about the locations of word boundaries.
The other two findings come from a different simulation, in which the ele-
ments of the sequences used corresponded to whole words, strung together again
to form simple sentences. The set of words Elman used corresponded to sev-
eral familiar nouns and verbs. Each sentence involved a verb, and at least one
noun as subject, with an optional subsequent noun as direct object. Verbs and
nouns fell into different sub-types: there were, for example, verbs of perception
(which require an animate subject but can take any noun as object), verbs
of consumption (which require something consumable), and verbs of destruction,
each of which had different restrictions on the nouns that could occur with it
as subject and object. Crucially, the input patterns representing the nouns
and verbs were randomly assigned, and thus did not capture in any way the
co-occurrence structure of the domain. Over the course of learning, however, the
network came to assign each input its own internal representation. In fact, the
hidden layer reflected both the input and the context; as a result, the patterns
the network learned to assign provided a highly context-sensitive form of lexical
representation.
The next two figures illustrate findings from this simulation. The first of
these (Figure 7.3) shows a cluster analysis based on the average pattern over
the hidden layer assigned to each of the different words in the corpus. What we
see is that the learned average internal representations indicate that the network
has been able to learn the category structure and even the sub-category structure
of the “lexicon” of this simple artificial language. The reason for this is largely
that the predictive consequences of each word correspond closely to the syntactic
category and sub-category structure of the language. One may note, in fact,
that the category structure encompasses distinctions that are usually treated
as syntactic (noun or verb, and within verbs, transitive vs intransitive) as well
as distinctions that are usually treated as semantic (fragile-object, food item),
and at least one distinction that is clearly semantic (animate vs. inanimate)
but is also often treated as a syntactically relevant “subcategorization feature”
in linguistics. The second figure (Figure 7.4) shows a cluster analysis of the
patterns assigned to two of the words (BOY and GIRL) in each of many different
contexts. The analysis establishes that the overall distinction between BOY
and GIRL separates the set of context-sensitive patterns into two highly similar
subtrees, indicating that the way context shades the representation of BOY is
similar to the way in which it shades the representation of GIRL.
Overall, these three simulations from Elman (1990) show how both segmen-
tation of a stream into larger units and assignment of units into a hierarchical
similarity structure can occur, without there actually being any enumerated list
of units or explicit assignment to syntactic or semantic categories.
Elman continued his work on SRN’s through a series of additional impor-
tant and interesting papers. The first of these (Elman, 1991) explored the
Figure 7.3: Result of clustering the average pattern over the hidden units for
each of the words used in Elman’s (1990) sentence-structure simulation. Noun
and verb categories are cleanly separated. Within nouns, there is strong cluster-
ing by animacy, and within animates, by human vs animal; then within animal,
by predator vs prey. Inanimates cluster by type as well. Within verbs, cluster-
ing is largely based on whether the verb is transitive (DO-OBLIG), intransitive
(DO-ABS), or both (DO-OPT), although some verbs are not perfectly classified.
Reprinted from Figure 7, p. 200, of Elman (1990)
learning of sentences with embedded clauses, illustrating that this was possible
in this simple network architecture, even though the network did not rely on
the computational machinery (an explicitly recursive computational structure,
including the ability to ‘call’ a computational process from within itself) usually
thought to be required to deal with embeddings. A subsequent and highly influ-
ential paper (Elman, 1993) reported that success in learning complex embedded
structures depended on starting small – either starting with simple sentences,
and gradually increasing the number of complex ones, or limiting the network’s
ability to exploit context over long sequences by clearing the context layer after
every third element of the training sequence. However, this later finding was
later revisited by Rohde and Plaut (1999). They found in a very extensive series
of investigations that starting small actually hurt eventual performance rather
than helped it, except under very limited circumstances. A number of other
very interesting investigations of SRN’s have also been carried out by Tabor
and collaborators, among other things using SRN’s to make predictions about
participants’ reading times as they read word-by-word through sentences (Tabor
et al., 1997).
Figure 7.5: The stochastic finite-state transition network used in the gsm sim-
ulation. Strings are generated by transitioning between nodes connected by
links, and emitting the symbol associated with each link. Where two links leave
the same node, one is chosen at random with equal probability. Reprinted from
Figure 3, p. 60, of Servan-Schreiber et al. (1991), based on the network used
earlier by Reber (1976).
the SRN if the material needed at the end of the embedding is also of some
relevance within the sequence.
The reader is referred to the paper by Servan-Schreiber et al. (1991) for
further details of these investigations. Here we concentrate on describing the
simulation model that allows the reader to explore the SRN model, using the
same network and one of the specific training sets used by Servan-Schreiber
et al. (1991).
7.2.1 Sequences
The srn network type also provides a construct called the “sequence”,
which consists of one or more input-output pairs to be presented in a fixed order.
The idea is that one might experience a series of sequences, such that each
sequence has a fixed structure, but the order in which the sequences appear can
be random (permuted) within each epoch. In the example provided, a sequence
is a sequence of characters beginning with a B and ending with an E.
Sequences can be defined in the pattern file in two different ways:
Default The default method involves beginning the specification of each se-
quence with a pname, followed by a series of input-output pairs, followed
by ‘end’ (see the file gsm21_s.pat for an example). When the file is read,
a data structure element is created for the sequence. At the beginning
of each sequence, the state of the context units is initialized to all .5’s at
the same time that the first input pattern is presented on the input. At
each successive step through the sequence, the state of the context units
is equal to the state of the hidden units determined during the previous
step in the sequence.
SeqLocal The SeqLocal method of specifying a sequence works only for a re-
stricted class of possible cases. These are cases where (a) each input and
target pattern involves a single active unit; all other inputs and targets
are 0; and (b) the target at step n is the input at step n+1 (except for
the last element of the sequence). For such cases, the .pat file must begin
with a line like this:
SeqLocal b t s x v p e
This line specifies that the following entries will be used to construct actual
input output pattern pairs, as follows. The character strings following
SeqLocal are treated as labels for both the input and output units, with
the first label being used for the first input unit and the first output unit
etc. Single characters are used in the example but strings are supported.
Specific sequences are then specified by lines like the following:
p05 b t x s e
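Under the rules just described (the target at step n is the input at step n+1), this line is expanded into four input-target pairs, each using a single active unit among the seven labeled units:

input: b   target: t
input: t   target: x
input: x   target: s
input: s   target: e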
of type ‘hidden’ and should receive a single projection from some other layer
(typically a hidden layer) which has its constraint type field set to ‘copyback’.
When the constraint type is ‘copyback’, there are no actual weights, the state
of the ‘sending’ units is simply copied to the ‘receiving’ units at the same time
that the next target input is applied (except at the beginning of a new sequence,
where the state of each of the context units is set to the clearval, which by default
is set to .5).
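A minimal sketch of this copy-back arrangement outside the srn program may help make the bookkeeping concrete. All sizes, weights, and variable names here are illustrative assumptions, not the program's internals:

nin = 7; nhid = 3; nout = 7;                   % illustrative sizes (7 symbols: b t s x v p e)
W_ih = 0.1*randn(nin+nhid, nhid); b_h = zeros(1, nhid);   % input+context -> hidden
W_ho = 0.1*randn(nhid, nout);     b_o = zeros(1, nout);   % hidden -> output
seq  = eye(7);                                 % hypothetical one-hot inputs for one sequence
clearval = 0.5;
context  = clearval * ones(1, nhid);           % context is reset at the start of a sequence
for t = 1:size(seq, 1)
    hidden  = 1 ./ (1 + exp(-([seq(t,:) context] * W_ih + b_h)));
    output  = 1 ./ (1 + exp(-(hidden * W_ho + b_o)));
    context = hidden;      % copy-back: the next step sees this step's hidden state
end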
7.3 EXERCISES
The exercise is to replicate the simulation discussed in Sections 3 and 4 of
Servan-Schreiber et al. (1991). The training set you will use is described in
more detail in the paper, but is presented here in Figure 7.6. This particular
set contains 21 patterns varying in length from 3 to 8 symbols (plus a B at the
beginning and an E at the end of each one).
Figure 7.6: The training patterns used in the gsm exercise. A B is added to the
beginning of each sequence and an E is added to the end in the gsm simulation.
Reprinted from Figure 12, p. 173, of Servan-Schreiber et al. (1991).
Figure 7.7: Network view showing the layout for the srn network.
To run the exercise, download the latest version of pdptool, set your path to
include pdptool and all of its children, and change to the pdptool/srn directory.
Type ‘gsm’ (standing for “Graded State Machine”) at the matlab prompt. After
everything loads you will see a display showing (at the right of the screen) the
input, hidden and output units, and a vector representing the target for the
output units (see Figure 7.7). To the left of the input is the context. Within
the input, output, and target layers the units are layed out according to Figure
7.1. Your exercise will be to test the network after 10, 50, 150, 500, and 1000
epochs of training. The parameters of the simulation are a little different from
the parameters used in the published article (it is not clear what values were
actually used in the published article in some cases), and the time course of
learning is a little extended relative to the results reported in the paper, but
the same basic pattern appears.
One thing to be clear about at the outset is that the training and testing
is organized at the sequence level. Each sequence corresponds to a string that
could be generated by the stochastic finite state automaton shown in Figure
7.5. The ptrain option (which is the one used for this exercise) permutes the
order of presentation of sequences but presents the elements of the sequence in
its canonical sequential order. Each sequence begins with a B and ends with
an E, and consists of a variable number N of elements. As described above, the
sequence is broken down into N-1 input-target pairs, the first of which has a
B as input and the successor of B in the sequence as its target, and the last
of which has the next to last symbol as its input and E as its target. When
the B symbol is presented the context is reset to .5’s. It makes sense to update
the display during testing at the pattern level, so that you can step through
the patterns within each sequence. During training I update the display at the
epoch level or after 10 epochs.
Before you begin, consider:
• What is the 0-order structure of the sequences? That is, if you had no idea
about even the current input, what could you predict about the output?
This question is answered by noting the relative frequency of the various
outputs. Note that B is never an output, but all the other characters can
be outputs. The 0-order structure is thus just the relative frequency of
each character in the output.
• What is the 1st order structure of the sequences? To determine this ap-
proximately, consult the network diagram (Figure 7.5) and
note which letters can occur after each letter. Make a little grid for
yourself of seven rows, each containing seven cells. The row stands for the
current input, the cell within a row for a possible successor. So, consulting
the network diagram, you will find that B can be followed only by T or P .
So, place an X in the second (T ) cell of the first row and the 6th (P ) cell
of the first row. Fill in the rest of the rows, being careful to attend to the
direction of the arrows coming out of each node in the diagram and the
label on each arc. You should find (unless I made a mistake) that three of
the letters actually have the exact same set of possible successors. Check
yourself carefully to make sure you got this correct.
OK, now run through a test, before training. You should find that the
network produces near uniform output over the output units at each step of
testing.
NOTE: For testing, set update to occur after 1 pattern in the test window.
Use the single step mode and, in this case, quickly step through, noticing how
little the output changes as a function of the input. The weights are initialized
in a narrow range, making the initial variation in output unit activation rather
tiny. You can examine the activations of the output units at a given point by
typing the following to the matlab console:
net.pool(5).activation
When you get tired of stepping through, hit run in the test window. The
program will then quickly finish up the test.
The basic goal of this exercise is to allow you to watch the network proceed to
learn about the 0th, 1st, and higher-order structure of the training set. You have
already examined the 0th and 1st order structure; the higher-order structure is
the structure that depends on knowing something about what happened before
the current input. For example, consider the character V . What can occur after
a V , where the V is preceded by a T ? What happens when the V is preceded
by an X? By a P ? By another V ? Similar questions can be asked about other
letters.
Q.7.0.3.
Set nepochs in the training options panel to ten, and run ten epochs
of training, then test again.
What would you say has been learned at this point? Explain your
answer by referring to the pattern of activation across the output
units for different inputs and for the same input at different
points in the sequence.
Continue training, testing after a total of 50, 150, 500, and 1000
epochs. Answer the same question as above, for each test point.
Q.7.0.4.
Summarize the time course of learning, drawing on your results for
specific examples as well as the text of section 4 of the paper. How do
the changes in the representations at the hidden and context layers
contribute to this process?
Q.7.0.5.
Write about 1 page about the concept of the SRN as a graded state
machine and its relation to various types of discrete-state automata,
based on your reading of the entire article (including especially the
section on spanning embedded sequences).
Try to be succinct in each of your answers. You may want to run the whole
training sequence twice to get a good overall sense of the changes as a function
of experience.
Chapter 8
Recurrent
Backpropagation: Attractor
network models of semantic
and lexical processing
Recurrent back-propagation networks came into use shortly after the back-propagation
algorithm was first developed, and there are many variants of such networks.
Williams and Zipser (1995) provide a thorough review of the recurrent back-propagation
computational framework. Here we describe a particular variant, used extensively
in PDP models of the effects of brain injury
on lexical and semantic processing (Plaut and Shallice, 1993; Plaut et al., 1996;
Rogers et al., 2004; Dilkina et al., 2008).
8.1 BACKGROUND
A major source of motivation for the use of recurrent backpropagation networks
in this area is the intuition that they may provide a way of understanding
the pattern of degraded performance seen in patients with neuropsychological
deficits. Such patients make a range of very striking errors. For example, some
patients with severe reading impairments make what are called semantic errors
– misreading APRICOT as PEACH or DAFFODIL as TULIP. Other patients,
when redrawing pictures they saw only a few minutes ago, will sometimes put
two extra legs on a duck, or draw human-like ears on an elephant.
In explaining these kinds of errors, it has been tempting to think of the pa-
tient as having settled into the wrong basin of attraction in a semantic attractor
network. For cases where the patient reads ‘PEACH’ instead of ‘APRICOT’,
the idea is that there are two attractor states that are ‘near’ each other in a
semantic space. A distortion, either of the state space itself, or of the mapping
into that space, can result in an input that previously settled to one attractor
state settling into the neighboring attractor. Interestingly, patients who make
these sorts of semantic errors also make visual errors, such as misreading ‘cat’
as ‘cot’, or even what are called ‘visual-then-semantic’ errors, mis-reading ‘sym-
pathy’ as ‘orchestra’. All three of these types of errors have been captured
using PDP models that rely on the effects of damage in networks containing
learned semantic attractors (Plaut and Shallice, 1993). Figure 8.1 from Plaut
and Shallice (1993) illustrates how both semantic and visual errors can occur
as a result of damage to an attractor network that has learned to map from
orthography (a representation of the spelling of a word) to semantics (a representation
of the word’s meaning), taking printed words and mapping them to
basins of attraction within a recurrent semantic network.
The use of networks with learned semantic attractors has an extensive his-
tory in work addressing semantic and lexical deficits, building from the work of
Plaut and Shallice (1993) and other early work (Farah and McClelland, 1991;
Lambon Ralph et al., 2001). Here we focus on a somewhat more recent model
introduced to address a progressive neuropsychological condition known as se-
mantic dementia by Rogers et al. (2004). In this model, the ‘semantic’ representation
of an item is treated as an attractor state over a population of neurons
thought to be located in a region known as the ‘temporal pole’ or anterior tem-
poral lobe. The neurons in this integrative layer receive input from, and project
back to, a number of different brain regions, each representing a different type
of information about an item, including what it looks like, how it moves, what
it sounds like, the sound of its name, the spelling of the word for it, etc. The ar-
chitecture of the model is sketched in Figure 8.2 (top). Input coming to any one
of the visible layers can be used to activate the remaining kinds of information,
via the bi-directional connections among the visible layers and the integrative
layer and the recurrent connections among the units in the integrative layer.
According to the theory behind the model, progressive damage to the neurons
in the integrative layer and/or to the connections coming into and out of this
integrative layer underlies the progressive deterioration of semantic abilities
in semantic dementia patients (Rogers et al., 2004).
about the object, and the visual percept. Thus the model performs pattern
completion, much like, for example, the Jets-and-Sharks iac model from Chapter
2. A big difference is that the Rogers et al. (2004) model uses learned distributed
representations rather than instance units for each known concept.
where tgt_i(t) is the externally supplied target at tick t. The cce for a given unit
at a given time step is
cce_i(t) = −dt ( tgt_i(t) log(a_i(t)) + (1 − tgt_i(t)) log(1 − a_i(t)) ),
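In MATLAB terms this is simply (a sketch, with act, tgt, and dt standing for a pool’s activation vector, target vector, and time-step size at one tick):
cce = -dt * ( tgt.*log(act) + (1-tgt).*log(1-act) );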
In the forward pass we also calculate a quantity we will here call the ‘direct’
dEdnet for each time step. This is that portion of the partial derivative of the
error with respect to the net input of the unit that is directly determined by
the presence of a target for the unit at time step t. If squared error is used, the
direct dEdnet is given by
Note that the direct dEdnet will eventually be scaled by dt, but this is applied
during the backward pass as discussed below. Of course, if there is no target,
directdEdnet_i(t) is 0.
If cross-entropy error is used instead, we have the following simpler expres-
sion, due to the cancellation of part of the gradient of the activation function
with part of the derivative of the cross entropy error:
The process of calculating net inputs, activations, the two error measures, and
the direct dEdnet takes place in the forward processing pass. This process
continues until these quantities have been computed for the final time step.
The overall error measure for a specific unit is summed over all ticks t* for
which a target is specified:
e_i = Σ_{t*} e_i(t)
This is done separately for the sse and the cce measures.
The activations of each unit in each tick are kept in an array called the
activation history. Each pool of units keeps its own activation history array,
which has dimensions [nticks+1, nunits]
On the backward pass, the dEdnet value for each unit at each state is computed as
newdEdnet_i(t) = a_i(t) (1 − a_i(t)) Σ_k w_{ki} dEdnet_k(t + 1) + directdEdnet_i(t).
The subscript k in the summation above indexes the units that receive connec-
tions from unit i. Note that state 0 is thought of as immutable, so deltas need
not be calculated for that state. Note also that for the last state (the state
whose index is nticks + 1), there is no future to inherit error derivatives from,
so in that case we simply have
newdEdnet_i(nticks + 1) = directdEdnet_i(nticks + 1).
For the backward pass calculation, t starts at the next-to-last state (whose index
is nticks) and runs backward to t = 1; however, for units and ticks where the
external input is hard clamped, the value of dEdnet is kept at 0.
All of the dEdneti (t) values are maintained in an array called dEdnethistory.
As with the activations, there is a separate dEdnet history for each pool of units,
which like the activation history array, has dimensions [nticks+1,nunits]. (In
practice, the values we are calling directdEdnet are scaled by dt and placed in this
history array on the forward pass, and the contents of that array are thus simply
incremented during the backward pass.)
The weight error derivative (wed) for the connection to unit i from unit j is then
accumulated over all states:
wed_{ij} = Σ_{t=1}^{nticks+1} dEdnet_i(t) a_j(t − 1)
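To make the bookkeeping concrete, here is a pseudo-MATLAB sketch of the backward pass for a single pool with one recurrent projection. The variable names (acthistory, dEdnethistory, wt, wed) and the indexing are illustrative assumptions, not the actual field names or code of the rbp program.
% acthistory and dEdnethistory are [nticks+1, nunits] arrays; on the
% forward pass dEdnethistory was filled with the dt-scaled 'direct'
% dEdnet values. wt(k,i) is the weight from unit i to unit k.
for t = nticks:-1:1
    inherited = (dEdnethistory(t+1,:) * wt) .* ...
                (acthistory(t,:) .* (1 - acthistory(t,:)));
    dEdnethistory(t,:) = dEdnethistory(t,:) + inherited;
    % dEdnet would be reset to 0 here for any units whose input is
    % hard clamped at this tick (not shown)
end
wed = zeros(size(wt));            % accumulate weight error derivatives
for t = 2:nticks+1
    wed = wed + dEdnethistory(t,:)' * acthistory(t-1,:);
end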
Figure 8.3: Architecture of the Rogers et al. (2004) model. Verbal descriptors
(names, perceptual, functional, and encyclopedic) and visual feature units re-
ceive input directly from the environment. The environmental input is displayed
directly below the corresponding pool activations.
1. First save your complete set of learned weights using the save weights
command.
2. Then determine the dimensions of the weight matrix for the projection you
wish to lesion, for example:
[r s] = size(net.pool(4).proj(3).weight);
3. Then create a mask of 0’s and 1’s to specify which weights to destroy (0)
and which to keep (1). Assuming x holds the desired lesion probability, one
way to do this is:
mask = rand(r,s) > x;
This creates a mask matrix with each entry being zero with probability x
and 1 with probability 1 − x.
4. Then use the elementwise matrix multiply to zero the unfortunate con-
nections:
net.pool(4).proj(3).weight = net.pool(4).proj(3).weight.*mask;
This will zero out all of the weights associated with mask values of 0. You
can apply further lesions if you wish or re-load and apply different lesions
as desired.
One can add Gaussian noise with standard deviation sd to the weights in a
particular projection even more simply (we use sd rather than s here, since s
already holds the number of columns from the size call above):
net.pool(4).proj(3).weight = net.pool(4).proj(3).weight + sd*randn(r,s);
Lesioning units is a bit more complicated, and routines need to be implemented
to accomplish this.
information is followed by the name of the layer to which the pattern should be
applied, followed by a string of numbers specifying input values for each unit in
the pool. If the letter is H the states of the units in the specified pool are hard
clamped to the input values. If the letter is S the value specified is treated as
a component of the net input of the unit. In this case the input value and the
unit’s bias determine its initial net input for state 0, and its activation is set to
the appropriate value for that net input.
For both hard and soft clamps, the input is applied to the state at the starting
edge of the start time indicated, and remains in place for duration*ticksperinterval.
In this case, the values 1 3 mean that the input is clamped on in states 0 through
11. This does not include the state at the starting edge of interval 4 (state 12).
Target specifications begin with the letter T then a start time and a duration.
In this case the values are 6 and 2, specifying that the target is clamped for
two intervals beginning with interval 6. The target applies to the state at the
trailing edge of the first tick after the start time. So in this case the target
applies to states 22 to 29. As with input patterns, the start time and duration
are followed by a pool name and a sequence of values specifying targets for the
units in the pool indicated.
Chapter 9
Temporal-Difference Learning
9.1 BACKGROUND
To borrow an apt example from Sutton (1988), imagine trying to predict Sat-
urday’s weather at the beginning of the week. One way to go about this is
to observe the conditions on Monday (say) and pair this observation with the
actual meteorological conditions on Saturday. One can do this for each day of
the week leading up to Saturday, and in doing so, form a training set consisting
of the weather conditions on each of the weekdays and the actual weather that
obtains on Saturday. Recalling Chapter 5, this would be an adequate train-
ing set to train a network to predict the weather on Saturday. This general
approach is called supervised learning because, for each input to the network,
we have explicitly (i.e., in a supervisory manner) specified a target value for
the network. Note, however, that one must know the actual outcome value in
order to train this network to predict it. That is, one must have observed the
actual weather on Saturday before any learning can occur. This proves to be
somewhat limiting in a variety of real-world scenarios, but most importantly, it
seems a rather inefficient use of the information we have leading up to Saturday.
Suppose, for instance, that it has rained persistently throughout Thursday and
Friday, with little sign that the storm will let up soon. One would naturally
expect there to be a higher chance of it raining on Saturday as well. The rea-
son for this expectation is simply that the weather on Thursday and Friday are
relevant to predicting the weather on Saturday, even without actually knowing
the outcome of Saturday’s weather. In other words, partial information relevant
to our prediction for Saturday becomes available on each day leading up to it.
In supervised learning as it was previously described, this information is effec-
tively not employed in learning because Saturday’s weather is our sole target
for training.
Unsupervised learning, in contrast, operates instead by attempting to use in-
termediate information, in addition to the actual outcome on Saturday, to learn
to predict Saturday’s weather. While learning on observation-outcome pairs
is effective for prediction problems with only a single step, pairwise learning,
for the reason motivated above, is not well suited to prediction problems with
multiple steps. The basic assumption that underlies this unsupervised approach
is that predictions about some future value are ”not confirmed or disconfirmed
all at once, but rather bit by bit” as new observations are made over many
steps leading up to observation of the predicted value (Sutton, 1988). (Note
that, to the extent that the outcome values are set by the modeler, TD learn-
ing is not unsupervised. We use the terms ’supervised’ and ’unsupervised’ here
for convenience to distinguish the pairwise method from the TD method, as
does Sutton. The reader should be aware that the classification of TD and RL
learning as unsupervised is contested.)
While there are a variety of techniques for unsupervised learning in predic-
tion problems, we will focus specifically on the method of Temporal-Difference
(TD) learning (Sutton, 1988). In supervised learning generally, learning occurs
by minimizing an error measure with respect to some set of values that param-
eterize the function making the prediction. In the connectionist applications
that we are interested in here, the predicting function is realized in a neural
network, the error measure is most often the difference between the output of
the predicting function and some prespecified target value, and the values that
parameterize the function are connection weights between units in the network.
For now, however, we will dissociate our discussion of the prediction function
from the details of its connectionist implementation for the sake of simplicity
and because the principles which we will explore can be implemented by meth-
ods other than neural networks. In the next section, we will turn our attention
to understanding how neural networks can implement these prediction tech-
niques and how the marriage of the two techniques (TD and connectionism)
can provide added power.
The general supervised learning paradigm can be summarized in a familiar
formula
∆w_t = α (z − V(s_t)) ∇_w V(s_t),
where w_t is a vector of the parameters of our prediction function at time step
t, α is a learning rate constant, z is our target value, V(s_t) is our prediction for
input state s_t, and ∇_w V(s_t) is the vector of partial derivatives of the prediction
with respect to the parameters w. We call the function V(·), which we are
trying to learn, the value function. This is the same computation that underlies
back propagation, with the amendment that w_t are weights in the network and
the back propagation procedure is used to calculate the gradient ∇_w V(s_t). As
we noted earlier, this update rule cannot be computed incrementally with each
step in a multi-step prediction problem because the value of z is unknown until
the end of the sequence. Thus, we are required to observe a whole sequence
before updating the weights, so the weight update for an entire sequence is just
the sum of the weight changes for each time step,
w ← w + Σ_{t=1}^{m} ∆w_t
The key observation behind TD learning is that the prediction error for each step
can be rewritten as a sum of successive differences between predictions:
z − V_t = Σ_{k=t}^{m} (V_{k+1} − V_k),
where V_t is shorthand for V(s_t), V_{m+1} is defined as z, and s_{m+1} is the terminal
state in the sequence. The validity of this formula can be verified by merely
expanding the sum and observing that all terms cancel, leaving only z − V_t.
Thus, substituting for ∆w_t in the sequence update formula we have
w ← w + Σ_{t=1}^{m} α (z − V_t) ∇_w V_t,
from which we can extract the weight update rule for a single time step by
noting that the term inside the first sum is equivalent to ∆w_t by comparison to
the eligibility trace at time t. Eligibility traces are the primary mechanisms of
temporal credit assignment in TD learning. That is, credit (or “blame”) for
the TD error occurring on a given step is assigned to the previous steps as
determined by the eligibility trace. For high values of λ, predictions occurring
earlier in a sequence are updated to a greater extent than with lower values of
λ for a given error signal on the current step.
To consider an example, suppose we are trying to learn state values for the
states in some sequence. Let us refer to states in the sequence by letters of
the alphabet. Our goal is to learn a function V (·) from states to state-values.
State-values approximate the expected value of the variable occurring at the end
of the sequence. Suppose we initialize our value function to zero, as is commonly
done (it is also common to initialize them randomly, which is natural in the case
of neural networks). We encounter a series of states such as a, b, c, d. Since
the predictions for these states are zero at the outset, no useful learning will occur
until we encounter the outcome value at the end of the sequence. Nonetheless,
we find the output gradient for each input state and maintain the eligibility trace
as a discounted sum of these gradients. Suppose that this particular sequence
ends with z = 1 at the terminal state. Upon encountering this, we have a useful
error as our training signal, z − V (d). After obtaining this error, we multiply it
by our eligibility trace to obtain the weight changes. In applying these weight
changes, we correct the predictions of all past states to be closer to the value at
the end of the sequence, since the eligibility trace includes the gradient for past
states. This is desirable since all of the past states in the sequence led to an
outcome of 1. After a few such sequences, our prediction values will no longer be
random. At this point, valuable learning can occur even before a sequence ends.
Suppose that we encounter states c and then b. If b already has a high (or low)
value, then the TD error, V (b) − V (c) will produce an appropriate adjustment
in V (c) toward V (b). In this way, TD learning is said to bootstrap in that it
employs its own value predictions to correct other value predictions. One can
think of this learning process as the propagation of the outcome value back
through the steps of the sequence in the form of value predictions. Note that
this propagation occurs even when λ = 0, although higher values of λ will speed
the process. As with supervised learning, the success of TD learning depends
on passes through many sequences, although learning will be more efficient in
general.
Now we are in a position to consider exactly how TD errors can drive
more efficient learning in prediction problems than its supervised counterpart.
After all, with supervised learning, each input state is paired with the actual
value that it is trying to predict - how can one hope to do better than training
on veridical outcome information? Figure 9.1 illustrates a scenario in which we
can do better. Suppose that we are trying to learn the value function for a game.
We have learned thus far that the state labeled BAD leads to a loss 90% of the
time and a win 10% of the time, and so we appropriately assign it a low value.
We now encounter a NOVEL state which leads to BAD but then results in a
win. In supervised learning, we would construct a NOVEL-win training pair
and the NOVEL state would be initially assigned a high value. In TD learning,
Figure 9.1: State space of a game, see text for explanation. (From Sutton
(1988). Reprinted by permission.)
the NOVEL state is paired with the subsequent BAD state, and thus TD as-
signs the NOVEL state a low value. This is, in general, the correct conclusion
about the NOVEL state. Supervised methods neglect to account for the fact
that the NOVEL state ended in a win only by transitioning through a state that
we already know to be bad. With enough training examples, supervised learn-
ing methods can achieve equivalent performance, but TD methods make more
efficient use of training data in multi-step prediction problems by bootstrapping
in this way and so can learn more efficiently with limited experience.
In the more general case, the quantity to be predicted at time t is the discounted
return
R_t = r_{t+1} + γ r_{t+2} + γ^2 r_{t+3} + ... = Σ_{k=0}^{∞} γ^k r_{t+k+1}.
Discounting simply means that rewards arriving further in the future are
worth less. Thus, for lower values of γ, distal rewards are valued less in the
value prediction for the current time. The addition of the γ parameter not
only generalizes TD to non-episodic tasks, but also provides a means by which
to control how far the agent should look ahead in making predictions at the
current time step.
We would like our prediction Vt to approximate Rt , the expected return at
time t. Let us refer to the correct prediction for time t as Vt∗ ; so Vt∗ = Rt . We
can then derive the generalized TD error equation from the right portion of the
return equation.
V_t^* = Σ_{k=0}^{∞} γ^k r_{t+k+1}
      = r_{t+1} + Σ_{k=1}^{∞} γ^k r_{t+k+1}
      = r_{t+1} + Σ_{k=0}^{∞} γ^{k+1} r_{t+k+2}
      = r_{t+1} + γ Σ_{k=0}^{∞} γ^k r_{t+k+2}
      = r_{t+1} + γ V_{t+1}^*
We note that the TD error at time t will be E_t = V_t^* − V_t, and we use V_{t+1}
as an imperfect proxy for the true value V_{t+1}^* (bootstrapping). We substitute
to obtain the generalized error equation:
E_t = (r_{t+1} + γ V_{t+1}) − V_t
This error will be multiplied by a learning rate α and then used to update
our weights. The general update rule is
V_t ← V_t + α [ r_{t+1} + γ V_{t+1} − V_t ]
Recall, however, that when using the TD method with a function approxi-
mator like a neural network, we update Vt by finding the output gradient with
respect to the weights. This step is not captured in the previous equation. We
will discuss these implementation specific details later.
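As an illustration of the update rule itself, here is a minimal tabular TD(0) sketch in MATLAB, separate from any network implementation; the state sequence and rewards are invented for the example.
% Minimal tabular TD(0) sketch: V holds one value per state, and each
% step moves V(s) toward the bootstrapped target r + gamma*V(snext).
alpha = 0.1; gamma = 0.9;
V = zeros(1,5);              % five states, values initialized to zero
episode = [1 2 3 4 5];       % hypothetical sequence of state indices
r       = [0 0 0 0 1];       % reward received on entering each state
for t = 1:numel(episode)-1
    s = episode(t); snext = episode(t+1);
    tderr = r(t+1) + gamma*V(snext) - V(s);   % TD error
    V(s) = V(s) + alpha*tderr;                % update toward the target
end
After repeated passes over such sequences, the value of the outcome propagates back to the earlier states, as described above.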
It is worthwhile to note that the return value provides a replacement for
planning. In attaining some distal goal, we often think that an agent must plan
many steps into the future to perform the correct sequence of actions. In TD
learning, however, the value function Vt is adjusted to reflect the total expected
return after time t. Thus, in considering how to maximize total returns in
making a choice between two or more actions, a TD agent need only choose the
action with the highest state value. The state values themselves serve as proxies
for the reward value occurring in the future. Thus, the problem of planning a
sequence of steps is reduced to the problem of choosing a next state with a high
state value. Next we explore the formal characterization of the full RL problem.
Figure 9.2: The architecture of a RL agent. On the left, the arrows labeled
’state’ and ’reward’ denote the two signals that the agent receives from the
environment. On the right, the arrow labeled ’action’ denotes the only signal
the environment receives from the agent. For each step, the agent receives
state and reward signals and then produces an action signal that changes the
environment. The dotted line denotes the time horizon of a single step with the
new state and reward signals after action a_t has been performed. (From Sutton
and Barto (1998). Reprinted by permission.)
relate to one another. The environment provides two signals to the agent: the
current environment state, st , which can be thought of as a vector specifying all
the information about the environment which is available to the agent; and the
reward signal, rt , which is simply the reward associated with the co-occurring
state. The reward signal is the only training signal from the environment.
Another training signal is generated internally to the agent: the value of the
successor state needed in forming the TD error. The diagram also illustrates
the ability of the agent to take an action on the environment to change its state.
Note that the architecture of a RL agent is abstract: it need not align with
actual physical boundaries. When thinking conceptually about an RL agent, it
is important to keep in mind that the agent and environment are demarcated
by the limits of the agent’s control (Sutton and Barto, 1998). That is, anything
that cannot be arbitrarily changed by the agent is considered part of the agent’s
environment. For instance, although in many biological contexts the reward
signal is computed inside the physical bounds of the agent, we still consider
it part of the environment because it cannot be trivially modified by the
agent. Likewise, if a robot is learning how to control its hand, we should consider
the hand as part of the environment, although it is physically continuous with
the robot.
We have not yet specified how action choice occurs. In the simplest case, the
agent evaluates possible next states, computes a state value estimate for each
one, and chooses the next state based on those estimates. Take, for instance,
an agent learning to play the game Tic-Tac-Toe. There might be a vector of
length nine to represent the board state. When it is time for the RL agent to
choose an action, it must evaluate each possible next state and obtain a state
value for each. In this case, the possible next states will be determined by the
open spaces where an agent can place its mark. Once a choice is made, the
environment is updated to reflect the new state.
Another way in which the agent can modify its environment is to learn
state-action values instead of just state values. We define a function Q(·) from
state-action pairs to values such that the value of Q(st , at ) is the expected
return for taking action at while in state st . In neural networks, the Q function
is realized by having a subset of the input units represent the current state
and another subset represent the possible actions. For the sake of illustration,
suppose our learning agent is a robot that is moving around a grid world to
find pieces of garbage. We may have a portion of the input vector to represent
sensory information that is available to the robot at its current location in the
grid - this portion corresponds to the state in the Q function. Another portion
would be a vector for each possible action that the robot can take. Suppose
that our robot can go forward, back, left, right, and pickup a piece of trash
directly in front of it. Therefore, we would have a five place action vector, one
place corresponding to each of the possible actions. When we must choose an
action, we simply fix the current state on the input vector and turn on each
of the five action units in turn, computing the value for each. The next action
is chosen from these state-action values and the environment state is modified
accordingly.
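A sketch of this evaluation loop follows; the nine-unit location vector and the forward_pass stand-in for the network's output are hypothetical simplifications for illustration.
% Evaluate Q(s,a) for each of the five actions by clamping the state
% portion of the input and turning on one action unit at a time.
state = zeros(9,1); state(1) = 1;      % e.g. robot in the first grid square
nactions = 5;                          % forward, back, left, right, pickup
forward_pass = @(x) rand;              % stand-in for the network's value output
vals = zeros(1, nactions);
for a = 1:nactions
    act = zeros(nactions,1); act(a) = 1;       % one-hot action vector
    vals(a) = forward_pass([state; act]);      % Q(s,a) estimate
end
[~, best] = max(vals);                 % greedy choice among the actions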
The update equation for learning state-action values is the same in form as
that of learning state values which we have already seen:
Q(s_t, a_t) ← Q(s_t, a_t) + α [ r_{t+1} + γ Q(s_{t+1}, a_{t+1}) − Q(s_t, a_t) ]
This update rule is called SARSA for State Action Reward State Action
because it uses the state and action of time t along with the reward, state, and
action of time t + 1 to form the TD error. This update rule is called associative
because the values that it learns are associated with particular states in the
environment. We can write the same update rule without any s terms. This
update rule would learn the value of actions, but these values would not be
associated with any environment state, thus it is nonassociative. For instance,
if a learning agent were playing a slot machine with two handles, it would be
learning only action values, i.e. the value of pulling lever one versus pulling lever
two. The associative case of learning state-action values is commonly thought
of as the full reinforcement learning problem, although the related state value
and action value cases can be learned by the same TD method.
Action choice is specified by a policy. A policy is considered internal to
the agent and consists of a rule for choosing the next state based on the value
predictions for possible next states. More specifically, a policy maps state values
to actions. Policy choice is very important due to the need to balance exploration
and exploitation. The RL agent is trying to accomplish two related goals at
the same time: it is trying to learn the values of states or actions and it is
trying to control the environment. Thus, initially the agent must explore the
state space to learn approximate value predictions. This often means taking
suboptimal actions in order to learn more about the value of an action or state.
Consider an agent who takes the highest-valued action at every step. This
policy is called a greedy policy because the agent is only exploiting its current
knowledge of state or action values to maximize reward and is not exploring to
improve its estimates of other states. A greedy policy is likely undesirable for
learning because there may be a state which is actually good but whose value
cannot be discovered because the agent currently has a low value estimate for it,
precluding its selection by a greedy policy. Furthermore, constant exploratory
action is necessary in nonstationary tasks in which the reward values actually
change over time.
A common way of balancing the need to explore states with the need to
exploit current knowledge in maximizing rewards is to follow an ε-greedy policy
which simply follows a greedy policy with probability 1 − ε and takes a random
action with probability ε. This method is quite effective for a large variety of
tasks. Another common policy is softmax. You may recognize softmax as the
stochastic activation function for Boltzmann machines introduced in Chapter 3
on Constraint Satisfaction - it is often called the Gibbs or Boltzmann distribu-
tion. With softmax, the probability of choosing action a at time t is
p(a) = e^{Q_t(a)/τ} / Σ_{b=1}^{n} e^{Q_t(b)/τ},
where the denominator sums over all the exponentials of all possible ac-
tion values and τ is the temperature coefficient. A high temperature causes
all actions to be equiprobable, while a low temperature skews the probability
toward a greedy policy. The temperature coefficient can also be annealed over
the course of training, resulting in greater exploration at the start of training
and greater exploitation near the end of training. All of these common policies
are implemented in the pdptool software, along with a few others which are
documented in Section 9.5.
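As a sketch of how such a policy turns a vector of action-value estimates into a choice (illustrative only, not the pdptool implementation):
% Choose an action index given value estimates Q, using one of the
% policies described above; tau and epsilon are policy parameters.
function a = pick_action(Q, policy, tau, epsilon)
    switch policy
        case 'greedy'
            [~, a] = max(Q);
        case 'epsgreedy'
            if rand < epsilon
                a = randi(numel(Q));            % explore
            else
                [~, a] = max(Q);                % exploit
            end
        case 'softmax'
            p = exp(Q./tau); p = p./sum(p);     % Gibbs/Boltzmann distribution
            a = find(rand < cumsum(p), 1);      % sample an action from p
    end
end
For example, pick_action([0.2 0.5 0.1], 'softmax', 1, 0) chooses among the three actions with probabilities proportional to e^{Q/τ}; raising τ flattens these probabilities toward uniform, while lowering it approaches the greedy choice.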
neural networks is, of course, their ability to generalize learning across similar
states. Thus, combining both TD and back propagation results in an agent that
can flexibly learn to maximize reward over multiple time steps and also learn
structural similarities in input patterns that allow it to generalize its predic-
tions over novel states. This is a powerful coupling. In this section we sketch
the mathematical machinery involved in coupling TD with back propagation.
Then we explore a notable application of the combined TDBP algorithm to the
playing of backgammon as a case study.
The key difference between regular back propagation and TD back propa-
gation is that we must adjust the weights for the input at time t at time t + 1.
This requires that we compute the output gradient with respect to weights for
input at time t and save this gradient until we have the TD error at time t + 1.
Just as in regular back propagation, we use the logistic activation function,
f(net_i) = 1/(1 + exp(−net_i)), where net_i is the net input coming into output unit i
from a unit j that projects to it. Recall that the derivative of the activation
function with respect to the net input is simply ∂a_i/∂net_i = f′(net_i) = a_i (1 − a_i),
where a_i is the activation of output unit i, and the derivative of the net input
net_i with respect to the weights is ∂net_i/∂w_{ij} = a_j. Thus, the output gradient
with respect to the weight w_{ij} is a_i (1 − a_i) a_j.
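Putting these pieces together, one learning step for a single logistic output unit with input weight vector W might look like the following sketch (variable names are illustrative and this is not the tdbp source; the eligibility trace elig is assumed to start at zero):
% The gradient computed for the input at time t is folded into the
% eligibility trace and applied once the TD error at t+1 is available.
grad  = V_now*(1 - V_now) * a_in';        % dV_t/dW for the current input
elig  = gamma*lambda*elig + grad;         % decaying sum of past gradients
tderr = r_next + gamma*V_next - V_now;    % TD error, available at t+1
W     = W + lrate * tderr * elig;         % adjust weights for past inputs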
the point. However, if there is only one checker on a point, the other player
can land on the point and take the occupying checker off the board. The taken
pieces are placed on the “bar”, where they must be reentered into the game.
The purpose of the game is to move each of one’s checkers all the way around
and off the board. The first player to do this wins. If a player manages to
remove all her pieces from the board before the other player has removed any of
his pieces, she is said to have won a “gammon,” which is worth twice a normal
win. If a player manages to remove all her checkers and the other player has
removed none of his and has checkers on the bar, then she is said to have won
a “backgammon,” which is worth three times a normal win. The game is often
played in matches and can be accompanied by gambling.
The game, like chess, has been studied intently by computer scientists. It has
a very large branching rate: the number of moves available on the next turn is
very high due to the high number of possible dice rolls and the many options for
disposing of each roll. This prevents tree search methods from being very effective
in programming a computer to play backgammon. The large number of possible
board positions also precludes effective use of lookup tables. Gerald Tesauro at
IBM in the late 80’s was the first to successfully apply TD learning with back
propagation to learning state values for backgammon (Tesauro, 1992).
Tesauro’s TD-Gammon network had an input layer with the board represen-
tation and a single hidden layer. The output layer of the network consisted of
four logistic units which estimate the probability of white or black both achiev-
ing a regular win or a gammon. Thus, the network was actually estimating four
outcome values at the same time. The input representation in the first version
of TD-Gammon was 198 units. The number of checkers on each of the 24 points
on the board was represented by 8 units: four devoted to white and four to
black. If one white checker was on a given point, then
the first unit of the four was on. If two white checkers were on a given point,
then both the first and second unit of the group of four were on. The same is
true for three checkers. The fourth unit took on a graded value to represent
the number of checkers above three: (n/2), where n is the number of checkers
above three. In addition to these units representing the points on the board,
an additional two units encoded how many black and white checkers were on
the bar (off the board), each taking on a value (n/2), where n is the number of
black/white checkers on the bar. Two more units encoded how many of each
player’s checkers had already been removed from the board, taking on values of
(n/15), where n is the number of checkers already removed. The final two units
encoded whether it was black or white’s turn to play. This input projected to a
hidden layer of 40 units.
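A sketch of the per-point encoding just described (one player's four units for a single point holding n checkers; the function name is ours, not Tesauro's):
% Encode the number of checkers one player has on a single point into
% the four units used by the first version of TD-Gammon's input.
function u = encode_point(n)
    u = zeros(1,4);
    u(1) = n >= 1;          % at least one checker on the point
    u(2) = n >= 2;          % at least two
    u(3) = n >= 3;          % at least three
    if n > 3
        u(4) = (n - 3)/2;   % graded unit for checkers beyond three
    end
end
For example, encode_point(2) returns 1 1 0 0, and encode_point(5) returns 1 1 1 1.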
Tesauro’s previous backgammon network, Neurogammon, was trained in a
supervised manner on a corpus of expert-level games. TD-Gammon, however,
was trained completely by self-play. Moves were generated by the network itself
in the following fashion: the computer simulated a dice roll and generated all
board positions that were possible from the current position and the numbers
rolled on the dice. The network was fed each of these board positions and the one
that the network ranked highest was chosen as the next state. The network
then did the same for the next player and so on. Initially, the network chose
moves randomly because its weights were initialized randomly. After a sufficient
number of games, the network played with more direction, allowing it to explore
good strategies in greater depth. The stochastic nature of backgammon allowed
the network to thoroughly explore the state space. This self-play learning regime
proved effective against Neurogammon’s supervised technique.
The next generation of TD-Gammon employed a different input represen-
tation that included a set of conceptual features that are relevant to experts.
For instance, units were added to encode the probability of a checker being hit
and the relative strength of blockades (Tesauro, 2002). With this augmenta-
tion of the raw board position, TD-Gammon achieved expert-level play and is
still widely regarded as the best computerized player. It is commonly used to
analyze games and evaluate the quality of decisions by expert players. Thus,
the input representation to TDBP networks is an important consideration when
building a network.
There are other factors contributing to the success that Tesauro achieved
with TD-Gammon (Tesauro, 1992). Notably, backgammon is a non-deterministic
game. Therefore, it has a relatively smooth and continuous state space. This
means simply that similar board positions have similar values. In deterministic
games, such as chess, a small difference in board position can have large con-
sequences for the state value. Thus, the sort of state value generalization in
TD-Gammon would not be as effective and the discontinuities in the chess state
space would be harder to learn. Similarly, a danger with learning by self-play is
that the network will learn a self-consistent strategy in which it performs well
in self-play, but performs poorly against other opponents. This was remedied
largely by the stochastic nature of backgammon which allowed good coverage of
the state space. Parameter tuning was largely done heuristically with λ = 0.7
and α = 0.1 for much of the training. Decreases in λ can help focus learning
after the network has become fairly proficient at the task, but these parameter
settings are largely the decision of the modeler and should be based on specific
considerations for the task at hand.
9.4 IMPLEMENTATION
The tdbp program implements the TD back propagation algorithm. The struc-
ture of the program is very similar to that of the bp program. Just as in the
bp program, pool(1) contains the single bias unit, which is always on. Subse-
quent pools must be declared in the order of the feedforward structure of the
network. Each pool has a specified type: input, hidden, output, and context.
There can be one or more input pools and all input pools must be specified
before any other pools. There can be zero or more hidden pools and all hidden
pools must be specified before output pools but after input pools. There can be
one or more output pools, and they are specified last. There are two options for
output activation function: logistic and linear. The logistic activation function,
as we have seen in earlier chapters, ranges between 0 and 1 and has a natural
interpretation as a probability for binary outcome tasks. The linear activation
function is equivalent to the net input to the output units. The option of a
linear activation function provides a means to learn to predict rewards that do
not fall in the range of 0 to 1. This does not exclude the possibility of using
rewards outside that range with the logistic activation function - the TD error
will still be generated correctly, although the output activation can never take
on values outside of [0, 1]. Lastly, context pools are a special type of hidden
pool that operate in the same way as in the srn program.
The gamma parameter can be set individually for each output pool, and
if this value is unspecified, the network-wide gamma value is used. Output
units also have an actfunction parameter which takes values of either ’logistic’
or ’linear’. Each pool also has a delta variable and an error variable. As
mentioned previously, the delta value will, in general, be a two dimensional
matrix. The error variable simply holds the error terms for each output unit
that lies forward from the pool in the network. Thus, it will not necessarily
be of the same size as the number of units in the pool, but it will be the same
size as the first dimension of the delta matrix. Neither the error nor the delta
parameter is available for display in the network window although both can be
printed to the console since they are fields of the pool data structure.
Projections can be between any pool and a higher numbered pool. The bias
pool can project to any pool, although projections to input pools will have no
effect because the units of input pools are clamped to the input pattern. Con-
text pools can receive a special type of copyback projection from another hidden
pool, just as in the srn program. Projections from a layer to itself are not al-
lowed. Every projection has an associated eligibility trace, which is not available
for display in the network display since, in general, it is a three dimensional ma-
trix. It can, however, be printed to the console and is a field of the projection
data structures in the net global variable called eligtrace. Additionally, each
projection has a lambda and lrate parameter which specify those values for that
projection. If either of these parameters is unspecified, the network-wide lambda
and lrate values are used.
There are two modes of network operation - “beforestate” or “afterstate” -
which are specified in the .net file of the network when the network is created.
In “beforestate” mode, the tdbp program (1) presents the current environment
state as the network input, (2) performs weight changes, (3) obtains and eval-
uates possible next states, and (4) sets the next state. In contrast, the
“afterstate” mode instructs the tdbp program to (1) obtain and evaluate pos-
sible next states, (2) set the selected state as input to the network, (3) perform
weight changes, and (4) set the next state. In both cases, learning only occurs
after the second state (since we need two states to compute the error). For sim-
ple prediction problems, in which the network does not change the state of the
environment, the “beforestate” mode is usually used. For any task in which the
network modifies the environment, “afterstate” is likely the desired mode. Any
SARSA learning should use the “afterstate” mode. Consider briefly why this
should be the case. If the agent is learning to make actions, then in “afterstate”
mode it will first choose which action to take in the current state. Its choice
will then be incorporated into the eligibility trace for that time step. Then the
environment will be changed according to the network choice. This new state
will have an associated reward which will drive learning on the next time step.
Thus, this new reward will be the result of taking the selected action in the
associated state, and learning will correctly adjust the value of the initial state-
action pair by this new reward. In other words, the reward signal should follow
the selection of an action for learning state-action pairs. If you are learning to
passively predict, or learning a task in which states are selected directly, then
the pre-selection of the next state is unnecessary.
between the environment class and the tdbp program - they have no definition
in the conceptual description of RL. Formal methods, however, do have concep-
tual meaning in the formal definition of RL. We briefly describe the function of
each of these methods here. It is suggested that you refer back to Figure 9.4.1
to understand why each method is classified control vs. formal.
Note about Matlab classes. Matlab classes operate a little differently from
classes in other programming languages with which you might be familiar. If a
method in your class changes any of the property variables of your object, then
that method must accept your object as one of its input arguments and return
the modified object to the caller. For example, suppose you have an object
named environment who has a method named changeState which takes an
argument new_state as its input and sets the object’s state to the new state,
then you must define this method with two input arguments like the following:
function obj = changeState(obj, new_state)
obj.state = new_state;
end
You should call this method in the following way:
environment = environment.changeState(new_state);
When you call this method in this way, environment gets sent to the method
as the first argument, and other arguments are sent in the order that they
appear in the call. In our example method, the variable obj holds the ob-
ject environment. Since you want to modify one of environment’s properties,
namely the state property, you access it within your method as obj.state.
After you make the desired changes to obj in your method, obj is returned
to the caller. This is indicated by the first obj in the method declaration
obj = changeState(obj, new_state). In the calling code, you see that this
modified object gets assigned to environment, and you have successfully modi-
fied your object property. Many of the following methods operate this way.
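As a bare-bones illustration of this calling convention (the class and property names here are hypothetical, not those of the shipped environment classes):
% A value class whose state-changing methods accept and return the object.
% This would live in its own file named my_environment.m.
classdef my_environment
    properties
        state = 0;
    end
    methods
        function obj = changeState(obj, new_state)
            obj.state = new_state;     % modify the property...
        end                            % ...and return the modified object
        function s = getState(obj)
            s = obj.state;             % read-only methods need not return obj
        end
    end
end
Calling env = env.changeState(3) followed by env.getState() returns 3; forgetting to assign the returned object back to env would silently discard the change.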
Control Methods:
• environment_template() - This is the constructor method. It is called
when the environment object is created and returns the new environ-
ment instance. You should rename this function as the name of your
environment - it must have the same name as the name of the class at
the top of the environment class file and the filename itself. In general,
this function only calls the construct() function and returns the re-
turned value. As you might guess, the construct() method does the real
work of setting up the environment. We have separated these functions
for two reasons. First, the reset() function can just call construct()
to reset the environment. Second, you can initialize variables in the
environment_template() function which will persist over the course of
training. You can think of there being two types of variables within your
environment class: persisting and non-persisting. Non-persisting variables
are initialized in the construct() method. Thus, they are reset each time
policy, the next state is not determined by the network output. In this
case, next_state will be a vector of zeros and your setNextState imple-
mentation should ignore it and simply set the environment state based on
other criteria.
the output layers. The defer policy merely calls the setNextState() method
of the environment class with a vector of zeros, which should be ignored in
the implementation of that function. This allows the environment to change
its own state, and is used in passive prediction problems where the network
does not select next states. Lastly, the userdef policy passes the same matrix
of next states returned by the getNextStates() method of the environment
class along with a corresponding matrix of the network output for each of the
states to the doPolicy() method in the environment class. This allows the
user to write a custom policy function, as described above. Note that all of
the above policies will only use the value estimate on the first output unit of
the lowest numbered output pool. If the task requires that multiple values be
predicted by more than one output unit, and state selection should take multiple
value estimates into account, then the userdef policy is required. Recall that
TD-Gammon employed four output units to predict different possible outcomes
of the game. One might wish to take the values of all these output units into
account in choosing an action.
Testing options. The testing options window is much the same as the training
options window, but only values relevant to testing are displayed. It is possible,
especially early in training, for the network to cycle among a set of states when
testing with the greedy policy, resulting in an infinite loop. To prevent this,
the stepcutoff parameter defines the maximum number of steps in an episode.
When the network reaches the cutoff, the episode will be terminated and it will
be reported in the pattern list.
9.6 EXERCISES
Ex9.1. Trash Robot
In this exercise we will explore the effect of the environment reward structure
and parameter values on the behavior of a RL agent. To begin, open the file
trashgrid.m by locating it in the tdbp folder, right clicking it, and clicking “Open
as Text.” The file will open in a new window. This is the environment class for
the exercise and you will want to keep it open because you will need to edit it
later. To begin the exercise, type start3by3grid to the Matlab command prompt.
This will open the 3by3grid.net network and set up the training settings. You
can view the current training settings by opening the training options window.
The network is a SARSA network and is set up to use the softmax policy for
action selection with a constant temperature of 1. In the network window, you
will see a 3 by 3 grid and another set of 4 units labeled “action.” These four
units represent the four possible actions that can be taken in this grid world:
moving up, down, left, or right. Thus, there are a total of 13 input units in this
network. You also see that there are 6 hidden units and a single output unit
whose value is reported in decimal form.
The job of the agent in the following exercises will be to make the necessary
moves to enter the terminal square on the grid, where it will receive a reward
based on how long it took. There is also a single piece of trash which the agent,
who can be thought of as a simple trash-collecting robot, can pick up by entering
the square where the trash is. Activate the test panel and set the Update After
drop down to Step. Then click the Step button a few times to observe an
episode. You will see that the unit in the upper left corner of the grid turns on -
this is unit (1,1). The grid is numbered 1 to 3 from the top down and 1 to 3 from
the left to right. This unit marks the current location of the agent in the grid.
At each time step, the agent will move to a neighboring square in one of the
four cardinal directions. The unit in the bottom left corner of the grid will have
a value of .5. This indicates that there is a piece of trash in that square. The
terminal square is unmarked, but it is position (2,3) (two down, three right).
The episode ends when the agent enters this square. If you continue to step
through the episode, you will see that the robot moves randomly. Try clicking
Newstart and observe how the path of the robot changes. The robot may cycle
between squares or stumble into the terminal square.
Q.9.1.1.
Find the getCurrentReward() function in the trashgrid.m file. Try
to figure out the reward structure of this task by understanding this
function. If the robot took the fastest route to the terminal square,
what reward would it receive? Explain the rewards the robot can
receive and how.
Set the Update After text box to 10 epochs and train the network for 350
epochs. You will see a summary for each epoch appear in the patterns list. You
will notice that initially the episodes are long and usually end with 0 reward.
By the end of training, you should see improvement.
Q.9.1.2.
After training the network, switch to the test panel and step through
a few episodes. Since the test policy is greedy, every episode will
be the same. What path does the robot take? Was your prediction
about the reward correct? You can try clicking New Start and train-
ing the network again from scratch to see if the behavior changes.
Do this a few times and report on what you see. Does the robot use
the same path each time it is trained? If not, what do the different
paths have in common?
After observing the network behavior with the greedy policy, switch the test
policy to softmax and make sure the temperature is set to 1.
Q.9.1.3.
Step through a few episodes. What changes do you notice? Since we
are now using softmax to choose actions, the behavior of the robot
across episodes will vary. It may help to run more than one episode
Q.9.1.4.
Step through a few episodes and observe the action values that are
displayed in the pattern list. It will help to remember that the
actions are encoded at the end of the input vector, with the last
four positions representing up, down, left, and right respectively.
What is implicitly encoded in the relative value estimates at each
step? How does this lead to the bistable behavior that the network
is exhibiting?
Now switch to the training panel, reset the network, and inspect the reward
function in the trashgrid.m environment class file.
Q.9.1.5.
First, be sure to remember the current getCurrentReward function,
as you will have to restore it later - copying and pasting it elsewhere
is a good idea. Change one of the assignments to reward in order to
change which of the two paths the robot prefers to follow. Then train
the network to 350 epochs from scratch and observe its behavior
under the greedy policy. Did the change you made result in your
expected change in the robot’s behavior? Why?
Now revert the changes you just made to the reward function and Reset the
network again.
Q.9.1.6.
Keeping in mind the function of the gamma parameter in calculat-
ing returns, change its value in order to induce the same pattern of
behavior that you obtained by manipulating the reward function.
Train the network after any change you make, always remembering
to Reset between training runs. It may take a few tries to get it
right. Test the effect of each change you make by stepping through
an episode with the greedy policy. If you find that the network cy-
cles between squares indefinitely during testing, then it likely needs
more epochs of training. What change did you make and why did
this change in behavior occur? Use your knowledge of the return
calculation, the role of gamma, the reward function, and the layout
of the grid to explain this change in behavior.
We have intentionally kept this grid world very small so that training will be
fast. You are encouraged to modify the trashgrid.m environment and make a
new network to explore learning in a larger space. You may also wish to explore
behavior with more than one piece of trash, or perhaps trash locations that
change for each episode.
Appendix A
PDPTool Installation and Quick Start Guide
PDPTool is a neural network simulator for Matlab that implements the models
described in Parallel distributed processing: Explorations in the microstructure
of cognition (Rumelhart et al., 1986; McClelland et al., 1986). This program is
a teaching aid for courses in Parallel Distributed Processing.
This document describes how to install and run PDPTool on your computer.
For instructions on using the software, see the PDPTool User’s Guide, Appendix
C.
If you encounter difficulties with installation, send email to: [email protected].
NOTE: The version of the software available here does not work on MATLAB
versions R2014a or later (see immediately below for details).
A.2 Installation
Source files for PDPTool are located at http://web.stanford.edu/group/pdplab/pdptool/pdptool.zip.
Install PDPTool using the following steps.
1. Download pdptool.zip from the address above.
2. Unzip the archive to a convenient location on your computer.
3. Start Matlab.
4. In Matlab, set your path variable to point to PDPTool using the following
steps.
(a) From the File menu, select Set path. A dialog box opens.
(b) Click the Add with subfolders button. A directory browser window
opens.
(c) Locate the folder called pdptool. Select it and click OK.
(d) Click the Save button on the set path dialog box to save the path for
future sessions.
(e) Click the Close button.
5. Configure Matlab to save your command history after every command, using
the following steps.
(a) From the File menu, select Preferences. A dialog box opens.
(b) Select Command History from the list of options on the left. This
displays the current command history settings.
(c) In the Saving section of the history settings, select Save after [n]
commands, where [n] is a numerical field.
(d) Change [n] to 1.
(e) Click OK.
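As an alternative to the Set Path dialog in step 4, the path can also be set from the Matlab command line; the following assumes pdptool was unzipped into the current directory:
addpath(genpath('pdptool'));   % add pdptool and all its subfolders
savepath;                      % keep the path for future sessions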
In this appendix, we describe the steps you need to take to build your own
network within one of the PDPtool simulation models. In the course of this, we
will introduce you to the various files that are required, what their structure is
like, and how these can be created through the PDPtool GUI. Since users often
wish to create their own back propagation networks, we’ve chosen an example
of such a network. By following the instructions here you’ll learn exactly how
to create an 8-3-8 auto-encoder network, where there are eight unary input
patterns consisting of a single unit on and all the other units off. For instance,
the network will learn to map the input pattern
1 0 0 0 0 0 0 0
to the identical pattern
1 0 0 0 0 0 0 0
as output, through a distributed hidden representation. Over the course of this
tutorial, you will create a network that learns this mapping, with the finished
network illustrated in Figure B.6.
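Concretely, the eight training patterns are just the rows of an 8 × 8 identity matrix, with each pattern serving as both input and target; in MATLAB you could generate them with:
patterns = eye(8);        % patterns(k,:) has a 1 in position k, 0 elsewhere
% e.g. patterns(1,:) is  1 0 0 0 0 0 0 0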
Creating a network involves four main steps, each of which is explained in a
section:
1. Creating the network itself (Appendix B.1)
2. Creating the display template (Appendix B.2)
3. Creating the example file (Appendix B.3)
4. Creating a script to initialize the network (Appendix B.4)
While this tutorial does a complete walkthrough of setting up an auto-
encoder in PDPtool, in the interest of brevity, many of the commands and
options of PDPtool are left unmentioned. You are encouraged to use the PDP-
tool User’s Guide (Appendix C) as a more complete reference manual.
To begin creating the network, make a directory for your network files and start the pdp program by typing the following in the Matlab command window:
mkdir encoder838
cd encoder838
pdp
In the main pdp window, select “Create...” from the Network pull-down menu.
In the “Network Name” box, enter “encoder838” (or whatever you want to name
your network). The “Network Type” is Feed-forward Back propagation.
It might be useful to see the program create the “.net” file as we go along.
Click the “View Script” button in the top-left corner of the window. Your
“Network setup” box should look something like the one in Figure B.1. Note
that one pool of units – pool(1), the bias pool, is already created for you. This
pool contains a single unit that always has an activation of 1; connections from
this pool to other pools implement bias weights in the network.
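To make the role of the bias pool concrete, the following sketch (with made-up weight values, not code from pdptool) shows that connections from an always-on unit simply add a constant to each receiving unit's net input:

x      = [1 0 0 0 0 0 0 0]';     % an example unary input pattern
W      = 0.5 * randn(3, 8);      % hypothetical input-to-hidden weights
b      = 0.5 * randn(3, 1);      % weights from the bias pool's single unit
netin  = W * x + b * 1;          % the bias unit's activation is always 1
hidden = 1 ./ (1 + exp(-netin)); % logistic activation of the hidden units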
Note that right-clicking on the Pool Name edit box will replace the edit box with a pop-up
menu of all currently defined pools. Selecting a pool name will show the number of units
and the pool type that was set for it, which can then be edited if desired. Right-clicking
on the pop-up menu will change it back into an edit box.
Figure B.1: The Network Setup window. This is the first step in setting up our
feed-forward back propagation network.
If no learning rate is specified for a particular projection, the network uses the network-wide learning rate parameter.
Once the projections are defined, you are done defining your network, and you
are ready to continue with the other steps. Click the save button at the top of
the window (the floppy disk icon), and save the file as ‘encoder838.net’. Our
encoder838.net file is shown in Figure B.2 so you can check whether yours matches.
Figure B.2: The ‘encoder838.net’ file created through the Network Setup win-
dow. Double-check to see if yours is the same.
Details of this parameter are in the PDPtool User’s Guide (Appendix C).
For the auto-encoder network, follow the orientations we have selected. If
you make a mistake when adding an item’s Value or Label, you can highlight it
in the right panel and press “Remove”.
Now it’s time to add the rest of the items in the network. For each item,
follow all the steps above: add a Label and then the Value with the specified
orientation. We list each item below, where the first one is the input activation
that we just took care of. Your screen should look like Figure B.3 when you are
done adding the items (note that this screen does not show whether your
orientations and transposes match ours, but these settings will matter in a moment).
After adding these items, click “Done” if your screen looks like Figure B.3.
The “Set Display Positions” screen should then pop up, where you place the
items on the template. An intuitive way to visualize this encoder network
is shown in Figure B.4. To place an item on the template, select it in the left
panel, then right-click on the grid to place the item approximately where you
want it; you can then drag it to the desired position. If you want to return an
item to the left panel, click “Reset” with the item highlighted.
Figure B.3: The Select Display Items window. When creating your template,
this is the screen where you add network items to the display. For the auto-
encoder we are creating here, the list of added items should look like this (the
cpname label and scalar are there but not visible).
Figure B.4: The Set Display Positions window. Here, you place the items
you selected in Figure B.3 on the display, which is the panel you see when your
network is running. A recommended layout for the encoder network is displayed
here.
The unary.pat example file contains one line per pattern: a pattern name followed by the eight input values and then the eight (identical) target values. The complete file is listed below (a short sketch for generating this file from the Matlab prompt follows the listing):

p1 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
p2 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0
p3 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0
p4 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0
p5 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0
p6 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0
p7 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0
p8 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1
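If you would rather not type the patterns by hand, the following sketch writes an equivalent unary.pat file from the Matlab prompt (the file name and format follow the listing above; adjust them if you change either):

fid = fopen('unary.pat', 'w');
P = eye(8);                        % unary patterns: one unit on, the rest off
for i = 1:8
    fprintf(fid, 'p%d', i);        % pattern name
    fprintf(fid, ' %d', P(i, :));  % eight input values
    fprintf(fid, ' %d', P(i, :));  % eight identical target values
    fprintf(fid, '\n');
end
fclose(fid);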
With the network, template, and pattern files in place, you can create a script that initializes the finished network. In the Matlab command window, type

pdp

which starts the pdp program. First, click “Load script”, and select the en-
coder838.net file that contains your network setup (Section B.1). Second, click
“Load pattern” to load the unary.pat file you created (Section B.3) as both the
testing and training patterns. Third, click the Display menu and select “Load
Template,” selecting the encoder838.tem you created in Section B.2.
Also, you will want to select Training options that are reasonable for this
network, which will become the parameters used when the network starts up
with this script. Select the “Network” menu then “Training options.” The
defaults are fine for this network, but they may have to be changed when setting
up a custom network. For advice on setting some crucial parameters, such as
the lrate and wrange, see the hints for setting up your own network in the back
propagation chapter (Ex 5.4 Hints). When done adjusting training options,
click “Apply” and then “OK.”
We are ready to launch the network, which is done by selecting “Launch
network window” from the “Network” menu. The Network Viewer should pop
up, resembling Figure B.6 (except that the network in this figure has already
been trained).
Finally, click “Reset,” which is necessary for certain parameters such as
“wrange” to take effect. Now, all the actions you have just completed should
appear, echoed as commands, in the Matlab command window, and the Network Viewer should pop up.
That’s it; the network is finished. Train the network and see how it uses the
hidden layer to represent the eight possible input patterns.
Figure B.6: This is the completed network, up and running. It has been trained
for 600 epochs, and is being tested on pattern 5, in which only the 5th input
and output units should be active. As you can see, the network’s output
response is very good.
Appendix C
PDPTool User’s Guide
Appendix D
PDPTool Standalone Executable
(a) If you are using the bash shell (if you are not sure about your shell,
type echo $SHELL at the command prompt), place the following
commands in your .bash_profile file after replacing MCRRootDir with
the name of the directory in which you installed the MATLAB Compiler
Runtime (MCR; see step 7(c)).
The .bash_profile file will be directly under your home directory.
Replace <version> with the MCR version installed in step 7(c); the
files in MCRRootDir are installed under a version-specific directory
name (e.g., v714).
MCRROOT=MCRRootDir/<version>
MCRJRE=${MCRROOT}/sys/java/jre/glnxa64/jre/lib/amd64
LD_LIBRARY_PATH=${MCRROOT}/runtime/glnxa64:\
${MCRROOT}/bin/glnxa64:\
${MCRROOT}/sys/os/glnxa64:\
${MCRJRE}/native_threads:\
${MCRJRE}/server:\
${MCRJRE}/client:\
${MCRJRE}
export XAPPLRESDIR=${MCRROOT}/X11/app-defaults
export LD_LIBRARY_PATH
(b) If you are using the C shell, place the following commands in your .cshrc
file after replacing MCRRootDir with the name of the directory in
which you installed the MATLAB Compiler Runtime (see step 7(c)).
set MCRROOT=MCRRootDir/<version>
set MCRJRE=${MCRROOT}/sys/java/jre/glnxa64/jre/lib/amd64
set LD_LIBRARY_PATH=${MCRROOT}/runtime/glnxa64:\
${MCRROOT}/bin/glnxa64:\
${MCRROOT}/sys/os/glnxa64:\
${MCRJRE}/native_threads:\
${MCRJRE}/server:\
${MCRJRE}/client:\
${MCRJRE}
setenv XAPPLRESDIR ${MCRROOT}/X11/app-defaults
setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}
(a) If you are using the bash shell (if you are not sure about your shell, type
echo $SHELL at the command prompt), place the following com-
mand in your .bash_profile file. Replace pdptool_path with the full
directory name created in Step 1.
export PATH=${PATH}:pdptool_path
The above change to the .bash_profile file will be applied when you source the file using the command:
source .bash_profile
(b) If you are using the C shell, place the following command in your .cshrc
file:
setenv PATH ${PATH}:pdptool_path
The above change to the .cshrc file will be applied when you source the file using the command:
source .cshrc
10. You are now ready to start using pdptool. For the most part, things will
work as when running pdptool within MATLAB, so you can carry out
exercises and create and run your own networks, as described in the main
body of the Handbook. Information specific to running the standalone
version under Linux is given in the next section.
2. Extract the archived files into a new directory called ’pdptool’ in your
home directory, or any other location on your Linux machine, using any
archiving tool such as unzip.
(a) You can enter built-in MATLAB commands at the pdp command
prompt. For example, entering ’2/3’ will return ans = 0.6667. Enter-
ing ’net’ will return the top-level constituents of the loaded network,
if any.
(b) The up and down arrow keys allow you to move up and down through
previous commands issued to the pdp prompt. Right and left arrows
allow movement within a line so that you can edit a previous com-
mand.
(c) You can use Ctrl+C and Ctrl+P keys to copy and paste text between
the pdp prompt and outside the shell.
(d) You can copy text from the pdp command Prompt window to the
Clipboard by clicking and dragging the mouse to select the text to
be copied; then press enter or right-click to execute the copy. You
can then paste this information into an editor window to save to a
file or edit. Such a file could become the basis of a set of commands
that you could save in a script file (e.g., myscript.m) for execution
when needed, using the command runscript myscript.m.
3. Unzip the package with any Windows archiving utility, extracting all files
into your new folder. This will extract the following files:
(a) readme.txt - This describes essentially the same set of steps as this
section. Additionally, it might contain instructions on updating the
MCR on your computer.
(b) pdptool.exe - This is the pdptool executable.
5. Verify that MCR is installed on your system and is the version specified in
the readme file. If it is, continue to the next step. The usual location for MCR
is C:\Program Files\MATLAB\MATLAB Compiler Runtime\<version>,
where <version> indicates the version number, e.g., v714. If the MCR
is not installed, or if it is an older version, download and run the
MCRInstaller.exe file.
6. Set up the environment: Note the full path to the PDPStandAlone folder
where you extracted the executable (for example, the path might be
’C:\Users\YourUserName\Desktop\PDPStandAlone’). Add this to the
’Path’ environment variable. To do this, right-click on ’My Computer’,
select Properties, then select the ’Advanced’ tab, which has an ’Environment
Variables’ button. Click it to open a dialog box for setting environment
variables. There, select ’Path’ under ’System Variables’ and click Edit;
a small edit box opens showing the current path. Enter a ’;’ (semicolon),
then add the path to the folder containing pdptool.exe. Click ’OK’ and
dismiss the Properties window.
7. You are now ready to run pdptool. For the most part, things will work
as when running pdptool within MATLAB, so you can carry out exercises
and create and run your own networks, as described in the main body
of the Handbook. Information specific to running the standalone version
under Windows is given in the next section.
2. Extract the archived files into a new folder called ’pdptool’ on your Desk-
top or any other location on your computer. (Some methods of extraction
create a directory called pdptool in the location you specify, while others
extract all files directly to that location; this depends on the extractor you
use and the method of extraction.)
3. Unzip the package with any archiving utility, extracting all files into your
new folder.
6. First read readme.txt. If this is your first-time installation, you MUST
download and run MCRInstaller.dmg first. Verify that MCR is installed
on your system and is the version specified in the readme file. If it is, con-
tinue to the next step. The usual location for MCR is /Applications/MATLAB/MATLAB Compiler Runtime/<version>,
where <version> indicates the version number, e.g., v714. If the MCR
is not installed, or if it is an older version, download and run the
MCRInstaller.dmg file.
(f) If you are using the C shell (if you are not sure about your shell, type
echo $SHELL at the command prompt), place the following in your
.cshrc/.tcshrc file. Replace pdptool_path with the full directory name
created in Step 1 of this section.
setenv PATH ${PATH}:pdptool_path
(g) The above change to the .cshrc file will be applied when you source the
file using the command:
source .cshrc
8. You are now ready to start using pdptool. For the most part, things will
work as when running pdptool within MATLAB, so you can carry out
exercises and create and run your own networks, as described in the main
body of the Handbook. Information specific to running the standalone
version under OSX is given in the next section.
D.6 Running Under Mac OSX
(a) You can enter built-in MATLAB commands at the pdp command
prompt. For example, entering ’2/3’ will return ans = 0.6667. Enter-
ing ’net’ will return the top-level constituents of the loaded network,
if any.
(b) The up and down arrow keys allow you to move up and down through
previous commands issued to the pdp prompt. Right and left arrows
allow movement within a line so that you can edit a previous com-
mand.
(c) You can use Command+C and Command+P keys to copy and paste
text between the pdp prompt and outside the shell.
(d) You can copy text from the pdp command Prompt window to the
Clipboard by clicking and dragging the mouse to select the text to
be copied; then press enter or right-click to execute the copy. You
can then paste this information into an editor window to save to a
file or edit. Such a file could become the basis of a set of commands
that you could save in a script file (e.g., myscript.m) for execution
when needed, using the command runscript myscript.m.
Bibliography
Rogers, T. T., Lambon Ralph, M. A., Garrard, P., Bozeat, S., McClelland, J. L.,
Hodges, J. R., and Patterson, K. (2004). The structure and deterioration of
semantic memory: A neuropsychological and computational investigation.
Psychological Review, 111, 205-235.
Rogers, T. T. and McClelland, J. L. (2004). Semantic Cognition: A Parallel
Distributed Processing Approach. MIT Press, Cambridge, MA.
Rohde, D. (1999). Lens: The light, efficient network simulator. Technical Re-
port CMU-CS-99-164, Carnegie Mellon University, Department of Computer
Science, Pittsburgh, PA.