Parameter Space Structure of Continuous-Time Recurrent Neural Networks
Randall D. Beer
[email protected]
Department of Electrical Engineering and Computer Science and Department of
Biology, Case Western Reserve University, Cleveland, OH 44106, U.S.A.
1 Introduction
$$\tau \dot{y} = -y + W\,\sigma(y + \theta) + I \qquad (2.1)$$
CTRNNs are also universal dynamics approximators: any other dynamics can be reproduced to any desired degree of accuracy by a sufficiently large CTRNN (Funahashi & Nakamura, 1993).
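To make equation 2.1 concrete, here is a minimal forward-Euler simulation sketch in Python. The circuit parameters are hypothetical (oppositely signed cross-weights and center-crossing biases, so a limit cycle is likely); this is an illustration, not the supplement's Mathematica code.

```python
import numpy as np

def sigma(x):
    """Standard logistic activation sigma(x) = 1/(1 + e^{-x})."""
    return 1.0 / (1.0 + np.exp(-x))

def simulate_ctrnn(W, theta, I, tau, y0, dt=0.05, steps=4000):
    """Forward-Euler integration of equation 2.1: tau * dy/dt = -y + W sigma(y + theta) + I.
    Rows of W hold each neuron's incoming weights. Returns the output trajectory."""
    y = y0.copy()
    outputs = np.empty((steps, len(y0)))
    for t in range(steps):
        y += dt * (-y + W @ sigma(y + theta) + I) / tau
        outputs[t] = sigma(y + theta)
    return outputs

# Hypothetical two-neuron circuit with oppositely signed cross-weights.
W = np.array([[5.0, -10.0], [10.0, 5.0]])
theta = np.array([2.5, -7.5])                  # center-crossing biases for this W
o = simulate_ctrnn(W, theta, np.zeros(2), np.ones(2), np.array([0.1, 0.2]))
print(o[-500:].min(axis=0), o[-500:].max(axis=0))  # wide output ranges indicate a limit cycle
```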
CTRNNs are a special case of the general class of additive neural net-
work models τ ẏ = −y + W ξ (y + θ ) + I (Grossberg, 1988). Additive neu-
ral networks have been extensively studied in both their continuous-time
and discrete-time versions (Cowan & Ermentrout, 1978; Cohen & Gross-
berg, 1983; Hopfield, 1984; Hirsch, 1989; Borisyuk & Kirillov, 1992; Blum
& Wang, 1992; Zhaojue, Schieve, & Das, 1993; Beer, 1995; Hoppensteadt &
Izhikevich, 1997; Tiňo, Horne & Giles, 2001; Pasemann, 2002; Haschke &
Steil, 2005). Although our analysis will focus on equation 2.1, the results
we obtain would be qualitatively identical for any additive model with a
(smooth, monotone, bounded) sigmoidal activation function ξ (x).
A particularly convenient class of activation functions can be parame-
terized as
$$\sigma_{\alpha,\beta,\mu}(x) = \frac{\alpha}{1 + e^{-\mu x}} + \beta,$$

since a network with activation $\sigma_{\alpha,\beta,\mu}$ is equivalent to one with the standard logistic activation $\sigma = \sigma_{1,0,1}$ under the substitutions

$$y = \mu^{-1}\hat{y}, \qquad \tau = \hat{\tau}, \qquad W = (\alpha\mu)^{-1}\hat{W}, \qquad \theta = \mu^{-1}\hat{\theta}, \qquad I = \mu^{-1}\hat{I} - (\alpha\mu)^{-1}\hat{W} \cdot \beta,$$

where hatted quantities denote the state and parameters of the standard network and $\beta$ denotes the vector of offsets.
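The following sketch numerically checks this transformation: a network with activation $\sigma_{\alpha,\beta,\mu}$ and the transformed parameters tracks the standard-logistic network exactly, because forward-Euler steps commute with the linear state scaling $y = \mu^{-1}\hat{y}$. All numeric values are arbitrary test data, not from the text.

```python
import numpy as np

def sigma(x): return 1.0 / (1.0 + np.exp(-x))

def sigma_abm(x, a, b, m): return a / (1.0 + np.exp(-m * x)) + b

# Standard-logistic network (hatted parameters); values are arbitrary test data.
rng = np.random.default_rng(0)
N = 3
W_h = rng.uniform(-5, 5, (N, N)); th_h = rng.uniform(-2, 2, N); I_h = rng.uniform(-1, 1, N)
tau = np.ones(N)
a, b, m = 2.0, -1.0, 0.5          # alpha, beta, mu of the general sigmoid

# Transformed parameters for the general network (substitutions above).
W = W_h / (a * m); th = th_h / m
I = I_h / m - (W_h @ (b * np.ones(N))) / (a * m)

dt = 0.01
y_h = rng.uniform(-1, 1, N)       # standard state
y = y_h / m                       # general state, y = mu^{-1} y_hat
for _ in range(2000):
    y_h = y_h + dt * (-y_h + W_h @ sigma(y_h + th_h) + I_h) / tau
    y   = y   + dt * (-y   + W   @ sigma_abm(y + th, a, b, m) + I) / tau
# Small (near machine precision, unless the dynamics amplifies rounding noise):
print(np.max(np.abs(y - y_h / m)))
```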
[Figure 1: Steady-state input–output (SSIO) curves of a single neuron: output o versus net input I + θ. (A) An unfolded SSIO curve (self-weight w < 4), with its piecewise linear approximation (dashed lines) and boundary markers (black points). (B) A folded SSIO curve (w > 4), with rectangles marking the saturated-off and saturated-on input ranges.]
Figure 1B), where the left and right edges of the fold are given by (Beer,
1995)
$$I_L(w),\; I_R(w) = \pm 2 \ln\!\left(\frac{\sqrt{w} + \sqrt{w - 4}}{2}\right) - \frac{w \pm \sqrt{w(w-4)}}{2},$$
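These edges can be checked numerically: at each fold, the output s satisfies the tangency condition $w\,\sigma'(x) = 1$, and the corresponding net input $\sigma^{-1}(s) - ws$ reproduces $I_L$ or $I_R$. A sketch (helper names are mine, not from the text):

```python
import numpy as np

def I_L(w):
    """Left fold edge of the SSIO curve for self-weight w > 4 (Beer, 1995)."""
    return 2*np.log((np.sqrt(w) + np.sqrt(w - 4))/2) - (w + np.sqrt(w*(w - 4)))/2

def I_R(w):
    """Right fold edge."""
    return -2*np.log((np.sqrt(w) + np.sqrt(w - 4))/2) - (w - np.sqrt(w*(w - 4)))/2

w = 6.0
for s, edge in (((w + np.sqrt(w*(w-4)))/(2*w), I_L(w)),
                ((w - np.sqrt(w*(w-4)))/(2*w), I_R(w))):
    x = np.log(s/(1 - s))                  # sigma^{-1}(s)
    assert abs(w*s*(1 - s) - 1) < 1e-12    # fold condition: w * sigma'(x) = 1
    print(edge, x - w*s)                   # the two values agree
```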
from the other neurons. Center-crossing circuits are important for a va-
riety of reasons. First, the richest possible dynamics can be found in the
neighborhood of such circuits. By “richest possible dynamics,” I mean dy-
namics that makes maximal use of the available degrees of freedom in the
circuit. Second, the bifurcations of the central equilibrium point of a center-
crossing circuit can often be fully characterized analytically. Finally, for any
given weight matrix, the corresponding center-crossing circuit serves as a
symmetry point in the net input parameter space for that circuit.
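For a concrete handle on center-crossing circuits: following Mathayomchan and Beer (2002), the center-crossing biases set each neuron's bias to minus half its total incoming weight, so that each neuron's input range is centered on the most sensitive region of its sigmoid. A small sketch, assuming the convention that row i of W holds neuron i's incoming weights:

```python
import numpy as np

def center_crossing_biases(W):
    """theta*_i = -(sum of incoming weights to neuron i)/2
    (Mathayomchan & Beer, 2002). Rows of W hold incoming weights."""
    return -W.sum(axis=1) / 2.0

W = np.array([[5.0, -10.0], [10.0, 5.0]])   # hypothetical weights
print(center_crossing_biases(W))            # [ 2.5 -7.5]
```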
bialternate product (Guckenheimer, Myers, & Sturmfels, 1997). Given two N × N matrices $A = (a_{ij})$ and $B = (b_{ij})$, $A \odot B$ is the $\frac{1}{2}N(N-1) \times \frac{1}{2}N(N-1)$ matrix whose rows are labeled by the multi-index (p, q) (where $p = 2, 3, \ldots, N$ and $q = 1, 2, \ldots, p-1$), whose columns are labeled by the multi-index (r, s) (where $r = 2, 3, \ldots, N$ and $s = 1, 2, \ldots, r-1$), and whose elements are given by (Kuznetsov, 1998)

$$(A \odot B)_{(p,q)(r,s)} = \frac{1}{2}\left( \begin{vmatrix} a_{pr} & a_{ps} \\ b_{qr} & b_{qs} \end{vmatrix} + \begin{vmatrix} b_{pr} & b_{ps} \\ a_{qr} & a_{qs} \end{vmatrix} \right).$$
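A direct implementation is straightforward; the sketch below simply expands the two 2×2 determinants. As a check, it uses the standard fact (Guckenheimer et al., 1997; Kuznetsov, 1998) that $\det(2J \odot I) = 0$ exactly when J has a pair of eigenvalues summing to zero, as at a Hopf bifurcation. Function names are mine:

```python
import numpy as np

def bialternate(A, B):
    """Bialternate product of two N x N matrices, elements per Kuznetsov (1998).
    Rows/columns are labeled by multi-indices (p, q) with p > q (0-based here)."""
    N = A.shape[0]
    idx = [(p, q) for p in range(1, N) for q in range(p)]  # N(N-1)/2 index pairs
    m = len(idx)
    C = np.zeros((m, m))
    for row, (p, q) in enumerate(idx):
        for col, (r, s) in enumerate(idx):
            C[row, col] = 0.5 * (A[p, r]*B[q, s] - A[p, s]*B[q, r]
                                 + B[p, r]*A[q, s] - B[p, s]*A[q, r])
    return C

# Hopf check: J with eigenvalues +-2i gives det(2J (.) I) = 0.
J = np.array([[0.0, -2.0], [2.0, 0.0]])
print(bialternate(2*J, np.eye(2)))   # [[0.]]
```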
$$J = \mathrm{diag}(\tau)^{-1} \cdot \left( \mathrm{diag}(\psi) \cdot W - 1 \right),$$

where $\psi_i \equiv \sigma'(\bar{y}_i + \theta_i)$ denotes the slope of the activation function at the equilibrium. For a two-neuron circuit, the saddle node condition is

$$\det(J) = \det \begin{pmatrix} \dfrac{w_{11}\psi_1 - 1}{\tau_1} & \dfrac{w_{21}\psi_1}{\tau_1} \\[6pt] \dfrac{w_{12}\psi_2}{\tau_2} & \dfrac{w_{22}\psi_2 - 1}{\tau_2} \end{pmatrix} = \frac{1 - w_{11}\psi_1 - w_{22}\psi_2 + \psi_1\psi_2 \det W}{\tau_1\tau_2} = 0,$$

whose solutions can be mapped from ψ-space back to net input space through the inverse of σ′,
where

$$\sigma'^{-1}(\psi_i) = \ln\!\left( \frac{1 \pm \sqrt{1 - 4\psi_i} - 2\psi_i}{2\psi_i} \right).$$

Note that $\sigma'^{-1}(\cdot)$ is two-valued for a sigmoidal function. Since each component of θ can come from either branch, each bifurcation manifold in ψ-space can therefore generate up to $2^N$ bifurcation manifolds in net input space. Figures 2C and 2D show the θ-space bifurcation manifolds corresponding to the ψ-space manifolds shown in Figures 2A and 2B, respectively.
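Putting the last two results together, one can trace the two-neuron saddle node manifold in ψ-space and push each point through both branches of $\sigma'^{-1}$ to θ-space. The sketch below assumes zero external inputs (I = 0), the equilibrium condition $\bar{y}_i = \sum_j w_{ij}\,\sigma(\bar{y}_j + \theta_j)$, and that rows of W hold incoming weights; it is an illustration, not the supplement's Mathematica code.

```python
import numpy as np

def sigma(x): return 1/(1 + np.exp(-x))

def dsigma_inv(psi, branch):
    """(sigma')^{-1}(psi), two-valued; branch = +1 or -1 selects the sign."""
    return np.log((1 + branch*np.sqrt(1 - 4*psi) - 2*psi) / (2*psi))

def theta_saddle_node(W, n=400):
    """Map the 2-neuron saddle node manifold det(J) = 0 from psi-space to
    theta-space (assuming I = 0). Returns one curve per branch choice,
    up to 2^2 = 4 in all."""
    w11, w22, detW = W[0, 0], W[1, 1], np.linalg.det(W)
    psi1 = np.linspace(1e-4, 0.25, n)
    psi2 = np.full_like(psi1, np.nan)
    den = w22 - detW * psi1
    good = np.abs(den) > 1e-9
    # Solve 1 - w11 psi1 - w22 psi2 + psi1 psi2 detW = 0 for psi2:
    psi2[good] = (1 - w11 * psi1[good]) / den[good]
    ok = (psi2 > 0) & (psi2 <= 0.25)
    p1, p2 = psi1[ok], psi2[ok]
    curves = []
    for b1 in (+1, -1):
        for b2 in (+1, -1):
            x1, x2 = dsigma_inv(p1, b1), dsigma_inv(p2, b2)
            th1 = x1 - (W[0, 0]*sigma(x1) + W[0, 1]*sigma(x2))
            th2 = x2 - (W[1, 0]*sigma(x1) + W[1, 1]*sigma(x2))
            curves.append((th1, th2))
    return curves

curves = theta_saddle_node(np.array([[6.0, 1.0], [-1.0, 6.0]]))  # hypothetical W
print([len(c[0]) for c in curves])   # up to 4 theta-space branches
```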
Using the same approach, we can also calculate and display the local
bifurcation manifolds of three-neuron circuits. Figures 3A and 3B provide
an external view of the local bifurcation manifolds for two different three-
neuron circuits, and the slices in Figures 3C and 3D reveal some of the rich
internal structure. In principle, this method can be applied to circuits of
any size. However, the exponential scaling of the number of bifurcation
manifolds in net input space and the difficulty of visualizing manifolds
in dimensions greater than three make it practical for small circuits only.
Mathematica code for the visualization of the local bifurcation manifolds
of CTRNNs in two and three dimensions can be found in the electronic
supplement (Beer, 2005).
[Figure 2: Local bifurcation manifolds of two-neuron circuits. (A, B) Manifolds in (ψ1, ψ2)-space. (C, D) The corresponding manifolds in net input (θ1, θ2)-space.]
A first key feature of CTRNN parameter space apparent in Figures 2 and 3 is the existence of central regions (whose location depends on the connection weights) with the richest dynamics and the highest density of
bifurcations. As it turns out, the center of each of these regions corresponds
to the center-crossing network for that weight matrix. In these central re-
gions, all N neurons are dynamically active: the range of net input each
neuron receives from the other neurons overlaps the most sensitive region
of its activation function σ (·). Dynamically active neurons can respond to
changes in the outputs of other neurons and are therefore capable of par-
ticipating in nontrivial dynamics.
A second key feature of CTRNN parameter space apparent in Figures 2
and 3 is that the local bifurcation manifolds flatten out as we move away
from the central region, forming quasirectangular regions with an apparent
combinatorial structure. This structure is produced by different subsets of
neurons becoming saturated: the range of net input received is such that
σ (·) ≈ 0 or σ (·) ≈ 1. Saturated neurons effectively drop out of the dynamics
and become constant inputs to other neurons because their outputs are in-
sensitive to changes in the outputs of those other neurons (see Figure 4). For
example, in Figure 3A, we see a central “cube” surrounded by six “poles,”
which are in turn interconnected by twelve “slabs.” In the central cube, all
three neurons are dynamically active. In each pole, only two neurons are
dynamically active; one of the neurons is saturated either on or off. Thus, in
these regions, the dynamics of the three-neuron circuit effectively becomes
two-dimensional. Indeed, the structure of the local bifurcation manifolds
visible in the pole cross-sections in Figure 3A is similar to that of the two-
neuron circuit in Figure 2C (Haschke, 2004). In the slabs, only one neuron
is dynamically active, while in the eight remaining corner regions of this
plot, all three neurons are saturated.
We can use this observation to partition the net input space of a CTRNN
into regions of dynamics with different effective dimension depending on
the number of neurons that are dynamically active. A CTRNN with an effec-
tive dimension of M has limit sets whose extent, distribution, and responses
to perturbations span an M-dimensional subspace of the N-dimensional
output space of an N-neuron circuit, with 0 ≤ M ≤ N (see Figure 4). In this
section, we will completely characterize the combinatorics and geometry of
an approximation to these regions for arbitrary N. Due to some differences
in the details, we first consider the case when all wii > 4 and then the case
when some wii < 4.
4.1 All wii > 4. When all wii > 4, the SSIO curves of all neurons are
folded (see Figure 1B), and the edges of these folds play an important
role in structuring CTRNN parameter space (see Figure 4). Specifically,
when the right edge of a neuron’s synaptic input range falls below its
left fold, that neuron will be saturated off regardless of the states of the
other neurons in the circuit (see the left rectangle in Figure 1B). And when
the left edge of a neuron’s synaptic input range falls above its right fold,
that neuron will be saturated on (see the right rectangle in Figure 1B).
[Figure: panels A–C, each plotting neuron output o2 against o1 on [0, 1]².]
For example, Figure 6 shows the structure of $R^3_M$ for $M = 0, \ldots, 3$ for the same circuit shown in Figures 3A and 5C.
$$R^N_M(W) = \bigcup_{S \,\in\, K^N_{N-M}} \; \bigcup_{J \,\in\, Z(S)} {}^N Q|_J(W), \qquad (4.1)$$
where P(J) is the set of permutations of the raised and lowered indices J. For example, ${}^3Q|^{12} = {}^3B|^{12} \cup {}^3B|^{21}$ (see Figure 7). Here ${}^3B|^{12}$ can be interpreted to mean that neuron 1 is on and neuron 2 is on given that neuron 1 is on, and ${}^3B|^{21}$ means that neuron 2 is on and neuron 1 is on given that neuron 2 is on. Since L contains N − M elements, each ${}^N Q|_J$ is composed of (N − M)! possibly overlapping ${}^N B|_L$s. Thus, $R^N_M$ is composed of a total of $\binom{N}{M}\, 2^{N-M} (N-M)! = \frac{2^{N-M} N!}{M!}$ rectangular hypersolids, which obviously grows very quickly with the codimension N − M.
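The count is easy to tabulate; a quick sketch:

```python
from math import comb, factorial

def n_hypersolids(N, M):
    """C(N,M) * 2^(N-M) * (N-M)! = 2^(N-M) * N!/M! hypersolids composing R_M^N."""
    return comb(N, M) * 2**(N - M) * factorial(N - M)

print([n_hypersolids(4, M) for M in range(5)])   # [384, 192, 48, 8, 1]
```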
In order to calculate the bounds of an ${}^N B|_L$ rectangular hypersolid, we will need the synaptic input spectrum $I^S_i(W)$ of neuron i: the set of possible synaptic inputs received by neuron i from the subcircuit consisting of the neurons in S. The min and max elements of this set define the range of synaptic inputs that neuron i can receive (see Figure 1). It can be defined as

$$I^S_i(W) = \Sigma\mathrm{Pow}\left(\{w_{ij} \mid j \in S,\ j \neq i\}\right),$$

where $\mathrm{Pow}(W)$ denotes the power set of the set of weights W and $\Sigma\mathrm{Pow}(W)$ denotes the set of sums of the elements of the sets in $\mathrm{Pow}(W)$. For example, the synaptic input spectrum of neuron 2 in the {2, 3, 4} subcircuit of a four-neuron network is

$$I^{\{2,3,4\}}_2 = \Sigma\mathrm{Pow}(\{w_{23}, w_{24}\}) = \Sigma\left\{\{\},\, \{w_{23}\},\, \{w_{24}\},\, \{w_{23}, w_{24}\}\right\} = \{0,\, w_{23},\, w_{24},\, w_{23} + w_{24}\}.$$
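The spectrum is a direct subset-sum computation; a sketch (weight values hypothetical):

```python
from itertools import combinations

def input_spectrum(weights):
    """Synaptic input spectrum: sums over all subsets (the power set)
    of the given incoming weights; the empty subset contributes 0."""
    return sorted({sum(c) for k in range(len(weights) + 1)
                          for c in combinations(weights, k)})

w23, w24 = 3.0, -5.0
print(input_spectrum([w23, w24]))   # [-5.0, -2.0, 0, 3.0]
```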
Let A denote the set of dynamically active neurons, and let $I_i$ be the net synaptic input that i receives from the neurons in S. Then neuron i will be saturated off when the left edge of its fold, $I_L(w_{ii}) - (\theta_i + I_i)$, falls above the right edge of the range of synaptic input it receives from the active neurons, $\max I^A_i$ (see the left rectangle in Figure 1B); it will be saturated on when the right edge of its fold, $I_R(w_{ii}) - (\theta_i + I_i)$, falls below the left edge of the range of synaptic input it receives from the active neurons, $\min I^A_i$ (see the right rectangle in Figure 1B); and it will be dynamically active otherwise.
$$l_{v_i} = \begin{cases} -\infty & \text{if } e_i = 0 \\[4pt] I_R(w_{v_i v_i}) - \min I^{\,\mathcal{N} \setminus \{v_1, \ldots, v_{i-1}\}}_{v_i} - \displaystyle\sum_{j=1}^{i-1} e_j w_{v_i v_j} & \text{if } e_i = 1 \end{cases} \qquad (4.4)$$

$$u_{v_i} = \begin{cases} I_L(w_{v_i v_i}) - \max I^{\,\mathcal{N} \setminus \{v_1, \ldots, v_{i-1}\}}_{v_i} - \displaystyle\sum_{j=1}^{i-1} e_j w_{v_i v_j} & \text{if } e_i = 0 \\[4pt] \infty & \text{if } e_i = 1 \end{cases}$$
where $\mathcal{N} = \{1, \ldots, N\}$ and the set difference $S_1 \setminus S_2$ is the set consisting of all elements of $S_1$ that are not in $S_2$.
For example, for ${}^4B|^{4}{}_{1}{}^{3}$ we have $L = \{4, 1, 3\}$, with $e = (1, 0, 1)$ and $v = (4, 1, 3)$. Consider neuron 1 in this circuit, which occurs at an index of $i = 2$ in L. The notation ${}^4B|^{4}{}_{1}{}^{3}$ tells us that neuron 1 is saturated off ($e_2 = 0$), which requires that the left edge of its fold falls to the right of the maximum synaptic input $\max I^{\{1,2,3,4\} \setminus \{4\}}_1 = \max I^{\{1,2,3\}}_1$ it receives (see Figure 1B). The left edge of its fold is given by $I_L(w_{11})$ offset by the net input it receives from the other saturated neurons relative to its own bias: $I_L(w_{11}) - \theta_1 - \sum_{j=1}^{1} e_j w_{v_2 v_j} = I_L(w_{11}) - \theta_1 - e_1 w_{14} = I_L(w_{11}) - \theta_1 - w_{14}$. Thus, the neuron 1 boundary of ${}^4B|^{4}{}_{1}{}^{3}$ is $I_L(w_{11}) - \theta_1 - w_{14} > \max I^{\{1,2,3\}}_1$, or $\theta_1 < I_L(w_{11}) - \max I^{\{1,2,3\}}_1 - w_{14}$.
On the other hand, each dynamically active neuron i in $A = \mathcal{N} \setminus L$ leads to an inequality of the form $l_i < \theta_i < u_i$, with

$$l_i = I_L(w_{ii}) - \max I^A_i - \sum_{j=1}^{N-M} e_j w_{i v_j}, \qquad u_i = I_R(w_{ii}) - \min I^A_i - \sum_{j=1}^{N-M} e_j w_{i v_j}. \qquad (4.5)$$
For example, the bounds for neuron 2, the only dynamically active neuron in ${}^4B|^{4}{}_{1}{}^{3}$, are given by the left (resp. right) edge of its fold offset by the net input it receives from the saturated neurons relative to its own bias: $I_L(w_{22}) - \theta_2 - \sum_{j=1}^{3} e_j w_{2 v_j} = I_L(w_{22}) - \theta_2 - (e_1 w_{24} + e_2 w_{21} + e_3 w_{23}) = I_L(w_{22}) - \theta_2 - w_{23} - w_{24}$ (resp. $I_R(w_{22}) - \theta_2 - w_{23} - w_{24}$). Thus, the neuron 2 bound is $I_L(w_{22}) - w_{23} - w_{24} < \theta_2 < I_R(w_{22}) - w_{23} - w_{24}$.
Completing this example, the rectangular hypersolid ${}^4B|^{4}{}_{1}{}^{3}$ would be defined by the four inequalities

$$\begin{aligned} \theta_1 &< I_L(w_{11}) - \max I^{\{1,2,3\}}_1 - w_{14} \\ I_L(w_{22}) - w_{23} - w_{24} &< \theta_2 < I_R(w_{22}) - w_{23} - w_{24} \\ I_R(w_{33}) - \min I^{\{2,3\}}_3 - w_{34} &< \theta_3 \\ I_R(w_{44}) - \min I^{\{1,2,3,4\}}_4 &< \theta_4. \end{aligned}$$
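Equations 4.4 and 4.5 mechanize readily. The sketch below (0-based indices, helper names mine) reproduces the four bounds of the ${}^4B|^{4}{}_{1}{}^{3}$ example for a random weight matrix; it assumes all self-weights exceed 4 so that the fold edges are defined.

```python
import numpy as np
from itertools import combinations

def I_L(w): return 2*np.log((np.sqrt(w)+np.sqrt(w-4))/2) - (w + np.sqrt(w*(w-4)))/2
def I_R(w): return -2*np.log((np.sqrt(w)+np.sqrt(w-4))/2) - (w - np.sqrt(w*(w-4)))/2

def spectrum(W, i, S):
    """Synaptic input spectrum of neuron i from subcircuit S (self excluded).
    W[i][j] is the weight from j to i; indices are 0-based."""
    ws = [W[i][j] for j in S if j != i]
    return [sum(c) for k in range(len(ws)+1) for c in combinations(ws, k)]

def hypersolid_bounds(W, v, e):
    """Bias bounds (l_i, u_i) of ^N B|L per equations 4.4/4.5.
    v: saturated neurons in order; e[i] = 1 if on, 0 if off.
    Assumes all self-weights > 4."""
    N = len(W)
    A = [i for i in range(N) if i not in v]
    lo = {i: -np.inf for i in range(N)}; hi = {i: np.inf for i in range(N)}
    for i, vi in enumerate(v):                             # saturated neurons (4.4)
        S = [k for k in range(N) if k not in v[:i]]        # N \ {v_1..v_{i-1}}
        off = sum(e[j]*W[vi][v[j]] for j in range(i))      # input from earlier saturated neurons
        if e[i] == 1:
            lo[vi] = I_R(W[vi][vi]) - min(spectrum(W, vi, S)) - off
        else:
            hi[vi] = I_L(W[vi][vi]) - max(spectrum(W, vi, S)) - off
    for i in A:                                            # active neurons (4.5)
        off = sum(e[j]*W[i][v[j]] for j in range(len(v)))
        lo[i] = I_L(W[i][i]) - max(spectrum(W, i, A)) - off
        hi[i] = I_R(W[i][i]) - min(spectrum(W, i, A)) - off
    return lo, hi

# The ^4B|^4_1^3 example (0-based: v = (3, 0, 2), e = (1, 0, 1)).
rng = np.random.default_rng(2)
W = rng.uniform(-16, 16, (4, 4))
np.fill_diagonal(W, rng.uniform(4.5, 16, 4))
lo, hi = hypersolid_bounds(W, v=[3, 0, 2], e=[1, 0, 1])
for i in range(4):
    print(f"theta_{i+1} in ({lo[i]:.2f}, {hi[i]:.2f})")
```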
4.2 Some wii < 4. In contrast to the wii > 4 case, the SSIO of a neuron
whose self-weight is less than 4 is unfolded (see Figure 1A). Thus, such a
neuron will not undergo the extremal saddle node bifurcations that play
such a crucial role in the parameter space structure characterized above.
Can this analysis be extended to circuits containing such neurons?
To gain some insight into this question, Figure 8 compares the net input
parameter space of a two-neuron CTRNN with w22 = 5 (see Figure 8A) and
w22 = 3 (see Figure 8B). Note that the left and right branches of saddle node
bifurcations disappear when w22 passes below 4, as expected. However,
there are still differences in the effective dimensionality of the dynamics of
this circuit as θ2 is varied. For example, between the two saddle node bi-
furcation curves, the three-equilibrium-point phase portrait changes from
occupying the interior of the state space (and therefore being effectively
two-dimensional in the distribution of its equilibrium points and its re-
sponse to perturbations) at the point C1 to occupying only the bottom edge
(effectively one-dimensional) at C2. Similarly, outside the saddle node bi-
furcation curves, the single equilibrium point phase portrait changes from
occupying the right edge (effectively one-dimensional) at D1 to occupying
the bottom right-hand corner (effectively zero-dimensional) at D2.
Thus, when w22 < 4, regions of dynamics with different effective dimen-
sionality still exist, but there are no sharp boundaries between them because
these regions are no longer delineated by saddle node bifurcations. If we
wish to extend our definitions of the $R^N_M(W)$ boundaries from the previous
section to the case when some self-weights are less than four, then we need
to identify some feature of a neuron’s unfolded SSIO curve against which
we can compare the range of synaptic inputs that it receives. Different
choices will lead to somewhat different boundaries.
Perhaps the simplest way to accomplish this is to make use of the piecewise linear approximation (the dashed lines in Figure 1A)

$$\tilde{\sigma}(y + \theta) = \begin{cases} 0 & y < -\theta - 2 \\[2pt] \dfrac{y + \theta}{4} + \dfrac{1}{2} & -\theta - 2 \le y \le -\theta + 2 \\[2pt] 1 & y > -\theta + 2, \end{cases}$$

and use the points where the linear pieces intersect as markers for boundary calculations (black points in Figure 1A). By setting the resulting one-neuron equation to 0 and solving for the net input, we obtain left and right boundaries of −2 and 2 − w, respectively, leading to the extended definitions

$$\tilde{I}_L(w) = \begin{cases} -2 & w < 4 \\ I_L(w) & w \ge 4 \end{cases} \qquad \tilde{I}_R(w) = \begin{cases} 2 - w & w < 4 \\ I_R(w) & w \ge 4. \end{cases}$$
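A literal transcription of the extended definitions, with the continuity of both edges at w = 4 as a quick check (function names mine):

```python
import numpy as np

def I_L(w): return 2*np.log((np.sqrt(w)+np.sqrt(w-4))/2) - (w + np.sqrt(w*(w-4)))/2
def I_R(w): return -2*np.log((np.sqrt(w)+np.sqrt(w-4))/2) - (w - np.sqrt(w*(w-4)))/2

def I_L_ext(w):
    """Extended left edge: the piecewise-linear marker -2 when the SSIO is unfolded."""
    return -2.0 if w < 4 else float(I_L(w))

def I_R_ext(w):
    """Extended right edge: 2 - w when the SSIO is unfolded."""
    return 2.0 - w if w < 4 else float(I_R(w))

print(I_L_ext(4.0), I_R_ext(4.0))   # both -2: the definitions join continuously at w = 4
```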
5 Calculating $R^N_M$ Probabilities

$$P(R^N_M) = \frac{\mathrm{vol}(R^N_M)}{(w_{\max} - w_{\min})^{N^2}\, (\theta_{\max} - \theta_{\min})^{N}}, \qquad (5.1)$$
$$\mathrm{vol}({}^N Q|_J) = \sum_{L \in P(J)} \mathrm{vol}({}^N B|_L) \;-\; \sum_{i=2}^{|P(J)|} (-1)^i \sum_{H \in K^{P(J)}_i} \mathrm{vol}\!\left( \bigcap_{L \in H} {}^N B|_L \right), \qquad (5.3)$$
Each intersection appearing in equation 5.3 will also be a rectangular hypersolid, the bounds of which can be found by taking the appropriate maxes and mins of the bounds of the constituent ${}^N B|_L$s.
Finally, the volume of each rectangular hypersolid ${}^N B|_L$ is given by

$$\mathrm{vol}({}^N B|_L) = \int_{\mathcal{W}} \prod_{i=1}^{N} \left( [u_i]^{\theta_{\max}}_{\theta_{\min}} - [l_i]^{\theta_{\max}}_{\theta_{\min}} \right) d\mathcal{W},$$
where $\mathcal{W}$ is the hypercube $[w_{\min}, w_{\max}]^{N^2}$, the expressions $u_i$ and $l_i$ for the bounds of the ith dimension of ${}^N B|_L$ are given in equations 4.4 and 4.5, and the notation $[x]^{\max}_{\min}$ means to clip x to the bounds [min, max]. Since $u_i$ and $l_i$ depend on only the N weights coming into neuron i, denoted $W_i$, this $N^2$-dimensional integral can be factored into the product of N N-dimensional integrals as
$$\mathrm{vol}({}^N B|_L) = \prod_{i=1}^{N} \int_{W_i} \left( [u_i]^{\theta_{\max}}_{\theta_{\min}} - [l_i]^{\theta_{\max}}_{\theta_{\min}} \right) dW_i. \qquad (5.4)$$
Thus, in order to calculate $P(R^N_M)$, we must evaluate equations 5.1 to
5.4. Mathematica code supporting the construction and evaluation of such
expressions for sufficiently small N − M is provided in the electronic sup-
plement (Beer, 2005). Note that these expressions are not as efficient as they
could be. By taking into account integral symmetries, it should be possible
to derive equivalent expressions that involve the evaluation of considerably
fewer integrals.
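Even without evaluating the integrals, these probabilities can be estimated by directly Monte Carlo sampling the defining inequalities: for an all-active circuit, $\max I^{\mathcal{N}}$ and $\min I^{\mathcal{N}}$ are simply the sums of a neuron's positive and negative incoming cross-weights. A sketch assuming the ranges used for Figure 9A ($w \in [-16, 16]$, $\theta \in [-24, 24]$); it is slow but simple:

```python
import numpy as np

def I_L(w): return 2*np.log((np.sqrt(w)+np.sqrt(w-4))/2) - (w + np.sqrt(w*(w-4)))/2
def I_R(w): return -2*np.log((np.sqrt(w)+np.sqrt(w-4))/2) - (w - np.sqrt(w*(w-4)))/2

def P_RNN_mc(N, wmax=16.0, tmin=-24.0, tmax=24.0, trials=100_000, seed=1):
    """Monte Carlo estimate of P(R_N^N): the probability that a uniformly
    sampled circuit has all N neurons dynamically active (all w_ii > 4 and
    every theta_i inside its fold-derived bounds). Takes ~10 s."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        W = rng.uniform(-wmax, wmax, (N, N))
        th = rng.uniform(tmin, tmax, N)
        d = np.diag(W)
        if np.any(d < 4):
            continue
        off = W - np.diag(d)                               # incoming cross-weights per row
        lo = I_L(d) - np.clip(off, 0, None).sum(axis=1)    # max I^N = sum of positive inputs
        hi = I_R(d) - np.clip(off, None, 0).sum(axis=1)    # min I^N = sum of negative inputs
        hits += np.all((lo < th) & (th < hi))
    return hits / trials

print(P_RNN_mc(2))   # a few percent, comparable to Figure 9A at N = 2
```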
As a concrete illustration of the calculation of $P(R^N_M)$, consider the region $R^N_N$ of N-dimensional dynamics in an N-neuron circuit. This is not only the simplest case, but also the most important, because all N neurons are dynamically active. In this case, $R^N_N$ consists of a single rectangular hypersolid, and thus

$$\mathrm{vol}(R^N_N) = \mathrm{vol}({}^N Q|) = \mathrm{vol}({}^N B|).$$
Expanding this volume via equations 5.4 and 4.5 gives

$$\mathrm{vol}(R^N_N) = \left( \int_{4}^{w_{\max}} \int_{-w_{\max}}^{w_{\max}} \!\cdots\! \int_{-w_{\max}}^{w_{\max}} \left( \left[ I_R(w) - \min I^{\mathcal{N}} \right]^{\theta_{\max}}_{\theta_{\min}} - \left[ I_L(w) - \max I^{\mathcal{N}} \right]^{\theta_{\max}}_{\theta_{\min}} \right) dw_1 \cdots dw_{N-1}\, dw \right)^{\!N}, \qquad (5.5)$$

where w is the self-weight and $w_1, \ldots, w_{N-1}$ are the cross-weights of an arbitrary neuron i. Note that the lower limit of the outermost integral must be
4 because we are using the original region definitions (which are defined
only for w ≥ 4) rather than the extended ones. Note also that we have as-
sumed that wmin = −wmax for simplicity and wmax ≥ 4 so that vol(R N N ) is
nonzero.
Although it is unclear in general how to evaluate these arbitrarily it-
erated piecewise integrals in closed form, it is possible to evaluate them
for fixed w and θ bounds and fixed N (see section A.1). In addition, de-
pending on the range of $\theta_{\min}$ and $\theta_{\max}$ relative to the points $\tilde{\theta}_{\min}$ and $\tilde{\theta}_{\max}$ where clipping begins to occur, there are two cases of interest where
evaluation of these integrals for general N is relatively straightforward:
(1) when clipping dominates equation 5.5 and (2) when no clipping
occurs.
The points $\tilde{\theta}_{\min}$ and $\tilde{\theta}_{\max}$ can be defined as follows. For $4 \le w \le w_{\max}$, we have $I_R(w) \in [I_R(w_{\max}), -2]$ and $I_L(w) \in [I_L(w_{\max}), -2]$. Since $I^{\mathcal{N}}$ has the form $\left\{0, \pm w_{\max}, \ldots, \pm w_{\max}(N-1)\right\}$, we can conclude that $I_R(w) - \min I^{\mathcal{N}} \in [I_R(w_{\max}),\; w_{\max}(N-1) - 2]$ and that $I_L(w) - \max I^{\mathcal{N}} \in [I_L(w_{\max}) - w_{\max}(N-1),\; -2]$, giving

$$\tilde{\theta}_{\min} = I_L(w_{\max}) - w_{\max}(N-1), \qquad \tilde{\theta}_{\max} = w_{\max}(N-1) - 2.$$
The first case in which the iterated integrals can be evaluated in closed form is when $\theta_{\min} \gg \tilde{\theta}_{\min}$ and $\theta_{\max} \ll \tilde{\theta}_{\max}$, which will occur when N becomes sufficiently large relative to fixed $\theta_{\min}$ and $\theta_{\max}$. In this case, the integrands are almost everywhere clipped to either $\theta_{\min}$ or $\theta_{\max}$, and the iterated integrals evaluate to
$$\mathrm{vol}_\infty(R^N_N) = \left( (\theta_{\max} - \theta_{\min})\,(w_{\max} - 4)\,(2 w_{\max})^{N-1} \right)^{N},$$
where the ∞ subscript reminds us that this expression is accurate only for
sufficiently large N. The probability of a random parameter sample hitting
$R^N_N$ therefore scales as

$$P_\infty(R^N_N) = \frac{\mathrm{vol}_\infty(R^N_N)}{(2 w_{\max})^{N^2}\, (\theta_{\max} - \theta_{\min})^{N}} = \left( \frac{w_{\max} - 4}{2 w_{\max}} \right)^{\!N}.$$
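This closed form is trivial to evaluate; for example:

```python
def P_inf(N, wmax=16.0):
    """Large-N approximation: P_infty(R_N^N) = ((wmax - 4)/(2 wmax))**N."""
    return ((wmax - 4) / (2 * wmax)) ** N

for N in (2, 5, 10):
    print(N, P_inf(N))   # 0.140625, ~0.0074, ~5.5e-5 for wmax = 16
```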
The second case in which the iterated integrals can be calculated in closed form is when $\theta_{\min} \le \tilde{\theta}_{\min}$ and $\theta_{\max} \ge \tilde{\theta}_{\max}$, which will occur when N is small relative to fixed $\theta_{\min}$ and $\theta_{\max}$. In this case, the θ bounds are sufficiently large that no clipping occurs and the $[\cdot]^{\theta_{\max}}_{\theta_{\min}}$ can be dropped from the integrands.
$$\mathrm{vol}_0(R^N_N) = \Big( 2^{N-2}\, (w_{\max})^{N-1} \big( w_{\max}\left(N(w_{\max} - 4) - w_{\max} + \sqrt{w_{\max}(w_{\max}-4)} + \ln 256 + 4\right) - 8(w_{\max} - 1) \ln\!\left(\sqrt{w_{\max} - 4} + \sqrt{w_{\max}}\right) + 2\sqrt{w_{\max}(w_{\max}-4)} - 8 \ln 2 \big) \Big)^{N},$$
where the 0 subscript reminds us that this expression is accurate only for
sufficiently small N. The probability in this case thus scales as
$$P_0(R^N_N) = \frac{\mathrm{vol}_0(R^N_N)}{(2 w_{\max})^{N^2}\, (\theta_{\max} - \theta_{\min})^{N}}.$$
$P_\infty$ provides the better fit for N > 7. The data actually begin to deviate from $P_0$ by N = 2 (since $\theta_{\min} = -24 > \tilde{\theta}_{\min} = I_L(16) - 16(N-1)$ for N > 1), but the largest error occurs in the crossover region between these two curves, where the θ clipping begins to become significant. This becomes even more apparent for the narrower bounds $\theta_{\min} = -16$ and $\theta_{\max} = 16$ (see Figure 9B). If higher accuracy is required in this crossover region, then the full iterated integrals for $\mathrm{vol}(R^N_N)$ must be evaluated (see section A.1). Such calculations can also be used to choose appropriate $[w_{\min}, w_{\max}]$ and $[\theta_{\min}, \theta_{\max}]$ parameter ranges so as to maximize $P(R^N_N)$ for a CTRNN of a given size.
[Figure 9: Percentage of parameter space occupied by $R^N_N$ versus circuit size N (N = 2–10), for (A) $\theta_i \in [-24, 24]$ and (B) $\theta_i \in [-16, 16]$.]
from the other neurons should fall entirely within its SSIO fold—that is, $I_L(w_{ii}) - \min I^{\mathcal{N}}_i \le \theta_i \le I_R(w_{ii}) - \max I^{\mathcal{N}}_i$ for all neurons i. Thus, the density of this phase portrait in the parameter space of an N-neuron circuit can be estimated as

$$P(P_{3^N}^N) = \frac{\left( \displaystyle\int_{4}^{w_{\max}} \int_{w_{\min}}^{w_{\max}} \!\cdots\! \int_{w_{\min}}^{w_{\max}} \left[ \left[ I_R(w) - \max I^{\mathcal{N}} \right]^{\theta_{\max}}_{\theta_{\min}} - \left[ I_L(w) - \min I^{\mathcal{N}} \right]^{\theta_{\max}}_{\theta_{\min}} \right]_0 dw_1 \cdots dw_{N-1}\, dw \right)^{\!N}}{(w_{\max} - w_{\min})^{N^2}\, (\theta_{\max} - \theta_{\min})^{N}},$$

where $[\,\cdot\,]_0$ clips negative values of the integrand to 0.
$$P(P_9^2) = \frac{\left( \displaystyle\int_{4}^{16} \int_{-16}^{16} \left[ \left[ I_R(w) - \max(0, w_1) \right]^{16}_{-16} - \left[ I_L(w) - \min(0, w_1) \right]^{16}_{-16} \right]_0 dw_1\, dw \right)^{2}}{32^4 \cdot 32^2}. \qquad (6.1)$$
Evaluating this integral in closed form (see the appendix) gives

$$P(P_9^2) = \frac{\left( 1152 - 576\sqrt{3}\, \ln(2 + \sqrt{3}) + 240 \ln(2 + \sqrt{3})^2 \right)^2}{1073741824} \approx 0.0060\%.$$
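This value can be double-checked numerically. Writing $I_W(w) = I_R(w) - I_L(w)$ for the fold width (as the split of the integrand in the appendix implies), a simple trapezoid-rule evaluation reproduces the closed form:

```python
import numpy as np

def IW(w):
    """Fold width I_W(w) = I_R(w) - I_L(w)."""
    return np.sqrt(w*(w - 4)) - 4*np.log((np.sqrt(w) + np.sqrt(w - 4))/2)

w = np.linspace(4, 16, 1_000_001)
f = IW(w)**2
integral = ((f[:-1] + f[1:]) * np.diff(w)).sum() / 2     # trapezoid rule
print(100 * integral**2 / 32**6)                         # ~0.00602 (percent)

c = np.log(2 + np.sqrt(3))
print(100 * (1152 - 576*np.sqrt(3)*c + 240*c**2)**2 / 1073741824)  # closed form
```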
choose each bias from the range $\left[ I_L(w_{ii}) - \min I^{\mathcal{N}}_i,\; I_R(w_{ii}) - \max I^{\mathcal{N}}_i \right]$, we are guaranteed to obtain the maximal phase portrait.
[Figure 10: six panels (A–F) plotted in the (θ1, θ2) plane.]
where

$$a_i = \begin{cases} \max\!\left(\theta_{\min},\; I_R(w_{ii}) - \max(0, w_{ij})\right) & w_{ii} \ge 4 \\ \theta_{\min} & w_{ii} < 4 \end{cases} \qquad b_i = \begin{cases} \min\!\left(\theta_{\max},\; I_L(w_{ii}) - \min(0, w_{ij})\right) & w_{ii} \ge 4 \\ \theta_{\max} & w_{ii} < 4 \end{cases} \qquad (j \ne i)$$
clip the Hopf curve to the saddle node bifurcation manifolds and θ bounds, and D gives the domain of integration (see section A.4 for an explanation of the
last two conditions). Since it is well known that oscillations can occur in
a two-neuron CTRNN only when the cross-weights are oppositely signed
(Ermentrout, 1995), we have assumed above that w12 > 0 and w21 < 0 and
doubled the integral to account for the opposite possibility.
We will not attempt to evaluate this integral in closed form. Assuming
$w_{ij}, \theta_i \in [-16, 16]$ and $\tau_i \in [0.5, 10]$, numerical integration using a quasi-random Monte Carlo method gives $P(P_{1LC}^2) \approx 0.22\%$, which accords quite
well with the empirical probability of 0.24% observed in a random sample of
106 two-neuron circuits. The empirical estimate was obtained by randomly
generating 10 initial conditions in the range yi ∈ [−16, 16], integrating each
with the forward Euler method for 2500 integration steps of size 0.1 to skip
transients, and then integrating for an additional 500 integration steps. If
the output of either neuron varied by more than 0.05 during this second
integration for any initial condition, then the circuit was classified as os-
cillatory. Since the empirical value includes both central and noncentral
oscillations, it would be expected to be slightly higher.
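For reference, here is a sketch of that empirical protocol (external inputs assumed zero; vectorized over the 10 initial conditions). With only 10³ circuits rather than the 10⁶ used in the text, the estimate is noisy and takes a minute or two to run:

```python
import numpy as np

def sigma(x): return 1/(1 + np.exp(-x))

def oscillates(W, th, tau, rng, n_ic=10, dt=0.1, settle=2500, test=500, eps=0.05):
    """Empirical oscillation test from the text: 10 random initial states,
    2500 Euler steps of size 0.1 to skip transients, then 500 more; flag the
    circuit if any output varies by more than 0.05."""
    y = rng.uniform(-16, 16, (n_ic, len(th)))
    for _ in range(settle):
        y += dt * (-y + sigma(y + th) @ W.T) / tau
    lo = hi = sigma(y + th)
    for _ in range(test):
        y += dt * (-y + sigma(y + th) @ W.T) / tau
        o = sigma(y + th)
        lo, hi = np.minimum(lo, o), np.maximum(hi, o)
    return bool(np.any(hi - lo > eps))

rng = np.random.default_rng(0)
N, trials = 2, 1000
count = sum(oscillates(rng.uniform(-16, 16, (N, N)), rng.uniform(-16, 16, N),
                       rng.uniform(0.5, 10, N), rng) for _ in range(trials))
print(100 * count / trials, "%")   # roughly 0.2-0.3% for N = 2
```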
How does the probability of oscillation $P(O_N)$ scale with N in CTRNNs?
By “oscillation,” I mean any asymptotic behavior other than an equilibrium
point, so that periodic, quasi-periodic, and chaotic dynamics are included.
Although this question is beyond the theory described in this letter, we
can examine it empirically. A plot of $P(O_N)$ is shown in Figure 11A (black curve), with the $P(P_{1LC}^2)$ value calculated above corresponding to the N = 2 point in this plot. For comparison, the scaling of oscillation probability with
N for random center-crossing circuits is also shown (gray curve). Note that
both curves monotonically increase toward 100%, although oscillations are
clearly much more likely in random center-crossing circuits than they are
in completely random CTRNNs (Beer, 1995; Mathayomchan & Beer, 2002).
[Figure 11: (A) Probability of oscillation versus N (black: random CTRNNs; gray: random center-crossing circuits). (B) Probability that exactly M of N neurons oscillate, plotted over N and M.]
Interestingly, samples of oscillatory circuits taken from this data set sug-
gest that chaotic dynamics becomes increasingly common for large N,
which is consistent with other work on chaos in additive neural networks
(Sompolinsky & Crisanti, 1988).
In order to gain some insight into the underlying structure of $P(O_N)$, the probability $P(O^N_M)$ that exactly M neurons are oscillating in an N-neuron circuit (with the remaining N − M neurons in saturation) is plotted in Figure 11B. As N increases, note that (1) the most probable oscillating subcircuit size (denoted $M^*$) increases, (2) the distribution of oscillatory subcircuits broadens, and (3) the probability of $M^*$ increases. Several factors underlie these features. First, the distribution broadens and shifts to the right because the range of possible subcircuit sizes grows with N. At least within the range of this plot, the shift in peak location with N is roughly $M^* \approx 1.26\sqrt{N} - 0.074$. Second, the probability of $M^*$ increases because both the number of possible subcircuits $\binom{N}{M^*} \sim 2^N$ grows exponentially and the parameter ranges over which a subcircuit of a given size can oscillate increase with N − M. As long as $P(O_N) = \sum_M P(O^N_M) < 1$, the probability of $M^*$ can continue to increase. However, as $P(O_N)$ approaches 1, one would expect the probability of $M^*$ to decrease as a fixed area is distributed across an increasing range of subcircuit sizes. Finally, the quantitative details of $P(O^N_M)$ obviously depend on the relative proportion of different oscillatory regions that fall within the range of allowable bias values.
7 Discussion
Appendix
$$\cdots + \binom{N-1}{N-1} (w_{\max})^{0} \int_{4}^{w_{\max}} \int_{-w_{\max}}^{0} \!\cdots\! \int_{-w_{\max}}^{0} \left[ I_R(w) - \sum_{i=1}^{N-1} w_i \right]^{\theta_{\max}}_{\theta_{\min}} dw_1 \cdots dw_{N-1}\, dw,$$
which sums to

$$R_N = \sum_{k=0}^{N-1} \binom{N-1}{k} (w_{\max})^{N-k-1}\, SR_k, \quad \text{with} \quad SR_k \equiv \int_{4}^{w_{\max}} \int_{-w_{\max}}^{0} \!\cdots\! \int_{-w_{\max}}^{0} \left[ I_R(w) - \sum_{i=1}^{k} w_i \right]^{\theta_{\max}}_{\theta_{\min}} dw_1 \cdots dw_k\, dw.$$
Similarly,

$$L_N = \sum_{k=0}^{N-1} \binom{N-1}{k} (w_{\max})^{N-k-1}\, SL_k, \quad \text{with} \quad SL_k \equiv \int_{4}^{w_{\max}} \int_{0}^{w_{\max}} \!\cdots\! \int_{0}^{w_{\max}} \left[ I_L(w) - \sum_{i=1}^{k} w_i \right]^{\theta_{\max}}_{\theta_{\min}} dw_1 \cdots dw_k\, dw.$$
Such integrals can be evaluated for fixed k and fixed w and θ ranges. Thus, for modest N, it is possible to explicitly calculate $\mathrm{vol}(R^N_N)$ in closed form.
For example, consider the point of greatest discrepancy between the approximate curves and the empirical data in Figure 9B, which occurs at N = 4. For $w_{\max} = 16$, $\theta_{\min} = -16$, and $\theta_{\max} = 16$, we obtain

$$\mathrm{vol}(R^4_4) = \left( 4096\,(SR_0 - SL_0) + 768\,(SR_1 - SL_1) + 48\,(SR_2 - SL_2) + (SR_3 - SL_3) \right)^{4}$$

$$= \frac{-7798784\gamma_R^0 - 557056\gamma_R^1 + 10752\gamma_R^2 + 128\gamma_R^3 - 2\gamma_R^4 - 7798784\gamma_L^0 + 557056\gamma_L^1 + 19968\gamma_L^2 + 256\gamma_L^3 + \gamma_L^4}{331776}.$$
For the no-clipping evaluation, the integral of the $\max I^{\mathcal{N}} - \min I^{\mathcal{N}}$ term of the integrand over the weight ranges is equivalent to

$$2^{N-1} \int_{4}^{w_{\max}} \int_{0}^{w_{\max}} \!\cdots\! \int_{0}^{w_{\max}} \left( \sum_{i=1}^{N-1} w_i \right) dw_1 \cdots dw_{N-1}\, dw = 2^{N-1} (N-1) \frac{(w_{\max})^2}{2} (w_{\max} - 4)\, (w_{\max})^{N-2}.$$
2
Raising the sum of the previous two expressions to the Nth power and
simplifying gives
$$\mathrm{vol}_0(R^N_N) = \Big( 2^{N-2}\, (w_{\max})^{N-1} \big( w_{\max}\left(N(w_{\max} - 4) - w_{\max} + \sqrt{w_{\max}(w_{\max}-4)} + \ln 256 + 4\right) - 8(w_{\max} - 1) \ln\!\left(\sqrt{w_{\max} - 4} + \sqrt{w_{\max}}\right) + 2\sqrt{w_{\max}(w_{\max}-4)} - 8 \ln 2 \big) \Big)^{N}.$$
We can then split the inner integral of equation 6.1 and evaluate to obtain

$$P(P_9^2) = \frac{\left( \displaystyle\int_{4}^{16} \left( \int_{-I_W(w)}^{0} \left( I_W(w) + w_1 \right) dw_1 + \int_{0}^{I_W(w)} \left( I_W(w) - w_1 \right) dw_1 \right) dw \right)^{2}}{32^4 \cdot 32^2} = \frac{\left( \displaystyle\int_{4}^{16} I_W(w)^2\, dw \right)^{2}}{32^6} = \frac{\left( 1152 - 576\sqrt{3}\, \ln(2 + \sqrt{3}) + 240 \ln(2 + \sqrt{3})^2 \right)^2}{1073741824}.$$
If Ĵ is the Jacobian of the linearized system evaluated at ȳ, then the Hopf
bifurcation condition is given by
where
Note that these H expressions are real valued only when $\beta\left(\chi(\theta_2)^2 + \kappa\eta_1\right) \ge 0$ and $\beta\left(\chi(\theta_1)^2 + \kappa\eta_2\right) \ge 0$, and that they have singularities at $\eta_1 = 0$ and $\eta_2 = 0$. The last two conditions in the domain of integration D given in the main text arise from the requirement that $\kappa\eta_1, \kappa\eta_2 \ge 0$ for the H functions to be real valued when $\theta_1 = \theta_2 = 0$, since β is strictly positive.
Acknowledgments
I thank Jeff Ames, Michael Branicky, Alan Calvitti, Hillel Chiel, Bard Ermen-
trout, Eldan Goldenberg, Robert Haschke, and Eduardo Izquierdo-Torres
for their feedback on an earlier draft of this letter. This research was sup-
ported in part by NSF grant EIA-0130773.
References
Beer, R. D., & Gallagher, J. C. (1992). Evolving dynamical neural networks for adap-
tive behavior. Adaptive Behavior, 1, 91–122.
Blum, E. K., & Wang, X. (1992). Stability of fixed points and periodic orbits and bifurcations in analog neural networks. Neural Networks, 5, 577–587.
Borisyuk, R. M., & Kirillov, A. B. (1992). Bifurcation analysis of a neural network model. Biological Cybernetics, 66, 319–325.
Chiel, H. J., Beer, R. D., & Gallagher, J. C. (1999). Evolution and analysis of
model CPGs for walking I. Dynamical modules. J. Computational Neuroscience, 7,
99–118.
Chow, T. W. S., & Li, X.-D. (2000). Modeling of continuous time dynamical systems
with input by recurrent neural networks. IEEE Trans. on Circuits and Systems—I:
Fundamental Theory and Applications, 47, 575–578.
Cohen, M. A., & Grossberg, S. (1983). Absolute stability of global pattern forma-
tion and parallel memory storage by competitive neural networks. IEEE Trans.
Systems, Man and Cybernetics, 13, 813–825.
Cowan, J. D., & Ermentrout, G. B. (1978). Some aspects of the eigenbehavior of neural
nets. In S. A. Levin (Ed.), Studies in mathematical biology 1: Cellular behavior and the
development of pattern (pp. 67–117). Providence, RI: Mathematical Association of
America.
de Jong, H. (2002). Modeling and simulation of genetic regulatory networks: A
literature review. J. Computational Biology, 9, 67–103.
Dunn, N. A., Lockery, S. R., Pierce-Shimomura, J. T., & Conery, J. S. (2004). A neural
network model of chemotaxis predicts functions of synaptic connections in the
nematode Caenorhabditis elegans. J. Computational Neuroscience, 17, 137–147.
Edwards, R. (2000). Analysis of continuous-time switching networks. Physica D, 146,
165–199.
Ermentrout, B. (1995). Phase-plane analysis of neural activity. In M. A. Arbib (Ed.),
The handbook of brain theory and neural networks (pp. 732–738). Cambridge, MA:
MIT Press.
Ermentrout, B. (1998). Neural networks as spatio-temporal pattern-forming systems.
Reports on Progress in Physics, 61, 353–430.
Funahashi, K. I., & Nakamura, Y. (1993). Approximation of dynamical systems by
continuous time recurrent neural networks. Neural Networks, 6, 801–806.
Getting, P. A. (1989). Emerging principles governing the operation of neural net-
works. Annual Review of Neuroscience, 12, 185–204.
Ghigliazza, R. M., & Holmes, P. (2004). Minimal models of bursting neurons: How
multiple currents, conductances and timescales affect bifurcation diagrams. SIAM
J. Applied Dynamical Systems, 3, 636–670.
Goldman, M. S., Golowasch, J., Marder, E., & Abbott, L. F. (2001). Global structure,
robustness and modulation of neuronal models. J. Neuroscience, 21, 5229–5238.
Golowasch, J., Goldman, M. S., Abbott, L. F., & Marder, E. (2002). Failure of averaging
in the construction of a conductance-based neural model. J. Neurophysiol., 87,
1129–1131.
Grossberg, S. (1988). Nonlinear neural networks: Principles, mechanisms, and archi-
tectures. Neural Networks, 1, 17–61.
Guckenheimer, J., Myers, M., & Sturmfels, B. (1997). Computing Hopf bifurcations I. SIAM J. Numerical Analysis, 34, 1–21.
3050 R. Beer
Harvey, I., Husbands, P., Cliff, D., Thompson, A., & Jacobi, N. (1997). Evolutionary
robotics: The Sussex approach. Robotics and Autonomous Systems, 20, 205–224.
Haschke, R. (2004). Bifurcations in discrete-time neural networks: Controlling complex
network behavior with inputs. Unpublished doctoral dissertation, University of
Bielefeld.
Haschke, R., & Steil, J. J. (2005). Input space bifurcation manifolds of recurrent neural
networks. Neurocomputing, 64C, 25–38.
Hirsch, M. (1989). Convergent activation dynamics in continuous time networks.
Neural Networks, 2, 331–349.
Hopfield, J. J. (1984). Neurons with graded response have collective computational properties like those of two-state neurons. Proc. National Academy of Sciences, 81, 3088–3092.
Hoppensteadt, F. C., & Izhikevich, E. M. (1997). Weakly connected neural networks.
Berlin: Springer.
Izquierdo-Torres, E. (2004). Evolving dynamical systems: Nearly neutral regions in con-
tinuous fitness landscapes. Unpublished master’s thesis, University of Sussex.
Kimura, M., & Nakano, R. (1998). Learning dynamical systems by recurrent neural
networks from orbits. Neural Networks, 11, 1589–1599.
Kuznetsov, Y. A. (1998). Elements of applied bifurcation theory (2nd ed.). Berlin: Springer.
Lewis, J. E., & Glass, L. (1992). Nonlinear dynamics and symbolic dynamics of neural
networks. Neural Computation, 4, 621–642.
Marder, E., & Abbott, L. F. (1995). Theory in motion. Current Opinion in Neurobiology,
5, 832–840.
Marder, E., & Calabrese, R. L. (1996). Principles of rhythmic motor pattern generation.
Physiological Reviews, 76, 687–717.
Mathayomchan, B., & Beer, R. D. (2002). Center-crossing recurrent neural networks
for the evolution of rhythmic behavior. Neural Computation, 14, 2043–2051.
Nolfi, S., & Floreano, D. (2000). Evolutionary robotics. Cambridge, MA: MIT Press.
Pasemann, F. (2002). Complex dynamics and the structure of small neural networks.
Network: Computation in Neural Systems, 13, 195–216.
Prinz, A. A., Billimoria, C. P., & Marder, E. (2003). Alternative to hand-tuning
conductance-based models: Construction and analysis of databases of model
neurons. J. Neurophysiol., 90, 3998–4015.
Prinz, A. A., Bucher, D., & Marder, E. (2004). Similar network activity from disparate
circuit parameters. Nature Neuroscience, 7, 1345–1352.
Psujek, S., Ames, J., & Beer, R. D. (2006). Connection and coordination: The interplay
between architecture and dynamics in evolved model pattern generators. Neural
Computation, 18, 729–747.
Rinzel, J., & Ermentrout, B. (1998). Analysis of neural excitability and oscillations. In
C. Koch and I. Segev (Eds.), Methods in neuronal modeling (2nd ed., pp. 251–291).
Cambridge, MA: MIT Press.
Rowat, P. F., & Selverston, A. I. (1997). Oscillatory mechanisms in pairs of neurons
with fast inhibitory synapses. J. Computational Neuroscience, 4, 103–127.
Selverston, A. I. (1980). Are central pattern generators understandable? Behavioral
and Brain Sciences, 3, 535–571.
Seys, C. W., & Beer, R. D. (2004). Evolving walking: The anatomy of an evolutionary
search. In S. Schaal, A. Ijspeert, A. Billard, S. Vijayakumar, J. Hallam, & J.-A. Meyer