3 Stochastic Resonance
3.1 Introduction
In this chapter, we discuss a surprising discovery made in the 1980s known as
stochastic resonance. It concerns a cooperative effect seen in certain nonlinear
systems in the presence of random noise. The signature of stochastic resonance
is that the coherence of the system output improves with an increase of random
noise, at least over some range of noise levels. The subject has been vigorously
studied over the past decade. Stochastic resonance is now known to occur in nu-
merous examples spanning a wide range of physical systems. The lion’s share of
the research has focussed on systems that can be adequately described by clas-
sical physics, though some attention has been devoted to quantum mechanical
realizations.
The purpose of this chapter is to develop the main features of the theory in a
way which is accessible to the non-expert. There already exist general overviews
of stochastic resonance [1,2]. These tell the interesting story of how the idea of
stochastic resonance was originally proposed to explain why the Earth suffers
Ice Ages with apparently near-periodic regularity, how that idea found its true
application in phenomena as diverse as laser dynamics and predator-detection of
the simple crayfish, and current efforts to see whether it could explain outstand-
ing mysteries in human perception in the visual and auditory systems. There is
also a comprehensive technical review article on stochastic resonance [3].
In contrast to these references, this chapter aims to present the theory of
stochastic resonance at a quantitative level suitable for graduate students in the
physical sciences. The goal is to provide a solid basis from which to explore the
(by now rather large) technical literature on stochastic resonance. We also hope
to convey the current frontiers of the subject and where open questions remain.
Despite great progress, stochastic resonance remains an active and exciting area
of research.
quantum mechanics. The idea is that some aspect of the system is not under per-
fect control, and suffers fluctuations about its nominal value. For example, sup-
pose we have a simple mass-spring system, which is subject to very complicated
and highly erratic forces due to collisions with the surrounding air molecules.
The equation of motion for the displacement x is

m ẍ = −k(x − ℓ) + ξ(t) ,

where m is the mass, k is the spring constant, ℓ is the spring’s unstretched length,
and the overdot denotes differentiation with respect to time. The forcing func-
tion ξ is for all practical purposes random. How do we deal with this equation
mathematically? We imagine that the function ξ is deterministic, but changes
each time we run the experiment. We also suppose that we have statistical in-
formation about the ensemble of functions ξ1 , ξ2 , ξ3 , ..., and nothing more. Then
we can imagine solving the sequence of problems
and so on. In the above, p(ξ, t) stands for the probability density of ξ as a function
of time, p(ξ1 , t1 ; ξ2 , t2 ) is the joint probability density of ξ at two times, and so
forth. The second of these plays a central role, and is called the autocorrelation
function, or sometimes just the correlation function, for short.
We call the process ξ stationary if the moments depend only on the time-
differences, so that
⟨ξ(t)⟩ = ⟨ξ(0)⟩ ,
⟨ξ(t1)ξ(t2)⟩ = ⟨ξ(0)ξ(t2 − t1)⟩ ,
⟨ξ(t1)ξ(t2)ξ(t3)⟩ = ⟨ξ(0)ξ(t2 − t1)ξ(t3 − t1)⟩ , (3.6)
...
For stationary processes one can compute the power spectrum via the Wiener–
Khintchine theorem,
S(Ω) = 4 ∫_0^∞ C(τ) cos(Ωτ) dτ , (3.9)
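As a quick numerical check (not part of the text), one can evaluate (3.9) for an assumed exponential correlation function C(τ) = e^{−γτ}, for which this convention gives the Lorentzian S(Ω) = 4γ/(γ² + Ω²):

```python
import numpy as np

# Numerical check of the Wiener-Khintchine formula (3.9) for an assumed
# exponential correlation function C(tau) = exp(-gamma*tau); under this
# convention the spectrum is the Lorentzian S(Omega) = 4*gamma/(gamma^2+Omega^2).
gamma = 1.0
tau = np.linspace(0.0, 40.0, 200001)   # truncated tail: e^(-40) is negligible
dtau = tau[1] - tau[0]

def trap(y, dx):
    # composite trapezoid rule on a uniform grid
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))

def S(Omega):
    return 4.0 * trap(np.exp(-gamma * tau) * np.cos(Omega * tau), dtau)

for Om in (0.0, 0.5, 2.0):
    print(Om, S(Om), 4 * gamma / (gamma**2 + Om**2))
```

The truncation point and grid spacing are arbitrary; the tail beyond τ = 40 contributes only at the level of e^{−40}.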
As a concrete example, consider the linear system

ẋ = −γx + ε cos ωt + ξ(t) , (3.10)

driven by white noise ξ with

⟨ξ(t)⟩ = 0 , (3.11)
⟨ξ(t)ξ(t′)⟩ = κ δ(t − t′) . (3.12)
In the first term, the angular brackets can be moved all the way in until they hit
ξ since this is the only random quantity, and using (3.11) we conclude that this
term vanishes. There are no random factors in the second term: the ensemble
average has no effect and so the angular brackets are just dropped. Thus,
⟨x(t)⟩ = [ε/√(γ² + ω²)] sin(ωt + φ) − [ε sin φ/√(γ² + ω²)] e^{−γt} , (3.16)

where sin φ = γ/√(γ² + ω²). After a transient time, the mean response is periodic
with an amplitude which is independent of the noise strength.
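This can be checked by direct simulation. The following sketch assumes the linear model ẋ = −γx + ε cos ωt + ξ underlying (3.16), propagates an ensemble of trajectories with the Euler–Maruyama method, and extracts the steady oscillation amplitude of the ensemble mean; all parameter values are illustrative.

```python
import numpy as np

# Euler-Maruyama ensemble for dx/dt = -gamma*x + eps*cos(omega*t) + xi(t),
# with <xi(t)xi(t')> = kappa*delta(t-t').  After transients, the ensemble
# mean <x(t)> oscillates with amplitude eps/sqrt(gamma^2+omega^2),
# independent of the noise strength kappa (illustrative parameters).
rng = np.random.default_rng(0)
gamma, eps, omega, kappa = 1.0, 1.0, 2.0, 1.0
dt, nt, nens = 0.01, 4000, 4000          # integrate nens trajectories to t = 40

x = np.zeros(nens)
times = dt * np.arange(1, nt + 1)
mean_x = np.empty(nt)
t = 0.0
for i in range(nt):
    x += (-gamma * x + eps * np.cos(omega * t)) * dt \
         + np.sqrt(kappa * dt) * rng.standard_normal(nens)
    t += dt
    mean_x[i] = x.mean()

# project the late-time mean onto cos/sin to extract the oscillation amplitude
late = times > 20.0
c = 2 * np.mean(mean_x[late] * np.cos(omega * times[late]))
s = 2 * np.mean(mean_x[late] * np.sin(omega * times[late]))
print(np.hypot(c, s), eps / np.sqrt(gamma**2 + omega**2))   # both ~0.45
```

Repeating the run with a different κ leaves the extracted amplitude unchanged, up to sampling noise.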
Meanwhile, the correlation function "x(t)x(t + τ )# consists of four terms. Of
these, the two cross-terms drop out since each involves a single factor of ξ which
therefore vanishes upon ensemble averaging. This leaves two terms:
⟨x(t)x(t + τ)⟩ = e^{−γ(2t+τ)} ∫_0^t dt′ ∫_0^{t+τ} dt″ e^{γt′} e^{γt″} ⟨ξ(t′)ξ(t″)⟩ (3.17)
+ ε² e^{−γ(2t+τ)} ∫_0^t dt′ ∫_0^{t+τ} dt″ e^{γt′} e^{γt″} cos(ωt′) cos(ωt″) .
Fig. 3.1. Power spectrum for the linear system driven by both white noise and a
periodic input, consisting of a sharp peak at the signal frequency ω, and a Lorentzian
noise background, see (3.21)
where 2π/ω is the driving period. As the notation indicates, the resulting corre-
lation function depends only on the time difference τ ,
C(τ) = (κ/2γ) e^{−γτ} + (1/2) [ε²/(γ² + ω²)] cos ωτ . (3.20)
Note that the periodic influence of the drive has not been destroyed by the
extra averaging. The correlation function is now stationary, so we get the power
spectrum by taking the Fourier transform:
S(Ω) = 2κ/(γ² + Ω²) + [ε²/(γ² + ω²)] δ(Ω − ω) . (3.21)
The result is plotted in Fig. 3.1. We see that the power spectrum consists of
a broadband part plus a narrow spike at the signal frequency. For low enough
signal frequencies, the broadband spectrum is essentially flat. A useful measure
of the output coherence is the signal-to-noise ratio (SNR). We divide the strength
of the delta function spike at the signal frequency by the level of broadband noise
at this same frequency. In our example, this gives
SNR = [ε²/(γ² + ω²)] · [(γ² + ω²)/2κ] = ε²/2κ . (3.22)
112 Kurt Wiesenfeld, Thomas Wellens, and Andreas Buchleitner

It is worth making an additional technical point here. In experiments (or
simulations), one typically integrates the power spectrum over some finite but
small bandwidth ∆Ω to determine both the output signal and noise powers. The
bandwidth should be large enough to pick up all the power under the spike at the
signal frequency, but still small enough so that the broadband spectrum is flat
over the interval. If we carry out these integrations for the present example, the
signal part is unchanged but the noise part picks up a bandwidth factor, with
result
SNR = ε²/(2κ∆Ω) . (3.23)
From a theoretical perspective, the bandwidth factor is essentially arbitrary, and
we won’t bother to include it.
Our primary interest is in the behavior of the SNR as a function of the
input noise strength κ, and in this example we see that the SNR monotonically
decreases with κ. Raising the noise level always degrades the quality of the
output. You may say that this last statement is intuitively obvious, and that
only an overeducated ninny would bother to demonstrate this fact by doing a
calculation. But in fact this intuition can be wrong! This is what makes stochastic
resonance such an intriguing phenomenon.
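The monotonic degradation is easy to see in a simulation. The sketch below (illustrative parameters) accumulates averaged periodograms of the driven linear system for two noise strengths and estimates the SNR as the ratio of the power in the signal bin to the neighboring broadband level:

```python
import numpy as np

# Averaged periodograms for dx/dt = -gamma*x + eps*cos(omega*t) + xi(t).
# The spectral spike at the drive frequency is set by the signal, while the
# broadband floor grows with kappa, so the estimated SNR falls as kappa rises.
rng = np.random.default_rng(1)
gamma, eps = 1.0, 1.0
dt, nt, nseg = 0.05, 2000, 60            # 60 segments of 100 time units each
omega = 2 * np.pi * 0.25                 # 25 drive cycles per segment -> bin 25

def snr_estimate(kappa):
    x, t = 0.0, 0.0
    spike, floor = 0.0, 0.0
    for _ in range(nseg):
        xs = np.empty(nt)
        for i in range(nt):
            x += (-gamma * x + eps * np.cos(omega * t)) * dt \
                 + np.sqrt(kappa * dt) * rng.standard_normal()
            t += dt
            xs[i] = x
        p = np.abs(np.fft.rfft(xs))**2
        spike += p[25]                   # power in the signal bin
        floor += p[35:95].mean()         # nearby broadband level
    return spike / floor

s_low_noise = snr_estimate(0.25)
s_high_noise = snr_estimate(4.0)
print(s_low_noise, s_high_noise)         # the first value is the larger
```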
where the primes denote differentiation with respect to x and ∆V is the energy
difference between the potential maximum at x2 and minimum at x1 . This is the
justly famous Kramers formula. It is widely (not to say universally!) applicable
because, as we can see, the details of the potential are irrelevant. Only the
energy barrier and the curvatures at the minimum and maximum enter the
formula. For a derivation of this, see Gardiner’s book [6]. Here is the basic
idea. Consider the evolution of the probability density. Initially, it is a delta
function, p(x, 0) = δ(x − x1 ), and its subsequent time evolution p(x, t) follows a
partial differential equation, called the Fokker–Planck equation [6]. Since we are
interested in the very first instant a particle reaches x3 , we solve the Fokker–
Planck equation with an absorbing boundary at x3 . The solution represents the
Fig. 3.2. Effective particle potential for Kramers’ escape under the governing equa-
tion (3.24). The potential exhibits stable equilibria at x1 and x3 , and an unstable
equilibrium at x2 , on top of the potential barrier of height ∆V
As time passes, pesc increases monotonically from zero toward one. The first
passage time distribution is just the derivative
T(t; x1 → x3) = ∂pesc/∂t , (3.27)
and the mean time is
⟨T⟩ = ∫_0^∞ t T(t; x1 → x3) dt . (3.28)
This last result can be written directly in terms of the potential function V :
⟨T⟩ = α ∫_{x1}^{x3} dy e^{αV(y)} ∫_{−∞}^{y} dz e^{−αV(z)} , (3.29)
where α = 2/κ. For weak noise, κ is small, so α is large, and the integrals can
be evaluated in terms of the extremal values of V , which leads to the relatively
detail-free form of the Kramers formula.
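As an illustration, the double integral (3.29) can be evaluated numerically and compared with the reciprocal of the Kramers rate. The quartic double well V(x) = −x²/2 + x⁴/4 used below is an assumed example (it appears later in the chapter with γ = 1):

```python
import numpy as np

# Compare the exact mean-first-passage-time integral (3.29) with the Kramers
# estimate <T> ~ 2*pi/sqrt(V''(x1)|V''(x2)|) * exp(2*dV/kappa).  Assumed
# example: the quartic double well V(x) = -x^2/2 + x^4/4, with minimum
# x1 = -1, barrier top x2 = 0, absorbing point x3 = +1.
kappa = 0.05
alpha = 2.0 / kappa

def V(x):
    return -x**2 / 2 + x**4 / 4

x1, x2, x3 = -1.0, 0.0, 1.0
z = np.linspace(-4.0, x3, 200001)
dz = z[1] - z[0]

inner = np.cumsum(np.exp(-alpha * V(z))) * dz     # int_{-inf}^{y} e^{-aV(z)} dz
mask = z >= x1
outer = np.exp(alpha * V(z[mask])) * inner[mask]
T_num = alpha * dz * (outer.sum() - 0.5 * (outer[0] + outer[-1]))

dV = V(x2) - V(x1)                                 # barrier height: 1/4
T_kramers = 2 * np.pi / np.sqrt(2.0 * 1.0) * np.exp(alpha * dV)  # V''(x1)=2, |V''(x2)|=1
print(T_num / T_kramers)   # close to 1 in the weak-noise limit
```

Raising κ makes the ratio drift away from one, in line with the steepest-descent origin of the Kramers formula.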
and similarly for n− (t). If the separation of time scales is obeyed, then the
precise location of x that divides the phase space is unimportant, though the
logical place for it is at the potential maximum x2 . We also imagine that the
transition rates between states are specified. These rates may be given by the
Kramers formula, or they may be determined in some other way (including
possibly experimental measurements). The dynamics is then governed by the
linear ordinary differential equations
Fig. 3.3. Modulated potential function for two state SR. The periodic driving force
which is rocking the bistable potential back and forth passes its extrema at one fourth
(middle right) and three fourths (middle left) of the driving field period. This induces
a periodic modulation of the transition rates (3.34)
we suppose that the transition rates are known, and we calculate the SNR in
terms of these rates. This part is very general. Second, we relate the rates to
the various system parameters. This part depends on the details of the system
under consideration.
From (3.32) and the condition n+ + n− = 1, we have
ṅ+ = − (Wup + Wdown ) n+ + Wup . (3.33)
We assume time dependent transition rates
Wup = W0 + ε cos ωt ,
Wdown = W0 − ε cos ωt . (3.34)
As time passes the output x switches between two values, say x = ±c (to be
identified with x1 and x3 in Fig. 3.2).
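The rate equation (3.33) with the modulated rates (3.34) integrates easily. After transients, n+ should oscillate about 1/2 with amplitude ε/√(4W0² + ω²), in line with the solution (3.43); the parameter values below are illustrative.

```python
import numpy as np

# Forward-Euler integration of the rate equation (3.33) with the modulated
# rates (3.34): dn+/dt = -2*W0*n+ + W0 + eps*cos(omega*t).  After transients,
# n+ oscillates about 1/2 with amplitude eps/sqrt(4*W0^2 + omega^2)
# (illustrative parameter values).
W0, eps, omega = 1.0, 0.2, 3.0
dt = 2e-4
t = np.arange(0.0, 30.0, dt)
n = np.empty_like(t)
n[0] = 1.0
for i in range(1, len(t)):
    n[i] = n[i - 1] + dt * (-2 * W0 * n[i - 1] + W0 + eps * np.cos(omega * t[i - 1]))

late = t > 15.0
amp = 0.5 * (n[late].max() - n[late].min())
print(amp, eps / np.sqrt(4 * W0**2 + omega**2))   # both ~0.0555
```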
where pjoint denotes the joint probability to find the particle in state x at time
t, and in state x′ at time t + τ. For our two state system, the double integral
collapses to a discrete sum of four terms:
Collecting the various terms, and substituting into this last equation, we find
the very simple result
with solution
n+(t) = e^{−2W0(t−t0)} [ n+(t0) − 1/2 − ε cos(ωt0 − φ)/√(4W0² + ω²) ]
+ 1/2 + ε cos(ωt − φ)/√(4W0² + ω²) . (3.43)

This is nearly the same as (3.35), except for two periodic terms due to the
modulation. The equilibrium probabilities are

peq(c, t) = n+(t0 → −∞) = 1/2 + ε cos(ωt − φ)/√(4W0² + ω²) ,
peq(−c, t) = 1 − peq(c, t) . (3.44)
The conditional probabilities are also determined by (3.43). For example, con-
sider the probability that x(t + τ) = c given that x(t) = c. One simply sets
n+(t0) = 1 and t − t0 = τ in (3.43). If we denote this process of substitution by
double brackets, we have

p(c, t + τ | c, t) = [[n+(t0) = 1; t − t0 = τ]] . (3.45)

Similarly,

p(c, t + τ | − c, t) = [[n+(t0) = 0; t − t0 = τ]] . (3.46)
The two other conditional probabilities we need are simply constructed from the
first two:
and since the time average of cos2 (ωt − φ) = 1/2, and the time average of
cos(ωt − φ) cos(ωt + ωτ − φ) = (1/2) cos ωτ , we find
C(τ) = c² [1 − 2ε²/(4W0² + ω²)] e^{−2W0 τ} + [2c²ε²/(4W0² + ω²)] cos ωτ . (3.50)
The first term gives the broadband part of the power output. Note that it di-
minishes with increasing ε in such a way that the total power is conserved:
∫_0^∞ S(Ω) dΩ = const.
From S(Ω), we can write down the signal-to-noise ratio SNR. Just divide the
coefficient of the delta function by the value of the broadband term at Ω = ω.
The resulting expression simplifies to
SNR = [πε²/(2W0)] [1 − 2ε²/(4W0² + ω²)]^{−1} . (3.52)

For weak modulation (small ε), this reduces to

SNR ≈ πε²/(2W0) . (3.53)
This is the main result of the first part of the theory. What we need now is to
express ε and W0 in terms of the specific system parameters, which means we
need a theory for the transition rates Wup and Wdown . This part of the theory
depends on the details of the particular system. We consider here a popular
example, the so-called overdamped particle in a double well potential. The dy-
namics is governed by the Langevin equation
where ξ is white noise. This is an example of (3.24) described earlier during the
discussion of the Kramers escape formula, and in fact we will use that formula
to determine the transition rates. The corresponding potential V (x, t) is
V(x, t) = −(1/2)γx² + (1/4)x⁴ − ax cos ωt (3.55)
which when a = 0 describes a symmetric bistable system, and when a ≠ 0 is
periodically tilted in an antisymmetric fashion (see Fig. 3.3). We assume that the
modulation strength a is sufficiently weak that the system remains in the high-
barrier/low-noise limit throughout, in which case the Kramers formula holds.
Actually, the problem is a little more subtle since we are considering a time-
dependent potential, while the Kramers formula is derived for the time indepen-
dent case. It remains valid, however, if the modulation frequency is sufficiently
low, which is sometimes referred to as the adiabatic limit of the time dependent
problem.
We equate the transition rates to the reciprocal of the mean first passage
time (3.25), so that
W = (1/2π) √(V″min |V″max|) exp[−(2/κ)∆V] . (3.56)
If we expand the last exponential factor for small a, and equate W (t) to the
form assumed in the rate equation part (3.34) of the theory we get the required
expressions
W0 = [γ/(√2 π)] e^{−γ²/2κ} , (3.63)

ε = [√2 a γ^{3/2}/(πκ)] e^{−γ²/2κ} . (3.64)
This is the main result of the second part of the theory. We can put this together
with the general expression for the signal-to-noise ratio. Using the simpler form
(3.53), the result reduces to
SNR = [√2 a²γ²/κ²] e^{−γ²/(2κ)} . (3.65)
Fig. 3.4. Signal to noise ratio (SNR) vs. input noise strength for the two state theory.
By virtue of (3.65), the SNR exhibits a maximum at an optimal, non-vanishing value
κ∗ of the noise strength κ. This is the characteristic feature of stochastic resonance.
SNR is measured in ‘decibels’ (dB for short), defined as SNR (dB) = 10 log10 SNR
This is the final result of the two state theory. Figure 3.4 shows a semilog
plot of the SNR, as a function of noise strength κ. The function in (3.65) is very
flat and goes to zero in the noise-free limit (so the logarithm goes to −∞ as
κ → 0), and the SNR decays as 1/κ2 for large noise. Of course, the Kramers
formula breaks down for large noise, so the derived expression in this limit can’t
be taken seriously, although we expect it to be qualitatively right since the SNR
must fall off (one would think!) for sufficiently high κ. In any event, the exciting
result is that the SNR passes through a maximum at an intermediate noise level
κ∗ . From the formula, we readily find that κ∗ = γ 2 /4 = ∆V , where ∆V is the
barrier height in the unmodulated limit (a = 0).
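A direct scan of (3.65) confirms where the optimum sits:

```python
import numpy as np

# Scan the two-state SNR (3.65), sqrt(2)*a^2*gamma^2/kappa^2 * exp(-gamma^2/(2*kappa)),
# over the noise strength; its maximum sits at kappa* = gamma^2/4 = dV.
a, gamma = 0.1, 1.0
kappa = np.linspace(0.01, 2.0, 100000)
snr = np.sqrt(2) * a**2 * gamma**2 / kappa**2 * np.exp(-gamma**2 / (2 * kappa))
kstar = kappa[snr.argmax()]
print(kstar)   # ~0.25 = gamma^2/4
```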
Stochastic resonance has turned out to be very easy to see experimentally.
However, there is one competing effect which is often encountered which may
diminish or even mask entirely the maximum in Fig. 3.4. This is the so-called
intrawell motion, a name borrowed from the double-well potential picture and
which refers to the fact that the potential minima will in general vary with the
time-dependent signal. This means that the output of the system is not constant
between switching events; rather, it oscillates with small amplitude at the signal
frequency, which of course contributes a small amount of output power at the
signal frequency. In the limit of small noise, the power contained in the switching
events is very small, and the residual power due to the intrawell oscillations is no
longer negligible. We can get a quantitative measure of this effect by studying
the dynamics restricted to one of the potential minima (since the oscillations
Fig. 3.5. Effect of intrawell motion on the output SNR. As the relative amount of
intrawell motion increases, its contribution increasingly masks the underlying stochastic
resonance behavior by increasing the SNR value at small values of the noise intensity
κ (i.e., below κ∗ )
are small in the important limit). We already did this calculation earlier in the
chapter, when we studied as an example the dynamics described by (3.10). We
found that
SNRintra = ε²/(2κ) , (3.66)
which diverges at zero noise strength and is otherwise monotonically decreasing
with κ. Whether or not this contribution washes out the maximum in expression
(3.65) depends on how ‘soft’ the potential is about the two minima. Said another
way, the more nearly the output follows a strict two-state waveform, the less
overlap there is between the intrawell and interwell contributions to the SNR (see
Fig. 3.5). If you read through the literature on stochastic resonance carefully,
you will find that in some experiments the data has been filtered [8] to make
it strictly two-state before the power spectrum is computed. In other cases, the
stochastic resonance effect is so strong that such filtering is unnecessary.
Finally, we note that although in most cases the SNR is used for a quanti-
tative analysis of the stochastic resonance, some authors prefer to look at the
output signal power S alone, i.e. the coefficient in front of the δ-function in the
power spectrum, (3.51). By this measure, a system exhibits stochastic resonance
if S has a local maximum at some non-zero input noise intensity. (Compare
this against the behavior of a linear system: since superposition applies, S is
independent of noise intensity.) In the case of stochastic resonance, S exhibits a
maximum at roughly the same noise intensity as the SNR [3,9].
Fig. 3.6. Experimental results from the Schmitt trigger (after Ref. [10]). The top two
panels show the power spectrum before (a) and after (b) adding noise. The bottom
panel (c) shows a plot, gathered from several spectra, of the SNR vs. input
noise level, for three different levels of input signal amplitude (marked by
different symbols, in order of increasing amplitude)
Fig. 3.7. Results from the Georgia Tech ring laser experiment [11], where the intensity
of the clockwise laser mode is optimally synchronized with a weak signal injected
through an acousto-optical modulator placed in the cavity, at nonvanishing noise level
where stochastic resonance was found only in parameter regimes where incoher-
ent tunneling dominates over coherent transitions. Hence, also in the quantum
case, the basic mechanism of stochastic resonance can be understood in terms
of a two state model (with quantum-noise-activated transition rates), although
the exact quantitative behavior may be amended by quantum coherence [15].
In addition to the interaction with the atoms, the photon field is also coupled
to the cavity walls, which are cooled to a low temperature T < 1 K, such that the
mean number nb = (exp(ℏω/kT) − 1)^{−1} [19] of photons in thermal equilibrium
is smaller than one. According to the standard master equation of the damped
harmonic oscillator (employing weak-coupling and Markov approximation) [19],
the influence of the heat bath onto the photon field is fully characterized by
the temperature T (or, equivalently, nb ) and the cavity decay rate γ, which
quantifies the coupling strength to the heat bath. Specifically, given n photons
inside the cavity, the probability of emission of a photon from the heat bath
into the cavity is given by γnb (n + 1), whereas γ(nb + 1)n is the probability of
absorption. Thus, in total, the dynamics of the maser field can be described as a
jump process between neighboring photon numbers n → n ± 1 with the following
transition rates:

t_n^+ = r β_n + γ nb (n + 1) , (3.69)
t_n^− = γ (nb + 1) n . (3.70)
The average of a single realization of this jump process over a sufficiently long
time approaches the following stationary photon number distribution [6]:

p_n^(ss) = p_0^(ss) ∏_{k=1}^{n} t_{k−1}^+ / t_k^− , (3.71)
where p_0^(ss) is determined by normalization. We assume that the atomic flux r is
much larger than the cavity decay rate γ, such that the stationary distribution is
far away from thermal equilibrium. Under these circumstances a double-peaked
stationary distribution, indicating a bistable photon field, can establish itself if
t_n^+ and t_n^−, as functions of n, intersect (at least) three times: at n1 and n2
(defining stable equilibria), and at n3 (corresponding to an unstable equilibrium,
with n1 < n3 < n2). An example is shown in Fig. 3.8.
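The stationary distribution (3.71) is easy to tabulate from the rates (3.69) and (3.70). One caveat: the emission probability β_n is not restated in this excerpt, so the sketch below assumes the standard micromaser form β_n = sin²(φ√(n+1)), with parameters as in Fig. 3.8 (nb = 0.2, r = 40γ, φ = 1.03):

```python
import numpy as np

# Stationary photon-number distribution (3.71) built from the rates (3.69)-(3.70).
# ASSUMPTION: the emission probability beta_n = sin(phi*sqrt(n+1))**2 (the
# standard micromaser form; it is not restated in this excerpt).  Parameters
# as in Fig. 3.8: nb = 0.2, r = 40*gamma, phi = 1.03.
gamma = 1.0
nb, r, phi = 0.2, 40.0 * gamma, 1.03

nmax = 200
n = np.arange(nmax)
beta = np.sin(phi * np.sqrt(n + 1.0))**2
t_plus = r * beta + gamma * nb * (n + 1)       # gain rate (3.69)
t_minus = gamma * (nb + 1) * n                  # loss rate (3.70)

p = np.empty(nmax)
p[0] = 1.0
for k in range(1, nmax):
    p[k] = p[k - 1] * t_plus[k - 1] / t_minus[k]   # product form of (3.71)
p /= p.sum()
print(p.argmax())        # location of the dominant peak
```

With these parameters the tabulated distribution develops two well-separated peaks, the situation sketched in Fig. 3.8.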
In this case, the photon number will be found almost always near one of
the two maxima at n1 or n2 , and transitions between these metastable states
occur. The transition rates W1,2 (from n1 to n2 , and vice versa) of these ‘macro-
scopic’ jumps of the photon field can be expressed in terms of the rates for the
microscopic jumps [6]:1
W1^{−1} = Σ_{n=n1}^{n2−1} [p_n^(ss) t_n^+]^{−1} Σ_{m=0}^{n} p_m^(ss) ,

W2^{−1} = Σ_{n=n1+1}^{n2} [p_n^(ss) t_n^+]^{−1} Σ_{m=n}^{∞} p_m^(ss) . (3.72)
¹ For large n’s, if the discrete photon number can be approximated by a continuous
variable n, the transition rates may also be obtained by a Kramers analysis, see
Sect. 3.4. However, the effective micromaser potential derived in [21] explicitly
depends on T, leading to a temperature dependence of the transition rates different
from the classical Kramers law (3.29).
Fig. 3.8. Double-peaked stationary state of the photon number distribution of the
micromaser field in the bistable operation mode. The experimental parameters (com-
parable to the laboratory scenario described in [20]) are: temperature T = 0.6 K
(corresponding to a mean thermal photon number nb = 0.2), atomic flux r = 40γ, and
vacuum Rabi angle φ = 1.03
therefore (since nb < 1) do not depend on nb as strongly as the t_n^+’s, which are
proportional to nb. As a result, the two rates intersect at about T = 0.5 K.
A further difference from the classical Kramers rates (3.25) is that they do
not vanish even at T = 0. This is due to the presence of the noise associated
with the atomic detections (the measurement noise) on the one hand, leading
to t_n^+ > 0, and the fact that the heat bath can absorb photons at random times
even at T = 0, on the other hand, leading to t_n^− > 0. These noise sources are
of genuinely quantum mechanical origin, which explains the deviation from the
classical Kramers law.
Stochastic resonance appears if we add a periodic signal to our system, e.g.,
by modulating the atomic flux
Fig. 3.10. Time evolution of the probability β to detect an atom in |d⟩, with periodi-
cally modulated atomic flux r(t), according to (3.73), with mean value ⟨r⟩ = 40γ, mod-
ulation amplitude ∆r = 6.9γ, and period 2π/ω = 42 s. Vacuum Rabi angle φ = 1.03.
The noise-induced synchronization of quantum jumps is poor for the lowest temper-
ature (too rare quantum jumps), optimal for the intermediate temperature (almost
regular quantum jumps), and again poor for the highest temperature (too frequent
quantum jumps). Also note the clearly observable intrawell motion for the lowest tem-
perature
(at low T ) or suppresses (at high T ) the signal as compared to the two-state
model [22]. Let us note that the two-state model does not exhibit a maximum
of the SNR, but rather monotonically increases as a function of T [22], until
the two-state approximation breaks down for high temperatures. (This can be
traced back to an untypical behavior of the modulated transition rates, whose
modulation amplitudes increase with increasing temperature, whereas
the modulation amplitude (3.25) of the classical Kramers rates is approximately
constant in the relevant temperature region.) Although an increase of the SNR
with increasing temperature may also be considered as a fingerprint of stochastic
resonance, this shows that the signal strength S may, in some cases, give a better
quantitative picture of stochastic resonance than the SNR. On the other hand,
the exact model (i.e., without two-state approximation) does exhibit a maximum
of the SNR, since the intrawell dynamics reduce the signal output S at high
temperatures, see Fig. 3.11.
Fig. 3.11. Signal output S as a function of the temperature T , for the same parameters
as in Fig. 3.10. A stochastic resonance maximum is observed at T ≈ 0.6 K. The circles
show the results of the simulation of the maser dynamics (run for 75 000 s for each
value of T ), which agree perfectly well with the results of an exact calculation of the
power spectrum (dashed line) [22]. The deviation from the two state model (solid line)
reveals the influence of the intrawell dynamics
Finally, we want to discuss briefly the case of ‘coherent pumping’, where the
atoms are injected into the cavity in a coherent superposition
Fig. 3.12. Time series of an excitable system such as a neuron. Most of the time the
system resides in its rest state, with output signal V = 0. Excitation to its bursting
state causes a narrow spike in V (t)
The two state theory describes many systems known to exhibit stochastic reso-
nance. Many, but not all. The original idea and its early development explicitly
considers hopping between two stable states. But it has turned out that stochas-
tic resonance occurs more generally. Today, we think that there are a handful of
different mechanisms, all of which show the same general properties which we
identify with stochastic resonance.
One place where the two state theory won’t do is in so-called excitable sys-
tems. These are systems which spend most of their time in a resting state, but
can enter into a transient bursting state if perturbed strongly enough. Neurons
are a common example. A typical time series is shown in Fig. 3.12. Each exci-
tation event is represented as a narrow spike. Most of the time the system is in
its rest state, where the output V is zero.
We now develop the theory of stochastic resonance for such a system. The
basic idea of the calculation is as follows. First, we imagine that the spike train is
characterized by an event rate α which depends on a weak periodic influence as
well as a (possibly strong) random influence. In the absence of any input signal,
we assume that the events occur randomly and independently at an average rate
α0 which depends on the input noise level. The effect of a weak periodic signal
is assumed to periodically modulate the event rate, so that
Here, ω is the frequency of the input signal and ε is a constant which depends
on the size of the signal.
132 Kurt Wiesenfeld, Thomas Wellens, and Andreas Buchleitner
We also assume that each event contributes a narrow spike to the output,
according to

V = Σ_i F(t − t_i) , (3.76)
where the event times are t1 , t2 , . . . . The pulses are narrow, and it will turn out
that only the pulse area (and not its detailed shape) is important. Note that
overlapping pulses simply add in the output. Under the assumption that the
probability of an event depends only on the instantaneous value of α and not on
the arrival times of any previous events, we can calculate the correlation function
"V (t)V (t + τ )#, and then the power spectrum and the signal to noise ratio.
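Before carrying out this program analytically, one can generate such a spike train directly. The sketch below draws events of an inhomogeneous Poisson process with rate α(t) = α0(1 + m cos ωt) by the standard thinning method; α0, m, and ω are illustrative values, not taken from the text.

```python
import numpy as np

# Inhomogeneous Poisson spike train with rate alpha(t) = alpha0*(1 + m*cos(omega*t)),
# generated by thinning: propose events at the maximum rate and keep each with
# probability alpha(t)/alpha_max.  alpha0, m, omega are illustrative values.
rng = np.random.default_rng(2)
alpha0, m, omega = 5.0, 0.5, 2 * np.pi
tmax = 2000.0

alpha_max = alpha0 * (1 + m)
n_prop = rng.poisson(alpha_max * tmax)
tp = np.sort(rng.uniform(0.0, tmax, n_prop))
keep = rng.uniform(0.0, 1.0, n_prop) < (1 + m * np.cos(omega * tp)) / (1 + m)
events = tp[keep]
print(len(events) / tmax)   # mean event rate, ~alpha0
```

Placing a narrow pulse F at each event time then yields the output (3.76).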
With this plan in mind, let’s first consider the case where α is constant,
corresponding to the case of no input signal. Then the problem reduces to the
classical ‘shot effect’ [4]. Suppose that there occur exactly K events in a long
time interval t ∈ (0, tmax ). We denote the corresponding output by VK . Then
⟨V_K(t)V_K(t + τ)⟩ = ∫_0^{tmax} (dt_1/tmax) · · · ∫_0^{tmax} (dt_K/tmax) Σ_{i=1}^{K} Σ_{j=1}^{K} F(t − t_i) F(t + τ − t_j)

= Σ_{i=1}^{K} Σ_{j=1}^{K} ∫_0^{tmax} (dt_1/tmax) · · · ∫_0^{tmax} (dt_K/tmax) F(t − t_i) F(t + τ − t_j) . (3.77)
The double sum has K² terms. Of these, there are K terms with i = j and
K(K − 1) terms with i ≠ j:

⟨V_K(t)V_K(t + τ)⟩ = K ∫_0^{tmax} (dt_i/tmax) F(t − t_i) F(t + τ − t_i)
+ K(K − 1) ∫_0^{tmax} (dt_i/tmax) ∫_0^{tmax} (dt_j/tmax) F(t − t_i) F(t + τ − t_j) . (3.78)
To explicitly evaluate the integrals we assume that the pulse shape F is a rect-
angle of width ∆t and height H. Then
∫_0^{tmax} (dt_i/tmax) F(t − t_i) F(t + τ − t_i) = (H²/tmax)(∆t − |τ|) , (3.79)
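The left-hand side of (3.79) is just the (normalized) autocorrelation of a rectangle, which a direct numerical check confirms (illustrative values):

```python
import numpy as np

# Check (3.79): for a rectangular pulse F of width w and height H,
# int_0^{tmax} (dti/tmax) F(t - ti) F(t + tau - ti) = H^2*(w - |tau|)/tmax
# for |tau| < w (and zero otherwise).  Illustrative values below.
H, w, tmax = 2.0, 0.1, 10.0
t, tau = 5.0, 0.03

ti = np.linspace(0.0, tmax, 2000001)
dti = ti[1] - ti[0]

def F(s):
    return H * ((s >= 0) & (s < w))

y = F(t - ti) * F(t + tau - ti)
lhs = dti * (y.sum() - 0.5 * (y[0] + y[-1])) / tmax   # trapezoid rule
rhs = H**2 * (w - abs(tau)) / tmax
print(lhs, rhs)   # both ~0.028
```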
If F is sharply peaked,

∫_0^{tmax} α(t_i) F(t − t_i) dt_i ≈ α(t) ∫_0^{tmax} F(t − t_i) dt_i = α(t) H∆t , (3.90)
and
∫_0^{tmax} α(t_j) F(t + τ − t_j) dt_j ≈ α(t + τ) ∫_0^{tmax} F(t + τ − t_j) dt_j = α(t + τ) H∆t , (3.91)
as well as
∫_0^{tmax} α(t_i) F(t − t_i) F(t + τ − t_i) dt_i ≈ α(t) ∫_0^{tmax} F(t − t_i) F(t + τ − t_i) dt_i = α(t) (H∆t)² δ(τ) , (3.92)
so that
⟨V_K(t)V_K(t + τ)⟩ = (K/Z) α(t) (H∆t)² δ(τ) + [K(K − 1)/Z²] α(t) α(t + τ) (H∆t)² , (3.93)
if there are exactly K events in (0, tmax). Taking the weighted sum over all K
yields the full correlation function:

C(τ; t) = Σ_{K=0}^{∞} ⟨V_K(t)V_K(t + τ)⟩ P_K(tmax)
= (H∆t)² { α(t) δ(τ) + α(t) α(t + τ) } . (3.94)
Then we have
⟨C(τ; t)⟩_ψ = (1/2π) ∫_0^{2π} C(τ; t) dψ
= (H∆t)² { δ(τ) α0 + α0² + α1² (1/2π) ∫_0^{2π} cos(ωt + ψ) cos(ωt + ωτ + ψ) dψ }
= (H∆t)² { δ(τ) α0 + α0² + (1/2) α1² cos ωτ } , (3.96)
Going through the phase average as before, we arrive at the power spectrum
S(Ω) = (H∆t)² { 2α0 + (4/π) α0² δ(Ω) + (2/π) Σ_{j=1}^{∞} α_j² δ(Ω − jω) } . (3.99)
SNR = α1²/(π α0) . (3.100)
To complete the theory, we need to know how the event rate depends on the
system parameters, in particular the input noise intensity. Since this depends on
the specific details of the system, we consider an example. Suppose α obeys a
Kramers-type formula
α(t) = exp[−(U/κ)(1 + η cos ωt)] , (3.101)
where κ is the noise strength and U, η, and ω are constants. The justification
for this form in terms of a particular Langevin model can be found elsewhere
[23]; U plays the role of the potential barrier and η is proportional to the signal
amplitude. The rate can be Fourier expanded as in (3.98). For the SNR we need
only the lowest two coefficients α0 and α1 , with result
Fig. 3.13. Signal to noise ratio vs. noise input strength from the theory for excitable
systems. Much as for bistable systems treated in the preceding sections, the maximum
SNR is reached at an optimal, non-vanishing noise level
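Combining the SNR formula (3.100) with the rate (3.101) reproduces the shape of Fig. 3.13 qualitatively. The sketch below obtains the Fourier coefficients α0 and |α1| of α(t) by quadrature over one driving period; U and η are illustrative values.

```python
import numpy as np

# SNR = alpha1^2/(pi*alpha0) from (3.100), with the Kramers-type rate (3.101),
# alpha(t) = exp(-(U/kappa)*(1 + eta*cos(omega*t))).  The Fourier coefficients
# alpha0 and |alpha1| are computed by quadrature over one driving period;
# U and eta are illustrative values.
U, eta = 1.0, 0.3
theta = np.linspace(0.0, 2 * np.pi, 20001)   # omega*t over one period
dth = theta[1] - theta[0]

def trap(y):
    return dth * (y.sum() - 0.5 * (y[0] + y[-1]))

def snr(kappa):
    a = np.exp(-(U / kappa) * (1 + eta * np.cos(theta)))
    alpha0 = trap(a) / (2 * np.pi)
    alpha1 = abs(trap(a * np.cos(theta))) / np.pi
    return alpha1**2 / (np.pi * alpha0)

kappas = np.linspace(0.02, 3.0, 300)
vals = np.array([snr(k) for k in kappas])
i = int(vals.argmax())
print(kappas[i])   # maximum at an intermediate noise strength
```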
stochastic resonance is far less studied than its classical counterpart, and the
future may hold great advances in this direction. Indeed, quantum opticians have
recently become aware of this robust phenomenon [8,15,22,24,25,26,27,28]. In the
classical context, the great unresolved issue is whether there are any practical
applications of stochastic resonance.
Ideas for applications fall into two categories. The first involves engineering
technology. There have been proposals for direct use in electronics, resulting
in at least two U.S. government patents [29,30]. Another technological use is
to retrofit threshold detectors whose performance has degraded with age. The
most interesting example of this type may be the biomedical application of spe-
cially designed stockings for people with diminished balancing ability to help
them stand up [31].
The other category of applications is biological systems: does Mother Nature
already use stochastic resonance in some of her detectors? Sensory neurons are
notoriously noisy, and stochastic resonance might account for the exquisite sen-
sitivity of some animals in detecting weak coherent signals. The first experiments
on biological stochastic resonance were reported in 1993 using mechanoreceptors
of the crayfish Procambarus clarkii [32]. Two years later, experiments demon-
strated stochastic resonance at the sub-cellular level in lipid bilayer membranes
[33]. Other examples include the mechanosensory systems used by crickets to de-
tect air currents [34] and by rats to detect pin pricks [35]. Experiments on hair
cells, important auditory detectors in many vertebrates (including humans), are
especially suggestive [36]: the optimal noise level appears to coincide with the
naturally occurring level set by equilibrium thermal fluctuations! Perhaps these
cells evolved to take maximum advantage of these inevitable environmental fluc-
tuations.
The change in context from physical to life sciences has led researchers to
reconsider and refine even some very basic issues. For example, the extension
of stochastic resonance to excitable systems was primarily motivated by exper-
iments on neurons. In a similar way, a theoretical mechanism which employs a
randomly fluctuating rate [37] raises interesting fundamental questions concern-
ing its connection with microscopic stochastic descriptions. Another question of
great importance is: what is the most appropriate measure of ‘output perfor-
mance’? In the biological context information transmission is more significant
than signal-to-noise ratio, but it may be that the most relevant measure – what-
ever it is – depends on the particular application. Related to this is the question:
what kind of signals are most relevant? Truly periodic signals are uncommon in
the natural world, and the nonlinear nature of stochastic resonance suggests
that the study of complicated signals cannot be reduced to a superposition of
elemental periodic ones. And what about other important biological properties
of sensory neurons, such as adaptation and refractoriness? These effects are absent
from existing theories of stochastic resonance. Finally, the biological context has
given new impetus to the study of stochastic resonance in arrays of elements
[38,39], where it appears both that overall performance can be improved and
that tuning of the noise strength may be unnecessary.
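The benefit of pooling across an array can be seen in a small simulation. In this hedged sketch (function name and parameters again ours, chosen for illustration in the spirit of [38,39]), N independent noisy threshold elements receive the same subthreshold signal, and we compute the correlation between the signal and the summed output: a single element performs well only near its optimal noise level, while a larger array maintains a strong response over a broad range of noise strengths.

```python
import numpy as np

def pooled_correlation(n_units, noise_rms, A=0.5, thresh=1.0, f0=0.02,
                       n=20000, seed=1):
    """Correlation between a common subthreshold signal and the summed
    output of n_units threshold elements with independent noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    s = A * np.sin(2 * np.pi * f0 * t)                     # shared signal
    noise = noise_rms * rng.standard_normal((n_units, n))  # independent noise
    y = (s + noise > thresh).sum(axis=0)                   # pooled output
    return np.corrcoef(s, y)[0, 1]

noise_levels = (0.3, 0.5, 2.0)
single = [pooled_correlation(1, r) for r in noise_levels]
pooled = [pooled_correlation(64, r) for r in noise_levels]
# The 64-element array outperforms the single element at every noise level,
# and its performance varies far less across the range: summing averages out
# the independent noise while retaining the common signal modulation.
```

This is the sense in which tuning of the noise strength becomes unnecessary in large arrays: the pooled response stays high even well away from any single element's optimum.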
138 Kurt Wiesenfeld, Thomas Wellens, and Andreas Buchleitner
References
1. K. Wiesenfeld, F. Jaramillo: Chaos 8, 539 (1998)
2. F. Moss, K. Wiesenfeld: Scientific American 273, 66 (1995)
3. L. Gammaitoni, P. Hänggi, P. Jung, F. Marchesoni: Rev. Mod. Phys. 70, 223
(1998)
4. see, e.g., S.O. Rice, in: Selected Papers on Noise and Stochastic Processes, ed. by N.
Wax (Dover, New York 1954)
5. P. Jung: Phys. Rep. 234, 175 (1993)
6. C. Gardiner: Handbook of Stochastic Methods, 2nd ed. (Springer, Berlin 1983)
7. M. Grifoni, P. Hänggi: Phys. Rev. Lett. 76, 1611 (1996), Phys. Rev. E 54, 1390
(1996)
8. T. Wellens, A. Buchleitner: Phys. Rev. Lett. 84, 5118 (2000)
9. A. Bulsara, L. Gammaitoni: Physics Today 49, 39 (March 1996)
10. S. Fauve, F. Heslot: Phys. Lett. A 97, 5 (1983)
11. B. McNamara, K. Wiesenfeld, R. Roy: Phys. Rev. Lett. 60, 2626 (1988)
12. R. Löfstedt, S. N. Coppersmith: Phys. Rev. Lett. 72, 1947 (1994)
13. S. Chakravarty, A. J. Leggett: Phys. Rev. Lett. 52, 5 (1984)
14. I. Goychuk, P. Hänggi: Phys. Rev. E 59, 5137 (1999)
15. T. Wellens, A. Buchleitner: Chem. Phys. 268, 131 (2001)
16. B. T. H. Varcoe et al.: Nature 403, 743 (2000)
17. E. T. Jaynes, F.W. Cummings: Proc. IEEE 51, 89 (1963)
18. J. Krause, M. O. Scully, H. Walther: Phys. Rev. A 34, 2032 (1986)
19. P. Meystre, M. Sargent III: Elements of Quantum Optics (Springer, Berlin 1990)
20. O. Benson, G. Raithel, H. Walther: Phys. Rev. Lett. 72, 3506 (1994)
21. P. Filipowicz, J. Javanainen, P. Meystre: Phys. Rev. A 34, 3077 (1986)
22. T. Wellens, A. Buchleitner: J. Phys. A 32, 2895 (1999)
23. K. Wiesenfeld et al.: Phys. Rev. Lett. 72, 2125 (1994)
24. A. Buchleitner, R. N. Mantegna: Phys. Rev. Lett. 80, 3932 (1998)
25. L. Viola et al.: Phys. Rev. Lett. 84, 5466 (2000)
26. S. F. Huelga, M. Plenio: Phys. Rev. A 62, 052111 (2000)
27. L. Sanchez-Palencia et al.: Phys. Rev. Lett. 88, 133903 (2002)
28. P. K. Rekdal, B.-S. K. Skagerstam: Physica A 305, 404 (2002)
29. A. D. Hibbs: ‘Detection and communications device employing stochastic
resonance’, U.S. Patent No. 5574369 (1996)
30. A. R. Bulsara et al.: ‘Controlled stochastic resonance circuit’, U.S. Patent No.
6285249 (2001)
31. J. Niemi, A. Priplata, M. Salen, J. Harry, J. J. Collins: Noise-enhanced balance
control, preprint (2001)
32. J. K. Douglass, L. Wilkens, E. Pantazelou, F. Moss: Nature (London) 365, 337
(1993)
33. S. M. Bezrukov, I. Vodyanoy: Nature (London) 378, 362 (1995)
34. J. E. Levin, J. P. Miller: Nature (London) 380, 165 (1996)
35. J. J. Collins, T. T. Imhoff, P. Grigg: J. Neurophysiology 76, 642 (1996)
36. F. Jaramillo, K. Wiesenfeld: Nature Neurosci. 1, 384 (1998)
37. S. M. Bezrukov, I. Vodyanoy: Nature (London) 385, 319 (1997)
38. J. J. Collins, C. C. Chow, T. T. Imhoff: Nature (London) 376, 236 (1995)
39. J. F. Lindner, B. K. Meadows, W. L. Ditto, M. E. Inchiosa, A. R. Bulsara: Phys.
Rev. Lett. 75, 3 (1995)