
3 Stochastic Resonance

Kurt Wiesenfeld¹, Thomas Wellens², and Andreas Buchleitner²

¹ Georgia Institute of Technology, Atlanta, GA 30332, USA
² Max-Planck-Institut für Physik komplexer Systeme, D-01187 Dresden, Germany

3.1 Introduction
In this chapter, we discuss a surprising discovery made in the 1980s known as
stochastic resonance. It concerns a cooperative effect seen in certain nonlinear
systems in the presence of random noise. The signature of stochastic resonance
is that the coherence of the system output improves with an increase of random
noise, at least over some range of noise levels. The subject has been vigorously
studied over the past decade. Stochastic resonance is now known to occur in nu-
merous examples spanning a wide range of physical systems. The lion’s share of
the research has focussed on systems that can be adequately described by clas-
sical physics, though some attention has been devoted to quantum mechanical
realizations.
The purpose of this chapter is to develop the main features of the theory in a
way which is accessible to the non-expert. There already exist general overviews
of stochastic resonance [1,2]. These tell the interesting story of how the idea of
stochastic resonance was originally proposed to explain why the Earth suffers
Ice Ages with apparently near-periodic regularity, how that idea found its true
application in phenomena as diverse as laser dynamics and predator detection
by the simple crayfish, and how current efforts seek to determine whether it could
explain outstanding mysteries of human perception in the visual and auditory systems. There is
also a comprehensive technical review article on stochastic resonance [3].
In contrast to these references, this chapter aims to present the theory of
stochastic resonance at a quantitative level suitable for graduate students in the
physical sciences. The goal is to provide a solid basis from which to explore the
(by now rather large) technical literature on stochastic resonance. We also hope
to convey the current frontiers of the subject and where open questions remain.
Despite great progress, stochastic resonance remains an active and exciting area
of research.

3.2 Some Mathematical Tools

The quantitative theory of stochastic resonance involves the study of fluctuating


dynamical systems. The basic tools used to develop the theory are those of
stochastic processes.
The first question is this: how do we incorporate randomness into a model?
Here, we have in mind randomness which is separate from that which arises in

A. Buchleitner and K. Hornberger (Eds.): LNP 611, pp. 107–138, 2002.
© Springer-Verlag Berlin Heidelberg 2002

quantum mechanics. The idea is that some aspect of the system is not under per-
fect control, and suffers fluctuations about its nominal value. For example, sup-
pose we have a simple mass-spring system, which is subject to very complicated
and highly erratic forces due to collisions with the surrounding air molecules.
The equation of motion for the displacement x is

\[
m\ddot{x} = -k\,(x-\ell) + \xi(t)\,, \tag{3.1}
\]

where m is the mass, k is the spring constant, ℓ is the spring's unstretched length,
and the overdot denotes differentiation with respect to time. The forcing func-
tion ξ is for all practical purposes random. How do we deal with this equation
mathematically? We imagine that the function ξ is deterministic, but changes
each time we run the experiment. We also suppose that we have statistical in-
formation about the ensemble of functions ξ1 , ξ2 , ξ3 , ..., and nothing more. Then
we can imagine solving the sequence of problems

\[
m\ddot{x}_1 = -k\,(x_1-\ell) + \xi_1(t)\,, \qquad
m\ddot{x}_2 = -k\,(x_2-\ell) + \xi_2(t)\,, \qquad
m\ddot{x}_3 = -k\,(x_3-\ell) + \xi_3(t)\,, \tag{3.2}
\]

and so on, generating an ensemble of outputs xj . Since we have only statistical


information about the ξj , we can only hope to recover statistical information
about the xj .
What kind of statistical information do we need to supply about ξ? Well, we
typically specify information about how ξ is correlated with itself at different
times,
\[
\langle \xi(t) \rangle = \int \xi\, p(\xi,t)\, d\xi\,, \tag{3.3}
\]
\[
\langle \xi(t_1)\xi(t_2) \rangle = \iint \xi_1 \xi_2\, p(\xi_1,t_1;\xi_2,t_2)\, d\xi_1\, d\xi_2\,, \tag{3.4}
\]
\[
\langle \xi(t_1)\xi(t_2)\xi(t_3) \rangle = \dots \tag{3.5}
\]

and so on. In the above, p(ξ, t) stands for the probability density of ξ as a function
of time, p(ξ1 , t1 ; ξ2 , t2 ) is the joint probability density of ξ at two times, and so
forth. The second of these plays a central role, and is called the autocorrelation
function, or sometimes just the correlation function, for short.
We call the process ξ stationary if the moments depend only on the time-
differences, so that

\[
\langle \xi(t) \rangle = \langle \xi(0) \rangle\,, \qquad
\langle \xi(t_1)\xi(t_2) \rangle = \langle \xi(0)\xi(t_2-t_1) \rangle\,, \qquad
\langle \xi(t_1)\xi(t_2)\xi(t_3) \rangle = \langle \xi(0)\xi(t_2-t_1)\xi(t_3-t_1) \rangle\,, \ \dots \tag{3.6}
\]

For physical reasons we often expect the random process ξ to be stationary.



An important quantity is the power spectrum, which gives information about


a dynamical process in terms of its frequency content. If we consider a function
ξ(t) over the time interval t ∈ (0, tmax ), then we can form the Fourier transform
\[
\tilde{\xi}(\Omega) = \int_0^{t_{\max}} \xi(t)\, e^{-i\Omega t}\, dt\,. \tag{3.7}
\]
The power spectrum is
\[
S(\Omega) = \lim_{t_{\max}\to\infty} \frac{1}{2\pi t_{\max}} \left| \tilde{\xi}(\Omega) \right|^2 . \tag{3.8}
\]

For stationary processes one can compute the power spectrum via the Wiener–
Khintchine theorem,
\[
S(\Omega) = 4 \int_0^{\infty} C(\tau) \cos(\Omega\tau)\, d\tau\,, \tag{3.9}
\]

where C is the autocorrelation function [4].
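The Wiener–Khintchine relation is easy to check numerically. The sketch below (all parameter values illustrative) compares the ensemble-averaged periodogram of a simple exponentially correlated process with the Fourier transform of its known autocorrelation; it uses discrete-time, two-sided conventions rather than the one-sided convention of (3.9), but the statement being tested is the same.

```python
import numpy as np

# Discrete-time illustration of the Wiener-Khintchine relation: the
# ensemble-averaged periodogram of a stationary process equals the Fourier
# transform of its autocorrelation. We use an AR(1)-type process with
# autocorrelation C(k) proportional to a^|k|, whose spectrum is known in
# closed form: S(w) = 1 / (1 - 2 a cos w + a^2).

rng = np.random.default_rng(1)
n, a, n_real = 4096, 0.8, 400
kernel = a ** np.arange(60)        # a^59 ~ 2e-6, so truncation is negligible

S_avg = np.zeros(n)
for _ in range(n_real):
    w = rng.standard_normal(n + len(kernel))
    # stationary moving-average realization of the AR(1) process
    x = np.convolve(w, kernel)[len(kernel) - 1:len(kernel) - 1 + n]
    S_avg += np.abs(np.fft.fft(x)) ** 2 / n
S_avg /= n_real

omega = 2 * np.pi * np.fft.fftfreq(n)
S_theory = 1.0 / (1.0 - 2 * a * np.cos(omega) + a ** 2)

err = np.abs(S_avg[1:n // 2] / S_theory[1:n // 2] - 1.0)
print(err.mean())                  # a few percent for 400 realizations
```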

3.3 Example: Driven Linear System


Let’s consider a simple example. Later, we’ll use the results of this example to
compare against a system which displays stochastic resonance. But for now we
just want to illustrate the various tools we just introduced in a concrete example.
Consider a linear system driven by noise and a periodic input, with governing
equation
\[
\dot{x} = -\gamma x + \xi + \varepsilon\cos(\omega t)\,, \qquad x(0) = 0\,, \tag{3.10}
\]
where ξ is a random function and γ, ε, and ω are constants. We need to specify
the statistical properties of ξ. Let’s take

\[
\langle \xi(t) \rangle = 0\,, \tag{3.11}
\]
\[
\langle \xi(t)\xi(t') \rangle = \kappa\,\delta(t-t')\,. \tag{3.12}
\]

The parameter κ represents the strength of the fluctuations. By themselves,


these two conditions don’t uniquely specify the random process ξ, but for the
quantities we’re going to be interested in they are all we need. It is common to
call a random process with these properties ‘white noise’. The name refers to
the power spectrum of ξ. The autocorrelation function of ξ depends on the time
difference only, so we calculate the power spectrum using the Wiener–Khintchine
theorem (3.9), with result
\[
S(\Omega) = 2\kappa\,. \tag{3.13}
\]
The noise is ‘white’ because it contains equal power in every frequency bin.
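In discrete form this is easy to see: approximating the δ-correlated noise by independent Gaussian samples of variance κ/dt on a grid of spacing dt, the averaged periodogram comes out flat. A minimal sketch (illustrative parameters; the overall level of the flat spectrum depends on the normalization convention adopted, here simply |ξ̃|²/t_max):

```python
import numpy as np

# Discrete sketch of 'white' noise: a delta-correlated sequence has a flat
# average periodogram. We approximate xi(t) with <xi xi'> = kappa*delta(t-t')
# by independent Gaussians of variance kappa/dt on a grid of spacing dt.

rng = np.random.default_rng(4)
kappa, dt, n, n_real = 0.5, 0.01, 1024, 800

P = np.zeros(n)
for _ in range(n_real):
    xi = rng.standard_normal(n) * np.sqrt(kappa / dt)
    # |xi~(Omega)|^2 / t_max, with the integral discretized as dt * sum
    P += np.abs(np.fft.fft(xi) * dt) ** 2 / (n * dt)
P /= n_real

# every frequency bin carries the same power (kappa, in this normalization),
# up to statistical fluctuations
print(P.mean(), P.std() / P.mean())
```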
The solution to (3.10) is
\[
x(t) = e^{-\gamma t} \int_0^t e^{\gamma t'} \left( \xi(t') + \varepsilon\cos\omega t' \right) dt'\,. \tag{3.14}
\]

The ensemble average of this is


\[
\langle x(t) \rangle = \Big\langle e^{-\gamma t} \int_0^t e^{\gamma t'} \xi(t')\, dt' \Big\rangle
+ \Big\langle e^{-\gamma t} \int_0^t e^{\gamma t'} \varepsilon\cos(\omega t')\, dt' \Big\rangle\,. \tag{3.15}
\]

In the first term, the angular brackets can be moved all the way in until they hit
ξ since this is the only random quantity, and using (3.11) we conclude that this
term vanishes. There are no random factors in the second term: the ensemble
average has no effect and so the angular brackets are just dropped. Thus,
\[
\langle x(t) \rangle = \frac{\varepsilon}{\sqrt{\gamma^2+\omega^2}}\,\sin(\omega t+\phi)
- e^{-\gamma t}\,\frac{\varepsilon\sin\phi}{\sqrt{\gamma^2+\omega^2}}\,, \tag{3.16}
\]
where sin φ = γ/√(γ² + ω²). After a transient time, the mean response is periodic
with an amplitude which is independent of the noise strength.
Meanwhile, the correlation function "x(t)x(t + τ )# consists of four terms. Of
these, the two cross-terms drop out since each involves a single factor of ξ which
therefore vanishes upon ensemble averaging. This leaves two terms:
\[
\langle x(t)x(t+\tau) \rangle = e^{-\gamma(2t+\tau)} \int_0^t \!\! \int_0^{t+\tau}
e^{\gamma t'} e^{\gamma t''} \langle \xi(t')\xi(t'') \rangle\, dt''\, dt' \tag{3.17}
\]
\[
\qquad\qquad + \,\varepsilon^2 e^{-\gamma(2t+\tau)} \int_0^t \!\! \int_0^{t+\tau}
e^{\gamma t'} e^{\gamma t''} \cos(\omega t') \cos(\omega t'')\, dt''\, dt'\,.
\]

Evaluating the integrals yields

\[
\langle x(t)x(t+\tau) \rangle = \frac{\kappa}{2\gamma}\, e^{-\gamma\tau}
+ \varepsilon^2\, \frac{\sin[\omega t+\phi]\, \sin[\omega(t+\tau)+\phi]}{\gamma^2+\omega^2} \tag{3.18}
\]
\[
= \frac{\kappa}{2\gamma}\, e^{-\gamma\tau}
+ \frac{1}{2}\, \frac{\varepsilon^2}{\gamma^2+\omega^2}
\left[ \cos\omega\tau + \cos(2\omega t + \omega\tau + 2\phi) \right],
\]
where in the first equation we neglected the transient response by taking the
limit t ≫ 1/γ. Notice, however, that even after the transient has died away,
the correlation function is not stationary. That is, the correlation function de-
pends on both the initial time t and the time difference τ . On the other hand,
the dependence on t is merely periodic, a consequence of the periodic forcing
function.
It is certainly possible to measure and otherwise investigate the systematic
variations in output that occur over one period of the drive [5] – the variance
might be largest at a particular point in the drive cycle, for instance. However,
this is relatively subtle information, and not of central interest in the study of
stochastic resonance. Instead, it is convenient to make an additional time average
(in t) over one period, and take
\[
C(\tau) = \frac{\omega}{2\pi} \int_0^{2\pi/\omega} \langle x(t)x(t+\tau) \rangle\, dt\,, \tag{3.19}
\]

Fig. 3.1. Power spectrum for the linear system driven by both white noise and a
periodic input, consisting of a sharp peak at the signal frequency ω, and a Lorentzian
noise background, see (3.21)

where 2π/ω is the driving period. As the notation indicates, the resulting corre-
lation function depends only on the time difference τ ,
\[
C(\tau) = \frac{\kappa}{2\gamma}\, e^{-\gamma\tau}
+ \frac{1}{2}\, \frac{\varepsilon^2}{\gamma^2+\omega^2}\, \cos\omega\tau\,. \tag{3.20}
\]
Note that the periodic influence of the drive has not been destroyed by the
extra averaging. The correlation function is now stationary, so we get the power
spectrum by taking the Fourier transform:
\[
S(\Omega) = \frac{2\kappa}{\gamma^2+\Omega^2}
+ \frac{\varepsilon^2}{\gamma^2+\omega^2}\, \delta(\Omega-\omega)\,. \tag{3.21}
\]
The result is plotted in Fig. 3.1. We see that the power spectrum consists of
a broadband part plus a narrow spike at the signal frequency. For low enough
signal frequencies, the broadband spectrum is essentially flat. A useful measure
of the output coherence is the signal-to-noise ratio (SNR). We divide the strength
of the delta function spike at the signal frequency by the level of broadband noise
at this same frequency. In our example, this gives
\[
\mathrm{SNR} = \frac{\varepsilon^2}{\gamma^2+\omega^2} \cdot \frac{\gamma^2+\omega^2}{2\kappa}
= \frac{\varepsilon^2}{2\kappa}\,. \tag{3.22}
\]
It is worth making an additional technical point here. In experiments (or simu-
lations), one typically integrates the power spectrum over some finite but small
bandwidth ∆Ω to determine both the output signal and noise powers. The band-
width should be large enough to pick up all the power under the spike at the
signal frequency, but still small enough so that the broadband spectrum is flat
over the interval. If we carry out these integrations for the present example, the
signal part is unchanged but the noise part picks up a bandwidth factor, with
result
\[
\mathrm{SNR} = \frac{\varepsilon^2}{2\kappa\,\Delta\Omega}\,. \tag{3.23}
\]
From a theoretical perspective, the bandwidth factor is essentially arbitrary, and
we won’t bother to include it.
Our primary interest is in the behavior of the SNR as a function of the
input noise strength κ, and in this example we see that the SNR monotonically
decreases with κ. Raising the noise level always degrades the quality of the
output. You may say that this last statement is intuitively obvious, and that
only an overeducated ninny would bother to demonstrate this fact by doing a
calculation. But in fact this intuition can be wrong! This is what makes stochastic
resonance such an intriguing phenomenon.
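As a numerical aside, the noise-independence of the mean response (3.16) can be checked with a simple Euler–Maruyama simulation of (3.10); all parameter values below are illustrative.

```python
import numpy as np

# Euler-Maruyama sketch of the driven linear system (3.10),
#   dx = (-gamma*x + eps*cos(omega*t)) dt + sqrt(kappa) dW ,
# checking that the ensemble-averaged response oscillates with the
# amplitude eps/sqrt(gamma^2 + omega^2) of (3.16), independently of the
# noise strength kappa.

rng = np.random.default_rng(2)
gamma, eps, omega = 1.0, 0.5, 2.0
dt, n_steps, n_traj = 0.01, 4000, 2000

amps = {}
for kappa in (0.1, 1.0):
    x = np.zeros(n_traj)
    mean_x = np.empty(n_steps)
    for i in range(n_steps):
        drift = -gamma * x + eps * np.cos(omega * i * dt)
        x = x + drift * dt + np.sqrt(kappa * dt) * rng.standard_normal(n_traj)
        mean_x[i] = x.mean()
    tail = mean_x[int(5.0 / (gamma * dt)):]        # transient e^{-gamma t} gone
    amps[kappa] = np.sqrt(2.0 * np.mean(tail**2))  # RMS -> sinusoid amplitude

print(amps, eps / np.hypot(gamma, omega))
```

Both estimated amplitudes land near ε/√(γ² + ω²), while the noise only broadens the spread of individual trajectories.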

3.4 Mean First Passage Time and Kramers Escape Formula

The following example serves to illustrate certain aspects of the Fokker–Planck
approach to analyzing stochastic processes and also presents an important result
we will use later. We begin with a nonlinear stochastic problem
\[
\dot{x} = -\frac{dV}{dx} + \xi\,, \tag{3.24}
\]
where the potential function V has the shape illustrated in Fig. 3.2.
We ask: if a particle is initially placed at the lefthand minimum x1 , how long
will it take for the particle to reach the righthand minimum x3 for the first time?
The answer will vary from realization to realization, and we call the expectation
value the ‘mean first passage time’. In the limit of weak noise, the result is

\[
\langle T \rangle = \frac{2\pi}{\sqrt{-V''(x_1)\,V''(x_2)}}\; e^{2\Delta V/\kappa}\,, \tag{3.25}
\]

where the primes denote differentiation with respect to x and ∆V is the energy
difference between the potential maximum at x2 and minimum at x1 . This is the
justly famous Kramers formula. It is widely (not to say universally!) applicable
because, as we can see, the details of the potential are irrelevant. Only the
energy barrier and the curvatures at the minimum and maximum enter the
formula. For a derivation of this, see Gardiner’s book [6]. Here is the basic
idea. Consider the evolution of the probability density. Initially, it is a delta
function, p(x, 0) = δ(x − x1 ), and its subsequent time evolution p(x, t) follows a
partial differential equation, called the Fokker–Planck equation [6]. Since we are
interested in the very first instant a particle reaches x3 , we solve the Fokker–
Planck equation with an absorbing boundary at x3 . The solution represents the
Fig. 3.2. Effective particle potential for Kramers’ escape under the governing equa-
tion (3.24). The potential exhibits stable equilibria at x1 and x3 , and an unstable
equilibrium at x2 , on top of the potential barrier of height ∆V

conditional probability that the particle is at position x at time t given that it


started at position x1 . Therefore, the probability that the particle has escaped
after time t is
\[
p_{\rm esc}(t; x_1) = 1 - \int_{-\infty}^{x_3} dx\; p_{\rm cond}(x,t\,|\,x_1,0)\,. \tag{3.26}
\]

As time passes, pesc increases monotonically from zero toward one. The first
passage time distribution is just the derivative

\[
T(t; x_1 \to x_3) = \frac{\partial}{\partial t}\, p_{\rm esc}\,, \tag{3.27}
\]
and the mean time is
\[
\langle T \rangle = \int_0^{\infty} t\; T(t; x_1 \to x_3)\, dt\,. \tag{3.28}
\]

This last result can be written directly in terms of the potential function V :
\[
\langle T \rangle = \alpha \int_{x_1}^{x_3} dy\; e^{\alpha V(y)} \int_{-\infty}^{y} dz\; e^{-\alpha V(z)}\,, \tag{3.29}
\]

where α = 2/κ. For weak noise, κ is small, so α is large, and the integrals can
be evaluated in terms of the extremal values of V , which leads to the relatively
detail-free form of the Kramers formula.
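Both expressions are easy to evaluate numerically. The sketch below computes the double integral (3.29) for the quartic double well V(x) = −x²/2 + x⁴/4 (minima at ±1, barrier ΔV = 1/4) and compares it with the Kramers asymptote (3.25); the helper names, grid size, and lower cutoff standing in for −∞ are illustrative choices.

```python
import numpy as np

# Mean first passage time for the quartic double well V(x) = -x^2/2 + x^4/4:
# evaluate the exact double-integral expression (3.29) numerically and
# compare it with the weak-noise Kramers asymptote (3.25).

def V(x):
    return -0.5 * x**2 + 0.25 * x**4

def mfpt_exact(kappa, x_cut=-4.0, x1=-1.0, x3=1.0, n=40001):
    alpha = 2.0 / kappa
    z = np.linspace(x_cut, x3, n)
    dz = z[1] - z[0]
    # inner[i] approximates int_{-inf}^{z_i} exp(-alpha V) dz
    inner = np.cumsum(np.exp(-alpha * V(z))) * dz
    mask = z >= x1                       # outer integral runs from x1 to x3
    outer = np.exp(alpha * V(z[mask])) * inner[mask]
    return alpha * np.sum(outer) * dz

def mfpt_kramers(kappa):
    # V''(x1) = 2, |V''(x2)| = 1, Delta V = 1/4 in formula (3.25)
    return 2.0 * np.pi / np.sqrt(2.0) * np.exp(2.0 * 0.25 / kappa)

T10, T05 = mfpt_exact(0.10), mfpt_exact(0.05)
print(T10, mfpt_kramers(0.10))
print(T05, mfpt_kramers(0.05))
```

As κ decreases, the ratio of the two approaches one, and the exponential dependence on 2ΔV/κ dominates, exactly as the detail-free form suggests.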

3.5 Rate Equation Description


The Fokker–Planck equation governs the time evolution of the probability distri-
bution, but we will be able to use a simplified description for one of the theories
of stochastic resonance developed in the next section. The simpler version is
called the rate equation description, and reduces the continuous phase space to
a discrete set of attractors. For a damped particle in a bistable potential there
are two stable states, and we make the approximation that the system is either
in one state or the other. This is reasonable if the relaxation times are short
compared with the typical transition times between the states. The probability
density is then written

\[
p(x,t) \approx n_+(t)\,\delta(x-x_+) + n_-(t)\,\delta(x-x_-)\,, \tag{3.30}
\]

where, for example,
\[
n_+ = \int_{x_2}^{\infty} p(x,t)\, dx\,, \tag{3.31}
\]

and similarly for n− (t). If the separation of time scales is obeyed, then the
precise location of x that divides the phase space is unimportant, though the
logical place for it is at the potential maximum x2 . We also imagine that the
transition rates between states are specified. These rates may be given by the
Kramers formula, or they may be determined in some other way (including
possibly experimental measurements). The dynamics is then governed by the
linear ordinary differential equations

\[
\dot{n}_+ = W_{\rm up}\, n_- - W_{\rm down}\, n_+\,, \qquad
\dot{n}_- = W_{\rm down}\, n_+ - W_{\rm up}\, n_-\,. \tag{3.32}
\]

These can be reduced to a single equation since n+ + n− = 1.
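With constant rates the reduced equation has a simple exponential solution; the following sketch integrates it numerically and checks the relaxation, at rate Wup + Wdown, to the equilibrium occupancy Wup/(Wup + Wdown). The rate values are illustrative.

```python
import numpy as np

# Sketch of the rate-equation description (3.32): with constant rates the
# occupation probability n_plus(t) relaxes exponentially toward the
# equilibrium value W_up / (W_up + W_down). We integrate with a simple
# Euler scheme and compare against the closed-form solution.

W_up, W_down = 0.3, 0.7
dt, n_steps = 1e-3, 20000
t = np.arange(n_steps) * dt

n_plus = np.empty(n_steps)
n_plus[0] = 1.0                        # start in the '+' state
for i in range(1, n_steps):
    dn = W_up * (1.0 - n_plus[i-1]) - W_down * n_plus[i-1]
    n_plus[i] = n_plus[i-1] + dn * dt

n_eq = W_up / (W_up + W_down)
exact = (n_plus[0] - n_eq) * np.exp(-(W_up + W_down) * t) + n_eq
print(np.abs(n_plus - exact).max())    # Euler error, small for this dt
```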

3.6 Two State Theory of Stochastic Resonance


We now have the tools assembled to develop the theory of two state stochastic
resonance. More sophisticated treatments can be found [3,7], but the simple
theory here has the virtue that it is quite general, and is the one most often used
in the literature. Its chief disadvantage is that the transition rates are assumed
to be given.
The basic situation is illustrated in Fig. 3.3. There are two stable states,
with the system making transitions between them. To fix ideas, you can imagine
a heavily damped particle moving in a double well potential subject to noise.
We also include a time periodic influence which asymmetrically alters the rates,
favoring first transitions ‘up’ and then a half-period later favoring transitions
‘down’. This amounts to rocking the potential back and forth as shown in Fig. 3.3.
The goal of the theory is to calculate the output signal to noise ratio (SNR)
as a function of the input noise strength. The theory splits into two parts. First,

Fig. 3.3. Modulated potential function for two state SR. The periodic driving force
which is rocking the bistable potential back and forth passes its extrema at one fourth
(middle right) and three fourths (middle left) of the driving field period. This induces
a periodic modulation of the transition rates (3.34)

we suppose that the transition rates are known, and we calculate the SNR in
terms of these rates. This part is very general. Second, we relate the rates to
the various system parameters. This part depends on the details of the system
under consideration.
From (3.32) and the condition n+ + n− = 1, we have
\[
\dot{n}_+ = -\left( W_{\rm up} + W_{\rm down} \right) n_+ + W_{\rm up}\,. \tag{3.33}
\]
We assume time dependent transition rates
\[
W_{\rm up} = W_0 + \varepsilon\cos\omega t\,, \qquad
W_{\rm down} = W_0 - \varepsilon\cos\omega t\,. \tag{3.34}
\]
As time passes the output x switches between two values, say x = ±c (to be
identified with x1 and x3 in Fig. 3.2).

3.7 The Unmodulated Case


It is instructive to consider first the unmodulated case (ε = 0). This allows us
to see the overall structure of the calculation without getting bogged down in
complicated expressions.
We can immediately integrate (3.33):
\[
n_+(t) = e^{-2W_0(t-t_0)}\left[ n_+(t_0) - \frac{1}{2} \right] + \frac{1}{2}\,, \tag{3.35}
\]

where n+ (t0 ) is the initial condition. Note that this determines:


1. the conditional probabilities,
\[
p_c(x=c,\,t\,|\,x=c,\,t_0) = \frac{1}{2}\, e^{-2W_0(t-t_0)} + \frac{1}{2}\,,
\]
\[
p_c(x=c,\,t\,|\,x=-c,\,t_0) = -\frac{1}{2}\, e^{-2W_0(t-t_0)} + \frac{1}{2}\,, \tag{3.36}
\]
where we set n+ (t0 ) = 1 in the first of these, and n+ (t0 ) = 0 in the second.
In this notation, the first of these is read ‘the conditional probability that
x = c at time t given that x = c at the initial time t0 ’.
Equation (3.35) also determines
2. the equilibrium probabilities,
\[
p_{\rm eq}(x=c) = n_+(t_0\to-\infty) = \frac{1}{2}\,, \qquad
p_{\rm eq}(x=-c) = 1 - n_+(t_0\to-\infty) = \frac{1}{2}\,. \tag{3.37}
\]
From these, we can easily compute the correlation function.
In general, when the state variable can take on a continuous range of values,
the correlation function is given by
\[
\langle x(t)x(t+\tau) \rangle = \iint x\,x'\; p_{\rm joint}(x,t;x',t+\tau)\, dx\, dx'
= \iint x\,x'\; p_c(x',t+\tau\,|\,x,t)\, p_{\rm eq}(x,t)\, dx\, dx'\,, \tag{3.38}
\]

where pjoint denotes the joint probability to find the particle in state x at time
t, and in state x% at time t + τ . For our two state system, the double integral
collapses to a discrete sum of four terms:

\[
\langle x(t)x(t+\tau) \rangle = c^2\, p_c(c,t+\tau|c,t)\, p_{\rm eq}(c,t)
+ (c)(-c)\, p_c(c,t+\tau|-c,t)\, p_{\rm eq}(-c,t)
\]
\[
\qquad + (-c)(c)\, p_c(-c,t+\tau|c,t)\, p_{\rm eq}(c,t)
+ (-c)^2\, p_c(-c,t+\tau|-c,t)\, p_{\rm eq}(-c,t)\,. \tag{3.39}
\]

Collecting the various terms, and substituting into this last equation, we find
the very simple result

\[
\langle x(t)x(t+\tau) \rangle = c^2\, e^{-2W_0\tau}\,, \tag{3.40}
\]

and so (with (3.9))


\[
S(\Omega) = 4 \int_0^{\infty} \langle x(t)x(t+\tau) \rangle \cos\Omega\tau\, d\tau
= 4c^2\, \frac{2W_0}{\Omega^2 + 4W_0^2}\,. \tag{3.41}
\]
The power spectrum is a Lorentzian with half-width 2W0 .
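This exponential correlation function is easy to confirm by direct simulation of the symmetric telegraph process (illustrative parameters):

```python
import numpy as np

# Monte Carlo check of (3.40): a symmetric two-state process x = +/-c that
# flips with rate W0 has autocorrelation c^2 exp(-2 W0 tau). We simulate
# the telegraph process on a time grid (flip probability W0*dt per step)
# and estimate <x(t)x(t+tau)> by averaging over many realizations.

rng = np.random.default_rng(3)
c, W0 = 1.0, 1.0
dt, n_steps, n_traj = 0.01, 400, 20000

x = np.where(rng.random(n_traj) < 0.5, c, -c)   # equilibrium initial state
x0 = x.copy()
corr = np.empty(n_steps)
for k in range(n_steps):
    corr[k] = np.mean(x0 * x)
    x = np.where(rng.random(n_traj) < W0 * dt, -x, x)

tau = np.arange(n_steps) * dt
print(np.abs(corr - c**2 * np.exp(-2 * W0 * tau)).max())
```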

3.8 Time Dependent Rates


Now put the modulation back in (ε ≠ 0). The rate equation (3.33) reads

\[
\dot{n}_+ = -2W_0\, n_+ + W_0 + \varepsilon\cos\omega t\,, \tag{3.42}
\]

with solution
\[
n_+(t) = e^{-2W_0(t-t_0)}\left[ n_+(t_0) - \frac{1}{2}
- \frac{\varepsilon\cos(\omega t_0-\phi)}{\sqrt{4W_0^2+\omega^2}} \right]
+ \frac{1}{2} + \frac{\varepsilon\cos(\omega t-\phi)}{\sqrt{4W_0^2+\omega^2}}\,, \tag{3.43}
\]
where tan φ = ω/(2W0).
This is nearly the same as (3.35), except for two periodic terms due to the
modulation. The equilibrium probabilities are
\[
p_{\rm eq}(c,t) = n_+(t_0\to-\infty)
= \frac{1}{2} + \frac{\varepsilon\cos(\omega t-\phi)}{\sqrt{4W_0^2+\omega^2}}\,, \qquad
p_{\rm eq}(-c,t) = 1 - p_{\rm eq}(c,t)\,. \tag{3.44}
\]
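The solution (3.43) can be verified by integrating the rate equation (3.42) numerically and comparing the long-time oscillation of n+ with the predicted amplitude ε/√(4W0² + ω²) about the mean 1/2 (illustrative parameter values):

```python
import numpy as np

# Numerical check of (3.43): integrate the modulated rate equation (3.42),
#   dn/dt = -2*W0*n + W0 + eps*cos(omega*t) ,
# and compare the long-time oscillation of n_plus with the predicted
# amplitude eps / sqrt(4*W0^2 + omega^2) about the mean 1/2.

W0, eps, omega = 1.0, 0.2, 3.0
dt, n_steps = 1e-3, 40000             # total time 40 >> 1/(2*W0)

n = 0.0                               # start in the '-' state, n_plus(0) = 0
vals = np.empty(n_steps)
for i in range(n_steps):
    n += (-2.0 * W0 * n + W0 + eps * np.cos(omega * i * dt)) * dt
    vals[i] = n

tail = vals[n_steps // 2:]            # transient e^{-2*W0*t} long gone
amp = 0.5 * (tail.max() - tail.min())
print(amp, eps / np.sqrt(4.0 * W0**2 + omega**2))
```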

The conditional probabilities are also determined by (3.43). For example, con-
sider the probability that x(t + τ ) = c given that x(t) = c. One simply sets
n+ (t0 ) = 1 and t − t0 = τ in (3.43). If we denote this process of substitution by
double brackets, we have

\[
p_c(c, t+\tau\,|\,c, t) = [[\, n_+(t_0)=1;\ t-t_0=\tau \,]]\,. \tag{3.45}
\]

Similarly,
\[
p_c(c, t+\tau\,|\,-c, t) = [[\, n_+(t_0)=0;\ t-t_0=\tau \,]]\,. \tag{3.46}
\]
The two other conditional probabilities we need are simply constructed from the
first two:

\[
p_c(-c, t+\tau\,|\,c, t) = 1 - p_c(c, t+\tau\,|\,c, t)\,, \qquad
p_c(-c, t+\tau\,|\,-c, t) = 1 - p_c(c, t+\tau\,|\,-c, t)\,. \tag{3.47}
\]

Armed with the equilibrium and conditional probabilities, it is a straightforward
(if tedious) exercise to construct the correlation function using (3.39), with
result
\[
\frac{1}{c^2}\, \langle x(t)x(t+\tau) \rangle = e^{-2W_0\tau}
- 4 e^{-2W_0\tau}\, \frac{\varepsilon^2 \cos^2(\omega t-\phi)}{4W_0^2+\omega^2}
+ \frac{4\varepsilon^2}{4W_0^2+\omega^2}\, \cos(\omega t-\phi)\,\cos(\omega t+\omega\tau-\phi)\,. \tag{3.48}
\]

As expected, the periodic modulation leads to a correlation function that depends
on both t and τ. We do an additional phase averaging (like (3.19)),
\[
C(\tau) = \frac{\omega}{2\pi} \int_0^{2\pi/\omega} \langle x(t)x(t+\tau) \rangle\, dt\,, \tag{3.49}
\]

and since the time average of cos²(ωt − φ) is 1/2, while that of
cos(ωt − φ) cos(ωt + ωτ − φ) is (1/2) cos ωτ, we find
\[
C(\tau) = c^2 \left[ 1 - \frac{2\varepsilon^2}{4W_0^2+\omega^2} \right] e^{-2W_0\tau}
+ \frac{2c^2\varepsilon^2}{4W_0^2+\omega^2}\, \cos\omega\tau\,. \tag{3.50}
\]

Use the Wiener–Khintchine theorem (3.9) to determine the power spectrum:


\[
S(\Omega) = 4 \int_0^{\infty} C(\tau)\cos(\Omega\tau)\, d\tau
= 4c^2 \left[ 1 - \frac{2\varepsilon^2}{4W_0^2+\omega^2} \right] \frac{2W_0}{4W_0^2+\Omega^2}
+ \frac{4\pi c^2 \varepsilon^2}{4W_0^2+\omega^2}\, \delta(\Omega-\omega)\,. \tag{3.51}
\]

The first term gives the broadband part of the power output. Note that it
diminishes with increasing ε in such a way that the total power is conserved:
\[
\int_0^{\infty} S(\Omega)\, d\Omega = {\rm const}\,.
\]
From S(Ω), we can write down the signal-to-noise ratio SNR. Just divide the
coefficient of the delta function by the value of the broadband term at Ω = ω.
The resulting expression simplifies to
\[
{\rm SNR} = \frac{\pi\varepsilon^2}{2W_0}
\left[ 1 - \frac{2\varepsilon^2}{4W_0^2+\omega^2} \right]^{-1}. \tag{3.52}
\]

For small modulations, ε ≪ 1, and so
\[
{\rm SNR} \approx \frac{\pi\varepsilon^2}{2W_0}\,. \tag{3.53}
\]
This is the main result of the first part of the theory. What we need now is to
express ε and W0 in terms of the specific system parameters, which means we
need a theory for the transition rates Wup and Wdown . This part of the theory
depends on the details of the particular system. We consider here a popular
example, the so-called overdamped particle in a double well potential. The dy-
namics is governed by the Langevin equation

\[
\dot{x} = \gamma x - x^3 + a\cos\omega t + \xi\,, \tag{3.54}
\]

where ξ is white noise. This is an example of (3.24) described earlier during the
discussion of the Kramers escape formula, and in fact we will use that formula
to determine the transition rates. The corresponding potential V (x, t) is
\[
V(x,t) = -\frac{1}{2}\gamma x^2 + \frac{1}{4} x^4 - a x \cos\omega t\,, \tag{3.55}
\]
which when a = 0 describes a symmetric bistable system, and when a ≠ 0 is
periodically tilted in an antisymmetric fashion (see Fig. 3.3). We assume that the
modulation strength a is sufficiently weak that the system remains in the high-
barrier/low-noise limit throughout, in which case the Kramers formula holds.

Actually, the problem is a little more subtle since we are considering a time-
dependent potential, while the Kramers formula is derived for the time indepen-
dent case. It remains valid, however, if the modulation frequency is sufficiently
low, which is sometimes referred to as the adiabatic limit of the time dependent
problem.
We equate the transition rates to the reciprocal of the mean first passage
time (3.25), so that
\[
W = \frac{1}{2\pi} \sqrt{V''_{\rm min} \left| V''_{\rm max} \right|}\;
\exp\!\left( -\frac{2}{\kappa}\, \Delta V \right). \tag{3.56}
\]

Consider first the unmodulated case a = 0. Then V″ = 3x² − γ, the potential
minima lie at x1,3 = ±√γ, the potential maximum is at x2 = 0, and so the two
barrier heights are equal and given by ΔV = γ²/4. Putting this into the rate
formula yields
\[
W = \frac{\gamma}{\sqrt{2}\,\pi}\; e^{-\gamma^2/(2\kappa)}\,. \tag{3.57}
\]

The two rates are equal because the unmodulated potential is symmetric. Now
consider the modulated case, with a ≠ 0 but small. Then
\[
x_{\rm min} = \pm\sqrt{\gamma} + {\rm correction}\,, \tag{3.58}
\]
\[
x_{\rm max} = 0 + {\rm correction}\,; \tag{3.59}
\]
\[
\Delta V \approx V(0) - V(\pm\sqrt{\gamma})
= \frac{1}{4}\gamma^2 \pm a\sqrt{\gamma}\, \cos\omega t\,, \tag{3.60}
\]
so that the rate formula becomes
\[
W(t) = \frac{\gamma}{\sqrt{2}\,\pi}
\exp\!\left[ -\frac{2}{\kappa} \left( \frac{1}{4}\gamma^2 \pm a\sqrt{\gamma}\, \cos\omega t \right) \right] \tag{3.61}
\]
\[
= \frac{\gamma}{\sqrt{2}\,\pi}\; e^{-\gamma^2/(2\kappa)}
\exp\!\left[ \mp\, \frac{2a\sqrt{\gamma}}{\kappa} \cos\omega t \right]. \tag{3.62}
\]

If we expand the last exponential factor for small a, and equate W (t) to the
form assumed in the rate equation part (3.34) of the theory we get the required
expressions
\[
W_0 = \frac{\gamma}{\sqrt{2}\,\pi}\; e^{-\gamma^2/(2\kappa)}\,, \tag{3.63}
\]
\[
\varepsilon = \frac{\sqrt{2}\, a\, \gamma^{3/2}}{\pi\kappa}\; e^{-\gamma^2/(2\kappa)}\,. \tag{3.64}
\]
This is the main result of the second part of the theory. We can put this together
with the general expression for the signal-to-noise ratio. Using the simpler form
(3.53), the result reduces to
\[
{\rm SNR} = \frac{\sqrt{2}\, a^2 \gamma^2}{\kappa^2}\; e^{-\gamma^2/(2\kappa)}\,. \tag{3.65}
\]

Fig. 3.4. Signal to noise ratio (SNR) vs. input noise strength for the two state theory.
By virtue of (3.65), the SNR exhibits a maximum at an optimal, non-vanishing value
κ∗ of the noise strength κ. This is the characteristic feature of stochastic resonance.
SNR is measured in ‘decibels’ (dB for short), defined as SNR(dB) = 10 log10 SNR

This is the final result of the two state theory. Figure 3.4 shows a semilog
plot of the SNR, as a function of noise strength κ. The function in (3.65) is very
flat and goes to zero in the noise-free limit (so the logarithm goes to −∞ as
κ → 0), and the SNR decays as 1/κ2 for large noise. Of course, the Kramers
formula breaks down for large noise, so the derived expression in this limit can’t
be taken seriously, although we expect it to be qualitatively right since the SNR
must fall off (one would think!) for sufficiently high κ. In any event, the exciting
result is that the SNR passes through a maximum at an intermediate noise level
κ∗. From the formula, we readily find that κ∗ = γ²/4 = ΔV, where ΔV is the
barrier height in the unmodulated limit (a = 0).
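The non-monotonic shape, and the location of the maximum, can be confirmed directly from (3.65); the parameter values below are illustrative.

```python
import numpy as np

# The final two-state prediction (3.65),
#   SNR = sqrt(2) * a^2 * gamma^2 / kappa^2 * exp(-gamma^2 / (2*kappa)),
# evaluated on a grid of noise strengths: the curve vanishes as kappa -> 0,
# decays for large kappa, and peaks at kappa* = gamma^2/4 = Delta V.

gamma, a = 1.0, 0.1
kappa = np.linspace(0.01, 2.0, 2000)
snr = np.sqrt(2.0) * a**2 * gamma**2 / kappa**2 * np.exp(-gamma**2 / (2 * kappa))

k_star = kappa[np.argmax(snr)]
print(k_star)          # should sit near gamma^2 / 4 = 0.25
```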
Stochastic resonance has turned out to be very easy to see experimentally.
However, there is one competing effect which is often encountered which may
diminish or even mask entirely the maximum in Fig. 3.4. This is the so-called
intrawell motion, a name borrowed from the double-well potential picture and
which refers to the fact that the potential minima will in general vary with the
time-dependent signal. This means that the output of the system is not constant
between switching events; rather, it oscillates with small amplitude at the signal
frequency, which of course contributes a small amount of output power at the
signal frequency. In the limit of small noise, the power contained in the switching
events is very small, and the residual power due to the intrawell oscillations is no
longer negligible. We can get a quantitative measure of this effect by studying
the dynamics restricted to one of the potential minima (since the oscillations

Fig. 3.5. Effect of intrawell motion on the output SNR. As the relative amount of
intrawell motion increases, its contribution increasingly masks the underlying stochastic
resonance behavior by increasing the SNR value at small values of the noise intensity
κ (i.e., below κ∗ )

are small in the important limit). We already did this calculation earlier in the
chapter, when we studied as an example the dynamics described by (3.10). We
found that
\[
{\rm SNR}_{\rm intra} = \frac{\varepsilon^2}{2\kappa}\,, \tag{3.66}
\]

which diverges at zero noise strength and is otherwise monotonically decreasing
with κ. Whether or not this contribution washes out the maximum in expression
(3.65) depends on how ‘soft’ the potential is about the two minima. Said another
way, the more nearly the output follows a strict two-state waveform, the less
overlap there is between the intrawell and interwell contributions to the SNR (see
Fig. 3.5). If you read through the literature on stochastic resonance carefully,
you will find that in some experiments the data has been filtered [8] to make
it strictly two-state before the power spectrum is computed. In other cases, the
stochastic resonance effect is so strong that such filtering is unnecessary.
Finally, we note that although in most cases the SNR is used for a quanti-
tative analysis of the stochastic resonance, some authors prefer to look at the
output signal power S alone, i.e. the coefficient in front of the δ-function in the
power spectrum, (3.51). By this measure, a system exhibits stochastic resonance
if S has a local maximum at some non-zero input noise intensity. (Compare
this against the behavior of a linear system: since superposition applies, S is
independent of noise intensity.) In the case of stochastic resonance, S exhibits a
maximum at roughly the same noise intensity as the SNR [3,9].

3.9 Two-State Examples: Classical and Quantum Stochastic Resonance
You can find numerous physical examples of stochastic resonance in the technical
literature. Let’s take a brief look here at three of these. Together, they give an
idea of how diverse physical problems can be described by the same underlying
theory.
The first example is taken from the original experimental paper reporting
stochastic resonance [10]. This experiment involved a simple electronic switch
known as a Schmitt trigger. This is a situation where the output is truly two-
state: the output voltage is either plus or minus fifteen volts, so we don’t have
to worry about the complication of intrawell motion. The input to the trigger is
the sum of two independent sources, one noisy and one periodic. The amplitude
of the periodic input is kept very small – too small to induce switching of the
trigger on its own – while the amplitude of the noise can be varied as desired.
The experimental results are shown in Fig. 3.6. The first two panels, Figs. 3.6a and b, show the output power spectrum for two different levels of input noise. The middle panel (b) corresponds to the larger value of input noise. Two effects are obvious: the sharp line at the signal frequency (ω/2π = 23 Hz in the experiment) is larger, and the broadband noise level is smaller. (The latter effect, though striking, turns out to be a feature rarely seen in experiments. It is predicted by the full expression (3.51), though one can see there that it is a relatively minor effect, quantitatively speaking.) It is obvious from the data that the output signal to noise ratio has increased with the addition of input noise, providing a clear demonstration of stochastic resonance. From each power spectrum, one can read off the signal to noise ratio. The results from a large set of such runs, for different noise strengths and different signal amplitudes, are shown in Fig. 3.6c.
The second example is taken from experiments on a ring laser [11]. In this
kind of laser, the cavity is in the form of a closed ring, and laser light can
propagate in either the clockwise or counterclockwise sense, but not both. Which
of these modes is observed depends on fluctuations in the initial conditions. If
the laser is turned off and then turned on again, there is a 50/50 chance it
will be in the same mode. It is possible to modify these odds to prefer one
of the modes over the other. In the experiment, this was done using a crystal
inserted in the laser cavity, and acoustically exciting the crystal with the sum of
a small periodic signal and a relatively large noise component. The output was
the intensity of the clockwise laser mode. A time series of this output looks like
random switching between two states, but there is a regularity to the switching
which becomes evident at an optimal level of input noise. The signal to noise
ratio can be measured in the same way as in the Schmitt trigger experiment,
and the results are shown in Fig. 3.7. Since this system is very nearly two-state,
with very little ‘intrawell motion’, the predictions of the two state theory fit the
data extremely well.
The third example is drawn from a theoretical paper investigating the possibility of quantum stochastic resonance [12]. The specific problem studied concerned a two state system coupled to a heat bath of harmonic oscillators.

Fig. 3.6. Experimental results from the Schmitt trigger (after Ref. [10]). The top two panels show the power spectrum before (a) and after (b) adding noise. The bottom panel (c) shows a plot, gathered from several spectra, of the SNR vs. input noise level, for three different levels of input signal amplitude

Fig. 3.7. Results from the Georgia Tech ring laser experiment [11], where the intensity of the clockwise laser mode is optimally synchronized with a weak signal injected through an acousto-optical modulator placed in the cavity, at nonvanishing noise level

The problem is fundamentally quantum mechanical because the transitions are due to tunneling through the energy barrier separating the two metastable states. As a
result, the Kramers formula is not the appropriate theory for the transition rates;
instead, one uses the dissipative quantum tunneling formula of Chakravarty and
Leggett [13]. The periodic ‘signal’ in this problem can arise either through a time
dependent temperature T or a time dependent barrier height. The results are
intriguing: while indeed stochastic resonance is predicted, the conditions when
it is expected are different than in the classical case. This is due to the differ-
ent temperature dependence of the quantum tunneling rates as compared to the
classical Kramers rates. In particular, if the underlying potential is symmetric, i.e. for any purely even potential like (3.55), the tunneling rates are proportional to T^(2α−1) (where the dissipation coefficient α quantifies the strength of coupling to the heat bath), instead of depending exponentially on T like the classical Kramers rate (3.25) (with κ there replaced by T).
stochastic resonance in a symmetric potential is observed only in the regime
α > 1 of strong dissipation [14]. For weak dissipation, on the other hand, quan-
tum stochastic resonance is predicted when including an asymmetric part to the
potential [12] (which would diminish the effect in the classical case).
A further, fundamental difference of quantum and classical stochastic reso-
nance is that a rate equation description of the dynamics is not always correct
for quantum systems. Indeed, it is well known that, in the absence of any dissi-
pation, a quantum mechanical two state system exhibits purely coherent oscil-
lations, and remnants of this coherent dynamics are also present in the case of
very weak dissipation at low temperatures. This regime has been examined in [7], where stochastic resonance was found only in parameter regimes where incoherent tunneling dominates over coherent transitions. Hence, also in the quantum case, the basic mechanism of stochastic resonance can be understood in terms of a two state model (with quantum noise activated transition rates), although the exact quantitative behavior may be modified by quantum coherence [15].

3.10 Quantum Stochastic Resonance in the Micromaser
In the dissipative two-level system discussed above, transitions between the two states are induced by quantum tunneling. Another genuinely quantum mechanical effect which may influence the transition rates of a bistable system is the noise arising from the measurement process: according to the (a priori unpredictable) measurement result, the quantum system is projected onto the corresponding eigenstate of the measured observable. In the following, we will present an example of quantum stochastic resonance where this measurement noise plays an important role.
Our bistable system will be a single mode of the quantized radiation field (i.e., a harmonic oscillator) sustained by a microwave cavity of very high quality. (By this we mean a very small decay rate γ of the radiation field within the cavity. The storage time γ⁻¹ can reach several hundred milliseconds in the most advanced experiments [16].) In order to control and to monitor the state of the photon field, we let the field interact with a two-level atom, whose state can be measured after exit from the cavity. If the atom is initially in its upper energy eigenstate |u⟩, and the field in the Fock state |n⟩, then the final quantum state after the atom-field interaction U reads, according to the Jaynes–Cummings model [17] (see Sects. 2.1.1 and 4.11.3 in this volume):

    U|u, n⟩ = cos(φ√(n+1)) |u, n⟩ − i sin(φ√(n+1)) |d, n+1⟩ .    (3.67)

Here, the vacuum Rabi angle φ = g t_int is given by the atom-field coupling strength g and the interaction time t_int. As is evident from (3.67), the probability of detecting the atom finally in the lower state |d⟩, and thereby emitting a photon into the field, |n⟩ → |n+1⟩, is given by

    β_n = sin²(φ√(n+1)) ,    (3.68)

whereas with probability 1 − β_n the field remains in the state |n⟩.
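As a quick numerical illustration, the emission probability (3.68) can be evaluated for the vacuum Rabi angle φ = 1.03 used in the micromaser example below (a minimal sketch; the choice of Fock states n = 6 and n = 26 anticipates the two metastable photon numbers discussed there):

```python
import math

def beta(n, phi=1.03):
    """Emission probability (3.68): chance that an atom entering in |u>
    leaves the cavity in |d>, adding one photon to a field in Fock state |n>."""
    return math.sin(phi * math.sqrt(n + 1)) ** 2

for n in (6, 26):
    print(n, round(beta(n), 3))
```

The values come out near 0.16 and 0.64, consistent with the plateaus β_{n1} ≈ 0.15 and β_{n2} ≈ 0.65 quoted in the discussion of Fig. 3.10.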
While this clearly demonstrates the random influence of the atomic detection on the photon field, we still do not see any bistability of the photon field. As we will show in the following, the latter can be induced if we let the photon field interact with a sequence of two-level atoms, and also take into account the cavity dissipation. We consider a steady flux r of atoms, initially in the upper state |u⟩, which arrive in the cavity at random, uncorrelated times. We assume that the average time interval r⁻¹ between two consecutive atoms is much larger than the interaction time t_int of a single atom with the cavity. Then the probability of finding two (or more) atoms in the cavity at the same time can be neglected, and we are dealing with a one-atom maser, or micromaser [18].
In addition to the interaction with the atoms, the photon field is also coupled to the cavity walls, which are cooled to a low temperature T < 1 K, such that the mean number n_b = (exp(ℏω/kT) − 1)⁻¹ [19] of photons in thermal equilibrium is smaller than one. According to the standard master equation of the damped harmonic oscillator (employing the weak-coupling and Markov approximations) [19], the influence of the heat bath on the photon field is fully characterized by the temperature T (or, equivalently, n_b) and the cavity decay rate γ, which quantifies the coupling strength to the heat bath. Specifically, given n photons inside the cavity, the rate for emission of a photon from the heat bath into the cavity is γ n_b (n+1), whereas γ (n_b+1) n is the rate for absorption. Thus, in total, the dynamics of the maser field can be described as a jump process between neighboring photon numbers n → n ± 1 with the following transition rates:

    t_n^+ = r β_n + γ n_b (n+1) ,    (3.69)
    t_n^− = γ (n_b+1) n .    (3.70)

The average of one single realization of this jump process over a sufficiently long time approaches the following stationary photon number distribution [6]:

    p_n^(ss) = p_0^(ss) ∏_{k=1}^{n} ( t_{k−1}^+ / t_k^− ) ,    (3.71)

where p_0^(ss) is determined by normalization. We assume that the atomic flux r is much larger than the cavity decay rate γ, such that the stationary distribution is far away from thermal equilibrium. Under these circumstances, a double-peaked stationary distribution, indicating a bistable photon field, can establish itself if t_n^+ and t_n^− as functions of n intersect (at least) three times: at n1 and n2 (defining stable equilibria), and at n3 (corresponding to an unstable equilibrium, with n1 < n3 < n2). An example is shown in Fig. 3.8.
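The product formula (3.71) is easy to evaluate numerically. The following sketch (assuming the parameters quoted for Fig. 3.8 – r = 40γ, n_b = 0.2, φ = 1.03 – working in units of γ = 1, with an arbitrary truncation of the photon ladder) reproduces the double-peaked structure:

```python
import math

R, NB, PHI = 40.0, 0.2, 1.03   # atomic flux r/gamma, thermal photons, Rabi angle
NMAX = 60                      # truncation of the photon-number ladder

def t_plus(n):   # upward rate (3.69), in units of gamma
    return R * math.sin(PHI * math.sqrt(n + 1)) ** 2 + NB * (n + 1)

def t_minus(n):  # downward rate (3.70)
    return (NB + 1) * n

# stationary distribution (3.71): p_n = p_0 * prod_{k=1}^n t^+_{k-1} / t^-_k
p = [1.0]
for k in range(1, NMAX + 1):
    p.append(p[-1] * t_plus(k - 1) / t_minus(k))
norm = sum(p)
p = [x / norm for x in p]

peaks = [n for n in range(1, NMAX) if p[n] > p[n - 1] and p[n] > p[n + 1]]
print("local maxima at n =", peaks)   # double-peaked: two metastable states
```

With these parameters the two maxima come out at n = 6 and n = 26, the metastable photon numbers quoted below in connection with Fig. 3.10.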
In this case, the photon number will be found almost always near one of the two maxima at n1 or n2, and transitions between these metastable states occur. The transition rates W1,2 (from n1 to n2, and vice versa) of these ‘macroscopic’ jumps of the photon field can be expressed in terms of the rates for the microscopic jumps [6]:¹

    W1^(−1) = Σ_{n=n1}^{n2} [p_n^(ss) t_n^+]^(−1) Σ_{m=0}^{n} p_m^(ss) ,
    W2^(−1) = Σ_{n=n1+1}^{n2} [p_n^(ss) t_n^+]^(−1) Σ_{m=n}^{∞} p_m^(ss) .    (3.72)

¹ For large n, if the discrete photon number can be approximated by a continuous variable, the transition rates may also be obtained by a Kramers analysis, see Sect. 3.4. However, the effective micromaser potential derived in [21] explicitly depends on T, leading to a temperature dependence of the transition rates different from the classical Kramers law (3.29).
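A self-contained numerical sketch of (3.72), with the same assumed parameters as in Fig. 3.8 and the metastable photon numbers n1 = 6, n2 = 26 quoted later in the text (the infinite sum over m is truncated at the ladder cutoff):

```python
import math

R, NB, PHI, NMAX = 40.0, 0.2, 1.03, 80
N1, N2 = 6, 26   # metastable photon numbers (cf. Fig. 3.8)

def t_plus(n):   # upward rate (3.69), units of gamma = 1
    return R * math.sin(PHI * math.sqrt(n + 1)) ** 2 + NB * (n + 1)

def t_minus(n):  # downward rate (3.70)
    return (NB + 1) * n

p = [1.0]
for k in range(1, NMAX + 1):           # stationary distribution (3.71)
    p.append(p[-1] * t_plus(k - 1) / t_minus(k))
norm = sum(p)
p = [x / norm for x in p]

# macroscopic transition rates (3.72)
W1_inv = sum(1.0 / (p[n] * t_plus(n)) * sum(p[:n + 1]) for n in range(N1, N2 + 1))
W2_inv = sum(1.0 / (p[n] * t_plus(n)) * sum(p[n:]) for n in range(N1 + 1, N2 + 1))
W1, W2 = 1.0 / W1_inv, 1.0 / W2_inv
print("W1, W2 (units of gamma):", W1, W2)
```

Both rates come out several orders of magnitude below γ: the macroscopic jumps are indeed rare events on the scale of the microscopic dynamics.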
Fig. 3.8. Double-peaked stationary photon number distribution of the micromaser field in the bistable operation mode. The experimental parameters (comparable to the laboratory scenario described in [20]) are: temperature T = 0.6 K (corresponding to a mean thermal photon number n_b = 0.2), atomic flux r = 40γ, and vacuum Rabi angle φ = 1.03

Fig. 3.9. Average residence times W1,2^(−1) in the two metastable states as a function of the inverse temperature 1/T. Atomic flux r = 40γ = 667 s⁻¹, vacuum Rabi angle φ = 1.03. A deviation from Kramers’ law (a straight line in the semilogarithmic plot) is detected, due to quantum-noise induced transitions at T = 0

In Fig. 3.9, we plotted the transition rates as a function of the temperature T, for the same experimental parameters as in Fig. 3.8.
Evidently, the rate W2 from n2 to n1 is more sensitive to the temperature than W1. The reason is that the microscopic rates t_n^−, given by (3.70), which are responsible for transitions from n2 to n1, are proportional to n_b + 1, and therefore (since n_b < 1) do not depend on n_b as strongly as the t_n^+'s, whose thermal part is proportional to n_b. As a result, the two rates intersect at about T = 0.5 K.
A further difference from the classical Kramers rates (3.25) is that they do not vanish even at T = 0. This is due to the presence of the noise associated with the atomic detections (the measurement noise) on the one hand, leading to t_n^+ > 0, and the fact that the heat bath can absorb photons at random times even at T = 0, on the other hand, leading to t_n^− > 0. These noise sources are of genuinely quantum mechanical origin, which explains the deviation from the classical Kramers law.
Stochastic resonance appears if we add a periodic signal to our system, e.g., by modulating the atomic flux

    r(t) = ⟨r⟩ + Δr sin(ωt) .    (3.73)

The modulation amplitude Δr should be small enough to prevent deterministic transitions between the two metastable states, and the modulation period 2π/ω should be of the same order of magnitude as the inverse transition rates W1,2^(−1) (close to the intersection W1 = W2 in Fig. 3.9), in order to enable a matching of time scales (driving frequency and transition rate). A numerical simulation of the maser dynamics in the presence of such a periodic signal is shown in Fig. 3.10, for three different temperatures of the environment.
Here, we plotted the time evolution of the probability β to detect an exiting atom in the lower state |d⟩, determined by averaging the atomic detection results (‘1’ or ‘0’ for detection in |d⟩ or |u⟩, respectively) over small time windows of length Δt = 1 s (ca. 700 detection events). Since β depends on the photon number inside the cavity, see (3.68), the observed quantum jumps between β_{n1} ≈ 0.15 and β_{n2} ≈ 0.65 are a signature of the jumps of the photon field between the two metastable states around n1 = 6 and n2 = 26. As also predicted by Fig. 3.9, for the lowest temperature T = 0.3 K, Fig. 3.10a, the average residence time in state 1 is much longer than the driving period 2π/ω = 42 s. Consequently, the individual quantum jumps occur at unpredictable times. However, if we add the right amount of noise to the system, i.e., increase the temperature to T = 0.6 K, we observe almost periodic transitions: in most cases, the photon field jumps from state 1 to 2 and back again once per modulation period, see Fig. 3.10b. If we further increase the temperature, Fig. 3.10c, the cooperativity between signal and noise is lost again. This illustrates the stochastic resonance effect nicely: the most regular behavior occurs at a finite, nonvanishing noise level.
For a quantitative analysis of the synchronization effect, we can calculate the power spectra either by Fourier transformation of numerically simulated atomic detection sequences (such as in Fig. 3.10), or by employing the two-state model described by (3.33), using the (adiabatically modulated) transition rates given in (3.72). The results are shown in Fig. 3.11, where we plot the strength S of the signal peak as a function of the temperature. While, in both cases, a stochastic resonance maximum occurs at T ≈ 0.6 K – the same temperature where the optimal synchronization is observed in Fig. 3.10 – the quantitative behavior is quite different, due to the intrawell modulation, which enhances (at low T) or suppresses (at high T) the signal as compared to the two-state model [22].

Fig. 3.10. Time evolution of the probability β to detect an atom in |d⟩, with periodically modulated atomic flux r(t), according to (3.73), with mean value ⟨r⟩ = 40γ, modulation amplitude Δr = 6.9γ, and period 2π/ω = 42 s. Vacuum Rabi angle φ = 1.03. The noise-induced synchronization of quantum jumps is poor for the lowest temperature (too rare quantum jumps), optimal for the intermediate temperature (almost regular quantum jumps), and again poor for the highest temperature (too frequent quantum jumps). Also note the clearly observable intrawell motion for the lowest temperature

Let us note that the two-state model does not exhibit a maximum of the SNR; rather, the SNR increases monotonically as a function of T [22], until the two-state approximation breaks down at high temperatures. (This can be traced back to an atypical behavior of the modulated transition rates, whose modulation amplitudes increase with increasing temperature, whereas the modulation amplitude (3.25) of the classical Kramers rates is approximately constant in the relevant temperature region.) Although an increase of the SNR with increasing temperature may also be considered a fingerprint of stochastic resonance, this shows that the signal strength S may, in some cases, give a better quantitative picture of stochastic resonance than the SNR. On the other hand, the exact model (i.e., without the two-state approximation) does exhibit a maximum of the SNR, since the intrawell dynamics reduce the signal output S at high temperatures, see Fig. 3.11.
Fig. 3.11. Signal output S as a function of the temperature T, for the same parameters as in Fig. 3.10. A stochastic resonance maximum is observed at T ≈ 0.6 K. The circles show the results of the simulation of the maser dynamics (run for 75 000 s for each value of T), which agree perfectly well with the results of an exact calculation of the power spectrum (dashed line) [22]. The deviation from the two-state model (solid line) reveals the influence of the intrawell dynamics

Finally, we want to discuss briefly the case of ‘coherent pumping’, where the atoms are injected into the cavity in a coherent superposition

    |ψ⟩ = a|u⟩ + b|d⟩    (3.74)

of their energy eigenstates. Above, we considered the case a = 1, b = 0, of ‘incoherent pumping’ (no initial coherence between the upper and the lower atomic eigenstate). However, by applying a suitable classical microwave pulse to the atoms just before they enter the cavity, we may, in principle, choose arbitrary values of a and b. In general, an initial atomic coherence between |u⟩ and |d⟩ will induce nonvanishing coherences of the cavity field between different photon numbers, which prevents the simple description (3.69), (3.70) of the maser dynamics in terms of a jump process between neighboring photon numbers. Nevertheless, for suitably chosen experimental parameters, the photon field exhibits – just as in the case of incoherent pumping – a bistable behavior, with transition rates W1,2 between two metastable states (although the calculation of the transition rates is more complicated [15]). In contrast to the case of incoherent pumping, however, the quantum jumps of the photon field can now also be monitored by measuring other components of the atomic Bloch vector on exit from the cavity, e.g., by detecting the atoms in (|u⟩ ± |d⟩)/√2 instead of |u⟩, |d⟩. Furthermore, we can inscribe the weak periodic signal directly into the initial atomic coherence, by modulating a(t), b(t). A detailed discussion of the stochastic resonance in the atomic coherence achievable in this way can be found in [8,15].
Fig. 3.12. Time series of an excitable system such as a neuron. Most of the time the
system resides in its rest state, with output signal V = 0. Excitation to its bursting
state causes a narrow spike in V (t)

3.11 Stochastic Resonance in Excitable Systems

The two state theory describes many systems known to exhibit stochastic resonance. Many, but not all. The original idea and its early development explicitly consider hopping between two stable states. But it has turned out that stochastic resonance occurs more generally. Today, we think that there are a handful of different mechanisms, all of which show the same general properties that we identify with stochastic resonance.
One place where the two state theory won’t do is in so-called excitable sys-
tems. These are systems which spend most of their time in a resting state, but
can enter into a transient bursting state if perturbed strongly enough. Neurons
are a common example. A typical time series is shown in Fig. 3.12. Each exci-
tation event is represented as a narrow spike. Most of the time the system is in
its rest state, where the output V is zero.
We now develop the theory of stochastic resonance for such a system. The
basic idea of the calculation is as follows. First, we imagine that the spike train is
characterized by an event rate α which depends on a weak periodic influence as
well as a (possibly strong) random influence. In the absence of any input signal,
we assume that the events occur randomly and independently at an average rate
α0 which depends on the input noise level. The effect of a weak periodic signal
is assumed to periodically modulate the event rate, so that

α(t) = α0 + ε cos ωt . (3.75)

Here, ω is the frequency of the input signal and ε is a constant which depends
on the size of the signal.
We also assume that each event contributes a narrow spike to the output, according to

    V(t) = Σ_i F(t − t_i) ,    (3.76)

where the event times are t_1, t_2, . . . . The pulses are narrow, and it will turn out that only the pulse area (and not its detailed shape) is important. Note that overlapping pulses simply add in the output. Under the assumption that the probability of an event depends only on the instantaneous value of α and not on the arrival times of any previous events, we can calculate the correlation function ⟨V(t)V(t + τ)⟩, and then the power spectrum and the signal to noise ratio.
With this plan in mind, let’s first consider the case where α is constant, corresponding to the case of no input signal. Then the problem reduces to the classical ‘shot effect’ [4]. Suppose that there occur exactly K events in a long time interval t ∈ (0, t_max). We denote the corresponding output by V_K. Then

    ⟨V_K(t)V_K(t + τ)⟩ = ∫₀^{t_max} (dt_1/t_max) · · · ∫₀^{t_max} (dt_K/t_max) Σ_{i=1}^{K} Σ_{j=1}^{K} F(t − t_i) F(t + τ − t_j)
      = Σ_{i=1}^{K} Σ_{j=1}^{K} { ∫₀^{t_max} (dt_1/t_max) · · · ∫₀^{t_max} (dt_K/t_max) F(t − t_i) F(t + τ − t_j) } .    (3.77)

The double sum has K² terms. Of these, there are K terms with i = j and K(K − 1) terms with i ≠ j:

    ⟨V_K(t)V_K(t + τ)⟩ = K ∫₀^{t_max} (dt_i/t_max) F(t − t_i) F(t + τ − t_i)
      + K(K − 1) ∫₀^{t_max} (dt_i/t_max) ∫₀^{t_max} (dt_j/t_max) F(t − t_i) F(t + τ − t_j) .    (3.78)

To explicitly evaluate the integrals we assume that the pulse shape F is a rectangle of width Δt and height H. Then

    ∫₀^{t_max} (dt_i/t_max) F(t − t_i) F(t + τ − t_i) = (H²/t_max)(Δt − |τ|) ,    (3.79)

if |τ| < Δt, and zero otherwise. Plotted as a function of τ, this is a triangle of width 2Δt and area H²Δt²/t_max. In the limit as Δt → 0 and H → ∞ such that the product HΔt remains constant, this becomes a Dirac delta function:

    ∫₀^{t_max} (dt_i/t_max) F(t − t_i) F(t + τ − t_i) = ((HΔt)²/t_max) δ(τ) .    (3.80)
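The overlap integral (3.79) is easy to check numerically. A throwaway sketch (the pulse parameters H = 2, Δt = 0.5, t_max = 10 and the grid size are arbitrary choices):

```python
H, DT, TMAX = 2.0, 0.5, 10.0
N = 100000                      # midpoint integration grid over (0, tmax)
h = TMAX / N

def F(x):                       # rectangular pulse of height H, width DT
    return H if 0.0 <= x < DT else 0.0

def overlap(tau, t=5.0):        # left-hand side of (3.79)
    return sum(F(t - (i + 0.5) * h) * F(t + tau - (i + 0.5) * h)
               for i in range(N)) * h / TMAX

for tau in (0.0, 0.25, 0.6):
    print(tau, overlap(tau))
```

For |τ| < Δt the result reproduces the triangle H²(Δt − |τ|)/t_max, and it vanishes once the two rectangles no longer overlap.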
Similarly, we can evaluate the double integral in (3.78):

    ∫₀^{t_max} (dt_i/t_max) ∫₀^{t_max} (dt_j/t_max) F(t − t_i) F(t + τ − t_j) ≈ [ ∫₀^{t_max} (dt_i/t_max) F(t − t_i) ]² = (HΔt/t_max)² ,    (3.81)

where the approximate equality neglects the small correction for pulses which overlap each other or the endpoints of the interval (0, t_max). Thus,

    ⟨V_K(t)V_K(t + τ)⟩ = (K/t_max) (HΔt)² δ(τ) + (K(K − 1)/t_max²) (HΔt)² .    (3.82)
This expression assumes that there are exactly K events in the interval (0, t_max). Now, the probability that there are exactly K events in a time t_max depends on the event rate α according to the Poisson distribution:

    P_K(t_max) = ((α t_max)^K / K!) e^{−α t_max} .    (3.83)

Performing the weighted average of (3.82) over all possible K gives us the correlation function

    C(τ) = Σ_{K=0}^{∞} ⟨V_K(t)V_K(t + τ)⟩ P_K(t_max)
      = (HΔt)² Σ_{K=0}^{∞} { (K/t_max) δ(τ) + K(K − 1)/t_max² } ((α t_max)^K / K!) e^{−α t_max}
      = (HΔt)² { α δ(τ) + α² } ,    (3.84)

and the corresponding power spectrum is, using (3.9),

    S(Ω) = (HΔt)² { 2α + (4/π) α² δ(Ω) } .    (3.85)
We now repeat the calculation, this time using a time dependent rate α(t). We modify the Poisson distribution (3.83) as follows:

    P_K(t_max) = (1/K!) Z^K e^{−Z} ,    (3.86)

where

    Z(t_max) = ∫₀^{t_max} α(t) dt .    (3.87)

As a check, this properly reduces to (3.83) when α is constant.
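As a further check, the counting statistics behind (3.86) can be reproduced by simulating events with the standard thinning method for an inhomogeneous Poisson process; for a Poisson distribution, both the mean and the variance of the count K should equal Z. (All numerical values below are arbitrary choices for this check.)

```python
import math, random

random.seed(2)
A0, A1, OMEGA, TMAX = 5.0, 2.0, 2.0 * math.pi / 10.0, 10.0
AMAX = A0 + A1                          # rate bound needed for thinning
Z = A0 * TMAX                           # integral (3.87) over one full period

def count_events():
    """One realization: thin a homogeneous process of rate AMAX down to
    the time-dependent rate alpha(t) = A0 + A1*cos(OMEGA*t)."""
    t, k = 0.0, 0
    while True:
        t += random.expovariate(AMAX)
        if t > TMAX:
            return k
        if random.random() < (A0 + A1 * math.cos(OMEGA * t)) / AMAX:
            k += 1

samples = [count_events() for _ in range(4000)]
mean = sum(samples) / len(samples)
var = sum((k - mean) ** 2 for k in samples) / len(samples)
print(mean, var)   # both should be close to Z, as (3.86) predicts
```

Only Z, not the detailed time course of α(t), fixes the count distribution, which is the content of (3.86).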
If there are exactly K events in (0, t_max),

    ⟨V_K(t)V_K(t + τ)⟩ = Σ_{i=1}^{K} Σ_{j=1}^{K} ∫₀^{t_max} (α(t_1) dt_1 / Z) · · · ∫₀^{t_max} (α(t_K) dt_K / Z) F(t − t_i) F(t + τ − t_j) .    (3.88)
Grouping terms according to whether or not i = j,

    ⟨V_K(t)V_K(t + τ)⟩ = (K/Z) ∫₀^{t_max} α(t_i) F(t − t_i) F(t + τ − t_i) dt_i
      + (K(K − 1)/Z²) ∫₀^{t_max} α(t_i) F(t − t_i) dt_i ∫₀^{t_max} α(t_j) F(t + τ − t_j) dt_j .    (3.89)

If F is sharply peaked,

    ∫₀^{t_max} α(t_i) F(t − t_i) dt_i ≈ α(t) ∫₀^{t_max} F(t − t_i) dt_i = α(t) HΔt ,    (3.90)

and

    ∫₀^{t_max} α(t_j) F(t + τ − t_j) dt_j ≈ α(t + τ) ∫₀^{t_max} F(t + τ − t_j) dt_j = α(t + τ) HΔt ,    (3.91)

as well as

    ∫₀^{t_max} α(t_i) F(t − t_i) F(t + τ − t_i) dt_i ≈ α(t) ∫₀^{t_max} F(t − t_i) F(t + τ − t_i) dt_i = α(t) (HΔt)² δ(τ) ,    (3.92)

so that

    ⟨V_K(t)V_K(t + τ)⟩ = (K/Z) α(t) (HΔt)² δ(τ) + (K(K − 1)/Z²) α(t) α(t + τ) (HΔt)² ,    (3.93)
if there are exactly K events in (0, t_max). Taking the weighted sum over all K yields the full correlation function:

    C(τ; t) = Σ_{K=0}^{∞} ⟨V_K(t)V_K(t + τ)⟩ P_K(t_max)
      = (HΔt)² { α(t) δ(τ) + α(t) α(t + τ) } .    (3.94)

As we have come to expect, the presence of the periodic signal results in an expression which depends on both τ and t, so we perform a phase average to eliminate the t-dependence. For example, suppose

    α(t) = α0 + α1 cos(ωt + ψ) .    (3.95)


Then we have

    ⟨C(τ; t)⟩_ψ = (1/2π) ∫₀^{2π} C(τ; t) dψ
      = (HΔt)² { δ(τ) α0 + α0² + α1² (1/2π) ∫₀^{2π} cos(ωt + ψ) cos(ωt + ωτ + ψ) dψ }
      = (HΔt)² { δ(τ) α0 + α0² + (1/2) α1² cos ωτ } ,    (3.96)

and the power spectrum becomes

    S(Ω) = (HΔt)² { 2α0 + (4/π) α0² δ(Ω) + (2/π) α1² δ(Ω − ω) } .    (3.97)
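The phase-average step in (3.96) rests on the identity ⟨cos(ωt + ψ) cos(ωt + ωτ + ψ)⟩_ψ = ½ cos ωτ, independent of t. A quick numerical confirmation (the values of ω, t, and τ are arbitrary):

```python
import math

def phase_avg(omega, t, tau, m=20000):
    """(1/2pi) * integral over psi of cos(wt + psi) * cos(wt + w*tau + psi),
    evaluated by the midpoint rule over one full period of psi."""
    s = sum(math.cos(omega * t + psi) * math.cos(omega * (t + tau) + psi)
            for psi in (2.0 * math.pi * (i + 0.5) / m for i in range(m)))
    return s / m

omega, tau = 1.7, 0.9
for t in (0.0, 3.3):   # the result does not depend on t
    print(phase_avg(omega, t, tau), 0.5 * math.cos(omega * tau))
```

Because the integrand is a trigonometric polynomial, the midpoint rule over a full period is essentially exact.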

More generally, if the rate α is periodic, we can write

    α(t) = α0 + Σ_{j=1}^{∞} αj cos(jωt + ψj) .    (3.98)

Going through the phase average as before, we arrive at the power spectrum

    S(Ω) = (HΔt)² { 2α0 + (4/π) α0² δ(Ω) + (2/π) Σ_{j=1}^{∞} αj² δ(Ω − jω) } .    (3.99)

This describes a constant broadband background plus a series of spikes at the signal frequency ω and its harmonics. Reading off the signal to noise ratio yields

    SNR = α1² / (π α0) .    (3.100)
To complete the theory, we need to know how the event rate depends on the system parameters, in particular the input noise intensity. Since this depends on the specific details of the system, we consider an example. Suppose α obeys a Kramers-type formula

    α(t) = exp[ −(U/κ)(1 + η cos ωt) ] ,    (3.101)

where κ is the noise strength and U, η, and ω are constants. The justification for this form in terms of a particular Langevin model can be found elsewhere [23]; U plays the role of the potential barrier and η is proportional to the signal amplitude. The rate can be Fourier expanded as in (3.98). For the SNR we need only the lowest two coefficients α0 and α1, with the result

    SNR = ( 8 I1²(z) / (π I0(z)) ) e^{−U/κ} ,    (3.102)
Fig. 3.13. Signal to noise ratio vs. noise input strength from the theory for excitable
systems. Much as for bistable systems treated in the preceding sections, the maximum
SNR is reached at an optimal, non-vanishing noise level

where I_n is the modified Bessel function of order n and z = ηU/κ. A plot of the SNR vs. κ is shown in Fig. 3.13. The theory predicts an increase in SNR over some range of noise input. The theory does a fair job explaining data from, e.g., experiments on crayfish mechanoreceptors and simulations of the FitzHugh-Nagumo equation [23]. The biggest discrepancy is that the theory predicts too rapid a fall-off at high noise. This could be due to the breakdown of some fundamental assumption such as the statistical independence of consecutive events, or may simply reflect that the Kramers-type formula (3.101) doesn’t properly capture the true rate dependence. In fact, there is nothing published (to our knowledge) on what the correct rate formula should be for either the crayfish neuron or the FitzHugh-Nagumo equation, so there is no reason to expect close agreement. The important point to take away is that the qualitative behavior is common to the various systems, and is the same whether the underlying dynamics represents bistable switching or excitable bursting.
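Since I_n(z) = (1/π) ∫₀^π e^(z cos θ) cos(nθ) dθ, the SNR (3.102) can be evaluated without special-function libraries. The sketch below (barrier U = 1 and signal parameter η = 0.25 are arbitrary choices) scans κ and locates the interior maximum that Fig. 3.13 displays:

```python
import math

def bessel_i(n, z, m=2000):
    """Modified Bessel function I_n(z) via its integral representation."""
    h = math.pi / m
    return sum(math.exp(z * math.cos((k + 0.5) * h)) * math.cos(n * (k + 0.5) * h)
               for k in range(m)) * h / math.pi

def snr(kappa, U=1.0, eta=0.25):
    """Signal-to-noise ratio (3.102) for the Kramers-type rate (3.101)."""
    z = eta * U / kappa
    return 8.0 * bessel_i(1, z) ** 2 / (math.pi * bessel_i(0, z)) * math.exp(-U / kappa)

kappas = [0.02 * k for k in range(1, 101)]          # noise strengths 0.02 ... 2.0
values = [snr(k) for k in kappas]
best = kappas[values.index(max(values))]
print("SNR is maximal near kappa =", best)          # interior maximum: SR
```

The SNR vanishes both for κ → 0 (no switching events) and for κ → ∞ (the signal modulation washes out), with a maximum at an intermediate noise strength, as the theory predicts.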

3.12 The Frontier of Stochastic Resonance
Noise is usually considered a nuisance, but in systems which display stochastic resonance an increase in input noise improves the system’s ability to detect weak signals. Stochastic resonance is now firmly established as a common phenomenon which appears in a wide variety of physical situations. And the list of examples continues to grow.
The study of stochastic resonance is by now a mature field. There are several similar but distinct fundamental mechanisms that can give rise to the effect, two of which we covered in this chapter. The quantum mechanical version of stochastic resonance is far less studied than its classical counterpart, and the future may hold great advances in this direction. Indeed, quantum opticians have recently become aware of this robust phenomenon [8,15,22,24,25,26,27,28]. In the classical context, the great unresolved issue is whether there are any practical applications of stochastic resonance.
Ideas for applications fall into two categories. The first involves engineering technology. There have been proposals for direct use in electronics, resulting in at least two U.S. government patents [29,30]. Another technological use is to retrofit threshold detectors whose performance has degraded with age. The most interesting example of this type may be the biomedical application of specially designed stockings that help people with diminished balancing ability to stand up [31].
The other category of applications is biological systems: does Mother Nature
already use stochastic resonance in some of her detectors? Sensory neurons are
notoriously noisy, and stochastic resonance might account for the exquisite sen-
sitivity of some animals to detect weak coherent signals. The first experiments
on biological stochastic resonance were reported in 1993 using mechanoreceptors
of the crayfish Procambarus clarkii [32]. Two years later, experiments demon-
strated stochastic resonance at the sub-cellular level in lipid bilayer membranes
[33]. Other examples include the mechanosensory systems used by crickets to de-
tect air currents [34] and by rats to detect pin pricks [35]. Experiments on hair
cells, important auditory detectors in many vertebrates (including humans), are
especially suggestive [36]: the optimal noise level appears to coincide with the
naturally occurring level set by equilibrium thermal fluctuations! Perhaps these
cells evolved to take maximum advantage of these inevitable environmental fluc-
tuations.
The change in context from physical to life sciences has led researchers to
reconsider and refine even some very basic issues. For example, the extension
of stochastic resonance to excitable systems was primarily motivated by exper-
iments on neurons. In a similar way, a theoretical mechanism which employs a
randomly fluctuating rate [37] raises interesting fundamental questions concern-
ing its connection with microscopic stochastic descriptions. Another question of
great importance is: what is the most appropriate measure of ‘output perfor-
mance’ ? In the biological context information transmission is more significant
than signal-to-noise ratio, but it may be that the most relevant measure – what-
ever it is – depends on the particular application. Related to this is the question:
what kind of signals are most relevant? Truly periodic signals are uncommon in
the natural world, and the nonlinear nature of stochastic resonance suggests
that the study of complicated signals cannot be reduced to a superposition of
elemental periodic ones. And what about other important biological properties of sensory neurons, such as adaptation and refractoriness? These effects are absent from existing theories of stochastic resonance. Finally, the biological context has
given new impetus to the study of stochastic resonance in arrays of elements
[38,39], where it appears both that overall performance can be improved and
that tuning of the noise strength may be unnecessary.
References
1. K. Wiesenfeld, F. Jaramillo: Chaos 8, 539 (1998)
2. F. Moss, K. Wiesenfeld: Scientific American 273, 66 (1995)
3. L. Gammaitoni, P. Hänggi, P. Jung, F. Marchesoni: Rev. Mod. Phys. 70, 223
(1998)
4. see, e.g., S.O. Rice, in: Selected Papers on Noise and Stochastic Processes, ed. by N. Wax (Dover, New York 1954)
5. P. Jung: Phys. Rep. 234, 175 (1993)
6. C. Gardiner: Handbook of Stochastic Methods, 2nd ed. (Springer, Berlin 1983)
7. M. Grifoni, P. Hänggi: Phys. Rev. Lett. 76, 1611 (1996), Phys. Rev. E 54, 1390
(1996)
8. T. Wellens, A. Buchleitner: Phys. Rev. Lett. 84, 5118 (2000)
9. A. Bulsara, L. Gammaitoni: Physics Today 49, 39 (March 1996)
10. S. Fauve, F. Heslot: Phys. Lett. A 97, 5 (1983)
11. B. McNamara, K. Wiesenfeld, R. Roy: Phys. Rev. Lett. 60, 2626 (1988)
12. R. Löfstedt, S. N. Coppersmith: Phys. Rev. Lett. 72, 1947 (1994)
13. S. Chakravarty, A. J. Leggett: Phys. Rev. Lett. 52, 5 (1984)
14. I. Goychuk, P. Hänggi: Phys. Rev. E 59, 5137 (1999)
15. T. Wellens, A. Buchleitner: Chem. Phys. 268, 131 (2001)
16. B. T. H. Varcoe et al.: Nature 403, 743 (2000)
17. E. T. Jaynes, F.W. Cummings: Proc. IEEE 51, 89 (1963)
18. J. Krause, M. O. Scully, H. Walther: Phys. Rev. A 34, 2032 (1986)
19. P. Meystre, M. Sargent III: Elements of Quantum Optics (Springer, Berlin 1990)
20. O. Benson, G. Raithel, H. Walther: Phys. Rev. Lett. 72, 3506 (1994)
21. P. Filipowicz, J. Javanainen, P. Meystre: Phys. Rev. A 34, 3077 (1986)
22. T. Wellens, A. Buchleitner: J. Phys. A 32, 2895 (1999)
23. K. Wiesenfeld et al.: Phys. Rev. Lett. 72, 2125 (1994)
24. A. Buchleitner, R. N. Mantegna: Phys. Rev. Lett. 80, 3932 (1998)
25. L. Viola et al.: Phys. Rev. Lett. 84, 5466 (2000)
26. S. F. Huelga, M. Plenio: Phys. Rev. A 62, 052111 (2000)
27. L. Sanchez-Palencia et al.: Phys. Rev. Lett. 88, 133903 (2002)
28. P. K. Rekdal, B.-S. K. Skagerstam: Physica A 305, 404 (2002)
29. A. D. Hibbs: ‘Detection and communications device employing stochastic
resonance’, U.S. Patent No. 5574369 (1996)
30. A. R. Bulsara et al.: ‘Controlled stochastic resonance circuit’, U.S. Patent No.
6285249 (2001)
31. J. Niemi, A. Priplata, M. Salen, J. Harry, J. J. Collins: ‘Noise-enhanced balance control’, preprint (2001)
32. J. K. Douglass, L. Wilkens, E. Pantazelou, F. Moss: Nature (London) 365, 337 (1993)
33. S. M. Bezrukov, I. Vodyanoy: Nature (London) 378, 362 (1995)
34. J. E. Levin, J. P. Miller: Nature (London) 380, 165 (1996)
35. J. J. Collins, T. T. Imhoff, P. Grigg: J. Neurophysiology 76, 642 (1996)
36. F. Jaramillo, K. Wiesenfeld: Nature Neuro. 1, 384 (1998)
37. S. M. Bezrukov, I. Vodyanoy: Nature (London) 385, 319 (1997)
38. J. J. Collins, C. C. Chow, T. T. Imhoff: Nature (London) 376, 236 (1995)
39. J. F. Lindner, B. K. Meadows, W. L. Ditto, M. E. Inchiosa, A. R. Bulsara: Phys. Rev. Lett. 75, 3 (1995)