
Ricardo López-Ruiz
Héctor Mancini
Xavier Calbet

A Statistical Measure of Complexity
– Book Chapter –

arXiv:1009.1498v1 [nlin.AO] 8 Sep 2010

September 9, 2010
Contents

1 A Statistical Measure of Complexity
  1.1 Shannon Information
  1.2 A Statistical Complexity Measure
  1.3 LMC Complexity: Extremal Distributions
  1.4 Rényi Entropies and LMC Complexity
  1.5 Some Applications
      1.5.1 Canonical ensemble
      1.5.2 Gaussian and exponential distributions
      1.5.3 Complexity in a two-level laser model
  1.6 Conclusions
  References
Chapter 1
A Statistical Measure of Complexity

Abstract In this chapter, a statistical measure of complexity is introduced and some
of its properties are discussed. Also, some straightforward applications are shown.

1.1 Shannon Information

Entropy plays a crucial theoretical role in the physics of macroscopic equilibrium
systems. The probability distribution of the accessible states of a constrained system in
equilibrium can be found by the inference principle of maximum entropy [1]. The
macroscopic magnitudes and the laws that relate them can then be calculated from this
probability distribution by standard statistical mechanics techniques.
The same scheme could be imagined for extended systems far from equilibrium, but
in this case we have neither a method to find the probability distribution nor
knowledge of the relevant magnitudes carrying the information needed to predict
the system’s behavior. This contrasts, for instance, with low-dimensional chaotic systems,
whose metric properties are accessible by means of the Lyapunov exponents, invariant
measures and fractal dimensions [2].
Shannon information or entropy H [3] can still be used as a magnitude in a general
situation with N accessible states:

H = -K \sum_{i=1}^{N} p_i \log p_i      (1.1)

with K a positive real constant and p_i the normalized associated probabilities,
\sum_{i=1}^{N} p_i = 1. An isolated system in equilibrium presents equiprobability, p_i = 1/N
for all i, among its accessible states, and this is the situation of maximal entropy,

H_{max} = K \log N .      (1.2)

If the system is out of equilibrium, the entropy H can be expanded around this
maximum H_{max}:

H(p_1, p_2, \ldots, p_N) = K \log N - \frac{NK}{2} \sum_{i=1}^{N} \left( p_i - \frac{1}{N} \right)^2 + \ldots
                         = H_{max} - \frac{NK}{2} D + \ldots      (1.3)
where the quantity D = \sum_i (p_i - 1/N)^2, which we call disequilibrium, is a kind of
distance from the actual system configuration to equilibrium. If the expression
(1.3) is multiplied by H we obtain

H^2 = H \cdot H_{max} - \frac{NK}{2} H \cdot D + K^2 f(N, p_i) ,      (1.4)
where f(N, p_i) is the entropy multiplied by the remaining terms of the Taylor expansion,
which have the form \frac{1}{N} \sum_i (N p_i - 1)^m with m > 2. If we rename C = H \cdot D, then

C = cte \cdot H \cdot (H_{max} - H) + K \bar{f}(N, p_i) ,      (1.5)

with cte^{-1} = NK/2 and \bar{f} = 2 f / N. The idea of a distance for the disequilibrium is
now clearer if we note that D is just the distance D \sim (H_{max} - H) for systems
in the vicinity of equiprobability. In an ideal gas we have H \sim H_{max} and D \sim 0,
so C \sim 0. Conversely, in a crystal H \sim 0 and D \sim 1, but also C \sim 0. These two
systems are considered classical examples of simple models; they are extrema on a
scale of disorder (H) or disequilibrium (D), yet both should present null complexity
under any hypothetical measure of complexity. This asymptotic behavior is verified by
the variable C (Fig. 1.1), and C has been proposed as such a magnitude [4]. We
formalize this simple idea by recalling the definition of LMC complexity in the
next section.
Let us now examine another important property [5] arising from relation (1.5). If we
take the time derivative of C in a neighborhood of equilibrium, using the approximation
C \sim H (H_{max} - H), then we have

\frac{dC}{dt} \sim -H_{max} \frac{dH}{dt} .      (1.6)

The irreversibility property of H implies that dH/dt \ge 0, with equality occurring only
for equipartition; therefore

\frac{dC}{dt} \le 0 .      (1.7)

Hence, in the vicinity of H_{max}, LMC complexity always decreases along the evolution
path towards equilibrium, independently of the kind of transition and of the
system under study. This does not prevent the complexity from increasing when the
system is very far from equilibrium. In fact, this is the case in general, as
can be seen, for instance, in the gas system presented in Ref. [6].
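A quick numerical illustration of this property can be obtained with the following Python
sketch (our own addition, not part of the original argument; K = 1 is assumed, and the
linear relaxation law dp_i/dt = (1/N - p_i)/τ is merely a toy dynamics). Starting near
equiprobability, H grows towards H_max while C = H · D decreases, in agreement with Eq. (1.7):

import numpy as np

def H_D_C(p, K=1.0):
    """Shannon information, disequilibrium and LMC complexity of a distribution p."""
    nz = p[p > 0]
    H = -K * np.sum(nz * np.log(nz))
    D = np.sum((p - 1.0 / p.size) ** 2)
    return H, D, H * D

N, dt, tau = 5, 0.01, 1.0
p = np.array([0.30, 0.25, 0.20, 0.15, 0.10])     # already close to equiprobability
for step in range(501):
    if step % 100 == 0:
        H, D, C = H_D_C(p)
        print(f"t = {step * dt:4.1f}   H = {H:.4f}   C = {C:.6f}")
    p += dt * (1.0 / N - p) / tau                # assumed linear relaxation law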

1.2 A Statistical Complexity Measure

On the most basic grounds, an object, a procedure, or a system is said to be “com-
plex” when it does not match patterns regarded as simple. This sounds rather like
an oxymoron, but common knowledge tells us what is simple and what is complex: simpli-
fied systems or idealizations are always a starting point for solving scientific problems.
The notion of “complexity” in physics [7, 8] starts by considering the perfect crystal
and the isolated ideal gas as examples of simple models and therefore as systems
with zero “complexity”. Let us briefly recall their main characteristics in terms of “order”,
“information” and “equilibrium”.
A perfect crystal is completely ordered and the atoms are arranged following
stringent rules of symmetry. The probability distribution for the states accessible to
the perfect crystal is centered around a prevailing state of perfect symmetry. A small
piece of “information” is enough to describe the perfect crystal: the distances and the
symmetries that define the elementary cell. The “information” stored in this system
can be considered minimal. On the other hand, the isolated ideal gas is completely
disordered. The system can be found in any of its accessible states with the same
probability. All of them contribute in equal measure to the “information” stored in
the ideal gas. It has therefore a maximum “information”. These two simple systems
are extrema in the scale of “order” and “information”. It follows that the definition
of “complexity” must not be made in terms of just “order” or “information”.
It might seem reasonable to propose a measure of “complexity” by adopting
some kind of distance from the equiprobable distribution of the accessible states of
the system [4]. Defined in this way, “disequilibrium” would give an idea of the prob-
abilistic hierarchy of the system. “Disequilibrium” would be different from zero if
there are privileged, or more probable, states among those accessible. But this would
not work. Going back to the two examples we began with, it is readily seen that a
perfect crystal is far from an equidistribution among the accessible states because
one of them is totally prevailing, and so “disequilibrium” would be maximum. For
the ideal gas, “disequilibrium” would be zero by construction. Therefore such a dis-
tance or “disequilibrium” (a measure of a probabilistic hierarchy) cannot be directly
associated with “complexity”.
In Figure 1.1 we sketch an intuitive qualitative behavior for “information” H and
“disequilibrium” D for systems ranging from the perfect crystal to the ideal gas. As
indicated in the former section, this graph suggests that the product of these two
quantities could be used as a measure of “complexity”: C = H · D. The function C
has indeed the features and asymptotic properties that one would expect intuitively:
it vanishes for the perfect crystal and for the isolated ideal gas, and it is different
from zero for the rest of the systems of particles. We will follow these guidelines to
establish a quantitative measure of “complexity”.
Before attempting any further progress, however, we must recall that “complex-
ity” cannot be measured univocally, because it depends on the nature of the descrip-
tion (which always involves a reductionist process) and on the scale of observation.
Let us take an example to illustrate this point. A computer chip can look very differ-
ent at different scales. It is an entangled array of electronic elements at microscopic
scale but only an ordered set of pins attached to a black box at a macroscopic scale.
We shall now discuss a measure of “complexity” based on the statistical descrip-
tion of systems. Let us assume that the system has N accessible states {x1 , x2 , ..., xN }
when observed at a given scale. We will call this an N-system. Our understanding of
the behavior of this system determines the corresponding probabilities {p1 , p2 , ..., pN }
(with the condition \sum_{i=1}^{N} p_i = 1) of each state (p_i > 0 for all i). Then the knowledge
of the underlying physical laws at this scale is incorporated into a probability dis-
tribution for the accessible states. It is possible to find a quantity measuring the
amount of “information”. As presented in the former section, under the most
elementary conditions of consistency, Shannon [3] determined the unique func-
tion H(p_1, p_2, ..., p_N), given by expression (1.1), that accounts for the “information”
stored in a system, where K is a positive constant. The quantity H is called infor-
mation. The redefinition of information H as some type of monotone function of
the Shannon entropy can also be useful in many contexts. In the case of a crystal, a
state x_c would be the most probable, p_c ∼ 1, and all others x_i would be very improb-
able, p_i ∼ 0 for i ≠ c. Then H_c ∼ 0. On the other hand, equiprobability characterizes an
isolated ideal gas, p_i ∼ 1/N, so H_g ∼ K log N, i.e., the maximum of information for
an N-system. (Notice that if one assumes equiprobability and K = κ, the Boltzmann
constant, H is identified with the thermodynamic entropy, S = κ log N.) Any other
N-system will have an amount of information between these two extrema.

Fig. 1.1 Sketch of the intuitive notion of the magnitudes of “information” (H) and “disequilibrium”
(D) for physical systems and the behavior intuitively required for the magnitude “complexity”.
The quantity C = H · D is proposed to measure such a magnitude.
Let us propose a definition of the disequilibrium D for an N-system [9]. The intuitive
notion suggests that some kind of distance from an equiprobable distribution should
be adopted. Two requirements are imposed on the magnitude D: D > 0 in order to
have a positive measure of “complexity”, and D = 0 in the limit of equiprobability.
The straightforward solution is to add the quadratic distances of each state from
equiprobability as follows:

D = \sum_{i=1}^{N} \left( p_i - \frac{1}{N} \right)^2 .      (1.8)

According to this definition, a crystal has maximum disequilibrium (for the dom-
inant state, pc ∼ 1, and Dc → 1 for N → ∞) while the disequilibrium for an ideal
gas vanishes (Dg ∼ 0) by construction. For any other system D will have a value
between these two extrema.
We now introduce the definition of the complexity C of an N-system [4, 10]. This is
simply the interplay between the information stored in the system and its disequi-
librium:

C = H \cdot D = \left( -K \sum_{i=1}^{N} p_i \log p_i \right) \cdot \left( \sum_{i=1}^{N} \left( p_i - \frac{1}{N} \right)^2 \right) .      (1.9)
This definition fits the intuitive arguments. For a crystal, disequilibrium is large but
the information stored is vanishingly small, so C ∼ 0. On the other hand, H is large
for an ideal gas, but D is small, so C ∼ 0 as well. Any other system will have an
intermediate behavior and therefore C > 0.
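The discrete definitions (1.1), (1.8) and (1.9) translate directly into code. The following
Python sketch (added here as an illustration; K = 1 and natural logarithms are assumed)
reproduces the limiting behavior just described: C ≈ 0 for a crystal-like distribution and
for equiprobability, and C > 0 in between:

import numpy as np

def shannon_H(p, K=1.0):
    """Shannon information H = -K sum_i p_i log p_i (Eq. 1.1)."""
    nz = p[p > 0]                                # 0 log 0 is taken as 0
    return -K * np.sum(nz * np.log(nz))

def disequilibrium_D(p):
    """Disequilibrium D = sum_i (p_i - 1/N)^2 (Eq. 1.8)."""
    return np.sum((p - 1.0 / p.size) ** 2)

def lmc_C(p, K=1.0):
    """LMC complexity C = H * D (Eq. 1.9)."""
    return shannon_H(p, K) * disequilibrium_D(p)

N = 8
crystal = np.zeros(N); crystal[0] = 1.0          # one totally dominant state
gas = np.full(N, 1.0 / N)                        # equiprobability
generic = np.array([0.5, 0.2, 0.1, 0.1, 0.05, 0.03, 0.01, 0.01])
for name, p in (("crystal", crystal), ("ideal gas", gas), ("generic", generic)):
    print(f"{name:9s}  H = {shannon_H(p):.3f}  D = {disequilibrium_D(p):.3f}  C = {lmc_C(p):.3f}")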
As was intuitively suggested, the definition of complexity (1.9) also depends on
the scale. At each scale of observation a new set of accessible states appears with its
corresponding probability distribution so that complexity changes. Physical laws at
each level of observation allow us to infer the probability distribution of the new set
of accessible states, and therefore different values for H, D and C will be obtained.
The straightforward passage to the case of a continuum of states, x, can be
easily inferred. Thus we must deal with probability distributions with continuum
support, p(x), and normalization condition \int_{-\infty}^{+\infty} p(x)\,dx = 1. Disequilibrium has the
limit D = \int_{-\infty}^{+\infty} p^2(x)\,dx, and the complexity can be defined by

C = H \cdot D = \left( -K \int_{-\infty}^{+\infty} p(x) \log p(x)\,dx \right) \cdot \left( \int_{-\infty}^{+\infty} p^2(x)\,dx \right) .      (1.10)

Other continuous extensions of C are also possible. For instance,
a successful extension of the LMC complexity to continuous systems has
been carried out in Ref. [11]. When the number of states available to a system is a
continuum, the natural representation is a continuous distribution. In this case,
the entropy can become negative. The positivity of C for every distribution is re-
covered by taking the exponential of H [12]. If we define Ĉ = Ĥ · D = e^H · D as an
extension of C to the continuous case, interesting properties characterizing the indi-
cator Ĉ appear. Namely, its invariance under translations, rescaling transformations
and replication makes Ĉ a good candidate to be considered an indicator carrying
essential information about the statistical properties of a continuous system.
Direct simulations of the definition give the values of C for general N-systems.
The set of all possible distributions {p_1, p_2, ..., p_N} in which an N-system can be
found is sampled. For the sake of simplicity H is normalized to the interval [0, 1],
thus H = -\sum_{i=1}^{N} p_i \log p_i / \log N. For each distribution {p_i} the normalized informa-
tion H({p_i}) and the disequilibrium D({p_i}) (Eq. 1.8) are calculated. In each case
the normalized complexity C = H · D is obtained and the pair (H, C) stored. These
two magnitudes are plotted on a diagram (H, C(H)) in order to verify the qualitative
behavior predicted in Figure 1.1. For N = 2 an analytical expression for the curve
C(H) is obtained. If the probability of one state is p_1 = x, that of the second one is
simply p_2 = 1 - x. Complexity vanishes for the two simplest 2-systems: the crystal
(H = 0; p_1 = 1, p_2 = 0) and the ideal gas (H = 1; p_1 = 1/2, p_2 = 1/2). Let us
notice that this curve is the simplest one that fulfills all the conditions discussed in
the introduction. The largest complexity is reached for H ∼ 1/2 and its value is
C(x ∼ 0.11) ∼ 0.151. For N > 2 the relationship between H and C is no longer
univocal: many different distributions {p_i} store the same information H but have
different complexity C. Figure 1.2 displays such behavior for N = 3. If we take
the maximum complexity C_max(H) associated with each H, a curve similar to the
one for a 2-system is recovered. Every 3-system will have a complexity below this
line, above the line of C_min(H), and also above the minimum envelope complex-
ity C_minenv. These lines will be found analytically in the next section. In Figure 1.3
the curves C_max(H) for the cases N = 3, ..., 10 are also shown. Let us observe the shift
of the complexity-curve peak to smaller values of entropy with rising N. This
agrees with the intuition that the largest complexity (the number of possibilities of
“complexification”) is reached at smaller entropies for systems with a larger number of states.

Fig. 1.2 In general, the dependence of complexity (C) on normalized information (H) is not univocal:
many distributions {p_i} can present the same value of H but different C. This is shown for the case
N = 3.
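The figures just described can be reproduced numerically. The sketch below (an assumed
illustration, not the original code of Refs. [4, 6]) scans the N = 2 curve, recovering the
maximum C ≈ 0.151 at x ≈ 0.11 (or, symmetrically, x ≈ 0.89), and samples random 3-state
distributions to show that many different {p_i} share the same H but not the same C:

import numpy as np

def H_norm(p):
    """Normalized Shannon information, H in [0, 1]."""
    nz = p[p > 0]
    return -np.sum(nz * np.log(nz)) / np.log(p.size)

def D(p):
    return np.sum((p - 1.0 / p.size) ** 2)

# N = 2: scan p1 = x, p2 = 1 - x and locate the peak of C(H).
x = np.linspace(1e-6, 1.0 - 1e-6, 20001)
C2 = np.array([H_norm(np.array([xi, 1.0 - xi])) * D(np.array([xi, 1.0 - xi])) for xi in x])
i = np.argmax(C2)
print(f"N = 2: max C ≈ {C2[i]:.3f} at x ≈ {x[i]:.3f}")

# N = 3: random points of the simplex; the pairs (H, C) fill a band as in Fig. 1.2.
rng = np.random.default_rng(0)
p3 = rng.dirichlet(np.ones(3), size=20000)
HC = np.array([(H_norm(p), H_norm(p) * D(p)) for p in p3])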
Let us return to the point at which we started this discussion. Any notion of
complexity in physics [7, 8] should be based on a well-defined or
operational magnitude [4, 10]. But two additional requirements are needed in order
to obtain a good definition of complexity in physics: (1) the new magnitude must be
measurable in many different physical systems and (2) a comparative relationship
and a physical interpretation between any two measurements should be possible.

Fig. 1.3 Complexity (C = H · D) as a function of the normalized information (H) for a system with
two accessible states (N = 2). Also curves of maximum complexity (Cmax ) are shown for the cases:
N = 3, . . ., 10.

Many different definitions of complexity have been proposed to date, mainly
in the realm of the physical and computational sciences. Among these, several can be
cited: algorithmic complexity (Kolmogorov-Chaitin) [13, 14], the Lempel-Ziv com-
plexity [15], the logical depth of Bennett [16], the effective measure complexity of
Grassberger [17], the complexity of a system based on its diversity [18], the thermo-
dynamical depth [19], the ε-machine complexity [20], the physical complexity of
genomes [21], complexities of formal grammars, etc. The definition of complexity
(1.9) proposed in this section offers a new point of view, based on a statistical de-
scription of systems at a given scale. In this scheme, the knowledge of the physical
laws governing the dynamic evolution in that scale is used to find its accessible states
and its probability distribution. This process would immediately indicate the value
of complexity. In essence this is nothing but an interplay between the information
stored by the system and the distance from equipartition (measure of a probabilistic
hierarchy between the observed parts) of the probability distribution of its accessi-
ble states. Besides giving the main features of an “intuitive” notion of complexity, we
will show in this chapter that we can go one step further and that it is possible to
compute this quantity in relevant physical situations [6, 22, 23]. The most important
point is that the new definition successfully enables us to discern situations regarded
as complex.

1.3 LMC Complexity: Extremal Distributions

Now we proceed to calculate the distributions which maximize and minimize the
LMC complexity and its asymptotic behavior [6].
Let us assume that the system can be in one of its N possible accessible states, i.
The probability of the system being in state i will be given by the discrete distribu-
tion function, f_i ≥ 0, with the normalization condition I ≡ \sum_{i=1}^{N} f_i = 1. The system
is defined such that, if isolated, it will reach equilibrium, with all the states having
equal probability, f_e = 1/N. Since we are supposing that H is normalized, 0 ≤ H ≤ 1,
and 0 ≤ D ≤ (N − 1)/N, then complexity, C, is also normalized, 0 ≤ C ≤ 1.
When an isolated system evolves in time, the complexity cannot take just any
value in the C versus H map, as can be seen in Fig. 1.2, but it must stay within
certain bounds, C_max and C_min. These are the maximum and minimum values of C
for a given H. Since C = D · H, finding the extrema of C for constant H is equivalent
to finding the extrema of D.
There are two restrictions on D: the normalization, I, and the fixed value of the
entropy, H. To find these extrema, undetermined Lagrange multipliers are used. Dif-
ferentiating the expressions for D, I and H, we obtain

\frac{\partial D}{\partial f_j} = 2 (f_j - f_e) ,      (1.11)

\frac{\partial I}{\partial f_j} = 1 ,      (1.12)

\frac{\partial H}{\partial f_j} = -\frac{1}{\ln N} (\ln f_j + 1) .      (1.13)

Defining λ_1 and λ_2 as the Lagrange multipliers, we get:

2 (f_j - f_e) + \lambda_1 + \lambda_2 (\ln f_j + 1) / \ln N = 0 .      (1.14)

Two new parameters, α and β, which are linear combinations of the Lagrange
multipliers, are defined:

f_j + \alpha \ln f_j + \beta = 0 ,      (1.15)
where the solutions of this equation, f j , are the values that minimize or maximize
the disequilibrium.
In the maximum complexity case there are two solutions, f_j, to Eq. (1.15), which
are shown in Table 1.1. One of these solutions, f_max, is given by

H = -\frac{1}{\ln N} \left[ f_{max} \ln f_{max} + (1 - f_{max}) \ln\left( \frac{1 - f_{max}}{N - 1} \right) \right] ,      (1.16)

and the other solution by (1 - f_max)/(N - 1). The maximum disequilibrium, D_max,
for a fixed H is

D_{max} = (f_{max} - f_e)^2 + (N - 1) \left( \frac{1 - f_{max}}{N - 1} - f_e \right)^2 ,      (1.17)

and thus the maximum complexity, which depends only on H, is

C_{max}(H) = D_{max} \cdot H .      (1.18)

The behavior of the maximum value of complexity versus ln N was computed in
Ref. [24].
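Since Eqs. (1.16)-(1.18) give both H and D_max in terms of the single parameter f_max,
the curve C_max(H) can be generated parametrically. A possible Python sketch (an assumed
implementation, not taken from Ref. [24]) is:

import numpy as np

def cmax_curve(N, num=2000):
    """Parametric C_max(H) from Eqs. (1.16)-(1.18), sweeping f_max in [1/N, 1)."""
    fe = 1.0 / N
    fmax = np.linspace(fe, 1.0 - 1e-9, num)
    rest = (1.0 - fmax) / (N - 1)                        # the other N-1 probabilities
    H = -(fmax * np.log(fmax) + (1.0 - fmax) * np.log(rest)) / np.log(N)   # Eq. (1.16)
    Dmax = (fmax - fe) ** 2 + (N - 1) * (rest - fe) ** 2                   # Eq. (1.17)
    return H, Dmax * H                                                     # Eq. (1.18)

H, Cmax = cmax_curve(N=4)
i = np.argmax(Cmax)
print(f"N = 4: peak of C_max ≈ {Cmax[i]:.3f} at H ≈ {H[i]:.3f}")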

Table 1.1 Probability values, f_j, that give a maximum of disequilibrium, D_max, for a given H.

  Number of states with f_j     f_j                          Range of f_j
  1                             f_max                        1/N ... 1
  N - 1                         (1 - f_max)/(N - 1)          0 ... 1/N

Table 1.2 Probability values, f_j, that give a minimum of disequilibrium, D_min, for a given H.

  Number of states with f_j     f_j                          Range of f_j
  n                             0                            0
  1                             f_min                        0 ... 1/(N - n)
  N - n - 1                     (1 - f_min)/(N - n - 1)      1/(N - n) ... 1/(N - n - 1)

  n can take the values 0, 1, ..., N - 2.

Equivalently, the values, f_j, that give a minimum complexity are shown in Table
1.2. One of the solutions, f_min, is given by

H = -\frac{1}{\ln N} \left[ f_{min} \ln f_{min} + (1 - f_{min}) \ln\left( \frac{1 - f_{min}}{N - n - 1} \right) \right] ,      (1.19)

where n is the number of states with f_j = 0 and takes a value in the range n =
0, 1, ..., N - 2. The resulting minimum disequilibrium, D_min, for a given H is

D_{min} = (f_{min} - f_e)^2 + (N - n - 1) \left( \frac{1 - f_{min}}{N - n - 1} - f_e \right)^2 + n f_e^2 .      (1.20)

Note that in this case f_j = 0 is an additional hidden solution that stems from the
positivity restriction on the f_i values. To obtain these solutions explicitly we can define
x_i such that f_i ≡ x_i^2. These x_i values do not have the restriction of positivity imposed
on f_i and can take positive or negative values. If we repeat the Lagrange multiplier
method with these new variables, a new solution arises: x_j = 0, or equivalently, f_j =
0. The resulting minimum complexity, which again only depends on H, is

C_{min}(H) = D_{min} \cdot H .      (1.21)

As an example, the maximum and minimum of complexity, C_max and C_min, are plot-
ted as a function of the entropy, H, in Fig. 1.4 for N = 4. This figure also shows
the minimum envelope complexity, C_minenv = D_minenv · H, where D_minenv is
defined below. In Fig. 1.5 the maximum and minimum disequilibrium, D_max and
D_min, versus H are also shown.

Fig. 1.4 Maximum, minimum, and minimum envelope complexity, Cmax , Cmin , and Cminenv respec-
tively, as a function of the entropy, H, for a system with N = 4 accessible states.

As shown in Fig. 1.5, the minimum disequilibrium function is piecewise defined,
having several points where its derivative is discontinuous. Each of these function
pieces corresponds to a different value of n (Table 1.2). In some circumstances it
might be helpful to work with the “envelope” of the minimum disequilibrium func-
tion. The function, D_minenv, that traverses all the discontinuous derivative points in
the D_min versus H plot is

D_{minenv} = e^{-H \ln N} - \frac{1}{N} ,      (1.22)

and is also shown in Figure 1.5.
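The minimum-complexity branches and their envelope can be generated in the same parametric
way, sweeping f_min for each allowed value of n (Table 1.2) and evaluating Eqs. (1.19),
(1.20) and (1.22). A hedged Python sketch is:

import numpy as np

def dmin_branch(N, n, num=1000):
    """One branch of D_min(H), Eqs. (1.19)-(1.20), parametrized by f_min in (0, 1/(N-n)]."""
    fe = 1.0 / N
    fmin = np.linspace(1e-9, 1.0 / (N - n), num)
    rest = (1.0 - fmin) / (N - n - 1)
    H = -(fmin * np.log(fmin) + (1.0 - fmin) * np.log(rest)) / np.log(N)
    Dmin = (fmin - fe) ** 2 + (N - n - 1) * (rest - fe) ** 2 + n * fe ** 2
    return H, Dmin

def dmin_envelope(H, N):
    """Envelope through the non-smooth points, Eq. (1.22)."""
    return np.exp(-H * np.log(N)) - 1.0 / N

N = 4
for n in range(N - 1):                   # n = 0, 1, ..., N - 2
    H, Dmin = dmin_branch(N, n)
    print(f"n = {n}: branch covers H in [{H.min():.2f}, {H.max():.2f}]")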

Fig. 1.5 Maximum, minimum, and minimum envelope disequilibrium, Dmax , Dmin , and Dminenv
respectively, as a function of the entropy, H, for a system with N = 4 accessible states.

When N tends toward infinity, the probability, f_max, of the dominant state has a
linear dependence on the entropy,

\lim_{N \to \infty} f_{max} = 1 - H ,      (1.23)

and thus the maximum disequilibrium scales as \lim_{N \to \infty} D_{max} = (1 - H)^2. The max-
imum complexity tends to

\lim_{N \to \infty} C_{max} = H \cdot (1 - H)^2 .      (1.24)

The limit of the minimum disequilibrium and complexity vanishes, \lim_{N \to \infty} D_{minenv} = 0,
and thus

\lim_{N \to \infty} C_{min} = 0 .      (1.25)

In general, in the limit N → ∞, the complexity is not a trivial function of the entropy,
in the sense that for a given H there exists a range of complexities between 0 and
C_max, given by Eqs. (1.25) and (1.24), respectively.
In particular, in this asymptotic limit, the maximum of Cmax is found when
H = 1/3, or equivalently fmax = 2/3, which gives a maximum of the maximum
complexity of Cmax = 4/27. This value was numerically calculated in Ref. [24].

1.4 Rényi Entropies and LMC Complexity

Generalized entropies were introduced by Rényi [25] in the form

I_q = \frac{1}{1-q} \log \left( \sum_{i=1}^{N} p_i^q \right) ,      (1.26)

where q is an index running over the integer values. By differentiating I_q with
respect to q, a negative quantity is obtained independently of q, so I_q monotonically
decreases as q increases.
The Rényi entropies are an extension of the Shannon information H. In fact, H
is obtained in the limit q → 1:

H = I_1 = \lim_{q \to 1} I_q = -\sum_{i=1}^{N} p_i \log p_i ,      (1.27)

where the constant K of Eq. (1.1) is taken to be unity. The disequilibrium
D is also related to I_2 = -\log \sum_{i=1}^{N} p_i^2. We have

D = \sum_{i=1}^{N} p_i^2 - \frac{1}{N} = e^{-I_2} - \frac{1}{N} ,      (1.28)

and then the LMC complexity is

C = H \cdot D = I_1 \cdot \left( e^{-I_2} - \frac{1}{N} \right) .      (1.29)
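Relation (1.29) is easy to check numerically. In the following sketch (assumed, with K = 1
and natural logarithms) the direct value H · D and the Rényi expression I_1 · (e^{-I_2} - 1/N)
coincide for a randomly chosen distribution:

import numpy as np

def renyi_I(p, q):
    """Rényi entropy I_q = log(sum_i p_i^q) / (1 - q), Eq. (1.26)."""
    return np.log(np.sum(p ** q)) / (1.0 - q)

rng = np.random.default_rng(1)
p = rng.dirichlet(np.ones(6))              # a random 6-state distribution
N = p.size

H = -np.sum(p * np.log(p))                 # I_1, Eq. (1.27)
D = np.sum((p - 1.0 / N) ** 2)             # disequilibrium, Eq. (1.8)
I2 = renyi_I(p, 2)

print(H * D, H * (np.exp(-I2) - 1.0 / N))  # the two expressions agree, Eq. (1.29)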

The behavior of C in the neighborhood of H_max takes the form

C \sim \frac{1}{N} \left( \log^2 N - I_1 I_2 \right) .      (1.30)
The obvious generalization of the Rényi entropies to a normalized continuous dis-
tribution p(x) is

I_q = \frac{1}{1-q} \log \int [p(x)]^q \, dx .      (1.31)

Hence,

H = I_1 = -\int p(x) \log p(x) \, dx ,      (1.32)

D = e^{-I_2} = \int [p(x)]^2 \, dx .      (1.33)

The dependence of \hat{C} = e^H \cdot D on I_1 and I_2 yields

\log \hat{C} = I_1 - I_2 .      (1.34)



This indicates that a family of different indicators could be derived from the differences
established among Rényi entropies with different q-indices [5]. Let us remark at
this point that the indicator log Ĉ coincides with the quantity S_str introduced by
Varga and Pipek as a meaningful parameter to characterize the shape of a distribu-
tion. They apply this formalism to the Husimi representation, i.e., to the projection
of wave functions onto the coherent state basis [26]. A further generalization of the
LMC complexity measure as a function of the Rényi entropies has been introduced in
Ref. [27].
The invariance of Ĉ under rescaling transformations implies that this magnitude
is conserved in many different processes. For instance, an initial Gaussian-like dis-
tribution remains Gaussian in a classical diffusion process. Then Ĉ is
constant in time, dĈ/dt = 0, and we have

\frac{dI_1}{dt} = \frac{dI_2}{dt} .      (1.35)

The equal rate of change of I_1 and I_2, i.e., the synchronization of both quantities, is the
cost to be paid in order to maintain the shape of the distribution associated with the
system; hence, all its statistical properties remain unchanged during its time
evolution.
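For the Gaussian case mentioned above, this conservation can be made explicit with a short
sketch (assumed; all logarithms are natural). As σ grows during diffusion, H and D change,
but Ĉ = e^H · D remains fixed at the value √(e/2) ≈ 1.17:

import numpy as np

def gaussian_H_D(sigma):
    """Differential entropy and disequilibrium of a zero-mean Gaussian of width sigma."""
    H = 0.5 + np.log(sigma * np.sqrt(2.0 * np.pi))    # I_1
    D = 1.0 / (2.0 * sigma * np.sqrt(np.pi))           # e^{-I_2} = integral of p(x)^2
    return H, D

for sigma in (0.5, 1.0, 2.0, 10.0):                    # e.g. a width growing with time
    H, D = gaussian_H_D(sigma)
    print(f"sigma = {sigma:5.1f}   C = H*D = {H * D:+.4f}   C_hat = exp(H)*D = {np.exp(H) * D:.4f}")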

1.5 Some Applications

If by complexity we understand that property present in all the systems gathered
under the heading of ‘complex systems’, this property should be reasonably quan-
tified by the measures proposed in the different branches of knowledge. In our case,
the main advantage of LMC complexity is its generality and the fact that it is opera-
tionally simple and does not require a large amount of calculation [28]. This advantage
has been worked out in different examples, such as the study of the time evolution
of C for a simplified model of an isolated gas, the “tetrahedral gas” [6], or also in
the case of a more realistic gas of particles [29, 30]; the slight modification of C as
an effective method by which the complexity in hydrological systems can be iden-
tified [31]; the attempt to generalize C into a family of simple complexity measures
[32, 33, 34]; some statistical features of the behavior of C for DNA sequences [35]
or earthquake magnitude time series [36]; some wavelet-based informational tools
used to analyze the brain electrical activity in epileptic episodes in the plane of co-
ordinates (H, C) [37]; a method to discern complexity in two-dimensional patterns
[38]; and some calculations done on quantum systems [39, 40, 41, 42, 43]. As an
example, we show in the next subsections some straightforward calculations of the
LMC complexity [44].

1.5.1 Canonical ensemble

Each physical situation is closely related to a specific distribution of microscopic
states. Thus, an isolated system presents equipartition by hypothesis: the mi-
crostates compatible with a macroscopic situation are equiprobable [45]. The system
is said to be in equilibrium. For a system surrounded by a heat reservoir, the proba-
bility of the microstates associated with thermal equilibrium follows the Boltzmann
distribution. Let us analyze the behavior of C for an ideal gas in thermal equilib-
rium. In this case the probability p_i of each accessible state is given by the Boltzmann
distribution:

p_i = \frac{e^{-\beta E_i}}{Q_N} ,      (1.36)

Q_N = \int e^{-\beta E(p,q)} \, \frac{d^{3N}p \, d^{3N}q}{N! \, h^{3N}} = e^{-\beta A(V,T)} ,      (1.37)

where Q_N is the partition function of the canonical ensemble, β = 1/(κT) with κ
the Boltzmann constant and T the temperature, V the volume, N the number of
particles, E(p, q) the Hamiltonian of the system, h the Planck constant, and A(V, T)
the Helmholtz potential.
Calculation of H and D gives:

H(V, T) = \left( 1 + T \frac{\partial}{\partial T} \right) (\kappa \log Q_N) = S(V, T) ,      (1.38)

D(V, T) = e^{2\beta [A(V,T) - A(V,T/2)]} .      (1.39)

Note that the Shannon information H coincides with the thermodynamic entropy S
when K is identified with κ. If a system satisfies the relation U = C_v T (U the in-
ternal energy, C_v the specific heat), the complexity takes the form

C(V, T) \sim cte(V) \cdot S(V, T) \, e^{-S(V,T)/\kappa} ,      (1.40)

which matches the intuitive function proposed in Figure 1.1.
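As a concrete illustration (an assumed toy example, not part of the original text), the
discrete definitions of H, D and C can be applied directly to the Boltzmann distribution
over a finite set of equally spaced levels; C vanishes both for T → 0 (one dominant state)
and T → ∞ (equiprobability), with a maximum in between, as in Figure 1.1:

import numpy as np

def boltzmann(E, kT):
    """Boltzmann probabilities p_i ~ exp(-E_i / kT), Eq. (1.36) for a discrete spectrum."""
    w = np.exp(-(E - E.min()) / kT)        # shift for numerical stability
    return w / w.sum()

def lmc(p):
    nz = p[p > 0]
    H = -np.sum(nz * np.log(nz))
    D = np.sum((p - 1.0 / p.size) ** 2)
    return H * D

E = np.arange(10.0)                        # ten equally spaced levels (kappa = 1, spacing 1)
for kT in (0.05, 0.5, 1.0, 5.0, 100.0):
    print(f"kT = {kT:6.2f}   C = {lmc(boltzmann(E, kT)):.4f}")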

1.5.2 Gaussian and exponential distributions

Gaussian distribution: Suppose a continuum of states represented by the variable x,
whose probability density p(x) is given by the normal distribution of width σ:

p(x) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left( -\frac{x^2}{2\sigma^2} \right) .      (1.41)

After calculating H and D, the expression for C is the following:

C_g = H \cdot D = \frac{K}{2\sigma\sqrt{\pi}} \left( \frac{1}{2} + \log(\sigma\sqrt{2\pi}) \right) .      (1.42)

If we impose the additional condition H \ge 0, then \sigma \ge \sigma_{min} = (2\pi e)^{-1/2}. The
highest complexity is reached at a particular width: \bar{\sigma} = \sqrt{e/(2\pi)}.
Exponential distribution: Consider an exponential distribution of width γ:

p(x) = \begin{cases} \frac{1}{\gamma} e^{-x/\gamma} & x > 0, \\ 0 & x < 0. \end{cases}      (1.43)

The same calculation gives:

C_e = \frac{K}{2\gamma} \left( 1 + \log \gamma \right) ,      (1.44)

with the condition H \ge 0 imposing \gamma \ge \gamma_{min} = e^{-1}. The highest complexity corre-
sponds in this case to \bar{\gamma} = 1.
Note that for the same width as a Gaussian distribution (σ = γ), the expo-
nential distribution presents a higher complexity (C_e/C_g ∼ 1.4).
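A short numerical sketch (assumed, with K = 1) locates the widths of highest complexity
for Eqs. (1.42) and (1.44), recovering σ̄ = √(e/2π) ≈ 0.66 and γ̄ = 1:

import numpy as np

def C_gauss(sigma, K=1.0):
    """Eq. (1.42)."""
    return K / (2.0 * sigma * np.sqrt(np.pi)) * (0.5 + np.log(sigma * np.sqrt(2.0 * np.pi)))

def C_exp(gamma, K=1.0):
    """Eq. (1.44)."""
    return K / (2.0 * gamma) * (1.0 + np.log(gamma))

w = np.linspace(0.3, 5.0, 200001)
print("sigma_bar ≈", w[np.argmax(C_gauss(w))])   # ≈ sqrt(e / (2 pi)) ≈ 0.658
print("gamma_bar ≈", w[np.argmax(C_exp(w))])     # ≈ 1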

1.5.3 Complexity in a two-level laser model

We now go one step further by combining the results obtained in the previous sections:
we calculate the LMC complexity for an unrealistic and simplified laser model [46].
Let us suppose a laser with two energy levels, E_1 = 0 and E_2 = ε, with N_1 atoms
in the first level and N_2 atoms in the second level, and the condition N_1 + N_2 =
N (the total number of atoms). Our aim is to sketch the statistics of this model
and to introduce the results of photon counting [47], which produce an asymmetric
behavior of C as a function of the population inversion η = N_2/N. In the range η ∈
(0, 1/2) spontaneous and stimulated emission can take place, but only in the range
η ∈ (1/2, 1) is the condition for lasing action reached, because the population
must be, at least, inverted, η > 1/2.
The entropy S of this system vanishes when N_1 or N_2 is zero. Moreover, S must be
homogeneous of first order in the extensive variable N [48]. For the sake of simplicity
we approximate S by the first term of its Taylor expansion:

S \sim \kappa \frac{N_1 N_2}{N} = \kappa N \eta (1 - \eta) .      (1.45)
The internal energy is U = N_2 \varepsilon = \varepsilon N \eta and the statistical temperature is

T = \left( \frac{\partial S}{\partial U} \right)_N^{-1} = \frac{\varepsilon}{\kappa (1 - 2\eta)} .      (1.46)

Note that for η > 1/2 the temperature is negative, as corresponds to the regime in which
stimulated emission dominates the actual laser action.
We are now interested in introducing qualitatively the results of laser photon
counting into the calculation of LMC complexity. It was reported in [47] that the
photo-electron distribution of the laser field appears to be Poissonian. In the continuous
limit the Poisson distribution is approached by the normal distribution [49]. The
width σ of this energy distribution in the canonical ensemble is proportional to the
statistical temperature of the system. Thus, for a switched-on laser in the regime
η ∈ [1/2, 1], the width of the Gaussian energy distribution can be fitted by choosing
σ ∼ −T ∼ 1/(2η − 1) (recall that T < 0 in this case). The range of variation of σ
is [σ_∞, σ_min] = [∞, (2πe)^{-1/2}]. Then we obtain:

\sigma \sim \frac{(2\pi e)^{-1/2}}{2\eta - 1} .      (1.47)

By replacing this expression in Eq. (1.42), and rescaling by a factor proportional
to the entropy, S ∼ κN (in order to give it the correct order of magnitude), the LMC
complexity for a population inversion in the range η ∈ [1/2, 1] is obtained:

C_{laser} \simeq \kappa N \cdot (1 - 2\eta) \log(2\eta - 1) .      (1.48)

We can consider, at this level of discussion, C_laser = 0 for η < 1/2. Regarding the
behavior of this function, it is worth noticing the value η_2 ≃ 0.68 where the laser
presents the highest complexity. Following these ideas, if the width, σ, of the
experimental photo-electron distribution of the laser field is measured, the population
inversion parameter, η, would be given by Eq. (1.47). In a second step, the LMC
complexity of the laser system would be obtained from Eq. (1.48).
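The value η_2 can be checked with a short sketch (assumed). Writing u = 2η − 1, Eq. (1.48)
is proportional to −u log u, which peaks at u = 1/e, i.e. η_2 = (1 + 1/e)/2 ≈ 0.68:

import numpy as np

eta = np.linspace(0.5 + 1e-6, 1.0, 200001)
C = (1.0 - 2.0 * eta) * np.log(2.0 * eta - 1.0)      # C_laser in units of kappa * N
print("eta_2 ≈", eta[np.argmax(C)], "  max C/(kappa N) ≈", C.max())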
It is necessary to remark that a model helps us to approach reality and pro-
vides invaluable guidance toward a finer understanding of a physical phe-
nomenon. From this point of view, the present calculation only attempts to
illuminate the problem of calculating the LMC complexity of a physical system via
an unrealistic but simplified model.

1.6 Conclusions

A definition of complexity (LMC complexity) based on a probabilistic description
of physical systems has been explained. This definition basically captures the inter-
play between the information contained in the system and the distance to equiparti-
tion of the probability distribution representing the system. Besides giving the main
features of an intuitive notion of complexity, we have shown that it allows one to successfully
discern situations regarded as complex in systems of very general interest. Also,
its relationship with the Shannon information and the generalized Rényi entropies
has been made explicit. Moreover, it has been possible to establish the de-
crease of this magnitude when a general system evolves from a near-equilibrium
situation towards equipartition.
From a practical point of view, we are convinced that this statistical complexity
measure provides a useful way of thinking [50] and that it can help in the future to gain
more insight into the physical grounds of models with potential biological interest.

References

1. Jaynes, E.T.: Information theory and statistical mechanics. Phys. Rev. 106, 620-630
(1957)
2. Badii, R., Politi, A.: Complexity. Hierarchical Structures and Scaling in Physics. Cam-
bridge University Press, Cambridge (1997)
3. Shannon, C.E., Weaver, W.: The Mathematical Theory of Communication. University of
Illinois Press, Urbana, Illinois (1949)
4. López-Ruiz, R., Mancini, H.L., Calbet, X.: A statistical measure of complexity. Phys. Lett.
A 209, 321-326 (1995)
5. López-Ruiz, R.: Shannon information, LMC complexity and Rényi entropies: a straightfor-
ward approach. Biophys. Chem. 115, 215 (2005).
6. Calbet, X., López-Ruiz, R.: Tendency toward maximum complexity in a non-equilibrium
isolated system. Phys. Rev. E 63, 066116 (9pp) (2001)
7. Anderson, P.W.: Is complexity physics? Is it science? What is it?. Physics Today, 9-11, July
(1991)
8. Parisi, G.: Statistical physics and biology. Physics World, 6, 42-47, September (1993)
9. Nicolis, G., Prigogine, I.: Self-organization in Nonequilibrium Systems. Wiley, New York
(1977)
10. López-Ruiz, R.: On Instabilities and Complexity. Ph. D. Thesis, Universidad de Navarra,
Pamplona (1994)
11. Catalán, R.G., Garay, J., López-Ruiz, R.: Features of the extension of a statistical measure of
complexity for continuous systems. Phys. Rev. E 66, 011102(6) (2002)
12. Dembo, A., Cover, T.M., Thomas, J.A.: Information theoretic inequalities. IEEE Trans. In-
formation Theory 37, 1501-1518 (1991)
13. Kolmogorov, A.N.: Three approaches to the definition of quantity of information. Probl.
Inform. Theory 1, 3-11 (1965)
14. Chaitin, G.J.: On the length of programs for computing finite binary sequences. J. Assoc.
Comput. Mach. 13, 547-569 (1966); Information, Randomness & Incompleteness. World
Scientific, Singapore (1990)
15. Lempel A., Ziv, J.: On the complexity of finite sequences. IEEE Trans. Inform Theory 22,
75-81 (1976)
16. Bennett, C.H.: Information, dissipation, and the definition of organization. Emerging Syn-
theses in Science, David Pines ed., Santa Fe Institute, Santa Fe, NM, 297-313 (1985)
17. Grassberger, P.: Toward a quantitative theory of self-generated complexity. Int. J. Theor.
Phys. 25, 907-938 (1986)
18. Huberman, B.A., Hogg, T.: Complexity and adaptation. Physica D 22, 376-384 (1986)
19. Lloyd, S., Pagels, H.: Complexity as thermodynamic depth. Ann. Phys. (N.Y.) 188, 186-213
(1988)
20. Crutchfield, J.P., Young, K.: Inferring statistical complexity. Phys. Rev. Lett. 63, 105-108
(1989)
21. Adami, C., Cerf, N.T.: Physical complexity of symbolic sequences. Physica D 137, 62-69
(2000)
22. Sánchez, J.R., López-Ruiz, R.: A method to discern complexity in two-dimensional patterns
generated by coupled map lattices. Physica A 355, 633-640 (2005)

23. Escalona-Morán, M., Cosenza, M.G., López-Ruiz, R., García, P.: Statistical complexity and
nontrivial collective behavior in electroencephalographic signals. Int. J. Bif. Chaos 20, spe-
cial issue on Chaos and Dynamics in Biological Networks, Ed. Chávez & Cazelles (2010)
24. Anteneodo, C., Plastino, A.R.: Some features of the statistical LMC complexity. Phys. Lett.
A 223, 348-354 (1996)
25. Rényi, A.: Probability Theory. North-Holland, Amsterdam (1970)
26. Varga, I., Pipek, J.: Rényi entropies characterizing the shape and the extension of the phase
space representation of quantum wave functions in disordered systems. Phys. Rev. E 68,
026202(8) (2003)
27. López-Ruiz, R., Nagy, Á, Romera, E., Sañudo, J.: A generalized statistical complexity mea-
sure: Applications to quantum systems. J. Math. Phys. 50, 123528(10) (2009)
28. Perakh, M.: Defining complexity. On Talk Reason, www.talkreason.org/articles/complexity.pdf,
August (2004)
29. Calbet, X., López-Ruiz, R.: Extremum complexity distribution of a monodimensional ideal
gas out of equilibrium. Physica A 382, 523-530 (2007)
30. Calbet, X., López-Ruiz, R.: Extremum complexity in the monodimensional ideal gas: the
piecewise uniform density distribution approximation. Physica A 388, 4364-4378 (2009)
31. Feng, G., Song, S., Li, P.: A statistical measure of complexity in hydrological systems. J.
Hydr. Eng. Chin. (Hydr. Eng. Soc.) 11, article no. 14 (1998)
32. Shiner, J.S., Davison, M., Landsberg, P.T.: Simple measure for complexity. Phys. Rev. E 59,
1459-1464 (1999)
33. Martin, M.T., Plastino, A., Rosso, O.A.: Statistical complexity and disequilibrium. Phys.
Lett. A 311 (2-3), 126-132 (2003)
34. Lamberti, W., Martı́n, M.T., Plastino, A., Rosso, O.A.: Intensive entropic non-triviality mea-
sure. Physica A 334, 119-131 (2004)
35. Yu, Z., Chen, G.: Rescaled range and transition matrix analysis of DNA sequences. Comm.
Theor. Phys. (Beijing China) 33 673-678 (2000)
36. Lovallo, M., Lapenna, V., Telesca, L.: Transition matrix analysis of earthquake magnitude
sequences. Chaos, Solitons and Fractals 24, 33-43 (2005).
37. Rosso, O.A., Martin, M.T., Plastino, A.: Brain electrical activity analysis using wavelet-based
informational tools (II): Tsallis non-extensivity and complexity measures. Physica A 320,
497-511 (2003)
38. Sánchez, J.R., López-Ruiz, R.: Detecting synchronization in spatially extended discrete sys-
tems by complexity measurements. Discrete Dyn. Nat. Soc. 9, 337-342 (2005)
39. Chatzisavvas, K.Ch., Moustakidis, Ch.C., Panos, C.P.: Information entropy, information dis-
tances, and complexity in atoms. J. Chem. Phys. 123, 174111 (10 pp) (2005)
40. Sañudo, J., López-Ruiz, R.: Statistical complexity and Fisher-Shannon information in the
H-atom. Phys. Lett. A 372, 5283-5286 (2008)
41. Montgomery Jr., H.E., Sen, K.D.: Statistical complexity and Fisher-Shannon information
measure of H2+ . Phys. Lett. A 372, 2271-2273 (2008)
42. Kowalski, A.M., Plastino, A., Casas, M.: Generalized complexity and classical-quantum
transition. Entropy 11, 111-123 (2009)
43. López-Ruiz, R., Sañudo, J.: Evidence of magic numbers in nuclei by statistical indicators.
Open Syst. Inf. Dyn. 17, issue 3, September (2010)
44. López-Ruiz, R.: Complexity in some physical systems. Int. J. of Bifurcation and Chaos 11,
2669-2673 (2001)
45. Huang, K.: Statistical Mechanics. John Wiley & Sons, New York (1987)
46. Svelto, O.: Principles of Lasers. Plenum Press, New York (1949)
47. Arecchi, F.T.: Measurement of the statistical distribution of Gaussian and laser sources.
Phys. Rev. Lett. 15, 912-916 (1965)
48. Callen, H.B.: Thermodynamics and an Introduction to Thermostatistics. John Wiley & Sons,
New York (1985).
49. Harris, J.W., Stocker, H.: Handbook of Mathematics and Computational Science. Springer-
Verlag, New York (1998)
50. “I think the next century will be the century of complexity”, Stephen Hawking in San José
Mercury News, Morning Final Edition, January 23 (2000)
