A Statistical Measure of Complexity
– Book Chapter –

Héctor Mancini
Xavier Calbet

arXiv:1009.1498v1 [nlin.AO] 8 Sep 2010
September 9, 2010
Chapter 1
A Statistical Measure of Complexity
If the system is out of equilibrium, the entropy H can be expanded around this
maximum Hmax :
$$H(p_1, p_2, \ldots, p_N) = K \log N - \frac{NK}{2} \sum_{i=1}^{N} \left(p_i - \frac{1}{N}\right)^2 + \ldots = H_{max} - \frac{NK}{2}\, D + \ldots \qquad (1.3)$$
where the quantity D = ∑_i (p_i − 1/N)², which we call disequilibrium, is a kind of
distance from the actual system configuration to equilibrium. If expression (1.3)
is multiplied by H we obtain:
$$H^2 = H \cdot H_{max} - \frac{NK}{2}\, H \cdot D + K^2 f(N, p_i), \qquad (1.4)$$
where f(N, p_i) is the entropy multiplied by the rest of the Taylor expansion terms,
which are of the form (1/N) ∑_i (N p_i − 1)^m with m > 2. If we rename C = H · D,
expression (1.4) can be rewritten as

$$C = cte \cdot (H \cdot H_{max} - H^2) + K\, \bar{f}(N, p_i), \qquad (1.5)$$

with cte^{−1} = NK/2 and \bar{f} = 2f/N. The idea of distance for the disequilibrium
is now clearer if we see that D is just the distance D ∼ (H_max − H) for systems in
the vicinity of equiprobability. In an ideal gas we have H ∼ H_max and D ∼ 0.
Differentiating expression (1.5) with respect to time near H ∼ H_max (where
H_max − 2H ∼ −H_max) and neglecting the higher-order terms gives

$$\frac{dC}{dt} \sim -H_{max}\, \frac{dH}{dt}. \qquad (1.6)$$
The irreversibility property of H implies that dH/dt ≥ 0, the equality occurring
only for the equipartition; therefore

$$\frac{dC}{dt} \le 0. \qquad (1.7)$$
Hence, in the vicinity of Hmax, LMC complexity is always decreasing along the
evolution path towards equilibrium, independently of the kind of transition and of
the system under study. This does not forbid complexity from increasing when the
system is very far from equilibrium. In fact, this is the case in the general
situation, as can be seen, for instance, in the gas system presented in Ref. [6].
Thus, the perfect crystal and the ideal gas are extrema in the scale of "order" and
"information". It follows that the definition of "complexity" must not be made in
terms of just "order" or "information".
It might seem reasonable to propose a measure of “complexity” by adopting
some kind of distance from the equiprobable distribution of the accessible states of
the system [4]. Defined in this way, “disequilibrium” would give an idea of the prob-
abilistic hierarchy of the system. “Disequilibrium” would be different from zero if
there are privileged, or more probable, states among those accessible. But this would
not work. Going back to the two examples we began with, it is readily seen that a
perfect crystal is far from an equidistribution among the accessible states because
one of them is totally prevailing, and so “disequilibrium” would be maximum. For
the ideal gas, “disequilibrium” would be zero by construction. Therefore such a dis-
tance or “disequilibrium” (a measure of a probabilistic hierarchy) cannot be directly
associated with “complexity”.
In Figure 1.1 we sketch an intuitive qualitative behavior for “information” H and
“disequilibrium” D for systems ranging from the perfect crystal to the ideal gas. As
indicated in the former section, this graph suggests that the product of these two
quantities could be used as a measure of “complexity”: C = H · D. The function C
has indeed the features and asymptotic properties that one would expect intuitively:
it vanishes for the perfect crystal and for the isolated ideal gas, and it is different
from zero for the rest of the systems of particles. We will follow these guidelines to
establish a quantitative measure of “complexity”.
Before attempting any further progress, however, we must recall that “complex-
ity” cannot be measured univocally, because it depends on the nature of the descrip-
tion (which always involves a reductionist process) and on the scale of observation.
Let us take an example to illustrate this point. A computer chip can look very
different at different scales: it is an entangled array of electronic elements at
the microscopic scale, but only an ordered set of pins attached to a black box at
the macroscopic scale.
We shall now discuss a measure of “complexity” based on the statistical descrip-
tion of systems. Let us assume that the system has N accessible states {x1 , x2 , ..., xN }
when observed at a given scale. We will call this an N-system. Our understanding of
the behavior of this system determines the corresponding probabilities {p1 , p2 , ..., pN }
(with the condition ∑_{i=1}^{N} p_i = 1) of each state (p_i > 0 for all i). Then the knowledge
of the underlying physical laws at this scale is incorporated into a probability dis-
tribution for the accessible states. It is possible to find a quantity measuring the
amount of "information". As presented in the former section, under the most
elementary conditions of consistency, Shannon [3] determined the unique function
H(p_1, p_2, ..., p_N), given by expression (1.1), that accounts for the "information"
stored in a system, where K is a positive constant. The quantity H is called infor-
mation. The redefinition of information H as some type of monotone function of
the Shannon entropy can also be useful in many contexts. In the case of a crystal, a
state x_c would be the most probable, p_c ∼ 1, and all others x_i would be very
improbable, p_i ∼ 0 for i ≠ c. Then H_c ∼ 0. On the other hand, equiprobability
characterizes an isolated ideal gas, p_i ∼ 1/N, so H_g ∼ K log N, i.e., the maximum
of information for an N-system. (Notice that if one assumes equiprobability and
K = κ, the Boltzmann constant, then H is identified with the thermodynamic entropy,
S = κ log N.) Any other N-system will have an amount of information between these
two extrema.

Fig. 1.1 Sketch of the intuitive notion of the magnitudes "information" (H) and
"disequilibrium" (D) for physical systems, and the behavior intuitively required for
the magnitude "complexity". The quantity C = H · D is proposed to measure such a
magnitude.
Let us propose a definition of disequilibrium D in a N-system [9]. The intuitive
notion suggests that some kind of distance from an equiprobable distribution should
be adopted. Two requirements are imposed on the magnitude D: D > 0, in order to
have a positive measure of "complexity", and D = 0 in the limit of equiprobability.
The straightforward solution is to add the quadratic distances of each state to the
equiprobability as follows:
$$D = \sum_{i=1}^{N} \left(p_i - \frac{1}{N}\right)^2. \qquad (1.8)$$
According to this definition, a crystal has maximum disequilibrium (for the dom-
inant state, pc ∼ 1, and Dc → 1 for N → ∞) while the disequilibrium for an ideal
gas vanishes (Dg ∼ 0) by construction. For any other system D will have a value
between these two extrema.
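As a minimal numerical sketch of these definitions (assuming the normalized
information used in the simulations described below, i.e., K = 1/log N; the
function names are ours), the two extreme cases and one intermediate N-system can
be evaluated directly:

```python
import numpy as np

def information(p):
    """Normalized Shannon information, H = -sum_i p_i log p_i / log N, in [0, 1]."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]                        # convention: 0 log 0 = 0
    return -np.sum(nz * np.log(nz)) / np.log(len(p))

def disequilibrium(p):
    """Disequilibrium, D = sum_i (p_i - 1/N)^2, Eq. (1.8)."""
    p = np.asarray(p, dtype=float)
    return np.sum((p - 1.0 / len(p)) ** 2)

def lmc_complexity(p):
    """LMC statistical complexity, C = H * D."""
    return information(p) * disequilibrium(p)

N = 4
crystal = np.array([1.0, 0.0, 0.0, 0.0])   # one totally prevailing state: H = 0
gas = np.full(N, 1.0 / N)                  # equiprobability: D = 0
mixed = np.array([0.5, 0.3, 0.1, 0.1])     # intermediate N-system: C > 0
for p in (crystal, gas, mixed):
    print(information(p), disequilibrium(p), lmc_complexity(p))
```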
Other possibilities for the continuous extension of C exist. For instance, a
successful attempt to extend the LMC complexity to continuous systems has been made
in Ref. [11]. When the number of states available to a system is a
continuum then the natural representation is a continuous distribution. In this case,
the entropy can become negative. The positivity of C for every distribution is
recovered by taking the exponential of H [12]. If we define Ĉ = Ĥ · D = e^H · D as
an extension of C to the continuous case, interesting properties characterizing the
indicator Ĉ appear. Namely, its invariance under translations, rescaling
transformations and replication converts Ĉ into a good candidate to be considered
an indicator carrying essential information about the statistical properties of a
continuous system.
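A brief numerical illustration of this invariance, assuming the definitions
Ĥ = e^H with H = −∫ p log p dx and D = ∫ p² dx (natural logarithms; the helper
names are ours): the value of Ĉ for a Gaussian density comes out independent of
its width σ, close to √(e/2) ≈ 1.1658.

```python
import numpy as np
from scipy.integrate import quad

def c_hat(pdf, a, b):
    """C^ = exp(H) * D for a continuous density supported (numerically) on (a, b)."""
    H = quad(lambda x: -pdf(x) * np.log(pdf(x)), a, b)[0]   # differential entropy
    D = quad(lambda x: pdf(x) ** 2, a, b)[0]                # continuous disequilibrium
    return np.exp(H) * D

def gaussian(sigma):
    return lambda x: np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# Rescaling invariance: the same value, sqrt(e/2) ~ 1.1658, for every width
for sigma in (0.5, 1.0, 3.0):
    print(sigma, c_hat(gaussian(sigma), -12 * sigma, 12 * sigma))
```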
Direct simulations of the definition give the values of C for general N-systems.
The set of all the possible distributions {p1 , p2 , ..., pN } where an N-system could be
found is sampled. For the sake of simplicity H is normalized to the interval [0, 1].
Thus H = −∑_{i=1}^{N} p_i log p_i / log N. For each distribution {p_i} the normalized
information H({p_i}) and the disequilibrium D({p_i}) (Eq. 1.8) are calculated. In each case
the normalized complexity C = H · D is obtained and the pair (H,C) stored. These
two magnitudes are plotted on a diagram (H,C(H)) in order to verify the qualitative
behavior predicted in Figure 1.1. For N = 2 an analytical expression for the curve
C(H) is obtained. If the probability of one state is p1 = x, that of the second one is
simply p2 = 1 − x. Complexity vanishes for the two simplest 2-systems: the crystal
(H = 0; p1 = 1, p2 = 0) and the ideal gas (H = 1; p1 = 1/2, p2 = 1/2). Let us
notice that this curve is the simplest one that fulfills all the conditions discussed in
the introduction. The largest complexity is reached for H ∼ 1/2, and its value is
C(x ∼ 0.11) ∼ 0.151. For N > 2 the relationship between H and C is not univocal
anymore. Many different distributions {p_i} store the same information H but have
different complexity C. Figure 1.2 displays such a behavior for N = 3.

Fig. 1.2 In general, the dependence of complexity (C) on normalized information (H)
is not univocal: many distributions {p_i} can present the same value of H but
different C. This is shown in the case N = 3.

If we take
the maximum complexity Cmax (H) associated with each H a curve similar to the
one for a 2-system is recovered. Every 3-system will have a complexity below this
line and above the line of Cmin(H), and also above the minimum envelope complexity
Cminenv. These lines will be found analytically in the next section. In Figure 1.3
the curves Cmax(H) for the cases N = 3, . . . , 10 are also shown. Let us observe the
shift of the complexity-curve peak to smaller values of entropy for rising N. This
agrees with the intuition that the largest complexity (the greatest number of
possibilities of "complexification") is reached at smaller entropies for systems
with a bigger number of states.
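The sampling experiment just described can be reproduced in a few lines; this is a
sketch in which the simplex of distributions is sampled uniformly through a flat
Dirichlet distribution (our choice of sampler, not necessarily the one used in the
original computation):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3
p = rng.dirichlet(np.ones(N), size=200_000)      # uniform sampling of the simplex

H = -np.sum(p * np.log(p), axis=1) / np.log(N)   # normalized information
D = np.sum((p - 1.0 / N) ** 2, axis=1)           # disequilibrium, Eq. (1.8)
C = H * D

# Empirical envelope Cmax(H): bin the H axis and keep the largest C in each bin
bins = np.digitize(H, np.linspace(0.0, 1.0, 51))
envelope = {b: C[bins == b].max() for b in np.unique(bins)}
print(max(envelope.values()))   # height of the N = 3 complexity band
```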
Let us return to the point at which we started this discussion. Any notion of
complexity in physics [7, 8] should only be introduced on the basis of a well-defined
or operational magnitude [4, 10]. But two additional requirements are needed in order
to obtain a good definition of complexity in physics: (1) the new magnitude must be
measurable in many different physical systems and (2) a comparative relationship
and a physical interpretation between any two measurements should be possible.
Fig. 1.3 Complexity (C = H · D) as a function of the normalized information (H) for a system with
two accessible states (N = 2). Also curves of maximum complexity (Cmax ) are shown for the cases:
N = 3, . . ., 10.
1.3 LMC Complexity: Extremal Distributions

Now we proceed to calculate the distributions which maximize and minimize the
LMC complexity and its asymptotic behavior [6].
Let us assume that the system can be in one of its N possible accessible states, i.
The probability of the system being in state i will be given by the discrete distribu-
tion function, f_i ≥ 0, with the normalization condition I ≡ ∑_{i=1}^{N} f_i = 1. The system
is defined such that, if isolated, it will reach equilibrium, with all the states
having equal probability, f_e = 1/N. Since we are supposing that H is normalized,
0 ≤ H ≤ 1, and 0 ≤ D ≤ (N − 1)/N, the complexity C is also normalized, 0 ≤ C ≤ 1.
When an isolated system evolves with time, the complexity cannot take any possible
value in a C versus H map, as can be seen in Fig. 1.2, but must stay within certain
bounds, Cmax and Cmin. These are the maximum and minimum values of C
for a given H. Since C = D · H, finding the extrema of C for constant H is equivalent
to finding the extrema of D.
There are two restrictions on D: the normalization, I, and the fixed value of the
entropy, H. To find these extrema, the method of Lagrange multipliers is used.
Differentiating the expressions of D, I and H, we obtain

$$\frac{\partial D}{\partial f_j} = 2(f_j - f_e), \qquad (1.11)$$

$$\frac{\partial I}{\partial f_j} = 1, \qquad (1.12)$$

$$\frac{\partial H}{\partial f_j} = -\frac{1}{\ln N}\,(\ln f_j + 1). \qquad (1.13)$$
Two new parameters, α and β, which are linear combinations of the Lagrange
multipliers, are defined, and the condition for an extremum becomes

$$f_j + \alpha \ln f_j + \beta = 0, \qquad (1.15)$$
where the solutions of this equation, f j , are the values that minimize or maximize
the disequilibrium.
In the maximum complexity case there are two solutions, f j , to Eq. (1.15) which
are shown in Table 1.1. One of these solutions, fmax , is given by
$$H = -\frac{1}{\ln N}\left[f_{max}\ln f_{max} + (1 - f_{max})\ln\frac{1 - f_{max}}{N-1}\right], \qquad (1.16)$$
and the other solution by (1 − fmax )/(N − 1). The maximum disequilibrium, Dmax ,
for a fixed H is
$$D_{max} = (f_{max} - f_e)^2 + (N-1)\left(\frac{1 - f_{max}}{N-1} - f_e\right)^2. \qquad (1.17)$$
Table 1.1 Probability values, f_j, that give a maximum of disequilibrium, Dmax, for a given H.

  Number of states with f_j    f_j                       Range of f_j
  1                            f_max                     1/N . . . 1
  N − 1                        (1 − f_max)/(N − 1)       0 . . . 1/N
Table 1.2 Probability values, f_j, that give a minimum of disequilibrium, Dmin, for a given H.

  Number of states with f_j    f_j                           Range of f_j
  n                            0                             0
  1                            f_min                         0 . . . 1/(N − n)
  N − n − 1                    (1 − f_min)/(N − n − 1)       1/(N − n) . . . 1/(N − n − 1)
Equivalently, the values, f j , that give a minimum complexity are shown in Table
1.2. One of the solutions, fmin , is given by
$$H = -\frac{1}{\ln N}\left[f_{min}\ln f_{min} + (1 - f_{min})\ln\frac{1 - f_{min}}{N-n-1}\right], \qquad (1.19)$$
where n is the number of states with f j = 0 and takes a value in the range n =
0, 1, . . . , N − 2. The resulting minimum disequilibrium, Dmin, for a given H is

$$D_{min} = (f_{min} - f_e)^2 + (N-n-1)\left(\frac{1 - f_{min}}{N-n-1} - f_e\right)^2 + n f_e^2. \qquad (1.20)$$
Note that in this case f_j = 0 is an additional hidden solution that stems from the
positivity restriction on the f_i values. To obtain these solutions explicitly we
can define x_i such that f_i ≡ x_i². These x_i values do not have the restriction of
positivity imposed on f_i and can take positive or negative values. If we repeat the
Lagrange multiplier method with these new variables, a new solution arises: x_j = 0,
or equivalently, f_j = 0.
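As a sketch of how the maximum-complexity branch can be evaluated in practice,
Eq. (1.16) can be inverted numerically for f_max (H decreases monotonically from 1
at f_max = 1/N to 0 at f_max = 1) and substituted into Eq. (1.17); the function
names below are ours:

```python
import numpy as np
from scipy.optimize import brentq

def h_of_fmax(f, N):
    """Normalized entropy of the Table 1.1 distribution, Eq. (1.16)."""
    return -(f * np.log(f) + (1 - f) * np.log((1 - f) / (N - 1))) / np.log(N)

def c_max(H, N):
    """Maximum LMC complexity for a given normalized entropy 0 < H < 1."""
    fe = 1.0 / N
    # H runs monotonically from 1 to 0 as f_max goes from 1/N to 1, so
    # Eq. (1.16) can be inverted with a bracketing root finder.
    f = brentq(lambda f: h_of_fmax(f, N) - H, fe, 1.0 - 1e-12)
    d = (f - fe) ** 2 + (N - 1) * ((1 - f) / (N - 1) - fe) ** 2   # Eq. (1.17)
    return H * d

for H in (0.2, 0.5, 0.8):
    print(H, c_max(H, N=4))
```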
As an example, the maximum and minimum of complexity, Cmax and Cmin, are plotted
as a function of the entropy, H, in Fig. 1.4 for N = 4. This figure also shows
the minimum envelope complexity, Cminenv = Dminenv · H, where Dminenv is
defined below. In Fig. 1.5 the maximum and minimum disequilibrium, Dmax and
Dmin , versus H are also shown.
Fig. 1.4 Maximum, minimum, and minimum envelope complexity, Cmax , Cmin , and Cminenv respec-
tively, as a function of the entropy, H, for a system with N = 4 accessible states.
Fig. 1.5 Maximum, minimum, and minimum envelope disequilibrium, Dmax , Dmin , and Dminenv
respectively, as a function of the entropy, H, for a system with N = 4 accessible states.
When N tends toward infinity the probability, f_max, of the dominant state has a
linear dependence on the entropy,

$$\lim_{N\to\infty} f_{max} = 1 - H, \qquad (1.23)$$

and thus the maximum disequilibrium scales as lim_{N→∞} D_max = (1 − H)². The
maximum complexity tends to

$$\lim_{N\to\infty} C_{max} = H \cdot (1 - H)^2. \qquad (1.24)$$
The limit of the minimum disequilibrium and complexity vanishes,
lim_{N→∞} D_minenv = 0, and thus

$$\lim_{N\to\infty} C_{min} = 0. \qquad (1.25)$$
In general, in the limit N → ∞, the complexity is not a trivial function of the entropy,
in the sense that for a given H there exists a range of complexities between 0 and
Cmax , given by Eqs. (1.25) and (1.24), respectively.
In particular, in this asymptotic limit, the maximum of Cmax is found when
H = 1/3, or equivalently fmax = 2/3, which gives a maximum of the maximum
complexity of Cmax = 4/27. This value was numerically calculated in Ref. [24].
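This value can be checked directly from the asymptotic expressions above:

$$\frac{d}{dH}\left[H(1-H)^2\right] = (1-H)(1-3H) = 0 \;\Rightarrow\; H = \frac{1}{3}, \quad f_{max} = 1 - H = \frac{2}{3}, \quad C_{max} = \frac{1}{3}\left(\frac{2}{3}\right)^2 = \frac{4}{27}.$$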
1.4 Rényi Entropies and LMC Complexity
The Rényi entropy of order q is defined as

$$I_q = \frac{1}{1-q}\,\log \sum_{i=1}^{N} p_i^q, \qquad (1.26)$$

where q is an index running over all the integer values. By differentiating I_q
with respect to q a negative quantity is obtained independently of q; hence I_q
decreases monotonically as q increases.
The Rényi entropies are an extension of the Shannon information H. In fact, H
is obtained in the limit q → 1:
$$H = I_1 = \lim_{q\to 1} I_q = -\sum_{i=1}^{N} p_i \log p_i, \qquad (1.27)$$
where the constant K of Eq. (1.1) is taken to be unity. The disequilibrium D is
also related to I_2 = − log ∑_{i=1}^{N} p_i². We have that

$$D = \sum_{i=1}^{N} p_i^2 - \frac{1}{N} = e^{-I_2} - \frac{1}{N}. \qquad (1.28)$$
This indicates that a family of different indicators could be derived from the differences
established among Rényi entropies with different q-indices [5]. Let us remark at
this point the coincidence of the indicator log Ĉ with the quantity Sstr introduced by
Varga and Pipek as a meaningful parameter to characterize the shape of a distribu-
tion. They apply this formalism to the Husimi representation, i.e., to the projection
of wave functions onto the coherent state basis [26]. A further generalization of the
LMC complexity measure as function of the Rényi entropies has been introduced in
Ref. [27].
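A quick numerical check of these relations (a sketch with K = 1 and natural
logarithms; the helper name renyi is ours):

```python
import numpy as np

def renyi(p, q):
    """Renyi entropy I_q = log(sum_i p_i^q) / (1 - q); Shannon H in the limit q -> 1."""
    p = np.asarray(p, dtype=float)
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))          # Eq. (1.27)
    return np.log(np.sum(p ** q)) / (1.0 - q)

p = np.array([0.5, 0.3, 0.1, 0.1])
N = len(p)
D = np.sum((p - 1.0 / N) ** 2)                          # Eq. (1.8)
print(np.isclose(D, np.exp(-renyi(p, 2)) - 1.0 / N))    # Eq. (1.28): True
print([renyi(p, q) for q in (0.5, 1.0, 2.0, 3.0)])      # monotonically decreasing
```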
The invariance of Ĉ under rescaling transformations implies that this magnitude
is conserved in many different processes. For instance, the initial Gaussian-like dis-
tribution will continue to be Gaussian in a classical diffusion process. Then Ĉ is
constant in time, dĈ/dt = 0, and we have

$$\frac{dI_1}{dt} = \frac{dI_2}{dt}. \qquad (1.35)$$
The equal rate of loss of I_1 and I_2, i.e., the synchronization of both quantities,
is the cost to be paid in order to maintain the shape of the distribution associated
with the system; hence, all its statistical properties remain unchanged during its
time evolution.
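For instance, for a Gaussian packet of time-dependent width σ(t), both entropies
grow at the same rate while Ĉ stays fixed:

$$I_1 = \log\left(\sigma\sqrt{2\pi e}\right), \qquad I_2 = \log\left(2\sigma\sqrt{\pi}\right) \;\Rightarrow\; \frac{dI_1}{dt} = \frac{dI_2}{dt} = \frac{\dot{\sigma}}{\sigma}, \qquad \hat{C} = e^{I_1 - I_2} = \sqrt{e/2}\,.$$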
Consider now a system in equilibrium in the canonical ensemble, where

$$p_i = \frac{e^{-\beta E_i}}{Q_N}, \qquad (1.36)$$

$$Q_N = \int \frac{d^{3N}p\, d^{3N}q}{N!\, h^{3N}}\; e^{-\beta E(p,q)} = e^{-\beta A(V,T)}, \qquad (1.37)$$

where Q_N is the partition function of the canonical ensemble, β = 1/κT with κ the
Boltzmann constant and T the temperature, V the volume, N the number of particles,
E(p, q) the Hamiltonian of the system, h the Planck constant, and A(V, T) the
Helmholtz potential.
Calculation of H and D gives us

$$H(V,T) = \left(1 + T\,\frac{\partial}{\partial T}\right)(\kappa \log Q_N) = S(V,T), \qquad (1.38)$$

$$D(V,T) = e^{2\beta\left[A(V,T) - A(V,T/2)\right]}. \qquad (1.39)$$
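Relation (1.39) can be verified on a toy example; the sketch below uses a
hypothetical two-level system with energies {0, ε}, a discrete sum as partition
function, and κ = 1:

```python
import numpy as np

eps = 1.0                               # level spacing of the toy system
kT = 0.7                                # temperature (kappa = 1)
beta = 1.0 / kT

Q = lambda b: 1.0 + np.exp(-b * eps)    # two-level partition function
A = lambda T: -T * np.log(Q(1.0 / T))   # Helmholtz potential, A = -kT log Q

p = np.array([1.0, np.exp(-beta * eps)]) / Q(beta)   # Boltzmann weights, Eq. (1.36)
print(np.sum(p ** 2))                                # D computed directly
print(np.exp(2 * beta * (A(kT) - A(kT / 2))))        # D from Eq. (1.39): same value
```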
For a Gaussian distribution of width σ,

$$p(x) = \frac{1}{\sigma\sqrt{2\pi}}\, \exp\left(-\frac{x^2}{2\sigma^2}\right), \qquad (1.41)$$

the complexity takes the value

$$C_g = H \cdot D = \frac{K}{2\sigma\sqrt{\pi}}\left[\frac{1}{2} + \log(\sigma\sqrt{2\pi})\right]. \qquad (1.42)$$
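A numerical check of Eq. (1.42), sketched with K = 1 (the quadrature limits ±12σ
are a practical choice of ours):

```python
import numpy as np
from scipy.integrate import quad

sigma = 1.5
pdf = lambda x: np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

H = quad(lambda x: -pdf(x) * np.log(pdf(x)), -12 * sigma, 12 * sigma)[0]
D = quad(lambda x: pdf(x) ** 2, -12 * sigma, 12 * sigma)[0]

analytic = (0.5 + np.log(sigma * np.sqrt(2 * np.pi))) / (2 * sigma * np.sqrt(np.pi))
print(H * D, analytic)   # both ~ 0.343 for sigma = 1.5; they agree, Eq. (1.42)
```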
An analogous calculation can be done for an exponential distribution of width γ,
with the condition H ≥ 0 imposing γ ≥ γ_min = e^{−1}. The highest complexity
corresponds in this case to γ̄ = 1. Remark that for the same width as a Gaussian
distribution (σ = γ), the exponential distribution presents a higher complexity
(Ce/Cg ∼ 1.4).
One step further, combining the results obtained in the former sections, is now
taken: we calculate the LMC complexity of an unrealistic and simplified model of a
laser [46]. Let us suppose a laser with two levels of energy, E_1 = 0 and E_2 = ε,
with N_1 atoms in the first level and N_2 atoms in the second level, and the
condition N_1 + N_2 = N (the total number of atoms). Our aim is to sketch the
statistics of this model and to introduce the results of photon counting [47],
which produce an asymmetric behavior of C as a function of the population inversion
η = N_2/N. In the range η ∈ (0, 1/2) spontaneous and stimulated emission can take
place, but only in the range η ∈ (1/2, 1) is the condition for lasing action
reached, because the population must be, at least, inverted, η > 1/2.
The entropy S of this system vanishes when N_1 or N_2 is zero. Moreover, S must be
homogeneous of first order in the extensive variable N [48]. For the sake of
simplicity we approximate S by the first term of its Taylor expansion:

$$S \sim \kappa\, \frac{N_1 N_2}{N} = \kappa N \eta (1 - \eta). \qquad (1.45)$$
The internal energy is U = N_2 ε = ε N η and the statistical temperature is

$$T = \left(\frac{\partial S}{\partial U}\right)_N^{-1} = \frac{\varepsilon}{\kappa\,(1 - 2\eta)}. \qquad (1.46)$$
Note that for η > 1/2 the temperature is negative, as corresponds to the stimulated
emission regime dominating the actual laser action.
We are now interested in introducing qualitatively the results of laser photon
counting in the calculation of LMC complexity. It was reported in [47] that the
photo-electron distribution of the laser field is Poissonian. In the continuous
limit the Poisson distribution is approached by the normal distribution [49]. The
width σ of this energy distribution in the canonical ensemble is proportional to
the statistical temperature of the system. Thus, for a switched-on laser in the
regime η ∈ [1/2, 1], the width of the Gaussian energy distribution can be fitted by
choosing σ ∼ −T ∼ 1/(2η − 1) (recall that T < 0 in this case). As η runs from 1/2
to 1, σ decreases from infinity to σ_min = (2πe)^{−1/2}. Then we obtain

$$\sigma \sim \frac{(2\pi e)^{-1/2}}{2\eta - 1}. \qquad (1.47)$$
By replacing this expression in Eq. (1.42), and rescaling by a factor proportional
to the entropy, S ∼ κN (in order to give it the correct order of magnitude), the
LMC complexity for a population inversion in the range η ∈ [1/2, 1] is obtained. At
this level of discussion we can take C_laser = 0 for η < 1/2. Regarding the
behavior of this function, it is worth noticing the value η_2 ≃ 0.68, where the
laser presents the highest complexity. Following these ideas, if the width, σ, of
the experimental photo-electron distribution of the laser field is measured, the
population inversion parameter, η, would be given by Eq. (1.47). In a second step,
the LMC complexity of the laser system would be obtained by Eq. (1.48).
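The chain of reasoning can be sketched numerically by inserting Eq. (1.47) into
Eq. (1.42); here K = 1 and the overall rescaling factor S ∼ κN is omitted, since it
does not change the position of the peak:

```python
import numpy as np

def c_gauss(sigma):
    """Eq. (1.42) with K = 1: complexity of a Gaussian of width sigma."""
    return (0.5 + np.log(sigma * np.sqrt(2 * np.pi))) / (2 * sigma * np.sqrt(np.pi))

def c_laser(eta):
    """Shape of C_laser(eta) from Eqs. (1.47) and (1.42); rescaling S ~ kN omitted."""
    sigma = (2 * np.pi * np.e) ** -0.5 / (2 * eta - 1)   # Eq. (1.47)
    return c_gauss(sigma)

eta = np.linspace(0.501, 0.999, 5000)
C = c_laser(eta)
print(eta[np.argmax(C)])   # ~ 0.68, in agreement with eta_2 quoted in the text
```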
It is worth remarking that a model helps us to approach reality and provides
invaluable guidance in the quest for a finer understanding of a physical
phenomenon. From this point of view, the present calculation only attempts to
illustrate the problem of calculating the LMC complexity of a physical system via
an unrealistic but simplified model.
1.6 Conclusions
References
1. Jaynes, E.T.: Information theory and statistical mechanics. Phys. Rev. 106, 620-630
(1957)
2. Badii, R., Politi, A.: Complexity. Hierarchical Structures and Scaling in Physics. Cam-
bridge University Press, Cambridge (1997)
3. Shannon, C.E., Weaver, W.: The Mathematical Theory of Communication. University of
Illinois Press, Urbana, Illinois (1949)
4. López-Ruiz, R., Mancini, H.L., Calbet, X.: A statistical measure of complexity. Phys. Lett.
A 209, 321-326 (1995)
5. López-Ruiz, R.: Shannon information, LMC complexity and Rényi entropies: a straightfor-
ward approach. Biophys. Chem. 115, 215 (2005).
6. Calbet, X., López-Ruiz, R.: Tendency toward maximum complexity in a non-equilibrium
isolated system. Phys. Rev. E 63, 066116 (9pp) (2001)
7. Anderson, P.W.: Is complexity physics? Is it science? What is it?. Physics Today, 9-11, July
(1991)
8. Parisi, G.: Statistical physics and biology. Physics World 6, 42-47, September (1993)
9. Nicolis, G., Prigogine, I.: Self-organization in Nonequilibrium Systems. Wiley, New York
(1977)
10. López-Ruiz, R.: On Instabilities and Complexity. Ph. D. Thesis, Universidad de Navarra,
Pamplona (1994)
11. Catalán, R.G., Garay, J., López-Ruiz, R.: Features of the extension of a statistical measure of
complexity for continuous systems. Phys. Rev. E 66, 011102(6) (2002)
12. Dembo, A., Cover, T.M., Thomas, J.A.: Information theoretic inequalities. IEEE Trans. In-
formation Theory 37, 1501-1518 (1991)
13. Kolmogorov, A.N.: Three approaches to the definition of quantity of information. Probl.
Inform. Theory 1, 3-11 (1965)
14. Chaitin, G.J.: On the length of programs for computing finite binary sequences. J. Assoc.
Comput. Mach. 13, 547-569 (1966); Information, Randomness & Incompleteness. World
Scientific, Singapore (1990)
15. Lempel, A., Ziv, J.: On the complexity of finite sequences. IEEE Trans. Inform. Theory 22,
75-81 (1976)
16. Bennett, C.H.: Information, dissipation, and the definition of organization. Emerging Syn-
theses in Science, David Pines ed., Santa Fe Institute, Santa Fe, NM, 297-313 (1985)
17. Grassberger, P.: Toward a quantitative theory of self-generated complexity. Int. J. Theor.
Phys. 25, 907-938 (1986)
18. Huberman, B.A., Hogg, T.: Complexity and adaptation. Physica D 22, 376-384 (1986)
19. Lloyd, S., Pagels, H.: Complexity as thermodynamic depth. Ann. Phys. (N.Y.) 188, 186-213
(1988)
20. Crutchfield, J.P., Young, K.: Inferring statistical complexity. Phys. Rev. Lett. 63, 105-108
(1989)
21. Adami, C., Cerf, N.T.: Physical complexity of symbolic sequences. Physica D 137, 62-69
(2000)
22. Sánchez, J.R., López-Ruiz, R.: A method to discern complexity in two-dimensional patterns
generated by coupled map lattices. Physica A 355, 633-640 (2005)
23. Escalona-Morán, M., Cosenza, M.G., López-Ruiz, R., García, P.: Statistical complexity and
nontrivial collective behavior in electroencephalographic signals. Int. J. Bif. Chaos 20, spe-
cial issue on Chaos and Dynamics in Biological Networks, Ed. Chávez & Cazelles (2010)
24. Anteneodo, C., Plastino, A.R.: Some features of the statistical LMC complexity. Phys. Lett.
A 223, 348-354 (1996)
25. Rényi, A.: Probability Theory. North-Holland, Amsterdam (1970)
26. Varga, I., Pipek, J.: Rényi entropies characterizing the shape and the extension of the phase
space representation of quantum wave functions in disordered systems. Phys. Rev. E 68,
026202(8) (2003)
27. López-Ruiz, R., Nagy, Á., Romera, E., Sañudo, J.: A generalized statistical complexity mea-
sure: Applications to quantum systems. J. Math. Phys. 50, 123528(10) (2009)
28. Perakh, M.: Defining complexity. On Talk Reason, www.talkreason.org/articles/complexity.pdf,
August (2004)
29. Calbet, X., López-Ruiz, R.: Extremum complexity distribution of a monodimensional ideal
gas out of equilibrium. Physica A 382, 523-530 (2007)
30. Calbet, X., López-Ruiz, R.: Extremum complexity in the monodimensional ideal gas: the
piecewise uniform density distribution approximation. Physica A 388, 4364-4378 (2009)
31. Feng, G., Song, S., Li, P.: A statistical measure of complexity in hydrological systems. J.
Hydr. Eng. Chin. (Hydr. Eng. Soc.) 11, article no. 14 (1998)
32. Shiner, J.S., Davison, M., Landsberg, P.T.: Simple measure for complexity. Phys. Rev. E 59,
1459-1464 (1999)
33. Martin, M.T., Plastino, A., Rosso, O.A.: Statistical complexity and disequilibrium. Phys.
Lett. A 311 (2-3), 126-132 (2003)
34. Lamberti, W., Martín, M.T., Plastino, A., Rosso, O.A.: Intensive entropic non-triviality mea-
sure. Physica A 334, 119-131 (2004)
35. Yu, Z., Chen, G.: Rescaled range and transition matrix analysis of DNA sequences. Comm.
Theor. Phys. (Beijing China) 33 673-678 (2000)
36. Lovallo, M., Lapenna, V., Telesca, L.: Transition matrix analysis of earthquake magnitude
sequences. Chaos, Solitons and Fractals 24, 33-43 (2005).
37. Rosso, O.A., Martin, M.T., Plastino, A.: Brain electrical activity analysis using wavelet-based
informational tools (II): Tsallis non-extensivity and complexity measures. Physica A 320,
497-511 (2003)
38. Sánchez, J.R., López-Ruiz, R.: Detecting synchronization in spatially extended discrete sys-
tems by complexity measurements. Discrete Dyn. Nat. Soc. 9, 337-342 (2005)
39. Chatzisavvas, K.Ch., Moustakidis, Ch.C., Panos, C.P.: Information entropy, information dis-
tances, and complexity in atoms. J. Chem. Phys. 123, 174111 (10 pp) (2005)
40. Sañudo, J., López-Ruiz, R.: Statistical complexity and Fisher-Shannon information in the
H-atom. Phys. Lett. A 372, 5283-5286 (2008)
41. Montgomery Jr., H.E., Sen, K.D.: Statistical complexity and Fisher-Shannon information
measure of H2+ . Phys. Lett. A 372, 2271-2273 (2008)
42. Kowalski, A.M., Plastino, A., Casas, M.: Generalized complexity and classical-quantum
transition. Entropy 11, 111-123 (2009)
43. López-Ruiz, R., Sañudo, J.: Evidence of magic numbers in nuclei by statistical indicators.
Open Syst. Inf. Dyn. 17, issue 3, September (2010)
44. López-Ruiz, R.: Complexity in some physical systems. Int. J. of Bifurcation and Chaos 11,
2669-2673 (2001)
45. Huang, K.: Statistical Mechanics. John Wiley & Sons, New York (1987)
46. Svelto, O.: Principles of Lasers. Plenum Press, New York (1998)
47. Arecchi, F.T.: Measurement of the statistical distribution of Gaussian and laser sources.
Phys. Rev. Lett. 15, 912-916 (1965)
48. Callen, H.B.: Thermodynamics and an Introduction to Thermostatistics. John Wiley & Sons,
New York (1985).
49. Harris, J.W., Stocker, H.: Handbook of Mathematics and Computational Science. Springer-
Verlag, New York (1998)
50. “I think the next century will be the century of complexity”, Stephen Hawking in San José
Mercury News, Morning Final Edition, January 23 (2000)