Cellular Automata Tutorial
1. Introduction
From the theoretical point of view, Cellular Automata (CA) were introduced in the late 1940s by John von Neumann (von Neumann, 1966; Toffoli, 1987) and Stanislaw Ulam. From the more practical point of view, it was in the late 1960s that John Horton Conway developed the Game of Life (Gardner, 1970; Dewdney, 1989; Dewdney, 1990).
CAs are discrete dynamical systems and are often described as a counterpart to partial differential equations, which describe continuous dynamical systems. Discrete means that space, time and the properties of the automaton can have only a finite, countable number of states. The basic idea is not to describe a complex system "from above" with difficult equations, but to simulate it by the interaction of cells following easy rules. In other words:
Do not describe a complex system with complex equations; let the complexity emerge through the interaction of simple individuals following simple rules.
The first system extensively computed on machines was, as mentioned above, the Game of Life. The game became so popular that a scientific magazine regularly published articles about its "behaviour", and contests were organized to settle certain problems. In the late 1980s interest in CAs rose again as powerful computers became widely available, and today a set of accepted applications in the simulation of dynamical systems exists.
2. A Glance at Dynamical Systems and Chaos or "A new Science named Complexity"
"Nature and Nature's laws lay hid in night: God said, Let Newton be! and all was light."
This proposed epitaph by Alexander Pope for Isaac Newton's grave shows the mental attitude of the 19th century. Since the beginning of the 20th century, the scientific community has had to dismiss (with a heavy heart) the idea that the future state of a physical system can be calculated exactly whenever its current state is known with high precision. In more philosophical terminology this idea is called the Laplacian Demon, referring to an ideal demon which, knowing precisely the current state of a system, must be able to foresee the exact development of that system. In the back of the mind was the belief that small errors in the analysis of the current system would cause only small errors in the extrapolation of its future development.
These ideas had to fall. I want to point out a few selected milestones that show the necessity for the change of paradigm:
Oscar II, the Swedish king, who was very interested in mathematics, donated a prize, which Poincaré won by proving that there is no closed-form mathematical solution to the three-body problem. (Besides: Poincaré's article was itself so chaotic that an already printed and partially sold run had to be reprinted, because his last changes arrived too late; Poincaré had to pay for the reprint, so that more than the sum he had received from Oscar II was spent.)
Edward Lorenz came into contact with chaotic systems while doing weather forecasts. In a few words: trying different models (and trying to speed up the calculations), Lorenz reduced the precision of one parameter, thinking that this would lead to only a slightly reduced accuracy of the result. The astonishing outcome was a completely different result! A conclusion both men drew is that complex dynamical systems often show huge effects in response to small changes in the starting conditions.
Werner Heisenberg supplemented these findings: it is impossible to measure both the position and the momentum of a particle with high precision at the same time. This insight is one of the foundation stones of quantum theory. Putting the results of Lorenz and Poincaré together with Heisenberg's uncertainty relation, one can see clearly that the Laplacian Demon has lost its right to exist. This was a shock for most scientists, and new approaches to the understanding of complex dynamical systems became necessary. With the work of Benoit Mandelbrot the new science of fractal ("broken") dimensions and chaos started; recently a more unified approach called the science of complexity has emerged.
Trying to give a definition of complexity shows the difficult situation this new branch finds itself in: the physicist Seth Lloyd from MIT found that more than 30 different definitions of complexity exist.
As there are many different streams in chaos and complexity research, there is not the space here to discuss these interesting developments in detail; I just felt the necessity to touch on these topics. For more serious introductions I can recommend McCauley (1993) and Ott (1994).
The basic element of a CA is the cell. A cell is a kind of memory element which stores, simply put, states. In the simplest case each cell can have one of the binary states 1 or 0. In more complex simulations the cells can have several different states. (It is even thinkable that each cell has more than one property or attribute, and that each of these properties or attributes can take two or more states.)
These cells are arranged in a spatial web, a lattice. The simplest arrangement is the one-dimensional "lattice", in which all cells are arranged along a line like a string of pearls. The most common CAs are built in one or two dimensions. The one-dimensional CA has the big advantage of being very easy to visualize: the states of one time step are plotted along one dimension, and the dynamic development can be shown along the second dimension, so a flat plot of a one-dimensional CA shows the states from time step 0 to time step n. Consider, in contrast, a two-dimensional CA: a two-dimensional plot can evidently show only the state of a single time step, so visualizing the dynamics of a 2D CA is for that reason more difficult.
For these reasons, and because 1D CAs are generally easier to handle (there is a much smaller set of possible rules compared to 2D CAs, as will be explained in the next section), the one-dimensional CAs were the first to be exploited by theoreticians, and most available theoretical papers deal with properties of 1D CAs.
3.3 Neighbourhoods
Up to now these cells, arranged in a lattice, represent a static state. To introduce dynamics into the system, we have to add rules. The "job" of these rules is to define the state of the cells for the next time step: in cellular automata a rule defines the state of a cell as a function of the neighbourhood of that cell.
In the figure, the red cell is the center cell and the blue cells are the neighbourhood cells; the states of these cells are used to calculate the next state of the (red) center cell according to the defined rule.
As the number of cells in a lattice has to be finite (for practical reasons), one problem arises with the neighbourhoods described above: what to do with the cells at the borders? How much this matters depends on the size of the lattice. To give an example: in a 10x10 lattice about 40% of the cells are border cells, in a 100x100 lattice only about 4%. In any case the problem must be solved, and two solutions are common:
1. Opposite borders of the lattice are "glued together": a one-dimensional "line" thereby becomes a circle, and a two-dimensional lattice becomes a torus.
2. The border cells are mirrored, which results in symmetric border properties.
The more usual method is possibility 1; a small sketch of both follows.
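To make the two border treatments concrete, here is a minimal sketch in Python (the helper names are my own, not part of any CA library): indexing a 1D lattice with wrap-around for possibility 1 and with reflection for possibility 2.

    def neighbour_periodic(cells, i):
        """Possibility 1: opposite borders are glued together (circle/torus)."""
        return cells[i % len(cells)]        # index -1 or n wraps to the far side

    def neighbour_mirrored(cells, i):
        """Possibility 2: border cells are mirrored outward."""
        n = len(cells)
        if i < 0:
            i = -i                          # reflect below the left border
        elif i >= n:
            i = 2 * (n - 1) - i             # reflect beyond the right border
        return cells[i]

    cells = [0, 1, 1, 0, 0]
    print(neighbour_periodic(cells, -1))    # 0: the last cell of the "circle"
    print(neighbour_mirrored(cells, -1))    # 1: the mirrored cells[1]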
Rules can be defined in several ways; three common types are:
1. Explicit rules: every combination of states of the neighbourhood cells is assigned a state of the core cell. E.g. consider a one-dimensional CA: a rule could be "011 -> x0x", meaning that the core cell becomes 0 in the next time step (generation) if the left cell is 0, the right cell is 1 and the core cell itself is 1. Every possible combination has to be described.
2. "Totalistic" rules: the next state of the core cell depends only on the sum of the states of the neighbourhood cells. E.g. if the sum of the adjacent cells is 4, the core cell becomes 1; in all other cases it becomes 0.
3. "Legal" rules: a special kind of totalistic rule. As it is in most cases not desirable to use rules that produce a pattern out of the total zero state (all cells of the automaton are 0), Wolfram defined the so-called legal rules: the subset of rules that produce no 1's from a zero-state lattice.
Case 1 shows why one-dimensional automata are so popular in theoretical studies: if only the left neighbour, the right neighbour and the cell itself are considered by the rules (r=1), there are 256 possible rules in total. There are $2^3 = 8$ possible configurations of three binary cells, and each of these 8 configurations can produce a 1 or a 0 for the center cell in the next generation; hence $2^8 = 256$ rules exist.
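As an illustration (a sketch of mine, not taken from the tutorial itself), the following Python fragment decodes one of these 256 rule numbers into its lookup table and prints each generation as one text row - exactly the row-per-time-step visualization described earlier. Rule 90 is an arbitrary choice.

    def make_rule(number):
        """Decode a rule number 0..255 into a table over the 8 patterns."""
        return {(l, c, r): (number >> (l * 4 + c * 2 + r)) & 1
                for l in (0, 1) for c in (0, 1) for r in (0, 1)}

    def step(cells, rule):
        """One synchronous update of all cells, with periodic borders."""
        n = len(cells)
        return [rule[(cells[i - 1], cells[i], cells[(i + 1) % n])]
                for i in range(n)]

    rule = make_rule(90)
    cells = [0] * 31
    cells[15] = 1                           # a single living cell in the middle
    for _ in range(16):                     # one printed row per time step
        print(''.join('#' if c else '.' for c in cells))
        cells = step(cells, rule)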
In general terms the number of rules can be calculated as $k^{k^n}$, where k is the number of states per cell and n is the number of neighbours (including the core cell itself): there are $k^n$ possible neighbourhood configurations, and each of them may be mapped to any of the k states. For a two-dimensional CA with Moore neighbourhood, 2 states and r=1 it follows that k=2 and n=9, so $2^{2^9} = 2^{512} \approx 1.3 \cdot 10^{154}$ possible rules exist. A comprehensive study of all rules of higher-dimensional automata is thus out of reach.
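The count itself is plain arithmetic and easy to check:

    k, n = 2, 3                 # elementary CA: two states, three cells
    print(k ** (k ** n))        # 256
    k, n = 2, 9                 # 2D CA, Moore neighbourhood, two states
    print(k ** (k ** n))        # 2**512, a 155-digit number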
Another important variation is the reversible cellular automaton. The "only" difference from the CAs described above is that the time development of the system is completely reversible: at any time step the rules allow going forward or backward in time without losing any information.
3.5 Mathematics...
3.5.a Introduction
In this section I want to show possible mathematical formulations of essential parts of cellular automata theory. It is not the purpose of this chapter to develop a comprehensive mathematical framework for the topic. More detailed reviews, some hardcore mathematics and some more theoretically founded introductions, respectively, can be found in:
• Grassberger P., 1984: from the "chaos guru" Grassberger: aspects and correlations of CAs with chaos theory
• Willson S.J., 1984: growth rates and fractal dimensions
Let the lattice consist of cells $z_{i,j}$, where i, j are the column/row indices of the lattice with a maximum extent of n columns and m rows. Let

$N_{i,j} = \{\, z_{k,l} : |k - i| \le 1,\ |l - j| \le 1 \,\}$

be the definition of the Moore neighbourhood. (Other neighbourhood definitions are similar; e.g. for the Extended Moore neighbourhood one has to replace the $\le 1$ with $\le 2$.)
Consider (as it is easier to understand) a one-dimensional cellular automaton with two possible states for each cell, in mathematical terms $z_i \in \{0, 1\}$, and totalistic rules, meaning that the next state of each cell depends only on the sum of the states of the adjacent cells. For the state of cell $z_i$ at the next time step (t+1) one could define a totalistic rule as

$z_i(t+1) = 1$ if $\sum_{z_j \in N_i} z_j(t) \ge s$, and $z_i(t+1) = 0$ otherwise,

meaning that the state of the core cell $z_i$ becomes 1 if the sum over the neighbourhood cells, including the core cell, reaches a threshold s (the concrete value is left open here), and 0 otherwise. Writing this formula for the two-dimensional automaton is not very different and will be done in the examples section describing the Game of Life.
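A minimal sketch of this totalistic rule for the one-dimensional case; the threshold value s = 2 is my example choice, since the text leaves it open:

    def totalistic_step(cells, s=2):
        """Next generation: 1 where the r=1 neighbourhood sum reaches s."""
        n = len(cells)
        return [1 if cells[i - 1] + cells[i] + cells[(i + 1) % n] >= s else 0
                for i in range(n)]

    print(totalistic_step([0, 1, 1, 0, 1, 0]))    # [0, 1, 1, 1, 0, 0]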
A striking restriction of all possible rules to the so-called legal rules was introduced by Wolfram. The idea: from the total zero state - the state in which all cells are 0 - no development may emerge, i.e. no 1 may appear in any cell! Consider a one-dimensional CA with two states and two neighbours on each side: 32 totalistic legal rules exist (out of 1024 possible rules in total).
It is possible to assign a definite number to each possible legal rule. These code numbers can be derived as follows: write the rule as a function f that maps each neighbourhood sum s to the next state $f(s) \in \{0, 1\}$, and set

$C_f = \sum_{s=0}^{5} f(s) \cdot 2^s$.

In the case of the automaton described above one can thus assign to each of the 32 legal rules a definite code $C_f$; since legal rules force $f(0) = 0$, the codes are exactly the even numbers from 0 to 62.
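Enumerating the codes under this scheme (my reading of the coding, chosen to be consistent with the counts above) confirms them directly:

    from itertools import product

    # f(0) = 0 is fixed for legal rules; f(1)..f(5) are free binary choices.
    codes = sorted(sum(bits[s - 1] << s for s in range(1, 6))
                   for bits in product((0, 1), repeat=5))
    print(len(codes))               # 32 legal totalistic rules
    print(codes[0], codes[-1])      # 0 and 62; every code in between is even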
A simple way to obtain reversibility is a second-order rule, in which the new state depends on the current and on the previous generation:

$z(t+1) = \big( f(z(t), N_z(t)) - z(t-1) \big) \bmod k$

(one has to take care that z(t+1) does not leave the defined set of states, e.g. 0..(k-1); for two states this means calculating the difference modulo 2). To "turn round" the direction of time, hence to calculate z(t-1) out of z(t), one simply uses the same formula: $z(t-1) = \big( f(z(t), N_z(t)) - z(t+1) \big) \bmod k$. The function (rule) f is arbitrary, so this is an easy way to create reversible CAs from a broad set of rules.
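A sketch of this second-order construction for two states; reversibility holds for any rule f, which the last line checks:

    def second_order_step(prev, cur, f):
        """z(t+1) = (f applied to z(t)'s neighbourhood - z(t-1)) mod 2."""
        n = len(cur)
        return [(f(cur[i - 1], cur[i], cur[(i + 1) % n]) - prev[i]) % 2
                for i in range(n)]

    f = lambda l, c, r: (l + c + r) % 2    # an arbitrary rule on 3 cells
    prev, cur = [0, 0, 1, 0, 0], [0, 1, 1, 1, 0]
    nxt  = second_order_step(prev, cur, f)
    back = second_order_step(nxt, cur, f)  # the same formula, run backwards
    print(back == prev)                    # True: no information was lost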
3.6 Summary
It has to be noted that the definitions above are of a very conventional type. One should not feel limited to these propositions! A lot of useful extensions have already been proposed, and more are thinkable in general. More about further developments can be found in the Applications section.
A system that is capable of universal computation is able to perform any finite algorithm.
4.2 Classes
"...many (perhaps all) cellular automata fall into four basic behaviour classes.", Stephen
Wolfram (Wolfram, 1984).
In short: class 1 automata evolve to a homogeneous state, class 2 automata to simple separated periodic structures, class 3 automata show chaotic aperiodic patterns, and class 4 automata produce complex patterns of localized, long-lived structures. The first three types can be read as Cantor sets of a certain dimensionality, either with countable or with fractal dimension. Class 3 is the most frequent class, and with increasing k and r the probability of finding class 3 behaviour for an arbitrarily selected rule increases further.
Christopher Langton introduced the term "edge of chaos" with the idea that this is the most "creative" state of a dynamical system: a permanent flickering between order and chaos. In most non-linear systems one can detect a parameter that is responsible for the transition from order to chaos. To give an example, consider a leaking water tap: the parameter defining the state of the system (order vs. chaos) is evidently the flow rate of the water. Langton tried to detect such a parameter for cellular automata.
To cut a long story short, he found an equivalent parameter for CAs: $\lambda$, the probability that a cell will be alive in the next generation, with a meaningful maximum of 0.5 (values over 0.5 would simply describe an inverted system). Let's discuss the possible values:
$\lambda$                          Behaviour
0                                  No development: all cells die in the next step.
near 0                             Not much action: all cells die within a short period of time.
slightly higher                    Typical class 2 behaviour: periodic patterns occur.
0 << $\lambda$ < 0.3               The higher the parameter, the longer the stabilization of the class 2 patterns takes.
around the "critical" point 0.3    Class 4 automata!
near 0.5                           Class 3 behaviour.
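For a rule given as a table, $\lambda$ can simply be read off as the fraction of neighbourhood configurations that map to a living cell. A sketch for elementary (k=2, r=1) rules:

    def langton_lambda(rule_number, n_configs=8):
        """Fraction of the 8 neighbourhood patterns that produce a 1."""
        alive = bin(rule_number & ((1 << n_configs) - 1)).count('1')
        return alive / n_configs

    print(langton_lambda(0))      # 0.0   - everything dies at once
    print(langton_lambda(4))      # 0.125 - very ordered behaviour
    print(langton_lambda(90))     # 0.5   - rule 90, class 3 behaviour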
With this new schema Wolfram's classes receive a new meaning in the theory of CA behaviour. To repeat Langton's important idea: below a certain value (in this case about 0.3) the system is too simple - there is too much order - to be creative; the other extreme is too chaotic for structures to emerge or survive. In his opinion only around this critical point - Langton calls it "the edge of chaos" - is real life possible.
5. Applications
The Game of Life (GOL) was one of the first "applications" to show that cellular automata are capable of producing dynamic patterns and structures. The GOL is played on a two-dimensional lattice with binary cell states, Moore neighbourhood and arbitrary border conditions. To make it vivid: a 1 can be interpreted as a "living" cell, a 0 as a "dead" one. John Horton Conway introduced the set of rules described below:
• a cell that is dead at time step t becomes alive at time t+1 if exactly three of the eight neighbouring cells were alive at time t.
• a cell that is alive at time t dies at time t+1 if at time t fewer than two or more than three of its neighbours are alive.
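A minimal sketch of one GOL generation, directly encoding the two rules above on a torus (periodic borders):

    def life_step(grid):
        """One Game of Life generation on a toroidal lattice."""
        rows, cols = len(grid), len(grid[0])
        new = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                # sum over the eight Moore neighbours, wrapping at the borders
                s = sum(grid[(r + dr) % rows][(c + dc) % cols]
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0))
                if grid[r][c] == 0:
                    new[r][c] = 1 if s == 3 else 0       # birth on exactly 3
                else:
                    new[r][c] = 1 if s in (2, 3) else 0  # survival on 2 or 3
        return new

    # a glider on a small torus; repeated calls move it diagonally
    grid = [[0] * 8 for _ in range(8)]
    for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
        grid[r][c] = 1
    grid = life_step(grid)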
Though these rules seem to be (and are!) rather simple, vivid life can develop under this dynamic. A set of frequently occurring patterns has been described: some flicker infinitely between two states, like the blinkers; some are static, like blocks, snakes and ships; others move over the lattice and vanish into its infinity.
One example is the "famous" glider, whose dynamics are shown in the figure below. There are also so-called glider guns, which fire such gliders for an infinite period of time. A lot of different dynamics have been described and tested, e.g. the pattern that occurs when two gliders collide. Such gliders can be used as signals instead of electric impulses, and "computers" can be built within these rules...
(Figure: Glider)
By the way, a lot of variations of these classic rules and some entirely new rules exist by now. If you are interested in this fascinating game, take a look at Dewdney (1990), Gerhart (1995), Gardner (1970, 1983) and Hayes.
For less playful but more realistic purposes, the dynamics of cellular automata can be used to simulate the behaviour of gas particles. These automata have to be constructed (in contrast to the Game of Life) as reversible automata, since a gas particle can hardly just disappear from the lattice.
The construction of a gas model is similar to the so-called billiard-ball automaton, so the principles are described together in this section. In contrast to the Life automaton, the rules are based on 2x2 blocks of the lattice; this partitioning scheme is also called the Margolus neighbourhood. A selection of rules is shown in the figure, where an "o" stands for a particle (or a billiard ball) and a "." stands for an empty cell.
After applying the rule and proceeding to the next time step, the 2x2 raster is shifted one cell diagonally; hence the movement of a particle takes two time steps. If this description is a little hard to imagine, take a look at the animated example below, which shows the collision of two particles - or billiard balls, if you like. Even this simple HPP (Hardy, de Pazzis, Pomeau) gas model comes close to the results of the Navier-Stokes equation, and still more detailed models have been introduced, such as the FHP (Frisch, Hasslacher, Pomeau) model, which allows particles to move in six directions instead of only four. The FHP gas model can be used, for example, to simulate aerodynamics. For more information read Gerhart (1995), Lim (1988) and Xiao-Guang Wu (1994).
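To make the block scheme concrete, here is a sketch of an HPP-style Margolus update. The concrete block rule is my assumption (the tutorial's own rule diagrams are in its figure): two particles meeting on a block diagonal are rotated by 90 degrees, so a head-on collision scatters them sideways; every other block content is rotated by 180 degrees; and the 2x2 partition is shifted one cell diagonally between time steps. The block rule is its own inverse, so the automaton is reversible, as a gas model must be.

    def block_rule(a, b, c, d):
        """Update one 2x2 block with cells laid out  a b / c d."""
        diagonal_pair = ((a, d) == (1, 1) and (b, c) == (0, 0)) or \
                        ((b, c) == (1, 1) and (a, d) == (0, 0))
        if diagonal_pair:
            return b, d, a, c       # head-on collision: rotate 90 degrees
        return d, c, b, a           # free flight: rotate 180 degrees

    def margolus_step(grid, odd):
        """One time step; on odd steps the partition is shifted by (1,1)."""
        rows, cols = len(grid), len(grid[0])       # both assumed even
        off = 1 if odd else 0
        new = [row[:] for row in grid]
        for r in range(off, rows + off, 2):
            for c in range(off, cols + off, 2):
                rt, rb = r % rows, (r + 1) % rows  # block rows (with wrap)
                cl, cr = c % cols, (c + 1) % cols  # block columns
                a, b = grid[rt][cl], grid[rt][cr]
                cc, d = grid[rb][cl], grid[rb][cr]
                new[rt][cl], new[rt][cr], new[rb][cl], new[rb][cr] = \
                    block_rule(a, b, cc, d)
        return new

Calling margolus_step(grid, t % 2) repeatedly lets a lone particle drift diagonally across the lattice, while two particles meeting head-on scatter at right angles - the behaviour sketched in the collision example above.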
A somewhat different application is the CA Ising model, which can be used to simulate ferromagnetism. Every cell stands for the spin of an elementary magnet, where state 1 may represent an "up" vector and state 0 a "down" vector. The orientation of a spin is variable and depends on the local neighbourhood. CAs can simulate this system, with the additional difficulty that the (spin) energy of the whole system has to be constant. More information can be found in Gerhart (1995), Hayes, Doolen (1991) and Toffoli (1987).
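One standard way to satisfy the constant-energy constraint (my choice for this sketch is the Q2R rule; the tutorial does not name a specific rule) is to flip a spin exactly when two of its four von Neumann neighbours point up, so that the flip costs no energy, and to update the two checkerboard sublattices alternately so that neighbouring spins never flip in the same step:

    def q2r_step(grid, parity):
        """Energy-conserving spin update on one checkerboard sublattice."""
        rows, cols = len(grid), len(grid[0])
        new = [row[:] for row in grid]
        for r in range(rows):
            for c in range(cols):
                if (r + c) % 2 != parity:
                    continue                  # other sublattice waits a step
                up = (grid[(r - 1) % rows][c] + grid[(r + 1) % rows][c] +
                      grid[r][(c - 1) % cols] + grid[r][(c + 1) % cols])
                if up == 2:                   # two up, two down: flip is free
                    new[r][c] = 1 - grid[r][c]
        return new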
5.4 Self-Reproduction
Historically, John von Neumann posed the question whether a self-reproducing machine exists or, in more detail, what kind of logical organisation of an automaton is sufficient for self-reproduction. His approach to the problem was a classical logic-mathematical one. The result of his theoretical deduction is that a universal Turing machine can be embedded in a cellular array, using cells with 29 states and a neighbourhood of 5 cells. This machine is called a universal constructor; it is capable of constructing any machine described on its input tape, and of reproducing the input tape and the construction machinery. To be exact, the system consists of two "layers": the constructing machinery itself and the tape carrying its description.
John von Neumann (1903-1957) died too early to publish the system himself; his friend and colleague Arthur W. Burks did so after his death (Burks, 1968). E. F. Codd (Codd, 1968) showed a way to reduce the complexity of von Neumann's automaton and introduced a machine requiring only 8 states per cell that is still capable of self-reproduction. This machine works in a similar manner to the von Neumann design.
Christopher Langton and others discussed the general problem of how to define the property "self-reproduction" so as to exclude trivial systems without making the definition too restrictive. The key problem is the demand for universal construction, a demand that is not fulfilled by natural systems (think of the patterns on animal furs, for example: the biochemical processes are able to create the patterns on a jaguar's fur, but are surely not capable of universal construction...).
"It seems clear, that we should take the 'self' of 'self-reproduction' seriously, and require of a configuration that the construction of the copy should be actively directed by the configuration itself" (Langton) - and not, in the first order, by the "physical properties", i.e. the rules of the automaton.
In the following figure I want to show the self-reproduction scheme presented by Codd. The basic structure is a so-called data path: a string of core cells (in the figure below in state 1) surrounded by sheath cells (state 2).
2 2 2 2 2 2 2 2 2 2 2 2 2
1 1 1 1 1 1 1 1 1 1 1 1 1
2 2 2 2 2 2 2 2 2 2 2 2 2
Instead of the series of 1's, "command" states can be inserted into the core, e.g. a "7 0" signal. The dynamic behaviour is then:
time t:
2 2 2 2 2 2 2 2 2 2 2 2 2
1 1 1 0 7 1 1 1 1 1 1 1 1
2 2 2 2 2 2 2 2 2 2 2 2 2

time t+1:
2 2 2 2 2 2 2 2 2 2 2 2 2
1 1 1 1 0 7 1 1 1 1 1 1 1
2 2 2 2 2 2 2 2 2 2 2 2 2
Now let's extend this schema with a T-junction. We can see that the moving code sequence is duplicated at the junction point:
time t:
          2 1 2
          2 1 2
          2 1 2
2 2 2 2 2 2 1 2 2 2 2 2 2
1 1 1 1 0 7 1 1 1 1 1 1 1
2 2 2 2 2 2 2 2 2 2 2 2 2

time t+1:
          2 1 2
          2 1 2
          2 1 2
2 2 2 2 2 2 1 2 2 2 2 2 2
1 1 1 1 1 0 7 1 1 1 1 1 1
2 2 2 2 2 2 2 2 2 2 2 2 2

time t+2:
          2 1 2
          2 1 2
          2 1 2
2 2 2 2 2 2 7 2 2 2 2 2 2
1 1 1 1 1 1 0 7 1 1 1 1 1
2 2 2 2 2 2 2 2 2 2 2 2 2

time t+3:
          2 1 2
          2 1 2
          2 7 2
2 2 2 2 2 2 0 2 2 2 2 2 2
1 1 1 1 1 1 1 0 7 1 1 1 1
2 2 2 2 2 2 2 2 2 2 2 2 2
Another important property of this machine is path extension. The next figure shows the extension of a capped data path under the command sequence "7 0 1 1 6 0":
2 2 2 2 2 2 2 2 2 2 2
1 0 6 1 1 0 7 1 1 1 2          (the signals travel towards the capped end)
2 2 2 2 2 2 2 2 2 2 2

  -...->

2 2 2 2 2 2 2 2 2 2 2
1 1 1 1 1 0 6 1 1 1 1          (the cap has been opened)
2 2 2 2 2 2 2 2 2 2 2

  -...->

2 2 2 2 2 2 2 2 2 2 2 2
1 1 1 1 1 1 1 1 1 1 1 2        (the path has grown by one cell)
2 2 2 2 2 2 2 2 2 2 2 2
One further "figure" has to be explained: The periodic emitter. This very important pattern
can be used to produce a timer signal or to store data. With the understanding of the two
previous explained figures the function should be clear:
(Figure: the periodic emitter - a closed, sheathed loop of core cells in which a "0 7" signal circulates; at a T-junction the circulating signal is duplicated, so that a copy of "0 7" is emitted onto the attached data path on every round trip.)
The information is stored indefinitely in the upper cycle, and the path-extension mechanism creates an infinite signal sequence. The code used by Codd has the advantage of being capable of universal construction, but, as explained above, the restrictions described by Langton remain. So Langton simplified the code set, with the result that the new machines are no longer capable of universal construction, but that longer programs can be coded within a cycle. (In Codd's schema there is not enough room in a loop to store the information needed to construct a self-reproducing loop.) For example, instead of the "7 0 - 6 0" code, Langton uses the sequence "7 0".
With the new code set, described in detail in Langton (1984), it is easily possible to create self-reproducing loops and even populations of loops, but the principle stays the same. To summarize the results achieved by Langton's modification:
• the loops created with the new rules are, in contrast to those of the von Neumann or Codd rules, simple structures
• Langton proved that universality is a sufficient, but not a necessary, condition for self-reproduction
• though simple, the loops are capable of non-trivial reproduction, employing both transcription and translation
Differential equations look back on a long history: names like Bernoulli, Euler, Gauß, Lagrange, Laplace and Poisson are closely connected with the development of this mathematical theory. A successful tradition of about 300 years cannot easily be dismissed, particularly because the ascent of modern physics is part of this story. Especially the later-developed partial differential equations are significant for modern physics. In addition, the property of continuity that these equations offer corresponded to the philosophy - the paradigm - of the time.
No wonder that physicists prefer using DEs up to now: physics is written in the language of DEs, and a large body of experience with their symbolic integration exists. This was evidently a big advantage in times when calculations had to be performed by hand. Today, when calculation in mathematics is done by computer and the acceptance of numerical computation has grown over the past 40 to 50 years, we have to examine the applications of differential equations anew. Furthermore, there is the problem that most differential equations (especially the equations of practical relevance!) have no closed-form solution. To solve such differential equations one gives up symbolic calculation and resorts to numerical approximation. Generally the way these equations are solved in practice can be summarized as: continuous model -> discretization -> numerical, i.e. discrete, computation.
One of the well-known CA scientists, Tommaso Toffoli, has contributed to this problem (Toffoli, 1984):
"... in modeling physics with the traditional approach, we start for historical reasons ... with
mathematical machinery that probably has much more than we need, and we have to spend
much effort disabling these 'advanced features' so that we can get our job done in spite of
them."
On the other hand, it is very difficult to produce quantitative results with cellular automata without losing the simplicity and vividness of the rules (which seems to be an important advantage of the method). Likewise, finding suitable rules is often very intuitive but sometimes difficult, and the number of rules is finite. But one has to consider that, though the number of differential equations is infinite, the set of equations we can write down is countable, and the set of equations we can actually solve is very small.
Referring to Niels Bohr's dictum cited at the beginning (though related to the complementarity principle in quantum physics), "Opposites don't contradict, but complement each other", my conclusion is that (for the moment) both methods have their right to exist; each can stimulate the other with its results. For now we should try to exploit the possibilities of discrete numerical methods (such as cellular automata). We should not put our energy into the dispute over which method is better, but into the development of the methods. The future will show which method is superior - or whether either one actually is.
Paul Adrien Maurice Dirac introduced the idea that a physical law has to be of mathematical beauty (Dirac 1963; Neuser, 1996). Though he was probably the first to formulate this, in my opinion the whole concept of modern science rests on this idea. From the philosophical point of view, Ernst Mach spoke of science as a very pragmatic kind of economy of thinking. This idea did not explain the process of thinking and cognition, but it became a kind of basis for the evolutionary theory of cognition (see Mach 1905, 1910; Riedl 1985; Lorenz 1993).
In my opinion it seems a wise conception to try to bring esthetics into science, as humans tend to simplify complex systems for better understanding. Cellular automata can be understood as a way to simulate complex systems by constructing an esthetic - "beautiful" - and easy-to-understand model. But in this whole discussion of beauty we should not mix up models with reality: nobody can finally determine whether nature is simple in terms of human understanding, or whether we are simply so restricted in our cognition and capabilities that we cannot go beyond constructing simplified pictures and views of nature.
In an often-cited example the famous physicist Richard P. Feynman explained, using the watching of a chess game, how complex behaviour can emerge out of a system that is bound to fixed rules: just on the basis of these rules the whole system can evolve, a situation that can be compared to natural systems. Nevertheless I am not very content with this example; too many problems come with this story:
1. Maybe the most important factor: the game of chess has a goal, the "killing" of the hostile king. Transferred to nature, this would lead to a teleological point of view, which most scientists today (me included) would deny.
2. The game is played by two people who act on the basis of the rules but - following point 1 - with a plan in their minds. Even considering computers playing the game does not change this situation significantly.
3. What do simplicity, complexity, order or disorder mean in terms of a chess game? There is no "meta-meaning" in this system; all meaning derives from the goal of winning. This is very different from natural systems, where such a "meta-meaning" can often be found by looking at the next layer: e.g. the meta-meaning of atoms can be understood in terms of molecules, and so on.
I believe we have to keep these points apart, otherwise we run the danger of creating specious arguments.
7.3 Modeling
From my (radical constructivist) point of view, one could call the discussion of whether a continuous model (differential equation) is superior to a fundamentally discrete system (cellular automaton) obsolete, and take a simple pragmatic view of the dispute. The discussion of model philosophy should take place, but for practical work it is not necessary: in some cases cellular automata show high performance in modeling, in other cases one will love the high precision of the quantitative estimates resulting from differential equations. So we can reduce the question to the parallelism of model and nature. This problem, on the other hand, is essential and may not be ignored.
Suppose a cellular automata simulation gives good results, corresponding with the analysis of the "nature" of some system. May we conclude that the "real" system behaves like our automaton? I guess most readers would spontaneously answer: "Evidently not!" But do we ask this question of other physical models as well? When some differential equation, or more generally some law deduced from quantum mechanics, correlates with the observations made, the scientific community concludes that the theory is correct.
By the way: the machines used to make these observations - e.g. scanning tunneling microscopes or the detectors at CERN - are built on the basis of the very model whose deductions we want to prove. I do not want to discredit quantum mechanics - I would not even have the genius to do so - but I believe we should ask more often whether we are perhaps constructing circular proofs!
Let's keep this in mind when we enthusiasts of new methods - in this case cellular automata - interpret results (see also Popper, 1934). If we try to achieve too much in one step, we risk discrediting the whole method, as conventional scientists will call us (with some right) unserious.
My own opinion is that the paradigm shift at the beginning of this century towards a view into a "discrete world" is not completed yet. In many areas we do "modern" - discrete - science with "old" - continuous - methods. No wonder, for the continuous methods have a head start of 250 years. But the second part of the shift - the change in mathematical modelling and scientific method - only began with the radical changes the microcomputers brought. As I believe, with Thomas Kuhn (Kuhn, 1970), that this is common for shifts of paradigms, we will have to be patient a while yet.