
Published in the special »GREGORY BATESON MEMORIAL« issue of KYBERNETES, vol. 36, issue 7/8, 2007, p. 1000-1011
http://www.emeraldinsight.com/Insight/viewContainer.do?containerType=Journal&containerId=357

Eberhard von Goldammer a) & Joachim Paul b) [*]

»THE LOGICAL CATEGORIES OF LEARNING AND COMMUNICATION« — reconsidered from a polycontextural point of view
— learning in machines and living systems —
Abstract

Purpose—Bateson's model of classifying different types of learning will be analyzed from a logical and technical point of view. While learning 0 has been realized in chess-playing computers, learning I turns out to be the basic concept of today's artificial neural nets (ANN). All models of ANN are basically (non-linear) data filters, which is the idea behind simple, behavioristic input-output models.
Design/methodology/approach—We will discuss technical systems designed on the concepts of learning 0 and learning I, and we will demonstrate that these models do not have an environment, i.e., they are non-cognitive and therefore "non-learning" systems.
Findings—Models based on Bateson's category of learning II differ fundamentally from learning 0 and I. They can no longer be modeled on the basis of classical (mono-contextural) logics. Technical artifacts which belong to this category have to be able to change their algorithms (behavior) by their own effort. Learning II turns out to be a process which cannot be described or modeled on a sequential time axis. Learning II as a process belongs to the category of (parallel interwoven) heterarchical-hierarchical process-structures.
Originality/value—In order to model this kind of process-structure the polycontextural theory has to be used—a theory which was introduced by the German-American philosopher and logician Gotthard Günther (1900-1984) and has been further developed by Rudolf Kaehr and others.
Keywords: machine learning, polycontexturality, standpoint dependency
Paper type—conceptual paper

Introduction
Bateson himself summarizes his logical categories of learning as follows (Bateson, 1972, p. 293):
Zero learning is characterized by specificity of response, which—right or wrong—is not subject to correction.
Learning I is a change in specificity of response by correction of errors of choice within a set of alternatives.
Learning II is change in the process of Learning I, e.g., a corrective change in the set of alternatives from which choice is made, or it is a change in how the sequence of experience is punctuated.
Learning III is a change in the process of Learning II, e.g., a corrective change in the system of sets of alternatives from which choice is made. (We shall see later that to demand this level of performance of some men and some mammals is sometimes pathogenic.)
Learning IV would be a change in Learning III, but probably does not occur in any adult living organism on this earth. Evolutionary process has, however, created organisms whose ontogeny brings them to Level III. The combination of phylogenesis with ontogenesis, in fact, achieves Level IV.

* a) University of Applied Sciences Dortmund, Dortmund, Germany, eMail: [email protected]


b) Medienzentrum NRW, Düsseldorf, Germany
In the following we will discuss some of the questions already raised by Bateson him-
self:
"… The question is not, "Can machines learn?" but what level or order of learning
does a given machine achieve?" (Bateson, 1972, p. 284).
Nearly half a century later the answer is very simple: zero learning has been realized, for example, by Deep Blue, the chess-playing computer developed by IBM which defeated the world champion Garry Kasparov in 1997. This event obviously affected modern economists so much that they still believe that von Neumann's game theory—which forms the basis of all algorithms underlying models such as Deep Blue—is the theoretical state of the art for modelling and understanding economic behavior.[1] From an epistemological point of view all these game models belong to Bateson's category of zero learning. Phenomena which approach this degree of simplicity occur in various contexts such as
"… in simple electronic circuits, where the circuit structure is not itself subject to
change resulting from the passage of impulses within the circuit—i.e., where the
causal links between »stimulus« and »response« are as the engineers say »soldered
in«."(Bateson, 1972, p. 284).
Today one could argue differently, e.g.: phenomena which approach this degree of simplicity occur in algorithms where neither the instructions nor the data are subject to changes resulting from the passage through the set of instructions (the program)—i.e., where the causal links between »stimulus« and »response« are pre-determined by the designer of the program.
In other words, if such a game is repeated with the same moves, the result of the game will always be the same, i.e., the machine or the (zero-order) algorithm does not learn anything at all.
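To make the zero-learning case concrete, here is a minimal sketch (ours, not from the paper; the opening-book entries are hypothetical) of a player whose stimulus-response links are »soldered in«:

```python
# A minimal sketch (not from the paper) of zero learning: the
# stimulus-response links are fixed ("soldered in") by the designer,
# so replaying the same moves always yields the same game.

def zero_learning_player(stimulus, response_table):
    """Return the pre-programmed response; the table never changes."""
    return response_table[stimulus]   # causal link fixed by the designer

RESPONSES = {"e4": "e5", "Nf3": "Nc6", "Bb5": "a6"}   # hypothetical opening book

game1 = [zero_learning_player(m, RESPONSES) for m in ["e4", "Nf3", "Bb5"]]
game2 = [zero_learning_player(m, RESPONSES) for m in ["e4", "Nf3", "Bb5"]]
assert game1 == game2   # same moves, same result: nothing has been learned
```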
Learning I has also been realized technically: the best-known examples are the models of artificial neural nets. In analogy to zero-order learning, one could describe first-order learning as a "process where the data—but not the instructions(!)—of a learning algorithm are subject to changes resulting from the passage through the program and where the causal links between »stimulus« and »response« are again pre-determined by the programmer". From a conceptual point of view these models are digital (non-linear) data filters. The written-down sequence of learning steps appears formally as a Markov chain and is therefore completely determined. Other models which belong to this category of "learning" are genetic algorithms, where the data are adapted to a given fitness function by trial and error.
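The following sketch (ours, not from the paper) illustrates this reading of learning I with a single perceptron: the weights (the data) change through error correction, while the update rule (the instructions) stays exactly as the programmer fixed it.

```python
# A minimal sketch (our illustration, not from the paper) of "learning I":
# only the data (the weights) change with experience; the instructions,
# i.e. the error-correction rule fixed by the programmer, never do.

def train_perceptron(samples, lr=0.1, epochs=50):
    """Correct errors of choice within a fixed set of alternatives (0 or 1)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # error of choice
            w[0] += lr * err * x1         # the data change ...
            w[1] += lr * err * x2
            b += lr * err                 # ... the algorithm does not
    return w, b

# learning the AND function by error correction
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train_perceptron(AND))
```

Seen this way, the trained net is a (non-linear) data filter: the sequence of weight updates is fully determined by the samples and the fixed rule, just as a Markov chain is determined by its transition table.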
There is another important argument which was pointed out by Bateson in connection
with learning I:
"Note that in all cases of Learning I, there is in our description an assumption about
the »context«. This assumption must be made explicit. The definition of Learning I as-
sumes that the buzzer (the stimulus) is somehow the »same« at Time 1 and at Time 2.
And this assumption of »sameness« must also delimit the »context«, which must
(theoretically) be the same at both times. It follows that the events which occurred at
Time 1 are not, in our description, included in our definition of the context at Time 2,
because to include them would at once create a gross difference between "context at
Time 1" and »context at Time 2«. (To paraphrase Heraclitus: »No man can go to bed
with the same girl for the first time twice.«)
The conventional assumption that context can be repeated, at least in some cases, is
one which the writer adopts in this essay as a cornerstone of the thesis that the study of behavior must be ordered according to the Theory of Logical Types. Without the
assumption of repeatable context (and the hypothesis that for the organisms which we
study the sequence of experience is really somehow punctuated in this manner), it
would follow that all »learning« would be of one type: namely, all would be zero
learning." (Bateson, 1972, p. 288)
All technical models which are known and have been realized today fulfill the condition of a repeatable context. The reason is very simple: all technical models have one feature in common—they have no environment and hence no changing contexts. For example: a robot working at an assembly line in a car manufacturing process only has an environment from the standpoint of an observer of both the robot and the assembly line. From a "standpoint of the robot", however, the robot does not have an environment. Such a robot does not even have its own standpoint. All the "environment" which is important for the functioning of the robot, such as the screws or the car body where the screws have to be fixed, is part of the robot's program and therefore belongs to the robot and not to its environment—these robots have neither an environment nor a standpoint of their own.
Standpoint dependency is a necessity for modeling situations with changing contexts! Classical mathematics and logic—which form the basis for any technical construct today—do not allow the modeling of standpoint dependencies. Or, to phrase it in a somewhat shortened way: as far as mathematics is concerned, the result of 2 × 2 does not depend on standpoints, and by analogy all the classical standard and non-standard logic conceptions are non-standpoint-dependent calculi—or, to put it in the terminology of Gotthard Günther, they are mono-contextural calculi.

Learning II, III, IV or … the Tower of Babel


As an example where learning II has been recorded, Bateson refers to cases such as "reversal learning":
"Typically in these experiments the subject is first taught a binary discrimination.
When this has been learned to criterion, the meaning of the stimuli is reversed. If X
initially "meant" R1, and Y initially meant R2, then after reversal X comes to mean R2 ,
and Y comes to mean R1. Again the trials are run to criterion when again the meanings
are reversed. In these experiments, the crucial question is: Does the subject learn
about the reversal? I.e., after a series of reversals, does the subject reach criterion in
fewer trials than he did at the beginning of the series?" (Bateson, 1972, p. 296)
From the two patterns in Figure 1 the process of reversal learning can easily be retraced: any neural net model can be adapted to pattern 1. If the net algorithm has been trained successfully on pattern 1, then pattern 2 is offered and the adaptation process starts again until the net algorithm is adapted to pattern 2. Thereafter the adaptation to pattern 1 begins again, and so forth. The crucial question is: What does the net algorithm learn (by its own effort) from the reversal of the task? For learning algorithms that belong to the category of learning II one has to expect a shortening of the learning time for the two processes of adaptation. For the models of artificial neural nets, however, nobody would expect, and nobody has ever observed, a shortening of the so-called learning process upon reversal of the two adaptation processes.
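A hedged sketch of such a reversal experiment (our illustration, not from the paper): a one-neuron net is trained to criterion, the meanings of the stimuli are reversed, and the trials-to-criterion are recorded. Learning II would show the counts shrinking over successive reversals; an error-correcting net of this kind shows no such shortening.

```python
# Sketch (ours, not from the paper) of the reversal-learning experiment
# with a one-neuron error-correcting net. If the net "learned about the
# reversal" (learning II), the epochs-to-criterion would shrink over the
# series; for this kind of data filter they do not.

def epochs_to_criterion(samples, w, b, lr=0.2, max_epochs=1000):
    """Train until every sample is classified correctly; count the epochs."""
    for epoch in range(1, max_epochs + 1):
        errors = 0
        for x, target in samples:
            out = 1 if w * x + b > 0 else 0
            err = target - out
            if err:
                errors += 1
                w += lr * err * x
                b += lr * err
        if errors == 0:
            return epoch, w, b
    return max_epochs, w, b

pattern = [(-1.0, 0), (1.0, 1)]                  # X "means" R1, Y "means" R2
w, b = 0.0, 0.0
for reversal in range(6):
    epochs, w, b = epochs_to_criterion(pattern, w, b)
    print(f"reversal {reversal}: {epochs} epochs to criterion")
    pattern = [(x, 1 - t) for x, t in pattern]   # reverse the meanings
```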

The question arises:
a) What is the difference between learning I and learning II from an algorithmic point of view?
And furthermore one has to ask:
b) Why "from an algorithmic point of view" and not from the view of logical types?
The second question has already been answered by Bateson himself, because …
"… the word »learning« undoubtedly denotes change of some kind. To say what kind of change is a delicate matter…. Change denotes process. But processes are themselves subject to »change« …" (Bateson, 1972, p. 283)

[Figure 1: patterns for "reversal learning"]

and
" …the world of action, experience, organization, and learning cannot be completely
mapped onto a model which excludes propositions about the relation between classes
of different logical type…" (Bateson, 1972, p.307)
Processes and actions can only be modeled algorithmically if the intention is to implement the model in a machine (cf. Kaehr, 2003).
An answer to the first question is much more difficult and has been given by the German-American philosopher and logician Gotthard Günther, who introduced the Theory of Polycontexturality into the life sciences (Günther, 1976, 1979a, 1980). Before we trace the main idea of this theory we have to take a short look at Bateson's Notes on Hierarchies (Bateson, 1972, p. 307):

"If C1 is a class of propositions, and C2 is a class of propositions about the members of C1; C3 then being a class of propositions about the members of C2; how then shall we classify propositions about the relation between these classes? For example, the proposition »As members of C1 are to members of C2, so members of C2 are to members of C3« cannot be classified within the unbranching ladder of types.
The whole of this essay is built upon the premise that the relation between C2 and C3 can be compared with the relation between C1 and C2. I have again and again taken a stance to the side of my ladder of logical types to discuss the structure of this ladder. The essay is therefore itself an example of the fact that the ladder is not unbranching.
It follows that a next task will be to look for examples of learning which cannot be classified in terms of my hierarchy of learning but which fall to the side of this hierarchy as learning about the relation between steps of the hierarchy." [emphasis by the authors]

[Figure 2: The hierarchy of logical types C1, C2, C3, C4, … (see text), arranged on level_1, level_2, level_3, et cetera, with the transitions between the levels marked ?1, ?2, …]

Figure 2 gives an example of Bateson's hierarchy of different types (classes). Based on Plato's pyramid of Diairesis, a physical object can be defined through a generic term (genus proximum) and specific attributes (differentia specifica), such as information on the weight, length, material, or shape. Each entity exists as something in particular and it has characteristics that are a part of what it is. In other words, Aristotle's Law of Identity strictly holds, i.e., everything that exists has a specific nature. What the pyramid of different classes (or types) in Figure 2 depicts is the structural pattern of an absolute hierarchy where all elements are linked by a common measure. This is the well-known world of the natural sciences, which—from an epistemological point of view—belongs to an ontology of identity. In other words, Bateson's categories of learning describe the results of different processes with attributes observed during different learning situations. From a technical point of view, however, the central question is: How can we model the process of learning II? What about the transitions between the different levels of logical types? How can these transitions be modeled in a formal mathematical way in order to develop and to implement algorithms which are able to learn in the sense of learning II by their own efforts?

Circles and '(un)branching ladders' or … »from classification to process« [Bateson, 1979, p. 204]
For an analysis of these questions, we will introduce the following symbol for the order relation which exists between an operator O and its operand O:

T(O) ⟶ F(O)   (1)   (⟶ : order relation)

Relation (1) also stands for a logical domain—as it is given, e.g., in Figure 2 by the domain labeled "level_1"—and T and F stand for true and false (or 1, 0), where an order relation exists between T and F through the rules, the syntax, of the logic. A logical domain may be realized technically, for example, by the model of a Turing machine (TM), i.e., by a computer which strictly works according to the rules of classical logic. Günther introduced the notion contexture for a logical domain, i.e., the model of the Turing machine, or today's computer, is a mono-contextural logical machine.
In the following we describe learning I by the relation O(O), which stands for a hetero-referential process as given by relation (1). Since an operator is always of logically higher type than its operand, C2 in Figure 2 may be considered as an operator and C1 as the corresponding operand. In order to describe learning II as a process, we have to ask for relations that correspond to the transitions marked in Figure 2, for example, by ?1 or ?2. Since classical standard logic and all non-standard derivatives as well as mathematics are mono-contextural theories, we are faced with a well-known fundamental problem—the problem of self-referentiality—a problem which is depicted by the graphical metaphor in Figure 3.
[Figure 3: The problem of self-reference from a monocontextural point of view. (a) the unmediated ladder of logical levels, O(O) repeated on level 1, level 2, level 3, …; (b) a self-referential operator Osr which operates on an operand O within an environment E and thereby refers to it"self"; (c) the interpretation, where operator O and operand O are distinguished only by typeface in the original:

∀O, ∀O: O(O) = ~O(O)   (1)
∃Osr, ∀O: Osr(O) = ~O(Osr)   (2)
interpreted as: Osr(Osr) = ~O(Osr)   (3)

~ : negation, ∀ : universal quantifier, ∃ : existential quantifier]

For any modeling of cognitive-volitive processes one has to distinguish logically between the picture and the image of the picture, or between the object and the image of the object. This has been achieved in Bateson's work—describing the results of learning processes—by different logical categories, which leads to the hierarchy of logical types (logical domains) shown in Figure 2. However, there are no logical operators which allow modeling between the different logical types (domains)—operators which become necessary if the process of learning has to be modeled and not only the result, the content, of a learning process.
Figure 3a shows the different logical types of Figure 2 using the symbolic metaphor of relation (1). The crucial point of Figure 3a is to understand that the different logical domains are not mediated; they are isolated, i.e., there are no logical operators that allow transitions between the different logical types (domains) and their elements. As a matter of fact, any system of n logical types can always be reduced (type reduction) to only one logical type, whereby the different processes which are the object of modeling are homogenized into sequential process-structures that always obey the transitivity law. Therefore these processes are always hierarchically—and never heterarchically—structured, and any attempt at a formal logical description of cognitive-volitive processes ends up within the thicket of notorious circuli vitiosi (see also: Günther, 1979a; Kaehr & von Goldammer, 1988, 1989).
Figure 3b represents the process of hetero-referencing from the operator (cognitive system) to the operand (object), a process in which an image of the object is created, from which the cognitive system refers to itself in order to make a distinction between itself and its environment. This is a self-referential process. From a logical point of view, this process is a vicious circle, i.e., a logical antinomy. This is shown in Figure 3c. Relation (1) in Figure 3c expresses the fact that an order relation exists between an operator and its operand, i.e., the operand cannot become an operator of it"self". Relation (2) refers to the hetero-referential aspect of the process and relation (3) to the self-referential aspect. Needless to say, relation (3) is in contradiction with the self-referential situation in Figure 3b, and within this context it can be seen that self-referentiality cannot be modeled by recursion, as is frequently suggested by artificial intelligence scientists. In other words, self-reference cannot be modeled without antinomies and ambiguities within the linguistic frame of classical standard logic—classical standard logic reveals a basic weakness as an intellectual tool for modeling self-referential processes.
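The point can be made tangible with a toy sketch (ours, not from the paper): replacing self-reference by recursion only reproduces the hierarchy of levels, an infinite regress of images, and never yields the simultaneity of operator and operand.

```python
# Toy sketch (ours, not from the paper): modeling "the system's image of
# itself" by recursion produces an endless ladder of ever-lower-level
# images, i.e. a hierarchy, rather than genuine self-reference.

import sys

def self_image(system, level=1):
    """Every image that is to contain the system's self-image spawns
    yet another image one level down; the regress never closes."""
    return {"level": level, "image": self_image(system, level + 1)}

sys.setrecursionlimit(100)
try:
    self_image("cognitive system")
except RecursionError:
    print("infinite regress: recursion climbs the ladder, it never closes the circle")
```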

[Figure 4: Self-reference from a polycontextural point of view. The operator-operand pairs Oi+2(Oi+1), Oi+1(Oi), Oi(Oi-1), … belong to different logical domains; the operand of one domain and the operator of the next are linked by an exchange relation (↔), while ⟶ denotes the order relation within a domain. For more details see von Goldammer, E. & Kaehr, R., 1990.]
In Figure 4 we have introduced a symbol for an exchange relation between an operator Oi+1 and an operand Oi+1 which belong to different logical domains. Within the mono-contextural logical world no such exchange relation exists. The logical domains are mediated provided the exchange relation is based on logical operations between different logical domains (contextures). In other words, in Figure 4 different logical places come into play—a situation which has no meaning in any classical standard or non-standard logical conception. Since we are not restricted to a limited number of contextures, Figure 4 represents an ensemble of any number of mediated contextures. Obviously this is the ladder by which to escape the eye—the black hole, the abyss—of circularities. The question is: how can we work with such a ladder?

Mnemonic Traces or … »mental process requires circular (or more complex) chains of determination« [Bateson, 1979, p. 114]
[Figure 5: Günther's "heterarchical" circles. (a) and (b): the two opposite cyclic orders of the three standpoints 1, 2, 3; (c) the transitive case: 1 → 2 → 3 implies 1 → 3.]

In order to demonstrate the meaning of the mediated contextures in Figure 4, a decision-making process between three different standpoints as shown in Figure 5 will be discussed. Figure 5c reflects the transitivity law and will not be discussed further since it is self-explanatory. The arrows in Figure 5a are interpreted as follows: standpoint S2 is preferred to standpoint S1, S3 is preferred to S2, and S1 is preferred to S3; accordingly in Figure 5b. Although the transitivity law does not hold for either of the processes represented by Figures 5a and 5b, they do not symbolize a decision process if they are considered separately. The reason is very simple: in both cases a decision has already been made in advance, i.e., the three standpoints have already been arranged according to some priorities—but this should be the result of a decision process and cannot be taken for granted. To put it in other words: any modeling of a real decision process requires coequal, equivalent standpoints during the decision-making process. This can only be achieved in the symbolic representation of Figures 5a and 5b if both processes represented by the two circles are thought in parallel and simultaneously. But this is impossible, as it was nicely described in Bateson's metalogue "How much do you know?" (Bateson, 1972, p. 21):
D: I wanted to find out if I could think two thoughts at the same time. So I thought
"It's summer" and I thought "It's winter." And then I tried to think the two
thoughts together.
F: Yes?
D: But I found I wasn't having two thoughts. I was only having one thought about
having two thoughts.
F: Sure, that's just it. You can't mix thoughts, you can only combine them. And in
the end, that means you can't count them. Because counting is really only adding
things together. And you mostly can't do that.
Not only is it impossible to think two thoughts at the same time; one cannot even observe or measure (directly or indirectly) a decision-making process(-structure). In other words: it is in general impossible to observe or to measure mental processes such as thinking or learning. What we can observe or experience are the actions, i.e., the "products", the content of these processes, but not the processes themselves.[2]
Why is this so?
The answer can be given with reference to McCulloch's still widely undiscovered paper "A heterarchy of values …" (McCulloch, 1945): the structure of all mental processes is an interplay of heterarchically and hierarchically interwoven components. A heterarchical process-structure is defined as a process to which the transitivity law can no longer be applied; therefore these process-structures cannot be mapped sequentially, i.e., they can never be measured! To express it inversely: for any measurement the transitivity law strictly holds; its validity is—so to speak—a necessity for all experimental processes of measurement.
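A small sketch (ours, not from the paper) makes this concrete for the circles of Figure 5: the cyclic preferences violate transitivity, so no linear ranking, and hence no sequential mapping, exists for them.

```python
# Sketch (ours, not from the paper): the preference circle of Figure 5a
# violates the transitivity law, so no hierarchy (linear ranking) of the
# three standpoints is compatible with it.

from itertools import permutations

def is_transitive(prefers):
    """Check that a > b and b > c implies a > c for all pairs in the relation."""
    return all((a, c) in prefers
               for a, b in prefers for b2, c in prefers if b == b2)

circle_a = {(2, 1), (3, 2), (1, 3)}   # Figure 5a: S2 > S1, S3 > S2, S1 > S3
print(is_transitive(circle_a))        # False -- the circle is heterarchical

# and no ordering of the standpoints respects all three preferences:
rankings = [p for p in permutations([1, 2, 3])
            if all(p.index(x) < p.index(y) for x, y in circle_a)]
print(rankings)                       # [] -- no hierarchy fits the circle
```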
It was Gotthard Günther who provided a basis for modeling such process-structures. His polycontextural theory not only contains a many-placed logic but also a theory of heterarchical numbers (Günther, 1976) and the pre-logical theory of morphogrammatic as well as the pre-semiotical theory of kenogrammatic (cf. Kaehr, 2003, 2004).
In the following we will demonstrate, in a short and somewhat simplified way, how a decision-making process can be rationalized within the language of Günther's polycontextural theory, a theory which has to be considered as the basis for a standpoint-dependent systems theory.
Again three standpoints are considered, which will be indexed by natural numbers. Each number stands not only for a standpoint but also for a logical place which represents a standpoint by at least one contexture, i.e., a logical domain.[3] The following chain of negations, which is very often taken as an example in the work of Gotthard Günther, will be interpreted:
p = N1,2,1,2,1,2 p   (2a)

and

p = N2,1,2,1,2,1 p   (2b)

where p = N1,2,1,2,1,2 p corresponds to

p = N1(N2(N1(N2(N1(N2 p))))) =def N1N2N1N2N1N2 p   (3a)

and p = N2,1,2,1,2,1 p corresponds to

p = N2(N1(N2(N1(N2(N1 p))))) =def N2N1N2N1N2N1 p   (3b)

The individual (global) negations in (2) are executed from right to left. The negations N1 and N2 are defined according to tables (4a) and (4b):

 p | N1p        p | N2p
 1 |  2         1 |  1
 2 |  1         2 |  3
 3 |  3         3 |  2
   (4a)           (4b)

The proposition variable p will be considered from a standpoint S1 in relation to standpoint S2 or any other standpoint. In other words, the (global) negations have to be interpreted as inter-contextural negations, i.e., a contexture is negated or rejected in relation to another contexture. (3b) can be interpreted as given in the following steps:
step 1: p = N2 N1 N2 N1 N2 [N1] p   (the bracketed negation is the one being executed)
If the proposition p is considered from S1 in relation to S2, standpoint S1 can be designated or not designated, i.e., negated or rejected. A designation (affirmation) of S1 would be the end of the inter-contextural negation process, i.e., the logical domain (contexture) corresponding to S1 would have been chosen. If, however, S1 in relation to S2 is not designated – which is the case in our example – then an exchange of the standpoint from S1 to S2 occurs, as indicated in table (4a). Since every standpoint is characterized by at least one logical domain (contexture), this process corresponds to an exchange of standpoints. From a logical point of view it is an inter-contextural (or discontextural) process.
step 2: p = N2 N1 N2 N1 [N2] N1 p
Now the proposition p will be considered from standpoint S2 in relation to S3. Again the negation (or rejection) of S2 in relation to S3 is of interest, because an affirmation (or designation) of S2 would terminate the inter-contextural (discontextural) process. According to table (4b) an exchange from standpoint S2 to S3 results.
step 3: p = N2 N1 N2 [N1] N2 N1 p
Now the proposition p will be considered from S3 in relation to S1/S2 and no exchange of the standpoint occurs (cf. table 4a).
step 4: p = N2 N1 [N2] N1 N2 N1 p
Considering the proposition from S3 in relation to S2 causes an exchange from S3 to S2 (cf. table 4b).
step 5: p = N2 [N1] N2 N1 N2 N1 p
Within the range of step 5 the proposition p will be considered from standpoint S2 in relation to S1 (inversion of step 1). An exchange from S2 to S1 takes place.
step 6: p = [N2] N1 N2 N1 N2 N1 p
Step 6 can be considered as the inversion of step 3, i.e., the proposition p is considered from S1 in relation to S3/S2 and no exchange of the standpoint occurs (cf. table 4b).
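A minimal sketch (ours, assuming the simplified natural-number indexing used above) runs this negation cycle mechanically: tables (4a) and (4b) become mappings which are applied from right to left, and the visited standpoints form the »history of reflection« discussed below.

```python
# Sketch (ours, using the simplified natural-number indexing from above):
# tables (4a) and (4b) as mappings, applied from right to left as in (3b).

N1 = {1: 2, 2: 1, 3: 3}   # table (4a)
N2 = {1: 1, 2: 3, 3: 2}   # table (4b)

def negation_chain(start, chain):
    """Apply the (global) negations right to left; record the standpoints."""
    history = [start]
    for negation in reversed(chain):
        history.append(negation[history[-1]])
    return history

# chain (3b): p = N2 N1 N2 N1 N2 N1 p, considered first from standpoint S1
print(negation_chain(1, [N2, N1, N2, N1, N2, N1]))
# -> [1, 2, 3, 3, 2, 1, 1]: the exchanges of steps 1-6, ending again at S1
```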

At the end of such a negation circle the proposition p has a "history of reflection", as Günther calls it in the foreword of the second volume of his Beiträge… (Günther, 1979). The classical negation (~) never gains such a "history of reflection". The inter-contextural transitions (the rejections within the negation chain) correspond to the cognitive aspects of a cognitive-volitive process, while the designation of a standpoint, of a contexture, corresponds to the volitive aspects. For a more detailed discussion of cognition and volition the reader is referred to the literature, especially to Günther's "Cognition and Volition" (Günther, 1979a).
UPSHOT: Classical standard logic as well as all (classical) non-standard logics such as modal logic, probability logic, fuzzy logic, or paraconsistent logics are truth-definite in the sense of an ontology of identity ("something is or is not"—any third is excluded—cf. the example above). Günther calls the sciences or languages based upon these truth-definite logics positive sciences or languages. All natural languages as well as artificial languages like the classical standard and non-standard logics or mathematics are positive languages. Positive languages are characterized by their (intra-contextural) negations, which always indirectly imply the corresponding positive proposition.
Günther's negative language (Günther, 1979b) can be considered as complementary to the artificial positive languages. The negative language is characterized by a variety of negations (negation chains or negation circles) which operate inter-contexturally (not intra-contexturally) and which are mutually mediated. Therefore any inter-contextural negation always refers to at least one further contexture, i.e., any rejection (negation) of a contexture (standpoint or logical place) is always related to at least one further contexture, as was demonstrated above (steps 1 to 6). In other words, a contexture (standpoint or logical place) can only be negated (rejected) in relation to (at least) one further contexture. This corresponds to a process (not a state!) in which the positive does not appear before a contexture (standpoint or logical place) has been designated in the sense of an affirmation. From the view of classical logic these negations are meaningless, since all classical standard and non-standard logics are mono-contextural, i.e., only one contexture (one standpoint, one logical place) exists, which can be located only outside but not within the contexture.

Résumé or … »time is out of joint« [Bateson, 1979, p. 231]


Learning only occurs in systems with cognitive-volitive abilities. Until today no such technical devices have ever been constructed. On the basis of the classical ontology of identity one will never be able to model the cognitive-volitive abilities of living systems in a formal mathematical way—this is, so to speak, the blind spot of modern brain research and of modern artificial intelligence research.
Today's situation is dominated by a scientific mainstream of brain and artificial intelligence research that has analyzed neither McCulloch's A Heterarchy of Values… nor Bateson's Logical Categories of Learning…, and—most notably—not the scientific-logical consequences of these nearly half-century-old basic studies, which—from a methodological point of view—are still unexcelled. With the fundamental work of Gotthard Günther the situation is even worse: it has been pointedly ignored by the scientific mainstream of artificial intelligence and brain research. And, strangely enough, even the community of second-order cybernetics has been unconcerned with Günther's theoretical work and his philosophy.

Links and Further Readings


A complete bibliography of Gotthard Günther's work can be found at the electronic journal:
< http://www.vordenker.de > (Paul, J., ed.)
URL: http://www.vordenker.de/ggphilosophy/gg_bibliographie.htm
Fundamental theoretical studies of the post-Güntherian era on poly-logic, polycontexturality, morpho-
and kenogrammatic by Rudolf Kaehr can be found at: < http://www.thinkartlab.com > (Kaehr, R., ed.)

Notes
1 In 2005 the economists Robert J. Aumann and Thomas C. Schelling received the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel "for having enhanced our understanding of conflict and cooperation through game-theory analysis".
2 This is, so to speak, the quintessence of Varela's closure thesis (Varela, 1979)—Closure Thesis: "Every autonomous system is organizationally closed ... organizational closure is to describe a system with no input and no output ..."
3 For an implementation, Günther's heterarchically structured numbers, which he called dialectical or keno-numbers, have to be used. This is of importance in the present context because the heterarchically structured system of numbers prevents any formation of a hierarchy of logical types. Using natural numbers instead of keno-numbers is only one of the simplifications which we use in the presented example. We also have not mentioned the proemial relationship and its importance in Günther's Theory of Polycontexturality. In his essay Strukturelle Minimalbedingungen einer Theorie des objektiven Geistes als Einheit der Geschichte (Günther, 1980, Band 3, p. 136-182) Günther describes the logical complexity underlying any formal description of mental processes. Both Günther's morphogrammatic, which is a pre-logical theory, and his kenogrammatic, which is a pre-semiotical theory, also cannot be discussed within such a short report. For more details the reader is referred to the literature (cf. Kaehr, 2004).

References
All articles marked by (*) or (#) are available at: www.vordenker.de or www.thinkartlab.com

Bateson, G. (1972), Steps to an Ecology of Mind, Intertext Books, London — International Textbook Company Ltd., Chandler Publ. Company.
Bateson, G. (1979), Mind and Nature—A Necessary Unity, W. Collins Sons & Co. Ltd., Glasgow.
Günther, G. (1976-1980), Beiträge zur Grundlegung einer operationsfähigen Dialektik, Band 1-3, Felix Meiner Verlag, Hamburg.
Günther, G. (1976), "Cybernetic Ontology and Transjunctional Operations", in: Günther, G., Beiträge zur Grundlegung einer operationsfähigen Dialektik, Band 1, p. 249-328.(*)
Günther, G. (1979a), "Cognition and Volition", in: Günther, G., Beiträge zur Grundlegung einer operationsfähigen Dialektik, Band 2, p. 203-240.(*)
Günther, G. (1979b), "Identität, Gegenidentität und Negativsprache", Hegeljahrbücher, p. 72-88.(*) English translation by J. Paul and J. Newbury (2005), in: www.vordenker.de
Kaehr, R. and von Goldammer, E. (1989), "Again Computers and the Brain", Journal of Molecular Electronics, Vol. 4, p. S31-S37.(*)
Kaehr, R. and von Goldammer, E. (1989), "Poly-contextural Modelling of Heterarchies in Brain Functions", in: Cotterill, R.M.J. (ed.), Models of Brain Functions, Cambridge University Press, p. 483-497.(*)
Kaehr, R. (2003), Derrida's Machine, in: Kaehr, R. (ed.), URL: http://www.thinkartlab.com/pkl/media/index.htm (#)
Kaehr, R. (2004), Skizze-0.9.5: Strukturation der Interaktivität. Grundriss einer Theorie der Vermittlung, in: Kaehr, R. (ed.), URL: http://www.thinkartlab.com/pkl/media/SKIZZE-0.9.5-medium.pdf (#)
McCulloch, W. St. (1945), "A Heterarchy of Values Determined by the Topology of Neural Nets", Bulletin of Mathematical Biophysics, Vol. 7, p. 89-93.(*) Reprinted in: McCulloch, W. St. (1988), Embodiments of Mind, The MIT Press.(*)
Varela, F. (1979), "Principles of Biological Autonomy", in: Klir, G. (ed.), General Systems Research Vol. II, North Holland Publ., Amsterdam, p. 58.
von Goldammer, E. and Kaehr, R. (1990), "Problems of Autonomy and Discontexturality in the Theory of Living Systems", in: Moeller, D.P.F. and Richter, O. (eds.), Analyse dynamischer Systeme in Medizin, Biologie und Oekologie, Informatik-Fachberichte 275, Springer Verlag, Berlin, p. 3-12.(*)
