Informational Non-Reductionist Theory of Consciousness Providing Maximum Accuracy of Reality Prediction
Vityaev E.E.
Sobolev Institute of Mathematics, Novosibirsk, Russia
[email protected]
Abstract.
The paper considers a non-reductionist theory of consciousness, which is not reducible to
theories of reality or to physiological or psychological theories. Following D.I. Dubrovsky's
"informational approach" to the "Mind-Brain Problem", we consider reality through the prism of
information about observed phenomena, which, in turn, is perceived by subjective reality through
sensations, perceptions, feelings, etc., which are themselves information about the corresponding brain
processes. Within this framework, the following development principle of the Information Theory of Consciousness
(ITC) is put forward: the brain discovers all possible causal relations in the external
world and draws all possible inferences from them. The paper shows that the ITC built on this principle:
(1) is also based on the informational laws of the structure of the external world; (2) explains the structure and
functioning of the brain's functional systems and cellular ensembles; (3) ensures maximum accuracy
of prediction and anticipation of reality; (4) resolves emerging contradictions; and (5) is an
information theory of the brain's reflection of reality.
3. "Natural" classification
Let us move on to the next law of the informational structure of external-world
objects – "natural" classification. The first fairly detailed analysis of "natural"
classification belongs to J.S. Mill [21]. First, we separate "artificial"
classifications from "natural" ones: "Take any attribute: if some things
have it and others do not, then we can base a division of all things into two classes
on it." "But if we turn to ... the class of "animal" or "plant", ... then we will find that in
this respect some classes are very different from others. ... [they] have so many features that
they cannot be ... enumerated" [21].
J.S. Mill defines "natural" classification as follows: "Most of all, it
corresponds to the goals of scientific (natural) classification when objects are combined
into such groups, regarding which the greatest number of general propositions can be
made" [21]. Based on the concept of "natural" classification, J.S. Mill defines the
concept of an "image" of a class as a certain pattern that has all the characteristic features of
this class.
Naturalists wrote that the creation of a "natural" classification consists in
"indication": from an infinitely large number of features one must pass to a
limited number of them that would replace all the other features [13]. This means that
in "natural" classes the attributes are strongly correlated. For example, if there are 128
classes and the attributes are binary, then only 7 attributes need to be independent
"indicator" attributes, since 2^7 = 128, and the other attributes can be predicted
from the values of these 7 attributes. We can choose various sets of 7-10 attributes as "indicators",
and then the other attributes, of which there are potentially infinitely many, can be
predicted from the selected ones. Therefore, there is an exponential (in the
number of attributes) number of causal relationships linking the attributes of objects
of "natural" classes.
Such redundancy of information, now at the level of the perception of objects of the
external world, is confirmed in the cognitive sciences by the study of "natural"
concepts.
The highly correlated structure of the objects of the external world is also revealed
by the theory of "natural" concepts. "Natural" classification reveals the structure of the
objects of the external world, and "natural" concepts, studied in cognitive sciences,
determine the perception of these "natural" objects as elements of subjective reality.
In the works of Eleanor Rosch, the following principle of categorization of
"natural" categories was formulated: «Perceived World Structure … is not an
unstructured total set of equiprobable co-occurring attributes. Rather, the material
objects of the world are perceived to possess … high correlational structure …
combinations of what we perceive as the attributes of real objects do not occur
uniformly. Some pairs, triples, etc., are quite probable, appearing in combination …
with one, sometimes another attribute; others are rare; others logically cannot or
empirically do not occur» [25].
Directly perceived objects (basic objects) are information-rich bundles of
observable properties that give rise to categorization (an image in J.S. Mill's sense):
«Categories can be viewed in terms of their clear cases if the perceiver places emphasis
on the correlational structure of perceived attributes … By prototypes of categories
we have generally meant the clearest cases of category membership» [24].
Further research found that models based on features, similarity and
prototypes are not sufficient to describe "natural" classes. Building on these studies, Bob
Rehder put forward the theory of causal models, according to which: "people's intuitive
theories about categories of objects consist of a model of the category in which both a
category's features and the causal mechanisms among those features are explicitly
represented" [23]. In the theory of causal models, the assignment of an object to a category
is no longer based on a set of features and feature similarity, but on the
similarity of the generative causal mechanism.
Bob Rehder used Bayesian networks to represent causal knowledge [22].
However, Bayesian networks do not support cycles and therefore cannot model cyclic causal
relationships. The formalization we propose below, in the form of probabilistic formal
concepts, models cyclic causal relationships directly [5-8,28-29,32].
In accordance with the principle of unlimited inference, the brain draws all
possible inferences from causal relationships. In the process of perceiving "natural" objects,
these causal relationships, of which there is an exponential number, close on
themselves, forming a kind of "resonance", which is a system with highly integrated
information in the sense of G. Tononi. At the same time, the "resonance" occurs if and only
if these causal relationships reflect some "natural" object in which a potentially infinite
set of features mutually presuppose each other. The resulting cycles of inference over
causal relationships are mathematically described by "fixed points", which are
characterized by the fact that further application of inference to the properties under
consideration does not predict any new properties. The set of mutually related properties
obtained at a fixed point gives the "image" of the class, the "prototype" of the concept
and its "causal model". Therefore, the brain perceives a "natural" object not as a set of
features, but as a "resonating" system of causal connections that close on themselves
through the simultaneous inference of the entire set of features of the "image" or
"prototype", forming a "causal model".
It can be shown that MSCR causal relationships organized into cellular
ensembles make it possible to identify objects of the external world as reliably as
possible and then, using this identification, to predict the properties of these objects as
accurately as possible, since only the MSCR causal relationships related to the identified
class are used for prediction. This forms the second, even more accurate, level of
organization of information processes from the point of view of prediction.
We propose a fundamentally new mathematical apparatus for determining
integrated information, "natural" classification and "natural" concepts. Our
formalization is based on a probabilistic generalization of formal concept analysis
[8,29-32]. Formal concepts can be defined as fixed points of deterministic rules (rules
without exceptions) [19]. But, as J.S. Mill wrote: "Natural groups ... are determined by
features, ... while taking into account not only the features that are certainly common
to all the objects included in the group, but the whole set of those features, of which all
occur in most of these objects, and the majority in all." Therefore, it is necessary to move
away from deterministic rules and replace them with probabilistic ones, in order to
determine the features not exactly, but for the majority. We therefore generalize
formal concepts to the probabilistic case, replacing deterministic rules with MSCR
causal relationships and defining probabilistic formal concepts as fixed points of these
maximally specific rules [8,29-32]. Since inferences based on the
most specific causal relationships are consistent, the resulting fixed point is also
consistent and does not contain both a feature and its negation, i.e., this definition of
probabilistic formal concepts is correct.
It can be shown [9] that probabilistic formal concepts adequately formalize
"natural" classification and, moreover, that the resulting "natural" classification satisfies
all the requirements that naturalists imposed on "natural" classifications [9].
Let us consider an example of the computer simulation of the discovery of "natural" classes,
"natural" concepts and integrated information for encoded digits. Let
X(a) be the set of properties of an object a given by some set of predicates, and let
(Pi1 & ... & Pik ⇒ Pi0) ∈ MS(X), {Pi1,...,Pik} ⊂ X, denote the MSCR causal relationships
that hold for the properties X. Then the prediction operator Pr and the fixed point can be
defined by repeated application of the operator Pr. Since each application of the operator Pr
increases the value of the criterion Krit, which reaches a local maximum at a fixed point, a
fixed point that reflects some "natural" object has a maximum of integrated
information and the "exclusion" property in the sense of G. Tononi.
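As an illustration only, the following Python sketch mimics the prediction operator Pr on a toy rule set (the rule representation and names are ours, not the implementation used in [8,29-32]): rules are applied to a set of properties until no new property is predicted, i.e. until a fixed point is reached.

# Minimal sketch of the prediction operator Pr and its fixed point.
def Pr(properties, rules):
    """One application of the prediction operator: add every conclusion whose premise holds."""
    predicted = set(properties)
    for premise, conclusion in rules:
        if premise <= predicted:          # all premise properties are present
            predicted.add(conclusion)
    return predicted

def fixed_point(properties, rules):
    """Iterate Pr until the set of properties stops growing."""
    current = set(properties)
    while True:
        nxt = Pr(current, rules)
        if nxt == current:
            return current                # fixed point: no new properties are predicted
        current = nxt

# Hypothetical rules linking the features of one "natural" object: the features
# mutually predict each other, so inference "loops back on itself".
rules = [
    (frozenset({"P1"}), "P2"),
    (frozenset({"P2"}), "P3"),
    (frozenset({"P2", "P3"}), "P1"),
]

print(fixed_point({"P1"}, rules))   # {'P1', 'P2', 'P3'} - the "image"/"prototype"

The cyclic rules in this toy example mutually predict each other's conclusions, so starting from a single feature the operator closes the set into the full "image" of the object.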
Let us encode the digits as shown in fig. 2 and form a training set consisting of
360 shuffled digits (the 12 digits of fig. 2, each duplicated in 30 copies, without specifying
which digit is which). On this set, semantic probabilistic inference discovered 55089 MSCR
causal relationships – the general statements about objects that J.S. Mill spoke about.
Consider the following example in fig. 4, which contains both digits and letters.
One can train only on the digits and build probabilistic formal concepts of the digits,
train only on the letters and build probabilistic formal concepts of the letters, or train
on digits and letters together and build probabilistic formal concepts of both. In
each of these cases different MSCR causal relationships are found, but the MSCR
causal relationships describing digits and letters together contain additional
features separating them from each other, which is obtained automatically by the MSCR
causal relationships. When both letters and digits are considered (in context), these MSCR
causal relationships have a higher probability than the MSCR found for digits or letters alone,
and therefore they are the ones triggered in the formal model of the neuron. Our formal
neuron model, which detects the most specific causal connections [30], follows the
well-known physiological property of neurons: more probable conditional stimuli are
triggered faster in time.
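The following toy fragment (our simplified reading, not the neuron model of [30]) only illustrates the statement above: among the rules whose premises match the stimulus, the rule with the highest conditional probability is the one that fires first, i.e. with the shortest latency.

# Toy illustration: higher rule probability -> shorter latency -> fires first.
def first_fired_rule(stimulus, rules):
    """rules: list of (premise_set, conclusion, probability)."""
    matching = [r for r in rules if r[0] <= stimulus]
    return max(matching, key=lambda r: r[2]) if matching else None

rules = [
    (frozenset({"segment_a", "segment_b"}), "digit", 0.95),  # rule learned in the digits+letters context
    (frozenset({"segment_a"}), "digit", 0.80),               # less specific rule, lower probability
]
print(first_fired_rule({"segment_a", "segment_b"}, rules))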
The formalization of the second type of KOGs – the KOGs of functional systems – is
based on the consideration of purposeful behavior, which is carried out by developing
conditional (causal) links between actions and their results. These conditional
connections are sufficient for modeling functional systems and for developing animats.
P.K. Anokhin wrote: "We are talking about the collateral branches of the
pyramidal tract, diverting to many neurons "copies" of the impulses that go along the
pyramidal tract" [4-5]. Thus, when a motor neuron sends a signal to the muscles about
some action, copies of this excitation are also sent to the projection zones, which
can register the result of the performed action. Therefore, the brain detects all causal
connections between actions and their results.
The diagram in fig. 5 shows that this is sufficient to explain the basic
mechanisms of the formation of the brain's functional systems [7,31]. Assume that
we have no experience yet and a motivational excitation has arisen, shown by the
black triangle. Then, to meet the need by trial and error, we can perform some action,
activated by some neuron indicated by a white triangle. Simultaneously with
the activation of this action, a "copy" of the excitation of this neuron is sent to the
projection zones, where there is a neuron that reacts to the result of the action
received from the outside world. Since this neuron first receives excitation from the
activation of the action by the white neuron, it forms a conditional connection between
the activation of the action by the white neuron and the result obtained. If now, after
receiving this result that has changed the situation, we carry out some next action, also
indicated by a white triangle, then we obtain the next result, for which there is
also a neuron that reacts to the result of this action. If, as a result, the need is
satisfied and the goal is achieved, then the entire chain of active neurons and
conditional connections that led to the result is reinforced and stored in
memory. Thus, an inner contour arises that forecasts the achievement of results
via the causal relationships. Then, at the next occurrence of the motivational excitation,
this chain of actions is extracted from memory and predicts the achievement
of the result along the inner contour even before any actions are performed. In this way an
action plan is formed which, along the inner contour, as stated in the quote from P.K. Anokhin,
activates the neurons waiting for the results of actions; these form the acceptor of the
results of actions, studied in detail in the theory of functional systems. Thus, the
formation and operation of a functional system can be explained by the formation of
causal relationships between actions and their results.
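The following schematic sketch is a deliberately simplified illustration of fig. 5 with a hypothetical environment; it is not the animat model of [11,16,31]. It shows how a chain of action-result links can be formed by trial and error, reinforced when the goal is reached, and later retrieved as an action plan together with the list of expected results that plays the role of the acceptor of action results. For simplicity the whole episode chain is stored, not only the links that led to the result.

# Schematic sketch of trial-and-error formation of a chain of action->result links.
import random

def trial(environment, goal, actions, rng):
    """Try actions; remember the chain of (action, result) links; keep it if the goal is reached."""
    chain, state = [], None
    for _ in range(20):                       # bounded trial-and-error episode
        action = rng.choice(actions)
        result = environment(state, action)   # the result registered by projection-zone neurons
        chain.append((action, result))        # conditional link: action -> result
        state = result
        if result == goal:
            return chain                      # reinforced chain stored in memory
    return None

# Hypothetical environment: "press" followed by "open" leads to the goal "food".
def environment(state, action):
    if action == "press":
        return "lever_down"
    if action == "open" and state == "lever_down":
        return "food"
    return "nothing"

rng = random.Random(1)
memory = None
for _ in range(100):                           # motivational excitation drives repeated trials
    memory = trial(environment, "food", ["press", "open", "wait"], rng)
    if memory:
        break

action_plan = [a for a, _ in memory]           # plan retrieved on the next motivational excitation
acceptor = [r for _, r in memory]              # expected results along the inner contour
print(action_plan, acceptor)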
In terms of MSCR causal relationships, the scheme of functional systems is shown in
fig. 6 [7,31]. We consider the need as a request to the functional system to
achieve the goal indicated by the predicate PG0. This request enters the afferent
synthesis block and, for functional systems that have no functional subsystems,
extracts from memory causal relationships of the form Pi1,…,Pim,Ak1,…,Akl => PG0
leading to the achievement of the goal PG0, where Pi1,…,Pim are the properties of the
environment necessary for achieving the goal and Ak1,…,Akl is a sequence
of actions leading to the goal. At the same time, the properties Pi1,…,Pim must be
present among the properties of the environment P1,…,Pn entering the afferent synthesis
block. For hierarchically organized functional systems, this request extracts from the
MSCR memory causal relationships of a more complex form, Pi1,…,Pim,PGj1,…,PGjn,Ak1,…,Akl => PG0,
including requests to achieve the sub-goals PGj1,…,PGjn.
The extracted rules are then sent to the decision-making block, where a forecast of
goal achievement is made for each rule and a probability estimate of goal
achievement is computed. For rules containing only actions, the prediction is given by
the probability of the rule itself. For rules that request sub-goals, the forecast is made
by sending these requests to the functional subsystems, making decisions in them and
receiving from them the probability estimates of achieving the corresponding sub-goals. The
resulting forecast probability is computed as the product of the probability of the rule
and the probabilities of achieving its sub-goals. After that, a decision is made by
selecting the rule with the maximum probability estimate of goal achievement.
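This decision-making computation can be illustrated by the following simplified Python sketch (the rule structure and names are illustrative assumptions; this is not the animat implementation of [11,16,31]): the forecast probability of a rule is its own probability multiplied by the probability estimates returned recursively by the functional subsystems for its sub-goals, and the rule with the maximum estimate is selected.

# Simplified sketch of the decision-making block for hierarchical functional systems.
from dataclasses import dataclass
from typing import List

@dataclass
class Rule:
    conditions: frozenset   # Pi1,...,Pim - required environment properties
    subgoals: List[str]     # PGj1,...,PGjn - requests to functional subsystems
    actions: List[str]      # Ak1,...,Akl - actions of this functional system
    probability: float      # probability of the rule itself

def decide(goal, situation, memory):
    """Return (probability estimate, action plan) for the best rule achieving `goal`."""
    best = (0.0, [])
    for rule in memory.get(goal, []):
        if not rule.conditions <= situation:      # afferent synthesis: conditions must hold
            continue
        prob, actions = rule.probability, list(rule.actions)
        for subgoal in rule.subgoals:             # ask subsystems for their own estimates
            sub_prob, sub_actions = decide(subgoal, situation, memory)
            prob *= sub_prob                      # product of rule and sub-goal probabilities
            actions = sub_actions + actions
        if prob > best[0]:                        # decision: rule with maximum estimate
            best = (prob, actions)
    return best

# Hypothetical memory of rules of the form Pi,...,PGj,...,Ak,... => PG.
memory = {
    "PG0": [Rule(frozenset({"P1"}), ["PG1"], ["A2"], 0.9)],
    "PG1": [Rule(frozenset({"P1"}), [], ["A1"], 0.8),
            Rule(frozenset({"P2"}), [], ["A3"], 0.95)],
}
print(decide("PG0", {"P1"}, memory))   # approximately (0.72, ['A1', 'A2'])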
Then an action plan is formed, including all the actions in the rule and
all the actions in the functional subsystems. Simultaneously with the action
plan, the acceptor of action results is formed, including the expectation of all
predicted sub-results in the functional subsystems, as well as in the functional system itself.
After that, the action plan begins to be executed, and the expected results are
compared with the results obtained.
If all the sub-results and the final result are achieved and coincide with the
expected results, then the rule itself and all the rules of the functional subsystems that
were selected in the decision-making process are reinforced and their probability
increases. If the result is not achieved in some subsystem, then the corresponding rule
of this functional subsystem, selected by its decision-making block, is penalized. An
orienting-exploratory reaction then arises that revises the decision. This model has been
successfully used to model animats [11,16,31].
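A minimal sketch of the reinforcement and penalty step follows; the Laplace-smoothed frequency estimate used here is an illustrative assumption, not the exact probability estimate of the model.

# Toy sketch of rule reinforcement/penalty based on success and trial counts.
class RuleStatistics:
    def __init__(self):
        self.successes = 0
        self.trials = 0

    def update(self, result_achieved: bool):
        """Reinforce the rule if its predicted result was obtained, penalize otherwise."""
        self.trials += 1
        if result_achieved:
            self.successes += 1

    @property
    def probability(self):
        # Laplace-smoothed frequency estimate of the rule's conditional probability.
        return (self.successes + 1) / (self.trials + 2)

r = RuleStatistics()
r.update(True); r.update(True); r.update(False)
print(round(r.probability, 2))   # 0.6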
Fig. 7. Multi-faceted reality.
Let us consider the third level of prediction accuracy provided by the information
theory of consciousness – consciousness as a mechanism for resolving contradictions.
The world is multifaceted, like a diamond (fig. 7), and there is no single consistent
description of it; the function of consciousness is to correctly choose the appropriate
context, within which the most accurate prediction can be obtained. In science, such
contexts are paradigms, which form a certain view, point of view and the corresponding
system of concepts of a particular theory. These paradigms, as a rule, are not
compatible with each other.
This point of view on consciousness is also expressed by V.M. Allakhverdov [2].
In [1] he writes: "Consciousness, faced with contradictory information, tries
to remove this information from the surface of consciousness or to modify it so that the
contradiction disappears or ceases to be perceived as a contradiction." In that work, he
cites 7 cases of resolving contradictions by consciousness. All these cases are explained
by the properties or interaction of the probabilistic formal concepts that define the concepts
or contexts in question. For brevity, let us consider two of them:
1. Case 1. The easiest way to get rid of a contradiction or ambiguity is to choose
one interpretation for awareness and not to realize all the others (incompatible
with it) (negative choice).
Example. The phenomenon of binocular rivalry, when different stimuli are
simultaneously presented to the subject's different eyes. If two images are
presented, one of which is more probable or familiar, the subjects mostly see only that one.
Explanation. A probabilistic formal concept mutually predicts the properties
included in the concept, as well as the negations of other properties that should not
be in it, and thereby inhibits the alternatives.
2. Case 2. When the different sides of the contradiction are realized, an attempt is
made to find a way of explanation – a connection of the different sides into a
consistent whole.
Example. Under conditions of binocular rivalry, if a red circle is presented
to one eye and a black triangle to the other, the subject sees a black triangle
on a red background.
Explanation. If the perceived features do not contradict each other and do not
inhibit each other, then they can form a combined probabilistic formal concept
and be perceived accordingly.
In all cases, it is possible to correctly choose the most appropriate probabilistic formal
concept or context to resolve the contradiction and obtain the most accurate prediction
in accordance with the causal relationships of the chosen concept or context.
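Case 1 above can be illustrated by the following toy sketch (an illustration of the inhibition mechanism only, not the formalization of [8,29-32]): each of two competing interpretations predicts, besides its own features, the negations of the features of the alternative; applying the more probable rules first, the fixed point contains only one interpretation.

# Toy sketch: the more probable concept's features are inferred first and the
# negations they predict inhibit the alternative interpretation.
def resolve(observed, rules):
    """rules: (premise_set, conclusion, probability); a conclusion whose negation is
    already inferred is never added, so the result stays consistent."""
    inferred = set(observed)
    changed = True
    while changed:
        changed = False
        for premise, conclusion, _ in sorted(rules, key=lambda r: -r[2]):
            negation = conclusion[1:] if conclusion.startswith("~") else "~" + conclusion
            if premise <= inferred and conclusion not in inferred and negation not in inferred:
                inferred.add(conclusion)
                changed = True
    return inferred

rules = [
    (frozenset({"contour"}), "familiar_image", 0.9),      # the more familiar interpretation...
    (frozenset({"familiar_image"}), "~rare_image", 0.9),   # ...predicts the negation of the alternative
    (frozenset({"contour"}), "rare_image", 0.6),
    (frozenset({"rare_image"}), "~familiar_image", 0.6),
]
print(resolve({"contour"}, rules))   # contains 'familiar_image' and '~rare_image' only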
References
1. V.M. Allakhverdov, O.V. Naumenko, M.G. Filippova, O.B. Shcherbakova, M.O. Avanesyan,
E.Y. Voskresenskaya, A.S. Starodubtsev. How consciousness gets rid of contradictions //
SHAGI/STEPS 2015(1) The journal of the School of advanced studies in Humanities (in Russian)
2. Allakhverdov V.M. Consciousness as a paradox. Exp. Psy. St. Pet.: DNA, 2000. (in Russian)
3. Anokhin K.V. Cognitome: in search of a general theory of cognitive science // The Sixth
International Conference on Cognitive Science, Kaliningrad, 2014, pp. 26-28. (in Russian)
4. Anokhin P.K. The problem of decision-making in psychology and physiology // Problems of
decision-making. Moscow: Nauka, 1976. pp. 7-16. (in Russian)
5. Anokhin P.K. Biology and neurophysiology of the conditioned reflex and its role in adaptive
behavior. Oxford a.o.: Pergamon press, 1974. 574 p.
6. Vityaev E.E., Neupokoev N.V. Formal model of perception and image as a fixed point of
anticipation. Neuroinformatics. 2012, volume 6, No. 1, pp. 28-41. (in Russian)
7. Vityaev E.E. Logic of brain work // Approaches to modeling thinking. ed. V.G. Redko. URSS
Editorial, Moscow, 2014, pp. 120-153. (in Russian)
8. Vityaev E.E., Demin A.V., Ponomarev D.K. Probabilistic generalization of formal concepts //
Programming. Vol.38, No.5, 2012, pp. 219-230. (in Russian)
9. Vityaev E.E., Martynovich V.V. Formalization of "natural" classification and systematics through
fixed points of predictions // SEMR. News. V. 12, IM SB RAS, 2015, pp. 1006-1031. (in Russian)
10. Dubrovsky D.I. The problem of "consciousness and the brain": An information approach.
Knowledge, Understanding, Skills. 2013, №4. (in Russian)
11. Mukhortov V.V., Khlebnikov S.V., Vityaev E.E. Improved algorithm of semantic probabilistic
inference in the two-dimensional animat problem // Neuroinformatics, 6(1), pp. 50-62. (in Russian)
12. Rudolf Carnap. Philosophical foundations of Physics. M., "Progress", 1971, p. 388. (in Russian)
13. Smirnov E.S. The construction of a species from a taxonomic point of view. Zool. Journal. (1938).
17:3, pp. 387-418. (in Russian)
14. Max Tegmark. Our mathematical universe. ACT, 2016, p. 592. (in Russian)
15. Cartwright, N. Causal Laws and Effective Strategies. Noûs. (1979) 13(4): 419-437.
16. Demin A.V., Vityaev E.E. Learning in a virtual model of the C. elegans nematode for locomotion
and chemotaxis // Biologically Inspired Cognitive Architectures. 2014, v.7, pp.9–14.
17. Hebb D.O. The Organization of Behavior. Wiley: New York; 1949.
18. Hempel C.G. Maximal Specificity and Lawlikeness in Probabilistic Explanation. Philosophy of
Science. (1968) 35, pp. 116-33.
19. Ganter B. Formal Concept Analysis: Methods, and Applications in Computer Science. TU
Dresden (2003).
20. Masafumi Oizumi, Larissa Albantakis, Giulio Tononi. From the Phenomenology to the
Mechanisms of Consciousness: Integrated Information Theory 3.0 // PLOS Computational
Biology. May 2014, V.10. Issue 5.
21. Mill J.S. System of Logic. Ratiocinative and Inductive. L., 1983.
22. Bob Rehder, Jay B. Martin. Towards A Generative Model of Causal Cycles // 33rd Annual
Meeting of the Cognitive Science Society 2011, (CogSci 2011), Boston, Massachusetts, USA,
20-23 July 2011, V.1 pp. 2944-2949.
23. Rehder B. Categorization as causal reasoning // Cognitive Science, 27. 2003, pp. 709–748.
24. Rosch E.H. Natural categories // Cognitive Psychology 4. 1975, P. 328-350.
25. Rosch E. Principles of Categorization // Rosch E.&Lloyd B.B. (eds), Cognition and
Categorization. Lawrence Erlbaum Associates, Publishers, 1978. pp. 27–48.
26. Tononi G. Information integration: its relevance to brain function and consciousness. Arch. Ital.
Biol., 148: 299-322, 2010.
27. Tononi G. Integrated information theory of consciousness: an updated account. Arch Ital Biol
150, 2012, 56–90.
28. Vityaev E.E. The logic of prediction. // Proceedings of the 9th Asian Logic Conference
Mathematical Logic in Asia. World Scientific, Singapore. 2005, pp. 263–276.
29. Vityaev E.E., Martinovich V.V. Probabilistic Formal Concepts with Negation // A. Voronkov, I.
Virbitskaite (Eds.). PSI 2014, LNCS 8974, 2015, pp. 385-399.
30. Vityaev E.E. A formal model of neuron that provides consistent predictions // Biologically
Inspired Cognitive Architectures 2012 // Advances in Intelligent Systems and Computing, v.196,
Springer, 2013, pp. 339-344.
31. Vityaev E. Purposefulness as a Principle of Brain Activity // Anticipation: Learning from the Past.
Cognitive Systems Monographs, V.25, Springer, 2015, pp. 231-254.
32. Vityaev, E., Odintsov, S. How to predict consistently? Trends in Mathematics and Computational
Intelligence // Studies in Computational Intelligence 796. Mara Eug. Cornejo (ed), 2019, 35-41.