Representation and Interpretation in Artificial and Natural Computing

Luis A. Pineda 1,*

1 Universidad Nacional Autónoma de México, IIMAS, Mexico City, 04510, Mexico
* [email protected]

arXiv:2502.10383v1 [cs.AI] 14 Feb 2025

Abstract
Artificial computing machinery transforms representations through an objective pro-
cess, to be interpreted subjectively by humans, so the machine and the interpreter
are different entities, but in the putative natural computing both processes are per-
formed by the same agent. The method or process that transforms a representation
is called here the mode of computing. The mode used by digital computers is the algorithmic one, but there are others, such as those used by quantum computers and diverse forms of non-conventional computing, and there is an open-ended set of representational formats and modes that could be used in artificial and natural computing. A mode based
on a notion of computing different from Turing’s may perform feats beyond what the
Turing Machine does, but the modes would not be of the same kind and could not be
compared. For a mode of computing to be more powerful than the algorithmic one, it
ought to compute functions lacking an effective algorithm, and Church Thesis would
not hold. Here, a thought experiment including a computational demon using a hypo-
thetical mode for such an effect is presented. If there is natural computing, there is a
mode of natural computing whose properties may be causal to the phenomenological
experience. Discovering it would come with solving the hard problem of consciousness;
but if it turns out that such a mode does not exist, there is no such thing as natural
computing, and the mind is not a computational process.
Keywords: Computation, Representation, Interpretation, Mode of Computing, Arti-
ficial Intelligence, Church Thesis, Consciousness.

1 Intuitive notions of computing machines


The Turing Machine (TM) was adopted as the general model of computing very soon after
Turing’s original paper was published (1). Together with Church’s Thesis, which states that the TM computes the full set of functions that can be computed intuitively by people given enough time and material resources, or alternatively that every fully general machine or model of computing is equivalent to the TM (e.g. (2)), it defines the notion of computing or computability: what a TM does. According to this view, all digital computers are
physical realizations of the TM. Opposing such a strong current of opinion, one can ask
whether there are other forms of computing differing genuinely from Turing’s conception.
In particular, the so-called “natural computing”: If all computing engines are TMs, and
the brain/mind is a computing engine, then the brain/mind is a TM. However, does the computer, a human invention, properly characterize a natural phenomenon that has never been observed directly? Is this not an inversion of the scientific method, which starts from observing a phenomenon, making an induction, and formulating a hypothesis that should then be verified empirically? Should not natural computing, if there is indeed such a thing, be characterized on its own terms? These questions can be addressed from the perspective of what a computing engine actually does: transforming
representations. In the case of the TM, the symbols on the tape in the initial and final
states represent the argument and the value of the function to be computed, respectively,
and the job of the engine is to carry out the corresponding transformation. Indeed, the
difference between standard machinery and computers is that, while the former performs
useful and predictable work, the only thing the latter does is transforming representations
to be interpreted by humans and possibly by other higher-evolved animals. However, such
functionality may be achieved by various physical means, natural or artificial, opening
the possibility of conceiving a large set of computing machines with their own particular
properties.
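As a concrete illustration of this representational reading of the TM, the following toy sketch (a minimal rendering for exposition, not taken from the paper) transforms the unary representation of a number n on a tape into the representation of n + 1; the argument and the value are simply the initial and final tape contents, and the engine only carries out the transformation between them.

```python
def tm_successor(tape):
    """Toy Turing-Machine-style successor: the tape initially holds the
    unary representation of n (a run of '1's); the final tape holds n + 1.
    The control table is hard-coded for this single function."""
    tape = list(tape) + ['_']       # ensure a blank cell at the right end
    head, state = 0, 'scan'
    while state != 'halt':
        symbol = tape[head]
        if state == 'scan' and symbol == '1':
            head += 1               # move right over the argument's marks
        elif state == 'scan' and symbol == '_':
            tape[head] = '1'        # write one more mark: the value n + 1
            state = 'halt'
    return ''.join(tape).rstrip('_')

print(tm_successor('111'))   # '1111': the representation of 3 becomes that of 4
```

The interpretation of the initial and final strings as numbers is, of course, not part of the machine; it is supplied by the human reader.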

2 Representation and Interpretation


A representation is a set of marks or distinctions on a physical medium –a material object–
that is interpreted as mental content. We do not know what the ultimate nature of mental contents is, but we have introspective access to them: knowledge, beliefs, desires, intentions, feelings, emotions, pain and fear, and everything else that constitutes our psychological life, are mental contents. These are private and subjective to every cognitive
individual, hence inaccessible to objective scientific investigation directly. However, they
can be shared through communication, which provides a window to the mind. If there is
communication, there is representation; and if there is representation, there is intentional
action and interpretation. In spoken language, mental contents are intentionally “placed on” the sound wave, the medium, by the speaker, and are interpreted and “placed in” the
mind of the listener. The mental contents in the minds of the speaker and hearer are similar
to the extent to which communication is successful, but may differ in many respects due
to contingencies of the production, transmission, and reception of the message, including
the nature of the motor and sensory organs involved in linguistic communication, and the
knowledge, beliefs, and expectations of the communicating individuals. Agents also inter-
pret directly the signals and forces of the world, and in any case, the mind is constituted by
interpretations. The “transduction” from material objects into mental contents and vice versa emerged at some point in the phylogenetic history, possibly very early, and underlies
the nature of experience and consciousness, but we do not know what these are, nor why
and how they appeared. These questions underlie the hard problem of consciousness, for
which we do not have satisfactory answers.
Are there mental contents without communication? There may be animal species that
perform intentional actions and make interpretations without communicating, but the win-
dow to the mind would be closed and we could not tell. Is there communication without
mental content? Does the sunflower communicate with the Sun, or does it just sense the light, which produces a chemical reaction and induces a mechanical force, and follow the Sun? Does an excavator communicate with the stones that it carries, or does it just apply a force to move them? Masses, forces, and signals constitute the material world, but they are not contents. Do computers communicate? Do they perform intentional actions and make interpretations? Do they have mental contents, or do they only process signals and apply forces in the material
world? It is reasonable to believe that humans and animals with a sufficiently developed
neural system do make the transduction between the material and the mental, but there
is no evidence to sustain that other kinds of material entities experience the world, feel that
they are alive and have some form of consciousness.
Evolution provided natural representations, such as the sign systems used by animals
for communicating. In the case of humans, the paradigmatic form of natural representa-
tions is spoken language. Then, at the birth of civilization, forms of expression with a conventional character appeared, such as manuscript language, which used paper as the medium and ink to write the marks, inaugurating the history of textual representations. Written language allowed communication at different locations and times, and provided the first form of external memory for recording facts: accounting, historical and biographical records, and literary art; it also allowed humans to make calculations that can-
not be done mentally, the origin of algorithms. The second chapter was opened with the
invention of movable printing types and the printing press, a machine that automates the
production of representations. The third came with the invention of telecommunication
typographic machines, such as the telegraph, in which texts were not only output by au-
tomatic means, but could also be typed in. The invention of the computer gave rise to
the fourth chapter in which representations are not only input and output but are also
transformed by a purely mechanical process, implementing algorithms. Babbage’s analyti-
cal engine used mechanical gears as the medium, whose positions played the role of digits,
which were interpreted as numbers. Turing’s great insight for the design of the universal computing engine was the use of typographic text as the representational format, which allowed everything that can be expressed through text to be potentially computed. Each of the four chapters of the history of representations gave rise to a great cultural
revolution, and we are living in the fourth era.
There is no computation without representation, and there is no representation without
interpretation: if there were no agents –intentional entities capable of making intentional
actions and interpretations– “representations” would be inert matter, such as stones on the bed of a river wrinkled by the flow of water. Hence, there is no computation without rep-
resentation and interpretation. The interpreter must know the interpretation conventions,
including the notation and the standard configuration, and quite a lot of common sense
knowledge to understand the configuration as a representation and not as a wrinkled stone.
The interpreter must be a player in a representational language game. Artificial comput-
ing, the human invention, has an objective aspect, which is the machine that transforms
the representation, and a subjective one, which is the interpretation; hence, computing is
a relational objective-subjective phenomenon. The computational view of cognition is based on the hypothesis that the mind is a computing process and the brain is a computing engine. In this latter view,
representations become “internalized”, giving rise to the representational hypothesis: the
mind consists not only of interpretations, but also includes representations of interpreta-
tions. The question is what the form of such putative objects is. It is implausible that
it is typographic text, or any alternative equally expressive external conventional
format. If there are internal natural representations, they are much older in the history
of life than conventional ones. Could it be that the format is the spoken language? This
is indeed a natural one, but it may be just for communication, and there may be another
for computation. There are also mental contents that are not linguistic, such as music,
feelings, pain, fear, etc., that are experienced directly; hence they are interpretations, but
their representational formats are very unlikely to be linguistic. Candidate putative for-
mats would be those used by working memory, as suggested by Baddeley (3, 4); long-term
memory, both semantic and episodic, as advocated by Tulving (5, 6); the formats of the
objects produced by the scene construction process proposed by Hassabis & Maguire (7),
used in navigation, imagination, vivid dreaming, etc.; or implicit memory procedural for-
mats putatively supported by the basal ganglia, the cerebellum, and the motor cortex,
among others; but we do not know their form, or whether they are indeed representations,
that is, marks on a medium. Furthermore, their mere internalization does not make them
intentional. Random Access Memories (RAM) of digital computers are very much like pa-
per, and their states like ink marks! Fodor’s Language of Thought (LoT) (8) is supposed
to be intentional, but what about its interpreter? Do the expressions on the internal tape
interpret themselves? If the expressions of LoT are representations, this explanation is
empty. Alternatively, if the interpretation is a computational process, but the interpreter
is outside the TM, there is a natural computational process, the one who understands, that
is outside of the TM. Hence, there is a computing device that is not a TM. In the putative
natural computing, there should be an objective machine that transforms an internal rep-
resentation, an objective brain process, but also a subjective interpretation, performed by
the brain of the same agent. The whole idea of representational cognition rests on finding
out what is/are the representational format/s of the mind and what is the nature of the
interpretation that makes representations intentional. This is again the hard problem of
consciousness.

3 Artificial Intelligence
Turing introduced the so-called imitation game in his 1950 paper, which was popularized as Turing’s test, and stated that a machine that won it should be ascribed thought, understanding, and consciousness (9), supporting the claim that such machines have mental contents. He
also advanced the construction of intelligent machines and gave a clear illustration with
his chess program Turochamp, which used extensive symbolic search. After almost three
decades of research in Artificial Intelligence (AI), Simon and Newell stated the so-called
Physical Symbol System Hypothesis, according to which a system of physical symbols –the TM– has the necessary and sufficient conditions to generate general intelligence (10). Physical symbols are representations; hence, the machines that manipulate them make interpreta-
tions, and have content. Newell went further with such a line of discussion and stated that
computing machines should be analyzed at different system levels, such that each level has
its own functionality, input and output, and can be studied independently of the others
(11). He postulated a hierarchy of system levels that includes the physical world at the
bottom, with several physical hardware levels on top of it, supporting the so-called symbol
level, which is directly below the knowledge level, as illustrated in Figure 1. The knowledge
and the symbol levels correspond roughly to the computational and the algorithmic levels
of Marr’s system levels hierarchy (12).

Figure 1: Newell’s hierarchy of system levels (13).

The symbol level is also the level at which software programs are stated and evaluated, and it corresponds to the TM’s representations. According to Newell, the representations expressed at such a level are interpreted into the knowledge level, where the medium is knowledge itself and the only causal law is the principle of rationality, giving rise to a putative computational consciousness. However, knowledge is content, and such a “medium”
is not material; furthermore, the “units of knowledge” at such level are acted upon by
a non-material principle; hence, Newell adopts a dualist ontology, and the claims are not
scientific. However, we can reinterpret Newell’s hierarchy simply by stating that the knowl-
edge level is human knowledge, so the representations at the symbol level are interpreted by
people. This move corresponds to Searle’s distinction between strong and weak AI (14), the
former corresponding to Turing and Newell’s views and the latter to the suggested reinter-
pretation of Newell’s hierarchy (13). Searle’s Chiness Room mental experiment presented

5
in the same paper shows that symbolic manipulation can be performed without under-
standing, so a computing entity may have a symbol level without sustaining a knowledge
level.
There have been alternative proposals to Turing’s intuition of computing, such as Connectionism, especially in the form of Rumelhart’s Parallel Distributed Processing program (15). He claimed that cognition emerges from simple computing units assembled into large networks, such that individual units interact with their local neighbors, giving rise to distributed representations (16) that are computed through massively parallel processes, the so-called Arti-
ficial Neural Networks (ANN). Rumelhart stated explicitly in the introduction of his book
that ANNs are more powerful than TMs, challenging Church Thesis. From the present
perspective, the neural network level would be placed instead of the symbol level in the
hierarchy. However, although the typical diagrams illustrating the network structure and
functionality –which should be placed at a system level immediately below the knowledge
level too, as diagrams are representations to be interpreted– suggest a distributed architec-
ture and a parallel information flow, in their computer implementation the diagrams are translated into vector and matrix representations and operations; their computation is no different from models of other conceptual domains that use the same kind of mathematical structures and algorithms, and ANNs are TMs. Hence, Rumelhart’s challenge is not sustained. Connectionist systems were also conceived as representational, at least in Rumelhart’s formulation, and correspond to the strong AI view, so the claim that they have contents cannot be sustained either.
Another proposal was Brooks’s program of intelligence without representation (17, 18)
that gave rise to the so-called embedded cognition. This approach consisted in implement-
ing robotic mechanisms using sensors and actuators of various kinds, either constructed
physically or simulated with computers, programmed with so-called procedural represen-
tations, as opposed to declarative ones. However, if there are no representations, there are no procedural representations either, and Brooks’s move consists rather of placing the
knowledge level directly above the hardware levels, adopting a non-representational view
of the mind. Hence, the computing process is no longer causal and essential to the robot’s
behavior. In practice, embedded devices are control systems that use processors contin-
gently, such as automatic pilots. A modern car has a large number of computers embedded
within its circuitry, but it is still a car. Control systems, such as thermostats and speed
controllers for trains and ships, were available long before the invention of computers; they do useful work and have an objective character, but do not represent anything nor transform
representations, and should be considered standard machinery. The representational and
non-representational views of the mind can be placed within psychological structuralism
and functionalism, respectively (19). The two currents of thought admit interpretations
–without them there would be no mind– and the contention is whether the mind holds rep-
resentations too. Brooks’s robots are machines but not living entities, and the question for
embedded cognition, as well as for strong AI, is how machines can have phenomenological
experience.

4 Machines versus organisms
Machines are human inventions: devices that do useful work in predictable ways. Turing stated in his 1950 paper that the determinism of digital computers is in practice more perfect than that advocated by Laplace, and computers are the paradigmatic case of determinate machines. Indeterminacy involves entropy, which is included neither in the definition nor in the functionality of the TM. Living individuals, on their part, are natural
organisms produced by evolution –they are not human inventions– and have some level of
indeterminacy. Speaking of them as machines may derive from a deterministic conception
of the Universe and the unity of science: the Universe is like a clock, and everything within
it is a machine too; hence there is no need to make the distinction between living entities
and machines. However, rather than holding strong irreducible positions, the notion of
machine may be relaxed, so the less determinate the entity, the lesser its machine-like
nature. In particular, intentionality, agency, and the mind may enjoy an indetermination space where the entropy is neither too low nor too high (13) (Section 12). Low or very low
indeterminacy characterizes machines and automata; conversely, if the entropy is very high,
there is very little structure or none, the behavior is chaotic, and life cannot be sustained.
Biological organs, such as the heart, controlled by automatic biological mechanisms, such
as homeostasis, may be considered standard biological machinery, but biological structures
that support intentionality should not. Indeterminacy may be a necessary condition for
intentionality, but it is not sufficient because what gives rise to phenomenological experience
is still unexplained. Hybrid systems integrate biological with artificial machinery, and may
also integrate machinery with the biological organs supporting higher mental function and
intentionality. Turing explicitly stated in the presentation of the imitation game that
humans are not machines, so the question of whether machines can think has content
(9). The opposition between organisms and machines is central to natural and artificial
computing, and should be considered in a checklist of questions to clarify the positions on
machine intelligence (20).

5 The Mode of Computing


There are alternative formalisms to the TM, such as the Lambda calculus, the theory of Recursive Functions, and the so-called Abacus Computation, which uses registers as in the von Neumann architecture (2). Turing himself showed the equivalence between the TM and the Lambda calculus in the appendix of his 1936 paper (1), and all these machines compute the same set of functions and can be reduced to each other. Even recurrent neural networks are argued to be equivalent in this regard (21). All these formalisms use algorithms: formal or mechanical procedures that transform the representation of the argument of a function into the representation of its value; hence, they correspond to the symbol level, which can be subsumed into a more general level that we call here the algorithmic level. Turing’s thesis, in its mathematical sense, is based on this equivalence.

However, there are other means of transforming representations, by methods or processes, natural or artificial, that do not follow an algorithm, such as analog and quantum computers, as well as other forms of non-conventional computing, such as neuromorphic systems that use memristive materials and devices (e.g. (22)). We refer to such processes or methods, including the algorithmic one, as modes of computing, and replace the symbol level in Figure 1 by a generic level, which we refer to as
the Mode of Computing (13). The use of particular modes requires appropriate interfaces
to express the input and output representations that are transformed by a particular mode,
perhaps in a holistic non-analyzable process.
The mode of computing allows us to conceive other intuitive notions of computing,
different from Turing’s conception. All that is required is to substitute the algorithmic
level by a novel mode with its corresponding representational format in the system levels
hierarchy. Two main kinds of modes are the determinate and indeterminate ones. The TM
is the paradigmatic case of the former, but there are notions of computing in which the
mode of computing is indeterminate and has a level of entropy. A case at hand is relational-
indeterminate computing, used in the Entropic Associative Memory, in which the object of
computing is not the function but the relation, and relational evaluation takes a function
as its argument –which is called the cue– and constructs a novel function out of the cue and
the indeterminate relation in a stochastic way –the recollection (23, 24, 25, 26). Allowing
for alternatives to Turing’s notion of computing invalidates the so-called strong version of the Church-Turing Thesis, which states that the TM is the most powerful computing engine that
can ever exist in any possible sense (27). Different modes underlie different notions of
computing, and comparing them constitutes a category mistake.
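As an illustration of the relational-indeterminate mode just described, the sketch below shows, in schematic form, how a relational evaluation might proceed: a relation is held as weighted sets of possible values for each argument, and evaluating it with a cue function constructs a recollection by sampling a value for each argument stochastically. This is a minimal toy rendering for exposition only; the names (Relation, evaluate) and the weighting scheme are illustrative assumptions and do not reproduce the actual Entropic Associative Memory of (23, 24, 25, 26).

```python
import random

class Relation:
    """Toy indeterminate relation: for each argument, a set of weighted
    possible values, rather than a single value as in a function."""
    def __init__(self, weights):
        # weights: dict mapping argument -> dict of {value: weight}
        self.weights = weights

    def evaluate(self, cue):
        """Relational evaluation: take a function (the cue) and construct
        a novel function (the recollection) stochastically, out of the cue
        and the indeterminate relation."""
        recollection = {}
        for arg, cue_value in cue.items():
            options = self.weights.get(arg, {})
            if not options:
                recollection[arg] = None   # nothing registered for this argument
                continue
            # Bias the stored weights toward the cue's value (illustrative choice).
            biased = {v: w * (2.0 if v == cue_value else 1.0)
                      for v, w in options.items()}
            values, ws = zip(*biased.items())
            recollection[arg] = random.choices(values, weights=ws, k=1)[0]
        return recollection

# Usage: a relation over three arguments, each with several possible values.
rel = Relation({0: {'a': 1, 'b': 3}, 1: {'b': 2, 'c': 2}, 2: {'a': 1}})
cue = {0: 'b', 1: 'c', 2: 'a'}    # the cue is itself a function
print(rel.evaluate(cue))          # a stochastic recollection, e.g. {0: 'b', 1: 'c', 2: 'a'}
```

Repeated evaluations with the same cue may return different recollections, which is precisely the indeterminate character that distinguishes this mode from functional, algorithmic evaluation.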

6 The Computational Demon and Church Thesis


Church Thesis states that the TM computes the set of computable functions. In the
intended sense, a function is computable if it has an effective algorithm: an algorithm
that produces the value of an argument if it is defined, or a mark indicating that the
function has no value for such an argument, for all the objects in the function’s domain.
To know this in general requires knowing whether the machine will halt or not for each
of the function’s arguments, and the so-called Halting Machine (HM) would have to be
available. It is well known that the HM cannot be a TM (e.g. (2)). Hence, for knowing
that a function is computable, it is necessary to find an algorithm that computes it, and
show that it is effective in terms of its particular structure. Church Thesis would be refuted
if a single function that does not have an effective algorithm were found but computed, as
the machine performing such a feat would not be a TM.
Let us think of computing functions through a thought experiment. Suppose that there is a computational demon. This is an omniscient being, like Laplace’s, who knows and is able to compute instantly all functions, total and partial, with finite and infinite domains and co-domains; and, like Maxwell’s demon, it can interact with the material world and read and write representations, so when it is presented with the representation of an argument and a function, it produces the representation of its value, if it is defined, or a mark indicating that such function has no value for such an argument otherwise. Let us suppose that the demon is assisted by a myriad of little computational demons, each capable of performing the same feat for a particular individual function, such that every function has its little demon; once the computational demon receives a function and its argument, it hands the computation to the corresponding little demon, who performs the computation and returns the value to the master.
Let us define the set R of demons that compute computable functions. For this, we call the functions with finite domain and co-domain, both total and partial, finite functions, and all other functions infinite functions. The finite functions are enumerable through a computable function, and each has a unique identifier, which also provides its extensional definition, and can be computed through a table whose columns correspond to its arguments and whose rows to their possible values (13) (Section 11). Hence, all the little demons that compute finite functions are in R. The identifiers are the names of the
corresponding demons, which can use any particular mode of computing to carry on with
the computation, including the table, or an algorithm. That is up to the little demon. The
tables help to picture that there are functions that have a clear pattern relating arguments
and values systematically, and hence are structured objects, but there are others lacking
such a pattern, so the relation between arguments and values is quite arbitrary.
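To make the table picture concrete, the sketch below encodes a small finite partial function extensionally as a 0/1 matrix whose columns are its arguments and whose rows are the possible values, and computes it by looking up the marked row of the argument’s column. The encoding and the helper names are illustrative assumptions, not the construction of (13).

```python
# Minimal sketch (assumed encoding): a finite function as a table whose
# columns are the arguments and whose rows are the possible values.
ARGS = [0, 1, 2, 3]          # the finite domain
VALUES = ['a', 'b', 'c']     # the finite co-domain

def as_table(f):
    """Extensional definition: table[row][col] = 1 iff f(ARGS[col]) == VALUES[row];
    a column with no mark means the (partial) function is undefined there."""
    return [[1 if f.get(a) == v else 0 for a in ARGS] for v in VALUES]

def compute(table, a):
    """Compute the function by table lookup, with a mark for 'no value'."""
    col = ARGS.index(a)
    for row, v in enumerate(VALUES):
        if table[row][col] == 1:
            return v
    return '#'   # mark: the function has no value for this argument

f = {0: 'a', 1: 'c', 3: 'a'}             # a partial finite function (undefined at 2)
t = as_table(f)
print([compute(t, a) for a in ARGS])     # ['a', 'c', '#', 'a']
```

A structured function fills its table with a visible pattern along the columns; an arbitrary assignment of values fills it with no pattern at all, which is the contrast the text draws.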
The computational demon can also generate finite total and partial functions –included
in the enumeration– by assigning to each object in the domain an object in the co-domain,
or no object if the function is partial, such that all the assignments are equally likely. In
this setting, the co-domain can be thought of as a uniform probability distribution. A
function generated by this procedure would have little structure or none, and would be
unlikely to have an intensional definition and an algorithm. The computational demon can
also generate functions with a larger degree of structure by restricting the possible values
of each object in the domain to a particular subset of the co-domain, say by considering
the subsets of possible values of its neighbors, reducing the entropy of the distribution. In
the limit, if the subset considered for each argument has only one element, the entropy of
the corresponding distribution is zero. If such an object is the same for all the arguments,
the generated function is a constant function. More generally, the computational demon
can assign a probability of being selected to each value in the co-domain, including the
no-value case, for each object in the domain, and the entropy associated to the domain as
a whole is the average entropy of the distributions associated to its arguments. It is plau-
sible that the degree of structure of functions depends on the entropy of the distributions
used in their generation, so functions generated using distributions with low or moderate
entropy are very likely to have an algorithm, possibly with low computational complexity;
functions generated using distributions with larger entropy are less likely to be defined
and/or computed; and functions whose generation involves very high or maximal entropy
are likely to be undefinable and non-computable.
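A minimal sketch of this generative picture, under assumed simplifications: each argument gets a probability distribution over the co-domain plus a no-value option, a function is generated by sampling from these distributions, and the entropy associated to the domain is the average entropy of the per-argument distributions. The distribution shapes and the names used here are illustrative only.

```python
import math, random

DOMAIN = list(range(4))
CODOMAIN = ['a', 'b', 'c', None]   # None stands for the no-value case

def entropy(dist):
    """Shannon entropy (bits) of a distribution given as {value: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def generate(dists):
    """Generate a finite (possibly partial) function by sampling, for each
    argument, one value from its associated distribution."""
    return {a: random.choices(*zip(*[(v, p) for v, p in dists[a].items()]), k=1)[0]
            if False else
            random.choices(list(dists[a].keys()), weights=list(dists[a].values()), k=1)[0]
            for a in DOMAIN}

# Maximal-entropy case: every value (including no-value) is equally likely.
uniform = {a: {v: 1 / len(CODOMAIN) for v in CODOMAIN} for a in DOMAIN}
# Zero-entropy case: a single possible value per argument (a constant function).
constant = {a: {'a': 1.0} for a in DOMAIN}

for name, dists in [('uniform', uniform), ('constant', constant)]:
    avg_h = sum(entropy(dists[a]) for a in DOMAIN) / len(DOMAIN)
    print(name, generate(dists), 'average entropy =', round(avg_h, 3))
```

With the uniform distributions the average entropy is maximal and the generated assignment is arbitrary; with the single-valued distributions the entropy is zero and the constant function results, matching the limiting cases described above.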
In any case, a finite function lacking an algorithm can nevertheless be computed through
its table or its extensional definition, and Church Thesis is trivially true for this set. This
is the case for digital computers that have a finite register size. However, the enumeration
function and the amount of memory required to hold the table grow exponentially, and
in practice it is necessary to find algorithms that can be computed with current digital
computers, or use an alternative mode, such as quantum computing.
We now turn to the case of infinite functions. As in the finite case, there are infinite
functions with a great deal of structure, which are very likely to have an intensional def-
inition and an algorithm. A function may be anonymous but described by an algorithm
directly, or the algorithm may be implicitly defined, such as those computing functions learned by current deep learning algorithms or produced by evolutionary computation, although strictly speaking, such functions are finite and only approximate infinite ones. An infinite function is computable by a TM if it is definable by any means and has an effective algorithm; the corresponding demons are also included in R.
Conversely, there are infinite functions with very little or no structure at all, which can be imagined but are not definable and are non-computable in an absolute sense. The computational demon can achieve the feat of choosing values equally likely among the objects included in an infinite enumerable co-domain, and assign one such value to each object of an infinite enumerable domain; hence, it can generate a non-countable number of non-computable functions, and hand them to their corresponding little demons, who are not in R. Infinite functions generated by such a procedure cannot be named, defined, or described by people, so we cannot ask the computational demon to compute them, and their corresponding little demons are locked up in a room, a computational limbo, that will
remain closed forever.
For their part, infinite but definable functions that do not have effective algorithms,
hence cannot be computed by a TM, can be computed by their corresponding little demons
–using a hypothetical mode of computing– who are included in R. Suppose that Ω is one
such function that is computed by its little demon using a mode of computing, artificial
or natural, which cannot be algorithmic. Then R is larger than the set of demons that
compute functions that have an effective algorithm. Hence, Church Thesis is refuted in the
present thought experiment. Suppose further that there is an actual mode of computing,
natural or artificial, which we call the device, that implements the demon of Ω. The
discovery of such a device would refute Church Thesis in its mathematical sense.

7 Natural Computing
If there is natural computing, there should be a natural mode of computing directly above
the brain structures involved in cognition and immediately below the knowledge level. The
definition of such a mode would involve finding the format/s of mental representation/s and the physical mode/s of computing that transform them. In contrast to artificial comput-
ing, in which the entity modifying the representation objectively and the one performing
the interpretation subjectively are different, in natural computing both processes are per-
formed by the same agent, so computing and interpreting are two aspects of the same
phenomenon, and experiencing the world and being conscious are the manifestations of
natural computing. The brain may use different modes for directly transforming represen-
tations, and the phenomenological aspect of intentionality may be due to their particular
properties. There could even be functions lacking an effective algorithm but having a de-
vice used in mental processes, and the brain would be a more powerful computing organ
than the Turing Machine. The question of whether there is natural computing is open to
empirical investigation, and finding it out would come with solving the hard problem of
consciousness. If it is ever found, the representational hypothesis of mind would be up-
held. However, if it turns out that such an object does not exist, the non-representational
hypothesis holds, and the mind is not a computational process, but something else, as yet unknown.

8 Acknowledgments
The author acknowledges the partial support of grant PAPIIT-UNAM IN103524, México.

References
[1] Alan M. Turing. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 42:230–265, 1936. URL https://www.wolframscience.com/prizes/tm23/images/Turing.pdf.

[2] George S. Boolos and Richard C. Jeffrey. Computability and Logic (Third Edition). Cambridge University Press, 1989.

[3] A. D. Baddeley. The concept of working memory: A view of its current state and probable future. Cognition, 10:17–23, 1981. URL https://doi.org/10.1016/0010-0277(81)90020-2.

[4] A. D. Baddeley. Working memory. Science, 255:556–559, 1992. URL https://doi.org/10.1126/science.1736359.

[5] E. Tulving. Précis of Elements of Episodic Memory. Behavioral and Brain Sciences, 7:223–268, 1984. URL https://doi.org/10.1017/S0140525X0004440X.

[6] E. Tulving. Episodic memory: From mind to brain. Annual Review of Psychology, 53:1–25, 2002. URL https://doi.org/10.1146/annurev.psych.53.100901.135114.

[7] Demis Hassabis and Eleanor A. Maguire. Deconstructing episodic memory with construction. Trends in Cognitive Sciences, 11(7):299–306, 2007. URL https://doi.org/10.1016/j.tics.2007.05.001.

[8] Jerry A. Fodor. The Language of Thought. Harvard University Press, 1975.

[9] Alan M. Turing. Computing machinery and intelligence. Mind, 59:433–460, 1950. URL https://academic.oup.com/mind/article/LIX/236/433/986238.

[10] A. Newell and H. Simon. Computer science as empirical inquiry: Symbols and search. Communications of the ACM, 19(3):113–126, 1976. URL https://doi.org/10.1145/360018.360022.

[11] A. Newell. The knowledge level. Artificial Intelligence, 18:87–127, 1982. URL https://doi.org/10.1016/0004-3702(82)90012-1.

[12] David Marr. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. Henry Holt and Co., Inc., New York, NY, USA, 1982.

[13] Luis A. Pineda. The mode of computing. Cognitive Systems Research, 84:101204, 2024. ISSN 1389-0417. URL https://doi.org/10.1016/j.cogsys.2023.101204.

[14] J. R. Searle. Minds, brains, and programs. Behavioral and Brain Sciences, 3(3):417–457, 1980. URL https://doi.org/10.1017/S0140525X00005756.

[15] D. E. Rumelhart, J. L. McClelland, and the PDP Research Group. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations. The MIT Press, Cambridge, Mass., 1986.

[16] G. E. Hinton, J. L. McClelland, and D. E. Rumelhart. Distributed representations (chapter 3). In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations. The MIT Press, Cambridge, Mass., 1986.

[17] R. Brooks. Intelligence without representation. Artificial Intelligence, 47:139–159, 1991. URL https://doi.org/10.1016/0004-3702(91)90053-M.

[18] Rodney A. Brooks. Intelligence without Reason. In Proceedings of the 12th International Joint Conference on Artificial Intelligence - Volume 1, IJCAI'91, pages 569–595, San Francisco, CA, USA, 1991. Morgan Kaufmann Publishers Inc. ISBN 1558601600. URL https://people.csail.mit.edu/brooks/papers/AIM-1293.pdf.

[19] Anthony Chemero. Radical Embodied Cognitive Science. Review of General Psychology, 17(2):145–150, 2013. URL https://doi.org/10.1037/a0032923.

[20] Nicolas Rouleau and Michael Levin. Discussions of machine versus living intelligence need more clarity. Nature Machine Intelligence, 6:1424–1426, 2024. URL https://doi.org/10.1038/s42256-024-00955-y.

[21] G. Z. Sun, H. H. Chen, Y. C. Lee, and C. L. Giles. Turing equivalence of neural networks with second order connection weights. In Proceedings of the IJCNN International Joint Conference on Neural Networks, pages 357–362. IEEE, 1992. URL https://ieeexplore.ieee.org/document/155360.

[22] Martin Ziegler. Novel hardware and concepts for unconventional computing. Scientific Reports, 10:11843, 2020. URL https://doi.org/10.1038/s41598-020-68834-1.

[23] L. A. Pineda, G. Fuentes, and R. Morales. An Entropic Associative Memory. Scientific Reports, 11:6948, 2021. URL https://doi.org/10.1038/s41598-021-86270-7.

[24] L. A. Pineda and R. Morales. Weighted Entropic Associative Memory and Phonetic Information. Scientific Reports, 12:16703, 2022. URL https://doi.org/10.1038/s41598-022-20798-0.

[25] Luis A. Pineda and Rafael Morales. Imagery in the entropic associative memory. Scientific Reports, 13:9553, 2023. URL https://doi.org/10.1038/s41598-023-36761-6.

[26] Rafael Morales and Luis A. Pineda. Entropic hetero-associative memory, 2024. URL https://arxiv.org/abs/2411.02438.

[27] B. Jack Copeland. The Church-Turing Thesis. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, Spring 2019 edition, 2019. URL https://plato.stanford.edu/ENTRIES/church-turing/.
