
Why Computers Can’t Be Conscious. The Limits of Science, Arguments from… | by Peter D'Autry | Jul 2024 | Medium
https://medium.com/@peterdautry/why-computers-cant-be-conscious-1232f0a77fa4

Photo by Manuel on Unsplash

Humans have transcended their biology: from the aggregate of cells they are made of, they have become something profoundly different. It is therefore baffling how the advances and influence of modern neuroscience led its practitioners to squeeze the wonder of human nature into a brain, and how computer scientists further transduced the brain into the logic gates of silicon circuitry.

“Because humans have “input” in the form of the senses, and “output” in the form of
speech and actions, it has become an AI creed that a convincing mimicry of human input-
output behavior amounts to actually achieving true human qualities in computers.”—Ari
Schulman

Scientists get away with this metaphysical murder because the technologies they
develop are so practically useful (Tallis 2014). It is astonishing how a herd of
philosophers succumbs to the glory of neuroscience and gives those reductionists a
helping hand.

The entertainment media, through films like Her and Ex-Machina, turns this folly
into future likelihood.

This essay concerns popular “neuro” intellectual fashions, stretching from biologistic to computer pseudo-science. It also answers many of the questions and valid criticisms surrounding an essay I published on AI and consciousness:

Why AI Will Never Be Conscious. Nor will it fill your dishwasher anytime soon. (medium.com)

After a conference on consciousness in 1998, Christof Koch, a pioneering neuroscientist researching consciousness, proposed a wager to David Chalmers, a philosopher of mind: the neural correlates of consciousness would be discovered in the brain within 25 years.

The outcome: Philosopher 1, Scientist 0.

By 2023, Koch had still not found the neural indicators of consciousness in the brain. Chalmers was declared the winner and took home a case of fine Portuguese wine.

The success of generative AI and GPTs fueled expectations that AGI (Artificial
General Intelligence) is near and that AI can become conscious. A recent paper by
19 scientists goes further: there are “no obvious technical barriers to building AI
systems which satisfy these indicators (of consciousness).”

There is absolutely no doubt that we can build intelligent machines. However, the hypothesis that we can create a conscious machine is based on a series of fallacious assumptions about science, the brain, and the mind.

Science and its limits


The problem of consciousness
In 2005, Science Magazine chose the “hard problem” of consciousness as the second
most important unanswered problem in science. The hard problem is to explain
how matter can give rise to subjective experience. How does a physical system like
the brain generate consciousness? How can the taste of vanilla be distinguished
from the smell of lavender through patterns of neural activity? How does the brain,
a physical object, create non-physical essences, such as values, purpose, meaning,
feelings, or thoughts?

Despite its spectacular advances, neuroscience cannot answer those questions — none of them.

The problem is intractable and stems from what we consider consciousness to be. While the possibility of conscious machines cannot be categorically refuted, neither can the hypothesis of Russell’s teapot or the Flying Spaghetti Monster. Both arguments are logically sound. But the issue is not whether the argument for a conscious machine is logically coherent, but whether it is a valid hypothesis to entertain.

Neuroscientists strongly believe that solving the easy problems related to consciousness will bring us progress toward an ever-closer correspondence between the structure of brain processes and the structure of conscious experience. However, the hope that closer correspondence between neural activity and inner experience will solve the hard problem is like the belief that if you walk far enough, you will reach the horizon.

There can be no conventional “scientific” theory of consciousness because science can only describe and predict how nature behaves, not what nature is. The latter investigation belongs to the realm of philosophy or, more precisely, metaphysics. Bertrand Russell made this observation a century ago (Russell 2009).

Science explains how something works; philosophy seeks an understanding of what that something means. Science looks at the facts; philosophy investigates their meaning.

Quantum mechanics is an astonishingly successful discipline in physics, and its mathematics describes quantum reality with rock-solid precision. Yet what quantum reality means is a different matter altogether. How can a quantum object be both a particle and a wave? Does consciousness collapse the wave function during measurement? Orthodoxy in the discipline suppressed those types of questions for decades and steered the careers of physicists pursuing them into a dead end. Quantum physicists would exhort: “Shut up and calculate!”

Science reduces all reality to quanta, numbers, or units of measure. It deals with
exteriors, or the empirical realm of reality that can be perceived with the senses and
their extensions. Consciousness is a slippery concept for scientists because it is
utterly subjective and cannot be reduced to its objective correlates.

Solving the hard problem from a quantitative viewpoint is like figuring out how screens are created by the movies they play.

Explanation and interpretation
What science can explain about the text of King Lear are its quantitative dimensions:
the density and weight of the paper, the chemical composition of the ink of the
signs, the number of pages, and so on. That is what you can observe or know about a
text empirically. However, to understand the meaning of King Lear, we need to learn
a language and its symbols before we can engage in intersubjective (dialogical)
understanding with others to embark on an interpretation.

Interpretation is not just a subjective whim; King Lear is not about the frivolity of war. There are good and bad interpretations. And those interpretative understandings are just as important as, and sometimes more important than, empirical explanations.

Another example is the talking cure, or psychotherapy: an interpretative odyssey that shows how interpretations impact well-being and human agency. You will not interpret your anxiety in terms of brainwaves, cortisol, serotonin, and dopamine levels, but in terms of the first-person experience (or qualia) of anxiety itself: alienation, disorientation, discomfort, and overthinking. Those are, contrary to knowledge, phenomenal experiences that can only be understood in an intersubjective context.

There is no instrument to observe reason, values, thoughts, or emotions directly. Alexander Linklater puts it best:

“It’s good to be reminded that, though fMRI scans are remarkable, neurologists still learn
more about mental functioning by spending time with patients.”

In other words, we must engage in dialogue, apprehending meaning and context, to understand a patient’s experience.

The very question of whether matter can give rise to qualia, or subjective experience, cannot be empirically adjudicated and lies outside science. In other words, the belief that matter generates qualia has no justification within science: it is a philosophical (metaphysical) doctrine. All attempts to solve the hard problem have failed, and it stubbornly remains an anomaly for dogmatists in the church of metaphysical materialism.

Yet, there is no lack of trying.

One approach is the non-solution: we can never solve the hard problem because of
how our minds are constructed. This is philosophical silliness. Why should this
approach make sense? Why do philosophy if you don’t want to understand the world
around you?

Another is explaining consciousness away in the form of eliminative materialism, the radical claim that consciousness is illusory; in other words, mental states trick themselves into believing they exist. There is a whole branch of the philosophy of mind advocating this surreal, self-defeating Gödelian knot (Kastrup 2015). It does not require elaborate thinking to see that those philosophers pull the rug from beneath their own argument: if I can’t trust my mind when it tells me I am conscious, why should I trust my mind when it suggests I am not? And if consciousness is an illusion, who or what is having the illusion? It makes you wonder how such an illusion could take place outside consciousness.

A third avenue is modern panpsychism, a more refined and sophisticated form of physicalism which states that everything has a mind: animals, trees, rocks, and even subatomic particles.

“They try to explain animal and human consciousness in terms of more basic forms of
consciousness: the consciousness of basic material entities, such as quarks and electrons.”

When atoms, with their rudimentary bits of “consciousness,” are grouped into complex aggregates, panpsychism assumes all those tiny bits will create “something” that can produce the meaning of life, the taste of salt, or the feeling of love. However, the hard problem remains: panpsychism pushes qualia down to more fundamental levels of matter (elementary particles) yet cannot explain how those aggregates lead to subjective experience.

To paraphrase an old joke: consciousness works in practice, but doesn’t in theory.

Is the mind a machine?

“The mind is fundamentally a sentential computing device, taking sentences as input from
sensory transducers, performing logical operations on them, and issuing other sentences as
output.” (Churchland 1989)

“We see that actions follow from our perceptions, and our perceptions are constructed by
brain activity … Our actions are entirely mechanistic (…). We are programmable
machines, just as we take for granted the fact that the earth is round. ” (Carter 2010)

This is scientism in the guise of neurophilosophy: metaphysics in the service of cognitivist science, a form of philosophy Peter Winch criticized decades ago as “underlabourer” work, a tool to clear up the linguistic confusions that muddy proper scientific explanation (the operationalization of language). In the words of John Locke: “clearing the ground a little, and removing some of the rubbish that lies in the way to knowledge.”

Language, then, becomes a tool to adequately represent a thing and describe what
that thing does.

Now, we can transduce the “who” into “what” and “I” into an aggregate of “its.” The
journey to the truth about human nature involves reducing everything, including
ourselves, to matter. In the thrall of neuroscience, language was — pun intended —
lobotomized.

Raymond Tallis calls the maneuver “thinking by transferred epithet.” (Tallis, 2016) It
makes language, as Wittgenstein would put it, “go on a holiday.” This transference is
at the heart of what Tallis refers to as neuromythology.


“Philosophers spend their time worrying about concepts. Why? The concepts of cognitive
science are mostly just fine. What we need is to get on with discovering the facts.”

The brain does not generate consciousness


Conflating correlation, cause, and identity
Biologism and neurophilosophy are wrong about human beings’ place in nature
because they conflate correlation with causation and identity while putting mind,
biology, and matter at the same abstraction level.

A normally functioning brain is necessary but not sufficient for experience. The correlations of subjective experience with patterns of neural activity do not establish that the brain causes consciousness (Papineau 2001). While subjective experience has objective neurological correlates, that is all science can conclude. The presence of firefighters at a location is correlated with a fire, but the cause of the fire cannot be deduced from that correlation.

Is the perceived correlation of neural firing identical with the experience of, say, a
perception? The experience of the color red is not identical to the knowledge of its
neuro-imaged correlate. The firefighters at the fire are not the fire they were sent to
extinguish. Furthermore, if nerve or neural impulses are assumed to cause qualia or
consciousness, they cannot be identical to each other. If A is identical to B, A cannot
cause B, and vice versa.

Only the monotheistic God is capable of such a feat.

Bewitched language is running riot.

Neuromythology misleads us into believing we know and make sense of more than we do. I will elaborate with two examples.

Terminology is defined so as to be suitable for operationalization. With a handwave, brain-mind barriers are dissolved through a long chain of unchecked metaphors.
Information
Claude Shannon’s groundbreaking theory of information, which lies at the foundation of telecommunications and computer science, defined the unit of information as a “bit,” representing a choice between two equally likely alternatives. Information content is measured by the amount of entropy, or uncertainty, in a message: the more uncertainty, the more information. Hence, the flip of a fair coin carries high entropy (information), as the outcome is uncertain, with a 50/50 chance.
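Shannon’s definition is concrete enough to compute. A minimal sketch (the function name and example probabilities are mine, for illustration only):

```python
import math

def entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin: maximum uncertainty over two outcomes, hence 1 bit.
fair = entropy([0.5, 0.5])

# A heavily biased coin: the outcome is nearly certain,
# so the flip carries far less information.
biased = entropy([0.99, 0.01])

print(fair)    # 1.0
print(biased)  # ~0.08
```

Note that nothing in the computation refers to what the message is about; the measure sees only probabilities, which is exactly Shannon’s point.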

To make this engineering definition of information work, it had to dispense with any notion of meaning. In the words of Shannon:

“The semantic aspects of communication are irrelevant to the engineering aspects.”

It is then a small step to regard the nervous system as a transmission system and to conceptualize the brain as a processing and storage device.

The method of measuring information then became its definition, regardless of how informative (interesting, exciting, pleasant, or shocking) that information is for the recipient. Yet this excision of meaning from the ordinary understanding of information, which etymologically derives from the Latin in-formare (“to give form, shape, or character” to something), dismisses the requirement for understanding.

Therefore, a computer cannot think or understand; all it does is exchange symbols according to a set of rules (algorithms). Computer instruction sets determine how symbols and numerals are exchanged for one another.

The Chinese room thought experiment, conceived by the philosopher John Searle, refutes the Turing test (which takes conversational indistinguishability from a human as a measure of a machine’s intelligence) and the concept of the mind as an information-processing system. The analogy goes as follows:

Someone who is not familiar with Chinese is in a closed space. That person receives
“input”, or Chinese characters, via a slot, manipulates these symbols according to a
set of English rules, and “outputs” Chinese characters that seem like thoughtful
answers. To observers from the outside, the room appears to understand Chinese.
Yet the individual within just obeys commands without understanding any of the
symbols.
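The room can be caricatured in a few lines of code: a lookup table maps input strings to output strings, and nothing anywhere represents what any symbol means. The rule book below is my own toy invention, not Searle’s:

```python
# A toy "rule book": purely syntactic input -> output mappings.
RULE_BOOK = {
    "你好吗": "我很好",          # "How are you?" -> "I am fine"
    "你叫什么名字": "我叫约翰",  # "What is your name?" -> "My name is John"
}

def chinese_room(symbols: str) -> str:
    """Follow the rules mechanically; no understanding is involved."""
    # Unknown input falls back to "Please say that again."
    return RULE_BOOK.get(symbols, "请再说一遍")

print(chinese_room("你好吗"))  # 我很好 -- fluent-looking, understands nothing
```

To an outside observer the replies look competent, yet the program, like Searle’s rule follower, only shuffles uninterpreted symbols.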

Computers and modern GPT-based AI systems make it appear that they can
understand language without producing actual understanding.

Syntax (structure, form, rules, and arrangements of words) is not equal to semantics (meaning and interpretation). Sentences with correct syntax can, just like logically coherent statements, be completely meaningless.

Comprehending meaning is not the same as the rule-based manipulation of symbols, because the symbols remain uninterpreted. The Chinese room challenges the notion that a computer with the right programming can be intelligent and understand things as humans do. It implies that passing the Turing test does not imply actual comprehension or intelligence.

Information divested of semantics also enables panpsychist philosophers of mind to see information, and by extension experience, everywhere. A door carries information: its closed state is 1 and its open state is 0.

“One can find information states in a rock — when it expands and contracts, for example — or even in the different states of an electron. So there will be experience associated with a rock or an electron.” (My italics; Chalmers 1997.)

If rocks are conscious, there can be no denying that the brain is conscious. The mind then depends on the “functional organization” of the information bits in the brain. Yet how this functional organization leads to qualia, no one knows.

Memory
Computers have a memory where information is stored and processed. Yet
neuroscience has not found the location or “address” of memory “files” in the brain,
or even if such “files” exist.

The behaviorist psychologist Karl Lashley induced brain damage in rats to determine the discrete locations of memory and learning. While performing brain lesions on rats trained to navigate a maze, he observed that their performance was unaffected. He concluded that “memory” had no discrete location but was spread all over the brain.

According to the theory put forth by his student, the neuroscientist Donald Hebb, the neural basis of memory lies in the development of “cell assemblies,” collections of cells that represent remembered experiences (neurons that fire together, wire together), and in the selective stimulation or inhibition of the synapses between nerve cells. This change, for Hebb, was only a local one, not a change in the brain as a whole.
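Hebb’s rule has a standard mathematical form: the change in a synaptic weight is proportional to the product of pre- and post-synaptic activity, delta_w = eta * pre * post. A minimal sketch (the learning rate and activity values are illustrative):

```python
def hebbian_update(weight, pre, post, learning_rate=0.1):
    """Hebb's rule: strengthen the synapse when both neurons are active."""
    return weight + learning_rate * pre * post

# Two co-active neurons strengthen their connection over repeated trials;
# the change is local to this one synapse, as Hebb insisted.
w = 0.0
for _ in range(5):
    w = hebbian_update(w, pre=1.0, post=1.0)
print(round(w, 3))  # 0.5

# If either neuron is silent, the weight does not change.
print(hebbian_update(0.5, pre=1.0, post=0.0))  # 0.5
```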

Local versus distributed, atomistic versus gestalt theories of memory: neuroscience is still all over the place when it comes to the location of memory in the brain:


“What has been found are some neural correlates of memory formation — that is, brain
activity that accompanies it — but not the information storage itself.” (Kastrup, 2015)

Connectionism
Can we model the brain by simulating neural patterns to create a mind?

Connectionism, the form of computing on which GPTs and LLMs are based, offers an alternative to classic cognitivist science, which views the mind as a digital computer processing a symbolic language. Brains do not seem to operate according to rules, they have no central processing unit, and their information does not reside in files or addresses. Connectionism simulates the brain as a system of massive interconnections in distributed structures, allowing the connections between simulated neurons to change through learning, training, or “experience”.

The exponential rise in computing power led to the rediscovery of self-organizational ideas in physics and nonlinear mathematics. It also allowed for the inclusion of neurobiological knowledge about the brain’s resilience to damage and the flexibility of cognition.

Unlike the cognitive computational theory, which assumes the brain computes by processing symbols, connectionism starts from the premise that computing begins at the level of the connections among large numbers of simple components, or simulated neurons, that dynamically connect and “self-organize.” Such large artificial neural networks have achieved astonishing success in a range of cognitive capacities, such as pattern recognition, categorization, and translation.
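The contrast with rule-based symbol manipulation can be made concrete. In the tiny hand-wired network below (my illustrative weights, not a trained model), no line of code states a rule for XOR; the function exists only in the pattern of weights as a whole:

```python
import math

def neuron(inputs, weights, bias):
    """One simulated neuron: weighted sum squashed by a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

def xor_net(x1, x2):
    h1 = neuron([x1, x2], [20, 20], -10)     # behaves like OR
    h2 = neuron([x1, x2], [-20, -20], 30)    # behaves like NAND
    return neuron([h1, h2], [20, 20], -30)   # behaves like AND

# XOR is famously not computable by any single neuron,
# yet it arises from the interaction of three.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, round(xor_net(a, b)))
```

In a real connectionist system those weights are not set by hand but adjusted by training, which is precisely why nothing in the finished network resembles a stored symbolic rule.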

Meaning does not reside in the artificial neurons that make up the system but “in complex patterns of activity that emerge from the interactions of many such constituents” (my italics; Varela 2016). I will elaborate on the terms complexity, emergence, and patterns further below.

The spectacular advances in generative AI led to an explosion of interest in connectionism and its latest incarnation, deep learning. The question of how AI’s emergent properties are related to symbolic computation, or how symbolic regularities “emerge” from complex patterns, is one of the hottest fields in AI research today and is not well understood.

Yet the question is misleading. The emergence of regularities from complex patterns is a mathematical process, not a phenomenological or experiential one. The neural network simulated by a GPT-based LLM understands neither the human symbols it is fed nor the output it generates. What it does is transform a question into an answer through a series of mathematical and geometric algorithms.

This is why GPT stands for Generative Pre-trained Transformer: it is trained on human data as input, and its output is interpreted by humans.

Artificial neural networks will undoubtedly advance machine intelligence. Whether those systems will give rise to consciousness, or experience what it is like to be a system of artificial neurons, is an altogether different matter.

The misplaced hope that given enough processing power and complexity, the
computer system will make the “jump” to consciousness still shatters on the hard
problem simply because the map is not the territory.

Fallacious abstractions and misplaced isomorphisms

Appeals to emergence, patterns, and complexity, terms with enormous but fallacious explanatory power, are made to explain the occurrence of “higher” functions that are not evident in the components of complex machines.

AI engineers mentalize machines while mechanizing minds. They aim to dissolve the mind-brain barrier through a highly abstract isomorphism between brains and computers, which they achieve by simulating neural networks with software. Yet silicon circuits, brains, and minds belong to radically different “kinds,” or levels of essence.

Cells, brains, and organisms are all made of matter. But life is more significant than matter. For instance, autopoiesis, the capacity for self-maintenance and self-replication, is unique to life and not found in the physiosphere.

Matter is more fundamental than life but less significant: destroy all life, and atoms
will continue to exist. Destroy all matter, and the biosphere goes along with it. The
physiosphere is part of the biosphere, but not vice versa. The same reasoning
applies to the noosphere, or the psychosocial realm, where minds reside. The
noosphere has a higher significance than the biosphere, which is, in turn, more
fundamental.

Matter, life, and mind: each level adds a mysterious “extra,” creating greater depth
or an increase in qualitative essence that cannot be engineered through science.

Life has not yet been created in a lab, and consciousness has not been created from synthetic life. Creating consciousness from dead matter is an altogether harder challenge. Yet this does not discourage Ray Kurzweil, the high priest of transhumanism, from predicting that we will be able to upload our consciousness into a computer.

Leaving philosophical nuances aside, the jump in qualitative essence seems to emerge
from increased quantitative complexity, but defining what this jump constitutes or
explaining why there is such a jump remains out of our reach.

An analogy can evoke an understanding of why that will remain a mystery. A computer is no different from a system of pipes, valves, and water. The valves are like transistors that can switch on and off, the pipes are circuits, and the water is the electrical current. The isomorphism makes complete sense from a computational perspective: the systems differ only in number or size, not in essence.

Hence, such a system of pipes and valves, given an immensely large and complex
size, can replicate the operations of any existing computer.
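The substrate-independence claim behind the analogy is a textbook result: the NAND gate is universal, and any device that realizes it, transistor or water valve alike, can be composed into any Boolean circuit. A sketch of the logic (the composition is standard; the helper names are mine):

```python
def nand(a: bool, b: bool) -> bool:
    """One universal gate, realizable in silicon or in plumbing."""
    return not (a and b)

# Every other Boolean function can be wired up from NAND alone:
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

print(or_(True, False))   # True
print(and_(True, False))  # False
```

The physics of the switch never enters the logic, which is exactly why the pipes-and-valves computer is, computationally, a computer.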

“Do we have good reasons to believe that a system made of pipes, valves, and water correlates with private conscious inner life the way your brain does? Is there something it is like to be the pipes, valves, and water put together? If you answer ‘yes’ to this question, then logic forces you to start wondering if your house’s sanitation system — with its pipes, valves, and water — is conscious and whether it is murder to turn off the mains valve when you go on vacation.”

But it is also the mind that makes the distinction between order and complexity. Hence, an appeal to patterns, emergence, and complexity puts the cart before the horse, because it is the mind that finds and defines patterns, emergence, and complexity. Neural patterns in the brain are incredibly complex: billions of neurons allow combinations that exceed the number of atoms in the universe.

Such complexity is supposed to disarm our surprise that consciousness “emerges” from it. But the appeal establishes complexity, pattern, and emergence as little more than names for what we do not understand.
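The scale claim is easy to check on the back of an envelope. Assuming roughly 86 billion neurons (a common estimate) and, as a drastic simplification, only binary on/off states per neuron:

```python
import math

NEURONS = 86_000_000_000       # rough count for a human brain
ATOMS_LOG10 = 80               # ~10**80 atoms in the observable universe

# The number of on/off configurations is 2**NEURONS. Compare orders of
# magnitude via logarithms rather than materializing a huge integer.
log10_states = NEURONS * math.log10(2)

print(log10_states > ATOMS_LOG10)  # True, by roughly 26 billion orders of magnitude
```

Even this crude count, which ignores graded firing rates and synaptic weights, dwarfs the atom count; the real combinatorics are far larger still.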


Consciousness extends beyond the brain: body and lifeworld


Some of the most poignant arguments against AI and artificial consciousness stem
from the existentialist and phenomenological traditions, notably those of Edmund
Husserl, Martin Heidegger, and Maurice Merleau-Ponty (who was well-steeped in
the neuroscience of his time). Their views, once ridiculed and ignored by AI
scientists, have found their way into the latest AI research.

We are not a brain in a vat, as many AI researchers implicitly conceive of intelligence and consciousness. According to Merleau-Ponty’s concept of the “lived body,” physical interaction with the environment is essential for proper neural development and perception:

“If a kitten is carried around during its critical early months of brain development, allowed to see its environment but prevented from moving independently, it would develop severe visual impairments. While the kitten’s eyes and optic nerves would be physically normal, its higher visual processing abilities, like depth perception and object recognition, would be severely compromised.”

The brain, which exists within the physical confines of the human body, is intricately interconnected with the physiosphere, biosphere, and noosphere. In the context of human existence, we are fundamentally linked to a collective of minds and to the intricate social and cultural constructs shaped by these individual selves and cultures.

Heidegger clarified that our “being” cannot be separated from its background context: our consciousness and self are not stored inside a body as in a cabinet. They are part and parcel of a meaningful life-world that is co-created with others.

While the embodied cognition movement and its integration into AI, based on ideas developed decades earlier by Husserl, Heidegger, and Merleau-Ponty, represent a step forward, its ontological foundation remains the brain and nervous system. Supersizing the mind does not get cognitivist scientists past the problem of how a brain-body can be about occurrences other than itself and, out of that “aboutness,” create a lifeworld for its owner and others.

Furthermore, embodied cognition leads to the paradox that reasoning, from a computational perspective, is significantly simpler than computing perception and
sensorimotor skills. We cannot yet build enough intelligence into vehicles to allow them to drive autonomously, despite numerous and repeatedly failed predictions that autonomous vehicles were near. In spite of all the hype, AI cannot and will not fill your dishwasher anytime soon.

Adding exponentially more computational resources will not solve the hard problem; it will only yield a vastly improved simulation. Applying those resources remains open-ended progress in AI’s walk toward the (unreachable) horizon of consciousness.

Conclusion
Consciousness and mind are much more than a mirror of nature:

“Cognition is not the representation of a pregiven world by a pregiven mind but is rather the enactment of a world and a mind on the basis of a history of the variety of actions that a being in the world performs.” (Varela 2016)
The challenge posed by phenomenologists remains. The brain and nervous system
are perceived, dissected, and experimented on by and through the minds of scientists:
the brain is inside a mind and consciousness. Not vice versa.

What if the hard problem of consciousness, and of consciousness in machines, is misconstrued?

What if consciousness is fundamental, and matter is an epiphenomenon of consciousness, so that we instead face the hard problem of matter? While this sounds counterintuitive, reflect on this:

Do we know anything that exists outside our experience (sensing, perceiving, thinking…)?


If you answer no to this question, then you understand that the belief that science can create consciousness is nothing but a Disneyfication of Artificial Intelligence.

The ghost in the machine is just that: a ghost.

Sources
Carter Rita, Mapping the Mind. University of California Press, 2010.

Chalmers David, The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press, 1997.

Churchland Patricia, Neurophilosophy: Toward a Unified Science of the Mind-Brain. The MIT Press, 1989.

Dennett Daniel, Consciousness Explained. Little, Brown and Co., 1991.

Kastrup Bernardo, Brief Peeks Beyond: Critical Essays on Metaphysics, Neuroscience, Free Will, Skepticism and Culture. John Hunt Publishing, 2015.

Papineau David, “The Rise of Physicalism,” in C. Gillett & B.M. Loewer (eds.), Physicalism and its Discontents. Cambridge University Press, 2001.

Russell Bertrand, Human Knowledge: Its Scope and Limits. Routledge Classics, 2009.

Tallis Raymond, In Defence of Wonder and Other Philosophical Reflections. Routledge, 2014.

Tallis Raymond, Aping Mankind. Routledge Classics, 2016.

Varela Francisco, Thompson Evan, Rosch Eleanor, The Embodied Mind: Cognitive Science and Human Experience, revised edition. MIT Press, 2016.
