Why Computers Can’t Be Conscious. The Limits of Science, Arguments From… | by Peter D'Autry | Jul 2024 | Medium
Humans have transcended their biology, transforming the aggregate of cells they are made of into something profoundly different. It is therefore baffling how the advances and influence of modern neuroscience have led its practitioners to squeeze the wonder of human nature into a brain, and how computer scientists have further transduced the brain into the logic gates of silicon circuitry.
“Because humans have ‘input’ in the form of the senses, and ‘output’ in the form of speech and actions, it has become an AI creed that a convincing mimicry of human input-output behavior amounts to actually achieving true human qualities in computers.” — Ari Schulman
https://medium.com/@peterdautry/why-computers-cant-be-conscious-1232f0a77fa4
Scientists get away with this metaphysical murder because the technologies they
develop are so practically useful (Tallis 2014). It is astonishing how a herd of
philosophers succumbs to the glory of neuroscience and gives those reductionists a
helping hand.
The entertainment media, through films like Her and Ex-Machina, turns this folly
into future likelihood.
By the deadline of his 25-year bet with the philosopher David Chalmers, the neuroscientist Christof Koch had still not found the neural indicators of consciousness in the brain. In 2023, Chalmers was declared the winner and took home a case of fine Portuguese wine.
The success of generative AI and GPTs fueled expectations that AGI (Artificial
General Intelligence) is near and that AI can become conscious. A recent paper by
19 scientists goes further: there are “no obvious technical barriers to building AI
systems which satisfy these indicators (of consciousness).”
There is absolutely no doubt that we can build intelligent machines. However, the hypothesis that we can create a conscious machine rests on a series of fallacious assumptions.
The problem is intractable and stems from what we consider consciousness to be.
While the possibility of conscious machines cannot be categorically refuted, neither can the hypothesis of Russell’s teapot or the Flying Spaghetti Monster. Both are logically coherent propositions. But the issue is not whether the argument for a conscious machine is logically coherent, but whether it is a valid hypothesis to entertain.
Science reduces all reality to quanta, numbers, or units of measure. It deals with
exteriors, or the empirical realm of reality that can be perceived with the senses and
their extensions. Consciousness is a slippery concept for scientists because it is
utterly subjective and cannot be reduced to its objective correlates.
Solving the hard problem from a quantitative viewpoint is like figuring out how
screens are created by the movies they play.
Explanation and interpretation
What science can explain about the text of King Lear are its quantitative dimensions:
the density and weight of the paper, the chemical composition of the ink of the
signs, the number of pages, and so on. That is what you can observe or know about a
text empirically. However, to understand the meaning of King Lear, we need to learn
a language and its symbols before we can engage in intersubjective (dialogical)
understanding with others to embark on an interpretation.
Interpretation is not just a subjective whim; King Lear is not about the frivolity of
war. There are good and bad interpretations. And those interpretative understandings are just as important as, and sometimes more important than, empirical explanations.
“It’s good to be reminded that, though fMRI scans are remarkable, neurologists still learn
more about mental functioning by spending time with patients.”
The very question of whether matter can give rise to qualia or subjective experience
cannot be empirically adjudicated, and lies outside science. In other words, the belief
that matter generates qualia has no justification within science: it is a philosophical
(metaphysical) doctrine. All attempts to solve the hard problem have failed, and it
stubbornly remains an anomaly for dogmatists in the church of metaphysical
materialism.
One approach is the non-solution: we can never solve the hard problem because of
how our minds are constructed. This is philosophical silliness. Why should this
approach make sense? Why do philosophy if you don’t want to understand the world
around you?
“They try to explain animal and human consciousness in terms of more basic forms of
consciousness: the consciousness of basic material entities, such as quarks and electrons.”
When atoms, with their rudimentary bits of “consciousness,” are grouped together in complex aggregates, panpsychism assumes all those tiny bits will create “something” that can produce the meaning of life, the taste of salt, or the feeling of love. Yet the hard problem remains a mystery: panpsychism merely pushes qualia down to more fundamental levels of matter (elementary particles) and still cannot explain how those aggregates give rise to unified subjective experience.
“The mind is fundamentally a sentential computing device, taking sentences as input from
sensory transducers, performing logical operations on them, and issuing other sentences as
output.” (Churchland 1989)
“We see that actions follow from our perceptions, and our perceptions are constructed by brain activity … Our actions are entirely mechanistic (…). We are programmable machines, just as we take for granted the fact that the earth is round.” (Carter 2010)
Language, then, becomes a tool to adequately represent a thing and describe what
that thing does.
Now, we can transduce the “who” into “what” and “I” into an aggregate of “its.” The
journey to the truth about human nature involves reducing everything, including
ourselves, to matter. In the thrall of neuroscience, language was — pun intended —
lobotomized.
Raymond Tallis calls the maneuver “thinking by transferred epithet.” (Tallis, 2016) It
makes language, as Wittgenstein would put it, “go on a holiday.” This transference is
at the heart of what Tallis refers to as neuromythology.
“Philosophers spend their time worrying about concepts. Why? The concepts of cognitive
science are mostly just fine. What we need is to get on with discovering the facts.”
A normally functioning brain is necessary but not sufficient for experience. The
correlations of subjective experience with patterns of neural activity do not
establish the causality of brain-generated consciousness (Papineau 2001). While
subjective experience has objective neurological correlates, that is all science can
conclude. The presence of firefighters at a location can be correlated with a fire, but
the cause of the fire cannot be deduced from that correlation.
Is the perceived correlation of neural firing identical with the experience of, say, a
perception? The experience of the color red is not identical to the knowledge of its
neuro-imaged correlate. The firefighters at the fire are not the fire they were sent to
extinguish. Furthermore, if nerve or neural impulses are assumed to cause qualia or
consciousness, they cannot be identical to each other. If A is identical to B, A cannot
cause B, and vice versa.
Neuromythology misleads us into believing we know and make sense of more than
we do. I will elaborate with two examples.
An event with a 50/50 chance, such as a fair coin flip, carries high entropy (information), as its outcome is maximally uncertain.
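To make the notion precise: Shannon entropy quantifies this uncertainty in bits, and a fair 50/50 outcome carries exactly one bit. A minimal sketch (the probability values are illustrative, not from the text):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over outcomes with p > 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin (50/50) is maximally uncertain: exactly 1 bit of entropy.
fair_coin = shannon_entropy([0.5, 0.5])      # 1.0

# A heavily biased coin is far more predictable, so it carries less information.
biased_coin = shannon_entropy([0.99, 0.01])  # roughly 0.08 bits
```

The less predictable the outcome, the more information its occurrence conveys; certainty carries none.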
The Chinese room thought experiment, conceived by the philosopher John Searle,
refutes the Turing test (a measure of a machine’s intelligence as equivalent to a
human’s) and the concept of the mind as an information processing system. The
analogy goes as follows:
Someone who is not familiar with Chinese is in a closed space. That person receives
“input”, or Chinese characters, via a slot, manipulates these symbols according to a
set of English rules, and “outputs” Chinese characters that seem like thoughtful
answers. To observers from the outside, the room appears to understand Chinese.
Yet the individual within just obeys commands without understanding any of the
symbols.
Computers and modern GPT-based AI systems appear to understand language without possessing any actual understanding.
Syntax (structure, form, rules, and arrangements of words) is not equal to semantics
(meaning and interpretation). Sentences with correct syntax can, just like logically
coherent statements, be completely meaningless.
“One can find information states in a rock — when it expands and contracts, for
example — or even in the different states of an electron. So there will be experience
associated with a rock or an electron.” (my italics; Chalmers 1997)
If rocks are conscious, there can be no denying that the brain is conscious. The mind then depends on the “functional organization” of the information bits in the brain. Yet how this functional organization leads to qualia, no one knows.
Memory
Computers have a memory where information is stored and processed. Yet
neuroscience has not found the location or “address” of memory “files” in the brain,
or even if such “files” exist.
According to a theory put forth by the neuroscientist Donald Hebb, the neural basis of memory lies in the development of “cell assemblies,” collections of neurons that fire together and thereby wire together, which represent remembered experiences through the selective stimulation or inhibition of synapses between nerve cells. This change, for Hebb, was only a local one, not a change in the brain as a whole.
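Hebb’s rule, often summarized as “neurons that fire together wire together,” can be sketched as a weight update driven purely by co-activation. This is a toy illustration under assumed values, not a biological model; the learning rate and activity traces are invented:

```python
def hebbian_update(weight, pre_active, post_active, lr=0.1):
    """Hebb's rule: strengthen the synapse only when the presynaptic and
    postsynaptic neurons are active at the same time (both values are 1)."""
    return weight + lr * pre_active * post_active

# Repeated co-activation strengthens the connection; uncorrelated firing
# (only one side active) leaves the weight untouched.
w = 0.0
for pre, post in [(1, 1), (1, 1), (1, 0), (0, 1), (1, 1)]:
    w = hebbian_update(w, pre, post)
# Three co-activations out of five events: w is now approximately 0.3
```

The change is local to each synapse, which is exactly Hebb’s point: no global “file” of the memory exists anywhere.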
“What has been found are some neural correlates of memory formation — that is, brain
activity that accompanies it — but not the information storage itself.” (Kastrup, 2015)
Connectionism
Can we model the brain by simulating neural patterns to create a mind?
Connectionism, a form of computing on which GPTs and LLMs are based, offers an
alternative to classic cognitivist science, which views the mind as similar to a digital
computer processing a symbolic language. Brains do not seem to operate according to rules, they have no central processing unit, and their information does not reside in files or addresses. Connectionism simulates the brain as a system of massive interconnections in distributed structures, allowing the connections between simulated neurons to change through learning, training, or “experience.”
Unlike the cognitive computational theory, which assumes the brain computes
through processing symbols, connectionism starts from the premise that computing
starts at the level of the connections of large amounts of simple components, or
simulated neurons, that dynamically connect and “self-organize.” Such large artificial neural networks have achieved astonishing success in a range of cognitive capacities, such as pattern recognition, categorization, and translation.
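A minimal connectionist sketch of this idea: a single simulated neuron learns the logical OR pattern solely by adjusting its connection weights from examples, with no rule for OR stored anywhere. The data, learning rate, and training loop are illustrative assumptions:

```python
# A single simulated neuron learns OR purely from examples: the "knowledge"
# ends up distributed in the connection weights, never stored as a rule.
# (Toy example; real networks use many such units in deep layers.)
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]
bias = 0.0
lr = 0.1

def predict(x):
    # Fire (output 1) when the weighted sum of inputs crosses the threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

# Perceptron learning rule: nudge each weight in proportion to the error.
for _ in range(20):  # a few passes over the data suffice for this toy case
    for x, target in examples:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        bias += lr * error

print([predict(x) for x, _ in examples])  # learned OR: [0, 1, 1, 1]
```

After training, the behavior resides entirely in a handful of numeric weights; inspecting them reveals no symbol or rule, which is the connectionist point.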
Meaning does not reside in the artificial neurons that make up the system but “in
complex patterns of activity that emerge from the interactions of many such
constituents” my italics (Varela 2016). I will elaborate on terms such as complexity,
emergence, and patterns further below.
Yet the question is misleading. The emergence of regularities from complex patterns is a mathematical process, not a phenomenological or experiential one. A neural-network simulation such as a GPT or LLM understands neither the human symbols it is fed nor the output it generates. What it does is transform a question into an answer through a series of mathematical and geometric algorithms.
A GPT, after all, stands for Generative Pre-trained Transformer: it is pre-trained on human data as input, and its output must be interpreted by humans.
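That transformation can be caricatured with a bigram model. The sketch below uses an invented corpus and a frequency counter in place of a real transformer, but it shows the principle: the next word is predicted purely from training statistics, with no understanding involved:

```python
from collections import Counter, defaultdict

# A drastically simplified stand-in for "predict the next token from
# statistics over training text". Real GPTs use transformer networks,
# but the output likewise follows learned correlations, not meaning.
# (The corpus is invented for illustration.)
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1  # count which word follows which

def next_token(word):
    # Emit the statistically most frequent continuation seen in training.
    return bigrams[word].most_common(1)[0][0]

print(next_token("the"))  # "cat": seen twice, versus "mat" and "fish" once each
```

Scaling this up in parameters and data improves the statistics enormously; it does not, by itself, introduce comprehension.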
The misplaced hope that given enough processing power and complexity, the
computer system will make the “jump” to consciousness still shatters on the hard
problem simply because the map is not the territory.
Cells, brains, and organisms are all made of matter. But life is more significant than
matter. For instance, autopoiesis, the ability for self-maintenance and self-
replication, is unique to life and not found in the physiosphere.
Matter is more fundamental than life but less significant: destroy all life, and atoms
will continue to exist. Destroy all matter, and the biosphere goes along with it. The
physiosphere is part of the biosphere, but not vice versa. The same reasoning
applies to the noosphere, or the psychosocial realm, where minds reside. The
noosphere has a higher significance than the biosphere, which is, in turn, more
fundamental.
Matter, life, and mind: each level adds a mysterious “extra,” creating greater depth
or an increase in qualitative essence that cannot be engineered through science.
Life has not yet been created in a lab, and consciousness has not been created from
synthetic life. To create it from dead matter is an altogether harder challenge. Yet
this does not discourage Ray Kurzweil, the high priest of transhumanism, from
predicting we will be able to upload our consciousness into a computer.
Leaving philosophical nuances aside, the jump in qualitative essence seems to emerge
from increased quantitative complexity, but defining what this jump constitutes or
explaining why there is such a jump remains out of our reach.
Hence, such a system of pipes and valves, given sufficient size and complexity, can replicate the operations of any existing computer.
“Do we have good reasons to believe that a system made of pipes, valves, and water correlates with private conscious inner life the way your brain does? Is there something it is like to be the pipes, valves, and water put together? If you answer ‘yes’ to this question, then logic forces you to start wondering if your house’s sanitation system — with its pipes, valves, and water — is conscious and whether it is murder to turn off the mains valve when you go on vacation.”
But it is also the mind that makes the distinction between order and complexity. Hence, an appeal to patterns, emergence, and complexity puts the cart before the horse, because it is the mind that finds and defines patterns, emergence, and complexity. Neural patterns in the brain are incredibly complex: billions of neurons allow combinations that exceed the number of atoms in the universe.
Such complexity should not disarm our surprise that consciousness “emerges” from it; establishing complexity, pattern, and emergence as the criteria amounts to admitting that we do not understand them.
“If a kitten is carried around during its critical early months of brain development, allowed to see its environment but prevented from moving independently, it would develop severe visual impairments. While the kitten’s eyes and optic nerves would be physically normal, its higher visual processing abilities, like depth perception and object recognition, would be severely compromised.”
The brain, which exists within the physical confines of the human body, is intricately interconnected with the physiosphere, biosphere, and noosphere. As humans, we are fundamentally linked to a collective of minds and to the intricate social and cultural constructs shaped by those individual selves.
Heidegger clarified that our “being” cannot be separated from its background context: our consciousness and self are not stored inside a body as if in a cabinet. They are part and parcel of a meaningful life-world that is co-created with others.
While the embodied cognition movement and its integration into AI, based on ideas developed decades earlier by Husserl, Heidegger, and Merleau-Ponty, represent a step forward, its ontological foundation remains in the brain and nervous system. Supersizing the mind does not get cognitivist scientists beyond the problem of how a brain-body can be about occurrences other than itself and, out of that “aboutness,” create a lifeworld for its owner and others.
sensorimotor skills. We cannot yet build enough intelligence into vehicles to allow them to drive autonomously, despite numerous and repeatedly failed predictions that autonomous vehicles were near. In spite of all the hype, AI cannot fill your dishwasher, and it won’t be able to any time soon.
Adding exponentially more computational resources will not solve the hard problem; it will only yield a vastly improved simulation. The application of those resources remains open-ended progress in AI’s walk toward the (unreachable) horizon of consciousness.
Conclusion
Consciousness and mind are much more than a mirror of nature:
If you answer no to this question, then you understand the belief that science can
create consciousness is nothing but a Disneyfication of Artificial Intelligence.
Sources
Carter Rita, Mapping the Mind. University of California Press, 2010.
Kastrup Bernardo, Brief Peeks Beyond: Critical Essays on Metaphysics, Neuroscience, Free Will, Skepticism and Culture. John Hunt Publishing, 2015.
Papineau David, “The Rise of Physicalism,” in C. Gillett & B.M. Loewer (eds.), Physicalism and its Discontents. Cambridge: Cambridge University Press, 2001.
Russell Bertrand, Human Knowledge: Its Scope and Limits. London: Routledge Classics, 2009.
Tallis Raymond.
Varela Francisco, Thompson Evan, Rosch Eleanor, The Embodied Mind, revised edition: Cognitive Science and Experience. MIT Press, 2016.