Is The Brain A Digital Computer?
John R. Searle
University of California/Berkeley
Presidential Address delivered before the Sixty-fourth Annual Pacific Division Meeting of the
American Philosophical Association in Los Angeles, California, March 30, 1990.
There are different ways to present a Presidential Address to the APA; the one I have
chosen is simply to report on work that I am doing right now, on work in progress. I am
going to present some of my further explorations into the computational model of the
mind. [1]
The basic idea of the computer model of the mind is that the mind is the program and
the brain the hardware of a computational system. A slogan one often sees is "the mind
is to the brain as the program is to the hardware." [2]
Let us begin our investigation of this claim by distinguishing three questions:
1. Is the brain a digital computer?
2. Is the mind a computer program?
3. Can the operations of the brain be simulated on a digital computer?
I want to begin the discussion by trying to state as strongly as I can why Cognitivism
has seemed intuitively appealing. There is a story about the relation of human intelligence
to computation that goes back at least to Turing's classic paper (1950), and I believe it is
the foundation of the Cognitivist view. I will call it the Primal Story:
We begin with two results in mathematical logic, the Church-Turing thesis (sometimes
called Church's thesis) and Turing's theorem. For our purposes, the Church-Turing
thesis states that for any algorithm there is some Turing machine that can implement that algorithm.
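To fix ideas, a minimal sketch may help (the machine, its states, and its transition table are invented purely for illustration): a toy Turing machine, simulated in a few lines of Python, that implements the algorithm "append one more 1 to a unary numeral", i.e. the successor function.

```python
# A toy Turing machine (invented for illustration) that computes the successor
# of a unary numeral: it scans right past the input 1s and writes one more 1.

def run_turing_machine(tape, table, state="scan", blank="_"):
    """Run a one-tape Turing machine until it reaches the 'halt' state."""
    tape, head = list(tape), 0
    while state != "halt":
        if head == len(tape):          # extend the tape with blanks as needed
            tape.append(blank)
        write, move, state = table[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# Transition table: (state, symbol read) -> (symbol to write, move, next state)
successor_table = {
    ("scan", "1"): ("1", "R", "scan"),   # skip over the existing 1s
    ("scan", "_"): ("1", "R", "done"),   # write one more 1 at the end
    ("done", "_"): ("_", "R", "halt"),   # then halt
}

print(run_turing_machine("111", successor_table))   # prints 1111: 3 + 1 = 4
```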
The Definition of Computation
So far, we seem to have arrived at a problem. Syntax is not part of physics. This has
the consequence that if computation is defined syntactically then nothing is intrinsically
a digital computer solely in virtue of its physical properties. Is there any way out of this
problem? Yes, there is, and it is a way standardly taken in cognitive science, but it is out
of the frying pan and into the fire. Most of the works I have seen in the computational
theory of the mind commit some variation on the homunculus fallacy. The idea always
is to treat the brain as if there were some agent inside it using it to compute with. A
typical case is David Marr (1982) who describes the task of vision as proceeding from a
two-dimensional visual array on the retina to a three-dimensional description of the
external world as output of the visual system. The difficulty is: Who is reading the
description? Indeed, it looks throughout Marr's book, and in other standard works on the
subject, as if we have to invoke a homunculus inside the system in order to treat its
operations as genuinely computational.
Many writers feel that the homunculus fallacy is not really a problem, because, with
Dennett (1978), they feel that the homunculus can be "discharged". The idea is this: Since
the computational operations of the computer can be analyzed into progressively simpler
units, until eventually we reach simple flip-flop, "yes-no", "1-0" patterns, it seems that the
higher-level homunculi can be discharged with progressively stupider homunculi, until
finally we reach the bottom level of a simple flip-flop that involves no real homunculus at
all. The idea, in short, is that recursive decomposition will eliminate the homunculi.
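Since it is the recursive decomposition itself that carries the weight here, a small worked sketch may make it concrete (the sketch is mine, not Dennett's or Searle's, and all the function names are invented): multiplication is defined by repeated addition, addition by a ripple of one-bit adders, and the adders by nothing but a NAND gate, the "yes-no", "1-0" element at the bottom.

```python
# Illustrative decomposition: each level is defined only in terms of the
# "stupider" level beneath it, bottoming out in a single yes-no element.

def nand(a, b):                      # bottom level: the lone "1-0" primitive
    return 0 if (a and b) else 1

def xor(a, b):                       # built from NANDs alone
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def and_(a, b):
    return nand(nand(a, b), nand(a, b))

def or_(a, b):
    return nand(nand(a, a), nand(b, b))

def full_adder(a, b, carry_in):      # one-bit addition out of gates
    s = xor(a, b)
    return xor(s, carry_in), or_(and_(a, b), and_(s, carry_in))

def add(x, y, width=16):             # ripple-carry addition out of full adders
    result, carry = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

def multiply(x, y):                  # top level: repeated addition
    total = 0
    for _ in range(y):
        total = add(total, x)
    return total

print(multiply(6, 8))                # prints 48
```

Each level is defined solely in terms of the dumber level beneath it, which is all that the "discharging" move requires.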
It took me a long time to figure out what these people were driving at, so in case
someone else is similarly puzzled I will explain an example in detail: Suppose that we have
An unmyelinated axon of this group does not respond when the general illumination
is turned on or off. If the sharp edge of an object either lighter or darker than the
background moves into its field and stops, it discharges promptly and continues
discharging, no matter what the shape of the edge or whether the object is smaller or
larger than the receptive field. (p. 239).
I have never heard anyone say that all this is just the hardware implementation, and
that they should have figured out which program the frog was implementing. I do not
doubt that you could do a computer simulation of the frog's "bug detectors". Perhaps
In this section I turn finally to what I think is, in some ways, the central issue in all
of this, the issue of information processing. Many people in the "cognitive science"
scientific paradigm will feel that much of my discussion is simply irrelevant and they will
argue against it as follows:
There is a difference between the brain and all of these other systems you have been
describing, and this difference explains why a computational simulation in the case of
the other systems is a mere simulation, whereas in the case of the brain a computa-
tional simulation is actually duplicating and not merely modeling the functional
properties of the brain. The reason is that the brain, unlike these other systems, is an
information processing system. And this fact about the brain is, in your words,
"intrinsic". It is just a fact about biology that the brain functions to process
information, and since we can also process the same information computationally,
computational models of brain processes have a different role altogether from
computational models of, for example, the weather. So there is a well defined research
question: "Are the computational procedures by which the brain processes information
the same as the procedures by which computers process the same information?"
What I just imagined an opponent saying embodies one of the worst mistakes in
cognitive science. The mistake is to suppose that in the sense in which computers are used
to process information, brains also process information. To see that that is a mistake,
contrast what goes on in the computer with what goes on in the brain. In the case of the
computer, an outside agent encodes some information in a form that can be processed by
the circuitry of the computer. That is, he or she provides a syntactical realization of the
information that the computer can implement in, for example, different voltage levels.
The computer then goes through a series of electrical stages that the outside agent can
interpret both syntactically and semantically even though, of course, the hardware has no
intrinsic syntax or semantics: It is all in the eye of the beholder. And the physics does not
matter provided only that you can get it to implement the algorithm. Finally, an output
is produced in the form of physical phenomena which an observer can interpret as symbols
with a syntax and a semantics.
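The division of labor just described can be rendered as a toy sketch (the encoding scheme and the function names are invented for illustration): the "hardware" merely shuffles a pattern of 0s and 1s, and it is the encoding and decoding supplied by the outside agent that make the shuffle readable as a computation over numbers.

```python
# A toy division of labor: the "machine" only shuffles 0s and 1s; the outside
# agent supplies the syntax going in and the semantic reading coming out.

def encode(n):
    """The outside agent's choice of syntax: an integer as an 8-bit string."""
    return format(n, "08b")

def machine_step(bits):
    """The 'hardware': a fixed shuffle of the pattern (a one-place left
    rotation). Nothing here mentions numbers, meanings, or doubling."""
    return bits[1:] + bits[:1]

def decode(bits):
    """The outside agent again: the resulting pattern read as an integer."""
    return int(bits, 2)

state = encode(21)            # the agent encodes 21 as '00010101'
state = machine_step(state)   # the machine shuffles it to '00101010'
print(decode(state))          # the agent reads that as 42, i.e. as a doubling
```

Under a different encoding the very same shuffle would count as a different computation, or as none at all; in that sense the syntax and semantics are, as the passage says, in the eye of the beholder.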
But now contrast that with the brain. In the case of the brain, none of the relevant
neurobiological processes are observer relative (though of course, like anything, they can
be described from an observer relative point of view) and the specificity of the neurophysi-
ology matters desperately. To make this difference clear, let us go through an example.
Suppose I see a car coming toward me. A standard computational model of vision will
take in information about the visual array on my retina and eventually print out the
sentence, "There is a car coming toward me". But that is not what happens in the actual
biology. In the biology a concrete and specific series of electro-chemical reactions are set
up by the assault of the photons on the photo receptor cells of my retina, and this entire
6. But the causal explanations we then give do not cite causal properties different from the physics of the implementation and the intentionality of the homunculus.
7. The standard, though tacit, way out of this is to commit the homunculus fallacy. The homunculus fallacy is endemic to computational models of cognition and cannot be removed by the standard recursive decomposition arguments. They are addressed to a different question.
Footnotes
[1] For earlier explorations see Searle (1980) and Searle (1984).
[2] This view is announced and defended in a large number of books and articles, many of which appear to have more or less the same title, e.g. Computers and Thought (Feigenbaum and Feldman, eds. 1963), Computers and Thought (Sharples et al., 1988), The Computer and the Mind (Johnson-Laird, 1988), Computation and Cognition (Pylyshyn, 1984),
"The Computer Model of the Mind" (Block, 1990, forthcoming), and of course,
"Computing Machinery and Intelligence" (Turing, 1950).
[3] People sometimes say that it would have to add six to itself eight times. But that is bad arithmetic. Six added to itself eight times is fifty-four, because six added to itself zero times is still six. It is amazing how often this mistake is made.
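A worked check of the footnote's arithmetic, for anyone inclined to doubt it (the snippet is only an illustration):

```python
# "Six added to itself eight times": start with six and add six eight more times.
total = 6
for _ in range(8):
    total += 6
print(total)   # prints 54, not 48; six times eight is six added to itself seven times
```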
[4] The example was suggested by John Batali.
[5] I am indebted to a remarkably large number of people for discussions of the issues in this paper. I cannot thank them all, but special thanks are due to John Batali, Vinod Goel, Ivan Havel, Kirk Ludwig, Dagmar Searle and Klaus Strelau.
References