Natural-Born Cyborgs?

Andy Clark

School of Cognitive and Computing Sciences
University of Sussex
Brighton BN1 9QH
U.K.

‘Soon, perhaps, it will be impossible to tell where human ends and machine begins'.
Maureen McHugh, China Mountain Zhang, p. 214

'The machine is us, our processes, an aspect of our embodiment ... We are
responsible for boundaries. We are they ... I would rather be a cyborg than a
goddess'.
Donna Haraway, "A Cyborg Manifesto", in Simians, Cyborgs, and
Women, pp. 180-181

Cognitive technologies, ancient and modern, are best understood (I suggest) as
deep and integral parts of the problem-solving systems we
identify as human intelligence. They are best seen as proper parts of the
computational apparatus that constitutes our minds. Understanding what is
distinctive about human reason thus involves understanding the complementary
contributions of both biology and (broadly speaking) technology, as well as the
dense, reciprocal patterns of causal and co-evolutionary influence that run
between them.

My body is an electronic virgin. I incorporate no silicon chips, no retinal or cochlear
implants, no pacemaker. I don't even wear glasses (though I do wear clothes). But I
am slowly becoming more and more a Cyborg. So are you. Pretty soon, and still
without the need for wires, surgery or bodily alterations, we shall be kin to the
Terminator, to Eve 8, to Cable...just fill in your favorite fictional Cyborg. Perhaps we
already are. For we shall be Cyborgs not in the merely superficial sense of combining
flesh and wires, but in the more profound sense of being human-technology
symbionts: thinking and reasoning systems whose minds and selves are spread across
biological brain and non-biological circuitry.
This may sound like futuristic mumbo-jumbo, and I happily confess that I wrote
the preceding paragraph with an eye to catching your attention, even if only by the
dangerous route of courting your disapproval! But I do believe that it is the plain and
literal truth. I believe, to be clear, that it is above all a SCIENTIFIC truth, a reflection
of some deep and important facts about (a whiff of paradox here?) our special, and
distinctively HUMAN nature. And certainly, I don’t think this tendency towards
cognitive hybridization is a modern development. Rather, it is an aspect of our humanity
which is as basic and ancient as the use of speech, and which has been extending
its territory ever since. We see some of the ‘cognitive fossil trail’ of the Cyborg trait
in the historical procession of potent Cognitive Technologies that begins with speech
and counting, morphs first into written text and numerals, then into early printing
(without moveable typefaces), on to the revolutions of moveable typefaces and the
printing press, and most recently to the digital encodings that bring text, sound and
image into a uniform and widely transmissible format. Such technologies, once up-
and-running in the various appliances and institutions that surround us, do far more
than merely allow for the external storage and transmission of ideas.
What’s more, their use, reach and transformative powers are escalating. New
waves of user-sensitive technology will bring this age-old process to a climax, as our
minds and identities become ever more deeply enmeshed in a non-biological matrix
of machines, tools, props, codes and semi-intelligent daily objects. We humans have
always been adept at dovetailing our minds and skills to the shape of our current tools
and aids. But when those tools and aids start dovetailing back – when our technologies
actively, automatically, and continually tailor themselves to us, just as we do to them –
then the line between tool and user becomes flimsy indeed. Such technologies will be
less like tools and more like part of the mental apparatus of the person. They will
remain tools in only the thin and ultimately paradoxical sense in which my own
unconsciously operating neural structures (my hippocampus, my posterior parietal
cortex) are tools. I do not really 'use' my brain. There is no user quite so ephemeral.
Rather, the operation of the brain makes me who and what I am. So too with these
new waves of sensitive, interactive technologies. As our worlds become smarter, and
get to know us better and better, it becomes harder and harder to say where the world
stops and the person begins.
What are these technologies? They are many, and various. They include potent,
portable machinery linking the user to an increasingly responsive world-wide-web.
But they include also, and perhaps ultimately more importantly, the gradual smartening-up
and interconnection of the many everyday objects which populate our homes
and offices. This brief note, however, is not going to be about new technology. Rather,
it is about us, about our sense of self, and about the nature of the human mind. The
goal is not to guess at what we might soon become, but to better appreciate what we
already are: creatures whose minds are special precisely because they are tailor-made
for multiple mergers and coalitions.
Cognitive technologies, ancient and modern, are best understood (I suggest) as
deep and integral parts of the problem-solving systems we identify as human intelligence.
They are best seen as proper parts of the computational apparatus that
constitutes our minds. If we do not always see this, or if the idea seems outlandish or
absurd, that is because we are in the grip of a simple prejudice: the prejudice that
whatever matters about MY mind must depend solely on what goes on inside my own
biological skin-bag, inside the ancient fortress of skin and skull. But this fortress has
been built to be breached. It is a structure whose virtue lies in part in its capacity to
delicately gear its activities to collaborate with external, non-biological sources of
order so as (originally) to better solve the problems of survival and reproduction.
Thus consider two brief examples: one old (see the Epilogue to Clark (1997)) and
one new. The old one first. Take the familiar process of writing an academic paper.
Confronted, at last, with the shiny finished product, the good materialist may find
herself congratulating her brain on its good work. But this is misleading. It is
misleading not simply because (as usual) most of the ideas were not our own anyway,
but because the structure, form and flow of the final product often depends heavily on
the complex ways the brain co-operates with, and depends on, various special features
of the media and technologies with which it continually interacts. We tend to think of
our biological brains as the point source of the whole final content. But if we look a
little more closely what we may often find is that the biological brain participated in
some potent and iterated loops through the cognitive technological environment. We
began, perhaps, by looking over some old notes, then turned to some original sources.
As we read, our brain generated a few fragmentary, on-the-spot responses which were
duly stored as marks on the page, or in the margins. This cycle repeats, pausing to
loop back to the original plans and sketches, amending them in the same fragmentary,
on-the-spot fashion. This whole process of critiquing, re-arranging, streamlining and
linking is deeply informed by quite specific properties of the external media, which
allow the sequence of simple reactions to become organized and grow (hopefully)
into something like an argument. The brain's role is crucial and special. But it is not
the whole story. In fact, the true power and beauty of the brain's role is that it acts as a
mediating factor in a variety of complex and iterated processes which continually
loop between brain, body and technological environment. And it is this larger system
which solves the problem. We thus confront the cognitive equivalent of Dawkins'
(1982) vision of the extended phenotype. The intelligent process just is the spatially
and temporally extended one which zigzags between brain, body and world.
Or consider, to take a superficially very different kind of case, the role of sketching
in certain processes of artistic creation. Van Leeuwen, Verstijnen and Hekkert (1999)
offer a careful account of the creation of certain forms of abstract art, depicting such
creation as heavily dependent upon “an interactive process of imagining, sketching
and evaluating [then re-sketching, re-evaluating, etc.]" (op cit p. 180). The question
the authors pursue is: why the need to sketch? Why not simply imagine the final
artwork “in the mind’s eye” and then execute it directly on the canvas? The answer
they develop, in great detail and using multiple real case-studies, is that human
thought is constrained, in mental imagery, in some very specific ways in which it is
not constrained during on-line perception. In particular, our mental images seem to
be more interpretatively fixed: less able to reveal novel forms and components.
Suggestive evidence for such constraints includes the intriguing demonstration
(Chambers and Reisberg (1989)) that it is much harder to discover (for the first time)
the second interpretation of an ambiguous figure (such as the duck/rabbit) in recall
and imagination than when confronted with a real drawing. Good imagers, who
proved unable to discover a second interpretation in the mind's eye, were able
nonetheless to draw what they had seen from memory and, by then perceptually
inspecting their own unaided drawing, to find the second interpretation. Certain forms
of abstract art, Van Leeuwen et al go on to argue, likewise, depend heavily on the
deliberate creation of “multi-layered meanings” – cases where a visual form, on
continued inspection, supports multiple different structural interpretations. Given the
postulated constraints on mental imagery, it is likely that the discovery of such
multiply interpretable forms will depend heavily on the kind of trial and error process
in which we first sketch and then perceptually (not merely imaginatively) re-
encounter visual forms, which we can then tweak and re-sketch so as to create a
product that supports an increasingly multi-layered set of structural interpretations.
This description of artistic creativity is strikingly similar, it seems to me, to our story
about academic creativity. The sketch-pad is not just a convenience for the artist, nor
simply a kind of external memory or durable medium for the storage of particular
ideas. Instead, the iterated process of externalizing and re-perceiving is integral to the
process of artistic cognition itself.
One useful way to understand the cognitive role of many of our self-created
cognitive technologies is thus as affording complementary operations to those that
come most naturally to biological brains. Consider here the connectionist image
(McClelland, Rumelhart and the PDP Research Group 1986, Clark 1989) of biological
brains as pattern-completing engines. Such devices are adept at linking patterns
of current sensory input with associated information: you hear the first bars of the
song and recall the rest, you see the rat’s tail and conjure the image of the rat.
Computational engines of that broad class prove extremely good at tasks such as
sensori-motor co-ordination, face recognition, voice recognition, etc. But they are not
well-suited to deductive logic, planning, and the typical tasks of sequential reason.
They are, roughly speaking, “Good at Frisbee, Bad at Logic” – a cognitive profile that
is at once familiar and alien. Familiar, because human intelligence clearly has
something of that flavor. Yet alien, because we repeatedly transcend these limits,
planning family vacations, running economies, solving complex sequential problems,
etc., etc. A powerful hypothesis, which I first encountered in Rumelhart, Smolensky,
McClelland and Hinton (1986), is that we transcend these limits, in large part, by
combining the internal operation of a connectionist, pattern-completing device with a
variety of external operations and tools which serve to reduce various complex,
sequential problems to an ordered set of simpler pattern-completing operations of the
kind our brains are most comfortable with. Thus, to borrow the classic illustration,
we may tackle the problem of long multiplication by using pen, paper and numerical
symbols. We then engage in a process of external symbol manipulations and storage
so as to reduce the complex problem to a sequence of simple pattern-completing steps
that we already command, first multiplying 9 by 7 and storing the result on paper,
then 9 by 6, and so on. The value of the use of pen, paper, and number symbols is thus
that – in the words of Ed Hutchins:

“[Such tools] permit the [users] to do the tasks that need to be done
while doing the kinds of things people are good at: recognizing
patterns, modeling simple dynamics of the world, and manipulating
objects in the environment.” Hutchins (1995) p. 155

This description nicely captures what is best about good examples of cognitive
technology: recent word-processing packages, web browsers, mouse and icon systems,
etc. (It also suggests, of course, what is wrong with many of our first attempts at
creating such tools – the skills needed to use those environments (early VCRs, word-
processors, etc.) were precisely those that biological brains find hardest to support,
such as the recall and execution of long, essentially arbitrary, sequences of operations.
See Norman (1999) for further discussion.)
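To make the proposed division of labor concrete, here is a minimal sketch in Python (my illustration, not code from Rumelhart et al. or Hutchins). The table of single-digit facts stands in for the brain's pattern-completing memory, and the 'paper' list stands in for the external medium that stores intermediate marks; only the coupled system solves the full problem.

    # A toy model of the division of labor described above: the "inner" resource
    # only recalls memorized single-digit products, while the external "paper"
    # holds every intermediate mark.
    SINGLE_DIGIT_FACTS = {(a, b): a * b for a in range(10) for b in range(10)}

    def long_multiply(x: int, y: int) -> int:
        """Long multiplication as a loop between simple recall and external storage."""
        paper = []                                   # the external medium
        for place, y_char in enumerate(reversed(str(y))):
            digit, carry, partial = int(y_char), 0, []
            for x_char in reversed(str(x)):
                # the only "inner" step: complete a familiar single-digit pattern
                product = SINGLE_DIGIT_FACTS[(int(x_char), digit)] + carry
                partial.append(product % 10)         # write a mark on the page
                carry = product // 10
            if carry:
                partial.append(carry)
            row = int("".join(str(d) for d in reversed(partial))) * 10 ** place
            paper.append(row)                        # store the partial product externally
        return sum(paper)                            # a final pass over the stored rows

    assert long_multiply(96, 97) == 9312

Nothing in the inner loop exceeds the recall of a memorized fact plus a carried digit; the sequential structure of the task lives almost entirely in the external notation.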
The conjecture, then, is that one large jump or discontinuity in human cognitive
evolution involves the distinctive way human brains repeatedly create and exploit
various species of cognitive technology so as to expand and re-shape the space of
human reason. We – more than any other creature on the planet – deploy non-bio-
logical elements (instruments, media, notations) to complement our basic biological
modes of processing, creating extended cognitive systems whose computational and
problem-solving profiles are quite different from those of the naked brain.
The true significance of recent work on “embodied, embedded” problem-solving
(see Clark 1997 for a review) may thus lie not in the endless debates over the use or
abuse of notions like internal representation, but in the careful depiction of complex,
looping, multi-layered interactions between the brain, the body and reliable features
of the local problem-solving environment. Internal representations will, almost
certainly, feature in this story. But so will external representations, and artifacts, and
problem-transforming tricks. The right way to “scale-up” the lessons of connectionist
research (and simple robotics – see e.g. Brooks 1991, Beer 1995) so as to illuminate
human thought and reason is to recognize that human brains maintain an intricate
cognitive dance with an ecologically novel, and immensely empowering, environ-
ment: the world of symbols, media, formalisms, texts, speech, instruments and cul-
ture. The computational circuitry of human cognition flows both within and beyond
the head, through this extended network in ways which radically transform the space
of human thought and reason.
Such a point is not new, and has been well-made by a variety of theorists working
in many different traditions. This brief and impressionistic sketch is not the place to
delve deeply into the provenance of the idea, but some names to conjure with include
Vygotsky, Bruner, Dennett, Hutchins, Norman and (to a greater or lesser extent) all
those currently working on so-called ‘situated cognition’. My own work on the idea
(see Clark 1997, 1998, 1999) also owes much to a brief collaboration with David
Chalmers (see our paper, ‘The Extended Mind’ in ANALYSIS 58: 1: 1998 p.7-19). I
believe, however, that the idea of human cognition as subsisting in a hybrid, extended
architecture (one which includes aspects of the brain and of the cognitive techno-
logical envelope in which our brains develop and operate) remains vastly under-
appreciated. We cannot understand what is special and distinctively powerful about
human thought and reason by simply paying lip-service to the importance of the web
of surrounding Cognitive Technologies. Instead, we need to understand in detail how
our brains dovetail their problem-solving activities to these additional resources, and
how the larger systems thus created operate, change and evolve. In addition, and
perhaps more philosophically, we need to understand that the very ideas of minds and
persons are not limited to the biological skin-bag, and that our sense of self, place and
potential are all malleable constructs ready to expand, change or contract at sur-
prisingly short notice.
A natural question to press, of course, is this: since no other species on the planet
builds as varied, complex and open-ended designer environments as we do (the claim,
after all, is that this is why we are special), what is it that allowed this process to get
off the ground in our species in such a spectacular way? And isn't that, whatever it is,
what really matters? Otherwise put, even if it's the designer environments that make
us so intelligent, what biological difference lets us build/discover/use them in the first
place?
This is a serious, important and largely unresolved question. Clearly, there must be
some (perhaps quite small) biological difference that lets us get our collective foot in
the designer environment door – what can it be? The story I currently favor locates the
difference in a biological innovation for greater neural plasticity combined with the
extended period of protected learning called “childhood”. Thus Quartz (1999) and
Quartz and Sejnowski (1997) present strong evidence for a vision of human cortex
(especially the most evolutionarily recent structures such as neocortex and prefrontal
cortex) as an “organ of plasticity” whose role is to dovetail the learner to encountered
structures and regularities, and to allow the brain to make the most of reliable external
problem-solving resources. This “neural constructivist” vision depicts neural (especially
cortical) growth as experience-dependent, and as involving the actual
construction of new neural circuitry (synapses, axons, dendrites) rather than just the
fine-tuning of circuitry whose basic shape and form is already determined. One
upshot is that the learning device itself changes as a result of organism-environmental
interactions - learning does not just alter the knowledge base for a fixed com-
putational engine, it alters the internal computational architecture itself. Evidence for
this neural constructivist view comes primarily from recent neuroscientific studies
(especially work in developmental cognitive neuroscience). Key studies here include
work involving cortical transplants, in which chunks of visual cortex were grafted into
other cortical locations (such as somatosensory or auditory cortex) and proved plastic
enough to develop the response characteristics appropriate to the new location (see
Schlagger and O’Leary (1991)), work showing the deep dependence of specific
cortical response characteristics on developmental interactions between parts of
cortex and specific kinds of input signal (Chenn (1997)) and a growing body of
constructivist work in Artificial Neural Networks: connectionist networks in which
the architecture (number of units and layers, etc.) itself alters as learning progresses -
see e.g. Quartz and Sejnowski (1997). The take home message is that immature cortex
is surprisingly homogeneous, and that it ‘requires afferent input, both intrinsically
generated and environmentally determined, for its regional specialization’ (Quartz
(1999) p.49).
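As a purely illustrative sketch (my own construction, not the model of Quartz and Sejnowski), the constructivist point that the architecture itself grows during learning can be caricatured in a few lines of Python: hidden units are recruited one at a time, each wired up to explain whatever error the current circuit still leaves over.

    # A caricature of constructive learning: rather than re-weighting a fixed
    # network, we add hidden units until the remaining error is small.
    import numpy as np

    rng = np.random.default_rng(0)

    # Task: learn a wiggly one-dimensional function from samples.
    X = np.linspace(-3, 3, 200).reshape(-1, 1)
    y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 0]

    hidden = []                      # the growing architecture: (w_in, bias, w_out) triples
    prediction = np.zeros_like(y)

    for _ in range(50):              # recruit at most 50 units
        residual = y - prediction
        if np.mean(residual ** 2) < 1e-3:
            break                    # enough circuitry has been built
        w, b = rng.normal(size=1), rng.normal()      # random incoming connections
        activation = np.tanh(X @ w + b)
        # set the outgoing weight to best absorb the current residual
        out = float(activation @ residual / (activation @ activation))
        hidden.append((w, b, out))
        prediction = prediction + out * activation

    print(f"grew {len(hidden)} units, final MSE = {np.mean((y - prediction) ** 2):.4f}")

Real constructivist networks (and real cortex) are of course far subtler, but the toy makes the structural point: what changes with experience is not just the knowledge held by a fixed computational engine but the shape of the engine itself.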
So great, in fact, is the plasticity of immature cortex (and especially, according to
Quartz and Sejnowski, that of prefrontal cortex) that O'Leary dubs it 'proto-cortex'.
The linguistic and technological environment in which the brain grows and develops
is thus poised to function as the anchor point around which such flexible neural
resources adapt and fit. Such neural plasticity is, of course, not restricted to the human
species (in fact, some of the early work on cortical transplants was performed on rats),
though our brains do look to be far and away the most plastic of them all. Combined
with this plasticity, however, we benefit from a unique kind of developmental space –
the unusually protracted human childhood.
In a recent evolutionary account which comports perfectly with the neural con-
structivist vision, Griffiths and Stotz (2000) argue that the long human childhood
provides a unique window of opportunity in which "cultural scaffolding [can] change
the dynamics of the cognitive system in a way that opens up new cognitive
possibilities" (op cit p.11) These authors argue against what they nicely describe as
the "dualist account of human biology and human culture" according to which
biological evolution must first create the "anatomically modern human" and is then
followed by the long and ongoing process of cultural evolution. Such a picture, they
suggest, invites us to believe in something like a basic biological human nature,
gradually co-opted and obscured by the trappings and effects of culture and society.
But this vision (which is perhaps not so far removed from that found in some of the
more excessive versions of evolutionary psychology) is akin, they argue, to looking
for the true nature of the ant by "removing the distorting influence of the nest" (op cit
p.10). Instead we humans are, by nature, products of a complex and heterogeneous
developmental matrix in which culture, technology and biology are pretty well
inextricably intermingled. The upshot, in their own words, is that:

“The individual representational system is part of a larger representational
environment which extends far beyond the skin. Cognitive processes actually
involve as components what are more traditionally conceived as the expressions
of thought and the objects of thought.
Situated cognition takes place within complex social structures which
‘scaffold’ the individual by means of artifactual, linguistic and
institutional devices ... [and] ... culture makes humans as much as the
reverse.” (Griffiths and Stotz (2000)).

In short it is a mistake to posit a biologically fixed “human nature” with a simple
“wrap-around” of tools and culture. For the tools and culture are indeed as much
determiners of our nature as products of it. Ours are (by nature) unusually plastic
brains whose biologically proper functioning has always involved the recruitment and
exploitation of non-biological props and scaffolds. More so than any other creature on
the planet, we humans are indeed natural-born cyborgs, factory tweaked and primed
so as to be ready to participate in cognitive and computational architectures whose
bounds far exceed those of skin and skull.
All this adds interesting complexity to recent evolutionary psychological accounts
(see e.g. Pinker (1997)) which emphasize our ancestral environments. For we must
now take into account a plastic evolutionary overlay which yields a constantly mov-
ing target, an extended cognitive architecture whose constancy lies mainly in its con-
tinual openness to change. Even granting that the biological innovations which got
this ball rolling may have consisted only in some small tweaks to an ancestral reper-
toire, the upshot of this subtle alteration is now a sudden, massive leap in cognitive-
architectural space. For the cognitive machinery is now intrinsically geared to self-
transformation, artifact-based expansion, and a snowballing/bootstrapping process of
computational and representational growth. The machinery of human reason (the
environmentally extended apparatus of our distinctively human intelligence) thus
turns out to be rooted in a biologically incremental progression while simultaneously
existing on the far side of a precipitous cliff in cognitive-architectural space.

The project of understanding human thought and reason is easily misconstrued. It is
misconstrued as the project of understanding what is special about the human brain.
No doubt there is something special about our brains. But understanding our peculiar
profiles as reasoners, thinkers and knowers of our worlds requires an even broader
perspective: one that targets multiple brains and bodies operating in specially con-
structed environments replete with artifacts, external symbols, and all the variegated
scaffoldings of science, art and culture. Understanding what is distinctive about
human reason thus involves understanding the complementary contributions of both
biology and (broadly speaking) technology, as well as the dense, reciprocal patterns
of causal and co-evolutionary influence that run between them.
For us humans there is nothing quite so natural as to be bio-technological hybrids:
cyborgs of an unassuming stripe. For we benefit from extended cognitive archi-
tectures comprising biological and non-biological elements, delicately intertwined.
We are cognitive hybrids who occupy a region of design space radically different
from those of our biological forbears. Taking this idea on board, and transforming it
into a balanced scientific account of mind, should be a prime objective for the
Cognitive Sciences of the next few hundred years.

Beer, R. (1995). “A Dynamical Systems Perspective on Agent-Environment Interaction.”
Artificial Intelligence 72: 173-215.
Brooks, R. (1991). “Intelligence without representation.” Artificial Intelligence 47: 139-159.
Chambers, D. and Reisberg, D. (1989). “Can Mental Images Be Ambiguous?” Journal of
Experimental Psychology: Human Perception and Performance 11(3): 317-328.
Chenn, A. (1997). Development of the Cerebral Cortex. In W. Cowan, T. Jessell and S. Zipursky
(eds), Molecular and Cellular Approaches to Neural Development. Oxford, England, Oxford
University Press, 440-473.
Clark, A. (1989). Microcognition: Philosophy, Cognitive Science and Parallel Distributed
Processing. Cambridge, MIT Press.
Clark, A. (1997). Being There: Putting Brain, Body and World Together Again. Cambridge,
MA, MIT Press.
Clark, A. (1998). Magic Words: How Language Augments Human Computation. In J. Boucher
and P. Carruthers (eds), Language and Thought. Cambridge, Cambridge University Press.
Clark, A. (1999). "An Embodied Cognitive Science?" Trends in Cognitive Sciences 3(9):
345-351.
Clark, A. and Chalmers, D. (1998). “The Extended Mind.” Analysis 58: 7-19.
Dawkins, R. (1982). The Extended Phenotype (New York: Oxford University Press).
Dennett, D. (1996). Kinds of Minds. New York, Basic Books.
Griffiths, P. E. and K. Stotz (2000). How the mind grows: A developmental perspective on the
biology of cognition. Synthese 122(1-2): 29-51.
Hutchins, E. (1995). Cognition in the Wild. Cambridge, MA, MIT Press.
Norman, D. (1999). The Invisible Computer. Cambridge, MA, MIT Press.
Pinker, S. (1997). How the Mind Works. New York, Norton.
Quartz, S. (1999). The Constructivist Brain. Trends in Cognitive Sciences 3(2): 48-57.
Quartz, S. and Sejnowski, T. (1997). The Neural Basis of Cognitive Development: A
Constructivist Manifesto. Behavioral and Brain Sciences 20: 537-596.
Rumelhart, D., Smolensky, P., McClelland, J. and Hinton, G. (1986). Schemata and Sequential
Thought Processes in PDP Models. In Parallel Distributed Processing: Explorations in the
Microstructure of Cognition, vol. 2. Cambridge, MA, MIT Press, 7-57.
Schlagger, B. and O'Leary, D. (1991). Potential of Visual Cortex to Develop an Array of
Functional Units Unique to Somatosensory Cortex. Science 252: 1556-1560.
Van Leeuwen, C., Verstijnen, I. and Hekkert, P. (1999). Common unconscious dynamics
underlie common conscious effects: a case study in the interactive nature of perception and
creation. In S. Jordan (ed), Modeling Consciousness Across the Disciplines. Lanham, MD,
University Press of America.
