Understanding Computers and Cognition
A New Foundation for Design

TERRY WINOGRAD
FERNANDO FLORES

Addison-Wesley Publishing Company, Inc.
Reading, Massachusetts  Menlo Park, California  New York
Don Mills, Ontario  Wokingham, England  Amsterdam
Bonn  Sydney  Singapore  Tokyo  Madrid  San Juan
Contents
Preface xi
Acknowledgments xiii
1 Introduction 3
1.1 The question of design 4
1.2 The role of tradition 7
1.3 Our path 8
Bibliography 181
Name Index 191
Subject Index 194
For the people of Chile
Preface
Acknowledgments
In developing this book over a number of years, we have benefited from
discussions with many people, including Jean Bamberger, Chauncey Bell,
Daniel Bobrow, Aaron Cicourel, Werner Erhard, Michael Graves, Heinz
von Foerster, Anatol Holt, Rob Kling, George Lakoff, Juan Ludlow,
Donald Norman, Donald Schon, John Searle, Francisco Varela, and students
in a series of seminars at Stanford University. We are especially grateful
to Ralph Cherubini, Hubert Dreyfus, Anne Gardner, Elihu Gerson,
Terry Winograd
Fernando Flores
Theoretical Background
Chapter 1
Introduction
Computers are everywhere. The versatile silicon chip has found a place in
our homes, our schools, our work, and our leisure. We must cope with a
flood of new devices, bringing with them both benefits and dangers.
Popular books and magazines proclaim that we are witnessing a
`computer revolution,' entering a `micro-millennium' in which computers
will completely transform our lives.
We search for images that can help us anticipate computer impacts and
direct further development. Are computers merely giant adding machines
or are they electronic brains? Can they only do `programmed' rote tasks
or can we expect them to learn and create? The popular discourse about
these questions draws heavily on the analogy between computation and
human thought:
1As we will see in later chapters, this includes the initial breakdown implicit in the
condition of unreadiness that calls for buying and assembling a new system.
way we treat deviants (such as prisoners and the mentally ill) to the way we
teach our children.
Looking at computers we find the same process at work. The develop-
ment of the technology has led to new uses of terms such as `information,'
'input,' `output,' `language,' and `communication,' while work in areas
such as artificial intelligence is bringing new meanings to words like
`intelligence,' `decision,' and `knowledge.' The technical jargon gives shape to
our commonsense understanding in a way that changes our lives.
In order to become aware of the effects that computers have on society
we must reveal the implicit understanding of human language, thought,
and work that serves as a background for developments in computer tech-
nology. In this endeavor we are doubly concerned with language. First,
we are studying a technology that operates in a domain of language. The
computer is a device for creating, manipulating, and transmitting sym-
bolic (hence linguistic) objects. Second, in looking at the impact of the
computer, we find ourselves thrown back into questions of language-how
practice shapes our language and language in turn generates the space of
possibilities for action.
This book, then, is permeated by a concern for language. Much of our
theory is a theory of language, and our understanding of the computer
centers on the role it will play in mediating and facilitating linguistic action
as the essential human activity. In asking what computers can do, we
are drawn into asking what people do with them, and in the end into
addressing the fundamental question of what it means to be human.
2For example, see Dreyfus, What Computers Can't Do (1979) and Haugeland,
"The nature and plausibility of cognitivism" (1978).
nious techniques for making analysis and recognition more flexible, the
scope of comprehension remains severely limited. There may be practical
applications for computer processing of natural-language-like formalisms
and for limited processing of natural language, but computers will remain
incapable of using language in the way human beings do, both in interpre-
tation and in the generation of commitment that is central to language.
Chapter 10 takes a critical look at some major current research di-
rections in areas such as knowledge engineering, expert systems, and the
so-called `fifth generation' computers. It describes the overall shift from
the general goal of creating programs that can understand language and
thought to that of designing software for specialized task domains, and
evaluates the projections being made for the coming years.
Part III (Chapters 11 and 12) presents an alternative orientation to
design, based on the theoretical background we have developed. The rel-
evant questions are not those comparing computers to people, but those
opening up a potential for computers that play a meaningful role in human
life and work. Once we move away from the blindness generated by the
old questions, we can take a broader perspective on what computers can
do.
Chapter 11 addresses the task of designing computer tools for use in or-
ganizational contexts. We focus on the activity of people called `managers,'
but the same concerns arise in all situations involving social interaction
and collaborative effort. Drawing on Heidegger's discussion of `thrownness'
and `breakdown,' we conclude that models of rationalistic problem solving
do not reflect how actions are really determined, and that programs based
on such models are unlikely to prove successful. Nevertheless, there is a
role for computer technology in support of managers and as aids in coping
with the complex conversational structures generated within an organiza-
tion. Much of the work that managers do is concerned with initiating,
monitoring, and above all coordinating the networks of speech acts that
constitute social action.
Chapter 12 returns to the fundamental questions of design and looks
at possibilities for computer technology opened up by the understanding
developed in the preceding chapters. After briefly reviewing the relevant
theoretical ideas outlined earlier, we examine some of the phenomena to
which design must address itself, illustrating our approach with a concrete
example. We also consider design in relation to systematic domains of
human activity, where the objects of concern are formal structures and
the rules for manipulating them. The challenge posed here for design is
not simply to create tools that accurately reflect existing domains, but to
provide for the creation of new domains. Design serves simultaneously to
bring forth and to transform the objects, relations, and regularities of the
world of our concerns.
Chapter 2
The rationalistic tradition
Current thinking about computers and their impact on society has been
shaped by a rationalistic tradition that needs to be re-examined and chal-
lenged as a source of understanding. As a first step we will characterize
the tradition of rationalism and logical empiricism that can be traced back
at least to Plato. This tradition has been the mainspring of Western sci-
ence and technology, and has demonstrated its effectiveness most clearly
in the `hard sciences'-those that explain the operation of deterministic
mechanisms whose principles can be captured in formal systems. The
tradition finds its highest expression in mathematics and logic, and has
greatly influenced the development of linguistics and cognitive psychology.
We will make no attempt to provide a full historical account of this
tradition, or to situate it on some kind of intellectual map. Instead, we
have chosen to concentrate on understanding its effects on current dis-
course and practice, especially in relation to the development and impact
of computers. The purpose of this chapter is to outline its major points
and illustrate their embodiment in current theories of language, mind, and
action.
There are obvious questions about how we set situations into corre-
spondence with systematic `representations' of objects and properties, and
with how we can come to know general rules. In much of the rationalistic
tradition, however, these are deferred in favor of emphasizing the formu-
lation of systematic rules that can be used to draw logical conclusions.
Much of Western philosophy-from classical rhetoric to modern symbolic
logic-can be seen as a drive to come up with more systematic and precise
formulations of just what constitutes valid reasoning.
Questions of correspondence and knowledge still exercise philosophers,
but in the everyday discourse about thinking and reasoning they are taken
as unproblematic. In fact when they are raised, the discussion is often
characterized as being too philosophical. Even within philosophy, there
are schools (such as analytic philosophy) in which the problems raised by
the first two items are pushed aside, not because they are uninteresting,
but because they are too difficult and open-ended. By concentrating on
formalisms and logical rules, the philosopher can develop clear technical
results whose validity can be judged in terms of internal coherence and
consistency.
There is a close correlation between the rationalistic tradition and the
approach of organized science. In a broad sense, we can view any orga nized
form of inquiry as a science, but in ordinary usage more is implied. There
must be some degree of adherence to the scientific method. This method
consists of a series of basic steps (that can be repeated in successive
refinements of the science):
1In some ways, the rationalistic tradition might better be termed the `analytic tradition.'
We have adopted a more neutral label ... We also are not concerned here with the
debate between `rationalists' and `empiricists.' The rationalistic tradition
1. Sentences say things about the world, and can be either true or false.
2. What a sentence says about the world is a function of the words it
contains and the structures into which these are combined.
3. The content words of a sentence (such as its nouns, verbs, and ad-
jectives) can be taken as denoting (in the world) objects, properties,
relationships, or sets of these.
2Analysis of this style was developed by such prominent philosophers and logicians
as Frege (1949), Russell (1920), and Tarski (1944). More recent work is presented in
several collections, such as Linsky (1952), Davidson and Harman (1972), Hintikka,
Moravcsik, and Suppes (1973), and some of the papers in Keenan (1975).
between what we say and what we mean. There are two levels at which
to define the problem. First, there is the problem of `semantic corre-
spondence.' Just what is the relationship between a sentence (or a word)
and the objects, properties, and relations we observe in the world? Few
philosophers adhere to the naive view that one can assume the presence of
an objective reality in which objects and their properties are `simply there.'
They recognize deep ontological problems in deciding just what constitutes
a distinct object or in what sense a relation or event `exists.' Some limited
aspects (such as the reference of proper names) have been studied within
the philosophy of language, but it is typically assumed that no formal
answers can be given to the general problem of semantic correspondence.
The second, more tractable level at which to study meaning is to take
for granted that some kind of correspondence exists, without making a
commitment to its ontological grounding. Having done this, one can look
at the relations among the meanings of different words, phrases, and sen-
tences, without having to answer the difficult question of just what those
meanings are.3
There are many different styles in which this study can be carried out.
The approach called `structural semantics' or `linguistic semantics'4 deals
only with the linguistic objects (words, phrases, and sentences) themselves.
The fact that "Sincerity admires John" is anomalous or that "male parent"
and "father" are synonymous can be encompassed within a theory relating
words (and posited features of words) to their occurrences in certain kinds
of phrases and sentences. Within such a theory there need be no reference
either to the act of uttering the words or to the states of affairs they
describe.
This approach is limited, however, because of its dependence on spe-
cific words and structures as the basis for stating general rules. Most
theories of semantics make use of a formalized language in which deeper
regularities can be expressed. It is assumed that each sentence in a nat-
ural language (such as English) can be set into correspondence with one
or more possible interpretations in a formal language (such as the first-
order predicate calculus) for which the rules of reasoning are well defined.
The study of meaning then includes both the translation of sentences into
the corresponding formal structures and the logical rules associated with
these structures. Thus the sentences "Every dog has a tail," "All dogs
3For a critique of recent attempts to extend this methodology, see Winograd, "Moving
the semantic fulcrum" (1985).
4In this category, we include work such as that of Leech (1969), Lyons (1963), Katz
and Fodor (1964), and Jackendoff (1976).
have tails," and "A dog has a tail" are all translated into the same form,
while "I walked to the bank" will have two possible translations (corre-
sponding to the two meanings of "bank"), as will "Visiting relatives can
be boring" (corresponding to different interpretations of who is doing the
visiting).
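All three dog sentences might, for instance, be assigned a single first-order
form along the following lines (a conventional rendering given for
illustration; the notation is not taken from the text):

    \forall x \, \bigl( \mathit{Dog}(x) \rightarrow \exists y \, ( \mathit{Tail}(y) \land \mathit{Has}(x, y) ) \bigr)

The two readings of "I walked to the bank" would then correspond to two
distinct formulas, differing in which predicate stands for "bank."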
Most current work in this direction adopts some form of `truth-
theoretic' characterization of meaning. We can summarize its underlying
assumptions as follows:
3. There are systematic rules of logic that account for the interrelation
of the truth conditions for different formulas.
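A conventional example of such a rule (an illustration in standard
notation, not drawn from the text) is the clause tying the truth of a
conjunction to the truth of its parts:

    [\![ \varphi \land \psi ]\!] = \mathrm{true} \quad \text{iff} \quad [\![ \varphi ]\!] = \mathrm{true} \ \text{and} \ [\![ \psi ]\!] = \mathrm{true}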
'These include indexical pronouns ("I," "you"), place and time adverbs ("here,"
"now"), and the use of tenses (as in "He will go."). It is also clearly understood
that there are dependencies on the linguistic context (as with the anaphoric pro-
nouns "he," "she," "it") and that there are metaphorical and poetic uses of language
which depend on complex personal contexts.
There are several key elements to this view of problem solving, which is
generally taken for granted in artificial intelligence research:
publication in 1977, and the Cognitive Science Society held its first annual
meeting in 1979.7 A number of other conferences, journals, and research
funding programs have followed.
Of course cognitive science is not really new. It deals with phenomena
of thought and language that have occupied philosophers and scientists for
thousands of years. Its boundaries are vague, but it is clear that much of
linguistics, psychology, artificial intelligence, and the philosophy of mind
fall within its scope. In declaring that it exists as a science, people are
marking the emergence of what Lakatos calls a `research programme.'8
Lakatos chooses this term in preference to Kuhn's `paradigm' (The Struc-
ture of Scientific Revolutions, 1962) to emphasize the active role that a
research programme plays in guiding the activity of scientists. He sees the
history of science not as a cyclic pattern of revolution and normal science,
but as a history of competing research programmes. He distinguishes be-
tween `mature science,' consisting of research programmes, and `immature
science,' consisting of "a mere patched up pattern of trial and error."
A research programme is more than a set of specific plans for carrying
out scientific activities. The observable details of the programme reflect
a deeper coherence which is not routinely examined. In the day-to-day
business of research, writing, and teaching, scientists operate within a
background of belief about how things are. This background invisibly
shapes what they choose to do and how they choose to do it. A research
programme grows up within a tradition of thought. It is the result of
many influences, some recognized explicitly and others concealed in the
social structure and language of the community. Efforts to understand
and modify the research programme are made within that same context,
and can never escape it to produce an `objective' or `correct' approach.
The research programme of cognitive science encompasses work that
has been done under different disciplinary labels, but is all closely related
through its roots in the rationalistic tradition. Cognitive science needs
to be distinguished from `cognitive psychology,' which is the branch of
traditional (experimental) psychology dealing with cognition. Although
cognitive psychology constitutes a substantial part of what is seen as cog-
nitive science, it follows specific methodological principles that limit its
scope. In particular, it is based on an experimental approach in which
progress is made by performing experiments that can directly judge be-
tween competing scientific hypotheses about the nature of cognitive mech-
1. All cognitive systems are symbol systems. They achieve their intel-
ligence by symbolizing external and internal situations and events,
and by manipulating those symbols.
9For an excellent overview of both the history and current direction of cognitive
science, see Gardner, The Mind's New Science (1985).
Chapter 3
Understanding and being
3.1 Hermeneutics
Hermeneutics1 began as the theory of the interpretation of texts, par-
ticularly mythical and sacred texts. Its practitioners struggled with the
problem of characterizing how people find meaning in a text that exists
over many centuries and is understood differently in different epochs. A
2Emilio Betti (Teoria Generale della Interpretazione, 1955) has been the most
influential supporter of this approach. Hirsch's Validity in Interpretation (1967)
applies Betti's view to problems of literary criticism.
3Gadamer, Truth and Method (1975) and Philosophical Hermeneutics (1976)
4In his discussions of hermeneutics, Gadamer makes frequent reference to a person's
`horizon.' As with many of the words we will introduce in this chapter, there is no
simple translation into previously understood terms. The rest of the chapter will
serve to elucidate its meaning through its use.
that the person uses.5 That language in turn is learned through activities
of interpretation. The individual is changed through the use of language,
and the language changes through its use by individuals. This process is
of the first importance, since it constitutes the background of the beliefs
and assumptions that determine the nature of our being.6 We are social
creatures:
5The attempt to elucidate our own pre-understanding is the central focus of the
branch of sociology called `ethnomethodology,' as exemplified by Garfinkel, "What
is ethnomethodology" (1967), Goffman, The Presentation of Self in Everyday Life
(1959), and Cicourel, Cognitive Sociology (1974).
2. There are `objective facts' about that world that do not depend on
the interpretation (or even the presence) of any person.
8Ibid., p. 247.
is led to interpret the world falsely, but is the necessary condition of having a
background for interpretation (hence Being). This is clearly expressed in
the later writings of Gadamer:
You cannot step back and reflect on your actions. Anyone who has
been in this kind of situation has afterwards felt "I should have said... "
or "I shouldn't have let Joe get away with... " In the need to respond
immediately to what people say and do, it is impossible to take time to
analyze things explicitly and choose the best course of action. In fact, if
you stop to do so you will miss some of what is going on, and implicitly
choose to let it go on without interruption. You are thrown on what people
loosely call your `instincts,' dealing with whatever comes up.
find steps that will achieve your goals. You must, as the idiom goes, `flow
with the situation.'
Language is action. Each time you speak you are doing something
quite different from simply `stating a fact.' If you say "First we have to
address the issue of system development" or "Let's have somebody on
the other side talk," you are not describing the situation but creating it.
The existence of "the issue of system development" or "the other side"
is an interpretation, and in mentioning it you bring your interpretation
into the group discourse. Of course others can object "That isn't really an
issue-you're confusing two things" or "We aren't taking sides, everyone
has his own opinion." But whether or not your characterization is taken
for granted or taken as the basis for argument, you have created the objects
and properties it describes by virtue of making the utterance.
Chapter 4
Cognition as a biological
phenomenon
The previous chapter presented the primary basis for our theoretical orien-
tation, but our own understanding initially developed through a different
path. The rationalistic orientation of our prior training in science and tech-
nology made the foundations of hermeneutics and phenomenology nearly
inaccessible to us. Before we could become open to their relevance and
importance we needed to take a preliminary step towards unconcealing the
tradition in which we lived, recognizing that it was in fact open to serious
question.
For us, this first step came through the work of Humberto Maturana, a
biologist who has been concerned with understanding how biological
processes can give rise to the phenomena of cognition and language. Be-
ginning with a study of the neurophysiology of vision, which led to the
classic work on the functional organization of the frog's retina,' he went on
to develop a theory of the organization of living systems2 and of language
and cognition,3
produced in that laboratory, provides a broad insight into its work. Since we have
been most directly influenced by Maturana's writings we will refer primarily to them
and to him.
As will become obvious in this chapter and throughout the book, the words `cognitive'
and `cognition' are used in quite different ways by different writers. We will not
attempt to give a simple definition of Maturana's use, but will clarify it through the
discussion in the chapter.
6The original work in this area is described in Land, "The retinex theory of color
vision" (1977).
It is the structure of the perturbed system that determines, or better
specifies, what structural configurations of the medium can perturb it.
From this perspective, there is no difference between perception and
hallucination. If the injected irritant creates a pattern of neural activity
identical to that which would be produced by heat applied to the area
served by the nerve, then there is no neurophysiological sense to the ques-
tion of whether the heat was really `perceived' or was a `hallucination.'
At first, this refusal to distinguish reality from hallucination may seem
far-fetched, but if we think back to color vision it is more plausible. The
question of whether the shadow in the stick experiment was `really green'
is meaningless once we give up the notion that the perception of green
corresponds in a simple way to a pattern of physical stimuli. In giving a
scientific explanation of the operation of the nervous system at the physical
level, we need to explain how the structure of the system at any moment
generates the pattern of activity. The physical means by which that struc-
ture is changed by interaction within the physical medium lie outside the
domain of the nervous system itself.
Of course an observer of the nervous system within its medium can
make statements about the nature of the perturbation and its effect on
patterns of activity. For this observer it makes sense to distinguish the
situation of an injected irritant from one of heat. But from the standpoint
of the nervous system it is not a relevant, or even possible, distinction.
Along with this new understanding of perception, Maturana argues
against what he calls the 'fallacy of instructive interaction.' `Instructive
interaction' is his term for the commonsense belief that in our interactions
with our environment we acquire a direct representation of it-that
properties of the medium are mapped onto (specify the states of)
structures in the nervous system. He argues that because our interaction is
always through the activity of the entire nervous system, the changes are
not in the nature of a mapping. They are the results of patterns of activity
which, although triggered by changes in the physical medium, are not rep-
resentations of it. The correspondences between the structural changes
and the pattern of events that caused them are historical, not structural.
They cannot be explained as a kind of reference relation between neural
structures and an external world.
The structure of the organism at any moment determines a domain of
perturbations-a space of possible effects the medium could have on the
sequence of structural states that it could follow. The medium selects
among these patterns, but does not generate the set of possibilities.
7Here and throughout this chapter we use the term `medium' rather than `environ-
ment' to refer to the space in which an organism exists. This is to avoid the conno-
tation that there is a separation between an entity and its `environment.' An entity
exists as part of a medium, not as a separate object inside it.
8For a collection of papers by Maturana, Varela, and others on autopoiesis, see Zeleny,
Autopoiesis, a Theory of the Living Organization (1978).
9In later work, Maturana and Varela distinguish autopoiesis, as a property of cellular
systems, from a more general property of operational closure that applies to a broader
class of systems. We will not pursue the distinction here, but it is explicated in
Varela's El Arbol de Conocimiento (forthcoming).
A cognitive explanation is one that deals with the relevance of action to
the maintenance of autopoiesis. It operates in a phenomenal domain (domain
of phenomena) that is distinct from the domain of mechanistic structure-
determined behavior.
For example, when the male and female of a species develop a sequence
of mutual actions of approach and recognition in a mating ritual, we as ob-
servers can understand it as a coherent pattern that includes both animals.
Our description is not a description of what the male and female (viewed
as mechanisms made up of physical components) do, but a description of
the mating dance as a pattern of mutual interactions. The generation of
a consensual domain is determined by the history of states and interac-
tions among the participants (and their progenitors) within the physical
domain. However, as observers of this behavior we can distinguish a new
domain in which the system of behaviors exists. The consensual domain is
reducible neither to the physical domain (the structures of the organisms
that participate in it) nor to the domain of interactions (the history by
which it came to be), but is generated in their interplay through structural
coupling as determined by the demands of autopoiesis for each participant.
Maturana refers to behavior in a consensual domain as `linguistic
behavior.' Indeed, human language is a clear example of a consensual do-
main, and the properties of being arbitrary and contextual have at times
been taken as its defining features. But Maturana extends the term `lin-
guistic' to include any mutually generated domain of interactions. Lan-
guage acts, like any other acts of an organism, can be described in
the domain of structure and in the domain of cognition as well. But
their existence as language is in the consensual domain generated by
mutual interaction. A language exists among a community of individuals,
and is continually regenerated through their linguistic activity and the
structural coupling generated by that activity.
Language, as a consensual domain, is a patterning of `mutual orienting
behavior,' not a collection of mechanisms in a `language user' or a `seman-
tic' coupling between linguistic behavior and non-linguistic perturbations
experienced by the organisms.
Maturana points out that language is connotative and not denotative,
and that its function is to orient the orientee within his or her cognitive
domain, and not to point to independent entities. An observer will at
times see a correspondence between the language observed and the entities
observed, just as there is a correspondence between the frog's visual system
and the existence of flies. But if we try to understand language purely
within the cognitive domain, we blind ourselves to its role as orienting
Chapter 5
Language, listening,
and commitment
A claims that B's first response was a lie (or at best `misleading'),
while B contends that it was literally true. Most semantic theories in the
rationalistic tradition provide formal grounds to support B, but a theory
of language as a human phenomenon needs to deal with the grounds for
A's complaint as well-i.e., with the `infelicity' of B's reply.
At first, it seems that it might be possible simply to expand the defini-
tion of "water." Perhaps there is a `sense' of the word that means "water
in its liquid phase in sufficient quantity to act as a fluid," so that a sen-
tence containing the word "water" is ambiguous as to whether it refers
to this sense or to a sense dealing purely with chemical composition. But
this doesn't help us in dealing with some other possible responses of B:
1. B: Yes, there's a bottle of water in the refrigerator, with a little
lemon in it to cover up the taste of the rust from the pipes.
Response 1 is facetious, like the one about eggplants. But that is
only because of background. It might be appropriate if person A
were checking for sources of humidity that ruined some photographic
plates.
2. I'm sorry I missed the meeting yesterday. My car had a flat tire.
3. There's an animal over there in the bushes.
2The example of Sentence 3 above is like those studied in Rosch, "Cognitive repre-
sentations of semantic categories" (1975).
for the world. This world is always already organized around fundamental
human projects, and depends upon these projects for its being and
organization.
To recapitulate in more explicitly Heideggerian language, the world is
encountered as something always already lived in, worked in, and acted
upon. World as the background of obviousness is manifest in our everyday
dealings as the familiarity that pervades our situation, and every possible
utterance presupposes this. Listening for our possibilities in a world in
which we already dwell allows us to speak and to elicit the cooperation
of others. That which is not obvious is made manifest through language.
What is unspoken is as much a part of the meaning as what is spoken.
Habermas argues that every language act has consequences for the
participants, leading to other immediate actions and to commitments for
future action. In making a statement, a speaker is doing something like
3It is also not adequate to do as Putnam suggests in "Is semantics possible?" (1970),
locating `real' meaning in the usage of the `experts' who deal with scientific terms.
Our "water" examples demonstrate that this deals with meaning only in a specialized
and limited sense.
with the domain of relevant actions, you may decide that you can't "take
me seriously" or "believe what I say." A fundamental condition of suc-
cessful communication is lost. The need for continued mutual recognition
of commitment plays the role analogous to the demands of
autopoiesis in selecting among possible sequences of behaviors.
From this analogy we can see how language can work without any `ob-
jective' criteria of meaning. We need not base our use of a particular word
on any externally determined truth conditions, and need not even be in full
agreement with our language partners on the situations in which it would
be appropriate. All that is required is that there be a sufficient coupling
so that breakdowns are infrequent, and a standing commitment by both
speaker and listener to enter into dialog in the face of a breakdown.4
The conditions of appropriateness for commitment naturally take into
account the role of a shared unarticulated background. When a person
promises to do something, it goes without saying that the commitment is
relative to unstated assumptions. If someone asks me to come to a meeting
tomorrow and I respond "I'll be there," I am performing a commissive
speech act. By virtue of the utterance, I create a commitment. If I find
out tomorrow that the meeting has been moved to Timbuktu and don't
show up, I can justifiably argue that I haven't broken my promise. What I
really meant was "Assuming it is held as scheduled... " On the other hand,
if the meeting is moved to an adjacent room, and I know it but don't show
up, you are justified in arguing that I have broken my promise, and that
the `Timbuktu excuse' doesn't apply. The same properties carry over to
all language acts: meaning is relative to what is understood through the
tradition.
It may appear that there is a conflict between our emphasis on mean-
ing as commitment and on the active interpretive role of the listener. If
the meaning of a speech act is created by listening within a background,
how can the speaker be responsible for a commitment to its consequences?
But of course, there is no contradiction, just as there is no contradiction
in the preceding example of a promise. As participants in a shared tra-
dition, we are each responsible for the consequences of how our acts will
be understood within that tradition. The fact that there are no objective
rules and that there may at times be disagreements does not free us of
that responsibility.
4A similar insight was put forth in discussions of meaning (`semiotics') by the prag-
matists, such as Peirce, Dewey, and Mead. As John-Steiner and Tatter ("An interac-
tionist model of language development," 1983) describe the pragmatist orientation:
"The semiotic process is purposive, having a directed flow. It functions to choreo-
graph and to harmonize the mutual adjustments necessary for the carrying out of
human social activities. It has its sign function only within the intentional context
of social cooperation and direction, in which past and future phases of activity are
brought to bear upon the present."
[Figure: the basic conversation for action, shown as a network of states
linked by speech acts, with transitions that include A: Declare, B: Assert,
A: Accept, B: Renege, B: Counter, and A: Counter.]
The lines indicate actions that can be taken by the initial speaker (A) and
hearer (B). The initial action is a request from A to B, which specifies some
conditions of satisfaction. Following such a request, there are precisely five
5These are the acts directly relevant to the structure of completion of the conversation
for action. There are of course other possibilities in which the conversational acts
themselves are taken as a topic, for example in questioning the intelligibility ("What,
I didn't hear you") or legitimacy ("You can't order me to do that!") of the acts.
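To suggest how this network can be treated as a formal structure, here is
a minimal sketch in Python. The state numbers and the exact transition
table are illustrative assumptions patterned on the request, promise,
assert, declare cycle described in the text, not a transcription of the
figure.

    # A conversation for action as a finite set of states linked by
    # speech acts. State numbering and the transition table are
    # illustrative assumptions.
    TRANSITIONS = {
        # state: {(speaker, act): next_state}
        1: {("A", "Request"): 2},
        2: {("B", "Promise"): 3, ("B", "Counter"): 4, ("B", "Reject"): 8},
        3: {("B", "Assert"): 5, ("B", "Renege"): 8, ("A", "Withdraw"): 9},
        4: {("A", "Accept"): 3, ("A", "Counter"): 2, ("A", "Withdraw"): 9},
        5: {("A", "Declare"): 7},   # A declares the conditions satisfied
    }
    TERMINAL = {7: "completed", 8: "rejected or reneged", 9: "withdrawn"}

    def step(state, speaker, act):
        """Advance the conversation by one speech act, failing on any
        act the network does not provide for in the current state."""
        options = TRANSITIONS.get(state, {})
        if (speaker, act) not in options:
            raise ValueError(f"{speaker}: {act} is not open in state {state}")
        return options[(speaker, act)]

    # One complete conversation: request, promise, report, declare done.
    state = 1
    for speaker, act in [("A", "Request"), ("B", "Promise"),
                         ("B", "Assert"), ("A", "Declare")]:
        state = step(state, speaker, act)
    print(TERMINAL[state])   # -> completed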
3. There are many cases where acts are `listened to' without being
explicit. If the requestor can recognize satisfaction of the request
directly, there may be no explicit assertion of completion. Other
acts, such as declaring satisfaction, may be taken for granted if some
amount of time goes by without a declaration to the contrary. What
is not said is listened to as much as what is said.
6. The network does not say what people should do, or deal with conse-
quences of the acts (such as backing out of a commitment). These are
important phenomena in human situations, but are not generated in
the domain of conversation formalized in this network.
The analysis illustrated by this network can then be used as a basis for
further dimensions of recurrent structure in conversations. These include
temporal relations among the speech acts, and the linking of conversations
with each other (for example, a request is issued in order to help in the
satisfaction of some promise previously made by the requestor). These
will be discussed further in Chapter 11.
Other kinds of conversations can be analyzed in a similar vein. For
example, in order to account for the truthfulness of assertives in the do-
main of recurrent structures of conversation, we need a `logic of argument,'
where `argument' stands for the sequence of speech acts relevant to the ar-
ticulation of background assumptions. When one utters a statement, one
is committed to provide some kind of `grounding' in case of a breakdown.
This grounding is in the form of another speech act (also in a situational
context) to satisfy the hearer that the objection is met. There are three
basic kinds of grounding: experiential, formal, and social.
7For a discussion of the central role that metaphor plays in language use, see Lakoff
and Johnson, Metaphors We Live By (1980).
`thing' exists (or that it has some property), we have brought it into a
domain of articulated objects and qualities that exists in language and
through the structure of language, constrained by our potential for action in
the world.
As an example, let us look once again at the meaning of individual
words, and the problem of how a particular choice of words is appropriate
in a situation. We have shown how "water" can have different interpre-
tations in different situations, but how does it come to have the same
interpretation in more than one? The distinctions made by language are
not determined by some objective classification of `situations' in the world,
but neither are they totally arbitrary.8 Distinctions arise from recurrent
patterns of breakdown in concernful activity. There are a variety of human
activities, including drinking, putting out fires, and washing, for which the
absence or presence of "water" determines a space of potential breakdowns.
Words arise to help anticipate and cope with these breakdowns. It is often
remarked that the Eskimos have a large number of distinctions for forms of
snow. This is not just because they see a lot of snow (we see many things
we don't bother talking about), but precisely because there are recurrent
activities with spaces of potential breakdown for which the distinctions
are relevant.
It is easy to obscure this insight by considering only examples that fall
close to simple recurrences of physical activity and sensory experience. It
naively seems that somehow "snow" must exist as a specific kind of entity
regardless of any language (or even human experience) about it. On the
other hand, it is easy to find examples that cannot be conceived of as
existing outside the domain of human commitment and interaction, such
as "friendship," "crisis," and "semantics." In this chapter we have chosen
to focus on words like "water" instead of more explicitly socially-grounded
words, precisely because the apparent simplicity of physically-interpreted
terms is misleading.
We will see in Part II that this apparently paradoxical view (that noth-
ing exists except through language) gives us a practical orientation for
understanding and designing computer systems. The domain in which
people need to understand the operation of computers goes beyond the
physical composition of their parts, into areas of structure and behavior
for which naive views of objects and properties are clearly inadequate.
The `things' that make up `software,' `interfaces,' and `user interactions'
are clear examples of entities whose existence and properties are generated
in the language and commitment of those who build and discuss them.
8Winograd, in "Moving the semantic fulcrum" (1985), criticizes the assumptions made
by Barwise and Perry in basing their theory of `situation semantics' (Situations and
Attitudes, 1983) on a naive realism that takes for granted the existence of specific
objects and properties independent of language.
Chapter 6
Towards a new
orientation
computers stresses the role they will play in `applying knowledge' and
`making decisions.' If, on the other hand, we take action as primary, we
will ask how computers can play a role in the kinds of actions that make
up our lives-particularly the communicative acts that create requests
and commitments and that serve to link us to others. The discussion of
word processors in Chapter 1 (which pointed out the computer's role in a
network of equipment and social interactions) illustrates how we can gain
a new perspective on already existing systems and shape the direction of
future ones.
We also want to better understand how people use computers. The ra-
tionalistic tradition emphasizes the role played by analytical understand-
ing and reasoning in the process of interacting with our world, including
our tools. Heidegger and Maturana, in their own ways, point to the im-
portance of readiness-to-hand (structural coupling) and the ways in which
objects and properties come into existence when there is an unreadiness or
breakdown in that coupling. From this standpoint, the designer of a com-
puter tool must work in the domain generated by the space of potential
breakdowns. The current emphasis on creating `user-friendly' computers
is an expression of the implicit recognition that earlier systems were not
designed with this domain sufficiently in mind. A good deal of wisdom
has been gained through experience in the practical design of systems, and
one of our goals is to provide a clearer theoretical foundation on which to
base system design. We will come back to this issue in our discussion of
design in Chapter 12.
Finally, our orientation to cognition and action has a substantial impact
on the way we understand computer programs that are characterized by
their designers as `thinking' and `making decisions.' The fact that such
labels can be applied seriously at all is a reflection of the rationalistic
tradition. In Chapters 8 through 10, we will examine work in artificial
intelligence, arguing that the current popular discourse on questions like
"Can computers think?" needs to be reoriented.
1This assumption, which has also been called the physical symbol system hypothesis,
is discussed at length in Chapter 8.
`present-to-hand' to us, perhaps for the first time. In this sense they
function in a positive rather than a negative way.
New design can be created and implemented only in the space that
emerges in the recurrent structure of breakdown. A design constitutes an
interpretation of breakdown and a committed attempt to anticipate future
breakdowns. In Chapter 10 we will discuss breakdowns in relation to the
design of expert systems, and in Chapter 11 their role in management and
decision making.
Most important, though, is the fundamental role of breakdown in cre-
ating the space of what can be said, and the role of language in creating
our world. The key to much of what we have been saying in the preceding
chapters lies in recognizing the fundamental importance of the shift from
an individual-centered conception of understanding to one that is socially
based. Knowledge and understanding (in both the cognitive and linguis-
tic senses) do not result from formal operations on mental representations
of an objectively existing world. Rather, they arise from the individual's
committed participation in mutually oriented patterns of behavior that
are embedded in a socially shared background of concerns, actions, and
beliefs. This shift from an individual to a social perspective-from mental
representation to patterned interaction-permits language and cognition
to merge. Because of what Heidegger calls our `thrownness,' we are largely
forgetful of the social dimension of understanding and the commitment it
entails. It is only when a breakdown occurs that we become aware of the
fact that `things' in our world exist not as the result of individual acts of
cognition but through our active participation in a domain of discourse
and mutual concern.
In this view, language-the public manifestation in speech and writing of
this mutual orientation-is no longer merely a reflective but rather a
constitutive medium. We create and give meaning to the world we live in
and share with others. To put the point in a more radical form, we
design ourselves (and the social and technological networks in which our
lives have meaning) in language.
Computers do not exist, in the sense of things possessing objective
features and functions, outside of language. They are created in the con-
versations human beings engage in when they cope with and anticipate
breakdown. Our central claim in this book is that the current theoretical
discourse about computers is based on a misinterpretation of the nature
of human cognition and language. Computers designed on the basis of
this misconception provide only impoverished possibilities for modelling
and enlarging the scope of human understanding. They are restricted to
representing knowledge as the acquisition and manipulation of facts, and
communication as the transferring of information. As a result, we are now
witnessing a major breakdown in the design of computer technology-a
Chapter 7
Computers and
representation
This book is directed towards understanding what can be done with com-
puters. In Part I we developed a theoretical orientation towards human
thought and language, which serves as the background for our analysis of
the technological potential. In Part II we turn towards the technology it-
self, with particular attention to revealing the assumptions underlying its
development. In this chapter we first establish a context for talking about
computers and programming in general, laying out some basic issues that
apply to all programs, including the artificial intelligence work that we will
describe in subsequent chapters. We go into some detail here so that read-
ers not familiar with the design of computer systems will have a clearer
perspective both on the wealth of detail and on the broad relevance of a
few general principles.
Many books on computers and their implications begin with a descrip-
tion of the formal aspects of computing, such as binary numbers, Boolean
logic, and Turing machines. This sort of material is necessary for technical
mastery and can be useful in dispelling the mysteries of how a machine
can do computation at all. But it turns attention away from the more
significant aspects of computer systems that arise from their larger-scale
organization as collections of interacting components (both physical and
computational) based on a formalization of some aspect of the world. In
this chapter we concentrate on the fundamental issues of language and
rationality that are the background for designing and programming com-
puters.
We must keep in mind that our description is based on an idealiza-
tion in which we take for granted the functioning of computer systems
extent that they produce results that are correct relative to the domain:
they give the actual location of the satellite or the legal deductions from
the paycheck. They are effective to varying degrees, depending on how
efficiently the computational operations can be carried out. Much of the
detailed content of computer science lies in the design of representations
that make it possible to carry out some class of operations efficiently.
Research on artificial intelligence has emphasized the problem of rep-
resentation. In typical artificial intelligence programs, there is a more
complex correspondence between what is to be represented and the cor-
responding form in the machine. For example, to represent the fact that
the location of a particular object is "between 3 and 5 miles away" or
"somewhere near the orbiter," we cannot use a simple number. There
must be conventions by which some structures (e.g., sequences of charac-
ters) correspond to such facts. Straightforward mappings (such as simply
storing English sentences) raise insuperable problems of effectiveness. The
operations for coming to a conclusion are no longer the well-understood
operations of arithmetic, but call for some kind of higher-level reasoning.
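The contrast can be made concrete with a short sketch (an illustration,
not drawn from any particular AI system; the predicate vocabulary is
invented for the example):

    # Direct representation: the fact is a number, and drawing a
    # conclusion is ordinary arithmetic.
    distance_miles = 4.2
    in_range = 3 < distance_miles < 5

    # Symbolic representation: the fact is a formula-like structure,
    # and drawing a conclusion requires an interpreter for the
    # (invented) vocabulary.
    fact = ("between", "object-7", ("miles", 3), ("miles", 5))

    def entails_in_range(fact, low, high):
        """Check whether the stored assertion entails that the object
        lies somewhere within [low, high] miles."""
        kind, _obj, (_, lo), (_, hi) = fact
        return kind == "between" and low <= lo and hi <= high

    print(in_range, entails_in_range(fact, 3, 5))   # -> True True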
In general, artificial intelligence researchers make use of formal logical
systems (such as predicate calculus) for which the available operations
and their consequences are well understood. They set up correspondences
between formulas in such a system and the things being represented in
such a way that the operations achieve the desired veridicality. There is
a great deal of argument as to the most important properties of such a
formal system, but the assumptions that underlie all of the standard
approaches can be summarized as follows:
3This point has been raised by a number of philosophers, such as Fodor in "Method-
ological solipsism considered as a research strategy in cognitive psychology" (1980),
and Searle in "Minds, brains, and programs" (1980). We will discuss its relevance
to language understanding in Chapter 9.
The logical machine. The computer designer does not generally begin
with a concept of the machine as a collection of physical components, but
as a collection of logical elements. The components at this level are logical
abstractions such as or-gates, inverters, and flip-flops (or, on a higher level
of the decomposition, multiplexers, arithmetic-logical units, and address
decoders). These abstractions are represented by activity in the physical
components. For example, certain ranges of voltages are interpreted as
representing a logical `true' and other ranges a logical `false.' The course
of changes over time is interpreted as a sequence of discrete cycles, with
the activity considered stable at the end of each cycle. If the machine is
properly designed, the representation at this level is veridical-patterns
of activity interpreted as logic will lead to other patterns according to
4Even this story is too simple. It was true of computers ten years ago, but most
present-day computers have an additional level called `micro-code' which implements
the abstract machine instructions in terms of instructions for a simpler abstract
machine, which in turn is defined in terms of the logical machine.
5The difference between compiling and interpretation is subtle and is not critical for our
discussion.
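The logical-machine level described above can be given a toy rendering in
Python (a sketch for illustration; the voltage ranges are invented stand-ins,
not the specification of any real logic family):

    # Ranges of voltage are read as bits, and gate abstractions are
    # functions over bits. The numeric ranges are illustrative only.
    FALSE_RANGE = (0.0, 0.8)
    TRUE_RANGE = (2.0, 5.0)

    def as_bit(voltage):
        if FALSE_RANGE[0] <= voltage <= FALSE_RANGE[1]:
            return False
        if TRUE_RANGE[0] <= voltage <= TRUE_RANGE[1]:
            return True
        # Outside both ranges the physical state represents nothing:
        # the veridicality of the logical level has broken down.
        raise ValueError(f"{voltage}V is not a valid logic level")

    def or_gate(a, b):
        return a or b

    def inverter(a):
        return not a

    print(inverter(or_gate(as_bit(0.3), as_bit(4.1))))   # -> False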
computing device. People who have not programmed computers have not
generally had experiences that provide similar intuitions about systems.
One obvious fact is that for a typical complex computer program, there is
no intelligible correspondence between operations at distant levels. If you
ask someone to characterize the activity in the physical circuits when the
program is deciding where the satellite is, there is no answer that can be
given except by building up the description level by level. Furthermore, in
going from level to level there is no preservation of modularity. A single
high-level language step (which is chosen from many different types avail-
able) may compile into code using all of the different machine instructions,
and furthermore the determination of what it compiles into will depend
on global properties of the higher-level code.
If it were not for this last possibility we could argue that any properly
constructed computer program is related to a subject domain only through
the relationships of representation intended by its programmers. However
there remains the logical possibility that a computer could end up
operating successfully within a domain totally unintended by its designers
or the programmers who constructed its programs.
This possibility is related to the issues of structural coupling and in-
structive interaction raised by Maturana. He argues that structures in
the nervous system do not represent the world in which the organism lives.
Similarly one could say of the display hack program that its structures do
not represent the geometrical objects that it draws. It is possible that we
might (either accidentally or intentionally) endow a machine with essential
qualities we do not anticipate. In Section 8.4 we will discuss the relevance
of this observation to the question of whether computers can think.
Chapter 8
Computation and
intelligence
Apparent autonomy. It has been argued that the clock played a major
role in the development of our understanding of physical systems because it
exhibited a kind of autonomy that is not shared by most mechanical tools.
Although it is built for a purpose, once it is running it can go on for a long
time (even indefinitely) without any need for human intervention. This
feature of autonomous operation made it possible for the clock to provide a
model for aspects of the physical world, such as the motions of the planets,
and of biological organisms as well. The computer exhibits this kind of
autonomy to a much larger degree. Once it has been programmed it can
carry out complex sequences of operations without human intervention.
2Practitioners of artificial intelligence will be aware that in many cases the program-
mer deals directly with the correspondence between the subject domain and the
symbol structures provided by a higher-level programming language, with no sys-
tematic level of logical representation. It is coming to be widely accepted (see, for
example, Hayes, "In defence of logic," 1977; Nilsson, Principles of Artificial Intelli-
gence, 1980; Newell, "The knowledge level," 1982) that this kind of haphazard formal
system cannot be used to build comprehensible programs, and that the distinction
between the logical formalism and its embodiment in a computation is critical.
1979, p. 157) gives a typical response. When asked "Surely it can only do
what you have previously programmed it to do?" he suggests the answer
is unexpectedly simple: "The same is true of animals and humans." It
should be clear that in this response (as in all similar responses) there
is a failure to distinguish between two very different things: structure-
determined systems (which include people, computers, and anything else
operating according to physical laws), and systems programmed explicitly
with a chosen representation. Stating matters carefully: "An animal or
human can do only what its structure as previously established will allow
3Dreyfus and Dreyfus, in Mind Over Machine (1985), argue along lines very similar to
ours that expertise cannot be captured in any collection of formal rules.
it to do." But this is not at all the same as "what you have previously
programmed it to do" for any "you" and any notion of "programmed"
that corresponds to the way computers are actually programmed.
Parameter adjustment. The simplest approach (and the one that has
led to the most publicized results) is to limit the program's learning ca-
pacities to the adjustment of parameters that operate within a fixed rep-
resentation.5 For example, Samuel's checker-playing program contained a
collection of evaluation functions, which computed the benefits of a pro-
posed position. These functions (such as adding up the number of pieces,
or seeing how many jumps were possible) gave different results for differ-
ent situations. The obvious problem was deciding how much weight to
give each one in the overall choice of a move. Samuel's program gradually
shifted the weights and then adopted the modified weighting if the moves
thereby chosen were successful.
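A minimal sketch of this scheme (an illustration, not Samuel's actual
code; the features and the update rule are simplified stand-ins):

    # Parameter adjustment within a fixed representation: a weighted
    # sum of hand-chosen board features, with weights nudged according
    # to whether the moves they favored worked out.
    FEATURES = [
        lambda b: b["my_pieces"] - b["their_pieces"],   # piece advantage
        lambda b: b["my_jumps"],                        # available jumps
    ]
    weights = [1.0, 1.0]

    def evaluate(board):
        # The representation is fixed: only these features, combined
        # linearly, can ever matter to the program.
        return sum(w * f(board) for w, f in zip(weights, FEATURES))

    def adjust(board, success, rate=0.1):
        """Shift weight toward features active in positions that led
        to success, and away from them otherwise."""
        sign = 1 if success else -1
        for i, f in enumerate(FEATURES):
            weights[i] += sign * rate * f(board)

    position = {"my_pieces": 8, "their_pieces": 6, "my_jumps": 2}
    print(evaluate(position))   # -> 4.0
    adjust(position, success=True)
    print(weights)              # -> [1.2, 1.2]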
More sophisticated applications of this technique have been developed,
but the general idea remains the same: a fixed structure is in place, and
of learned interconnections. The guiding image was that of a large number
of essentially uniform neurons with widespread and essentially uniform
potential for interconnections. Work over the years in neuroanatomy and
neurophysiology has demonstrated that living organisms do not fit this
image. Even an organism as small as a worm with a few hundred neurons
is highly structured, and much of its behavior is the result of built-in
structure, not learning.
Of course, the structure of nervous systems came into being through
evolution. But if we try to duplicate evolution rather than the struc-
tural change that takes place in the lifetime of an individual, we are faced
with even less knowledge of the mechanisms of change. This remains true
despite recent advances in molecular genetics, which further reveal the
complexity of layer upon layer of mechanisms involved at even the most
microscopic level. There is also the obvious fact that the time scale of
biological evolution is epochal; changes occur as a result of coupling over
millions of years.
In discussions of artificial evolution it is sometimes argued that there
is no need for an artificial system to evolve at this slow pace, since its
internal operations are so much faster than those of organic nervous sys-
tems. Millions upon millions of `generations' could be produced in a single
day. But this is wrong for two reasons. First, nature does not do things
sequentially. Although each generation for a higher organism takes days
or years, there are millions of individual organisms all undergoing the pro-
cess simultaneously. This high degree of parallelism more than cancels out
the additional speed of computing. Second, and more important, the view
that evolution can go at the speed of the machine ignores the fundamental
process of structural coupling. In order for changes to be relevant to sur-
vival in the medium the organism inhabits, there must be sufficient time
for those changes to have an effect on how the organism functions. The
evolutionary cycle must go at the speed of the coupling, not the speed at
which internal changes occur. Unless we reduce our notion of the medium
to those things that can be rapidly computed (in which case we fall into
all of the problems discussed above for small simplistic representations),
an artificial system can evolve no faster than any other system that must
undergo the same coupling.
It is highly unlikely that any system we can build will be able to un-
dergo the kind of evolutionary change (or learning) that would enable it
to come close to the intelligence of even a small worm, much less that of
a person.
The issue of structural change is also closely tied to the problem of the
physical embodiment of computers as robots. It has been argued that the
representation relation between a computer and the world its structures
describe will be different when we can build robots with visual, tactile,
and other sensors and motor effectors operating in the physical domain.
As a number of philosophers have pointed out,10 a computer attached
to a television camera is no different in principle from one attached to a
teletype. The relevant properties and their representation are fixed by the
design of the equipment. A scanned-in video scene that has been processed
into an array of numerical `pixels' (picture elements) is exactly equivalent
to a long list of sentences, each of the form "The amount of light hitting
the retina at point [x, y] is z."
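The claimed equivalence can be made concrete in a few lines. The following sketch (with an invented 2x2 array of intensities) rewrites a pixel array as just such a list of sentences; nothing is gained or lost in the translation.

# A toy rendering of the equivalence claimed above: a pixel array carries
# no more (and no less) than a list of sentences of the stated form.

pixels = [[12, 200], [37, 90]]   # hypothetical 2x2 array of light intensities

sentences = [
    f"The amount of light hitting the retina at point [{x}, {y}] is {z}."
    for y, row in enumerate(pixels)
    for x, z in enumerate(row)
]
for s in sentences:
    print(s)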
In designing a fixed correspondence between parameters of the receptors
(or effectors) and elements of the representation, the programmer is
embodying exactly the sort of blindness about which we have been speaking.
Once again, this does not apply to systems that, rather than being
designed to implement a particular representation, evolve through
structural coupling. However, the possibilities for computers whose
physical structure evolves in this way are even more remote than those of
programming by evolutionary change.
2. We have explicitly said that computers can perform some tasks (such
as playing complex games) as well as people. Some researchers would
take this as constituting intelligent behavior. How, then, do we
exclude this behavior from the domain of intelligence?
3. Finally, we have left open the possibility that some suitably designed
machine or sequence of machines might be able to undergo adequate
structural coupling, and hence have the same claims to intelligence
as any organism, including a person. Since we accept the view that a
person is a physical structure-determined system, we cannot be sure
that a similar system made out of silicon and metals might not be
equivalent to one composed of protoplasm.
"See, for example Putnam's discussion of natural kinds in "Is semantics possible?"
(1970)
Chapter 9
Understanding language
1In discussions of computers and formal systems, the term `natural language' is used
for ordinary human languages, to distinguish them from constructed formal languages,
such as the predicate calculus and FORTRAN.
(a) For any relevant fact about the world there can be a corre-
sponding structure in the representation system.
(b) There is a systematic way of correlating sentences in natural
language with structures in the representation system, so that
the corresponding structure states the same fact as the sentence.
(c) Systematic formal operations on representation structures can
be devised to carry out valid reasoning.
other programs that deal with text using key-word recognition or statisti-
cal methods) only `manipulate without understanding.'
Many claims have been made that the `artificial intelligence paradigm'
is new and superior to older philosophical approaches to language. But
with a few notable exceptions (discussed later), these claims have proved
rather empty. Fodor, in "Methodological solipsism considered as a research
strategy in cognitive psychology" (1980), has pointed out that to the extent
that they are clearly defined, artificial intelligence models are generally
equivalent to older philosophical ones in the analytic tradition, drawing on
ideas that go back to Aristotle and beyond. Before pursuing the limitations
of this tradition, let us look at what has been achieved.
One of the most widely known programs of the kind described here
was Winograd's SHRDLU (Understanding Natural Language, 1972). The
program carried on a dialog with a person (via teletype) concerning the
activity of a simulated robot arm in a tabletop world of toy objects (often
referred to as the `blocks world'). The program could answer questions,
carry out commands, and incorporate new facts about its world. It dis-
played the simulated world on a video screen, showing the activities it
carried out as it moved the objects around. The following dialog3 illus-
trates some of its abilities (lines in lower case are typed by the person,
those in upper case by the computer):
• find a block which is taller than the one you are holding and put it
into the box.
OK
• is at least one of them narrower than the one which I told you to
pick up?
YES, THE RED CUBE.
• is it supported?
YES, BY THE TABLE.
• had you touched any pyramid before you put the green one on the
little cube?
YES, THE GREEN ONE.
• why?
TO GET RID OF IT.
In addition to giving responses that call for reasoning about the do-
main (the world of toy blocks), the system draws conclusions about the
structure of the conversation and about the hypothesized knowledge of
the other participant. In order to analyze a phrase like "the pyramid,"
context must be considered. The command "Grasp the pyramid" near the
beginning of the dialog is underspecified since there are several pyramids
in the robot's world. But the later question "What is the pyramid sup-
ported by?" is understood as referring to the specific pyramid mentioned
in a previous response. By keeping a record of the conversation, SHRDLU
could often determine the referent of a phrase. It also kept track of part
of the knowledge implicit in the person's utterances, for example the fact
that a particular block was referred to using its color. It could then use
this to help in choosing among alternative interpretations. For example, if
a block had been referred to by the person as "the green block," it would
not be considered as a likely referent for "it" in a subsequent question
"What color is it?"
In SHRDLU, this reasoning about the conversation did not make use
of the same representation formalism as for the blocks world itself, but
was done in an ad hoc style. Nevertheless, in essence it was no different
This good example raises interesting issues and seems to call for
some distinctions. Full understanding of the sentence indeed
results in knowing about the young man's desire for love, but it
would seem that there is a useful lesser level of understanding
in which the machine would know only that he would like her
to come to dinner. - McCarthy, "An unreasonable book" (1976),
p. 86.
Gadamer, Heidegger, Habermas, and others argue that the goal of re-
ducing even `literal' meanings to truth conditions is ultimately impossible,
and inevitably misleading. It focusses attention on those aspects of lan-
guage (such as the statement of mathematical truths) that are secondary
and derivative, while ignoring the central problems of meaning and com-
munication. When we squeeze out the role of interpretation, we are left
not with the essence of meaning, but with the shell. Chapter 5 showed
how the meaning of a concrete term like "water" could be understood only
relative to purpose and background. Looking at computer programs, we
see this kind of problem lurking at every turn. In order for a computer sys-
tem to draw conclusions from the use of a word or combination of words,
meaning must be identified with a finite collection of logical predicates (its
truth conditions) or procedures to be applied. Complications arise even
in apparently simple cases.
In classical discussions of semantics, the word "bachelor" has been put
forth as a word that can be clearly defined in more elementary terms:
"adult human male who has never been married."4 But when someone
refers to a person as a "bachelor" in an ordinary conversational situation,
much more (and less) is conveyed. "Bachelor" is inappropriate if used in
describing the Pope or a member of a monogamous homosexual couple,
and might well be used in describing an independent career woman. The
problem is not that the definition of bachelor is complex and involves more
terms than accounted for in the classical definition. There is no coherent
`checklist' of any length such that objects meeting all of its conditions will
consistently be called "bachelors" and those failing one or more of them
will not.5 The question "Is X a bachelor?" cannot be answered without
considering the potential answers to "Why do you want to know?" This
is what Gadamer means by "A question is behind each statement that
first gives it its meaning." It is possible to create artificial `stipulative
definitions,' as in a mathematics text or in establishing the use of terms in
4This is, of course, only one of its definitions. Others, as pointed out in Katz and
Fodor's account of semantics ("The structure of a semantic theory," 1964), relate to
fur seals and chivalry.
5For a discussion of examples like this, see Fillmore, "An alternative to checklist
theories of meaning" (1975) and Winograd, "Toward a procedural understanding of
semantics" (1976)
legal documents, but these do not account for the normal use of language.
When we leave philosophical examples and look at the words appearing
in everyday language, the problem becomes even more obvious. Each of the
nouns in the sentence "The regime's corruption provoked a crisis of
confidence in government" raises a significant problem for definition. It is
clear that purpose and context play a major role in determining what will be
called a "crisis," "corruption," or a "regime."
Other problems arise in trying to deal with words such as "the" and
"and," which seem the closest to logical operators. In SHRDLU, as de-
scribed above, the program for determining the referent of a definite noun
phrase such as "the block" made use of a list of previously mentioned
objects. The most recently mentioned thing fitting the description was
assumed to be the referent. But this is only a rough approximation.
Sometimes it gives a wrong answer, and other times it gives no clue at
all. Consider the text: "Tommy had just been given a new set of blocks.
He was opening the box when he saw Jimmy coming in."
See, for example, Schank and Abelson, Scripts, Plans, Goals and Understanding
(1977); Hobbs, "Coherence and coreference" (1979); Grosz, "Utterance and objective:
Issues in natural language communication" (1980).
(b) Rules for the composition of these into the meanings of phrases
and sentences, where these rules can take into account specific
properties of the current state of speaker and hearer (including
memory of the preceding text).
7A partial exception is Barwise and Perry (Situations and Attitudes, 1983), who
attempt to situate these complexities in the tradition of analytic philosophy of language.
Winograd, in "Moving the semantic fulcrum" (1985), discusses the relevance of their
work for artificial intelligence.
or not they explicitly use formal logic, there is an assumption that formulas
represent propositions of the kind traditionally assigned truth values, such
as "Every dog is an animal." A major goal of frame formalisms was to
represent `defaults': the ways things are typically, but not always. For
example we might want to include the fact "Dogs bark" without precluding
the possibility of a mute dog.
The frame intuition can be implemented only in a system that does
informal reasoning-one that comes to conclusions based on partial evidence,
makes assumptions about what is relevant and what is to be expected in
typical cases, and leaves open the possibility of mistake and contradiction. It
can be 'non-monotonic'-it can draw some conclusion, then reverse it on
the basis of further information.8
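A minimal sketch of such non-monotonic behavior, with facts and a default rule invented for illustration:

# A default ("dogs bark") supports a conclusion that is later withdrawn
# when more specific information arrives.

facts = set()
defaults = {("dog", "barks")}            # typically true, not always

def concludes(individual, kind, prop):
    """Believe the default unless explicit contrary information exists."""
    if (individual, "not-" + prop) in facts:
        return False
    return (kind, prop) in defaults

facts.add(("fido", "dog"))
print(concludes("fido", "dog", "barks"))   # True: the default applies

facts.add(("fido", "not-barks"))           # Fido turns out to be mute
print(concludes("fido", "dog", "barks"))   # False: conclusion reversed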
The problem, of course, is to know when something is to be treated as
`typical' and when the various parts of the frame are to be taken as rele-
vant. Here, if we look at the literature on frame systems, we find a mix-
ture of hand waving and silence. Simple rules don't work. If, for example,
defaults are used precisely when there is no explicit (previously derived)
information to the contrary, then we will assume that one holds even when
a straightforward simple deduction might contradict it. If analogies are
treated too simply, we attempt to carry over the detailed properties of one
object to another for which they are not appropriate.
It should be clear that the answer cannot lie in extending the details
of the rules within the subject domain. If the default that rooms have
windows is to be applied precisely in the cases of "those rooms that... and
not those that..." then it is no longer a default. We have simply refined
our description of the world to distinguish among more properties that
rooms can have.
Another approach has been to postulate `resource-limited processing'
as a basis for reasoning.9 In any act of interpretation or reasoning, a sys-
tem (biological or computer) has a finite quantity of processing resources
to expend. The nature of these resources will be affected by the details
of the processor, its environment, and its previous history. The outcome
of the process is determined by the interaction between the structure of
the task and the allocation of processing. The ability to deal with partial
or imprecise information comes from the ability to do a finite amount of
processing, then jump to a conclusion on the basis of what has happened
so far, even though that conclusion may not be deducible or even true.
This is what Minsky refers to when he talks of the need to "bypass logic."
From one point of view, a resource-limited system is a purely logical
formal system-it operates with precise rules on well-defined structures,
as does any computer program. From another viewpoint the system is
carrying out informal reasoning. The key to this paradox lies in the use of
formal rules that are relative to the structure of the computer system that
embodies the formalism.10 In reasoning about some task environment,
a frame-based system can come to conclusions on the basis not only of
statements about the world, but also on the basis of the form of the rep-
resentation and the processes that manipulate it (for example, concluding
something is false because it is represented as typically false, and with
some bounded amount of deduction in this case it cannot be proved true).
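A sketch of this double aspect, under assumed rules and an invented `typically false' marking: the procedure below is a purely formal program, yet it concludes that penguins do not fly because a bounded search fails to prove otherwise.

# Resource-limited reasoning: do a bounded amount of deduction, then fall
# back on how the fact is represented. Rules and examples are illustrative.

implications = {"penguin": {"bird"}, "bird": {"animal"}}   # hypothetical rules
typically_false = {("penguin", "flies")}

def provable(category, prop, budget):
    """Chain through the implication rules for at most `budget` steps."""
    frontier, seen = [category], set()
    while frontier and budget > 0:
        budget -= 1
        cur = frontier.pop()
        if cur == prop:
            return True
        if cur not in seen:
            seen.add(cur)
            frontier.extend(implications.get(cur, ()))
    return False

def believes(category, prop, budget=5):
    if provable(category, prop, budget):
        return True
    # Conclude false because it is represented as typically false and the
    # bounded deduction above could not prove it true.
    return (category, prop) not in typically_false

print(believes("penguin", "animal"))   # True: proved within the budget
print(believes("penguin", "flies"))    # False: the default decides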
Once again, the intuition is related to the work we have been present-
ing. Maturana's account of structure-determined systems deals directly
with how the system's structure (rather than an externally observed struc-
ture of the environment) determines its space of operation. However, there
is a significant difference in that the frame approach assumes a mechanism
operating on representations, albeit in a resource-limited way.
Although the general idea of frames with resource-limited reasoning
has some plausibility, it has not produced computer systems with any de-
gree of generality. The problem lies in accounting for how the detailed
structure of the system leads to the desired results. Only very simplistic
examples have been given of what this structure might look like, and those
examples cannot be extended in any obvious way. Programs actually writ-
ten using frame systems tend to fall into two classes. Either the structures
are written with a few specific examples in mind and work well only for
those examples and minor variations on them,11 or they do not make any
essential use of the frame ideas (adopting only a frame-like notation) and
are equivalent to more traditional programs.12
Furthermore, even if a system containing frames with an appropri-
ate structure could be constructed, it still does not escape the problems
of blindness described in Chapter 8. The programmer is responsible for
a characterization of the objects and properties to be dealt with using
frames, to exactly the same degree as the programmer of any representa-
tion system. The program begins with a characterization of the possible
objects and properties. Detailed consideration of its internal structure
(both of representations and of processes on them) cannot move beyond
10Hofstadter, in Godel, Escher, Bach (1979), elaborates this point clearly and at great
length.
11See, for example, the programs described in Schank, "Language and memory" (1981).
12This was the experience with KRL, as described in Bobrow et al., "Experience with
KRL-0" (1977).
Program 1 prints out the time of day whenever the precise sequence
"What time is it?" is typed in. Any other sequence is simply ignored.
Such a program might well operate to the satisfaction of
those who use it, and they might want to claim that it "understands
the question," since it responds appropriately.
Program 2 accepts sequences of the form "What ... is it?" where the
gap is filled by "time," "day," "month," or "year." It types out the
appropriate answer to each of these and ignores any sequence not
matching this pattern.
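Both programs are simple enough to render directly. The following sketches (with answer formats of our own choosing) behave as characterized above; whether either one `understands' is exactly what is at issue.

# Direct renderings of the two programs characterized above.

import datetime

def program_1(line):
    """Prints the time of day only for the precise sequence 'What time is it?'"""
    if line == "What time is it?":
        print(datetime.datetime.now().strftime("%H:%M"))
    # any other sequence is simply ignored

def program_2(line):
    """Accepts 'What ... is it?' with 'time', 'day', 'month', or 'year' in the gap."""
    now = datetime.datetime.now()
    answers = {
        "time":  now.strftime("%H:%M"),
        "day":   now.strftime("%A"),
        "month": now.strftime("%B"),
        "year":  now.strftime("%Y"),
    }
    if line.startswith("What ") and line.endswith(" is it?"):
        gap = line[len("What "):-len(" is it?")]
        if gap in answers:
            print(answers[gap])
    # sequences not matching the pattern are ignored

program_1("What time is it?")
program_2("What month is it?")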
I was startled to see how quickly and how very deeply people
conversing with DOCTOR became emotionally involved with
the computer and how unequivocally they anthropomorphized
it. Another widespread, and to me surprising, reaction to
the ELIZA program was the spread of a belief that it demon-
strated a general solution to the problem of computer under-
standing of natural language. - Weizenbaum, Computer Power
and Human Reason (1976), p. 6.
did Paul feel?" with "Paul caught Sarah committing adultery" and "Paul
was surprised."
If we examine the workings of BORIS we find a menagerie of script-
like representations (called MOPS, TOPS, TAUS, and META-MOPS) that
were used in preparing the system for the one specific story it could answer
questions about. For example, TAU-RED-HANDED is activated "when a
goal to violate a norm, which requires secrecy for its success, fails during
plan execution due to a witnessing." It characterizes the feeling of the
witness as "surprised." In order to apply this to the specific story, there
are MOPS such as M-SEX (which is applied whenever two people are
in a bedroom together) and M-ADULTERY (which includes the structure
needed to match the requirements of TAU-RED-HANDED). The apparent
human breadth of the program is like that of ELIZA. A rule that "If
two people are in a bedroom together, infer they are having sex" is as
much a micro-world inference as "If one block is directly above another,
infer that the lower one supports the upper." The illusions described
by Weizenbaum are fueled by subject matter that makes it appear that
complex and subtle understanding is taking place.
In a similar vein, the program that can "draw analogies among
Shakespearean plays" operates in a micro-world that the programmer
fashioned after his reading of Shakespeare.17 The actual input is not a
Shakespeare play, or even a formal representation of the lines spoken by
the characters, but a structure containing a few objects and relations
based on the plot. The complete representation of Macbeth used for
drawing analogies consisted of the following:
Chapter 10
Current directions in artificial intelligence
Robotics
In the early years of artificial intelligence, work on robotics emphasized the
quest for general principles of intelligence underlying human perception
and action. Abstract work on symbolic problem solving was motivated by
plans for a robot operating with a `hand' or maneuvering its way around an
environment. Current work in robotics applies some techniques developed in
this earlier work, but is better understood as extending a process of
automation that began decades ago.
Computers already play a major role in the physical work of industry,
for example in controlling the complex processes of oil refineries and in
guiding numerically controlled milling machines. As computer hardware
becomes cheaper, it becomes practical to automate more activities:
Once again we need to be cautious about words like `think' (even when
set off in quotation marks). Nevertheless it is quite likely that automation
will continue to develop, including general-purpose programmable manip-
ulators and visual-manual coordination. It is not within the scope of this
book to analyze the economic potential for such systems or to discuss the
social effects of their widespread use. However, it is important to separate
out the real potential for such devices from the implications that come
from calling them applications of artificial `intelligence,' and even from
the use of the word `robot.'
This book has not focussed on aspects of intelligence directly concerned
with perception and action in a physical world. As we discussed in Chapter
8, this is not because the issues are different, but because the central
focus of the argument is clearer in `disembodied' areas. The limitations
of representation and programming based on a formal characterization of
properties and actions are just as strong (if not stronger) in dealing with
physical robots. Perception is much more than the encoding of an external
world into a representation, and action is more than the execution of
`motor routines.' Nevertheless, robotic devices that operate in artificially
limited domains may be quite useful, in spite of (or at times because of)
not reflecting the nature of human perception and action.
Even those who believe in the success of artificial intelligence view such
claims as a gross exaggeration of the capabilities of any existing system.
They are all the more notable since the company is directed by one of the
leading figures in artificial intelligence research, who is also chairman of
the computer science department at a major university.2
1Advertising brochure for Cognitive Systems, Inc., distributed at the AAAI National
Conference, August 1982.
2From the brochure: "Cognitive Systems, Inc. is an outgrowth of research at the Ar-
tificial Intelligence (AI) Lab of the Computer Science Department at Yale University.
The Yale AI Lab is one of the foremost research centers in the country in the field
of natural language processing. Cognitive Systems was founded by Professor Roger
Schank, Chairman of the Computer Science Department and Director of Research
at the AI Lab at Yale."
Cognitive modelling
3For an overview of the newer approaches to vision, see Marr, Vision (1982).
combining them in various ways and testing the effectiveness of those com-
binations. As we discussed in the section on learning in Chapter 8, this
does not enable them to move beyond the limitations of their initial do-
main. Similarly, detailed theories may be developed that in some way
model the functioning of nervous systems and the modification of their
structure over time. There is much to be discovered about how our ner-
vous systems really work, but AI theories and neurophysiological theories
are in different domains. Detailed theories of neurological mechanisms will
not be the basis for answering the general questions about intelligence and
understanding that have been raised in this book any more than detailed
theories of transistor electronics would aid in understanding the complex-
ities of computer software.
earliest computers were used for code breaking and ballistic calculations.
To the extent that areas can be well-defined and the rules for them set
down precisely, `expert systems' can be created and will operate
successfully. However, there are two important caveats.
First, there is a danger inherent in the label `expert system.' When we
talk of a human `expert' we connote someone whose depth of understand-
ing serves not only to solve specific well-formulated problems, but also to
put them into a larger context. We distinguish between experts and `idiot
savants.' Calling a program an `expert' is misleading in exactly the same
way as calling it `intelligent' or saying it `understands.' The misrepresen-
tation may be useful for those who are trying to get research funding or
sell such programs, but it can lead to inappropriate expectations by those
who attempt to use them. Dreyfus and Dreyfus (Mind Over Machine,
1985) describe four stages of progressively greater expertise, of which only
the first, `novice' stage, can be accounted for with the kind of rules that
have been used by programs attempting to duplicate expert performance.
The second problem with creating `expert systems' is the difficulty in
understanding and conveying a sense of the limitations of a particular pro-
gram and of the approach in general. A good example of the problem can
be found in the application of computers to medicine. Nobody would ques-
tion that there are relevant computations (such as the determination of
electrolyte balances) that are too complex for practical hand calculations.
There are other areas (such as the recognition of specific infections and the
analysis of electrocardiograms) in which the domain can be circumscribed
carefully enough so that programs can be effective even though there is no
simple `closed form' algorithm for getting the `right answer.' But in the
popular press (and much of the professional literature as well), computers
are described as `diagnosing diseases' and `choosing treatments.'
An editorial in the prestigious New England Journal of Medicine
evaluated the potential for computer diagnosis:
5By late 1984, the projection had been reduced by about half, due to serious deficits and
budget cuts by the Japanese government.
6For an analysis and critique of this program see Davis, "Assessing the Strategic
Computing Initiative" (1985).
The first page of the study lists four major social areas in which fifth
generation computers will "play active roles in the resolving of anticipated
social bottlenecks" :
is unrealistic to expect that they will have a major positive impact on the
wide variety of problems identified by Moto-oka.
However, the more detailed plans appearing in the same study give
a somewhat different perspective. The project is not a monolithic push
towards a single goal, but an attempt to promote and coordinate research
on advanced computer technology in a quite general way. Although the
artificial intelligence aspect gets the most attention in public descriptions,
it is only one component of a full spectrum of computer science research.
To the extent that there is a common theme tying the project together,
we can summarize it as a loosely linked series of research commitments:
4. Expert systems will become much more advanced when built using
specialized machines and programming languages especially suited
to them.
5. These new machines will make use of parallel processing (the ability
to carry on many computations simultaneously) much more than
current computers.
Many of the steps in this chain can stand on their own. It is no novelty
to say that the human-computer interface is a prime area for research or
that new advances will come from parallel machine architectures. Many of
the specific research themes deal with one or another of the steps above,
and their success or failure will stand independently of the chain of rea-
soning. The project will produce useful results even if it does not increase
productivity in agriculture or significantly improve the quality of life for
the aged. Let us consider each of these steps more carefully.
motivation for technical development, we must not fall into the trap of
assuming that the technology will fix the problems. In the case of
computers there is a double temptation, since if the direct application of
computer technology to the problems is not the answer, one might hope
for `intelligent' programs that can tell us what to do. As we
emphasized in Chapter 8, this is an idle dream.
within the project, but will not be central to its overall goal of making
the machine more accessible. More ambitious AI goals such as
general-purpose machine translation and real natural-language
understanding will be dropped.
The grandiose goals, then, will not be met, but there will be useful
spinoffs. In the long run, the ambitions for truly intelligent computer
7Initially the Japanese researchers plan to use PROLOG (which stands for
PROgramming in LOGic), a language developed in Europe. This is considered a
starting point from which new languages and systems will develop.
systems, as reflected in this project and others like it around the world,
will not be a major factor in technological development. They are too
rooted in the rationalistic tradition and too dependent on its assumptions
about intelligence, language, and formalization.
PART III
Design
Chapter 11
Management and conversation
From the point of view of classical decision theory, the task is to lay out
the space of alternatives and to assign valuations to each. This includes
dealing with the uncertainties, such as not knowing how much the repair
will cost, how much trouble it will be to find a good used car, and what your
financial situation will be in the future. In addition it requires comparing
factors that are not directly comparable. How important is it to be able
to go camping in comfort? How bad is it to be stuck with car payments
that make the budget a hassle every month? What would it mean to give
up the vacation? How valuable is it to avoid the increased worry about
breakdowns that comes with a used car? How do you feel about being
seen driving an old clunker?
The problem is not just that it is impossible to apply systematic de-
cision techniques, but that there is a bias inherent in formulating the
situation as one of choosing between these alternatives. Imagine that on
the next day (since you can't drive to work) you call and check the city
buses and find one you can take. After a few days of taking the bus you
realize you didn't really need the car. The problem of "How can I get a
working car?" has not been solved, it has been dissolved. You realize that
the problem you really wanted to solve was "How can I get to work?"
Of course one might argue that you failed to identify the real space
of alternatives, but that it existed nevertheless. But imagine a slightly
different scenario. The bus ride takes too long, and you are complaining
about the situation to your friend at work. He commiserates with you,
since his bicycling to work is unpleasant when it rains. The two of you
come up with the idea of having the company buy a van for an employee
car pool. In this case, resolution comes in the creation of a new alternative.
The issue is not one of choosing but of generating. Or, on the other hand,
you might steal a car, or set up housekeeping in a tent in your office, or
commit suicide. Each in its own way would `solve the problem.' We are
seriously misled if we consider the relevant space of alternatives to be
the space of all logical possibilities. Relevance always comes from a pre-
orientation within a background.
You talk to another friend who has just gotten his car back from the
shop. He hears your story and expresses surprise at the whole thing. It
never occurred to him to do anything except have it fixed, just as he always
did. For him the resolution of having it repaired was not a decision. A
breakdown of irresolution never occurred. This exclusionary feature is the
principal element of resolution. It is sometimes articulated as reasons or
arguments, in which some of the excluded paths can be pointed out ("I
can't afford to buy a new car"). But there is always more that is not
articulated, falling back into the fathomless background of obviousness.
This kind of exclusionary commitment is present even in situations
where we do not experience irresolution; we simply act, order, promise,
declare, or decline to commit ourselves to some acts. It is naive to
believe that these are not rational actions, on the grounds that the process
of deliberation (in the sense of choosing among alternatives) is missing.
Commitment to an action, with exclusion of other possibilities, is the
common feature of the processes that precede action.
We call the process of going from irresolution to resolution `delibera-
tion.' The principal characteristic of deliberation is that it is a kind of
conversation (in which one or many actors may participate) guided by
questions concerning how actions should be directed. Sometimes we can
specify conditions of further inquiry to attain a resolution. On other oc-
casions the resolution will come from a more prolonged hesitation and/or
debate. Only in some of these cases will the phenomenon of choosing
between alternatives occur, and a process of ranking according to some
metric or other criterion may occur even less frequently.
We can describe the conversation that constitutes deliberation in the
following terms:
It is worth noting that much of what is called problem solving does not
deal with situations of irresolution, but takes place within the normal state of
resolution. For example, when a linear programming model is used to
schedule operations in a refinery, the `problem' to be solved does not call
for a resolution. Resolution concerns the exploration of a situation, not
the application of habitual means.
may come to hide the purposes they were intended to serve. The blind-
ness can take on immense proportions when the survival of an organization
is assured by some external declaration, as with public bureaucracies or
armies in peacetime. It becomes attached to programs and projects, at-
tending to recurrent requests, with little sensitivity to the consequences
and implications of its activity or to the declared commitments of the
organization.
Decision making, as described in the first section of this chapter, is
part of the recurrent activity. The professional discipline of `systems
analysis' has focussed on routine structured processes, frequently attacking
problems of volume and routine rather than problems of communication.
communication. If we look carefully at the process of resolution described
above, we see that the key elements are the conversation among the
affected parties and the commitment to action that results from reaching
resolution. Success cannot be attributed to a decision made by a particular
actor, but only to the collective performance.
Careful observers of what successful managers do (such as Mintzberg,
in The Nature of Managerial Work, 1973) have remarked that their
activi-
ties are not well represented by the stereotype of a reflecting solitary mind
studying complex alternatives. Instead, managers appear to be absorbed
in many short interactions, most of them lasting between two and twenty
minutes. They manifest a great preference for oral communication-by
telephone or face to face. We may say that managers engage in conver-
sations in which they create, take care of, and initiate new commitments
within an organization. The word `management' conveys the sense of
active concern with action, and especially with the securing of effective
cooperative action. At a higher level, management is also concerned with
the generation of contexts in which effective action can consistently be
realized.
In understanding management as taking care of the articulation and
activation of a network of commitments, produced primarily through
promises and requests, we cover many managerial activities. Nevertheless,
we also need to incorporate the most essential responsibilities of managers:
to be open, to listen, and to be the authority regarding what activities and
commitments the network will deal with. These can be characterized as
participation in `conversations for possibilities' that open new backgrounds
for the conversations for action.
The key aspect of conversations for possibilities is the asking of the
questions "What is it possible to do?" and "What will be the domain of
actions in which we will engage?" This requires a continuing reinterpre-
tation of past activity, seen not as a collection of past requests, promises,
and deeds in action conversations, but as interpretations of the whole
situation-interpretations that carry a pre-orientation to new possibilities
for the future. Like the car owner in our example, the manager needs to
be continually open to opportunities that go beyond the previous horizon.
the most relevant things for the manager's concern. An information system
dedicated to collecting data and answering predetermined questions, even
one as well-designed as possible, will be harmful if it is not complemented
by heterodox practices and a permanent attitude of openness to listening.
1972):
2See, for example, Holt, Ramsey, and Grimes, "Coordination system technology as
the basis for a programming environment" (1983), and Sluzier and Cashman, "XCP:
An experimental tool for supporting office procedures" (1984).
situations. In many contexts this kind of explicitness is not called for, and
may even be detrimental. Language cannot be reduced to a representation
of speech acts. A coordination system deals with one dimension of lan-
guage structure-one that is systematic and crucial for the coordination
of action, but that is part of the larger and ultimately open-ended domain
of interpretation.
We conclude here by pointing out that computer tools themselves are
only one part of the picture. The gain from applying conversation theory
in organizations has to do with developing the communicative competence,
norms, and rules for the organization, including the training to develop the
appropriate understanding. This includes the proper terminology, skills,
and procedures to recognize what is missing, deteriorated, or obtruding
(i.e., what is broken-down), and the ability to cope with the situation.
People have experience in everyday dealing with others and with situa-
tions. Nevertheless, there are different levels of competence. Competence
here does not mean correct grammatical usage or diction, but successful
dealing with the world, good managerial abilities, and responsibility and
care for others. Communicative competence means the capacity to express
one's intentions and take responsibilities in the networks of commitments
that utterances and their interpretations bring to the world. In their day-
to-day being, people are generally not aware of what they are doing. They
are simply working, speaking, etc., more or less blind to the pervasiveness
of the essential dimensions of commitment. Consequently, there exists
a domain for education in communicative competence: the fundamental
relationships between language and successful action. People's conscious
knowledge of their participation in the network of commitment can be re-
inforced and developed, improving their capacity to act in the domain of
language.
Chapter 12
Using computers: A direction for design
1We do not use `design' here in the narrow sense of a specific methodology for creating
artifacts, but are concerned with a broad theory of design like that sought in the
work of reflective architects such as Alexander (Notes on the Synthesis of Form, 1964).
Readiness-to-hand
One popular vision of the future is that computers will become easier to
use as they become more like people. In working with people, we
establish domains of conversation in which our common pre-understanding
lets us communicate with a minimum of words and conscious effort. We
become explicitly aware of the structure of conversation only when there is
some kind of breakdown calling for corrective action. If machines could
understand in the same way people do, interactions with computers would be
equally transparent.
This transparency of interaction is of utmost importance in the design
of tools, including computer systems, but it is not best achieved by
attempting to mimic human faculties. In driving a car, the control
interaction is normally transparent. You do not think "How far should I
turn the steering wheel to go around that curve?" In fact, you are not
even aware (unless something intrudes) of using a steering wheel.
Phenomenologically, you are driving down the road, not operating controls.
The long evolution of the design of automobiles has led to this readiness-
to-hand. It is not achieved by having a car communicate like a person, but
by providing the right coupling between the driver and action in the
relevant domain (motion down the road).
In designing computer tools, the task is harder but the issues are the
same. A successful word processing device lets a person operate on the
words and paragraphs displayed on the screen, without being aware of
formulating and giving commands. At the superficial level of `interface
design' there are many different ways to aid transparency, such as special
function keys (which perform a meaningful action with a single keystroke),
pointing devices (which make it possible to select an object on the screen),
and menus (which offer a choice among a small set of relevant actions).
More important is the design of the domains in which the actions are
generated and interpreted. A bad design forces the user to deal with com-
plexities that belong to the wrong domain. For example, consider the user
of an electronic mail system who tries to send a message and is confronted
with an `error message' saying "Mailbox server is reloading." The user op-
erates in a domain constituted of people and messages sent among them.
This domain includes actions (such as sending a message and examining
mail) that in turn generate possible breakdowns (such as the inability to
send a message). Mailbox servers, although they may be a critical part
of the implementation, are an intrusion from another domain-one that
is the province of the system designers and engineers. In this simple ex-
ample, we could produce a different error message, such as "Cannot send
message to that user. Please try again after five minutes." Successful
system builders learn to consider the user's domain of understanding after
seeing the frustrations of people who use their programs.
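The fix amounts to a translation table between domains. A minimal sketch, with hypothetical error names standing in for whatever the implementation actually reports:

# Failures arising in the implementation domain (servers, reloads) are
# reported in the user's domain (messages that cannot be sent yet).

USER_DOMAIN_MESSAGES = {
    "SERVER_RELOADING": "Cannot send message to that user. "
                        "Please try again after five minutes.",
    "MAILBOX_FULL":     "That user cannot receive mail right now. "
                        "Please try again later.",
}

def report(implementation_error):
    """Map an implementation-domain error onto the domain of people and messages."""
    return USER_DOMAIN_MESSAGES.get(
        implementation_error,
        "The message could not be sent. Please try again later.")

print(report("SERVER_RELOADING"))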
But there is a more systematic principle at stake here. The program-
mer designs the language that creates the world in which the user operates.
This language can be `ontologically clean' or it can be a jumble of related
domains. A clearly and consciously organized ontology is the basis for the
kind of simplicity that makes systems usable. When we try to understand
the appeal of computers like the Apple Macintosh (and its predecessor the
Xerox Star), we see exactly the kind of readiness-to-hand and ontological
simplicity we have described. Within the domains they encompass-text
and graphic manipulation-the user is `driving,' not `commanding.' The
challenge for the next generation of design is to move this same effec-
tiveness beyond the superficial structures of words and pictures into the
domains generated by what people are doing when they manipulate those
structures.
Anticipation of breakdown
Our study of Heidegger revealed the central role of breakdown in human
understanding. A breakdown is not a negative situation to be avoided, but
a situation of non-obviousness, in which the recognition that something is
missing leads to unconcealing (generating through our declarations) some
aspect of the network of tools that we are engaged in using. A breakdown
reveals the nexus of relations necessary for us to accomplish our task. This
creates a clear objective for design-to anticipate the forms of breakdown
and provide a space of possibilities for action when they occur. It is im-
possible to completely avoid breakdowns by means of design. What can be
designed are aids for those who live in a particular domain of breakdowns.
These aids include training, to develop the appropriate understanding of
the domain in which the breakdowns occur and also to develop the skills
166 CHAPTER 12. A DIRECTION FOR DESIGN
and procedures needed to recognize what has broken down and how to
cope with the situation.
Computer tools can aid in the anticipation and correction of breakdowns
that are not themselves computer breakdowns but are in the application
domain. The commitment monitoring facilities in a coordination
system are an example, applied to the domain of conversations for action.
In the design of decision support systems, a primary consideration is the
anticipation of potential breakdowns. An early example of such a system
was Cybersyn,2 which was used for monitoring production
in a sector of the economy of Chile. This system enabled local groups to
describe the range of normal behavior of economic variables (such as the
output of a particular factory), and to be informed of significant patterns of
variation that could signal potential breakdown.
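The monitoring idea can be sketched simply. The variable names, ranges, and readings below are invented for illustration; the point is that local groups declare what counts as normal, and the system reports only significant variation.

# Local groups describe the range of normal behavior of a variable;
# readings outside it signal potential breakdown.

normal_ranges = {"factory_7_output": (900, 1100)}   # units per day, hypothetical

def significant_variation(variable, readings):
    low, high = normal_ranges[variable]
    return [r for r in readings if not (low <= r <= high)]

print(significant_variation("factory_7_output", [1020, 980, 1240, 610]))
# -> [1240, 610]: patterns that could signal a potential breakdown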
But more importantly, breakdowns play a fundamental role in design.
As the last section pointed out, the objects and properties that constitute
the domain of action for a person are those that emerge in breakdown.
Returning to our simple example of an electronic mail system, our `fix'
left a person with certain courses of action in face of the breakdown. He
or she can simply forget about sending the message or can wait until later
to try sending it again. But it may be possible to send it to a different
`mail server' for delayed forwarding and delivery. If so, it is necessary
to create a domain that includes the existence of mail servers and their
properties as part of the relevant space in which the user exists.
In designing computer systems and the domains they generate, we must
anticipate the range of occurrences that go outside the normal functioning
and provide means both to understand them and to act. This is the basis
for a heuristic methodology that is often followed by good programmers
("In writing the program try to think of everything that could go wrong"),
but again it is more than a vague aphorism. The analysis of a human
context of activity can begin with an analysis of the domains of breakdown,
and that can in turn be used to generate the objects, properties, and
actions that make up the domain.
situation. Once the manager senses this, the typical next step would be to
go to computer service vendors to find out what kinds of `systems' are
available and to see if they are worth getting. The space of possibilities is
determined by the particular offerings and the `features' they exhibit. But
we can begin with a more radical analysis of what goes on in the store and
what kinds of tools are possible.
As a first step we look for the basic networks of conversation that consti-
tute the business. We ask "Who makes requests and promises to whom,
and how are those conversations carried to completion?" At a first level
we treat the company as a unity, examining its conversations with the
outside world-customers, suppliers, and providers of services. There are
some obvious central conversations with customers and suppliers, opened
by a request for (or offer of) dresses in exchange for money. Secondary
conversations deal with conditions of satisfaction for the initial ones: con-
versations about alteration of dresses, conversations concerning payment
(billing, prepayment, credit, etc.), and conversations for preventing break-
down in the physical setting (janitorial services, display preparation, etc.).
Taking the business as a composite, we can further examine the con-
versational networks among its constituents: departments and individual
workers. There are conversations between clerk and stockroom, clerk and
accounting, stockroom and purchasing, and so forth. Each of these con-
versation types has its own recurrent structure, and plays some role in
maintaining the basic conversations of the company. As one simple ex-
ample, consider the conversation in which the stock clerk requests that
purchase orders be sent to suppliers. Instances of this conversation are ei-
ther triggered by a conversation in which a salesperson requested an item
that was unavailable, or when the stock clerk anticipates the possibility of
such a breakdown. Other conversations are part of the underlying struc-
ture that makes possible the participation of individuals in the network
(payroll, work scheduling, performance evaluations, etc.). Each conversa-
tion has its own structure of completion and states of incompletion with
associated time constraints.
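Such a record of conversations lends itself to a simple structure. The following sketch is our illustration of one conversation type with states of completion and a time constraint; the state names and fields are assumptions, not a specification of any particular coordination system.

# One conversation for action, with its states of completion and an
# associated time constraint.

from datetime import datetime, timedelta

class Conversation:
    STATES = ("requested", "promised", "completed", "declared_satisfactory")

    def __init__(self, requester, performer, conditions, due_in_days):
        self.requester = requester
        self.performer = performer
        self.conditions = conditions              # conditions of satisfaction
        self.deadline = datetime.now() + timedelta(days=due_in_days)
        self.state = "requested"

    def advance(self, new_state):
        self.state = new_state

    def incomplete_and_overdue(self):
        return (self.state != "declared_satisfactory"
                and datetime.now() > self.deadline)

# The stock clerk requests that a purchase order be sent to a supplier.
order = Conversation("stock clerk", "purchasing", "send purchase order", 7)
order.advance("promised")
print(order.incomplete_and_overdue())   # False, until the deadline passes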
request), and others that deal with breakdown (e.g., if a request for al-
teration is not met on time it may trigger a request by the customer to
see the manager). Having compiled this description, we can see possibili-
ties for restructuring the network on the basis of where conversations fail
to be completed satisfactorily. We may, for example, note that customer
dissatisfaction has come from alterations not being done on time (per-
haps because alterations are now being combined for the three stores and
therefore the tailors aren't immediately available). Actions might include
imposing a rigid schedule for alterations (e.g., never promise anything for
less than a week) so that commitments will be met on time, even if the
times that can be promised are less flexible. Or it might mean introducing
better tools for coordination, such as a computer-based system for keeping
track of alteration requests and giving more urgent ones higher priority.
We have not tried to deal in our dress shop example with concrete ques-
tions of computer devices. In practice one needs to make many choices
based on the availability, utility, and cost of different kinds of equipment-
computers, software packages, networks, printers, and so on. In doing so,
all of the same theoretical considerations apply. As computer users know
all too well, breakdown is a fundamental concern. It is important to rec-
ognize in this area that breakdowns must be understood within a larger
network of conversation as well. The issue is not just whether the machine
will stop working, but whether there is a sufficient network of auxiliary
conversations about system availability, support, training, modification,
and so on. Most of the well-publicized failures of large computer systems
have not been caused by simple breakdowns in their functioning, but by
breakdowns in this larger `web of computing'3 in which the equipment
resides.
3This term is from Kling and Scacchi, "The web of computing" (1982), which is
based on empirical studies of experience with large-scale computer systems in a social
context.
ified practitioner, and it can well be argued that the domains generated in
developing the system are themselves significant research contributions.
Such profession-oriented domains can be the basis for computational
tools that do some tasks previously done by professionals. They can also
be the basis for tools that aid in communication and the cooperative ac-
cumulation of knowledge. A profession-oriented domain makes explicit
aspects of the work that are relevant to computer-aided tools and can be
general enough to handle a wide range of what is done within a profession,
in contrast to the very specialized domains generated in the design of a
particular computer system. A systematic domain is a structured formal
representation that deals with things the professional already knows how
to work with, providing for precise and unambiguous description and ma-
nipulation. The critical issue is its correspondence to a domain that is
ready-to-hand for those who will use it.
Examples of profession-oriented systematic domains already exist. One
of the reasons for Visicalc's great success is that it gives accountants trans-
parent access to a systematic domain with which they already have a great
deal of experience-the spreadsheet. They do not need to translate their
actions into an unfamiliar domain such as the data structures and algo-
rithms of a programming language. In the future we will see the develop-
ment of many domains, each suited to the experience and skills of workers
in a particular area, such as typography, insurance, or civil engineering.
To some extent, the content of each profession-oriented domain will be
unique. But there are common elements that cross the boundaries. One
of these-the role of language in coordinated action-has already been
discussed at length. The computer is ultimately a structured dynamic
8The training was developed by F. Flores through Hermenet, Inc., in San Francisco,
and Logonet, Inc., in Berkeley.
Bibliography
Bobrow, Daniel, Terry Winograd, and the KRL Research Group, Experience
with KRL-0: One cycle of a knowledge representation language,
Proceedings of the Fifth International Joint Conference on Artificial
Intelligence, Pittsburgh: Carnegie Mellon Computer Science Department,
1977, 213-222.
Boguslaw, Robert, The New Utopians: A Study of System Design and
Social Change, Englewood Cliffs, NJ: Prentice Hall, 1965.
Buchanan, Bruce, New research on expert systems, in J.E. Hayes, D.
Michie, and Y-H. Pao (Eds.), Machine Intelligence 10, Chichester:
Ellis Horwood Ltd., 1982, 269-299.
Business Week, Artificial intelligence: The second computer age begins,
March 8, 1982, 66-72.
Business Week, Robots join the labor force, June 9, 1980, 62-76.
Cadwallader-Cohen, J.B., W.S. Zysiczk, and R.R. Donnelly, The chaostron,
Datamation, 7:10 (October 1961). Reprinted in R. Baker (Ed.),
A Stress Analysis of a Strapless Evening Gown, New York: Anchor, 1969.
Cashman, Paul M. and Anatol W. Holt, A communication-oriented ap-
proach to structuring the software maintenance environment, ACM
SIGSOFT, Software Engineering Notes, 5:1 (January 1980), 4-17.
Chomsky, Noam, Reflections on Language, New York: Pantheon, 1975.
Cicourel, Aaron V., Cognitive Sociology: Language and Meaning in Social
Interaction, New York: Free Press, 1974.
Club of Rome, The Limits to Growth, New York: Universe Books, 1972.
CNSRS, Report on artificial intelligence, prepared by the Inria Sico club
and the CNSRS working party on Artificial Intelligence, in Technology
and Science of Informatics, 2:5 (1983), 347-362.
D'Amato, Anthony, Can/should, computers replace judges?, Georgia Law
Review, 11 (1977), 1277-1301..
Davidson, Donald and G. Harman (Eds.), Semantics of Natural
Language,
Dordrecht: Reidel, 1972.
Davis, Dwight B., Assessing the Strategic Computing Initiative, High
Technology, 5:4 (April 1985), 41-49..
Davis, Randall, Interactive transfer of expertise: Acquisition of new infer-
ence rules, Artificial Intelligence, 12:2 (August 1979), 121-157.
Dennett, Daniel, Intentional systems, The Journal of Philosophy, 68 (1971),
87-106. Reprinted in Haugeland (1981), 220-242.
Dennett, Daniel, Mechanism and responsibility, in Ted Honderich (Ed.), Essays on the Freedom of Action, London: Routledge and Kegan Paul, 1973. Reprinted in D. Dennett, Brainstorms: Philosophical Essays on Mind and Psychology, Montgomery, VT: Bradford Books, 1978.

Dreyfus, Hubert L., What Computers Can't Do: A Critique of Artificial Reason, New York: Harper & Row, 1972 (2nd Edition with new Preface, 1979).
Dreyfus, Hubert L., Being-in-the-World: A Commentary on Division I of Heidegger's Being and Time, Cambridge, MA: M.I.T. Press, in press.

Dreyfus, Hubert L. and Stuart E. Dreyfus, Mind Over Machine, New York: Macmillan/The Free Press, 1985.

Evans, Christopher, The Micro Millennium, New York: Viking, 1979.

Feigenbaum, Edward, AAAI President's message, AI Magazine, 2:1 (Winter 1980/81), 1, 15.

Feigenbaum, Edward and Julian Feldman (Eds.), Computers and Thought, New York: McGraw-Hill, 1963.
Grice, H. Paul, Logic and conversation, in P. Cole and J.L. Morgan (Eds.), Syntax and Semantics, Volume 3: Speech Acts, New York: Academic Press, 1975.
Grosz, Barbara, Utterance and objective: Issues in natural language communication, AI Magazine, 1:1 (Spring 1980), 11-20.

Habermas, Jurgen, What is universal pragmatics?, in J. Habermas, Communication and the Evolution of Society (translated by Thomas McCarthy), Boston: Beacon Press, 1979, 1-68.

Habermas, Jurgen, Wahrheitstheorien, in H. Fahrenbach (Ed.), Wirklichkeit und Reflexion, Pfullingen: Neske, 1973, 211-265. Quotations based on anonymous translation (manuscript, 40 pp.), "Theories of truth," undated.

Haugeland, John, The nature and plausibility of cognitivism, The Behavioral and Brain Sciences, 2 (1978), 215-260. Reprinted in Haugeland (1981), 243-281.

Haugeland, John, Mind Design, Montgomery, VT: Bradford/M.I.T. Press, 1981.

Haugeland, John, Artificial Intelligence: The Very Idea, Cambridge, MA: Bradford/M.I.T. Press, 1985.

Hayes, Pat, In defence of logic, Proceedings of the Fifth International Joint Conference on Artificial Intelligence, Pittsburgh: Carnegie Mellon Computer Science Department, 1977, 559-565.

Heidegger, Martin, Being and Time (translated by John Macquarrie and Edward Robinson), New York: Harper & Row, 1962.

Heidegger, Martin, What Is Called Thinking? (translated by Fred D. Wieck and J. Glenn Gray), New York: Harper & Row, 1968.
Heidegger, Martin, On the Way to Language (translated by Peter Hertz), New York: Harper & Row, 1971.

Maturana, Humberto R., J.Y. Lettvin, W.S. McCulloch, and W.H. Pitts, Anatomy and physiology of vision in the frog, Journal of General Physiology, 43 (1960), 129-175.
Maturana, Humberto R., Gabriela Uribe, and Samy Frenk, A biological theory of relativistic color coding in the primate retina, Arch. Biologia y Med. Exp., Suplemento No. 1, Santiago: University of Chile, 1968.
Maturana, Humberto R. and Francisco Varela, Autopoiesis and Cognition: The Realization of the Living, Dordrecht: Reidel, 1980.

McCarthy, John, An unreasonable book (review of Joseph Weizenbaum's Computer Power and Human Reason), Creative Computing, 2:9 (September-October 1976), 84-89.
McDermott, John, R1: A rule-based configurer of computer systems, Artificial Intelligence, 19:1 (September 1982), 39-88.
Minsky, Marvin (Ed.), Semantic Information Processing, Cambridge, MA: M.I.T. Press, 1968.
Minsky, Marvin, A framework for representing knowledge, in Winston (1975), 211-277.

Minsky, Marvin, The society theory of thinking, in P. Winston and R. Brown (Eds.), Artificial Intelligence: An MIT Perspective, Cambridge, MA: M.I.T. Press, 1979, 421-452.

Minsky, Marvin, K-Lines: A theory of memory, in Norman (1981), 87-104.

Mintzberg, Henry, The Nature of Managerial Work, New York: Harper & Row, 1973.

Moore, J. and A. Newell, How can MERLIN understand?, in L. Gregg (Ed.), Knowledge and Cognition, Baltimore, MD: Lawrence Erlbaum Associates, 1973.

Moravcsik, Julius, How do words get their meanings?, The Journal of Philosophy, 78:1 (January 1981), 5-24.
Subject Index
Computer (continued)
    …, 98
    and structural change, 94
    for systematic domain, 174-177
    and task domain, 97
    as theory of behavior, 26
Computerization, 75, 154-155, 173
Concealment (See Tradition)
Concept formation, 101-102
Concernful activity, 33, 37, 69, 73
Condition of satisfaction, 60, 65-66, 171-172
Connectionist approach, 130
Consciousness, 52
Consensual domain, 48-52, 76
Content word, 17
Context
    analysis in SHRDLU, 110
    of answer, 155
    in decision making, 146
    in expert system, 131-133
    and frames, 116
    and interpretation, 28
    and literal meaning, 19
    in semantics, 19, 55, 113
    of speech act, 56
    of systematic domain, 177
    and textual meaning, 30
Conversation, 64-68, 157-162, 168-173
    completion of, 66, 160
    computer tools for, 157-162, 172-173
    as dance, 159
    recurrent, 168, 172-173
Conversation for action, 64, 151, 159-161
Conversation for possibilities, 151
Conversational principle, 57
Cooperative domain, 50
Coordination
    in organization, 143, 158
Coordination system, 157-162, 177, 179
    for programming, 161
    to reduce work rigidity, 169
The Coordinator, 159
Correspondence theory of meaning, 17-20
Counteroffer, 65
Coupling (See Structural coupling)
Cybernetics, 38n, 51, 130
Cybersyn, 166
Dasein, 31
Data bank, 155
Data base, 89, 129
Decision, programmed vs. nonprogrammed, 153
Decision maker, 144
Decision making, 20-23, 144-150
    and pre-orientation, 147
    and rationality, 145
    vs. resolution, 147-150, 151
    (See also Management, Problem solving)
Decision support system, 152-157, 166
Declaration, type of speech act, 59
Decomposition, 87
Decontextualization, 28
Default, in frame, 115, 117
Deliberation, as conversation, 149
DENDRAL, 131
Denotation, 17
Description, 50-52
Design, 4-7, 77-79, 163-179
    and anticipation (See Breakdown)
    of computer systems (See Computer)
    of conversation structures, 158, 169-170
    of domain, 165
    of formal representation, 96, 97
    as generation of new possibilities, 170
    ontological, 163, 177-179
    as problem-solving, 77
    and structural coupling, 53, 164, 178
Designer, power of, 154
Desire, attributed to computer, 106
Detached observation and reflection, 34, 71
Diagnosis, by computer, 131-133
Directive, type of speech act, 58, 157
Disintegration, of autopoietic system, 45
Display hack, 91-92
Dissolving, of problem, 148
Distinction
    in language, 69, 174
    by observer, 50-51, 73
DOCTOR, 120-121
Doing, as interpretation, 143-144
Domain, 46-53, 170-179
    of action, 53, 166
    of anticipation, 172
    of breakdown, 166, 170-171, 173
    created by design, 179
    created by language, 174
    design of, 165
    of distinctions, 73
    of explanation, 52-53
    linguistic, 51
    of perturbations, 43, 53, 75
    of recurrence, 64
    of structure-determined behavior, 47, 73
    (See also Cognitive domain, Consensual domain, Cooperative domain, Systematic domain)
Dress shop, as example of design, 167-174
Driving, as example of decision making, 145
DSS (See Decision support system)
Dualism, 30-31, 39, 47
Effectiveness
    vs. efficiency, 153
    of representation, 85
Efficiency
    of computer operations, 91
    vs. effectiveness, 153
Electronic mail, 165-166
ELIZA, 120-124
Emergent situation, 153
Empiricism, 16n
Energy crisis, 147
Engagement, in speech act, 59
Environment, 43n, 44-46, 48
Epistemology, 44, 73
Equipment (See Network of equipment)
Eskimo, 69
Established situation, 153
Ethnomethodology, 29n
EURISKO, 130
Everydayness, 34, 98
Evolution
    artificial, 100-104
    biological, 44-46
    and learning programs, 100-107
    as structural coupling, 45
    time scale of, 103
Exclusionary commitment, 149
Existence, 68-69 (See also Being, Ontology)
Expectation (See Frame)
Experiential grounding of meaning, 67
Expert, 132
    and meaning, 62n
    representation of knowledge, 99
Expert system, 131-139
    limitations, 132-133
    as systematic domain, 175
    (See also Fifth generation)
Explanation (See Domain)
Expressive, type of speech act, 59
Felicity condition, of speech act, 56, 58
Fifth generation computer system, 133-139
First order structural change, 94
Fit, as example of condition of satisfaction, 171-172
Formal grounding of meaning, 67
Formal representation (See Representation)
Formalist approach to management, 21
Formalization, and recurrence, 64-68
Frame, 115-119, 130
Frog, visual system, 41, 46
Front end, natural language, 128
Functional description, 39
Game
    as model for logic, 67
    program for playing, 97
Game theory, 20
General Problem Solver, 95
Gestalt psychology, 51
Goal, in problem solving, 22, 53
GPS (See General Problem Solver)
Gravity, as example of present-at-hand, 36
Grounding, of speech act, 67-69
Hermenet, Inc., 179n
Hermeneutic circle, 30, 32
Hermeneutics, 27-30
    and commitment, 76
    and frames, 116
    objectivist, 28
    (See also Interpretation)
Heuristic (See Search)
Hierarchical decomposition, 87
Hierarchy of levels, 94
High-level programming language, 88-91, 96n
Historicity
    of domains, 64
    of individual, 29
    and tradition, 7
History, embodied in structure, 47
Horizon, 28, 30 (See also Hermeneutics, Interpretation, Pre-understanding)
Hypothesis, in science, 15
Idealism, 31
Illocutionary force, 59
    in coordinator, 159-161
Illocutionary point, 58-59
Implementation
    of computer program, 91
    of functional mechanism, 39
    of programming language, 87
Indexical, 19n, 111
Indicative, 19
Individual, 29, 33
Infelicity, 55
Reasoning (continued)
    and frames, 116-118
    informal, 117-118
    resource limited, 118
    in SHRDLU, 110
    (See also Logic, Representation, Thinking)
Recognition as understanding, 115-119
Recurrence
    in conversation, 67-68, 161, 168-170
    and distinction, 69
    and meaning, 60-68
    in organization, 150, 158, 161
    of propositional content, 161
    in science, 16
    of tasks, 153
Recursive decomposition, 87
Reference
    in SHRDLU, 110, 113
    of symbol in computer, 86
Relevance
    of alternative, 149
    of computer system, 153
Representation, 33, 72-74, 84-92
    accidental, 91-92
    and blindness, 97-100
    and cognition, 73
    in computer, 84-92, 96-100
    of facts, 89
    formality of, 85, 96n
    as interpretation, 35
    and knowledge, 72-74
    and language, 108
    in learning, 36, 101-104
    levels in computer, 86-92
    in nervous system, 41-48, 73
    in problem solving, 22-23
    (See also Frame, Script)
Representation hypothesis, 74
Research programme, 24
Resolution, 147-150, 151
Resource, in computer system, 91
Resource-limited processing, 117-119
Responsibility
    for being understood, 63
    and communicative competence, 162
    in computer systems, 123, 155
    as essential to human, 106
    (See also Commitment)
Restaurant, example of script, 120
Roadmap, as analogy for meaning, 61
Robot, 86, 103-104, 127-128
    simulated in SHRDLU, 109-110
SAM, 119-121
Sapir-Whorf hypothesis, 29n
Satisfaction (See Condition of satisfaction)
Scandal of philosophy, 31
Scheduling, of work, 169
Schema, 115
Science, 14-16, 24, 67
    cognitive (See Cognitive science)
Scientific method, 15-16
Script, 120, 122
Search
    in problem solving, 22-23
    procedure for, 96-97
Second order structural change, 94
Selection, 45, 100
Semantic correspondence, 18
Semantics, 18 (See also Meaning, Situation semantics)
Semiotics, 63n
Semi-structured task, 152-153
Sense, of word, 55
Sequential processor, 88
Sex, 122
Shakespeare, computer understanding of, 119, 122-123
Specification, of state, 43
Specification language, 176
Speech, recognition by computer, 129
Speech act
    in background, 63
    in coordinator, 159-162
    and rationalistic tradition, 60
    taxonomy of, 159
    and time, 67, 160
    (See also Commitment, Conversation)
Spreadsheet, 175n, 176
Stance, for explanation, 106
Standard observer, 67
Superhuman-human fallacy, 99
Symbol structure, 22-23, 84-86
Symbol system, 25 (See also Physical symbol system)
System Development Foundation, xiv
Temporality (continued)
    in speech act, 160
Terminal, of frame, 115
Text (See Interpretation, Understanding)
Theoretical understanding, 32
Theory
    as computer program, 26
    relevance to design, xii
Thing, 72-73
Thinking, 16, 71, 73 (See also Cognition, Intelligence, Rationality, Reasoning, Understanding)
Thought (See Thinking)
Thrownness, 33-36, 71, 78, 97, 145-147
    within language, 68
Time (See Temporality)
Tool (See Computer, Design, Network of equipment, Technology)
TOP, 122
Tradition, 7-9, 60-63
    concealment of, 7, 179
    and language, 40, 61
    and objectivity, 60-63
    and pre-understanding, 74
    rationalistic (See Rationalistic tradition)
    unconcealment of, 5, 179
Transformation, and design, 177-179
Translation (See Machine translation)
Transparency of interaction, 164
Triggering
    in conversation, 168
    by perturbation, 48-49
Truth, 17-20, 55-58
    as agreement (Habermas), 62
    vs. appropriateness (Austin), 57
    and grounding, 67
Truth condition, 19, 54, 57, 112
Truth theoretic semantics, 17-19
Ultra-intelligent machine, 4
Unanticipated effect, of technology, 154
Unconcealment (See Tradition)
Understanding, 27-33, 115-124
    and autonomy, 123
    and Being, 27-37
    as commitment, 123-124
    by computer, 75, 107-124, 128-129, 135, 159
    and memory, 116
    and ontology, 30-33
    practical vs. theoretical, 32
    as recognition, 115-119
    (See also Interpretation, Pre-understanding)
Unity, 44
Unpredictability, of deterministic device, 95
Unreadiness, 5n, 147
Unready-to-hand, 36, 72, 73 (See also Ready-to-hand)
Unstructured problem, 153
User-friendly, 72, 164
Validity claim, 59
Value, in decision-making, 21-22
Veridicality, of representation, 85
Visicalc, 175, 176
Vision
    color, 41-43
    by computer, 104, 130
    in frog, 38, 41
VLSI (Very large scale integration), 133n, 136-138
Voice (See Speech)
Water, as example of word meaning, 55-56, 60, 62n, 69
Web of computing, 173
Word processing
    as example of equipment, 5-7, 36-37, 53, 164
    as systematic domain, 175
Workgroup system (See Coordination system)
World view, 8, 29n
Xerox Palo Alto Research Center, xi
Xerox Star, 165
Yale University, 128n