
Formal Semantics and Pragmatics

in Sign Languages

Kathryn Davidson

August 24, 2022


Contents

1 Meaning and language 4


1 Event experiences . . . . . . . . . . . . . . . . . . . . . . . . . 7
2 Propositional meaning . . . . . . . . . . . . . . . . . . . . . . 9
3 Formal semantics: Basics . . . . . . . . . . . . . . . . . . . . 13
4 Information: entailment, presupposition, and implicature . . 23
5 Fieldwork and semantics in understudied languages . . . . . . 27
6 On notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

2 Questions, answers, and information 32


1 Question-Answer clauses . . . . . . . . . . . . . . . . . . . . . 38
2 Sentence-final focus position . . . . . . . . . . . . . . . . . . . 43
3 Topicalization . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4 Contrast . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5 Embedding diagnostics . . . . . . . . . . . . . . . . . . . . . . 50
6 Incompatibility of depictions and alternatives . . . . . . . . . 52
7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

3 Logical connectives 57
1 Negation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
2 Conjunction and Disjunction . . . . . . . . . . . . . . . . . . 67
3 Semantics/Pragmatics interface: Implicatures . . . . . . . . . 72
4 Coordination and information structure . . . . . . . . . . . . 74
5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

4 Anaphora: a spatial discourse 77


1 Space as depictive and arbitrary . . . . . . . . . . . . . . . . 78
2 Pronouns: Evidence for shared structure . . . . . . . . . . . . 80
3 Verbs: Agreement versus cliticization . . . . . . . . . . . . . . 88
4 Loci vs dynamic indices . . . . . . . . . . . . . . . . . . . . . 93
5 Loci vs. gender/noun class features . . . . . . . . . . . . . . . 99
6 Loci as depictive . . . . . . . . . . . . . . . . . . . . . . . . . 104
7 Spatial restriction . . . . . . . . . . . . . . . . . . . . . . . . . 106
8 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111


5 Classifiers, role shift, and demonstrations 113


1 Depictive classifiers . . . . . . . . . . . . . . . . . . . . . . . . 113
2 Classifier semantics . . . . . . . . . . . . . . . . . . . . . . . . 116
3 Argument structure of depictive classifiers . . . . . . . . . . . 118
4 Classifier pragmatics . . . . . . . . . . . . . . . . . . . . . . . 120
5 Role shift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
6 Interpreting indexical expressions . . . . . . . . . . . . . . . . 126
7 Role shift as context shift . . . . . . . . . . . . . . . . . . . . 127
8 Cross-linguistic variation, attraction and iconicity . . . . . . . 130
9 Constructed actions/Action role shift . . . . . . . . . . . . . . 133
10 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

6 Quantification 139
1 Quantification strategies across sign languages . . . . . . . . . 142
2 Quantificational domains . . . . . . . . . . . . . . . . . . . . 144
3 Quantification and binding . . . . . . . . . . . . . . . . . . . 148
4 Quantification and scope . . . . . . . . . . . . . . . . . . . . . 152
5 Psycholinguistic studies: Comprehension . . . . . . . . . . . . 154
6 Psycholinguistic studies: Production . . . . . . . . . . . . . . 155
7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156

7 Countability 160
1 The mass/count distinction . . . . . . . . . . . . . . . . . . . 164
2 Classifiers and countability . . . . . . . . . . . . . . . . . . . 167
3 Grammatical number . . . . . . . . . . . . . . . . . . . . . . . 170
4 Telicity and aspect . . . . . . . . . . . . . . . . . . . . . . . . 176
5 Event visibility . . . . . . . . . . . . . . . . . . . . . . . . . . 178
6 Pluractionality . . . . . . . . . . . . . . . . . . . . . . . . . . 182
7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186

8 Intensionality 188
1 Conditionals . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
2 Attitude verbs . . . . . . . . . . . . . . . . . . . . . . . . . . 193
3 Modals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
4 Intensional predicates and iconicity . . . . . . . . . . . . . . . 196
5 De dicto/de re . . . . . . . . . . . . . . . . . . . . . . . . . . 198
6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200

9 Conclusions 202
1 Events and propositions . . . . . . . . . . . . . . . . . . . . . 202
2 On pragmatic universality . . . . . . . . . . . . . . . . . . . . 208
3 Cross-linguistic typology . . . . . . . . . . . . . . . . . . . . . 210
4 Historical change . . . . . . . . . . . . . . . . . . . . . . . . . 212
5 Future directions . . . . . . . . . . . . . . . . . . . . . . . . . 213
Preface

This book was written with two aims for two separate audiences, and in
serving both it will likely fall short for each audience in different ways; it is
hoped that the advantage of bringing these audiences together, however, will
outweigh the disadvantages. The first audience for this book is intended to
be sign language researchers and the Deaf signing community. People who
know the most about sign languages may encounter research talking about
how sign languages are discussed in the field of formal semantics and want
to learn more to follow that research. To this audience, I hope that this
text provides an overview and reference of what is being said about sign
languages by researchers in formal semantics and related work in pragmat-
ics and syntax. The paramount goal of this book is to break down barriers
between research published on the formal semantics of sign languages and
the community that the language belongs to. I’m sure I speak for many in
the field of semantics in hoping that this will lead to significant update and
revision to what is presented in the following pages, as that would be the
best indication of progress.
The second audience for this book is the community of formal seman-
tics/pragmatics researchers as well as those in adjacent fields such as philoso-
phy (especially logic and philosophy of language) and psychology (especially
psycholinguistics, psychology of language, developmental and cognitive psy-
chology). Usually if pushed, all of these groups acknowledge the importance
of building theories of language that take sign languages into account, and
yet many researchers in these fields hesitate to include sign languages in
their work because of a lack of familiarity with glossing conventions, a lack of
basic training in the terminology and ideas, and difficulty envisioning what
sentences look like and what the possible parameters of variation are. To this
audience, I hope that this book will provide references to existing work
within the field and a sense of sign languages as described within the formal
semantic framework. It is a secondary goal of this book to make sign lan-
guages more familiar for those working on formal semantics, and to hopefully
provide enough basic knowledge of sign languages within the framework that
such researchers without any previous ties to sign languages will reach out
to scholars in the community for more collaborations and mutual support.


To both audiences, the book should provide a sense of what has been
claimed regarding formal semantics and pragmatics in sign languages, and
provide guideposts to many major outstanding questions in the field. The
assumed background is an introductory linguistics course.
I am a hearing person who learned American Sign Language as an adult.
Given this, I lack the deeper knowledge that members of the Deaf commu-
nity have about sign languages, and so my perspective in this book is that
of an academic: the kind of outsider knowledge that has historically been
privileged but which should not be confused with lived experience when it
comes to authority on the language itself. On top of that, language is al-
ways changing and varies across people, time, and contexts, so in the spirit
of linguistic analysis, nothing in what follows should be understood as pre-
scriptive, or what should be, but rather descriptive, only what seems to be
given what we know about natural language semantics in both spoken and
signed languages. Moreover, theoretical linguistics is a young field and sign
language linguistics is even younger, so perhaps the next generation of such
an overview will rewrite much of what is contained here; certainly the goal
for this book will have been met if this provides an accessible introduction
to the current state of theoretical thinking to the widest range of readers,
with hopes of progress and revision to come.
Each chapter attempts to begin with a general introduction to the topic
in semantics (work that has often but not always focused on written English),
and then reviews the main ways in which the topic has been studied in
sign language linguistics. Chapters conclude with both concrete examples
of semantic analysis of particular phenomena, as well as future-oriented
possible directions, both of which are intended to support further work.
In general within formal approaches to sign language linguistics, seman-
tics has been a relatively understudied field until quite recently, but there
are predecessors to this text that anyone interested in this topic should be
sure to consult if they are interested in complementary readings. The first
is Sandler and Lillo-Martin (2006) who provide for generative approaches to
phonology and syntax what this book aims to provide for formal approaches
to semantics of sign linguistics, working within a framework that assumes
similar ingredients at more abstract levels combining across language modal-
ities. A concise introduction to sign language linguistics more broadly can
be found in Hill et al. (2018). Schlenker (2022) provides a broader introduc-
tion to semantics as applied to many domains inside and outside of language
which includes several areas of focus on sign languages. Finally, there are
many dissertations that go more in depth on some of the topics here while
also presenting widely accessible introductions to aspects of formal seman-
tics; an excellent example for dynamic semantics is Barberà (2015) and for
information structure is Kimmelman (2014).

Acknowledgements. Any academic work is in many ways the result of
a group effort. The first thanks go to my own mentors, who all provided
support and direction, especially Ivano Caponigro, Rachel Mayberry, David
Barner, and Diane Lillo-Martin and the language/psych/cogsci communities
at UC San Diego, UConn, and Yale. At Harvard, I owe many thanks to all of
the members of the M&M lab, especially Annemarie Kocab, Dorothy Ahn,
Gunnar Lund, Kate Henninger, Nozomi Tomita, Chrissy Zlogar, Aurore
Gonzalez, Hande Sevgi, Shannon Bryant, and Yuhan Zhang all for collabo-
ration on projects that directly influenced the way that ideas are presented in
this book, Ronice Müller de Quadros, Deborah Chen Pichler, Andrea Sims,
and Laura Wagner for collaborations that influenced this writing, Gennaro
Chierchia for co-teaching a course that influenced the anaphora chapter,
Dora Mihoc for so much early lab support, our departmental administrators
Helen Lewis, Kate Pilson, and Victoria Koc, Andrew Bottoms for collabo-
ration on everything ASL at Harvard, and the Linguistics Department and
LangCog, Logic Group, and MBB communities on campus for wider per-
spective and inspiration. Of course, none of these people are responsible for
any mistakes in what follows, which are all mine. This project was supported
in part by a grant from the National Science Foundation, BCS 1844186: Ex-
perimental pragmatics and semantics in visual language. Finally, my most
enthusiastic thanks go to Jessica Tanner, who modeled the ASL filmed for
this book. Once when we were both on a stage at Yale, Jessica gave a
presentation that highlighted the artistic beauty inherent in sign languages,
arguing that they are more powerful than word-by-word text, able to paint
a whole picture and make a story come alive. Formal linguists notoriously
focus on [what to some are] the “boring” parts of a language, ignoring a
language’s beauty in a prioritization of scientific credibility. I hope that this
text manages to do some justice to both parts.
1 Meaning and language

How do we know what other people “mean” when they share ideas using lan-
guage? What is “meaning” in human language? And for that matter, what
counts as “language”? Answers to these kinds of questions are foundational
and far-reaching, far more complex than any single research program or sin-
gle discipline can handle, potentially encompassing social meaning and iden-
tity, actions and intentions, philosophy of language, psychology of mental
representations, language development, actions and persuasion, and cross-
linguistic variation and typology, etc. That said, there has been enormous
advancement in the last few decades regarding how certain kinds of meaning
work in both signed and spoken languages from multiple perspectives. In
this book we are going to focus on at least two different kinds of meaning
that can be expressed through natural language, although these shouldn’t
be taken to be exhaustive. Furthermore, the approach will not be to argue
in favor of prioritizing one kind of meaning over the other, but rather take
as given that both capture true aspects of linguistic meaning, and that to
understand language meaning requires understanding both parts and how
they interact.
One kind of meaning can be thought of as the “picture in your head”
of some event or situation that you want to share with someone else (as
the language producer) or that you want to comprehend (as the language
receiver). Imagine that I am describing a rainbow in the sky on a beautiful
autumn day in New England, describing the colors, the shapes, the arc
of the rainbow, the feel of the wind, etc, in great detail. Increasing the
details that I use will helpfully add to the vivid mental model you may be
building as I try to share the experience with you, either by using highly

vivid words (e.g. perhaps windswept if I’m using English, or a similarly vivid sign in ASL)


or depictions (e.g. gesturing the expanse of the tree line, or at the gentle
fall of a leaf). Neither the producer nor the comprehender needs to actually


have experienced such a moment directly, but the idea is that we can use
language along with other communication systems like painting, enacting,
etc. to try to share something like a specific image/episode of it with you,
for example what it is like to be in New England on such a day, what my
birthday party was like when I was 10, or what it was like to watch bees
make honey that day at the fair, etc. We can remember and reason about
(at least some aspects of) experiences in the absence of language; what we
share about experiences need not depend on language, but it seems that
they can certainly be influenced by language, as when I use language to
share a new experience I had with you.
If you ask most people, the kind of “share the picture in your head”
meaning might be the first thing that comes to mind when they think about
the meaning of a sentence, but linguists working within the formal ap-
proaches to semantics and pragmatics focus on a different aspect of
meaning, the fact that we can raise and answer questions in order to share
information that might not even be able to be encoded as an experience of
any kind. Examples of this are the information we share when we say I’ve
never seen a rainbow or None of the students can identify the queen bee, or
even the content of generalized declarative memory like Boston is the capital
of Massachusetts or penguins lay eggs. These are not necessarily linked to
any particular event experience, but rather play an important role in sup-
porting reasoning over alternatives, i.e. whether you have or haven’t seen a
rainbow, which city is the capital of Massachusetts, which animal kingdom
penguins belong to, etc.
An important motivation behind formal approaches to semantics is that
we can reason over not only what was said, but also use logic to reason
about what else follows from what was said. For example, even though it
is hard to imagine a particular experience tied to I’ve never seen a rainbow,
we can infer many things if someone says that sentence, such as the fact
that the speaker specifically didn’t see a rainbow last week, or the week
before. We can similarly infer from None of the students can identify the
queen bee that there is a unique queen bee, and that if Nick is a student,
then Nick cannot identify the queen bee. This information is represented
not as specific experiences but as generalizations across scenarios in which
various facts are true, that we seem to be able to reason about in regular
ways. Statements like these and the deductions we can make from them
are a way we can learn, for example, that bees make honey (even if we’ve
never witnessed that process and can’t imagine what it would look like),
that Boston is in Massachusetts (even if we’ve never been there), that I love
science fiction (even if you’ve never witnessed me enjoying it), and even
understand me if I say I’ve never seen a rainbow when there is no particular
event that I am describing but rather a lack of one.
One foundational idea in this book is that language can be used for both
of these functions of meaning: we can use language to evoke particular event

experiences for our interlocutor, and we can also use language to share in-
formation that allows our interlocutor to reason over alternatives. When it
comes to language, some pieces of language contribute only to one of these,
some only to the other, and some to both. To take a familiar example to
see how they can work together, imagine we are reading a children’s sto-
rybook that contains both text and illustrations. The illustrations provide
evocative details about the events that make up the story, while the text
can convey information that reinforces the illustrations as well as other in-
formation which might be impossible to depict but allows us to rule out or
rule in certain information, such as I’ve never seen a rainbow, or all penguin
species lay eggs. In semiotic terms, illustrations convey meaning iconically
via depiction, while the words convey meaning symbolically via description
(Clark, 1996). We will see ways in which the compositional properties of
these kinds of meanings differ, yet both are integral to understanding mean-
ing in language broadly, across all language modalities including writing,
speaking, and signing (Dingemanse, 2015; Clark, 2016; Hodge and Ferrara,
2022). One of the theoretical aims of this book is to investigate how iconic
and symbolic representations interact in human language, with a special fo-
cus on symbolic representations as analyzed using formal semantic models.
In investigating these kinds of interactions between depiction and de-
scription in meaning, the particular focus of this book will be on sign lan-
guages as used by Deaf communities throughout the world, for several
reasons. First, one unreason: it is NOT because sign languages are mostly
depictive! Just like spoken languages, sign languages are highly symbolic and
compositional, and it is this aspect we will be emphasizing most of all, and
that is the focus of most formal semantics. However, because they are less
commonly written and less commonly presented as divorced from depictive
aspects, sign languages force semanticists to embrace both aspects of their
meaning, whereas the depictive aspects of spoken language communication
tend to be either ignored, or treated as roughly equivalent to descriptive con-
tent. Moreover, sign languages have been understudied relative to spoken
languages and thus it is worthwhile for all linguists to pay attention to them
in order to broaden the study of linguistics, notably here semantics, beyond
well-studied spoken languages like English. Another reason to look at these
views of meaning in sign languages is that sign languages help us dissociate
the particulars of spoken language from larger conclusions we might want to
draw about the human mind and meaning: when the manual/visual modal-
ity is utilized to its fullest extent for language, as it is in sign languages, we
might ask what notable properties, if any, human languages take on that
are often missed when researchers pay attention only to speech, or (even more
commonly) only to language as expressed in written text? And finally, we
focus on sign languages here in order to encourage those who already know
more about sign languages to consider questions of interest in formal studies
on meaning: as will become apparent in the following chapters, the field of

sign language linguistics has been growing rapidly in recent years but the
subfield of formal semantic approaches to sign languages is still primarily
engaged with by people who came to the topic through a general interest in
semantics more than those who came through an interest in sign languages,
and it is a goal of this work to support bridging between the two.

1 Event experiences
To clarify further the kinds of “meaning” that will be relevant in our study
of sign languages, let’s consider each type in turn, first with a sentence of
written English and then with a signed sentence in American Sign Language
(ASL). The written sentence is in (1). This sentence has two components of
meaning that will be relevant for us.

(1) Ten students stood in a line at the library desk, but none of them had
their library card.

First, it might evoke a kind of “picture in our head” of students near a


desk, perhaps feeling a bit sheepish for not having remembered to bring their
library cards. This is the representation one can have of the particular event
being described, and the details can vary from person to person: I might
imagine the students crowded in a line parallel to the desk, while someone
else might imagine the students crowded in a line perpendicular to the desk,
approaching one at a time; we may also differ in the attitudes we imag-
ine for the students, if we imagine them holding any particular attitudes,
perhaps based on our own experiences in such situations. This sentence is
not especially vivid in its depiction so the image may be minimal, or even
non-existent, while for others this sentence may evoke a rather vivid scene.
When paired with a photograph or illustration, some aspects might become
more vivid and detailed: perhaps we’d learn more about their attitudes, the
arrangement of the students, and other features of the experience.
Looking at the ASL sentence in (2) we can reason similarly: it might
evoke images and/or memories of libraries, desks, waiting, etc., and we might
imagine attitudes of the students. In the case of the visually presented ASL
sentence, the signer can also express her attitude about the content of the

sentence, as in the surprised expression on the sign ‘none’. This


sentence also conveys something about the shape of the students stand-
ing around the desk, which is clearly a single file line parallel to the desk,

expressed through the classifier expression glossed ‘(upright figures) standing in a line’.

(2) [ASL sentence images]
‘Ten students stood in a line behind/along the library desk. None of
them remembered a library card.’
(Glossing for ASL examples primarily follows the ID-Gloss and SLAASh ID
Glossing principles, Hochgesang 2020; Hochgesang et al. 2020.)

Thus, while there are clearly similarities in the kinds of events we would
imagine based on the English and ASL sentences we’ve seen, there are
also differences. These are often the kinds of things that people who know
both languages remark on as different, especially focusing on the expressive
power of sign languages that is lacking in English, especially written
English without intonation or gestures about the arrangements, etc.
In the approach we are taking in this book, we will lump together the
content of depictions and expressive meaning, including the shape of the
students in line in (2) and the expression on the sign glossed ‘none’, as both
depictive. There are both scientifically motivated and practical reasons for
doing this. Scientifically, both of these pieces of meaning seem able to share
a format with the contents of our perception of the world itself, which isn’t
to say it isn’t complex or abstract (Siegel, 2011), only that we can under-
stand it via its correspondence to the world as we experience it, that is, we
understand what it means because we can reason about it via simulation
and cause and effect, like when the signer’s expression means that the au-
thor is surprised (“natural meaning” in terms of Grice 1957), since based on
our experiences out in the world that given expression is typically tied to
the attitude. Practically, this is the distinction psychologists have already
made (Clark, 1996, 2016).

Depictions are infrequently represented in written English and so they


often do not play a large role in formal semantic theories of (spoken lan-
guage) meaning, but in sign linguistics quite a lot of attention has been
paid to the way that depictions convey meaning iconically as part of sign
language discourse (Taub, 2001; Liddell, 2003; Emmorey, 2003; Perniss and
Vigliocco, 2014; Ferrara and Hodge, 2018; Hodge and Ferrara, 2022). Many
analyses make use of Cognitive Linguistics approaches to meaning, in which
we reason about meaning through the way that we reason about and experi-
ence the world, and thus iconicity and its influences on meaning often takes
center stage. We don’t want to lose the insights into depiction in language
we gain from this approach, but at the same time, one argument of this book
is that it isn’t enough. The remainder of this chapter focuses on insights
from formal approaches to modeling linguistic meaning via propositions that
we also need to capture in a full theory of sign language semantics.

2 Propositional meaning
Beyond whatever particular event or “picture in your head” that a piece of
language might evoke, we use language to share extremely precise informa-
tion about things that simply cannot be experienced. Let’s return to the
same example sentences from English/ASL about the library desk. The En-
glish sentence in (3) conveys not just some impression of what it might have
been like to be there, but also the facts that there is a library desk, and ten
students in a particular relation to that desk, and that there is not a single
one of the students who brought their card. The latter of these is, of course,
quite difficult to model as an experience, given that it is about what did not
happen. Nevertheless we can reason productively over these utterances. For
example, if we believe the speaker, then we can be sure that (a) if we ask one
of the ten students in line, that student will not have their library card with
them, and that (b) there are more than five students. Although neither of
these facts were explicitly stated, we feel completely confident about them:
there is no way that (a) or (b) can be false, if we accept the original claim in
(3). We call this relation entailment: a sentence p entails another sentence
q if for every circumstance in which p is true, q is also true. There is no
scenario in which the main sentence in (3) is true but in which (a) or (b) is
false, so the target sentence in (3) entails (3a) and also entails (3b).

(3) Ten students stood in a line at the library desk, but none of them had
their library card.
Entails:
a. If we ask one of the ten students in line, that student will not have
their library card with them.
b. There are more than five students.

This captures something powerful about language: we can use language to


convey information about our world to such a specific extent that we can
learn and reason about things we did not actually see, and things and events
that do not even exist.
The power of being able to reason with confidence over possibilities based
on linguistic information has been the basis for modeling the informational
component of meaning using logic, as conveying a set of possibilities in which
it holds. This is one motivation for a formal truth-conditional approach
to semantics. For example, we can think about our English sentence about
students standing in line (3) as a function that sorts possibilities into those
that are true and those that are false with respect to its content. For ex-
ample, sentence (3) might be true in the following scenarios, or "ways the
world might be" (imagine the multiverse!): {w1 , w3 , w5 , w7 , w9 ...}. In some
of these possibilities the students are in a parallel line (let’s assume these
are {w1 , w5 , w9 }) while in others the students will be in a perpendicular line
(e.g. {w3 , w7 , ...}), since the English sentence in (3) doesn’t discriminate
between these arrangements, but in all of them there will be more than five
students, and in all of them it will be the case that if you begin talking
to one of the ten students, they won’t have their library card with them.
This is what it means to model entailment: if p entails q then the possibil-
ities/worlds in which p is true are a subset of those in which q is true. For
example, we can imagine that (3) is true in {w1 , w3 , w5 , w7 , w9 ...}, whereas
(3a) is true in all of those worlds and then some others, perhaps the worlds
in which that student didn’t bring their card but some other students did,
e.g. {w1 , w3 , w4 , w5 , w7 , w8 , w9 ...}.
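To make the subset picture concrete, here is a minimal executable sketch (mine, not the book’s; the worlds and their membership are invented for illustration) that treats propositions as sets of worlds and entailment as the subset test just described:

    # Propositions as sets of possible worlds; entailment as the subset relation.
    p = {"w1", "w3", "w5", "w7", "w9"}        # worlds where (3) is true
    q = p | {"w4", "w8"}                      # worlds where (3a) is true: all p-worlds and then some

    def entails(a, b):
        """a entails b iff every world where a is true is also a world where b is true."""
        return a <= b                         # set inclusion

    print(entails(p, q))                      # True: (3) entails (3a)
    print(entails(q, p))                      # False: the entailment only goes one way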
Researchers in the field of "formal semantics" are interested specifically
in how we make these kinds of entailment inferences: it seems that there are
an infinite number of entailed sentences from any given sentence, including
sentences we have never heard before. We must, therefore, have a way to
reason about and express information across a variety of circumstances, and
a way to express facts and generalities. Moreover, we must reason about
them in a productive way, since we can understand entailments of sentences
we have never even heard before. A helpful way to model this kind of propo-
sitional meaning and how it arises compositionally is by building on a logic
over symbols, basically the same underlying notions used in symbolic com-
puting. We’ll take the view in this book that this meaning is complementary
to the kind of evoked event experiences that can also result from linguistic
expressions, that we focused on in the previous section.
Sign languages provide exactly the same sort of evidence for an under-
lying logic and symbolic computing. Consider the ASL sentence in (4).

(4) [ASL sentence images]

Entails:
a. If we ask one of the ten students in line, that student will not have
their library card with them.
b. There are more than five students.
The ASL example, like the English example, evokes an image: a library
desk, a bunch of students, and in this case a particular arrangement of a

line perpendicular to the desk, since the sign glossed ‘stand in line’ in ASL
conveys, in its form, the arrangement of the line. Like the English example,
this ASL example need not be especially vivid with respect to the attitudes
of the students, and so we might imagine them with one attitude or another,
or perhaps our mental image of the scene would fail to include any attitude
of the students at all. It does convey some attitude of the speaker, as in
the expression on the sign glossed ‘none’. In this way, the ASL sentence might convey an image of a


particular scene that is quite similar to the English case, although with more
detail in this case, for example about the arrangement of the students with
respect to the desk, etc.
The ASL sentence also conveys a propositional meaning similar, but not
exactly the same, as the English sentence. Many of the entailments are the
same. For example, the target sentence in (4) entails that (a) if we ask one
of the ten students in line, that student will not have remembered to bring
their library card, and (b) there are more than five students. Thus we want
to model several of the same mysteries: how is it that despite never seeing
this sentence before, we know what it means to a detailed enough extent
that we can be sure that given that (4) is true, then (a) and (b) are also
true? Note that some of these inferences, especially (a), are difficult if not
impossible to model as images/specific events, since (a) is about what would
not happen, namely, that students would not have a library card. Thinking
of this meaning in terms of underlying logical structures provides advantages

for modeling these kinds of inferences, especially negative ones. In doing so,
important questions are raised for sign languages specifically, and for all
languages in general. For example, what status should the representation of
a particular event have with respect to our representation of a proposition?
Recall that the arrangement of the students around the desk is conveyed by (4)
in ASL but not by (1), and so we want to restrict the possible scenarios
in which (4) is true based on arrangement in a way we do not want to do
for English. This means that we will want a way to refer to these events
in the logical system in ASL (we will see more about this in Chapter 5).
Moreover, beyond the spatial arrangements of the students, there seem to be
other linguistic differences between the sentences. For example, the English
sentence seems to presuppose a library desk as already existing or familiar
in the conversation (“the desk”), while the ASL sentence seems neutral with

respect to whether the desk is familiar or new (the sign glossed ‘a/the desk’). These


differences too we want to model as part of one or both of these aspects of
meaning.
The schematic in (Fig 1.1) roughly outlines the approach we will be tak-
ing in this book regarding the study of these dimensions of meaning and their
relations to spoken and sign languages. On the one hand, we are interested
in meaning, which we will be representing in two complementary ways: the
particular event kind (those “pictures in the head”) and the propositional
kind (that symbolically encode information and compose with logical struc-
tures). However, we don’t have direct access to meanings in language. In
fact, we only have direct access to the linguistic signal: for a spoken language
like English this may include audio waves and visual waves, as exemplified
in the lower left hand box in Fig 1.1. In a sign language, this information
will be visual, as exemplified in the lower right hand box. In both cases,
the signal gets processed by a phonological system that takes a continuous
signal and encodes new levels of abstraction, which is processed by a system
of grammatical and syntactic structures that do the same at a higher level of
abstraction, etc., which then itself interfaces with a system for meaning that
depends on these syntactic structures. There is ample evidence for all of the
same complexities of structure and processing in sign languages as in spoken
languages with regard to phonology, morphology, syntax, etc. (Hill et al.,
2018; Sandler and Lillo-Martin, 2006; Valli and Lucas, 2000; Padden, 1988;
Stokoe et al., 1976). When it comes to meaning, we’ll model the system
as having (at least) two components, of the sort we have been discussing:
representations of particular events that we can reason about through ex-
perience, and representations of propositions that allow us to reason over
alternatives. A final assumption we will be making is that our representa-
tions of particular events can be influenced directly by the linguistic signal,
e.g. the shape or orientation of a rainbow gesture, and that this is processed
not through linguistic structures but through the process of understanding
depictions, the result of which is explicitly not propositional (Fodor, 2007;
Camp, 2018).

Figure 1.1: Descriptive/symbolic and depictive/iconic processing streams
The study of meaning in general, and the study of meaning in sign lan-
guages in particular, can be divided roughly by the kind of meaning
that researchers focus on and take to be core to language meaning. On the one
hand, the perspective of cognitive linguistics tends to model the meaning
of a sentence as the kind of event experience and/or simulation of the world
that one shares with an interlocutor. Under this view, the goal of someone
who studies semantics is to understand how language is used to share these
experiences. On the other hand, the perspective of formal linguistics
tends to consider the meaning of the sentence to be the propositional mean-
ing, and so the goal becomes to understand the ways that sentences convey
information via entailments and to understand the properties of whatever
logical apparatus we have in our minds that permits this reasoning. We
introduce this approach in greater detail in the next section.

3 Formal semantics: Basics


We take as our starting point the idea that the goal of the propositional
meaning of a sentence is to share information by narrowing down alterna-
tives. One way to think about propositional meaning (telling us the way the
world is) from a formal perspective is that propositional meaning narrows
down the live possibilities that we consider as ways that the world might be.

Figure 1.2: Assertion as narrowing possibilities
Under this view, when we assert a sentence, the goal is to keep some possi-
bilities under consideration and to eliminate other possibilities (Stalnaker,
1978). This process is schematized in Figure 1.2, where we might begin with
some set of possible ways that things might be. If we’re inclined to think
probabilistically, we might want to think about this infinite set as having
a probability distribution over possibilities, some more likely than others.
For example, if I don’t know the weather in San Diego CA at the moment,
then in some possibilities that I am considering it is currently snowing there
and in others it is sunny, but those in which it is sunny are generally go-
ing to be much more likely than the snowing ones given climate properties
of the region. This set of possibilities/possible worlds is a model of the
knowledge that speakers in a conversation share, their common ground.
If conversational participants agree that Boston is in Massachusetts, then in
every one of these possibilities in their common ground, Boston will be in
Massachusetts. If they don’t know or agree about whether it is raining in
Boston, then in some of these possibilities there will be rain in Boston; in
others, there will not be rain in Boston.
How does the common ground change? Perhaps participants are inter-
ested in resolving an issue, for example, whether the speaker has or hasn’t
seen a rainbow. We can model this issue as a partition of all of the possibil-
ities in the common ground into those in which the answer is yes (I did see
a rainbow) and those in which the answer is no (I did not see a rainbow).
An assertion then has the effect of eliminating possibilities, and a helpful
assertion will eliminate possibilities in a way that aligns with this partition.
For example, a sentence like I didn’t see a rainbow has the effect of elimi-
nating under consideration all of the possibilities in which the speaker did
see a rainbow, and so by conveying this sentence we have narrowed down

our set of possibilities, as schematized in the three step-by-step frames in


Fig 1.2. Thus, gaining information comes from eliminating possibilities.
The information conveyed by the pictures in Figure 1.2 can also be con-
veyed using a formal logical notation, as shown at the bottom of the figure.
For example, we can start with a set of possibilities {w1 , w3 , w5 , ...}, which
we can call the proposition p (p = {w1 , w3 , w5 , ...}). Then, we might par-
tition this set of possibilities via a question, in this example Did you see
a rainbow?, which we can (informally for now) think about as moving the
discourse in two possible directions: to add the proposition q = {w1 , w3 , ...}
(I saw a rainbow) or the proposition ¬q = {w5 , w9 , ...} (I didn’t see a rain-
bow). In this case, the new proposition q = {w1 , w3 , ...} is added to the
common ground by taking the intersection of this new proposition and the
old common ground (p ∩ q). Thus, by asserting I saw a rainbow, we add a
new proposition q to our common ground, gaining information by narrowing
possible ways that the world might be.
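The same dynamics can be sketched computationally. The toy model below is my own illustration (the worlds are invented): the common ground is a set of worlds, the question is a two-cell partition of it, and assertion is intersection.

    # Common ground update in the Stalnakerian picture described above.
    common_ground = {"w1", "w3", "w5", "w9"}          # p
    saw_rainbow   = {"w1", "w3"}                      # q: "I saw a rainbow"
    didnt_see     = common_ground - saw_rainbow       # the other cell of the partition

    question = [saw_rainbow, didnt_see]               # "Did you see a rainbow?" as a partition

    def assert_proposition(cg, prop):
        # Assertion eliminates possibilities: keep only the worlds where prop holds.
        return cg & prop

    common_ground = assert_proposition(common_ground, saw_rainbow)   # p ∩ q
    print(common_ground)     # {'w1', 'w3'}: fewer possibilities remain, so information is gained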
One objection often raised to this way of looking at information as “pos-
sibilities” or “possible worlds” is that it is computationally intractable for
people using language to be actually thinking about infinitely many possible
ways that things might be when they hear a sentence. Don’t we just activate
the picture in our mind, i.e. the representation of a particular event that
we discussed above? Certainly it does seem that we can evoke a picture in
our mind, but the claim here and in formal semantics more generally is that
we also have a symbolic representation that is separated from any particu-
lar model of the world, and acts as functions over possibilities. This does
not need to mean that we actively consider every possibility! (That sort of
superpower seems undesirable if the multiverse as experienced by Michelle
Yeoh’s character in the (2022) film Everything Everywhere All at Once is any
indication!) Instead we can reason symbolically. A useful analogy might be
the concept of an odd number: we can understand this as a sorting function
that considers each of the infinitely many integers and returns true if the
number isn’t divisible by two (that collection of numbers is the set of odd
numbers), and false if it is divisible by two. We can understand the concept
odd number without actively considering all possible integers in our mind
at once. Similarly, a proposition like I saw a rainbow can be thought of as
a sorting function that takes in a possible world/possible way that things
might be and returns true if certain conditions hold (in this case, that I saw a
rainbow in that world), and returns false if those conditions fail to hold. We
can understand the proposition without holding all possible worlds in our
mind, in cognitive scientific terms, as a function available for mental com-
putations (Fodor, 2008) that takes possibilities as arguments. Conveniently,
this makes negated sentences like I didn’t see a rainbow just as natural to
incorporate into our system: the proposition ¬q is simply the complement
set of worlds to q, an advantage for modeling human languages, all of which
have expressions for negation.
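In executable terms (again only a sketch, with invented toy worlds), a proposition can be written as a sorting function that inspects a world, exactly parallel to the odd-number test, and negation is simply the complementary function:

    # Propositions as characteristic functions, without enumerating all worlds.
    is_odd = lambda n: n % 2 == 1                 # the sorting function for odd numbers

    # A toy world simply records which facts hold in it.
    w1 = {"I_saw_a_rainbow": True}
    w5 = {"I_saw_a_rainbow": False}

    q     = lambda w: w["I_saw_a_rainbow"]        # "I saw a rainbow"
    not_q = lambda w: not q(w)                    # "I didn't see a rainbow": the complement

    print(is_odd(7), q(w1), not_q(w5))            # True True True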

Given this functional way of thinking about propositions, let’s end this
section by introducing a functional notation. Recall that we can think about
the propositional meaning of a sentence like I saw a rainbow as a function
that takes in possible worlds and returns true if some conditions hold (that
makes up the set of possible worlds in which it is true). Let’s write this
explicitly by introducing a new bracket notation that is conventional in
formal semantics, so that ⟦·⟧ takes a piece of language (like the English
sentence I saw a rainbow) and provides the propositional meaning of the
sort we differentiated from the representation for the event we saw above.
In equation form, we can read (5a) as saying that the propositional semantic
value of I saw a rainbow is the set of possible worlds in which I saw a rainbow,
or equivalently, the function that returns true for those worlds in which I
saw a rainbow. Here q is the proposition that I saw a rainbow, to relate
to Figure 1.2. The propositional value of a near translation equivalent in
American Sign Language would seem, at least on first blush, to pick out the
same proposition, (5b).

(5) a. ⟦I saw a rainbow⟧ = λw.q(w) = λw.I saw a rainbow in w
‘The function that returns TRUE for worlds in which I saw a rainbow, false otherwise.’
b. ⟦[ASL sentence image]⟧ = λw.q(w) = λw.I saw a rainbow in w
‘The function that returns TRUE for worlds in which I saw a rainbow, false otherwise.’

If these two sentences are true and false in exactly the same scenarios as
we suggested in the propositional meanings that we gave in (5) then we say
that they are synonymous.
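Under the same toy assumptions, synonymy as just defined can be checked as sameness of sorting behavior: two sentence meanings are synonymous if they return the same value at every world in the domain. The check below is only a hypothetical illustration.

    # Synonymy over a (finite, invented) set of candidate worlds.
    worlds = [{"I_saw_a_rainbow": v} for v in (True, False)]

    english_sentence = lambda w: w["I_saw_a_rainbow"]   # "I saw a rainbow"
    asl_sentence     = lambda w: w["I_saw_a_rainbow"]   # its near translation equivalent in ASL

    def synonymous(s1, s2, domain):
        return all(s1(w) == s2(w) for w in domain)

    print(synonymous(english_sentence, asl_sentence, worlds))   # True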
It has become conventional to use λ notation of the sort we show in (5)
(λw.I saw a rainbow in w) because it makes explicit the compositional/functional
properties of a propositional meaning. For example, we introduced the idea
that a proposition is a sorting function over possible worlds. We can also
model subparts of propositions as functions taking in different arguments,
and the λ notation brings to the front of the equation information about the
arguments that each kind of expression takes. In what follows we’ll introduce
this notation and its compositional properties via several examples.
One subpart of propositions is predicates, like see a rainbow or happy.
These can be modeled as functions over individuals: English happy or the
ASL sign glossed ‘happy’ sorts individuals into those who do and do not count as happy
(whatever is relevant to being happy in that context), while see a rainbow
and its ASL counterpart both sort individuals into those that do and don’t
see a rainbow (in some relevant context).

(6) a. ⟦happy⟧ = λx.happy(x)
‘The function that returns TRUE for individuals who are happy, false otherwise’
b. ⟦[ASL sign glossed ‘happy’]⟧ = λx.happy(x)
‘The function that returns TRUE for individuals who are happy, false otherwise’

(7) a. ⟦see a rainbow⟧ = λx.see-rainbow(x)
‘The function that returns TRUE for individuals who see a rainbow, false otherwise’
b. ⟦[ASL signs glossed ‘see rainbow’]⟧ = λx.see-rainbow(x)
‘The function that returns TRUE for individuals who see a rainbow, false otherwise’
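As a toy parallel (my own, with an invented extension for happy), a one-place predicate like those in (6) and (7) is just a sorting function over individuals rather than over worlds:

    # One-place predicates as sorting functions over individuals.
    happy_people = {"Ann", "Bo"}              # an invented extension for 'happy'
    happy = lambda x: x in happy_people       # λx.happy(x)

    print(happy("Ann"), happy("Cal"))         # True False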

Note that although the two forms are different in English (e.g. happy)

and in ASL (e.g. the sign glossed ‘happy’), the semantic values are the same, since these
two words have basically the same symbolic meaning: they are functions
that will return TRUE for individuals who are happy, and false otherwise.
Often beginning formal semantic students who are thinking only about the

semantics of English get confused by the fact that ‘happy’ occurs both within
the semantic value brackets and outside of them, i.e. on the left and right
side of the equation in (6a). This can feel circular. But as soon as we use
care to separate our object language that we are trying to model (English in
(6a) and ASL in (6b)) from the metalanguage we use to talk about the sort-
ing function (English mixed with mathematical notation, since that is the
textual language of this book) then hopefully it is clear that it is not circular,
but rather helps us make clear predictions for what linguistic expressions are
and are not synonymous.
So far we have hardly motivated λ notation much, but let’s consider some
linguistic expressions for functions that take in two participant arguments,

like English see or its ASL counterpart. These are typically used to talk about
scenarios (in this case, seeing events) that have both an agent (the see-er)
and a theme (what is seen). The propositional contribution can be modeled
as a two-place function that looks for two arguments, first the theme and
then the agent, which is represented in (8). The idea is that arguments are
applied to the function expressed by λ notation in order from outside to
inside, so that the first argument that we “feed” to the function in (8) will
saturate the λx, replacing each occurrence of x with the value of the argument
that we feed it. Then, the remaining argument will be fed to λy, replacing
any occurrences of y (this process is known as “β-reduction”, see Heim and
Kratzer (1998) for further introduction for linguists).

(8) ⟦see⟧ = λxλy.y sees x
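As a hedged aside (not part of the book’s formalism), the "feed the theme first, then the agent" order in (8) can be mimicked with nested one-argument functions; this is all that β-reduction amounts to computationally.

    # Curried two-place predicate, mirroring (8): λxλy.y sees x
    see = lambda x: lambda y: f"{y} sees {x}"

    step1 = see("the rainbow")      # saturates λx: λy. y sees the rainbow
    step2 = step1("the girl")       # saturates λy: the girl sees the rainbow
    print(step2)                    # "the girl sees the rainbow"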

To walk through an example, we’ll have to simplify a little bit at this


stage, in particular, let’s imagine that the phrase the girl picks out a par-
ticular girl (this girl), which we can represent on the mathematical side
either using a picture of her or a letter, say g. We can do the same
with the phrase the rainbow, which picks out a particular rainbow (this one),
which we might also name with a letter, let’s say r. (What we are
simplifying for now is the meaning of the English definite article the.) Under
this view, the propositional meaning/semantic value of the English phrase
the girl is a particular girl, not a function, which is a property of referential
noun phrases (“referential” meaning that it picks out a particular thing),
and similarly with the rainbow. This contrasts with a verb like see, which is
a function still looking for arguments to saturate/fulfill it (that’s why the
symbolic meaning for see has those lambda expressions, unlike the referential
expressions in (9)).

(9) a. ⟦The girl⟧ = g
b. ⟦The rainbow⟧ = r

The interesting piece comes when they combine. We know from a long
tradition of crosslinguistic work in syntax that objects combine with verbs
before subjects, meaning that see the rainbow forms a unit; as semanticists,
we will ask how we arrive at a meaning for this syntactic constituent. What
we propose is that see regularly contributes a function looking first for an
individual contributed by the object to take as an argument (that’s what
λx means: x is a variable over individuals, i.e. people/places/things like girls,
rainbows, etc.) (10a). Then its syntactic object the rainbow contributes its
propositional semantic value which is an individual, (10b). When the value
for the rainbow is fed into the see (two-place) function, it returns a (one-
place) function see the rainbow which returns TRUE for individuals who
see the rainbow r and false otherwise (10c). This then takes in one more
argument, the subject of the sentence the girl, returning a proposition which
is a function that takes in worlds and returns TRUE for those in which the
girl g sees the rainbow r and false otherwise (10d).

(10) a. ⟦see⟧ = λxλy.y sees x
b. ⟦the rainbow⟧ = r
c. ⟦see⟧(⟦the rainbow⟧) = (λxλy.y sees x)(r) = λy.y sees r
⟦see the rainbow⟧ = λy.y sees r
d. ⟦see the rainbow⟧(⟦the girl⟧) = (λy.y sees r)(g) = g sees r
⟦the girl sees the rainbow⟧ = g sees r

There are some levels of simplification here, but (10) exemplifies the
basic idea of compositionality and function application behind much of the
formal semantic approach to language. Advantages include showing how
symbols combine in regular ways, such that we can convey new information
about the world by just putting familiar symbols together in new ways, i.e.
that language is systematic. One complication we introduced above is that
the meaning of an assertion like The girl sees the rainbow is a function over
worlds, and the right hand side of (10d) hardly looks like a function over
worlds, but in fact it is one in disguise, it is simply the truth conditions
given a particular world of evaluation: it will be true in any situation if g
(that particular girl) sees r (that particular rainbow). If we want to give the
more general meaning that this is a function across worlds, we add in our λw
which turns this into a function over worlds (w is, perhaps unsurprisingly, a
variable used for worlds); this is often considered to be introduced at higher


levels in the syntax (we will leave other complexities of the equation in (11)
out for now, such as the model dependence of the meaning of each of the
content words, and assignment functions that we will see more about in
Chapter 4).

(11) ⟦the girl sees the rainbow⟧ = λw.g sees r in w


‘The function which takes in worlds and returns TRUE for those in
which the girl g sees the rainbow r and false otherwise.’
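To see the whole derivation in (9)-(11) at once, here is a small executable sketch of my own; the tuple encoding of facts and the toy world are invented, not part of the formal system presented above.

    # Composition of "the girl sees the rainbow", following (9)-(11).
    g, r = "g", "r"                                  # the girl and the rainbow (referential NPs)
    see = lambda x: lambda y: (y, "sees", x)         # (10a): λxλy.y sees x

    see_the_rainbow  = see(r)                        # (10c): λy.y sees r
    truth_conditions = see_the_rainbow(g)            # (10d): g sees r

    # (11): the λw layer turns the truth conditions into a function over worlds.
    proposition = lambda w: truth_conditions in w["facts"]

    w1 = {"facts": {("g", "sees", "r")}}             # a toy world where the girl sees the rainbow
    print(proposition(w1))                           # True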
Focusing on the compositional aspects, we can follow precisely this same
procedure with a sentence in a sign language. Let’s look at several com-
ponent pieces from American Sign Language and how they combine, as
exemplified in (12). In (12a), we see that the semantic value of the verb
(glossed see) is a function that needs first one individual argument (the theme)
and then another individual argument (the agent). The semantic value for
the sign glossed rainbow is the rainbow (12b), and this composes as the theme
argument for see (12c). Finally, the semantic value for the sign glossed girl is
the girl, which composes as the agent argument (12d). The result is the truth
conditions in a particular world, and we can define the proposition as the
function over worlds in which those conditions hold (13).

(12) a. ⟦[see]⟧ = λxλy.y sees x
b. ⟦[rainbow]⟧ = r
c. ⟦[see]⟧(⟦[rainbow]⟧) = (λxλy.y sees x)(r) = λy.y sees r
⟦[see rainbow]⟧ = λy.y sees r
d. ⟦[see rainbow]⟧(⟦[girl]⟧) = (λy.y sees r)(g) = g sees r
⟦[girl see rainbow]⟧ = g sees r

(13) ⟦[girl see rainbow]⟧ = λw.g sees r in w


‘The function which takes in worlds and returns TRUE for those in
which g (the girl) sees r (the rainbow) and false otherwise.’

We can of course also present the same compositional process using gloss-
ing of the signs instead of pictures, as in (14); we will frequently see glossing
used when pictures are not available, in this text typically when citing ex-
amples from previously published works. But, it’s always important to keep
in mind that glosses represent the signs, so that (13) is a more direct rep-
resentation of the endeavor (and hopefully highlights the non-circularity of
this process, as a bonus).

(14) a. ⟦girl see rainbow⟧ = λw.g sees r in w
b. ⟦[ASL sentence images]⟧ = λw.g sees r in w

Finally, let us consider this compositional approach in light of the ap-


proach to semantics that we are taking in this book, where we expect sen-
tences to convey both representations of particular events and of proposi-
tions. The idea is that a sentence in English that we would write like The
girl saw the rainbow or in ASL that we would gloss girl see rainbow
can have the same propositional semantic value. Let’s look at the ASL sen-
tence in (15), which has the propositional semantic value that we have been
discussing as well as possibly the representation for a particular event, in
this case a scenario in which a girl looks at a rainbow, pictured in (15) and
notated with double parentheses ⦅·⦆ to evoke the notion of a camera taking a
picture, although it’s going to be less helpful to think about it as a function
of its pieces since the whole point is that it is not going to be compositional
in the same way as the symbolic/propositional contribution. In terms of its
content, note that this event has to include, say, the rainbow up in the sky
and to the side (not, say, near the ground or in the center), since see depicts
looking upward to the side. In the model of this sentence as it is presented
so far, that is not a propositional contribution but it does seem to bear on
the image/experience of the event.

(15) Propositional representation:
⟦[girl see rainbow]⟧ = λw.g sees r in w
Particular event representation:
⦅[girl see rainbow]⦆ = [depicted scene: a girl looking up at a rainbow]

Finally, it is clear that we attempt to reconcile the results of these simulation


vs. symbolic representations, as we generally do for other kinds of dual
representations in the mind, so the kinds of particular events that a sentence
evokes will influence the exact functions we take the symbols to denote, and
in reverse, that the propositional representation certainly in turn influences
and can in large part drive the way that we represent a particular event.
Why are we going through all of this trouble to differentiate the proposi-
tional contribution from representations of particular events? Could we try
to think about these as part of the same kind of meaning, as most people
do? It’s certainly possible to try to encode the depictive details as parts
of propositional meaning contribution. It’s also possible to try to model
propositional meaning through a kind of simulation/representation of an
event. However, one kind of argument for not collapsing these two kinds
of meaning into the same thing will come from Chapter 2, when we discuss
the way that answers to questions and alternatives seem to be incompatible
with depictive/iconic content. In addition, it allows us to simplify things at
this point when we look at a sentence that involves a depictive component
that adds or affects our representations of events, such as a different atti-
tude, slightly different depiction/orientation of the rainbow, etc, but doesn’t
seem to change the propositional contribution in terms of which situations
we are asserting as true. All sorts of things can affect a representation of a
particular event, and they’re certainly not limited to sign languages; many
researchers have highlighted this kind of meaning in spoken language gen-
erally (Clark, 2016), spoken language ideophones specifically (Dingemanse,
2015; Kita, 1997) and sign languages (Ferrara and Hodge, 2018; Hodge and
Ferrara, 2022). In addition, there seem to be all sorts of things that affect a
propositional semantic contribution without affecting our representation of

a particular event. Consider, for example, the sign . When negation


like this is added to a sentence, we end up with completely opposite truth

conditions (16a), whereas our representation of a particular event seems


to be affected very differently, and in fact may not convey any particular
experienced-like event at all, since nothing is claimed to have happened.
Alternatively, it might simply convey some aspects which are taken to exist,
such as the girl and/or the rainbow (16b).

(16) a. Propositional representation:

J K = λw.g does not see r in w
b. Particular event representation:

L M
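
Though we will set aside the compositional details of negation until Chapter 3, a minimal sketch of how (16a) could be derived (assuming the negative sign combines with a full proposition) treats not as a function that reverses truth values in each world:

    JnotK = λp.λw.[p(w) = 0]
    JnotK(Jgirl see rainbowK) = λw.[(λw′.g sees r in w′)(w) = 0] = λw.g does not see r in w

On this sketch negation operates entirely on the propositional side, which is one way to see how it can flip the truth conditions completely while bearing so differently on the representation of a particular event.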

At this point hopefully the approach is clear: we are interested in the


propositional component of meaning in sign languages most notably for its
compositional properties and how it allows us to reason about the world,
while at the same time understanding how it interacts with the way that
we represent particular events, which seems to be the target of so much of
the depictive and affective content in human languages. The idea is that
by tracking both together, we can understand the separate roles that they
play, and, ultimately, understand the also numerous ways in which they are
able to interact in different areas of human language in general, and in sign
languages in particular.

4 Information: entailment, presupposition, and implicature
One motivation for investigating propositional meaning in depth is the ar-
gument that it supports all kind of reasoning, most notably the entailments
that follow from sentences. In this section we will focus on separating differ-
ent kinds of inferences that arise in language and how to think about them

through this dual propositional/event particular lens.


Let’s remember our earlier sentence (17). Under many circumstances,
we tend to expect that the people we are talking to are telling the truth and
mean what they say. So, imagine that someone signs (17). What are some
possible things that we can conclude? For one thing, we can conclude that
someone has seen a rainbow.

(17) a.

‘The girl saw the rainbow’


Entails:
b. Someone has seen a rainbow.
In fact, this is a logical relationship: there is absolutely no situation in
which the target sentence is true, but Someone has seen a rainbow is false.
As we’ve already mentioned, we call this relationship entailment: sentence
(17a) entails sentence (17b) because there is no situation in which (17a) is
true but (17b) is false. There are several entailments of (17a); some others
are in (18).

(18) a.

‘The girl saw the rainbow’


Entails:
b. There was a rainbow somewhere.
c. It is false that the girl didn’t see a rainbow.

So, while (17a) tells us something rather specific, we can also conclude
many other facts from this information. How are we able to do this? Formal
semanticists are motivated by this kind of data in assuming that humans
are using an underlying logical system in this kind of reasoning, and that it
is the same one that allows us to understand the meaning of sentences that
we’ve never encountered before.
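
In the possible worlds terms introduced earlier, we can state this relationship with a simple definition (a sketch that sets aside context dependence): a proposition p entails a proposition q just in case every world in which p is true is also a world in which q is true, that is,

    p entails q iff for all worlds w: if p(w) = 1 then q(w) = 1

For (17), every world in which the girl sees the rainbow is a world in which someone sees a rainbow, so the entailment holds; the reverse does not hold, since there are worlds in which someone other than the girl is the one who sees a rainbow.
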
We can also infer new information via implicature, a category of infer-
ences that are cancellable, unlike entailment. Implicatures can arise based

on real world knowledge, as in the case in (19) which is based on the fact
that rainbows are easier to see outside because they’re up in the sky. This
is an implicature and not an entailment because we actually can imagine
a scenario in which the girl saw the rainbow but wasn’t outside: perhaps
she was looking through a window. Implicatures can also arise through
reasoning about the amount of information that someone might share. For
example, another implicature might be that the girl didn’t cause or create
the rainbow, because if she had made the rainbow herself (drawn it, made
one in a lab using a crystal and light refraction, etc.) then we probably
would have said that instead of just that she saw it. But of course, this is
just an implicature and not an entailment: we can quite easily imagine a
scenario in which she did create the rainbow and also saw it.

(19) a.

‘The girl saw the rainbow’


Implicates:
b. The girl was outside.
c. She didn’t create the rainbow.

A third category of inference is known as presupposition, the informa-


tion that is not directly asserted but rather already presumed. For example,
the English sentence in (20a) presupposes that there is a unique girl (and a
unique rainbow) in the context that will be familiar to the participants in
the conversation (20b). This is known as the uniqueness presupposition of
the English definite article the. The ASL sentence in (21a) doesn’t have a
definite article, and seems to be able to be used in the English contexts as
well as in contexts in which there is no girl already given in the context, but
rather it introduces the girl to the discourse. We have not so far been focus-
ing on this difference, but it is an example of the cross-linguistic differences
that you can find between any two languages in something that seems to
have similar assertive content, i.e. they have quite similar entailments but
differ in presuppositions.

(20) a. The girl saw the rainbow


Presupposes:
b. There is a unique girl

(21) a.

‘The girl saw the rainbow’


Does not seem to presuppose:
b. There is a unique girl
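
One common way to model the contribution of English the, going back to the Fregean/Strawsonian tradition, is as a definedness condition on a function; a simplified sketch of such an entry (ignoring context sets and salience) is:

    JtheK = λP : ∃!x[P(x) = 1] . ιx[P(x) = 1]
    (defined only if there is exactly one individual with property P; if defined, it returns that individual)

On this view, the girl simply fails to receive a semantic value in a context without a unique girl, one way of capturing why a sentence containing it feels odd rather than merely false when the presupposition is not met.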

A notable feature of presuppositions is that their content seems to be


unable to be targeted by logical operators, i.e. their meaning persists or
“projects” past negation and in question and conditional statements. Con-
sider, for example, the sentences in (22), which are known as the “P(resupposition) family” of sentences: they involve the positive sentence (22a),
its negation (22b), its polar question form (22c), and a conditional in which
it is the antecedent (22d). Although these drastically change the proposition
that is asserted (in fact, (22a) and (22b) are entirely complementary propo-
sitions), they all still presuppose that there is a unique girl for purposes of
the conversation.

(22) a. The girl saw the rainbow


b. The girl didn’t see the rainbow
c. Did the girl see the rainbow?
d. If the girl saw the rainbow, she smiled.

These all presuppose:


e. There is a unique girl

This resistance to being targeted by different logical operators (i.e. pre-


supposition projection) is a hallmark of presuppositions, and differentiates
them from both entailments and implicatures. Notably, however, the back-
grounded nature of presuppositions (whether they need to already be known
before the sentence is uttered) and their projectivity are dissociable fea-
tures. You could, for example, convey new information in a form that
projects past these same logical operators, such as in a non-restrictive rela-
tive clause: The girl, who I am meeting tomorrow for lunch, saw/didn’t see
the rainbow. For more exercises on the differences between entailment, implicature, and presupposition, along with further examples in English, the reader is referred to Chapter 1 of Chierchia and McConnell-Ginet (2000). These cate-
gories should be viewed as a helpful way to organize the inferences involved
in sentence meaning, but not in a way that should be taken as exhaustive of

possible categories: many features of them can be analyzed independently,


and this is part of an interesting area of research at the semantics/pragmatics
interface in both spoken and in sign languages, as we will see.
In sign language linguistics, presupposition becomes relevant in discus-
sions about iconic depictions because they, too, often seem to “project”
past certain operators in patterns that are quite similar to classic presup-
positions, both in co-speech gestures accompanying spoken language and in
iconic aspects of sign languages (Schlenker, 2021; Tieu et al., 2019). Thus,
a big picture question we will return to in the Conclusions chapter of this
book is why and how we might want to think about depictions with respect
to presuppositions. Several accounts are presented; the one we will tend to
take in this text is that iconic depictions appear to project because they are
not propositional, that is, they cannot be affected by propositional opera-
tors like negation or conditionals. In some sense, this makes their projection
behavior less stipulative than many other presuppositions (such as, say, the
definite article the) which otherwise seem to be stated as conventionalized
fact, the lexicalization of which varies from language to language (some lan-
guages mark definiteness, others only specificity, others neither). We might
even take the sign language patterns to suggest that we look for similarly
explanatory answers in spoken language presuppositions.

5 Fieldwork and semantics in understudied languages
So far we have motivated our study of meaning by data from both English
and American Sign Language, and so it is worth discussing the status of
semantic data from different spoken and signed languages. For much of its
history, the study of semantics in general and formal semantics in particular
was focused on a very small number of human languages, notably English
and German. Many features that are well represented in these languages
have thus subsequently been well represented in literature on semantics.
Classic examples are definite determiners (English the) or nominal quanti-
fiers, which are not unique to these languages but are also not found in all
languages of the world (Partee, 1995) and yet have been the focus of entire
chapters in many introductory linguistic texts, and the subject of much of the
most interesting semantic theorizing. This is in part for good reason: these
topics are fascinating and provide excellent windows into semantic structure.
On the other hand, there are surely topics that have been understudied be-
cause they appear less prominently in these well studied languages, such
as noun class/classifier systems, the effect of discourse configurational word
orders on semantics, adverbial quantification, the role of intonation, eviden-
tiality, and many others. This is especially true in sign languages, in which
it is often tempting to go looking for similar structures to spoken languages

instead of investigating the languages on their own terms.


Nevertheless, there have been major strides in work on understudied
languages in semantics especially in the last decade or two, and efforts to
increase knowledge for fieldwork by semanticists and semantics for field-
workers. Bochnak and Matthewson (2015) provide detailed suggestions for
doing semantics on understudied languages, most notably the elicitation of
acceptability judgments for gathering semantic data. Since this is such a
core notion to collecting semantic data, let us illustrate the idea here.
A basic acceptability judgment setup involves presenting a context in any of a number of mediums: typically using language, but it could also be a picture, or both. Then, a linguistic form is presented in this context, and members of the linguistic community are asked whether the utterance is
acceptable in a given context. Let’s exemplify an acceptability judgment in
(23), which includes a pictured context and two sentences in ASL.

(23) Context (presented in English text and picture): A girl is


looking between some trees into the sky at a rainbow.

Target sentences:
a.

(acceptable in this context)


b.

(not acceptable in this context)

These are extremely simple (and quite obvious) examples, but the con-
cept is that the sentence in (23a) is judged acceptable in the given context,
while the same sentence with negation in (23b) is judged to be not acceptable

in the same context. This tells us that at a minimum, these two sentences
differ in some meaningful way; knowing what we do about the language,

this should hardly be surprising since provides nearly the opposite


meaning, but if we wanted to investigate the meaning of this form from the very beginning, this would be a great place to
start.
Acceptability judgements are not the only kind of data that we use in
semantics; we can look at natural productions and assume that they are
acceptable in their context (the downside is that sometimes natural produc-
tions don’t include a nice minimal contrast like that in (23)). We can also
ask for inference judgments, like whether we could infer (23b) from (23a)
(we likely would not), or we could ask for an ordering of preferences for
which forms are better/worse given a context. Some of these investigative
means will be better or worse for different semantic/pragmatic phenomena,
and for different data gathering contexts. For example, sometimes we might
want to gather data from a wide range of participants through an online
study/experiment, while in other situations we can focus more closely on the
language of a very small number of language users. In general, we will not
make a strong distinction between experimental and non-experimental work
in this book, taking the view that it is a gradient notion especially within
linguistics, where data collection of the minimal pair sort that we see in (23)
is common even in “theoretical” papers, a kind of mini experiment (David-
son, 2020; Matthewson, 2022). The goal, then, will be to digest the literature out
there to lay out the theoretical story. Underlying this is, certainly, a push
for better and stronger data collection across all sign languages; for more
resources on approaching this, the semantic fieldwork literature is a broadly
very useful starting point (Bochnak and Matthewson, 2015; Deal, 2015),
along with growing work on experimental semantics/pragmatics (Schwarz,
2014; Noveck, 2018; Cummins and Katsos, 2019).

6 On notation
Human language has the potential to be produced and comprehended in
many modalities: the text used to write or read this paragraph is one form
of language, but this book will often be discussing language that exists in
another medium: speech or sign. There are long used conventions for repre-
senting speech via text: I’ll typically use English orthography as in (24a), but
the International Phonetic Alphabet is another option to more accurately
represent the form of the words in spoken English (24b). Although both
are helpful and used in semantics (truthfully, English orthography is over-
whelmingly used in formal semantics to represent English), neither fully

captures many properties of speech that we might be interested in for pur-


poses of studying meaning, such as facial expressions, prosodic breaks, etc.
They also, obviously, fail to capture co-speech gestures that often contribute
to meaning in dialogues that are primarily based on spoken languages like
English.

(24) a. The girl saw a rainbow.

b.

A similar pattern holds true for sign languages, except that there is no
clearly conventionalized orthographic representation for representing ASL
to the same extent that there is for English. The most common method
for representing ASL on the page is semi-conventionalized glossing into
English of the sort in (25a), and we will make some use of these as well.
When possible, for any glossing of examples presented anew in this text
from ASL, I aim to follow the SLAASh ID glossing principles (Hochgesang,
2020) and employ conventionalized/searchable ID glosses for ASL from the
ASL Signbank (Hochgesang et al., 2020). However, glosses from examples
cited from other works are typically given in the forms that those authors
used, especially in the case of other sign languages, in order to prioritize
faithfulness to the original source. Most importantly, however, just like IPA
is not the full picture of English but is much more faithful to linguistic forms
than conventionalized English orthography, most sentences in this text also
include still images of the signs used in each sentence, to more directly
represent the forms used, as in (25b). Like IPA for spoken language this will
not convey timing and other finer details, but it should give a much more
clear and accurate sense of the object language than glossing alone, and
hopefully allow us to focus on the sign language itself, and not the glosses,
as the object of study.

(25) a. girl see rainbow


b.

In general, glosses capture symbolic/propositional content well, and strug-


gle more with capturing depictive content. This is inherent in the form: text
is intentionally symbolic, whereas pictures more often accurately convey de-
pictive content in signed languages. Thus, instead of using too many glossing
symbols which assume a symbolic analysis, this text will aim to use the sim-
plest gloss possible and then illustrate further details of the form through

the pictures. For example, repetition can be glossed in symbolic ways (e.g.
+ + +) but this seems to assume a symbolic analysis; similarly, verbs that
include quite a lot of depiction are sometimes glossed simply as if they are
just as symbolic as a word (e.g. give, move). Since we are ultimately in-
terested in how these symbolic and iconic components interact (and fail to
interact), the pictures should be considered the ultimate reference for the
sentence, with the gloss an additional, secondary source of information.
In the following chapters we will focus on different areas of research
in formal semantics of sign languages and synthesize this research into as
coherent a picture as possible. Instead of building up from the smallest
pieces, though, we will start by investigating the largest pieces (questions in
a discourse and their answers) and work our way down to smaller pieces that
include sentences and their connectives, and eventually nouns and verbs and
quantifiers, until we cut across other dimensions such as countability and
intensionality, before returning at the end to the issues raised in this chapter:
how sign languages fit into the larger picture of how we model meaning in
human language.
2

Questions, answers, and information

We’ve talked about two ways that language can convey meaning, with the
idea that one route functions to evoke a particular event experience, and
the other is for resolving issues via propositions built from symbolic com-
positional structures. In this chapter, we’re going to dig into the latter to
a much greater extent. The view we pursue follows classic work by Roberts
(2012) (first published in 1996) in roughly taking questions and answers to
be the backbone of the way that issues are resolved in a discourse, in order to
model the backgrounding/foregrounding of information in language, often
called the information structure. In sign languages, we can see reflec-
tions of the question-answer discourse structure in the form of some sentence
structures, which we will showcase in this chapter. Moreover, questions and
their possible answers (the latter of which form alternatives to each
other) provide a useful way to probe and categorize the kinds of linguistic
forms that can give rise to propositional meaning (descriptions) from those
that only bear directly on particular events (depictions).
In the previous chapter we introduced the idea that propositional mean-
ing can be thought of as a function that divides the circumstances in which
a sentence is true from those in which it is not. That is to say, we introduce
a truth-conditional semantics for propositional meaning. This allows us to
model the meaning of things that don’t happen just as easily as those that
do (such as never seeing a rainbow), and also allows us to model entailments:
one utterance entails another if in all situations in which the first is true,
the second is also true. It also allows us to model new information as a
restriction on the ways that the world might be that are in the common
ground, so that we can view information exchange as eliminating possibili-
ties. There is yet another advantage to this view of propositional meaning
as possible worlds: it allows us to model questions as requests for particular
ways of updating the common ground. Consider, first, a polar question (the


Figure 2.1: Narrowing possibilities via a polar question

kind of question to which you can answer yes, or no), such as Did you see a
rainbow? From the perspective of propositional meaning, each polar ques-
tion is a request for the interlocutor to determine which of two (infinite)
sets of possibilities might be right: either the worlds where the answer is
YES (e.g. I DID see a rainbow) or the worlds in which the answer is NO
(e.g. I DIDN’T see a rainbow). A visual schematic representation of this
question-answering process and the associated narrowing of possibilities can
be seen in Figure 2.1.
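
In slightly more formal terms, if we model the common ground CG simply as a set of worlds (a common simplification), then asserting and accepting a proposition p shrinks it to just the worlds where p holds:

    CG + p = {w ∈ CG : p(w) = 1}

Accepting that the girl saw the rainbow, for example, removes from CG every world in which she did not; the polar question is then a request to perform one of two such updates.
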
Although precise implementations differ, this general view of questions
as a set of possible answers is commonly accepted in formal semantics as
a way to model the behavior of questions not just when they are asked di-
rectly, but also indirectly as embedded clauses, e.g. She wondered if you
saw a rainbow. (Hamblin, 1976; Karttunen, 1977; Groenendijk and Stokhof,
1982, 1984). The schemas used in Fig 2.1 reflect a particular implementa-
tion of this idea of questions as partitions on the set of possible worlds by
Groenendijk and Stokhof (1984); in contrast, taking the question to be a set
of propositional alternatives (e.g. {I saw a rainbow, I didn’t see a rainbow})
builds from the work of Hamblin (1976) and Karttunen (1977). Although
we won’t go into the detailed semantics of embedded questions much here,
this can be motivated also by unembedded questions: someone raises a ques-
tion by providing a set of possible ways to update the discourse, and their
interlocutor answers it by choosing from among those possible updates. In
this kind of view, backgrounded information is what is already presumed by
the combination of a common ground and its partition, while new, focused
information is contributed by the answer (Roberts, 2012; Rooth, 1992).
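
To make the two implementations concrete, a rough sketch of each denotation for a polar question like Did the girl see a rainbow? (abstracting away from tense) looks as follows:

    Partition view: the question sorts the worlds of the common ground into two cells, {w : g sees r in w} and {w : g does not see r in w}
    Alternative view: JDid the girl see a rainbow?K = {λw.g sees r in w, λw.g does not see r in w}

Either way, answering the question amounts to selecting one of these two ways of updating the common ground.
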
The form of overt polar questions has been relatively well studied for
some sign languages, in part because a robust pattern has been observed:
the polar question typically differs from the corresponding assertion in its

suprasegmental features, i.e. sometimes there is no difference between the


manual signs of the assertion and the polar question. Suprasegmental dis-
tinctions between polar questions and assertions are found in spoken languages
too, as in Italian (26a-b), where we can see that the words are the same for
the assertive statement and the question, but the difference is expressed
through intonation (reflected in the orthography only in the presence of a
question mark).

(26) (Italian)
a. Laura ha mangiato. ‘Laura ate.’
b. Laura ha mangiato? ‘Did Laura eat?’

While neutral questions in English frequently are marked segmentally with


auxiliary inversion, e.g. Did... ?, prosody can also differentiate certain
interrogatives from declaratives in English e.g. the rise at the end of a
biased declarative question like She saw a rainbow? (Gunlogson, 2004).
An example of the suprasegmental difference between the declarative
in ASL and the corresponding polar question can be seen in (27a)-(27b): the first
has neutral expressions while the second is much more marked, especially via
the eyebrows. Suprasegmental information in sign languages is convention-
ally called nonmanual marking, to distinguish it from the manual based
signs. Nonmanual marking can be represented above the signs in which it
appears, as in (27c-d), the glossed version of (27a)-(27b).

(27) a.

‘The girl saw a rainbow.’


b.

‘Did the girl see a rainbow?’


c. ix girl see rainbow
polar-q
d. ix girl see rainbow q?

(27b) and (27d) also exemplify the sentence-final question marker q,


which is optional in ASL. In terms of spoken language typology, question

Figure 2.2: Narrowing possibilities via a constituent question

particles like this are the most common option for marking the declara-
tive/polar question difference (Dryer, 2013). We see two strategies involved
in polar questions in ASL, then, using nonmanuals and question particles,
both of which are represented in (27) but notably one thing that ASL does
not do is use auxiliary inversion of the sort familiar from English (Did the
girl...), one of many places where these two languages differ in their struc-
ture.
Polar questions also raise interesting questions about the ways that they
can be answered, i.e. the forms and meanings of different polar response
particles. Negative polar questions make especially interesting test cases:
asking Is it not raining? in English can lead to the felicitous response No,
it’s not. and also Yes right, it’s not.; other languages make finer or different
distinctions in the meaning of their response particles (e.g. the French
three-way distinction between oui, non, and si). Loos et al. (2020) provide
a detailed investigation of possible polar response strategies in German sign
language (DGS) and fit them into a larger typology of spoken languages
while Gonzalez et al. (2019) investigate these distinctions in ASL.
Polar questions aren’t the only kind of questions, though: we can see
another example of questions that function as a discourse organizer in con-
stituent questions (sometimes called wh-questions, using who, what, when,
where, why, etc.). We can model their semantics as an extension of the se-
mantics we gave for polar questions, illustrated schematically in Fig. 2.2.
The wh-question provides a set of alternative ways to update the dis-
course, which are based on different answers, e.g. in this case things that
Kate saw (due to the question, What did Kate see?). A possible partition
is {Kate saw a rainbow, Kate saw a cloud, Kate saw a robin}, which would
be the partition if we limited the possible objects seen to only these three

options, and also limited the seeing to a single object. We can imagine loos-
ening both of these requirements so that the list of possible objects seen is
much less constrained, and also allowing multiple possibilities, e.g. Kate saw
a rainbow and a cloud, which would then count as its own answer (separate
from the single answers Kate saw a rainbow or Kate saw a cloud). In other
words, under this system we should view answers as exhaustive: if the
answer is Kate saw a rainbow then it means the same thing as Kate saw
only a rainbow, in response to the question What did Kate see?.
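
A minimal sketch of this question meaning in the alternative-based format, restricted to the three objects just mentioned, would be:

    JWhat did Kate see?K = {λw.Kate saw a rainbow in w, λw.Kate saw a cloud in w, λw.Kate saw a robin in w}

and on the exhaustive construal, the answer Kate saw a rainbow picks out the cell of the partition in which the rainbow is the only one of these three things that she saw.
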
When it comes to the form of constituent questions in sign languages,
it is a notable feature that many sign languages seem to also have required
suprasegmental marking for wh-questions, as the nonmanual marking seen
in (28) (for wh-questions in ASL, this is often described as “brow furrow-
ing”, in contrast to “brow raising” often seen in polar questions). It has
been a matter of much discussion in the literature on sign language syn-
tax where exactly wh-words are permitted to occur in the word order of
sentences in sign languages: see Petronio and Lillo-Martin 1997 and Neidle
2000 for overviews. Cecchetto et al. (2009) detail a way that the seemingly
typologically unusual word order for questions found in sign languages of
the world relates to the use of nonmanual marking: they note that sign lan-
guages frequently have sentence-final wh-words, while in spoken languages
wh-words are frequently sentence-initial if they are not pronounced in their
canonical position. From a pure syntax perspective, this is a major puzzle,
with no obvious reason for a difference between signed vs. spoken modali-
ties. However, Cecchetto et al. (2009) attribute this difference to the ability
of the nonmanual marking to convey appropriate semantic/syntactic depen-
dencies, i.e. to use nonmanual marking to convey the question status of an
utterance and to relate any dislocated wh-words to the argument positions
in which they are interpreted. We will not go especially deeply into this
primarily syntactic issue, but we can see some evidence of that in a sentence
like (28c), where the wh-word is sentence-final even though it queries the
subject, which would typically be sentence initial in ASL. At the same time,
we see a sentence-initial wh-word in (28b), despite the word querying the
object, which typically follows the verb. So, there are ways that a wh-word
may end up sentence-initially, or sentence-finally, in both cases potentially
differing from the canonical position of the constituent it queries (the word
order for the assertion is subject-verb-object, i.e. girl see rainbow).

(28) a.

‘What did the girl see?’


b.

‘What did the girl see?’


c.

‘Who saw a rainbow?’

Regarding semantics and pragmatics, the point we have been empha-


sizing is that questions partition the common ground, and assertions that
answer those questions can be modeled as narrowing information by elimi-
nating possibilities, a way of organizing a discourse. This view of discourse
as a series of questions and answers broadened its implications even fur-
ther when it was extended by Roberts (2012) to model discourse when the
question was implicit. Roberts argues that every statement is implicitely an
answer to some Question Under Discussion (QUD), whether it is a very gen-
eral question, for example, How are things?/ What do you know? or a more
specific question, for example, Should I bring an umbrella? This has the
advantage of accounting for information structural properties of sentences
such as focus placement. For example, in the context of the question What
did Kate see?, the sentence with focus on the object is acceptable (29a)
but the same sentence with focus on the subject is not (29b), reflecting the
requirement for question-answer congruence (Rooth, 1992).

(29) What did Kate see?


a. Kate saw a RAINBOW. (acceptable answer)
b. KATE saw a rainbow. (unacceptable answer)
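
In Rooth's (1992) terms, we can sketch why (29a) but not (29b) is congruent: in addition to its ordinary value, a sentence with focus has a focus value, the set of alternatives obtained by substituting other elements for the focused constituent, and congruence requires the alternatives raised by the question to match that focus value:

    JKate saw a RAINBOWK^f = {λw.Kate saw x in w : x an individual}    (matches What did Kate see?)
    JKATE saw a rainbowK^f = {λw.y saw a rainbow in w : y an individual}    (matches Who saw a rainbow?, not the question in (29))

This is only a rough sketch, but it is enough to see how focus placement is regulated by the question being addressed.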

Importantly, focus placement reflects QUDs even when the question has
not been stated overtly. For example, the context in (30) doesn’t involve
any overt questions, but it seems to raise the same question What did Kate
see? implicitly, so that it has become the Question Under Discussion that
ideally should be answered by the conversational participants. Thus, focus
placement in the assertion should reflect its status as an answer to this ques-
tion. There are many ways to add complexity to this picture, including sub

questions under discussion etc. (see Büring 2003 for an immediate extension
to contrastive topics) but this rough outline forms a useful foundation for
what follows.

(30) Context: Kate returns from the window with a big smile on her face,
saying So beautiful!. Someone close to the window announces to the
rest of the room:
a. Kate saw a RAINBOW. (acceptable continuation)
b. KATE saw a rainbow. (unacceptable continuation)

In the remainder of this chapter we will highlight specific ways that sign
languages make use of the structure of a discourse and reflect it in linguistic
forms, and how this interplays with other aspects of the language. The first
section will discuss question-answer clauses in sign languages, a well studied
area in sign language linguistics that becomes simpler when we think about
it as an overt manifestation of this question-answer structure of dialogue and
information packaging. We will then move on to more complicated discourse
structures that allow for foregrounding/backgrounding information such as the
use of sentence-final focus positions, and then topicalization. In the fourth
and final section we will discuss the interaction of depictive content and
questions, showing that alternatives are not compatible with depiction and
representations of particular events, only propositional meaning.

1 Question-Answer clauses
We begin with a structure that follows quite naturally from the view of a
discourse as structure by questions and their answers. In American Sign Lan-
guage, one common way to express backgrounded/foregrounded information
is actually through a sentence that contains what looks like a question-
answer pair, as in example (31). A non-exhaustive list of other languages
that have a similar type of sentence structure includes the sign language of the
Netherlands (Kimmelman and Vink, 2017), Russian sign language (Khristo-
forova and Kimmelman, 2021), South African sign language (Huddlestone,
2017), and Hong Kong Sign Language (Gan, 2022).

(31)

‘What the girl saw was a rainbow’



We might naturally wonder: are these simply examples of someone ask-


ing a question in the way one usually does, and then answering it them-
selves, basically playing two different roles as in (32)? Or, is it a more con-
ventionalized sentence structure, something like the English pseudocleft in
(33)?

(32) Person A: What does the girl see?


Person B: She sees a rainbow.
(33) What the girl sees is a rainbow.

Both of these share an important pragmatic property with question-


answer clauses, which is that they present the answer part (the rainbow) as
new, focused information, and the question part (what the girl saw) as old
backgrounded information. However, although similar in their pragmatic
properties they differ quite a bit in their syntactic/semantic properties: the
discourse question-answer pair in (32) involves two separate sentences, while
the pseudocleft in (33) is a single unified clause with a main verb that is
the copular verb is. Each has been suggested as an analysis of the sign lan-
guage question-answer pair structure. Neidle (2000) proposes that the ASL
question-answer pairs are, underlyingly, just like questions and answers in a
discourse between participants, except that one person is basically playing
both roles. In contrast, Wilbur (1994) analyzes the ASL version as in (31)
as a pseudocleft, equivalent to English (33).
Let’s first focus on the similarities, namely, what is backgrounded and
foregrounded across the three structures (discourse question/answers, the
QAC in ASL, and the English pseudocleft). In each the answer part (rainbow)
is new and the question part (girl see what) is old. We can see evidence
that the same pragmatics govern question-answer dialogues, pseudoclefts,
and ASL question-answer clauses by comparing their acceptability in the
supportive context in (34), in contrast to a context with a different QUD,
in (35).

(34) Context: a bunch of people are wondering what the girl saw.
a. Person A: What does the girl see?
Person B: She sees a RAINBOW. (acceptable)
b. What the girl sees is a rainbow. (acceptable)
c.

(acceptable)

Figure 2.3: Narrowing possibilities via a question-answer clause

(35) Context: a bunch of people are wondering who saw the rainbow.
a. Person A: What does the girl see?
Person B: She sees a RAINBOW. (not acceptable)
b. What the girl sees is a rainbow. (not acceptable)
c.

(not acceptable)

Taking pieces of both the discourse question answer pair structure and
the pseudocleft structure, Caponigro and Davidson (2011) propose that the
expression-types in (31)/(34c)/(35c) are a question and its answer in terms
of their semantic/pragmatic contribution. We can think of this as in the
schema in figure 2.3, where the question kate see what is raised, with
the answer [kate see] rainbow just as when there are two people in a
discourse. However, their proposal is that the question and its answer are
connected syntactically to each other in the same way as two parts of a
pseudocleft are connected by a copula verb (be and its variants, which are
often covert/unpronounced in ASL). Just like a copula verb in equative
constructions can equate two things (e.g. [Mary] is [the winner of the game]),
the copula verb in question-answer clauses in ASL can equate two things: the
question clause, which has the semantics in (36), roughly [the answer to the
question girl see what], and the answer clause [girl see rainbow]. This

process is reflected in the compositional steps in (36), built upon existing


work on the semantics of questions and answerhood operators proposed
outside of ASL, e.g. Dayal (2002).

(36)

a. J K = λw.λp[p(w) = 1 ∧ ∃x[p = λw′.g sees x in w′]]
= Q0 = {p0 = g sees rainbow in w, p1 = g sees bird in w, ...}
‘Takes a world and returns the set of propositions which are true in that world and which vary on what g sees’

b. J K = λw.g sees r in w

c. J(be)K = λp.λq[p = q]

d. J(be) K = λq[q = λw.g sees r in w]

e. J(Ans)K = λQ.λw.ANS(w)(Q) ⇒def λQ.λw.λw′.∀p ∈ Q(w).[p(w′) = 1]
‘Takes a question and returns the proposition that entails all other true propositions’

f. J(Ans) K = λw.∀p ∈ Q(w0)[p(w) = 1]

g. J[(Ans) ](be) K = λw.∀p ∈ Q(w0)[p(w) = 1] = λw.g sees r in w
‘The function that takes in worlds and returns TRUE only for the complete true answer to the question (What did the girl see?, in (36a)) is equal to the function that takes in a world and returns TRUE if the girl saw the rainbow in that world’

One notable aspect of this proposal is that much of the motivation be-
hind the analysis presented in (36) was originally from analyses of spoken
language pseudoclefts in English (Dikken et al., 2000; Schlenker, 2003). But,
in fact, Caponigro and Davidson (2011) argue that there are several notable
differences between QACs in ASL and pseudoclefts in English, suggesting
that the given analysis which is along the lines of those proposed for pseu-
doclefts is actually much more appropriate for the QAC case than the pseu-
docleft case. This has implications for the analysis of English in return, of

course, for if the analysis is a better fit for ASL, then another solution will
be needed for English to the extent that the two differ. And they do seem
to differ. For example, there is a clear contrast between an English pseudo-
cleft and ASL QAC in that pseudoclefts can’t be formed with polar/yes-no
questions (37). Another difference is that pseudoclefts require referential
answers, whereas QACs can have quantificational/non-referential answers
(38). Pseudoclefts can also only use a subset of possible wh-words in their
language, but QACs allow any wh-word, including which (39) (Caponigro
and Davidson, 2011).

(37) a. *Does/Whether Alex sell books is no/yes.


b.

(38) a. ?What Alex sells is few books.


b.

(39) a. ?Which girl Alex likes is Mia.


b.

Each of these properties seems to involve a stronger restriction in the En-


glish pseudocleft than is seen in the ASL QAC, where instead the QAC pat-
terns more like a question-answer pair in discourse; Caponigro and Davidson
(2011) argue that they are a single clause based on, among other factors,
their ability to be embedded as a single clause under attitude predicates.
While the above clearly differentiate English pseudoclefts and ASL QACs,
the picture may be more complicated, especially as we extend the inquiry be-
yond ASL to other sign languages. First, an intriguing idea by Kimmelman

and Vink (2017), extended by Hauser (2019), is that sign languages fall
along a grammaticalization cline, with discourse question-answer pairs at
one end and pseudoclefts at the other end, as in (40).

(40) Discourse Q & A’s → Embeddable QACs → Pseudoclefts

This idea is further supported by findings in more sign languages, including


Hong Kong Sign Language (Gan, 2022). That said, this perhaps raises even
more questions, such as why polar questions would be lost for pseudoclefts.
Moreover, when we look to spoken languages, the puzzle gets deeper: Ko-
rean, for example, seems to have a conditional-like structure that connects
a question and its answer. Intriguingly, Korean QACs also permit (and
in fact, favor) polar questions instead of wh-questions, ruling out a direct
analogy with English pseudoclefts but suggesting a strong similarity with
ASL QACs. This relates to another puzzle regarding nonmanual marking:
in ASL conditional clauses are in fact marked with the same nonmanual
marking seen in QACs (brow raising), which seems to further call out for
a connection between the two. Further open questions involve whether a
copula structure is the most plausible connection for these clauses, or if
perhaps the clausal connecting structure in Caponigro and Davidson (2011)
could be revised toward an analysis of the question and answer as connected
via a conditional (e.g. if... then) semantic operator rather than a copula
with an equative (e.g. is equal to) semantics. In any case, a major takeaway
from QACs in general and their prevalence in sign languages of the world
is that some sign languages clearly use question-answer pairs (whether in clausal form or separate) to express focus by providing an answer to a direct question, directly mirroring the structure of discourse proposed by Roberts (2012) as a method for reflecting information structure.

2 Sentence-final focus position


We saw in QACs that a question and its answer can be juxtaposed in a clause
in order to focus the answer, placing the new information in a sentence-final
position. This turns out to be a special case of a more general phenomenon
of using the sentence-final position for focus in American Sign Language and
seemingly many other sign languages too. In this section we zoom out to
discuss a wider array of linguistic material that can appear at the end of
a sentence: verbs, negation, and modals, all of which can appear sentence-
finally for focus related purposes.
We saw above that negation can appear at the end of a sentence when
there is a polar QAC, as in (41). In another expression that looks at least
superficially similar, negation can also appear at the end of a sentence that is
a simple negative declarative, as in (42). In this “doubled” negation version,

the main clause must already be negative, including with negative nonman-
ual marking (that negative nonmanual marking sets it apart in form quite
noticeably from a QAC), and the first part is usually not seen as a question,
but rather as a statement. In these cases, the sentence-final negation is
“doubling” the negation in the main clause, not answering a question raised
by that clause. The information structural effect is quite similar in the two
constructions, though: both seem to focus the negativity/negative valence
of the answer, as we can see in their acceptability in a context in which that
is what is at issue (43).

br
(41) a. mary have book, no. (Polar QAC)
‘Mary doesn’t have a book.’
b.

‘Alex doesn’t sell books.’

neg neg
(42) a. mary not have book, no. (Focus doubled negation)
‘Mary doesn’t have a book.’
b.

‘A/The girl didn’t see a/the rainbow.’

(43) Context: A bunch of people are wondering whether or not Mary has
a book.
br
a. mary have book, no.
‘Mary doesn’t have a book.’ (acceptable)
neg neg
b. mary not have book, no.
‘Mary doesn’t have a book.’ (acceptable)

These so-called “focus doubles” have received attention in the formal syn-
tax/semantics literature in ASL (Petronio and Lillo-Martin, 1997) as well
as in Brazilian sign language (Libras) (Lillo-Martin and de Quadros, 2004),

where they have been analyzed as markers of focus, with a dedicated focus
syntactic projection that induces the focused constituent (in this case, nega-
tion) to be pronounced sentence-finally as well as optionally in its canonical
position. This analysis is supported by the emphatic effect they seem to
have, as well as parsimony with the other uses of the sentence-final position, which
is also used for wh-words in ASL (wh-words in questions are considered to
be focused); recall that wh-words themselves can be doubled in ASL (44).

(44)

‘What did the girl see?’

Davidson and Koulidobrova (2015) note that while verbs, negative words,
and modals all share the sentence-final position, when it comes to what may
and may not co-occur together in a sentence, the positive doubles are in
complementary distribution with any forms of sentential negation. They
take this to be evidence for doubling as a marker not of constituent focus but
of polarity focus, building on work on polarity marking in sign languages by
Geraci (2005). Under this analysis, when the issue at stake is about the truth
of a sentence (i.e. when there is a polar QUD), adding a non-negative double
is licensed when the answer is positive, whereas if the answer is negative then
adding a negation (internally to that sentence only, or internally and via a
double) is allowed.
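
A rough way to cash out polarity focus in the terms of this chapter (simplifying considerably) is that the double marks focus on the polarity of the clause, so that the relevant set of alternatives contains just the positive and negative versions of the same proposition, for example

    {λw.Mary has a book in w, λw.Mary does not have a book in w}

which is exactly the partition induced by a polar QUD: a non-negative double then signals the positive member of that set, while sentence-internal negation (with or without a negative double) signals the negative member.
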
At this point, the picture of discourse structure in ASL is the following:
QACs instantiate part of the QUD-answer discourse structure. Doubles are
used when the polarity of a clause is under discussion (i.e. when the QUD is
a polar question). This raises a natural question about what happens when
these interact: how do polar QACs work, i.e. what happens when negation is
sentence-final because it is the answer to a QAC? In particular, what happens
when the “question” part before the negation is both a question and has
negative polarity? In this case, the whole thing is interpreted as a negative
QAC, as in (45).

br neg
(45) mary not have book, no.
‘Mary doesn’t have a book.’

Gonzalez et al. (2019) investigate these particular structures in depth in


ASL, finding an interesting pattern: a negative answer to a negative question
is always interpreted as disagreeing with the negative polarity, i.e. expressing
a positive polarity. In other words, a negative answer to a negative QAC

can’t be interpreted as agreeing with the negative polarity expressed in the


question, i.e. expressing the negative version, even though this is possible for
question-answer pairs in a discourse (e.g. across two participants) in ASL
(just as in English). This restriction is illustrated in (46), which involves a
claim and then a disagreement with that claim via QAC, used to support
making the negative QAC appropriate in terms of its information structure.

(46) (ASL, Gonzalez et al. 2019)


Context:
headshake
Amy claims: zoe play video-games never
‘Zoe never plays video games.’
brow-raise
a. Zoe responds: [Q−constituent ixzoe play video-games never ],
headshake
[A−constituent no once-in-a-while]
‘I do play video games once in a while.’
brow-raise
b. Zoe responds: *[Q−constituent ixzoe play video-games never ],
headshake
[A−constituent no never]
(Not acceptable, would mean: ‘I never play video games.’)

One reason for this pattern could be that negative doubling (of the type
seen above in (42)) blocks the negative polarity-agreeing polar QAC: both
serve the same purpose of expressing the statement with negative polarity,
focusing on the negative polarity in that sentence-final position. Given that
the structure proposed for doubling is structurally simpler than the syn-
tactic structures proposed for QACs (doubles involve just a single clause,
while QACs are multi-clausal), it could be that QACs are ruled out on the
grounds that the speaker should use the simplest structure that does the
job between these two possible focusing options (sentence-final focus and
QACs). Clearly, there are interesting psycholinguistic predictions to be made here about choices in language production, and more investigation is needed to determine how robust these patterns are crosslinguistically.

3 Topicalization
We have so far covered the notion of focus, a notion that is often discussed
in opposition to the concept of topic. We might, for example, want to think
about the question portion of a QAC as its topic, especially when it is given
and backgrounded, taken for granted by participants and providing an orga-
nizing structure for the answer. In general, it seems to be common for sign
languages to orient their word order so that topics precede focus, and to the
extent that these information structural notions organize the word order,

we might say that sign languages are influenced by discourse configura-


tional constraints. One strategy of overtly forcing topics to precede focus
is to move the topic to a sentence-initial position: we can see an example of
this in the paradigm in (47a-c), which all have the same subject (fs(Alex)),
verb (have), and object (book), yet exhibit different orders, with book
being marked as a topic in (47b) and fs(Alex) being marked as a topic in
(47c).

(47) a.

‘Alex has a book’


b.

‘Alex has a book’


c.

‘Alex has a book’


The propositional information that is conveyed in each of these sentences
is the same: remember that one way to think about the meaning that a
sentence conveys is to think about what it tells us about the way that the world is, or more importantly, what kinds of possible worlds we can rule out. These sentences all rule “in” the same worlds (where Alex has a book), and rule out the same worlds (where Alex doesn’t have a book), but do so from different perspectives. Example (47b) takes the book as a given,
and tells us something about it. In contrast, example (47c) takes Alex as
given and tells us something about her. Unlike with focus, it’s not quite as clear that these pragmatic differences pull apart through an acceptability judgment given a context, or what the contexts are that pull them apart,

but we can see one kind of example in (48) where people give somewhat
different responses for differently topicalized sentences in the same context.

(48) Context: Participants in a conversation had previously been wonder-


ing where a lost sweater was, and they recently found it with Marie.
Now, the speaker wants people to shift their attention to a missing
book (in doing so, raising the QUD Where is the book?), and to point
out that Alex has it.
a.

‘Alex has a book’ (questionably acceptable)


b.

‘Alex has a book’ (acceptable)


c.

‘Alex has a book’. (not acceptable)

Given the differences in acceptability in this context, there certainly


seems to be a semantic/pragmatic effect to topic-hood in ASL. Kimmelman
and Pfau (2016) describe in careful detail the different things that topics
can mean, both functionally and syntactically, and the ways that they can
appear in sign languages. One of the most interesting points to emerge is
the interaction between manual and nonmanual components. For example,
we can see in the examples above that the moved topic is accompanied by
brow raising nonmanual marking, which also accompanied the question in
QACs, as well as the antecedent of conditionals. How can we think about the
contribution of this suprasegmental marking as compared to manual signs?
Most other discussion in the sign language linguistics literature on topics is

focused on their syntax: in syntactic theory topics can be moved from a po-
sition closer to where they are interpreted, or they could be “base generated”
in the sentence-initial position, without any special relationship to the se-
mantic argument position. Wilbur (1994), Wilbur and Patschke (1999), and
Wilbur (2011) provide clear proposals for the interaction of syntactic position
and prosodic marking. From the semantic/pragmatic perspective, there has
been much less work done on the notion of topichood in sign languages. In
spoken languages, topics are known to affect anaphora, indexicals, and other
expressions in a sentence, but these remain largely unexplored in the sign
linguistics literature as far as I am aware.

4 Contrast
So far in this section we’ve covered several aspects of focus and topics; a
third important information structural notion is that of contrast (see Repp
2016 for overview of contrast in spoken languages). Wilbur and Patschke
(1998) describe in detail the use of body leans in American Sign Language,
most notably the forward and backward body leans, which they propose
mark contrast in interesting ways: leaning backward marks exclusion, and leaning forward marks inclusion. A second way to mark contrast involves equal alterna-
tives, which are often expressed through the use of partitioned horizontal space (Wilbur and Patschke, 1998). For example, the alternative question
in (49) presents coffee and tea as parallel/equal alternatives, and in doing
so assigns them to the ipsilateral area of signing space (e.g. a) and the
contralateral area of signing space (e.g. b), respectively.

(49)

‘Which do you want, coffee or tea?’

The role of horizontal space for contrast is connected tightly to many


other topics in sign languages: information structure, on the one hand,
and anaphora, on the other hand: the answer to the question in (49) can
involve pointing to one of the areas of signing space (e.g. a) to indicate that
one prefers the drink that was signed in that space (e.g. coffee). We will
investigate this use of space for anaphora in more depth in Chapter 4, where
the important takeaway from contrast is that the use of horizontal signing
space for establishing contrast may play a role in determining when space
is used for linking a pronoun with its antecedent in the discourse. Work

by Pfau and Quer (2010) and Wilbur and Patschke (1999) provides further
important reading especially on the interaction of syntax with nonmanual
markings in marking contrast in sign languages.
When it comes to understanding why the use of horizontal signing space
is used for expressing contrast, we may gain insight by considering non-
propositional meaning. Lakoff and Johnson (1980) discuss the well-used
metaphor similarity is proximity, which we might expect to be active in
the creation and comprehension of depictive structures (Casasanto, 2008).
In this case, we may expect that space is used to reflect conceptual distance,
so that conceptual contrast leads to use of different sides of signing space,
but not in a way that we need to encode in the propositional contribution.

5 Embedding diagnostics
Information structure is important in sign linguistics not just for the view
it provides on various word orders, nonmanual markings, and pragmatics,
but also as a diagnostic of general clausal structure. To take one exam-
ple, Davidson and Caponigro (2016) ask whether polar questions can be
embedded in American Sign Language. In other words, English allows for
one declarative sentence to embed another (50a), as well as a declarative
sentence to embed a polar question (50b), and a declarative sentence to em-
bed a constituent/wh- question (50c). In English, one can usually tell the
difference between a declarative clause and an interrogative from structural
differences, like so-called “do support” (compare: She bought a book/Did she
buy a book?). However, embedding makes things a bit trickier: in English,
the embedded clause (Her sister bought a book) looks the same on the sur-
face (e.g. (50a-b)); whether it is interpreted as a question or not depends on
the verb in the main clause.

(50) a. Mary thought that [her sister bought a book]. (embedded proposition)
b. Mary wondered whether [her sister bought a book]. (embedded question)
c. Mary asked [who bought a book]. (embedded question)

In American Sign Language, a similar pattern arises. An unembedded declarative clause looks different on the surface from a polar clause, although
in ASL this isn’t through “do support” or any other word order differences,
but rather nonmanual marking: a polar question has brow raising nonman-
ual marking, as illustrated in the difference between polar question and
declarative in (51).

(51) a. [ASL declarative example]
‘My sister has a book’
b. [ASL polar question example, with brow raise nonmanual]
‘Did my sister buy a book?’

As in English, though, this difference disappears in embedded contexts, where the main clause brow raising nonmanual marking for polar questions
is replaced instead by a nonmanual marking related to the main verb’s at-
titude, in this case think (52).

(52) [ASL example: embedded polar question under think]
‘Alex was thinking about whether her sister bought a book’

The erasure of the one signal, nonmanual marking, that differentiated
the embedded from the unembedded polar question makes it more difficult to
diagnose the presence of an embedded polar question. However, foundational
work on syntax in American Sign Language has provided diagnostics for
embedding generally in ASL that can be quite straightforwardly extended
(Padden 1988, Liddell 1980). Padden (1988) discusses the use of subject
pronoun copy in ASL, which involves repeating, in sentence-final position,
a pronoun that refers to a subject. This can be the subject of the
main clause, as in (53a) where the first person features on the sentence final
pronoun match those of the subject of the main clause (ix-1). It can similarly
copy the subject of an embedded clause, as in (53b), where the subject copy
features match those of the embedded subject (ix-a). In contrast, it cannot
copy the features of an object, as in (53c), where the sentence-final pronoun
(ix-arc-b) matches the object children. Padden (1988) argues that it really
is subjecthood, and not distance or number features or other potential
confounding factors, that accounts for the pattern of the sort seen in (53).

(53) (ASL, Padden 1988)


a. ix-1 decide ix-a should-a drive-b see children ix-1
b. ix-1 decide ix-a should-a drive-b see children ix-a
c. *ix-1 decide ix-a should-a drive-b see children-b ix-arc-b
‘I decided he ought to drive over to see his children.’

One helpful feature of subject pronoun copy for investigating structure in sign languages is that if a long-distance subject is able to be copied at the end
of a clause, it provides evidence that a clause in between is truly embedded
and not sequential. For example, we can feel confident that (53a) is not a
sequence of two sentences (e.g. I decided. He ought to drive.) because of the
copying of the subject of the first sentence at the end. Padden (1988) uses
this argument to show that ASL does allow clausal embedding, in contrast to
claims made in the literature at that time that it might not (e.g. Thompson
1977). We can similarly use it to show that ASL permits embedding of
polar questions as well (Davidson and Caponigro, 2016). This can provide a
means for future work on other sentence structures in ASL and in other sign
languages to diagnose clausal structure, especially distinguishing between
embedding and parataxis/multiple sequential clauses without a hierarchical
relationship.

6 Incompatibility of depictions and alternatives


Recall that in Chapter 1 we emphasized two aspects of meaning, the proposi-
tional and the experienced event. One might have noticed that this chapter
has focused nearly entirely on propositional meaning, with little discussion
of depiction. This is not an accident, and in this section we’ll focus on why:
it seems that the question-answer organization behind information structure
is particularly incompatible with depictive content, such that the sorts
of things that questions are made of (“at issue” alternatives) seem to be
largely incompatible with the sorts of meaning that we get from depictions.
The first nugget of evidence along these lines comes from a paper by Kita
(1997) in which he introduces the paradigm in (54). In (54a), a “mimetic”
expression describes and depicts the movement of a heavy round object with
continuous rotation. If this mimetic expression gorogoro worked in the usual
way of other non-depictive modifiers, we might expect that the propositional
meaning of this sentence would be something like a function that takes worlds
and returns TRUE for those in which a ball rolled in the gorogoro way,
and FALSE for those worlds in which that does not happen. In that case,
negation should simply apply, and return the complementary set of worlds;
intriguingly, Kita (1997) reports that a negation with a mimetic is not well
formed (54b), despite a parallel sentence with negation being well-formed
with a descriptive/non-depictive modifier like sizukani ‘quietly’ (54c).

(54) (Japanese, Kita 1997)


a. Depiction, no negation
tama ga gorogoro to korogat-ta no o mi-ta
ball Nom Mimetic roll-Past Nominalizer Acc see-Past
‘(One) saw a ball rolled gorogoro’
(gorogoro = movement of a heavy round object with continuous
rotation).
b. Depiction, with negation (not acceptable)
*tama ga gorogoro to korogat-ta no de wa
ball Nom Mimetic roll-Past Nominalizer Cop Focus
na-i
Neg
‘It was not the case that a ball rolled gorogoro.’
c. Descriptive modifier, with negation
tama ga sizukani korogat-ta no de wa na-i
ball Nom quietly roll-Past Nominalizer Cop Focus Neg
‘It was not the case that a ball rolled quietly.’

The takeaway from (54) seems to be that negation resists depictive con-
tent, in this case exemplified by Japanese “mimetics” like gorogoro. In fact,
we find similar evidence when we look at depictive elements in English:
a depictive onomatopoeia that’s perfectly well-formed in (55a) becomes ill-formed
under negation (55b), while a descriptive modifier conveying purely
propositional content is perfectly fine under negation (55c). (Of note: the
ill-formedness depends on the meaning: it’s possible to give a meaning
to (55b) which involves “metalinguistic” negation, in other words, when it
means that a better expression should have been used, but this isn’t the
meaning expressed by negation in the non-depictive case.)

(55) (English)
a. Depiction, no negation
The bird was chirrrp-chirrping[expressed in a sing-songy manner]
on her perch.
b. Depiction, with negation (not acceptable)
*The bird wasn’t chirrrp-chirrping[expressed in a sing-songy man-
ner] on her perch.
c. Descriptive modifier, with negation
The bird wasn’t chirping loudly on her perch.

These interactions between negation and depictive modifiers in Japanese and English are relevant for sign linguistic data because sign languages, like
spoken languages, support a great deal of expressivity through depiction.
Take a depictive classifier (a topic we will have much more to say about in
Chapter 5) in (56). In a positive sentence, the classifier depicts the (diffi-
cult) manner of retrieving the book (56a). Just like in the Japanese and
English cases, this is no longer well-formed under negation (56b), despite a
parallel sentence with similar meaning being well formed if only a descriptive/symbolic modifier is used under negation (56c).

(56) a. Depiction, no negation
[ASL example with depictive classifier]
‘Of all the books in a row, it was difficult to pull one down’
b. Depiction, with negation (not acceptable)
*book ds_c(books lined up), not ds_c(pull down w/difficulty)
‘Of all the books in a row, it wasn’t difficult to pull one down’
c. Descriptive modifier, with negation
[ASL example with descriptive modifier under negation]
‘Of all the books in a row, it wasn’t difficult to pull one down’

An explanation for this incompatibility between negation and depiction comes from the notions we explored in Chapter 1, that representations of
meaning can involve both propositions and representations of particular
events. In a positive sentence like (56a), a proposition is conveyed from
the descriptive components, a function that returns exactly those worlds in
which there are a bunch of books in a row and someone pulls one down.
At the same time, this also evokes a representation/image of the event,

to which the depiction adds more detail


about the pulling action by depicting/enacting the action. In a negative
sentence like (56b) we might imagine that this should mean something like
(56c), but it cannot. We might imagine this is because depictions are functionally
at odds with generalizing over details, given that they must evoke
a particular image/experience. Negation needs to act on a propositional
alternative (e.g. ‘hard’, as opposed to ‘easy’), which is a partition
that generalizes over details, and so (56b) is ill-formed. This is further
supported by the inability of these same kinds of depictions to be used in
polar and wh-questions, both of which also depend on these kinds of alter-
natives/partitions, and which is also reported for Japanese ideophones by
Kita (1997). The takeaway in particular for thinking about questions, an-
swers, and the structure of discourse is that depiction, while important in
many areas of sign linguistics, is typically involved in constructing representations
of events which are separate from the kinds of semantics that explain
the information structural moves involving questions and their answers,
which we have focused on in this chapter.

7 Conclusions
In this chapter we have focused on the relationship between questions (ei-
ther overt questions or covert questions under discussion), their answers, and
linguistic forms. The specific linguistic forms that relate to these questions
were question-answer clauses, sentence-final focus position, topicalizations,
contrastive uses of space, and subject pronoun copy. In the penultimate
section, we then moved to trying to understand the way that different com-
ponents bear on the use of alternatives in sign languages, with the takeaway
that alternatives seem to require interpretation as a symbol, not as an iconic
depiction. A consequence of this is that much of the depictive nature of sign
languages happens in a way that is disjointed from much of the information
structuring, and that care should be taken to consider both when
designing analyses for sign language semantics.
3

Logical connectives

The focus of this book is on formal approaches to modeling meaning in natural human languages in general, and signed languages in particular.
There is an interesting parallel between the development of the field of for-
mal semantics of human languages and the development of sign language
linguistics that is highlighted in the study of logical connectives. At one
point, meaning in human language (which scholars at the time took to mean
only spoken language) was assumed to be much messier than the meaning
that could be conveyed in artificial languages like logic. The idea was that
logic was clean and unambiguous, because both the forms (syntax) and the
meaning (semantics) could be defined in a way that was unambiguous and
led to entirely predictable inferences like entailments. For example, in logic,
one can define a syntax (rules for allowed and disallowed forms) that simply
stipulates that if p and q are both forms in the language, and if they are
combined by a logical operator like ∧, that combination is also well-formed
in the language (p ∧ q). A semantics can similarly be defined exactly, so
that, for example, if p is true and q is also true, then (and only then) p ∧ q
is true too. The entailment that p ∧ q entails p will thus always hold. But
that’s a simple logic we can design, for which we can stipulate the forms and
meanings however we like to make entailments clear. What about human
language: does it work the same way?
Building on foundational work by philosophers like Frege, Russell, and
others, modern formal semantic work beginning with Montague (1973) has
argued that the answer is, surprisingly, yes! In many ways, meaning in
human language does work like logic. To take a striking example, Grice
(1989) argues that despite the English connective or superficially looking
quite different from logical disjunction, the logical inclusive disjunction is
the underlying meaning of the human language or. Superficially, they ap-
pear to be different because logical disjunction (p∨q) is defined as requiring a
minimum of one disjunct to be true (p is true, or q is true, or both), whereas
the English expression or seems to have stronger use conditions: we can use


it in cases of ignorance (Alex had tea or coffee, I’m not sure which) or in
cases of choices (Alex can choose tea or coffee), and tends to be strange to
use with an inclusive meaning (if Alex had tea and coffee, it’s often consid-
ered strange to say Alex had tea or coffee). However, Grice (1989) argued
that these differences make sense in light of the way that we use language
to communicate, since in the case of, say, the inclusive meaning, if we knew
both to be true we should have used a stronger description, Alex had tea
and coffee, instead. The argument is that the meaning given to the logical
expression like the connective ∨ (inclusive disjunction) is actually the right
model for English or (its semantics), just that the natural language expres-
sion seems to carry different meaning because of the way that humans use
language and reason about expressive choices on top of their basic meaning
(its pragmatics). This approach in the middle/latter half of the 20th cen-
tury opened wide open the doors to using logic to model natural language,
including logical connectives.
When it comes to correcting an outdated view that human language is
somehow “less than” logic, the parallel with outdated views of sign languages
is compelling: just as spoken language was assumed to be messier
than and inferior to logic, but was eventually found to have more in common
with it than previously realized, sign languages were at one point considered
impossible to analyze using logic, and yet that too has been clearly dis-
proven: they contain all of the same logical structure as spoken languages.
This in no way means that logic is all there is to either spoken or signed
languages: meaning in human language is multi-layered and contains multi-
tudes and no semanticist would argue that “meaning” should be completely
reduced to truth conditions. Rather, those who work with truth conditions
take them to be a valuable way to understand how we use language to com-
municate and learn about the world in both precise and abstract ways that
account for the inferences we draw about the world from what we are told.
Human languages in all modalities do this to a level that is unparalleled
by other systems (biological or artificial), which makes the investigation
of this ability in sign languages especially worth understanding and appre-
ciating. In this chapter we review findings related to the logical operators
negation, conjunction, and disjunction in the semantics and pragmatics
of sign languages.

1 Negation
The simplest logical operation is negation (¬): in logic, negation can be
defined as a function that simply takes a single proposition and returns its
truth conditional opposite. For example, take a proposition (e.g. it will rain
today) that takes possibilities/worlds and returns those in which it is true
(e.g. the worlds in which it rains today: λw.it rains today in w). Negation
applied to this function will return the complement set of worlds (λw.doesn’t
rain today in w, i.e. a function that takes in worlds and returns those in
which it doesn’t rain today). Put in slightly more functional notation, if
p(w) = it rains today in w, then ¬p(w) = it doesn’t rain today in w, and
thus more generally for any proposition p and world w, if p(w) = 1 then
¬p(w) = 0.
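
Before turning to linguistic examples, it may help to see this operator written out concretely. The following is a minimal Python sketch (my own toy illustration, not part of the formal literature), assuming a tiny model of just two worlds: propositions are modeled as functions from worlds to truth values, and negation returns the proposition that is true in exactly the complement set of worlds.

    # A toy model: two worlds, distinguished by whether it rains today.
    WORLDS = [{"rain_today": True}, {"rain_today": False}]

    # A proposition is a function from worlds to truth values.
    def rain_today(w):
        return w["rain_today"]

    # Negation takes a proposition p and returns the proposition true in
    # exactly those worlds where p is false (the complement set of worlds).
    def neg(p):
        return lambda w: not p(w)

    not_rain_today = neg(rain_today)

    print([w for w in WORLDS if rain_today(w)])      # worlds where it rains today
    print([w for w in WORLDS if not_rain_today(w)])  # the complement set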
The English word not seems to frequently have this effect that we de-
scribed above for the logical operator ¬, so that we might want to define
its semantics as a function that takes propositions and switches their truth
value: ⟦not⟧ = λp.¬p. So the meaning of It will not rain today is the ba-
sic proposition It will rain today, with the addition of a negative operator:
(not)(It will rain today). We can build a mini-compositional account for
this step:

(57) Context: Discussing the current weather before going outside


Answer: It will not rain today
⟦It will not rain today⟧ = ⟦(not)(It will rain today)⟧ =
⟦not⟧(⟦It will rain today⟧)    assuming compositionality
⟦not⟧ = λp.¬p    proposed meaning
(λp.¬p)(λw. it will rain today in w) =
λw. it will not rain today in w
Sometimes doing compositional semantics like this can appear circular
when our object language and our metalanguage overlap, as when both use English in
(57) above. However, we can more easily see the value of this kind of analysis
when we turn to other languages. This is especially true for sign languages,
an area where the study of negation has been an active subfield full
of insights, many arising from the interplay of manual and nonmanual expressions.
For example, in American Sign Language the difference between
a positive statement (58a) and a negative can be the addition of a negation
sign not and a nonmanual headshake hs (58b), or just the nonmanual
headshake (58c).

(58) Context: Discussing the current weather before going outside
a. [ASL example: positive statement]
‘Today it is going to rain.’
b. [ASL example: not with headshake]
‘Today it isn’t going to rain.’
c. [ASL example: headshake only]
‘Today it isn’t going to rain.’

A preliminary takeaway from the paradigm in (58), and especially (58c),
is that the nonmanual headshake seems to be a way to express the negation
operator that we defined above, which switches the positive proposition to its
complement. We can then give the negative headshake a similar function to
the English negation: ⟦(hs)⟧ = λp.¬p.

(59) Context: Discussing the current weather before going outside
Answer: [ASL example: today rain, with negative headshake]
⟦today (br) rain (hs)⟧ = ⟦(hs)(today rain)⟧ =
⟦(hs)⟧(⟦today (br) rain⟧)    assuming compositionality
⟦(hs)⟧ = λp.¬p    as proposed
(λp.¬p)(λw. it will rain today in w) =
λw. it will not rain today in w

The key observation to make about the analysis proposed in (59) is that
the functional meaning of headshake is the same as the functional mean-
ing of not in (58). This highlights one natural question that arises in the
study of negation in ASL and other sign languages: should English not,

ASL not, and ASL hs nonmanual marking all have the same semantics,
or should we think about them as making quite different contributions to
the propositional meaning? It’s not obvious one way or another at first,
even just given the simple set of sentences in (58). For example, consider
that both headshake and negation occur together in (58b). If we equate
each of these negative components to the negative operator, then we’ll end
up with the two negations cancelling each other out, something like ‘Today
it isn’t not going to rain’, which is a fine sentence in English but simply the
opposite of what the ASL sentence in (58b) means. We might reasonably
choose instead to give the manual sign not the meaning of the English
word not, and say that the headshake just comes along for the ride, following
the manual sign (say, the negative manual sign brings along the nonmanual
headshake), but this would leave us with a serious question about how it
is that (58c) comes to have a negative meaning and not a positive meaning!
At this point, it is hopefully already clear that the study of negation in sign
languages, like the study of negation in general crosslinguistically, is not
a straightforward extension of English, and is worth significant investigation
across sign languages, especially with the additional complexity of determining
the relationship between nonmanual (suprasegmental) markings and
manual signs.
In fact, it is well known that many spoken languages display variation in
precisely this area, namely, that sometimes, two negative elements are inter-
preted as a single semantic negation, while other times/in other languages
they are interpreted as two separate logical negation functions. Consider,
for example, the English example in (60); there are two interpretations of
this sentence, one that is considered the interpretation in “standard” English
(60a), and another that is common in many other varieties of English, which
is completely the opposite meaning (60b). These two possible ways to inter-
pret two negations are found in different languages and language varieties
across the world, and one can find either one being considered the “stan-
dard” interpretation. For example, in “standard” Italian, the negative form
nessuno is negative on its own (61a) (like English nobody), yet when combined
with sentential negation leads to a single negation “concord” interpretation
(61b), just like (60b).

(60) Alex doesn’t want nobody to call.


a. Alex will be glad if at least someone calls. (double neg reading)
b. Alex will be glad if no one calls. (concord reading)

(61) a. Nessuno ha telefonato


N-body has called
‘Nobody called’

b. Non ha telefonato nessuno


Neg has called n-body
‘Nobody called’ (concord reading)
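
To see concretely how the two readings of (60) come apart truth-conditionally, here is a small continuation of the toy Python sketch above (my own illustration, which abstracts away from the attitude verb want): on the double-negation reading each negative element contributes a semantic negation and the two cancel, while on the concord reading the two negative forms are realized as a single semantic negation.

    # Toy worlds: does anyone call or not?
    WORLDS = [{"someone_calls": True}, {"someone_calls": False}]
    someone_calls = lambda w: w["someone_calls"]

    def neg(p):
        return lambda w: not p(w)

    # Double-negation reading: two semantic negations, which cancel out.
    double_neg_reading = neg(neg(someone_calls))
    # Concord reading: the two negative forms realize one semantic negation.
    concord_reading = neg(someone_calls)

    print([w for w in WORLDS if double_neg_reading(w)])  # worlds where someone calls
    print([w for w in WORLDS if concord_reading(w)])     # worlds where no one calls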

Languages/dialects in which two negative expressions are interpreted as
a single semantic negation are said to exhibit negative concord. In the semantic/syntactic
study of sign languages, it has been a matter of significant
interest how sign languages fit into this categorization, and it intersects with
discussions about the role of suprasegmental forms and language modality,
namely: do non-manual expressions “count” as a negative element, or
are they in some sense just reflecting the negation expressed by another
negative element elsewhere in the sentence?
One further complication to this question is that there seems to be
variation closely related to this exact point within sign languages, namely,
whether nonmanual expressions like the negative headshake seen in (58)
count as contributing semantic negation. On this semantic/cross-linguistic
variation point, some extremely interesting generalizations have been put
forward about sign language negation. In a typological study, Zeshan (2006)
categorizes some sign languages as nonmanual dominant, like American
Sign Language, in two respects: (a) that expressing negation requires a neg-
ative nonmanual, and (b) that the nonmanual is able to express negation
on its own. For example, an important thing to note about ASL is that
it is usually claimed that (58b) is not well formed without the headshake
nonmanual, i.e. the manual negation on its own is not acceptable without
headshake nonmanuals. In contrast, Zeshan (2006) categorizes other sign
languages like Türk İşaret Dili (TİD, Turkish Sign Language) as manual
dominant: this category of sign languages (a) always requires a manual
sign like not to express negation (we can see this unacceptability in (62a)),
and (b) if a manual negation is present (as in (62b)) then the nonmanual
negation is optional.

(62) (TİD, Zeshan 2006)


neg
a. *ix1 understand
‘I don’t understand’
neg-tilt
b. ix3 sign understand-not
‘They (singular) didn’t understand the signs.’

The claim, then, is that the requirement for nonmanual marking correlates
with the ability of nonmanual marking to serve as a negation on its own,
across sign language varieties. In terms of frequency of one category over an-
other, Zeshan (2006) reports that the non-manual dominant sign languages
are more common in her language sample: 26 out of 37 surveyed sign lan-
guages permit a purely nonmanual expression of negation, as in the ASL
example in (58c); the others do not, as in TİD.
In terms of how we might want to analyze the contribution of negative
nonmanual marking and manual signs, we see that the semantics we may
want to give to manual vs. nonmanual markings of negation may differ be-
tween sign languages. If we begin with ASL, we might be inclined to more
closely model the headshake nonmanuals as expressing propositional nega-
tion in the form of the logical operator ¬, i.e. simply taking a proposition
and returning its negation. This is the case with the simple example (58c),
in which it seems that the meaning ‘it is going to rain’ is negated by the
negative nonmanual marking. We can think about this as the basic negation
that reverses truth values.
In addition to sentential negation words like not or ASL not or nega-
tive headshake, there are also negative expressions such as nobody, Italian
nessuno, or ASL no-one, none, can’t, etc, which express quantification
along with negation in a single lexical form. In classically non-manual dom-
inant sign languages like ASL, it has been claimed that these negative signs
need to be accompanied by a negative nonmanual across the whole sentence
as well, i.e. a combination of sentential negation and a constituent negation
(63). In contrast, the basic observation is that in manual dominant sign
languages, negation depends on the obligatory manual sign, and non-manual
marking corresponds closely to the scope of that sign, as can be seen from
the Italian sign language (LIS) examples from Geraci (2005) in (64).

(63) (ASL)
br hs
a. today no-one 3-email-1
‘Today no one emailed me.’
br
b. *today no-one 3-email-1
‘Today no one emailed me.’

(64) (LIS, Geraci 2005)


hs
a. paolo contract sign non
hs
*paolo contract sign
‘Paolo didn’t sign the contract.’
hs
b. contract sign nobody
‘Nobody signed the contract.’

This raises the important issue of how and whether the extent of nonman-
ual marking corresponds to its semantic scope, i.e. the size of the proposition
that it negates. Under a theory in which the semantic scope corresponds
to syntactic position, the syntax/semantics interface of negation becomes
highly relevant, and in fact this has been an area with quite a bit of work
in sign languages. For American Sign Language, Wood (1999) provides the
most extensive overview. Geraci (2005) investigates LIS in more detail,
and Quer (2012a) provides a view of this issue in many other sign languages
of the world.
On the question of possible negative concord in sign languages, Kuhn
(2020) observes that very few sign languages show the negative concord
patterns of the Italian sort we saw above in (61) in which two separate
negative words (e.g. not and none) appear together but are interpreted as
a single negation. He offers the example of the unacceptability of (65a) in
LSF.

(65) (LSF, from Kuhn 2020)


a. *my birthday, none offer nothing
b. (English) Nobody gave me nothing for my birthday.
There are a couple of things that are interesting about the negative con-
cord situation, or lack thereof, in many sign languages. First of all is the
unacceptability itself, which contrasts with non-concord languages in which
sentences with two negative expressions are not unacceptable but rather
interpreted with two separate negations. We can see this in the English
example in (65b), which is interpreted in standardized English as double negation,
e.g. as saying that everyone gave me something. For many speakers
of other varieties of English, (65b) can also have a concord reading, in which
nobody gave me anything, but for neither group of English speakers is (65b)
unacceptable like it is in LSF. Another property of the categorization of sign
languages that is unusual in terms of negative concord is that, as we saw,
there are certainly often multiple expressions of negation in the same sentence,
but only if nonmanual marking is included. In fact, as Kuhn (2020)
notes, if we consider nonmanual marking, then sign languages typologically
fall into the concord category, since a nonmanual expression of negation and
a manual sign are interpreted as a single negation, as in (58b). What to
make of this?
Kuhn (2020) provides a suggestion to tie together both the modality-specific
components and broader language constraints, using roughly two
modality-specific pressures on sign languages and one general principle. The
first modality-specific constraint he suggests is that space in sign
languages should be used only in cases of existence, for iconicity reasons.
As we have seen and will see elsewhere throughout this book (especially in
Chapter 4 and 5), signers use space in ways that very often make use of
iconic depictions of events in the world. Associating a discourse referent to
an area of space then naturally implies existence of its referent as a partic-
ipant in the event, if not in the actual world then in some experience in a
dream/desire/other event that is being depicted. This same kind of constraint
seems to be active in the contrast involving the use of an overt index
in (66). In (66a) we see that overtly associating the politician to a location a
is judged as more ill-formed, presumably because of the implied non-existence
of such a politician, compared to the same sentence with the same intended
meaning but no overt use of space associated to the politician in (66b).
Notably, if no is replaced by all, both are fine.

(66) (ASL, from Abner and Graf 2012)


a. % [no politics person]-a tell-story ix-a want win
b. [no politics person] tell-story want win
‘No politician said he wants to win’

Kuhn takes this pattern, as well as other behavior of overt indices with disjunctive
referents, as an argument in favor of iconic pressures restricting the use
of spatial loci to cases of some kind of existence, even for these rather
abstract spatial loci. This seems plausible, and in line with the idea that
the use of loci is motivated by depiction (Liddell, 2003), even when they
have a very abstract meaning. The second modality difference that Kuhn
notes related to negative concord is the use of nonmanual marking to express
negation. This ties into a language-wide pressure he argues for in favor of expressing
negation redundantly, based on the abundance of negative concord
between multiple negative words in spoken languages. Tying these together,
he argues that the pressure to express negation redundantly is covered by nonmanual
marking in sign languages, while the (iconic) pressure against using
space in cases of non-existence means that a manual sign (typically associated
to space) will be unlikely to be used to express the second negation. He
emphasizes that these are just tendencies, and provides an example from
Russian Sign Language (67), which does seem to have negative concord based on
two manual signs.

(67) (RSL, from Kuhn and Pasalskaya 2019)


nobody nothing give-1 not
‘Nobody gave me anything.’

A picture that emerges from the typological survey presented by Zeshan
(2004) regarding nonmanual marking and the view presented by Kuhn
(2020) on negative concord is that much of how we analyze negation in sign
languages hinges on our understanding of nonmanual negation. We might
wonder: how much of the variation in nonmanual marking patterns is truly
cross-linguistic, varying from sign language to sign language, versus from
signer to signer? And how should we best model nonmanual marking in
our semantics: as a contributor of negation, expressing a negative opera-
tor itself, or as a reflection of negation that originates elsewhere? And does
this vary across contexts, across signers, and/or across sign languages? If
we take nonmanual marking to be a separate negator, then sign languages
frequently exhibit negative concord; on the other hand, if we focus on manual
signs only, then negative concord is much more rare, perhaps due to
depictive pressures on associating discourse referents to space. Related to
this question, Henninger (2022) has identified many examples of negation
expressed in ASL without negative nonmanual marking, and even more sur-
prising, negation expressed only by nonmanual marking, in its own timeslot,
making an even stronger case for a separate semantic contribution from non-
manual marking. Moreover, it shows that there is more complexity to the
pattern when it comes to natural production than most of the semantic work
has appreciated so far, and is a further indication that nonmanual expressions
probably should be considered to contribute a negative function on their
own, as we modeled above in (59).
Another open question in this area relates to the possible scope for nega-
tion in sign languages. We saw above that even in language varieties without
negative concord, like the “standard” English interpretation of (65b), if two
negative expressions can appear in the same sentence then they are simply
interpreted as separate negators. Why does this seem to be unacceptable,
with or without negative nonmanuals, in sign languages like LSF and ASL?
One key might be to better understand the scopal properties of negation
in sign languages and how they relate to information structure. For exam-
ple, Geraci (2005) discusses multiple syntactic sites for negation in LIS; the
same questions arise with other sign languages and whether they must scope
over/negate just a verb phrase, the entire clause, or both.
Gonzalez et al. (2019) investigate this question by examining the use
of negation as an answer in question-answer clause pairs. As we discussed
in Chapter 2, this is a clause type used in sign languages to highlight the
question/answer nature of discourse structure, in which the second “answer”
clause is the focused constituent and must be an answer to the question
raised in the first constituent. This is relevant to the question of double
negation because we do see examples like (68), in which there is a negation
in the question clause (i experience none, marked with brow raise: ‘Do I
have no experience (with interpreting)?’), and then there is a negation in
the answer clause (no have, marked with headshake: ‘No, I do have some’),
with the overall interpretation of a positive.

(68) (ASL, Gonzalez et al. 2019)


brow-raise headshake
i experience none , no have.
‘I do have some experience (with interpreting).’

This suggests one way that sign languages do seem to express something like
double negation. As Gonzalez et al. (2019) point out, answers in question-answer
clauses in sign languages generally seem to be restricted in their
semantics, yet, or perhaps because of this, negation in answers of QACs can
provide an effective means for expressing double negation. As this pattern
highlights, many questions remain, both about the study of negation and
negative concord across languages, and the expression of negation in sign
languages, especially when it comes to the clearly critical role of understanding
nonmanual marking.
In this section we have focused on negation, which among the logical
operators of interest is the only unary connective: it takes a single argument
(and returns its opposite). In the next section we focus on two well-known
binary connectives: disjunction and conjunction, and their expression and
related semantic issues in sign languages.

2 Conjunction and Disjunction


One of the goals of formalizing the propositional aspects of natural language
meaning is to account for entailments, inferences we make about the way
things are based on the information conveyed in an utterance. So, for exam-
ple, if I tell you Today, it will rain, that entails that This week it will rain,
because the scenarios in which the first sentence is true are a subset
of the scenarios in which the second sentence is true. In this case, there is an
entailment because the meaning of today and the meaning of this week have
a particular relation: one expresses a temporal subset of the other. In gen-
eral, entailments are a significant motivation for modeling human language
using logic, because logic provides the means of inferring things beyond what
was actually discussed (i.e. the ability to generalize something about this
week, after just discussing today). Some clear examples of entailments in
natural language arise from expressions that seem on their face to have a
lot in common with structures found in logic: words like English and and
or (and not, as we have seen). An example can be found in the inference
(known as disjunctive syllogism) presented in (69), where a disjunction and
a negated disjunct lead one to conclude the other disjunct.

(69) Alex had coffee or Alex made some tea.


Alex didn’t make some tea.
→ Alex had coffee.

These inferences go through just like inferences in a formal logic, where
we define the meaning of p ∨ q (inclusive disjunction) as false when both p
and q are false, and true otherwise. So if p ∨ q is true but p is false, then
q must be true. We typically define conjunction p ∧ q as true only if each
of the conjuncts is true. Conjunction supports entailments like the one in
(70), for example.

(70) Alex had tea and Alex had coffee.


→ Alex had tea.
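
Entailment patterns like (69) and (70) can be checked mechanically against these classical truth tables. The sketch below (an illustrative toy with invented helper names, not drawn from the text’s sources) enumerates all truth-value assignments and confirms that every assignment making the premises true also makes the conclusion true.

    from itertools import product

    def entails(premises, conclusion):
        # Valid iff every assignment making all premises true makes the conclusion true.
        for tea, coffee in product([True, False], repeat=2):
            v = {"tea": tea, "coffee": coffee}
            if all(p(v) for p in premises) and not conclusion(v):
                return False
        return True

    # Disjunctive syllogism (69): (coffee or tea), not tea  |=  coffee
    print(entails([lambda v: v["coffee"] or v["tea"],
                   lambda v: not v["tea"]],
                  lambda v: v["coffee"]))              # True

    # Conjunction elimination (70): (tea and coffee)  |=  tea
    print(entails([lambda v: v["tea"] and v["coffee"]],
                  lambda v: v["tea"]))                 # True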
There are many interesting questions regarding how all of these opera-
tors interact that have been raised in the cross-linguistic literature on spoken
languages. For example, if we have a negation and a disjunction in English,
there seem to be two possible interpretations, as in (71): one in which it’s either coffee
or tea that he didn’t have but he might have had the other, and one
reading in which he definitely didn’t drink either one. There has been discussion
about whether this ambiguity is available in other languages, and
arguments that it is not in languages like Hungarian (Szabolcsi),
and so the interaction of these operators remains an important question in
crosslinguistic research.

(71) Alex didn’t have tea or coffee.


a. He didn’t have either one. (¬tea ∧ ¬coffee)
b. One of them he didn’t have. (¬ (tea ∧ coffee))
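
The two readings in (71) come apart in particular situations, which we can check directly in the same toy style: in a situation where Alex had coffee but not tea, the ‘neither’ reading (71a) is false while the reading in (71b) is true.

    # A situation: Alex had coffee but not tea.
    v = {"tea": False, "coffee": True}

    neither_reading = (not v["tea"]) and (not v["coffee"])   # ¬tea ∧ ¬coffee
    one_missing_reading = not (v["tea"] and v["coffee"])     # ¬(tea ∧ coffee)

    print(neither_reading)       # False: he did have coffee
    print(one_missing_reading)   # True: he didn't have tea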

Within the space of logical connectives, there also seems to be crosslinguistic variation in the pieces involved in logical operators in terms of which
are basic and which are composed of smaller pieces. For example, English
and and or seem to both be single independent morphemes, but other con-
nectives contain multiple morphemes, like and then, or markers of disjunc-
tion in the Cheyenne language, which are complex expressions built from
conjunction and other operators (Murray, 2017). Some have also proposed
that even if the linguistic form is simple, as in the English or, the meaning
can be complex. For example, Zimmermann (2000) has argued that the
meaning of disjunction is the conjunction of possibilities: Alex drank tea or
coffee is rather It’s possible that Alex drank tea and it’s possible that Alex
drank coffee. Although these can both be used in certain scenarios, as in
expressing uncertainty between these two alternatives, on the face of it they
have an important difference: the latter allows for other possibilities (e.g.
the possibility that Alex drank neither tea nor coffee but something else,
e.g. Alex drank cider), while the former requires a true statement among
the disjuncts. This should be kept in mind when we turn to these expres-
sions in sign languages, where it is not immediately obvious what aspects
of a meaning are productive/compositional and which are stored as a single
unit.
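
Zimmermann’s proposal can be contrasted with plain inclusive disjunction in a toy epistemic model. In the sketch below (my own illustration of the contrast just described, with invented world and function names), the accessible worlds stand for what the speaker considers possible: the “conjunction of possibilities” reading is compatible with an accessible world in which Alex drank neither tea nor coffee, whereas classical disjunction fails in that world.

    # Worlds the speaker considers possible.
    ACCESSIBLE = [{"drink": "tea"}, {"drink": "coffee"}, {"drink": "cider"}]

    had_tea    = lambda w: w["drink"] == "tea"
    had_coffee = lambda w: w["drink"] == "coffee"

    def possible(p):
        # "It's possible that p": p holds in at least one accessible world.
        return any(p(w) for w in ACCESSIBLE)

    # Zimmermann-style reading: a conjunction of possibility claims.
    conj_of_possibilities = possible(had_tea) and possible(had_coffee)

    # Classical inclusive disjunction, checked world by world: false in the cider world.
    classical_or_in_every_world = all(had_tea(w) or had_coffee(w) for w in ACCESSIBLE)

    print(conj_of_possibilities)          # True, despite the cider possibility
    print(classical_or_in_every_world)    # False: the cider world makes it false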
When it comes to connectives in sign languages, one of the most striking
observations parallels that for negation: that the expression of connective
meanings like conjunction and disjunction very often involves a combination
of manual and nonmanual components. Consider, for example, that one way
to express conjunction and disjunction in ASL is to place the two options
in two different locations in space (a, b), and to use different nonmanual
markings (roughly, head nodding compared to brow raising), and optional
disambiguating particles like both or the alternating pointing sign ix-a-ix-b
(72).

(72) a. [ASL example: alex have tea-a coffee-b, with head nods]
‘Alex had tea and coffee.’
b. [ASL example: alex have tea-a coffee-b, with brow raise]
‘Alex had tea or coffee.’


Davidson (2013) highlights these common expressions of connectives in
ASL and analyzes the use of listing coordinates associated to areas of space
as involving a “general use coordination” that collects the listed items into a
set of alternatives {coffee, tea}, with the conjunctive/disjunctive force com-
ing not from what is connecting the coordinates but from another source,
such as the nonmanual marking, or the sentence-final disambiguating ex-
pressions like both or ix-a-ix-b. In the examples above, the idea is that
each sentence conveys two alternatives {Alex had tea, Alex had coffee}, de-
rived compositionally in the same way as in the formation of polar questions,
following Hamblin (1976) (73). Eventually, in the first case all alternatives
must be true, while in the second case at least one of the alternatives must
be true, so the addition of both creates a conjunctive statement in (73b),
a function which takes worlds and returns only those in which all of the
alternatives are true, whereas the disjunctive statement (73c) expresses the
function over worlds which returns TRUE if at least one of the alternatives
is true.

(73) a. ⟦alex have tea-a coffee-b⟧
= {Alex had tea, Alex had coffee}
hn hn
b. alex have tea-a coffee-b (ASL, conjunction)
‘Alex had tea and coffee.’
λw∀p.p ∈ {Alex had tea, Alex had coffee} → p(w) = 1
br br
c. alex have tea-a coffee-b (ASL, disjunction)
‘Alex had tea or coffee.’
λw∃p.p ∈ {Alex had tea, Alex had coffee} ∧ p(w) = 1
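
The division of labor in this analysis, where coordination just builds a set of alternatives and a separate operator supplies the conjunctive or disjunctive force, can be sketched as follows. This is only an illustrative toy model (the worlds and the closure operators standing in for the nonmanual marking or particles are my own), not the formal details of Davidson (2013).

    # Propositions are functions from worlds to truth values.
    had_tea    = lambda w: "tea" in w["drinks"]
    had_coffee = lambda w: "coffee" in w["drinks"]

    # General use coordination: collect the listed items into a set of alternatives.
    alternatives = {had_tea, had_coffee}    # {Alex had tea, Alex had coffee}

    # Conjunctive closure (e.g. head nod / both): every alternative holds.
    conj = lambda alts: (lambda w: all(p(w) for p in alts))
    # Disjunctive closure (e.g. brow raise / ix-a-ix-b): at least one alternative holds.
    disj = lambda alts: (lambda w: any(p(w) for p in alts))

    w1 = {"drinks": {"tea", "coffee"}}
    w2 = {"drinks": {"tea"}}

    print(conj(alternatives)(w1), conj(alternatives)(w2))   # True False
    print(disj(alternatives)(w1), disj(alternatives)(w2))   # True True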

On the one hand, this is a simple analysis that takes things at their
face value, in some sense: it seems that there are many different ways to
signal these meanings that are not dependent on the item doing the con-
necting (unlike and and or in English) and this is captured by an analysis
that puts the conjunctive/disjunctive force outside of the formation of the
set of alternatives. Yet it also raises questions about the language modality
and expression of conjunction and disjunction: is the strategy of dissociat-
ing the force (conjunction vs. disjunction) from the syntactic connection
of two coordinates more natural in sign languages than in spoken
languages? The data so far suggest that it is common in sign languages,
and possibly less common in spoken languages, although there are clear ex-
amples of both options in each modality. Consider, for example, Japanese
Sign Language (JSL) (Asada, 2019), which has a very similar structure to
ASL in the disjunctive vs. conjunctive force being differentiated (only) by
nonmanual marking in (74).

(74) (JSL, from Asada 2019)


hn hn
a. taroo tea coffee both drink
‘Taroo drank tea and coffee, both.’
hn hn ht
b. taroo tea coffee which drink
‘Taroo drank tea or coffee, either one.’

Other sign languages appear to exhibit the same patterns (Legeland
et al., 2018), such as Dutch SL (75) and Hong Kong SL (76), both reported
in studies that didn’t focus directly on the issue of general use coordination
but which seem to exhibit the same flexibility, and Catalan SL (77) (Zorzi,
2018).

(75) (NGT, conjunction: Pfau 2016)


bl-3a bl-3n
mother ix-3a market ix-left go-left
son ix-3b friend ix-right 3b-visit-right
‘The mother goes to the market (and) her son visits a friend.’

(76) (HKSL, disjunction: Tang and Lau 2012)


hn+bt right hn+bt left
ix-1 go-to beijing, (pro-1) take-a-plane, take-a-train
‘I am going to Beijing. I will take a plane or a train.’

(77) (LSC, conjunction: Zorzi 2018)


hl+bl contral hl+bl ipsil. re
marina win jordi lose right?
‘Marina won and Jordi lost, right?’

There are also spoken languages that seem to express a general use coor-
dination. Gil (2019) describes coordination in Maricopa (a Yuman language
spoken in Arizona, USA), which uses the same expression for both
disjunction and conjunction, as seen in (78), in which the addition of an
inferential marker leads a list to be interpreted as disjunctive, while in its
absence the same list is interpreted as conjunctive. A similar pattern is
confirmed by Ohori (2004) and Asada (2019) for spoken Japanese, which
has several constructions including the basic list structure in (79) that can
be interpreted as conjunction or as disjunction depending on the context
(based on the assumption that one can visit multiple places but will choose
to live in one place).

(78) (Spoken Maricopa, Gil 2019)


a. John-s Bill-s v?aawuum
John-nom Bill-nom 3.come.pl.fut
‘John and Bill will come’ (conjunction)
b. John-s Bill-s v?aawuumsaa
John-nom Bill-nom 3.come.pl.fut.infer
‘John or Bill will come’ (disjunction)

(79) (Spoken Japanese, Asada 2019)


a. Hanako-wa [koohii to koocha]-0 non-da
Hanako-top [coffee to tea]-acc drink-pst
‘Hanako drank coffee and tea’ (conjunction)
b. Hanako-wa [koohii to koocha]-0 osoraku ryoohoo non-da
Hanako-top [coffee to tea]-acc maybe both drink-pst
‘Hanako drank coffee or tea, maybe both’ (disjunction)

If there is a tendency for sign languages to use such a strategy more
than spoken languages, there could be many reasons why. For example,
there seems to be something advantageous about associating coordinates to
different areas of space in sign languages that makes this way of expressing
coordination especially useful. Both the use of loci and the use of a hand
“buoy” (a list anchored by fingers on the non-dominant signing hand) to express
coordination assign the alternatives to different locations in space,
which then support reference going forward. In Chapter 2 we already dis-
cussed how this is a common way to express contrast, and we focus on the
potential for anaphora using these spatial locations in Chapter 4. Another
possibility unrelated to the spatial nature of sign languages is the use of
nonmanual marking: sign languages are typically not dissociated from the
suprasegmental nonmanual marking that accompanies signs, while many
spoken languages have their prosodic information removed in, for exam-
ple, written form, and this lack in written language may have encouraged
the development of segmental forms like and and or in some languages as
opposed to conveying the semantic/pragmatic relationship between coordi-
nates suprasegmentally. In the next section, we discuss possible pragmatic
consequences of this way of expressing conjunctive and disjunctive meaning.

3 Semantics/Pragmatics interface: Implicatures


We began this chapter with the observation that human languages were
originally compared unfavorably with logical languages, and an important
turning point in the study of natural language semantics was the attempt to
model human language as logical. This was made possible through an added
twist of pragmatics: how people use their language to communicate may be
the source of why it appears messier than logic. As argued by Grice
(1989), a well known example is the analysis of or as inclusive disjunction
∨, even though in practice the linguistic expression of disjunction is rarely
used when all of its disjuncts are true. This apparent discrepancy between
meaning in natural language and logical meaning is explained through the
notion of implicature: in a context in which we know that Alex had tea
and Alex also had coffee, and both are relevant, we could just as easily have
said Alex had tea and coffee as Alex had tea or coffee, and the
first one would be much more informative, so we use and instead. The
thinking goes, then, that in situations in which we do say Alex had tea or coffee,
we must have had a reason not to use and, and so it is unlikely that we know
both to be true.

(80) Context: We want to know what Alex had to drink, and we know the
only options are coffee, tea, both, or neither
Person A: Alex had tea or coffee.

Alternative possible answers:


Alex had nothing/ Alex had only tea/ Alex had only coffee/ Alex had
tea and coffee

Person B thinks: If Person A knew that Alex had both, they would
have said Alex had tea and coffee. Since they didn’t say that, Alex prob-
ably didn’t have both, so it must be the case that Alex had tea or coffee
but not both.
Thus, Person B interprets Person A’s statement as:
[Figure 3.1: Descriptions using the “weak” scalar items some and or and accompanying pictures to test for scalar implicatures in Davidson (2013)]

Alex had (only) tea or coffee (not both)


(pragmatically strengthened reading/scalar implicature)
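
Schematically, Person B’s reasoning in (80) can be modeled as strengthening the literal meaning by negating the stronger alternative that was not used. The sketch below is a bare-bones illustration of that inference pattern (the state space and function names are invented for the example), not a full model of implicature calculation.

    # Possible states of affairs for what Alex drank.
    STATES = [set(), {"tea"}, {"coffee"}, {"tea", "coffee"}]

    said_or  = lambda s: "tea" in s or "coffee" in s    # "Alex had tea or coffee"
    said_and = lambda s: "tea" in s and "coffee" in s   # stronger alternative with "and"

    # Literal ("logical") meaning of the disjunction:
    literal = [s for s in STATES if said_or(s)]
    # Strengthened meaning: the disjunction holds, but the unused stronger
    # alternative is taken to be false (the scalar implicature).
    strengthened = [s for s in STATES if said_or(s) and not said_and(s)]

    print(literal)        # the tea-only, coffee-only, and both states
    print(strengthened)   # only the tea-only and coffee-only states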

Due to their use of general use coordination, sign languages like ASL
present a somewhat different logical structure to investigate scalar impli-
catures than more well-studied languages like English. For example, does
general use coordination count as involving different possible alternative an-
swers for the purposes of the kind of reasoning in (80), or not? Davidson
(2013) investigates this question by comparing scalar implicature-based in-
terpretation of disjunction in ASL and in English, in order to understand
how the different scalar structures in the two languages affect interpreta-
tion of the logical expressions. This experimental study compared scalar
implicatures in two languages by comparing two participant groups: Deaf
adult signers of ASL, and hearing adult speakers of English, with each group
presented with sentences in their language (ASL or English, respectively) to
test scalar implicatures. In this study participants were all asked to judge the
acceptability of a description of a picture, and the critical trials presented
sentences using expressions of disjunction in contexts where conjunction
could also be true (for example, the coordination case in Figure 3.1). If
participants accept this description, they interpret it “logically” but not
pragmatically; if they reject the description, they are interpreting it with
the pragmatically enriched meaning. Participants’ responses on these trials
were compared to trials in which descriptions matched, and also to trials of
the same structure but with quantifiers (expressions using some where all
could also be true).
The result of the study was that when it comes to a “typical” scale in
both languages like the quantifiers, both ASL and English speakers react
similarly, suggesting that both groups/languages had similar baseline ex-
pectations for calculating scalar implicatures. However, these two groups
showed a difference on the coordination scale: while ASL uses nonmanual
marking to distinguish conjunction from disjunction, English uses different
marking to distinguish conjunction from disjunction, English uses different
pragmatic consequences worth investigating in other nonmanual domains
(e.g. negation) and across other sign languages. More generally, it suggests
that cross-linguistic investigations are a valuable source of understanding
the kinds of scales that do and do not support such pragmatic inferences;
other examples include languages with different modal scales, as in Deal
(2015) for Nez Perce.

4 Coordination and information structure


Recall that in the previous chapter we discussed how sign languages tend
to be strongly discourse configurational, in that the information structure
(topic, focus, etc.) is a strong influence on word order. In this section, we
discuss a place where discourse configurationality intersects with logical op-
erators through the (sometimes lack of) parallelism imposed on coordinated
structures. To be more specific, a well-known generalization based on spo-
ken languages is that the word order of two coordinated clauses tends not
to be independent. Syntax, semantics, and information structure constrain
coordination to generally be as parallel as possible. For example, we can
observe that parallel structures in (81a) are preferred compared to (81b), in
which the prepositional phrase to college is topicalized in the second coor-
dinate but the prepositional phrase to school is in its canonical position in
the first coordinate.

(81) (English, adapted from Hartmann et al. 2021)


a. Mary is going to school and Vivian is going to college.
b. ?Mary is going to school, and to college, Vivian is going.

Zorzi (2018) provides a detailed analysis of information structure within
coordinated phrases in Catalan Sign Language (LSC), arguing in favor of ded-
icated syntactic projections for information structural components. Hart-
mann et al. (2021) extend the study of coordination and information struc-
ture to sign languages by asking whether the same constraints that we see on
information structure and coordination, like in (81), hold in sign languages.
They review previously published literature, including sign language text-
books, and report overwhelmingly parallel examples on coordination in sign
languages. They then turn to a corpus analysis from Dutch Sign Language
(NGT) to investigate in more depth any possible exceptions to such con-
straints. One of the interesting findings of this work is that constraints on
parallel word order reported in many spoken languages do not seem to be
constraining NGT in the same way. For example, the disjunction in (82)
contains one disjunct where the verb go++ precedes the goal (hard of hear-
ing school), and a second disjunct where the goal (hearing school) precedes
the verb go-3b.

(82) (NGT, Hartmann et al.)


bl-3a bl-3b
ci [go++-3a s-h school] [hearing school go-3b]
‘Because of the CI, (children) go to a hard-of-hearing school (or) go
to a hearing school.’

The takeaway from their corpus study seems to be that although most
examples do involve parallel syntax, semantics, and information structure,
there are also genuine exceptions. It is not clear whether this rough ratio
holds also in spoken language corpora, but it would be worthwhile studying
whether these “constraints” are truly constraints on production or just ten-
dencies in spoken languages, as it seems to be a tendency in NGT. Recall in
the previous section that we saw both nonmanual marking and the use of
space invoked as part of the explanation for the typological pattern of neg-
ative concord found in sign languages. It’s possible that the same factors
play important roles when it comes to parallelism and information structure
for coordination as well: we can see in (82) that space is used to contrast
the disjuncts (loci a and b), and nonmanual marking in the form of brow
lowering is likely relevant as well. Perhaps there is also influence on the use
of space coming from metaphor, as we saw in the our discussion of contrast:
the two possibilities are depicted in different areas of space, following similarity
is proximity, suggesting that incorporating gestural elements would
be critical for a full understanding of this contrast in spoken languages as
well. Of course, this may also simply be a difficult question to answer when
restricted entirely to a production corpus, although fieldwork and/or care-
ful experimental work has the potential to tease these apart to understand
the role of sign language specific and general language components when it
comes to parallelism contrasts on coordination.

5 Conclusions
We end this chapter by turning our focus to the big picture questions: what
can we learn about sign languages from looking at logical operators like
negation, conjunction, and disjunction, what can we learn about these operators
by looking at sign languages, and what kind of conclusions can we draw
about language more broadly? On the last point, it can be instructive to
consider the model that we introduced in Chapter 1, in which language
meaning contains both propositional representations and, at the same
time, representations of particular events. Clearly, logical
operators contribute to propositional representations: that is one of their
most obvious roles. In fact, logical operators are a large motivation for that
kind of meaning: we can derive entailments through logical processes like the
interaction between disjunction and negation (e.g. disjunctive syllogism: if
a disjunction p∨q holds and a negation of one disjunct ¬q holds, then we can
infer the other disjunct p is true). So if there ever was a clear contributor
to propositional meaning, logical operators are it. But do they bear on
questions about multiple types of linguistic representations for meaning?
One way to think about this question when it comes to negation, con-
junction and disjunction is to consider models of each event as independent,
at least at first. So, in the case of “it’s not going to rain today”, there may
be a positive representation of a particularly rainy day (we can reason about
it through experience/simulation) at the same time as the proposition that
rules out all of the possibilities in which it does rain today (the proposi-
tional content). This has advantages over purely propositional approaches
that cannot explain why we nevertheless seem to conjure up a mental image of a rainy day even when the assertion expressed rules out such possibilities. It
also can explain why sign languages may be more flexible when it comes
to constraints like those on parallel information structure in coordination
that we discussed in the last section, if space is going to also be used for
depiction (which doesn’t have the same constraint). It also has advantages
over purely cognitive linguistic models of events which struggle to model the
contribution of logical connectives like negation, conjunction, and disjunc-
tion. Clearly, these present intriguing areas for thinking about the intersection
of multiple types of linguistic meaning.
4 Anaphora: a spatial discourse

So far we’ve focused on whole sentences and how they convey information
in a discourse via propositions and the propositional operators that con-
nect them. This chapter will focus on smaller pieces of language, namely,
how we refer to, describe, and track the people, places, and things that we
are talking about. Innovative work in semantics of spoken languages in the
early 1980s by Heim (1982) and Kamp (1983) argued that we used basi-
cally the same semantic representations for tracking things within a single
sentence and for tracking them across sentences in a discourse. It is conse-
quently one of the most striking aspects of the structure of sign languages
from the perspective of formal semantics and pragmatics that sign languages
naturally incorporate the signer’s 3-dimensional signing space to keep track
of things across both a sentence and a discourse (Lillo-Martin and Klima,
1990; Schlenker, 2011), as is the way that this spatial nature of discourse
is integrated with the use of space to depict these characters, objects and
events (Liddell, 2003; Taub, 2001; Cormier et al., 2013; Schlenker et al.,
2013; Fenlon et al., 2018). In this chapter we will first focus on a couple of
clear examples where locations in space (“loci”) are used for both discourse
tracking purposes (“anaphora”) and for depiction, observing that the two
systems are closely tied together. We’ll then investigate the use of a pointing
indexical sign to these loci/areas of signing space and its comparison to (per-
sonal and demonstrative) pronouns and other ways to refer to things. From
there, we’ll move to such uses in verbs, tying together the pronominal and
verbal uses. In the final section we’ll compare different formal approaches
to the problem of loci, put forth a positive proposal, and set out suggestions
for future work in this domain.


1 Space as depictive and arbitrary


To begin, let’s feature an example of signing space used at the same time
for both depiction and for tracking discourse referents. In (83), we see part
of a narrative in which the main character (a girl) saw a rainbow. The girl
is associated to the signer’s left side, and the rainbow to the signer’s right
side. Throughout the story, including in any subsequent sentences, each of
these spaces (left signing space and right signing space) can be used to refer
back to the girl and the rainbow. At the same time, aspects of the girl
and rainbow can potentially be depicted in these spaces, such as the high
pointing toward the right side, since the rainbow was up in the sky.

(83)

‘There is a girl. She saw the rainbow (up there).’

In sign language linguistics research, especially in formal semantics, much


effort has been spent analyzing this use of space, and for good reason. On
the one hand, it has long been a puzzle within formal semantic approaches
how to think about the way that we connect pronouns to their antecedents
in a discourse. Sign languages seem able to make the link explicit by tying

a pronoun ( ) and its antecedent ( ) to the same


location in space. This is the motivation behind a significant body of work
that emphasizes the overwhelming similarities between pronouns in signed
and spoken languages, which we will focus on in the next section, in line
with decades of findings that signed and spoken languages are more similar
than they are different. Given all of this, we might want to use sign language
structure here to also better understand the process in spoken language.
On the other hand, if we over analogize, that is, if we emphasize the
similarities too much, we risk missing important aspects of the use of space
in sign languages that are simply not available or are less available in spoken
language or (especially) in written language. For example, if we think about

the use of space as entirely about linking anaphoric expressions with their
antecedents, we risk losing sight of the way that this same use of space is
used for depiction (Liddell, 2003; Cogill-Koez, 2000b; Schlenker et al., 2013).
For example, it won’t explain why particular uses of space are used or not,
such as the high pointing to the rainbow in (83) or the way that pragmatics
plays a role in determining whether to use overt space at all versus using
(covert) strategies more common in spoken languages (Ahn et al., 2019).
At issue in the end is whether and how to emphasize the arbitrary vs.
the non-arbitrary nature of space in sign languages, since both come into
play directly in the meaningful use of space in sign languages. On the one
hand, it seems like space can be used somewhat arbitrarily, and when it is
used arbitrarily it seems to serve a function of linking together discourse
referents in the way that other systems have been shown to do in spoken
and written language. We will investigate these in Section 2 when we discuss
the way that pointing behaves pronominally in sign languages, and how the
places one points to (we follow the literature in calling these spatial locations
“loci”) share commonalities with restrictions on pronominal reference of the
sorts seen in written and spoken language. Again, the emphasis in this line
of work is on the striking commonalities between the use of space in sign
languages, and the way that pronouns and their antecedents are associated
with each other in other language modalities like speech and writing.
A focus on the arbitrary nature of loci contrasts with the observation
that space can be used in sign languages in a way that is not arbitrary at
all: signers can make use of space to depict a relationship between objects
and to depict events (Fenlon et al., 2018; Liddell, 2003). This use of space
exists in both spoken and signed languages, a way to show who does what to
whom (Schlenker and Chemla, 2018), how objects are arranged (Davidson,
2014) and how things move through space (Kita and Özyürek, 2003). Within
the formal semantics/pragmatics literature, it is usually said that this use
of space is interpreted iconically, i.e. not through (potentially arbitrary)
symbolic means; Ferrara and Hodge (2018) discuss such uses of space as
both depictive and/or indexical.
This chapter will focus on these two uses of space and their interactions
in sign language discourse. We will start in Section 2 by discussing the
preponderance of evidence of similarities between pronominal structures in
sign languages and spoken languages. Section 3 will continue this discussion
but with a focus on the verbal domain: how is the discourse information conveyed in pronouns reflected in the verbal domain, i.e. through agreement or clitics? Section 4 will then discuss the hypothesis that the (arbitrary) use of
space in sign languages is a visible manifestation of pronominal indices that
have been hypothesized in dynamic semantic accounts of spoken languages.
Section 5 will explore the comparison between this same use of space in sign
languages and gender/noun class features in spoken languages, and compare
to the index account. In section 6 we will move on to studies that highlight

the depictive/iconic nature of loci. Finally, in section 7 we will present a


view that attempts to take all three into consideration by taking space to be
a kind of “locative coreference constraint”, and conclude in section 8 with
future questions and directions.

2 Pronouns: Evidence for shared structure


All languages have ways to refer to people, places, and things, and so it is an important part of one’s competence with a language to keep track of which things one is describing at any given moment. When we describe
people, for example, we can use proper names, expressed in ASL either using

fingerspelling ( ) and/or uniquely associated


name signs. We can also use descriptions of various sorts that involve posses-

sion ( ) and/or other modifiers (


‘the girl with curly hair’). We can also use shorter forms when we’re more

familiar with who we are talking about, such as pointing ( ) or

even leaving out a name or sign altogether ((Talking about Gina:)


‘She is happy’). Given this variation in possible ways to refer to exactly
the same person, it is a deeply interesting question how we decide which
terms to use to refer to Gina at different points in a discourse. For one
thing, we know that economy constraints come into play, such as the fact
that we typically use a longer expression for something less familiar/less
expected, and shorter forms for something more familiar or more expected.
This can’t be the whole story, though: each of these expressions conveys

different information, e.g. ( ) entails that the ref-


erent is both a girl and has curly hair, while a combination expression

( ) might imply that the girl with curly


hair is already familiar to the signer (Barberà, 2012) or to both the signer
and the interlocutor (MacLaughlin, 1997). Moreover, there are linguistic

constraints regarding how these kinds of expressions can appear in relation


to each other, as in the complementary distribution between personal pro-
nouns and reflexive pronouns described in the binding theory (Chomsky,
1981).
One area that has been the source of much discussion in formal linguis-
tics comes from one of the smallest of these expressions, the meaning of
pronouns. For example, we might at first blush want to equate a pronoun
like the English expression her with the name of its referent, e.g. Gia. But
notice that they cannot simply be exchanged in order. The full name has
to precede the pronoun: we can easily imagine that she is Gia in (84a) but
it’s much harder to have the same two expressions connected to the same
meaning when used in the reverse order, in (84b).

(84) a. Gia is happy. She likes tea.


b. She likes tea. Gia is happy. (→ She ̸= Gia)

Note that we sometimes want to notate whether coreference is available or unavail-


able by using subscripts, so for example we can write that (85a) is a well-
formed discourse, but (85b) is not, and yet (85c) is.

(85) a. Giai is happy. Shei likes bread.


b. * Shei likes bread. Giai is happy.
c. Shei likes bread. Giaj is happy.

Perhaps not surprisingly, the picture that emerges from (85) is found in

sign languages, too. A pointing sign ( ) to a location in space (a) has


a similar pattern to the picture presented in English. That is, the pronoun
can’t pick out the person whose name is used later, while if they are flipped
in order, this is the most natural interpretation.

(86) a.

‘Gia is happy. She (Gia) likes tea.’


b. * ix-a like tea. gia happy. Intended: ‘She (Gia) likes tea. Gia is happy.’

Preferring that the whole noun phrase antecedent precede a pronoun might


seem like an obvious choice for a language to make for functional reasons,
but it is just one of many ways that human languages seem to organize the
relationship between pronouns and their antecedents, some of which (like the
linear order) seem easy to provide functional explanations for, and others less
so. For example, it’s possible to have a pronoun temporally precede a noun
with the same reference, as long as they are in certain syntactic/semantic
configurations, such as Because shei brought her teabag this morning, Giai
was able to make some tea. Chomsky (1981) famously outlined several prin-
ciples in his theory of pronominal binding, which included the observation
that a reflexive anaphor (like English herself in (87a)) needs to be “bound”
(i.e. have a co-indexed structurally higher local antecedent), that a pronoun
(like English her in (87b)) needs to be “free” (i.e. not have a co-indexed
structurally higher local antecedent) and that a referring expression (like
English Alex in (87c)) needs to be free (i.e. not have a co-indexed struc-
turally higher antecedent). The principles arising from these observations are lettered A, B, and C, as annotated in (87).

(87) a. Alexi emailed herselfi . (Alex = herself, thanks to Principle A)


b. Alexi emailed herj . (Alex ̸= her, thanks to Principle B)
c. Shei emailed Alexj . (She ̸= Alex, thanks to Principle C)

Those who are interested in learning more about Binding Theory and
subsequent/related theoretical terminology, especially about the notions of
locality (meaning roughly in the same clause, verbal domain, sharing a sub-
ject, etc.), binding (being not just co-indexed but also syntactically con-
nected), C-command (a particular definition of “structurally higher”), etc.,
are encouraged to learn more in a syntax course; basic notions are also in-
troduced in the context of sign language syntax in Sandler and Lillo-Martin
(2006).
For our purposes here, it is worth noting that many languages that are
unrelated to English show the kind of pattern exemplified in (87) for ex-
pressions that seem similar to herself, her, and Gia. Sometimes exceptions
arise, and when they do, it becomes especially important to more clearly
define what “counts” as being similar to something like the reflexive herself
or not. In these cases, one is faced with the challenge of either adapting
the theory or categorizing it as a different case; so it goes, when developing
any linguistic theory, with many factors influencing one’s decision. A nat-
ural question for sign linguistics is whether the same patterns found in so many spoken languages of the world also hold, and in any cases where they do not, what this says about the similarities and differences between
pronouns in spoken and signed languages, and how to draw generalizations
across all languages.
Consider, first, some basic sentences in American Sign Language. Some
have claimed that they seem to show the same pattern as in English (Sandler
and Lillo-Martin, 2006; Schlenker and Mathur, 2012); others have suggested
that what are quite strong preferences in English (which originally motivated
the binding theory) are weaker preferences in ASL (Kuhn, 2015).

(88) a.

‘Gia emailed herself’ (Gia = self, thanks to Principle A)


b. fs(Gia)-ai a-email-b ix-bj . (Gia ̸= ix, thanks to Principle B)
c. ix-bi b-email-a fs(Gia)-aj . (ix ̸= Gia, thanks to Principle C)

To the extent that these preferences resemble English ones but are weaker,
or even differ completely, a few possible explanations come to mind. First,
all languages have their own set of ways to reduce ambiguities. Consider
gender in English pronouns in the two sentence dialogue in (89a). The two
people introduced by the first sentence have names which are stereotypi-
cally associated with two different genders, and so gendered pronouns can
be used to distinguish them in the following sentence. But in a case where
the stereotypical gender associated to two names is the same, as in (89b),
the story is more clearly ambiguous in written English between the artist
as the helper or the helped one. Spoken English can be ambiguous too, but
speech can also make use of prosody, for example an extra stress on she can
signal a topic shift to signal that object of the previous sentence (Ann) is
the subject of the new sentence, i.e. the artist.

(89) a. Maryi helps Johnj . Shei /Hej is a wonderful artist.


b. Maryi helps Annj . Shei/j is a wonderful artist.
c. Maryi helps Annj . She∗i/j is a wonderful artist.
(with disambiguating prosody)

Certainly many spoken languages do not necessarily make use of a gen-


der system for resolving pronouns: for example, spoken Mandarin makes no
distinction between male and female third person pronouns (although writ-
ten Mandarin can, flipping the modality bias yet again), while German has

three genders that can be expressed on pronouns (male, female, neuter) and
languages with more extensive noun classes can make many more distinc-
tions including human, classes of other living things and objects, etc. These
features can interact with other factors of languages to correctly resolve
pronouns, including organizing by topic/comment structure, information on
verbs, and many other factors.
Given all of this, it is then not at all surprising that in sign languages,
there are also many similar strategies for resolving anaphora. These include
topic/comment ordering, the use of different handshapes for different classes
of nouns, and associating different referents in a discourse to different areas
of space. We will focus on the last of these for the time being: the association
of discourse referents to different areas of space, known in the literature as
spatial “loci” (Lillo-Martin and Klima, 1990). Consider (90): the association
of Gia to the signer’s right-hand signing space and Lex to the left signing
space unamgiuously picks out Gia as the referent of the pointing sign when
it is made to the left signing space.

(90)

‘Giai helps Lexj . Shei (Gia) is a wonderful artist.’

A question that arises again and again in formal approaches to sign language
semantics is how to think about this use of space, and how it compares and
contrasts with other ways to disambiguate reference in spoken and written
language. Attempts toward answering this question will comprise the re-
mainder of this section.
One much-discussed difference between the use of loci in sign languages
and systems like gender and noun classes in spoken languages is that there
seems to be no finite limit to the use of space in terms of the number of dis-
tinctions it can support. Lillo-Martin and Klima (1990) note that between
any two areas of space associated to individuals in a discourse, the area in be-
tween them could be associated to a third referent. This seems true in terms
of theoretical possibilities/competence although “performance” in terms of
memory demands and/or perceptual distinctions leads to some natural lim-
its; given this, it seems that sign languages can make infinite distinctions.

However, something that has been much less discussed in formal approaches
is that it’s not at all obvious that spoken languages are restricted to a finite
set of gender classes, either. Perhaps traditionally we have thought about
pronouns as having limits to gender distinctions (e.g. he/she/it for English
third person singular reference) but neopronouns (e.g. xe) are increasingly
used and nicely illustrate that there is no principled limit to
the number of pronoun distinctions for a language (Conrod, 2019). The fact
that sign languages can make in principle infinite distinctions was once used
as an argument that they are not like gender/noun class features in spo-
ken languages, but given the creativity available in neopronouns this should
probably be set aside as a difference between spoken and sign language pro-
noun/feature classes.
Beyond the sheer number of distinctions, it has also been a point of much
discussion how arbitrary (or nonarbitrary) the use of loci is. We saw
above one example, where the pronoun pointed high in the sky for a rainbow,
in contrast to lower for the girl watching the rainbow (83). Schlenker et al.
(2013) for example provide the following dialogue about a short basketball
player and a tall basketball player. This height difference will most naturally
be reflected in the choices of areas of space to associate to them: the locus associated to the tall basketball player is high in signing space, while the one associated to the shorter player is lower (91).

(91) (ASL, from Schlenker et al. 2013)


poss-1 young brother want ix-1 rest.
ix-1 understand ix-a-high.
‘My younger brother wants me to rest. I understand him.’
Interlocutor infers: the speaker’s younger brother is tall.

Another source of iconicity comes from physical presence: if someone or


something being discussed is physically present, a pronoun to pick them out
can use their actual location. In the same dialogue about Lex and Gia, for
example, if Lex is over in a corner of the room sitting down and playing a
board game, she is best referred to by pointing to her directly (92).

(92)

‘Lexi helps Giaj. Shej is a wonderful artist.’


(where Gia is present in the corner of the room)

Although the height (on non-present loci) and direct pointing (for present
loci) are typically considered as two different kinds of constraints on sign
language pronouns, we can consider them unified in the sense that in each case the choice of locus is motivated (by physical presence, by features, or even by more abstract notions like honorifics, etc.), following Liddell (2003).
Both show that the location associated to the referent can be motivated in
different ways. Note that some combination of motivation/arbitrariness is
present in every feature class in the world’s languages: “gender” is precisely
something with some motivation in the natural world but it is extended in
many cases in an arbitrary way so that, for example, bridge is masculine in
some languages (e.g. French le pont) but feminine in others (e.g. German
die Brücke). We will discuss some further notable cases in section 6 of this
chapter in which space is used in a motivated way for purposes of scene
depiction. The overall takeaway at this point is that the combination of
arbitrariness and motivated choice of locus in pronouns in sign languages
reflects a mix of arbitrariness and motivated choice found in pronoun systems
in spoken languages as well.
Another important similarity between the pronoun systems of spoken
and signed languages is that they are part of a larger system of ref-
erential expressions that are used in different syntactic and pragmatic
contexts. For example, depending on the pragmatic salience of the referent in the context, reference in ASL can be made via bare nouns (93a),
a pronoun of the sort we have been discussing (93b), or implicitly by argu-

ment omission (93c), or a combination of ix and noun phrases (93d-e).


It has been argued that ix can sometimes be used as a definite article in ASL when it is prenominal as in (93d), like English the (MacLaughlin, 1997) or the “strong definite” in German (Irani, 2016). Others have argued that ASL is an “NP
language” as in the typology argued for by Bošković (2005), concluding that
all noun phrases in ASL are bare noun phrases, and such uses of ix are

modifiers of some sort (Koulidobrova, 2012), especially postnominally (93e)


or as demonstratives (Koulidobrova and Lillo-Martin, 2016; Ahn, 2019a).

(93) a.

‘The girl saw the rainbow’


b.

‘She saw the rainbow’


c.

‘She saw (it)’


d.

‘The girl saw the rainbow’


e.

‘She saw the rainbow’


Barberà (2012, 2015) makes a compelling case that the prenominal ix in
examples like (93d) marks specificity, not definiteness, at least in analogous

examples in Catalan Sign Language when used in the lower plane of the
signing space. Davidson and Gagne (2022) extend this observation to the
use of ix in the lower plane to express specificity in ASL, and furthermore
they argue that the source of this is a more general use of planes to express
domain size (where the smallest/lowest is a domain with a single element,
i.e. specific, in agreement with Schwarzschild (2002)’s view of specificity
as an extreme domain restriction to just one element in the domain). This
raises a point emphasized also in Koulidobrova and Lillo-Martin (2016), that
the presence of ix and the locus it associates with are independent aspects
to analyze when trying to understand sentences like (93b-e) and others.
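To make the domain-size point concrete, Schwarzschild’s singleton view can be sketched by giving a noun phrase a covert domain restriction C (a simplified rendering for illustration, not the authors’ own formalization): the noun phrase is interpreted relative to the set {x : girl(x) ∧ C(x)}, and it counts as specific just in case

|{x : girl(x) ∧ C(x)}| = 1.

On the plane-based proposal, signing in the lowest plane signals this single-element limit, with larger areas of signing space corresponding to larger domains.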
Finally, recall in Chapter 2 that we emphasized the ways in which in-
formation structure influences the expression of sentences in sign languages.
One example of this kind of influence is the topicalization of a full noun
phrase, with a pronoun anaphoric to the noun phrase expressed in argument
position later on in the same sentence, or in the same discourse. Example
(94) illustrates this in ASL: canonical word order is subject-verb-object, but
instead of appearing in its canonical position, the subject friends has been
topicalized, and the pronominal ix-arc-a appears in its argument position
immediately preceding the predicate (real smart).

(94) (Davidson and Gagne, 2022)

‘(My) friends, they (are) really smart.’

In examples like (94) we can easily see the link between friend (the
topic) and ix (the subject of the sentence), but ASL also permits a sub-
ject to be implicit, i.e, unpronounced in a circumscribed set of linguistic
contexts (Lillo-Martin, 1986; Koulidobrova, 2012). Moreover, between com-
pletely overt pronouns like ix and completely covert pronouns, there are
intermediate options for providing anaphoric information, most notably the
use of verbs to indicate their arguments via incorporating the same locus
system as ix, which will be the focus of the next section.

3 Verbs: Agreement versus cliticization


Many languages in the world, including both spoken and signed languages,
change the form of their verbs to reflect different arguments. For example,
the Latin verb in (95a) has a “first person” ending because the subject is

the speaker, while the verb in (95b) has a “third person” ending since the
subject is someone else not currently participating in the discourse.

(95) a. amo ‘I love’


b. amant ‘They love’

Similarly, the ASL verb email moves from locus a to b in


(96a) to indicate that the person associated to a (Gina) was the subject,
while it makes the opposite movement (from b to a) to indicate a third
person was the emailer (to Gina).

(96) a.

‘They (Gina) emailed them (singular)’


b.

‘They emailed them (both singular)’

The pattern in (95) is standardly analyzed as agreement in formal linguistics, and many analyses of the ASL pattern in (96) also use the term “agreement” to highlight the similarity to the Latin type, in some cases arguing that the underlying cause is the same (Pfau et al., 2018).
Sometimes, however, what superficially appears to be agreement of the sort
famous from Romance languages and other languages with high amounts of
inflectional morphology may in fact have different analyses under the hood.
Consider, for example, the case of French clitics l’/le (97) which in terms
of pronunciation do not stand alone: they must immediately precede a verb
and are pronounced as one prosodic unit with the verb, two reasons we
might think they are more like the Latin verbal endings/agreement. How-
ever, at least in some cases they behave quite differently from agreement, instead behaving semantically like full pronouns in filling argument slots.
We can see this by observing that pronouncing the object clitic along with

the full object as in (97e) is unacceptable, which would be surprising if the


clitic were simply agreement. Is there any reason to think that the use of
directionality in sign languages should be analyzed along these lines?

(97) a. Marie voit Jean. ‘Marie sees Jean.’

b. Marie aime Jean. ‘Marie loves Jean.’
c. Marie l’aime. ‘Marie loves him.’
d. Marie le voit. ‘Marie sees him.’
e. *Marie le voit Jean. ‘Marie sees Jean.’

On the surface, a major difference between French clitics and ASL loci on verbs is that a locus-marked verb does not preclude full noun phrases, in contrast to the unacceptability for clitics in (97e). However, there is a well-known phenomenon of clitic doubling (repeating the argument both as a clitic and in argument position) across the world’s languages (Anagnostopoulou, 2006), so the co-occurrence of locus and full noun phrase does not rule out a clitic analysis. Moreover, when we look more closely at directionality on sign language verbs, we see deeper similarities between
clitics and directionality in sign language verbs. There are, for example,
many ways in which loci show odd properties from the perspective of ana-
lyzing loci as agreement; Nevins (2011) notes that the following are all odd
or unexpected properties of ASL agreement:

1. Subject agreement is optional, not required (Meier, 1982)

2. The plural cannot be marked on both subject and object at the same
time (Sandler and Lillo-Martin, 2006)

3. Number agreement is realized through marking that is dissociated and isolable from person marking (Mathur, 2000)

4. Agreement occurs in non-finite/infinitival clauses (Padden, 1988)

5. There is a preference for marking the indirect object over the direct
object (Janis, 1995)

6. Loci occur on prepositions in some sign languages (Meir, 2002)

7. Some directional marking seems to express spatial locations, not iden-


tify arguments (Padden, 1988)

What is noteworthy is that so many of these properties are seen in clitics


across the world. Nevins (2011) provides the following examples of languages
with clitics that show these properties:

1. Subject person marking on verbs is optional: Gruyere clitics

2. Plural cannot be marked on both subject and object at the same time:
Georgian omnivorous number triggered by clitics

3. Number agreement realized as a dissociated gesture: Georgian

4. Clitics occur in non-finite/infinitival clauses: Italian

5. Preference of the indirect object over the direct object: French clitics

6. Clitics found on prepositions: Polish

7. Existence of locative clitics: Italian, French

Highlighting similarities between clitics in spoken languages (prosodically


dependent pronominal-like elements that fill argument positions) and di-
rectional verbs in sign languages in some ways raises more questions than
answers. For example, it is known that clitics can be an intermediate path in
language change between full pronouns and agreement, and perhaps French
in its current state exemplifies this (Culbertson, 2010). Is something like
this the way to think about the ASL case too, as not entirely one or the
other within the ASL community but rather in a state of ongoing change?
Second, whether or not the use of directionality marked on verbs in sign
languages is most closely compared to clitics, agreement, or something in
between in spoken languages, the better question is how it works on its
own: what are the possibilities and what are the constraints? And how is
this best accounted for in a theory of the minds of sign language users? For
example, agreement and clitics have historically been viewed entirely differ-
ently in theoretical linguistics literature: agreement is typically viewed as a
formal link between something that has some features (say, plural/singular
grammatical number features, noun class, and/or gender on noun phrases)
and other places in the language that take on these features, such as verbs,
adjectives, determiners, etc., which “agree” with the noun phrase. Agree-
ment on a verb phrase doesn’t provide the arguments for any verbs (i.e.
it doesn’t provide any missing information directly); it is simply a reflec-
tion of something elsewhere that has provided that argument, and thus it
isn’t in complementary distribution with full noun phrases. In contrast,
(prosodically stand alone) pronouns and (phonologically dependent) clitics
tend to be viewed as making reference directly and filling argument posi-
tions, as we saw in (97). They can introduce things into a discourse on their
own, and to the extent that they have a link to another noun phrase it is
via co-indexation of the sort that can hold between any two noun phrases.
However, even given consensus that certain expressions might be clitics in
a spoken language, linguists may disagree on the relationship between full
noun phrases and clitics: in some cases there might be reason to posit a

syntactic dependency between a full noun phrase and a clitic. Arguments


for treating spoken language clitics as pronouns typically come from seman-
tics, as in the ability of a clitic to vary with a quantifier (Baker and Kramer,
2018), and we will see more examples of this discussed in Chapter 6.
Kocab et al. (2019) provide further pragmatic arguments in favor of a
clitic analysis and against agreement via the optionality (also noted above)
of directionality marking on sign language verbs. They show that the use
of directionality on verbs depends on several pragmatic factors involving
disambiguation: a story with two animate referents is more acceptable when they are disambiguated via verb directionality, whereas a story with a single animate referent is more acceptable without directionally marked verbs or the use
of locus and pronominal ix; similarly, stories with unambiguous continua-
tions were acceptable without directionality, whereas stories with ambiguous
continuations were much more acceptable with directionally marked verbs.
This dependence on discourse pragmatics is expected if we are dealing with
clitics and/or pronouns, but not (typically obligatory) syntactic agreement.
The conclusion in this section is that there are important semantic/pragmatic
similarities between the use of directionality in sign language verbs and the
use of pronouns in the semantic sense, including clitics. Whether or not one
also wants to highlight additional similarities to agreement (Lillo-Martin and
Meier, 2011; Pfau et al., 2018) or to pointing construed more broadly (Fen-
lon et al., 2019), the use of loci on verbs is in many ways similar to clitics,
which notably have semantic/pragmatic properties of pronouns (even if they
are phonologically not separate words). Given this, and the similarities to
pronouns shown in the previous section, the natural next question is how to
think about the semantics of loci in light of what we know about pronouns.
Entire subfields of formal semantics and both the syntax/semantics inter-
face and semantics/pragmatics interface are dedicated to questions about
how pronouns are linked to their antecedents. They ask questions like: how
does one interpretation for a pronoun become preferred over another? How
do we account for pronouns that seem to express quantification instead of referring to an individual (e.g. Every girl hugged her mother)? What mechanisms
do we need in a theory of anaphora-antecedent relationships to cover the
full array of cases that we see in natural language? Sign language loci have
played interesting roles in all of these discussions. In the next section, we’ll
see how loci compare to indices used in “Dynamic Semantic” approaches to
anaphora; in the following section we’ll discuss the comparison between loci
and gender/noun class features. We’ll then propose an analysis which takes
insights from both accounts in section 6.

4 Loci vs dynamic indices


One view of sign language loci which has gained significant interest in the
literature is that of loci as the indices that keep track of discourse referents.
Such indices are the core feature around which dynamic semantic systems
revolve. In so-called “dynamic semantic” approaches, the level of seman-
tic analysis is an entire discourse, not just a sentence (this contrasts with
non-dynamic approaches). Two variants are roughly the representational
structures of Kamp and Reyle (1993) or the file cards of Heim (1982) on
the one hand, or the context change potentials of Heim (1982) and dynamic
predicate logic of Groenendijk and Stokhof (1991) on the other hand; see
also Chierchia (1995). In dynamic semantic systems, the idea is roughly
that every noun phrase comes with an index (imagine these coming from
the natural numbers, which count up our discourse referents), and the index
tells you which discourse referent the noun phrase is associated with. For
example, the English noun phrase a happy artist is indefinite (contrast with
the definite the happy artist) so the function of a happy artist is to introduce
an index, which in this case might be 2. This index is never pronounced,
but nevertheless is tracked as part of the comprehender’s knowledge of the
grammar. Then later, perhaps in the same sentence or in another sentence
in the discourse, a pronoun her or she might be assigned an (unpronounced)
index 2, which ensures that her, she and a happy artist are all directed to
the same file card, e.g. roughly have the same reference. We can notate the
dependency between these interpretations as in (98); the idea of dynamic
semantics is that these indices are symbols in the semantics able to be oper-
ated on by other symbols (e.g. quantifiers, boolean operators, etc.), and the
effort is in understanding the ways that different operators introduce and
act on such indices, and the restrictions on their co-occurrence.

(98) [A happy artist]2 emailed her2 friend. She2 wanted to meet for coffee.
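To see the bookkeeping this involves, here is a minimal computational sketch of the file card idea (the class, method names, and conditions are all invented for illustration; this is not a published implementation):

    class Discourse:
        """A toy model of Heim-style file cards indexed by natural numbers."""

        def __init__(self):
            self.cards = {}   # index -> list of conditions on that discourse referent
            self._next = 1

        def new_card(self, *conditions):
            """Indefinites introduce a fresh index and open a new card."""
            i = self._next
            self._next += 1
            self.cards[i] = list(conditions)
            return i

        def add(self, i, *conditions):
            """Pronouns (and definites) reuse an existing index and update its card."""
            self.cards[i].extend(conditions)

    # "[A happy artist]_2 emailed her_2 friend. She_2 wanted to meet for coffee."
    d = Discourse()
    d.new_card("some earlier referent")          # filler card so the artist gets index 2
    artist = d.new_card("happy", "artist")       # index 2
    friend = d.new_card("friend of 2")           # 'her friend' opens its own card (index 3)
    d.add(artist, "emailed 3")                   # her_2 points back to card 2
    d.add(artist, "wanted to meet for coffee")   # she_2 points back to card 2
    print(artist, d.cards[artist])
    # -> 2 ['happy', 'artist', 'emailed 3', 'wanted to meet for coffee']

The point is simply that an indefinite opens a fresh card under a new index, while a pronoun reuses an index that is already on file.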

There are many empirical phenomena which are accounted for straight-
forwardly in dynamic systems, but the trade-off is that the logic required to correctly capture the relations between indices, depending on where they are in sentences/clauses, is quite complex, though at this point also quite well worked out. (Along with the references above, a gentle introduction
to dynamic approaches can be found in Coppock and Champollion 2022.)
Among many key data points are so-called “donkey sentences” (99), which
have the notable property that there seems to be no syntactic relation/scopal
dependency between the indefinite noun phrase in the antecedent clause (a
donkey) and the pronoun in the consequent clause it, even though they are
interpreted as co-varying (e.g. any donkey owned by a farmer is going to
end up fed, not just a single particular donkey, so it is not referential, i.e. it
doesn’t pick out a particular donkey).

(99) If a farmer owns [a donkey]1 , he feeds it1 .

Dynamic theories update information about discourse referents (e.g. a don-


key) as they go, allowing for dependencies between these two syntactically
disconnected clauses. Much of the detail of dynamic systems is involved in under-
standing the logical configurations which support the kind of dependencies
seen in (99) (e.g. antecedent and consequent of a conditional as we see
above, or conjunction) and those that don’t (e.g. disjunction).
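For concreteness, the co-varying reading of (99) amounts to the familiar universal paraphrase (a standard textbook rendering, not tied to any particular dynamic system):

∀x∀y((farmer(x) ∧ donkey(y) ∧ own(x, y)) → feed(x, y))

A dynamic conditional delivers this because every way of verifying the antecedent (every choice of a farmer-donkey pair) must be extendable to a way of verifying the consequent, so the index introduced by a donkey remains accessible to it without any overt syntactic link between the clauses.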
If indices that keep track of the relationship between the meanings of
different noun phrases are indeed part of the linguistic structure/semantic
computations - and not a peripheral one, but essentially the most core no-
tion in dynamic semantics! - then it might be considered a mystery why such
indices would fail to be pronounced in any spoken language. One way to
think about this is that the indices do not have a syntactic realization, but
rather are a shorthand for a cognitive representational system for different
files. With rare exceptions, such as the argument that the spoken language
Washo shows evidence for syntactic reality of indices (Arregi and Hanink,
2021; Hanink, 2018), there have been few to no suggestions from spoken
languages for indices to be reflected in linguistic forms. An intriguing in-
sight by Lillo-Martin and Klima (1990) then, was that spatial locations, i.e.
loci, in American Sign Language might be a good candidate for a language
making dynamic indices overt. Their arguments are strong ones: they point
out that loci are both infinite and unambiguous, two properties of indices
in dynamic semantics. Their argument that loci are infinite is that any two
loci can have another between them. They are unambiguous because they
constrain coreference just like an index: if two noun phrases have the same
index they corefer, and if two noun phrases have the same loci they corefer.
(Lillo-Martin and Klima also note the shifting reference involved with loci;
this comes up more in Chapter 5.)
The idea that sign language loci are examples of overt variable indices
was further pursued by Quer (2005), Schlenker (2011), Steinbach and Onea
(2015), and others. Schlenker (2011) and Steinbach and Onea (2015) build
on the observations from Lillo-Martin and Klima (1990) in providing sen-
tences from three sign languages (ASL, LSF, and DGS) that compare to
donkey sentences. In these cases, the referents for noun phrases introduced
in the antecedent of a conditional or in the restrictor argument of a uni-
versal quantifier can co-vary with others that seem to be in a syntactically
inacccessible clause. Note that the example in (100) does not have stand-
alone pronouns in the consequent clause like the English sentence, but rather
conveys the loci via the directional verb beat.

(100) (NGT, Steinbach and Onea 2015)


cond farmerBS:3a own3a donkey - ix3a 3a beat3b .
‘If a farmer owns a donkey, he beats it.’

Schlenker argues that sign language loci are in fact an argument for
dynamic models of semantics over non-dynamic semantic models given that
sign languages provide a “visible” manifestation of loci. To understand
why, first we take a short detour, regarding so-called “Bishop sentences”.
The structure of these sentences is such that two identical indefinite noun
phrases (a bishop... a bishop) are introduced in one clause, and co-vary with
pronouns or definite descriptions in another syntactically inaccessible clause
(he... him), as in (101).

(101) If a bishop meets a bishop, he blesses him.

In spoken languages, bishop sentences have been used to argue against


thinking about pronouns as having unpronounced structure similar to a noun
phrase that disambiguates. Contrast, for example, the possible solution to
donkey sentences in (102a) (which on its face might encourage us to think
about equating it = the donkey) with the lack of clarity offered in the bishop
case in (102b) - the noun bishop hardly distinguishes the subject from the
object in this case, especially since the verb meet is symmetrical, too.

(102) a. If a farmer owns a donkey, he feeds it[(the) donkey].


b. If a bishop meets a bishop, the bishop blesses him[ (the) bishop].

The problem for non-dynamic accounts basically is that we get the in-
tuition that the bishops should vary in some kind of regular way (perhaps
with a series of events/meeting occasions) and this is communicated clearly
in (101), but less so in (102b), and so we might conclude that definite noun
phrases like the one in (102) are not the hidden structure behind the scenes.
This problem is presented in Schlenker (2011) as a choice in theoretical ap-
proaches: we can make the idea of the hidden/covert definite noun phrases
more complicated (e.g. The first bishop), etc., to salvage the idea that pro-
nouns are always hidden definite structures, or we can simply abandon the
approach altogether that the way these sentences work is through some kind
of covert noun phrase and instead complicate our logical machinery by al-
lowing that pronouns come with their own dynamic indices, and a logical
system that acts on these indices. According to Schlenker (2011), sign lan-
guages play a major role in deciding between these two accounts because
they illustrate an overt version of indices in precisely these places, via loci.
Let’s examine what these sentences look like in sign languages to better
understand this claim. As expected if loci are a way to show dynamic indices,
both donkey and bishop sentences can make use of loci to distinguish one
of the noun phrases from the other. The sentence in (103) has the structure
of bishop sentences in some ways: it includes two noun phrases in the first
clause ( ix-a a student ‘a student’) and (ix-b b student ‘a student’) and
the symmetrical verb meet, except that unlike in English the noun phrases

are not entirely identical: the first is associated to locus a and the second
to locus b.

(103) (LSF, Schlenker 2011)


each-time ix-a a student a-meet-b ix-b b student, a-give-b cigarette.
‘Each time a student meets a student, he [= the former] gives him
[= the latter] a cigarette.’

Under an account in which the two loci a and b are ways of overtly mark-
ing two different dynamic indices, this is exactly what we expect: “identi-
cal” noun phrases except for the indices, which then are used again in the
consequent clause on the directional verb a-give-b to provide a co-varying
interpretation. This is taken as evidence from sign languages for overt indices (Schlenker, 2011; Zucchi, 2012; Lillo-Martin and Gajewski, 2014; Schlenker, 2018). However, there are at least two reasons we might be hesitant to jump to this conclusion.
The most obvious objection to the argument that loci are overt indices is
that it’s not clear why the loci can’t just be considered as more descriptive
material, of the sort seen in definite noun phrases like the former, the latter,
the one in this locus, the one in that locus, etc. This is roughly the tack
taken in response to this data by Ahn (2019a,b). She notes that if the locus
is descriptive material then we might expect it to be elided in unambiguous
cases where the potential for mistaken pronoun resolution is very low, and in
fact this is exactly what we find in studies of the pragmatics of loci, both for
indexical pointing (Ahn et al., 2019) and for loci in directional verbs (Kocab et al., 2019). The (potentially descriptive) contribution of the locus usually has to
be interpreted as non-restrictive/non-at-issue: we definitely want universal
sentences like the one in (103) to range over all possible versions of students
who meet (and thus give each other cigarettes), not just the ones who are in
a particular location! But this is precisely what material like former/latter
etc. is able to do, so there’s no reason the spatial locus cannot do this as
well (restricting via space rather than time, in contrast to former, latter, etc.).
Schlenker (2011) anticipates this point in arguing that one can add un-
pronounced descriptive material to the pronouns in donkey sentences of
basically exactly this sort we have in mind for loci (former, latter, etc.), and
points out that if we let this unpronounced material become fine-grained
enough (e.g. have infinite options for disambiguation) this becomes basically
the dynamic analysis. This seems entirely right! But then the takeaway
for formal semantics more broadly is not that sign language loci provide a
unique kind of evidence in favor of dynamic semantic accounts, only that
they follow exactly what we would expect of any language given what we
already know about pronouns in these kinds of contexts: we are able to use
descriptive content to make increasingly fine grained distinctions, and we
seem to be able to do so overtly or covertly. In other words: a conclusion

is not that sign languages show unequivocal evidence for dynamic accounts,
but rather that they show evidence on exactly the point where dynamic
and non-dynamic accounts seem to entirely agree: that languages seem to
need some kind of restriction in the meaning of pronouns that
is fine-grained enough to account for an unlimited number of distinctions
between discourse referents, and while spoken languages can use all kinds of
not-at-issue descriptive content, sign languages can also use space.
The second concern for taking sign language loci to be an example of
dynamic variable indices is that in most dynamic semantic systems, indices
are used on variables both in coreferential contexts and in contexts where the
relationship between the two noun phrases seems to be more syntactic, such
as quantifier raising, wh-movement, etc. (but see Reinhart 1983, Büring
2005, and Chierchia 2020 for discussion of keeping them separate). It can
often be difficult to disentangle examples of the sort that encode discourse
dependencies (tracking two discourse referents from sentence to sentence)
from those that entirely encode syntactic dependencies, but probably the
most reliable way to do it is with negative quantifiers: we can see in (104a)
that Alex and he can be coreferential due to discourse/context factors, and
in (104b) we might be able to imagine the he in the second sentence as some
generic student we have in mind, but it’s generally unacceptable to do this
in (104c) with a negative quantifier. This contrasts with the cases in (104d-
e), where we can get coreference between the negative quantifier phrase or
question phrase and a pronoun, thanks to a syntactic/semantic dependency
within a sentence.

(104) a. Alexi likes this book. Hei knows it is good. (discourse)


b. Every studenti likes this book. Hei knows it is good. (discourse?)
c. # No studenti likes this book. Hei knows it isn’t good.
d. No studenti wants to read hisi book. (syntactic)
e. Whoi wants to read hisi book? (syntactic)
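The standard dynamic diagnosis of (104c) is that negation is externally static: a negated formula acts as a test that passes along no new assignments, so any index introduced inside it is closed off to later discourse (a simplified rendering):

No student likes this book ≈ ¬∃x(student(x) ∧ like(x, this-book))

Because the sentence asserts that there is no witness, a later he has nothing to pick up across the sentence boundary, whereas in (104d-e) the pronoun stays inside the scope of the quantifier or wh-phrase within a single sentence.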

Intriguingly, the same contrast arises with sign language loci as well: Graf and
Abner (2012) and Abner and Graf (2012) note that sign language loci that
are optional but acceptable to establish coreference between a pronoun and
a universal quantifier (105) seem to be unacceptable precisely in cases that
necessitate real “syntactic binding”, that is, places where there can’t be co-
reference linking the pronoun and its antecedent because there is no reference
in the case of negative quantifiers (106).

(105)
(ASL, Abner and Wilbur (2017); Abner and Graf (2012))

a. all politician person-a tell-story ix-a want win


b. all politician person-a tell-story want win
‘Every politician said that he wanted to win.’

(106)
(ASL, Abner and Wilbur (2017); Abner and Graf (2012))
a. * no politician person-a tell-story ix-a want win
b. no politician person-a tell-story want win
‘No politician said that he wanted to win.’

On this point, Kuhn (2020) argues that the use of a locus in sign languages
involves a kind of iconicity, basically a claim of existence, and this seems
roughly right, and is compatible with the account offered later in this chap-
ter. Before we get there, though, let’s consider our conclusions on dynamic
indices and loci. One can be highly sympathetic to a dynamic semantic
approach that models the introduction and binding of indices of discourse
referents in ways that more accurately reflect the dynamics of discourse (for
example, binding across sentences, an asymmetric semantics for conjunction,
etc.), while still retaining skepticism that sign language loci are indices -
and that is precisely our take here. Sign language loci provide (descriptive)
information about the discourse referent which can be used for disambigua-
tion, just like other descriptive information. Indices can be assigned to any
discourse referent, loci or not, and in fact, most discourse referents are not
assigned overt loci (Frederiksen and Mayberry, 2016), and their use appears
to be constrained by pragmatic information/quantity considerations similar
to other descriptive material (Ahn et al., 2019; Ahn, 2019b).
Let’s take this idea of loci as (typically not-at-issue) descriptive material,
and apply the perspective to one final sentence type often used to probe for
dynamic indices: verb phrase ellipsis. Example (107) is from Lillo-Martin
and Klima (1990) for ASL, and the claim is that such examples can have
two different logical structures, which are reflected in two different interpre-
tations of the “elided” (silent) material in the second sentence. Similarly,
the LIS example in (108) shows two different interpretations, depending on
whether the elided noun phrase is interpreted in a “strict” way or a “sloppy” way.

(107) (ASL, from Lillo-Martin and Klima 1990)


marya , aliceb . ixa think ixa have mumps. ixb same
a. Mary thinks she has mumps. Alice thinks Mary has mumps, too.
b. Mary thinks she has mumps. Alice thinks Alice has mumps, too.

(108) (LIS, from Cecchetto et al. 2015)


giannia secretary possa value. piero same.
a. Strict reading: “Gianni (who is at a) values the secretary who
belongs to the unique individual who is at a and is equal to
Gianni. Piero also values the secretary who belongs to the unique
individual who is at a and is equal to Gianni.”
b. Sloppy reading: “Gianni (who is at a) values his own secretary.
Piero does too.”

Note, however, that these examples merely show that there are two different ways to recover the content of this verb phrase: the main point is that whatever the locus contributes needs to be something that can be ignored in this reconstruction. Indices presumably have this property, but other
information found in descriptive noun phrases does too: in particular, this
is true for gender features, which are also ignored for purposes of ellipsis: in
example (109) there is no disambiguation between the two readings of verb phrase
ellipsis even though the two pronouns that would be pronounced differ in
gender (Mary values her secretary, too vs. Mary values his secretary, too).

(109) John values his secretary and Mary does too.


Strict reading: Mary values John’s secretary too.
Sloppy reading: Mary values Mary’s secretary.
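One schematic way to see why gender is inert here is to write out the two candidate antecedent properties for the ellipsis site (a textbook λ-sketch, neutral between particular theories of ellipsis):

Strict: λx. x values John’s secretary
Sloppy: λx. x values x’s secretary

Either property can be copied into the ellipsis site and applied to Mary; the masculine feature of his surfaces in neither property, paralleling the availability of both readings with loci in (107)-(108).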

At this point, we might wonder whether the kind of content that the locus
contributes is similar to features like noun class or gender, and in fact this
is precisely the second major approach taken in formal semantics/syntax
regarding sign language loci, which we turn to in the next section. The
takeaway at this point from ellipsis is that if loci do involve descriptive-
like material, then it cannot be of the sort that is at issue, i.e. it has to be
something like gender features that is not considered in the semantics/logical
form upon which focus alternatives are constructed.

5 Loci vs. gender/noun class features


For anyone familiar with languages that have gender and/or noun class
marking, the similarities between these kinds of features and the use of loci
in sign languages might be striking. Probably the most notable common-

ality is that the markings occur both on verbs, as in ,


and on the noun phrases that provide the arguments for those verbs, as in

. This shares much in common with


person, number, and gender features in a language like French, where a verb
aimer ‘to love’ changes its form depending on different person, number, and
gender features of its subject, as the second person singular aimes in (110a)
and the third person singular aime in (110b), and noun phrases contain gender information in determiners (la/le) as well as subsequent pronouns (elle/il) (111).

(110) (French)
a. Tu aimes la fille.
‘You love the girl.’
b. La fille aime le livre
‘The girl loves the book.’

(111) (French)
a. la fille... elle...
‘The girl... she...’
b. le garçon... il...
‘The boy... he...’

The comparison between (person, number, gender, etc.) features and


sign language loci has been around for some time, and has been discussed
in particular depth especially with regards to syntax by Neidle (2000) and
with regards to semantics by Kuhn (2016), Barberà (2015), and Steinbach
and Onea (2015), among many others. The main idea is simple: we see
many spoken languages making use of feature classifications like gender,
noun class, etc. to help track discourse referents. In terms of semantics,
this is often implemented via presupposition: the feature is not part of the
asserted content but rather backgrounded in a way that if it fails to hold
(say, the wrong gender or number is used) then the utterance fails to be
contextually appropriate and/or fails to have a truth value in that context,
depending on the theory of presuppositions and truth values. Let’s walk
through an example: consider a context in which the name Kate refers to
someone to whom, roughly approximating the phrasing in Conrod (2019),
it is “appropriate to use female gender features in this context”. If I say
Yesterday I ran into Kate while she was taking a walk, the pronoun she
can co-refer with Kate because it is appropriate to use the (singular, third
person, feminine) features of that pronoun to refer to Kate. This contrasts
with a minimally different sentence like Yesterday I ran into Kate while he
was taking a walk; the idea is that this sentence fails to have a truth value
if the intended reference for he is supposed to be Kate, because there are
no salient individuals in that context to whom it is appropriate to use male
gender features. A conventional formal approach to this sentence would be
in (112) (Heim and Kratzer, 1998).

(112) Yesterday I_1 ran into Kate_2 while she_2 was taking a walk
⟦she_2⟧^g = ιx : fem(x). g(2) = x
‘The extension of she (with index 2) under the assignment function
g is the unique individual in the domain that is referenced by 2
in the assignment function; it is only defined if the individual is
appropriately referred to using ‘she’ series pronouns’
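To see how this definedness condition rules out the minimally different sentence with he, here is an illustrative sketch in the same notation (masc is assumed here as the counterpart of fem):

⟦he_2⟧^g = ιx : masc(x). g(2) = x
If the assignment function sets g(2) = Kate, and masculine features are not appropriate for anyone salient in the context, then the condition masc(g(2)) fails, so the pronoun, and with it the sentence, receives no truth value in that context.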

We could consider the use of sign language loci in a similar way. Imagine
that we take definedness/ability to take a semantic value of the pronoun in
ASL to depend on the appropriate use of spatial location (e.g. a), along the
lines in (113).

(113) [signed ASL example, pictured]
'Gia helped Lex. She (Gia) is a wonderful artist.'
⟦ix-a_2⟧^g = ιx : depicted-at-a(x). g(2) = x
'The extension of ix-a with a dynamic index 2 under the assignment
function g is the unique individual in the domain that is referenced
by 2 in the assignment function; only defined if the individual is
appropriately referred to using the area of signing space a'

How does this account for the patterns of loci that we have been dis-
cussing? The property of being associated to a locus will help disambiguate
sentences, like the one in (113) above, because only one of the individuals
in the first sentence was associated to that locus a. This works roughly
like gender in the process of disambiguation in English, where co-indexed
noun phrases must not be incompatible with respect to the presupposi-
tional requirement (gender, or spatial location, or perhaps noun class, etc.).
In addition, Kuhn (2016) has highlighted other similarities between gender
features and loci, such as the way that both gender and loci are disregarded
when creating propositional alternatives. Consider that in the ASL case in
(114) and the English case in (115), the possessive expression (poss-a/his)
shares a feature in common with the subject (locus a in ASL and masculine
in English), yet this information is disregarded in the consideration of the
focus alternatives, which otherwise follow the structure of the sentence with-
out the focus operator (only-one): the other referents in (114a) need not be
associated to locus a and the other referents in (115a) need not be mascu-
line. In both cases the “strict” reading of the possessive is available (in the
a sentences) as well as the so-called “sloppy” reading (in the b sentences).

(114) (ASL, Kuhn 2016)


ix-a john-a only-one see poss-a mother.
a. Alternatives: {Billy saw John’s mother, Andy saw John’s mother,
Jessica saw John’s mother, ...}
‘No other kid saw John’s mother.’
b. Alternatives: {Billy saw Billy’s mother, Andy saw Andy’s mother,
Jessica saw Jessica’s mother, ...}
‘No other kid saw their own mother.’

(115) (English)
John is the only one who saw his mother.
a. Alternatives: {Billy saw John’s mother, Andy saw John’s mother,
Jessica saw John’s mother, ...}
‘No other kid saw John’s mother.’
b. Alternatives:{Billy saw Billy’s mother, Andy saw Andy’s mother,
Jessica saw Jessica’s mother, ...}
‘No other kid saw their own mother.’
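Informally, and as just an illustrative sketch rather than a particular theory of focus, the two sets of alternatives can be thought of as built from two logical forms in which the feature on the possessive (locus a or masculine) is simply not part of what gets varied:

Strict: {x saw John's mother : x ranges over the relevant alternatives to John}
Sloppy: {x saw x's mother : x ranges over the relevant alternatives to John}
In neither set is there any requirement that the alternative values of x be associated with locus a (ASL) or be masculine (English); those features are ignored at the level where the alternatives are constructed.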

Along with the verb phrase ellipsis data we saw above in (107)-(108), we can
conclude from this that the disregarding of both loci and gender features
in focus alternatives puts them in a class together, and apart from other
descriptive content: contrast with the object mother/mother, which has to
be present in each focus alternative.
One objection to thinking about sign language loci as features comes
from the apparent difference between their fluidity and that of gender fea-
tures: typically gender features are considered to be stable features across
different discourses (e.g. female gender pronouns are appropriate for Kate
across all kinds of contexts), whereas appropriate loci change from discourse
to discourse: Alex might be placed on the signer’s right hand side in one
discourse and on the signer's left hand side in another. However, this is not
a strong objection to comparing the two at all, if we consider how speakers
really use gender in many languages: indeed, gender can vary when it is
convenient, including by omitting gender marking altogether. Consider, for example, the
contrast in (116).

(116) a. My friend sent me their recommendations for movies.


b. My friend sent me her recommendations for movies.

On one interpretation of the sentence in (116a), it is appropriate to refer to the friend using they/them series pronouns (as their
pronouns of choice); we’re ignoring that reading here, to focus on another
interpretation, that in fact the friend may use gendered pronouns (let’s imag-
ine the friend is actually Kate, for whom it is appropriate to use she/her
pronouns) but the speaker finds no particular reason to include gender infor-
mation since it’s not relevant to the discussion. This latter reading is widely
available among most (although not all) speakers of English in the United
States (Ahn and Conrod, 2022), and makes an important point with regard
to sign language loci: something might rightfully be analyzed as a "feature"
restricting pronominal coreference, and yet still have some optionality for
expression and/or fluidity from one context to another.
One important difference between noun classes/gender and sign language
loci is that, as far as I know, gender isn’t allowed to be assigned differently
between two referents just because it would be convenient for disambigua-
tion. For example, without any background context the sentence in (117a) is
potentially ambiguous since the friend might be the referent for his, whereas
(117b) is most likely discussing a ranking of the friend’s favorite movies, since
my brother isn’t typically coreferent to feminine pronouns. In ASL, though,
the signer can make the choice to assign the two noun phrases to two sepa-
rate loci in order to ensure that the continuation is unambiguous, as in the
artist examples we have seen.

(117) a. My older brother and my friend discussed all of his favorite movies.
b. My older brother and my friend discussed all of her favorite movies.

That said, it is also the case that truly arbitrary uses of sign language
loci are few and far between, as shown by corpus analyses (Cormier et al.,
2015a), so we might wonder, what causes loci to be used at all? There
are two separate but possibly related answers that come to mind. For one
thing, Ahn et al. (2019) discuss the unacceptability of loci when there is no
ambiguity, especially in the case of two noun phrases that differ in animacy,
or a context in which plausibility provides clear disambiguation. This high-
lights the strong disambiguating purpose of sign language features, seeming
to take them one step further than, say, the difference we see in (116) in
English. It’s possible that a language like English may be moving in this
direction, though, using semantic gender only when useful for purposes of
disambiguation as the singular pronoun they becomes more commonly used
to intentionally not provide gender information. The second point is that
sign language loci are frequently used to depict while also disambiguating, a
dual nature that has long been noticed and subject to theorizing, and which
we focus on in the next section.

6 Loci as depictive
So far we have discussed several formal analyses of sign language loci, which
included considering them as the visible manifestation of indices in dynamic
semantics, or as semantic/syntactic features, but another view of sign lan-
guage loci emphasizes their iconic/depictive nature (in contrast to the arbi-
trariness inherently emphasized in both formal accounts). There is some
empirical motivation for highlighting the non-arbitrary nature of loci: cor-
pus studies of natural production data show that the use of loci on verbs
and/or (pro/)nominals is highly motivated (Cormier et al., 2015a), and that
loci will be established for discourse referents especially in cases when verbs
can show as well as tell something about the events in which they are in-
volved, or if they are used to depict further aspects of the referents.
Within the formal literature, this has been sometimes described as iconic-
ity in sign language loci/features (Schlenker, 2014; Schlenker et al., 2013;
Schlenker, 2018; Lillo-Martin and Gajewski, 2014), while much foundational
work on the theorizing behind the iconic uses of sign language loci has
occurred in cognitive linguistics frameworks (Liddell, 2003; Taub, 2001;
Cormier et al., 2013). An example of an iconic reflection in the choice of
sign language loci can be seen in the difference between the locus a associated
to the girl, which is at a lower place near the signer’s waist level, and the lo-
cus b assigned to the rainbow, which is much higher, presumably motivated
by the idea that the rainbow is up in the sky (118).

(118) [signed ASL example, pictured]
'She saw the rainbow up there.'

A similar example is discussed in Schlenker et al. (2013), who investigate this depictive use of loci under various linguistic contexts, finding that
like gender and noun class features, this iconic information is not at issue,
i.e. cannot be targeted by negation, among other things. We can see this
in (119), where the pronoun ix-a_high appears under the scope of negation
in the second sentence (ix-1 not understand ix-a_high), and yet the
presupposition projects: the inference still goes through that the speaker's
younger brother is tall (i.e. the tallness is not negated).

(119) (ASL, Schlenker et al. 2013)


poss-1 young brother want ix-1 rest. ix-1 not understand
ix-a_high.
‘My younger brother wants me to rest. I don’t understand him.’
Inference: the speaker’s younger brother is tall.
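A minimal sketch of why the height inference survives negation, adapting the presuppositional entry in (113) to the high locus (the particular formula here is illustrative, not Schlenker et al.'s own):

⟦ix-a_high⟧^g = ιx : depicted-at-a_high(x). g(2) = x
The second sentence of (119) is then defined only if the referent assigned to 2 (the younger brother) is appropriately depicted at the high locus, e.g. because he is tall; if defined, it is true iff the signer does not understand him. The negation targets only this latter, asserted part, so the tallness inference projects.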

A person's height, or the height of a rainbow in the sky, is an obvious kind of motivated use of space, but it is by no means the only one discussed
in sign language linguistics: many others are featured in the sign linguistics
literature including Liddell (2003) and Schlenker et al. (2013), Taub (2001),
and others. These include the subset/superset relationship of two plural
loci: a subset should be assigned to a locus in a way that the spatial ar-
rangement reflects the set/superset relationship of their referents (Schlenker
et al., 2013). Liddell (2003) also emphasizes the use of depiction within a
single locus, as when an area of the referent's body (e.g. head, torso, etc.)
can be targeted in an iconic way. The key takeaway from these tends to
be that the iconic components found in sign language loci are also mirrored
outside of sign languages, for example in co-speech gesture (Schlenker, 2014;
Liddell, 2003), suggesting that they may be picking up on extra-linguistic
(i.e. perhaps depictive) conventions and not language specific structures.
Compositionally, iconic meaning in loci shares with other presupposi-
tional information in language properties like the inability to be targeted by operators
such as negation and the availability of local accommodation, both in sign languages and in
gesture (Schlenker et al., 2013; Tieu et al., 2019). Schlenker (2021) takes
this pattern as a motivation for an analysis in which iconic information is
encoded as a presupposition that requires a correct mapping between the
world and the form. Under this view, just like there are words that stipulate
presuppositions into their meaning (for example, the encodes a requirement
for something like familiarity or uniqueness, again encodes the requirement
that something have happened before) in order to provide a truth value,
iconic loci stipulate the requirement that there is a mapping between the
form and some arrangement in the world.
One might ask why it is that iconic loci should be presuppositional, and
here we are left with the same kind of stipulative answer we have for sym-
bolic/lexical presuppositions, except that there is no way to stipulate them since there
are infinitely many; this is raised as a further mystery when it comes to
the presupposition-like behavior of co-speech gestural content (Tieu et al.,
2019). For answers, we probably want to look beyond anaphora, to semantic
approaches to iconic language that we have seen in other areas of this book.
Recall, for example, the inability of depictions to be targeted by negation in
ASL (and Japanese ideophones, and English...) that we discussed in Chap-
ter 2. There too, iconic content is ignored with respect to negation. So,
ideally we don’t want to stipulate this as a fact about the depictive nature
of loci, but rather to understand why depictions and negation interact in
this way. For now, though, the takeaway will be that loci serve (at least)
two purposes: they can be used to depict, and they can be used for disam-
biguation. Moreover, compositionally they pattern with the larger class of
not-at-issue content (one example of which is semantic gender/noun class
features), so any formal analysis of them should take care to predict these
uses. In the next section we will propose an analysis that is motivated by
their dual depicting/disambiguating nature and that has these properties.

7 Spatial restriction
The takeaway so far from looking at the formal semantic emphasis on the
disambiguating role of loci, and the emphasis from cognitive linguistics on
the depictive role of loci, brings us to the goal of trying to unify these two
uses. Here, we will make a proposal inspired by previous hybrid accounts
which include both depictive and descriptive components (e.g. Schlenker
et al. 2013) along with definite approaches to pronouns, with some conse-
quences to the analysis of semantic features generally. We begin with a
basic pronoun, in both English and ASL, which in the absence of any other
restrictions will simply pick out a referent based on some assignment func-
tion g. One approach from, say, the classic text Heim and Kratzer (1998)
is to simply say that every (pro)nominal includes an index (e.g. 2 in (120)),
and the interpretation of the pronoun simply involves looking into one's list
of discourse referents and looking up the reference based on this index,
e.g. g(2). In short, the reference comes from the (value of the) index, as
we see for English (120a) and ASL (120b). On top of that, English has an
additional restriction that the referent be appropriately referred to using
that feature (e.g. female, for she/her pronouns in this case) (120a). One big
question is: what role should the locus a play in (120b)?

(120) a. ⟦she_2⟧^g = g(2) (only defined if g(2) is appropriately referred to using the fem series pronouns, undefined otherwise)
b. ⟦ix-a_2⟧^g = g(2) + ??

In the classic proposal put forth by Lillo-Martin and Klima (1990), the
index is a visible version of the variable, so that instead of the complicated
(120b) above, we can simplify to (121). Here, instead of the arbitrary index
2 assigned to the discourse referent, the index is made visible by the use of
locus a, so roughly we need only track one thing, this index-as-locus.

(121) ⟦ix-a⟧^g = g(a)
'The referent assigned to this index a'

Alternatively, we can also accept something like a "locus as variable index" yet implemented in a non-dynamic approach to pronouns, in which
case the locus/index contributes a restriction, as in (122a), potentially with
other restrictions in the noun phrase, such as that provided by a noun, as in
(122b). Esipova (2019b) and Ahn (2019b) both provide discussion and
arguments regarding why an index-like restriction (e.g. x = g(a)) would
be not-at-issue while the nominal restriction (e.g. x is a student) remains
at issue.

(122) a. ⟦ix-a⟧^g = ιx[x = g(a)]
'The unique individual which is equal to the referent assigned to
this index a'
b. ⟦ix-a student⟧^g = ιx[x = g(a) ∧ x is a student]
'The unique individual which is a student and is equal to the
referent assigned to this index a'
There are many questions of interest to formal semanticists in this area.
One of these relates to whether sign language pronouns are a convincing data
point in favor of analyses that require an index, as suggested in
Schlenker (2011). It has been debated for quite some time whether one
really needs assignment functions as part of the linguistic machinery; argu-
ments in favor come from complex structures accounted for through dynamic
semantics, many of which are very elegant (Heim, 1982; Kamp and Reyle,
1993; Chierchia, 1995; Mandelkern and Rothschild, 2020); arguments against
come in significant part from parsimony, i.e. if it is possible to account for
data without introducing this kind of mechanism, then don’t introduce it.
These broadly argue that the same phenomena can be accounted for with-
out introducing variables into the system (Jacobson, 2007, 2016; Barker and
Jacobson, 2007; Elbourne, 2002). A pronoun can instead get its value in
the same way that a definite description like the student does, that is, by a
contextual restriction to the right kind of situations (123).

(123) a. ⟦the student⟧^s = ιx. x is a student and x is in the situation s
'the unique student in the relevant situation'
b. ⟦she⟧^s = ιx. x is referred to using female pronouns and x is in the situation s
‘the unique individual in the relevant situation, who is appropri-
ately referred to with female pronouns’

With respect to sign language loci, we can give a non-dynamic formulation along the lines of the proposals in (122) above as in (124) below, where we
leave out an assignment function and instead restrict the definite description
via some relation R to the locus.

(124) a. ⟦ix-a⟧^s = ιx. x is in the situation s and R(x, a)
'the unique individual in the situation, related via R to locus a'
b. ⟦ix-a student⟧^s = ιx. x is a student and x is in the situation s and R(x, a)
'the unique student in the situation, related via R to locus a'

The big question under this view is: what meaning comes with “related
via R to locus a”? As mentioned above, “semantic” gender marking in a
language like English is often contrasted with what seems like a more arbi-
trary use of loci in a language like ASL since choice of locus for a particular
referent changes across different contexts. However, we’ve also seen that
this distinction is not a hard and fast one, given the context dependence
of features in English as well, illustrated in (116). Ahn (2019a,b) argues
that the locus contributes a locational restriction, something like “having
the property of being assigned to the location a”. The proposal put for-
ward here is something very much along these lines, with extra inspiration
from Conrod (2019)’s approach to gendered pronouns, which argues that
the meaning of gender features on pronouns is essentially a use condition,
i.e. that it is appropriate in the context to refer to someone with female
pronouns. In ASL, we can say that R(x, a) holds just in case it is appropriate (sometimes for depictive reasons, sometimes for the pure pragmatic purpose of contrast) to refer to someone with that locus (125a);
the dynamic version is provided for comparison in (125b).

(125) a. ⟦ix-a⟧^s = ιx. x is in the situation s and R(x, a)
(where R(x, y) holds iff it is pragmatically appropriate to use location
y for x)
b. ⟦ix-a⟧^g = ιx[x = g(a)]
'The unique individual which is equal to the referent assigned to
this index a'

It is interesting that from the perspective of semantics, there is a minimal difference between the role that the locus plays in the "locus as variable"
theory and the “locus as restriction/feature” approach: in one, we simply
require that x = g(a) (x is what the assignment function assigned to locus a)
(125b); in the other, we simply require that there be a recoverable relation
R(x, a) that holds between the locus and its referent. The implications for
the theory of sign linguistics are that we are hitting on something right about
these approaches: a relation (e.g. g/R) relates the locus directly to the
referent in a way that is recoverable by the conversational participants. The
implications for our theory of semantics more broadly are that sign language
loci do not distinguish so clearly between any two particular approaches to
this problem in formal semantics (contra Schlenker 2011, but in the spirit
of Schlenker 2016). How could we distinguish between them? To the extent
that multiple discourse referents can be assigned to the very same loci (e.g.
R(x, a) and R(y, a) as argued in Kuhn 2016), a feature-like account seems
to give us better empirical coverage, since assignment functions like g by
definition assign only one discourse referent per index. In other words, if
both a person and their partner, or both a person and the city that they
live in, can be associated to the same locus, then this suggests that some
method of association R(x, a) is the right way to think about this anaphoric
restriction, instead of directly linking a locus to a particular person, place,
or thing that is being tracked in a discourse, though even then the implications are
pretty minimal from the perspective of arbitrating between possible theories
in general.
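To make the contrast concrete, here is a minimal computational sketch (purely illustrative; the names and data structures are invented for the example and are not part of any of the analyses above) of the difference between an assignment function g, which by definition pairs each index with exactly one referent, and a relation R, which can associate several referents with one locus:

    # Illustrative sketch only: an assignment function pairs each index with
    # exactly one discourse referent...
    g = {"a": "Gia", "b": "Lex"}

    # ...while a relation R is just a set of (referent, locus) pairs, so several
    # referents (e.g. a person and the city they live in) can share one locus.
    R = {("Gia", "a"), ("the city Gia lives in", "a"), ("Lex", "b")}

    def referents_at(locus, relation):
        """All discourse referents appropriately associated with a given locus."""
        return {x for (x, loc) in relation if loc == locus}

    print(g["a"])                 # 'Gia' -- exactly one value per index
    print(referents_at("a", R))   # {'Gia', 'the city Gia lives in'} -- possibly several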
Another clear advantage of approaching indexical pointing in sign languages through the definite lens (i.e. not directly using the indexing function g but rather the ι operator, also seen in the semantics of the English definite article the) is that from the semantic perspective it naturally extends to locative and demonstrative readings (Koulidobrova and Lillo-Martin, 2016; Ahn, 2019b), all of which are expressed via the same pointing sign in many sign languages, in-
cluding American Sign Language. Under this system, something can be asso-
ciated with a particular locus for motivated/depictive reasons (known in the
sign language lingusitics literature as “topographical space”, Liddell 2003)
or for discourse anaphoric reasons, and that difference need not be encoded
in the meaning contributed by the indexical sign. It also collapses pronominal (ix-a) and adnominal (ix-a student) uses of the pointing sign, common in many languages including English demonstratives
(that, that student).
The broad insight is that the kinds of semantic features we see in some
spoken language pronominal systems are actually not all that unlike the use
of loci in sign languages. For example, traditionally gender class features
were considered a limited category but recent linguistic innovations by lan-
guage communities to reflect a wider variety of gender expressions show that
they are only limited by the categories we choose to make of them. Sign
language pronouns can capitalize on multiple locations in the same way.
Moreover, in the same way that pronoun gender choices can be more or less
arbitrary, so can loci: these can be chosen to reflect iconic relationships like
height of tall people, honorifics, etc., but may also be arbitrary and merely
contribute pragmatic contrast/distinction for disambiguation, just like spo-
ken language gender, which reflects classes that we perceive in the biological
and social world but can be deployed for language in ways that are sensitive
to pragmatic context. Finally, we note that encoding the locus either as an
index for an assignment function or as a restriction in a definite noun phrase
in a given situation ends up being nearly equivalent given some assumptions
about the structure of definite noun phrases, an ultimately reassuring way
of viewing a longstanding debate between “variable” and “definite”/e-type
approaches to anaphora and the contribution to this debate from sign lan-
guages.

8 Conclusions
The use of space in sign language discourses is one of the most well-studied
areas in sign language semantics/pragmatics, in part because it so beau-
tifully integrates descriptive and depictive elements of language in a dis-
course. For years, formal approaches to sign languages have been interested
in modeling the use of space, famously analogizing it to dynamic indices
(Lillo-Martin and Klima, 1990) or to semantic features like gender (Neidle,
2000; Kuhn, 2016). One conclusion of the formal discussion here is that
under an approach to pronouns in which they share significant morphosyn-
tactic structure with other definite noun phrases like demonstratives (that)
and definite phrases (the student), these distinctions are hardly significant:
both involve semantic restrictions on reference. Furthermore, the “limited
categories" view of gender features has become increasingly outdated, as
we see the proliferation of pronoun features to reflect an increasingly nu-
anced view of gender in spoken languages like English (and many others),
requiring a flexible semantics (Conrod, 2019). This semantics can be min-
imally adjusted to reflect many depictive uses of space in sign languages
while retaining a descriptive function (restriction in a definite noun phrase),
as exemplified in the sketch put forward in Section 7.
Many important questions remain. First, we have only touched in very
limited ways on the difference between loci on noun phrases and on verbs. In
some syntactic theories, such as Minimalism (Chomsky, 2014), these are quite
asymmetric, and more work should be done to understand whether and how sign
languages exhibit agreement in the classic sense, discussed in depth in Lillo-
Martin and Meier (2011) and Pfau et al. (2018). More should also be said
about when loci are overt and covert, and how to compare them with clitics;
in this area, more careful studies of language change and processes in which
pronouns change into clitics which change into agreement forms are clearly
going to be useful, as they have been in spoken languages (Culbertson, 2010).
Finally, this chapter argued for a hybrid way of thinking about the de-
pictive and descriptive elements of sign language loci, but like other hybrid
accounts (e.g. Schlenker et al. 2013; Schlenker 2014) it does not directly pro-
vide an account for how the depictive aspect (including but not limited to
topographical space) works. This is mostly because depiction is processed
through a different stream: not through conventionalized linguistic struc-
tures but through cognitive processes involved in picture/image processing,
and thus linguists will have limited expertise to lend to this question except
to make references to how we convey meaning through depiction (Greenberg,
2013; Camp, 2018; Fodor, 2007). That doesn’t mean it isn’t deeply impor-
tant though, and cognitive linguists in particular have more to say about
the integration of language and depiction that one hopes can be integrated
more fully into the formal account of the descriptive properties as given in
this chapter. In the next chapter we turn to perhaps an even clearer way
that depiction integrates with description.
5 Classifiers, role shift, and demonstrations

In several previous chapters, we have touched on ways that symbolic vs. depictive elements of sign languages contribute meaning in separate ways.
Symbols can be involved in semantic computations based on hierarchical
syntactic structures and support the creation of propositional alternatives.
In contrast, depiction in languages involves the same processes involved in
picture interpretation within and outside of language (Clark, 2016; Dinge-
manse, 2015; Ferrara and Hodge, 2018; Kita, 1997), and as we have seen,
contributes to our representations for events but is unable to directly be
involved in the generation of propositional alternatives. Description and
depiction are abundant in both signed and spoken languages when people
are co-present, yet they are often ignored in the linguistics of spoken lan-
guages since there are limited ways to represent depiction in text, and as
we’ve seen, depictions are limited in the ways that they interact with sym-
bolic structures. However, in this chapter we will focus specifically on two
well-studied ways that depictive meaning seems to interact directly with
symbolic aspects of sign languages: depictive classifier predicates, and role
shift/constructed action. Although these topics have often been studied sep-
arately within the larger sign language linguistics literature, the argument
here is that we should ultimately think of them as arising through the same
process involving demonstrations, and so we discuss them both in this
chapter, beginning with depictive classifiers, then moving onto role shift,
and finally ending with some examples that have properties of both.

1 Depictive classifiers
We will be using the phrase "depictive classifiers" to refer to a wide class of
signed expressions found in nearly all of the world’s sign languages (Zwit-
serlood, 2012; Emmorey, 2003) which are sometimes also called “depicting

[Figure 5.1: Depictive classifiers]

verbs", "classifiers", "classifier predicates", or other related phrases depending on the features being highlighted. These expressions take their hand-
shapes from a conventionalized inventory that varies from sign language to
sign language, with each handshape corresponding to a semantic class. Since
the use of meaningful forms for semantic classes of objects is similar to noun
classes in spoken language classifiers, the name “classifier” has frequently
been used in formal sign linguistics literature. Comparisons can be made
to nominal classifiers/noun classes in many East Asian languages like Man-
darin Chinese and even more directly to verbal classifiers in languages of
North America such as Navajo. For example, in American Sign Language,
the handshape is used for the semantic class of vehicles (cars, trucks,
etc.), while the handshape is used for the semantic class of humans,
or long thin objects like pencils, etc. Handshapes can then combine with
movements and locations that are interpreted as depictive, that is, they are
used with movements and locations that iconically illustrate movements and
locations of the objects classified by the handshapes. We can see three ex-
amples in Figure 5.1, each of which shows a handshape for an object class
(4 for lines of people, b2 for small animals, 3 for vehicles) being
used to depict the spatial arrangement (e.g. of students or cars lined up) or
movements (e.g. of a cat jumping).

The semantics of these expressions has been of interest since some of the
earliest scientific attention to sign languages: Klima and Bellugi (1979) note
that the handshape acts as a type of “pronoun”, analogizing the gender clas-
sification available for pronouns in languages like English and French to the
classes of handshapes available in classifier predicates. They also note that
these seem to have a depictive element, as in, for example, the movement
of a car up a hill which can illustrate a meandering manner of movement,
emphasizing the dual symbolic/iconic nature of classifier predicates.
Although the dual symbolic/iconic nature of classifiers has long been
underlying discussion of these “depicting classifiers” in sign languages and
is the view that we will work from here, there has also been notable push-
back to this kind of hybrid symbolic/iconic analyses from both the symbolic
and iconic directions. One perspective has emphasized the highly sym-
bolic/linguistic nature of these elements: Supalla (1983) rightfully notes
that in principle any apparently gradient/analog form can be broken down
into many (increasingly small) discrete morphemes. So, what might look
like a holistic meandering movement can be broken down into many small
discrete movements. From the linguistic perspective, doing so makes the
representation of these structures much more complex: instead of an iconic
arc, one would have to use many morphemes expressing changes in the arc,
i.e. upward, then over, then upwards some more, etc. This tradeoff in com-
plexity might be viewed as favorable if one is committed to the notion that
language expresses only discrete meaning-form mappings. In other words,
one might motivate increasing the number of morphemes present in depictive
classifiers by trying to preserve a parallel between spoken English words and
sign language manual signs. However, many other researchers have noted
that there is no need to do this to preserve unity between the signed and
spoken mode, as spoken languages include abundant analog depictions as
well (Clark and Gerrig, 1990; Clark, 1996, 2016; Dingemanse, 2012, 2015;
Dingemanse and Akita, 2017; Kita, 1997; Davidson, 2015; Maier, 2018).
Given the increasing appreciation for the existence of depiction in spoken
language, we seem to lose any motivation for keeping the components of
depictive classifiers discrete symbols; depictive classifiers can include analog
components and still be “language.”
At the other end of the spectrum, some researchers have emphasized the
depictive nature of these expressions, as in work by Liddell (2003); Perniss
et al. (2010); Taub (2001); Cogill-Koez (2000a,b) and others. In an impor-
tant way this of course seems exactly right: clearly these are used when there
is intent to depict directly/iconically instead of (only) describe symbolically,
and it is unfortunate how often the depictive elements of depictive classifiers
have been dismissed. However, an extreme version of this hypothesis ends
up also dismissing the symbolic nature of some of their components, such as
the handshape, and their compositional status as predicates in sign language
sentences. This ends up being not only wrong in terms of linguistic analysis but
can also, more practically, lead to a lack of appreciation for the complexity
of sign languages and the achievements involved in acquiring them.
While it’s good scientific practice to make sure that all ends of the ide-
ological spectrum are explored and tested, experimental evidence as well
as recent theoretical analyses appear to support a dual symbolic/iconic ap-
proach to depictive classifiers. Emmorey and Herzig (2003) used a controlled
experimental setting to compare the interpretations of depictive classifiers
by deaf signers of ASL (who have familiarity with both conventionalized and
nonconventionalized aspects of the language) and hearing non-signers (who
presumably lack familiarity with the conventionalized aspects of the lan-
guage), to determine what aspects of their forms are interpreted as categor-
ical and gradient by each of these groups and to see if the role of language ex-
perience affected these judgments. They found that handshape selection, as
well as a few modulations of the handshapes (handshape sizes, figure/ground
uses, etc.) seem to be interpreted differently between Deaf ASL signers and
hearing nonsigners, suggesting that ASL signers’ interpretations are influ-
enced by conventionalized categories since they treated some differences as
discrete categories, whereas nonsigners did not. In contrast, other more de-
pictive aspects (dot placement, for example) were interpreted similarly by
both groups as not discrete but analog/continuous. They take these results
to support a distinction between symbolic and categorical interpretation
of handshapes on the one hand, and iconic and gradient interpretation of
movements, locations, and modifications to these handshapes on the other
hand. On the theoretical side, Zucchi (2018) provides a detailed discussion
of the tradeoffs between discrete and analog analyses of classifier predicates
in light of larger questions about gesture-like meaning in both spoken and
sign languages, arriving at the same conclusion that sign language classifier
predicates contain both discrete symbolic components (in the choice of hand-
shapes) and analog depictive components (in the location and movements
used).

2 Classifier semantics
Building from the idea that depictive classifiers convey meaning in two ways,
in part via conventionalized symbol and in part via iconic depiction, we
will adopt a formal semantics in which the handshape is handled just the
same way as semantic features like gender and noun classes (as potentially
infinite but still discrete and symbolic) and the depictive component is an
event demonstration (Zucchi, 2012; Davidson, 2015; Zucchi, 2017). We will
walk here through a possible formalization of this intuition. First, classifier
handshapes are conventionalized and provide meaning via their symbolic
nature. For a classifier in which a 3 handshape is used, it should
require that we are discussing an event that involves a vehicle; in contrast,
if a b2 handshape is used, it should require that the event involve an
animate being that moves (in this case an animal). Importantly, depictive
classifiers as a whole are used to describe events/states, i.e. they don’t
describe something as a vehicle or as a cat; the descriptions of the objects
themselves came in the subjects of the sentences in Figure 5.1 (cat, car,
etc.). Rather, depicting classifiers are predicates, which are used to depict
events in which vehicles, or cats, are participants. So what we actually want
our semantics for the animate (b2) handshape to model is that there is an
event in which an animal participates. The semantic role that the animal
plays in this event is a theme, so we will want to have a function that is
true only of events in which the theme is an animal (in this case, inclusive
of humans) (126).

(126) ⟦b2⟧ = λv∃x : [animal(x)]. theme(v, x)


lit. a function that takes events and returns TRUE if there is a
theme of the event and (as background) that theme is an animal

The restriction encoded in (126) seems to be a presupposition, just like


gender and other noun classes, so we note it as a presupposition, preceding
the period. So far, this is entirely symbolic meaning, something which we
could imagine a language encoding in an entirely arbitrary way, exactly as
is done in verbal classifiers of the world's spoken languages that have verbs
with noun class markers for animals and humans (Zwitserlood, 2012).
Things get even more interesting when we explore the depictive side of
classifier predicates. Under a Supalla (1983)-type analysis, we might ana-
lyze each small aspect of the depiction as individual morphemes. Under a
Liddell (2003)-type analysis, we might not be interested in giving a formal
analysis at all, since the depicting verb is just that: a depiction that cannot
participate in compositional semantics. Under a mixed analysis, we can use
the notion of a demonstration from Clark and Gerrig (1990) to capture the
depictive aspect and how it integrates with the symbolic component (David-
son, 2015; Zucchi, 2017). In this view, some events are special because they
are demonstrating events, which stand in the demonstration relation with
other events, just as one individual can stand in a relation with another indi-
vidual (e.g. a parent p of a child c: if the relationship holds then parent(p, c)
returns TRUE). So, for example, we can have one event of two cats facing
each other, and we can also have a second event of a person, say the signer,
demonstrating how the cats were sitting. If one event, which we can name
e_2, is an accurate demonstration of another event, which we can name e_1,
then we can say that e_2 demonstrates e_1: demonstrate(e_2, e_1). We can then
give a function for precisely those events which have animals as their theme
and are demonstrated by the depicting event e_2 (127). (For now we'll ignore
that there are two cats but we revisit plurality in classifiers in Chapter 7.)

[Figure 5.2: An event of two cats sitting, and an event of demonstrating it]

(127) ⟦[pictured b2 classifier production]⟧ = λx∃v : [animal(x)]. theme(v, x) ∧ demonstrate([pictured b2 classifier production], v)
lit. a predicate which takes individuals (which have to be animals)
and returns TRUE if there is an event in which they are the theme
and which can be demonstrated by the pictured signing

The most notable feature of this analysis is the fact that there is a pic-
ture on the right hand side of the equation, in fact, a picture of the very
same expression that we say we are analyzing in terms of meaning. But
this isn't circular! Rather, the demonstrate predicate requires an event of signing (the pictured production)
as one of its arguments, in particular a depicting language event, and we
can find one of those in the sentence itself, i.e. from the “form” side of the
equation. Technically, any linguistic expression might be a demonstration
of something; in fact, that’s exactly how we’re going to want to analyze
quotations! But, for now, it is clear that some linguistic expressions are es-
pecially intended to depict something, and depicting verbs are an especially
clear example of that. We encode this via the demonstrate relation, which
takes the event of communicating (which we are representing in the black
and white pictures, just as we might use a different font for the quotational
use/mention distinction in English) and relates it to the event of cats facing
each other.
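As an illustrative composition step (the bare subject cat and the way it saturates the predicate are expository assumptions, not part of the analysis as stated), combining the denotation in (127) with the subject of the sentence in Figure 5.1 gives:

⟦cat [pictured b2 classifier production]⟧
= ∃v : [animal(the cat)]. theme(v, the cat) ∧ demonstrate([pictured b2 classifier production], v)
i.e. there is an event whose theme is the cat, the presupposition that the theme is an animal is satisfied, and the pictured signing is an accurate demonstration of that event.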

3 Argument structure of depictive classifiers


The demonstration semantics given so far is quite simple, but depictive
classifiers can be quite complex: often there are multiple objects and the ar-
rangement and interaction of those objects is part of what is being conveyed
in the depiction. Therefore, we want to understand these more complicated
classifiers in terms of their argument structure, and to ask what kind of
variation there might be within depictive classifier types. For example, in
the so far very simple analysis we have given for depictive classifiers, hand-
shapes all seem to have the same status: they simply restrict the kinds of
agruments that the depictive classifier predicate can take. In addition, the
argument has generally been a theme of the depicted event. However, there
is intriguing evidence that different classes of handshapes seem to give rise
to different argument structures.
An important development in the study of depictive classifiers was the
observation that different classes of classifier predicates seem to take different
arguments, and a formal analysis that models these differences, by Benedicto
and Brentari (2004). Benedicto and Brentari (2004) argue that one type of
depictive classifiers, known as whole entity classifiers, which are the sort
given in Figure 5.1, all seem to simply take one argument, a theme, but
one that acts like a syntactic subject, external to the verb phrase. They
note that these contrast with another type of classifier, handling classifiers,
which introduce two arguments: the object being handled, and the handler,
which tend to be semantic themes (the objects being handled) and semantic
agents (the handlers). Furthermore, Benedicto and Brentari give arguments
that the theme argument of the handling classifiers is different than the
theme argument of whole entity classifiers, by investigating the scope of
other operators like distributivity and negation. An example is (128): the
negation expressed with nothing scopes over the theme argument but not
the agent in the case of the handling classifier in (128b), whereas it fails to
scope over the theme argument in the entity classifier in (128a). Since both
are themes, this is evidence in favor of a syntactic/hierarchical distinction
between the theme arguments of these kinds of classifiers.

(128) (ASL, Benedicto and Brentari 2004)


a. book b+move nothing
‘None of the books fell down (on its side)’
b. ∅ book c+move nothing
‘They (singular) didn’t put any book down (on its side).’
* Nobody put the book down (on its side)
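A rough way to schematize the difference in argument structure, using the demonstration notation from above (a sketch, not Benedicto and Brentari's own formalization, and abstracting away from the external/internal syntactic distinction their tests diagnose):

whole entity classifier: λx∃v. theme(v, x) ∧ demonstrate(d, v)   (one argument: the theme)
handling classifier: λxλy∃v. theme(v, x) ∧ agent(v, y) ∧ demonstrate(d, v)   (two arguments: theme and handler/agent)
where d stands for the depicting production itself.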

These are not even the only classifier types: Benedicto and Brentari
(2004) also discuss body part/limb classifiers, which they argue using similar
tests of negation, distributivity, etc. introduce a single internal argument;
this can likely be extended as well to whole “body classifiers” discussed
later in the section on Role Shift. Clearly there are many possibilities for
cross-linguistic research on this topic in sign languages between classes of
handshapes and the possible arguments that they can introduce. Another
issue that clearly deserves more study is the syntactic status of the (non-
linguistic) depictions introduced by the demonstrate relation. In one effort,
5. CLASSIFIERS, ROLE SHIFT, AND DEMONSTRATIONS 120

Quadros et al. (2020) provide two suggestions for syntax/semantics of de-


pictive classifiers that also introduce demonstration arguments and roughly
follow syntactic structure suggested by Benedicto and Brentari (2004), but
basic questions remain: is this demonstration an argument or an adjunct?
Is it obligatory or optional? Furthermore, how well this picture will hold up as data from more sign languages are incorporated remains to be seen; examples include Sevgi (2022), who provides examples from Turkish Sign Language (TİD) that illustrate interactions of different classifier classes simultaneously.

4 Classifier pragmatics
Depictive classifiers are a beautiful example from the perspective of seman-
tics and semiotics of both symbolic language and iconic language composing
in regular compositional ways: handshapes express symbolic restrictions
while movements and locations can iconically depict. Given the discussion
in Chapter 2 about the ways that depictions resist participating in question-
answer structures, we may ask how depictive classifiers participate in prag-
matic calculations of various sorts. As one example, expressions related to
each other in terms of logical strength, such as some/all, are well known to
lead to pragmatic inferences known as scalar implicatures: if we use a positive statement with the weaker form (e.g. English some or ASL some), this tends to lead
to the inference that the statement with the stronger term would be false
(129).

(129) a. Some of the cookies broke. (English)


b. cookie, some break (ASL)
Both implicate:
c. Not all of the cookies broke.

Implicatures are defeasible inferences: note that you can say in English I
ate some of the cookies, in fact, I ate them all, and in some contexts the
weak term doesn’t negate the strong one, as in If you’ve eaten some of the
cookies, you know how good they are, both of which are used to argue that
the scalar implicature comes about through reasoning about alternatives,
rather than being encoded in the conventionalized semantics of some (this
is true whether or not the theory takes that reasoning over alternatives as
extralinguistic or as grammatically encoded, see Chierchia 2017).
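As a toy illustration of that reasoning over alternatives (a simplified sketch, not a model of how speakers or signers actually compute implicatures; the names here are invented for the example), an implicature can be treated as the negation of any strictly stronger alternative that went unasserted:

    from itertools import chain, combinations

    # Toy sketch: worlds are modeled as sets of broken cookies; propositions are
    # functions from worlds to booleans.
    cookies = {"c1", "c2", "c3"}
    worlds = [set(w) for w in chain.from_iterable(
        combinations(cookies, n) for n in range(len(cookies) + 1))]

    def some_broke(w):
        return len(w) > 0          # the weaker, asserted statement

    def all_broke(w):
        return w == cookies        # the stronger alternative

    def strictly_stronger(assertion, alt):
        """alt entails assertion in every world, but not the other way around."""
        entails = all(assertion(w) for w in worlds if alt(w))
        back = all(alt(w) for w in worlds if assertion(w))
        return entails and not back

    # Asserting the weaker form implicates the negation of any stronger alternative.
    implicated_false = [alt for alt in [all_broke] if strictly_stronger(some_broke, alt)]
    print(len(implicated_false))   # 1: "all of the cookies broke" is implicated to be false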
As a result, conversational participants clearly keep track of seman-
tic/pragmatic alternatives in given scenarios, and reason about them. In
fact, inferences similar to scalar implicatures follow from any ordering, even
one created ad hoc instead of on a conventionalized scale like some/all:
consider that, if asked to describe what is on a table, a response have
[Figure 5.3: Stimuli from study on ad hoc implicatures using depictive classifiers in American Sign Language (Davidson, 2014)]

billfold, globe 'There is a wallet and a globe' is typically interpreted as saying that there is only a wallet and a globe. Generally, if there were
more things there, the speaker should have said so! This kind of inference
is known as an ad hoc implicature, and in fact young children succeed on
this kind of a task even when they show non-adult like behavior with clas-
sic scalar implicatures like those arising from the some/all contrast (Stiller
et al., 2015). With respect to sign language linguistics, Davidson (2014)
investigated scalar implicatures in American Sign Language and in English,
including in both languages classic scalar implicatures with some/all and
ad hoc implicatures. In this study, the ASL sentences used to prompt ad hoc
implicatures additionally made use of depictive classi-
fiers, conveying not just which objects were included but also their positions
on a table (see Figure 5.3), with the goal of understanding how depiction enters
into pragmatic calculations. The result was that scalar implicatures were
stronger in ASL with depictive classifiers than in the comparison English
ad hoc scales without depiction, even when the classic scalar implicatures
(based on some/all) were the same in both languages. Thus, something
about adding a depiction led to a stronger “exhaustive” interpretation, i.e.
that the description must be a complete one and no relevant alternatives
went unmentioned.
We might have thought that the effect of depiction on the pragmat-
ics of an utterance is that it is simply unrelated: as we have been saying
throughout this book, depiction seems to be resistant to the construction
of propositional alternatives. It would have been a natural conclusion, per-
haps, that pragmatic mechanisms in natural language deal only with these
propositional alternatives and the logical strength of utterances within that
domain. On the other hand, this finding suggests that the depictive compo-
nent does seem to affect pragmatics, such that an incomplete depiction seems to
be less acceptable. We might look toward the underlying QUDs as an expla-
nation: depictions may be addressing a question about what things looked
like, while non-depictive lists might address a different question more fo-
cused on quantity that prioritizes exhaustive answers. In any case, it seems
that the relationship between depiction and description within pragmatics is
just getting started, and ideally can shed light on areas of spoken languages
as well where the pragmatics of descriptive and depictive content seem to in-
teract, as in ideophones (Kita, 1997) and co-speech gesture (Esipova, 2019a;
Zlogar and Davidson, 2018; Alsop et al., 2018). In fact, this issue arises in
any domain in language in which we see depiction affecting aspects of the
at-issue assertion; the same will be the case with the linguistic structures
that we turn to next.

5 Role shift
Another well studied area in sign language linguistics in which we see depic-
tion clearly interact with description is in the reporting of others’ thoughts,
words, and even others’ actions. An example from Padden (1986) is pre-
sented in (130), in which the signer "shifts" eye gaze, direction, and
other nonmanuals while expressing an attitude (really ix-1 not mean)
attributed to another’s perspective (husband), represented in text as in
(130). A similar example can be found in (131a), where an attitude (ix_1
busy) is expressed from the perspective of someone else, here, Alex. When
glossed, the "shift" can be represented as a line marked with RS (131b).

(130) (ASL, Padden 1986)


rs-a
husband-a really ix-1 not mean
‘The husband goes, “Really, I didn’t mean it.”’

(131) a.

[signed ASL example, pictured]
‘Alex was like, “I’m busy”’
RS
b. fs(Alex) ix_1 busy
‘Alex was like, “I’m busy”’
As with depictive classifiers, terminology in this domain in sign linguistics tends to depend on whether the descriptive or depictive aspects are
highlighted in the analysis. For example, constructed action and role shift
are terms with slightly different meanings that have significant overlap, both
describing the narrative technique of a speaker taking on the perspective and
characteristics of another character. Cormier et al. (2015b) provide a de-
tailed discussion of both terminology and naturally produced examples from
corpora. Within formal semantics, role shift has tended to be the preferred
term because so much of the focus has been on the semantic effect of the
perspective shift that they bring, so we will focus on that here as well, but
ultimately we end up with an analysis viewing this as a depiction, sharing
much in common with analyses that use the term constructed action.
The view that we will take in this chapter is that role shift in sign
languages shares the same semantics with structures that involve depiction
by demonstration both in spoken language stories and narratives and also
sign language depictive classifiers. The parallelism between role shift and
similar constructions in spoken languages is already present in early formal work by
Padden (1986) and Lillo-Martin (1995) in sign linguistics, and the view of
reported speech as demonstration originates from Clark and Gerrig (1990);
the view that role shift is a subset of sign language classifiers is also found
in Supalla (1983), so all of the perspectives in this chapter are quite old and
persistent, though the implementation via formal semantics is more recent.
First, let’s note that relative to spoken and signed language, written
language is more limited in expressing the depictive nature of constructed
action/role shift, but we can see remnants in written quotation: the sen-
tence Alex was like, "I'm exHAUSted!" not only describes an event of
which Alex was a participant, but also depicts aspects of Alex’s behavior, in
particular, the way that he said what he said. We can actually view all clas-
sic quotations this way: Alex said, “I’m happy” can be viewed as an event
of saying in which Alex was a participant, and the quotation “I’m happy”
depicts/demonstrates aspects of the event, namely, the way that the words
were said. This is precisely the view of quotation given by Clark and Gerrig
(1990), who analyze written and spoken language quotations as demonstra-
tions, highlighting their depictive nature. Davidson (2015) formalizes this
intuition of quotation as demonstration, and extends it to depictive classi-
fiers (as we already saw above, similar to the analysis by Zucchi 2017) and
to sign language role shift, which we focus on in this section. Subsequent
work by Maier (2017, 2018) provides further evidence in support of this
framework for both signed and spoken language quotation/role shift.
Under the demonstration view of quotation, a quotative sentence such
as Alex said, “I’m happy” is true in those possibilities in which there is a
saying event of which Alex was an agent (λw∃v. saying(v) ∧ agent(v, Alex)),
which accounts for the Alex said part of the quotation. It will, furthermore,
require that the quoted speech “I’m happy” is an accurate (in some relevant
way) demonstration of the original event, leading us to (132).

(132) ⟦Alex said, "I'm happy"⟧ =
λw∃v. saying(v) ∧ agent(v, Alex) ∧ demonstrates("I'm happy", v)

Note that just like how we had the black-and-white pictured version of the depictive
classifier on the right hand side of the equation in (127) because the linguis-
tic form is a demonstrating event, similarly in (132) the language I'm happy
appears on the right side of the equation, because that use of language is
an event which is a demonstration of another (in this case, the Alex saying
event).
One of the strongest motivations for formalizing quotation in this way
is to highlight its iconic nature and the lack of real boundaries between
written quotation and more obvious demonstrations common in spoken lan-
guage, such as Alex was like, “I’m exHAUSted!”. These depictively iconic
demonstrations can be formalized in a completely parallel way, as in (133),
returning a proposition in which there was an event in which Alex was an
agent, and which is demonstrated accurately by “I’m exHAUSted”.

(133) ⟦Alex was like, "I'm exHAUSted!"⟧ =
λw∃v. agent(v, Alex) ∧ demonstrates("I'm exHAUSted!", v)

A further advantage is that the same analysis can also naturally apply to
classic cases of role shift/constructed action in sign languages, in which the
words and other mannerisms of another character are demonstrated, such
as in (134), where one’s thoughts/words are demonstrated. They can even
be extended to the kinds of sentences in (135), where the giving action (not
an attitude!) is demonstrated.

(134) ⟦[signed ASL example, pictured]⟧
= λw∃v. agent(v, Alex) ∧ demonstrates([the pictured role-shifted signing], v)
'Alex was like, "I'm tired"'

(135) ⟦[signed ASL example, pictured]⟧
= λw∃v. agent(v, Alex) ∧ giving(v) ∧ demonstrates([the pictured role-shifted signing], v)
'Alex was giving it away like [this]'
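One way to summarize the common shape of (132)-(135) (a descriptive generalization over the formulas above, not an additional ingredient of the analysis):

λw∃v. Description(v) ∧ demonstrates(d, v)
where Description(v) collects whatever descriptive material the sentence supplies (saying(v), agent(v, Alex), giving(v), ...), and d is the quoted or role-shifted production itself.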

On top of connecting the depictive elements of written quotation, spoken


demonstration, and sign language role shift through the notion of a demon-
stration, this approach also connects sign language role shift with depictive
classifiers, which were also analyzed as involving demonstrations, as we saw
above. This is somewhat unusual from the perspective of recent sign lan-
guage linguistics, in which the two topics have been largely disconnected
(see, for example, separate overview chapters in edited collections such as
Brentari 2010 and Pfau et al. 2012) but is in line with foundational work
by Supalla (1983), who discussed “body classifiers” in this category of role
shift and constructed action, which maps nicely onto the parallel semantics
to depictive classifiers.
The core intuition behind the demonstration view of role shift/constructed
actions is that they are ultimately depictive, just like depictive classifiers,
and yet interact with descriptive language in regular ways that can be mod-
eled in formal semantics. Importantly, the depictive aspects include the
use of expressions in the same way that they were used by a character,

e.g. the use of the first person because the character used it in that
way, just as expressions that were used by a character are demonstrated by
the speaker/signer. Since a lot of attention has been paid to how and why

context-dependent expressions like ‘I/me’ are used in role shift and


in constructed action in sign languages, especially within the field of sign
language semantics, we will turn to them next in more depth.

6 Interpreting indexical expressions

Indexical expressions (e.g. English I, here, tomorrow, ASL , ,


etc.) have long captured the attention of philosophers and formal seman-
ticists for having a meaning that depends directly on the context. For ex-
ample, the meaning of here depends on who is speaking and where they
are in a way that London does not: the former will pick out the location
of the person saying here (it could be London if the speaker is in London,
or Boston if the speaker is in Boston), while London will pick out London
independently of where the speaker is. The dependence of these indexical
expressions on the context has motivated formal analyses to relativize in-
terpretations of all expressions relative to the context of evaluation, so, for
example, we have (136), where the interpretation is dependent on a context
of evaluation, picking out the speaker of context c as the meaning of the
first person pronoun I (we are ignoring the difference between statives and
events here).

(136) JI am tiredKc =
λw∃x∃v(experiencer(v, x) ∧ being-tired(v) ∧ speaker(c) = x)
‘The speaker in the context is an experiencer of a being tired event’

As argued by Kaplan (1979), indexical expressions like I are not equiv-


alent to a definite description like “the speaker” in terms of their compo-
sitional properties, which we can see by varying contexts and events. The
reference of a definite description can vary across events, as in the example
in (137a), but an indexical expression is always connected directly to the
original utterance context, as in (137b), which is not equivalent: it refers
to the speaker of the context of utterance and not the speaker of various
president-speaking-events. The takeaway is that semantic interpretation of
indexical expressions really does need to depend on the context of utterance
and cannot be paraphrased by descriptive material. This context depen-
dence was indicated through the superscript c in (136).

(137) a. Whenever the president is speaking, the speaker lives in the


White House.
b. # Whenever the president is speaking, I live in the White House.
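As an illustration of the contrast just drawn, the following toy sketch (an assumption-laden illustration, not the text's own formalism) distinguishes the indexical I, whose value is fixed by the utterance context c, from the description the speaker, whose value varies with the event under discussion, as in (137).

utterance_context = {"speaker": "the author", "location": "Boston"}

def I(c):
    # [[I]]^c: fixed by the utterance context, regardless of the event described
    return c["speaker"]

def the_speaker(event):
    # the description 'the speaker': picks out whoever speaks in the event at hand
    return event["speaker"]

president_speaking = {"speaker": "the president", "lives_in": "White House"}

print(the_speaker(president_speaking))   # 'the president' -- varies with the event, as in (137a)
print(I(utterance_context))              # 'the author' -- fixed by c, as in (137b)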

Given this, a much discussed claim in this area of linguistics and phi-
losophy is the claim that the context is not something which itself can par-
ticipate in compositionality, i.e. be affected by any linguistic operators. In
other words, conversational participants and their roles, locations, etc. are
simply facts of a context and while language can access these details, it

doesn’t include any symbols that affect what the context for evaluation is,
i.e. nothing overwrites c. Kaplan (1979) famously argued that a linguistic
operator that changed the context of evaluation would be a “monster”. De-
spite, or perhaps because of, this claim, interest in its universality has grown
in recent years, propelled by work suggesting that some languages
actually do have operators that “shift” a context of evaluation, without the
use of quotation. Such “shifty” indexical expressions tend to look like the
Zazaki example in (138) (from Anand and Nevins (2004), presented as in
Sundaresan (2021)), which includes a first person indexical expression within
a clause introduced by and in the scope of an attitude verb, in this case va
‘say’. Notably, the first person indexical pronoun (Ez) can be interpreted as
the speaker of the context (as in English) or as the holder of the attitude in
the main clause, here Hesen (the latter seems to be impossible in English).

(138) (Zazaki, Anand and Nevins 2004)


hEsen-ij (m1k -ra) va [kE Ezj/k dEzletia].
Hesen-obl I-obl.to said that I rich.be.prs
‘Hesen told me [that I am rich].’ (Unshifted reading)
‘Hesen told me [that Hesen is rich].’ (Shifted reading)

The idea is that the context of evaluation for the indexical expression
in the embedded clause need not only be the main clause speech event,
but could also be some other context, introduced by the attitude (here,
saying). In the literature on reported examples of indexical shift there is
some variation between languages, but one stable observation is that across
languages verbs of speech are more likely to allow this kind of shift than
are verbs of thought, which are in turn more likely to allow indexical shift in
their complement than are verbs of knowledge (Sundaresan, 2013). These
“shifty” indexicals are sometimes called “monsters”, in reference to Kaplan’s
claim, and it has become an important question for syntax/semantic theory
which languages allow such shifts, under what contexts, and involving which
indexical expressions (Deal, 2020; Sundaresan, 2021).
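A schematic way to picture a “monstrous” operator of the kind at issue in (138) is sketched below: the say_shifty function overwrites the context of evaluation for its complement, while say_plain leaves it untouched. This is an illustrative toy model, not an analysis proposed in the literature discussed here, and all names are hypothetical.

def I(c):
    return c["speaker"]

def say_plain(attitude_holder, complement, c):
    # complement evaluated against the original utterance context c (no shift)
    return (attitude_holder, complement(c))

def say_shifty(attitude_holder, complement, c):
    # the "monster": complement evaluated against a context whose speaker
    # is the attitude holder
    shifted = dict(c, speaker=attitude_holder)
    return (attitude_holder, complement(shifted))

c = {"speaker": "me"}
be_rich = lambda ctx: I(ctx) + " is rich"
print(say_plain("Hesen", be_rich, c))    # ('Hesen', 'me is rich')    -- unshifted reading
print(say_shifty("Hesen", be_rich, c))   # ('Hesen', 'Hesen is rich') -- shifted reading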

7 Role shift as context shift


It may come as no surprise then that in sign languages, it has also become
a major question within the study of sign language semantics whether we
see evidence among them for “monstrous” shifted indexicals. Quer (2005)
suggests that role shift is precisely a case where we do, and moreover, that
role shift is a clear case of precisely this shifting of a context. Consider
example (139) below: the first person pronoun expressed by the point to
self ix_1 is not interpreted as the speaker of the whole sentence, but rather
as Alex, just as in the Zazaki example in (138).

(139)

‘Alex said, “I’m tired”’/Alex said that he (Alex) was tired.

The translation given for (139) reflects two possible analyses of the sen-
tence with role shift in American Sign Language. On the one hand, we can
think of role shift as something like quotation, in which, again, the use of
the first person is because the quotation depicts how something was said in
the event being demonstrated, in this case, an event of Alex talking. This
would be the demonstration analysis of role shift, and typically is considered
to be a separate phenomenon from context shift, since quotations are under-
stood to be noncompositional in other ways. On the other hand, a context
shifting analysis of role shift explains the use of the first person indexical
not because that is how Alex said it, but because the context of evaluation
for the embedded clause is different than the context of evaluation for the
whole sentence, thus a case of “monstrous” contextual overwriting.
How could we ever tell these apart? Don’t they both seem to be re-
flecting something right? The main way this issue has been approached in
the theoretical literature has been to test comparisons to written language
quotation, since written quotation is clearly depictive (“mentioned” speech,
not used) and is not typically seen as integrating compositionally with the
rest of the sentence, while in contrast, shifted indexicals are by definition
integrated with the rest of the sentence compositionally. We can see this
effect of compositional integration/non-integration in English through long-
distance dependencies like wh-questions, where (140a) is acceptable since
there is no quotation (I is interpreted as the speaker of the whole sentence),
and (140a) has a quotation which allows the indexical I to refer to Alex,
but then fails to allow a dependency between the wh-word and the object
of like. Another example of a long-distance dependency is the licensing of
negative polarity items like ever, which makes a natural sentence in (141a)
since ever is in a negative environment but here I refers to the speaker not
Alex, and in (141b) has a quotation allowing I to refer to Alex but no longer
supports the use of ever given that the negation is outside of the quotation.

(140) a. Who did Alex say I liked?


b. *Who did Alex say “I liked”?

(141) a. Alex didn’t think I ever had a chance.


b. *Alex didn’t think "I ever had a chance."

In other words, it might seem like we have “monstrous” indexicals in English


too, but they seem to only occur under quotation, in the same places that
break other semantic dependencies.
If we truly have a context shifting operator of the monstrous kind, we
should expect to have long-distance dependencies like wh-movement and
NPI licensing at the same time as indexicals that depend on a context that
isn’t the main context of utterance. This has been reported in many spoken
languages across the world, including Zazaki; example (142) comes from
Navajo (Speas, 2000), in which the embedded verb dínílnish ’you work’ has
second person marking even though it is interpreted as referring to Mary,
the one that Kii is addressing, not the addressee of the entire utterance, and
even though the embedded clause seems to be compositionally integrated
with the main clause given the wh-dependency (the question is about the
location of work, not the location of saying).

(142) (Navajo, Speas 2000)


Háadilá Kii Mary dínílnish yiłní
where.at Kii Mary 2.sgS.work 3sgIO.3sgS.say
‘Where did Kii tell Mary to work?’
lit. ‘Where did Kii say to Mary you work’

Returning to sign languages, it seems natural to test for context shift in


the same way: asking about the co-occurrence of dependencies and shifted
indexicals. One problem is that the data on this point is quite mixed, both
because judgments seem less clear than are reported for English and for clear
shifting languages, and because the analyses of various possible long distance
dependencies are up for debate as well. Consider for example data provided
from Schlenker (2017a) for American Sign Language in (143), which has a
wh-question which has to be interpreted as a dependency with the object
of live with, shown in (143a) with a third person pronoun. In (143b), we
see a first person indexical that is interpreted as John, not the speaker,
and a location indexical that is interpreted also with respect to the context
in which the speaker originally was asking the question, not the current
utterance context.

(143) (ASL, Schlenker 2017a)


a. Context: The speaker is in NYC; the listener was recently in LA
with John.
before ix-a john in la, who ix-a say [ix-a will live with
here] who?

‘While John was in LA, who did he say he would live with there?’
b. Context: The speaker is in NYC; the listener was recently in LA
with John.
RS
before ix-a john in la, who ix-a say [ix1 will live with there]
who?
‘While John was in LA, who did he say he would live with there?’

These sentences (143a)-(143b) seem to have the same meaning yet dif-
ferent indexicals, with one more obvious difference: the appearance of role
shift, notated through the superscript RS and line marking the extent of
the role shifting. This makes for a somewhat persuasive case of role shift
as a kind of visible context shift! But as is often the case, things are more
complicated than we see at first blush. In presenting this data, Schlenker
(2017a) notes at least four complications of this generalization in favor of
clear evidence for context shift. The first is that the very same pattern is
found with the use of a quotation introducing sign (“air quotes”) instead
of say; if air quotes introduce true quotation (which intuitively seems more
likely) then that would suggest that in ASL quotation really does seem to allow
extraction - and then it becomes less clear what this diagnostic is doing
in the first place if not ruling out quotation. At least, it certainly blurs
the line in both the language and the diagnostic between non-compositional
quotation and embedded clauses. A second issue, related to the first, is that
perhaps quotation is only partial, since as Maier (2018) and others have
noted, it’s always possible to just partially quote others’ speech. Third,
a possible NPI in ASL, any, behaves just like English any in resulting in
unacceptability with shifted indexicals even in cases of role shift, suggesting
that perhaps the clause under role shift is not compositionally integrated
after all, despite the wh- test. Finally, Schlenker (2017a) finds that the
same wh-extraction tests for shifted indexicals fail to hold when applied to
role shift in French sign language (LSF), so even if they do hold for the
two signers consulted for ASL, they do not necessarily generalize to the way
that role shift interacts with compositionality and structure across sign lan-
guages or all signers. These all cast doubt on a straightforward analysis
of role shift as context shift, or at least make the case much weaker than
some of the clearer cases in some spoken languages. We turn more broadly
to cross-linguistic differences in the next section.

8 Cross-linguistic variation, attraction and iconic-


ity
A striking fact about indexical shift across spoken languages is the exis-
tence of implicational hierarchies among the predicates that license this

shift, which, for example, privilege verbs of saying over other attitude pred-
icates (Sundaresan, 2013) and among the kinds of indexicals that can be
shifted, which, for example, privilege shifting in first person over second
person (Deal, 2020). It should therefore not be shocking to find both reg-
ularities and cross-linguistic variation in sign languages, even when we see
the same kinds of nonmanual movements supporting the role shift. As just
mentioned, Schlenker (2017a) reports that LSF, for example, contrasts with
ASL in not allowing wh-question dependencies in the same contexts, as in
(144a-b)(more examples are given in the original text with wh-words in var-
ious positions and the point holds across them).

(144) (LSF, Schlenker 2017a)


a. who pierre say ix-a like who?
‘Who did/does Pierre say he likes?’
RS-a
b. * who pierre say ix1 like who?
‘Who did/does Pierre say he likes?’

On top of the variation in the way that role shift interacts with long-
distance dependencies, there are also differences in the kind of indexical
expressions which seem to shift across sign languages. Quer (2005) illustrates
this with the Catalan sign language (LSC) example in (145). The first person
indexical ix1 refers not to the one who utters the sentence but to Joan, the
subject of the sentence, while at the same time another indexical, here
refers to the location in the context of the utterance, Barcelona (where the
speaker is, but not where Joan was).

(145) (LSC, Quer 2005)


topic RS-i
ix-am madrid moment joani think ix1 study finish hereb
‘When he was in Madrid, Joan thought he would finish his study
here (in Barcelona)’

Such indexical “mixing” of the kind seen in LSC is important from the
point of view of semantic analysis. At one point, it was assumed that even
in languages that allow indexical expressions to shift, they’d have to shift
together in the same clause, known as the “shift together” constraint (Anand
and Nevins, 2004). However, subsequent research on shifty indexicals in spoken
languages broadened to a wider variety of language families, and it became
clear that there were not only differences, but a typology of differences such
that first person pronouns seem to shift before second person, and second
person indexicals before locative indexicals (Deal, 2020). The mixed example
in (145) actually fits the spoken language generalization well: the first person
indexical “shifts” but the locative is interpreted with respect to the utterance
context. In general if sign languages follow the spoken language pattern, we

expect that the reverse, with a shifted locative indexical but unshifted first
person indexical, should be unacceptable.
Sign languages also bring another valuable perspective to the discussion
of indexical shifting because they highlight the role of iconicity in this do-
main. Spoken language linguists have a tendency to ignore iconicity, such
as taking on a character’s emotions/bodily movements or acting in other
ways as a character does, since it’s not captured in the segmental nature of
a language’s orthography or the International Phonetic Alphabet. This is
a place in which the impoverished options for writing systems for sign lan-
guages turn to an advantage, since they don’t force attention to only certain
easily written aspects. In sign languages, it has been noticed that role shift is
most supported in iconic contexts, such as in the use of classifier predicates
(Davidson, 2015; Engberg-Pedersen, 2013; Schlenker, 2017b), and this has
motivated at least three analyses to explain the iconicity/role shift data. On
the one hand, if role shift involves a demonstration, the depictive iconicity
is core to the meaning, so it directly motivates both the use of the index-
icals and the iconic content (Davidson, 2015); the challenge becomes the
mixed cases. Maier (2017) and Maier (2018) provide an intriguing answer
to this question, suggesting that combinations of quotation/demonstration
and “unquotation” (non-demonstrated speech) are motivated by a principle of
indexical attraction: if a mentioned person or place is present in a discourse,
a signer or speaker will prefer to refer to them directly with an indexical
appropriate to that speech act, not the one used in the reported act. So,
for example, in the LSC example (145), the physical location of the speech
act occurring in Barcelona attracts the participants to use the appropriate
indexical here instead of how it was phrased in the reported utterance (e.g.
there or barcelona), motivated by pragmatic reasoning.
Iconicity adds yet another dimension of variation and uncertainty, since
many indexical expressions involve indexical pointing to a person or place.
It’s clear that we need to know more about variation in shifting among in-
dexicals, between language communities, among signers, and in iconic/non-
iconic contexts. This is especially the case because most of the theoretical
research on role shift tends to involve language consultations/elicitations
with a very small number of signers (in many cases, a single signer), often
by hearing researchers who are not native signers, and so one-off examples
run the risk of being taken as representative of a community when it might
instead be due to individual variation, or representative of an individual in
certain contexts but not others, etc. To counteract the first issue, there is
experimental work on role shift that sheds some light on the variation issue,
and more generally on role shift in sign languages, which we turn to next.
Hübl et al. (2019) conducted a quantitative experiment on role shift in
German sign language (DGS) (see Herrmann and Steinbach 2012 for more
on role shift in DGS specifically and quotation in sign languages). In the
experiment reported in Hübl et al. (2019) the participants, who were 5 Deaf

signers of DGS, were asked to view 50 paired videos and judge their
acceptability. Each pair consisted of one video (A) and then another video
reporting on what happened in that video (B), in order to set up the right
context to test for speech reports. Among the goals of the study was to test
the attraction hypothesis, that a motivation for shifting indexicals was to
use a context-of-speech based indexical for a present discourse participant,
as in (146b), in contrast to (146a).

(146) (DGS, Hübl et al. 2019)


a. (Verbatim condition)
Felicia signs: saturday next tim with ix1 dance
‘Tim is going dancing with me on Saturday’
rs
Tim reports: felicia 3 inform1 : saturday next tim with ix1 dance
‘Felicia told me, “Tim is going dancing with me on Saturday”’
b. (Attraction condition)
Felicia signs: saturday next ix1 with ix1 dance
‘Tim is going dancing with me on Saturday’
rs
Tim reports: felicia 3 inform1 : saturday next ix1 with ix1 dance
‘Feliciai told mej , “[Ij am] going dancing with mei on Saturday”’

Their results were mixed, finding a preference for the verbatim con-
dition for the first person pronoun and the location indexical here, and
a preference for the attraction condition for the second person pronoun,
although as they note, there are many possible explanations. That said,
work like this sets an example of how to do careful and controlled “semi-
experimental” (Davidson, 2020) work to better understand variation within
and across (signed and spoken) languages. With better data, we can un-
derstand how sign languages fit into the typology of indexical shift cross-
linguistically; until then, most questions are difficult to resolve without bet-
ter understanding the sources of variation as cross-individual, cross-context,
and/or cross-linguistic (arguably, a question arising in spoken language work
on this topic just as much as in sign languages).

9 Constructed actions/Action role shift


Finally, we end with a topic that connects depictive classifiers and role
shift. In this chapter we have so far discussed demonstrations as a way to
view classifiers as depicting events/arrangements (in Section 2), and role
shift as depicting reported speech/attitudes (in Section 5), but outside of its
use with attitude predicates, role shift can also been seen as introducing a
“body classifier” (Supalla, 1983) to depict another’s action, sometimes called
“constructed action”. This category includes the signer depicting through

their own body the characteristics of characters in a story (movements, facial


expressions, etc.), in a sense becoming them, while “reporting” not their
speech but their actions. As we’ve seen, some conventionalized signs are
iconic in ways that quite naturally support depiction (such as for example
give in ASL), so that the sign can easily be made to resemble the action. In
these cases it’s actually possible to demonstrate the action while also signing
the conventionalized word, so that the lines between doing, depicting, and
describing become quite porous.
Within the formal semantic discussion of role shift, Schlenker (2017b)
has used the term “action role shift” to describe the shifting of person in-
dexicals on more iconic verbs that don’t include reported speech but rather
reported action. He notes that just like the example from Navajo above
((142) from Speas 2000), person marking on verbs can be shifted, but that
in sign languages, unlike in Navajo and other spoken languages, this seems to
be possible without even any speech reports at all, merely with the presence
of role shift. An example of this is the utterance in (147), which involves no
reported speech or other mental attitude report, but does include role shift
as well as a shifted indexical. Here, the purported shifted indexical is the
person marking on the verb, which begins at the signer’s body (and so takes
the form of first person marking) but which is interpreted as referring to
the agent, Alex (who is the subject of the sentence), and not the speaker.

(147)

‘Alex gave it to them.’

Schlenker (2017b) takes these “action role shifts” to be the strongest


evidence in favor of monstrous context shift in sign languages, given that
they clearly cannot be interpreted as quotation (there is nothing to quote,
as there is no speech report) and yet involve indexical forms like the first
person verbal agreement that are not interpreted with respect to the context
of utterance. In response, Davidson (2015) proposes that these are simply
demonstrations as adverbal modifiers, and that the first person indexical
is not actually one at all, but appears to be that because the first person
simply means that it is anchored to the body (Meir et al., 2007), and these
are cases of the signer demonstrating an action. This is supported by the
observation that these occur only with highly iconic verbs: signing give in
a way that appears as if one is giving something supports action role shift,
but Schlenker (2017b) shows that using a less iconic sign leads to a less
acceptable use of action role shift.

There is clearly an important role to play here in both experimental work


as well as corpus studies and careful categorization, as modeled in work by
Cormier et al. (2015b) for British Sign Language, who also ultimately sup-
port a demonstrative/depictive story of action role shift. Ultimately at stake
is the interesting case of the semantic analysis for action role shift, which
can be taken as either (a) the strongest evidence for context shift in sign
languages, or (b) a particularly natural implementation of the demonstra-
tion account of role shift in sign languages, depending on the centrality of
depiction to the phenomenon.
Within the same liminal space between depictive classifiers and quotation-
like role shifts are examples that may involve an attitude report, but not
necessarily a speech report and no overt attitude verb. We started this
chapter with just such an example, provided by Padden (1986), repeated in
(148), which has no overt attitude verb, just a subject (the husband) and,
as she argues and reflects in the translation with “goes”, an attitude report
in a subordinate clause.

(148) (ASL, Padden 1986)


RS-a
husband-a really ix1 not mean
‘The husband goes, “Really, I didn’t mean it”’
Lillo-Martin (1995) further highlights the similarities between these kinds
of attitude reports and English expressions like He’s like, I can’t believe you
did that!. Since these aren’t technically speech reports, and certainly not
direct quotations, she argues for a Point of View operator with a scope asso-
ciated to the role shifted component. There’s an important sense that this
shares with a demonstration view, namely, that the key is that a perspective
is being conveyed by the role shift, and that the right comparison in spoken
English is to the complements introduced by goes and be like. There is also an
important difference: Lillo-Martin (1995)’s syntax is multi-clausal, in that
there is a main verb (the Point of View operator) and an embedded verb (in
an embedded clause, e.g. mean), whereas under a demonstration analysis such
as Davidson (2015) there is a main verb with the semantics of “be like”,
and the entire content under the Role Shift becomes a demonstration, un-
derstood in the same way as quotations (i.e. as a depictive demonstration)
and not compositionally integrated.

10 Conclusions
This chapter brought together two types of structures in sign languages
that have separately received significant attention in formal semantics and
linguistics: depictive classifiers and role shift. The motivation in doing so
was to highlight the way that both of these kinds of expressions incorpo-
rate depiction along with description, and how because of this, both can be

analyzed within formal semantics in a unified way through the notion of a


demonstration. As should also be clear, this is not the only analysis on the
market, and so hopefully some of the advantages and disadvantages of par-
ticular approaches are clear: their syntactic and semantic predictions, their
emphasis on indexical interpretations, depictive iconicity, compositional in-
tegration, etc.
To see yet another example with a concrete implementation, we can
conclude by returning to (part of) the utterance we introduced earlier in
Chapter 1 of this text, which contains an example of a classifier predicate
(149), which depicts the way that students are arranged around a library
desk. Line (a) shows the propositional contribution of the noun student.
The crucial innovation comes in line (b), which shows the contribution of
the verb phrase ds4 (students in line, at a): this is a predicate, meaning that
it describes some set of individuals, just like happy or jump. In this case,
it will be true not of happy individuals, or of individuals who were themes
of jumping events, but rather true of individuals that are themes of events v

depicted by and (because of the ds_4 handshape) are upright


figures, and (because of the locus a) must be appropriately depicted in that
location (building on discussion in Chapter 4). That is the semantics given
in line (b). Line (c) provides the propositional contribution of the quantifier
ten, which takes two sets as arguments and requires that their intersection
have at least ten members. In (d) we see how the quantifier combines with
the restrictor noun student to form the generalized quantifier ten stu-
dents, and in (e) how it combines with the scope set, ds_4(students in
line, at a).

(149)

‘Ten students stood in a line [like this, here].’


a.

J K = λx.x is a student
b.

J K = λx∃v[demonstrate( , v)∧
theme(x, v)] ∧ upright-figure(x) ∧ R(x, a)
c.

J K = λP λQ.|P ∩ Q| ≥ 10
d.

J K = λQ.|Q ∩ {x.x is a student}| ≥ 10


e.

J K

= λw.|{x.∃v[demonstrate( , v) ∧ theme(x, v) ∧ R(x, a) ∧ upright-figure(x)]} ∩ {x.x is a student}| ≥ 10 in w
lit. the proposition defined as the set of worlds in which there
are at least ten individuals in the intersection of the set of stu-
dents and the set of upright figures who were themes of events

depicted by and are related to location a
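As a rough computational illustration of the composition in (149), the sketch below treats the noun and the classifier predicate as sets of individuals and the quantifier ten as a cardinality condition on their intersection; the demonstration and locus conditions are collapsed into a single made-up membership test, so this is only a simplified sketch under those assumptions.

students = {f"s{i}" for i in range(1, 13)}                      # twelve students (toy domain)
in_line_at_a = {f"s{i}" for i in range(1, 11)} | {"visitor"}    # individuals depicted as upright figures in a line at locus a

def ten(restrictor, scope):
    # [[TEN]] = lambda P . lambda Q . |P intersect Q| >= 10
    return len(restrictor & scope) >= 10

print(ten(students, in_line_at_a))   # True: at least ten students are among the depicted line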

As we have noted earlier in this chapter, an unusual feature from the


perspective of most formal semantic analyses is that two pieces of the “form”
appear also on the meaning side of the equations in (149): we have the

demonstration sign referenced in meaning, and the locus a


referenced. But this isn’t a mistake, rather, these are two places where
meaning is “iconic”, that is, meaning comes from the forms, similar to how
we need to reference the forms of language in quotations.
More generally, one hopes that these data show how depictive content
in general interacts with descriptive content, in tandem, in sign languages.
It is worth emphasizing in these conclusions a point that is easy to lose

sight of when focusing on sign languages in this chapter, that there is nothing
about the way that depiction is integrated into classifiers and role shift that
makes sign languages somehow less linguistic than spoken languages. Quite
the opposite is true: all languages, spoken and signed, make abundant use
of both description and depiction, including many understudied but com-
mon aspects of spoken languages such as quotation and constructed action
(Clark, 1996, 2016), depiction in ideophones (Dingemanse et al., 2015; Kita,
1997), and manner demonstrations through co-speech gestures, and there
are exciting insights to be gained by emphasizing this point of commonality
in the use of multiple semiotic resources within sign languages and across
language modalities (Ferrara and Hodge, 2018; Hodge and Ferrara, 2022).
Furthermore, not only do we gain insight by pushing these commonalities,
but formalizing the way that depiction interacts with symbolic descriptive
content can lead to new insights into commonalities both across and within
languages, such as the underlying structural similarities between commonly
studied areas of quotation and reported attitudes in spoken languages and
role shift and classifiers in sign languages.
6

Quantification

One of the most notable aspects of human language is its ability to ex-
press generalizations. Many non-human animals, for example, are able to
communicate about particular threats (e.g. predator presence) or oppor-
tunities (e.g. current location of food), but there seems to be no evidence
for their ability to express generalizations like Food is always available in
that area, Tigers sometimes come from that direction, or No eagles fly that
high. Human languages, on the other hand, are chock full of expressions of
exactly these sorts, allowing us to express generalizations in a precise way
that supports even further inferences (e.g. if food is always available some-
where, then it is available now). Expressions like this are found in every
human language that we know so far (Partee, 1995), including the earliest
stages of an emerging sign language like Nicaraguan Sign Language (Kocab
et al., 2022), making for one of the most convincing test cases of the unique
expressiveness of human language.
We use the term quantification to refer to a function that takes two sets
and expresses the relation between them, building on the “tri-partite” structure
given in (150).

(150) [Quantifier (e.g. some/all/no/etc.)][X][Y]

For example, if we say No hippos fly high, then we are claiming that
the set of things that are hippos (X = {x.x is a hippo}) and the high
fliers (Y = {x.x is a thing that flies high}) have no members in their in-
tersection, e.g. nothing that is both a hippo and flies high. Languages can
vary quite a lot in how they express quantification: some languages use
determiners like Every, No, Some, etc. that form part of a noun phrase,
while other languages express quantification through adverbials like Always,
never, sometimes, etc., the latter being more common crosslinguistically
than the former, but with an overwhelming number of languages, including
English, American Sign Language (Abner and Wilbur, 2017) and Russian
sign language (Kimmelman, 2017) employing both strategies.


Quantification has also been a primary motivation for postulating an


underlying logic for natural language, due to our ability to draw inferences
beyond what is said directly, including both entailments and implicatures.
Consider, for example, the entailments of a sentence like (151a): it follows
logically from the meaning of always that on a particular day food will
be available in the given area. Similarly, the logical structure of quanti-
fiers prompts (cancellable) pragmatic judgments: since always is a strictly
stronger quantifier than sometimes, use of the weaker one like in (152) will
implicate that use of the stronger one is unwarranted.

(151) Context: looking for food today


a. Food is always available in that area.
Entails:
b. Food is available in that area today.

(152) Context: discussing predators


a. Tigers sometimes come from that direction
Implicates:
b. Tigers don’t always come from that direction

Those interested in the logical structure of language have long noted the
relationship between quantificational expressions of the sort we see in (151)
and (152), most notably the interaction of quantifiers with negation (Horn,
1989). For example, we see that the use of the existential quantifier some-
times in (152) leads to the negation of the universal (not always). Similarly,
use of the universal always entails the negation of the negative (not none).
Thus, the study of quantifiers builds on our understanding of negation of the
sort that we saw in earlier chapters.
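The entailment in (151) and the (cancellable) implicature in (152) can be pictured with a small toy model over days, sketched below; the sets and the defeasible “not always” inference are assumptions for illustration, with the implicature computed here simply as a check rather than derived from pragmatic principles.

days = {"mon", "tue", "wed"}
food_available = {"mon", "tue", "wed"}     # a scenario making 'always' true

def always(facts):
    # available on every relevant day
    return days <= facts

def sometimes(facts):
    # available on at least one relevant day
    return len(facts & days) > 0

# Entailment: 'always' guarantees availability on any particular day, as in (151b).
assert always(food_available) and "mon" in food_available

# Scalar reasoning: asserting the weaker 'sometimes' when 'always' held would be
# underinformative, so hearers (defeasibly) infer 'not always', as in (152b).
tiger_days = {"tue"}
print(sometimes(tiger_days) and not always(tiger_days))   # True: 'sometimes but not always'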
In terms of the expression of quantification in natural language, Partee
(1995) proposes that it is characterized by a tripartite structure: a quanti-
fier, a restrictor, and a scope. So in the case of All tigers come from that
direction, we might have [Quantifier: All][Restrictor: tigers][Scope: come
from that direction]. In terms of its semantics, the quantifier can be thought
of as expressing the relationship between the restrictor and the scope, so
that in this case, All tells us that the restrictor set is a subset of the scope,
e.g. the set of tigers is a subset of the things that come from that direction,
e.g. there is nothing that is a tiger that doesn’t come from that direction.
Other quantifiers express different set relationships. For example, [Quanti-
fier: No][Restrictor: tigers][Scope: come from that direction] would be equiv-
alent to claiming that there is nothing in the intersection/overlap between
the set of tigers and the things that come from that direction. [Quantifier:
Some][Restrictor: tigers][Scope: come from that direction] would be equiva-
lent to claiming that there is something in the intersection/overlap between

the set of tigers and the things that come from that direction, i.e. that the
intersection is nonempty.
Let us model these relationships formally to illustrate their composi-
tionality. As above, we can say that any quantifier is a function of two
sets that expresses the relation between them (e.g. returns TRUE if that
relation holds, FALSE if it does not). The general structure for a quan-
tificational expression Quant that requires a relation R between two sets
would be JQuantK = λP λQ.R(P, Q). To be concrete with an example from
English, the quantifier every is a function that takes two sets, first P and
then Q, and requires P to be a subset of Q: JeveryK = λP λQ.P ⊆ Q. Taken
step by step we arrive at the following semantic derivation in (153).

(153) a. JeveryK = λP λQ.P ⊆ Q


‘the function that takes in two sets, and requires that the first
be a subset of the second’
b. JtigerK = λx.x is a tiger (P )
‘the function which returns TRUE only for those things that are
tigers’
c. Jcomes from that directionK = λx.x comes from that direction
(Q)
‘the function that returns TRUE only for those things that come
from that direction’
d. Jevery tigerK = λQ.∀x[x is a tiger → x ∈ Q]
‘the function that takes in one set, and requires that everything
that is a tiger be a member of that set’
e. J(every tiger) (comes from that direction)K
= λw.∀x[x is a tiger in w → x comes from that direction in w]
‘true in worlds in which everything that is a tiger is also some-
thing that comes from that direction’
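The same derivation can be mirrored computationally, treating each quantifier as a relation between a restrictor set and a scope set; the following sketch, with a made-up toy domain, is only meant to illustrate the set relations in (153) and the surrounding discussion.

tigers = {"t1", "t2"}
from_that_direction = {"t1", "t2", "bear1"}

def every(P, Q):
    # [[every]] = lambda P . lambda Q . P is a subset of Q
    return P <= Q

def some(P, Q):
    # nonempty intersection
    return len(P & Q) > 0

def no(P, Q):
    # empty intersection
    return len(P & Q) == 0

print(every(tigers, from_that_direction))  # True: every tiger comes from that direction
print(some(tigers, from_that_direction))   # True
print(no(tigers, from_that_direction))     # False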

In this chapter we’ll take a short tour in quantification in sign lan-


guages, first starting with an overview of different kinds of quantificational
expressions in sign languages: determiners, adverbials, and other kinds of
quantificational-like expressions, in Section 1. The focus will be on variation
within and across sign languages in these expressions. In Section 2, we’ll
focus on the way that space is used to express quantificational domain infor-
mation in sign languages. In Section 3, we’ll turn to the intersection of dis-
course referents and quantification, drawing on our discussion of anaphora
from Chapter 4, since this has often been a source for understanding both.
Finally, in Section 4 we’ll end with psycholinguistic studies on quantifica-
tion in sign languages, both production studies in an emerging language and
scalar implicature studies in an established sign language.

1 Quantification strategies across sign languages


Sign languages have been featured in the formal semantics literature on
quantification ever since the first cross-linguistic studies on tripartite quan-
tificational structures were proposed for natural languages. For example,
the classic compilation on cross-linguistic quantification by Partee (1995)
includes a chapter on American Sign Language and the relationship be-
tween bare nominal expressions and quantifiers in ASL (Petronio, 1995).
Petronio notes that there are several common sentence types in ASL that
involve quantification, and that often quantificational meaning is expressed
on verbs. The foundation of this work was analysis by Padden (1988) who
showed that verbs in ASL can change their form depending on their argu-

ments as in , or not, as in , along with a

third categorization of spatial verbs like whose form depends


on a path instead of the subject or object. Petronio (1995) builds on this
distinction by analyzing the interaction of quantity expressions with these
different classes of verbs, as in the analysis of the directional verb ask in
(154). She shows that the reduplicative multiple marking on the directional
verb ask is compatible with three or more (including many) students ask-
ing, but not (just) two.

(154) (ASL, Petronio 1995)


t
a. *two student ix-1 askM ultiple
‘I asked two students’
t
b. many student ix-1 askM ultiple
‘I asked many students’
t
c. three student ix-1 askM ultiple
‘I asked three students’

This is included as part of the first discussion of quantifiers in sign languages.


Similar kinds of (non-cardinal number) quantity expressions are discussed
in both Kimmelman (2017) and Abner and Wilbur (2017) in comparison to
similar expressions in spoken languages. One thing that seems to set apart
the number marking seen in sign languages such as in (154) is the option-
ality/obligatoriness of numeracy marking (singular/dual/trial/plural). On

the face of it, the multiple marking appears similar to trial marking in spo-
ken languages, but just like other “agreement” type phenomena in sign
languages like directionality, it is optional, not obligatory: verbs that do not
change their form show no multiple marking, and sometimes even verbs like
ask need not show such marking, and can instead be signed in an prototypi-
cal/uninflected form. This suggests that the use of multiple marking may be
more semantically contentful than the seemingly purely “formal” use in spo-
ken languages like, say, Slovenian which has more formal/grammatical trial
marking, and more generally connects to the way that we modeled the op-
tionality of loci use in general in Chapter 4, so that the multiple marking can
be seen as both a way to disambiguate discourse referents via non-restrictive
modification and also to depict/show aspects of an event.
Moving on from this kind of verbal marking/pluractionality (which we
will also address more in Chapter 7), Quer (2012b) considers sign languages
in light of the tri-partite structure for quantification, bringing together ar-
guments that sign languages exhibit quantification through determiners and
adverbials using the same tripartite structure seen cross-linguistically in spo-
ken languages. He builds on an insight from Wilbur (2011) and others that
the use of nonmanual markers in ASL is related to the scope position, in
particular, that the brow raise nonmanual marking is a marker of a restric-
tor set in general (in Wilbur’s system, a particular syntactic position). In
other words, in sign language sentences with quantification, the restrictor
set will often be set off with brow raising nonmanual marking, in contrast
to the scope set.
When it comes to cross-linguistic variation, there seem to be many simi-
larites across sign languages and spoken languages in the domain of quantifi-
cation. A direct comparison between American sign language and Russian
sign language is available through Abner and Wilbur (2017) and Kimmel-
man (2017), which report on the two sign languages and their use of several
quantificational strategies, including but not limited to both determiner and
adverbial quantification. These are especially useful because they occur in a
handbook with direct comparison via the same typological survey to many
spoken languages as well, so variation between sign languages can be consid-
ered in light of the same variation among spoken languages. Kimmelman and
Quer (2021) provide an overview of quantification in sign languages, focus-
ing on lexical differences, the pervasiveness of both determiner and adverbial
quantification across sign languages, and some potential modality-specific
properties, which we turn to in the next two sections. Beyond simply the
use of space that we discuss next, there seems to be iconicity in quantifica-
tional forms themselves: for example, Crabtree and Wilbur (2020) propose
that the difference between two universal quantificational forms in ASL re-
flects a boundedness difference, such that the bounded form of all reflects
a bounded semantics, in contrast to the unbounded form (and corresponding
unbounded semantics) of the latter.

2 Quantificational domains
We have focused so far on the tripartite structure of quantification: a quan-
tifier’s force (all/some/none/etc.), its restrictor, and its scope. However, it
is also the case that not all of these pieces need to be visible: for example,
we can leave out much detail of the restrictor when we say everyone jumps,
which presumably has the structure [Every][one][jumps], yet one hardly de-
scribes a set on its own. What counts as one? We might say it is every
individual in some relevant context, and say that whoever they are, they
comprise the domain for quantification. This need arises even when there is
more overt information in the restrictor than just one. For example, Every
cat drank their milk is presumably telling us something about every cat in
a particularly relevant group, not every single cat that has ever existed. We
call this context-dependent aspect of quantification its domain restriction,
and it has long been a topic of interest in formal semantics and pragmatics
(Stanley and Szabó, 2000; Stanley, 2002; von Fintel, 1994). As we will see,
sign languages are able to integrate their use of space for discourse referents
to convey domain restriction information in a unique way, which will be the
focus of the rest of this section.
In many sign languages, plural discourse referents can be associated to
2-dimensional areas of signing space (often expressed through an arc-like
movement across that area), in order to establish an antecedent for anaphora
in later discourse, exactly like non-plural discourse referents (155a). How-
ever, there are some properties of plural discourse referents that deserve
further discussion when it comes to quantification. One of these is that they
must respect a type of iconic geometry, such that a plural discourse referent
associated to an area of space that is inside the space associated to another
plural discourse referent, as in (155b) should have the same relationship
to it as the referents do, e.g. one should properly contain the other in its
extension (Schlenker et al., 2013).

(155) a. Plural discourse referents:

b. Plurals in containment relation:

Plural discourse referents are important to understanding sign language


quantification because quantifiers in sign languages are able to make use
of these plural referents as their restrictors, shown among other languages
for American Sign Language (Boster, 1996) and Catalan sign language (Bar-
berà, 2015). In these spatial quantified noun phrases, the quantifier is signed

in an area of space, and the interpretation is that the quantifier is restricted


to that particular set associated to the area of space. Compare, for exam-
ple, the quantificational expression in (156), which doesn’t make use of any
area of space to locate the quantifier, and (157) in which the quantifier is
associated to a locus a, in this case to support anaphora in the subsequent
sentence.

(156) (Davidson and Gagne, 2022)


fs(ALL)/none/someone like test QNP without locus
‘Everyone/No-one/someone likes tests/that test’

(157) (Davidson and Gagne, 2022)


Context: A group of my friends recently took the bar exam.
fs(ALL)-a/none-a/one-a fail. Spatial QNP
‘All/none/one of them (of the friends) failed.’
ix-arc-a mad.
‘They (my friends) were mad’

There are at least two interesting consequences that this has for quanti-
fier semantics in sign languages. The first is quite simple: Schlenker et al.
(2013) note that while spoken languages like English do not have an easy
way to refer back to a complement set of a mentioned referent, the use of
space supports “complement set anaphora” in sign languages. Consider for
example the short two-sentence English discourse in (158a-c). In (158a)
the most natural interpretation is that they refers to all of the children (the
some who did their homework and the others who did not). In (158b) the
most natural interpretation is that they refers to the some of the children
who did their homework. Both of these options for interpreting they are fine.
However, it is strange to try to have they refer to the others who did not do
their homework, as in (158c), and this is descriptively called the inability to
license “complement set anaphora.”

(158) a. Some of the children did their homework. They are a good class.
b. Some of the children did their homework. They were proud.
c. Some of the children did their homework. ?They couldn’t find
it.

In American Sign Language, there is no ambiguity, since three different


loci are used for the indexical signs in (158a-c): a large arc in the case
of (158a), a subset of this arc for (158b), and the complementary subset
for (158c). This has led some to conclude that sign languages have iconic
support for complement set anaphora (Schlenker et al., 2013), and indeed,
perhaps another way to phrase this would simply be that when space is
used to disambiguate, the scenario for complement set anaphora never arises

because the use of space to depict these sets distinguishes each of these
groups uniquely.
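The three candidate antecedents at issue in (158), which ASL distinguishes through three different loci, can be pictured as three sets, as in the toy sketch below (the particular individuals are of course made up).

children = {"a", "b", "c", "d"}
did_homework = {"a", "b"}

maximal_set = children                       # 'they' in (158a): all of the children
reference_set = did_homework                 # 'they' in (158b): the ones who did their homework
complement_set = children - did_homework     # 'they' in (158c): the ones who did not

print(maximal_set, reference_set, complement_set)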
A second major consequence of spatial quantified noun phrases is that
the use of loci supports the expression of domain narrowing and widening
through a metaphorical (more is up) use of space (Davidson and Gagne,
2022). Consider example (159): the signs in both (159a) and (159b) are
the same except that the quantifier fs(all) is signed at a neutral height
in (159a) and at a much higher height in (159b), and this has truth con-
ditional consequences: the first is interpreted as a narrower domain (All
of my friends) while the latter is interpreted as a wider domain (All of
the people in the world).

(159) (Davidson and Gagne, 2022)


Context: Signer has just said, "Last night I watched a movie with
my friends about vampires. Afterwards I went to bed and I dreamt
that. . . "

a.
‘All of my friends became vampires’
#‘All of the people in the world became vampires’

b.
‘All of the people in the world became vampires’
#‘All of my friends became vampires’

Davidson and Gagne (2022) argue that this difference comes from a
pronominal restriction in the quantifier, something roughly like fs(all) [of
ix] ‘All of them’, by illustrating that this same use of height to convey wider

or narrower domains is also present in the basic pronoun system indepen-


dent from quantification, as well as in directional verbs that incorporate
pronouns/have pronominal clitics (for more discussion on directionality and
clitics see Chapter 4). They conclude then that while this use of height is
particularly productive in cases of quantification when there is a clear use for
expressing domain widening/narrowing, compositionally it should be
seen as a restriction on pronouns, similar in spirit to the spatial restrictions
we took as analyses for loci in Chapter 4.
Like role shift and classifiers, this is another place in the structure of
sign languages where depictions seem to interact directly with composi-
tional structures, in this case with truth conditional consequences about
who exactly are being quantified over! As with those, the solution comes in
through (demonstrative-like) referential pronoun (160b), and so to compare
we will walk through an example step-by-step as well (160). In this case
we will use glosses instead of images of signs, since much of what we are
focusing on is composition inside of a single expression, i.e. different parts

of meaning that contribute to the sign . The pieces/ingredients are


as follows: the universal quantifier sign fs-all (160a), the indexical pointing
sign ix-arc-neutral (a point to an arc in space at a neutral signing height)
(160b), and a silent semantic “glue” equivalent to English partitive of (160c).
Note that for the indexical, the plural restriction (expressed by the arc) is
instantiated through ¬atomic and the ‘maximal discourse entity’ through
∀y(y ≤ z → y ∈ C), following Davidson and Gagne (2022): the second says
that everything that’s part of the plural is part of the given context C.

(160) a. Jfs-allK = λP λQ(∀x(P (x) → Q(x)))


b. Jixi -arc-neutralKg,C = ιz : ¬atomic(z) ∧ ∀y(y ≤ z → y ∈ C).(z =
g(i))
(plural pronoun, the maximal discourse entity (here, the movie
watching friends) unless further restricted by descriptive content,
locus, etc.).
c. J(of)K = λxλy.y ≤ x
d. J(of)ixi -arc-neutralKg,C = λy.y ≤ ιz : ¬atomic(z) ∧ ∀y(y ≤ z →
y ∈ C).(z = g(i))
= λy.y ∈ Ce
e. Jfs-all-neutralKg,C =Jfs-all[-of[ixi -arc-neutral]] Kg,C
= λQ(∀x(x ∈ Ce → Q(x)))
‘All of them’ (default domain Ce unless further restricted by
descriptive content, locus, etc.)

The higher version, with a widening domain, involves exactly the same
composition except that the plural pronoun has a restriction to a context
set that is a superset of the default (C ⊂ {y : y ≤ z}), as in (161).

a. Jixi -arc-highKg,C = ιz : ¬atomic(z) ∧ (C ⊂ {y : y ≤ z}).(z = g(i))


b. Jfs-all-highKg,C = Jfs-all[-of[ixi -arc-high]] Kg,C
= λQ(∀x(x ∈ Ce′ → Q(x)))
‘All of them’ (expanded domain C ′ , where C ⊂ C ′ )
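A simplified computational sketch of the contrast between (160) and (161) is given below: the universal quantifier composes with a pronominal domain that is either the default context set C (neutral height) or a widened superset C′ (high locus). The particular sets and names are assumptions for illustration only.

def fs_all(domain):
    # [[FS-ALL of them]] = lambda Q . every member of the domain is in Q
    return lambda scope: all(x in scope for x in domain)

C = {"friend1", "friend2", "friend3"}              # default context set: the movie-watching friends
C_prime = C | {"stranger1", "stranger2"}           # widened context set (toy stand-in for 'everyone')

became_vampires = {"friend1", "friend2", "friend3"}

print(fs_all(C)(became_vampires))        # True: 'All of my friends became vampires' (neutral height)
print(fs_all(C_prime)(became_vampires))  # False: 'All of the people in the world ...' (high locus)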

In this implementation, the difference between using a high space for


a universal quantifier (fs-all-high) and a neutral space (fs-all-neutral)
comes down to hard coding this as a restriction on the pronoun: a plural
point to a neutral height will require that everything contained in the plural
is part of the default context C, while a point to a higher space requires that
everything that makes up the plural be contained in a superset of this, a
wider context C ′ . What’s important for compositional semantics is that this
can then be applied to any other quantifiers, e.g. none, few, etc., essentially
with the same semantics as ‘all of them’ where ‘them’ comes from a narrow
or wider domain, and it allows for multiple heights and multiple domains
as long as they are ordered this way, which there is indeed evidence for in
ASL, as shown by Davidson and Gagne (2022). But why exactly is space
used in this way? It seems to have an origin in a well known metaphor
more is up (Lakoff and Johnson, 1980), but what is notable about this
use in sign languages is that it has truth conditional consequences when the
intended referent of the pronoun (the domain) restricts the quantifier that it
appears with. Thinking more broadly about how metaphorical uses of space
can influence semantics/pragmatics in sign languages, we might imagine
hard-coding something based on the metaphor proximity is similarity
in the use of horizontal loci to explain the use of different areas of space
in cases of contrast (as discussed in Chapter 3). These are cases where a
noncompositional aspect of meaning (metaphor) influences truth conditions
by influencing what kind of meaning gets conventionalized, a different way
yet similar in spirit to the method of integrating non-propositional content
we saw for the demonstrations in the previous Chapter 5.

3 Quantification and binding


In the previous section we focused on the way that quantified noun phrases
are able to use the spatial nature of discourse referents in sign languages
to mark domain information. But, there is another way in which discourse
referents in sign languages interact with quantifiers: through individual dis-
course referents not used for a domain but instead for variable binding.
Consider, first, the difference between the two universal quantificational

sentences in (162): each expresses something with universal force, that is,
something about each and every member of the set of students, and the set
of those that were glad they brought their toothbrushes, namely, that the
first is a subset of the latter. But, they do so in different ways: the first
one (162a) takes the set of students as a whole and their behavior as group
behavior, which we can notice in the plural morphology on students and
toothbrushes. In contrast, the second one quantifies individually student
by student: notice, for example, the singular morphology on student and
toothbrush in (162b).

(162) a. All of the students were glad that they brought their toothbrushes.
b. Every student was glad that they brought their toothbrush.

We saw in the previous section plenty of examples of the first sort, with
plural morphology indicated by an arc and the use of higher or lower space to
express domain information via the plural restrictor (e.g. students). What
about the second sort (162b)? This doesn’t seem to be a universal in spoken
languages by any means (Partee, 1995); nevertheless, it is not uncommon
either, so we might ask whether we see this kind of quantification in sign
languages. The answer is somewhat complicated: there is some evidence
that sign languages do have this kind of quantification, yet other evidence
that they do not, or at least not in all the same ways as English.
One kind of evidence in favor comes from sentences that express behavior
that seems to vary by individuals. Take (163), which associates the noun
phrase boy with one area of space (locus a), and associates another noun
phrase girl with a second area of space (locus b), and then in the clause
embedded under think, the singular pronouns ix-a and ix-b are intended to
be interpreted as ranging over the whole set of boys and whole set of girls,
respectively, similar to (162b) above.

(163) (Kuhn, 2015)


[all boy]-a want [all girl]-b think ix-a like ix-b.
‘All the boys want all the girls to think they like them.’

A similar kind of example can be found in (164) from Schlenker (2011),
which uses a universal temporal quantifier each-time and associates sep-
arate locus to two noun phrases (linguist and psychologist) in the re-
strictor of that quantifier. In the scope, the pronouns ix-a and ix-b with
singular morphology are intended to be interpreted as ranging over all of
the possible linguists and psychologists, not just a single one.

(164) (Schlenker, 2011)


each-time linguist-a psychologist-b the-three-a,b,1 together
work, ix-a happy but ix-b happy not.
‘Whenever I work with a linguist and a psychologist, the linguist is
happy but the psychologist is not happy.’

Neither of these examples has the clear combination of a nominal quantifier
with singular morphology on the noun phrase that we found in the
English example in (162b), but we might be inclined to overlook this given
that bare nouns like the ones in (163) are par for the course in ASL, and
temporal quantification like each-time is also common crosslinguistically
and supports binding, as in English Each time I see a student I ask for their
toothbrush. However, there is one further difference that seems to set the
sign language case apart: the inability of negative quantifiers to participate
in these same binding structures. Consider for example that in English, a
negative quantifier works just as well as a positive one, as in (165).

(165) a. None of the students were glad that they brought their toothbrushes.
b. No student was glad that they brought their toothbrush.

In contrast, Abner and Graf (2012) note that switching to negative quan-
tification (from universal quantification) significantly degrades bound quan-
tificational readings in American Sign Language (see also Graf and Abner
2012; Kuhn 2020; Abner and Wilbur 2017). In their example (166), they
report that this form is unable to express the bound meaning, namely
that nobody in the set of politicians is also in the set of individuals who
say that they wanted to win. However, the same example with a univer-
sal quantifier is improved, as is the same example without a singular locus
(more similar to the negative quantification with plural domain restrictions
we saw above).

(166) (ASL, adapted from Abner and Graf 2012)


a. politics person-a tell-story ix-a want win.
‘A politician said that he wanted to win.’
b. no politics person-a tell-story want win.
‘No politician said that he wanted to win.’
c. *no politics person-a tell-story ix-a want win.
Intended: ‘No politician said that he wanted to win.’

Graf and Abner (2012) take this difference to be due to the inability
to support “syntactic” binding in sign languages. The idea behind this is
that the types of quantification over individuals that we saw in the English
case No student... they... (165) are only available in languages that have a
syntactic dependency between the quantificational noun phrase (No student)
and the anaphoric pronoun (they). In ASL, Graf and Abner (2012) argue,
there is not the same syntactic dependency between the quantificational
noun phrase (e.g. no politics person) and the anaphoric pronoun (ix).
When we talk about a syntactic dependency, we mean the same sort of
dependency that we see in, say, wh-questions between a question word and
the position where it is interpreted, the kind of cross-clausal dependencies
that we used to probe for compositionally integrated clauses and contrast
with quotation in Chapter 5, for example. This can be contrasted with
the kind of binding that arises through discourse-based coreference that can
cross sentences, like the kind that governs coreference between the politician
and the pronoun, as in I met a student. He brought a toothbrush.; the latter
sort cannot arise in negative quantification since there is nothing there at
the discourse level to corefer to (compare the odd: I met no student. He
brought his toothbrush).
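
To make this contrast concrete, here is a small illustrative sketch (the model and all names are invented, not from the original text): under the bound reading the pronoun covaries with the quantified variable, while under discourse coreference it picks out a single referent introduced earlier.

# Toy contrast between a bound-variable pronoun and discourse coreference.
students = {"s1", "s2"}
toothbrush_of = {"s1": "t1", "s2": "t2"}
brought = {("s1", "t1"), ("s2", "t2")}

# Bound reading of "Every student brought their toothbrush":
# 'their' covaries with the variable introduced by 'every student'.
bound_reading = all((x, toothbrush_of[x]) in brought for x in students)

# Discourse coreference: "I met a student. He brought his toothbrush."
# 'he' picks up the fixed discourse referent introduced by 'a student'.
antecedent = "s1"
coreference_reading = (antecedent, toothbrush_of[antecedent]) in brought

print(bound_reading, coreference_reading)  # True True, but only the first covaries;
# with 'no student' there is no discourse referent for the second strategy to use.
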
A related but different explanation for the difference between universal
and negative quantificational binding in ASL is that the inability to have
bound interpretations of sign language pronouns comes from iconic constraints
on the use of space. Kuhn (2020) argues that the use of a locus itself has
an iconic requirement, following a similar suggestion by Schlenker (2011)
that there is an iconicity presupposition that rules out the use of loci in
cases of negative quantification. Kuhn (2020) discusses this iconic restriction
on the use of loci in the larger context of two other phenomena in sign
languages that involve dependencies: negative concord (see discussion earlier
in Chapter 3) and distributivity, which uses space to mark dependencies in
an iconic manner. We can see an example in (167), where the locus a is
expressed on the universal quantifier each and uses space to illustrate the
dependency between the professors and the students.

(167) (Kuhn, 2017a)


each-a professor nominate one-arc-a student
‘Each professor nominated one(-dist) student.’

Overall, the notion of the use of space as being ultimately depictive, even
if in a quite abstract way, has roots in many approaches to sign languages
linguistics, especially cognitive approaches like Liddell (2003) and the hy-
brid approach of Schlenker et al. (2013), and is also consistent with general
views of the use of pointing to space as demonstrative by Ahn (2019a) and
Koulidobrova and Lillo-Martin (2016). However these views all differ in their
implementations for the interface between the depictive aspects of space and
the descriptive/non-iconic aspects of sign languages. The view we will take
is as follows: depictions can be used to create and augment the event repre-
sentations we lead our interlocutors to construct. As Kuhn (2020) suggests,
establishing a locus for a negative quantifier is in conflict with any use of
that space to depict something (which, given the negative quantifier, cannot
exist) so a locus is not used in these cases of negative quantification. Recall
the incompatibility between negation and depiction is a theme we have seen
before, as in Chapter 2. However, when it comes to quantification, negative
quantifiers can indeed occur with spatial loci in the plural case (e.g. none-
high from Davidson and Gagne 2022), and a story consistent with the one
we are giving is that these are possible because the reference of the demon-
strative pronoun that picks out the domain exists (e.g. ‘them’) and we may
want to depict its location as part of constructing the relevant event rep-
resentation; negation plays a role in those cases simply by claiming a lack
of overlap between that set and the scope set. In contrast, in the bound
singular locus case, there is no individual associated with the locus, only a
syntactic dependency/functional linguistic structure (Reinhart, 1983).
Presumably, more detailed data collection in this domain will help establish
these patterns more firmly and test how closely these abilities/inabilities to
allow binding track across quantifiers beyond the one or two that have been
described so far, in order to more fully clarify this picture.

4 Quantification and scope


We have focused so far on single quantificational expressions across sign lan-
guages, and ways that they can mark their restriction and scope, including
the use of space to track both plural and single discourse referents. Yet
another notable feature of quantifiers in natural language is that the pres-
ence of two or more quantifiers is known to lead to (in some cases, in some
languages) quantifier scope ambiguities: two different interpretations of a
sentence depending on which quantifier “scopes” over the other. Consider,
for example, the sentence in (168), which could be interpreted with the uni-
versal quantificational noun phrase every student having wide scope (168a),
or with the existential quantificational noun phrase a book having wide scope
(168b).

(168) Every student bought a book.


a. For every student, the student bought a book.
b. There is a book such that every student bought it.
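
For concreteness, the two readings of (168) can be checked against a toy model; the following sketch (with an invented model and names, not from the original text) shows that the surface and inverse scope readings come apart truth-conditionally.

# Toy model: which student bought which book (invented for illustration).
students = {"s1", "s2", "s3"}
books = {"b1", "b2"}
bought = {("s1", "b1"), ("s2", "b2"), ("s3", "b1")}  # everyone bought some book,
                                                     # but no single book was bought by all

# (168a) surface scope: every student > a book
surface_scope = all(any((s, b) in bought for b in books) for s in students)

# (168b) inverse scope: a book > every student
inverse_scope = any(all((s, b) in bought for s in students) for b in books)

print(surface_scope, inverse_scope)  # True False: the two readings diverge here
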

Some languages allow the same sentence to have these two separate inter-
pretations, as exemplified in English in (168), while other languages seem to
bias interpretation toward the “surface” reading, i.e. to use word order and
other organizing properties of information structure to disambiguate. It is
a natural question, then, whether quantificational scope ambiguities arise in
sign languages. Petronio (1995) reports the narrow (169a) and wide (169b)
scope existential readings for the bare noun phrase book to be available
in ASL (note, here, that the “wide” scope reading is actually a collective
reading in which the students bought the book together, which is not required
by (168b)), as well as a third reading, more common for bare noun phrases,
that is not equivalent to either of the English readings (169c).

(169) (ASL, Petronio 1995)


re
book two student buy.
a. ‘Two students each bought a book.’
b. ‘Two students together bought a book.’
c. ‘Two students bought books.’

In (169) the noun phrase book is topicalized, which has been argued by
Wilbur and Patschke (1999) to affect quantifier scope options via a visible
marking of topicalization/A’ movement (the eyebrow raising). Others have
noted that the use of space, as used to keep track of discourse referents,
can disambiguate such readings, especially with the use of the distributive
marker (Kuhn, 2020; Quer, 2012b). Quer (2012b) provides the example
in (170) which is not understood as ambiguous but rather unambiguously
requires student to take wide scope.

(170) (LSC, Quer 2012b)


re
student one-dist teacher poss-dist ask-dist
‘Each student asked his/her teacher.’

Lurking in these discussions about availability or unavailability of quan-
tifier scope ambiguities is often a presumption coming from analyses of text
from languages like English, taken out of context, in which there is a clear
ambiguity. This might be due to the general ability of a sentence with multi-
ple quantifiers in argument positions to permit multiple scoping possibilities.
But in context, such sentences are rarely truly ambiguous, and we will never
be looking at the context-less equivalent in sign languages given that the
visual medium frequently includes more prosodic and gestural marking pre-
cisely in order to disambiguate, much as we saw with general coordinators
in Chapter 3. Moreover, the use of space and gestural expression provides
an easier means for depiction, which can be used in any language to support
more clarity on one reading or another, which has itself perhaps motivated
the conventionalization of distributivity marking in ASL and LSC.
As a takeaway, then, we should consider the question not as whether sign lan-
languages “lack” these scope ambiguities but rather what aspects of sign lan-
guages support disambiguation and investigate the presence (or absence) of
these in spoken languages during investigation of quantifier scope readings.
Further, within a language (signed or spoken) we might find differences in
availability of scopal readings depending on one’s interest in accompany-
ing description with depiction, a known area of speaker choice/variation.
Finally, we discussed in Chapter 2 the ways that sign languages reflect in-
formation structure of the discourse (backgrounded information can be en-
coded in a question, new information foregrounded in an answer), which can
potentially affect scope if, for example, wide scope negation is encoded as
a negative answer (Gonzalez et al., 2019). A particularly interesting conse-
quence of this, discussed in Chapter 3, is the possibility that a strategy for
expressing wide scope is through the use of question-answer clauses and us-
ing a quantifier in the answer, which disambiguates scope in sign languages
and raises the question of whether this might be an underappreciated strat-
egy in some spoken languages as well.

5 Psycholinguistic studies: Comprehension


Finally, quantification has typically been the domain of formal linguistics,
but work at the psycho-semantics interface finds multiple areas of research in
which quantification has come to the attention of psycholinguists interested
in the processes for language comprehension and production, and sign lan-
guages are no exception. One example comes from scalar implicatures and
their processing and acquisition. A long line of research on spoken languages
led to the conclusion that existential and universal quantifiers stand in a
structural semantic relation such that in a positive environment (i.e. not
under negation or similar operators) a universal quantifier is going to entail
an existential quantifier (171). This has a pragmatic effect such that the
use of the weaker term often implies that the stronger term is not true, i.e.
the use of (171a) tends to imply that the speaker could not have truthfully
claimed (171b).

(171) a. Some of the students forgot their toothbrushes.


b. All of the students forgot their toothbrushes.
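
The reasoning behind the inference from (171a) to the falsity of (171b) can be sketched roughly as follows (an illustrative toy model with invented sets, not from the original text): the hearer checks whether the stronger alternative with all would also have been true, and infers from the speaker’s choice of the weaker some that it is not.

# Toy Gricean reasoning over the <some, all> scale (illustrative only).
students = {"s1", "s2", "s3"}
forgot = {"s1", "s2"}  # the situation: only some of the students forgot

def some_true(restrictor, scope):
    return len(restrictor & scope) > 0

def all_true(restrictor, scope):
    return restrictor <= scope  # subset test

literal_some = some_true(students, forgot)                      # what (171a) asserts
enriched_some = literal_some and not all_true(students, forgot) # 'some but not all'

print(literal_some, enriched_some)  # True True: the implicature-enriched reading holds
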

Among the profusion of work on scalar implicatures in psycholinguis-
tics, one of the most interesting findings has been the general divergence
between children and adults on evaluating underinformative scalar terms.
Foundational studies by Noveck (2001), Papafragou and Musolino (2003),
and others showed that children tend to accept a description using a weak
scalar item (e.g. (171a)) in a situation where the corresponding strong
scalar term (e.g. (171b)) is true, whereas adults will reject the weak term
in such a scenario. The classic description is that adults are computing a
scalar/quantity implicature (additional pragmatic inference) that one should
use the most informative term one can truthfully say, and in the absence
of using the strongest term, the stronger term is taken to be untrue. This
series of reasoning is assumed to be difficult for children for various reasons,
hence their “inability” to draw implicatures, or their having more “logical”
interpretations as opposed to the “pragmatically enriched” interpretation
given by adults. Much subsequent literature has differed in where to place
this difference between children and adults: it could be due to differences
in knowing what scalar terms are alternatives to each other (Barner et al.,
2011), differences in how adults and children react to pragmatic infelicity
(Katsos and Bishop, 2011), differences in children and adults’ ability to
compare alternatives (Guasti et al., 2005), or other reasons. Throughout, an in-
teresting question presented itself: what would happen if one’s competence
in a language were dissociated from their cognitive development (Siegal and
Surian, 2004)?
A natural way to investigate this question is to look at adults who are
learning a second language later in life, since their decades of cognitive de-
velopment would seem to be in contrast to their newer experience with that
language. However, this topic proves hard to study in that context because
many well-studied scales like existential/universal quantifiers (some/all) oc-
cur in similar patterns in one’s first and later learned languages, so it is diffi-
cult to account for language transfer effects from one’s first language to the
new language. Davidson and Mayberry (2015) investigate scalar implicature
interpretations among deaf adults with varying ages of first language expo-
sure (from birth, as compared to much later acquisition of a sign language
as a first language), and compare implicature calculation in ASL across three
scales: quantifiers, logical operators expressing conjunction and disjunction,
and a number task. All of the participants had many years of experience
with ASL and all considered it their dominant language, but some adults
learned ASL early in life, while others learned it later. Interestingly, all had
similar reactions to classic quantifier scalar implicatures, rejecting underin-
formative descriptions exactly as adults generally are reported to do in other
languages. The only difference found between signers based on their age of
first language acquisition was, interestingly, in the coordination scale, which
as we discussed in Chapter 3 has other differences that seem to be ASL
specific. From the perspective of understanding quantifier comprehension,
these results find similar patterns in ASL as for English in terms of their
pragmatic landscape, suggesting that despite different language experiences,
adults approach the task in a different way than children. Moreover, while
many tasks find differences in sign language processing between late and
early first language learners, scalar implicatures broadly are less affected,
although the more narrow task of learning a particular scale might be af-
fected.

6 Psycholinguistic studies: Production


Scalar implicatures involve comprehension and pragmatic reasoning about
the meaning of quantifiers one encounters; on the other side of the commu-
nicative equation is message formation and language production. In this do-
main, there is little work on quantifier production in sign languages, but one
area where this has been investigated in some detail and proven fruitful is in
the investigation of the emergence of quantification in a new language. Ko-
cab et al. (2022) study the production of quantifiers by signers of Nicaraguan
Sign Language, a relatively newly conventionalized national sign language
established in Managua, Nicaraguan only since the 1970s. The goal of that
study was to investigate the presence of expressions for quantification in dif-
ferent generations of signers of the language, and to understand how their
meanings might or might not change in the first decades of this language.
Kocab et al. (2022) elicited quantifier production by showing signers
pictures illustrating a scene with a quantity of animals/characters too high
to easily count and use cardinal numbers, for example 20 or so birds in a
tree, of which some proportion (all, some, none) leave and fly out of the
tree. Signers were prompted to describe a target picture showing a partic-
ular proportion. One of the most surprising results is that quantificational
expressions that distinguished universal (‘all’), existential (‘some’/‘a few’),
and negative (‘no’/‘none’) quantification were used even in the earliest gen-
erations, a finding underscored by analyses of videos from earlier decades in
which the same quantificational expressions were used (which both indicate
that these quantifiers were present early in the language’s development and
that they have these meanings). A striking conclusion of this work is that
despite the complexity that quantifiers introduce into language through their
potential for scopal interactions, tripartite structure, and domain restriction,
they emerge very quickly in the time scale of a language’s development even
in the absence of external input, which seems to only further underscore the
core role of quantification in human languages, both signed and spoken.

7 Conclusions
Quantification is known as one of the places where human language makes
use of a combination of complexity and precision that we do not yet have
evidence for among non-human animals. There are several places where sign
language specific properties make studying quantification especially interest-
ing. One of these is at the intersection of quantification and the association
of space with discourse referents, both for domains and for potential binding;
another is in the interaction of quantifiers and other operators with respect
to scope, such as distributivity and other quantifiers. There are also psy-
cholinguistic studies that focus on quantifiers, both on pragmatics in ASL
and on the production of quantifiers in NSL, although possibilities remain
wide open in this area for future work.
There is especially a need for more crosslinguistic work on quantifiers in
sign languages. The notable exceptions, as we have seen, are chapters on
Russian Sign Language (Kimmelman, 2017) and on ASL (Abner and Wilbur,
2017) in a volume on quantification cross-linguistically, which provide a sense
of the syntactic distribution of quantifiers across these languages, and on
LSC (Quer, 2012b). Further works should also take care to consider se-
mantic/pragmatic interface questions like the nature of the scales in each
language (not necessarily equivalent), and the use of space, especially since
we have seen evidence from both Catalan SL (Barberà, 2015, 2014) and
Japanese and Nicaraguan SLs (Davidson and Gagne, 2022) that it plays an
important role in domain restriction across several unrelated sign languages.
We’ll end this chapter with a concrete example that follows the semantics
we have introduced and shows how it integrates with other notions we have
covered in earlier chapters such as loci and depictions. In terms of quantifi-
cation, we will want to model the quantificational force, the restriction
and the scope/domain, the three parts of quantificational tri-partite struc-
tures. For this we can actually turn to one of our original examples from
Chapter 1, part of which can be seen in (172), which includes the quantifier
none. The semantic force of this quantifier is negative existential
(172a), requiring the intersection of the restriction set (P ) and the domain
set (Q) to be the empty set ∅ (i.e. there is nothing in the intersection).
The domain is restricted to the set of individuals that comprise the plural
ix-arc, given in (172b) as proposed in Chapter 4. This is an individual,
not a set that we usually think of as a restrictor for a quantifier, but we
borrow from Davidson and Gagne 2022 an implementation for moving from
the individual to the restrictor set using semantics essentially equivalent to
the partitive structure in none of them in English. In (172c) we see the
full subject of the sentence ix-arc none, which has the semantic form of
a generalized quantifier (roughly, ‘none of them’); this takes the predicate
remember card (172d) as an argument, resulting in the proposition in
(172e).

(172) ix-arc-a none remember card
      ‘None of them remembered a card.’

      a. ⟦none⟧ = λP.λQ. P ∩ Q = ∅
      b. ⟦ix-arc-a⟧ = ιx.¬atomic(x) ∧ R(x, a)
         ‘The unique plural (i.e. non-atomic) individual that is related
         by R to location a’
      c. ⟦ix-arc-a none⟧ = λQ. {x : x ≤ ιy.¬atomic(y) ∧ R(y, a)} ∩ Q = ∅
         ‘None of them (the ones associated to a)’
      d. ⟦remember card⟧^w = λx. x remembers (a relevant) card in w
      e. ⟦ix-arc-a none remember card⟧
         = λw. {x : x ≤ ιy.¬atomic(y) ∧ R(y, a)} ∩ {x : x remembered a card in w} = ∅
         ‘The proposition consisting of the worlds in which there is no
         individual that both remembered a card and that is a subpart of
         the plural individual related to the locus a (the lined-up students)’
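
As a sanity check on the composition in (172), the following sketch evaluates the same truth conditions over a toy model (the model, the plurality associated with locus a, and all names are invented for illustration; only the set-theoretic part is modeled, not the loci themselves).

# Toy verification of the truth conditions in (172) (illustrative only).
locus_a_parts = {"student1", "student2", "student3"}  # parts of the plural referent at locus a
remembered_card = {"student4"}                        # who remembered a card in this world

def none_of(domain):
    """Generalized quantifier 'none of them': λQ. domain ∩ Q = ∅ (cf. (172a,c))."""
    return lambda Q: len(domain & Q) == 0

print(none_of(locus_a_parts)(remembered_card))  # True: no overlap, so (172e) holds here
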

Of course, this is just one example; sometimes quantifier domains aren’t
so overt, just like in spoken languages. Note, for example, that the pronoun
ix-arc(students, a) is really in a topic position here, and structurally is op-
tional: the sentence none-sym remember card ‘None remembered their
card’ is also well formed and acceptable. Quantifiers can also take many dif-
ferent forces, e.g. none, all, some, few, etc. As we have seen, one reason
to emphasize the quantificational properties of sign languages is that they
are simply not present outside of human language in the same way, and that
they support inferences that are difficult to model as representations of
particular events (what exactly does ‘No one remembered their card’ look
like?) but quite easy to model in terms of reasoning about propositional
alternatives, which in turn support the inferences that participants make
when quantifiers are used.
7 Countability

Many areas of formal semantics focus on the functional words in language,
the sort of forms that we can think of as the kind of “glue” that holds
language together: words like all, not, none, what, etc. Mostly
as a reflection of this bias in formal semantics, these have also been the
focus of much of this book so far. In large part, the emphasis on these func-
tion words is because understanding their contributions helps us understand
propositions and propositional alternatives better: these functional vocab-
ulary items strongly affect entailments, so that for example the presence or
absence of a sign like not will drastically change entailments. As we pointed out
in Chapter 1, understanding how we can infer so much about what has not
directly been said helps us understand the infinite communicative potential
of human language. However, some extremely interesting entailment pat-
terns also pop up when we investigate many categories of “content” words
more closely. One area like this that has been well studied in spoken lan-
guages and that has also received significant study in sign languages comes
from understanding the internal structure of noun and verb phrases based
on their countability, e.g. how we count their internal pieces and how
different languages reflect these distinctions.
For example, English uses some nouns to talk about stuff as individual
units (cat, shoe, house), while other nouns seem to talk about the substance
and not the way it is partitioned into units (milk, rice, water). Moreover,
these differences are reflected with how these classes of nouns interact with
other elements like quantifiers. For example, the class of count nouns can
occur as a restrictor of count quantifiers like many but not mass quantifiers
like much (173a), while others known as mass nouns show the reverse
pattern (173b).


(173) a. I don’t see {many/*much} cats/shoes/houses. (count)


b. I don’t see {*many/much} milk/rice/water. (mass)

Moreover, the same nouns that occur with count quantifiers can combine di-
rectly with numerals (174a), whereas those that occur with mass quantifiers
cannot combine directly with numerals (174b), and instead need to be mea-
sured (via bottles, puddles, pounds etc.) before combining with numerals
(174c).

(174) a. I see two cats/shoes/houses. (count)


b. # I see two milk/rice/water. (mass)
c. I see two {bottles of/puddles of/pounds of} milk/rice/water.
(mass)

How different languages count, how different words come to be categorized
as mass or count in a given language (with respect to the kinds of properties
shown in (173)-(174)), and whether categories like “mass noun” and “count
noun” are meaningful, exhaustive of the possibilities allowed in human language,
and/or have different subparts has been an area of significant investigation by
linguists, psychologists, philosophers, and those working at the intersections of
these disciplines.
This is because on the one hand, there is noteworthy consistency across
languages. In terms of what concepts get categorized in which ways, if a
language makes a distinction like English does in terms of what nouns go
with which kinds of quantifiers, then the word for cat is very likely to be a
count noun and the word for milk is likely to be a mass noun. On the other
hand, there are also language specific distinctions that seem quite arbitrary:
English hair is a mass noun while its Italian equivalent capelli is a count
noun.
To make things even more complicated, other languages don’t at face
value seem to make the same distinctions that we see in terms of what
nouns can occur with what quantifiers, or they make this distinction in
different ways. For example, a great deal has been written about the topic
of countability in Mandarin, a language in which a count classifier (e.g. zhī,
shuāng) is required to appear between a numeral and the noun, as in (175a-
b); direct composition between the numeral and the noun is unacceptable
(175c), in striking contrast to English.

(175) (Mandarin)
a. Liǎng zhī māo
two CL cat
b. Liǎng shuāng xié
two CL shoe
c. # Liǎng māo/xié
two cats/shoes

The two classifiers zhī and shuāng reflect various semantic/sortal properties
of the nouns, in a very similar way to sign language classifier handshapes
discussed in Chapter 5, hence the use of the term “classifier” for the hand-
shapes of depicting classifier signs in sign languages.
Chierchia (1998) observes that the required use of classifiers to combine
with numbers/counting in languages like Mandarin tracks with another dis-
tinction crosslinguistically: the ability of a language to use bare nouns as
arguments for verbs, as in the contrast between (176a) in Mandarin and
(176b) in English.

(176) a. māo tiàole


cat jump
‘The/a cat jumped’
b. *Cat jumped.
The/a cat jumped.

Chierchia accounts for this difference between the English and the Man-
darin cases by supposing that all nouns in some basic sense start out as a
kind of undifferentiated/uncountable stuff (a kind, in the terms of Carlson
1977), and then some languages like Mandarin are able to allow these kinds
to participate directly as arguments to a verb like jump. In this case, they
would need another function to turn them into something countable, which
is what we see being done through their classifier morphemes. In contrast, a
language like English has nouns that are, roughly speaking, already closer
to something that can be counted. In these languages, the already
countable noun can take number marking (e.g. singular and plural), and
they have the ability to combine directly with numerals like two cats. These
languages then require an extra (covert, in English) function when we want
to talk about them as kinds, such as in the cat [kind] is common (Chierchia,
1998, 2015).
A third class of languages seems to permit their nouns to occur directly
in noun phrases with numerals (like English, unlike Mandarin), and directly
as arguments (like Mandarin, unlike English). In these languages, even very
mass-like things can appear without an overt classifier. Compare the English
cases we saw above with the example in (177) from Nez Perce, which has
no need for the intervening bottle, puddle, pound, etc. between the numeral
and the noun (177a) and where English might use measure words Nez Perce
can use plural marking instead (177b) (Deal, 2017).

(177) (Deal, 2017)



a. kuyc heecu
nine wood
‘nine pieces of wood’
b. yi-yos-yi-yos mayx
PL-blue sand
‘[individuated/apportioned] quantities of blue sand’

In such a language we might wonder if there are really mass or count cat-
egories since they don’t seem to be distinguished by syntactic distributions
(the combination with numerals, classifiers, and/or the presence of gram-
matical number). Interestingly, it seems that even in these cases there may
be evidence for a mass/count distinction if we look to other areas of the
grammar like plural marking on adjectives in Nez Perce (Deal, 2017). (As
we will see, a similar unexpected distinction, in this case in topicalization
and conjunction, shows up for ASL as well.)
In addition to the nominal domain, we can ask about countability dis-
tinctions in the verbal domain. For example, verbs can express events in
a way that is countable (English She jumped three times!) or not (English
She is playing outside!). Just like the nominal mass/count distinction, there
are formal/syntactic as well as semantic distinctions that often but do not
always align. Consider, for example, the verb phrase fold the towel. This
seems countable in some sense: we can say that we did it once, or twice,
or twenty times. We can even count instances in an amount of time, as in
(178a). In contrast, a verb phrase like fold laundry feels much stranger to
count and instead we mark volumes/durations (178b).

(178) a. She folded the towel twenty times {in an hour/# for an hour}.
(telic: fold the towel)
b. She folded laundry {# in an hour/for an hour}.
(atelic: fold laundry)

There are many distinctions to make in this domain that we won’t have time
to explore in depth here, given the potential complexity of verb phrases
crosslinguistically. However, one important distinction raised in sign lan-
guage research is the distinction between telic and atelic predicates. The
intuition behind telicity is that sometimes we can refer to events in a way
that makes reference to their end points, i.e. their telos. For example I
folded the towel tends to imply one particular event that finished when the
towel was completely folded, which is the end point/culmination/goal of that
event. In contrast, I folded laundry may be talking about the same part of
one’s evening, but it is a way of looking at this event which is less marked
by boundary points. This is reflected in, among other things, the ability to
combine with different categories of temporal modifiers that we saw in (178),
and can be viewed (e.g. Bach 1986) as the verbal countability distinction
in contrast to the mass/count distinction in the nominal domain.
In this chapter, we will discuss work on countability distinctions in nouns
in sign languages in the first section (Mass/Count) and countability distinc-
tions in verbs in the second section (Telicity). We will focus both on the
kinds of inferences that these distinctions help us understand in languages
generally, and on different approaches to analyzing mass/count distinctions
and telicity in sign languages.

1 The mass/count distinction


When it comes to sign languages, we’ll begin our study of countability in the
nominal domain. Koulidobrova (2021) considers the question of how Amer-
ican Sign Language patterns in the typology of mass/count languages. She
points out that unlike English-type languages or Mandarin-type languages,
bare nouns in American Sign Language do not require
any grammatical number marking or classifier to combine with numerals for
a counted interpretation (179a-b), or to combine with quantifiers that seem
to count, like many or few (179c-d).

(179) a.

‘I want three apples’


b.

‘I want three oils’


(can be piles, containers, puddles of oil)
c.
‘I want many apples’


d.

‘I want many amounts of oil’

As expected given the availability of the bare noun in counting contexts
(Chierchia, 1998), bare nouns in ASL can also appear in argument position
(180).

(180)

‘The/a cat jumped’

Finally, in ASL the bare noun forms can also be interpreted as measured
either by quantity or volume in a comparison expression (181).

(181) Context: Mary’s oil bottle contains more oil (by volume) than Alex’s
fifteen smaller bottles

‘Alex has more oil’ (acceptable in this context)

By these diagnostics and others that she lays out (see Koulidobrova
2021 for the full set), Koulidobrova concludes that ASL is most like the third
category of languages we discussed above, like Nez Perce (Deal, 2017) and
Yudja (Lima, 2014). Moreover, she shows that just like in Nez Perce (Deal,
2017), the distinction between two classes (mass vs. count) does emerge in
at least one area of the language, namely topicalization, as shown by the
contrast in example (182): the mass noun blood can’t be
separated from the quantificational expressions three and few, whereas
the count noun apple can.

(182) (ASL, Koulidobrova 2021)


a. *blood, i want {three/few}
‘I want three/a few bloods’
(lit. of blood, I want three/a few)
b. apple, i want {three/few}
‘I want three/a few apples’
(lit. of apples, I want three/a few)

Another place that the mass/count distinction appears in the grammar ac-
cording to Koulidobrova (2021) is in conjunction: she reports that whereas
count nouns can be conjoined with each other (183a) and mass nouns can
be conjoined with each other (183b), mass and count nouns cannot be con-
joined together (183c) except if the mass nouns are “countified” via a count
quantifier (183d).

(183) (ASL, Koulidobrova 2021)


a. give-1 book shift pen
‘Give me a book and a pen.’
b. give-1 mud shift blood
‘Give me some mud and some blood.’
c. *give-1 blood shift gun
‘Give me [some] blood and a gun.’
d. give-1 book shift few/three blood
‘Give me a book and a few/three blood.’

A takeaway is, then, that for American Sign Language there is some distinc-
tion in how mass and count is treated in the grammar, although it largely
patterns with a Yudja or Nez Perce-type language in allowing bare nouns to
combine directly with verbs (seeming to directly allow kinds to serve as ar-
guments, like Mandarin according to Chierchia 1998) and also not requiring
overt classifiers in count environments like numerals and count quantifiers,
although perhaps the work is done through a covert version of this same
function.
On its face this summary seems entirely straightforward, but Koulido-
brova (2021) rightfully points out two possible counterarguments to this
broad generalization. One is that ASL might instead have classifiers like
Mandarin: we already saw in Chapter 5 some discussion of classifiers in sign
languages. Another counterpoint could be an argument that ASL does have
plural marking. We examine each point in turn, as each has been a topic of
study in its own right in some sign languages. We conclude (in agreement
with Koulidobrova 2021) that neither of these counterarguments ultimately
holds weight, but we gain a better understanding of both classifiers and
number marking in sign languages along the way.

2 Classifiers and countability


First, there is a class of expressions sometimes called “classifiers” in sign
languages, so understanding their semantic contribution is important to un-
derstanding countability more broadly in sign languages. We explored some
aspects of the semantic contribution of depictive classifiers in Chapter 5, fo-
cusing on the depictive side, but here we focus on the second aspect: what is
their role, if any, in countability? As we noted above, one similarity between
Mandarin-type classifiers and ASL-type classifiers is the way that they sort:
classifier handshapes in sign languages like ASL sort by animacy, size, shape,
etc. just like Mandarin-type classifier morphemes. For example, both ASL
and Mandarin have a dedicated classifier for flat objects like paper that differs
from the classifier for long skinny objects like pencils, which differs yet again
from the classifier for vehicles. So, there is something deeply right about
referring to (aspects of) depicting classifier signs and Mandarin-type classifiers
with the same term. Moreover, it’s also true that there may be some traces of
quantizing in sign language classifiers: Koulidobrova (2021) points to exam-
ple (184) from Petronio (1995), which contrasts two classifier handshapes (cl:/1/ vs. cl:/44/)
and results in what is translated as a singular vs. plural meaning difference.
We saw the “plural” version of this classifier handshape in the depiction
of students around a library desk, excerpted in (185).

(184) (Petronio, 1995)


t
a. a-store, man cl:/1/-go-a
‘The man went to a store.’
t
b. a-store, man cl:/44/-go-a
‘The men went to a store.’

(185)

Similarities notwithstanding, we might note that even here, classifying
handshapes of depicting signs are quite unlike classifiers even in languages
like Mandarin, which do not encode number themselves but rather turn the
noun into something which can be counted. For example, the difference be-
tween one and two cats is not reflected in form of the classifier in Mandarin
(the classifier remains the same no matter how many cats there are) but
rather in the form of the numeral; this contrasts with the ASL example,
which seems to be expressing the difference between one and more than
one individual going to the store on the classifier handshape. If we tried
to analogize the two, we could force the similarity by saying that the ASL
handshape was portioning the human-kind into portions made of one indi-
vidual (in the cl:/1/ classifier) versus into portions of multiple individuals
(in the cl:/44/ classifier), but it’s a stretch, and further evidence suggests
differences, not similarities.
As a second example of the differences between Mandarin-type classifiers
and sign language classifiers, Pfau and Steinbach (2006) and Koulidobrova
(2021) both note that for numeral classifier languages the classifier and the
numeral have to appear adjacent to each other (Greenberg, 1972), in contrast
to sign languages where this seems to not be required: the numeral (e.g.
three) can be separated from the combination of noun (car) and depicting
classifier in both ASL (186a) and DGS (186b).

(186) a.

‘Three yellow cars are standing like this’


b. (DGS, Pfau and Steinbach 2006)
three car clvehicle (rep)
Three cars are standing next to each other.

The separation of the numeral from the classifier in (186) and the quan-
tity being expressed on the classifier and not via a numeral/quantifier in
(184) both lead to the conclusion that the classifier handshapes in ASL are
not performing the same function we see in Mandarin. We might contrast
them in the following way: Mandarin classifiers can be seen as an overt
realization of the process of taking the “kind” of a noun (the shoe kind, the
cat kind, the oil kind, etc.) and turning it into countable units so that we
can talk about, e.g. two pairs of shoes, two individuals of cats, two bottles
of oil, etc. As we saw, Mandarin seems to require this classifier to be overt
when numerals are present. ASL, however, patterns instead with languages
like Nez Perce in having numerals which can combine directly with the noun
(187a), where the process of turning the cat kind into something countable
is done implicitly, without need for overt classifier morphemes to overtly in-
stantiate this function. There is no evidence for a role of depicting classifiers
in the process of making countable units from kinds in ASL.
Classifiers in ASL, instead, seem to be part of a system that deals with
events in the verbal domain: note that the addition of a classifier even turns
a nominal phrase (187a) into a full clause, as in (187b).

(187) Context: Someone asks what just happened.


a.

‘two cats’
(Unacceptable response; it is a well-formed expression but not
an acceptable answer in this context since it’s not a full clause)
b.

‘Two cats were like this (in location a and b)’


(Acceptable response)

Zwitserlood (2012) lays out additional arguments that sign language
classifiers seem most similar to verbal classifiers/noun classes found, for
example, in many languages of North America such as Navajo. In terms
of their semantics, then, we probably want to think about them function-
ing more like agreement/inflectional systems and not nominal classifiers for
quantizing/apportioning. Theoretical analyses by Benedicto and Brentari
(2004) and Abner (2017), among others, take this same view and are consis-
tent with the semantic story we provided in Chapter 5 using event semantics.
That is not to say that classifiers cannot be used within a nominal structure,
but rather that they do not play the quantizing role that Mandarin classi-
fiers do, and instead even in nominal instances are derived from a verbal
form at some level (Abner, 2017).

3 Grammatical number
We’ve seen how despite having depicting classifiers, ASL differs significantly
from languages like Mandarin in how it uses (or, in the case of ASL, doesn’t
use) classifiers to apportion quantities for counting, e.g. in numeral noun
phrases. We have not yet explored in depth the other possibility, though:
that it actually uses grammatical number in a similar way to English-type
languages. On the face of it, there is a straightforward case for separating
ASL from number marking languages, as ASL clearly permits bare nouns
in argument position (188a) and does not have any plural marking on the
noun in a numeral noun phrase (188b): note that the form of the noun is
the same in both sentences (cat). If anything, there can be an (optional)
marking on the verb, seen in the two options in (188b) and (188c).

(188) a.

‘The/a cat jumped’


b.

‘Two cats jumped.’


c.

‘Two cats jumped.’



Figure 7.1: Lateral plural marking in DGS (Pfau and Steinbach, 2006)

However, it has been argued that some languages that initially appear
to have bare nouns in argument position (of the Mandarin variety), actually
appear more like English under a microscope, in the sense that their noun
phrases seem to be marked for grammatical number in other ways. Deal
(2017) shows that Nez Perce does precisely this through number marking on
adjectives, although this distinction is only visible precisely in noun phrases
that include these adjectives. Could ASL be a language with some kind of
covert number marking on noun phrases that simply shows evidence of a
grammatical number system in other ways?
A particularly detailed discussion of this hypothesis can be found in Pfau
and Steinbach (2006), who argue in favor of such a view for German sign
language (DGS). As they point out, grammatical number can sometimes
appear to be unmarked, as in some lexical items in English like sheep which
clearly has a grammatical number distinction as reflected on the verbal mor-
phology, despite the same form (sheep) being used for the singular and plural
noun (189).

(189) a. The sheep are sleeping.


b. The sheep is sleeping.

Pfau and Steinbach (2006) propose that this “zero-marking” of plural is
basically what is happening in DGS, except with a much wider array of
nouns. As evidence, they argue that there are some nouns that do mark
plural, in a couple of different ways. An example is the lateral movement
used to pluralize one set of nouns that includes person, child and chair,
illustrated in Fig 7.1. Roughly, they propose that there are many possible
forms of the plural marking, with the idea that there are several different
forms that depend on the shape and meaning of each noun, each of which
can mark plural (or, it can be marked with no change in form); for details
for DGS, see Pfau and Steinbach (2006) as well as Herbert (2018). It’s clear
that these tell us something about quantities, but are they really marking
grammatical number of the sort we see in English plural?

For ASL in particular, early work by Petronio (1995) made clear that
number is only conveyed, at least in ASL, outside of the noun phrase, in
the verbal domain or contextually inferred. Arguments from elided noun
phrases in Koulidobrova (2021) support this conclusion as well. Curiously,
Pfau and Steinbach (2006) also note that number seems to be not marked
internal to the noun phrase, so in a sense there is agreement about this in
signed languages: number marking in sign languages is syntactically unlike
grammatical number of the more common sort in that it is not part of the
noun phrase. In another sense, this raises deep questions because it is then
not clear what exactly is meant by plural marking that occurs outside of an
NP, as something outside of the noun phrase would be handled in a different
way by current syntax/semantics theories of number, which take number to
originate within the nominal domain.
For the purposes of the current text and our focus on semantics, we will
highlight one important argument that seems to clearly show that ASL
does not have number marking in any sense recognized as such in spo-
ken language semantics, based on its behavior under negation. Consider the sentence Alex
doesn’t have any sheep/children/cats, which is false even if he just has one
(sheep/child/cat): it doesn’t negate the plurality, but rather the plurality
here seems to be the unmarked form, used here to express the complete
lack (Sauerland, 2003). Similarly, the dialogue in (190) uses the plural form
and it is clear it should be understood as any number, including one. This
contrasts with the ASL version of the same demonstrated in (191) and in
(192) from Koulidobrova (2021).

(190) a. Does Alex have any sheep/children/cats?


b. Yes, he has one.
c. # No, he (only) has one.

(191) a.

‘Hey, do you have three basketballs [like this]?’


b.
‘No, only one [like this].’

(192) (ASL, Koulidobrova 2021)


y/n
a. have tree(rep)/ball(rep)/child(rep) here?
‘Do you have trees/balls/children?’
b. # yes, have one pine/basketball/daughter
‘Yes, we have one pine/basketball/daughter’
c. no, only one pine/basketball/daughter
‘No, only one pine/basketball/daughter’

This suggests that the quantity morpheme expressed in ASL via movement
over space and the plural in English have different meanings. We can easily
imagine some possibilities for the right analogy to ASL: consider an existen-
tial/indefinite expression like the English a plurality in (193). Here we find a
similar patterning to the ASL repetition, not the English grammatical num-
ber marking. The point isn’t that the repetition is exactly the same as the
indefinite expression in English, only that we can come up with expressions
that require existence and plurality, as in a plurality of, that nevertheless
does not make the same semantic contribution as number features in En-
glish.

(193) a. Does Alex have a plurality of sheep/children/cats?


b. # Yes, he has one.
c. No, he (only) has one.
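
The contrast between (190) and (193) can be stated more explicitly with a small illustrative sketch (invented, not from the original text): the English bare plural under negation is number-neutral, whereas an expression that genuinely requires a plurality is not.

# Toy contrast: number-neutral bare plural vs. an explicit 'plurality' requirement
# under negation (cf. the dialogues in (190) and (193); illustrative only).

def has_any(n):
    # English bare plural is number-neutral here:
    # "Alex doesn't have any sheep" is false even if n == 1
    return n >= 1

def has_plurality(n):
    # "a plurality of sheep" requires more than one
    return n >= 2

n = 1
print(not has_any(n))        # False: with one sheep, the bare-plural negation is false
print(not has_plurality(n))  # True: "No, he (only) has one" is fine for 'a plurality'
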

Finally, number has been connected via this existential type use to the
expression of explicitly spatial information. Pfau and Steinbach (2006)
discuss how the use of classifiers seems to require the expression of a spatial
localization in a way that the lateral movement does not. Schlenker and
Lamberton (2019) focus on the spatial information in these kinds of repetitions,
of the sort we saw in the basketball example in (191) as well as the related
example in (194).

(194)

‘There was a basketball like this, this, and this.’

Schlenker and Lamberton (2019) provide a semantics for these kinds of quan-
tity expressions in American Sign Language that focuses on their depictive use
in showing the spatial arrangement of several objects.
They explicitly resist an indefinite/existential analysis of spatial repetition
based on examples where the numeral is not an exact match to the number
of depicted points, in particular, where the numeral is greater, such as (195).

(195) (ASL, Schlenker and Lamberton 2019)


10 [trophy trophy trophy]horizontal-arc.
‘The museum has 10 trophies (spread out).’

In their own words: “The heart of the matter is that an expression such
as 10 [trophy trophy trophy]horizontal-arc is acceptable (and makes
reference to ten trophies), which makes little sense if we are dealing with a
conjunction of three singular indefinites.” They are right that the numeral
(e.g. 10) and the number of repetitions do not have to match, so in that
sense the plurality is not actually coordinating separate noun phrases that
each have existential meaning. In other words, we would not want this to
be equivalent to There is a trophy (here) and there is a trophy (here) and
there is a trophy (here). But, as we have seen, these expressions seem to
be existential nonetheless, something similar perhaps to the English expres-
sion a plurality with a given arrangement, although they cannot simply be
grammatical plural (in the sense of English -s) due to their interaction with
negation, as we saw above. In addition, Schlenker and Lamberton (2019)
rightfully point out that the way that space is used to show arrangements
that need not correspond exactly numerically is seen in many gestural sys-
tems, including homesign (Spaepen et al., 2011) and gestures that they
elicit from non-signers (Schlenker and Lamberton, 2019). Ideally, then, we
can incorporate depiction into the semantics of these expressions while also
getting the quantificational aspect right.
Let’s consider in particular the example in (196), with basketball fol-
lowed by a depictive classifier (DS: Depicting Sign) indicating the arrange-
ment of the balls, DS_b5(three in a row). This is acceptable in the situation
in which there are three basketballs on a table, and unacceptable to describe
a situation in which there are three buttons or soccer balls on the table, or
in a situation in which there are (only) two basketballs on the table (196).
There also seems to be the interpretation that has come up in discussion of
plurality marking in ASL and DGS in which the movements are less punc-
tuated, in which case the sentence is acceptable if there are more than three
basketballs as long as they are arranged in the shape shown (196d).

(196)

‘There are three basketballs (like this).’


Acceptability across contexts:
a. Three basketballs are arranged on a table (acceptable)
b. Three buttons sit on a table (unacceptable)
c. Two basketballs sit on a table (unacceptable)
d. Nine basketballs sit on a table in a triangle arrangement (accept-
able, as long as movements are less punctuated)

The first three judgments can be accounted for through the same classi-
fier semantics we used in Chapter 5, along the lines of (197).

(197) ⟦basketball DS_b5⟧
      = λw.∃e∃x, y, z : bulky_item(x, y, z).[theme(e, (x, y, z)) ∧ R(a, x)
        ∧ R(b, y) ∧ R(c, z) ∧ demonstration(e, DS_b5) in w]
      ‘The proposition true in worlds in which there is an event that has
      three individuals who comprise the theme of the event, the event is
      demonstrated by the depicting sign DS_b5, these individuals have
      relations to the areas of space a, b and c, and it fails to hold if the
      themes aren’t bulky items.’

On the other hand, to capture the last (plurality-like) judgement, we’re
going to want to ensure that there is a plurality with at least three (but
perhaps more) atomic subparts, all of which have to be basketballs, as in
(198). Here the same triangle arrangement can be used to demonstrate an
event with more than three parts, as is known to be a depicting strategy used
by homesigners (Spaepen et al., 2011) and non-signing gesturers (Schlenker
and Lamberton, 2022).

(198) ⟦basketball DS_b5⟧ (with less punctuated movement)
      = λw.∃e∃x_1, x, y, z : bulky_item(x, y, z).[theme(e, x_1) ∧ (x ≠ y ≠ z)
        ∧ basketball(x, y, z) ∧ x, y, z ≤ x_1
        ∧ demonstration(e, DS_b5) in w]
      ‘The proposition which returns TRUE for worlds in which there is an
      event that has a theme, that theme argument has at least three
      individual sub-parts, all of which are basketballs, and the event is
      demonstrated by DS_b5; it fails to hold if they aren’t bulky items.’
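
To separate the counting conditions of the two readings, here is a rough illustrative sketch (invented, not from the original text); it ignores the demonstration/depictive component of (197)–(198) entirely and only checks the cardinality requirements against the contexts in (196).

# Toy check of the counting conditions in (197) vs. (198) (illustrative only).

def exactly_three(theme_parts, is_basketball):
    # (197)-style: three individuals, all basketballs, make up the theme
    return len(theme_parts) == 3 and all(is_basketball(x) for x in theme_parts)

def at_least_three(theme_parts, is_basketball):
    # (198)-style: a plural theme with at least three basketball subparts
    return len(theme_parts) >= 3 and all(is_basketball(x) for x in theme_parts)

is_basketball = lambda x: x.startswith("bb")

three_balls = {"bb1", "bb2", "bb3"}        # context (196a)
two_balls = {"bb1", "bb2"}                 # context (196c)
nine_balls = {f"bb{i}" for i in range(9)}  # context (196d)

print(exactly_three(three_balls, is_basketball),  # True
      exactly_three(nine_balls, is_basketball),   # False
      at_least_three(nine_balls, is_basketball),  # True
      at_least_three(two_balls, is_basketball))   # False
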

More broadly, the observation in this section has been that ASL (and, it
seems, DGS as well) does not have grammatical number marking/plural in
the English/German way in terms of its syntax and its semantics. It does
clearly, however, express the concept of plurality through conventionalized
morphemes and the depicting classifier system, although the classifier system
itself is tied closely to the verbal system, not to the nominal system. What
can we say more generally about countability in the verbal domain? We
turn to this topic in the next section.

4 Telicity and aspect


There is general consensus that grammatical categories like nouns and
verbs exist as distinct categories in sign languages (Abner et al. 2019). So
far our discussion of countability has focused on nouns: whether sign
languages seem to make a distinction between mass and count nouns (indeed,
they seem to be differently sensitive to ellipsis and topicalization) and how
they fit into the typology of languages in terms of classifiers or number
marking (grammatical number marking does not seem to be widespread,
at least in ASL, and there are no overt nominal classifiers of the Mandarin
type). In this section we will shift our focus to verbs, and a productive line
of research that has investigated the role that the form of a verb in sign

languages has on its meaning, following similar questions: do sign languages


seem to differentiate different semantic/conceptual categories? And how do
sign languages fit into a typology of languages more generally with regard
to relating forms to these categories?
Linguists working on the structure of sign languages have long noticed
that there are some verbs in sign languages that can take many forms in-
volving repetition, and that these forms seem to correspond to just as many
different meanings. For example, Klima and Bellugi (1979) discuss at some

length how for one verb , the same handshape and roughly the same
location can be combined with many different internal movements, each of
which seems to convey aspectual-like information. (Hou 2022 provides a
more modern analysis of the broad flexibility of this same sign.) Names
given to these different categories of meaning include “protractive, inces-
sant, durational, habitual, continuative,” and “iterative”. Klima and Bellugi
(1979) find similarly complex categories, with sometimes even more distinc-
tions, for other predicates, e.g. (be)sick. These kinds of categories are all
meanings that are morphemic in some spoken languages as well, although
they are more rarely all morphemic in the same language. Moreover, the
timing/rate of repetitions and holds often seems transparent to non-signers,
in the sense that they seem to be able to guess some (but certainly not
all) of the meanings of this verbal inflection without much experience with
ASL. This naturally raises the question about the nature of this change in
meaning: are the different movements morphemic or depictively iconic? Or,
is this a case of a motivated form which is interpreted as symbolic? For ex-

ample, could the difference between and


be closer to the change in form of English verbs, such as run → ran?
Should we think about this difference in meaning as description or depiction,
as morphology or iconicity, or both?
Even if we’re focused entirely on a symbolic analysis, we will want to
understand the different contributions of aspectual marking contributed by additional morphemes, and of the inherent semantic properties
of a predicate. This is in many ways similar to the mass/count distinction
in which different nouns may on their own contribute mass or count mean-
ings, and yet these can be adjusted by additional morphology, e.g. water
is mass but bottle of water is countable. Moreover, the telicity of predicates and extra morphemes expressing other categories like aspect often track together, so they are more difficult to disentangle. For example, in English
it is most common to use an imperfective aspect for atelic predicates, as in
(199a) and perfective aspect with telic predicates as in (199b), although we

can also dissociate the two, as in (199c-d).

(199) a. Alex was playing in the laundry. (imperfective aspect, atelic


predicate)
b. Alex has folded a towel. (perfective aspect, telic predicate)
c. Alex has played in the laundry. (perfective aspect, atelic
predicate)
d. Alex was folding a towel. (imperfective aspect, telic predicate)

Conceptually, we can think of grammatical aspect (perfective/imperfective) as


marking the perspective we take on the event that we are reporting: imper-
fective aspect puts us in some sense inside the event (during the playing or
folding of the laundry), while perfective talks about this event from some dis-
tance. In contrast telicity, a kind of countability, tells us whether the event
is bounded: telic predicates like fold a towel come with natural boundaries
that allow us to count them, similar to count nouns like cat; atelic predicates
like play in the laundry are more similar to nouns like water in not being
naturally bounded or countable (see Bach 1986 for more on the analogy be-
tween the notions of telicity and mass/count). Also similar to countability
in the nominal domain with mass/count, there is reasonable debate to be
had about how much of the telic/atelic distinction comes from the inherent
lexical meanings or is built on them compositionally, and evidence that
both play a role. Note for example that both the verb and its arguments
play a role in the telicity of a predicate: in English, eat applesauce seems to
be atelic but eat an apple is telic, so it does not seem that we want to say
that the meaning of the verb eat determines telicity on its own, but rather
the combination of verb and its arguments. Similarly, play in the laundry
seems to be atelic but play a game of Monopoly seems to be telic, bounded
by the end of the game. We keep this in mind as we turn in the next section
to theories about telicity in sign languages.
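Though it is not the analysis we will ultimately adopt, one common way to make this contrast precise is worth sketching here (a hedged sketch in event-semantic notation; ⊕ for the sum of two events and < for the proper-part relation are notational assumptions introduced only for this illustration): atelic predicates are cumulative sets of events, telic predicates are quantized ones.

P is cumulative iff ∀e, e′[(P(e) ∧ P(e′)) → P(e ⊕ e′)] (atelic, e.g. play in the laundry)
P is quantized iff ∀e, e′[(P(e) ∧ e′ < e) → ¬P(e′)] (telic, e.g. fold a towel)

Stated this way, the role of the arguments falls out naturally: eat an apple is (roughly) quantized, since no proper part of an eating of an apple is itself an eating of that apple, while eat applesauce is cumulative, since two eatings of applesauce together still count as an eating of applesauce.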

5 Event visibility
Telicity has become an increasingly well studied topic in sign language re-
search in recent years due to an extremely interesting proposal by Wilbur
(2008) that the boundedness in the meaning of verb phrases is mirrored in
the structure of verb phrases in sign languages, and that this is done in an
iconic way (see also Malaia and Wilbur 2012). This Event Visibility Hy-
pothesis (EVH) begins with the observation that verbs expressing bounded
events, like steal, tend to be expressed with a form that is bounded as well.
This contrasts with verbs that express unbounded events, like play, which
do not end in an abrupt stop in the same way.

As proposed in Malaia and Wilbur (2012) and Wilbur (2008), the the-
oretical claim of the EVH is that the boundary point in the form of signs
like steal is an overt manifestation of the resultative morpheme proposed
in Ramchand (2008). As a consequence of this “visibility” of the resultative
morpheme, verb forms with an overt boundary point necessarily express telic
predicates, while those without typically express atelic predicates. That is
to say: the idea that telic predicates contain a piece of meaning that specif-
ically encodes telicity has been proposed independently of the sign language
data (Ramchand 2008), but Malaia and Wilbur (2012) and Wilbur (2008)
draw a fascinating parallel to sign languages when they suggest that sign
languages encode this boundary morpheme overtly, by the existence of an
abrupt stop.
The EVH captures a strong intuition, which is that something in the verbal form in sign languages often feels natural given the verb’s meaning, and moreover, that this fit crops up regularly in unrelated sign languages of the world.
It would seem quite wrong to reverse the signs for play and steal even
though neither is especially iconic or transparent in its meaning. Strickland
et al. (2015) confirm this intuition experimentally, showing that non-signers
categorize signs with overt boundary points as expressing telic meanings
more often than those without boundary points. Given this intuition, a
kind of semantic analysis that posits a universal due to iconicity available
in the visual modality might be well motivated. On the other hand, the
morphological claim is quite surprising from the crosslinguistic perspective:
no spoken languages seem to encode telicity directly in their morphology.
In a skeptical view of the overtness of telicity in sign languages, Davidson
et al. (2019) note that telicity is like mass/count in many ways (Bach, 1986),
one of which is that it is usually seen as an emergent property depending
on lexical semantics and semantic properties of its arguments; while many
things interact differently with mass and count nouns, no overt morphology
marks that distinction as such directly on the nouns themselves in the way
that the EVH proposes for telicity in sign languages. Building on Wilbur’s
work, different theoretical takes have been proposed to cover the intuition
behind the EVH. For example, Kuhn (2017b) speculates on an especially
iconic implementation of the EVH, proposing that signs might encode not
just the presence or absence of the boundary point, but that the completion
of the event is mirrored in the form of the sign, mapping the production of
the sign directly onto the event structure. He gives the example of the type
in (200), where producing the sign die in a way that ends before the citation
endpoint is interpreted as ‘almost die’, and where internal modulations of
the timing of the sign are interpreted as reflecting the timing of the event,
e.g. ‘after a struggle’ or ‘gradual’.

(200)

Different iconic modulations of the verb die from Kuhn (2017b)

Under this view, the EVH is both stronger and yet even simpler than pro-
posed by Wilbur (2008); it simply requires, through a manner adverbial imposed by an iconic function (Iconϕ), that the event progressed in the manner shown. Note that this builds the depiction into the propositional meaning.
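As a rough illustration of what such an analysis could look like (a simplified sketch, not Kuhn’s exact formulation; the predicate name dying-process and the way the sign’s form is referenced are assumptions made here only for readability), the iconically modulated form of die might denote something like:

[[die-modulated]] ≈ λw∃e[dying-process(e) ∧ Iconϕ(the production of the sign, e) in w]

where Iconϕ requires that the degree and timing of the event’s progression mirror the degree and timing of the sign’s production: a production that stops short of the citation-form endpoint corresponds to a process that stops short of death (‘almost die’), and internal modulations of timing correspond to readings like ‘after a struggle’ or ‘gradual’.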
Another way to think about the relationship between iconicity and the
grammar is to map even (perhaps, especially) the form of verbs that encode
atelic predicates to small portions of an event taking place, as suggested
by Wright (2014). Wright notes that the sign for sew in ASL is frequently
atelic and lacks a boundary point, having a repetitive movement without
a boundary, in line with the EVH, but that the internal movements of the
sign sew in ASL can be thought of as corresponding to individual stitches,
and so iconicity comes into play in the semantics through the repetition
and not the boundary. One could think about play like this as well, with
each small internal repetition mapping to some interval of playing. The fact
that telic predicates are frequently expressed with verbs that have an abrupt
boundary point is in this sense epiphenomenal, in that they do not continue
indefinitely.
The original formulation of the EVH (Wilbur, 2008; Malaia and Wilbur,
2012) as well as other takes by Kuhn (2017b) and Wright (2014) take an
iconic mapping between the telicity of the predicates and their (bounded or
unbounded) verb forms to be a given. As noted above, a source of empirical
support for this is often taken to be the fact that people who do not have
any previous experience with a sign language guess at above chance levels whether a verb has a (typically) telic or atelic interpretation, as shown
in experimental work by Strickland et al. (2015). However, Davidson et al.
(2019) discuss several reasons to be skeptical of a strong version of the EVH.
On the one hand, they highlight the existence of alternating predicates in
ASL like read, write, drive, and ski, which take one bounded form

in telic predicates (e.g. ) and another unbounded form in

atelic predicates (e.g. ), providing evidence in favor of


the EVH. On the other hand, they also provide several counterexamples to
the correspondence in both directions from verbs that don’t change forms.
There are atelic predicates which have a clear boundary point, and while
many of these exceptions are statives like know, some are eventives like
drink (coffee). There are also several telic predicates which do not have
a clear boundary point, such as paint (a picture). So, clearly, there is a
nontrivial linking between these forms and meanings in the direction of the
EVH, but it is also straightforward to find counterexamples to the claim that
telicity is encoded directly as necessarily expressing endstate via a bounded
verbal form.
We can also view the behavior of nonsigners in Strickland et al. (2015)
through the lens of depiction vs. description. Their results report some above-chance use of verb form by nonsigners to decide between
meanings in a way that corresponds to something like telicity. However,
this may well be a “last resort” kind of mechanism used if there are no
other cues at all; there may be some transparency in these forms that we
can call iconic, and in fact this is supported by further experimental results
showing that the same analogy (for abrupt stopping in verb forms for event
boundaries) is found in spoken language (Kuhn et al., 2021). However, that
doesn’t mean this iconicity is interpreted in a way that bears on propositional
meaning: iconic motivations for conventionalized forms need not be taken to
affect interpretation. The form of the verb vote in ASL is clearly iconically
motivated by how one stuffs a physical ballot box, but it can be used to
describe all kinds of voting that is electronic, etc. (see Hodge and Ferrara
2022 and Ferrara and Hodge 2018 for detailed discussion of these kinds of
iconicity). Similarly, the English onomatopoeia knock ends in a stop conso-
nant and supports iconic depiction, even though its denotation is symbolic,
i.e. it simply restricts us to certain knocking events. Thus, instead of sign
languages making the event structure “visible” in the form in a way that
should be reflected in the semantics, sign languages may be simply using
motivated forms to express meanings in the same way that all languages are
known to take advantage of when possible. In addition, these forms may also support depictive meanings, just like knock; perhaps the iconicity discussed for, e.g., (almost) die by Kuhn (2017b) is emblematic of this use of descriptive iconicity to support depiction.
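To make this split concrete, here is a minimal sketch in the style of the demonstration semantics from Chapter 5 (the particular entries are illustrative assumptions, not established analyses): the conventionalized symbol contributes only a descriptive restriction, and a co-occurring depiction, when one is supplied, adds a separate demonstration condition on the event.

[[knock]] = λe[knock(e)] (purely symbolic: restricts us to knocking events, however they come about)
[[knock]] plus a depiction d ⇒ λe[knock(e) ∧ demonstration(e, d)] (optional depictive enrichment)

On this view, an iconically motivated sign like vote, or a bounded verb form, can remain purely symbolic in its denotation while still making such a demonstration especially easy to supply.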

6 Pluractionality
Finally, recall that we discussed grammatical number (e.g. singular/plural
marking) in the nominal domain. Many languages also mark number in the
verbal domain, through pluractionality. Kuhn and Aristodemo (2017)
describe and analyze two ways that French SL (LSF) marks pluralities of
events, through two different forms of repetition. One distributes over time
(they call this rep, since the form of the verb is repeated); another distributes
over participants (they call this alt, since the form of the verb involves
alternating). One is exemplified in (201a): repeating the verb forget with
the same hand is interpreted as a particular type of event, involving the
same participants (in this case, Jean as the agent and a particular word as
the theme), occurring multiple times; note that it contrasts with the phrase
every-day which permits the words to vary from occasion to occasion.

(201) (LSF, Kuhn and Aristodemo 2017)


a. jean one word forget-rep
‘Jean forgot one word repeatedly’
(can’t be different words)
b. every-day jean one word forget
‘Every day Jean forgot one word.’
(can be many words, or one word)

The other pluractionality marker in LSF is exemplified in (202a), where


the form of the verb involves alternating between the dominant and non-
dominant hands; with it comes a change in interpretation which requires
variation between the participants only. In this case, it is variation between
the students but specifically not variation in the words; note this contrasts
with the distributive marker each which permits variation both for the
students and the words.

(202) (LSF, Kuhn and Aristodemo 2017)


a. student ix-arc forgot-alt one word.
‘The students forgot (the same) one word.’
(can’t be different words)
b. student each forgot one word
‘Each student forgot one word’
(can be many words or one word)

The proposed semantics by Kuhn and Aristodemo (2017) captures this


contrast, following an analysis of two pluractionality expressions in Kaqchikel
by Henderson (2014), with the idea that pluractionality is marked via a dependency between a distributivity operator (like each (over participants) or every-day (over times)) and filters like -rep or -alt. Under
this approach there is significant similarity with grammatical number mark-
ing, which is also a filter in some sense: when we have plural marking on a
noun, this is frequently modeled as a function that takes some set of individ-
uals and returns only those which have more than one individual as some
part of them. Similarly, Kuhn and Aristodemo (2017) propose semantics
for pluractional markers in LSF which takes some set of events and returns
those which have more than one sub event as part of them. This was the
function of the restriction ¬atomic in the work on quantification in Chapter
6. In the case of -alt these have to be events (that are part of the larger
event) with different participants (θ(e′) ≠ θ(e′′)); in the case of -rep these have to be events (that are part of the larger event) with different run times (τ(e′) ≠ τ(e′′)) (203a-b).

(203) (Kuhn and Aristodemo, 2017)


a. [[-alt]] = λV λe[V(e) ∧ ∃e′, e′′ ⪯ e[θ(e′) ≠ θ(e′′)]]
b. [[-rep]] = λV λe[V(e) ∧ ∃e′, e′′ ⪯ e[τ(e′) ≠ τ(e′′)]]

In the case of LSF -alt, they analyze it as a function that takes verb types
(e.g. forget) and returns events of that kind (e.g. the forgetting events)
such that they have two different sub events each with a different event
participant. In contrast, the LSF morpheme -rep is a function that takes
verb types (e.g. forget) and returns events of that kind (e.g. the forgetting
events) such that they have two different sub events each with a different
run time.
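To see these entries in action, we can apply them directly to a verb meaning; the first two lines below are just beta-reduction over the denotations in (203), taking forget as the value of V, and the final line is a standard sketch of nominal plural marking added for comparison (with * marking closure under sum, an assumption imported here rather than an analysis from this text):

[[forget-alt]] = [[-alt]]([[forget]]) = λe[forget(e) ∧ ∃e′, e′′ ⪯ e[θ(e′) ≠ θ(e′′)]]
[[forget-rep]] = [[-rep]]([[forget]]) = λe[forget(e) ∧ ∃e′, e′′ ⪯ e[τ(e′) ≠ τ(e′′)]]
compare: [[pl]] = λPλx[*P(x) ∧ ¬atomic(x)]

In each case a predicate goes in and a predicate comes out, one true only of pluralities: pluralities of individuals in the nominal case, and events with multiple subevents (differing in participants or in run times) in the verbal case.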
We end by noting that pluractionality intersects with the other area of
countability in the verbal domain we have discussed in this chapter: telicity.
Even if a basic verb plus its arguments would typically be telic, the addition of pluractionality can cause the predicate to pattern as atelic. Consider, for example, the English sentence My friend gave me one book. This doesn’t pass the “for an hour” test for atelicity: you can’t say that My friend gave me
one book for a year. However, if we add repetition, it’s fine to say For a year,
my friend repeatedly gave me one book or For a year, my friend often forgot
one word. One question then becomes whether we can account for some of
the observations made about iconicity and telicity through understanding
pluractionality. Most likely, these analyses are right in different ways, for
example, that sign languages have conventionalized ways to express different
kinds of pluractionality (Kuhn and Aristodemo, 2017), and an endstate/telos
seems to be reflected in the bounded forms for many verbs like ASL steal,
die, and forget, and that on top of both of these descriptively iconic
conventionalizations, they can also support (iconic) depictions, such as the
depiction of the progression toward death explored by Kuhn (2017b) or some
of the alternations in manner as in look-at noted by Klima and Bellugi

(1979) and in (bounded vs. ongoing) alternating verbs like read (Davidson
et al., 2019).
To imagine how this might work, let’s consider a verbal case involving

the ASL verb , which is an alternator: it regularly expresses telic


predicates and in those cases tends to use one single motion seen in (204),
and also regularly expresses atelic predicates, and in those cases uses many
small internal movements, seen in (205).

(204)

‘The girl has read (it).’


Acceptability across contexts:
a. We are discussing a famous book, and a girl we both know. The
girl recently finished reading the book. (acceptable)
b. We are discussing a famous book, and a girl we both know. The
girl is currently reading the book, but hasn’t finished. (unac-
ceptable)

(205)

‘The girl has read (it).’


Acceptability across contexts:
a. We are discussing a famous book, and a girl we both know. The
girl recently finished reading the book. (marginal)
b. We are discussing a famous book, and a girl we both know. The
girl is currently reading the book, but hasn’t finished. (accept-
able)

In the case of the verb that has an end point form, the end point is
naturally taken as the point at which one finishes the reading goal (here, the book), as in the situation in (204a). Note that the sentence with the bounded

form is not acceptable in the situation where the girl hasn’t finished reading
the salient book (204b). In contrast, in (205), we see that a form with small
internal movements (i.e. without the endstate form) is less acceptable in
a context in which the girl finished the book, but acceptable if she is still
reading it and hasn’t finished.
We’ll naturally want to reflect these semantic differences in our semantic
analysis, but as things stand, many possibilities remain open for how ex-
actly we might want to do this. One way to model the distinction we see
in (204)-(205) is roughly via the EVH, as proposed by Wilbur (2008), in
which the endstate form in (204a) reflects the presence of the telos in the
semantics. This would correctly account for the judgments in (204), and
we can imagine why it would also account for the preference in (205), given
pragmatic competition between the two forms. But this isn’t the only way
to model the pattern seen in (204)-(205): telicity and aspect are collapsed in
these cases, so we could also model this particular distinction by taking the
endstate form to express perfective aspect (e.g. has read (the book)) and/or
the small internal movements as a type of progressive form (e.g. is reading
(the book)). Of course, the ideal goal is to dissociate telicity from aspect in
ASL and other sign languages, as we did for English above in (199), but this
proves to be complicated for both logistical and form-based reasons, detailed
in Davidson et al. (2019). Certainly, one thing we don’t want to do is to en-
code the difference in the verb itself, since we find the same basic verb form
in both sentences (contra Strickland et al. (2015)’s implication that telicity
is a property of verbs and not verbs with their arguments). Yet another
possibility is to consider the depictive potential of these verb phrases, and
consider that some distinctions may be conveyed via depiction and not
description. We began this chapter with the observation that much of the
countability expressed in sign languages seems to be more transparent to
nonsigners than other iconic areas of sign language grammar, which would
point toward depiction (although this is not an argument for it; we can have
symbols with more or less transparent meanings). But consider the kind of
event conveyed in (206) where the subject is shown as taking part in effort-
ful reading; we might simply consider this a simultaneous depiction without
propositional consequences.

(206) L M=
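Before weighing these options, it may help to see what the two modeling strategies just mentioned would look like formally; the following is a hedged sketch (the result-state predicate and the relation between the event’s run time τ(e) and a reference time t are notational assumptions here, not formulations taken from Wilbur 2008 or Davidson et al. 2019):

Option 1 (telicity): [[read-bounded]] ≈ λw∃e[read(e) ∧ ∃s[result-state(s, e)] in w]
Option 2 (aspect): [[read-bounded]] ≈ λw∃e[read(e) ∧ τ(e) ⊆ t in w] (perfective: the event is contained in the reference time)
[[read-reduplicated]] ≈ λw∃e[read(e) ∧ t ⊆ τ(e) in w] (imperfective/progressive: the reference time sits inside the event)

Both options capture the contrast in (204)-(205); distinguishing them requires dissociating telicity from aspect, which is the complication noted above.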

Most likely, there are aspects of truth to each of these observations.


Imagine, for example, that we take pluractionality to be expressed through
dedicated morphemes along the lines of Kuhn and Aristodemo (2017), and
the use of a bounded form to correspond to telic events, along the lines

of Wilbur (2008) and Malaia and Wilbur (2012). In addition, although all
of these notions are technically dissociable, reduplication is known to be a
form recruited for progressive markings crosslinguistically, while telicity fre-
quently tracks with perfective aspect. Therefore, we might be encountering
not just challenges from the perspective of an analysis, but also challenges
from the dynamicity of language: a state of flux could make these difficult
to disentangle, but in entirely expected ways given our understanding of
(spoken) languages and language change/typology more broadly, as noted
specifically in this domain of aspect marking by Deo (2015b).
In addition, when it comes to iconicity, conventionalized symbolic mor-
phemes may nevertheless have some amount of iconicity, which can itself
support further depictive iconicity. This is exemplified in English in the
case of using onomatopoeia (e.g. knock, an entirely conventionalized symbol,
representable in text, that nonetheless makes for compelling depictions).
Evidence for this might be the concentration on the signer’s face and the
intense hand movements in the utterance in (206), an example of a morphemic
distinction that also naturally supports further depiction. The symbolic and
the depictive seem to especially rely on each other in the verbal domain (we
saw this in work on demonstrations in Chapter 5), even in ways that might
differ between signers and nonsigners: experiments on nonsigners may be
picking up on an underlying tendency in depiction, which might be used to bias certain symbols toward certain meanings (e.g. verb forms), even though those forms are not themselves directly interpreted as a visible manifestation of a symbolic endstate morpheme.

7 Conclusions
We perceive the world as comprised of different types of things, such as
events, objects, and substances. We also use language to talk about the
world in ways that reflects these different categories, but via mappings that
are not determined by them: the same event out in the world can be de-
scribed as folding laundry (atelic) or folding a towel (telic); the same puddle
can be described as water (mass) or a puddle of water (count). Sign lan-
guages have been investigated in both the nominal and verbal domain when
it comes to countability, with special attention played to the role of iconic-
ity in the expression of countability. This section emphasized in particular
the value in considering the separation of iconically motivated morphemes
from further, additional iconic depictions that they might very naturally
support. In particular, while some previous work in these domains has as-
sumed because there is some iconicity that it must be reflected directly in
the semantics (as in, e.g. the Event Visibility Hypothesis), we argued that
this need not be true either in the nominal or verbal domain, but rather
came to the conclusion that descriptive iconicity exists in both domains,

as an organizing principle of a system of form-meaning mappings that can


simply support depictive iconicity, in the same way we see forms used for
depiction in spoken languages (Clark, 2016; Dingemanse, 2015).
8 Intensionality

Human languages, across all modalities, are notable not just for their com-
positionality and creativity, but also for their ability to go beyond the “here
and now”: we can discuss not just the present place and time, but also the
past, the future, and even alternative possibilities, how things might have
been, how we hope they might be, what we think, and what we believe
must be true. What is particularly remarkable about this ability in human
language is that we can discuss these possibilities with precision and make
inferences about them, including entailments, presuppositions, and impli-
catures, just like we do when we share information to help narrow down
the particular world we might currently be in. Consider, for example, the
relationship between (207a) and (207b): in every situation in which (207a) is true (there must be a rainbow), it is also true that there can be a rainbow, so (207b) is true as well; the first entails the second.

(207) a. There must be a rainbow.


Entails:
b. There might be a rainbow.

We find a similar pattern when we use attitude verbs: (208a) entails (208b).
Both have a bit of the feeling we found with quantification, as in (209), where we have to ignore the pragmatically strengthened reading of the second, entailed (b) sentence; the point is that we can clearly derive inferences that go
beyond what is directly said, and so we want to understand the logic under-
lying these sentences using words like must, might, know, think, etc.

(208) a. The girl knows that is a rainbow.


Entails:
b. The girl thinks that is a rainbow.


(209) a. I see all of the rainbows.


Entails:
b. I see some of the rainbows.

Words like must, might, know and think (but not some or all) are part
of a broader class of expressions known as intensional operators which
induce us to consider possibilities/possible worlds other than the current
one. Consider that to evaluate the non-intensional sentences in (209) we
need only consider the here and now: do I see all/some of the relevant
rainbows? But to evaluate the sentences in (207) or (208) we need to consider
other possibilities. For (207a) it seems that we need to somehow consider
every (relevant) possibility and check that there is a rainbow in every one. In
(207b) we need to make sure that there is a rainbow in at least one possibility
that we are considering: There might be a rainbow means roughly that in
some possibility that we’ll consider valid, there is a rainbow. We can reason
similarly about attitude predicates, of the sort we saw in (208): if The girl
knows that is a rainbow, then somehow in every possibility that the girl
can access through her knowledge, it’s a rainbow. The idea is that in The girl thinks that is a rainbow, in some possibilities that are part of her thoughts it is a rainbow, but perhaps in some of them it is not; it seems strictly weaker than know.
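To see why these entailments hold, here is a minimal sketch using standard possible-world denotations (Acc(w) is an assumed label for the set of accessible possibilities; the chapter returns below to whether a verb like think should receive a universal or a weaker, existential treatment):

[[must p]] = λw.∀w′[w′ ∈ Acc(w) → p(w′) = 1]
[[might p]] = λw.∃w′[w′ ∈ Acc(w) ∧ p(w′) = 1]

Provided Acc(w) is non-empty, whatever holds in all of the accessible possibilities holds in at least one of them, so must p entails might p; the know/think pair in (208) can be reasoned about in the same way, with the accessible possibilities supplied by the girl’s knowledge or her thoughts.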
This same intensionality is active in conditional statements like (210):
roughly, we can think of its meaning as expressing the claim that in all of the
possibilities in which the girl sees a rainbow, in those possibilities she
will run outside. In other words, we evaluate the consequent claim (she’ll
run outside) with respect to only the subset of possibilities in which she
sees a rainbow; it doesn’t say anything about the possibilities in which she
doesn’t see a rainbow.

(210) If the girl sees a rainbow, she’ll run outside.

Notice that at this point we seem to be introducing a pretty heterogeneous


group of expressions in terms of their morpho-syntactic properties/parts of
speech: must, may, can, should, etc. are functional auxiliary verbs in En-
glish (they require another verb to form a complete sentence), while know,
think, believe, hope, etc. are content verbs in English that introduce argu-
ments, and if... then pretty clearly doesn’t involve any verbs. Kratzer (1981,
1977) describes this “notional category of modality”, where here “modality”
refers not to what mode the language occurs in, but rather to the presence
of a “modal” expression. In turn, something being a “modal” expression
is defined not by any particular syntactic role it plays in the sentence but
rather whether it introduces quantification over sets of possibilities/other
possible worlds. We can also talk about expressions that induce quantifi-
cation over sets of possibilities as intensional operators, and since that term has no other meaning in sign linguistics (unlike “modality”), we will tend to use it more often than “modal” through the rest of this
chapter. An intensional operator, then, is one which adds some complexity
to the semantics by operating over possible worlds: this includes but is not
limited to conditionals (e.g. if... then in English), attitude predicates (e.g.
think, know, hope, want), and modal auxiliary verbs (e.g. can, must, might,
should).
Intensional operators are particularly interesting from the point of view
that we have been exploring in this book, of contributing both to represen-
tations of particular events and to representations of propositions and their
alternatives. On the eventive side of things, intensional operators can be es-
pecially evocative, allowing us to “paint a picture” of how something might
have been if things were otherwise, or how we really hope things will be. For
example, in the case of the conditional statement in (210), this might paint
a picture of sorts for the interlocutor with multiple parts, a bit like a
movie: first we might envision her looking out a window, and then we might
see her running outside. In fact, precisely because we sometimes tend to see
both sides of the conditional as highly related, they have often been used
as evidence for theories of meaning that involve constructing models that
we reason about through simulation (Johnson-Laird, 1980). For example,
if we are reasoning about this through simulation we might (erroneously,
but naturally) infer from this that if she ran outside, it’s because she saw
a rainbow, since we’re thinking about them as causally connected (but of
course, that inference isn’t entailed by the sentence).

(211) L If she sees a rainbow, she’ll run outside. M =

On the propositional side of things, we can model this relationship as


one of quantification over possible worlds. In this case, the antecedent is
the restriction; the consequent is the scope of the quantifier, and we simply
require that for any world in which she sees a rainbow, that world is
also a world in which she runs outside (212).

(212) J If she sees a rainbow, she’ll run outside. K


= [∀w.w ∈ {w.She sees a rainbow in w} → w ∈ {w.she runs outside in w}]

Domain restrictions for intensional operators are not always explicit in


the same way that they are in conditionals. Sometimes the restriction for
these worlds comes from the intensional operators, as we see in wants and
needs in (213).

(213) a. J She wants to go outside. K


= ∀w.w ∈ {w is compatible with her desires} → w ∈ {w.she is outside in w}
‘In every world compatible with her desires, she is outside in that
world.’
b. J She needs to go outside. K
= ∀w.w ∈ {w is compatible with her needs} → w ∈ {w.she is outside in w}
‘In every world compatible with her needs, she is outside in that
world.’
Compositionally, intensional operators can get complicated for several
reasons, in part due to the inherent complexity of introducing other possi-
ble worlds, and in part due to the notional category of modality and thus
the heterogeneity in its syntactic instantiation. Therefore, in this chapter
we will focus less on the compositional properties than in other chapters;
the interested reader can find more in a review by Hacquard (2010) and
lecture notes by von Fintel and Heim (2002). Instead, we will discuss differ-
ent semantic puzzles that arise with intensional operators throughout sign
languages, and attempt a better understanding through the interplay of dif-
ferent kinds of meaning and the quantification over possibilities introduced
by intensional operators.

1 Conditionals
One of the simplest intensional structures is the conditional statement. In the
DGS example in (214) we see a pattern familiar to many other sign languages
of the world (including ASL) in which the conditional can be expressed
through nonmanual marking. In the conditional statement in (214a), the
antecedent tomorrow rain is expressed with raised eyebrow nonmanuals,
and the meaning is that the consequent (we party cancel must) holds in all
of the restriction worlds, i.e. the worlds in which it rains tomorrow. In other
words, the utterance in (214a) is going to be judged true in a scenario in
which we’re not sure what the weather will be tomorrow, but we know that
we won’t hold a party in the rain. This contrasts with the (not conditional)
utterance in (214b), which will be unacceptable in that situation, since it is
not a conditional statement but rather two independent statements, the first
claiming that it will rain tomorrow (and thus, unacceptable in a scenario in
which we do not know whether it will rain).

(214) (DGS, Pfau and Steinbach 2016)


raised brow
a. tomorrow rain, we party cancel must.
‘If it rains tomorrow, we will have to cancel the party.’
b. tomorrow rain, we party cancel must.
‘It will rain tomorrow. We must cancel the party.’

One important point that this contrast highlights is that nonmanual


marking has important semantic consequences when it comes to intension-
ality: with raised brows the sentence is interpreted as a conditional, as in
(215a), while without the raised eyebrow nonmanual marking, we simply
conjoin the two propositions (215b).

raised brow
(215) a. J tomorrow rain, we party cancel must K
= ∀w.w ∈ {w.it rains tomorrow in w} → w ∈ {w.we cancel party in w}
‘For all worlds w in which it rains tomorrow, we will cancel the
party in w.’
b. J tomorrow rain, we party cancel must K
= λw∀p.p ∈ {It rains tomorrow, We cancel party} → p(w) = 1
‘The proposition expressed by the worlds in which it is both true
that it rains tomorrow and it is true that we cancel the party.’

There are several interesting takeaways from even this very simple dis-
cussion of conditionals. First, we see a new perspective on the broad notion
of intensionality: sign languages show that Kratzer’s “notional category
of modality” can be expressed through the multi-modal use of nonmanual
marking in sign languages, since this is the only distinction in form between
(215a-b). Second, nonmanual marking, although able to express lots of expe-
riential content like emotions, etc., is certainly also able to contribute propo-
sitional content through symbolic means, as we have seen earlier in Chapter
3 with negation and here with the expression of a conditional. Thus, as Pfau
and Steinbach (2016) note, nonmanual marking needs to be an integral part
of formal semantic analyses in sign languages, including and especially in
the intensional domain. This doesn’t mean that nonmanual marking and intensional operators necessarily go together in a privileged way:
in fact, negation shows us immediately that nonmanual marking expresses
non-intensional meaning. Furthermore, the use of nonmanual marking in
conditionals might rather be attributed to the syntactic structure of con-
ditional statements: Wilbur and Patschke (1999) argue that brow raising
is indicative of subordinate syntactic structures. Similarly, many sign lan-
guages have manual signs that introduce conditionals, including American
Sign Language. The conclusion, then, is not that the nonmanual expression is due to the intensional semantics, but rather that intensional
operators can come in forms that include nonmanual markings. Finally, we
see the use of another modal expression, must, in the consequent in (214).
As von Fintel and Heim (2002) note following Kratzer (1981), conditionals
are set up to interact naturally with other modals like these auxiliaries.

2 Attitude verbs
Intensionality can be expressed by main verbs, as we saw above in spoken
languages, and perhaps one of the simplest examples comes from verbs of desire, such as want. Consider the example from Chapter 6; there we focused on the countability of expressions like three apples; here
we can incorporate modal semantics for the verb want (217).

(216)

‘I want three apples.’

(217) Jix-1 want three apple K


= ∀w[w ∈ {w.w is compatible with my desires} → w ∈ {w.I have three apples in w}]
‘If w is in my desire worlds, then I have three apples in w’

We’ve already introduced quite a bit about attitude verbs and nonman-
ual marking in Chapter 5, when we discussed role shift introduced by verbs
like think, say, etc, but had not focused on their own semantic contribution
and the difference between embedding clauses vs. more demonstration-like
structures. Consider the example of the attitude predicate think in (218).
Here, we gain a window into Alex’s thoughts: the meaning seems to be a bit
different than the English sentence Alex thought his sister has a book which
implicates that he would agree that she does; in the ASL example in (218)
it seems less a claim about his belief and more a claim about the content of
his thoughts, that he was thinking/wondering about whether his sister has
a book. As a first pass, we might model this as in (219).

(218)

‘Alex was thinking that his sister might have a book.’

(219) J fs(Alex) think my sister have book K


= ∀w.w ∈ {w is compatible with Alex’s thoughts} → w ∈ {w.his sister has a book in w}

However, the analysis in (219) seems to fall short in a couple of ways. For
one thing, it doesn’t seem quite right that in all of Alex’s thought worlds,
his sister has a book. That’s probably something like the right semantics for
the English sentence Alex thinks his sister has a book, but as we noted, this
ASL utterance seems to have different truth conditions. Instead, we seem
to want to express the idea that Alex is considering this possibility, that it
is possible, not necessary, in his thought worlds. Thus, we might want to
model this as an existential quantifier, like in (220).

(220) J fs(Alex) think my sister have book K


= ∃w.w ∈ {w is compatible with Alex’s thoughts}∧w ∈ {w.his sister has a book in w}
‘Alex thinks it possible that his sister has a book.’ Lit. ‘In some of
the worlds in Alex’s thoughts (which I am showing you now), his
sister has a book’
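One way to see why (220) is the weaker claim: writing T for the set of worlds compatible with Alex’s thoughts (an abbreviation used only here), as long as T is non-empty the universal analysis in (219) entails the existential one in (220), but not vice versa:

∀w[w ∈ T → p(w)] together with T ≠ ∅ entails ∃w[w ∈ T ∧ p(w)]

So adopting (220) amounts to claiming only that this possibility figures among Alex’s thoughts, which matches the thinking/wondering paraphrase given above.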
Moreover, this sentence has the use of a first person pronoun (my) which
is different from the English sentence, which uses a third person (his). This
seems to be an example of indexical shift, of the sort that we discussed
in Chapter 5. Like other shifted indexicals, it seems to require that Alex
knows that the person he is talking about (his sister) is his sister. This is
known as the de se interpretation, that is, this requires that Alex would
be able to identify the person as his sister. All of these are introduced by
the attitude predicate think. We find this kind of complexity with verbs of
reporting, like ASL say, and they tend to be handled in different ways: de se
interpretations under a quotation/demonstration analysis are expected since
the speaker would not have used the first person if they could not express the

attitude de se. In the case of shifted indexicals it has remained somewhat


more stipulative, although the de se interpretation of shifted indexicals is a
cross-linguistically robust one (Deal, 2020), so sign languages would follow
the same expectations as for spoken languages.

3 Modals
Compared to attitude predicates, there has been relatively less work on
modal (auxiliary) verbs like can, can’t, should, etc. in ASL and in other
sign languages. Perhaps this is because they look rather unsurprising: for
example, the ASL modals should and can in (221) look quite a lot like
the English modals in conveying a similar meaning via a similar form, a
standalone morpheme, and in a similar syntactic environment, preceding
the main verb 1-drive-b.

(221) a.

‘She should drive to visit the flowers.’


b.

‘I decided that I can drive to visit the flowers.’

They also seem to express semantic differences in a similar way, differen-


tiating modal force through different lexical items: the first seems to be
expressing a kind of necessity (universal quantification over possible worlds
that are desired) while the second expresses a possibility (existential quan-
tification over possible worlds of some accessible sort). Not all languages
make this same distinction, so it’s not a necessary one (Deal, 2011), but it
seems to be made roughly the same way in both ASL and in English, at

least; research on other sign languages might find variation of the sort seen
in spoken languages.
That said, modals in ASL and other sign languages have some notable
syntactic/semantic properties that have been discussed in prior literature.
One of the most surprising, perhaps, is that they can sometimes be found
“doubled” at the end of an utterance (222). Known within generative syntac-
tic literature as “focus-doubling” (Lillo-Martin and de Quadros, 2004), they
have been analyzed as occupying a similar syntactic position as clause-final
negation of the sort we saw in Chapter 3.

(222) a. girl can read, can ‘The girl can read.’


b. girl must read, must ‘The girl must read.’

What is puzzling from a semantics perspective is why this set of expres-


sions that can occupy this sentence-final position pattern together (negation,
modals, wh-words, and main verbs). There is no obvious semantic class
that would connect negation and modals, and that’s not even to consider
that main verbs can be doubled too. Furthermore, although this is often
called a “focus” position, it doesn’t usually seem like constituent focus of the
sort that we saw occuring as the answers to the Question-Answer Clauses
in Chapter 2; recall that while we can double a wh-word, we don’t generally
double answers to a question, despite that being a classic diagnostic for focus.
One way to view these doubles is as a sort of verum focus (David-
son and Koulidobrova, 2015), where the semantic/pragmatic contribution
is not focusing any particular constituent but instead expressing the seman-
tic/pragmatic notion of the sort we see with stress on English auxiliaries,
e.g. She DID read the book, He CAN’T come to the party. This is also
consistent with (but not necessarily requiring) the idea that it would be ex-
pressed in a dedicated syntactic position for polarity (Geraci, 2005). If the
doubling of these modals is indeed an expression of verum, we could analyze
it, following Gutzmann et al. (2020), as a relation between the content and
the question under discussion. This doesn’t entirely solve the question of
why this particular set of items (negation, auxiliary verbs, etc.) expresses
verum crosslinguistically, but it does move the puzzle in sign languages to
part of the more general puzzle in a modality-independent way, since the
same classes of expressions are found across languages.

4 Intensional predicates and iconicity


Finally, one place that intensionality has been discussed in experimental
sign language literature regards the relationship of intensional meanings
and their influence on word order. Napoli et al. (2017) observe that word
order in sign languages is often flexible and seems to depend on aspects

Figure 8.1: Figure 1 from Napoli et al. 2017

of meaning, such as objects preceding verbs when there is an especially


pictorial aspect to a verb (Liddell, 1980). Napoli et al. (2017) design an
experiment to test one possible effect on word order, namely, whether the
main verb is an intensional operator or not. This builds on a study which
reported that predicates that seem to express intensional meanings lead to
a change in word order for silent gesturers (Schouwstra and de Swart, 2014);
Napoli et al. (2017) ask if word order in sign languages might be similarly
influenced.
The task goes something like this: participants were presented with il-
lustrations that are intended to evoke descriptions that involve intensional
predicates like want or think about, and others that are intended to evoke
non-intensional predicates like see. Examples are in Figure 8.1. Deaf signers of Libras (Brazilian Sign Language) were invited to describe the pictures,
and their answers may include one or more clauses. Napoli et al. (2017)
find results that are on the one hand quite compatible with Schouwstra and
de Swart’s gesture study: the “intensional” predicates often involved word
orders with the verb preceding the object (VO) (e.g. cook dream sax
‘The cook was dreaming of the sax.’) while the “extensional” predicates had relatively more objects preceding the verb (OV) (e.g. gnome tower
look ‘The gnome was looking at the tower.’)
There is much to build on related to this study, first with regards to
terminology: it’s not at all clear that the difference between the verb classes
here is truly about intensionality, since many of the so-called “intensional”
verbs do not involve intensionality in any formal semantic sense, especially
verbs of creation (e.g. knit, draw). That said, there does seem to be some-
thing real about the tendency that this experiment draws out: word order
is semantically influenced by properties of the event. Napoli et al. (2017)
provide several possible explanations, including the age of sign languages
(and thus subject to different constraints in terms of change) as well as
the “heaviness” of the verb (many of the “extensional” verbs involve more
morphological complexity and depiction, such as look - see Hou (2022))

or “iconicity” in a broad sense. It is of course, as they note, possible that


these factors are interrelated as well: highly depictive verbs might occur
sentence-finally precisely because they are so heavy, for example, although
these are also in principle dissociable. Schlenker et al. (2022) make a con-
vincing case in recent work also for the role of iconicity, showing that two
extensional predicates that differ in whether the object was visible before or
after the event (eat-up vs. spit-out) show the same distinction, and that
the pattern holds both in ASL and in silent gesture.

5 De dicto/de re
One final topic worth mentioning that is related both to attitude predicates
and to the word order distinction from the last section deals with the in-
terpretation of objects under the scope of intensional operators. Consider,
for example, the noun phrase three apple in (223). Interestingly, three
apples don’t even need to exist at all in the world in which this sentence is
expressed, and yet this sentence is acceptable; what matters for acceptabil-
ity is that three apples exist in all of the signer’s “desire worlds”, e.g. the
signer really wants three apples.

(223) Context: It’s unclear if there are any apples around, but the signer
really needs three apples to make her recipe.

‘I want three apples.’


(acceptable in this context)

We call this the de dicto interpretation of the object, which can exist in the
worlds invoked by the intensional operator (in this case, the desire worlds
of the signer) without necessarily existing in the actual world.
In formal semantic theories, de dicto interpretations of noun phrases
under intensional operators have been traditionally modeled as an effect
of scope. Consider, for example, two very different ways to understand
the noun phrase a prince in the English example in (224). In one (“de
dicto”) interpretation the sentence is unacceptable since Aurora’s desires
need not include princehood for her future husband; yet under another (“de
re”) interpretation the sentence is fine, if by a prince we mean the specific
person who we know to be a prince.

(224) Context: In the story Sleeping Beauty, the princess Aurora wants to
marry a man she met in the woods; her family wants her to marry
the prince of the neighboring kingdom. Aurora doesn’t realize that
the man she met is actually the very same prince of the neighboring
kingdom.
Sentence: Aurora wants to marry a prince.
a. (Not acceptable, if we interpret a prince de dicto, since her desire
worlds include marrying this particular man, and she thinks he’s
not a prince)
b. (Acceptable, if we interpret a prince de re, that is, by the way
that the participants in the conversation could describe him from
their perspective, as in fact he is the prince of the neighboring
kingdom)

We can, glossing over many details and complications we can’t get into here,
model this as scope, with the idea that in the de re case we have a (specific)
prince, and say something about him (namely, that Aurore wants to marry
him) (225a). In contrast, in the de dicto case we are saying something about
Aurora’s desires, namely, that they include prince-marrying (225b).

(225) a. ∃x[prince(x) ∧ ∀w[w ∈ {w.w is compatible with Aurora’s desires}
→ marries(Aurora, x) in w]] de re
b. ∀w[w ∈ {w.w is compatible with Aurora’s desires}
→ ∃x(prince(x) ∧ marries(Aurora, x) in w)] de dicto
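To check that these formulas deliver the judgments in (224), we can evaluate each one in the Sleeping Beauty context (a worked sketch, glossing over the same details as the surrounding discussion). For the de dicto formula in (225b): Aurora’s desire worlds include worlds in which the man she marries is not a prince, so the condition ∃x(prince(x) ∧ marries(Aurora, x)) fails in those worlds and the universal claim is false, matching the unacceptability in (224a). For the de re formula in (225a): there is an actual individual, the neighboring prince (who is in fact the man from the woods), whom Aurora marries in all of her desire worlds, so the claim is true, matching the acceptability in (224b).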

The important takeaway for current purposes is that the English sentence
seems to have two different interpretations, one of which has the object inter-
preted “outside” of the verb and another in which the object is interpreted
entirely “inside”/in the scope of the attitude verb, despite both interpretations having the same word order, marry a prince. The reason this is rel-
evant to sign linguistics is that we might expect word order effects to care
about this difference, since many sign languages seem to have more flexible
word order with respect to argument structure that changes in a way that
can reflect semantic scope. Consider (226), where the difference is naturally
expressed in the de dicto case by taking Aurora’s desires as the topic, and in
the de re case by topicalizing the prince: here, word order seems to reflect
scope order, resulting in a lack of ambiguity in ASL where one exists in
English.

(226) Context: In the story Sleeping Beauty, the princess Aurora wants to
marry a man she met in the woods; her family wants her to marry
the prince of the neighboring kingdom. Aurora doesn’t realize that
the man she met is actually the very same prince of the neighboring
kingdom.

a. fs(Aurore) want what, marry prince


(Not acceptable, if we interpret prince de dicto, since her desire
worlds include marrying this man, but she thinks he’s not a
prince.)
b. prince ix(at a), fs(Aurore) want marry ix(at a)
(Acceptable, if we interpret prince de re, that is, by the way
that we could describe him from our perspective, as in fact the
prince of the neighboring kingdom)

The takeaway from (226) is not that ASL and other sign languages lack
the de re/de dicto ambiguity, but rather that (as with many other languages)
there are many ways to minimize ambiguity, and a word order/syntax that is
sensitive to information structural differences is one way to disambiguate the
scopal relationships of noun phrases and intensional operators. This might
lead to word order differences in intensional contexts, perhaps yet one more
pressure on the word order findings from Napoli et al. (2017) for Libras, not
just iconicity but also semantic scope.

6 Conclusions
This chapter is perhaps the most speculative of this text, in part because the
formalizations get much more complicated than we are able to go into in this
context, and in part because intensionality remains one of the least researched areas in sign languages. Nevertheless, there are some important
findings in this area, and great possibilities for future directions.
Although modals are more semantically complex than negation, there are interesting parallelisms between negation and modals in a couple of places in sign languages, and those parallelisms are worth pursuing for further understanding. For one thing, negation and modals can both appear “doubled” in sentence-final position, perhaps both conveying verum focus. Negation and modals also seem optionally to induce word order
variation based on scope. For example, we saw how de re readings seem to
be highlighted by topicalizing the object in a sentence with an intensional
verb; the same can be seen with negation, where placing, say, a depiction
sentence-initially seems to ensure it is out of the scope of negation, as we
saw in Chapter 5.
Understanding the role of scope, iconicity, and word order across sign
languages is surely a future research path for formal semantics/pragmatics in
sign languages. The more general topic of intensionality is even broader than
conditionals, attitude verbs, and modal auxiliaries, encompassing a wide
range of expressions and potentially (in fact, likely) nonmanual markings as
well. In addition, Herrmann (2013) discusses modal-like meanings arising
from discourse particles and inter-sentential operators in DGS, along with

their relationship with nonmanual marking, and more on this kind of work
is surely due for other sign languages.
9 Conclusions

The goal of this book has been to showcase research that connects formal
semantics and sign linguistics across a variety of phemonena and hopefully
in doing so to potentially bring together different researchers across these
two domains to work together on future projects.

1 Events and propositions


One of the main threads throughout this book, introduced in Chapter 1, has
been the simultaneous contributions of meaning via depiction and descrip-
tion. We have used this to motivate a distinction in types of meaning: the
content of iconic depiction can affect event representations (only), whereas
the content of symbolic description can affect both representations of par-
ticular events as well as propositional content that supports reasoning over
alternatives. This distinction allowed us to separate the depictive from the
descriptive contributions of space in discourse anaphora in Chapter 4. It allowed us to model quotation-like role shift along with
depictive classifiers in Chapter 5. Finally, we were able to separate some
of the conventionalized from less conventionalized aspects of quantification
and countability in Chapters 6-7.
In terms of the relationship between iconicity and symbolic linguistic
structures, the general picture pursued in this text is one in which depiction
can exist first (say, in one’s lifespan or in a community) independently of
language: it requires no understanding of symbols and their composition to
be understood but rather various kinds of mappings to particular events,
although there may well be conventions to depiction within cultures and
contexts. In addition, use of a particular form with a particular meaning can
become conventionalized within a language community so that it becomes
a symbol available for composition, abstracted away from any particular
event representation. As part of this process, it can take on even more abstract
meanings and participate in complex compositional semantic structures. It
can also, if it retains a (descriptively) iconic form, be used as support for

further depictions. This is the case with signs like ASL or words
like English knock, which are symbols with full linguistic potential that also
retain enough iconicity to easily support further depiction.
When we talk about a semantic analysis for ASL, we focus on separat-
ing propositional meaning from representations of events, and understanding
how the two interact. To illustrate the power of this, we can return to an
example from Chapter 1. Recall the sentence about students waiting in line
at the library, in (227). In Chapter 1, we discussed important entailments of
this sentence. Now that we have provided several examples of semantic anal-
yses of various phenomena in the intervening chapters, we can understand
what a simple semantic analysis would look like for this bit of language based
on the analyses given at the end of Chapters 5 and 6, and also understand
why those inferences follow.

(227) ‘Ten students stood in a (perpendicular) line at the library desk.
      None of them remembered a library card.’
The first part of this dialogue is the locational description library ta-
ble, which sets up the location as a kind of topic. The topic-status is also
reflected in the raised eyebrow nonmanuals. The propositional contribution
is to raise the Question Under Discussion What happened near the library
desk?, as in (228); it also naturally evokes an event representation that in-
cludes the library desk object.

(228) Raises QUD: What happened near the library desk?
      e.g. raises possibilities of this sort:
      {The desk caught on fire, we ate pie on the library desk, the librarian
      cleaned the library desk, students lined up behind the desk, ...}

The continuation moves the dialogue forward to partially answer this
question, eliminating possibilities via its propositional contributions, and
simultaneously depicting aspects of the event.
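
To make the idea of “eliminating possibilities” concrete, the following small sketch (an illustration added here, not part of the formal analysis) treats the QUD in (228) as a set of alternative propositions, models worlds simply as sets of atomic facts, and shows how asserting the first sentence of (227) prunes the alternatives incompatible with it; the particular facts, worlds, and alternatives are invented for the example.

# Minimal sketch: the QUD as a set of alternative propositions, pruned by an
# assertion. Worlds are modeled as frozensets of atomic facts; the facts and
# alternatives are invented for illustration.

def proposition(fact):
    # A proposition, modeled as its characteristic function over worlds
    return lambda world: fact in world

qud = {
    "the desk caught on fire": proposition("fire"),
    "we ate pie on the desk": proposition("pie"),
    "the librarian cleaned the desk": proposition("cleaned"),
    "students lined up behind the desk": proposition("lined-up"),
}

def assert_and_prune(qud, assertion, candidate_worlds):
    # Keep the candidate worlds where the assertion is true, and keep only
    # the alternatives still live in at least one surviving world
    surviving = [w for w in candidate_worlds if assertion(w)]
    live = {label: p for label, p in qud.items() if any(p(w) for w in surviving)}
    return live, surviving

worlds = [frozenset({"lined-up"}), frozenset({"pie"}),
          frozenset({"fire", "lined-up"})]

# Asserting the first sentence of (227): students lined up behind the desk
live, remaining_worlds = assert_and_prune(qud, proposition("lined-up"), worlds)
print(sorted(live))  # the pie and cleaning alternatives have been eliminated

Nothing hinges on this particular encoding; the point is only that the propositional contribution interacts with the QUD by removing alternatives, while the depictive contribution enriches the event representation without participating in this pruning.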
In (229a), we provide the propositional contribution that we computed
compositionally at the end of Chapter 5. This is the proposition that con-
tains exactly those worlds in which there are at least ten members in the in-
tersection of the set of students and the set of upright figures who are themes

of events demonstrated by the depicting sign . Naturally, this


also adds details to our representation of an event, with approximately ten
student objects arranged in the way demonstrated in the depicting sign,
parallel to the desk (229b).

(229) a. ⟦ ⟧ = λw. | {x. ∃v[demonstrate( , v) ∧ theme(x, v) ∧ R(x, a) ∧
         upright-figure(x)]} ∩ {x. x is a student} | ≥ 10 in w
         lit. the proposition defined as the set of worlds in which there
         are at least ten individuals in the intersection of the set of
         students and the set of upright figures who were themes of events
         depicted by the depicting sign and are related to location a
      b. ⦅ ⦆ = [event representation: approximately ten upright figures
         arranged in a line parallel to the desk]
Similarly, in (230a), we provide the propositional contribution that we


computed compositionally at the end of Chapter 6. This is the proposition
consisting of exactly the worlds in which there is no individual that both
remembered a card and that is in the plural individual related to the locus
a (the lined up students). This is a claim of non-existence; no objects or
events are entailed to exist by this sentence, and so (in keeping things simple
without going into mental models of negation) the active mental model is
not affected by this sentence, persisting as in (230b).

(230) a. ⟦ ⟧ = λw. {x. x ≤ ιx.R(x, a)} ∩ {x. x remembered a card} = ∅ in w
      b. ⦅ ⦆ = [event representation unchanged from (229b)]
What does this gain us in terms of “meaning” and reasoning? For one
thing, we can reason through simulation via the event representation en-

coded as some kind of model, e.g. as in (229b). Such models may vary quite
a bit between people in a conversation; the more detail provided in an ut-
terance, including and especially via depiction, the more likely people will
be to align on their representations of a particular event. The other kind of
meaning is propositional, which we can model as a series of questions and
answers. We take the topic to be raising the QUD in (231a), taken from
above in (228). The answer/resolution comes from the dialogue that follows,
in which we take the meaning in (231b) and (231c) and conjoin them as in
(231d).

(231) a. QUD: What happened near the library desk?
         {The desk caught on fire, we ate pie on the desk, the librarian
         cleaned the library desk, students lined up behind the desk, ...}
      b. λw. | {x. ∃v[demonstrate( , v) ∧ theme(x, v) ∧ R(x, a) ∧
         upright-figure(x)]} ∩ {x. x is a student} | ≥ 10 in w
      c. λw. {x. x ≤ ιx.R(x, a)} ∩ {x. x remembered a card} = ∅ in w
      d. λw. | {x. ∃v[demonstrate( , v) ∧ theme(x, v) ∧ R(x, a) ∧
         upright-figure(x)]} ∩ {x. x is a student} | ≥ 10
         ∧ {x. x ≤ ιx.R(x, a)} ∩ {x. x remembered a card} = ∅ in w
      e. Answer to QUD: ‘The proposition consisting of the set of worlds
         in which there are at least ten individuals in the intersection of
         the set of students and the set of upright figures related to locus
         a who were themes of an event depicted by the depicting sign, and in
         which there is no individual that both remembered a card and
         that is in the plural individual related to locus a (the students)’

The paraphrase in (231e) is painful to read, especially because it doesn’t allow
for the scope clarity of the formalism in (231d), but the goal of (231e) is
to give the reader a sense of what is expressed in long form in (231d). Why
don’t we just use the natural English, e.g. Ten students stood in line next
to the library desk. None remembered their card.? Because this is far too
ambiguous, and although perhaps a good translation equivalent, it doesn’t
accurately capture the truth conditions of the ASL sentence. For example,
it doesn’t convey the spatial relationship of the students and the desk, and
has a somewhat different information structure, i.e. it doesn’t seem to be so
clearly addressing a QUD about what happened at the desk. Recall that our
goal is to accurately predict the entailments and acceptability judgements of
the sort reported in Chapter 1, and this formalism is going to be much more
successful in doing this for ASL than a translation into another language
(e.g. English) could be. It also allows us to understand the semantics more
directly, without reference to English and its idiosyncrasies, but instead with
respect to a logical language that is more plausibly shared between human
languages (and, perhaps, present also in non-linguistic thought). For exam-
ple, in a situation/world in which only seven students were near the desk,
the proposition expressed by (231e) would return FALSE for this world, and
in that context, the utterance is predictably unacceptable. Similarly, in a
situation/world in which there is a student among a group standing near the
desk who did remember their library card, the proposition in (231e) will return
FALSE for this world, and fittingly, the sentence is unacceptable.
These propositions/functions seem to be roughly working as they should.
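
As a sanity check on these predictions, here is a minimal sketch (again illustrative only, with toy worlds invented for the purpose) that encodes (231b), (231c), and their conjunction (231d) as predicates over simple world records listing who is a student, who is lined up at locus a, and who remembered a card; the seven-student world and the remembered-card world both come out FALSE, as described above.

# Toy encoding of (231b-d) as predicates over simple world records; the
# encoding (sets of named individuals) is an illustrative assumption.

def p_lined_up(world):
    # (231b): at least ten individuals are both students and upright figures
    # lined up at locus a
    return len(world["students"] & world["lined_up_at_a"]) >= 10

def p_no_card(world):
    # (231c): no individual at locus a remembered a card
    return len(world["lined_up_at_a"] & world["remembered_card"]) == 0

def p_answer(world):
    # (231d): the conjunction of (231b) and (231c)
    return p_lined_up(world) and p_no_card(world)

def make_world(n_students, rememberers=()):
    people = {f"s{i}" for i in range(n_students)}
    return {"students": people, "lined_up_at_a": set(people),
            "remembered_card": set(rememberers)}

print(p_answer(make_world(7)))           # False: only seven students near the desk
print(p_answer(make_world(10, {"s3"})))  # False: one student remembered a card
print(p_answer(make_world(10)))          # True: the described scenario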
This proposal of strongly separating model meaning from propositional
meaning is in some opposition to various other ways of accounting for iconic-
ity in language, broadly, and in sign languages, specifically. For example, in
Cognitive Linguistic approaches, meaning from iconic depictions and mean-
ing from descriptions both contribute to the same kind of model-like rep-
resentations of events (Liddell, 2003; Taub, 2001). These approaches natu-
rally emphasize iconicity in language, as iconicity is expected under a view
in which language (of all sorts) contributes to constructing a model of the
world. Mental models are usually taken to be iconic representations in the
human mind, with the idea that you can represent the world in the format
in which you interact with it, and which allows you to investigate and test
inferences about the world via simulation (Johnson-Laird, 1980). Naturally,
using forms in language that also seem to mirror the world seems like a di-
rect source for encoding features of the world in your mind and that of your
interlocutor (i.e. to “paint a picture” in someone’s mind); what becomes
much more challenging to model are the symbolic components of language,

especially those that express relations difficult to simulate, such as

or .
On the other end of the spectrum, we find approaches to meaning which
are purely truth conditional, and which incorporate iconicity as a new type
of constraint on these truth conditions. One way to account for this is as a
presupposition, as in the iconic presuppositions in Schlenker et al. (2013) and
Schlenker (2021). Kuhn (2017b) proposes that language allows generally for
an iconic function Icon ϕ which can be used to map forms to meaning and
imposes on them a mapping between their referent in the world (e.g. perhaps
the timing of an event) and the form of a sign (e.g. the duration of a verb).
This proposal manages nicely to capture the gradient meaning requirements
encoded in depiction, but not so much why it doesn’t integrate with logical
operators like negation. What is so different about iconic meaning? It
also raises some unanswered general questions about how it integrates with
the rest of the grammar: how do we know when Icon ϕ is present or not?

Is it inherent in descriptively iconic conventionalized symbols like ,


or only when iconicity is interpreted, i.e. in depictively iconic signs like

?
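
Schematically, one can picture such an iconic function as taking a property of the form and returning an additional truth-conditional constraint. The sketch below is an invented illustration of that general idea (the function name and the linear scaling are assumptions for this example, not Kuhn’s actual definition): the longer a verb is articulated, the longer the event it describes must last.

# Invented illustration of an iconic mapping from form to meaning: the longer
# the verb is articulated, the longer the described event must last. The
# function name and the linear scaling are assumptions for this sketch only.

def icon(sign_duration_s, scale=10.0):
    # Return an iconic constraint on events imposed by the articulated form
    def constraint(event):
        return event["duration_s"] >= scale * sign_duration_s
    return constraint

slow_articulation = icon(1.5)   # verb held for 1.5 seconds
fast_articulation = icon(0.4)   # verb articulated quickly

event = {"duration_s": 10.0}    # an event lasting ten seconds
print(slow_articulation(event)) # False: the event is too short for the slow form
print(fast_articulation(event)) # True

The open questions raised in the text remain, of course: nothing in such a wrapper itself determines when it applies, or why its output resists interacting with operators like negation.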
A third approach proposed by Ramchand (2019) is to divide model-
like meaning for events and propositional meaning for clauses essentially by
syntactic domain, following a larger research project of matching syntactic
domains with semantic ontology. Her proposal is that within the syntac-
tic domain of the verb phrase (the predicate and its arguments, along with
its adjuncts, but before tense and aspect are included), the event type is
created through a system of meaning construction/mental model creation
similar to that proposed in cognitive linguistics frameworks. Then, this
enters as an argument into a function that maps the event into a truth con-
ditional/possible worlds framework, creating a final representation that does
have truth conditions, but is no longer interpreted as iconic. One problem
with this approach is that it seems to overgenerate the iconicity interpreted

within the verb phrase. For example, both the ASL signs or vote
and the English words knock and chirp have the potential to be used purely
symbolically in a way that their form does not affect the representation of
an event in which they participate; for example, we can use the same sign
with the same form to talk about a table sitting on its side.
The approach in most of this text has taken both kinds of meaning to
be in parallel, which has empirical advantages over the other three systems.
The downside, clearly, is in theoretical parsimony: why have two theories of
meaning when one will do? The argument for the dual approach (of event
representations of particulars and of propositions for which we build alter-
natives) comes from empirical coverage crossed with parsimony: we could
stretch a truth conditional approach to include iconic functions to account
for both propositional and iconic aspects, but then we lose predictions for
when and where this iconic function appears. We could also stretch a cog-
nitive linguistic approach to cover proposition-like meaning, but this loses
an enormous amount of the explanatory power that we focused on in Chapters
2-3, where cognitive linguistics has little to say about, for example, nega-
tion, connectives, and entailment. A dual approach allows more complete
coverage of these phenomena while at the same time fitting in with a larger
picture of the mind in cognitive science as making use of dual/parallel pro-
cesses (Kahneman and Tversky, 2013); see Baggio (2021) for compositionality
in particular.

2 On pragmatic universality
This book has mostly considered semantics and pragmatics as intertwined
topics. For example, in Chapter 2 we looked at the semantics of phenom-
ena like question-answer clauses by understanding their pragmatic roles and
relation to a Question Under Discussion, and understood logical operators
in Chapter 3 as part of a system that involves both functions over propo-
sitions and related scalar implicatures. But, we haven’t addressed formal
pragmatics in a head-on way: is there anything generally special about sign
languages and pragmatics? In other words, do Gricean Maxims apply within
the sign language context (Henner, 2022)?


Part of the answer to this question concerns variation among language
users themselves, which, like everything about language
context, is crucial to the underpinnings of meaning and yet is beyond the
scope of this book. Henner and Robinson (2021) and the references therein
are an excellent place to start in order to address any misguided norma-
tive aspects to a Gricean framework of modeling pragmatic meaning. They
would seem to also agree that variation by individuals with respect to these
frameworks is expected in any language modality. So putting aside the
way that different people interact with a Gricean framework in language in
general, whatever the language modality, is there in addition a principled
reason related to the visual modality that might make one wonder about
the applicability of formalizing pragmatics in this way in sign languages?
I would argue in part yes, but it is not because of the modality per se,
but rather more directly due to semiotic contributions, and only indirectly
via modality, since the visual modality is especially well suited to integrating
depiction and description. Description via symbolic means is always going
to be subject to some form of Gricean reasoning: provide truthful informa-
tion in an efficient manner, whatever “efficient” means for a given group of
participants in a given context. Sign languages are full of symbolic logical
structure just like spoken languages, and sign language dialogues are simi-
larly chock full of pragmatic inferences, just like spoken language. However,
not all language is description and thus meant for efficiency: there is no sim-
ilar need for efficiency in depiction. Consider, for example, that providing
more or less detail in a picture is not going to violate maxims for quantity in
the same way. Kita (1997) observes this asymmetry for depiction in spoken
Japanese, for which he notes that two symbolic modifiers with similar mean-
ings are taken by speakers to be redundant (and thus unacceptable due to
pragmatics), while a depictive modifier with similar meaning to a symbolic
modifier is not seen as redundant at all (232).

(232) (Japanese, Kita 1997)


a. Symbolic modifier
Taro wa isogi -asi de arui -ta
Top hurriedly feet with walk Past
‘Taro walked hurriedly’
(lit. ‘Taro walked with hurried feet’).
b. Depictive modifier
Taro wa sutasuta to arui -ta
Top mimetic walk Past
(sutasuta = hurried walk of a human)
‘Taro walked hurriedly’
c. Redundant symbolic modifiers
*Taro wa isogi -asi de haya -aruki o si -ta
Top hurriedly feet with haste walk Acc do Past
‘Taro walked hastily hurriedly’
(lit. ‘Taro did haste-walk with hurried feet’, unacceptable)
d. Depictive and symbolic modifiers, no redundancy
Taro wa sutasuta to haya -aruki o si -ta
Top mimetic haste walk Acc do Past
‘Taro walked hurriedly’

Japanese mimetics/ideophones are a helpful example of depiction and


its pragmatic/semantic consequences in the spoken language modality, and
illustrate that similar phenomena in sign languages should be considered in
light of their semiotic contribution. When we look at a piece of sign
language discourse, we are not infrequently viewing both the descriptive
and depictive components, only one of which is going to be subject
to pragmatic constraints like quantity maxims in the same way.
More broadly, formal pragmatics has relied extensively on written lan-
guages as a basis for theoretical analysis. Extending this analysis to spoken
and signed languages is a welcome development, but in doing so we should
keep our minds open to the possibility that contexts change, and semiotic
repertoires change, in a way that may raise new questions obscured by the
focus on written language.

3 Cross-linguistic typology
In some areas of linguistics, cross-linguistic variation has long been a driv-
ing question: phonology/phonetics naturally has been interested in how
patterns vary from language to language below the level of stored mor-
phemes/symbols, and studies in morphology and syntax have similarly long
acknowledged the importance of understanding variation in lexical and syn-
tactic structure (while, of course, all emphasizing similarities across lan-
guages as well). Semantics, on the other hand, has long been a less natural
fit for crosslinguistic investigations. Partly this is because it connects with
two external disciplines that are less focused on variation: logic, and psychol-
ogy/psycholinguistics. But it is also partly because there is less obviously a
case for variation, and in fact, one might reasonably imagine that semantics
is a place where human languages are more similar to each other than they
are different: the pieces of meaning might indeed be universal even if the
ways that we express them vary. So it has been with excitement that recent
work in formal semantics on crosslinguistic variation has raised many inter-
esting questions about the burden of proof for arguing that languages have
different semantics and/or what kinds of semantic interactions we expect to
see across languages. There are two forms that this typically takes: 1) units
of meaning are basically the same, but the way that languages categorize
and express them is different, or 2) units of meaning themselves vary, so
that some languages are working with different pieces than others.
An example of the first category is work on quantification, where the
expectation and general understanding is that languages all seem to have
the ability to express quantification, but how it gets mapped to linguistic
forms can vary. Work on sign languages has shown similar variation to spo-
ken languages, when comparing American Sign Language (Petronio, 1995;
Abner and Wilbur, 2017) and Russian Sign Language (Kimmelman, 2017),
for example, and evidence for quantification can be found in the much more
recently conventionalized Nicaraguan Sign Language as well (Kocab et al.,
2022). Work on logical connectives like conjunction, disjunction, and nega-
tion tends to take a similar form: the expectation is generally that languages
have the same underlying logical organization that supports propositional
meanings and operators on these meanings, but the way that they are ex-
pressed can differ. In sign languages we see these kinds of proposals for the
use of boolean connectives (Davidson, 2013; Zorzi, 2018; Asada, 2019) and
the interplay between manual and nonmanual negation across sign languages
(Zeshan, 2006; Kuhn and Pasalskaya, 2019).
The second category of cross-linguistic variation in semantics concerns
variation in the possible pieces of meaning themselves. One place this arises
is in the study of the way that gradability is expressed. For example, Aris-
todemo and Geraci (2018) argue that Italian SL (LIS) not only has de-
grees as part of the semantic ontology (roughly, semantic units from which
comparisons are made), but makes them “visible” through the use of gra-
dient/iconic signing space. In contrast, Koulidobrova et al. (2022) argue
that ASL patterns with spoken languages like Washo (Bochnak, 2015) in not
having degrees in their ontology/as ingredients for semantic composition; as
a consequence of this, comparison in ASL is expressed with a different set
of expressions than would be possible if degrees were involved. Note that
this doesn’t mean that comparison can’t be expressed in some way: it’s not
a theory of expressibility, but rather a claim that the form comparison takes
is determined by the semantic primitives that it makes use of.
Perhaps this is a case of sign languages having different semantic ingredients
from each other, in the way that spoken languages have been argued to vary.
We can find similar discussions of crosslinguistic variation in semantic
ingredients being made in the literature on tense, where all languages can
clearly express that events happened in a past time even if they do not
seem to have specialized elements of tense, or alternatively, we can postu-
late that they are universal but that there is not direct evidence for them in
some languages (Matthewson, 2006). It can also be seen in the literature on
mass/count, where cross-linguistic variation shows distinct sets of patterns
which can argue for languages beginning with different sets of semantic in-
gredients (Chierchia, 1998); in sign languages, similar analyses have been
applied to American Sign Language (Koulidobrova, 2021). Again: the claim


throughout all of these works is that languages have the same potential for
expression, but that in some cases the semantic primitives/units of meaning
that they start with affect how they express these meanings.
Future work investigating cross-linguistic variation in semantics in sign
languages can surely find fruitful ground in many of these areas, and many
more besides. Moreover, sign languages also open up possible sources of
variation that have themselves been overlooked in spoken languages such as
the manual/non-manual (/suprasegmental vs. segmental) interplay. In this
book we have clearly focused on a small number of sign languages, primar-
ily American Sign Language, due to familiarity, yet there are hundreds of
sign languages all over the world wherever small or large Deaf communities
come together, so let this be a call for more work by researchers working with
these communities, or by members of the communities themselves, to investigate
these questions in a broader
set of languages.

4 Historical change
Like so much of linguistic theory on phonology, morphology, and syntax, the
fields of formal semantics and pragmatics have both overwhelmingly focused
on synchronic language from a particular time and place and language com-
munity, but of course, we know that language is always changing, always in
flux, across all of these variables (time, place, and person). Clearly, there
is interesting work to do to tie together formal approaches to meaning and
language change, and in recent years there have been important advances
in this domain in spoken languages, highlighted in the regular conference
on Formal Approaches to Diachronic Semantics; see Deo (2015a) for an
overview of this kind of work.
Given that historical linguistics and formal semantics have only very
recently been connected in spoken languages it is perhaps not a surprise that
there is also minimal work in this domain in signed languages. Nonetheless,
what we do know suggests an extremely rich area for investigation. In one
recent notable study, Sampson and Mayberry (2022) investigate changes

in meaning of the ASL sign self. Based on archival video footage of


century-old sign language productions, they show that the sign originates

from a personal pronoun similar in use to the current , but with the
handshape currently used in self. This pronominal form was interpreted by
later generations of users of the signing community as a reflexive pronoun,
hence the current sense as ‘self’. Moreover, Sampson and Mayberry show
based on analysis of current signing productions that among young current
signers, the same sign can also be used as a copula (‘be’). On the one hand,
this proposes a semantic trajectory of a language in flux of exactly the
sort we would expect to see in spoken languages in the pronominal domain;
on the other hand, it illustrates especially careful synchronic work as well
as archival work given the video format of historical data on sign languages,
and hopefully will be a model for future work on sign language meaning
change.
Another category of work that tackles the question of historical semantic
change in sign languages focuses on languages in which changes across time
can be studied through the generations of signers who still make up the lan-
guage community today, such as work on Nicaraguan Sign Language that
investigates the way that space is used in the grammar across successive
generations (Senghas, 2010; Kocab et al., 2015) and the kinds of seman-
tic/syntactic structures that are available in different generations of this
new language (Kocab et al., 2016). Meir et al. (2010) discuss the process of
emerging conventionalization in language communities more generally, and
Tomita (2021) provides an in-depth look at change in form and mean-
ing (of indexical pointing signs) across three decades by a single individual
signer of Japanese Sign Language. All of these provide insight into how the
structure of a language community and the context of an individual influ-
ence the way that linguistic form and meaning change over time, and this
more broadly bears on foundational questions in language and cognition.
There is no question that work in this domain with respect to compositional
semantics can help us better understand the relationship between language
as mental and community activity, both in flux. Of course, the investiga-
tion should eventually comprise language in all modalities, most notably
including the semantics of tactile languages of Deafblind communities (see
Checchetto et al. 2018 for one example of change in modality from visual to
tactile).

5 Future directions
Recall we began with the question: how do we know what other people
“mean” when they share ideas using language? One thread through this
book is that we need to think about this question across multiple dimen-
sions: on the one hand, we have a logical-propositional system that seems
to use logical structure to support the communication of information about
the world. On the other hand, we seem to be able to “paint a picture”
in someone’s mind in the sense that we can help them build up a model
of a particular scenario or event that they can reason about through simulated
experience. An approach to semantics that considers both of these ways
of representing meaning is useful in general, but especially useful for un-


derstanding the interplay of description and depiction in languages that go
beyond text, and in the case of the focus of this book especially in sign
languages.
This approach also makes predictions about language processing, espe-
cially as it might be mapped to neural architectures: we know that non-
linguistic, non-compositional processing is seen as distinct from compo-
sitional/logical representations in the brain (Baggio, 2021; Frankland and
Greene, 2020); hopefully this framework for thinking about depictive and
descriptive aspects of meaning in sign languages clarifies predictions for neuro-
linguistics in sign languages going forward. This approach also makes pre-
dictions in terms of cross-linguistic variation: we might expect to find min-
imal variation in depiction, but a comparable level of variation in the sym-
bolic/propositional component, since the latter is more conventionalized
within a language community. Future directions to investigate include, but
are not limited to, the psycholinguistic and cross-linguistic pictures, and the
interplays of these systems. Finally, language is tightly connected to culture;
future work understanding how language ideologies interplay with linguistic
expressions, especially across modalities (Kusters et al., 2017), promises to
lead to much deeper understanding of sign language meaning.
Bibliography

Abner, N. (2017). What you see is what you get: Surface transparency
and ambiguity of nominalizing reduplication in American Sign Language.
Syntax, 20(4):317–352.

Abner, N., Flaherty, M., Stangl, K., Coppola, M., Brentari, D., and Goldin-
Meadow, S. (2019). The noun-verb distinction in established and emergent
sign systems. Language, 95(2):230–267.

Abner, N. and Graf, T. (2012). Binding complexity and the status of pro-
nouns in English and American Sign Language. Poster presentation at
Formal and Experimental Advances in Sign Language Theory (FEAST).

Abner, N. and Wilbur, R. B. (2017). Quantification in American Sign Lan-


guage. In Handbook of Quantifiers in Natural Language: Volume II, pages
21–59. Springer.

Ahn, B. and Conrod, K. (2022). Three ways to rate themself. In Talk


presented at the Human Sentence Processing conference, Santa Cruz CA.

Ahn, D. (2019a). ASL IX to locus as a modifier. Snippets, 1(37):2–3.

Ahn, D. (2019b). THAT thesis: A competition-based mechanism for


anaphoric expressions. PhD thesis, Harvard University, Cambridge, MA.

Ahn, D., Kocab, A., and Davidson, K. (2019). The role of contrast in
anaphoric expressions in ASL. In Proceedings of Glow-in-Asia XII.

Alsop, A., Stranahan, E., and Davidson, K. (2018). Testing contrastive in-
ferences from suprasegmental features using offline measures. Proceedings
of the Linguistic Society of America, 3(1):71–1.

Anagnostopoulou, E. (2006). Clitic doubling. The Blackwell companion to


syntax, 1:519–581.

Anand, P. and Nevins, A. (2004). Shifty operators in changing contexts. In


Semantics and Linguistic Theory, volume 14, pages 20–37.

Aristodemo, V. and Geraci, C. (2018). Visible degrees in Italian Sign Lan-


guage. Natural Language & Linguistic Theory, 36(3):685–699.
Arregi, K. and Hanink, E. A. (2021). Switch reference as index agreement.
Natural Language & Linguistic Theory, 40:1–52.
Asada, Y. (2019). General use coordination in Japanese and Japanese Sign
Language. Sign Language & Linguistics, 22(1):44–82.
Bach, E. (1986). The algebra of events. Linguistics and philosophy, 9:5–16.
Baggio, G. (2021). Compositionality in a parallel architecture for language
processing. Cognitive Science, 45(5):e12949–e12949.
Baker, M. and Kramer, R. (2018). Doubled clitics are pronouns. Natural
Language & Linguistic Theory, 36(4):1035–1088.
Barberà, G. (2012). A unified account of specificity in Catalan Sign Lan-
guage (LSC). In Sinn und Bedeutung 16, pages 43–55.
Barberà, G. (2014). Use and functions of spatial planes in Catalan Sign
Language (LSC) discourse. Sign Language Studies, 14(2):147–174.
Barberà, G. (2015). The meaning of space in sign language. Reference,
specificity and structure in Catalan Sign Language discourse. De Gruyter
Mouton, Ishara Press.
Barker, C. and Jacobson, P. (2007). Direct compositionality, volume 14.
OUP Oxford.
Barner, D., Brooks, N., and Bale, A. (2011). Accessing the unsaid: The
role of scalar alternatives in children’s pragmatic inference. Cognition,
118(1):84–93.
Benedicto, E. and Brentari, D. (2004). Where did all the arguments go?:
Argument-changing properties of classifiers in ASL. Natural Language &
Linguistic Theory, 22(4):743–810.
Bochnak, M. R. (2015). The degree semantics parameter and cross-linguistic
variation. Semantics and pragmatics, 8:6–1.
Bochnak, M. R. and Matthewson, L. (2015). Methodologies in semantic
fieldwork. Oxford University Press, USA.
Bošković, Ž. (2005). On the locality of left branch extraction and the struc-
ture of NP. Studia linguistica, 59(1):1–45.
Boster, C. T. (1996). On the structure of quantified noun phrases: Evidence
from the quantifier-np split in American Sign Language. International
Review of Sign Linguistics, 1:159–208.
Brentari, D. (2010). Sign languages. Cambridge University Press.

Büring, D. (2003). On d-trees, beans, and b-accents. Linguistics and phi-


losophy, 26(5):511–545.

Büring, D. (2005). Binding theory. Cambridge University Press.

Camp, E. (2018). Why maps are not propositional. In Grzankowski, A. and


Montague, M., editors, Non-propositional intentionality. Oxford Univer-
sity Press.

Caponigro, I. and Davidson, K. (2011). Ask, and tell as well: Question–


answer clauses in American Sign Language. Natural Language Semantics,
19(4):323–371.

Carlson, G. N. (1977). Reference to kinds in English. PhD thesis, University


of Massachusetts Amherst.

Casasanto, D. (2008). Similarity and proximity: When does close in space


mean close in mind? Memory & Cognition, 36(6):1047–1056.

Cecchetto, C., Checchetto, A., Geraci, C., Santoro, M., and Zucchi, S.
(2015). The syntax of predicate ellipsis in Italian Sign Language (LIS).
Lingua, 166:214–235.

Cecchetto, C., Geraci, C., and Zucchi, S. (2009). Another way to mark
syntactic dependencies: The case for right-peripheral specifiers in sign
languages. Language, 85(2):278–320.

Checchetto, A., Geraci, C., Cecchetto, C., and Zucchi, S. (2018). The lan-
guage instinct in extreme circumstances: The transition to tactile Italian
Sign Language (LISt) by Deafblind signers.

Chierchia, G. (1995). Dynamics of meaning: Anaphora, presupposition, and


the theory of grammar. University of Chicago Press.

Chierchia, G. (1998). Reference to kinds across language. Natural language


semantics, 6(4):339–405.

Chierchia, G. (2015). How universal is the mass/count distinction? Three


grammars of counting. In Chinese syntax: A cross-linguistic perspective,
pages 147–177.

Chierchia, G. (2017). Scalar implicatures and their interface with grammar.


Annual Review of Linguistics, 3:245–264.

Chierchia, G. (2020). Origins of weak crossover: when dynamic semantics


meets event semantics. Natural Language Semantics, 28(1):23–76.
Chierchia, G. and McConnell-Ginet, S. (2000). Meaning and grammar: An


introduction to semantics. MIT press.

Chomsky, N. (1981). Lectures on government and binding: The Pisa lectures.


Foris, Dordrecht.

Chomsky, N. (2014). The minimalist program. MIT press.

Clark, H. H. (1996). Using language. Cambridge University Press.

Clark, H. H. (2016). Depicting as a method of communication. Psychological


review, 123(3):324.

Clark, H. H. and Gerrig, R. J. (1990). Quotations as demonstrations. Lan-


guage, 66(4):764–805.

Cogill-Koez, D. (2000a). A model of signed language classifier predicates


as templated visual representation. Sign language & linguistics, 3(2):209–
236.

Cogill-Koez, D. (2000b). Signed language classifier predicates: Linguistic


structures or schematic visual representation? Sign Language & Linguis-
tics, 3(2):153–207.

Conrod, K. (2019). Pronouns raising and emerging. PhD thesis, University


of Washington.

Coppock, E. and Champollion, L. (2022). Invitation to formal semantics.

Cormier, K., Fenlon, J., and Schembri, A. (2015a). Indicating verbs in


British Sign Language favour motivated use of space. Open Linguistics,
1(1).

Cormier, K., Schembri, A., and Woll, B. (2013). Pronouns and pointing in
sign languages. Lingua, 137:230–247.

Cormier, K., Smith, S., and Sevcikova-Sehyr, Z. (2015b). Rethinking con-


structed action. Sign Language & Linguistics, 18(2):167–204.

Crabtree, M. R. and Wilbur, R. B. (2020). #ALL versus ALL in American


Sign Language (ASL). Proceedings of the Linguistic Society of America,
5(1):798–806.

Culbertson, J. (2010). Convergent evidence for categorial change in French:


From subject clitic to agreement marker. Language, 86(1):85–132.

Cummins, C. and Katsos, N. (2019). The Oxford Handbook of Experimental


Semantics and Pragmatics. Oxford University Press.
Davidson, K. (2013). ‘And’ or ‘or’: General use coordination in ASL. Se-


mantics & Pragmatics, 6(4):1–44.

Davidson, K. (2014). Scalar implicatures in a signed language. Sign Lan-


guage & Linguistics, 17(1):1–19.

Davidson, K. (2015). Quotation, demonstration, and iconicity. Linguistics


and Philosophy, 38(6):477–520.

Davidson, K. (2020). Is “experimental” a gradable predicate? In Asatryan,


M., Song, Y., and Whitmal, A., editors, Proceedings of North East Lin-
guistics Society (NELS) 50, pages 125–144.

Davidson, K. and Caponigro, I. (2016). Embedding polar interrogative


clauses in American Sign Language. In A Matter of Complexity: Sub-
ordination in Sign Languages, Sign languages and deaf communities 6,
pages 151–181. De Gruyter Mouton.

Davidson, K. and Gagne, D. (2022). “More is up” for domain restriction in


ASL. Semantics and Pragmatics, 15:1.

Davidson, K., Kocab, A., Sims, A. D., and Wagner, L. (2019). The relation-
ship between verbal form and event structure in sign languages. Glossa:
A journal of general linguistics, 4(1).

Davidson, K. and Koulidobrova, E. (2015). Polarity at the syntax/discourse


interface: Doubling and negation. Presentation at the LSA Annual meet-
ing.

Davidson, K. and Mayberry, R. I. (2015). Do adults show an effect of delayed


first language acquisition when calculating scalar implicatures? Language
acquisition, 22(4):329–354.

Dayal, V. (2002). Single-pair versus multiple-pair answers: Wh-in-situ and


scope. Linguistic Inquiry, 33(3):512–520.

Deal, A. R. (2011). Modals without scales. Language, pages 559–585.

Deal, A. R. (2015). Reasoning about equivalence in semantic fieldwork.


Methodologies in semantic fieldwork, pages 157–174.

Deal, A. R. (2017). Countability distinctions and semantic variation. Natural


Language Semantics, 25(2):125–171.

Deal, A. R. (2020). A theory of indexical shift: meaning, grammar, and


crosslinguistic variation. MIT Press.

Deo, A. (2015a). Diachronic semantics. Annu. Rev. Linguist., 1(1):179–197.


Deo, A. (2015b). The semantic and pragmatic underpinnings of grammat-


icalization paths: The progressive to imperfective shift. Semantics and
Pragmatics, 8:14–1.

Dikken, M. d., Meinunger, A., and Wilder, C. (2000). Pseudoclefts and


ellipsis. Studia linguistica, 54(1):41–89.

Dingemanse, M. (2012). Advances in the cross-linguistic study of ideophones.


Language and Linguistics compass, 6(10):654–672.

Dingemanse, M. (2015). Ideophones and reduplication: Depiction, descrip-


tion, and the interpretation of repeated talk in discourse. Studies in Lan-
guage. International Journal sponsored by the Foundation Foundations of
Language, 39(4):946–970.

Dingemanse, M. and Akita, K. (2017). An inverse relation between expres-


siveness and grammatical integration: on the morphosyntactic typology
of ideophones, with special reference to Japanese. Journal of Linguistics,
53(3):501–532.

Dingemanse, M., Blasi, D. E., Lupyan, G., Christiansen, M. H., and Mon-
aghan, P. (2015). Arbitrariness, iconicity, and systematicity in language.
Trends in cognitive sciences, 19(10):603–615.

Dryer, M. S. (2013). Polar questions. In Dryer, M. S. and Haspelmath,


M., editors, The World Atlas of Language Structures Online. Max Planck
Institute for Evolutionary Anthropology, Leipzig.

Elbourne, P. (2002). Situations and individuals. PhD thesis, Massachusetts


Institute of Technology.

Emmorey, K. (2003). Perspectives on classifier constructions in sign lan-


guages. Psychology Press.

Emmorey, K. and Herzig, M. (2003). Categorical versus gradient properties


of classifier constructions in ASL. Perspectives on classifier constructions
in signed languages, 222:246.

Engberg-Pedersen, E. (2013). Point of view expressed through shifters. In


Language, gesture, and space, pages 143–164. Psychology Press.

Esipova, M. (2019a). Acceptability of at-issue co-speech gestures under


contrastive focus. Glossa: a journal of general linguistics, 4(1).

Esipova, M. (2019b). Composition and projection in speech and gesture.


PhD thesis, New York University, New York, NY.
Fenlon, J., Cooperrider, K., Keane, J., Brentari, D., and Goldin-Meadow,
S. (2019). Comparing sign language and gesture: Insights from pointing.
Glossa: a journal of general linguistics, 4(1).

Fenlon, J., Schembri, A., and Cormier, K. (2018). Modification of indicat-


ing verbs in British Sign Language: A corpus-based study. Language,
94(1):84–118.

Ferrara, L. and Hodge, G. (2018). Language as description, indication, and


depiction. Frontiers in Psychology, 9:716.

Fodor, J. (2007). The revenge of the given. Contemporary debates in phi-


losophy of mind, pages 105–116.

Fodor, J. A. (2008). LOT 2: The language of thought revisited. Oxford


University Press on Demand.

Frankland, S. M. and Greene, J. D. (2020). Concepts and compositionality:


in search of the brain’s language of thought. Annual review of psychology,
71:273–303.

Frederiksen, A. T. and Mayberry, R. I. (2016). Who’s on first? Investigating


the referential hierarchy in simple native ASL narratives. Lingua, 180:49–
68.

Gan, L. E. (2022). Question Answer Pairs in Hong Kong Sign Language.


Presentation at the Chicago Linguistic Society.

Geraci, C. (2005). Negation in LIS (Italian Sign Language). In Proceedings


of the North East Linguistic Society (NELS 35), page 217.

Gil, D. (2019). Aristotle goes to Arizona, and finds a language without


“And”. In Semantic universals and universal semantics, pages 96–130. De
Gruyter Mouton.

Gonzalez, A., Henninger, K., and Davidson, K. (2019). Answering negative


questions in American Sign Language. Proceedings of the North East
Linguistic Society (NELS 49) held at Cornell.

Graf, T. and Abner, N. (2012). Is syntactic binding rational? In Proceed-


ings of the 11th international workshop on Tree Adjoining Grammars and
related formalisms (TAG+ 11), pages 189–197.

Greenberg, G. (2013). Beyond resemblance. Philosophical review,


122(2):215–287.

Greenberg, J. H. (1972). Numerical classifiers and substantival number:


problems in the genesis of a linguistic type. In On Language. Selected
Writings of Joseph H. Greenberg, pages 166–193.
Grice, H. (1957). Meaning. The Philosophical Review, 66(3):377–388.

Grice, P. (1989). Studies in the Way of Words. Harvard University Press.

Groenendijk, J. and Stokhof, M. (1982). Semantic analysis of “wh”-


complements. Linguistics and Philosophy, 5:175–233.

Groenendijk, J. and Stokhof, M. (1991). Dynamic predicate logic. Linguis-


tics and philosophy, 14(1):39–100.

Groenendijk, J. A. G. and Stokhof, M. J. B. (1984). Studies on the Se-


mantics of Questions and the Pragmatics of Answers. PhD thesis, Univ.
Amsterdam.

Guasti, M. T., Chierchia, G., Crain, S., Foppolo, F., Gualmini, A., and
Meroni, L. (2005). Why children and adults sometimes (but not always)
compute implicatures. Language and cognitive processes, 20(5):667–696.

Gunlogson, C. (2004). True to form: Rising and falling declaratives as


questions in English. Routledge.

Gutzmann, D., Hartmann, K., and Matthewson, L. (2020). Verum focus is


verum, not focus: Cross-linguistic evidence. Glossa: a journal of general
linguistics, 5(1).

Hacquard, V. (2010). Modality. Language, 86(3):739–741.

Hamblin, C. L. (1976). Questions in Montague English. In Montague gram-


mar, pages 247–259. Elsevier.

Hanink, E. (2018). Structural sources of anaphora and sameness. The Uni-


versity of Chicago.

Hartmann, K., Pfau, R., and Legeland, I. (2021). Asymmetry and contrast:
Coordination in Sign Language of the Netherlands. Glossa: a journal of
general linguistics, 6(1).

Hauser, C. (2019). Question-answer pairs: the help of LSF. FEAST. Formal


and Experimental Advances in Sign language Theory, 2.

Heim, I. (1982). The semantics of definite and indefinite noun phrases. PhD
thesis, University of Massachusetts.

Heim, I. and Kratzer, A. (1998). Semantics in generative grammar, volume


1185. Blackwell Oxford.

Henderson, R. (2014). Dependent indefinites and their post-suppositions.


Semantics and Pragmatics, 7:6–1.
Henner, J. (2022). https://twitter.com/jmhenner/status/1489252991987183617?lang=en.


Henner, J. and Robinson, O. (2021). Unsettling languages, unruly body-
minds: Imaging a crip linguistics.
Henninger, K. (2022). Manual and nonmanual negation in American Sign
Language: A corpus study. manuscript, University of Chicago.
Herbert, M. (2018). A new classifier-based plural morpheme in German Sign
Language (DGS). Sign Language & Linguistics, 21(1):115–136.
Herrmann, A. (2013). Modal and focus particles in sign languages. In Modal
and Focus Particles in Sign Languages. De Gruyter Mouton.
Herrmann, A. and Steinbach, M. (2012). Quotation in sign languages. Quo-
tatives: Cross-linguistic and cross-disciplinary perspectives. Amsterdam:
John Benjamins, pages 203–228.
Hill, J. C., Lillo-Martin, D. C., and Wood, S. K. (2018). Sign languages:
Structures and contexts. Routledge.
Hochgesang, J. (2020). SLAASh ID glossing principles, ASL Sign-
bank and Annotation Conventions, Version 3.0. Available at
https://doi.org/10.6084/m9.figshare.12003732.v4.
Hochgesang, J., Crasborn, O., and Lillo-Martin, D. (2020). ASL Signbank.
https://ASLsignbank.haskins.yale.edu/.
Hodge, G. and Ferrara, L. (2022). Iconicity as multimodal, polysemiotic,
and plurifunctional. Frontiers in Psychology, 13.
Horn, L. (1989). A natural history of negation. Center for the Study of
Language and Information, Stanford CA.
Hou, L. (2022). Looking for multi-word expressions in American Sign Lan-
guage. Cognitive Linguistics, 33(2):291–337.
Hübl, A., Maier, E., and Steinbach, M. (2019). To shift or not to shift:
Indexical attraction in role shift in German Sign Language. Sign Language
& Linguistics, 22(2):171–209.
Huddlestone, K. (2017). A preliminary look at negative constructions in
South African Sign Language: Question-answer clauses. Stellenbosch Papers
in Linguistics, 48(1):93–104.
Irani, A. (2016). Two types of definites in American Sign Language. Presen-
tation at the Workshop on Definiteness Across Languages, Mexico City.
Jacobson, P. (2007). Direct compositionality and variable-free semantics.
Direct compositionality, pages 191–236.
Jacobson, P. (2016). The short answer: Implications for direct composition-


ality (and vice versa). Language, 92(2):331–375.

Janis, W. D. (1995). A crosslinguistic perspective on ASL verb agreement.


Language, gesture, and space, pages 195–223.

Johnson-Laird, P. N. (1980). Mental models in cognitive science. Cognitive


science, 4(1):71–115.

Kahneman, D. and Tversky, A. (2013). Prospect theory: An analysis of


decision under risk. In Handbook of the fundamentals of financial decision
making: Part I, pages 99–127. World Scientific.

Kamp, H. (1983). A theory of truth and semantic representation. In Meaning


and the Dynamics of Interpretation, pages 329–369. Brill.

Kamp, H. and Reyle, U. (1993). From Discourse to Logic.

Kaplan, D. (1979). On the logic of demonstratives. Journal of philosophical


logic, 8(1):81–98.

Karttunen, L. (1977). Syntax and semantics of questions. Linguistics and


philosophy, 1(1):3–44.

Katsos, N. and Bishop, D. V. (2011). Pragmatic tolerance: Implications for


the acquisition of informativeness and implicature. Cognition, 120(1):67–
81.

Khristoforova, E. and Kimmelman, V. (2021). Question-answer pairs in


Russian Sign Language: A corpus study. FEAST. Formal and Experimental
Advances in Sign language Theory, 4:101–112.

Kimmelman, V. (2014). Information structure in Russian Sign Language


and Sign Language of the Netherlands. PhD thesis, University of Amster-
dam.

Kimmelman, V. (2017). Quantifiers in Russian Sign Language. In Handbook


of Quantifiers in Natural Language: Volume II, pages 803–855. Springer.

Kimmelman, V. and Pfau, R. (2016). Information structure in sign lan-


guages. The Oxford Handbook of Information Structure, pages 814–833.

Kimmelman, V. and Quer, J. (2021). Quantification–theoretical perspec-


tives. In Quer, J., Pfau, R., and Herrmann, A., editors, The Routledge
Handbook of Theoretical and Experimental Sign Language Research, pages
423–439.

Kimmelman, V. and Vink, L. (2017). Question-answer pairs in Sign Lan-


guage of the Netherlands. Sign Language Studies, 17(4):417–449.
Kita, S. (1997). Two-dimensional semantic analysis of Japanese mimetics.


Linguistics, pages 379–415.

Kita, S. and Özyürek, A. (2003). What does cross-linguistic variation in se-


mantic coordination of speech and gesture reveal?: Evidence for an inter-
face representation of spatial thinking and speaking. Journal of Memory
and language, 48(1):16–32.

Klima, E. S. and Bellugi, U. (1979). The signs of language. Harvard Uni-


versity Press.

Kocab, A., Ahn, D., Lund, G., and Davidson, K. (2019). Reconsidering
agreement in sign languages. In Poster at GLOW 42 in Oslo.

Kocab, A., Davidson, K., and Snedeker, J. (2022). The emergence of natural
language quantification. Cognitive Science.

Kocab, A., Pyers, J., and Senghas, A. (2015). Referential shift in Nicaraguan
Sign Language: A transition from lexical to spatial devices. Frontiers in
Psychology, 5:1540.

Kocab, A., Senghas, A., and Snedeker, J. (2016). Recursion in Nicaraguan


Sign Language. In Proceedings of the Cognitive Science Society.

Koulidobrova, E. (2012). When the quiet surfaces: ‘Transfer’ of argument


omission in the speech of ASL-English bilinguals. PhD thesis, University
of Connecticut.

Koulidobrova, E. (2021). Counting (on) bare nouns: Revelations from Amer-


ican Sign Language. In Kiss, T., Pelletier, F. J., and Husic, H., editors,
Things and Stuff: The Semantics of the Count-Mass Distinction, chap-
ter 9, page 213. Cambridge University Press.

Koulidobrova, E. and Lillo-Martin, D. (2016). A ‘point’ of inquiry: The case


of the (non-)pronominal IX in ASL. The impact of pronominal form on
interpretation, pages 221–250.

Koulidobrova, E., Martinez-Vera, G., Kunze, K., and Kunze, C. (2022).


Revisiting gradability in American Sign Language (ASL). Ms.

Kratzer, A. (1977). What ‘can’ and ‘must’ can and must mean. Linguistics
and Philosophy, 1:337–355.

Kratzer, A. (1981). The notional category of modality. In Words, worlds,


and contexts, pages 38–74.

Kuhn, J. (2015). Discourse anaphora–theoretical perspectives. In Quer, J.,


Pfau, R., and Herrmann, A., editors, The Routledge Handbook of Theo-
retical and Experimental Sign Language Research.
Kuhn, J. (2016). ASL loci: Variables or features? Journal of Semantics,


33(3):449–491.
Kuhn, J. (2017a). Dependent indefinites: the view from sign language.
Journal of Semantics, 34(3):407446.
Kuhn, J. (2017b). Telicity and iconic scales in ASL.
Kuhn, J. (2020). Logical meaning in space: Iconic biases on quantification
in sign languages. Language, 96(4):320–343.
Kuhn, J. and Aristodemo, V. (2017). Pluractionality, iconicity, and scope
in French Sign Language. Semantics and Pragmatics, 10.
Kuhn, J., Geraci, C., Schlenker, P., and Strickland, B. (2021). Boundaries
in space and time: Iconic biases across modalities. Cognition, 210:104596.
Kuhn, J. and Pasalskaya, L. (2019). Negative concord in Russian Sign
Language. In Paper presented at Sinn und Bedeutung.
Kusters, A., Spotti, M., Swanwick, R., and Tapio, E. (2017). Beyond lan-
guages, beyond modalities: Transforming the study of semiotic reper-
toires. International Journal of Multilingualism, 14(3):219–232.
Lakoff, G. and Johnson, M. (1980). Metaphors we live by. University of
Chicago press.
Legeland, I., Hartmann, K., and Pfau, R. (2018). Word order asymmetries
in NGT coordination: The impact of information structure. FEAST.
Formal and Experimental Advances in Sign language Theory, 2:56–67.
Liddell, S. K. (1980). American Sign Language syntax. Mouton De Gruyter.
Liddell, S. K. (2003). Grammar, gesture, and meaning in American Sign
Language. Cambridge University Press.
Lillo-Martin, D. (1986). Two kinds of null arguments in American Sign
Language. Natural Language & Linguistic Theory, 4(4):415–444.
Lillo-Martin, D. (1995). The point of view predicate in American Sign
Language. In Language, gesture, and space, pages 165–180. Psychology
Press.
Lillo-Martin, D. and de Quadros, R. M. (2004). Focus constructions in
American Sign Language and Língua de Sinais Brasileira. Signs of the
time: Selected papers from TISLR, pages 161–176.
Lillo-Martin, D. and Klima, E. S. (1990). Pointing out differences: ASL
pronouns in syntactic theory. Theoretical issues in sign language research,
1:191–210.
Lillo-Martin, D. and Meier, R. P. (2011). On the linguistic status of agree-


ment in sign languages. Theoretical linguistics, 37(3-4):95–141.

Lillo-Martin, D. C. and Gajewski, J. (2014). One grammar or two? Sign


languages and the nature of human language. Wiley Interdisciplinary
Reviews: Cognitive Science, 5(4):387–401.

Lima, S. (2014). The grammar of individuation and counting. PhD thesis,


University of Massachusetts, Amherst.

Loos, C., Steinbach, M., and Repp, S. (2020). Affirming and rejecting as-
sertions in German Sign Language (DGS). In Proceedings of Sinn und
Bedeutung, volume 24, pages 1–19.

MacLaughlin, D. (1997). The structure of determiner phrases: Evidence


from American Sign Language. PhD thesis, Boston University.

Maier, E. (2017). The pragmatics of attraction. In The semantics and


pragmatics of quotation, pages 259–280. Springer.

Maier, E. (2018). Quotation, demonstration, and attraction in sign language


role shift. Theoretical Linguistics, 44(3-4):265–276.

Malaia, E. and Wilbur, R. B. (2012). Kinematic signatures of telic and atelic


events in ASL predicates. Language and Speech, 55(3):407–421.

Mandelkern, M. and Rothschild, D. (2020). Definiteness projection. Natural


Language Semantics, 28(2):77–109.

Mathur, G. (2000). The morphology-phonology interface in signed languages.


PhD thesis, Massachusetts Institute of Technology.

Matthewson, L. (2006). Temporal semantics in a superficially tenseless lan-


guage. Linguistics and Philosophy, 29(6):673–713.

Matthewson, L. (2022). Semantic fieldwork: How experimental should we


be? Semantic Fieldwork Methods.

Meier, R. P. (1982). Icons, analogues, and morphemes: The acquisition of


verb agreement in American Sign Language. University of California, San
Diego.

Meir, I. (2002). A cross-modality perspective on verb agreement. Natural


Language & Linguistic Theory, 20(2):413–450.

Meir, I., Padden, C. A., Aronoff, M., and Sandler, W. (2007). Body as
subject. Journal of Linguistics, 43(3):531–563.
Meir, I., Sandler, W., Padden, C., and Aronoff, M. (2010). Emerging sign
languages. Oxford handbook of deaf studies, language, and education,
2:267–280.
Montague, R. (1973). The proper treatment of quantification in ordinary
English. In Approaches to natural language, pages 221–242. Springer.
Murray, S. (2017). Complex connectives. In Semantics and Linguistic The-
ory, volume 27, pages 655–679.
Napoli, D. J., Spence, R. S., and de Quadros, R. M. (2017). Influence of
predicate sense on word order in sign languages: Intensional and exten-
sional verbs. Language, 93(3):641–670.
Neidle, C. J. (2000). The syntax of American Sign Language: Functional
categories and hierarchical structure. MIT press.
Nevins, A. (2011). Prospects and challenges for a clitic analysis of ASL
agreement. Theoretical Linguistics, 37(3-4):173–187.
Noveck, I. (2018). Experimental pragmatics: The making of a cognitive
science. Cambridge University Press.
Noveck, I. A. (2001). When children are more logical than adults: Experi-
mental investigations of scalar implicature. Cognition, 78(2):165–188.
Ohori, T. (2004). Coordination in mentalese. In Haspelmath, M., editor,
Coordinating constructions, pages 41–55.
Padden, C. (1986). Verbs and role-shifting in American Sign Language. In
Proceedings of the fourth national symposium on sign language research
and teaching, pages 44–57. National Association of the Deaf Silver Spring,
MD.
Padden, C. A. (1988). Interaction of morphology and syntax in American
Sign Language. Routledge.
Papafragou, A. and Musolino, J. (2003). Scalar implicatures: experiments
at the semantics–pragmatics interface. Cognition, 86(3):253–282.
Partee, B. H. (1995). Quantificational structures and compositionality. In
Quantification in natural languages, pages 541–601. Springer.
Perniss, P., Thompson, R., and Vigliocco, G. (2010). Iconicity as a gen-
eral property of language: evidence from spoken and signed languages.
Frontiers in psychology, 1:227.
Perniss, P. and Vigliocco, G. (2014). The bridge of iconicity: from a world
of experience to the experience of language. Philosophical Transactions
of the Royal Society B: Biological Sciences, 369(1651):20130300.
Petronio, K. (1995). Bare noun phrases, verbs and quantification in ASL. In Quantification in natural languages, pages 603–618. Springer.

Petronio, K. and Lillo-Martin, D. (1997). Wh-movement and the position of Spec-CP: Evidence from American Sign Language. Language, pages 18–57.

Pfau, R. (2016). Syntax: complex sentences. In Baker, A., van den Bogaerde, B., Pfau, R., and Schermer, T., editors, The Linguistics of Sign Languages, pages 149–172. Amsterdam: John Benjamins.

Pfau, R. and Quer, J. (2010). Nonmanuals: their grammatical and prosodic roles. pages 1–21.

Pfau, R., Salzmann, M., and Steinbach, M. (2018). The syntax of sign language agreement: Common ingredients, but unusual recipe. Glossa: a journal of general linguistics, 3(1).

Pfau, R. and Steinbach, M. (2006). Pluralization in sign and in speech: A cross-modal typological study. Linguistic Typology, 10(2).

Pfau, R. and Steinbach, M. (2016). Complex sentences in sign languages: Modality–typology–discourse. A matter of complexity: Subordination in sign languages, pages 1–35.

Pfau, R., Steinbach, M., and Woll, B. (2012). Sign language. De Gruyter Mouton.

Quadros, R. M. d., Davidson, K., Lillo-Martin, D., and Emmorey, K. (2020). Code-blending with depicting signs. Linguistic approaches to bilingualism, 10(2):290–308.

Quer, J. (2005). Context shift and indexical variables in sign languages. In Semantics and Linguistic Theory, volume 15, pages 152–168.

Quer, J. (2012a). Chapter 15: Negation. In Sign Language, pages 316–339. De Gruyter Mouton.

Quer, J. (2012b). Quantificational strategies across language modalities. In Logic, language and meaning, pages 82–91. Springer.

Ramchand, G. (2019). Verbal symbols and demonstrations across modalities. Open Linguistics, 5(1):94–108.

Ramchand, G. C. (2008). Verb meaning and the lexicon: A first phase syntax, volume 116. Cambridge University Press.

Reinhart, T. (1983). Coreference and bound anaphora: A restatement of the anaphora questions. Linguistics and Philosophy, 6(1):47–88.

Repp, S. (2016). Contrast: Dissecting an elusive information-structural notion and its role in grammar. OUP handbook of Information Structure, pages 270–289.

Roberts, C. (2012). Information structure: Towards an integrated formal theory of pragmatics. Semantics and Pragmatics, 5(6):1–69.

Rooth, M. (1992). A theory of focus interpretation. Natural language semantics, 1(1):75–116.

Sampson, T. and Mayberry, R. I. (2022). An emerging self: The copula cycle in American Sign Language. Language, 98(2).

Sandler, W. and Lillo-Martin, D. (2006). Sign language and linguistic universals. Cambridge University Press.

Sauerland, U. (2003). A new semantics for number. In Semantics and Linguistic Theory, volume 13, pages 258–275.

Schlenker, P. (2003). Clausal equations (a note on the connectivity problem). Natural Language & Linguistic Theory, 21(1):157–214.

Schlenker, P. (2011). Donkey anaphora: the view from sign language (ASL and LSF). Linguistics and Philosophy, 34(4):341–395.

Schlenker, P. (2014). Iconic features. Natural language semantics, 22(4):299–356.

Schlenker, P. (2016). Featural variables. Natural Language & Linguistic Theory, 34(3):1067–1088.

Schlenker, P. (2017a). Super monsters I: Attitude and action role shift in sign language. Semantics and Pragmatics, 10.

Schlenker, P. (2017b). Super monsters II: Role shift, iconicity and quotation in sign language. Semantics and Pragmatics, 10(12).

Schlenker, P. (2018). Visible meaning: Sign language and the foundations of semantics. Theoretical Linguistics, 44(3-4):123–208.

Schlenker, P. (2021). Iconic presuppositions. Natural Language & Linguistic Theory, 39(1):215–289.

Schlenker, P. (2022). What It All Means: Semantics for (Almost) Everything. MIT Press.

Schlenker, P., Bonnet, M., Lamberton, J., Lamberton, J., Chemla, E., Santoro, M., and Geraci, C. (2022). Iconic syntax: Sign language classifier predicates and gesture sequences.

Schlenker, P. and Chemla, E. (2018). Gestural agreement. Natural Language & Linguistic Theory, 36(2):587–625.

Schlenker, P. and Lamberton, J. (2019). Iconic plurality. Linguistics and Philosophy, 42(1):45–108.

Schlenker, P. and Lamberton, J. (2022). Meaningful blurs: the sources of repetition-based plurals in ASL. Linguistics and Philosophy, 45(2):201–264.

Schlenker, P., Lamberton, J., and Santoro, M. (2013). Iconic variables. Linguistics and philosophy, 36(2):91–149.

Schlenker, P. and Mathur, G. (2012). A strong crossover effect in ASL. Ms., Institut Jean-Nicod/NYU and Gallaudet University.

Schouwstra, M. and de Swart, H. (2014). The semantic origins of word order. Cognition, 131(3):431–436.

Schwarz, F. (2014). Experimental perspectives on presuppositions, volume 45. Springer.

Schwarzschild, R. (2002). Singleton indefinites. Journal of Semantics, 19(3):289–314.

Senghas, A. (2010). The emergence of two functions for spatial devices in Nicaraguan Sign Language. Human Development, 53(5):287–302.

Sevgi, H. (2022). One root to build them all: Roots in sign language classifiers. In Proceedings of West Coast Conference on Formal Linguistics, 39.

Siegal, M. and Surian, L. (2004). Conceptual development and conversational understanding. Trends in cognitive sciences, 8(12):534–538.

Siegel, S. (2011). The contents of visual experience. Oxford University Press.

Spaepen, E., Coppola, M., Spelke, E. S., Carey, S. E., and Goldin-Meadow, S. (2011). Number without a language model. Proceedings of the National Academy of Sciences, 108(8):3163–3168.

Speas, M. (2000). Person and point of view in Navajo direct discourse complements. In Carnie, A., Jelinek, E., and Willie, M. A., editors, Papers in honor of Ken Hale, pages 19–38.

Stalnaker, R. C. (1978). Assertion. In Pragmatics, pages 315–332. Brill.

Stanley, J. (2002). Nominal restriction. Logical form and language, pages 365–388.

Stanley, J. and Szabó, Z. (2000). On quantifier domain restriction. Mind & Language, 15(2-3):219–261.

Steinbach, M. and Onea, E. (2015). A DRT analysis of discourse referents and anaphora resolution in sign language. Journal of Semantics, 33(3):409–448.

Stiller, A. J., Goodman, N. D., and Frank, M. C. (2015). Ad-hoc implicature in preschool children. Language Learning and Development, 11(2):176–190.

Stokoe, W. C., Casterline, D. C., and Croneberg, C. G. (1976). A dictionary of American Sign Language on linguistic principles. Linstok Press.

Strickland, B., Geraci, C., Chemla, E., Schlenker, P., Kelepir, M., and Pfau, R. (2015). Event representations constrain the structure of language: Sign language as a window into universally accessible linguistic biases. Proceedings of the National Academy of Sciences, 112(19):5968–5973.

Sundaresan, S. (2013). Context and (co)reference in the syntax and its interfaces. PhD thesis, Universitetet i Tromsø.

Sundaresan, S. (2021). Shifty attitudes: Indexical shift versus perspectival anaphora. Annual Review of Linguistics, 7:235–259.

Supalla, T. R. (1983). Structure and acquisition of verbs of motion and location in American Sign Language.

Szabolcsi, A. Hungarian disjunctions and positive polarity. In Approaches to Hungarian, volume 8.

Tang, G. and Lau, P. (2012). Coordination and subordination. In Sign Language, pages 340–365. De Gruyter Mouton.

Taub, S. F. (2001). Language from the body: Iconicity and metaphor in American Sign Language. Cambridge University Press.

Thompson, H. (1977). The lack of subordination in American Sign Language. On the Other Hand: New Perspectives on American Sign Language, Academic Press, New York, pages 181–196.

Tieu, L., Schlenker, P., and Chemla, E. (2019). Linguistic inferences without words. Proceedings of the National Academy of Sciences, 116(20):9796–9801.

Tomita, N. (2021). Breaking Free from Text: One JSL User's Discourse Journey over Time. PhD thesis, Gallaudet University.

Valli, C. and Lucas, C. (2000). Linguistics of American Sign Language: An introduction. Gallaudet University Press.

von Fintel, K. (1994). Restrictions on quantifier domains. PhD thesis, University of Massachusetts, Amherst, MA.

von Fintel, K. and Heim, I. (2002). Lecture notes on intensional semantics. Ms., Massachusetts Institute of Technology.

Wilbur, R. B. (1994). Foregrounding structures in American Sign Language. Journal of Pragmatics, 22(6):647–672.

Wilbur, R. B. (2008). Complex predicates involving events, time and aspect: Is this why sign languages look so similar? Theoretical issues in sign language research, pages 217–250.

Wilbur, R. B. (2011). Nonmanuals, semantic operators, domain marking, and the solution to two outstanding puzzles in ASL. Sign Language & Linguistics, 14(1):148–178.

Wilbur, R. B. and Patschke, C. (1999). Syntactic correlates of brow raise in ASL. Sign Language & Linguistics, 2(1):3–41.

Wilbur, R. B. and Patschke, C. G. (1998). Body leans and the marking of contrast in American Sign Language. Journal of Pragmatics, 30(3):275–303.

Wood, S. (1999). Semantic and syntactic aspects of negation in ASL. Master's thesis, Purdue University, West Lafayette, IN.

Wright, T. A. (2014). Strict vs. Flexible Accomplishment Predicates. PhD thesis, University of Texas at Austin.

Zeshan, U. (2004). Interrogative constructions in signed languages: Crosslinguistic perspectives. Language, 80(1):7–39.

Zeshan, U. (2006). Interrogative and negative constructions in sign languages. Ishara Press.

Zimmermann, T. E. (2000). Free choice disjunction and epistemic possibility. Natural language semantics, 8(4):255–290.

Zlogar, C. D. and Davidson, K. (2018). Effects of linguistic context on the acceptability of co-speech gestures. Glossa: a journal of general linguistics.

Zorzi, G. (2018). Coordination and gapping in Catalan Sign Language (LSC). PhD thesis, Universitat Pompeu Fabra.

Zucchi, S. (2012). Formal semantics of sign languages. Language and Linguistics Compass, 6(11):719–734.

Zucchi, S. (2017). Event categorization in sign languages. In Handbook of Categorization in Cognitive Science, pages 377–396. Elsevier.

Zucchi, S. (2018). Sign language iconicity and gradient effects. Theoretical Linguistics, 44(3-4):283–294.

Zwitserlood, I. (2012). Classifiers. In Sign language: An international handbook. De Gruyter.
