Semantics and Pragmatics in Sign Languages
Kathryn Davidson
Contents

3 Logical connectives
   1 Negation
   2 Conjunction and Disjunction
   3 Semantics/Pragmatics interface: Implicatures
   4 Coordination and information structure
   5 Conclusions
6 Quantification
   1 Quantification strategies across sign languages
   2 Quantificational domains
   3 Quantification and binding
   4 Quantification and scope
   5 Psycholinguistic studies: Comprehension
   6 Psycholinguistic studies: Production
   7 Conclusions
7 Countability
   1 The mass/count distinction
   2 Classifiers and countability
   3 Grammatical number
   4 Telicity and aspect
   5 Event visibility
   6 Pluractionality
   7 Conclusions
8 Intensionality
   1 Conditionals
   2 Attitude verbs
   3 Modals
   4 Intensional predicates and iconicity
   5 De dicto/de re
   6 Conclusions
9 Conclusions
   1 Events and propositions
   2 On pragmatic universality
   3 Cross-linguistic typology
   4 Historical change
   5 Future directions
Preface
This book was written with two aims for two separate audiences, and in trying to serve both it will likely fall short for each in different ways; the hope is that the advantages of bringing these audiences together will outweigh the disadvantages. The first intended audience is sign language researchers and the Deaf signing community. People who know the most about sign languages may encounter discussions of how sign languages are analyzed in the field of formal semantics and want to learn more in order to follow that research. To this audience, I hope that this
text provides an overview and reference of what is being said about sign
languages by researchers in formal semantics and related work in pragmat-
ics and syntax. The paramount goal of this book is to break down barriers between research published on the formal semantics of sign languages and the communities to whom these languages belong. I'm sure I speak for many in
the field of semantics in hoping that this will lead to significant update and
revision to what is presented in the following pages, as that would be the
best indication of progress.
The second audience for this book is the community of formal seman-
tics/pragmatics researchers as well as those in adjacent fields such as philoso-
phy (especially logic and philosophy of language) and psychology (especially
psycholinguistics, psychology of language, developmental and cognitive psy-
chology). Usually, if pushed, all of these groups acknowledge the importance of building theories of language that take sign languages into account, and yet many researchers in these fields hesitate to include sign languages in their work because of a lack of familiarity with glossing conventions, a lack of basic training in the terminology and ideas, and difficulty envisioning what sentences look like and what the possible parameters of variation are. To this
audience, I hope that this book will provide references to existing work
within the field and a sense of sign languages as described within the formal
semantic framework. It is a secondary goal of this book to make sign lan-
guages more familiar for those working on formal semantics, and to hopefully
provide enough basic knowledge of sign languages within the framework that
such researchers without any previous ties to sign languages will reach out
to scholars in the community for more collaborations and mutual support.
To both audiences, the book should provide a sense of what has been
claimed regarding formal semantics and pragmatics in sign languages, and
provide guideposts to many major outstanding questions in the field. The
assumed background is an introductory linguistics course.
I am a hearing person who learned American Sign Language as an adult.
Given this, I lack the deeper knowledge that members of the Deaf commu-
nity have about sign languages, and so my perspective in this book is that
of an academic: the kind of outsider knowledge that has historically been
privileged but which should not be confused with lived experience when it
comes to authority on the language itself. On top of that, language is al-
ways changing and varies across people, time, and contexts, so in the spirit
of linguistic analysis, nothing in what follows should be understood as pre-
scriptive, or what should be, but rather descriptive, only what seems to be
given what we know about natural language semantics in both spoken and
signed languages. Moreover, theoretical linguistics is a young field and sign
language linguistics is even younger, so perhaps the next generation of such
an overview will rewrite much of what is contained here; certainly the goal
for this book will have been met if this provides an accessible introduction
to the current state of theoretical thinking to the widest range of readers,
with hopes of progress and revision to come.
Each chapter attempts to begin with a general introduction to the topic in semantics (work that has often but not always focused on written English), and then reviews the main ways in which the topic has been studied in sign language linguistics. Chapters conclude with both concrete examples of semantic analysis of particular phenomena and possible future directions, both of which are intended to support further work.
In general within formal approaches to sign language linguistics, semantics has been a relatively understudied field until quite recently, but there are predecessors to this text that anyone interested in this topic should consult for complementary readings. The first is Sandler and Lillo-Martin (2006), who provide for generative approaches to phonology and syntax what this book aims to provide for formal approaches to the semantics of sign languages, working within a framework that assumes similar ingredients at more abstract levels combining across language modalities. A concise introduction to sign language linguistics more broadly can
be found in Hill et al. (2018). Schlenker (2022) provides a broader introduc-
tion to semantics as applied to many domains inside and outside of language
which includes several areas of focus on sign languages. Finally, there are many dissertations that go into more depth on some of the topics here while also presenting widely accessible introductions to aspects of formal semantics; excellent examples include Barberà (2015) for dynamic semantics and Kimmelman (2014) for information structure.
1 Meaning and language
How do we know what other people “mean” when they share ideas using lan-
guage? What is “meaning” in human language? And for that matter, what
counts as “language”? Answers to these kinds of questions are foundational
and far-reaching, far more complex than any single research program or sin-
gle discipline can handle, potentially encompassing social meaning and iden-
tity, actions and intentions, philosophy of language, psychology of mental
representations, language development, actions and persuasion, and cross-
linguistic variation and typology, etc. That said, there has been enormous
advancement in the last few decades regarding how certain kinds of meaning
work in both signed and spoken languages from multiple perspectives. In
this book we are going to focus on at least two different kinds of meaning
that can be expressed through natural language, although these shouldn’t
be taken to be exhaustive. Furthermore, the approach will not be to argue in favor of prioritizing one kind of meaning over the other, but rather to take as given that both capture true aspects of linguistic meaning, and that understanding language meaning requires understanding both parts and how they interact.
One kind of meaning can be thought of as the “picture in your head”
of some event or situation that you want to share with someone else (as
the language producer) or that you want to comprehend (as the language
receiver). Imagine that I am describing a rainbow in the sky on a beautiful
autumn day in New England, describing the colors, the shapes, the arc
of the rainbow, the feel of the wind, etc, in great detail. Increasing the
details that I use will helpfully add to the vivid mental model you may be
building as I try to share the experience with you, either by using highly
have experienced such a moment directly, but the idea is that we can use
language along with other communication systems like painting, enacting,
etc. to try to share something like a specific image/episode of it with you,
for example what it is like to be in New England on such a day, what my
birthday party was like when I was 10, or what it was like to watch bees
make honey that day at the fair, etc. We can remember and reason about
(at least some aspects of) experiences in the absence of language; what we
share about experiences need not depend on language, but it seems that
they can certainly be influenced by language, as when I use language to
share a new experience I had with you.
If you ask most people, the kind of "share the picture in your head" meaning might be the first thing that comes to mind when they think about the meaning of a sentence, but linguists working within formal approaches to semantics and pragmatics focus on a different aspect of meaning: the fact that we can raise and answer questions in order to share information that might not even be encodable as an experience of any kind. Examples of this are the information we share when we say I've
never seen a rainbow or None of the students can identify the queen bee, or
even the content of generalized declarative memory like Boston is the capital
of Massachusetts or penguins lay eggs. These are not necessarily linked to
any particular event experience, but rather play an important role in sup-
porting reasoning over alternatives, i.e. whether you have or haven’t seen a
rainbow, which city is the capital of Massachusetts, which animal kingdom
penguins belong to, etc.
An important motivation behind formal approaches to semantics is that
we can reason over not only what was said, but also use logic to reason
about what else follows from what was said. For example, even though it
is hard to imagine a particular experience tied to I’ve never seen a rainbow,
we can infer many things if someone says that sentence, such as the fact
that the speaker specifically didn’t see a rainbow last week, or the week
before. We can similarly infer from None of the students can identify the queen bee that there is a unique queen bee, and that if Nick is a student, then Nick can't identify the queen bee. This information is represented
not as specific experiences but as generalizations across scenarios in which
various facts are true, that we seem to be able to reason about in regular
ways. Statements like these and the deductions we can make from them
are a way we can learn, for example, that bees make honey (even if we’ve
never witnessed that process and can’t imagine what it would look like),
that Boston is in Massachusetts (even if we’ve never been there), that I love
science fiction (even if you’ve never witnessed me enjoying it), and even
understand me if I say I've never seen a rainbow when there is no particular event that I am describing but rather a lack of them.
One foundational idea in this book is that language can be used for both
of these functions of meaning: we can use language to evoke particular event
experiences for our interlocutor, and we can also use language to share in-
formation that allows our interlocutor to reason over alternatives. When it
comes to language, some pieces of language contribute only to one of these,
some only to the other, and some to both. To take a familiar example to
see how they can work together, imagine we are reading a children’s sto-
rybook that contains both text and illustrations. The illustrations provide
evocative details about the events that make up the story, while the text
can convey information that reinforces the illustrations as well as other in-
formation which might be impossible to depict but allows us to rule out or
rule in certain information, such as I’ve never seen a rainbow, or all penguin
species lay eggs. In semiotic terms, illustrations convey meaning iconically
via depiction, while the words convey meaning symbolically via description
(Clark, 1996). We will see ways in which the compositional properties of
these kinds of meanings differ, yet both are integral to understanding mean-
ing in language broadly, across all language modalities including writing,
speaking, and signing (Dingemanse, 2015; Clark, 2016; Hodge and Ferrara,
2022). One of the theoretical aims of this book is to investigate how iconic
and symbolic representations interact in human language, with a special fo-
cus on symbolic representations as analyzed using formal semantic models.
In investigating these kinds of interactions between depiction and de-
scription in meaning, the particular focus of this book will be on sign lan-
guages as used by Deaf communities throughout the world, for several
reasons. First, one non-reason: it is NOT because sign languages are mostly
depictive! Just like spoken languages, sign languages are highly symbolic and
compositional, and it is this aspect we will be emphasizing most of all, and
that is the focus of most formal semantics. However, because they are less
commonly written and less commonly presented as divorced from depictive
aspects, sign languages force semanticists to embrace both aspects of their
meaning, whereas the depictive aspects of spoken language communication
tend to be either ignored, or treated as roughly equivalent to descriptive con-
tent. Moreover, sign languages have been understudied relative to spoken
languages and thus it is worthwhile for all linguists to pay attention to them
in order to broaden the study of linguistics, notably here semantics, beyond
well-studied spoken languages like English. Another reason to look at these views of meaning in sign languages is that sign languages help us dissociate the particulars of spoken language from larger conclusions we might want to draw about the human mind and meaning: when the manual/visual modality is utilized to its fullest extent for language, as it is in sign languages, we might ask what notable properties, if any, human languages take on that are often missed when researchers pay attention only to speech, or (even more commonly) only to language as expressed in written text. And finally, we
focus on sign languages here in order to encourage those who already know
more about sign languages to consider questions of interest in formal studies
on meaning: as will become apparent in the following chapters, the field of sign language linguistics has been growing rapidly in recent years, but the subfield of formal semantic approaches to sign languages is still engaged with primarily by people who came to the topic through a general interest in semantics rather than through an interest in sign languages, and it is a goal of this work to support bridging between the two.
1 Event experiences
To clarify further the kinds of “meaning” that will be relevant in our study
of sign languages, let’s consider each type in turn, first with a sentence of
written English and then with a signed sentence in American Sign Language
(ASL). The written sentence is in (1). This sentence has two components of
meaning that will be relevant for us.
(1) Ten students stood in a line at the library desk, but none of them had
their library card.
(2) [ASL counterpart of (1); sign images not reproduced]
Thus, while there are clearly similarities in the kinds of events we would imagine based on the English and ASL sentences we've seen, there are also differences. These are often the kinds of things that people who know both languages remark on as different, focusing especially on the expressive power of sign languages that is lacking in English, particularly written English without intonation or gestures conveying the arrangements, etc.
In the approach we are taking in this book, we will lump together the
content of depictions and expressive meaning, including the shape of the
2 Propositional meaning
Beyond whatever particular event or "picture in your head" a piece of language might evoke, we use language to share extremely precise information about things that simply cannot be experienced. Let's return to the
same example sentences from English/ASL about the library desk. The En-
glish sentence in (3) conveys not just some impression of what it might have
been like to be there, but also the facts that there is a library desk, and ten
students in a particular relation to that desk, and that there is not a single
one of the students who brought their card. The latter of these is, of course,
quite difficult to model as an experience, given that it is about what did not
happen. Nevertheless we can reason productively over these utterances. For
example, if we believe the speaker, then we can be sure that (a) if we ask one
of the ten students in line, that student will not have their library card with
them, and that (b) there are more than five students. Although neither of
these facts were explicitly stated, we feel completely confident about them:
there is no way that (a) or (b) can be false, if we accept the original claim in
(3). We call this relation entailment: a sentence p entails another sentence
q if for every circumstance in which p is true, q is also true. There is no
scenario in which the main sentence in (3) is true but in which (a) or (b) is
false, so the target sentence in (3) entails (3a) and also entails (3b).
(3) Ten students stood in a line at the library desk, but none of them had
their library card.
Entails:
a. If we ask one of the ten students in line, that student will not have
their library card with them.
b. There are more than five students.
(4) [ASL counterpart of (3); sign images not reproduced]
Entails:
a. If we ask one of the ten students in line, that student will not have
their library card with them.
b. There are more than five students.
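To make the notion of entailment concrete for readers who like to see it computed, here is a small illustrative sketch (my own addition, not part of the original examples): propositions are modeled as functions from toy "worlds" to truth values, and p entails q just in case q is true in every world where p is true. The particular worlds and attribute names are invented purely for illustration.

def entails(p, q, worlds):
    # p entails q iff q is true in every world where p is true
    return all(q(w) for w in worlds if p(w))

# Hypothetical toy worlds: how many students are in line, and how many of them have a card
worlds = [{"students": n, "with_card": k}
          for n in range(15) for k in range(n + 1)]

p = lambda w: w["students"] == 10 and w["with_card"] == 0   # the claim in (3)
q = lambda w: w["students"] > 5                             # the entailment in (3b)

print(entails(p, q, worlds))   # True: every world verifying (3) also verifies (3b)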
The ASL example, like the English example, evokes an image: a library
desk, a bunch of students, and in this case a particular arrangement of a
for modeling these kinds of inferences, especially negative ones. In doing so,
important questions are raised for sign languages specifically, and for all
languages in general. For example, what status should the representation of
a particular event have with respect to our representation of a proposition?
Recall that the arrangement of the students around the desk is conveyed by (4) in ASL but not by (1), and so we want to restrict the possible scenarios
in which (4) is true based on arrangement in a way we do not want to do
for English. This means that we will want a way to refer to these events
in the logical system in ASL (we will see more about this in Chapter 5).
Moreover, beyond the spatial arrangements of the students, there seem to be
other linguistic differences between the sentences. For example, the English
sentence seems to presuppose a library desk as already existing or familiar
in the conversation (“the desk”), while the ASL sentence seems neutral with
e.g. the shape or orientation of a rainbow gesture, and that this is processed
not through linguistic structures but through the process of understanding
depictions, the result of which is explicitly not propositional (Fodor, 2007;
Camp, 2018).
Approaches to the study of meaning in general, and to the study of meaning in sign languages in particular, can be divided roughly by the kind of meaning that they focus on and take to be core to language meaning. On the one
hand, the perspective of cognitive linguistics tends to model the meaning
of a sentence as the kind of event experience and/or simulation of the world
that one shares with an interlocutor. Under this view, the goal of someone
who studies semantics is to understand how language is used to share these
experiences. On the other hand, the perspective of formal linguistics
tends to consider the meaning of the sentence to be the propositional mean-
ing, and so the goal becomes to understand the ways that sentences convey
information via entailments and to understand the properties of whatever
logical apparatus we have in our minds that permits this reasoning. We
introduce this approach in greater detail in the next section.
Given this functional way of thinking about propositions, let's end this section by introducing a functional notation. Recall that we can think about the propositional meaning of a sentence like I saw a rainbow as a function that takes in possible worlds and returns TRUE if some conditions hold (equivalently, as the set of possible worlds in which it is true). Let's write this explicitly by introducing a new bracket notation that is conventional in formal semantics, so that ⟦·⟧ takes a piece of language (like the English sentence I saw a rainbow) and provides the propositional meaning of the sort we differentiated from the representation for the event we saw above. In equation form, we can read (5a) as saying that the propositional semantic value of I saw a rainbow is the set of possible worlds in which I saw a rainbow, or equivalently, the function that returns TRUE for those worlds in which I saw a rainbow. Here q is the proposition that I saw a rainbow, to relate to Figure 1.2. The propositional value of a near translation equivalent in American Sign Language would seem, at least on first blush, to pick out the same proposition, (5b).
(5) a. ⟦I saw a rainbow⟧ = λw.q(w) = λw. I saw a rainbow in w
'The function that returns TRUE for worlds in which I saw a rainbow, false otherwise.'
If these two sentences are true and false in exactly the same scenarios as
we suggested in the propositional meanings that we gave in (5) then we say
that they are synonymous.
It has become conventional to use λ notation of the sort we show in (5)
(λw.I saw a rainbow in w) because it makes explicit the compositional/functional
properties of a propositional meaning. For example, we introduced the idea
that a proposition is a sorting function over possible worlds. We can also
model subparts of propositions as functions taking in different arguments,
and the λ notation brings to the front of the equation information about the
arguments that each kind of expression takes. In what follows we’ll introduce
this notation and its compositional properties via several examples.
One subpart of a proposition is a predicate, like see a rainbow or happy. These can be modeled as functions over individuals: English happy or ASL happy (sign images omitted here):
⟦happy⟧ = λx.happy(x)
'The function that returns TRUE for individuals who are happy, false otherwise'
⟦see a rainbow⟧ = λx.see-rainbow(x)
'The function that returns TRUE for individuals who see a rainbow, false otherwise'
Note that although the two forms are different in English (e.g. happy)
and in ASL (e.g. ), the semantic values are the same, since these
two words have basically the same symbolic meaning: they are functions
that will return TRUE for individuals who are happy, and false otherwise.
Students beginning formal semantics who are thinking only about the semantics of English often get confused by the fact that 'happy' occurs both within the semantic value brackets and outside of them, i.e. on the left and right side of the equation in (6a). This can feel circular. But as soon as we take care to separate the object language that we are trying to model (English in (6a) and ASL in (6b)) from the metalanguage we use to talk about the sorting function (English mixed with mathematical notation, since that is the textual language of this book), then hopefully it is clear that it is not circular, but rather helps us make clear predictions about what linguistic expressions are and are not synonymous.
So far we have hardly motivated λ notation much, but let’s consider some
linguistic expressions for functions that take in two participant arguments,
like English see or ASL . These are typically used to talk about
scenarios (in this case, seeing events) that have both an agent (the see-er)
and a theme (what is seen). The propositional contribution can be modeled
as a two-place function that looks for two arguments, first the theme and
then the agent, which is represented in (8). The idea is that arguments are
applied to the function expressed by λ notation in order from outside to
inside, so that the first argument that we “feed” to the function in (8) will
saturate the λx, replacing each occurrence of x with the value of the argument that we feed it. Then, the remaining argument will be fed to λy, replacing any occurrences of y (this process is known as "β-reduction"; see Heim and Kratzer (1998) for further introduction for linguists).
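The following short sketch (my own illustration, with invented names rather than anything from the text) mirrors this order of composition in code: the two-place meaning of see is a curried function that takes the theme first and the agent second, and each function application corresponds to one step of β-reduction.

# [[see]] ≈ λx.λy. y sees x: a curried function, theme argument first, agent second
def see(theme):
    def needs_agent(agent):
        return f"{agent} sees {theme}"     # stands in for the resulting truth conditions
    return needs_agent

r = "the rainbow"    # hypothetical individual constants
g = "the girl"

vp = see(r)          # first β-reduction: saturate λx with the theme
sentence = vp(g)     # second β-reduction: saturate λy with the agent
print(sentence)      # "the girl sees the rainbow"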
), which we might also name with a letter, let’s say r. (What we are
simplifying for now is the meaning of the English definite article the.) Under
this view, the propositional meaning/semantic value of the English phrase
the girl is a particular girl, not a function, which is a property of referential
noun phrases (“referential” meaning that it picks out a particular thing),
and similarly with the rainbow. This contrasts with a verb like see, which is
a function still looking for arguments to saturated/fulfill it (that’s why the
symbolic meaning for see has those lambda expressions, unlike the referential
expressions in (9)).
b. ⟦the rainbow⟧ = r
The interesting piece comes when they combine. We know from a long
tradition of crosslinguistic work in syntax that objects combine with verbs
before subjects, meaning that see the rainbow forms a unit; as semanticists,
we will ask how we arrive at a meaning for this syntactic constituent. What
we propose is that see regularly contributes a function looking first for an
individual contributed by the object to take as an argument (that’s what
λx means: x is a variable over individuals, i.e. people/places/things like girls,
rainbows, etc.) (10a). Then its syntactic object the rainbow contributes its
propositional semantic value which is an individual, (10b). When the value
for the rainbow is fed into the see (two-place) function, it returns a (one-
place) function see the rainbow which returns TRUE for individuals who
see the rainbow r and false otherwise (10c). This then takes in one more
argument, the subject of the sentence the girl, returning a proposition which
is a function that takes in worlds and returns TRUE for those in which the
girl g ( ) sees the rainbow r ( ) and false otherwise (10d).
b. ⟦the rainbow⟧ = r
There are some levels of simplification here, but (10) exemplifies the
basic idea of compositionality and function application behind much of the
formal semantic approach to language. Advantages include showing how
symbols combine in regular ways, such that we can convey new information
about the world by just putting familiar symbols together in new ways, i.e.
that language is systematic. One complication we introduced above is that
the meaning of an assertion like The girl sees the rainbow is a function over worlds, and the right-hand side of (10d) hardly looks like a function over worlds, but in fact it is one in disguise: it simply gives the truth conditions relative to a particular world of evaluation, so it will be true in any situation if g
(that particular girl) sees r (that particular rainbow). If we want to give the
more general meaning that this is a function across worlds, we add in our λw
which turns this into a function over worlds (w is, perhaps unsurprisingly, a
for (12c). Finally, the semantic value for girl is the girl, which composes as the agent argument (12d). The result is the truth conditions in a particular world, and we can define the proposition as the function over worlds in which those conditions hold (13).
b. ⟦ ⟧ = r
c. ⟦ ⟧ = λy. y sees r
d. ⟦ ⟧ = g sees r
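To illustrate the final step computationally (again a sketch of my own, with an invented world representation rather than anything from the text), the composed truth conditions "g sees r" can be abstracted over worlds, so that the sentence's propositional value is a function from worlds to truth values.

def sees(agent, theme):
    # Builds the proposition λw. agent sees theme in w, with a world modeled as a set of facts
    return lambda w: ("sees", agent, theme) in w

g, r = "the girl", "the rainbow"
proposition = sees(g, r)          # λw. g sees r in w

w1 = {("sees", g, r)}             # a world in which the girl sees the rainbow
w2 = {("sees", g, "a cloud")}     # a world in which she sees only a cloud

print(proposition(w1))   # True
print(proposition(w2))   # False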
We can of course also present the same compositional process using gloss-
ing of the signs instead of pictures, as in (14); we will frequently see glossing
used when pictures are not available, in this text typically when citing ex-
amples from previously published works. But, it’s always important to keep
in mind that glosses represent the signs, so that (13) is a more direct rep-
resentation of the endeavor (and hopefully highlights the non-circularity of
this process, as a bonus).
⟦ ⟧ = λw. g sees r in w
⟦ ⟧ = λw. g sees r in w
⦇ ⦈ =
⟦ ⟧ = λw. g does not see r in w
b. Particular event representation:
⦇ ⦈
(17) a.
(18) a.
So, while (17a) tells us something rather specific, we can also conclude
many other facts from this information. How are we able to do this? Formal
semanticists are motivated by this kind of data to assume that humans
are using an underlying logical system in this kind of reasoning, and that it
is the same one that allows us to understand the meaning of sentences that
we’ve never encountered before.
We can also infer new information via implicature, a category of infer-
ences that are cancellable, unlike entailment. Implicatures can arise based
on real world knowledge, as in the case in (19) which is based on the fact
that rainbows are easier to see outside because they’re up in the sky. This
is an implicature and not an entailment because we actually can imagine
a scenario in which the girl saw the rainbow but wasn’t outside: perhaps
she was looking through a window. Implicatures can also arise through
reasoning about the amount of information that someone might share. For
example, another implicature might be that the girl didn’t cause or create
the rainbow, because if she had made the rainbow herself (drawn it, made
one in a lab using a crystal and light refraction, etc.) then we probably
would have said that instead of just that she saw it. But of course, this is
just an implicature and not an entailment: we can quite easily imagine a
scenario in which she did create the rainbow and also saw it.
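The contrast between entailment and implicature can also be checked mechanically, as in the following sketch (my own toy model, with invented worlds): the "window" scenario provides a counterexample world, so "the girl was outside" does not follow as an entailment, only as a cancellable implicature.

def entails(p, q, worlds):
    return all(q(w) for w in worlds if p(w))

worlds = [
    {"saw_rainbow": True,  "outside": True},
    {"saw_rainbow": True,  "outside": False},   # she saw it through a window
    {"saw_rainbow": False, "outside": True},
]

saw_rainbow = lambda w: w["saw_rainbow"]
was_outside = lambda w: w["outside"]

print(entails(saw_rainbow, was_outside, worlds))   # False: an implicature, not an entailment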
(19) a.
(21) a.
Target sentences:
a.
These are extremely simple (and quite obvious) examples, but the con-
cept is that the sentence in (23a) is judged acceptable in the given context,
while the same sentence with negation in (23b) is judged to be not acceptable
in the same context. This tells us that at a minimum, these two sentences
differ in some meaningful way; knowing what we do about the language,
6 On notation
Human language has the potential to be produced and comprehended in
many modalities: the text used to write or read this paragraph is one form
of language, but this book will often be discussing language that exists in
another medium: speech or sign. There are long-used conventions for repre-
senting speech via text: I’ll typically use English orthography as in (24a), but
the International Phonetic Alphabet is another option to more accurately
represent the form of the words in spoken English (24b). Although both
are helpful and used in semantics (truthfully, English orthography is over-
whelmingly used in formal semantics to represent English), neither rightfully
b.
A similar pattern holds true for sign languages, except that there is no clearly conventionalized orthography for representing ASL to the same extent that there is for English. The most common method for representing ASL on the page is semi-conventionalized glossing into English of the sort in (25a), and we will make some use of these glosses as well.
When possible, for any glossing of examples presented anew in this text
from ASL, I aim to follow the SLAASh ID glossing principles (Hochgesang,
2020) and employ conventionalized/searchable ID glosses for ASL from the
ASL Signbank (Hochgesang et al., 2020). However, glosses for examples cited from other works are typically given in the forms that those authors
used, especially in the case of other sign languages, in order to prioritize
faithfulness to the original source. Most importantly, however, just like IPA
is not the full picture of English but is much more faithful to linguistic forms
than conventionalized English orthography, most sentences in this text also
include still images of the signs used in each sentence, to more directly
represent the forms used, as in (25b). Like IPA for spoken language this will
not convey timing and other finer details, but it should give a much more
clear and accurate sense of the object language than glossing alone, and
hopefully allow us to focus on the sign language itself, and not the glosses,
as the object of study.
the pictures. For example, repetition can be glossed in symbolic ways (e.g.
+ + +) but this seems to assume a symbolic analysis; similarly, verbs that
include quite a lot of depiction are sometimes glossed simply as if they are
just as symbolic as a word (e.g. give, move). Since we are ultimately in-
terested in how these symbolic and iconic components interact (and fail to
interact), the pictures should be considered the ultimate reference for the
sentence, with the gloss an additional, secondary source of information.
In the following chapters we will focus on different areas of research
in formal semantics of sign languages and synthesize this research into as
coherent a picture as possible. Instead of building up from the smallest
pieces, though, we will start by investigating the largest pieces (questions in
a discourse and their answers) and work our way down to smaller pieces that
include sentences and their connectives, and eventually nouns and verbs and
quantifiers, until we cut across other dimensions such as countability and
intensionality, before returning at the end to the issues raised in this chapter:
how sign languages fit into the larger picture of how we model meaning in
human language.
2 Questions, answers, and information
We’ve talked about two ways that language can convey meaning, with the
idea that one route functions to evoke a particular event experience, and
the other is for resolving issues via propositions built from symbolic com-
positional structures. In this chapter, we’re going to dig into the latter to
a much greater extent. The view we pursue follows classic work by Roberts
(2012) (first published in 1996) in roughly taking questions and answers to
be the backbone of the way that issues are resolved in a discourse, in order to
model the backgrounding/foregrounding of information in language, often
called the information structure. In sign languages, we can see reflec-
tions of the question-answer discourse structure in the form of some sentence
structures, which we will showcase in this chapter. Moreover, questions and their possible answers (the latter of which form alternatives to each other) provide a useful way to probe the kinds of linguistic forms that can give rise to propositional meaning (descriptions) and to distinguish them from those that only bear directly on particular events (depictions).
In the previous chapter we introduced the idea that propositional mean-
ing can be thought of as a function that divides the circumstances in which
a sentence is true from those in which it is not. That is to say, we introduce
a truth-conditional semantics for propositional meaning. This allows us to
model the meaning of things that don’t happen just as easily as those that
do (such as never seeing a rainbow), and also allows us to model entailments:
one utterance entails another if in all situations in which the first is true,
the second is also true. It also allows us to model new information as a
restriction on the ways that the world might be that are in the common
ground, so that we can view information exchange as eliminating possibili-
ties. There is yet another advantage to this view of propositional meaning
as possible worlds: it allows us to model questions as requests for particular
ways of updating the common ground. Consider, first, a polar question (the
kind of question to which you can answer yes, or no), such as Did you see a
rainbow? From the perspective of propositional meaning, each polar ques-
tion is a request for the interlocutor to determine which set of two (infinite)
sets of possibilities might be right: either the worlds where the answer is
YES (e.g. I DID see a rainbow) or the worlds in which the answer is NO
(e.g. I DIDN’T see a rainbow). A visual schematic representation of this
question-answering process and the associated narrowing of possibilities can
be seen in Figure 2.1.
Although precise implementations differ, this general view of questions
as a set of possible answers is commonly accepted in formal semantics as
a way to model the behavior of questions not just when they are asked di-
rectly, but also indirectly as embedded clauses, e.g. She wondered if you
saw a rainbow. (Hamblin, 1976; Karttunen, 1977; Groenendijk and Stokhof,
1982, 1984). The schemas used in Fig 2.1 reflect a particular implementa-
tion of this idea of questions as partitions on the set of possible worlds by
Groenendijk and Stokhof (1984); in contrast, taking the question to be a set
of propositional alternatives (e.g. {I saw a rainbow, I didn’t see a rainbow})
builds from the work of Hamblin (1976) and Karttunen (1977). Although
we won’t go into the detailed semantics of embedded questions much here,
this can be motivated also by unembedded questions: someone raises a ques-
tion by providing a set of possible ways to update the discourse, and their
interlocutor answers it by choosing from among those possible updates. In
this kind of view, backgrounded information is what is already presumed by
the combination of a common ground and its partition, while new, focused
information is contributed by the answer (Roberts, 2012; Rooth, 1992).
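As a concrete illustrative sketch of this picture (my own simplification, not any of the cited implementations), a polar question can be modeled as partitioning the context set into a YES cell and a NO cell, with an answer shrinking the common ground to one of those cells. The world names below are invented for illustration.

common_ground = {"w1", "w2", "w3", "w4"}        # hypothetical worlds still live in the discourse
saw_rainbow = {"w1", "w3"}                       # worlds where "you saw a rainbow" is true

def polar_question(cg, prop):
    # The two cells of the partition: YES-worlds and NO-worlds
    return [cg & prop, cg - prop]

def answer(cg, chosen_cell):
    # Answering eliminates every world outside the chosen cell
    return cg & chosen_cell

yes_cell, no_cell = polar_question(common_ground, saw_rainbow)
print(answer(common_ground, yes_cell))   # {'w1', 'w3'}: the YES answer narrows the common ground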
The form of overt polar questions has been relatively well studied for
some sign languages, in part because a robust pattern has been observed:
the polar question typically differs from the corresponding assertion in its
(26) (Italian)
a. Laura ha mangiato. ‘Laura ate.’
b. Laura ha mangiato? ‘Did Laura eat?’
(27) a.
particles like this are the most common option for marking the declara-
tive/polar question difference (Dryer, 2013). We see two strategies involved
in polar questions in ASL, then, using nonmanuals and question particles,
both of which are represented in (27). Notably, one thing that ASL does
not do is use auxiliary inversion of the sort familiar from English (Did the
girl...), one of many places where these two languages differ in their struc-
ture.
Polar questions also raise interesting questions about the ways that they
can be answered, i.e. the forms and meanings of different polar response
particles. Negative polar questions make especially interesting test cases: asking Is it not raining? in English can lead to the felicitous response No, it's not. and also Yes right, it's not.; other languages make finer or different distinctions in the meanings of their response particles (e.g. the French
three-way distinction between oui, non, and si). Loos et al. (2020) provide
a detailed investigation of possible polar response strategies in German sign
language (DGS) and fit them into a larger typology of spoken languages
while Gonzalez et al. (2019) investigate these distinctions in ASL.
Polar questions aren’t the only kind of questions, though: we can see
another example of questions that function as a discourse organizer in con-
stituent questions (sometimes called wh-questions, using who, what, when,
where, why, etc.). We can model their semantics as an extension of the se-
mantics we gave for polar questions, illustrated schematically in Fig. 2.2.
The wh-question provides a set of alternative ways to update the dis-
course, which are based on different answers, e.g. in this case things that
Kate saw (due to the question, What did Kate see?). A possible partition
is {Kate saw a rainbow, Kate saw a cloud, Kate saw a robin}, which would
be the partition if we limited the possible objects seen to only these three
options, and also limited the seeing to a single object. We can imagine loos-
ening both of these requirements so that the list of possible objects seen is
much less constrained, and also allowing multiple possibilities, e.g. Kate saw
a rainbow and a cloud, which would then count as its own answer (separate
from the single answers Kate saw a rainbow or Kate saw a cloud). In other
words, under this system we should view answers as exhaustive: if the
answer is Kate saw a rainbow then it means the same thing as Kate saw
only a rainbow, in response to the question What did Kate see?.
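The exhaustivity point can be rendered as a small sketch (my own toy model, with an invented domain): each cell of the wh-question's partition collects the worlds that agree on the full, exhaustive answer to What did Kate see?, so "Kate saw a rainbow" and "Kate saw a rainbow and a cloud" land in different cells.

from itertools import chain, combinations

domain = ["rainbow", "cloud", "robin"]

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# Toy worlds: what Kate saw, plus one irrelevant fact (the day), so that distinct
# worlds can still agree on the exhaustive answer to "What did Kate see?"
worlds = [(frozenset(seen), day) for seen in powerset(domain) for day in ("mon", "tue")]

def partition_by_answer(worlds):
    cells = {}
    for seen, day in worlds:
        cells.setdefault(seen, set()).add((seen, day))   # group worlds by what Kate saw
    return list(cells.values())

print(len(worlds))                        # 16 toy worlds
print(len(partition_by_answer(worlds)))   # 8 cells, one per exhaustive answer (including "nothing")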
When it comes to the form of constituent questions in sign languages,
it is notable that many sign languages seem to also require suprasegmental marking for wh-questions, such as the nonmanual marking seen
in (28) (for wh-questions in ASL, this is often described as “brow furrow-
ing”, in contrast to “brow raising” often seen in polar questions). It has
been a matter of much discussion in the literature on sign language syn-
tax where exactly wh-words are permitted to occur in the word order of
sentences in sign languages: see Petronio and Lillo-Martin 1997 and Neidle
2000 for overviews. Cecchetto et al. (2009) detail a way that the seemingly
typologically unusual word order for questions found in sign languages of
the world relates to the use of nonmanual marking: they note that sign lan-
guages frequently have sentence-final wh-words, while in spoken languages
wh-words are frequently sentence-initial if they are not pronounced in their
canonical position. From a pure syntax perspective, this is a major puzzle,
with no obvious reason for a difference between signed vs. spoken modali-
ties. However, Cecchetto et al. (2009) attribute this difference to the ability
of the nonmanual marking to convey appropriate semantic/syntactic depen-
dencies, i.e. to use nonmanual marking to convey the question status of an
utterance and to relate any dislocated wh-words to the argument positions
in which they are interpreted. We will not go especially deeply into this
primarily syntactic issue, but we can see some evidence of that in a sentence
like (28c), where the wh-word is sentence-final even though it queries the
subject, which would typically be sentence initial in ASL. At the same time,
we see a sentence-initial wh-word in (28b), despite the word querying the
object, which typically follows the verb. So, there are ways that a wh-word
may end up sentence-initially, or sentence-finally, in both cases potentially
differing from the canonical position of the constituent it queries (the word
order for the assertion is subject-verb-object, i.e. girl see rainbow).
(28) a.
Importantly, focus placement reflects QUDs even when the question has
not been stated overtly. For example, the context in (30) doesn’t involve
any overt questions, but it seems to raise the same question What did Kate
see? implicitly, so that it has become the Question Under Discussion that
ideally should be answered by the conversational participants. Thus, focus
placement in the assertion should reflect its status as an answer to this ques-
tion. There are many ways to add complexity to this picture, including sub-questions under discussion, etc. (see Büring 2003 for an immediate extension
to contrastive topics) but this rough outline forms a useful foundation for
what follows.
(30) Context: Kate returns from the window with a big smile on her face,
saying So beautiful!. Someone close to the window announces to the
rest of the room:
a. Kate saw a RAINBOW. (acceptable continuation)
b. KATE saw a rainbow. (unacceptable continuation)
In the remainder of this chapter we will highlight specific ways that sign
languages make use of the structure of a discourse and reflect it in linguistic
forms, and how this interplays with other aspects of the language. The first
section will discuss question-answer clauses in sign languages, a well studied
area in sign language linguistics that becomes simpler when we think about
it as an overt manifestation of this question-answer structure of dialogue and
information packaging. We will then move on to more complicated discourse
structures that allow for foregrounding/backgrounding information such the
use of sentence final focus positions, and then topicalization. In the fourth
and final section we will discuss the interaction of depictive content and
questions, showing that alternatives are not compatible with depiction and
representations of particular events, only propositional meaning.
1 Question-Answer clauses
We begin with a structure that follows quite naturally from the view of a discourse as structured by questions and their answers. In American Sign Language, one common way to express backgrounded/foregrounded information is actually through a sentence that contains what looks like a question-answer pair, as in example (31). A non-exhaustive list of other languages that have a similar type of sentence structure includes the sign language of the Netherlands (Kimmelman and Vink, 2017), Russian sign language (Khristoforova and Kimmelman, 2021), South African sign language (Huddlestone, 2017), and Hong Kong Sign Language (Gan, 2022).
(31)
(34) Context: a bunch of people are wondering what the girl saw.
a. Person A: What does the girl see?
Person B: She sees a RAINBOW. (acceptable)
b. What the girl sees is a rainbow. (acceptable)
c.
(acceptable)
(35) Context: a bunch of people are wondering who saw the rainbow.
a. Person A: What does the girl see?
Person B: She sees a RAINBOW. (not acceptable)
b. What the girl sees is a rainbow. (not acceptable)
c.
(not acceptable)
Taking pieces of both the discourse question answer pair structure and
the pseudocleft structure, Caponigro and Davidson (2011) propose that the
expression-types in (31)/(34c)/(35c) are a question and its answer in terms
of their semantic/pragmatic contribution. We can think of this as in the
schema in figure 2.3, where the question kate see what is raised, with
the answer [kate see] rainbow just as when there are two people in a
discourse. However, their proposal is that the question and its answer are
connected syntactically to each other in the same way as two parts of a
pseudocleft are connected by a copula verb (be and its variants, which are
often covert/unpronounced in ASL). Just like a copula verb in equative
constructions can equate two things (e.g. [Mary] is [the winner of the game]),
the copula verb in question-answer clauses in ASL can equate two things: the
question clause, which has the semantics in (36), roughly [the answer to the
question girl see what], and the answer clause [girl see rainbow]. This
(36)
b. ⟦ ⟧ = λw. g sees r in w
c. ⟦(be)⟧ = λp.λq[p = q]
g. ⟦[(Ans) ](be) ⟧ = λw.∀p ∈ Q(w₀)[p(w) = 1] = λw. g sees r in w
‘The function that takes in worlds and returns TRUE only for the
complete true answer to the question (What did the girl see?, in
(36a)), is equal to the function that takes in worlds and returns
true if the girl saw the rainbow in that world’
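As a rough computational paraphrase of (36) (my own simplification of the proposal, with invented worlds and alternative sets), the question clause contributes the complete true answer to the question in the evaluation world, and the often-unpronounced copula asserts that this proposition is identical to the one contributed by the answer clause.

WORLDS = ["girl-saw-rainbow", "girl-saw-cloud", "girl-saw-robin"]

# Alternatives raised by the question clause girl see what
alternatives = [lambda w: w == "girl-saw-rainbow",
                lambda w: w == "girl-saw-cloud",
                lambda w: w == "girl-saw-robin"]

def complete_true_answer(alts, w0):
    # λw.∀p ∈ Q(w0)[p(w) = 1]: worlds in which every alternative true in w0 is true
    true_in_w0 = [p for p in alts if p(w0)]
    return frozenset(w for w in WORLDS if all(p(w) for p in true_in_w0))

def as_set(p):
    return frozenset(w for w in WORLDS if p(w))

w0 = "girl-saw-rainbow"                            # in w0 the girl saw a rainbow
answer_clause = lambda w: w == "girl-saw-rainbow"  # [girl see rainbow]

# The copula (be) equates the two propositions
print(complete_true_answer(alternatives, w0) == as_set(answer_clause))   # True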
One notable aspect of this proposal is that much of the motivation be-
hind the analysis presented in (36) was originally from analyses of spoken
language pseudoclefts in English (Dikken et al., 2000; Schlenker, 2003). But,
in fact, Caponigro and Davidson (2011) argue that there are several notable
differences between QACs in ASL and pseudoclefts in English, suggesting
that the given analysis which is along the lines of those proposed for pseu-
doclefts is actually much more appropriate for the QAC case than the pseu-
docleft case. This has implications for the analysis of English in return, of
course, for if the analysis is a better fit for ASL, then another solution will
be needed for English to the extent that the two differ. And they do seem
to differ. For example, there is a clear contrast between an English pseudo-
cleft and ASL QAC in that pseudoclefts can’t be formed with polar/yes-no
questions (37). Another difference is that pseudoclefts require referential
answers, whereas QACs can have quantificational/non-referential answers
(38). Pseudoclefts can also only use a subset of possible wh-words in their
language, but QACs allow any wh-word, including which (39) (Caponigro
and Davidson, 2011).
and Vink (2017), extended by Hauser (2019), is that sign languages fall
along a grammaticalization cline, with discourse question-answer pairs at
one end and pseudoclefts at the other end, as in (40).
the main clause must already be negative, including with negative nonman-
ual marking (that negative nonmanual marking sets it apart in form quite
noticeably from a QAC), and the first part is usually not seen as a question,
but rather as a statement. In these cases, the sentence-final negation is
"doubling" the negation in the main clause, not answering a question raised
by that clause. The information structural effect is quite similar in the two
constructions, though: both seem to focus the negativity/negative valence
of the answer, as we can see in their acceptability in a context in which that
is what is at issue (43).
br
(41) a. mary have book, no. (Polar QAC)
‘Mary doesn’t have a book.’
b.
neg neg
(42) a. mary not have book, no. (Focus doubled negation)
‘Mary doesn’t have a book.’
b.
(43) Context: A bunch of people are wondering whether or not Mary has
a book.
br
a. mary have book, no.
‘Mary doesn’t have a book.’ (acceptable)
neg neg
b. mary not have book, no.
‘Mary doesn’t have a book.’ (acceptable)
These so-called “focus doubles” have received attention in the formal syn-
tax/semantics literature in ASL (Petronio and Lillo-Martin, 1997) as well
as in Brazilian sign language (Libras) (Lillo-Martin and de Quadros, 2004),
where they have been analyzed as markers of focus, with a dedicated focus
syntactic projection that induces the focused constituent (in this case, nega-
tion) to be pronounced sentence-finally as well as optionally in its canonical
position. This analysis is supported by the emphatic effect they seem to
have, as well as parsimony with other uses of the sentence-final position, which
is also used for wh-words in ASL (wh-words in questions are considered to
be focused); recall that wh-words themselves can be doubled in ASL (44).
(44)
Davidson and Koulidobrova (2015) note that while verbs, negative words,
and modals all share the sentence-final position, when it comes to what may
and may not co-occur together in a sentence, the positive doubles are in
complementary distribution with any forms of sentential negation. They
take this to be evidence for doubling as a marker not of constituent focus but
of polarity focus, building on work on polarity marking in sign languages by
Geraci (2005). Under this analysis, when the issue at stake is about the truth
of a sentence (i.e. when there is a polar QUD), adding a non-negative double
is licensed when the answer is positive, whereas if the answer is negative then
adding a negation (internally to that sentence only, or internally and via a
double) is allowed.
At this point, the picture of discourse structure in ASL is the following:
QACs instantiate part of the QUD-answer discourse structure. Doubles are
used when the polarity of a clause is under discussion (i.e. when the QUD is
a polar question). This raises a natural question about what happens when
these interact: how do polar QACs work, i.e. what happens when negation is
sentence-final because it is the answer to a QAC? Especially, what happens
when the “question” part before the negation is both a question and has
negative polarity? In this case, the whole thing is interpreted as a negative
QAC, as in (45).
br neg
(45) mary not have book, no.
‘Mary doesn’t have a book.’
One reason for this pattern could be that negative doubling (of the type
seen above in (42)) blocks the negative polarity-agreeing polar QAC: both
serve the same purpose of expressing the statement with negative polarity,
focusing on the negative polarity in that sentence-final position. Given that
the structure proposed for doubling is structurally simpler than the syn-
tactic structures proposed for QACs (doubles involve just a single clause,
while QACs are multi-clausal), it could be that QACs are ruled out on the
grounds that the speaker should use the simplest structure that does the
job between these two possible focusing options (sentence-final focus and
QACs). Clearly, there are interesting psycholinguistic predictions to be made here about choices in language production, and more investigation is needed to determine how robust these patterns are crosslinguistically.
3 Topicalization
We have so far covered the notion of focus, a notion that is often discussed
in opposition to the concept of topic. We might, for example, want to think
about the question portion of a QAC as its topic, especially when it is given
and backgrounded, taken for granted by participants and providing an orga-
nizing structure for the answer. In general, it seems to be common for sign
languages to orient their word order so that topics precede focus, and to the
extent that these information structural notions organize the word order,
(47) a.
but we can see one kind of example in (48) where people give somewhat
different responses for differently topicalized sentences in the same context.
focused on their syntax: in syntactic theory topics can be moved from a po-
sition closer to where they are interpreted, or they could be “base generated”
in the sentence-initial position, without any special relationship to the se-
mantic argument position. Wilbur (1994), Wilbur and Patschke (1999), and
Wilbur (2011) provide clear proposals for the interaction of syntactic position and prosodic marking. From the semantic/pragmatic perspective, there has been much less work done on the notion of topichood in sign languages. In
spoken languages, topics are known to affect anaphora, indexicals, and other
expressions in a sentence, but these remain largely unexplored in the sign
linguistics literature as far as I am aware.
4 Contrast
So far in this section we’ve covered several aspects of focus and topics; a
third important information structural notion is that of contrast (see Repp 2016 for an overview of contrast in spoken languages). Wilbur and Patschke (1998) describe in detail the use of body leans in American Sign Language, most notably forward and backward body leans, which they propose mark contrast in interesting ways: leaning backward marks exclusion, and leaning forward marks inclusion. A second way to mark contrast involves equal alternatives, which are often expressed through the use of partitioned horizontal signing space (Wilbur and Patschke, 1998). For example, the alternative question
in (49) presents coffee and tea as parallel/equal alternatives, and in doing
so assigns them to the ipsilateral area of signing space (e.g. a) and the
contralateral area of signing space (e.g. b), respectively.
(49)
by Pfau and Quer (2010) and Wilbur and Patschke (1999) provide further
important reading especially on the interaction of syntax with nonmanual
markings in marking contrast in sign languages.
When it comes to understanding why horizontal signing space is used for
expressing contrast, we may gain insight by considering non-propositional
meaning. Lakoff and Johnson (1980) discuss the well-used metaphor
similarity is proximity, which we might expect to be active in
the creation and comprehension of depictive structures (Casasanto, 2008).
In this case, we may expect that space is used to reflect conceptual distance,
so that conceptual contrast leads to use of different sides of signing space,
but not in a way that we need to encode in the propositional contribution.
5 Embedding diagnostics
Information structure is important in sign linguistics not just for the view
it provides on various word orders, nonmanual markings, and pragmatics,
but also as a diagnostic of general clausal structure. To take one exam-
ple, Davidson and Caponigro (2016) ask whether polar questions can be
embedded in American Sign Language. For comparison, English allows for
one declarative sentence to embed another (50a), as well as a declarative
sentence to embed a polar question (50b), and a declarative sentence to em-
bed a constituent/wh- question (50c). In English, one can usually tell the
difference between a declarative clause and an interrogative from structural
differences, like so-called “do support” (compare: She bought a book/Did she
buy a book?). However, embedding makes things a bit trickier: in English,
the embedded clause (Her sister bought a book) looks the same on the sur-
face (e.g. (50a-b)); whether it is interpreted as a question or not depends on
the verb in the main clause.
(51) a.
(52)
In (53a), the features of the copied pronoun match those of the subject of the main clause (ix-1). It can similarly
copy the subject of an embedded clause, as in (53b), where the subject copy
features match those of the embedded subject (ix-a). In contrast, it cannot
copy the features of an object, such as the children (ix-arc-b). Padden (1988)
argues that it really is subjecthood and not distance or number features or
other potential confounding factors which is accounting for the pattern of
the sort seen in (53).
The takeaway from (54) seems to be that negation resists depictive con-
tent, in this case exemplified by Japanese “mimetics” like gorogoro. In fact,
we find similar evidence when we look at depictive elements in English:
a depictive onomatopoeia that’s perfectly well-formed in (55a) becomes ill-
formed under negation (55b), while a descriptive modifier conveying purely
propositional content is perfectly fine under negation (55c). (Of note: the
ill-formed nature depends on the meaning: it’s possible to give a meaning
to this which involves “metalinguistic” negation, in other words, when it
means that a better expression should have been used, but this isn’t the
meaning expressed by negation in the non-depictive case.)
(55) (English)
a. Depiction, no negation
The bird was chirrrp-chirrping[expressed in a sing-songy manner]
on her perch.
b. Depiction, with negation (not acceptable)
*The bird wasn’t chirrrp-chirrping[expressed in a sing-songy man-
ner] on her perch.
c. Descriptive modifier, with negation
The bird wasn’t chirping loudly on her perch.
‘Of all the books in a row, it was difficult to pull one down’
b. Depiction, with negation (not acceptable)
*book ds_c(books lined up), not ds_c(pull down w/difficulty)
‘Of all the books in a row, it wasn’t difficult to pull one down’
c. Descriptive modifier, with negation
‘Of all the books in a row, it wasn’t difficult to pull one down’
7 Conclusions
In this chapter we have focused on the relationship between questions (ei-
ther overt questions or covert questions under discussion), their answers, and
linguistic forms. The specific linguistic forms that relate to these questions
were question-answer clauses, sentence-final focus position, topicalizations,
contrastive uses of space, and subject pronoun copy. In the penultimate
section, we then moved to trying to understand the way that different com-
ponents bear on the use of alternatives in sign languages, with the takeaway
that alternatives seem to require interpretation as a symbol, not as an iconic
depiction. A consequence of this is that much of the depictive nature of sign
languages happens in a way that is disjointed from much of the informa-
tion structuring, and care should be taken to consider both when designing
analyses of sign language semantics.
3
Logical connectives
it in cases of ignorance (Alex had tea or coffee, I’m not sure which) or in
cases of choices (Alex can choose tea or coffee), and tends to be strange to
use with an inclusive meaning (if Alex had tea and coffee, it’s often consid-
ered strange to say Alex had tea or coffee). However, Grice (1989) argued
that these differences make sense in light of the way that we use language
to communicate, since in the case of, say, the inclusive meaning, if we knew
both to be true we should have used a stronger description, Alex had tea
and coffee, instead. The argument is that the meaning given to the logical
expression like the connective ∨ (inclusive disjunction) is actually the right
model for English or (its semantics), just that the natural language expres-
sion seems to carry different meaning because of the way that humans use
language and reason about expressive choices on top of their basic meaning
(its pragmatics). This approach in the middle/latter half of the 20th cen-
tury opened the doors wide to using logic to model natural language,
including logical connectives.
When it comes to correcting an outdated view that human language is
somehow “less than” logic, the parallel to outdated views of sign languages
is compelling: just as spoken language was assumed to be more
messy and inferior to logic, but was eventually found to have more in com-
mon than previously realized, sign languages were at one point considered
impossible to analyze using logic, and yet that too has been clearly dis-
proven: they contain all of the same logical structure as spoken languages.
This in no way means that logic is all there is to either spoken or signed
languages: meaning in human language is multi-layered and contains multi-
tudes and no semanticist would argue that “meaning” should be completely
reduced to truth conditions. Rather, those who work with truth conditions
take them to be a valuable way to understand how we use language to com-
municate and learn about the world in both precise and abstract ways that
account for the inferences we draw about the world from what we are told.
Human languages in all modalities do this to a level that is unparalleled
by other organisms (biological and artificial), which makes the investigation
of this ability in sign languages especially worth understanding and appre-
ciating. In this chapter we review findings related to the logical operators
negation, conjunction, and disjunction in the semantics and pragmatics
of sign languages.
1 Negation
The simplest logical operation is negation (¬): in logic, negation can be
defined as a function that simply takes a single proposition and returns its
truth conditional opposite. For example, take a proposition (e.g. it will rain
today) that takes possibilities/worlds and returns those in which it is true
(e.g. the worlds in which it rains today: λw.it rains today in w). Negation
applied to this function will return the complement set of worlds (λw. it doesn’t
rain today in w, i.e. a function that takes in worlds and returns those in
which it doesn’t rain today). Put in slightly more functional notation, if
p(w) = it rains today in w, then ¬p(w) = it doesn’t rain today in w, and
thus more generally for any proposition p and world w, if p(w) = 1 then
¬p(w) = 0.
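For readers who find a toy implementation helpful, the following sketch in Python (my own illustration, not part of any published analysis; the world representation is invented) models a proposition as a function from worlds to truth values, and negation as the function that maps p to ¬p:

    # Toy model: a proposition is a function from worlds to truth values,
    # and negation maps a proposition p to the proposition that is true
    # exactly where p is false.

    worlds = [{"rain_today": True}, {"rain_today": False}]

    rain_today = lambda w: w["rain_today"]   # λw. it rains today in w
    neg = lambda p: (lambda w: not p(w))     # λp.¬p

    not_rain_today = neg(rain_today)         # λw. it doesn't rain today in w

    for w in worlds:
        print(w, "p(w) =", rain_today(w), "  ¬p(w) =", not_rain_today(w))
    # Whenever p(w) is True, ¬p(w) is False, and vice versa.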
The English word not frequently seems to have the effect that we de-
scribed above for the logical operator ¬, so that we might want to define
its semantics as a function that takes propositions and switches their truth
value: ⟦not⟧ = λp.¬p. So the meaning of It will not rain today is the ba-
sic proposition It will rain today, with the addition of a negative operator:
(not)(It will rain today). We can build a mini-compositional account for
this step:
    ⟦ (hs)(today rain) ⟧
      = ⟦(hs)⟧(⟦today rain⟧)                    (assuming compositionality)
      = (λp.¬p)(λw. it will rain today in w)    (⟦(hs)⟧ = λp.¬p, as proposed)
      = λw. it will not rain today in w
The key observation to make about the analysis proposed in (59) is that
the functional meaning of headshake is the same as the functional mean-
ing of not in (58). This highlights one natural question that arises in the
study of negation in ASL and other sign languages: should English not,
the ASL manual negative sign, and the ASL hs nonmanual marking all have the same semantics?
3. LOGICAL CONNECTIVES 61
We could choose instead to give the manual sign the meaning of the English
word not, and say that the headshake just comes along for the ride, following
the manual sign (say, the negative manual sign brings along the nonmanual
headshake), but this would leave us with a serious question about how it
is that (59c) comes to have a negative meaning and not a positive meaning!
At this point, it is hopefully already clear how the study of negation in sign
languages, like the study of negation in general crosslinguistically, is not
a straightforward extension of English, and worth significant investigation
across sign languages, especially with the additional complexity of deter-
mining the relationship between nonmanual (suprasegmental) markings and
manual signs.
In fact, it is well known that many spoken languages display variation in
precisely this area, namely, that sometimes, two negative elements are inter-
preted as a single semantic negation, while other times/in other languages
they are interpreted as two separate logical negation functions. Consider,
for example, the English example in (60); there are two interpretations of
this sentence, one that is considered the interpretation in “standard” English
(60a), and another that is common in many other varieties of English, which
is completely the opposite meaning (60b). These two possible ways to inter-
pret two negations are found in different languages and language varieties
across the world, and one can find either one being considered the “stan-
dard” interpretation. For example, in “standard” Italian, the negative form
nessuno is negative on its own (61a)(like English nobody) yet when combined
with sentential negation leads to a single negation “concord” interpretation
(61b), just like (60b).
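The difference between these two interpretation strategies can be made explicit with a small illustrative sketch (again my own, and a deliberate oversimplification): under a double-negation interpretation each negative morpheme contributes its own ¬, while under a concord interpretation all of the negative morphemes jointly express a single semantic negation.

    # Two ways to interpret a clause containing n negative morphemes.

    def double_negation(p, n_negatives):
        """Each negative element contributes its own logical negation."""
        for _ in range(n_negatives):
            p = not p
        return p

    def negative_concord(p, n_negatives):
        """All negative elements jointly express one semantic negation."""
        return (not p) if n_negatives > 0 else p

    saw_somebody = True          # toy scenario: the speaker did see somebody
    # "I didn't see nobody" contains two negative morphemes:
    print(double_negation(saw_somebody, 2))   # True: "I saw somebody"
    print(negative_concord(saw_somebody, 2))  # False: "I saw nobody"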
The claim, then, is that the requirement for nonmanual marking correlates
with the ability of nonmanual marking to serve as a negation on its own,
across sign language varieties. In terms of frequency of one category over an-
other, Zeshan (2006) reports that the non-manual dominant sign languages
are more common in her language sample: 26 out of 37 surveyed sign languages are non-manual dominant.
(63) (ASL)
br hs
a. today no-one 3-email-1
‘Today no one emailed me.’
br
b. *today no-one 3-email-1
‘Today no one emailed me.’
This raises the important issue of how and whether the extent of nonman-
ual marking corresponds to its semantic scope, i.e. the size of the proposition
that it negates. Under a theory in which the semantic scope corresponds
Kuhn takes this pattern as well as other behavior of overt indices with dis-
junctive referents as an argument in favor of iconic pressures on the use
of spatial loci only in cases of some kind of existence, even for these rather
abstract spatial loci. This seems plausible, and in line with the idea that
the use of loci is motivated by depiction (Liddell, 2003), even when they
have a very abstract meaning. The second modality difference that Kuhn
notes related to negative concord is the use of nonmanual marking to express
negation. This ties into what he argues is a language-wide pressure in favor of ex-
pressing negation redundantly, based on the abundance of negative concord
between multiple negative words in spoken languages. Tying these together,
he argues that the pressure to express negation redundantly is covered by non-
manual marking in sign languages, while the (iconic) pressure against using
space in cases of non-existence means that a manual sign (typically associ-
ated to space) will be unlikely to be used to express the second negation. He
emphasizes that these are just tendencies, and provides the example from
Russian Sign Language, which does seem to have negative concord based on
two manual signs.
Does this vary across contexts, across signers, and/or across sign languages? If
we take nonmanual marking to be a separate negator, then sign languages
frequently exhibit negative concord; on the other hand, if we focus on man-
ual signs only, then negative concord is much more rare, perhaps due to
depictive pressures on associating discourse referents to space. Related to
this question, Henninger (2022) has identified many examples of negation
expressed in ASL without negative nonmanual marking and, even more sur-
prisingly, negation expressed only by nonmanual marking, in its own timeslot,
making an even stronger case for a separate semantic contribution from non-
manual marking. Moreover, this work shows that there is more complexity to the
pattern when it comes to natural production than most of the semantic work
has appreciated so far, and a further indication that nonmanual expressions
probably should be considered to contribute a negative function on their
own, as we modeled above in (59).
Another open question in this area relates to the possible scope for nega-
tion in sign languages. We saw above that even in language varieties without
negative concord, like the “standard” English interpretation discussed above, if two
negative expressions can appear in the same sentence then they are simply
interpreted as separate negators. Why does this seem to be unacceptable,
with or without negative nonmanuals, in sign languages like LSF and ASL?
One key might be to better understand the scopal properties of negation
in sign languages and how they relate to information structure. For exam-
ple, Geraci (2005) discusses multiple syntactic sites for negation in LIS; the
same questions arise with other sign languages and whether they must scope
over/negate just a verb phrase, the entire clause, or both.
Gonzalez et al. (2019) investigate this question by examining the use
of negation as an answer in question-answer clause pairs. As we discussed
in Chapter 2, this is a clause-type used in sign languages to highlight the
question/answer nature of discourse structure in which the second “answer”
clause is the focused constituent and must be an answer to the question
raised in the first constituent. This is relevant to the question of double
negation because we do see examples like (68), in which there is a negation
in the question clause (i experience none, marked with brow raise, ‘Do I
have no experience (with interpreting)?’), and then there is a negation in
the answer clause (no have, marked with a headshake, ‘No, I do have
some’), with the overall interpretation of a positive.
It suggests one way that sign languages do seem to express something like
double negation. As Gonzalez et al. (2019) point out, answers in question-
answer clauses in sign languages generally seem to be restricted in their
(72) a.
(73) a.
⟦ ⟧ = {Alex had tea, Alex had coffee}
hn hn
b. alex have tea-a coffee-b (ASL, conjunction)
On the one hand, this is a simple analysis that takes things at their
face value, in some sense: it seems that there are many different ways to
signal these meanings that are not dependent on the item doing the con-
necting (unlike and and or in English) and this is captured by an analysis
that puts the conjunctive/disjunctive force outside of the formation of the
set of alternatives. Yet it also raises questions about the language modality
and expression of conjunction and disjunction: is the strategy of dissociat-
ing the force (conjunction vs. disjunction) from the syntactic connection
of two coordinates somehow more natural in sign languages than in spoken
languages? The data so far suggest that it is common in sign languages,
and possibly less common in spoken languages, although there are clear ex-
amples of both options in each modality. Consider, for example, Japanese
Sign Language (JSL) (Asada, 2019), which has a very similar structure to
ASL in the disjunctive vs. conjunctive force being differentiated (only) by
nonmanual marking in (74).
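One way to see what it means to put the conjunctive/disjunctive force outside of the coordination itself is with the following sketch (an illustration of the general idea only, not of any specific formal proposal): the coordination merely collects a set of propositional alternatives, and a separate operator, standing in here for the nonmanual marking or for context, supplies universal or existential force over that set.

    # A 'general use' coordination just collects alternatives; an external
    # operator (standing in for nonmanual marking or context) adds the force.

    def coordinate(*alternatives):
        return list(alternatives)

    def conjunctive_force(alts):   # every alternative holds
        return all(alts)

    def disjunctive_force(alts):   # at least one alternative holds
        return any(alts)

    # Toy situation: Alex had tea but not coffee.
    alts = coordinate(True, False)   # (Alex had tea, Alex had coffee)

    print(conjunctive_force(alts))   # False: the conjunctive reading
    print(disjunctive_force(alts))   # True:  the disjunctive reading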
There are also spoken languages that seem to have general use coor-
dination. Gil (2019) describes coordination in Maricopa (a Yuman language
spoken in Arizona, USA), which uses the same expression for both
disjunction and conjunction, as seen in (78), in which the addition of an
inferential marker leads a list to be interpreted as disjunctive, while in its
absence the same list is interpreted as conjunctive. A similar pattern is
confirmed by Ohori (2004) and Asada (2019) for spoken Japanese, which
has several constructions including the basic list structure in (79) that can
be interpreted as conjunction or as disjunction depending on the context
(based on the assumption that one can visit multiple places but will choose
to live in one place).
(80) Context: We want to know what Alex had to drink, and we know the
only options are coffee, tea, both, or neither
Person A: Alex had tea or coffee.
Person B thinks: If Person A knew that Alex had both, they would
have said Alex had tea and coffee. Since they didn’t say that, he prob-
ably didn’t have both, so it must be the case that Alex had tea or coffee
but not both.
Thus, Person B interprets Person A’s statement as: Alex had tea or coffee, but not both.
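The reasoning attributed to Person B in (80) can be spelled out as a small sketch of standard Gricean strengthening (illustrative only; it is not the design of the experiment described below): the hearer takes the literal, inclusive meaning and adds the negation of the stronger scalar alternative.

    # Gricean strengthening of "tea or coffee": keep the literal (inclusive)
    # meaning and add the negation of the stronger alternative "tea and coffee".

    def enriched(utterance_true, stronger_alternative_true):
        return utterance_true and not stronger_alternative_true

    # Situations: (Alex had tea, Alex had coffee)
    for tea, coffee in [(True, False), (False, True), (True, True), (False, False)]:
        literal = tea or coffee
        exclusive = enriched(literal, tea and coffee)
        print((tea, coffee), "literal:", literal, "enriched:", exclusive)
    # The enriched meaning is true only when exactly one of the two holds.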
Figure 3.1: Descriptions using the “weak” scalar items some and or and
accompanying pictures to test for scalar implicatures in Davidson (2013)
Due to their use of general use coordination, sign languages like ASL
present a somewhat different logical structure to investigate scalar impli-
catures than more well-studied languages like English. For example, does
general use coordination count as involving different possible alternative an-
swers for the purposes of the kind of reasoning in (80), or not? Davidson
(2013) investigates this question by comparing scalar implicature-based in-
terpretation of disjunction in ASL and in English, in order to understand
how the different scalar structures in the two languages affect interpreta-
tion of the logical expressions. This experimental study compared scalar
implicatures in two languages by comparing two participant groups: Deaf
adult signers of ASL, and hearing adult speakers of English, with each group
presented with sentences in their language (ASL or English, respectively) to
test scalar implicatures. In this study participants were all asked to judge the
acceptability of a description of a picture, and the critical trials presented
sentences using expressions of disjunction in contexts where conjunction
could also be true (for example, the coordination case in Figure 3.1). If
participants accept this description, they interpret it “logically” but not
pragmatically; if they reject the description, they are interpreting it with
the pragmatically enriched meaning. Participants’ responses on these trials
were compared to trials in which descriptions matched, and also to trials of
the same structure but with quantifiers (expressions using some where all
could also be true).
The result of the study was that when it comes to a “typical” scale in
both languages like the quantifiers, both ASL and English speakers react
similarly, suggesting that both groups/languages had similar baseline ex-
pectations for calculating scalar implicatures. However, these two groups
showed a difference on the coordination scale: while ASL uses nonmanual
marking to distinguish conjunction from disjunction, English uses different
lexical items (and vs. or).
The takeaway from a corpus study of coordination in NGT (Sign Language of the Netherlands) seems to be that although most
examples do involve parallel syntax, semantics, and information structure,
there are also genuine exceptions. It is not clear whether this rough ratio
holds also in spoken language corpora, but it would be worthwhile studying
whether these “constraints” are truly constraints on production or just ten-
dencies in spoken languages, as they seem to be in NGT. Recall in
the previous section that we saw both nonmanual marking and the use of
space invoked as part of the explanation for the typological pattern of neg-
ative concord found in sign languages. It’s possible that the same factors
play important roles when it comes to parallelism and information structure
for coordination as well: we can see in (82) that space is used to contrast
the disjuncts (loci a and b), and nonmanual marking in the form of brow
lowering is likely relevant as well. Perhaps there is also influence on the use
of space coming from metaphor, as we saw in our discussion of contrast:
the two possibilities are depicted in different areas of space, following
similarity is proximity, suggesting that incorporating gestural elements would
be critical for a full understanding of this contrast in spoken languages as
well. Of course, this may also simply be a difficult question to answer when
restricted entirely to a production corpus, although fieldwork and/or care-
ful experimental work has the potential to tease these apart to understand
the role of sign language specific and general language components when it
comes to parallelism contrasts on coordination.
5 Conclusions
We end this chapter by turning our focus to the big-picture questions: what
can we learn about sign languages from looking at logical operators like
negation, conjunction, and disjunction, what can we learn about these operators
by looking at sign languages, and what kind of conclusions can we draw
about language more broadly? On the last point, it can be instructive to
consider the model that we introduced in Chapter 1, in which language
meaning involves propositional representations while at the same
time constructing representations of particular events. Clearly, logical
operators contribute to propositional representations: that is one of their
most obvious roles. In fact, logical operators are a large motivation for that
kind of meaning: we can derive entailments through logical processes like the
interaction between disjunction and negation (e.g. disjunctive syllogism: if
a disjunction p∨q holds and a negation of one disjunct ¬q holds, then we can
infer the other disjunct p is true). So if there ever was a clear contributor
to propositional meaning, logical operators are it. But do they bear on
questions about multiple types of linguistic representations for meaning?
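As a concrete illustration of the kind of entailment at stake, a brute-force check of disjunctive syllogism over truth-value assignments might look as follows (a sketch for illustration only, with nothing specific to sign languages):

    from itertools import product

    # In every truth-value assignment where (p or q) and (not q) both hold,
    # p holds as well, so the premises entail p.
    print(all(p for p, q in product([True, False], repeat=2) if (p or q) and not q))
    # True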
One way to think about this question when it comes to negation, con-
junction and disjunction is to consider models of each event as independent,
at least at first. So, in the case of “it’s not going to rain today”, there may
be a positive representation of a particularly rainy day (we can reason about
it through experience/simulation) at the same time as the proposition that
rules out all of the possibilities in which it does rain today (the proposi-
tional content). This has advantages over purely propositional approaches
that cannot explain why we nevertheless seem to conjure up a mental im-
age of a rainy day even when the assertion expressed rules them out. It
also can explain why sign languages may be more flexible when it comes
to constraints like those on parallel information structure in coordination
that we discussed in the last section, if space is going to also be used for
depiction (which doesn’t have the same constraint). It also has advantages
over purely cognitive linguistic models of events which struggle to model the
contribution of logical connectives like negation, conjunction, and disjunc-
tion. Clearly, these present intriguing areas in which to think about the intersection
of multiple types of linguistic meaning.
4
Anaphora: a spatial
discourse
So far we’ve focused on whole sentences and how they convey information
in a discourse via propositions and the propositional operators that con-
nect them. This chapter will focus on smaller pieces of language, namely,
how we refer to, describe, and track the people, places, and things that we
are talking about. Innovative work in semantics of spoken languages in the
early 1980s by Heim (1982) and Kamp (1983) argued that we used basi-
cally the same semantic representations for tracking things within a single
sentence and for tracking them across sentences in a discourse. It is conse-
quently one of the most striking aspects of the structure of sign languages
from the perspective of formal semantics and pragmatics that sign languages
naturally incorporate the signer’s 3-dimensional signing space to keep track
of things across both a sentence and a discourse (Lillo-Martin and Klima,
1990; Schlenker, 2011), and the way that this spatial nature of discourse
is integrated with the use of space to depict these characters, objects and
events (Liddell, 2003; Taub, 2001; Cormier et al., 2013; Schlenker et al.,
2013; Fenlon et al., 2018). In this chapter we will first focus on a couple of
clear examples where locations in space (“loci”) are used for both discourse
tracking purposes (“anaphora”) and for depiction, observing that the two
systems are closely tied together. We’ll then investigate the use of a pointing
indexical sign to these loci/areas of signing space and its comparison to (per-
sonal and demonstrative) pronouns and other ways to refer to things. From
there, we’ll move to such uses in verbs, tying together the pronominal and
verbal uses. In the final section we’ll compare different formal approaches
to the problem of loci, put forth a positive proposal, and set out suggestions
for future work in this domain.
(83)
If we treat the use of space as entirely about linking anaphoric expressions with their
antecedents, we risk losing sight of the way that this same use of space is
used for depiction (Liddell, 2003; Cogill-Koez, 2000b; Schlenker et al., 2013).
For example, it won’t explain why particular uses of space are used or not,
such as the high pointing to the rainbow in (83) or the way that pragmatics
plays a role in determining whether to use overt space at all versus using
(covert) strategies more common in spoken languages (Ahn et al., 2019).
At issue in the end is whether and how to emphasize the arbitrary vs.
the non-arbitrary nature of space in sign languages, since both come into
play directly in the meaningful use of space in sign languages. On the one
hand, it seems like space can be used somewhat arbitrarily, and when it is
used arbitrarily it seems to serve a function of linking together discourse
referents in the way that other systems have been shown to do in spoken
and written language. We will investigate these in Section 2 when we discuss
the way that pointing behaves pronominally in sign languages, and how the
places one points to (we follow the literature in calling these spatial locations
“loci”) share commonalities with restrictions on pronominal reference of the
sorts seen in written and spoken language. Again, the emphasis in this line
of work is on the striking commonalities between the use of space in sign
languages, and the way that pronouns and their antecedents are associated
with each other in other language modalities like speech and writing.
A focus on the arbitrary nature of loci contrasts with the observation
that space can be used in sign languages in a way that is not arbitrary at
all: signers can make use of space to depict a relationship between objects
and to depict events (Fenlon et al., 2018; Liddell, 2003). This use of space
exists in both spoken and signed languages, a way to show who does what to
whom (Schlenker and Chemla, 2018), how objects are arranged (Davidson,
2014) and how things move through space (Kita and Özyürek, 2003). Within
the formal semantics/pragmatics literature, it is usually said that this use
of space is interpreted iconically, i.e. not through (potentially arbitrary)
symbolic means; Ferrara and Hodge (2018) discuss such uses of space as
both depictive and/or indexical.
This chapter will focus on these two uses of space and their interactions
in sign language discourse. We will start in Section 2 by discussing the
preponderance of evidence of similarities between pronominal structures in
sign languages and spoken languages. Section 3 will continue this discussion
but with a focus on the verbal domain: how is the discourse information
conveyed in pronouns reflected in the verbal domain, e.g. through agreement or
clitics? Section 4 will then discuss the hypothesis that the (arbitrary) use of
space in sign languages is a visible manifestation of pronominal indices that
have been hypothesized in dynamic semantic accounts of spoken languages.
Section 5 will explore the comparison between this same use of space in sign
languages and gender/noun class features in spoken languages, and compare
to the index account. In section 6 we will move on to studies that highlight
Perhaps not surprisingly, the picture that emerges from (85) is found in
(86) a.
Those who are interested in learning more about Binding Theory and
subsequent/related theoretical terminology, especially about the notions of
locality (meaning roughly in the same clause, verbal domain, sharing a sub-
ject, etc.), binding (being not just co-indexed but also syntactically con-
nected), C-command (a particular definition of “structurally higher”), etc.,
are encouraged to learn more in a syntax course; basic notions are also in-
troduced in the context of sign language syntax in Sandler and Lillo-Martin
(2006).
For our purposes here, it is worth noting that many languages that are
unrelated to English show the kind of pattern exemplified in (87) for ex-
pressions that seem similar to herself, her, and Gia. Sometimes exceptions
arise, and when they do, it becomes especially important to more clearly
define what “counts” as being similar to something like the reflexive herself
or not. In these cases, one is faced with the challenge of either adapting
the theory or categorizing it as a different case; so it goes, when developing
any linguistic theory, with many factors influencing one’s decision. A nat-
ural question for sign linguistics is whether the same patterns found in so
many spoken languages of the world also hold, and in any cases where
they do not, what this says about the similarities and differences between
pronouns in spoken and signed languages, and how to draw generalizations
across all languages.
Consider, first, some basic sentences in American Sign Language. Some
have claimed that they seem to show the same pattern as in English (Sandler
and Lillo-Martin, 2006; Schlenker and Mathur, 2012); others have suggested
that what are quite strong preferences in English (which originally motivated
the binding theory) are weaker preferences in ASL (Kuhn, 2015).
(88) a.
To the extent that these preferences resemble English ones but are weaker,
or even differ completely, a few possible explanations come to mind. First,
all languages have their own set of ways to reduce ambiguities. Consider
gender in English pronouns in the two sentence dialogue in (89a). The two
people introduced by the first sentence have names which are stereotypi-
cally associated with two different genders, and so gendered pronouns can
be used to distinguish them in the following sentence. But in a case where
the stereotypical gender associated to two names is the same, as in (89b),
the story is more clearly ambiguous in written English between the artist
as the helper or the helped one. Spoken English can be ambiguous too, but
speech can also make use of prosody, for example an extra stress on she can
signal a topic shift, indicating that the object of the previous sentence (Ann) is
the subject of the new sentence, i.e. the artist.
English has three genders that can be expressed on pronouns (male, female, neuter), and
languages with more extensive noun classes can make many more distinc-
tions including human, classes of other living things and objects, etc. These
features can interact with other factors of languages to correctly resolve
pronouns, including organizing by topic/comment structure, information on
verbs, and many other factors.
Given all of this, it is then not at all surprising that in sign languages,
there are also many similar strategies for resolving anaphora. These include
topic/comment ordering, the use of different handshapes for different classes
of nouns, and associating different referents in a discourse to different areas
of space. We will focus on the last of these for the time being: the association
of discourse referents to different areas of space, known in the literature as
spatial “loci” (Lillo-Martin and Klima, 1990). Consider (90): the association
of Gia to the signer’s right-hand signing space and Lex to the left signing
space unambiguously picks out Gia as the referent of the pointing sign when
it is made to the left signing space.
(90)
A question that arises again and again in formal approaches to sign language
semantics is how to think about this use of space, and how it compares and
contrasts with other ways to disambiguate reference in spoken and written
language. Attempts toward answering this question will comprise the re-
mainder of this section.
One much-discussed difference between the use of loci in sign languages
and systems like gender and noun classes in spoken languages is that there
seems to be no finite limit to the use of space in terms of the number of dis-
tinctions it can support. Lillo-Martin and Klima (1990) note that between
any two areas of space associated to individuals in a discourse, the area in be-
tween them could be associated to a third referent. This seems true in terms
of theoretical possibilities/competence although “performance” in terms of
memory demands and/or perceptual distinctions lead to some natural lim-
its; in principle, then, sign languages can make infinitely many distinctions.
However, something that has been much less discussed in formal approaches
is that it’s not at all obvious that spoken languages are restricted to a finite
set of gender classes, either. Perhaps traditionally we have thought about
pronouns as having limits to gender distinctions (e.g. he/she/it for English
third person singular reference) but neopronouns (e.g. xe) are increasingly
used and perfectly illustrate the principle that there is no principled limit to
the number of pronoun distinctions for a language (Conrod, 2019). The fact
that sign languages can make in principle infinite distinctions was once used
as an argument that they are not like gender/noun class features in spo-
ken languages, but given the creativity available in neopronouns this should
probably be set aside as a difference between spoken and sign language pro-
noun/feature classes.
Beyond the sheer number of distinctions, it has also been a point of much
discussion how arbitrary (or non-arbitrary) the use of loci is. We saw
above one example, where the pronoun pointed high in the sky for a rainbow,
in contrast to lower for the girl watching the rainbow (83). Schlenker et al.
(2013) for example provide the following dialogue about a short basketball
player and a tall basketball player. This height difference will most naturally
be reflected in the choices of areas of space to associate to them: the locus
associated to the tall basketball player is high in signing space, while the one
associated to the shorter player is lower (91).
(92)
Although the height (on non-present loci) and direct pointing (for present
loci) are typically considered as two different kinds of constraints on sign
language pronouns, we can consider them unified in the sense that they
can each motivate the choice of locus (to the physical presence, features, or
even more abstract notions like honorifics, etc.), following Liddell (2003).
Both show that the location associated to the referent can be motivated in
different ways. Note that some combination of motivation/arbitrariness is
present in every feature class in the world’s languages: “gender” is precisely
something with some motivation in the natural world but it is extended in
many cases in an arbitrary way so that, for example, bridge is masculine in
some languages (e.g. French le pont) but feminine in others (e.g. German
die Brücke). We will discuss some further notable cases in section 6 of this
chapter in which space is used in a motivated way for purposes of scene
depiction. The overall takeaway at this point is that the combination of
arbitrariness and motivated choice of locus in pronouns in sign languages
reflects a mix of arbitrariness and motivated choice found in pronoun systems
in spoken languages as well.
Another important similarity between the pronoun systems of spoken
and signed languages is that they are part of a larger system of ref-
erential expressions that are used in different syntactic and pragmatic
contexts. For example, depending on the pragmatic context/salience of the
reference in the context, reference in ASL can be made via bare nouns (93a),
a pronoun of the sort we have been discussing (93b), or implicitly via a null argument.
(93) a.
examples in Catalan Sign Language when used in the lower plane of the
signing space. Davidson and Gagne (2022) extend this observation to the
use of ix in the lower plane to express specificity in ASL, and furthermore
they argue that the source of this is a more general use of planes to express
domain size (where the smallest/lowest is a domain with a single element,
i.e. specific, in agreement with Schwarzschild (2002)’s view of specificity
as an extreme domain restriction to just one element in the domain). This
raises a point emphasized also in Koulidobrova and Lillo-Martin (2016), that
the presence of ix and the locus it associates with are independent aspects
to analyze when trying to understand sentences like (93b-e) and others.
Finally, recall in Chapter 2 that we emphasized the ways in which in-
formation structure influences the expression of sentences in sign languages.
One example of this kind of influence is the topicalization of a full noun
phrase, with a pronoun anaphoric to the noun phrase expressed in argument
position later on in the same sentence, or in the same discourse. Example
(94) illustrates this in ASL: canonical word order is subject-verb-object, but
instead of appearing in its canonical position, the subject friends has been
topicalized, and the pronominal ix-arca appears in its argument position
immediately preceding the predicate (real smart).
In examples like (94) we can easily see the link between friend (the
topic) and ix (the subject of the sentence), but ASL also permits a sub-
ject to be implicit, i.e., unpronounced in a circumscribed set of linguistic
contexts (Lillo-Martin, 1986; Koulidobrova, 2012). Moreover, between com-
pletely overt pronouns like ix and completely covert pronouns, there are
intermediate options for providing anaphoric information, most notably the
use of verbs to indicate their arguments via incorporating the same locus
system as ix, which will be the focus of the next section.
The verb in (95a) has a “first person” ending since the subject is the speaker, while the verb in (95b) has a “third person” ending since the
subject is someone else not currently participating in the discourse.
(96) a.
On the surface, a major difference between French clitics and ASL loci
on verbs is that a locus-marked verb does not preclude full noun phrases,
in contrast to the unacceptability for clitics in (97). However, there is a
well-known phenomenon of clitic doubling (repeating the argument both as
a clitic and in argument position) in general across the world’s languages
(Anagnostopoulou, 2006), so the locus and full noun phrase co-occurrence
do not rule out a clitic analysis. Moreover, when we look more closely at
directionality on sign language verbs, we see deeper similarities between
clitics and directionality in sign language verbs. There are, for example,
many ways in which loci show odd properties from the perspective of ana-
lyzing loci as agreement; Nevins (2011) notes that the following are all odd
or unexpected properties of ASL agreement:
2. The plural cannot be marked on both subject and object at the same
time (Sandler and Lillo-Martin, 2006)
5. There is a preference for marking the indirect object over the direct
object (Janis, 1995)
2. Plural cannot be marked on both subject and object at the same time:
Georgian omnivorous number triggered by clitics
5. Preference of the indirect object over the direct object: French clitics
(98) [A happy artist]2 emailed her2 friend. She2 wanted to meet for coffee.
There are many empirical phenomena which are accounted for straight-
forwardly in dynamic systems, but the trade-off is that the logic required
to correctly capture the relations between indices depending on where they
are in sentences/clauses is quite complex, though at this point also quite
well thought through. (Along with references above, a gentle introduction
to dynamic approaches can be found in Coppock and Champollion 2022.)
Among many key data points are so-called “donkey sentences” (99), which
have the notable property that there seems to be no syntactic relation/scopal
dependency between the indefinite noun phrase in the antecedent clause (a
donkey) and the pronoun in the consequent clause it, even though they are
interpreted as co-varying (e.g. any donkey owned by a farmer is going to
end up fed, not just a single particular donkey, so it is not referential, i.e. it
doesn’t pick out a particular donkey).
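To give a flavor of what “dynamic” means here, the following toy sketch (a drastic simplification of the cited systems, with invented names) treats a discourse as a growing registry of referents: indefinites introduce a referent under a fresh index, and pronouns retrieve a referent by index, even across sentence boundaries.

    # A discourse as a growing registry of referents: indefinites add a
    # referent under a fresh index, pronouns look a referent up by index.

    class Discourse:
        def __init__(self):
            self.referents = {}
            self.next_index = 1

        def indefinite(self, description):
            """'a farmer', 'a donkey', ...: introduce a new discourse referent."""
            index = self.next_index
            self.referents[index] = description
            self.next_index += 1
            return index

        def pronoun(self, index):
            """'he', 'it', ...: retrieve an existing referent by its index."""
            return self.referents[index]

    d = Discourse()
    farmer = d.indefinite("a farmer")   # sentence 1 introduces referents ...
    donkey = d.indefinite("a donkey")
    print(d.pronoun(donkey))            # ... and sentence 2 can still retrieve them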
Schlenker argues that sign language loci are in fact an argument for
dynamic models of semantics over non-dynamic semantic models given that
sign languages provide a “visible” manifestation of loci. To understand
why, first we take a short detour, regarding so-called “Bishop sentences”.
The structure of these sentences is such that two identical indefinite noun
phrases (a bishop... a bishop) are introduced in one clause, and co-vary with
pronouns or definite descriptions in another syntactically inaccessible clause
(he... him), as in (101).
The problem for non-dynamic accounts basically is that we get the in-
tuition that the bishops should vary in some kind of regular way (perhaps
with a series of events/meeting occasions) and this is communicated clearly
in (101), but less so in (102b), and so we might conclude that definite noun
phrases like the one in (102) are not the hidden structure behind the scenes.
This problem is presented in Schlenker (2011) as a choice in theoretical ap-
proaches: we can make the idea of the hidden/covert definite noun phrases
more complicated (e.g. The first bishop), etc., to salvage the idea that pro-
nouns are always hidden definite structures, or we can simply abandon the
approach altogether that the way these sentences work is through some kind
of covert noun phrase and instead complicate our logical machinery by al-
lowing that pronouns come with their own dynamic indices, and a logical
system that acts on these indices. According to Schlenker (2011), sign lan-
guages play a major role in deciding between these two accounts because
they illustrate an overt version of indices in precisely these places, via loci.
Let’s examine what these sentences look like in sign languages to better
understand this claim. As expected if loci are a way to show dynamic indices,
both donkey and bishop sentences can make use of loci to distinguish one
of the noun phrases from the other. The sentence in (103) has the structure
of bishop sentences in some ways: it includes two noun phrases in the first
clause ( ix-a a student ‘a student’) and (ix-b b student ‘a student’) and
the symmetrical verb meet, except that unlike in English the noun phrases
are not entirely identical: the first is associated to locus a and the second
to locus b.
Under an account in which the two loci a and b are ways of overtly mark-
ing two different dynamic indices, this is exactly what we expect: “identi-
cal” noun phrases except for the indices, which then are used again in the
consequent clause on the directional verb a-give-b to provide a co-varying
interpretation. This is taken as evidence from sign languages for overt in-
dices (Schlenker, 2011; Zucchi, 2012; Lillo-Martin and Gajewski, 2014;
Schlenker, 2018). However, there are at least two reasons we might be hes-
itant in jumping to this conclusion.
The most obvious objection to the argument that loci are overt indices is
that it’s not clear why the loci can’t just be considered as more descriptive
material, of the sort seen in definite noun phrases like the former, the latter,
the one in this locus, the one in that locus, etc. This is roughly the tack
taken in response to these data by Ahn (2019a,b). She notes that if the locus
is descriptive material then we might expect it to be elided in unambiguous
cases where the potential for mistaken pronoun resolution is very low, and in
fact this is exactly what we find in studies of the pragmatics of loci, both for
indexical pointing (Ahn et al., 2019) and loci in directional verbs (Kocab et al.,
2019). The (potentially descriptive) contribution of the locus usually has to
be interpreted as non-restrictive/non-at-issue: we definitely want universal
sentences like the one in (103) to range over all possible versions of students
who meet (and thus give each other cigarettes), not just the ones who are in
a particular location! But this is precisely what material like former/latter
etc. is able to do, so there’s no reason the spatial locus cannot do this as
well (restrict via space and not time, in contrast to former, latter, etc.).
Schlenker (2011) anticipates this point in arguing that one can add un-
pronounced descriptive material to the pronouns in donkey sentences of
basically exactly this sort we have in mind for loci (former, latter, etc.), and
points out that if we let this unpronounced material become fine-grained
enough (e.g. have infinite options for disambiguation) this becomes basically
the dynamic analysis. This seems entirely right! But then the takeaway
for formal semantics more broadly is not that sign language loci provide a
unique kind of evidence in favor of dynamic semantic accounts, only that
they follow exactly what we would expect of any language given what we
already know about pronouns in these kinds of contexts: we are able to use
descriptive content to make increasingly fine grained distinctions, and we
seem to be able to do so overtly or covertly. In other words: a conclusion
4. ANAPHORA: A SPATIAL DISCOURSE 97
is not that sign languages show unequivocal evidence for dynamic accounts,
but rather that they show evidence on exactly the point where dynamic
and non-dynamic accounts seem to entirely agree: that languages seem to
need some kind of restriction in the meaning of pronouns that
is fine-grained enough to account for an unlimited number of distinctions
between discourse referents, and while spoken languages can use all kinds of
not-at-issue descriptive content, sign languages can also use space.
The second concern for taking sign language indices to be an example of
dynamic variable indices is that in most dynamic semantic systems, indices
are used on variables both in coreferential contexts and in contexts where the
relationship between the two noun phrases seems to be more syntactic, such
as quantifier raising, wh-movement, etc. (but see Reinhart 1983, Büring
2005, and Chierchia 2020 for discussion of keeping them separate). It can
often be difficult to disentangle examples of the sort that encode discourse
dependencies (tracking two discourse referents from sentence to sentence)
from those that entirely encode syntactic dependencies, but probably the
most reliable way to do it is with negative quantifiers: we can see in (104a)
that Alex and he can be coreferential due to discourse/context factors, and
in (104b) we might be able to imagine the he in the second sentence as some
generic student we have in mind, but it’s generally unacceptable to do this
in (104c) with a negative quantifier. This contrasts with the cases in (104d-
e), where we can get coreference between the negative quantifier phrase or
question phrase and a pronoun, thanks to a syntactic/semantic dependency
within a sentence.
Intriguingly, the contrast arises in sign language loci as well: Graf and
Abner (2012) and Abner and Graf (2012) note that sign language loci that
are optional but acceptable to establish coreference between a pronoun and
a universal quantifier (105) seem to be unacceptable precisely in cases that
necessitate real “syntactic binding”, that is, places when there can’t be co-
reference linking the pronoun and its antecedent because there is no reference
in the case of negative quantifiers (106).
(105)
(ASL, Abner and Wilbur (2017); Abner and Graf (2012))
(106)
(ASL, Abner and Wilbur (2017); Abner and Graf (2012))
a. * no politician person-a tell-story ix-a want win
b. no politician person-a tell-story want win
‘No politician said that he wanted to win.’
On this point, Kuhn (2020) argues that the use of a locus in sign languages
involves a kind of iconicity, basically a claim of existence, and this seems
roughly right, and is compatible with the account offered later in this chap-
ter. Before we get there, though, let’s consider our conclusions on dynamic
indices and loci. One can be highly sympathetic to a dynamic semantic
approach that models the introduction and binding of indices of discourse
referents in ways that more accurately reflect the dynamics of discourse (for
example, binding across sentences, an asymmetric semantics for conjunction,
etc.), while still retaining skepticism that sign language loci are indices,
and that is precisely our take here. Sign language loci provide (descriptive)
information about the discourse referent which can be used for disambigua-
tion, just like other descriptive information. Indices can be assigned to any
discourse referent, loci or not, and in fact, most discourse referents are not
assigned overt loci (Frederiksen and Mayberry, 2016), and their use appears
to be constrained by pragmatic information/quantity considerations similar
to other descriptive material (Ahn et al., 2019; Ahn, 2019b).
Let’s take this idea of loci as (typically not-at-issue) descriptive material,
and apply the perspective to one final sentence type often used to probe for
dynamic indices: verb phrase ellipsis. Example (107) is from Lillo-Martin
and Klima (1990) for ASL, and the claim is that such examples can have
two different logical structures, which are reflected in two different interpre-
tations of the “elided” (silent) material in the second sentence. Similarly,
the LIS example in (108) shows two different interpretations, depending on
whether the elided noun phrase is interpreted in a “strict” way or a “sloppy” way.
Note, however, that these examples merely show that there are two
different ways to recover the content of this verb phrase: the main thing
is that the content that is elided needs to be something that is ignored
in this reconstruction. Indices presumably have this property, but other
information found in descriptive noun phrases does too: in particular, this
is true for gender features, which are also ignored for purposes of ellipsis: in
example (109) there is no disambiguation of the two readings of verb phrase
ellipsis even though the two pronouns that would be pronounced differ in
gender (Mary values her secretary, too vs. Mary values his secretary, too).
At this point, we might wonder whether the kind of content that the locus
contributes is similar to features like noun class or gender, and in fact this
is precisely the second major approach taken in formal semantics/syntax
regarding sign language loci, which we turn to in the next section. The
takeaway at this point from ellipsis is that if loci do involve descriptive-
like material, then it cannot be of the sort that is at issue, i.e. it has to be
something like gender features that is not considered in the semantics/logical
form upon which focus alternatives are constructed.
(110) (French)
a. Tu aimes la fille.
‘You love the girl.’
b. La fille aime le livre
‘The girl loves the book.’
(111) (French)
a. la fille... elle...
‘The girl... she...’
b. le garçon... il...
‘The boy... he...’
(112) Yesterday I1 ran into Kate2 while she2 was taking a walk
⟦she2⟧g = ιx : fem(x) . g(2) = x
‘The extension of she (with index 2) under the assignment function
g is the unique individual in the domain that is referenced by 2
in the assignment function; it is only defined if the individual is
appropriately referred to using ‘she’ series pronouns’
We could consider the use of sign language loci in a similar way. Imagine
that we take the definedness (ability to receive a semantic value) of the pronoun in
ASL to depend on the appropriate use of spatial location (e.g. a), along the
lines in (113).
(113) ⟦ix-a2⟧g = ιx : depicted-at-a(x) . g(2) = x
How does this account for the patterns of loci that we have been dis-
cussing? The property of being associated to a locus will help disambiguate
sentences, like the one in (113) above, because only one of the individuals
in the first sentence was associated to that locus a. This works roughly
like gender in the process of disambiguation in English, where co-indexed
noun phrases need to not be incompatible with respect to the presupposi-
tional requirement (gender, or spatial location, or perhaps noun class, etc.).
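A sketch of how such definedness conditions could be implemented (illustrative only; the predicates and referent names are my own stand-ins for the restrictions in (112)-(113)) is the following: the pronoun retrieves its referent from the assignment function, but is undefined if that referent fails the restriction contributed by the feature or locus.

    # A pronoun denotes g(i), but only if the referent satisfies the feature's
    # restriction (gender for English 'she', a spatial locus for the ASL pointing sign).

    def pronoun(index, restriction):
        def interpret(g):
            referent = g[index]
            return referent if restriction(referent) else None  # None = undefined
        return interpret

    # Hypothetical referents and assignment function.
    kate = {"name": "Kate", "fem": True, "locus": None}
    gia  = {"name": "Gia",  "fem": True, "locus": "a"}
    g = {1: kate, 2: gia}

    she_2  = pronoun(2, lambda x: x["fem"])           # requires a 'she'-appropriate referent
    ix_a_2 = pronoun(2, lambda x: x["locus"] == "a")  # requires association with locus a

    print(she_2(g)["name"], ix_a_2(g)["name"])        # both defined; both pick out Gia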
(115) (English)
John is the only one who saw his mother.
a. Alternatives: {Billy saw John’s mother, Andy saw John’s mother,
Jessica saw John’s mother, ...}
‘No other kid saw John’s mother.’
b. Alternatives:{Billy saw Billy’s mother, Andy saw Andy’s mother,
Jessica saw Jessica’s mother, ...}
‘No other kid saw their own mother.’
Along with the verb phrase ellipsis data we saw above in (107)-(108), we can
conclude from this that the disregarding of both loci and gender features
in focus alternatives puts them in a class together, and apart from other
descriptive content: contrast with the object mother/mother, which has to
be present in each focus alternative.
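The two alternative sets in (115) can be made concrete with a short sketch (my own illustration, using the names from (115)); note that nothing about the gender or locus of the substituted individuals plays any role in constructing the alternatives, while the descriptive content mother recurs in every one.

    # Focus alternatives for "JOHN is the only one who saw his mother":
    # the possessor is either fixed to John (strict) or co-varies (sloppy);
    # pronoun features play no role, but 'mother' appears in every alternative.

    domain = ["Billy", "Andy", "Jessica"]

    def alternatives(focus_value, reading):
        alts = []
        for x in domain:
            possessor = focus_value if reading == "strict" else x
            alts.append(f"{x} saw {possessor}'s mother")
        return alts

    print(alternatives("John", "strict"))  # corresponds to reading (115a)
    print(alternatives("John", "sloppy"))  # corresponds to reading (115b)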
One objection to thinking about sign language loci as features comes
from the apparent difference between their fluidity and that of gender fea-
tures: typically gender features are considered to be stable features across
different discourses (e.g. female gender pronouns are appropriate for Kate
across all kinds of contexts), whereas appropriate loci change from discourse
to discourse: Alex might be placed on the signer’s right hand side in one
discourse and on the signer’s left hand side in another. Except that this isn’t so clear-cut, either.
That said, it is also the case that truly arbitrary uses of sign language
loci are few and far between, as shown by corpus analyses (Cormier et al.,
2015a), so we might wonder, what causes loci to be used at all? There
are two separate but possibly related answers that come to mind. For one
thing, Ahn et al. (2019) discuss the unacceptability of loci when there is no
ambiguity, especially in the case of two noun phrases that differ in animacy,
or a context in which plausibility provides clear disambiguation. This high-
lights the strong disambiguating purpose of sign language features, seeming
to take them one step further than, say, the difference we see in (116) in
English. It’s possible that a language like English may be moving in this
direction, though, using semantic gender only when useful for purposes of
disambiguation as the singular pronoun they becomes more commonly used
to intentionally not provide gender information. The second point is that
sign language loci are frequently used to depict while also disambiguating, a
dual nature that has long been noticed and subject to theorizing, and which
we focus on in the next section.
6 Loci as depictive
So far we have discussed several formal analyses of sign language loci, which
included considering them as the visible manifestation of indices in dynamic
semantics, or as semantic/syntactic features, but another view of sign lan-
guage loci emphasizes their iconic/depictive nature (in contrast to the arbi-
trariness inherently emphasized in both formal accounts). There is some
empirical motivation for highlighting the non-arbitrary nature of loci: cor-
pus studies of natural production data show that the use of loci on verbs
and/or (pro/)nominals is highly motivated (Cormier et al., 2015a), and that
loci will be established for discourse referents especially in cases when verbs
can show as well as tell something about the events in which they are in-
volved, or if they are used to depict further aspects of the referents.
Within the formal literature, this has been sometimes described as iconic-
ity in sign language loci/features (Schlenker, 2014; Schlenker et al., 2013;
Schlenker, 2018; Lillo-Martin and Gajewski, 2014), while much foundational
work on the theorizing behind the iconic uses of sign language loci has
occurred in cognitive linguistics frameworks (Liddell, 2003; Taub, 2001;
Cormier et al., 2013). An example of an iconic reflection in the choice of
sign language loci can be seen in the difference between the locus a associated
to the girl, which is at a lower place near the signer’s waist level, and the lo-
cus b assigned to the rainbow, which is much higher, presumably motivated
by the idea that the rainbow is up in the sky (118).
(118)
Just like gender and noun class features, this iconic information is not at issue,
i.e. cannot be targeted by negation, among other things. We can see this
in (119), where the pronoun ix-ahigh appears under the scope of negation
in the second sentence (ix-1 not understand ix-ahigh ), and yet still the
presupposition projects/the inference goes through that the speaker’s
younger brother is tall (i.e. the tallness is not negated).
7 Spatial restriction
The takeaway so far from looking at the formal semantic emphasis on the
disambiguating role of loci, and the emphasis from cognitive linguistics on
the depictive role of loci, brings us to the goal of trying to unify these two
uses. Here, we will make a proposal inspired by previous hybrid accounts
which include both depictive and descriptive components (e.g. Schlenker
et al. 2013) along with definite approaches to pronouns, with some conse-
quences to the analysis of semantic features generally. We begin with a
basic pronoun, in both English and ASL, which in the absence of any other
restrictions will simply pick out a referent based on some assignment func-
tion g. One approach from, say, the classic text Heim and Kratzer (1998)
is to simply say that every (pro)nominal includes an index (e.g. 2 in (120))
and the interpretation of the pronoun simply involves looking into one’s list
of discourse referents and looking up the reference based on this index,
e.g. g(2). In short, the reference comes from the (value of the) index, as
we see for English (120a) and ASL (120b). On top of that, English has an
additional restriction that the reference be appropriately referred to using
that feature (e.g. female, for she/her pronouns in this case)(120a). One big
question is, what role should the locus a play (120b)?
b. J 2 Kg = g(2) + ??
In the classic proposal put forth by Lillo-Martin and Klima (1990), the
index is a visible version of the variable, so that instead of the complicated
(120b) above, we can simplify to (121). Here, instead of the arbitrary index
2 assigned to the discourse referent, the index is made visible by the use of
locus a, so roughly we need only track one thing, this index-as-locus.
(121)
J Kg = g(a)
‘The referent assigned to this index a’
(122) a.
J Kg = ιx[x = g(a)]
‘The unique individual which is equal to the referent assigned to
this index a’
b.
J Kg
= ιx[x = g(a) ∧ x is a student]
‘The unique individual which is a student and is equal to the
referent assigned to this index a’
(124) a.
J Ks = ιx. x is a student and x is in the situation s and R(x, a).
‘the unique student in the situation related via R to locus a’
The big question under this view is: what meaning comes with “related
via R to locus a”? As mentioned above, “semantic” gender marking in a
language like English is often contrasted with what seems like a more arbi-
trary use of loci in a language like ASL since choice of locus for a particular
referent changes across different contexts. However, we’ve also seen that
this distinction is not a hard and fast one, given the context dependence
of features in English as well, illustrated in (116). Ahn (2019a,b) argues
that the locus contributes a locational restriction, something like “having
the property of being assigned to the location a”. The proposal put for-
ward here is something very much along these lines, with extra inspiration
from Conrod (2019)’s approach to gendered pronouns, which argues that
the meaning of gender features on pronouns is essentially a use condition,
i.e. that it is appropriate in the context to refer to someone with female
pronouns. In ASL, we can say that for R(x, a) to hold, it simply requires it
to be appropriate (sometimes for depictive reasons, sometimes for the pure
pragmatic purpose of contrast) to refer to someone with that locus (125a);
the dynamic version is provided for comparison in (125b).
(125) a.
J Kg = ιx[x = g(a)]
‘The unique individual which is equal to the referent assigned to
this index a’
2019b), all of which are expressed via loci in many sign languages, in-
cluding American Sign Language. Under this system, something can be asso-
ciated with a particular locus for motivated/depictive reasons (known in the
sign language linguistics literature as “topographical space”, Liddell 2003)
or for discourse anaphoric reasons, and that difference need not be encoded
in the meaning contributed by the indexical sign. It also collapses pronomi-
height of tall people, honorifics, etc., but may also be arbitrary and merely
contribute pragmatic contrast/distinction for disambiguation, just like spo-
ken language gender, which reflects classes that we perceive in the biological
and social world but can be deployed for language in ways that are sensitive
to pragmatic context. Finally, we note that encoding the locus either as an
index for an assignment function or as a restriction in a definite noun phrase
in a given situation ends up being nearly equivalent given some assumptions
about the structure of definite noun phrases, an ultimately reassuring way
of viewing a longstanding debate between “variable” and “definite”/e-type
approaches to anaphora and the contribution to this debate from sign lan-
guages.
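As a sketch of why the two encodings converge (assuming only that the appropriateness restriction R is satisfied by the referent that the assignment supplies), compare the two routes for a pointing sign at locus a:

J ix-a Kg = g(a)    (locus as a visible index)
J ix-a Kg = ιx[x = g(a) ∧ R(x, a)] = g(a), whenever R(g(a), a) holds    (locus as a restriction in a definite)

Whenever the appropriateness condition is met, both routes deliver the same referent; they come apart only in how they treat contexts where that condition fails.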
8 Conclusions
The use of space in sign language discourses is one of the most well-studied
areas in sign language semantics/pragmatics, in part because it so beau-
tifully integrates descriptive and depictive elements of language in a dis-
course. For years, formal approaches to sign languages have been interested
in modeling the use of space, famously analogizing it to dynamic indices
(Lillo-Martin and Klima, 1990) or to semantic features like gender (Neidle,
2000; Kuhn, 2016). One conclusion of the formal discussion here is that
under an approach to pronouns in which they share significant morphosyn-
tactic structure with other definite noun phrases like demonstratives (that)
and definite phrases (the student), these distinctions are hardly significant:
both involve semantic restrictions on reference. Furthermore, the “limited
categories” of gender features has become an increasingly outdated idea, as
we see the proliferation of pronoun features to reflect an increasingly nu-
anced view of gender in spoken languages like English (and many others),
requiring a flexible semantics (Conrod, 2019). This semantics can be min-
imally adjusted to reflect many depictive uses of space in sign languages
while retaining a descriptive function (restriction in a definite noun phrase),
as exemplified in the sketch put forward in Section 7.
Many important questions remain. First, we have only touched in very
limited ways on the difference between loci on noun phrases and on verbs. In
some syntactic theories, such as Minimalism (Chomsky, 2014), these are quite
asymmetric, and more work should be done to understand whether and how sign
languages exhibit agreement in the classic sense, discussed in depth in Lillo-
Martin and Meier (2011) and Pfau et al. (2018). More should also be said
about when loci are overt versus covert, and how they compare with clitics;
in this area, more careful studies of language change and processes in which
pronouns change into clitics which change into agreement forms are clearly
going to be useful, as they have been in spoken languages (Culbertson, 2010).
Finally, this chapter argued for a hybrid way of thinking about the de-
pictive and descriptive elements of sign language loci, but like other hybrid
accounts (e.g. Schlenker et al. 2013; Schlenker 2014) it does not directly pro-
vide an account for how the depictive aspect (including but not limited to
topographical space) works. This is mostly because depiction is processed
through a different stream: not through conventionalized linguistic struc-
tures but through cognitive processes involved in picture/image processing,
and thus linguists will have limited expertise to lend to this question except
to make references to how we convey meaning through depiction (Greenberg,
2013; Camp, 2018; Fodor, 2007). That doesn’t mean it isn’t deeply impor-
tant though, and cognitive linguists in particular have more to say about
the integration of language and depiction that one hopes can be integrated
more fully into the formal account of the descriptive properties as given in
this chapter. In the next chapter we turn to perhaps an even clearer way
that depiction integrates with description.
5 Classifiers, role shift, and demonstrations
1 Depictive classifiers
We will be using the phrase "depictive classifiers" to refer to a wide class of
signed expressions found in nearly all of the world’s sign languages (Zwit-
serlood, 2012; Emmorey, 2003) which are sometimes also called “depicting
The semantics of these expressions has been of interest since some of the
earliest scientific attention to sign languages: Klima and Bellugi (1979) note
that the handshape acts as a type of “pronoun”, analogizing the gender clas-
sification available for pronouns in languages like English and French to the
classes of handshapes available in classifier predicates. They also note that
these seem to have a depictive element, as in, for example, the movement
of a car up a hill which can illustrate a meandering manner of movement,
emphasizing the dual symbolic/iconic nature of classifier predicates.
Although the dual symbolic/iconic nature of classifiers has long been
underlying discussion of these “depicting classifiers” in sign languages and
is the view that we will work from here, there has also been notable push-
back to this kind of hybrid symbolic/iconic analyses from both the symbolic
and iconic directions. One perspective has emphasized the highly sym-
bolic/linguistic nature of these elements: Supalla (1983) rightfully notes
that in principle any apparently gradient/analog form can be broken down
into many (increasingly small) discrete morphemes. So, what might look
like a holistic meandering movement can be broken down into many small
discrete movements. From the linguistic perspective, doing so makes the
representation of these structures much more complex: instead of an iconic
arc, one would have to use many morphemes expressing changes in the arc,
i.e. upward, then over, then upwards some more, etc. This tradeoff in com-
plexity might be viewed as favorable if one is committed to the notion that
language expresses only discrete meaning-form mappings. In other words,
one might motivate increasing the number of morphemes present in depictive
classifiers by trying to preserve a parallel between spoken English words and
sign language manual signs. However, many other researchers have noted
that there is no need to do this to preserve unity between the signed and
spoken mode, as spoken languages include abundant analog depictions as
well (Clark and Gerrig, 1990; Clark, 1996, 2016; Dingemanse, 2012, 2015;
Dingemanse and Akita, 2017; Kita, 1997; Davidson, 2015; Maier, 2018).
Given the increasing appreciation for the existence of depiction in spoken
language, we seem to lose any motivation for keeping the components of
depictive classifiers discrete symbols; depictive classifiers can include analog
components and still be “language.”
At the other end of the spectrum, some researchers have emphasized the
depictive nature of these expressions, as in work by Liddell (2003); Perniss
et al. (2010); Taub (2001); Cogill-Koez (2000a,b) and others. In an impor-
tant way this of course seems exactly right: clearly these are used when there
is intent to depict directly/iconically instead of (only) describe symbolically,
and it is unfortunate how often the depictive elements of depictive classifiers
have been dismissed. However, an extreme version of this hypothesis ends
up also dismissing the symbolic nature of some of their components, such as
the handshape, and their compositional status as predicates in sign language
sentences. This ends up not only wrong in terms of linguistic analysis but
can also, more practically, lead to a lack of appreciation for the complexity
of sign languages and the achievements involved in acquiring them.
While it’s good scientific practice to make sure that all ends of the ide-
ological spectrum are explored and tested, experimental evidence as well
as recent theoretical analyses appears to support a dual symbolic/iconic ap-
proach to depictive classifiers. Emmorey and Herzig (2003) used a controlled
experimental setting to compare the interpretations of depictive classifiers
by deaf signers of ASL (who have familiarity with both conventionalized and
nonconventionalized aspects of the language) and hearing non-signers (who
presumably lack familiarity with the conventionalized aspects of the lan-
guage), to determine what aspects of their forms are interpreted as categor-
ical and gradient by each of these groups and to see if the role of language ex-
perience affected these judgments. They found that handshape selection, as
well as a few modulations of the handshapes (handshape sizes, figure/ground
uses, etc.) seem to be interpreted differently between Deaf ASL signers and
hearing nonsigners, suggesting that ASL signers’ interpretations are influ-
enced by conventionalized categories since they treated some differences as
discrete categories, where nonsigners did not. In contrast, other more de-
pictive aspects (dot placement, for example) were interpreted similarly by
both groups as not discrete but analog/continuous. They take these results
to support a distinction between symbolic and categorical interpretation
of handshapes on the one hand, and iconic and gradient interpretation of
movements, locations, and modifications to these handshapes on the other
hand. On the theoretical side, Zucchi (2018) provides a detailed discussion
of the tradeoffs between discrete and analog analyses of classifier predicates
in light of larger questions about gesture-like meaning in both spoken and
sign languages, arriving at the same conclusion that sign language classifier
predicates contain both discrete symbolic components (in the choice of hand-
shapes) and analog depictive components (in the location and movements
used).
2 Classifier semantics
Building from the idea that depictive classifiers convey meaning in two ways,
in part via conventionalized symbol and in part via iconic depiction, we
will adopt a formal semantics in which the handshape is handled just the
same way as semantic features like gender and noun classes (as potentially
infinite but still discrete and symbolic) and the depictive component is an
event demonstration (Zucchi, 2012; Davidson, 2015; Zucchi, 2017). We will
walk here through a possible formalization of this intuition. First, classifier
handshapes are conventionalized and provide meaning via their symbolic
nature. For a classifier in which a 3 ( ) handshape is used, it should
require that we are discussing an event that involves a vehicle; in contrast,
The most notable feature of this analysis is the fact that there is a pic-
ture on the right hand side of the equation, in fact, a picture of the very
same expression that we say we are analyzing in terms of meaning. But
this isn’t circular! Rather, the demonstrate predicate requires an event
as one of its arguments, in particular a depicting language event, and we
can find one of those in the sentence itself, i.e. from the “form” side of the
equation. Technically, any linguistic expression might be a demonstration
of something; in fact, that’s exactly how we’re going to want to analyze
quotations! But, for now, it is clear that some linguistic expressions are es-
pecially intended to depict something, and depicting verbs are an especially
clear example of that. We encode this via the demonstrate relation, which
takes the event of communicating (which we are representing in the black
and white pictures, just as we might use a different font for the quotational
use/mention distinction in English) and relates it to the event of cats facing
each other.
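To preview the shape such a denotation takes (a sketch along the lines just described, with d standing in for the signed depicting event itself and the gloss invented for illustration), the vehicle classifier predicate might look like:

J CL-3(d) K = λx∃v[demonstrate(d, v) ∧ theme(x, v)] ∧ vehicle(x)
‘x is a vehicle, and x is the theme of an event v that the signed depiction d demonstrates’

The handshape contributes the discrete, symbolic restriction (vehicle), while the movement and location of d contribute analog, depictive information about v via the demonstrate relation.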
These are not even the only classifier types: Benedicto and Brentari
(2004) also discuss body part/limb classifiers, which they argue using similar
tests of negation, distributivity, etc. introduce a single internal argument;
this can likely be extended as well to whole “body classifiers” discussed
later in the section on Role Shift. Clearly there are many possibilities for
cross-linguistic research on this topic in sign languages between classes of
handshapes and the possible arguments that they can introduce. Another
issue that clearly deserves more study is the syntactic status of the (non-
linguistic) depictions introduced by the demonstrate relation. In one effort,
4 Classifier pragmatics
Depictive classifiers are a beautiful example from the perspective of seman-
tics and semiotics of both symbolic language and iconic language composing
in regular compositional ways: handshapes express symbolic restrictions
while movements and locations can iconically depict. Given the discussion
in Chapter 2 about the ways that depictions resist participating in question-
answer structures, we may ask how depictive classifiers participate in prag-
matic calculations of various sorts. As one example, expressions related to
each other in terms of logical strength, such as some/all, are well known to
lead to pragmatic inferences known as scalar implicatures: if we use a pos-
itive statement with the weaker form, e.g. some/some, this tends to lead
to the inference that the statement with the stronger term would be false
(129).
Implicatures are defeasible inferences: note that you can say in English I
ate some of the cookies, in fact, I ate them all, and in some contexts the
weak term doesn’t negate the strong one, as in If you’ve eaten some of the
cookies, you know how good they are, both of which are used to argue that
the scalar implicature comes about through reasoning about alternatives,
rather than being encoded in the conventionalized semantics of some (this
is true whether or not the theory takes that reasoning over alternatives as
extralinguistic or as grammatically encoded, see Chierchia 2017).
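As a toy model of that reasoning (a sketch of my own; the scale and its members are just stand-ins), the listener’s inference amounts to negating every strictly stronger scale-mate of the term that was actually used:

def scalar_implicatures(asserted, scale):
    """Given a scale ordered from weak to strong (e.g. ['some', 'all']),
    infer the negation of every term stronger than the one asserted."""
    position = scale.index(asserted)
    return [("not", stronger) for stronger in scale[position + 1:]]

print(scalar_implicatures("some", ["some", "all"]))   # [('not', 'all')]
print(scalar_implicatures("all", ["some", "all"]))    # [] (nothing stronger to negate)

Because the inference is drawn over alternatives rather than written into the meaning of some, it can be suspended or cancelled, and the same recipe extends to orderings constructed ad hoc in the discourse rather than lexicalized as a scale.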
As a result, conversational participants clearly keep track of seman-
tic/pragmatic alternatives in given scenarios, and reason about them. In
fact, inferences similar to scalar implicatures follow from any ordering, even
one created ad hoc instead of on a conventionalized scale like some/all:
consider, if asked to describe what is on a table, then a response have
Figure 5.3: Stimuli from study on ad hoc implicatures using depictive clas-
sifiers in American Sign Language (Davidson, 2014)
5 Role shift
Another well studied area in sign language linguistics in which we see depic-
tion clearly interact with description is in the reporting of others’ thoughts,
words, and even others’ actions. An example from Padden (1986) is pre-
sented in Figure (130), in which the signer “shifts” eyegaze, direction, and
other nonmanuals while expressing an attitude (really ix-1 not mean)
attributed to another’s perspective (husband), represented in text as in
(130). A similar example can be found in (131a), where an attitude (ix_1
busy) is expressed from the perspective of someone else, here, Alex. When
glossed, the “shift” can be represented as a line marked with RS (131b).
(131) a.
J K
‘Alex was like, “I’m busy”’
RS
b. fs(Alex) ix_1 busy
‘Alex was like, “I’m busy”’
A further advantage is that the same analysis can also naturally apply to
classic cases of role shift/constructed action in sign languages, in which the
words and other mannerisms of another character are demonstrated, such
as in (134), where one’s thoughts/words are demonstrated. They can even
be extended to the kinds of sentences in (135), where the giving action (not
an attitude!) is demonstrated.
(134)
J K
(135)
J K
e.g. the use of the first person because the character used it in that
way, just as expressions that were used by a character are demonstrated by
the speaker/signer. Since a lot of attention has been paid to how and why
(136) JI am tiredKc = λw∃x∃v(experiencer(v, x) ∧ being-tired(v) ∧ speaker(c) = x)
‘The speaker in the context is an experiencer of a being-tired event’
Given this, a much discussed claim in this area of linguistics and phi-
losophy is the claim that the context is not something which itself can par-
ticipate in compositionality, i.e. be affected by any linguistic operators. In
other words, conversational participants and their roles, locations, etc. are
simply facts of a context and while language can access these details, it
doesn’t include any symbols that affect what the context for evaluation is,
i.e. nothing overwrites c. Kaplan (1979) famously argued that a linguistic
operator that changed the context of evaluation would be a “monster”. De-
spite, or perhaps because of, this claim, interest in its universality has grown
in recent years, propelled by work suggesting that some languages
actually do have operators that “shift” a context of evaluation, without the
use of quotation. Such “shifty” indexical expressions tend to look like the
use of quotation. Such “shifty” indexical expressions tend to look like the
Zazaki example in (138) (from Anand and Nevins (2004), presented as in
Sundaresan (2021)), which includes a first person indexical expression within
a clause introduced by and in the scope of an attitude verb, in this case va
‘say’. Notably, the first person indexical pronoun (Ez) can be interpreted as
the speaker of the context (as in English) or as the holder of the attitude in
the main clause, e.g. John (the latter seems to be impossible in English).
The idea is that the context of evaluation for the indexical expression
in the embedded clause need not only be the main clause speech event,
but could also be some other context, introduced by the attitude (here,
saying). In the literature on reported examples of indexical shift there is
some variation between languages, but one stable observation is that across
languages verbs of speech are more likely to allow this kind of shift than
are verbs of thought, which are in turn more likely to allow indexical shift in
their complement than are verbs of knowledge (Sundaresan, 2013). These
“shifty” indexicals are sometimes called “monsters”, in reference to Kaplan’s
claim, and it has become an important question for syntax/semantic theory
which languages allow such shifts, under what contexts, and involving which
indexical expressions (Deal, 2020; Sundaresan, 2021).
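Before turning to the sign language data, it may help to see the issue schematically with a toy sketch of my own (not anyone’s analysis): an attitude operator either leaves the evaluation context alone or, monstrously, overwrites its speaker coordinate with the attitude holder.

from dataclasses import dataclass

@dataclass
class Context:
    speaker: str
    location: str

def I(c):
    # A Kaplanian indexical: 'I' denotes the speaker of the evaluation context.
    return c.speaker

def report(attitude_holder, embedded, c, shift=False):
    # shift=False: the embedded clause is evaluated against the utterance context.
    # shift=True: a context-shifting ("monstrous") operator supplies a context
    # whose speaker coordinate is the attitude holder.
    c_eval = Context(speaker=attitude_holder, location=c.location) if shift else c
    return embedded(c_eval)

utterance = Context(speaker="the signer", location="Barcelona")
print(report("John", I, utterance, shift=False))   # 'the signer' (English-like)
print(report("John", I, utterance, shift=True))    # 'John' (Zazaki-like shifted reading)

The empirical questions are then which languages allow the shifted option, for which attitude predicates, and for which indexicals.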
(139)
The translation given for (139) reflects two possible analyses of the sen-
tence with role shift in American Sign Language. On the one hand, we can
think of role shift as something like quotation, in which, again, the use of
the first person is because the quotation depicts how something was said in
the event being demonstrated, in this case, an event of Alex talking. This
would be the demonstration analysis of role shift, and typically is considered
to be a separate phenomenon from context shift, since quotations are under-
stood to be noncompositional in other ways. On the other hand, a context
shifting analysis of role shift explains the use of the first person indexical
not because that is how Alex said it, but because the context of evaluation
for the embedded clause is different than the context of evaluation for the
whole sentence, thus a case of “monstrous” contextual overwriting.
How could we ever tell these apart? Don’t they both seem to be re-
flecting something right? The main way this issue has been approached in
the theoretical literature has been to test comparisons to written language
quotation, since written quotation is clearly depictive (“mentioned” speech,
not used) and is not typically seen as integrating compositionally with the
rest of the sentence, while in contrast, shifted indexicals are by definition
integrated with the rest of the sentence compositionally. We can see this
effect of compositional integration/non-integration in English through long-
distance dependencies like wh-questions, where (140a) is acceptable since
there is no quotation (I is interpreted as the speaker of the whole sentence),
and (140b) has a quotation which allows the indexical I to refer to Alex,
but then fails to allow a dependency between the wh-word and the object
of like. Another example of a long-distance dependency is the licensing of
negative polarity items like ever, which makes a natural sentence in (141a)
since ever is in a negative environment but here I refers to the speaker not
Alex, while (141b) has a quotation allowing I to refer to Alex but no longer
supports the use of ever given that the negation is outside of the quotation.
‘While John was in LA, who did he say he would live with there?’
b. Context: The speaker is in NYC; the listener was recently in LA
with John.
RS
before ix-a john in la, who ix-a say [ix1 will live with there]
who?
‘While John was in LA, who did he say he would live with there?’
These sentences (143a)-(143b) seem to have the same meaning yet dif-
ferent indexicals, with one more obvious difference: the appearance of role
shift, notated through the superscript RS and line marking the extent of
the role shifting. This makes for a somewhat persuasive case of role shift
as a kind of visible context shift! But as is often the case, things are more
complicated than we see at first blush. In presenting this data, Schlenker
(2017a) notes at least four complications of this generalization in favor of
clear evidence for context shift. The first is that the very same pattern is
found with the use of a quotation introducing sign (“air quotes”) instead
of say; if air quotes introduce true quotation (which intuitively seems more
likely) then that would suggest that quotation in ASL really does allow
extraction - and then it becomes less clear what this diagnostic is doing
in the first place if not ruling out quotation. At least, it certainly blurs
the line in both the language and the diagnostic between non-compositional
quotation and embedded clauses. A second issue, related to the first, is that
perhaps quotation is only partial, since as Maier (2018) and others have
noted, it’s always possible to just partially quote others’ speech. Third,
a possible NPI in ASL, any, behaves just like English any in resulting in
unacceptability with shifted indexicals even in cases of role shift, suggesting
that perhaps the clause under role shift is not compositionally integrated
after all, despite the wh- test. Finally, Schlenker (2017a) finds that the
same wh-extraction tests for shifted indexicals fail to hold when applied to
role shift in French sign language (LSF), so even if they do hold for the
two signers consulted for ASL, they do not necessarily generalize to the way
that role shift interacts with compositionality and structure across sign lan-
guages or all signers. These all cast doubt on a straightforward analysis
of role shift as context shift, or at least make the case much weaker than
some of the clearer cases in some spoken languages. We turn more broadly
to cross-linguistic differences in the next section.
shift, which, for example, privilege verbs of saying over other attitude pred-
icates (Sundaresan, 2013) and among the kinds of indexicals that can be
shifted, which, for example, privilege shifting in first person over second
person (Deal, 2020). It should therefore not be shocking to find both reg-
ularities and cross-linguistic variation in sign languages, even when we see
the same kinds of nonmanual movements supporting the role shift. As just
mentioned, Schlenker (2017a) reports that LSF, for example, contrasts with
ASL in not allowing wh-question dependencies in the same contexts, as in
(144a-b) (more examples are given in the original text with wh-words in var-
ious positions and the point holds across them).
On top of the variation in the way that role shift interacts with long-
distance dependencies, there are also differences in the kind of indexical
expressions which seem to shift across sign languages. Quer (2005) illustrates
this with the Catalan sign language (LSC) example in (145). The first person
indexical ix1 refers not to the one who utters the sentence but to Joan, the
subject of the sentence, while at the same time another indexical, here,
refers to the location in the context of the utterance, Barcelona (where the
speaker is, but not where Joan was).
Such indexical “mixing” of the kind seen in LSC is important from the
point of view of semantic analysis. At one point, it was assumed that even
in languages that allow indexical expressions to shift, they’d have to shift
together in the same clause, known as the “shift together” constraint (Anand
and Nevins, 2004). However, subsequent research on shifty indexicals in spoken
languages broadened to a wider variety of language families, and it became
clear that there were not only differences, but a typology of differences such
that first person pronouns seem to shift before second person, and second
person indexicals before locative indexicals (Deal, 2020). The mixed example
in (145) actually fits the spoken language generalization well: the first person
indexical “shifts” but the locative is interpreted with respect to the utterance
context. In general if sign languages follow the spoken language pattern, we
expect that the reverse, with a shifted locative indexical but unshifted first
person indexical, should be unacceptable.
Sign languages also bring another valuable perspective to the discussion
of indexical shifting because they highlight the role of iconicity in this do-
main. Spoken language linguists have a tendency to ignore iconicity, such
as taking on a character’s emotions/bodily movements or acting in other
ways as a character does, since it’s not captured in the segmental nature of
a language’s orthography or the International Phonetic Alphabet. This is
a place in which the impoverished options for writing systems for sign lan-
guages turn to an advantage, since they don’t force attention to only certain
easily written aspects. In sign languages, it has been noticed that role shift is
most supported in iconic contexts, such as in the use of classifier predicates
(Davidson, 2015; Engberg-Pedersen, 2013; Schlenker, 2017b), and this has
motivated at least three analyses to explain the iconicity/role shift data. On
the one hand, if role shift involves a demonstration, the depictive iconicity
is core to the meaning, so it directly motivates both the use of the index-
icals and the iconic content (Davidson, 2015); the challenge becomes the
mixed cases. Maier (2017) and Maier (2018) provide an intriguing answer
to this question, suggesting that combinations of quotation/demonstration
and “unquotation” (non-demonstrated speech) are motivated by a principle of
indexical attraction: if a mentioned person or place is present in a discourse,
a signer or speaker will prefer to refer to them directly with an indexical
appropriate to that speech act, not the one used in the reported act. So,
for example, in the LSC example (145), the physical location of the speech
act occurring in Barcelona attracts the participants to use the appropriate
indexical here instead of how it was phrased in the reported utterance (e.g.
there or barcelona), motivated by pragmatic reasoning.
Iconicity adds yet another dimension of variation and uncertainty, since
many indexical expressions involve indexical pointing to a person or place.
It’s clear that we need to know more about variation in shifting among in-
dexicals, between language communities, among signers, and in iconic/non-
iconic contexts. This is especially the case because most of the theoretical
research on role shift tends to involve language consultations/elicitations
with a very small number of signers (in many cases, a single signer), often
by hearing researchers who are not native signers, and so one-off examples
run the risk of being taken as representative of a community when it might
instead be due to individual variation, or representative of an individual in
certain contexts but not others, etc. To counteract the first issue, there is
experimental work on role shift that sheds some light on the variation issue,
and more generally on role shift in sign languages, which we turn to next.
Hübl et al. (2019) conducted a quantitative experiment on role shift in
German sign language (DGS) (see Herrmann and Steinbach 2012 for more
on role shift in DGS specifically and quotation in sign languages). In the
experiment reported in Hübl et al. (2019) the participants, who were 5 Deaf
signers of DGS, were asked to view 50 paired videos and judge their
acceptability. Each pair consisted of one video (A) and then another video
reporting on what happened in that video (B), in order to set up the right
context to test for speech reports. Among the goals of the study was to test
the attraction hypothesis, that a motivation for shifting indexicals was to
use a context-of-speech based indexical for a present discourse participant,
as in (146b), in contrast to (146a).
Their results were mixed, finding a preference for the verbatim con-
dition for the first person pronoun and the location indexical here, and
a preference for the attraction condition for the second person pronoun,
although as they note, there are many possible explanations. That said,
work like this sets an example of how to do careful and controlled “semi-
experimental” (Davidson, 2020) work to better understand variation within
and across (signed and spoken) languages. With better data, we can un-
derstand how sign languages fit into the typology of indexical shift cross-
linguistically; until then, most questions are difficult to resolve without bet-
ter understanding the sources of variation as cross-individual, cross-context,
and/or cross-linguistic (arguably, a question arising in spoken language work
on this topic just as much as in sign languages).
(147)
10 Conclusions
This chapter brought together two types of structures in sign languages
that have separately received significant attention in formal semantics and
linguistics: depictive classifiers and role shift. The motivation in doing so
was to highlight the way that both of these kinds of expressions incorpo-
rate depiction along with description, and how because of this, both can be
(149)
J K = λx.x is a student
b.
J K = λx∃v[demonstrate( , v) ∧ theme(x, v)] ∧ upright-figure(x) ∧ R(x, a)
c.
J K = λP λQ. |P ∩ Q| ≥ 10
d.
J K
sight of when one focuses on sign languages in this chapter, that there is nothing
about the way that depiction is integrated into classifiers and role shift that
makes sign languages somehow less linguistic than spoken languages. Quite
the opposite is true: all languages, spoken and signed, make abundant use
of both description and depiction, including many understudied but com-
mon aspects of spoken languages such as quotation and constructed action
(Clark, 1996, 2016), depiction in ideophones (Dingemanse et al., 2015; Kita,
1997), and manner demonstrations through co-speech gestures, and there
are exciting insights to be gained by emphasizing this point of commonality
in the use of multiple semiotic resources within sign languages and across
language modalities (Ferrara and Hodge, 2018; Hodge and Ferrara, 2022).
Furthermore, not only do we gain insight by pushing these commonalities,
but formalizing the way that depiction interacts with symbolic descriptive
content can lead to new insights into commonalities both across and within
languages, such as the underlying structural similarities between commonly
studied areas of quotation and reported attitudes in spoken languages and
role shift and classifiers in sign languages.
6 Quantification
One of the most notable aspects of human language is its ability to ex-
press generalizations. Many non-human animals, for example, are able to
communicate about particular threats (e.g. predator presence) or oppor-
tunities (e.g. current location of food), but there seems to be no evidence
for their ability to express generalizations like Food is always available in
that area, Tigers sometimes come from that direction, or No eagles fly that
high. Human languages, on the other hand, are chock full of expressions of
exactly these sorts, allowing us to express generalizations in a precise way
that supports even further inferences (e.g. if food is always available some-
where, then it is available now). Expressions like this are found in every
human language that we know so far (Partee, 1995), including the earliest
stages of an emerging sign language like Nicaraguan Sign Language (Kocab
et al., 2022), making for one of the most convincing test cases of the unique
expressiveness of human language.
We use the term quantification to refer to a function that takes two sets
and expresses the relation between them, building on the “tri-partite” structure
given in (150).
For example, if we say No hippos fly high, then we are claiming that
the set of things that are hippos (X = {x.x is a hippo}) and the high
fliers (Y = {x.x is a thing that flies high}) have no members in their in-
tersection, e.g. nothing that is both a hippo and flies high. Languages can
vary quite a lot in how they express quantification: some languages use
determiners like Every, No, Some, etc. that form part of a noun phrase,
while other languages express quantification through adverbials like Always,
never, sometimes, etc., the latter being more common crosslinguistically
than the former, but with an overwhelming number of languages, including
English, American Sign Language (Abner and Wilbur, 2017) and Russian
sign language (Kimmelman, 2017) employing both strategies.
Those interested in the logical structure of language have long noted the
relationship between quantificational expressions of the sort we see in (151)
and (152), most notably the interaction of quantifiers with negation (Horn,
1989). For example, we see that the use of the existential quantifier some-
times in (152) leads to the negation of the universal (not always). Similarly,
use of the universal always entails the negation of the negative (not none).
Thus, the study of quantifiers builds on our understanding of negation of the
sort that we saw in earlier chapters.
In terms of the expression of quantification in natural language, Partee
(1995) proposes that it is characterized by a tripartite structure: a quanti-
fier, a restrictor, and a scope. So in the case of All tigers come from that
direction, we might have [Quantifier: All][Restrictor: tigers][Scope: come
from that direction]. In terms of its semantics, the quantifier can be thought
of as expressing the relationship between the restrictor and the scope, so
that in this case, All tells us that the restrictor set is a subset of the scope,
e.g. the set of tigers is a subset of the things that come from that direction,
e.g. there is nothing that is a tiger that doesn’t come from that direction.
Other quantifiers express different set relationships. For example, [Quanti-
fier: No][Restrictor: tigers][Scope: come from that direction] would be equiv-
alent to claiming that there is nothing in the intersection/overlap between
the set of tigers and the things that come from that direction. [Quantifier:
Some][Restrictor: tigers][Scope: come from that direction] would be equiva-
lent to claiming that there is something in the intersection/overlap between
the set of tigers and the things that come from that direction, i.e. that the
intersection is nonempty.
Let us model these relationships formally to illustrate their composi-
tionality. As above, we can say that any quantifier is a function of two
sets that expresses the relation between them (e.g. returns TRUE if that
relation holds, FALSE if it does not). The general structure for a quan-
tificational expression Quant that requires a relation R between two sets
would be JQuantK = λP λQ.R(P, Q). To be concrete with an example from
English, the quantifier every is a function that takes two sets, first P and
then Q, and requires P to be a subset of Q: JeveryK = λP λQ.P ⊆ Q. Taken
step by step we arrive at the following semantic derivation in (153).
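Since the composition is easiest to see when the set talk is made explicit, here is a small sketch of my own, separate from the derivation in (153), rendering quantifiers as relations between two sets following the recipe JQuantK = λP λQ.R(P, Q):

# Quantifiers as relations between a restrictor set P and a scope set Q.
every = lambda P, Q: P <= Q            # P is a subset of Q
some  = lambda P, Q: bool(P & Q)       # P and Q have a nonempty intersection
no    = lambda P, Q: not (P & Q)       # P and Q have an empty intersection

tigers = {"tiger1", "tiger2"}
from_that_direction = {"tiger1", "tiger2", "eagle1"}
print(every(tigers, from_that_direction))   # True: every tiger comes from that direction
print(no(tigers, from_that_direction))      # False: it's not the case that no tiger does

Feeding the quantifier its restrictor first and its scope second mirrors the step-by-step composition just described for every.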
the face of it, the multiple marking appears similar to trial marking in spo-
ken languages, but just like other “agreement” type phenomena in sign
languages like directionality, it is optional, not obligatory: verbs that do not
change their form show no multiple marking, and sometimes even verbs like
ask need not show such marking, and can instead be signed in a prototypi-
cal/uninflected form. This suggests that the use of multiple marking may be
more semantically contentful than the seemingly purely “formal” use in spo-
ken languages like, say, Slovenian, which has more formal/grammatical trial
marking, and more generally connects to the way that we modeled the op-
tionality of loci use in general in Chapter 4, so that the multiple marking can
be seen as both a way to disambiguate discourse referents via non-restrictive
modification and also to depict/show aspects of an event.
Moving on from this kind of verbal marking/pluractionality (which we
will also address more in Chapter 7), Quer (2012b) considers sign languages
in light of the tri-partite structure for quantification, bringing together ar-
guments that sign languages exhibit quantification through determiners and
adverbials using the same tripartite structure seen cross-linguistically in spo-
ken languages. He builds on an insight from Wilbur (2011) and others that
the use of nonmanual markers in ASL is related to the scope position, in
particular, that the brow raise nonmanual marking is a marker of a restric-
tor set in general (in Wilbur’s system, a particular syntactic position). In
other words, in sign language sentences with quantification, the restrictor
set will often be set off with brow raising nonmanual marking, in contrast
to the scope set.
When it comes to cross-linguistic variation, there seem to be many simi-
larites across sign languages and spoken languages in the domain of quantifi-
cation. A direct comparison between American sign language and Russian
sign language is available through Abner and Wilbur (2017) and Kimmel-
man (2017), which report on the two sign languages and their use of several
quantificational strategies, including but not limited to both determiner and
adverbial quantification. These are especially useful because they occur in a
handbook with direct comparison via the same typological survey to many
spoken languages as well, so variation between sign languages can be consid-
ered in light of the same variation among spoken languages. Kimmelman and
Quer (2021) provide an overview of quantification in sign languages, focus-
ing on lexical differences, the pervasiveness of both determiner and adverbial
quantification across sign languages, and some potential modality-specific
properties, which we turn to in the next two sections. Beyond simply the
use of space that we discuss next, there seems to be iconicity in quantifica-
tional forms themselves: for example, Crabtree and Wilbur (2020) propose
that the difference between two universal quantificational forms in ASL re-
flects a boundedness difference, such that the bounded form of the ASL sign reflects
a bounded semantics, in contrast to the unbounded form (and its corresponding
unbounded semantics).
2 Quantificational domains
We have focused so far on the tripartite structure of quantification: a quan-
tifier’s force (all/some/none/etc.), its restrictor, and its scope. However, it
is also the case that not all of these pieces need to be visible: for example,
we can leave out much detail of the restrictor when we say everyone jumps,
which presumably has the structure [Every][one][jumps], yet one hardly de-
scribes a set on its own. What counts as one? We might say it is every
individual in some relevant context, and say that whoever they are, they
comprise the domain for quantification. This need arises even when there is
more overt information in the restrictor than just one. For example, Every
cat drank their milk is presumably telling us something about every cat in
a particularly relevant group, not every single cat that has ever existed. We
call this context-dependent aspect of quantification its domain restriction,
and it has long been a topic of interest in formal semantics and pragmatics
(Stanley and Szabó, 2000; Stanley, 2002; von Fintel, 1994). As we will see,
sign languages are able to integrate their use of space for discourse referents
to convey domain restriction information in a unique way, which will be the
focus of the rest of this section.
In many sign languages, plural discourse referents can be associated to
2-dimensional areas of signing space (often expressed through an arc-like
movement across that area), in order to establish an antecedent for anaphora
in later discourse, exactly like non-plural discourse referents (155a). How-
ever, there are some properties of plural discourse referents that deserve
further discussion when it comes to quantification. One of these is that they
must respect a type of iconic geometry, such that a plural discourse referent
associated to an area of space that is inside the space associated to another
plural discourse referent, as in (155b), should have the same relationship
to it as the referents do, e.g. one should properly contain the other in its
extension (Schlenker et al., 2013).
There are at least two interesting consequences that this has for quanti-
fier semantics in sign languages. The first is quite simple: Schlenker et al.
(2013) note that while spoken languages like English do not have an easy
way to refer back to a complement set of a mentioned referent, the use of
space supports “complement set anaphora” in sign languages. Consider for
example the short two-sentence English discourse in (158a-c). In (158a)
the most natural interpretation is that they refers to all of the children (the
some who did their homework and the others who did not). In (158b) the
most natural interpretation is that they refers to just those children
who did their homework. Both of these options for interpreting they are fine.
However, it is strange to try to have they refer to the others who did not do
their homework, as in (158c), and this is descriptively called the inability to
license “complement set anaphora.”
(158) a. Some of the children did their homework. They are a good class.
b. Some of the children did their homework. They were proud.
c. Some of the children did their homework. ?They couldn’t find
it.
because the use of space to depict these sets distinguishes each of these
groups uniquely.
A second major consequence of spatial quantified noun phrases is that
the use of loci supports the expression of domain narrowing and widening
through a metaphorical (more is up) use of space (Davidson and Gagne,
2022). Consider example (159): the signs in both (159a) and (159b) are
the same except that the quantifier fs(all) is signed at a neutral height
in (159a) and at a much higher height in (159b), and this has truth con-
ditional consequences: the first is interpreted as a narrower domain (All
of my friends) while the latter is interpreted as a wider domain (All of
the people in the world).
a.
‘All of my friends became vampires’
#‘All of the people in the world became vampires’
b.
‘All of the people in the world became vampires’
#‘All of my friends became vampires’
Davidson and Gagne (2022) argue that this difference comes from a
pronominal restriction in the quantifier, something roughly like fs(all) [of
ix] ‘All of them’, by illustrating that this same use of height to convey wider
The higher version, with a widening domain, involves exactly the same
composition except that the plural pronoun has a restriction to a context
set that is a superset of the default (C ⊂ {y : y ≤ z}), as in (161).
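A minimal sketch of the composition (my own rendering of the idea, not the authors’ exact formulation, with ix-arc standing in for the plural pointing sign) treats the quantifier as combining with a plural pronoun that contributes a context set C:

J fs(all) ix-arc_C K = λQ. C ⊆ Q

Signed at neutral height, C stays at its default value (here, the signer’s friends); signed high, C is replaced by a superset C′ ⊇ C (here, people in general), which is what produces the wider, truth-conditionally distinct reading in (159b).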
sentences in (162): each expresses something with universal force, that is,
something about each and every member of the set of students, and the set
of those that were glad they brought their toothbrushes, namely, that the
former is a subset of the latter. But, they do so in different ways: the first
one (162a) takes the set of students as a whole and their behavior as group
behavior, which we can notice in the plural morphology on students and
toothbrushes. In contrast, the second one quantifies individually student
by student: notice, for example, the singular morphology on student and
toothbrush in (162b).
(162) a. All of the students were glad that they brought their toothbrushes.
b. Every student was glad that they brought their toothbrush.
We saw in the previous section plenty of examples of the first sort, with
plural morphology indicated by an arc and the use of higher or lower space to
express domain information via the plural restrictor (e.g. students). What
about the second sort (162b)? This doesn’t seem to be a universal in spoken
languages by any means (Partee, 1995); nevertheless, it is not uncommon
either, so we might ask whether we see this kind of quantification in sign
languages. The answer is somewhat complicated: there is some evidence
that sign languages do have this kind of quantification, yet other evidence
that they do not, or at least not in all the same ways as English.
One kind of evidence in favor comes from sentences that express behavior
that seems to vary by individuals. Take (163), which associates the noun
phrase boy with one area of space (locus a), and associates another noun
phrase girl with a second area of space (locus b), and then in the clause
embedded under think, the singular pronouns ix-a and ix-b are intended to
be interpreted as ranging over the whole set of boys and whole set of girls,
respectively, similar to (162b) above.
(165) a. None of the students were glad that they brought their toothbrushes.
b. No student was glad that they brought their toothbrush.
In contrast, Abner and Graf (2012) note that switching to negative quan-
tification (from universal quantification) significantly degrades bound quan-
tificational readings in American Sign Language (see also Graf and Abner
2012; Kuhn 2020; Abner and Wilbur 2017). In their example (166), they
report that this form is unable to express the bound meaning expressing
that nobody in the set of politicians is also in the set of individuals who
say that they wanted to win. However, the same example with a univer-
sal quantifier is improved, as is the same example without a singular locus
(more similar to the negative quantification with plural domain restrictions
we saw above).
Graf and Abner (2012) take this difference to be due to the inability
to support “syntactic” binding in sign languages. The idea behind this is
that the types of quantification over individuals that we saw in the English
case No student... they... (165) are only available in languages that have a
syntactic dependency between the quantificational noun phrase (No student)
and the anaphoric pronoun (they). In ASL, Graf and Abner (2012) argue,
there is not the same syntactic dependency between the quantificational
noun phrase (e.g. no politics person) and the anaphoric pronoun (ix).
When we talk about a syntactic dependency, we mean the same sort of
dependency that we see in, say, wh-questions between a question word and
the position where it is interpreted, the kind of cross-clausal dependencies
that we used to probe for compositionally integrated clauses and contrast
with quotation in Chapter 5, for example. This can be contrasted with
the kind of binding that arises through discourse-based coreference that can
cross sentences, like the kind that governs coreference between an indefinite
and a pronoun, as in I met a student. He brought a toothbrush; the latter
sort cannot arise in negative quantification since there is nothing there at
the discourse level to corefer to (compare the odd: I met no student. He
brought his toothbrush).
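The contrast can be put in quasi-dynamic terms with a toy sketch (my own illustration): an indefinite adds a discourse referent to the running context that a later pronoun can retrieve, while a negative quantifier adds nothing.

discourse_referents = []

def assert_indefinite(noun):
    # 'I met a student': introduces a discourse referent.
    referent = f"a {noun} the speaker met"
    discourse_referents.append(referent)
    return referent

def assert_negative(noun):
    # 'I met no student': asserts emptiness, introduces no referent.
    return None

def he():
    # A cross-sentential pronoun picks up the most recent accessible referent.
    if not discourse_referents:
        raise ValueError("no accessible discourse referent")
    return discourse_referents[-1]

assert_indefinite("student")
print(he())          # picks up the student
# After assert_negative("student") alone, he() would fail: nothing to corefer with.

This is the discourse-level kind of anaphora; the bound readings at issue with no politics person require instead the syntactic kind of dependency, which Graf and Abner (2012) argue is not available between a quantificational noun phrase and a locus-marked pronoun in ASL.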
A related but different explanation for the difference between universal
and negative quantificational binding in ASL is that the inability to have
a bound interpretation of a sign language pronoun comes from iconic constraints
on the use of space. Kuhn (2020) argues that the use of a locus itself has
an iconic requirement, following a similar suggestion by Schlenker (2011)
that there is an iconicity presupposition that rules out the use of loci in
cases of negative quantification. Kuhn (2020) discusses this iconic restriction
on the use of loci in the larger context of two other phenomena in sign
languages that involve dependencies: negative concord (see discussion earlier
in Chapter 3) and distributivity, which uses space to mark dependencies in
an iconic manner. We can see an example in (167), where the locus a is
expressed on the universal quantifier each and uses space to illustrate the
dependency between the professors and the students.
Overall, the notion of the use of space as being ultimately depictive, even
if in a quite abstract way, has roots in many approaches to sign language
linguistics, especially cognitive approaches like Liddell (2003) and the hy-
brid approach of Schlenker et al. (2013), and is also consistent with general
views of the use of pointing to space as demonstrative by Ahn (2019a) and
Koulidobrova and Lillo-Martin (2016). However these views all differ in their
implementations for the interface between the depictive aspects of space and
the descriptive/non-iconic aspects of sign languages. The view we will take
is as follows: depictions can be used to create and augment the event repre-
sentations we lead our interlocutors to construct. As Kuhn (2020) suggests,
establishing a locus for a negative quantifier is in conflict with any use of
that space to depict something (which, given the negative quantifier, cannot
exist) so a locus is not used in these cases of negative quantification. Recall
the incompatibility between negation and depiction is a theme we have seen
Some languages allow the same sentence to have these two separate inter-
pretations, as exemplified in English in (168), while other languages seem to
bias interpretation toward the “surface” reading, i.e. to use word order and
other organizing properties of information structure to disambiguate. It is
a natural question, then, whether quantificational scope ambiguities arise in
sign languages. Petronio (1995) reports that both the narrow (169a) and wide
(169b) scope existential readings for the bare noun phrase book are available
in ASL (note, here, that the “wide” scope reading is actually a collective
reading in which the students bought the book together, which is not required by
(168b)), as well as a third reading, more common for bare noun phrases, that
is not equivalent to either of the English readings (169c).
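To see how the two readings can come apart, here is a minimal sketch in Python (the toy model and all names are invented for illustration, not taken from the ASL data) checking an English-style sentence like Every student bought a book against a single situation under each scope order:

    # A toy model: which students bought which books.
    students = {"s1", "s2", "s3"}
    books = {"b1", "b2", "b3"}
    bought = {("s1", "b1"), ("s2", "b2"), ("s3", "b3")}  # each student bought a different book

    # Surface scope (every > a): for every student there is some book they bought.
    surface = all(any((s, b) in bought for b in books) for s in students)

    # Inverse/wide scope (a > every): there is a single book that every student bought.
    inverse = any(all((s, b) in bought for s in students) for b in books)

    print(surface, inverse)  # True False: the two readings come apart in this model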
In (169) the noun phrase book is topicalized, which has been argued by
Wilbur and Patschke (1999) to affect quantifier scope options via a visible
marking of topicalization/A’ movement (the eyebrow raising). Others have
noted that the use of space, as used to keep track of discourse referents,
can disambiguate such readings, especially with the use of the distributive
marker (Kuhn, 2020; Quer, 2012b). Quer (2012b) provides the example
in (170) which is not understood as ambiguous but rather unambiguously
requires student to take wide scope.
Finally, we discussed in Chapter 2 the ways that sign languages reflect in-
formation structure of the discourse (backgrounded information can be en-
coded in a question, new information foregrounded in an answer), which can
potentially affect scope if, for example, wide scope negation is encoded as
a negative answer (Gonzalez et al., 2019). A particularly interesting conse-
quence of this, discussed in Chapter 3, is the possibility that a strategy for
expressing wide scope is through the use of question-answer clauses with a
quantifier in the answer, which disambiguates scope in sign languages
and raises the question of whether this might be an underappreciated strat-
egy in some spoken languages as well.
7 Conclusions
Quantification is known as one of the places where human language makes
use of a combination of complexity and precision that we do not yet have
evidence for among non-human animals. There are several places where sign
language specific properties make studying quantification especially interest-
ing. One of these is at the intersection of quantification and the association
of space with discourse referents, both for domains and for potential binding;
another is in the interaction of quantifiers and other operators with respect
to scope, such as distributivity and other quantifiers. There are also psy-
cholinguistic studies that focus on quantifiers, both on pragmatics in ASL
and on the production of quantifiers in NSL, although possibilities remain
wide open in this area for future work.
There is especially a need for more crosslinguistic work on quantifiers in
sign languages. The notable exceptions, as we have seen, are the chapters on
Russian Sign Language (Kimmelman, 2017) and on ASL (Abner and Wilbur,
2017) in a volume on quantification cross-linguistically, which provide a sense
of the syntactic distribution of quantifiers across these languages, as well as
work on LSC (Quer, 2012b). Further work should also take care to consider se-
mantic/pragmatic interface questions like the nature of the scales in each
language (not necessarily equivalent), and the use of space, especially since
we have seen evidence from both Catalan SL (Barberà, 2015, 2014) and
Japanese and Nicaraguan SLs (Davidson and Gagne, 2022) that it plays an
important role in domain restriction across several unrelated sign languages.
We’ll end this chapter with a concrete example that follows the semantics
we have introduced and show how it integrates with other notions we have
covered in earlier chapters such as loci and depictions. In terms of quantifi-
cation, we will want to model the quantificational force, the restriction
and the scope/domain, the three parts of quantificational tri-partite struc-
tures. For this we can actually turn to one of our original examples from
Chapter 1, part of which can be see in (172), which includes a quantifier,
(172)
a. ⟦ ⟧ = λP.λQ. P ∩ Q = ∅
b. ⟦ ⟧ = ιx.¬atomic(x) ∧ R(x, a)
'The unique plural (i.e. non-atomic) individual that is related by R to location a'
c. ⟦ ⟧ = λQ. {x.x ≤ ιx.¬atomic(x) ∧ R(x, a)} ∩ Q = ∅
'None of them (the ones associated to a)'
d. ⟦ ⟧w = λx. x remembers (a relevant) card in w
e. ⟦ ⟧ = λw. {x.x ≤ ιx.¬atomic(x) ∧ R(x, a)} ∩ {x.x remembered card} = ∅ in w
'The proposition consisting of the worlds in which there is no individual that both remembered a card and that is a subpart of the plural individual related to the locus a (the lined up students)'
7
Countability
Moreover, the same nouns that occur with count quantifiers can combine di-
rectly with numerals (174a), whereas those that occur with mass quantifiers
cannot combine directly with numerals (174b), and instead need to be mea-
sured (via bottles, puddles, pounds etc.) before combining with numerals
(174c).
(175) (Mandarin)
a. Liǎng zhī māo
two CL cat
b. Liǎng shuāng xié
two CL shoe
c. # Liǎng māo/xié
two cats/shoes
The two classifiers zhī and shuāng reflect various semantic/sortal properties
of the nouns, in a very similar way to sign language classifier handshapes
discussed in Chapter 5, hence, the use of the term “classifier” for the hand-
shapes of depicting classifier signs in sign languages.
Chierchia (1998) observes that the required use of classifiers to combine
with numbers/counting in languages like Mandarin tracks with another dis-
tinction crosslinguistically: the ability of a language to use bare nouns as
arguments for verbs, as in the contrast between (176a) in Mandarin and
(176b) in English.
Chierchia accounts for this difference between the English and the Man-
darin cases by supposing that all nouns in some basic sense start out as a
kind of undifferentiated/uncountable stuff (a kind, in the terms of Carlson
1977), and then some languages like Mandarin are able to allow these kinds
to participate directly as arguments to a verb like jump. In this case, they
would need another function to turn them into something countable, which
is what we see being done through their classifier morphemes. In contrast, a
language like English has nouns that are, roughly speaking, already closer
to something that can be counted. In these languages, the already
countable noun can take number marking (e.g. singular and plural) and
can combine directly with numerals, as in two cats. These
languages then require an extra (covert, in English) function when we want
to talk about them as kinds, such as in the cat [kind] is common (Chierchia,
1998, 2015).
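A very rough sketch of this division of labor, with invented names and a deliberately simplified 'two' that just checks for at least two atoms, might look as follows; it is meant only to illustrate where the apportioning step sits, not to implement Chierchia's system:

    # A kind is modeled as an unstructured label; a classifier-like function maps it
    # to a set of countable atoms, which a numeral can then count.
    CAT_KIND = "cat-kind"

    def classifier(kind, units):
        """Apportion a kind into countable atoms (the job Mandarin classifiers do overtly)."""
        return {f"{kind}#{i}" for i in range(units)}

    def two(pred_extension):
        """A toy 'two': defined only for something already apportioned into atoms."""
        return len(pred_extension) >= 2

    # Mandarin-style: numeral + classifier + noun
    print(two(classifier(CAT_KIND, 3)))   # True
    # English-style nouns are assumed to come 'pre-apportioned', so no overt classifier is needed.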
A third class of languages seems to permit their nouns to occur directly
in noun phrases with numerals (like English, unlike Mandarin), and directly
as arguments (like Mandarin, unlike English). In these languages, even very
mass-like things can appear without an overt classifier. Compare the English
cases we saw above with the example in (177) from Nez Perce, which has
no need for the intervening bottle, puddle, pound, etc. between the numeral
and the noun (177a), and where English might use measure words, Nez Perce
can use plural marking instead (177b) (Deal, 2017).
(177) a. kuyc heecu
nine wood
‘nine pieces of wood’
b. yi-yos-yi-yos mayx
PL-blue sand
‘[individuated/apportioned] quantities of blue sand’
In such a language we might wonder if there are really mass or count cat-
egories since they don’t seem to be distinguished by syntactic distributions
(the combination with numerals, classifiers, and/or the presence of gram-
matical number). Interestingly, it seems that even in these cases there may
be evidence for a mass/count distinction if we look to other areas of the
grammar like plural marking on adjectives in Nez Perce (Deal, 2017). (As
we will see, a similar unexpected distinction, in this case in topicalization
and conjunction, shows up for ASL as well.)
In addition to the nominal domain, we can ask about countability dis-
tinctions in the verbal domain. For example, verbs can express events in
a way that is countable (English She jumped three times!) or not (English
She is playing outside!). Just like the nominal mass/count distinction, there
are formal/syntactic as well as semantic distinctions that often but do not
always align. Consider, for example, the verb phrase fold the towel. This
seems countable in some sense: we can say that we did it once, or twice,
or twenty times. We can even count instances in an amount of time, as in
(178a). In contrast, a verb phrase like fold laundry feels much stranger to
count and instead we mark volumes/durations (178b).
(178) a. She folded the towel twenty times {in an hour/# for an hour}.
(telic: fold the towel)
b. She folded laundry {# in an hour/for an hour}.
(atelic: fold laundry)
There are many distinctions to make in this domain that we won’t have time
to explore in depth here, given the potential complexity of verb phrases
crosslinguistically. However, one important distinction raised in sign lan-
guage research is the distinction between telic and atelic predicates. The
intuition behind telicity is that sometimes we can refer to events in a way
that makes reference to their end points, i.e. their telos. For example I
folded the towel tends to imply one particular event that finished when the
towel was completely folded, which is the end point/culmination/goal of that
event. In contrast, I folded laundry may be talking about the same part of
one’s evening, but it is a way of looking at this event which is less marked
by boundary points. This is reflected in, among other things, the ability to
combine with different categories of temporal modifiers that we saw in (178),
and can be viewed (e.g. Bach 1986) as the verbal countability distinction,
in contrast to the mass/count distinction in the nominal domain.
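One simplified way to picture the diagnostic in (178) is to treat telic predicates as those that specify a culmination and to let the two temporal modifiers check for its presence or absence; the sketch below (our own toy encoding, not an analysis from the sign language literature) does exactly that:

    # Telic predicates specify a culmination, atelic ones do not; 'in an hour'
    # wants a culmination to measure up to, 'for an hour' wants the absence of one.
    predicates = {
        "fold the towel": {"has_culmination": True},   # telic
        "fold laundry":   {"has_culmination": False},  # atelic
    }

    def in_an_hour_ok(pred):
        return predicates[pred]["has_culmination"]

    def for_an_hour_ok(pred):
        return not predicates[pred]["has_culmination"]

    for p in predicates:
        print(p, "| in an hour:", in_an_hour_ok(p), "| for an hour:", for_an_hour_ok(p))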
In this chapter, we will discuss work on countability distinctions in nouns
in sign languages in the first section (Mass/Count) and countability distinc-
tions in verbs in the second section (Telicity). We will focus both on the
kinds of inferences that these distinctions help us understand in languages
generally, and on different approaches to analyzing mass/count distinctions
and telicity in sign languages.
(179) a.
(180)
Finally, in ASL the bare noun forms can also be interpreted as measured
either by quantity or volume in a comparison expression (181).
(181) Context: Mary’s oil bottle contains more oil (by volume) than Alex’s
fifteen smaller bottles
By these diagnostics and others that she lays out (see Koulidobrova
2021 for full set), Koulidobrova concludes that ASL is most like the third
category of languages we discussed above, like Nez Perce (Deal, 2017) and
Yudja (Lima, 2014). Moreover, she shows that just like in Nez Perce (Deal,
2017), the distinction between two classes (mass vs. count) does emerge in
at least one area in the language, when it comes to topicalization, as she
provides in the contrast in example (182): the mass noun blood can’t be
separated from the quantificational expressions three and few, whereas
the count noun apple can.
Another place that the mass/count distinction appears in the grammar ac-
cording to Koulidobrova (2021) is in conjunction: she reports that whereas
count nouns can be conjoined with each other (183a) and mass nouns can
be conjoined with each other (183b), mass and count nouns cannot be con-
joined together (183c) except if the mass nouns are “countified” via a count
quantifier (183d).
A takeaway is, then, that for American Sign Language there is some distinc-
tion in how mass and count is treated in the grammar, although it largely
patterns with a Yudja or Nez Perce-type language in allowing bare nouns to
combine directly with verbs (seeming to directly allow kinds to serve as ar-
guments, like Mandarin according to Chierchia 1998) and also not requiring
overt classifiers in count environments like numerals and count quantifiers,
although perhaps the work is done through a covert version of this same
function.
On its face this summary seems entirely straightforward, but Koulido-
brova (2021) rightfully points out two possible counterarguments to this
broad generalization. One is that ASL might instead have classifiers of the
Mandarin sort after all, in the form of its classifier handshapes.
(185)
(186) a.
The separation of the numeral from the classifier in (186) and the quan-
tity being expressed on the classifier and not via a numeral/quantifier in
(184) both lead to the conclusion that the classifier handshapes in ASL are
not nominal, quantizing classifiers of the Mandarin sort.
‘two cats’
(Unacceptable response; it is a well-formed expression but not
an acceptable answer in this context since it’s not a full clause)
b.
That is not to say that classifiers cannot be used within a nominal structure,
but rather that they do not play the quantizing role that Mandarin classi-
fiers do, and instead even in nominal instances are derived from a verbal
form at some level (Abner, 2017).
3 Grammatical number
We’ve seen how despite having depicting classifiers, ASL differs significantly
from languages like Mandarin in how it uses (or, in the case of ASL, doesn’t
use) classifiers to apportion quantities for counting, e.g. in numeral noun
phrases. We have not yet explored in depth the other possibility, though:
that it actually uses grammatical number in a similar way to English-type
languages. On the face of it, there is a straightforward case for separating
ASL from number marking languages, as ASL clearly permits bare nouns
in argument position (188a) and does not have any plural marking on the
noun in a numeral noun phrase (188b): note that the form of the noun is
the same in both cases.
(188) a.
Figure 7.1: Lateral plural marking in DGS (Pfau and Steinbach, 2006)
However, it has been argued that some languages that initially appear
to have bare nouns in argument position (of the Mandarin variety), actually
appear more like English under a microscope, in the sense that their noun
phrases seem to be marked for grammatical number in other ways. Deal
(2017) shows that Nez Perce does precisely this through number marking on
adjectives, although this distinction is only visible precisely in noun phrases
that include these adjectives. Could ASL be a language with some kind of
covert number marking on noun phrases that shows evidence of a
grammatical number system in other ways?
A particularly detailed discussion of this hypothesis can be found in Pfau
and Steinbach (2006), who argue in favor of such a view for German Sign
Language (DGS). As they point out, grammatical number can sometimes
appear to be unmarked, as in some lexical items in English like sheep which
clearly has a grammatical number distinction as reflected on the verbal mor-
phology, despite the same form (sheep) being used for the singular and plural
noun (189).
For ASL in particular, early work by Petronio (1995) made clear that
number is only conveyed, at least in ASL, outside of the noun phrase, in
the verbal domain or contextually inferred. Arguments from elided noun
phrases in Koulidobrova (2021) argue in favor of this as well. Curiously,
Pfau and Steinbach (2006) also note that number seems to be not marked
internal to the noun phrase, so in a sense there is agreement about this in
signed languages: number marking in sign languages is syntactically unlike
grammatical number of the more common sort in that it is not part of the
noun phrase. In another sense, this raises deep questions because it is then
not clear what exactly is meant by plural marking that occurs outside of an
NP, as something outside of the noun phrase would be handled in a different
way by current syntax/semantics theories of number, which take number to
originate within the nominal domain.
For the purposes of the current text and our focus on semantics, we will
highlight one important argument that seems to clearly show that ASL
does not have number marking in any sense recognized as such in spo-
ken language semantics, based on the behavior of plural forms under negation. Consider the sentence Alex
doesn’t have any sheep/children/cats, which is false even if he just has one
(sheep/child/cat): it doesn’t negate the plurality, but rather the plurality
here seems to be the unmarked form, used here to express the complete
lack (Sauerland, 2003). Similarly, the dialogue in (190) uses the plural form
and it is clear it should be understood as any number, including one. This
contrasts with the ASL version of the same demonstrated in (191) and in
(192) from Koulidobrova (2021).
(191) a.
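A small sketch of the number-neutral plural under negation (in the spirit of Sauerland 2003; the toy domain and names are invented here) shows why the negated sentence comes out false even when Alex has just one sheep:

    # The plural 'sheep' is treated inclusively: atoms and sums alike count,
    # so negating 'Alex has sheep' rules out even a single sheep.
    alex_sheep = {"dolly"}                      # Alex in fact owns exactly one sheep

    def has_sheep_inclusive(owned):
        return len(owned) >= 1                  # one or more: atoms count too

    sentence = not has_sheep_inclusive(alex_sheep)   # "Alex doesn't have any sheep"
    print(sentence)  # False: correctly false even though Alex has only one sheep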
Finally, number has been connected via this existential-type use to the
expression of explicitly spatial information. Pfau and Steinbach (2006)
discuss how the use of classifiers seems to require the expression of a spatial
localization in a way that the lateral movement does not. Schlenker and
Lamberton (2019) focus on the spatial information in these kinds of repetitions,
of the sort we saw in the basketball example in (191) as well as the related
example in (194).
(194)
Schlenker and Lamberton (2019) provide a semantics for these kinds of quan-
tity expressions in American Sign Language that focuses on their depictive use
in showing the number of objects involved and their spatial arrangement.
They explicitly resist an indefinite/existential analysis of spatial repetition
based on examples where the numeral is not an exact match to the number
of depicted points, in particular, where the numeral is greater, such as (195).
In their own words: “The heart of the matter is that an expression such
as 10 [trophy trophy trophy]horizontal-arc is acceptable (and makes
reference to ten trophies), which makes little sense if we are dealing with a
conjunction of three singular indefinites.” They are right that the numeral
(e.g. 10) and the number of repetitions do not have to match, so in that
sense the plurality is not actually coordinating separate noun phrases that
each have existential meaning. In other words, we would not want this to
be equivalent to There is a trophy (here) and there is a trophy (here) and
there is a trophy (here). But, as we have seen, these expressions seem to
be existential nonetheless, something similar perhaps to the English expres-
sion a plurality with a given arrangement, although they cannot simply be
grammatical plural (in the sense of English -s) due to their interaction with
negation, as we saw above. In addition, Schlenker and Lamberton (2019)
rightfully point out that the way that space is used to show arrangements
that need not correspond exactly numerically is seen in many gestural sys-
tems, including homesign (Spaepen et al., 2011) and gestures they
elicit from non-signers (Schlenker and Lamberton, 2019). Ideally, then, we
can incorporate depiction into the semantics of these expressions while also
getting the quantificational aspect right.
Let’s consider in particular the example in (196), with basketball fol-
lowed by a depictive classifier (DS: Depicting Sign) indicating the arrange-
ment of the balls, DS_b5(three in a row). This is acceptable in the situation
in which there are three basketballs on a table, and unacceptable to describe
a situation in which there are three buttons or soccer balls on the table, or
in a situation in which there are (only) two basketballs on the table (196).
(196)
The first three judgments can be accounted for through the same classi-
fier semantics we used in Chapter 5, along the lines of (197).
(197)
⟦ ⟧ = λw.∃e∃x, y, z : bulky_item(x, y, z).[theme(e, (x, y, z)) ∧ R(a, x)
The requirement that the theme include at least three basketballs can then be added, as in (198).
(198)
⟦ ⟧ = λw.∃e∃x1∃x, y, z : bulky_item(x, y, z).[theme(e, x1) ∧ (x ≠ y ≠ z) ∧ basketball(x, y, z) ∧ x, y, z ≤ x1 ∧ demonstration(e, )] in w
'The proposition which returns TRUE for worlds in which there is an event that has a theme, that theme argument has at least three individual sub-parts, all of which are basketballs, and the event is demonstrated by the depicting sign.'
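To make the "at least three" component concrete, here is a toy sketch (the event records and names are invented, and the demonstrational/arrangement component is omitted) of just the counting part of these truth conditions:

    # True iff an event has a plural theme with at least three distinct basketball subparts.
    def at_least_three_basketballs(event):
        theme_parts = event["theme_parts"]
        basketballs = {x for x in theme_parts if event["kind"][x] == "basketball"}
        return len(basketballs) >= 3

    e1 = {"theme_parts": {"b1", "b2", "b3", "b4"},
          "kind": {"b1": "basketball", "b2": "basketball", "b3": "basketball", "b4": "basketball"}}
    e2 = {"theme_parts": {"b1", "b2"},
          "kind": {"b1": "basketball", "b2": "basketball"}}
    print(at_least_three_basketballs(e1), at_least_three_basketballs(e2))  # True False

This also makes clear why a higher numeral like ten is compatible with three repetitions: the repetitions impose only a lower bound on the plurality.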
More broadly, the observation in this section has been that ASL (and, it
seems, DGS as well) does not have grammatical number marking/plural in
the English/German way in terms of its syntax and its semantics. It does
clearly, however, express the concept of plurality through conventionalized
morphemes and the depicting classifier system, although the classifier system
itself is tied closely to the verbal system, not to the nominal system. What
can we say more generally about countability in the verbal domain? We
turn to this topic in the next section.
4 Telicity and aspect
Early descriptions of ASL discussed at length how, for one verb, the same handshape and roughly the same
location can be combined with many different internal movements, each of
which seems to convey aspectual-like information. (Hou 2022 provides a
more modern analysis of the broad flexibility of this same sign.) Names
given to these different categories of meaning include “protractive, inces-
sant, durational, habitual, continuative,” and “iterative”. Klima and Bellugi
(1979) find similarly complex categories, with sometimes even more distinc-
tions, for other predicates, e.g. (be)sick. These kinds of categories are all
meanings that are morphemic in some spoken languages as well, although
more rarely are they morphemic all in the same language. Moreover, the
timing/rate of repetitions and holds often seems transparent to non-signers,
in the sense that they seem to be able to guess some (but certainly not
all) of the meanings of this verbal inflection without much experience with
ASL. This naturally raises the question about the nature of this change in
meaning: are the different movements morphemic or depictively iconic? Or,
is this a case of a motivated form which is interpreted as symbolic?
5 Event visibility
Telicity has become an increasingly well studied topic in sign language re-
search in recent years due to an extremely interesting proposal by Wilbur
(2008) that the boundedness in the meaning of verb phrases is mirrored in
the structure of verb phrases in sign languages, and that this is done in an
iconic way (see also Malaia and Wilbur 2012). This Event Visibility Hy-
pothesis (EVH) begins with the observation that verbs expressing bounded
events, like steal, tend to be expressed with a form that is bounded as well.
This contrasts with verbs that express unbounded events, like play, which
do not end in an abrupt stop in the same way.
As proposed in Malaia and Wilbur (2012) and Wilbur (2008), the the-
oretical claim of the EVH is that the boundary point in the form of signs
like steal is an overt manifestation of the resultative morpheme proposed
in Ramchand (2008). As a consequence of this “visibility” of the resultative
morpheme, verb forms with an overt boundary point necessarily express telic
predicates, while those without typically express atelic predicates. That is
to say: the idea that telic predicates contain a piece of meaning that specif-
ically encodes telicity has been proposed independently of the sign language
data (Ramchand 2008), but Malaia and Wilbur (2012) and Wilbur (2008)
draw a fascinating parallel to sign languages when they suggest that sign
languages encode this boundary morpheme overtly, by the existence of an
abrupt stop.
The EVH captures a strong intuition, which is that often something
about the form of verbs in sign languages feels natural given their meaning, and
moreover, that it crops up regularly in unrelated sign languages of the world.
It would seem quite wrong to reverse the signs for play and steal even
though neither is especially iconic or transparent in its meaning. Strickland
et al. (2015) confirm this intuition experimentally, showing that non-signers
categorize signs with overt boundary points as expressing telic meanings
more often than those without boundary points. Given this intuition, a
kind of semantic analysis that posits a universal due to iconicity available
in the visual modality might be well motivated. On the other hand, the
morphological claim is quite surprising from the crosslinguistic perspective:
no spoken languages seem to encode telicity directly in their morphology.
In a skeptical view of the overtness of telicity in sign languages, Davidson
et al. (2019) note that telicity is like mass/count in many ways (Bach, 1986),
one of which is that it is usually seen as an emergent property depending
on lexical semantics and semantic properties of its arguments; while many
things interact differently with mass and count nouns, no overt morphology
marks that distinction as such directly on the nouns themselves in the way
that the EVH proposes for telicity in sign languages. Building on Wilbur’s
work, different theoretical takes have been proposed to cover the intuition
behind the EVH. For example, Kuhn (2017b) speculates on an especially
iconic implementation of the EVH, proposing that signs might encode not
just the presence or absence of the boundary point, but that the completion
of the event is mirrored in the form of the sign, mapping the production of
the sign directly onto the event structure. He gives the example of the type
in (200), where producing the sign die in a way that ends before the citation
endpoint is interpreted as ‘almost die’, and where internal modulations of
the timing of the sign are interpreted as reflecting the timing of the event,
e.g. ‘after a struggle’ or ‘gradual’.
(200)
Under this view, the EVH is both stronger and yet even simpler than pro-
posed by Wilbur (2008); it simply requires, through a manner adverbial, that
the event progressed in the manner shown, a requirement imposed by an iconic function
(Iconϕ). Note that this builds the depiction into the propositional meaning.
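As a toy illustration of the manner-adverbial idea (our own simplified encoding; the actual proposal is stated over iconic mappings between sign form and event structure, not numeric fractions like these), we can require the produced portion of the sign to match the realized portion of the event:

    # Descriptive part: this is a dying event (possibly not completed).
    # Iconic part: the proportion of the citation form that is produced must
    # match the proportion of the event that is realized.
    def die_with_icon(event, produced_fraction):
        is_dying = event["type"] == "dying"
        icon_ok = abs(event["completed_fraction"] - produced_fraction) < 0.1
        return is_dying and icon_ok

    almost_died = {"type": "dying", "completed_fraction": 0.8}
    print(die_with_icon(almost_died, 0.8))  # True: sign stops short, event stops short
    print(die_with_icon(almost_died, 1.0))  # False: full citation form mismatches an incomplete event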
Another way to think about the relationship between iconicity and the
grammar is to map even (perhaps, especially) the form of verbs that encode
atelic predicates to small portions of an event taking place, as suggested
by Wright (2014). Wright notes that the sign for sew in ASL is frequently
atelic and lacks a boundary point, having a repetitive movement without
a boundary, in line with the EVH, but that the internal movements of the
sign sew in ASL can be thought of as corresponding to individual stitches,
and so iconicity comes into play in the semantics through the repetition
and not the boundary. One could think about play like this as well, with
each small internal repetition mapping to some interval of playing. The fact
that telic predicates are frequently expressed with verbs that have an abrupt
boundary point is in this sense epiphenomenal, in that they do not continue
indefinitely.
The original formulation of the EVH (Wilbur, 2008; Malaia and Wilbur,
2012) as well as other takes by Kuhn (2017b) and Wright (2014) take an
iconic mapping between the telicity of the predicates and their (bounded or
unbounded) verb forms to be a given. As noted above, a source of empirical
support for this is often taken to be the fact that people who do not have
any previous experience with a sign language guess at above chance levels
at whether a verb has a (typically) telic or atelic interpretation, as shown
in experimental work by Strickland et al. (2015). However, Davidson et al.
(2019) discuss several reasons to be skeptical of a strong version of the EVH.
On the one hand, they highlight the existence of alternating predicates in
ASL like read, write, drive, and ski, which take both a bounded and an
unbounded form.
6 Pluractionality
Finally, recall that we discussed grammatical number (e.g. singular/plural
marking) in the nominal domain. Many languages also mark number in the
verbal domain, through pluractionality. Kuhn and Aristodemo (2017)
describe and analyze two ways that French SL (LSF) marks pluralities of
events, through two different forms of repetition. One distributes over time
(they call this rep, since the form of the verb is repeated); another distributes
over participants (they call this alt, since the form of the verb involves
alternating). One is exemplified in (201a): repeating the verb forget with
the same hand is interpreted as a particular type of event, involving the
same participants (in this case, Jean as the agent and a particular word as
the theme), occurring multiple times; note that it contrasts with the phrase
every-day which permits the words to vary from occasion to occasion.
In their analysis, pluralities of events can be introduced by other elements (like every-day (over times)) which interact with filters like -rep or -alt. Under
this approach there is significant similarity with grammatical number mark-
ing, which is also a filter in some sense: when we have plural marking on a
noun, this is frequently modeled as a function that takes some set of individ-
uals and returns only those which have more than one individual as some
part of them. Similarly, Kuhn and Aristodemo (2017) propose semantics
for pluractional markers in LSF which takes some set of events and returns
those which have more than one sub-event as part of them. This was the
function of the restriction ¬atomic in the work on quantification in Chapter
6. In the case of -alt these have to be events (that are part of the larger
event) with different participants (θ(e′ ) ̸= θ(e′′ )); in the case of -rep these
have to be events (that are part of the larger event) with different run times
(τ (e′ ) ̸= τ (e′′ )) (203)a-b).
In the case of LSF -alt, they analyze it as a function that takes verb types
(e.g. forget) and returns events of that kind (e.g. the forgetting events)
such that they have two different sub events each with a different event
participant. In contrast, the LSF morpheme -rep is a function that takes
verb types (e.g. forget) and returns events of that kind (e.g. the forgetting
events) such that they have two different sub events each with a different
run time.
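A schematic way to picture the filter idea, with invented event records rather than the actual LSF formalization, is as follows:

    # -alt requires subevents with distinct theta-role participants,
    # -rep requires subevents with distinct run times.
    def alt_ok(subevents):
        participants = [e["theme"] for e in subevents]
        return len(subevents) >= 2 and len(set(participants)) >= 2

    def rep_ok(subevents):
        times = [e["time"] for e in subevents]
        return len(subevents) >= 2 and len(set(times)) >= 2

    forgettings = [{"theme": "word1", "time": 1}, {"theme": "word1", "time": 2}]
    print(rep_ok(forgettings), alt_ok(forgettings))  # True False: same word forgotten at two times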
We end by noting that pluractionality intersects with the other area of
countability in the verbal domain we have discussed in this chapter: telicity.
Even if a basic verb plus its arguments would typically be telic, the addition
of pluractionality can cause an otherwise telic predicate to pattern as atelic on standard tests. Consider,
for example, the English sentence My friend gave me one book. This doesn’t
pass the “for an hour” test for telicity: you can’t say that My friend gave me
one book for a year. However, if we add repetition, it’s fine to say For a year,
my friend repeatedly gave me one book or For a year, my friend often forgot
one word. One question then becomes whether we can account for some of
the observations made about iconicity and telicity through understanding
pluractionality. Most likely, these analyses are right in different ways, for
example, that sign languages have conventionalized ways to express different
kinds of pluractionality (Kuhn and Aristodemo, 2017), and an endstate/telos
seems to be reflected in the bounded forms for many verbs like ASL steal,
die, and forget, and that on top of both of these descriptively iconic
conventionalizations, they can also support (iconic) depictions, such as the
depiction of the progression toward death explored by Kuhn (2017b) or some
of the alternations in manner as in look-at noted by Klima and Bellugi
(1979) and in (bounded vs. ongoing) alternating verbs like read (Davidson
et al., 2019).
To imagine how this might work, let's consider a verbal case involving the ASL verb read.
(204)
(205)
In the case of the verb that has an end point form, the end point is
naturally taken as a point at which one finishes the reading goal, as in the
book, in the situation in (204a). Note that the sentence with the bounded
form is not acceptable in the situation where the girl hasn’t finished reading
the salient book (204b). In contrast, in (205), we see that a form with small
internal movements (i.e. without the endstate form) is less acceptable in
a context in which the girl finished the book, but acceptable if she is still
reading it and hasn’t finished.
We’ll naturally want to reflect these semantic differences in our semantic
analysis, but as things stand, many possibilities remain open for how ex-
actly we might want to do this. One way to model the distinction we see
in (204)-(205) is roughly via the EVH, as proposed by Wilbur (2008), in
which the endstate form in (204a) reflects the presence of the telos in the
semantics. This would correctly account for the judgments in (204), and
we can imagine why it would also account for the preference in (205), given
pragmatic competition between the two forms. But this isn’t the only way
to model the pattern seen in (204)-(205): telicity and aspect are collapsed in
these cases, so we could also model this particular distinction by taking the
endstate form to express perfective aspect (e.g. has read (the book)) and/or
the small internal movements as a type of progressive form (e.g. is reading
(the book)). Of course, the ideal goal is to dissociate telicity from aspect in
ASL and other sign languages, as we did for English above in (199), but this
proves to be complicated for both logistical and form-based reasons, detailed
in Davidson et al. (2019). Certainly, one thing we don't want to do is to en-
code the difference in the verb itself, since we find the same basic verb form
in both sentences (contra Strickland et al. (2015)’s implication that telicity
is a property of verbs and not verbs with their arguments). Yet another
possibility is to consider the depictive potential of these verb phrases, and
consider that some distinctions may be being conveyed via depiction and not
description. We began this chapter with the observation that much of the
countability expressed in sign languages seems to be more transparent to
nonsigners than other iconic areas of sign language grammar, which would
point toward depiction (although it is not an argument for it; we can have
symbols with more or less transparent meanings). But consider the kind of
event conveyed in (206) where the subject is shown as taking part in effort-
ful reading; we might simply consider this a simultaneous depiction without
propositional consequences.
(206) ⟪ ⟫ =
of Wilbur (2008) and Malaia and Wilbur (2012). In addition, although all
of these notions are technically dissociable, reduplication is known to be a
form recruited for progressive markings crosslinguistically, while telicity fre-
quently tracks with perfective aspect. Therefore, we might be encountering
not just challenges from the perspective of an analysis, but also challenges
from the dynamicity of language: a state of flux could make these difficult
to disentangle, but in entirely expected ways given our understanding of
(spoken) languages and language change/typology more broadly, as noted
specifically in this domain of aspect marking by Deo (2015b).
In addition, when it comes to iconicity, conventionalized symbolic mor-
phemes may nevertheless have some amount of iconicity, which can itself
support further depictive iconicity. This is exemplified in English in the
case of using onomatopoeia (e.g. knock, an entirely conventionalized symbol,
representable in text, that nonetheless makes for compelling depictions).
Evidence for this might be the concentration on the signer’s face and the
intense hand movements in the utterance in (206), an example of a morphemic
distinction that also naturally supports further depiction. The symbolic and
the depictive seem to especially rely on each other in the verbal domain (we
saw this in work on demonstrations in Chapter 5), even in ways that might
differ between signers and nonsigners: experiments on nonsigners may be
picking up on an underlying tendency in depiction, which might be used to
bias certain symbols to meanings (e.g. verb forms) but which are not them-
selves directly interpreted as a visible manifestation of a symbolic endstate
morpheme.
7 Conclusions
We perceive the world as comprised of different types of things, such as
events, objects, and substances. We also use language to talk about the
world in ways that reflect these different categories, but via mappings that
are not determined by them: the same event out in the world can be de-
scribed as folding laundry (atelic) or folding a towel (telic); the same puddle
can be described as water (mass) or a puddle of water (count). Sign lan-
guages have been investigated in both the nominal and verbal domain when
it comes to countability, with special attention paid to the role of iconic-
ity in the expression of countability. This chapter emphasized in particular
the value in considering the separation of iconically motivated morphemes
from further, additional iconic depictions that they might very naturally
support. In particular, while some previous work in these domains has as-
sumed that because there is some iconicity, it must be reflected directly in
the semantics (as in, e.g., the Event Visibility Hypothesis), we argued that
this need not be true in either the nominal or verbal domain, but rather
came to the conclusion that descriptive iconicity exists in both domains.
8
Intensionality
Human languages, across all modalities, are notable not just for their com-
positionality and creativity, but also for their ability to go beyond the “here
and now”: we can discuss not just the present place and time, but also the
past, the future, and even alternative possibilities, how things might have
been, how we hope they might be, what we think, and what we believe
must be true. What is particularly remarkable about this ability in human
language is that we can discuss these possibilities with precision and make
inferences about them, including entailments, presuppositions, and impli-
catures, just like we do when we share information to help narrow down
the particular world we might currently be in. Consider, for example, the
relationship between (207a) and (207b): in every situation in which There
has to be a rainbow (207a) is true, There can be a rainbow (207b) is true as
well, so the first entails the second.
We find a similar pattern when we use attitude verbs: (208a) entails (208b).
Both have a bit of the feeling we found with quantification, where we have
to ignore the pragmatically strengthened reading of the second/entailed/(b)
sentence (209); the point is that we can clearly derive inferences that go
beyond what is directly said, and so we want to understand the logic under-
lying these sentences using words like must, might, know, think, etc.
Words like must, might, know and think (but not some or all) are part
of a broader class of expressions known as intensional operators which
induce us to consider possibilities/possible worlds other than the current
one. Consider that to evaluate the non-intensional sentences in (209) we
need only consider the here and now: do I see all/some of the relevant
rainbows? But to evaluate the sentences in (207) or (208) we need to consider
other possibilities. For (207a) it seems that we need to somehow consider
every (relevant) possibility and check that there is a rainbow in every one. In
(207b) we need to make sure that there is a rainbow in at least one possibility
that we are considering: There might be a rainbow means roughly that in
some possibility that we’ll consider valid, there is a rainbow. We can reason
similarly about attitude predicates, of the sort we saw in (208): if The girl
knows that is a rainbow, then somehow in every possibility that the girl
can access through her knowledge, it's a rainbow. The idea is that if The
girl thinks that is a rainbow, then in some possibilities that are part of her
knowledge it is a rainbow, but perhaps in some of them it is not; think
seems strictly weaker than know.
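A minimal sketch (the toy possibilities below are invented here) of the universal/existential picture for must and might makes the entailment pattern in (207) easy to check:

    # Evaluate 'must p' and 'might p' over a set of accessible possibilities.
    worlds = [{"rainbow": True}, {"rainbow": True}, {"rainbow": False}]

    def must(p, accessible):
        return all(p(w) for w in accessible)

    def might(p, accessible):
        return any(p(w) for w in accessible)

    rainbow = lambda w: w["rainbow"]
    print(must(rainbow, worlds), might(rainbow, worlds))  # False True

    # Whenever 'must p' holds relative to a nonempty set of possibilities,
    # 'might p' holds as well, which is one way to see the entailment in (207).
    certain = [w for w in worlds if w["rainbow"]]
    print(must(rainbow, certain), might(rainbow, certain))  # True True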
This same intensionality is active in conditional statements like (210):
roughly, we can think of its meaning as expressing the claim that in all of the
possibilities in which the girl sees a rainbow, she
will run outside. In other words, we evaluate the consequent claim (she'll
run outside) with respect to only the subset of possibilities in which she
sees a rainbow; it doesn’t say anything about the possibilities in which she
doesn’t see a rainbow.
1 Conditionals
Among the simplest intensional structures are conditional statements. In the
DGS example in (214) we see a pattern familiar to many other sign languages
of the world (including ASL) in which the conditional can be expressed
through nonmanual marking. In the conditional statement in (214a), the
antecedent tomorrow rain is expressed with raised eyebrow nonmanuals,
and the meaning is that the consequent we party cancel must holds in all
of the restriction worlds, i.e. the worlds in which it rains tomorrow. In other
words, the utterance in (214a) is going to be judged true in a scenario in
which we’re not sure what the weather will be tomorrow, but we know that
we won’t hold a party in the rain. This contrasts with the (not conditional)
utterance in (214b), which will be unacceptable in that situation, since it is
not a conditional statement but rather two independent statements, the first
claiming that it will rain tomorrow (and thus, unacceptable in a scenario in
which we do not know whether it will rain).
raised brow
(215) a. ⟦ tomorrow rain, we party cancel must ⟧
= ∀w.w ∈ {w.it rains tomorrow in w} → w ∈ {w.we cancel party in w}
'For all worlds w in which it rains tomorrow, we will cancel the party in w.'
b. ⟦ tomorrow rain, we party cancel must ⟧
= λw.∀p.p ∈ {It rains tomorrow, We cancel party} → p(w) = 1
'The proposition consisting of the worlds in which it is both true that it rains tomorrow and true that we cancel the party.'
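A small sketch of the contrast (using an invented context set in which the weather is unknown but any rainy-day party would be cancelled) shows why only the conditional is assertable in that scenario:

    # Live possibilities: we do not know whether it will rain, but in every rainy
    # possibility we cancel the party.
    context_set = [
        {"rain": True,  "cancel": True},
        {"rain": False, "cancel": False},
    ]

    # (215a)-style conditional: in every rain-possibility, we cancel.
    conditional_ok = all((not w["rain"]) or w["cancel"] for w in context_set)

    # (215b)-style conjunction: both claims would have to hold throughout the context set.
    conjunction_ok = all(w["rain"] and w["cancel"] for w in context_set)

    print(conditional_ok, conjunction_ok)  # True False: only the conditional is assertable here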
There are several interesting takeaways from even this very simple dis-
cussion of conditionals. First, we see a new perspective on the broad notion
of intensionality: sign languages show that Kratzer’s “notional category
of modality” can be expressed through the multi-modal use of nonmanual
marking, since this is the only distinction in form between
(215a-b). Second, nonmanual marking, although able to express lots of expe-
riential content like emotions, etc., is certainly also able to contribute propo-
sitional content through symbolic means, as we have seen earlier in Chapter
3 with negation and here with the expression of a conditional. Thus, as Pfau
and Steinbach (2016) note, nonmanual marking needs to be an integral part
of formal semantic analyses in sign languages, including and especially in
the intensional domain. This doesn’t necessary mean that nonmanual mark-
ing and intensional operators necessarily go together in a priveleged way:
in fact, negation shows us immediately that nonmanual marking expresses
non-intensional meaning. Furthermore, the use of nonmanual marking in
conditionals might rather be attributed to the syntactic structure of con-
ditional statements: Wilbur and Patschke (1999) argue that brow raising
is indicative of subordinate syntactic structures. Similarly, many sign lan-
guages have manual signs that introduce conditionals, including American
Sign Language. The conclusion thus is not that the nonmanual expression
is because of the intensional semantics, but rather that intensional
operators can come in forms that include nonmanual markings. Finally, we
see the use of another modal expression, must, in the consequent in (214).
As von Fintel and Heim (2002) note following Kratzer (1981), conditionals
are set up to interact naturally with other modals like these auxiliaries.
2 Attitude verbs
Intensionality can be expressed by main verbs, as we saw above in spoken
languages, and perhaps one of the simplest examples comes from verbs of
desire, such as want. Consider the example from earlier chapters, where
we focused on the countability of expressions like three apples; here
we can incorporate a modal semantics for the verb want (217).
(216)
We’ve already introduced quite a bit about attitude verbs and nonman-
ual marking in Chapter 5, when we discussed role shift introduced by verbs
like think, say, etc., but had not focused on their own semantic contribution
and the difference between embedding clauses vs. more demonstration-like
structures. Consider the example of the attitude predicate think in (218).
Here, we gain a window into Alex’s thoughts: the meaning seems to be a bit
different than the English sentence Alex thought his sister has a book which
implicates that he would agree that she does; in the ASL example in (218)
it seems less a claim about his belief and more a claim about the content of
his thoughts, that he was thinking/wondering about whether his sister has
a book. As a first pass, we might model this as in (219).
(218)
However, the analysis in (219) seems to fall short in a couple of ways. For
one thing, it doesn’t seem quite right that in all of Alex’s thought worlds,
his sister has a book. That’s probably something like the right semantics for
the English sentence Alex thinks his sister has a book, but as we noted, this
ASL utterance seems to have different truth conditions. Instead, we seem
to want to express the idea that Alex is considering this possibility, that it
is possible, not necessary, in his thought worlds. Thus, we might want to
model this as an existential quantifier, like in (220).
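Schematically, and without claiming to reproduce the exact formulas in (219)-(220), the difference between the two analyses can be checked against an invented set of Alex's thought-worlds:

    # Universal vs. existential quantification over the worlds compatible with Alex's thoughts.
    thought_worlds = [{"sister_has_book": True}, {"sister_has_book": False}]

    universal_think = all(w["sister_has_book"] for w in thought_worlds)    # (219)-style
    existential_think = any(w["sister_has_book"] for w in thought_worlds)  # (220)-style

    print(universal_think, existential_think)
    # False True: only the existential version is true when Alex merely entertains the possibility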
3 Modals
Compared to attitude predicates, there has been relatively less work on
modal (auxiliary) verbs like can, can’t, should, etc. in ASL and in other
sign languages. Perhaps this is because they look rather unsurprising: for
example, the ASL modals should and can in (221) look quite a lot like
the English modals in conveying a similar meaning via a similar form, a
standalone morpheme, and in a similar syntactic environment, preceding
the main verb 1-drive-b.
(221) a.
least; research on other sign languages might find variation of the sort seen
in spoken languages.
That said, modals in ASL and other sign languages have some notable
syntactic/semantic properties that have been discussed in prior literature.
One of the most surprising, perhaps, is that they can sometimes be found
“doubled” at the end of an utterance (222). Known within generative syntac-
tic literature as “focus-doubling” (Lillo-Martin and de Quadros, 2004), they
have been analyzed as occupying a similar syntactic position as clause-final
negation of the sort we saw in Chapter 2.
5 De dicto/de re
One final topic worth mentioning that is related both to attitude predicates
and to the word order distinction from the last section deals with the in-
terpretation of objects under the scope of intensional operators. Consider,
for example, the noun phrase three apple in (223). Interestingly, three
apples don’t even need to exist at all in the world in which this sentence is
expressed, and yet this sentence is acceptable; what matters for acceptabil-
ity is that three apples exist in all of the signer’s “desire worlds”, e.g. the
signer really wants three apples.
(223) Context: It’s unclear if there are any apples around, but the signer
really needs three apples to make her recipe.
We call this the de dicto interpretation of the object, which can exist in the
worlds invoked by the intensional operator (in this case, the desire worlds
of the signer) without necessarily existing in the actual world.
In formal semantic theories, de dicto interpretations of noun phrases
under intensional operators have been traditionally modeled as an effect
of scope. Consider, for example, two very different ways to understand
the noun phrase a prince in the English example in (224). In one (“de
dicto”) interpretation the sentence is unacceptable since Aurora’s desires
need not include princehood for her future husband; yet under another (“de
re”) interpretation the sentence is fine, if by a prince we mean the specific
person who we know to be a prince.
(224) Context: In the story Sleeping Beauty, the princess Aurora wants to
marry a man she met in the woods; her family wants her to marry
the prince of the neighboring kingdom. Aurora doesn't realize that
the man she met is actually the very same prince of the neighboring
kingdom.
Sentence: Aurora wants to marry a prince.
a. (Not acceptable, if we interpret a prince de dicto, since her desire
worlds include marrying this particular man, and she thinks he’s
not a prince)
b. (Acceptable, if we interpret a prince de re, that is, by the way
that the participants in the conversation could describe him from
their perspective, as in fact he is the prince of the neighboring
kingdom)
We can, glossing over many details and complications we can’t get into here,
model this as scope, with the idea that in the de re case we have a (specific)
prince, and say something about him (namely, that Aurora wants to marry
him) (225a). In contrast, in the de dicto case we are saying something about
Aurora’s desires, namely, that they include prince-marrying (225b).
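Glossing over the compositional details, the two construals can be sketched schematically as follows (this is our own simplified notation, not necessarily the formulas in (225)); the de re reading gives the prince-description widest scope, while the de dicto reading keeps it inside the desire operator:

    de re:    ∃x [prince(x) ∧ want(aurora, marry(aurora, x))]
    de dicto: want(aurora, ∃x [prince(x) ∧ marry(aurora, x)])

In the Sleeping Beauty scenario the de re formula is true (the man she wants to marry is in fact a prince), while the de dicto formula is not, since her desires do not require princehood.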
The important takeaway for current purposes is that the English sentence
seems to have two different interpretations, one of which has the object inter-
preted “outside” of the verb and another in which the object is interpreted
entirely “inside”/in the scope of the attitude verb, despite having the same
word order for both interpretations marry a prince. The reason this is rel-
evant to sign linguistics is that we might expect word order effects to care
about this difference, since many sign languages seem to have relatively flexible
word order with respect to argument structure, and word order can change in
a way that reflects semantic scope. Consider (226), where the difference is naturally
expressed in the de dicto case by taking Aurora's desires as the topic, and in
the de re case by topicalizing the prince: here, word order seems to reflect
scope order, resulting in a lack of ambiguity in ASL where one exists in
English.
(226) Context: In the story Sleeping Beauty, the princess Aurora wants to
marry a man she met in the woods; her family wants her to marry
the prince of the neighboring kingdom. Aurora doesn't realize that
the man she met is actually the very same prince of the neighboring
kingdom.
The takeaway from (226) is not that ASL and other sign languages lack
the de re/de dicto ambiguity, but rather that (as with many other languages)
there are many ways to minimize ambiguity, and a word order/syntax that is
sensitive to information structural differences is one way to disambiguate the
scopal relationships of noun phrases and intensional operators. This might
lead to word order differences in intensional contexts, perhaps yet one more
pressure on the word order findings from Napoli et al. (2017) for Libras, not
just iconicity but also semantic scope.
6 Conclusions
This chapter is perhaps the most speculative of this text, in part because the
formalizations get much more complicated than we are able to go into in this
context, and in part because intensionality is one of the least researched
areas in sign languages. Nevertheless, there are some important
findings in this area, and great possibilities for future directions.
For one thing, although modals are more semantically complex than
negation, there are interesting parallelisms between negation and modals in
a couple of places in sign languages, and that parallelism is worth pursuing
for further understanding. For example, negation and modals can both
appear "doubled" in sentence-final position, perhaps both conveying verum
focus. Negation and modals also seem, optionally, to induce word order
variation based on scope. For example, we saw how de re readings seem to
be highlighted by topicalizing the object in a sentence with an intensional
verb; the same can be seen with negation, where placing, say, a depiction
sentence-initially seems to ensure it is out of the scope of negation, as we
saw in Chapter 5.
Understanding the role of scope, iconicity, and word order across sign
languages is surely a future research path for formal semantics/pragmatics in
sign languages. The more general topic of intensionality is even broader than
conditionals, attitude verbs, and modal auxiliaries, encompassing a wide
range of expressions and potentially (in fact, likely) nonmanual markings as
well. In addition, Herrmann (2013) discusses modal-like meanings arising
from discourse particles and inter-sentential operators in DGS, along with
their relationship with nonmanual marking, and more on this kind of work
is surely due for other sign languages.
9
Conclusions
The goal of this book has been to showcase research that connects formal
semantics and sign linguistics across a variety of phenomena and, in doing so,
to bring together researchers across these two domains to work together on
future projects.
1 Events and propositions
This is the case with signs like ASL or words
like English knock, which are symbols with full linguistic potential that also
retain enough iconicity to easily support further depiction.
When we talk about a semantic analysis for ASL, we focus on separat-
ing propositional meaning from representations of events, and understanding
how the two interact. To illustrate the power of this, we can return to an
example from Chapter 1. Recall the sentence about students waiting in line
at the library, in (227). In Chapter 1, we discussed important entailments of
this sentence. Now that we have provided several examples of semantic anal-
yses of various phenomena in the intervening chapters, we can understand
what a simple semantic analysis would look like for this bit of language based
on the analyses given at the end of Chapters 5 and 6, and also understand
why those inferences follow.
(227)
(228)
The continuation will move the dialogue forward to partially answer this
question, eliminating possibilities via its propositional contributions, and
simultaneously depicting aspects of the event.
In (229a), we provide the propositional contribution that we computed
compositionally at the end of Chapter 5. This is the proposition that con-
tains exactly those worlds in which there are at least ten members in the in-
tersection of the set of students and the set of upright figures who are themes
(229) a. ⟦ ⟧ =
b. ⟪ ⟫ =
(230) a. ⟦ ⟧ = λw.{x.x ≤ ιx.R(x, a)} ∩ {x.x remembered card} = ∅ in w
b. ⟪ ⟫ =
What does this gain us in terms of “meaning” and reasoning? For one
thing, we can reason through simulation via the event representation en-
coded as some kind of model, e.g. . Such models may vary quite
a bit between people in a conversation; the more detail provided in an ut-
terance, including and especially via depiction, the more likely people will
be to align on their representations of a particular event. The other kind of
meaning is propositional, which we can model as a series of questions and
answers. We take the topic to be raising the QUD in (231a), taken from
above in (228). The answer/resolution comes from the dialogue that follows,
in which we take the meaning in (231b) and (231c) and conjoin them as in
(231d).
At one end of the spectrum are cognitive approaches in which meaning arising
from depictions and meaning arising from descriptions both contribute to the same kind of model-like rep-
resentations of events (Liddell, 2003; Taub, 2001). These approaches natu-
rally emphasize iconicity in language, as iconicity is expected under a view
in which language (of all sorts) contributes to constructing a model of the
world. Mental models are usually taken to be iconic representations in the
human mind, with the idea that you can represent the world in the format
in which you interact with it, and which allows you to investigate and test
inferences about the world via simulation (Johnson-Laird, 1980). Naturally,
using forms in language that also seem to mirror the world seems like a di-
rect source for encoding features of the world in your mind and that of your
interlocutor (i.e. to “paint a picture” in someone’s mind); what becomes
much more challenging to model are the symbolic components of language,
or .
On the other end of the spectrum, we find approaches to meaning which
are purely truth conditional, and which incorporate iconicity as a new type
of constraint on these truth conditions. One way to account for this is as a
presupposition, as in the iconic presuppositions in Schlenker et al. (2013) and
Schlenker (2021). Kuhn (2017b) proposes that language allows generally for
an iconic function Icon ϕ which can be used to map forms to meaning and
imposes on them a mapping between their referent in the world (e.g. perhaps
the timing of an event) and the form of a sign (e.g. the duration of a verb).
This proposal manages nicely to capture the gradient meaning requirements
encoded in depiction, but not so much why it doesn't integrate with logical
operators like negation. What is so different about iconic meaning? It
also raises some unanswered general questions about how it integrates with
the rest of the grammar: how do we know when Icon ϕ is present or not?
A third approach proposed by Ramchand (2019) is to divide model-
like meaning for events and propositional meaning for clauses essentially by
syntactic domain, following a larger research project of matching syntactic
domains with semantic ontology. Her proposal is that within the syntac-
tic domain of the verb phrase (the predicate and its arguments, along with
its adjuncts, but before tense and aspect are included), the event type is
created through a system of meaning construction/mental model creation
similar to that proposed in cognitive linguistics frameworks. Then, this
enters as an argument into a function that maps the event into a truth con-
ditional/possible worlds framework, creating a final representation that does
have truth conditions, but is no longer interpreted as iconic. One problem
with this approach is that it seems to overgenerate the iconicity interpreted
within the verb phrase. For example, both the ASL signs sit or vote
and the English words knock and chirp have the potential to be used purely
symbolically in a way that their form does not affect the representation of
an event in which they participate; for example, we can use the same sign
with the same form to talk about a table sitting on its side.
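Schematically (abstracting away from Ramchand’s own implementation), the proposal amounts to a two-step composition: the verb phrase is interpreted as a model-like event description m, built up in part iconically, and the higher functional layers close it off into truth conditions, roughly:

VP ⇝ m (a model-like event description)
⟦TP⟧ = λw. ∃e [e instantiates m in w and e satisfies the tense/aspect constraints]

The worry just raised is that this predicts iconic interpretation for any VP-internal material, including signs like vote when used purely symbolically.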
The approach in most of this text has taken both kinds of meaning to
be in parallel, which has empirical advantages over the other three systems.
The downside, clearly, is in theoretical parsimony: why have two theories of
meaning when one will do? The argument for the dual approach (of event
representations of particulars and of propositions for which we build alter-
natives) comes from empirical coverage crossed with parsimony: we could
stretch a truth conditional approach to include iconic functions to account
for both propositional and iconic aspects, but then we lose predictions for
when and where this iconic function appears. We could also stretch a cog-
nitive linguistic approach to cover proposition-like meaning, but this loses an enormous amount of the explanatory power that we focused on in Chapters 2-3: cognitive linguistics has little to say about, for example, negation, connectives, and entailment. A dual approach allows more complete coverage of these phenomena while at the same time fitting in with a larger picture of the mind in cognitive science as making use of dual/parallel processes (Kahneman and Tversky, 2013); see Baggio (2021) on compositionality in particular.
2 On pragmatic universality
This book has mostly considered semantics and pragmatics as intertwined
topics. For example, in Chapter 2 we looked at the semantics of phenom-
ena like question-answer clauses by understanding their pragmatic roles and
relation to a Question Under Discussion, and understood logical operators
in Chapter 3 as part of a system that involves both functions over propo-
sitions and related scalar implicatures. But, we haven’t addressed formal
pragmatics in a head-on way: is there anything generally special about sign
languages and pragmatics? In other words, do Gricean Maxims apply within signed conversation in the same way that they are standardly assumed to apply in spoken conversation?
3 Cross-linguistic typology
In some areas of linguistics, cross-linguistic variation has long been a driv-
ing question: phonology/phonetics naturally has been interested in how
patterns vary from language to language below the level of stored mor-
phemes/symbols, and studies in morphology and syntax have similarly long
acknowledged the importance of understanding variation in lexical and syn-
tactic structure (while, of course, all emphasizing similarities across lan-
guages as well). Semantics, on the other hand, has long been a less natural
fit for crosslinguistic investigations. Partly this is because it connects with
two external disciplines that are less focused on variation: logic, and psychol-
ogy/psycholinguistics. But it is also partly because there is less obviously a
case for variation; in fact, one might reasonably imagine that semantics
is a place where human languages are more similar to each other than they
are different: the pieces of meaning might indeed be universal even if the
ways that we express them vary. So it has been with excitement that recent work in formal semantics on crosslinguistic variation has raised many interesting questions about the burden of proof for arguing that languages have different semantics, and about what kinds of semantic interactions we expect to see across languages. This variation typically takes one of two forms: 1) units
of meaning are basically the same, but the ways that languages categorize and express them differ, or 2) the units of meaning themselves vary, so that some languages are working with different pieces than others.
An example of the first category is work on quantification, where the
expectation and general understanding is that languages all seem to have
the ability to express quantification, but how it gets mapped to linguistic
forms can vary. Work on sign languages has shown variation similar to that in spoken languages when comparing American Sign Language (Petronio, 1995;
Abner and Wilbur, 2017) and Russian Sign Language (Kimmelman, 2017),
for example, and evidence for quantification can be found in the much more
recently conventionalized Nicaraguan Sign Language as well (Kocab et al.,
2022). Work on logical connectives like conjunction, disjunction, and nega-
tion tends to take a similar form: the expectation is generally that languages
have the same underlying logical organization that supports propositional
meanings and operators on these meanings, but the way that they are ex-
pressed can differ. In sign languages we see these kinds of proposals for the
use of boolean connectives (Davidson, 2013; Zorzi, 2018; Asada, 2019) and
the interplay between manual and nonmanual negation across sign languages
(Zeshan, 2006; Kuhn and Pasalskaya, 2019).
The second category of cross-linguistic variation in semantics concerns
variation in the possible pieces of meaning themselves. One place this arises
is in the study of the way that gradability is expressed. For example, Aris-
todemo and Geraci (2018) argue that Italian Sign Language (LIS) not only has de-
grees as part of the semantic ontology (roughly, semantic units from which
comparisons are made), but makes them “visible” through the use of gra-
dient/iconic signing space. In contrast, Koulidobrova et al. (2022) argue
that ASL patterns with spoken languages like Washo (Bochnak, 2015) in not
having degrees in their ontology/as ingredients for semantic composition; as
a consequence of this, comparison in ASL is expressed with a different set
of expressions than would be possible if degrees were involved. Note that this doesn’t mean that comparison can’t be expressed at all: it’s not a theory of expressibility; rather, the form in which comparison can be expressed is determined by the semantic primitives it makes use of.
Perhaps this is a case of sign languages having different semantic ingredients
from each other, in the way that spoken languages have been argued to vary.
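To see concretely what having or lacking degrees in the ontology amounts to, compare a textbook degree-based entry for a gradable predicate with a degreeless one (schematic entries for illustration only, not the specific analyses argued for in the works just cited):

with degrees: ⟦tall⟧ = λd.λx. height(x) ≥ d, and a comparative relates maximal degrees, e.g. max{d . tall(d)(a)} > max{d . tall(d)(b)}
without degrees: ⟦tall⟧ = λx. x counts as tall relative to a contextual standard, so that comparison must be expressed by other means (e.g. juxtaposed positives: “a is tall, b is not”)

The empirical question is then which set of comparison strategies a given language makes available.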
We can find similar discussions of crosslinguistic variation in semantic ingredients in the literature on tense: all languages can clearly express that events happened at a past time, even if some do not seem to have specialized tense elements, or, alternatively, tenses can be postulated to be universal even when there is no direct evidence for them in some languages (Matthewson, 2006). It can also be seen in the literature on mass/count, where cross-linguistic variation shows distinct sets of patterns which can argue for languages beginning with different sets of semantic
ingredients (Chierchia, 1998); in sign languages, similar analyses have been proposed.
4 Historical change
Like so much of linguistic theory on phonology, morphology, and syntax, the
fields of formal semantics and pragmatics have both overwhelmingly focused
on synchronic language from a particular time and place and language com-
munity, but of course, we know that language is always changing, always in
flux, across all of these variables (time, place, and person). Clearly, there
is interesting work to do to tie together formal approaches to meaning and
language change, and in recent years there have been important advances
in this domain in spoken languages, highlighted in the regular conference
on Formal Approaches to Diachronic Semantics; see Deo (2015a) for an
overview of this kind of work.
Given that historical linguistics and formal semantics have only very
recently been connected in spoken languages, it is perhaps not a surprise that
there is also minimal work in this domain in signed languages. Nonetheless,
what we do know suggests an extremely rich area for investigation. In one
recent notable study, Sampson and Mayberry (2022) investigate the development of a personal pronoun that was similar in use to the pronominal sign used today, but produced with the handshape currently used in self. This pronominal form was interpreted by later generations of users of the signing community as a reflexive pronoun,
hence the current sense as ‘self’. Moreover, Sampson and Mayberry show
based on analysis of current signing productions that among young current
signers, the same sign can also be used as a copula (‘be’). On the one hand,
this documents a semantic trajectory of a language in flux of exactly the sort we would expect to see in spoken languages in the pronominal domain; on the other hand, it illustrates especially careful synchronic as well as archival work, given the video format of historical data on sign languages, and it will hopefully serve as a model for future work on sign language meaning change.
Another category of work that tackles the question of historical semantic
change in sign languages focuses on languages in which changes across time
can be studied through the generations of signers who still make up the lan-
guage community today, such as work on Nicaraguan Sign Language that
investigates the way that space is used in the grammar across successive
generations (Senghas, 2010; Kocab et al., 2015) and the kinds of seman-
tic/syntactic structures that are available in different generations of this
new language (Kocab et al., 2016). Meir et al. (2010) discuss the process of
emerging conventionalization in language communities more generally, and
Tomita (2021) provides an in-depth look at change in form and mean-
ing (of indexical pointing signs) across three decades by a single individual
signer of Japanese Sign Language. All of these provide insight into how the
structure of a language community and the context of an individual influ-
ence the way that linguistic form and meaning change over time, and this
more broadly bears on foundational questions in language and cognition.
There is no question that work in this domain with respect to compositional
semantics can help us better understand the relationship between language as a mental activity and as a community activity, both in flux. Of course, the investigation should eventually encompass language in all modalities, most notably
including the semantics of tactile languages of Deafblind communities (see
Checchetto et al. 2018 for one example of change in modality from visual to
tactile).
5 Future directions
Recall that we began with the question: how do we know what other people
“mean” when they share ideas using language? One thread through this
book is that we need to think about this question across multiple dimen-
sions: on the one hand, we have a logical-propositional system that seems
to use logical structure to support the communication of information about
the world. On the other hand, we seem to be able to “paint a picture”
in someone’s mind in the sense that we can help them build up a model
of a particular scenario or event that they can reason about through simulated
experience. An approach to semantics that considers both of these ways of making meaning together, as this book has aimed to do, offers a fuller picture of how we share ideas using language.
Bibliography
Abner, N. (2017). What you see is what you get: Surface transparency
and ambiguity of nominalizing reduplication in American Sign Language.
Syntax, 20(4):317–352.
Abner, N., Flaherty, M., Stangl, K., Coppola, M., Brentari, D., and Goldin-
Meadow, S. (2019). The noun-verb distinction in established and emergent
sign systems. Language, 95(2):230–267.
Abner, N. and Graf, T. (2012). Binding complexity and the status of pro-
nouns in English and American Sign Language. Poster presentation at
Formal and Experimental Advances in Sign Language Theory (FEAST).
Ahn, D., Kocab, A., and Davidson, K. (2019). The role of contrast in
anaphoric expressions in ASL. In Proceedings of Glow-in-Asia XII.
Alsop, A., Stranahan, E., and Davidson, K. (2018). Testing contrastive in-
ferences from suprasegmental features using offline measures. Proceedings
of the Linguistic Society of America, 3(1):71–1.
Cecchetto, C., Checchetto, A., Geraci, C., Santoro, M., and Zucchi, S.
(2015). The syntax of predicate ellipsis in Italian Sign Language (LIS).
Lingua, 166:214–235.
Cecchetto, C., Geraci, C., and Zucchi, S. (2009). Another way to mark
syntactic dependencies: The case for right-peripheral specifiers in sign
languages. Language, 85(2):278–320.
Checchetto, A., Geraci, C., Cecchetto, C., and Zucchi, S. (2018). The lan-
guage instinct in extreme circumstances: The transition to tactile Italian
Sign Language (LISt) by Deafblind signers.
Cormier, K., Schembri, A., and Woll, B. (2013). Pronouns and pointing in
sign languages. Lingua, 137:230–247.
Davidson, K., Kocab, A., Sims, A. D., and Wagner, L. (2019). The relation-
ship between verbal form and event structure in sign languages. Glossa:
a journal of general linguistics, 4(1).
Dingemanse, M., Blasi, D. E., Lupyan, G., Christiansen, M. H., and Mon-
aghan, P. (2015). Arbitrariness, iconicity, and systematicity in language.
Trends in cognitive sciences, 19(10):603–615.
Fenlon, J., Cooperrider, K., Keane, J., Brentari, D., and Goldin-Meadow,
S. (2019). Comparing sign language and gesture: Insights from pointing.
Glossa: a journal of general linguistics, 4(1).
Guasti, M. T., Chierchia, G., Crain, S., Foppolo, F., Gualmini, A., and
Meroni, L. (2005). Why children and adults sometimes (but not always)
compute implicatures. Language and cognitive processes, 20(5):667–696.
Hartmann, K., Pfau, R., and Legeland, I. (2021). Asymmetry and contrast:
Coordination in Sign Language of the Netherlands. Glossa: a journal of
general linguistics, 6(1).
Heim, I. (1982). The semantics of definite and indefinite noun phrases. PhD
thesis, University of Massachusetts.
Kocab, A., Ahn, D., Lund, G., and Davidson, K. (2019). Reconsidering
agreement in sign languages. Poster presentation at GLOW 42, Oslo.
Kocab, A., Davidson, K., and Snedeker, J. (2022). The emergence of natural
language quantification. Cognitive Science.
Kocab, A., Pyers, J., and Senghas, A. (2015). Referential shift in Nicaraguan
Sign Language: A transition from lexical to spatial devices. Frontiers in
Psychology, 5:1540.
Kratzer, A. (1977). What ‘can’ and ‘must’ can and must mean. Linguistics
and Philosophy, 1:337–355.
Loos, C., Steinbach, M., and Repp, S. (2020). Affirming and rejecting as-
sertions in German Sign Language (DGS). In Proceedings of Sinn und
Bedeutung, volume 24, pages 1–19.
Meir, I., Padden, C. A., Aronoff, M., and Sandler, W. (2007). Body as
subject. Journal of Linguistics, 43(3):531–563.
Meir, I., Sandler, W., Padden, C., and Aronoff, M. (2010). Emerging sign
languages. Oxford handbook of deaf studies, language, and education,
2:267–280.
Montague, R. (1973). The proper treatment of quantification in ordinary
English. In Approaches to natural language, pages 221–242. Springer.
Murray, S. (2017). Complex connectives. In Semantics and Linguistic The-
ory, volume 27, pages 655–679.
Napoli, D. J., Spence, R. S., and de Quadros, R. M. (2017). Influence of
predicate sense on word order in sign languages: Intensional and exten-
sional verbs. Language, 93(3):641–670.
Neidle, C. J. (2000). The syntax of American Sign Language: Functional
categories and hierarchical structure. MIT press.
Nevins, A. (2011). Prospects and challenges for a clitic analysis of ASL
agreement. Theoretical Linguistics, 37(3-4):173–187.
Noveck, I. (2018). Experimental pragmatics: The making of a cognitive
science. Cambridge University Press.
Noveck, I. A. (2001). When children are more logical than adults: Experi-
mental investigations of scalar implicature. Cognition, 78(2):165–188.
Ohori, T. (2004). Coordination in mentalese. In Haspelmath, M., editor,
Coordinating constructions, pages 41–55.
Padden, C. (1986). Verbs and role-shifting in American Sign Language. In
Proceedings of the fourth national symposium on sign language research
and teaching, pages 44–57. National Association of the Deaf Silver Spring,
MD.
Padden, C. A. (1988). Interaction of morphology and syntax in American
Sign Language. Routledge.
Papafragou, A. and Musolino, J. (2003). Scalar implicatures: experiments
at the semantics–pragmatics interface. Cognition, 86(3):253–282.
Partee, B. H. (1995). Quantificational structures and compositionality. In
Quantification in natural languages, pages 541–601. Springer.
Perniss, P., Thompson, R., and Vigliocco, G. (2010). Iconicity as a gen-
eral property of language: evidence from spoken and signed languages.
Frontiers in psychology, 1:227.
Perniss, P. and Vigliocco, G. (2014). The bridge of iconicity: from a world
of experience to the experience of language. Philosophical Transactions
of the Royal Society B: Biological Sciences, 369(1651):20130300.
Pfau, R. (2016). Syntax: complex sentences. In Baker, A., van den Bo-
gaerde, B., Pfau, R., and Schermer, T., editors, The Linguistics of Sign
Languages, pages 149–172. Amsterdam: John Benjamins.
Pfau, R., Salzmann, M., and Steinbach, M. (2018). The syntax of sign
language agreement: Common ingredients, but unusual recipe. Glossa: a
journal of general linguistics, 3(1).
Pfau, R., Steinbach, M., and Woll, B. (2012). Sign language. De Gruyter
Mouton.
Schlenker, P. (2011). Donkey anaphora: the view from sign language (ASL
and LSF). Linguistics and Philosophy, 34(4):341–395.
Schlenker, P. (2017b). Super monsters II: Role shift, iconicity and quotation
in sign language. Semantics and Pragmatics, 10(12).
Schlenker, P., Bonnet, M., Lamberton, J., Lamberton, J., Chemla, E., San-
toro, M., and Geraci, C. (2022). Iconic syntax: Sign language classifier
predicates and gesture sequences.
Sevgi, H. (2022). One root to build them all: Roots in sign language clas-
sifiers. In Proceedings of West Coast Conference on Formal Linguistics,
39.
Spaepen, E., Coppola, M., Spelke, E. S., Carey, S. E., and Goldin-Meadow,
S. (2011). Number without a language model. Proceedings of the National
Academy of Sciences, 108(8):3163–3168.
Speas, M. (2000). Person and point of view in Navajo direct discourse com-
plements. In Carnie, A., Jelinek, E., and Willie, M. A., editors, Papers in honor of Ken Hale, pages 19–38.
Strickland, B., Geraci, C., Chemla, E., Schlenker, P., Kelepir, M., and Pfau,
R. (2015). Event representations constrain the structure of language:
Sign language as a window into universally accessible linguistic biases.
Proceedings of the National Academy of Sciences, 112(19):5968–5973.
Sundaresan, S. (2013). Context and (co)reference in the syntax and its
interfaces. PhD thesis, Universitetet i Tromsø.
Tieu, L., Schlenker, P., and Chemla, E. (2019). Linguistic inferences without
words. Proceedings of the National Academy of Sciences, 116(20):9796–
9801.
Tomita, N. (2021). Breaking Free from Text: One JSL User’s Discourse
Journey over Time. PhD thesis, Gallaudet University.