Reiss 23 Research-Methods
Charles Reiss
September 7, 2023
century when the mathematical biologist J.B.S. Haldane remarked about the
significance of the first spacecraft to reach the surface of the Moon, the 1959
Soviet Luna 2 lunar impactor mission, that “it is scientifically important to
have hit the moon. It is a concrete proof that certain physical ‘laws’ or
rules about the behaviour of matter hold good at least as far as the moon”
(Haldane and Dronamraju, 2009, 159-60).
The perspective of this chapter is that the idea of an innate, genetically
determined Human Language Faculty (sometimes called ‘Universal Gram-
mar’ (UG) in one sense of the term) plays the same role in linguistics as
Newton’s assumption played in physics. Just as Newton assumed that data
about falling apples could bear on the analysis of planets in orbit, it is
useful to think of UG more as a postulate that allows empirical work to
proceed than as a hypothesis or a theory. In concrete terms, the postu-
late of the existence of UG allows data from each language to bear on the
analysis of all languages, and also on the characterization of the details of
UG, as discussed below. There appears to be some plasticity in the Human
Language Faculty, but plasticity only makes sense in the context of a deeper
uniformity.
What does it mean to treat apples and planets or Quechua and Hungarian
as elements of the same domain? It means that there is a (possibly to-be-
determined) level of analysis at which apparently different entities can be
analyzed in terms of the same properties—because they consist of the same
properties. At this level, the color of an apple, its degree of ripeness, and
the presence of a couple of wormholes in its surface are irrelevant to the
comparison with a planet. The inventory of the relevant properties is the
object of study of mechanics; and if we are realists about scientific theories,
we expect our theory to be isomorphic to the object of study with respect
to the properties posited in the theory. In the same vein, the postulate
of UG means that there is a (possibly to-be-determined) level of analysis at
which Quechua and Hungarian can be characterized using the same primitive
notions.
Of course, mechanics does not recognize apples and planets in its ontol-
ogy. Similarly, theoretical linguistics does not recognize everyday notions like
Quechua and Hungarian, but only individual mental grammars (I-languages)
that can be characterized by a common (shared) universal set of properties—
the theory of general linguistics (also sometimes called ‘Universal Gram-
mar’) should be isomorphic to the object of study for linguists, the universal
set of linguistic building blocks provided by the human mind.
2 UG-object and UG-theory
My impression is that Chomsky has always understood UG (or equivalent
terms, from before he adopted the phrase “universal grammar”) in this way, as
a necessary assumption that allows empirical work on language to proceed,
but discussion by both adherents and sceptics tends to miss or mask this
perspective. UG is often presented as merely a controversial theory or a
hypothesis for which linguists need to provide evidence, for example, by
Dąbrowska (2015) in a paper called “What exactly is Universal Grammar,
and has anyone seen it?” where she says that “Universal Grammar (UG) is a
suspect concept. There is little agreement on what exactly is in it; and the
empirical evidence for it is very weak.” This is quite a different perspective
from treating UG as a postulate, which has a status similar to an axiom in
mathematics.
The suggestion to treat UG as a background assumption, a postulate, is
partly a rhetorical device designed to flip the null hypothesis. Instead of feel-
ing pressure to prove the hypothesis of UG, linguists can show that it is only
by assuming UG that we have achieved the sophisticated results of recent
decades. General discussion of the existence and content of UG, of course,
also treats the idea as a hypothesis and part of an articulated theory that
can be subjected to elaboration and revision. So, without worrying too much
about the exact terms—postulate, theory, hypothesis—we can again appreci-
ate the parallel to Newton. The assumption that the same force was at work
on apples and planets led Newton to the inverse square law (the gravitational
force between two objects varies inversely with the square of the distance be-
tween them) and the notion of centripetal force. Having worked out such
details with mathematical rigor, Newton could see that his initial postulate
led somewhere good, and he was then able to formulate an articulated the-
ory of Universal Gravitation. The proof of the value of the postulate was
in the proverbial empirical pudding. So, in order to make my case, I’ll have
to provide some linguistic pudding below, since the “sophisticated results” to
which I allude tend to be little known outside of the narrow circle of theo-
retical linguistics. Anyone who wants to reject the postulate or hypothesis
of UG will need to provide alternative accounts of the results sketched below
in section 4.
Even some linguists complain that we shouldn’t talk about UG until we
look at more languages and get more data. Consider the following abusive
appeal for empirical confirmation of a property of one postulated version
of UG from within the linguistics community. Pullum and Scholz (2010)
object to the claim by Epstein and Hornstein (2005) that ‘discrete infinity’
(or denumerable infinity) is a property of every human language, that is, it is
part of UG.1 Pullum and Scholz object to such a claim about every language
“as if one by one they had all been examined by scientists and checked for
discrete infinitude.” The kind of empirical verification they are implicitly
demanding is, of course, ridiculous and non-existent in any field. Physicists
make claims about the mass and charge of electrons, yet they haven’t checked
every electron in the universe for these properties. To be consistent with
their own logic, Pullum and Scholz should reject Universal Gravitation. The
kind of generalization that Epstein and Hornstein are making is just normal
science—proposing a tentative description of the world, based on a set of
postulates and an infinitesimal set of observations guided by these postulates.
I propose that one source of difficulty in understanding UG as a postu-
late arises from the systematic ambiguity of the phrase Universal Grammar,
alluded to above, which refers both to the innate, genetically determined
language faculty, the object of study for the theoretical linguist, and to the
linguist’s scientific model of that faculty. Let’s distinguish these two usages
as UG-object vs. UG-theory. In some work, UG-object is referred to as the
Human Language Faculty, or S0 , the initial state, at birth, of the Human
Language Faculty. UG-theory, in contrast, can be equated with the field of
general linguistics.2
Footnote 1: We have to be careful—in this case every language appears to have this fundamental
property of discrete infinity. However, when we characterize reduplication below, we argue
that the existence of reduplication in just one language demonstrates the necessity for
certain computational power in UG. This does not mean that every attested language
must manifest reduplication. I suspect that Epstein and Hornstein are actually implicitly
applying Empirical Argumentation Device 5, introduced in section 4.5, below.
Footnote 2: I won’t hesitate to use the UG terms anachronistically to refer to related notions,
from before Chomsky started referring to Universal Grammar, because the issues are
more important than the terminological history—an example would be the reference to
“the general nature of Language” with capital L, in Syntactic Structures (Chomsky, 1957).
Here, “Language” just means UG-object. Chomsky (1965, p.25) refers to this ambiguity
of the term ‘grammar’, in general, not just with respect to Universal Grammar: the term
“grammar” can refer to an internally represented system of knowledge or to the linguist’s
account of that knowledge. The fact that Chomsky calls the internally represented system
the speaker’s “theory of his language”, with apparent scare quotes, was a bit confusing,
but we won’t go into his reasons for doing so. Chomsky addresses the confusion directly
on pp. ix-x of the 2014 Preface to the fiftieth anniversary edition of Aspects of the Theory
of Syntax (Chomsky, 2014).
While linguists tend to be comfortable with the terminological situation,
Green and Vervaeke (1997), a psychologist and a philosopher, highlight the
potential for confusion arising from the systematic ambiguity of the term
“grammar” (and its use in the phrase “Universal Grammar”), but in the pro-
cess, they reveal an additional confusion.
The additional confusion that Green and Vervaeke introduce is that the ques-
tion of nativism, the existence of a genetically determined UG-object, is dis-
tinct from the notion of I-language. Chomsky actually introduced the term
“I-language” to refer to an attained state of the language faculty, after expe-
rience, for example in Knowledge of Language (Chomsky, 1986, p. 23):
in the I-language perspective, because UG-object, and all subsequent states of
the language faculty must be understood as individual—they are part of each
person; internal—they are represented in the mind/brain; and intensional in
the set theoretic sense—they consist of rules, patterns or functions, rather
than lists or look-up tables, since the number of sentences of a language is
unbounded, so the grammar cannot extensionally characterize (list) the set
of sentences of the language.3
A good paraphrase for “I-language” is “mental grammar”, and Jackendoff’s
1994 informal equation is a useful guide to distinguishing UG from I-language:
Mental Grammar = UG + Experience. In Chomsky’s 1980 slightly more
opaque formulation “One may think of the genotype as a function that maps a
course of experience into the phenotype. In these terms, universal grammar is
an element of the genotype that maps a course of experience into a particular
grammar”. Here, “universal grammar” clearly means UG-object.4
p. 108-109).5
I propose that Chomsky doesn’t worry too much about the shifting be-
tween UG-theory and UG-object because he takes it for granted that one
would only pursue a UG-theory if it were a realist theory of UG-object. This
is obvious to Chomsky (in my understanding), but takes a bit of unpacking
for most of us. The implicit reasoning goes something like this, with two
parts: (a) If we don’t make use of a UG-theory, then there is no
coherent sense to the term “general linguistics”. Why not compare French,
Swahili, the Python programming language, the rules of chess, tango danc-
ing, and the Constitution of the United States of America? It is the postulate
of UG that defines the domain of inquiry. We want to make a theory over a
coherent domain of phenomena. Newton postulated that apples and planets
fall into a single domain for his purposes, and linguists think that French
and Swahili, but not chess, belong together. As discussed above this process
of defining domains involves postulating an abstract set of properties that,
for mechanics, say, includes mass, but not the Granny Smith vs. Golden
Delicious distinction.6 (b) Once we accept (a), linguists obviously need a
UG-theory containing a universal set of linguistic entities and operations of
various sorts because they are trying to model human languages which are
built from a universal set of linguistic entities and operations of various sorts.
Footnote 5: Ironically for us, Haldane (Haldane and Dronamraju, 2009, p. 122) uses “parts of
speech” to illustrate what he assumes is an obviously legitimate case of instrumentalism,
in contrast to chemistry:
But nobody knew how small the atoms were. And some philosophers
said they were only conventions to help our thinking, like parts of speech in
grammar or the decimal system in arithmetic. Chemical changes occurred as
if there were atoms, but we could never know what matter was really made
of.
Generative linguists hold that categories like negative polarity items and wh-words are
real, natural objects (see below).
Footnote 6: It is worth pointing out that discussions of modularity in cognitive science tend to
obscure the point that a module (like language) defines the elements of its domain. For
example, it is not the set of faces in the world that defines the face recognition module;
instead, our so-called face recognition module parses certain stimuli as faces, whether the
input comes from light reflected from a person (a ‘real’ face), a computer screen, a cloud
in the sky or a smiley face emoji. In some sense the so-called ‘domain’ of a module (e.g.,
things that look like faces) is actually, in the mathematical sense relevant to functions,
its range, its set of possible outputs. See Chomsky’s comment on the lack of a “mind-
independent object” corresponding to a syllable, in Section 5, below.
These are the things in the world that make languages and only languages
form a natural class of entities. In other words, the theory and the object
should be isomorphic. This is consistent with the idea that linguistics is a
form of naturalist inquiry and that each language is a natural object, built
from a genetically determined set of primitives (Chomsky, 2000a).7
In syntax, the existence of such innate categories appears to be widely
accepted—it seems that every generative syntactician at least implicitly ac-
cepts that all languages make use of universally available categories like
anaphors, wh-question words, and so on. In phonology, by contrast, there is
much less consensus. Chomsky and Halle (1965, fn. 24) are very clear con-
cerning the postulation of a universal set of distinctive features for phonology:
hypothesis that needs to be confirmed or disproven. I attempted to clarify
some of the confusion around the systematically ambiguous uses of the term
“grammar” as referring both to an object of study and a linguist’s theory
about that object. For our purposes, these issues are most relevant to the
ambiguity of the phrase ‘Universal Grammar’. I then pointed out that the
postulate of the existence of UG-object is necessary to define the domain of
linguistics in a coherent fashion, and that this postulate is reflected in the
practice of incorporating into the construction of UG-theory a fixed inven-
tory of theoretical primitives. By virtue of the fact that linguists are realists
about their science, they aim for an isomorphism between the UG-theory
they construct and the UG-object that exists in the world. In that sense,
linguists are just like other scientists, or at least those that abjure the instru-
mentalist view that theories are just tools we use to make predictions and
describe phenomena, but with no deeper relation to their objects. In sec-
tion 3, I reiterated the suggestion that the postulate of UG is necessary for
any empirical work to proceed, and pointed out that, as Chomsky has said,
everyone implicitly adopts this postulate when they claim to be engaged in
general linguistics.8
Footnote 8: Real life inquiry is, of course, messy. We do not know in advance what will constitute
a coherent domain and facts do not come labeled by an omniscient god telling us to which
domain an observation is relevant. Gravity does not completely determine how objects
fall—friction and magnetic forces might also be relevant. In this case, physics happens
to be able (in principle) to combine all these factors into a single account by calculating
the total force to which an object is subjected, but that is no guarantee that such a
synthesis will always be possible. For example, grammars play some role in determining
what people say, but there is no coherent theory that predicts what a person will say at
any given moment, given the fact that they may, perhaps, choose to speak French, English
or Swahili; they may lie or tell the truth; they may speak formally or informally; they
may use sarcasm; they may stutter or lose track of what they wanted to say; they may
get distracted by someone walking by; they may try to mimic Donald Trump’s dialect;
or an anvil might fall on their head and cut short their intended utterance. The physics
of falling objects presents a much simpler problem than that of predicting speech (and of
course we know that, given the three body problem of gravity, we still have to settle for
simplification and approximation even in physics).
Even in the abstract domain of linguistic analysis, at the pre-theoretical stage of the
study of languages, we can’t be sure that every observed phenomenon should be part of
a single unified theory. Languages differ in the kinds of distinctions they encode. For ex-
ample, Hungarian has no gender differences for third person pronouns, like English he vs.
she, and French not only differentiates the pronouns like English, but also requires agree-
ment of adjectives with these pronouns, according to an arbitrary grammatical ‘gender’
system. Just because there is a tradition of treating such phenomena as ‘linguistic’, there
is no guarantee that they should be analysed in the same way as, say, negative polarity or
wh-movement. In fact, such considerations have motivated Chomsky to distinguish what
he calls the “Narrow Faculty of Language”, FLN, from the “Broad Faculty of Language”,
FLB (e.g., Hauser et al. (2002)). People can argue about where the boundaries should fall
(e.g., Pinker and Jackendoff (2005)), and they can argue about how much, say, the study
of syntax bears on the study of phonology; but cutting up a pretheoretical domain like “lan-
guage” into distinct subdomains as knowledge progresses is standard practice (and it may
turn out that some phenomena are not amenable to scientific inquiry at all). For a parallel
within cognitive science consider Pylyshyn’s (2003) discussion of the many processes and
systems involved in what is pretheoretically called “seeing”:

To use the term ‘vision’ to include all the organism’s intellectual activity
that originates with information at the eye and culminates in beliefs about
the world, or even actions is not very useful, since it runs together a lot
of different processes. The same tack was adopted by Chomsky, who uses
the term “language” or “language capacity” to refer to that function that is
unique to linguistic processing, even though understanding natural language
utterances clearly involves most of our intellectual faculties.

The detection of edges and textures and colors involves different processes from the infer-
ence that three distinguishable regions in a visual display correspond to just two objects,
with one object partially occluding a portion of another. These are just some of the
processes involved in what we call ‘seeing’ in everyday language.

In this section, I try to show that theoretical generative linguistics, what
I reclamatorily call ‘armchair linguistics’, is, in its actual practice, a robust
scientific field of inquiry that has yielded impressive empirical results with no
reference to a laboratory, a statistical analysis, or large-scale online corpora,
all of which I throw together into a category I sloppily call ‘lab linguistics’.9 I
support this view of armchair linguistics by presenting a number of Empirical
Argumentation Devices which only make sense in the context of UG, and
which are foundational tools of the trade for theoretical linguists, even when
they are not named or recognized as such—well-trained students learn to use
them implicitly.

Footnote 9: For a disparaging use of the term ‘armchair linguistics’ see Ibbotson and
Tomasello (2016), and for a response see this entry on the knitting and sewing blog of
a former undergraduate student of mine: http://woolandpotato.com/2016/10/05/scientific-american-says-universal-grammar-is-dead-a-response/.

It is curious that whole careers are launched to explore how language
is used, how language is acquired, and how language is instantiated in the
brain, with minimal concern for what language is. Armchair linguists have
discovered that they can get insight and make predictions if they analyze
languages in terms of patterns and elements that at some level of analysis
can be described as, for example, unaccusative verbs, applicative construc-
tions, negative polarity items, vowel harmony, reduplication, and so on. For
linguists, these notions constitute what language is (at some, non-final level
of abstraction), and knowing what language is is a challenge that is logically
prior to the challenges of finding out about the neurological implementation
of language or how language learning works. If learnability theorists and
neuroscientists are not talking about the acquisition or neural instantiation
of things like vowel harmony or negative polarity items or ergative-absolutive
case marking systems, then as far as armchair linguists are concerned, they
are not talking about language.
It is important to keep in mind that scorn for armchair linguistics can
only backfire for lab linguists. It would be bad news for anyone doing corpus
studies of language, statistical analysis, or experimental work, if the cate-
gories and concepts of armchair linguistics are not well-grounded, because
then all that lab work will also be built on a shaky foundation. The experi-
mentalist, the corpus linguist and the statistician need the armchair linguist
to tell them what to count and measure.
as empirical scientists. If loop quantum gravity turns out to be better than
string theory, nobody would say that the string theorists haven’t been doing
science. Similarly, if Lexical Functional Grammar turns out to be better
than Minimalism, we shouldn’t therefore say that the Minimalists haven’t
been doing science. Linguistics is an empirical science because it adopts
scientific methodologies, regardless of whether or not the existence of binary
Merge turns out to be an illusion, just as nineteenth century physicists who
believed in the existence of the ether as the medium of electromagnetic waves
were physicists, despite being wrong about the ether.
in 1957, in The Logical Structure of Linguistic Theory, but published much
later (Chomsky, 1985).
domains. One needs to decide in advance—of course, subject to revision—
what kinds of observations count as data.11
A particularly clear indication from Chomsky that this is the role of UG
in linguistics comes from Knowledge of Language (1986, p. 38):
Here’s how linguists use the reasoning alluded to in the quotation above.
Suppose that Linguist Rim has constructed a grammar for the English-type
I-language of a speaker Kyle. In constructing this grammar, GR/Eng, Rim has
made use of a set of categories and operations that constitute the contents of
her UG-theory, UGtR , which she posits is a model of UG-object, a component
of Kyle’s mind, and that of every other human. So, UGtR is the theory of
UG-object according to Rim. Now suppose that Linguist Sarah constructs
an alternative grammar for Kyle’s I-language, GS/Eng . (We’ll assume that
the two grammars are not just notational variants.) Just like Rim’s, Sarah’s
grammar makes use of a set of categories and operations that constitute
the contents of her UG-theory, UGtS , the theory of UG-object according to
Sarah.
Suppose further that the two grammars for Kyle’s I-language are exten-
sionally equivalent: every string that Rim predicts is grammatical for Kyle
is predicted to be grammatical by Sarah, too. And every string that Rim
predicts to be ungrammatical for Kyle is predicted to be ungrammatical by
Sarah. Because linguists believe that I-languages, mental grammars, are real
things in the world, it is impossible that both GR/Eng and GS/Eng are the
correct grammar (model) of Kyle’s I-language (but let’s suppose that one of
them is).
Now, suppose that Rim and Sarah next try to model Hisako’s Japanese-
type I-language by constructing grammars GR/Jap and GS/Jap (Rim’s gram-
mar of Japanese and Sarah’s grammar of Japanese, respectively). These
grammars will have to be built from the components provided by UGtR and
UGtS , respectively—that’s what it means for them to each have a UG-theory.
Finally, suppose that Sarah’s grammar is a good model of Hisako’s mental
grammar, but Rim’s is not. In other words, Sarah’s UGtS allows her to model
both English and Japanese, whereas Rim’s UGtR only allows her to model
English. We can conclude that Sarah’s model of English was the correct one
(remember, we are supposing that one of them is correct), and that Rim’s
was incorrect. We further can conclude that Sarah’s UGtS is a better model
of UG-object than Rim’s—note that there is, of course, only one true UG-
object, but that every theoretician may have their own UG-theory. Without
the postulate of UG, there is no reason to think that data from Japanese
should be any more relevant to the choice of which English grammar is correct
than data concerning bird migration or soap bubbles.
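The shape of this reasoning can be sketched in a few lines of Python. The sketch below is a cartoon under invented assumptions: the primitive names, and the sets of building blocks each I-language is taken to need, are made up purely for illustration.

    # UG-theories as inventories of primitives; a grammar for an I-language
    # can be built only from the primitives its UG-theory supplies.
    UGt_R = {"SVO_order", "wh_fronting"}                              # Rim
    UGt_S = {"SVO_order", "SOV_order", "wh_fronting", "wh_in_situ"}   # Sarah

    # Hypothetical building blocks demanded by each I-language:
    english_needs = {"SVO_order", "wh_fronting"}
    japanese_needs = {"SOV_order", "wh_in_situ"}

    def can_model(ug_theory, needed):
        # A grammar exists only if every needed building block is supplied.
        return needed <= ug_theory

    print(can_model(UGt_R, english_needs), can_model(UGt_R, japanese_needs))
    # True False: Rim can model English but not Japanese
    print(can_model(UGt_S, english_needs), can_model(UGt_S, japanese_needs))
    # True True: Sarah covers both, so UGt_S is the better model of UG-object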
4.3.2 EAD 2: Recurrent categories
This EAD is really an elaboration and exemplification of the previous one. If
languages are not assumed to be manifestations of a single language faculty
or Universal Grammar, then we might expect the analysis of each language
to force us to start ex nihilo, with no preconceptions about what we might
find grammaticalized (cf. Haspelmath, this volume). Well, perhaps that is a
bit strong—we might expect to find the recurrence of concepts and categories
that belong to general human cognition.
While it is impossible to quantify such observations, it appears to be the
case that, in fact, there are a number of characteristics of languages that (a)
recur over and over in pretty much every language we study, and (b) serve no
conceivable role in the service of language for communication, that is, they
do not contribute to meaning.
One of the most intriguing examples of such a recurrent category is the
class of negative polarity items (NPIs), represented by English words like any, any-
thing and ever, as well as idioms like a red cent and lift a finger. We’ll
consider complexities below, but for now, let’s say that these items occur in
syntactic positions that are in the scope of a downward entailing operator
(DEO), including negation. Consider these examples:
(1) Negative polarity items
a. John didn’t eat any cookies.
b. John didn’t eat any chocolate cookies.
c. John didn’t eat any baked goods.
d. *John ate any cookies.
The string in (1a) is grammatical, and, furthermore, we know that the truth
of the proposition expressed by (a) entails the truth of the proposition ex-
pressed by (b), because “chocolate cookies” is more specific than “cookies”,
or equivalently, the set of chocolate cookies is a subset of the set of cookies.
Any statement that is true of (all) cookies is true of chocolate cookies. In
contrast, John may have eaten pie, so (a) might be true while (c) is false.
Here we see that “baked goods” is less specific than “cookies”. In (d), without
negation, the string is ill-formed—it appears that any can’t appear without
negation.
If we remove negation, the entailments reverse, and the strings cannot
contain NPIs as they did above:
16
(2) a. John ate (*any) chocolate cookies.
b. John ate (*any) cookies.
c. John ate (*any) baked goods.
If we know that John ate chocolate cookies, then we know that he ate cookies,
and we also know that he ate baked goods. This is upward entailment, from
more specific to more general.
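Since these entailments reduce to set inclusion, they can be checked mechanically. Here is a minimal Python sketch; the toy universe of objects eaten is invented for the example.

    # Model the categories as nested sets; inclusion drives the entailments.
    chocolate_cookies = {"choc1", "choc2"}
    cookies = chocolate_cookies | {"oatmeal1"}   # chocolate cookies are cookies
    baked_goods = cookies | {"pie1"}             # cookies are baked goods

    def ate_some(eaten, category):
        # "John ate (some) X" is true iff he ate at least one member of X.
        return bool(eaten & category)

    john_ate = {"pie1"}  # suppose John ate only a pie

    # Under negation, entailment runs downward, from general to specific:
    for label, cat in [("chocolate cookies", chocolate_cookies),
                       ("cookies", cookies),
                       ("baked goods", baked_goods)]:
        print("John didn't eat any", label, "->", not ate_some(john_ate, cat))
    # True, True, False: (1a) can be true while (1c) is false, as noted above.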
Note that negative polarity items like any do not fall into the parts of
speech we learn in grammar school. A temporal adverb like ever can also be
a negative polarity item:
(3) a. John hasn’t ever eaten cookies.
b. *John has ever eaten cookies.
It seems that, like any, the word ever is somehow dependent on the presence
of negation (or some other DEO).
The same is true for the idiomatic NPIs: He wouldn’t lift a finger to
help his own mother is grammatical, whereas *He would lift a finger to help
his own mother can only be interpreted as a nerdy sarcastic joke of some
kind, because it is not grammatical if the string lift a finger is supposed to
correspond to the idiomatic expression.
There are many other DEOs, the elements that allow the occurrence
of NPIs, beyond the example of negation given thus far. For example, in
English, before is a DEO but after is not:
(4) a. Mary will call me before John eats any chocolate cookies.
b. Mary will call me before John eats any cookies.
c. Mary will call me before John eats any baked goods.
Note the entailment relations and the possibility of any appearing. The truth
of (b) entails the truth of (a), and the truth of (c) entails the truth of both
(a) and (b).
Contrast this with the use of after:
(5) a. Mary will call me after John eats chocolate cookies.
• *Mary will call me after John eats any chocolate cookies.
b. Mary will call me after John eats cookies.
• *Mary will call me after John eats any cookies.
c. Mary will call me after John eats baked goods.
• *Mary will call me after John eats any baked goods.
Here’s an amazing fact about the DEOs, the elements that allow any and
other NPIs to appear: they obey de Morgan’s Law of logic that governs the
combinations of and and or with negations like not. For example, note that
the members of the following sentence pair have the same meaning:
(6) a. John didn’t eat cookies or candy.
b. John didn’t eat cookies and John didn’t eat candy.
NPIs and DEOs occur in all languages and it appears that children must
have access to de Morgan’s Law in order to know the distribution of NPIs.12
In my opinion, these facts are enough justification for linguists to shout from
the mountaintops about innate knowledge and UG, but there is more.
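The logical law itself can be verified exhaustively with a two-line truth table; the check below is just a sanity check on the logic, not a claim about how learners implement it.

    from itertools import product

    # Verify de Morgan's Laws over all combinations of truth values.
    for A, B in product([True, False], repeat=2):
        assert (not (A or B)) == ((not A) and (not B))
        assert (not (A and B)) == ((not A) or (not B))
    print("de Morgan's Laws hold for every assignment of truth values")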
Not only do overt DEOs like not, never and before license the appearance
of NPIs; polarity (YES-NO) questions and conditionals do so as well:
(8) Have you eaten any cookies today?
(9) If Sue ate any cookies, Jane will kiss me.
Without giving a formal analysis, note that if the answer to (8) is no,
then the answer is also necessarily no to the question “Have you eaten any
chocolate cookies today?”, but not to “Have you eaten any baked goods to-
day?”. Similarly, if it is the case that Jane will kiss me in accordance with
(9), then she will still kiss me if the cookies happened to have been chocolate
ones. But I am not guaranteed a kiss if Sue ate a babka instead of cook-
ies. These facts hold for every single language ever studied. The recurrence
of these patterns in every single language is a breathtaking discovery about the structure of the human mind.
Footnote 12: The most important reference on NPIs is probably Ladusaw (1979). Another im-
portant thesis is Gajewski (2005). Gualmini (2014) reports on experimental work with
children related to entailment relations. Later papers by these three authors and many
others are relevant to this broad topic.
This knowledge gives us great predictive
power, and it is hard to imagine treating it as anything but an empirical
result. NPIs and DEOs are basic building blocks of language—they are part
of Universal Grammar. No computer language makes use of them. If anyone
wants to do a corpus study of NPIs, it will rely on the armchair work that
has identified the category in the first place.
Consider examples like Any linguist can answer that question or Pick any
card, which illustrate so-called ‘free choice any’. Linguists are willing to
claim that these examples of any actually represent a homophone of the
earlier cases. Now this seems like an attempt to ‘save the phenomenon’,
since these sentences show any without negation or any other DEO. However,
we have a nice argument. If we translate an NPI any sentence and a free
choice any sentence into French, the results are interesting: NPI any is
rendered with one expression (John didn’t eat any cookies becomes Jean n’a
pas mangé de biscuits), while free choice any is rendered with an entirely
different one, n’importe quel (Any linguist can answer that question becomes
N’importe quel linguiste peut répondre à cette question).
But think of what we have done: we have assumed that the analysis of
French can tell us about the analysis of English. What licenses this reason-
ing? Only the belief in Universal Grammar allows us to construct this kind of
argument. It turns out that every linguist makes such arguments implicitly,
all the time, but some of them deny a belief in Universal Grammar.
What we have just seen is an illustration of the Japanese-English dis-
cussion above, based on homophony in one language. In this example we
appealed to French, rather than Japanese, to determine the correct analysis
of English. Looking at French, we could see that it was justified to treat the
English NPI any and the English free-choice item any as two different words.
This is just a reiteration of our point that UG is a necessary postulate: by
assuming UG (just as Newton assumed universal gravitation), Chomsky has
made it clear that the empirical data relevant to the study of English-type
grammars includes facts from French. Our armchair linguistics turns out
to be hyper-empirical, as we assume that observations from any language
potentially bear on the analysis of every language.
Of course, everybody already knew this. . . at least implicitly. This is why
analysis of new languages never starts from scratch. Furthermore, if we do
decide that we need a new category to analyze a newly attested language, we
immediately expect to find that category instantiated in other languages. We
assume there are words, and nouns, and NPIs, and so on, in each language
we come across.
/s/ and /z/ are sets of valued features. These two segments correspond to
sets that are identical, except that /s/ contains a valued feature −Voice
(corresponding in a complex and indirect way to the fact that it involves no
vocal fold vibration) and /z/ contains a valued feature +Voice. A third
possibility is a segment /S/ that contains no valued feature for Voice—we’ll
allow this possibility here, but we won’t justify this decision.
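The representation just described is easy to make concrete. In the sketch below, the non-Voice features attributed to these segments are placeholders, not a claim about the correct feature analysis.

    # Segments as sets of valued features (feature inventory abbreviated).
    s = frozenset({("Voice", "-"), ("Continuant", "+"), ("Coronal", "+")})
    z = frozenset({("Voice", "+"), ("Continuant", "+"), ("Coronal", "+")})
    S = frozenset({("Continuant", "+"), ("Coronal", "+")})  # no Voice value

    print(s ^ z)             # the two differ only in the value of Voice
    print(S < s and S < z)   # True: /S/ is a proper subset of both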
Now, suppose that there are twenty such features provided by Universal
Grammar for the encoding of speech segments with phonetic correlates such
as rounding vs. spreading of the lips, lowering vs. raising of the tongue,
raising or lowering the velum to control airflow through the nasal passage,
and so on. Since each feature can be absent from a segment or else take
the value ‘+’ or ‘−’, this means that there are 3^20 possible speech segments,
intensionally defined by UG, about 3.5 billion. So, a UG with just twenty-
two basic symbols for defining speech segments (twenty features and the
two values, ‘+’ and ‘−’) allows for languages to contain any subset of those
3.5 billion segments. English-type grammars have maybe a few dozen of
these segments, including /i/, /θ/, and /ŋ/; Hawaiian-type grammars have
about eleven distinct segments; and Northern Ndebele has fifteen clicks, along
with many other segments. So, treating a language as just an inventory of
segments, how many languages does our modest UG define intensionally?
The set of languages is just the power set, the set of all subsets, of the
set of segments. For each language, we need to specify whether or not it
contains a given segment in its inventory. Since there are 3.5 billion = 3.5 × 10^9
segments, there are 2^(3.5 × 10^9) languages (segment inventories) intensionally
defined by our UG. This is comfortably above what Gallistel and King call
“essentially infinite”, meaning greater than the number of particles in the
universe, which is estimated to be about 2^285. And now imagine what the
number of languages would be if all the other postulated elements of UG
(morphosyntactic as well as phonological) were included. Languages contain
ordered phonological rules and for a given set of n rules, there are n! orderings
(n choices for the first rule, n − 1 for the second, etc.). So for a set of just
ten rules, there are 10! = 3,628,800 orderings. But this is just the number
for a fixed set of rules. For a language with, say, twenty segments, there are
about 176,000 rules of the form ‘a → b / c __ d’. There are about 7.5 × 10^45
distinct sets of ten rules available from a set of 176,000 rules (“176,000 choose
10”). Each such set of 10 can be ordered in 3,628,800 ways. And we haven’t
even left phonology yet. Some languages place adjectives after the noun
they modify and some before, so that doubles whatever meta-astronomical
number we have already arrived at. And so on.
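The arithmetic of this paragraph can be reproduced in a few lines. One assumption below is mine, not the text’s: the figure of about 176,000 rules is reconstructed by letting each of the four positions in ‘a → b / c __ d’ range over the twenty segments, with the two context positions also allowing a word boundary.

    from math import comb, factorial, log10

    n_features = 20
    n_segments = 3 ** n_features       # each feature is +, -, or absent
    print(n_segments)                  # 3486784401, about 3.5 billion

    # Inventories are subsets of the segment set: 2 ** n_segments of them.
    # That number is far too large to print, so report its digit count.
    print(int(n_segments * log10(2)) + 1)   # about 1.05 billion digits

    print(factorial(10))               # 3628800 orderings of ten rules

    # Rules a -> b / c __ d, with contexts also allowing a word boundary:
    n_rules = 20 * 20 * 21 * 21
    print(n_rules)                     # 176400, "about 176,000"
    print(comb(n_rules, 10))           # about 8 x 10**45 ten-rule sets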
Such combinatoric facts do not prove the existence of UG, and they cer-
tainly do not prove the existence of specific content in UG—we surely do not
know exactly what the innate set of features is—but the numbers do provide
a plausibility argument. It may be possible “to abstract from the welter
of descriptive complexity certain general principles [and basic units] govern-
ing computation that would allow the rules of a particular language to be
given in very simple forms” (Chomsky, 2000a, 122). The combinatorics also
support the idea of a UG encoded in the genome and instantiated in the
brain, since the “less attributed to genetic information . . . the more feasible
the study of its evolution” (Chomsky, 2007) and its neural implementation.
Combinatorics of a small inventory of basic elements and operations is
exactly what Gallistel and King (2009) propose as a desirable model for cog-
nition of all kinds, from insect navigation to human language:
When Chomsky (1957) showed that finite state models are insufficiently powerful to generate
English-type grammars, he was able to conclude that finite state models are
thus insufficiently powerful to model Language—what he would now call the
Human Faculty of Language.14 Critics of UG, whether within or outside
the linguistics academic community, complain that we shouldn’t make claims
about UG until we have looked at many more languages (but they don’t ever
specify how many would suffice). However, the brilliance of Chomsky’s argu-
ment is that it shows how one can make empirical discoveries without leaving
the armchair. No matter how many remote villages one visits to document
new, exotic languages, there is absolutely no data that can undermine the
empirical result based on English alone. The Human Language Faculty must
have access to something more powerful than a finite state machine, just in
case a learner happens to be born in Cincinnati.
We can apply Chomsky’s armchair methodology over and over again. For
example, the Warlpiri language of Australia makes the plural of nouns via
full reduplication, so kurdu ‘child’ is pluralized as kurdukurdu, and mardukuja
‘woman’ is pluralized as mardukujamardukuja. What is the sound of the
plural marker? Well, unlike, say, the [z] of dogs, there is no fixed sound of
the plural in Warlpiri. It is better described as a variable x that is assigned a pronunciation identical to that of the noun it combines with.
Footnote 14: A reader suggests that I am exaggerating here the empirical nature of Chomsky’s
claim, and that I should temper it by recognizing the implicit assumption that there is no
bound on the distance between dependent elements.
I think this is literally correct, but in fact irrelevant because a parallel complaint could be
made about pretty much any empirical claim. Chomsky observes long distance dependen-
cies of various types in English sentences, and there is no evidence to suggest that there are
any constraints on the size of these dependencies. Rather than taking it as a “theoretical
inference” that English allows arbitrarily large displacement, it is best seen as due respect
for Ockham’s Razor and general elegance. Positing, in spite of a lack of evidence, that
there is a limit on the length of long-distance dependencies is as wrong as positing that
prime numbers play a role in syntactic computation, or that water has consisted of H2O,
only since we began analyzing it, and that it might consist of something else starting next
week. The critique is logically valid, but useless, since this is as good as it gets in empirical
science.
So x is
assigned the phonological value [kurdu] when attached to the noun kurdu,
and it is assigned the phonological value [mardukuja] when attached to the
noun mardukuja. Since children don’t know in advance whether they will
be learning English or Warlpiri, UG has to provide them with access to
such variables, as well as a mechanism for concatenating the variable with
the element that assigns it a value. The fact that English and Finnish lack
reduplication cannot bear on the discovery that variables that can be assigned
a phonological value and concatenation must be provided by the UG toolkit.
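The computational point fits in a few lines; the function name is mine, and the sketch abstracts away from everything except copying and concatenation.

    def warlpiri_plural(noun):
        # The plural marker is a variable x; x is assigned the phonological
        # value of the noun it attaches to, and the two are concatenated.
        x = noun
        return noun + x

    print(warlpiri_plural("kurdu"))      # kurdukurdu 'children'
    print(warlpiri_plural("mardukuja"))  # mardukujamardukuja 'women'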
The form I in this language, used only for transitive subjects, would be
called ‘ergative’, and the form me used for objects and intransitive subjects
would be called ‘absolutive’. Now, many, many languages follow the Penglish
pattern; however, some languages don’t show any changes in the forms of
pronouns at all—they have no overt case marking. Nevertheless, we do get
patterns like the following:
I’ve used italics and bold to encode the gender agreement. In (a) and (b) the
sole argument (what we would call subject of the intransitive) determines
the shape of the verb: fell in (a) shows an animate verb stem because dog is
animate; fell in (b) shows an inanimate verb stem because rock is inanimate.
But in (c) and (d) it is the object that agrees with the verb stem, and the
subject is irrelevant.
So, the assumption of UG allows us to look beyond the superficial forms
of sentences and unite Penglish, Samoan and Cree into a single pattern in
which the sole argument of an intransitive acts like a transitive object. This
contrasts with English in which the sole argument of an intransitive (I am
sleeping) acts like a transitive subject (I am kicking him).
I have presented this distinction in terms of a difference among languages,
but this is an oversimplification. In some languages, it is apparent that there
are two categories of intransitives—one that treats the sole argument like a
transitive subject, and one that treats the sole argument like a transitive
object. In Lakhota, a Siouan language, the pronominal markers on verbs
differ in form depending on which verb occurs. Here’s another schematic
example, using independent pronouns to reflect the forms of Lakhota verb
markers:
The sole argument of (a) is marked like the object in (d), whereas the sole
argument in (b) is marked like the subject in (c). This ‘split-intransitive’
system seems quite exotic until we note that Italian manifests exactly the
same pattern:
(19) Italian
a. Gianni ha mangiato la torta
Gianni has eaten the cake
b. Gianni ha telefonato
Gianni has called
c. È arrivato Gianni
Gianni has arrived
In Italian, the sole argument of the verb ‘has called’ in (b) occurs before the
verb, just like the subject of ‘has eaten’ in (a). In contrast, the sole argument
in (c) occurs after the verb ‘has arrived’, just like the object ‘the cake’ in (a).
(These are the default positions, when there is no special focus or contrast
expressed by the sentences.) There are two kinds of intransitive verbs in
Italian, just as in Lakhota! If Lakhota is exotic, so is Italian. Note also that
in addition to the position of the sole argument, these two kinds of Italian
intransitives use different auxiliary verbs, ha vs. è.
This discussion is meant to show that superficial diversity among lan-
guages can sometimes be analyzed, using the methods of the armchair, to
detect underlying patterns that no laboratory procedure will expose. Of
course, a universal account of verbs and their arguments in all languages
remains a topic of research.
This kind of deep analysis is only available if we can see beyond super-
ficial patterns and posit abstract elements and structures. But what is the
ontological status of these elements and structures? Are they just elements
of our theories? If there is no Faculty of Language, no UG-object, then that
is the only possibility—they are just parts of our theory. But that leaves
no explanation for the recurring utility of such elements and structures in
language after language, and it leaves no explanation for certain apparent
gaps, such as the fact that no language treats objects and transitive subjects
alike, to the exclusion of intransitive subjects. I have not provided a syntactic
account to unify all these phenomena, but there is a rich literature on which
this brief survey is dependent.
copies are concatenated with an intervening [k], to produce a structure N-k-
N that has the meaning ‘any N’. In other words, we construct a negative
polarity item version of each noun using reduplication. All the pieces are
familiar from other languages—we have mentioned reduplication in Warlpiri
and Samoan, and we mentioned English and French NPIs. Components from
different modules of the language faculty—a morphological process like redu-
plication and a syntactico-semantic property like negative polarity—partici-
pate in the desirable combinatoric explosion described by Gallistel and King
(2009).
Only the assumption of UG leads us to expect such discoveries in Yoruba
and recognize the building blocks. In contrast, no language derives word
forms by reversing the sequence of segments in other word forms. There is
no reason to believe that UG allows such an operation, despite the fact that
it is trivial to state.
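A sketch of the contrast, with invented function names and a schematic noun (the spelling iwe stands in for a real Yoruba noun): the attested N-k-N pattern and the unattested reversal are both trivial to state, but only the former is apparently supplied by UG.

    def yoruba_any(noun):
        # Reduplication with an intervening [k]: N-k-N, meaning 'any N'.
        return noun + "k" + noun

    def unattested_reversal(word):
        # Trivial to state, yet apparently unavailable to UG: no language
        # derives word forms by reversing the sequence of segments.
        return word[::-1]

    print(yoruba_any("iwe"))            # iwekiwe, schematically 'any iwe'
    print(unattested_reversal("iwe"))   # ewi: a non-process in human language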
sometimes flabbergasted by what we accept as data, and by our lack of statis-
tical measures of confidence for, say, grammaticality judgments. There is no
reason to argue about this; it is enough to consider what it would look like
if armchair linguists did insist on incorporating mindless statistical analysis
into their work. One could spend one’s research funding asking an English-
speaking subject a thousand times whether this is a well-formed question:
What did the cat eat? Or even worse, one could ask a thousand ‘English’
speakers each one time if that question is well-formed.15 There will be pretty
robust statistics confirming our intuition that indeed the question is gram-
matical.
However, instead of doing that kind of statistical work, guaranteed to give
robust results, one could build a model in which What did the cat eat?, Who
did Bill see? and Which fish did the cat eat? are all tokens of an abstract
type of sentence. Then one might go further, and show that these are also
members of the same type: What does Mary think the cat ate? and Who
does Mary think Bill claimed Fred saw?, and even Who saw Bill? and Which
cat did Mary claim saw Bill?. The point is that a statistical analysis of, say,
wh-questions relies on the categories discovered in the armchair. Facts about
wh-questions are not derivable from raw data because there is no raw data
about wh-questions. The results from the armchair, not the eye-tracker or the
stats package, are pretty impressive. They include accounts of the contrast
between Who does Mary believe Bill claims to have married? and ill-formed
*Who does Mary believe Bill’s claim to have married? It also includes an
account of the contrast between What do you like bacon with? and *What do
you like bacon and? Note that the accounts of these distinctions, developed
in the armchair, turn out to not be parochial explanations of the questions
that English speakers hear and say, but rather they turn out to generalize
fairly well to all languages (although many puzzles remain to be explained,
as expected in a complex domain of inquiry). And of course, it is only
the postulate of UG that licenses the comparison of wh-words of English
with elements in Mandarin, Quechua, Hungarian, and Urdu with completely
different phonetics.
For a non-linguist, it may come as a shock that armchair-based research
has even offered arguments that in a sentence like What does Mary claim
Bill believes Fred said Sue denies Irving ate? there is evidence that the
sentence-initial what, which must be interpreted as the object of the last
verb ate, actually ‘passes through’ all the intermediate clauses.
Footnote 15: Of course, ‘English’ is in scare quotes since, under the I-language perspective, there
is no such entity as English or Swedish or Hindi; there are just a bunch of individual
I-languages, some of which are more alike than others.
That is, the
sentence must be modeled as something like this: What_i does Mary claim t_i
Bill believes t_i Fred said t_i Sue denies t_i Irving ate?, where each t_i is a ‘trace’
of the movement of what from its position as object of ate to the front of
the sentence. I won’t reconstruct the argument in full here, but it consists of
the application of EAD 3: there are languages in which there is an audible
effect on all of the intermediate clauses in such structures (see examples in
Torrego 1984, Henry 1995, Haïk 1990). In English, where there is no such overt
evidence, we are licensed to assume the step-by-step movement by virtue of
the postulate of UG. The simplest account of Language, consistent with the
data, is of course, that all languages in fact use step-by-step movement of
wh-elements. Ultimately, we might find in English independent evidence for
such an analysis—the postulate of UG encourages us to keep looking. This
is an armchair research program par excellence.
to what are called natural classes of segments. A natural class is a set of
segments that can be characterized by a conjunction of valued features. So
in our hypothetical language L, we can identify the natural class of vowels
/i,u/, which consists of all and only the vowels that have the property +High.
We can also identify the natural class of vowels /e,o/ that have the property
−High. A natural class may contain a single member, such as the class
containing just /o/, which consists of all and only the vowels containing the
properties −High, +Back and +Round. The role these natural classes
play in grammar is to help define the set of possible rules of a language—a
rule can only make reference to natural classes. This means that no rule can
refer, for example, to /i,u,o/ but not /e/, because there is no conjunction of
features shared by the first three to the exclusion of /e/.
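The claim about /i,u,o/ can be checked by brute force over the toy system. The feature values below follow the text’s description; the Back and Round values of /u/ are inferred from the fact that −Back and −Round each pick out exactly /i,e/.

    from itertools import product

    vowels = {
        "i": {"High": "+", "Back": "-", "Round": "-"},
        "u": {"High": "+", "Back": "+", "Round": "+"},
        "e": {"High": "-", "Back": "-", "Round": "-"},
        "o": {"High": "-", "Back": "+", "Round": "+"},
    }
    FEATURES = ["High", "Back", "Round"]

    def extension(desc):
        # The segments containing every valued feature in the description.
        return frozenset(v for v, fs in vowels.items()
                         if all(fs[f] == val for f, val in desc.items()))

    # Enumerate all 3**3 = 27 descriptions (+, -, or unspecified per feature).
    extensions = set()
    for values in product(["+", "-", None], repeat=3):
        desc = {f: v for f, v in zip(FEATURES, values) if v is not None}
        extensions.add(extension(desc))

    print(frozenset("iu") in extensions)    # True: +High is a natural class
    print(frozenset("iuo") in extensions)   # False: /i,u,o/ is not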
An almost universal assumption among phonologists is that natural classes
should be captured with a minimal amount of information: the more com-
pactly the classes are defined, the more compactly the rules are defined. Such
compactness is taken as an equivalent of theoretical elegance and a reflection
of adherence to Occam’s Razor. The problem with this view is that it fails to
account for how the learner arrives at the “elegant” solution. Trouble arises
when we consider the preference for minimalist specification in the context
of the acquisition of the particular I-language L.
Consider a rule of L that makes reference to the natural class containing
/i,e/. In the context of L, this class can be identified, as noted above, as
the class of vowels containing −Back. However, there are two alternatives
that are extensionally equivalent to that analysis. We could equally charac-
terize the class in question as the set of all and only the vowels that contain
−Round, or as the set of all and only the vowels that contain −Back and
−Round. Given the widespread assumption that minimalization of featural
descriptions is desirable, most phonologists would favor one of the first two
characterizations of the natural class, the ones that use a single feature.
However, Gorman and Reiss (2023) argue that this minimization approach
suffers from two problems once the issue is framed in terms of the process
of language acquisition and learnability theory. First, as we see, there is
not a unique solution that satisfies the minimization criterion, since char-
acterization of the class consisting of /i,e/ can be done with either −Back
or −Round. A learning algorithm should, ideally, yield a unique output
grammar for a given course of experience for the learner. No method for
choosing among competing minimal characterizations of natural classes has
been proposed in the literature, and the potential lack of a unique solution
is typically ignored.
Second, Gorman and Reiss (2023) cite a demonstration by Chen and Hulden
(2018), who show that, in general, the search for a minimal characterization of
a natural class is computationally intractable due to combinatoric explosion.
With just one feature, F, we can define three segments and three natural
classes of segments. Treating segments as sets of features, we have the seg-
ments {+F}, {−F} and the underspecified { }. The natural classes that can
be defined are [+F] which has the member {+F}; [−F] which has the member
{−F}; and [ ] which has the members {+F}, {−F} and { }. With three fea-
tures, the number of natural classes is 3^3 = 27; with five features, the number
of classes is 3^5 = 243; and with twenty-four features, the number of natural
classes is 3^24 ≈ 282 billion. So, the search space for finding a minimal charac-
terization of a natural class grows exponentially with the number of features
that we attribute to UG. There is no solution to the problem of searching
through such a space of possibilities in “polynomial time”, since there is no
polynomial with a fixed highest exponent that expresses the size of the search
space. Adopting the P-cognition hypothesis (Frixione 2001; van Rooij et al.
2019; van Rooij 2008), the idea that any feasible model of knowledge acqui-
sition must be solvable in polynomial time, Gorman and Reiss (2023) reject
feature minimization in rules in favor of maximization, for which they pro-
vide a simple algorithm that is tractable and also yields a unique output.
Although the P-cognition hypothesis has not been explicitly discussed in the
phonology literature, the phonology community appears to accept it implicitly,
so Gorman and Reiss’ assumptions are not radical.
Looking merely at alternative extensionally equivalent grammars, phonol-
ogists have favored minimal specification in rules. However, Gorman and Reiss
(2023) are able to leverage the ‘empirical’ mathematical result of Chen and Hulden
(2018) to choose maximal representations of natural classes as more psycho-
logically plausible than the superficially more elegant alternatives. The ap-
proach expands the range of data beyond that of forms in a single language
L, to include the facts of computational complexity theory. By taking more
‘facts’ into account, the maximalist solution is more empirically grounded.
This EAD can be applied whenever a posited model of grammar entails a
learning algorithm that is inconsistent with the P-cognition hypothesis.
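Both halves of the point can be seen in the toy vowel system: minimization ties, while the maximal description is unique and requires no search. This is a sketch of the idea only, under the same assumed feature values as above, not Gorman and Reiss’s actual algorithm.

    from itertools import product

    vowels = {
        "i": {"High": "+", "Back": "-", "Round": "-"},
        "u": {"High": "+", "Back": "+", "Round": "+"},
        "e": {"High": "-", "Back": "-", "Round": "-"},
        "o": {"High": "-", "Back": "+", "Round": "+"},
    }
    FEATURES = ["High", "Back", "Round"]
    target = frozenset("ie")   # the class /i,e/ discussed in the text

    def extension(desc):
        return frozenset(v for v, fs in vowels.items()
                         if all(fs[f] == val for f, val in desc.items()))

    # Search: every description whose extension is exactly /i,e/.
    matches = []
    for values in product(["+", "-", None], repeat=3):
        desc = {f: v for f, v in zip(FEATURES, values) if v is not None}
        if extension(desc) == target:
            matches.append(desc)

    shortest = min(len(d) for d in matches)
    print([d for d in matches if len(d) == shortest])
    # Two one-feature ties, {'Back': '-'} and {'Round': '-'}: minimization
    # does not yield a unique grammar.

    # Maximization: intersect the feature values shared by the class
    # members; computed directly, with no search, and necessarily unique.
    shared = {f: vowels["i"][f] for f in FEATURES
              if len({vowels[v][f] for v in target}) == 1}
    print(shared)   # {'Back': '-', 'Round': '-'}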
5 Conclusion
The use of the Empirical Argumentation Devices laid out above is standard
practice among working linguists, even if we do not have standard labels for
these methods. I have suggested that the EADs involve observation and anal-
ysis of linguistic phenomena in the context of an assumption or postulate of
Universal Grammar. Just as Newton’s postulate of universal gravitation led
him to find links between falling apples, tides and planetary orbits, linguists
are able to see regularities across human languages by virtue of the postulate
of UG. UG is thus best conceived not as a theory or a tentative hypothesis,
but as a background assumption that defines an empirical domain, and takes
on the status of the null hypothesis. It is the adoption of the UG postulate
that licenses us to treat empirical data from each language as potentially
bearing on the analysis of all other languages. Anyone who denies UG has
no grounds for importing analytic categories from one language to the study
of another. In this sense, the Chomskyan, UG-based perspective is explicitly
more richly empirical than competing theoretical approaches.
We can also compare armchair linguistics to lab approaches (to use my
sloppy term). A survey of recent job ads for linguistics faculty positions
shows a preference for lab linguistics—corpus linguistics, statistical methods
and experimental approaches are doing quite well in terms of job postings,
without any requirement for a firm basis in linguistic theory. In fact, many
of these jobs will be filled by candidates holding degrees in computer sci-
ence, psychology or other fields. In my opinion, this is in large part due to
an insecurity among many theoretical linguists—they don’t view their own
field as an empirical science, and they think that the experimental methods
of statisticians, psychologists and neuroscientists will bring linguistics more
credibility as ‘real’ science.
This insecurity on the part of some theoretical linguists is mirrored by lab
linguists’ scorn for armchair work. As noted above, lab linguistics is critically
dependent on what arises ex cathedra, since the categories of language are not
manifested directly in ‘raw data’ (Hammarberg, 1981). It is a commonplace
that word boundaries, and thus words, are the output of speech perception
and other cognitive modules, not definable by reference to acoustic signals:
there are silent parts within words, for example inside the [sp] cluster in
a word like spot, and there is typically no silent part between words in a
sentence. Concerning the level of the syllable, Chomsky (2015, p.126) says
that “No one is so deluded as to believe that there is a mind-independent
object corresponding to the internal syllable [ba], some construction from
motion of molecules perhaps, which is selected when I say [ba] and when
you hear it.” Moving to more abstract categories like wh-elements and NPIs,
the categories become no more concrete (physical). Lasnik et al. (2000, p. 3)
comment on the abstractness of linguistic categories at all levels:
Nonetheless, we have seen that such abstract (mental) categories can indeed
be fruitfully studied—hypotheses about them can be formulated, tested and
revised and retested. The fact that the categories are abstract, that is, men-
tal, does not mean that they are not real: “Linguistic theory is mentalistic,
since it is concerned with discovering a mental reality underlying actual be-
havior” (Chomsky, 1965).
In addition to confusing anti-empiricist with anti-empirical, Peter Norvig,
in the blog cited above, says that
“I can’t imagine Laplace saying that observations of the plan-
ets cannot constitute the subject-matter of orbital mechanics or
Maxwell saying that observations of electrical charge cannot con-
stitute the subject-matter of electromagnetism”
References
Bale, Alan, and Charles Reiss. 2018. Phonology: A formal introduction.
Cambridge, MA: MIT Press.
Chomsky, Noam. 1966. Cartesian linguistics. New York: Harper & Row.
Chomsky, Noam. 1986. Knowledge of language: Its nature, origin, and use.
Westport, CT: Praeger.
Chomsky, Noam. 2000a. Language as a natural object. In New horizons in the
study of language and mind, 106–133. Cambridge: Cambridge University
Press.
Chomsky, Noam. 2014. Aspects of the theory of syntax. 50th anniversary edition. Cambridge, MA: MIT Press.
Chomsky, Noam. 2015. What kind of creatures are we? New York: Columbia
University Press.
Dąbrowska, Ewa. 2015. What exactly is universal grammar, and has anyone
seen it? Frontiers in Psychology 6:852.
Dupre, Gabriel, Ryan Nefdt, and Kate Stanton, eds. 2024. Oxford handbook
of philosophy of linguistics. Oxford University Press.
Epstein, Sam, and Norbert Hornstein. 2005. Letter on ‘the future of lan-
guage’. Language 81:3–6.
Gallistel, C. Randy, and Adam Philip King. 2009. Memory and the compu-
tational brain: Why cognitive science will transform neuroscience. Chich-
ester, West Sussex, UK: Wiley-Blackwell.
Gorman, Kyle, and Charles Reiss. 2023. Maximal feature specification is fea-
sible; minimal feature specification is not. URL lingbuzz/007296, hand-
out for talk at GLOW 46 in Vienna, April 2023.
Green, Christopher D, and John Vervaeke. 1997. But what have you done for
us lately?: Some recent perspectives on linguistic nativism. In The future
of the cognitive revolution, ed. David Martel Johnson and Christina E.
Erneling, 149–163. Oxford University Press.
Gualmini, Andrea. 2014. The ups and downs of child language: Experimental
studies on children’s knowledge of entailment relationships and polarity
phenomena. Routledge.
Hammarberg, Robert. 1981. The cooked and the raw. Journal of Information
Science 3:261–267.
Hauser, Marc D, Noam Chomsky, and W Tecumseh Fitch. 2002. The faculty
of language: what is it, who has it, and how did it evolve? Science
298:1569–1579.
Henry, Alison. 1995. Belfast English and Standard English: Dialect variation
and parameter setting. Oxford University Press, USA.
Ibbotson, Paul, and Michael Tomasello. 2016. Language in a new key. Sci-
entific American 315:70–75.
Jackendoff, Ray. 1994. Patterns in the mind: Language and human nature.
New York: Basic Books.
Norvig, Peter. 2011. On Chomsky and the two cultures of statistical learning.
http://norvig.com/chomsky.html.
Pylyshyn, Zenon W. 2003. Seeing and visualizing: It’s not what you think.
Cambridge, Mass.: MIT Press.
Reiss, Charles, and Veno Volenec. 2022. Conquer primal fear: Phonological
features are innate and substance free. Canadian Journal of Linguistics
67:581–610.
Smith, Brian Cantwell. 1996. On the origin of objects. Cambridge, MA: MIT
Press.
Torrego, Esther. 1984. On inversion in Spanish and some of its effects. Lin-
guistic Inquiry 15:103–129.
van Rooij, Iris. 2008. The tractable cognition thesis. Cognitive Science
32:939–984.
van Rooij, Iris, Mark Blokpoel, Johan Kwisthout, and Todd Wareham. 2019.
Cognition and intractability: A guide to classical and parameterized complexity theory. Cambridge University Press.
Volenec, Veno, and Charles Reiss. 2020. Formal generative phonology. Rad-
ical: A Journal of Phonology 2:1–148.