Phonological Typology
Phonology and Phonetics
Edited by
Aditi Lahiri
Volume 23
ISBN 978-3-11-044970-9
e-ISBN (PDF) 978-3-11-045193-1
e-ISBN (EPUB) 978-3-11-044992-1
ISSN 1861-4191
www.degruyter.com
Contents
Preface
Contributors
Larry M. Hyman
What is phonological typology?
Frans Plank
An implicational universal to defy: typology ⊃ ¬phonology ≡ phonology ⊃ ¬typology ≡ ¬(typology ∧ phonology) ≡ ¬typology ∨ ¬phonology
Paul Kiparsky
Formal and empirical issues in phonological typology
Ian Maddieson
Is phonological typology possible without (universal) categories?
Jeffrey Heinz
The computational nature of phonological generalizations
Aditi Lahiri
Predicting universal phonological contrasts
Ellen Broselow
Laryngeal contrasts in second language phonology
Tomas Riad
The phonological typology of North Germanic accent
Carlos Gussenhoven
Prosodic typology meets phonological representations
Subject Index
Language Index
Author Index
Preface
This volume seeks to bring together two enterprises, typology and phonology, which have often gone their separate ways, even when apparently addressing each other. Although phonology has always figured in the many approaches to typology, it has figured far less prominently than morphology and syntax. Particularly in recent times, during which phonological theory has flourished in many colours, the phonology done in typological circles has either centred on segment inventories and basic phonotactics or limited itself to crudely categorising labels such as “syllable- vs. stress-timed languages”, something which would barely pass muster with phonological theorists, whose concerns have been “deeper” and usually more formal. On the other hand, despite the numerous and diverse languages that typically inform phonological theorising, typological questions as such were rarely of major consequence there, if raised at all.
If one sees value, or at any rate promise, in typology, the research programme for discovering and accounting for order in diversity, and if one sees no principled reasons to doubt that linguistic diversity is about as orderly in phonology as in syntax and morphology (or at any rate inflection), this state of affairs is regrettable. Since this is how we see it, we felt obliged to lend a hand towards improving relationships.
In organised typology, the success of our efforts was limited. Though we have long been involved in various capacities in the Association for Linguistic Typology (ALT), we and a few fellow campaigners have not been able to raise the profile of phonology significantly at ALT’s biennial conferences or in its other activities. The two of us hit rock bottom when a workshop to boost phonology in typology, which we suggested as part of an ALT conference a few years ago, did not find favour with the programme committee, for reasons we ourselves found unconvincing. Reassuringly, the phonological record of Linguistic Typology, the journal we have helped to run for two decades, is better, but still comparatively modest.
Perhaps we had gotten hold of the wrong end of the stick in trying to proselytise in typological circles. Changing tack, for the present volume we sought out linguists who define themselves, and are perceived, first and foremost as phonologists. Though they rarely turn up at typological get-togethers and have not so far published in LT, we were still expecting typological awareness rather than complete innocence, because our remit for them was as follows: Do typology! Present a sample piece of work appropriate for a workshop and subsequent publication, as you think it can and ought to be done in state-of-the-art phonology!
And here we go. After two scene-setting and background-providing contributions from the editors (Hyman; Plank), the phonologists assembled here, typologically aware or indeed expert, address metatheoretical issues of what it means to do phonological typology (Kiparsky; Maddieson). Through different methodologies, they explore the possibilities of and limitations on segmental alternations (Heinz; Brohan & Mielke); they seek system, not in segment inventories as such, but in featural contrasts, and find variability, though not randomness, of contrastive systems in language change and language acquisition (Lahiri; Dresher, Harvey, & Oxford; Broselow); and they seek limits on diversity and change in the microvariation of tonal accent systems in North Germanic (Riad). An effort at conceptual clarification of what it is that can be typologised in prosodic typology – segments, constituents, their alignments – concludes the volume (Gussenhoven).
Typology’s subject matter is vast: for EVERYTHING about language – units, paradigmatic systems, rules for and constraints on forming prosodic constituents (syllables, feet . . . anything syntagmatically complex, with or without meaning) – we need to ascertain whether it is variable or invariable, and if it is variable we want to know whether it varies independently or co-varies with anything else. Despite the extensive knowledge accumulated in phonology over the past century, phonological typologists will have their work cut out for them for an unforeseeable time to come. The present collection will have served its purpose if it gives them encouragement and guidance. Naturally, phonological awareness or indeed expertise of a none-too-basic nature will be an asset for those planning to join in this enterprise: in phonology no less than in syntax and morphology, generalising is futile if the particulars over which one generalises are inadequately analysed.
This volume derives from a workshop held at Somerville College, University of Oxford, on 11–13 August 2013 (funded by a European Research Council Advanced Investigator Grant to Aditi Lahiri). Subsequent to the preparation of this volume, a survey monograph of the same title appeared, by Matthew K. Gordon (Oxford University Press, 2016), adding another voice to the small chorus of performers striving to bring the adjective into harmony with the noun (or vice versa) of our shared title. A big thanks from all of us who were present goes to Aditi Lahiri, for there is no better host. Nor is there a better series editor for this sort of thing, cruel but fair with her editors and authors: a thank-you to our dear friend and colleague Aditi Lahiri in this capacity, too.
We might as well dedicate this collection to her on the occasion of a round birthday that must not go unmarked . . .
Larry M. Hyman
Frans Plank
Contributors
Anthony Brohan
Google, Mountain View, CA 94043, USA
Ellen Broselow
Department of Linguistics, Stony Brook
University, Stony Brook, NY 11794-4376, USA
[email protected]
B. Elan Dresher
Department of Linguistics, University of
Toronto, Toronto, Ontario, Canada M5S 3G3
[email protected]
Carlos Gussenhoven
Department of Linguistics, Radboud
University, PO Box 9103, 6550 HD Nijmegen, The Netherlands
[email protected]
Christopher Harvey
Department of Linguistics, University of
Toronto, Toronto, Ontario, Canada M5S 3G3
[email protected]
Jeffrey Heinz
Department of Linguistics and Institute of
Advanced Computational Science, Stony Brook
University, Stony Brook, NY 11794-4376, USA
[email protected]
Larry M. Hyman
Department of Linguistics, University of
California at Berkeley, Berkeley, CA 94704, USA
[email protected]
Paul Kiparsky
Department of Linguistics, Stanford University,
Stanford, CA 94305-2150, USA
[email protected]
Aditi Lahiri
Faculty of Linguistics, Philology and Phonetics,
University of Oxford, Oxford OX1 2HG, UK
[email protected]
Ian Maddieson
Department of Linguistics, University of New
Mexico, Albuquerque, NM, USA
[email protected]
Jeff Mielke
Department of English, North Carolina State
University, Raleigh, NC 27695-8105, USA
[email protected]
Will Oxford
Department of Linguistics, University of
Manitoba, Winnipeg, Manitoba, Canada R3T 5V5
[email protected]
Frans Plank
Somerville College, University of Oxford,
Oxford OX2 6HD, UK
[email protected]
Tomas Riad
Institutionen för svenska och flerspråkighet,
Stockholms universitet, 106 91 Stockholm,
Sweden
[email protected]
Larry M. Hyman
What is phonological typology?
Whatever typology is, it is on a roll at the moment and likely to continue.
(Nichols 2007: 236)
Nichols (2007) concludes that most typologists do not exploit large databases, that many (including herself) are not functionalists, and, finally, that implicational statements are “a convenient format for presenting and testing results [. . .] [but not] the be-all and end-all of typology”.
In fact, typologists disagree on a number of issues, including these:
what we call typology is not properly a subfield of linguistics but is simply framework-
neutral analysis and theory plus some of the common applications of such analysis
(which include crosslinguistic comparison, geographical mapping, cladistics, and
reconstruction). (Nichols 2007: 236)
the goal of typology is to uncover universals of language, most of which are universals
of grammatical variation. (Croft 2003: 200)
The hypothesis that typology is of theoretical interest is essentially the hypothesis that the ways in which languages differ from each other are not entirely random, but show various types of dependencies. (Greenberg 1974: 54)
The systems in (2a, b) are triangular vowel systems with underlying front unrounded and back rounded vowels, while (2c) represents a vertical central vowel system with front and round features restricted to consonants (to which the centralized vowels typically assimilate). (2d) represents a vowel harmony system in which some vowels are specified, others unspecified, for Front and Round. Finally, as in the case of nasality, Front and Round can be prosodies on whole morphemes or words. Recall from (1) that some languages lack nasality entirely. The situation is different for Front and Round: while two languages (Qawasqar and Yessan-Mayo) out of the 451 languages in the UPSID database (Maddieson & Precoda 1990; Maddieson 1991) lack a front vowel, both have the palatal glide /y/. Of the four languages (Jaqaru, Alawa, Nunggubuyu, and Nimboran) which lack a round vowel, only Nimboran also lacks the labiovelar glide /w/ and hence does not exploit the feature Round at all. (It is likely that a language will turn up that in parallel fashion does not exploit the feature Front.) No language has thus far been cited which fails to phonologize both Front and Round.
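Claims of this kind are straightforwardly checkable once segment inventories are machine-readable. The following is a purely illustrative sketch, using a few invented toy inventories rather than the actual UPSID records, of how one might search a database for languages that fail to exploit Front or Round:

```python
# Illustrative sketch only: the inventories below are invented stand-ins,
# not records from the actual UPSID database.
FRONT = {"i", "e", "y", "j"}   # segments bearing the feature Front (incl. palatal glide)
ROUND = {"u", "o", "y", "w"}   # segments bearing the feature Round (incl. labiovelar glide)

inventories = {
    "ToyLang-A": {"i", "a", "u", "p", "t", "k"},
    "ToyLang-B": {"ɨ", "ə", "a", "w", "p", "t", "k"},  # no front vowel, but /w/ present
    "ToyLang-C": {"i", "ɨ", "a", "j", "p", "t", "k"},  # no round vowel, but /j/ present
}

def exploits(segments, feature_class):
    """True if at least one segment of the inventory falls in the feature class."""
    return bool(segments & feature_class)

for lang, segs in sorted(inventories.items()):
    print(f"{lang}: Front={exploits(segs, FRONT)}, Round={exploits(segs, ROUND)}")
```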
This does not necessarily mean that there will be a total lack of nasality, palatality, or rounding in phonetic outputs. Examples such as (1) and (2) illustrate that phonological typology cannot be about surface outputs alone (for which we might distinguish PHONETIC typology). One has to make a choice of level, which is particularly problematic in the case of tone systems. For example, Ik (Heine 1993) and Kom (Hyman 2005) both have underlying /H, L/ but a third [M] (mid tone) on the surface, derived by rule: in Ik a H raises an adjacent L to [M], while in Kom a L lowers an adjacent H to [M]. Since the trigger H may drop out after conditioning L tone raising in Ik, and similarly the trigger L can drop out after triggering H tone lowering in Kom, these languages have two underlying-contrastive tone heights /H, L/, but three surface-contrastive tone heights [H, M, L]. Are these 2- or 3-height systems? The only adequate approach is to typologize on the basis of the relation between underlying and surface contrastive elements, i.e., both Ik and Kom have a 2→3 tone-height system.
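To make the 2→3 relation concrete, here is a minimal sketch; the linear direction of each rule and the omission of the subsequent trigger deletion are simplifying assumptions made only for illustration, not claims about the languages’ actual grammars:

```python
# Illustrative sketch: two underlying tone heights /H, L/ yield three surface
# heights [H, M, L]. Rule directions are assumed for the sake of the demo;
# the later deletion of the trigger (absolute neutralization) is omitted.

def ik_style(tones):
    """Ik-style raising: /L/ surfaces as [M] when the next tone is /H/."""
    return ["M" if t == "L" and i + 1 < len(tones) and tones[i + 1] == "H" else t
            for i, t in enumerate(tones)]

def kom_style(tones):
    """Kom-style lowering: /H/ surfaces as [M] when the preceding tone is /L/."""
    return ["M" if t == "H" and i > 0 and tones[i - 1] == "L" else t
            for i, t in enumerate(tones)]

print(ik_style(["L", "H", "L"]))   # ['M', 'H', 'L']  -- three surface heights
print(kom_style(["L", "H", "H"]))  # ['L', 'M', 'H']  -- three surface heights
```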
A second reason to avoid labeling language types is that such labels give the impression that there is a unique taxonomy. To illustrate, consider the hypothetical exchange in (5) over whether German should be classified with English or with French on the basis of its vowel system:
While Beckman & Venditti find the Mandarin and English L+H similarities significant, compare the more usual view of Gussenhoven (2007: 256) concerning the similar H+L in Japanese and English:
While phonologically comparable, the pitch accents of Japanese and English have very
different morphological statuses. In Japanese, they form part of the underlying
phonological specification of morphemes, along with the vowels and consonants.
Intonational pitch accents are morphemically independent of the words they come with,
and are chiefly used to express the information status of the expression. The fact that
the English example [. . .] seems to have an accentuation similar to the Japanese
example [. . .] is ENTIRELY ACCIDENTAL. (My emphasis; cf. Hyman 2012)
(iii) a language that marks X more than certain other languages, e.g., “tone
language” vs. “pitch-accent language”, “syllable language” vs. “word
language”:
A pitch-accent system is one in which pitch is the primary correlate of prominence and there are significant constraints on the pitch patterns for words [. . .] (Bybee et al. 1998: 277)
A syllable language is one which dominantly refers to the syllable, a word language is
one which dominantly refers to the phonological word in its phonological make-up.
(Auer 1993: 91)
[. . .] if we push the use of accents to its limits (at the expense of using tones), this implies allowing unaccented words (violating obligatoriness) and multiple accents (violating culminativity). In this liberal view on accent, only languages that have more than a binary pitch contrast are necessarily tonal [. . .] (van der Hulst 2011: 13)
References
Aissen, Judith. 2003. Differential object marking: Iconicity vs. economy. Natural
Language & Linguistic Theory 21. 435–483.
Auer, Peter. 1993. Is a rhythm-based typology possible? A study of the role of
prosody in phonological typology. KontRI Working Paper No. 21. University of
Konstanz.
Beckman, Mary E. & Jennifer J. Venditti. 2010. Tone and intonation. In William J.
Hardcastle & John Laver (eds.), The handbook of phonetic sciences, 603–
652. Oxford: Blackwell.
Beckman, Mary E. & Jennifer J. Venditti. 2011. Intonation. In John Goldsmith,
Jason Riggle, & Alan C. L. Yu (eds.), The handbook of phonological theory,
485–532. Oxford: Blackwell.
Bickel, Balthasar. 2003. Referential density in discourse and syntactic typology.
Language 79. 708–736.
Bickel, Balthasar. 2007. Typology in the 21st century: Major current developments.
Linguistic Typology 11. 239–251.
Bossong, Georg. 1998. Le marquage différentiel de l’objet dans les langues
d’Europe. In Jack Feuillet (ed.), Actance et valence dans les langues de
l’Europe, 193–258. Berlin: Mouton de Gruyter.
Buckley, Eugene. 2000. On the naturalness of unnatural rules. Proceedings from
the Second Workshop on American Indigenous Languages. UCSB Working
Papers in Linguistics 9. 1–14.
Bybee, Joan L., Paromita Chakraborti, Dagmar Jung, & Joanne Scheibman. 1998.
Prosody and segmental effect: Some paths of evolution for word stress.
Studies in Language 22. 267–314.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York:
Harper & Row.
Clark, Mary. 1988. An accentual analysis of Zulu. In van der Hulst & Smith (eds.),
51–79.
Clements, G. N. & Sylvester Osu. 2003. Ikwere nasal harmony in typological
perspective. In Patrick Sauzet & Anne Zribi-Hertz (eds.), Typologie des
langues d’Afrique et universaux de la grammaire, vol. 2, 70–95. Paris:
L’Harmattan.
Clements, G. N. & Annie Rialland. 2008. Africa as a phonological area. In Bernd
Heine & Derek Nurse (eds.), A linguistic geography of Africa, 36–85.
Cambridge: Cambridge University Press.
Cohn, Abigail. 1993. A survey of the phonology of the feature [+nasal]. Working
Papers of the Cornell Phonetics Laboratory 8. 141–203.
Corbett, Greville G. 2007. Canonical typology, suppletion, and possible words.
Language 83. 8–42.
Croft, William. 2003. Typology and universals. Second edition. Cambridge:
Cambridge University Press.
Croft, William. 2007. Typology and linguistic theory in the past decade: A personal
view. Linguistic Typology 11. 79–91.
Dixon, R. M. W. 1994. Ergativity. Cambridge: Cambridge University Press.
Dixon, R. M. W. & Alexandra Y. Aikhenvald (eds.). 2000. Changing valency: Case
studies in transitivity. Cambridge: Cambridge University Press.
Donohue, Mark. 1997. Tone in New Guinea languages. Linguistic Typology 1.
347–386.
Donegan, Patricia & David Stampe. 1983. Rhythm and the holistic organization of
language structure. In F. Richardson (ed.), CLS 19, Parasession on the
interplay of phonology, morphology & syntax, 337–353. Chicago: Chicago
Linguistic Society.
Dressler, Wolfgang U. 1979. Reflections on phonological typology. Acta Linguistica
Academiae Scientiarum Hungaricae 29. 259–273.
Dryer, Matthew. 1986. Primary objects, secondary objects, and antidative.
Language 62. 808–845.
Evans, Nicholas & Stephen C. Levinson. 2010. Time for a sea-change in
linguistics: Response to comments on “The myth of language universals”.
Lingua 120. 2733–2758.
Gordon, Matthew. 2007. Typology in optimality theory. Language and Linguistics
Compass 1(6). 750–769.
Gordon, Matthew K. 2016. Phonological typology. Oxford: Oxford University Press.
Gouskova, Maria. 2013. Review of Marc van Oostendorp, Colin J. Ewen, Elizabeth
Hume, & Keren Rice (eds.). 2011. The Blackwell companion to phonology
(Malden, MA: Wiley-Blackwell). Phonology 30. 173–179.
Greenberg, Joseph H. 1948. The tonal system of Proto-Bantu. Word 4. 196–208.
Greenberg, Joseph H. 1958. The labial consonants of Proto-Afro-Asiatic. Word 14.
295–302.
Greenberg, Joseph H. 1962. Is the vowel-consonant dichotomy universal? Word
18. 73–81.
Greenberg, Joseph H. 1963. Vowel harmony in African languages. Actes du
Second Colloque Internationale de Linguistique Negro-Africaine, 33–38.
Dakar: Université de Dakar, West African Languages Survey.
Greenberg, Joseph H. 1966. Synchronic and diachronic universals in phonology.
Language 42. 508–517.
Greenberg, Joseph H. 1970. Some generalizations concerning glottalic
consonants, especially implosives. International Journal of American
Linguistics 36. 123–145.
Greenberg, Joseph H. 1974. Language typology: A historical and analytic
overview. The Hague: Mouton.
Greenberg, Joseph H. 1978. Some generalizations concerning initial and final
consonant clusters. Linguistics 18. 5–34.
Greenberg, Joseph H., James J. Jenkins, & Donald J. Foss. 1967. Phonological
distinctive features as cues in learning. Journal of Experimental Psychology
77. 200–205.
Greenberg, Joseph H. & Dorothea Kaschube. 1976. Word prosodic systems: A
preliminary report. Working Papers in Language Universals 20. 1–18.
Gussenhoven, Carlos. 2007. The phonology of intonation. In Paul de Lacy (ed.),
The Cambridge handbook of phonology, 253–280. Cambridge: Cambridge
University Press.
Hagège, Claude. 1992. Morphological typology. In Oxford international
encyclopedia of linguistics, vol. 3, 7–8. Oxford: Oxford University Press.
Halle, Morris & Jean-Roger Vergnaud. 1987. An essay on stress. Cambridge, MA:
MIT Press.
Haspelmath, Martin. 2006. Against markedness (and what to replace it with).
Journal of Linguistics 42. 25–70.
Hammond, Michael. 2006. Phonological typology. Encyclopedia of Language & Linguistics. Online. http://www.sciencedirect.com/science/article/pii/B0080448542000468
Hayes, Bruce. 1995. Metrical stress theory: Principles and case studies. Chicago:
University of Chicago Press.
Heine, Bernd. 1993. Ik dictionary. Köln: Rüdiger Köppe Verlag.
Hockett, Charles F. 1955. A manual of phonology. Memoir 11, International Journal
of American Linguistics.
Hualde, José Ignacio. 2006. Remarks on word-prosodic typology. Proceedings of
the 32nd Annual Meeting of the Berkeley Linguistics Society, 157–174.
Hulst, Harry van der. 2011. Pitch accent systems. In van Oostendorp, Ewen,
Hume, & Rice (eds.), vol. 2, #45.
Hulst, Harry van der, Rob Goedemans, & Ellen van Zanten (eds). 2010. A survey
of word accentual patterns in the languages of the world. Berlin: De Gruyter
Mouton.
Hyman, Larry M. 1977. On the nature of linguistic stress. In Larry M. Hyman (ed.),
Studies in stress and accent, 37–82. Southern California Occasional Papers
in Linguistics 4. Department of Linguistics, University of Southern California.
Hyman, Larry M. 2005. Initial vowel and prefix tone in Kom: Related to the Bantu
Augment? In Koen Bostoen & Jacky Maniacky (eds.), Studies in African
comparative linguistics, 313–341. Köln: Rüdiger Köppe Verlag.
Hyman, Larry M. 2007. Where’s phonology in typology? Linguistic Typology 11.
265–271.
Hyman, Larry M. 2009. How (not) to do phonological typology: The case of pitch-
accent. Language Sciences 31. 213–238.
Hyman, Larry M. 2011. Tone: Is it different? In John Goldsmith, Jason Riggle, &
Alan Yu (eds.), The handbook of phonological theory, 2nd edition, 197–239.
Oxford: Blackwell.
Hyman, Larry M. 2012. In defense of prosodic typology: A response to Beckman &
Venditti. Linguistic Typology 16. 341–385.
Hyman, Larry M. 2015. Towards a canonical typology of prosodic systems. In
Esther Herrera Zendejas (ed.), Tono, acento y estructuras métricas en
lenguas mexicanas, 13–38. México: El Colegio de México.
Kenstowicz, Michael & Charles Kisseberth. 1977. Topics in phonological theory.
New York: Academic Press.
Kiparsky, Paul. 1968. Linguistic universals and linguistic change. In Emmon Bach
& Robert T. Harms (eds.), Universals in linguistic theory, 171–202. New York:
Holt, Rinehart & Winston.
Kiparsky, Paul. 2008. Universals constrain change; change results in typological
generalizations. In Jeff Good (ed.), Linguistic universals and language
change, 23–53. Oxford: Oxford University Press.
Maddieson, Ian. 1991. Testing the universality of phonological generalizations with
a phonetically specified segment database: Results and limitations. Phonetica
48. 193–206.
Maddieson, Ian & Kristin Precoda. 1990. Updating UPSID. UCLA Working Papers
in Phonetics 74. 104–111.
McCarthy, John J. 2002. A thematic guide to optimality theory. Cambridge:
Cambridge University Press.
Newmeyer, Frederick J. 2005. Possible and probable languages: A generative
perspective on linguistic typology. Oxford: Oxford University Press.
Nichols, Johanna. 1986. Head-marking and dependent-marking grammar. Language 62. 56–119.
Nichols, Johanna. 1992. Language diversity in space and time. Chicago:
University of Chicago Press.
Nichols, Johanna. 2007. What, if anything, is typology? Linguistic Typology 11.
231–238.
Oostendorp, Marc van, Colin J. Ewen, Elizabeth Hume, & Keren Rice (eds.). 2011.
The Blackwell companion to phonology. Malden, MA: Wiley-Blackwell.
Plank, Frans. 1998. The co-variation of phonology with morphology and syntax: A
hopeful history. Linguistic Typology 2. 195–230.
Plank, Frans. 2001. Typology by the end of the 18th century. In Sylvain Auroux et
al. (eds.), History of the Language Sciences: An International Handbook on
the Evolution of the Study of Language from the Beginnings to the Present,
vol. 2, 1399–1414. Berlin: Walter de Gruyter.
Prince, Alan & Paul Smolensky. 1993/2004. Optimality theory: Constraint
interaction in generative grammar. Malden, MA: Blackwell.
Samuels, Bridget D. 2011. Phonological architecture: A biolinguistic perspective.
Oxford: Oxford University Press.
Sapir, Edward. 1925. Sound patterns in language. Language 1. 37–51.
Seiler, Hansjakob. 1979. Language universals research, questions, objectives, and
prospects. Acta Linguistica Academiae Scientiarum Hungaricae 29. 353–367.
Slobin, Dan I. 2004. The many ways to search for a frog: Linguistic typology and
the expression of motion events. In Sven Strömqvist & Ludo Verhoeven
(eds.), Relating events in narrative, volume 2: Typological and contextual
perspectives, 219–257. Mahwah, N.J.: Erlbaum.
Talmy, Leonard. 1985. Lexicalization patterns: Semantic structure in lexical forms.
In Timothy Shopen (ed.), Language typology and linguistic description, volume
3: Grammatical categories and the lexicon, 57–149. Cambridge: Cambridge
University Press.
Trask, R. L. 1996. A dictionary of phonetics and phonology. London: Routledge.
Trubetzkoy, Nikolai. 1969 [1939]. Principles of phonology. Translated by
Christiane A. M. Baltaxe. Berkeley and Los Angeles: University of California
Press.
Vajda, Edward. 2001. Test materials dated August 17, 2001. http://pandora.cii.wwu.edu/vajda/ling201/test2materials/Phonology3.htm.
Whaley, Lindsay J. 1997. Introduction to typology. Thousand Oaks, California:
Sage Publications.
Frans Plank
An implicational universal to defy:
typology ⊃ ¬phonology ≡ phonology ⊃ ¬typology ≡ ¬(typology ∧ phonology) ≡ ¬typology ∨ ¬phonology
1 Introduction
The purpose of this chapter is twofold: first, to assess how typology,
unceremoniously introduced in §2, has been dealing with phonology (§3),
from early days (§3.1) to the present (§3.2); second, focusing on phonology
(§4), to ask about an imbalance of phonology and syntax-inflection in
general (§4.1) and about typological concerns in phonology itself (§4.2).
Looked at from both angles, the phonology–typology relationship is seen to
be special, and the impression is confirmed that, in comparison especially
with syntax, phonological typology as well as typological phonology are
behindhand in the quest for system in linguistic diversity. (Though not all is
well about the syntax–typology relationship, either.) Explanations are
suggested in terms of the substance of subject matters and of the attitudes to
description and theory in different subcommunities in linguistics.
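The chain of equivalences in the chapter’s title is elementary propositional logic; spelled out, with T standing for “typology” and P for “phonology” (labels used here only for brevity):

```latex
\[
(T \supset \neg P)
  \;\equiv\; (P \supset \neg T)     % contraposition
  \;\equiv\; \neg(T \wedge P)       % A \supset \neg B  iff  \neg(A \wedge B)
  \;\equiv\; \neg T \vee \neg P     % De Morgan
\]
```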
3 The evidence
3.2.4 Databases
Since data collections are no longer jealously guarded as the collector’s private property, online databases have become an increasingly popular research tool in typology. Among the thirty or so world-wide typological databases I am aware of as recently active, the majority cover domains from syntax and inflection. But the majority is not as overwhelming as one might have expected, since about a dozen are on phonology or include substantial phonological data:
– UPSID: UCLA Phonological Segment Inventory Database
http://www.linguistics.ucla.edu/faciliti/sales/software.htm;
http://web.phonetik.uni-frankfurt.de/upsid_info.html
– LAPSyd: Lyon-Albuquerque Phonological Systems Database
http://www.lapsyd.ddl.ish-lyon.cnrs.fr/lapsyd/
– PHOIBLE Online: Phonetics Information Base and Lexicon
http://phoible.org/
– P-base
http://pbase.phon.chass.ncsu.edu/
– World Phonotactics Database
http://phonotactics.anu.edu.au/
– StressTyp2
http://st2.ullet.net/?
– XTone: Cross-Linguistic Tonal Database
http://xtone.linguistics.berkeley.edu/index.php
– Metathesis in Language
http://metathesisinlanguage.osu.edu/database.cfm
– Language Typology Database
http://www.unicaen.fr/typo_langues/index.php?malang=gb
– tds: Typological Database System
http://languagelink.let.uu.nl/tds/main.html
– WALS Online: World Atlas of Language Structures
http://wals.info/
– SignPhon: A Phonological Database for Sign Languages
http://www.ru.nl/sign-lang/projects/completed-projects/signphon/
For some time now, prospective typologists have also been able to benefit from summer (or autumn or winter) schools. Among the earliest I am aware of as exclusively devoted to this subject were the typology schools of the Deutsche Gesellschaft für Sprachwissenschaft at Mainz/Germany in 1998, of the Moscow Typological Circle in or near Moscow in 1998, 2000, 2002, and 2005, and of ALT in Cagliari/Sardinia in 2003; the most recent, run by the Fédération Typologie et Universaux Linguistiques of the CNRS, will be at the Ile de Porquerolles/France in the autumn of 2016. There have always been one or even two phonology courses at these schools, but one or two dozen were on offer for those typology students keener on other matters, such as inflection and syntax, methodology, and language/family surveys.15
Seeking guidance beyond the textbook level, apprentice typologists, and
whoever else is in need of ready reference about this field, can now also
consult specialised handbooks – currently these two:
– Haspelmath, Martin, Ekkehard König, Wulf Oesterreicher, & Wolfgang Raible (eds.). 2001/02. Language typology and language universals. 2 vols. Berlin: Walter de Gruyter.
– Song, Jae Jung (ed.). 2010. The Oxford handbook of linguistic typology. Oxford: Oxford University Press.
With its two weighty tomes, the first is almost a compendium of linguistics
in its entirety; its section on “Phonology-based typology” (5 chapters), a
chapter on syllable/accent-counting as one “salient typological parameter”,
and occasional passing references to sound matters add up to some 90 pages
of phonology, out of 1,800. The second has one out of 30 chapters devoted
to segment/phoneme inventories, which yields an even worse proportion of
phonology (if this is what this chapter is, and not phonetics) to non-
phonological typology.
A handbook of sorts, too, is this set of three volumes – with a great deal
of morphology subsumed under “syntax”, but with no companion set
Language typology and phonological description:
– Shopen, Timothy (ed.). 1985. Language typology and syntactic
description. Vol. 1: Clause structure. Vol. 2: Complex constructions.
Vol. 3: Grammatical categories and the lexicon. Cambridge: Cambridge
University Press. (2nd edn., co-edited by Matthew S. Dryer, 2007.)
3.2.6 Specialisation
Few linguists who see themselves and are seen by others as typologists,
whatever further categorisations they might invite, are genuine all-rounders:
some wholly devote themselves to methodology (and might in fact be
statisticians), but most specialise in one structural domain or another – and
inflection and/or syntax specialists far outnumber phonology (and
phonetics) specialists. (Specialisation is not entirely novel in typology: the
chief expertise of a pioneer such as Gabelentz, unlike that of most of his
Neogrammarian colleagues at Leipzig, lay in syntax.) As crown witnesses I
call the five previous presidents of ALT, Bernard Comrie, Marianne
Mithun, Nicholas Evans, Anna Siewierska, Johanna Nichols: as typologists
all are primarily known for their work in syntax and inflection, although
most have a sound component to their work, too. Of the one editor and 27
associate editors who have so far overseen ALT’s journal, LT, one was a
phonologist (Larry Hyman, though also with morphosyntactic work to his
typological credit), one a phonetician (Ian Maddieson), and one divided her
time between phonology and morphosyntax (Joan Bybee); syntax and
inflection were the main expertise of the rest, with one or the other on rare
occasions moonlighting as phonologists (William Croft, Nicholas Evans,
Frans Plank, Martine Vanhove). Further evidence pointing in the same
direction is conveniently gathered from the ALT membership directory
(http://ling-asv.ling.su.se/alt_filer/membership.html): as their “special
interests” members do mention phonology or phonetics as such as well as
particular phonological/ phonetic topics such as tone, nasalisation and other
phonological processes, phonotactics, prosody, sound change, speech
perception; but these figures cannot compete with mentions of syntax and
morphology and particular morphosyntactic topics. Phonologists might of
course be doing their typology elsewhere – a question which we need to
return to (§3.2) before we can conclude that among today’s linguists with
typological interests phonologists are comparatively rare.
4 Reasons why
Undeniably, then, however you look at it, typology has been, and continues
to be, about co-variation and co-evolution in syntax and inflection much
more so than in phonology. Now, what are the reasons for this imbalance?
And is it desirable, and possible, to redress it in future?
A first step towards an answer is to raise a further question: Is typology
special?
Above (§2.2.4) I compared typology to historical linguistics with regard to the amount of phonology one finds at specialised conferences and in specialised journals, concluding, if tentatively, that phonology figures less prominently than morphosyntax there, too. It is probably only among those historical linguists
active in comparative (and internal) reconstruction that phonological
expertise will be at a premium. Making further comparisons with subfields
within linguistics where languages are being studied from some sort of a
selective perspective and with some special ulterior motives, the likelihood
is that a similar imbalance will be encountered. Take psycho- and
neurolinguistics, or sociolinguistics and anthropological linguistics, or
computational linguistics, and syntax (but not necessarily inflection) will
receive more attention than phonology, although the subject matter
supposedly is languages as such and there would not seem to be inherent
reasons for some structural domains being prioritised over others. Perhaps
dialectology is a rare subfield where the preferences among syntax and
phonology(-cum-phonetics) are reversed, with inflection possibly on a par
with phonology (and with the lexicon ahead of both).
If phonology is pitted against syntax in linguistics as a whole, it will again emerge as the loser, although it will probably claim second place, ahead of morphology. Relevant evidence comes from the contents of general linguistics journals (you name them) or the membership lists of learned societies catering for the discipline as a whole (to name some where I made spotchecks: Societas Linguistica Europaea, Linguistic Society of America, Linguistics Association of Great Britain, Philological Society, Deutsche Gesellschaft für Sprachwissenschaft, Société Linguistique de Paris, Società di Linguistica Italiana, Società Italiana di Glottologia, Australian Linguistic Society, Linguistic Society of India): more linguists specialise in, and publish on, syntax than on phonology or indeed morphology.16 As a subset of linguists, then, typologists are not ESPECIALLY averse to phonology: they are boringly average in the way they like and dislike to specialise. If they are special, it is probably in their idiosyncratic partiality to inflectional morphology.
4.1.2 Quantum sufficit
This sample is not entirely random: though differing in many respects, what my dozen grammars have in common is that they were written or co-written by recognised or in fact eminent phonologists.18 They are a rather select group, because grammar-writing is not a common activity of phonologists,19 and one could therefore suspect my sample to be biased in favour of phonology.
Over and above such similarities there is a curious and seemingly trivial difference between introductions to phonology on the one hand and those to morphology and syntax on the other: the former consistently serve up more languages. According to their indices, Odden’s and Gussenhoven & Jacobs’s phonology texts in one way or another make reference to some 150 languages, compared with a little over 100 in Tallerman’s syntax and in Haspelmath (& Sims)’s as well as Lieber’s morphology texts. An early text such as Larry Hyman’s Phonology: Theory and analysis (New York: Holt, 1975) had examples from and analyses of over 80 languages, at a time when introductions to syntax would make ends meet with one (often the author’s own) and Peter Matthews’ Morphology, the first Cambridge Textbook in Linguistics (Cambridge University Press, 1974), got along with a modest 20.
And such an imbalance is not encountered in textbooks alone. The
language index, for example, of the Blackwell Handbook of phonological
theory (edited by John A. Goldsmith, 1995) has 422 entries, while its
companion Handbook of morphology (edited by Andrew Spencer & Arnold
M. Zwicky, 1998), apart from missing the tag “theory” in the title, only has
159 entries for languages and families, incorporated in the subject index.
Major theoretical works in phonology are routinely brimming with
languages, too: to choose almost randomly, only think of Trubetzkoy’s
Grundzüge der Phonologie (Travaux du Cercle Linguistique de Prague 7,
Prague 1939), Wolfgang Dressler’s Morphophonology (Ann Arbor:
Karoma, 1985), John Goldsmith’s Autosegmental and metrical phonology
(Oxford: Blackwell, 1990), Bruce Hayes’ Metrical stress theory (Chicago:
University of Chicago Press, 1995), or Robert Ladd’s Intonational
phonology (Cambridge: Cambridge University Press, 1996). Devoted to a
single language, even Noam Chomsky & Morris Halle’s The sound pattern
of English (New York: Harper & Row, 1968) has a separate language index,
with as many as 101 entries. Comparable language coverage in landmark
monographs is hard to find in syntax: perhaps Guglielmo Cinque, with titles
such as Adverbs and functional heads: A cross-linguistic perspective
(Oxford: Oxford University Press, 1999) or The syntax of adjectives: A
comparative study (Cambridge, MA: MIT Press, 2010), or Mark Baker,
with The polysynthesis parameter (Oxford: Oxford University Press, 1996),
Lexical categories (Cambridge: Cambridge University Press, 2003), or The
syntax of agreement and concord (Cambridge: Cambridge University Press,
2008), come closest, but they are exceptions. Morphology is in between, as
is suggested by the language counts for some works from the morphological
Renaissance of the 1980s: Frans Plank, Morphologische (Ir-)Regularitäten
(Tübingen: Narr, 1981) has 20+; Wolfgang Wurzel, Flexionsmorphologie
und Natürlichkeit (Berlin: Akademie-Verlag, 1984) 75; Joan Bybee,
Morphology (Amsterdam: Benjamins, 1985) 50+; Andrew Carstairs,
Allomorphy in inflexion (London: Croom Helm, 1987) 48. The
morphological counterpart to SPE, Mark Aronoff’s Word formation in
Generative Grammar (Cambridge, MA: MIT Press, 1976), is mostly about
one language, English, but is livened up with tangential references to ten
others.
What could seem an idle ranking of publications by language density
arguably bears witness to a divergence of research traditions, and of an
estrangement of professional sub-communities, between syntax and
phonology.23
In syntax, and similarly in morphology, a tradition had developed of
elaborating theories and frameworks on a narrow basis: at the expense of
confronting crosslinguistic diversity, theorising would be informed by in-
depth looks at selected structural phenomena in one or a few particular
languages, and not just by utterances and texts, but also by native
judgments about them. This was not only the policy in Generative
Grammar: a collection such as Syntactic theory 1: Structuralist (edited by
Fred Householder, Harmondsworth: Penguin, 1972), assembling 23 classic
readings, accumulates a little over 100 languages and families, but most
individual chapters make their theoretical points on the basis of individual
languages, mostly English, with Ilocano (L. Bloomfield), Bilaan (K. L.
Pike), Teleéfoól (P. Healey), Sundanese (R. H. Robins), Vietnamese (P. J.
Honey), and Eskimo-Aleut (K. Bergsland) as sporadic co-stars and the rest
as bit-part players. (The only multi-language exceptions in this reader are
W. S. Allen, Transitivity and possession, and B. L. Whorf, Grammatical
categories.) A pre-structuralist classic, Wilhelm Havers’ Handbuch der
erklärenden Syntax (Heidelberg: Winter, 1931), had limited itself to a subset of Indo-European, although with illustrations from spoken modern
languages and with occasional comparisons of these “Kultursprachen” to
none-too-specific “Natursprachen”.
Eventually, from the 1960s and 70s onwards, as typology was
beginning, through individual efforts like Joseph Greenberg’s, to attract
wider attention than ever before, languages in the plural would re-assert
their right to be heard not only for their phonology, but also their syntax.
With the Generative paradigm continuing to dominate, a misperception
arose of syntax being done in two ways: “theoretically”, engaging with
single languages against the backdrop of Universal Grammar (largely taken
for granted), vs. “descriptively”, dealing with multiple languages and
inductively inferring crosslinguistic generalisations about co-variation/co-
evolution. When “theorists” were withholding the honorary epithet
“theoretical” from the latter line, where theorising was primarily about
finding and explaining inductive generalisations, they were probably
encouraged by an occasional lack of subtlety in conceptualising syntactic
structures and processes and a reluctance to countenance abstract
representations. Though far better informed crosslinguistically, syntactic
typology as part and parcel of the “descriptive” approach remained
theoretically indeed sometimes a bit basic.24 It seemed like grammar was in
bare essence to be conceived of as a checklist of variables, possibly with
only two values, plus or minus – OV or VO? Adposition before/ after NP?
Genitive before/ after head noun? Ergative or accusative or other
alignment? A definite article, yes or no? Zero copula? Dual? Inclusive-
exclusive? Gender, and where applicable: how many? Doing typology then
typically meant searching for co-variation among such variables whose
values the typologist could easily glean at a glance from lots of descriptive
grammars.
In phonology, theory and framework development had never been
divorced from crosslinguistic awareness to a similarly alarming extent.
There were thus no grounds for a multilingual “descriptive” phonology to
split off from a monolingual “theoretical” phonology à la syntax.
Languages in the plural remained at the core of phonological theorising. In
principle this meant that typology could have been done as part of
theoretical phonology, rather than in a separate community where members
defined themselves as typologists and where “non-theoretical” syntacticians
were setting the agenda. And to some extent it was – namely to the extent
that phonological grammar could be conceived of as a checklist and values
could conveniently be checked for co-variation/ co-evolution:25 Does the
language have this segment and that? Does it have quantity contrasts for
vowels/ consonants? Does it permit onset clusters, and if so which? How do
its syllables go beyond CV? Does it enforce final devoicing? Does it have
vowel harmony? Is it tonal? Level or contour tones, and how many? Which
syllable of the word does it stress? However, this sort of thing – like listing
segment inventories and phonotactic templates – was never considered all
there is for phonological theory to address, and, when it was a point of
departure, it did not represent a theoretical issue or conclusion. Hence,
much of what was at the heart of phonological theorising, in variable
frameworks but invariably richly informed by diverse languages, has never
translated into phonological typology of the checklist-based variety, a style
so increasingly popular for syntactic and morphological typology and
accounting for the latter-day bulk of it.
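What the checklist-based style amounts to computationally is easily illustrated. A minimal sketch, with invented languages and values chosen purely for illustration, of encoding languages as feature–value checklists and testing a candidate implicational co-variation against them:

```python
# Sketch only: the languages and values below are invented for illustration.
checklist = {
    "Lang1": {"order": "OV", "adposition": "post", "final_devoicing": True,  "vowel_harmony": True},
    "Lang2": {"order": "VO", "adposition": "pre",  "final_devoicing": False, "vowel_harmony": False},
    "Lang3": {"order": "OV", "adposition": "post", "final_devoicing": True,  "vowel_harmony": False},
    "Lang4": {"order": "VO", "adposition": "pre",  "final_devoicing": True,  "vowel_harmony": False},
}

def implication_counterexamples(data, antecedent, consequent):
    """Test an implicational universal 'antecedent ⊃ consequent' against the
    checklist; an empty result means no exceptions in the sample."""
    return [lang for lang, feats in data.items()
            if antecedent(feats) and not consequent(feats)]

# 'If a language is OV, it has postpositions' -- holds for this toy sample.
print(implication_counterexamples(checklist,
                                  lambda f: f["order"] == "OV",
                                  lambda f: f["adposition"] == "post"))   # []

# 'If a language has final devoicing, it has vowel harmony' -- fails here.
print(implication_counterexamples(checklist,
                                  lambda f: f["final_devoicing"],
                                  lambda f: f["vowel_harmony"]))          # ['Lang3', 'Lang4']
```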
When phonology is seen as phonologists see it, aiming at adequate description and at the same time at making theoretical sense of what is being described, the grammar of sound grows substantially in sheer volume. When syntax chapters in weightier descriptive grammars are compared with monographs on the syntax of the same languages, like those of the Cambridge Syntax Guides, there is no dramatic mismatch. The remit
for authors of these Guides, of which a dozen have so far been published
(mostly for European languages), is to be both descriptive and theoretical,
while the editorial team itself (Peter Austin, Bernard Comrie, Joan Bresnan,
David Lightfoot, Ian Roberts, Neil Smith), like the intended audience, is
patently divided between “descriptive” and “theoretical” allegiances. The
authors of the Phonology of the World’s Languages series of Clarendon/
Oxford University Press are likewise instructed, although by a single editor
(Jacques Durand), to attend to both description and explanation, to the
benefit of a single undivided body of intended readers, which should not be
put off by differences between theoretical frameworks – and the resultant
monographs, as yet 19, far exceed what would be found in even the most
comprehensive of descriptive grammars. Intriguingly, several languages
have both a Cambridge syntax guide and an Oxford phonology portrait
devoted to them, and the Kolmogorov Complexity or at any rate book
length is not necessarily less for the phonology: Arabic phonology (and
morphology; author Janet Watson) 336 pages, syntax (authors Joseph Aoun
et al.) 260 pages; Catalan phonology (Max Wheeler) 400, Spanish syntax
(Karen Zagona) 300; Welsh phonology (S. J. Hannahs) 198, Welsh syntax
(Robert Borsley & Maggie Tallerman) 412; Icelandic (& Faroese)
phonology (Kristján Árnason) 368, syntax (Höskuldur Thráinsson) 580;
Dutch phonology (Geert Booij) 218, syntax (Jan-Wouter Zwart) 418;
German phonology (Richard Wiese) 368, syntax (Hubert Haider) 368;
Hungarian phonology (Péter Siptár & Miklós Törkenczy) 336, syntax
(Katalin É. Kiss) 292; Chinese phonology (San Duanmu) 382, syntax
(James Huang et al.) 404.26
References
Note: Works dealt with as historiographical data are referenced in the body of the chapter.
Paul Kiparsky
Formal and empirical issues in phonological typology
Abstract: The word level in the sense of Lexical Phonology and Stratal
OT, here referred to as the l-phonemic level, is a linguistically significant
level of representation, which captures what was right about the structural
phonemic level without inheriting its well-known problems. It does so in
virtue of encoding non-contrastive but distinctive as well as contrastive but
non-distinctive phonological properties. I show that phonological systems
which appear marginal or aberrant from the perspective of structural
phonemics and generative phonological underlying representations are
normalized at the l-phonemic level, and that certain phonological universals
become exceptionless only at this level. Dramatic instances include putative
vertical and one-vowel systems such as those of Arrernte and Kabardian,
and apparently syllable-less languages such as Gokana. I further argue that
“external evidence” from change, dispersion, poetic conventions, and
language games supports l-phonemic representations rather than classical
phonemic representations.
The larger methodological point is that there are no theory-neutral
grammars, and consequently no theory-neutral typology. In terms of
Hyman’s (2008) distinction, there are no “descriptive” universals of
language. All universals are “analytic”, and their validity often turns on a
set of critical cases where different solutions can be and have been
entertained. Therefore the search for better linguistic descriptions, more
illuminating typologies, and stronger cross-linguistic generalizations and
universals must go hand in hand.
1 Lexical representations
1.1 Problems with phonemes
Typological generalizations and universals are explicanda for linguistic
theory, but they are themselves theory-dependent, for in order to be
intelligible and falsifiable they must adhere to some explicit descriptive
framework. This mutual dependency comes to a head at the margins of
typological space, where reconciling typologies with descriptive
frameworks and the analyses dictated by them can involve a labyrinth of
choices. I explore a few of the tangled paths through it in the realm of
syllable structure and vowel systems.
Phonological typology has been based on three distinct levels of
representation: phonemic, phonetic, and morphophonemic (underlying,
“systematic phonemic”). Most work on segment inventories is framed in
terms of phonemic systems in the tradition of Trubetzkoy (1929, 1939),
Jakobson (1958), and Greenberg (1978). A major resource is the UPSID
collection of phonemic systems (Maddieson 1984, 2013, Maddieson &
Precoda 1990), which has the virtue of being genetically balanced (to the
extent possible), carefully vetted, and to some extent normalized to conform
to a standard set of analytic principles.28 The same resource has also been
used by phoneticians to investigate the typology of speech sound
inventories (Schwartz et al. 1997). Proponents of Dispersion Theory have
attempted to model the UPSID vowel systems, even though the theory is
strictly speaking about the phonetic realization of phonemes (maximization
of perceptual distance and minimization of articulatory effort). A growing
body of typology crucially relies on underlying representations (phonemes
in the classical generative phonological sense), such as Dresher (2009) and
Casali (2014). The analysis of Arrernte syllable structure that Evans &
Levinson (2009: 434) cite as part of their argument that universals are
“myths” is based on abstract underlying representations (Section 2.1
below).
Throwing abstract morphophonemic, phonemic, and phonetic
inventories in the same bin is unlikely to produce coherent typologies and
universals. So what kinds of categories and representations should
typologists look at? At least two criteria follow from the nature of typology
itself. We want typological categories that correlate with each other and
show some historical stability. And we want the categories to be based on
independently justified linguistically significant representations.
It is not obvious that the phonemic level satisfies either of these criteria.
There is persuasive evidence for some level between abstract underlying
representations and phonetics at which phonology is accessed in language
use, including the classic “psychological reality” or “external evidence”
diagnostics such as versification and language games, as well as language
change, including sound change and phonologization, analogy, and
borrowing. But phonemic theories do not converge on this level. Depending
on how such fundamental issues as biuniqueness, invariance, linearity, and
morphological conditioning (“grammatical prerequisites”, junctures) are
resolved, phonemic analyses diverge for all but the simplest textbook cases,
and quite drastically for typologically challenging outlier systems of the
sort I’ll focus on here. For example, if we require linearity (a phoneme
cannot correspond to a sequence of sounds) Kabardian has seven vowel
phonemes. If we don’t require linearity, but do require biuniqueness, it has
three vowel phonemes; otherwise it has two. Each of these phonemic
analyses is currently advocated by researchers on Kabardian (Section 3.2).
I shall argue that what language users actually access, and what
language change reveals, is not exactly the classical phonemic level, but the
level of representations that emerges from the lexical phonology (in the
sense of Lexical Phonology and Stratal OT). I’ll refer to this as the level of
LEXICAL REPRESENTATIONS and to its elements as L-PHONEMES. I will argue
first that the classic diagnostics fit lexical representations rather than
phonemic representations, where they differ, and then that the typology of
syllable structure and phonological systems is best served by lexical
representations. At this level phonological systems converge on significant
common properties, and some important phonological near-universals turn
exceptionless. The global factors of dispersion, symmetry, and naturalness,
to the extent that they shape phonological systems, appear to take effect at
this level.
Although I concentrate on unusual syllabification and vowel systems,
these just highlight some inherent tensions between phonemics and
typology that arise less conspicuously in most languages. They are due to
the sparseness and segmentalism of phonemic representations.
The point of phonemic representations is that they should be stripped of
all predictable information. In Jakobson’s words (1958 [1962]: 525): “A
typology of either grammatical or phonological systems cannot be achieved
without subjecting them to a logical restatement which gives the maximum
economy by a strict extraction of redundancies.” I’ll defend the opposite
view, that a specific class of redundant information is phonologically
relevant,29 and that its omission can lead typology astray – namely just that
increment of information which accrues from the phonological computation
in the lexical module. In particular, lexical representations include word
stress, if the language has it, and word-level syllable structure, regardless of
whether these things are predictable in the language or relevant to any
phonological processes in it.30 Lexical representations include this
information for principled reasons, as we’ll see directly. But they exclude
postlexical feature specifications from sandhi processes and phonetic
implementation rules. In this respect lexical representations are thus more
like Praguian Wortphonologie than like Satzphonologie.
Besides sparseness, a second source of trouble for structural phonemics
is its segmentalist commitment (criticized in Scobbie & Stuart-Smith 2008).
It requires that a multiply associated feature be associated with exactly one
contrastive segment in its span. Segmentalism is implied by such concepts
as minimal pairs, the commutation test, and the view of a phonemic system
as an inventory of abstract contrastive segments. Structural phonemics has
no place for Harris’ long components, Firthian prosodies, or Goldsmith’s
autosegments, not even those versions of it that take distinctive features as
the basic units of phonology, such as Jakobson’s. OT phonology has
inherited segmentalism in its descriptive practice, and formalized it in
correspondence theory, but nothing about OT inherently requires it. OT is a
theory of constraint interaction, not a theory of representations. Lexical
representations differ from phonemic representations in that they record the
full cumulative effect of the stem-level and word-level phonological
computation, including any redundant features assigned in those two lexical
submodules, with one-to-many association of prosodies to segmental slots
where appropriate, while still excluding allophones introduced in the
postlexical module and phonetic implementation. This additional
information turns out to be important for phonological typology and
significant universals can be formulated over representations that
incorporate it.
An example will help make these points clear. Gravina (2014: 90–94)
describes Moloko (Central Chadic) as having a single underlying vowel /a/,
and a second vowel /ǝ/ which does not appear in underlying forms and is
predictably inserted where syllable structure requires.31 In addition, a word
may have one of two prosodies, palatalization and labialization (notated as
y, w), which color its vowels to yield six surface vowels altogether:
The prosodies spread leftward across a word, from suffix to stem and stem
to prefix, but they do not cross word boundaries (e.g. (2g)).32
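A minimal sketch of the prosody analysis may help; the particular coloring values assumed below (a coloured to e or o, epenthetic ǝ to i or u) are an illustrative reconstruction rather than a citation of Gravina’s tables, and the example word is invented:

```python
# Sketch only: the coloring values (a→e/o, ǝ→i/u) are an assumed reconstruction
# for illustration; what matters is that the prosody is a property of the word.

COLOR = {
    None: {"a": "a", "ǝ": "ǝ"},   # no prosody
    "y":  {"a": "e", "ǝ": "i"},   # palatalization prosody
    "w":  {"a": "o", "ǝ": "u"},   # labialization prosody
}

def realize(segments, prosody=None):
    """Apply a word-level prosody to every vowel slot of the word.

    `segments` is a list of symbols; vowels are 'a' or the epenthetic 'ǝ';
    consonants are passed through unchanged."""
    table = COLOR[prosody]
    return "".join(table.get(s, s) for s in segments)

word = ["m", "ǝ", "l", "a", "k"]   # hypothetical shape, not a real Moloko word
print(realize(word))               # mǝlak
print(realize(word, "y"))          # milek  -- palatalized throughout the word
print(realize(word, "w"))          # muluk  -- labialized throughout the word
```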
(3) A vowel system may be contrastive for aperture alone only if its vowels acquire vowel color from neighboring consonants.
This means that at the level of lexical representations there are no one-
dimensional vowel systems, whether vertical or horizontal. Minimal vowel
systems are triangular, making use of both the front/back dimension and the
high/low dimension. To this we can now add the more specific substantive
generalization (5).
(5) All vowel systems have at least a low vowel and two non-low vowels.
One of the non-low vowels is a front unrounded high vowel, the other
is back.
(7)
A formally identical English case was noted by Bloch (1941).
Although this duplication problem did not attract attention at the time, it
led to a crisis in phonemic theory when it was raised by Halle (1959: 22)
and Chomsky (1964) as an objection to any intermediate phonemic level
(Anderson 2000). Crucially, THIS PROBLEM DOES NOT ARISE IN STRATAL OT. At
any given level, the available contrasts are defined by the ranking of the
relevant faithfulness and markedness constraints. Schematically, the
asymmetric underlying vowel system of Menomini comes from a constraint
– call it *ū – that dominates IDENT(High) in the stem phonology, thereby suppressing the height contrast between ū and ō. In the word phonology, both *ū and IDENT(High) are dominated by height assimilation, whose activation brings in the new l-phoneme. In this way the grammar formally characterizes both the neutralization of the contrast between ū and ō in the stem phonology (which makes the height specification irrelevant in input representations), and the derived distinction between them in the word
phonology.
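A toy ranked-constraint evaluator can make the two-stratum logic concrete. The constraint names follow the text, but the candidate set, the inputs, and the implementation details below are illustrative assumptions, not the chapter’s formalism:

```python
# Illustrative sketch: a toy strict-ranking evaluator for the Menomini schema.
# Candidates are limited to the two back vowels at issue.

def no_u(inp, cand):        # markedness *ū: penalize a surface ū
    return 1 if cand == "ū" else 0

def ident_high(inp, cand):  # faithfulness IDENT(High): penalize changing the input vowel
    return 1 if cand != inp["vowel"] else 0

def assim_high(inp, cand):  # height assimilation: demand ū when a high vowel follows
    return 1 if inp["high_follows"] and cand == "ō" else 0

def evaluate(inp, ranking, candidates=("ū", "ō")):
    """Pick the candidate with the lexicographically smallest violation profile."""
    return min(candidates, key=lambda c: tuple(con(inp, c) for con in ranking))

stem_ranking = [no_u, ident_high]               # stem stratum: *ū >> IDENT(High)
word_ranking = [assim_high, no_u, ident_high]   # word stratum: ASSIM(High) >> *ū >> IDENT(High)

inp = {"vowel": "ō", "high_follows": True}
after_stem = evaluate(inp, stem_ranking)                           # 'ō': *ū keeps ū out at the stem stratum
after_word = evaluate({**inp, "vowel": after_stem}, word_ranking)  # 'ū': assimilation brings in the new l-phoneme
print(after_stem, after_word)   # ō ū
```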
The activation of context-sensitive markedness constraints not only
enhances feature distinctions and maximizes dispersion, but creates more
symmetric inventories, and maximizes feature economy in the sense of
Clements (2001, 2003). Like dispersion, symmetry and feature economy are
tendencies of phonological systems, not absolute requirements, but they are
quantifiable and statistically verifiable, as shown for feature economy by
Clements (2003), and they are legitimate criteria for adjudicating between different
phonemic solutions. I venture the following conjectures:
Since {i} is an independent phoneme, whereas [u] and [e] are allophones
occurring only in the contexts just mentioned, a split derivation would again
be required to reconstruct an s-phonemic level. As in (7), the processes
marked by dashed lines introduce new l-phonemes, increasing both
symmetry and dispersion.
(10)
Unlike Menomini, Jimi does not achieve perfect symmetry. {a} is never
raised to {o} due to a gap in the consonant inventory. Jimi has no labialized
alveolars, so that the process that produces [e] after {rj}, {lj} has no
corresponding labial triggers *{rw}, *{lw}.
(11)
Thus Dan : Don would be rendered as [dæn] : [dɑŋ], while ban : bang
would both be [bæn]. By the criteria laid out in Section 1.1, /æ/ and /ɑ/ are
distinct l-phonemes in Mandarin, present in lexical representations just as
/n/ and /ŋ/ are. The loan phonology privileges the front/back feature on
vowels over the corresponding consonantal feature on nasal codas.
Presumably the vocalic distinction is perceptually more salient than the
consonantal distinction, as in the Russian case. Postlexical allophones
are not used for such “reverse engineering” because they are not
represented in the lexical phonology and are unavailable for manipulation by
speakers. For example, English borrowings from Chinese don’t render tones
by consonant voicing, although this might well produce approximations of
at least some Chinese tonal contrasts.
2 Syllabification
2.1 Arrernte
Arrernte (an Arandic language of Australia) has been claimed to have only
VC(C) syllables (Breen & Pensalfini 1999 [B&P], Pensalfini 1998, Tabain
et al. 2004). Evans & Levinson (2009: 434) cite B&P’s work as “a clear
demonstration that Arrernte organizes its syllables around a VC(C)
structure and does not permit consonantal onsets […] An initially plausible
pattern turns out not to be universal after all, once the range of induction is
sufficiently extended”. VC(C) is indeed the most marked syllable type since
it violates ONSET, NOCODA, and *COMPLEX, and contradicts the following
generalizations (and Jakobson’s CV universal):
B&P’s claim that all Arrernte syllables lack onsets is about underlying
representations. About 25% of words as actually pronounced begin with a
consonant. Their analysis posits that such words have an underlying initial /e-/,
which is then deleted. Their claim that all Arrernte syllables are closed is
likewise about UNDERLYING representations. According to Henderson &
Dobson (1994: 23) “nearly all Arrernte phonological words end in a central
vowel, though this vowel need not be pronounced, and is often absent in
sandhi when another vowel follows”. H&D’s transcription implies a
phonemicization that is consistent with all four universals in (14). It has no
underlying unpronounced initial /e-/, and posits final /-e/ where it is
pronounced. (15) shows B&P’s analysis in the first column, and H&D’s in
the second, with the actual pronunciation in the third.
Iowa-Ota has the same stress pattern as Arrernte, but it cannot be reduced to
second-syllable stress by positing deleted initial vowels. Finnish secondary
stress exhibits the same pattern. Four-syllable words normally get a stress
on the third syllable, except if it is onsetless, e.g. á.te.ri.a ‘meal’,
kómp.pa.ni.a ‘(military) company’ (Karvonen 2005). KiKerewe
demonstrates the prosodic defectiveness of onsetless syllables in several
different ways: they are light, tonally defective, and do not induce
compensatory lengthening when desyllabified (Odden 1995). Regardless of
how the unstressability of onsetless syllables is modeled,46 it undermines
the argument for abstract /V-/ in Arrernte.
The second argument adduced by B&P for VC(C) syllabification in
Arrernte is based on the plural/reciprocal suffix. After a stem with an odd
number of syllables, the suffix is -err or -errirr. After a stem with an even
number of syllables, the suffix is -irr. Stems of more than one syllable can
also have the optional allomorph -ewarr. The syllable count comes out right
if an initial vowel is posited in words that begin with consonants, so that
(17a) begins in /et̪-/ and (17c) begins in /ekwern/.47
The obvious alternative is that the allomorphy is stress-conditioned: the
allomorph -err- must head a foot, the allomorphs -irr and -ewarr cannot.48
B&P’s third argument is that the reduplication pattern of the
frequentative indicates VC(C) syllabification.
For B&P, the frequentative suffix consists of a disyllabic foot, the first
syllable pre-specified as -ep-, the second a copy of the final VC(C) syllable
of the root. But this is a weak argument because prosodic morphology
normally does not involve copying prosodic constituents of the base.
Rather, affixes are prosodic templates (defined by constraints) that get their
unspecified segmental content from the base (McCarthy & Prince 1986). If
the syllable structure of the reduplicant is fixed by the reduplication
morpheme itself, then it can’t tell us anything about the syllabification of
the base. The argument is further undermined by Pensalfini’s (1998)
observation that the same type of reduplication exists in Jingulu, which
uncontroversially has CV syllabification, and therefore in any case requires
some such alternative analysis. A straightforward formulation consistent
with the theory of Prosodic Morphology is that the suffix is /-epVC/, with
VC filled by the closest part of the stem melody, e.g. /empwarr/ → empwarr-
epVC → empwarr-eparr.
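The templatic alternative can be made concrete with a toy sketch of how the fixed /-epVC/ suffix is filled from the stem; the crude vowel/consonant split and the helper name are mine, for illustration only.

# Toy sketch: fill the /-epVC/ frequentative template from the end of the stem melody.
VOWELS = set("aeiou")        # simplification; not the full Arrernte segment inventory

def frequentative(stem):
    # Locate the last vowel of the stem; its melody fills V, the following consonants fill C.
    i = max(idx for idx, seg in enumerate(stem) if seg in VOWELS)
    return stem + "-ep" + stem[i] + stem[i + 1:]

print(frequentative("empwarr"))   # empwarr-eparr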
The fourth argument, from the play language Rabbit Talk, is especially
intriguing:49
It looks like the initial syllable of the word, VC(C) in B&P’s analysis, is
moved to the end. But an unproblematic alternative is that the word rhyme
(the portion of the word that includes the stressed vowel and everything that
follows it, boldfaced in (19)) is flipped with the residue (prosodic
circumscription), viz. (amp) (áŋkem) → aŋkem-amp.50
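The circumscription analysis is mechanically trivial, as the following toy sketch shows; the rhyme boundary is simply supplied by hand, since locating the stressed vowel is not at issue here.

# Toy sketch of Rabbit Talk as prosodic circumscription: swap the residue and the word rhyme.
def rabbit_talk(word, rhyme_start):
    residue, rhyme = word[:rhyme_start], word[rhyme_start:]
    return rhyme + "-" + residue

print(rabbit_talk("ampáŋkem", 3))   # áŋkem-amp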
There is substantial positive evidence that Arrernte words do exhibit the
universal preference for CV. One indication comes from the rendering of
English loanwords: a vowel is inserted after a final consonant, not before
an initial consonant as the VC(C) syllable canon would predict.
Arrernte songs categorically prefer CV. “In the Arandic [song] tradition,
quite generally, the consonant of a line-final suffix [...] is transferred to the
beginning of the line following, so that each line begins with a consonant,
even if the actual Arandic word heading the line is vowel initial [...]” (Hale
1984). Turpin (2012) moreover observes: “All sung syllables have an onset.
[...] creating a poetic line involves either deleting the line-initial vowel ([ɐ
ˈɳtǝpǝ] → [ˈɳtǝpǝ] ‘pigeon’) or inserting a consonant ([ɐˈlǝmǝ] →
[ˈwɐlǝmǝ] ‘stomach’).”
Postlexical syllabification shows CV preference as well. At the sentence
level, ONSET and NOCODA are maximized:
(22) a. Roots have the shapes CV, CVV, CVC, CVCV, but not *CVVV.
Analysis: they are minimally a syllable and maximally a bimoraic
foot, satisfying ONSET.
b. Derivational suffixes can have the shape -V or -CV. Analysis:
they are minimal (light) syllables.
c. Prosodic stems may be of the form CV, CVC, CVV, CVCV,
CVVCV, CVVCVV, CVVVV, but not *CVVVCV, *CVCVVV.
They are maximally disyllabic (disyllabic trochees), as Hyman
himself notes. Since Gokana syllables are maximally bimoraic,
the restrictions follow.55
d. Gokana has CV-reduplication. Analysis: the Gokana reduplicant
is a minimal (light) syllable, a very common type of reduplication
as predicted by Prosodic Morphology (McCarthy & Prince 1986,
1993).
Such constraints are obviously helpful to hearers and learners in parsing the
morphological structure of words. These data undermine even the weaker
claim that Gokana CAN be analyzed adequately without syllables.
According to Hyman (2008), “imposing an arbitrary syllabification [on
the word] adds nothing to our understanding of Gokana”. I find this
argument unconvincing for two reasons. First, the syllabification would not
be arbitrary, for it would have to be compatible with the language’s
constraints, including the ones in (22). Secondly, it seems too much to ask
that the syllabification of EVERY Gokana word should add something to our
understanding of the whole LANGUAGE. We don’t ask that of any other
aspect of the phonological analysis of words. Rather, the analysis of the
entire language has to be compatible with all its words and yield as many
explanatory dividends as possible, within the language and across
languages. A theory lives by the totality of its consequences.
A theoretical argument for the same conclusion follows from basic
assumptions of OT. A constraint can be defeated only by a more highly
ranked constraint. Prohibiting syllabification would require constraints that
defeat syllable structure assignment. But syllabification per se violates
neither faithfulness constraints nor markedness constraints (although specific
marked syllable structures violate such constraints as ONSET and *CODA,
which can be ranked to yield the familiar factorial typology, and
RESYLLABIFICATION does constitute a faithfulness violation). Such
constraints are unmotivated and their adoption would expand the factorial
typology in undesirable ways. For example, a language without syllables
would not violate any constraints such as ONSET, *CODA, and *COMPLEX,
and would consequently not be subject to the phonotactic restrictions that
those constraints capture.
This seems to me enough reason to reject the claim that Gokana has no
syllables. Even if the symptoms of syllabicity in (22) are discounted, the
very fact that the language is syllabifiable in conformity with typologically
well-established constraints and preferences would be incomprehensible if
it did not in fact have syllables. All in all Gokana speaks for rather than
against the universality of syllables and CV syllables in particular, just as
Arrernte does.
Japanese is a broadly similar case. It has the same kind of funny vowel
sequences as Gokana, e.g., Bloch’s example oooóóo ‘let’s cover the tail’,
and perhaps no syllable-conditioned phonological processes. Yet there is
evidence for one-mora and two-mora syllables (McCawley 1968, Kubozono
1999, 2003, Itô & Mester 2003), possibly three-mora syllables, though
Kubozono argues that these are divided into two syllables as /CV.VN/.
Labrune disputes the existence of syllables in Japanese, citing the three-
way contrast in (23):
3 Vowel systems
3.1 Kalam
The generalization that all languages have an /i/-type vowel is contradicted
by analyses of some Papua New Guinean and Chadic languages. In these
languages syllabic and non-syllabic semivowels (high vowels and glides)
are in complementary distribution, but phonemicized as underlying /y/ and
/w/ (Fast 1953, Laycock 1965, Barreteau 1988, Comrie 1991, Pawley &
Bulmer 2011, Smith 1999). This analysis reduces the phonemic vowel
inventory to /e/, /a/, /o/, or just to /a/ (in some of the languages with an
additional epenthetic /ǝ/ or /ɨ/).59 Since i, y and u, w have the same
segmental feature content, differing only in syllabicity,60 this analysis
amounts to specifying syllabicity in the phonemic inventory, despite its
predictability, and despite the complementary distribution between the high
vowels and glides. I believe that the need to specify semivowels as
underlyingly non-syllabic in these languages is an artifact of segmentalist
phonemics, and present a Stratal OT analysis in which all underlying
segments are indifferent as to syllabicity, and the semivowels are derived
from underspecified {I}, {U}. This yields exactly the same output as
positing underlying /y/, /w/ or /i/, /u/, because the actual realizations are
determined by the languages’ strict syllable structure. I demonstrate this
with a reconsideration of the exemplary analysis of Kalam by Blevins &
Pawley (2010) and Pawley & Bulmer (2011).
In addition to the semivowels at issue, Kalam has the vowels /e/, /a/, /o/,
plus an epenthetic vowel which is inserted predictably after unsyllabifiable
consonants, realized as high central short [ɨ], with a word-final [ǝ]
allophone. Underlying forms can have long sequences of consonants, and
some words have no vowels at all. Only /y/, /w/, and /s/ (the language’s
only fricative consonant) occur as word-internal codas. Word-finally any
type of consonant is allowed, including obstruents, nasals, liquids, and
glides. Underlying forms are accommodated to a CV syllabic template
where possible by inserting the nucleus /ɨ/, driven by the basic constraints
ONSET and NOCODA.61
With respect to syllabification, Kalam phonemes can be divided into three
classes:
(26) a. Epenthetic [ɨ], [ǝ] are short: /kn/ [ˈkɨn], /m/ [ˈmǝ]
b. Phonemic vowels are half-long: /kay/ [ˈkaˑj]
c. Including the syllabic allophones of the semivowels: /kyn/ [ˈkiˑn]
B&P give four arguments that the semivowels are always underlying
consonants. Their first argument is that while /a e o/ are found word-
initially in native words, no native words begin with /i/, /u/ or any central
vowel. Instead, words may begin with [ji] or [wu]. B&P analyze them as
beginning phonemically with /j/, /w/.
P&B (2011: 31) describe the vowels in such words as predictably inserted
“release vowels” colored by the adjacent semivowel. However, as
reproduced in (27), B&P consistently transcribe them as half-long, like
regular vowels and like the vocalized glides in words like (24b) /kyn/
[ˈkiˑn], but unlike the short vowel predictably inserted between two
consonants in (24a) /kn/ [ˈkɨn].62 That means that they are not release
vowels, but vocalized glides, which for P&B’s analysis means that the
semivowels are vocalized as [ji-], [wu-] before consonants, since no words
begin with [i-], [u-]. P&B do not account for these data. A simpler
alternative is that Kalam words are syllabified to have onsets where
possible, so initial glides are [ji-], [wu-] before consonants and [j-], [w-]
before vowels. Since semivowels can be both syllabic nuclei and margins,
an initial semivowel followed by a consonant can satisfy the CV
preference by being syllabified as /ji-/, with the same melodic element
serving as onset and nucleus. For example, underlying {Im} (or for that
matter underlying {im} or {ym}) is then syllabified as /jim/, rather than as
*/im/ or */jm/. Since vowels cannot be affiliated with onsets, it
also follows that underlying {Am} (or {am}) must be syllabified as /am/
rather than as */a̯am/. This derives the generalization that words can begin
only with a consonant or with a true vowel but not with a semivowel or
with an epenthetic /ɨ/.63 There is then no need to specify the semivowels as
underlyingly non-syllabic. This analysis is straightforwardly implementable
in OT, as we’ll see shortly in tableau (31) below. Before proceeding to that
formalization, let us review B&P’s remaining three arguments for
underlying /y/, /w/.
B&P’s second argument for underlying /y/, /w/ is that a word can end in
a consonant or in a vocalized semivowel [-i] and [-u], but not in a vowel [-
a], [-e], or [-o], e.g. (24d) /amy/ [ˈaˑˈmiˑ], but */ama/ *[ˈaˑˈmaˑ]. On their
assumption that [-i] and [-u] are underlying consonants, this follows from
the constraint that words cannot end in vowels. I assume the weaker
constraint that words cannot end in non-high vowels.64
B&P’s third argument is based on the distribution of the two allomorphs
of the negative prefix or proclitic /ma-/ ⁓ /m-/ ‘not, not yet’. The choice of
allomorph is phonologically determined: /m-/ occurs before vowels: /m-ag-
p/ ‘he did not speak’, /m-ow-p/ ‘he has not come’, /m-o-ng-gab/ ‘he will not
come’. /ma-/ occurs before consonants and before semivowels: /ma-pkp/ ‘it
has not struck’, /ma-dan/ ‘don’t touch’, /ma-ynb/ ‘it is not cooked’, /ma-
wkp/ ‘it is not cracked’.65
(30) Constraints
a. *V]ω : A word can’t end in a vowel.
b. SON: Consonants are non-syllabic, vowels are syllabic (see (25)).
c. NUC: A syllable has a nucleus.
d. IDENT(F): Input [αF] does not correspond to output [–αF].
e. DEP: Don’t insert a segment.
f. MAX: Don’t delete a segment.
g. *COMPLEX: A syllable does not have a consonant cluster.
h. ONSET: A syllable has an onset.
i. *CODA: A syllable does not have a coda.
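Anticipating tableau (31), the way such a ranking selects /jim/ for underlying {Im} can be sketched computationally as follows; the violation counts and the partial ranking assumed here are simplifications for illustration, not the full tableau.

# Illustrative evaluation of candidate syllabifications of {Im} (simplified violation counts).
def evaluate(violations, ranking):
    # Pick the candidate whose violation profile is lexicographically best under the ranking.
    return min(violations, key=lambda cand: [violations[cand].get(c, 0) for c in ranking])

violations = {
    "jim": {"*CODA": 1},              # onset j, nucleus i (same melodic element), coda m
    "im":  {"ONSET": 1, "*CODA": 1},  # onsetless
    "jm":  {"NUC": 1},                # no vocalic nucleus
}
ranking = ["NUC", "ONSET", "*CODA"]   # assumed partial ranking, for the illustration only
print(evaluate(violations, ranking))  # jim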
That monoconsonantal words such as (24f) /m/ [ˈmǝ] undergo epenthesis
follows not from reranking the constraints for those derivations, as B&P
suggest, but from standard constraints that are plainly unviolated in Kalam:
lexical words are accented, accents fall on a syllable, and syllables contain a
vocalic nucleus.66
Underlying forms are accommodated to a CV syllabic template by
inserting a vocalic nucleus realized as [ɨ] where necessary, driven by the
basic constraints ONSET and NOCODA. The insertion of the predictable [ɨ]-
vowels can be regarded as part of the syllabification process. The evidence
that they are epenthesized phonologically, rather than intruding
phonetically, is that they are obligatory regardless of speech rate, provide a
nucleus for syllables that would otherwise lack one, and carry word stress.
Several strands of evidence show that they originate specifically in the word
phonology rather than in the postlexical phonology. They are grouped into
binary feet within the domain of a word: in a word consisting of such
syllables, odd-numbered ones can get stressed (Pawley & Bulmer 2011:
30). Corroborating the lexical status of epenthetic vowels is the partial
unpredictability, or perhaps morphological conditioning, of these stresses.
For example, the second stress is on the fifth syllable in (24k) and on the
third syllable in (24l), perhaps because of the different morphological
structure. Finally, each member of a compound counts as a separate word
for purposes of the syllable count and word-finality. The inserted nucleus [ɨ]
is for these reasons an l-phoneme, not part of the s-phonemic representation
because it is predictable. Hence the regular CV syllable structure of the
language cannot be represented at the s-phonemic level. P&B posit initial
syllabifications with consonantal syllables like /t.d.k.s.pm/, /m.d.n.k.nŋ/,
contrary to Jakobson’s CV universal. The simple CV syllabification is,
however, visible in lexical representations, where the l-phoneme /ɨ/ is
present.
B&P say that Kalam ɨ does not neatly fit into Currie-Hall’s (2007)
typology of phonologically epenthetic vs. phonetically intrusive
(excrescent) vowels, on the grounds that it has two properties of intrusive
vowels in addition to the standard properties of epenthetic vowels: it does
not repair illicit structures, and it is a central vowel ([ɨ], word-finally [ǝ]).
Neither of these arguments hits the mark. The first argument overlooks the
generalization that Kalam epenthetic vowels provide a nucleus for
consonants that would otherwise have to be syllabified as codas but are
prohibited in coda position.67 But providing a nucleus for unsyllabifiable
consonants is repairing syllable structure. The second argument is based on
the incorrect premise that inserted central vowels are always intrusive rather
than epenthetic. There are many well-documented instances of epenthetic
central vowels, for example in German (Wiese 1986, 2000: 245), Catalan
(Wheeler 2005, Ch. 8), Armenian (Vaux 1998a, 1998b, 2003, Delisi 2015),
Slovenian (Jurgec 2007), some dialects of Berber (Dell & Tangi 1993),
Salishan languages (Parker 2011), and Mongolian (Svantesson 1995,
Svantesson et al. 2005) — all demonstrably phonological cases of ǝ-
epenthesis, some of them cyclic or morphologically conditioned, hence
definitely lexical. The correct generalization is the converse: intrusive
vowels are always central (unless of course they acquire peripheral features
from their context). In other words, independent peripheral quality is a
diagnostic for epenthetic vowels, but central quality is not a diagnostic of
intrusive vowels. That being the case, Kalam epenthetic vowels fit perfectly
into Hall’s typology; they do not have “mixed properties”.
Thinking that Kalam epenthetic vowels have mixed properties, B&P
classify them as a third category which they call REMNANT VOWELS. They
propose that remnant vowels arise from the historical loss of reduced
unstressed vowels, followed by reanalysis of deletion as insertion in the
complementary contexts (rule inversion) and possibly generalization of the
new insertion process, and that this origin explains their mixed properties.
This is no doubt how the Kalam epenthetic vowels arose. However, even
supposing contrary to fact that Kalam vowels had mixed properties, B&P’s
historical account would not explain that mixture, for there are numerous
synchronic epenthesis processes that are extended inversions of original
syncope and apocope processes and do NOT have mixed properties (see
Andersen 1969 on the synchrony and diachrony of Ukrainian paragoge; on
some analyses even the English schwa in the plural, genitive, and reduced
copula is a case). In any case there are at present no known clear instances
of epenthesis with mixed properties. The two-way distinction between
phonologically epenthetic and phonetically intrusive vowels offers a
sufficient typology of vowel insertion.
Comrie (1991) argues that of the seven Haruai vowels in (32), only /ǝ/ is
phonemic.
(32)
3.2 Kabardian
In the course of their argument that universals are “myths”, Evans &
Levinson (2009: 438) claim that it is “contested” whether all spoken
languages have vowel phonemes at all, citing the Northwestern Caucasian
languages, where “the quality of the vowel segments was long maintained
by many linguists to be entirely predictable from the consonantal context
(see Colarusso 1982; Halle 1970; Kuipers 1960)” (E&L 2009: 438). It must
be said that there is no such debate about Northwestern Caucasian or any
other languages; Colarusso (1982) and Halle (1970) demonstrated a
minimum inventory of two contrastive vowels.68 And it is not accurate that
“although most scholars have now swung round to recognizing two
contrasting vowels, the evidence for this hangs on the thread of a few
minimal pairs, mostly loanwords from Turkish or Arabic” (ibid.). Actually
the majority of scholars recognize a THREE-VOWEL system, and to the extent
that some have “swung round” to the two-vowel analysis, it is not from the
vowel-less analysis but from the three-vowel analysis. Nor does the
evidence particularly depend on Turkish and Arabic loans, or on minimal
pairs. On the contrary, the strongest evidence comes from native words and
the core vocabulary (Colarusso 1992: 22).
The history of scholarship on Kabardian phonology is worth reviewing
as an example of the theory-dependence of phonemic analyses. Older
reference grammars of Kabardian set up seven vowel phonemes: two
variable short vowels /ǝ/ and /ɐ/, whose realization depends mostly on the
following consonant, and five stable long vowels /ɑ: i: u: e: o:/, phonetically
more peripheral than the variable ones (Jakovlev 1948, Turčaninov &
Tsagov 1940, Abitov et al. 1957, Šagirov 1967, Bagov et al. 1970, followed
by Maddieson 1984, 2013). Jakovlev (1923) discovered that the stable
vowels can be derived by fusion of the short vowels with a glide. Most s-
phonemic theories allow fusion (e.g., phonemicizing English [ɚ] as /ǝr/, or
French and Portuguese nasal vowels as V+N sequences). In these it is
straightforward to reduce Kabardian /i: u: e: o:/ to underlying /ǝy/, /ǝw/, /
ɐy/, /ɐw/ respectively. Historically /ɑ:/ undoubtedly comes from an
analogous fusion of /ɐh/, but synchronically it can’t quite be reduced to that
under the strictures of s-phonemics. So in addition to /ǝ/ and /a/, analyses
typically posit it as a third s-phoneme. Its representation has been the
subject of some debate. Currently favored is /a:/ (Choi 1991, Matasović
2006, Wood 1994, Gordon & Applebaum 2006, Applebaum & Gordon
2013). An older theory posits a vertical three-vowel system /ǝ/ /ɐ/ /a/
(Trubetzkoy 1925, 1929, 1939, Catford 1942, 1984, Kumaxov 1973, 1984).
Abstract generative analyses, on the other hand, can easily derive the fifth
long vowel [ɑ:] from /ɐh/, in some cases ultimately from /hɐ/ and /ɐɣ/ by
other processes. The result of that further analytic step is the two-vowel
system of Kuipers (1960), Halle (1970), and Colarusso (1992, 2006).
It cannot be emphasized enough that the seven-vowel, three-vowel, and
two-vowel solutions, including the two variants of the three-vowel analysis, do not reflect any disagreement
about Kabardian, only the differences between phonological theories. The
data and phonological generalizations of Kabardian are not at stake. Each
analysis follows rigorously from exactly the same facts depending on the
principles that it assumes. Even the choice between the two variants of the
three-vowel phonemic system is a deep question of principle: what is at
stake is whether phonemics should privilege phonetic criteria, or
morphophonemic criteria and the overall simplicity of the grammar. The
former in this case favor a qualitative opposition /ɐ/ : /ɑ/, the latter point to
a quantitative opposition /a/ : /a:/. Far from being a dismaying free-for-all,
this spectrum of analyses is heartening because it means that our
understanding of Kabardian has reached a point where it can be advanced
by sharpening phonological theory and typology through empirical work on
other languages.
The upshot, then, is that Kabardian has at least three vocalic phonemes
(s-phonemes), reducible to two underlying m-phonemes. With that and the
failure of the refutation of the CV universal, E&L’s case against
phonological universals falls apart.
Even though Kabardian is not vowel-less, it remains, at the level of s-
phonemic representations, an exception to the proposed universals on vowel
systems in (14). A look at its phonology makes it likely that its lexical
representations do conform to them. Phonetically, Kabardian makes full use
of the vowel space, with unrounded and rounded front vowels and rounded
back vowels, in three heights, ten vowels in all according to Colarusso
(1992). In (33) I give examples of his phonetic and underlying forms, to
which I have added the phonemic representation according to the three-vowel
analysis.
This ten-vowel repertoire arises by assimilation in height, backness, and
rounding to a following consonant, if there is one.69 Vowels are fronted
before [–high] coronals (alveolars, alveopalatals, palatoalveolars), fronted
and raised before [+high] coronals (palatals and palato-alveolars), backed
before plain uvulars and pharyngeals, backed and rounded before rounded
uvulars, and raised and rounded before labiovelars (there are no plain
velars). Onset consonants also color the following vowel, but in a variable
and gradient manner at the level of phonetic implementation, as Colarusso
(1992: 31) makes clear. Word-finally and before labials and the laryngeal /
Ɂ/, which lack a distinctive tongue position, the vowels are unraised and
front (Colarusso 1982: 96, 1992: 30).70 These assimilations, summarized in
(34), generate ten surface vowels.
(35)
Before /Ɂw/, which triggers rounding but not backing, /ǝ/ and /a/ are realized
as ö, ɔ̈, not shown in the two-dimensional diagram. The long vowels /i:/
/u:/ /e:/ /o:/ originate by the same assimilation processes before /y/ and /w/,
which are then deleted with compensatory lengthening. /h/ neutralizes /ǝ/
and /a/ without any other coloring effect, and deletes like the other glides,
giving /a:/.
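The word-level computation can be rendered schematically as a mapping from the two short vowels and the class of the following consonant to surface feature bundles; the class labels and feature assignments below paraphrase the summary above and are not Colarusso’s rules verbatim.

# Schematic sketch of Kabardian vowel coloring by the following consonant (simplified).
COLORING = {
    "nonhigh coronal":         {"front": True},
    "high coronal":            {"front": True, "high": True},
    "plain uvular/pharyngeal": {"back": True},
    "rounded uvular":          {"back": True, "round": True},
    "labiovelar":              {"high": True, "round": True},
    "labial/laryngeal/final":  {"front": True},   # unraised and front
}

def color(vowel, following):
    # Start from the bare vertical vowel and add the features contributed by the context.
    features = {"low": vowel == "a", "front": False, "back": False, "high": False, "round": False}
    features.update(COLORING[following])
    return features

print(color("ǝ", "labiovelar"))       # a high rounded quality
print(color("a", "rounded uvular"))   # a back rounded quality
# Across the two vowels and these contexts the richer surface repertoire described above emerges.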
By the criteria (A)-(G) of Section 1.1, these assimilations are
phonological processes, not coarticulation processes, and they take effect in
the word-level phonology, everywhere within the word domain, but not
across phonological word boundaries (Gordon & Applebaum 2010: 51).
They are categorical and operate on discrete feature values. Note especially
that [–high] consonants trigger a chain shift, so that [o] and [ɛ] represent
either /ǝ/ or /ɐ/, depending on the following consonant.
The surface vowels of Kabardian are thus l-phonemes. The ten-vowel
system that emerges at the word level is symmetric and dispersed. It is
isomorphic to UPSID’s ten-vowel system for Korean (Maddieson 1984:
283). Perhaps significantly, the four-way combination of the values of
[round] and [back] that is its outstanding typological feature is also found in
coterritorial Turkish and its relatives, and elsewhere in Eurasia (Uyghur,
Selkup, Seto, Dagur, among others).
3.3 Marshallese
The other famous case of a vertical vowel system is Marshallese. Bender
(1963) had posited the phonemic vowel system (36) (I have replaced the
unrounded back vowels with their official IPA symbols).
He rejects this further reduction for the phonemic level because it would
violate biuniqueness. Words like [bwuŋw] ‘night’ could be phonemicized
either as /bwʌŋwɯ/ or as /bwɯŋwʌ/, unless one had access to the
morphonological information about the underlying second syllable from
suffixed forms, which is not available at the phonemic level. At the
morphophonemic level, this objection falls away.
Nevertheless Bender’s remarkable solution does not fully adhere to the
principle of biuniqueness, and transcends structuralist procedures of
segmentation and classification, for the context that triggers the vowel
allophones is sometimes itself deleted, and the phoneme /h/, the “heavy”
counterpart of /y/ and /w/, never surfaces at all. For example, the three-way
contrasts in long vowels seen in (39) are due to deleted intervocalic
glides.
(39) a. /mayar/ mɛɛr ‘to tell a lie’, /mahaj/ mɑɑj ‘open field’, /mawar/
mɔɔr ‘bait’
b. /mʌyʌj/ meej ‘dark colored’, /rʌhʌj/ rʌʌj ‘bright colored’, /tʌwʌj/
tooj ‘conspicuous’
4 Conclusion
All putative phonological universals are framed in terms of theory-
dependent categories, and defined on some theory-dependent level of
representation, most often the phonemic level. Therefore the linguistic
descriptions on which they are based cannot be theory-neutral or
atheoretical. The approach of “describing each language in its own terms”
is at best aspirational. With one exception, all grammars I am aware of draw
heavily on existing descriptive frameworks.74 Since there are no theory-
neutral grammars, there is no theory-neutral typology. In terms of Hyman’s
(2008) distinction, there are no “descriptive” universals of language. All
universals are analytic, and their validity often turns on a set of critical
cases where different solutions can be and have been entertained. The
choice between these is not a matter of taste or whimsy but of different
assumptions, each one with testable empirical consequences in a multitude
of other languages. It follows that the search for better linguistic
descriptions, more illuminating typologies, and stronger cross-linguistic
generalizations and universals should go hand in hand.
Stratal OT’s word level representations encode the typologically
significant phonological properties omitted in s-phonemic representations,
including syllabification regardless of whether it is contrastive or not, and
“quasi-phonemes”. They also encode typologically significant abstract
structural information that is missing in the phonetic record, such as
metrical and prosodic structure and feature sharing, while omitting
postlexical features and structurally irrelevant coarticulation phenomena.
This makes word phonology the sweet spot where typological
generalizations appear at their tidiest: it seems likely that it obeys all
phonological universals that phonemic representations do, and then some.
The difference is most dramatic where phonemic theory imposes extremely
abstract analyses, as in vertical and one-vowel systems. But the argument
that the word level should replace phonemics in typological research can
also be made in languages where lexical representations are fairly close to
classical phonemic representations.
Since the lexical level of representation is empirically supported and
formally anchored in Stratal OT and Lexical Phonology, it is a good
candidate for replacing the classical s-phonemic level. That would remove
an unmotivated residue of structuralism and replace it with a well-motivated
level of representation that serves some of the same functions. Our analysis
of unusual syllabification and vowel systems shows it to be a Goldilocks
level that is just right for typology, in that it conforms to some important
generalizations that are obscured for technical reasons in structuralist
phonemic representations, thereby leading to cleaner typologies and turning
near-universals into solid exceptionless universals.
Since lexical representations and l-phonemes were not defined with an
eye on typology, their positive typological implications are a nice bonus that
supports Stratal OT. In broader perspective, the outcome encourages the
joint pursuit of linguistic theory and typology, where universals are not just
inductive generalizations from putatively theory-neutral linguistic
descriptions but hypotheses that at once guide analysis and are informed by
it. It has the hallmark of a good theory, that it leads BOTH to better linguistic
descriptions and to stronger cross-linguistic generalizations and universals.
Going beyond the typology of segmental inventories and syllable structure,
the relevance of lexical representations is worth exploring further in
dispersion theory, language acquisition, language use, and sound change.
References
Abitov, M. L. et al. (eds.). 1957. Grammatika kabardino-čerkesskogo literaturnogo
jazyka. Moskva: Izd-vo Akademii Nauk SSSR.
Alber, Birgit. 2005. Clash, lapse and directionality. Natural Language & Linguistic
Theory 23. 485–542.
Andersen, Henning. 1969. A study in diachronic morphophonemics: The Ukrainian
prefixes. Language 45. 807–830.
Anderson, Stephen R. 1982. The analysis of French shwa, or How to get
something for nothing. Language 58. 534–573.
Anderson, Stephen R. 2000. Reflections on “On the phonetic rules of Russian”.
Folia linguistica 34. 11–27.
Applebaum, Ayla & Matthew Gordon. 2013. A comparative phonetic study of the
Circassian languages. In Chundra Cathcart, Shinae Kang, & Clare S. Sandy
(eds.), Proceedings of the 37th Annual Meeting of the Berkeley Linguistics
Society, 3–17. Berkeley: BLS.
Austin, Peter & Joan Bresnan. 1996. Non-configurationality in Australian aboriginal
languages. Natural Language & Linguistic Theory 14. 215–268.
Avery, Peter, B. Elan Dresher & Keren Rice (eds.). 2008. Contrast in phonology.
Theory, perception, acquisition. Berlin: De Gruyter Mouton.
Bagov, P. M., B. X. Balkarov, T. X. Kuaševa, M. A. Kumaxov & G. B. Rogova
(eds.). 1970. Grammatika kabardino-čerkesskogo literaturnogo jazyka, 1.
Fonetika i morfologija. Moskva: Nauka.
Barreteau, Daniel. 1988. Description du mofu-gudur: Langue de la famille
tchadique parlée au Cameroun. Paris: Editions de l’ORSTOM.
Bedell, George. 1968. Kokugaku grammatical theory. Ph.D. dissertation, MIT,
Cambridge, MA.
Bender, Byron W. 1963. Marshallese phonemics: Labialization or palatalization?
Word 19. 335–341.
Bender, Byron W. 1968. Marshallese phonology. Oceanic Linguistics 7. 16–35.
Bender, Byron W. 1971. Micronesian languages. In Thomas A. Sebeok (ed.),
Current trends in linguistics, vol. 8: Linguistics in Oceania, 426–465. The
Hague: Mouton.
Bermúdez-Otero, Ricardo. 2012. The architecture of grammar and the division of
labour in exponence. In Jochen Trommer (ed.), The morphology and
phonology of exponence: The state of the art, 8–83. Oxford: Oxford University
Press.
Bermúdez-Otero, Ricardo. 2015. Amphichronic explanation and the life cycle of
phonological processes. In Patrick Honeybone & Joseph Salmons (eds.), The
Oxford handbook of historical phonology, 374–399. Oxford: Oxford University
Press.
Bloch, Bernard. 1941. Phonemic overlapping. American Speech 16. 278–284.
Bloch, Bernard. 1953. Contrast. Language 29. 59–61.
Bloomfield, Leonard. 1939. Menomini morphophonemics. In Etudes phonologiques
dédiées à la mémoire de M. le prince N. S. Trubetzkoy, 105–115. Travaux du
Cercle Linguistique de Prague 8. Prague.
Bloomfield, Leonard. 1962. The Menomini language. Edited by Charles F. Hockett.
New Haven, CT: Yale University Press.
Blumenfeld, Lev. 2004. Tone-to-stress and stress-to-tone: Ancient Greek accent
revisited. Proceedings of the 30th Annual Meeting of the Berkeley Linguistics
Society.
Borowsky, Toni. 1993. On the word level. In Sharon Hargus & Ellen Kaisse (eds.),
Studies in Lexical Phonology, 199–234. New York: Academic Press.
Breen, Gavan & Rob Pensalfini. 1999. Arrernte: A language with no syllable
onsets. Linguistic Inquiry 30. 1–25.
Burzio, Luigi. 1994. Principles of English stress. Cambridge: Cambridge University
Press.
Casali, Roderic F. 2014. Assimilation, markedness and inventory structure in
tongue root harmony systems. ROA 1319. http://roa.rutgers.edu/content/articl
e/files/1319_casali_1.pdf.
Catford, J. C. 1942. The Kabardian language. Maître phonétique, 3rd Series 78.
15–18.
Catford, J. C. 1984. Instrumental data and linguistic phonetics. In Jo-Ann W. Higgs
& Robin Thelwall (eds.), Topics in linguistic phonetics: In honour of E. T.
Uldall, 23–48. Coleraine, N. Ireland: New University of Ulster.
Choi, John D. 1991. An acoustic study of Kabardian vowels. Journal of the
International Phonetic Association 21. 4–12.
Chomsky, Noam. 1964. Current issues in linguistic theory. The Hague: Mouton.
Chomsky, Noam, & Morris Halle. 1968. The sound pattern of English. New York:
Harper & Row.
Clairis, Christos. 1977. Première approche du qawasqar: Identification et
phonologie. La Linguistique 13. 145–152.
Clements, G. N. 2001. Representational economy in constraint-based phonology.
In T. Alan Hall (ed.), Distinctive feature theory, 71–146. Berlin: Mouton de
Gruyter.
Clements, G. N. 2003. Feature economy in sound systems. Phonology 20. 287–
333.
Colarusso, John. 1982. Western Circassian vocalism. Folia Slavica 5. 89–114.
Colarusso, John. 1992. A grammar of the Kabardian language. Calgary: University
of Calgary Press.
Colarusso, John. 2006. Kabardian (East Circassian). München: Lincom Europa.
Comrie, Bernard. 1991. On Haruai vowels. In Andrew Pawley (ed.), Man and a
half: Essays in pacific anthropology in honour of Ralph Bulmer, 393–397.
Auckland: The Polynesian Society.
Currie-Hall, Daniel. 2007. The role and representation of contrast in phonological
theory. Ph.D. dissertation, University of Toronto. Toronto Working Papers in
Linguistics.
Currie-Hall, Kathleen. 2013. A typology of intermediate phonological relationships.
The Linguistic Review 30. 215–275.
DeCamp, David. 1958. The pronunciation of English in San Francisco. Part 1.
Orbis 7. 372–391.
DeCamp, David. 1959. The pronunciation of English in San Francisco. Part 2.
Orbis 8. 54–77.
Delisi, Jessica L. 2015. Epenthesis and prosodic structure in Armenian: A
diachronic account. Ph.D. dissertation, UCLA Indo-European Studies.
Dell, François & Oufae Tangi. 1993. Syllabification and empty nuclei in Ath-Sidhar
Rifain Berber. Journal of African Languages and Linguistics 13. 125–162.
Dinnsen, Daniel A. 1985. A re-examination of phonological neutralization. Journal
of Linguistics 21. 265–279.
Downing, Laura J. 1998. On the prosodic misalignment of onsetless syllables.
Natural Language & Linguistic Theory 16. 1–52.
Dresher, B. Elan. 2009. The contrastive hierarchy in phonology. Cambridge:
Cambridge University Press.
Ebeling, C. L. 1960. Linguistic units. The Hague: Mouton.
Evans, Nicholas & Stephen C. Levinson. 2009. The myth of language universals:
Language diversity and its importance for cognitive science. Behavioral and
Brain Sciences 32. 429–492.
Fast, P. W. 1953. Amuesha (Arawak) phonemes. International Journal of American
Linguistics 19. 191–194.
Flemming, Edward. 1995. Auditory representations in phonology. Ph.D.
dissertation, UCLA. (Published New York: Garland Press, 2002.)
Flemming, Edward. 2016. Dispersion theory and phonology. Oxford Research
Encyclopedias: Linguistics. http://linguistics.oxfordre.com/view/10.1093/acrefo
re/9780199384655.001.0001/acrefore-9780199384655-e-110
Fougeron, Cécile & Rachid Ridouane. 2008. On the phonetic implementation of
syllabic consonants and vowel-less syllables in Tashlhiyt. Estudios de
Fonética Experimental 17. 139–175.
Gordon, Matthew & Ayla Applebaum. 2006. Phonetic structures in Turkish
Kabardian. Journal of the International Phonetic Association 36. 159–186.
Gravina, Richard. 2014. The phonology of Proto-Central Chadic: The
reconstruction of the phonology and lexicon of Proto-Central Chadic, and the
linguistic history of the Central Chadic languages. Ph.D. dissertation, Leiden
University. https://www.lotpublications.nl/Documents/375_fulltext.pdf
Green, Jenny. 1994. A learner’s guide to Eastern and Central Arrernte. Alice
Springs: IAD Press.
Greenberg, Joseph H., Charles A. Ferguson & Edith A. Moravcsik (eds.). 1978.
Universals of human language, volume 2. Stanford: Stanford University Press.
Guimarães, Maximiliano & Andrew Nevins. 2013. Probing the representation of
nasal vowels in Brazilian Portuguese with language games. Organon 28. 155–
178. ling.auf.net/lingbuzz/001693/current.pdf
Gussenhoven, Carlos. 1986. English plosive allophones and ambisyllabicity.
Gramma 10. 119–142.
Hale, Mark. 2000. Marshallese phonology, the phonetics-phonology interface and
historical linguistics. The Linguistic Review 17. 241–257.
Hall, Tracy Alan. 1993. The phonology of German /R/. Phonology 10. 43–82.
Halle, Morris. 1959. The sound pattern of Russian. The Hague: Mouton.
Halle, Morris. 1970. Is Kabardian a vowel-less language? Foundations of
Language 6. 95–103.
Harris, James W. & Ellen M. Kaisse. 1999. Palatal vowels, glides and obstruents in
Argentinean Spanish. Phonology 16. 117–190.
Harris, John. 1987. Non-structure-preserving rules in Lexical Phonology:
Southeastern Bantu harmony. Lingua 72. 255–292.
Harris, John. 1990. Derived phonological contrasts. In Susan Ramsaran (ed.),
Studies in the pronunciation of English: A commemorative volume in honour
of A. C. Gimson, 87–105. London: Routledge.
Henderson, John & Veronica Dobson. 1994. Eastern and Central Arrernte to
English dictionary. Alice Springs: Institute for Aboriginal Development.
Hsieh, Feng-Fan, Michael Kenstowicz & Xiaomin Mou. 2009. Mandarin
adaptations of coda nasals in English loanwords. In Andrea Calabrese & Leo
Wetzels (eds.), Loan phonology: Issues and controversies, 131–154.
Amsterdam: John Benjamins.
Hualde, José Ignacio. 2005. Quasi-phonemic contrasts in Spanish. In Benjamin
Schmeiser, Vineeta Chand, Ann Kelleher & Angelo Rodriguez (eds.), West
Coast Conference on Formal Linguistics 23. Somerville, MA: Cascadilla
Press.
Hyde, Brett. 2002. A restrictive theory of metrical stress. Phonology 19. 313–359.
Hyman, Larry M. 1976. Phonologization. In Alphonse G. Juilland, A. M. Devine &
Laurence D. Stephens (eds.), Linguistic studies offered to Joseph H.
Greenberg on the occasion of his sixtieth birthday, 407–418. Saratoga, CA:
Anma Libri.
Hyman, Larry M. 1983. Are there syllables in Gokana? In Jonathan Kaye, Hilda
Koopman, Dominique Sportiche & André Dugas (eds.), Current approaches to
African linguistics, vol. 2, 171–179. Dordrecht: Foris.
Hyman, Larry M. 1985. A theory of phonological weight. Dordrecht: Foris.
Hyman, Larry M. 2008. Universals in phonology. The Linguistic Review 25. 83–
137.
Hyman, Larry M. 2011. Does Gokana really have no syllables? Or: what’s so great
about being universal? Phonology 28. 55–85.
Hyman, Larry M. 2015. Does Gokana really have syllables? A postscript.
Phonology 32. 303–306.
Itô, Junko & Armin Mester. 2003. Japanese morphophonemics: Markedness and
word structure (Linguistic Inquiry Monograph Series 41). Cambridge, MA: MIT
Press.
Itô, Junko & Armin Mester. 2015a. Sino-Japanese phonology. In Haruo Kubozono
(ed.), Handbook of Japanese phonetics and phonology, 289–312. Berlin:
Mouton de Gruyter.
Itô, Junko & Armin Mester. 2015b. Word formation and phonological processes. In
Haruo Kubozono (ed.), Handbook of Japanese phonetics and phonology,
363–395. Berlin: Mouton de Gruyter.
Jakobson, Roman. 1931. Die Betonung und ihre Rolle in der Wort- und
Syntagmaphonologie. In Réunion Phonologique Internationale tenue à
Prague: 18–21/XII 1930, 164–183. (Reprinted in Jakobson, Selected writings,
vol. 1: Phonological studies, 117–136. The Hague: Mouton, 1962.)
Jakobson, Roman. 1958. Typological studies and their contribution to historical
comparative linguistics. In Proceedings of the 8th International Congress of
Linguists, Oslo. (Reprinted in Jakobson, Selected writings, vol. 1:
Phonological studies, 523–532. The Hague: Mouton, 1962.)
Jakobson, Roman, Gunnar Fant & Morris Halle. 1952. Preliminaries to speech
analysis. Cambridge, MA: Acoustics Laboratory, Massachusetts Institute of
Technology.
Jakovlev, N. F. 1923. Tablitsy fonetiki kabardinskogo jazyka. In Trudy podrazriada
issledovaniia severokavkazskikh jazykov pri Institute Vostokovedeniia v
Moskve, I. Moskva.
Jakovlev, N. F. 1948. Grammatika literaturnogo kabardino-čerkesskogo jazyka.
Moskva: Izd-vo AN SSSR.
Janda, Richard D. 2003. “Phonologization” as the start of dephoneticization – or,
on sound change and its aftermath: Of extension, lexicalization, and
morphologization. In Brian D. Joseph & Richard D. Janda (eds.), The
handbook of historical linguistics, 401–422. Oxford: Blackwell.
Jurgec, Peter. 2007. Schwa in Slovenian is epenthetic. 2nd Congress of the Slavic
Linguistic Society. Berlin: ZAS. http://www.hum.uit.no/a/jurgec/schwa.pdf
(accessed August 24 2007).
Kager, René. 2007. Feet and metrical stress. In Paul de Lacy (ed.), The
Cambridge handbook of phonology, 195–227. Cambridge: Cambridge
University Press.
Kahn, Daniel. 1976. Syllable-based generalizations in English phonology. Ph.D.
dissertation, MIT.
Kaplan, Abby. 2011. Phonology shaped by phonetics: The case of intervocalic
lenition. ROA 1077. http://roa.rutgers.edu/article/view/1107
Karvonen, Dan. 2005. Word prosody in Finnish. Ph.D. dissertation, University of
California at Santa Cruz.
Kawahara, Shigeto. 2012. Review of Laurence Labrune, The phonology of
Japanese (2012). Phonology 29. 540–548.
Kenstowicz, Michael & Nabila Louriz. 2009. Reverse engineering: Emphatic
consonants and the adaptation of vowels in French loanwords into Moroccan
Arabic. Brill’s Annual of Afroasiatic Languages and Linguistics 1. 41–74.
Kessler, Brett. 1994. Sandhi and syllables in Classical Sanskrit. In Erin Duncan,
Donka Farkas & Philip Spaelti (eds.), The proceedings of the 12th West Coast
Conference on Formal Linguistics, 35–50. Stanford, CA: CSLI Publications.
Keyser, Samuel Jay & Kenneth N. Stevens. 2006. Enhancement and overlap in
the speech chain. Language 82. 33–63.
Kim, Susan. 2001. Lexical Phonology and the fricative voicing rule. Journal of
Linguistics 29. 149–161.
Kiparsky, Paul. 1996. Allomorphy or morphophonology? In Rajendra Singh (ed.),
Trubetzkoy’s orphan: Proceedings of the Montréal roundtable
“Morphophonology: Contemporary Responses”, 13–31. Amsterdam: John
Benjamins.
Kiparsky, Paul. 2003. Syllables and moras in Arabic. In Caroline Féry & Ruben van
de Vijver (eds.), The syllable in Optimality Theory, 147–182. Cambridge:
Cambridge University Press.
Kiparsky, Paul. 2006. Amphichronic linguistics vs. Evolutionary Phonology.
Theoretical Linguistics 32. 217–236.
Kiparsky, Paul. 2015. Phonologization. In Patrick Honeybone & Joseph Salmons
(eds.), The Oxford handbook of historical phonology, 563–582. Oxford: Oxford
University Press.
Kiparsky, Paul. To Appear. Paradigms and opacity. Stanford: CSLI Press.
Kleber, Felicitas, Tina John & Jonathan Harrington. 2010. The implications for
speech perception of incomplete neutralization of final devoicing in German.
Journal of Phonetics 38. 185–196.
Kohler, Klaus J. 1966. Is the syllable a phonological universal? Journal of
Linguistics 2. 207–208
Korhonen, Mikko. 1969. Die Entwicklung der morphologischen Methode im
Lappischen. Finnisch-Ugrische Forschungen 37. 203–262.
Kubozono, Haruo. 1999. Mora and syllable. In Natsuko Tsujumura (ed.), The
handbook of Japanese linguistics, 31–61. Oxford: Blackwell.
Kubozono, Haruo. 2003. The syllable as a unit of prosodic organization in
Japanese. In Caroline Féry & Ruben van de Vijver (eds.), The syllable in
Optimality Theory, 99–122. Cambridge: Cambridge University Press.
Kumaxov, M. A. 1973. Teorija monovokalizma i zapadnokavkazskie jazyki.
Voprosy jazykoznanija 4. 54–67.
Kumaxov, M. A. 1984. Očerki obščego i kavkazskogo jazykoznanija. Nal’cik:
Izdatel’stvo El’brus.
Kuipers, Aert H. 1960. Phoneme and morpheme in Kabardian. The Hague:
Mouton.
Labov, William. 1994. Principles of linguistic change. Vol. 1: Internal factors.
Oxford: Wiley-Blackwell.
Labrune, Laurence. 2012. Questioning the universality of the syllable: Evidence
from Japanese. Phonology 29. 113–152.
Ladd, D. Robert. 2006. “Distinctive phones” in surface representation. In Louis M.
Goldstein, D. H. Whalen & Catherine T. Best (eds.), Laboratory Phonology,
vol. 8, 3–26. Berlin: Mouton de Gruyter.
Laycock, D. C. 1965. The Ndu language family (Linguistic Circle of Canberra
Publications, Series C, 1). Canberra: Australian National University.
Liberman, Anatoly. 1991. Phonologization in Germanic: Umlauts and vowel shifts.
In Elmer H. Antonsen & Hans Henrich Hock (eds.), Stæfcræft: Studies in
Germanic linguistics, 125–137. Amsterdam: John Benjamins.
Lindblom, Björn. 1986. Phonetic universals in vowel systems. In John J. Ohala &
Jeri J. Jaeger (eds.), Experimental phonology, 13–44. Orlando: Academic
Press.
Lindblom, Björn. 1990. Explaining phonetic variation: A sketch of the H&H theory.
In W. J. Hardcastle & A. Marchal (eds.), Speech production and speech
modelling, 403–439. Dordrecht: Kluwer.
McCarthy, John & Alan Prince. 1986. Prosodic morphology 1986. http://scholarwor
ks.umass.edu/linguist_faculty_pubs/13
McCarthy, John & Alan Prince. 1993. Prosodic morphology I: Constraint interaction
and satisfaction. http://scholarworks.umass.edu/linguist_faculty_pubs/14
McCawley, James D. 1968. The phonological component of a grammar of
Japanese. The Hague: Mouton.
MacMahon, April M. S. 1991. Lexical Phonology and sound change: The case of
the Scottish vowel length rule. Journal of Linguistics 27. 29–53.
Maddieson, Ian. 1984. Patterns of sounds. Cambridge: Cambridge University
Press.
Maddieson, Ian. 2013. Chapters in Matthew S. Dryer & Martin Haspelmath (eds.),
The world atlas of language structures online. Leipzig: Max Planck Institute for
Evolutionary Anthropology. http://wals.info/
Maddieson, Ian & Kristin Precoda. 1990. Updating UPSID. UCLA Working Papers
in Phonetics 74. 104–111.
Manaster-Ramer, Alexis. 1994. On three East Slavic non-counterexamples to
Stieber’s Law. Journal of Slavic Linguistics 2. 164–170.
Martin, Samuel. 1975. A reference grammar of Japanese. New Haven, CT: Yale
University Press.
Martinet, André. 1964. Elements of general linguistics. Chicago: University of
Chicago Press.
Martínez-Gil, Fernando. 1993. Galician nasal velarization as a case against
Structure Preservation. In Proceedings of the 19th Annual Meeting of the
Berkeley Linguistics Society, 254–267.
Matasović, Ranko. 2006. A short grammar of East Circassian (Kabardian). Zagreb.
http://mudrac.ffzg.unizg.hr/~rmatasov/KabardianGrammar.pdf
Mohanan, K. P. 1982. Grammatical relations and clause structure in Malayalam. In
Joan Bresnan (ed.), The mental representation of grammatical relations, 504–
589. Cambridge, MA: MIT Press.
Nagarajan, Hemalatha. 1995. Gemination of stops in Tamil: Implications for the
phonology-syntax interface.
https://www.ucl.ac.uk/pals/research/linguistics/publications/wpl/95paper
Odden, David. 1995. The status of onsetless syllables in Kikerewe. OSU Working
Papers in Linguistics 47. 89–110.
Ó Siadhail, Mícheál. 1989. Modern Irish: Grammatical structure and dialectal
variation. Cambridge: Cambridge University Press.
Ohala, John J. 1990. Alternatives to the sonority hierarchy for explaining
segmental sequential constraints. In Michael Ziolkowski, Manuela Noske, &
Karen Deaton (eds.), Papers from the 26th Regional Meeting of the Chicago
Linguistic Society. Vol. 2: Parasession on the syllable in phonetics and
phonology, 319–338. Chicago: CLS.
Ohala, John J. & Haruko Kawasaki-Fukumori. 1997. Alternatives to the sonority
hierarchy for explaining segmental sequential constraints: In Stig Eliasson &
Ernst Håkon Jahr (eds.), Language and its ecology: Essays in memory of
Einar Haugen, 343–365. Berlin: Mouton de Gruyter.
Padgett, Jaye. 2010. Russian consonant-vowel interactions and derivational
opacity. In W. Brown, A. Cooper, A. Fisher, E. Kesici, N. Predolac, & D. Zec
(eds.), Proceedings of the 18th Formal Approaches to Slavic Linguistics
meeting, 353–382. Ann Arbor: Michigan Slavic Publications.
Padgett, Jaye & Máire Ní Chiosáin. 2011. Markedness, segment realization and
locality in spreading. In Linda Lombardi (ed.), Segmental phonology in
Optimality Theory: Constraints and representations, 118–156. Cambridge:
Cambridge University Press.
Parker, Aliana. 2011. It’s that schwa again! Towards a typology of Salish schwa.
Working Papers of the Linguistics Circle of the University of Victoria 21. 9–21.
Pawley, Andrew & Ralph Bulmer. 2011. A dictionary of Kalam with ethnographic
notes. Canberra, A.C.T.: Pacific Linguistics, School of Culture, History and
Language, College of Asia and the Pacific, The Australian National University.
Pensalfini, Robert. 1998. The development of (apparently) onsetless
syllabification: A constraint-based approach. In M. Catherine Gruber, Derrick
Higgins, Kenneth Olson & Tamra Wysocki (eds.), Papers from the 32nd
Regional Meeting of the Chicago Linguistic Society, 167–178. Chicago: CLS.
Piroth, Hans Georg & Peter M. Janker. 2004. Speaker-dependent differences in
voicing and devoicing of German obstruents. Journal of Phonetics 32. 81–
109.
Port, Robert F. & Michael O’Dell. 1985. Neutralization of syllable-final voicing in
German. Journal of Phonetics 13. 455–471.
Port, Robert F. & Penny Crawford. 1989. Incomplete neutralization and pragmatics
in German. Journal of Phonetics 17. 257–282.
Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in
generative grammar. RuCCS Technical Report 2, Rutgers University,
Piscataway, NJ: Rutgers University Center for Cognitive Science. Revised
version published 2004 by Blackwell.
Ridouane, Rachid. 2007. Gemination in Tashlhiyt Berber: An acoustic and
articulatory study. Journal of the International Phonetic Association 37. 119–
142.
Roca, Iggy. 2005. Strata, yes, structure-preservation, no. In Twan Geerts, Ivo van
Ginneken, & Haike Jacobs (eds.), Romance languages and linguistic theory
2003, 197–218. Amsterdam: John Benjamins.
Rood, David S. 1975. Implications of Wichita phonology. Language 51. 315–337.
Rubach, Jerzy. 2000. Backness switch in Russian. Phonology 17. 39–64.
Ryan, Kevin. 2014. Onsets contribute to syllable weight: Statistical evidence from
stress and meter. Language 90. 309–341.
Šagirov, A. K. 1967. Kabardinskij jazyk. In V. V. Vinogradov (ed.), Jazyki narodov
SSSR, vol. 4: Iberijsko-kavkazskie jazyki, 165–183. Moskva: Nauka.
Schwartz, Geoffrey. 2013. A representational parameter for onsetless syllables.
Journal of Linguistics 49. 613–646.
Schwartz, Jean-Luc, Louis-Jean Boë, Nathalie Vallée & Christian Abry. 1997.
Major trends in vowel system inventories. Journal of Phonetics 25. 233–253.
Scobbie, James M. & Jane Stuart-Smith. 2008. Quasi-phonemic contrast and the
fuzzy inventory: Examples from Scottish English. In Avery, Dresher & Rice
(eds.) 2008, 87–113.
Smith, Tony. 1999. Muyang phonology. Yaoundé: SIL.
Sommer, Bruce A. 1970a. An Australian language without CV syllables.
International Journal of American Linguistics 36. 57–58.
Sommer, Bruce A. 1970b. The shape of Kunjen syllables. In Didier L. Goyvaerts
(ed.), Phonology in the 1980s, 231–244. Ghent: Story-Scientia.
Steriade, Donca. 1999. Alternatives in syllable-based accounts of consonantal
phonotactics. In Osamu Fujimura, Brian Joseph & Bohumil Palek (eds.),
Proceedings of LP 1998, vol. 1, 205–246. Prague: Charles University and
Karolinum Press.
Stevens, Kenneth N. & Samuel Jay Keyser. 1989. Primary features and their
enhancement in consonants. Language 65. 81–106.
Svantesson, Jan-Olof. 1995. Cyclic syllabification in Mongolian. Natural Language
& Linguistic Theory 13. 755–766.
Svantesson, Jan-Olof, Anna Tsendina, Anastasia M. Karlsson & Vivan Franzen.
2005. The phonology of Mongolian. Oxford: Oxford University Press.
Tabain, Marija, Gavan Breen & Andrew Butcher. 2004. VC vs. CV syllables: A
comparison of Aboriginal languages with English. Journal of the International
Phonetic Association 34. 175–200.
Topintzi, Nina. 2010. Onsets: Suprasegmental and prosodic behaviour.
Cambridge: Cambridge University Press.
Topintzi, Nina & Andrew Nevins. 2017. Moraic onsets in Arrernte. Phonology 34.
615–650.
Trubetzkoy, N. S. 1925. Review of Jakovlev 1923. Bulletin de la Société de
Linguistique de Paris 26. 277–286.
Trubetzkoy, N. S. 1929. Zur allgemeinen Theorie der phonologischen
Vokalsysteme. Travaux du Cercle Linguistique de Prague 1. 39–67.
Trubetzkoy, N. S. 1939. Grundzüge der Phonologie. Prague: Travaux du Cercle
Linguistique de Prague, No. 7.
Turčaninov, G. & M. Tsagov. 1940. Grammatika kabardinskogo jazyka. Moskva:
Izd-vo Akademii Nauk.
Turpin, Myfany. 2012. The metrics of Kaytetye rain songs, a ceremonial repertory
of Central Australia. http://linguistics.ucla.edu/event/icalrepeat.detail/2012/10/
03/325/-/pho
Vaux, Bert. 1998a. The laryngeal specifications of fricatives. Linguistic Inquiry 29.
497–511.
Vaux, Bert. 1998b. The phonology of Armenian. Oxford: Oxford University Press.
Vaux, Bert. 2003. Syllabification in Armenian, universal grammar, and the lexicon.
Linguistic Inquiry 34. 91–125.
Vaux, Bert & Andrew Wolfe. 2009. The appendix. In Eric Raimy & Charles E.
Cairns (eds.), Contemporary views on architecture and representations in
phonology, 101–143. Cambridge, MA: MIT Press.
Vaux, Bert & Bridget Samuels. 2015. Explaining vowel systems: Dispersion theory
vs. natural selection. The Linguistic Review 32. 573–599.
Vennemann, Theo. 1972. On the theory of syllabic phonology. Linguistische
Berichte 18. 1–18.
Versteegh, Kees. 1997. Landmarks in linguistic thought, vol. 3: The Arabic
linguistic tradition. London: Routledge.
Wells, John. 1982. Accents of English. Cambridge: Cambridge University Press.
Wiese, Richard. 1986. Schwa and the structure of words in German. Linguistics
24. 697–724.
Wiese, Richard. 2000. The phonology of German. Oxford: Oxford University Press.
Wood, Sidney. 1994. A spectrographic analysis of vowel allophones in Kabardian.
Working Papers 42. 241–250. Lund: Lund University Department of
Linguistics.
Ian Maddieson
Is phonological typology possible
without (universal) categories?
1 Introduction
Phoneticians and phonologists have generally relied on a basic descriptive
framework which presupposes a set of categories anchored in local and
dynamic aspects of the speech production mechanism and in the auditory
and perceptual systems and the mental processing capacities largely
common to all humans, as well as in the nature of the acoustic signal that
carries speech between the speaker and the listener. These include terms for
places of articulation such as bilabial, velar, or pharyngeal, labels for
categories of articulatory configurations and their auditory characteristics
such as plosive, fricative, or nasal, and categories for acoustic properties
such as burst, formant, and noise. Specific entities such as voiceless bilabial
plosive ([p]) or low central unrounded vowel ([a]) are also referenced. In
addition, higher-order categories such as consonant and vowel, liquid,
sonorant and obstruent, coronal and guttural are customary. Categories such
as the syllable and its component parts of onset, nucleus, rhyme, and coda,
and other larger units such as the intonational phrase are also familiar.
Analytical concepts such as tones, phonemes (or similar notions of
contrastive elements), and stress, and inventories of these elements as well
as categories that express relationships between variant forms, such as
assimilation, gemination, or lenition, also form part of this framework.
Comparison between the phonetic and phonological properties of languages
has mostly been based on such categorical properties: this or that language
has a similar vowel system to another but a distinct one from yet others;
these languages allow limited syllable structures but others allow a larger
range of structures; these languages require nasals before stops to assimilate
in place but these others don’t, and so on.
The most familiar body of work on phonetics and phonology from the
nineteenth, twentieth, and twenty-first centuries (including, for example,
Sweet 1877, Jespersen 1889, Trubetzkoy 1939, Hockett 1955, Catford
1977, Chomsky & Halle 1968, Maddieson 1984, Stevens 1998, Ladefoged
& Maddieson 1996, etc., etc.) for the most part assumes that the categories
established are more-or-less valid for any language without explicitly
arguing the point. And much of this familiar conceptual framework and
terminology has roots in considerably older traditions of scholarship in
Greek, Roman, Arabic, Indian, or Chinese cultures which also imply that –
even if the terms are used to describe properties of specific languages such
as Latin or Sanskrit – the descriptive framework itself is not language-
specific.
In other words, much of the terminology used in the phonetic sciences
and applied in phonological analysis refers to categories that are determined
outside the scope of an individual language. That is, they seem to fit the bill
of being “pre-determined categories” of the sort that Haspelmath (2007)
declared “do not exist”. Haspelmath (2010) argues that cross-language
comparison, and hence any form of linguistic typology, cannot be based on
“descriptive categories” but must instead be based on “comparative
concepts”. This seems like a distinction without a difference. The notion of
a category is of course widely discussed in philosophical literature and in
many specialized fields, and is open to divergent interpretations, but by-
and-large anything that can be called a concept can be interpreted as a
category. Harnad (2005) in a trenchant (and entertaining) article argues that
any cognitive act is necessarily an act of categorization. The very name
“typology” implies recognition of types, that is, categories. But purely
physical scales can be non-categorical. As an example, Harnad mentions
the categorical set of colors as opposed to the continuous property of
electromagnetic wavelength/frequency, which human perceptual and
cognitive systems divide up into colors.
In this chapter I consider whether it is conceivable (or useful) to discuss
within- and between-language similarities and differences without forming
categories, i.e., without appealing to any discrete variables (not necessarily
the familiar ones) that are taken to be language-independent. That is, can
we insightfully compare languages or their phonological attributes without
establishing types? In particular the foundation of various continuous-
seeming scales proposed in the literature will be discussed.
2 Commensurability
Any kind of linguistic analysis, most especially typology, depends on being
able to say that some tokens are exemplars of the same “entity” or can be
placed in a commensurable space: otherwise each speech act is sui generis
and no generalizations are possible.
“Sameness” could be physical identity, in which case it would not be
necessary to form any kind of over-arching category to subsume any
differences. But no two utterances, even by the same speaker of the same
lexical string in the same language, are ever identical. Hence, IDENTITY can
never provide a basis for grouping of phonetic/phonological samples.
Repetitions of the same utterance by the same speaker even under similar
conditions differ in many details. Consider the two spectrograms in Figure
4.1. These show two repetitions by a female English speaker of a string of
digits which form a familiar telephone number. The speaker is very
habituated to saying this string and the two repetitions are so similar that
listeners cannot reliably say if they have heard two playings of the same
recording or two different recordings.
These two utterances have almost identical overall timing and very
similar F0 contours – but nonetheless they differ in many details of timing,
amplitude, and spectral composition, some of which are indicated in the
annotations provided on the figure.
Note that for convenience the differences are mostly described here
using categorical labels, e.g., vowel, nasal, burst, formant, etc., since these
terms are familiar. However, in principle it is possible to largely avoid these
categorical labels by using circumlocutions referring only to continuous
variables, such as “the time interval between the first major increase in
signal amplitude and the following salient reduction in amplitude” instead
of “first vowel”.
5 Sonority scaling
Similarly, work by cultural anthropologists examining the relationship
between cultural/ environmental factors and phonological structures is
based ultimately on categorical data, even when a continuous scale is used.
This work seems to be little known among linguists, so will be summarized
in some detail here.
Figure 4.10: Mean and range of TPD scores for languages grouped
into six regional clusters (after Figure 4 in Anderson 2011).
7 Final remarks
It has been shown that several proposals to characterize aspects of
phonological typology along continuous scales that are found in the
literature turn out to be based on a prior step involving categorical
classifications. A thought experiment to devise a sonority measure that was
entirely free of categorical assumptions seemed to founder on practical
and theoretical difficulties. However, one that makes only a minimal appeal
to prior established categories shows some promise of providing interesting
differentiation between languages, and a potentially intriguing connection
to hypotheses suggesting adaptation to aspects of the environment. This
perhaps shows that cross-language comparisons using scalar variables,
albeit derived from categorization, may nonetheless be of some interest.
Is the failure to devise purely continuous scales for typological
properties simply a failure of imagination, due to over-familiarity with the
traditional way of observing phonetic and phonological characteristics
through the lens of established categories? Quite possibly. It seems also
possible that Harnad is right: “To cognize is to categorize”. Any attempt to
distribute languages along a parameter seems to entail defining some
property which is present or absent either in an absolute or a gradient
fashion in each language examined. A property is necessarily a categorical
entity. This does not mean that the categories that are familiar in the
established traditions in the phonetic sciences are necessarily the most
useful we can devise, or that they are applicable to all languages. But to this
phonological typologist, it does not seem practicable to compare languages
in the absence of categories.
References
Abercrombie, David. 1967. Elements of general phonetics. Edinburgh: Edinburgh
University Press.
Arvaniti, Amalia. 2012. The usefulness of metrics in the quantification of speech
rhythm. Journal of Phonetics 40. 351–373.
Atkinson, Quentin. 2011. Phonemic diversity supports serial founder effect model
of language expansion from Africa. Science 332. 346–349.
Catford, J. C. 1977. Fundamental problems in phonetics. London: Longman.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York:
Harper & Row.
Dauer, Rebecca. 1983. Stress-timing and syllable-timing re-analyzed. Journal of
Phonetics 11. 51–62.
Dellwo, Volker. 2009. Choosing the right rate normalization methods for
measurements of speech rhythm. In Proceedings of AISV.
Dellwo, Volker, Adrian Leemann, & Marie-José Kolly. 2012. Speaker idiosyncratic
rhythmic features in the speech signal. In Proceedings of Interspeech 2012,
Portland OR.
Easterday, Shelece, Jason Timm, & Ian Maddieson. 2011. The effects of
phonological structure on the acoustic correlates of rhythm. ICPhS Hong
Kong.
Ember, Carol R. & Marvin Ember. 2007. Climate, econiche, and sexuality:
Influences on sonority in language. American Anthropologist, New Series 109.
180–185.
Ember, Carol R. & Marvin Ember. 2007. Rejoinder to Munroe and Fought‘s
commentary. American Anthropologist, New Series 109. 785.
Ember, Marvin & Carol R. Ember. 1999. Cross-language predictors of consonant-
vowel syllables. American Anthropologist, New Series 101. 730–742.
Fought, John G., Robert L. Munroe, Carmen R. Fought, & Erin M. Good. 2004.
Sonority and climate in a world sample of languages. Cross-Cultural Research
38. 27–51.
Grabe, Esther & E. L. Low. 2003. Durational variability in speech and the rhythm
class hypothesis. In Carlos Gussenhoven & Natasha Warner (eds.), Papers in
laboratory phonology 7, 515–546. Berlin: Mouton de Gruyter.
Harnad, Stevan. 2005. To cognize is to categorize: Cognition is categorization. In
Henri Cohen & Claire Lefebvre (eds.), Handbook of categorization in cognitive
science, 19–43. Amsterdam: Elsevier.
Haspelmath, Martin. 2007. Pre-established categories don’t exist: Consequences
for language description and typology. Linguistic Typology 11. 119–132.
Haspelmath, Martin. 2010. Comparative concepts and descriptive categories in
crosslinguistic studies. Language 86. 663–687.
Hockett, Charles. 1955. A manual of phonology (Indiana University Publications in
Anthropology and Linguistics, Memoir 11). Bloomington IN: Indiana University.
Huber, Brad R., Vendula Linhartova, & Dana Cope. 2004. Measuring paternal
certainty using cross-cultural data. World Cultures 15. 48–59.
Jespersen, Otto. 1889. The articulations of speech sounds represented by means
of analphabetic symbols. Marburg: N. G. Elwert.
Ladefoged, Peter & Ian Maddieson. 1996. The sounds of the world’s languages. Oxford:
Blackwell.
Lloyd James, Arthur. 1940. Speech signals in telephony. London: Pitman.
Maddieson, Ian. 1984. Patterns of sounds. Cambridge: Cambridge University
Press.
Maddieson, Ian, Tanmoy Bhattacharya, Eric D. Smith, & William Croft. 2011.
Geographical distribution of phonological complexity. Linguistic Typology 15.
267–279.
Maddieson, Ian & Christophe Coupé. 2015. Human language diversity and the
acoustic adaptation hypothesis. Proceedings of Meetings on Acoustics 25,
060005. http://dx.doi.org/10.1121/2.0000198
Maddieson, Ian, Sébastien Flavier, Christophe Coupé, Egidio Marsico, & François
Pellegrino. 2013. LAPSyD: Lyon-Albuquerque Phonological Systems
Database. Interspeech 2013, Lyon.
Munroe, Robert L., R. H. Munroe, & S. Winters. 1996. Cross-cultural correlates of
the CV syllable. Cross-Cultural Research 30. 60–83.
Munroe, Robert L. & Megan Silander. 1999. Climate and the consonant-vowel
syllable: A replication within language families. Cross-Cultural Research 33.
43–62.
Pike, Kenneth L. 1946. Intonation of American English. Ann Arbor: University of
Michigan.
Ramus, Franck, Marina Nespor, & Jacques Mehler. 1999. Correlates of linguistic
rhythm in the speech signal. Cognition 73. 265–292.
Stevens, Kenneth N. 1998. Acoustic phonetics. Cambridge, MA: MIT Press.
Sweet, Henry. 1877. A handbook of phonetics. Oxford: Clarendon Press.
Trubetzkoy, Nikolai S. 1939. Grundzüge der Phonologie. Travaux du Cercle
Linguistique de Prague 7.
Jeffrey Heinz
The computational nature of
phonological generalizations
2 What is phonology?
The fundamental insight in the twentieth century which shaped the
development of generative phonology is that the best explanation of the
systematic variation in the pronunciation of morphemes is to posit a single
underlying mental representation of the phonetic form of each morpheme
and to derive its pronounced variants with context-sensitive transformations.
This development, present in Chomsky (1951) and Halle (1959), was
perhaps stated most fully and completely with Chomsky & Halle (1968), and
persists in OT (Prince & Smolensky 2004) today.
Thus there is a point of agreement between different theories of
phonology, which is stated in (1).
In fact every logically possible string which does not contain this marked
substructure is in the extension of the ∗NC˳ constraint. As will be discussed
in greater detail in Section 5.1, substrings like nt̥ are sub-structures of
strings.
Another example comes from syllable structure. It is widely held that
codas are marked. Words with codas are said to violate the constraint
NOCODA. Thus the well-formed structures picked out by this constraint are
all and only those strings which do not contain codas as indicated by (4).
3.2 Transformations
Extensions of transformations can also be described as infinite sets. In this
case the elements of the set are pairs: the first element of the pair represents
the INPUT and the second element the OUTPUT. Such extensions have been
called MAPS by Tesar (2014) and others.
As an example, consider the SPE-style rule shown in (5), which
epenthesizes [ɨ] between stridents.
The extension of this rule can be interpreted as every pair of strings (i, o)
such that if i is the input to the rule o would be the output. The extension of
(5) is shown in (6).
Here is another example. Consider the rule in (7), which devoices word-final
obstruents.
(8) { (rat, rat), (sap, sap), (rad, rat), (sab, sap), (sag, sak), (flugenrat,
flugenrat), (flugenrad, flugenrat), . . . }
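To make the idea of an extension as a set of pairs concrete, here is a minimal sketch (not the chapter's own formalism) that computes (input, output) pairs for a final-devoicing rule; the segment inventory and the voiced/voiceless pairings are illustrative assumptions.

```python
# Minimal sketch: the extension of a final-devoicing rule viewed as a set of
# (input, output) pairs. The voiced/voiceless pairings are illustrative
# assumptions, not an inventory taken from the chapter.
DEVOICE = {"b": "p", "d": "t", "g": "k", "z": "s", "v": "f"}

def devoice_final(form: str) -> str:
    """Devoice a word-final obstruent, if there is one."""
    if form and form[-1] in DEVOICE:
        return form[:-1] + DEVOICE[form[-1]]
    return form

# A finite sample of the (infinite) extension in (8):
sample = ["rat", "sap", "rad", "sab", "sag", "flugenrad"]
pairs = [(w, devoice_final(w)) for w in sample]
# [('rat', 'rat'), ('sap', 'sap'), ('rad', 'rat'), ('sab', 'sap'),
#  ('sag', 'sak'), ('flugenrad', 'flugenrat')]
```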
At the bottom of the hierarchy are the “finite stringsets”. These stringsets are
of finite cardinality. Unlike infinite sets, which require a generative grammar
to generate or recognize them, elements of finite sets can be listed. In
introduction to linguistics courses, we learn that linguistic generalizations
cannot be modeled with finite sets because there is no principled upper
bound on the length of possible words or sentences. The finite languages are
the most restrictive, but least expressive class. In between the computably
enumerable and finite classes are the regular, context-free and context-
sensitive regions.
An important aspect of the hierarchy is that several regions have
independently motivated, equivalent descriptions. Regular stringsets for
instance can be defined with monadic second order logical formulae, finite-
state acceptors, or regular expressions. Computer scientists Engelfriet and
Hoogeboom explain: “It is always a pleasant surprise when two formalisms,
introduced with different motivations, turn out to be equally powerful, as
this indicates that the underlying concept is a natural one. Additionally, this
means that notions and tools from one formalism can be made use of within
the other, leading to a better understanding of the formalisms under
consideration” (Engelfriet & Hoogeboom 2001: 216). At a high level of
abstraction, the different characterizations can be thought of as different
views on the same underlying object roughly in the same way different
equations in different coordinate systems can describe the same circle. The
more views we have, the better we can understand what it is we are looking
at.
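As a small illustration of "different views on the same object", the sketch below defines one toy regular stringset twice, once with a regular expression and once with a two-state acceptor, and checks that the two descriptions agree; the ban on the substring "nt" (standing in for an NC̥ cluster) and the alphabet {n, t, a} are assumptions made only for the example.

```python
# Sketch: two formalisms picking out the same regular stringset.
# The toy constraint (no "nt" substring) and the alphabet {n, t, a}
# are assumptions made for illustration.
import re
from itertools import product

ALPHABET = "nta"

def accepted_by_regex(w: str) -> bool:
    # Regular-expression view: no position may start the substring "nt".
    return re.fullmatch(r"(?:(?!nt)[nta])*", w) is not None

def accepted_by_acceptor(w: str) -> bool:
    # Finite-state view: the state records whether the previous symbol was "n".
    saw_n = False
    for ch in w:
        if ch not in ALPHABET:
            return False
        if saw_n and ch == "t":
            return False          # forbidden substring "nt" found
        saw_n = (ch == "n")
    return True

# The two descriptions agree on all strings up to length 4:
for n in range(5):
    for tup in product(ALPHABET, repeat=n):
        w = "".join(tup)
        assert accepted_by_regex(w) == accepted_by_acceptor(w)
```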
Also, each region X describes a linguistic hypothesis: linguistic
generalizations must belong to X. Early work in generative grammar sought
evidence for or against such hypotheses in order to establish upper bounds
on the nature of linguistic generalizations. The
weakest scientific hypothesis is that they are computably enumerable, which
is what I called the Church-Turing Theory of Phonology. As X moves down
the hierarchy, the hypotheses become stronger, so the claim that the weak
generative capacity of human syntax is a regular stringset is a strong
scientific hypothesis. However, it is generally considered to be false
(Chomsky 1956; Shieber 1985).
If this is true, then it is true REGARDLESS of whether they are described with
SPE, OT, or other grammar formalisms! Here are some other ways of saying
the same thing:
– There are no non-regular phonological maps.
– A UNIVERSAL property of phonological maps is that they are regular.
Again, the fact that every rule-based grammar describes a regular relation, in
addition to the fact that there is no counterexample to the hypothesis that
phonological maps are regular, is strong evidence that the hypothesis in (11)
is correct.
One consequence of this result is that finite-state grammars become a
lingua franca for different phonological theories describing some aspect of
the phonology of a language. Hence in addition to the work mentioned
above which translates rule-based grammars into finite-state machines, there
exists much work which shows how to translate OT grammars into finite-
state machines (Frank & Satta 1998; Karttunen 1998; Gerdemann & van
Noord 2000; Jäger 2002; Riggle 2004). Thus, for attested phonological
patterns — just as with circles — there are several ways we can describe
them. Those stringsets and maps can be described with rule-based grammars,
OT grammars, finite-state machines, and other tools (e.g. logical formulae).
Another consequence of (11) follows from a theorem by Scott & Rabin
(1959). This theorem establishes that the domain and image of regular
relations are regular sets of strings. This means the set of possible underlying
representations and the set of possible surface representations are also
regular. In other words, phonotactic knowledge and markedness constraints
describe regular stringsets. Or equivalently, every stringset defined by a
markedness constraint has the property of “being regular”.
5 Constraints
This section is devoted to markedness constraints. The primary purpose is to
describe the Subregular Hierarchies in Figure 5.2, which constitute the
encyclopedia of categories that I am arguing is important for understanding
the nature of markedness constraints in phonology. To help motivate the
discussion, and help make it more accessible, I will begin by discussing part
of an encyclopedia of types (the actual constraints found in natural
language).
We would like to have an explanation for this fact. We would like our theory
of markedness to explain why constraints like those found in English and
Samala are possible, but the ones found in Language X and Language Y are
not.
So what’s the explanation? In OT, constraints like ∗#mgl and
∗[+strident,α anterior] . . . [+strident,−α anterior] structures would be part of CON.
Figure 5.3: Pattern templates for Sibilant Harmony (left) and First/Last
Harmony (right).
We may also wonder to what extent memory requirements could explain the
difference between the attested pattern in Samala and First/ Last Harmony.
In fact, however, it comes down to the pattern type, or template. This is
because both types can be described simply by marking which pairs of
sounds are permitted or forbidden in a given template as shown in Figure
5.3. The 2×2 cells are identical — it is only the templates that differ.
As for the pattern in Language Y, it is plausible that perception or
articulation should be able to explain the absence of even/odd parity
constraints (or more generally constraints which count mod n) in phonology,
but I haven’t seen any explicit connection. Whatever the explanation may
be, it SHOULD connect to the computational properties discussed here. More
generally, if phonology is truly reducible ENTIRELY to phonetic principles
then there ought to be research showing how the computational laws being
posited in this chapter can be clearly derived from such phonetic principles.
This is not meant to deny any role to phonetic explanation in phonology.
Instead this discussion is intended to make clearer some of the limits of
those explanations and to persuade researchers in those areas that the
computational principles discussed here are worth connecting their work to.
At the very least, a complete theory of phonology will refer to phonetic
factors IN ADDITION TO the computational principles discussed here. I return
to this issue in Section 7.3.
The computational explanation offered in this chapter is simply this. The
extensions of constraints on substrings (like ∗NC˳) and constraints on
subsequences (like [+strident, α anterior]. . . [+strident, −α anterior] in
Samala) are Strictly Local and Strictly Piecewise stringsets respectively.
With the exception of the finite languages, these are the most restrictive,
least expressive regions in the Subregular Hierarchies shown in Figure 5.2.
On the other hand, First/Last Harmony and ∗ODD-SIBILANTS belong to the
Locally Testable and Regular regions, respectively. In other words, the
widely-attested constraints are the formally simple ones, where the measure
of complexity is determined according to these hierarchies.
L(φ) = ¬(bΣ*) ∩ ¬(Σ*aaΣ*) ∩ ¬(Σ*bbΣ*) ∩ ¬(Σ*a)
(where ¬S denotes the complement of the set S)
It is not difficult to see that this is the same as the infinite set {ab, abab,
ababab, . . .}.
So now we can provide one definition of the Strictly Local stringsets. A
Strictly k-Local (SLk) stringset is one which can be defined as the
conjunction of negative literals, where the literals are interpreted as
substrings, and whose longest forbidden literal (substring) is of length k. The
Strictly Local stringsets are those that are SLk for some k.
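A minimal sketch of this definition: the grammar below lists the forbidden 2-factors of the (ab)*-type example above, with ">" and "<" standing in for the boundary symbols ⋊ and ⋉ (an ASCII encoding assumption).

```python
# Sketch of a Strictly 2-Local grammar as a conjunction of negative literals
# (forbidden substrings). ">" and "<" stand in for the boundary symbols ⋊ and ⋉.
FORBIDDEN_FACTORS = {">b", "aa", "bb", "a<"}

def sl2_wellformed(w: str) -> bool:
    bounded = ">" + w + "<"
    factors = {bounded[i:i + 2] for i in range(len(bounded) - 1)}
    return factors.isdisjoint(FORBIDDEN_FACTORS)

assert sl2_wellformed("abab")        # in {ab, abab, ababab, ...}
assert not sl2_wellformed("abba")    # contains the forbidden factor "bb"
```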
If the order relation is precedence, then the literals are interpreted as
subsequences. The negative literal ¬aa is thus interpreted to mean the
subsequence aa is a marked structure. So any string containing this marked
structure violates the constraint and is not in the extension of the constraint.
Here is an example. The formula below can be read as “Strings which do
not contain an a followed by an a nor a b followed by a c are well-formed”.
So here the literals aa and bc are interpreted as subsequences, and not as
substrings.
φ = (¬aa) ∧ (¬bc)
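The same style of grammar can be checked under the precedence interpretation; the sketch below is a hedged illustration of φ = (¬aa) ∧ (¬bc) with the literals read as subsequences rather than substrings.

```python
# Sketch: the literals of φ = (¬aa) ∧ (¬bc) interpreted as forbidden
# subsequences, i.e. a Strictly 2-Piecewise check.
from itertools import combinations

def subsequences_of_length_2(w: str) -> set:
    return {"".join(pair) for pair in combinations(w, 2)}

def sp2_wellformed(w: str) -> bool:
    return subsequences_of_length_2(w).isdisjoint({"aa", "bc"})

assert sp2_wellformed("cab")         # no a...a and no b...c at any distance
assert not sp2_wellformed("abca")    # contains both the subsequence aa and bc
```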
The one notable outstanding case Heinz (2010a) discusses is the set of
surface forms derived from long-distance dissimilation. These appear to be
Non-Counting but do not belong to any lower class (hence they are called
‘Properly Non-Counting’) (Heinz et al. 2011). They are discussed further
below.
Whether constraints like Onset are SL or not depends on the choice of
representation. If syllable boundaries are included in string representations,
which is a common practice, then constraints like Onset are SL since they
can be represented this way: (¬ .V). The importance of representations will
be further discussed in Section 8.
I would like to conclude the discussion of the “Strict” classes by
providing their language-theoretic characterizations. This characterization
for SL stringsets is provided in (13), which Rogers & Pullum (2011) name
Suffix Substitution Closure.
Next we move up one level to the next kind of logic: propositional. Unlike
the conjunction of negative literals, where all formulae had the form (¬l1) ∧
(¬l2) ∧ . . . ∧ (¬ln) for n literals (li), propositional logic allows any well-
formed propositional formulae to generate a stringset. Not only is any
combination/ordering of negation and conjunction now permitted, but
disjunction (∨) is also allowed. As a consequence, mainly familiar
propositional connectives are also allowed, such as implication (→) and the
biconditional (↔). Propositional logic is therefore more expressive (and less
restrictive) than the conjunction of negative literals.
For example, the following formula is a well-defined formula in
propositional logic.
φ = b ∨ (aa → ac)
If these literals are interpreted with respect to the successor model of strings,
then this formula translates to the following English: “Words are well-
formed if they contain the substring b or if it is the case that if they contain
the substring aa they also contain the substring ac.” Below I provide the
extension of φ under the successor interpretation of the literals.
L(φ) = Σ*bΣ* ∪ (Σ*aaΣ*acΣ* ∪ Σ*acΣ*aaΣ*)
I submit that both of these logically possible constraints seem more odd from
a phonological perspective than the SL or SP constraints. At first glance, it
seems strange to have a markedness constraint which requires that if one
sub-structure is present another one must be present as well.
This is perhaps the most notable difference between the constraints
permitted by propositional logic and those expressible as conjunctions of
negative literals: propositional constraints can require sub-structures to
be present in well-formed words (Rogers & Pullum 2011).
The interpretation of the simple formula φ = b is that well-formed words
must contain the sub-structure b. Such examples exist in the phonological
literature. For instance, it is true that the constraint ONSET has this flavor. As
we have mentioned with ONSET, however, the choice of representation
matters: this can be construed as SL provided syllable boundaries are
introduced as symbols in strings. Another constraint like this is what Hyman
(2009) calls Obligatoriness, the requirement that all well-formed words bear
an accent (or stress). Unlike ONSET, there is no straightforward
representational “fix” for this constraint. I return to this issue in Section 5.3.
Now we can provide one definition of the Locally Testable stringsets. A
Locally k-Testable (LTk ) stringset is one which can be defined with a formula in
propositional logic, where the literals are interpreted as substrings, and whose
longest literal (substring) is of length k. The Locally Testable stringsets are those
that are LTk for some k.
Similarly, a definition of the Piecewise Testable stringsets can be given. A
Piecewise k-Testable (PTk ) stringset is one which can be defined with a formula
in propositional logic, where the literals are interpreted as subsequences, and
whose longest literal (subsequence) is of length k. The Piecewise Testable
stringsets are those that are PTk for some k.
There are language-theoretic characterizations of these classes too. This
characterization is given in (15) for the Locally Testable class.
The next rung up the logical hierarchy brings us to First Order (FO) Logic. The
main differences between first order logic and propositional logic is that literals
disappear and variables appear. It is not necessary in this chapter to provide the
technical details regarding FO models of strings. For this, readers are referred to
Rogers et al. (2013).
There are only three important items readers need to understand. First, FO
logic is strictly more powerful than Propositional logic. Second, as usual,
whether the ordering relation is given as the successor relation or the precedence
relation will determine the kinds of stringsets expressible with FO formulae. FO
logic with the successor relation yields the class called the Locally Threshold
Testable (LTT) class, and FO logic with the precedence relation yields the class
called Non-Counting (NC). Third, successor is FO-definable from precedence, but
not vice versa, so the Non-Counting class properly includes the LTT class.
I will go straight to the language-theoretic properties. If the ordering relation is
the successor, then the class of stringsets that is FO-definable is called the Locally
Threshold Testable (LTT) class, and it properly includes the LT class.
One important difference between the FO-definable classes and the
Propositional-definable classes is that the FO-definable classes are able to
distinguish the presence of otherwise identical sub-structures. In this way, FO-
definable classes can count the number of sub-structures up to some threshold. On
the other hand, the Propositional classes can only detect the presence or absence of
sub-structures. So for a given sub-structure, Propositional logic can distinguish
zero of them from one of them. FO logic, however, can detect up to some number
n of sub-structures. So a limited ability to count is present at the FO-level. There is
always some finite number n after which the number of substructures cannot be
distinguished. FO-definable classes are not sufficiently expressive to be able to
count indefinitely. Thus the difference between LTT and LT is that in the LTT
class, the number of substrings can be counted, but only up to some threshold t
(Thomas 1997).
(17) (Substring Threshold Equivalence) A stringset L is LTT if there is a k and a t
such that for all strings u and v, if u and v have the same number, up to some
threshold t, of substrings of length k then either both u and v belong to L or
both u and v do not belong to L.
The reason for this is that the Non-Counting class can do much more than count
subsequences. This is partly because the successor ordering relation is FO-
definable from precedence, but not vice versa. Consequently, every stringset in
the LTT region is also in the Non-Counting region, but not vice versa. NC is
strictly more expressive than LTT. Thus, Figure 5.2 shows that the NC class
properly includes the LTT class. McNaughton & Papert (1971) comprehensively
establish several other important characterizations of the Non-Counting class.
Are there markedness constraints that count up to some threshold? An example
of such a constraint would be something like ∗3NC˳ where words with zero, one or
two NC˳ substrings are considered well-formed, but words with three or more are
ill-formed. Needless to say, such constraints do not seem like the kinds of
constraints found in natural language.
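For concreteness, here is a sketch of what such an (unattested) threshold-counting constraint would look like computationally, with "nt" standing in for an NC̥ cluster (an encoding assumption made for the example).

```python
# Sketch of the hypothetical constraint *3NC̥ as a Locally Threshold Testable
# check: occurrences of a substring are counted only up to a threshold t.
# "nt" stands in for an NC̥ cluster (an illustrative encoding).
def count_up_to(w: str, factor: str, t: int) -> int:
    n = sum(1 for i in range(len(w) - len(factor) + 1)
            if w[i:i + len(factor)] == factor)
    return min(n, t)

def satisfies_star_3nc(w: str) -> bool:
    # Well-formed iff the word contains fewer than three "nt" substrings.
    return count_up_to(w, "nt", 3) < 3

assert satisfies_star_3nc("antanta")          # two clusters: fine
assert not satisfies_star_3nc("antantanta")   # three clusters: ill-formed
```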
On the other hand, there are constraints in natural language that have been
argued to be properly Non-Counting. These are the stringsets that are definable
from long-distance dissimilation (Heinz et al. 2011). Heinz et al. (2011) also show
that such constraints belong to subclasses of the Non-Counting region they call
Tier-based Strictly Local (TSL). These stringsets are defined with the common
notion of phonological tier (Goldsmith 1976). Like the Strictly Local class, TSL
stringsets can be defined with formulae that are conjunctions of negative literals,
interpreted under the successor relation after non-tier elements are ignored. Thus
the long-distance behavior such constraints permit is limited in a particular way. TSL stringsets
are not as well understood as the other classes (there are not multiple
characterizations), but Heinz et al. (2011) argue that every markedness constraint
in natural language is describable with TSL constraints. Of course, an important
issue here is what the tier is. Jardine & Heinz (2016) show that the tier can be
identified from positive data when the bound k on the size of the constraints is
known a priori (the tier is not known a priori).
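A minimal sketch of the tier-based idea, assuming a two-member sibilant tier {s, ʃ} and forbidding adjacent disagreeing sibilants on that tier (the specific segments are illustrative, not Samala's full inventory):

```python
# Sketch of a Tier-based Strictly Local (TSL) constraint for sibilant harmony:
# project the tier, then run an ordinary SL-2 check on the projection.
TIER = {"s", "ʃ"}
FORBIDDEN_ON_TIER = {("s", "ʃ"), ("ʃ", "s")}

def tsl_wellformed(w: str) -> bool:
    tier = [ch for ch in w if ch in TIER]        # non-tier symbols are ignored
    return all((x, y) not in FORBIDDEN_ON_TIER
               for x, y in zip(tier, tier[1:]))

assert tsl_wellformed("sanas")       # s ... s agree across intervening material
assert not tsl_wellformed("ʃanas")   # ʃ ... s disagree, however far apart
```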
I will refer to the hypothesis that all markedness constraints are TSL as the
weak subregular hypothesis.
Whether the evidence favors the strong or weak subregular hypothesis will be
addressed in Section 7.
The next rung up the logical hierarchy and the highest to which we attend is
Monadic Second Order (MSO) Logic. The difference between first order and
monadic second order logic is that variables over sets of elements in the domain
are allowed in addition to the variables which vary over individual elements
(which FO logic allows). There are several interesting consequences of adding
such variables, which I will now review.
First, the two branches in the subregular hierarchies merge at this point
because precedence is MSO-definable from successor. So the stringsets that are
MSO-definable with successor are exactly the stringsets that are MSO-definable
with precedence.
Second, this class of stringsets corresponds exactly to the class of stringsets
definable with finite-state acceptors, i.e. the regular class of stringsets (Büchi
1960).
Third, this class is strictly more expressive than both the Non-Counting and
Locally Threshold Testable class (McNaughton & Papert 1971; Thomas 1997). It
can be shown that the stringset defined by the constraint ∗ODD-SIBILANTS (see
Section 5.1) is not Non-Counting, but it is a regular stringset.
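The contrast can be made concrete with a sketch of ∗ODD-SIBILANTS as a two-state acceptor that tracks the parity of sibilants; "s" stands in for the whole sibilant class (an assumption of the example). The acceptor is finite-state, hence regular, but its mod-2 counting is exactly what Non-Counting stringsets disallow.

```python
# Sketch: *ODD-SIBILANTS as a two-state acceptor tracking sibilant parity.
# Regular (finite-state), but it counts mod 2, so it is not Non-Counting.
def even_number_of_sibilants(w: str) -> bool:
    parity = 0
    for ch in w:
        if ch == "s":            # "s" stands in for the sibilant class
            parity ^= 1          # flip state on each sibilant
    return parity == 0           # accept iff the count of sibilants is even

assert even_number_of_sibilants("sas")       # two sibilants: well-formed
assert not even_number_of_sibilants("sata")  # one sibilant: ill-formed
```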
6 Transformations
Now we turn to transformations. From an OT perspective, this section is about
faithfulness constraints and the map derived from the interaction of all the OT
constraints. It is also about the typology of maps generated from a given CON.
From a rule-based perspective, this section is about the extensions of individual
phonological rules and their composition.
The computational theory of subregular relations is not as well developed as
the Subregular Hierarchies. For example, logical characterizations of string
relations have not yet been fully carried out. Previous work on subclasses of
subregular relations is primarily limited to two classes known as the LEFT
SUBSEQUENTIAL and RIGHT SUBSEQUENTIAL. Essentially, these are classes of
transformations with finite look-ahead; so they are “myopic” in the sense of
Wilson (2003).
More will be said about these classes momentarily. I will keep the discussion
at a high level and readers can find the definitions of them and other technical
details in many different places, including Berstel (1979), Mohri (1997), Roche &
Schabes (1997), Lothaire (2005), Sakarovitch (2009). A linguistically motivated
treatment is given in Heinz & Lai (2013).
Much recent work at the University of Delaware has sought to develop a
hierarchy for string relations that is analogous to the one for stringsets shown in Fi
gure 5.2. The most notable advance in this regard has been work by Jane Chandlee
(Chandlee 2014; Chandlee et al. 2014; Jardine et al. 2014; Chandlee & Heinz
2018), which establishes relational counterparts to the Strictly Local stringsets,
discusses their significant coverage of empirical phenomena, and explains how
they can be learned.
As has been discussed in the OT literature, this map (Majority Rules, MR) is the
optimal outcome of two very simple constraints: a markedness constraint banning
successive vowels with
different values of feature F (Agree(F)) outranking the faithfulness constraint
Ident(F). Baković (2000: 26) defines the term this way:
When Agree[F] is dominant, it winnows the candidate set down to basically two candidates,
one with all [αF] segments and the other with all [−αF] segments. If IO-Ident[F] gets the next
crack at the evaluation process, it will choose the one of these candidates that is least deviant
from the input, regardless of the stem/ affix or +/− distinctions. In other words, what ends up
mattering is the relative percentages of [αF] and [−αF] vowels in the input: the underlyingly
greater number of [−αF] vowels in [a map where /+ − −/ ↦ [− − −]] gangs up on the lesser
number of [αF] vowels, yielding the problematic effect that I call ‘majority rule.’ [emphasis in
original]
As recognized by Baković, MR is unattested and considered phonologically
bizarre. His solution adds certain locally conjoined constraints to CON, which he
argues has the effect of ridding the typology of Majority Rules maps, and which
he argues is independently needed for analyzing dominant/recessive types of
vowel harmony. The point I wish to emphasize here, however, is that Majority
Rules is a logically possible map, which is quite easy to generate in classic OT
with a simple markedness constraint (whether arbitrarily many consonants may
intervene between vowels determines whether Agree(F) is SL or TSL) and
standard faithfulness constraints.
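To see why Majority Rules requires global information, here is a minimal sketch of the map over strings of vowel values, with ASCII "+" and "-" standing for [+F] and [−F] and consonants omitted; the tie-breaking choice toward "+" is an arbitrary assumption made only so the sketch is a function.

```python
# Sketch of the (unattested) Majority Rules map: the whole word surfaces with
# whichever feature value is in the majority, so a global count over the
# entire input is needed. Tie-breaking toward "+" is an arbitrary assumption.
def majority_rules(w: str) -> str:
    plus, minus = w.count("+"), w.count("-")
    winner = "+" if plus >= minus else "-"
    return winner * len(w)

assert majority_rules("+--") == "---"   # the two [-F] vowels gang up on the [+F] one
assert majority_rules("++-") == "+++"
```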
Another possible map is one where the first or last vowel determines the
features of the other vowels in the word. This has been called progressive
harmony (PH) and regressive harmony (RH), respectively. Examples of a PH map
are shown in (21).
(21) (Progressive Harmony) { (++−, +++), (+−+, +++), (−++, −−−), (−−+, −−−),
(−+−, −−−), (+−−, +++), . . . }
The inclusion of neutral vowels alters this map only slightly. Neutral vowels are
those which resist harmonizing and either are skipped (in which case they are called
transparent) or force subsequent vowels to harmonize with them (in which case
they are called opaque). Following Heinz & Lai (2013), I will use the symbols [⊖]
and [⊟] to represent [−F] vowels that are transparent and opaque, respectively, and
the symbols [⊕] and [⊞] to represent [+F] vowels that are transparent and opaque,
respectively. With this expanded alphabet, mappings like /+ ⊖ −/↦[+ ⊖ +] and /+
⊟ +/↦[+ ⊟ −] would also belong to the PH map.
In contrast to the above, sometimes vowel harmony is bounded, in the sense
that only the subsequent vowel is affected. I will call this Local Assimilation (LA)
and (22) below illustrates this map where only the initial vowel is the trigger.
(22) (Local Assimilation) { (+−−, ++−), (−++, −−+), (+−+, +++), (−+−, −−−), . . . }
Early analyses of vowel harmony analyzed the extension of many patterns like
bounded or unbounded PH or RH (van der Hulst & van de Weijer 1995), but this
type of analysis is present in recent work as well (Nevins 2010).
Another logically possible, but unattested type of vowel harmony process has
been called “Sour Grapes” (SG) (Padgett 1995; Wilson 2003). Informally, SG is
like progressive harmony except that later vowels only harmonize if no opaque
vowels occur later in the word. If an opaque vowel occurs somewhere after the
initial vowel, then non-neutral vowels between it and the initial vowel will not
harmonize. (23) illustrates a SG map.
(23) (Sour Grapes) { (+−−, +++), (+−⊟, +−⊟), (+−−−, ++++), (+−−⊟, +−−⊟), . . . }
Like Majority Rules, Sour Grapes has been argued to be a phonologically bizarre
vowel harmony process. In particular, Wilson (2003) argues that harmony
processes never look ahead beyond immediately adjacent segments. Wilson refers
to the absence of look-ahead as a kind of myopia, and characterizes spreading
processes as “myopic”. The Sour Grapes pattern disobeys Wilson’s (2003)
principle that phonological laws are myopic. In a Sour Grapes pattern, each vowel
gets to “look ahead” arbitrarily far to the end of the word to see if there is an
opaque vowel downstream, and only harmonizes if it does not find one.
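A minimal sketch of the Sour Grapes map as just described, using ASCII "+", "-", and "#" for [+F], [−F], and opaque ⊟ vowels (an encoding assumption); note that the decision for every vowel depends on whether an opaque vowel occurs anywhere later in the string.

```python
# Sketch of the (unattested) Sour Grapes map: an initial "+" spreads to the
# end of the word only if no opaque vowel "#" occurs downstream, so each
# decision needs unbounded look-ahead. "#" stands in for the opaque vowel ⊟.
def sour_grapes(w: str) -> str:
    if not w or w[0] != "+" or "#" in w:
        return w                 # an opaque vowel anywhere blocks all spreading
    return "+" * len(w)          # otherwise the initial "+" spreads throughout

assert sour_grapes("+--") == "+++"
assert sour_grapes("+--#") == "+--#"   # blocked: no intervening vowel harmonizes
```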
Classic OT has no difficulty generating SG maps. Under a typical analysis,
there is a markedness constraint against segments that are [+F] but share all other
features with ⊟ (such as ∗⊞). Consequently, underlying /⊟/ can never surface as
[⊞]. Finley (2008: 32) describes the rest of the OT analysis this way.
Sour grapes harmony patterns occur when a blocker prevents spreading to vowels intervening
between the source and the blocker. For the input [/+ − ⊟/] . . . the output [[+ − ⊟]] will be
optimal rather than the desired [+ + ⊟]. This type of pathology is produced when the harmony-
inducing constraint [Agree(F)] does not localize the violation of harmony. In both the sour
grapes candidate [+−⊟] and the spreading candidate [+ + ⊟], there is only one locus of
disagreement. [. . .] However, because the sour grapes candidate incurs no faithfulness
violations, it will emerge as optimal.
Like MR, SG has received attention in the literature because it the optimal
outcome of relatively simple constraints in OT (Wilson 2003; McCarthy 2004;
Finley 2008).
Interestingly, in the domain of tone, there do seem to be some patterns that
exhibit Sour Grapes-like behavior. See Hyman (2007) and Kula & Bickmore
(2015) for cases in Kuki Thaadow and Copperbelt Bemba, respectively, and
Jardine (2016) for extensive analysis and discussion.
Other analyses of vowel harmony argue that the right generalization is that
vowels in a word harmonize to a particular feature value, if it is present anywhere
in the word. This analysis has been called dominant/ recessive (DR) since the
feature F appears to have a dominant value (the one that vowels harmonize with)
and a recessive value (the one that vowels don’t harmonize with). In the example
DR map below, the [+] value is the dominant one; so any underlying
representation containing the harmonizing feature with the value [+] will surface
so that the harmonizing feature in all vowels will also be [+].
(24) (Dominant/Recessive) { (++−, +++), (+−+, +++), (−++, +++), (−−+, +++),
(−+−, +++), (+−−, +++), (−−−, −−−), . . . }
A similar analysis of VH patterns (shown in (25)) is one where the root vowel
determines the features of the other vowels in the word. This kind of analysis has
been termed “Stem Control” (SC). Here the feature that spreads is determined by
its morphological status, and not its inherent value as in DR harmony. Typically
vowels agree with the closest stem vowel. Following Baković (2000), I use the
(25)
The term “circumambient” refers to two surrounding triggers and the term
“unbounded” refers to the absence of a bound on the distance between the two
triggers. Yaka is the only language which appears to have CU vowel harmony
(Hyman 1998), though Jardine (2016) argues Sanskrit n-retroflexion is formally
similar, and Graf (2010a) provides a logical analysis of it. (As discussed further
below in 6.3, unbounded high tone plateauing is a well-attested, common tonal
pattern which is circumambient unbounded (Hyman 2011; Jardine 2016).)
Table 5.5 is a reproduction of Table 5.3 from Heinz & Lai (2013), with the
additions of local assimilation and circumambient unbounded harmony. It
summarizes the encyclopedia of types outlined above. Phonological theory has
posited maps like LA, PH, RH, DR, and SC, while the consensus appears to be that
MR and SG are not only unattested but bizarre. In fact they have been called
pathological patterns in some works (Wilson 2003, 2004; Finley 2008). Lastly,
setting aside tonal phonology for now, maps like CU are only marginally attested.
As before, we ask the question: What principle or principles separate the
linguistically motivated generalizations (PH, RH, DR, SC) from the pathological
ones (MR and SG) and the marginally attested ones (CU)?
Table 5.5: Example mappings of underlying forms (w) given by local assimilation (LA),
progressive harmony (PH), regressive harmony (RH), dominant/recessive harmony (DR),
sour grapes harmony (SG), majority rules harmony (MR), and circumambient unbounded
harmony (CU). The symbol [+] indicates a [+F] vowel and [−] indicates a [−F] vowel,
where F is the harmonizing feature. The symbols [⊟] and [⊖] are [−F] vowels that are
opaque and transparent, respectively. (From Heinz & Lai 2013: 57.)
Input Strictly Local functions generalize the notion of Strictly Local stringset.
Recall the Strictly Local stringsets are Markovian in nature: the well-formedness
of a string can be determined by examining the substrings of length k.
Equivalently, this means that the well-formedness of any position in the string can
be determined by checking the k − 1 previous symbols. This is illustrated in Figure
5.5, for the case where k = 2.
Input Strictly Local functions are similarly Markovian. The idea is that every
element in the input string corresponds to a STRING of symbols in the output string.
For any input symbol x, its output string u will only depend on x and the k − 1
elements preceding x in the input string. Figure 5.6 illustrates, for the case where k =
2.
Table 5.6: Illustrating why transformations with right contexts can still be ISL. The symbol
λ represents the empty string (the string of length zero).
element preceding x      input element x      output string u
⋊                        i                    i
i                        n                    λ
n                        p                    mp
p                        a                    a
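The table can be read off directly as a one-pass function; the sketch below, with ">" for the boundary symbol ⋊ and the non-/p/ and word-final cases filled in as simplifying assumptions, shows how each output chunk depends only on the current input symbol and the one before it.

```python
# Sketch of the Input Strictly 2-Local function of Table 5.6: each input
# symbol is rewritten by looking only at itself and the immediately preceding
# input symbol. ">" stands in for ⋊; the non-/p/ continuation after a nasal
# and the (ignored) word-final nasal case are simplifying assumptions.
def isl2_nasal_assimilation(w: str) -> str:
    out, prev = [], ">"
    for ch in w:
        if ch == "n":
            out.append("")                                   # hold the nasal back (output λ)
        elif prev == "n":
            out.append(("m" if ch == "p" else "n") + ch)     # release it, assimilated
        else:
            out.append(ch)
        prev = ch
    return "".join(out)

assert isl2_nasal_assimilation("inpa") == "impa"
```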
Furthermore, Chandlee (2014) and Chandlee et al. (2014a) also show how ISL
functions can be efficiently learned from finitely many examples in the sense of
Gold (1967) and de la Higuera (1997). This stands in stark contrast to the class of
regular functions which cannot be so learned. Remarkably, Jardine et al. (2014)
generalize this result to obtain an even more efficient learning algorithm for this
class of functions.
A notable example of a map that Input Strictly Local functions are unable to
model is progressive harmony (PH) in (21) above. Recall that a mapping
like /+ − −−/↦[+ + ++] belongs to this map, and more generally for all numbers k,
/+ −k −/↦[+ +k +] and /− −k −/↦[− −k −]. Such a map cannot be Input Strictly Local
for any k. This is because whether the last input element surfaces as [+] or [−]
depends on an input element which is more than k input elements away.
Chandlee (2014) defines Left and Right Output Strictly Local functions
(LOSL and ROSL) to address such maps. These capitalize on the output-oriented
nature of many phonological processes (Kisseberth 1970; Prince & Smolensky
1993, 2004). They are Markovian like ISL functions, but this time the context is
found in the output string, not the input string. Specifically for Left (Right) OSL
functions, for any input element x, its output string u will only depend on x and the
previous (following) k − 1 elements of the output string. The idea is that a function
is Left or Right, depending on whether the left or right context in the output string
matters. Figures 5.7 and 5.8 illustrate Left and Right OSL functions, respectively,
for the case where k = 2.
Informally, Left and Right OSL functions can be thought of as characterizing the
maps one can describe with rewrite rules that apply left-to-right or right-to-left
(Howard 1972) (cf. the treatment of rule-application by Kaplan & Kay 1994). This
appears to be approximately correct, though certain details are still being worked
out. However, we can say with certainty that the map PH is LOSL and the map
RH is ROSL. More generally, such functions capture spreading processes such as
progressive and regressive nasal spreading.
Left and Right OSL functions can both be computed by subsequential
transducers. For Right OSL functions, the input string must be processed right-to-
left by the transducer and the resulting output will then be reversed. See Heinz &
Lai (2013) for details.
In the abstract maps for vowel harmony discussed earlier, consonants were
ignored. If arbitrarily many consonants are allowed to intervene between the vowels
then the PH and RH maps will not be LOSL nor ROSL, respectively. For the PH
case, this means for all numbers k, /+Ck −/↦[+Ck +] and /−Ck −/↦[−Ck −]. Such a
map cannot be Left nor Right Output Strictly Local for any k because whether the
last input element surfaces as [+] or [−] depends on an output element which is
more than k output elements away. In a sense, at input element x, the functions
cannot remember whether the preceding vowel in the output string was [+] or [−]
because too many [C]s intervene.
We therefore move up the hierarchy in Figure 5.4. I note that as of yet there are
no regions for string-to-string maps corresponding to the SP, LT, PT, TSL, or LTT
stringsets.
Informally, for Left (Right) Subsequential functions, each logically possible
input string is classified as belonging to exactly one of finitely many regular
stringsets. For any input element x, the output string u will only depend on x and
the regular stringset to which its preceding input string belongs. Figure 5.9
illustrates left subsequential functions.
Even if arbitrarily many consonants are allowed to intervene between the
vowels, the PH and RH maps are in fact left and right subsequential, respectively.
To see why, consider Table 5.7. Subsequential functions can “remember” up to
finitely many pieces of information about the left context; in Table 5.7, that the
first vowel was [+F]. Thus even if k or more Cs then occur in the input string, the
function simply outputs each C as it reads each C, without changing its memory
state.
Figure 5.9: Schematic illustration of Left Subsequential functions. For every
Left Subsequential function, the output string u of each input element x
depends only on x and the stringset S to which the preceding input string
belongs. As before, the lightly shaded cell only depends on the darkly shaded
cells.
Table 5.7: Illustrating why PH is left subsequential even if arbitrarily many Cs intervene
between vowels.
set to which string preceding x belongs      input element x      output string u
⋊                                            +                    +
⋊ + C∗                                       C                    C
⋊ + C∗                                       C                    C
...                                          ...                  ...
⋊ + C∗                                       C                    C
⋊ + C∗                                       −                    +
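The table corresponds to a single left-to-right pass with one piece of memory; the sketch below, assuming ASCII "+" and "-" for vowel values and "C" for any consonant, shows progressive harmony computed left subsequentially even with arbitrarily many intervening consonants.

```python
# Sketch of progressive harmony (PH) as a left subsequential computation:
# one left-to-right pass that remembers only the value of the last vowel seen,
# while consonants ("C") are passed through unchanged.
def progressive_harmony(w: str) -> str:
    out, last_vowel = [], None
    for ch in w:
        if ch in "+-":
            ch = last_vowel if last_vowel is not None else ch  # harmonize to the first vowel
            last_vowel = ch
        out.append(ch)
    return "".join(out)

assert progressive_harmony("+C-C-") == "+C+C+"   # any number of Cs may intervene
assert progressive_harmony("-CC-") == "-CC-"
```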
In the same way that ISL functions could “look ahead” by writing empty
output strings, subsequential functions can do so as well. However, like the ISL
functions, there is a sense in which left subsequential functions can look into the
right context of the input element ONLY some finite distance. There is a bound k on
how far they can look ahead, which relates to the fact that they can only remember
finitely much information about the input string. For this reason it is not possible
to remember the EXACT preceding input string.
An example will help make this idea clear. The dominant/recessive (DR) map
is neither left nor right subsequential. This is because, for all numbers k, /−k + −k /
↦[+k + +k ] and /−k − −k /↦[−k − −k ]. Such a map cannot be left subsequential
because whether the first k input elements all surface as [+] or [−] depends on
whether the next element is [+] or not. Therefore, even though these functions
might output the empty string for the first k input elements, if the [+] comes next,
such functions would have to output k [+] symbols (and one more). But this is
impossible because k can be ANY number and left subsequential functions can only
classify the preceding input string into one of FINITELY many categories. Table 5.8
illustrates this conundrum. For this reason, left subsequential functions are myopic
in the sense that they cannot look unboundedly far into the right context.
Right subsequential functions are similar except that input strings following
the input element are categorized into finitely many regular stringsets. Also, right
subsequential functions can only “look ahead” into the left context a finite
distance (and an argument similar to the one made above shows why). It may be
useful to think of right subsequential maps as the “reverse” of left subsequential
maps: if L is a left subsequential map then there is a right subsequential map Lr
such that (w, v) ∈ L iff (wr, vr ) ∈ Lr (where xr is the reverse of the string x so
(abc)r = cba). From a processing perspective, one could say that left subsequential
functions process strings left-to-right, and right subsequential functions process
strings right-to-left.
Table 5.8: Illustrating why the dominant/recessive DR map is not left subsequential. The
symbol λ represents the empty string. The problem is that the left subsequential function
cannot remember exactly how many − symbols occurred before the first + (it cannot
always correctly fill in the ‘. . . ’).
set to which string preceding x belongs      input element x      output string u
λ                                            −                    λ
−                                            −                    λ
−−                                           −                    λ
...                                          ...                  ...
−− . . . −                                   −                    λ
−− . . . −                                   +                    ++ . . . ++
At the University of Delaware in 2010, the question was asked whether the
transformations from underlying to surface forms are left or right subsequential. In
other words, we investigated what I will call the Subsequential Hypothesis.
With one interesting class of exceptions discussed below, this hypothesis appears
to be well supported. This matters for two reasons. First, it is a stronger more
restrictive hypothesis than the previously understood bound (phonology is regular,
see Section 4.3). Second, it has been known for quite some time that left
subsequential (and right subsequential) functions are learnable in a particular
sense (Oncina et al. 1993). The algorithm presented there has even been adapted
for use in phonology (Gildea & Jurafsky 1996). In other words, if phonological
transformations are subsequential, then the computational nature of phonological
transformations directly provides purchase on the learning problem.
So what is the evidence which favors (27)? I will use the term “subsequential”
to mean either left or right subsequential. Chandlee (2014) proves that ISL, LOSL,
and ROSL functions are subsequential; therefore, all the maps they cover are
subsequential. Synchronically attested metathesis is also subsequential (Chandlee
et al. 2012; Chandlee & Heinz 2012). Gainor et al. (2012) study the extensions of
the vowel harmony maps in Nevins (2010) and conclude they are subsequential.
Since Nevins assumes a certain degree of underspecification, Heinz & Lai (2013)
show that progressive and regressive vowel harmony with no underspecification
pace OT (maps PH and RH above) are subsequential. Payne (2017) shows that
long-distance consonant dissimilation maps described by Suzuki (1998) and
Bennett (2013) are subsequential, and Luo (2017) shows that long-distance
consonant assimilation maps described by Hansson (2008) and Rose & Walker
(2004) are subsequential.
In some sense, these results are not too surprising because ultimately they
support Wilson’s intuition that phonology is myopic. Nonetheless, if
phonological myopia is best characterized as subsequentiality (or something
stronger like ISL), then there is much concrete to gain: a theory which not only
appears sufficiently expressive, but which is also more restrictive than previously
entertained, and which has desirable learnability properties.
Weakly Deterministic functions are defined by Heinz & Lai (2013) as those maps
that can be defined as the composition of a left subsequential and right
subsequential function without the introduction of new alphabetic symbols. Heinz
& Lai (2013) show that the dominant/recessive (DR) and stem-control (SC) maps
are properly weakly deterministic. In fact the DR map is the composition of a map
like progressive harmony (DRP), which only spreads the dominant feature
progressively, and a map like regressive harmony (DRR), which only spreads the
dominant feature regressively. Table 5.9 illustrates. Since DRP and DRR are left
and right subsequential, respectively, their composition is weakly deterministic.
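For concreteness, the decomposition can be sketched as follows (an informal illustration with a toy two-symbol alphabet and invented function names, not a claim about the actual constructions in Heinz & Lai 2013): a left-to-right pass spreads the dominant '+' progressively, a right-to-left pass spreads it regressively, and their composition computes DR without ever introducing a symbol outside the original alphabet, which is what makes the composition weakly deterministic.

# DR as the composition of a left subsequential pass (DR_P) and a right
# subsequential pass (DR_R); no new alphabetic symbols are introduced.
def dr_p(word: str) -> str:
    """Left-to-right pass: rewrite every '-' after a '+' as '+'."""
    seen_plus, out = False, []
    for s in word:
        seen_plus = seen_plus or s == '+'
        out.append('+' if (s == '-' and seen_plus) else s)
    return ''.join(out)

def dr_r(word: str) -> str:
    """Right-to-left pass: rewrite every '-' before a '+' as '+'."""
    return dr_p(word[::-1])[::-1]

def dr(word: str) -> str:
    """Dominant/recessive harmony: '-' becomes '+' iff a '+' occurs anywhere."""
    return dr_r(dr_p(word))

print(dr('---+--'))   # '++++++'  the dominant value spreads in both directions
print(dr('------'))   # '------'  nothing to spread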
Heinz & Lai (2013) conjecture that sour grapes (SG) is not weakly
deterministic. They explain that SG can be described as the composition of a left
subsequential function and a right subsequential function, but that crucially the
intermediate form requires the use of an additional alphabetic symbol which they
write as [?]. Table 5.10 (adapted from their paper) illustrates the role an additional
symbol plays in the decomposition. Essentially, [?] records the fact that this is a
minus, which has a [+] in its left context. So the right subsequential function R
will rewrite this as [−] or [+] depending on whether there is a [⊟] in the right
context of the [?].
Table 5.9: Map DRP converts every − after a + to + (like PH), and map DRR converts
every − before a + to + (like RH). As indicated, the composition of these two maps yields
the DR map.
Table 5.10: Illustrations of the role of the new symbol [?] in the deterministic
decomposition into a left subsequential function L and a right subsequential function R.
Non-deterministic Regular functions can be defined in at least two ways (Elgot &
Mezei 1965). First, they can be defined as the composition of a left and right
subsequential function, provided the intermediate string is allowed to use
additional alphabetic symbols. As we have seen in the example of Sour Grapes
(SG) in Table 5.10, these additional symbols allow certain types of information to
become present in the string. Second, non-deterministic regular functions can be
defined as those string-to-string functions that can be described with a non-
deterministic finite-state transducer.
Both SG and CU maps thus properly belong to the non-deterministic regular
function region (Heinz & Lai 2013; Jardine 2016).
Non-deterministic finite-state transducers are a grammatical formalism that
can also describe transformations which have more than one output for each input.
In fact, the class of transformations describable with non-deterministic finite-state
transducers is known as the class of regular relations. Beesley & Karttunen (2003) and Hulden
(2009) develop toolkits for manipulating regular relations for describing the
phonology and morphology of languages.
The Majority Rules map cannot even be described with a non-deterministic
finite-state transducer. It is in fact non-regular (Riggle 2004; Heinz & Lai 2013).
According to the hierarchy presented here, it is the most complex kind of map
under discussion.
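A small sketch makes it easy to see where the extra power comes from (the tie-breaking convention below is an arbitrary assumption added for the illustration): the output at every position depends on a comparison of two unbounded counts over the whole word, which is exactly the kind of global bookkeeping that no finite-state transducer, deterministic or not, can perform.

# Majority Rules over {+, -}: every position surfaces with whichever value is
# in the majority in the word. Comparing two unbounded counts is what puts
# this map outside the regular relations.
def majority_rules(word: str) -> str:
    plus, minus = word.count('+'), word.count('-')
    if plus > minus:
        return word.replace('-', '+')
    if minus > plus:
        return word.replace('+', '-')
    return word   # ties left unchanged (an arbitrary choice for this sketch)

print(majority_rules('++-'))   # '+++'
print(majority_rules('+--'))   # '---'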
Given that many phonological rules are optional, one may wonder whether it
is appropriate to model individual transformations (as we have here) as functions
instead of relations. There are two responses to this.
The first response is to say that the optionality is handled at a higher level of
control than the individual transformation. This is essentially the position adopted
in rule-based and constraint-based phonology. In rule-based phonology, the idea
was that a rule was marked as optional. When, in deriving the output from an input,
a rule marked as optional is encountered, additional (usually random) information is
consulted, such as a coin flip, and the outcome determines whether the rule is
applied or skipped. Thus the extension of the rule itself is still
functional. Similarly, in stochastic OT (Boersma 1998; Boersma & Hayes 2001), a
given constraint ranking has a functional extension, but a higher-level control
process determines which particular constraint ranking will be utilized at any
particular time.
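The division of labor in this first response can be sketched as follows; the particular rule, the application probability, and the coin flip are placeholders invented for the illustration, and the point is only that the optionality lives in a higher-level control process while the rule’s own extension remains a function.

import random

# A deterministic (functional) rule: word-final 'b' devoices to 'p'.
def devoice_finally(word: str) -> str:
    return word[:-1] + 'p' if word.endswith('b') else word

# Higher-level control: a coin flip decides whether the functional rule applies.
def apply_optionally(rule, word, p=0.5):
    return rule(word) if random.random() < p else word

random.seed(0)
print([apply_optionally(devoice_finally, 'tab') for _ in range(5)])
# a mix of 'tap' and 'tab'; the rule itself is still a function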
The second response is to say that subclasses of regular relations are likely to
follow the same lines developed here. Mohri (1997) establishes that as long as
there is a bound on the amount of optionality, many properties of subsequential
functions are preserved. More recently, Beros & de la Higuera (2014) also show
how to generalize subsequential functions in a way that permits a degree of
optionality. While such subclasses have not yet been studied, the fact that they
preserve important aspects of the underlying finite-state transducers, and that
classes like ISL have automata-theoretic characterizations based on subsequential
transducers (Chandlee 2014; Chandlee et al. 2014a), strongly suggests that
subclasses like ISL which permit a degree of optionality are only waiting to be
discovered.
While Yaka vowel height harmony and Sanskrit n-retroflexion are arguably
counterexamples to this revised hypothesis, I think it is prudent not to reject the
hypothesis on these grounds. Unlike UTP, these cases are rare and the evidence
that they are truly CU maps, while compelling, is not as strong as it is for the UTP
cases. Future research may lead to a better understanding of these languages.
Thus, Jardine shows that tonal patterns are different from segmental patterns,
since they are arguably more complex. Paraphrasing Hyman (2011), tonal
phonology really can do more than segmental phonology!
(29)–(30) [graph-like syllable-structure representations with labeled onset, nucleus, and coda nodes; not reproduced here]
At issue, of course, is the marked structure(s) that NOCODA is supposed to
define when graph-like representations such as those in the preceding
examples are adopted. There are several possibilities. Just having a node in
the graph labeled with “C” for instance could be sufficient, in which case
(30) would violate it. Or it may be necessary for the node labeled “C” to
dominate a phonetic element (in which case (30) would not violate it). Or it
may be that the labels in the preceding examples are just ornaments for
phonologists for readability, and that what really matters is that there is
consonantal material after the nucleus dominated by the same syllable node.
A more difficult question arises with these representations with
respect to the constraint ONSET. At the very least this constraint REQUIRES an
“O” node to be present. It does not ban a sub-structure, suggesting this
constraint is at the Propositional level. Here again we see an interplay
between the choice of representation and the power of the constraint. To
state ONSET, this enriched representation requires a more complex constraint
type than the string representation with the latent syllable boundary, which
can define ONSET as the Strictly 2-Local constraint (∗.V).
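For concreteness, the string-based formulation can be sketched as follows; treating word edges as syllable boundaries and stating NOCODA as the analogous Strictly 2-Local constraint (∗C.) are assumptions added here only to round out the illustration, whereas the statement of ONSET as (∗.V) is the one given above.

# Strictly 2-Local constraints over strings with a latent syllable-boundary '.':
# a form is well-formed iff none of the forbidden 2-factors occurs in it.
def violates_sl2(word: str, forbidden_factors) -> bool:
    padded = '.' + word + '.'            # word edges treated as syllable boundaries
    return any(f in padded for f in forbidden_factors)

ONSET  = {'.V'}   # a syllable beginning directly with a vowel (the constraint *.V)
NOCODA = {'C.'}   # a consonant immediately before a boundary (assumed analogue)

print(violates_sl2('CV.CV', ONSET))     # False: every syllable has an onset
print(violates_sl2('V.CV', ONSET))      # True:  the first syllable lacks one
print(violates_sl2('CVC.CV', NOCODA))   # True:  the first syllable has a coda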
These are all interesting possibilities that have been explored to various
degrees in the phonological literature. The point I wish to make is a fairly
obvious one: representations of words matter when defining constraints or
transformations. The extensions of the constraints and transformations will
be in terms of these representations. String representations were used
throughout this chapter, but some other representations could have been
used, and computational analysis could have proceeded on those
representations instead.
9 Conclusion
This has been a long chapter. Thankfully, the conclusion can be brief.
Phonology is about how underlying lexical representations are transformed
into surface ones. An important question asks about the cross-linguistic
nature of these transformations. Grammars are typically conceived as
generating patterns; these patterns are extensions of the grammar in the
same way the extension of an algebraic equation is the set of points
satisfying that equation. Computational analysis studies these EXTENSIONS,
and such analysis of phonological generalizations is ongoing. Nonetheless,
the results so far reveal that despite the cross-linguistic diversity, there are
very strong, specific, universal computational properties shared by almost
all phonological patterns. The few potential counter-examples are of special
interest and deserve further study. Explaining these plausibly universal
computational properties of phonological patterns is hard for theories that
rely on optimization as a central organizing feature of the theory, but is
straightforward if the computational properties highlighted within this
chapter become the organizing principles themselves. These principles are
natural for many reasons, only some of which could be covered here. Also,
there is a clear sense in which these principles derive from principles of
inference and learning. While there is still much work to do, a theory of
phonology built around these computational principles promises to be
sufficiently expressive, maximally restrictive, and learnable.
Acknowledgement: I am indebted to Jane Chandlee, Rémi Eyraud,
Bill Idsardi, Adam Jardine, Regine Lai, Jason Riggle, and Jim Rogers for
invaluable discussion. I am also grateful to Jane Chandlee, Alex Cristia,
Thomas Graf, Larry Hyman, and Lisa Pearl for helpful comments on an
earlier draft. I also thank the students in my computational phonology
course at the 2015 LSA Summer Institute, and the students in the Spring
2015 computational phonology seminar at the University of Delaware, in
particular Hossep Dolatian, Hyun Jin Hwangbo, Huan Luo, Amanda Payne,
Kristina Strother-Garcia, and Mai Ha Vu. Of course I assume full
responsibility for flaws present in this chapter.
References
Anderson, Stephen R. 1985. Phonology in the twentieth century. Chicago:
University of Chicago Press.
Applegate, Richard B. 1972. Ineseño Chumash grammar. Doctoral dissertation,
University of California, Berkeley.
Applegate, Richard B. 2007. Samala-English dictionary: A guide to the Samala
language of the Ineseño Chumash people. Santa Ynez, CA: Santa Ynez Band
of Chumash Indians.
Bach, Emmon. 1975. Long vowels and stress in Kwakiutl. Texas Linguistic Forum
2. 9–19.
Baković, Eric. 2000. Harmony, dominance and control. Doctoral dissertation,
Rutgers University, New Brunswick, NJ.
Baković, Eric. 2004. Unbounded stress and factorial typology. In John J. McCarthy
(ed.), Optimality Theory in phonology: A reader. Oxford: Blackwell. ROA-244,
Rutgers Optimality Archive, http://roa.rutgers.edu/
Baković, Eric. 2007. A revised typology of opaque generalizations. Phonology 24.
217–259.
Baković, Eric. 2011. Opacity deconstructed. In van Oostendorp et al. (eds.) The
Blackwell companion to phonology, 2011.
Baković, Eric. 2013. Blocking and complementarity in phonological theory.
Sheffield: Equinox.
Baković, Eric & Colin Wilson. 2000. Transparency, strict locality, and targeted
constraints. In Roger Billerey & Brook Danielle Lillehaugen (eds.),
Proceedings of the 19th West Coast Conference on Formal Linguistics, 43–
56. Somerville, MA: Cascadilla Press.
Beckman, Jill. 1998. Positional faithfulness. Doctoral dissertation, University of
Massachusetts, Amherst.
Beesley, Kenneth & Lauri Karttunen. 2003. Finite state morphology. Stanford:
CSLI.
Bennett, William. 2013. Dissimilation, consonant harmony, and surface
correspondence. Doctoral dissertation, Rutgers University, New Brunswick,
NJ.
Benua, Laura. 1995. Identity effects in morphological truncation. In Jill Beckman,
Laura Walsh Dickey & Suzanne Urbanczyk (eds.), Papers in Optimality
Theory, 77–136. Amherst, MA: GLSA Publications.
Benua, Laura. 1997. Transderivational identity: Phonological relations between
words. Doctoral dissertation, University of Massachusetts, Amherst.
Beros, Achilles & Colin de la Higuera. 2014. A canonical semi-deterministic
transducer. In Alexander Clark, Makoto Kanazawa, & Ryo Yoshinaka (eds.),
Proceedings of the 12th International Conference on Grammatical Inference
(ICGI 2014), vol. 34, 33–148. JMLR: Workshop and Conference Proceedings.
Berstel, Jean. 1979. Transductions and context-free languages. Dordrecht:
Springer.
Blevins, Juliette. 2004. Evolutionary phonology. Cambridge: Cambridge University
Press.
Boersma, Paul. 1998. Functional phonology: Formalizing the interactions between
articulatory and perceptual drives. Doctoral dissertation, Universiteit van
Amsterdam. Published as LOT International Series 11. The Hague: Holland
Academic Graphics.
Boersma, Paul & Bruce Hayes. 2001. Empirical tests of the gradual learning
algorithm. Lingustic Inquiry 32. 45–86.
Browman, Catherine P. & Louis Goldstein. 1992. Articulatory phonology: An
overview. Phonetica 49. 155–180.
Buccola, Brian. 2013. On the expressivity of Optimality Theory versus ordered
rewrite rules. In Glyn Morrill & Mark Jan Nederhof (eds.), Proceedings of
Formal Grammar 2012 and 2013, vol. 8306 of Lecture notes in computer
science, 142–158. Berlin: Springer.
Büchi, J. Richard. 1960. Weak second-order arithmetic and finite automata.
Mathematical Logic Quarterly 6. 66–92.
Chandlee, Jane. 2014. Strictly local phonological processes. Doctoral dissertation,
University of Delaware.
Chandlee, Jane, Angeliki Athanasopoulou & Jeffrey Heinz. 2012. Evidence for
classifying metathesis patterns as subsequential. In Proceedings of the 29th
West Coast Conference on Formal Linguistics, 303–309. Somerville, MA:
Cascadilla Press.
Chandlee, Jane, Rémi Eyraud & Jeffrey Heinz. 2014a. Learning strictly local
subsequential functions. Transactions of the Association for Computational
Linguistics 2. 491–503.
Chandlee, Jane & Jeffrey Heinz. 2012. Bounded copying is subsequential:
Implications for metathesis and reduplication. In Proceedings of the 12th
Meeting of the ACL Special Interest Group on Computational Morphology and
Phonology, 42–51. Montreal: Association for Computational Linguistics.
Chandlee, Jane & Jeffrey Heinz. 2018. Strict locality and phonological maps.
Linguistic Inquiry 49. 23–60.
Chandlee, Jane, Jeffrey Heinz & Adam Jardine. 2018. Input strictly local opaque
maps. Phonology, to appear.
Chandlee, Jane, Adam Jardine & Jeffrey Heinz. 2014. Learning repairs for marked
structures. Poster at the Annual Meeting of Phonology. MIT.
Chomsky, Noam. 1951. Morphophonemics of Modern Hebrew. Doctoral
dissertation, University of Pennsylvania, Philadelphia. Published New York:
Garland Press, 1979.
Chomsky, Noam. 1956. Three models for the description of language. IRE
Transactions on Information Theory IT-2. 113–124.
Chomsky, Noam. 1965. Aspects of the theory of syntax. Cambridge, MA: MIT
Press.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York:
Harper & Row.
Clements, George N. 1985. The geometry of phonological features. Phonology
Yearbook 2. 225–252.
Comon, Hubert, Max Dauchet, Rémi Gilleron, Florent Jacquemard, Denis Lugiez,
Christof Löding, Sophie Tison & Marc Tommasi. 2007. Tree automata
techniques and applications. http://www.grappa.univ-lille3.fr/tata (accessed 12
October 2007).
Dell, François & Mohamed Elmedlaoui. 1985. Syllabic consonants and
syllabification in Imdlawn Tashlhiyt Berber. Journal of African Languages and
Linguistics 7. 105–130.
Dresher, B. Elan. 1999. Charting the learning path: Cues to parameter setting.
Linguistic Inquiry 30. 27–67.
Dresher, B. Elan. 2011. The phoneme. In van Oostendorp et al. (eds.) 2011, vol. 1,
241–266.
Edlefsen, Matt, Dylan Leeman, Nathan Myers, Nathaniel Smith, Molly Visscher &
David Wellcome. 2008. Deciding strictly local (SL) languages. In Jon
Breitenbucher (ed.), Proceedings of the Midstates Conference for
Undergraduate Research in Computer Science and Mathematics, 66–73.
Elgot, C. C. & J. E. Mezei. 1965. On relations defined by generalized finite
automata. IBM Journal of Research and Development 9. 47–68.
Endress, Ansgar D., Marina Nespor & Jacques Mehler. 2009. Perceptual and
memory constraints on language acquisition. Trends in Cognitive Science 13.
348–353.
Engelfriet, Joost & Hendrik Jan Hoogeboom. 2001. MSO definable string
transductions and two-way finite-state transducers. ACM Transactions on
Computational Logic 2. 216–254.
Finley, Sara. 2008. The formal and cognitive restrictions on vowel harmony.
Doctoral dissertation, Johns Hopkins University, Baltimore.
Finley, Sara. 2015. Learning non-adjacent dependencies in phonology:
Transparent vowels in vowel harmony. Language 91. 48–72.
Finley, Sara & William Badecker. 2009. Artificial language learning and feature-
based generalization. Journal of Memory and Language 61. 423–437.
Fougeron, Cécile & Patricia A. Keating. 1997. Articulatory strengthening at edges
of prosodic domains. Journal of the Acoustical Society of America 101. 3728–
3740.
Frank, Robert & Giorgo Satta. 1998. Optimality Theory and the generative
complexity of constraint violability. Computational Linguistics 24. 307–315.
Gainor, Brian, Regine Lai & Jeffrey Heinz. 2012. Computational characterizations
of vowel harmony patterns and pathologies. In Proceedings of the 29th West
Coast Conference on Formal Linguistics, 63–71. Somerville, MA: Cascadilla
Press.
Gallagher, Gillian. 2010. Perceptual distinctness and long-distance laryngeal
restrictions. Phonology 27. 435–480.
García, Pedro & José Ruiz. 2004. Learning k-testable and k-piecewise testable
languages from positive data. Grammars 7. 125–140.
García, Pedro, Enrique Vidal & José Oncina. 1990. Learning locally testable
languages in the strict sense. In Proceedings of the Workshop on Algorithmic
Learning Theory, 325–338. Tokyo.
Gazdar, Gerald & Geoffrey K. Pullum. 1982. Natural languages and context-free
languages. Linguistics and Philosophy 4. 469–470.
Gerdemann, Dale & Gertjan van Noord. 2000. Approximation and exactness in
finite state optimality theory. In Proceedings of the 5th Meeting of the ACL
Special Interest Group in Computational Phonology, 34–45.
Gildea, Daniel & Daniel Jurafsky. 1996. Learning bias and phonological-rule
induction. Computational Linguistics 24. 497–530.
Gold, E. Mark. 1967. Language identification in the limit. Information and Control
10. 447–474.
Goldsmith, John A. 1976. Autosegmental phonology. Doctoral dissertation,
Massachusetts Institute of Technology, Cambridge, MA.
Goldwater, Sharon & Mark Johnson. 2003. Learning OT constraint rankings using
a maximum entropy model. In Jennifer Spenader, Anders Eriksson & Östen
Dahl (eds.), Proceedings of the Stockholm Workshop on Variation within
Optimality Theory, 111–120. Stockholm: Stockholm University.
Gorman, Kyle. 2013. Generative phonotactics. Doctoral dissertation, University of
Pennsylvania, Philadelphia.
Graf, Thomas. 2010a. Comparing incomparable frameworks: A model theoretic
approach to phonology. University of Pennsylvania Working Papers in
Linguistics 16, Article 10. http://repository.upenn.edu/pwpl/vol16/iss1/10.
Graf, Thomas. 2010b. Logics of phonological reasoning. Master’s thesis,
University of California, Los Angeles.
Graf, Thomas. 2013. Local and transderivational constraints in syntax and
semantics. Doctoral dissertation, University of California, Los Angeles.
Hale, Mark & Charles Reiss. 2000. Substance abuse and dysfunctionalism:
Current trends in phonology. Linguistic Inquiry 31. 157–169.
Halle, Morris. 1959. The sound pattern of Russian. The Hague: Mouton.
Halle, Morris. 1978. Knowledge unlearned and untaught: What speakers know
about the sounds of their language. In Morris Halle, Joan Bresnan & George
Miller (eds.), Linguistic theory and psychological reality, 294–303. Cambridge,
MA: MIT Press.
Halle, Morris & Jean-Roger Vergnaud. 1987. An essay on stress. Cambridge, MA:
MIT Press.
Hansson, Gunnar. 2001. Theoretical and typological issues in consonant harmony.
Doctoral dissertation, University of California, Berkeley.
Hansson, Gunnar. 2008. Diachronic explanations of sound patterns. Language
and Linguistics Compass 2. 859–893.
Hayes, Bruce. 1986. Assimilation as spreading in Toba Batak. Linguistic Inquiry
17. 467–499.
Hayes, Bruce. 1995. Metrical stress theory. Chicago: University of Chicago Press.
Hayes, Bruce, Robert Kirchner & Donca Steriade (eds.). 2004. Phonetically-based
phonology. Cambridge: Cambridge University Press.
Hayes, Bruce & Colin Wilson. 2008. A maximum entropy model of phonotactics
and phonotactic learning. Linguistic Inquiry 39. 379–440.
Heinz, Jeffrey. 2007. The inductive learning of phonotactic patterns. Doctoral
dissertation, University of California, Los Angeles.
Heinz, Jeffrey. 2009. On the role of locality in learning stress patterns. Phonology
26. 303–351.
Heinz, Jeffrey. 2010a. Learning long-distance phonotactics. Linguistic Inquiry 41.
623–661.
Heinz, Jeffrey. 2010b. String extension learning. In Proceedings of the 48th Annual
Meeting of the Association for Computational Linguistics, 897–906. Uppsala:
Association for Computational Linguistics.
Heinz, Jeffrey. 2014. Culminativity times harmony equals unbounded stress. In
Harry van der Hulst (ed.), Word stress: Theoretical and typological issues,
255–275. Cambridge: Cambridge University Press.
Heinz, Jeffrey. 2016. Computational theories of learning and developmental
psycholinguistics. In Jeffrey Lidz, William Synder & Joe Pater (eds.), The
Oxford handbook of developmental linguistics. Oxford: Oxford University
Press.
Heinz, Jeffrey & William Idsardi. 2011. Sentence and word complexity. Science
333. 295–297.
Heinz, Jeffrey & William Idsardi. 2013. What complexity differences reveal about
domains in language. Topics in Cognitive Science 5. 111–131.
Heinz, Jeffrey, Anna Kasprzik & Timo Kötzing. 2012. Learning with lattice-
structured hypothesis spaces. Theoretical Computer Science 457. 111–127.
Heinz, Jeffrey & Regine Lai. 2013. Vowel harmony and subsequentiality. In András
Kornai & Marco Kuhlmann (eds.), Proceedings of the 13th Meeting on the
Mathematics of Language (MoL 13), 52–63. Sofia, Bulgaria.
Heinz, Jeffrey, Chetan Rawal, & Herbert G. Tanner. 2011. Tier-based strictly local
constraints for phonology. In Proceedings of the 49th Annual Meeting of the
Association for Computational Linguistics, 58–64. Portland, OR: Association
for Computational Linguistics.
Higuera, Colin de la. 1997. Characteristic sets for polynomial grammatical
inference. Machine Learning 27. 125–138.
Howard, Irwin. 1972. A directional theory of rule application in phonology. Doctoral
dissertation, Massachusetts Institute of Technology, Cambridge, MA.
Hulden, Mans. 2009. Finite-state machine construction methods and algorithms for
phonology and morphology. Doctoral dissertation, University of Arizona,
Tucson.
Hulst, Harry van der & Jeroen van de Weijer. 1995. Vowel harmony. In John A.
Goldsmith (ed.), The handbook of phonological theory, 495–534. Oxford:
Blackwell.
Humboldt, Wilhelm von. 1999. On language. Edited by Michael Losonsky.
Translated by Peter Heath. Cambridge: Cambridge University Press.
Originally published 1836.
Hyman, Larry M. 1998. Positional prominence and the “prosodic trough” in Yaka.
Phonology 15. 41–75.
Hyman, Larry M. 2007. Kuki-Thaadow: An African tone system in Southeast Asia.
Berkeley Phonology Lab Annual Report. University of California, Berkeley:
Department of Linguistics.
Hyman, Larry M. 2009. How (not) to do phonological typology: The case of pitch-accent.
Language Sciences 31. 213–238 (Data and Theory: Papers in Phonology in
celebration of Charles W. Kisseberth).
Hyman, Larry M. 2011. Tone: Is it different? In John A. Goldsmith, Jason Riggle &
Alan C. L. Yu (eds.), The handbook of phonological theory, 2nd edn., 197–238.
Oxford: Wiley-Blackwell.
Hyman, Larry M. 2014. How autosegmental is phonology? The Linguistic Review
31. 363–400.
Idsardi, William. 1998. Tiberian Hebrew spirantization and phonological
derivations. Linguistic Inquiry 29. 37–73.
Idsardi, William J. 2000. Clarifying opacity. The Linguistic Review 17. 337–350.
Jäger, Gerhard. 2002. Some notes on the formal properties of bidirectional
optimality theory. Journal of Logic, Language, and Information 11. 427–451.
Jardine, Adam. 2016a. Computationally, tone is different. Phonology 33. 247–283.
Jardine, Adam. 2016b. Locality and non-linear representations in tonal phonology.
Doctoral dissertation, University of Delaware.
Jardine, Adam, Jane Chandlee, Rémi Eyraud & Jeffrey Heinz. 2014. Very efficient
learning of structured classes of subsequential functions from positive data. In
Alexander Clark, Makoto Kanazawa & Ryo Yoshinaka (eds.), Proceedings of
the 12th International Conference on Grammatical Inference (ICGI 2014), vol.
34, 94–108. JMLR: Workshop and Conference Proceedings.
Jardine, Adam & Jeffrey Heinz. 2015a. A concatenation operation to derive
autosegmental graphs. In Proceedings of the 14th Meeting on the
Mathematics of Language (MoL 2015), 139–151. Chicago.
Jardine, Adam & Jeffrey Heinz. 2016. Learning tier-based strictly local languages.
Transactions of the Association for Computational Linguistics 4. 87-98.
Johnson, C. Douglas. 1972. Formal aspects of phonological description. The
Hague: Mouton.
Jurafsky, Daniel & James Martin. 2008. Speech and language processing: An
introduction to natural language processing, speech recognition, and
computational linguistics, 2nd edn. Upper Saddle River, NJ: Prentice-Hall.
Kager, René. 1999. Optimality Theory. Cambridge: Cambridge University Press.
Kaplan, Ronald & Martin Kay. 1994. Regular models of phonological rule systems.
Computational Linguistics 20. 331–378.
Karttunen, Lauri. 1998. The proper treatment of optimality in computational
phonology. In FSMNLP’98, 1–12. International Workshop on Finite-State
Methods in Natural Language Processing, Bilkent University, Ankara.
Kiparsky, Paul. 2000. Opacity and cyclicity. The Linguistic Review 17. 351–366.
Kisseberth, Charles W. 1970. On the functional unity of phonological rules.
Linguistic Inquiry 1. 291–306.
Kisseberth, Charles W. & David Odden. 2003. Tone. In Derek Nurse & Gérard
Philippson (eds.), The Bantu languages, 59–70. London: Routledge.
Kobele, Gregory. 2006. Generating copies: An investigation into structural identity
in language and grammar. Doctoral dissertation, University of California, Los
Angeles.
Kornai, András. 1995. Formal phonology. Outstanding Dissertations in Linguistics.
New York: Garland Publishing.
Koskenniemi, Kimmo. 1983. Two-level morphology. Publication no. 11, Department
of General Linguistics. Helsinki: University of Helsinki.
Krämer, Martin. 2003. Vowel harmony and Correspondence Theory. Berlin:
Mouton de Gruyter.
Kula, Nancy C. & Lee S. Bickmore. 2015. Phrasal phonology in Copperbelt
Bemba. Phonology 32. 147–176.
Lacy, Paul de. 2011. Markedness and faithfulness constraints. In van Oostendorp
et al. (eds.) 2011, Chapter 74.
Lai, Regine. 2012. Domain specificity in phonology. Doctoral dissertation,
University of Delaware.
Lai, Regine. 2015. Learnable vs. unlearnable harmony patterns. Linguistic Inquiry
46. 425–451.
Lautemann, Clemens, Pierre McKenzie, Thomas Schwentick & Heribert Vollmer.
2001. The descriptive complexity approach to LOGCFL. Journal of
Computer and System Sciences 62: 629–652.
Liberman, Mark & Alan Prince. 1977. On stress and linguistic rhythm. Linguistic
Inquiry 8. 249–336.
Lothaire, M. (ed.). 1997. Combinatorics on words. Cambridge: Cambridge
University Press.
Lothaire, M. (ed.) 2005. Algebraic combinatorics on words, 2nd edn. Cambridge:
Cambridge University Press.
Luo, Huan. 2017. Long-distance consonant harmony and subsequentiality. Glossa
2. 52.
Marlo, Michael. 2007. The verbal tonology of Lumarachi and Lunyala: Two dialects
of Luluyia. Doctoral dissertation, University of Michigan, Ann Arbor.
McCarthy, John J. 2003. OT constraints are categorical. Phonology 20. 75–138.
McCarthy, John J. 2004. Headed spans and autosegmental spreading.
Unpublished manuscript, University of Massachusetts, Amherst.
McCarthy, John J. 2007. Hidden generalizations: Phonological opacity in
Optimality Theory. London: Equinox.
McCarthy, John J. 2008a. Doing Optimality Theory. Oxford: Blackwell.
McCarthy, John J. 2008b. The gradual path to cluster simplification. Phonology 25.
271–319.
McNaughton, Robert & Seymour Papert. 1971. Counter-free automata.
Cambridge, MA: MIT Press.
Medvedev, Yu. T. 1964. On the class of events representable in a finite automaton.
In Edward F. Moore (ed.), Sequential machines: Selected Papers, 215–227.
Boston: Addison-Wesley. Originally published in Russian in Avtomaty, 1956,
385–401.
Mielke, Jeff. 2008. The emergence of distinctive features. Oxford: Oxford
University Press.
Mohri, Mehryar. 1997. Finite-state transducers in language and speech
processing. Computational Linguistics 23. 269–311.
Nevins, Andrew. 2010. Locality in vowel harmony. Cambridge, MA: MIT Press.
Odden, David. 1994. Adjacency parameters in phonology. Language 70. 289–330.
Ohala, John J. 1981. The listener as a source of sound change. In C. S. Masek, R.
A. Hendrik & M. F. Miller (eds.), Papers from the Parasession on Language
and Behavior, Chicago Linguistics Society, 178–203. Chicago: CLS.
Oncina, José & Pedro García. 1991. Inductive learning of subsequential functions.
Technical Report DSIC II-34, Universidad Politécnica de Valencia.
Oncina, José, Pedro García & Enrique Vidal. 1993. Learning subsequential
transducers for pattern recognition tasks. IEEE Transactions on Pattern
Analysis and Machine Intelligence 15. 448–458.
Oostendorp, Marc van, Colin J. Ewen, Elizabeth Hume & Keren Rice (eds.). 2011.
The Blackwell companion to phonology. 5 volumes. Oxford: Blackwell.
Padgett, Jaye. 1995. Partial class behavior and nasal place assimilation. In
Keiichiro Suzuki & Dirk Elzinga (eds.), Proceedings of the 1995 Southwestern
Workshop on Optimality Theory. University of Arizona: Coyote Papers.
Pater, Joe. 2000. *NC. In Kiyomi Kusumoto (ed.), Proceedings of the 26th Annual
Meeting of the North East Linguistics Society, 227–239. Amherst, MA: GLSA.
Pater, Joe. 2001. Austronesian nasal substitution revisited: What’s wrong with *NC
(and what’s not). In Linda Lombardi (ed.), Segmental phonology in Optimality
Theory: Constraints and representations, 159–182. Cambridge: Cambridge
University Press.
Payne, Amanda. 2017. All dissimilation is computationally subsequential.
Language 93. e353–e371.
Popper, Karl. 1959. The logic of scientific discovery. New York: Basic Books.
Potts, Christopher, Joe Pater, Rajesh Bhatt & Michael Becker. 2008. Harmonic
grammar with linear programming: From linear systems to linguistic typology.
Rutgers Optimality Archive ROA-984.
Potts, Christopher & Geoffrey K. Pullum. 2002. Model theory and the content of
OT constraints. Phonology 19. 361–393.
Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in
generative grammar. Technical Report 2, Rutgers University Center for
Cognitive Science.
Prince, Alan & Paul Smolensky. 2004. Optimality Theory: Constraint interaction in
Generative Grammar. Oxford: Blackwell.
Riggle, Jason. 2004. Generation, recognition, and learning in finite state Optimality
Theory. Doctoral dissertation, University of California, Los Angeles.
Roark, Brian & Richard Sproat. 2007. Computational approaches to morphology
and syntax. Oxford: Oxford University Press.
Roche, Emmanuel & Yves Schabes. 1997. Finite-state language processing.
Cambridge, MA: MIT Press.
Rogers, James. 1998. A descriptive approach to language-theoretic complexity.
Stanford: CSLI Publications.
Rogers, James, Jeffrey Heinz, Gil Bailey, Matt Edlefsen, Molly Visscher, David
Wellcome & Sean Wibel. 2010. On languages piecewise testable in the strict
sense. In Christian Ebert, Gerhard Jäger, & Jens Michaelis (eds.), The
mathematics of language, vol. 6149 of Lecture notes in Artifical Intelligence,
255–265. Dordrecht: Springer.
Rogers, James, Jeffrey Heinz, Margaret Fero, Jeremy Hurst, Dakotah Lambert, &
Sean Wibel. 2013. Cognitive and sub-regular complexity. In Glyn Morrill &
Mark-Jan Nederhof (eds.), Formal grammar, vol. 8036 of Lecture notes in
computer science, 90–108. Dordrecht: Springer.
Rogers, James & Geoffrey K. Pullum. 2011. Aural pattern recognition experiments
and the subregular hierarchy. Journal of Logic, Language and Information 20.
329–342.
Rose, Sharon & Rachel Walker. 2004. A typology of consonant agreement as
correspondence. Language 80. 475–531.
Rozenberg, Grzegorz & Arto Salomaa (eds.). 1997. Handbook of formal
languages: Beyond words, vol. 3. Dordrecht: Springer.
Sakarovitch, Jacques. 2009. Elements of automata theory. Cambridge: Cambridge
University Press. (Translated by Reuben Thomas from the French original,
Paris: Vuibert, 2003.)
Scott, Dana & Michael Rabin. 1959. Finite automata and their decision problems.
IBM Journal of Research and Development 5. 114–125.
Shieber, Stuart. 1985. Evidence against the context-freeness of natural language.
Linguistics and Philosophy 8. 333–343.
Sipser, Michael. 1997. Introduction to the theory of computation. Boston: PWS
Publishing.
Smolensky, Paul & Géraldine Legendre. 2006. The harmonic mind: From neural
computation to Optimality-Theoretic grammar. Cambridge, MA: MIT Press.
Suzuki, Keiichiro. 1998. A typological investigation of dissimilation. Doctoral
dissertation, University of Arizona, Tucson.
Tesar, Bruce. 1995. Computational Optimality Theory. Doctoral dissertation,
University of Colorado at Boulder.
Tesar, Bruce. 2014. Output-driven phonology. Cambridge: Cambridge University
Press.
Tesar, Bruce & Paul Smolensky. 1998. Learnability in Optimality Theory. Linguistic
Inquiry 29. 229–268.
Tesar, Bruce & Paul Smolensky. 2000. Learnability in Optimality Theory.
Cambridge, MA: MIT Press.
Thomas, Wolfgang. 1997. Languages, automata, and logic. In Rozenberg &
Salomaa (eds.) 1997, 389–455.
Turing, Alan. 1937. On computable numbers, with an application to the
Entscheidungsproblem. Proceedings of the London Mathematical Society (2nd series) 42.
230–265.
Walker, Rachel. 2011. Vowel patterns in language. Cambridge: Cambridge
University Press.
Wilson, Colin. 2001. Consonant cluster neutralization and targeted constraints.
Phonology 18. 147–197.
Wilson, Colin. 2003. Analyzing unbounded spreading with constraints: Marks,
targets, and derivations. Unpublished manuscript, UCLA.
Wilson, Colin. 2004. Experimental investigation of phonological naturalness. In
Proceedings of the 22nd West Coast Conference on Formal Linguistics, 534–
546. San Diego: University of California.
Anthony Brohan and Jeff Mielke
Frequent segmental alternations in P-
base 3
1 Introduction
Much of what is currently known about phonological typology is based on
the UCLA Phonological Segment Inventory Database (UPSID), compiled
by Ian Maddieson and colleagues. The UPSID database was published with
317 languages in Maddieson’s 1984 book Patterns of sounds, later
expanded to 451 inventories (Maddieson & Precoda 1990), and now being
expanded as LAPSyD (Maddieson 2014). The availability of UPSID has
meant that studies of phonological universals have preferentially favored
segment inventories (Lindblom & Maddieson 1988; Hyman 2008) over
phonological alternations. Crosslinguistic databases of phonological
alternations have often involved custom databases of particular types of
phonological patterns such as lenition (Lavoie 2001; Kirchner 2001) and
metathesis (Hume 2004). P-base (Mielke 2008) is a general database of
phonological alternations in several hundred languages, but until recently it
was organized in terms of classes of sounds, making it difficult to use to
study phonological patterns more generally. We report a study of frequent
phonological patterns in a newly reorganized version of P-base.
2 P-base
P-base was compiled from descriptions available on library stacks at the
Ohio State University and Michigan State University. It was originally
collected to test models of natural classes (Mielke 2005, 2008), and was
structured according to classes involved in phonological patterns (6170
classes involved in alternations, and about 3000 others involved in
phonotactic restrictions). These alternations include general phonological
patterns as well as morphologically conditioned patterns.
We report on a reorganized P-base that is structured according to
phonological alternations and distributional restrictions. As an example of
this reorganization, (1) shows how Japanese high vowel devoicing (Vance
1987) was entered in the original P-base as two phonologically active
classes, and (2) shows how the same phonological pattern is entered in P-
base 3, as a single phonological pattern involving two classes of sounds.
In the P-base 1 version, a user could easily query the classes of sounds
involved in patterns such as this one, but the details of the alternation were
stored in text strings, leaving it up to the user to perform keyword searches
in order to identify similar patterns, and even to associate these two entries
(the target and the trigger) with one another.
Figure 6.2 shows the result of applying G-sampling to the UPSID and
P-base samples of inventories. The coordinates of each IPA symbol in the
figure represent the proportion of P-base and UPSID inventories containing
the segment. The gray arrow originating from each IPA symbol represents
the effect of G-sampling: an arrow pointing down and to the left means that
G-sampling reduced the proportion of languages having the segment
(indicating that frequent occurrence in particular language families led to its
higher raw numbers). The fact that the segments are close to the diagonal
indicates that they are generally similar in both databases. The vast majority
of distinct IPA transcriptions are attested in only a handful of languages in
either sample. Segments appearing substantially above the diagonal are
more frequent in P-base, and segments appearing substantially below the
diagonal are more frequent in UPSID. Where they diverge, UPSID can be
considered more reliable, because the primary difference between the two
databases lies in their sampling techniques.
A major effect of G-sampling is to move everything closer to the
middle, along the diagonal. If G-sampling were having the effect of
reconciling differences between the UPSID and P-base samples, the arrows
would be pointing closer to the diagonal, not parallel to it. The two
databases are the most divergent on [r], and G-sampling indicates that this
is not due to sampling. On the other hand, [v] and [ɾ] both occur in about
25% of languages, but G-sampling affects them in opposite ways that are
similar across the two databases, suggesting that the frequency of [v] is
overestimated due to shared inheritance, while the frequency of [ɾ] is
underestimated due to language families that lack it.
The rest of this chapter is concerned with phonological alternations, for
which there is no other database to compare to. The fact that segment
frequencies in P-base are similar to UPSID, which was collected with
genetic balance in mind, leads us to be less concerned about genetic balance
in our interpretation of other patterns in the P-base language sample. While
it would be possible to apply G-sampling to our counts of phonological
alternations, we elect not to, on the basis of the inventory comparison, and
because G-sampling works best for phenomena whose presence or absence
is easily counted.
Counting the absence of a feature is where phonological patterns
become challenging to genetic sampling techniques. It is reasonable to
expect language descriptions to report binary features such as verb-object
vs. object-verb word order in a consistent way, and only a little less
reasonable to expect phonological descriptions to indicate in similar terms
whether a particular type of phonological segment is present or absent in
the inventory of a language. By comparison, descriptions of phonological
alternations and distributional restrictions vary widely in their
exhaustiveness, and it is much harder to interpret a pattern that is
unreported as truly absent from the language. A detailed phonological
description may include a range of postlexical rules that require some
phonetic sophistication to recognize, and the fact that such a pattern is not
included in a description of a related language is not conclusive.93 To be
confident enough in the absence of a pattern to conclude that it is innovative
in a related language would require going and looking for each pattern in
each language where it wasn’t reported. We have not done this.
A striking observation from Figure 6.3 is that there are few kinds of
frequent place changes (represented by horizontal or diagonal arrows
passing over a different color). The most salient manner changes represent
voicing and devoicing of obstruents, spirantization of stops, nasalization of
voiced stops, /ɾ/, and /l/, gliding of high vowels, and height changes among
vowels. The most frequent place changes involve nasals changing into other
nasals, palatalization of the consonants /s k ɡ/, debuccalization, including
/k/ spirantizing to [h], and vowels changing backness and rounding,
especially by changing to [ə].
Another set of place changes requires comment: /w/ is involved in many
place changes with bilabial stops and voiced labial fricatives. This reflects
the arbitrary choice of putting labiovelar /w/ near the velars instead of the
labials. Additionally, we do not have much articulatory evidence for how
many instances of /w/ really involve a velar constriction, and how many
should really be transcribed as /β̞/ and placed in the bilabial column. UPSID
has transcribed more approximants as /β̞/ than P-base has. /k/ → [h] is
notable because most instances of spirantization do not involve place
changes, but /k/’s fricative counterpart /x/ is considerably less frequent than
/h/ in segment inventories. Reinterpretation of /k/ lenition as /k/ → [h] in
languages with /h/ but no /x/ is consistent with the principle of structural
analogy (Blevins 2004: 154):
In the course of language acquisition, the existence of a (non-ambiguous) phonological
contrast between A and B will result in more instances of sound change involving
shifts of ambiguous elements to A or B than if no contrast between A and B existed.
The input-output mappings of Figure 6.3 are summarized in Figures 6.4 and
6.5. In both figures, the background color indicates the place or
manner/voicing of the output segment, and color of the outer ring of the
balloon indicates the place or manner/voicing of the input segment. The
area of each colored outer ring represents the number of structure
preserving input-output mappings, and the area of each inner white circle
represents the number of structure changing input-output mappings. The
numbers used to generate the balloons are segment counts, not pattern
counts.
The manner mappings in Figure 6.4 are mostly off the diagonal,
meaning that there are few changes that do not affect manner or voicing. The
changes that do preserve manner and voicing are dominated by the categories
involved in place changes observed above (nasals, palatalization of obstruents,
and vowel changes). All of these are about evenly split between structure
changing and structure preserving.
The leading (off-diagonal) manner changes include obstruent voicing,
which is mostly structure changing, and obstruent devoicing, which is
mostly structure preserving. This asymmetry can potentially be explained
entirely in terms of the rarity of voiced obstruents without voiceless
counterparts.
Vowel height changes are abundant, especially between high and mid
vowels, and are more likely to be structure preserving than non-height
changes. High vowels frequently turn into glides, and the converse is less
frequent. Stops change manner in many different ways, and a salient fact is
that stop changes that result in a fricative, flap, implosive, or ejective are
typically structure changing, but stop changes that result in a nasal, trill,
glide, or lateral approximant are more likely to be structure preserving.
The place mappings in Figure 6.5 mostly fall into one of three types.
First, many are on the diagonal, meaning place is unchanged (as seen above
in Figure 6.4). Second, many involve pairings of bilabial, alveolar, palatal,
and velar, which are by far the most frequent nasal places of articulation (as
can be seen in Figure 6.3, and above in Figure 6.1). This is because nasal
place assimilation accounts for a large portion of place changes. Third,
there are many vowel backness/rounding changes, which in this figure are
overlaid on palatal, labial-palatal, velar, and labial-velar. The infrequency of
changes involving labial-palatal/front rounded segments reflects the rarity of
these segments to begin with. Similarly, the fact that place changes resulting
in bilabial or alveolar outputs are mostly structure preserving reflects the
fact that almost all inventories contain /m/ and /n/, so nasal place
assimilation to these places of articulation is almost never structure
changing.
5 Top-down analysis
We will analyze the phonological alternations in P-base in two ways. The
first one involves labeling the patterns according to established
phonological criteria and finding the most frequently co-occurring labels.
The second one will involve clustering the patterns in order to induce
categories that may not be detected through expected labels.
5.1 Methods
The 4560 phonological alternations were partitioned according to the criteria
listed in Tables 6.1–6.3 on the basis of the automated phonological feature
analysis provided by P-base, in order to count the most frequently-occurring
patterns.
All 4560 patterns under consideration meet the criteria for Rule (having
an input-output mapping), and do not meet the criteria for Distribution
(describing what can or cannot occur in a particular context) so we omit
these criteria.95
Table 6.1: Phonological pattern labels for contexts
Tables 6.1–6.3 show the labels used to classify phonological patterns, and
the number of matching patterns for each label. These labels were selected
in order to systematize what we as phonologists expect to be frequent, and
what we find to be frequent on the basis of our experience querying P-base.
Most phonological alternations in P-base fall into more than one of the
categories, and in the next section we will examine the co-occurrence of
labels. To select label co-occurrences to report, we conducted a chi-squared
test for the combinations of each process label with every other label, in
order to identify pairs of labels that co-occur more often than expected by chance.
An example is shown in (4), comparing the co-occurrence of the labels
Deletion and I=h.
For this test, χ2(1) = 123.31, exceeding 6.635, which is the critical value for
α = 0.01. This captures the fact that the 62 cases of /h/ deletion are more
than would be expected if the two labels co-occurred at random. Rather, /h/
is the input for 2.4% of all patterns, but it is the input for 8.0% of all
deletion patterns, and /h/ deletion accounts for 57% of all occurrences of /h/
as input. Trivial co-occurrences were omitted (e.g., I=Nasal implies I=C,
and I=Nasal is not expected to be independent from O=Nasal) and labels
which imply the same subset of labels were compared on that subset (e.g.,
Vowel Raising and I=a were compared on the basis of the 1383
V→ patterns, not all 4560 patterns, because both labels imply that the input
is a vowel).
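The test in (4) can be reproduced approximately as follows; the cell counts below are back-calculated from the percentages reported in the text rather than taken from P-base itself, so the resulting statistic comes close to, but does not exactly match, the reported 123.31.

# Approximate 2x2 chi-squared test for the co-occurrence of Deletion and I=h.
total     = 4560
h_input   = round(0.024 * total)    # ~109 patterns with /h/ as input
deletions = round(62 / 0.080)       # ~775 deletion patterns (62 is 8.0% of them)

observed = [[62, deletions - 62],                                   # deletion
            [h_input - 62, (total - deletions) - (h_input - 62)]]   # non-deletion
row_sums = [sum(r) for r in observed]
col_sums = [sum(c) for c in zip(*observed)]

chi2 = sum((observed[i][j] - row_sums[i] * col_sums[j] / total) ** 2
           / (row_sums[i] * col_sums[j] / total)
           for i in range(2) for j in range(2))
print(f"chi2(1) = {chi2:.2f}")      # well above the 6.635 cutoff for alpha = 0.01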
Table 6.2: Phonological pattern labels for inputs and outputs
Table 6.3: Phonological pattern labels for processes
5.2 Results by pattern type
Figures 6.6, 6.7, 6.10, and 6.12 illustrate some of the patterns in the
structure of the label counts in Tables 6.1–6.3. Each circle’s area represents
the number of patterns matching one or more labels. The rectangular nodes
are broad categories of patterns, and edges connect groups that are in a
subset-superset relationship. The purpose of displaying them this way is for
the relative size of the circles to provide a visual gestalt impression of what
the bulk of phonological patterns are.
From Tables 6.1–6.3, above, a few facts are apparent, each investigated
more closely below. Deletion is 2.4 times as frequent as epenthesis, and
glides and glottals make up a disproportionately large portion of the cases
of epenthesis, as compared to deletion. Assimilatory and non-assimilatory
changes each make up a little over a third of phonological patterns in the
sample. Among assimilatory changes, regressive assimilation is more
frequent than progressive assimilation. Regressive assimilation accounts for
more than twice as many patterns as progressive assimilation. These
asymmetries between formally symmetrical processes such deletion and
epenthesis and between progressive and regressive assimilation are an
important contribution of phonological typology to phonological theory: an
important goal of phonological theory is to account for why some things
happen more often than other things, whether it is because of markedness,
the role of sound change, or something else.
Figure 6.6 illustrates some major patterns for epenthesis. About one third of
epenthesis patterns in P-base involve epenthetic vowels, and 64% of these
are one of the vowels [i u ə]. Among many factors thought to contribute to
epenthetic vowel quality, (e.g., Hume & Bromberg 2005), it has often been
observed that epenthetic vowels tend to be short and otherwise perceptually
non-salient. High vowels are often shorter than lower vowels (Catford
1977; Maddieson 1997), so epenthetic high vowels are consistent with the
general idea that phonological repairs make minimal changes (Steriade
2001). [i u] are likely the shortest underlying vowels in the inventories of
many languages in P-base, and while [ə] occurs in fewer inventories than [i
u], [ə] epenthesis accounts for the majority of vowel epenthesis patterns that
are structure-changing with respect to segments (i.e., [ə] epenthesis in
languages without /ə/).96
There are 210 cases of epenthesis of glides, glottals, and other
consonants. Of these, 43.7% are glide epenthesis, 33.3% are glottal
epenthesis, and all other types of consonants combined amount to 24.0%.
This is consistent with two recent accounts of the typology of consonant
epenthesis. Vaux (2002) showed that consonant epenthesis is not restricted
to a few default consonants, but that nearly every familiar consonant is
epenthetic in at least one language, and that many of the more obscure
epenthetic consonants (such as [ɹ] in some varieties of English) are due to
restructuring of deletion patterns. Blevins (2008a) argued that the record of
sound changes supports two basic sources of epenthetic consonants:
reinterpretation of vowel-vowel sequences as vowel-glide-vowel, and the
phonologization of naturally occurring irregular phonation at prosodic
boundaries (Pierrehumbert & Talkin 1992) as glottal consonants. Blevins
attributes epenthetic consonants other than glides and glottals to complex
and/ or unnatural sources such as subsequent glide fortition and the
restructuring of consonant deletion patterns. Blevins’ account predicts that
non-glide/non-glottal epenthesis will be sparse, because such patterns require
telescoping or restructuring of existing patterns, and the particular
epenthetic consonant involved is dependent on fortition and deletion
patterns occurring in any particular language, which are likely not to be
nearly as specific as the sources of epenthetic glides and glottals. The
contexts in which glottals and glides are epenthesized in P-base are also
consistent with the historical account. 61% of glide epenthesis is
intervocalic, and most of the rest is either prevocalic or postvocalic. It
generally is not sensitive to word boundaries. On the other hand, only the
word-initial prevocalic context is significantly associated with glottal
epenthesis, accounting for 33% of [ʔ] epenthesis.
5.2.2 Assimilation
Figure 6.10 illustrates some major patterns for assimilation, which accounts
for 35.8% of the sound patterns in P-base, or a little more than half of the
segmental changes (phonological alternations that are not deletion or
epenthesis). 24.2% of assimilation patterns change vowels into other
vowels, and 24.6% of these are nasalization. The rest are vowel quality
changes that are predominantly conditioned by other vowels. 51.4% of
consonant assimilation patterns occur within consonant clusters, which is
reminiscent of consonant deletion.
Consonant-consonant assimilation patterns are even more biased toward
regressive assimilation (2.9 times as many regressive as progressive CC
assimilations, vs. a ratio of 2.0 for assimilations in general). The elephant in
the room for regressive vs. progressive CC assimilation is regressive nasal
place assimilation, which accounts for 46.4% of all regressive CC
assimilation. While this is consistent with the perceptual account of CC
repair strategies, it can also be attributed in part to nasal-consonant clusters
simply being more frequent than consonant-nasal clusters. Total assimilation
accounts for 14.6% of regressive CC assimilation, and a negligible part of
progressive CC assimilation. Nasal place assimilation and total assimilation
account for much of the directional asymmetry in CC assimilation. If nasal
place assimilation is excluded, the ratio of regressive to progressive
assimilation in consonant clusters drops from 2.9 to 1.7.
Voicing and devoicing together account for 32.9% of CC assimilation.
Devoicing is biased toward regressive (with a ratio of 2.7) while voicing is
not. In addition to consonant cluster assimilation, intervocalic voicing
accounts for 32.3% of bidirectional assimilation patterns, and it is the
largest recognizable subgroup within prevocalic and postvocalic
assimilatory consonant changes, although it is not significantly more
frequent in those contexts. Palatalization and lenition are ill-served by the
split into assimilatory and non-assimilatory patterns, because only some of
each is formally assimilation in the feature system we are using. We have
excluded palatalization from the assimilation analysis (because we are using
a feature system that does not readily capture it as assimilation) and
included it below in Figure 6.12 with other miscellaneous patterns. If
included here, palatalization would increase the number of prevocalic
regressive assimilations among consonants. Many intervocalic lenition
patterns are assimilatory, and these are included in both figures.
Figure 6.11 shows assimilatory changes by feature and context. Unlike
the previous balloon plots, the [+] and [−] values are not superimposed, and
the visible area of the black outer ring represents the number of times the
[−] value of the feature spreads.
Figure 6.12 illustrates some major types of sound patterns not addressed in
the preceding sections. Lenition (including voicing, spirantization, and
debuccalization) accounts for 628 patterns (13.77%), many of which are
also classified as assimilation. Word-final devoicing occurs 48 times
(1.05%). Final devoicing can be considered to be assimilatory in utterance-
final position, and word-final devoicing has been analyzed as a
generalization of utterance-final devoicing (see, e.g., Vennemann 1974;
Blevins 2006; Myers 2010, 2012). A possible connection between
devoicing in clusters and at edges is that assimilatory final devoicing is
regressive, and in consonant clusters, regressive devoicing assimilation is
quite a bit more frequent than progressive devoicing assimilation, while
voicing is symmetrical.
Figure 6.11: Vowel (left) and consonant (right) assimilation by feature
and context.
Palatalization (defined in terms of the change, not the trigger) has 145
occurrences (3.18%), and 82 of these are prevocalic. Many of these are
triggered specifically by front and/ or high vowels, as expected. While we
did not code for subsets of vowels in this analysis, this is certainly relevant
for palatalization. There are 116 cases of vowel gliding, where a vowel
becomes a vocalic glide (or 2.54% of all patterns). For comparison, there
are only 17 instances where glides become vowels. 73% of vowel gliding
instances occur prevocalically and 66% involve only high vowels.
In the top-down analysis, the label “Lenition” has been used to
characterize a set of pre-determined changes (Degemination, Spirantization,
Debuccalization, Voicing, Vowel Shortening). We can instead unpack this
notion of lenition and look at changes which occur intervocalically, which
involve changes of the feature [sonorant], [voice], and [continuant]. Below
is a view of the feature changes, highlighting common pathways of
intervocalic lenition (Figure 6.13).
Common paths which emerge from this picture are stop voicing, voiced and
voiceless stop spirantization, and voiced stop flapping. Other changes along
this lattice (gliding of voiceless stops, e.g., /p/ → [w] / V__V) are also
represented. This lattice illustrates
that most of these lenition patterns typically make a minimal feature
change.
Figure 6.13: Changes involved in intervocalic lenition.
6 Bottom-up analysis
A bottom-up analysis of the patterns in P-base was conducted to determine
whether there were significant groupings of features patterning together that
our top-down analysis was missing. The bottom-up analysis generates
generalizations based on the data, and induces potential categories of
patterns from the observed set of phonological patterns.
6.1 Methods
We employed Multiple Correspondence Analysis (MCA), a vector-space reduction technique for multinomial data. Like Principal Components Analysis, it summarizes a large set of variables with a small number of factors; the data set is thereby reduced to a smaller number of factors, which can be inspected to see how features pattern together. Patterns which are associated with each other show up with similar weightings on their factors; for instance, features like [±tense] and [±ATR] generally group together in their behavior, and so have similar weightings in the factor space.
Each factor can be interpreted as summarizing a dimension along which the set of phonological rules varies. The first factor splits the set of patterns into left-triggered and right-triggered rules. We can inspect the grouping of features in this factor space to see what relations exist between features, with the idea that features which are close to each other generally pattern together. We do this by running a hierarchical cluster analysis on a trimmed set of features (features with a low factor loading have little predictive value in the MCA – they do not pattern with other features in a predictable manner). Hierarchical clustering identifies features which are close together in the factor-loading space, effectively yielding a tree of features which tend to co-occur with each other.
Two analyses were undertaken. The first takes binary features referring to P-base's feature analysis of the input, output, change, environment, and detected assimilations (to the left and right).99 Here the number of features is tremendous (46 possible values in 7 possible positions), and after filtering features which are farther away than 1.2 in our factor loadings we have the following cluster dendrograms.
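As a rough illustration of this bottom-up procedure (a sketch of the general workflow, not the authors' own scripts), the following Python fragment codes pattern descriptions as binary indicators, extracts factors, trims features by their factor loadings, and clusters what remains. The column names and toy data are invented for the example, and PCA of the indicator matrix is used here as a stand-in for MCA; only the 1.2 cut-off echoes the text.

```python
# A schematic sketch of the bottom-up procedure, not the authors' code:
# binary-coded pattern descriptions -> factor extraction -> trim features ->
# hierarchical clustering of the surviving factor loadings.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, dendrogram

# Toy stand-in for P-base: one row per pattern, one binary column per
# feature/position combination (input, output, change, environment, ...).
rng = np.random.default_rng(0)
columns = ["env_L_coronal", "env_R_front", "change_+voice",
           "change_+continuant", "input_-sonorant", "output_+strident"]
data = pd.DataFrame(rng.integers(0, 2, size=(200, len(columns))), columns=columns)

# Factor extraction; PCA of the binary indicator matrix stands in for MCA here.
pca = PCA(n_components=3).fit(data)
loadings = pd.DataFrame(pca.components_.T, index=columns,
                        columns=["factor1", "factor2", "factor3"])

# Trim features by their distance in the factor-loading space (the 1.2 cut-off
# follows the text; how it is applied here is schematic).
kept = loadings[np.linalg.norm(loadings, axis=1) <= 1.2]

# Hierarchical clustering of the remaining features yields a dendrogram of
# features that tend to pattern together.
dendrogram(linkage(kept.values, method="ward"), labels=list(kept.index))
plt.show()
```

With the real P-base coding in place of the toy data frame, the resulting dendrogram would be of the kind shown in Figures 6.14 and 6.15.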
6.2 Results
With this set of data, the primary axis of variation is between left-triggered
and right-triggered rules (Figures 6.14 and 6.15). Inspecting the diagrams yields predictable clumps of features which pattern together, generally around triggering context, and a number of processes (such as lenition) tend to cluster together. Sensible labels were attached to groups of clusters by querying P-base to see which sets of patterns the clustered features describe.
7 Discussion
Phonological alternations are clearly a diverse set of phenomena, often
reflecting idiosyncratic, arbitrary facts about the particular language in
which they occur. A major goal of phonological typology is to determine
what is frequent and why, and a major recurring theme in this exploration of
P-base has been that certain very specific types of phonological alternations
are extremely frequent, and their frequency is not predictable on the basis of
the frequency of their parts. It is useful to interpret the very frequent
patterns in terms of potential phonetic and structural sources, i.e., in terms
of phonetic factors that could drive sound change and independent
linguistic or cognitive factors that could drive learners to learn sound
patterns in a particular way.
7.3 Conclusions
This study of frequent sound patterns in P-base has been deliberately incomplete. We have avoided devoting much attention to types of sound patterns that have already been the focus of considerable crosslinguistic work (but could be revisited), and we have certainly overlooked interesting types of sound patterns that have not been well studied before. We encourage
readers who have been intrigued by any of these possibilities, or by
questions raised in other chapters in this volume, to follow up with their
own P-base queries at http://pbase.phon.chass.ncsu.edu.
References
Bickel, Balthasar. 2008. A refined sampling procedure for genealogical control.
Sprachtypologie und Universalienforschung 61. 221–233.
Bickel, Balthasar & Johanna Nichols. 1996. The AUTOTYP database. http://www.s
pw.uzh.ch/autotyp/, electronic database.
Blevins, Juliette. 2004. Evolutionary phonology. Cambridge: Cambridge University
Press.
Blevins, Juliette. 2006. A theoretical synopsis of Evolutionary Phonology.
Theoretical Linguistics 32. 117–165.
Blevins, Juliette. 2008a. Consonant epenthesis: Natural and unnatural histories. In
Jeff Good (ed.), Linguistic universals and language change, 79–107. Oxford:
Oxford University Press.
Blevins, Juliette. 2008b. Natural and unnatural sound patterns: A pocket field
guide. In Klaas Willems & Ludovic de Cuypere (eds.), Naturalness and
iconicity in language, 121–148. Amsterdam: Benjamins.
Catford, J. C. 1977. Fundamental problems in phonetics, volume 1. Edinburgh:
Edinburgh University Press.
Chiu, Chenhao & Bryan Gick. 2013. Producing whole speech events: Anticipatory
lip compression in bilabial stops. Proceedings of Meetings on Acoustics 19.
060252. http://asa.scitation.org/doi/abs/10.1121/1.4800579
Cho, Young-Mee. 1990. Parameters of consonantal assimilation. Ph.D. thesis,
Stanford University. (Published München: Lincom, 1999.)
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York:
Harper and Row.
Clements, G. N. 1985. The geometry of phonological features. Phonology
Yearbook 2. 225–252.
Clements, G. N. & Elizabeth V. Hume. 1995. The internal organization of speech
sounds. In John Goldsmith (ed.), The handbook of phonological theory, 245–
306. Oxford: Blackwell.
Côté, Marie-Hélène. 2000. Consonant cluster phonotactics: A perceptual
approach. Ph.D. thesis, Massachusetts Institute of Technology. https://rucore.li
braries.rutgers.edu/rutgers-lib/38478/
Gastner, Michael T. & M. E. J. Newman. 2004. Diffusion-based method for
producing density-equalizing maps. Proceedings of the National Academy of
Sciences of the United States of America 101. 7499–7504.
Gick, Bryan, I. Ian Stavness & C. Chenhao Chiu. 2013. Coarticulation in a whole
event model of speech production. Proceedings of Meetings on Acoustics 19,
060207. http://asa.scitation.org/doi/abs/10.1121/1.4799482.
Hall, Robert A. 1949. The linguistic position of Franco-Provençal. Language 25. 1–
14.
Halle, Morris & George N. Clements. 1983. Problem book in phonology.
Cambridge, MA: MIT Press.
Halle, Morris, Bert Vaux & Andrew Wolfe. 2000. On feature spreading and the
representation of place of articulation. Linguistic Inquiry 31. 387–444.
Hume, Elizabeth V. 2004. The indeterminacy/attestation model of metathesis.
Language 80. 203–237.
Hume, Elizabeth & Ilana Bromberg. 2005. Predicting epenthesis: An information-
theoretic account. Paper presented at 7èmes journées internationales du
réseau français de phonologie, Aix-en-Provence. https://www.researchgate.ne
t/publication/228973329_Predicting_epenthesis_An_Information-theoretic_acc
ount
Hyman, Larry M. 2008. Universals in phonology. The Linguistic Review 25. 83–
137.
Jakobson, Roman, C. Gunnar M. Fant & Morris Halle. 1952. Preliminaries to
speech analysis: The distinctive features and their correlates. Massachusetts
Institute of Technology, Acoustics Laboratory, Technical Report No. 13; 2nd
printing with additions and corrections.
Kirchner, Robert M. 2001. An effort based approach to consonant lenition. New
York: Routledge.
Lavoie, Lisa M. 2001. Consonant strength: Phonological patterns and phonetic
manifestations. New York: Garland.
Lindblom, Björn & Ian Maddieson. 1988. Phonetic universals in consonant
systems. In Larry M. Hyman & Charles N. Li (eds.), Language, speech, and
mind: Studies in honour of Victoria Fromkin, 62–78. London: Routledge.
Maddieson, Ian. 1984. Patterns of sounds. Cambridge: Cambridge University
Press.
Maddieson, Ian. 1997. Phonetic universals. In William J. Hardcastle & John Laver
(eds.), The handbook of phonetic sciences, 619–639. Oxford: Blackwell.
Maddieson, Ian. 2014. LAPSyD: Lyon-Albuquerque Phonological Systems
Database, CNRS, Lyon, France. http://www.lapsyd.ddl.ish-lyon.cnrs.fr/lapsyd/
Maddieson, Ian & Kristin Precoda. 1990. Updating UPSID. UCLA Working Papers
in Phonetics 74. 104–111.
Mielke, Jeff. 2003. The interplay of speech perception and phonology:
Experimental evidence from Turkish. Phonetica 60. 208–229.
Mielke, Jeff. 2005. Ambivalence and ambiguity in laterals and nasals. Phonology
22. 169–203.
Mielke, Jeff. 2008. The emergence of distinctive features. Oxford: Oxford
University Press.
Mielke, Jeff. 2017. Visualizing phonetic segment frequencies with density-
equalizing maps. Journal of the International Phonetic Association, https://doi.
org/10.1017/S0025100317000123.
Mielke, Jeff, Lyra Magloughlin & Elizabeth Hume. 2011. Evaluating the
effectiveness of Unified Feature Theory and three other feature systems. In
John Goldsmith, Elizabeth Hume & Leo Wetzels (eds.), Tones and features: In
honor of G. Nick Clements, 223–263. Berlin: Mouton de Gruyter.
Moreton, Elliot. 2008. Analytic bias and phonological typology. Phonology 25. 83–
127.
Myers, Scott. 2010. Regressive voicing assimilation: Production and perception
studies. Journal of the International Phonetic Association 40. 163–179.
Myers, Scott. 2012. Final devoicing: Production and perception studies. In Toni
Borowsky, Shigeto Kawahara, Mariko Sugahara & Takahito Shinya (eds.),
Prosody matters: Essays in honor of Elisabeth Selkirk, 148–180. London:
Equinox Press.
Pierrehumbert, Janet & David Talkin. 1992. Lenition of /h/ and glottal stop. In G. J.
Doherty & D. R. Ladd (eds.), Papers in laboratory phonology, vol. 2: Gesture,
segment, prosody, 90–117. Cambridge: Cambridge University Press.
Sagey, Elizabeth C. 1986. The representation of features and relations in non-
linear phonology. Ph.D. dissertation, Massachusetts Institute of Technology.
(Published New York: Garland, 1990.)
Simpson, Adrian P. 1999. Fundamental problems in comparative phonetics and
phonology: Does UPSID help to solve them? In Proceedings of ICPhS XIV,
Berkeley, CA, 349–352. http://www.personal.uni-jena.de/~x1siad/papers/icphs
99_fund.pdf
Steriade, Donca. 2001. Directional asymmetries in place assimilation: A perceptual
account. In Elizabeth Hume & Keith Johnson (eds.), Perception in phonology,
219–250. New York: Academic Press.
Vance, Timothy J. 1987. An introduction to Japanese phonology. Albany, NY: State
University of New York Press.
Vaux, Bert. 2002. Consonant epenthesis and the problem of unnatural phonology,
Paper presented at Yale University Linguistics Colloquium.
Vennemann, Theo. 1974. Words and syllables in natural generative phonology. In
A. Bruck, R. Fox & M. L. Galy (eds.), CLS 10: Parasession on natural
phonology, 364–374. Chicago: Chicago Linguistic Society.
Winters, Stephen J. 2003. Empirical investigations into the perceptual and
articulatory origins of crosslinguistic asymmetries in place assimilation. Ph.D.
dissertation, Ohio State University, Columbus, OH. https://etd.ohiolink.edu/pg_
10?0::NO:10:P10_ACCESSION_NUM:osu1054756426
Aditi Lahiri
Predicting universal phonological
contrasts
1 Introducing FUL
In a landmark work, Jakobson, Fant, & Halle (1952, henceforth JFH)
proposed a set of 21 distinctive features for describing phonological
systems. Well defined acoustic and articulatory correlates were identified
for their features, and the same features were employed to classify place of
articulation for vowels and consonants. An example would be the feature
acute: front vowels (such as [i y e ø æ]) and fronted consonants (e.g.,
alveolars and palatals) were classified as acute and characterised as having
high frequency energy. First proposed in 1999, the FUL system (short for
Featurally Underspecified Lexicon) endorsed these two fundamental
assumptions of JFH’s. The following considerations, some differing from
JFH, are especially highlighted in FUL: (i) phonological features form a
hierarchical system; (ii) all features are monovalent; (iii) the contrasts
established by this set of features should account for phonological
alternations across the languages of the world; (iv) a small set of features
are universally underspecified, and these features should therefore always
be part of the inventory; (v) there are no feature dependencies; (vi)
underlying phonological representations, as part of the mental lexicon,
govern production and comprehension, with underspecification, thus,
implying asymmetries in processing; (vii) feature specification and building
the feature tree during acquisition initially follow a universal pattern; (viii)
feature specification and underspecification should also play a part in
language change.
Notions from the JFH tradition such as “markedness”, “specificity”,
“redundancy”, and “activity” have in one way or another been widely used
by phonologists. No one has ever assumed that all features have the same
“weight”, and most phonologists do not specify non-contrastive features.
Chomsky & Halle (1968) engaged in detailed discussions about markedness
combined with redundancy to obtain the right phonological alternations. In
the early eighties, underspecification was hotly debated (cf. Archangeli
1988), particularly with respect to coronality (Paradis & Prunet 1991), and
the concept was indeed frowned upon (McCarthy 1988). Halle et al. (2000)
emphasise that full specification for contrastive features should be the
norm. Despite the unease, there is no doubt that asymmetries and
markedness differences exist across feature distributions and directions of
the output of phonological rules, and various methods have been employed
to handle them. Calabrese (1995) distinguished different types of feature
representations such as contrastive, marked, and full, which in turn were
interspersed in the ordering of rules. Mohanan (1993) favoured what he
called “fields of attraction” and “dominance”, which allowed him to express
degrees of markedness. Clements (2001) proposed a complex model
combining both specification and underspecification, which allowed non-
contrastive features to be specified if they were “active” in phonology. He
distinguished between “active” features (which may form natural classes)
and “prominent” features (which, for instance, play a role in spreading).
Against this historical backdrop, this chapter sets out the FUL view of
underspecification and asymmetry, and specifically addresses two
questions:
(i) How do FUL’s features and their hierarchical organisation account for
the phonological contrasts of the languages of the world?
(ii) To what extent are (UNDER)SPECIFICATION and (IN)ACTIVITY correlated?
The hierarchy begins with [low], this being the first division by Jakobson & Halle (1956) on the grounds of highest sonority. The next cut follows the common pattern of place-next. The most important aspect is that /i/ is [coronal], and this is the feature that is required to trigger palatalisation. All four-vowel systems of this family have the strong i as coronal. The three-vowel systems /i u a/, however, do not have /ə/, nor is palatalisation triggered. Thus, these vowel systems (again beginning with [low]) are organised as follows:
This is an elegant analysis which produces the pattern of alternations set out
in (9). Weak i alternations show no effect of the feature CORONAL, while
strong i alternations do, since /i/ is specified for CORONAL. Strong i leads to
palatalisation with surface [tʃ] and [ʎ].
Although FUL agrees that the vowel /i/ is [CORONAL], it faces several problems with the assumption that CORONAL alone triggers palatalisation. First, the assumption in FUL is that palatalisation would normally occur with the additional feature [HIGH]; certainly the vowel /i/ is involved, but not the main place feature. Second, [CORONAL] would be underspecified, and thus would not play an active role. How could it work under these assumptions, and would such an analysis be in any way preferable? We provide an alternative below.
First, the Inuit palatalisations affect all places of articulation; a summary
from C&D’s data is in (10).
Note that the obstruents become strident, which would be the phonetic enhancement of the palatalisation process. It is actually not evident from C&D’s analysis why [CORONAL] is the active feature relevant for palatalisation, since the inputs /t l s/ are all [CORONAL] to begin with. The only change in place of articulation is /k/ → /s/. All relevant features in FUL are tabulated in (11); the consonants [ʎ ɲ s tʃ] are listed for convenience, but they are in parentheses since they are derivatives of /l n t k/ in the context of /i/.
There are a few additional points to be made. First, we are able to account for palatalisation even if [CORONAL] is unspecified. However, why would this analysis be preferred over that of C&D, who assume that the [coronal] specification of /i/ can account for all the palatalisation processes? They elegantly connect the presence and absence of palatalisation with the specification of [CORONAL] for /i/. We do not deny that /i/ is [coronal], nor that it plays a significant role. However, C&D do not discuss the various ways in which [CORONAL] should affect the other consonants, and in fact they do not show how palatalisation is actually realised. For instance, why is it that /l/ becomes /ʎ/ when [CORONAL] from /i/ spreads? Is /l/ not [CORONAL]? What about other coronal consonants such as /n/? Why does the addition of [coronal] from /i/ alone lead to palatalisation? Is it the vocalic element that is crucial, with [CORONAL] from consonants having no effect?104 For /k/-palatalisation, it is obvious that the place feature of the consonant changes. In our analysis, this is treated as an assimilation process whereby the ARTICULATOR features merge; this can be achieved by spreading or deletion. However, in our view palatalisation of the other consonants, which are inherently all [coronal], is different. Thus, we crucially distinguish between palatalisations which affect back consonants and those which share the place feature with /i/. It is not clear how this is accounted for in C&D.
Second, the main aim of C&D’s analysis is to confirm that the four-
vowel and three-vowel systems have different feature distributions.
Accordingly, for them the difference lies in the four-vowel systems
requiring [CORONAL] to be specified for /i/, which triggers palatalisation,
while it is unspecified in the three-vowel system (see above (7), (8)). Can
our analysis account for this contrast, given that [CORONAL] is always
underspecified and will always be filled in in the surface representation
because it has an empty ARTICULATOR node? The answer is yes. Recall
that in FUL it is not coronality per se which triggers palatalisation: it is
[HIGH] that plays a crucial role. We compare the four- and three-vowel
systems in FUL:
(17) FUL features for vowels in Inuit dialects (3- and 4-vowel systems)
(a) 3-vowel system
Thus, a change from [k] to [tʃ] in the context of [i] or [j] would involve a change in the primary place of articulation from dorsal to coronal in the context of [−back], which was dominated by dorsal. This problem was
addressed in detail by Clements and taken up by Hume, leading to the
feature set we discussed above.
In the analysis of Clements (1989), the structure of palatalisation would
involve the following features:
The first step involves a palatalised [kj], which has a C-place dorsal as well
as a V-place coronal. This in turn undergoes tier promotion, complex
segment formation, and concomitant affrication to become [tʃ]. This was a
remarkable proposal, suggesting for the first time that [j] led to
palatalisation because of its coronal status. Our view is similar, except that
we do not have the independent tiers. However, before we delve into FUL’s
proposal, we briefly discuss Hall’s take on this.
According to Hall (1997), palatoalveolars (“alveopalatals” in his terminology) differ from “true” palatals such as German [ç]. His features for these consonants would be as follows.
Consequently, since front vowels are coronal (as in FUL), Hall’s analysis dispenses with the awkwardness of having a dorsal [k] becoming a coronal palatoalveolar in the context of dorsal [i] or [j] via [−back], which too is dominated by dorsal. However, since the palatals are still dorsal (unlike in FUL), the [+P] feature has to be dominated by both coronal and dorsal.
Instead of this rather complex analysis, we follow Clements’ assumptions that all palatals and palatoalveolars, as well as front vowels and glides, are coronal, and thus palatalisation which causes a fronting of velar consonants is an assimilation to a coronal place of articulation, triggered by a coronal. However, as noted above, palatalisation also involves the “backing” of dentals/alveolars [t d] to [tʃ dʒ] or [ʃ ʒ], as well as the addition of secondary articulations. We turn to this below.
We have argued elsewhere (i) that neither fronted velars and palatals, nor palatalised velars and regular velars in the context of [i], may contrast in any single language (Keating & Lahiri 1993), and (ii) that alveopalatal and palatal stops do not co-occur in the same language (Lahiri & Blumstein 1984). Thus, features for these various coronal consonants as compared to a velar would be as in (31).
Note that FUL’s features for alveopalatal and palatal sounds are the same: if
there is a contrast it has to be via [STRIDENT]. In FUL, the palatalisation of
velars, as for example [k] → [ç] in German or [k] → [tʃ] in Slavic, would
always have to be as follows:
Earlier, in Lahiri & Evers (1991, henceforth L&E), where we permitted
dependent features, palatal and palatoalveolar consonants were
distinguished by [−anterior]. Thus, the various coronal consonants were
distinguished as follows:
However, given that all palatalised segments in FUL are represented by the
ARTICULATOR node with a [HIGH] under TONGUE HEIGHT, we could
represent the diminutive morpheme as in (40), with the features of the
relevant consonants in (41).
Palatalisation for the diminutive involves [s] becoming [ʃ] and [t] becoming
[c]. For FUL, both involve adding [high]. To illustrate, we show the features
of the relevant consonants in Dutch.
In (42) we state the rules which are required to obtain the diminutive forms.
The point we would like to make here is that the high front glide [j] which
is part of the diminutive morpheme has the feature [HIGH] which in turn
requires the obstruents [t s] to become [c ʃ] respectively. Since all
consonants in question are [CORONAL], there is nothing else that is required.
The only other relevant process is place assimilation where the place-
underspecified [t] acquires the place of the final consonant in words like
[raːmpjə] from /raːm – tjə/. Sample derivations are added in (43).
The affix /a/ remains unchanged when the root also contains /a/ (iv). When the root has a high vowel, /a/ becomes /e/ (i), while it takes on the features of the root vowel if the root contains the mid vowels /e ɛ o ɔ/.
The underlying features of the relevant vowels in Hyman’s analysis are
given in (45).
Like Dresher and Clements, Hyman invokes the notion of “activity” and argues that only the four “active” features that are necessary to account for the data should be relevant. Using a system like that of Clements, ATR and OPEN fall under a single APERTURE node. The vowel [ə] does not surface, but is assumed to be the intermediate fronted vowel of /a/ when the root has a high vowel /i u/.
Examples for the first three harmony cases are given in (46).
The features ATR, FRONT, and ROUND participate actively in the harmony process, and the last two are “parasitic” on ATR (Hyman 2003: 90). Our
interest here is in the vowel /a/, which changes to [e] not only in the context
of /i/, but also in the context of /u/ where, in a parallel scenario, it ought to
change to [o]. Hyman argues that “the fronting of /a/ under ATR harmony is
a secondary development, the primary one being to lower its F1”. That is,
/a/ “first converts to a [+ATR] central vowel, here symbolised as schwa”,
which in turn becomes /e/ (44(i)). Why should this be so in a perfectly
regulated harmony system? Why does the spreading of ATR ignore the place
features for the high vowels? We turn to FUL for an answer. (47), including
a tree diagram representation, gives the features that FUL would assign on
universal principles; note that CORONAL remains underspecified.
A clear distinction needs to be made between the underspecified CORONAL and the lack of an ARTICULATOR node. As always, CORONAL is not specified in the representation, but if the vowel has an ARTICULATOR node, it will get the feature on the surface by a fill-in rule. Thus, /a/ will not get a CORONAL specification, but /i e ɛ/ will. A further lack of feature specification involves TONGUE HEIGHT (TH) as well as TONGUE ROOT (TR) features: /i u/ are not specified for height, but they do have the TH node, and /ɛ ɔ a/ are not specified for TR. The feature filling rules, which determine the surface features, are as follows:
Thus, /a/ has the feature [low] without precise place features, suggesting that phonetically it can be in between.
Under this representational hypothesis, it is clear why, when ATR spreads from /i u/, the suffix /a/ will automatically become /e/: /a/ and /e/ share the feature [LOW] and nothing else. Thus, spreading [ATR] from /i, u/ to /a/ turns it into /e/. The difference between /a/ and /e/ is that /a/ does not have an ARTICULATOR node. This gets filled in on the surface, where the vowel will then emerge as /e/ since [ATR] has spread. Consequently, unlike in Hyman’s analysis, /a/ > [e] does not require an intermediate stage which produces [ə] (49i). The harmony processes for high and mid vowels look different because of the mismatch between the TH features. These are spelt out with relevant examples below.
Lack of ATR and OPEN, then, results in the lack of harmony alternations.
Is it possible to account for this complex situation in FUL, where not
only CORONAL is underspecified, but the vowels /I U/ must lack height as
well as ATR features? Our proposal is outlined in (53).
Thus, in closed syllables, the addition of [low] for the unspecified TH of /I U/ would give /ɛ ɔ/, while in open syllables they would receive the feature [ATR].
6 Moving on
To sum up, FUL provides a set of monovalent features, along with
underspecification of [CORONAL] and [PLOSIVE], which are intended to be
universal. Thus, binary features like [±high] or [±voice] are not acceptable
and the automatic consequence is that negative features cannot form natural
classes. However, it is possible to refer to a node which does not contain a
fully specified feature. Thus, ARTICULATOR remains empty for CORONAL,
which gets filled in on the surface. Rules like English aspiration of
voiceless consonants (under the assumption that underlying stops are
unaspirated) could be realised as below:
The rule of aspiration says that when the LARYNGEAL node is “empty” and contains neither SPREAD GLOTTIS nor VOICE, the feature SPREAD GLOTTIS is added. When, on the other hand, the feature VOICE is part of the LARYNGEAL node, SPREAD GLOTTIS is not added. Thus, in a word like /pɪn/, the initial consonant has no laryngeal feature and acquires SPREAD GLOTTIS in syllable-initial position, but since the LARYNGEAL node of /b/ (in words like /bɪn/) is already specified with the feature VOICE, no other feature can be added. Consequently, /b/ remains without aspiration.
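To make this privative, fill-in style of rule concrete, here is a minimal Python sketch of the aspiration logic just described. It is our own illustration rather than FUL's formalism; the dictionary layout, the MANNER condition, and the syllable_initial flag are assumptions introduced for the example.

```python
# A minimal sketch (not FUL's formalism) of monovalent feature logic:
# a feature is simply present or absent on a node, and the aspiration rule
# tests whether the LARYNGEAL node is empty before adding SPREAD GLOTTIS.

def aspirate(segment, syllable_initial=True):
    """Add SPREAD GLOTTIS to a syllable-initial plosive whose LARYNGEAL node is empty."""
    laryngeal = segment.setdefault("LARYNGEAL", set())
    if syllable_initial and "PLOSIVE" in segment.get("MANNER", set()):
        # Applies only when LARYNGEAL contains neither SPREAD GLOTTIS nor VOICE.
        if not laryngeal:
            laryngeal.add("SPREAD GLOTTIS")
    return segment

p = {"MANNER": {"PLOSIVE"}, "LARYNGEAL": set()}        # /p/ in 'pin': no laryngeal feature
b = {"MANNER": {"PLOSIVE"}, "LARYNGEAL": {"VOICE"}}    # /b/ in 'bin': specified for VOICE

print(aspirate(p)["LARYNGEAL"])   # {'SPREAD GLOTTIS'}: surfaces as aspirated [pʰ]
print(aspirate(b)["LARYNGEAL"])   # {'VOICE'}: unchanged, so /b/ remains unaspirated
```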
In fleshing out a model like FUL, a host of further questions need to be
tackled. We will only broach three here: they are ones where significant
progress has been or is being made. First, since, unlike contrastive theories that assume activation, we assume that universal features are acquired first and always establish a contrast, how do the other features become part of the system? Second, if CORONAL and PLOSIVE are always underspecified,
then they must always be available in natural languages; but are they?
Finally, we have claimed that underspecification has consequences for
processing: but to what extent do we have evidence supporting this?
With respect to acquisition, if CORONAL must always be present, then
the first cut is CORONAL vs. something else. Following Ghini (2001a), we
maintain that PLACE-first is a universal principle. The acquisition literature
suggests that LABIAL is produced first (cf. Jakobson 1941; Levelt 1995;
Fikkert & Levelt 2008). Fikkert & Levelt find that words are
undifferentiated with respect to features and the word node itself has
LABIAL, with vowels and consonants sharing the same feature. Our
assumption is that CORONAL is underspecified but present, and in fact the
LABIAL vs. CORONAL contrast is the first one to be manifest on the surface. We also assume that all languages have PLOSIVES – not necessarily at all places of articulation, but at least one. This tallies with Hyman (2008), who argues that two of the valid universals about phonological inventories are that all have oral stops and all have coronals. But CORONAL phonemes need not be PLOSIVES; they could, for instance, be CONTINUANT. Thus, in acquisition, we would first find a contrast of underspecified CORONAL vs. some other ARTICULATOR (in all probability LABIAL) and of PLOSIVE vs. probably CONTINUANT. Recall that FUL assumes that vowels and consonants share PLACE. Thus, for vowels as well, the first cut is probably CORONAL vs. LABIAL. It could be the case that the LABIAL vowels are also DORSAL.
We have also suggested that in terms of TONGUE HEIGHT, [low] is
acquired first. But we do not believe that this needs to be underspecified
universally, because a language might only have one vowel, with no
necessity to specify any height contrast. Thus, other features are built very
much on the basis of contrast. The question is whether contrasts depend
entirely on “activity” or on distribution. The answer is probably both.
Initially, infants are not going to be exposed to lots of alternations which would conclusively establish activity. However, distribution is something they inevitably encounter right away.
Challenging the assumption of the universality of coronals, Blevins
(2009) has suggested that Northwest Mekeo lacks CORONAL obstruents,
though it may acquire them via language contact. All Mekeo dialects,
however, have coronal sonorants; /l/ occurs in other Mekeo dialects and
Northwest Mekeo itself has a palatal glide /y/ (Blevins’ notation) which
alternates with /ɛ/. Blevins argues that /l/ can be seen as primarily lateral
with redundant coronal specification. That is not an assumption made by
FUL, where PLACE is primary. Consequently, it is not the case that this
universal “bites the dust”: CORONAL is very much present even in Northwest
Mekeo, albeit perhaps not in obstruents. In Blevins’ own terms, CORONAL
appears on the surface via assimilation, and with /i/.
References
Archangeli, Diana. 1988. Aspects of underspecification theory. Phonology 5. 183–
207.
Bhat, D. N. S. 1978. A general study of palatalization. In Joseph H. Greenberg,
Charles A. Ferguson, & Edith Moravcsik (eds.), Universals of language, vol. 2:
Phonology, 47–92. Stanford: Stanford University Press.
Blumstein, Sheila & Kenneth Stevens. 1980. Perceptual invariance and onset
spectra for stop consonants in different vowel environments. Journal of the
Acoustical Society of America 67. 648–662.
Blevins, Juliette. 2009. Another universal bites the dust: Northwest Mekeo lacks
coronal phonemes. Oceanic Linguistics 48. 264–273.
Calabrese, Andrea. 1995. A constraint-based theory of phonological markedness
and simplification procedures. Linguistic Inquiry 26. 373–463.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York:
Harper & Row.
Clements, George N. 1985. The geometry of phonological features. Phonology
Yearbook 2. 225–252.
Clements, G. Nick. 1989. A unified set of features for consonants and vowels.
Unpublished, Cornell University, Ithaca, NY.
Clements, G. Nick. 2001. Representational economy in constraint-based
phonology. In Hall (ed.) 2001, 71–146.
Clements, George N. & Elizabeth V. Hume. 1995. The internal organization of
speech sounds. In John A. Goldsmith (ed.), The handbook of phonological
theory, 245–306. Oxford: Blackwell.
Compton, Richard & B. Elan Dresher. 2011. Palatalization and “strong i” across
Inuit dialects. Canadian Journal of Linguistics 56. 203–228.
Cornell, Sonia A., Carsten Eulitz, & Aditi Lahiri. 2013. Inequality across
consonantal contrasts in speech perception: Evidence from mismatch
negativity. Journal of Experimental Psychology: Human Perception and
Performance 39. 757–772.
Dresher, B. Elan. 2009. The contrastive hierarchy in phonology. Cambridge:
Cambridge University Press.
Eulitz, Carsten & Aditi Lahiri. 2004. Neurobiological evidence for abstract
phonological representations in the mental lexicon during speech recognition.
Journal of Cognitive Neuroscience 16. 577–583.
Fikkert, Paula & Clara Levelt. 2008. How does Place fall into place? The lexicon
and emergent constraints. In Peter Avery, B. Elan Dresher, & Keren Rice
(eds.), Contrast in phonology: Theory, perception, acquisition, 231–270.
Berlin: Mouton de Gruyter.
Ghini, Mirco. 2001a. Asymmetries in the phonology of Miogliola. Berlin: Mouton de
Gruyter.
Ghini, Mirco. 2001b. Place of articulation first. In Hall (ed.) 2001, 71–146.
Gussenhoven, Carlos & Haike Jacobs. 2011. Understanding phonology. 3rd edn
2017. London: Hodder Education.
Hall, T. Alan. 1997. The phonology of coronals. Amsterdam: Benjamins.
Hall, T. Alan (ed.). 2001. Distinctive feature theory. Berlin: Mouton de Gruyter.
Halle, Morris, Bert Vaux, & Andrew Wolfe. 2000. On feature spreading and the
representation of place of articulation. Linguistic Inquiry 31. 387–444.
Hyman, Larry M. 2003. “Abstract” vowel harmony in Kàlɔ̀ŋ: A system-driven
account. In Patrick Sauzet & Anne Zribi-Hertz (eds.) Typologie des langues
d’Afrique et universaux de la grammaire, vol. 1, 85–112. Paris: L’Harmattan.
Hyman, Larry M. 2008. Universals in phonology. The Linguistic Review 25. 83–
137.
Jakobson, Roman. 1941. Kindersprache, Aphasie und allgemeine Lautgesetze.
Uppsala: Almqvist & Wiksell.
Jakobson, Roman, Gunnar Fant, & Morris Halle. 1952. Preliminaries to speech
analysis: The distinctive features and their correlates (Technical Report No.
13). Cambridge, MA: MIT, Acoustics Laboratory.
Jakobson, Roman & Morris Halle. 1956. Fundamentals of language. The Hague:
Mouton.
Kaplan, Lawrence D. 1981. Phonological issues in North Alaskan Inupiaq.
Fairbanks: Alaska Native Language Center.
Keating, Patricia & Aditi Lahiri. 1993. Fronted velars, palatalized velars, and
palatals. Phonetica 50. 73–101.
Kotzor, Sandra, Allison Wetterlin, & Aditi Lahiri. 2017. Symmetry or asymmetry:
Evidence for underspecification in the mental lexicon. In Aditi Lahiri & Sandra
Kotzor (eds.), The speech processing lexicon, 85–106. Berlin: De Gruyter
Mouton.
Lahiri, Aditi. 2000. Hierarchical restructuring in the creation of verbal morphology in
Bengali and Germanic: Evidence from phonology. In Aditi Lahiri (ed.), Analogy
and markedness: Principles of change in phonology and morphology, 71–123.
Berlin: Mouton de Gruyter, paperback 2003.
Lahiri, Aditi. 2012. Asymmetric phonological representations of words in the mental
lexicon. In Abigail C. Cohn, Cécile Fougeron, & Marie Huffman (eds.), The
Oxford handbook of laboratory phonology, 146–161. Oxford: Oxford University
Press.
Lahiri, Aditi & Sheila E. Blumstein. 1984. A re-evaluation of the feature ‘coronal’.
Journal of Phonetics 12. 133–145.
Lahiri, Aditi & Vincent Evers. 1991. Palatalization and coronality. In Paradis &
Prunet (eds.) 1991, 79–100.
Lahiri, Aditi, Letitia Gewirth, & Sheila E. Blumstein. 1984. A reconsideration of
acoustic invariance for place of articulation in diffuse stop consonants:
Evidence from a cross-language study. Journal of the Acoustical Society of
America 76. 391–404.
Lahiri, Aditi & Henning Reetz. 2002. Underspecified recognition. In Carlos
Gussenhoven & Natasha Warner (eds.), Labphon 7, 637–676. Berlin: Mouton
de Gruyter.
Lahiri, Aditi & Henning Reetz. 2010. Distinctive features: Phonological
underspecification in representation and processing. Journal of Phonetics 38. 44–59.
Levelt, Clara. 1995. Segmental structure of early words: Articulatory frames or
phonological constraints. In Eve V. Clark (ed.), The Proceedings of the
Twenty-seventh Annual Child Language Research Forum, 19–27. Stanford:
CSLI.
McCarthy, John. 1988. Feature geometry and dependency: A review. Phonetica
43. 84–108.
Mohanan, K. P. 1993. Fields of attraction in phonology. In John Goldsmith (ed.),
The last phonological rule: Reflections on constraints and derivations, 61–116.
Chicago: University of Chicago Press.
Mohanan, K. P. & Tara Mohanan. 1984. Lexical phonology of the consonant
system in Malayalam. Linguistic Inquiry 15. 575–602.
Paradis, Carole & Jean-François Prunet (eds.). 1991. The special status of
coronals (Phonetics and Phonology 2). San Diego: Academic Press.
Paulian, Christiane. 1986. Les voyelles en nù-kàlùŋɛ̀: Sept phonèmes, mais ...
Cahiers du Lacito 1986/1. 51–65.
Pierrehumbert, Janet B. 2016. Phonological representation: Beyond abstract
versus episodic. Annual Review of Linguistics 2. 33–52.
Plank, Frans & Aditi Lahiri. 2015. Macroscopic and microscopic typology: Basic
Valence Orientation, more pertinacious than meets the naked eye. Linguistic
Typology 19. 1–54.
Roberts, Adam, Allison Wetterlin, & Aditi Lahiri. 2013. Aligning mispronounced
words to meaning. The Mental Lexicon 8. 140–163.
Rubach, Jerzy. 1984. Cyclic and lexical phonology: The structure of Polish.
Dordrecht: Foris.
Sagey, Elizabeth C. 1986. The representation of features and relations in non-
linear phonology. Doctoral dissertation, MIT, Cambridge, MA.
Shaw, Patricia. 1991. Consonant harmony systems: The special status of coronal
harmony. In Paradis & Prunet (eds.) 1991, 125–157.
Stevens, Kenneth & Sheila E. Blumstein. 1978. Invariant cues for place of
articulation in stop consonants. Journal of the Acoustical Society of America
64. 1358–1368.
Trommelen, Mieke. 1984. The syllable in Dutch. Dordrecht: Foris.
B. Elan Dresher, Christopher Harvey, and Will Oxford
Contrastive feature hierarchies as a
new lens on typology
1 Introduction
This article addresses a question raised in the proposal for the Workshop on
Phonological Typology (Oxford University, August 2013): Phonological
typology vs. phonetic typology – same or different? We will propose a way
of looking at phonological typology that is clearly DIFFERENT from phonetic
typology. In particular, we will propose that CONTRASTIVE FEATURE
HIERARCHIES offer a new lens on typology, while also shedding light on
synchronic and diachronic phonological patterns.
We will begin in Section 2 with some general remarks on typology,
phonological contrast, and contrastive feature hierarchies. Section 3
illustrates the relation between contrast and phonological activity, as
exemplified by the Classical Manchu vowel system. We then show how
contrastive hierarchies can lend insight into synchronic, diachronic, and areal
typology, with examples drawn from a typological survey of rounding
harmony and the relative ordering of features [round] and [front] (Section
4), the diachrony of Algonquian vowel systems (Section 5), and areal
typology of Ob-Ugric vowel systems (Section 6), respectively. Section 7 is
a brief conclusion.
The phonemes /v/ and /ʒ/ appear to be out of place in the chart of language
D, but Sapir justifies their positions by their phonological behaviour, in that
their places in the pattern are parallel to those of language C’s /w/ and /j/,
respectively. Sapir (1925: 47–48) allows that the “natural phonetic
arrangement” of sounds is a useful guide to how they pattern, but he goes
on: “And yet it is most important to emphasize the fact, strange but
indubitable, that a pattern alignment does not need to correspond exactly to
the more obvious phonetic one.”
The isomorphic alignments in C and D can be understood as indicating
that corresponding phonemes have the same CONTRASTIVE values. The chart
in (2) represents one possible way of suggesting what the contrastive
specifications might be for the consonants in (1). In each cell, the first
sound is from C, the second from D. The differences between them do not
involve contrastive specifications.
It was observed that the language D phonemes /v/ and /ʒ/ appear to be in the
“wrong place”, which in (2) translates into their having incorrect
specifications. In generative grammar, this mismatch can be resolved by
assigning them different underlying specifications, matching those of their
counterparts. These types of examples have been much discussed in
connection with how abstract Sapir’s theory of phonology was (cf.
McCawley 1967). Less attention has been paid to the other examples, which
do not appeal to abstractness, but which show the importance of
establishing the contrastive properties of segments. For example, the
obstruents in the third row in (2) are contrastively voiced and redundantly
stops or spirants. No abstractness is at issue here, but we have to distinguish
between contrastive and non-contrastive properties.
It follows that for Sapir the pattern alignment of a phoneme amounts to
its contrastive status, which is not determined by its phonetics, but is a
function of its phonetic and phonological behaviour. Thus, a synchronic
analysis of the phonology should, among other things, give an account of
the contrastive features of each phoneme.
Turning to diachrony, Prague School phonologists have argued that the
contrastive properties of phonemes also play an important role in
phonological change. The insight that phonological change may involve a
reorganization of the phonemes of a language goes back to an article by
Roman Jakobson first published in 1931 (Jakobson 1972 [1931]): “Once a
phonological change has taken place, the following questions must be
asked: What exactly has been modified within the phonological system? [. .
.] has the structure of individual oppositions [contrasts] been transformed?
Or in other words, has the place of a specific opposition been changed [. .
.]?”
It should be noted that phonological theories that put the emphasis on
contrast have not been unproblematic. In pre-generative structuralist
theories, synchronic grammars were composed of contrasting elements
locked into systems of oppositions. If one takes too literally Saussure’s
(1972 [1916]: 166) dictum that “dans la langue il n’y a que des différences
[. . .] sans termes positifs” then grammars become incommensurable, and
one has no way to relate successive stages of a language, or even closely
related dialects (Moulton 1960). Generative grammar (Chomsky & Halle
1968) solves this problem by construing phonology as a system of rules that
mediate between underlying (lexical) and surface (phonetic) forms. Now,
grammar change takes the form of the addition, loss, reordering, or
restructuring of rules.
Kiparsky (1965) demonstrated that a series of sound changes in
Armenian dialects, shown in (3), can be understood in terms of the
spreading of three rules, described informally in (4). Kiparsky (1965) points
out that these sound changes spread from one dialect to another, regardless
of how many contrasts they contained. If we were to classify the dialects in
terms of oppositions, we would arrive at meaningless groupings for
explaining any synchronic or diachronic facts. He writes:
An incidental feature of the present example is that it highlights the pointlessness of a
structural dialectology that [. . .] distinguishes dialects according to points of structural
difference rather than according to the innovations through which they diverged [. . .] If
in the present example we were to divide the dialects into those with two stop series
and those with three, we would be linking together dialects that have nothing to do with
each other and separating dialects that are closely related. (Kiparsky 1965: 17)
(5) The contrastive feature hierarchy (based on Jakobson, Fant, & Halle
1952, among others):
Contrastive features are assigned by language-particular feature
hierarchies.
Finally, this theory of contrast does not need to make any assumptions
as to where features come from: the Successive Division Algorithm works
equally well if features are universal, as supposed by Chomsky & Halle
(1968), or emergent, as suggested by Mielke (2008) and Samuels (2011).
Dresher (2014) observes that the contrastive hierarchy itself ensures that
phonological representations across languages will look rather similar even
in the absence of a universal set of features.
To illustrate the workings of the feature hierarchy and the Contrastivist
Hypothesis, consider a hypothetical vowel inventory /i, u, a/. The
Successive Division Algorithm requires that an inventory of three
phonemes must be characterized by exactly two features, though both the
choice of features and their ordering may vary. In (11), we illustrate two
possible contrastive hierarchies that use the features [back] and [low]; in
(12), we give two more hierarchies using the features [front] and [round].
Other combinations of features are also possible, but these examples should
suffice to illustrate the concept.
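For concreteness, the following Python sketch shows how successive division derives contrastive specifications under a given feature ordering. It is our own illustration, not the authors' implementation, and the full phonetic values assumed for /i u a/ are purely illustrative.

```python
# A minimal sketch (our own illustration, not the authors' formalism) of the
# Successive Division Algorithm: split the inventory by each feature in the
# hierarchy in turn, and record a feature value only where it distinguishes
# phonemes within the current subset (i.e. only where it is contrastive).

# Hypothetical full specifications for /i u a/ (illustrative values only).
PHONETICS = {
    "i": {"low": "-", "back": "-"},
    "u": {"low": "-", "back": "+"},
    "a": {"low": "+", "back": "+"},
}

def successive_division(inventory, hierarchy):
    """Return contrastive specifications for a phoneme set under a feature order."""
    specs = {p: {} for p in inventory}

    def divide(subset, features):
        if len(subset) <= 1 or not features:
            return
        feat, rest = features[0], features[1:]
        plus = {p for p in subset if PHONETICS[p][feat] == "+"}
        minus = subset - plus
        if plus and minus:                     # feature is contrastive here
            for p in plus:
                specs[p][feat] = "+"
            for p in minus:
                specs[p][feat] = "-"
            divide(plus, rest)
            divide(minus, rest)
        else:                                  # no split: skip to the next feature
            divide(subset, rest)

    divide(set(inventory), list(hierarchy))
    return specs

# Ordering [low] > [back]: /a/ is only [+low]; /i u/ contrast in [back].
print(successive_division("iua", ["low", "back"]))
# Ordering [back] > [low]: /i/ is only [-back]; /u a/ contrast in [low].
print(successive_division("iua", ["back", "low"]))
```

The two calls correspond to the two orderings of [low] and [back] discussed for (11); substituting [front] and [round] specifications would do the same for (12).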
The three most notable kinds of phonological activity involving vowels are
ATR harmony, rounding (labial) harmony, and palatalization. We will
briefly discuss them in turn, and show how the patterns of activity motivate
the hierarchy in (15).
The vowel /i/ is neutral and co-occurs in stems with both ATR (19a) and
non-ATR vowels (19b). Similarly, suffix /i/ freely occurs with both types of
vowels (19c).
Perhaps unexpectedly, when /i/ is in a position to trigger harmony, it occurs
only with non-ATR vowels (20).
The evidence from activity, therefore, is that /ə/ and /u/ have in common an active feature, which we are calling [ATR], that is not shared by the
other vowels; by hypothesis, this feature must be contrastive. The same is
evidently not the case with /i/, though /i/ is phonetically ATR. In the
representations proposed in (15) and (16), /ə/ and /u/, but not /i/, are
contrastively [ATR].
3.4 Palatalization
The vowel /i/ uniquely causes palatalization of a preceding consonant,
which suggests that it alone has a contrastive triggering feature we call
[front]. There is no evidence that it has any other active features.
Compton & Dresher (2011) observe the generalization in (30) about dialects
in which /i/ causes or once caused palatalization:
(30) Generalization about Inuit palatalization (Compton & Dresher
2011):
Inuit /i/ can cause palatalization (assibilation) of a consonant only in
dialects where there is evidence for a (former) contrast with a fourth
vowel; where there is no contrast between strong and weak i, /i/ does
not trigger palatalization.
This generalization follows if we assume that the feature hierarchy for Inuit
and Yupik is [low] > [round] > [front] as in (31). When the fourth vowel is
in the underlying inventory, /i/ has a contrastive [front] feature that enables
it to cause palatalization (31a). But in the absence of a fourth vowel, [front]
is not a contrastive feature (31b).
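As a usage note, the successive-division sketch above can be run over Inuit-type three- and four-vowel inventories with the hierarchy [low] > [round] > [front]; the phonetic values assumed below, and the use of /ə/ for the (former) fourth vowel, are purely illustrative.

```python
# Re-using successive_division() and PHONETICS from the earlier sketch,
# with illustrative values for an Inuit-type system (the fourth vowel is
# represented as a schwa-like /ə/ for demonstration only).
PHONETICS.update({
    "i": {"low": "-", "round": "-", "front": "+"},
    "u": {"low": "-", "round": "+", "front": "-"},
    "a": {"low": "+", "round": "-", "front": "-"},
    "ə": {"low": "-", "round": "-", "front": "-"},   # the (former) fourth vowel
})

hierarchy = ["low", "round", "front"]
print(successive_division("iua", hierarchy))    # /i/ receives no [front] value, as in (31b)
print(successive_division("iuaə", hierarchy))   # /i/ is [+front], /ə/ [-front], as in (31a)
```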
4.3 Vowel systems where ordering of [round]
and [front] is not crucial
Turkic languages have symmetrical inventories. They are typically analyzed
with three features: one height feature and two place features, as in (32)
(see Kabak 2011 for Turkish). Here, every feature specification is
contrastive in any order; the vowels completely fill the eight-cell vowel
space defined by three binary features. A possible ordering of the features
of Turkish is given in (33); however, the same contrastive specifications
would result from any ordering of these three features.117 We predict,
therefore, that all round vowels could potentially be triggers of round
harmony in such languages. This prediction is correct, though harmony
observes limitations that are not due to contrast, but to other factors.
In Turkish, harmony triggers can be high or low, but targets are typically
limited to high vowels (34). In Kachin Khakass (Korn 1969), both triggers
and targets of round harmony must be high (35), the opposite of the
Manchu-Tungus-Eastern Mongolian pattern. Because all vowels have
contrastive [round] and [front] features, however they are ordered, these
restrictions cannot be due to considerations of contrast, but to other factors.
This way of classifying phonological systems allows us to account for two
Manchu languages that are notable exceptions to the prevailing Manchu-
Tungusic pattern of round harmony. Spoken Manchu and Xibe are modern
Manchu languages in which [ATR] has been lost and /ə/ has become a (non-
low) vowel (Zhang 1996; Dresher & Zhang 2005). The vowel system of
Xibe, for example, is given in (36). The reclassification of /ə/ as a (non-low)
vowel necessitates a new contrastive feature to distinguish it from /u/. The
most natural modification is to extend the feature [round], already in the
system, to /u/.
Evidence that /u/ is in fact contrastively [round] in Xibe can be found in the
creation of new phonemes /y/ and /oe/. The latter derives from sequences of
/ɔ/ and /i/, where the [front] feature derives from /i/ and the [round] feature
from /ɔ/. Similarly, the new phoneme /y/ derives from sequences of /u/ and
/i/, showing that /u/ had acquired a [round] feature. More evidence that /u/
is contrastively [round] in Xibe comes from a new form of round harmony
that arose in Xibe, whereby /ə/ alternates with /u/ in suffixes: /u/ occurs if
the stem-final vowel is round, /ə/ occurs otherwise.
The participation of /u/ in triggering round harmony, rare in the
Manchu-Tungusic family, is accounted for by the extension of the
contrastive [round] specification to /u/. The phonological patterning of the
vowels in Xibe points to a contrastive hierarchy and branching tree as in
(37). This tree very closely resembles the Turkish feature hierarchy in (33).
4.4 Summary
To sum up, we can classify languages into types based on the contrastive
scopes of the vowel features [front] and [round] as in (38). Whether a
feature is contrastive on a given vowel depends on the feature hierarchy and
the size and structure of the phonological inventory.
6.1 Proto-Mansi
We will focus here on Mansi. Starting from the Proto-Mansi first-syllable
vowel system reconstructed by Steinitz (1955), and taking into account the
phonological patterning attributed to that period, Harvey (2012) posits the
Proto-Mansi contrastive hierarchy in (44).
A major type of phonological activity that provides evidence for this
hierarchy is front vowel harmony (45a), which we suppose to be governed
by the feature [front]. The Ob-Ugric languages have no neutral vowels; therefore all vowels must have a contrastive value for this feature. Proto-Mansi also had a system of productive ablaut-like root-vowel alternations (Honti 1988a: 149, 1988b: 174), where a certain set of suffixes causes roots with long vowels to shorten, as in the Western Mansi examples in (45b).
When the front feature is dropped to the lowest rank, about half of the
vowels lose their contrastive [front] feature. In the next stage the three
remaining [front] vowels are merged to their back counterparts: */ǣ/ > */ɤ̄/,
*/ĭ/ > */ ̆ /, and */ȳ/ > */ū/. Once complete, these mergers leave no vowels
with a contrastive [front] feature at all. As expected, front harmony is no
longer viable, and has disappeared from Northern Mansi.
We also expect that root-vowel alternation would become untenable.
For instance, in Proto-Mansi, /ū/ alternated with /ŭ/. After */ȳ/ has merged
with */ū/, there is no way for a speaker of the modern language to tell
which /ū/ should alternate and which should not. As predicted, vowel
alternation has almost completely vanished in Northern Mansi.
Although the evolution of the vowel systems of Western and Northern Mansi differs in its details, in both the feature [front] was demoted, and in
both front harmony and root-vowel alternations were adversely affected.
Interestingly, the dropping of [front] has also produced two very different
results. In Western Mansi, front dropping has caused some back vowels to
become more front; in Northern Mansi, the loss of the same contrast has
caused some front vowels to merge with their back counterparts.
7 Conclusions
The approach to phonological typology we have sketched here is based on a
fundamental distinction between a phonetic and phonological analysis of
the sound systems of languages. This view builds on approaches to
phonology pioneered by Sapir and the Prague School (Jakobson and
Trubetzkoy), instantiated within a generative grammar. More specifically, it
views phonemes as being composed of contrastive features that are
themselves organized into language-particular hierarchies. Because of the
hypothesized connection between contrast and activity, we expect
languages with similar hierarchies and inventories to exhibit similar
patterns.
In some of the language families we have surveyed here, feature
hierarchies appear to be relatively stable, as exemplified by Manchu-
Tungusic, Eastern Mongolian, Yupik-Inuit, and branches of Algonquian.
Contrast shifts can occur, however, for various reasons, and these can result
in dramatic differences in patterning, as shown by the modern Manchu
languages, Eastern and Western Algonquian as compared with Central, and
extensive changes in Ob-Ugric vowel systems viewed over a relatively long
period of time. Finally, Ob-Ugric shows that elements of feature hierarchies
can spread and be borrowed, like other aspects of linguistic structure.
We have seen that, like Sapir’s languages C and D, languages with
similar contrastive structures may show varying phonetic realizations. For
example, the breakdown of the front-back contrast had different phonetic
results in Western and Northern Mansi: in the former it resulted in some
back vowels fronting, and in the latter a series of vowels that used to be
front retracted and merged with back vowels. What the two dialects have in
common is the dropping and subsequent loss of [front] as a contrastive
feature; thus, it no longer constrained the phonetic ranges of the vowels. In
Algonquian, the various palatalizations and mergers show phonetic
differences, and the phonetic descriptions of the vowels vary from dialect to
dialect. But dialects sharing the same contrastive hierarchy show similar
patterns at that level.
We hope to have demonstrated that contrastive feature hierarchies
provide an interesting and fruitful level of representation for typological
research in phonology.
References
Archangeli, Diana. 1988. Aspects of underspecification theory. Phonology 5. 183–
207.
Barrie, Mike. 2003. Contrast in Cantonese vowels. Toronto Working Papers in
Linguistics 20. 1–19.
Bhat, D. N. S. 1978. A general study of palatalization. In Joseph H. Greenberg,
Charles A. Ferguson, & Edith A. Moravcsik (eds.), Universals of human
language, vol. 2: Phonology, 47–92. Stanford: Stanford University Press.
Calabrese, Andrea. 2005. Markedness and economy in a derivational model of
phonology. Berlin: Mouton de Gruyter.
Campos-Astorkiza, Judit Rebeka. 2009. The role and representation of minimal
contrast and the phonology – phonetics interaction. München: Lincom Europa.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York:
Harper & Row.
Compton, Richard & B. Elan Dresher. 2011. Palatalization and “strong i” across
Inuit dialects. Canadian Journal of Linguistics/Revue canadienne de
linguistique 56. 203–228.
Dorais, Louis-Jacques. 2003. Inuit uqausiqatigiit: Inuit languages and dialects
(second, revised edition). Iqaluit: Nunavut Arctic College.
Dresher, B. Elan. 1998. On contrast and redundancy. Paper presented at the
annual meeting of the Canadian Linguistic Association, May, Ottawa. Ms.,
University of Toronto.
Dresher, B. Elan. 2003. Contrast and asymmetries in inventories. In Anna-Maria di
Sciullo (ed.), Asymmetry in grammar, vol. 2: Morphology, phonology,
acquisition, 239–257. Amsterdam: John Benjamins.
Dresher, B. Elan. 2009. The contrastive hierarchy in phonology. Cambridge:
Cambridge University Press.
Dresher, B. Elan. 2014. The arch not the stones: Universal feature theory without
universal features. Nordlyd 41(2). 165–181, special issue on Features, ed. by
Martin Krämer, Sandra Ronai, & Peter Svenonius. University of Tromsø – The
Arctic University of Norway.
Dresher, B. Elan. 2015. The motivation for contrastive feature hierarchies in
phonology. Linguistic Variation 15. 1–40.
Dresher, B. Elan. 2016. Contrast in phonology 1867–1967: History and
development. Annual Review of Linguistics 2. 53–73.
Dresher, B. Elan. 2017. Contrastive feature hierarchies in Old English diachronic
phonology. Transactions of the Philological Society, doi:10.1111/1467-
968X.12105.
Dresher, B. Elan, Christopher Harvey, & Will Oxford. 2014. Contrast shift as a type
of diachronic change. In Hsin-Lun Huang, Ethan Poole, & Amanda Rysling
(eds.), NELS 43: Proceedings of the Forty-Third Annual Meeting of the North
East Linguistic Society, The City University of New York, vol. 1, 103–116.
Amherst, MA: GLSA.
Dresher, B. Elan & Andrew Nevins. 2017. Conditions on iterative rounding
harmony in Oroqen. Transactions of the Philological Society 115. 365–394.
Dresher, B. Elan & Keren Rice. 2007. Markedness and the contrastive hierarchy in
phonology. http://homes.chass.utoronto.ca/~contrast/.
Dresher, B. Elan & Xi Zhang. 2005. Contrast and phonological activity in Manchu
vowel systems. Canadian Journal of Linguistics/Revue canadienne de
linguistique 50. 45–82.
Fortescue, Michael, Steven A. Jacobson, & Lawrence D. Kaplan. 1994.
Comparative Eskimo dictionary with Aleut cognates. Fairbanks: Alaska Native
Language Center.
Gardner, Matt Hunt. 2012. Beyond the phonological void: Contrast and the
Canadian Shift. Ms., Department of Linguistics, University of Toronto.
Hall, Daniel Currie. 2007. The role and representation of contrast in phonological
theory. Doctoral dissertation, University of Toronto.
Hall, Daniel Currie. 2011. Phonological contrast and its phonetic enhancement:
Dispersedness without dispersion. Phonology 28. 1–54.
Harvey, Christopher. 2012. Contrastive shift in Ob-Ugric vowel systems. Ms.,
University of Toronto.
Honti, László. 1988a. Die ob-ugrischen Sprachen I: Die wogulische Sprache. In
Sinor (ed.) 1988, 147–171.
Honti, László. 1988b. Die ob-ugrischen Sprachen II: Die ostjakische Sprache. In
Sinor (ed.) 1988, 172–196.
Honti, László. 1998. Ob Ugrian. In Daniel Abondolo (ed.), The Uralic languages,
327–357. London: Routledge.
Hyman, Larry M. 2007. Where’s phonology in typology? Linguistic Typology 11.
265–271.
Jakobson, Roman. 1972 [1931]. Principles of historical phonology. In Allan R.
Keiler (ed.), A reader in historical and comparative linguistics, 121–138. New
York: Holt, Rinehart and Winston. Translation of Prinzipien der historischen
Phonologie. Travaux du cercle linguistique de Prague 4. 247–267.
Copenhagen, 1931.
Jakobson, Roman, C. Gunnar M. Fant, & Morris Halle. 1952. Preliminaries to
speech analysis. MIT Acoustics Laboratory, Technical Report, No. 13.
Reissued by MIT Press, Cambridge, Mass., 11th printing, 1976.
Jakobson, Roman & Morris Halle. 1956. Fundamentals of language. The Hague:
Mouton.
Kabak, Bariş. 2011. Turkish vowel harmony. In van Oostendorp et al. (eds.) 2011.
Kaun, Abigail Rhoades. 1995. The typology of rounding harmony: An Optimality
Theoretic approach. Doctoral dissertation, University of California, Los
Angeles.
Kiparsky, Paul. 1965. Phonological change. Doctoral dissertation, MIT.
Ko, Seongyeon. 2010. A contrastivist view on the evolution of the Korean vowel
system. In Hiroki Maezawa & Azusa Yokogoshi (eds.), MITWPL 61:
Proceedings of the Sixth Workshop on Altaic Formal Linguistics, 181–196.
Ko, Seongyeon. 2011. Vowel contrast and vowel harmony shift in the Mongolic
languages. Language Research 47. 23–43.
Ko, Seongyeon. 2012. Tongue root harmony and vowel contrast in Northeast
Asian languages. Doctoral dissertation, Cornell University.
Kochetov, Alexei. 2011. Palatalization. In van Oostendorp et al. (eds.) 2011.
Korn, David. 1969. Types of labial vowel harmony in the Turkic languages.
Anthropological Linguistics 11. 98–106.
Li, Bing. 1996. Tungusic vowel harmony. The Hague: Holland Academic Graphics.
Li, Shulan & Qian Zhong. 1986. Xiboyu jianzhi [A brief introduction to the Xibe
language]. Beijing: Minzu Chubanshe.
Mackenzie, Sara. 2011. Contrast and the evaluation of similarity: Evidence from
consonant harmony. Lingua 121. 1401–1423.
Mackenzie, Sara. 2013. Laryngeal co-occurrence restrictions in Aymara:
Contrastive representations and constraint interaction. Phonology 30. 297–
345.
McCawley, James D. 1967. Edward Sapir’s “phonologic representation”.
International Journal of American Linguistics 33. 106–111.
Mielke, Jeff. 2008. The emergence of distinctive features. Oxford: Oxford
University Press.
Moulton, William G. 1960. The short vowel systems of Northern Switzerland: A
study in structural dialectology. Word 16. 155–182.
Nevins, Andrew. 2010. Locality in vowel harmony. Cambridge, MA: MIT Press.
Newman, Stanley. 1944. Yokuts language of California (VFPA 2). New York: The
Viking Fund Publications in Anthropology.
Odden, David. 2011. The representation of vowel length. In van Oostendorp et al.
(eds.) 2011.
Oostendorp, Marc van, Colin J. Ewen, Elizabeth Hume, & Keren Rice (eds.). 2011.
The Blackwell companion to phonology. Oxford: Blackwell.
Oxford, Will. 2012a. “Contrast shift” in the Algonquian languages. In A. McKillen &
J. Loughren (eds.), Proceedings from the Montreal-Ottawa-Toronto (MOT)
Phonology Workshop 2011: Phonology in the 21st Century: In Honour of
Glyne Piggott. McGill Working Papers in Linguistics 22 (1).
Oxford, Will. 2012b. On the contrastive status of vowel length. Presented at the
MOT Phonology Workshop, University of Toronto, March 2012. http://home.cc.
umanitoba.ca/~oxfordwr/papers/Oxford_2011_MOT.pdf.
Oxford, Will. 2015. Patterns of contrast in phonological change: Evidence from
Algonquian vowel systems. Language 91. 308–357.
Padgett, Jaye. 2003. Contrast and post-velar fronting in Russian. Natural
Language & Linguistic Theory 21. 39–87.
Purnell, Thomas & Eric Raimy. 2013. Contrastive features in phonetic
implementation: The English vowel system. Presented at the CUNY
Phonology Forum Conference On The Feature, January 2013.
Purnell, Thomas & Eric Raimy. 2015. Distinctive features, levels of representation,
and historical phonology. In Patrick Honeybone & Joseph Salmons (eds.), The
handbook of historical phonology, 522–544. Oxford: Oxford University Press.
Qinggertai (Chingeltei). 1982. Guanyu yuanyin hexielü [On the vowel harmony
rule]. Zhongguo Yuyanxue Bao 1. 200–220.
Rice, Keren. 2003. Featural markedness in phonology: Variation. In Lisa Cheng &
Rint Sybesma (eds.), The second Glot International state-of-the-article book:
The latest in linguistics, 387–427. Berlin: Mouton de Gruyter.
Rice, Keren. 2007. Markedness in phonology. In Paul de Lacy (ed.), The
Cambridge handbook of phonology, 79–97. Cambridge: Cambridge University
Press.
Roeder, Rebecca & Matt Hunt Gardner. 2013. The phonology of the Canadian
Shift revisited: Thunder Bay and Cape Breton. University of Pennsylvania
Working Papers in Linguistics: Selected Papers from NWAV 41, 19 (2). 161–
170.
Rohany Rahbar, Elham. 2008. A historical study of the Persian vowel system.
Kansas Working Papers in Linguistics 30. 233–245.
Sammallahti, Pekka. 1988. Historical phonology of the Uralic languages. In Sinor
(ed.) 1988, 478–554.
Samuels, Bridget D. 2011. Phonological architecture: A biolinguistic perspective.
Oxford: Oxford University Press.
Sapir, Edward. 1925. Sound patterns in language. Language 1. 37–51. Reprinted
in Martin Joos (ed.), Readings in linguistics I, 19–25. Chicago, IL: University of
Chicago Press, 1957.
Saussure, Ferdinand de. 1972 [1916]. Cours de linguistique générale. Publié par
Charles Bally et Albert Sechehaye; avec la collaboration de Albert Riedlinger.
Éd. critique préparée par Tullio de Mauro. Paris: Payot.
Sinor, Denis (ed.). 1988. Handbuch der Orientalistik: Handbook of Uralic studies,
vol. 1: The Uralic languages. Leiden: E. J. Brill.
Steinitz, Wolfgang. 1955. Geschichte des wogulischen Vokalismus. Berlin:
Akademie-Verlag.
Stevens, Kenneth N., Samuel Jay Keyser, & Haruko Kawasaki. 1986. Toward a
phonetic and phonological theory of redundant features. In Joseph S. Perkell
& Dennis H. Klatt (eds.), Symposium on invariance and variability of speech
processes, 432–469. Hillsdale, NJ: Lawrence Erlbaum.
Svantesson, Jan-Olaf. 1985. Vowel harmony shift in Mongolian. Lingua 67. 283–
327.
Vajda, Edward. 2001. Test materials dated August 17, 2001. Posted at http://pando
ra.cii.wwu.edu/vajda/ling201/test2materials/Phonology3.htm.
Walker, Rachel. 2001. Round licensing and bisyllabic triggers in Altaic. Natural
Language & Linguistic Theory 19. 827–878.
Walker, Rachel. 2014. Nonlocal trigger-target relations. Linguistic Inquiry 45. 501–
523.
Zhang, Xi. 1996. Vowel systems of the Manchu-Tungus languages of China.
Doctoral dissertation, University of Toronto.
Zhang, Xi & B. Elan Dresher. 1996. Labial harmony in written Manchu. Saksaha: A
Review of Manchu Studies 1. 13–24.
Ellen Broselow
Laryngeal contrasts in second
language phonology
Potentially fruitful test cases for criteria (1c) and (1d) involve new linguistic
systems, among them the patterns of speakers acquiring a novel language.
Typological markedness has frequently been invoked to explain the
emergence of patterns in second language (L2) phonology that appear to
have no basis in either the native or the foreign language grammars (e.g.,
Eckman 1977, 1984). A surprising number of L2 studies have reported that
for speakers of native languages that lack final laryngeal contrasts, or that
lack any final obstruents, the mastery of final voiceless obstruents precedes
the mastery of final voiced obstruents. This finding has served as a veritable
poster child for arguments that second language learning is guided by
universal principles, even in the absence of direct supporting evidence in
the input to the learner.
The goal of this chapter is to survey the literature on the second
language acquisition of laryngeal contrasts, in order to determine, first, the
extent to which L2 patterns align with typological generalizations, and
second, whether the second language data can shed light on the nature and
source of these typological generalizations. To begin, we distinguish two
opposing views (along a broad spectrum) concerning the nature of
typological asymmetries. On one view, typology reflects what Moreton
(2008) calls “channel bias”: factors based in articulation and perception
make certain structures less likely to survive in the transmission of language
across generations (Blevins 2004 and Ohala 1981, among many others), and
listeners’ imperfect perception of more fragile contrasts ultimately results in
phonologization of a system lacking these contrasts (Hyman 1976). The
numerous aerodynamic and acoustic factors that make voicing difficult to
maintain and to perceive in final positions (reviewed in Blevins 2004, 2006
and Myers 2012) make the typological generalizations concerning laryngeal
contrasts very strong candidates for this sort of explanation. However, some
L2 evidence has been argued to support the view that at least some
typological generalizations reflect what Moreton calls “analytic bias”,
defined as “cognitive biases which facilitate the learning of some
phonological patterns and inhibit that of others” (Moreton 2008: 84). On
this view, language learners simply will not entertain the hypothesis that the
system they are learning fails to conform to the relevant typological
generalization. In surveying the second language literature, we will consider
the fit of the second language data, particularly the finding that L2 learners
frequently master some L2 structures earlier than other equally novel
structures, with typological generalizations. We will consider explanations
of the L2 patterns ranging from the articulatory and perceptual difficulty of
particular structures (channel bias effects) to learning biases potentially
rooted in universal grammatical constraints (analytic bias effects).
Before proceeding, some caveats are in order regarding the scope of this survey. First, in considering the acquisition of final obstruents, we will
consider only single obstruents in final position, since the introduction of
consonant clusters introduces additional factors that cloud the debate.
Second, the term “second language acquisition” casts a wide net, including
learners ranging from children to adults, with varying levels of proficiency
and exposure, and situations ranging from naturalistic learning to formal
instruction. Furthermore, the studies in the second language literature reviewed below employ a wide range of methodologies, which makes comparison across studies difficult. We will see, however, that certain patterns emerge across a
wide range of subject populations and methodologies.
Section 2 reviews the typological claims concerning the favored
positions for laryngeal contrasts in native language systems, as well as
favored segment types in different positions. In Section 3 we will see that
studies of speakers from a wide range of native languages show more
success in mastering the typologically more natural structures, and we will
consider possible explanations of individual cases. Section 4 focuses on the
question of whether the preferred repair strategy for those learners who fail
to successfully produce final voiced obstruents is devoicing of the
obstruent, as predicted by Steriade’s (2001/2008) proposal. Here we will
consider the interaction of devoicing with speaker-dependent factors such
as proficiency as well as linguistic factors such as word size and the manner
and place of articulation of the target final obstruents. We conclude by
discussing the implications of the second language data for theories of
typology.
Broselow (2004) argues that the intermediate ranking falls out of the
Gradual Learning Algorithm approach (Boersma & Hayes 2001), in which
the rate of constraint demotion is an effect of the frequency of input tokens
that violate the constraint. Since the constraint banning all final obstruents
will of necessity be violated more frequently than the constraint banning
only voiced final obstruents, the general constraint is demoted more rapidly
than the more specific constraint. Because constraint rankings in the Gradual Learning Algorithm are stochastic and may vary across different speech events, the approach also predicts variation; Cardoso (2007) uses the GLA to model the variable productions he finds in his study of Brazilian Portuguese learners of English.
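To make the frequency-based logic concrete, the following is a deliberately simplified sketch (hypothetical constraint names, plasticity value, and token proportions; not Boersma & Hayes' full error-driven algorithm) of why a general constraint against all final obstruents is demoted faster than a specific constraint against voiced final obstruents: every final-obstruent token in the input counts against the general constraint, but only the voiced-final tokens count against the specific one.

```python
import random

# Hypothetical ranking values; higher = more dominant. Constraint names and the
# plasticity value are illustrative, not Boersma & Hayes' (2001) actual settings.
ranking = {"*FinalObstruent": 100.0, "*FinalVoicedObstruent": 100.0}
PLASTICITY = 0.1

def demote_on_token(final_obstruent_is_voiced):
    """Simplified GLA-style update: each observed final-obstruent token demotes
    the markedness constraints that the (faithful) observed form violates."""
    ranking["*FinalObstruent"] -= PLASTICITY
    if final_obstruent_is_voiced:
        ranking["*FinalVoicedObstruent"] -= PLASTICITY

# Toy input: assume 70% of the final obstruents heard by the learner are voiceless.
random.seed(0)
for _ in range(10000):
    demote_on_token(final_obstruent_is_voiced=random.random() < 0.3)

print(ranking)
# The general constraint is violated by every token and sinks quickly; the specific
# constraint is violated only by roughly 30% of tokens and sinks slowly, yielding the
# intermediate stage in which only voiced final obstruents are still banned.
```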
While the constraint-based approach appears to be compatible with the
earlier mastery of L2 voiceless than of voiced final obstruents for speakers
of languages like Mandarin (with no final obstruents) or like German (with
only voiceless final obstruents), this approach faces a challenge from the
cases of asymmetry mentioned above involving speakers of languages with
final voicing contrasts. Speakers of Hungarian (Altenberg & Vago 1983)
and Farsi (Eckman 1984), which allow both voiced and voiceless final
obstruents, should approach the L2 with a native language grammar that has
the same constraint ranking as English. To account for these learners’
greater success in producing English voiceless than voiced final obstruents,
a proponent of the grammar-based approach might argue that while the
grammars of Hungarian and Farsi permit final voiced obstruents, the
phonetic realization of voiced targets is sufficiently different from the
realization of voicing in English that the learners’ attempts to produce
voiced stops are not recognized as such by native English speakers.
However, this explanation faces the difficulty that in Hungarian, final
voiced stops are actually more fully voiced than are their English
counterparts (Gosy & Ringen 2009), a fact that might lead us to expect that
transfer of native language articulatory routines should make final voicing
easy to hear. An alternative explanation could appeal to a difference in the
phonological feature specifications that define the laryngeal contrasts in
English vs. in the other languages. If, as proposed in Iverson & Salmons
(1995), Jessen & Ringen (2002), Vaux & Samuels (2005), and many others,
the relevant feature distinguishing stops is [spread glottis] for aspirating
languages like English but [voice] for voicing languages like Hungarian,
then the grammars of Hungarian and Farsi may not in fact be identical to
that of English. The validity of these approaches can only be evaluated in
the context of detailed study of the acoustics of the relevant languages,
explicit analyses of the grammars of the two languages, and explicit
theories of phonological specification.
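As a rough illustration of how such specifications could differ, the sketch below follows the general line of the proposals cited above: an aspirating language specifies its fortis series as [spread glottis], while a true voicing language specifies its lenis series as [voice], with the other series laryngeally unspecified. The representations are assumptions for exposition, not an analysis of English, Hungarian, or Farsi.

```python
# Illustrative laryngeal specifications in the spirit of Iverson & Salmons (1995)
# and related work; the feature sets below are assumptions for exposition only.
LARYNGEAL_SPECS = {
    "aspirating (e.g., English)": {
        "p t k": {"[spread glottis]"},   # fortis series carries the laryngeal feature
        "b d g": set(),                  # lenis series laryngeally unspecified
    },
    "voicing (e.g., Hungarian)": {
        "p t k": set(),                  # fortis series laryngeally unspecified
        "b d g": {"[voice]"},            # lenis series carries the laryngeal feature
    },
}

def same_specification(lang_a, lang_b, series):
    """Do two grammars assign the same laryngeal specification to a given series?"""
    return LARYNGEAL_SPECS[lang_a][series] == LARYNGEAL_SPECS[lang_b][series]

# Under these assumptions the two grammars differ even though both contrast
# voiced and voiceless stops in final position.
print(same_specification("aspirating (e.g., English)",
                         "voicing (e.g., Hungarian)", "b d g"))  # False
```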
An additional explanation for the developmental asymmetry that should
be considered is the possibility of asymmetries in the data available to the
learner. If final voiceless obstruents are significantly more frequent in the
target language than are final voiced obstruents (that is, if input to the
learners contains significantly more tokens of final voiceless than voiced
obstruents), this could explain why learners might acquire the former before
the latter, with no recourse to markedness considerations. As Broselow &
Xu (2004) point out, the order in which new English structures are mastered
by Mandarin-speaking learners does not correlate in any obvious way
with the frequency of different English coda types as outlined by Kessler &
Treiman (1997), though systematic studies of frequency in learner input are
lacking.
In summary, the second language data provides convincing evidence for
hierarchies of difficulty: for learners from a variety of native language
backgrounds, L2 final voiced obstruents seem to be harder to successfully
produce than either L2 nonfinal voiced or final voiceless obstruents. We
now examine the nature of L2 learners’ unsuccessful productions of final
voiced obstruents.
Merrill argues that the nasalization process arose from two earlier
processes: at an earlier stage, all voiced stops were prenasalized;
subsequently, prenasalized stops became plain stops in onset position but
became nasals in coda. Although the nasalization pattern arose through
separate sound changes, it seems to have become established in the
synchronic grammar by learners who have not been exposed to the separate
stages that gave rise to this pattern. This disqualifies the preference for final devoicing from the status of a true phonological universal, according to
Kiparsky’s criteria (reviewed in Section 1), which include the claim that
learners will never construct grammars that violate a true universal.
Nonetheless, it is clear that the overwhelming majority of languages do
choose final devoicing as the preferred option. A weaker version of
Steriade’s claim would be to ascribe the preference for final devoicing to a
default, initial-state ranking which holds in the absence of evidence to the
contrary, but which could be adjusted when learners are exposed to
evidence contradicting this ranking. On this view, the responsibility to
explain the rarity of repairs other than final devoicing would rest with
channel bias effects rather than with the formal grammar.
However, attempts to investigate channel bias effects in repair of final
voiced obstruents are not entirely consistent with the perceptual similarity
hypothesis. Kawahara & Garvey (2010), in an online experiment, elicited
direct judgments of perceptual similarity by asking participants to compare
forms with final voiced obstruents (e.g., ab) with possible corresponding forms (e.g., am, a, aba, ap) and to rate the similarity of each pair. In trials that involved orthographic presentation of forms, the devoicing option was chosen as most similar to the final-obstruent form, consistent with Steriade's
claim. But when forms were presented auditorily, the form with final
epenthetic schwa was judged most similar to the final-obstruent form.
Kawahara & Garvey note that the final obstruents in the auditory stimuli
were released, and although the release was spliced off, sufficient
information may have remained to bias listeners toward the vowel insertion
form. These facts suggest that determining the closest perceptual match
may rely on a complex combination of subtle phonetic details.
With these facts in mind, we now turn to the question of whether
learners’ non-target-like productions provide evidence for devoicing as the
preferred (if not necessarily universal) repair. We consider the relative
proportions of different repairs (consonant deletion, vowel insertion, and
final devoicing) in various studies, and the effect on choice of repair of
several factors: learner proficiency, task, and grammatical context; the
existence of an active devoicing process in the native language; word size
and stress; and manner and place of articulation.
Wang also investigated the effect of word stress on the choice of deletion
vs. insertion. Her disyllabic forms were equally divided between those with
initial stress and those with final stress. Epenthesis was more likely in the
final-stress disyllables than in the initial-stress disyllables, suggesting that
in the absence of word size effects, stress did have an effect. However, in a
comparison of monosyllables with final-stress disyllables, the overall rate
of epenthesis was still significantly higher for monosyllables than for final-
stress disyllables.
Additional evidence of word size effects comes from Cardoso’s (2007)
study of six speakers of Brazilian Portuguese, a language in which the only
possible coda obstruent is /s/. The speakers in this study either produced
coda stops correctly, or inserted a vowel following the coda stop (which he
argues is a productive native language strategy for syllabifying stops,
though he notes that devoicing has been reported in other studies of
Brazilian Portuguese-English interlanguage). Cardoso’s study included
learners at three levels, and while the lowest-level speakers produced almost no codas successfully (i.e., inserted a following vowel), the intermediate and advanced learners were far more likely to produce coda stops in polysyllabic words (37% and 59%, respectively) than in monosyllables
(16% correct production for intermediate and 31% correct production for
advanced learners). As Cardoso notes, Brazilian Portuguese contains a
number of highly frequent monosyllables, as does English, though in
English, monosyllabic content words must arguably be bimoraic. He argues
that “the language learner opts for minimal word disyllabicity, a structure
that is enforced neither in BP nor in English, over bimoraicity, which
represents the target-like structure” (Cardoso 2007: 227).
Thus, while devoicing is extremely common in second language
phonology, it is not necessarily the favored strategy, even for learners who
have the ability to produce obstruents in final position. These facts are
consistent with the view that the choice of final devoicing over other repairs
represents at most a strong preference rather than an absolute universal, and
one that may interact with other universal preferences. In fact, the word size
effects are reconcilable with Steriade’s claim that the universal preference
for final devoicing represents a default ranking of faithfulness constraints,
given the architecture of Optimality Theory grammars. So long as the
faithfulness constraints are outranked by markedness constraints demanding
a disyllabic word minimum, vowel insertion will be chosen over deletion or
devoicing for final obstruents in monosyllables, even when the ranking of
faithfulness constraints defines devoicing as the generally preferred option.
This is illustrated in the tableau below (where D indicates any voiced
obstruent):
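The same ranking logic can also be sketched computationally. The constraint set and candidate forms below are illustrative and are not the chapter's own tableau: WDMIN penalizes words of fewer than two syllables, *D# penalizes a final voiced obstruent, and DEP, MAX, and IDENT[voice] are the familiar faithfulness constraints, with IDENT[voice] ranked lowest so that devoicing is the default repair. When WDMIN dominates faithfulness, epenthesis wins for a monosyllabic input even though devoicing wins elsewhere.

```python
# A minimal OT evaluation sketch of the word-size interaction described above.
# Constraint names, inputs, and candidates are illustrative assumptions.
RANKING = ["WDMIN", "*D#", "DEP", "MAX", "IDENT_VOICE"]  # high to low

def violations(candidate, input_form):
    return {
        "WDMIN": 1 if candidate.count("V") < 2 else 0,            # not disyllabic
        "*D#": 1 if candidate.endswith("D") else 0,                # final voiced obstruent
        "DEP": max(0, len(candidate) - len(input_form)),           # epenthesis
        "MAX": max(0, len(input_form) - len(candidate)),           # deletion
        "IDENT_VOICE": 1 if candidate.endswith("T")
                            and input_form.endswith("D") else 0,   # devoicing
    }

def profile(candidate, input_form):
    v = violations(candidate, input_form)
    return tuple(v[k] for k in RANKING)

def evaluate(input_form, candidates):
    # The candidate with the lexicographically smallest violation profile wins.
    return min(candidates, key=lambda c: profile(c, input_form))

# Monosyllabic input: epenthesis wins, because the devoiced candidate still violates WDMIN.
print(evaluate("CVD", ["CVD", "CVT", "CV", "CVDV"]))             # -> CVDV
# Disyllabic input: all candidates satisfy WDMIN, so devoicing wins.
print(evaluate("CVCVD", ["CVCVD", "CVCVT", "CVCV", "CVCVDV"]))   # -> CVCVT
```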
A closer look at the choice of repair for stops and fricatives is intriguing.
The rate of target-like productions was the same for voiced stops and voiced
fricatives; the major difference lies in the higher rates of deletion and
epenthesis for voiced fricatives (72% combined) vs. voiced stops (27%
combined). This may be a native language effect: Vietnamese has no final
fricatives, but does have final stops (albeit only voiceless ones). Thus, the
higher rate of devoicing for stops than fricatives may simply reflect the fact
that devoicing is not an option for final fricatives, since these speakers
cannot yet successfully produce fricatives in final position. However, it is
puzzling that for voiceless stops and fricatives, the rates of target-like
productions were comparable (leaving out the surprisingly high rates for
/p/). Thus, while the Vietnamese data provide clear support for a difficulty
hierarchy involving voiceless vs. voiced final fricatives, their significance
with respect to the relationship between manner and the likelihood of
devoicing is less clear.
If we take the Dutch pattern – greater likelihood of devoicing of final
fricatives than final stops – as representative of phonetically-grounded
factors disfavoring voiced fricatives, it seems likely that we should find
languages in which the pattern of devoicing final fricatives but not final
stops has become phonologized. Myers (2012) addresses this question in
the context of his proposal that word-final and syllable-final devoicing
processes arise historically from the generalization of utterance-final
devoicing: “One might expect from this that utterance-final fricative
devoicing should be the most common version of the pattern of final
devoicing [. . .] But it certainly does not seem as if such cases are more
common than [. . .] devoicing of all obstruents including stops” (Myers
2012: 173). Myers cites only one language, Gothic, where final devoicing is
limited to fricatives. Thus, while the Dutch speakers’ L2 patterns are
congruent with aerodynamic and perceptual factors that appear to make a
final voicing contrast in stops more natural than one in fricatives, it does not
seem to be the case that this asymmetry has become widely
grammaticalized. If such an asymmetry emerges frequently in second
language phonology but never as a pattern in a first language, it might
provide an argument for an analytic bias against a grammar that allows
devoicing for one set of obstruents but not another, although at this point
the evidence from second language production is too limited to support this
claim. A related question is whether the same or different feature
specifications govern laryngeal contrasts for stops and fricatives, and
whether stop devoicing and fricative devoicing should be treated as
different processes in grammars. (See Vaux 1998 for the proposal that the
unmarked opposition for voiceless and voiced fricatives is [+spread glottis]
vs. [-spread glottis], and van Oostendorp 2007 for the proposal that for at
least some Dutch dialects, the fricative contrast is better explained in terms
of length rather than laryngeal features.)
Native speakers also showed less voicing in the high vowel-velar case than in other cases, but their percentage of voicing was, for each vowel-consonant combination, significantly higher than that of the non-native speakers (e.g., 65.5% voicing during a velar closure following a high vowel).
While it makes sense that the smaller oral cavity associated with velars
and the narrower constriction of high vowels should have an additive effect
on voicing, this does not appear to be a phonologized pattern in languages;
Moreton (2008) argues that few languages show systematic interactions of
vowel height and voicing, and those interactions that are attested take the
form of the raising of vowel height before voiced consonants and the
lowering of vowel height before voiceless consonants (though see Yu 2011
for a different interpretation of Moreton’s data). Thus, place and vowel
height effects, though they appear in the phonetic detail of both native and
non-native speakers, appear not to have been grammaticalized in either first
language or interlanguage phonology. Again, this is an area where research
is relatively sparse.
5 Conclusion
We set out to determine first, whether the facts of second language
phonology are compatible with typological generalizations, and second,
whether the second language facts can shed light on the source of
typological generalizations.
We found numerous cases supporting a difficulty hierarchy for final
voiceless vs. voiced stops in second language phonology, and this difficulty
hierarchy aligns with the typological generalizations on preferred segment
type. Across a range of native languages, including those with no final
obstruents, those with only voiceless final obstruents, and those with a final
laryngeal contrast, speakers successfully produced L2 final voiceless
obstruents before final voiced obstruents. In no case was there evidence of
speakers acquiring the more marked structure (final voiced obstruents)
before the less marked structure (final voiceless obstruents). Whether these
facts reflect articulatory and perceptual factors or the effects of formal
grammatical constraints is difficult to resolve – since the structural
constraints of Optimality Theory are generally grounded in articulatory and
perceptual considerations, there is considerable overlap between the
approaches. However, we note that locating the difficulty of final voiced
obstruents in articulatory and perceptual difficulty alone predicts that the
likelihood of a difficulty hierarchy emerging in second language acquisition
should be a function of the phonetic robustness of the contrast in the target
language; for example, languages in which final stops are uniformly
released should provide the learner with more cues to the voicing contrast
than languages without such release. Systematic study of the productions of
both native and second language speakers across a range of languages is
necessary to address this question.
We also found that final devoicing was quite common in second
language phonology, although it was by no means the only strategy used.
Since speakers must be able to produce obstruents in final position before
they can devoice them, the fact that vowel insertion and consonant deletion
were also common repairs of L2 forms does not in itself invalidate
Steriade’s (2001/2008) claim that final devoicing is the only solution to the
final obstruent problem. We did, however, find evidence that some speakers
exhibit a systematic relationship between choice of repair and preferred
word size. It is intriguing that this pattern was found for speakers of two
different languages, Mandarin and Brazilian Portuguese, but is not clearly
attested in any native language system.
Effects of aerodynamic factors that contribute to the difficulty of
sustaining voicing appeared in some studies, at the level of relatively fine
phonetic detail: Dutch speakers’ final fricatives were less voiced than stops
(Simon 2010), and Mandarin, Japanese, and Portuguese speakers’ velars
were less voiced than alveolars and bilabials, though only after high vowels
(Yavas 2009). On the channel bias account, we might expect these
differences to give rise to systems in which the phonetic asymmetries
become phonologized. Yet such systems seem either rare or unattested;
Myers (2012) cites only one language, Gothic, in which fricatives, but not
stops, are regularly devoiced.
A reasonable place to look for systems that have phonologized the
effects found in second language phonology is in regionalized varieties of
English, where what generally began as a second language has now become
standardized. A striking number of regional Englishes show evidence of at
least some final devoicing. In a survey of English varieties of Africa, South
Asia, and Southeast Asia, Mesthrie (2004) reports final devoicing in St. Helena English, Cape Flats English, Black South African English, Nigerian
English, Ghanaian English, Cameroon English, Cameroon Pidgin,
Singapore English, and Malaysian English. Final devoicing is also reported
in Fiji English (Tent & Mugler 2004), Tok Pisin (Smith 2004), and Liberian
Settler English (Singler 2004). The prevalence of final devoicing suggests
that speakers did indeed converge on this repair as their systems stabilized.
It is notable that none of these descriptions identify epenthesis or deletion
as regular productive processes targeting single final voiced obstruents, and
no systems are identified as showing different treatment of final stops and
fricatives or systematic effects of place of articulation that are independent
of the substrate language. Thus, at least some of the well-founded phonetic
effects that emerge in second language phonology fail to acquire the status
of regular phonological processes.
References
Abrahamsson, Niclas. 2003. Development and recoverability of L2 codas: A
longitudinal study of Chinese-Swedish interphonology. Studies in Second
Language Acquisition 25. 313–349.
Altenberg, Evelyn & Robert Vago. 1983. Theoretical implications of an error
analysis of second language phonology production. Language Learning 33.
427–448.
Becker, Michael, Nihan Ketrez, & Andrew Nevins. 2011. The surfeit of the stimulus:
Analytic biases filter lexical statistics in Turkish laryngeal alternations.
Language 87. 84–125.
Beckman, Jill, Pétur Helgason, Bob McMurray, & Catherine Ringen. 2011. Rate
effects on Swedish VOT: Evidence for phonological overspecification. Journal
of Phonetics 39. 39–49.
Blevins, Juliette. 2004. Evolutionary phonology: The emergence of sound patterns.
Cambridge: Cambridge University Press.
Blevins, Juliette. 2006. A theoretical synopsis of Evolutionary Phonology.
Theoretical Linguistics 32. 117–166.
Blevins, Juliette. 2010. Phonetically-based sound patterns: Typological tendencies
or phonological universals? In Cécile Fougeron, Barbara Kühnert, Mariapaola
D’Imperio, & Nathalie Vallée (eds.), Papers in Laboratory Phonology 10:
Variation, phonetic detail and phonological modeling, 201–224. Berlin: Mouton
de Gruyter.
Broersma, Miriam. 2005. Perception of familiar contrasts in unfamiliar positions.
Journal of the Acoustical Society of America 117. 3890–3901.
Broselow, Ellen. 2004. Unmarked structures and emergent rankings in second
language phonology. International Journal of Bilingualism 8. 51–65.
Broselow, Ellen, Su-I Chen, & Chilin Wang. 1998. The emergence of the unmarked
in second language phonology. Studies in Second Language Acquisition 20.
261–280.
Broselow, Ellen & Yoonjung Kang. 2013. Second language phonology and speech.
In Julia Herschensohn & Martha Young-Scholten (eds.), The Cambridge
handbook of second language acquisition, 529–554. Cambridge: Cambridge
University Press.
Broselow, Ellen & Zheng Xu. 2004. Differential difficulty in the acquisition of
second language phonology. International Journal of English Studies 4
(special issue: Advances in Optimality Theory). 13–163.
Cahill, Michael. 1999. Aspects of morphology and phonology of Konni. Ph.D.
dissertation, Ohio State University.
Cardoso, Walcir. 2007. The variable development of English word-final stops by
Brazilian Portuguese speakers: A stochastic Optimality Theory account.
Language Variation and Change 19. 219–248.
Cebrian, Juli. 2000. Transferability and productivity of L1 rules in Catalan-English
interlanguage. Studies in Second Language Acquisition 22. 1–26.
Cichocki, Wladyslaw, Anthony B. House, A. Murray Kinloch, & Anthony C. Lister.
1993. Cantonese speakers and the acquisition of French consonants.
Language Learning 43. 43–68.
Eckman, Fred. 1977. Markedness and the contrastive analysis hypothesis.
Language Learning 27. 315–330.
Eckman, Fred. 1981. On the naturalness of interlanguage phonological rules.
Language Learning 31. 195–216.
Eckman, Fred. 1984. Universals, typology, and interlanguage. In William E.
Rutherford (ed.), Language universals and second language acquisition, 79–
105. Amsterdam: John Benjamins.
Eckman, Fred. 2004. From phonemic differences to constraint rankings: Research
on second language phonology. Studies in Second Language Acquisition 26.
513–549.
Edge, Beverly. 1991. The production of word-final voiced obstruents in English by
L1 speakers of Japanese and Cantonese. Studies in Second Language
Acquisition 13. 377–393.
Flege, James Emil & Richard D. Davidian. 1984. Transfer and developmental
processes in adult foreign language speech production. Applied
Psycholinguistics 5. 323–347.
Flege, James Emil, Martin J. McCutcheon, & Steven C. Smith. 1987. The
development of skill in producing word-final English stops. Journal of the
Acoustical Society of America 82. 433–447.
Flege, James Emil & Chipin Wang. 1989. Native-language phonotactic constraints
affect how well Chinese subjects perceive the word-final English /t/-/d/
contrast. Journal of Phonetics 17. 299–315.
Gordon, Matthew. 2007. Typology in Optimality Theory. Language and Linguistics
Compass 1. 750–769.
Gosy, Maria & Catherine Ringen. 2009. Everything you always wanted to know
about VOT in Hungarian. Talk presented at the International Conference on
the Structure of Hungarian 9. Debrecen, Hungary.
Greenberg, Joseph, Charles Ferguson, & Edith Moravcsik (eds.). 1978. Universals
of human language. Volume 2: Phonology. Stanford, CA: Stanford University
Press.
Hammarberg, Björn. 1990. Conditions on transfer in phonology. In Allan R. James
& Jonathan Leather (eds.), New sounds 90: Proceedings of the 1990
Symposium on the Acquisition of Second-Language Speech, 198–215.
Dordrecht: Foris.
Hancin-Bhatt, Barbara. 2000. Optimality in second language phonology: Codas in
Thai ESL. Second Language Research 16. 201–232.
Hansen, Jette G. 2004. Developmental sequences in the acquisition of English L2
syllable codas. Studies in Second Language Acquisition 26. 85–124.
Helgason, Pétur & Catherine Ringen. 2008. Voicing and aspiration in Swedish
stops. Journal of Phonetics 36. 607–628.
Heyer, Sarah. 1986. English final consonants and the Chinese learner.
Unpublished master’s thesis, Southern Illinois University Edwardsville.
Hillenbrand, James, Dennis R. Ingrisano, Bruce L. Smith, & James E. Flege. 1984.
Perception of the voiced-voiceless contrast in syllable-final stops. Journal of
the Acoustical Society of America 76. 18–26.
Hyman, Larry. 1976. Phonologization. In Alphonse Juillard (ed.), Linguistic studies
offered to Joseph Greenberg, volume 2, 407–418. Saratoga, CA: Anna Libri.
Hyman, Larry. 2008. Universals in phonology. The Linguistic Review 25. 83–137.
Iverson, Gregory & Joseph Salmons. 1995. Aspiration and laryngeal
representation in Germanic. Phonology 12. 369–396.
Iverson, Gregory & Joseph Salmons. 2011. Final devoicing and final laryngeal
neutralization. In Marc van Oostendorp, Colin Ewen, Elizabeth V. Hume, &
Keren Rice (eds.), The Blackwell companion to phonology, volume 3, 1622–
1643. Oxford: Blackwell Publishing.
Jessen, Michael & Catherine Ringen. 2002. Laryngeal features in German.
Phonology 19. 189–218.
Kawahara, Shigeto & Kelly Garvey. 2010. Testing the P-map hypothesis: Coda
devoicing. Rutgers Optimality Archive.
Keating, Patricia, Wendy Linker, & Marie Huffman. 1983. Patterns in allophone
distribution for voiced and voiceless stops. Journal of Phonetics 11. 277–290.
Kenstowicz, Michael. 2005. The phonetics and phonology of loanword adaptation.
In S.-J. Rhee (ed.), Proceedings of ECKL 1: Proceedings of First European
Conference on Korean Linguistics, 316–340. Seoul: Hankook Publishing.
Kessler, Brett & Rebecca Treiman. 1997. Syllable structure and the distribution of
segments in English syllables. Journal of Memory and Language 37. 295–311.
Kiparsky, Paul. 2006. The amphichronic program vs. evolutionary phonology.
Theoretical Linguistics 32. 217–236.
Kiparsky, Paul. 2008. Universals constrain change; change results in typological
generalizations. In Jeff Good (ed.), Language universals and language
change, 23–53. Oxford: Oxford University Press.
Lombardi, Linda. 1995. Laryngeal neutralization and syllable well-formedness.
Natural Language and Linguistic Theory 13. 39–74.
Maddieson, Ian. 1984. Patterns of sounds. Cambridge: Cambridge University
Press.
Major, Roy & Michael Faudree. 1996. Markedness universals and the acquisition
of voicing contrasts by Korean speakers of English. Studies in Second
Language Acquisition 18. 69–90.
Merrill, John. 2015a. Nasalization as a repair for voiced obstruent codas in Noon.
Talk presented at the Annual Meeting of the LSA, January, 2015.
Merrill, John. 2015b. Nasalization as a repair for voiced obstruent codas in Noon.
LSA Annual Meeting Extended Abstracts. http://journals.linguisticsociety.org/p
roceedings/index.php/ExtendedAbs/article/view/3014
Mesthrie, Rajend. 2004. Synopsis: The phonology of English in Africa and South
and Southeast Asia. In Schneider et al. (eds.) 2004, 1099–1110.
Moreton, Elliott. 2008. Analytic bias and phonological typology. Phonology 25. 83–
127.
Myers, Scott. 2012. Final devoicing: Production and perception studies. In Toni
Borowsky, Shigeto Kawahara, Takahito Shinya, & Mariko Sugahara (eds.),
Prosody matters: Essays in honor of Elisabeth Selkirk, 148–180. London:
Equinox.
Myers, Scott & Jaye Padgett. 2015. Domain generalisation in artificial language
learning. Phonology 31. 399–434.
Oh, Mira. 1996. Linguistic input to loanword phonology. Studies in Phonetics,
Phonology, and Morphology 2. 117–126.
Ohala, John. 1981. The listener as a source of sound change. In Carrie S. Masek,
Roberta A. Hendrick, & Mary Frances Miller (eds.), CLS: Papers from the
parasession on language and behavior, 178–203. Chicago: Chicago Linguistic
Society.
Ohala, John. 1983. The origin of sound patterns in vocal tract constraints. In Peter
MacNeilage (ed.), The production of speech, 189–216. New York: Springer.
Oostendorp, Marc van. 2007. Exceptions to final devoicing. In Jeroen van de
Weijer & Erik Jan van de Torre (eds.), Voicing in Dutch: (De)voicing –
phonology, phonetics, and psycholinguistics, 81–98. Amsterdam: John
Benjamins.
Peng, Long & Jean Ann. 2004. Obstruent voicing and devoicing in the English of
Cantonese speakers from Hong Kong. World Englishes 23. 535–564.
Raphael, Lawrence. 1972. Preceding vowel duration as a cue to the perception of
the voicing characteristic of word-final consonants in American English.
Journal of the Acoustical Society of America 51. 1296–1303.
Schneider, Edgar, Kate Burridge, Bernd Kortmann, Rajend Mesthrie, & Clive Upton
(eds.). 2004. A handbook of varieties of English. Berlin: Mouton de Gruyter.
Simon, Ellen. 2009. Acquiring a new second language contrast: An analysis of the
English laryngeal system of native speakers of Dutch. Second Language
Research 25. 377–408.
Simon, Ellen. 2010. Phonological transfer of voicing and devoicing rules: Evidence
from NL Dutch and L2 English conversational speech. Language Sciences 32.
63–86.
Singler, John. 2004. Liberian Settler English: Phonology. In Schneider et al. (eds.)
2004, 874–884.
Smith, Geoff. 2004. Tok Pisin in Papua New Guinea: Phonology. In Schneider et
al. (eds.) 2004, 710–728.
Smolensky, Paul. 1996. On the comprehension/production dilemma in child
language. Linguistic Inquiry 27. 720–731.
Steele, Jeffrey. 2002. L2 learners’ modification of target language syllable
structure: Prosodic licensing effects in interlanguage phonology. In Allan R.
James & Jonathan Leather (eds.), New sounds 2000: Proceedings of the 4th
International Symposium on the Acquisition of Second Language Speech,
315–324. Klagenfurt: University of Klagenfurt.
Steriade, Donca. 1999. Phonetics in phonology: The case of laryngeal
neutralization. UCLA Working Papers in Phonology 3. 25–146.
Steriade, Donca. 2001/2008. The phonology of perceptibility effects: The P-Map
and its consequences for constraint organization. In Kristin Hanson & Sharon
Inkelas (eds.), The nature of the word, 151–179. Cambridge: MIT Press.
Tent, Jan & France Mugler. 2004. Fiji English: Phonology. In Schneider et al. (eds.) 2004, 750–779.
Vaux, Bert. 1998. The laryngeal specification of fricatives. Linguistic Inquiry 29. 497–511.
Vaux, Bert & Bridget Samuels. 2005. Laryngeal markedness and aspiration.
Phonology 22. 395–436.
Wang, Chilin. 1995. The acquisition of English word-final obstruents by Chinese
speakers. Unpublished doctoral dissertation, Stony Brook University.
Weinberger, Steven. 1987. The influence of linguistic context on syllable
simplification. In Georgette Ioup & Steven H. Weinberger (eds.), Interlanguage
phonology: The acquisition of a second language sound system, 401–417.
Rowley, MA: Newbury House.
Westbury, John & Patricia Keating. 1986. On the naturalness of stop consonant
voicing. Journal of Linguistics 22. 145–166.
Wetzels, Leo & Joan Mascaró. 2001. The typology of voicing and devoicing.
Language 77. 207–244.
Wiltshire, Caroline. 2006. Word-final consonant and cluster acquisition in Indian
Englishes. In David Bamman, Tatiana Magnitskaia, & Colleen Zaller (eds.),
Online proceedings supplement, Boston University Conference on Language
Development 30.
Wissing, Daan & Wim Zonneveld. 1996. Final devoicing as a robust phenomenon
in second language acquisition: Tswana, English and Afrikaans. South African
Journal of Linguistics 14. 3–23.
Yavas, Mehmet. 2009. Factors influencing the VOT of English long lag stops in
interlanguage phonology. In M. A. Watkins, A. S. Reuber, & B. O. Baptista
(eds.), Recent research in second language phonetics/phonology: Perception
and production, 244–255. Newcastle upon Tyne: Cambridge Scholars
Publishing.
Yu, Alan C. 2004. Explaining final obstruent voicing in Lezgian: Phonetics and
history. Language 80. 73–97.
Yu, Alan C. 2011. On measuring phonetic precursor robustness: A response to
Moreton. Phonology 28. 491–518.
Tomas Riad
The phonological typology of North
Germanic accent
1 Introduction
Many of the dialects of Swedish and Norwegian exhibit a tonal contrast
within the intonational prominence that is superimposed on a syllable
carrying primary stress. Many properties of these so-called accents are
shared between dialects, but the tonal variation makes the dialects sound
quite different from one another, and this constitutes the main source for the
man-in-the-street’s recognition of the major dialect areas. This article is
concerned with laying bare the linguistic properties that form the basis of
the typology. My main point is that they are all structural in a quite concrete
sense, relating to phonological representation in terms of the value of the lexical tone as high/H or low/L, tonal association patterns in compounds,
and spreading behaviour. Several previous typologies have been based on
more phonetic and/or functional categories (e.g., number of tonal peaks, presence/absence of a separate focus gesture) which may describe parts of
the typology well, but which are ultimately too superficial. They tend either
to be insufficient when the typology is extended to all major dialect types or
to overgenerate in their predictions of possible dialect types. The
importance of identifying the most relevant structural categories for
typology in the tonal domain is emphasized below. Following from this, I
also hope to provide an updated and coherent account of the major dialect
types.
Varieties of Germanic that exhibit a lexical tonal contrast occur not only
in North Germanic (NGmc), but also in the Central Franconian varieties
spoken in and around the Rhine delta (West Germanic). There too, the tonal
distinction is superimposed on stressed syllables. If we look more widely,
we find this type of system also in, for instance, Bosnian/Croatian/Serbian,
varieties of Basque, Latvian and Lithuanian. The North Germanic system is
differently constituted from the Franconian and Baltic ones, e.g., in
requiring two syllables for the expression of one of the tonal categories
(accent 2). There is also an organic relationship between the NGmc tonal
varieties and the Danish stød system (Gårding 1977; Ringgaard 1983; Riad
2000a, 2000b, 2009b), but the typology given below will not include
Danish in a principled way, as there are outstanding issues regarding the
representation and status of Danish stød.122
The identification of parameters for microvariation is of course always
of interest, descriptively as well as comparatively. In the case of the
typology of very closely related varieties, the interest is enhanced by the
fact that one has a chance of getting a handle on the general frame for
variation. Depending on the unity of the system as a whole, access to
relatively rich linguistic information might allow for the formulation of
what constitutes a likely or less likely change. The North Germanic
typology is very coherent with regard to tonal structure, geographic
distribution of features, and also shared history. A good analysis should
reveal the prosodic relationship between dialects, and allow us to formulate
hypotheses regarding the relative structural distance between tonal varieties,
in its turn a prerequisite for the reconstruction of diachronic developments
within the tonal system.
It is our task, then, to identify the properties that best describe the
variation and thereby the individual varieties. My claim is that phonology
provides the most relevant level at which to formulate these things. In
particular, we must view phonetic and functional categories with
scepticism. While phonetics will provide a lot of relevant information, it is
the typology at the phonological/grammatical level that best explains
relationships between varieties.123
2 Previous typologies
There are a number of earlier typological treatments of NGmc accent. The
ground-breaking study of Meyer (1937, 1954) provides accent contrasts in
chiefly disyllabic simplex forms for some one hundred informants from
various locations around Scandinavia, with most recordings made in the
Central Swedish and Dala regions, a fair amount from the Göta region
(West Swedish, WSw), some recordings from North Swedish (NSw) and
scattered items from Finland, Estonia, and Denmark. Meyer’s materials
form an initial basis for the description of dialects in terms of the number of
tonal peaks, i.e., as one- or two-peaked realizations of accent 2 (cf. (1)
below). This material was also used as the basis for a hypothesis known as the
“Scandinavian accent orbit” by Öhman (1967), where varieties were lined
up according to the phonetic timing of peaks. In the phonetic tradition of
Norway, there are the studies of Fintoft (1970), Mjaavatn (1978), and
Fintoft, Mjaavatn, Møllergård, & Ulseth (1978), where four types of tonal
contour are identified (the same set for both accent 1 and accent 2). These
are then paired to get the various dialect areas. In Norway, tradition refers to
dialect types via the first tone of the accent 1 contour, as “high-tone” and
“low-tone” dialects.
In the Swedish tradition it is customary to talk about single peak
dialects and double peak dialects (or one- and two-peak dialects). This
refers exclusively to the accent 2 contour, as accent 1 invariably has only
one peak (Gårding & Lindblad 1973; Bruce & Gårding 1978). This
terminology remains in later work by Bruce (2005, 2007), where the
typology is more refined and more clearly extended to the broader
intonation. Below is the basic typology currently assumed for disyllabic
simplex words in citation form (given in Bye 2004, based on Gårding &
Lindblad 1973), illustrated with Meyer’s tonal contours.124
The general problem with this type of phonetic typology is overgeneration.
The reference points mentioned admit several types that are not attested
(e.g., a variant of 2A but with accent 1 having early timing of the peak, or a
type combining accent 1 of 1A with accent 2 of 1B), without articulating
expectations or reasons for why they should be excluded (or not). Some
answers to this type of problem will come out of a segmentation of the tonal
contours into constituent tones (Bruce 1977, 2004).
Proper phonological typologies, where reference is made to
phonological categories, e.g., by breaking down the global contour into a
string of constituent tones, are given in work by Lorentz (1995), Riad
(1998b, 2006), Bye (2004), and Bruce (2005, 2007). In Lorentz (1995), for
instance, the contour is divided into lexical tone, prominence tone, and
boundary tone. It must be noted, however, that beyond the lexical tone, the
functional aspects are not reliably tied to individual tones (Riad 2006). For
one thing, there appears to be a bias to use H tones for the prominence
function, whether it is a separate tonal gesture or a boundary tone (cf. Section 7).
The phonological typologies vary in geographic coverage (as well as in
analysis), but have in later years come to heed the broader area of North
Germanic tonal dialects, including both Norway and Sweden, and
sometimes also the few remaining tonal dialects of Finland (Lorentz 1995;
Riad 1998b; Bye 2004). In addition, these systems can then be put in
relation with Danish stød, which is clearly related historically (as is evident
from lexical distributional patterns), but which also clearly stands out
within the linguistic area.
Part of the background assumptions made by Gårding & Lindblad
(1973), Bruce (2005) and others is that two peaks would never occur in
accent 1, at least not to the exclusion of two peaks in accent 2. Another
(related) fact is that accent 2 always has the richer tonal structure, e.g., by
requiring one more tonal feature than accent 1. Beside these things, which
have implications for the representation of accentual contours, there are
distributional facts of which scholars have different interpretations. For
instance, tradition has often considered accent 2 as the typical accent of
disyllabic forms with initial stress, a fact supported by sheer type and token
frequencies. In phonological analyses this has sometimes been interpreted
as grounds for assuming accent 2 as the default accent of disyllables (Kock
1878; Danell 1937: 51; Malmberg 1970: 157; Öhman 1966; Teleman 1969:
187; Nyström 1997; Lahiri et al. 2005; Wetterlin 2010). On the other hand,
accent 2 correlates robustly with a large class of suffixes, inviting a quite
different analysis where accent 2 results from lexically represented
information in those suffixes (Riad 2009a, 2012, 2014, 2015). This type of
difference will not be settled in this article.125 Instead, we shall make a
number of basic assumptions explicit and then move on to the typological
comparison.
3 The tonal accent system and the crucial forms
The terms “accent 1” and “accent 2” are usually used with reference to the
entire tonal contour of words in citation form, hence in a focused context.
This means that lexical and intonational tonal material is not distinguished
until further segmentation is made. (2) provides an overview of the
segmentation in Central Swedish, for the two prominence levels in which
the accent distinction is realized. We will refer to the higher prominence
level as the “big” accent, and the lower prominence level as the “small”
accent (Myrberg & Riad 2015). This keeps the terminology free from
functional implications, and directs attention to the phonological shape,
without tying it to a particular dialect.126 There is thus a categorical
prominence level distinction between big accent and small accent, and the
lexical distinction is realized in both of them. For the purposes of the
typology, it will suffice to look at the big accent, which (largely) includes
the tonal material of the small accent.127 Bolded tones are lexical, all other
tones are postlexical. The accent distinction is privative.
The initial H* tone of the accent 2 contour is what distinguishes the big
accent contours for accent 1 and accent 2, respectively. The rise in the big
accent (LH) is common to both accent 1 and 2, and is purely intonational
(hence postlexical). We will refer to it as the “prominence tone”. The tonal
sequence is the same in the two cases of accent 2, but the initial H tone has
different sources. In simplex accent 2, the initial H tone (bold) is lexical (in
a root or a suffix), whereas in compound accent 2, this tone is postlexical.
This postlexical accent 2 tone is assigned by a rule which is sensitive to the
number of stresses, and which overrules lexical specifications.128 Any tones
of accent 1 are assigned postlexically. We will use the term “lexical tone” or
“(post)lexical tone” in reference to the first tone in the accent 2 contour
(bolded). This tone is what instantiates the marked member of the accentual
opposition, and accent 1 consequently consists of just intonation tones. The
lexical tone is invariably associated to the primary stressed syllable. The
next tone is the “prominence tone”. In accent 1, it is associated to the
primary stressed syllable, while in accent 2, it is displaced to the right by
the lexical tone which occupies the stressed syllable. A lexical tone thus
always takes precedence for association to the stressed syllable (= TBU), which will only host a single associated tone. In citation forms the prominence tone is
followed by the “boundary tone”, usually L%. The boundary tone is not
associated to a TBU, but is aligned with the end of the phrase. The three
terms are used in the overview in (9).
Let us now have a look at the privative contrast in simplex forms. In the
following panels the big accent (on 1ˈAllan and 2ˈAnna, respectively) is
followed by a small accent (on i1ˈgår ‘yesterday’). The presence of the
small accent here creates a stable endpoint for the big accent, allowing us to
compare the realization of the shared part of the big accent contour in the
two accents. The (small accent) HL* is coordinated with the beginning of
the stressed vowel of i1ˈgår in each panel.
The distinction between the two accents thus lies in the initial part of the
contour, where accent 2 contains an extra tone. The rest of the big tonal
contour of accent 2 is identical to the entire accent 1 contour and is constituted
by intonation tones only. We can illustrate this fact by matching the two
contours as in (5), where the lexical tone of accent 2 is to the left of the first
vertical line. In the right-hand panels, the contour of accent 1 is compressed
and the identity of the intonational part of the contour is evident. The tonal
sequences for accent 2 and accent 1 are given above and below,
respectively.
The shared part of the contour is delayed in accent 2, due to the presence of
the lexical tone. For the big accent 1 there is, then, more space for the
prominence tone (L*H), and the low target of the following small accent
(HL*) is reached earlier in igår than following a big accent 2.
Simplex forms typically only contain a single association point since
minimal prosodic words in Swedish can contain only a single stressed
syllable, unlike, e.g., English and German (Riad 2014). Whenever there are
more stresses, more minimal prosodic words are created, and the structure
as a whole receives what we shall call “compound accent”.129 Compound
accent is melodically the same as accent 2, but the first tone is postlexical
rather than lexical, cf. (2). Accent is here sensitive to the number of stresses
in a form, and this holds of Central Swedish and several other dialects
(Dala, Narvik, Göta).130 The fact that more TBUs become available in compounds also makes it possible for the prominence tone to associate to
the last stress of the compound (or other similar forms containing two
stresses, formal compounds or derivations with a stressed suffix, cf. (24)).
This is illustrated in (6).
This contour shows how the postlexical H* associates to the first stress and
the prominence L*H associates to the second and last stress. The trailing H
of the prominence tone floats and does not exhibit stable timing (Bruce
1987). Indeed, it may in some dialects drift to the right of the focused word
(Bruce 2003; Myrberg 2010).132 The example in (6) is cut out from the
middle of a phrase, so there is no final boundary L%, a fact that might
motivate the relatively late realization of the floating H. The fact that it is
the last stress that is the target for the secondary association is clear from
forms that contain several stressed syllables. This is illustrated in (7).
Longish compounds in focus position (i.e., exhibiting the big accent) that carry accent 2 are the single most relevant type of data to use in dialect comparison, since they exhibit more properties of the tonal grammar than any other form. Unlike simplex forms, which are otherwise the typical data used in typologies of North Germanic accent, long compounds show whether or not
there is a secondary association to a later TBU, as well as if there is
spreading or interpolation between association points. Both of these things
prove to be important parameters of the tone accent typology.133 We
comment on compounds with accent 1 separately, in Section 5.3.
The lexical accent distinction in terms of minimal pairs is of no
particular interest to the typology as such. The distinction carries a very
marginal functional load, and alleged minimal pairs are often not “clean”,
i.e., they are often constituted by inflected forms where the uninflected
forms do not form a minimal pair (8a). Also, it is incidental changes like
vowel reductions and consonant assimilations that determine whether there
are relatively many minimal pairs (Norwegian, around 3000, Leira 1998) or
not (Swedish, about 350, Elert 1972), cf. (8b). Furthermore, minimality is seldom instantiated by forms of the same grammatical category, cf. (8c). The
many flaws of minimal pairs, however, do not mean that there is no
unpredictable lexical distinction. Monomorphemic word pairs like those in
(8d) show that lexical tones are real. These forms are also semantically
close to each other.
It is the near-minimal pairs in (8d) that establish the lexical contrast, rather
than the alleged minimal pairs in (8a) and (8c).
4 One typology
The tonal dialects form a coherent typology by virtue of sharing some basic
properties. For one thing, the realization of accent 2 requires two syllables
(disregarding the few apocopating dialects; Lorentz 2008), whereas accent
1 requires only a single syllable. This is indeed the first argument for the
privative nature of the contrast. Since accent 2 requires more space, there
should be more tonal material in that contour.134
The fact that accent 2 requires so much space has an interpretation in
terms of tone bearing unit, which is the stressed syllable in Swedish and
Norwegian. Tones associate to primary stressed syllables in all dialects, and
in some dialects they associate to secondary stresses, too. This
understanding of the TBU, coupled with the common assumption of one-tone-per-TBU, thus predicts the synchronic requirement of two or more
syllables for the occurrence of accent 2. If the TBU were the mora, as for
example in most eastern Central Franconian varieties (Peters 2007: 171),
there would be nothing in the way of a contrast in monosyllables.
Another thing that makes the typology coherent is the tonal alternation,
which means that there are no challenges to the Obligatory Contour
Principle (OCP). While it has been proposed that there are OCP-induced
tonal epentheses (Lorentz 1995), the simplest analysis is achieved by
simply segmenting the tonal contour into three basic parts: lexical tone (if
any), prominence tone, and boundary tone, where each tone is (or begins
with) the opposite value of the preceding tone. The lexical tone is invariably
a single tone, and that seems to be the case also with the boundary tone,
though the issue has not been systematically studied. The prominence tone
may be single or complex. The apparent generalization here (which we
return to in Section 7) is that a prominence tone must contain an H tone,
unless the boundary tone is H% and employed for the expression of focus
(East Norwegian). At any rate, there is no need to postulate tonal
epenthesis. The more conservative view that there are no epenthetic tones
also has a restrictive effect on the typology as such, as it reduces the
variational space, and also speaks to the structural similarity
between dialects irrespective of tonal values.
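As a minimal illustration of this segmentation (our own sketch; the function and the string encoding are assumptions, not part of the proposals discussed), the OCP-style alternation can be checked mechanically once a big-accent contour is segmented into an optional lexical tone, a prominence tone, and a boundary tone:

def ocp_alternates(tones):
    # True if, at every junction, the first element of a tone differs from the
    # last element of the preceding tone (tones are strings over {'H', 'L'}).
    return all(prev[-1] != curr[0] for prev, curr in zip(tones, tones[1:]))

# Central Swedish big accent 2: lexical H, prominence LH, boundary L%.
print(ocp_alternates(["H", "LH", "L"]))   # True: no epenthesis needed
# A like-toned junction would be the kind of case that invites epenthesis.
print(ocp_alternates(["H", "HL", "L"]))   # False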
Further support for the coherence of the typology comes from the
lexical distribution of the accents, which is very stable across dialects.
Accent 2 shows up with the same set of unstressed, posttonic suffixes (Sw -
ar, -or, -are, -ing, -nad, -lig, -ig, -a, among others, and corresponding
Norwegian suffixes), a fact that points to both a morphological anchoring of
lexical tone and a shared origin.
Another shared property is the fact that the postlexical generalization of
accent in compounds is always in the direction of accent 2. No dialect that
has a tonal contrast exhibits prosodically motivated assignment of accent 1
in compounds. Most dialects, however, exhibit some prosodic assignment
of accent 2, usually at least in the core set of stem compounds containing a
clash (South Swedish; Strandberg 2014) and in recently formed formal
compounds arising from initial stress insertion (protes1ˈtere > 2ˈprotesˌtere
‘to protest’; e.g., East Norwegian; Kristoffersen 2000: 165). Often, accent 2
is broadly generalized to any form containing two stresses (e.g., Central
Swedish, Dala, Göta, North Norwegian).135
Finally, the geographic contiguity of the tonal systems obviously points to a common historical core. Although some developments have taken place
– yielding the variation we study as a typology here – there are no
indications of a tonal variety that is radically different from that of any
other dialect. With Danish stød, however, there is reason to believe that a
radical change has taken place, as some basic conditions are different there:
beside the different phonetic exponent, there is the sonority requirement in
stressed syllables (“stød basis”), the possibility of more than one stød in a
single compound, and the largely (but not completely) inverse distribution
of stød compared with accent 2.136 The historical affinity is not in question,
but neither is the typological distance.
As can be seen, the lexical tone varies between H* and L*. All tones in the
first column are marked with a star to indicate that the (post)lexical tone is
invariably associated. We shall now make the comparison between a CSw
variety (Stockholm) and a Dala variety (Norberg). These varieties differ
primarily with respect to the value of the lexical tone. Grammatically, they
are otherwise the same, i.e., with regard to tonal associations and spreading
pattern. We use the word sommarledigheten ‘the summer holidays’ as our
sample word. This word contains three stresses and will have accent 2 in all
dialects, either by virtue of a prosodic compound rule, or by virtue of the
first morpheme sommar ‘summer’ being lexically accent 2. In Figure 10.2,
autosegmental representations are provided for the two dialects compared, above and below the sample word, together with stylized contours.
We look first at two examples from CSw. The previous panels that we have
looked at were all taken from this variety. The compounds in (10) and (11)
both contain three minimal prosodic words each, hence three stresses. The
(post)lexical tone associates to the first and the prominence tone associates
to the last.
These CSw contours should now be compared with Dala. The two contours
are from speakers from Norberg and Dala-Järna.
In the analysis of both of these dialects the prominence tone spreads back
from the last stressed syllable to the (post)lexical tone. In the CSw variety
this means a tonal floor from the last stress back to right after the
(post)lexical H*. In Dala varieties this means a high plateau from the
prominence H* in the last stress back to the (post)lexical L* on the primary
stress.
The value of the (post)lexical tone is readily identified from panels like
these, and we maintain that this tone is also a phonologically relevant
category, therefore useful for typological concerns. As we saw in (5), the
lexical tone is quite easy to isolate, by simple comparison of accent 1 and
accent 2 forms, where the accent 2 forms will contain an extra tone before
the intonational tones that are common to both accents.138
Let us look at another example of a minimal contrast on tonal value.
This time we compare East Norwegian (ENw) with South Swedish (SSw),
shown in Figure 10.3. These dialects have parallel association patterns, but
each tone contrasts. H and L tones do not fulfill functional purposes in
exactly the same way, there being a bias for H tones to serve as markers of
prominence. In Norwegian it has long been maintained that the boundary
tone also carries the function of focus (Fretheim & Nilsen 1989;
Kristoffersen 2000: 278).139 This does not affect the tonal grammar,
however, which is parallel. Panels exemplifying these dialects are given in
(14) and (15), below for ENw, and in (20) for SSw.
In this example we can clearly see how the second H tone occurs relatively
close to the left edge, certainly not at the last stress. It might look as if it
were associated to the second stress, but that is not in fact the case. This is
clear from the next example where the peak is in an unstressed syllable.
These contours should be compared with the ones given for Central
Swedish above, such as (7), (10), and (11), where a secondary association at
the rightmost stress is clearly in evidence in a dialect with the same tonal
sequence.
Our third comparison regarding tonal association involves Dala and
South Swedish (Skåne). These varieties have the same tonal make-up but
differ regarding secondary association, as represented in Figure 10.6.
We have already seen examples of Dala (items (12) and (13), above). From
Skåne we have the following utterance, which contains two compounds
with the accent 2 contour.
The tonal accent 2 contour is L*HL, where the (post)lexical tone is L*. We
have registered the peak and the immediate drop as an HL prominence tone.
The status of the L tonal segment is somewhat unclear, e.g., whether it is
part of the prominence tone, or if it represents some kind of default. Further
research on SSw varieties is needed to clarify the issue.
Comparing with CSw item (7) and Göta 2 item (17) above, we see clearly
that the floating H of L*H is timed later in Göta 1 compared with CSw, but
earlier than in Göta 2, where the H looks like it is also the boundary tone.
The invited conclusion to draw from this is that these variant realizations
represent possible developmental stages. Moving one dialect to the west of
WSw, i.e. into ENw, the typological difference is simply the absence of a
secondary association, as described in Section 5.3.
The (post)lexical tone is in the primary stressed syllable, but the following
L occurs before the last stressed syllable, in contrast with Stockholm where
it is normally associated in the stressed syllable. Instead, the H tone of LH
is associated in Eskilstuna. The reason for this behaviour is to be found at
the very end of the contour, where the L% boundary tone is pulled firmly
into the stressed syllable, leaving little room for all of the prominence tone
L*H. In this situation, the prominence contour is pushed back, the H
segment associating (H*) while the preceding L becomes a leading tone to
H*. The sharp fall from H* to L% in the last stressed syllable causes
phonetic stød in this case, as can in fact be seen in the contour in (30).
Many other recordings exhibit creaky voice in the corresponding place. At
the segmental level there is some centralizing diphthongization (Bleckert
1987).
The association of an H* at the last stress of compounds is in fact a step
toward the system found in Dala (cf. Section 5.1). And a further step in that
direction is found on the northwestern side of the city of Eskilstuna.145 In
this variety, which we will call Eskilstuna-west, compounds no longer
exhibit an H* (post)lexical tone on the first stressed syllable. Instead, the (post)lexical tone is now L*. This contrast is schematically illustrated in Figure 10.11, still in comparison with CSw, though a comparison with Dala
would be equally warranted.
6.3 Observations
The spreading and interpolation behaviours provide evidence of alignment
tendencies to the left and to the right (Riad 1998b). Any dialect that has two
association points in compounds, and which hence has accent 2 as a rule in
compounds and other forms with two stresses, shows association to the first and the last stressed morpheme. This should be interpreted as a solution to two competing desiderata on the part of the tones. Leftward alignment is primary
and all dialects have the main stress as an obligatory association point,
whereas rightward alignment comes to the surface only when a secondary
association point is sought out. Given a secondary association, dialects may
then differ in how the transition between the two association points is
realized. Backwards spreading is a sign of the grammaticalized “desire” of
the prominence tone to be at the left edge, simultaneously with being
associated at the right edge. This leads to a tonal floor (Central Swedish,
Göta) or a plateau (Dala, Narvik). Interpolation occurs when no backwards spreading takes place, which could be interpreted as the
action of a constraint against (the markedness of) spreading (Riad 1998b).
In dialects that allow only one association point, it is invariably the first
TBU which receives a tonal mark. In this way, we can account for the fact
that there is no systematic association to any medial stresses in long
compounds.
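The spreading vs. interpolation parameter can be sketched as follows (our own illustration; the tone labels follow the schematic descriptions above, everything else is an assumption): the syllables between the two association points are either filled by backwards spreading of the first element of the prominence tone or left unspecified for interpolation.

def transition(first_tone, prominence_tone, n_medial, spreading):
    # Tonal specification of the medial syllables between the (post)lexical
    # tone on the first stress and the prominence tone on the last stress.
    # With spreading, the first element of the prominence tone is copied
    # backwards (a floor or plateau); without it, the medial syllables stay
    # unspecified and pitch is interpolated between the two anchors.
    fill = prominence_tone[0] if spreading else None
    return [first_tone] + [fill] * n_medial + [prominence_tone]

print(transition("H*", "L*H", 3, spreading=True))   # CSw/Göta-type tonal floor
print(transition("L*", "H*", 3, spreading=True))    # Dala/Narvik-type plateau
print(transition("H*", "L*H", 3, spreading=False))  # interpolation instead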
The five types in the upper half of Figure 10.13 have a (post)lexical H*
tone and the three types in the lower half have a (post)lexical L*. In all
varieties, the remaining tones of the big accent include an H tone. A good
way of illustrating the generality of this H tone is to look at accent 1. Bruce
(2005) does not discuss the presence of a separate focus gesture in accent 1,
but there is an indirect reference to it in Bruce (2003). Clearly though, the
null hypothesis must be that the focus gesture generalizes across big accent
in both accent 1 and accent 2. In Figure 10.14 the (post)lexical part of the
big accent 2 contour has been shaded over, leaving the big accent 1 contour
unshaded. As can be readily seen, each dialect has a remaining H tone that
can be engaged for prominence purposes.
Figure 10.13: Big accent 2 in long compounds in schematic
representation.
Figure 10.14: Removing the (post)lexical tone, leaving the contour of
big accent 1. All instances of accent 1 contain an H tone.
However, we only find four alternating tones if the first tone, the
(post)lexical tone, is H. The absence of LHLH now follows from the
generalization that an H tone must be regularly available for prominence
purposes in the big accent. In fact, this narrows the generalization to “exactly
one H tone”.
Our conclusion is that “separate tonal gesture” vs. “enhancement” are
surface observations that depend on other things, rather than proper
typological features. A prediction for diachrony would be that we should
not expect to see tonal behaviours that appear to preserve the separate tonal
gesture as such (e.g., in the transition from CSw to Dala, via Eskilstuna).
But we should expect to see preservation of an H tone in the big accent
contour. As shown by Myrberg (2013, 2016), enhancement is related to
function and information structural status in Central Swedish. This is likely
to generalize to many, perhaps all, dialects, and would thereby not be a
structural typological property either.
References
Arvaniti, Amalia. 2002. The intonation of yes-no questions in Greek. In M. Makri-
Tsilipakou (ed.), Selected papers on theoretical and applied linguistics, 71–83.
Thessaloniki, Department of Theoretical and Applied Linguistics, School of
English, Aristotle University.
Basbøll, Hans. 2005. The phonology of Danish. Oxford: Oxford University Press.
Bleckert, Lars. 1987. Centralsvensk diftongering som satsfonetiskt problem
(Skrifter utgivna av institutionen för nordiska språk vid Uppsala universitet 21).
Uppsala.
Bruce, Gösta. 1977. Swedish word accents in sentence perspective (Travaux de
l’institut de linguistique de Lund 12). Lund: CWK Gleerup.
Bruce, Gösta. 1982. Reglerna för slutledsbetoning i sammansatta ord i
nordsvenskan. In Claes-Christian Elert & Sigurd Fries (eds.), Nordsvenska,
123–148. Umeå University.
Bruce, Gösta. 1987. How floating is focal accent? In Kirsten Gregersen & Hans
Basbøll (eds.), Nordic prosody 4, 41–49. Odense: Odense University Press.
Bruce, Gösta. 1998. Allmän och svensk prosodi (Praktisk Lingvistik 16). Lund
University.
Bruce, Gösta. 2003. Late pitch peaks in West Swedish. Proceedings of ICPhS 15,
Barcelona, 245–248.
Bruce, Gösta. 2004. An intonational typology of Swedish. Speech Prosody 2004,
Nara, Japan, 175–178.
Bruce, Gösta. 2005. Intonational prominence in Swedish revisited. In Sun-Ah Jun
(ed.), Prosodic typology: The phonology of intonation and phrasing, 410–429.
Oxford: Oxford University Press.
Bruce, Gösta. 2007. Components of a prosodic typology of Swedish intonation. In
Riad & Gussenhoven (eds.) 2007, 113–146.
Bruce, Gösta. 2010. Vår fonetiska geografi: Om svenskans accenter, melodi och
uttal. Lund: Studentlitteratur.
Bruce, Gösta & Eva Gårding. 1978. A prosodic typology for Swedish dialects. In
Gårding et al. (eds.), 219–228.
Bruce, Gösta, Olle Engstrand, & Anders Eriksson. 1998. De svenska dialekternas
fonetik och fonologi år 2000 (SweDia 2000) – en projektbeskrivning.
Folkmålsstudier 39. 33–54.
Bye, Patrik. 2004. Evolutionary typology and Scandinavian pitch accent.
Manuscript, University of Tromsø.
Dalton, Martha & Ailbhe Ní Chasaide. 2007. Melodic alignment and micro-dialect
variation in Connemara Irish. In Riad & Gussenhoven (eds.), 293–316.
Danell, Gideon. 1937. Svensk ljudlära. 4th edition. Stockholm: Svenska
bokförlaget, Norstedt & söner.
Elert, Claes-Christian. 1972. Tonality in Swedish: Rules and a list of minimal pairs.
In Evelyn S. Firchow, Kaaren Grimstad, Nils Hasselmo, & Wayne O’Neil
(eds.), Studies for Einar Haugen, 151–173. The Hague & Paris: Mouton.
Engstrand, Olle. 1995. Phonetic interpretation of the word accent contrast in
Swedish. Phonetica 52. 171–179.
Engstrand, Olle. 1997. Phonetic interpretation of the word accent contrast in
Swedish: Evidence from spontaneous speech. Phonetica 54. 61–75.
Fant, Gunnar & Anita Kruckenberg. 2008. Multi-level analysis and synthesis of
prosody with applications to Swedish. Manuscript, Kungliga Tekniska
Högskolan.
Fintoft, Knut. 1970. Acoustical analysis and perception of tonemes in some
Norwegian dialects. Oslo: Universitetsforlaget.
Fintoft, Knut, P. E. Mjaavatn, E. Møllergård, & B. Ulseth. 1978. Toneme patterns in
Norwegian dialects. In Gårding et al. (eds.), 197–206.
Fretheim, Thorstein & Randi Alice Nilsen. 1989. Terminal rise and rise-fall tunes in
East Norwegian intonation. Nordic Journal of Linguistics 12. 155–182.
Gårding, Eva. 1977. The Scandinavian word accents (Travaux de l’institut de
linguistique de Lund 11). Lund: CWK Gleerup.
Gårding, Eva & Per Lindblad. 1973. Constancy and variation in Swedish word
accent patterns. Working Papers 7. 36–110. Dept. of Linguistics, Lund
University.
Gårding, Eva, Gösta Bruce, & Robert Bannert (eds.). 1978. Nordic prosody:
Papers from a symposium (Travaux de l’institut de linguistique de Lund 13).
Lund University.
Grice, Martine, D. Robert Ladd, & Amalia Arvaniti. 2000. On the place of phrase
accents in intonational phonology. Phonology 17. 143–185.
Gussenhoven, Carlos. 2000. The lexical tone contrast of Roermond Dutch in
Optimality Theory. In Merle Horne (ed.), Prosody: Theory and experiment,
129–167. Dordrecht: Kluwer.
Gussenhoven, Carlos. 2007. Intonation. In Paul de Lacy (ed.), The Cambridge
handbook of phonology, 253–280. Cambridge: Cambridge University Press.
Gussenhoven, Carlos. 2012. Asymmetries in the intonation system of Maastricht
Limburgish. Phonology 29. 39–79.
Gussenhoven, Carlos & Peter van der Vliet. 1999. The phonology of tone and
intonation in the Dutch dialect of Venlo. Journal of Linguistics 35. 99–135.
Gussenhoven, Carlos & Frank van den Beuken. 2012. Contrasting the high rise
and the low rise intonations in a dialect with the Central Franconian tone. The
Linguistic Review 29. 75–107.
Hognestad, Jan K. 2012. Tonelagsvariasjon i norsk: Synkrone og diakrone
aspekter, med særlig fokus på vestnorsk. PhD dissertation, University of
Agder.
House, David. 2002. Intonational and visual cues in the perception of interrogative
mode in Swedish. Proceedings of ICSLP 2002, 1957–1960. Denver,
Colorado.
House, David. 2004. Final rises and Swedish question intonation. Proceedings of
Fonetik 2004, 56–59. Stockholm University.
Hualde, José Ignacio & Tomas Riad. 2014. Word accent and intonation in Baltic. In
N. Campbell, D. Gibbon, & D. Hirst (eds.), Speech Prosody 7, 669–671.
Dublin.
Kallstenius, Gottfrid. 1902. Värmländska Bärgslagsmålets ljudlära. Stockholm:
Norstedt & Söner.
Kock, Axel. 1878. Språkhistoriska undersökningar om svensk akcent. Lund:
Gleerup.
Köhnlein, Björn. 2011. Rule reversal revisited: Synchrony and diachrony of tone
and prosodic structure in the Franconian dialect of Arzbach. PhD dissertation,
University of Leiden.
Köhnlein, Björn. 2016. Contrastive foot structure in Franconian tone-accent
dialects. Phonology 33. 87–123.
Kristoffersen, Gjert. 2000. The phonology of Norwegian. Oxford: Oxford University
Press.
Kristoffersen, Gjert. 2007. Dialect variation in East Norwegian tone. In Riad &
Gussenhoven (eds.), 91–111.
Lahiri, Aditi, Allison Wetterlin, & Elisabet Jönsson-Steiner. 2005. Lexical
specification of tone in North Germanic. Nordic Journal of Linguistics 28. 61–
96.
Leira, Vigleik. 1998. Tonempar i bokmål. Norskrift 95. 49–86.
Lorentz, Ove. 1995. Tonal prominence and alignment. Phonology at Santa Cruz 4.
39–56.
Lorentz, Ove. 2008. Tonelagsbasis i norsk. Maal og Minne 1. 50–68.
Malmberg, Bertil. 1970. Lärobok i fonetik. Lund: Gleerup.
Meyer, Ernst A. 1937. Die Intonation im Schwedischen I: Die Sveamundarten.
Helsingfors: Fritzes bokförlags AB. Mercators tryckeri.
Meyer, Ernst A. 1954. Die Intonation im Schwedischen II: Die norrländischen
Mundarten (Stockholm Studies in Scandinavian Philology 11). Uppsala:
Almqvist & Wiksell.
Mjaavatn, Per Egil. 1978. Isoglosses of toneme categories compared with
isoglosses of traditional dialect geography. In Gårding et al. (eds.), 207–216.
Myrberg, Sara. 2010. The intonational phonology of Stockholm Swedish
(Stockholm Studies in Scandinavian Philology 53). Stockholm University.
Myrberg, Sara. 2013. Focus type effects on focal accents and boundary tones.
Proceedings of Fonetik 2013, 53–56. Linköping University.
Myrberg, Sara. 2016. Second occurrence focus in Stockholm Swedish.
Manuscript, Stockholm University.
Myrberg, Sara & Tomas Riad. 2015. The prosodic hierarchy of Swedish. Nordic
Journal of Linguistics 38. 115–147.
Myrberg, Sara & Tomas Riad. 2016. On the expression of focus in the metrical grid
and in the prosodic hierarchy. In Caroline Féry & Shinichiro Ishihara (eds.),
Oxford handbook of information structure, 441–462. Oxford: Oxford University
Press.
Naydenov, Vladimir. 2011. Issues in the phonology of the tonal accents in Swedish
and their Norwegian and Danish counterparts. PhD dissertation, University of
Sofia.
Nordberg, Bengt. 1970. Språket som socialt kännetecken: Rapport om ett
språksociologiskt försök. Uppsala, FUMS report 7.
Nordberg, Bengt. 1972. Morfologiska variationsmönster i ett centralsvenskt
stadsspråk. Uppsala, FUMS report 23.
Nordberg, Bengt. 1985. Det mångskiftande språket: Om variation i nusvenskan.
Malmö: Liber Förlag.
Nyström, Staffan. 1997. Grav accent i östra Svealands folkmål. In Maj
Reinhammar (ed.), Nordiska dialektstudier: Föredrag vid femte nordiska
dialektkonferensen, 215–222 (Skrifter utgivna av Språk- och
folkminnesinstitutet genom dialektenheten i Uppsala. Ser. A:27). Uppsala:
Språk- och folkminnesinstitutet.
Öhman, Sven. 1966. Generativa regler för det svenska verbets fonologi och
prosodi. In Sture Allén (ed.), Svenskans beskrivning 3, Göteborg.
Öhman, Sven. 1967. Word and sentence intonation: A quantitative model. Speech
Transmission Laboratory Quarterly Progress and Status Report (STL-QPSR)
2–3. 20–54. Dept. of Speech Transmission, Royal Institute of Technology,
Stockholm.
Peters, Jörg. 2006. The dialect of Hasselt. Journal of the International Phonetic
Association 36. 117–124.
Peters, Jörg. 2007. A bitonal lexical pitch accent in the Limburgian dialect of
Borgloon. In Riad & Gussenhoven (eds.), 167–198.
Riad, Tomas. 1998a. The origin of Scandinavian tone accents. Diachronica 15.
63–98.
Riad, Tomas. 1998b. Towards a Scandinavian accent typology. In Wolfgang
Kehrein & Richard Wiese (eds.), Phonology and morphology of the Germanic
languages, 77–109. Tübingen: Niemeyer.
Riad, Tomas. 2000a. The origin of Danish stød. In Aditi Lahiri (ed.), Analogy,
levelling and markedness: Principles of change in phonology and morphology,
261–300. Berlin: Mouton de Gruyter.
Riad, Tomas. 2000b. Stöten som aldrig blev av – generaliserad accent 2 i Östra
Mälardalen. Folkmålsstudier 39. 319–344. Helsingfors.
Riad, Tomas. 2006. Scandinavian accent typology. Sprachtypologie und
Universalienforschung 59. 36–55.
Riad, Tomas. 2009a. The morphological status of accent 2 in North Germanic
simplex forms. In Martti Vainio, Reijo Aulanko, & Olli Aaltonen (eds.), Nordic
prosody: Proceedings of the 10th Conference, Helsinki 2008, 205–216.
Frankfurt am Main: Peter Lang.
Riad, Tomas. 2009b. Eskilstuna as the tonal key to Danish. Proceedings Fonetik
2009, 12–17. Stockholm University.
Riad, Tomas. 2012. Culminativity, stress and tone accent in Central Swedish.
Lingua 122. 1352–1379.
Riad, Tomas. 2014. The phonology of Swedish. Oxford: Oxford University Press.
Riad, Tomas. 2015. Prosodin i svenskans morfologi. Stockholm: Morfem förlag.
Riad, Tomas & Carlos Gussenhoven (eds.). 2007. Tones and tunes I: Studies in
word and sentence prosody. Berlin: Mouton de Gruyter.
Riad, Tomas & My Segerup. 2008. Phonological association of tone: Phonetic
implications in West Swedish and East Norwegian. Proceedings Fonetik 2008,
93–96. Göteborg.
Ringgaard, Kristian. 1983. Review of Liberman (1982). Phonetica 40. 342–344.
Schmidt, Jürgen Erich. 1986. Die mittelfränkischen Tonakzente (Rheinische
Akzentuierung). (Mainzer Studien zur Sprach- und Volksforschung 8).
Stuttgart: Steiner.
Segerup, My. 2004. Gothenburg Swedish word accents: A fine distinction.
Proceedings Fonetik 2004, 28–31. Stockholm University.
Strandberg, Mathias. 2014. De sammansatta ordens accentuering i Skånemålen.
PhD dissertation, Uppsala University.
Teleman, Ulf. 1969. Böjningssuffixens form i svenskan. Arkiv för nordisk filologi 84.
163–208.
de Vaan, Michiel. 1999. Towards an explanation of the Franconian tone accents.
Amsterdamer Beiträge zur älteren Germanistik 51. 3–44.
Wetterlin, Allison. 2010. Tonal accents in Norwegian: Phonology, morphology and
lexical specification. Berlin: Mouton de Gruyter.
Carlos Gussenhoven
Prosodic typology meets
phonological representations
1 Introduction
Like Christmas presents, phonological grammars are best typologized under
three structural headings. First, there are the contents, the actual present, or
equivalently, the phonological features and segments that make up the
segmental strings. Next, there are the containers, the box, the wrapping
paper, and the ribbon, equivalent to a suite of constituents in the prosodic
hierarchy (Selkirk 1981; Nespor & Vogel 1986; Selkirk & Lee 2015).
Finally, there are ways of anchoring the present inside the box, airbags or
polystyrene beads perhaps, equivalent to phonological alignment and
segmental association (Goldsmith 1976; McCarthy & Prince 1993). The
purpose of this article is, first, to point out that tone, stress, and accent are
not easily typologized within a single taxonomy, since they are
instantiations of each of these three very disparate aspects of grammars,
respectively. The second goal is to draw attention to two aspects in prosodic
descriptions that do not fit this model. One is the burgeoning of
representations of prominence above the word level. The second is the
imbalance in the employment of phonological alignment and phonological
association, where phonological alignment has been underused and
phonological association overused, to the detriment of our understanding of
phonological representations of intonation.
2 Segments
In this section, I defend two positions. First, all languages have
phonological segments (Section 2.1), and second, there is no systematic
phonetic representation in the sense of Chomsky & Halle (1968) (Section 2.2). A claim that there are no phonological segments is refuted in Section
2.3. In Section 2.4, I point out that the “suprasegmental” view of
phonological structure has blurred the segmental status of tones, with a
detrimental effect on typological discussions of word prosody.
3 Prosodic constituents
In this section, I argue that in all languages segments are contained in a
hierarchically arranged set of otherwise empty constituents, with higher
ones encompassing lower ones (Selkirk 1981; Nespor & Vogel 1986;
Selkirk & Lee 2015). Prosodic constituents constrain the distribution of
segments. For instance, an English sequence like /pk/ can only occur across
a syllable boundary, as in napkin, because inside the syllable there is no legitimate way in which the two consonants can occur in that order. Similarly, prosodic constituents
may forbid specific mixes of segments. Pharyngealized and contrastively
plain consonants do not occur in the same syllable in Zwara Berber
(Gussenhoven 2017), and nasal and non-nasal segments may not occur in
the same phonological word in some languages (Walker 2011). They may
also restrict their number, like the single voiced obstruent in Japanese words
(Itô & Mester 1986) and the single glottalized consonant in the Indo-
European syllable (Hopper 1973; Gamkrelidze & Ivanov 1973).
The specific suite of constituents is language-specific. First, constituents
with identical ranks may take different forms in different languages, and
second, languages do not have identical suites of prosodic constituents. The
first point in particular is true for syllables and feet, as discussed in Sections
3.1 and 3.2. The second point would appear to be true of feet, as discussed
in Section 3.3, phonological words (Schiering et al. 2010), and accentual
phrases. The situation becomes more varied at the higher end of the
hierarchy. For instance, only six out of the 14 chapters in Jun (2014) with
descriptions of (groups of) languages postulate an accentual phrase, a
constituent somewhat larger than the word (Basque, Bengali, Dalabon,
Georgian, Japanese, and Mongolian). Section 3.4 emphasizes that in the
interest of conceptual clarity, stress is to be equated with word stress, i.e.,
foot structure. Headedness of prosodic constituents higher than the foot and the Pword is less obvious, and the ways in which higher-level prominence has been recruited to explain phonological phenomena have not been successful in improving our understanding of prosodic phenomena.
Table 11.2: Indonesian and Japanese syllable structures with disyllabic examples
/salak/ ‘Salacca zalacca’ and /nikoN/ ‘brand name’.
3.1 Syllables
Mainstream analyses of syllable structure have assumed a binary division
into subconstituents.151 One such analysis is the majority onset-rhyme type,
as exemplified for Indonesian, and another the Japanese type, where the
first cut is between the onset plus the vowel in the first mora vs. the
segment in the second mora (Kubozono 1999). Table 11.2 illustrates these,
together with disyllabic examples.
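For concreteness, the two parses in Table 11.2 can be written out as nested structures (a sketch under the division described in the text; the bracketing itself is our own): onset vs. rhyme for the Indonesian type, onset plus first-mora vowel vs. second-mora segment for the Japanese type.

# Indonesian /salak/: each syllable is cut into onset and rhyme.
salak = [("s", "a"), ("l", "ak")]
# Japanese /nikoN/: the first cut groups the onset with the first-mora vowel,
# leaving the second-mora segment (here the moraic nasal) on its own.
nikoN = [("ni", ""), ("ko", "N")]

for word in (salak, nikoN):
    print(["".join(parts) for parts in word])   # recovers the syllables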
The difference in Table 11.2 suggests that syllables emerge from data.
At the same time, if associations are universal, so will be the phonological
constituent that provides the elements with which segments are associated,
and the syllable will therefore have to be universal too. As was clear in Section 2.3, linguists have two ways of deciding if a constituent exists. In
theoretical linguistics, their existence depends on their role in the grammar.
If no distributional generalization of any kind refers to a specific
constituent, it has no reason for figuring in the grammar of that language.
For instance, Labrune (2012) proposed that Japanese equates syllables with
moras, thus denying the existence of heavy syllables. As it happens, such evidence is available in this case: the syllable is the domain of word accent, and there is at least one syllable-based generalization about accent placement in longer loanwords (Kubozono 2011; Kawahara 2016).
In addition, unaccented word-initial syllables have high pitch if they are
long, but low pitch if they are short in Tokyo Japanese, a difference that is
hard to capture without the syllable (e.g., Pierrehumbert & Beckman 1988).
The other way is to provide behavioral evidence. For most linguists,
Kawahara’s (2016) review of durational evidence for the rhyme in the
Japanese syllable would be enough to refute Labrune’s (2012) claim.
Conversely, Hyman (2015a) shows that reference to the syllable can be
avoided altogether in Gokana, but that is the extent of the claim. A
demonstration that the language has no syllables would have to await the
negative results of phonetic and behavioral research.
3.2 Feet
The obligatory status of stress as well as its syllable-based nature (Hyman
2006) follow from the fact that feet are headed and directly dominate
syllables. That is, if prosodic constituents are obligatory and words are
parsed into feet, stress is obligatory if the language has feet. Stress thus has
no phonological substance. There will typically be a hyperarticulation of the
stressed syllable, often leading to greater duration, more even spectral
balance and relatively little undershooting of targets. Also, many languages
have developed different segmental profiles for stressed and unstressed
syllables, like English. However, there is in principle no problem with
languages that have feet which do not obviously lead to measurable
properties.
Compared to the shapes of syllables, those for the foot are more varied
still. In addition to the distinction between trochees and iambs, trochees
come in many shapes, depending on the language. Thus, there is the
syllabic trochee ([σ σ]), the even moraic trochee ([μ.μ] or [μμ]) (Hayes
1995), and the uneven moraic trochee ([μμ.μ], [μ.μ] or [μμ]), as widely
used in the analysis of Germanic languages (Zonneveld et al. 1999). And
there is the Germanic foot, which can have a disyllabic strong branch
(indicated by parentheses) and ranges across the structures [μμ.μ], [μ.μ],
[(μ.μμ) μ], [(μ.μ) μ] and [μμ], as in /wor.du/ ‘words’, /lo.fu/ ‘praises’,
/ky.niŋ.ga/ ‘king’, /we.ru.du/ ‘troops’ and /sel/ ‘hall’ (Dresher & Lahiri
1991). Like syllable structure, foot structure evidently emerges on the basis
of language input.
Finally, while we can represent (2) in terms of a metrical score and point
out structural similarities between music and language (Lerdahl &
Jackendoff 1983), it is not the case that all sentences fit at least a single
meter; neither do we need a grid or tree to establish what utterances do fit a
given meter, assuming we know the conventions. In (2), bold print is used
for stressed syllables and small capitals for accented syllables; Pwords are
in parentheses, phonological phrases in square brackets and intonational
phrases in accolades. The point is that (2) gives all the information needed
to generate a flawless, canonical utterance.
4.1 Alignment
All linguistic constituents are aligned with one or more other constituents
(McCarthy & Prince 1993). Alignments minimally amount to the
coincidence of an edge of one (prosodic or segmental) constituent with an
edge of another. One implication is that no constituent can be demanded to
make an appearance in random locations in linguistic expressions. A second
implication is that infixation is oriented towards the beginning or the end of
a constituent, as in the well-known case of Tagalog verbal um. The prefix’s
left edge aligns with the left edge of the derived verb, but because demands
on syllable structure prevent it from appearing word-initially if the base
begins with a consonant, it only ever aligns left on the surface with vowel-
initial hosts. Thus, /um/+/alis/ gives [umalis] ‘to leave’, but /um/+/sulat/
gives [sumulat] ‘to write’. In both cases, the prefix is leftmost in the word,
while respecting the minimization of onsetless syllables (Schachter &
Otanes 1972; Prince & Smolensky 1993).
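A minimal sketch of the surface pattern (our own illustration, not Schachter & Otanes’s description or the constraint evaluation of Prince & Smolensky): um is placed as far to the left as the base allows, i.e., immediately before its first vowel.

def infix_um(base):
    # Place /um/ before the first vowel of the base: as far left as onset
    # requirements allow (a sketch of the surface pattern, not an OT analysis).
    vowels = "aeiou"
    i = next(k for k, seg in enumerate(base) if seg in vowels)
    return base[:i] + "um" + base[i:]

print(infix_um("alis"))    # umalis 'to leave'
print(infix_um("sulat"))   # sumulat 'to write'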
Alignment of segments is particularly relevant for tones. If we disregard
the autosegmental behavior of vowels and consonants with respect to each
other in the phonology of Semitic languages (McCarthy 1985), vowels and
consonants typically appear in single strings in morphemes, even in quite
lengthy ones. Like plain-clothes policemen at a fancy ball or clowns in a
crowded village square, tones tend to be sparser on their tier compared to
vowels and consonants on theirs, making their alignments more
conspicuous. Examples of the less common situation of more tones than
vowels are given in (3a) and (3b). The lax vowel in the vocative chant in
(3a) is given as long, following the observation in Hayes & Lahiri (1991a)
that lax vowels neutralize with long vowels under this intonation contour.
Example (3b) has a lax vowel spanning four tones in an intonation contour.
In (3c), finally, the more typical situation is given of tones spanning
stretches of vowels and consonants. The L-tone spans the stretch between
the preceding H* and following H%, approximately coinciding with /əv
ðəʊz pʊ/.
4.2 Association
Association is a temporal integration of segments (or features) with
either rhymes or moras, the tone bearing units (TBUs) (Howie 1974;
Hyman 1985). The motivation for TBUs is (i) the relatively constant
phonetic timing of a tone’s target in relation to a location in the TBU and
(ii) the implication that strings of tones will be regularly distributed over
strings of TBUs. For the first aspect, Pierrehumbert (1980: 44) observed
that the target of the accented tone (T*) of English was timed in a fairly
constant fashion relative to an accented syllable, but that the following
“phrase accent”, by which she meant the tone after T*, showed “a fair
amount of variation” in a location near the end of the nuclear word. This
point is shown graphically in (4), with transcription following e.g.
Gussenhoven (2016).
The timing of L is related to its distance from H* and is not governed by the
syllable structure of the post-accentual stretch, as shown for English by
Barnes et al. (2010). To a limited extent, the timing and scaling of
associated tones are affected by contextual factors, like tone crowding and
the nature of adjacent tones. Thus, in (4a), the target of H* occurs earlier
than in (4b, c), because it needs to make room for the targets of the
following L-tones. In (4b, c), the distance of H* to L is greater than that in
(4a), because despite accommodating the L-tone by being earlier, the target
of H* is close enough to the final boundary for LL% to be squeezed up to it.
Observe that the graphs are distorted because they are projected onto the
orthographic examples; an impression of their utterance durations is
provided by the horizontal bars.
The second motivation for TBUs is the way the strings of tones are
distributed over them, which figured emphatically in work by Leben (1970,
1973) and Goldsmith (1976). This is shown, for instance, by the different
distributions of LH in Mende words like /lèlèmá/ ‘praying mantis’ and
/ndàvúlá/ ‘sling’, where L is associated with the first two syllables in the
first word, but only with the first in the second word, other syllables having
H. Unlike the situation illustrated in (3), the synchronization of the target of
L with its syllables is of the same order of precision as that between H and
its syllables (see also Beckman & Pierrehumbert 1986: 281). That is, the
Mende contrast cannot be reproduced in (4).
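The classical association conventions behind this point can be sketched as follows (our own illustration of a Leben/Goldsmith-style default procedure, not an analysis of Mende): tones link to TBUs one-to-one from left to right, leftover tones dock on the final TBU, and a leftover final tone spreads over remaining TBUs. For a trisyllabic word with the melody LH this yields the L.H.H pattern of /ndàvúlá/; the L.L.H pattern of /lèlèmá/ instantiates a different distribution of the same melody, which is why the association itself, and not just the melody, belongs in the representation.

def associate(tones, tbus):
    # One-to-one left-to-right association; leftover tones dock on the last
    # TBU, and the final tone spreads over leftover TBUs (default conventions
    # only; lexically specified linkings are not modelled here).
    links = [[] for _ in tbus]
    for j, tone in enumerate(tones):
        links[min(j, len(tbus) - 1)].append(tone)
    for i in range(len(tones), len(tbus)):
        links[i].append(tones[-1])
    return list(zip(tbus, links))

print(associate(list("LH"), ["nda", "vu", "la"]))
# [('nda', ['L']), ('vu', ['H']), ('la', ['H'])] -- the /ndàvúlá/ pattern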
The two tones in the Tokyo Japanese pitch accent H*L associate with the
first mora in the accented syllable and the next mora, if there is one,
respectively (Pierrehumbert & Beckman 1988). In (7b) this is possible only
for H*, there being no second mora available for L. Each of the members of
this minimal pair now consists of a sequence of two H-tones, Hα in (7a) and
H* in (7b), where the second is the floating interrogative H%. Since the
boxed tones in (7b) are deleted for lack of a TBU, the difference between
(7a) and (7b) is that Hα (7a) left-aligns with the left edge of the accentual
phrase, while H* in (7b) left-aligns with the left edge of the rhyme of the
accented syllable, as expressed by the diacritics α and *, respectively. For
those speakers who maintain this contrast,152 the pitch movement in (7a) is
lower than that in (7b). Thus, while these forms have identical phonological
tones, LHH, in which the first H is associated with the one available mora,
the alignments of that first H are not identical, leading to a phonetic
distinction (Gussenhoven 2004: 187). A similar case was earlier reported by
Hayes & Lahiri (1991b), who noted that the IP-final string LHL in Bengali
had a systematically higher and later f0 peak when H is aligned with the IP,
in which case it signals a question (L* HɩLφ), than when it aligns with the
phonological phrase, in which case it signals a narrow-focus declarative (L*
HφLφ).
A second sense in which alignment is contrastive arises from OT, which
holds that alignment constraints are ranked amongst each other. When two
tones with different alignments compete for the same position, the order
they appear in will depend on their ranking. In (8a), the right-edge
alignment of H% to the right edge of the intonation phrase is ranked above
a constraint that aligns the right edge of a lexical H to the right edge of the
syllable, a situation found in Venlo Limburgish interrogatives (Gussenhoven & van der Vliet 1999).
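The effect of such a ranking can be illustrated with a toy evaluation (a generic sketch under our own encoding of candidates and constraints, not the Venlo analysis itself, and abstracting away from the different domains of the two constraints): with right-alignment of H% ranked above right-alignment of the lexical H, the boundary tone surfaces outermost.

def right_misalignment(candidate, tone):
    # Number of tones following `tone`: a crude stand-in for violations of a
    # right-edge alignment constraint for that tone.
    return len(candidate) - 1 - candidate.index(tone)

def evaluate(candidates, ranking):
    # Winner = candidate with the lexicographically best violation profile
    # for the ranked right-alignment constraints.
    return min(candidates, key=lambda c: [right_misalignment(c, t) for t in ranking])

candidates = [("Hlex", "H%"), ("H%", "Hlex")]
print(evaluate(candidates, ["H%", "Hlex"]))   # ('Hlex', 'H%'): H% outermost
print(evaluate(candidates, ["Hlex", "H%"]))   # ('H%', 'Hlex') under the reverse ranking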
For cases like (11), association to a higher node looks like a complicated
way of describing alignment. Describing alignment as association in the
specific way in which Pierrehumbert & Beckman (1988: 154) interpret this
also leads to the false prediction that distinctions like (8a, b) cannot exist.
Observe that the nodes to which tones associate do not form a single tier in
the way strings of moras or rhymes do. As a result, depending on whether F
igure 11.3 is a two-dimensional object, as it appears on the page, or a three-
dimensional object, in which node ɩ does not lie in the same plane as node
α, for instance, the interpretation of the NO CROSSING CONSTRAINT (Goldsmith 1976) varies. Pierrehumbert & Beckman (1988: 154) in effect
take the two-dimensional interpretation when proposing that boundary
tones are sequenced so as to mirror their rank, with the tone at the highest
edge occurring outermost in the tone string.
The resulting diffuseness of the meaning of association led to additional
interpretations. For instance, Prieto et al. (2005) proposed that tones might
associate with edges of these constituents by way of secondary association,
doubling down on the ambiguity between association and alignment. Ladd
(2004, 2006) instead argued that phonetic implementation rules should be
available for fine-tuning the synchronization of tone targets with the
syllabic, moraic, and CV-segmental structure. The concept of tone-to-node
association further gave rise to the notational convention of showing the
boundary tones as linked to the boundary bracket by means of an
association line (Hayes & Lahiri 1991b). I blithely took over that practice in
my (2004) book, but only up to Chapter 9. When formulating the
optimality-theoretic descriptions of Northern Bizkaian Basque and
Japanese, the meaning of constraints demanding association became
intractable, for which reason I reverted to the original notation of
Pierrehumbert (1980), announcing the change in Section 8.4 (p. 155).
Extensions of tone-to-TBU association at the tonal end were considered
and rejected in Arvaniti et al. (2000), who showed that the prenuclear rise in
Athenian Greek has its beginning synchronized with the end of the pre-
accentual vowel and its ending at the beginning of the post-tonic vowel,
thus embracing the consonants around the vowel of the accented syllable.
Because neither tone therefore had a privileged temporal relation with the
accented syllable, they briefly considered and rejected the option of
labelling the LH-pitch accent as accented, rather than either L or H, in the
spirit of a proposal of a dominating tonal node for the H*L pitch accent of
Japanese by Pierrehumbert & Beckman (1988). Arvaniti et al. (2000)
argued that this move would undermine the tenet of autosegmental
phonology which derives f0 movements from interpolations of level targets.
Within the terms of this contribution, the only option is to assign a star to
one of the tones in the pitch accent. For Japanese, this is clearly H, but that
decision is harder in the Greek case.153
The difference between alignment and association is in principle
applicable to other segments, too. An association difference may well account for the phonetic behavior of “impure s” in Italian. As shown by Hermes et al. (2013), initial /s/ in onset clusters, as in stella ‘star’, is neither a separate syllable nor part of the onset, being durationally independent in a way that /b/ in brilla ‘is shining’ is not. Since association is potentially
contrastive for all types of segments, the representation of “impure s” may
lack an association to the onset, making /s/ in stella a word-initial floating
consonant. In spirit, this is what Gierut (1999) and Barlow (2001) intended
to achieve when characterizing impure /s/ as extrasyllabic. To be sure,
despite a similarity in notation, association lines and lines indicating tree
structures of prosodic constituents are distinct concepts. Absence of an
association line neither implies non-parsing nor a parsing outside the
syllable, as suggested by the term “extrasyllabic”. That is, impure /s/ is just
as much part of its syllable as a floating boundary tone is part of the
constituent it aligns with. While consonants associate with onsets (or
syllables, if no onset node exists, e.g., Hayes 1995) and moras, syllables do
not ASSOCIATE with feet, but are dominated by them, or contained within
them, in the metaphor of Section 1. A conceptual blurring of association
lines and lines in tree structures may wreak havoc with notions like
“spreading” and “improper bracketing”. Segments can spread, i.e., associate
with more than one of their segment-bearing units, while prosodic
constituents ideally behave according to the Strict Layer Hypothesis. The
TBUs for tones are moras or rhymes, VBUs for vowels are moras, while
CBUs are onsets and moras. Just as vowels and tones may show multiple
associations, so may consonants, which as a result may well be
ambisyllabic. To quote Kessler (1998):
A major objection is that [ambisyllabicity] violates proper bracketing, or specifically,
the prosodic hierarchy, which teaches that elements at one prosodic level are properly
included in a single parent construct (Selkirk 1982: 355). To this one may well ask why
segments should be considered part of the prosodic hierarchy; or why geminates, which
autosegmental phonologists agree are typically single melodic constituents shared by
two syllables, are not an equally big problem.
4.6 Accent
Accent is neither phonological content, like a segment, nor is it a prosodic
constituent, like a foot. It amounts to a dual marking of a TBU and a tone
(or other segment, but there have been no proposals of accented consonants
or vowels), an instruction that the tone is to be associated with the TBU
(Goldsmith 1976: 47, 87). When a melody appears in different locations in
different words or expressions, it stands to reason to separate off the melody
and mark its location as an address label on the TBU concerned in each
word, i.e., to mark accent. Japanese and English are obvious examples.
Gomez-Imbert & Kenstowicz (2000) present the instructive case of
Barasana, which has the tone structures in (12).
These examples do not obviously suggest that the right analysis is one that
assumes two melodies, H and HL, plus a lexical accent mark on the first or
second mora, Barasana’s TBU, rather than the four melodies H, HL, LH,
and LHL of (12). One argument for the accentual analysis is based on the
way pronouns impose their tone pattern on the following noun, whose
original tones are deleted.
In (13a), the HL of [ínà] is copied onto [jáí], which loses its H, while in
(13b) the doubly linked H of [mání] ‘my’ is copied onto [mínì] ‘pet’, which
loses its HL melody. Unexpectedly, however, the LH melody of copies
as H onto [mínì], suggesting that the initial L does not count for the tone
copy rule. This suspicion is confirmed in (13d), where the initial L of
[wì˗hí˗bò] is preserved when the H of [mání] is copied to create [wì˗hí˗bó], as opposed to an ill-formed variant in which that initial L is overwritten. Apparently, an L on the
first mora is exempt from being copied as well as from being overwritten.
Example (13e) confirms this, because [ínà] only manages to copy its HL on
the second and third moras of [bàbárá̃ ], which loses its H on those moras.
Finally, the H of [jì˗˗í] is seen on the final two TBUs of [wì˗hí˗bó].
The words in (12) now look like (14), where * over a vowel indicates
the accented mora. The empty first moras in (14c, d) receive a default L.
This analysis is supported by other processes, like the compound rule,
which deletes the accent on the second constituent, causing the tones of the
first constituent to spread through it, this time of course ignoring the accent
location in the second constituent.
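As a rough sketch of what the accentual analysis amounts to (our own illustration; only the two melodies, the accent location, and the default L on pre-accentual moras come from the description above), a surface pattern is obtained by docking the melody at the accented mora, spreading its final tone rightwards, and filling earlier moras with a default L:

def realize(melody, accent_index, n_moras):
    # Dock the melody at the accented mora, spread its final tone over the
    # remaining moras, and give any pre-accentual moras a default L.
    surface = ["L"] * accent_index
    for i in range(accent_index, n_moras):
        surface.append(melody[min(i - accent_index, len(melody) - 1)])
    return "".join(surface)

# Two melodies (H, HL) plus an accent on the first or second mora give the
# four surface patterns of (12) on a trimoraic word:
print(realize("H", 0, 3), realize("HL", 0, 3),
      realize("H", 1, 3), realize("HL", 1, 3))   # HHH HLL LHH LHL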
References
Abercrombie, David. 1964. English phonetic texts. London: Faber and Faber.
Abercrombie, David. 1967. Elements of general phonetics. Edinburgh: Edinburgh
University Press.
Arvaniti, Amalia, D. Robert Ladd, & Ineke Mennen. 2000. What is a starred tone?
Evidence from Greek. In Broe & Pierrehumbert (eds.) 2000, 119–131.
Barlow, Jessica A. 2001. A preliminary typology of initial clusters in acquisition.
Clinical Linguistics and Phonetics 15. 9–13.
Barnes, Jonathan, Nanette Veilleux, Alejna Brugos, & Stefanie Shattuck-Hufnagel.
2010. Turning points, tonal targets, and the English L-phrase accent.
Language and Cognitive Processes 25. 982–1023.
Beckman, Mary E. & Janet B. Pierrehumbert. 1986. Intonational structure in
Japanese and English. Phonology 3. 255–309.
Boersma, Paul. 1998. Functional phonology: Formalizing the interactions between
articulatory and perceptual drives. The Hague: Holland Academic Graphics.
Broe, Michael B. & Janet B. Pierrehumbert (eds.). 2000. Papers in Laboratory
Phonology, vol. 5: Acquisition and the lexicon. Cambridge: Cambridge
University Press.
Browman, Catherine P. & Louis M. Goldstein. 1986. Towards an articulatory
phonology. Phonology Yearbook 3. 219–252.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York:
Harper and Row.
Clements, N. George & Samuel Jay Keyser. 1981. A three-tiered theory of the
syllable. Occasional Paper No. 19. The Center for Cognitive Science, MIT,
Cambridge, MA.
Clements, G. Nick & Rachid Ridouane (eds.). 2016. Where do phonological
features come from? Cognitive, physical and developmental bases of
distinctive speech categories. Amsterdam: Benjamins.
Dresher, Elan B. & Aditi Lahiri. 1991. The Germanic foot: Metrical coherence in
Old English. Linguistic Inquiry 22. 251–286.
Flemming, Edward. 2002. Auditory representations in phonology. Abington and
New York: Routledge.
Fromkin, Victoria A. (ed.). 1973. Speech errors as linguistic evidence. The Hague:
Mouton.
Gamkrelidze, Thomas & Vyacheslav Ivanov. 1973. Sprachtypologie und die
Rekonstruktion der gemeinindogermanischen Verschlüsse. Phonetica 27.
150–156.
Giegerich, Heinz J. 1983. On English sentence stress and the nature of metrical
structure. Journal of Linguistics 19. 1–28.
Gierut, Judith A. 1999. Syllable onsets: Clusters and adjuncts in acquisition.
Journal of Speech, Language, and Hearing Research 42. 708–726.
Goedemans, Rob & Ellen A. van Zanten. 2007. Stress and accent in Indonesian.
In Vincent J. van Heuven & E. A. van Zanten (eds.), Prosody in Indonesian
languages (LOT Occasional Series 9), 35–62. Utrecht: Netherlands School of
Linguistics.
Goldsmith, John A. 1976. Autosegmental phonology. Doctoral dissertation, MIT,
Cambridge, MA. Bloomington, IN: Indiana University Linguistics Club.
Gomez-Imbert, Elsa & Michael Kenstowicz. 2000. Barasana tone and accent.
International Journal of American Linguistics 66. 419–463.
Grice, Martine. 1995. The intonation of interrogation in Palermo Italian:
Implications for intonation theory. Tübingen: Niemeyer.
Grice, Martine, D. Robert Ladd, & Amalia Arvaniti. 2000. On the place of phrase
accents in intonational phonology. Phonology 17. 143–185.
Gussenhoven, Carlos. 2000. The boundary tones are coming: On the
nonperipheral realization of boundary tones. In Broe & Pierrehumbert (eds.)
2000, 132–151.
Gussenhoven, Carlos. 2004. The phonology of tone and intonation. Cambridge:
Cambridge University Press.
Gussenhoven, Carlos. 2007. Wat is de beste transcriptie voor het Nederlands?
Nederlandse Taalkunde 12. 331–350.
Gussenhoven, Carlos. 2011. Sentential prominence in English. In van Oostendorp
et al. (eds.) 2011, 2778–2806.
Gussenhoven, Carlos. 2015. Does phonological prominence exist? Lingue e
Linguaggio 14. 7–24.
Gussenhoven, Carlos. 2016. The analysis of intonation: The case of MAE-ToBI.
Laboratory Phonology 7. 1–35.
Gussenhoven, Carlos. 2017. Zwara (Zuwārah) Berber. Journal of the Association
for Laboratory Phonology 7. 1–17. DOI: http://doi.org/10.5334/labphon.30
Harst, Sander van der. 2011. The vowel space paradox: A sociophonetic study on
Dutch. Utrecht: LOT.
Hayes, Bruce. 1995. Metrical stress theory: Principles and case studies. Chicago:
Chicago University Press.
Hayes, Bruce & Aditi Lahiri. 1991a. Durationally specified intonation in English and
Bengali. In Johan Sundberg, Lennart Nord, & Rolf Carlson (eds.), Music,
language, speech and brain (Wenner-Gren Center International Symposium
Series), 78–91. London: Palgrave.
Hayes, Bruce & Aditi Lahiri. 1991b. Bengali intonational phonology. Natural
Language and Linguistic Theory 9. 47–96.
Hermes, Anne, Doris Mücke, & Martine Grice. 2013. Gestural coordination of
Italian word-initial clusters: The case of Italian “impure s”. Phonology 30. 1–
25.
Hopper, Paul J. 1973. Glottalized and murmured occlusives in Indo-European.
Glossa 7. 141–166.
Howie, J. M. 1974. On the domain of tone in Mandarin: Some acoustical evidence.
Phonetica 30. 129–148.
Hulst, Harry van der. 2012. Deconstructing stress. Lingua 122. 1494–1521.
Hume, Elizabeth. 1998. Metathesis in phonological theory: The case of Leti.
Lingua 104. 147–186.
Hyman, Larry M. 1985. A theory of phonological weight. Dordrecht: Foris.
Hyman, Larry M. 2006. Word-prosodic typology. Phonology 23. 225–257.
Hyman, Larry M. 2011. Tone: Is it different? In John Goldsmith, Jason Riggle, &
Alan C. L. Yu (eds.), The handbook of phonological theory, 197–239. 2nd edn,
Oxford: Wiley-Blackwell.
Hyman, Larry M. 2015a. Does Gokana really have syllables? A postscript to
Hyman 2011. Phonology 32. 303–306.
Hyman, Larry M. 2015b. Why underlying representations? UC Berkeley Phonology
Lab Annual Report (2015): 210–226. Published in Journal of Linguistics 54.
Itô, Junko & Armin Mester. 1986. The phonology of voicing in Japanese:
Theoretical consequences for morphological accessibility. Linguistic Inquiry
17. 49–73.
Jacobi, Irene. 2009. On variation and change in diphthongs and long vowels of
spoken Dutch. Doctoral dissertation, University of Amsterdam.
Jun, Sun-Ah. 2014. Prosodic typology II. Oxford: Oxford University Press.
Kawahara, Shigeto. 2016. Japanese has syllables: A reply to Labrune. Phonology
33. 169–194.
Kessler, Brett. 1998. Ambisyllabicity in the language of the Rigveda. http://spell.psychology.wustl.edu/ambisyll-sanskrit/ (last changed 27-08-2004).
Kiparsky, Paul. 1982. Lexical morphology and phonology. In In-Seok Yang (ed.),
Linguistics in the morning calm: Selected papers from SICOL, 3–91. Seoul:
Hanshin.
Kubozono, Haruo. 1999. Mora and syllable. In Natsuko Tsujimura (ed.), The
handbook of Japanese linguistics, 31–61. Malden, MA: Blackwell.
Kubozono, Haruo. 2011. Japanese pitch accent. In van Oostendorp et al. (eds.)
2011, 2879–2907.
Labrune, Laurence. 2012. Questioning the universality of the syllable: Evidence
from Japanese. Phonology 29. 113–152.
Ladd, D. Robert. 2004. Segmental anchoring of pitch movements: Autosegmental
phonology or speech production? In Hugo Quené & Vincent J. van Heuven
(eds.), On speech and language: Studies for Sieb G. Nooteboom, 123–132.
Utrecht: LOT.
Ladd, D. Robert. 2006. Segmental anchoring of pitch movements: Autosegmental
association or gestural coordination? Italian Journal of Linguistics 18. 19–38.
Ladd, D. Robert. 2014. Simultaneous structure in phonology. Oxford: Oxford
University Press.
Leben, William R. 1970. The representation of tone. In Victoria A. Fromkin (ed.),
Tone: A linguistic survey, 177–219. New York: Academic Press.
Leben, William R. 1973. Suprasegmental phonology. Doctoral dissertation, MIT,
Cambridge, MA.
Lehiste, Ilse. 1970. Suprasegmentals. Cambridge, MA: MIT Press.
Lerdahl, Fred & Ray Jackendoff. 1983. A generative theory of tonal music.
Cambridge, MA: MIT Press.
Liberman, Mark & Alan Prince. 1977. On stress and linguistic rhythm. Linguistic
Inquiry 8. 249–336.
Lickley, Robin, Astrid Schepman, & D. Robert Ladd. 2005. Alignment of “phrase
accent” lows in Dutch falling rising questions: Theoretical and methodological
implications. Language and Speech 48. 157–183.
Maskikit-Essed, Raechel & Carlos Gussenhoven. 2016. No stress, no pitch accent,
no prosodic focus: The case of Ambonese Malay. Phonology 33. 353–389.
McCarthy, John J. 1985. Formal problems in Semitic phonology and morphology.
New York: Garland.
McCarthy, John J. & Alan Prince. 1993. Generalized alignment. In Geert Booij &
Jaap van Marle (eds.), Yearbook of morphology, 79–154. Berlin: Springer.
McQueen, James M., Anne Cutler, & Dennis Norris. 2006. Phonological
abstraction in the mental lexicon. Cognitive Science 30. 1113–1126.
Minde, Don van. 1997. Malayu Ambong: Phonology, morphology, syntax. Doctoral
dissertation, University of Leiden.
Nespor, Marina & Irene Vogel. 1986. Prosodic phonology. Dordrecht: Foris.
Nolan, Francis & Hae-Song Jeon. 2014. Speech rhythm: A metaphor?
Philosophical Transactions of the Royal Society B: Biological Sciences 369
(1658): 20130396. DOI: 10.1098/rstb.2013.0396.
Nooteboom, Sieb & Hugo Quené. 2007. The SLIP technique as a window on the
mental preparation of speech: Some methodological considerations. In Maria-
Josep Solé, Pamela S. Beddor, & Manjari Ohala (eds.), Experimental
approaches to phonology, 339–350. Oxford: Oxford University Press.
Oostendorp, Marc van, Colin J. Ewen, Elizabeth Hume, & Keren Rice (eds.). 2011. The
Blackwell companion to phonology. Oxford: Wiley-Blackwell.
Peters, Jörg. 2008. Tone and intonation in the dialect of Hasselt. Linguistics 46.
983–1018.
Peters, Jörg, Judith Hanssen, & Carlos Gussenhoven. 2015. The timing of nuclear
falls: Evidence from Dutch, West Frisian, Dutch Low Saxon, German Low
Saxon, and High German. Laboratory Phonology 6. 1–52.
Pierrehumbert, Janet B. 1980. The phonology and phonetics of English intonation.
Doctoral dissertation, MIT, Cambridge, MA.
Pierrehumbert, Janet. 1990. Phonological and phonetic representation. Journal of
Phonetics 18. 375–394.
Pierrehumbert, Janet B. 2002. Word-specific phonetics. In Carlos Gussenhoven &
Natasha Warner (eds.), Laboratory phonology vol. 7, 1001–1039. Berlin:
Mouton de Gruyter.
Pierrehumbert, Janet B. & Mary E. Beckman. 1988. Japanese tone structure.
Cambridge, MA: MIT Press.
Post, Mark W. 2009. The phonology and grammar of Galo “words”: A case study in
benign disunity. Studies in Language 33. 934–974.
Port, Robert & Adam Leary. 2005. Against formal phonology. Language 81. 927–
964.
Prieto, Pilar, Mariapaola D’Imperio, & Barbara Gili-Fivela. 2005. Pitch accent
alignment in Romance: Primary and secondary associations with metrical
structure. Language and Speech 48. 359–396.
Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in
Generative Grammar. Rutgers University Center for Cognitive Science
Technical Report 2.
Riad, Tomas. 1998. Towards a Scandinavian accent typology. In Wolfgang Kehrein
& Richard Wiese (eds.), Phonology and morphology of the Germanic
languages, 77–109. Tübingen: Niemeyer.
Schachter, Paul & Fe T. Otanes. 1972. Tagalog reference grammar. Berkeley:
University of California Press.
Schiering, René, Balthasar Bickel, & Kristine A. Hildebrandt. 2010. The
phonological word is not universal, but emergent. Journal of Linguistics 46.
657–709.
Selkirk, Elisabeth. 1981. On prosodic structure and its relation to syntactic
structure. In Thorstein Fretheim (ed.), Nordic prosody 2, 111–114. Trondheim:
Tapir.
Selkirk, Elisabeth. 1982. Syllables. In Harry van der Hulst & Norval Smith (eds.),
The structure of phonological representations. Part 2, 337–383. Dordrecht:
Foris.
Selkirk, Elisabeth & Seunghun J. Lee. 2015. Constituency in sentence phonology:
An introduction. Phonology 31. 1–18.
Strange, Winifred. 1995. Cross-language studies of speech perception: A historical
review. In Winifred Strange (ed.), Speech perception and linguistic
experience: Issues in cross-language speech research, 3–45. Timonium, MD:
York.
Trager, George L. & Henry Lee Smith. 1951. An outline of English structure.
Washington: American Council of Learned Societies.
Vance, Timothy J. 1995. Final accent vs. no accent: Utterance-final neutralization
in Tokyo Japanese. Journal of Phonetics 23. 487–499.
Ven, Marco van de & Carlos Gussenhoven. 2011. The timing of the final rise in
falling-rising intonation contours in Dutch. Journal of Phonetics 39. 225–236.
Volk, Erez. 2011. Mijikenda phonology. Doctoral dissertation, Tel Aviv University.
Walker, Rachel. 2011. Nasal harmony. In van Oostendorp et al. (eds.) 2011, 1838–
1865.
Warner, Natasha. 1997. Japanese final-accented and unaccented phrases.
Journal of Phonetics 25. 43–60.
Wiese, Richard. 2000. The phonology of German. Oxford: Oxford University Press.
Zonneveld, Wim, Mieke Trommelen, Michael Jessen, Curtis Rice, Gösta Bruce, &
Kristján Árnason. 1999. Word stress in West-Germanic and North-Germanic
languages. In Harry van der Hulst (ed.), Word prosodic systems in the
languages of Europe, 477–603. Berlin: Mouton de Gruyter.
Subject Index
abstractness 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17
accent 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13
see also pitch accent; stress
acquisition 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
see also artificial language learning; learnability; second language acquisi
tion
active (of a contrast/feature) 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17
alignment 1, 2, 3, 4, 5, 6, 7
analogy 1, 2, 3
apocope 1, 2, 3
artificial language learning 1, 2, 3, 4
aspiration 1, 2, 3, 4, 5, 6, 7, 8, 9
assimilation 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35
association 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17
autosegment(al) 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
borrowing 1, 2, 3, 4, 5, 6, 7
see also loanword
clicks 1, 2, 3
coarticulation 1, 2, 3, 4, 5, 6
colour (vowels) 1, 2, 3, 4, 5, 6, 7, 8, 9
complexity 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11n, 12, 13, 14, 15, 16
consonant cluster 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
consonant (dis)harmony 1, 2, 3, 4, 5
constraint-based phonology see Optimality Theory
contact 1, 2, 3, 4
contrast privative, equipollent, or gradual 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
contrast shift 1, 2
contrastive hierarchy 1, 2, 3
coronal 1, 2, 3, 4, 5, 6
databases 1, 2, 3, 4, 5, 6
see also UPSID/LAPSyd database; P-base
deletion, of
– consonant 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15
– feature 1, 2, 3, 4
– tone 1, 2, 3
– vowel 1, 2, 3, 4, 5, 6, 7, 8, 9
diachrony 1, 2, 3, 4, 5, 6, 7, 8, 9, 10n, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27
diffusion (areal) see borrowing
diphthongization 1, 2, 3, 4, 5, 6, 7
dispersion 1, 2, 3, 4, 5
dissimilation 1, 2, 3
economy 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
enhancement 1, 2, 3, 4, 5, 6, 7, 8, 9
epenthesis 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15
geminate 1, 2, 3
gemination/degemination 1, 2, 3, 4, 5
geography and phonology 1
glottalization 1, 2
hiatus 1, 2, 3, 4, 5
labialization 1, 2, 3
laryngeal contrast see voicing contrast
learnability 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
see also acquisition
lengthening see quantity contrast
lenition 1, 2, 3, 4, 5, 6
lexical/postlexical phonology 1, 2, 3, 4, 5, 6, 7n, 8, 9, 10, 11, 12, 13, 14, 15, 16
loanword 1, 2, 3, 4, 5, 6
see also borrowing
markedness 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33
merger 1n, 2, 3, 4, 5, 6, 7n, 8, 9, 10, 11
metathesis 1, 2, 3, 4, 5, 6, 7
meter (poetic) 1, 2, 3
mora 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
mora-timing see rhythm
morphophonology 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
Optimality Theory (OT) 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28
P-base 1, 2
palatalization 1, 2, 3n, 4, 5, 6n, 7n, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17n, 18, 19, 20, 21, 22, 23
pharyngealization 1, 2, 3
phonemic level of representation 1, 2, 3, 4, 5
phonetics in relation to phonology 1, 2, 3, 4, 5, 6, 7n, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41n, 42, 43, 44, 45, 46, 47, 48
phonetic implementation see phonetics in relation to phonology
phonological alternations 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12n, 13, 14, 15, 16, 17
phonological change see diachrony
phonologization 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
phonotactics 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18
pitch accent 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
play languages 1, 2, 3, 4
Prague School 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
prosodic hierarchy 1, 2, 3, 4, 5
prosodic morphology 1, 2, 3, 4
prosody 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20
reconstruction 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19
redundancy see economy
reduplication 1, 2, 3, 4
rhythm 1, 2, 3, 4, 5, 6, 7, 8
tone 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13n, 14, 15, 16, 17, 18, 19, 20, 21
see also floating tone
typology defined 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14
voicing contrast 1, 2, 3, 4, 5, 6, 7, 8, 9, 10n, 11, 12, 13, 14, 15, 16, 17, 18, 19,
20, 21
see also final devoicing
vowel harmony 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21
vowel reduction 1, 2, 3, 4, 5, 6, 7
vowel systems 1, 2, 3, 4, 5, 6, 7, 8, 279–297
Language Index
Abenaki 1
Alawa 1
Alemannic 1n
Aleut 1
Algonquian 1, 2, 3, 4
Angami 1
Ao 1
Arabic 1, 2, 3, 4, 5, 6, 7, 8, 9
Arapaho 1, 2
Armenian 1, 2, 3
Arrernte 1, 2, 3n, 4, 5
Bambara 1
Basaá 1
Basque 1, 2, 3, 4, 5, 6, 7
Bella Coola 1
Bengali 1, 2, 3, 4
Berber 1, 2, 3, 4
Bininj Gun-Wok 1
Blackfoot 1
Breton 1
Campa 1
Cantonese 1n, 2, 3
Catalan 1, 2, 3, 4, 5, 6, 7, 8
Cheyenne 1, 2
Chichewa 1n
Chickasaw 1
Chinese (see Mandarin, Cantonese)
Chukchi 1
Chumash 1
Cree 1
Cree, Northern Plains 1
Czech 1
Danish 1n, 2, 3, 4n, 5, 6n, 7, 8, 9, 10
Desano 1
Doutai 1
Dutch 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21n
Ekagi 1
English 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14n, 15, 16, 17, 18, 19, 20n, 21n, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32n, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47n, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69
Eskimo (Proto) 1, 2, 3, 4n, 5
Eskimo 1, 2, 3, 4, 5n, 6
Estonian 1
Faroese 1
Farsi 1, 2, 3
Finnish 1, 2, 3, 4
Fox 1
Franconian (Central) 1, 2, 3n, 4n, 5
French 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
German 1, 2, 3n, 4, 5n, 6, 7, 8, 9, 10, 11, 12, 13n, 14, 15, 16, 17, 18, 19, 20,
21, 22
Germanic 1, 2, 3, 4, 5, 6
Gokana 1, 2, 3, 4, 5
Gothic 1, 2
Greek 1, 2, 3, 4, 5, 6, 7n, 8n, 9, 10
Greenlandic 1
Haruai 1
Hindi 1
Hungarian 1, 2n, 3, 4, 5, 6, 7, 8n
Icelandic 1
Igbo 1
Ik 1
Ikwere 1
Ilocano 1
Indonesian 1, 2, 3
Inuit 1, 2, 3, 4, 5, 6, 7, 8n, 9
Inupiaq (Barrow) 1
Irish 1n, 2n, 3n
Italian 1n, 2n, 3, 4, 5, 6, 7
Japanese 1, 2, 3, 4, 5n, 6, 7n, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
21, 22, 23, 24, 25, 26, 27, 28, 29n
Jaqaru 1
Jimi 1
Kabardian 1, 2, 3, 4, 5n, 6
Kachin Khakass 1
Kalam 1, 2
Kàlɔ̀ŋ 1
Karen (Sgaw) 1
Khanty 1, 2, 3
Kinyarwanda 1
Kom 1
Konni 1
Korean 1, 2, 3, 4, 5, 6, 7n, 8, 9
Koromfe 1
Kuki-Thaadow 1
Kune 1
Kunwinjku 1
Kwakiutl 1
Latvian 1
Lithuanian 1n, 2, 3, 4n, 5
Mahican 1
Malagasy 1
Malay 1, 2, 3
Malayalam 1, 248m
Maliseet-Passamaquoddy 1
Manchu (Classical) 1
Mandarin (Chinese) 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15
Mansi 1
Maori 1
Marshallese 1, 2
Massachusett 1
Maung 1
Mayá 1
Mayali 1
Mazatec 1
Mekeo 1
Menomini 1, 2
Mi’kmaq 1
Miami-Illinois 1
Mixtec 1, 2
Mlabri 1
Moloko 1, 2, 3, 4
Mongolian 1, 2, 3, 4, 5, 6, 7, 8
Nahuatl 1
Nenets (Tundra) 1n
Nez Perce 1
Nimboran 1
Noon 1
Norwegian 1, 2, 3, 4, 5n, 6, 7, 8, 9n, 10, 11
Nunggubuyu 1
Nupe 1
Nzadi 1
Ob-Ugric 1, 2, 3n, 4, 5, 6, 7, 8, 9, 10
Ojibwe 1
Oroqen 1n, 2, 3
Pitta-Pitta 1, 2
Pohnpeian 1
Polish 1, 2, 3, 4, 5, 6, 7, 8, 9
Portuguese 1, 2, 3, 4, 5, 6, 7, 8
Qawasqar 1, 2
Qiang 1
Romance 1, 2, 3, 4, 5
Russian 1, 2n, 3, 4, 5, 6, 7
Salishan 1
Samala 1, 2, 3, 4n, 5
Sanskrit 1, 2n, 3, 4, 5, 6
Semitic 1, 2
Serbo-Croatian 1, 2, 3
Shawnee 1
Sheko 1
Skou 1
Slave 1
Slavic 1, 2, 3n
Somali 1
Spanish 1, 2, 3, 4n, 5, 6, 7, 8, 9, 10, 11
Sundanese 1
Swedish 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11n, 12, 13, 14, 15n, 16, 17, 18, 19, 20, 21, 22, 23
Tacana 1
Tahltan 1
Takelma 1, 2
Tashlhiyt (Berber) 1n
Teleéfoól 1
Thaayore 1
Thai 1, 2, 3
Totontepec (Mixe) 1
Towa 1
Tswana 1, 2, 3
Tungusic, Tungus 1, 2, 3, 4, 5, 6, 7
Turkic 1, 2, 3
Turkish 1, 2, 3, 4, 5, 6, 7, 8, 9
Usarufa 1
Uyghur 1, 2
Vietnamese 1, 2, 3, 4
Walloon 1
Welsh 1, 2
Wichita 1n
Xibe 1, 2, 3
Yaka 1, 2, 3
Yessan-Mayo 1
Yokuts (Yowlumne) 1, 2, 3, 4
Yoruba 1, 2
Yupik 1, 2, 3, 4
Author Index
Abercrombie, David 1, 2, 3, 4
Abitov, M. L. 1, 2
Abrahamsson, Niclas 1, 2, 3
Abry, Christian 1
Aikhenvald, Alexandra Y. 1, 2
Aissen, Judith 1, 2
Alber, Birgit 1, 2
Altenberg, Evelyn 1, 2, 3
Andersen, Henning 1, 2
Anderson, Stephen R. 1n, 2, 3, 4, 5, 6, 7, 8
Ann, Jean 1, 2
Aoun, Joseph 1
Applebaum, Ayla 1, 2n, 3, 4, 5
Applegate, Richard B. 1, 2, 3n, 4
Archangeli, Diana 1, 2, 3, 4
Árnason, Kristján 1, 2
Aronoff, Mark 1
Arvaniti, Amalia 1, 2, 3n, 4n, 5, 6, 7, 8, 9, 10
Atkinson, Quentin 1, 2, 3
Auer, Peter 1, 2
Austin, Peter 1, 2, 3
Bach, Emmon 1
Badecker, William 1n, 2
Bagov, P. M. 1, 2
Bailey, Gil 1
Baker, Mark 1
Baković, Eric 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
Balas, Anna 1
Balkarov, B. X. 1
Bannert, Robert 1
Barlow, Jessica A. 1, 2
Barnes, Jonathan 1, 2
Barreteau, Daniel 1, 2, 3
Barrie, Mike 1, 2
Basbøll, Hans 1, 2, 3
Beauzée, Nicolas 1, 2n
Becker, Michael 1, 2, 3
Beckman, Jill 1, 2, 3, 4
Beckman, Mary E. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13
Bedell, George 1, 2
Beesley, Kenneth 1, 2, 3
Bender, Byron W. 1, 2, 3, 4, 5, 6
Bennett, William 1, 2, 3
Benua, Laura 1n, 2
Bergsland, Knut 1
Bermúdez-Otero, Ricardo 1n, 2, 3, 4, 5, 6
Beros, Achilles 1, 2
Berstel, Jean 1, 2
Bhat, D. N. S. 1, 2, 3n, 4
Bhatt, Rajesh 1
Bickel, Balthasar 1, 2, 3, 4, 5, 6, 7, 8
Bickmore, Lee S. 1, 2
Blake, Barry J. 1
Bleckert, Lars 1, 2, 3
Blevins, Juliette 1n, 2, 3n, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21
Bloch, Bernard 1, 2, 3
Bloomfield, Leonard 1, 2n, 3, 4, 5, 6
Blumenfeld, Lev 1, 2
Blumstein, Sheila 1n, 2, 3n, 4, 5, 6, 7, 8, 9
Boas, Franz 1
Boë, Louis-Jean 1
Boersma, Paul 1, 2, 3, 4, 5
Booij, Geert 1, 2
Borowsky, Toni 1, 2, 3n, 4, 5
Borsley, Robert 1
Bossong, Georg 1, 2
Breen, Gavan 1, 2n, 3, 4
Bresnan, Joan 1, 2, 3, 4, 5
Broe, Michael B. 1, 2
Broersma, Miriam 1, 2
Brohan, Anthony 1, 2, 3, 4
Bromberg, Ilana 1, 2
Broselow, Ellen 1, 2, 3, 4
Browman, Catherine P. 1, 2, 3, 4
Bruce, Gösta 1, 2, 3, 4n, 5, 6, 7, 8, 9, 10n, 11, 12, 13, 14, 15, 16, 17, 18
Brugos, Alejna 1
Buccola, Brian 1, 2
Buckley, Eugene 1, 2
Bulmer, Ralph 1, 2, 3, 4, 5, 6
Burnett, James (Lord Monboddo) 1, 2
Burzio, Luigi 1, 2
Butcher, Andrew 1
Bybee, Joan L. 1, 2, 3, 4
Bye, Patrik 1, 2, 3
Bynon, Theodora 1n, 2
Cahill, Michael 1, 2
Calabrese, Andrea 1, 2, 3, 4, 5
Campanella, Tommaso 1, 2, 3, 4
Campos-Astorkiza, Rebeka 1, 2n, 3
Cardoso, Walcir 1, 2, 3, 4
Carstairs, Andrew 1
Casali, Roderic F. 1, 2
Catford, J. C. 1, 2, 3, 4, 5, 6, 7
Cebrian, Juli 1, 2, 3
Chakraborti, Paromita 1, 2
Chandlee, Jane 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
Chen, Su-I 1, 2
Chiu, C. Chenhao 1, 2
Cho, Young-Mee 1, 2
Choi, John D. 1, 2n, 3
Chomsky, Noam 1, 2, 3, 4, 5, 6, 7, 8, 9, 10n, 11, 12, 13, 14, 15, 16, 17, 18,
19, 20, 21, 22, 23, 24, 25, 26n, 27, 28, 29, 30, 31, 32, 33, 34, 35
Cichocki, Wladyslaw 1, 2
Cinque, Guglielmo 1
Clairis, Christos 1, 2
Clark, Mary 1, 2
Clements, G. N. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
20, 21, 22n, 23
Cohn, Abigail 1, 2, 3
Colarusso, John 1n, 2, 3n, 4, 5, 6n, 7
Comon, Hubert 1, 2
Compton, Richard 1, 2, 3n, 4, 5n, 6
Comrie, Bernard 1, 2, 3, 4, 5, 6, 7, 8
Cope, Dana 1
Cornell, Sonia A. 1, 2, 270C
Côté, Marie-Hélène 1
Coupé, Christophe 1, 2, 3
Crane, Thera M. 1
Crawford, Penny 1, 2
Cristofaro, Sonia 1
Croft, William 1, 2, 3, 4, 5
Currie-Hall, Kathleen 1n, 2
Cutler, Anne 1
d’Andrade, Ernesto 1n
D’Imperio, Mariapaola 1, 2
Dauchet, Max 1
Dauer, Rebecca 1, 2, 3, 4
Davidian, Richard D. 1, 2, 3, 4, 5
DeCamp, David 1n, 2
Delisi, Jessica L. 1, 2
Dell, François 1, 2, 3, 4
Dellwo, Volker 1, 2, 3, 4
Dinnsen, Daniel A. 1, 2
Dixon, R. M. W. 1, 2
Dolatian, Hossep 1
Donegan, Patricia 1, 2
Donohue, Mark 1, 2
Dorais, Louis-Jacques 1, 2
Downing, Laura J. 1, 2
Dresher, B. Elan 1, 2, 3, 4, 5, 6n, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19
Dressler, Wolfgang U. 1, 2
Dryer, Matthew S. 1, 2, 3, 4
Duanmu, San 1
Durand, Jacques 1
É. Kiss, Katalin 1
Easterday, Shelece 1, 2
Ebeling, C. L. 1, 2, 3
Eckman, Fred 1, 2, 3, 4, 5, 6, 7
Edge, Beverly 1, 2, 3, 4
Edlefsen, Matt 1, 2, 3
Elert, Claes-Christian 1, 2, 3
Elgot, C. C. 1, 2, 3
Elmedlaoui, Mohamed 1, 2
Ember, Carol R. 1, 2, 3
Ember, Marvin 1, 2, 3
Endress, Ansgar D. 1, 2
Engelfriet, Joost 1, 2, 3
Engstrand, Olle 1n, 2, 3
Eulitz, Carsten 1, 2
Evans, Nicholas 1, 2, 3, 4, 5, 6, 7n, 8, 9, 10, 11
Evers, Vincent 1, 2, 3, 4n, 5
Ewen, Colin J. 1, 2, 3, 4, 5, 6, 7
Eyraud, Rémy 1, 2, 3
Haarmann, Harald 1
Hagège, Claude 1, 2, 3n
Hale, Mark 1, 2, 3, 4
Hall, Daniel Currie 1n, 2, 3
Hall, Robert A. 1, 2
Hall, Tracy Alan 1, 2, 3, 4, 5n, 6, 7, 8, 9, 10, 11
Halle, Morris 1, 2, 3, 4, 5, 6n, 7, 8, 9, 10n, 11, 12n, 13, 14, 15, 16, 17, 18, 19,
20, 21, 22, 23, 24, 25, 26, 27, 28n, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39,
40, 41, 42, 43, 44, 45, 46
Hammarberg, Björn 1, 2, 3
Hammond, Michael 1, 2
Hancin-Bhatt, Barbara 1, 2, 3
Hannahs, S. J. 1
Hansen, Jette G. 1, 2, 3, 4
Hansson, Gunnar 1, 2, 3, 4, 5
Harnad, Stevan 1, 2, 3
Harrington, Jonathan 1, 2
Harris, James 1, 2
Harris, John 1, 2, 3
Harris, Zellig 1
Harst, Sander van der 1, 2
Harvey, Christopher 1, 2, 3
Haspelmath, Martin 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
Haudricourt, André-Georges 1
Havers, Wilhelm 1
Hayes, Bruce 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20
Healey, P. 1
Heidolph, Karl Erich 1, 2
Heine, Bernd 1, 2, 3
Heinz, Jeffrey 1, 2, 3, 4, 5, 6
Helgason, Pétur 1, 2, 3
Hermes, Anne 1, 2
Heyer, Sarah 1, 2
Higuera, Colin de la 1, 2, 3, 4
Hildebrandt, Kristine A. 1
Hillenbrand, James 1, 2
Hjelmslev, Louis 1
Hockett, Charles F. 1, 2, 3, 4, 5, 6
Hognestad, Jan K. 1, 2, 3
Honey, P. J. 1
Honeybone, Patrick 1, 2, 3, 4, 5, 6
Honti, László 1, 2, 3, 4, 5
Hoogeboom, Hendrik Jan 1, 2, 3
Hopper, Paul J. 1, 2
House, Anthony B. 1
House, David 1, 2
Householder, Fred 1
Howard, Irwin 1, 2
Howie, J. M. 1, 2
Hsieh, Feng-Fan 1, 2
Hualde, José Ignacio 1, 2, 3, 4, 5, 6, 7
Huang, James 1, 2
Huber, Brad R. 1, 2
Huffman, Marie 1, 2
Hulden, Mans 1, 2, 3
Humboldt, Wilhelm von 1, 2, 3
Hume, Elizabeth 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
20
Hwangbo, Hyun Jin 1
Hyde, Brett 1, 2
Hyman, Larry M. 1, 2, 3, 4, 5, 6, 7, 8n, 9, 10, 11n, 12n, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55
Idsardi, William 1, 2, 3, 4, 5, 6, 7
Ineichen, Gustav 1
Ingrisano, Dennis R. 1, 2
Isidore of Seville 1
Itô, Junko 1, 2, 3, 4, 5
Ivanov, Vyacheslav 1
Iverson, Gregory 1, 2, 3
Jackendoff, Ray 1, 2
Jacobi, Irene 1, 2
Jacobs, Haike 1, 2, 3, 4, 5
Jacobson, Steven A. 1
Jäger, Gerhard 1, 2
Jakobson, Roman O. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
18, 19, 20, 21
Jakovlev, N. F. 1, 2, 3
Janda, Richard D. 1, 2, 3
Janker, Peter M. 1, 2
Jardine, Adam 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
Jenkins, James J. 1, 2
Jeon, Hae-Song 1, 2
Jespersen, Otto 1, 2, 3, 4
Jessen, Michael 1, 2, 3
John, Tina 1, 2
Johnson, C. Douglas 1, 2, 3
Johnson, Keith 1
Johnson, Mark 1, 2
Jönsson-Steiner, Elisabet 1, 2
Jun, Sun-Ah 1, 2, 3, 4
Jung, Dagmar 1, 2
Jurafsky, Daniel 1, 2, 3, 4
Jurgec, Peter 1, 2
Kabak, Barıș 1, 2
Kager, René 1, 2, 3, 4
Kahl, Thede 1
Kahn, Daniel 1, 2, 3
Kaisse, Ellen M. 1, 2, 3
Kallstenius, Gottfrid 1, 2
Kang, Yoonjung 1, 2
Kaplan, Abby 1, 2
Kaplan, Lawrence D. 1, 2, 3
Kaplan, Ronald 1, 2, 3
Karlsson, Anastasia M. 1
Karttunen, Lauri 1, 2, 3
Karvonen, Dan 1, 2
Kaschube, Dorothea 1, 2
Kasprzik, Anna 1
Kaufmann, Stefan 1
Kaun, Abigail Rhoades 1, 2, 3
Kawahara, Shigeto 1, 2, 3, 4, 5, 6, 7
Kawasaki-Fukumori, Haruko 1, 2, 3, 4
Kay, Martin 1, 2, 3
Keating, Patricia 1, 2, 3, 4, 5, 6, 7, 8, 9
Kenstowicz, Michael 1, 2, 3, 4, 5, 6, 7, 8, 9
Kessler, Brett 1n, 2, 3, 4, 5, 6
Ketrez, Nihan 1
Keyser, Samuel Jay 1, 2, 3, 4, 5, 6, 7, 8
Kibrik, A. E. 1
Kim, Susan 1, 2, 3
Kinloch, Murray 1
Kiparsky, Paul 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17
Kirchner, Robert M. 1, 2, 3
Kisseberth, Charles W. 1, 2, 3, 4, 5, 6, 7, 8
Kleber, Felicitas 1, 2
Ko, Seongyeon 1n, 2
Kobele, Gregory 1, 2, 192ko
Kochetov, Alexei 1n, 2n, 3
Kock, Axel 1, 2
Kodzasov, S. V. 1
Kohler, Klaus J. 1, 2
Köhnlein, Björn 1, 2, 3, 4
Kolly, Marie-José 1
König, Ekkehard 1
Korhonen, Mikko 1, 2, 3
Korn, David 1, 2, 3
Kornai, András 1, 2, 3
Koskenniemi, Kimmo 1, 2
Kötzing, Timo 1
Krämer, Martin 1, 2
Kristoffersen, Gjert 1, 2, 3, 4, 5
Kruckenberg, Anita 1, 2
Kuaševa, T. X. 1
Kubozono, Haruo 1, 2, 3, 4, 5, 6, 7, 8
Kuipers, Aert H. 1, 2, 3
Kula, Nancy C. 1, 2
Kumaxov, M. A. 1, 2, 3
Labov, William 1, 2
Lacy, Paul de 1, 2, 3, 4, 5, 6
Ladd, D. Robert 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
Ladefoged, Peter 1, 2
Lahiri, Aditi 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
Lai, Regine 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18
Lautemann, Clemens 1, 2
Lavoie, Lisa M. 1, 2
Laycock, D. C. 1, 2
Lazard, Gilbert 1
Leary, Adam 1, 2, 3
Leben, William R. 1, 2
Lee, Seunghun 1, 2, 3, 4
Leeman, Dylan 1
Leemann, Adrian 1
Lehiste, Ilse 1, 2
Leira, Vigleik 1, 2
Lekeneny, Jean 1
Lerdahl, Fred 1, 2
Levelt, Clara 1, 2, 3, 4
Levinson, Stephen C. 1, 2, 3, 4, 5
Li, Bing 1n, 2
Li, Shulan 1, 2
Liberman, Anatoly 1, 2, 3
Liberman, Mark 1, 2, 3, 4, 5
Lickley, Robin 1, 2
Lieber, Rochelle 1
Lightfoot, David 1
Lindblad, Per 1, 2, 3, 4
Lindblom, Björn 1, 2, 3, 4
Linhartova, Vendula 1
Linker, Wendy 1
Lister, Anthony C. 1
Lloyd James, Arthur 1, 2
Lombardi, Linda 1, 2, 3, 4, 5
Lorentz, Ove 1, 2, 3, 4
Lothaire, M. 1, 2, 3, 4
Louriz, Nabila 1, 2
Low, E. L. 1, 2, 3, 4
Luo, Huan 1, 2, 3
McCarthy, John J. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19
McCawley, James D. 1, 2, 3, 4
McCutcheon, Martin J. 1, 2
McKenzie, Pierre 1
McMahon, April M. S. 1, 2, 3
McNaughton, Robert 1, 2, 3, 4
McQueen, James 1, 2
Mackenzie, Sara 1, 2
Maddieson, Ian 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19
Magloughlin, Lyra 1
Major, Roy 1, 2, 3
Mallinson, Graham 1
Manaster-Ramer, Alexis 1, 2
Marlo, Michael 1, 2
Marsico, Egidio 1
Martin, James 1, 2
Martin, Samuel 1, 2
Martinet, André 1, 2, 3, 4, 5
Martínez-Gil, Fernando 1, 2
Mascaró, Joan 1, 2, 3
Maskikit-Essed, Raechel 1, 2
Matasović, Ranko 1, 2
Mateus, Maria Helena 1
Matthews, Peter 1
Mchombo, Sam 1
Medvedev, Yu. 1, 2
Mehler, Jacques 1, 2
Melville, Herman 1
Mennen, Ineke 1
Merrill, John 1, 2, 3
Mesgnien-Meninski, François (de) 1, 2, 3
Mester, Armin 1, 2, 3, 4, 5
Mesthrie, Rajend 1, 2, 3
Metzeltin, Michael 1
Meyer, Ernst A. 1, 2
Mezei, J. E. 1, 2, 3
Mielke, Jeff 1, 2, 3, 4, 5, 6, 7, 8
Minde, Don van 1, 2, 3
Mithun, Marianne 1
Mjaavatn, Per Egil 1, 2, 3
Mohanan, K. P. 1, 2, 3, 4, 5
Mohanan, Tara 1, 2
Mohri, Mehryar 1, 2, 3
Møllergård, E. 1, 2
Moravcsik, Edith A. 1, 2, 3, 4, 5, 6, 7
Moreton, Elliot 1, 2, 3, 4, 5, 6, 7
Morpurgo Davies, Anna 1, 2
Motsch, Wolfgang 1
Mou, Xiaomin 1, 2
Moulton, William G. 1, 2
Moure, Teresa 1
Mücke, Doris 1
Mugler, France 1, 2
Munroe, R. H. 1
Munroe, Robert L. 1, 2, 3
Myers, Nathan 1
Myers, Scott 1, 2, 3, 4, 5, 6, 7, 8
Myrberg, Sara 1, 2n, 3, 4, 5, 6
Nagarajan, Hemalatha 1, 2
Naydenov, Vladimir 1, 2
Nedjalkov, Vladimir P. 1
Nespor, Marina 1, 2, 3, 4, 5
Nevins, Andrew 1, 2, 3, 4, 5, 6, 7n, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17
Newman, M. E. J. 1, 2
Newman, Stanley 1, 2, 3
Newmeyer, Frederick J. 1, 2
Ní Chiosáin, Máire 1, 2
Nichols, Johanna 1, 2, 3, 4, 5, 6
Nilsen, Randi Alice 1, 2, 3
Nolan, Francis 1, 2
Noord, Gertjan van 1, 2
Nooteboom, Sieb 1, 2, 3
Nordberg, Bengt 1, 2n, 3
Norris, Dennis 1
Nyström, Staffan 1, 2
Padgett, Jaye 1, 2, 3, 4, 5, 6, 7, 8, 9
Pāṇini 1n
Papert, Seymour 1n, 2, 3, 4
Paradis, Carole 1, 2, 3, 4
Parker, Aliana 1, 2
Pater, Joe 1, 2, 3, 4, 5
Paulian, Christiane 1
Pawley, Andrew 1, 2, 3, 4, 5, 6
Payne, Amanda 1, 2, 3
Pellegrino, François 1
Peng, Long 1, 2, 3
Pensalfini, Rob 1, 2, 3, 4
Peters, Jörg 1, 2, 3, 4, 5, 6, 7, 8
Pierrehumbert, Janet 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15n, 16, 17, 18, 19
Pike, Kenneth L. 1n, 2, 3, 4
Piroth, Hans Georg 1, 2
Plank, Frans 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14
Popper, Karl 1, 2
Port, Robert F. 1, 2, 3, 4, 5, 6, 7
Post, Mark W. 1
Pott, August Friedrich 1n
Potts, Christopher 1n, 2, 3
Precoda, Kristin 1n, 2, 3, 4, 5, 6, 7, 8
Prieto, Pilar 1, 2, 3
Prince, Alan 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
21, 22, 23
Prunet, Jean-François 1, 2, 3
Pullum, Geoffrey K. 1, 2, 3, 4, 5, 6, 7, 8
Purnell, Thomas 1, 2n, 3
Qinggertai (Chingeltei) 1, 2
Quené, Hugo 1, 2, 3
Rabin, Michael 1, 2
Raible, Wolfgang 1
Raimy, Eric 1, 2, 3n, 4
Ramat, Paolo 1
Ramus, Franck 1, 2, 3, 4, 5, 6
Raphael, Lawrence 1, 2
Rask, Rasmus 1
Rawal, Chetan 1
Reetz, Henning 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
Reiss, Charles 1, 2
Rennison, John R. 1
Riad, Tomas 1, 2, 3, 4, 5, 6, 7
Rialland, Annie 1, 2
Rice, Curtis 1
Rice, Keren 1, 2, 3, 4, 5, 6, 7, 8, 9n, 10, 11, 12, 13, 14, 15
Ridouane, Rachid 1, 2, 3, 4, 5, 6
Riggle, Jason 1, 2, 3, 4, 5, 6, 7, 8, 9
Ringen, Catherine 1, 2, 3, 4, 5, 6
Ringgaard, Kristian 1, 2
Rischel, Jørgen 1
Roark, Brian 1, 2
Roberts, Adam 1, 2
Roberts, Ian 1
Robins, R. H. 1
Roca, Iggy 1, 2
Roche, James 1, 2, 3
Roeder, Rebecca 1, 2
Rogers, James 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
Rogova, G. B. 1
Rohany Rahbar, Elham 1, 2
Rood, David S. 1n, 2
Rose, Sharon 1, 2, 3, 4, 5, 6
Rozenberg, Grzegorz 1, 2
Rubach, Jerzy 1, 2, 3, 4
Ruiz, José 1, 2
Ryan, Kevin 1, 2
Sagey, Elizabeth C. 1, 2, 3, 4, 5, 6, 7
Šagirov, A. K. 1, 2
Sakarovitch, Jacques 1, 2
Salmons, Joseph 1, 2, 3, 4, 5, 6
Salomaa, Arto 1, 2
Samedov, D. S. 1
Sammallahti, Pekka 1, 2, 3
Samuels, Bridget 1, 2n, 3, 4, 5, 6, 7
Sapir, Edward 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
Satta, Giorgio 1, 2, 3, 4
Saussure, Ferdinand de 1, 2
Schabes, Yves 1, 2, 3
Schachter, Paul 1, 2
Scheibman, Joanne 1, 2
Schepman, Astrid 1
Schiering, René 1, 2
Schmidt, Jürgen Erich 1, 2
Schwartz, Geoffrey 1n, 2
Schwartz, Jean-Luc 1, 2
Schwentick, Thomas 1
Scobbie, James M. 1, 2, 3n, 4, 5
Scott, Dana 1, 2
Segerup, My 1, 2n, 3, 4
Seiler, Hansjakob 1, 2
Selkirk, Elisabeth 1, 2, 3, 4, 5, 6
Shattuck-Hufnagel, Stefanie 1
Shaw, Patricia 1, 2, 3, 4
Shibatani, Masayoshi 1, 2
Shieber, Stuart 1, 2, 3, 4
Shopen, Timothy 1, 2
Siewierska, Anna 1
Silander, Megan 1, 2
Simon, Ellen 1, 2, 3, 4, 5
Simpson, Adrian P. 1n, 2
Singler, John 1, 2
Sipser, Michael 1, 2
Siptár, Péter 1
Skalička, Vladimír 1
Slobin, Dan 1, 2
Smith, Adam 1, 2
Smith, Bruce L. 1, 2
Smith, Geoff 1, 2
Smith, Henry Lee 1, 2
Smith, Nathaniel 1
Smith, Neil 1
Smith, Norval 1, 2, 3
Smith, Steven C. 1
Smith, Tony 1
Smolensky, Paul 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17
Sommer, Bruce A. 1, 2
Song, Jae Jung 1, 2
Spencer, Andrew 1
Sproat, Richard 1, 2
Stampe, David 1, 2
Stavness, Ian 1
Steinitz, Wolfgang 1, 2, 3, 4
Steriade, Donca 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14
Stevens, Kenneth N. 1, 2, 3, 4, 5, 6, 7n, 8, 9, 10, 11
Strandberg, Mathias 1, 2, 3, 4
Strange, Winifred 1, 2
Strother-Garcia, Kristina 1
Stuart-Smith, Jane 1, 2, 3n, 4, 5
Suzuki, Keiichiro 1, 2, 3, 4
Svantesson, Jan-Olof 1, 2, 3, 4
Sweet, Henry 1, 2, 3
Tabain, Marija 1, 2
Talkin, David 1, 2
Tallerman, Maggie 1, 2
Talmy, Leonard 1, 2
Tangi, Oufae 1, 2
Tanner, Herbert G. 1
Teleman, Ulf 1, 2
Tent, Jan 1, 2
Tesar, Bruce 1, 2, 3, 4
Thomas, Wolfgang 1, 2, 3
Thráinsson, Höskuldur 1
Timm, Jason 1
Topintzi, Nina 1, 2n, 3
Törkenczy, Miklós 1
Trager, George 1, 2
Trask, R. L. 1, 2
Trommelen, Mieke 1, 2, 3
Trubetzkoy, Nikolaj S. 1, 2, 3, 4, 5, 6, 7n, 8, 9, 10, 11, 12, 13, 14, 15, 16
Tsagov, M. 1, 2
Tsendina, Anna 1
Tukumu, Simon Nsielanga 1
Turčaninov, G. 1, 2
Turing, Alan 1, 2, 3, 4
Turpin, Myfany 1, 2
Ulseth, B. 1, 2
Urbina, Jon Ortiz de 1
Vaan, Michiel de 1, 2
Vago, Robert 1, 2, 3
Vajda, Edward 1, 2, 3, 4
Vallée, Nathalie 1, 2
Vance, Timothy J. 1, 2, 3, 4
Vanhove, Martine 1
Vaux, Bert 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13
Veilleux, Nanette 1
Velupillai, Viveka 1
Ven, Marco van de 1, 2
Venditti, Jennifer J. 1, 2, 3
Vennemann, Theo 1, 2, 3, 4
Vergnaud, Jean-Roger 1, 2, 3, 4
Verner, Karl 1
Versteegh, Kees 1, 2
Vidal, Enrique 1, 2
Visscher, Molly 1, 2
Vliet, Pete van der 1, 2, 3, 4, 5
Vogel, Irene 1, 2, 3
Vollmer, Heribert 1
Vu, Mai Ha 1
Walker, Rachel 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
Wang, Chilin 1, 2, 3, 4, 5, 6, 7, 8, 9
Watson, Janet 1
Weijer, Jeroen van de 1, 2, 3, 4
Weinberger, Steven 1, 2, 3
Wellcome, David 1, 2
Wells, John 1, 2
Westbury, John 1, 2, 3
Wetterlin, Allison 1, 2, 3, 4, 5
Wetzels, Leo 1, 2, 3, 4, 5
Whaley, Lindsay J. 1, 2
Wheeler, Max 1, 2
Whorf, B. L. 1
Wibel, Sean 1
Wiese, Richard 1, 2, 3, 4, 5, 6, 7
Wilson, Colin 1n, 2, 3, 4, 5, 6, 7, 8, 9, 10
Wiltshire, Caroline 1, 2, 3
Winteler, Jost 1
Winters, Stephen J. 1, 2, 3
Wissing, Daan 1, 2, 3
Wolfe, Andrew 1, 2, 3, 4
Wood, Sidney 1, 2
Wurzel, Wolfgang 1, 2, 3
Xolodovič, Aleksandr A. 1
Xrakovskij, Viktor S. 1
Xu, Zheng 1, 2
Yavas, Mehmet 1, 2, 3, 4
Yu, Alan C. 1, 2, 3, 4, 5, 6, 7, 8, 9
Zagona, Karen 1
Zanten, Ellen van 1, 2, 3
Zhang, Xi 1, 2, 3, 4, 5, 6, 7, 8, 9n, 10, 11
Zonneveld, Wim 1, 2, 3, 4, 5
Zwart, Jan-Wouter 1
Zwicky, Arnold M. 1
Endnotes
1
In the UPSID database (Maddieson & Precoda 1992) I have however not found a
language which only has the two places of articulation, labial and alveolar. For
accessing UPSID I have used Henning Reetz’s online interface: http://web.phonetik.uni-frankfurt.de/upsid.
2
Since this chapter was submitted, Gordon (2016) has appeared, which defines
phonological typology as follows: “Phonological typology is concerned with the
study of the distribution and behavior of sounds found in human languages of the
world” (p. 1).
3
As a test, consult the World atlas of language structures (WALS, http://wals.info/)
and decide for yourself whether the variables labeled “Phonology” are phonological
or phonetic, as you would draw the line.
4
Syntax-and-inflection typologists will sometimes, upon reflection, express surprise
that they had not noticed before how little phonology there was to be met with in
their own circles. Perceptions similar to mine have been reported in Hyman
(2007), and also in this volume.
5
This whole subsection 3.1 draws on earlier historiographic writings of mine, in
particular Plank (1991, 1992, 1993, 1998, 2001), where the reporting and
referencing are more conscientious. My history is intended as an “inside narrative”
(to borrow a term of Herman Melville’s), and thereby differs from historians’
histories of typology, which standardly begin with nineteenth-century German
Romanticism, a period that in my view produced little of substance that was novel
or profound. Let me just emphasise that the typological speculations and insights
referred to here were not private musings or only shared in private correspondence
(subsequently buried in archives, if surviving at all), but saw contemporary
publication.
6
Beauzée was employed by the École Royale Militaire in Paris, where they still
maintain a centre linguistique (http://www.rma.ac.be/clng/fr/index.html).
Although, unlike his latter-day successors, Beauzée was a grammairien rather
than a grammatiste (language teacher), his job was not to train future linguists of
the kind he himself was one. Probably August Friedrich Pott (1802–87) – better
known as a historical-comparative Indo-Europeanist despite his numerous
Humboldt-inspired contributions to typology – was the first to have been trained as
a general linguist (at least insofar as his doctoral dissertation at Göttingen was
about general linguistics, dealing with the semantics of prepositions across
languages), and whose academic responsibilities at the university of Halle an der
Saale then included the training of future general linguists (Plank 1995).
7
A “conjuncture” of vowel harmony and agglutinative morphology, where word
cohesion is otherwise rather loose, was suspected by Jan Baudouin de Courtenay in
the 1870s.
8
Morpurgo Davies (1997) is a masterful portrait of this century, an “inside
narrative” in a class of its own. Morpurgo Davies (1975) highlights the relationship
between historical and typological comparison. For further details also see Plank
(1991, 1995).
9
And sounds mattered even before one got started: there was an inbuilt historical
limitation to the Comparative Method, insofar as cognates, if not lost, would at
some point (after 8,000 years or so) become impossible to recognise, as the sound
shapes of morphemes would inexorably change over time.
10
A similar point has been made on similar grounds by Hyman (2007) (and
elsewhere, including in this volume).
11
These research initiatives are instructively portrayed by group leaders in Shibatani
& Bynon (1995). Paris did see groundbreaking research on phonological typology,
namely work centred around André-Georges Haudricourt’s Phonologie
panchronique (1940 etc., with an interim summary in La phonologie
panchronique by Claude Hagège & Haudricourt, Paris: Presses Universitaires de
France, 1978); but it was only later that this attained formal project status.
12
Not without initial opposition: some reviewers of the programme proposal sought
to block it as pointless; for them, the only respectable comparison was historical.
13
Regrettably, over the centuries, negative results – demonstrations that variables do
not co-vary – have continuingly been deemed less worthy of reporting and
recording.
14
Only the programme committees will know how the rejection rates compared for
phonological and morphosyntactic abstracts. There was probably never a bias
against phonology at the stage of abstract selection; but there were very few
acknowledged phonologists on these ALT committees, and common sense suggests
that the perceived expertise of abstract selectors is a factor encouraging or
discouraging abstract submission.
15
Details, so far as they could be recovered other than from memory, at: https://linguistlist.org/issues/9/9-874.html; http://listserv.linguistlist.org/pipermail/alt/2002-November/000039.html; http://typoling2016.sciencesconf.org/.
16
Figures supplied upon request. These figures would be similar for organisations
devoted to particular language families. In terms of specialised journals or also
specialised conferences, however, syntax does not seem far ahead of phonology
and morphology. Significantly, phonetics is the clear winner in this respect,
essentially forming a professional world of its own. Even passable all-rounders in
linguistics can yet be useless in phonetics (and vice versa).
17
In other grammars following this format, not written by phonologists, the
phonology sections are substantially shorter.
18
Jespersen, Sapir, and Bloomfield were all-rounders, but had substantial phonetic
or phonological work to their credit. Hyman has morphosyntax as a sideline. Evans
is a part-timer, but the best to be had among Australianists for present purposes.
19
On current evidence, the contributors to the present volume, with a single
exception (grammar-writing Hyman), are thus in the company of the likes of Jan
Baudouin de Courtenay, Mikołaj Kruszewski, Ferdinand de Saussure, Nikolaj S.
Trubetzkoy, Roman Jakobson, John Rupert Firth, Louis Hjelmslev, André
Martinet, Kenneth L. Pike, Morris Halle (who did write and co-author what could
have formed the phonology chapters of the grammars of two languages, Russian
and English).
20
Don’t be misled by dialect grammars: they tend to background what dialects
supposedly do not much differ in, namely syntax. For example, Die Kerenzer
Mundart des Kantons Glarus in ihren Grundzügen dargestellt by Jost Winteler
(Leipzig: Winter, 1876), renowned for its innovative phonology, devotes 147 pages
to this subject, 43 to inflection, and not one to the syntax of this Swiss variety of
Alemannic, a dialect of High German.
21
Heinz’ chapter in this volume is in the same spirit, but limits complexity
comparisons to phonological patterns.
22
If you define typology as only being about diversity, as is sometimes done in
phonological circles (and elsewhere), then matters are of course different. More on
this point presently, and also in Hyman in this volume.
23
The admirable inside history of phonology by Anderson (1985) doesn’t quite
highlight this theme of languages and typology in phonology vs. syntax. Nor does
the more recent collection of Honeybone & Bermúdez-Otero (2006), where
similarities between phonology and syntax are emphasised, rather than possible
differences in researching phonological and syntactic structures and architectures.
24
This overly spartan mode of description arguably prevented such typological
syntax from meaningfully engaging with diachronic syntax and from playing a
more significant role in experimental psycho- and neurolinguistics.
25
After Greenberg’s “dynamicised” typology and Haudricourt’s “panchronic
phonology”, the most determined single effort to explain co-variation as co-
evolution in phonology was Juliette Blevins’ Evolutionary phonology (Cambridge:
Cambridge University Press, 2004). Characteristically, the impact of this book was
felt more in phonology, theoretical as well as historical, than in typology, while
similar evolutionary work in morphosyntax typically had stronger reverberations
in typology than in syntax.
26
By far the longest phonology, at 624 pages, is that of Danish, and you wonder
whether to give credit to the language or the author (Hans Basbøll) for such
unusual opulence. Portuguese phonology only needs 170 pages (from Maria
Helena Mateus & Ernesto d’Andrade), almost as little as Chichewa syntax does
(166, Sam Mchombo); and it is moot to speculate whether such frugality reflects on
these languages or the describers of their phonology and syntax.
27
Admittedly it has been recognised that often what has been sampled were really
lower-level units such as “doculects”, namely those varieties of a language that
happen to have been documented (in a text or corpus, by a fieldworker, in a
particular grammar).
28
This useful work is often somewhat misleadingly called a “database” of “sound
inventories”. It contains phoneme inventories, which can be at considerable
remove from the primary data. Really they are theories, just as grammars are not
primary data but theories of languages.
29
UPSID’s convention of representing phonemes by their most frequent allophone,
rather than by an invariant feature bundle, could be seen as a partial
acknowledgment of this.
30
This is inconsistent with the contrastive hypothesis, according to which
phonological generalizations can only refer to contrastive features (Currie-Hall
2007, Dresher 2007).
31
Moloko allows as medial codas only the most sonorous consonants, the non-nasal
sonorants /r/, /l/, /w/ or /j/. Violations are eliminated by inserting /ǝ/.
32
I include the labial prosodies y and w within the phonemic representation, but they
are not phonemically attached to the last segment.
33
Perhaps through merger with an a-colored consonant inserted to satisfy ONSET.
34
See below for a defense of Jakobson’s generalization against claims that some
languages have only VC(C) syllables (Section 2.1) and that some languages have no
syllables (Section 2.2).
35
Lexical words should be distinguished from postlexical words formed by syntactic
processes such as cliticization, which are only subject to the postlexical phonology.
36
At the stem level, on the other hand, structure-preservation is a theorem of Stratal
OT, because the phonological inventory and stem structure of a language derives
from its stem-level constraint system. No special structure-preservation principle
is needed. Note further that with the equivalent update, rule-based Lexical
Phonology can provide essentially the same kind of rich word phonology as Stratal
OT.
37
REMOVING it by syllabification and desyllabification does; this will become
important below.
38
Such near-mergers had been reported in the earlier dialectological literature,
though their significance remained unappreciated. For example, DeCamp (1958,
1959) notes near-merger of four and for in what was then old-fashioned San
Francisco speech, since then replaced by complete merger.
39
These terms have also been used to refer to contextually restricted contrasts, such
as Spanish [r]:[ɾ] intervocalically (Hualde 2005), or Italian [ɛ]:[e] (only in stressed syllables), as well as to marginal, “fuzzy” contrasts (Scobbie & Stuart-Smith 2008).
Currie-Hall (2013) sorts out these various uses.
40
There are of course vertical subsystems consisting of minimally complex reduced
central vowels, such as English /ɨ/ and /ǝ/ (Rosa’s roses). Irish has a subsystem of
three short vowels /ɯ/, /Ѳ/, /a/, plus six long vowels /i:/, /Ѳ:/, /u:/, /e:/, /a:/,
/o:/ (Ó Siadhail 1989: 35–37). /ɯ/, /Ѳ/ have back allophones [u] and [o]
respectively before broad (velarized) consonants and front allophones [i] and [e]
before slender (palatalized) consonants, e.g. /lʹɯm/ → [lʹum] ‘with me’, /lʹɯNʹ/ →
[lʹiNʹ] ‘with us’.
41
Even in vertical systems, when the non-low vowels are not colored by a consonant
or prosody, they are often front rather than central. In the two-vowel system of
Arrernte, the non-high vowel appears as [i] in initial position where there is no
consonant to influence it, and Hale therefore set it up as /i/ (quoted in Green 1994:
35). Wichita has a three-vowel system /i/, /e/, /a/, with three degrees of length;
phonetically also [o] and [u]. /i/ ranges between [i] and [e], /e/ between [ɛ] and
[æ], and /a/ between low back unrounded [ɑ] and (when short) [ʌ] as in but, with
rounding next to /w/, rarely [u] (Rood 1975). In the variety of Kabardian described
by Colarusso (1992, 2006), the vowels transcribed as [ǝ] and [ɨ] are actually front
vowels.
42
These are nominal exceptions to Stieber’s Law, which says that allophonic features
cannot spread by analogy (see Manaster-Ramer 1994). But if Stieber’s Law is taken
as a generalization about l-phonemes, it may well be exceptionless.
43
For example, artificial and beneficial rhyme, even though they differ underlyingly
({-s-} vs. {-t-}), and keep and coop alliterate, even though their initial consonants
differ phonetically in backness.
44
The same is true of the similar earlier claim about the syllable structure of the
Kunjen dialects (Sommer 1970a, 1970b). As Sommer makes clear in the latter
article, their output syllabification actually conforms to Jakobson’s CV
generalization.
45
Two other ways to simplify stress have been proposed, also with strange syllable
structure. Topintzi & Nevins (2017) make initial consonants in Arrernte moraic,
with stress falling on the second mora, so that [mǝ́ɳǝ] is /mµǝ́µ.ɳǝµ/ and [e.nǝ́.kǝ] is /eµ.nǝ́µ.kǝµ/. Schwartz (2013) assigned vowels a “vocalic onset node” (≈
null onset), with other onset consonants being excluded by *COMPLEXONSET.
46
On one analysis onsetless syllables can be adjoined to an adjacent syllable to form
a “sesquisyllabic” complex (Kiparsky 2003).
47
Orthographic rr denotes an alveolar tap or trill, r a retroflex approximant [ɻ]
(transcribed as r. in Pensalfini’s and Breen’s work). rn, rt are retroflex [ɳ], [ʈ]. The
orthography uses h to mark dental place of articulation in th, nh etc. I have
replaced them by the IPA symbols to prevent confusion with aspiration.
48
Compare stress-sensitive root allomorphy in Italian, e.g. vádo, andáte, andáre
(Kiparsky 1996).
49
The last two examples are from Breen; thanks to Toni Borowsky for passing them
on.
50
The sources don’t reveal the stress of the Rabbit Talk forms; my guess is that they
stay on the same syllable as in the original word, e.g. áŋkemamp.
51
Another case where theorizing has been led astray by a misconstrual of abstract
phonemic and morphophonemic representations as phonetic transcriptions is that of Tundra Nenets word-final stops (Kiparsky 2006).
52
This is not to say that syllabicity has no phonetic correlates. For example, Fougeron
& Ridouane (2008) find that Tashlhiyt syllabic consonants are not longer than non-
syllabic consonants, but they are less coarticulated.
53
Tellingly, even Pāṇini, whose rich descriptive apparatus includes phonological and
morphological features, ordered rules, constraints, blocking, Theta-roles, linking,
and inheritance hierarchies, among others, did not use syllables, even though
Sanskrit very clearly has them (Kessler 1994); they simply would not have made his
grammar shorter. This is a case of the ICEBERG PROBLEM, fatal for the project of
“describing each language on its own terms”: a single language, however rich and
precise the description, cannot reveal all aspects of UG.
54
A caveat: syllable structure can change in the course of a derivation. The more
careful formulation has to be that it must be consistent at any given level of
representation. For example, in English rhythm, spasm, plasm are monosyllabic at
the stem level. If the nasal were syllabic at this level, it would get stressed in words
like rhýthm-ic (cf. átom, atómic), and words like éctoplàsm, ángiospàsm,
cýtoplàsm, cátaclàsm, hólophràsm, would be stressed on the second part, on the
pattern of èctopárasite, àngiothlípsis, èndotóxin, cỳtocóccus, càtatónia,
hòmophóbia, hòmomórphism, rather than on the pattern of éctomòrph,
ángiospèrm, éndolymph, cý → cyst, péricàrp, hómophòbe, hómomòrph. Spanish
[je] is a diphthong in the lexical phonology and behaves as a heavy syllable for
purposes of stress, but postlexically [j-] is resyllabified from the nucleus into the
onset, as shown by its allophonic realization (Harris & Kaisse 1999).
55
Hyman suggests that this distribution could be due to the lack of inputs that would
yield *CVVVCV, *CVCVVV (an accident or a conspiracy?). For example, *CVCVVV
must be of the form Root + Derivational suffix + Inflectional suffix, and this can be
neither /CVC-V-VV/, because this sequence would undergo vowel shortening, nor
/CVC-VV-V/, because derivational suffixes must be minimal syllables of the form -
(C)V.
56
I am grateful to Junko Itô and Stefan Kaufmann for information on Japanese.
57
Similarly the attempt to replicate the phonology of the source language in
loanwords, as in Labrune’s example baiorin˚ /baioriN/ ‘violin’.
58
Of course positive arguments against a VP constituent were sometimes adduced,
one being that subjects and objects c-command each other for purposes of
Condition C of Binding Theory. However, these arguments turned out to be fragile,
and the formulation of Condition C on which it relies has itself been questioned.
59
Or even to no vowels, as Barreteau (1988: 429–437) does for Mofu-Gudur, also a
Chadic language. His analysis predicts the surface vowels just from the consonants,
prosodies, and tone.
60
This has been generally accepted at least since Jakobson, Fant, & Halle (1952),
irrespective of whether syllabicity is represented featurally as in Chomsky & Halle
(1968), or by position in a syllabic constituent as in Kahn (1976), Gussenhoven
(1986), and later work.
61
(24a-j) are from B&P, (24k, l) are from P&B.
62
The length mark is inadvertently (I think) omitted by the authors in (24h,i,j), where
I have inserted it.
63
Words beginning with a central vowel are excluded because the central vowel is only
added after otherwise unsyllabifiable consonants.
64
On the other hand, it is conceivable that final high vowels are actually pronounced
with an offglide, as in English, where a word like bee [biy] is transcribed as [bi:].
65
The transcriptions in (28) are inferred from the information in P&B (2011: 30).
66
B&P’s remark in this connection that “technical problems within OT grammars can
always be solved by invoking additional constraints” is unduly dismissive. Actually
the data are predicted by the theory, in the sense that if monoconsonantal words did
not undergo epenthesis, the analysis would require otherwise unwanted constraints,
as the reader can verify.
67
This is not inconsistent with their formulation that vowels are inserted after
consonants that require a release. It may be that syllable-final consonants must be
released. Another question is whether it is justified to attribute the release property
not only to plosives, but also to nasal stops and /l/, as B&P have to do; only the
continuants /s/, /y/, and /w/ are licit medial codas. Phoneticians normally use the
term “release” for the separation of articulators in plosives, and explain that
plosives prefer syllable onsets because that is where their release burst is most
easily perceptible. The prohibition of sonorant stops in word-internal codas in
Kalam cannot be explained the same way, because they are not released with a noisy
burst, and would be easily perceptible even in coda position, and are in fact
common as codas across languages.
68
The two-vowel analysis is originally due to Kuipers (1960). After presenting it, he
goes further by eliminating first /ɐ/ by doubling the consonant inventory with a set
of ɐ-colored consonants, and then, in an extreme tour de force, eliminates the
remaining vowel /ǝ/ by enriching the phonemic representation with an abstract
juncture marker “:”. In this analysis, not only were all phonemes consonants, but
every consonant was a morpheme, and every morpheme was a consonant. It was
conclusively refuted by Halle (1970), Kumaxov (1973), and Colarusso (1982), and
has found no followers since.
69
It is always tautosyllabic, if Colarusso (1992: 15) is right that all intervocalic
consonants are ambisyllabic, e.g. /dǝda/ [ˈdi˺.dæ].
70
E.g. /Ɂa/ [Ɂæ] ‘hand’, /psǝɁa/ [ˈpsεɁæ] ‘wetness’, [ˈpsε>Ɂ˺ ˌɁæ>] in Colarusso’s
narrower transcription. They are represented as central vowels [ǝ] and [ɐ] in
Gordon & Applebaum (2006), which would be more consistent with the
expectations of Dispersion Theory (Flemming 1995, 2016, Vaux & Samuels 2015).
But in the samples I have heard they are definitely front in agreement with
Colarusso’s description: https://www.youtube.com/watch?v=gtuU5_U-gL4, https://www.youtube.com/watch?v=4-BY1vYfM_Q, https://www.youtube.com/watch?v=r_qQCUDaz-I.
71
Though in practice he writes the phonemes as /i,i,e,a/.
72
The data in (39) undermine Hale’s (2000) claim that the vowels of Marshallese are
not only phonologically but PHONETICALLY underspecified for backness and
rounding (he pointedly represents them at both levels with arbitrary dingbat
symbols), and that the twelve vowels in (36) and their diphthongal combinations
are introduced only in the acoustic/articulatory output. It would be hard to account
for the contrasts in (39) as resulting from coarticulation (at least under standard
assumptions about the phonology/phonetics interface). Deletion of glides would
also have to be an acoustic/articulatory process, in counterbleeding order with acoustic/articulatory assimilation.
73
Choi (1992: 68) also concludes that the smooth transition between vowel qualities
must be due to phonetic coarticulation processes: the F2 trajectory for Marshallese
/tyeap/ ‘to return’ shows no steady-state position for the tongue during the
realization of the diphthong.
74
The exception is Pāṇini’s Aṣṭādhyāyī, built from scratch strictly by using
minimum description length as the sole criterion for establishing both the
generalizations and the formalism in which these are expressed. This was done by
defining the technical terms and conventions of the system in the grammar itself, so
that minimum description length then requires that they are introduced if and only
if they reduce the overall length of the grammar — that is, if the minimum possible
cost of defining them is outweighed by the maximum possible grammatical
simplification they allow. Autochthonous philologies such as that of the Japanese
kokugakusha (Bedell 1968) and the Arabic tradition originating with Sibawayh
(Versteegh 1997) also describe their respective object language in its own terms, but
they were not comprehensive grammars in the modern sense. They were more
concerned with settling points of usage and philosophical issues than with
grammatical analysis per se.
75
The roadmap is not exhaustive. Notable earlier research which examines the nature
of phonological generalizations from a computational perspective but which will not
receive as much discussion as it should includes Potts & Pullum (2002) and Graf
(2010b), which also come from an intellectually similar perspective.
76
It is true that periodically some work is published in that direction, for example the
work on output-to-output correspondence (Benua 1995, 1997, and others).
77
A small minority of students suggest that vlas and sram might be English words, but they agree that they are less sure about these than about the others. For more on gradient versus categorical distinctions in phonotactics, see Hayes & Wilson (2008) and
versus categorical distinctions in phonotactics, see Hayes & Wilson (2008) and
Gorman (2013). In this chapter, we assume a categorical distinction for expositional
purposes, but as discussed in Heinz (2010a) nothing really hinges on this.
78
In fact, both rule-based and OT grammars predict there to be complete
phonological grammars which only instantiate the process of inter-strident
epenthesis or word-final devoicing. The fact no known phonology only contains a
map which would correspond to a single traditional phonological rule has never
been taken as a problem for either rule-based theories or OT.
79
More formally, it is decidable whether or not any particular string belongs to the
set. Interestingly, most logically possible sets of strings are not computably
enumerable (Turing 1937).
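To make the idea concrete, here is a minimal sketch (in Python; the constraint and the symbol inventory are invented for illustration) of what decidable membership looks like: a total procedure that halts on every input with a yes-or-no answer.

    def in_L(word, voiced_obstruents="bdgzv"):
        # L = the set of strings that do not end in a voiced obstruent
        return len(word) == 0 or word[-1] not in voiced_obstruents

    print(in_L("rat"), in_L("rad"))   # True False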
80
By the way, [ʃtoyonowonowaʃ] means ‘it stood upright’ (Applegate 1972).
81
The relevant feature could also be [distributed].
82
Alan Yu also points out that there is a perceptually-motivated diachronic path to
arriving at this language (pace Ohala 1981 and Blevins 2004). A language like
Samala would be the precursor language to one with First/Last Harmony. Since
interior sibilants in the precursor language are not perceived as accurately as ones
at word edges, some of them may change over time to disagreeing sibilants. This
would result in a language whose words obey First/Last Harmony.
83
The Non-Counting class also goes by the names Star-Free and Locally Testable with
Order (McNaughton & Papert 1971).
84
For those familiar with the FO formulae, here is the definition where x ⊲ y means y
is a successor of x, and x < y means x precedes y.
x ⊲ y ≝ x < y ∧ ¬(∃z)[x < z ∧ z < y]
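A minimal sketch of how this definition plays out (in Python; the word length and helper name are arbitrary): positions of a word are ordered by precedence, and the formula picks out exactly the immediate-successor pairs.

    def successor(x, y, positions):
        # x ⊲ y iff x < y and no position z intervenes between them
        return x < y and not any(x < z < y for z in positions)

    positions = range(5)              # a five-segment word
    print([(x, y) for x in positions for y in positions
           if successor(x, y, positions)])
    # [(0, 1), (1, 2), (2, 3), (3, 4)]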
85
For those familiar with MSO logic, here is a definition. Individual variables are
denoted with x, y, and X denotes a set variable. x ⊲ y means y is a successor of x,
and x < y means x precedes y.
closed(X) ≝ (∀x, y)[(x ∈ X ∧ x ⊲ y) → y ∈ X]
x < y ≝ (∀X)[(x ∈ X ∧ closed(X)) → y ∈ X]
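Since words are finite, the definition can be checked by brute force over all subsets of positions; the following sketch (Python, names mine, not from the chapter) does just that.

    from itertools import chain, combinations

    def subsets(positions):
        # every subset X of the set of positions
        return chain.from_iterable(combinations(positions, r)
                                   for r in range(len(positions) + 1))

    def closed(X, positions):
        # closed(X): whenever x is in X and y is x's successor, y is in X too
        return all(y in X for x in X for y in positions if y == x + 1)

    def precedes(x, y, positions):
        # x < y iff y belongs to every successor-closed set containing x
        return all(y in X for X in map(set, subsets(positions))
                   if x in X and closed(X, positions))

    positions = list(range(4))        # a four-segment word
    print(precedes(0, 3, positions))  # True
    print(precedes(2, 1, positions))  # False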
86
To see why the set of strings L containing only an even number of sibilants is not
Non-Counting, the characterization in (18) can be used. For any k, observe that
os²ᵏo belongs to L, but os²ᵏ⁺¹o does not.
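For concreteness, a small sketch (Python; taking ‘o’ as a non-sibilant and ‘s’ as a sibilant, as in the note) shows that for every k the two strings differ in membership, so no choice of k satisfies the characterization in (18):

    def even_sibilants(word, sibilants="sʃ"):
        # L = strings containing an even number of sibilants
        return sum(ch in sibilants for ch in word) % 2 == 0

    for k in range(1, 5):
        even = "o" + "s" * (2 * k) + "o"         # o s^(2k) o   -- in L
        odd  = "o" + "s" * (2 * k + 1) + "o"     # o s^(2k+1) o -- not in L
        print(k, even_sibilants(even), even_sibilants(odd))   # k True False, for every k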
87
Finley & Badecker (2009) found no difference between the absence of training and
a control condition in which the training items included both well-formed and
ill-formed words according to each targeted constraint type.
88
Readers may also wonder why the sub-structure that picks out the first/last
template [#—. . .—#] is not available. Here the reason is simple: both the successor
and precedence relations allow every word to have a model and distinct words to
have distinct models. This is not the case when the sub-structure indicated by the
template [#—. . .—#] is used to model words.
89
Bounded spreading may be more common with the feature nasal. For convenience,
the bound here is assumed to be the next relevant segment, but in fact the syllable
seems to be a natural domain (Odden 1994). See discussion in Nevins (2010,
Chapter 5).
90
Some work exists that characterizes string-to-string maps which correspond to the
NC stringsets (Lautemann et al. 2001). Also the author has work in progress
characterizing string-to-string maps for each of these regions.
91
http://pbase.phon.chass.ncsu.edu
92
This procedure works well for parallel patterns, but fails at detecting a common
feature change in chain shift alternations.
93
All the usual caveats about studies of phonological inventories apply to studies of
phonological alternations. A phoneme inventory and a phonological rule are both
very high-level descriptions, and many steps intervene between the numbers
reported in this paper, and the linguistic descriptions they are based on. The
criticisms of UPSID put forth by Simpson (1999: 349) apply equally to the segment
inventories in P-base, and the phonological alternations that this paper is mainly
concerned with are subject to similar issues. In general, these have been taken as
described in the grammars in which they appeared, except when contradicted by
available data. The aggregate data presented here is expected to highlight many
facts about phonological patterns that are interesting to typologists, and also to
reflect some conventions in how phonological patterns are typically described.
94
Other voiceless fricatives also turn to [h], but not as often as /s/ does, which is
consistent with /s/ being very frequent in inventories. Corresponding changes such
as /z/ → [ɦ] are not observed.
95
For the purpose of these definitions, place features = [anterior], [coronal], [back],
[high], or [labial], but not glottal; e.g., palatalized = having the palatalization
diacritic [ʲ].
Palatalization has been excluded from assimilation in the reporting of these results.
There is overlap of 69 patterns between Progressive Assimilation and Regressive
Assimilation because some patterns operate in both directions but do not require
both at the same time (which would be Bidirectional Assimilation).
96
Since /i/ and /u/ occur in such a large number of inventories, there is very little
opportunity to observe structure changing epenthesis involving these vowels.
97
Here and elsewhere, “stops” is used to mean obstruent stops, not including [ʔ],
which typically has very different phonological behavior, and has often been treated
as a glide by phonologists (e.g., Chomsky & Halle 1968).
98
In these balloon plots, frequency is indicated by area, and the epenthesis and
deletion balloons are superimposed (e.g., vowel epenthesis is slightly more frequent
than vowel deletion in the C_C context). In Figure 6.9, the balloons are
superimposed, but the “all epenthesis” balloons are by definition as large or larger
than the subset of epenthesis that is structure changing.
99
I: Input, O: Output, C: Change, ER: Environment right, EL: Environment Left, AR:
Assimilated Right, AL: Assimilated Left
100
Notwithstanding the move towards articulatorily oriented features in phonology,
the acoustics of features continued to be investigated by Stevens, Blumstein and
colleagues (cf. Stevens & Blumstein 1978; Blumstein & Stevens 1980; Lahiri,
Gewirth, & Blumstein 1984), the goal being to locate invariant acoustic cues for
distinctive features rather than for segments, which had proved to be impossible (cf.
Lahiri et al. 1984 for cues to distinguish coronal and labial diffuse stops).
101
The tier structures in the feature trees (1)–(4) are not relevant for the present
discussion.
102
The feature tree given in Halle et al. (2000: 389) does not indicate +/− values.
However, from their discussion of Irish assimilation it is obvious that, as before, the
features high, low, distributed, round, anterior, and back are binary.
103
It is possible that the features under the LARYNGEAL node should be independent
and not be subsumed under a single node.
104
We are assuming that these consonants should be [coronal] in C&D based on the
rest of their analysis.
105
In Hall’s terminology, rather confusingly, traditional “palatals” are called
“alveolopalatals”, and they differ in their coronality: “The term ‘palatals’ will be used
here to refer to true palatals, such as German [ç ʝ] and not to sounds like
Hungarian [c ɟ], which are alveolopalatal” (1997: 70, §2.6). According to Hall,
alveolopalatals are coronal whereas true palatals are not; thus, “alveolopalatals” [c ɟ
ɲ ɕ ʑ] are [+coronal], “true palatals” [ç ʝ] are [−back, +dorsal]; also, he assumes
that a four-way contrast among a single series of [+coronal –cont] is maximal
(1997: 88, (4)). Since [±back] is not an option, in FUL all of these consonants are
[coronal]. Hall also states, and here we agree, that no language contrasts
alveolopalatals [ɕ] and palatoalveolars like [ʃ], and in fact the same holds true for
palatals and palatalised velars – which is why, in his model, they have the same
features. However, no language contrasts alveopalatal [ɕ] and palatal [c] either,
and moreover there cannot be stops in both positions: one of the consonants has to
be a continuant (cf. Lahiri & Blumstein 1984).
106
The change leads most often to a [high] consonant such as [ç tʃ ʃ]. Sometimes
/t/ also becomes /s/ in a similar context, but that is more of an assibilation
whereby the stop becomes a sibilant fricative, again in the context of a high vowel
or glide.
107
L&E assumed that the palatalisation of [t] led to an affricate [tʃ]. This was an
incorrect assumption, as Carlos Gussenhoven points out, because it ought to be
more like [c], which is a stop. However, the second author of L&E, Vincent Evers,
finds that the diminutive of plaats ‘place’ ends up as [plaːtʃə] and is, thus, not very
different from the diminutive of plaat ‘plate’. What is important here is that for
FUL, both are [coronal], differing in affrication.
108
This comment has also been made by many phonologists including Sagey as well as
in SPE.
109
When the root has the “abstract” vowels /I U/, which in turn surface as [i~e] or
[u~ɔ], the suffix remains /a/. Our focus is not on the abstract vowels, which as
Hyman shows are entirely transparent and predictable, but on the first three
contexts.
110
Blevins provides these examples. If, however, /g/ > [dʒ] in Northwest Mekeo
in the context of /i/, it is not obvious to us where the example gina comes from.
111
Hybrid models which allow both abstract and episodic representations
(Pierrehumbert 2016) are hard to test. FUL does not deny that native listeners are
especially sensitive to familiar voices; surely one’s mother’s voice is easier to
identify in a noisy environment than the voice of a salesperson. Nor do we
disregard the fact that different dialects can cause hiccups in processing or that
hearing an unfamiliar dialect for many days at a time leads to familiarisation.
Nevertheless, we believe that individual lexical representations are abstract and do
not contain details of individual voices or dialects. Certainly representations can
change and become more flexible, but our claim is that basic contrasts and feature
representations along with concomitant processing implications are universal.
112
We do not exclude the possibility that there may be universal tendencies
concerning markedness; for example, we do not know of a language where [–nasal]
is marked. However, Rice (2003, 2007) shows that a number of presumed
universals of markedness are not empirically supported. Therefore, we adopt the
conservative position that all markedness relations are language specific. We are
prepared to modify this view where evidence exists in favour of a stronger position.
113
Markedness considerations thus dictate whether we name a feature [back] or
[front]: if a language has backing triggered by a back vowel but no fronting
triggered by a front vowel we call the harmony feature [back]; conversely, we
attribute fronting or palatalization to a feature [front]. In some cases the phonetic
ranges of vowels might influence the choice of label.
114
For example, in dialects descending from Proto-Eskimo that retain a four-vowel
system (either overtly or in underlying representations), the reflex of Proto-Eskimo
*/ə/ can assimilate to different vowels depending on context, but diachronically
this vowel has only merged with Proto-Eskimo */i/; see Compton & Dresher
(2011) and Section 4.2 below.
115
Zhang (1996) labels the features [labial] rather than [round], and [coronal] rather
than [front]. For our purposes these names are interchangeable and do not imply
any differences in the substance of these features.
116
Various proposals have been offered to account for why a single /ɔ/ does not cause
rounding harmony; a similar restriction occurs in Oroqen (Zhang & Dresher 1996;
Walker 2001, 2014). Based on the observation that a single irregular stem-internal
/ɔ/ does cause harmony in Baiyinna Oroqen (Li 1996; Walker 2014), Dresher &
Nevins (2017) propose that the restriction may actually be that a low suffix vowel
may obtain a [round] feature from a stem-internal /ɔ/, but not from an /ɔ/ that is
stem-initial.
117
This is not to say that there can be no other empirical evidence, for example from
synchronic alternations or diachronic mergers, that can choose between these
orderings.
118
Analyses that exploit the contrastive hierarchy in accounting for diachronic change
include: Zhang (1996) and Dresher & Zhang (2005) on Manchu; Barrie (2003) on
Cantonese; Rohany Rahbar (2008) on Persian; Dresher (2009: 215–225) on East
Slavic; Compton & Dresher (2011) on Inuit; Gardner (2012), Roeder & Gardner
(2013), and Purnell & Raimy (2013) on North American English vowel shifts;
Purnell & Raimy (2015) and Dresher (2017) on Old English; and large-scale studies
by Harvey (2012) on Ob-Ugric, Ko (2010, 2011, 2012) on Korean, Mongolic, and
Tungusic, and Oxford (2011, 2012a, 2015) on Algonquian.
119
See Oxford (2015) for the sources of these observations.
120
However, they are a counterexample to a proposed implicational universal to the
effect that “in a given language, low and mid front vowels apparently only trigger
palatalization if high front vowels trigger it too” (Kochetov 2011). Kochetov notes,
however, that “Bhat (1978) mentions some cases where mid front vowels palatalize
velars to the exclusion of high front vowels”. The typological rarity of such cases
may be due to the rarity of feature hierarchies like (42). More expected is the fact
that the palatalizations in question involve dorsal consonants, which, according to
Kochetov (2011), are “almost exclusively targeted by /i/ and other front vowels”,
unlike coronals which may be targeted by high vocoids.
121
See, for example, analyses of Lithuanian vowel length in Campos-Astorkiza (2009)
and Dresher (2009).
122
For instance, scholars do not agree on whether stød is a tonal configuration or a
separate phonological type of object.
123
This is not to say that there are no important distinctions of a finer kind. Meyer’s
(1937) average contours exhibit small timing differences between dialects,
employed by Öhman (1967) in his “Scandinavian accent orbit”, and Dalton & Ní
Chasaide (2007) have shown how systematic such differences can be between
varieties within what is considered the same dialect of Irish. The exact boundary, if
any, between phonology and phonetics in this regard is ultimately a matter of
model and interpretation.
124
Bruce (2007, 2010) adds an interesting and important discussion of North
Swedish, which we return to in Section 5.4.
125
For a general discussion of the analytical history of Scandinavian accent, see
Naydenov (2011).
126
Bruce (1977) called these “sentence accent” and “word accent”. Later they have
been referred to as “focus/focal accent” and “word accent”, “focal accent” and
“non-focal accent” (e.g., Bruce 2007), or “prominence level 2” and “prominence
level 1” (Myrberg 2010). For a fuller discussion of the reasons for adopting the
terms “big” and “small”, see Myrberg & Riad (2015).
127
The small accent is HL in both instances, but with different associations, yielding
the difference in timing (Bruce 1977). The leading H in accent 1 is sometimes in
evidence also in the big accent (hence HL*H), but the phonological status of this
tone is disputed. It is more stable in the small accent (Bruce 1977), and its presence
in the big accent is related to articulatory emphasis, and thereby the height of the
trailing H (Fant & Kruckenberg 2008). Engstrand (1995, 1997) has argued that the
leading H in the big accent is predictable from ambient intonation.
128
The tonal identity between lexical accent 2 and postlexical accent 2 is no
coincidence, but the result of a diachronic change, from postlexical to lexical (Riad
1998a).
129
Compound accent occurs in any form that contains two stresses, predominantly
but not exclusively compounds, cf. (24) below.
130
Several dialects admit either accent in compounds and thereby do not have a
“compound rule”. This is discussed in Section 5.3. In the comparisons we make we
shall use instances of accent 2 in compounds.
131
The orthographic form is divided into syllables according to retroflexion in the
phonetic form which is [ˈtʰɑːlɛspæˌʂuːnɛɳa]. Unknown speakers are coded as ‘NN’,
known ones with initials.
132
It has, however, not been reported that there would be a stable association point in
the following word, as has for example been reported for Standard Greek (Arvaniti
2002; Grice et al. 2000).
133
The secondary association here is motivated by the availability of TBUs. Grice,
Ladd, & Arvaniti (2000) discuss cases of secondary association of phrase accents
(corresponding to the separate focus gesture in Bruce’s terminology) in Hungarian,
Greek, and Cypriot Greek. Intonation tones may then associate to a stressed
syllable in the following word (Greek) or another syllable down the line, depending
on prosodic context.
134
Unlike the case in Central Franconian and Lithuanian, where there is a basic
requirement of two sonorant moras for the tonal distinction to be realized, there is
no sonority requirement in Swedish and Norwegian that affects the tonal contour.
The situation for Danish stød resembles that of Central Franconian and Lithuanian
in this regard (Basbøll 2005: 272).
135
To make this point clear, the alternative to prosodically motivated accent
assignment is lexically motivated accent assignment, which looks at the properties
of the first compound member in free form.
136
Some Central Franconian dialects, too, exhibit the possibility of more than one
accent within compounds or compound-like forms (Peters 2006: 120). This points
to a difference regarding culminativity, between, on the one hand, NGmc tonal
dialects, where there is invariably one accent per (maximal prosodic) word, always
with association to the primary stressed syllable, and on the other hand, Danish
and Central Franconian dialects of the Hasselt type, where there can be more than
one accent in a word, hence in both primary and secondary stressed syllables.
137
The actual prominence FUNCTION is sometimes carried by the boundary tone,
when H%, cf. Section 7.
138
In the Norwegian tradition, where one distinguishes “high-tone” dialects from
“low-tone” dialects, the terminology refers to the value of the prominence tone
(i.e., the first tone of accent 1). This is a rather unfelicitous choice of category, since
it does not necessarily single out a phonological unit. In CSw the prominence tone
is L*H, i.e., a bitonal unit. Calling this a “low-tone” dialect would refer to just half
of the phonological category.
139
The boundary H% can also be followed by another boundary tone L%, which we
disregard here.
140
For instance, speakers have developed a sensitivity to a secondary association.
Hognestad proposes that this ongoing change might lead to generalized connective
accent 2 in compounds. At this stage, there is not (yet) a “compound accent”, and
all speakers still exhibit both accent 1 and accent 2 in compounds. The shape of
accents with sensitivity to the final secondary stress contains an early second peak
followed by a characteristic high plateau (rather than the low floor that occurs in
Central Swedish, which has the same tonal make-up). An example of such a high
plateau occurs in the accent 1 compound in (22).
141
In informal conversation this informant exhibits a pattern more like the other
informants his age.
142
The analyses of the three Stavanger forms do not necessarily agree with those
given by Hognestad (2012).
143
There have been proposals that there are dialects that have very nearly the same
tonal makeup in accent 1 and accent 2 (Kristoffersen 2007 for Oppdal, Segerup
2004 for Göteborg). Under the analysis pursued here, such cases would be expected to
have a phonetic explanation in terms of realization, and not to lend themselves to
immediate phonological translation as identical melodies.
Otherwise, there should be nothing in the way of a generalized, connective accent 1
in compounds, which is unattested.
144
Bruce (2003: 246) states that the focus gesture, i.e., the second rise, is not
coordinated with the secondary stress in compounds in the Göta varieties.
However, Riad & Segerup (2008) found that the L*H was systematically associated
with the last (secondary) stress in compounds. It is the trailing H of L*H which is not
obviously timed with that syllable.
145
Thanks to Bengt Nordberg who pointed out (p.c.) that there appears to be a dialect
boundary cutting through the city of Eskilstuna.
146
It is not entirely clear that the association pattern for the prominence is LH* rather
than L*H. Closer investigation is required.
147
One could have imagined that the (post)lexical H* would have been employed for
prominence purposes in some dialect, in parallel with what we see for the
boundary H% in East Norwegian. However, that would have predicted the
possibility of a contrast between accent 2 as H*L and accent 1 as L*H, but this has
not been reported for any dialect. This adds weight to the generality of the
privative nature of the accent distinction.
148
As mentioned, it is not clear whether there is a strict separation between the
two consecutive L’s in South Swedish.
149
If the argumentation starts from the number of syllables as basic to accent 2, the
morphological patterning is left unexplained.
150
There is clearly a connection between the mora-based tonal assignment, the
sonority restriction and the segmental dependence, though the exact typological
implications, also with respect to tonal systems of the South East Asian types,
remain to be worked out.
151
An exception is Clements & Keyser (1981), who worked with multiply branching
syllables.
152
Many speakers neutralize the word accent contrast on word-final monomoraic
accented syllables in monosyllabic (cf. (5a) and (5b)) and disyllabic words, as in
/hana/ ‘nose’ vs. /haná/ ‘flower’ in utterance-final declarative and interrogative
contexts (cf. Vance 1995). In words of more than two syllables, the contrast is
reliably preserved in declarative intonation (L%), due to the presence of a
preceding syllable with Hα. In the unaccented case, the pitch falls from peninitial
Hα to mid pitch at the word end, but in the accented case, it rises towards H* and
then falls to mid in the last part of the final syllable (Warner 1997). In questions,
the situation in trisyllabic words is similar to that in the disyllabic cases, so that the
prediction is that trisyllabic minimal pairs are not distinct when spoken with
question intonation in the speech of speakers who neutralize disyllables and
monosyllables. I am not aware of experimental work on this question.
153
The prenuclear pitch accent might be H*L, with L being aligned rightmost, as in
my descriptions of English and Dutch, with somewhat delayed pronunciation of
H*. This is in tune with the fact that the L-target is undershot under tone
crowding. The motivation for LH as opposed to HL would appear to be the closer
proximity of the two tones, but since there is no constant timing relation between
them, that motivation is slim.
154
English has a vacuous restriction of one per foot, as in Cálifórnia (Pierrehumbert
1980), by virtue of its TBU, the stressed syllable, but Japanese for instance
routinely has more TBUs in the domain in which only one accent may occur.
155
The analytical approach to the nature of accent has no implications for its status in
the grammar. In Gussenhoven (2004: 42), I drew a distinction between analytical
concepts like “word melody” and “accent”, which are based on distributional
considerations, and phonological concepts like segments and prosodic
constituents, which typically have measurable properties, with no implication that
one type of concept is somehow superior to the other. On this, van der Hulst (2012:
1519) commented that ‘[CG] sees accent as an analytic device, suggesting that it is
the “invention” of the linguist, adopted in order to organize data, and that
therefore it is not as “real” as an H tone or “stress”, which we can “hear”.’ To
clarify, if a language were to have foot-based generalizations, while at the same
time failing to reveal foot heads in the phonetic record, the foot would be an
analytical notion, i.e., one based on distributional facts only, but it would still be a
prosodic constituent of that language.