Explaining grammaticalization (the standard way)
BART GEURTS
Since nobody knows how to draw the line between lexical expressions and
grammatical devices, it is natural to suppose that, in some sense, the tran-
sition from lexicon to grammar is a gradual one. There is a continuum, it
seems, bounded by purely lexical items on one end and purely grammatical
items on the other, with many expressions lying somewhere between these
opposites. This, at any rate, is the view that underlies the notion of gram-
maticalization, which by definition is a process of language change in which
an expression moves away from the lexical pole and toward the grammati-
cal pole. This type of change is quite common, but it turns out that shifts in
the opposite direction, away from the grammatical pole and toward the
lexical pole, are practically nonexistent. The asymmetry between grammati-
calization and degrammaticalization is the topic of the following remarks.
There is a more or less standard view of grammaticalization, which has
recently been challenged, in this journal, by Haspelmath (1999). The pur-
pose of this note is twofold: I want to show how Haspelmath’s criticism can
be met and discuss some of the problems his own proposal runs into.
One of the best-known instances of grammaticalization is "Jespersen's
cycle":
The history of negative expressions in various languages makes us witness the
following curious fluctuation: the original negative adverb is first weakened, then
found insufficient and therefore strengthened, generally through some additional
word, and this in its turn may be felt as the negative proper and may then in course
of time be subject to the same development as the original word (Jespersen 1917: 4).
and finally emerge as an ordinary noun, this kind of thing rarely happens
in practice. This discrepancy calls for an explanation: why is degrammati-
calization so rare, while grammaticalization is quite common? There is a
more or less standard answer to this question, which goes back at least as
far as von der Gabelentz (1891) and sees grammaticalization as resulting
from the interaction between two opposite forces: effectiveness and effi-
ciency (also known as clarity vs. economy, force of diversification vs. force
of unification, hearer’s economy vs. speaker’s economy, Q-principle vs.
I-principle, and so on; this is a terminological free-for-all, apparently).
On the one hand, speakers seek to make themselves understood and
therefore strive for maximally effective messages, but on the other hand,
there is a general tendency not to expend more energy than is strictly
necessary and therefore to prefer economical forms to more elaborate
ones. Grammaticalization begins when a form a that may be efficient but
is felt to lack in effectiveness is replaced with a periphrastic, and therefore
less economical, locution b calculated to enhance effectiveness. Then b gets
the upper hand and wears down due to the general drive toward efficiency
of expression, until it is weakened to the point where it has to be replaced
by some c.
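The cycle just described can be pictured with a toy simulation. This is my own illustration, not anything from the paper: the function name and all numeric values (the "bulk" of a form, the intelligibility threshold, the size of the periphrastic renewal) are arbitrary assumptions chosen only to make the dynamic visible. Efficiency steadily erodes the current form; once it falls below the threshold of effectiveness, a longer form replaces it, and the process starts over.

```python
# Toy model of the standard view (illustrative only; all numbers are
# arbitrary assumptions, not measurements from the paper).
def jespersen_cycle(steps=12, start=5, threshold=2, renewal=6):
    """Track the phonetic 'bulk' of a form over successive changes."""
    length = start
    history = [length]
    for _ in range(steps):
        length -= 1             # efficiency: the form erodes
        if length < threshold:  # effectiveness: too weak to do its job...
            length = renewal    # ...so a periphrastic b replaces a
        history.append(length)
    return history

print(jespersen_cycle())
# The trajectory falls, jumps back up, and falls again: cyclic change.
```

In this caricature the two forces do not meet at an equilibrium but take turns, which is one way of picturing why their tug-of-war yields cyclic change rather than stasis.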
This is the standard view of grammaticalization, which has recently been
challenged by Haspelmath (1999). Referring to the opposing forces of
effectiveness and efficiency ("clarity" and "economy," in his terminology),
Haspelmath asks himself and his readers,
The real problem is to explain why the conflicting tendencies do not cancel each
other out, leading to stasis rather than change — why doesn’t erosion stop at the
point where it would threaten intelligibility? Or alternatively, why doesn’t the
tug-of-war between the two counteracting forces lead only to a back-and-forth
movement? (Haspelmath 1999: 1052)
are while expanded forms are not, and he is taken to task
for this by Haspelmath (1999: 1050) on the grounds that
[...] the accuracy of predictability is generally quite low. Although we can exclude
certain changes, there is no way to predict, say, whether a [p] will be reduced to a
[w] or a [b], or whether going to will be reduced to [gʌnə] or [gɔnə]. Similarly, the
degree of predictability in lexical-semantic change is very low, and yet words change
their meanings all the time. Thus, why shouldn’t the preposition on become a noun
**owan ‘top’ or ‘head’ for instance?
Note
1. I am grateful to an anonymous reader for Linguistics for her or his comments on the
first version of this paper. Correspondence address: Department of Philosophy,
University of Nijmegen, P.O. Box 9103, NL-6500 HD Nijmegen, The Netherlands.
E-mail: [email protected].
References
Cann, R. (2000). Functional versus lexical: a cognitive dichotomy. In The Nature and
Function of Syntactic Categories, R. Borsley (ed.), 37–78. Syntax and Semantics 32.
London: Academic Press.
Givón, T. (1975). Serial verbs and syntactic change: Niger-Congo. In Word Order and
Word Order Change, C. N. Li (ed.), 47–112. Austin: University of Texas Press.
Haspelmath, M. (1999). Why is grammaticalization irreversible? Linguistics 37(6),
1043–1068.
Hock, H. H. (1991). Principles of Historical Linguistics, 2nd ed. Berlin: Mouton de Gruyter.
Hopper, P. J. and Traugott, E. C. (1993). Grammaticalization. Cambridge: Cambridge
University Press.
Horn, L. R. (1989). A Natural History of Negation. Chicago: University of Chicago Press.
Jespersen, O. (1917). Negation in English and Other Languages. Copenhagen:
Munksgaard.
Keller, R. (1994). On Language Change: The Invisible Hand in Language. London:
Routledge.
von der Gabelentz, G. (1891). Die Sprachwissenschaft, ihre Aufgaben, Methoden und
bisherigen Ergebnisse. Leipzig: Weigel.