LFG PDF
Mary Dalrymple
Centre for Linguistics and Philology
Walton Street
Oxford OX1 2HG
United Kingdom
[email protected]
Abstract
Lexical Functional Grammar (LFG) is a linguistic theory which studies the
various aspects of linguistic structure and the relations between them.
Traditional LFG analyses focus on two syntactic structures. Constituent
structure (c-structure) represents word order and phrasal groupings, and
functional structure (f-structure) represents grammatical functions like
subject and object. These structures have separate representations, but are
related to each other in systematic ways. Recent LFG work includes
investigations of argument structure, semantic structure, and other
linguistic structures and their relation to c-structure and f-structure.
the subject phrase in the c-structure tree is related to the subject f-structure
by means of a function which relates nodes of the c-structure tree to parts of
the f-structure for a sentence. Relations between c-structure, f-structure, and
other linguistic levels have also been explored and defined in terms of functional
mappings from subparts of one structure to the corresponding subparts of other
structures.
The overall formal structure and basic linguistic assumptions of the theory
have changed very little since its development in the late 1970s by Joan Bresnan,
a linguist trained at the Massachusetts Institute of Technology, and Ronald
M. Kaplan, a psycholinguist and computational linguist trained at Harvard
University. Bresnan (1982) is a collection of influential early papers in LFG;
recent works providing an overview or introduction to LFG include Dalrymple
et al. (1995), Bresnan (2001), Dalrymple (2001), Falk (2001), and Kroeger
(2004).
2 Constituent structure
Languages vary greatly in the basic phrasal expression of even simple sentences.
Basic word order can be verb-initial (Malagasy), verb-final (Japanese), or verb-
medial (English). Word order correlates with grammatical function in some
languages, such as English, in which the subject and other arguments appear
in particular phrase structure positions. In other languages, word order is
more free, and grammatical functions are identified by casemarking or
agreement rather than phrasal configuration: in many languages, there is no specific
phrasal position where the subject or object must always appear. Requirements
for phrasal groupings also differ across languages. In English, for example, a
noun and any adjectives that modify it must appear together and form a phrasal
unit. In many other languages, including Latin, this is not necessary, and a
noun can be separated from its modifying adjectives in the sentence. LFG’s
constituent structure represents word order and phrasal constituenthood.
(1) David is sleeping.

    [IP [NP [N′ [N David]]] [I′ [I is] [VP [V′ [V sleeping]]]]]
position determines the functional role of a phrase.
A phrase can dominate other constituents besides its head. LFG does not
require phrase structure trees to be binary branching, and so there can be more
than two daughters of any node in a c-structure tree. The nonhead daughter
of a maximal phrase is called its specifier, and the nonhead sisters of a lexical
category are its complements. This is shown schematically in (2):
(2) [XP YP [X′ X (complements of X)]]

    (YP is the specifier of XP; X is the head.)
(3) Anna såg boken.
    Anna saw book.def
    ‘Anna saw the book.’

    [IP [NP [N′ [N Anna]]] [I′ [I såg] [VP [V′ [NP [N′ [N boken]]]]]]]
Nonhead daughters are also only optionally present. In Japanese and other
so-called “pro-drop” languages, a verb can appear with no overt arguments.
If no overt arguments of a verb are present, the c-structure tree contains only
the verb. As a relatively free word order language, Japanese makes use of the
exocentric category S, and so an utterance S can consist of a single constituent
of category V:
(4) koware-ta
    break-PAST
    ‘[It/Something] broke.’

    [S [V kowareta]]
(5) a. IP −→ NP I′

    b. [IP NP I′]
3 Functional structure
[ PRED ‘DEVOUR⟨SUBJ,OBJ⟩’
  SUBJ [ PRED ‘DAVID’ ]
  OBJ  [ SPEC A
         PRED ‘SANDWICH’ ] ]
For clarity, many of the features and values in this f-structure have been
omitted, a practice often followed in LFG presentations. The full f-structure
would contain tense, aspect, person, number, and other functional features.
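F-structures of this kind can be modeled informally as nested attribute-value dictionaries. The following sketch (in Python; the dictionary encoding and the ASCII angle-bracket notation are our own, not part of the LFG formalism) represents the f-structure above:

```python
# Illustrative sketch only: an f-structure modeled as a nested Python dict.
# Feature names follow the text; the <...> notation inside the PRED value
# is an ASCII stand-in for the angled brackets of semantic forms.
f_structure = {
    "PRED": "DEVOUR<SUBJ,OBJ>",
    "SUBJ": {"PRED": "DAVID"},
    "OBJ": {"SPEC": "A", "PRED": "SANDWICH"},
}

# A feature path such as (f OBJ PRED) is read off by successive lookups:
print(f_structure["OBJ"]["PRED"])  # SANDWICH
```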
Every content word in a sentence contributes a value for the feature PRED.
These values are called semantic forms. In the functional structure, semantic
forms are surrounded by single quotes: the semantic form contributed by the
word David is ‘DAVID’.
An important property of semantic forms is that they are uniquely
instantiated for each instance of their use, reflecting the unique semantic
contribution of each word within the sentence. This is occasionally indicated
by associating a unique numerical identifier with each instance of a semantic
form, as in (8):
(8) David devoured a sandwich.

    [ PRED ‘DEVOUR37⟨SUBJ,OBJ⟩’
      SUBJ [ PRED ‘DAVID42’ ]
      OBJ  [ SPEC A
             PRED ‘SANDWICH14’ ] ]
In (8), the particular occurrence of the semantic form for the word David as it is
used in this sentence is represented as ‘DAVID42 ’. Another use of David would be
associated with a different unique identifier, perhaps ‘DAVID73 ’. Representing
semantic forms with explicit numerical identifiers clearly shows that each word
makes a unique contribution to the f-structure. However, the identifiers also add
unnecessary clutter to the f-structure, and as such are usually not displayed.
A verb or other predicate generally requires a particular set of arguments:
for example, the verb devoured requires a subject (SUBJ) and an object (OBJ).
These arguments are said to be governed by the predicate; equivalently, the
predicate is said to subcategorize for its arguments (see
subcategorization). The semantic form contributed by a verb or other
predicate contains information about the arguments it governs. As shown
above, the governed arguments appear in angled brackets: ‘DEVOUR⟨SUBJ,OBJ⟩’.
The LFG requirements of Completeness and Coherence ensure that all and
only the grammatical functions governed by a predicate are found in the
f-structure of a grammatically acceptable sentence. For example, the
unacceptability of example (9) shows that the verb devoured cannot appear
without an OBJ:
This sentence violates the principle of Coherence, according to which only the
grammatical functions that are governed by a predicate can appear. Since the
sentence contains a grammatical function that the verb devour does not govern,
it is incoherent.
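The two principles can be sketched as simple checks over a dictionary encoding of f-structures, assuming the governed functions are listed between angle brackets in the PRED value (all names and the encoding here are illustrative, not part of LFG):

```python
# Hypothetical sketch of the Completeness and Coherence checks; the
# dictionary encoding of f-structures and all names are illustrative.
GOVERNABLE = {"SUBJ", "OBJ", "COMP", "XCOMP", "OBJ-THETA", "OBL-THETA"}

def governed_functions(pred):
    """Read governed functions out of a semantic form like 'DEVOUR<SUBJ,OBJ>'."""
    inside = pred[pred.index("<") + 1 : pred.index(">")]
    return {gf.strip() for gf in inside.split(",")}

def complete(f):
    # Completeness: every function governed by the PRED is present.
    return governed_functions(f["PRED"]) <= set(f)

def coherent(f):
    # Coherence: every governable function present is governed by the PRED.
    present = {attr for attr in f if attr in GOVERNABLE}
    return present <= governed_functions(f["PRED"])

# *David devoured: the governed OBJ is missing, so the f-structure is incomplete.
devoured = {"PRED": "DEVOUR<SUBJ,OBJ>", "SUBJ": {"PRED": "DAVID"}}
print(complete(devoured))  # False
print(coherent(devoured))  # True
```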
The grammatical functions that a predicate can govern are called governable
grammatical functions. The inventory of universally-available governable
grammatical functions is given in (11). Languages differ as to which of these
functions are relevant, but in many languages, including English, all of
these functions are used.
(11) SUBJ:   subject
     OBJ:    object
     COMP:   sentential or closed (nonpredicative) infinitival complement
     XCOMP:  an open (predicative) complement, often infinitival, whose SUBJ
             function is externally controlled
     OBJθ:   a family of secondary OBJ functions associated with a particular,
             language-specific set of thematic roles; in English, only
             OBJTHEME is allowed, while other languages allow more than one
             thematically restricted secondary object
     OBLθ:   a family of thematically restricted oblique functions such as
             OBLGOAL or OBLAGENT, often corresponding to adpositional phrases
             at c-structure
There are two nongovernable grammatical functions. The function ADJ is the
grammatical function of modifiers like in the park, with a hammer, and
yesterday. The function XADJ is the grammatical function of open predicative
adjuncts whose subject is externally controlled; as with the governable
grammatical function XCOMP, the X in the name of the function indicates that
it is an open function whose SUBJ is supplied externally. The phrase filling
the XADJ role is underlined in (13).
In (13a) and (13b), the open adjunct XADJ is controlled by the subject of the
main clause: it is David that opened the window, and it is David that is naked.
In (13c), the XADJ is controlled by the object: it is the celery that is raw.
Unlike governable grammatical functions, more than one adjunct function
can appear in a sentence:
Since the ADJ function can be multiply filled, its value is a set of f-structures:
[ PRED ‘DEVOUR⟨SUBJ,OBJ⟩’
  SUBJ [ PRED ‘DAVID’ ]
  OBJ  [ SPEC A
         PRED ‘SANDWICH’ ]
  ADJ  { [ PRED ‘YESTERDAY’ ]
         [ PRED ‘AT⟨OBJ⟩’
           OBJ [ PRED ‘NOON’ ] ] } ]
The same is true of XADJ: more than one XADJ phrase can appear in a
single sentence:
(16) Having opened the window, David ate the celery naked.
(17)
Feature                         Value
Person: PERS                    1, 2, 3
Gender: GEND                    MASC, FEM, ...
Number: NUM                     SG, DUAL, PL, ...
Case: CASE                      NOM, ACC, ...
Surface form: FORM              Surface word form
Verb form: VFORM                PASTPART, PRESPART, ...
Complementizer form: COMPFORM   Surface form of complementizer: THAT,
                                WHETHER, ...
Tense: TENSE                    PRES, PAST, ...
Aspect: ASPECT                  F-structure representing a complex
                                description of sentential aspect; sometimes
                                abbreviated as e.g. PRES.IMPERFECT
Pronoun type: PRONTYPE          REL, WH, PERS, ...
The values given in this chart are the ones that are most often assumed, but
some authors have argued for a different representation of the values of some
features. For example, Dalrymple & Kaplan (2000) argue for a set-based
representation of the PERS and GEND features to allow for an account of
feature resolution in coordination, and of the CASE feature to allow for case
indeterminacy. Some studies assume a PCASE feature whose value specifies the
grammatical function of its phrase; in more recent work, Nordlinger (1998)
provides a theory of constructive case, according to which a casemarked
phrase places constraints on its f-structure environment that determine its
grammatical function in the sentence. This treatment supplants the
traditional treatment of obliques in terms of the PCASE feature.
(18) (f TENSE)

This expression refers to the value of the TENSE feature in the f-structure
f. If we want to specify the value of that feature, we use an expression
like:

(19) (f TENSE) = PAST

This defining equation specifies that the feature TENSE in the f-structure f
has the value PAST.
We can also specify that a feature has a particular f-structure as its value.
The expression in (20) specifies that the value of the SUBJ feature in f is the
f-structure g:
(20) (f SUBJ) = g
Some features take as their value a set of functional structures. For example,
since any number of adjuncts can appear in a sentence, the value of the feature
ADJ is a set. We can specify that an f-structure h is a member of the ADJ set
with the following constraint, using the set-membership symbol ∈:
(21) h ∈ (f ADJ)
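A minimal solution to a set of defining constraints can be sketched by building exactly the structure the equations demand and nothing more. The solver below is a toy of our own; the triple encoding of equations and the lowercase-name convention for f-structure variables are assumptions of this sketch, not LFG notation:

```python
# Toy solver: build the minimal f-structure satisfying defining equations.
# A triple (name, feature, value) encodes (name FEATURE) = value; a
# lowercase value is taken to name another f-structure (our assumption).
def solve(equations):
    structures = {}
    def fs(name):
        return structures.setdefault(name, {})
    for name, feature, value in equations:
        if isinstance(value, str) and value.islower():
            fs(name)[feature] = fs(value)   # (f SUBJ) = g : share the structure
        else:
            fs(name)[feature] = value       # (f TENSE) = PAST : atomic value
    return structures

# (f TENSE) = PAST, (f SUBJ) = g, (g PRED) = 'DAVID'
eqs = [("f", "TENSE", "PAST"), ("f", "SUBJ", "g"), ("g", "PRED", "DAVID")]
result = solve(eqs)
print(result["f"]["SUBJ"] is result["g"])  # True: f's SUBJ is g itself
```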
The constraints discussed so far are called defining constraints, since they
define the required properties of a functional structure. An abbreviated
f-description for a sentence like David sneezed is:

(22) (f PRED) = ‘SNEEZE⟨SUBJ⟩’
     (f TENSE) = PAST
     (f SUBJ) = g
     (g PRED) = ‘DAVID’

This f-description holds of the following f-structure, where the f-structures
are annotated with the names used in the f-description (22):
(23) David sneezed.

     f: [ PRED  ‘SNEEZE⟨SUBJ⟩’
          TENSE PAST
          SUBJ  g: [ PRED ‘DAVID’ ] ]
Notice, however, that the f-description also holds of the f-structure in (24),
which also contains all the attributes and values that are mentioned in the
f-description in (22):
(24) f: [ PRED  ‘SNEEZE⟨SUBJ⟩’
          TENSE PAST
          SUBJ  g: [ PRED ‘DAVID’
                     PERS 3 ]
          ADJ   { [ PRED ‘YESTERDAY’ ]
                  [ PRED ‘AT⟨OBJ⟩’
                    OBJ [ PRED ‘NOON’ ] ] } ]
(25) (f SUBJ NUM) =c SG

When this expression appears, the f-structure f that is the minimal solution
to the defining equations must contain the feature SUBJ whose value has a
feature NUM with value SG. The constraining equation in (25) does not hold of
the f-structure in (23), since in that f-structure the value of the NUM
feature has been left unspecified, and the SUBJ of f does not have a NUM
feature with value SG.
In contrast, the functional description in (26a) for the sentence David sneezes
has a well-formed solution, the f-structure in (26b):
(26) a. (f PRED) = ‘SNEEZE⟨SUBJ⟩’
        (f TENSE) = PRES
        (f SUBJ) = g
        (g PRED) = ‘DAVID’
        (g NUM) = SG
        (f SUBJ NUM) =c SG

     b. f: [ PRED  ‘SNEEZE⟨SUBJ⟩’
             TENSE PRES
             SUBJ  g: [ PRED ‘DAVID’
                        NUM  SG ] ]
Here, the value SG for the NUM feature for g is specified in the second-to-last
line of the functional description. Thus, the f-structure in (26b) satisfies the
defining constraints given in the first five lines of (26a). Moreover, it satisfies
the constraining equation given in the last line of (26a).
We can also place other requirements on the minimal solution to the defining
equations in some f-description:

(27) a. (f TENSE) ≠ PRESENT
     b. (f TENSE)
     c. ¬(f TENSE)

The expression in (27a) requires f not to have the value PRESENT for the
feature TENSE, which can happen if f has no TENSE feature, or if f has a
TENSE feature with some value other than PRESENT. When it appears in a
functional description, the expression in (27b) is an existential constraint,
requiring f to contain the feature TENSE, but not requiring any particular
value for this feature. We can also use a negative existential constraint to
require an f-structure not to contain a feature, as in (27c), which requires
f not to contain the feature TENSE with any value whatsoever.
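Assuming the dictionary encoding of f-structures used in earlier sketches, the difference between these constraint types can be illustrated as checks run after the minimal solution has been built (the encoding and the helper name are ours):

```python
# Illustrative checks over an already-built minimal f-structure.
f = {"PRED": "SNEEZE<SUBJ>", "TENSE": "PAST", "SUBJ": {"PRED": "DAVID"}}

def value_at(fs, path):
    """Follow a feature path such as (f SUBJ NUM); None if absent."""
    for attr in path:
        if not isinstance(fs, dict) or attr not in fs:
            return None
        fs = fs[attr]
    return fs

# Constraining equation (f SUBJ NUM) =c SG: NUM must already have value SG.
print(value_at(f, ["SUBJ", "NUM"]) == "SG")   # False: NUM was left unspecified

# Existential constraint (f TENSE): the feature must be present.
print(value_at(f, ["TENSE"]) is not None)     # True

# Negative existential constraint: f must not contain TENSE at all.
print(value_at(f, ["TENSE"]) is None)         # False
```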
¬, and the scope of negation is indicated by curly brackets. This lexical entry
allows two possibilities. The first is for the base form of the verb, in which the
value of the VFORM feature is BASE. For the second possibility, the value of the
feature TENSE is PRES for present tense, and a third-person singular subject
is disallowed by negating the possibility for the PERS feature to have value 3
when the NUM feature has value SG.
[IP [NP [N′ [N David]]] [I′ [VP [V′ [V sneezed]]]]]  ──φ──>
[ PRED  ‘SNEEZE⟨SUBJ⟩’
  TENSE PAST
  SUBJ  [ PRED ‘DAVID’ ] ]
Each node of the c-structure tree corresponds to some part of the f-structure.
As shown in (30), more than one c-structure node can correspond to the same
f-structure (the φ function is many-to-one):
(30) [V′ [V sneezed]] ──φ──> [ PRED  ‘SNEEZE⟨SUBJ⟩’
                               TENSE PAST ]
     (both the V′ node and the V node map to the same f-structure)
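The many-to-one character of φ can be pictured directly: distinct tree nodes may share a single f-structure. A minimal sketch, with node labels and encoding of our own:

```python
# Sketch: φ as a many-to-one mapping from c-structure nodes to f-structures.
sneeze_fs = {"PRED": "SNEEZE<SUBJ>", "TENSE": "PAST"}
phi = {"V-bar": sneeze_fs, "V": sneeze_fs}  # both nodes map to one f-structure
print(phi["V-bar"] is phi["V"])  # True
```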
optionally specifies functional information about its subject. When there is
no overt subject phrase in the sentence, the information specified by the verb
supplies the SUBJ value for the sentence. In (31), since there is no overt subject,
all of the information about the subject comes from specifications on the verb,
and there is no c-structure node corresponding to the SUBJ f-structure:
(31) koware-ta
     break-PAST
     ‘[It/Something] broke.’

     [S [V kowareta]] ──φ──> [ PRED  ‘BREAK⟨SUBJ⟩’
                               TENSE PAST
                               SUBJ  [ PRED ‘PRO’ ] ]
    [IP [NP [N′ [N David]]] [I′ [I is] [VP [V′ [V yawning]]]]]
    (c-structure tree for: David is yawning)
(33) David yawned.

     [IP [NP [N′ [N David]]] [I′ [VP [V′ [V yawned]]]]]  ──φ──>
     [ PRED  ‘YAWN⟨SUBJ⟩’
       TENSE PAST
       SUBJ  [ PRED ‘DAVID’ ] ]
In Finnish, the specifier of IP is associated with the TOPIC function, and the
specifier of CP is associated with FOCUS:
     Mikolta Anna sai kukkia.
     Mikko   Anna got flowers

     [CP [NP [N′ [N Mikolta]]] [C′ [IP [NP [N′ [N Anna]]] [I′ [I sai] [VP [NP [N′ [N kukkia]]]]]]]]  ──φ──>
     [ PRED  ‘GET⟨SUBJ,OBJ,OBLSOURCE⟩’
       FOCUS [ PRED ‘MIKKO’ ]   (= OBLSOURCE)
       TOPIC [ PRED ‘ANNA’ ]    (= SUBJ)
       OBJ   [ PRED ‘FLOWERS’ ] ]
OBJ and OBJTHEME:

David gave Chris a book.

    [IP [NP [N′ [N David]]] [I′ [VP [V′ [V gave] [NP [N′ [N Chris]]] [NP [Det a] [N′ [N book]]]]]]]  ──φ──>
    [ PRED ‘GIVE⟨SUBJ,OBJ,OBJTHEME⟩’
      SUBJ [ PRED ‘DAVID’ ]
      OBJ  [ PRED ‘CHRIS’ ]
      OBJTHEME [ SPEC A
                 PRED ‘BOOK’ ] ]
We can use these symbols to annotate the V′ phrase structure rule with
f-structure correspondence constraints:
(37) V′ −→   V
            ↑=↓
     (mother’s f-structure = self’s f-structure)
This annotated rule licenses the configuration in (38). In the c-structure,
the V′ node dominates the V node, as the phrase structure rules require. The
V′ and V nodes correspond to the same f-structure, as the annotations on the
V node require.

(38) [V′ V] ──φ──> [ ]
In the rule shown in (39), the V and the V′ node correspond to the same
f-structure, as specified by the ↑=↓ annotation on the V node. The annotation
on the NP node requires the f-structure ↓ corresponding to the NP to be the
value of the OBJ feature in the f-structure ↑ of the mother node.
(39) V′ −→   V         NP
            ↑=↓    (↑ OBJ) = ↓
The rule in (39) licenses the following configuration:
(40) [V′ V NP] ──φ──> [ OBJ [ ] ]
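Instantiating the annotations of a rule like (39) amounts to identifying ↓ on the V daughter with ↑, and installing ↓ of the NP daughter as the OBJ of ↑. A toy instantiation in the dictionary encoding used earlier (all names are ours):

```python
# Toy instantiation of V' -> V NP with annotations  up=down  and  (up OBJ)=down.
def apply_vbar_rule():
    up = {}              # f-structure of the mother V' node (the "up" arrow)
    v_down = up          # up = down on V: V shares the mother's f-structure
    np_down = {}         # down for the NP daughter
    up["OBJ"] = np_down  # (up OBJ) = down on NP
    return up, v_down, np_down

up, v_down, np_down = apply_vbar_rule()
print(up is v_down)          # True: V and V' map to the same f-structure
print(up["OBJ"] is np_down)  # True: the NP's f-structure is the mother's OBJ
```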
(42) [V sneezed] ──φ──> [ PRED  ‘SNEEZE⟨SUBJ⟩’
                          TENSE PAST ]
Several recent research strands in LFG have explored the relation of constituent
and functional structure to other linguistic levels. Among these are the theory
of the relation between argument structure and syntax, and the “glue” approach
to the interface between syntax and semantics.
5.1 Mapping theory and argument linking
Mapping theory explores correlations between the semantic roles of the argu-
ments of a verb and their syntactic functions: if a language assigns the syntactic
function SUBJ to the agent argument of an active verb like kick, for example,
it invariably assigns SUBJ to the agent argument of semantically similar verbs
like hit.
Early formulations of the rules of mapping theory proposed rules relating
specific thematic roles to specific grammatical functions: for example, that the
thematic role of AGENT is always realized as SUBJ. Later work proposed more
general rules relating thematic roles to classes of grammatical functions, rather
than specific functions. It is most often assumed that grammatical functions
are cross-classified by the features ±R and ±O. Several versions of mapping
theory have been proposed (Bresnan & Kanerva, 1989; Bresnan & Zaenen, 1990;
Bresnan, 2001); in the following, we describe the theory of Bresnan & Zaenen
(1990).
The feature ±R distinguishes unrestricted (−R) grammatical functions from
restricted (+R) functions. The grammatical functions SUBJ and OBJ are
classified as unrestricted, meaning that they can be filled by an argument
bearing any thematic role. These contrast with restricted grammatical
functions like obliques or thematically restricted objects, which must be
filled by arguments with particular thematic roles: for example, the
OBLSOURCE function must be filled by an argument bearing the thematic role
SOURCE, and the thematically restricted object function OBJTHEME is filled
by a THEME argument.
The feature ±O distinguishes objective (+O) grammatical functions from
nonobjective (−O) functions. The unrestricted OBJ function and the restricted
OBJθ functions are objective, while the SUBJ and the oblique functions are
nonobjective.
These features cross-classify the grammatical functions as in (44):
(44)
          −R      +R
    −O    SUBJ    OBLθ
    +O    OBJ     OBJθ
(45) Thematic hierarchy (Bresnan & Kanerva, 1989):
AGENT > BENEFACTIVE > RECIPIENT/EXPERIENCER
> INSTRUMENT > THEME/PATIENT > LOCATIVE
One of the default mapping rules requires the argument of a predicate that is
highest on the thematic hierarchy to be classified as unrestricted (−R). For
example, if a verb requires an AGENT argument and a PATIENT argument, the
AGENT argument thematically outranks the PATIENT argument, and thus the
AGENT argument is classified as unrestricted.
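The cross-classification in (44) and the default rule just described can be sketched as follows. The ASCII feature names and the encoding are our own; the rule follows the description of Bresnan & Zaenen (1990) given in the text:

```python
# Sketch: the +/-R, +/-O cross-classification and the default rule that the
# thematically highest argument is classified as unrestricted (-R).
HIERARCHY = ["AGENT", "BENEFACTIVE", "RECIPIENT/EXPERIENCER",
             "INSTRUMENT", "THEME/PATIENT", "LOCATIVE"]

# (R feature, O feature) -> grammatical function, as in table (44).
FUNCTION = {("-R", "-O"): "SUBJ", ("+R", "-O"): "OBL-THETA",
            ("-R", "+O"): "OBJ",  ("+R", "+O"): "OBJ-THETA"}

def thematically_highest(roles):
    # Earlier on the hierarchy means higher-ranked.
    return min(roles, key=HIERARCHY.index)

roles = ["AGENT", "THEME/PATIENT"]   # the arguments of a verb like "kick"
print(thematically_highest(roles))   # AGENT: classified as unrestricted (-R)
print(FUNCTION[("-R", "-O")])        # SUBJ
```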
For a predicate with an AGENT and a PATIENT argument, like kick, this
has the following result (Bresnan & Kanerva, 1989):
The deduction is performed on the basis of logical premises contributed by
the words in the sentence (and possibly by syntactic constructions). Linear
logic, a resource-based logic, is used to state requirements on how the meanings
of the parts of a sentence can be combined to form the meaning of the sentence
as a whole. Linear logic is different from classical logic in that it does not
admit rules that allow for premises to be discarded or used more than once in a
deduction. Premises in a linear logic deduction are, then, resources that must
be accounted for in the course of a deduction; this nicely models the semantic
contribution of the words in a sentence, which must contribute exactly once to
the meaning of the sentence, and may not be ignored or used more than once.
A sentence like David knocked twice cannot mean simply David knocked: the
meaning of twice cannot be ignored. It also cannot mean the same thing as
David knocked twice twice; the meaning of a word in a sentence cannot be used
multiple times in forming the meaning of the sentence.
The syntactic structures for the sentence David yawned, together with the
desired semantic result, are displayed in (47):
The semantic structure for the sentence is related to its f-structure by the
correspondence function σ, represented as a dotted line. This result is
obtained on the basis of the following lexical information, associated with
the verb yawned:
must obtain a meaning for its arguments in order for a meaning for the sentence
to be available.
The f-structure for the sentence David yawned, together with the
instantiated meaning constructors contributed by David and yawned, is given
in (49):
(49) y: [ PRED ‘YAWN⟨SUBJ⟩’
          SUBJ d: [ PRED ‘DAVID’ ] ]

     [David]  David : dσ
     [yawn]   λX.yawn(X) : dσ ⊸ yσ
The left-hand side of the meaning constructor labeled [David] is the proper
noun meaning David, and the left-hand side of the meaning constructor labeled
[yawn] is the meaning of the intransitive verb yawned, the one-place
predicate λX.yawn(X).
We must also provide rules for how the right-hand (glue) side of each of the
meaning constructors in (49) relates to the left-hand (meaning) side in a
meaning deduction. For simple, nonimplicational meaning constructors like
[David] in (49), the meaning on the left-hand side is the meaning of the
semantic structure on the right-hand side. For meaning constructors which
contain the linear implication operator ⊸, like [yawn], modus ponens on the
glue side corresponds to function application on the meaning side:
(50) X : fσ     P : fσ ⊸ gσ
     ──────────────────────
           P(X) : gσ
By using the function application rule and the meaning constructors for David
and yawned, we deduce the meaning yawn(David) for the sentence David yawned,
as desired.
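The deduction for David yawned can be mimicked in a few lines, with glue terms as labels for semantic resources, consumption of each resource exactly once, and modus ponens implemented as function application. Everything here, from the names to the list encoding of premises and the in-order processing, is a toy of our own, not the glue formalism itself:

```python
# Toy glue deduction for "David yawned". A premise is (meaning, glue), where
# glue is either a resource label or a pair (a, b) standing for a -o b.
# For simplicity, premises are processed in order, implications last.
premises = [
    ("David", "d_sigma"),                              # [David]
    (lambda X: f"yawn({X})", ("d_sigma", "y_sigma")),  # [yawn]
]

def deduce(premises):
    resources = {}  # resource label -> meaning derived so far
    for meaning, glue in premises:
        if isinstance(glue, tuple):
            antecedent, consequent = glue
            argument = resources.pop(antecedent)       # consume exactly once
            resources[consequent] = meaning(argument)  # modus ponens = application
        else:
            resources[glue] = meaning
    return resources

print(deduce(premises)["y_sigma"])  # yawn(David)
```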
Glue analyses of quantification, intensional verbs, modification,
coordination, and other phenomena have been explored (Dalrymple, 1999). A
particular challenge for the glue approach is found in cases where there are
apparently too many or too few meaning resources to produce the correct
meaning for a sentence; such cases are explored within the glue framework by
Asudeh (2004).
From its inception, work on LFG has been informed by computational and
psycholinguistic concerns. Recent research has combined LFG’s syntactic as-
sumptions with an optimality-theoretic approach in an exploration of OT-LFG
(see Optimality Theory, OT-LFG). Other work combines LFG with Data-
Oriented Parsing, a new view of language processing and acquisition. There
have also been significant developments in parsing and generating with LFG
grammars and grammars in related formalisms.
6.2 Parsing
Several breakthroughs have been made in the parsing of large computational
LFG grammars. Maxwell & Kaplan (1991) examine the problem of processing
disjunctive specifications of constraints, which in the worst case requires
exponential time. However, this worst-case scenario assumes that every
disjunctive constraint can interact significantly with every other
constraint. In practice, such interactions are found only very rarely: an
ambiguity in the syntactic properties of the SUBJ of a sentence rarely
correlates with ambiguities in the OBJ or other arguments. This insight is
the basis of Maxwell & Kaplan's algorithm, which works by turning a set of
disjunctively specified constraints into a set of contexted, conjunctively
specified constraints, where the context of a constraint indicates where the
constraint is relevant. Solving these contexted constraints turns out to be
very efficient for linguistically motivated sets of constraints, where only
local interactions among disjunctions tend to occur.
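The core move, rewriting each disjunction as a pair of contexted conjuncts, can be sketched as follows. This is a simplification of Maxwell & Kaplan's method; the function name and the string encoding of constraints are ours:

```python
# Sketch: each disjunction (p or q) becomes two contexted constraints,
# (C -> p) and (not C -> q), conjoined with all the others; independent
# disjunctions then need not be multiplied out into disjunctive normal form.
def contexted(disjunctions):
    out = []
    for i, (p, q) in enumerate(disjunctions):
        out.append((f"C{i}", p))      # constraint p holds in context C_i
        out.append((f"not C{i}", q))  # constraint q holds otherwise
    return out

constraints = contexted([("(f SUBJ NUM) = SG", "(f SUBJ NUM) = PL")])
print(constraints)
# [('C0', '(f SUBJ NUM) = SG'), ('not C0', '(f SUBJ NUM) = PL')]
```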
Maxwell & Kaplan (1993, 1996) explore the issue of c-structure processing and
its relation to solving f-structural constraints. It has long been known that
constituent structure parsing (determining the phrase structure trees for a
given sentence) is very fast in comparison to solving the equations that
determine the f-structure for the sentence. For this reason, an important
task in designing algorithms for linguistic processing of different kinds of
structures like the c-structure and the f-structure is to optimize the
interactions between these computationally very different tasks. Previous
research often assumed that the most efficient approach would be to
interleave the construction of the phrase structure tree with the solution of
f-structure constraints. Maxwell & Kaplan (1993) explore and compare a number
of different methods for combining phrase structure processing with
constraint solving; they show that in certain situations, interleaving the
two processes can actually give very bad results. Subsequently, Maxwell &
Kaplan (1996) showed that if phrase structure parsing and f-structural
constraint solving are combined in the right way, parsing can be very fast;
in fact, if the grammar that results from combining phrase structure and
functional constraints happens to be context-free equivalent, the algorithm
for computing the c-structure and f-structure operates in cubic time, the
same as for pure phrase structure parsing.
6.3 Generation
Generation is the inverse of parsing: whereas the parsing problem is to
determine the c-structure and f-structure that correspond to a particular
sentence, work on generation in LFG assumes that the generation task is to
determine which sentences correspond to a specified f-structure, given a
particular grammar. Based on these assumptions, several interesting
theoretical results have been attained. Of particular importance is the work
of Kaplan & Wedekind (2000), who show that if we are given an LFG grammar and
an acyclic f-structure (that is, an f-structure that does not contain a
reference to another f-structure that contains it), the set of strings that
corresponds to that f-structure according to the grammar is a context-free
language. Kaplan & Wedekind also provide a method for constructing the
context-free grammar for that set of strings by a process of specialization
of the full grammar that we are given. This result leads to a new way of
thinking about generation, opens the way to new, more efficient generation
algorithms, and clarifies a number of formal and mathematical issues relating
to LFG parsing and generation.
Wedekind & Kaplan (1996) explore issues in ambiguity-preserving generation,
where a set of f-structures rather than a single f-structure is considered,
and the sentences of interest are those that correspond to all of the
f-structures under consideration. The potential practical advantages of
ambiguity-preserving generation are clear: consider, for example, a scenario
involving translation from English to German. We first parse the input
English sentence, producing several f-structures if the English sentence is
ambiguous. For instance, the English sentence Hans saw the man with the
telescope is ambiguous: it means either that the man had the telescope or
that Hans used the telescope to see the man. The best translation for this
sentence would be a German sentence that is ambiguous in exactly the same way
as the English sentence, if such a German sentence exists. In the case at
hand, we would like to produce the German sentence Hans sah den Mann mit dem
Fernrohr, which has exactly the same two meanings as the English input. To do
this, we map the English f-structures for the input sentence to the set of
corresponding German f-structures; our goal is then to generate the German
sentence Hans sah den Mann mit dem Fernrohr, which corresponds to each of
these f-structures. This approach is linguistically appealing, but
mathematically potentially problematic: Wedekind & Kaplan (1996) show that
determining whether there is a single sentence that corresponds to each
member of a set of f-structures is in general undecidable for an arbitrary
(possibly linguistically unreasonable) LFG grammar. This means that there are
grammars that can be written within the formal parameters of LFG, though
these grammars may not encode the properties of any actual or potential human
language, and for these grammars, there are sets of f-structures for which it
is impossible to determine whether there is any sentence that corresponds to
those f-structures. This result is important in understanding the formal
limits of ambiguity-preserving generation.
Bibliography
Andrews, A., III & Manning, C. D. (1999). Complex Predicates and Information
Spreading in LFG. Stanford, CA: CSLI Publications.
Bresnan, J. & Kanerva, J. M. (1989). ‘Locative inversion in Chicheŵa: A case
study of factorization in grammar.’ Linguistic Inquiry 20 (1), 1–50. Reprinted
in Stowell et al. (1992).
Dalrymple, M., Kaplan, R. M., Maxwell, J. T., III & Zaenen, A. (eds.) (1995).
Formal Issues in Lexical-Functional Grammar. Stanford, CA: CSLI Publi-
cations.
Manning, C. D. (1996). Ergativity: Argument Structure and Grammatical Rela-
tions. Dissertations in Linguistics. Stanford, CA: CSLI Publications. Revised
and corrected version of 1994 Stanford University dissertation.
Maxwell, J. T., III & Kaplan, R. M. (1991). ‘A method for disjunctive constraint
satisfaction.’ In Tomita, M. (ed.) ‘Current Issues in Parsing Technology,’
Kluwer Academic Publishers, 173–190. Reprinted in Dalrymple et al. (1995,
pp. 381–401).
Maxwell, J. T., III & Kaplan, R. M. (1993). ‘The interface between phrasal
and functional constraints.’ Computational Linguistics 19 (4), 571–590.
Maxwell, J. T., III & Kaplan, R. M. (1996). ‘An efficient parser for LFG.’ In
Butt, M. & King, T. H. (eds.) ‘On-line Proceedings of the LFG96 Conference,’
URL http://csli-publications.stanford.edu/LFG/1/lfg1.html.
Mohanan, T. (1994). Arguments in Hindi. Dissertations in Linguistics. Stan-
ford, CA: CSLI Publications. Reprinted version of 1990 Stanford University
dissertation.
Nordlinger, R. (1998). Constructive Case: Evidence from Australian Languages.
Dissertations in Linguistics. Stanford, CA: CSLI Publications. Revised ver-
sion of 1997 Stanford University dissertation.
Sells, P. (2001). Structure, Alignment and Optimality in Swedish. Stanford,
CA: CSLI Publications.
Simpson, J. (1991). Warlpiri Morpho-Syntax: A Lexicalist Approach. Dor-
drecht: Kluwer Academic Publishers.
Stowell, T., Wehrli, E. & Anderson, S. R. (eds.) (1992). Syntax and Semantics:
Syntax and the Lexicon, volume 26. San Diego: Academic Press.
Toivonen, I. (2003). Non-Projecting Words: A Case Study of Swedish Particles.
Dordrecht: Kluwer Academic Publishers.
Wedekind, J. & Kaplan, R. M. (1996). ‘Ambiguity-preserving generation with
LFG- and PATR-style grammars.’ Computational Linguistics 22 (4), 555–
568.