
Formal semantics for natural language

K. KORTA & J.M. LARRAZABAL


Logika eta Zientziaren Filosofia Saila
ILCLI (E.H.U.)

Formal semantics for natural language is the study that makes use of tools from formal or symbolic logic to analyse meaning in the languages that human beings use for everyday communication. But what does logic have to do with the study of natural language meaning? Well, the answer to this question depends first on what the term 'logic' means. If we reduce the meaning of 'logic' to classical first-order predicate logic, then the answer is that the use of logic is hopeless for analysing meaning in natural language. This reductive assumption about the nature of logic, together with the belief in the inherent incoherence of natural language, was the main reason for the idea that formal semantics for natural language was totally insufficient, and that became the official opinion of many logicians, philosophers, and linguists. Even Alfred Tarski, who, because of his definitions of Truth and Logical Consequence for formal languages, is considered the father of formal semantics, thought that it was impossible to build up an adequate formal semantics for natural language. It was in the 70's when Richard Montague, using all the tools that modern logic put at his hands (not only first-order logic, but also higher-order logic, type theory, lambda calculus, intensional logic with possible world semantics, etc.), showed us that it was possible to treat natural language, specifically English, as a formal language.
From that point on, the results in formal semantics for natural language have been many and rich. In this paper we will try to show the main contributions made by formal semantics to the comprehension of meaning in natural language. For that purpose, nothing is better than presenting, necessarily briefly, three of the most important theories in this field nowadays: Montague Grammar, Discourse Representation Theory and Situation Semantics. Obviously, we will not be able to offer a complete and detailed description of any of them, but we hope the present paper will be useful for drawing a panoramic view of current work in the field. Thus, in the first section we will sketch the proposal made by Richard Montague, the so-called PTQ Grammar. In the second section, you will find a description of the main theory concerned with discourse meaning: Discourse Representation Theory. Next, the third section will focus on a theory located somewhere on the border between semantics and pragmatics, and based on a general theory of information and its flow: Situation Semantics. Finally, in the last section we will draw some general conclusions and further suggestions.

(*) This article has been written as a result of a grant (UPV 003.230-H038/91) from the University of the Basque Country for a research project on "Logical Semantics and Natural Language: a study of the logical grounds for a theory of information". We would like to thank Dr. John Etchemendy and the staff of CSLI (Stanford University) for their hospitality and kindness during our stay in Fall 1992, when the final version of this paper was written.

[ASJU, XVII-2, 1993, 371-393]
Everybody knows about the secondary role that many linguists have assigned and still assign to semantics within the linguistic enterprise. The aim of this paper would be completely fulfilled if, as a result of reading it, some of them changed or at least relativised that view. We would be glad if it were of some use to people who, interested in natural language, have not had the chance to approach work on the problem of meaning, and also to people who, though interested in the problem of meaning, have been reluctant towards formal treatments, perhaps because of making the same reductive assumption about the nature of logic mentioned above.

The Principle of Compositionality


Before going on, let us look at one semantic principle which is essential for all we will say later: the principle of compositionality of meaning. This principle is very important, particularly in Montague Grammar, because it is not only a criterion for its semantics, but also has notable consequences for the relationship between syntax and semantics.
The principle of the compositionality of meaning, also known as 'Frege's principle', says the following: the meaning of a complex expression is a function of the meanings of its parts. Thus, in the semantics we fix the meaning of basic expressions, and the meaning of complex ones is determined by semantic rules that tell us how basic expressions combine with each other, contributing to the meaning of the whole. This fact imposes certain conditions on the syntax, the semantics, and the relationship between the two in a language. In the case of formal languages it is easy to see that meaning is treated following that principle. Since formal languages are artificial, we have no problem building them up in a way that fits that principle. We cannot, on the other hand, organise natural languages to suit our goals; natural languages are given to us. Hence, we have to think carefully before claiming that natural language meaning meets the principle of compositionality. Indeed, this issue has been and still is the subject of intense debate. Alfred Tarski did not have a very optimistic opinion about the plausibility of applying his own work to natural language. To use logical tools in the definition of meaning in a language we need a complete specification of the syntax of that language, and that did not seem very plausible for natural language, at least in the 40's, when Tarski did his most fundamental work.
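For a formal language the point is easy to make concrete. The following short Python sketch (our illustration, not part of the original text) evaluates propositional formulas: the meaning of a complex formula is computed purely from the meanings of its immediate parts, with one semantic clause per syntactic construction.

# Minimal illustration of compositionality for a formal language (ours, not the authors').
# A formula is a proposition letter (str) or a tuple ('not', f), ('and', f, g), ('or', f, g).
def meaning(formula, valuation):
    if isinstance(formula, str):          # basic expression: meaning fixed by the valuation
        return valuation[formula]
    op = formula[0]
    if op == 'not':                       # one semantic rule per syntactic rule
        return not meaning(formula[1], valuation)
    if op == 'and':
        return meaning(formula[1], valuation) and meaning(formula[2], valuation)
    if op == 'or':
        return meaning(formula[1], valuation) or meaning(formula[2], valuation)
    raise ValueError('unknown connective: %s' % op)

# meaning(('and', 'p', ('not', 'q')), {'p': True, 'q': False})  ->  True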
Anyway, the point is that the principle of compositionality presupposes the syntax. In other words, in order to analyse the meaning of a complex expression compositionally we have to consider not only the expression itself but also its syntactic analysis. The latter will show us which parts the expression is formed of, by means of which rules, and in which order. We will need all this information if we want to analyse the meaning of the whole expression as a function of the meanings of its parts. Moreover, in natural languages we have syntactically ambiguous expressions: expressions which may have been built up by more than one syntactic procedure, and which may therefore receive different syntactic analyses. In some cases, different syntactic analyses will produce different semantic analyses. Therefore, we need the former to be able to study the latter. Let us see some classical examples to illustrate all this:
(1) I saw a man with a telescope

This sentence has two possible readings. In one of them I use a telescope to see a man. In the other one, it is the man I saw who has the telescope. The ambiguity is created by the expression a man with a telescope. The issue is the position of the Preposition Phrase (PP) in the constituent structure. If the PP is in the verb argument position, then we obtain the first reading. If, on the other hand, the PP is in the argument position of the Noun Phrase (NP) a man, then we have the second reading. In other words, the point is the difference between these two structures:
(2) a. S[I VP[saw NP[a man with a telescope]]]
    b. S[I VP[saw NP[a man] PP[with a telescope]]]

Thus, in order to analyse the meanings of (1) we need the syntactic analyses (2a) and (2b). We call ambiguity phenomena such as this structural ambiguity. But there are syntactic ambiguities which are not cases of structural ambiguity. Consider (3) below:
(3) Everyone in the department speaks two languages.

This sentence has two possible readings too. In one of them, there are two languages, say Basque and English, spoken by everyone in the department, and it is possible for someone to speak another language too. A different reading will tell us that everyone in the department speaks two languages, possibly different languages for each person.
We cannot explain this case of ambiguity in terms of structural ambiguity, since (3) has only one constituent structure. The explanation proposed by Montague says that the ambiguity of (3) arises from the difference in its ways of derivation. These ways of syntactic derivation will give us the same constituent structure as their result, but they will produce different meanings. For that reason, we call this kind of phenomenon derivation ambiguity.
The point is then that, to account for meaning ambiguities, the semantics needs the syntax: not only the syntactic analysis of constituent structures but also that of syntactic derivations.
But the principle of compositionality not only requires a well defined syntax. At the same time, it imposes conditions on it. The compositionality principle demands that every ambiguity not based in the lexicon must be rooted in the syntactic derivation, and also that every syntactic operation must have its semantic interpretation. These requirements mean that the syntax is not autonomous within Montague Grammar. Semantic considerations will be of great relevance in proposing the syntax. Certain syntactic ambiguities will be posited for exclusively semantic reasons, and for the choice between different syntactic analyses we will use criteria of a semantic nature instead of a syntactic one.
On the other hand, we may study semantic issues without having any syntactic criteria in mind at all. Two sentences with different syntactic derivations or structures may have the same meaning. Starting from a semantic theory allows us to consider that two different sentences have the same meaning without having to assert that they are equivalent at some syntactic level such as deep structure, d-structure or whatever. The only requirement is that the different syntactic analyses of those sentences produce the same semantic interpretation.
After all these considerations, we will now try to give a structured overview of the semantic theories mentioned above, beginning with Montague Grammar as presented in Montague 1973, which is known as 'PTQ grammar'.

I. Montague Grammar
We call Montague Grammar the work done in the seventies by Richard Montague to build up a model-theoretic semantics for natural language. However, we could just as well talk of semantics instead of grammar, since, as we shall see, his motivations and the nature of his results are semantic, and it is the semantics that determines the characterisation of the other components of the grammar. Let us first look at the general organisation of Montague grammar.

I. 1. The General Organisation of PTQ


For all the reasons we have mentioned so far, the principle of compositionality has strong consequences for the general organisation of PTQ grammar. But PTQ has further peculiarities. One of them is the link between syntax and semantics. In PTQ grammar this linkage is not direct; there is a translation into a logical language in the middle. It would be possible to give the semantic interpretation of a natural language directly, once we have its syntactic analysis. In fact, that is what Montague himself does in Montague 1970a. But in PTQ, natural language expressions are first translated into expressions of a logical language, and then we get the semantic interpretation of these logical expressions.
The translation process must satisfy certain conditions. In the logical language there is no place for ambiguities. So the semantics assigns one meaning to each logical expression. And then, if a given natural language expression is ambiguous, in the translation we have to assign different logical expressions to it. As we said earlier, if the ambiguity of a sentence is not due to the lexicon, then different syntactic analyses correspond to it, different logical expressions to each analysis, and a different semantic interpretation to each expression. The translation process is, then, compositional too. To each lexical item corresponds one logical expression and to each logical expression one meaning. The syntactic rules tell us how complex expressions are formed starting from lexical items. To each syntactic rule corresponds one translation rule that tells us how we get the translation of the complex expression starting from the translations of the expressions that form it. Once we have this translation we get the meaning in the same way as in the semantics of logical languages. These are, briefly presented, the main features of the general organisation of PTQ grammar, which is graphically summarised in (4):
(4)
    Natural Language  --(translation)-->  Logical Language  --(interpretation)-->  Models
Now we will present the necessary elements of the PTQ syntax in order to be
able to understand the main features of Montague Semantics. For those who look for
a more complete description and explanation, Gallin 1975, Dowty et al. 1981 and
Gamut 1991, volume 2, are three excellent systematic presentations of Montague
Grammar. Let us also mention Halvorsen and Ladusaw 1979 for a good description
of Montague's Universal Grammar.

I. 2. The Syntax
The syntax proposed by Montague in PTQ is a categorial syntax of a special kind. Usually a categorial syntax has these four components:
(i) a list of basic categories;
(ii) a definition of derived categories;
(iii) the lexicon, that is, a list of all lexical items, specifying their category;
(iv) the specification of the behavior of the only syntactic rule, concatenation.

It is this last point where Montague's syntax differs from traditional categorial syntax. Indeed, since categorial syntax is equivalent to context-free rewrite rule systems, it has problems accounting for some natural language phenomena. For that reason, it has been proposed to introduce a transformational component into a pure categorial syntax. Montague does not follow this option. Instead, he defines a big and heterogeneous set of syntactic rules. These rules, besides concatenation, introduce elements syncategorematically, change word order, regulate morphological form, and sometimes even do the three operations at the same time. For contemporary syntacticians these rules will have little value as explanations of syntactic operations, but we cannot forget that, as Montague recognises, "I fail to see any great interest in syntax except as a preliminary to semantics" (Montague 1974: 223, fn). Anyway, there are some successful proposals showing how we can use Montague semantics with more adequate syntactic theories (see, for instance, Partee 1973, 1975).
But, let us describe PTQ syntax.
First of all, we define the set of categories:
(i) e and t are basic categories.
(ii) If a and b are categories, then a/b and a//b are categories.(1)

(1) a/b and a//b are different syntactic categories with the same semantic function, like common nouns and intransitive verbs.


The basic categories are two: e (for entity), which does not correspond to any lexical item, and t (for truth), which corresponds to sentences. The category a/b is a functional category which, when applied to an argument of category b, produces an expression of category a. As you can see, rule (ii) of the definition of categories is recursive, that is to say, the number of categories it produces is infinite. But Montague uses only a few of them, and we will use even fewer in this introductory exposition. Here is the list of the most important categories, with their abbreviations, together with the fragment of English we will study:
Categorial definition   Abbreviation   Description              Expressions
e                       --             --                       --
t                       --             Sentence                 --
t/e                     IV             Intransitive Verb        smoke, sleep, walk, talk
t//e                    CN             Common Noun              man, woman, language, unicorn, elephant, queen, park
t/IV                    T              Term or Noun Phrase      John, Mary, Bill, he0, he1, ...
IV/T                    TV             Transitive Verb Phrase   love, speak, know, seek, find

Let us now look at some syntactic rules. In the table above we defined the set of basic expressions for each category A; let us call this set BA. Syntactic rules have to define the set of phrases of category A, that is, the set that contains, jointly with the set BA, the complex expressions of category A formed by syntactic rules from elements of BA. Call this set PA. The first syntactic rule says this:

S1. For every category A, BA ⊆ PA.

The subsequent rules will specify how complex expressions are formed. For that purpose, they have to give us three kinds of information: (i) the categories to which we may apply the rule; (ii) the category of the complex expression resulting from the application of the rule; (iii) the specification of the syntactic operation needed to get the new expression. Let us see, as an example, the fourth syntactic rule in PTQ, known as the 'subject-predicate' rule:

S4. If α ∈ PT and δ ∈ PIV, then F4(α, δ) ∈ Pt, where F4(α, δ) = αδ' and δ' is the result of replacing the first verb in δ by its third person form.

This rule allows us to form a sentence (of Pt) from a term (of PT) and a verb phrase (of PIV), enforcing subject-predicate agreement at the same time. Consider, for example, John ∈ PT and walk ∈ PIV (these two words are of BT and BIV, respectively, and hence, by S1, of PT and PIV). Combining them we obtain John walks ∈ Pt, by S4.
In Montague Grammar the derivation of an expression is represented by analysis trees. The tree for the example above is the following one:

(5)
            John walks, t, S4
            /              \
       John, T           walk, IV

All the nodes in the tree are labelled with an expression, its category, and the syntactic rule used to produce it, except in the case of S1.
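The workings of S1 and S4 can be pictured in code. The Python fragment below is only an illustrative toy of ours (in particular, the morphological part of F4 is crudely approximated by suffixing 's' to the first verb); it is not Montague's formulation.

# Toy sketch of the PTQ-style rules S1 and S4 (illustration only).
# Basic expressions of the fragment, by category (the sets BA):
B = {'T': {'John', 'Mary', 'Bill'},
     'IV': {'smoke', 'sleep', 'walk', 'talk'}}

# S1: every basic expression of category A is a phrase of category A.
P = {cat: set(exprs) for cat, exprs in B.items()}
P['t'] = set()

def third_person(iv_phrase):
    """Crude stand-in for the morphology in F4: put the first verb
    of the IV phrase into its third person form."""
    first, _, rest = iv_phrase.partition(' ')
    return first + 's' + (' ' + rest if rest else '')

def S4(term, iv):
    """S4: if term is in PT and iv is in PIV, then F4(term, iv) is in Pt."""
    assert term in P['T'] and iv in P['IV']
    sentence = term + ' ' + third_person(iv)
    P['t'].add(sentence)
    return sentence

# S4('John', 'walk')  ->  'John walks', now a member of P['t']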

I. 3. The translation
As we have said before, to each syntactic rule corresponds one translation rule. The translation rule corresponding to S4 is T4, namely:

T4. If α ∈ PT and δ ∈ PIV, and if α' and δ' are, respectively, the translations of α and δ, then the translation of F4(α, δ) is α'(^δ').

The rules S4 and T4 constitute the most common combination of syntactic/translation rules: a rule of functional application. A syntactic rule of functional application combines two expressions of categories A/B and B to give an expression of category A. In our example, it combines an expression of category T (= t/IV) with an expression of category IV to yield a new expression of category t. The translation rule corresponding to syntactic rules of functional application is always of the same form: we take the translation α' of the expression of category A/B and apply it to the translation β' of the expression of category B, to get α'(^β').(2)

In order to illustrate how T4 works we will use our example above. First, we need the translations of the basic expressions John and walk. The translation of the latter is quite straightforward. Intransitive verbs, as well as common nouns, have the same semantic function: they are one-place predicates, that is, constants that denote a set of individuals. As usual in Montague Grammar, we will write the translation of walk as walk'.
The translation of terms is more complex and constitutes one of the most important features of Montague Semantics. Let us spend a little time explaining it.

(2) The symbol '^' stands for intension. The intermediate logical language used in Montague Grammar is Typed Intensional Logic, rich enough to account for intensional phenomena in natural language such as the opaque contexts created by quotation, indirect speech, verbs expressing propositional attitudes, modality and so forth. Hereinafter we will obviate intensionality for simplicity and clarity of exposition.

I. 3. 1. The translation of terms.

In principle, it is natural to consider terms like proper names and definite descriptions as referring to individuals. On the other hand, it seems that quantified terms like every man have to be interpreted in a different way, since it is clear that they do not denote individuals. But one of the main features of PTQ grammar, in fact what motivates the title of the article where it was presented, is the uniform and compositional treatment of terms as quantified expressions. Let us consider this example:


(6) Every man walks.

Using first-order logic we would translate this sentence as:

(7) ∀x(man(x) → walk(x))

the formula that expresses that for every individual it is true that if it is a man, it walks. Thus, every man would be translated as an expression asserting that all individuals that are men have the property Q:

(8) ∀x(man(x) → Q(x)).

And using lambda abstraction(3) on Q we would get an expression denoting all the properties such that every man (in some world w) has those properties, which is the intended interpretation in PTQ grammar:

(9) λQ∀x(man(x) → Q(x)).

And now, if we substitute an arbitrary property P for the property man and abstract over it, we get the expression:

(10) λPλQ∀x(P(x) → Q(x)).

(10) would be the translation of the determiner every, which expresses a relation between two properties of individuals (extensionally, sets), a relation that is true of properties P and Q in a world if and only if all the individuals who have property P in that world have property Q in that world. If we apply this expression to the translation of man, that is, man', we get:

(11) λPλQ∀x(P(x) → Q(x))(man')

an expression which refers to the set of properties Q that stand in the relation described by (10) with respect to the property of being a man. We can simplify this expression using lambda conversion and get:

(12) λQ∀x(man'(x) → Q(x)).

Now, applying (12) to the translation of walk, namely walk', we have:

(13) λQ∀x(man'(x) → Q(x))(walk').

And using lambda conversion:

(14) ∀x(man'(x) → walk'(x)).

So, finally, we get an expression almost identical to the first-order one. But using the PTQ treatment we build up the interpretation of the term compositionally, and we can represent the semantic function of each syntactic element, something we cannot do using merely first-order logic.

(3) The use of lambda operators in the study of natural language is motivated, among other reasons, by compositionality, as we will see, and provides the language with more expressive power. Consider the first-order logic expression: (1) man(x). We can get an expression referring to the property of being a man by applying lambda abstraction to (1), which produces: (2) λx[man(x)]. If we want to predicate this property of an individual, say j, we give j as an argument to the expression in (2), that is: (3) λx[man(x)](j). And now we can apply lambda conversion to (3) and get a first-order formula: (4) man(j).
Then, quantified terms are interpreted as sets of properties (or of sets), and the determiners as relations between properties. Here are the translations for the determiners every, the, a(n) and one:

(15) every ⇒ λP[λQ∀x[P(x) → Q(x)]]
(16) the ⇒ λP[λQ∃y[∀x[P(x) ↔ x=y] ∧ Q(y)]]
(17) a(n) ⇒ λP[λQ∃x[P(x) ∧ Q(x)]]
(18) one ⇒ λPλQ∃y∀x[(P(x) ∧ Q(x)) ↔ x=y]

Now, the treatment of quantified terms affects the treatment of proper names. They are of the same syntactic category as quantified terms; they have the same syntactic function. Hence, they will be interpreted, like quantified terms, as sets of properties. Indeed, we can interpret, say, John walks as the assertion that the property of walking belongs to the set of John's properties, and then John would be translated as:

(19) λP[P(j)]

So now we can give the translation for the analysis tree (5) of the sentence John walks. The different stages in the translation process will be listed as in a proof: each line contains the translation into Montague's intensional logic (IL) together with its justification; we will use the symbol '⇒' to mean 'is translated as'. Thus, we will have English expressions on the left-hand side of the symbol, and the translation of the expression and its justification on the right:

1. John ⇒ λP[P(j)]                          Basic expression
2. walk ⇒ walk'                             Basic expression
3. John walks ⇒ λP[P(j)](walk')             T4, 1 and 2
4. walk'(j)                                 Lambda conversion, 3

Let us now look at the translation of a sentence like Every man talks. But before doing that, we need the analysis tree of that sentence, and for that, we need the following syntactic rule:

S2. If α ∈ PT/CN and β ∈ PCN, then F2(α, β) ∈ PT and F2(α, β) = αβ.

Thus, we introduce determiners as expressions of category T/CN, which applied to expressions of category CN yield an expression of category T.(4)
With this rule at hand we can derive the following analysis tree:

(20)
            every man talks, t, S4
            /                  \
     every man, T, S2         talk, IV
      /            \
 every, T/CN      man, CN

(4) In fact, this is not a rule of PTQ, since Montague introduces the determiners syncategorematically, but it is entirely equivalent, and more convenient for us for several reasons.


And once we have the analysis tree, and the translation rule T2 (a rule of functional application, and thus very similar to T4) corresponding to the syntactic rule S2, we can get the following translation:

1. every ⇒ λP[λQ∀x[P(x) → Q(x)]]                      Basic expression
2. man ⇒ man'                                         Basic expression
3. every man ⇒ λP[λQ∀x[P(x) → Q(x)]](man')            T2, 1 and 2
4. λQ∀x[man'(x) → Q(x)]                               Lambda conversion, 3
5. talk ⇒ talk'                                       Basic expression
6. every man talks ⇒ λQ∀x[man'(x) → Q(x)](talk')      T4, 4 and 5
7. ∀x[man'(x) → talk'(x)]                             Lambda conversion, 6
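To see these lambda terms compute against a concrete model, here is a small Python sketch of our own (a purely extensional toy model with an invented domain; intensions are ignored, as in the rest of our exposition). One-place predicates are characteristic functions, terms denote sets of properties as in (19), and every and a(n) follow the translations (15) and (17).

# Toy extensional model for the PTQ fragment (illustration only).
DOMAIN = {'j', 'm', 'b'}

# One-place predicates: characteristic functions of sets of individuals.
man  = lambda x: x in {'j', 'b'}
walk = lambda x: x in {'j'}
talk = lambda x: x in {'j', 'b'}

# Terms denote sets of properties, as in (19): John => lambda P. P(j).
John = lambda P: P('j')

# Determiners in the spirit of (15) and (17), relativised to DOMAIN.
every = lambda P: lambda Q: all(Q(x) for x in DOMAIN if P(x))
a     = lambda P: lambda Q: any(P(x) and Q(x) for x in DOMAIN)

print(John(walk))        # 'John walks'      -> True  (walk'(j) after conversion)
print(every(man)(talk))  # 'every man talks' -> True  (both j and b talk)
print(every(man)(walk))  # 'every man walks' -> False (b is a man who does not walk)
print(a(man)(walk))      # 'a man walks'     -> True

The remaining translations (16) and (18) could be coded analogously as further two-place relations between properties.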

I. 4. Conclusions
We have left aside a lot of important details of PTQ grammar and Montague Grammar in general. We have only introduced a few syntactic rules and the corresponding translation rules in order to show how PTQ grammar works. But we have not seen some of the most important syntactic/translation rules, those used to account for semantic phenomena such as the de dicto-de re distinction, scope ambiguities, intensional contexts or anaphora.(5) Nor have we seen many features of the rich Intensional Theory of Types used as an intermediate language. But, anyway, we hope the overall idea underlying Montague's work is clear enough. Let us quote Montague himself:

There is in my opinion no important theoretical difference between natural languages and artificial languages of logicians; indeed, I consider it possible to comprehend the syntax and semantics of both kinds of languages within a single natural and mathematically precise theory. On this point I differ from a number of philosophers, but agree, I believe, with Chomsky and his associates. It is clear, however, that no adequate and comprehensive semantical theory has yet been constructed, and arguable that no comprehensive and semantically significant syntactical theory yet exists. (Montague 1970b)

So, as we have indicated earlier, one of the main goals of Montague was to show that formal semantics for natural language is possible and promising; that we can study and understand meaning in natural language as rigorously as in formal languages. Another and equally important goal was to make a place for semantics as a proper part of linguistics. He met the first goal successfully; all the work done in formal semantics after him shows that. The second goal is still something we have to work on further.

(5) We suggest Orlowska 1988 as a good introduction to some extensions of Montague's logic.

II. Discourse Representation Theory


Discourse Representation Theory (hereinafter DRT) is the name of a semantic theory for natural language initially developed by Hans Kamp (1981) and independently by Irene Heim (1982). Two of its main characteristics are mentioned in the name. The first of them is that it is concerned with the semantic interpretation of discourses, not with sentences, as Montague semantics is. The other one has to do with the word representation. DRT postulates an intermediate level between linguistic expressions and reality in the relation of semantic interpretation. This is the level of semantic representation, where the information conveyed by a discourse is stored. As we have seen, in Montague grammar the intermediate level of translation to Typed Intensional Logic is not necessary, but in DRT the process from expressions to representations is not compositional, and this fact makes the level of representation an essential one in DRT.
Another important characteristic of DRT is that it is considered a bridge between two perspectives in semantics usually taken as opposite: the psycholinguistic view, which relates syntactic structures to mental representations, and the logical view, which relates syntactic structures to (models of) reality. In other words, DRT links the declarative or static view of meaning with the procedural or dynamic view.
But for our purposes it will be better to look at the empirical motivations of DRT, that is, at phenomena like pronoun interpretation and anaphoric relations between pronouns and indefinite terms, phenomena which Montague grammar has problems dealing with. In that way we will have the possibility of contrasting both theories on some points.
II. 1. Anaphora
Anaphoric pronouns are those pronouns that do not have a reference by themselves, but take it from another noun phrase. Consider the following example:

(1) John loves Mary and he kisses her.

In this sentence, we interpret he as referring back to John, and her as referring back to Mary. In Montague grammar these pronouns are analysed as variables bound by the noun phrase denotations. But sometimes anaphoric relations arise between a term and a pronoun in different sentences, that is, beyond the sentence boundary, as in (2):

(2) A man walks in the park. He whistles.

In the anaphoric reading of (2), the pronoun he in the second sentence is bound by the noun phrase a man in the first sentence. As we know, the analysis unit in Montague grammar is the sentence. But we can think of (2) as having the same meaning as (3):

(3) A man walks in the park and he whistles.

And in the framework of Montague grammar we have no problem at all in getting the right interpretation of (3) by translating it into the formula:

(4) ∃x(man'(x) ∧ walk-in-the-park'(x) ∧ whistle'(x))

which also expresses the meaning of (2).


But consider now that we can quite naturally add new sentences to the discourse (2), sentences in which the pronoun he occurs again, referring back to a man:

(5) A man walks in the park. He whistles. He is Mary's friend.


If we get the meaning of the first two sentences in (5), we cannot add the third sentence and at the same time get the correct meaning of (5), since he in the third sentence would not be bound by the term a man. To get a correct translation in Montague grammar we have to know the entire discourse first, and then consider it as a conjunction of sentences with the variables in the appropriate places. But this does not correspond very well with what intuitively seems to be the way we understand discourses. We do not need to know whether a discourse is finished, closed, in order to interpret it. We interpret sentences in a discourse step by step, incrementally, being able to interpret new sentences as they are being said.
This kind of problem, and others related to anaphoric phenomena such as the so-called 'donkey sentences', are the basic motivation for DRT.

II. 2. An informal description of DRT


The main characteristic of DRT is that it assumes that each sentence is provided with a representation. We will have a set of rules for converting syntactic structures into representations, which are called 'discourse representation structures' (DRSs). There are two notations for DRSs. We will use the less orthodox but most widely used pictorial notation. First, we will describe the construction of DRSs. Let us begin with a single sentence as an example:

(6) John loves a girl.

The first step is to introduce the sentence in a box, as in (7):

(7)
    +----------------------+
    | John loves a girl    |
    +----------------------+

The second step yields:

(8)
    +----------------------+
    | x                    |
    +----------------------+
    | John = x             |
    | x loves a girl       |
    +----------------------+

From (7) to (8), first, we have introduced the reference marker x; secondly, we have asserted the equivalence between the proper name John and the reference marker x; and then we have substituted x for John in the sentence. These three steps are what we have to do whenever we have a proper name.
Proper names introduce reference markers. Indefinite terms do too. Thus, the next steps consist of introducing a new reference marker y, substituting y for the indefinite term a girl in the sentence, and, finally, adding the condition girl(y). Thus, we build up the DRS for the sentence (6):


(9)
    +----------------------+
    | x  y                 |
    +----------------------+
    | John = x             |
    | x loves y            |
    | girl(y)              |
    +----------------------+
But, as we have said, DRT is not restricted to the semantics of sentences; it takes discourse (or sequences of sentences) as the analysis unit. We have also said that in DRT, unlike in Montague grammar, we do not need the discourse to be closed, and we can interpret sentences in discourse as they are being said. Let us suppose that immediately after (6) someone adds (10):

(10) She loves him too.

We can think of DRSs as places where information is stored. Having said (10), we add more information to the DRS (9). Now, it is important to notice that anaphorically interpreted pronouns introduce no new reference markers, but make use of the reference markers available in the DRS. In this case there is no doubt: the pronoun she corresponds to the reference marker y, and the pronoun him to x. The only step, then, is to add the information that y loves x, and we will have the DRS corresponding to the discourse consisting of sentences (6) and (10), namely:
(11)
    +----------------------+
    | x  y                 |
    +----------------------+
    | John = x             |
    | x loves y            |
    | girl(y)              |
    | y loves x            |
    +----------------------+
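Viewed as a data structure, a DRS of this kind is just a pair consisting of a set of reference markers and a list of conditions, and it grows as the discourse does. The Python sketch below is a toy of our own (not Kamp's construction algorithm); it simply re-creates (9) and then extends it to (11) when (10) comes in.

# Toy picture of a DRS as a growing store of markers and conditions (illustration only).
drs = {'markers': set(), 'conditions': []}

def add_marker(m):
    drs['markers'].add(m)

def add_condition(c):
    drs['conditions'].append(c)

# "John loves a girl" -- building DRS (9):
add_marker('x'); add_condition(('=', 'John', 'x'))   # proper name John
add_marker('y'); add_condition(('girl', 'y'))        # indefinite term 'a girl'
add_condition(('loves', 'x', 'y'))

# "She loves him too" -- extending to DRS (11): the pronouns introduce
# no new markers; they pick up the available markers y and x.
add_condition(('loves', 'y', 'x'))

# drs now holds markers {'x', 'y'} and the four conditions of (11).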
Then, in a DRS we have two types of things: a set of reference markers (in our example {x, y}), and a set of formulas ({John = x, x loves y, girl(y), y loves x}). The formulas in a DRS are called conditions. There can be atomic conditions, as those in our example, or complex conditions, as we will see later. Let us see now how DRSs are interpreted.
DRSs are considered as partial models of reality: (11), for instance, as a model containing two individuals which are asserted to have the properties specified by the conditions. Now, a DRS is true with respect to a total model M just in case the partial model it describes can be embedded in (or can be taken to be a part of) M.
The model M is defined as usual in first-order predicate logic. We define, then, the notion of a verifying embedding of a DRS into a model M as a function f which assigns elements of D (the domain of M) to the reference markers in the DRS in such a way that all conditions in the DRS are true in M.
A DRS is true in M if and only if there is at least one verifying embedding for that DRS in M.
Let us take DRS (11) as an example. This DRS is true if and only if there is a verifying embedding f which assigns individuals to the reference markers x and y in such a way that f(x) = John and f(y) is a girl loved by John who also loves John. It should be noticed here that the existential import of the indefinite description a girl is accounted for not by treating it as an existentially quantified term but by the existential quantification over verifying embeddings.
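Over a finite model, a verifying embedding can be found by brute force. The Python sketch below (with a toy model of our own invention) checks the truth of DRS (11) by searching for an assignment of domain elements to x and y that makes every condition true.

# Brute-force search for a verifying embedding of DRS (11) (toy model, illustration only).
from itertools import product

DOMAIN = {'john', 'sue', 'ann'}
GIRL   = {'sue', 'ann'}
LOVES  = {('john', 'sue'), ('sue', 'john')}

markers = ['x', 'y']
conditions = [
    lambda f: f['x'] == 'john',             # John = x
    lambda f: (f['x'], f['y']) in LOVES,    # x loves y
    lambda f: f['y'] in GIRL,               # girl(y)
    lambda f: (f['y'], f['x']) in LOVES,    # y loves x
]

def verifying_embeddings(markers, conditions, domain):
    """All functions f from the markers to the domain making every condition true."""
    for values in product(domain, repeat=len(markers)):
        f = dict(zip(markers, values))
        if all(cond(f) for cond in conditions):
            yield f

# The DRS is true in the model iff at least one verifying embedding exists:
print(list(verifying_embeddings(markers, conditions, DOMAIN)))
# [{'x': 'john', 'y': 'sue'}]  -- so DRS (11) is true in this model.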

II. 3. Quantified Noun Phrases


While Montague grammar treats noun phrases (NPs) uniformly as sets of properties, DRT makes an important distinction between indefinite NPs and definite NPs. Indefinite NPs simply introduce new reference markers and atomic conditions, as we have seen above. The case of definite NPs is more complex.(6) Here we will briefly examine the case of general or quantified NPs, those containing determiners like every, each, and many. Quantified NPs introduce complex conditions. Let us illustrate this with the famous donkey sentence:

(12) Every farmer who owns a donkey beats it.

(6) Most definite NPs can be used either anaphorically or deictically. Anaphoric uses of pronouns, as we have said before, link their interpretation with reference markers already introduced by the previous discourse. Deictic uses, on the other hand, link pronouns with reference markers for contextually salient individuals. So, the information stored in DRSs is not only that conveyed by the discourse, but also information given by the context and background assumptions.

The first step in the construction of a DRS is to introduce the sentence in a box. That will be the box of the main DRS. Now, having a quantified term, we have to introduce something new: the implication relation ⇒ between DRSs. Doing that we will have:

(13)
    +-------------------+       +-------------------+
    | x                 |       |                   |
    +-------------------+   ⇒   +-------------------+
    | farmer(x)         |       | x beats it        |
    | x owns a donkey   |       |                   |
    +-------------------+       +-------------------+

In the left box we have introduced a reference marker x, and formulas corresponding to the common noun and its relative clause. In the right box we have a formula which is the result of substituting the introduced reference marker x for the quantified NP in the sentence. In this process we have also introduced another relation between DRSs: the subordination relation. The two DRSs related by ⇒ are subordinate to the main DRS, and the one on the right of ⇒ is subordinate to the one on the left.
But we can go on applying further construction steps. In the left box we can apply the rule for indefinite NPs to the expression a donkey. Thus, we introduce a new reference marker y, substitute y for a donkey in the second condition, and add an atomic condition donkey(y), yielding (14):
(14)
    +-------------------+       +-------------------+
    | x  y              |       |                   |
    +-------------------+   ⇒   +-------------------+
    | farmer(x)         |       | x beats it        |
    | x owns y          |       |                   |
    | donkey(y)         |       |                   |
    +-------------------+       +-------------------+

Now, what happens with the pronoun it in the right box? We have said before that anaphorically interpreted pronouns are linked to reference markers available in the DRS. Now there is no reference marker either in its own DRS or in the main DRS. So it takes a reference marker from the left box, or, more precisely, from the DRS to which its DRS is subordinate. Then, we have:

(15)
    +-------------------+       +-------------------+
    | x  y              |       |                   |
    +-------------------+   ⇒   +-------------------+
    | farmer(x)         |       | x beats y         |
    | x owns y          |       |                   |
    | donkey(y)         |       |                   |
    +-------------------+       +-------------------+

Here the main DRS contains no reference markers, but only one complex condition. This complex condition will be true just in case every verifying embedding for the antecedent of the condition is also a verifying embedding for the consequent. This kind of analysis permits DRT to account for several problems related to donkey sentences.
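The same brute-force idea extends to the complex condition of (15). In the sketch below (again a made-up model of ours), the condition holds just in case every embedding that verifies the antecedent box also verifies the consequent box; since the consequent box introduces no reference markers of its own, this is exactly the universal check just described.

# Checking the complex condition of DRS (15) in a toy model (illustration only).
from itertools import product

DOMAIN = {'pedro', 'chiquita', 'farmer2', 'donkey2'}
FARMER = {'pedro', 'farmer2'}
DONKEY = {'chiquita', 'donkey2'}
OWNS   = {('pedro', 'chiquita'), ('farmer2', 'donkey2')}
BEATS  = {('pedro', 'chiquita'), ('farmer2', 'donkey2')}

antecedent_markers = ['x', 'y']
antecedent = [
    lambda f: f['x'] in FARMER,             # farmer(x)
    lambda f: (f['x'], f['y']) in OWNS,     # x owns y
    lambda f: f['y'] in DONKEY,             # donkey(y)
]
consequent = [
    lambda f: (f['x'], f['y']) in BEATS,    # x beats y
]

def embeddings(markers, conds, domain):
    for values in product(domain, repeat=len(markers)):
        f = dict(zip(markers, values))
        if all(c(f) for c in conds):
            yield f

# The condition of (15) is true iff every embedding verifying the antecedent
# box also verifies the consequent box:
print(all(all(c(f) for c in consequent)
          for f in embeddings(antecedent_markers, antecedent, DOMAIN)))
# True here; removing a pair from BEATS makes it False.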

II. 4. Conclusions
So far, we have tried to describe some of the main motivations and features of DRT, as well as the characteristics of one of its main tools: DRSs. Finally, we want to make a few general remarks on this semantic theory.
DRT can be seen as a theory for overcoming some empirical problems that arose in Montague grammar. As with any attempt of this kind, although its success in solving such problems is clear, the new theory faces new problems of its own. In any case, the step forward taken by DRT is important in several ways.
Leaving aside the passage from sentence to discourse as the analysis unit, first we should mention the dynamic character of the theory, which, together with the representational level, makes an explanation of natural language meaning and understanding in terms of a formal semantic theory more natural. Understanding discourse, as DRT suggests, seems to be a dynamic process of storing information (information not only conveyed by the discourse but also by context and background assumptions) at some representational level. Here we can find one of the most important values of this theory.
It should also be mentioned that, as Gamut 1991 discusses, DRT introduces a richer notion of meaning than the truth-conditional one of Montague grammar. In DRT, truth is not a basic notion upon which the other semantic notions are recursively built up. Truth in DRT is a derived notion, and thus the meaning of a discourse is not identified with its truth conditions but depends on the embedding conditions of the DRS corresponding to the discourse. Indeed, Gamut suggests that using this richer notion of meaning we could expect to succeed in unifying Montague grammar and DRT.
Finally, we should notice that the DRT framework is not limited to the study of the phenomena we have mentioned here. DRT is used in studies of tense and aspect (see, for instance, Kamp and Rohrer 1983, Partee 1984) or propositional attitudes (Asher 1986, Zeevat 1987). It also seems worth noting that some interesting proposals have been made with the aim of treating explicitly the partiality of information in DRT as a theory of information, in such a way that on its basis a proper account of epistemic models and intentionality could be given (Landman 1990).

III. Situation semantics


Although Barwise and Perry (1983) initially introduced situation semantics as a special kind of semantic theory which would overcome problems inherent in the model-theoretic treatment of natural language meaning, in developing the framework they found the need for a general theory of information: a mathematical theory of information flow, storage and transmission which is known as situation theory. Situation semantics is now considered the application of situation-theoretic tools to the study of meaning. We could even think of different situation semantics depending on what kind of language, or what fragment of what language, is the object of our semantic study.
We will first introduce the concepts of situation theory necessary to understand the main features of the situation semantics we will present here.

III. 1. Situation Theory


Situations are parts of the world. My being here typing at the keyboard, Mary running, yesterday's football match, my telling you "it's eleven o'clock"..., all such things are situations. They are primitive in the theory; that means they are not defined in terms of any other primitive. Human agents individuate or discriminate parts of the world as such, and guide their behavior according to them.
Information is in the world. It flows through situations. Let us examine a widely used example in order to illustrate this. Suppose a situation which consists of a cut tree trunk. This situation carries a lot of information: the age of the tree when it was cut, the tool used to cut it, the orientation with respect to the cardinal points... In virtue of the different (lawlike, natural, conventional, ...) relations between this type of situation and other types, our cut trunk carries information about quite different and remote situations such as those we have mentioned.
The situation-theoretic tool for representing units of this information is the notion of an infon.(7) An infon is composed of a relation, a set of argument places for the relation, a spatio-temporal location, and a polarity item i such that i ∈ {1, 0}. Thus, the following infon:

(1) << runs, Mary, l, 1 >>

represents the information that Mary runs at the location l. On the other hand, the information that Mary does not run at l is represented by the infon:

(2) << runs, Mary, l, 0 >>.

The infons (1) and (2) are called duals. We say that an infon σ is a fact when some situation s supports it.(8) We write s ⊨ σ, and it is read as s supports σ, or σ is made factual by s, or σ holds in s. It is obvious that if s ⊨ σ, then s ⊭ σ̄; that is, if some situation s supports an infon σ, then it does not support its dual σ̄. Now, we cannot claim the converse, i.e., that if s ⊭ σ then s ⊨ σ̄. In other words, if some situation does not support an infon, we have no basis to maintain that that situation supports its dual. That means that, given a situation, we cannot decide about the factuality of infons that do not hold in that situation. Here we are introducing the partiality of information, a central notion in situation theory and situation semantics.
There are also what we call parametric infons. These are infons in which not every element is defined. In a parametric infon, as in a usual one, there are a relation, some argument places for that relation (among which we include the spatio-temporal location) and a polarity; but instead of assigning appropriate objects to the argument places, we parametrise them: we assign a 'label' to the argument place. Let us look at an example:

(3) << laugh, a, b, 1 >>

(3) is a parametric infon. We have labelled the role of the laugher with the parameter a and the location with the parameter b. Every assignment of appropriate objects to (some or all of) the parameters in an infon is called an anchor.
(7) It must be stressed that infons are objects in the theory for representing units or items of information, but this does not mean that we think of them as being the real informational objects (whatever they are) which flow in the world.
(8) In one sense, we could think of defining a situation by the set of infons it supports, and that could be useful for some purposes. But, for many important reasons (see, for instance, Barwise 1989 or Devlin 1991), situations are regarded as primitives. Thus, the primitives in the theory are situations, relations, individuals, spatio-temporal locations and polarity items.
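These notions are easy to picture in code. The Python sketch below is a toy encoding of our own, not an implementation of situation theory: infons are tuples, a situation is modelled, for the purposes of the example only (recall footnote 8), as the set of infons it supports, and the last lines show a parametric infon together with an anchor.

# Toy encoding of infons and situations (illustration only).
# An infon: (relation, argument(s)..., location, polarity) with polarity in {1, 0}.
runs_mary = ('runs', 'Mary', 'l', 1)

def dual(infon):
    """The dual infon: same relation, arguments and location, opposite polarity."""
    return infon[:-1] + (1 - infon[-1],)

# A situation, here pictured as the (partial!) set of infons it supports.
s = {('runs', 'Mary', 'l', 1)}

def supports(situation, infon):             # s |= infon
    return infon in situation

print(supports(s, runs_mary))               # True:  s |= <<runs, Mary, l, 1>>
print(supports(s, dual(runs_mary)))         # False: s does not support the dual
# Partiality: infons not settled by s are simply left undecided by s.
print(supports(s, ('laughs', 'Mary', 'l', 1)))   # False
print(supports(s, ('laughs', 'Mary', 'l', 0)))   # False as well

# A parametric infon with parameters a (the laugher) and b (the location), and an anchor:
parametric = ('laugh', 'a', 'b', 1)
anchor = {'a': 'Mary', 'b': 'l'}
print(tuple(anchor.get(item, item) for item in parametric))   # ('laugh', 'Mary', 'l', 1)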


We will see later the use of parametric infons in situation semantics. Now it is worth establishing the difference between this notion and the notion of an unsaturated infon.
In an unsaturated infon there is neither an assigned object nor a parameter for some argument place. Nevertheless, such infons are considered well formed. Consider the following infon:

(4) << eats, John, l, 1 >>

It is natural to think of 'eat' as a relation with argument places for the eater as well as for the thing eaten (let alone the location). In (4) there is neither an object nor a parameter for the role of the thing eaten, but we can consider (4) as the information conveyed by an utterance of the sentence 'John is eating now'. So, for cases like this it is interesting to differentiate between parametrised and unsaturated infons.
We have introduced so far the two main concepts of situation theory: the situation and the infon. Let us see now how these ideas apply to the study of natural language meaning.

III. 2. Situation semantics

First of all, we have to say that situation semantics is concerned with the meaning of utterances, not of sentences, of natural language. It adopts the Austinian point of view, considering language as action. The meaning of a sentence is an important factor, but only one factor, in determining the meaning of an utterance made by an agent addressing some other agent in some particular circumstances. In fact, one of the main motivations of Barwise and Perry (1983) was to account for what they called the efficiency of language: the fact that one and the same expression or sentence can be uttered by different agents on different occasions to mean quite different things. Those acquainted with the classical boundaries between semantics and pragmatics will have noticed that situation semantics tries to cross over such boundaries.
Although there are some attempts to apply situation semantics to the study of the meaning of questions or commands, it is focused on the semantics of (utterances of) declarative sentences, like the two other semantic theories we have seen before.
The starting point is that utterances of sentences of, say, English are situations, in which we have a speaker, an audience, and some circumstances which will affect the meaning of the utterance. Like other kinds of situations, utterances carry information about other situations, often quite different and remote with respect to themselves. Let us see more carefully what kinds of things are involved in natural language utterance meaning.
Firstly, we have the utterance situation: the situation or context in which the utterance is made. Let us look at an example.(9) Suppose that Marcus says to Irina: A man is at the door. The utterance situation u is the immediate context in which Marcus utters the sentence and Irina hears it. It includes both Marcus and Irina and the duration of the utterance, and will contain everything necessary in order to identify such things as the door that Marcus is referring to. Thus, we have that

(5) u ⊨ << utters, Marcus, A MAN IS AT THE DOOR, l, 1 >> ∧
        << refers-to, Marcus, THE DOOR, D, l, 1 >>

where D is a door that is fixed by u.

(9) Not only the example but also the explanation of these concepts follows Devlin 1991, which differs a bit from, for example, Barwise and Perry 1983 or Gawron and Peters 1989.


In many cases the utterance is part of a wider discourse situation. As we are concerned with single utterances, it will be assumed that the utterance situation and the discourse situation coincide.
In any case, the discourse situation is part of a wider situation called the embedding situation. We could say that the embedding situation contains the part of the world of direct relevance to the discourse. Let us take up the example again to see what we mean. Suppose that the above utterance is made by Marcus as a request for Irina to open the door. If Irina acts according to the request and opens the door, a change will happen in the embedding situation. That is, if e denotes the embedding situation at the time of utterance and e' the embedding situation a bit later, we would have that
(6) e ⊨ << closed, D, l, 1 >>

(7) e' ⊨ << opens, Irina, D, l', 1 >>

We will also have (at least in many cases) a resource situation. Suppose, for instance, that Marcus says: The man I saw running yesterday is at the door. In saying this, Marcus is making use of a particular situation in which he saw a man running, in order to identify the man who is now referred to as being at the door. More precisely, if u is the utterance situation and M and D some individuals, then

(8) u ⊨ << utters, Marcus, Φ, l, 1 >> ∧
        << refers-to, Marcus, THE MAN, M, l, 1 >> ∧
        << refers-to, Marcus, THE DOOR, D, l, 1 >>

where Φ is the sentence

(9) THE MAN I SAW RUNNING YESTERDAY IS AT THE DOOR.

In order to refer to the individual M, as we said earlier, Marcus makes use of another situation r, which occurred the day before the utterance and in which there was a unique individual M such that

(10) r ⊨ << runs, M, l', 1 >>

And, finally, there is the described situation: that part of the world the utterance is about. Features of the utterance help to identify the described situation s:

(11) s ⊨ << present, M, l, 1 >>

where l represents the location of the door at the time of utterance.


III. 3. The Relational Theory of Meaning


In the situation-theoretic framework it is assumed that the main goal of asserting declarative sentences is to convey information. As we said earlier, certain lawlike relations between the rings of a tree stump and the age of the tree are what makes it possible for a certain tree trunk to carry information about the age of that very tree when it was cut. The same idea lies behind utterance situations. The lawlike (conventional) relationships between types of utterance situations and the types of situations they describe are what is considered meaning in natural language.
The sentence Φ: Alfred is at the door can be uttered by different persons to convey to different persons different information about different situations (with different Alfreds, different doors, at different times). Nevertheless, there remains an abstract relation between the type of utterances of that sentence and the type of situations described. We define the meaning of Φ as the abstract linkage ||Φ|| between two types of situations U and S such that

(12) U = [u | u ⊨ {<< speaking-to, a, b, l, 1 >>, << utters, a, Φ, l, 1 >>,
                   << refers-to, a, ALFRED, c, l, 1 >>, << refers-to, a, THE DOOR, d, l, 1 >>}]

where the parameters a, b and c stand for persons and the parameter d for doors, and

(13) S = [s | s ⊨ << present, c, d, l, 1 >>]

Thus, if we have an utterance situation u in which Marcus says Φ to Irina, then if u is of type U, there is some person, namely Marcus, who fills the parameter a of the speaker. Marcus utters Φ and, in particular, utters the word 'Alfred' to refer to some individual to which the parameter c anchors, and the expression 'the door' to refer to some particular door to which the parameter d anchors. Then, we can think of Marcus as having the information that Alfred is at the door and uttering Φ to convey this piece of information to Irina. Irina, hearing the utterance, will acquire the information that the situation s that Marcus' utterance is about is such that there is some particular person named Alfred who is at the door. That is, she acquires the information that the described situation s is of type S. That information is what is called the propositional content of the utterance. Namely,

(14) s ⊨ << present, A, l, 1 >>

where A stands for the individual referred to by Marcus as Alfred and l for the location of the door at the time of utterance. That is the information the utterance carries about the described situation s. And it is so by means of the meaning ||Φ|| of the sentence Φ, which provides the linkage between the utterance situation and the described situation, and makes it possible for an utterance of Φ to be informational.
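The relational picture itself can be sketched in code. In the toy below (our own construction, with invented situations and names such as 'door-23'), the meaning of Φ is represented by the pair of parametric constraint sets corresponding to (12) and (13); an utterance situation that is of type U under some anchor determines, via that same anchor, the propositional content (14) that the described situation must support.

# Toy sketch of the relational theory of meaning (illustration only).
# Parameters are the strings 'a', 'b', 'c', 'd' (speaker, addressee, Alfred, door).
U_constraints = [
    ('speaking-to', 'a', 'b', 'l', 1),
    ('utters', 'a', 'ALFRED IS AT THE DOOR', 'l', 1),
    ('refers-to', 'a', 'ALFRED', 'c', 'l', 1),
    ('refers-to', 'a', 'THE DOOR', 'd', 'l', 1),
]
S_constraints = [('present', 'c', 'd', 'l', 1)]

def instantiate(infon, anchor):
    """Replace parameters by the objects the anchor assigns to them."""
    return tuple(anchor.get(item, item) for item in infon)

def of_type(situation, constraints, anchor):
    """A situation (a set of infons) is of the given type under the anchor
    iff it supports every instantiated constraint."""
    return all(instantiate(i, anchor) in situation for i in constraints)

# A concrete utterance situation u (made-up data):
u = {('speaking-to', 'Marcus', 'Irina', 'l', 1),
     ('utters', 'Marcus', 'ALFRED IS AT THE DOOR', 'l', 1),
     ('refers-to', 'Marcus', 'ALFRED', 'Alfred', 'l', 1),
     ('refers-to', 'Marcus', 'THE DOOR', 'door-23', 'l', 1)}
anchor = {'a': 'Marcus', 'b': 'Irina', 'c': 'Alfred', 'd': 'door-23'}

print(of_type(u, U_constraints, anchor))       # True: u is of type U under this anchor
# The propositional content: the described situation must support
print(instantiate(S_constraints[0], anchor))   # ('present', 'Alfred', 'door-23', 'l', 1)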

III. 4. Conclusions
We have seen what the concept of meaning is for the case of a declarative sentence. The framework may also be used for the treatment of other speech acts (see Devlin 1991). The next step would be to analyse the meanings of the parts of the sentence. At this point, it is worth noticing that no assumption about the compositionality of meaning is adopted. The strategy in developing situation theory has been, from the beginning, to avoid any commitment to given methodological assumptions like this principle. At the same time, no assumption has been made about the linkage between syntax and semantics in natural language.(10)
In building up situation theory and semantics, notions such as information, its flow, meaning and understanding are studied from the pretheoretical ground, in order to avoid problems that do not belong to the phenomena under study, but to the theory used to model such phenomena.(11)
But some of the most interesting points for the kind of problems we are concerned with are the following ones. First, situations and relations are primitives. They are not extensionally defined in set-theoretic terms. This gives us the possibility of an interesting treatment of intensionality in a more direct and understandable way than in Montague Grammar or any other set-theoretically based semantic theory.
Secondly, there is the idea of partiality. Information transmitted by natural language utterances is rarely about the whole world or complete possible worlds, but only about some small parts of it (situations). This idea of partiality, as well as the treatment of truth as a derived notion in the theory, is common to situation theory and DRT.
Finally, we should mention the attempt to include in the theory aspects that until now had been considered as belonging to pragmatics: a respectable discipline, but one which was treated as if it had nothing to do with the standpoint of formal semantics. Situation theory and semantics, starting from the model-theoretic semantics tradition, has opened its doors to the treatment of 'formally intractable' pragmatic factors such as context and circumstances. But we will take up this point later.

IV. Concluding remarks


We have so far tried to present three of the most important and representative
theories in formal semantics for natural language. The first one, Montague
grammar, apart from the internal characteristics we have seen, has as its most
important contribution the following: it established the study of natural
language meaning by means of formal tools as a respectable and fruitful line of
research. Even with all its shortcomings, that is more than we usually expect from a
theory of any kind.
DRT and situation semantics came (almost at the same time) to go beyond the
problems that arose in Montague grammar and brought new perspectives to the study of
natural language meaning. With DRT, we passed from sentence to discourse, and
a dynamic view was introduced which corresponds more closely to the idea of what
understanding is. The explicit treatment of partiality (present only implicitly in DRT)
and the information-based explanation of what we do when we use language in
everyday communication are the most essential contributions made by Situation
Semantics.

(10) In Gawron & Peters 1990 and Pollard & Sag 1987, lexical-functional grammar and head-driven phrase structure grammar are, respectively, used as syntactic theories for situation semantics.
(11) This point is involved in the adoption by situation theory of Peter Aczel's non-well-founded set theory instead of ZF set theory.

The variety in terminology, notations and formal mechanisms in the three theories may have confused the reader, making it difficult to see the similarities and
differences among them. This confusion will be more dramatic for anyone
seeking the right semantic theory. How are we going to unify such (at least
apparently) different theories? This seems a very natural question for those
who believe in one true theory for each research topic. In fact, there is also a
general trend towards preferring unified and general theories. We could mention several
points to bear in mind in order to try to answer the question above.
The first one is related to a general discussion in linguistic theory: conceptualism
versus realism. Conceptualism is the view according to which meaning (and all
relevant aspects of language, from a linguistic point of view) has to do with a
mental system. This needs no more explanation for the reader acquainted with
Chomskyan syntactic theories. Realism, on the other hand, takes meaning as a relation
between language and reality; it does not claim that the mind has nothing to do with
language, but it does not concern itself with studying it. While conceptualist semanticists and,
generally, syntacticians adopt conceptualism, the tradition in formal semantics is
realist. In fact, it is difficult to see Montague grammar as an explanation of meaning
as a psychological matter. Barwise and Perry also take realism as an assumption of
Situation Semantics. But, as we have said earlier, one of the advantages of DRT is
that it can be seen as a bridge between these two (apparently) opposite approaches.
We could find here a way of overcoming the dispute, simply by avoiding it.
Another point has been suggested earlier. We could perhaps enrich the notion of
meaning present in Montague grammar (truth-conditional meaning) with a dynamic
notion like the one in DRT and use a dynamic logic to integrate both theories, as
Gamut 1991 does. The success of such a move remains to be seen, as does whether it
will also be useful for the case of situation semantics. It also seems quite natural to
think about a possible convergence between DRT and Situation Semantics, given
their common characteristics, such as the dynamic nature of meaning, partiality of
information, and the inclusion of contextual factors. Such an attempt at convergence is
what a paper like Cooper and Kamp 1991 pursues.
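To give a flavour of what such a dynamic notion of meaning looks like, here is a toy sketch; it is our own and is not drawn from Gamut 1991 or Cooper and Kamp 1991. The meaning of a sentence is taken to be an update function from information states to information states, so the two-sentence discourse 'A man walks in. He sits down.' is interpreted by composing updates left to right, and the pronoun succeeds only if the first sentence has introduced an accessible discourse referent.

```python
# A toy dynamic semantics: sentence meanings as update functions on info states.
from typing import Callable, Dict

InfoState = Dict[str, object]                 # discourse referents introduced so far
Update = Callable[[InfoState], InfoState]

def indefinite(name: str, value: object) -> Update:
    """'A man walks in': introduce a new discourse referent."""
    def update(state: InfoState) -> InfoState:
        return {**state, name: value}
    return update

def pronoun(name: str, condition) -> Update:
    """'He sits down': the pronoun must pick up an accessible referent."""
    def update(state: InfoState) -> InfoState:
        if name not in state:
            raise ValueError(f"no accessible antecedent for '{name}'")
        condition(state[name])
        return state
    return update

def interpret(discourse, state=None) -> InfoState:
    """Interpret a discourse by composing its updates left to right."""
    state = {} if state is None else state
    for sentence_update in discourse:
        state = sentence_update(state)
    return state

# "A man walks in. He sits down."
final = interpret([indefinite("x", "a man"),
                   pronoun("x", lambda ref: print(ref, "sits down"))])
print(final)
```

On such a view the static, truth-conditional content can be recovered from the conditions under which the update succeeds, which is one sense in which the dynamic notion enriches rather than replaces the Montagovian one.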
Taking the three theories from a general standpoint, one of the most remarkable
points is the 'pragmatic turn in semantics' brought about by DRT and especially by Situation Semantics. It seems a fact that trying to account for meaning in natural
language has broken the limits imposed by logic (not only first-order logic but also
more sophisticated systems such as Montague's intensional logic) on the notion of meaning
as truth-conditional and static, that is, immune to context changes. The boundaries between
semantics and pragmatics are fuzzier than ever, and the very idea of logic has become
much vaguer than even the most condescending Quine would have been prepared to imagine.
Someone will have to clarify what is going to be lost in such a 'pragmatic turn'.


References
Asher, N., 1986, "Belief in Discourse Representation Theory". Journal of Philosophical Logic 15.
Barwise, J., 1989, The Situation in Logic. Stanford: CSLI Lecture Notes 17.
Barwise, J. and Perry, J., 1983, Situations and Attitudes. Cambridge MA: MIT Press.
Barwise, J., M. Gawron, G. Plotkin, and S. Tutiya, 1991, Situation Theory and its Applications.
Vol. 2. Stanford: CSLI Lecture Notes 26.
Cooper, R. & H. Kamp, 1991, "Negation in Situation Semantics and Discourse Representation
Theory" in Barwise et al., 1991.
Devlin, K. J., 1991, Logic and Information. Cambridge University Press.
Dowty, D., R. Wall and S. Peters, 1981, Introduction to Montague Semantics. Dordrecht: Reidel.
Gallin, D., 1975, Intensional and Higher-Order Modal Logic. Amsterdam: North-Holland.
Gamut, L. T. F., 1991, Logic, Language and Meaning. Chicago: The University of Chicago
Press.
Gawron, M. and Peters, S., 1990, Anaphora and Quantification in Situation Semantics. Stanford: CSLI Lecture
Notes 19.
Halvorsen, P. K. and W. A. Ladusaw, 1979, "Montague's 'Universal Grammar': An
introduction for the linguist". Linguistics and Philosophy 3, 185-223.
Heim, I., 1982, The semantics of definite and indefinite noun phrases. Ph. D. dissertation,
University of Massachusetts, Amherst.
Kamp, H., 1981, "A theory of truth and semantic representation" in J. Groenendijk, T.
Janssen, and M. Stokhof, eds., Formal Methods in the Study of Language. Amsterdam:
Mathematical Centre. Reprinted in J. Groenendijk, T. Janssen and M. Stokhof, eds.,
Truth, Interpretation and Information. Dordrecht: Foris, 1984.
- - - - , and C. Rohrer, 1983, "Tense in texts" in R. Bäuerle et al., eds., Meaning, Use and
Interpretation of Language. Berlin: De Gruyter.
Landman, F., 1990, "Partial Information, Modality, and Intentionality" in P. Hanson, ed.,
Information, Language and Cognition. Vancouver: University of British Columbia Press.
Montague, R., 1970a, "English as a Formal Language" in B. Visentini et al., eds., Linguaggi
nella società e nella tecnica. Milan: Edizioni di Comunità. Reprinted in Montague,
1974.
- - - - , 1970b, "Universal Grammar". Theoria 36. Reprinted in Montague, 1974.
- - - - , 1973. "The Proper Treatment of Quantification in Ordinary English" in J.
Hintikka, J. Moravcsik, and P. Suppes, eds., Approaches to Natural Language. Dordrecht:
Reidel. Reprinted in Montague, 1974.
- - - - , 1974, Formal Philosophy: Selected Papers of Richard Montague. Edited and with an
introduction by R. H. Thomason. New Haven: Yale University Press.
Orlowska, E., 1988, "Montague Logic and its Extensions" in W. Buszkowski et al., eds.,
Categorial Grammar. Amsterdam: John Benjamins.
Partee, B., 1973, "Some transformational extensions of Montague Grammar". Journal of
Philosophical Logic 2.
- - - - , 1975, "Montague Grammar and Transformational Grammar". Linguistic Inquiry 6.
- - - - , 1984, "Nominal and Temporal anaphora". Linguistics and Philosophy 7.
Pollard, C., and I. Sag, 1987, Information-based Syntax and Semantics. Stanford: CSLI Lecture
Notes 13.
Zeevat, H., 1987, "A treatment of belief sentences in discourse representation theory" in J.
Groenendijk et al., eds., Studies in Discourse Representation Theory and the Theory of
Generalized Quantifiers. Dordrecht: Foris.
