
Copyright 2004, S. Marc Cohen. Revised 9/29/04.

Chapter 1: Atomic Sentences
We begin by describing a simple artificial language. It is technically what is known as a first-order
language; we'll call it FOL.
Although FOL can contain a much larger vocabulary than we'll use, we'll mostly be restricting ourselves
to a portion of FOL that can be used to describe the worlds we can build using the software program
Tarski's World.
At the very simplest level, FOL contains sentences built from two kinds of ingredients: individual
constants (names) and predicate symbols (property and relation words). Examples will make this clearer.
1.1 Individual constants
a, b, c, d, e, f, n1, n2, n3, etc.
We will use these as the names of the various blocks that inhabit the Tarski worlds we will be
examining. If we use one of these constants in describing a Tarski world, it must name some
actually existing block: a block that exists in the world that we are evaluating. Note these
requirements:
Every world must contain at least one block.
Any name that we use must name some block.
In a given Tarski world, no name refers to more than one block.
A block may have more than one name.
Some blocks may not have names.
1.2 Predicate symbols
They are listed on p. 21. Notice that they come in three arities.
Arity 1:
Cube, Tet, Large, etc.
Correspond to property words: "is a cube," "is a tetrahedron," "is large," etc.
Arity 2:
Smaller, Larger, LeftOf, SameSize, etc.
Correspond to relational words: "is smaller than," "is larger than," "is to the left of," "is the same size
as," etc.
Arity 3:
Between
Corresponds to the relational phrase "is between … and …."
Notice that the arity of a predicate is the same as the number of individual constants (names) it
takes to combine with the predicate to form a complete sentence.
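The arity requirement can be sketched in a few lines of Python (my own illustration, not part of Tarski's World; the arity table below is assumed from the predicate list just described):

```python
# Hypothetical sketch: a few blocks-language predicates and their arities.
ARITY = {"Cube": 1, "Tet": 1, "Large": 1,
         "Smaller": 2, "Larger": 2, "LeftOf": 2, "SameSize": 2,
         "Between": 3}

def well_formed(predicate, names):
    """An atomic sentence is well formed only if the number of names
    supplied matches the arity of the predicate."""
    return ARITY.get(predicate) == len(names)

print(well_formed("Cube", ["a"]))               # True
print(well_formed("Between", ["a", "b", "d"]))  # True
print(well_formed("Larger", ["b"]))             # False: Larger needs two names
```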
1.3 Atomic sentences
We write atomic sentences in the blocks language by combining a predicate (which always begins
with a capital letter) with, in parentheses, one or more individual constants (which always
begin with a lower-case letter). The number of individual constants matches the arity of the
predicate. For examples, look at the chart on p. 22.
Note that in writing atomic sentences, we use prefix notation:
Cube(a) Larger(b, c)
the predicate comes first, followed by (in parentheses) one or more names. The one exception is in
the case of the identity symbol, =. In this case, we use infix notation: a = b, rather than =(a, b).
Now it is time to start learning about atomic sentences, and when they are true and when they are
false. It is also time to start learning about the program Tarski's World.
Introduction to Tarski's World
1. Open the program. Click Start, Programs, LPL Software, Tarski's World 5.6. (Alternatively,
click on Tarski.exe. It's in the Tarski's World folder, inside the LPL Software folder.) You
will find an empty world and an empty sentence file. In the world, add two blocks, of
different shapes and sizes, e.g., a large cube and a small tetrahedron.
2. Name them a and b. (To name the cube a, select the cube, then put a check-mark in the box
labeled a in the Inspector and click on OK.)
3. In a new sentence file, write four sentences, describing their size and shape. You may type
them in, or click on the keyboard on the screen (faster, easier, and more accurate than
typing), or even copy (ctrl-C)-and-paste (ctrl-V) from another program.
Cube(a) Tet(b)
Large(a) Small(b)
4. Now, verify the four sentences (ctrl-F). The letter T that appears next to each sentence
tells you that the sentence makes a true statement about the world you have constructed.
5. Return to the world. Alter the shapes and sizes of the blocks so as to make all of the sentences
false.
6. Back to the sentences. Change them so that they still describe the shapes and sizes of the
blocks, but make them all true.
7. Notice the shape and size of a. Add another block of that size and shape, and name it c.
8. Now write the sentence a = c.
9. Predict its truth-value: do you expect it to be true? If so, why? Do you expect it to be false?
If so, why?
10. Verify the sentences again. Now you know that a = c is false. What can you do to the world
to make it true?
11. Now you see that for a = c to be true, a and c have to name the same block. The equal sign
(=) means one and the same object, not two objects that are exactly alike.
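The point about = can be mimicked in a small Python sketch (an illustration I am adding; this is an assumed toy representation, not Tarski's World's actual internals): a world maps each name to a block, and two names may share a block.

```python
# Assumed toy representation: a world maps each name to a block object.
large_cube_1 = {"shape": "cube", "size": "large"}
large_cube_2 = {"shape": "cube", "size": "large"}   # exactly alike, but distinct
small_tet = {"shape": "tet", "size": "small"}

world = {"a": large_cube_1, "b": small_tet, "c": large_cube_2}

def identity_true(world, n, m):
    """n = m is true just in case the two names name one and the same block."""
    return world[n] is world[m]

print(identity_true(world, "a", "c"))  # False: look-alike but distinct blocks
world["c"] = world["a"]                # now both names name the same block
print(identity_true(world, "a", "c"))  # True: one block, two names
```

The `is` check (object identity) rather than `==` (exact likeness) is what mirrors the meaning of the equals sign.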
Copyright 2006, S. Marc Cohen Revised 10/2/06
Chapter 2: The Logic of Atomic Sentences
2.1 Valid and sound arguments
Conclusion
An argument is a piece of reasoning (a sequence of statements) attempting to establish a
conclusion. The conclusion is what the arguer is trying to establish. It is indicated by words
like therefore, so, hence, thus, consequently. What immediately follows these words is usually
the conclusion.
Premises
The premises are the reasons the arguer gives in support of the conclusion. They may be given
before the conclusion, or after it. Premises are typically preceded by words like because, since,
after all.
Validity
In a valid argument, the conclusion follows from or is a logical consequence of the premises.
Here is our definition of validity:
An argument is valid if it is impossible for its premises
to be true and its conclusion false.
The word "impossible" is important here. The fact that an argument's conclusion is actually
true does not make the argument valid; validity requires that there be no possible
circumstance in which the premises would be true and the conclusion false.
Similarly, the fact that an argument contains a false premise means nothing about the
argument's validity or invalidity. Some arguments with false premises are valid, and others are
invalid. What matters is whether there is any possible circumstance in which the premises
would be true and the conclusion false.
Soundness
A sound argument is a valid argument with true premises.
So, every sound argument is valid, but not every valid argument is sound.
Fitch bar notation
In many books, arguments are written using the 3-dot "therefore" symbol, ∴. So, for example,
you might see:
Socrates is a man.
All men are mortal.
∴ Socrates is mortal.
In LPL, we'll use the Fitch bar notation. The premises are written above the
horizontal line (the Fitch bar), and the conclusion below:
Socrates is a man.
All men are mortal.
----------
Socrates is mortal.
Examples
Here are two examples of arguments: one valid, one invalid.
Example 1a: a valid argument
Cube(a)
Large(a)
SameShape(a, b)
----------
Cube(b)
Example 1b: an invalid argument
Cube(a)
Large(a)
SameShape(a, b)
----------
Large(b)
We'll return to these arguments later. We'll see how to prove the first one valid, and how to
show that the second one is invalid. Our method of proving validity is very different from our
method of showing invalidity.
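To make the contrast concrete, here is a brute-force search in Python (my own illustration, which ignores block positions and simplifies SameShape to shape equality): it hunts for a world in which the premises are true and the conclusion false.

```python
from itertools import product

SHAPES = ["cube", "tet", "dodec"]
SIZES = ["small", "medium", "large"]

def premises_true(a, b):
    # Cube(a), Large(a), SameShape(a, b)
    return a["shape"] == "cube" and a["size"] == "large" and a["shape"] == b["shape"]

cx_1a = []   # worlds refuting Example 1a (conclusion: Cube(b))
cx_1b = []   # worlds refuting Example 1b (conclusion: Large(b))
for sa, za, sb, zb in product(SHAPES, SIZES, SHAPES, SIZES):
    a = {"shape": sa, "size": za}
    b = {"shape": sb, "size": zb}
    if premises_true(a, b):
        if b["shape"] != "cube":
            cx_1a.append((a, b))
        if b["size"] != "large":
            cx_1b.append((a, b))

print(len(cx_1a))       # 0: no counterexample to 1a in this space
print(len(cx_1b) > 0)   # True: e.g. a a large cube, b a small cube
```

Of course, failing to find a counterexample in this small space is not itself a proof that 1a is valid; giving such proofs is what the next section is about.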
2.2 Methods of proof
A proof is a step-by-step demonstration showing that the conclusion follows from the premises. In a proof, a
series of intermediate conclusions is reached, leading in a chain from the premises to the
(ultimate) conclusion. The intermediate conclusions are also written below the Fitch bar.
At each step, there must be absolute certainty. That is, there must be no chance that any
conclusion (intermediate or otherwise) does not follow from the sentences it is inferred from. Our
steps must be such that there is never a possibility that we might be inferring a false sentence from
true ones.
Proofs involving the identity symbol
Our language so far contains only atomic sentences, which limits our ability to come up with
rules for deriving conclusions from premises. But we can take advantage of some features of
the identity relation to put our first two rules (concerning the symbol =) into play.
First, note that the identity relation (the relation that holds between a and b in virtue of
which a = b is true) is reflexive. That is, each thing is identical to itself. In other
words, sentences like b = b are always true.
Second, the identity relation is symmetrical. That is, if a = b, then b = a.
Third, the identity relation is transitive. That is, if a = b and b = c, then a = c.
Finally, if a = b, then whatever holds of a also holds of b. This is called the
indiscernibility of identicals.
We will enshrine these features of identity in our system of proof by introducing rules that
take advantage of them.
2.3 Formal proofs
We will be developing a deductive system for writing up formal proofs. We call the system F,
and we will be employing a computer program called Fitch that is a somewhat more user-friendly
version of F.
In a formal proof in F, we use the Fitch bar notation. The premises are written above the
(horizontal) Fitch bar; the subsequent steps (intermediate conclusions and the ultimate conclusion)
are written below the Fitch bar.
Each step in a formal proof must be entered in accordance with some precisely stated rule of the
formal system of rules. By applying a rule to some previous line or lines in a proof, we provide a
justification for entering a new step in a proof.
A justification, then, cites a rule and the lines to which the rule is being applied in order to
generate the line being introduced.
Our first rules can now be stated:
Identity Introduction (= Intro)
⊳ n = n
The triangle points to the sentence that the rule entitles you to enter. This rule says, in effect,
that you may enter a sentence of the form n = n at any point you wish. Obviously, this rule
embodies the principle of the reflexivity of identity.
Identity Elimination (= Elim)
P(n)
n = m
⊳ P(m)
This rule tells you that you may substitute m for n wherever you like, provided that you have
the sentence n = m. This rule embodies the principle of indiscernibility of identicals.
Notice that although the rule is called an elimination rule, nothing is really being eliminated.
The idea is that we have used (eliminated?) an identity sentence in the process of arriving at a
conclusion. That is, we are arguing from an identity sentence, and in that sense we are
eliminating it.
In F, each logical symbol has a pair of rules associated with it: an introduction rule, which
tells you how to get a sentence containing that logical symbol into a proof, and an elimination
rule, which tells you how to deduce something from a sentence containing that logical
symbol. (For this reason the rules in a system like F are sometimes called int-elim rules.)
Thus, = Intro tells us how to enter an identity sentence (we can enter a = a), and = Elim tells
us how to use an identity sentence (n = m) as a premise.
Don't worry that our two rules seem to have ignored the symmetry and transitivity of identity.
In fact, symmetry and transitivity follow from reflexivity and indiscernibility. That is, using
only = Intro and = Elim, you can prove that b = a follows from a = b, and that a = c follows
from a = b and b = c. (You will be proving transitivity in exercise 2.16 in H3.)
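For instance, here is how the symmetry half goes, written out in Fitch bar notation (a sketch I am adding of the proof the text alludes to; the line numbers are internal to this proof):

```
1. a = b          Premise
   ----------
2. a = a          = Intro
3. b = a          = Elim: 2, 1
```

Line 3 substitutes b for the first occurrence of a in line 2, as licensed by the identity in line 1.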
For an illustration of how = Elim works, open Ch2Ex3.prf. Point the focus slider at line 3 and
click on both premises; they will be highlighted, meaning that they are your support sentences.
Then choose rule = Elim, and click on Check Step. Notice which sentence Fitch inserts, by
default. Is this the sentence you expected? (Perhaps you were surprised to see both
occurrences of a replaced by b). Try entering a new line that makes only one replacement in
line 1, and ask Fitch to check it out. (Be sure to highlight your two support lines by clicking on
them.) Then enter yet another line that makes a different single replacement in line 1 and have
Fitch check it out. You will notice that = Elim licenses all three of these inferences.
Please be aware that F (unlike Fitch) is a very strict system. Its rule = Elim permits us to
substitute the name that occurs to the right of the equals sign for the one that occurs to the
left, but does not permit us to substitute the name on the left for the one on the right. That is,
the rule does not strictly apply to the pair of sentences Cube(b) and a = b. It only applies to
the sentences Cube(a) and a = b. Since the Fitch program is more liberal about this fine detail
than F is, we will be able to ignore it when we're using Fitch.
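The strict version of = Elim just described can be sketched in Python (a toy illustration I am adding; atomic sentences are represented here as predicate-plus-names pairs, which is an assumption of the sketch):

```python
def strict_elim(sentence, identity):
    """F's strict = Elim: given the identity n = m, replace occurrences
    of the LEFT name n by the RIGHT name m.
    Sentences are (predicate, names) pairs, e.g. ("Cube", ("a",))."""
    n, m = identity
    pred, names = sentence
    return (pred, tuple(m if x == n else x for x in names))

# From Cube(a) and a = b, strict F licenses Cube(b):
print(strict_elim(("Cube", ("a",)), ("a", "b")))   # ('Cube', ('b',))

# From Cube(b) and a = b it licenses nothing new: b is not the left name.
print(strict_elim(("Cube", ("b",)), ("a", "b")))   # ('Cube', ('b',)) unchanged
```

Fitch's more liberal rule would also allow the reverse substitution, from Cube(b) and a = b to Cube(a).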
2.4 Constructing proofs in Fitch
You try it
Work the problem on p. 58, using the file Identity 1 (it's in the Fitch Exercise Files folder). To
see what your proof should look like, open the file Proof Identity 1.prf. (Either click on the
link or find the file on the Supplementary Exercises page of the course web site.)
Ana Con
This is a mechanism that is built into Fitch. It basically checks to see whether a conclusion
does indeed follow from its premises. Ana Con has some limitations: it does not understand
the predicates Adjoins and Between, and some complicated arguments may stump it.
As we will see, Ana Con uses a broader notion of logical consequence than is strictly allowed
in FOL. For example, in FOL we cannot deduce Larger(a, b) from Smaller(b, a). This is
because this inference depends on the meaning of the predicates, and FOL is ignorant of the
meanings of the predicates in the arguments it examines.
But given the meanings of Larger and Smaller, we may note that it is not possible for the first
sentence to be true and the second false. So there is a clear sense in which the inference in
question is valid. Ana Con takes the meanings of the predicates into account. So well say that
Larger(a, b) is an analytic consequence of Smaller(b, a), even though it is not a first-order
consequence of it. Try to show this in Fitch by opening Ch2Ex2.prf and using Ana Con to
complete the proof.
2.5 Demonstrating nonconsequence
We don't use proofs
We do not give proofs of nonconsequence; instead, we establish nonconsequence by means of
a counterexample. This is because of the following fact:
When we establish that an argument is valid, we establish something quite general. That is,
that it is impossible for the premises to be true and the conclusion false. To put it another way,
we establish that in every possible situation in which the premises are true, so is the
conclusion.
Conversely, to establish that an argument is invalid, we must show that it is not valid. That is,
that it is possible for the premises to be true and the conclusion false. To put it another way,
we must establish that there is some possible situation in which the premises are true and the
conclusion is false.
So when we show an argument to be invalid, we need not prove anything general. It is
sufficient to describe a possible situation in which the premises are true and the conclusion is
false.
Since demonstrating nonconsequence does not involve proofs, we will not be using the
program Fitch to show that an argument is invalid. For example, suppose we are using Fitch,
and we examine some purported proof of a given argument, and we see that the proof contains
a mistake or a misapplication of the rules. That doesn't show the argument to be invalid.
Perhaps it is just a faulty proof of a valid argument! In that case, some other proof could be
found.
But we can use Fitch's Con mechanisms to tell us that an argument is invalid. Let's apply the
Ana Con mechanism to our previous examples, Example 1a and Example 1b. To see how,
open Ch2Ex1a.prf and Ch2Ex1b.prf.
We construct counterexamples
To demonstrate nonconsequence, we use the program Tarskis World. This program lets us
create counterexamples: possible situations, or worlds, in which the premises of an argument
are true and its conclusion is false.
We'll continue with Examples 1a and 1b. Open Ch2Ex1a.sen, Ch2Ex1b.sen, and
Ch2Ex1.wld. (1a) is valid; (1b) is invalid. If we change the world slightly, we can make the
conclusion of (1b) false while leaving the premises true. (Just make b into a small cube.) But
there is no way of making the conclusion of (1a) false while leaving the premises true, for that
is a valid argument, and any world in which its premises are true will also make its conclusion
true.
Now do the "You try it" on p. 64 to construct your own demonstration of nonconsequence. You
will be constructing a counterexample to Bill's Argument.
If you solved this problem, you should have ended up with a world that looks something like
this: Bill's Argument.wld.
Deductive vs. Inductive Reasoning
In this course, we will be studying deductive reasoning, where we try to determine whether a given
conclusion does or does not follow from a certain set of premises. But a good deal of reasoning is
not deductive; often, one is interested in something weaker than absolute certainty. One may be
interested in whether a set of premises makes the truth of a conclusion more probable, rather than
in whether it guarantees the truth of the conclusion.
For a good illustration of the difference between these two modes of reasoning, look at the world
Deductive vs Inductive.wld and the accompanying sentence file Deductive vs Inductive.sen.
We can't see whether there is a block behind the medium cube in the column on the right, but we
know from the fact that the sentences in the file are true that there must be one there. For we know
that there is a block named b that is in the same column as a and the same row as c.
Can we tell anything about the size and shape of b? Using inductive reasoning, we might conclude
that b is a cube. After all, in the left-hand column, all the blocks are of the same shape, and in the
middle column, all the blocks are of the same shape. So one might reason, inductively, that the third
column will be like the other two in this respect. If it is, the hidden block will be a cube, for the two
that are visible are cubes. But there is no certainty here, for we can imagine a world in which b is
not a cube.
Can we tell what size b is? Here, we can do better. For b must be small. If b were medium or large,
it would be at least partially visible. But we cannot see anything of b. Therefore, it must be small.
Here, we have used deductive reasoning to establish that a certain conclusion follows from the
information that we already have (and not just that it is more likely to be true, given that
information).
So we have given an inductive argument that b is likely to be a cube, and a deductive argument
that b must be small. We can now rotate the world 90 degrees (or switch to a 2-D view) to find out
the size and shape of b.
As to the size, there can be no surprises; there's no possibility that b can be anything but small,
consistent with the information we already have. But as to the shape, we may well be in for a
surprise.
Note that this is a feature of all inductive arguments: no matter how good the argument is, there is
always a possibility, however remote, that the conclusion may be false even though all the premises
are true.

12. Write these additional sentences:
Adjoins(a, b) FrontOf(a, b)
SameSize(a, b) Between(a, b, d)
13. Now play with the blocks (move them around) and verify the sentences (ctrl-F) each time
you move them. That way, you'll see under what conditions these sentences are true, and
learn first-hand the meanings of the predicates. Write some more sentences, using the other
predicates in the blocks language, and continue to experiment.
14. You will notice, for example:
Adjoins(a, b) requires that a and b be on squares that share a side; they cannot be
diagonally adjacent.
FrontOf(a, b) requires no more than that a be closer to the front than b; it does not have
to be anywhere near b, or even in the same row or column.
A sentence containing a name that does not name any block in a given world does not
have any truth value in that world. To make Between(a, b, d) have a truth value, we had
to assign the name d to one of the blocks in the world.
Between(a, b, d) requires that a, b, and d be in a straight line: either in the same row,
column, or diagonal. Note that it is the first named block (a in the sentence above) that is
the one in the middle.
If you try to move blocks in such a way that a large block adjoins another block, you
cannot do it! In Tarski's World, no large block can adjoin any other block. (That is
because the large blocks are so large that they overlap their borders and infringe on the
adjacent block.)
SameSize(a, a), SameShape(a, a), SameRow(a, a), SameCol(a, a), and a = a are
always true.
Larger(a, a), Smaller(a, a), Adjoins(a, a), FrontOf(a, a), BackOf(a, a), RightOf(a, a),
LeftOf(a, a), and a ≠ a are always false.
Between(a, a, a), Between(a, a, b), Between(a, b, a), and Between(b, a, a), etc. are
always false. A Between sentence cannot be true unless it contains three different
names. (Although even then it may still be false.)
15. These facts all express features of the meanings of the predicates in the blocks language,
which closely (although not exactly) match the meanings of their English counterparts. For
example, it is part of the meaning of larger than that a thing cannot be larger than itself; it is
part of the meaning of is in the same row as that a thing cannot fail to be in the same row as
itself.
16. The predicates of the blocks language are determinate, not vague. There is no gradation of
sizes between small and medium, and any two objects that are both small are considered
to be of the same size. Hence, every sentence of the blocks language is either true or false.
Nor are there degrees of truth and falsity; a sentence is either (entirely) true or (entirely)
false, and no true sentence is truer than another.
Be sure to do the "You try it" on p. 24.
Copyright 2004, S. Marc Cohen Revised 10/7/04
Chapter 3: The Boolean Connectives
These are truth-functional connectives: the truth value (truth or falsity) of a compound sentence formed
with such a connective is a function of (i.e., is completely determined by) the truth value of its
components.
3.1 Negation symbol: ¬
The negation of a true sentence is false; the negation of a false sentence is true. This information is
recorded in the truth-table on p. 69. (Here and in other such tables we will abbreviate true by T
and false by F.)
P    ¬P
T    F
F    T
This table tells us that the negation of a sentence has the opposite truth value.
Some terminology: if P is atomic, then both P and ¬P are called literals. Thus, Cube(a) and
¬Cube(a) are literals, but ¬¬Cube(a) is not a literal.
3.2 Conjunction symbol: ∧
Writing conjunctions in FOL and in English
In English, conjunction is expressed by "and," "moreover," and "but."
George is wealthy and John is not wealthy
George is wealthy but John is not wealthy
are both translated in FOL as: Wealthy(george) ∧ ¬Wealthy(john)
Note that we read this FOL sentence as: "Wealthy George and not wealthy John." This way of
reading FOL sentences will make it much easier later when we come to write them using
Tarski's World.
∧ vs. "and"
In English, "and" often conveys a temporal meaning: "and then" or "and next." Thus, these
aren't equivalent:
Max went home and Claire went to sleep
Claire went to sleep and Max went home
The first suggests that Claire retired after Max left; the second suggests that Max didn't leave
until after Claire retired.
But in FOL, the following sentences are equivalent:
WentHome(max) ∧ WentToSleep(claire)
WentToSleep(claire) ∧ WentHome(max)
That is, ∧ requires nothing more than joint truth, not temporal order.
The semantics of ∧
See the truth table for ∧ on p. 72.
P    Q    P ∧ Q
T    T    T
T    F    F
F    T    F
F    F    F

This table shows that a conjunction P ∧ Q is true in just one case: the case in which P is true
and Q is true.
To see how this works, try playing the game in Tarski's World. Do the "You try it" on p. 72.
∧ in FOL where there's no corresponding English connective
How do we translate "d is a large cube" into FOL? Although the English sentence has no
connective, we treat it as if it had an "and" in it: d is a cube and d is large. The advantage of
this is that it makes translation easy; our FOL translation looks like this:
Cube(d) ∧ Large(d)
We can do this because in Tarski's World, we treat the size of an object as being entirely
independent of its shape. Whether an object is a cube or a tetrahedron has no effect on
whether it is counted as large, medium, or small.
This approach to translation into FOL keeps things simple, but it does not always give
satisfactory results. Suppose we try putting "Dumbo is a small elephant" into FOL as:
Elephant(dumbo) ∧ Small(dumbo)
But small elephants are still large objects, so one might plausibly assert: "Although Dumbo is a
small elephant, Dumbo is large." If we put this into FOL using the scheme above, we get:
Elephant(dumbo) ∧ Small(dumbo) ∧ Large(dumbo)
This translation, however, is problematic. For one thing, this FOL sentence never comes out
true, since nothing can be simultaneously, and without qualification, both small and large.
To confirm this, try the following experiment: open Fitch and start a new proof with no
premises. Add a new line, and enter the sentence
¬(Elephant(dumbo) ∧ Small(dumbo) ∧ Large(dumbo)). Now justify the line using
Ana Con and click on Check Step. You will see that it checks out, which means that
it is always true. Hence the sentence it negates,
Elephant(dumbo) ∧ Small(dumbo) ∧ Large(dumbo), is always false.
For another thing, it is unclear what this FOL sentence is supposed to mean. Since the order of
the conjuncts in an FOL conjunction has no effect on its meaning, we could translate it equally
well in either of the following ways:
Although Dumbo is a small elephant, Dumbo is large.
Although Dumbo is a large elephant, Dumbo is small.
These English sentences are certainly not equivalent, so they cannot both correspond to the
same FOL sentence. In English, Dumbo is a small elephant really means that Dumbo is small
for an elephant. But there is no way to express Dumbo is small for an elephant in FOL using
only the predicates Small and Elephant and the truth-functional connectives.
3.3 Disjunction symbol: ∨
Writing disjunctions in FOL and in English
In English, disjunction is expressed by "or."
George is wealthy or John is wealthy
Either George or John is wealthy
are both translated in FOL as: Wealthy(george) ∨ Wealthy(john)
We read this FOL sentence as: "Wealthy George or wealthy John."
∨ vs. "or"
In English, "or" is sometimes used in an exclusive sense, meaning one or the other but not
both. But it will be our practice to use it in the (more common) inclusive sense, in which it
means one or the other or both. (This is sometimes called "and/or.")
Thus, in our example above, the sentence comes out true in the event that both George and
John are wealthy. If we need to say that exactly one of the two is wealthy (either George or
John but not both), we can always write in FOL:
(Wealthy(george) ∨ Wealthy(john)) ∧ ¬(Wealthy(george) ∧ Wealthy(john))
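The inclusive/exclusive contrast is easy to check mechanically; here is a small Python sketch (my own added illustration):

```python
def inclusive_or(p, q):
    """FOL's inclusive disjunction: true if at least one disjunct is true."""
    return p or q

def exclusive_or(p, q):
    """One or the other but not both: (P or Q) and not (P and Q)."""
    return (p or q) and not (p and q)

# The two disagree only in the row where both disjuncts are true:
for p in (True, False):
    for q in (True, False):
        print(p, q, inclusive_or(p, q), exclusive_or(p, q))
```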
The semantics of ∨
See the truth table for ∨ on p. 75.
P    Q    P ∨ Q
T    T    T
T    F    T
F    T    T
F    F    F

This table shows that a disjunction P ∨ Q is true in three cases: P true and Q true, P true
and Q false, and P false and Q true. That is, it is false in just one case: the case in which P is
false and Q is false.
To see how this works, try playing the game in Tarski's World. Do the "You try it" on p. 76.
Some connectives that are not truth-functional
Lots of English connective words are not truth-functional. That is, if you use one of these words as
the main connective in a compound sentence, the truth-value of the resulting sentence does not
depend in all cases solely on the truth-values of the component sentences. An easy way to see that
a connective is not truth-functional is to try to construct a truth-table for a compound in which it is
the main connective. You will notice that you cannot complete all the rows.
Claire fed Scruffy while Max slept.

Fed(claire, scruffy)    Slept(max)    Fed(claire, scruffy) while Slept(max)
T                       T             ?
T                       F             F
F                       T             F
F                       F             F
In this case, we know that if either component is false, the whole compound must be false. For
example, if Claire did not feed Scruffy, it is false that she fed Scruffy while Max slept. The
problem occurs when both components are true. It may be true that Claire fed Scruffy and true that
Max slept, and nothing follows about whether the feeding and sleeping took place at the same time
or not. The truth of both component sentences is compatible with either the truth or the falsity of
the entire compound sentence.
Claire went home because she found Max boring.

WentHome(claire)    Bored(max, claire)    WentHome(claire) because Bored(max, claire)
T                   T                     ?
T                   F                     F
F                   T                     F
F                   F                     F
Once again, we know that if either component is false, the whole compound must be false. For
example, if Claire did not find Max boring, it is false that she went home for that reason. Again,
the problem occurs when both components are true. It may be true that Claire went home and true
that Max bored her, and nothing follows about whether or not his boring her was the reason she
went home. The truth of both component sentences is compatible with either the truth or the falsity
of the entire compound sentence.
3.4 Remarks about the game
The game rules for ¬, ∧, and ∨ are summarized on p. 78. There is no need to memorize them,
though, as Tarski's World will always tell you what your commitments are (after you choose your
initial commitment), and will tell you when it is your turn to move.
To play the game and be sure of winning, you will need to know not only that a sentence has the
truth value you say it has (your commitment), but also why it does. This means, for example, that if
you know that a disjunction is true, you will need to know which disjunct is true in order to be sure
of winning. Similarly, if you know that a conjunction is false, you will need to know which
conjunct is false in order to be sure of winning.
Sometimes, however, you may know the truth value of an entire compound sentence without
knowing the truth values of its components. Suppose you have the sentence Cube(d) ∨
¬Cube(d). You know that this is true even though you don't know which disjunct is the true one.
If d is a cube, the left disjunct is true; otherwise, it's the right disjunct that's true. But you may not
be able to see d; perhaps it is small, and hidden behind a larger object. Try exercise 3.11 to see how
this works.
3.5 Ambiguity and parentheses
In FOL, we need to be able to avoid ambiguities that can arise in English. The form
P and Q or R
is ambiguous. Does it mean P, and either Q or R? Or does it mean either both P and Q, or R?
Notice how the auxiliary words either and both, working with or and and, respectively, remove the
ambiguity. (You will see these at work in exercise 3.21, problems 1, 8, and 10.)
FOL does not have such auxiliary words. We use parentheses to remove ambiguity:
P ∧ (Q ∨ R)        (P ∧ Q) ∨ R
The effect is the same. The parentheses remove the ambiguity by showing which is the main
connective, and which the subsidiary. (As we will say, they show which connective has the larger
scope.)
Scope is especially important with negation. Compare these sentences:
¬Cube(a) ∧ Cube(b)    ¬(Cube(a) ∧ Cube(b))
The first says that a is not a cube, but b is a cube. The second does not give us such definite
information about a and b. All it tells us is that they aren't both cubes. That is, either a is not a
cube, or b is not a cube, or perhaps neither is a cube. The first is a much more informative claim.
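The difference in scope can also be checked mechanically. Here is a small Python sketch (my own illustration, not part of the text) that treats Cube(a) and Cube(b) as two independent atoms A and B and counts the rows on which each reading comes out true:

```python
# Compare the narrow-scope and wide-scope readings of the negation.
from itertools import product

def narrow(A, B):   # (not A) and B: the negation applies only to A
    return (not A) and B

def wide(A, B):     # not (A and B): the negation applies to the whole conjunction
    return not (A and B)

narrow_rows = [(A, B) for A, B in product([True, False], repeat=2) if narrow(A, B)]
wide_rows   = [(A, B) for A, B in product([True, False], repeat=2) if wide(A, B)]

# The narrow-scope sentence is true in exactly one of the four cases, the
# wide-scope sentence in three, which is why the first claim is more informative.
print(len(narrow_rows), len(wide_rows))
```

Every row that makes the narrow-scope sentence true also makes the wide-scope sentence true, but not conversely.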
Practice
Let's check out some sentences in a sample world. Download the files Sentences TF1 and
World TF1 from the course web site (they're on the Supplementary Exercises page). Then
predict the truth values of these sentences in this world, and play the game with Tarski's
World.
3.6 Equivalent ways of saying things
There are many different ways of saying the same thing in FOL. That is, for any given FOL sentence,
we can come up with a different but equivalent FOL sentence. (Equivalent here means comes out
true or false in exactly the same cases, or has the same truth table.) Here are some of the more
common equivalent pairs. (⇔ represents equivalence.)
¬¬P ⇔ P    Double negation
¬(P ∧ Q) ⇔ ¬P ∨ ¬Q    DeMorgan's law
¬(P ∨ Q) ⇔ ¬P ∧ ¬Q    DeMorgan's law
Note that these can be combined to yield more equivalences:
P ∧ Q ⇔ ¬(¬P ∨ ¬Q)    ∧ defined in terms of ∨ and ¬
P ∨ Q ⇔ ¬(¬P ∧ ¬Q)    ∨ defined in terms of ∧ and ¬
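If you want to verify these equivalences for yourself, a short Python sketch (my own, not from the text) can enumerate the joint truth table:

```python
# Two sentences are equivalent if they agree on every row of a joint truth table.
from itertools import product

def equivalent(f, g, n):
    """True if sentences f and g agree on all 2**n assignments to n atoms."""
    return all(f(*row) == g(*row) for row in product([True, False], repeat=n))

# Double negation
assert equivalent(lambda p: not (not p), lambda p: p, 1)
# DeMorgan's laws
assert equivalent(lambda p, q: not (p and q), lambda p, q: (not p) or (not q), 2)
assert equivalent(lambda p, q: not (p or q), lambda p, q: (not p) and (not q), 2)
# Conjunction defined in terms of disjunction and negation
assert equivalent(lambda p, q: p and q, lambda p, q: not ((not p) or (not q)), 2)
print("all equivalences check out")
```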
3.7 Translation
Under what conditions do we count an FOL sentence as a correct translation of an English
sentence? The only rule is that the two sentences must agree in truth value in all possible
circumstances.
Notice that this requires more than that the two sentences both be true, or both be false. Agreement
in (actual) truth value may be due to accidental circumstances that happen to obtain. The two
sentences must agree even if you change the facts.
This means that any two equivalent FOL sentences will be equally correct translations of any
English sentence that either of them correctly translates. That is, if an FOL sentence S is a correct
translation of an English sentence E, and S is equivalent to some other FOL sentence S′, then S′
also counts as a correct translation of E.
A result of this policy is that some rather unnatural sounding translations will count as correct.
Consider the English sentence "b is a cube and c is a tetrahedron." The most natural translation of
that into FOL is:
Cube(b) ∧ Tet(c)
But given the DeMorgan and Double Negation equivalences noted above, we can see that:
(Cube(b) ∧ Tet(c)) ⇔ ¬(¬Cube(b) ∨ ¬Tet(c))
Hence, our sentence is equally accurately translated as:
¬(¬Cube(b) ∨ ¬Tet(c))
But even though this is (technically) correct, it is not the best or most natural translation, for it
introduces three negations and a disjunction, none of which were present in the English original.
Still, both Tarski's World and I will follow the policy of counting any translation that is equivalent
to the right one as correct.
[Note that later in the term, when the sentences get more complicated, the Grade Grinder may not
always be able to tell whether an answer you give is equivalent to the correct answer. If that
happens, it will tell you that it "timed out", i.e., that it couldn't figure out whether your answer was
correct. Bring any such cases to your instructor for evaluation.]
Chapter 4: The Logic of Boolean Connectives
4.1 Tautologies and logical truth
Logical truth
We already have the notion of logical consequence. A sentence is a logical consequence of a
set of sentences if it is impossible for that sentence to be false when all the sentences in the set
are true. We will define logical truth in terms of logical consequence.
Suppose a given sentence is a logical consequence of every set of sentences. That means that
it is impossible for that sentence to be false: it comes out true in every possible circumstance.
Hence:
A sentence is a logical truth if it is a logical consequence of every set of
sentences.
Tautology
A tautology is a logical truth that owes its truth entirely to the meanings of the truth-
functional connectives it contains, and not at all to the meanings of the atomic sentences it
contains.
For example, Cube(a) ∨ ¬Cube(a). No matter what shape a is, this sentence comes out true.
And it owes its truth entirely to the meanings of or and not. You could replace Cube with any
other predicate and a with any other name, and the resulting sentence would still be true.
Indeed, you could replace Cube(a) with any other sentence and the resulting sentence would
still be true.
Tautologies and truth tables
To show that an FOL sentence is a tautology, we construct a truth table. Look at the example
of the table for Cube(a) ∨ ¬Cube(a) on p. 96.
Features of truth tables
The number of rows in the table for a given sentence is a function of the number of
atomic sentences it contains. If there are n atomic sentences, there are 2ⁿ rows.
Each row represents a possible assignment of truth values to the component atomic
sentences.
On each row, the values of the atomic sentences determine the values of the compounds
of which they are components. The values of the compounds of atomic sentences in turn
determine the values of the larger compounds of which they are components. In the end,
a unique value for the entire sentence is determined on each row.
A tautology is a sentence that comes out true on every row of its truth table.
Do the "You try it" on p. 100: open the program Boole and build the truth table. You will
confirm that the sentence given there is a tautology.
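The row-by-row test Boole performs can be sketched in a few lines of Python (my own illustration of the procedure, not something from the text or from Boole itself):

```python
# A tautology is a sentence that comes out true on every row of its truth table.
from itertools import product

def is_tautology(sentence, n_atoms):
    """sentence is a function of n_atoms booleans; is it true on all 2**n rows?"""
    return all(sentence(*row) for row in product([True, False], repeat=n_atoms))

# Cube(a) ∨ ¬Cube(a): one atomic sentence, two rows, true on both.
assert is_tautology(lambda a: a or (not a), 1)

# Cube(a) ∨ Tet(a) ∨ Dodec(a), treated truth-functionally as three
# independent atoms: 2**3 = 8 rows, and it is false on the row where all
# three are false, so it is not a tautology.
assert not is_tautology(lambda c, t, d: c or t or d, 3)
print("checked")
```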
Tautologies, logical truths, and Tarski's World necessities
When we looked at the sentence Cube(a) ∨ ¬Cube(a), we noted that it owes its truth
entirely to the meanings of or and not. You could replace Cube (both occurrences, of course)
with any other predicate, and the resulting sentence would still be true. Indeed, you could
replace the two occurrences of Cube(a) with any other sentence and the resulting sentence
would still be true.
Contrast this with Cube(a) ∨ Tet(a) ∨ Dodec(a). Although this sentence always comes out
true in Tarski's World, we can imagine a circumstance in which it is not true: suppose that a
is a sphere. So this sentence is not even logically true. We can say that it is a Tarski's World
necessity, because it comes out true in every world in Tarski's World. (It is a special feature
of Tarski's World that there are no objects other than cubes, tetrahedra, and dodecahedra.)
So Tarski's World necessities form a large set of sentences that includes the tautologies as a
(smaller) part: every tautology is a Tarski's World necessity, but not conversely.
Note that there are some necessary truths that are not tautologies, but don't depend for their
truth on any special features of Tarski's World. For example:
¬(Larger(a, b) ∧ Larger(b, a))
This is not a tautology, for it depends on the meaning of the predicate larger than. But its
necessity is not limited to Tarski's World, for it can never be true that both a is larger than b
and b is larger than a.
Why Boole can't identify all logical truths
Boole is sensitive to the meaning of the truth-functional connectives, but not to the
meanings of the predicates contained in atomic sentences. (In particular, Boole does not
recognize the meaning of the identity symbol =, nor does Boole recognize the meanings
of the quantifier symbols ∀ and ∃ that we'll be studying in chapter 9.)
So when Boole sees a sentence like ¬(Larger(a, b) ∧ Larger(b, a)), it does not see
the predicate larger than. Instead, all Boole sees is the negation of a conjunction of two
different atomic sentences. In effect, all Boole sees is a sentence of the form ¬(P ∧ Q).
And when Boole sees this sentence, it thinks, "I know how to make this sentence
false: I just assign T to P and T to Q. That makes P ∧ Q true, and so it makes
¬(P ∧ Q) false." Since Boole can't see inside the atomic sentences and doesn't
understand the predicates they contain, he doesn't know that it's impossible for both
Larger(a, b) and Larger(b, a) to be true.
Now when it comes to tautologies, Boole rules! So any logical truth that Boole doesn't
recognize as coming out true in every circumstance is a non-tautology. A row of a truth
table that contains a T under the main connective, then, may not represent a genuine
logical possibility.
We have discovered that there is a set of logical truths that falls between the tautologies and
the Tarski's World necessities. It is best to picture the situation in terms of a nested group of
concentric circles (Euler circles) collecting together a subset of all the true sentences:
The outer, largest circle: the Tarski's World necessities (sentences that are TW-
necessary). It also contains the contents of all the inner circles.
The next largest circle: the logical truths or logical necessities.
The innermost circle: the tautologies (TT-necessary).
The relationship is depicted graphically on p. 102 in figure 4.1. You should be able to give
examples of each kind of necessary truth.
Note that every tautology is also a logical truth, and every logical truth is also a TW-necessity.
But the converse is not true: some logical truths are not tautologies, and some TW-necessities
are not logical truths.
Three kinds of possibility
Notice that if we are considering possibility, rather than necessity, we have a similar nest of
Euler circles. The difference is that the TT-possible sentences (the ones that come out true on
at least one row of their truth table) are included in the largest circle, and the TW-possible
sentences are in the smallest circle. That is, a sentence may be TT-possible without being
logically possible or TW-possible, although all TW-possibilities are also logically possible and
TT-possible.
Look at exercise 4.10. We are asked to locate these three classes of sentences in an Euler
diagram. To see what the circles look like, open Possibility.pdf (on the Supplementary
Exercises page of the course web site).
The outer, largest circle: the TT-possible sentences. It also contains the contents of
all the inner circles.
The next largest circle: the logically possible sentences.
The innermost circle: the Tarski's World possibilities (TW-possible sentences).
Again, you should be able to give examples of each kind of possibility. You can test your
understanding of these different kinds of possibility by completing exercise 4.9 (not assigned
for homework).
I'd suggest downloading and printing a copy of Possibility.pdf for your notes.
4.2 Logical and tautological equivalence
Logically equivalent sentences
Sentences that have the same truth value in every possible circumstance are logically
equivalent.
Tautologically equivalent sentences
Logically equivalent sentences whose equivalence is due to the meanings of the truth
functional connectives they contain are tautologically equivalent.
Tautological equivalence and truth tables
To see whether a pair of FOL sentences are tautologically equivalent, we construct a joint truth
table for them. The two sentences are tautologically equivalent if they are assigned the same
truth value on every row.
Note that sentences may be logically equivalent without being tautologically equivalent. A good
example is given on pp. 107-8:
a = b ∧ Cube(a) ⇔ a = b ∧ Cube(b)
These sentences are logically equivalent: there is no possible circumstance in which they could
differ in truth value. But they are not tautologically equivalent, as the truth table on p. 108 shows.
Note that this truth table contains two rows (rows 2 and 3) that do not represent real logical
possibilities, although they do represent truth table possibilities. Row 2, for example, assigns T
to a = b, T to Cube(a), and F to Cube(b). This assignment does not represent a real logical
possibility, since it is not possible for a to be a cube while b is not a cube if a and b are the same
block.
How, then, can this be a truth table possibility? The answer is that, as we saw above, Boole can't
see inside atomic sentences and doesn't understand the predicates they contain. As far as Boole
is concerned, a = b, Cube(a), and Cube(b) are just three different atomic sentences, to which it
can assign any values it likes.
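To see concretely why the truth-table method misses this equivalence, here is a Python sketch (my own, not from the text) that treats a = b, Cube(a), and Cube(b) as three independent atoms E, Ca, and Cb, since that is all a truth table can see:

```python
# Find the truth-table rows on which "E and Ca" and "E and Cb" disagree.
from itertools import product

disagreements = [
    (E, Ca, Cb)
    for E, Ca, Cb in product([True, False], repeat=3)
    if (E and Ca) != (E and Cb)
]

# Rows like (True, True, False) make the two sentences differ, so they are
# not tautologically equivalent. But such rows are not genuine logical
# possibilities: if a = b is true, Cube(a) and Cube(b) cannot differ.
print(disagreements)
```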
4.3 Logical and tautological consequence
Consequence is the core notion
Q is a logical consequence of P if it is impossible for P to be true and Q false. That is, there
is no possible circumstance in which P is true and Q is false.
Both logical truth and logical equivalence are special cases of logical consequence:
A sentence is a logical truth if it is a logical consequence of the empty set of
sentences.
Two sentences are logically equivalent if they are logical consequences of one
another.
Tautological consequence and truth tables
Q is a tautological consequence of P if in the joint truth table for the two sentences there is no
row on which P is true and Q is false.
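The definition just given can be sketched directly in Python (my own illustration, not from the text):

```python
# Q is a tautological consequence of P if no row of the joint truth table
# makes P true and Q false.
from itertools import product

def taut_consequence(p, q, n_atoms):
    return not any(
        p(*row) and not q(*row)
        for row in product([True, False], repeat=n_atoms)
    )

# Cube(a) ∨ Cube(b) is a tautological consequence of Cube(a).
assert taut_consequence(lambda a, b: a, lambda a, b: a or b, 2)

# But with a = b, Cube(a), Cube(b) read as three independent atoms E, Ca, Cb,
# Cube(b) is NOT a tautological consequence of (a = b) ∧ Cube(a): the row
# E = T, Ca = T, Cb = F makes the premise true and the conclusion false.
assert not taut_consequence(lambda e, ca, cb: e and ca,
                            lambda e, ca, cb: cb, 3)
print("consequence checks pass")
```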
The relation between logical and tautological consequence
As with tautological truth (and equivalence) vs. logical truth (and equivalence), tautological
consequence is a special case of logical consequence. That is, every tautological consequence
is also a logical consequence, but the converse does not hold: in some cases, Q might be a
logical consequence of P but not a tautological consequence.
Examples
Cube(b) is a logical consequence of a = b ∧ Cube(a), but not a tautological
consequence of it. That's because there's no possible circumstance in which
a = b ∧ Cube(a) is true and Cube(b) is false. But the truth table for these sentences
does not show this, since (as we saw above) it is allowed to assign T to a = b, T to
Cube(a), and F to Cube(b). This is a truth table possibility that is not a real
possibility.
The same distinction obtains between logical possibility and TT-possibility. The
sentence Cube(a) ∧ Tet(a) is TT-possible, since it takes a T in row 1. But that TT-
possibility is not a real possibility, for that row represents the (impossible) case in
which a is both a cube and a tetrahedron.
We will look at both of these examples again in a moment.
4.4 Tautological consequence in Fitch
Using Fitch to check for consequence
Truth tables are mechanical: there is an automatic procedure (an algorithm) that always
gives you an answer to the question whether a sentence is a tautology or whether a sentence is
a tautological consequence of another sentence or set of sentences.
Instead of using Boole to construct a truth table, you can use the program Fitch to check
whether one sentence is a tautological consequence of a given set of sentences.
Do the You try it on p. 114 to see how to do this.
Taut Con, FO Con, and Ana Con
These are three methods, of increasing strength, that Fitch uses to check for consequence.
Taut Con checks to see whether a sentence is a tautological consequence of some
others. It pays attention only to the truth functional connectives. It is the weakest
procedure of the three because it only catches tautological consequence, and misses
the broader notions of consequence.
FO Con checks to see whether a sentence is a first-order consequence of some
others. It pays attention not only to the truth functional connectives but also to the
identity predicate and to the quantifiers.
Ana Con checks to see whether a sentence is an analytic consequence of some
others. It pays attention not only to the truth functional connectives, the identity
predicate, and the quantifiers, but also to the meanings of most (but not all!) of the
predicates in the blocks language. This notion comes the closest of the three to that of
(unrestricted) logical truth.
If a sentence is a tautological consequence of some others, it is clearly also a first-order
consequence and an analytic consequence of those sentences. But the converse does not
hold: some first-order consequences are not tautological consequences, and some analytic
consequences are not first-order consequences.
Examples
Cube(a) ∨ Cube(b) is a tautological consequence of Cube(a). This is obvious:
there is no assignment of truth-values to these sentences that makes Cube(a) true and
Cube(a) ∨ Cube(b) false.
Cube(b) is a first-order consequence, but not a tautological consequence, of
a = b ∧ Cube(a). We can check this out, first in Boole (see file Ch4Ex1.tt), then in
Fitch (see file Ch4Ex1.prf).
SameSize(a, b) is an analytic consequence, but not a first-order consequence (and
hence not a tautological consequence), of ¬Larger(a, b) ∧ ¬Larger(b, a). We can
check this out in Fitch (see file Ch4Ex3.prf).
Cube(a) ∧ Tet(a) is FO-possible (and hence TT-possible), but not logically possible.
We can use Boole (see file Ch4Ex2.tt) to show that it is TT-possible. Notice how,
with a little trickery, we can also use Fitch (see file Ch4Ex2.prf) to show both that it
is TT-possible and FO-possible, but not logically possible.
A warning about Ana Con
The Ana Con mechanism does not distinguish between logical necessity and TW-necessity.
That is, it counts at least some Tarski World consequences as analytic consequences along
with logical consequences more narrowly conceived. An example will make this clear.
According to Ana Con, Cube(b) is an analytic consequence of
¬Tet(b) ∧ ¬Dodec(b). (Obviously, this is not a first-order consequence, and hence
not a tautological consequence either.)
This happens because Ana Con pays attention not only to the meanings of some of the
predicates, but also to some of the special features of Tarski's World. Since in Tarski's World
there are only three shapes of blocks, it follows that there cannot be a Tarski World in which
an object is neither a tetrahedron nor a cube nor a dodecahedron.
But while that may be true for every Tarski World, it does not hold for every possible world.
In general, it does not follow logically, from the fact that b is neither a tetrahedron nor a
dodecahedron, that b is a cube: b might be a sphere. So this example does not seem to be a
logical necessity, but only something weaker: a TW-necessity.
Ana Con also has some other limitations. It misses certain TW-necessities, namely, those
involving the predicates Adjoins and Between, which it does not understand. For example,
¬Large(a) is a TW-consequence of Adjoins(a, b), since it is impossible in a Tarski world for
a large block to adjoin another block. But Ana Con will not recognize this consequence.
Similarly, Ana Con does not understand any predicates that are not in the blocks language.
Hence, it will not know that Older(b, a) is a logical consequence of Younger(a, b), since
these predicates are not in the blocks language. So you must use Ana Con with caution!
Chapter 5: Methods of Proof for Boolean Logic
5.1 Valid inference steps
Conjunction elimination
Sometimes called simplification. From a conjunction, infer any of the conjuncts.
From P ∧ Q, infer P (or infer Q).
Conjunction introduction
Sometimes called conjunction. From a pair of sentences, infer their conjunction.
From P and Q, infer P ∧ Q.
5.2 Proof by cases
This is another valid inference step (it will form the rule of disjunction elimination in our formal
deductive system and in Fitch), but it is also a powerful proof strategy.
In a proof by cases, one begins with a disjunction (as a premise, or as an intermediate conclusion
already proved). One then shows that a certain consequence may be deduced from each of the
disjuncts taken separately. One concludes that that same sentence is a consequence of the entire
disjunction.
From P ∨ Q, and from the fact that S follows from P and S also follows from Q,
infer S.
The general proof strategy looks like this: if you have a disjunction, then you know that at least one
of the disjuncts is true; you just don't know which one. So you consider the individual cases
(i.e., disjuncts), one at a time. You assume the first disjunct, and then derive your conclusion from
it. You repeat this process for each disjunct. So it doesn't matter which disjunct is true: you get
the same conclusion in any case. Hence you may infer that it follows from the entire disjunction.
In practice, this method of proof requires the use of subproofs; we will take these up in the next
chapter when we look at formal proofs.
5.3 Indirect proof: proof by contradiction
Also called indirect proof or reductio ad absurdum, this is a powerful method of proof commonly
used in mathematics.
In a proof by contradiction, one assumes that one's conclusion is false, and then tries to show that
this assumption (together with the argument's premises) leads to a contradiction. This shows that
the conclusion cannot be false if all the premises are true, i.e., that the conclusion must be true if
the premises are true. That is to say, the conclusion is a logical consequence of the premises.
We will develop this idea as a way of establishing a negative conclusion. Suppose you wish to
establish that a conclusion of the form ¬S is a logical consequence of a set of premises
P₁, P₂, …, Pₙ. You assume S (equivalent to the negation of the argument's conclusion) and treat it as a
premise along with P₁, P₂, …, Pₙ. You then try to deduce from these assumptions a contradiction:
a pair of sentences that contradict one another, e.g., Q and ¬Q. You may then (no longer assuming
S) conclude that ¬S.
For an example of indirect proof, see the proof on p. 136. In this example, we get these
contradictions: Cube(b) contradicts Tet(b), and Cube(b) contradicts Dodec(b). These are not TT-
contradictions (like Cube(b) ∧ ¬Cube(b)), but they are still logically contradictory, in that it is
impossible for them both to be true.
The contradiction symbol ⊥
In constructing proofs we will use the symbol ⊥ (an upside down tee) to indicate that a
contradiction has been reached. Rather than struggle for a way to pronounce this symbol, we
will read ⊥ simply as "contradiction."
TT-contradictions vs. other types
Not all contradictions are TT-contradictions. Consider these examples:
Cube(b) ∧ ¬Cube(b)    TT-contradiction
a ≠ a    FO-contradiction (but not a TT-contradiction)
Cube(b) ∧ Tet(b)    Logical contradiction (but not an FO-contradiction)
Large(b) ∧ Adjoins(b, c)    TW-contradiction (but not a logical contradiction)
The reasons for this classification are as follows:
Cube(b) ∧ ¬Cube(b)
A truth table shows that this sentence cannot be true. Hence, it is a TT-contradiction.
a ≠ a
No truth table can show that this sentence cannot be true, for a truth table can assign F
to a = a. So it is not a TT-contradiction. But the meaning FOL assigns to = and its rules
for using names like a make the sentence a = a true in every world. That is, an identity
sentence is true in any world in which the names it contains name the same object, and
no name can name two different objects in the same world. So a ≠ a is an FO-
contradiction.
Cube(b) ∧ Tet(b)
FOL does not assign any particular meaning to the predicates Cube and Tet. For all FOL
knows, this sentence could be true. So it is not an FO-contradiction. But given the
meanings of these predicates, this sentence cannot be true: it is logically impossible for
something to be both a cube and a tetrahedron. Hence, this sentence is a logical
contradiction.
Large(b) ∧ Adjoins(b, c)
Given the meanings of the predicates Large and Adjoins, it should be perfectly possible
for this sentence to be true. That is, we can describe a situation in which a large object
adjoins another object. So this sentence is not a logical contradiction. However, there is
no Tarski World in which this sentence is true. Hence, it is a TW-contradiction.
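Only the first kind of contradiction can be caught by a truth table. Here is a Python sketch of that mechanical test (my own illustration, not from the text):

```python
# A TT-contradiction is a sentence that is false on every row of its truth table.
from itertools import product

def tt_contradiction(sentence, n_atoms):
    return all(not sentence(*row) for row in product([True, False], repeat=n_atoms))

# Cube(b) ∧ ¬Cube(b): false on both rows, so a TT-contradiction.
assert tt_contradiction(lambda c: c and (not c), 1)

# Cube(b) ∧ Tet(b): two independent atoms as far as truth tables go, and the
# row (True, True) makes it true, so it is not a TT-contradiction, even
# though it is logically contradictory given what the predicates mean.
assert not tt_contradiction(lambda c, t: c and t, 2)
print("ok")
```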
The basic rule (called ⊥ Intro) that we will use in our formal system (and in Fitch) to
show that a contradiction has been reached will require that our contradictory sentences be
TT-contradictory. This will require some extra footwork in cases in which we have other
kinds of contradictions.
5.4 Arguments with inconsistent premises
If a set of premises is inconsistent, any argument having those premises is valid. (If the premises
are inconsistent, there is no possible circumstance in which they are all true. So no matter what the
conclusion is, there is no possible circumstance in which the premises are all true and the
conclusion is false.)
But no such argument is sound, since a sound argument is not only valid but has true premises.
Why be interested in arguments with inconsistent premises? Well, we know that if you can derive a
contradiction ⊥ from a set of premises, the set is inconsistent. (If it were possible for the premises
all to be true, then since we have derived ⊥ from them, it would have to be possible for ⊥ to be
true, and this clearly is not possible.)
We may not know, at the start, that our premises are inconsistent, but if we derive ⊥ from them, we
have established that they are inconsistent. If a set of premises, or assumptions, is inconsistent, it
is important to know this. And being able to deduce a contradiction from them is an excellent way
of showing it. We may not be able to show, using logic alone, which premise is false, but we can
establish that at least one of them is false.
Inconsistent premises vs. impossible sentences
If a set of premises (or any set of sentences, actually) is inconsistent, then at least one of the
sentences in the set must be false. But which one is false depends on the world: there need
not be a single sentence which is always the culprit, independent of what the facts happen to
be.
To see this, open Ch5Ex1.sen and Ch5Ex1.wld on the Supplementary Exercises web page.
You will see that it is impossible for all of the sentences to come out true: no matter how you
change the world, at least one sentence comes out false. You can make any three of them
true, but you can't make all four true.
Contrast this case with the case of Ch5Ex2.sen and Ch5Ex2.wld. Here we have an
inconsistent set of sentences where there is a culprit: the last sentence cannot be true (it is, in
fact, TT-impossible). It is truly a bad apple: it will make any set of sentences it belongs to
inconsistent.
To see the inconsistency of these sets of sentences, open Ch5Ex1.prf and Ch5Ex2.prf. These
two Fitch proofs contain the sets of sentences above as their premise-sets.
Notice that in both cases, the arguments are valid. That is, in both cases, ⊥ is a tautological
consequence of the premises. (Check this out using Taut Con.) Notice, too, that in Ex1, the
argument checks out only if all four premises are cited. But in Ex2, the argument checks out
if (and only if) the culprit premise is cited.
Here we see the difference between two kinds of inconsistent sets: one (Ex2) contains an
impossible sentence, the other (Ex1) does not. Each sentence in Ex1 is possible; what is
impossible is the conjunction of all four.
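Here is an abstract Python analogue of the Ex1 situation (my own stand-in example; the actual Ch5Ex1 sentences are blocks-language sentences in the Tarski's World files): a four-sentence set where any three are jointly satisfiable but all four are not.

```python
# Four sentences over atoms A, B, C, inconsistent as a set with no single culprit.
from itertools import product

sentences = [
    lambda a, b, c: a,                   # A
    lambda a, b, c: b,                   # B
    lambda a, b, c: c,                   # C
    lambda a, b, c: not (a and b and c)  # ¬(A ∧ B ∧ C)
]

def satisfiable(sents):
    """Is there at least one row making every sentence in the list true?"""
    return any(all(s(*row) for s in sents)
               for row in product([True, False], repeat=3))

assert not satisfiable(sentences)   # the whole set is inconsistent...
for i in range(4):                  # ...but dropping any one sentence fixes it
    rest = sentences[:i] + sentences[i + 1:]
    assert satisfiable(rest)
print("inconsistent as a set, though every three-membered subset is satisfiable")
```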
A connection between validity and inconsistency
When an argument is valid, its conclusion is a logical consequence of its premises. Another
way to put this is to say that it would be inconsistent to assert the premises and deny the
conclusion.
This means that for an argument to be valid is for the set of sentences consisting of all of the
premises together with the negation of the conclusion to be inconsistent.
Examples
This set of sentences is inconsistent:
{Cube(a) ∨ Cube(b), ¬Cube(a), ¬Cube(b)}
And so this argument is valid:
Cube(a) ∨ Cube(b)
¬Cube(a)
Cube(b)
This set of sentences is consistent:
{Tet(a) ∨ Tet(b), Tet(a), ¬Tet(b)}
And so this argument is invalid:
Tet(a) ∨ Tet(b)
Tet(a)
Tet(b)
Remember: for an argument to be valid is for its premises to be inconsistent with
the negation of its conclusion.
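This test can be run mechanically. Here is a Python sketch (my own illustration) that checks validity by searching for a row on which the premises together with the negation of the conclusion all come out true, using the two Cube/Tet arguments just given:

```python
# An argument is valid iff premises + negated conclusion are unsatisfiable,
# i.e. no truth-table row makes all premises true and the conclusion false.
from itertools import product

def valid(premises, conclusion, n_atoms):
    counterexamples = [
        row for row in product([True, False], repeat=n_atoms)
        if all(p(*row) for p in premises) and not conclusion(*row)
    ]
    return len(counterexamples) == 0

# Cube(a) ∨ Cube(b), ¬Cube(a), therefore Cube(b): valid.
assert valid([lambda a, b: a or b, lambda a, b: not a], lambda a, b: b, 2)

# Tet(a) ∨ Tet(b), Tet(a), therefore Tet(b): invalid -- the row where Tet(a)
# is true and Tet(b) is false makes the premises and ¬conclusion all true.
assert not valid([lambda a, b: a or b, lambda a, b: a], lambda a, b: b, 2)
print("validity checks pass")
```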
Chapter 6: Formal Proofs and Boolean Logic
The Fitch program, like the system F, uses introduction and elimination rules. The ones we've
seen so far deal with the logical symbol =. The next group of rules deals with the Boolean connectives
∧, ∨, and ¬.
6.1 Conjunction rules
Conjunction Elimination (∧ Elim)
P₁ ∧ … ∧ Pᵢ ∧ … ∧ Pₙ
⋮
Pᵢ
This rule tells you that if you have a conjunction in a proof, you may enter, on a new line, any
of its conjuncts. (Pᵢ here represents any of the conjuncts, including the first or the last.)
Notice this important point: the conjunction to which you apply ∧ Elim must appear by itself
on a line in the proof. You cannot apply this rule to a conjunction that is embedded as part of
a larger sentence. For example, this is not a valid use of ∧ Elim:
1. ¬(Cube(a) ∧ Large(a))
2. ¬Cube(a)    ✗ ∧ Elim: 1
The reason this is not a valid use of the rule is that ∧ Elim can only be applied to conjunctions,
and the line that this proof purports to apply it to is a negation. And it's a good thing that
this move is not allowed, for the inference above is not valid: from the premise that a is not a
large cube it does not follow that a is not a cube. a might well be a small cube (and hence not
a large cube, but still a cube).
This same restriction (the rule applies to the sentence on the entire line, and not to an
embedded sentence) holds for all of the rules of F, by the way. And so Fitch will not let you
apply ∧ Elim or any of the other rules of inference to sentences that are embedded within larger
sentences.
Conjunction Introduction (∧ Intro)
P₁
⋮
Pₙ
⋮
P₁ ∧ … ∧ Pₙ
This rule tells you that if you have a number of sentences in a proof, you may enter, on a new
line, their conjunction. Each conjunct must appear individually on its own line, although they
may occur in any order. Thus, if you have A on line 1 and B on line 3, you may enter B ∧ A
on a subsequent line. (Note that the lines need not be consecutive.) You may, of course, also
enter A ∧ B.
Default and generous uses of rules
Unlike system F, Fitch has both default and generous uses of its rules. A default use of a rule
is what will happen if you cite a rule and a previous line (or lines) as justification, but do not
enter any new sentence. If you ask Fitch to check out the step, it will enter a sentence for you.
A generous use of a rule is one that is not strictly in accordance with the rule as stated
in F (i.e., F would not allow you to derive it in a single step), but is still a valid inference.
Fitch will often let you do this in one step.
Default and generous uses of the ∧ rules
Default use: if you cite a conjunction and the rule ∧ Elim, and ask Fitch to check out
the step, Fitch will enter the leftmost conjunct on the new line.
Generous use: if you cite a conjunction and the rule ∧ Elim, you may manually enter
any of its conjuncts, or you may enter any conjunction whose conjuncts are among
those in the cited line. Fitch will check out the step as a valid use of the rule.
Note just how generous Fitch is about ∧ Elim: from the premise
A ∧ B ∧ C ∧ D
Fitch will allow you to obtain any of the following (among others!) by a generous use of the
rule:
A
B
C
D
A ∧ B
A ∧ C
A ∧ D
B ∧ C
B ∧ D
C ∧ D
A ∧ B ∧ C
B ∧ A ∧ D
D ∧ A ∧ C
B ∧ A ∧ C ∧ D
6.2 Disjunction rules
Disjunction Introduction (∨ Intro)
Pᵢ
⋮
P₁ ∨ … ∨ Pᵢ ∨ … ∨ Pₙ
This rule tells you that if you have a sentence on a line in a proof, you may enter, on a new
line, any disjunction of which it is a disjunct. (Pᵢ here represents any of the disjuncts,
including the first or the last.)
Disjunction Elimination (∨ Elim)
This is the formal rule that corresponds to the method of proof by cases. It incorporates the
formal device of a subproof.
A subproof involves the temporary use of an additional assumption, which functions in a
subproof the way the premises do in the main proof under which it is subsumed.
We place a subproof within a main proof by introducing a new vertical line, inside the vertical
line for the main proof. We begin the subproof with an assumption (any sentence of our
choice), and place a new Fitch bar under the assumption:
Premise
   Assumption for subproof
   ⋮
The subproof may be ended at any time. When the subproof ends, the vertical line stops, and
the next line either jumps out to the original vertical proof line, or a new subproof may be
begun. As we'll see, ∨ Elim involves the use of two (or more) subproofs, typically (although
not necessarily) entered one immediately after the other.
The rule:

   P1 ∨ … ∨ Pn
   ⋮
      P1
      ----------
      ⋮
      S
   ⋮
      Pn
      ----------
      ⋮
      S
▷  S

What the rule says is this: if you have a disjunction in a proof, and you have shown, through
a sequence of subproofs, that each of the disjuncts (together with any other premises in
the main proof) leads to the same conclusion, then you may derive that conclusion from
the disjunction (together with any main premises cited within the subproofs).
This is clearly a formal version of the method of proof by cases. Each of the Pi
represents one of the cases. Each subproof represents a demonstration that, in each case,
we may conclude S. Our conclusion is that S is a consequence of the disjunction
together with any of the main premises cited within the subproofs.
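The soundness of ∨ Elim can also be seen semantically: if S follows from each disjunct (together with the other premises), then S follows from the disjunction. Here is a small Python sketch of that fact, using an invented two-case example with background premises P1 → S and P2 → S (none of this is from the text or built into Fitch):

```python
# Semantic counterpart of ∨ Elim: if each disjunct leads to S given the
# premises, then the whole disjunction leads to S.
from itertools import product

names = ["P1", "P2", "S"]

def rows():
    for values in product([True, False], repeat=len(names)):
        yield dict(zip(names, values))

# Background premises for the example: P1 → S and P2 → S.
premises = [lambda r: (not r["P1"]) or r["S"],
            lambda r: (not r["P2"]) or r["S"]]

def entails(extra, goal):
    """Goal holds in every row satisfying the premises plus 'extra'."""
    return all(goal(r) for r in rows()
               if all(p(r) for p in premises) and extra(r))

# Each case leads to S ...
assert entails(lambda r: r["P1"], lambda r: r["S"])
assert entails(lambda r: r["P2"], lambda r: r["S"])
# ... so the disjunction leads to S, just as ∨ Elim concludes.
assert entails(lambda r: r["P1"] or r["P2"], lambda r: r["S"])
```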
When you do the "You try it" on p. 151, notice, as you proceed through the proof, that after
step 4 you must end the subproof first, before you begin the next subproof.
To do these things, you can click on the options in the Proof menu. But it is easier and quicker
to use the keyboard shortcuts: to end a subproof, press Control-E; to begin a new
subproof, press Control-P. Another handy shortcut is Control-A for adding a new line
after the current line, as part of the same proof or subproof. (Any time you add a new line,
Fitch will wait for you to write in a sentence and cite a justification for it.)
Note also that the use of Reit is strictly optional. For example, in the proof on p. 151, step 5 is
not required. The proof might look like the one in Page 151.prf (on the Supplementary Exercises
page) and it will check out.
Default and generous uses of the rules
Default uses
   o ∨ Elim: if you cite a disjunction and some subproofs, with each subproof
     beginning with a different disjunct of the disjunction, and all subproofs
     ending in the same sentence, S, cite the rule ∨ Elim, and ask Fitch to check it
     out, Fitch will enter S.
   o ∨ Intro: if you cite a sentence and the rule ∨ Intro, and ask Fitch to check it
     out, Fitch will enter the cited sentence followed by a ∨, and wait for you to
     enter whatever disjunct you wish.
Generous use: if your cited disjunction contains more than two disjuncts, you don't
need a separate subproof for each disjunct. A subproof may begin with a disjunction
of just some of the disjuncts of the cited disjunction. When you ask Fitch to check the
step, Fitch will check it out as a valid use of the rule, so long as every disjunct of the
cited disjunction is either a subproof assumption or a disjunct of such an assumption.
6.3 Negation rules
Negation Elimination (¬ Elim)
This simple rule allows us to eliminate double negations.

   ¬¬P
▷  P
Negation Introduction (¬ Intro)
This is our formal version of the method of indirect proof, or proof by contradiction. It
requires the use of a subproof. The idea is this: if an assumption made in a subproof leads to
⊥, you may close the subproof and derive as a conclusion the negation of the sentence that
was the assumption.

      P
      ----------
      ⋮
      ⊥
▷  ¬P

To use this rule, we will need a way of getting the contradiction symbol, ⊥, into a proof. We
will have a special rule for that, one which allows us to enter a ⊥ if we have, on separate lines
in our proof (or subproof), both a sentence and its negation.
⊥ Introduction (⊥ Intro)

   P
   ¬P
▷  ⊥
Note that the cited lines must be explicit contradictories, i.e., sentences of the form P and ¬P.
This means that the two sentences must be symbol-for-symbol identical, except for the
negation sign at the beginning of one of them. It is not enough that the two sentences be TT-
inconsistent with one another, such as A ∨ B and ¬A ∧ ¬B. Although these two are
contradictories (semantically speaking), since they must always have opposite truth-values,
they are not explicit contradictories (syntactically speaking), since they are not written in the
form P and ¬P.
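The difference is easy to confirm mechanically. The following Python check (my own illustration, not part of Fitch) shows that A ∨ B and ¬A ∧ ¬B take opposite truth values in every row, even though neither sentence is the other prefixed with ¬:

```python
# A ∨ B and ¬A ∧ ¬B are TT-contradictories: opposite values in every row.
from itertools import product

def tt_contradictories(f, g, names):
    """True if f and g disagree on every truth assignment to names."""
    for values in product([True, False], repeat=len(names)):
        row = dict(zip(names, values))
        if f(row) == g(row):
            return False
    return True

f = lambda r: r["A"] or r["B"]               # A ∨ B
g = lambda r: (not r["A"]) and (not r["B"])  # ¬A ∧ ¬B

assert tt_contradictories(f, g, ["A", "B"])
```

Even so, ⊥ Intro will not accept this pair directly, since the rule checks syntax, not truth tables.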
To try out these two rules, do the "You try it" on p. 156.
Other kinds of contradictions
The rule of ⊥ Intro lets us derive ⊥ whenever we have a pair of sentences that are explicit
contradictories. But there are other kinds of contradictory pairs: non-explicit TT-
contradictories, FO-contradictories that are TT-consistent, logical contradictories that are FO-
consistent, and TW-contradictories that are logically consistent. Here are some examples of
these other types of contradictory pairs:
1. Tet(a) ∨ Tet(b) and ¬Tet(a) ∧ ¬Tet(b)
2. Cube(b) ∧ a = b and ¬Cube(a)
3. Cube(b) and Tet(b)
4. ¬Tet(a) ∧ ¬Cube(a) and ¬Dodec(a)
In example (1) we have TT-contradictory sentences but not an explicit contradiction, as
defined above. In (2) we have a pair of sentences that are FO-inconsistent (they cannot both be
true in any possible circumstance), but not TT-inconsistent (a truth table would not detect their
inconsistency). In (3) we have a pair that are logically inconsistent but not FO-inconsistent (or
TT-inconsistent). Finally, in (4) we have a pair that are TW-contradictories (there is no Tarski
world in which both of these sentences are simultaneously true), although they are logically
consistent: it is possible for an object to be neither a tetrahedron nor a cube nor a
dodecahedron (it may be a sphere).
The rule of ⊥ Intro does not apply directly in any of these examples. In each case it takes a
bit of maneuvering first before we come up with an explicitly contradictory pair of sentences,
as required by the rule.
Example 1

1. Tet(a) ∨ Tet(b)
2. ¬Tet(a) ∧ ¬Tet(b)
3. ¬Tet(a)          ∧ Elim: 2
4. ¬Tet(b)          ∧ Elim: 2
5.    Tet(a)
6.    ⊥             ⊥ Intro: 3, 5
7.    Tet(b)
8.    ⊥             ⊥ Intro: 4, 7
9. ⊥                ∨ Elim: 1, 5-6, 7-8
Here we used ∧ Elim twice, to get the two conjuncts of (2) separately, and then constructed a
proof by cases to show that whichever disjunct of line (1) we choose, we get to an explicit
contradiction.
Example 2

1. Cube(b) ∧ a = b
2. ¬Cube(a)
3. Cube(b)          ∧ Elim: 1
4. a = b            ∧ Elim: 1
5. ¬Cube(b)         = Elim: 2, 4
6. ⊥                ⊥ Intro: 3, 5

Here we used ∧ Elim to get Cube(b) and a = b to stand alone, and then = Elim (substituting
b for a in line 2) to get the explicit contradictory of Cube(b).
Example 3

1. Cube(b)
2. Tet(b)
3. ¬Tet(b)          Ana Con: 1
4. ⊥                ⊥ Intro: 2, 3

Here we had to use Ana Con. Of course, as long as we were going to use Ana Con at all, we
could have used it instead of ⊥ Intro to get our contradiction, as follows:

1. Cube(b)
2. Tet(b)
3. ⊥                Ana Con: 1, 2
Example 4

1. ¬Tet(a) ∧ ¬Cube(a)
2. ¬Dodec(a)
3. ⊥                Ana Con: 1, 2
To see these different forms of contradictions in action, do the "You try it" on p. 159. It's an
excellent illustration of these differences. You'll find that you often need to use the Con
mechanisms to introduce a ⊥ into a proof, since ⊥ Intro requires that there be an explicit
contradiction in the form of a pair of sentences P and ¬P.
⊥ Elimination (⊥ Elim)

   ⊥
▷  P

The rule of ⊥ elimination is added to our system strictly as a convenience; we do not really
need it. It allows us, once we have a ⊥ in a proof, to enter any sentence we like. (We've
already seen that every sentence follows from a contradiction.) As p. 161 shows, we can
easily do without this rule with a four-step workaround.
Default and generous uses of the rules
Note the default and generous uses of these rules in Fitch (p. 161). With ¬ Elim, you don't
need two steps to get from ¬¬¬¬P to P (passing through the intermediate step ¬¬P). You
can do it in one step. In fact, this is also the default use of the rule (if you cite the rule and ask
Fitch to fill in the derived line).
In the case of ¬ Intro, where the subproof assumption is a negation, ¬P, and the subproof
ends with a ⊥:
Default use: if you end the subproof, cite the subproof and the rule ¬ Intro, and ask
Fitch to check the step, Fitch will enter the line ¬¬P.
Generous use: if you end the subproof, enter the line P manually, cite the subproof
and the rule ¬ Intro, and ask Fitch to check the step, Fitch will check it out as a valid use
of the rule.
6.4 The proper use of subproofs
Once a subproof has ended, none of the lines in that subproof may be cited in any subsequent part
of the proof. Look at the proof on p. 163 to see what can happen if this restriction is violated.
How Fitch keeps you out of trouble
When you are working in system F, you can enter erroneous lines like line 8 on p. 163 and
never be aware of it. But Fitch won't let you do this! To see what happens, look at
Page163.prf.
Notice that when we try to justify line 8, Fitch will not let us cite a line
that occurs inside a subproof that has already been closed.
When a subproof ends, we say that its assumption has been discharged. After an assumption is
discharged, one may not cite any line that depended on that assumption.
Note that it is permissible, while within a subproof, to cite lines that occur outside that subproof.
So, for example, one may, while within a subproof, refer back to the original premises, or
conclusions derived from them. One must just take care not to cite lines that occur in subproofs
whose assumptions have been discharged.
Subproofs may be nested: one subproof may begin before another is ended. In such cases, the last
subproof begun must be ended first. The example on p. 165 illustrates such a nested subproof.
6.5 Strategy and tactics
Keep in mind what the sentences in your proof mean
Don't just look at the sentences in your proof as meaningless collections of symbols.
Remember what the sentences mean as you try to discover whether the argument is valid.
If you're not told whether the argument is valid, you can use Fitch's Taut Con mechanism to
check it out. If you discover that the argument is not valid, you should not waste time trying to
find a proof.
Try to sketch out an informal proof
This will often give you a good formal proof strategy. An informal indirect proof can be
turned into a use of ¬ Intro in F. An informal proof by cases can be turned into a use of
∨ Elim in F.
Try working backwards
This is a very basic strategy. It involves figuring out what intermediate conclusion you might
reach that would enable you to obtain your ultimate conclusion, and then taking that
intermediate conclusion as your new goal. You can then work backwards to achieve this new
goal: figure out what other intermediate conclusion you might reach that would enable you to
obtain your first intermediate conclusion, and so on. Working backward in this way, you may
discover that it is obvious to you how to obtain one of those intermediate conclusions. You
then have all the pieces you need to assemble the proof.
Fitch is very helpful to you in using this strategy, for you can work from the bottom up as
well as from the top down. To see this, do the "You try it" on p. 168 (open the file Strategy
1). You will note that you can cite a line, or a subproof, as part of a justification even before
you have justified the line itself. This shows up with the two innermost subproofs (3-5 and
6-8), which can be used in the justification of line 9 even before lines 5 and 8 themselves have
been justified.
This gives you a good method for checking out your strategy.
An example
(A ∧ B) ∨ C ∨ D
¬C ∧ ¬B
¬¬D
Open Ch6Ex2a; you'll find this problem on the Supplementary Exercises page of the web
site. We can start by working backwards. We can get ¬¬D from D by assuming ¬D and using
¬ Intro. So our goal will be to get D.
Our first premise is a disjunction, so that suggests a proof by cases. We will have a separate
subproof for each case, deriving D at the end of each subproof. Open Ch6Ex2b. Notice that
our strategy checks out when we apply ∨ Elim, and that our strategy for obtaining ¬¬D also
checks out.
Case 1: A ∧ B
The second conjunct, B, contradicts the second conjunct of premise 2. So we can
derive ⊥ by ⊥ Intro and then derive D by ⊥ Elim.
Case 2: C
The assumption, C, contradicts the first conjunct of premise 2. So we can derive ⊥ by
⊥ Intro and then derive D by ⊥ Elim.
Case 3: D
We already have D, so we can use Reit to enter it as a conclusion in the subproof. In
fact, we can even skip the Reit step, as we'll see.
We'll now go after case 1. Open Ch6Ex2c, and follow the strategy above. Notice that the
steps check out. Finally, we'll complete cases 2 and 3. Open Ch6Ex2d, and follow the
strategy for cases 2 and 3.
Notice that in case 3, we did not need to use Reit. In this case, our subproof contains only the
assumption line. In such a case, we count the assumption line itself as the last line in the
subproof, and hence we take that line to have been established, given the assumption. This is
obviously acceptable, since every sentence is a consequence of itself.
6.6 Proofs without premises
Both system F and the program Fitch are set up so that a proof may begin with some line other
than a premise. For example, it might begin with a use of = Intro. Or, it may begin with a subproof
assumption.
This means that we may have a proof that has no premises at all! What does such a proof establish?
Since a proof establishes that a conclusion is a logical consequence of its premises (i.e., that it must
be true if they are), a proof without premises establishes that its conclusion is a logical
consequence of the empty set of premises. That is, it establishes that its conclusion must be true,
period.
In other words, such a proof establishes that its conclusion is a logical truth. See pages 173-4 for
examples of such proofs. (The conclusion of a proof without premises is often called a theorem,
although Barwise and Etchemendy do not use that terminology.)
For a try at proving a logical truth in Fitch, try exercise 6.33. Can you think of a simpler proof of
the same logical truth? You can find one at Proof 6.33 simpler.
A difficult example: 6.41
We'll try to prove this starred tautology: (A ∧ B) ∨ ¬A ∨ ¬B. (And we will do so the hard
way, without using Taut Con to justify an instance of Excluded Middle.)
Our tautology says that either A and B are both true, or at least one of them is false. To prove
a tautology, it is often easiest to use indirect proof: assume the negation of what we're trying
to prove, and show that it leads to a contradiction. That is the method we will use.
To see the general strategy for our proof, open Proof 6.41a. We assume the negation of our
desired conclusion, and aim to derive ⊥. We can then apply the rule ¬ Intro (generous version) to
get our theorem. Note that this last step checks out.
Now what we have assumed is the negation of a disjunction. So if what we've assumed is
true, each of the disjuncts is false. In particular, both ¬A and ¬B are false. So, given our
assumption, we should be able to prove both A and B. That is what we will do next. We will
do so by indirect proof: open Proof 6.41b. Note that our indirect method for proving both A
and B checks out.
Our strategy is to assume ¬A, reach a ⊥, and deduce A. Then we do the same for B. So we
have three questions to answer. (1) How do we show that ¬A leads to a contradiction? (2)
How do we show that ¬B leads to a contradiction? (3) Having established both A and B, how
do we show that that in turn leads to a contradiction? The answer is the same in every case: by
using judiciously chosen applications of ∨ Intro.
Our two (inner) assumptions (¬A and ¬B) are, in fact, disjuncts of the theorem we're trying
to prove. Hence, we can get from each of those assumptions to the theorem in one application
of ∨ Intro. That won't prove the theorem (we still have an open assumption), but it will give
us a sentence that contradicts our assumption, which is exactly what we want.
To see the complete proof, open Proof 6.41c.
An alternative strategy for 6.41: proof by cases
Notice that a different strategy might yield an equally correct, but much more complicated,
proof. To see the alternative strategy, open Proof 6.41d.
The idea here is to do a proof by cases:
Case 1
Assume A ∧ B and derive the theorem.
Case 2
Assume ¬(A ∧ B) and derive the theorem.
We can use Taut Con to obtain the disjunction (A ∧ B) ∨ ¬(A ∧ B) that we need. Then we
can apply the rule ∨ Elim and complete the proof by cases. Note that both of these rule
applications check out in Fitch.
Case 1 is easy: it takes only one step of ∨ Intro. But case 2 is complicated. To develop the
alternative strategy further, open Proof 6.41e. The idea is to try to obtain the right-most
disjunction, ¬A ∨ ¬B, by indirect proof. So we assume its negation, viz., ¬(¬A ∨ ¬B).
Note what this sentence says: neither not A nor not B, which is equivalent to A and B. But
(A ∧ B) contradicts our assumption line, so once we have it in our proof, we can obtain ⊥.
(Note that our use of ⊥ Intro will check out in Fitch.) Our next task will be to obtain the
conjunction (A ∧ B) from our indirect proof assumption. We know that it follows, because
it's an instance of one of DeMorgan's laws. But those laws are not part of system F, so we
will need a different strategy. We will obtain each of A and B separately, and then use ∧ Intro
to get A ∧ B.
To obtain A we use an indirect proof; then we do the same for B. To see how the strategy now
looks, open Proof 6.41f. The remaining steps are simple. We assume ¬A for indirect proof.
The line we need to contradict is ¬(¬A ∨ ¬B). But ¬A is one of the disjuncts of our negated
disjunction. So we use ∨ Intro to get the disjunction ¬A ∨ ¬B, and we have our contradiction.
This lets us obtain A by means of a generous use of ¬ Intro. We repeat this for B.
To see the complete proof, open Proof 6.41g.
Chapter 7: Conditionals
We next turn to the logic of conditional, or "if … then", sentences. We will be treating "if … then" as a
truth-functional connective in the sense defined in chapter 3: the truth value of a compound sentence
formed with such a connective is a function of (i.e., is completely determined by) the truth value of its
components.
Not all sentence-forming connectives are truth-functional. Consider "because". It is obvious that we
could not fill out a truth table for the sentence P because Q. How would we fill out the value of P
because Q in the row where P and Q are both true? There is no way to do this.
Consider a sentence like "Tom left the party because Lucy sneezed". Suppose that both component
sentences are true. What is the truth value of the entire compound? You can't tell: it could be either.
If Tom and Lucy had prearranged that Lucy would sneeze as a signal to Tom that it was time to leave,
the sentence would be true. But if Lucy just happened to sneeze and Tom left, but for some other
reason, it would be false. So "because" is not a truth-functional connective.
This should be a tip-off that you should not read any kind of causal connection into the "if … then" that
we will be introducing into FOL.
7.1 Material conditional symbol: →
Truth table definition of →
Here is the truth table that appears on p. 178:

P    Q    P → Q
T    T      T
T    F      F
F    T      T
F    F      T
Here P is the antecedent and Q is the consequent. (The antecedent is on the left, with the
arrow pointing from it; the consequent is on the right, with the arrow pointing to it.)
As the truth table shows, a conditional sentence comes out true in every case except the one
where the antecedent is true and the consequent false. That is,
P → Q is equivalent to both of these Boolean forms:
¬P ∨ Q        ¬(P ∧ ¬Q)
Hence, → adds no new expressive power to FOL (anything we can say using → we can also
say without it, just using ¬ and ∨, or ¬ and ∧). But the new symbol makes it easier to produce
FOL sentences that correspond naturally to sentences of English.
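Both Boolean equivalents can be verified row by row. A short illustrative check in Python (not part of FOL or the text):

```python
# P → Q agrees with ¬P ∨ Q and with ¬(P ∧ ¬Q) in every row.
from itertools import product

def arrow(p, q):
    """Truth-table definition: false only when p is true and q is false."""
    return q if p else True

for p, q in product([True, False], repeat=2):
    assert arrow(p, q) == ((not p) or q)       # ¬P ∨ Q
    assert arrow(p, q) == (not (p and not q))  # ¬(P ∧ ¬Q)
```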
English forms of the material conditional
It is convenient to read → sentences in English using "if … then". That is, we read P → Q ("P
arrow Q") as "if P, then Q". But there are many other ways in English of saying the same thing,
and hence many other ways of reading → sentences in English:
Q if P        P only if Q        Q provided that P        Q in case P
Provided P, Q        In the event that P, Q
Note the variation in word order: in English (unlike FOL) the antecedent (in this case P)
doesn't always come first.
If you are looking for a way of reading P → Q in English that begins with the sentence that
replaces P, the only formulation that works is P only if Q.
People sometimes read P → Q as "P implies Q". This is handy, in that it gives you a way to
read the FOL sentence from left to right, symbol-for-symbol, maintaining the word order. But
there is something misleading about it, for it suggests a confusion between the truth of an
"if … then" sentence and a logical implication. That is because "P implies Q" is even more often
used as a shorthand for "P logically implies Q", which expresses the relation of logical
consequence: to say that P logically implies Q is to say that Q is a logical consequence of P.
But the mere fact that P → Q is true does not mean that P logically implies Q. It simply
means that either Q is true or P is false. Hence it is probably best to avoid reading → as
"implies".
If vs. only if
This is a difference that beginners often find baffling. The authors of LPL explain (p. 180) the
difference in terms of necessary and sufficient conditions. Only if introduces a necessary
condition: P only if Q means that the truth of Q is necessary, or required, in order for P to be
true. That is, P only if Q rules out just one possibility: that P is true and Q is false. But that is
exactly what P → Q rules out. So it's obviously correct to read P → Q as P only if Q.
If, on the other hand, introduces a sufficient condition: P if Q means that the truth of Q is
sufficient, or enough, for P to be true as well. That is, P if Q rules out just one possibility: that
Q is true and P is false. But that is exactly what Q → P rules out. So it's obviously correct to
read Q → P as P if Q.
Example
To get really clear on the difference between if and only if, consider the following
sentences:
1. a and b are the same size if a = b
   a = b → SameSize(a, b)
2. a and b are the same size only if a = b
   SameSize(a, b) → a = b
(1) is a logical truth: if a and b are one and the same object, then there is no difference
between a and b in size, shape, location, or anything else.
But (2) makes a substantive claim that could well be false: it is possible for a and b to
be the same size but be two different objects. a and b might be a pair of large cubes, or
a might be a large cube and b a large tetrahedron.
Now consider the following pair:
3. a = b only if a and b are the same size
   a = b → SameSize(a, b)
4. a = b if a and b are the same size
   SameSize(a, b) → a = b
(3), like (1), is a logical truth: a and b can't be identical without being the same size. If
a = b, then a and b are one and the same object, which of course has the same size as
itself! But that's just what (1) says, so (3) and (1) are equivalent. (And that is why they
have the same FOL translation.) (4), on the other hand, comes out false if a and b are two
different objects of the same size. That is, (4) is equivalent to (2), and so they also have
the same FOL translation.
You can confirm this by evaluating these sentences (in file Ch7Ex1.sen) in some
different worlds (start with Ch7Ex1.wld).
Unless
The best way to think of unless is that it means if not. So you can read not P unless Q as
not P if not Q, and translate that into FOL as:
¬Q → ¬P
As we'll see, this FOL sentence is equivalent to
P → Q
And this, in turn, gives us another way to read → sentences: not [antecedent] unless
[consequent], which clearly, and correctly, expresses the fact that the truth of the consequent
is a necessary condition for the truth of the antecedent.
We learn something else from this last observation. Since P → Q expresses the English not P
unless Q, and P → Q is equivalent to ¬P ∨ Q, these English and FOL sentences say the same
thing:
¬P ∨ Q
not P unless Q
And what we now see is that, strangely enough, the English unless corresponds to the FOL ∨.
In effect, we can treat unless as meaning or.
Summary
The English forms Q if P and P only if Q are equivalent, and correspond to the FOL
sentence P → Q.
But the English forms P if Q and P only if Q are not equivalent. The first goes into
FOL as Q → P; the second as P → Q.
Only if introduces the consequent; if (without the only) introduces the antecedent.
Think of unless as meaning if not; alternatively, just replace unless with or, i.e.,
translate unless into FOL by means of ∨.
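The claims about unless can be verified by truth table: ¬Q → ¬P, P → Q, and ¬P ∨ Q agree in every row. A quick illustrative check in Python (my own, not from the text):

```python
# "not P unless Q": ¬Q → ¬P, P → Q, and ¬P ∨ Q agree in every row.
from itertools import product

def implies(a, b):
    return (not a) or b

for p, q in product([True, False], repeat=2):
    assert implies(not q, not p) == implies(p, q)  # ¬Q → ¬P vs. P → Q
    assert implies(p, q) == ((not p) or q)         # unless as ∨
```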
7.2 Biconditional symbol: ↔
↔ and if and only if
P ↔ Q corresponds to P if, and only if, Q. It is thus really a conjunction of a pair of one-way
conditionals:
(P → Q) ∧ (Q → P)
Truth table for ↔
Here is the truth table that appears on p. 182. Note that P ↔ Q comes out true whenever the
two components agree in truth value:

P    Q    P ↔ Q
T    T      T
T    F      F
F    T      F
F    F      T
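That P ↔ Q is just the conjunction of the two one-way conditionals can be confirmed in all four rows. A quick illustrative Python check (not from the text):

```python
# P ↔ Q matches (P → Q) ∧ (Q → P) in every row.
from itertools import product

def implies(a, b):
    return (not a) or b

def iff(a, b):
    return a == b

for p, q in product([True, False], repeat=2):
    assert iff(p, q) == (implies(p, q) and implies(q, p))
```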
Iff
If and only if is often abbreviated as iff. Watch for this.
Just in case
Mathematicians often read P ↔ Q as P just in case Q (or sometimes as P exactly in case Q,
or as P exactly if Q). Watch for this, too.
Biconditionals and equivalence: ↔ vs. ⇔
The FOL sentence P ↔ Q does not say that P and Q are logically equivalent. It says
something weaker, namely, that they (happen to) agree in truth value. The claim that P and Q
are logically equivalent is stronger: it amounts to the claim that their biconditional is not just
true, but a logical truth.
For example, in a world in which b is a large cube, the sentences Cube(b) and Large(b) are
both true, and the sentences Tet(b) and Small(b) are both false. Hence these two
biconditionals:
Cube(b) ↔ Large(b)        Tet(b) ↔ Small(b)
are both true. But Cube(b) is not equivalent to Large(b), because there are worlds in which
they differ in truth value.
On the other hand, the sentences Cube(b) and ¬¬Cube(b) are logically equivalent: there is
no world in which they differ in truth value. That is, their biconditional is a logical truth, true
in every world.
To say that two sentences are equivalent, we can use the symbol ⇔. That is, we can write:
Cube(b) ⇔ ¬¬Cube(b)
to mean that Cube(b) and ¬¬Cube(b) are logically equivalent. But the sentence containing
⇔ is not an FOL sentence. It is just a way of saying that the FOL sentence Cube(b) ↔
¬¬Cube(b) is a logical truth, or, alternatively, of saying that the two sentences Cube(b) and
¬¬Cube(b) are logically equivalent.
7.3 Conversational implicature
It is easy to misread what a sentence says because one mistakenly attaches to the meaning of the
sentence certain additional information, information that is frequently conveyed by the assertion of
the sentence, even though it is not strictly speaking part of what is said, or part of what the
sentence means.
Example: Tom asks whether the picnic will be held, and Betty says "If it rains, the picnic will not
be held." Strictly speaking, what Betty has said is that it is not the case that it will both rain and the picnic
occur. Tom may well infer, however, that Betty said something additional: that if it does not
rain, the picnic will be held.
But Betty did not say this. Tom may well infer that Betty must have meant this, for if Betty were
aware of any other situation in which the picnic would not be held, she would have mentioned it.
Her failure to do so strongly suggests that, in her view, rain is the only thing that would stop the
picnic.
We'll use the terminology of H. P. Grice to describe this situation. Betty said that if it rains, the
picnic will not be held; but in saying this (in this situation, using these words) she
conversationally implicated that if it doesn't rain, the picnic will be held.
The test for conversational implicature is Grice's cancellability test. Suppose a speaker utters a
sentence S, and the hearer draws the conclusion that P. The question now arises whether, in
uttering S, the speaker has said that P or only implicated that P. The test is this: see whether the
conclusion the hearer draws (that P) can be explicitly cancelled by adding "and not P" to the
original sentence S. If the resulting conjunction "S and not P" is a contradiction, P is part of what
was said; if the result is not a contradiction, P is only implicated, not part of what was said.
Applying the test in this case: can Betty say this, without contradicting herself?
"If it rains, the picnic will not be held; and even if it doesn't rain,
the picnic may still not be held."
Surely there is no contradiction here. Betty may be alluding to the fact that there are many
conditions that are sufficient for calling off the picnic: rain, snow, the death of one of the hosts,
nuclear annihilation, etc. Only the first has a high enough probability to be worth mentioning, so
that is why Betty neglects the other conditions and why she doesn't attach the second conjunct to
her assertion.
Contrast the following case, where the cancellability test gives a different result. The speaker says
"Neither Dave nor Sally was in class today." Did the speaker say that Sally was not in class today?
(Notice: the speaker did not utter the words "Sally was not in class today.") The test is this: is the
following self-contradictory?
"Neither Dave nor Sally was in class today, but Sally was in class today."
This is obviously self-contradictory, so the speaker really did say, and not just implicate, that Sally
was not in class today.
Chapter 8: The Logic of Conditionals
8.1 Informal methods of proof
Conditional elimination
This method of proof is also known by its Latin name, modus ponens (literally, "method of
affirming": roughly, having affirmed the antecedent of a conditional, you may affirm the
consequent).
From P and P → Q, you may infer Q.
Biconditional elimination
This is sometimes called "modus ponens for the biconditional".
From P and P ↔ Q, you may infer Q.
From P and Q ↔ P, you may infer Q.
Some handy equivalences
Contraposition
P → Q ⇔ ¬Q → ¬P
The conditional-disjunction equivalence
P → Q ⇔ ¬P ∨ Q
The negated conditional equivalence
¬(P → Q) ⇔ P ∧ ¬Q
The biconditional-conjunction equivalence
P ↔ Q ⇔ (P → Q) ∧ (Q → P)
The biconditional-disjunction equivalence
P ↔ Q ⇔ (P ∧ Q) ∨ (¬P ∧ ¬Q)
In some systems of deduction for the propositional portion of FOL, these equivalences are used
as rules. In the system F, and in Fitch, these are not going to be rules. In fact, we will be
using Fitch to prove these equivalences. Still, it is useful to be aware of them.
Note that the equivalence symbol ⇔ is not a connective of FOL, but a symbol we use in the
metalanguage (in this case, English, the language in which we talk about FOL), and the
statements of equivalence are not FOL sentences.
For a useful chart of tautological equivalences, see the Supplementary Exercises page on the
course web site. Look under the listings for Chapter 7.
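All five equivalences can be confirmed by a single pass over the four truth assignments. Here is a brute-force Python check (illustrative only; the Fitch proofs of these are the course exercises):

```python
# Verify the five handy equivalences row by row.
from itertools import product

def implies(a, b):
    return (not a) or b

def iff(a, b):
    return a == b

for p, q in product([True, False], repeat=2):
    assert implies(p, q) == implies(not q, not p)          # contraposition
    assert implies(p, q) == ((not p) or q)                 # conditional-disjunction
    assert (not implies(p, q)) == (p and not q)            # negated conditional
    assert iff(p, q) == (implies(p, q) and implies(q, p))  # biconditional-conjunction
    assert iff(p, q) == ((p and q) or (not p and not q))   # biconditional-disjunction
```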
The method of conditional proof
This very important method of proof is a way of establishing conditional sentences. In using
this method, we make a provisional assumption, P, and deduce some consequences of it.
When we arrive at some appropriate sentence, Q, we have shown that the assumption of P
has led to the conclusion Q. We may then discharge the assumption of P and conclude that if
P, then Q.
How do we know when we have reached an appropriate sentence, Q? We should know in
advance which sentence we are looking for. Since we are trying to establish a conditional
sentence, we assume its antecedent and attempt to deduce its consequent.
Note that the method of conditional proof can be used for biconditionals, too. To prove
P ↔ Q, construct separate conditional proofs for each of the conditionals P → Q and
Q → P. The conjunction of these two conditionals is equivalent to the biconditional P ↔ Q.
(See the biconditional-conjunction equivalence above.)
8.2 Formal rules of proof for " "" " and ! !! !
Conditional elimination (" "" " Elim)
P " Q
P
Q
That is, if you have a conditional on one line in a proof, and its antecedent (alone) on another
line, you may infer the consequent. As justification, you cite the two earlier lines.
Conditional introduction (→ Intro)
P
Q
P → Q
This is the formal counterpart of the method of conditional proof. Begin a subproof with P,
the antecedent of your desired conditional. When your desired consequent, Q, occurs on a
later line in the subproof, end the subproof and enter the conditional P → Q. As justification
for the conditional, you cite the entire subproof.
Some tricks using → Intro
Here are a couple of tricks involving conditionals, one involving an irrelevant
antecedent and one an irrelevant consequent. Open the files Irrelevant Antecedent
and Irrelevant Consequent on the Supplementary Exercises page. In each case you
should be able to use a simple conditional proof strategy (with a little trick thrown in) to
deduce the conclusion. (You probably used one of these strategies on Exercise 8.26.)
In these cases, what we have shown is that we can have any sentence we like as the
antecedent of a conditional whose consequent we have already proved, and we can have
any sentence we like as the consequent of a conditional if we've already proved the
negation of its antecedent.
Here's a good problem on which to use the tricks you've just learned. Open
Conditional Tricks on the Supplementary Exercises page. Then try to use these tricks
in constructing a proof. (This is one half of the negated conditional equivalence we
studied above; the proof you just constructed will make up half of the proof of that
equivalence in Exercise 8.30.)
When you've finished the proof, leave Fitch running with your proof file still open.
We'll be using it again in a moment.
Copyright 2004, S. Marc Cohen Revised 6/1/04
8-3
Proofs without premises
It's easy to use → Intro to convert a proof with a premise into a proof (without premises) of
the corresponding conditional sentence. The trick is just to embed the old proof as a
subproof into the new proof.
Here's an easy way to embed an old proof into a new one. (This procedure is described in
section 4.4.3 of the software manual.) Open a new Fitch file, and start a new subproof (Ctrl-P).
Now go back to the proof you've just finished, and click on the rectangle at the upper left of
the window, to the left of the symbol selector; this converts the cursor into a selector that
can copy many lines in a proof simultaneously.
Point the cursor at the upper left corner of the proof and click on the left mouse button. You
will notice that the cursor has changed shape. Now, hold the button down while dragging
toward the lower right corner. You will see a box appear, surrounding the text youve
selected. When it encloses your entire proof, release the mouse button. You have just selected
your entire proof.
Now click on Edit Copy (or type Ctrl-C) to copy the proof you've selected. Go back to
your new proof; the focus slider should still be pointed at the assumption line of your new
subproof. (If it's not, change the focus so that it is.) Then click on Edit Paste (or type
Ctrl-V). This will insert your entire old proof into the new one, but one level of subproof
deeper. Your old premise has become the first subproof assumption, and your old conclusion
is the last line of that subproof. Don't forget to switch back to the pointer tool (click on the
arrow at the upper left).
Change the focus to the last line, and end the subproof. You are now prompted for a rule, so
choose → Intro and cite the entire subproof. Then click on Check Step. Congratulations!
You have just constructed a proof without premises: its last line (which depends on no
premises) is the original argument's corresponding conditional sentence.
We will be looking at other (more complex) proofs without premises later.
Default and generous uses of the → rules
Default
If you are on a new line in a proof, and you choose rule → Elim and cite, as two
separate lines, a conditional and its antecedent, Fitch will take the consequent and enter
it by itself on the new line.
If you end a subproof, Fitch will create a new line after the subproof and ask you to
choose a rule to justify the line. And as we saw (in our example of embedding a proof
as a new subproof) when we chose → Intro and cited the entire subproof, Fitch entered,
on the new line, the conditional sentence whose antecedent was the assumption of the
subproof and whose consequent was the last line of the subproof.
Generous
In using → Elim, you can cite the support sentences in any order. In using → Intro, the
consequent of the new conditional does not have to be the last line of the subproof; it
can be any line of the subproof, even the assumption line itself.
Short cut hint: try this. Start a new Fitch proof with no premises. Assume A. Then
choose End Subproof (Ctrl-E), choose rule → Intro, and cite the entire one-line
subproof. Ask Fitch to check the line, and watch what Fitch does!
Biconditional elimination (↔ Elim)
P ↔ Q (or Q ↔ P)
P
Q
This rule is, effectively, modus ponens in either direction: given a biconditional on one line,
and either of its components on another line, you may infer the other component on a new
line.
Biconditional introduction (↔ Intro)
P
Q
Q
P
P ↔ Q
This rule is, effectively, a double use of → Intro. If you have the two subproofs which would
entitle you to infer, by → Intro, a pair of conditionals that are converses of one another (they
have antecedent and consequent exchanged), you may infer the biconditional of the two
component sentences.
8.3 Soundness and completeness
In this course, we are mainly interested in developing a system of logic that we can use to prove the
validity of valid arguments, and demonstrate the invalidity of invalid arguments. In more advanced
logic courses, the attention turns to proving things about the system of logic itself: this is
metatheory, the study of the properties of a logical system.
Two of the most important properties of such a system are soundness and completeness. We will
not attempt to prove that our system (even the part of it we've developed so far) has these
properties. Rather, we're just interested in understanding what these properties are, and in
getting a rough idea of what would be involved in proving that our system has them. To study more
metatheory, you'll have to go on to take PHIL 470.
Soundness and completeness are properties of the system F, the set of deductive rules we've
been developing. Actually, what we've studied so far is only a part of the system F: the part that
concerns the sentence connectives. That is, the system we've studied is just the collection of Intro
and Elim rules for ¬, ∧, ∨, →, ↔, and ⊥. We'll call this smaller system F_T. And we'll write:
P_1, …, P_n ⊢_T S
to mean that there is a proof in F_T of S from premises P_1, …, P_n.
Soundness
Basically, a sound system is one in which all the inferences that you are entitled to make by
the rules of the system are valid inferences.
The Soundness Theorem for F_T: If P_1, …, P_n ⊢_T S,
then S is a tautological consequence of P_1, …, P_n.
That is, the rules of F_T are sound if they satisfy this condition: any conclusion we can deduce
from a set of premises by means of the rules of F_T is in fact a tautological consequence of
those premises.
To put it another way, the soundness of F_T comes to this: if you can prove a conclusion using
Fitch, it will also turn out to be a tautological consequence of its premises according to Boole.
A Corollary
Given the soundness of F_T, we know that any conclusion we prove using its rules is
indeed a tautological consequence of the premises we use to prove it. This also applies
when there are no premises. That is, if there is a proof of S in F_T with no premises,
then S is a tautology.
Soundness Corollary: If ⊢_T S, then S is a tautology.
Completeness
Completeness is just the converse of soundness. That is, a complete system is one in which
any valid inference can be proved by means of the rules of the system.
The Completeness Theorem for F_T: If a sentence S is a
tautological consequence of P_1, …, P_n, then P_1, …, P_n ⊢_T S.
That is, the rules of F_T are complete if they satisfy this condition: any conclusion that is in
fact a tautological consequence of a set of premises can be deduced from those premises by
means of the rules of F_T.
To put it another way, the completeness of F_T comes to this: if a conclusion is a tautological
consequence of its premises according to Boole, then you can also prove it from those
premises using Fitch.
A Corollary
Given the completeness of F_T, we know that any tautological consequence of a given
set of premises can be proved from those premises using the rules of F_T. This also
applies when there are no premises. That is, if S is a tautology, then there is a proof of
S in F_T with no premises.
Completeness Corollary: If S is a tautology, then ⊢_T S.
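The right-hand side of both theorems, tautological consequence, is a mechanically checkable notion: enumerate all truth-value assignments and look for a row where every premise is true and the conclusion is false. Here is a Python sketch of such a checker (an illustration of what a program like Boole does, not its actual implementation; the representation of sentences as Python functions is my own):

```python
from itertools import product

def taut_consequence(premises, conclusion, atoms):
    """Return True iff every valuation of the atoms that makes all the
    premises true also makes the conclusion true. Premises and the
    conclusion are truth functions taking a dict of atom values."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # found a counterexample row
    return True

# S is a tautology iff it is a consequence of the empty set of premises.
def tautology(sentence, atoms):
    return taut_consequence([], sentence, atoms)

# A & B tautologically implies A, but A v B does not:
print(taut_consequence([lambda v: v["A"] and v["B"]], lambda v: v["A"], ["A", "B"]))  # True
print(taut_consequence([lambda v: v["A"] or v["B"]], lambda v: v["A"], ["A", "B"]))   # False
```

Soundness says that whatever ⊢_T delivers passes this check; completeness says that whatever passes this check is deliverable by ⊢_T.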
Why we need both soundness and completeness
It is not enough for a system to be sound, for a system can be sound without being complete.
For example, a system whose only rules were ∧ Intro and ∧ Elim would be sound (it would
never permit any invalid inferences), but woefully incomplete (it could not prove even the
simplest inferences using connectives other than ∧). So a system that is sound but not
complete would never produce any incorrect inferences, but there would be some correct
inferences it would be unable to prove.
Similarly, it is not enough for a system to be complete, for a system can be complete without
being sound. For example, consider a system whose only rule is this: from P, infer Q. In
this system, every inference would be simple: you could deduce any conclusion you want in
just one step. Such a system would be too powerful; although it would include all the correct
inferences, it would let in all the incorrect ones, too.
Proving soundness and completeness
Soundness is much easier to prove than completeness. To prove the soundness of the rules of
F_T, we only have to prove that none of the rules is capable of deducing a sentence that does
not follow from (i.e., is not a tautological consequence of) its premises.
The method of proof is illustrated on pp. 215-216. It combines a proof by cases with a proof
by contradiction. Assume that F_T is unsound. That means that there is a proof with an invalid
step: a line in a proof that is licensed by the rules of F_T but that is not a tautological
consequence of its premises. Pick any such proof, and consider the first invalid step in the
proof. That step had to be licensed by one of the rules of F_T. Then show that, no matter which
rule we consider, the assumption that this rule has yielded an invalid step leads to a
contradiction.
For example, here's how we show that → Elim could not have produced the first invalid step.
Suppose that it did. Then we would have a pair of sentences Q and Q → R that are both
tautological consequences of some set of premises A_1, …, A_k, but the sentence R obtained by
→ Elim is not a consequence of A_1, …, A_k. But that means that a truth table for these
sentences would contain a row in which A_1, …, A_k, Q, and Q → R are all true, but R is false.
But this is impossible: there can be no such row.
The full proof would go on to show that a similar impossibility results no matter which rule
we suppose was the culprit that introduced the first invalid step.
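The impossibility claimed for → Elim can itself be checked by brute force: among the four combinations of truth values for Q and R, none makes Q and Q → R true while R is false. A quick Python sketch:

```python
from itertools import product

# Search for a truth-table row in which Q and Q -> R are both true but R is false.
bad_rows = [(q, r) for q, r in product([True, False], repeat=2)
            if q and ((not q) or r) and not r]

print(bad_rows)  # [] -- no such row exists, so -> Elim is truth-preserving
```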
The proof of completeness of F_T is more complicated, and will not be covered in this course.
The proof can be found in section 17.2, and by the time you have finished this course, you will be in
a position to follow it.
8.4 Valid arguments: some review exercises
We will work on Exercise 8.48.prf and Exercise 8.52.prf for practice.
8.48
Since the conclusion is a conditional, we will want to use conditional proof strategy
(→ Intro). So we will start a subproof with Tet(c) as the assumption. But notice that the
first premise is a conjunction, and we may need one of its conjuncts separately. (It's always a
good idea to use ∧ Elim first, when possible, before setting up any subproofs. That way
individual conjuncts will be available in all subsequent subproofs.) The second conjunct is a
disjunction whose disjuncts occur in the other premises, so first we'll enter it on the next line
(∧ Elim) and then set up our conditional proof. Open Proof 8.48a.prf.
The goal of our subproof is FrontOf(a, b), the consequent of the main goal. We can use proof
by cases to get it. Open Proof 8.48b.prf. The first case is easy: Medium(b) gets us our
conclusion by → Elim. The second case leads to a contradiction, since we can get both Tet(c)
and ¬Tet(c).
That allows us to use ∨ Elim to complete the proof by cases of FrontOf(a, b). Then we use
→ Intro to complete our conditional proof of Tet(c) → FrontOf(a, b). See Proof
8.48c.prf for the complete proof.
8.52
Open Exercise 8.52.prf. Again, the conclusion is a conditional, so we will set the proof up as
a conditional proof (→ Intro). And we will use proof by contradiction (¬ Intro) in the
embedded subproof to obtain a ≠ c, the consequent of the main conditional. Open Proof
8.52a.prf.
Dodec(b) is clearly inconsistent with Cube(b), so if we assume Cube(b) we can use Ana
Con to derive ⊥ from that assumption. That lets us prove ¬Cube(b). Open Proof 8.52b.prf.
Our task is now to get from ¬Cube(b) to a contradiction. But our premise is a biconditional,
with Cube(b) as its left-hand component. Using Taut Con (remember, after p. 222 we're
allowed to use Taut Con for simple steps) we can infer the negation of the right-hand
component. Then, substituting c for a (as licensed by a = c), we obtain a contradiction (the
negation of a biconditional of the form P ↔ P) from which we can infer ⊥ by Taut Con. See
the complete Proof 8.52c.prf.
Another example: a complicated tautology
We can use system F_T (and hence Fitch) to prove (without premises) any tautology. Here's an
interesting example that looks as if it should be difficult to prove, but turns out to be very
easy:
(A → (B → C)) → ((A → B) → (A → C))
The challenge is to prove this sentence from the empty set of premises. And the apparent
difficulty is that the conclusion is long and complex, and there are no premises to work with.
The solution is to allow the logical structure of the conclusion to dictate the proof strategy.
Since the conclusion is a conditional, we will plan to get it by using → Intro, assuming
(A → (B → C)) and deriving (A → B) → (A → C). Here's what the opening strategy looks
like. Open Ch8Ex1a.prf.
Now our task is to derive the intermediate conclusion (A → B) → (A → C). Since this is also
a conditional, we can use the same → Intro strategy. We'll assume A → B and derive
A → C. Open Ch8Ex1b.prf.
Our final intermediate conclusion is A → C, and we simply repeat the same strategy: assume
the antecedent A and derive the consequent C. Open Ch8Ex1c.prf.
Now that the entire strategy has been laid out, we have several nested subproofs. Working
inside the innermost subproof, our task is to derive C from all of the assumptions under which
it falls. Since those assumptions are:
A → (B → C)
A → B
A
the deduction is easy (three applications of → Elim). For the complete proof, see
Ch8Ex1d.prf.
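Before (or after) proving it in Fitch, we can confirm that the sentence really is a tautology by running through all eight valuations of A, B, and C. A quick Python sketch:

```python
from itertools import product

def implies(p, q):
    """Truth function for the conditional P -> Q."""
    return (not p) or q

# (A -> (B -> C)) -> ((A -> B) -> (A -> C))
def distribution(a, b, c):
    return implies(implies(a, implies(b, c)),
                   implies(implies(a, b), implies(a, c)))

is_tautology = all(distribution(a, b, c)
                   for a, b, c in product([True, False], repeat=3))
print(is_tautology)  # True -- true in all eight rows
```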
Another example: a logical truth requiring Ana Con
In this example, we prove a logical truth that is neither a tautology nor a first-order logical
truth, so we'll have to use Ana Con:
Cube(a) → ¬(Tet(a) ∨ Dodec(a))
Open the file Ch8Ex2.prf from the Supplementary Exercises page on the course web site.
Notice that the Goal Constraints permit the use of Ana Con, but only with literals. Note
carefully what this means. First, and most obviously, it means you can't apply Ana Con
directly to the conclusion itself, justifying it in a single step. (Without this goal constraint, you
could do this!) Second, and less obviously, you can't use Ana Con to justify
¬(Tet(a) ∨ Dodec(a)), citing the previous step Cube(a). For even though you would be
applying Ana Con to a literal, the sentence you are deriving is not a literal. This constraint
requires that both the cited sentence and the derived sentence be literals.
Try to develop an approach that works within this constraint: my version uses conditional
proof strategy, with an embedded indirect proof; the contradiction in the indirect proof is
obtained by using proof by cases. Your version might well be different. For my version, see
Proof Ch8Ex2.prf.
Copyright 2006, S. Marc Cohen Revised 11/4/06
Chapter 9: Introduction to Quantification
9.1 Variables and atomic wffs
Variables behave syntactically like names: they appear in sentences in the same places that
names appear. So all of the following count as correct atomic expressions of FOL:
Cube(d) FrontOf(a, b) Adjoins(c, e)
Cube(x) FrontOf(x, y) Adjoins(c, x) Adjoins(y, e)
These are all well formed formulas (wffs) of FOL. In fact, they are all atomic wffs. But the ones
with variables in them (in these examples, the ones in the second row, containing x and y) are
semantically different. For unlike the ones in the first row (whose individual symbols are
restricted to names), the ones with variables in them do not make determinate statements, and
hence do not have truth-values.
All of the expressions above are wffs; but only those in the top row are sentences.
FOL contains an infinite supply of variables: t, u, v, w, x, y, z, t_1, u_1, etc. Fitch understands all of
these, but Tarski's World is restricted to these six: u, v, w, x, y, z.
9.2 The quantifier symbols: ∀, ∃
The quantifier symbols, ∀ and ∃, are used with variables and wffs to create FOL sentences.
Universal quantifier (∀)
∀x is read "for every object x". Thus, "Every object is a cube" would be expressed in FOL as
∀x Cube(x). Some other obvious translations:
English FOL
Everything is either a cube or a tetrahedron. ∀x (Cube(x) ∨ Tet(x))
Every tetrahedron is small. ∀x (Tet(x) → Small(x))
Existential quantifier (∃)
∃x is read "for at least one object x". Thus, "At least one object is a tetrahedron" would be
expressed in FOL as ∃x Tet(x). Some other obvious translations:
English FOL
Some tetrahedron is small. ∃x (Tet(x) ∧ Small(x))
There is at least one cube in front of b. ∃x (Cube(x) ∧ FrontOf(x, b))
Pay particular attention to the two "small tetrahedron" sentences:
Every tetrahedron is small. ∀x (Tet(x) → Small(x))
Some tetrahedron is small. ∃x (Tet(x) ∧ Small(x))
In English, the only difference between them is that one contains "every" where the other contains
"some". So one might suppose that in FOL, the only difference between them would be that one
contains ∀ where the other contains ∃. But this is not the case, as you can see. The universally
quantified sentence contains a → where the existentially quantified sentence contains a ∧. We will
spend some time later getting clear exactly why this is so.
9.3 Wffs and sentences
In the portion of FOL we have studied up until now (the logic of sentences, or propositional
logic), all sentences are built up out of atomic sentences, truth-functional connectives, and
parentheses. In quantificational logic, we still have all of these sentences, but we have a lot more.
For we can now form sentences out of parts that are neither sentences nor connectives, namely, out
of wffs that are not sentences. That is, the parts will include wffs that contain variables.
What we need to do is to give the rules of the syntax of FOL. We will approach this in two stages.
First, we'll describe the rules for constructing the wffs; then we will state the rules for determining
which of the wffs are sentences.
Wffs
We begin with the notion of an atomic wff: any n-ary predicate followed by n individual
symbols. (An individual symbol is either an individual constant or a variable.) Atomic wffs
are the building blocks of FOL.
The examples we looked at earlier are all atomic wffs:
Cube(d) FrontOf(a, b) Adjoins(c, e)
Cube(x) FrontOf(x, y) Adjoins(c, x) Adjoins(y, e)
Any variable occurring in an atomic wff is free (unbound). Thus, there are free variables (x
and y) in the atomic wffs in the second row, and no variables in the atomic wffs in the first
row.
We can now give the rules for constructing more complex wffs out of atomic wffs,
connectives, parentheses, and quantifiers:
1. If P is a wff, so is ¬P.
2. If P_1, …, P_n are wffs, so is (P_1 ∧ … ∧ P_n).
3. If P_1, …, P_n are wffs, so is (P_1 ∨ … ∨ P_n).
4. If P and Q are wffs, so is (P → Q).
5. If P and Q are wffs, so is (P ↔ Q).
6. If P is a wff and ν is a variable, then ∀ν P is a wff, and any occurrence of ν in ∀ν P is
said to be bound.
7. If P is a wff and ν is a variable, then ∃ν P is a wff, and any occurrence of ν in ∃ν P is
said to be bound.
Examples
Cube(x) and Dodec(y) are both atomic wffs, so (Cube(x) ∧ Dodec(y)) is a wff (by
clause 2).
Since (Cube(x) ∧ Dodec(y)) is a wff, so is ¬(Cube(x) ∧ Dodec(y)) (by clause 1).
Since Adjoins(x, y) is a wff, so is ∃x Adjoins(x, y) (by clause 7).
Since ¬(Cube(x) ∧ Dodec(y)) and ∃x Adjoins(x, y) are both wffs, so is
(¬(Cube(x) ∧ Dodec(y)) → ∃x Adjoins(x, y)) (by clause 4).
And so on. Note that in our last wff above, the occurrence of x in the antecedent is free,
while both occurrences of x in the consequent are bound. All occurrences of y are free.
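Clauses 1-7 translate directly into a recursive computation of free variables. The following Python sketch uses a nested-tuple encoding of wffs that is invented here for illustration (it is not part of FOL, Fitch, or Tarski's World):

```python
# Wffs as nested tuples:
#   ("atom", "Cube", ("x",))                 -- atomic wff Cube(x)
#   ("not", P), ("and", P, Q), ("or", P, Q)  -- clauses 1-3
#   ("if", P, Q), ("iff", P, Q)              -- clauses 4-5
#   ("all", "x", P), ("some", "x", P)        -- clauses 6-7

VARIABLES = {"t", "u", "v", "w", "x", "y", "z"}

def free_vars(wff):
    """The set of variables with at least one free occurrence in wff."""
    op = wff[0]
    if op == "atom":
        return {t for t in wff[2] if t in VARIABLES}
    if op == "not":
        return free_vars(wff[1])
    if op in ("and", "or", "if", "iff"):
        return free_vars(wff[1]) | free_vars(wff[2])
    if op in ("all", "some"):           # a quantifier binds its own variable
        return free_vars(wff[2]) - {wff[1]}
    raise ValueError(op)

def is_sentence(wff):
    """A sentence is a wff with no free variables."""
    return not free_vars(wff)

# The example wff: (~(Cube(x) & Dodec(y)) -> Ex Adjoins(x, y))
w = ("if",
     ("not", ("and", ("atom", "Cube", ("x",)),
                     ("atom", "Dodec", ("y",)))),
     ("some", "x", ("atom", "Adjoins", ("x", "y"))))
print(free_vars(w))                                # {'x', 'y'}
print(is_sentence(("all", "x", ("all", "y", w))))  # True
```

Note that the x in the consequent never shows up as free: the ∃x clause subtracts it before the union at the conditional, which is exactly the bookkeeping described in the examples above.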
Sentences
A sentence is a wff in which no variable has a free occurrence. So, to convert our wff
above into a sentence, we will have to do something to its free variables.
Bind with a quantifier
One way to convert our wff to a sentence is to attach quantifiers to bind the variables.
We would do this in two stages:
∀y(¬(Cube(x) ∧ Dodec(y)) → ∃x Adjoins(x, y))
This takes care of y, as all three of its occurrences are now bound. But the leftmost
occurrence of x is still free. So we can attach another quantifier, this one containing an
x. Note that it will bind only the leftmost x; the ones in the consequent are already
bound, and so are not bindable by the new quantifier.
∀x∀y(¬(Cube(x) ∧ Dodec(y)) → ∃x Adjoins(x, y))
There are no free variables in this wff, and so it is a sentence. (We are not worried right
now about what this sentence means. We are only trying to see what makes it a
sentence.)
Substitution
Another way to convert a wff to a sentence is to replace the free variables it contains
with constants. Starting with:
(¬(Cube(x) ∧ Dodec(y)) → ∃x Adjoins(x, y))
we replace both occurrences of y with the same constant (in this case a); replacement
must be uniform. As for x, we do not replace its occurrences in the consequent, because
they are not free; we replace only the occurrence in the antecedent. We can replace that
occurrence of x with any constant we like (including a). Or, we can use a different
constant:
(¬(Cube(b) ∧ Dodec(a)) → ∃x Adjoins(x, a))
There are no free variables in this wff, and so it is a sentence.
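Uniform substitution, replacing only the free occurrences, can be sketched in the same spirit; the nested-tuple encoding of wffs is again my own illustration, not part of the language:

```python
def substitute(wff, var, const):
    """Replace every FREE occurrence of var in wff with const.
    Wffs are nested tuples: ("atom", pred, terms), ("not", P),
    ("and"/"or"/"if"/"iff", P, Q), ("all"/"some", v, P)."""
    op = wff[0]
    if op == "atom":
        return (op, wff[1], tuple(const if t == var else t for t in wff[2]))
    if op == "not":
        return (op, substitute(wff[1], var, const))
    if op in ("and", "or", "if", "iff"):
        return (op, substitute(wff[1], var, const),
                    substitute(wff[2], var, const))
    if op in ("all", "some"):
        if wff[1] == var:      # var is bound here: leave the body untouched
            return wff
        return (op, wff[1], substitute(wff[2], var, const))
    raise ValueError(op)

# (~(Cube(x) & Dodec(y)) -> Ex Adjoins(x, y)), substituting b for x:
w = ("if",
     ("not", ("and", ("atom", "Cube", ("x",)),
                     ("atom", "Dodec", ("y",)))),
     ("some", "x", ("atom", "Adjoins", ("x", "y"))))
print(substitute(w, "x", "b"))
```

Only the free (leftmost) occurrence of x becomes b; the quantified consequent is returned unchanged, which is exactly the "replace only the free occurrences, and uniformly" requirement.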
You can confirm that these are sentences in Tarski's World. Open the file Ch9Ex1.sen from
the Supplementary Exercises page of the course web site:
1. ∀x∀y(¬(Cube(x) ∧ Dodec(y)) → ∃x Adjoins(x, y))
2. (¬(Cube(b) ∧ Dodec(a)) → ∃x Adjoins(x, a))
Then try to verify the two sentences (above) that it contains. You will find that both are
sentences, but neither is evaluable. (1) is not evaluable because the world is empty, and no
sentence is evaluable in an empty world. As soon as you put a block into the world, (1) will be
evaluable (it will come out false if all you do is to put in one block. Even when you add a
second block, (1) will remain false unless your two blocks adjoin one another).
Note that (2) remains unevaluable. It cannot be evaluated until the names it contains are
assigned to objects in the world. (Note this disparity in FOL between names and predicates:
the predicates can be empty, but the names cannot.) As soon as you assign the names a and
b to blocks, (2) will evaluate as true or false.
Notice, also, that, for convenience, I have omitted the outermost pair of parentheses on (2). It
is always permissible to omit the outermost pair of parentheses. Just don't forget to put them
back on if you are embedding the sentence in a larger context (e.g., negating it, or making it a
component of a compound sentence, or attaching a quantifier to it). We will turn next to an
example of what can happen if you are not careful about this.
Scope of quantifier
Pay careful attention to the example discussed on p. 233. Parentheses are important in
indicating the scope of a quantifier, that is, which part of the sentence contains occurrences of
variables bindable by that quantifier.
So we must distinguish between these two wffs:
∃x (Doctor(x) ∧ Smart(x))
and
∃x Doctor(x) ∧ Smart(x)
The first is a sentence (it says, roughly, "some doctor is smart"); the second is not a sentence,
since the x in Smart(x) is free. (This wff says, roughly, "There are doctors, and x is smart.")
It's easy to make the mistake of writing the second wff when you intend the first sentence.
Here's how it might happen:
You start with the atomic wffs Doctor(x) and Smart(x). You then conjoin them and
get (Doctor(x) ∧ Smart(x)). You decide to drop the outer parentheses for convenience,
and get the perfectly acceptable Doctor(x) ∧ Smart(x). Then, when you attach the
quantifier, you forget to put the missing parentheses back. So instead of the intended
sentence ∃x (Doctor(x) ∧ Smart(x)) you get the mistaken wff
∃x Doctor(x) ∧ Smart(x). Be careful!
9.4 Semantics for the quantifiers
Satisfaction
Wffs containing free variables don't have truth-values: they are not true or false.
Consequently, a quantified sentence that is built of such wffs, such as ∃x Cube(x), cannot
have its truth-value defined in terms of the truth-value of its component wff, Cube(x), since
that atomic wff does not have a truth-value.
Wffs containing free variables, although not true or false simpliciter, nevertheless can be said
to be true or false of things. The wff Cube(x) is true of each cube, and false of every other
thing. The wff Tet(x) ∧ Small(x) is true of each small tetrahedron, and false of every other
thing. Another way to put this is to say that each cube satisfies Cube(x) and each small
tetrahedron satisfies Tet(x) ∧ Small(x).
Satisfaction, then, is a relation between an object and a wff with a free variable.
[We are simplifying for ease of comprehension. Strictly speaking, we should say that
satisfaction is a relation between an ordered n-tuple of objects and a wff with n free
variables. For example, consider a wff with two free variables, such as Larger(x, y).
Which objects stand in the satisfaction-relation to this wff? No object taken by itself
does so; rather, it is pairs of objects that satisfy this wff. Thus, if a is a small cube and b
is a large tetrahedron, then the pair of objects b and a, taken in that order (<b, a> is how
we write this), satisfies the wff Larger(x, y). Note that <a, b> does not satisfy this wff,
since a is not larger than b.]
We can state what it is for an object to satisfy a wff in terms of the truth of a certain sentence.
For example, if S(x) is a wff containing one free variable, then a given object satisfies S(x) iff
we get a true sentence when we replace every free occurrence of x in S(x) with the name of
that object.
For example, an object named b satisfies Cube(x) ∧ Adjoins(x, a) iff the sentence
Cube(b) ∧ Adjoins(b, a) is true.
But not every object has a name. (In many of the worlds in Tarski's World, lots of objects are
nameless.) How do we explain what it is for a nameless object to satisfy a wff? We assign the
object a temporary name and proceed as we did above for named objects.
Tarski's World reserves a number of individual constants, n_1, n_2, n_3, etc., for just this
purpose. If we want to know whether a given nameless object satisfies a wff, we temporarily
give it a name, choosing as its name the first of these constants not already in use. Suppose n_2
is the first such constant. Then, using n_2 as a name for our nameless object, that object
satisfies S(x) iff we get a true sentence when we replace every free occurrence of x in S(x)
with n_2.
For example, a nameless object satisfies Cube(x) ∧ Adjoins(x, a) iff, treating n_2 as the
name of that object, the following sentence is true: Cube(n_2) ∧ Adjoins(n_2, a)
Semantics of ∃
A sentence of the form ∃x S(x) is true iff there is at least one object satisfying S(x).
Example: ∃x (Cube(x) ∧ Small(x)) is true iff there is at least one object satisfying
Cube(x) ∧ Small(x), i.e., iff there is at least one small cube.
Semantics of ∀
A sentence of the form ∀x S(x) is true iff every object satisfies S(x).
Example: ∀x (Cube(x) → Small(x)) is true iff every object satisfies
Cube(x) → Small(x), i.e., iff every object satisfying Cube(x) also satisfies Small(x),
i.e., iff every cube is small.
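These two semantic clauses mirror a pair of one-line searches over the domain. In the following Python sketch, a toy "world" is a list of blocks and a wff with one free variable is represented by a Python predicate; the whole representation is invented for illustration and is not how Tarski's World is implemented:

```python
# A toy world: each block is a dict of its properties.
world = [
    {"shape": "cube", "size": "small"},
    {"shape": "tet",  "size": "large"},
    {"shape": "cube", "size": "large"},
]

def exists(sat, domain):
    """Ex S(x): true iff at least one object satisfies S(x)."""
    return any(sat(obj) for obj in domain)

def forall(sat, domain):
    """Ax S(x): true iff every object satisfies S(x)."""
    return all(sat(obj) for obj in domain)

# Ex (Cube(x) & Small(x)): some cube is small
print(exists(lambda o: o["shape"] == "cube" and o["size"] == "small", world))  # True

# Ax (Cube(x) -> Small(x)): every cube is small
print(forall(lambda o: o["shape"] != "cube" or o["size"] == "small", world))   # False
```

The second call is False because the third block is a large cube, i.e. an object that satisfies Cube(x) but not Small(x).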
Domain of discourse
The domain of discourse is the entire collection of things that we take our FOL sentences to be
about: the things we allow our quantifiers to range over or pick out. Sometimes, the
domain is unrestricted, in which case we are talking about everything, and our quantifiers
range over all objects. More often, the domain is restricted in some way (restricted to a
smaller collection of objects: people, numbers, politicians, elementary particles, etc.). The
choice of domains affects how we read the quantifiers and quantified sentences. But in any
case, the domain must be non-empty.
Examples
In the domain of persons, we read ∀x as "for every person, x …".
In the domain of numbers, we read ∃x as "there is at least one number x such that …".
In the domain of politicians, we read ∀y as "for every politician, y …".
In Tarski's World, the domain is restricted to blocks. Hence, in sentences about a Tarski
world, we read ∀x as "for every block, x …" and ∃x as "for at least one block, x …".
If the domain is unrestricted, then ∀x is read as "for everything, x …" and ∃x is read as
"there is at least one thing x such that …". When a domain has not been specified, it will
be assumed to be unrestricted.
A difference in domain is reflected in a difference in the way we translate sentences from
English to FOL, and vice versa:
In the domain of numbers, we could translate "Some numbers are even" as ∃x Even(x).
But in an unrestricted domain, we'd have to write ∃x (Number(x) ∧ Even(x)).
Similarly, in the domain of politicians, we could translate "All politicians are crooks" as
∀x Crook(x). But in an unrestricted domain, we'd have to write
∀x (Politician(x) → Crook(x)).
Obviously, the advantage of a restricted domain is that it makes translation easier. The
drawback is that once the domain has been restricted, your sentences cannot talk about
anything outside of the restricted domain.
Hence, a sentence like Every person owns a pet cannot be translated adequately into an FOL
whose domain has been restricted to persons, since this sentence requires us to quantify over
pets, and pets are not persons (at least, many pets are not persons!).
A notational convention
In stating the semantics of the quantifiers, and in stating the game rules for Tarski's World,
we talk about sentences of the form ∃x S(x), for example. S(x) here can be any wff that
contains at least one free occurrence of x. So, for example, the following FOL sentence is of
the form ∃x S(x):
∃x (Cube(x) ∧ ∀y (Tet(y) → Larger(x, y)))
If we then want to talk about a given substitution instance of this existential generalization,
we would use the notation S(b), for example. Here, S(b) means the result of replacing every
free occurrence of x in S(x) with an occurrence of b. Hence, where ∃x S(x) is the sentence
above, S(b) is:
Cube(b) ∧ ∀y (Tet(y) → Larger(b, y))
Game rules for the quantifiers
The game rules are summarized on p. 237. The only rules that are new are the ones for the
quantifiers, that is, for sentences of the form ∃x P(x) and ∀x P(x). Study these rules carefully.
Here's a handy way of remembering how they work.
Copyright 2006, S. Marc Cohen Revised 11/4/06
Existential quantifier
∃x P(x) is true iff at least one object satisfies P(x). Call any object that does this a
"witness." Then the game rule for ∃x P(x) can be stated as follows: whoever is
committed to TRUE must try to find a witness. If you are committed to TRUE, Tarski's
World will ask you to choose a witness; if you are committed to FALSE, Tarski's World
will try to choose a witness.
Universal quantifier
∀x P(x) is true iff every object satisfies P(x). Call any object that does not satisfy P(x)
a "counterexample." Then the game rule for ∀x P(x) can be stated as follows:
whoever is committed to FALSE must try to find a counterexample. If you are
committed to TRUE, Tarski's World will try to find a counterexample; if you are
committed to FALSE, Tarski's World will ask you to find a counterexample.
In both cases, remember that if it is Tarski's World's move (that is, you have committed to
TRUE for ∀x P(x) or to FALSE for ∃x P(x)), and your commitment is correct, there will be no
counterexample to ∀x P(x) and no witness for ∃x P(x). But Tarski's World will not give up:
it will choose an object anyway, and try to trick you into thinking that it is a witness (or a
counterexample). So don't be intimidated just because Tarski's World has made a choice. It
may be bluffing!
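The witness/counterexample idea can be mimicked outside Tarski's World. Here is a small Python sketch (the two-block world, the names, and the helper functions below are invented for illustration; Python's list comprehensions play the role of the player's search):

```python
# A sketch (not Tarski's World itself) of the quantifier game rules, using a
# hypothetical world of blocks represented as dictionaries.
world = [
    {"name": "a", "shape": "cube", "size": "small"},
    {"name": "b", "shape": "tet", "size": "large"},
]

def witnesses(world, pred):
    """Objects that satisfy pred: what you hunt for when defending an existential."""
    return [obj for obj in world if pred(obj)]

def counterexamples(world, pred):
    """Objects that falsify pred: what you hunt for when attacking a universal."""
    return [obj for obj in world if not pred(obj)]

is_cube = lambda obj: obj["shape"] == "cube"

# For ∃x Cube(x): true iff there is a witness.
print(witnesses(world, is_cube))        # the cube "a" is a witness
# For ∀x Cube(x): false iff there is a counterexample.
print(counterexamples(world, is_cube))  # the tet "b" is a counterexample
```

In this world, ∃x Cube(x) comes out true (block a is a witness) and ∀x Cube(x) comes out false (block b is a counterexample).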
9.5 The four Aristotelian forms
Aristotle (384-322 BCE) invented the first system of formal logic. He focused on four forms of
sentences: universal affirmative, universal negative, particular affirmative, and particular
negative:
A All Ps are Qs.
I Some Ps are Qs.
E No Ps are Qs.
O Some Ps are not Qs.
The labels (A, I, E, O) were not due to Aristotle. They were a medieval mnemonic device, from the
Latin words affirmo (meaning "I affirm") and nego (meaning "I deny"). A and I (from affirmo) are
the positive, or affirmative, ones; E and O (from nego) are the negative ones.
It is important to learn these forms well, as many very complicated sentences can be shown to be
based on these simple forms.
A vs. I
The most important point to be clear on at the start is the difference between A and I
sentences when they are translated into FOL.
English            FOL
All Ps are Qs      ∀x (P(x) → Q(x))
Some Ps are Qs     ∃x (P(x) ∧ Q(x))
Why do these FOL sentences have different connectives, as well as different quantifiers? It's
pretty easy to see that ∀x (P(x) ∧ Q(x)) could not be right for "All Ps are Qs." For this FOL
sentence says everything is both P and Q, and this is obviously too strong. "All humans are
mortal" is true, but it is not true that everything is both human and mortal.
Seeing why ∃x (P(x) → Q(x)) is wrong for "Some Ps are Qs" is harder. The "You try it" on p.
240 will help you see this.
An easy way to see what's wrong with this translation into FOL is to remember that P → Q is
equivalent to ¬P ∨ Q. This means that ∃x (P(x) → Q(x)) is equivalent to ∃x (¬P(x) ∨ Q(x)).
Now compare these two sentences:
1. Some cubes are large.
2. Something is either not a cube or large.
Clearly, these are not equivalent. (1) cannot be true unless there is a large cube; but (2) does
not require this: it comes out true if there is a non-cube. It also comes out true if there is a
large thing, whether or not it's a cube!
In fact, the only way (2) comes out false is if everything is a cube and nothing is large.
Here's a world in which ∃x (Cube(x) → Large(x)) is false. Open the files Ch9Ex2.wld and
Ch9Ex2.sen. Notice that in this world of small and medium cubes, our sentence comes out
false. But almost any change we make to the world makes our sentence true. Change any of
the cubes into a non-cube, and the sentence becomes true; or, add any large object (of any
shape) to the world, and the sentence becomes true. Notice that the correct translation of "some
cubes are large," ∃x (Cube(x) ∧ Large(x)), remains false when these changes are made.
Now let's take what we've learned from this example and apply it to any FOL sentence of the
form ∃x (P(x) → Q(x)). It almost always comes out true. The only way it comes out false is if
everything satisfies P(x) and nothing satisfies Q(x). Hence, it makes a statement so weak (it
almost always comes out true) that it is seldom worth asserting.
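The weakness of ∃x (P(x) → Q(x)) can be checked concretely. Here is a Python sketch over a made-up domain modeled on the Ch9Ex2 world of small and medium cubes (the tuples and helper functions are invented for illustration; any/all play the role of ∃/∀):

```python
# Why ∃x (Cube(x) → Large(x)) is too weak to translate "Some cubes are large":
# a hypothetical world of two non-large cubes, like the Ch9Ex2 world.
blocks = [("cube", "small"), ("cube", "medium")]

def cube(b):  return b[0] == "cube"
def large(b): return b[1] == "large"

# Correct I form: ∃x (Cube(x) ∧ Large(x))
some_cubes_large = any(cube(b) and large(b) for b in blocks)

# Wrong version: ∃x (Cube(x) → Large(x)), i.e. ∃x (¬Cube(x) ∨ Large(x))
weak_version = any((not cube(b)) or large(b) for b in blocks)

print(some_cubes_large)  # False: there is no large cube
print(weak_version)      # False here, but adding ANY non-cube flips it to True:
print(any((not cube(b)) or large(b) for b in blocks + [("tet", "small")]))  # True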
Two ways of writing E
There are two ways of thinking about "No Ps are Qs." You might think of it as (a) a universal
generalization or (b) a negative sentence.
(a) Universal generalization
(a) encourages this reading: for any object, if it's a P, then it's not a Q. That is, in FOL:
∀x (P(x) → ¬Q(x)).
(b) Negation
(b) encourages this reading: it is false that even one P is a Q.
That is, in FOL: ¬∃x (P(x) ∧ Q(x)).
These are both correct and perfectly acceptable ways of translating E sentences into FOL.
All vs. Only
Notice that just as "all" can be a quantifier in English (as in the phrase "all freshmen"), so too "only"
can be used as a quantifier (as in "only freshmen"). Compare the following two sentences:
1. All freshmen are eligible for the Kershner prize.
2. Only freshmen are eligible for the Kershner prize.
Clearly, (1) and (2) are not equivalent. What is the difference between them? (1) tells us that
being a freshman is a sufficient condition for eligibility: if you're a freshman, then you're
eligible. But (2) tells us that being a freshman is a necessary condition for eligibility: you're
eligible only if you're a freshman (but perhaps there are other necessary conditions as well).
Hence, our two sentences go into FOL as follows:
1. ∀x (Freshman(x) → Eligible(x))
2. ∀x (Eligible(x) → Freshman(x))
Notice that just as, in propositional logic, "only if" indicates that the sentence that follows is the
consequent of a conditional, so too in quantificational logic "only" indicates that the noun
phrase that follows should be translated by a wff that is the consequent of a conditional
embedded in the scope of a universal quantifier.
For practice, open Tarski's World and construct a world in which there is a small tetrahedron,
a medium dodecahedron, a small cube, and a large cube. Notice that although not all the cubes
are large, the only large block is a cube. Now write two FOL sentences that correspond to the
English sentences (1) "All cubes are large" and (2) "Only cubes are large." Then click Verify All.
(1) should come out false and (2) should come out true. Now make the small cube large and
click Verify All again. This time they should both be true. Now make the tetrahedron or the
dodecahedron large (but leave the cubes both large) and re-verify. This time (1) should come
out true and (2) should come out false.
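The same contrast can be checked in Python. This sketch mirrors the suggested practice world (the tuples and helper functions below are made up for illustration; all() plays the role of ∀):

```python
# "All cubes are large" vs "Only cubes are large" in a hypothetical four-block
# world: a small tet, a medium dodec, a small cube, and a large cube.
blocks = [("tet", "small"), ("dodec", "medium"), ("cube", "small"), ("cube", "large")]

def cube(b):  return b[0] == "cube"
def large(b): return b[1] == "large"

# (1) All cubes are large: ∀x (Cube(x) → Large(x))
all_cubes_large = all((not cube(b)) or large(b) for b in blocks)
# (2) Only cubes are large: ∀x (Large(x) → Cube(x))
only_cubes_large = all((not large(b)) or cube(b) for b in blocks)

print(all_cubes_large)   # False: the small cube is a counterexample
print(only_cubes_large)  # True: the one large block is a cube
```

Swapping the antecedent and consequent is all that separates the two translations, just as in the FOL sentences above.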
For a handy chart of FOL translations of some common English quantificational sentences,
download Common Quantificational Forms on the Supplementary Exercises page for this
chapter.
9.6 Translating complex noun phrases
It is now time to investigate sentences that are more complex than the ones we've seen so far, but
that still have the basic structure of one of the four Aristotelian forms. Our first look will be at
sentences that involve complex noun phrases, such as the following:
small happy dog
large cube in front of b
an apple or an orange
freshman or sophomore who has studied logic
In all of these cases, we could treat the complex noun phrase as a single predicate, and then use
these predicates to construct atomic sentences, such as:
SmallHappyDog(pris)
But such translations are undesirable, in that they make some important logical relationships less
perspicuous than they should be. We'd like to translate "Pris is a small happy dog" into FOL in a way
that makes clear that this sentence has "Pris is a dog" as a consequence. And our proposed atomic
translation above does not do this.
A better way is to use truth-functional connectives and more familiar (and less complicated)
predicates. So "Pris is a small happy dog" will become:
Small(pris) ∧ Happy(pris) ∧ Dog(pris)
Translating the complex noun phrase, then, means finding an appropriate truth-functional
compound of wffs that are not sentences (i.e., wffs containing a free variable). Our remaining
examples look like this:
Large(x) ∧ Cube(x) ∧ FrontOf(x, b)
Apple(y) ∨ Orange(y)
(Frosh(x) ∨ Soph(x)) ∧ StudiedLogic(x)
We can then embed these wffs in sentences, either by replacing the free variables with a name, or
prefixing the appropriate quantifier. We'll do that with these sentences:
There is no large cube in front of b.
¬∃x (Large(x) ∧ Cube(x) ∧ FrontOf(x, b))
If Bob eats anything, it will be an apple or an orange.
∀y (Eats(bob, y) → (Apple(y) ∨ Orange(y)))
Any freshman or sophomore who has studied logic will succeed.
∀x (((Frosh(x) ∨ Soph(x)) ∧ StudiedLogic(x)) → Succeed(x))
[There is a possible ambiguity in this last sentence: in which of these two ways
do we read the noun phrase?
(freshman or sophomore) who has studied logic
freshman or (sophomore who has studied logic)
The first is more natural (it's the one we used above), but the second is still
possible.]
Sometimes, the correct rendition of a complex noun phrase in FOL is surprising. Take, for example,
the phrase "apples and oranges." We might expect this to go into FOL as Apple(x) ∧ Orange(x).
But study this wff carefully. Which objects satisfy it? It takes only a little thought to realize that
nothing satisfies it, for in order to satisfy this wff, an object would have to satisfy both of the wffs
Apple(x) and Orange(x). But no object does this, since no object is both an apple and an orange.
The correct rendition of "apples and oranges" is more likely to be Apple(x) ∨ Orange(x). For when
you consider such sentences as:
Apples and oranges are fruits.
Bob eats only apples and oranges.
it is clear that the FOL sentences that capture their meanings are:
"x ((Apple(x) " Orange(x)) # Fruit(x))
"x (Eats(bob, x) # (Apple(x) " Orange(x))
Conversational implicature and quantification
When we use such English quantificational phrases as "every applicant," "all my grandchildren,"
etc., there is an apparent implication that there are some applicants, that I have some
grandchildren, etc.
But in FOL, such sentences as:
∀x (Applicant(x) → Hired(bill, x))
∀y (Grandchild(y, marc) → Brilliant(y))
come out true when nothing satisfies the wff in the antecedent. So if there were no applicants,
the FOL translation of "Bill hired every applicant" comes out true; and if I have no
grandchildren, the FOL translation of "All my grandchildren are brilliant" comes out true.
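Vacuous truth has an exact analogue in Python: all() over an empty collection returns True. A sketch (the empty applicant list is a made-up example):

```python
# Vacuous generalizations in miniature: Python's all() over an empty
# collection returns True, just as ∀x (Applicant(x) → Hired(bill, x))
# comes out true when there are no applicants.
applicants = []  # hypothetical: Bill received no applications

bill_hired_every_applicant = all(hired for (name, hired) in applicants)
there_were_no_applicants = not any(True for _ in applicants)

print(bill_hired_every_applicant)  # True (vacuously)
print(there_were_no_applicants)    # True: the stronger, more informative claim
```

Both come out true, which sets up the question that follows: if the generalization is true, why does asserting it seem wrong?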
What are we to say of these vacuous generalizations? They come out true in FOL, but when we
assert their English translations, something seems wrong with them. But what is wrong with
them? Is it that they are false?
The most widely accepted answer to this question makes use of Grice's notion of
conversational implicature. Grice's answer is not that vacuous generalizations are false, but
that they are misleading.
What is misleading about them is that the speaker has not been fully forthcoming with all the
information at his or her disposal. If the speaker knows that there are no applicants, or that
Marc has no grandchildren, the most fully informative statements he or she could make about
the applicants, or about Marc's grandchildren, are:
There were no applicants.
Marc has no grandchildren.
The statements we are considering:
Bill hired every applicant.
All of Marc's grandchildren are brilliant.
make weaker claims: each is a logical consequence of its counterpart negative existential,
but does not logically imply it. So the vacuous generalization makes a weaker claim.
The relation is just the same as that between a disjunction and one of its disjuncts: the
disjunction makes a weaker claim than the disjunct standing alone. But clearly the weaker
claim is not false; it is just a weaker version of the truth. For example, if I tell my wife "Your
keys are either in the kitchen or by the front door," and I know that they are in the kitchen, I
have not lied; I have not said something false. I have misled her, of course, by withholding
relevant information that I possessed, namely, that they are not by the front door. I should
have said simply "They are in the kitchen." But my mistake was not in saying something
false; rather, it was in not telling all of the truth I was in possession of. I may have
conversationally implicated that I did not know the exact location of her keys, but I did not
assert that.
That this is exactly what is going on in the cases we are considering becomes apparent when
we consider an equivalent FOL sentence:
∀y (¬Grandchild(y, marc) ∨ Brilliant(y))
This is equivalent to the standard FOL version of "All of Marc's grandchildren are brilliant" and
it stands in just the same relation to its counterpart negative existential:
¬∃y Grandchild(y, marc)
Here, too, the weaker statement is a disjunction and the stronger statement is one of its
disjuncts.
So when I, who have no grandchildren, say that all of my grandchildren are brilliant, I say
something that is misleading, but not false. For I have not asserted, falsely, that I have
grandchildren, although I have implicated this. The implicature can be cancelled, for I can
say, without contradicting myself (barely!), "All of my grandchildren are brilliant;
unfortunately, I don't have any grandchildren."
Copyright 2004, S. Marc Cohen Revised 11/21/04
Chapter 10: The Logic of Quantifiers
First-order logic
The system of quantificational logic that we are studying is called first-order logic because of a
restriction in what we can quantify over. Our language, FOL, contains both individual constants
(names) and predicates. The names stand for individuals and the predicates, we might say, stand for
properties of those individuals. In FOL, we quantify over individuals, but not over properties.
That is, we can take the sentence Cube(b) ∧ Large(b) and obtain a quantified sentence by
replacing the individual constant with a variable, and attaching a quantifier:
∃x (Cube(x) ∧ Large(x))
This is a way of saying in FOL that something is both a cube and large. But we cannot similarly
replace a predicate with a variable and still have an FOL sentence. For example, we cannot start
with the sentence Student(max) ∧ Student(claire) and obtain:
∃P (P(max) ∧ P(claire))
(which seems to say that Max and Claire have something in common), for this is not a sentence of
FOL. In second-order logic, there are predicate variables as well as individual variables, and
second-order quantifiers. But second-order logic is a lot more complicated than FOL, and does not
have all of the same features. (For example, our system F for FOL is complete, but there is no
complete deductive system for second-order logic.) For more on second-order logic, see
SecondOrder.pdf.
10.1 Tautologies and quantification
Not all cases of logical consequence are cases of tautological consequence. The following
argument is valid:
"x Cube(x)
"x Small(x)
"x (Cube(x) Small(x))
but the conclusion is not a tautological consequence of the premises. The validity of the argument
depends on the meaning of the universal quantifier ", and not just on the meaning of the
connective .
As LPL shows (p. 258), the validity here must depend on more than just the connective , for the
following argument is not valid:
!x Cube(x)
!x Small(x)
!x (Cube(x) Small(x))
Similarly, not all logical truths are tautologies. The following is an example of a logical truth that
is not a tautology:
∃x Cube(x) ∨ ∃x ¬Cube(x)
This is a logical truth because in every world in which we evaluate an FOL sentence, there is at
least one object. If a world has a cube in it, the left disjunct is true; otherwise, it contains an object
that is not a cube, in which case the right disjunct is true. So the entire sentence is true in every
world.
But the sentence is not a tautology, for the similar sentence:
∀x Cube(x) ∨ ∀x ¬Cube(x)
is clearly not a tautology, or even true in every world. But the two sentences are exactly alike in
terms of their connectives.
A sentence containing quantifiers that is a tautology is this:
∃x Cube(x) ∨ ¬∃x Cube(x)
which is just an instance of the tautologous form A ∨ ¬A.
Truth-functional form
So we have seen that some logical truths are tautologies, and some are not. To be able to
decide whether an FOL sentence that contains quantifiers is a tautology, we need to develop
the notion of a sentences truth-functional form.
The truth-functional form of a sentence is basically what Boole sees when it looks at the
sentence. It's the structure that the sentence can be seen to have when all of its constituent
quantified sentences are treated as if they were atomic. We don't look inside the general
sentences; we just uniformly replace them with letters. We then replace any remaining
atomic sentences with letters.
Example
∀x Tet(x) → ¬∃y (Cube(y) ∧ FrontOf(b, y) ∧ Dodec(b))
There are two constituent general sentences here:
∀x Tet(x)
∃y (Cube(y) ∧ FrontOf(b, y) ∧ Dodec(b))
So we replace the first general sentence with A and the second with B. The only
remaining parts of the sentence are the connectives ¬ and →. So the truth-functional
form of the sentence is A → ¬B.
Another way to put this is to say that from the perspective of truth-functional logic, this
sentence is a conditional whose consequent is a negation. This is all Boole sees when it
looks at this sentence.
Truth-functional form algorithm
LPL provides a simple mechanical procedure (or algorithm) for producing the truth-
functional form of a sentence. This is described on p. 261; you should study it and be sure you
know how to apply it.
Here's a slightly different way of carrying out the procedure: If the sentence contains any
quantifiers, start with those of largest scope. For each such quantifier, underline its entire
scope (this will include the quantifier itself). Any quantifiers, connectives, or atomic
sentences that are included in this scope should be ignored. Once all the quantified sentences
have been underlined, underline any remaining atomic sentences, with each atomic sentence
being separately underlined. Next, attach a sentence letter (i.e., a capital letter) to each
underline, starting from the left and proceeding alphabetically. If any sentence is repeated, it
should be given the same sentence letter each time.
Finally, after all the underlines have been assigned sentence letters, replace each underlined
sentence with its corresponding letter, and keep any remaining connectives that have not been
underlined. The result is the truth-functional form of the original sentence.
Example 1
∀x Tet(x) → ¬∃y (Cube(y) ∧ FrontOf(b, y) ∧ ∃z Dodec(z))
First, we underline the quantified sentences of largest scope (shown here with brackets):
[∀x Tet(x)] → ¬[∃y (Cube(y) ∧ FrontOf(b, y) ∧ ∃z Dodec(z))]
Then we attach sentence letters: A to ∀x Tet(x), and B to
∃y (Cube(y) ∧ FrontOf(b, y) ∧ ∃z Dodec(z)).
Then we replace the underlined sentences with the letters:
A → ¬B
This sentence is TT-possible, but not a tautology, and therefore so is our original
sentence.
Example 2
∃x Tet(x) → (∃y (Cube(y) ∧ FrontOf(y, b)) → ∃x Tet(x))
First, we underline the quantified sentences (shown here with brackets):
[∃x Tet(x)] → ([∃y (Cube(y) ∧ FrontOf(y, b))] → [∃x Tet(x)])
Then we attach sentence letters: A to both occurrences of the repeated sentence
∃x Tet(x), and B to ∃y (Cube(y) ∧ FrontOf(y, b)).
Then we replace the underlined sentences with the letters:
A → (B → A)
This sentence is a tautology, and therefore so is our original sentence.
Tautologies of FOL
A quantified sentence of FOL is said to be a tautology
if and only if its truth-functional form is a tautology.
Note that the same procedure can be applied to arguments as well as to individual
sentences. That is, we can apply it to any FOL argument to construct the truth-functional
form of the argument, and hence to determine whether its conclusion is a tautological
consequence of its premises. We'll call such valid arguments truth-table valid, or TT-valid,
for short.
Note that an argument may appear deceptively similar to a TT-valid argument even
though it is not TT-valid. For example:
∃x (Cube(x) → Small(x))
∃x Cube(x)
∃x Small(x)
This may look like modus ponens (→ Elim), but it is not. Its truth-functional form is
actually this:
A
B
C
So our original argument is not TT-valid. Indeed, it is not valid at all. (You can construct
a Tarski's World counterexample to it. If you're in doubt about what such a world would
look like, check these sentences in this world.)
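One such counterexample world can be sketched in Python (the two-block world and helper functions are invented for illustration; a large cube and a large tet suffice):

```python
# A hypothetical counterexample world for the argument above: both premises
# come out true but the conclusion comes out false.
world = [("cube", "large"), ("tet", "large")]

def cube(b):  return b[0] == "cube"
def small(b): return b[1] == "small"

premise1 = any((not cube(b)) or small(b) for b in world)  # ∃x (Cube(x) → Small(x))
premise2 = any(cube(b) for b in world)                    # ∃x Cube(x)
conclusion = any(small(b) for b in world)                 # ∃x Small(x)

print(premise1, premise2, conclusion)  # True True False
```

The large tet satisfies Cube(x) → Small(x) vacuously, so the first premise is true even though nothing in the world is small.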
10.2 First-order validity and consequence
A logical truth is one that is true in all possible circumstances; a valid argument is one whose
conclusion comes out true in every possible circumstance in which its premises all come out true.
In propositional logic, we were able to use truth-tables as a way of expressing more precisely the
notion of possible circumstances: a possible circumstance was represented as a row on a
truth-table.
But since there are valid arguments that are not TT-valid, and logical truths that are not tautologies,
we need a way to make the idea of possible circumstances more precise that goes beyond what
truth-tables provide.
That is, we need to provide a more precise account of what it is to be a first-order logical truth, a
first-order consequence, or a first-order equivalence.
Terminological point: we'll follow LPL in calling a first-order logical truth a first-order
validity, or FO validity, for short.
The general idea is this:
First-order validities (or consequences, or equivalences) are truths
(or consequences, or equivalences) solely in virtue of the truth-
functional connectives, the quantifiers, and the identity symbol.
This means that to determine whether a sentence is an FO validity (or an argument a case of FO
consequence, or a pair of sentences FO equivalent) we ignore the meanings of the names and
predicates they contain.
A convenient way of ignoring the meanings of names and predicates is just to replace them with
nonsense predicates (e.g., the predicates Tove, Slithy, Outgrabe, Borogove, etc., borrowed from
Lewis Carroll's poem Jabberwocky¹).
¹ For the full text of this marvelous poem, see www.jabberwocky.com/carroll/jabber/jabberwocky.html
Thus, we can see that the logical truth ∀x SameSize(x, x) is not an FO validity because when we
replace the predicate SameSize with the predicate Outgrabe, the resulting sentence,
∀x Outgrabe(x, x), cannot be guaranteed by logic to be true: its truth depends on the meaning
of Outgrabe.
On the other hand, we can see that ∀x Cube(x) → Cube(b) is an FO validity because the
nonsense sentence ∀x Tove(x) → Tove(b) is true no matter what Tove means.
Using nonsense predicates may be an illuminating device, but we need not resort to this. We can
simply replace predicates with predicate letters (and names with individual constants) and consider
these letters to be open to interpretation in any way we wish. (That is, we can take its individual
constants to be names of any objects we like, and its predicate letters to stand for any properties we
like.) This leads to the replacement method of pp. 270-71.
Replacement method
1. Replace all names with individual constants and all predicates with predicate letters
(maintaining the arity, of course); if a predicate (or a name) is repeated, use the same
letter to replace all of its occurrences.
2. To see whether a sentence is an FO validity, try to describe a circumstance, and an
interpretation of the predicate letters and individual constants, in which the sentence
is false. If there is none, the sentence is an FO validity.
3. To see whether S is an FO consequence of P1, …, Pn, try to describe a circumstance
and an interpretation under which S is false and all of P1, …, Pn are true. If there is
none, S is an FO consequence of P1, …, Pn.
This method is used on the example on pp. 269-70. Study it carefully! (Exercise: can you
provide a Tarski's World counterexample for the argument-form obtained by the replacement
method on this example? You should be able to do this.)
Using the notion of interpretation that we have just introduced, we can define FO validity and
FO consequence as follows:
A sentence S is an FO validity iff it comes out true on every interpretation.
A sentence S is an FO consequence of sentences P1, …, Pn iff there is no
interpretation under which all of P1, …, Pn come out true and S comes out
false.
To show that a sentence is not an FO validity, then, you need to provide an interpretation on
which it comes out false. You can often use Tarski's World to do this, but sometimes Tarski's
World will not be able to provide the required interpretation. We will be looking at examples
of this in subsequent chapters.
Summary
1. If S is a tautology, then S is an FO validity (but not conversely). And if S is an FO
validity, then S is a logical truth (but not conversely).
2. If S is a tautological consequence of premises P1, …, Pn, then S is an FO
consequence of P1, …, Pn (but not conversely). And if S is an FO consequence of
P1, …, Pn, then S is a logical consequence of P1, …, Pn (but not conversely).
The Euler diagram on p. 272 depicts these relationships. Study it carefully.
10.3 First-order equivalence and DeMorgan's laws
The two sentences:
1. ¬(∃x Cube(x) ∧ ∀y Dodec(y))
2. ¬∃x Cube(x) ∨ ¬∀y Dodec(y)
are tautologically equivalent. Indeed, their equivalence is an instance of DeMorgan's laws.
The two sentences:
3. ∀x ¬(Cube(x) ∧ Large(x))
4. ∀x (¬Cube(x) ∨ ¬Large(x))
are also equivalent, but they are not tautologically equivalent. (Apply the truth-functional form
algorithm to this pair if that point is not clear.)
The difference is that in (1) and (2), DeMorgan's Laws are applied to a pair of sentences, whereas
in (3) and (4), we appear to be applying DeMorgan's Laws to a pair of wffs that are not sentences.
But how can we say that ¬(Cube(x) ∧ Large(x)) and ¬Cube(x) ∨ ¬Large(x) are equivalent,
when they are not even sentences? We need to extend the notion of equivalence to wffs containing
free variables.
Logically equivalent wffs
Here is our definition of logically equivalent wffs with free variables:
A pair of wffs with free variables are logically equivalent if, in any
possible circumstance, they are satisfied by the same objects.
And it is easy to see that our two wffs above satisfy this condition. The objects satisfying
¬(Cube(x) ∧ Large(x)) are those that are not large cubes; and the ones satisfying
¬Cube(x) ∨ ¬Large(x) are those that are either not cubes or not large, i.e., those that are not
large cubes.
Note that if, within a given sentence, we substitute one logically equivalent wff for another,
the resulting sentence will be equivalent to the original. Hence, (3) and (4) are equivalent
because (4) can be obtained from (3) by replacing one component wff with another equivalent
wff.
Caveat: in the definition above of equivalence for wffs with free variables, we are
assuming that the two wffs contain the same free variable (e.g., they both have x free, or
both have y free, etc.). Otherwise, we would confront the problem that our definition
would count Cube(x) as equivalent to Cube(y). But it is not so clear that we would
want to do this, given that we don't normally require different variables to pick out the
same object. And if we allow x and y to pick out different objects, the biconditional
Cube(x) ↔ Cube(y) might not always come out true. And how can there be an
equivalence whose corresponding biconditional can come out false?
DeMorgan laws for quantifiers
Note the connection between ∀ and ∧: in a world of four objects, a, b, c, and d, the two
sentences
Cube(a) ∧ Cube(b) ∧ Cube(c) ∧ Cube(d)
∀x Cube(x)
will always agree in truth-value. We have a similar connection between ∃ and ∨: in a world
like the one above, the two sentences
Cube(a) ∨ Cube(b) ∨ Cube(c) ∨ Cube(d)
∃x Cube(x)
will always agree in truth-value.
So we would expect there to be first-order equivalences for the quantifiers that are
counterparts to the DeMorgan equivalences of propositional logic. And indeed there are. Just
as these sentences are equivalent:
¬(Cube(a) ∧ Cube(b) ∧ Cube(c) ∧ Cube(d))
¬Cube(a) ∨ ¬Cube(b) ∨ ¬Cube(c) ∨ ¬Cube(d)
So are these:
¬∀x Cube(x)
∃x ¬Cube(x)
Hence, we can state the DeMorgan laws for quantifiers (also known as the
quantifier/negation equivalences):
¬∀x P(x) ⇔ ∃x ¬P(x)
¬∃x P(x) ⇔ ∀x ¬P(x)
Combining laws and equivalences
We can combine the DeMorgan laws for quantifiers and various other equivalent wffs to set
up some illuminating chains of equivalences.
A is equivalent to ¬O
∀x (P(x) → Q(x)) ⇔ ∀x (¬P(x) ∨ Q(x))
⇔ ∀x (¬P(x) ∨ ¬¬Q(x))
⇔ ∀x ¬(P(x) ∧ ¬Q(x))
⇔ ¬∃x (P(x) ∧ ¬Q(x))
I is equivalent to ¬E
∃x (P(x) ∧ Q(x)) ⇔ ¬∀x ¬(P(x) ∧ Q(x))
⇔ ¬∀x (¬P(x) ∨ ¬Q(x))
⇔ ¬∀x (P(x) → ¬Q(x))
This last chain shows, in effect, that the two FOL forms of E are equivalent.
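The endpoints of such a chain can be spot-checked mechanically. Here is a Python sketch verifying that the A form and ¬O agree on every small world (objects are encoded as (P, Q) pairs of truth values; the encoding is invented for illustration):

```python
# Checking that ∀x (P(x) → Q(x)) and ¬∃x (P(x) ∧ ¬Q(x)) agree on all
# non-empty worlds of up to three objects.
from itertools import product

for size in (1, 2, 3):
    for world in product(product([True, False], repeat=2), repeat=size):
        a_form = all((not p) or q for (p, q) in world)     # ∀x (P(x) → Q(x))
        not_o  = not any(p and not q for (p, q) in world)  # ¬∃x (P(x) ∧ ¬Q(x))
        assert a_form == not_o
print("A and not-O agree on all worlds of size <= 3")
```

The same loop, with the bodies swapped, would check the I/¬E chain.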
10.4 Other quantifier equivalences and non-equivalences
There are a number of other important quantifier equivalences to be aware of. There are also some
important pseudo-equivalences to be wary of: non-equivalences that appear deceptively like
equivalences. We list both kinds here.
Distributing ∀ through ∧
∀x (P(x) ∧ Q(x)) ⇔ ∀x P(x) ∧ ∀x Q(x)
Distributing ∃ through ∨
∃x (P(x) ∨ Q(x)) ⇔ ∃x P(x) ∨ ∃x Q(x)
Non-equivalences to beware of
Beware of the following non-equivalences:
∀x (P(x) ∨ Q(x)) ⇎ ∀x P(x) ∨ ∀x Q(x)
∃x (P(x) ∧ Q(x)) ⇎ ∃x P(x) ∧ ∃x Q(x)
Notice that you can distribute ∀ through ∧, and you can distribute ∃ through ∨, but you
cannot distribute ∀ through ∨ or ∃ through ∧. If you are in any doubt about these last two
non-equivalences, try problems 10.24 and 10.27. Be sure you understand why the non-
equivalent pairs are not equivalent.
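A single two-object world witnesses both non-equivalences. A Python sketch (the world and its arbitrary predicates P and Q are made up for illustration):

```python
# A hypothetical two-object world: one object is P-only, the other Q-only.
world = [{"P": True, "Q": False}, {"P": False, "Q": True}]

# ∀x (P(x) ∨ Q(x)) is true, but ∀x P(x) ∨ ∀x Q(x) is false:
lhs1 = all(o["P"] or o["Q"] for o in world)
rhs1 = all(o["P"] for o in world) or all(o["Q"] for o in world)
print(lhs1, rhs1)  # True False

# ∃x P(x) ∧ ∃x Q(x) is true, but ∃x (P(x) ∧ Q(x)) is false:
lhs2 = any(o["P"] and o["Q"] for o in world)
rhs2 = any(o["P"] for o in world) and any(o["Q"] for o in world)
print(lhs2, rhs2)  # False True
```

Each object covers the disjunction, but no single object covers everything at once; that is exactly the gap the parentheses mark.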
Null quantification
In the following examples, P represents any wff in which x does not occur free.
∀x P ⇔ P
∃x P ⇔ P
∀x (P ∨ Q(x)) ⇔ P ∨ ∀x Q(x)
∃x (P ∧ Q(x)) ⇔ P ∧ ∃x Q(x)
The last two might be thought of as providing limited distribution of ∀ through ∨ and ∃
through ∧. (For an example of the last one, see problem 10.28.) The next four "null
quantification over →" equivalences are not discussed in LPL, although they are listed in
some exercises on p. 315. The third and fourth equivalences are tricky: they appear not to be
equivalent, so study them carefully.
Null quantification over →
∀x (P → Q(x)) ⇔ P → ∀x Q(x)
∃x (P → Q(x)) ⇔ P → ∃x Q(x)
∀x (Q(x) → P) ⇔ ∃x Q(x) → P
∃x (Q(x) → P) ⇔ ∀x Q(x) → P
More non-equivalences to beware of
∀x (Q(x) → P) ⇎ ∀x Q(x) → P
∃x (Q(x) → P) ⇎ ∃x Q(x) → P
These last two pseudo-equivalences are easy to miss: the parentheses indicate the crucial
differences in the scope of the quantifiers.
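The tricky third and fourth equivalences, with the quantifier flipping as it moves out of the antecedent, can also be confirmed by brute force. A Python sketch (the boolean encoding of worlds is a made-up device for illustration):

```python
# Checking the tricky pair: when P contains no free x,
# ∀x (Q(x) → P) ⇔ ∃x Q(x) → P, and ∃x (Q(x) → P) ⇔ ∀x Q(x) → P.
from itertools import product

for P in (True, False):                  # the truth value of the x-free wff P
    for size in (1, 2, 3):
        for Q in product([True, False], repeat=size):  # Q(x) for each object
            assert all((not q) or P for q in Q) == ((not any(Q)) or P)
            assert any((not q) or P for q in Q) == ((not all(Q)) or P)
print("null quantification over the conditional holds on all worlds of size <= 3")
```

Intuitively: ∀x (Q(x) → P) fails only when some Q-object exists while P is false, which is exactly when ∃x Q(x) → P fails.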
Replacing bound variables
In the next examples, P(x) is any wff and y is any variable that does not occur in P(x):
∀x P(x) ⇔ ∀y P(y)
∃x P(x) ⇔ ∃y P(y)
What these equivalences tell you, in effect, is that it does not matter which variable you use in
a universal or existential generalization. Systematically rewriting the bound variables does not
change the meaning of the sentence.
Exercises with chains of equivalence
Two of the more puzzling equivalence claims we encountered above were the last two "null
quantification over →" equivalences:
∀x (Q(x) → P) ⇔ ∃x Q(x) → P
∃x (Q(x) → P) ⇔ ∀x Q(x) → P
To convince yourself that these two equivalence claims are correct, construct for each of them
a chain of equivalences that establishes its correctness, making use of other equivalence
claims that seem more intuitively obvious. Model your chains on those constructed above in
10.3.
In your chains you should make use of the following equivalences (which I hope are familiar
by now): null quantification over ∨, DeMorgan's laws for quantifiers, definition of → in
terms of ¬ and ∨, and the equivalence of P ∨ Q and Q ∨ P. Here is what these two equivalence
chains will look like; just fill in the missing steps:
∀x (Q(x) → P) ⇔ ?
⇔ ?
⇔ ?
! "x Q(x) # P
"x (Q(x) # P) ! ?
! ?
! ?
! $x Q(x) # P
Chapter 11: Multiple Quantifiers
11.1 Multiple uses of a single quantifier
We begin by considering sentences in which there is more than one quantifier of the same
quantity, i.e., sentences with two or more existential quantifiers, and sentences with two or
more universal quantifiers. Only later will we consider the more difficult cases of mixed
quantifiers.
Avoid prenex form
The examples on pp. 289-90 are instructive. In both cases, we see that there are two
equivalent FOL sentences that adequately translate the same English sentence:
Some cube is left of a tetrahedron.
∃x ∃y [Cube(x) ∧ Tet(y) ∧ LeftOf(x, y)]
∃x [Cube(x) ∧ ∃y (Tet(y) ∧ LeftOf(x, y))]
Every cube is left of every tetrahedron.
"x "y [(Cube(x) Tet(y)) # LeftOf(x, y)]
"x [Cube(x) # "y (Tet(y) # LeftOf(x, y))]
The first FOL sentence in each case has all the quantifiers out in front, in prenex form, as
logicians say. But there is an advantage to using the second FOL sentence, with one of the
quantifiers embedded. For this way of translating English into FOL makes clearer the overall
Aristotelian structure of the sentence, and hence such an FOL translation will be easier to
come by in a systematic way.
The overall Aristotelian structure becomes clear if we treat each of the phrases "left of a
tetrahedron" and "left of every tetrahedron" as a single, indissoluble, unit:
left-of-a-tetrahedron     left-of-every-tetrahedron
and represent each as a single FOL predicate, say G and H, respectively. In that case, we can
think of our original sentences as:
Some cube is G. Every cube is H.
And it is easy to translate these into FOL as, respectively:
∃x [Cube(x) ∧ G(x)]     ∀x [Cube(x) → H(x)]
Our next task is to replace the temporary wffs G(x) and H(x) with proper FOL wffs. Since
G(x) represents "x is left of a tetrahedron" and H(x) represents "x is left of every tetrahedron",
we must translate these into wffs of FOL. In so doing, we must be sure that in each case our
translation contains a free occurrence of x, and hence is not a sentence. (Remember, a wff
with a free occurrence of a variable is not a sentence.) But if we ignore the fact that these wffs
are not sentences, we will recognize their forms as familiar Aristotelian ones.
x is left of a tetrahedron
Some tetrahedron has x to its left
∃y (y is a tetrahedron and x is to the left of y)
∃y (Tet(y) ∧ LeftOf(x, y))
x is left of every tetrahedron
Every tetrahedron has x to its left
∀y (if y is a tetrahedron, then x is to the left of y)
∀y (Tet(y) → LeftOf(x, y))
Notice that in both cases, we chose a new variable, y, for our new quantifier. (We did this to
keep our occurrences of x free.) Now we simply replace G(x) with ∃y (Tet(y) ∧ LeftOf(x, y))
and H(x) with ∀y (Tet(y) → LeftOf(x, y)), and our FOL translations are complete:
∃x [Cube(x) ∧ ∃y (Tet(y) ∧ LeftOf(x, y))]
∀x [Cube(x) → ∀y (Tet(y) → LeftOf(x, y))]
The moral of this story is to translate complex sentences into FOL by first figuring out their
overall structure (usually an Aristotelian form) and then replacing the embedded wffs with
more complex wffs containing quantifiers. If you do this, you will find that you seldom
produce an FOL sentence in prenex form.
Multiple quantifiers don't guarantee multiple objects
It is tempting to read ∃x ∃y as saying "there are two objects, x and y". But this would be a
mistake, for the variables x and y may pick out the same object. To see why this is so, open
Tarski's World and write ∃x ∃y (Cube(x) ∧ Cube(y)) in a new sentence file.
Next, create a new world with a single cube in it. Then try playing the game committed to
false. Do you see why you can't win? Tarski will name the one cube n1 and will pick it as the
value for both x and y. You will end up committed to the falsity of Cube(n1) ∧ Cube(n1),
which is a losing position.
In other words, just as the truth of Cube(a) ∧ Cube(b) does not guarantee that there is more
than one cube, neither does the truth of the quantified sentence ∃x ∃y (Cube(x) ∧ Cube(y))
guarantee this. For just as a and b may name the same object, so too may the quantifiers ∃x
and ∃y pick out the same object. In fact, the FOL sentence ∃x ∃y x = y is a logical truth! In
every (non-empty) world, there is sure to be some object satisfying the condition ∃y x = y
(that is, the condition of being identical to something), since we can always pick the same
object as the value for both x and y. Some object is identical to something, since some object
is identical to itself. That is, ∃x x = x logically implies ∃x ∃y x = y.
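The same point can be replayed outside Tarski's World with a tiny hand-rolled model. This is an illustration of mine, not part of LPL; the block name n1 simply mirrors the name Tarski's World would assign.

```python
# A one-block world: the domain contains a single block, a cube named n1.
world = ["n1"]
cubes = {"n1"}

def Cube(x):
    return x in cubes

# ∃x ∃y (Cube(x) ∧ Cube(y)): true even though there is only one cube,
# because x and y may pick out the same object.
print(any(Cube(x) and Cube(y) for x in world for y in world))  # True

# ∃x ∃y x = y is true in this (and every non-empty) world for the same reason.
print(any(x == y for x in world for y in world))  # True
```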
11.2 Mixed quantifiers
We now consider sentences with multiple quantifiers in which the quantifiers are mixed:
some universal and some existential.
A simple Aristotelian form
Consider a slight variation on an example we looked at above:
Every cube is left of a tetrahedron.
This clearly has an Aristotelian form, ∀x (P(x) → Q(x)), where P(x) means "x is a cube" and
Q(x) means "x is left of a tetrahedron". Earlier, we saw that we could translate the wff "x is left
of a tetrahedron" as ∃y (Tet(y) ∧ LeftOf(x, y)), so we just plug that in here for Q(x). The
result is this FOL sentence:
∀x [Cube(x) → ∃y (Tet(y) ∧ LeftOf(x, y))]
Copyright 2004, S. Marc Cohen Revised 11/21/04
11-3
Note, by the way, that the embedded wff ∃y (Tet(y) ∧ LeftOf(x, y)) is itself of the
Aristotelian form I: Some tetrahedron has x to its left. So our translation has the overall
structure of an Aristotelian A sentence with an I wff embedded inside it as the consequent of
the conditional.
Order of quantifiers
When quantifiers in the same sentence are of the same quantity (all universal or all
existential), the order in which they occur does not matter. But when they are mixed, the order
in which they occur becomes crucial. Consider these examples:
"x "y Likes(x, y) # "y "x Likes(x, y)
!x !y Likes(x, y) # !y !x Likes(x, y)
These are clearly equivalent pairs. The first pair contains two different ways of saying
everyone likes everyone. The second contains two different ways of saying someone likes
someone.
Now consider this mixed quantifier case:
"x !y Likes(x, y) # !y "x Likes(x, y)
Clearly these are not equivalent sentences. The one on the left says (very plausibly) that
everyone likes someone (or other), but allows for the possibility that different people have
different likes: I like Edgar Martinez, you like Ken Griffey, Jr., Madonna likes herself, etc.
The one on the right, however, says something much stronger: it says that there is at least
one person so well liked that everyone likes him or her. (It's very unlikely that there is such a
person, and so very unlikely that the sentence on the right is true.)
Notice that the stronger sentence (on the right) logically implies the weaker one (on the left).
In general, an ∃∀ sentence logically implies its ∀∃ counterpart. (We will return to these
stronger/weaker pairs later in this chapter.)
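This implication claim can be spot-checked by enumerating every Likes relation on a two-person domain. The sketch is my own (the text's claim is fully general; this check covers only the finite case):

```python
from itertools import product

people = [0, 1]
pairs = list(product(people, repeat=2))

def stronger(likes):
    # ∃y ∀x Likes(x, y): someone is liked by everyone
    return any(all(likes[(x, y)] for x in people) for y in people)

def weaker(likes):
    # ∀x ∃y Likes(x, y): everyone likes someone (or other)
    return all(any(likes[(x, y)] for y in people) for x in people)

relations = [dict(zip(pairs, bits))
             for bits in product((False, True), repeat=len(pairs))]

# Every relation satisfying the ∃∀ sentence also satisfies its ∀∃ counterpart:
assert all(weaker(r) for r in relations if stronger(r))

# But not conversely: suppose each person likes themselves and no one else.
r = {(0, 0): True, (0, 1): False, (1, 0): False, (1, 1): True}
assert weaker(r) and not stronger(r)
print("the ∃∀ sentence implies its ∀∃ counterpart, but not conversely")
```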
For a more dramatic contrast, consider this pair of sentences:
"x !y x = y # !y "x x =y
Again, these are not equivalent. The one on the left is a logical truth; it says everything is
identical to something. The one on the right says there is something such that everything is
identical to that thing, and this comes very close to being logically false. (It is not logically
false, because there are at least some worlds in which it is true. Can you think of one? You
should be able to. If you cant, try constructing a Tarski World in which it comes out true.)
To cement your understanding of mixed quantifier sentences, do the "You try it" on p. 295.
11.3 The step-by-step method of translation
We have already encountered the step-by-step method of translation in our discussion of the
advisability of avoiding prenex form in 11.1. The trick is to start with the outer or gross
structure of the sentence, and then move inward. (For this reason, the step-by-step method is
sometimes called "paraphrasing inward".)
Let's try our hand at a fairly simple example:
Some cube that adjoins a dodecahedron is larger than every tetrahedron.
Copyright 2004, S. Marc Cohen Revised 11/21/04
11-4
The step-by-step procedure is outlined in the file Step-by-step1.sen, on the Supplementary
Exercises page. Here's a short description of the procedure:
First, find the gross structure of the sentence. In this case, it's one of the Aristotelian
forms, I: Some Ps are Qs, or ∃x (P(x) ∧ Q(x)). This gives us the overall form:
∃x (x is a cube that adjoins a dodecahedron ∧ x is larger than every tetrahedron).
Then isolate the embedded wffs:
x is a cube that adjoins a dodecahedron
x is larger than every tetrahedron
and translate those into FOL wffs with free x.
This yields these wffs:
Cube(x) ∧ ∃y (Dodec(y) ∧ Adjoins(x, y))
∀y (Tet(y) → Larger(x, y))
Finally, plug these wffs into our overall I form ∃x (P(x) ∧ Q(x)) in place of the two
conjuncts P(x) and Q(x). This yields our completed translation:
∃x [Cube(x) ∧ ∃y (Dodec(y) ∧ Adjoins(x, y)) ∧ ∀y (Tet(y) → Larger(x, y))]
To check that this translation is correct, open the file Step-by-step1.wld. The sentence weve
written should come out true in this world. Try making some changes to the world and confirm that
the resulting evaluation of our sentence is appropriate.
For example, move the dodecahedron away from the cube: the sentence should become false.
Next, put the dodecahedron back where it was, but make one of the tetrahedra larger: the sentence
should become false. Finally, make the tetrahedron small again, but shrink the cube: the sentence
should become false. If you do not get these results, your translation is incorrect.
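If you don't have the Tarski's World files handy, the same checks can be simulated with a toy model in code. The block names, sizes, and adjacencies below are invented for illustration; they are not the contents of Step-by-step1.wld.

```python
# A toy stand-in for a Tarski world: each block has a shape and a size,
# plus a symmetric adjacency relation. All specifics here are made up.
blocks = {
    "a": ("cube", 3),
    "b": ("dodec", 2),
    "c": ("tet", 1),
}
adjoins = {("a", "b"), ("b", "a")}

def Cube(x):  return blocks[x][0] == "cube"
def Dodec(x): return blocks[x][0] == "dodec"
def Tet(x):   return blocks[x][0] == "tet"
def Adjoins(x, y): return (x, y) in adjoins
def Larger(x, y):  return blocks[x][1] > blocks[y][1]

def sentence():
    # ∃x [Cube(x) ∧ ∃y (Dodec(y) ∧ Adjoins(x, y)) ∧ ∀y (Tet(y) → Larger(x, y))]
    return any(Cube(x)
               and any(Dodec(y) and Adjoins(x, y) for y in blocks)
               and all(Larger(x, y) for y in blocks if Tet(y))
               for x in blocks)

print(sentence())  # True in this world

# Move the dodecahedron away from the cube: the sentence becomes false.
adjoins.clear()
print(sentence())  # False
```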
Now let's attempt the difficult example mentioned on p. 298:
No cube to the right of a tetrahedron is to the left of a larger dodecahedron.
We can begin by determining the gross structure of the sentence. Is it an Aristotelian form? If so,
which? Clearly, it is an E sentence. Let us use our hyphenation technique to make this evident:
No cube-to-the-right-of-a-tetrahedron is to-the-left-of-a-larger-dodecahedron.
We then treat the hyphenated phrases as if they were simple predicates, and put the sentence into
its gross Aristotelian form:
"x (x is a cube-to-the-right-of-a-tetrahedron # x is to-the-left-of-a-larger-
dodecahedron)
Our next task is to translate the two embedded wffs. First, we tackle the antecedent, proceeding in
a step-by-step way:
x is a cube-to-the-right-of-a-tetrahedron
x is a cube ∧ x is to the right of a tetrahedron
x is a cube ∧ some tetrahedron has x to its right
x is a cube ∧ ∃y (y is a tetrahedron ∧ x is right of y)
Cube(x) ∧ ∃y (Tet(y) ∧ RightOf(x, y))
Next, the consequent:
x is to-the-left-of-a-larger-dodecahedron
Before we can begin to put this wff into FOL, we must decide what the dodecahedron is being said
to be larger than. There seem to be two possibilities: (1) a dodecahedron larger than x, and (2) a
dodecahedron larger than the tetrahedron mentioned in the antecedent. The sentence seems
genuinely ambiguous between these possibilities, although (1) seems more likely to my ears, so we
will go with that reading.
x is to the left of a dodecahedron that is larger than x
There is a dodecahedron that x is to the left of and that is larger than x
There is a dodecahedron such that x is to the left of it and it is larger than x
∃y (Dodec(y) ∧ LeftOf(x, y) ∧ Larger(y, x))
We now have our outer framework (the E sentence):
"x (P(x) # Q(x))
and the two wffs that will become its embedded antecedent and consequent. All that remains is to
assemble the pieceswe substitute our two wffs for P(x) and Q(x), respectively:
"x ((Cube(x) !y (Tet(y) RightOf(x, y))) # !y (Dodec(y) LeftOf(x, y)
Larger(y, x)))
And that's how the step-by-step method of translation works.
11.4 Paraphrasing English
There are times when the step-by-step method cannot be applied directly. This happens frequently
in cases in which the quantifier word "something" is used with universal force. Example:
If something is a cube, it is not a tetrahedron.
The tip-off that the "something" here is a universal quantifier is the occurrence of the pronoun "it" in
the consequent. This "it" functions in English as a variable, so it must be bound by a quantifier. But
the only quantifier around is the one in the antecedent. If we make it existential and include the
variable "it" in its scope, we would get:
There is something such that, if it is a cube, it is not a tetrahedron.
∃x (Cube(x) → ¬Tet(x))
But this sentence is too weak, as we've already seen, to say what the English sentence says. (The
existence of a single non-cube, for example, makes it true.) But if we restrict the scope of ∃x to the
antecedent, we get:
∃x Cube(x) → ¬Tet(x)
and this wff is not a sentence (the x in ¬Tet(x) is free). The step-by-step method seems to have
failed us.
What we must do, instead, is to paraphrase the original sentence in a way that gives the quantifier
large scope. When we do this, we see that the quantifier is actually universal:
If anything is a cube, it is not a tetrahedron.
Copyright 2004, S. Marc Cohen Revised 11/21/04
11-6
For anything you like, if it is a cube, it is not a tetrahedron.
No cube is a tetrahedron.
∀x (Cube(x) → ¬Tet(x))
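A quick finite check makes the "too weak" point concrete. The sketch below is my own: it contrasts the existential mis-translation with the universal translation over a two-object domain. (In Tarski's World a block cannot literally be both a cube and a tetrahedron; treat Cube and Tet here as arbitrary predicates.)

```python
D = [0, 1]

def too_weak(cube, tet):
    # ∃x (Cube(x) → ¬Tet(x)): the mis-translation
    return any((not cube[x]) or (not tet[x]) for x in D)

def intended(cube, tet):
    # ∀x (Cube(x) → ¬Tet(x)): what the English sentence says
    return all((not cube[x]) or (not tet[x]) for x in D)

# A world containing a single non-cube makes the existential version true,
# even though some cube is also a Tet in that world:
cube = {0: True, 1: False}
tet = {0: True, 1: False}
print(too_weak(cube, tet))   # True: object 1 satisfies Cube(x) → ¬Tet(x) vacuously
print(intended(cube, tet))   # False: object 0 is a cube that is a Tet
```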
Donkey sentences
The classic example of a so-called donkey sentence is this:
Every farmer who owns a donkey beats it.
The difficulty with such sentences is that they resemble ones in which the phrase "a donkey" is
properly treated as an existential quantifier. For example:
Every farmer who owns a donkey buys hay.
This goes into FOL straightforwardly as:
∀x ((Farmer(x) ∧ ∃y (Donkey(y) ∧ Owns(x, y))) → BuysHay(x))
Note that the scope of the existential quantifier stops at the end of the antecedent. If we try to
translate the classic donkey sentence this way, we get:
∀x ((Farmer(x) ∧ ∃y (Donkey(y) ∧ Owns(x, y))) → Beats(x, y))
and this wff is not a sentence, since the y in the consequent is free. We can see this by
translating the wff back into English:
Every farmer who owns a donkey beats y.
In order to have a sentence (a wff with no free variables) we must make sure that the y
variable in Beats(x, y) is bound by the quantifier (a donkey) in the antecedent. This means
we must paraphrase the original English sentence, perhaps in one of the following ways:
Any farmer who owns any donkey beats it.
Every farmer is such that any donkey he owns is beaten by him.
Every farmer beats every donkey he owns.
This makes clear that the original sentence contains two universal quantifiers:
∀x (Farmer(x) → ∀y ((Donkey(y) ∧ Owns(x, y)) → Beats(x, y)))
In LPL (p. 301), a slightly different (but equivalent) translation was obtained:
∀x (Donkey(x) → ∀y ((Farmer(y) ∧ Owns(y, x)) → Beats(y, x)))
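That our translation and the LPL translation really are equivalent can be confirmed by brute force over every model on a two-element domain. This check is my own illustration, not part of LPL:

```python
from itertools import product

D = [0, 1]
pairs = list(product(D, repeat=2))

def ours(farmer, donkey, owns, beats):
    # ∀x (Farmer(x) → ∀y ((Donkey(y) ∧ Owns(x, y)) → Beats(x, y)))
    return all((not farmer[x]) or
               all((not (donkey[y] and owns[(x, y)])) or beats[(x, y)]
                   for y in D)
               for x in D)

def lpl(farmer, donkey, owns, beats):
    # ∀x (Donkey(x) → ∀y ((Farmer(y) ∧ Owns(y, x)) → Beats(y, x)))
    return all((not donkey[x]) or
               all((not (farmer[y] and owns[(y, x)])) or beats[(y, x)]
                   for y in D)
               for x in D)

unary = [dict(zip(D, bits)) for bits in product((False, True), repeat=2)]
binary = [dict(zip(pairs, bits)) for bits in product((False, True), repeat=4)]

for farmer in unary:
    for donkey in unary:
        for owns in binary:
            for beats in binary:
                assert ours(farmer, donkey, owns, beats) == \
                       lpl(farmer, donkey, owns, beats)
print("equivalent in all", len(unary) ** 2 * len(binary) ** 2, "models")
```

Both translations amount to ∀x ∀y ((Farmer(x) ∧ Donkey(y) ∧ Owns(x, y)) → Beats(x, y)), which is why the check succeeds.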
11.5 Ambiguity and context sensitivity
Sentences containing both universal and existential quantifiers can be ambiguous, depending on
the scope the quantifiers receive. Here's an example:
Some man has been calling Becky every hour.
When the existential quantifier is given wide scope, we get what is called the strong reading:
∃x (Man(x) ∧ ∀y (Hour(y) → Calls(x, becky, y)))
This FOL sentence suggests that Becky is being harassed by a single persistent (and unwanted)
caller. On the other hand, if we take the English sentence to mean merely that Becky is popular,
and has been receiving calls from many different interested gentlemen, the right way to put it
would be this (the weak reading):
Copyright 2004, S. Marc Cohen Revised 11/21/04
11-7
∀y (Hour(y) → ∃x (Man(x) ∧ Calls(x, becky, y)))
The weak reading is a logical consequence of the strong reading, but not conversely.
In other cases, the context makes the weak reading obviously the intended one. Consider the
following sentence (attributed to the showman P. T. Barnum):
There's a sucker born every minute.
The strong reading here is obviously inappropriate:
∃x (Sucker(x) ∧ ∀y (Minute(y) → BornAt(x, y)))
The trouble with this FOL translation is that it says that some unfortunate individual has the
property of being born (again, and again) at each and every minute. What the original sentence
obviously intended was the weaker claim, that no matter what minute you pick, some sucker is
being born then (a different sucker at each succeeding minute, of course, since each of us is born
only once). Here's the FOL version of the intended (weak) reading:
∀y (Minute(y) → ∃x (Sucker(x) ∧ BornAt(x, y)))
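Here too the weak reading is a logical consequence of the strong reading, and not conversely; this can be verified over small finite models. The sketch below is mine, using the Barnum predicates:

```python
from itertools import product

D = [0, 1]  # a tiny domain standing in for both suckers and minutes
pairs = list(product(D, repeat=2))

def strong(sucker, minute, born):
    # ∃x (Sucker(x) ∧ ∀y (Minute(y) → BornAt(x, y)))
    return any(sucker[x] and all((not minute[y]) or born[(x, y)] for y in D)
               for x in D)

def weak(sucker, minute, born):
    # ∀y (Minute(y) → ∃x (Sucker(x) ∧ BornAt(x, y)))
    return all((not minute[y]) or any(sucker[x] and born[(x, y)] for x in D)
               for y in D)

models = [(dict(zip(D, s)), dict(zip(D, m)), dict(zip(pairs, b)))
          for s in product((False, True), repeat=2)
          for m in product((False, True), repeat=2)
          for b in product((False, True), repeat=4)]

# Every model of the strong reading is a model of the weak reading...
assert all(weak(s, m, b) for s, m, b in models if strong(s, m, b))
# ...but some models satisfy the weak reading without the strong one.
assert any(weak(s, m, b) and not strong(s, m, b) for s, m, b in models)
print("weak follows from strong, but not conversely")
```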
The Doris Day principle
In our next example, there are multiple sources of ambiguity: not just the scope of the
quantifiers, but their quantity. Here's the example:
Everybody loves a lover.
Only four words, but a mare's nest of ambiguity! First, there is the order of the quantifiers:
does "everybody" have wide scope, or does "a lover" have wide scope? Second, there is the
quantity of the quantifiers: is "a lover" an existential quantifier ("some lover") or universal
("every lover")? We'll begin with those two questions, but as we'll see later, there's yet a
further possible source of ambiguity.
Quantity
Does "a lover" here mean "some lover" or "every lover"? Without a context, it's hard to tell,
so we'll have to keep both options open.
Order
Which of the two quantifiers has wide scope? Again, it seems we'll have to keep both
options open. This would seem to give us, at least in the abstract, four possibilities. We
can represent them (temporarily) in the following slightly unorthodox way:
1. ∃ lover y ∀ person x : x loves y
2. ∀ person x ∃ lover y : x loves y
3. ∀ lover y ∀ person x : x loves y
4. ∀ person x ∀ lover y : x loves y
Since (3) and (4) do not involve mixed quantifiers, they are clearly equivalent. (3) says that
every lover is loved by every person, and (4) says that every person loves every lover. So we
only need to consider one of them; we'll drop (4) from consideration. But the other three are
still in the running.
Copyright 2004, S. Marc Cohen Revised 11/21/04
11-8
(1) says that there is some lover, y, such that everyone loves y. (This might have been true
back in the early days of motion pictures: Rudolph Valentino was a lover, and everybody
loved him.)
(2) says that for each person, x, there is a lover, y, such that x loves y. (This leaves open the
possibility, which (1) does not, that different people might love different lovers: e.g., Julia
Roberts is a lover, and Brad Pitt is a lover, and I love Julia (but I don't love Brad), and you
love Brad (but not Julia), etc.)
(3) says that every lover is loved by everyone. This seems to have been the original intention
of the poet Ralph Waldo Emerson when he wrote "Here's to the happy man: All the world
loves a lover." That is, no matter who you are, all you have to do is to be a lover, and
everyone will love you.
So (3) seems to be the favored reading of this potentially ambiguous sentence. It certainly is
the correct reading in the context in which I first ran across it, which was in a song that Doris
Day made popular. (It rose to #6 on the charts in 1958, and got a Grammy nomination.) The
song begins:
"Everybody loves a lover, I'm a lover, everybody loves me. And I love everybody, since
I fell in love with you."
Doris seems to be advancing an argument here, with two conclusions:
Everybody loves a lover
Doris is a lover.
Everybody loves Doris.
Doris loves everybody.
Charity demands that we interpret the argument as valid, if we can. And this argument is valid
only if we interpret the ambiguous first premise as meaning (3). So that is its likely meaning
in this context. (Exercise: can you explain why the argument would be invalid if the first
premise is interpreted as (1) or (2)?)
Hence, our preferred reading of our ambiguous sentence is:
∀ person x ∀ lover y : x loves y
This, of course, is not an FOL sentence. But it is easy to see how to put it into FOL. For it says
that no matter which objects x and y we take, if x is a person and y is a lover, then x loves y.
That is:
∀x ∀y ((x is a person ∧ y is a lover) → x loves y)
If we take the domain of discourse to be restricted to persons we can simply drop the conjunct
"x is a person". So we can put this into FOL as:
∀x ∀y (Lover(y) → Loves(x, y))
We must now consider one final potential source of ambiguity: the predicate is a lover. What,
exactly, does this mean? It seems clear that we should be able to express the meaning of the
unary predicate Lover(y) in terms of the binary predicate Loves(y, z). But how should we
do this? The following seems to me to be correct:
Lover(y) =df ∃z Loves(y, z)
But I can imagine a case being made for one of the following, among others:
Copyright 2004, S. Marc Cohen Revised 11/21/04
11-9
Lover(y) =df ∀z Loves(y, z)
Lover(y) =df ∃z (Loves(y, z) ∧ Loves(z, y))
Lover(y) =df ∃z Loves(y, z) ∧ ∃z Loves(z, y)
The first option seems best: to be a lover is simply to love someone. In its favor is that it,
alone, passes Grice's cancellability test. (Roughly: you can be a lover without either loving
everyone, or being loved by someone you love; or even being loved by anyone at all, but you
cannot be a lover without loving someone.)
So we can replace Lover(y) with ∃z Loves(y, z), and come up with our FOL version of what
I'll call the Doris Day principle:
∀x ∀y (∃z Loves(y, z) → Loves(x, y))
So much for the translation issue. We will revisit the Doris Day principle in Chapter 13,
where it will figure in some proofs.
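In the meantime, the principle can at least be evaluated in toy models. The two-person model below is my own illustration; the names and the loves relation are invented.

```python
# A tiny model for the Doris Day principle.
people = ["doris", "someone"]
loves = {("doris", "someone")}  # Doris loves someone; no one loves Doris

def Loves(a, b):
    return (a, b) in loves

def Lover(y):
    # Lover(y) =df ∃z Loves(y, z)
    return any(Loves(y, z) for z in people)

def doris_day_principle():
    # ∀x ∀y (∃z Loves(y, z) → Loves(x, y))
    return all((not Lover(y)) or Loves(x, y)
               for x in people for y in people)

print(doris_day_principle())  # False: Doris is a lover, yet not everybody loves her

loves.clear()  # in a loveless world the principle holds vacuously
print(doris_day_principle())  # True
```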
11.7 Prenex Form
When we started doing translations involving multiple quantifiers (§11.1), I warned you that when
doing translations, it is best to avoid prenex form, i.e., placing all quantifiers at the beginning of
the FOL sentence. The reason was that attempting to produce such sentences was likely to lead to
translation errors.
But it turns out that for many purposes, it is advantageous to have an FOL sentence in prenex form.
Furthermore, every FOL sentence has an equivalent sentence (in fact, many equivalent sentences) in
prenex form. In this section, we discuss methods for putting sentences into prenex form. But first,
let's refresh ourselves on why trying to put sentences directly into prenex form is likely to
error.
Pitfalls of going directly to prenex
Here's a fairly simple example of a sentence not in prenex form:
1. ∀x Cube(x) → ∀y Large(y)
If we simply pull all the quantifiers to the outside, we would produce this:
2. ∀x ∀y (Cube(x) → Large(y))
(2) is in prenex form, but it is not equivalent to (1). If you are in any doubt about this, try
evaluating the two sentences in a Tarski world. You will find it easy to get them to disagree in
truth value.
Converting to prenex form
To convert (1) to prenex form, we must remember these equivalences that we learned in
Chapter 10:
3. "x (Q(x) # P) $ !x Q(x) # P
4. !x (P # Q(x)) $ P # !x Q(x)
Remember, these equivalences require that P is either a sentence or a wff containing no free
occurrence of x.
First, we apply equivalence (3) to sentence (1) and obtain:
5. "x (Cube(x) # !y Large(y))
What we did was to pull the universal quantifier off of the antecedent and change it to an
existential quantifier whose scope is the entire conditional. Next, we will apply equivalence
(4) to sentence (5) and obtain:
6. !x "y (Cube(x) # Large(y))
Here we simply moved the universal quantifier, ∀y, from the consequent to the entire
conditional. Note that in applying (4) to (5), P is the wff Cube(x), which contains no free
occurrences of y, the variable in the exported quantifier.
Comparing (6) with (2), you can see the difference: (2) begins ∀x, while (6) begins ∃x, and
otherwise they are identical. It should be obvious, therefore, that they are not equivalent.
Rules for conversion to prenex
To convert an FOL sentence to prenex form, we make use of these equivalences that we
learned in chapter 10:
DeMorgan laws for quantifiers
Distributing " through
Distributing ! through
Null quantification
Null quantification over #
Replacing bound variables
In addition, we will need to use some of the handy truth-functional equivalences we learned in
§8.1, especially to get rid of biconditionals:
The biconditional conjunction equivalence
P ↔ Q ⇔ (P → Q) ∧ (Q → P)
The biconditional disjunction equivalence
P ↔ Q ⇔ (P ∧ Q) ∨ (¬P ∧ ¬Q)
The general strategy is to work from the inside out, moving quantifiers outward so that they
get larger scope. Since all of our quantifiers will appear at the beginning of our ultimate
sentence, we must be sure that no quantifier gets reused (e.g., we cannot have both ∀x and
∃x); each time we have a quantifier that repeats a variable, we will have to change to a new
variable.
We will definitely need to get rid of biconditionals, and it is sometimes useful to get rid of
conditionals, as well. The procedure is best illustrated by examples, to which we now turn.
Example #1
We'll start with a simple example:
∀x Cube(x) ∧ ¬∃x Tet(x)
The strategy will be to drive the negation sign through the quantifier ∃x, converting it to ∀x
(appealing to DeMorgan laws for quantifiers), then rewrite the second quantifier with a new
variable, y (replacing bound variables), then pull the quantifiers to the outside (null
quantification). We'll do this one step at a time.
Copyright 2004, S. Marc Cohen Revised 11/21/04
11-11
∀x Cube(x) ∧ ¬∃x Tet(x)
∀x Cube(x) ∧ ∀x ¬Tet(x)
∀x Cube(x) ∧ ∀y ¬Tet(y)
∀x (Cube(x) ∧ ∀y ¬Tet(y))
∀x ∀y (Cube(x) ∧ ¬Tet(y))
Notice that we might have performed the last two steps (pulling out the universal quantifiers)
in reverse order. If we had, we would have ended up with this (equivalent) prenex form:
∀y ∀x (Cube(x) ∧ ¬Tet(y))
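You can confirm that the conversion preserved meaning by enumerating every interpretation of Cube and Tet over a small domain. This check is my own illustration, not part of LPL:

```python
from itertools import product

D = [0, 1]

def original(cube, tet):
    # ∀x Cube(x) ∧ ¬∃x Tet(x)
    return all(cube[x] for x in D) and not any(tet[x] for x in D)

def prenex(cube, tet):
    # ∀x ∀y (Cube(x) ∧ ¬Tet(y))
    return all(cube[x] and not tet[y] for x in D for y in D)

interps = [dict(zip(D, bits)) for bits in product((False, True), repeat=2)]
for cube in interps:
    for tet in interps:
        assert original(cube, tet) == prenex(cube, tet)
print("the prenex form agrees with the original in every interpretation")
```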
Example #2
Sentences containing biconditionals are exceptionally tricky to put into prenex form. Here's
one that in its non-prenex form is very easy to understand:
∃x Cube(x) ↔ Tet(b)
This says that a cube exists if, and only if, b is a tetrahedron. Now let's put it into prenex
form. We'll proceed in a step-by-step fashion, applying the equivalences mentioned above.
∃x Cube(x) ↔ Tet(b)
(∃x Cube(x) → Tet(b)) ∧ (Tet(b) → ∃x Cube(x))   bicond. conj.
∀x (Cube(x) → Tet(b)) ∧ ∃x (Tet(b) → Cube(x))   null quant. over →
∀x (Cube(x) → Tet(b)) ∧ ∃y (Tet(b) → Cube(y))   replace bound vbl.
∀x ((Cube(x) → Tet(b)) ∧ ∃y (Tet(b) → Cube(y)))   dist. ∀ thru ∧
∀x ∃y ((Cube(x) → Tet(b)) ∧ (Tet(b) → Cube(y)))   dist. ∃ thru ∧
Our sentence is now in prenex form. But notice the price: although the quantifiers are all out
in front, the FOL sentence is hard to understand when compared to the original.
Note that we would end up with a different, but still equivalent, prenex form if we began with
the biconditional disjunction equivalence:
"x Cube(x) # Tet(b)
("x Cube(x) Tet(b)) ("x Cube(x) Tet(b)) bicond. disj.
("x Cube(x) Tet(b)) (!x Cube(x) Tet(b)) DeM quant.
"x ((Cube(x) Tet(b)) !x (Cube(x) Tet(b))) dist. ", ! thru .
"x ((Cube(x) Tet(b)) !y (Cube(y) Tet(b))) replace bound vbl.
"x !y ((Cube(x) Tet(b)) (Cube(y) Tet(b))) dist. ! thru .
Here we have another, equivalent, prenex form of our original sentence. You can use Fitch's
FO Con to confirm that the two prenex forms are equivalent. [Note (11/19/04): the FO Con
in Fitch 2.2 gives the wrong evaluation of this equivalence.]
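Fitch aside, a brute-force check over a small domain will also confirm that both prenex forms match the original biconditional. This sketch is mine, not part of LPL:

```python
from itertools import product

D = [0, 1]

def implies(a, b):
    return (not a) or b

def original(cube, tet_b):
    # ∃x Cube(x) ↔ Tet(b)
    return any(cube[x] for x in D) == tet_b

def prenex1(cube, tet_b):
    # ∀x ∃y ((Cube(x) → Tet(b)) ∧ (Tet(b) → Cube(y)))
    return all(any(implies(cube[x], tet_b) and implies(tet_b, cube[y])
                   for y in D)
               for x in D)

def prenex2(cube, tet_b):
    # ∃x ∀y ((Cube(x) ∧ Tet(b)) ∨ (¬Cube(y) ∧ ¬Tet(b)))
    return any(all((cube[x] and tet_b) or (not cube[y] and not tet_b)
                   for y in D)
               for x in D)

for bits in product((False, True), repeat=2):
    cube = dict(zip(D, bits))
    for tet_b in (False, True):
        assert original(cube, tet_b) == prenex1(cube, tet_b) == prenex2(cube, tet_b)
print("both prenex forms agree with the original biconditional")
```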
An English example: first, translate
Here we start with an English sentence, translate it into FOL, and then put it into prenex form:
No cube that adjoins a tetrahedron is back of every dodecahedron.
We make no effort to go directly to prenex form. Instead, we translate into FOL using the step-
by-step method:
∀x (x is a cube-that-adjoins-a-tetrahedron → ¬(x is back-of-every-dodecahedron))
∀x ((x is a cube ∧ ∃y (y is a tetrahedron ∧ x adjoins y)) → ¬(x is back-of-every-
dodecahedron))
∀x ((x is a cube ∧ ∃y (y is a tetrahedron ∧ x adjoins y)) → ¬∀z (z is a dodecahedron →
x is back of z))
∀x ((Cube(x) ∧ ∃y (Tet(y) ∧ Adjoins(x, y))) → ¬∀z (Dodec(z) → BackOf(x, z)))
Next, convert to prenex
Now let's apply our conversion technique to our example.
Before applying our technique, it's handy to use (temporarily!) a more compact notation:
∀x ((Cx ∧ ∃y (Ty ∧ Axy)) → ¬∀z (Dz → Bxz))
This is what we do to produce the compact notation. We use single letters for the predicates;
we remove the parentheses that surround their arguments; we delete the commas and spaces
that separate the arguments (assuming that the arguments are all single letters, e.g., all
variables). For example, we abbreviate Cube(x) as Cx, and Adjoins(x, y) as Axy, etc. When
we do this, the quantifiers and connectives become more prominent, making it easier for us to
see where the conversion equivalences apply.
Now we convert to prenex form. First, we drive the negation sign inside the scope of the
quantifier ∀z:
∀x ((Cx ∧ ∃y (Ty ∧ Axy)) → ∃z ¬(Dz → Bxz))
Next, we look at the conjunction that is the antecedent of the first conditional:
Cx ∧ ∃y (Ty ∧ Axy)
And we apply one of the null quantification equivalences:
∃x (P ∧ Q(x)) ⇔ P ∧ ∃x Q(x)
This allows us to pull the existential quantifier out:
∃y (Cx ∧ (Ty ∧ Axy))
Replacing this in the entire sentence yields:
∀x (∃y (Cx ∧ (Ty ∧ Axy)) → ∃z ¬(Dz → Bxz))
The wff in the scope of the initial universal quantifier ∀x is:
∃y (Cx ∧ (Ty ∧ Axy)) → ∃z ¬(Dz → Bxz)
And this is of the form: ∃y Q(y) → P
which is equivalent to: ∀y (Q(y) → P)
So we pull out the existential quantifier and change it to a universal, and embed the resulting
wff inside the scope of !x:
∀x ∀y ((Cx ∧ (Ty ∧ Axy)) → ∃z ¬(Dz → Bxz))
Finally, the existential quantifier in the consequent can be moved to the outside of the
conditional (but inside the other quantifiers!), yielding:
∀x ∀y ∃z ((Cx ∧ (Ty ∧ Axy)) → ¬(Dz → Bxz))
And that is our original sentence in prenex form. Now all we have to do is to replace our
abbreviated wffs with the real ones:
∀x ∀y ∃z ((Cube(x) ∧ (Tet(y) ∧ Adjoins(x, y))) → ¬(Dodec(z) → BackOf(x, z)))
11.8 Some extra translation problems
The problems in this section make for excellent practice. 11.39 and 11.40 have been assigned
as homework problems. But the remaining ones should be attempted too, time permitting. Here's
another nice translation problem, in the form of an argument. (After we have studied Chapter 13
we will prove that it is valid.)
Dangerfields argument
The late comedian Rodney Dangerfield was famous for the line "I don't get no respect." I
always thought that this was because he himself didn't respect anyone. So when I discovered
the following argument (due to the logician W. V. O. Quine) I decided to name it after
Rodney. (If the argument is sound, and if my conjecture about Rodney is correct, this would
explain why he had such a hard time finding a job.)
No one respects a person who doesn't respect himself.
No one will hire anyone (s)he doesn't respect.
Anyone who respects no one will not be hired by anyone.
Before we begin, we notice that the argument talks only about persons, so we will restrict our
domain of discourse appropriately. This means that we will not need a predicate Person(x).
The only predicates we will need, then, are Respects(x, y) and Hires(x, y).
First Premise
No one respects a person who doesn't respect himself.
Since this sentence contains two quantifiers ("no one", "a person") in the first clause, it will
be best to begin with a paraphrase. If we treat the quantifier "a person" as universal, we
can give it wide scope and paraphrase the sentence this way:
Any person who doesn't respect himself is respected by no one.
This sentence clearly has an Aristotelian form, which is an A sentence. Hence, we can
proceed by the step-by-step method:
Every person-who-doesn't-respect-himself is respected-by-no-one.
∀x (x is a person who doesn't respect himself → x is respected by no one)
Next we attack the wffs that are embedded in this A sentence:
x is a person who doesn't respect himself
x doesn't respect himself
¬Respects(x, x)
x is respected by no one
no one respects x
∀y ¬Respects(y, x)
We then place these FOL translations of our wffs into the antecedent and consequent of
our A sentence:
∀x (¬Respects(x, x) → ∀y ¬Respects(y, x))
Second Premise
No one will hire anyone (s)he doesn't respect.
Here again a preliminary paraphrase is helpful. The sentence clearly says that no matter
what pair of persons you pick, the first will not hire the second if the first doesn't
respect the second. That is:
∀x ∀y (x will not hire y if x does not respect y)
∀x ∀y (¬Respects(x, y) → ¬Hires(x, y))
Conclusion
Anyone who respects no one will not be hired by anyone.
Here we may proceed immediately with the step-by-step method, beginning with the
sentence's Aristotelian form. This is clearly an A sentence:
∀x (if x respects no one, then x will not be hired by anyone)
We next translate the wffs that are embedded in this A sentence:
x respects no one
No one is respected by x
∀y ¬(y is respected by x)
∀y ¬(x respects y)
∀y ¬Respects(x, y)
x will not be hired by anyone
No one will hire x
∀y ¬(y will hire x)
∀y ¬Hires(y, x)
We then place these FOL translations of our wffs into the antecedent and consequent of
our A sentence:
∀x (∀y ¬Respects(x, y) → ∀y ¬Hires(y, x))
Here, then, is the entire argument in FOL:
∀x (¬Respects(x, x) → ∀y ¬Respects(y, x))
∀x ∀y (¬Respects(x, y) → ¬Hires(x, y))
∀x (∀y ¬Respects(x, y) → ∀y ¬Hires(y, x))
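Since the argument is valid, no way of interpreting Respects and Hires, over any domain, makes both premises true and the conclusion false. The following sketch is my own addition (not part of the notes): it checks this claim by brute force over small domains.

```python
# Brute-force check of the Dangerfield argument: enumerate every way of
# interpreting Respects and Hires over a small domain and verify that no
# interpretation makes both premises true and the conclusion false.
from itertools import product

def valid_on(domain):
    pairs = [(x, y) for x in domain for y in domain]
    interps = list(product([False, True], repeat=len(pairs)))
    for r_bits in interps:
        R = dict(zip(pairs, r_bits))                      # Respects
        # Premise 1: ∀x (¬Respects(x, x) → ∀y ¬Respects(y, x))
        if not all(R[(x, x)] or not any(R[(y, x)] for y in domain)
                   for x in domain):
            continue
        for h_bits in interps:
            H = dict(zip(pairs, h_bits))                  # Hires
            # Premise 2: ∀x ∀y (¬Respects(x, y) → ¬Hires(x, y))
            if not all(R[(x, y)] or not H[(x, y)]
                       for x in domain for y in domain):
                continue
            # Conclusion: ∀x (∀y ¬Respects(x, y) → ∀y ¬Hires(y, x))
            if not all(any(R[(x, y)] for y in domain) or
                       not any(H[(y, x)] for y in domain)
                       for x in domain):
                return False      # premises true, conclusion false
    return True

print(all(valid_on(range(n)) for n in (1, 2, 3)))
```

A search like this cannot prove validity outright (that takes a proof, as in Chapter 13), but a counterexample, if one existed at these sizes, would be found.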
We will return to Dangerfield's argument when we discuss proofs involving quantifiers in
Chapter 13.
(Note: in the translations above, we could have written ¬∃y instead of ∀y ¬ at each point; the two forms are equivalent.)
Copyright 2004, S. Marc Cohen Revised 11/23/04
Chapter 12: Methods of Proof for Quantifiers
12.1 Valid quantifier steps
The two simplest rules are the elimination rule for the universal quantifier and the introduction rule
for the existential quantifier.
Universal elimination
This rule is sometimes called universal instantiation. Given a universal generalization (a ∀
sentence), the rule allows you to infer any instance of that generalization.
Example:
From Everyone is mortal, infer Dick Cheney is mortal.
The formal version of this rule (to be developed in Chapter 13) is called ∀ Elim. It will permit
inferences like the following.
Example:
From ∀x Cube(x), infer Cube(b).
Notice that the term elimination is somewhat misleading here, for nothing is really being
eliminated. But since the sentence from which the inference is drawn contains a universal
quantifier that does not occur in the sentence which is inferred from it, one might well think
of this maneuver as eliminating the universal quantifier.
Clearly the inferences above are valid. There is no way the from sentence can be true while
the to sentence is false. (We are assuming, in both cases, that the names being used denote
objects in the domain of discourse.) If Dick Cheney is not mortal, then it is not true that
everyone is mortal. And if Cube(b) is false, then we have a counterexample to ∀x Cube(x).
Existential introduction
This rule, which permits you to introduce an existential quantifier, is sometimes called
existential generalization. It allows you to infer an existential generalization (an ∃ sentence)
from any instance of that generalization.
Example:
From Dick Cheney is mortal infer Someone is mortal.
The formal version of this rule is called ∃ Intro. It will permit inferences like the following.
Example:
From Cube(b), infer ∃x Cube(x).
Again, these are both valid inferences. There is no way the from sentence can be true while
the to sentence is false. If it is true that Dick Cheney is mortal, then it is true that someone
(Dick Cheney, at the very least) is mortal. And if ∃x Cube(x) is false, then there are no cubes,
and so Cube(b) is false.
We thus have our first two simple rules for quantifiers: we can infer
from a universal generalization to any of its instances, and we can infer
to an existential generalization from any of its instances.
Why the next two rules are more complicated
We now have an elimination rule for the universal quantifier, and an introduction rule for the
existential quantifier. This means that we can draw inferences from universally quantified
sentences (∀ sentences), and to existentially quantified sentences (∃ sentences). Further, these
rules are very simple, as can be seen from the examples above, and can be very simply stated (as
can their formal counterparts ∀ Elim and ∃ Intro).
We also need to draw inferences from existentially quantified sentences and to universally
quantified sentences. The question is, how can we formulate inference rules that enable us to do
this?
It is clear that if we model these new rules on our present ones, they will not work. Such overly
simple rules would look like this:
Existential Elimination
From an existential generalization, infer any of its instances.
Universal Generalization
From any instance of a universal generalization, infer that generalization.
The trouble is, inferences made in accordance with these rules are not valid, for they would
permit us to infer false sentences from true ones. Here are some examples of invalid arguments
that these rules would permit:
From Someone is a liberal, infer Dick Cheney is a liberal.
From ∃x Cube(x), infer Cube(b).
From Bill Bradley is a liberal, infer Everyone is a liberal.
From Cube(c), infer ∀x Cube(x).
It's obvious that these arguments are invalid. Bill Bradley is a liberal, but not everyone is (e.g.,
Dick Cheney isn't a liberal). Someone is a liberal (e.g., Bill Bradley), but Dick Cheney isn't. And
it's easy to construct a Tarski's World counterexample to the other two arguments: let b be a
tetrahedron and c a cube.
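That two-block counterexample can be checked mechanically. This sketch (my own addition, using the world just described) shows the premise of each bad rule coming out true while its conclusion comes out false:

```python
# The world described in the text: b is a tetrahedron, c is a cube.
world = {'b': 'tet', 'c': 'cube'}

def Cube(name):
    return world[name] == 'cube'

# Overly simple "Existential Elimination": from ∃x Cube(x), infer Cube(b).
premise_1    = any(Cube(n) for n in world)   # ∃x Cube(x): True (c is a cube)
conclusion_1 = Cube('b')                     # Cube(b):     False

# Overly simple "Universal Generalization": from Cube(c), infer ∀x Cube(x).
premise_2    = Cube('c')                     # Cube(c):     True
conclusion_2 = all(Cube(n) for n in world)   # ∀x Cube(x):  False

print(premise_1, conclusion_1, premise_2, conclusion_2)  # True False True False
```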
So our simple versions of the new rules will not do. We must therefore come up with different
rules for drawing inferences from existential generalizations and to universal generalizations.
Instead of introducing those rules at this point, we will informally describe a method of drawing an
inference from an existential generalization, and a method of inferring to a universal
generalization. Having done that, we will be in a position to formulate sound rules of existential
elimination and universal introduction.
12.2 The method of existential instantiation
The method
We give up the idea of trying to infer an instance of an existential generalization from the
generalization. Instead, we temporarily introduce a new name into our proof and assume that
it names an object (whatever it might be) that makes the existential generalization true. (We
know that there must be such an object; we just don't know its name.) Then we try to prove
something about the hypothetical individual. Finally, we derive some further sentence
(typically, an existential generalization) that does not mention that individual by name.
Example
Argument
Some criminal stole the diamonds from the museum. Whoever stole the diamonds has an
accomplice on the museum staff. Therefore, some criminal has an accomplice on the
museum staff.
Proof
We know that some criminal stole the diamonds; let's call him (or her) Ralph. Since
whoever stole the diamonds has an accomplice on the museum staff, it follows that
Ralph has such an accomplice. But Ralph is a criminal, and Ralph has an accomplice on
the staff. So, some criminal has an accomplice on the staff.
The rule
If we have followed this method successfully, we are in the following situation:
We have an existential generalization as a line in our proof, say ∃x P(x).
We have assumed an instance of that generalization, say P(c), as a temporary
assumption.
We have derived from that assumption some conclusion, say Q, in which c does not
occur.
The rule then permits us to enter the conclusion Q that we just reached as a new line, but one
which depends on the existential generalization ∃x P(x), rather than on the instance P(c) we
temporarily assumed.
Our example followed this procedure: P(x) was "x is a criminal and x stole the diamonds
from the museum," c was "Ralph," and Q was "Some criminal has an accomplice on the
staff." Our assumption came at the point where we said "Let's call him Ralph."
The example on p. 323 also makes clear how this works. When we get to Ch. 13, we'll see
how the rule for system F formalizes this procedure.
12.3 The method of general conditional proof
Once again, we give up the idea of trying to infer a universal generalization from just any instance
of the generalization. Instead, we temporarily introduce a new name into our proof and assume
that it names an object chosen at random. Then we prove something about the randomly chosen
individual. Finally, we may then infer that what we have proven about this randomly chosen
individual holds universally; i.e., we may infer a universal generalization.
The method
This is a method of proving generalized conditional sentences, that is, sentences of the form
All Ps are Qs. The technique is to pick some arbitrary instance of P, and then prove that it is
also an instance of Q. Having shown that this arbitrary instance of P is also an instance of Q,
we may infer that every instance of P is an instance of Q.
One might well wonder, how can we be certain that we have picked an instance of P? What
happens if there are none? The answer is that we do not have to be certain of this. That there
is an instance of P (chosen by us at random) is just an assumption we are making, an
assumption that we will discharge. So, in the end, our proof will not depend on there actually
being such an instance. Rather, what we show is that if there is such an instance, it will also
be an instance of Q. The method looks a lot like the conditional proof method we used in
propositional logic.
That is, to prove a statement of the form ∀x (P(x) → Q(x)):
Assume some instance of the wff P(x), say P(c), where c denotes any arbitrarily
selected individual satisfying P(x).
Prove Q(c)
Discharge the assumption and draw the conclusion ∀x (P(x) → Q(x)).
There is another way to look at this kind of proof, one that usually goes by the name
universal generalization. Here, one starts out with only the assumption that one has chosen
some object at random (but no other assumption about it). One then proves something about
this object. One then concludes that whatever one has proved about this arbitrarily chosen
object holds of every object. That is:
Choose a name, say c, and assume that it denotes some arbitrary individual.
Prove some sentence containing the name c, say S(c).
Discharge the assumption and infer the universal generalization ∀x S(x).
As LPL points out, these two approaches are in fact redundant; we can make do with only
one of them. But the first is common in everyday reasoning, and the second is common in
logic books, so we will build them both into system F and into Fitch.
Planning a strategy: informal proofs
Sketching out an informal proof is almost always a good thing to do before trying to construct a
formal proof. So before moving on to the next chapter, let's try our hand at some informal proofs.
Example: Exercise 12.9
∀x [(Cube(x) ∧ Large(x)) ∨ (Tet(x) ∧ Small(x))]
∀x [Tet(x) → BackOf(x, c)]
∀x [Small(x) → BackOf(x, c)]
Proof: We will use the method of general conditional proof. Let a be an arbitrary small
object; we will prove that a is back of c. Premise 1 tells us that every object is either a large
cube or a small tetrahedron, so it follows (by ∀ Elim) that a is either a large cube or a small
tetrahedron. This gives us two cases. But the first case is immediately contradictory, since a
cannot be both small and large, so the second case must hold, and a must be a small
tetrahedron. But premise 2 tells us that every tetrahedron is back of c, so it follows that a is
back of c, which is what we wanted to prove. Our assumption that a is small, then, has led to
the conclusion that a is back of c. Hence, by general conditional proof, anything that is small
is in back of c. QED.
Example: Exercise 12.20
∀x [Cube(x) → ∃y LeftOf(x, y)]
¬∃x ∃z [Cube(x) ∧ Cube(z) ∧ LeftOf(x, z)]
∃x ∃y [Cube(x) ∧ Cube(y) ∧ x ≠ y]
∃x ¬Cube(x)
Proof: Here we will use the method of existential instantiation. Premise 3 tells us that there
are at least two cubes; let's call these a and b. Premise 1 tells us that every cube is left of
something, so we can infer that if a is a cube, then there is something that a is to the left of.
But a is a cube, so we may infer (using → Elim) that there is something that a is to the left of.
Let's pick an object that a is to the left of and call it c. We will now prove that c is not a cube.
For, suppose c is a cube; then a is a cube and c is a cube and a is to the left of c. We may then
apply ∃ Intro to this and infer that there is a cube that is to the left of a cube, i.e.,
∃x ∃z (Cube(x) ∧ Cube(z) ∧ LeftOf(x, z)), contradicting premise 2. Since our assumption
that c is a cube has led to a contradiction, we may infer that c is not a cube. So, by ∃ Intro,
there is at least one object that is not a cube. Now we know from premise 3 that there are at
least two cubes, and we have been assuming that their names are a and b. From this
assumption we derived the conclusion that there is at least one non-cube. But nothing in our
proof depended on the names of the two cubes: no matter what their names are, we could
still derive the same conclusion. Hence we may conclude (citing premise 3 in place of our
assumption about the names of the cubes) that there is at least one object that is not a cube.
12.4 Proofs involving mixed quantifiers
In both of these methods (existential instantiation and universal generalization), we must be sure
that we have a way of guaranteeing that the name we use in our assumption picks out an arbitrary
object. We must be sure that we do not smuggle into our proof any extraneous information that
may depend on the individual denoted by the name we have chosen to represent a random object.
A special case in which this issue arises involves proofs with mixed quantifiers. Although ∃∀
sentences imply their ∀∃ counterparts, the converse is not always true. The following arguments
illustrate this point.
A valid argument
There is a certain person who is admired by everyone. Therefore, everyone admires someone
or other.
∃y ∀x Admires(x, y)
∀x ∃y Admires(x, y)
This argument is valid, and our method of proof can establish its validity.
Proof
There is a certain person who is admired by everyone. Let's call him George. Now
since everyone admires George, we can pick any person at random and that person will
admire George. So let's pick someone at random, and call him Jerry. So, since Jerry
admires George, it follows that there is someone whom Jerry admires. But Jerry was
any randomly chosen individual, and we have shown that Jerry admires someone or
other. Therefore, everyone admires someone or other.
Switch the premise and conclusion, however, and the argument becomes invalid.
An invalid argument
Everyone admires someone or other. Therefore, there is a certain person who is admired by
everyone.
∀x ∃y Admires(x, y)
∃y ∀x Admires(x, y)
This argument is clearly invalid. (Just to convince yourself of its invalidity, see whether you
can describe a possible situation, say, involving George, Jerry, and Elaine, in which the
premise is true and the conclusion false. You should be able to do this fairly easily; to check
out your counterexample, compare it with mine.) So we must take pains to ensure that the
method of proof we used above will not enable us to prove the conclusion of this argument.
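Here is one possible counterexample, sketched in code (a sketch added here, not necessarily the example the text refers to): the three admire one another in a cycle.

```python
# A cycle of admiration: George admires Jerry, Jerry admires Elaine, and
# Elaine admires George.  Everyone admires someone, yet no single person
# is admired by everyone: ∀x ∃y is true while ∃y ∀x is false.
people = ['george', 'jerry', 'elaine']
admires = {('george', 'jerry'), ('jerry', 'elaine'), ('elaine', 'george')}

premise    = all(any((x, y) in admires for y in people) for x in people)
conclusion = any(all((x, y) in admires for x in people) for y in people)

print(premise, conclusion)   # True False: the argument is invalid
```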
If we are not careful, that is just what will happen. Consider the following pseudo-proof:
Pseudo-proof
Everyone admires someone or other. So let's pick anyone at random, and call her
Elaine. Now Elaine admires someone or other; so let's pick a person that Elaine
admires and call him Cosmo. So, Elaine admires Cosmo. But Elaine was any randomly
chosen individual, and we have shown that Elaine admires Cosmo. So everyone admires
Cosmo. Therefore, it follows that there is a certain person (namely, Cosmo) whom
everyone admires.
Clearly, this proof is fallacious. The fallacy lies in concluding, from the fact that Elaine was
chosen at random and admires Cosmo, that everyone admires Cosmo. The reason it is a
fallacy is that we did not choose Cosmo at random from the entire domain of people who are
admired; rather, we chose him from among the people that Elaine admires. That is, our choice
of Cosmo depended on our prior choice of Elaine.
This is an example of what LPL calls a hidden dependency. Since the sentence Elaine
admires Cosmo mentions an individual (Cosmo) whose choice depended on our prior choice
of Elaine, we cannot universally generalize with respect to Elaine and conclude Everyone
admires Cosmo.
As LPL explains (p. 331), we must now add the restriction that S(c) not contain any constant
introduced by existential instantiation after the introduction of the constant c.
Here's another way of putting the restriction: even if we prove S(c), where c is a randomly
chosen individual, we may not draw the conclusion ∀x S(x) if S(c) contains any other name
that was introduced as an assumption for existential instantiation after the name c was
introduced.
Thus, in our example, S(c) is "Elaine admires Cosmo" and c is Elaine. Since "Elaine admires
Cosmo" contains the name Cosmo, which was introduced by existential instantiation after the
name Elaine was introduced, we may not infer "Everyone admires Cosmo."
This restriction sounds complicated, but in system F we will have a very simple method of
ensuring that it is not violated. It will turn out that our rule of ∃ Elim (the formal rule that
corresponds to the method of existential instantiation) will be set up in such a way as to
prevent this fallacious inference. We will see how this comes about when we introduce the
formal rules for quantifiers in Chapter 13.
Copyright 2004, S. Marc Cohen Revised 11/26/04
Chapter 13: Formal Proofs and Quantifiers
13.1 Universal quantifier rules
Universal Elimination (∀ Elim)
∀x S(x)
S(c)
Here x stands for any variable, c stands for any individual constant, and S(c) stands for the
result of replacing all free occurrences of x in S(x) with c.
Example
1. ∀x ¬∃y (Adjoins(x, y) ∧ SameSize(y, x))
2. ¬∃y (Adjoins(b, y) ∧ SameSize(y, b))      ∀ Elim: 1
General Conditional Proof (∀ Intro)

c | P(c)
  | ...
  | Q(c)
∀x (P(x) → Q(x))

There is an important bit of new notation here: the boxed constant c at the beginning of
the assumption line. This says, in effect, "let's call it c." To enter the boxed constant in Fitch,
start a new subproof and click on the downward-pointing triangle. This will open a menu
that lets you choose the constant you wish to use as a name for an arbitrary object. Your
subproof will typically end with some sentence containing this constant.
In giving the justification for the universal generalization, we cite the entire subproof (as we
do in the case of → Intro).
Notice that although c may not occur outside the subproof where it is introduced, it may occur
again inside a subproof within the original subproof.
Universal Introduction (∀ Intro)

c |
  | ...
  | P(c)
∀x P(x)

Remember, any time you set up a subproof for ∀ Intro, you must choose a boxed constant
on the assumption line of the subproof, even if there is no sentence on the assumption line.
For practice, do the You try it on p. 344.
(Side condition on both forms of ∀ Intro above: the boxed constant c may not occur outside the
subproof where it is introduced.)
Default and generous uses of the ∀ rules
Default
∀ Elim: If you cite a universal generalization and apply the rule ∀ Elim, Fitch will
enter an instance of the generalization containing its best guess of the constant you want
to replace the variable. If you are within a subproof containing a boxed constant, Fitch
will use that constant. Otherwise, Fitch will use the alphabetically first constant not
already in use in the sentence.
If you want to use a different constant, there are three ways to do it.
(1) The slow way: type the entire sentence in manually.
(2) A faster way: let Fitch guess, and then correct the sentence manually.
(3) The fastest way: suppose the quantifier is ∀x and you want to replace x with
c. Cite the universal generalization, apply ∀ Elim, and type in :x > c. What
this says to Fitch is "replace x with c." Fitch will then enter an instance of the
universal generalization with c plugged in for x.
∀ Intro: If you apply ∀ Intro to a subproof containing a boxed constant (but no
sentence) on the assumption line, Fitch will enter the universal generalization of the last
line in the subproof. If there is a sentence on the assumption line, Fitch will enter the
universal generalization of the conditional whose antecedent is the assumption sentence
and whose consequent is the last line of the subproof.
Do the You try it on p. 345.
Generous
Fitch lets you remove (or introduce) more than one quantifier at a time.
∀ Elim: You can remove several quantifiers simultaneously. To go from ∀x ∀y
SameCol(x, y) to SameCol(b, c), you may type in the new sentence manually, cite the
universal generalization, and apply the rule. Or, cite the supporting sentence, apply the
rule, and tell Fitch:
:x > b :y > c
This tells Fitch to replace x with b and y with c.
∀ Intro: You may also introduce more than one quantifier at a time. The trick here is to
box more than one constant at the start of the subproof. Then, at the end of the
subproof, Fitch will enter the appropriate universal generalization (of a conditional, if
there is a sentence in the assumption line, otherwise of the last line in the subproof).
You can use the colon notation as above to tell Fitch which variables to use. For
example, if your boxed constants are b and c, then you tell Fitch to replace b with x and
c with y by writing:
:b > x :c > y
The order in which you write these replacement instructions makes a difference.
The instruction we wrote above tells Fitch not only to replace b with x and c with y, it
also tells Fitch to put the quantifiers in the order ∀x∀y. If we wanted to make the same
replacements (x for b and y for c), but have the quantifiers in the opposite order, ∀y∀x,
we'd give the instruction this way:
:c > y :b > x
To see how this works, go to the Supplementary Exercises page and open the file
Ch13Ex1. Open a new subproof with boxed constants b and c. (Choose them in this
order.) Then add a new step after the assumption, cite the premise, and choose rule ! !! !
Elim. If you click Check Step at this point, Fitch will enter an instance of the premise,
but with only the outer quantifier removedit will replace x with b. (If you had chosen
the boxed constants in the other order, c b , Fitch would have replaced x with c.)
If you want to use ! !! ! Elim to get SameCol(b, c) from the premise
!x!y SameCol(x, y) in just one step, you must specify the replacements using the
colon notation, :x > b :y > c. Then cite the premise and apply ! !! ! Elim; Fitch will
enter SameCol(b, c).
Next, end the subproof, cite it, and choose rule ∀ Intro. This time, be sure you specify
not only the replacements but also the order in which the quantifiers are to appear.
Since the conclusion is ∀y∀x SameCol(x, y), the instruction is
:c > y :b > x
Fitch will enter the desired conclusion, ∀y∀x SameCol(x, y). (Notice what happens to
the conclusion if you write :b > x :c > y.) What we proved here, by the way, is
that in a string of universal quantifiers, the order of the quantifiers is semantically
irrelevant.
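That closing remark, and its contrast with mixed quantifier strings, can be checked semantically. This sketch is my own addition, with a made-up relation standing in for SameCol:

```python
# For any binary relation, ∀x ∀y R(x, y) and ∀y ∀x R(x, y) agree: reordering
# a string of like quantifiers never changes the truth value.  Mixed strings
# are different: ∀x ∃y can hold while ∃x ∀y fails (compare section 12.4).
dom = [0, 1, 2]
R = {(x, y) for x in dom for y in dom if x % 2 == y % 2}  # made-up relation

xy = all(all((x, y) in R for y in dom) for x in dom)      # ∀x ∀y R(x, y)
yx = all(all((x, y) in R for x in dom) for y in dom)      # ∀y ∀x R(x, y)
print(xy == yx)                                           # True

ae = all(any((x, y) in R for y in dom) for x in dom)      # ∀x ∃y R(x, y)
ea = any(all((x, y) in R for y in dom) for x in dom)      # ∃x ∀y R(x, y)
print(ae, ea)                                             # True False
```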
13.2 Existential quantifier rules
Existential Introduction (∃ Intro)
S(c)
∃x S(x)
Here x stands for any variable, c stands for any individual constant, and S(c) stands for the
result of replacing all free occurrences of x in S(x) with c. Note that there may be other
occurrences of c in S(x).
Example 1
1. ¬∃y (Adjoins(b, y) ∧ SameSize(y, b))
2. ∃x ¬∃y (Adjoins(x, y) ∧ SameSize(y, x))      ∃ Intro: 1
In example 1, there are no occurrences of b in the existential generalization we derived
by using ∃ Intro. But now look at the next example:
Example 2
1. ¬∃y (Adjoins(b, y) ∧ SameSize(y, b))
2. ∃x ¬∃y (Adjoins(b, y) ∧ SameSize(y, x))      ∃ Intro: 1
In example 2, there is an occurrence of b in the existential generalization we derived by
using ∃ Intro. But that is perfectly all right. We require that the instance (line 1, in this
case) have b wherever the generalization (line 2) has x, but not conversely.
Note carefully the wording of the rule. It talks about S(c) being the result of replacing
free occurrences of x in S(x) with c, even though when we apply the rule, we tend to
think of it differently: we start out with S(c) and then replace c with x, and attach ∃x.
So a good way to think about ∃ Intro is as follows: you start out with an instance of a
general sentence, containing (perhaps) one or more occurrences of the constant c. You
then get to replace one or more of the occurrences of c (you don't have to replace all of
the occurrences of c, although you may if you wish) with a variable x, and then attach
the quantifier ∃x.
The reason for the "perhaps" above is because of the possibility of null
quantification (recall 10.4, pp. 280-82), that is, a sentence in which an ∃x
quantifier contains no other occurrence of x within its scope. Strictly speaking,
this peculiar inference, whose conclusion is a null quantification, is valid:
Cube(b)
∃x Cube(b)
Therefore, the ∃ Intro rule had better allow us to draw it. And notice that its
careful wording insures that it does just this. In this case, S(x) is Cube(b),
which contains no occurrences of x at all. So S(c), which is the result of
replacing all free occurrences of x in S(x) with c, is also just our original
sentence Cube(b). Then when we attach the quantifier ∃x, it becomes null, since
there is no free occurrence of x in Cube(b) for the quantifier to bind. So the
∃ Intro rule permits this inference. If you're in doubt, try out this use of ∃ Intro
in Fitch!
For a good illustration of the versatility of ∃ Intro, look at the file EI Varieties (on the
Supplementary Exercises page). You'll see that there are four different conclusions that can
be obtained by ∃ Intro from Likes(max, max), including the null quantification case
described above.
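The same four conclusions can be checked semantically in a tiny made-up model (a sketch added here, not the contents of the EI Varieties file):

```python
# Assume the premise Likes(max, max) holds.  ∃ Intro may replace both
# occurrences of max, only the first, only the second, or neither (the
# null-quantification case); each resulting ∃-sentence comes out true.
dom = ['max', 'claire']               # made-up domain
likes = {('max', 'max')}              # the premise: Likes(max, max)

c1 = any((x, x) in likes for x in dom)            # ∃x Likes(x, x)
c2 = any((x, 'max') in likes for x in dom)        # ∃x Likes(x, max)
c3 = any(('max', x) in likes for x in dom)        # ∃x Likes(max, x)
c4 = any(('max', 'max') in likes for x in dom)    # ∃x Likes(max, max), null
print(c1, c2, c3, c4)                             # True True True True
```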
Existential Elimination (∃ Elim)

∃x S(x)
c | S(c)
  | ...
  | Q
Q

Here, again, the boxed constant indicates that we are choosing c as a name for some
arbitrary object satisfying S(x). Note that the restriction that c may not occur outside the
subproof means, in effect, that c cannot occur in the last line of the subproof, either. (∃ Elim
instructs us to end the subproof and enter its last line, Q, as a new line, outside the subproof.)
It is this restriction on ∃ Elim that blocks the fallacious inference we discussed in Chapter 12
[the pseudo-proof deducing ∃y ∀x Admires(x, y) from ∀x ∃y Admires(x, y)]. To see how
this works, look at Exercise 13.17 on page 351 (which is a homework problem). There you
will see that the mistake in this pseudo-proof is an incorrect application of ∃ Elim.
(Side condition on ∃ Elim: the boxed constant c may not occur outside the subproof where it is
introduced.)
Default and generous uses of the ∃ rules
Default
∃ Intro: If you cite a sentence and apply ∃ Intro, Fitch will replace the alphabetically
first name in the sentence with the first variable in the list (x, y, z, u, v, w) not already
in the sentence.
∃ Elim: If you end a subproof, cite it, and apply ∃ Elim, Fitch will enter the last line of
the subproof on a new line (provided it does not contain any occurrences of the boxed
constant).
Generous
∃ Intro: You can attach several existential quantifiers simultaneously. (Of course, they
will have to be attached to the front of the sentence.) To go from SameCol(b, c) to ∃x
∃y SameCol(x, y), you may cite the supporting sentence, apply the rule, and tell Fitch:
:b > x :c > y
This tells Fitch to replace b with x and c with y. The instruction
:b > y :c > x
will produce the sentence ∃y ∃x SameCol(y, x). On the other hand, the instruction
:c > x :b > y
will produce the sentence ∃x ∃y SameCol(y, x).
∃ Elim: The trick here is to start a subproof with more than one boxed constant. If your
subproof ends with a sentence, Q, that does not contain either of these constants, you
may use ∃ Elim to enter Q on the next line. (Q is typically, although not always, an
existential generalization, i.e., an ∃-sentence.)
13.3 Strategy and tactics
Working out strategy and tactics for a given proof is best accomplished in the following way:
1. Try to understand what the FOL sentences mean.
2. Try to come up with an informal proof.
3. Convert your informal proof into a Fitch proof.
The example on p. 352 gives you a good idea of how this works. For some hands-on experience, do
the You try it on p. 356.
Now let's try working through one of the exercises. Open Exercise 13.23. First, figure out what the
sentences mean. You'll come up with something like this:
Everything is either a cube or a dodecahedron.
Every cube is large.
Something is not large.
Therefore, there is a dodecahedron.
Next, try to develop an informal proof. It might run as follows:
We know that at least one thing is not large. Let's pick a thing that isn't large and call it
b. Now since every cube is large and b isn't large, we know that b is not a cube. But
everything is either a cube or a dodecahedron, and b is not a cube. Therefore, b is a
dodecahedron. So we have proved that there is a dodecahedron.
Now convert this into a Fitch proof. We have obviously used existential instantiation strategy,
based on premise 3. So our proof will begin with a subproof containing the assumption ¬Large(b),
with b as a boxed constant (see the file Proof 13.23a.prf). We will aim for Dodec(b), from which
we can obtain ∃x Dodec(x) by ∃ Intro. Then we can use ∃ Elim to end the subproof and infer our
conclusion ∃x Dodec(x).
Premise 2 tells us that all the cubes are large, so we need to infer the relevant instance concerning
b. This means using ∀ Elim, replacing x with b. We now have Cube(b) → Large(b) on one line
and ¬Large(b) on another. Since we are allowed to use Taut Con with this problem, we may
immediately infer ¬Cube(b). This leaves us in the position shown in Proof 13.23b.prf.
We now go back to premise 1 and apply ∀ Elim again, with b replacing y, obtaining
Cube(b) ∨ Dodec(b). And this, together with ¬Cube(b), gives us Dodec(b); once again we use
Taut Con. And that gives us our completed proof (see the file Proof 13.23.prf).
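As a semantic cross-check on this proof (my own addition, with a simplified shape-and-size vocabulary), a brute-force search over small worlds finds no world where the premises hold and the conclusion fails:

```python
# Every object in a world is assigned a shape and a size; search all worlds
# of up to three objects for one where the premises of Exercise 13.23 hold
# but the conclusion ∃x Dodec(x) fails.
from itertools import product

shapes = ['cube', 'dodec', 'tet']
sizes = ['small', 'medium', 'large']
counterexample = False
for n in (1, 2, 3):
    for world in product(product(shapes, sizes), repeat=n):
        p1 = all(sh in ('cube', 'dodec') for sh, sz in world)  # cube or dodec
        p2 = all(sz == 'large' for sh, sz in world if sh == 'cube')
        p3 = any(sz != 'large' for sh, sz in world)            # some non-large
        c  = any(sh == 'dodec' for sh, sz in world)
        if p1 and p2 and p3 and not c:
            counterexample = True
print(counterexample)   # False: no small world refutes the argument
```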
In Chapter 11 we translated some arguments into FOL and promised to return to them later. The one
about Doris Day provides good practice in proof strategy.
The Doris Day principle (again)
Everybody loves a lover.
Doris is a lover.
Everybody loves Doris.
Doris loves everybody.
In FOL:
∀x ∀y (∃z Loves(y, z) → Loves(x, y))
∃z Loves(doris, z)
∀x Loves(x, doris)
∀x Loves(doris, x)
Informal proof
The first premise tells us that everybody loves a lover, so it follows that everybody
loves Doris if she's a lover. But the second premise tells us that Doris is a lover; so it
follows that everybody loves her. That is our first conclusion. But if everybody loves
Doris, then any randomly chosen person, a, loves Doris. Since a loves Doris, it follows
that a loves someone (i.e., a is a lover). But it follows from the first premise that if a is
a lover, everybody loves a. Now we have proved that a is a lover, so it follows that
everybody loves a. From this it follows that Doris loves a. Since a was chosen at
random, it follows that Doris loves everyone.
Formal proof
Now convert this into a Fitch proof (see Ch13Ex3.prf). Start a new subproof with
boxed constant a. We will prove, first, that a loves Doris, and, second, that Doris loves
a. We apply ∀ Elim to the first premise twice, replacing x with a and y with Doris. The
resulting sentence says that if Doris is a lover, then a loves Doris. We then use → Elim
on that sentence together with the second premise to obtain Loves(a, doris). We then
reapply ∀ Elim to the first premise, this time replacing x with Doris and y with a. The
resulting sentence says that if a is a lover, then Doris loves a. We then end the subproof
and apply ∀ Intro twice, once to get ∀x Loves(x, doris) and a second time to get
∀x Loves(doris, x). Notice that the sentence (containing the boxed constant a) on which
we are generalizing does not have to occur in the last line of the subproof. For the
complete proof, see ProofCh13Ex3.prf.
13.4 Soundness and completeness
We saw earlier (Chapter 8) that the restricted system F_T for propositional logic is both sound and
complete. We now note that the full system F for FOL is also both sound and complete. Let us
briefly review what soundness and completeness amount to.
Soundness
To say that a deductive system is sound is to say that all of the inferences it permits are
(semantically) correct. That is, it never permits you to infer a falsehood from a truth. In the
case of system F_T, this meant that every conclusion that can be proved by the rules of F_T is a
tautological consequence of its premises.
Completeness
To say that a deductive system is complete is to say that there is no (semantically) correct
inference that it cannot prove. In the case of system F_T, this meant that any tautological
consequence of any set of premises can be proved (i.e., derived from those premises) by the
rules of F_T.
Obviously, the notions of soundness and completeness of the full system F are exactly analogous
to those for the restricted system F_T. There are only two differences:
1. In place of tautological consequence (for system F_T) we now have first-order
consequence (for system F). If you are unclear on the notion of first-order consequence,
review §10.2.
2. In place of the notion of provability in system F_T, we now have provability in system F.
Just as we used the turnstile notation, ⊢_T, to express the former notion, we now write ⊢
simply to express the latter. Here is what the difference amounts to:
P_1, …, P_n ⊢_T S means that S can be proved, from premises P_1, …, P_n, using only the
truth-functional rules (i.e., the rules of F_T).
P_1, …, P_n ⊢ S means that S can be proved, from premises P_1, …, P_n, using any of the
rules of F.
We can now state the soundness and completeness theorems for F:
The Soundness Theorem for F: If P_1, …, P_n ⊢ S, then S is a first-order
consequence of P_1, …, P_n.
The Completeness Theorem for F: If a sentence S is a first-order
consequence of P_1, …, P_n, then P_1, …, P_n ⊢ S.
Recall that in propositional logic, there are soundness and completeness corollaries that relate the
notions of tautology and provability in F_T. There are analogous corollaries for the full system F.
These concern the relation between proofs without premises, on the one hand, and first-order
validities, on the other. (Remember that ⊢ S means that there is a proof without premises of S in
system F.)
Soundness Corollary: If ⊢ S, then S is a first-order validity.
The soundness corollary tells us that every sentence of FOL that can be proved without premises in
system F is a first-order validity, that is, a logical truth of FOL.
Completeness Corollary: If S is a first-order validity, then ⊢ S.
The completeness corollary tells us that every sentence of FOL that is a first-order validity, that is, a
logical truth of FOL, can be proved without premises in system F.
13.5 Some review exercises
This section contains 19 problems (of which 8 are assigned on problem set H). One of these is to
prove the famous Drinking Theorem (13.51). The theorem is: ∃x (P(x) → ∀y P(y)). If you
read P(x) as "x drinks," you get the theorem:
There are some people such that, if they drink, everyone drinks.
There are two separate issues here: (1) to see that this is a logical truth (indeed, a first-order
validity), and (2) to figure out how to prove it. Doing (1) first is extremely helpful before tackling
(2).
Since this is a hard problem, here are some hints:
1. Who are these people who are such that, if they drink, everyone drinks?
2. Remember that conditionals are treated by Tarski's World as just abbreviations of
disjunctions. That is:
Drinks(x) → ∀y Drinks(y)
is just an abbreviation of:
¬Drinks(x) ∨ ∀y Drinks(y)
3. So our question (1) is really: who are these people who are such that either they don't
drink or everyone drinks? And the answer to this is easy: they are the non-drinkers.
4. But what if there are no such people, i.e., no non-drinkers? In that case, everyone drinks.
But that makes the right disjunct true.
Copyright 2004, S. Marc Cohen Revised 11/26/04
13-9
5. So if there are non-drinkers, they falsify the antecedent, making the conditional true of
them; and if there are no non-drinkers, that means that everyone's a drinker, making the
consequent true, and thereby the whole conditional true of every x, and hence true of at
least one thing. So the existential generalization is true in any case. So it's a first-order
validity.
[You can check this out by writing the sentence ∃x (Cube(x) → ∀y Cube(y)) in Tarski's
World and then trying to construct a world in which it is false. You'll see that it's impossible
to do so. Try playing the game, committed to false. If your world contains all cubes, Tarski
will ask you to find something that falsifies Cube(y), and you will fail. If your world contains
at least one non-cube, Tarski will pick a non-cube, call it n₁, and show that you are committed
to the truth of Cube(n₁).]
This suggests a proof strategy: proof by cases. Case (1): there are non-drinkers; case (2): there are
no non-drinkers. You may use Taut Con in your proof to enter the disjunction of (1) and (2),
which is an instance of excluded middle.
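Before attempting the Fitch proof, you can convince yourself of hint 5 by brute force. Here is a minimal sketch (my own helper name `drinking_theorem`, not part of the exercise files) that represents a model as a tuple of booleans, one per person, and evaluates the sentence in every nonempty model with up to three people:

```python
from itertools import product

def drinking_theorem(drinks):
    # ∃x (Drinks(x) → ∀y Drinks(y)), reading the conditional as ¬Drinks(x) ∨ ∀y Drinks(y)
    everyone_drinks = all(drinks)
    return any((not d) or everyone_drinks for d in drinks)

ok = all(drinking_theorem(bits)
         for n in (1, 2, 3)
         for bits in product([False, True], repeat=n))
print(ok)  # prints True: the sentence holds in every nonempty model tested
```

The two cases of the proof strategy show up directly: a non-drinker makes `not d` true, and if there are none, `everyone_drinks` is true.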
More exercises
The Doris Day principle (one last time)
We've already deduced some of the logical consequences of the principle "everybody loves a
lover." We'll now try to prove that the principle is false, using a couple of empirical premises
that are, I think, uncontroversial:
Madonna loves herself.
Rush Limbaugh does not love Hillary Clinton.
It is not true that everybody loves a lover.
Here's the argument in FOL:
Loves(madonna, madonna)
¬Loves(rush, hillary)
¬∀x ∀y (∃z Loves(y, z) → Loves(x, y))
Informal Proof:
We'll do this by indirect proof. Suppose that everybody loves a lover. Now, Madonna
loves herself, and so she loves someone (by ∃ Intro). That means that she's a lover. So
everybody loves her (from our indirect proof assumption, by ∀ Elim). In particular,
Hillary loves her (another application of ∀ Elim). So Hillary loves someone (by
∃ Intro), and that makes her a lover. So everybody loves her (yet another application of
∀ Elim). Therefore (by ∀ Elim!), Rush loves her, and that contradicts the second
premise. So, by ¬ Intro, it follows that the Doris Day principle is false.
We will now implement this strategy in a Fitch proof. You can find it in Doris Day's
Argument on the Supplementary Exercises page.
Stage 1
Start a new subproof and assume ∀x ∀y (∃z Loves(y, z) → Loves(x, y)), for proof by
contradiction. We intend to obtain our contradiction by proving Loves(rush, hillary),
which contradicts premise (2), so we will enter that step (without justification at this
point), use it to obtain ⊥, and end the subproof. To see where the proof stands at this
stage, open Proof Doris Day Stage 1.
Stage 2
We need to prove that Madonna is a lover, so we apply ∃ Intro to the first premise,
being careful to substitute the variable z for the second occurrence of madonna only.
(We want to prove that Madonna loves someone, not that someone loves herself!) We
then apply ∀ Elim to our assumption, replacing x with hillary and y with madonna.
That enables us to use → Elim to infer Loves(hillary, madonna). To see where the
proof stands now, look at Proof Doris Day Stage 2.
Stage 3
Next we apply ∃ Intro to Loves(hillary, madonna) to prove that Hillary is a lover
(typing :madonna > z). Then we go back to the Doris Day principle (our subproof
assumption) and obtain another instance by ∀ Elim. This time we want to substitute
rush for x and hillary for y. The resulting sentence tells us that Rush loves Hillary if
Hillary is a lover. So by → Elim we can obtain Loves(rush, hillary), which contradicts
the second premise. This completes the proof; see Proof Doris Day's Argument.
This proof may not have convinced you that the Doris Day principle is false, since you may
object that the argument is unsound. That is, you may think that one of the empirical
premises we used is false. So let's look at an alternative version in which there are no names.
Everybody loves a lover.
If there is even one lover, then everyone loves everyone.
The conclusion here seems clearly false: the antecedent is true (there exists at least one lover)
and the consequent is false (there is at least one pair of people, x and y, such that x doesn't
love y). So the Doris Day principle must be false if it has this blatantly false consequence.
Exercise: show that the Doris Day principle does have this consequence by proving the
following in Fitch:
∀x ∀y (∃z Loves(y, z) → Loves(x, y))
∃x ∃y Loves(x, y) → ∀x ∀y Loves(x, y)
You can find this exercise as Doris Day 2.prf on the Supplementary Exercises page.
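If you want independent evidence that a Fitch proof should exist, a finite model check helps: the sketch below (my own helper names; it only examines domains of size 1–3, so it is evidence rather than a proof) confirms that every Loves relation satisfying the premise also satisfies the conclusion:

```python
from itertools import product

def checks_out(domain, loves):
    # Premise: ∀x∀y (∃z Loves(y,z) → Loves(x,y)) — every lover is loved by everyone
    premise = all((not any(loves[(y, z)] for z in domain)) or loves[(x, y)]
                  for x in domain for y in domain)
    # Conclusion: ∃x∃y Loves(x,y) → ∀x∀y Loves(x,y)
    conclusion = (not any(loves.values())) or all(loves.values())
    return (not premise) or conclusion

ok = True
for n in (1, 2, 3):
    dom = range(n)
    pairs = [(x, y) for x in dom for y in dom]
    for bits in product([False, True], repeat=len(pairs)):
        ok = ok and checks_out(dom, dict(zip(pairs, bits)))
print(ok)  # prints True
```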
Dangerfields argument
You'll remember this one from the notes to Chapter 11, where we used it as a translation
problem. You can find Dangerfield's Argument on the Supplementary Exercises page. Here
it is:
∀x (¬Respects(x, x) → ∀y ¬Respects(y, x))
∀x ∀y (¬Respects(x, y) → ¬Hires(x, y))
∀x (∀y ¬Respects(x, y) → ∀y ¬Hires(y, x))
Informal Proof:
We'll use the method of general conditional proof. Our conclusion is that anyone who
respects no one will not be hired by anyone. So, let d be a person who respects no one;
we will prove that no one will hire d.
To do this, we will let c be any arbitrary person; we will prove that c will not hire d.
Note that it follows from the first premise (by ∀ Elim) that (1) if d doesn't respect
himself, then no one respects d. But we have assumed that d respects no one, and this
logically implies that (2) d does not respect himself. And (1) and (2) imply (by → Elim)
that no one respects d. Hence (by ∀ Elim), c does not respect d. But we may infer from
the second premise that if c doesn't respect d, c will not hire d. Hence, c will not hire d.
But c was arbitrary, so no one will hire d. Our conclusion then follows by general
conditional proof.
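The informal argument can also be checked by brute force over small domains. In this sketch (my own encodings: R and H are dictionaries mapping pairs to booleans for Respects and Hires; only domains of size 1 and 2 are enumerated, to keep the search small), every model of the two premises is confirmed to satisfy the conclusion:

```python
from itertools import product

def dangerfield_valid(domain, R, H):
    # Premise 1: ∀x (¬R(x,x) → ∀y ¬R(y,x))
    p1 = all(R[(x, x)] or all(not R[(y, x)] for y in domain) for x in domain)
    # Premise 2: ∀x∀y (¬R(x,y) → ¬H(x,y))
    p2 = all(R[(x, y)] or not H[(x, y)] for x in domain for y in domain)
    # Conclusion: ∀x (∀y ¬R(x,y) → ∀y ¬H(y,x))
    concl = all(any(R[(x, y)] for y in domain) or all(not H[(y, x)] for y in domain)
                for x in domain)
    return (not (p1 and p2)) or concl

ok = True
for n in (1, 2):
    dom = range(n)
    pairs = [(x, y) for x in dom for y in dom]
    for r_bits in product([False, True], repeat=len(pairs)):
        R = dict(zip(pairs, r_bits))
        for h_bits in product([False, True], repeat=len(pairs)):
            ok = ok and dangerfield_valid(dom, R, dict(zip(pairs, h_bits)))
print(ok)  # prints True
```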
We will now implement this strategy in a Fitch proof.
Stage 1
We begin by starting a subproof assuming ∀y ¬Respects(d, y), with boxed constant d.
The last line of the subproof will be ∀y ¬Hires(y, d). We then apply ∀ Intro to obtain
the conclusion. To see where the proof stands at this stage, open Proof Dangerfield
Stage 1.
Stage 2
The next step is to start a new subproof, within the first, with c as a boxed constant, but
no sentence in the assumption line. Our goal for this subproof is ¬Hires(c, d). First we
apply ∀ Elim twice, to the first premise and to line 3, with d replacing x. That sets up
an application of → Elim to obtain ∀y ¬Respects(y, d). We can then apply ∀ Elim to
this sentence, with c replacing y. To see where the proof stands now, look at Proof
Dangerfield Stage 2.
Stage 3
We now turn our attention to the second premise. We apply ∀ Elim again, with c
replacing x and d replacing y. (You can do this in one step by specifying the
replacements this way: :x > c :y > d. But the fastest way is in two steps, with
Fitch supplying the right substitution at each step by default.) One more application of
→ Elim gives us ¬Hires(c, d), which was the goal sentence for this subproof. This
completes the proof; see Proof Dangerfield's Argument.
Leibnizs argument
The philosopher Leibniz (1646–1716) wrote, "I define a good man as one who loves all men."
He also proposed to deduce all the theorems of equity and justice from this and a few other
basic definitions. One such theorem might be that there is a man who loves all good men. We
will construct a proof in Fitch to show that this theorem is, indeed, a FO-consequence of
Leibniz's definition of a good man.
[By "man" here Leibniz clearly meant person; so we'll treat the domain of discourse as
persons, which will simplify the translation process.]
We'll treat the definition as a universally quantified biconditional:
∀x (Good(x) ↔ ∀y Loves(x, y))
The translation into FOL of the theorem to be proved is straightforward:
∃y ∀x (Good(x) → Loves(y, x))
Open the file Leibniz's Argument (on the Supplementary Exercises page) and try to
construct the proof. For a hint on how to start, open Proof Leibniz Start. As you can see, the
strategy is proof by cases, with the two cases being (1) there are some good persons, and (2)
there are not any good persons. Since the disjunction of (1) and (2) is an instance of excluded
middle, we can obtain it by Taut Con.
The problem now is to show that in either case, we can deduce the conclusion. The best way
to do this is to sketch an informal proof. Then you should be able to model your Fitch proof
on this.
Informal Proof:
Case (1): Suppose there is at least one good person, b, for example. Then, according to
the definition, b loves all persons. Since b loves all persons, b clearly loves all the good
ones, too. So someone loves all good persons.
Case (2): Suppose, on the other hand, that there are no good persons. That means that
any universal generalization you make about all good persons will be vacuously true.
For example, all good persons are loved by d. (Here, d can be anyone you like, and
need not be an arbitrarily chosen person.) That is, d loves all good persons. Hence,
someone loves all good persons.
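Both cases of the informal proof can be checked at once by brute force. In the sketch below (my own helper names, small domains only), the definition ∀x (Good(x) ↔ ∀y Loves(x, y)) fixes the Good predicate once a Loves relation is chosen, so it suffices to enumerate the Loves relations and test the theorem in each resulting model:

```python
from itertools import product

def leibniz_theorem_holds(domain, loves):
    # Leibniz's definition determines Good from Loves: Good(x) ↔ ∀y Loves(x,y)
    good = {x: all(loves[(x, y)] for y in domain) for x in domain}
    # Theorem: ∃y ∀x (Good(x) → Loves(y, x))
    return any(all((not good[x]) or loves[(y, x)] for x in domain)
               for y in domain)

ok = True
for n in (1, 2, 3):
    dom = range(n)
    pairs = [(x, y) for x in dom for y in dom]
    for bits in product([False, True], repeat=len(pairs)):
        ok = ok and leibniz_theorem_holds(dom, dict(zip(pairs, bits)))
print(ok)  # prints True
```

Models with a good person land in case (1) of the informal proof; models with none land in the vacuous case (2).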
Now the problem is to come up with a Fitch implementation of this strategy. We will do this
in stages.
Stage 1
Case (1) has an ∃ sentence as its assumption line, so we use existential instantiation
strategy: assume Good(b), which is an instance of ∃x Good(x), deduce the desired
conclusion, and apply ∃ Elim.
Case (2) is more complicated. Since we are attempting to prove an ∃∀ sentence, the
strategy is to come up with an instance of the ∃ sentence, and then use ∃ Intro. But the
instance in question will be a general conditional sentence, namely,
∀x (Good(x) → Loves(d, x)). So the strategy here is to use general conditional proof:
start a new subproof assuming Good(c), deduce Loves(d, c), and apply ∀ Intro. Note
that to employ this strategy, one must introduce c as a boxed constant on the assumption
line. But d does not have to be a boxed constant; it can be any name you like.
To see where the proof stands at this stage, open Proof Leibniz Stage 1.
Stage 2
Case (1): We have assumed Good(b), and wish to prove that b loves all good persons.
So, we use ∀ Intro strategy: assume that arbitrary person a is good, and prove that b
loves a.
Case (2) has ¬∃x Good(x) as its assumption line, and our subproof within that assumes
Good(c). From these we should be able to get a contradiction. Then we can use ⊥ Elim
to get Loves(d, c).
To see the proof at this stage, look at Proof Leibniz Stage 2.
Stage 3
Case (1): We apply ∀ Elim, obtaining Good(b) ↔ ∀y Loves(b, y). We can then
detach the right side of this biconditional and apply ∀ Elim again, obtaining
Loves(b, a). We then end the subproof and apply ∀ Intro.
Case (2): From our assumption Good(c) we can obtain ∃x Good(x) by ∃ Intro. This
gives us the contradiction we were looking for.
Putting all the pieces together, we get the completed proof, which you can see by opening
Proof Leibniz's Argument.
Copyright 2004, S. Marc Cohen Revised 6/1/04
14-1
Chapter 14: More on Quantification
14.1 Numerical quantification
In what we've seen so far of FOL, our quantifiers are limited to the universal and the existential.
This means that we can deal with English quantifiers like everything and something. We quickly
discovered that with a judicious use of truth-functional connectives, we could also express such
English quantifiers as nothing, every cube, some tetrahedron, all large cubes, etc. The FOL
representations of these quantifier phrases in English make use of the quantifiers plus some truth-
functional machinery:
Nothing ¬∃x …
Every cube ∀x (Cube(x) → …
Some tetrahedron ∃x (Tet(x) ∧ …
All large cubes ∀x ((Cube(x) ∧ Large(x)) → …
We will now see how to use our existing FOL machinery to represent numerical quantifiers. At this
point, there is one numerical quantifier we know how to express: at least one. That is because the
sentence At least one cube is large goes easily into FOL as ∃x (Cube(x) ∧ Large(x)).
Now we will learn how to use FOL to express such numerical quantifiers as the following: at least
two, at most one, exactly one, at least three, at most two, exactly two, etc. The interesting aspect of
this is that we do not need to enrich FOL in any way in order to accomplish this. We will not have
special numerical quantifiers. Rather, we will use our regular universal and existential
quantifiers, together with truth-functional connectives and (most importantly) the identity sign.
In the following examples, we will be using FOL to say something about the number of cubes there
are.
At least two
Suppose we want to say that there are at least two cubes. A first effort might be
∃x∃y (Cube(x) ∧ Cube(y)). But we quickly realize that this cannot be correct. For nothing in
this FOL sentence tells us that x and y have to be different cubes. (If this is not clear, put this
sentence into a new Tarski's World sentence file and evaluate it in a world with a single cube.
You will see that it comes out true. If you don't see why, try playing the game against Tarski,
committing to the falsity of this sentence. Notice why Tarski will always win.)
Obviously, what is needed is a clause guaranteeing that x and y are distinct objects. And such
a clause is easy to come by: x ≠ y. So, our final version of there are at least two cubes is:
∃x∃y (Cube(x) ∧ Cube(y) ∧ x ≠ y)
Notice that we can have an at least two quantification that is not restricted to cubes. If we
want to say simply that there are at least two things (without being specific about any other
properties these things might have) we can write:
∃x∃y x ≠ y
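The difference between the flawed first attempt and the corrected translation shows up immediately when we evaluate both in a one-cube world. A minimal sketch (my own helper names; blocks are represented as (name, shape) pairs so that distinct blocks compare as unequal):

```python
def naive(world):
    # ∃x∃y (Cube(x) ∧ Cube(y)) — the flawed first attempt: x and y may be the same block
    return any(x[1] == "cube" and y[1] == "cube" for x in world for y in world)

def at_least_two_cubes(world):
    # ∃x∃y (Cube(x) ∧ Cube(y) ∧ x ≠ y) — the corrected version
    return any(x[1] == "cube" and y[1] == "cube" and x != y
               for x in world for y in world)

one_cube = [("a", "cube")]
two_cubes = [("a", "cube"), ("b", "cube")]
print(naive(one_cube), at_least_two_cubes(one_cube))    # True False
print(naive(two_cubes), at_least_two_cubes(two_cubes))  # True True
```

The naive version comes out true in the single-cube world, exactly as the game against Tarski would reveal.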
At most one
Representing There is at most one cube in FOL is a bit more complicated. Here is the general
idea. Suppose that your domain of discourse is a barrel that contains cubes, tetrahedra, and
dodecahedra (a kind of three-dimensional Tarski's World!) and suppose that there is at most
one cube in the barrel.
Now suppose that you reach into the barrel and pull out a cube, and then throw the cube back
in. Then you reach in again and pull out a cube. (Pretty amazing, considering that there is at
most one cube in the barrel.) In fact, we know for certain that you pulled out the same cube
twice!
This is how we will put the claim when we couch it in FOL: if you reach in the barrel and pull
out a cube, x, and (after returning the cube to the barrel) reach in again and pull out a cube, y,
then x = y.
∀x∀y ((Cube(x) ∧ Cube(y)) → x = y)
Similarly, we can have an at most one quantification that is not restricted to cubes. If we want
to say simply that there is at most one thing (without being specific about any other properties
it might have) we can write:
∀x∀y x = y
Exactly one
Having dealt with at least one and at most one, we already have everything we need to handle
exactly one. For it is nothing more than the conjunction of the other two. That is, for there to
be exactly one cube is just for it to be true both that there is at least one cube
and that there is at most one cube.
So a simple way to arrive at an FOL translation of There is exactly one cube is just to conjoin
our FOL versions of at least one and at most one:
∃x Cube(x) ∧ ∀x∀y ((Cube(x) ∧ Cube(y)) → x = y)
However, there are equivalent, but more compact, ways of expressing this. We will be led to
one such formulation by the following line of thought. For there to be exactly one cube is for
there to be something, x, such that x is a cube, and nothing but x is a cube. That is, there is an
x such that x is a cube, and no matter which y you pick, if y is a cube, then y and x are one and
the same object. In FOL symbols:
∃x (Cube(x) ∧ ∀y (Cube(y) → y = x))
This is the version presented in LPL on p. 370. An even more compact version can be
produced, however. We can delete the clause Cube(x), but get the effect of including it by
changing the → to a ↔. That gives us:
∃x∀y (Cube(y) ↔ y = x)
In other words, to say that there is exactly one cube is to say that there is an x such that no
matter which y you pick, y is a cube iff y and x are one and the same object. This is the most
compact version of exactly one.
As before, we can have an exactly one quantification that is not restricted to cubes. If we want
to say simply that there is exactly one thing (many philosophers have actually believed this!)
we can write:
∃x∀y y = x
In fact, we met this sentence earlier, when we first studied multiple quantification with
mixed quantifiers. You might wish to test this sentence out in Tarski's World. You will
quickly discover that it is true in any world containing exactly one block. As soon as you add
a second block, the sentence becomes false.
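We can also check mechanically that the conjunction version and the compact biconditional version of "there is exactly one cube" agree in every small world. A sketch (my own helper names; worlds are lists of (index, shape) pairs, and all worlds of up to three blocks are enumerated):

```python
from itertools import product

def conjunction_version(world):
    # ∃x Cube(x) ∧ ∀x∀y ((Cube(x) ∧ Cube(y)) → x = y)
    at_least = any(s == "cube" for _, s in world)
    at_most = all(not (world[i][1] == "cube" and world[j][1] == "cube") or i == j
                  for i in range(len(world)) for j in range(len(world)))
    return at_least and at_most

def compact_version(world):
    # ∃x ∀y (Cube(y) ↔ y = x)
    return any(all((world[j][1] == "cube") == (i == j) for j in range(len(world)))
               for i in range(len(world)))

shapes = ["cube", "tet", "dodec"]
ok = all(conjunction_version(list(enumerate(w))) == compact_version(list(enumerate(w)))
         for n in (1, 2, 3) for w in product(shapes, repeat=n))
print(ok)  # prints True
```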
At least three
At least two required two quantifiers and a non-identity clause. So it is easy to see that at least
three will require three quantifiers and three non-identity clauses. That is, in FOL we express
there are at least three cubes as:
∃x∃y∃z (Cube(x) ∧ Cube(y) ∧ Cube(z) ∧ x ≠ y ∧ y ≠ z ∧ x ≠ z).
Three non-identity clauses are required because we need to state that we can select cubes in
such a way that after three selections, we never selected the same cube twice. And we can say
simply that there are at least three things; just drop the Cube wffs from the sentence above:
∃x∃y∃z (x ≠ y ∧ y ≠ z ∧ x ≠ z).
At most two
To understand our treatment of there are at most two cubes, put yourself back in the position
of someone pulling blocks out of a barrel. If there are at most two cubes in the barrel, that
means that if you make three draws from the barrel (under the conditions described earlier)
and get a cube every time, then you must have drawn the same cube more than once. That is,
in FOL we express there are at most two cubes as:
∀x∀y∀z ((Cube(x) ∧ Cube(y) ∧ Cube(z)) → (x = y ∨ y = z ∨ x = z)).
As before, we can say simply that there are at most two things by dropping the Cube wffs and
the → from the sentence above and keeping just the quantifiers and the disjunction of identity
clauses:
∀x∀y∀z (x = y ∨ y = z ∨ x = z).
Exactly two
To handle exactly two, we can build on our treatment of exactly one. Conceptually, the
simplest treatment is just to conjoin our two FOL sentences that translate at least two and at
most two. But the resulting FOL sentence is not very compact. To get a more compact version,
we can build on the compact version of exactly one. That is, instead of saying there is
something such that it and it alone is a cube, we would say there are two distinct things such
that they and they alone are cubes. In FOL:
∃x ∃y (x ≠ y ∧ ∀z (Cube(z) ↔ (z = x ∨ z = y))).
If you are a dualist, and wish to say that there are exactly two things, you'd put it this way in
FOL:
∃x ∃y (x ≠ y ∧ ∀z (z = x ∨ z = y)).
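As a check that the compact formula really means "exactly two," we can translate it directly into a finite-model evaluator and compare it against simple counting over all small worlds. A sketch (my own helper names; worlds are lists of (index, shape) pairs):

```python
from itertools import product

def exactly_two_cubes(world):
    # ∃x ∃y (x ≠ y ∧ ∀z (Cube(z) ↔ (z = x ∨ z = y)))
    return any(x != y and all((s == "cube") == (z == x or z == y)
                              for z, s in world)
               for x, _ in world for y, _ in world)

shapes = ["cube", "tet"]
ok = all(exactly_two_cubes(list(enumerate(w))) == (w.count("cube") == 2)
         for n in range(1, 5) for w in product(shapes, repeat=n))
print(ok)  # prints True
```

The inner biconditional forces the cubes to be precisely {x, y}, which together with x ≠ y pins the count at two.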
Examples
Let's test our understanding of some numerical sentences. Open the file Ch14Ex1.sen (on
the Supplementary Exercises web page). Notice that the English sentences all make numerical
claims, and that they all appear with their FOL translations. Can you see why the FOL
sentences say what their English translations do?
Try constructing a world in which all of the sentences are true. Now try making changes to the
world, falsifying some of the sentences. Now make more changes, so that all the sentences are
false.
Next, destroy all the blocks in the world in which you are evaluating the sentences, and open
the file Ch14Ex1a.sen. You will find the same FOL sentences as in the previous sentence
file, but all the English translations have been deleted. Do you still know what the FOL
sentences mean? Try to rebuild your world so that all the sentences come out true.
As a final test of your understanding of numerical quantification in FOL, open the file
Ch14Ex2.sen. There are eight numerical claims here, but no accompanying English
translations. These sentences are more complicated than the ones in the previous file, so read
them carefully. Then create a world making as many of the sentences true as you can. (You
should be able to create a world in which they are all true.)
You may have trouble understanding some of these sentences. Try putting any that you are in
doubt about into English. To assist you with the translation, make changes to your world and
observe what happens to the truth value of the sentence in question. That should help you
figure out what the sentence means. Save your world as World Ch14Ex2.wld. Save the
sentence file (with your English annotations) as Sentences Ch14Ex2.sen.
Generalizing
Obviously, what we have done for the numbers 1, 2, and 3, we can do for any integer n. That
is, for any n, we can produce FOL sentences that translate:
There are at least n Fs
There are at most n Fs
There are exactly n Fs.
Needless to say, as n gets larger, the FOL sentences become longer and more complex; this is
hardly an ideal language in which to do arithmetic! But the point is that the expressive power
of FOL is considerable. For any condition expressible in FOL and for any finite number, n, one
can, in principle, construct an FOL sentence saying that n things satisfy that condition.
Abbreviations for numerical claims
Rather than write out the (sometimes very long) FOL sentences that express numerical claims,
we can use the following abbreviation scheme.
∃≥n x P(x) abbreviates the FOL sentence asserting "There are at least n objects
satisfying P(x)."
∃≤n x P(x) abbreviates the FOL sentence asserting "There are at most n objects
satisfying P(x)."
∃!n x P(x) abbreviates the FOL sentence asserting "There are exactly n objects
satisfying P(x)."
For the special case where n = 1, it is customary to write ∃!x P(x) as a shorthand for
∃!1 x P(x). This can be read as "there is a unique x such that P(x)."
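In a finite model, each of these abbreviations amounts to a claim about how many objects satisfy P. A minimal sketch (my own helper name `count_satisfying`; the domain and predicate are illustrative):

```python
def count_satisfying(domain, P):
    """Number of objects satisfying P — what the ∃≥n / ∃≤n / ∃!n abbreviations track."""
    return sum(1 for x in domain if P(x))

domain = ["a", "b", "c", "d"]
is_vowel = lambda x: x in "aeiou"          # exactly one vowel in this domain
print(count_satisfying(domain, is_vowel) >= 1)   # ∃≥1 x P(x): True
print(count_satisfying(domain, is_vowel) <= 1)   # ∃≤1 x P(x): True
print(count_satisfying(domain, is_vowel) == 1)   # ∃!x P(x): True
```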
More translations
In translating numerical claims, we made heavy use of =, the identity predicate. There are
other common claims, not explicitly numerical, that also require the use of identity. We will
look at a couple of them here.
Superlatives
A superlative is an adjective ending in "-est," such as largest, oldest, strongest, etc.
Suppose we want to write an FOL sentence corresponding to b is the largest cube. We
might try one of the following:
1. LargestCube(b)
2. Cube(b) ∧ Largest(b)
But both of these seem problematic. (1) conceals too much information, for it does not
have Cube(b) as a FO consequence, whereas b is a cube certainly seems like a FO
consequence of b is the largest cube. (2) avoids this problem, but introduces another.
For (2) says that b is the largest thing, whereas our original sentence only says that b is
the largest cube.
And b might be the largest cube without being the largest thing. (Imagine a world in
which b is a medium cube, all the other cubes are small, and c is a large tetrahedron.)
The trick is to translate this sentence into FOL using only the comparative predicate
larger. To say that b is the largest cube is to say that b is larger than all the other cubes.
That is, b is a cube, and every cube that is not b is smaller than b. In FOL:
Cube(b) ∧ ∀x ((Cube(x) ∧ x ≠ b) → Larger(b, x))
In general, to be the F-est thing is to be F-er than everything else; to be the F-est G is to
be a G that is F-er than every other G. In colloquial speech, people tend to be careless
and leave out the "other." Meaning to assert that Clark Kent is the strongest man, they
may say "Clark Kent is stronger than any man." Strictly speaking, of course, this is not
true: Clark Kent may be stronger than all the other men, but he is not stronger than
himself!
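The translation can be tested against the world described above, where b is the largest cube but not the largest thing. A sketch (my own helper names; sizes and shapes are the hypothetical world from the text, with c a large tetrahedron):

```python
def is_largest_cube(b, world, larger):
    # Cube(b) ∧ ∀x ((Cube(x) ∧ x ≠ b) → Larger(b, x))
    return world[b] == "cube" and all(
        not (shape == "cube" and x != b) or larger(b, x)
        for x, shape in world.items())

sizes = {"a": 1, "b": 2, "c": 3}                 # c is the largest thing
world = {"a": "cube", "b": "cube", "c": "tet"}   # ...but c is a tetrahedron
larger = lambda x, y: sizes[x] > sizes[y]

print(is_largest_cube("b", world, larger))              # True
print(all(larger("b", x) for x in world if x != "b"))   # False: b isn't the largest thing
```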
Exceptives
An exceptive is a claim that makes a universal generalization with an exception, such as
everything is a cube except c. We can translate everything is a cube into FOL as
∀x Cube(x), and c is not a cube as ¬Cube(c). But if we simply conjoin these:
∀x Cube(x) ∧ ¬Cube(c)
we get a contradiction, which our original sentence certainly is not. What we want to
say, roughly, is c is not a cube, but everything else is a cube:
¬Cube(c) ∧ ∀x (x ≠ c → Cube(x)).
This translation is correct, but we can produce a more compact version. Think of the
sentence c is not a cube as a universal generalization: c is no cube, or (equivalently) no
cube is c. So instead of ¬Cube(c) we can write:
∀x (Cube(x) → x ≠ c).
Using this in place of the equivalent ¬Cube(c), we get:
∀x (Cube(x) → x ≠ c) ∧ ∀x (x ≠ c → Cube(x))
which, by moving the quantifier to the outside (see §10.4), is equivalent to:
∀x [(Cube(x) → x ≠ c) ∧ (x ≠ c → Cube(x))].
Finally, we replace the embedded conjunction of conditional wffs with the
corresponding biconditional, and obtain:
∀x (Cube(x) ↔ x ≠ c).
This is the simplest way to express the exceptive sentence everything is a cube except c
in FOL. Now read this FOL sentence from left to right, and compare it with the English
sentence. Did you notice that the phrase except c is rendered in FOL by ↔ x ≠ c?
Exceptive sentences go into FOL as negative biconditionals.
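The equivalence of the long version and the biconditional version can be verified by evaluating both in every possible three-block world. A sketch (my own helper names; the named exception is the block "c", and each block is either a cube or a tetrahedron):

```python
from itertools import product

def long_version(world, c):
    # ¬Cube(c) ∧ ∀x (x ≠ c → Cube(x))
    return world[c] != "cube" and all(x == c or s == "cube" for x, s in world.items())

def biconditional_version(world, c):
    # ∀x (Cube(x) ↔ x ≠ c)
    return all((s == "cube") == (x != c) for x, s in world.items())

names = ["a", "b", "c"]
ok = all(long_version(dict(zip(names, w)), "c") ==
         biconditional_version(dict(zip(names, w)), "c")
         for w in product(["cube", "tet"], repeat=3))
print(ok)  # prints True
```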
14.2 Proving numerical claims
Since we can translate numerical claims into FOL, we can evaluate the validity, in FOL, of
arguments containing such claims. Consider, for example, the following argument:
There are exactly two cubes.
There are exactly three non-cubes.
There are exactly five objects.
The conclusion is clearly a logical consequence of the premises; there is no possible circumstance
in which there are two cubes, three non-cubes, but not a total of five objects, cubes and non-cubes,
altogether. But is the conclusion a FO consequence of the premises?
Using our abbreviations for FOL numerical quantifications, the FOL version of the argument looks
like this:
∃!2 x Cube(x)
∃!3 x ¬Cube(x)
∃!5 x (x = x)
Can we prove this conclusion in F? Before we can do so, of course, we would have to write out
the real FOL sentences, instead of the abbreviations we used above. And when we do this, we
will see that the argument contains no predicates that affect its validity; Cube could just as easily
be replaced by Tove or any other predicate. So the conclusion is, indeed, a FO consequence of the
premises. And since F is complete, it follows that it is possible, at least in principle, to prove this
conclusion in F.
What this means
Such a proof would seem to come very close to being a proof, purely within formal logic,
that 2+3=5. Of course, we are not really proving things about numbers, but about cubes and
non-cubes, and about relations among various conditions of their identity and distinctness. But
we are certainly capturing some basic arithmetical ideas within a system of pure logic.
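The counting content of the argument can be checked directly: the sketch below (my own encoding; worlds of up to six objects, each either a cube or something else) confirms that any world with exactly two cubes and exactly three non-cubes has exactly five objects, which is the arithmetical fact 2 + 3 = 5 in disguise:

```python
from itertools import product

ok = True
for n in range(7):                               # worlds with up to six objects
    for shapes in product(["cube", "other"], repeat=n):
        cubes = shapes.count("cube")
        non_cubes = n - cubes
        if cubes == 2 and non_cubes == 3:        # both premises hold
            ok = ok and (n == 5)                 # conclusion: exactly five objects
print(ok)  # prints True
```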
What this does not mean
Although we can express (and prove) many arithmetical claims in FOL, there are still many
kinds of arithmetical claims that we can neither express nor prove. For example, consider the
following claim:
There are more cubes than tetrahedra.
This claim does not tell us how many cubes there are, only that the number of cubes is larger
than the number of tetrahedra. In other words, there are numbers n and m such that there are n
cubes and there are m tetrahedra, and n > m. One might try to state this in FOL as follows:
Copyright 2004, S. Marc Cohen Revised 6/1/04
14-7
∃n∃m (∃!n x Cube(x) ∧ ∃!m x Tet(x) ∧ n > m)
But there are two problems here. First, our sentence contains the predicate >, which is not one
of the logical symbols of FOL. Second (and more importantly), what we have written is not the
abbreviation of any FOL sentence. A numerical subscript of the form ∃!n tells us how to write
the FOL sentence we are abbreviating, but only when n is some positive integer. We have no
way of dealing with a variable in a numerical quantifier.
In short, our numerical quantifiers do not quantify over numbers; they are simply
abbreviations of more complex looking FOL sentences that quantify over whatever objects
(cubes, etc.) are in their domain of discourse. We can axiomatize arithmetic in FOL (see
§16.4), but we cannot express all arithmetic claims in pure FOL.
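As an illustration (not from LPL), here is a hypothetical Python helper that treats a numerical quantification as a counting claim over a finite domain. Note that n must be supplied as a concrete integer from outside the "formula", which mirrors the point above: the numerical quantifier abbreviates an FOL sentence only for fixed n.

```python
def exactly(n, pred, domain):
    """Evaluate 'there are exactly n x such that P(x)' over a finite domain
    by counting the objects that satisfy the predicate. The integer n comes
    from outside: the abbreviation scheme only works for concrete n."""
    return sum(1 for x in domain if pred(x)) == n

# A toy world with two cubes and one tetrahedron.
world = ["cube", "cube", "tet"]
is_cube = lambda x: x == "cube"
```

With this world, `exactly(2, is_cube, world)` holds while `exactly(3, is_cube, world)` does not.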
In practice, of course, proofs of arithmetical claims in FOL are messy and difficult to come by; FOL
is not a very good language in which to do arithmetic. Still, we can easily handle cases where n is
very small, and use Fitch to construct proofs of at least some numerical claims. Here's an easy
example to start with:
There is at least one cube.
There is at least one tetrahedron.
There are at least two things.
You'll find the problem on the Supplementary Exercises page as Ch14Ex3.prf. In this problem,
you'll need to use Ana Con. You should use it only to obtain ⊥ from atomic sentences.
14.3 The, both, and neither
When the word 'the' combines with a noun phrase to form an expression that purports to refer to
exactly one object, the entire phrase (of the form 'the so-and-so') is called a definite description.
Here are some examples of definite descriptions:
The tallest player on the team
The king of Norway
The sum of 3 and 5 [more usually written 3 + 5]
The 40th president of the U.S.A.
Whistler's mother [note the absence of 'the' in this case]
Notice that definite descriptions function syntactically like names, as illustrated by the following
pairs of sentences:
John has red hair.
The tallest player on the team has red hair.
Reagan was a Republican.
The 40th president of the U.S.A. was a Republican.
But there is good reason to think that definite descriptions do not function semantically like
names. In fact, FOL would be inadequate if it treated descriptions in the same way it treats proper
names, namely, as logical constants. For then the FOL versions of both of these arguments would
be, effectively, the same:
John has red hair.
Some player on the team has red hair.
The tallest player on the team has red hair.
Some player on the team has red hair.
Clearly, the second argument is valid, but the first is not (for nothing in the first argument tells us
that John is a player on the team). The premise of the second argument contains information that
the premise of the first argument lacks. So we should not treat definite descriptions in FOL as if
they were names.
But now a problem arises. For logic cannot guarantee that a definite description actually succeeds
in picking out a unique object. Consider a sentence like The cube is small, and imagine that you are
evaluating it in various Tarski Worlds. How would you assess its truth value in a given world? You
would expect to find exactly one cube in the world, and then you would check its size: if it's
small, the sentence is true; otherwise, the sentence is false.
But suppose there are no cubes in the world? What is the truth value of the sentence in that case?
Or suppose there are two cubes, one of which is small and one of which is not? What is the truth
value of the sentence in that case?
In both of these cases, something has gone wrong with the description. For convenience, let's say
that a description, 'the F', is a good description when there is exactly one F, and a bad description
otherwise. Thus, there are two ways in which a description can go bad. 'The senator from
Washington' is a bad description in one way, since there is more than one senator from Washington,
and 'the present king of France' is a bad description in another way, since there is no king of France
at present.
How are we to evaluate sentences that contain bad descriptions? This was the problem that
motivated Bertrand Russell's famous Theory of Descriptions (1905).
Russell's Theory of Descriptions
According to Russell, a sentence containing a definite description can be thought of as a
conjunction with three conjuncts. Consider such a sentence:
The cube is small.
On Russell's theory, this amounts to the following conjunction:
There is at least one cube, and there is at most one cube, and every
cube is small.
This easily goes into FOL as:
∃x Cube(x) ∧
∀x∀y ((Cube(x) ∧ Cube(y)) → y = x) ∧
∀x (Cube(x) → Small(x))
Its easy to see that this sentence can be false in three different ways, depending on which
conjunct is false: there may be no cubes (first conjunct false), or more than one cube (second
conjunct false), or some cube that is not small (third conjunct false).
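Here is a sketch in Python of Russell's three-conjunct analysis, evaluated over a toy world represented as a list of (shape, size) pairs (the representation is my own illustration, not Tarski's World's):

```python
def russell_the_cube_is_small(world):
    """Evaluate 'The cube is small' on Russell's analysis. A world is a
    list of (shape, size) pairs; the three conjuncts are checked separately."""
    cubes = [(shape, size) for (shape, size) in world if shape == "cube"]
    at_least_one = len(cubes) >= 1                            # at least one cube
    at_most_one = len(cubes) <= 1                             # at most one cube
    all_small = all(size == "small" for (_, size) in cubes)   # every cube is small
    return at_least_one and at_most_one and all_small
```

The three ways of being false show up directly: a world with no cubes, a world with two cubes, and a world with one large cube all make the sentence false.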
Equivalent formulations of Russell's analysis
Equivalently, but more compactly, Russell's analysis can be put as follows:
∃x (Cube(x) ∧ ∀y (Cube(y) → y = x) ∧ Small(x))
This is the standard FOL sentence that LPL presents as Russell's analysis of 'the cube
is small'.
An even more compact version looks like this:
!x "y ((Cube(y) # y = x) Small(x))
That is, there is exactly one cube, and its small. All three versions are, of course,
equivalent.
Both and neither
We may note in passing that Russell's analysis can be extended to cover the determiners
'both' and 'neither'. That is, we can treat phrases like 'both cubes' and 'neither tetrahedron'
along the same lines as 'the cube'. This can be seen easily from the following examples.
Both cubes are small.
On Russell's analysis, this says that there are exactly two cubes, and each cube is small.
That is:
∃!2 x Cube(x) ∧ ∀x (Cube(x) → Small(x))
Similarly, 'neither tetrahedron is large', on Russell's analysis, says that there are exactly
two tetrahedra, and no tetrahedron is large. That is:
∃!2 x Tet(x) ∧ ∀x (Tet(x) → ¬Large(x))
Remember that what we have produced above are really just abbreviations of the real
FOL sentences that would count as the Russellian analyses of 'both' and 'neither'. (Real
FOL sentences don't contain numerical quantifiers, like ∃!2 x.)
Two key features of Russell's theory
The beauty of Russell's analysis is that it provides a truth value for every sentence
containing a definite description, even if it's a bad description. If someone says 'the cube
is small' when there is no cube, he or she has simply said something false.
Russell's analysis also provides this interesting feature: although a sentence containing
a description may be perfectly unambiguous, the introduction of a logical operation
such as negation may introduce an ambiguity. Thus, consider the sentence 'the cube is
small', whose Russellian analysis looks like this:
1. ∃x (Cube(x) ∧ ∀y (Cube(y) → y = x) ∧ Small(x))
Now consider what happens when a negation is introduced:
The cube is not small.
On Russell's theory, this sentence is ambiguous. On one reading, it asserts that there is
exactly one cube, and says, further, that it is not small. In FOL:
2. ∃x (Cube(x) ∧ ∀y (Cube(y) → y = x) ∧ ¬Small(x))
But on another reading, the English sentence says something different, namely, that it is
not the case both that there is exactly one cube and that it is small. In FOL:
3. ¬∃x (Cube(x) ∧ ∀y (Cube(y) → y = x) ∧ Small(x))
You can see the difference between these sentences by comparing their evaluations in
various worlds. Open the file Russell.sen, where you will find these three sentences.
Now create a world with a single cube in it (it may contain non-cubes, too, but they are
irrelevant to these sentences) and evaluate all three sentences.
You will notice that (2) and (3) will always agree with one another, and disagree with
(1), so long as the description is good, i.e., as long as there is exactly one cube in the
world. But notice what happens when you add a cube, or remove all the cubes. In
worlds like this, the description the cube is bad, and sentences (2) and (3) will diverge
in truth value. Sentence (1) will be false; but (2) will also be false. (After all, (2) is not
the negation of (1), since (2) has its ¬ embedded in the last conjunct.) The negation of
(1) is (3), and (3) will be true. On Russell's theory, then, there are situations in which
both 'the cube is small' and 'the cube is not small' are false.
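The divergence between the two readings can also be checked with a small Python sketch (my own illustration), using a toy representation of worlds as lists of (shape, size) pairs:

```python
def sentence_1(world):
    """(1) The cube is small: exactly one cube, and it is small."""
    cubes = [obj for obj in world if obj[0] == "cube"]
    return len(cubes) == 1 and cubes[0][1] == "small"

def sentence_2(world):
    """(2) Narrow-scope negation: exactly one cube, and it is not small."""
    cubes = [obj for obj in world if obj[0] == "cube"]
    return len(cubes) == 1 and cubes[0][1] != "small"

def sentence_3(world):
    """(3) Wide-scope negation: it is not the case that (1) holds."""
    return not sentence_1(world)
```

With one small cube, (1) is true and (2) and (3) are both false; with no cubes, the description is bad, (2) is false, and (3) is true, just as described above.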
Strawson's analysis
Russell's theory has not convinced everyone. One celebrated critique is that of philosopher P.
F. Strawson. According to Strawson, Russell is mistaken in supposing that one who utters the
sentence 'the cube is small' makes three claims: that there is at least one cube, and at most one
cube, and that every cube is small. Rather, such a person does not even succeed in making a
claim unless there is exactly one cube. That there is exactly one cube is not part of what the
speaker claims, but is a presupposition of his making a claim at all.
If the presupposition is fulfilled (that is, if there is exactly one cube), then the utterer of the
sentence 'the cube is small' claims, about that cube, that it is small. If the presupposition is not
fulfilled (that is, if there is more than one cube, or if there are no cubes), then the speaker has
failed to make any claim.
So one who utters a sentence containing a bad description has not succeeded in making a
claim. And on Strawson's account, truth values attach not to the sentences we utter, but to the
claims we make with them. Hence, according to Strawson, nothing true or false gets expressed
by a sentence containing a bad description. Strawson's analysis thus introduces what have
been called truth value gaps.
Consequences of Strawson's analysis
An obvious consequence of Strawson's analysis is that sentences like 'the cube is small' cannot
be translated into FOL. For all FOL sentences have truth values, at least if they do not contain
any names. Strawson's proposal is, in effect, to treat definite descriptions (semantically as
well as syntactically) in the way that FOL treats names: the sentences in which they occur
cannot be evaluated for truth value unless they succeed in uniquely referring.
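Strawson's truth value gaps can be modeled in a sketch by a three-valued evaluation that returns None when the presupposition fails (my own illustration, not Strawson's formalism):

```python
from typing import List, Optional, Tuple

def strawson_the_cube_is_small(world: List[Tuple[str, str]]) -> Optional[bool]:
    """Strawsonian evaluation of 'The cube is small': None models a truth
    value gap, returned when the presupposition (exactly one cube) fails."""
    cubes = [obj for obj in world if obj[0] == "cube"]
    if len(cubes) != 1:
        return None  # bad description: no claim has been made
    return cubes[0][1] == "small"
```

Contrast this with the Russellian evaluation, which returns False rather than a gap when the description is bad.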
Strawson's analysis of definite descriptions seriously weakens FOL. For FOL would be unable,
on Strawson's account, to explain the validity of certain obviously valid arguments. Consider
this example:
The large cube is in front of b.
Something large is in front of b.
Although Strawson would agree that this is a valid argument (there is no possible
circumstance in which the premise is true and the conclusion false), the conclusion would not
be a FO consequence of the premise in a Strawsonian FOL. For it would look like this:
FrontOf(the large cube, b)
∃x (Large(x) ∧ FrontOf(x, b))
And the conclusion of this argument is not a FO consequence of the premise. Russell's theory,
of course, proposes a different translation into FOL:
∃x∀y (((Large(y) ∧ Cube(y)) ↔ y = x) ∧ FrontOf(x, b))
∃x (Large(x) ∧ FrontOf(x, b))
And the conclusion here is obviously a FO consequence of the premise. Can Russell's
theory, which does not allow Strawson's truth value gaps, be defended?
Response to Strawson
It is Strawson's notion of presupposition that introduces truth value gaps. So to do away with
them, we need an alternative account of what he calls presuppositions. The best alternative is
to say that these are really implicatures.
According to Strawson, the sentence 'the large cube is not in front of b' carries with it the
presupposition that there is exactly one large cube. When that presupposition is not fulfilled,
one who utters the sentence fails to make any claim, true or false. But if it is only an
implicature that there is exactly one large cube, the sentence may still have a truth value even
when the implicature is false.
So let us apply the cancellability test. Can one conjoin 'there is not exactly one large cube' to
'the large cube is not in front of b' without contradiction? Opinions are mixed. My view is that
there is no contradiction here. It is not self-contradictory to say 'The large cube is not in front
of b; in fact, there is no large cube at all!'
It is rather like saying to the child who thinks that the tooth fairy put a dollar under her pillow:
'No, the tooth fairy did not put a dollar under your pillow; in fact, there is no tooth fairy.' It
may be unkind to say this, but it is not untrue. Obviously, then, it is not self-contradictory,
either.
In fact, a Strawsonian analysis of this case seems particularly wrong-headed. Consider his
account of the sentence uttered by the child:
The tooth fairy put a dollar under my pillow.
According to Strawson's account, this sentence cannot be used to make a claim unless its
presuppositions are fulfilled. And one of its presuppositions is that there is exactly one tooth
fairy. So, since there are no tooth fairies, the child has not made a claim at all.
But surely this is wrong. The child has made a claim, and a false one. And we can correct the
child (if we're mean enough, or the child is old enough) by uttering the proper negation of
what the child said:
No, the tooth fairy did not put a dollar under your pillow.
This is the harsh truth, and the Russellian analysis gives just the right result.
Copyright 2006, S. Marc Cohen Revised 4/25/09
Supplement-1
Properties of Relations and Infinite Domains
For our final topic, we will cover a few points that are not fully covered in LPL, but that are important
in two ways. First, they are important for understanding the logic of arguments involving binary
predicates. Second, they help to bring into sharper focus some of the theoretical limitations of using
Tarski's World counterexamples to show FOL arguments to be invalid.
Although most of this topic is omitted in LPL, there is a useful discussion of the properties of binary
relations in §15.5 (pp. 422-424). I suggest that you read these pages now, and then return to this point
and resume reading.
Binary relations
A binary relation is what gets expressed by a binary (2-place) predicate. For example, Larger
expresses the larger than relation, FrontOf expresses the relation of being in front of, and =
expresses the identity relation.
The true atomic sentence Older(ringo, paul) expresses the fact that Ringo stands in the older than
relation to Paul.
Properties of binary relations
Binary relations may themselves have properties. For example, if a relation R is such that
everything stands in the relation R to itself, R is said to be reflexive. Some relations, such as being
the same size as and being in the same column as, are reflexive. Others, such as being in front of or
being larger than, are not.
We can express the fact that a relation is reflexive as follows: a relation, R, is reflexive iff it
satisfies the condition that ∀x R(x, x).
LPL has a brief discussion of these properties of relations, and provides a list of some of the most
important ones, on p. 422. We summarize them here.
Reflexivity: ∀x R(x, x)
Irreflexivity: ∀x ¬R(x, x)
Transitivity: ∀x∀y∀z [(R(x, y) ∧ R(y, z)) → R(x, z)]
Symmetry: ∀x∀y (R(x, y) → R(y, x))
Asymmetry: ∀x∀y (R(x, y) → ¬R(y, x))
Antisymmetry: ∀x∀y [(R(x, y) ∧ R(y, x)) → x = y]
It would probably be useful for you at this point to think of the various relations expressed by the
binary predicates of the blocks language and figure out, for each of those relations, which of the
properties above it has. (A shorter version of this project appears in LPL as Exercise 15.36.)
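A quick way to experiment with these definitions outside Tarski's World is a small Python checker over a finite domain (my own illustration; a relation is represented as a set of ordered pairs):

```python
def properties(domain, R):
    """Report which of the six properties hold of R (a set of ordered pairs)
    over the given finite domain."""
    return {
        "reflexive":     all((x, x) in R for x in domain),
        "irreflexive":   all((x, x) not in R for x in domain),
        "transitive":    all((x, z) in R
                             for (x, y) in R for (w, z) in R if w == y),
        "symmetric":     all((y, x) in R for (x, y) in R),
        "asymmetric":    all((y, x) not in R for (x, y) in R),
        "antisymmetric": all(x == y for (x, y) in R if (y, x) in R),
    }

# A toy 'larger than' relation on three blocks a, b, c (a is the largest).
larger = {("a", "b"), ("a", "c"), ("b", "c")}
```

Running the checker on this toy relation reports it as irreflexive, transitive, asymmetric, and (vacuously) antisymmetric, but not reflexive or symmetric.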
OK, do you have your list? I'll run through each of the properties, give a brief explanation of it,
and give some examples of relations that have that property.
Reflexivity
A reflexive relation is one that everything bears to itself. The blocks language predicates that
express reflexive relations are: SameSize, SameShape, SameCol, SameRow, and =.
Other reflexive relations include lives in the same city as and is (biologically) related to.
!""#$%#&'(')*
An irreIlexive relation is one that nothing bears to itselI. The blocks language predicates that
express reIlexive relations are: Adjoins, Larger, Smaller, LeftOf, RightOf, FrontOf, and
BackOf. Other irreIlexive relations include is different from, occurred earlier than.
,"-./')'(')*
The property oI transitivity is probably more clearly and eIIiciently expressed by its FOL
Iormula than by trying to state it in English. One might try to put it like this: a transitive
relation is one such that iI one thing bears it to a second, and the second bears it to a third,
then the Iirst thing bears it to the third. Do you see what I mean? The FOL version is simpler
and more straightIorward.
The blocks language predicates that express transitive relations are: Larger, Smaller, LeftOf,
RightOf, FrontOf, BackOf, SameSize, SameShape, SameCol, SameRow and =. Wow,
that`s quite a list! In Iact, it includes every blocks language binary predicate except Ior one:
Adjoins. II it is not clear to you why adfoins is not transitive, you might want to check it out
using Tarski`s World. Other transitive relations include older than, occurred earlier than,
lives in the same citv as, ancestor of.
Symmetry
A symmetric relation is one that is always reciprocated. That is, if one thing bears it to a
second, the second also bears it to the first. (The only thing wrong with putting it this way is
that the 'first thing' and the 'second thing' don't have to be two different things!)
The blocks language predicates that express symmetric relations are: Adjoins, SameSize,
SameShape, SameCol, SameRow, and =. Other symmetric relations include lives near and is
a sibling of.
Asymmetry
An asymmetric relation is one that is never reciprocated. That is, if one thing bears it to a
second, the second does not bear it to the first.
The blocks language predicates that express asymmetric relations are: Larger, Smaller,
LeftOf, RightOf, FrontOf, and BackOf. Other asymmetric relations include older than and
daughter of.
Antisymmetry
Looking at the definition of antisymmetry above, you may have a hard time putting it into
English. You might try this: an antisymmetric relation is one such that if two things bear it to
one another, then they are identical. But that doesn't sound quite right, does it? You'll come
up with a better way of putting it if you review the discussion of 'at most one' in the study
guide section on §14.1.
Did that help? An antisymmetric relation is one that no two things ever bear to one another.
The blocks language predicates that express antisymmetric relations are: Larger, Smaller,
LeftOf, RightOf, FrontOf, BackOf, and =.
It might at first seem odd that larger than, for example, is antisymmetric. But nevertheless
(Larger(x, y) ∧ Larger(y, x)) → x = y
is true for all values of x and y. This is one of those 'vacuously true' universal generalizations
whose antecedent is always false. And since Larger(x, y) ∧ Larger(y, x) is never true, the
number of things that are both larger than each other is zero. Since the number of such things
is zero, it follows trivially that the number is not more than one. That is, no two things are
both larger than each other.
The (equivalent) contrapositive form of the antisymmetry condition is perhaps easier to
understand:
∀x∀y [x ≠ y → ¬(R(x, y) ∧ R(y, x))]
This is in turn equivalent to:
∀x∀y [x ≠ y → (R(x, y) → ¬R(y, x))]
And this says, once again, that if something stands in the R relation to something else, then the
relation is not reciprocated. Clearly, asymmetry implies antisymmetry, although the converse
does not hold.
Notice that neither same column nor same row is antisymmetric (two different blocks can be
in the same column, and two different blocks can be in the same row). But now consider the
logical product of these two relations, that is, the relation same column and same row. This
relation is antisymmetric. For if x and y are in both the same column and the same row as one
another, then x = y. No two blocks can be in both the same column and the same row. That's
because in a Tarski world, you cannot fit more than one block into a single square. For
practical purposes, the relation same column and same row just is the identity relation in
Tarski's World.
Arguments involving binary relations
When a relation has one of these properties, that means that a certain kind of argument involving
atomic sentences is valid. Thus, for example, it is because the larger than relation is transitive that
the following argument is valid.
Larger(a, b)
Larger(b, c)
Larger(a, c)
And it is because the adjoins relation is symmetric that the following argument is valid.
Adjoins(b, c)
Adjoins(c, b)
A good way to become familiar with these properties of relations is to do exercises 15.30-15.36.
Notice that every relation expressed by a binary atomic predicate in the blocks language
(SameSize, Larger, Adjoins, etc.) is either reflexive or irreflexive, and either symmetric or
asymmetric. This is because in the language of the blocks world, all the binary predicates (except
for =) stand for spatial relations.
But when you consider relations more broadly, you will find some that are neither reflexive nor
irreflexive; some are neither symmetric nor asymmetric. Examples: loves, hates, shaves, respects.
Test these out for yourself: ∀x Hates(x, x) is not true, but neither is ∀x ¬Hates(x, x). (Not
everyone hates himself, but surely at least some self-hatred exists.) And
∀x∀y (Shaves(x, y) → Shaves(y, x)) is not true, but neither is
∀x∀y (Shaves(x, y) → ¬Shaves(y, x)). (People do not typically shave one another, but there is
probably at least one pair of people who engage in this strange behavior.)
Relations such as loves, hates, admires, etc. have this interesting feature: they have none of the
properties we have been discussing. Take loves, for example. It is not reflexive (not everyone
loves himself or herself), not irreflexive (at least one person does love himself), not transitive (one
does not always love the ones one's beloved loves), not symmetric (love is not always
reciprocated), not asymmetric (love is sometimes reciprocated), and not antisymmetric (sometimes
two people really do love one another). I guess this shows there's just not much that's logical about
love!
Equivalence relations
When a relation is transitive, symmetric, and reflexive, it is called an equivalence relation. Being
the same size as is an equivalence relation; so are being in the same row as and having the same
parents as. The most familiar (and important) example of an equivalence relation is identity.
Unlike same size and same row, however, identity has the additional property of being
antisymmetric. (That is, no two things bear the identity relation to one another.) This makes it
special among equivalence relations. In fact, you can think of identity as just a very special case of
equivalence.
Confirm to your own satisfaction (if you are not already clear about this) that identity is transitive,
symmetric, reflexive, and antisymmetric.
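One way to confirm this is a small Python sketch (my own illustration) checking that identity is an equivalence relation and antisymmetric, while another equivalence relation, sameness of parity, is not antisymmetric:

```python
def is_equivalence(domain, R):
    """An equivalence relation is reflexive, symmetric, and transitive."""
    reflexive = all((x, x) in R for x in domain)
    symmetric = all((y, x) in R for (x, y) in R)
    transitive = all((x, z) in R for (x, y) in R for (w, z) in R if w == y)
    return reflexive and symmetric and transitive

domain = {1, 2, 3}
identity = {(x, x) for x in domain}
same_parity = {(x, y) for x in domain for y in domain if (x - y) % 2 == 0}
```

Both relations pass the equivalence test, but only identity also passes the antisymmetry test, since same_parity relates the two distinct objects 1 and 3 to each other.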
3-""4/ )5"&. -#*).'"!/
There are some interesting generalizations that can be proved about the properties oI relations. For
example, iI a relation is transitive and irreIlexive,
1
it must also be asymmetric. That is to say, the
Iollowing argument is valid.
!x!v!: |(R(x, v) ! R(v, :)) # R(x, :)|
!x "R(x, x)
!x!v (R(x, v) # "R(v, x))
To prove that this is so, go to the Supplementary Exercises page and open the file Asymmetry.prf.
You should be able to deduce the conclusion from the premises fairly easily. Use generalized
conditional proof strategy, assuming R(a, b) with boxed constants a and b. Then deduce ¬R(b, a)
and apply → Intro. The trick in this proof is to choose the right substitutions of constants for
variables in applying ∀ Elim to the first premise. To see my version of the proof, look at
Proof Asymmetry.prf. But don't look until you've worked out your own proof.
Here's another exercise to try. We noted above that asymmetry implies antisymmetry. Prove that
this is so by completing the proof in Antisymmetry.prf. Then compare your proof with my version
(only six steps!) in Proof Antisymmetry.prf.
¹ Mathematically, a relation that is transitive and irreflexive is known as a strict partial ordering.
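Besides proving the generalization in F, we can also machine-check it over every relation on a small finite domain (a brute-force Python sketch of my own, not a proof for arbitrary domains):

```python
from itertools import product

def trans_irrefl_implies_asym(n):
    """Exhaustively check, for every binary relation on a domain of size n,
    that transitivity plus irreflexivity entails asymmetry."""
    domain = list(range(n))
    pairs = [(x, y) for x in domain for y in domain]
    for bits in product([False, True], repeat=len(pairs)):
        R = {p for p, keep in zip(pairs, bits) if keep}
        transitive = all((x, z) in R for (x, y) in R for (w, z) in R if w == y)
        irreflexive = all((x, x) not in R for x in domain)
        asymmetric = all((y, x) not in R for (x, y) in R)
        if transitive and irreflexive and not asymmetric:
            return False  # would be a counter-example to the generalization
    return True
```

The search never finds a counter-example, as the proof predicts: from R(x, y) and R(y, x), transitivity would give R(x, x), contradicting irreflexivity.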
Counter-examples to generalizations about relations
When a generalization about a relation is false, you should be able to establish this by means of a
counter-example. For example, we can show that not every symmetric relation is transitive by
producing a counter-example to this inference:
∀x∀y (R(x, y) → R(y, x))
∀x∀y∀z [(R(x, y) ∧ R(y, z)) → R(x, z)]
An obvious counter-example here would be to take R to be Adjoins. A little reflection will show
that
∀x∀y (Adjoins(x, y) → Adjoins(y, x))
is a logical truth (although not, of course, an FO validity). That is, the adjoins relation is symmetric.
But this relation is certainly not transitive. That is, we can easily construct a Tarski World in which
the premise of the argument below is true and the conclusion is false:
1. ∀x∀y (Adjoins(x, y) → Adjoins(y, x))
2. ∀x∀y∀z [(Adjoins(x, y) ∧ Adjoins(y, z)) → Adjoins(x, z)]
Go to the Supplementary Exercises page and open Symmetry.sen. Then construct a world in
which sentence (1) is true and sentence (2) is false. You will find this very easy. In fact, as long as
there are two adjoining blocks somewhere in your world, (2) will be false. (Since (1) is a logical
truth, it will be true in every Tarski world.) This shows that adjoins is not a transitive relation.
Therefore, we have provided a counter-example to the generalization that every symmetric relation
is transitive.
[If you do not see why (2) can come out false in a two-block world, try this experiment:
create a world with exactly two adjoining blocks. Now play the game against Tarski on
(2) and commit to its truth. You will soon discover why Tarski will always win.
Remember, three different variables do not require three different objects.]
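The two-block counter-example can also be checked directly in a Python sketch (an illustration of the same point outside Tarski's World; the pair representation is mine):

```python
# A two-block world in which blocks a and b adjoin each other.
adjoins = {("a", "b"), ("b", "a")}

# Symmetry holds: every pair is reciprocated.
symmetric = all((y, x) in adjoins for (x, y) in adjoins)

# Transitivity fails: Adjoins(a, b) and Adjoins(b, a) would require
# Adjoins(a, a), but no block adjoins itself.
transitive = all((x, z) in adjoins
                 for (x, y) in adjoins for (w, z) in adjoins if w == y)
```

Note how chaining (a, b) with (b, a) demands (a, a), which is exactly why three different variables do not require three different objects.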
Seriality and infinite domains
In our last example, we were able to falsify a generalization about a relation by constructing a
counter-example with only two objects in its domain. (Our Tarski's World invalidating the
argument needed only two blocks.) But some false generalizations about relations cannot be shown
to be false in Tarski's World, or in any world containing only a finite number of objects in its
domain.
To put the point another way, there are some invalid FOL arguments that cannot be shown to be
invalid by means of any counter-example containing only finitely many objects. Sometimes, an
infinite domain is required.
Examples of this kind of argument frequently involve relations that have an important property that
we have not yet considered: the property of seriality. (It is sometimes called 'totality', as it is on
p. 427 where it is defined.) Here is the definition:
Seriality: ∀x∃y R(x, y)
That is, a relation R is serial just in case for each object in the domain there is something to which
it stands in the relation R.
Here, then, is a famous example of an invalid argument that has no counter-example in any finite
domain. Notice that it has the same premises as Asymmetry.prf, but a different conclusion:
!!!"!# |($(!, ") ! $(", #)) " $(!, #)|
!! #$(!, !)
#!!$" $(!, ")
This argument says that iI $ is both transitive and irreIlexive, it Iollows that it is !"# serial. To
show that the argument is invalid, we must Iind an example oI a relation that is #$%!&'#'(),
'$$)*+),'(), and &)$'%+. That is, we have to produce an example oI a relation that satisIies these
three conditions:
!!!"!# |($(!, ") ! $(", #)) " $(!, #)|
!! #$(!, !)
!!$" $(!, ")
.""/'!0 '! (%'! '! 1%$&/'2& 3"$+4
We might try to Iind a counter-example in Tarski`s World, but we will quickly see that we cannot
succeed. Consider the binary predicates in the blocks language oI Tarski`s World: Adjoins,
FrontOf, SameSize, Larger, =, etc. Now think oI the relations expressed by these predicates.
Certainly, some oI these relations are transitive and serial. For example, all oI the )56'(%+)!7)
relations (e.g., %&'()%)", *+,' *%#' +*, *+,' *.+/' +*) are both transitive and serial. The trouble is,
these are also reIlexive (each block is the same size as itselI, the same shape as itselI, identical to
itselI, etc.), and we are looking Ior an '$$)*+),'() relation.
Now the blocks language does have other predicates that express relations that are both transitive
and irreIlexive: Larger, Smaller, FrontOf, LeftOf, etc. The trouble is that none oI these relations
is &)$'%+. That is, it is never true in a Tarski World that )()$8 block is in Iront oI some block or
other, or that )()$8 block is smaller than some block or other. This is because, in a Tarski World,
the blocks in the back row are never in Iront oI anything, and there is no block larger than a large
block. There is a backmost row, a leItmost column, a smallest size, a largest size, etc.
This suggests that a successIul counter-example must avoid this limitation. Whatever relation, $,
we pick, we must be sure that there is no '$-most thingno ! which does not bear the $ relation
to anything else. The avoidance oI this limitation is what pushes us to an inIinite domain.
9 7"6!#)$:),%;<+) ='#> %! '!*'!'#) 4";%'!
It turns out that an example of a relation that is transitive, irreflexive, and serial is easy to find. The
less than relation among positive integers is one such relation. It is transitive, obviously, since if
one number is less than a second and the second is less than a third, the first is less than the third.
Equally clearly, it is irreflexive, since no number is less than itself. But it is also serial, in that for
every number (no matter how large), there is another number that is even larger.
What makes this example work, of course, is that the supply of positive integers is infinite. There is
no largest integer, so that no matter what number x we pick, we never run out of numbers, y, such
that x is less than y.
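A Python sketch (my own illustration) checking the three conditions for less than, with the seriality witness x + 1 made explicit:

```python
# Check transitivity and irreflexivity of 'less than' on a finite sample,
# and seriality via the explicit witness x + 1, which exists for EVERY
# positive integer because there is no largest one.
sample = range(1, 20)
less = {(x, y) for x in sample for y in sample if x < y}

transitive = all((x, z) in less
                 for (x, y) in less for (w, z) in less if w == y)
irreflexive = all((x, x) not in less for x in sample)
serial_witness = all(x < x + 1 for x in sample)
```

The finite sample can only confirm transitivity and irreflexivity on the sampled pairs, of course; seriality genuinely depends on the infinite supply of integers.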
?$"('!0 #>%# % 4";%'! ;6&# @) '!*'!'#)
It is easy to see that our counter-example above works, and that the argument against which we
brought it is invalid. But it is not so easy to prove that a successIul counter-example in this case
;6&# have an inIinite domain. Still, there is a simple, intuitive way to see why the domain must be
inIinite. Here it is.
First, note that R is transitive, irreflexive, and serial. But we have already proved above that any
relation that is transitive and irreflexive is also asymmetric. So we can add to our conditions that R
must be asymmetric as well.
Now suppose we have a domain with only one object, call it n1. Since R is serial, there is some y
such that R(n1, y). But n1 is the only object in the domain, so y must be n1. Hence, we get R(n1, n1),
which is impossible if R is irreflexive. So our domain must have more than one object.
Next, suppose we have a domain with two objects, call them n1 and n2. Since R is serial, n1 must
bear R to something, and that thing cannot be n1 (because R is irreflexive). Hence, R(n1, n2). But n2
must also bear R to something, and that thing cannot be n1 (because of asymmetry) or n2 (because
of irreflexivity). So our domain must have more than two objects.
So let's add another object, n3, and suppose that R(n2, n3). But because R is transitive, and we have
both R(n1, n2) and R(n2, n3), it follows that we also have R(n1, n3). Now, which object y is such that
R(n3, y)? It cannot be n1 (because of asymmetry) or n2 (for the same reason) or n3 (because of
irreflexivity). So our domain must have more than three objects.
It is easy to see that this line of reasoning can be extended indefinitely. Each time we add an object
to the domain, it is because we are forced to add a new object for its immediate predecessor to
stand in the R relation to. But then the transitivity condition will come into play and guarantee that
each of the previous objects in the domain bears R to the newcomer. Given the asymmetry
condition, this will prevent the newcomer from bearing R to any of them, and because of the
irreflexivity condition, the newcomer cannot bear R to itself. The seriality condition then forces us
to introduce yet another object. So we will never be able to convert a domain that is too small into
one that is large enough by adding just one object. And that is sufficient to establish that our
domain will have to be infinite. For any finite domain can be constructed (however tediously) by
adding objects one at a time.
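The line of reasoning above can also be checked mechanically for small domains: a brute-force Python sketch (my own illustration, not a substitute for the inductive proof) that searches every relation on a small domain and finds none that is transitive, irreflexive, and serial:

```python
from itertools import product

def finite_counterexample_exists(n):
    """Search every binary relation on a domain of size n for one that is
    transitive, irreflexive, and serial; the search always comes up empty."""
    domain = list(range(n))
    pairs = [(x, y) for x in domain for y in domain]
    for bits in product([False, True], repeat=len(pairs)):
        R = {p for p, keep in zip(pairs, bits) if keep}
        transitive = all((x, z) in R for (x, y) in R for (w, z) in R if w == y)
        irreflexive = all((x, x) not in R for x in domain)
        serial = all(any((x, y) in R for y in domain) for x in domain)
        if transitive and irreflexive and serial:
            return True
    return False
```

The search stays empty for every size it can feasibly try, in line with the argument above: a transitive, irreflexive relation on a finite domain always leaves some object with nothing to bear R to.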
To convert this intuitive sketch into a proper proof would require us to construct a proof by
mathematical induction, which is the topic of Chapter 16 of LPL. So although our course ends
here, if you wish to continue your study of logic, that would be an excellent place to begin.