Logic - Stanford
Fall 2019
Lecture 2
Aleksandar Zeljić
(materials by Clark Barrett)
Stanford University
Outline
Propositional Logic: Motivation
[Figure: AND, OR, and NOT Boolean gates]
The inputs and outputs of Boolean gates can be connected together to form a
combinational Boolean circuit.
[Figure: a combinational Boolean circuit with signals labeled A through I]
Semantics
The meaning of logical symbols is always the same. The meaning of nonlogical
symbols depends on the context.
Propositional Logic: Syntax
An expression is a sequence of symbols. A sequence is denoted explicitly by a
comma-separated list enclosed in angle brackets: ⟨a1, . . . , am⟩.
Examples
⟨(, A1, ∧, A3, )⟩ is written as (A1 ∧ A3)
⟨(, (, ¬, A1, ), →, A2, )⟩ is written as ((¬A1) → A2)
⟨), ), ↔, ), A5⟩ is written as ))↔)A5
For convenience, we will write these sequences as a simple string of symbols,
with the understanding that the formal structure represented is a sequence
containing exactly the symbols in the string.
The formal meaning becomes important when trying to prove things about
expressions.
Not all expressions make sense. Part of the job of defining a syntax is to
restrict the kinds of expressions that will be allowed.
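To make the sequence view concrete, here is a minimal Python sketch (not from the lecture; the representation is an illustrative choice) of the example expressions as lists of symbol tokens:

```python
# Expressions as explicit sequences (lists) of symbols.
expr1 = ["(", "A1", "∧", "A3", ")"]                  # represents (A1 ∧ A3)
expr2 = ["(", "(", "¬", "A1", ")", "→", "A2", ")"]   # represents ((¬A1) → A2)
expr3 = [")", ")", "↔", ")", "A5"]                   # represents ))↔)A5 -- not a sensible expression

# Writing a sequence as a simple string of symbols:
print("".join(expr1))  # (A1∧A3)
```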
Propositional Logic: Syntax
We define the set W of well-formed formulas (wffs) as follows:
(a) Every expression consisting of a single propositional symbol is a wff.
(b) If α and β are wffs, then so are (¬α), (α ∧ β), (α ∨ β), (α → β), and (α ↔ β).
(c) No other expressions are wffs.
This definition is inductive: the set being defined is used as part of the
definition.
How would you use this definition to prove that ))↔)A5 is not a wff?
Item (c) is too vague for our purposes. To make it more precise we use
induction.
Propositional Logic: Well-Formed Formulas
We can use a formal inductive definition to define the set W of well-formed
formulas in propositional logic.
▶ U = the set of all expressions (the universe)
▶ B = the set of expressions consisting of a single propositional symbol
▶ F = {E¬, E∧, E∨, E→, E↔}, the formula-building operations, where
  E¬(α) = (¬α), E∧(α, β) = (α ∧ β), E∨(α, β) = (α ∨ β),
  E→(α, β) = (α → β), and E↔(α, β) = (α ↔ β)
W is the smallest subset of U that includes B and is closed under the
operations in F.
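As a concrete illustration (a sketch of my own, not lecture code), the formula-building operations can be modeled as functions on strings:

```python
# Formula-building operations modeled on strings.
# The Python names mirror E¬, E∧, etc.; the naming is my own choice.
def e_not(a):        return "(¬" + a + ")"
def e_and(a, b):     return "(" + a + " ∧ " + b + ")"
def e_or(a, b):      return "(" + a + " ∨ " + b + ")"
def e_implies(a, b): return "(" + a + " → " + b + ")"
def e_iff(a, b):     return "(" + a + " ↔ " + b + ")"

# Building ((¬A1) → A2) starting from propositional symbols in B:
print(e_implies(e_not("A1"), "A2"))  # ((¬A1) → A2)
```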
Propositional Logic: Well-Formed Formulas
As an example of how to use this definition, we prove the following.
Theorem: Every wff has the same number of left parentheses as right parentheses.
Proof: For an expression α, let l(α) be the number of left parentheses in α and
r(α) the number of right parentheses. Let S = {α ∈ U : l(α) = r(α)}.
We show that S is inductive, i.e., that S includes B and is closed under the
operations in F.
Base Case:
A propositional symbol contains no parentheses, so for every α ∈ B we have
l(α) = r(α) = 0, and thus B ⊆ S.
Propositional Logic: Well-Formed Formulas
Inductive Case:
We must show that S is closed under each formula-building operator in F.
▶ E¬
  Suppose α ∈ S. We know that E¬(α) = (¬α). It follows that
  l(E¬(α)) = 1 + l(α) and r(E¬(α)) = 1 + r(α).
  But because α ∈ S, we know that l(α) = r(α), so it follows that
  l(E¬(α)) = r(E¬(α)), and thus E¬(α) ∈ S.
▶ E∧
  Suppose α, β ∈ S. We know that E∧(α, β) = (α ∧ β). Thus
  l(E∧(α, β)) = 1 + l(α) + l(β) and r(E∧(α, β)) = 1 + r(α) + r(β).
  As before, it follows from the inductive hypothesis that E∧(α, β) ∈ S.
▶ The arguments for E∨, E→, and E↔ are exactly analogous to the one for E∧.
□
Since S includes B and is closed under the operations in F, it is inductive. It
follows by the induction principle that W ⊆ S.
Propositional Logic: Well-Formed Formulas
Now we can return to the question of how to prove that an expression is not a
wff.
How do we know that ))↔)A5 is not a wff?
It does not have the same number of left and right parentheses.
It follows from the theorem we just proved that ))↔)A5 is not a wff.
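A quick sanity check of this argument (my own sketch, not part of the lecture):

```python
# Count left and right parentheses; by the theorem, a wff must satisfy l(a) == r(a).
def l(a): return a.count("(")
def r(a): return a.count(")")

for expr in ["(A1 ∧ A3)", "((¬A1) → A2)", "))↔)A5"]:
    print(expr, l(expr) == r(expr))
# The first two balance; "))↔)A5" does not, so it cannot be a wff.
```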
An Algorithm for Recognizing WFFs
Lemma
Let α be a wff. Then exactly one of the following is true.
▶ α is a propositional symbol.
▶ α = (¬β) where β is a wff.
▶ α = (β ⋆ γ) where ⋆ is one of {∧, ∨, →, ↔}, β is the first
  parentheses-balanced initial segment of the result of dropping the first (
  from α, and β and γ are wffs.
Note that conventions are only unambiguous for wffs, not for arbitrary
expressions.
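The lemma suggests a recursive recognition procedure. A minimal sketch (my own implementation, not code from the lecture), assuming formulas are strings over symbols A1, A2, . . . , parentheses, and the five connectives:

```python
import re

BINOPS = "∧∨→↔"

def tokenize(s):
    # Split a formula string into symbol tokens.
    return re.findall(r"A\d+|[()¬∧∨→↔]", s)

def balanced_prefix(tokens):
    # Length of the shortest nonempty parenthesis-balanced initial segment, or None.
    depth = 0
    for i, t in enumerate(tokens):
        depth += (t == "(") - (t == ")")
        if depth == 0:
            return i + 1
    return None

def is_wff(tokens):
    if len(tokens) == 1:                         # case 1: a propositional symbol
        return bool(re.fullmatch(r"A\d+", tokens[0]))
    if len(tokens) >= 4 and tokens[0] == "(" and tokens[-1] == ")":
        if tokens[1] == "¬":                     # case 2: (¬β)
            return is_wff(tokens[2:-1])
        inner = tokens[1:-1]                     # case 3: (β ⋆ γ)
        k = balanced_prefix(inner)
        if k is not None and k < len(inner) and inner[k] in BINOPS:
            return is_wff(inner[:k]) and is_wff(inner[k + 1:])
    return False

print(is_wff(tokenize("((¬A1)→A2)")))   # True
print(is_wff(tokenize("))↔)A5")))       # False
```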
Propositional Logic: Semantics
Intuitively, given a wff α and a value (either T or F) for each propositional
symbol in α, we should be able to determine the value of α.
How do we make this precise?
Let v be a function from B to {F, T}. We call this function a truth assignment.
Now, we define v̄, a function from W to {F, T}, as follows (we compute with F
and T as if they were 0 and 1 respectively):
▶ v̄(α) = v(α) if α is a propositional symbol
▶ v̄((¬α)) = T iff v̄(α) = F
▶ v̄((α ∧ β)) = T iff v̄(α) = T and v̄(β) = T
▶ v̄((α ∨ β)) = T iff v̄(α) = T or v̄(β) = T (or both)
▶ v̄((α → β)) = T iff v̄(α) = F or v̄(β) = T
▶ v̄((α ↔ β)) = T iff v̄(α) = v̄(β)
The recursion theorem and the unique readability theorem guarantee that v̄ is
well-defined. (See Enderton.)
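A sketch of the extension v̄ in Python (my own code; the nested-tuple representation of wffs is an illustrative choice, not lecture notation):

```python
# Wffs as nested tuples, e.g. ("→", ("¬", "A1"), "A2") for ((¬A1) → A2).
def vbar(v, wff):
    if isinstance(wff, str):                 # a propositional symbol: use v directly
        return v[wff]
    if wff[0] == "¬":
        return not vbar(v, wff[1])
    op, a, b = wff
    x, y = vbar(v, a), vbar(v, b)
    return {"∧": x and y,
            "∨": x or y,
            "→": (not x) or y,
            "↔": x == y}[op]

v = {"A1": True, "A2": False}
print(vbar(v, ("→", ("¬", "A1"), "A2")))     # True, since (¬A1) is F and F → anything is T
```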
Truth Tables
There are other ways to present the semantics which are less formal but
perhaps more intuitive.
α | ¬α
T |  F
F |  T

α  β | α ∧ β
T  T |   T
T  F |   F
F  T |   F
F  F |   F
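Such tables can also be generated mechanically; a small sketch (my own, not from the slides) for the ∧ table:

```python
# Print the truth table for ∧ by enumerating all assignments to α and β.
from itertools import product

def fmt(b): return "T" if b else "F"

print("α β | α ∧ β")
for a, b in product([True, False], repeat=2):
    print(fmt(a), fmt(b), "|", fmt(a and b))
```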
▶ (A ∨ B) ∧ (¬A ∨ ¬B)
Examples
Suppose you had an algorithm SAT which would take a wff α as input and
return true if α is satisfiable and false otherwise. How would you use this
algorithm to verify each of the claims made above?
Examples
Now suppose you had an algorithm CHECKVALID which returns true when α
is valid and false otherwise. How would you verify the claims given this
algorithm?
Satisfiability and validity are dual notions: α is unsatisfiable if and only if ¬α is
valid.
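For instance, the duality lets a validity check be phrased in terms of a satisfiability check. A minimal sketch (my own encoding of formulas as Python boolean functions, not the lecture's):

```python
# α is valid iff ¬α is unsatisfiable.
from itertools import product

def brute_force_sat(f, n):
    # f is a boolean function of n propositional symbols.
    return any(f(*vals) for vals in product([True, False], repeat=n))

def check_valid(f, n):
    return not brute_force_sat(lambda *vals: not f(*vals), n)

print(check_valid(lambda a: a or not a, 1))                        # A ∨ ¬A: True (valid)
print(check_valid(lambda a, b: (a or b) and (not a or not b), 2))  # False (satisfiable but not valid)
```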
Determining Satisfiability using Truth Tables
An Algorithm for Satisfiability
To check whether α is satisfiable, form the truth table for α. If there is a row
in which T appears as the value for α, then α is satisfiable. Otherwise, α is
unsatisfiable.
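A sketch of this algorithm (my own illustration; formulas are again encoded as Python boolean functions):

```python
# Form the truth table for α; α is satisfiable iff T appears in some row.
from itertools import product

def truth_table_sat(f, symbols):
    satisfiable = False
    print(" ".join(symbols) + " | α")
    for vals in product([True, False], repeat=len(symbols)):
        value = f(*vals)
        satisfiable = satisfiable or value
        print(" ".join("T" if v else "F" for v in vals), "|", "T" if value else "F")
    return satisfiable

# Example: (A ∨ B) ∧ (¬A ∨ ¬B) is satisfiable (T exactly when one of A, B is T).
print(truth_table_sat(lambda a, b: (a or b) and (not a or not b), ["A", "B"]))
```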
Some tautologies
Associative and Commutative laws for ∧, ∨, ↔
Distributive Laws
▶ (A ∧ (B ∨ C)) ↔ ((A ∧ B) ∨ (A ∧ C))
▶ (A ∨ (B ∧ C)) ↔ ((A ∨ B) ∧ (A ∨ C))
Negation
▶ ¬¬A ↔ A
▶ ¬(A → B) ↔ (A ∧ ¬B)
▶ ¬(A ↔ B) ↔ ((A ∧ ¬B) ∨ (¬A ∧ B))
De Morgan's Laws
▶ ¬(A ∧ B) ↔ (¬A ∨ ¬B)
▶ ¬(A ∨ B) ↔ (¬A ∧ ¬B)
More Tautologies
Implication
▶ (A → B) ↔ (¬A ∨ B)
Excluded Middle
▶ A ∨ ¬A
Contradiction
▶ ¬(A ∧ ¬A)
Contraposition
▶ (A → B) ↔ (¬B → ¬A)
Exportation
▶ ((A ∧ B) → C) ↔ (A → (B → C))
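Each of these can be verified mechanically with a truth table; a small sketch (my own, not from the slides) checks a few of them:

```python
# A wff is a tautology iff it evaluates to T under every truth assignment.
from itertools import product

def is_tautology(f, n):
    return all(f(*vals) for vals in product([True, False], repeat=n))

implies = lambda p, q: (not p) or q
print(is_tautology(lambda a, b: implies(a, b) == ((not a) or b), 2))         # Implication
print(is_tautology(lambda a, b: implies(a, b) == implies(not b, not a), 2))  # Contraposition
print(is_tautology(lambda a, b, c: implies(a and b, c) == implies(a, implies(b, c)), 3))  # Exportation
```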
Propositional Connectives
We have five connectives: ¬, ∧, ∨, →, ↔. Would we gain anything by having
more? Would we lose anything by having fewer?
Example: Ternary Majority Connective #
E#(α, β, γ) = (#αβγ)
v̄((#αβγ)) = T iff the majority of v̄(α), v̄(β), and v̄(γ) are T.
What does this new connective do for us?
The extended language obtained by allowing this new symbol has the
same expressive power as the original language.
Every Boolean function can be realized by a wff which uses only the
connectives {¬, ∧, ∨}. We say that this set of connectives is complete.
In fact, we can do better. It turns out that {¬, ∧} and {¬, ∨} are complete as
well.
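As an illustration of these claims (a sketch under my own encoding, not lecture code): the majority connective is expressible with {∧, ∨}, and ∨ is expressible with {¬, ∧}:

```python
# Expressibility checks by exhaustive enumeration of assignments.
from itertools import product

def maj(a, b, c):                 # the ternary majority connective #
    return sum([a, b, c]) >= 2

def maj_via_and_or(a, b, c):      # (α ∧ β) ∨ (α ∧ γ) ∨ (β ∧ γ)
    return (a and b) or (a and c) or (b and c)

def or_via_not_and(a, b):         # ¬(¬α ∧ ¬β), by De Morgan
    return not ((not a) and (not b))

print(all(maj(a, b, c) == maj_via_and_or(a, b, c)
          for a, b, c in product([True, False], repeat=3)))   # True
print(all((a or b) == or_via_not_and(a, b)
          for a, b in product([True, False], repeat=2)))      # True
```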