Algorithms Chapter 4 & 5
CHAPTER 7
LOGICAL AGENTS
function KB-AGENT(percept) returns an action
  persistent: KB, a knowledge base
              t, a counter, initially 0, indicating time
  TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
  action ← ASK(KB, MAKE-ACTION-QUERY(t))
  TELL(KB, MAKE-ACTION-SENTENCE(action, t))
  t ← t + 1
  return action
Figure 7.1 A generic knowledge-based agent. Given a percept, the agent adds the percept
to its knowledge base, asks the knowledge base for the best action, and tells the knowledge
base that it has in fact taken that action.
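Below is a minimal Python sketch of this agent loop. The KBAgent class and the make_*_sentence helpers are illustrative placeholders; the knowledge base object is assumed to expose tell() and ask() methods, which are not specified here.

class KBAgent:
    """A sketch of the generic knowledge-based agent loop (Figure 7.1)."""
    def __init__(self, kb):
        self.kb = kb      # assumed to provide tell(sentence) and ask(query)
        self.t = 0        # time counter

    def make_percept_sentence(self, percept, t):
        return f"Percept({percept}, {t})"          # illustrative encoding only

    def make_action_query(self, t):
        return f"BestAction({t})"                  # illustrative encoding only

    def make_action_sentence(self, action, t):
        return f"Action({action}, {t})"            # illustrative encoding only

    def __call__(self, percept):
        self.kb.tell(self.make_percept_sentence(percept, self.t))   # record the percept
        action = self.kb.ask(self.make_action_query(self.t))        # ask for the best action
        self.kb.tell(self.make_action_sentence(action, self.t))     # record the action taken
        self.t += 1
        return action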
Figure 7.10 A truth-table enumeration algorithm for deciding propositional entailment. (TT stands for truth table.) PL-TRUE? returns true if a sentence holds within a model. The variable model represents a partial model: an assignment to some of the symbols. The keyword and here is an infix function symbol in the pseudocode programming language, not an operator in propositional logic; it takes two arguments and returns true or false.
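As a rough illustration, here is a Python sketch of the truth-table enumeration idea. Sentences are assumed to be nested tuples such as ('and', 'P', ('=>', 'P', 'Q')), or a plain string for a proposition symbol; this encoding and the helper names are assumptions for the sketch, not the book's code.

def pl_true(s, model):
    """Evaluate sentence s in a model {symbol: bool} that assigns all symbols of s."""
    if isinstance(s, str):
        return model[s]
    op, *args = s
    if op == 'not':  return not pl_true(args[0], model)
    if op == 'and':  return pl_true(args[0], model) and pl_true(args[1], model)
    if op == 'or':   return pl_true(args[0], model) or pl_true(args[1], model)
    if op == '=>':   return (not pl_true(args[0], model)) or pl_true(args[1], model)
    raise ValueError(f"unknown operator {op}")

def symbols(s, acc=None):
    """Collect the proposition symbols appearing in sentence s."""
    acc = set() if acc is None else acc
    if isinstance(s, str):
        acc.add(s)
    else:
        for arg in s[1:]:
            symbols(arg, acc)
    return acc

def tt_entails(kb, alpha):
    """Return True if kb entails alpha, by enumerating all models."""
    syms = sorted(symbols(kb) | symbols(alpha))
    return tt_check_all(kb, alpha, syms, {})

def tt_check_all(kb, alpha, syms, model):
    if not syms:
        # whenever KB is true in the model, alpha must be true as well
        return pl_true(alpha, model) if pl_true(kb, model) else True
    first, rest = syms[0], syms[1:]
    return (tt_check_all(kb, alpha, rest, {**model, first: True}) and
            tt_check_all(kb, alpha, rest, {**model, first: False}))

# Example: (P and (P => Q)) entails Q
# tt_entails(('and', 'P', ('=>', 'P', 'Q')), 'Q')  -> True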
Figure 7.13 A simple resolution algorithm for propositional logic. PL-RESOLVE returns the set of all possible clauses obtained by resolving its two inputs.
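A small Python sketch of the resolution step, assuming a clause is a frozenset of literals and a literal is a string such as 'P' or '~P' (this encoding is an assumption for illustration):

def complement(lit):
    """Return the complementary literal: P <-> ~P."""
    return lit[1:] if lit.startswith('~') else '~' + lit

def pl_resolve(ci, cj):
    """Return the set of all clauses obtainable by resolving clause ci with clause cj."""
    resolvents = set()
    for lit in ci:
        if complement(lit) in cj:
            resolvent = (ci - {lit}) | (cj - {complement(lit)})
            resolvents.add(frozenset(resolvent))
    return resolvents

# Example: resolving {P, Q} with {~P, R} yields {frozenset({'Q', 'R'})}
# pl_resolve(frozenset({'P', 'Q'}), frozenset({'~P', 'R'}))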
Figure 7.15 The forward-chaining algorithm for propositional logic. The agenda keeps track of symbols known to be true but not yet “processed.” The count table keeps track of how many premises of each implication are not yet proven. Whenever a new symbol p from the agenda is processed, the count is reduced by one for each implication in whose premise p appears (easily identified in constant time with appropriate indexing). If a count reaches zero, all the premises of the implication are known, so its conclusion can be added to the agenda. Finally, we need to keep track of which symbols have been processed; a symbol that is already in the set of inferred symbols need not be added to the agenda again. This avoids redundant work and prevents loops caused by implications such as P ⇒ Q and Q ⇒ P.
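This description maps directly onto a short Python sketch. Here the knowledge base is assumed to be given as a list of (premises, conclusion) pairs for the definite clauses plus a list of known facts; that layout, and the function names, are assumptions for illustration.

from collections import deque

def pl_fc_entails(clauses, facts, q):
    """clauses: list of (list_of_premise_symbols, conclusion_symbol). Returns True if q is entailed."""
    count = [len(premises) for premises, _ in clauses]   # unproven premises per implication
    inferred = set()                                      # symbols already processed
    agenda = deque(facts)                                 # symbols known true, not yet processed
    while agenda:
        p = agenda.popleft()
        if p == q:
            return True
        if p in inferred:
            continue                                      # avoid redundant work and loops
        inferred.add(p)
        for i, (premises, conclusion) in enumerate(clauses):
            if p in premises:
                count[i] -= 1
                if count[i] == 0:                         # all premises proven
                    agenda.append(conclusion)
    return False

# Example: KB = {P => Q, (L and M) => P}, known facts L and M; query Q
# pl_fc_entails([(['P'], 'Q'), (['L', 'M'], 'P')], ['L', 'M'], 'Q')  -> True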
Figure 7.17 The DPLL algorithm for checking satisfiability of a sentence in propositional logic. The ideas behind FIND-PURE-SYMBOL and FIND-UNIT-CLAUSE are described in the text; each returns a symbol (or null) and the truth value to assign to that symbol. Like TT-ENTAILS?, DPLL operates over partial models.
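A simplified Python sketch of the same idea follows; it keeps unit-clause propagation but omits the pure-symbol heuristic for brevity, and the clause encoding (sets of 'P' / '~P' strings) is an assumption.

def dpll_satisfiable(clauses):
    """clauses: iterable of clauses, each a collection of literals 'P' or '~P'."""
    syms = {lit.lstrip('~') for clause in clauses for lit in clause}
    return dpll([set(c) for c in clauses], syms, {})

def holds(lit, model):
    sym = lit.lstrip('~')
    return sym in model and model[sym] == (not lit.startswith('~'))

def falsified(lit, model):
    sym = lit.lstrip('~')
    return sym in model and model[sym] != (not lit.startswith('~'))

def dpll(clauses, syms, model):
    # simplify the clause set under the current partial model
    simplified = []
    for clause in clauses:
        if any(holds(lit, model) for lit in clause):
            continue                                      # clause already satisfied
        remaining = {lit for lit in clause if not falsified(lit, model)}
        if not remaining:
            return False                                  # some clause is false
        simplified.append(remaining)
    if not simplified:
        return True                                       # every clause satisfied
    # unit-clause heuristic: a clause with one literal forces an assignment
    for clause in simplified:
        if len(clause) == 1:
            lit = next(iter(clause))
            sym, val = lit.lstrip('~'), not lit.startswith('~')
            return dpll(simplified, syms - {sym}, {**model, sym: val})
    # otherwise branch on an unassigned symbol
    sym = next(iter(syms))
    return (dpll(simplified, syms - {sym}, {**model, sym: True}) or
            dpll(simplified, syms - {sym}, {**model, sym: False}))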
Figure 7.18 The WALKSAT algorithm for checking satisfiability by randomly flipping the values of variables. Many versions of the algorithm exist.
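A minimal Python sketch of one common version, assuming clauses are lists of literals written as 'P' / '~P'; the parameter names and defaults here are illustrative.

import random

def walksat(clauses, p=0.5, max_flips=10_000):
    """Try to find a satisfying assignment by random walk; return a model or None on failure."""
    syms = {lit.lstrip('~') for clause in clauses for lit in clause}
    model = {s: random.choice([True, False]) for s in syms}
    for _ in range(max_flips):
        unsatisfied = [c for c in clauses if not clause_true(c, model)]
        if not unsatisfied:
            return model                                  # satisfying assignment found
        clause = random.choice(unsatisfied)
        if random.random() < p:
            sym = random.choice(clause).lstrip('~')       # random-walk step
        else:
            # greedy step: flip the symbol that maximizes the number of satisfied clauses
            sym = max((lit.lstrip('~') for lit in clause),
                      key=lambda s: count_satisfied(clauses, {**model, s: not model[s]}))
        model[sym] = not model[sym]
    return None                                           # failure (the sentence may still be satisfiable)

def clause_true(clause, model):
    return any(model[l.lstrip('~')] == (not l.startswith('~')) for l in clause)

def count_satisfied(clauses, model):
    return sum(clause_true(c, model) for c in clauses)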
Figure 7.20 A hybrid agent program for the wumpus world. It uses a propositional knowledge base to infer the state of the world, and a combination of problem-solving search and domain-specific code to choose actions. Each time HYBRID-WUMPUS-AGENT is called, it adds the percept to the knowledge base, then either relies on a previously defined plan or creates a new plan, and pops off the first step of the plan as the action to do next.
function SATPLAN(init, transition, goal, Tmax) returns solution or failure
  inputs: init, transition, goal, constitute a description of the problem
          Tmax, an upper limit for plan length
  for t = 0 to Tmax do
    cnf ← TRANSLATE-TO-SAT(init, transition, goal, t)
    model ← SAT-SOLVER(cnf)
    if model is not null then
      return EXTRACT-SOLUTION(model)
  return failure
Figure 7.22 The SATPLAN algorithm. The planning problem is translated into a CNF sentence in which the goal is asserted to hold at a fixed time step t and axioms are included for each time step up to t. If the satisfiability algorithm finds a model, then a plan is extracted by looking at those proposition symbols that refer to actions and are assigned true in the model. If no model exists, then the process is repeated with the goal moved one step later.
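The outer loop translates directly into Python. In this sketch, translate_to_sat, sat_solver, and extract_solution are assumed placeholders for the problem encoding, an off-the-shelf SAT solver, and plan extraction, passed in as parameters.

def satplan(init, transition, goal, t_max,
            translate_to_sat, sat_solver, extract_solution):
    """Try plan lengths 0..t_max; return an extracted plan or None on failure."""
    for t in range(t_max + 1):
        cnf = translate_to_sat(init, transition, goal, t)   # goal asserted at step t
        model = sat_solver(cnf)
        if model is not None:
            return extract_solution(model)                   # read off the true action symbols
    return None                                              # no plan up to length t_max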
CHAPTER 8
FIRST-ORDER LOGIC
CHAPTER 9
INFERENCE IN FIRST-ORDER LOGIC
function UNIFY(x, y, θ=empty) returns a substitution to make x and y identical, or failure
  if θ = failure then return failure
  else if x = y then return θ
  else if VARIABLE?(x) then return UNIFY-VAR(x, y, θ)
  else if VARIABLE?(y) then return UNIFY-VAR(y, x, θ)
  else if COMPOUND?(x) and COMPOUND?(y) then
    return UNIFY(ARGS(x), ARGS(y), UNIFY(OP(x), OP(y), θ))
  else if LIST?(x) and LIST?(y) then
    return UNIFY(REST(x), REST(y), UNIFY(FIRST(x), FIRST(y), θ))
  else return failure
Figure 9.1 The unification algorithm. The arguments x and y can be any expression: a constant or variable, or a compound expression such as a complex sentence or term, or a list of expressions. The argument θ is a substitution, initially the empty substitution, but with {var/val} pairs added to it as we recurse through the inputs, comparing the expressions element by element. In a compound expression such as F(A, B), the OP(x) field picks out the function symbol F and the ARGS(x) field picks out the argument list (A, B).
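A Python sketch of the same recursion, under some assumptions: variables are strings beginning with '?', compound expressions and argument lists are Python tuples, bindings are kept in a dict, and failure is represented as None. An occur check is included.

def unify(x, y, theta):
    """theta is a dict of bindings, or None for failure."""
    if theta is None:
        return None
    if x == y:
        return theta
    if is_variable(x):
        return unify_var(x, y, theta)
    if is_variable(y):
        return unify_var(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple) and x and y:
        # compound expression or list: unify the first elements, then the rest
        return unify(x[1:], y[1:], unify(x[0], y[0], theta))
    return None

def is_variable(x):
    return isinstance(x, str) and x.startswith('?')

def unify_var(var, x, theta):
    if var in theta:
        return unify(theta[var], x, theta)
    if is_variable(x) and x in theta:
        return unify(var, theta[x], theta)
    if occurs(var, x, theta):
        return None                                   # occur check: var appears inside x
    return {**theta, var: x}

def occurs(var, x, theta):
    if var == x:
        return True
    if is_variable(x) and x in theta:
        return occurs(var, theta[x], theta)
    if isinstance(x, tuple):
        return any(occurs(var, xi, theta) for xi in x)
    return False

# Example: unify(('Knows', 'John', '?x'), ('Knows', 'John', 'Jane'), {})  -> {'?x': 'Jane'}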
Figure 9.8 Pseudocode representing the result of compiling the Append predicate. The function NEW-VARIABLE returns a new variable, distinct from all other variables used so far. The procedure CALL(continuation) continues execution with the specified continuation.
CHAPTER 19
LEARNING FROM EXAMPLES
function LEARN-DECISION-TREE(examples, attributes, parent_examples) returns a tree
  if examples is empty then return PLURALITY-VALUE(parent_examples)
  else if all examples have the same classification then return the classification
  else if attributes is empty then return PLURALITY-VALUE(examples)
  else
    A ← argmax_{a ∈ attributes} IMPORTANCE(a, examples)
    tree ← a new decision tree with root test A
    for each value v of A do
      exs ← {e : e ∈ examples and e.A = v}
      subtree ← LEARN-DECISION-TREE(exs, attributes − A, examples)
      add a branch to tree with label (A = v) and subtree subtree
    return tree
Figure 19.5 The decision tree learning algorithm. The function IMPORTANCE is described in the text. The function PLURALITY-VALUE selects the most common output value among a set of examples, breaking ties randomly.
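A compact Python sketch of this procedure, under some assumptions: each example is a dict of attribute values with the class label stored under the key 'output', the possible values of each attribute are supplied in a values dict, and the IMPORTANCE scoring function is passed in rather than fixed to information gain.

import random
from collections import Counter

def plurality_value(examples):
    """Most common output value, with ties broken randomly."""
    counts = Counter(e['output'] for e in examples)
    top = max(counts.values())
    return random.choice([v for v, c in counts.items() if c == top])

def learn_decision_tree(examples, attributes, parent_examples, importance, values):
    if not examples:
        return plurality_value(parent_examples)
    if len({e['output'] for e in examples}) == 1:
        return examples[0]['output']                   # all examples agree
    if not attributes:
        return plurality_value(examples)
    a = max(attributes, key=lambda attr: importance(attr, examples))
    tree = {}                                          # maps branch value -> subtree
    for v in values[a]:
        exs = [e for e in examples if e[a] == v]
        tree[v] = learn_decision_tree(exs, [x for x in attributes if x != a],
                                      examples, importance, values)
    return (a, tree)                                   # root test A and its branches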
Figure 19.8 An algorithm to select the model that has the lowest validation error. It builds models of increasing complexity and chooses the one with the best empirical error rate, err, on the validation data set. Learner(size, examples) returns a hypothesis whose complexity is set by the parameter size, and which is trained on examples. In CROSS-VALIDATION, each iteration of the for loop selects a different slice of the examples as the validation set and keeps the other examples as the training set. It then returns the average validation-set error over all the folds. Once we have determined which value of the size parameter is best, MODEL-SELECTION returns the model (i.e., learner/hypothesis) of that size, trained on all the training examples, along with its error rate on the held-out test examples.
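A Python sketch of this loop, where learner(size, examples), error_rate(hypothesis, examples), and the list of candidate sizes are assumed placeholders for illustration:

def cross_validation(learner, size, examples, k, error_rate):
    """Average validation error of a size-`size` model over k folds."""
    fold_len = len(examples) // k
    total_err = 0.0
    for fold in range(k):
        validation = examples[fold * fold_len:(fold + 1) * fold_len]
        training = examples[:fold * fold_len] + examples[(fold + 1) * fold_len:]
        h = learner(size, training)                   # train on the other folds
        total_err += error_rate(h, validation)        # measure on the held-out fold
    return total_err / k

def model_selection(learner, examples, k, error_rate, sizes):
    """Pick the size with the lowest cross-validation error, then retrain on all examples."""
    best_size, best_err = None, float('inf')
    for size in sizes:                                # increasing complexity
        err = cross_validation(learner, size, examples, k, error_rate)
        if err < best_err:
            best_size, best_err = size, err
    return learner(best_size, examples)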
Figure 19.25 The ADABOOST variant of the boosting method for ensemble learning. The algorithm generates hypotheses by successively reweighting the training examples. The function WEIGHTED-MAJORITY generates a hypothesis that returns the output value with the highest vote from the hypotheses in h, with votes weighted by z. For regression problems, or for binary classification with two classes −1 and 1, this is ∑_k h[k] z[k].
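The following Python sketch shows a standard exponential-reweighting variant of this scheme for labels in {−1, +1}; the weak learner's interface and the exact reweighting formula are assumptions and differ in detail from the book's pseudocode, but the final prediction is the weighted majority ∑_k z[k] h[k](x).

import math

def adaboost(examples, learner, K):
    """examples: list of (x, y) with y in {-1, +1}; learner(examples, weights) returns a hypothesis h with h(x) in {-1, +1}."""
    n = len(examples)
    w = [1.0 / n] * n                                   # example weights
    h, z = [], []                                       # hypotheses and their vote weights
    for _ in range(K):
        hk = learner(examples, w)
        error = sum(wi for wi, (x, y) in zip(w, examples) if hk(x) != y)
        if error >= 0.5:                                # no better than chance: stop early
            break
        error = max(error, 1e-12)                       # guard against division by zero
        zk = 0.5 * math.log((1 - error) / error)        # vote weight for this hypothesis
        # increase weights of misclassified examples, decrease correct ones, renormalize
        w = [wi * math.exp(-zk * y * hk(x)) for wi, (x, y) in zip(w, examples)]
        total = sum(w)
        w = [wi / total for wi in w]
        h.append(hk)
        z.append(zk)
    # weighted-majority prediction
    return lambda x: 1 if sum(zk * hk(x) for hk, zk in zip(h, z)) >= 0 else -1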