
CHAPTER 7

LOGICAL AGENTS
function KB-AGENT(percept) returns an action
  persistent: KB, a knowledge base
              t, a counter, initially 0, indicating time

  TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
  action ← ASK(KB, MAKE-ACTION-QUERY(t))
  TELL(KB, MAKE-ACTION-SENTENCE(action, t))
  t ← t + 1
  return action

Figure 7.1 A generic knowledge-based agent. Given a percept, the agent adds the percept to its knowledge base, asks the knowledge base for the best action, and tells the knowledge base that it has in fact taken that action.
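As a concrete, if skeletal, illustration, here is one way the loop might look in Python. The tuple-based sentence constructors and the tell/ask interface are hypothetical placeholders, since the figure deliberately leaves the representation unspecified.

# A minimal Python sketch of the KB-AGENT loop. The sentence constructors are
# placeholders (time-stamped tuples); a real agent would supply a logic-specific
# knowledge base object with tell() and ask() methods.

def make_percept_sentence(percept, t):
    return ('Percept', percept, t)

def make_action_query(t):
    return ('ActionAt', t)

def make_action_sentence(action, t):
    return ('Action', action, t)

class KBAgent:
    def __init__(self, kb):
        self.kb = kb     # must support tell(sentence) and ask(query)
        self.t = 0       # time counter, as in the figure

    def __call__(self, percept):
        self.kb.tell(make_percept_sentence(percept, self.t))
        action = self.kb.ask(make_action_query(self.t))
        self.kb.tell(make_action_sentence(action, self.t))
        self.t += 1
        return action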

function TT-ENTAILS?(KB, α) returns true or false
  inputs: KB, the knowledge base, a sentence in propositional logic
          α, the query, a sentence in propositional logic

  symbols ← a list of the proposition symbols in KB and α
  return TT-CHECK-ALL(KB, α, symbols, { })

function TT-CHECK-ALL(KB, α, symbols, model) returns true or false
  if EMPTY?(symbols) then
    if PL-TRUE?(KB, model) then return PL-TRUE?(α, model)
    else return true // when KB is false, always return true
  else
    P ← FIRST(symbols)
    rest ← REST(symbols)
    return (TT-CHECK-ALL(KB, α, rest, model ∪ {P = true})
            and
            TT-CHECK-ALL(KB, α, rest, model ∪ {P = false}))

Figure 7.10 A truth-table enumeration algorithm for deciding propositional entailment. (TT stands for truth table.) PL-TRUE? returns true if a sentence holds within a model. The variable model represents a partial model, an assignment to some of the symbols. The keyword and here is an infix function symbol in the pseudocode programming language, not an operator in propositional logic; it takes two arguments and returns true or false.
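As a concrete illustration, here is a Python sketch of truth-table entailment. It represents sentences as nested tuples and, for simplicity, enumerates complete models with itertools.product rather than building partial models recursively as TT-CHECK-ALL does; the result is the same. Symbol extraction is left to the caller for brevity.

import itertools

def pl_true(sentence, model):
    """Evaluate a propositional sentence in a (complete) model.
    Sentences are proposition-symbol strings or nested tuples such as
    ('not', s), ('and', s1, s2), ('or', s1, s2), ('implies', s1, s2)."""
    if isinstance(sentence, str):
        return model[sentence]
    op, *args = sentence
    if op == 'not':
        return not pl_true(args[0], model)
    if op == 'and':
        return pl_true(args[0], model) and pl_true(args[1], model)
    if op == 'or':
        return pl_true(args[0], model) or pl_true(args[1], model)
    if op == 'implies':
        return (not pl_true(args[0], model)) or pl_true(args[1], model)
    raise ValueError(f'unknown operator {op}')

def tt_entails(kb, alpha, symbols):
    """KB entails alpha iff alpha holds in every model in which KB holds."""
    for values in itertools.product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if pl_true(kb, model) and not pl_true(alpha, model):
            return False   # a model of KB in which alpha fails: no entailment
    return True

# Example: P ∧ (P ⇒ Q) entails Q.
kb = ('and', 'P', ('implies', 'P', 'Q'))
assert tt_entails(kb, 'Q', ['P', 'Q'])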

function PL-RESOLUTION(KB, α) returns true or false
  inputs: KB, the knowledge base, a sentence in propositional logic
          α, the query, a sentence in propositional logic

  clauses ← the set of clauses in the CNF representation of KB ∧ ¬α
  new ← { }
  while true do
    for each pair of clauses Ci, Cj in clauses do
      resolvents ← PL-RESOLVE(Ci, Cj)
      if resolvents contains the empty clause then return true
      new ← new ∪ resolvents
    if new ⊆ clauses then return false
    clauses ← clauses ∪ new

Figure 7.13 A simple resolution algorithm for propositional logic. PL-RESOLVE returns the set of all possible clauses obtained by resolving its two inputs.
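A Python sketch of the same loop, assuming the CNF conversion of KB ∧ ¬α has already been done: each clause is a frozenset of (symbol, polarity) literals, so the empty clause is simply the empty frozenset.

def negate(literal):
    symbol, positive = literal
    return (symbol, not positive)

def pl_resolve(ci, cj):
    """All resolvents of two clauses (each clause a frozenset of literals)."""
    return [(ci - {lit}) | (cj - {negate(lit)})
            for lit in ci if negate(lit) in cj]

def pl_resolution(kb_clauses, neg_alpha_clauses):
    """True iff KB entails alpha, given the clause sets of KB and of ¬alpha."""
    clauses = set(kb_clauses) | set(neg_alpha_clauses)
    while True:
        new = set()
        for ci in clauses:
            for cj in clauses:
                if ci != cj:
                    for r in pl_resolve(ci, cj):
                        if not r:        # empty clause: contradiction derived
                            return True
                        new.add(r)
        if new <= clauses:
            return False
        clauses |= new

# Example: KB = {P, P ⇒ Q} in CNF; query Q, so we add the clause {¬Q}.
P, notP, Q, notQ = ('P', True), ('P', False), ('Q', True), ('Q', False)
kb = [frozenset({P}), frozenset({notP, Q})]
assert pl_resolution(kb, [frozenset({notQ})])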

function PL-FC-ENTAILS?(KB, q) returns true or false
  inputs: KB, the knowledge base, a set of propositional definite clauses
          q, the query, a proposition symbol

  count ← a table, where count[c] is initially the number of symbols in clause c's premise
  inferred ← a table, where inferred[s] is initially false for all symbols
  queue ← a queue of symbols, initially symbols known to be true in KB

  while queue is not empty do
    p ← POP(queue)
    if p = q then return true
    if inferred[p] = false then
      inferred[p] ← true
      for each clause c in KB where p is in c.PREMISE do
        decrement count[c]
        if count[c] = 0 then add c.CONCLUSION to queue
  return false

Figure 7.15 The forward-chaining algorithm for propositional logic. The agenda keeps track of symbols known to be true but not yet "processed." The count table keeps track of how many premises of each implication are not yet proven. Whenever a new symbol p from the agenda is processed, the count is reduced by one for each implication in whose premise p appears (easily identified in constant time with appropriate indexing). If a count reaches zero, all the premises of the implication are known, so its conclusion can be added to the agenda. Finally, we need to keep track of which symbols have been processed; a symbol that is already in the set of inferred symbols need not be added to the agenda again. This avoids redundant work and prevents loops caused by implications such as P ⇒ Q and Q ⇒ P.
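This translates to Python almost line for line. In the sketch below, each definite clause is a (premises, conclusion) pair and the agenda is a deque.

from collections import deque

def pl_fc_entails(clauses, facts, q):
    """Forward chaining for definite clauses.
    clauses: list of (premises, conclusion), premises a list of symbols;
    facts: symbols known true in KB; q: the query symbol."""
    count = {i: len(prem) for i, (prem, _) in enumerate(clauses)}
    inferred = set()
    queue = deque(facts)
    while queue:
        p = queue.popleft()
        if p == q:
            return True
        if p not in inferred:
            inferred.add(p)
            for i, (prem, concl) in enumerate(clauses):
                if p in prem:
                    count[i] -= 1
                    if count[i] == 0:   # all premises proven
                        queue.append(concl)
    return False

# Example: P ⇒ Q and L ∧ M ⇒ P, with facts L, M; query Q.
clauses = [(['P'], 'Q'), (['L', 'M'], 'P')]
assert pl_fc_entails(clauses, ['L', 'M'], 'Q')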

function DPLL-SATISFIABLE?(s) returns true or false
  inputs: s, a sentence in propositional logic

  clauses ← the set of clauses in the CNF representation of s
  symbols ← a list of the proposition symbols in s
  return DPLL(clauses, symbols, { })

function DPLL(clauses, symbols, model) returns true or false
  if every clause in clauses is true in model then return true
  if some clause in clauses is false in model then return false
  P, value ← FIND-PURE-SYMBOL(symbols, clauses, model)
  if P is non-null then return DPLL(clauses, symbols – P, model ∪ {P = value})
  P, value ← FIND-UNIT-CLAUSE(clauses, model)
  if P is non-null then return DPLL(clauses, symbols – P, model ∪ {P = value})
  P ← FIRST(symbols); rest ← REST(symbols)
  return DPLL(clauses, rest, model ∪ {P = true}) or
         DPLL(clauses, rest, model ∪ {P = false})

Figure 7.17 The DPLL algorithm for checking satisfiability of a sentence in propositional logic. The ideas behind FIND-PURE-SYMBOL and FIND-UNIT-CLAUSE are described in the text; each returns a symbol (or null) and the truth value to assign to that symbol. Like TT-ENTAILS?, DPLL operates over partial models.
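A Python sketch using the same (symbol, polarity) literal encoding as the resolution sketch above. The figure defers FIND-PURE-SYMBOL and FIND-UNIT-CLAUSE to the text, so the versions here are naive linear scans rather than anything optimized.

def clause_status(clause, model):
    """True/False if the clause is decided by the partial model, else None."""
    undecided = False
    for sym, pos in clause:
        if sym in model:
            if model[sym] == pos:
                return True
        else:
            undecided = True
    return None if undecided else False

def find_pure_symbol(symbols, clauses, model):
    # a symbol appearing with only one polarity in the not-yet-true clauses
    for s in symbols:
        polarities = {pos for c in clauses
                      if clause_status(c, model) is not True
                      for (sym, pos) in c if sym == s}
        if len(polarities) == 1:
            return s, polarities.pop()
    return None, None

def find_unit_clause(clauses, model):
    # a clause with exactly one unassigned literal and no true literal
    for clause in clauses:
        unassigned = [(s, p) for (s, p) in clause if s not in model]
        if len(unassigned) == 1 and clause_status(clause, model) is None:
            return unassigned[0]
    return None, None

def dpll(clauses, symbols, model):
    statuses = [clause_status(c, model) for c in clauses]
    if all(s is True for s in statuses):
        return True
    if any(s is False for s in statuses):
        return False
    p, value = find_pure_symbol(symbols, clauses, model)
    if p is None:
        p, value = find_unit_clause(clauses, model)
    if p is not None:
        rest = [s for s in symbols if s != p]
        return dpll(clauses, rest, {**model, p: value})
    p, rest = symbols[0], symbols[1:]
    return (dpll(clauses, rest, {**model, p: True}) or
            dpll(clauses, rest, {**model, p: False}))

def dpll_satisfiable(clauses):
    symbols = list({sym for clause in clauses for (sym, _) in clause})
    return dpll(clauses, symbols, {})

# Example: (A ∨ B) ∧ (¬A ∨ B) is satisfiable (set B = true).
clauses = [frozenset({('A', True), ('B', True)}),
           frozenset({('A', False), ('B', True)})]
assert dpll_satisfiable(clauses)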

function WALKSAT(clauses, p, max_flips) returns a satisfying model or failure
  inputs: clauses, a set of clauses in propositional logic
          p, the probability of choosing to do a "random walk" move, typically around 0.5
          max_flips, number of value flips allowed before giving up

  model ← a random assignment of true/false to the symbols in clauses
  for i = 1 to max_flips do
    if model satisfies clauses then return model
    clause ← a randomly selected clause from clauses that is false in model
    if RANDOM(0, 1) ≤ p then
      flip the value in model of a randomly selected symbol from clause
    else flip whichever symbol in clause maximizes the number of satisfied clauses
  return failure

Figure 7.18 The WALKSAT algorithm for checking satisfiability by randomly flipping the values of variables. Many versions of the algorithm exist.
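A sketch of one such version, using the same clause encoding as above. The greedy branch recomputes satisfied-clause counts from scratch, which is simple but slow compared with the incremental bookkeeping a serious implementation would use; failure is signalled with None.

import random

def walksat(clauses, p=0.5, max_flips=10_000):
    """clauses: list of frozensets of (symbol, polarity) literals."""
    symbols = list({sym for clause in clauses for (sym, _) in clause})
    model = {s: random.choice([True, False]) for s in symbols}
    for _ in range(max_flips):
        unsatisfied = [c for c in clauses
                       if not any(model[s] == pos for (s, pos) in c)]
        if not unsatisfied:
            return model
        clause = random.choice(unsatisfied)
        if random.random() <= p:
            # random-walk move: flip a random symbol of the chosen clause
            sym = random.choice([s for (s, _) in clause])
        else:
            # greedy move: flip whichever symbol satisfies the most clauses
            def satisfied_after_flipping(s):
                flipped = {**model, s: not model[s]}
                return sum(any(flipped[t] == pos for (t, pos) in c)
                           for c in clauses)
            sym = max((s for (s, _) in clause), key=satisfied_after_flipping)
        model[sym] = not model[sym]
    return None   # failure: no satisfying model found within max_flips

# Example: (A ∨ B) ∧ (¬A ∨ B); a model with B = true should be found.
clauses = [frozenset({('A', True), ('B', True)}),
           frozenset({('A', False), ('B', True)})]
print(walksat(clauses))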

function HYBRID-WUMPUS-AGENT(percept) returns an action
  inputs: percept, a list, [stench, breeze, glitter, bump, scream]
  persistent: KB, a knowledge base, initially the atemporal "wumpus physics"
              t, a counter, initially 0, indicating time
              plan, an action sequence, initially empty

  TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
  TELL the KB the temporal "physics" sentences for time t
  safe ← {[x, y] : ASK(KB, OK^t_{x,y}) = true}
  if ASK(KB, Glitter^t) = true then
    plan ← [Grab] + PLAN-ROUTE(current, {[1,1]}, safe) + [Climb]
  if plan is empty then
    unvisited ← {[x, y] : ASK(KB, L^{t′}_{x,y}) = false for all t′ ≤ t}
    plan ← PLAN-ROUTE(current, unvisited ∩ safe, safe)
  if plan is empty and ASK(KB, HaveArrow^t) = true then
    possible_wumpus ← {[x, y] : ASK(KB, ¬W_{x,y}) = false}
    plan ← PLAN-SHOT(current, possible_wumpus, safe)
  if plan is empty then // no choice but to take a risk
    not_unsafe ← {[x, y] : ASK(KB, ¬OK^t_{x,y}) = false}
    plan ← PLAN-ROUTE(current, unvisited ∩ not_unsafe, safe)
  if plan is empty then
    plan ← PLAN-ROUTE(current, {[1,1]}, safe) + [Climb]
  action ← POP(plan)
  TELL(KB, MAKE-ACTION-SENTENCE(action, t))
  t ← t + 1
  return action

function PLAN-ROUTE(current, goals, allowed) returns an action sequence
  inputs: current, the agent's current position
          goals, a set of squares; try to plan a route to one of them
          allowed, a set of squares that can form part of the route

  problem ← ROUTE-PROBLEM(current, goals, allowed)
  return SEARCH(problem) // Any search algorithm from Chapter ??

Figure 7.20 A hybrid agent program for the wumpus world. It uses a propositional knowledge base to infer the state of the world, and a combination of problem-solving search and domain-specific code to choose actions. Each time HYBRID-WUMPUS-AGENT is called, it adds the percept to the knowledge base, and then either relies on a previously defined plan or creates a new plan, and pops off the first step of the plan as the action to do next.

function SATPLAN(init, transition, goal, T_max) returns solution or failure
  inputs: init, transition, goal, constituting a description of the problem
          T_max, an upper limit for plan length

  for t = 0 to T_max do
    cnf ← TRANSLATE-TO-SAT(init, transition, goal, t)
    model ← SAT-SOLVER(cnf)
    if model is not null then
      return EXTRACT-SOLUTION(model)
  return failure

Figure 7.22 The SATPLAN algorithm. The planning problem is translated into a CNF sentence in which the goal is asserted to hold at a fixed time step t and axioms are included for each time step up to t. If the satisfiability algorithm finds a model, then a plan is extracted by looking at those proposition symbols that refer to actions and are assigned true in the model. If no model exists, then the process is repeated with the goal moved one step later.
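Below is a self-contained toy illustration of this loop. The domain (a single light that a Toggle action flips, initially off, goal on) and its hand-written CNF translation are assumptions made purely for the example, and a brute-force enumerator stands in for SAT-SOLVER; a real SATPlan would generate the transition axioms from a problem description and call an efficient solver.

import itertools

def translate_to_sat(T):
    """CNF for a toy domain with fluent On_t and action Toggle_t.
    Init: ¬On_0. Goal: On_T. Successor-state axiom: On_{t+1} ⇔ On_t XOR Toggle_t.
    Literals are (name, polarity); each clause is a list of literals."""
    on = lambda t: f"On_{t}"
    tog = lambda t: f"Toggle_{t}"
    clauses = [[(on(0), False)]]                      # initially off
    for t in range(T):
        # the biconditional On_{t+1} ⇔ (On_t XOR Toggle_t) as four CNF clauses
        clauses += [[(on(t+1), True),  (on(t), False), (tog(t), True)],
                    [(on(t+1), True),  (on(t), True),  (tog(t), False)],
                    [(on(t+1), False), (on(t), False), (tog(t), False)],
                    [(on(t+1), False), (on(t), True),  (tog(t), True)]]
    clauses.append([(on(T), True)])                   # goal: on at time T
    return clauses

def brute_force_sat(clauses):
    """Stand-in for SAT-SOLVER: enumerate all assignments."""
    symbols = sorted({s for c in clauses for (s, _) in c})
    for values in itertools.product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if all(any(model[s] == pos for (s, pos) in c) for c in clauses):
            return model
    return None

def satplan(t_max):
    for t in range(t_max + 1):
        model = brute_force_sat(translate_to_sat(t))
        if model is not None:   # extract the true action symbols as the plan
            return [s for s, v in sorted(model.items())
                    if v and s.startswith("Toggle")]
    return None

print(satplan(3))   # ['Toggle_0']: one toggle turns the light on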
CHAPTER 8
FIRST-ORDER LOGIC
CHAPTER 9
INFERENCE IN FIRST-ORDER LOGIC
function UNIFY(x, y, θ = empty) returns a substitution to make x and y identical, or failure
  if θ = failure then return failure
  else if x = y then return θ
  else if VARIABLE?(x) then return UNIFY-VAR(x, y, θ)
  else if VARIABLE?(y) then return UNIFY-VAR(y, x, θ)
  else if COMPOUND?(x) and COMPOUND?(y) then
    return UNIFY(ARGS(x), ARGS(y), UNIFY(OP(x), OP(y), θ))
  else if LIST?(x) and LIST?(y) then
    return UNIFY(REST(x), REST(y), UNIFY(FIRST(x), FIRST(y), θ))
  else return failure

function UNIFY-VAR(var, x, θ) returns a substitution
  if {var/val} ∈ θ for some val then return UNIFY(val, x, θ)
  else if {x/val} ∈ θ for some val then return UNIFY(var, val, θ)
  else if OCCUR-CHECK?(var, x) then return failure
  else return add {var/x} to θ

Figure 9.1 The unification algorithm. The arguments x and y can be any expression: a constant or variable, a compound expression such as a complex sentence or term, or a list of expressions. The argument θ is a substitution, initially the empty substitution, but with {var/val} pairs added to it as we recurse through the inputs, comparing the expressions element by element. In a compound expression such as F(A, B), OP(x) picks out the function symbol F and ARGS(x) picks out the argument list (A, B).
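A Python sketch, with two representation choices worth flagging: variables are strings beginning with '?', and a compound such as Knows(John, x) is the tuple ('Knows', 'John', '?x'), so the figure's separate OP/ARGS and LIST cases collapse into one element-by-element tuple walk. Failure is represented by None.

def is_variable(x):
    # convention for this sketch: variables are strings starting with '?'
    return isinstance(x, str) and x.startswith('?')

def unify(x, y, theta={}):
    """Return a substitution (dict) making x and y identical, or None.
    The default {} is never mutated, so sharing it across calls is safe."""
    if theta is None:
        return None
    if x == y:
        return theta
    if is_variable(x):
        return unify_var(x, y, theta)
    if is_variable(y):
        return unify_var(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):   # thread the substitution element by element
            theta = unify(xi, yi, theta)
            if theta is None:
                return None
        return theta
    return None

def unify_var(var, x, theta):
    if var in theta:
        return unify(theta[var], x, theta)
    if is_variable(x) and x in theta:
        return unify(var, theta[x], theta)
    if occurs_in(var, x, theta):   # occur check
        return None
    return {**theta, var: x}

def occurs_in(var, x, theta):
    if var == x:
        return True
    if is_variable(x) and x in theta:
        return occurs_in(var, theta[x], theta)
    if isinstance(x, tuple):
        return any(occurs_in(var, xi, theta) for xi in x)
    return False

# Example: unify Knows(John, ?x) with Knows(?y, Mother(?y)).
print(unify(('Knows', 'John', '?x'), ('Knows', '?y', ('Mother', '?y'))))
# → {'?y': 'John', '?x': ('Mother', '?y')}; applying the substitution
#   recursively resolves ?x to Mother(John)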

function FOL-FC-ASK(KB, α) returns a substitution or false
  inputs: KB, the knowledge base, a set of first-order definite clauses
          α, the query, an atomic sentence

  while true do
    new ← { } // The set of new sentences inferred on each iteration
    for each rule in KB do
      (p1 ∧ … ∧ pn ⇒ q) ← STANDARDIZE-VARIABLES(rule)
      for each θ such that SUBST(θ, p1 ∧ … ∧ pn) = SUBST(θ, p1′ ∧ … ∧ pn′)
              for some p1′, …, pn′ in KB
        q′ ← SUBST(θ, q)
        if q′ does not unify with some sentence already in KB or new then
          add q′ to new
          φ ← UNIFY(q′, α)
          if φ is not failure then return φ
    if new = { } then return false
    add new to KB

Figure 9.3 A conceptually straightforward, but inefficient, forward-chaining algorithm. On each iteration, it adds to KB all the atomic sentences that can be inferred in one step from the implication sentences and the atomic sentences already in KB. The function STANDARDIZE-VARIABLES replaces all variables in its arguments with new ones that have not been used before.
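A sketch that builds on the unify/is_variable helpers after Figure 9.1. It simplifies one point: the figure's check that q′ "does not unify with some sentence already in KB or new" becomes a ground-membership test, which is adequate when rule conclusions become ground (all their variables appear in the premises).

import itertools

# Rules are (premises, conclusion) pairs of tuple terms; facts are ground atoms.
counter = itertools.count()

def subst(theta, x):
    """Apply substitution theta to expression x, following binding chains."""
    if is_variable(x):
        return subst(theta, theta[x]) if x in theta else x
    if isinstance(x, tuple):
        return tuple(subst(theta, xi) for xi in x)
    return x

def standardize(rule):
    """STANDARDIZE-VARIABLES: rename the rule's variables to fresh ones."""
    n, ren = next(counter), {}
    def walk(x):
        if is_variable(x):
            return ren.setdefault(x, f'{x}_{n}')
        if isinstance(x, tuple):
            return tuple(walk(xi) for xi in x)
        return x
    premises, conclusion = rule
    return [walk(p) for p in premises], walk(conclusion)

def match_premises(premises, facts, theta):
    """Yield every theta that makes all the premises members of facts."""
    if not premises:
        yield theta
        return
    for fact in facts:
        theta1 = unify(subst(theta, premises[0]), fact, theta)
        if theta1 is not None:
            yield from match_premises(premises[1:], facts, theta1)

def fol_fc_ask(rules, facts, alpha):
    facts = set(facts)
    while True:
        new = set()
        for rule in rules:
            premises, conclusion = standardize(rule)
            for theta in match_premises(premises, facts, {}):
                q = subst(theta, conclusion)
                if q not in facts and q not in new:
                    new.add(q)
                    phi = unify(q, alpha)
                    if phi is not None:
                        return phi
        if not new:
            return False
        facts |= new

# Example: Parent(x, y) ∧ Parent(y, z) ⇒ Grandparent(x, z).
rules = [([('Parent', '?x', '?y'), ('Parent', '?y', '?z')],
          ('Grandparent', '?x', '?z'))]
facts = [('Parent', 'Tom', 'Bob'), ('Parent', 'Bob', 'Ann')]
print(fol_fc_ask(rules, facts, ('Grandparent', '?g', 'Ann')))   # {'?g': 'Tom'}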

function FOL-BC-ASK(KB, query) returns a generator of substitutions
  return FOL-BC-OR(KB, query, { })

function FOL-BC-OR(KB, goal, θ) returns a substitution
  for each rule in FETCH-RULES-FOR-GOAL(KB, goal) do
    (lhs ⇒ rhs) ← STANDARDIZE-VARIABLES(rule)
    for each θ′ in FOL-BC-AND(KB, lhs, UNIFY(rhs, goal, θ)) do
      yield θ′

function FOL-BC-AND(KB, goals, θ) returns a substitution
  if θ = failure then return
  else if LENGTH(goals) = 0 then yield θ
  else
    first, rest ← FIRST(goals), REST(goals)
    for each θ′ in FOL-BC-OR(KB, SUBST(θ, first), θ) do
      for each θ′′ in FOL-BC-AND(KB, rest, θ′) do
        yield θ′′

Figure 9.6 A simple backward-chaining algorithm for first-order knowledge bases.
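The generator structure maps directly onto Python generators. This sketch reuses unify from the Figure 9.1 sketch and subst/standardize from the forward-chaining sketch above; rules are again (premises, conclusion) pairs, with facts as rules that have no premises.

def fol_bc_ask(kb, query):
    return fol_bc_or(kb, query, {})

def fol_bc_or(kb, goal, theta):
    for rule in kb:
        premises, conclusion = standardize(rule)
        # a failed unification yields no solutions inside fol_bc_and
        yield from fol_bc_and(kb, premises, unify(conclusion, goal, theta))

def fol_bc_and(kb, goals, theta):
    if theta is None:            # failure propagated from a unification
        return
    if not goals:
        yield theta
        return
    first, rest = goals[0], goals[1:]
    for theta1 in fol_bc_or(kb, subst(theta, first), theta):
        yield from fol_bc_and(kb, rest, theta1)

# Example: Parent(Tom, Bob) and Parent(x, y) ⇒ Ancestor(x, y).
kb = [([], ('Parent', 'Tom', 'Bob')),
      ([('Parent', '?x', '?y')], ('Ancestor', '?x', '?y'))]
for theta in fol_bc_ask(kb, ('Ancestor', '?a', 'Bob')):
    print(subst(theta, ('Ancestor', '?a', 'Bob')))  # ('Ancestor', 'Tom', 'Bob')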



procedure APPEND(ax, y, az, continuation)
  trail ← GLOBAL-TRAIL-POINTER()
  if ax = [ ] and UNIFY(y, az) then CALL(continuation)
  RESET-TRAIL(trail)
  a, x, z ← NEW-VARIABLE(), NEW-VARIABLE(), NEW-VARIABLE()
  if UNIFY(ax, [a | x]) and UNIFY(az, [a | z]) then APPEND(x, y, z, continuation)

Figure 9.8 Pseudocode representing the result of compiling the Append predicate. The function NEW-VARIABLE returns a new variable, distinct from all other variables used so far. The procedure CALL(continuation) continues execution with the specified continuation.
CHAPTER 19
LEARNING FROM EXAMPLES
function LEARN-DECISION-TREE(examples, attributes, parent_examples) returns a tree
  if examples is empty then return PLURALITY-VALUE(parent_examples)
  else if all examples have the same classification then return the classification
  else if attributes is empty then return PLURALITY-VALUE(examples)
  else
    A ← argmax_{a ∈ attributes} IMPORTANCE(a, examples)
    tree ← a new decision tree with root test A
    for each value v of A do
      exs ← {e : e ∈ examples and e.A = v}
      subtree ← LEARN-DECISION-TREE(exs, attributes − A, examples)
      add a branch to tree with label (A = v) and subtree subtree
    return tree

Figure 19.5 The decision tree learning algorithm. The function IMPORTANCE is described in Section ??. The function PLURALITY-VALUE selects the most common output value among a set of examples, breaking ties randomly.
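A Python sketch with examples as dicts carrying a 'label' key. The figure leaves IMPORTANCE to the text; information gain (entropy reduction) is the standard choice and is what this sketch assumes. Branches are grown only for attribute values actually seen in the data.

from collections import Counter
import math

def plurality_value(examples):
    """The most common label among examples (ties broken arbitrarily)."""
    return Counter(e['label'] for e in examples).most_common(1)[0][0]

def importance(a, examples):
    """Information gain of attribute a: entropy before minus after the split."""
    def entropy(exs):
        n = len(exs)
        return -sum((c / n) * math.log2(c / n)
                    for c in Counter(e['label'] for e in exs).values())
    remainder = sum(len(exs) / len(examples) * entropy(exs)
                    for v in {e[a] for e in examples}
                    for exs in [[e for e in examples if e[a] == v]])
    return entropy(examples) - remainder

def learn_decision_tree(examples, attributes, parent_examples):
    if not examples:
        return plurality_value(parent_examples)
    if len({e['label'] for e in examples}) == 1:
        return examples[0]['label']
    if not attributes:
        return plurality_value(examples)
    a = max(attributes, key=lambda attr: importance(attr, examples))
    tree = {'attribute': a, 'branches': {}}
    for v in {e[a] for e in examples}:
        exs = [e for e in examples if e[a] == v]
        tree['branches'][v] = learn_decision_tree(
            exs, [x for x in attributes if x != a], examples)
    return tree

# Toy data: the label depends only on 'raining'.
data = [{'raining': r, 'windy': w, 'label': 'yes' if r else 'no'}
        for r in (True, False) for w in (True, False)]
print(learn_decision_tree(data, ['raining', 'windy'], data))
# {'attribute': 'raining', 'branches': {True: 'yes', False: 'no'}}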

function MODEL-SELECTION(Learner, examples, k) returns a (hypothesis, error rate) pair
  err ← an array, indexed by size, storing validation-set error rates
  training_set, test_set ← a partition of examples into two sets

  for size = 1 to ∞ do
    err[size] ← CROSS-VALIDATION(Learner, size, training_set, k)
    if err is starting to increase significantly then
      best_size ← the value of size with minimum err[size]
      h ← Learner(best_size, training_set)
      return h, ERROR-RATE(h, test_set)

function CROSS-VALIDATION(Learner, size, examples, k) returns error rate
  N ← the number of examples
  errs ← 0
  for i = 1 to k do
    validation_set ← examples[(i − 1) × N/k : i × N/k]
    training_set ← examples − validation_set
    h ← Learner(size, training_set)
    errs ← errs + ERROR-RATE(h, validation_set)
  return errs / k // average error rate on validation sets, across k-fold cross-validation

Figure 19.8 An algorithm to select the model that has the lowest validation error. It builds models of increasing complexity and chooses the one with the best empirical error rate, err, on the validation data set. Learner(size, examples) returns a hypothesis whose complexity is set by the parameter size and which is trained on examples. In CROSS-VALIDATION, each iteration of the for loop selects a different slice of the examples as the validation set, and keeps the other examples as the training set. It then returns the average validation-set error over all the folds. Once we have determined which value of the size parameter is best, MODEL-SELECTION returns the model (i.e., learner/hypothesis) of that size, trained on all the training examples, along with its error rate on the held-out test examples.
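A runnable sketch under stated assumptions: the Learner is polynomial fitting with numpy.polyfit, size is the polynomial degree, and the figure's open-ended loop with its "starting to increase significantly" test is replaced by a fixed cap on size followed by an argmin, which is easier to pin down in code.

import random
import numpy as np

def error_rate(h, examples):
    """Mean squared error of hypothesis h on (x, y) examples."""
    return sum((h(x) - y) ** 2 for x, y in examples) / len(examples)

def poly_learner(size, examples):
    """Fit a polynomial of degree `size`; return it as a hypothesis h(x)."""
    xs, ys = zip(*examples)
    coeffs = np.polyfit(xs, ys, deg=size)
    return lambda x: float(np.polyval(coeffs, x))

def cross_validation(learner, size, examples, k):
    n, total = len(examples), 0.0
    for i in range(k):   # each fold holds out a different slice for validation
        val = examples[i * n // k:(i + 1) * n // k]
        train = examples[:i * n // k] + examples[(i + 1) * n // k:]
        total += error_rate(learner(size, train), val)
    return total / k

def model_selection(learner, examples, k, max_size=8):
    random.shuffle(examples)                 # note: shuffles in place
    split = int(0.8 * len(examples))
    training, test = examples[:split], examples[split:]
    errs = {size: cross_validation(learner, size, training, k)
            for size in range(1, max_size + 1)}
    best = min(errs, key=errs.get)
    h = learner(best, training)              # retrain on all training data
    return h, error_rate(h, test)

data = [(x, 2 * x + 1 + random.gauss(0, 0.1)) for x in np.linspace(0, 1, 40)]
h, err = model_selection(poly_learner, data, k=5)
print(err)   # small: degree 1 suffices for this nearly linear data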

function DECISION-LIST-LEARNING(examples) returns a decision list, or failure
  if examples is empty then return the trivial decision list No
  t ← a test that matches a nonempty subset examples_t of examples
      such that the members of examples_t are all positive or all negative
  if there is no such t then return failure
  if the examples in examples_t are positive then o ← Yes else o ← No
  return a decision list with initial test t and outcome o and remaining tests given by
      DECISION-LIST-LEARNING(examples − examples_t)

Figure 19.11 An algorithm for learning decision lists.
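A sketch in which the candidate tests are caller-supplied predicates (the chapter uses conjunctions of up to k literals, but any finite test set works the same way). Failure is signalled with None.

def decision_list_learning(examples, tests):
    """examples: (x, label) pairs with boolean labels; tests: predicates on x.
    Returns a list of (test, outcome) pairs ending in a default, or None."""
    if not examples:
        return [(lambda x: True, False)]          # the trivial decision list No
    for t in tests:
        matched = [(x, y) for (x, y) in examples if t(x)]
        if matched and len({y for _, y in matched}) == 1:
            outcome = matched[0][1]               # all matched agree
            rest = decision_list_learning(
                [e for e in examples if e not in matched], tests)
            if rest is None:
                return None
            return [(t, outcome)] + rest
    return None                                   # failure: no suitable test

def classify(dlist, x):
    for test, outcome in dlist:
        if test(x):
            return outcome

# Example over integers: "x > 5 → True", then "x even → False", default False.
tests = [lambda x: x > 5, lambda x: x % 2 == 0]
examples = [(2, False), (4, False), (7, True)]
dlist = decision_list_learning(examples, tests)
print([classify(dlist, x) for x in [3, 8]])   # [False, True]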



function ADABOOST(examples, L, K) returns a hypothesis
  inputs: examples, set of N labeled examples (x1, y1), …, (xN, yN)
          L, a learning algorithm
          K, the number of hypotheses in the ensemble
  local variables: w, a vector of N example weights, initially all 1/N
                   h, a vector of K hypotheses
                   z, a vector of K hypothesis weights

  ε ← a small positive number, used to avoid division by zero
  for k = 1 to K do
    h[k] ← L(examples, w)
    error ← 0
    for j = 1 to N do // Compute the total error for h[k]
      if h[k](xj) ≠ yj then error ← error + w[j]
    if error > 1/2 then break from loop
    error ← min(error, 1 − ε)
    for j = 1 to N do // Give more weight to the examples h[k] got wrong
      if h[k](xj) = yj then w[j] ← w[j] · error/(1 − error)
    w ← NORMALIZE(w)
    z[k] ← (1/2) log((1 − error)/error) // Give more weight to accurate h[k]
  return Function(x) : Σ_i z[i] h[i](x)

Figure 19.25 The ADABOOST variant of the boosting method for ensemble learning. The algorithm generates hypotheses by successively reweighting the training examples. The function WEIGHTED-MAJORITY generates a hypothesis that returns the output value with the highest vote from the hypotheses in h, with votes weighted by z. For regression problems, or for binary classification with two classes −1 and 1, this is Σ_k z[k] h[k].
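A sketch for binary labels in {−1, +1}, with a one-dimensional decision stump standing in for the weak learner L. One extra guard (flooring error at ε) is added so a perfect weak hypothesis does not divide by zero; the figure's ε only caps error from above.

import math

def adaboost(examples, L, K):
    """examples: list of (x, y) with y in {-1, +1}; L(examples, w) returns a
    hypothesis h with h(x) in {-1, +1}. Returns the weighted ensemble."""
    N = len(examples)
    w = [1 / N] * N
    h, z = [], []
    eps = 1e-9
    for _ in range(K):
        hk = L(examples, w)
        error = sum(wj for wj, (x, y) in zip(w, examples) if hk(x) != y)
        if error > 0.5:
            break
        error = min(error, 1 - eps)
        error = max(error, eps)    # extra guard for a perfect hypothesis
        for j, (x, y) in enumerate(examples):
            if hk(x) == y:         # down-weight the examples hk got right
                w[j] *= error / (1 - error)
        total = sum(w)
        w = [wj / total for wj in w]
        h.append(hk)
        z.append(0.5 * math.log((1 - error) / error))
    def ensemble(x):               # sign of the z-weighted vote
        return 1 if sum(zk * hk(x) for zk, hk in zip(z, h)) >= 0 else -1
    return ensemble

def stump_learner(examples, w):
    """Weak learner: best weighted 1-D threshold stump."""
    best, best_err = None, float('inf')
    for thresh in sorted({x for x, _ in examples}):
        for sign in (1, -1):
            err = sum(wj for wj, (x, y) in zip(w, examples)
                      if (sign if x >= thresh else -sign) != y)
            if err < best_err:
                best_err, best = err, (thresh, sign)
    t, s = best
    return lambda x: s if x >= t else -s

data = [(x, 1 if x >= 3 else -1) for x in range(6)]
clf = adaboost(data, stump_learner, K=5)
print([clf(x) for x in range(6)])   # [-1, -1, -1, 1, 1, 1]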
