AI Questions
1. For each of the following assertions, say whether it is true or false and support your answer with examples
or counterexamples where appropriate.
a. An agent that senses only partial information about the state cannot be perfectly rational.
False. Perfect rationality is defined relative to the information the agent actually receives: a perfectly rational agent maximizes expected performance given its percept sequence so far, so sensing only partial information about the state does not by itself rule out perfect rationality.
b. There exist task environments in which no pure reflex agent can behave rationally.
True. A pure reflex agent chooses its action from the current percept alone, so in a partially observable environment it cannot maintain the state estimate needed to act rationally. For example, a vacuum agent that perceives only the dirt status of its current square (and not its location) cannot know which way to move once the square is clean.
d. The input to an agent program is the same as the input to the agent function.
False. The agent function takes as input the entire percept sequence up to that point (percept history), whereas the agent
program takes the current percept only.
f. Suppose an agent selects its action uniformly at random from the set of possible actions. There exists a
deterministic task environment in which this agent is rational.
True. If every action leads to an equally good outcome (for example, an environment in which all actions earn the same reward), then it does not matter which action is taken, and selecting one uniformly at random is rational.
g. It is possible for a given agent to be perfectly rational in two distinct task environments.
True. For example, take two task environments that differ only in parts of the state space that no optimal policy ever reaches; an agent acting optimally is perfectly rational in both.
2. For each of the following activities, give (1) a PEAS description of the task environment and (2) the characteristics (properties) of the task environment.
Playing soccer.
Shopping for used AI books on the Internet.
Playing a tennis match.
Practicing tennis against a wall.
Mathematician’s theorem-proving assistant
Autonomous Mars rover
4. Explore the differences between agent functions and agent programs by answering the following:
a. Can there be more than one agent program that implements a given agent function? Give an example,
or show why one is not possible.
Yes. The agent program is concrete code that realizes the agent function (the mapping from percept sequences to actions), and the same mapping can be computed in many different ways. For example, a table-lookup program and a program that computes the same action from rules implement the same agent function; a short sketch after part (d) illustrates this.
b. Are there agent functions that cannot be implemented by any agent program?
Yes. Any agent function that is not computable cannot be implemented by an agent program; for example, an agent function that requires deciding whether an arbitrary program supplied as a percept will halt (the halting problem) has no implementing program.
c. Given a fixed machine architecture, does each agent program implement exactly one agent function?
Yes; the agent’s behavior is fixed by the architecture and program.
d. Given an architecture with n bits of storage, how many different possible agent programs are there?
There are at most 2^n distinct agent programs, since a program is determined by the contents of the n bits of storage (although most of these programs will not do anything useful).
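
As a concrete illustration of part (a), here is a minimal sketch (the two-square vacuum world, its percept format, and its action names are illustrative assumptions, not part of the question): two different agent programs that implement the same agent function.

def table_driven_agent(percept):
    """Agent program 1: explicit lookup table."""
    table = {
        ('A', 'Dirty'): 'Suck',
        ('A', 'Clean'): 'Right',
        ('B', 'Dirty'): 'Suck',
        ('B', 'Clean'): 'Left',
    }
    return table[percept]

def rule_based_agent(percept):
    """Agent program 2: the same mapping, computed by rules instead of lookup."""
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    return 'Right' if location == 'A' else 'Left'

# Both programs realize the same agent function: they return the same action
# for every possible percept, so one agent function has (at least) two programs.
for percept in [('A', 'Dirty'), ('A', 'Clean'), ('B', 'Dirty'), ('B', 'Clean')]:
    assert table_driven_agent(percept) == rule_based_agent(percept)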
5. Explain why problem formulation must follow goal formulation.
In goal formulation, we decide which aspects of the world we are interested in, and which can be ignored or
abstracted away. Then in problem formulation we decide how to manipulate the important aspects (and ignore
the others). If we did problem formulation first we would not know what to include and what to leave out. That
said, it can happen that there is a cycle of iterations between goal formulation, problem formulation, and
problem solving until one arrives at a sufficiently useful and efficient solution.
6. Define in your own words the following terms: state, state space, search tree, search node, goal,
action, successor function, and branching factor.
A state is a situation that an agent can find itself in. We distinguish two types of states: world states (the actual
concrete situations in the real world) and representational states (the abstract descriptions of the real world that
are used by the agent in deliberating about what to do).
A state space is a graph whose nodes are the set of all states, and whose links are actions that transform one
state into another.
A search tree is a tree (a connected graph with no cycles) in which the root node corresponds to the start state and the children of each node are the states that are reachable from it by taking any single action.
A search node is a node in the search tree.
A goal is a state that the agent is trying to reach.
An action is something that the agent can choose to do.
A successor function describes the agent's options: given a state, it returns a set of (action, state) pairs, where each state is the one reached by taking the corresponding action from the given state.
The branching factor of a search tree is the (maximum) number of successors of a node, i.e., the number of actions available to the agent.
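
For concreteness, a minimal sketch of a successor function in the sense just defined (the three-location map is an invented toy example, not from the text):

ROADS = {
    'Home':   [('drive-to-Mall', 'Mall'), ('drive-to-Office', 'Office')],
    'Mall':   [('drive-to-Home', 'Home')],
    'Office': [('drive-to-Home', 'Home'), ('drive-to-Mall', 'Mall')],
}

def successor_fn(state):
    """Return the set of (action, resulting state) pairs available in `state`."""
    return set(ROADS[state])

print(successor_fn('Home'))
# {('drive-to-Mall', 'Mall'), ('drive-to-Office', 'Office')}
# Two pairs, so the branching factor at 'Home' is 2; the maximum over all
# states (here also 2) is the branching factor of the search tree.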
7. Consider a state space where the start state is number 1 and each state k has two successors:
numbers 2k and 2k + 1.
a. Draw the portion of the state space for states 1 to 15.
b. Suppose the goal state is 11. List the order in which nodes will be visited for breadth-first search, depth-limited search with limit 3, and iterative deepening search.
a. The state space for states 1 to 15 is a complete binary tree rooted at 1, in which each state k has children 2k (left) and 2k + 1 (right):

                    1
            2               3
        4       5       6       7
      8   9   10 11   12 13   14 15
b. Breadth-first: 1 2 3 4 5 6 7 8 9 10 11
Depth-limited: 1 2 4 8 9 5 10 11
Iterative deepening: 1; 1 2 3; 1 2 4 5 3 6 7; 1 2 4 8 9 5 10 11
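
These orders can be checked mechanically. A minimal Python sketch (my own helper code under the stated 2k / 2k + 1 successor rule, not part of the original answer):

from collections import deque

GOAL = 11

def successors(k):
    return [2 * k, 2 * k + 1]

def bfs(start=1):
    visited, frontier = [], deque([start])
    while frontier:
        k = frontier.popleft()
        visited.append(k)
        if k == GOAL:
            return visited
        frontier.extend(successors(k))
    return visited

def depth_limited(k, limit, visited):
    visited.append(k)
    if k == GOAL or limit == 0:
        return k == GOAL
    return any(depth_limited(s, limit - 1, visited) for s in successors(k))

print(bfs())                       # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]

order = []
depth_limited(1, 3, order)
print(order)                       # [1, 2, 4, 8, 9, 5, 10, 11]

for limit in range(4):             # iterative deepening: limits 0..3
    order = []
    depth_limited(1, limit, order)
    print(order)                   # 1; 1 2 3; 1 2 4 5 3 6 7; 1 2 4 8 9 5 10 11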
8. Prove each of the following statements:
a. Breadth-first search is a special case of uniform-cost search.
Uniform-cost search reduces to breadth-first search when all step costs are equal. Equivalently, best-first search reduces to breadth-first search when f(n) is the number of edges from the start node to n, and to uniform-cost search when f(n) = g(n).
b. Depth-first search is a special case of best-first (tree) search.
Best-first search reduces to depth-first search by, for example, setting f(n) = -(number of edges from the start state to n), which forces deep nodes on the current branch to be expanded before shallow nodes on other branches.
c. Uniform-cost search is a special case of A* search.
A* search reduces to uniform-cost search when the heuristic function is zero everywhere, i.e. h(n) = 0 for all n.
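
These reductions can be made concrete with one generic best-first search whose only parameter is the evaluation function f. A minimal sketch (the graph representation and the tiny example graph are assumptions, not from the original):

import heapq
import itertools

def best_first(start, goal, successors, f):
    """Generic best-first search; successors(state) -> iterable of (next_state, step_cost)."""
    counter = itertools.count()          # tie-breaker so the heap never compares states
    frontier = [(f(start, 0), next(counter), start, 0, [start])]
    expanded = set()                     # strict expanded list
    while frontier:
        _, _, state, g, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in expanded:
            continue
        expanded.add(state)
        for nxt, cost in successors(state):
            heapq.heappush(frontier, (f(nxt, g + cost), next(counter), nxt, g + cost, path + [nxt]))
    return None

# Special cases, obtained purely by the choice of f (h is some heuristic):
#   uniform-cost search:   f(n, g) = g
#   breadth-first search:  f(n, g) = g        with all step costs equal to 1 (g is then the depth)
#   depth-first search:    f(n, g) = -g       with unit step costs (deep nodes come off the heap first)
#   A* search:             f(n, g) = g + h(n)
#   A* collapses to uniform-cost search when h(n) = 0 for every n.

# Example: uniform-cost search on a tiny invented graph.
graph = {'S': [('A', 1), ('B', 4)], 'A': [('G', 5)], 'B': [('G', 1)], 'G': []}
print(best_first('S', 'G', lambda s: graph[s], lambda n, g: g))   # ['S', 'B', 'G']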
9. The heuristic path algorithm is a best-first search in which the evaluation function is
f(n) = (2 − w)g(n) + wh(n). What kind of search does this perform for w = 0, w = 1, and w = 2?
For w = 1, f(n) = g(n) + h(n), which is exactly A* search.
For w = 0, f(n) = 2g(n); multiplying g by a constant does not change the ordering of the frontier, so this behaves like uniform-cost search.
For w = 2, f(n) = 2h(n); again the constant factor is irrelevant, so this behaves like greedy best-first search.
10. What are the pros (if any) and cons (if any) to using A* versus Uniform Cost Search?
Explain; consider both time and space.
Evaluating the heuristic in A* costs extra time per node (and the heuristic values take a little extra space), but a well-informed heuristic can cut down the number of expanded states dramatically, which usually more than repays that overhead in both running time and memory. With an uninformative heuristic (h close to zero everywhere), A* behaves like uniform-cost search and the heuristic evaluation is pure overhead; the sketch below illustrates the difference on a small grid.
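
As a rough illustration, the following sketch (an assumed obstacle-free 5x5 grid, not from the original) counts node expansions for uniform-cost search and for A* with the Manhattan-distance heuristic; ties on f are broken in favour of deeper nodes.

import heapq

N = 5
START, GOAL = (0, 0), (4, 4)

def neighbors(cell):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < N and 0 <= c + dc < N:
            yield (r + dr, c + dc)

def manhattan(cell):
    return abs(cell[0] - GOAL[0]) + abs(cell[1] - GOAL[1])

def count_expansions(h):
    """Best-first search with f = g + h; h = 0 gives uniform-cost search."""
    frontier = [(h(START), 0, START)]          # entries are (f, -g, cell)
    expanded = set()
    while frontier:
        _, neg_g, cell = heapq.heappop(frontier)
        if cell == GOAL:
            return len(expanded)
        if cell in expanded:
            continue
        expanded.add(cell)
        g = -neg_g
        for nxt in neighbors(cell):
            heapq.heappush(frontier, (g + 1 + h(nxt), -(g + 1), nxt))
    return len(expanded)

print("UCS expansions:", count_expansions(lambda c: 0))   # expands most of the grid
print("A* expansions:", count_expansions(manhattan))      # expands far fewer cells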
11. Trace the A* search algorithm, using the Total Manhattan Distance heuristic, to find the shortest path from the initial state shown below to the goal state.

Initial state:    Goal state:
1 2 3             1 2 3
8 _ 5             4 5 6
4 7 6             7 8 _
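
A small sketch of the heuristic used in this trace (assuming the reconstructed states above, with 0 standing for the blank):

GOAL = ((1, 2, 3),
        (4, 5, 6),
        (7, 8, 0))

def manhattan(state, goal=GOAL):
    """Total Manhattan distance: sum of |dr| + |dc| over all tiles (blank excluded)."""
    goal_pos = {tile: (r, c) for r, row in enumerate(goal)
                for c, tile in enumerate(row)}
    total = 0
    for r, row in enumerate(state):
        for c, tile in enumerate(row):
            if tile != 0:
                gr, gc = goal_pos[tile]
                total += abs(r - gr) + abs(c - gc)
    return total

INITIAL = ((1, 2, 3),
           (8, 0, 5),
           (4, 7, 6))
print(manhattan(INITIAL))   # 6 for the assumed initial state above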
12. Consider the graph shown in the figure below. We can search it with a variety of different
algorithms, resulting in different search trees. Each of the trees (labeled G1 through G7) was
generated by searching this graph, but with a different algorithm. Assume that children of a
node are visited in alphabetical order. Each tree shows all the nodes that have been visited.
Numbers next to nodes indicate the relevant “score” used by the algorithm for those nodes.
For each tree, indicate whether it was generated with
1. Depth first search
2. Breadth first search
3. Uniform cost search
4. A* search
5. Greedy Best-first search
In all cases a strict expanded list was used. Furthermore, if you choose an algorithm that uses
a heuristic function, say whether we used
H1: heuristic 1 = {h(A) = 3, h(B) = 6, h(C) = 4, h(D) = 3}
H2: heuristic 2 = {h(A) = 3, h(B) = 3, h(C) = 0, h(D) = 2}