Unit 1
PART - A
1. Define Artificial Intelligence (AI).
"The exciting new effort to make computers think... machines with minds, in the full and literal sense" - Haugeland.
"The art of creating machines that perform functions that require intelligence when performed by people" - Kurzweil.
"A field of study that seeks to explain and emulate intelligent behavior in terms of computational processes" - Schalkoff.
"The branch of computer science that is concerned with the automation of intelligent behavior" - Luger & Stubblefield.
"The study of mental faculties through the use of computational models" - Charniak & McDermott.
"The study of the computations that make it possible to perceive, reason and act" - Winston.
The Turing test proposed by Alan Turing was designed to provide a satisfactory operational definition of intelligence.
Turing defined intelligent behavior as the ability to achieve human-level performance in all cognitive tasks, sufficient
to fool an interrogator.
Automated Reasoning:
To use the stored information to answer questions and to draw new conclusions.
Machine Learning:
To adapt to new circumstances and to detect and extrapolate patterns.
5. Define an agent.
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon the
environment through effectors.
A rational agent is one that does the right thing. Here, the right thing is whatever causes the agent to be most successful. That
leaves us with the problem of deciding how and when to evaluate the agent’s success.
An omniscient agent knows the actual outcome of its action and can act accordingly; but omniscience is impossible in
reality.
9. What are the factors that a rational agent should depend on at any given time?
1. The performance measure that defines the degree of success.
2. Everything that the agent has perceived so far. We will call this complete perceptual history the percept sequence.
3. What the agent knows about the environment.
4. The actions that the agent can perform.
For each possible percept sequence, an ideal rational agent should do whatever action is expected to maximize its
performance measure on the basis of the evidence provided by the percept sequence & whatever built-in knowledge
that the agent has.
An agent program is a function that implements the agent’s mapping from percepts to actions.
The agent program runs on some sort of computing device, which is called the architecture.
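The percept-to-action mapping can be sketched as a tiny table-driven agent program; the percept names, actions and table below are hypothetical, purely for illustration.

```python
# A minimal sketch of an agent program as a mapping from the percept
# sequence to actions (a table-driven agent); the percepts, actions,
# and lookup table are hypothetical.

def make_table_driven_agent(table):
    """Return an agent program that looks up the percept sequence so far."""
    percepts = []          # the complete perceptual history (percept sequence)

    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")

    return program

# Hypothetical lookup table for a two-step vacuum-like world.
table = {
    ("dirty",): "Suck",
    ("clean",): "Right",
    ("clean", "dirty"): "Suck",
}

agent = make_table_driven_agent(table)
```

The table grows exponentially with the length of the percept sequence, which is why real agent programs compute actions instead of tabulating them.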
If an agent’s sensing apparatus gives it access to the complete state of the environment, then we say the environment
is accessible to the agent.
If the next state of the environment is completely determined by the current state and the actions selected by the agent,
then the environment is deterministic.
In an episodic environment, the agent’s experience is divided into episodes. Each episode consists of the agent perceiving
and then acting. The quality of its action depends only on the episode itself, because subsequent episodes do not depend
on what actions occur in previous episodes.
Discrete Vs Continuous:
If there is a limited number of distinct, clearly defined percepts & actions, we say that the environment is discrete.
15. What are the different types of problems that a problem-solving agent may face?
Single-state problems, multiple-state problems, contingency problems, exploration problems.
A problem is really a collection of information that the agent will use to decide what to do.
18. List the basic elements that are to be included in a problem definition.
Initial state, operator, successor function, state space, path, goal test, path cost.
Blind (uninformed) search has no information about the number of steps or the path cost from the current state to the goal;
it can only distinguish a goal state from a non-goal state. Heuristic (informed) search uses problem-specific knowledge, and
is usually better at finding a solution.
a. Breadth-first search (BFS)
b. Uniform cost search
c. Depth-first search (DFS)
d. Depth-limited search
e. Iterative deepening search
f. Bidirectional search
Informed strategies include greedy search & A* search.
BFS searches breadth-wise, expanding the shallowest node first; DFS searches depth-wise, expanding the deepest node first.
Uniform cost search is optimal & it chooses the best solution depending on the path cost.
25. Write the time & space complexity associated with depth limited search.
Time complexity = O(b^l), Space complexity = O(bl)
where b is the branching factor and l is the depth limit.
Iterative deepening is a strategy that sidesteps the issue of choosing the best depth limit by trying all possible depth
limits: first depth 0, then depth 1, then depth 2, & so on.
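Iterative deepening can be sketched as follows; the example graph is hypothetical.

```python
# A minimal sketch of iterative deepening search: run depth-limited DFS
# with limits 0, 1, 2, ... until the goal is found. The graph below is
# hypothetical.

def depth_limited(node, goal, successors, limit):
    """Depth-first search down to a fixed depth limit; returns a path or None."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in successors.get(node, []):
        path = depth_limited(child, goal, successors, limit - 1)
        if path is not None:
            return [node] + path
    return None

def iterative_deepening(start, goal, successors, max_depth=50):
    """Try depth limits 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        path = depth_limited(start, goal, successors, limit)
        if path is not None:
            return path
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "E": ["G"]}
```

Because each pass restarts from depth 0, iterative deepening combines the low memory use of DFS with the completeness of BFS.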
The drawback of DFS is that it can get stuck going down the wrong path. Many problems have very deep or even
infinite search trees, so DFS will never be able to recover from an unlucky choice at one of the nodes near the top of the
tree. So DFS should be avoided for search trees with large or infinite maximum depths.
The idea behind bidirectional search is to simultaneously search both forward from the initial state & backward from
the goal & stop when the two searches meet in the middle.
Depth-limited search avoids the pitfalls of DFS by imposing a cutoff on the maximum depth of a path. This cutoff can be
implemented by a special depth-limited search algorithm, or by using the general search algorithm with operators that
keep track of the depth.
Greedy Search
If we minimize the estimated cost to reach the goal, h(n), we get greedy search. The search time is usually decreased
compared to uninformed algorithms, but the algorithm is neither optimal nor complete.
Iterative improvement algorithms keep only a single state in memory, but can get stuck on local maxima.
IDA* (iterative deepening A*): in this algorithm each iteration is a DFS, just as in regular iterative deepening. The
depth-first search is modified to use an f-cost limit rather than a depth limit. Thus each iteration expands all nodes
inside the contour for the current f-cost.
A* search
A* search minimizes the total estimated solution cost f(n) = g(n) + h(n), where g(n) is the cost so far and h(n) is the
estimated cost to the goal. We can reduce the space requirements of A* with memory-bounded algorithms such as IDA* & SMA*.
SMA* (simplified memory-bounded A*):
* It is optimal if enough memory is available to store the shallowest optimal solution path. Otherwise it returns the best
solution that can be reached with the available memory.
* When enough memory is available for the entire search tree, the search is optimally efficient.
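The f(n) = g(n) + h(n) idea behind A* can be sketched as follows; the graph and heuristic values are made up for illustration.

```python
import heapq

# A minimal sketch of A* search: always expand the frontier node with
# the smallest f(n) = g(n) + h(n). The graph and heuristic values below
# are hypothetical.

def a_star(start, goal, neighbors, h):
    """neighbors: node -> list of (child, step_cost); h: admissible heuristic."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for child, cost in neighbors.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2
                heapq.heappush(frontier, (g2 + h(child), g2, child, path + [child]))
    return None, float("inf")

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)]}
h_values = {"A": 3, "B": 2, "C": 1, "D": 0}
```

With an admissible h (never overestimating), the first time the goal is popped its path cost is optimal.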
Examples of iterative improvement algorithms:
* Hill climbing.
* Simulated annealing.
Plateau: a plateau is an area of the state space where the evaluation function is essentially flat. The search will conduct a
random walk.
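Hill climbing, including the way it halts at a local maximum, can be sketched as follows; the 1-D landscape is hypothetical.

```python
# A minimal sketch of hill climbing: move to the best neighbor as long
# as it improves the evaluation function; it stops at a local maximum
# (or on a plateau). The 1-D landscape below is hypothetical.

def hill_climbing(state, value, neighbors):
    """Greedy local search; returns a state where no neighbor is better."""
    while True:
        best = max(neighbors(state), key=value, default=state)
        if value(best) <= value(state):     # local maximum or plateau
            return state
        state = best

landscape = [1, 2, 5, 4, 3, 8, 9]           # values indexed by state 0..6

def value(s):
    return landscape[s]

def neighbors(s):
    return [n for n in (s - 1, s + 1) if 0 <= n < len(landscape)]
```

Starting at state 0 the search climbs to the local maximum at state 2 (value 5) and stops, even though the global maximum is at state 6; this is exactly the local-maximum pitfall mentioned above.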
• generating, translating;
• planning;
• recognition;
• robotics;
• theorem proving;
• speech recognition;
• game playing;
• problem solving;
PART- B QUESTIONS
3. What are constraint satisfaction problems? How can you formulate them as
search problems?
Read more: CS1351 - Artificial Intelligence - Anna University Engineering Question Bank 4
http://www.questionbank4u.com/index.php?action=view&listid=95&subject=26&semester=11#ixzz1K8rgqh73
Under Creative Commons License: Attribution
Unit-2
PART - A
Logical agents apply inference to a knowledge base to derive new information and make decisions.
In Wumpus world -A pure reflex agent cannot know for sure when to Climb, because neither having the gold nor being
in the start square is part of the percept; they are things the agent knows by forming a representation of the world.
Systems that reason with causal rules are called model-based reasoning systems
A situation is a snapshot of the world at an interval of time during which nothing changes
Diagnostic rules infer the presence of hidden properties directly from the percept-derived information; for example, in the
wumpus world, inferring that a pit is nearby from a perceived breeze.
Once the gold is found, it is necessary to change strategies. So now we need a new set of action values.
• Wrapping parentheses: ( … )
• ∧ ... and [conjunction]
• ∨ ... or [disjunction]
• ⇒ ... implies [implication / conditional] (or alternatively →)
• ¬ ... not [negation] (or alternatively ~)
• To get a proof for Horn sentences, apply Modus Ponens repeatedly until nothing can be done
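Repeated application of Modus Ponens over Horn sentences is exactly forward chaining; a minimal sketch, with hypothetical rules:

```python
# A minimal sketch of forward chaining on Horn sentences: apply Modus
# Ponens repeatedly until no new facts can be derived. The rules below
# are hypothetical.

def forward_chain(facts, rules):
    """rules: list of (premises, conclusion); returns all derivable facts."""
    known = set(facts)
    changed = True
    while changed:                       # repeat until nothing can be done
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)    # Modus Ponens: premises + rule => conclusion
                changed = True
    return known

rules = [
    (("A", "B"), "C"),
    (("C",), "D"),
    (("D", "E"), "F"),
]
```

Note that "F" is never derived from facts {A, B} because its premise "E" is never established; the loop terminates once no rule fires.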
– Functions, which are a subset of relations where there is only one “value” for any given “input”
• Examples:
– Relations: Brother-of, bigger-than, outside, part-of, has-color, occurs-after, owns, visits, precedes, ...
Universal quantification
(∀x)P(x) means that P holds for all values of x in the domain associated with that variable.
Existential quantification
(∃x)P(x) means that P holds for some value of x in the domain associated with that variable.
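Over a finite domain, the two quantifiers behave like Python's all() and any(); the domain and predicates below are hypothetical.

```python
# Quantifiers over a finite domain, read as all() and any(); the domain
# and the predicates P and Q are hypothetical.

domain = range(1, 10)

def P(x):
    return x * x >= x        # holds for every positive integer

def Q(x):
    return x % 7 == 0        # holds for some, but not all, of the domain

forall_P = all(P(x) for x in domain)   # (forall x) P(x)
exists_Q = any(Q(x) for x in domain)   # (exists x) Q(x)
forall_Q = all(Q(x) for x in domain)
```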
16. What are the three levels in describing a knowledge-based agent?
Knowledge level
Logical level
Implementation level
The semantics of the language defines the truth of each sentence with
respect to each possible world.
The two components of a language are: a) Syntax b) Semantics.
ii. Proof theory – a set of rules for deducing the entailments of a set of sentences.
The formal definition of entailment is this: KB ⊨ α if and only if, in every model in which KB is true, α is also true.
Informally, the truth of α is contained in the truth of KB.
Modus Ponens: from α and α ⇒ β, we can infer β. If inference is defined to generate all possible applications of
inference rules, then the search for conclusions that lead to the desired goal is a proof.
AND-Introduction rule states that from a list of sentences we can infer their conjunction:
α1, α2, ……, αn
____________________
α1 ∧ α2 ∧ …… ∧ αn

OR-Introduction rule states that from a sentence, we can infer its disjunction with anything:
αi
____________________
α1 ∨ α2 ∨ …… ∨ αn
PART- B QUESTIONS
3. Explain the resolution for first order logic and inference rule
Unit-3
The most straightforward approach is to use state-space search. Because the descriptions of actions in a planning
problem specify both preconditions and effects, it is possible to search in either direction: either forward from the initial
state or backward from the goal
The main advantage of backward search is that it allows us to consider only relevant actions.
A set of actions that make up the steps of the plan. These are taken from the set of actions in the planning problem. The
“empty” plan contains just the Start and Finish actions. Start has no preconditions and has as its effect all the literals in
the initial state of the planning problem. Finish has no effects and has as its preconditions the goal literals of the
planning problem.
Partial-order planning has a clear advantage in being able to decompose problems into sub problems. It also has a
disadvantage in that it does not represent states directly, so it is harder to estimate how far a partial-order plan is from
achieving a goal.
A planning graph consists of a sequence of levels that correspond to time steps in the plan, where level 0 is the initial
state. Each level contains a set of literals and a set of actions.
8. What is Conditional planning?
Also known as contingency planning, conditional planning deals with incomplete information by constructing a
conditional plan that accounts for each possible situation or contingency that could arise
Action monitoring is the process of checking the preconditions of each action as it is executed, rather than checking the
preconditions of the entire remaining plan.
A planning agent uses beliefs about actions and their consequences to search for a solution.
i. The planner should be able to represent the states, goals and actions.
ii. The planner should be able to add new actions at any time.
iii. The planner should be able to use a divide-and-conquer method for solving very large problems.
12. What are the components that are needed for representing an action?
i. Action description.
ii. Precondition.
iii. Effect.
13. What are the components that are needed for representing a plan?
i. A set of plan steps.
ii. A set of ordering constraints.
iii. A set of variable binding constraints.
iv. A set of causal links.
15. What are the ways in which incomplete and incorrect information can be handled in planning?
They can be handled with the help of two planning agents, namely:
i. Conditional planning agent.
ii. Replanning agent.
A solution is defined as a plan that an agent can execute and that guarantees the achievement of goal.
A complete plan is one in which every precondition of every step is achieved by some other step.
A consistent plan is one in which there are no contradictions in the ordering or binding constraints.
Conditional planning is a way in which the incompleteness of information is incorporated in terms of adding a
conditional step, which involves if – then rules.
Induction heuristics is a method which enables procedures to learn descriptions from positive and negative examples.
The two induction heuristics are:
i. Require-link heuristics.
ii. Forbid-link heuristics.
23. What are the principles that are followed by any learning procedure?
The law states that, “When there is doubt about what to do, do nothing.”
The law states that, “When an object or situation known to be an example fails to match a general model, create a
special-case exception model.”
The law states that, “You cannot learn anything unless you almost know it already.”
Similarity net is an approach for arranging models. A similarity net is a representation in which nodes denote models,
links connect similar models, and links are tied to difference descriptions.
The elevation of a link to the status of a describable node is a kind of reification. When a link is so elevated then it is
said to be a reified link.
31. Differentiate between Partial-order plan & Total-order plan.
• Partial-order plan: only some ordering constraints between steps are recorded; a plan generation algorithm can be
applied to transform a partial-order plan into a total-order plan.
• Total-order plan: the steps are arranged in one strict sequence.
The process of checking the preconditions of each action as it is executed, rather than checking the preconditions of the
entire remaining plan. This is called action monitoring.
Execution monitoring is related to conditional planning in the following way. An agent that builds a plan and then
executes it while watching for errors is, in a sense, taking into account the possible conditions that constitute execution
errors.
34. List the two different ways to deal with the problems arising from incomplete and incorrect information
• Conditional planning
• Execution monitoring
35. Differentiate between Forward state-space search and Backward state-space search.
1. Forward state-space search : It searches forward from the initial situation to the goal situation.
2. Backward state-space search: It searches backward from the goal situation to the initial situation.
36. What are the steps of planning problems using state space research
methodology?
• The initial state of the search is the initial state from the planning problem. In general,
each state will be a set of positive ground literals; literals not appearing are false.
• The actions that are applicable to a state are all those whose preconditions are satisfied.
The successor state resulting from an action is generated by adding the positive effect
literals and deleting the negative effect literals. (In the first-order case, we must apply
the unifier from the preconditions to the effect literals.) Note that a single successor
function works for all planning problems—a consequence of using an explicit action
representation.
• The goal test checks whether the state satisfies the goal of the planning problem.
• The step cost of each action is typically 1, although it would be easy to allow different costs for different actions.
A simple replanning agent uses execution monitoring and splices in subplans as needed.
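The state-space formulation above can be sketched as a small forward (progression) planner; the two STRIPS-style actions below are hypothetical.

```python
from collections import deque

# A minimal sketch of forward state-space planning: a state is a set of
# positive literals; an action applies when its preconditions are
# satisfied, and its successor adds the positive effects and deletes the
# negative ones. The tiny pick-up/put-down domain is hypothetical.

# action name -> (preconditions, add list, delete list)
actions = {
    "PickUp":  ({"HandEmpty", "OnTable"}, {"Holding"}, {"HandEmpty", "OnTable"}),
    "PutDown": ({"Holding"}, {"HandEmpty", "OnTable"}, {"Holding"}),
}

def successors(state):
    for name, (pre, add, delete) in actions.items():
        if pre <= state:                       # preconditions satisfied
            yield name, (state - delete) | add

def plan(initial, goal):
    """Breadth-first search from the initial state; step cost 1 per action."""
    frontier = deque([(frozenset(initial), [])])
    visited = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                      # goal test
            return steps
        for name, nxt in successors(state):
            nxt = frozenset(nxt)
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, steps + [name]))
    return None
```

Note that one successor function works for any domain expressed this way, which is the point made above about explicit action representations.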
PART- B QUESTIONS
Unit-4
PART - A
1. Why does uncertainty arise ?
• Agents almost never have access to the whole truth about their
environment.
2. State the reasons why first-order logic fails to cope with a domain like medical diagnosis.
Three reasons:
a. Laziness: it is too much work to list the complete set of antecedents or consequents needed to ensure an
exceptionless rule.
b. Theoretical ignorance: medical science has no complete theory for the domain.
c. Practical ignorance: even if we know all the rules, we may be uncertain about a particular patient because not all
the necessary tests have been or can be run.
The term utility is used in the sense of "the quality of being useful." The utility of a state is relative to the agent whose
preferences the utility function is supposed to represent.
Utility theory says that every state has a degree of usefulness, or utility, to an agent, and that the agent will prefer states
with higher utility. We use utility theory to represent and reason with preferences.
The basic idea is that an agent is rational if and only if it chooses the action that yields the highest expected utility,
averaged over all the possible outcomes of the action. This is known as MEU.
Preferences, as expressed by utilities, are combined with probabilities in the general theory of rational decisions called
decision theory. Decision theory = Probability theory + Utility theory.
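The MEU principle can be sketched as follows; the actions, outcome probabilities and utilities are hypothetical.

```python
# A minimal sketch of the MEU principle: pick the action whose expected
# utility, averaged over all possible outcomes, is highest. The actions,
# probabilities, and utilities below are hypothetical.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def meu_action(actions):
    """actions: dict mapping action name -> list of (probability, utility)."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

actions = {
    "take_umbrella": [(0.3, 70), (0.7, 80)],   # rain / no rain
    "leave_it":      [(0.3, 0), (0.7, 100)],
}
```

Here taking the umbrella has expected utility 0.3*70 + 0.7*80 = 77, against 70 for leaving it, so a rational agent takes the umbrella.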
P(A) denotes the unconditional or prior probability that the proposition A is true. It is important to remember that P(A)
can only be used when there is no other information.
Once the agent has obtained some evidence B concerning the previously unknown proposition A, the conditional or
posterior probability, written P(A|B), is used. P(A|B) can only be used when all that is known is B.
10. Define probability distribution.
A probability distribution assigns a probability to each possible value of a random variable.
Eg. P(Weather) = <0.7, 0.2, 0.08, 0.02> for the values <sunny, rain, cloudy, snow>.
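A small sketch of prior and conditional probabilities computed from a joint distribution table; the variables and numbers below are illustrative, not authoritative.

```python
# A minimal sketch of a discrete distribution and the conditional
# probability P(A|B) computed from a joint table; the toothache/cavity
# numbers are hypothetical.

# Joint distribution over (Toothache, Cavity).
joint = {
    (True, True): 0.04,  (True, False): 0.01,
    (False, True): 0.06, (False, False): 0.89,
}

def p_cavity():
    """Prior (unconditional) probability P(cavity)."""
    return sum(p for (toothache, cavity), p in joint.items() if cavity)

def p_cavity_given_toothache():
    """P(cavity | toothache) = P(cavity and toothache) / P(toothache)."""
    p_both = joint[(True, True)]
    p_toothache = sum(p for (toothache, _), p in joint.items() if toothache)
    return p_both / p_toothache
```

The prior P(cavity) = 0.10, but conditioning on the toothache evidence raises it to 0.04 / 0.05 = 0.8, which is exactly the prior-versus-posterior distinction above.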
15. What are the ways in which one can understand the semantics of a belief
network?
There are two ways: to view the network as a representation of the joint probability distribution, or to view it as an
encoding of a collection of conditional independence statements.
The basic task is to reason in terms of prior probabilities of conjunctions, but for the most part, we will use conditional
probabilities as a vehicle for probabilistic inference.
Polytrees: here at most one undirected path between any two nodes is present.
E−X is the evidential support for X: the evidence variables "below" X that are connected to X through its children.
A multiple connected graph is one in which two nodes are connected by more than one path.
21. List the 3 basic classes of algorithms for evaluating multiply connected graphs.
• Clustering methods
• Conditioning methods
• Stochastic simulation methods
Uncertainty means that many of the simplifications that are possible with deductive inference are no longer valid.
A deterministic node has its value specified exactly by the values of its parents, with no uncertainty.
24. What are all the various uses of a belief network?
• Making decisions based on probabilities in the network and on the agent's utilities.
• Deciding which additional evidence variables should be observed in order to gain useful
information.
• Performing sensitivity analysis to understand which aspects of the model have the greatest
impact on the probabilities of the query variables (and therefore must be accurate).
Fuzzy set theory is a means of specifying how well an object satisfies a vague description.
PART- B QUESTIONS
Unit-5
PART - A
1. What is meant by learning?
Learning is a goal-directed process of a system that improves the knowledge or the knowledge representation of the
system by exploring experience and prior knowledge.
A transformation from one representation to another causes no loss of information; the two representations can be
constructed from each other.
The same information and the same inferences are achieved with the same amount of effort.
• knowledge acquisition (example: learning physics) — learning new symbolic information coupled with the ability to
apply that information in an effective manner
• skill refinement (example: riding a bicycle, playing the piano) — occurs at a subconscious level by virtue of repeated
practice
Instead of using examples as foci for generalization, one can use them directly to solve new problems.
The background knowledge is sufficient to explain the hypothesis. The agent does not learn anything factually new
from the instance. It extracts general rules from single examples by explaining the examples and generalizing the
explanation
7. What is meant by Relevance-Based Learning?
• uses prior knowledge in the form of determinations to identify the relevant attributes
Knowledge-based inductive learning finds inductive hypotheses that explain a set of observations with the help of
background knowledge.
An inference algorithm that derives only entailed sentences is called sound or truth preserving.
Learning a function from examples of its inputs and outputs is called inductive learning.
The performance of inductive learning algorithms is measured by their learning curve, which shows the prediction
accuracy as a function of the number of observed examples.
• It serves as a good introduction to the area of inductive learning and is easy to implement.
13. What is the function of decision trees?
A decision tree takes as input an object or situation described by a set of properties, and outputs a yes/no decision.
Decision trees therefore represent Boolean functions.
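A decision tree of this kind can be sketched as a nested structure; the restaurant-style attributes below are hypothetical.

```python
# A minimal sketch of a decision tree for a Boolean function: internal
# nodes test an attribute, leaves output the yes/no decision. The
# attributes here are hypothetical.

# A tree is either a bool (leaf) or (attribute, subtree_if_true, subtree_if_false).
tree = ("Patrons?",                       # any customers present?
        ("Hungry?", True, False),         # patrons present: go in only if hungry
        False)                            # no patrons: decide "no"

def decide(tree, example):
    """Walk the tree using the example's attribute values; return the decision."""
    if isinstance(tree, bool):
        return tree
    attribute, if_true, if_false = tree
    branch = if_true if example[attribute] else if_false
    return decide(branch, example)
```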
• Learning to fly
The task of reinforcement learning is to use rewards to learn a successful agent function.
A passive learner watches the world going by, and tries to learn the utility of being in various states. An active learner
acts using the learned information, and can use its problem generator to suggest explorations of unknown portions of
the environment.
17. State the design issues that affect the learning element.
18. State the factors that play a role in the design of a learning system.
• Learning element
• Performance element
• Critic
• Problem generator
The technique of memoization is used to speed up programs by saving the results of computation. The basic idea is to
accumulate a database of input/output pairs; when the function is called, it first checks the database to see if it can
avoid solving the problem from scratch.
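A minimal sketch of memoization as described, caching input/output pairs in a dictionary:

```python
# Memoization: accumulate a database (dict) of input/output pairs so
# that repeated calls avoid recomputation.

def memoize(f):
    cache = {}                       # the database of input/output pairs

    def wrapper(*args):
        if args not in cache:        # solve from scratch only on a cache miss
            cache[args] = f(*args)
        return cache[args]

    wrapper.cache = cache
    return wrapper

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

Without the cache this recursion is exponential; with it, fib(30) touches each subproblem once.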
The agent learns an action-value function giving the expected utility of taking a given action in a given state. This is
called Q-Learning.
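The Q-learning update rule can be sketched as follows; the tiny two-state world, actions and reward are hypothetical.

```python
from collections import defaultdict

# A minimal sketch of the Q-learning update
#   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a));
# the states, actions, and reward below are hypothetical.

def q_update(Q, s, a, reward, s_next, actions, alpha=0.5, gamma=0.9):
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])

Q = defaultdict(float)              # action-value function, initially 0
actions = ["left", "right"]

# One experience tuple: in state 0, action "right" gave reward 10, led to state 1.
q_update(Q, 0, "right", 10, 1, actions)
```

After this single update, Q(0, "right") moves halfway (alpha = 0.5) from 0 toward the sample value 10, i.e. to 5.0.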
Any situation in which both inputs and outputs of a component can be perceived is called supervised learning. Learning
when there is no hint at all about the correct outputs is called unsupervised learning.
Bayesian learning simply calculates the probability of each hypothesis, given the data,
and makes predictions on that basis. That is, the predictions are made by using all the hypotheses, weighted by their
probabilities, rather than by using just a single “best” hypothesis.
Many real-world problems have hidden variables (sometimes called latent variables) which are not observable in the
data that are available for learning.
The basic idea behind cross-validation is to estimate how well the current hypothesis will predict unseen data.
It starts with a set of one or more individuals and applies selection and reproduction operators to evolve an individual
that is successful, as measured by a fitness function.
4. Information about the results of possible actions the agent can take.
7. Goals that describe classes of states whose achievement maximizes the agent's utility.
If the function is the parity function, which returns 1 if and only if an even number of inputs are 1, then an
exponentially large decision tree will be needed.
A majority function, which returns 1 if more than half of its inputs are 1.
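The two functions can be written directly:

```python
# The parity and majority functions from the text, as a quick sketch
# over a list of 0/1 inputs.

def parity(inputs):
    """1 iff an even number of the inputs are 1."""
    return 1 if sum(inputs) % 2 == 0 else 0

def majority(inputs):
    """1 iff more than half of the inputs are 1."""
    return 1 if sum(inputs) > len(inputs) / 2 else 0
```

Parity depends on every input jointly, which is why a decision tree for it must be exponentially large, while majority is also hard for trees but trivial for a linear threshold unit.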
1. Least-mean-square (LMS) approach
2. Adaptive dynamic programming (ADP) approach
3. Temporal-difference (TD) approach
PART- B QUESTIONS