Artificial Intelligence Unit-I Question Bank (Solved) (Theory & MCQ) Theory
Group No: 3
Q. For each of the following activities, give PEAS description –
playing soccer, Shopping for AI books on internet, playing a
tennis match
Ans:
Playing soccer — Performance measure: goals scored and conceded, winning the game; Environment: field, ball, own team, opposing team; Actuators: legs, head, and body for running, kicking, and heading; Sensors: camera, touch and orientation sensors, communication channels.
Shopping for AI books on the Internet — Performance measure: price, quality, and relevance of the books obtained; Environment: the Internet, web sites, vendors; Actuators: keyboard input for following links and filling in forms; Sensors: web pages (HTML text and graphics).
Playing a tennis match — Performance measure: points and games won, winning the match; Environment: court, net, ball, opponent; Actuators: racket, arms and legs for movement; Sensors: eyes (camera), trackers for the ball and the opponent's position.
Group No: 4
Q. Explain the 5 components required to define a problem
with an example.
Ans:
A problem-solving agent works by defining "problems" and searching for
their solutions. Problem solving is the part of artificial intelligence
that uses a number of techniques, such as trees, graphs, and heuristic
algorithms, to solve a problem. A problem is formally defined by five
components:
1) Initial State: the state the agent starts in; it is the starting
point from which the agent moves toward the specified goal.
2) Actions: a description of the possible actions available to the
agent in a given state.
3) Transition Model: a description of what each action does, i.e., the
state that results from performing an action in a given state.
4) Goal test: determines whether a given state is a goal state; when
the goal is reached, the search stops, and the cost of achieving the
goal can then be determined.
5) Path cost: a function that assigns a numeric cost to each path; it
reflects the agent's performance measure and may include hardware,
software, and human working costs.
Example:
8 Puzzle
1) Initial state: any configuration of the eight tiles on the 3x3 board.
2) Actions: move the blank space Left, Right, Up, or Down.
3) Transition model: Given a state and an action, return the
resulting state.
4) Goal test: Check whether the goal configuration is reached.
5) Path cost: Number of steps to reach the goal (each step costs 1).
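The five components above can be written down directly as code. The sketch below (not from the source; the tile numbering with the blank stored as 0 and the move names are my own choices) formulates the 8-puzzle:

```python
# A minimal sketch of the five problem components for the 8-puzzle.
# States are 9-tuples read row by row; 0 marks the blank square.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def actions(state):
    """Possible moves of the blank: Up, Down, Left, Right."""
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = []
    if row > 0: moves.append('Up')
    if row < 2: moves.append('Down')
    if col > 0: moves.append('Left')
    if col < 2: moves.append('Right')
    return moves

def result(state, action):
    """Transition model: return the state after sliding the blank."""
    i = state.index(0)
    j = i + {'Up': -3, 'Down': 3, 'Left': -1, 'Right': 1}[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

def goal_test(state):
    """Goal test: is the goal configuration reached?"""
    return state == GOAL

def step_cost(state, action):
    """Path cost: every move costs 1, so path cost = number of steps."""
    return 1
```

The initial state is simply whatever tuple the search starts from; the remaining four components are the functions above.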
Group No: 5
Once visited, nodes are marked. These iterations continue until all
the nodes of the graph have been successfully visited and marked.
The full form of BFS is Breadth-first search.
Example of BFS
In the following example of BFS, we have used a graph having 6
vertices.
Steps 1) to 5) (the BFS traversal figures are omitted)
Traversing iterations are repeated until all nodes are visited.
2. DFS
The space complexity for DFS is O(h) where h is the maximum height
of the tree.
Example of DFS
Step 1)
Step 2)
Visit the element at the top of the stack, for example 1, and go to
its adjacent nodes. Since 0 has already been visited, we visit
vertex 2.
Step 3)
Vertex 2 has an unvisited nearby vertex in 4. Therefore, we add that
in the stack and visit it.
Step 4)
Finally, we visit the last vertex, 3; it does not have any unvisited
adjoining nodes. We have now completed the traversal of the graph
using the DFS algorithm.
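The two traversals described above can be sketched as follows. This is a minimal illustration, not from the source; the six-vertex adjacency list is an assumption standing in for the omitted figures:

```python
from collections import deque

# BFS with a FIFO queue and DFS with a stack, on a small
# adjacency-list graph of six vertices (0-5).

GRAPH = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1, 5], 4: [2], 5: [3]}

def bfs(graph, start):
    visited, queue, order = {start}, deque([start]), []
    while queue:
        node = queue.popleft()          # FIFO: shallowest node first
        order.append(node)
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                queue.append(nbr)
    return order

def dfs(graph, start):
    visited, stack, order = set(), [start], []
    while stack:
        node = stack.pop()              # LIFO: deepest node first
        if node not in visited:
            visited.add(node)
            order.append(node)
            # push neighbours in reverse so the first one is popped first
            for nbr in reversed(graph[node]):
                stack.append(nbr)
    return order
```

On this graph, BFS visits the vertices level by level while DFS follows one branch as deep as it can before backtracking.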
Group No: 6
Q. Explain Uniform-cost Search Algorithm.
Ans:
Uniform-cost search is a searching algorithm used for traversing a
weighted tree or graph. This algorithm comes into play when a
different cost is available for each edge. The primary goal of the
uniform-cost search is to find a path to the goal node which has the
lowest cumulative cost. Uniform-cost search expands nodes
according to their path costs from the root node. It can be used to
solve any graph/tree where the optimal cost is in demand. A
uniform-cost search algorithm is implemented using a priority queue.
It gives maximum priority to the lowest cumulative cost. Uniform
cost search is equivalent to BFS algorithm if the path cost of all edges
is the same.
Advantages:
Uniform cost search is optimal because at every state the path with
the least cost is chosen.
Disadvantages:
It does not care about the number of steps involved in searching and
is only concerned with path cost, because of which this algorithm may
get stuck in an infinite loop.
Completeness:
Uniform-cost search is complete, such as if there is a solution, UCS
will find it.
Time Complexity:
Let C* be the cost of the optimal solution, and ε the minimum cost of
a single step toward the goal node. Then the number of steps is
1 + ⌊C*/ε⌋; we add 1 because we start from depth 0 and end at depth
⌊C*/ε⌋.
Hence, the worst-case time complexity of Uniform-cost search is
O(b^(1 + ⌊C*/ε⌋)).
Space Complexity:
By the same logic, the worst-case space complexity of Uniform-cost
search is O(b^(1 + ⌊C*/ε⌋)).
Optimal:
Uniform-cost search is always optimal as it only selects a path with
the lowest path cost.
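The priority-queue implementation described above can be sketched as follows; the example graph and its edge costs are illustrative assumptions, not from the source:

```python
import heapq

# Uniform-cost search: the frontier is a priority queue ordered by the
# cumulative path cost g(n), so the cheapest frontier node expands first.

def ucs(graph, start, goal):
    """graph: {node: [(neighbor, edge_cost), ...]}. Returns (cost, path)."""
    frontier = [(0, start, [start])]
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)  # lowest g(n) first
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for nbr, step in graph[node]:
            if nbr not in explored:
                heapq.heappush(frontier, (cost + step, nbr, path + [nbr]))
    return None

# Illustrative weighted graph: the direct edge S->B costs 4, but the
# detour S->A->B costs only 3, so UCS prefers it.
g = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 12)],
     'B': [('G', 3)], 'G': []}
```

Calling `ucs(g, 'S', 'G')` returns the cheapest path rather than the one with the fewest edges.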
Group No: 7
Q. Explain the following search strategies with an example
Ans:
1. Bidirectional search
A) It runs two simultaneous searches:
Forward search from the source node toward the goal node.
Backward search from the goal node toward the source node.
The search terminates when the two search graphs intersect.
Bidirectional search is complete in BFS and incomplete in
DFS.
The time complexity of bidirectional search is O(b^(d/2)), since
each search need only proceed to half the solution depth.
2. Greedy BFS
Best-first search algorithm is often referred to as a greedy algorithm.
•This is because it quickly commits to the most desirable path as
soon as its heuristic weight becomes the most desirable.
•It tries to expand the node that is closest to the goal.
•It evaluates nodes by using just the heuristic function,
i.e. f(n) = h(n).
Time Complexity: The worst-case time complexity of Greedy
best-first search is O(b^m).
Space Complexity: The worst-case space complexity of Greedy
best-first search is O(b^m).
Where m is the maximum depth of the search space.
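Evaluating nodes by f(n) = h(n) alone can be sketched as follows; the small graph and the heuristic table are illustrative assumptions, not from the source:

```python
import heapq

# Greedy best-first search: the frontier is ordered by the heuristic
# value h(n) only, ignoring the path cost accumulated so far.

def greedy_bfs(graph, h, start, goal):
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)  # smallest h(n) first
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in graph[node]:
            if nbr not in visited:
                heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None

# Illustrative graph and heuristic: B "looks" closer to G than A does,
# so the search expands B first.
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
h = {'S': 5, 'A': 3, 'B': 1, 'G': 0}
```

Because only h(n) is consulted, the path returned is fast to find but not guaranteed optimal.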
Group No:8
Ans. A non-admissible heuristic may overestimate the cost of reaching
the goal. It may or may not result in an optimal solution. However,
the advantage is that sometimes a non-admissible heuristic expands far
fewer nodes. Thus, the total cost (= search cost + path cost) may
actually be lower than that of an optimal solution found using an
admissible heuristic.
Ans.
In the given figure, all the tiles are out of position; hence, for
this state, using Euclidean distance,
h2 = sqrt(5) + 1 + sqrt(2) + sqrt(2) + 2 + sqrt(5) + sqrt(5) + 2 ≈ 14.53.
Ans.
In the given figure, all the tiles are out of position; hence, for
this state, using Manhattan distance,
h3 = 3 + 1 + 2 + 2 + 2 + 3 + 3 + 2 = 18.
• A heuristic is represented by h(n), e.g.:
– Euclidean Distance
– Manhattan Distance, etc.
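The two distances named above can be computed per tile and summed over a whole 8-puzzle state. The sketch below is mine (helper names and the cells-numbered-0-to-8 convention are assumptions, not from the source):

```python
import math

# Per-tile Manhattan and Euclidean distance on a 3x3 board, and the
# total heuristic value of a state (states are 9-tuples, blank = 0).

def manhattan(i, j):
    """|row difference| + |column difference| between cells i and j."""
    (r1, c1), (r2, c2) = divmod(i, 3), divmod(j, 3)
    return abs(r1 - r2) + abs(c1 - c2)

def euclidean(i, j):
    """Straight-line distance between cells i and j."""
    (r1, c1), (r2, c2) = divmod(i, 3), divmod(j, 3)
    return math.hypot(r1 - r2, c1 - c2)

def h_total(state, goal, dist):
    """Sum the chosen distance over all tiles 1-8 (blank excluded)."""
    return sum(dist(state.index(t), goal.index(t)) for t in range(1, 9))
```

For example, corner-to-corner on the board gives Manhattan distance 4 but Euclidean distance 2*sqrt(2) ≈ 2.83, which is why the two heuristics rank states differently.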
Group No:9
Ans:
A* search is the most commonly known form of best-first search. It
uses the heuristic function h(n) and the cost to reach node n from the
start state, g(n). It combines the features of UCS and greedy best-first
search, by which it solves the problem efficiently. The A* search
algorithm finds the shortest path through the search space using the
heuristic function. This search algorithm expands fewer search-tree
nodes and provides optimal results faster. The A* algorithm is similar
to UCS except that it uses g(n) + h(n) instead of g(n).
The time complexity of A* depends on the heuristic.
Algorithm of A* search:
Step 1: Place the starting node in the OPEN list.
Step 2: Check whether the OPEN list is empty or not; if the list is
empty, then return failure and stop.
Step 3: Select the node from the OPEN list which has the smallest
value of the evaluation function (g + h); if node n is the goal node,
then return success and stop, otherwise:
Step 4: Expand node n, generate all of its successors, and put n
into the CLOSED list. For each successor n', check whether n' is
already in the OPEN or CLOSED list; if not, then compute the
evaluation function for n' and place it into the OPEN list.
Step 5: Else, if node n' is already in OPEN or CLOSED, then it should
be attached to the back pointer which reflects the lowest g(n') value.
Advantages:
o This algorithm can solve very complex problems.
Disadvantages:
Example:
Solution:
Initialization: {(S, 5)}
Iteration3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7),
(S-->G, 10)}
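The f(n) = g(n) + h(n) evaluation can be sketched in code. The graph edges and heuristic values below are my own assumptions, chosen so the search returns a cost-6 path S→A→C→G like the one appearing in the trace above:

```python
import heapq

# A* search: the frontier is ordered by f(n) = g(n) + h(n); graph edges
# carry step costs and h is a lookup table of heuristic estimates.

def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]
    closed = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)  # smallest g+h first
        if node == goal:
            return g, path
        if node in closed:
            continue
        closed.add(node)
        for nbr, step in graph[node]:
            if nbr not in closed:
                heapq.heappush(
                    frontier,
                    (g + step + h[nbr], g + step, nbr, path + [nbr]))
    return None

# Illustrative graph and heuristic table (assumptions, not the figure
# from the question bank).
graph = {'S': [('A', 1), ('G', 10)], 'A': [('B', 2), ('C', 1)],
         'B': [], 'C': [('D', 3), ('G', 4)], 'D': [], 'G': []}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
```

With an admissible h, the first time the goal is popped from the frontier its path is optimal; here the direct edge S→G (cost 10) loses to S→A→C→G (cost 6).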
Group No: 10
Agent:
In artificial intelligence, an intelligent agent (IA) is anything which
perceives its environment, takes actions autonomously in order to
achieve goals, and may improve its performance with learning or
may use knowledge.
Agent function:
This is a function in which actions are mapped from a certain percept
sequence.
Agent program:
An intelligent agent is a program that can make decisions or perform
a service based on its environment, user input and experiences.
These programs can be used to autonomously gather information on
a regular, programmed schedule or when prompted by the user in
real time.
Rationality:
Rationality is nothing but the status of being reasonable, sensible,
and having a good sense of judgment. Rationality is concerned with
expected actions and results depending upon what the agent has
perceived. Performing actions with the aim of obtaining useful
information is an important part of rationality.
Autonomy:
Autonomous systems are defined as systems that are able to
accomplish a task, achieve a goal, or interact with their surroundings
with minimal to no human involvement.
Architecture of agent:
Architecture is the machinery that the agent executes on. It is a
device with sensors and actuators, for example: a robotic car, a
camera, a PC. The agent program is an implementation of an agent
function.
Performance measure:
Performance Measure of Agent − It is the criterion which determines
how successful an agent is. Behavior of Agent − It is the action that
the agent performs after any given sequence of percepts. Percept − It
is the agent's perceptual input at a given instance.
Group No: 11
Simulated Annealing
Ans) It is a method for solving unconstrained and bound-constrained
optimization problems. The method models the physical process of
heating a material and then slowly lowering the temperature to
decrease defects, thus minimizing the system energy. Simulated
annealing is a technique that is used to find the best solution, for
either a global minimum or maximum, without having to check every
single possible solution that exists.
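The heat-then-cool idea above can be sketched as follows. This is a minimal illustration, not from the source; the objective f(x) = x², the ±1 neighbour rule, and the geometric cooling schedule are all assumptions:

```python
import math
import random

# Simulated annealing minimizing f(x) = x^2 over the integers.
# Worse moves are accepted with probability exp(-delta / t), which
# shrinks as the temperature t is slowly lowered.

def simulated_annealing(f, x0, t0=10.0, cooling=0.95, steps=500, seed=0):
    rng = random.Random(seed)
    x, t = x0, t0
    for _ in range(steps):
        candidate = x + rng.choice([-1, 1])     # random neighbour
        delta = f(candidate) - f(x)
        # Always accept improvements; sometimes accept worse moves so
        # the search can escape local optima while it is still "hot".
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = candidate
        t *= cooling                            # slowly lower the temperature
    return x

best = simulated_annealing(lambda x: x * x, x0=20)
```

With the temperature held near zero the acceptance rule degenerates to pure hill descent, which is why the cooling schedule matters: it decides how long the search keeps exploring before it settles.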
Group No: 12
B.) MCQ
Group No: 1
1. Which of the following is not an AI?
A. intelligent optical character recognition
B. Market automation
C. Facial detection
D. natural language processing
Ans: B. Market Automation
2. Artificial Intelligence is about_____.
A. Playing a game on Computer
B. Making a machine Intelligent
C. Programming on Machine with your Own Intelligence
D. Putting your intelligence in Machine
Ans: B. Making a machine intelligent.
B. A set of computer programs that produce output that would
be considered to reflect intelligence if it were generated by
humans
C. The study of mental faculties through the use of mental
models implemented on a computer
D. All of the mentioned
A. Expert Systems
B. Gaming
C. Vision Systems
D. All of the above
Ans- D. All of the above
A. Boolean Algebra
B. Turing Test
C. Logarithm
D. Algorithm
A.BASICS
B.FORTRAN
C.IPL
D.LISP
A.Critical
B.Heuristic
C.Value based
D.Analytical
Ans - B. Heuristic
Group No: 2
1. An agent perceives its environment through ___
A. Actuator
B. Sensor
C. Performance
D. Software
Ans: B Sensor
B. The agent’s prior knowledge of the environment
C. The actions that the agent can perform
D. All of the mentioned
Ans: D All of the mentioned
C. Observing
D. None of the mentioned
Ans: B Learning
C. An embedded program controlling line following robot
D. All of these
Ans: D All of these
Group No: 3
1. If a robot is able to change its own trajectory as per the external
conditions, then the robot is considered as the__
a. Mobile
b. Non-Servo
c. Open Loop
d. Intelligent
ANS: d. Intelligent
a. Environment creator
b. Environment Generator
c. Both a & b
d. None of the mentioned
a.Expert Systems
b.Gaming
c.Vision Systems
d.All of the above
b) Solution
c) Agent
d) Observation
ANS: Problem
a.Expert Systems
b.Gaming
c.Vision Systems
d.All of the above
a) Problem
b) Solution
c) Agent
d) Observation
Answer : a) Problem
A. Discrete
B. Continuous
C. Episodic
D. Non-deterministic
Ans : A
Group No: 4
1. Which of the following is not a functionality of problem solving
agent?
A. Goal Formulation
B. Problem Formulation
C. Performance
D. Search
Ans: Performance
C. Performance
D. Search
Ans: Performance
A. Goal Formulation
B. Problem Formulation
C. Performance
D. Search
Ans: Performance
11. The process of removing detail from a given state representation
is called __
A. Extraction
B. Abstraction
C. Information Retrieval
D. Mining of data
Ans: Abstraction
D. Problem Formulation
Ans: Problem Formulation
22. Searching using a query on the Internet is a use of ___ type of agent.
A. Offline agent
B. Online agent
C. Goal Based agent
D. Both A and B
Ans: Goal Based agent
Group No: 5
1. Which search is implemented with an empty first-in-first-out
queue?
a) Depth-first search
b) Breadth-first search
c) Bidirectional search
d) None of the mentioned
Ans. b) Breadth-first search
a) O(b)
b) O(bl)
c) O(m)
d) O(bm)
Ans. d) O(bm)
a) Breadth-first algorithm
b) Tree algorithm
c) Bidirectional search algorithm
d) None of the mentioned
a) Depth-limited search
b) Depth-first search
c) Iterative deepening search
d) Bidirectional search
a) Stacks
b) Queues
c) Priority queues
d) None of the mentioned
Ans. b) Queues
a) Stacks
b) Queues
c) Priority queues
d) None of the mentioned
Ans. A) stacks
a) Depth-limited search
b) Depth-first search
c) Breadth-first search
d) None of the mentioned
Group No: 6
1. What is the general term of Blind searching?
a) Informed Search
b) Uninformed Search
c) Informed & Unformed Search
d) Heuristic Search
Answer: b
Answer: d
Answer: d
Answer: c
5. Among which of the following mentioned statements will the
conditional probability be applied?
i.The number occurred on rolling a die one time.
ii.What will the temperature tomorrow?
iii.What card will get on picking a card from a fair deck of 52
cards?
iv.What output will we get on tossing a coin once?
Options:
a) Only iv.
b) All i., ii., iii. and iv.
c) ii. and iv.
d) Only ii.
Answer: d
Answer: a
Answer: b
8. State whether the following condition is true or false?
"The independent events are affected by the happening of some
other events which may occur simultaneously or have occurred
before it."
a) True
b) False
Answer: b
Group No: 7
1. In Bidirectional Search, the forward search is from the _____ node
toward the _____ node.
a. Source, Goal (Answer)
b. Goal, Source
c. Current, source
d. Source, current
c. Heuristic
d. Greedy
8. Greedy BFS tries to expand the node that is ____ to the goal.
a. Closest (Answer)
b. Farthest
c. Similar
d. Different
10. What is the other name of the greedy best first search?
a) Blind Search
b) Pure Heuristic Search (Answer)
c) Combinatorial Search
d) Divide and Conquer Search
Group No:8
1.A heuristic is a way of trying _____
(a)Informed Search
(b)Best First Search
(c)Heuristic Search
(d)All of the mentioned
Ans:
D
(a)Heuristic function
(b)Path cost from start node to current node
(c)Path cost from start node to current node + Heuristic
cost
(d)Average of Path cost from start node to current node
and Heuristic cost
Ans:
A
(a)Heuristic function
(b)Path cost from start node to current node
(c)Path cost from start node to current node + Heuristic
cost
(d)Average of Path cost from start node to current node
and Heuristic cost
C
6.Constraint satisfaction problems on finite domains are
typically solved using a form of _____
(a)Search Algorithms
(b)Heuristic Search Algorithms
(c)Greedy Search Algorithms
(d)All of the mentioned
Ans:
D
9.Heuristic function h(n) is a function that estimates ___
A. How close a state is to a goal
B. How far a state is to a goal
C. How close a node is to a goal
D. How far a node is to a goal
Ans:
A
D. Path cost
Ans:
A
Group No:9
Ans: D
2. A* Search Algorithm __
A. does not expand the node which has the lowest value of
f(n),
B. finds the shortest path through the search space using the
heuristic function i.e f(n)=g(n) + h(n)
C. terminates when the goal node is not found.
D. All of the above
Ans: B
Ans: D
i. Random Search
ii. Depth First Search
iii. Breadth First Search
iv. Best First Search
Options:
a. Only iv.
b. All i., ii., iii. and iv.
c. ii. and iv.
d. None of the above
Answer
a. Only iv.
Ans) d) 4
2. What is the rule of simple reflex agent?
a) Simple-action rule
b) Condition-action rule
c) Simple & Condition-action rule
d) None of the mentioned
Ans) b) Condition-action rule
5. What is rational at any given time depends on?
a) The performance measure that defines the criterion of success
b) The agent’s prior knowledge of the environment
c) The actions that the agent can perform
d) All of the mentioned
Ans) d) All of the mentioned
6. Rational agent is the one who always does the right thing.
a) True
b) False
Ans) a) True
7. A hardware-based system that has autonomy, social ability and
reactivity.
a) AI
b) Autonomous Agent
c) Agency
d) Behavior Engineering
Ans) b) Autonomous Agent
9. Which depends on the percepts and actions available to the
agent?
a) Agent
b) Sensor
c) Design Problem
d) None of the mentioned
Ans) c) Design Problem
10. An agent is composed of ________
a) Architecture
b) Agent Function
c) Perception Sequence
d) Architecture and Program
Ans) d) Architecture and Program
Group No: 11
1. The Hill-Climbing technique can get stuck for some reasons. Which
of the following is a reason?
(A). Local maxima
(B). Ridges
(C). Plateaux
(D). All of these
(E). None of these
MCQ Answer: d
3. When will the Hill-Climbing algorithm terminate?
(A). Stopping criterion met
(B). Global Min/Max is achieved
(C). No neighbor has a higher value
(D). All of these
(E). None of these
MCQ Answer: c
6. Which of the following are the main disadvantages of a hill-
climbing search?
(A). Stops at a local optimum and doesn't find the optimal
solution
(B). Stops at a global optimum and doesn't find the optimal
solution
(C). Doesn't find the optimal solution and fails to search for a
solution
(D). Fails to find a solution
(E). None of these
MCQ Answer: a
(d) Fail to find a solution
Answer: Option (a)
Ans : d. Nearest
12. _ or gradient search is a useful variation on simple hill-climbing
which considers all the moves from the current state and selects
the best one as the next one.
Ans. Steepest-ascent hill climbing
Group No: 12
1. What is meant by simulated annealing in artificial intelligence?
a) Returns an optimal solution when there is a proper cooling
schedule
b) Returns an optimal solution when there is no proper cooling
schedule
c) It will not return an optimal solution when there is a proper
cooling schedule
d) None of the mentioned
Answer: Option a)
b) Random mutation and Crossover techniques
c) Random mutation and Individuals among the population
d) Random mutation and Fitness function
Answer: a)
SUBJECT: ARTIFICIAL INTELLIGENCE   CLASS: TYCS   SEM: 5

1. An agent in artificial intelligence is composed of ________
a) Actuator
b) Agent Function
c) Perception Sequence
d) Architecture and Program
Ans: d) Architecture and Program

2. The Task Environment of an agent consists of what all?
a) Sensors, Actuators, Environment, Performance Measure
b) Queues
c) Stack
d) Search
Ans: a) Sensors, Actuators, Environment, Performance Measure

3. Which instruments are used for perceiving and acting upon the
environment?
a) Sensors and Actuators
b) Sensors
c) Perceiver
d) Program
Ans: a) Sensors and Actuators

4. How many types of agents are there in artificial intelligence?
a) 1
b) 2
c) 3
d) 4
Ans: d) 4

5. What is the rule of simple reflex agent?
a) Simple-action rule
b) Condition-action rule
c) Simple & Condition-action rule
d) No rule
Ans: b) Condition-action rule

6. Manhattan Distance can be used to find the value of _________
a) Heuristic function
b) Ridges
c) Stacks
d) Nothing
Ans: a) Heuristic function

13. Hill-Climbing search approach gets stuck for which of the
following reasons?
a) Sensors
b) Actuators
c) Ridges
d) Program
Ans: c) Ridges

24. Linear Regression is a popular statistical model used for
___________.
a) Rewards
b) Classification
c) Clustering
d) Regression
Ans: d) Regression

30. What is the full form of MLE?
a) Maximum Likelihood Estimator
b) Maximum Length Estimator
c) Minimum Lag Estimator
d) Minimum Lookup Estimator
Ans: a) Maximum Likelihood Estimator

36. ______________ determines how much importance is to be given to
the immediate reward and future rewards.
a) Encryption Factor
b) Discount Factor
c) Design Factor
d) Market Factor
Ans: b) Discount Factor

37. ________ learning is a model-free learning method.
a) Distorted
b) Euclidean
c) Imperial
d) Temporal Difference
Ans: d) Temporal Difference

38. ____________ is a type of passive reinforcement learning.
a) Q-Learning
b) Direct Utility Estimation
c) Decentralization
d) GST
Ans: b) Direct Utility Estimation

39. _______ is a type of active reinforcement learning.
a) Direct Utility Estimation
b) Centralization
c) Q-Learning
d) Bluffing
Ans: c) Q-Learning

40. K-Means clustering is an __________ iterative clustering
technique.
a) unsupervised
b) supervised
c) ergonomic
d) encrypted
Ans: a) unsupervised

42. Which search method takes less memory?
a) Depth-First Search
b) Breadth-First search
c) Random search
d) Crazy search
Ans: a) Depth-First Search

43. The node which does not have any child node is called the ______
node.
a) parent
b) cache
c) imperial
d) leaf
Ans: d) leaf

48. _________ maintain internal state to track aspects of the world
that are not evident in the current percept.
a) New agents
b) Large agent
c) Model-based agents
d) Estate agents
Ans: c) Model-based agents

49. In the Turing test, a computer needs to interact with a human
___________ by answering his questions in written format.
a) Server
b) internet
c) Bank
d) interrogator
Ans: d) interrogator

50. In reinforcement learning, if the value of discount factor is 1,
it means that more importance is given to the ________ rewards.
a) future
b) parallel
c) immediate
d) horn
Ans: a) future

66. An ______ reinforcement learning agent changes its policy as it
goes and learns.
a) Active
b) Passive
c) Child
d) Parent
Ans: a) Active

67. Model-based learning is a simple approach to _________.
a) Supervised learning
b) Unsupervised learning
c) Reinforcement learning
d) Camera
Ans: c) Reinforcement learning

68. Iterative Deepening Depth First Search is an _________ search.
a) Server
b) Uninformed
c) Crazy
d) Informed
Ans: b) Uninformed

77. In Reinforcement learning an optimal policy is a policy that
maximizes the expected total __________.
a) variables
b) penalties
c) lag
d) reward
Ans: d) reward

78. In reinforcement learning a _________ agent learns a utility
function on states and uses it to select actions that maximize the
expected outcome utility.
a) dump
b) buffer-based
c) utility-based
d) rogue
Ans: c) utility-based

79. In reinforcement learning a _________ agent learns an
action-utility function.
a) parameter learning
b) Q-learning
c) buffer-based
d) rogue
Ans: b) Q-learning

80. What is the full form of MDP?
a) Markov Decision Process
b) Marking Design Process
c) Module Development Program
d) Most Delightful Pandemic
Ans: a) Markov Decision Process

81. ______________ learning is when we want an agent to learn about
the utilities of various states under a fixed policy.
a) Active Reinforcement
b) Dynamic
c) Unsupervised
d) Passive Reinforcement
Ans: d) Passive Reinforcement

82. Which of the following statements about Naive Bayes is incorrect?
a) Attributes are equally important.
b) Attributes can be nominal or numeric.
c) Attributes are statistically independent of one another given the
class value.
d) Attributes are statistically dependent of one another given the
class value.
Ans: d) Attributes are statistically dependent of one another given
the class value.

90. What is a Decision Tree?
a) Flow-Chart
b) Building structure
c) Flow-Chart & Structure in which internal node represents test on
an attribute, each branch represents outcome of test and each leaf
node represents class label
d) Flow of water
Ans: c) Flow-Chart & Structure in which internal node represents test
on an attribute, each branch represents outcome of test and each leaf
node represents class label

91. ___________ is a powerful, effective and simple ensemble learning
method.
a) Temporal difference
b) A*
c) Parameter dumbing
d) Bagging
Ans: d) Bagging

98. In the task environment for an automated taxi, the actuators can
be __________.
a) Cameras, sonar, speedometer, GPS, odometer
b) Safe, fast, legal, comfortable trip, maximize profits
c) Steering, accelerator, brake, horn
d) Pedestrians, customers
Ans: c) Steering, accelerator, brake, horn

99. In reinforcement learning, if the value of discount factor is 0,
it means that more importance is given to the ________ reward.
a) future
b) immediate
c) parallel
d) horn
Ans: b) immediate

100. Q-Learning technique is an ______________ and uses the greedy
approach to learn the Q-value.
a) Off Policy technique
b) On Policy technique
c) Intermediate policy technique
d) Immediate policy
Ans: a) Off Policy technique