
ARTIFICIAL INTELLIGENCE UNIT-I

QUESTION BANK (SOLVED)


[Theory & MCQ]
A.) Theory
Group No: 1
Q. Explain the four categories of definition of AI.
Ans.
1. Acting humanly: The Turing Test approach
The Turing Test, proposed by Alan Turing (1950), was
designed to provide a satisfactory operational definition
of intelligence. A computer passes the test if a human
interrogator, after posing some written questions,
cannot tell whether the written responses come from a
person or from a computer. The computer would need
to possess the following capabilities:
• natural language processing to enable it to communicate
successfully in English;
• knowledge representation to store what it knows or hears;
• automated reasoning to use the stored information to answer
questions and to draw new conclusions;
• machine learning to adapt to new circumstances and to detect and
extrapolate patterns.
2. Thinking humanly: The cognitive modeling approach:
If we are going to say that a given program thinks like a
human, we must have some way of determining how
humans think. We need to get inside the actual
workings of human minds. There are three ways to do
this: through introspection—trying to catch our own
thoughts as they go by; through psychological
experiments—observing a person in action; and through
brain imaging—observing the brain in action. Once we
have a sufficiently precise theory of the mind, it
becomes possible to express the theory as a computer
program. If the program’s input–output behavior
matches corresponding human behavior, that is
evidence that some of the program’s mechanisms could
also be operating in humans. The interdisciplinary field
of cognitive science brings together
computer models from AI and experimental techniques
from psychology to construct precise and testable
theories of the human mind.
3. Thinking rationally: The “laws of thought” approach:
Thinking rationally means "right thinking": using a pattern for
argument structure that always yields correct conclusions from
correct premises, for example: "Socrates is a man; all men are
mortal; therefore, Socrates is mortal."
These laws of thought were supposed to govern the
operation of the mind; their study initiated the field
called logic.
There are two main obstacles to this approach.
a. It is not easy to take informal knowledge and state it
in the formal terms required by logical notation,
particularly when the knowledge is less than 100%
certain.
b. There is a big difference between solving a problem
“in principle” and solving it in practice.
4. Acting rationally: The rational agent approach:
An agent is just something that acts (agent comes from
the Latin agere, to do). Of course, all computer
programs do something, but computer agents are
expected to do more: operate autonomously, perceive
their environment, persist over a prolonged time period,
adapt to change, and create and pursue goals. A rational
agent is one that acts so as to achieve the best outcome
or, when there is uncertainty, the best expected
outcome. In the “laws of thought” approach to AI, the
emphasis was on correct inferences. Making correct
inferences is sometimes part of being a rational agent,
because one way to act rationally is to reason logically
to the conclusion that a given action will achieve one’s
goals and then to act on that conclusion. On the other
hand, correct inference is not all of rationality; in some
situations, there is no provably correct thing to do, but
something must still be done. There are also ways of
acting rationally that cannot be said to involve
inference. For example, recoiling from a hot stove is a
reflex action that is usually more successful than a
slower action taken after careful deliberation. All the
skills needed for the Turing Test also allow an agent to
act rationally. Knowledge representation and reasoning
enable agents to reach good decisions. The rational-
agent approach has two advantages over the other
approaches. First, it is more general than the “laws of
thought” approach because correct inference is just one
of several possible mechanisms for achieving rationality.
Second, it is more amenable to scientific development
than are approaches based on human behavior or
human thought.
Group No: 2
Q. Explain components of learning agent
ANS:
A learning agent in AI is an agent that can learn from its past
experiences. It starts acting with basic knowledge and then adapts
automatically through learning.
A learning agent has four main conceptual components:
Learning element: It is responsible for making improvements by
learning from the environment.
Critic: The learning element takes feedback from the critic, which
describes how well the agent is doing with respect to a fixed
performance standard.
Performance element: It is responsible for selecting external
actions.
Problem generator: This component is responsible for suggesting
actions that will lead to new and informative experiences.
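The four components above can be sketched as a minimal Python loop. The class, its methods, and the numeric "knowledge" value are illustrative stand-ins, not from the source:

```python
class LearningAgent:
    """Minimal sketch of a learning agent's four conceptual components."""

    def __init__(self, performance_standard):
        self.performance_standard = performance_standard
        self.knowledge = 0  # stands in for whatever the agent has learned

    def performance_element(self, percept):
        # Selects an external action based on current knowledge.
        return percept + self.knowledge

    def critic(self, action):
        # Compares behaviour against the fixed performance standard.
        return self.performance_standard - action

    def learning_element(self, feedback):
        # Improves the agent using the critic's feedback.
        self.knowledge += feedback

    def problem_generator(self):
        # Suggests an exploratory action for new, informative experience.
        return 1


agent = LearningAgent(performance_standard=10)
for _ in range(5):
    percept = agent.problem_generator()
    action = agent.performance_element(percept)
    agent.learning_element(agent.critic(action))
```

After a few iterations the agent's actions match the fixed performance standard, which is all this toy loop is meant to illustrate.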

Group No: 3
Q. For each of the following activities, give PEAS description –
playing soccer, Shopping for AI books on internet, playing a
tennis match

Ans:

1.) PEAS descriptor for Playing a tennis match.

Performance Measure: winning

Environment: playground, racquet, ball, opponent

Actuators: ball, racquet, joint arm

Sensors: ball locator, camera, racquet sensor, opponent locator

2.) PEAS descriptor for Playing soccer.

Performance Measure: scoring goals, defending, speed

Environment: playground, teammates, opponents, ball

Actuators: body, dribbling, tackling, passing ball, shooting

Sensors: camera, ball sensor, location sensor, other players locator

3.) PEAS descriptor for shopping for AI books on internet.

Performance Measure: price, quality, authors, book review

Environment: web, vendors, shippers

Actuators: fill in form, follow URL, display to user

Sensors: HTML pages

Group No: 4
Q. Explain the 5 components required to define a problem
with an example.
Ans:

A problem-solving agent works by formally defining a problem and
searching for its solution. Problem solving is a part of artificial
intelligence that uses techniques such as trees and heuristic
algorithms to solve a problem in a finite number of steps.

The following five components define a problem:

1) Initial state: The state the agent starts in; it is the starting
point of the search towards the specified goal.
2) Actions: The set of actions available to the agent in a given
state.
3) Transition model: A description of what each action does, i.e. the
state that results from performing an action in a given state.
4) Goal test: A test that determines whether a given state is a goal
state; when the goal is reached, the search stops.
5) Path cost: A function that assigns a numeric cost to each path;
the agent prefers the solution with the lowest path cost.

Example :

8-Puzzle

States: locations of the eight tiles and the blank

1) Initial state: any state

2) Actions: move the blank {Left, Right, Up, Down}

3) Transition model: given a state and an action, return the
resulting state.
4) Goal test: check whether the goal configuration is reached.
5) Path cost: number of steps to reach the goal.
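The five components above can be sketched in Python for the 8-puzzle. The function names and the tuple encoding (row-major, 0 for the blank) are illustrative assumptions:

```python
# Sketch of the five problem components for the 8-puzzle
# (0 marks the blank; states are row-major 9-tuples).

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)


def actions(state):
    """Possible moves of the blank in the 3x3 grid."""
    row, col = divmod(state.index(0), 3)
    moves = []
    if row > 0: moves.append('Up')
    if row < 2: moves.append('Down')
    if col > 0: moves.append('Left')
    if col < 2: moves.append('Right')
    return moves


def transition(state, action):
    """Transition model: return the state resulting from a move."""
    i = state.index(0)
    j = i + {'Up': -3, 'Down': 3, 'Left': -1, 'Right': 1}[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]          # swap blank with the moved tile
    return tuple(s)


def goal_test(state):
    return state == GOAL


def path_cost(path):
    """Each move costs 1, so path cost = number of steps."""
    return len(path)


start = (1, 0, 2, 3, 4, 5, 6, 7, 8)  # one move away from the goal
```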

Group No: 5

Q. Explain the following search strategies with examples


Ans:
1. BFS

BFS (Breadth-First Search) is an algorithm for traversing or
searching graph and tree data structures. The algorithm efficiently
visits and marks all the key nodes in a graph in an accurate
breadthwise fashion.

The algorithm selects a single node (the initial or source point) in
a graph and then visits all the nodes adjacent to the selected node.
Once the algorithm visits and marks the starting node, it moves
towards the nearest unvisited nodes and analyses them. These
iterations continue until all the nodes of the graph have been
successfully visited and marked.

If the entire graph is traversed, the time complexity of BFS is
O(V + E), where V is the number of vertices and E the number of
edges; for a tree this reduces to O(V).
The space complexity of BFS is O(w), where w is the maximum width
of the tree.

Example of BFS

In the following example of BFS, we use a graph having seven
vertices.

Step 1)

You have a graph of seven vertices numbered from 0 to 6.

Step 2)

0 (zero) has been marked as the root node.

Step 3)

0 is visited, marked, and inserted into the queue data structure.

Step 4)

The remaining adjacent, unvisited nodes of 0 are visited, marked, and
inserted into the queue.

Step 5)

The traversing iterations are repeated until all nodes are visited.
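The steps above can be sketched with a short Python BFS using a FIFO queue. The adjacency list is illustrative, since the original figure is not reproduced here:

```python
from collections import deque


def bfs(graph, start):
    """Breadth-first traversal: visit nodes level by level via a FIFO queue."""
    visited = [start]            # nodes marked in the order discovered
    queue = deque([start])
    while queue:
        node = queue.popleft()   # take the oldest frontier node
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.append(neighbour)   # mark on discovery
                queue.append(neighbour)
    return visited


# A seven-node graph (0-6); the exact edges are illustrative.
graph = {0: [1, 2], 1: [0, 3, 4], 2: [0, 5],
         3: [1], 4: [1, 6], 5: [2], 6: [4]}
```

Calling `bfs(graph, 0)` visits node 0, then its neighbours 1 and 2, then their unvisited neighbours, level by level.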

2. DFS

DFS (Depth-First Search) is an algorithm for searching or traversing
graphs or trees in a depthward direction. The execution of the
algorithm begins at the root node and explores each branch as far as
possible before backtracking. It uses a stack data structure to
remember where to resume the search whenever a dead end appears in
any iteration.

With an adjacency-list representation, the time complexity of DFS is
O(V + E); with an adjacency matrix it is O(V * V) = O(V^2).

The space complexity of DFS is O(h), where h is the maximum height
of the tree.

Example of DFS

In the following example of DFS, we have used an undirected graph


having 5 vertices.

Step 1)

We have started from vertex 0. The algorithm begins by putting it in


the visited list and simultaneously putting all its adjacent vertices in
the data structure called stack.

Step 2)

Next, visit the element at the top of the stack and go to its
adjacent nodes. Since 0 has already been visited, we visit vertex 2.

Step 3)

Vertex 2 has an unvisited adjacent vertex, 4. Therefore, we add it to
the stack and visit it.

Step 4)

Finally, we visit the last vertex, 3; it doesn't have any unvisited
adjoining nodes. We have completed the traversal of the graph using
the DFS algorithm.
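A minimal Python sketch of DFS with an explicit stack, on an illustrative five-vertex undirected graph (the original figure is not reproduced here):

```python
def dfs(graph, start):
    """Depth-first traversal using an explicit LIFO stack."""
    visited = []
    stack = [start]
    while stack:
        node = stack.pop()            # take the most recently pushed vertex
        if node not in visited:
            visited.append(node)
            # Push unvisited neighbours; the last one pushed is explored first.
            for neighbour in graph[node]:
                if neighbour not in visited:
                    stack.append(neighbour)
    return visited


# A five-vertex undirected graph; the edges are illustrative.
graph = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 4], 3: [0], 4: [2]}
```

From vertex 0 the traversal dives down one branch before backtracking, unlike BFS's level-by-level order.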

Group No: 6
Q. Explain Uniform-cost Search Algorithm.
Ans:

Uniform-cost search is a searching algorithm used for traversing a
weighted tree or graph. This algorithm comes into play when a
different cost is available for each edge. The primary goal of
uniform-cost search is to find a path to the goal node which has the
lowest cumulative cost. Uniform-cost search expands nodes according
to their path costs from the root node. It can be used to solve any
graph/tree where the optimal cost is in demand. A uniform-cost search
algorithm is implemented using a priority queue, which gives maximum
priority to the lowest cumulative cost. Uniform-cost search is
equivalent to the BFS algorithm if the path cost of all edges is the
same.

Advantages:
Uniform cost search is optimal because at every state the path with
the least cost is chosen.

Disadvantages:
It does not care about the number of steps involved in searching and
is only concerned about path cost, due to which this algorithm may
get stuck in an infinite loop (e.g. when zero-cost edges exist).

Completeness:
Uniform-cost search is complete, such as if there is a solution, UCS
will find it.

Time Complexity:
Let C* be the cost of the optimal solution, and ε the smallest step
cost towards the goal node. Then the number of steps is C*/ε + 1 (we
take +1 because we start from state 0 and go up to C*/ε).
Hence, the worst-case time complexity of uniform-cost search is
O(b^(1 + [C*/ε])).

Space Complexity:
The same logic applies for space, so the worst-case space complexity
of uniform-cost search is O(b^(1 + [C*/ε])).

Optimal:
Uniform-cost search is always optimal as it only selects a path with
the lowest path cost.
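A minimal sketch of uniform-cost search with a priority queue ordered by cumulative path cost g(n) (the graph and edge costs below are illustrative):

```python
import heapq


def uniform_cost_search(graph, start, goal):
    """Always expand the frontier node with the lowest cumulative cost."""
    frontier = [(0, start, [start])]       # (path cost g, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path              # first goal pop is optimal
        if node in explored:
            continue
        explored.add(node)
        for neighbour, step_cost in graph[node]:
            if neighbour not in explored:
                heapq.heappush(
                    frontier,
                    (cost + step_cost, neighbour, path + [neighbour]))
    return None


# Weighted graph; the edge costs are illustrative.
graph = {
    'S': [('A', 1), ('B', 4)],
    'A': [('B', 2), ('G', 6)],
    'B': [('G', 1)],
    'G': [],
}
```

Here the cheapest route S -> A -> B -> G (cost 4) beats the direct-looking S -> B -> G (cost 5); with all edge costs equal, the expansion order matches BFS.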

Group No: 7
Q. Explain the following search strategies with an example
Ans:
1. Bidirectional search
A) It runs two simultaneous searches:
a forward search from the source node toward the goal node, and
a backward search from the goal node toward the source node.
The search terminates when the two search frontiers intersect.
Bidirectional search is complete when both searches use BFS, and
incomplete with DFS.
The time complexity of bidirectional search is O(b^(d/2)), since
each search need only proceed to half the solution depth.

The space complexity of bidirectional search is also O(b^(d/2)).

2. Greedy BFS

The best-first search algorithm is often referred to as a greedy
algorithm.
• This is because it quickly attacks the most desirable path as
soon as its heuristic weight becomes the most desirable.
• It tries to expand the node that is closest to the goal.
• It evaluates nodes by using just the heuristic function,
i.e. f(n) = h(n).
Time Complexity: The worst-case time complexity of greedy best-first
search is O(b^m).
Space Complexity: The worst-case space complexity of greedy
best-first search is O(b^m).
Here m is the maximum depth of the search space.
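A minimal sketch of greedy best-first search, expanding purely by f(n) = h(n) and ignoring path cost (the graph and heuristic values are illustrative):

```python
import heapq


def greedy_best_first(graph, h, start, goal):
    """Always expand the frontier node with the smallest heuristic h(n)."""
    frontier = [(h[start], start, [start])]   # ordered by h only
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in graph[node]:
            if neighbour not in visited:
                heapq.heappush(
                    frontier, (h[neighbour], neighbour, path + [neighbour]))
    return None


# Graph and heuristic values are illustrative.
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
h = {'S': 5, 'A': 2, 'B': 4, 'G': 0}
```

Because A has the smaller h-value, the search heads straight through A, never weighing actual edge costs.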

Group No:8

1. What is an admissible heuristic?

Ans. Admissible Heuristics
An admissible heuristic never overestimates the cost of reaching the
goal.
Using an admissible heuristic (e.g. in A* search) will always result
in an optimal solution.

2. What is a non-admissible heuristic?

Ans. A non-admissible heuristic may overestimate the cost of
reaching the goal. It may or may not result in an optimal solution.
However, the advantage is that sometimes a non-admissible heuristic
expands far fewer nodes. Thus, the total cost (= search cost + path
cost) may actually be lower than that of an optimal solution found
using an admissible heuristic.

3. Admissible heuristics for the 8-puzzle using Euclidean distances.

Ans.

h2: Sum of the Euclidean distances of the tiles from their goal
positions.

In the given figure, all the tiles are out of position, hence for
this state,
h2 = sqrt(5) + 1 + sqrt(2) + sqrt(2) + 2 + sqrt(5) + sqrt(5) + 2 =
14.53.

h2 is an admissible heuristic: in every move, one tile can only move
closer to its goal by one step, and the Euclidean distance is never
greater than the number of steps required to move a tile to its goal
position.

4. Admissible heuristics for the 8-puzzle using Manhattan distances.

Ans.

h3: Sum of the Manhattan distances of the tiles from their goal
positions.

In the given figure, all the tiles are out of position, hence for
this state,
h3 = 3 + 1 + 2 + 2 + 2 + 3 + 3 + 2 = 18.

h3 is an admissible heuristic: in every move, one tile can only move
closer to its goal by one step.
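As a sketch, h3 can be computed for a 3x3 puzzle as follows. The row-major tuple encoding with 0 for the blank is an assumption; the figure's state is not reproduced here:

```python
def manhattan_h(state, goal):
    """Sum of Manhattan distances of the tiles (blank excluded) from
    their goal positions; admissible because each move shifts exactly
    one tile by one step."""
    total = 0
    for tile in state:
        if tile == 0:                 # the blank does not count
            continue
        r1, c1 = divmod(state.index(tile), 3)   # current position
        r2, c2 = divmod(goal.index(tile), 3)    # goal position
        total += abs(r1 - r2) + abs(c1 - c2)
    return total


GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)
```

For the goal state itself h3 = 0, and swapping one tile with the adjacent blank gives h3 = 1.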

5. What is a heuristic function?

Ans.
A heuristic function estimates how close a state is to the goal.

• It is represented by h(n)
– Euclidean distance
– Manhattan distance, etc.

6. What is heuristic search?

Ans.
• An informed search algorithm uses the idea of a heuristic, so it
is also called heuristic search.
• A key component is a heuristic function h(n).
• The heuristic function h(n) estimates how close a state is to a
goal.

7. Explain the admissibility of h(n)

Ans.
• Admissibility of the heuristic function is given as:

h(n) <= h*(n)

• Here h(n) is the heuristic (estimated) cost, and

• h*(n) is the true cost of reaching the goal from n.

• Hence the heuristic cost should be less than or equal to the true
cost; an admissible heuristic never overestimates.

Group No:9

Q. Explain A* search algorithm with example.

Ans:
A* search is the most commonly known form of best-first search. It
uses the heuristic function h(n) and the cost to reach node n from
the start state, g(n). It combines features of UCS and greedy
best-first search, by which it solves the problem efficiently. The
A* search algorithm finds the shortest path through the search space
using the heuristic function. This search algorithm expands a
smaller search tree and provides an optimal result faster. The A*
algorithm is similar to UCS except that it uses g(n) + h(n) instead
of g(n).

In the A* search algorithm, we use the search heuristic as well as
the cost to reach the node. Hence we can combine both costs as
f(n) = g(n) + h(n); this sum is called the fitness number.

The time complexity of A* depends on the heuristic.

The space complexity of A* is roughly the same as that of all other
graph search algorithms, as it keeps all generated nodes in memory.

Algorithm of A* search:

Step1: Place the starting node in the OPEN list.

Step 2: Check if the OPEN list is empty or not, if the list is empty then
return failure and stops.

Step 3: Select the node from the OPEN list which has the smallest
value of evaluation function (g+h), if node n is goal node then return
success and stop, otherwise

Step 4: Expand node n and generate all of its successors, and put n
into the closed list. For each successor n', check whether n' is already
in the OPEN or CLOSED list, if not then compute evaluation function
for n' and place into Open list.

Step 5: Else, if node n' is already in OPEN or CLOSED, attach it to
the back pointer which reflects the lowest g(n') value.

Step 6: Return to Step 2.

Advantages:

o The A* search algorithm performs better than many other search
algorithms.
o The A* search algorithm is optimal and complete (given an
admissible heuristic).

o This algorithm can solve very complex problems.

Disadvantages:

o It does not always produce the shortest path, as it is mostly
based on heuristics and approximation.
o A* search algorithm has some complexity issues.
o The main drawback of A* is memory requirement as it keeps all
generated nodes in the memory, so it is not practical for various
large-scale problems.

Example:

In this example, we will traverse the given graph using the A*


algorithm. The heuristic value of all states is given in the below table
so we will calculate the f(n) of each state using the formula f(n)= g(n)
+ h(n), where g(n) is the cost to reach any node from start state.
Here we will use OPEN and CLOSED list.

Solution:

Initialization: {(S, 5)}

Iteration1: {(S--> A, 4), (S-->G, 10)}

Iteration2: {(S--> A-->C, 4), (S--> A-->B, 7), (S-->G, 10)}

Iteration3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7),
(S-->G, 10)}

Iteration 4 will give the final result: S--->A--->C--->G provides
the optimal path with cost 6.
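The worked example can be reproduced with a short A* sketch. The edge costs and heuristic values below are chosen to match the f(n) values in the iterations; the node names follow the example:

```python
import heapq


def a_star(graph, h, start, goal):
    """A* search: expand the node with the smallest f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for neighbour, step_cost in graph[node]:
            new_g = g + step_cost
            if new_g < best_g.get(neighbour, float('inf')):
                best_g[neighbour] = new_g    # found a cheaper route
                heapq.heappush(
                    frontier,
                    (new_g + h[neighbour], new_g, neighbour,
                     path + [neighbour]))
    return None


# Edge costs and heuristics mirroring the worked example's f-values.
graph = {
    'S': [('A', 1), ('G', 10)],
    'A': [('B', 2), ('C', 1)],
    'B': [],
    'C': [('D', 3), ('G', 4)],
    'D': [],
    'G': [],
}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
```

With these values, f(S-->A) = 1 + 3 = 4, f(S-->A-->C) = 2 + 2 = 4, and the search returns the optimal path S-->A-->C-->G with cost 6, as in the iterations above.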

Group No: 10

Q. Explain the following terms: agent, agent function, agent


program, rationality, autonomy, architecture of an agent,
performance measure.
Ans:

Agent:

In artificial intelligence, an intelligent agent (IA) is anything which
perceives its environment, takes actions autonomously in order to
achieve goals, and may improve its performance with learning or
may use knowledge.
Agent function:
The agent function maps any given percept sequence to an action.
Agent program:
An intelligent agent is a program that can make decisions or perform
a service based on its environment, user input and experiences.
These programs can be used to autonomously gather information on
a regular, programmed schedule or when prompted by the user in
real time.
Rationality:
Rationality is nothing but status of being reasonable, sensible, and
having good sense of judgment. Rationality is concerned with
expected actions and results depending upon what the agent has
perceived. Performing actions with the aim of obtaining useful
information is an important part of rationality.
Autonomy:
Autonomous systems are systems that are able to accomplish a task,
achieve a goal, or interact with their surroundings with minimal to
no human involvement.
Architecture of an agent:
The architecture is the machinery that the agent executes on. It is
a device with sensors and actuators, for example: a robotic car, a
camera, a PC. The agent program is an implementation of the agent
function.
Performance measure:
Performance measure of an agent: it is the criterion which
determines how successful an agent is. Behavior of an agent: it is
the action that the agent performs after any given sequence of
percepts. Percept: it is the agent's perceptual input at a given
instance.

Group No: 11

Q. Explain Steepest Ascent and Simulated Annealing.


Ans:
Steepest-Ascent Hill Climbing
Ans) The steepest-ascent algorithm is a variation of the simple
hill-climbing algorithm. It examines all the neighbouring nodes of
the current state and selects the neighbour node which is closest to
the goal state as the next node. This algorithm consumes more time
as it searches through multiple neighbours.

Simulated Annealing
Ans) It is a method for solving unconstrained and bound-constrained
optimization problems. The method models the physical process of
heating a material and then slowly lowering the temperature to
decrease defects, thus minimizing the system energy. Simulated
annealing is a technique used to find the best solution, either a
global minimum or maximum, without having to check every single
possible solution that exists.

Group No: 12

Q. Describe the following terms in Hill Climbing.


Ans:
a) Local Maxima: It is a state which is better than its neighboring
states: the value of the objective function here is higher than at
its neighbors, though not necessarily the highest overall.
b) Plateau: A plateau is a flat area of the search space in which
all the neighbor states of the current state have the same value;
because of this, the algorithm does not find any best direction to
move. A hill-climbing search might get lost in the plateau area.
c) Ridge: A ridge is a special form of local maximum. It is an area
which is higher than its surrounding areas, but itself has a slope,
and cannot be reached in a single move.
d) Global Maxima: It is the best possible state in the state-space
diagram. This is because at this state the objective function has
its highest value.

B.) MCQ

Group No: 1
1. Which of the following is not an AI?
A. intelligent optical character recognition
B. Market automation
C. Facial detection
D. natural language processing
Ans: B. Market Automation

2. Artificial Intelligence is about_____.
A. Playing a game on Computer
B. Making a machine Intelligent
C. Programming on Machine with your Own Intelligence
D. Putting your intelligence in Machine
Ans: B. Making a machine intelligent.

3. Machines with _____ Artificial Intelligence are made to respond
to specific situations, but cannot think for themselves.
A. Impotent
B. Weak
C. Smart
D. Strong
Ans: B. Weak

4. Which of the following is not the type of AI?


A. Reactive machines
B. Unlimited memory
C. Theory of mind
D. Self-awareness
Ans: B. Unlimited memory

5. Strong Artificial Intelligence (AI) is


A. The embodiment of human intellectual capabilities within a
computer

B. A set of computer programs that produce output that would
be considered to reflect intelligence if it were generated by
humans
C. The study of mental faculties through the use of mental
models implemented on a computer
D. All of the mentioned

Ans: A. The embodiment of human intellectual capabilities


within a computer.

6. Artificial Intelligence can be divided in various types, there are


mainly two types of main categorization which are based on _______
and based on _______ of AI.
A. Capabilities, Functionalities
B. Rationality, Model
C. Theorem, Approach
D. None of the Above.
Ans. A. Capabilities, Functionalities

7.The application/applications of Artificial Intelligence is/are

A. Expert Systems
B. Gaming
C. Vision Systems
D. All of the above

Ans- D. All of the above

8.A technique that was developed to determine whether a machine


could or could not demonstrate the artificial intelligence known as
the___

A. Boolean Algebra
B. Turing Test
C. Logarithm
D. Algorithm

Ans- B. Turing Test

9.The first AI programming language was called:

A.BASICS
B.FORTRAN
C.IPL
D.LISP

Ans -D. LISP


10. What is the term used for describing the judgemental or common
sense part of the problem solving ?

A.Critical
B.Heuristic
C.Value based
D.Analytical
Ans - B. Heuristic

Group No: 2
1. An agent perceives its environment through ___
A. Actuator
B. Sensor
C. Performance
D. Software
Ans: B Sensor

2. An ___ acts on environment through actuators.


A. Sensor
B. Software
C. Agent
D. Performance
Ans: C Agent

3. What is rational at any given time depends on?


A.The performance measure that defines the criterion of
success

B. The agent’s prior knowledge of the environment
C. The actions that the agent can perform
D. All of the mentioned
Ans: D All of the mentioned

4. What could possibly be the environment of a Satellite Image


Analysis System?
A. Computers in space and earth
B. Image categorization techniques
C. Statistical data on image pixel intensity value and histograms
D. All of the mentioned
Ans: D All of the mentioned

5. Problem Generator is present in which of the following agent?


A. Learning agent
B. Observing agent
C. Reflex agent
D. None of the above
Ans: A Learning agent

6. Which one of the following is used to improve the agents


performance?
A.Perceiving
B. Learning

C. Observing
D. None of the mentioned
Ans: B Learning

7. What is the action of task environment in artificial intelligence?


A. Problem
B. Solution
C. Agent
D. Observation
Ans: A Problem

8. Which environment is called as semi dynamic?


A Environment does not change with the passage of time
B Agent performance changes
C Environment will be changed
D Environment does not change with the passage of time, but
Agent performance changes
Ans: D Environment does not change with the passage of time,
but Agent performance changes

9. Which of the following is an ‘agent’ in AI?


A. Perceives its environment by sensors and acting upon that
environment by actuators
B. Takes input from the surroundings and gets the help of its
intelligence and performs the desired operations

C. An embedded program controlling line following robot
D. All of these
Ans: D All of these

Group No: 3
1. If a robot is able to change its own trajectory as per the external
conditions, then the robot is considered as the__

a. Mobile
b. Non-Servo
c. Open Loop
d. Intelligent

ANS: d. Intelligent

2. Which is used to select the particular environment to run the
agent?

a. Environment creator
b. Environment Generator
c. Both a & b
d. None of the mentioned

ANS: Environment Generator

3. The application/applications of Artificial Intelligence is/are

a. Expert Systems
b. Gaming
c. Vision Systems
d. All of the above

Answer: d. All of the above

4. What is the expansion if PEAS in task environment?


a) Peer, Environment, Actuators, Sense
b) Perceiving, Environment, Actuators, Sensors
c) Performance, Environment, Actuators, Sensors
d) None of the mentioned

Answer : c) Performance, Environment, Actuators, Sensors

5. What is the action of task environment in artificial
intelligence?

a) Problem
b) Solution
c) Agent
d) Observation

Answer: a) Problem

6. Which is used to provide the feedback to the learning element?
a) Critic
b) Actuators
c) Sensor
d) None of the mentioned
Answer: a) Critic

7. Chess is an example of which properties?

A. Discrete
B. Continuous
C. Episodic
D. Non-deterministic
Ans: A. Discrete

Group No: 4
1. Which of the following is not a functionality of problem solving
agent?
A. Goal Formulation
B. Problem Formulation
C. Performance
D. Search
Ans: Performance

2. Which is the most important step of problem-solving which


decides what actions should be taken to achieve the formulated
goal?
A. Execution
B. Goal Formulation
C. Solution
D. Problem Formulation
Ans: Problem Formulation

3. Which of the following is not a functionality of problem solving


agent?
A. Goal Formulation
B. Problem Formulation

37
C. Performance
D. Search
Ans: Performance

4. Which is the most important step of problem-solving which


decides what actions should be taken to achieve the formulated
goal?
A. Execution
B. Goal Formulation
C. Solution
D. Problem Formulation
Ans: Problem Formulation

5. What is the main task of a problem-solving agent?


A. Solve the given problem and reach to goal
B. To find out which sequence of action will get it to the goal
state
C. All of the mentioned
D. None of the mentioned
Ans: All of the mentioned

6.A problem solving approach works well for ______


A. 8-Puzzle problem
B. 8-queen problem
C. Finding a optimal path from a given source to a destination
D. Mars Hover (Robot Navigation)
Ans: Mars Hover (Robot Navigation)

7. Which of the following is not a functionality of problem solving


agent?

38
A. Goal Formulation
B. Problem Formulation
C. Performance
D. Search
Ans: Performance

8. Which is the most important step of problem-solving which


decides what actions should be taken to achieve the formulated
goal?
A. Execution
B. Goal Formulation
C. Solution
D. Problem Formulation
Ans: Problem Formulation

9. What is the main task of a problem-solving agent?


A. Solve the given problem and reach to goal
B. To find out which sequence of action will get it to the goal
state
C. All of the mentioned
D. None of the mentioned
Ans: All of the mentioned

10.A problem solving approach works well for ______


A. 8-Puzzle problem
B. 8-queen problem
C. Finding a optimal path from a given source to a destination
D. Mars Hover (Robot Navigation)
Ans: Mars Hover (Robot Navigation)

39
11. The process of removing detail from a given state representation
is called __
A. Extraction
B. Abstraction
C. Information Retrieval
D. Mining of data
Ans: Abstraction

12. What is the major component/components for measuring the performance of problem solving?
A. Completeness
B. Optimality
C. Time and Space complexity
D. All of the mentioned
Ans: All of the mentioned







19. Which of the following is a touring problem in which each city must be visited exactly once, and the aim is to find the shortest tour among all tours?
A. Searching the shortest path between a source and a destination
B. Depth-first search traversal on a given map represented as a graph
C. Map coloring problem
D. Travelling Salesman problem
Ans: Travelling Salesman problem

20. What kind of agent is a Web Crawler?


A. Model-based agent
B. Problem-solving agent
C. Simple reflex agent
D. Intelligent goal-based agent
Ans: Intelligent goal-based agent

21. A search algorithm takes ___ as an input and returns ____ as an output.
A. Input, output
B. Problem, solution
C. Solution, problem
D. Parameters, sequence of actions
Ans: Problem, solution

22. Searching using a query on the Internet is the use of ___ type of agent.
A. Offline agent
B. Online agent
C. Goal Based agent
D. Both A and B
Ans: Goal Based agent
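The questions above revolve around how a problem-solving agent formulates a problem before searching. A minimal sketch of the standard components (initial state, actions, transition model, goal test, and step cost); the class layout and the small Romania-style map are illustrative, not taken from the question bank:

```python
# Minimal sketch of a search problem; all names here are illustrative.
class Problem:
    def __init__(self, initial, goal, graph):
        self.initial = initial      # initial state
        self.goal = goal            # used by the goal test
        self.graph = graph          # adjacency dict: state -> {state: step cost}

    def actions(self, state):
        """Actions available in a state: here, 'move to a neighbor'."""
        return list(self.graph[state])

    def result(self, state, action):
        """Transition model: the state reached by applying an action."""
        return action

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action):
        """Path cost is the sum of these step costs along the way."""
        return self.graph[state][action]

romania = {"Arad": {"Sibiu": 140, "Timisoara": 118},
           "Sibiu": {"Arad": 140, "Fagaras": 99},
           "Timisoara": {"Arad": 118},
           "Fagaras": {"Sibiu": 99, "Bucharest": 211},
           "Bucharest": {"Fagaras": 211}}
p = Problem("Arad", "Bucharest", romania)
print(p.actions("Arad"))         # ['Sibiu', 'Timisoara']
print(p.goal_test("Bucharest"))  # True
```

A search algorithm then takes such a problem as input and returns a solution (a sequence of actions) as output, which is exactly what question 21 states.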

Group No: 5
1. Which search is implemented with an empty first-in-first-out
queue?

a) Depth-first search
b) Breadth-first search
c) Bidirectional search
d) None of the mentioned

Ans. b) Breadth-first search

2. When is breadth-first search optimal?
a) When there are fewer nodes
b) When all step costs are equal
c) When all step costs are unequal
d) None of the mentioned
Ans. b) When all step costs are equal

3. What is the space complexity of Depth-first search?
a) O(b)
b) O(bl)
c) O(m)
d) O(bm)
Ans. d) O(bm)

4. Which algorithm is used to solve any kind of problem?

a) Breadth-first algorithm
b) Tree algorithm
c) Bidirectional search algorithm
d) None of the mentioned

Ans. b) Tree algorithm

5. Which search algorithm imposes a fixed depth limit on nodes?

a) Depth-limited search
b) Depth-first search
c) Iterative deepening search
d) Bidirectional search

Ans. a) Depth-limited search

7. Which data structure is conveniently used to implement BFS?
a) Stacks
b) Queues
c) Priority queues
d) None of the mentioned
Ans. b) Queues

8. Which data structure is conveniently used to implement DFS?
a) Stacks
b) Queues
c) Priority queues
d) None of the mentioned
Ans. a) Stacks

9. The time and space complexity of BFS is
a. O(b^(d+1)) and O(b^(d+1))
b. O(b^2) and O(d^2)
c. O(d^2) and O(b^2)
d. O(d^2) and O(d^2)
Ans. a. O(b^(d+1)) and O(b^(d+1))

10. Which search implements stack operation for searching the states?
a) Depth-limited search
b) Depth-first search
c) Breadth-first search
d) None of the mentioned
Ans. b) Depth-first search
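The questions above turn on a single implementation detail: BFS keeps its frontier in a FIFO queue, while DFS keeps it in a LIFO stack. A minimal sketch (the toy graph and node names are invented for illustration):

```python
from collections import deque

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}  # toy graph

def bfs_order(start):
    """Breadth-first: the frontier is a FIFO queue (popleft)."""
    frontier, visited, order = deque([start]), {start}, []
    while frontier:
        node = frontier.popleft()
        order.append(node)
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(nbr)
    return order

def dfs_order(start):
    """Depth-first: the frontier is a LIFO stack (pop from the end)."""
    frontier, visited, order = [start], set(), []
    while frontier:
        node = frontier.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        frontier.extend(reversed(graph[node]))
    return order

print(bfs_order("A"))  # ['A', 'B', 'C', 'D']
print(dfs_order("A"))  # ['A', 'B', 'D', 'C']
```

Swapping the queue for a stack is the only difference between the two traversals, which is why BFS visits level by level while DFS dives down one branch first.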

Group No: 6

1. What is the general term of Blind searching?
a) Informed Search
b) Uninformed Search
c) Informed & Unformed Search
d) Heuristic Search

Answer: b

2. Strategies that know whether one non-goal state is “more


promising” than another are called _____
a) Informed & Unformed Search
b) Unformed Search
c) Heuristic & Unformed Search
d) Informed & Heuristic Search

Answer: d

3. Which of the following is/are Uninformed Search


technique/techniques?
a) Breadth First Search (BFS)
b) Depth First Search (DFS)
c) Bidirectional Search
d) All of the mentioned

Answer: d

4. Which of the following points are valid with respect to conditional probability?
a) Conditional Probability gives 100% accurate results.
b) Conditional Probability can be applied to a single event.
c) Conditional Probability has no effect or relevance on independent events.
d) None of the above

Answer: c

5. Among which of the following mentioned statements will conditional probability be applied?
i. The number occurred on rolling a die one time.
ii. What will the temperature be tomorrow?
iii. What card will we get on picking a card from a fair deck of 52 cards?
iv. What output will we get on tossing a coin once?
Options:

a) Only iv.
b) All i., ii., iii. and iv.
c) ii. and iv.
d) Only ii.

Answer: d

6. On which of the mentioned points is Conditional Probability reasonable to apply?
a) Dependent Events
b) Independent Events
c) Neither a. nor b.
d) Both a. and b.

Answer: a

7. The results that we get after we apply conditional probability to a


problem are,
a) 100% accurate
b) Estimated values
c) Wrong values
d) None of the above

Answer: b

8. State whether the following condition is true or false?
"The independent events are affected by the happening of some
other events which may occur simultaneously or have occurred
before it."
a) True
b) False

Answer: b

Group No: 7
1. In Bidirectional Search, forward search runs from the _____ node toward the _____ node.
a. Source, Goal (Answer)
b. Goal, Source
c. Current, source
d. Source, current

2. In Bidirectional Search, backward search runs from the _____ node toward the _____ node.
a. Source, Goal
b. Goal, Source (Answer)
c. Current, source
d. Source, current

3. Bidirectional search is complete in _______?


a. BFS (Answer)
b. DFS
c. DLS
d. IDDFS

4. Bidirectional search comes under which search algorithm?
a. Uninformed (Answer)
b. Informed
c. Heuristic
d. Greedy

5. Best First search comes under which search algorithm?
a. Uninformed
b. Informed (Answer)
c. Blind
d. Greedy

6. Greedy BFS algorithm is implemented using ____?
a. Queue
b. Stack
c. Priority Queue (Answer)
d. Array
7. Best First Search is a combination of ____ & _____?
a. BFS & DFS (Answer)
b. DLS & IDDFS
c. A* & Hill Climb
d. BFS & A*

8. Greedy BFS tries to expand the node that is ____ to the goal.
a. Closest (Answer)
b. Farthest
c. Similar
d. Different

9. Greedy best first search algorithm is _____.


a. Not Optimal (Answer)
b. Optimal
c. Bit optimal
d. None of the above

10. What is the other name of the greedy best first search?
a) Blind Search
b) Pure Heuristic Search (Answer)
c) Combinatorial Search
d) Divide and Conquer Search
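As the questions above note, greedy best-first search expands the node that appears closest to the goal, which is why a priority queue ordered by h(n) implements it naturally. A sketch (the graph and the heuristic values are invented for illustration):

```python
import heapq

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h = {"S": 5, "A": 1, "B": 3, "G": 0}   # made-up heuristic estimates to goal G

def greedy_best_first(start, goal):
    """Expand the node with the smallest h(n); the evaluation is f(n) = h(n)."""
    frontier = [(h[start], start, [start])]   # priority queue keyed on h
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in graph[node]:
            heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None

print(greedy_best_first("S", "G"))  # ['S', 'A', 'G']
```

Because it trusts h(n) alone and ignores the cost already paid, the result is fast but not guaranteed optimal, matching question 9 above.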

Group No:8
1.A heuristic is a way of trying _____

(a)To discover something or an idea embedded in a


program
(b)To search and measure how far a node in a search
tree seems to be from a goal
(c)To compare two nodes in a search tree to see if one is
better than another
(d)All of the mentioned
Ans:
D

2. The search strategy that uses problem-specific knowledge is known as _____.

(a) Informed Search
(b) Best First Search
(c) Heuristic Search
(d) All of the mentioned
Ans: D

3. Heuristic function h(n) is ____

(a) Lowest path cost
(b) Cheapest path from root to goal node
(c) Estimated cost of cheapest path from root to goal node
(d) Average path cost
Ans: C

4.What is the evaluation function in greedy approach?

(a)Heuristic function
(b)Path cost from start node to current node
(c)Path cost from start node to current node + Heuristic
cost
(d)Average of Path cost from start node to current node
and Heuristic cost
Ans:
A

5. What is the evaluation function in A* approach?

(a) Heuristic function
(b) Path cost from start node to current node
(c) Path cost from start node to current node + Heuristic cost
(d) Average of Path cost from start node to current node and Heuristic cost
Ans: C

6.Constraint satisfaction problems on finite domains are
typically solved using a form of _____

(a)Search Algorithms
(b)Heuristic Search Algorithms
(c)Greedy Search Algorithms
(d)All of the mentioned
Ans:
D

7.Which of the mentioned properties of heuristic search


differentiates it from other searches?

(a)It provides solution in a reasonable time frame


(b)It provides the reasonably accurate direction to a goal
(c)It considers both actual costs that it took to reach the
current state and approximate cost it would take to
reach the goal from the current state
(d)All of the above
Ans:
D

8. Informed search algorithm uses the idea of ___, so it is also called Heuristic search.
A. Heuristic
B. A*
C. Greedy Search Algorithms
D. Hill climbing
Ans: A
9.Heuristic function h(n) is a function that estimates ___
A. How close a state is to a goal
B. How far a state is to a goal
C. How close a node is to a goal
D. How far a node is to a goal
Ans:
A

10.Heuristic function is represented by___


A . g(n)
B. h(n)
C. f(n)
D. o(n)
Ans:
B

11.Admissibility of the heuristic function is given


as:
A. h(n) <= h*(n)
B. h(n) = h*(n)
C. h(n) >= h*(n)
D. h(n) + h*(n)
Ans:
A

12. h*(n) is the __

A. estimated cost.
B. Heuristic cost.
C. All of the above
D. Path cost
Ans: A

13. h1 : Sum of __ distances of the tiles from their goal positions
A. Manhattan
B. Euclidean
C. Path
D. Heuristic
Ans: A

14. h2 : Sum of ___ distances of the tiles from their goal positions
A. Euclidean
B. Manhattan
C. Path
D. Heuristic
Ans: A
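Questions 11-14 above use admissibility, h(n) <= h*(n), and sums of tile distances from their goal positions. A sketch of the Manhattan and Euclidean tile-distance sums for the 8-puzzle; the goal layout and the flat-list board encoding are assumptions made for illustration:

```python
import math

# Assumed goal layout: tiles 1..8 in order, blank (0) in the last cell.
GOAL = {t: divmod(i, 3) for i, t in enumerate([1, 2, 3, 4, 5, 6, 7, 8, 0])}

def manhattan(board):
    """Sum of |row diff| + |col diff| for each tile (blank excluded)."""
    total = 0
    for i, tile in enumerate(board):
        if tile:
            r, c = divmod(i, 3)
            gr, gc = GOAL[tile]
            total += abs(r - gr) + abs(c - gc)
    return total

def euclidean(board):
    """Sum of straight-line tile distances; never exceeds the Manhattan sum."""
    total = 0.0
    for i, tile in enumerate(board):
        if tile:
            r, c = divmod(i, 3)
            gr, gc = GOAL[tile]
            total += math.hypot(r - gr, c - gc)
    return total

start = [1, 2, 3, 4, 0, 5, 6, 7, 8]
print(manhattan(start))                      # 6
print(euclidean(start) <= manhattan(start))  # True
```

Both sums never overestimate the true number of moves h*(n), so both are admissible; tile by tile the Euclidean distance is at most the Manhattan distance, which is why the latter is the stronger (more informed) of the two.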

Group No:9

1. Which of the following are Informed search algorithms


A. Best First Search
B. A* Search
C. Iterative Deeping Search
D. Both a & b

Ans: D

2. A* Search Algorithm __
A. does not expand the node which have the lowest value of
f(n),
B. finds the shortest path through the search space using the
heuristic function i.e f(n)=g(n) + h(n)
C. terminates when the goal node is not found.
D. All of the above

Ans: B

3. Which search is complete and optimal when h(n) is consistent?


a) Best-first search
b) Depth-first search
c) Both Best-first & Depth-first search
d) A* search

Ans: D

4. Which of the following mentioned searches are heuristic searches?

i. Random Search
ii. Depth First Search
iii. Breadth First Search
iv. Best First Search

Options:
a. Only iv.
b. All i., ii., iii. and iv.
c. ii. and iv.
d. None of the above

Answer: a. Only iv.

5. Which of the mentioned properties of heuristic search


differentiates it from other searches?
a. It provides solution in a reasonable time frame
b. It provides the reasonably accurate direction to a goal
c. It considers both actual costs that it took to reach the current
state and approximate cost it would take to reach the goal from
the current state
d. All of the above
Answer
All of the above

6. Consider the following statement:


"The search first begins from the root node and the first one of
the child node’s sub-tree is completely traversed. That is, first
all the one-sided nodes are checked, and then the other sided
nodes are checked."
Which search algorithm is described in the above definition?
a.The Breadth First Search (BFS)
b.The Depth First Search (DFS)
c.The A* search
d.None of the above
Answer: b

7. Consider the following statement:


"In AI search algorithms, we look for a solution which provides us
the most optimized way in terms of both time and cost to reach
from the current state to the Goal State."
State whether the above condition is true or false?
Answer
True
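The recurring point in Groups 8 and 9 is the A* evaluation function f(n) = g(n) + h(n), which is complete and optimal when h(n) is consistent. A compact sketch on a toy graph; the graph and the heuristic values below are invented (and the heuristic is consistent by construction):

```python
import heapq

graph = {"S": {"A": 1, "B": 4}, "A": {"B": 2, "G": 5}, "B": {"G": 1}, "G": {}}
h = {"S": 4, "A": 3, "B": 1, "G": 0}   # consistent, made-up estimates

def astar(start, goal):
    """Expand the node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:
            continue                              # already reached more cheaply
        best_g[node] = g
        for nbr, cost in graph[node].items():
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

print(astar("S", "G"))  # (['S', 'A', 'B', 'G'], 4)
```

Unlike greedy best-first search, which orders the frontier by h(n) alone, A* also counts the cost g(n) already paid, which is what buys it optimality here: the direct edge A-G (cost 5) is passed over for the cheaper S-A-B-G route.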
Group No: 10
1. How many types of agents are there in artificial intelligence?
a) 1
b) 2
c) 3
d) 4

Ans) d) 4
2. What is the rule of simple reflex agent?
a) Simple-action rule
b) Condition-action rule
c) Simple & Condition-action rule
d) None of the mentioned

Ans) b) Condition-action rule


3. Agents’ behavior can be best described by ____________
a) Perception sequence
b) Agent’s function
c) Sensors and Actuators
d) Environment in which agent is performing

Ans) b) Agent’s function


4. Performance Measures are fixed for all agents.
a) True
b) False

Ans) b) False (the performance measure depends on the agent's task)
5. What is rational at any given time depends on?
a) The performance measure that defines the criterion of success
b) The agent’s prior knowledge of the environment

c) The actions that the agent can perform
d) All of the mentioned

Ans) d) All of the mentioned

6. Rational agent is the one who always does the right thing.
a) True
b) False

Ans) a) True
7. A hardware-based system that has autonomy, social ability and
reactivity.
a) AI
b) Autonomous Agent
c) Agency
d) Behavior Engineering

Ans) b) Autonomous Agent


8. An international research effort to promote autonomous robots.
a) Fresh Kitty
b) RoboCup
c) AICup
d) SPOT

Ans) b) RoboCup
9. Which depends on the percepts and actions available to the
agent?
a) Agent
b) Sensor
c) Design Problem
d) None of the mentioned

Ans) c) Design Problem
10. An agent is composed of ________
a) Architecture
b) Agent Function
c) Perception Sequence
d) Architecture and Program

Ans) d) Architecture and Program

Group No: 11
1. The Hill-Climbing technique stuck for some reasons. which of the
following is the reason?
(A). Local maxima
(B). Ridges
(C). Plateaux
(D). All of these
(E). None of these
MCQ Answer: d

2. According to which of the following algorithm, a loop that


continually moves in the direction of increasing value, that is uphill.
(A). Up-Hill Search
(B). Hill-Climbing
(C). Hill algorithm
(D). Reverse-Down-Hill search
(E). None of these
MCQ Answer: b

3. When will the Hill-Climbing algorithm terminate?
(A). Stopping criterion met
(B). Global Min/Max is achieved
(C). No neighbor has a higher value
(D). All of these
(E). None of these
MCQ Answer: c

4. _____ Is an algorithm, a loop that continually moves in the


direction of increasing value – that is uphill.
(a) Up-Hill Search
(b) Hill-Climbing
(c) Hill algorithm
(d) Reverse-Down-Hill search
Answer: Option (b)

5. Hill climbing is commonly known as ......... search because it grabs a suitable neighbor state without thinking ahead about where to go next.
(A). Needy local search
(B). Heuristic local search
(C). Greedy local search
(D). Optimal local search
(E). None of these
MCQ Answer: c

6. Which of the following are the main disadvantages of a hill-climbing search?
(A). Stops at a local optimum and doesn't find the optimal solution
(B). Stops at a global optimum and doesn't find the optimal solution
(C). Doesn't find the optimal solution and fails to search for a solution
(D). Fails to find a solution
(E). None of these
MCQ Answer: a

7. Hill-Climbing technique stuck for which of the following reasons?


(A). Local maxima
(B). Ridges
(C). Plateaux
(D). All of these
(E). None of these
MCQ Answer: d

8. What are the main cons of hill-climbing search?

(a) Terminates at local optimum & does not find the optimum solution
(b) Terminates at global optimum & does not find the optimum solution
(c) Does not find the optimum solution & fails to find a solution
(d) Fails to find a solution
Answer: Option (a)

9. Simulated annealing differs from … in that a move is selected at random and then it decides whether to accept it.
A. Generate-and-Test
B. Hill Climbing
C. Best First Search
D. Simulated Annealing
Ans: Option B

10. Hill climbing Search algorithm works like ___ algorithm.


a. AI
b. A*
c. Hilltop
d. Generate and test
Ans : d. Generate and test

11. Greedy approach in hill climbing means choosing the best possible ____ solution.
a. Hilltop
b. Complex
c. Optimal
d. Nearest

Ans: d. Nearest

12. _ or gradient search is a useful variation on simple hill-climbing
which considers all the moves from the current state and selects
the best one as the next one.
Ans. Steepest-ascent hill climbing
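Question 12 describes steepest-ascent hill climbing: consider all moves from the current state, take the best neighbor, and stop when no neighbor is higher, which is also exactly how the algorithm gets stuck on local maxima and plateaux. A sketch on a made-up one-dimensional objective:

```python
def value(x):
    """Made-up objective with a single peak at x = 7 on the range 0..20."""
    return -(x - 7) ** 2

def steepest_ascent(x):
    """Move to the best neighbor until no neighbor improves on x."""
    while True:
        neighbors = [n for n in (x - 1, x + 1) if 0 <= n <= 20]
        best = max(neighbors, key=value)
        if value(best) <= value(x):   # no uphill move left: stop here
            return x
        x = best

print(steepest_ascent(0))   # 7
print(steepest_ascent(20))  # 7
```

On this single-peak objective the loop always reaches the global maximum; with several peaks it would halt on whichever local maximum it climbed first, which is the main disadvantage named in questions 6-8.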

Group No: 12
1. What is meant by simulated annealing in artificial intelligence?
a) Returns an optimal solution when there is a proper cooling
schedule
b) Returns an optimal solution when there is no proper cooling
schedule
c) It will not return an optimal solution when there is a proper
cooling schedule
d) None of the mentioned
Answer: Option a)
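As the question says, simulated annealing picks a random move and then decides whether to accept it; with a proper cooling schedule, downhill moves become ever less likely and an optimal solution is returned. A minimal sketch (the objective, bounds, and cooling schedule are all invented for illustration):

```python
import math
import random

def value(x):
    return -(x - 7) ** 2          # made-up objective, peak at x = 7

def simulated_annealing(x, steps=5000, t0=10.0):
    random.seed(0)                # fixed seed so the run is repeatable
    for k in range(1, steps + 1):
        t = t0 / k                # simple cooling schedule: temperature falls
        nxt = min(20, max(0, x + random.choice((-1, 1))))   # random move
        delta = value(nxt) - value(x)
        # Accept uphill moves always; downhill with probability e^(delta/t),
        # which shrinks toward zero as the temperature t cools.
        if delta > 0 or random.random() < math.exp(delta / t):
            x = nxt
    return x

print(simulated_annealing(0))  # 7 with this seed
```

Early on, the high temperature lets the search accept bad moves and escape local structure; by the end it behaves like plain hill climbing, which is the sense in which the cooling schedule controls optimality.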

2. How the new states are generated in genetic algorithm?


a) Composition
b) Mutation
c) Cross-over
d) Both Mutation & Cross-over
Answer: d)

3. Which of the following are the two key characteristics of the


Genetic Algorithm?
a) Crossover techniques and Fitness function

63
b) Random mutation and Crossover techniques
c) Random mutation and Individuals among the population
d) Random mutation and Fitness function
Answer: a)
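Questions 2 and 3 name the two operators that generate new states in a genetic algorithm, crossover and mutation, steered by a fitness function. A toy sketch; the bit-string encoding, population size, and fitness (count of 1-bits) are invented for illustration:

```python
import random

random.seed(1)
LENGTH, POP = 8, 6

def fitness(ind):
    return sum(ind)                        # made-up fitness: count of 1-bits

def crossover(a, b):
    cut = random.randrange(1, LENGTH)      # single-point crossover
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.1):
    return [bit ^ 1 if random.random() < rate else bit for bit in ind]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
best_history = []
for _ in range(40):                        # generations
    pop.sort(key=fitness, reverse=True)
    best_history.append(fitness(pop[0]))
    parents = pop[:2]                      # selection: fittest two survive
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(POP - 2)]
    pop = parents + children               # elitism keeps the best so far

print(best_history[0], "->", best_history[-1])  # best fitness never decreases
```

Crossover recombines two fit parents, mutation injects random variation, and because the two best individuals are carried over each generation, the best fitness in the population can only improve or stay level.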

4. Searching by query on the Internet is the use of which of the


following type of agent.
a) Offline agent
b) Online agent
c) Both Offline and Online agent
d) Goal-Based and Online agent
Answer: d)

CLASS: TYCS    SEM: 5    SUBJECT: ARTIFICIAL INTELLIGENCE

1. An agent in artificial intelligence is composed of ________
a) Actuator
b) Agent Function
c) Perception Sequence
d) Architecture and Program
Ans: Architecture and Program
2. The Task Environment of an agent consists of what all?
a) Sensors, Actuators, Environment, Performance Measure
b) Queues
c) Stack
d) Search
Ans: Sensors, Actuators, Environment, Performance Measure

3. Which instruments are used for perceiving and acting upon the environment?
a) Sensors and Actuators
b) Sensors
c) Perceiver
d) Program
Ans: Sensors and Actuators

4. How many types of agents are there in artificial intelligence?
a) 1
b) 2
c) 3
d) 4
Ans: 4

5. What is the rule of simple reflex agent?
a) Simple-action rule
b) Condition-action rule
c) Simple & Condition-action rule
d) No rule
Ans: Condition-action rule

6. Manhattan Distance can be used to find the value of _________
a) Heuristic function
b) Ridges
c) Stacks
d) Nothing
Ans: Heuristic function

7. Which agent deals with happy and unhappy states?
a) Simple reflex agent
b) Model based agent
c) Learning agent
d) Utility based agent
Ans: Utility based agent

8. What is the general term of Blind searching?
a) Informed Search
b) Uninformed Search
c) Informed & Unformed Search
d) Heuristic Search
Ans: Uninformed Search

9. Which data structure is conveniently used to implement BFS?
a) Stacks
b) Queues
c) Priority Queues
d) Sensors
Ans: Queues

10. Which data structure is conveniently used to implement DFS?
a) Stacks
b) Queues
c) Priority Queues
d) Sensors
Ans: Stacks

11. Uniform Cost search is an __________ search.
a) Optimal
b) Slow
c) Not optimal
d) Perceiver
Ans: Optimal

12. Which of the following is an Uninformed Search technique?
a) A* Search
b) Best First Search
c) Bidirectional Search
d) No Search
Ans: Bidirectional Search

13. Hill-Climbing search approach gets stuck for which of the following reasons?
a) Sensors
b) Actuators
c) Ridges
d) Program
Ans: Ridges

14. A Simple Reflex agent is a type of ________
a) Agent
b) Environment
c) Search
d) Rule
Ans: Agent

15. If a hypothesis agrees with all the data, it is called a _________.
a) Consistent hypothesis
b) Integral Hypothesis
c) Regular Hypothesis
d) Best Hypothesis
Ans: Consistent hypothesis

16. Uniform-cost search expands the node n with the __________
a) Lowest path cost
b) Heuristic cost
c) Highest path cost
d) Average path cost
Ans: Lowest path cost

17. Which search is complete and optimal when h(n) is consistent?
a) Best-first search
b) Depth-first search
c) Both Best-first & Depth-first search
d) A* search
Ans: A* search

18. Which of the following is a type of Hill Climbing Search?
a) Supervised learning
b) Depth first search
c) Steepest Ascent Hill Climbing
d) Dragon Hill climbing
Ans: Steepest Ascent Hill Climbing

19. Simulated Annealing is a type of which search algorithm?
a) Breadth first search
b) Depth first search
c) Utility
d) Hill climbing search
Ans: Hill climbing search

20. Which equation is true for A* search?
a) f(n) <= h(n)
b) f(n) <> h(n)
c) f(n) = h(n) - g(n)
d) f(n) = h(n) + g(n)
Ans: f(n) = h(n) + g(n)

21. Greedy best first search is a type of __________ search.
a) Unknown
b) Rewards
c) Informed
d) Uninformed
Ans: Informed

22. Classification and Regression are types of which learning method?
a) Reinforcement learning
b) Supervised learning
c) Unsupervised learning
d) No Learning
Ans: Supervised learning

23. Clustering comes under which type of learning?
a) Reinforcement learning
b) Supervised learning
c) Unsupervised learning
d) Problem generator
Ans: Unsupervised learning

24. Linear Regression is a popular statistical model used for ___________.
a) Rewards
b) Clustering
c) Classification
d) Regression
Ans: Regression

25. Entropy in Decision tree is a measure of the _________ in the information being processed.
a) Randomness
b) Leaf Node
c) Boosting
d) Reflex
Ans: Randomness

26. The regression model attempts to predict the value of the dependent variable depending upon the new value of the ________ variable.
a) random
b) independent
c) reflex
d) inspired
Ans: independent

27. For decision trees, a technique called decision tree ________ combats overfitting.
a) addition
b) residuals
c) sorting
d) pruning
Ans: pruning

28. In binary logistic regression ___________
a) The dependent variable is continuous.
b) The dependent variable is divided into four equal subcategories.
c) The dependent variable consists of two categories.
d) There is no dependent variable.
Ans: The dependent variable consists of two categories.

29. MLE is a statistical approach to estimating the __________ of a mathematical model.
a) randomness
b) parameters
c) reflex
d) residual
Ans: parameters

30. What is the full form of MLE?
a) Maximum Likelihood Estimator
b) Maximum Length Estimator
c) Minimum Lag Estimator
d) Minimum Lookup Estimator
Ans: Maximum Likelihood Estimator

31. In practice, the line of best fit or regression line is found when _____________
a) Sum of residuals (∑(Y - h(X))) is maximum
b) Sum of the absolute value of residuals (∑|Y - h(X)|) is maximum
c) Sum of the square of residuals (∑(Y - h(X))^2) is minimum
d) Sum of outputs is maximum
Ans: Sum of the square of residuals is minimum

32. _________ function maps any real value into another value between 0 and 1.
a) Sigmoid
b) Sum
c) Product
d) Difference
Ans: Sigmoid

33. Polynomial Regression is a type of ______
a) Reinforcement Learning
b) Non Linear Regression
c) Unsupervised learning
d) Error
Ans: Non Linear Regression

34. Multiple regression analysis is used when ______________
a) There is not enough data to carry out simple linear regression analysis.
b) The dependent variable depends on more than one independent variable.
c) The relationship between the dependent variable and the independent variables cannot be described by a linear function.
d) One or more of the assumptions of simple linear regression are not correct.
Ans: The dependent variable depends on more than one independent variable.

35. In SVM, __________ implicitly transform the input data into a high-dimensional space where a linear separator may exist, even if the original data are non-separable.
a) Variables
b) Hidden methods
c) Crazy methods
d) Kernel methods
Ans: Kernel methods

36. ______________ determines how much importance is to be given to the immediate reward and future rewards.
a) Encryption Factor
b) Discount Factor
c) Design Factor
d) Market Factor
Ans: Discount Factor

37. ________ learning is a model-free learning method.
a) Distorted
b) Euclidean
c) Imperial
d) Temporal Difference
Ans: Temporal Difference

38. ____________ is a type of passive reinforcement learning.
a) Q-Learning
b) Direct Utility Estimation
c) Decentralization
d) GST
Ans: Direct Utility Estimation

39. _______ is a type of active reinforcement learning.
a) Direct Utility Estimation
b) Centralization
c) Q-Learning
d) Bluffing
Ans: Q-Learning

40. K-Means clustering is an __________ iterative clustering technique.
a) unsupervised
b) supervised
c) ergonomic
d) encrypted
Ans: unsupervised

41. SARSA is a type of __________ learning.
a) Distorted
b) Reinforcement
c) unsupervised
d) analytic
Ans: Reinforcement

42. Which search method takes less memory?
a) Depth-First Search
b) Breadth-First search
c) Random search
d) Crazy search
Ans: Depth-First Search

43. The node which does not have any child node is called the ______ node.
a) parent
b) cache
c) imperial
d) leaf
Ans: leaf

44. The main idea of ________ is to reduce the time complexity by searching two ways simultaneously, one from the start node and another from the goal node.
a) Best First Search
b) Bidirectional search
c) A* search
d) B* search
Ans: Bidirectional search

45. The most widely used ensemble method is called ________
a) Baiting
b) Clustering
c) Boosting
d) Breadth-First search
Ans: Boosting

46. If the next state of the environment is completely determined by the current state and the action executed by the agent, then the environment is said to be ________.
a) Ideal
b) deterministic
c) optimistic
d) stale
Ans: deterministic

47. In Hill climbing, ___________ is a flat region of state space where neighbouring states have the same value.
a) Agent
b) Global Maxima
c) Learner
d) Plateau
Ans: Plateau

48. _________ maintain internal state to track aspects of the world that are not evident in the current percept.
a) New agents
b) Large agents
c) Model-based agents
d) Estate agents
Ans: Model-based agents

49. In the Turing test, a computer needs to interact with a human ___________ by answering his questions in written format.
a) Server
b) internet
c) Bank
d) interrogator
Ans: interrogator

50. In reinforcement learning, if the value of discount factor is 1, it means that more importance is given to the ________ rewards.
a) future
b) parallel
c) immediate
d) horn
Ans: future

51. What is the full form of MAP?
a) Minimum a process
b) Maximum a process
c) Maximum a posteriori
d) Minimum a program
Ans: Maximum a posteriori

52. Decision Trees can be used for ________ tasks.
a) Intercept
b) Machine
c) Sensors
d) Classification
Ans: Classification

53. Supervised Learning is ___________
a) learning with the help of examples
b) learning with computers as supervisor
c) learning with the help of teacher
d) learning without teacher
Ans: learning with the help of teacher

54. The problem of finding hidden structure in unlabelled data is called ______.
a) Supervised Learning
b) Unsupervised Learning
c) Reinforced Learning
d) Not Learning
Ans: Unsupervised Learning

55. In a simple linear regression model (one independent variable), if we change the input variable by 1 unit, how much will the output variable change?
a) By 1
b) No change
c) By intercept
d) By its slope
Ans: By its slope

56. Logistic regression comes under which learning method?
a) Supervised Learning
b) Unsupervised Learning
c) Reinforcement Learning
d) Not Learning
Ans: Supervised Learning

57. Support Vector Machine can be used for __________.
a) Vectors
b) Classification
c) Mentoring
d) Programs
Ans: Classification

58. What is the full form of EM algorithm?
a) Ensemble Machine
b) Expectation Maximization
c) Enabled Machine learning
d) Extreme Minimization
Ans: Expectation Maximization

59. ________________ is the most common Bayesian network model used in machine learning.
a) Linear Regression
b) KNN model
c) Naïve Bayes model
d) Star network
Ans: Naïve Bayes model

60. Q-learning is a model-free ____________ algorithm.
a) Reinforcement learning
b) Supervised learning
c) Unsupervised learning
d) Without learning
Ans: Reinforcement learning

61. A* search is an _________ search.
a) Informed
b) Uninformed
c) Crazy
d) Unknown
Ans: Informed

62. _________ agent is the one who always does the right thing.
a) Server
b) Slope
c) Rational
d) Parameter
Ans: Rational

63. Categorize the Crossword puzzle as a Fully Observable / Partially Observable environment.
a) Fully Observable
b) Partially Observable
c) Parent
d) Parameter
Ans: Fully Observable

64. In which agent is the problem generator present?
a) Learning agent
b) Observing agent
c) Reflex agent
d) Server
Ans: Learning agent

65. The agent's sole objective in Reinforcement learning is to maximize the total _____ it receives in the long run.
a) Search
b) Slope
c) Rewards
d) Null
Ans: Rewards

66. An ______ reinforcement learning agent changes its policy as it goes and learns.
a) Active
b) Passive
c) Child
d) Parent
Ans: Active

67. Model-based learning is a simple approach to _________.
a) Supervised learning
b) Unsupervised learning
c) Reinforcement learning
d) Camera
Ans: Reinforcement learning

68. Iterative Deepening Depth First Search is an _________ search.
a) Server
b) Uninformed
c) Crazy
d) Informed
Ans: Uninformed

69. __________ agents take actions to achieve their goals.
a) Entropy
b) Goal based
c) Simple
d) Reflex based
Ans: Goal based

70. A nonparametric model is one that cannot be characterized by a bounded set of _______.
a) Slopes
b) Parameters
c) Neighbours
d) Utility
Ans: Parameters

71. What is Artificial intelligence?
a) Putting your neurons into Computer
b) Programming with your own intelligence
c) Making a Machine intelligent
d) Playing a Game
Ans: Making a Machine intelligent

72. Identify the environment of an agent of a part-picking robot by itself.
a) Multi agent Environment
b) A single-agent environment
c) Not an environment
d) Penalty
Ans: A single-agent environment

73. The ________ in the Learning agent is responsible for suggesting actions that will lead to new and informative experiences.
a) Problem generator
b) Critic
c) Server
d) Entropy
Ans: Problem generator

74. The self-driving car sensors include the following.
a) Brake
b) Camera
c) Steering
d) Clutch
Ans: Camera

75. From where does the learning element take feedback to improve performance?
a) Actuators
b) Problem generator
c) Critic
d) Slope
Ans: Critic

76. What is the full form of ReLU activation function?
a) Reverse Lag Unit
b) Rectified Linear Unit
c) Reinforcement Learning Unit
d) Recurring Logic Unit
Ans: Rectified Linear Unit

77. In Reinforcement learning, an optimal policy is a policy that maximizes the expected total __________.
a) variables
b) penalties
c) lag
d) reward
Ans: reward

78. In reinforcement learning, a _________ agent learns a utility function on states and uses it to select actions that maximize the expected outcome utility.
a) dump
b) buffer-based
c) utility-based
d) rogue
Ans: utility-based

79. In reinforcement learning, a _________ agent learns an action-utility function.
a) parameter learning
b) Q-learning
c) buffer-based
d) rogue
Ans: Q-learning

80. What is the full form of MDP?
a) Markov Decision Process
b) Marking Design Process
c) Module Development Program
d) Most Delightful Pandemic
Ans: Markov Decision Process

81. ______________ learning is when we want an agent to learn about the utilities of various states under a fixed policy.
a) Active Reinforcement
b) Dynamic
c) Unsupervised
d) Passive Reinforcement
Ans: Passive Reinforcement

82. Which of the following statements about Naive Bayes is incorrect?
a) Attributes are equally important.
b) Attributes can be nominal or numeric.
c) Attributes are statistically independent of one another given the class value.
d) Attributes are statistically dependent on one another given the class value.
Ans: Attributes are statistically dependent on one another given the class value.

83. Out of the two repeated steps in the EM algorithm, step 2 is ________
a) the minimization step
b) the maximization step
c) the optimization step
d) the normalization step
Ans: the maximization step

84. Which of the following methods is used for finding the optimal number of clusters in the K-Means algorithm?
a) Elbow method
b) Manhattan method
c) Euclidian method
d) Scalar method
Ans: Elbow method

85. Adaptive Dynamic Programming is a model-based approach which requires the __________ model of the environment.
a) attribute
b) Classification
c) transition
d) Monopoly based
Ans: transition

86. A …………………. has connections only in one direction.
a) feed forward network
b) feed back network
c) uniform network
d) unicorn network
Ans: feed forward network

87. A network with all the inputs connected directly to the output is called a _______________.
a) Simple-action rule
b) Single layer neural network
c) Simple & Condition-action rule
d) No rule
Ans: Single layer neural network

88. In ANN, the process of adjusting the weights is known as __________
a) searching
b) learning
c) styling
d) synchronizing
Ans: learning

89. DFS is ______ efficient and BFS is __________ efficient.
a) Space, Time
b) Time, Space
c) Time, Time
d) Space, Space
Ans: Space, Time

90. What is a Decision Tree?
a) Flow-Chart
b) Building structure
c) Flow-Chart & structure in which each internal node represents a test on an attribute, each branch represents an outcome of the test, and each leaf node represents a class label
d) Flow of water
Ans: Flow-Chart & structure in which each internal node represents a test on an attribute, each branch represents an outcome of the test, and each leaf node represents a class label

91. ___________ is a powerful, effective and simple ensemble learning method.
a) Temporal difference
b) A*
c) Parameter dumbing
d) Bagging
Ans: Bagging

92. A non-parametric model is also called ______
a) Model based learning
b) Memory based learning
c) Ambiguous Learning
d) No learning
Ans: Memory based learning

93. Adaboost is known as a ________
a) Sequential Learner
b) Parallel Learner
c) Sensors
d) Utility
Ans: Sequential Learner

94. The Decision tree selects the attribute which has the ________ Entropy or largest Information gain.
a) parallel
b) largest
c) smallest
d) disruptive
Ans: smallest

95. Greedy best first search evaluates the node using _________
a) f(n) = g(n)
b) f(n) = h(n)
c) f(n) != g(n)
d) f(n) = 0
Ans: f(n) = h(n)

96. ________ are the receptive parts of an agent which take in the input for the agent.
a) Actuators
b) Searches
c) Sensors
d) Cleaners
Ans: Sensors

97. ______________ is a performance criterion which determines how much memory the search method requires.
a) Completeness
b) Global Maxima
c) Sensing
d) Space Complexity
Ans: Space Complexity

98. In the task environment for an automated taxi, the actuators can be __________
a) Cameras, sonar, speedometer, GPS, odometer
b) Safe, fast, legal, comfortable trip, maximize profits
c) Steering, accelerator, brake, horn
d) Pedestrians, customers
Ans: Steering, accelerator, brake, horn

99. In reinforcement learning, if the value of discount factor is 0, it means that more importance is given to the ________ reward.
a) future
b) immediate
c) parallel
d) horn
Ans: immediate

100. Q-Learning technique is an ______________ and uses the greedy approach to learn the Q-value.
a) Off Policy technique
b) On Policy technique
c) Intermediate policy technique
d) Immediate policy
Ans: Off Policy technique
