K. K. Wagh Institute of Engineering Education and Research, Nashik
(Autonomous from Academic Year 2022-23)

Question Bank on Unit-1 & 2
ADS223002: Artificial Intelligence

1. What do you mean by Artificial Intelligence? Also describe the foundations of AI.

Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed
to think, learn, and make decisions. AI systems are designed to perform tasks that typically require human
intelligence, such as understanding natural language, recognizing patterns, solving problems, and making
decisions. AI can be categorized into narrow AI, which is designed for specific tasks, and general AI, which
would theoretically have the ability to perform any intellectual task that a human can.

Foundations of AI

The foundations of AI are rooted in various disciplines, including:

1. Mathematics: Provides tools for reasoning and problem-solving, such as logic, probability, and
statistics.
2. Computer Science: Offers the computational power and algorithms needed to build and run AI
systems.
3. Psychology and Cognitive Science: Help in understanding how humans think and learn, which
inspires the development of AI models.
4. Neuroscience: Contributes insights into how the brain functions, informing the design of neural
networks and other AI technologies.
5. Linguistics: Assists in the development of natural language processing, enabling machines to
understand and generate human language.
6. Philosophy: Raises ethical and existential questions about the nature of intelligence, consciousness,
and the implications of AI.

2. Explain the history of Artificial Intelligence.

History of Artificial Intelligence

The history of AI can be traced back to ancient myths and stories about artificial beings endowed with
intelligence. However, modern AI began in the mid-20th century with the following key milestones:

1. 1940s-1950s: Early Concepts


o Alan Turing proposed the concept of a machine that could simulate any human intelligence,
leading to the development of the Turing Test.
o The term "Artificial Intelligence" was coined in 1956 at the Dartmouth Conference by John
McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon.
2. 1960s-1970s: Early AI Research
o AI research focused on symbolic reasoning, problem-solving, and early machine learning
algorithms.
o The development of expert systems began, which were designed to mimic the decision-
making abilities of a human expert.
3. 1980s: AI Winter and Revival
o AI experienced a period of reduced funding and interest known as the "AI Winter" due to
unmet expectations.
o In the late 1980s, AI research revived with advancements in machine learning and the
development of neural networks.
4. 1990s-2000s: Practical Applications
o AI began to be integrated into practical applications such as speech recognition, data mining,
and robotics.
o The advent of big data and improved computational power accelerated the development of
more sophisticated AI systems.
5. 2010s-Present: AI Boom
o The rise of deep learning, a subset of machine learning, has led to significant breakthroughs
in image recognition, natural language processing, and autonomous systems.
o AI is now a part of everyday life, powering applications like virtual assistants (e.g., Siri,
Alexa), recommendation systems, and autonomous vehicles.

3. Give a brief note on the risks and benefits of Artificial Intelligence.

Risks and Benefits of Artificial Intelligence

Benefits:

1. Automation: AI can automate repetitive tasks, improving efficiency and reducing human error.
2. Enhanced Decision-Making: AI can analyze large amounts of data quickly, helping in making
informed decisions.
3. Personalization: AI powers personalized recommendations in e-commerce, entertainment, and
social media.
4. Healthcare: AI aids in medical diagnostics, drug discovery, and personalized treatment plans.
5. Economic Growth: AI-driven innovation can boost productivity and create new industries.

Risks:

1. Job Displacement: Automation powered by AI could lead to job losses in certain industries.
2. Bias and Discrimination: AI systems can perpetuate or even amplify biases present in training data.
3. Security Risks: AI can be used in cyberattacks, creating sophisticated threats.
4. Loss of Control: There is a concern that advanced AI systems might act unpredictably or beyond
human control.
5. Ethical Concerns: The use of AI in areas like surveillance, warfare, and decision-making raises
significant ethical questions.

4. Explain Agent and Environment with their types.

Agent and Environment

Agent: An agent is an entity that perceives its environment through sensors and acts upon that environment
using actuators. An agent can be anything that can make decisions and perform actions, such as a robot,
software program, or even a human.

Environment: The environment is the external world in which the agent operates. It provides the context
for the agent's actions and includes all external factors that influence the agent's behavior.

Types of Agents:

1. Simple Reflex Agents: These agents make decisions based on current percepts only, ignoring
history (a short code sketch follows this list).
2. Model-Based Reflex Agents: They maintain an internal state based on the history of percepts to
make decisions.
3. Goal-Based Agents: These agents act to achieve specific goals, considering future states and
consequences of actions.
4. Utility-Based Agents: These agents not only aim to achieve goals but also maximize their
performance measure, often expressed in terms of utility functions.
5. Learning Agents: These agents can improve their performance over time by learning from their
experiences.

Types of Environments:

1. Fully Observable vs. Partially Observable: In fully observable environments, the agent has access
to the complete state of the environment at any time. In partially observable environments, the agent
only has limited information.
2. Deterministic vs. Stochastic: In deterministic environments, the next state of the environment is
fully determined by the current state and the agent's actions. In stochastic environments, there is
some randomness involved.
3. Episodic vs. Sequential: In episodic environments, the agent's experiences are divided into episodes
that are independent of each other. In sequential environments, the current action can affect future
decisions.
4. Static vs. Dynamic: In static environments, the environment does not change while the agent is
deciding on an action. In dynamic environments, the environment can change during the agent's
decision-making process.
5. Discrete vs. Continuous: In discrete environments, there are a finite number of distinct states,
actions, and percepts. In continuous environments, the state and actions can vary across a continuous
range.

5. What is an Intelligent Agent? Explain its types.

What is an Intelligent Agent?

An Intelligent Agent is an agent that can autonomously make decisions, adapt to changes, and learn from
experiences to improve its performance over time. It operates in complex environments and can handle
uncertainty.

Types of Intelligent Agents:

 Simple reflex agents

These agents respond to stimuli based on pre-programmed rules and don't consider past or future
consequences.



 Model-based reflex agents

These agents are similar to simple reflex agents, but they have a more comprehensive understanding
of their environment.

 Goal-based agents

Also known as rational agents, these agents use goals to describe desirable situations and choose
actions to achieve those goals.

 Utility-based agents

These agents are similar to goal-based agents, but they can also measure utility. They can rate
potential scenarios based on desired results and choose actions to maximize the outcome.

 Learning agents

These agents get smarter over time as they learn from their experiences and acquire more knowledge.
6. Differentiate between Informed and Uninformed Search Techniques.

Definition:
o Informed: Techniques that use problem-specific knowledge to find solutions more efficiently.
o Uninformed: Techniques that do not use problem-specific knowledge; they explore the search
space systematically.

Knowledge Utilized:
o Informed: Utilizes heuristics or additional information about the problem to guide the search.
o Uninformed: No heuristics or additional problem-specific information is used.

Example Algorithms:
o Informed: A* Search, Greedy Best-First Search, Hill Climbing.
o Uninformed: Breadth-First Search (BFS), Depth-First Search (DFS), Uniform Cost Search.

Efficiency:
o Informed: Often more efficient because they use heuristics to focus the search in promising
directions.
o Uninformed: Can be less efficient as they explore large portions of the search space without
guidance.

Optimality:
o Informed: May not always guarantee an optimal solution unless combined with specific
heuristics (e.g., A* with an admissible heuristic).
o Uninformed: Algorithms like Uniform Cost Search guarantee optimality if path costs are
non-negative.

Completeness:
o Informed: Generally complete if the heuristic is admissible (i.e., does not overestimate the cost).
o Uninformed: Generally complete, provided the search space is finite and the algorithm does not
get stuck in infinite loops.

Space Complexity:
o Informed: Can be high due to storing heuristic information and maintaining priority queues
(e.g., A* requires storing nodes and their costs).
o Uninformed: Can be lower (e.g., DFS has lower space complexity than BFS).

Search Direction:
o Informed: Directed towards the goal using heuristic estimates (e.g., A* uses f(n) = g(n) + h(n)).
o Uninformed: Searches in all directions uniformly, without heuristic guidance (e.g., BFS explores
level by level).

Handling of Cycles:
o Informed: Generally handles cycles well, but may require cycle-checking mechanisms
(e.g., A* avoids revisiting nodes).
o Uninformed: Depends on the algorithm; DFS can get stuck in cycles without cycle checking,
while BFS can handle cycles if implemented properly.

7. Explain the Breadth-First Search and Depth-First Search techniques with an example. Also compare
their time and space complexity.

Breadth-First Search (BFS) and Depth-First Search (DFS)

Breadth-First Search (BFS)

Description:

 BFS explores all the nodes at the present depth level before moving on to the nodes at the next depth level.
 It uses a queue data structure to keep track of the nodes that need to be explored.

Example: Consider the following tree:

A
/ | \
B C D
/| | | \
E F G H I

BFS Traversal: A → B → C → D → E → F → G → H → I

Time Complexity:

 O(b^d), where b is the branching factor (average number of child nodes per node) and d is the
depth of the shallowest solution.

Space Complexity:

 O(b^d), since it needs to store all the nodes at the current level.

Depth-First Search (DFS)

Description:

 DFS explores as far down a branch as possible before backtracking to explore other branches.
 It uses a stack data structure (can be implemented using recursion) to keep track of nodes.

Example: Using the same tree as above:

DFS Traversal: A → B → E → F → C → G → D → H → I



Time Complexity:

 O(b^m), where m is the maximum depth of the tree.

Space Complexity:

 O(bm) in the worst case, which occurs if the tree is deep.

Comparison:

 BFS is optimal if all step costs are equal, as it finds the shortest path first. DFS is not optimal.
 BFS consumes more memory than DFS.
 DFS can get stuck in an infinite loop if the search space is infinite or very deep without cycle detection.
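Both traversals can be reproduced with short Python sketches. The adjacency dictionary below encodes
the example tree; the function names bfs and dfs are our own, not from a library.

from collections import deque

# The 9-node example tree as an adjacency dictionary.
tree = {"A": ["B", "C", "D"], "B": ["E", "F"], "C": ["G"],
        "D": ["H", "I"], "E": [], "F": [], "G": [], "H": [], "I": []}

def bfs(start):
    order, queue = [], deque([start])       # FIFO frontier
    while queue:
        node = queue.popleft()
        order.append(node)
        queue.extend(tree[node])
    return order

def dfs(start):
    order, stack = [], [start]              # LIFO frontier
    while stack:
        node = stack.pop()
        order.append(node)
        stack.extend(reversed(tree[node]))  # reversed keeps left-to-right order
    return order

print(bfs("A"))   # ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']
print(dfs("A"))   # ['A', 'B', 'E', 'F', 'C', 'G', 'D', 'H', 'I']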

8. Describe depth-limited search with an example and its performance measures.

Depth-Limited Search (DLS)

Description:

 DLS is a variant of DFS that limits the depth to which the search can go. This avoids infinite loops in deep or
infinite search spaces.

Example: Using the previous tree and limiting the depth to 1:

DLS Traversal (Depth Limit = 1): A → B → C → D

Nodes E through I are not explored because they lie beyond the depth limit.

Performance Measures:

 Completeness: Incomplete if the solution is deeper than the limit.


 Optimality: Not optimal if the shallowest solution is beyond the depth limit.
 Time Complexity: O(b^l), where l is the depth limit.
 Space Complexity: O(bl), since it only needs to store the path up to the depth limit.
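A minimal Python sketch of depth-limited DFS on the same tree; the dls helper below is an
illustrative implementation, not a standard one.

# Depth-limited DFS: visit a node, then recurse only while the limit allows.
tree = {"A": ["B", "C", "D"], "B": ["E", "F"], "C": ["G"],
        "D": ["H", "I"], "E": [], "F": [], "G": [], "H": [], "I": []}

def dls(node, limit, order=None):
    if order is None:
        order = []
    order.append(node)
    if limit > 0:             # children beyond the limit are never generated
        for child in tree[node]:
            dls(child, limit - 1, order)
    return order

print(dls("A", 1))   # ['A', 'B', 'C', 'D']   (depth limit 1, as above)
print(dls("A", 2))   # ['A', 'B', 'E', 'F', 'C', 'G', 'D', 'H', 'I']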

9. What is a heuristic search technique? Explain the A* algorithm.

Heuristic Search Technique

Definition: A heuristic search uses a heuristic function h(n) that estimates the cost to reach the
goal from node n. This estimate guides the search towards the most promising paths, improving
efficiency.

A* Algorithm

Description:

 A* is a popular heuristic search algorithm that combines the actual cost to reach a node, g(n),
and the estimated cost to reach the goal from that node, h(n).
 It evaluates nodes using the function f(n) = g(n) + h(n) and selects the node with the lowest
f(n) value.

Example: Consider a graph where you want to find the shortest path from node A to G. Assume the
following costs:

 g(n) (actual cost so far) accumulates along the path A → B → C → G.
 h(n) (estimated cost to goal): G has a heuristic value of 0.

A* explores paths by evaluating f(n) until it finds the one that minimizes this function, leading to the
goal.

Performance Measures:

 Completeness: A* is complete if the branching factor is finite and every arc has a positive cost.
 Optimality: A* is optimal if the heuristic function h(n) is admissible (never overestimates the cost
to reach the goal).
 Time Complexity: Depends on the heuristic, but in the worst case it can be O(b^d).
 Space Complexity: Typically O(b^d), as it needs to store all generated nodes.
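A compact Python sketch of A* using a priority queue; the weighted graph and heuristic values below
are made-up assumptions, chosen so that h is admissible.

import heapq

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("G", 6)],
         "C": [("G", 3)], "G": []}
h = {"A": 5, "B": 4, "C": 2, "G": 0}     # admissible estimates of cost to G

def a_star(start, goal):
    frontier = [(h[start], 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):   # found a cheaper route
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

print(a_star("A", "G"))   # (['A', 'B', 'C', 'G'], 6)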

10. Illustrate the step-by-step workflow of a local search algorithm.

Local Search Algorithm Workflow

Local Search Algorithms focus on finding solutions by iteratively improving a single candidate solution.
They do not explore the entire search space but move to neighboring states based on some criterion.

Example: Hill-Climbing Algorithm

1. Start: Begin with an initial solution.


2. Evaluate: Calculate the cost or value of the current state.
3. Generate Neighboring States: Explore nearby states.
4. Move: If a neighbor has a better cost or value, move to that state.
5. Repeat: Continue this process until no better neighboring state is found (local maximum).

Example Workflow:

1. Initial State: Suppose you're at point A on a hill.


2. Evaluate: The current height is 100 meters.
3. Neighboring States: The heights nearby are 95 meters, 105 meters, and 110 meters.
4. Move: Choose the state with the height of 110 meters.
5. Repeat: Continue this process until reaching the peak.

Performance Measures:

 Completeness: Not guaranteed; may get stuck in local maxima or minima.


 Optimality: Not guaranteed; the solution may be suboptimal.
 Time Complexity: Generally O(n) for simple problems, but depends on the complexity of the
neighbor evaluation.
 Space Complexity: Typically very low, O(1), as it only needs to store the current state and a few
neighbors.
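The loop above can be written in a few lines of Python; the height function, start point, and step size
below are illustrative assumptions.

def height(x):
    return -(x - 3) ** 2 + 110      # a single peak at x = 3, height 110

def hill_climb(x, step=0.5):
    while True:
        best = max((x - step, x + step), key=height)   # best neighbor
        if height(best) <= height(x):                  # no improvement:
            return x, height(x)                        # local maximum reached
        x = best

print(hill_climb(0.0))   # climbs to the peak: (3.0, 110.0)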



11. Draw the state space diagram of the Hill Climbing algorithm with its different regions and also
explain the problems that arise in this algorithm.

[Figure not reproduced: a state space diagram of the Hill Climbing algorithm, plotting objective value
(elevation) against the state space and marking regions such as the Global Maximum, Local Maximum,
Plateau, and Ridge. It shows how the algorithm moves towards higher states and where the search can
get stuck.]

Problems in the Hill Climbing Algorithm

1. Local Maximum:
o The algorithm may reach a peak that is higher than the surrounding area but lower than the global
maximum, causing it to stop without finding the best possible solution.
2. Plateau:
o A flat region in the state space where there is no gradient to guide the search. The algorithm may
wander without making progress.
3. Ridge:
o A narrow path leading to higher ground. If the search path is not perfectly aligned with the ridge, the
algorithm might fall off and miss the optimal solution.

12. Give a brief note on Simulated Annealing.



Simulated Annealing

Simulated Annealing is an optimization technique that helps to overcome the limitations of the Hill
Climbing algorithm, such as getting stuck in local maxima. It is inspired by the annealing process in
metallurgy, where a material is heated and then slowly cooled to reduce defects and find a stable structure.

Key Concepts:

 The algorithm accepts not only improvements but also some worse solutions with a probability that decreases
over time. This probability is controlled by a "temperature" parameter.
 As the temperature decreases, the algorithm becomes more selective, eventually converging to a solution.
 Simulated Annealing can escape local maxima and explore the search space more effectively.
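A Python sketch of the idea, reusing the one-peak height function from the hill-climbing example; the
cooling schedule and neighbor distribution are illustrative choices.

import math, random

def height(x):
    return -(x - 3) ** 2 + 110

def simulated_annealing(x, temp=10.0, cooling=0.95, steps=200):
    for _ in range(steps):
        neighbor = x + random.uniform(-1, 1)
        delta = height(neighbor) - height(x)
        # Always accept improvements; accept worse moves with probability
        # exp(delta / temp), which shrinks as the temperature drops.
        if delta > 0 or random.random() < math.exp(delta / temp):
            x = neighbor
        temp *= cooling
    return x, height(x)

random.seed(0)
print(simulated_annealing(0.0))   # typically ends near the peak at x = 3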

13. Explain memory-bounded search techniques.

Memory-Bounded Search Techniques

Memory-bounded search techniques aim to reduce the memory requirements of traditional search
algorithms while maintaining good performance. They are particularly useful in environments with limited
memory resources.

Examples:

1. IDA* (Iterative Deepening A*):
o Combines depth-first search's space efficiency with the optimality of A*. It performs a series of
depth-first searches, each with an increasing cost limit.
2. RBFS (Recursive Best-First Search):
o An A*-like search that uses only linear space. It keeps track of the best alternative path to a node so
that it can backtrack if needed.
3. MA* (Memory-bounded A*):
o Similar to A* but uses limited memory. When memory is full, it removes the least promising nodes.
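A Python sketch of the first technique, IDA*, reusing the illustrative graph and heuristic from the A*
example above; each pass is a depth-first search bounded by an f = g + h limit that grows between
passes.

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("G", 6)],
         "C": [("G", 3)], "G": []}
h = {"A": 5, "B": 4, "C": 2, "G": 0}

def ida_star(start, goal):
    def search(node, g, bound, path):
        f = g + h[node]
        if f > bound:
            return f                   # report the smallest f that exceeded the bound
        if node == goal:
            return path
        minimum = float("inf")
        for nbr, cost in graph[node]:
            if nbr not in path:        # avoid cycles along the current path
                result = search(nbr, g + cost, bound, path + [nbr])
                if isinstance(result, list):
                    return result
                minimum = min(minimum, result)
        return minimum

    bound = h[start]
    while True:
        result = search(start, 0, bound, [start])
        if isinstance(result, list):
            return result
        if result == float("inf"):
            return None                # goal unreachable
        bound = result                 # raise the f-limit and search again

print(ida_star("A", "G"))   # ['A', 'B', 'C', 'G']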

14. Describe the bidirectional search technique with an example and its performance measures.

Bidirectional Search Technique

Bidirectional Search is a graph search algorithm that simultaneously searches from the start node and the
goal node. The search proceeds from both directions and meets in the middle, which significantly reduces
the search time.

Example: Consider finding the shortest path between nodes A and G. Bidirectional search would
explore paths from A and G concurrently until they meet at a node, say D. The combined path
A → D → G is the solution.

Performance Measures:

 Time Complexity: O(b^{d/2}), where b is the branching factor and d is the distance between the
start and goal. This is more efficient than a single-direction search.
 Space Complexity: O(b^{d/2}), as it requires storage of the nodes explored from both directions.
 Completeness: Complete if both searches are guaranteed to meet.
 Optimality: If both searches use an optimal algorithm like BFS, the solution will be optimal.



15. Find the path for the given graph (figure not reproduced here) using bidirectional search, where
the start node is 0 and the goal node is 14.

Bidirectional Search Steps:

1. Start from Node 0:


o Expand node 0: {4}
o The frontier from the start side is now {4}.
2. Start from Node 14:
o Expand node 14: {10}
o The frontier from the goal side is now {10}.
3. Expand from Start Side:
o Expand node 4: {0, 6}
o The frontier from the start side is now {6}.
4. Expand from Goal Side:
o Expand node 10: {8, 13, 14}
o The frontier from the goal side is now {8, 13}.
5. Expand from Start Side:
o Expand node 6: {4, 5, 7}
o The frontier from the start side is now {7}.
6. Expand from Goal Side:
o Expand node 8: {7, 9, 10}
o The frontier from the goal side is now {7}.

Meeting Point:
 The search from both directions meets at node 7.

Path Construction:
 From the start side: 0 → 4 → 6 → 7
 From the goal side: 14 → 10 → 8 → 7

Combined Path:

The complete path from start (0) to goal (14) is:

0 → 4 → 6 → 7 → 8 → 10 → 14

This is the path found using bidirectional search, where the search meets at node 7.
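A runnable Python sketch of this search. Since the original graph figure is not reproduced here, the
edge list below is inferred from the expansions in the steps above.

from collections import deque

edges = [(0, 4), (4, 6), (5, 6), (6, 7), (7, 8), (8, 9), (8, 10),
         (10, 13), (10, 14)]
graph = {}
for u, v in edges:
    graph.setdefault(u, []).append(v)
    graph.setdefault(v, []).append(u)

def bidirectional_bfs(start, goal):
    front, back = {start: None}, {goal: None}     # parent maps per direction
    q1, q2 = deque([start]), deque([goal])
    while q1 and q2:
        for q, seen, other in ((q1, front, back), (q2, back, front)):
            node = q.popleft()                    # expand one node per side
            for nbr in graph[node]:
                if nbr not in seen:
                    seen[nbr] = node
                    q.append(nbr)
                    if nbr in other:              # the two frontiers meet here
                        return rebuild(nbr, front, back)
    return None

def rebuild(meet, front, back):
    path, n = [], meet
    while n is not None:                          # walk back to the start
        path.append(n)
        n = front[n]
    path.reverse()
    n = back[meet]
    while n is not None:                          # walk on to the goal
        path.append(n)
        n = back[n]
    return path

print(bidirectional_bfs(0, 14))   # [0, 4, 6, 7, 8, 10, 14]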



16. Find the BFS and DFS traversals for the given graph.

Graph Structure:

        A
       / \
      B   C
     /   / \
    D   E   F

Breadth-First Search (BFS)

BFS Traversal:

 BFS explores all the nodes at the present depth level before moving on to the nodes at the
next depth level. It uses a queue data structure.

1. Start with node A.


2. Visit its neighbors B and C.
3. Visit the children of B and C, which are D, E, and F.

Order:

 A→B→C→D→E→F

Depth-First Search (DFS)

DFS Traversal:

 DFS explores as far down a branch as possible before backtracking to explore other
branches. It uses a stack data structure (or recursion).

1. Start with node A.


2. Go deep into the leftmost branch: A → B → D.
3. Backtrack and explore the next branch from B: No more nodes to explore, so backtrack
to A.
4. Explore the next branch from A: A → C → E → F.

Order:

 A→B→D→C→E→F
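To verify, the same queue-based and stack-based sketches from question 7 can be applied to this tree:

from collections import deque

tree = {"A": ["B", "C"], "B": ["D"], "C": ["E", "F"],
        "D": [], "E": [], "F": []}

def bfs(start):
    order, queue = [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        queue.extend(tree[node])
    return order

def dfs(start):
    order, stack = [], [start]
    while stack:
        node = stack.pop()
        order.append(node)
        stack.extend(reversed(tree[node]))
    return order

print(bfs("A"))   # ['A', 'B', 'C', 'D', 'E', 'F']
print(dfs("A"))   # ['A', 'B', 'D', 'C', 'E', 'F']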
