AI Question Bank
1. What do you mean by Artificial Intelligence and also describe the foundations of AI?
Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed
to think, learn, and make decisions. AI systems are designed to perform tasks that typically require human
intelligence, such as understanding natural language, recognizing patterns, solving problems, and making
decisions. AI can be categorized into narrow AI, which is designed for specific tasks, and general AI, which
would theoretically have the ability to perform any intellectual task that a human can.
Foundations of AI
1. Mathematics: Provides tools for reasoning and problem-solving, such as logic, probability, and
statistics.
2. Computer Science: Offers the computational power and algorithms needed to build and run AI
systems.
3. Psychology and Cognitive Science: Help in understanding how humans think and learn, which
inspires the development of AI models.
4. Neuroscience: Contributes insights into how the brain functions, informing the design of neural
networks and other AI technologies.
5. Linguistics: Assists in the development of natural language processing, enabling machines to
understand and generate human language.
6. Philosophy: Raises ethical and existential questions about the nature of intelligence, consciousness,
and the implications of AI.
The history of AI can be traced back to ancient myths and stories about artificial beings endowed with
intelligence. However, modern AI began in the mid-20th century with key milestones such as Alan Turing's
1950 paper "Computing Machinery and Intelligence" (which proposed the Turing Test), the 1956 Dartmouth
Conference where John McCarthy coined the term "Artificial Intelligence", the expert systems boom of the
1970s and 1980s, IBM's Deep Blue defeating world chess champion Garry Kasparov in 1997, and the deep
learning breakthroughs of the 2010s.
Benefits:
1. Automation: AI can automate repetitive tasks, improving efficiency and reducing human error.
2. Enhanced Decision-Making: AI can analyze large amounts of data quickly, helping in making
informed decisions.
3. Personalization: AI powers personalized recommendations in e-commerce, entertainment, and
social media.
4. Healthcare: AI aids in medical diagnostics, drug discovery, and personalized treatment plans.
5. Economic Growth: AI-driven innovation can boost productivity and create new industries.
Risks:
1. Job Displacement: Automation powered by AI could lead to job losses in certain industries.
2. Bias and Discrimination: AI systems can perpetuate or even amplify biases present in training data.
3. Security Risks: AI can be used in cyberattacks, creating sophisticated threats.
4. Loss of Control: There is a concern that advanced AI systems might act unpredictably or beyond
human control.
5. Ethical Concerns: The use of AI in areas like surveillance, warfare, and decision-making raises
significant ethical questions.
Agent: An agent is an entity that perceives its environment through sensors and acts upon that environment
using actuators. An agent can be anything that can make decisions and perform actions, such as a robot, a
software program, or a human.
Environment: The environment is the external world in which the agent operates. It provides the context
for the agent's actions and includes all external factors that influence the agent's behavior.
Types of Agents:
1. Simple Reflex Agents: These agents make decisions based on current percepts only, ignoring
history (see the sketch after this list).
2. Model-Based Reflex Agents: They maintain an internal state based on the history of percepts to
make decisions.
3. Goal-Based Agents: These agents act to achieve specific goals, considering future states and
consequences of actions.
4. Utility-Based Agents: These agents not only aim to achieve goals but also maximize their
performance measure, often expressed in terms of utility functions.
5. Learning Agents: These agents can improve their performance over time by learning from their
experiences.
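To make the first type concrete, here is a minimal simple reflex agent in Python for the classic two-square
vacuum world. The condition-action rules and the function name are illustrative assumptions, not taken from
the question bank; the point is that the agent maps the current percept directly to an action with no memory.

# A minimal simple reflex agent for the two-square vacuum world.
# The condition-action rules below are illustrative, not from the question bank.
def simple_reflex_vacuum_agent(percept):
    location, status = percept          # percept = (current square, its state)
    if status == "Dirty":
        return "Suck"                   # rule fires on the current percept only
    if location == "A":
        return "Right"                  # nothing to clean here: move on
    return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))  # Left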
Types of Environments:
1. Fully Observable vs. Partially Observable: In fully observable environments, the agent has access
to the complete state of the environment at any time. In partially observable environments, the agent
only has limited information.
2. Deterministic vs. Stochastic: In deterministic environments, the next state of the environment is
fully determined by the current state and the agent's actions. In stochastic environments, there is
some randomness involved.
3. Episodic vs. Sequential: In episodic environments, the agent's experiences are divided into episodes
that are independent of each other. In sequential environments, the current action can affect future
decisions.
4. Static vs. Dynamic: In static environments, the environment does not change while the agent is
deciding on an action. In dynamic environments, the environment can change during the agent's
decision-making process.
5. Discrete vs. Continuous: In discrete environments, there are a finite number of distinct states,
actions, and percepts. In continuous environments, the state and actions can vary across a continuous
range.
An Intelligent Agent is an agent that can autonomously make decisions, adapt to changes, and learn from
experiences to improve its performance over time. It operates in complex environments and can handle
uncertainty.
Simple reflex agents
These agents respond to stimuli based on pre-programmed rules and don't consider past or future
consequences.
Model-based reflex agents
These agents are similar to simple reflex agents, but they maintain an internal state that gives them a
more comprehensive understanding of their environment.
Goal-based agents
Also known as rational agents, these agents use goals to describe desirable situations and choose
actions to achieve those goals.
Utility-based agents
These agents are similar to goal-based agents, but they can also measure utility. They can rate
potential scenarios based on desired results and choose actions to maximize the outcome.
Learning agents
These agents get smarter over time as they learn from their experiences and acquire more knowledge.
6. Differentiate between Informed and Uninformed Search Techniques.
1. Definition: Informed search techniques use problem-specific knowledge to find solutions more
efficiently; uninformed techniques do not use problem-specific knowledge and explore the search
space systematically.
2. Knowledge Utilized: Informed search utilizes heuristics or additional information about the problem
to guide the search; uninformed search uses no heuristics or additional problem-specific information.
3. Example Algorithms: Informed: A* Search, Greedy Best-First Search, Hill Climbing. Uninformed:
Breadth-First Search (BFS), Depth-First Search (DFS), Uniform Cost Search.
4. Efficiency: Informed search is often more efficient because heuristics focus the search in promising
directions; uninformed search can be less efficient, exploring large portions of the search space
without guidance.
5. Optimality: Informed search may not always guarantee an optimal solution unless combined with
specific heuristics (e.g., A* with an admissible heuristic); among uninformed techniques, algorithms
like Uniform Cost Search guarantee optimality if the path cost is non-negative.
6. Completeness: Informed search is generally complete if the heuristic is admissible (i.e., does not
overestimate the cost); uninformed search is generally complete, provided the search space is finite
and the algorithm does not get stuck in infinite loops.
7. Space Complexity: Informed search can be high due to storing heuristic information and maintaining
priority queues (e.g., A* requires storing nodes and their costs); some uninformed techniques need
less space (e.g., DFS has lower space complexity compared to BFS).
8. Search Direction: Informed search is directed towards the goal using heuristic estimates (e.g., A*
uses f(n) = g(n) + h(n)); uninformed search explores in all directions uniformly, without
directionality (e.g., BFS explores level by level).
7. Explain the Breadth First Search and Depth First Search searching techniques with an example. Also
compare their time and space complexity.
Description:
BFS explores all the nodes at the present depth level before moving on to the nodes at the next depth level.
It uses a queue data structure to keep track of the nodes that need to be explored.
        A
      / | \
     B  C  D
    /|  |  |\
   E F  G  H I
BFS Traversal: A → B → C → D → E → F → G → H → I
Time Complexity:
O(b^d), where b is the branching factor (average number of child nodes for each node) and d is the depth
of the shallowest solution.
Space Complexity:
O(b^d), since it needs to store all the nodes at the current level.
Description:
DFS explores as far down a branch as possible before backtracking to explore other branches.
It uses a stack data structure (can be implemented using recursion) to keep track of nodes.
DFS Traversal: A → B → E → F → C → G → D → H → I
Time Complexity:
O(b^m), where m is the maximum depth of the search tree.
Space Complexity:
O(b·m), since only the nodes along the current path (and their unexplored siblings) need to be stored.
Comparison:
BFS is optimal if all step costs are equal, as it finds the shortest path first. DFS is not optimal.
BFS consumes more memory than DFS.
DFS can get stuck in an infinite loop if the search space is infinite or very deep without cycle detection.
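To make the two traversals concrete, here is a short Python sketch of both algorithms on the tree from the
example above. The adjacency dictionary simply encodes that tree; on general graphs (with cycles), a visited
set would also be needed to avoid revisiting nodes.

from collections import deque

# Adjacency lists encoding the example tree above.
graph = {
    "A": ["B", "C", "D"],
    "B": ["E", "F"],
    "C": ["G"],
    "D": ["H", "I"],
    "E": [], "F": [], "G": [], "H": [], "I": [],
}

def bfs(graph, start):
    """Breadth-first traversal: FIFO queue, level by level."""
    order, queue = [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        queue.extend(graph[node])
    return order

def dfs(graph, start):
    """Depth-first traversal: explicit LIFO stack (recursion works too)."""
    order, stack = [], [start]
    while stack:
        node = stack.pop()
        order.append(node)
        stack.extend(reversed(graph[node]))  # reversed keeps left-to-right order
    return order

print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']
print(dfs(graph, "A"))  # ['A', 'B', 'E', 'F', 'C', 'G', 'D', 'H', 'I']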
8. Describe Depth limited search with example and its performance measures.
Description:
DLS is a variant of DFS that limits the depth to which the search can go. This avoids infinite loops in deep or
infinite search spaces.
Nodes H and I are not explored because they are beyond the depth limit.
Performance Measures:
Completeness: Complete only if the depth limit l is at least the depth of the shallowest solution.
Optimality: Not optimal in general.
Time Complexity: O(b^l), where l is the depth limit.
Space Complexity: O(b·l).
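A rough Python sketch of the depth-limited idea, reusing the graph dictionary from the question 7 sketch;
the 'found'/'cutoff'/'failure' return values follow the standard textbook convention:

def depth_limited_search(graph, node, goal, limit):
    """DFS that refuses to expand nodes below the depth limit; returns
    'found', 'cutoff' (the limit was hit somewhere), or 'failure'."""
    if node == goal:
        return "found"
    if limit == 0:
        return "cutoff"
    cutoff = False
    for child in graph[node]:
        result = depth_limited_search(graph, child, goal, limit - 1)
        if result == "found":
            return "found"
        if result == "cutoff":
            cutoff = True
    return "cutoff" if cutoff else "failure"

# With the question 7 tree and limit 1, goal "H" (at depth 2) is cut off:
# depth_limited_search(graph, "A", "H", 1) -> "cutoff"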
Definition: A heuristic search uses a heuristic function h(n) that estimates the cost to reach the goal from
node n. This estimate guides the search towards the most promising paths, improving efficiency.
A* Algorithm
Description:
A* is a popular heuristic search algorithm that combines the actual cost to reach a node, g(n), with the
estimated cost to reach the goal from that node, h(n), into an evaluation function f(n) = g(n) + h(n).
Example: Consider a graph where you want to find the shortest path from node A to node G, given a step
cost for each edge and a heuristic estimate h(n) for each node.
A* explores paths in order of increasing f(n) until it finds the path that minimizes this function, leading to
the goal.
Performance Measures:
Completeness: A* is complete if the branching factor is finite and every arc has a positive cost.
Optimality: A* is optimal if the heuristic function h(n) is admissible (never overestimates the cost to
reach the goal).
Time Complexity: Depends on the heuristic; in the worst case it can be O(b^d).
Space Complexity: Typically O(b^d), as it needs to store all generated nodes.
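A compact A* sketch in Python; the graph, step costs, and heuristic values below are invented for
illustration, since the question's cost table is not given:

import heapq

def a_star(graph, h, start, goal):
    """A* search: always expand the frontier node with the lowest
    f(n) = g(n) + h(n); graph maps node -> [(neighbor, step_cost)]."""
    frontier = [(h[start], 0, start, [start])]      # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbor, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = g2
                heapq.heappush(frontier, (g2 + h[neighbor], g2, neighbor, path + [neighbor]))
    return None, float("inf")

# Invented graph, step costs, and admissible heuristic for illustration:
graph = {"A": [("B", 1), ("C", 4)], "B": [("G", 5)], "C": [("G", 1)], "G": []}
h = {"A": 3, "B": 4, "C": 1, "G": 0}
print(a_star(graph, h, "A", "G"))  # (['A', 'C', 'G'], 5)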
Local Search Algorithms focus on finding solutions by iteratively improving a single candidate solution.
They do not explore the entire search space but move to neighboring states based on some criterion.
Example Workflow (Hill Climbing):
1. Start from an initial candidate solution.
2. Evaluate the neighboring states of the current state.
3. Move to the best neighbor if it improves on the current state.
4. Repeat until no neighbor is better; the current state is then returned.
Performance Measures:
Completeness: Not complete; the search can get stuck in local maxima or on plateaus.
Optimality: Not optimal in general.
Space Complexity: O(1), since only the current state (and its neighbors) is kept in memory.
The state space diagram of the Hill Climbing algorithm (figure omitted) shows regions such as the Global
Maximum, Local Maximum, Plateau, and Ridge, and illustrates how the algorithm moves towards higher
states (elevations). The following problems can occur during the search:
1. Local Maximum:
o The algorithm may reach a peak that is higher than the surrounding area but lower than the global
maximum, causing it to stop without finding the best possible solution.
2. Plateau:
o A flat region in the state space where there is no gradient to guide the search. The algorithm may
wander without making progress.
3. Ridge:
o A narrow path leading to higher ground. If the search path is not perfectly aligned with the ridge, the
algorithm might fall off and miss the optimal solution.
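The basic loop can be sketched as follows; this is a generic steepest-ascent version, and the objective
function and neighbor generator are arbitrary stand-ins:

def hill_climbing(initial, neighbors, value):
    """Steepest-ascent hill climbing: repeatedly move to the best
    neighbor; stop at a state no neighbor improves on (a local maximum)."""
    current = initial
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current
        current = best

# Toy illustration: maximize f(x) = -(x - 3)^2 over the integers.
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climbing(0, step, f))  # -> 3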
Simulated Annealing is an optimization technique that helps to overcome the limitations of the Hill
Climbing algorithm, such as getting stuck in local maxima. It is inspired by the annealing process in
metallurgy, where a material is heated and then slowly cooled to reduce defects and find a stable structure.
Key Concepts:
The algorithm accepts not only improvements but also some worse solutions with a probability that decreases
over time. This probability is controlled by a "temperature" parameter.
As the temperature decreases, the algorithm becomes more selective, eventually converging to a solution.
Simulated Annealing can escape local maxima and explore the search space more effectively.
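A minimal sketch of this acceptance rule; the geometric cooling schedule, the objective, and the parameter
values are illustrative assumptions:

import math
import random

def simulated_annealing(initial, neighbor, value, t0=10.0, cooling=0.95, steps=1000):
    """Hill climbing that sometimes accepts worse moves: a downhill step of
    size delta is accepted with probability exp(delta / T), and the
    temperature T decays each step, making the search more selective."""
    current, t = initial, t0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = value(candidate) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate
        t *= cooling
    return current

# Toy run on the same objective as the hill climbing sketch.
f = lambda x: -(x - 3) ** 2
print(simulated_annealing(0, lambda x: x + random.choice([-1, 1]), f))  # usually 3 or nearby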
Memory-bounded search techniques aim to reduce the memory requirements of traditional search
algorithms while maintaining good performance. They are particularly useful in environments with limited
memory resources.
Examples: Iterative Deepening A* (IDA*), Recursive Best-First Search (RBFS), and Simplified
Memory-Bounded A* (SMA*).
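As a flavor of how such techniques trade time for memory, here is a rough IDA* sketch for tree-shaped
search spaces: it re-searches with an increasing f-cost bound instead of keeping A*'s large frontier in
memory. The usage line reuses the illustrative graph and heuristic from the A* sketch above.

def ida_star(graph, h, start, goal):
    """Iterative Deepening A*: depth-first search bounded by an f-cost
    threshold; each iteration raises the bound to the smallest f value
    that exceeded it, so memory use stays linear in the path length."""
    def search(node, g, bound, path):
        f = g + h[node]
        if f > bound:
            return f, None               # exceeded: candidate for the next bound
        if node == goal:
            return f, path
        minimum = float("inf")
        for neighbor, cost in graph[node]:
            t, found = search(neighbor, g + cost, bound, path + [neighbor])
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None

    bound = h[start]
    while True:
        bound, found = search(start, 0, bound, [start])
        if found is not None:
            return found
        if bound == float("inf"):
            return None                  # goal unreachable

# Reusing the illustrative graph and heuristic from the A* sketch:
# ida_star(graph, h, "A", "G") -> ['A', 'C', 'G']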
14. Describe Bidirectional search technique with example and its performance measures.
Bidirectional Search is a graph search algorithm that simultaneously searches from the start node and the
goal node. The search proceeds from both directions and meets in the middle, which significantly reduces
the search time.
Example: Consider finding the shortest path between nodes A and G. Bidirectional search would explore
paths from A and G concurrently until they meet at a node, say D. The combined path A → D → G is the
solution.
Performance Measures:
Time Complexity: O(b^(d/2)), where b is the branching factor and d is the distance between the start and
goal. This is more efficient than a single-direction search, which takes O(b^d).
Space Complexity: O(b^(d/2)), as it requires storage of the nodes explored from both directions.
Worked example (graph figure omitted), searching from node 0 to node 14:
Meeting Point:
The search from both directions meets at node 7.
Path Construction:
From the start side: 0 → 4 → 6 → 7
From the goal side: 14 → 10 → 8 → 7
Combined Path:
0 → 4 → 6 → 7 → 8 → 10 → 14
This is the path found using bidirectional search, where the search meets at node 7.
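The mechanics can be sketched as two alternating breadth-first frontiers. The chain graph below is a
hypothetical stand-in consistent with the example's combined path, since the full example graph is not
given:

from collections import deque

def bidirectional_bfs(graph, start, goal):
    """Grow a BFS frontier from each end; when a node appears in both
    searches, stitch the start-side and goal-side half-paths together."""
    if start == goal:
        return [start]
    parents_s, parents_g = {start: None}, {goal: None}
    frontier_s, frontier_g = deque([start]), deque([goal])

    def expand(frontier, parents, other_parents):
        node = frontier.popleft()
        for nb in graph[node]:
            if nb not in parents:
                parents[nb] = node
                if nb in other_parents:
                    return nb            # the two searches meet here
                frontier.append(nb)
        return None

    while frontier_s and frontier_g:
        meet = expand(frontier_s, parents_s, parents_g)
        if meet is None:
            meet = expand(frontier_g, parents_g, parents_s)
        if meet is not None:
            half, node = [], meet        # walk back from the meeting node to start
            while node is not None:
                half.append(node)
                node = parents_s[node]
            path = half[::-1]
            node = parents_g[meet]       # then forward from the meeting node to goal
            while node is not None:
                path.append(node)
                node = parents_g[node]
            return path
    return None

# Hypothetical chain graph consistent with the example's combined path:
g = {0: [4], 4: [0, 6], 6: [4, 7], 7: [6, 8], 8: [7, 10], 10: [8, 14], 14: [10]}
print(bidirectional_bfs(g, 0, 14))  # [0, 4, 6, 7, 8, 10, 14]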
Graph Structure:
     A
    / \
   B   C
  /   / \
 D   E   F
BFS Traversal:
BFS explores all the nodes at the present depth level before moving on to the nodes at the
next depth level. It uses a queue data structure.
Order:
A→B→C→D→E→F
DFS Traversal:
DFS explores as far down a branch as possible before backtracking to explore other
branches. It uses a stack data structure (or recursion).
Order:
A→B→D→C→E→F
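Reusing the bfs/dfs helpers from the question 7 sketch (their names are assumed from that sketch), this
graph can be encoded and traversed the same way:

# Assumes the bfs/dfs functions from the question 7 sketch are in scope.
graph2 = {"A": ["B", "C"], "B": ["D"], "C": ["E", "F"], "D": [], "E": [], "F": []}
print(bfs(graph2, "A"))  # ['A', 'B', 'C', 'D', 'E', 'F']
print(dfs(graph2, "A"))  # ['A', 'B', 'D', 'C', 'E', 'F']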