Nov Dec 2023 Solutions (AI)
P7553 [Total No. of Pages : 2
[6180]-63
T.E. (Artificial Intelligence & Data Science)
ARTIFICIAL INTELLIGENCE
(2019 Pattern) (Semester-I) (310253)
Q1) a) Explain Min-Max and Alpha-Beta pruning algorithms for adversarial search with example. [9]
b) Define and explain Constraint satisfaction problem. [9]
Min-Max Algorithm:
The Min-Max algorithm is used in adversarial search to make decisions for a game where two
opponents are playing against each other, such as in chess or tic-tac-toe. The goal of this
algorithm is to select the best move for a player assuming that the opponent will also play
optimally to minimize the player's score.
The algorithm works by exploring all possible moves and selecting the one that maximizes the
player's score while minimizing the opponent’s score.
Maximizing Player (Player 1): The player is trying to maximize their score.
Minimizing Player (Player 2): The opponent is trying to minimize the player's score.
1. Tree Structure: A search tree is constructed where each node represents a game state,
and each edge represents a possible move.
2. Evaluation: At the leaf nodes of the tree (end game states), the evaluation function is
applied to determine the score of that particular state (positive for a win, negative for a
loss, and zero for a draw).
3. Backpropagation:
o If it is the maximizing player’s turn, the value of the node is the maximum value
of its child nodes.
o If it is the minimizing player’s turn, the value of the node is the minimum value of
its child nodes.
4. Selection: The algorithm selects the move (root node’s child) that leads to the highest
evaluation value.
Example (Tic-Tac-Toe):
Consider a simple Tic-Tac-Toe game with the following simplified state tree. Assume Player 1
(Max) is trying to maximize the score, and Player 2 (Min) is trying to minimize it.
                MAX
              /     \
          MIN         MIN
         /   \       /   \
      MAX    MAX   MAX    MAX
      / \    / \   / \    / \
    (1) (0)(1) (0)(-1)(0)(0)(-1)
At the leaf nodes, the evaluation values represent the outcome of the game:
o 1: Win for Player 1
o 0: Draw
o -1: Win for Player 2
The algorithm backpropagates these values up the tree:
o MIN nodes take the minimum of the values of their children.
o MAX nodes take the maximum of the values of their children.
Finally, Player 1 will choose the move with the maximum value, and Player 2 will choose
the move with the minimum value.
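As a small illustration, here is a minimal Python sketch of the Min-Max value computation on the example tree above (the nested-list encoding of the tree is an assumption made for this sketch):

# Minimal minimax sketch: leaves are integers, inner nodes are lists.
def minimax(node, maximizing):
    if isinstance(node, int):          # leaf: return its evaluation value
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# The eight leaves of the example tree, grouped under MAX, then MIN nodes.
tree = [[[1, 0], [1, 0]], [[-1, 0], [0, -1]]]
print(minimax(tree, True))  # 1: the best outcome MAX can guarantee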
Alpha-Beta Pruning is an optimization technique for the Min-Max algorithm that helps to
reduce the number of nodes evaluated in the search tree. It "prunes" branches that do not need to
be explored because they cannot affect the final decision.
Alpha: The best value found so far along the path to the root for the maximizer.
Beta: The best value found so far along the path to the root for the minimizer.
1. Pruning Condition:
o During the exploration of the tree, if at any point, the value of a node being
explored is greater than or equal to Beta (for maximizing) or less than or equal
to Alpha (for minimizing), we stop exploring that branch because it cannot affect
the final outcome.
2. Alpha is updated when a better (larger) value is found by the maximizer.
3. Beta is updated when a better (smaller) value is found by the minimizer.
4. Pruning occurs when the current value cannot change the outcome because the branch
has already been shown to lead to a worse outcome for the player.
Consider the same tree as the Min-Max example, but now apply Alpha-Beta Pruning.
For instance, suppose Player 2 (MIN) has already found a move on the current path with a score of -1 (Beta = -1). While Player 1 (MAX) explores another branch, as soon as MAX can guarantee a value greater than or equal to -1 (Alpha >= Beta), the remaining moves in that branch need not be explored: MIN already has an option at least as good for it, so it would never allow play to reach such a state.
By pruning unnecessary branches, Alpha-Beta pruning reduces the number of nodes to be
evaluated, making the algorithm more efficient.
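A hedged sketch of the same search with Alpha-Beta pruning added (again assuming the nested-list tree encoding from the Min-Max sketch):

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, int):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:       # beta cutoff: MIN would avoid this branch
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:           # alpha cutoff: MAX would avoid this branch
            break
    return value

tree = [[[1, 0], [1, 0]], [[-1, 0], [0, -1]]]
print(alphabeta(tree, True))  # 1: same answer, with some leaves never visited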
A Constraint Satisfaction Problem (CSP) is a problem where the goal is to find a set of values
for a set of variables that satisfy a set of constraints. Each variable in the problem must be
assigned a value from a specified domain, and the solution must satisfy all constraints defined for
the problem.
1. Variables: These are the elements of the problem that need to be assigned values.
o Example: In a map-coloring problem, the variables could be the regions of a
map.
2. Domains: Each variable has a set of possible values it can take.
o Example: In the map-coloring problem, the domain of each region could be {Red,
Green, Blue}.
3. Constraints: These are the restrictions or rules that the variables must satisfy.
Constraints can be unary (involving only one variable), binary (involving two variables),
or higher-order (involving more than two variables).
o Example: In the map-coloring problem, a constraint could be that adjacent regions
cannot have the same color.
4. Solution: A solution to the CSP is a complete assignment of values to variables such that
all constraints are satisfied.
Solving CSPs:
1. Backtracking: A brute-force search algorithm where assignments are made one at a time; if a conflict arises, it backtracks and tries another assignment (see the sketch after this list).
2. Constraint Propagation: Techniques like arc-consistency that simplify the problem by
reducing the domain of variables as constraints are applied.
3. Heuristic Search: Using heuristics such as minimum remaining values (MRV) and
degree heuristic to decide which variable to assign first.
4. Local Search: Techniques like min-conflicts for large, complex CSPs that may not have
a simple backtracking solution.
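As a hedged illustration of the backtracking technique from item 1, the following Python sketch colors a small map (the region names and adjacencies are assumptions made for this example):

NEIGHBORS = {"R1": ["R2", "R3"], "R2": ["R1", "R3"],
             "R3": ["R1", "R2", "R4"], "R4": ["R3"]}
COLORS = ["Red", "Green", "Blue"]

def backtrack(assignment):
    if len(assignment) == len(NEIGHBORS):
        return assignment                       # all variables assigned
    var = next(v for v in NEIGHBORS if v not in assignment)
    for color in COLORS:
        # Constraint: adjacent regions must not share a color.
        if all(assignment.get(n) != color for n in NEIGHBORS[var]):
            assignment[var] = color
            result = backtrack(assignment)
            if result:
                return result
            del assignment[var]                 # conflict: backtrack
    return None

print(backtrack({}))
# {'R1': 'Red', 'R2': 'Green', 'R3': 'Blue', 'R4': 'Red'}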
This solution satisfies all constraints (no two adjacent regions have the same color).
Conclusion:
CSPs are fundamental in AI because many real-world problems can be formulated as CSPs,
including scheduling, resource allocation, and puzzle solving (e.g., Sudoku). Solving a CSP
efficiently requires both good algorithms (like backtracking or constraint propagation) and
heuristics to make the search process more manageable.
OR
Q2) a) Explain with example graph coloring problem. [9]
Graph Coloring is a problem of assigning labels (called colors) to the vertices of a graph in
such a way that no two adjacent vertices share the same color. This problem is a classic
Constraint Satisfaction Problem (CSP) where the variables are the vertices of the graph, the
domains are the available colors, and the constraints are that adjacent vertices must have
different colors.
Example:
A
/ \
B C
\ /
D
Vertices: A, B, C, D
Edges: (A, B), (A, C), (B, D), (C, D)
The goal is to assign colors to the vertices such that no two adjacent vertices share the same
color.
Step-by-Step Solution:
1. Assign A = Color 1.
2. B is adjacent to A, so assign B = Color 2.
3. C is adjacent to A but not to B, so C can also take Color 2.
4. D is adjacent to B and C (both Color 2), so assign D = Color 1.
Final coloring:
A = Color 1
B = Color 2
C = Color 2
D = Color 1
This is a valid coloring, as no adjacent vertices share the same color. The number of colors used
in this case is 2.
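For comparison, here is a greedy coloring sketch of the same A-B-C-D graph (the visiting order is an assumption; greedy coloring is not optimal in general, but it suffices for this 2-colorable graph):

GRAPH = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}

coloring = {}
for vertex in GRAPH:                    # visit vertices in order A, B, C, D
    used = {coloring[n] for n in GRAPH[vertex] if n in coloring}
    color = 1
    while color in used:                # smallest color unused by neighbors
        color += 1
    coloring[vertex] = color

print(coloring)  # {'A': 1, 'B': 2, 'C': 2, 'D': 1} -> two colors suffice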
Tic-Tac-Toe is a two-player game where players take turns marking an empty cell in a 3x3 grid
with their symbol (X or O). The player who first gets three of their symbols in a row, column, or
diagonal wins. If the grid is filled without a winner, the game is a draw.
To solve the Tic-Tac-Toe problem using AI, we typically use minimax search with alpha-beta
pruning. Here's how AI is applied:
1. Minimax Algorithm:
o The Minimax algorithm is a decision-making algorithm that computes the
optimal move for a player assuming that both players play optimally.
o It generates a game tree that represents all possible moves and their outcomes.
o Maximizing Player: This is the player (AI) trying to maximize their score,
usually marked as 'X'.
o Minimizing Player: This is the opponent trying to minimize the AI's score,
usually marked as 'O'.
Example:
Consider the following position with X (the AI) to move:
X | O | X
---------
O | X | O
---------
  |   |
Playing either bottom corner completes a diagonal for X, so minimax evaluates those moves to +1 and the AI selects one of them.
2. Recursive Exploration: Generate all possible moves and simulate the game until a
terminal state is reached (win, draw, or loss).
3. Evaluating Moves:
o If it's the AI's turn, choose the move that maximizes the evaluation score (+1 for
win, 0 for draw, -1 for loss).
o If it's the opponent’s turn, choose the move that minimizes the evaluation score
(opponent tries to block the AI).
4. Alpha-Beta Pruning: As the tree is explored, branches are pruned if they can’t affect the
outcome based on current Alpha and Beta values.
The algorithm will ultimately choose the best move for the AI based on the evaluations, and the
game will end either in a win for AI, win for the opponent, or a draw.
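A minimal sketch of the terminal evaluation described in step 3 (+1 for a win, 0 for a draw, -1 for a loss), assuming the board is stored as a flat list of nine cells:

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),     # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),     # columns
         (0, 4, 8), (2, 4, 6)]                # diagonals

def evaluate(board):
    for a, b, c in LINES:
        if board[a] == board[b] == board[c] != " ":
            return 1 if board[a] == "X" else -1
    return 0                                   # draw (or game still ongoing)

# The example position after X plays the bottom-left corner:
board = ["X", "O", "X",
         "O", "X", "O",
         "X", " ", " "]
print(evaluate(board))  # 1: X completes the (2, 4, 6) diagonal and wins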
Advantages of AI in Tic-Tac-Toe:
Optimal Play: The AI will always play optimally, ensuring either a win or a draw (never
a loss).
Efficiency: With Alpha-Beta pruning, the search space is reduced, making the algorithm
more efficient.
Conclusion:
AI techniques, such as Minimax and Alpha-Beta Pruning, are very effective in solving
deterministic, two-player games like Tic-Tac-Toe. These algorithms ensure that the AI makes the
best possible move at every stage of the game, leading to an optimal outcome.
Q3) a) Explain Wumpus world environment giving its PEAS description. [9]
b) Explain different inference rules in FOL with suitable example. [8]
The Wumpus World is a popular problem in artificial intelligence (AI) used to illustrate the concepts of logic-based reasoning and agent decision-making. It consists of a grid environment in which an agent must explore a series of rooms in search of gold while avoiding dangers such as the Wumpus (a monster) and pits (holes). The agent must use its percepts to determine its actions.
The PEAS description provides a framework to describe the environment of an agent, outlining
Performance Measure, Environment, Actuators, and Sensors. Here's the PEAS description
for the Wumpus World environment:
1. Performance Measure:
o The agent's goal is to find the gold and avoid the Wumpus and pits.
o The agent is rewarded for finding the gold and penalized for falling into a pit or
being eaten by the Wumpus.
o The performance measure might be calculated as:
+100 for finding the gold.
-1000 for being eaten by the Wumpus.
-100 for falling into a pit.
-1 for each move (to encourage the agent to find a solution efficiently).
2. Environment:
o The Wumpus World consists of a grid (typically 4x4) of rooms.
o Each room can contain:
Gold: The agent’s objective is to find this.
Wumpus: A deadly creature that kills the agent if it moves into its room.
Pit: A hole that causes the agent to fall and die if it steps into it.
Breeze: A percept that indicates the presence of a nearby pit.
Stench: A percept that indicates the presence of the Wumpus in an
adjacent room.
o The agent has limited knowledge of the world, based only on the percepts it gathers from the rooms it visits.
3. Actuators:
o The agent can perform the following actions:
Move forward: Move to an adjacent room.
Turn left/right: Change the agent's direction.
Grab: Pick up the gold if in the same room.
Shoot: Fire an arrow to kill the Wumpus if it’s in a straight line in front of
the agent.
Climb: Exit the cave (when the agent has found the gold).
4. Sensors:
o The agent has the following sensors to detect the environment:
Breeze: Indicates that a pit is adjacent to the current room.
Stench: Indicates that the Wumpus is adjacent to the current room.
Glitter: Indicates that gold is in the current room.
Scream: Indicates that the Wumpus has been killed (this occurs when the
agent shoots an arrow).
Example (4x4 grid):
+---+---+---+---+
| G | P | P | W |
+---+---+---+---+
| P | | | |
+---+---+---+---+
| P | | | |
+---+---+---+---+
| | | | |
+---+---+---+---+
G = Gold
P = Pit
W = Wumpus
Empty cells are free of hazards.
In this scenario:
The agent needs to navigate this grid and find the gold while avoiding pits and the
Wumpus.
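As a small illustration, an agent's percept at a square is often represented as a set of flags, one per sensor (the dictionary layout here is an assumption made for this sketch):

# One entry per sensor from the list above; the values are illustrative.
percept = {"stench": True, "breeze": False, "glitter": False, "scream": False}

def active_percepts(p):
    """Return the names of the percepts sensed in the current room."""
    return [name for name, sensed in p.items() if sensed] or ["none"]

print(active_percepts(percept))  # ['stench']: the Wumpus is adjacent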
First-Order Logic (FOL), also known as Predicate Logic, extends propositional logic by
allowing the use of quantifiers and predicates. FOL is capable of expressing relationships
between objects and properties, and it provides several inference rules that allow for deriving
conclusions from given facts or premises.
The main inference rules in FOL are:
1. Universal Instantiation (UI): From ∀x P(x), infer P(c) for any constant c.
Example: From ∀x (Man(x) → Mortal(x)), infer Man(Socrates) → Mortal(Socrates).
2. Existential Instantiation (EI): From ∃x P(x), infer P(k) for a fresh constant k not used elsewhere.
Example: From ∃x Crown(x), infer Crown(K1).
3. Universal Generalization (UG): If P(c) holds for an arbitrary c, infer ∀x P(x).
4. Existential Introduction: From P(c), infer ∃x P(x).
Example: From Likes(John, IceCream), infer ∃x Likes(x, IceCream).
5. Generalized Modus Ponens: From facts that unify with the premises of an implication, infer its (suitably substituted) conclusion.
Example: From Man(Socrates) and ∀x (Man(x) → Mortal(x)), infer Mortal(Socrates).
Conclusion:
These rules allow new sentences to be derived soundly from a FOL knowledge base and form the basis of inference procedures such as forward chaining, backward chaining, and resolution.
OR
Q4) a) Write a propositional logic representation for the statements: [10]
i) “All birds fly”
ii) “Every man respect his parents”
b) Differentiate between propositional logic and First order logic. [7]
i) "All birds fly"
Propositional logic cannot directly express quantifiers such as "all," so we can only approximate the statement. Let B stand for "the entity is a bird" and F for "the entity flies"; then:
B → F
This proposition captures the idea that being a bird implies the ability to fly, but it does not explicitly range over all birds.
In First-Order Logic (FOL) the statement is expressed precisely as:
∀x (Bird(x) → Fly(x))
Where "∀x" indicates "for all x," meaning "every bird flies."
ii) "Every man respects his parents"
In FOL this can be written as:
∀x (Man(x) → ∃y (Parent(y, x) ∧ Respects(x, y)))
Where "∀x" means "for all men x" and "∃y" means "there exists a parent y."
In propositional logic, which cannot handle quantifiers and functions like ∀ (for all) and ∃ (there exists), we can only simplify: let M stand for "the entity is a man" and R for "he respects his parents," giving M → R.
To fully capture statements such as these, First-Order Logic (FOL) is better suited.
Propositional Logic (also called Sentential Logic) and First-Order Logic (FOL) are both
formal systems used for reasoning, but they differ in terms of expressiveness and complexity.
Key Differences:
1. Scope of Representation:
o Propositional Logic is limited to dealing with propositions (true or false
statements) and their logical connectives.
o First-Order Logic can represent more complex statements, including objects,
their properties, and relationships through predicates and quantifiers.
2. Quantifiers:
o Propositional Logic does not support quantifiers (i.e., it cannot express
statements like "for all" or "there exists").
o First-Order Logic uses quantifiers to express general or existential statements,
allowing it to reason about all objects (universal quantifier) or at least one object
(existential quantifier).
3. Level of Abstraction:
o Propositional Logic operates at a lower level of abstraction, dealing with simple
statements.
o First-Order Logic provides a higher level of abstraction, allowing for a deeper
understanding of the structure of relationships and objects within a domain.
In conclusion, Propositional Logic is simpler and deals with fixed propositions, whereas First-
Order Logic is more powerful and flexible, allowing for reasoning about the structure and
relationships within a domain.
Q5) a) Explain Forward chaining algorithm with the help of example. [9]
b) Write and explain the steps of knowledge engineering process. [9]
Forward chaining is a data-driven inference method. It starts from the known facts, applies every rule whose premises are all satisfied, adds each rule's conclusion as a new fact, and repeats this forward search until the goal is derived or no new facts can be inferred.
Example:
Fact: A ("It is raining.")
Rules:
1. A → B (If it is raining, the ground is wet.)
2. B → C (If the ground is wet, the grass is slippery.)
Starting from A, rule 1 fires and adds B; with B known, rule 2 fires and adds C. The system has inferred "the grass is slippery" purely by moving forward from the available data.
Summary: Forward chaining moves from known facts to new conclusions, applying rules in a forward direction. It is useful for systems that need to work with available data and gradually expand their knowledge.
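A minimal forward-chaining sketch in Python (the fact and rule names are assumptions made for this example):

def forward_chain(facts, rules):
    """Apply (premises, conclusion) rules until no new fact is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)       # the rule "fires"
                changed = True
    return facts

rules = [({"A"}, "B"),    # A -> B: if it is raining, the ground is wet
         ({"B"}, "C")]    # B -> C: if the ground is wet, the grass is slippery
print(forward_chain({"A"}, rules))  # {'A', 'B', 'C'}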
Knowledge Engineering is the process of designing, building, and maintaining systems that use
knowledge, typically in the form of knowledge-based systems. The aim is to capture expert
knowledge and make it usable by machines to solve problems. The process involves acquiring,
organizing, and implementing knowledge into an AI system.
1. Knowledge Acquisition:
o Definition: Knowledge acquisition is the process of collecting and capturing the
knowledge from experts, documents, databases, and other sources.
o Activities:
Interviewing domain experts to gather expert knowledge.
Observing expert decision-making in real-world scenarios.
Extracting knowledge from existing data sources.
o Tools/Methods:
Interviews, surveys, questionnaires to collect data.
Knowledge Acquisition Tools like protocol analysis or
conceptualization tools.
2. Knowledge Representation:
o Definition: Once knowledge is acquired, it must be represented in a way that a
computer system can understand and process. This involves choosing the correct
framework or formalism to encode the knowledge.
o Activities:
Identifying and defining the relevant concepts and relationships within the
domain.
Choosing the appropriate representation model (e.g., semantic networks,
frames, rules, ontologies).
o Tools/Methods:
Ontologies, taxonomies, frames, semantic networks, production rules.
3. Knowledge Validation:
o Definition: Knowledge validation ensures that the knowledge captured is correct,
relevant, and useful for the problem at hand.
o Activities:
Testing the knowledge base with real-world examples.
Verifying the correctness and consistency of the knowledge.
o Tools/Methods:
Cross-checking knowledge with experts.
Implementing test cases and evaluating performance.
4. Knowledge Modeling:
o Definition: Knowledge modeling involves structuring and organizing knowledge
in a way that is efficient for the AI system to reason about.
o Activities:
Creating models (e.g., decision trees, rule-based systems) that describe the
relationships and behaviors within the domain.
Organizing the knowledge in a hierarchical manner to simplify access and
inference.
o Tools/Methods:
Entity-relationship diagrams, conceptual models, knowledge graphs.
5. Knowledge Implementation:
o Definition: This is the process of converting the represented knowledge into a
form that can be used by a software system (i.e., the AI system).
o Activities:
Programming the knowledge into a knowledge-based system (expert
systems, decision support systems, etc.).
Implementing the reasoning engine to apply the rules or models.
o Tools/Methods:
Expert system shells, machine learning algorithms, neural networks.
6. Knowledge Testing and Refinement:
o Definition: After implementing the knowledge in the system, it is important to
evaluate the system's performance and refine the knowledge to improve accuracy
and efficiency.
o Activities:
Testing the system’s output with real-world data.
Refining the knowledge base by removing inconsistencies or adding
missing information.
o Tools/Methods:
Performance evaluation metrics.
Expert feedback to improve accuracy.
7. Knowledge Maintenance:
o Definition: Knowledge maintenance involves keeping the knowledge base up-to-
date as the domain changes over time.
o Activities:
Regularly updating the knowledge base to reflect new information or
changes in the domain.
Re-evaluating and refining the knowledge base as needed.
o Tools/Methods:
Automated knowledge management systems.
Continuous feedback loops from system performance.
Knowledge engineering is a continuous and iterative process that requires gathering, organizing,
representing, and maintaining knowledge for AI systems. The primary goal is to ensure that the
AI system can apply expert knowledge in a structured and effective manner to solve complex
problems.
OR
Q6) a) Explain Backward chaining algorithm with the help of example. [9]
b) Write a short note on [9]
i) Resolution and
ii) Unification
1. Start with a Goal: The algorithm starts with a goal (hypothesis) and looks for rules that
can help achieve that goal.
2. Search for Rules: The algorithm searches for rules whose conclusions match the goal.
3. Check Premise: If the conclusion of a rule matches the goal, it then checks the premises
(conditions) of that rule.
4. Apply Rule: If the premises are not already known, it recursively checks if those
premises can be proven using other rules.
5. Repeat: This process continues until the premises are known or all the rules have been
exhausted.
Example:
Fact:
o B: "John went to bed."
Rules:
1. If John went to bed, then John is sleeping. (B → C)
2. If John is sleeping, then John is tired. (C → A)
Goal: prove A ("John is tired").
Backward chaining looks for a rule that concludes A and finds rule 2, so C becomes a subgoal. Rule 1 concludes C from B, so B becomes the next subgoal. B is a known fact, so C holds, and therefore the goal A is proven.
Summary:
Backward Chaining starts with the goal and works backward through rules to prove or
disprove the goal. It is typically used in expert systems and logic programming, where
it helps find the necessary conditions to support a conclusion.
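A minimal backward-chaining sketch matching the example above (the rule encoding is an assumption made for this sketch):

def backward_chain(goal, facts, rules, seen=frozenset()):
    """Try to prove `goal` from `facts` using (premises, conclusion) rules."""
    if goal in facts:
        return True
    if goal in seen:                    # guard against circular rules
        return False
    for premises, conclusion in rules:
        if conclusion == goal and all(
                backward_chain(p, facts, rules, seen | {goal})
                for p in premises):
            return True
    return False

rules = [({"B"}, "C"),   # B -> C: went to bed -> sleeping
         ({"C"}, "A")]   # C -> A: sleeping -> tired
print(backward_chain("A", {"B"}, rules))  # True: the goal A is proven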
i) Resolution:
Resolution is a refutation-complete inference rule used in propositional logic and first-order logic to derive conclusions from a set of clauses in conjunctive normal form. It is particularly used in logic programming and automated theorem proving.
The idea behind resolution is to combine two clauses that contain complementary literals (i.e.,
one contains a literal and the other contains its negation), and by doing so, deduce a new clause.
Resolution Rule:
Given two clauses that contain complementary literals, for example:
P ∨ Q (P or Q)
¬P ∨ R (not P or R)
we can resolve them by eliminating the complementary literals (P and ¬P) and combining the remaining literals to produce a new clause:
Q ∨ R
If we continue to resolve further with other clauses or goals, we eventually reach the empty clause (which represents a contradiction) if the goal is proven to be unsatisfiable.
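A small sketch of a single propositional resolution step (clauses represented as sets of literal strings, with "~" marking negation; this encoding is an assumption made for this example):

def resolve(c1, c2):
    """Return all resolvents of two clauses (sets of literals)."""
    resolvents = []
    for lit in c1:
        complement = lit[1:] if lit.startswith("~") else "~" + lit
        if complement in c2:
            resolvents.append((c1 - {lit}) | (c2 - {complement}))
    return resolvents

print(resolve({"P", "Q"}, {"~P", "R"}))  # [{'Q', 'R'}]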
Unification is a process used in logic and computer science (especially in logic programming
and AI) to make two logical expressions identical by finding a substitution of variables.
Unification plays a crucial role in first-order logic and is essential for reasoning, theorem
proving, and query answering.
Definition:
Unification is the process of finding a substitution (a mapping of variables to terms) that makes
two logical expressions identical.
Steps in Unification:
1. Input Expressions: Given two logical expressions, say E1 and E2, unification aims to find a substitution such that E1 becomes identical to E2.
2. Find Substitution: Unification works by replacing variables in the expressions with
constants, other variables, or more complex terms, so that both expressions become the
same.
3. Return the Substitution: If a valid substitution exists, return it. If no substitution can
make the expressions identical, then unification fails.
Example:
Consider the two expressions:
f(x, a)
f(b, y)
Unification finds the substitution:
x ↦ b
y ↦ a
Now the two expressions are unified, as both become f(b, a).
Key Points:
Unification returns the most general unifier when one exists; it fails when the expressions use different function symbols or arities, or when a variable would have to contain itself (the occurs check).
Summary:
Unification computes the substitution that makes two expressions identical. It is the core operation behind resolution in first-order logic and behind logic programming languages such as Prolog.
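A minimal unification sketch (terms are encoded as tuples, variables as strings starting with "?"; this representation and the omission of the occurs check are simplifying assumptions):

def is_variable(term):
    return isinstance(term, str) and term.startswith("?")

def unify(x, y, subst=None):
    """Return a substitution making x and y identical, or None on failure."""
    subst = {} if subst is None else subst
    if x == y:
        return subst
    if is_variable(x):
        return unify_var(x, y, subst)
    if is_variable(y):
        return unify_var(y, x, subst)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            subst = unify(xi, yi, subst)
            if subst is None:
                return None
        return subst
    return None                          # mismatched constants or arities

def unify_var(var, term, subst):
    if var in subst:
        return unify(subst[var], term, subst)
    return {**subst, var: term}          # (occurs check omitted for brevity)

# f(x, a) and f(b, y) from the example above:
print(unify(("f", "?x", "a"), ("f", "b", "?y")))  # {'?x': 'b', '?y': 'a'}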
Planning Agent:
A planning agent decides what to do by constructing a sequence of actions (a plan) that transforms the current state of the world into a goal state. To do this, it needs explicit representations of states, goals, and actions.
State Representation:
A state in a planning problem is a complete description of the world at any given point in time. It
consists of all the relevant facts or conditions necessary to make decisions about actions. States
are often represented as sets of propositions or as vectors in a state space.
For example:
In a robot navigation problem, a state might include the robot's position, its orientation,
and whether obstacles are in the way.
Goal Representation:
A goal is the target state or condition the agent wants to achieve. It specifies the desired
properties that the agent must satisfy at the end of the planning process. Goals can be represented
as logical formulas, sets of propositions, or conditions that must be true.
For example:
The goal could be “The robot is at the target location” or “The door is open.”
Action Representation:
An action in planning is an operation that transforms one state into another. Actions are typically
represented by:
1. Preconditions: The conditions that must be true for the action to be performed.
2. Effects: The changes the action will bring about in the state.
3. Action Description: A formal representation of the action, often in the form of a
predicate logic expression or a set of conditions.
For example:
A robot’s action could be “move forward,” which has a precondition (robot is not at the
end of the path) and an effect (the robot’s position changes).
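As a hedged sketch of this precondition/effect representation, a STRIPS-style action can be encoded as follows (the Action class and the move_forward operator are illustrative assumptions, not a fixed formalism):

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    preconditions: frozenset    # facts that must hold before the action
    add_effects: frozenset      # facts the action makes true
    del_effects: frozenset      # facts the action makes false

    def applicable(self, state):
        return self.preconditions <= state

    def apply(self, state):
        return (state - self.del_effects) | self.add_effects

move_forward = Action("move_forward",
                      preconditions=frozenset({"at_start", "path_clear"}),
                      add_effects=frozenset({"at_next"}),
                      del_effects=frozenset({"at_start"}))

state = frozenset({"at_start", "path_clear"})
if move_forward.applicable(state):
    print(move_forward.apply(state))  # frozenset({'path_clear', 'at_next'})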
1. Initial State:
o The starting condition or configuration of the system, which represents the current
state of the world or environment.
o For example, in a robot planning system, the initial state could be the robot’s
starting position.
2. Goal State:
o The desired end condition or set of conditions that the system aims to achieve.
The goal state specifies what the world should look like after the agent completes
its actions.
o For example, the goal could be “The robot has reached its destination.”
3. Actions:
o The operations that the system can use to transition between states. Each action
has preconditions (what must be true for the action to be applicable) and effects
(what the action does to the world).
o Actions are often represented using action schemas in formal planning
languages.
4. State Representation:
o A formal description of the environment or the world at any given time, which
includes the relevant features and conditions that can change over time. States are
usually represented as sets of variables or facts.
o Example: Position of the robot, obstacles in the way, battery level, etc.
5. Plan:
o The sequence of actions generated by the planning system that leads from the
initial state to the goal state. A valid plan must satisfy the goal conditions and
respect the preconditions of the actions.
o A plan is essentially a solution to the problem.
6. Search Algorithm:
o Planning systems often use search algorithms (e.g., breadth-first search, A*,
depth-first search) to explore possible sequences of actions and select the one that
reaches the goal in the most efficient way.
o These algorithms help the planner search through the state space to find a valid
and optimal plan.
7. Heuristic Function (Optional):
o Some planning systems use heuristics to guide the search process. A heuristic
function estimates the cost or distance from a given state to the goal and helps the
system prioritize which states to explore first.
o This is often used in optimal planning methods.
The components of Artificial Intelligence (AI) can be broadly categorized into various areas
that work together to create intelligent systems capable of reasoning, learning, perception, and
decision-making. The major components of AI include:
1. Learning:
o Machine Learning (ML) is a core component of AI, enabling systems to learn
from data and improve their performance over time. ML involves algorithms that
allow computers to recognize patterns, make predictions, and optimize decision-
making without being explicitly programmed.
o Supervised Learning, Unsupervised Learning, Reinforcement Learning, and
Deep Learning are some common approaches to machine learning.
2. Reasoning:
o AI systems use logical reasoning to make inferences and decisions. This involves
drawing conclusions from a set of premises or facts.
o Key techniques include propositional logic, first-order logic, deductive
reasoning, and inductive reasoning.
3. Perception:
o Perception refers to the ability of an AI system to sense and interpret its
environment. This involves sensory data (e.g., visual, auditory, tactile) and
processing it to understand the surroundings.
o Computer Vision, Speech Recognition, and Natural Language Processing
(NLP) are examples of perception tasks in AI.
4. Planning and Decision Making:
o Planning involves generating a sequence of actions to achieve a specific goal,
while decision-making is about choosing the best course of action given the
available information.
o Decision Trees, Markov Decision Processes (MDPs), and Reinforcement
Learning are examples of methods used for decision-making in AI.
5. Problem Solving:
o AI systems are often designed to solve complex problems by breaking them down
into simpler subproblems. This involves the use of search algorithms,
optimization techniques, and heuristics.
o A*, Breadth-First Search (BFS), and Depth-First Search (DFS) are popular
algorithms for problem-solving.
6. Natural Language Processing (NLP):
o NLP enables machines to understand, interpret, and generate human language. It
includes tasks such as language translation, text summarization, and speech
recognition.
o NLP is used in chatbots, virtual assistants, and translation systems.
7. Robotics:
o AI in robotics involves the integration of various AI techniques to allow robots to
perform tasks autonomously. This includes motion planning, sensor fusion, and
interaction with the environment.
o Robots can use AI to perceive their environment, plan their actions, and execute
them.
8. Knowledge Representation:
o AI systems need to represent knowledge about the world in a structured form that they can process. This can be done using various methods such as semantic networks, frames, ontologies, and production rules.
o The goal is to represent real-world information that can be used for reasoning and
decision-making.
9. Expert Systems:
o Expert systems are AI programs that simulate the decision-making ability of a
human expert in a particular field. They rely on knowledge bases and inference
engines to make decisions and solve problems within a specific domain.
Summary:
Together, these components (learning, reasoning, perception, planning, problem solving, NLP, robotics, knowledge representation, and expert systems) combine to produce systems that can perceive their environment, reason about it, and act intelligently.
OR
Q8) a) What are the types of planning? Explain in detail. [6]
b) Explain Classical Planning and its advantages with Example. [6]
c) Write note on hierarchical task network planning. [5]
In the context of artificial intelligence, planning refers to the process of selecting a sequence of
actions to achieve a specific goal. Different types of planning techniques are employed
depending on the nature of the environment, the problem at hand, and the type of agent used. The
main types of planning include:
1. Classical Planning:
o Classical planning assumes a deterministic environment with perfect knowledge
and no uncertainty. It involves finding a sequence of actions that transform the
current state to the goal state.
o Example: A robot planning to navigate from one room to another by following a
set of deterministic actions (like move forward, turn left, etc.).
2. Conditional Planning:
o Conditional planning involves planning for contingencies where the actions might
have different outcomes based on conditions. It assumes that the future is
uncertain and the agent needs to handle different scenarios.
o Example: If a robot is navigating and encounters an obstacle, it needs to have a
contingency plan (e.g., turn left if blocked, move forward otherwise).
3. Non-Deterministic Planning:
o Non-deterministic planning allows for uncertainty in actions. The environment’s
response to an action is not always known, meaning an action might have multiple
possible outcomes.
o Example: A robot in an uncertain environment where sensors may sometimes
malfunction, causing different possible outcomes from the same action.
4. Partial-Order Planning:
o In partial-order planning, actions are not scheduled in a strict sequence, but
constraints are added that must be satisfied. This method allows flexibility in the
order of actions.
o Example: In a task involving both cooking and cleaning, the agent can perform
some tasks (like setting the table) independently of other tasks (like stirring soup),
but others must occur in a certain order.
5. Hierarchical Task Network (HTN) Planning:
o HTN planning breaks down tasks into subtasks (hierarchical decomposition). This
allows complex tasks to be broken down into simpler ones, which can be more
easily planned and executed.
o Example: In a complex robot task like "clean the house," HTN might decompose
it into smaller tasks such as "sweep the floor," "vacuum the carpet," and so on.
6. Temporal Planning:
o Temporal planning takes into account time constraints. The planning process
includes the concept of time, durations for actions, and synchronization between
actions.
o Example: Scheduling a flight involves temporal planning, as certain actions (e.g.,
takeoff, landing) must happen at specific times.
7. Goal-Oriented Planning:
o Goal-oriented planning focuses on achieving a specific goal. The system
determines what needs to be done to reach the goal, and plans actions
accordingly.
o Example: A chess-playing AI aims to checkmate the opponent, so it generates a
plan that progressively puts the opponent’s king into check.
8. Plan-Space Planning:
o Plan-space planning represents plans as a set of choices that must be made, with
some choices depending on others. It doesn't follow a linear progression and can
be useful in complex environments where multiple paths are possible.
Classical Planning:
Classical planning refers to the problem of finding a sequence of actions that lead from an initial state to a goal state in a deterministic environment, where the outcomes of actions are predictable and there is no uncertainty in the agent's actions. It is often formulated in STRIPS (Stanford Research Institute Problem Solver) style, where the environment is described using states and actions.
Advantages:
1. Predictability: because the environment is deterministic and fully known, a plan computed in advance is guaranteed to achieve the goal if executed correctly.
2. Well-understood algorithms: mature techniques such as STRIPS-style state-space search and heuristic planners apply directly.
3. Simplicity: there is no need to plan for contingencies, which keeps the search problem comparatively small.
Example:
Consider a robot that needs to navigate from one room to another. The robot has actions like
"move left," "move right," "turn around," etc., and a clear goal state (reaching the destination
room). In classical planning, the system will generate a sequence of actions that will transform
the initial state (robot in the starting room) to the goal state (robot in the destination room).
Example in Action: A classical planning algorithm might be used to plan a sequence of moves
for a vacuum cleaning robot. Given a grid map and the locations of dirt, the robot's task is to
clean the house efficiently by moving along a set of predefined paths.
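A minimal forward state-space planner sketch (breadth-first search over STRIPS-style actions; the two-room domain used here is an illustrative assumption):

from collections import deque

# Actions encoded as (name, preconditions, add_effects, del_effects).
ACTIONS = [
    ("open_door",   {"at_A"},              {"door_open"}, set()),
    ("move_A_to_B", {"at_A", "door_open"}, {"at_B"},      {"at_A"}),
]

def plan(initial, goal):
    frontier = deque([(frozenset(initial), [])])
    visited = {frozenset(initial)}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:
            return path                       # sequence of action names
        for name, pre, add, delete in ACTIONS:
            if pre <= state:                  # preconditions satisfied
                nxt = frozenset((state - delete) | add)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None                               # no plan exists

print(plan({"at_A"}, {"at_B"}))  # ['open_door', 'move_A_to_B']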
HTN Planning is a formal approach to AI planning where tasks are broken down into subtasks
in a hierarchical manner. It is an extension of classical planning that allows more complex tasks
to be represented as compositions of simpler tasks. In HTN, the problem is decomposed into
smaller, more manageable tasks (or "methods"), and these tasks can further be broken down until
they reach a level where they can be executed directly.
HTN planning is particularly useful for modeling and solving complex problems by organizing
tasks hierarchically.
1. Task Decomposition: High-level tasks are decomposed into subtasks, and this
decomposition is done recursively until the tasks become primitive (i.e., directly
executable).
2. Methods: A method defines how a task can be decomposed. It consists of a set of
preconditions (requirements for the task) and a set of subtasks (the decomposition of the
task).
3. Primitive Tasks: These are the basic actions or tasks that can be directly executed, such
as moving an object, picking something up, etc.
Example:
Consider a robot tasked with performing the following complex task: "clean the house."
High-Level Task: Clean the house
o Method 1: To clean the house, the robot needs to clean each room.
Subtasks: Clean living room, clean kitchen, clean bedroom.
o Method 2: To clean the living room, the robot needs to:
Subtasks: Sweep the floor, mop the floor, vacuum the carpet.
This hierarchical decomposition continues until each task is simple enough to be directly
executed (e.g., "move to room X," "sweep the floor").
Advantages:
1. Modularity: HTN allows for modular planning, where tasks can be reused across
different plans. This makes it easier to scale planning solutions.
2. Natural Representation: HTN allows for a natural and intuitive representation of
complex tasks in terms of simpler tasks, closely resembling how humans approach
problem-solving.
3. Efficiency: By decomposing tasks into subtasks, HTN reduces the search space and
makes planning more efficient.
Disadvantages:
1. Complexity in Task Definition: Defining methods for every task and subtask can be
time-consuming and requires domain-specific knowledge.
2. Limited Flexibility: HTN assumes that tasks can be decomposed in a fixed manner. If
unexpected situations arise, the system may not adapt easily.
HTN planning is commonly used in areas such as robotics, game AI, and autonomous systems,
where complex tasks require hierarchical decomposition for effective execution.
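A tiny HTN decomposition sketch based on the "clean the house" example (the task and method names are illustrative assumptions; only one method per task is modeled):

METHODS = {
    "clean_house":       [["clean_living_room", "clean_kitchen"]],
    "clean_living_room": [["sweep_floor", "mop_floor", "vacuum_carpet"]],
}
PRIMITIVES = {"sweep_floor", "mop_floor", "vacuum_carpet", "clean_kitchen"}

def decompose(task):
    """Recursively expand a task into a flat list of primitive actions."""
    if task in PRIMITIVES:
        return [task]
    plan = []
    for subtask in METHODS[task][0]:   # pick the first (only) method
        plan.extend(decompose(subtask))
    return plan

print(decompose("clean_house"))
# ['sweep_floor', 'mop_floor', 'vacuum_carpet', 'clean_kitchen']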
Summary:
HTN planning makes complex problems tractable by recursively decomposing high-level tasks into primitive, directly executable actions; its modularity and natural task structure make it well suited to robotics, game AI, and autonomous systems.