Artificial_Intelligence

The document provides an overview of various search algorithms in artificial intelligence, including Depth First Search (DFS), Breadth First Search (BFS), and heuristic search techniques. It discusses the properties of search algorithms, the Mini-Max algorithm for game playing, and optimization techniques like Alpha-Beta pruning. Additionally, it covers concepts such as Constraint Satisfaction Problems and Means-Ends Analysis, highlighting their significance in problem-solving within AI.

Unit-1: Artificial Intelligence
Depth First Search or DFS for a Graph
In Depth First Search (or DFS) for a graph, we traverse all adjacent
vertices one by one. When we traverse an adjacent vertex, we completely
finish the traversal of all vertices reachable through that adjacent vertex.
This is similar to a tree, where we first completely traverse the left subtree
and then move to the right subtree. The key difference is that, unlike trees,
graphs may contain cycles (a node may be visited more than once). To
avoid processing a node multiple times, we use a boolean visited array.
The step-by-step process to implement DFS traversal is as follows:

1. Create an empty stack (its size is at most the total number of vertices in the graph).
2. Choose any vertex as the starting point of the traversal and push it onto the stack.
3. Push a non-visited vertex adjacent to the vertex on top of the stack onto the stack.
4. Repeat step 3 until no unvisited vertex is reachable from the vertex on the stack's top.
5. If no such vertex is left, backtrack by popping a vertex from the stack.
6. Repeat steps 3-5 until the stack is empty.
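The steps above can be sketched in Python with an explicit stack; the adjacency-list graph below is a made-up example, not one from the slides:

```python
def dfs(graph, start):
    """Iterative depth-first traversal using an explicit stack."""
    visited = set()
    stack = [start]           # step 2: push the starting vertex
    order = []
    while stack:              # repeat until the stack is empty
        vertex = stack.pop()  # popping = backtracking when nothing new remains
        if vertex in visited:
            continue
        visited.add(vertex)
        order.append(vertex)
        # step 3: push non-visited adjacent vertices
        for neighbour in reversed(graph[vertex]):
            if neighbour not in visited:
                stack.append(neighbour)
    return order

# Hypothetical example graph as an adjacency list
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(dfs(graph, 'A'))  # ['A', 'B', 'D', 'C']
```

The `visited` set plays the role of the boolean visited array mentioned above, preventing cycles from causing repeated processing.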
Water Jug Problem With DFS
Breadth First Search or BFS for a Graph
• Breadth First Search (BFS) is a fundamental graph traversal
algorithm. It begins at a node, first traverses all of that node's
adjacent vertices, and once all of them are visited, traverses their
adjacent vertices in turn.
• BFS differs from DFS in that the closest vertices are visited
before others: we traverse the vertices level by level.
• Popular graph algorithms like Dijkstra's shortest path, Kahn's
algorithm, and Prim's algorithm are based on BFS.
• BFS itself can be used to detect cycles in directed and
undirected graphs, find shortest paths in an unweighted graph, and
solve many other problems.
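The level-by-level behaviour can be sketched with a FIFO queue (the graph is the same hypothetical example used for DFS):

```python
from collections import deque

def bfs(graph, start):
    """Level-by-level traversal: closest vertices are visited first."""
    visited = {start}
    queue = deque([start])
    order = []
    while queue:
        vertex = queue.popleft()   # FIFO: earliest-discovered vertex first
        order.append(vertex)
        for neighbour in graph[vertex]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order

# Hypothetical example graph as an adjacency list
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs(graph, 'A'))  # ['A', 'B', 'C', 'D']
```

Swapping the queue for a stack would turn this back into DFS; the data structure alone accounts for the difference in visiting order.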
Water Jug Problem With BFS
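A BFS solution to the classic water jug puzzle can be sketched as below. The capacities (4-litre and 3-litre jugs, target 2 litres) are the usual textbook choice, assumed here since the slide does not state them:

```python
from collections import deque

def water_jug_bfs(cap_a=4, cap_b=3, target=2):
    """BFS over (jug_a, jug_b) states; returns a shortest action sequence."""
    start = (0, 0)
    parent = {start: None}           # also serves as the visited set
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if a == target or b == target:
            path, state = [], (a, b)  # reconstruct the path via parents
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        pour_ab = min(a, cap_b - b)   # amount movable from A into B
        pour_ba = min(b, cap_a - a)
        successors = [
            (cap_a, b), (a, cap_b),           # fill a jug
            (0, b), (a, 0),                   # empty a jug
            (a - pour_ab, b + pour_ab),       # pour A into B
            (a + pour_ba, b - pour_ba),       # pour B into A
        ]
        for s in successors:
            if s not in parent:
                parent[s] = (a, b)
                queue.append(s)
    return None

print(water_jug_bfs())  # shortest sequence of (jug_a, jug_b) states reaching 2 litres
```

Because BFS explores states level by level, the first time a goal state is dequeued the path to it is guaranteed to use the fewest moves (four, for these capacities).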
Properties of Search Algorithms
• Completeness
A search algorithm is complete if it is guaranteed to find a solution
whenever at least one solution exists for the given input.

• Optimality
If the solution found is the best (lowest path cost) among all
possible solutions, that solution is said to be optimal.

• Time complexity
The time taken by an algorithm to complete its task is called its time
complexity. An algorithm that completes the task in less time is more
efficient.

• Space complexity
The maximum storage or memory the algorithm uses while searching.

These properties are also used to compare the efficiency of the
different types of search algorithms.
Heuristic Search Techniques in AI
One of the core methods AI systems use to navigate problem-solving
is through heuristic search techniques. These techniques are
essential for tasks that involve finding the best path from a starting
point to a goal state, such as in navigation systems, game playing,
and optimization problems. This article delves into what heuristic
search is, its significance, and the various techniques employed in AI.
Components of Heuristic Search
Heuristic search algorithms typically comprise several essential
components:
1. State Space: The set of all possible states or configurations among
which a solution to the given problem is sought.
2. Initial State: The state from which the search begins, i.e., the root
of the search tree.
3. Goal Test: A check that determines whether the current state is a
goal (terminal) state in which the problem is solved.
4. Successor Function: Generates the states reachable from the current
state, i.e., the possible moves that can replace the current state.
Hill Climbing
• Hill climbing is a heuristic search used for mathematical optimization
problems. It is a variant of the gradient ascent method. Starting from a
random point, the algorithm takes steps in the direction of increasing
elevation or value to find the peak of the mountain or the optimal solution
to the problem. However, it may settle for a local maximum and not reach
the global maximum.
• Hill climbing is a fundamental concept in AI because of its simplicity,
efficiency, and effectiveness in certain scenarios, especially when dealing
with optimization problems or finding solutions in large search spaces.
Basic Concepts of Hill Climbing Algorithms
Hill climbing follows these steps:
• Initial State: Start with an arbitrary or random solution (initial state).
• Neighboring States: Identify neighboring states of the current solution
by making small adjustments (mutations or tweaks).
• Move to Neighbor: If one of the neighboring states offers a better
solution (according to some evaluation function), move to this new state.
• Termination: Repeat this process until no neighboring state is better
than the current one. At this point, you’ve reached a local maximum or
minimum (depending on whether you’re maximizing or minimizing).
1. Simple Hill Climbing Algorithm

Simple Hill Climbing is a straightforward variant of hill climbing where the
algorithm evaluates each neighboring node one by one and selects the first
node that offers an improvement over the current one.
Algorithm for Simple Hill Climbing
1. Evaluate the initial state. If it is a goal state, return success.
2. Make the initial state the current state.
3. Loop until a solution is found or no operators can be applied:
• Select a new state that has not yet been applied to the current state.
• Evaluate the new state.
• If the new state is the goal, return success.
• If the new state improves upon the current state, make it the current state and continue.
• If it doesn't improve, continue searching neighboring states.
4. Exit the function if no better state is found.
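The "first improving neighbour wins" rule can be sketched as follows; the one-peak objective function and the integer neighbourhood are made-up toys:

```python
def simple_hill_climbing(start, neighbours, score):
    """Move to the FIRST neighbour that improves on the current state."""
    current = start
    while True:
        improved = False
        for candidate in neighbours(current):
            if score(candidate) > score(current):  # first improvement wins
                current = candidate
                improved = True
                break
        if not improved:        # no better neighbour: local maximum reached
            return current

# Hypothetical objective with a single peak at x = 7
score = lambda x: -(x - 7) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(simple_hill_climbing(0, neighbours, score))  # 7
```

With a multi-peaked objective the same loop would stop at whichever local maximum it climbs first, which is exactly the limitation discussed above.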


Steepest-Ascent Hill Climbing

Steepest-Ascent Hill Climbing is an enhanced version of simple hill climbing.
Instead of moving to the first neighboring node that improves the state, it
evaluates all neighbors and moves to the one offering the highest improvement
(steepest ascent).
Algorithm for Steepest-Ascent Hill Climbing
1. Evaluate the initial state. If it is a goal state, return success.
2. Make the initial state the current state.
3. Repeat until the solution is found or the current state remains unchanged:
• Select a new state that hasn't been applied to the current state.
• Initialize a 'best state' variable and evaluate all neighboring states.
• If a better state is found, update the best state.
• If the best state is the goal, return success.
• If the best state improves upon the current state, make it the new current state and repeat.
4. Exit the function if no better state is found.
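The steepest-ascent variant can be sketched the same way, the only change being that all neighbours are compared before moving (again, the objective and neighbourhood are made-up toys):

```python
def steepest_ascent(start, neighbours, score):
    """Evaluate ALL neighbours and move to the best one, if it improves."""
    current = start
    while True:
        best = max(neighbours(current), key=score)  # steepest uphill move
        if score(best) <= score(current):           # no uphill move left
            return current
        current = best

# Same hypothetical one-peak objective; with a wider neighbourhood the
# choice among several improving moves is where steepest ascent matters.
score = lambda x: -(x - 7) ** 2
neighbours = lambda x: [x - 2, x - 1, x + 1, x + 2]
print(steepest_ascent(0, neighbours, score))  # 7
```

Compared with simple hill climbing, each iteration does more evaluation work but typically takes fewer, larger steps toward the maximum.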


Key Regions in the State-Space Diagram
Local Maximum: A local maximum is a state better than its neighbors but
not the best overall. While its objective function value is higher than nearby
states, a global maximum may still exist.
Global Maximum: The global maximum is the best state in the state-space
diagram, where the objective function achieves its highest value. This is the
optimal solution the algorithm seeks.
Plateau/Flat Local Maximum: A plateau is a flat region where
neighboring states have the same objective function value, making it
difficult for the algorithm to decide on the best direction to move.
Ridge: A ridge is a higher region with a slope, which can look like a peak.
This may cause the algorithm to stop prematurely, missing better solutions
nearby.
Current State: The current state refers to the algorithm’s position in the
state-space diagram during its search for the optimal solution.
Shoulder: A shoulder is a plateau with an uphill edge, allowing the
algorithm to move toward better solutions if it continues searching beyond
the plateau.
A* (pronounced "A-star")
• It is a powerful graph traversal and pathfinding algorithm widely used in
artificial intelligence and computer science.
• It is mainly used to find the shortest path between two nodes in a graph,
given the estimated cost of getting from the current node to the
destination node.
• The main advantage of the algorithm is its ability to provide an optimal
path by exploring the graph in a more informed way compared to
traditional search algorithms such as Dijkstra's algorithm.
• Algorithm A* combines the advantages of two other search algorithms:
Dijkstra's algorithm and Greedy Best-First Search.
• Like Dijkstra's algorithm, A* ensures that the path found is as short as
possible but does so more efficiently by directing its search through a
heuristic similar to Greedy Best-First Search.
• A heuristic function, denoted h(n), estimates the cost of getting from any
given node n to the destination node.
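The f(n) = g(n) + h(n) ordering can be sketched with a priority queue; the weighted graph and the heuristic table below are a hypothetical example chosen so that h is admissible:

```python
import heapq

def a_star(graph, h, start, goal):
    """A*: always expand the node with the lowest f(n) = g(n) + h(n)."""
    open_heap = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return g, path
        for neighbour, cost in graph[node]:
            new_g = g + cost
            if new_g < best_g.get(neighbour, float('inf')):
                best_g[neighbour] = new_g          # found a cheaper route
                heapq.heappush(open_heap, (new_g + h(neighbour), new_g,
                                           neighbour, path + [neighbour]))
    return None

# Hypothetical weighted graph and an admissible heuristic h(n)
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 5)],
         'B': [('G', 1)], 'G': []}
h = {'S': 4, 'A': 3, 'B': 1, 'G': 0}.get
print(a_star(graph, h, 'S', 'G'))  # (4, ['S', 'A', 'B', 'G'])
```

With h(n) = 0 for every node, this degenerates into Dijkstra's algorithm; the heuristic is what steers the search toward the goal.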
AO* algorithm
• It is an advanced search algorithm utilized in artificial intelligence,
particularly in problem-solving and decision-making contexts.
• It is an extension of the A* algorithm, designed to handle more
complex problems that require handling multiple paths and making
decisions at each node.
Example of AO* Algorithm
In the figure, we consider the process of buying a car. This process
can be broken down into smaller tasks using an AND-OR graph. The
AND section might include securing funds or applying for a loan,
while the OR section could present different options for acquiring the
car, such as purchasing with cash or financing. Each subproblem in
the AND-OR graph must be solved before the main issue is resolved.
Comparison between A* Algorithm and AO*
algorithm
Aspect               | A* Algorithm                        | AO* Algorithm
---------------------+-------------------------------------+------------------------------------------
Search Type          | Best-first search                   | Best-first search
Type of Search       | Informed search using heuristics    | Informed search using heuristics
Solution Optimality  | Always gives the optimal solution   | Does not guarantee an optimal solution
Path Exploration     | Explores all possible paths         | Stops exploring once a solution is found
Memory Usage         | Uses more memory                    | Uses less memory
Endless Loop         | May go into an endless loop         | Cannot go into an endless loop
                     | without proper checks               |
ALGORITHM:
Let G be a graph with only the starting node INIT.
Repeat the following until INIT is labeled SOLVED or h(INIT) > FUTILITY:
a) Select an unexpanded node on the most promising path from INIT
(call it NODE).
b) Generate the successors of NODE. If there are none, set h(NODE) =
FUTILITY (i.e., NODE is unsolvable); otherwise, for each SUCCESSOR that
is not an ancestor of NODE, do the following:
i. Add SUCCESSOR to G.
ii. If SUCCESSOR is a terminal node, label it SOLVED and set
h(SUCCESSOR) = 0.
iii. If SUCCESSOR is not a terminal node, compute its h value.
c) Propagate the newly discovered information up the graph by doing
the following: let S be the set of SOLVED nodes and nodes whose h values
have been changed and need to have those values propagated back to their
parents. Initialize S to NODE. Until S is empty, repeat the following:
i. Remove a node from S and call it CURRENT.
ii. Compute the cost of each of the arcs emerging from CURRENT. Assign
the minimum of these costs as the h of CURRENT.
iii. Mark the best path out of CURRENT by marking the arc that had the
minimum cost in step ii.
iv. Mark CURRENT as SOLVED if all of the nodes connected to it through
the newly labeled arc have been labeled SOLVED.
v. If CURRENT has been labeled SOLVED or its cost has just changed,
propagate the new information back up through the graph by adding all
of the ancestors of CURRENT to S.
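The full AO* bookkeeping above (FUTILITY, SOLVED labels, incremental cost propagation) is involved. As a much-simplified illustration of the underlying AND-OR idea only, the sketch below recursively computes the cheapest solution cost of an acyclic AND-OR graph, using the buy-a-car example with unit arc costs; the graph, node names, and costs are all made up:

```python
def and_or_cost(graph, node, edge_cost=1):
    """Cheapest solution cost of a node in an acyclic AND-OR graph.
    graph[node] is a list of connectors; each connector is a tuple of
    child nodes that must ALL be solved (an AND arc), and the choice
    among connectors is the OR decision."""
    if not graph.get(node):              # terminal node: solved, cost 0
        return 0
    best = float('inf')
    for connector in graph[node]:        # OR: pick the cheapest connector
        cost = sum(edge_cost + and_or_cost(graph, child)
                   for child in connector)  # AND: solve every child
        best = min(best, cost)
    return best

# Hypothetical "buy a car" graph: pay cash OR (get loan AND sign papers)
graph = {'buy_car': [('pay_cash',), ('get_loan', 'sign_papers')],
         'pay_cash': [], 'get_loan': [], 'sign_papers': []}
print(and_or_cost(graph, 'buy_car'))  # 1  (the cash branch is cheaper)
```

Unlike real AO*, this sketch re-evaluates the whole graph on every call and handles neither cycles nor heuristic estimates at unexpanded nodes; it only shows how AND arcs sum costs while OR choices minimize them.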
Constraint Satisfaction Problem (CSP)
A Constraint Satisfaction Problem is a mathematical problem where the
solution must meet a number of constraints. In a CSP, the objective is to
assign values to variables such that all the constraints are satisfied.
CSPs are used extensively in artificial intelligence for decision-making
problems where resources must be managed or arranged within strict
guidelines.
Components of Constraint Satisfaction Problems
1. Variables: The objects that must have values assigned to them in
order to satisfy a particular set of constraints. Variables can be of
various types — Boolean, integer, categorical, and so on. In a Sudoku
puzzle, for instance, each cell that must be filled with a number is a
variable.
2. Domains: The range of potential values a variable can take. Depending
on the problem, a domain may be finite or infinite. In Sudoku, the set
of numbers from 1 to 9 serves as the domain of each cell variable.
3. Constraints: The rules that govern how variables relate to one
another; they restrict the combinations of values the variables may
take. Constraints come in several forms — unary constraints, binary
constraints, and higher-order constraints. In Sudoku, for instance, the
rule that no two cells in the same row, column, or 3x3 box may hold the
same value is a higher-order constraint.
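A minimal backtracking solver over these three components can be sketched as follows; the three-region map-colouring instance is a hypothetical example, not one from the slides:

```python
def backtracking_csp(variables, domains, constraints, assignment=None):
    """Assign values to variables one at a time; backtrack on violation."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment                       # every variable assigned
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(check(assignment) for check in constraints):
            result = backtracking_csp(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]                     # undo and try the next value
    return None                                 # no value works: backtrack

# Hypothetical 3-region map colouring: adjacent regions must differ
variables = ['WA', 'NT', 'SA']
domains = {v: ['red', 'green', 'blue'] for v in variables}
adjacent = [('WA', 'NT'), ('WA', 'SA'), ('NT', 'SA')]
constraints = [lambda a, p=p: (p[0] not in a or p[1] not in a
                               or a[p[0]] != a[p[1]]) for p in adjacent]
print(backtracking_csp(variables, domains, constraints))
# {'WA': 'red', 'NT': 'green', 'SA': 'blue'}
```

Each constraint here is binary (one per adjacent pair) and is written to pass while either of its variables is still unassigned, so partial assignments are only rejected for actual violations.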
Means-Ends Analysis in Artificial Intelligence
• We have studied strategies that reason either forward or backward,
but for solving a complex, large problem a mixture of the two
directions is appropriate. Such a mixed strategy makes it possible to
first solve the major parts of a problem and then go back and solve
the smaller problems that arise while combining the big parts of the
problem. Such a technique is called Means-Ends Analysis.
• Means-Ends Analysis is a problem-solving technique used in artificial
intelligence to limit search in AI programs.
• It is a mixture of backward and forward search techniques.
• The MEA technique was first introduced in 1961 by Allen Newell and
Herbert A. Simon in their problem-solving computer program, named the
General Problem Solver (GPS).
• The MEA process is centered on evaluating the difference between the
current state and the goal state.
How means-ends analysis Works:
The means-ends analysis process can be applied recursively to a problem.
It is a strategy for controlling search in problem-solving. The following
are the main steps that describe the working of the MEA technique for
solving a problem:
1. First, evaluate the difference between the initial state and the final state.
2. Select the various operators that can be applied to each difference.
3. Apply an operator to each difference, which reduces the difference
between the current state and the goal state.
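As a toy illustration of these steps, the sketch below applies MEA to a purely numeric problem in which the "difference" is arithmetic and the operators are increments; all numbers and names are made up:

```python
def means_ends_analysis(current, goal, operators):
    """Repeatedly pick the operator that best reduces the current-goal gap."""
    plan = []
    while current != goal:
        difference = goal - current          # step 1: measure the difference
        # step 2: keep only operators that actually reduce the difference
        applicable = [op for op in operators
                      if abs(difference - op) < abs(difference)]
        if not applicable:
            return None                      # stuck: no operator helps
        # step 3: apply the operator that reduces the difference the most
        best = min(applicable, key=lambda op: abs(difference - op))
        plan.append(best)
        current += best
    return plan

# Hypothetical toy problem: reach 10 from 0 with step operators +5, +3, +1
print(means_ends_analysis(0, 10, [5, 3, 1]))  # [5, 5]
```

In GPS the differences are symbolic (preconditions and predicates rather than numbers), but the control loop — measure difference, select an operator indexed by that difference, apply it, recurse — has this same shape.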
Unit-2: Game Playing
Mini-Max Algorithm in Artificial Intelligence
• The Mini-Max algorithm is a recursive or backtracking algorithm used
in decision-making and game theory. It provides an optimal move for a
player, assuming that the opponent is also playing optimally.
• The Mini-Max algorithm uses recursion to search through the game tree.
• The Min-Max algorithm is mostly used for game playing in AI, such as
chess, checkers, tic-tac-toe, Go, and various other two-player games.
The algorithm computes the minimax decision for the current state.
• In this algorithm two players play the game; one is called MAX and the
other is called MIN.
• The two players compete: each tries to ensure the opponent gets the
minimum benefit while they themselves get the maximum benefit.
• Both players are opponents of each other, where MAX selects the
maximized value and MIN selects the minimized value.
• The minimax algorithm performs a depth-first search to explore the
complete game tree.
• The minimax algorithm proceeds all the way down to the terminal nodes
of the tree, then backtracks up the tree as the recursion unwinds.
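The recursive MAX/MIN alternation can be sketched on a small dictionary-encoded game tree; the tree and its leaf utilities are a made-up two-ply example:

```python
def minimax(node, is_max, tree):
    """Minimax value of `node`; leaves map directly to numeric utilities."""
    if isinstance(tree[node], (int, float)):       # terminal node: its utility
        return tree[node]
    values = [minimax(child, not is_max, tree) for child in tree[node]]
    return max(values) if is_max else min(values)  # MAX maximizes, MIN minimizes

# Hypothetical 2-ply game tree: MAX at the root, MIN one level down
tree = {'root': ['L', 'R'],
        'L': ['L1', 'L2'], 'R': ['R1', 'R2'],
        'L1': 3, 'L2': 5, 'R1': 2, 'R2': 9}
print(minimax('root', True, tree))  # 3
```

Here MIN would hold the left branch to min(3, 5) = 3 and the right to min(2, 9) = 2, so MAX's best guaranteed value at the root is 3.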
Alpha-Beta pruning
It is not actually a new algorithm, but rather an optimization
technique for the minimax algorithm. It reduces the computation time
by a huge factor. This allows us to search much faster and even go
into deeper levels in the game tree. It cuts off branches in the game
tree which need not be searched because there already exists a
better move available. It is called Alpha-Beta pruning because it
passes 2 extra parameters in the minimax function, namely alpha and
beta.

Alpha is the best value that the maximizer currently can guarantee at
that level or above.
Beta is the best value that the minimizer currently can guarantee at
that level or below.

Key points about alpha-beta pruning:

• The Max player will only update the value of alpha.


• The Min player will only update the value of beta.
• While backtracking the tree, the node values will be passed to
upper nodes instead of values of alpha and beta.
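These rules can be sketched by threading alpha and beta through the minimax recursion; the tree is the same made-up two-ply example used for plain minimax:

```python
def alphabeta(node, is_max, tree, alpha=float('-inf'), beta=float('inf')):
    """Minimax with alpha-beta cut-offs."""
    if isinstance(tree[node], (int, float)):   # terminal node: its utility
        return tree[node]
    if is_max:
        value = float('-inf')
        for child in tree[node]:
            value = max(value, alphabeta(child, False, tree, alpha, beta))
            alpha = max(alpha, value)          # only MAX updates alpha
            if beta <= alpha:
                break                          # cut-off: MIN won't allow this
        return value
    value = float('inf')
    for child in tree[node]:
        value = min(value, alphabeta(child, True, tree, alpha, beta))
        beta = min(beta, value)                # only MIN updates beta
        if beta <= alpha:
            break                              # cut-off: MAX won't allow this
    return value

# Same hypothetical tree; leaf R2 is pruned because R1 = 2 < alpha = 3
tree = {'root': ['L', 'R'],
        'L': ['L1', 'L2'], 'R': ['R1', 'R2'],
        'L1': 3, 'L2': 5, 'R1': 2, 'R2': 9}
print(alphabeta('root', True, tree))  # 3
```

The value returned is identical to plain minimax; pruning only skips branches that cannot change the result, which is why the optimization is safe.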
Iterative Deepening Search
• Iterative Deepening Search (IDS) is a search algorithm used in AI that blends the
completeness of Breadth-First Search (BFS) with the space efficiency of
Depth-First Search (DFS).
• IDS explores a graph or a tree by progressively increasing the depth limit with each
iteration, effectively performing a series of DFS operations until the goal node is
found.
• This approach is particularly advantageous when the depth of the solution is
unknown, and we aim to achieve both optimality and completeness without
excessive memory usage.
How Iterative Deepening Search Works
• The core concept of IDS revolves around repeatedly running a depth-limited DFS
up to increasing depth levels.
• It starts with a depth limit of zero, then increments this limit iteratively.
• Each iteration performs a DFS search up to the current depth limit.
Here’s a step-by-step breakdown of the algorithm:
1. Start at the Root Node: Begin the search from the root node (or starting point).

2. Perform DFS with Depth Limit (L): In each iteration, perform a DFS with a
depth limit L.

3. Increment Depth: After each iteration, increment L by 1.

4. Repeat: Continue this process until the goal node is found or the search space is
exhausted.
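The four steps can be sketched as a depth-limited DFS wrapped in a loop over increasing limits; the tree-shaped graph below is a made-up example (a cyclic graph would additionally need a visited set per iteration):

```python
def depth_limited_dfs(graph, node, goal, limit):
    """DFS that refuses to descend deeper than `limit`."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in graph.get(node, []):
        path = depth_limited_dfs(graph, child, goal, limit - 1)
        if path is not None:
            return [node] + path
    return None

def iterative_deepening(graph, start, goal, max_depth=20):
    """Run depth-limited DFS with limits 0, 1, 2, ... until the goal appears."""
    for limit in range(max_depth + 1):        # step 3: increment the limit
        path = depth_limited_dfs(graph, start, goal, limit)
        if path is not None:
            return path
    return None

# Hypothetical tree-shaped graph
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': []}
print(iterative_deepening(graph, 'A', 'E'))  # ['A', 'C', 'E']
```

Shallow levels are re-explored on every iteration, but since each level of a tree roughly dominates the cost of all shallower ones, the repeated work adds only a constant factor while memory stays proportional to the depth.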
Knowledge representation issues
• Humans are best at understanding, reasoning, and interpreting
knowledge. Humans know things, and according to their knowledge they
perform various actions in the real world. How machines do all of these
things comes under knowledge representation and reasoning.
• Knowledge representation is responsible for representing information
about the real world so that a computer can understand it and can
utilize this knowledge to solve complex real-world problems, such as
diagnosing a medical condition or communicating with humans in natural
language.
• What to Represent:
• The following are the kinds of knowledge that need to be represented
in AI systems:
• Objects: All the facts about objects in our world domain, e.g.,
guitars have strings, trumpets are brass instruments.
• Events: Events are the actions which occur in our world.
• Performance: It describes behavior which involves knowledge about
how to do things.
• Meta-knowledge: It is knowledge about what we know.
• Facts: Facts are the truths about the real world and what we represent.
• Knowledge Base: The central component of knowledge-based agents is
the knowledge base, represented as KB. The knowledge base is a group of
sentences (here, "sentence" is a technical term, not identical to a
sentence in English).
1. Declarative Knowledge
• Declarative knowledge is to know about something.
• It includes concepts, facts, and objects.
• It is also called descriptive knowledge and expressed in declarative sentences.
• It is simpler than procedural knowledge.
2. Procedural Knowledge
• It is also known as imperative knowledge.
• Procedural knowledge is the type of knowledge responsible for knowing
how to do something.
• It can be directly applied to any task.
• It includes rules, strategies, procedures, agendas, etc.
• Procedural knowledge depends on the task to which it can be applied.
3. Meta-knowledge:
• Knowledge about the other types of knowledge is called Meta-knowledge.
4. Heuristic knowledge:
• Heuristic knowledge represents the knowledge of experts in a field or subject.
• It consists of rules of thumb based on previous experiences and awareness of
approaches that tend to work well but are not guaranteed.
5. Structural knowledge:
• Structural knowledge is basic knowledge for problem-solving.
• It describes the relationships between various concepts, such as kind-of,
part-of, and grouping of something, i.e., the relationships that exist
between concepts or objects.
Issues in Knowledge Representation
• Important Attributes:
Are there any attributes of objects so basic that they occur in almost
every problem domain?
• Relationships among attributes:
Are there any important relationships that exist among object
attributes?
• Choosing Granularity:
At what level of detail should the knowledge be represented?
• Set of objects:
How should sets of objects be represented?
• Finding the Right Structure:
Given a large amount of knowledge stored in a database, how can the
relevant parts be accessed when they are needed?
Frame problem
• The frame problem is a problem in AI that deals with the issue of how to
represent knowledge in a way that is useful for reasoning.
• The problem is that there is an infinite number of ways to represent any
given piece of information, and each representation has its own
advantages and disadvantages. The challenge is to find a representation
that is both expressive and efficient.

What are the causes of the frame problem?

The frame problem occurs when an AI system is trying to reason about a
problem or situation. It arises because the AI system does not have all
of the information it needs to make a decision, and it is difficult to
solve because it is often hard to determine what information is relevant
and what is not.

How can the frame problem be overcome?

One way to overcome the frame problem is to use a model-based approach.
This means that instead of trying to reason about the system as a whole,
you create a model of the system. This model can then be used to reason
about the effects of changes.
Another way to overcome the frame problem is to use a heuristic approach.
This means that you use rules of thumb or heuristics to make decisions.
This can be less accurate than a model-based approach, but it can be much
faster.
Unit-3: Uncertainty and Reasoning
Reasoning
 Reasoning in Artificial Intelligence (AI) is the logical process of drawing conclusions, making
predictions, or constructing solutions based on existing knowledge.
 It plays a crucial role in enabling AI systems to simulate human-like decision-making and
problem-solving capabilities

Monotonic Reasoning

 Monotonic Reasoning is reasoning in which conclusions, once drawn, are never retracted: the
process moves in only one direction.

• As knowledge and facts are added, the set of valid conclusions can only grow; it will never
shrink in this kind of reasoning.

• Example:

• "The Sun rises in the East and sets in the West" remains true regardless of any new facts
added to the knowledge base.

Non-monotonic Reasoning

 Non-monotonic Reasoning is reasoning in which conclusions may be withdrawn or revised as
the knowledge base grows.

• It is also known as NMR in Artificial Intelligence.

• Conclusions in non-monotonic reasoning may be added or retracted depending on conditions.

• Since non-monotonic reasoning depends on assumptions, its conclusions change as knowledge
or facts improve.

• Example:

• Consider a bowl of water: if we put it on the stove and turn the flame on, it will boil,
and as soon as we turn off the flame it will gradually cool down again.
Types of Reasoning Mechanisms in AI

Deductive Reasoning?
 Deductive reasoning is a logical process where one draws a specific conclusion
from a general premise. It involves using general principles or accepted truths
to reach a specific conclusion.
 For example, if the premise is "All birds have wings," and the specific
observation is "Robins are birds," then deducing that "Robins have wings" is a
logical conclusion.
• In deductive reasoning, the conclusion is necessarily true if the premises are
true.

• It follows a top-down approach, starting with general principles and applying


them to specific situations to derive conclusions.

• Deductive reasoning is often used in formal logic, where the validity of


arguments is assessed based on the structure of the reasoning rather than the
content.

• It helps in making predictions and solving puzzles by systematically eliminating


possibilities until only one logical solution remains.
Inductive Reasoning?
 Inductive reasoning is a logical approach to making inferences, or conclusions. People often
use inductive reasoning informally in everyday situations .When you use a specific set of data
or existing knowledge from past experiences to make decisions, you're using inductive
reasoning.
Example: AI-Based Email Classification
 Scenario: An AI system is designed to classify emails into categories such as "urgent,"
"important," "normal," and "spam."
Process:
 Data Collection: The AI starts by analyzing thousands of emails that are already labeled by
users. It observes various features such as keywords, sender information, time of email, and
user interactions (like whether emails are opened quickly and replied to, or marked as spam).
 Pattern Recognition: Through its analysis, the AI notices certain patterns:
 Emails containing words like "urgent" or "immediately" and sent from recognized contacts are often
labeled as "urgent."
 Emails from known commercial sources containing words like "sale" or "offer" are frequently marked
as "spam."
 Emails that are not from contacts but contain formal language and no promotional content are often
classified as "important."
 Generalization: Using these observations, the AI develops a general set of rules or a model
to predict the category of new emails. For example, it might generalize that any email from a
recognized contact that includes the word "urgent" should be classified as "urgent."
 Application: When new emails arrive, the AI applies these generalized rules to classify them
based on the learned patterns.
 Outcome: The AI uses inductive reasoning to generalize from specific instances to broader
rules, enabling it to perform email classification with a high degree of accuracy even on emails
it has never seen before.
Abductive Reasoning?
 Abductive reasoning is a type of reasoning that emphasizes drawing inferences from the existing data.
There is no assurance that the conclusion drawn is accurate, though, as the information at hand may not
be comprehensive.
 Conclusions drawn from abductive reasoning are only likely to be true: this type of reasoning
determines the most plausible conclusion for a set of incomplete facts.
 Although abductive reasoning resembles deductive reasoning in form, the accuracy of the conclusion
cannot be guaranteed by the information at hand.

 Example of Abductive Reasoning : Let's take an example: Suppose you wake up one morning and find
that the street outside your house is wet.

1. Observation: The street is wet.

2. Possible Hypotheses:

 It rained last night.

 A water pipe burst.

 A street cleaning vehicle just passed by.

3. Additional Information: You recall that the weather forecast predicted rain for last night.

4. Abductive Reasoning Conclusion: The most plausible explanation for the wet street, given the forecast
and the lack of any other visible cause, is that it rained last night.
Sources of Uncertainty in Reasoning
 Uncertainty is omnipresent because of incompleteness and incorrectness.
 Uncertainty in Data : data derived from assumptions
 Uncertainty in Knowledge Representation : limited expressiveness of the representation
mechanism
 Uncertainty in Rules : conflict resolution and incomplete because some conditions are
unknown
Reasoning and KR
Non-Monotonic Reasoning:
 In a non-monotonic reasoning system new information can be added which will
cause the deletion or alteration of existing knowledge. For example, imagine you
have invited someone to your house for dinner.

 In the absence of any other information you may make an assumption that your
guest eats meat and will like chicken. Later you discover that the guest is in fact a
vegetarian and the inference that your guest likes chicken becomes invalid.

 A system that deals with such non-monotonic knowledge is the Truth Maintenance
System (TMS).
 The main job of the TMS is the maintenance of the knowledge base.
 A TMS is a mechanism for keeping track of dependencies and detecting
inconsistencies. It is also called a reason maintenance system.
 A TMS permits a form of non-monotonic reasoning by allowing knowledge to be
added to, or changed in, the knowledge base.
• A Truth Maintenance System (TMS), also known as a Reason
Maintenance System, is a type of artificial intelligence (AI) system
designed to handle situations where information might be
contradictory or uncertain. It essentially helps manage the
knowledge base of an AI system by tracking how beliefs and
assumptions are formed.
• Here's how a TMS works:
1. Knowledge Representation: The TMS maintains a record of all the facts
and beliefs within the system. This includes both base facts (initial
assumptions) and derived facts (conclusions reached through reasoning).
2. Dependency Tracking: The key aspect of a TMS is that it tracks the
dependencies between these facts. For each derived fact, the TMS stores the
specific base facts and reasoning steps that led to its conclusion. This creates
a network of relationships between beliefs.
3. Maintaining Consistency: Imagine a scenario where a new piece of
information contradicts existing beliefs. This can lead to inconsistencies in the
knowledge base. The TMS detects such inconsistencies and tries to maintain
a coherent view.
Architecture of Truth Management System
• Role of Truth Maintenance System
• 1. The main job of the TMS is to maintain the "consistency of knowledge" being used by the
problem solver, not to perform any inference functions.
• 2. The TMS also gives the inference component the latitude to perform non-monotonic
inferences.
• 3. When discoveries are made, this more recent information can displace previous
conclusions that are no longer valid.
• 4. The TMS maintains dependency records for all such conclusions.
• 5. The procedure used to perform this process is called "Dependency-Directed
Backtracking".
• 6. The records are maintained in the form of a "Dependency Network".
• The nodes in the network represent KB entries such as premises, conclusions, and inference
rules.
• Attached to the nodes are justifications that represent the inference steps from which each
node was derived.
Working Principle of TMS
 The Inference Engine (IE) solves domain problems based on its current belief set,
while the TMS maintains the currently active belief set. The updating process is
incremental. After each inference, information is exchanged between the two
components: the IE tells the TMS what deductions it has made, and the TMS, in turn,
answers questions about current beliefs and the reasons for failures. It maintains a
consistent set of beliefs for the IE to work with even as knowledge is added or removed.
 Step 1: Say the KB contains the propositions P, P->Q and modus ponens.
 Step 2: From this, the IE would rightfully conclude Q and add this conclusion to the KB.
 Step 3: Later, if it is learned that ~P holds, it would be added to the KB,
resulting in a contradiction.
 Step 4: Consequently, it would be necessary to remove P to eliminate the
inconsistency, and to retract the conclusion Q that depended on it.
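The four steps above can be sketched directly with sets; the string encodings of the propositions are illustrative:

```python
# Step 1: the KB contains P and P->Q (modus ponens is the inference rule)
beliefs = {"P", "P->Q"}
justification = {"Q": {"P", "P->Q"}}   # Q is derived from P and P->Q

# Step 2: the IE concludes Q and adds it to the KB
beliefs.add("Q")

# Step 3: learning ~P contradicts P
beliefs.add("~P")
contradiction = "P" in beliefs and "~P" in beliefs   # True

# Step 4: remove P; Q loses its justification, so retract it too
beliefs.discard("P")
if not justification["Q"] <= beliefs:
    beliefs.discard("Q")
print(beliefs)   # -> {'P->Q', '~P'}
```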
Truth Maintenance Systems can have different
characteristics:
• Justification-Based Truth Maintenance System (JTMS)
• It is a simple TMS where one can examine the consequences of the
current set of assumptions. The meaning of sentences is not known.
• Assumption-Based Truth Maintenance System (ATMS)
• It allows one to maintain and reason with a number of simultaneous,
possibly incompatible, current sets of assumptions. Otherwise it is
similar to JTMS, i.e. it does not recognise the meaning of sentences.
• Logical-Based Truth Maintenance System (LTMS)
• Like JTMS in that it reasons with only one set of current assumptions
at a time. More powerful than JTMS in that it recognises the
propositional semantics of sentences, i.e. understands the relations
between p and ~p, p and q and p&q, and so on.
Justification-Based Truth Maintenance
• A Justification-based truth maintenance system (JTMS) is a simple TMS where one can examine the
consequences of the current set of assumptions.
• In a JTMS, labels are attached to arcs from sentence nodes to justification nodes. This label is either "+" or "-".
Then, for a justification node we can talk of its in-list, the list of its inputs with "+" label, and of its out-list, the list
of its inputs with "-" label.
• The meaning of sentences is not known. We can have a node representing a sentence p and one representing
~p and the two will be totally unrelated, unless relations are established between them by justifications.
• For example, we can attach a justification whose in-list contains both p and ~p to a contradiction node,
which says that if both p and ~p are IN we have a contradiction.

The association of IN or OUT labels with the nodes in a dependency network defines an in-out-labeling function.
This function is consistent if:
The association of IN or OUT labels with the nodes in a dependency network defines an in-out-labeling function.
This function is consistent if:
• The label of a justification node is IN iff the labels of all the sentence nodes in its in-list are IN and the
labels of all the sentence nodes in its out-list are OUT.
• The label of a sentence node is IN iff it is a premise, or an enabled assumption node, or it has an input from a
justification node with label IN.
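The consistency condition for a justification node can be sketched as follows; the encoding of the labeling as a plain dictionary is an assumption for illustration:

```python
# Sketch of the labeling rule for a justification node: it is IN iff every
# node in its in-list is IN and every node in its out-list is OUT.

def justification_label(in_list, out_list, labels):
    if all(labels[n] == "IN" for n in in_list) and \
       all(labels[n] == "OUT" for n in out_list):
        return "IN"
    return "OUT"

# Justification feeding the contradiction node: in-list = [p, ~p]
labels = {"p": "IN", "~p": "OUT"}
print(justification_label(["p", "~p"], [], labels))   # -> 'OUT' (no contradiction)

labels["~p"] = "IN"
print(justification_label(["p", "~p"], [], labels))   # -> 'IN' (contradiction fires)
```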
Sample Space
• Sample Space is a key concept in probability theory and is used to
determine the likelihood of different results occurring in a random
experiment or event, by representing all possible outcomes or events
that can occur.
• The sample space in probability refers to the set of all possible
outcomes or results that can arise from a random experiment. It
serves as the foundation for calculating probabilities and
understanding the variability of outcomes.
• How to Find Sample Space in Probability
• To find the sample space in probability, follow the steps below:
• Identify all possible outcomes of the experiment.
• List these outcomes in a set, ensuring each one is unique.
• For a single die roll, the sample space is {1, 2, 3, 4, 5, 6}.
• For drawing a card from a standard deck, the sample space is 52 unique cards.
• Combining sample spaces when multiple events occur helps calculate complex
probabilities.
• What is Sample Space Diagram
• A sample space diagram is a visual representation that
illustrates all the possible outcomes of a random
experiment. It is a valuable tool in probability theory
for visualising and understanding the different
potential results of an event.
• Sample Space Diagram for Rolling Two Dice
• The following illustration represents all the possible outcomes,
i.e., the sample space, of rolling two dice.
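The 36 outcomes shown in such a diagram can also be enumerated directly:

```python
from itertools import product

# All 36 equally likely outcomes of rolling two dice
sample_space = list(product(range(1, 7), repeat=2))
print(len(sample_space))   # -> 36

# e.g. the probability that the two dice sum to 7
favourable = [o for o in sample_space if sum(o) == 7]
print(len(favourable) / len(sample_space))   # -> 0.16666666666666666 (i.e. 6/36)
```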
ATMS-based problem solver
A1. Hotel register was forged.
A2. Hotel register was not forged.
A3. Babbitt's brother-in-law lied.
A4. Babbitt's brother-in-law did not lie.
A5. Cabot lied.
A6. Cabot did not lie.
A7. Abbott, Babbitt, and Cabot are the only possible suspects.
A8. Abbott, Babbitt, and Cabot are not the only suspects.
Application of Bayes’
Theorem
• Statistical inference : to calculate the
probability that a new drug is effective in
treating a particular disease
• Bayesian statistics : to update beliefs about
parameters and hypotheses.
• Machine learning : used to make
predictions such as classifying email as
spam or not spam
• Medical diagnosis : update the probability
of a disease given new results or symptoms.
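As a sketch of the medical-diagnosis case, Bayes' theorem can be applied directly; the prior, sensitivity, and false-positive rate below are illustrative numbers, not real clinical data:

```python
# P(H|E) = P(E|H) * P(H) / [P(E|H) * P(H) + P(E|~H) * P(~H)]

def bayes(prior, likelihood, false_positive_rate):
    # prior: P(disease); likelihood: P(positive test | disease);
    # false_positive_rate: P(positive test | no disease)
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Illustrative: 1% prevalence, 95% sensitivity, 5% false-positive rate
posterior = bayes(prior=0.01, likelihood=0.95, false_positive_rate=0.05)
print(round(posterior, 3))   # -> 0.161
```

Note how a positive result from a fairly accurate test still yields only about a 16% probability of disease, because the prior is low.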
Certainty factors
• Certainty factor (CF) is a numerical value
that indicates how likely a statement or
event is to be true. It is used to manage
uncertainty in rule-based systems.
When is it used?
CFs are used in expert systems to measure
and overcome uncertainty in inferences or
conclusions.
• CFs are used in medical diagnosis, fault
detection, and decision support systems.
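The MYCIN-style rules for combining two certainty factors that bear on the same conclusion can be sketched as follows (the CF values are illustrative):

```python
# Combining two certainty factors for the same conclusion (MYCIN-style).
# CFs range from -1 (certainly false) to +1 (certainly true).

def combine_cf(cf1, cf2):
    if cf1 >= 0 and cf2 >= 0:
        # Two confirming rules reinforce each other
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        # Two disconfirming rules reinforce (negatively)
        return cf1 + cf2 * (1 + cf1)
    # Conflicting evidence partially cancels
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

print(round(combine_cf(0.8, 0.6), 2))    # -> 0.92
print(round(combine_cf(0.8, -0.6), 2))   # -> 0.5
```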
Unit 4 Learning
Learning
•Learning in artificial intelligence (AI) is the process by which AI
systems improve their performance and skills based on
experience and data analysis. This process allows AI systems to
adapt to new situations and tasks.
Types of AI learning:
•Machine learning: A discipline of AI that allows machines to
learn from data automatically
•Deep learning: A type of machine learning that uses artificial
neural networks to learn from data
•Generative AI: A type of machine learning that can generate new
data from text prompts
•Reinforcement learning: A type of machine learning where an
agent learns to make decisions by interacting with its
environment
Rote learning
It is the process of memorizing information without
understanding its context. It’s a basic learning method
that involves storing information for future use.
How does rote learning work in AI?
•Data caching: Storing computed values so they don’t
need to be recomputed later
•Recognizing patterns: Using artificial neural networks
to process information and recognize patterns
•Learning board positions: In checkers-playing
programs, rote learning is used to learn board positions
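The data-caching idea above can be sketched as simple memoization; the board encoding and evaluation function below are purely illustrative:

```python
# Rote learning as data caching: a computed value is stored verbatim
# and reused instead of being recomputed.

cache = {}

def evaluate_position(board):
    # 'board' is any hashable description of a game position (illustrative).
    if board in cache:
        return cache[board]            # remembered, not recomputed
    value = expensive_evaluation(board)
    cache[board] = value               # rote-learn the result
    return value

def expensive_evaluation(board):
    return sum(board)                  # stand-in for a costly computation

print(evaluate_position((1, 2, 3)))    # computed -> 6
print(evaluate_position((1, 2, 3)))    # retrieved from the cache -> 6
```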
Learning by taking advice
 Learning by taking advice refers to a method where an artificial intelligence system
improves its performance by actively seeking and incorporating guidance from an
external source, such as a human expert. The system acts on high-level instructions
and translates them into actionable steps within its decision-making process. This
approach allows the AI to leverage existing knowledge without needing to learn
everything from scratch.
Key points about learning by taking advice:
•Expert guidance: The primary source of “advice” is usually a human
expert who provides insights, strategies, or rules that the AI can apply to its
problem-solving process.
•High-level instructions: Advice is often given in a general or abstract
form, requiring the AI to interpret and translate it into specific actions
within its domain.
•Inference and reasoning: To effectively utilize advice, the AI needs to
reason and infer the underlying meaning of the guidance to apply it
accurately in different situations.
•Knowledge base integration: The received advice is typically integrated
into the AI’s knowledge base, allowing it to leverage this information
alongside its existing knowledge to make informed decisions.
General learning Model
Machine learning
Machine learning is important because it allows computers to learn from
data and improve their performance on specific tasks without being
explicitly programmed. This ability to learn from data and adapt to new
situations makes machine learning particularly useful for tasks that
involve large amounts of data, complex decision-making, and dynamic
environments.
Supervised machine learning
This type of ML involves supervision, where machines are trained on
labeled datasets and enabled to predict outputs based on the
provided training.
•The labeled dataset specifies that some input and output
parameters are already mapped. Hence, the machine is trained with
the input and corresponding output.
•A device is made to predict the outcome using the test dataset in
subsequent phases.
•For example, consider an input dataset of parrot and crow images.
Initially, the machine is trained to understand the pictures, including
the parrot and crow’s color, eyes, shape, and size. Post-training, an
input picture of a parrot is provided, and the machine is expected to
identify the object and predict the output. The trained machine
checks for the various features of the object, such as color, eyes,
shape, etc., in the input picture, to make a final prediction. This is
the process of object identification in supervised machine learning.
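The parrot/crow example can be sketched with a minimal supervised learner; here a 1-nearest-neighbour classifier over made-up (colour, size) feature values stands in for a full model:

```python
# Minimal supervised learning: 1-nearest-neighbour over labelled examples.
# The feature values are toy numbers, purely illustrative.

def predict(training_data, x):
    # Return the label of the closest training example.
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    _, label = min(training_data, key=lambda pair: dist(pair[0], x))
    return label

training_data = [
    ((0.9, 0.2), "parrot"),   # e.g. green colour score, small size
    ((0.8, 0.3), "parrot"),
    ((0.1, 0.9), "crow"),     # e.g. black colour score, larger size
    ((0.2, 0.8), "crow"),
]
print(predict(training_data, (0.85, 0.25)))   # -> 'parrot'
```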
• Unsupervised machine learning
• Unsupervised learning refers to a learning technique that’s
devoid of supervision. Here, the machine is trained using an
unlabeled dataset and is enabled to predict the output without
any supervision.
• An unsupervised learning algorithm aims to group the
unsorted dataset based on the input’s similarities, differences,
and patterns.
• For example, consider an input dataset of images of a fruit-
filled container. Here, the images are not known to the
machine learning model. When we input the dataset into the
ML model, the task of the model is to identify the pattern of
objects, such as color, shape, or differences seen in the input
images and categorize them. Upon categorization, the machine
then predicts the output as it gets tested with a test dataset.
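The grouping idea can be sketched with a tiny 2-means clustering loop over one illustrative feature value per image (e.g. a colour score); no labels are used:

```python
# Minimal unsupervised grouping: a few iterations of 2-means clustering
# on 1-D toy feature values. Assumes neither group becomes empty.

def two_means(points, iters=10):
    c1, c2 = points[0], points[-1]          # initial cluster centres
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1 = sum(g1) / len(g1)              # recompute centres
        c2 = sum(g2) / len(g2)
    return sorted(g1), sorted(g2)

points = [0.1, 0.15, 0.2, 0.8, 0.85, 0.9]
print(two_means(points))   # -> ([0.1, 0.15, 0.2], [0.8, 0.85, 0.9])
```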
• Semi-supervised learning
• Semi-supervised learning comprises characteristics of
both supervised and unsupervised machine learning. It
uses the combination of labeled and unlabeled datasets
to train its algorithms.
• Using both types of datasets, semi-supervised learning
overcomes the drawbacks of the options mentioned
above.
• Consider an example of a college student. A student
learning a concept under a teacher’s supervision in
college is termed supervised learning. In unsupervised
learning, a student self-learns the same concept at
home without a teacher’s guidance. Meanwhile, a
student revising the concept after learning under the
direction of a teacher in college is a semi-supervised
form of learning.
• Reinforcement learning
• Reinforcement learning is a feedback-based process. Here,
the AI component automatically takes stock of its
surroundings by trial and error, takes action, learns
from experiences, and improves performance.
• The component is rewarded for each good action and
penalized for every wrong move. Thus, the reinforcement
learning component aims to maximize the rewards by
performing good actions.
• Unlike supervised learning, reinforcement learning lacks
labeled data, and the agents learn via experiences only.
Consider video games. Here, the game specifies the
environment, and each move of the reinforcement agent
defines its state. The agent is entitled to receive feedback
via punishment and rewards, thereby affecting the overall
game score. The ultimate goal of the agent is to achieve a
high score.
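The reward/penalty loop can be sketched with tabular Q-learning on a toy environment; the corridor environment, actions, and parameter values below are all illustrative:

```python
import random

# Q-learning on a 1-D corridor: states 0..4, reward for reaching state 4.
random.seed(0)
n_states, actions = 5, [-1, +1]            # move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1      # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action choice: mostly exploit, sometimes explore
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s_next == n_states - 1 else 0.0
        # Reward propagates back along the path via the Q-update
        best_next = max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy moves right from every state
policy = [max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)]
print(policy)   # -> [1, 1, 1, 1]
```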