
Introduction to

Artificial Intelligence
20CS540
Professional Core Course
4 Credits : 4:0:0
Course Outcomes
CO1: Analyze the problem state space, solution state space and path for the problem.

CO2: Compare and choose efficient search algorithms to suit the problem needs.

CO3: Understand the use of predicate logic for knowledge representation and apply it to problem solving.

CO4: Build Knowledge base for different machine learning problems using different forms of knowledge
representation.

CO5: Analyze different game playing algorithms and apply minimization and maximization techniques to
enhance machine capability to win the game. Design a mini expert system.
Unit 1
Introduction to AI:
Introduction to Artificial intelligence, History of AI, State of the art,
Domains of AI problems. Problem spaces and search, State space
search diagram and rule representation for Cannibal and Missionary
Problem, Water Jug Problem, Farmer, Cow, Tiger and Grass, 8 Puzzle
problems, TSP.
Unit 2
Problem Search (Uninformed Search): BFS algorithm and DFS
algorithm, Advantages of BFS example, Production system
characteristics.
Heuristic Search (Informed Search): Generate and test, Hill climbing
algorithms, Best-first search, OR graphs, A* algorithm, Problem
reduction, Means-ends analysis.
Unit 3
Knowledge Representation: Representations and Mappings,
Approaches to knowledge representation, representational adequacy,
Inferential adequacy, inferential efficiency, acquisition efficiency,
Issues in knowledge representation.
Predicate logic: Representing simple facts in logic, Representing
instance and ISA relationships, Computable functions and predicates,
resolution, the basis of resolution, unification algorithm.
Unit 4
Types of Knowledge:
Representing knowledge using rules, Procedural Vs Declarative
knowledge, Logic programming, forward and backward reasoning,
semantic nets, frames, scripts and conceptual dependency.
Unit 5
Game playing:
Overview, The min-max search procedure, Min-Max algorithm, alpha
beta cut-offs, planning and its components, Goal stack planning.
Expert Systems.
Text Books:
1. Elaine Rich, Kevin Knight: Artificial Intelligence, 3rd edition, Tata McGraw Hill publications
2. Stuart J. Russell, Peter Norvig: Artificial Intelligence: A Modern Approach, 3rd edition, Pearson publications

Reference Books:
1. Charu C. Aggarwal: Artificial Intelligence, Springer publications, June 2021
2. Winston Patrick Henry: Artificial Intelligence, 3rd edition, Pearson Education

Web Resources:
NPTEL :
https://nptel.ac.in/courses/106/105/106105077/
NPTEL : https://onlinecourses.nptel.ac.in/noc21_cs42/
Course Tutor Details:
Prof. Rakshitha R
Assistant Professor
Dept. of Computer Science and Engineering
SJCE, JSS S&TU
Mysuru- 6

Email: [email protected]
Mobile: 9743007435
CO vs PO and PSO mapping
UNIT - 1
Introduction to AI:
Introduction to Artificial intelligence, History of AI, State of the art,
Domains of AI problems. Problem spaces and search, State space
search diagram and rule representation for Cannibal and Missionary
Problem, Water Jug Problem, Farmer, Cow, Tiger and Grass, 8 Puzzle
problems, TSP.
What is Intelligence?
Intelligence is the ability of a system to calculate, reason, perceive
relationships and analogies, learn from experience, store and retrieve
information from memory, solve problems, comprehend complex ideas, use
natural language fluently, classify, generalize, and adapt to new situations.
Obvious question
• What is AI?
• Programs that behave externally like humans?
• Programs that operate internally as humans do?
• Computational systems that behave intelligently?
• Rational behaviour?
ARTIFICIAL INTELLIGENCE
Definition:
• Artificial Intelligence is the study of how to make computers do things which, at the moment, people do better.
• Artificial Intelligence is concerned with the design of intelligence in an artificial device.
• The term was coined by McCarthy in 1956.
• Artificial Intelligence vs. Natural Intelligence
What is AI?
• AI is an attempt at the reproduction of human reasoning and intelligent behavior by computational methods.
(Figure: intelligent behavior, as produced by humans and by computers)
What is AI? (R&N)
• A discipline that systematizes and automates reasoning processes to create machines that:
• Act like humans / Act rationally
• Think like humans / Think rationally
ARTIFICIAL INTELLIGENCE

• Artificial Intelligence (AI) is a branch of science which deals with helping machines find
solutions to complex problems in a more human-like fashion. This generally involves borrowing
characteristics from human intelligence and applying them as algorithms in a computer-friendly way.
• Stuart Russell and Peter Norvig: AI has to do with smart programs, so let's get on and write some.
• John McCarthy: "The science and engineering of making intelligent machines, especially
intelligent computer programs."
• Claudson Bornstein: AI is the science of common sense.
• Douglas Baker: AI is the attempt to make computers do what people think computers cannot do.
• Elaine Rich and Kevin Knight: AI is the study of how to make computers do things at which,
at the moment, people are better.
• Winston, 1992: "The study of the computations that make it possible to perceive, reason, and act."
• Acting humanly: the Turing Test, proposed by Alan Turing.
• Haugeland, 1985: "The exciting new effort to make computers think… machines with minds,
in the full and literal sense." (Thinking humanly)
• Charniak and McDermott, 1985: "The study of mental faculties through the use of
computational models." (Thinking rationally)
• Nilsson, 1998: AI is concerned with intelligent behaviour in artefacts. (Acting rationally)
We can define AI as:
"It is a branch of computer science by which we can create intelligent machines which can
behave like humans, think like humans, and are able to make decisions."

"The two most fundamental concerns of AI researchers are knowledge representation and
search”
• “knowledge representation … addresses the problem of capturing in a language…suitable
for computer manipulation”
• “Search is a problem-solving technique that systematically explores a space of problem
states”.

Luger, G.F. Artificial Intelligence: Structures and Strategies for Complex Problem Solving
• Intelligence is intangible. It is composed of:
Reasoning
Learning
Problem Solving
Perception
Linguistic Intelligence
The objectives of AI research are to incorporate the following aspects into
machines
Reasoning,
knowledge representation,
Planning, Learning,
Natural language processing, Realization,
Ability to move and manipulate objects.
Goals of Artificial Intelligence : Following are the main goals of Artificial Intelligence:
Replicate human intelligence
Solve Knowledge-intensive tasks
An intelligent connection of perception and action
Building a machine which can perform tasks that require human intelligence, such as:
• Proving a theorem
• Playing chess
• Plan some surgical operation
• Driving a car in traffic, Robotic vehicles
• Creating some system which can exhibit intelligent behavior, learn new things by
itself, demonstrate, explain, and can advise to its user.
• Applications of AI
• Game Playing − AI plays an important role in enabling machines to reason about the large
number of possible positions in strategic games, based on deep knowledge; for example Go,
chess, river crossing, and the N-queens problem.
• Natural Language Processing − Interacting with a computer that understands natural
language spoken by humans.
• Expert Systems − Machines or software that provide explanations and advice to users.
• Vision Systems − Systems that understand, explain, and describe visual input on the
computer.
• Speech Recognition − Some AI-based speech recognition systems can hear speech, express
it as sentences, and understand its meaning while a person talks; for example Siri and
Google Assistant.
• Handwriting Recognition − Handwriting recognition software reads text written on paper,
recognizes the shapes of the letters, and converts it into editable text.
• Intelligent Robots − Robots are able to perform the instructions given by a human.
Major Goals
• Knowledge reasoning
• Planning
• Machine Learning
• Natural Language Processing
• Computer Vision
• Robotics
Main Areas of AI
 Knowledge representation (including formal logic)
 Search, especially heuristic search (puzzles, games)
 Planning
 Reasoning under uncertainty, including probabilistic reasoning
 Learning
 Agent architectures
 Robotics and perception
 Natural language processing
 Constraint satisfaction
 Expert systems
WHY AI?
Advantages of Artificial Intelligence
High accuracy with fewer errors: AI systems are less error-prone and highly accurate, because
they take decisions based on prior experience or information.
High speed: AI systems can make decisions very quickly; this is why an AI system can beat a
chess champion at chess.
High reliability: AI machines are highly reliable and can perform the same action multiple
times with high accuracy.
Useful for risky areas: AI machines can be helpful in situations such as defusing a bomb or
exploring the ocean floor, where employing a human would be risky.
Digital assistants: AI can provide digital assistance to users; for example, AI technology is
currently used by various e-commerce websites to show products matching customer requirements.
Useful as a public utility: AI can be very useful for public utilities, such as self-driving
cars that make journeys safer and hassle-free, facial recognition for security purposes, and
natural language processing for communicating with humans in human language.
Disadvantages of Artificial Intelligence
Every technology has some disadvantages, and the same is true of Artificial Intelligence.
• High cost: The hardware and software requirements of AI are very costly, and AI requires a
lot of maintenance to meet current-world requirements.
• Can't think outside the box: Even though we are making smarter machines with AI, they
cannot work outside the box: a robot will only do the work for which it is trained or
programmed.
• No feelings and emotions: An AI machine can be an outstanding performer, but it has no
feelings, so it cannot form an emotional attachment with humans and may sometimes be
harmful to users if proper care is not taken.
• Increased dependency on machines: As technology advances, people are becoming more
dependent on devices and hence losing some of their mental capabilities.
• No original creativity: Humans are creative and can imagine new ideas, but AI machines
cannot match this power of human intelligence and cannot be creative and imaginative.
Difference between AI , ML and DL
APPLICATIONS OF AI
Google search engine recommendations
Healthcare
Face Recognition
Sentiment Analysis
Siri, Alexa, …
Self-driving cars
Recommendation systems
Geo satellites
Bits of History
 1956: The name "Artificial Intelligence" is coined
 60s: Search and games, formal logic and theorem proving
 70s: Robotics, perception, knowledge representation, expert systems
 80s: More expert systems; AI becomes an industry
 90s: Rational agents, probabilistic reasoning, machine learning
 00s: Systems integrating many AI methods, machine learning, reasoning under uncertainty, robotics again
The AI Problems
• Game playing
• Theorem Proving
• Common sense Reasoning
• Perception (Vision and Speech)
• Natural Language understanding.
Domain of artificial intelligence
Task Domains of Artificial Intelligence

Mundane (Ordinary) Tasks:
• Perception: Computer Vision; Speech, Voice
• Natural Language Processing: Understanding, Language Generation, Language Translation
• Common Sense Reasoning
• Robotics: Locomotion

Formal Tasks:
• Mathematics: Geometry, Logic, Integration and Differentiation
• Games: Go, Chess (Deep Blue), Checkers
• Verification, Theorem Proving

Expert Tasks:
• Engineering: Fault Finding, Manufacturing, Monitoring
• Scientific Analysis
• Financial Analysis
• Medical Diagnosis
• Planning, Creativity
TASK DOMAIN OF AI
Problems, Problem Spaces, and Search:
Problem: a question raised which requires a solution.

To build a system to solve a particular problem, we need to:

• Define the problem precisely – find the input situations as well as the final situations for an
acceptable solution to the problem.

• Analyze the problem – find the few important features that may affect the appropriateness
of various possible techniques for solving the problem.

• Isolate and represent the task knowledge necessary to solve the problem.

• Choose the best problem-solving technique(s) and apply it to the particular problem.
State Space Search
• State space means the set of states a problem can pass through, i.e., the
initial state, some intermediate states, and the final state.
• Problem state: a set of elements which incorporates the information about
the past history of the problem that is sufficient to determine the next
state after applying an operator.
• State space: the set of all problem states along with the relationships
between them.
• The state space representation of an AI problem is the structured
representation of an unstructured problem; it consists of states, operators
(actions) for changing states, and knowledge contained explicitly and
implicitly in the problem.
• Methodology of state space approach
1. Make a structured representation of defining a state
2. Identify the initial state
3. Identify the goal state
4. Consider the operators that may be required to change the state
5. Represent the knowledge / information contained in the problem in
a convenient way
6. Search for a path from the initial state to the goal state.
• A state-space problem consists of a set of states;
• a distinguished set of states called the start states;
• a set of actions available to the agent in each state;
• an action function that, given a state and an action, returns a new state;
• a set of goal states, often specified as a Boolean function, goal(s), that is true when s is a goal
state; and
• a criterion that specifies the quality of an acceptable solution. For example, any sequence of
actions that gets the agent to the goal state may be acceptable, or there may be costs
associated with actions and the agent may be required to find a sequence that has minimal
total cost. This is called an optimal solution.
• a solution is a sequence of actions that will get the agent from its current state to a goal state.
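The components listed above (start state, actions, action function, goal test) map directly onto code. A minimal generic sketch in Python, using breadth-first exploration; all names here are illustrative, not from the source:

```python
from collections import deque

def search(start, actions, result, is_goal):
    """Generic state-space search: breadth-first over (state, path) pairs.
    start: initial state; actions(s): actions legal in state s;
    result(s, a): the new state after action a; is_goal(s): goal test."""
    frontier = deque([(start, [])])           # FIFO queue of (state, action path)
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path                       # the action sequence is a solution
        for a in actions(state):
            nxt = result(state, a)
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [a]))
    return None                               # no solution exists

# Toy usage: reach 5 from 0 using "+1" / "+2" actions.
plan = search(0, lambda s: ["+1", "+2"],
              lambda s, a: s + (1 if a == "+1" else 2),
              lambda s: s == 5)
```

Because states are de-duplicated in `visited`, the same state reached through many different paths is expanded only once.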
State space representation of a problem:
• All the states the system can be in are represented as nodes of a graph.
• An action that can change the system from one state to another (e.g. a move
in a game) is represented by a link from one node to another.
• Links may be unidirectional (e.g. Xs and Os, chess, can't go back) or bi-
directional (e.g. geographic move).
Search for a solution. A solution might be:
• Any path from start state to goal state.
• The best (e.g. lowest cost) path from start state to goal state (e.g. Travelling
salesman problem).
• It may be possible to reach the same state through many different paths
(obviously true in Xs and Os).
• There may be loops in the graph (can go round in circle). No loops in Xs and
Os.
• A state space is represented by a four-tuple [N, A, S, GD], where:
• N is a set of nodes or states of the graph. These correspond to the states in a
problem-solving process.
• A is the set of arcs between the nodes. These correspond to the steps or
moves in a problem-solving process.
• S , a nonempty subset of N , contains the start state(s) of the problem.
• GD , a nonempty subset of N , contains the goal state(s) of the problem. The
states in GD are described using either:
A measurable property of the states encountered in the search.
A property of the path developed in the search.
• A solution path is a path through this graph from a node in S to a node in GD.
8-Puzzle Problem
The 8-Puzzle is a square tray in which 8 square tiles are placed; the
remaining 9th square is uncovered. Each tile has a number on it. A tile
that is adjacent to the blank space can be slid into that space. The goal
is to transform the starting position into the goal position by sliding
the tiles around.

Solution: State Space: The state space for the problem can be written as
a set of states, where each state is a position of the tiles on the tray.
8 Puzzle problem
Initial State Goal State
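A hedged sketch of the 8-puzzle state space in Python, assuming the usual encoding (a state is a 9-tuple in row-major order, with 0 marking the blank):

```python
def successors(state):
    """Successor states of an 8-puzzle position.
    state: tuple of 9 ints, row-major 3x3 board, 0 marks the blank."""
    b = state.index(0)                  # position of the blank
    row, col = divmod(b, 3)
    moves = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # up, down, left, right
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            t = r * 3 + c
            nxt = list(state)
            nxt[b], nxt[t] = nxt[t], nxt[b]    # slide the adjacent tile into the blank
            moves.append(tuple(nxt))
    return moves

# A blank in a corner has 2 legal moves; in the centre, 4.
corner = (0, 1, 2, 3, 4, 5, 6, 7, 8)
centre = (1, 2, 3, 4, 0, 5, 6, 7, 8)
```

Plugging `successors` into any of the search procedures in this unit turns the puzzle into a pure state-space search.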
Travelling salesman problem (TSP)
Starting at A, find the shortest path through all the cities, visiting each city exactly once and returning to A.

What is the shortest path?


TSP
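Under the state-space view, brute-force TSP enumerates every permutation of the remaining cities; it is O(n!), so feasible only for small n. A sketch with an illustrative distance matrix:

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Shortest tour starting and ending at city 0; dist is an n x n matrix."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):      # fix city 0 as the start
        tour = (0,) + perm + (0,)
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# Example 4-city symmetric distance matrix (illustrative values).
d = [[0, 10, 15, 20],
     [10, 0, 35, 25],
     [15, 35, 0, 30],
     [20, 25, 30, 0]]
```

For this matrix the shortest tour is 0 → 1 → 3 → 2 → 0 with length 80.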
Representing a problem with the state-space representation needs:
(1). A set of states of the problem
(2). A set of operators to operate between states of the problem
(3). Initial state and final state(goal)
Cannibal and Missionary Problem
• Let the state be defined as ((NC,NM)A; (NC,NM)B; BA, BB)
• NC = Number of Cannibals; NM = Number of Missionaries
• BA = Source Bank; BB = Destination Bank
• Let the Initial state be defined as ((3C,3M)A;(0C,0M)B; A,B)
• and the Goal state be defined as ((0C,0M)A;(3C,3M)B; B,A)

• RULES
• R1: Row(2C,0M,A,B); if there are 2 or more cannibals on bank A, the boat is at bank A,
and NM on bank B >= NC or there are 0 missionaries (at any given time)
• R2: Row(2C,0M,B,A); if there are 2 or more cannibals on bank B, the boat is at bank B,
and NM on bank A >= NC or there are 0 missionaries
• R3: Row(0C,2M,A,B); if there are 2 or more missionaries on bank A, the boat is at
bank A, and the number of missionaries on bank B >= NC
• R4: Row(0C,2M,B,A); if there are 2 or more missionaries on bank B, the boat is at
bank B, and NM on both bank A and bank B >= NC
• R5: Row(1C,0M,A,B); if there are 1 or more cannibals on bank A and NC <= NM on
bank B; the boat should be on side A
• R6: Row(1C,0M,B,A); if there are 1 or more cannibals on bank B and NC <= NM on
bank A; the boat should be on side B
• R7: Row(1C,1M,A,B); if there are 1 or more cannibals and 1 or more missionaries on
bank A; NM >= NC on bank B; the boat should be at bank A
• R8: Row(1C,1M,B,A); if there are 1 or more cannibals and 1 or more missionaries on
bank B; NM >= NC on bank A; the boat should be at bank B
Rule Operator Condition
R1 Row(2C,0M,A,B): if state = ((3C,3M)A;(0C,0M)B; A) or ((2C,3M)A;(1C,0M)B; A) or ((2C,0M)A;(1C,3M)B; A)
R2 Row(2C,0M,B,A): if state = ((3C,3M)B;(0C,0M)A; B) or ((2C,3M)B;(1C,0M)A; B) or ((2C,0M)B;(1C,3M)A; B)
R3 Row(0C,2M,A,B): if state = ((1C,3M)A;(2C,0M)B; A) or ((2C,2M)A;(1C,1M)B; A)
R4 Row(0C,2M,B,A): if state = ((1C,3M)B;(2C,0M)A; B) or ((2C,2M)B;(1C,1M)A; B)
R5 Row(1C,0M,A,B): if state = ((3C,3M)A;(0C,0M)B; A) or ((2C,3M)A;(1C,0M)B; A) or ((2C,0M)A;(1C,3M)B; A) or ((1C,3M)A;(2C,0M)B; A)
R6 Row(1C,0M,B,A): if state = ((3C,3M)B;(0C,0M)A; B) or ((2C,3M)B;(1C,0M)A; B) or ((2C,0M)B;(1C,3M)A; B) or ((1C,3M)B;(2C,0M)A; B)
R7 Row(1C,1M,A,B): if state = ((3C,3M)A;(0C,0M)B; A) or ((2C,2M)A;(1C,1M)B; A) or ((1C,1M)A;(2C,2M)B; A)
R8 Row(1C,1M,B,A): if state = ((3C,3M)B;(0C,0M)A; B) or ((2C,2M)B;(1C,1M)A; B) or ((1C,1M)B;(2C,2M)A; B)
R9 Row(0C,1M,A,B): if state = ((2C,3M)A;(1C,0M)B; A)
R10 Row(0C,1M,B,A): if state = ((2C,3M)B;(1C,0M)A; B)
Cannibals and Missionaries
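The rules above can be applied mechanically by a search. A compact BFS solver sketch, representing a state as (cannibals on A, missionaries on A, boat bank) and assuming the standard 3-and-3 problem with a two-person boat:

```python
from collections import deque

def solve_missionaries_cannibals():
    """BFS over states (C_A, M_A, boat); the boat carries 1 or 2 people.
    Safe: on each bank, missionaries are never outnumbered (or absent)."""
    def safe(c, m):
        # check both banks: (c, m) on A and (3-c, 3-m) on B
        for cc, mm in ((c, m), (3 - c, 3 - m)):
            if mm > 0 and cc > mm:
                return False
        return True

    start, goal = (3, 3, 'A'), (0, 0, 'B')
    frontier = deque([(start, [start])])
    visited = {start}
    while frontier:
        (c, m, boat), path = frontier.popleft()
        if (c, m, boat) == goal:
            return path
        for dc, dm in ((2, 0), (0, 2), (1, 1), (1, 0), (0, 1)):  # boat loads
            if boat == 'A':
                nc, nm, nb = c - dc, m - dm, 'B'
            else:
                nc, nm, nb = c + dc, m + dm, 'A'
            if 0 <= nc <= 3 and 0 <= nm <= 3 and safe(nc, nm):
                s = (nc, nm, nb)
                if s not in visited:
                    visited.add(s)
                    frontier.append((s, path + [s]))
    return None

solution = solve_missionaries_cannibals()
```

BFS finds the classic shortest solution of 11 crossings (12 states including the start); the five boat loads correspond to rules R1/R2, R3/R4, R7/R8, R5/R6, R9/R10.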
Tic Tac Toe
(Figure: Representing Xs and Os as a state-space problem. Image courtesy of Ralph Morelli.)
A Water Jug Problem
Assumptions
Production Rules of Water Jug Problem
Production rules (conti)
Solution to Water Jug Problem
• State Space Representation
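The production rules and solution on the slides above are given only as images; a sketch of the commonly used 4-gallon/3-gallon formulation (goal: exactly 2 gallons in the 4-gallon jug), with the rules expressed as successor states:

```python
from collections import deque

def solve_water_jug(cap_x=4, cap_y=3, target=2):
    """BFS over states (x, y): gallons in the 4- and 3-gallon jugs.
    Production rules: fill either jug, empty either jug, pour one into the other."""
    def successors(x, y):
        return {
            (cap_x, y), (x, cap_y),        # fill X, fill Y
            (0, y), (x, 0),                # empty X, empty Y
            (min(x + y, cap_x), y - (min(x + y, cap_x) - x)),  # pour Y into X
            (x - (min(x + y, cap_y) - y), min(x + y, cap_y)),  # pour X into Y
        }
    start = (0, 0)
    frontier, visited = deque([(start, [start])]), {start}
    while frontier:
        (x, y), path = frontier.popleft()
        if x == target:
            return path
        for s in successors(x, y):
            if s not in visited:
                visited.add(s)
                frontier.append((s, path + [s]))
    return None

steps = solve_water_jug()
```

For this variant the shortest solution takes 6 rule applications (7 states), e.g. (0,0) → (0,3) → (3,0) → (3,3) → (4,2) → (0,2) → (2,0).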
Tic Tac toe

Example: Representing Xs and Os as state-space problem.


ALGORITHM 1
The board is a nine-element vector; each element contains:

• 0 for a blank
• 1 for an X move
• 2 for an O move

The computer may play as the X or the O player. The first player is always X.

Move Table
It is a vector of 3^9 elements, each of which is a nine-element vector
representing a board position.
There are a total of 3^9 (19683) elements in the move table.
Move Table
Index Current Board position New Board position
0 000000000 000010000
1 000000001 020000001
2 000000002 000100002
3 000000010 002000010
.
.
Algorithm steps
1.View the vector (board) as a ternary number and convert it to its
corresponding decimal number.
2.Use the computed number as an index into the move table and
access the vector stored there.
3.The vector selected in step 2 represents the way the board will
look after the move that should be made. So set board equal to
that vector.
Empty board

The board looks like 000 000 000 (a ternary number); convert it into a
decimal number. The decimal number is 0.

Use the computed number, i.e. 0, as an index into the move table and
access the vector stored in New Board Position.

The new board position is 000 010 000.

This process continues until a player wins or the game ties.
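Step 1's ternary-to-decimal conversion can be sketched as follows (names are illustrative):

```python
def board_to_index(board):
    """Treat the nine-element board vector as a ternary number (first element
    is the most significant digit) and return its decimal value."""
    index = 0
    for cell in board:          # cell is 0 (blank), 1 (X) or 2 (O)
        index = index * 3 + cell
    return index

empty = [0] * 9
after_x_centre = [0, 0, 0, 0, 1, 0, 0, 0, 0]   # X has played the centre square
```

The empty board maps to index 0, matching the move-table walk-through above; a lone X in the centre contributes 3^4 = 81.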
Advantages and Disadvantages of this
algorithm
Advantage:
This program is efficient in terms of time.

Disadvantages:
• It takes more space to store the table and requires considerable effort to
calculate the decimal numbers.
• It does a lot of work specifying all the entries in the move table.
• It is unlikely that the required move table can be entered without errors.
• To extend the game to three dimensions, 3^27 board positions would have to
be stored, consuming huge amounts of memory.
Algorithm steps:
Final Approach
 Mini-max procedure –
• A winning strategy is to minimize the maximum potential gain of your
opponent.
• A recursive backtracking algorithm used in decision making and game
theory.
• Two players: MAX (selects the maximum value) and MIN (selects the
minimum value).
• Uses DFS to explore the game tree.
• It is optimal.
• Space and time complexity is O(b^m), where b is the branching factor of
the game tree and m is the maximum depth.
Minimax procedure
Implementation
Our game consists of these ten functions. Where b is board and m is move.
• INITIAL STATE(): returns empty matrix.
• COUNT(b): returns count of both X and O.
• PLAYER(b): returns which player to move in state b
• ACTIONS(b): returns legal moves in state b
• RESULT(b, m): returns the state after action m is taken in state b
• TERMINAL(b): checks if state b is a terminal state
• UTILITY(b): final numerical value for terminal state b
• MINIMAX(b): returns best move on the current board
• MAX_VALUE(b): returns the maximum value on the board, calls recursively min_value
• MIN_VALUE(b): returns the minimum value on the board, calls recursively max_value
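Using the function names listed above, a minimal minimax sketch for tic-tac-toe; the bodies are illustrative, since the slide specifies only the contracts:

```python
X, O, EMPTY = "X", "O", None
WINS = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def PLAYER(b):                        # X moves first; players alternate
    return X if b.count(X) == b.count(O) else O

def ACTIONS(b):                       # indices of the empty squares
    return [i for i in range(9) if b[i] is EMPTY]

def RESULT(b, m):                     # board after the current player takes m
    nb = list(b)
    nb[m] = PLAYER(b)
    return nb

def winner(b):
    for i, j, k in WINS:
        if b[i] is not EMPTY and b[i] == b[j] == b[k]:
            return b[i]
    return None

def TERMINAL(b):
    return winner(b) is not None or not ACTIONS(b)

def UTILITY(b):                       # +1 if X wins, -1 if O wins, 0 for a draw
    w = winner(b)
    return 1 if w == X else -1 if w == O else 0

def MAX_VALUE(b):
    if TERMINAL(b):
        return UTILITY(b)
    return max(MIN_VALUE(RESULT(b, m)) for m in ACTIONS(b))

def MIN_VALUE(b):
    if TERMINAL(b):
        return UTILITY(b)
    return min(MAX_VALUE(RESULT(b, m)) for m in ACTIONS(b))

def MINIMAX(b):                       # best move for the player to move in b
    if PLAYER(b) == X:
        return max(ACTIONS(b), key=lambda m: MIN_VALUE(RESULT(b, m)))
    return min(ACTIONS(b), key=lambda m: MAX_VALUE(RESULT(b, m)))
```

For example, with X on squares 0 and 1 and O on squares 3 and 4, MINIMAX correctly picks square 2, completing X's top row.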
End of unit 1
Unit 2
SEARCH STRATEGY
• Expanding the current state is applying each legal action to the
current state, thereby generating a new set of states to form a search
tree.
• The set of leaf nodes available for expansion is called fringe or frontier
or openlist.
• A test for goal node is performed.
PRODUCTION SYSTEMS
The Production System helps in structuring AI programs in a way that facilitates
describing and performing search processes.

It has 4 basic components:

• A set of rules each consisting of a left side that determines the applicability of the
rule and a right side that describes the operation to be performed if the rule is
applied.
• A Knowledge/database that contain whatever information is appropriate for the
particular task.
• A control strategy that specifies the order in which the rules will be compared with
the database and a way to resolve conflicts when several rules match at once.
• A rule applier.
CONTROL STRATEGIES
A control strategy specifies the order in which rules are compared to the
database, and it resolves conflicts in minimal time.
The requirements of a good strategy are:
• It causes motion – in the water jug problem, choose randomly from among
the applicable rules instead of always starting from the top of the list of rules.
• It is systematic – a strategy that is not systematic may arrive at the same
state several times during the process and use many more steps than are
necessary; a good strategy avoids this.
• A search strategy is defined by picking the order of node expansion.
• Strategies are evaluated with respect to the following parameters:
Completeness: does it always find a solution if one exists?
Systematicity: does it visit each state only once?
Time complexity: number of nodes generated
Space complexity: maximum number of nodes in memory
Optimality: does it always find the least-cost path?
• Time and space complexity are measured in terms of:
b: maximum branching factor
d: depth of the least-cost solution
m: maximum depth of the state space

• The typical measure is the size of the state space graph, |V | + |E|, where V is the
set of vertices (nodes) of the graph and E is the set of edges (links).
• In AI, the branching factor b is the increase in the number of nodes on the fringe
each time a fringe node is dequeued and replaced with its children: O(b).
• At depth k in the search tree, there exist O(b^k) nodes.
• d is the depth of the shallowest solution.
• The set of leaf nodes available for expansion is called fringe or frontier or openlist
There are two types of search techniques:

Uninformed search: also called blind, exhaustive or brute-force search; it uses
no information about the problem to guide the search and therefore may not be
very efficient. Examples: BFS, DFS.
• No additional information about states beyond that provided in the problem
definition.

Informed search: also called heuristic or intelligent search; it uses information
about the problem to guide the search, usually by estimating the distance to a
goal state, and is therefore efficient, but such a search may not always be
possible. These are heuristic search techniques.
• Problem-specific knowledge beyond the definition of the problem itself can be
used to find solutions more efficiently.
Breadth-first Search:
Breadth-first search is the most common search strategy for traversing a
tree or graph. This algorithm searches breadthwise in a tree or graph, so it
is called breadth-first search.
The BFS algorithm starts searching from the root node of the tree and
expands all successor nodes at the current level before moving to nodes of
the next level.
The breadth-first search algorithm is an example of a general-graph search
algorithm.
Breadth-first search is implemented using a FIFO queue data structure.
Breadth-First-Search

• BFS explores all the nodes at a given depth before proceeding to the next level.
• BFS is implemented using a queue.
• Algorithm: Breadth First Search
1. Create a variable called NODE_LIST and set it to the initial state.
2. Perform the following steps until a goal state is found or NODE_LIST
is empty:
(a) Remove the first element from NODE_LIST and call it E. If NODE_LIST
was empty, quit.
(b) For each way that each rule can match the state described in E do:
(i) Apply the rule to generate a new state.
(ii) If the new state is the goal state, quit and return this state.
(iii) Otherwise, add the new state to the end of NODE_LIST.
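The pseudocode above, sketched in Python for a generic graph given as an adjacency dictionary (names are illustrative):

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search; graph maps each node to its successor list.
    Returns the shortest path (fewest edges) from start to goal, or None."""
    node_list = deque([[start]])          # FIFO queue of paths (NODE_LIST)
    visited = {start}
    while node_list:
        path = node_list.popleft()        # remove the first element (E)
        if path[-1] == goal:
            return path
        for child in graph.get(path[-1], []):   # expand E by applying rules
            if child not in visited:
                visited.add(child)
                node_list.append(path + [child])
    return None

g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
```

Because the queue is FIFO, shallower paths are always dequeued before deeper ones, which is what makes BFS return a minimal-step solution.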
Advantages of BFS
BFS will provide a solution if any solution exists.
If there are multiple solutions for a given problem, then BFS will provide
the minimal solution, i.e. the one that requires the least number of steps.
Disadvantages:
It requires lots of memory since each level of the tree must be saved
into memory to expand the next level.
BFS needs lots of time if the solution is far away from the root node.
• BFS Fringe representation - If we want to visit shallower nodes before deeper
nodes, we must visit nodes in their order of insertion. Hence, we desire a
structure that outputs the oldest enqueued object to represent our fringe. For
this, BFS uses a first-in, first-out (FIFO) queue, which does exactly this.
Level 0 (root): 1 node
Level 1: b nodes
Level 2: b^2 nodes
Level 3: b^3 nodes
…
Level m: b^m nodes
• Completeness - If a solution exists, then the depth of the shallowest node d
must be finite, so BFS must eventually search this depth. Hence, it’s complete.
• Optimality - BFS is generally not optimal because it simply does not take costs
into consideration when determining which node to replace on the fringe. The
special case where BFS is guaranteed to be optimal is if all edge costs are
equivalent, because this reduces BFS to a special case of uniform cost search
• Time Complexity - We must search 1+b+b^2 +...+b^d nodes in the worst case,
since we go through all nodes at every depth from 1 to d. Hence, the time
complexity is O(b^d ).
• Space Complexity - The fringe, in the worst case, contains all the nodes in the
level corresponding to the shallowest solution. Since the shallowest solution is
located at depth d, there are O(b^d ) nodes at this depth
• Depth-first search (DFS) is a strategy for exploration that always selects the
deepest fringe node from the start node for expansion.
• Fringe representation - Removing the deepest node and replacing it on the
fringe with its children necessarily means the children are now the new
deepest nodes - their depth is one greater than the depth of the previous
deepest node. This implies that to implement DFS, we require a structure
that always gives the most recently added objects highest priority. A last-in,
first-out (LIFO) stack does exactly this, and is what is traditionally used to
represent the fringe when implementing DFS.
Depth-First-Search
• DFS algorithm is a recursive algorithm.
• It starts from root node and follows each path to its greatest depth node before moving to the
next path.
• It is implemented using STACK.
Depth-first search is a recursive algorithm for traversing a tree or graph
data structure.
It is called the depth-first search because it starts from the root node and
follows each path to its greatest depth node before moving to the next
path.
DFS uses a stack data structure for its implementation.
The process of the DFS algorithm is similar to the BFS algorithm.
DFS Algorithm
Set a variable NODE to the initial state, i.e., the root node.
Set a variable GOAL which contains the value of the goal state.
Loop each node by traversing deeply in one direction/path in search
of the goal node.
While performing the looping, start removing the elements from the
stack in LIFO order.
If the goal state is found, return goal state otherwise backtrack to
expand nodes in other direction.
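The same steps with an explicit LIFO stack; a sketch with an illustrative graph:

```python
def dfs(graph, start, goal):
    """Depth-first search with an explicit LIFO stack.
    Follows one path to its greatest depth before backtracking."""
    stack = [[start]]                     # stack of paths
    visited = {start}
    while stack:
        path = stack.pop()                # LIFO: the deepest path comes off first
        if path[-1] == goal:
            return path
        for child in reversed(graph.get(path[-1], [])):
            if child not in visited:
                visited.add(child)
                stack.append(path + [child])
    return None

g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
```

The only structural difference from the BFS sketch is `stack.pop()` in place of a FIFO `popleft()`; the `reversed` call merely makes children be tried in their listed order.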
• Completeness - Depth-first search is not complete. If there exist cycles in the state space graph, this
inevitably means that the corresponding search tree will be infinite in depth. Hence, there exists the
possibility that DFS will faithfully yet tragically get "stuck" searching for the deepest node in an infinite-
sized search tree, doomed to never find a solution.
• Optimality - Depth-first search simply finds the "leftmost" solution in the search tree without regard for
path costs, and so is not optimal.
• Time Complexity - In the worst case, depth first search may end up exploring the entire search tree.
Hence, given a tree with maximum depth m, the runtime of DFS is O(b^m).
• Space Complexity - In the worst case, DFS maintains b nodes at each of m depth levels on the fringe. This
is a simple consequence of the fact that once b children of some parent are enqueued, the nature of DFS
allows only one of the subtrees of any of these children to be explored at any given point in time. Hence,
the space complexity of DFS is O(b*m).
• Advantages of DFS
• DFS requires less memory, since only the nodes on the current path are
stored. This contrasts with BFS, where all of the tree that has been
generated so far must be stored.
• By chance (or if care is taken in ordering the alternative successor states),
DFS may find a solution without examining much of the search space at all.
This contrasts with BFS, in which all nodes at level n of the tree must be
examined before any nodes at level n+1 can be examined. This is
particularly significant if many acceptable solutions exist; DFS stops when
one of them is found.
Heuristic Search
• A heuristic search algorithm is one in which a node is selected for expansion
based on an evaluation function, f(n).
• The evaluation function is constructed as a cost estimate, so the node with
the lowest evaluation is expanded first.
• The choice of f determines the search strategy.
• Heuristic functions are the most common form in which additional
knowledge of the problem is imparted to the search algorithm.
• Most best-first algorithms include as a component of f a heuristic function,
denoted h(n):
h(n) = estimated cost of the cheapest path from the state at node n
to a goal state.
if n is a goal node, then h(n)=0
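As a concrete example, a common h(n) for the 8-puzzle (seen in Unit 1) is the Manhattan distance of each tile from its goal position; a minimal sketch, with the tuple state encoding chosen for illustration:

```python
def manhattan_h(state, goal):
    """h(n): sum of Manhattan distances of each tile from its goal cell.
    States are 9-tuples in row-major order; 0 is the blank (not counted)."""
    total = 0
    for tile in range(1, 9):
        i, j = divmod(state.index(tile), 3)      # current (row, col) of tile
        gi, gj = divmod(goal.index(tile), 3)     # goal (row, col) of tile
        total += abs(i - gi) + abs(j - gj)
    return total

goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
print(manhattan_h(goal, goal))                        # 0 at a goal node, as required
print(manhattan_h((1, 2, 3, 4, 5, 6, 7, 0, 8), goal))  # one tile one step away -> 1
```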
Heuristic Search Algorithms
• Generate and Test,
• Hill climbing algorithms,
• Best first search,
• A* algorithm,
• AO* algorithm (Problem reduction),
• Means end analysis.
Is the problem Decomposable?
The two subproblems are not independent.
Can Solution Steps be Ignored or Undone?
• The important classes of problems:
1. Ignorable: solution steps can be ignored (e.g., theorem proving).
2. Recoverable: solution steps can be undone (e.g., 8-puzzle).
3. Irrecoverable: solution steps cannot be undone (e.g., chess).
Is the problem’s universe predictable?
Is a Good Solution Absolute
or Relative?
• Consider the problem of answering questions. The question is “Is Marcus alive?”
• Any-path problems: any path that leads to the answer is acceptable, so the
solution is Absolute.
• Best-path problems: the Traveling Salesman Problem requires the best
(shortest) tour, so the solution is Relative. We cannot be sure a path is the
shortest unless we try all the other paths.
Is the Solution a State or a Path?
What is the Role of Knowledge?
• Some problems use less knowledge (e.g., chess: just the legal moves).
• Some problems use more knowledge (e.g., understanding a newspaper story).
PRODUCTION SYSTEM
CHARACTERISTICS
• There are four classes of production systems:
1. A monotonic production system is a production system in which the application of
a rule never prevents the later application of another rule that could also have
been applied at the time the first rule was selected. (rules are independent )
2. A non-monotonic production system is one in which this is not true.
3. A partially commutative production system is a production system with the
property that if the application of a particular sequence of rules transforms state P
into state Q, then any Permutation of those rules that is allowable also transforms
state P into state Q.
4. A commutative production system is a production system that is both monotonic
and partially commutative.
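A monotonic production system can be sketched as a tiny forward-chaining rule engine. The rules and facts below are hypothetical; the point is that firing a rule only ever adds facts, so no firing prevents the later application of another rule:

```python
def run_production_system(facts, rules):
    """Tiny forward-chaining production system: repeatedly fire any rule
    whose condition holds until no rule adds a new fact. Monotonic:
    firing a rule never removes facts, so it never disables another rule."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for cond, add in rules:
            # fire the rule if its condition is satisfied and it adds something new
            if cond <= facts and not add <= facts:
                facts |= add
                changed = True
    return facts

# Hypothetical rules: (condition set) -> (facts to add)
rules = [({"rain"}, {"wet_ground"}),
         ({"wet_ground"}, {"slippery"})]
print(sorted(run_production_system({"rain"}, rules)))
```

Because the system is also partially commutative, firing these rules in any allowable order yields the same final set of facts.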
CLASSES OF PRODUCTION SYSTEM
• Partially commutative , monotonic production systems are useful
for solving ignorable problems. Problems that involve creating new
things rather than changing old ones are generally ignorable.
• Nonmonotonic partially commutative systems are useful for
problems in which changes occur but can be reversed and in which
the order of operation is not critical (ex: 8 puzzle problem).
• Not partially commutative are useful for many problems in which
irreversible changes occur, such as chemical analysis. When dealing
with such systems, the order in which operations are performed is very
important in determining final output.
• Not partially commutative, non-monotonic production systems are useful for
problems in which irreversible changes occur and the order of operations
matters (e.g., playing a game of bridge).
Issues in the Design of Search Programs
• Every search process can be considered as the traversal of a tree structure.
Issues in the design of Search Process
Search – tree traversal
Node – represents a state
Lines – relationships between states
The search process finds a path (or paths).
Trees are constructed from rules that define allowable moves in the
problem space.
Most search trees are constructed implicitly by applying rules, and
explored explicitly for the solution.
Important issues in search process
1. Forward Search : Start-to-Goal
Backward Search : Goal-to- Start
2. How to select applicable rules (Matching) – production systems.
3. How to represent each node in the search tree (Knowledge
Representation problem)
4. Search tree Vs Search graph.
SOME IMPORTANT ISSUES OF SEARCH
TECHNIQUES
Search Tree vs Search Graph
Why is a graph preferred to a tree for the search
process?
HEURISTIC SEARCH TECHNIQUES
• A HEURISTIC is a technique designed to solve a problem quickly.
• A heuristic helps in decision making.
• A heuristic function gives an estimate of the cost of getting from
node n to the goal state.
GENERATE-AND-TEST
• It uses exhaustive search: DFS with backtracking.
ALGORITHM
1. Generate a possible solution. For some problems this means
generating a particular point in the problem space. For others
generating a path from the start state.
2. Test to see if this is actually the solution by comparing the chosen
point or the end point of the chosen path to the set of acceptable goal
states.
3. If solution is found quit. Otherwise return to step-1.
• It receives feedback from test procedure and is used to help
generator decide which direction to move in the State Space search.
• Properties of Good Generator
1) Complete
2) Non-Redundant
3) Informed.
For simple problems, Generate and Test is a reasonable approach.
If a solution exists it is "eventually" found, but when the problem space is
very large this may take a very long time.
It may be an exhaustive search, and there may be no guaranteed solution.
DFS with backtracking is the most straightforward way to implement Generate
and Test.
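Generate-and-Test can be sketched as a loop over a candidate generator; the small TSP-style instance below is an illustrative assumption:

```python
import itertools

def generate_and_test(candidates, is_goal):
    """Generate-and-test: enumerate candidate solutions one at a time
    and return the first that passes the test (exhaustive search)."""
    for candidate in candidates:   # step 1: generate a possible solution
        if is_goal(candidate):     # step 2: test it against the goal criterion
            return candidate       # solution found: quit
    return None                    # search space exhausted, no solution

# Toy use: find an ordering of three cities (a tour) no longer than a bound.
dist = {("A", "B"): 2, ("B", "C"): 3, ("A", "C"): 7,
        ("B", "A"): 2, ("C", "B"): 3, ("C", "A"): 7}
def tour_len(t):
    return sum(dist[(t[i], t[i + 1])] for i in range(len(t) - 1))

tours = itertools.permutations(["A", "B", "C"])
print(generate_and_test(tours, lambda t: tour_len(t) <= 5))
```

A more informed generator would order or prune candidates (the "Informed" property above) instead of enumerating blindly.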
HILL CLIMBING
• It is a variant of the Generate-and-Test algorithm that uses a heuristic function.
• Example: to reach the downtown area of an unfamiliar city, a heuristic
function is the distance between the current location and the tall
buildings; desirable states are those in which this distance is minimized.
Hill Climbing Algorithm in Artificial Intelligence
• Hill climbing is a local search algorithm which continuously moves in the direction of
increasing elevation/value to find the peak of the mountain, i.e., the best solution to the problem.
• It terminates when it reaches a peak where no neighbor has a higher value.
• Hill climbing is a technique used for optimizing mathematical problems.
Eg: the Traveling Salesman Problem, in which we need to minimize the distance traveled by the salesman.
• It is also called greedy local search, as it only looks at its immediate good neighbor state and not
beyond that.
• A node of the hill climbing algorithm has two components: state and value.
• Hill climbing is used when a good heuristic is available.
State-space Diagram for Hill Climbing:
• The state-space landscape is a graphical representation of the hill-climbing
algorithm, showing a graph between the various states of the algorithm and the
objective function/cost.
• On the Y-axis we take the function, which can be an objective function or a cost
function, with the state space on the X-axis. If the function on the Y-axis is cost,
then the goal of the search is to find the global minimum (or a local minimum).
• If the function on the Y-axis is an objective function, then the goal of the search
is to find the global maximum (or a local maximum).
Different regions in the state-space landscape:
• Local Maximum: a state which is better than its neighbor states, but
there is another state which is higher than it.
• Global Maximum: the best possible state of the state-space landscape.
It has the highest value of the objective function.
• Current state: the state in the landscape diagram where the agent is currently present.
• Flat local maximum: a flat space in the landscape where all the neighbor states of
the current state have the same value.
• Shoulder: a plateau region which has an uphill edge.
Features of Hill Climbing: Following are some main features of Hill Climbing
Algorithm:
Generate and Test variant: Hill Climbing is a variant of the Generate and
Test method. The Generate and Test method produces feedback which helps to
decide which direction to move in the search space.
Greedy approach: Hill-climbing algorithm search moves in the direction
which optimizes the cost.
No backtracking: It does not backtrack the search space, as it does not
remember the previous states.
Types of Hill Climbing Algorithm:
Simple hill Climbing:
Steepest-Ascent hill-climbing:
Stochastic hill Climbing:
Simple Hill Climbing: it evaluates one neighbor node state at a time and
selects the first one which improves the current cost, setting it as the current state.
It checks only one successor state; if that is better than the current
state, it moves, else it stays in the same state. This algorithm has the following
features:
Less time consuming
Less optimal solution, and the solution is not guaranteed
Algorithm for Simple Hill Climbing:
• Step 1: Evaluate the initial state, if it is goal state then return success and
Stop. Otherwise continue with the initial state as the current state.
• Step 2: Loop Until a solution is found or there is no new operator left to
apply.
• Step 3: Select an operator that has not yet been applied to the
current state and apply it to produce a new state.
• Step 4: Check new state:
• If it is goal state, then return success and quit.
• Else if it is better than the current state then assign new state as a current state.
• Else if not better than the current state, then return to step2.
• Step 5: Exit.
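The steps above can be sketched as follows; the one-dimensional landscape function and successor generator are illustrative assumptions:

```python
def simple_hill_climbing(start, successors, value, is_goal):
    """Simple hill climbing: take the FIRST successor that is better
    than the current state; stop at a goal or when none is better."""
    current = start
    while not is_goal(current):                  # Step 1/4: goal test
        for nxt in successors(current):          # Step 3: try operators in order
            if value(nxt) > value(current):      # first improvement wins
                current = nxt                    # Step 4: move to the new state
                break
        else:
            return current   # no better neighbour: stuck (possibly a local maximum)
    return current

# Toy landscape: climb toward x = 5 on f(x) = -(x - 5)^2.
value = lambda x: -(x - 5) ** 2
succ = lambda x: [x - 1, x + 1]
print(simple_hill_climbing(0, succ, value, lambda x: value(x) == 0))  # -> 5
```

Note that the loop commits to the first improving neighbor it sees, which is exactly why the result may be only a local maximum.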
Steepest-Ascent hill climbing:
• The steepest-ascent algorithm is a variation of the simple hill climbing
algorithm. It examines all the moves from the current state and
selects the best one as the next move.
• This algorithm consumes more time, as it examines multiple neighbors.
Algorithm for Steepest-Ascent hill climbing:
• Step 1: Evaluate the initial state, if it is goal state then return success and
stop, else make current state as initial state.
• Step 2: Loop until a solution is found or until a complete iteration produces
no change to current state
• Let SUCC be a state such that any successor of the current state will be better than it.
• For each operator that applies to the current state:
• Apply the new operator and generate a new state.
• Evaluate the new state.
• If it is goal state, then return it and quit, else compare it to the SUCC.
• If it is better than SUCC, then set new state as SUCC.
• If the SUCC is better than the current state, then set current state to SUCC.
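The SUCC bookkeeping above reduces to taking the maximum over all successors; a sketch, again using an illustrative one-dimensional landscape:

```python
def steepest_ascent(start, successors, value):
    """Steepest-ascent hill climbing: examine ALL successors of the
    current state and move to the best one, while it improves."""
    current = start
    while True:
        neighbours = successors(current)
        if not neighbours:
            return current
        best = max(neighbours, key=value)    # SUCC = best of all successors
        if value(best) <= value(current):    # no uphill move left
            return current                   # peak reached (global or local)
        current = best                       # set current state to SUCC

value = lambda x: -(x - 5) ** 2
succ = lambda x: [x - 2, x - 1, x + 1, x + 2]
print(steepest_ascent(0, succ, value))
```

Compared with simple hill climbing, each iteration evaluates every successor before moving, trading extra time per step for a steeper climb.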
Problems of Hill Climbing Algorithm
1) Local Maximum:
• Local maximum: a state which is better than its neighboring states
but not better than some other state.
• Global maximum: the best possible state in the state-space diagram.
2) Plateau: a flat region of the state space where neighboring states
have the same value; it is very difficult to find the best direction to move.
3) Ridge: a special kind of local maximum. It is an area of the search
space that is higher than the surrounding areas but that itself has a slope.
It is impossible to traverse a ridge in a single move.
Ways to deal with Problems
• Backtrack - Go Back, To solve Local Maximum problem.
• Make a big jump in same direction – To solve Plateaus.
• Apply two or more rules before doing the test- To solve ridges.
Blocks World Problem
Local Heuristic Function
Disadvantage: if we use a local heuristic function, it can lead to the Local Maximum problem.
Global Heuristic Function
You might also like