AI Module 1 Notes
MODULE 1
*********************************************************************
SYLLABUS
Module 1 (14 hours)
Introduction: What is AI, The foundations of AI, History and applications,
Production systems. Structures and strategies for state space search. Informed and
Uninformed searches.
*********************************************************************
Introduction
Artificial intelligence (AI) is the study of how to make computers do things which, at the moment, people do better.
The AI problems
Some of the task domains of AI are
Mundane Tasks
Perception
Vision
Speech
Natural Language
Understanding
Generation
Translation
Commonsense reasoning
Robot control
Formal Tasks
Games
Chess
Backgammon
Checkers
Mathematics
Geometry
Logic
Integral calculus
Proving properties of programs
Expert Tasks
Engineering
Design
Fault Finding
Manufacturing planning
Scientific Analysis
Medical Diagnosis
Financial Analysis
Perception of the world around us is crucial to our survival. Animals with much less
intelligence than people are capable of more sophisticated visual perception than are
current machines. Perceptual tasks are difficult because they involve analog signals; the signals are typically very noisy, and usually a large number of things must be perceived at once.
The ability to use language to communicate a wide variety of ideas is perhaps the most important thing that separates humans from other animals. The problem of understanding spoken language is a perceptual problem and is usually referred to as natural language understanding. In order to understand sentences about a topic, it is necessary to know not only the language itself but also a good deal about the topic, so that unstated assumptions can be recognized.
AI has also focused on the sort of problem solving that we do every day when we decide how to get to work in the morning, often called commonsense reasoning.
Game playing and theorem proving share the property that people who do them well are considered to be displaying intelligence. Computers can perform well at these tasks simply by being fast at exploring a large number of solution paths and then selecting the best one.
There are now thousands of programs called expert systems in day-to-day operation throughout all areas of industry and government. Each of these systems attempts to solve part of a practical, significant problem that previously required scarce expertise.
AI is a system that acts like human beings
For this, a computer would need to possess the following capabilities.
Natural language processing: to enable it to communicate successfully in English.
Knowledge representation: to store what it knows or hears.
Automated reasoning: to use the stored information to answer questions and to draw new conclusions.
Machine learning: to adapt to new circumstances and to detect and extrapolate patterns.
Computer vision: to perceive objects.
Robotics: to manipulate objects and move about.
AI is a system that thinks like human beings.
First we must have some way of determining how humans think. We need to get
inside the workings of the human mind. Once we have a sufficiently precise theory
of the mind, it becomes possible to express that theory using a computer program.
The field of cognitive science brings together computer models from AI and
experimental techniques from psychology to try to construct precise and testable
theories of the workings of the human mind.
AI is a system that thinks rationally
From a given set of correct premises, it is possible to derive correct new conclusions.
For example:
“Socrates is a man; all men are mortal; therefore, Socrates is mortal.”
These laws of thought were supposed to govern the operation of the mind. This
resulted in a field called logic. A precise notation for the statements about all kinds of
things in the world and about relations among them are developed. Programs exist
that could in principle solve any solvable problem described in logical notation.
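As an illustration, the Socrates syllogism above can be written in this kind of logical notation (predicate calculus) as:

∀x: Man(x) → Mortal(x)        (all men are mortal)
Man(Socrates)                  (Socrates is a man)
therefore Mortal(Socrates)     (Socrates is mortal)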
There are 2 main obstacles to this approach.
First it is not easy to take informal knowledge and state it in the formal terms
required by logical notation.
Second, there is a big difference between being able to solve a problem “in
principle” and doing so in practice.
AI is a system that acts rationally
An agent is something that acts. A rational agent is one that acts so as to achieve the
best outcome or, when there is uncertainty, the best expected outcome. We need the
ability to represent knowledge and reason with it because this enables us to reach
good decisions in a wide variety of situations. We need to be able to generate
comprehensible sentences in natural language because saying those sentences helps us
get by in a complex society. We need learning because having a better idea of how the
world works enables us to generate more effective strategies for dealing with it. We
need visual perception to get a better idea of what an action might achieve.
AI Application Areas
The 2 most fundamental concerns of AI researchers are knowledge representation and
search.
Knowledge representation
It addresses the problem of capturing the full range of knowledge required for
intelligent behavior in a formal language,
i.e. One suitable for computer manipulation.
Eg. predicate calculus, LISP, Prolog
Search
It is a problem solving technique that systematically explores a space of problem
states, ie, successive and alternative stages in the problem solving process.
1. Game Playing
Much of the early research in AI was done using common board games such as
checkers,
chess, and
15 puzzle.
Board games have certain properties that made them ideal for AI research. Most
games are played using a well defined set of rules. This makes it easy to generate
the search space. The board configuration used in playing these games can be
easily represented on a computer. As games can be easily played, testing a game
playing program presents no financial or ethical burden.
2. Heuristics
Games can generate extremely large search spaces. So we use powerful
techniques called heuristics to explore the problem space.
A heuristic is a useful but potentially fallible problem strategy, such as checking
to make sure that an unresponsive appliance is plugged in before assuming that it
is broken. Since most of us have some experience with these simple games, we do
not need to find and consult an expert. For these reasons games provide a rich
domain for the study of heuristic search.
3. Automated Reasoning and Theorem Proving
Examples for automatic theorem provers are
Newell and Simon’s Logic Theorist,
General Problem Solver (GPS).
Theorem proving research is responsible for the development of languages such
as predicate calculus and Prolog. The attraction of automated theorem proving lies
in the rigor and generality of logic. A wide variety of problems can be attacked by
representing the problem description as logical axioms and treating problem
instances as theorems to be proved. Reasoning based on formal mathematical
logic is also attractive. Many important problems such as design and verification
of logic circuits, verification of the correctness of computer programs and control
of complex systems come in this category.
4. Expert Systems
Here comes the importance of domain specific knowledge. A doctor, for example, is not effective at diagnosing illness solely because she possesses some innate general problem solving skill; she is effective because she knows a lot about medicine. Similarly, a geologist is effective at discovering mineral deposits because he or she knows a lot about geology. Expert knowledge is a
combination of theoretical understanding of the problem and a collection of
heuristic problem solving rules that experience has shown to be effective in the
domain. Expert systems are constructed by obtaining this knowledge from a
human expert and coding it into a form that a computer may apply to similar
problems. To develop such a system, we must obtain knowledge from a human
domain expert. Examples for domain experts are doctor, chemist, geologist,
engineer etc..
The domain expert provides the necessary knowledge of the problem domain. The
AI specialist is responsible for implementing this knowledge in a program.
Once such a program has been written, it is necessary to refine its expertise
through a process of giving it example problems to solve and making any
required changes or modifications to the program’s knowledge.
Dendral is an expert system designed to infer the structure of organic molecules
from their chemical formulas and
mass spectrographic information about the chemical bonds present in the
molecules.
Mycin is an expert system which uses expert medical knowledge to diagnose and
prescribe treatment for spinal meningitis
and bacterial infections of the blood.
Prospector is an expert system for determining the probable location and type of
ore deposits based on geological
information about a site.
Internist is an expert system for performing diagnosis in the area of internal
medicine.
The dipmeter advisor is an expert system for interpreting the results of oil well
drilling logs.
Xcon is an expert system for configuring VAX computers.
6. Modeling Human Performance
The design of systems that explicitly model some aspect of human performance has been a fertile area of research in both AI and psychology.
7. Planning and Robotics
Research in planning began as an effort to design robots that could perform their tasks
with some degree of flexibility and responsiveness to the outside world. Planning
assumes a robot that is capable of performing certain atomic actions.
Planning is a difficult problem because of the size of the space of possible sequences
of moves. Even an extremely simple robot is capable of generating a vast number of
potential move sequences. One method that human beings use in planning is
hierarchical problem decomposition. If we plan a trip to London, we will generally
treat the problems of arranging a flight, getting to the airport, making airline
connections and finding ground transportation in London separately. Each of these
may be further decomposed into smaller sub problems. Creating a computer program
that can do the same is a difficult challenge.
A robot that blindly performs a sequence of actions without responding to changes in
its environment cannot be considered intelligent. Often, a robot will have to formulate
a plan based on incomplete information and correct its behavior. A robot may not
have adequate sensors to locate all obstacles in the way of a projected path.
Organizing plans in a fashion that allows response to changing environmental
conditions is a major problem for planning.
8. Machine Learning
An expert system may perform extensive and costly computations to solve a problem.
But if it is given the same or similar problem a second time, it usually does not
remember the solution. It performs the same sequence of computations again.
This is not the behavior of an intelligent problem solver. The programs must learn on
their own. Learning is a difficult area. But there are several programs that suggest that
it is possible. One program is AM, the automated mathematician which was designed
to discover mathematical laws. Initially given the concepts and axioms of set theory,
AM was able to induce important mathematical concepts such as cardinality, integer
arithmetic and many of the results of number theory. AM conjectured new theorems
by modifying its current knowledge base.
Early work includes Winston’s research on the induction of structural concepts such
as “arch” from a set of examples in the blocks world. The ID3 algorithm has proved
successful in learning general patterns from examples. Meta-DENDRAL learns rules for interpreting mass spectrographic data in organic chemistry from examples of data on compounds of known structure. Teiresias, an intelligent front end for expert systems,
converts high level advice into new rules for its knowledge base. There are also now
many important biological and sociological models of learning.
9. Neural Nets and Genetic Algorithms
An approach to build intelligent programs is to use models that parallel the structure
of neurons in the human brain. A neuron consists of a cell body that has a number of
branched protrusions called dendrites and a single branch called the axon. Dendrites
receive signals from other neurons. When these combined impulses exceed a certain
threshold, the neuron fires and an impulse or spike passes down the axon.
This description of the neuron captures features that are relevant to neural models of
computation. Each computational unit computes some function of its inputs and
passes the result along to connected units in the network; the final results are
produced by the parallel and distributed processing of this network of neural
connection and threshold weights.
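A minimal Python sketch of such a computational unit (purely illustrative; the weights, inputs and threshold values are made up):

def threshold_unit(inputs, weights, threshold):
    # Weighted sum of the incoming signals (one weight per input).
    activation = sum(w * x for w, x in zip(weights, inputs))
    # The unit "fires" (outputs 1) only if the combined input exceeds the threshold.
    return 1 if activation > threshold else 0

# Example: a unit that fires only when both of its two inputs are active (logical AND).
print(threshold_unit([1, 1], [0.6, 0.6], threshold=1.0))  # -> 1
print(threshold_unit([1, 0], [0.6, 0.6], threshold=1.0))  # -> 0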
Languages and Environments for AI
Programming environments include knowledge structuring techniques such as object
oriented programming and expert systems frameworks.
High level languages such as Lisp, and Prolog support modular development.
Defining a Problem as a State Space Search
Consider the problem of playing chess. The starting position can be described as an 8x8 array where each position contains a
symbol standing for the appropriate piece in the official chess opening position. We
can define as our goal any board position in which the opponent does not have any
legal move and his or her king is under attack. The legal moves provide the way of
getting from the initial state to a goal state.
Set of rules consisting of two parts:
A left side that serves as a pattern to be matched against the current board
position
A right side that describes the change to be made to the board position to reflect
the move
There are several ways in which these rules can be written. One is to write a separate rule for each of the roughly 10^120 possible board positions, with the left side of each rule describing a complete board configuration.
Difficulties
1. No person could ever supply a complete set of such rules.
2. No program could easily handle all those rules.
In order to minimize such problems we should look for a way to write the rules describing the legal moves in as general a way as possible. E.g., we can play chess by starting at an initial state, using a set of rules to move from one state to another, and attempting to end up in one of a set of final states. This state space representation seems natural for chess because the set of states, which corresponds to the set of board positions, is artificial and well organized.
The state space representation forms the basis of most AI methods.
Its Structure:
It allows for a formal definition of a problem as the need to convert some given
situation into some desired situation using a set of permissible operations
It permits us to define the process of solving a particular problem as a combination of known techniques and search, the general technique of exploring the space to try to find some path from the current state to a goal state. Search is a very important process in the solution of hard problems for which no more direct techniques are available.
The Water Jug Problem
We are given a 4-gallon jug and a 3-gallon jug, neither of which has any measuring markers on it, and a pump that can be used to fill the jugs with water. How can we get exactly 2 gallons of water into the 4-gallon jug? The moves are written as production rules, for example: if y > 0, empty the 3-gallon jug on the ground.
The state space for this problem can be described as the set of ordered pairs of integers (x, y) such that x = 0, 1, 2, 3 or 4 and y = 0, 1, 2 or 3. x represents the number of gallons of water in the 4-gallon jug and y represents the number of gallons of water in the 3-gallon jug.
The start state is (0, 0).
The goal state is (2, n) for any value of n, since the problem does not specify how many gallons need to be in the 3-gallon jug.
One solution path:
Gallons in the 4-gallon jug   Gallons in the 3-gallon jug   Rule applied
0                             0                             2
0                             3                             9
3                             0                             2
3                             3                             7
4                             2                             5
0                             2                             11
2                             0                             (final state)
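A minimal Python sketch of this state space (illustrative only; it encodes the fill, empty and pour moves as a single successor function rather than as numbered rules):

def successors(state):
    # A state is a pair (x, y): gallons in the 4-gallon and 3-gallon jugs.
    x, y = state
    result = set()
    result.add((4, y))                      # fill the 4-gallon jug
    result.add((x, 3))                      # fill the 3-gallon jug
    result.add((0, y))                      # empty the 4-gallon jug on the ground
    result.add((x, 0))                      # empty the 3-gallon jug on the ground
    pour = min(y, 4 - x)                    # pour from the 3-gallon jug into the 4-gallon jug
    result.add((x + pour, y - pour))
    pour = min(x, 3 - y)                    # pour from the 4-gallon jug into the 3-gallon jug
    result.add((x - pour, y + pour))
    result.discard(state)                   # ignore rules that do not change the state
    return result

print(successors((0, 0)))   # the two useful first moves: (4, 0) and (0, 3)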
Production System
A production system consists of:
A set of rules, each consisting of a left side (a pattern) that determines the applicability of the rule and a right side that describes the operation to be performed if the rule is applied.
One or more knowledge databases that contain whatever information is appropriate for the particular task.
A control strategy that specifies the order in which the rules will be compared to the database and a way of resolving conflicts when several rules match at once.
A rule applier.
Control Strategies
The first requirement of a good control strategy is that it causes motion. Consider the water jug problem. Suppose we implemented the simple control strategy of starting each time at the top of the list of rules and choosing the first applicable rule. If we did that we would never solve the problem: we would continue indefinitely filling the 4-gallon jug with water.
Control strategies that do not cause motion will never lead to a solution.
The second requirement of a good control strategy is that it be systematic.
Suppose that, on each cycle, we choose at random from among the applicable rules. This strategy is better than the first: it causes motion, and it will lead to a solution eventually. But we are likely to arrive at the same state several times during the process and to use many more steps than are necessary. Because the control strategy is not systematic, we may explore a particular useless sequence of operators several times before we finally find a solution.
The requirement that a control strategy be systematic corresponds to the need for global motion as well as for local motion.
One systematic control strategy for the water jug problem is the following. Construct a tree with the initial state as its root; generate all offspring of the root by applying each of the applicable rules to the initial state. Now for each leaf node, generate all its successors by applying all the rules that are appropriate. Continue this process until some rule produces a goal state. This process is called Breadth First Search.
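A minimal Python sketch of breadth first search over an arbitrary state space (illustrative; the successor function and goal test are supplied by the caller, and the toy example at the end is made up):

from collections import deque

def breadth_first_search(start, is_goal, successors):
    # Explore the state space level by level, keeping track of how each state was reached.
    frontier = deque([[start]])            # queue of paths, shortest first
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if is_goal(state):
            return path                    # first goal found, so a minimal-length solution
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                            # no solution in this state space

# Tiny example: reach 10 from 1, where each state n has successors n + 1 and 2 * n.
print(breadth_first_search(1, lambda s: s == 10, lambda s: [s + 1, 2 * s]))
# -> [1, 2, 4, 5, 10]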
Alternatively, we could pursue a single branch of the tree until it yields a solution or until a decision to terminate the path is made. It makes sense to terminate a path if it reaches a dead end. In such a case, backtracking occurs: the most recently created state from which alternative moves are available will be revisited and a new state will be created. This form of backtracking is called chronological backtracking. This search procedure is called depth first search.
Depth First Search Algorithm
1. If the initial state is a goal state, quit and return success.
2. Otherwise, do the following until success or failure is signaled:
a) Generate a successor E of the initial state. If there are no more successors, signal failure.
b) Call depth first search with E as the initial state.
c) If success is returned, signal success; otherwise continue in the loop.
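A recursive Python sketch of the algorithm above (illustrative; the depth limit and the loop check are added safeguards against infinitely deep branches and are not part of the algorithm as stated):

def depth_first_search(state, is_goal, successors, limit=20, path=None):
    # Step 1: if the current state is a goal state, signal success.
    path = (path or []) + [state]
    if is_goal(state):
        return path
    if limit == 0:                          # safeguard against infinitely deep branches
        return None
    for nxt in successors(state):           # step 2a: generate successors one at a time
        if nxt in path:                     # simple loop check along the current branch
            continue
        result = depth_first_search(nxt, is_goal, successors, limit - 1, path)  # step 2b
        if result is not None:              # step 2c: success was returned
            return result
    return None                             # no more successors: signal failure

# Same tiny example as before: reach 10 from 1 with successors n + 1 and 2 * n.
print(depth_first_search(1, lambda s: s == 10, lambda s: [s + 1, 2 * s]))
# -> [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

Note that on this toy problem the depth first version returns a much longer path than breadth first search did, which motivates the comparison below.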
Advantages of BFS
Breadth first search will not get trapped exploring a blind alley. This contrasts with DFS, which may follow a single, unfruitful path for a very long time before the path actually terminates in a state that has no successors.
If there is a solution, then breadth first search is guaranteed to find it. Furthermore, if there are multiple solutions, then a minimal solution will be found.
Heuristic Search
In state space search, heuristics are formalized as rules for choosing those branches in a state space that are most likely to lead to an acceptable problem solution.
Heuristics are employed in two basic situations:
1. A problem may not have an exact solution because of inherent ambiguities in the problem statement or available data. Medical diagnosis is an example of this: a given set of symptoms may have several possible causes, so doctors use heuristics to choose the most likely diagnosis and formulate a plan of treatment.
2. A problem may have an exact solution, but the computational cost of finding it may be prohibitive. A heuristic algorithm can defeat this combinatorial explosion and find an acceptable solution.
Heuristic approach for the travelling salesman problem
1. Arbitrarily select a starting city.
2. To select the next city, look at all cities not yet visited and select the one closest to the current city; go to it next.
3. Repeat step 2 until all cities have been visited.
The procedure executes in time proportional to N^2, a significant improvement over N!
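A Python sketch of this nearest-neighbour heuristic (illustrative; the city names and coordinates are made up):

import math

def nearest_neighbour_tour(cities, start):
    # cities: dict mapping a city name to an (x, y) coordinate.
    tour = [start]
    unvisited = set(cities) - {start}
    while unvisited:
        current = tour[-1]
        # Step 2: among the cities not yet visited, pick the one closest to the current city.
        nearest = min(unvisited, key=lambda c: math.dist(cities[current], cities[c]))
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour

cities = {"A": (0, 0), "B": (1, 0), "C": (5, 0), "D": (1, 3)}
print(nearest_neighbour_tour(cities, "A"))   # -> ['A', 'B', 'D', 'C']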
Heuristics and the design of algorithms to implement heuristic search have long been a core concern of artificial intelligence research. Game playing and theorem proving are two of the oldest applications in artificial intelligence.
Problem Characteristics
In order to choose the most appropriate method for a particular problem, it is
necessary to analyze the problem along several dimensions.
Is the problem decomposable into a set of independent smaller or easier sub
problems?
Can solution steps be ignored or at least undone if they prove unwise?
Is the problem’s universe predictable?
Is a good solution to the problem obvious without comparison to all other
possible solutions?
Is the desired solution a state of the world or a path to a state?
Is a large amount of knowledge absolutely required to solve the problem, or is
knowledge important only to constrain the search?
Can a computer that is simply given the problem return the solution, or will the
solution of the problem require interaction between the computer and a person?
[Figure: the blocks world problem. Start state: block C is on block A and block B is on the table. Goal state: block B on block C and block A on block B.]
The idea of the solution is to reduce the problem of getting B on C and A on B to two separate problems. The first of these new problems, getting B on C, is simple given the start state: simply put B on C. The second subgoal is not quite so simple. Since the only operators we have allow us to pick up single blocks at a time, we have to clear off A by removing C before we can pick up A and put it on B. This can easily be done. However, if we now try to combine the two sub solutions into one solution, we will fail. Regardless of which one we do first, we will not be able to do the second as we had planned. In this problem the two sub problems are not independent. They interact, and those interactions must be considered in order to arrive at a solution for the entire problem.
These two examples, symbolic integration and the blocks world, illustrate the difference between decomposable and nondecomposable problems.
[Figure: the proposed decomposition of the goal ON(B,C) and ON(A,B) into subgoals, together with the operators suggested for each subgoal.]
Recoverable problems (eg. the 8-puzzle)
The goal is to transform the starting position into the goal position by sliding the tiles around. In an attempt to solve the 8-puzzle, we might make a stupid move. For example, we might start by sliding tile 5 into the empty space. Having done that, we cannot change our mind and immediately slide tile 6 into the empty space, since the empty space will essentially have moved. But we can backtrack and undo the first move, sliding tile 5 back to where it was. Then we can move tile 6. Here, mistakes can be recovered.
An additional step must be performed to undo each incorrect step. The control
mechanism for an 8-puzzle solver must keep track of the order in which operations
are performed so that the operations can be undone one at a time if necessary.
Irrecoverable problems (eg. Chess)
Consider the problem of playing chess. Suppose a chess playing program makes a
stupid move and realizes it a couple of moves later. It cannot simply play as though it
had never made the stupid move. Nor can it simply back up and start the game over
from that point. All it can do is to try to make the best of the current situation and go
from there.
The recoverability of a problem plays an important role in determining the complexity
of the control structure necessary for the problem’s solution. Ignorable problems can
be solved using a simple control structure. Recoverable problems can be solved by
a slightly more complicated control strategy that does sometimes make mistakes.
Irrecoverable problems will need to be solved by a system that expends a great deal
of effort making each decision since the decision must be final.
Uncertain outcome problems (eg. playing bridge)
In a card game such as bridge, we cannot plan the entire game in advance; it is not possible to do such planning with certainty since we cannot know exactly where all the cards are or what the other players will do on their turns.
For certain outcome problems, planning can be used to generate a sequence of operators that is guaranteed to lead to a solution. For uncertain outcome problems, in which the outcome cannot be predicted, planning can at best generate a sequence of operators that has a good probability of leading to a solution.
One of the hardest types of problems to solve is the irrecoverable, uncertain
outcome.
Examples of such problems are
Playing bridge,
Controlling a robot arm,
Helping a lawyer decide how to defend his client against a murder charge.
Is a good solution absolute or relative?
Consider again the travelling salesman problem.
One place the salesman could start is Boston. In that case, one path that might be followed is 8850 miles long. But is this the solution to the problem? The answer is that we cannot be sure unless we also try all other paths to make sure that none of them is shorter.
Best path problems are computationally harder than any path problems. Any path problems can often be solved in a reasonable amount of time by using heuristics that suggest good paths to explore.
Problems whose solution is a state of the world. Eg. natural language understanding
Consider the problem of finding a consistent interpretation for the sentence "The bank president ate a dish of pasta salad with the fork." Pasta salad is a salad containing pasta, but there are other ways interpretations can be
For example, dog food does not normally contain dogs. The phrase ‘with the fork’
could modify several parts of the sentence. In this case, it modifies the verb ‘eat’. But,
if the phrase had been ‘with vegetables’, then the modification structure would be
different. Because of the interaction among the interpretations of the constituents of
this sentence, some search may be required to find a complete interpretation for the
sentence. But to solve the problem of finding the interpretation, we need to produce
only the interpretation itself. No record of the processing by which the interpretation
was found is necessary.
Problems whose solution is a path to a state. Eg. the water jug problem
In water jug problem, it is not sufficient to report that we have solved the problem and
that the final state is (2,0). For this kind of problem, what we really must report is not
the final state, but the path that we found to that state.
SEARCHING
Problem solving in artificial intelligence may be characterized as a systematic search
through a range of possible actions in order to reach some predefined goal or solution.
In AI, problem solving by search algorithms is quite a common technique. In the coming age of AI, it will have a big impact on technologies such as robotics and path finding. It is also widely used in travel planning. This chapter contains the different search algorithms of AI used in various applications. Let us look at the concepts for visualizing the algorithms. A search algorithm takes a problem as input and returns the solution in the form of an action sequence. Once the solution is found, the actions it recommends can be carried out. This phase is called the execution phase. After formulating a goal and a problem to solve, the agent calls a search procedure to solve it. A problem can be defined by 5 components (a sketch in code follows the list below).
a) The initial state: The state from which agent will start.
b) The goal state: The state to be finally reached.
c) The current state: The state at which the agent is present after starting from the
initial state.
d) Successor function: It is the description of possible actions and their outcomes.
e) Path cost: It is a function that assigns a numeric cost to each path.
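One possible way to package these components in code is sketched below (illustrative; the class and the partial water jug example are our own, and the current state (c) is maintained by the search procedure rather than by the problem definition):

class Problem:
    # Bundles the components listed above; the current state (c) lives in the search procedure.
    def __init__(self, initial_state, goal_test, successor_fn, step_cost=lambda s, a, s2: 1):
        self.initial_state = initial_state      # (a) the initial state
        self.goal_test = goal_test              # (b) recognizes the goal state(s)
        self.successor_fn = successor_fn        # (d) possible actions and their outcomes
        self.step_cost = step_cost              # per-move cost, used for (e) the path cost

    def path_cost(self, path):
        # (e) Path cost: sum of the step costs along a path of (state, action, state) triples.
        return sum(self.step_cost(s, a, s2) for (s, a, s2) in path)

def jug_successors(state):
    # Partial successor function for the water jug problem (pouring moves omitted for brevity).
    x, y = state
    return [("fill 4-gallon", (4, y)), ("fill 3-gallon", (x, 3)),
            ("empty 4-gallon", (0, y)), ("empty 3-gallon", (x, 0))]

water_jug = Problem(initial_state=(0, 0),
                    goal_test=lambda s: s[0] == 2,      # 2 gallons in the 4-gallon jug
                    successor_fn=jug_successors)
print(water_jug.successor_fn(water_jug.initial_state))
print(water_jug.path_cost([((0, 0), "fill 3-gallon", (0, 3))]))   # -> 1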
Informed Search
A search using domain-specific knowledge.
Suppose that we have a way to estimate how close a state is to the goal, with
an evaluation function.
General strategy: expand the best state in the open list first. It's called a
best-first search or ordered state-space search.
In general the evaluation function is imprecise, which makes the method a
heuristic (works well in most cases).
The evaluation is often based on empirical observations.
Informed (Heuristic) Search Strategies
To solve large problems with a large number of possible states, problem-specific knowledge needs to be added to increase the efficiency of search algorithms.
Heuristic Evaluation Functions
A heuristic evaluation function estimates the cost of an optimal path between two states. A heuristic function for sliding-tile games can be computed by counting the number of moves that each tile makes from its goal position and adding these numbers of moves for all tiles.
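A Python sketch of such a heuristic for the 8-puzzle (illustrative; the start and goal layouts are examples only):

def manhattan_distance(state, goal):
    # state and goal are tuples of 9 entries listing the tiles row by row; 0 is the blank.
    total = 0
    for tile in range(1, 9):                       # the blank does not contribute
        i, j = state.index(tile), goal.index(tile)
        # Convert the flat indices to (row, column) and add the row and column offsets.
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

goal  = (1, 2, 3, 8, 0, 4, 7, 6, 5)                # a common goal layout for the 8-puzzle
start = (2, 8, 3, 1, 6, 4, 7, 0, 5)
print(manhattan_distance(start, goal))             # -> 5 moves in total (an estimate)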
Pure Heuristic Search
It expands nodes in the order of their heuristic values. It creates two lists, a closed list
for the already expanded nodes and an open list for the created but unexpanded nodes.
In each iteration, a node with a minimum heuristic value is expanded: it is placed in the closed list and all its child nodes are created. Then, the heuristic function is applied to the child nodes and they are placed in the open list according to their heuristic value. The shorter paths are saved and the longer ones are disposed of.
A* Search
It is the best-known form of best-first search. It avoids expanding paths that are already expensive, and expands the most promising paths first.
f(n) = g(n) + h(n), where
g(n) = the cost (so far) to reach the node
h(n) = estimated cost to get from the node to the goal
f(n) = estimated total cost of the path through n to the goal. It is implemented using a priority queue ordered by increasing f(n).
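A Python sketch of A* (illustrative; the successor function is assumed to return (step cost, next state) pairs, and the number-line example at the end is made up):

import heapq

def a_star(start, is_goal, successors, h):
    # Priority queue of (f, g, state, path); f = g + h orders the expansions.
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return g, path
        for cost, nxt in successors(state):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):   # keep only the cheapest path found so far
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + h(nxt), new_g, nxt, path + [nxt]))
    return None

# Tiny example: move from 0 to 9 on a number line; each step left or right costs 1.
print(a_star(0, lambda s: s == 9,
             lambda s: [(1, s - 1), (1, s + 1)],
             h=lambda s: abs(9 - s)))          # -> (9, [0, 1, 2, ..., 9])

With h(n) = 0 the same loop behaves like a lowest-cost-first search, and ordering by h(n) alone gives the greedy best first search described next.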
Greedy Best First Search
It expands the node that is estimated to be closest to the goal. It expands nodes based on f(n) = h(n). It is implemented using a priority queue ordered by h(n).
Disadvantage: it can get stuck in loops. It is not optimal.
Local Search Algorithms
They start from a prospective solution and then move to a neighbouring solution. They can return a valid solution even if they are interrupted at any time before they end.
Generate-And-Test Algorithm
Algorithm: Generate-And-Test
1. Generate a possible solution.
2. Test to see if this is actually a solution by comparing the chosen point (or the endpoint of the chosen path) to the set of acceptable goal states.
3. If a solution has been found, quit. Otherwise, return to step 1.
Potential solutions that need to be generated vary depending on the kinds of problems.
For some problems the possible solutions may be particular points in the problem
space and for some problems, paths from the start state.
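A Python sketch of plain generate-and-test (illustrative; the toy constraint problem at the end is made up, with candidate solutions that are particular points in the problem space):

import itertools

def generate_and_test(generator, test):
    # 1. Generate a possible solution.  2. Test it.  3. Quit if acceptable, else keep generating.
    for candidate in generator:
        if test(candidate):
            return candidate
    return None

# Toy problem: find two digits whose sum is 10 and whose product is 21.
candidates = itertools.product(range(10), repeat=2)      # exhaustive generator of (x, y) pairs
print(generate_and_test(candidates, lambda p: p[0] + p[1] == 10 and p[0] * p[1] == 21))
# -> (3, 7)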
Systematic Generate-And-Test
While generating complete solutions and generating random solutions are the two extremes, there exists another approach that lies in between. In this approach the search process proceeds systematically, but some paths that are unlikely to lead to a solution are not considered. This evaluation is performed by a heuristic function.
Exhaustive generate-and-test is very useful for simple problems. But for complex problems even heuristic generate-and-test is not a very effective technique. It may be made effective by combining it with other techniques in such a way that the space in which to search is restricted. An AI program, DENDRAL, for example, uses a plan-generate-and-test technique. First, the planning process uses constraint-satisfaction techniques and creates lists of recommended and contraindicated substructures. Then the generate-and-test procedure uses the lists so generated and is required to explore only a limited set of structures. Constrained in this way, generate-and-test proved highly effective. A major weakness of planning is that it often produces somewhat inaccurate solutions, as there is no feedback from the world. But if it is used to produce only pieces of solutions, then the lack of detailed accuracy becomes unimportant.
Hill Climbing Search
The hill climbing search algorithm is simply a loop that continuously moves in the direction of increasing value. It stops when it reaches a "peak" where no neighbour has a higher value. This algorithm is considered to be one of the simplest procedures for implementing heuristic search. The name comes from the idea that if you are trying to find the top of a hill, you keep going uphill from wherever you are. This heuristic combines the advantages of both depth first and breadth first searches into a single method. The name hill climbing is derived from simulating the situation of a person climbing a hill: the person keeps moving in the direction of the top of the hill, and the movement stops when the peak is reached and no neighbouring point has a higher value of the heuristic function. Hill climbing uses knowledge about the local terrain, providing a very useful and effective heuristic for eliminating much of the unproductive search space. The search is guided by a local evaluation function. Hill climbing is a variant of generate-and-test in which feedback from the test procedure is used to decide in which direction the search should proceed. At each point in the search path, a successor node that appears to lead most quickly toward the goal is selected for exploration.
Algorithm:
Step 1: Evaluate the starting state. If it is a goal state, then stop and return success.
Step 2: Else, continue with the starting state, considering it as the current state.
Step 3: Continue step 4 until a solution is found or until there are no new operators left to be applied to the current state.
Step 4: a) Select an operator that has not yet been applied to the current state and apply it to produce a new state.
b) Evaluate the new state:
i. If the new state is a goal state, then stop and return success.
ii. If it is better than the current state, then make it the current state and proceed further.
iii. If it is not better than the current state, then continue in the loop until a solution is found.
Step 5: Exit.
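A Python sketch of hill climbing (illustrative; this version is the steepest-ascent variant, which moves to the best improving neighbour rather than the first one found, and the objective function is made up):

def hill_climbing(start, objective, neighbours):
    # Keep moving to a better neighbouring state; stop when no neighbour improves on the current one.
    current = start
    while True:
        better = [n for n in neighbours(current) if objective(n) > objective(current)]
        if not better:
            return current                    # a peak (possibly only a local maximum)
        current = max(better, key=objective)  # steepest ascent: take the best improvement

# Toy objective with its peak at x = 3; neighbours are one step left or right.
objective = lambda x: -(x - 3) ** 2
print(hill_climbing(start=10, objective=objective, neighbours=lambda x: [x - 1, x + 1]))
# -> 3

On this smooth objective the climb reaches the global peak, but with a bumpier objective it could just as easily stop at a local maximum, as discussed under the disadvantages below.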
Advantages:
Hill climbing technique is useful in job shop scheduling, automatic
programming, circuit designing, and vehicle routing and portfolio
management.
It is also helpful to solve pure optimization problems where the objective is to
find the best state according to the objective function.
It requires far fewer conditions than other search techniques.
Disadvantages:
The question that remains on hill climbing search is whether this hill is the highest hill possible. Unfortunately, without further extensive exploration, this question cannot be answered. This technique works, but since it uses only local information it can be fooled. The algorithm does not maintain a search tree, so the current node data structure need only record the state and its objective function value. It assumes that local improvement will lead to global improvement.
There are some reasons why hill climbing often gets stuck, which are stated below.
Local Maxima:
A local maximum is a state that is better than each of its neighbouring states, but not better than some other states further away. Generally this state is lower than the global maximum. At this point, one cannot easily decide in which direction to move. This difficulty can be overcome by backtracking, i.e. backtrack to an earlier node and try going in a different direction. To implement this strategy, maintain a list of paths almost taken; if the path actually taken leads to a dead end, go back to one of the paths on the list and try it.
Ridges:
A ridge is a special kind of local maximum. It is an area of the search space that is higher than the surrounding areas, but the ridge itself has a slope that cannot be climbed by single moves, so it results in a sequence of local maxima that is very difficult to traverse. In this type of situation, apply two or more rules before doing the test. This corresponds to moving in several directions at once.
Plateau:
It is a flat area of search space in which the neighbouring have same value. So it is
very difficult to calculate the best direction. So to get out of this situation, make a big
jump in any direction, which will help to move in a new direction this is the best way
to handle the problem like plateau.
Uninformed Search
This section presents three uninformed search strategies that do not take into account the location of the goal. Intuitively, these algorithms ignore where they are going until they find a goal and report success.
Depth-First Search
Breadth-First Search
Lowest-Cost-First Search