UNIT-I PPT Introduction To Artificial Intelligence
II – II CSE(AI&ML)
Anurag Engineering College
Y LAXMI PRASANNA
UNIT-I
•Introduction to AI
•Intelligent Agents,
•Problem-Solving Agents,
•Searching for Solutions
•Breadth-first search,
•Depth-first search,
•Hill-climbing search,
•Simulated annealing search,
•Local Search in Continuous Spaces.
Introduction to AI
AI tasks are commonly grouped into common-place tasks and expert tasks.
Common-Place Tasks:
• 1. Recognizing people, objects.
• 2. Communicating (through natural language).
• 3. Navigating around obstacles on the streets.
These tasks are done routinely by people and some other animals.
Expert tasks:
• 1. Medical diagnosis.
• 2. Mathematical problem solving
• 3. Playing games like chess
These tasks cannot be done by all people, and can only be performed
by skilled specialists.
The State of the Art: What can AI do Today?
• Autonomous planning and scheduling (NASA's Remote Agent program)
• Game playing (IBM's Deep Blue)
• Autonomous control (the ALVINN computer vision system, trained to steer a car)
• Diagnosis
• Logistics planning (DART, the Dynamic Analysis and Replanning Tool)
• Robotics (HipNav uses a 3D model of the patient's internal anatomy to guide the insertion of a hip replacement)
• Language understanding and problem solving (the PROVERB program solves crossword puzzles better than humans)
Intelligent Agents
Agents and Environments:
• An Agent is anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through actuators.
• A human agent has eyes, ears, and other organs for sensors and hands, legs,
mouth, and other body parts for actuators.
• A robotic agent might have cameras and infrared range finders for sensors
and various motors for actuators.
• A software agent receives keystrokes, file contents, and network packets as
sensory inputs and acts on the environment by displaying on the screen,
writing files, and sending network packets.
• Percept: We use the term percept to refer to the agent's perceptual
inputs at any given instant.
• Percept Sequence: An agent's percept sequence is the complete
history of everything the agent has ever perceived.
• Agent function: Mathematically speaking, we say that an agent's
behavior is described by the agent function that maps any given
percept sequence to an action.
• Agent program: Internally, the agent function for an artificial agent will
be implemented by an agent program. It is important to keep these
two ideas distinct. The agent function is an abstract mathematical
description; the agent program is a concrete implementation, running
on the agent architecture.
Example: the vacuum-cleaner world
• This particular world has just two locations: squares A and B.
• The vacuum agent perceives which square it is in and whether there
is dirt in the square. It can choose to move left, move right, suck up
the dirt, or do nothing.
• One very simple agent function is the following: if the current square
is dirty, then suck, otherwise move to the other square.
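The simple agent function above can be sketched in Python; the percept encoding and the action names are assumptions made for illustration.

```python
# A minimal sketch of the simple vacuum-agent function described above.
# The (location, status) percept encoding and the action names are
# assumptions made for illustration.
def vacuum_agent(percept):
    """percept is a (location, status) pair, e.g. ('A', 'Dirty')."""
    location, status = percept
    if status == 'Dirty':
        return 'Suck'       # current square is dirty: suck
    elif location == 'A':
        return 'Right'      # clean: move to the other square
    else:
        return 'Left'
```

For example, `vacuum_agent(('A', 'Dirty'))` returns `'Suck'`, while `vacuum_agent(('A', 'Clean'))` returns `'Right'`.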
An Effector is any device that affects the physical environment. An actuator is the actual mechanism that
enables the effector to execute an action.
THE CONCEPT OF RATIONALITY
2. Model-Based Agents
• Model-based agents use a model of the world to choose their actions. They
maintain an internal state.
• Model − knowledge about “how the things happen in the world”.
• Internal State − It is a representation of unobserved aspects of
current state depending on percept history.
• Updating the state requires information about:
• How the world evolves.
• How the agent’s actions affect the world.
3. Goal-Based Agents
• Knowledge of the current state of the environment is not always
sufficient for an agent to decide what to do.
• The agent needs to know its goal which describes desirable situations.
• Goal-based agents expand the capabilities of the model-based agent
by having the "goal" information.
• They choose an action, so that they can achieve the goal.
• These agents may have to consider a long sequence of possible
actions before deciding whether the goal is achieved. Such
consideration of different scenarios is called searching and planning,
and it makes an agent proactive.
4. Utility Based Agents
• These agents are similar to the goal-based agent but provide an extra
component of utility measurement which makes them different by
providing a measure of success at a given state.
• A utility-based agent acts based not only on goals but also on the best
way to achieve them.
• The Utility-based agent is useful when there are multiple possible
alternatives, and an agent has to choose in order to perform the best
action.
• The utility function maps each state to a real number to check how
efficiently each action achieves the goals.
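As a minimal sketch, utility-based action selection can be expressed as picking the action whose resulting state scores highest under the utility function; the names `result` and `utility` are illustrative assumptions, not from any particular library.

```python
# A minimal sketch of utility-based action selection: pick the action
# whose resulting state scores highest under the utility function.
# The parameter names `result` and `utility` are assumptions.
def choose_action(state, actions, result, utility):
    return max(actions, key=lambda a: utility(result(state, a)))

# Toy example: states are numbers and the utility prefers larger values.
best = choose_action(0, ['add1', 'add2'],
                     result=lambda s, a: s + (1 if a == 'add1' else 2),
                     utility=lambda s: s)
```

Here `best` is `'add2'`, the action leading to the higher-utility state.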
Properties of Environments
Environments come in several flavors. The principal distinctions to be
made are as follows:
• Fully Observable vs. partially Observable
• Deterministic vs. nondeterministic(Stochastic).
• Episodic vs. nonepisodic(Sequential)
• Single Agent vs. Multi-Agent
• Static vs. dynamic
• Discrete vs. continuous
PROBLEM SOLVING AGENTS
• Intelligent agents are supposed to act in such a way that the
environment goes through a sequence of states that maximizes the
performance measure. In its full generality, this specification is
difficult to translate into a successful agent design.
• The task is somewhat simplified if the agent can adopt a goal and aim
to satisfy it.
• Goal formulation, based on the current situation, is the first step in
problem solving.
• As well as formulating a goal, the agent may wish to decide on some
other factors that affect the desirability of different ways of achieving
the goal.
• Problem formulation is the process of deciding what actions and
states to consider, and follows goal formulation.
• Assume that the agent will consider actions at the level of driving
from one major town to another. The states it will consider therefore
correspond to being in a particular town.
• In general, then, an agent with several immediate options of
unknown value can decide what to do by first examining different
possible sequences of actions that lead to states of known value, and
then choosing the best one. This process of looking for such a
sequence is called search.
• A search algorithm takes a problem as input and returns a solution in
the form of an action sequence. Once a solution is found, the actions
it recommends can be carried out. This is called the execution phase.
Formulating Problems
• Well Defined problems and Solutions
• A problem is really a collection of information that the agent will use
to decide what to do.
• The basic elements of a problem definition are the states and actions.
To capture these formally, we need the following:
• The initial state that the agent knows itself to be in.
• The term operator is used to denote the description of an action in terms of
which state will be reached by carrying out the action in a particular state. (An
alternate formulation uses a successor function S. Given a particular state x,
S(x) returns the set of states reachable from x by any single action.)
• State space of the problem: the set of all states reachable from the initial
state by any sequence of actions. A path in the state space is simply any
sequence of actions leading from one state to another.
• The goal test, which the agent can apply to a single state description to
determine if it is a goal state.
• A path cost function is a function that assigns a cost to a path
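The elements above (initial state, successor function, goal test, path cost) can be collected into a problem class; the sketch below is illustrative, and its method names are assumptions rather than a standard API.

```python
# A sketch of the problem elements listed above as a Python class;
# the method names are illustrative, not from any particular library.
class Problem:
    def __init__(self, initial_state, goal_state):
        self.initial_state = initial_state
        self.goal_state = goal_state

    def successors(self, state):
        """The successor function S(x): the (operator, state) pairs
        reachable from x by any single action."""
        raise NotImplementedError

    def goal_test(self, state):
        """Apply the goal test to a single state description."""
        return state == self.goal_state

    def path_cost(self, cost_so_far, state, operator, next_state):
        """Assign a cost to a path; by default each step costs 1."""
        return cost_so_far + 1
```

A concrete problem subclasses this and supplies its own successor function.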
• Measuring problem-solving performance
• The effectiveness of a search can be measured in at least three ways.
• First, does it find a solution at all?
• Second, is it a good solution (one with a low path cost)?
• Third, what is the search cost associated with the time and memory
required to find a solution?
• The total cost of the search is the sum of the path cost and the search
cost.
Example Problems
Toy problems
The 8-puzzle:
The 8-puzzle, an instance of which is shown in Figure, consists of a 3x3
board with eight numbered tiles and a blank space. A tile adjacent to
the blank space can slide into the space. The object is to reach the
configuration shown on the right of the figure.
• This leads us to the following formulation:
• States: a state description specifies the location of each of the eight
tiles in one of the nine squares. For efficiency, it is useful to include
the location of the blank.
• Operators: blank moves left, right, up, or down.
• Goal test: state matches the goal configuration shown in Figure.
• Path cost: each step costs 1, so the path cost is just the length of the
path.
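The operator set above can be sketched as a successor function. Encoding the state as a 9-tuple in row-major order, with 0 marking the blank, is an assumption made for illustration.

```python
# Successor function for the 8-puzzle operators above. The state is a
# 9-tuple in row-major order with 0 for the blank; this encoding is an
# assumption made for illustration.
MOVES = {'Left': -1, 'Right': +1, 'Up': -3, 'Down': +3}

def successors(state):
    """Yield (operator, next_state) pairs for every legal blank move."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for op, delta in MOVES.items():
        # Skip moves that would slide the blank off the board.
        if (op == 'Left' and col == 0) or (op == 'Right' and col == 2):
            continue
        if (op == 'Up' and row == 0) or (op == 'Down' and row == 2):
            continue
        target = blank + delta
        new = list(state)
        new[blank], new[target] = new[target], new[blank]
        yield op, tuple(new)
```

A center blank has four successors; a corner blank has only two.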
The 8-queens problem
• The eight queens puzzle is the problem of placing eight chess queens
on an 8×8 chessboard so that no two queens threaten each other;
thus, a solution requires that no two queens share the same row,
column, or diagonal.
• This figure doesn’t represent a solution because the queen in the first
column is on the same diagonal as the queen in the last column.
• Goal test: 8 queens on board, none attacked.
• Path cost: zero.
• States: any arrangement of 0 to 8 queens on board.
• Operators: add a queen to any square.
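The goal test above can be sketched in code. Representing a candidate as a list where `queens[c]` gives the row of the queen in column `c` (one queen per column) is an assumed encoding for illustration.

```python
# Sketch of the goal test for the 8-queens problem. A candidate is a
# list where queens[c] gives the row of the queen in column c; this
# one-queen-per-column encoding is an assumption for illustration.
def no_two_attack(queens):
    for c1 in range(len(queens)):
        for c2 in range(c1 + 1, len(queens)):
            same_row = queens[c1] == queens[c2]
            # Two queens share a diagonal when their row distance
            # equals their column distance.
            same_diagonal = abs(queens[c1] - queens[c2]) == c2 - c1
            if same_row or same_diagonal:
                return False
    return True
```

One known solution is `[0, 4, 7, 5, 2, 6, 1, 3]`.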
• The vacuum world
• States: one of the eight states shown in Figure 3.2 (or Figure 3.6).
• Operators: move left, move right, suck.
• Goal test: no dirt left in any square.
• Path cost: each action costs 1.
• Cryptarithmetic
• In cryptarithmetic problems, letters stand for digits and the aim is to
find a substitution of digits for letters such that the resulting sum is
arithmetically correct. Usually, each letter must stand for a different
digit. The following is a well-known example
• The following formulation is probably the simplest:
• States: a cryptarithmetic puzzle with some letters replaced by digits.
• Operators: replace all occurrences of a letter with a digit not already
appearing in the puzzle.
• Goal test: puzzle contains only digits, and represents a correct sum.
• Path cost: zero. All solutions equally valid.
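The slide's own example is elided, so the sketch below brute-forces the classic instance SEND + MORE = MONEY (an assumed example) under the formulation above: each letter stands for a distinct digit and leading letters are nonzero.

```python
from itertools import permutations

# Brute-force sketch for the classic cryptarithmetic instance
# SEND + MORE = MONEY (an assumed example, since the slide's own
# puzzle is not shown): distinct digits per letter, no leading zeros.
def solve_send_more_money():
    a = {'M': 1}  # M is the carry into the leading column, so M = 1
    letters = 'SENDORY'
    pool = [d for d in range(10) if d != 1]
    for digits in permutations(pool, len(letters)):
        a.update(zip(letters, digits))
        if a['S'] == 0:
            continue
        if (a['D'] + a['E']) % 10 != a['Y']:  # cheap units-column filter
            continue
        send = int(''.join(str(a[c]) for c in 'SEND'))
        more = int(''.join(str(a[c]) for c in 'MORE'))
        money = int(''.join(str(a[c]) for c in 'MONEY'))
        if send + more == money:
            return send, more, money
```

The unique assignment is 9567 + 1085 = 10652.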
• Missionaries and cannibals
• In the missionaries and cannibals problem, three missionaries and three
cannibals must cross a river using a boat which can carry at most two
people, under the constraint that, for both banks, if there are missionaries
present on the bank, they cannot be outnumbered by cannibals (if they
were, the cannibals would eat the missionaries). The boat cannot cross the
river by itself with no people on board.
• Find the smallest number of crossings that will allow everyone to cross
the river safely.
• States: combination of missionaries, cannibals, and boat on each side of the river.
Thus, the start state is (3, 3, 1).
• Operators: from each state the possible operators are to take either one
missionary, one cannibal, two missionaries, two cannibals, or one of each across in
the boat. (Load the boat with at most two people and cross the river in either
direction.)
• Goal test: reached state (0,0,0)-Move all missionaries and cannibals across the
river.
• Path cost: number of crossings (require the minimum number of crossings).
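The formulation above can be solved with a short breadth-first search. Encoding a state as (missionaries, cannibals, boat) counted on the start bank is an assumption made for illustration.

```python
from collections import deque

# BFS sketch for missionaries and cannibals. A state (m, c, b) counts
# missionaries, cannibals, and the boat on the start bank; this
# encoding is an assumption made for illustration.
def safe(m, c):
    # Each bank is safe when its missionaries are absent or not
    # outnumbered by its cannibals; check both banks.
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def solve():
    start, goal = (3, 3, 1), (0, 0, 0)
    loads = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # boat occupants
    frontier = deque([(start, [start])])
    visited = {start}
    while frontier:
        (m, c, b), path = frontier.popleft()
        if (m, c, b) == goal:
            return path
        sign = -1 if b == 1 else 1        # direction of the crossing
        for dm, dc in loads:
            state = (m + sign * dm, c + sign * dc, 1 - b)
            nm, nc, _ = state
            if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc) \
                    and state not in visited:
                visited.add(state)
                frontier.append((state, path + [state]))
```

Because BFS finds a shallowest path, `solve()` returns a plan of 12 states, i.e. the classic minimum of 11 crossings.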
Real-world problems
• Route finding
Route-finding algorithms are used in a variety of applications, such as
routing in computer networks, automated travel advisory systems, and
airline travel planning systems.
• Touring and travelling sales person problems
The travelling salesperson problem (TSP) is a famous touring problem in
which each city must be visited exactly once. The aim is to find the
shortest tour.
• VLSI layout
A typical VLSI chip can have as many as a million gates, and the
positioning and connections of every gate are crucial to the successful
operation of the chip. Computer-aided design tools are used in every
phase of the process. Two of the most difficult tasks are cell layout and
channel routing. These come after the components and connections of
the circuit have been fixed; the purpose is to lay out the circuit on the
chip so as to minimize area and connection lengths, thereby
maximizing speed.
• Robot navigation
Robot navigation is a generalization of the route-finding problem.
• Assembly sequencing
Automatic assembly of complex objects by a robot was first
demonstrated by FREDDY the robot (Michie, 1972). Progress since then
has been slow but sure, to the point where assembly of objects such as
electric motors is economically feasible. In assembly problems, the
problem is to find an order in which to assemble the parts of some
object. If the wrong order is chosen, there will be no way to add some
part later in the sequence without undoing some of the work already
done.
SEARCHING FOR SOLUTIONS
• Data structures for search trees: there are many ways to represent nodes; we will
assume a node is a data structure with five components:
• the state in the state space to which the node corresponds
• the node in the search tree that generated this node (this is called the parent
node);
• the operator that was applied to generate the node;
• the number of nodes on the path from the root to this node (the depth of the
node);
• the path cost of the path from the initial state to the node.
• The node data type is thus:
datatype node
components: STATE, PARENT-NODE, OPERATOR, DEPTH, PATH-COST
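The five-component node data type above can be sketched as a Python dataclass; the field and helper names are illustrative.

```python
from dataclasses import dataclass
from typing import Any, Optional

# The five-component node data type above as a Python dataclass
# (field and helper names are illustrative).
@dataclass
class Node:
    state: Any                       # state this node corresponds to
    parent: Optional['Node'] = None  # node that generated this one
    operator: Optional[str] = None   # operator applied to generate it
    depth: int = 0                   # length of the path from the root
    path_cost: float = 0.0           # cost from the initial state

def child_node(parent, operator, state, step_cost=1):
    """Build a child whose depth and path cost extend the parent's."""
    return Node(state, parent, operator,
                parent.depth + 1, parent.path_cost + step_cost)
```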
SEARCH STRATEGIES
• we will evaluate strategies in terms of four criteria:
• COMPLETENESS: is the strategy guaranteed to find a solution when
there is one?
• TIME COMPLEXITY :how long does it take to find a solution?
• SPACE COMPLEXITY :how much memory does it need to perform the
search?
• OPTIMALITY: does the strategy find the highest-quality solution when
there are several different solutions?
Search Algorithm Terminologies
• Search space: the set of possible solutions the system may have.
• Goal test: a function that observes the current state and returns whether the goal state has been reached.
• Search tree: A tree representation of search problem is called Search tree. The root of the search tree is the root node
which is corresponding to the initial state.
• Actions: It gives the description of all the available actions to the agent.
• Transition model: a description of what each action does can be represented as a transition model.
• Solution: It is an action sequence which leads from the start node to the goal node.
• Optimal Solution: a solution that has the lowest cost among all solutions.
• Uninformed search: The term means that they have no information
about the number of steps or the path cost from the current state to
the goal—all they can do is distinguish a goal state from a nongoal
state. Uninformed search is also sometimes called blind search.
• Informed search or heuristic search: an informed search algorithm uses
knowledge such as how far we are from the goal, the path cost, and how
to reach the goal node. This knowledge helps the agent explore less of
the search space and find the goal node more efficiently. Informed
search is most useful for large search spaces. Because it uses the idea
of a heuristic, it is also called heuristic search.
• Heuristic function: a heuristic is a function used in informed search
that finds the most promising path.
BREADTH-FIRST SEARCH
• Breadth-first search is the most common search strategy for
traversing a tree or graph.
• This algorithm searches breadthwise in a tree or graph, so it is called
breadth-first search.
• The BFS algorithm starts searching from the root node of the tree and
expands all successor nodes at the current level before moving to
nodes of the next level.
• The breadth-first search algorithm is an example of a general-graph
search algorithm.
• Breadth-first search is implemented using a FIFO queue data structure.
Advantages:
• BFS will provide a solution if any solution exists.
• If there is more than one solution for a given problem, BFS will
provide the minimal solution, i.e. the one requiring the least number of steps.
Disadvantages:
• It requires lots of memory, since each level of the tree must be saved into
memory in order to expand the next level.
• BFS needs lots of time if the solution is far away from the root node.
Example :
In the tree structure below, we show the traversal of the tree using the BFS
algorithm from the root node S to goal node K. BFS traverses in layers, so it
follows the path shown by the dotted arrow, and the traversal order is:
S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
Time Complexity:
• The time complexity of BFS is given by the number of nodes generated
down to the shallowest solution: T(b) = 1 + b + b² + ... + b^d = O(b^d), where
d is the depth of the shallowest solution and b is the branching factor (the
maximum number of successors of any node).
Space Complexity:
• The space complexity of BFS is given by the memory size of the frontier,
which is O(b^d).
Completeness:
• BFS is complete, which means if the shallowest goal node is at some finite
depth, then BFS will find a solution.
Optimality:
• BFS is optimal if path cost is a non-decreasing function of the depth of the
node.
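The BFS strategy above can be sketched as a short function over a graph given as an adjacency dict; this representation, and the small example graph, are assumptions made for illustration (the example is not the figure's tree).

```python
from collections import deque

# A sketch of breadth-first search with a FIFO queue, over a graph
# given as an adjacency dict (an assumed representation; the example
# graph below is illustrative, not the figure's tree).
def bfs(graph, start, goal):
    frontier = deque([[start]])   # FIFO queue of paths, shortest first
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path           # shallowest path to the goal
        for succ in graph.get(node, []):
            if succ not in visited:
                visited.add(succ)
                frontier.append(path + [succ])
    return None                   # no solution exists

example_graph = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['D'],
                 'C': [], 'D': ['G'], 'G': []}
```

For instance, `bfs(example_graph, 'S', 'G')` returns the shallowest path `['S', 'A', 'D', 'G']`.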
DEPTH-FIRST SEARCH
• It is called the depth-first search because it starts from the root node and follows
each path to its greatest depth node before moving to the next path.
• DFS uses a stack data structure for its implementation.
Advantage:
• DFS requires much less memory, as it only needs to store the stack of nodes on
the path from the root node to the current node.
• It can take less time than BFS to reach the goal node (if it happens to traverse
the right path).
Disadvantage:
• There is the possibility that many states keep re-occurring, and there is no
guarantee of finding the solution.
• The DFS algorithm searches deep down and may descend into an infinite loop
(for example, along an infinite path).
• It will start searching from root node S and traverse A, then B, then D and
E. After traversing E, it will backtrack, as E has no other successor and the
goal node has not yet been found. After backtracking it will traverse node C
and then G, where it terminates because the goal node is found.
• Completeness: DFS search algorithm is complete within finite state space as
it will expand every node within a limited search tree.
• Time Complexity: the time complexity of DFS is proportional to the number of
nodes traversed by the algorithm: T(n) = 1 + b + b² + ... + b^m = O(b^m),
where m is the maximum depth of any node; m can be much larger than d
(the depth of the shallowest solution).
• Space Complexity: DFS algorithm needs to store only single path from the
root node, hence space complexity of DFS is equivalent to the size of the
fringe set, which is O(bm).
• Optimal: DFS search algorithm is non-optimal, as it may generate a large
number of steps or high cost to reach to the goal node.
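The DFS strategy above can be sketched with an explicit stack over the same assumed adjacency-dict graph representation.

```python
# A sketch of depth-first search with an explicit stack, over a graph
# given as an adjacency dict (an assumed representation, chosen for
# illustration).
def dfs(graph, start, goal):
    stack = [[start]]             # stack of paths, deepest on top
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            return path
        # Push successors in reverse so the leftmost is explored first.
        for succ in reversed(graph.get(node, [])):
            if succ not in path:  # avoid cycles along the current path
                stack.append(path + [succ])
    return None                   # searched everything, no goal found
```

Unlike BFS, the path returned is the first one found by deep exploration and need not be the shortest.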
INFORMED SEARCH (Heuristic Search)
• Informed search algorithms use domain knowledge.
• In an informed search, problem information is available which can guide
the search. Informed search strategies can find a solution more efficiently
than an uninformed search strategy.
• Informed search is also called a Heuristic search.
• A heuristic is a technique that is not guaranteed to always find the best
solution, but is designed to find a good solution in a reasonable time.
• A heuristic function estimates how close a state is to the goal. It is
represented by h(n), and it estimates the cost of an optimal path between
a pair of states. The value of the heuristic function is always non-negative.
• Admissibility of the heuristic function is given as: h(n) <= h*(n), where h*(n)
is the true cost of an optimal path from n to the goal. An admissible heuristic
never overestimates the cost of reaching the goal.
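As an example, the misplaced-tiles count for the 8-puzzle is a standard admissible heuristic: every misplaced tile needs at least one move, so the count never overestimates the true cost, giving h(n) <= h*(n). Encoding states as 9-tuples with 0 for the blank is an assumption for illustration.

```python
# Misplaced-tiles heuristic for the 8-puzzle, a standard example of an
# admissible h(n): each misplaced tile needs at least one move, so the
# count never overestimates h*(n). States are 9-tuples with 0 for the
# blank (an assumed encoding).
def misplaced_tiles(state, goal):
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)
```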