
Introduction to Artificial Intelligence
II – II CSE(AI&ML)
Anurag Engineering College
Y LAXMI PRASANNA
UNIT-I

• Introduction to AI
• Intelligent Agents
• Problem-Solving Agents
• Searching for Solutions
• Breadth-first search
• Depth-first search
• Hill-climbing search
• Simulated annealing search
• Local Search in Continuous Spaces
Introduction to AI

• Artificial Intelligence is concerned with the design of intelligence in an artificial device. The term was coined by John McCarthy in 1956.
• Intelligence is the ability to acquire, understand, and apply knowledge to achieve goals in the world.
• AI is the study of intellectual/mental processes as computational processes.
• An AI program demonstrates a high level of intelligence, to a degree that equals or exceeds the intelligence required of a human performing the same task.
• AI is unique in sharing borders with Mathematics, Computer Science, Philosophy, Biology, Cognitive Science, and many other fields.
• Although there is no clear definition of AI, or even of intelligence, it can be described as an attempt to build machines that, like humans, can think and act, and that can learn and use knowledge to solve problems on their own.
Foundations of Artificial Intelligence:
• Philosophy
e.g., foundational issues (can a machine think?), issues of knowledge and belief, mutual knowledge
• Psychology and Cognitive Science
e.g., problem-solving skills
• Neuroscience
e.g., brain architecture
• Computer Science and Engineering
e.g., complexity theory, algorithms, logic and inference, programming languages, and system building
• Mathematics and Physics
Applications of AI:
1) Business: financial strategies
2) Engineering: checking designs, offering suggestions for new products, expert systems for engineering problems
3) Manufacturing: assembly, inspection, and maintenance
4) Medicine: monitoring, diagnosis
5) Education: teaching
6) Fraud detection
7) Object identification
8) Information retrieval
9) Space shuttle scheduling
The Definitions of AI:
Intelligent Systems:
• In order to design intelligent systems, it is important to categorize them into four categories:
• 1. Systems that think like humans
• 2. Systems that think rationally
• 3. Systems that behave like humans
• 4. Systems that behave rationally
• Scientific Goal: To determine which ideas about knowledge representation, learning, rule systems, search, and so on explain various sorts of real intelligence.
• Engineering Goal: To solve real-world problems using AI techniques such as knowledge representation, learning, rule systems, search, and so on.
• Traditionally, computer scientists and engineers have been more interested in the engineering goal, while psychologists, philosophers, and cognitive scientists have been more interested in the scientific goal.
• Laws of thought: Think rationally
• Cognitive Science: Think human-like
• Turing Test: Act human-like
• Rational agent: Act rationally

The difference between strong AI and weak AI:
• Strong AI: Strong-AI-powered machines can exhibit human-level cognitive abilities.
• Weak AI: Weak-AI-powered machines do not have a mind of their own; Alexa and Siri are well-known examples.
AI Problems:
• AI problems (speech recognition, NLP, vision, automatic programming, knowledge representation, etc.) can be paired with techniques (neural networks, search, Bayesian nets, production systems, etc.).
• AI problems can be classified into two types:
1. Common-place tasks (mundane tasks)
2. Expert tasks
Common-Place Tasks:
• 1. Recognizing people, objects.
• 2. Communicating (through natural language).
• 3. Navigating around obstacles on the streets.
These tasks are done routinely by people and some other animals.
Expert tasks:
• 1. Medical diagnosis.
• 2. Mathematical problem solving
• 3. Playing games like chess
These tasks cannot be done by all people, and can only be performed
by skilled specialists.
The State of the Art: What Can AI Do Today?
• Autonomous planning and scheduling (NASA's Remote Agent program)
• Game playing (IBM's Deep Blue)
• Autonomous control (the ALVINN computer vision system, trained to steer a car)
• Diagnosis
• Logistics planning (DART, the Dynamic Analysis and Replanning Tool)
• Robotics (HipNav: a 3D model shows the internal anatomy of the patient and guides insertion of a hip replacement)
• Language understanding and problem solving (the PROVERB program solves crossword puzzles better than humans)
Intelligent Agents
Agents and Environments:
• An Agent is anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through actuators.
• A human agent has eyes, ears, and other organs for sensors and hands, legs,
mouth, and other body parts for actuators.
• A robotic agent might have cameras and infrared range finders for sensors
and various motors for actuators.
• A software agent receives keystrokes, file contents, and network packets as
sensory inputs and acts on the environment by displaying on the screen,
writing files, and sending network packets.
• Percept: We use the term percept to refer to the agent's perceptual
inputs at any given instant.
• Percept Sequence: An agent's percept sequence is the complete
history of everything the agent has ever perceived.
• Agent function: Mathematically speaking, we say that an agent's
behavior is described by the agent function that maps any given
percept sequence to an action.
• Agent program: Internally, the agent function for an artificial agent will be implemented by an agent program. It is important to keep these two ideas distinct: the agent function is an abstract mathematical description; the agent program is a concrete implementation, running on the agent architecture.
Example: the vacuum-cleaner world
• This particular world has just two locations: squares A and B.
• The vacuum agent perceives which square it is in and whether there is dirt in the square. It can choose to move left, move right, suck up the dirt, or do nothing.
• One very simple agent function is the following: if the current square
is dirty, then suck, otherwise move to the other square.
An Effector is any device that affects the physical environment. An actuator is the actual mechanism that
enables the effector to execute an action.
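A minimal Python sketch of this agent function follows, assuming a percept of the form (location, status) with squares named 'A' and 'B'; the names are illustrative, not a fixed convention.

def vacuum_agent(percept):
    # Map a percept (location, status) to an action: suck if dirty,
    # otherwise move to the other square.
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:
        return 'Left'

print(vacuum_agent(('A', 'Dirty')))   # Suck
print(vacuum_agent(('A', 'Clean')))   # Right
print(vacuum_agent(('B', 'Clean')))   # Left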
THE CONCEPT OF RATIONALITY

• Rationality is the state of being reasonable, sensible, and having good judgment.
• Rationality is concerned with expected actions and results, depending on what the agent has perceived. Performing actions with the aim of obtaining useful information is an important part of rationality.
• What is an ideal rational agent?
• An ideal rational agent is one that is capable of taking the expected actions to maximize its performance measure, on the basis of:
• Its percept sequence
• Its built-in knowledge base
The Structure of Agents
• An agent's structure can be viewed as:
• Agent = Architecture + Agent Program
• Architecture = the machinery that the agent executes on.
• Agent Program = an implementation of an agent function.
We will consider five types of agent program:
• Simple reflex agents
• Model-based reflex agents (agents that keep track of the world)
• Goal-based agents
• Utility-based agents
• Learning agents
1. Simple Reflex Agents
• Simple reflex agents are the simplest agents. They make decisions on the basis of the current percept and ignore the rest of the percept history.
• These agents only succeed in fully observable environments.
• A simple reflex agent does not consider any part of the percept history during its decision and action process.
• A simple reflex agent works on condition-action rules, meaning it maps the current state directly to an action. For example, a room-cleaner agent acts only if there is dirt in the room.
• Problems with the simple reflex agent design approach:
• They have very limited intelligence.
• They have no knowledge of non-perceptual parts of the current state.
• They are not adaptive to changes in the environment.
2. Model-Based Reflex Agents
• These agents use a model of the world to choose their actions, and they maintain an internal state.
• Model: knowledge about "how things happen in the world".
• Internal state: a representation of unobserved aspects of the current state, based on the percept history.
• Updating the state requires information about:
• How the world evolves.
• How the agent's actions affect the world.
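A hedged Python skeleton of this idea is shown below; the class layout, rule format, and the very simple update_state are assumptions for illustration, since a real agent would also encode "how the world evolves" in the model.

class ModelBasedReflexAgent:
    def __init__(self, rules):
        self.state = {}         # internal state: model of unobserved aspects
        self.rules = rules      # list of (condition, action) pairs

    def update_state(self, percept):
        # Fold the new percept into the internal world model. A fuller
        # model would also apply knowledge of how the last action
        # changed the world.
        self.state.update(percept)

    def act(self, percept):
        self.update_state(percept)
        for condition, action in self.rules:
            if condition(self.state):
                return action
        return 'NoOp'

agent = ModelBasedReflexAgent([(lambda s: s.get('dirty'), 'Suck')])
print(agent.act({'dirty': True}))   # Suck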
3. Goal-Based Agents
• Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
• The agent needs to know its goal, which describes desirable situations.
• Goal-based agents extend the capabilities of model-based agents by adding "goal" information.
• They choose actions so as to achieve the goal.
• These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved. Such consideration of different scenarios is called searching and planning, which makes an agent proactive.
4. Utility-Based Agents
• These agents are similar to goal-based agents but add a utility measure, which distinguishes them by providing a measure of success in a given state.
• Utility-based agents act based not only on goals but also on the best way to achieve the goal.
• A utility-based agent is useful when there are multiple possible alternatives and the agent has to choose the best action.
• The utility function maps each state to a real number, indicating how well an action achieves the goals.
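The sketch below illustrates the last point: a utility function maps states to real numbers, and the agent picks the action whose predicted outcome scores highest. The cleaning-world transition model and scores are illustrative assumptions.

def utility(state):
    # Hypothetical scoring: fewer dirty squares means higher utility.
    return -len(state['dirty_squares'])

def result(state, action):
    # Hypothetical transition model for a small cleaning world.
    dirty = set(state['dirty_squares'])
    if action == 'Suck':
        dirty.discard(state['location'])
    return {'location': state['location'], 'dirty_squares': dirty}

def best_action(state, actions):
    # Choose the action whose predicted successor has the highest utility.
    return max(actions, key=lambda a: utility(result(state, a)))

state = {'location': 'A', 'dirty_squares': {'A', 'B'}}
print(best_action(state, ['Suck', 'NoOp']))   # Suck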
Properties of Environments
Environments come in several flavors. The principal distinctions to be
made are as follows:
• Fully observable vs. partially observable
• Deterministic vs. nondeterministic (stochastic)
• Episodic vs. nonepisodic (sequential)
• Single-agent vs. multi-agent
• Static vs. dynamic
• Discrete vs. continuous
PROBLEM SOLVING AGENTS
• Intelligent agents are supposed to act in such a way that the
environment goes through a sequence of states that maximizes the
performance measure. In its full generality, this specification is
difficult to translate into a successful agent design.
• The task is somewhat simplified if the agent can adopt a goal and aim
to satisfy it.
• Goal formulation, based on the current situation, is the first step in
problem solving.
• As well as formulating a goal, the agent may wish to decide on some
other factors that affect the desirability of different ways of achieving
the goal.
• Problem formulation is the process of deciding what actions and
states to consider, and follows goal formulation.
• Assume that the agent will consider actions at the level of driving
from one major town to another. The states it will consider therefore
correspond to being in a particular town.
• In general, then, an agent with several immediate options of unknown value can decide what to do by first examining different possible sequences of actions that lead to states of known value, and then choosing the best one. This process of looking for such a sequence is called search.
• A search algorithm takes a problem as input and returns a solution in
the form of an action sequence. Once a solution is found, the actions
it recommends can be carried out. This is called the execution phase.
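The formulate-search-execute cycle just described can be sketched as follows; the helper functions (formulate_goal, formulate_problem, search) are placeholders that a concrete problem would supply.

def simple_problem_solving_agent(state, plan,
                                 formulate_goal, formulate_problem, search):
    # If no plan remains, formulate a goal and a problem, then search
    # for an action sequence (the solution). Otherwise execute the plan
    # one action at a time.
    if not plan:
        goal = formulate_goal(state)
        problem = formulate_problem(state, goal)
        plan = list(search(problem) or [])
    action = plan.pop(0) if plan else None
    return action, plan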
Formulating Problems
• Well-Defined Problems and Solutions
• A problem is really a collection of information that the agent will use
to decide what to do.
• The basic elements of a problem definition are the states and actions.
To capture these formally, we need the following:
• The initial state that the agent knows itself to be in.
• The term operator is used to denote the description of an action in terms of
which state will be reached by carrying out the action in a particular state. (An
alternate formulation uses a successor function S. Given a particular state x,
S(x) returns the set of states reachable from x by any single action.)
• The state space of the problem: the set of all states reachable from the initial state by any sequence of actions. A path in the state space is simply any sequence of actions leading from one state to another.
• The goal test, which the agent can apply to a single state description to determine whether it is a goal state.
• A path cost function, which assigns a cost to a path.
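One convenient way to bundle these elements in Python is shown below; this is a sketch of the four components just listed, not a prescribed API. The successors field plays the role of the successor function S(x).

from dataclasses import dataclass
from typing import Any, Callable, Iterable, Tuple

@dataclass
class Problem:
    initial_state: Any
    successors: Callable[[Any], Iterable[Tuple[Any, Any]]]  # S(x) as (operator, state) pairs
    goal_test: Callable[[Any], bool]
    path_cost: Callable[[list], float]   # assigns a cost to a path (action sequence)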
• Measuring problem-solving performance
• The effectiveness of a search can be measured in at least three ways:
• First, does it find a solution at all?
• Second, is it a good solution (one with a low path cost)?
• Third, what is the search cost, in terms of the time and memory required to find a solution?
• The total cost of the search is the sum of the path cost and the search cost.
Example Problems
Toy problems
The 8-puzzle:
The 8-puzzle, an instance of which is shown in the figure, consists of a 3x3 board with eight numbered tiles and a blank space. A tile adjacent to the blank space can slide into the space. The object is to reach the configuration shown on the right of the figure.
• This leads us to the following formulation:
• States: a state description specifies the location of each of the eight
tiles in one of the nine squares. For efficiency, it is useful to include
the location of the blank.
• Operators: blank moves left, right, up, or down.
• Goal test: state matches the goal configuration shown in Figure.
• Path cost: each step costs 1, so the path cost is just the length of the
path.
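A Python sketch of this formulation follows; states are 9-tuples read row by row, with 0 standing for the blank, and the goal layout shown is an assumption for the example.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal configuration

def successors(state):
    # Yield (operator, next_state) pairs: the blank moves left, right, up, or down.
    i = state.index(0)                # position of the blank on the 3x3 board
    row, col = divmod(i, 3)
    moves = {'Left': -1, 'Right': +1, 'Up': -3, 'Down': +3}
    for move, delta in moves.items():
        if (move == 'Left' and col == 0) or (move == 'Right' and col == 2):
            continue
        if (move == 'Up' and row == 0) or (move == 'Down' and row == 2):
            continue
        s = list(state)
        j = i + delta
        s[i], s[j] = s[j], s[i]       # slide the adjacent tile into the blank
        yield move, tuple(s)

def goal_test(state):
    return state == GOAL              # path cost: each step costs 1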
The 8-queens problem
• The eight queens puzzle is the problem of placing eight chess queens
on an 8×8 chessboard so that no two queens threaten each other;
thus, a solution requires that no two queens share the same row,
column, or diagonal.

• This figure doesn’t represent a solution because the queen in the first
column is on the same diagonal as the queen in the last column.
• States: any arrangement of 0 to 8 queens on the board.
• Operators: add a queen to any square.
• Goal test: 8 queens on the board, none attacked.
• Path cost: zero.
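The goal test can be made concrete with a small conflict counter. Here a state is a list of column positions, one queen per row (this representation already rules out same-row attacks); the sample state is one of the 92 known solutions.

def attacks(state):
    # Count attacking pairs: same column, or same diagonal
    # (|column difference| equal to |row difference|).
    count = 0
    for r1 in range(len(state)):
        for r2 in range(r1 + 1, len(state)):
            if state[r1] == state[r2]:
                count += 1
            elif abs(state[r1] - state[r2]) == r2 - r1:
                count += 1
    return count

solution = [0, 4, 7, 5, 2, 6, 1, 3]   # a known 8-queens solution
print(attacks(solution))               # 0, so the goal test passes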
• The vacuum world
• States: one of the eight states shown in Figure 3.2 (or Figure 3.6).
• Operators: move left, move right, suck.
• Goal test: no dirt left in any square.
• Path cost: each action costs 1.
• Cryptarithmetic
• In cryptarithmetic problems, letters stand for digits, and the aim is to find a substitution of digits for letters such that the resulting sum is arithmetically correct. Usually, each letter must stand for a different digit. A well-known example is SEND + MORE = MONEY.
• The following formulation is probably the simplest:
• States: a cryptarithmetic puzzle with some letters replaced by digits.
• Operators: replace all occurrences of a letter with a digit not already
appearing in the puzzle.
• Goal test: puzzle contains only digits, and represents a correct sum.
• Path cost: zero. All solutions equally valid.
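A brute-force Python solver for SEND + MORE = MONEY is sketched below; it simply tries digit assignments until the sum is correct, which is the "replace letters with digits" operator taken to exhaustion. It is slow but simple, and the puzzle's unique solution makes it a handy check.

from itertools import permutations

def solve_send_more_money():
    letters = 'SENDMORY'                     # the 8 distinct letters
    for digits in permutations(range(10), len(letters)):
        a = dict(zip(letters, digits))
        if a['S'] == 0 or a['M'] == 0:       # no leading zeros
            continue
        send  = a['S'] * 1000 + a['E'] * 100 + a['N'] * 10 + a['D']
        more  = a['M'] * 1000 + a['O'] * 100 + a['R'] * 10 + a['E']
        money = (a['M'] * 10000 + a['O'] * 1000 + a['N'] * 100
                 + a['E'] * 10 + a['Y'])
        if send + more == money:             # goal test: a correct sum
            return a

print(solve_send_more_money())
# {'S': 9, 'E': 5, 'N': 6, 'D': 7, 'M': 1, 'O': 0, 'R': 8, 'Y': 2}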
• Missionaries and cannibals
• In the missionaries and cannibals problem, three missionaries and three
cannibals must cross a river using a boat which can carry at most two
people, under the constraint that, for both banks, if there are missionaries
present on the bank, they cannot be outnumbered by cannibals (if they
were, the cannibals would eat the missionaries). The boat cannot cross the
river by itself with no people on board.
• Find the smallest number of crossings that will allow everyone to cross
the river safely.
• States: the combination of missionaries, cannibals, and the boat on each side of the river. Thus, the start state is (3,3,1).
• Operators: from each state, the possible operators are to take one missionary, one cannibal, two missionaries, two cannibals, or one of each across in the boat. (Row the boat, with at most two persons, across the river in either direction.)
• Goal test: reached state (0,0,0), i.e., moved all missionaries and cannibals across the river.
• Path cost: the number of crossings (the minimum number of crossings is required).
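The state checks for this formulation can be sketched as follows; a state is (missionaries, cannibals, boat) counting what remains on the starting bank. Plugging successors into a shortest-path search, such as the BFS shown later, finds the minimum number of crossings.

START, GOAL = (3, 3, 1), (0, 0, 0)
MOVES = [(1, 0), (0, 1), (2, 0), (0, 2), (1, 1)]   # boat loads of at most two

def is_valid(m, c):
    # Missionaries may never be outnumbered on either bank (if any are present).
    if not (0 <= m <= 3 and 0 <= c <= 3):
        return False
    left_ok = m == 0 or m >= c
    right_ok = (3 - m) == 0 or (3 - m) >= (3 - c)
    return left_ok and right_ok

def successors(state):
    # The boat carries people away from its current bank, then switches sides.
    m, c, b = state
    sign = -1 if b == 1 else +1
    for dm, dc in MOVES:
        nm, nc = m + sign * dm, c + sign * dc
        if is_valid(nm, nc):
            yield (nm, nc, 1 - b)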
Real-world problems
• Route finding
Route-finding algorithms are used in a variety of applications, such as
routing in computer networks, automated travel advisory systems, and
airline travel planning systems.
• Touring and travelling salesperson problems
The travelling salesperson problem (TSP) is a famous touring problem in
which each city must be visited exactly once. The aim is to find the
shortest tour.
• VLSI layout
A typical VLSI chip can have as many as a million gates, and the
positioning and connections of every gate are crucial to the successful
operation of the chip. Computer-aided design tools are used in every
phase of the process. Two of the most difficult tasks are cell layout and
channel routing. These come after the components and connections of
the circuit have been fixed; the purpose is to lay out the circuit on the
chip so as to minimize area and connection lengths, thereby
maximizing speed.
• Robot navigation
Robot navigation is a generalization of the route-finding problem.
• Assembly sequencing
Automatic assembly of complex objects by a robot was first
demonstrated by FREDDY the robot (Michie, 1972). Progress since then
has been slow but sure, to the point where assembly of objects such as
electric motors is economically feasible. In assembly problems, the
problem is to find an order in which to assemble the parts of some
object. If the wrong order is chosen, there will be no way to add some
part later in the sequence without undoing some of the work already
done.
SEARCHING FOR SOLUTIONS
• Data structures for search trees: There are many ways to represent nodes; we will assume a node is a data structure with five components:
• the state in the state space to which the node corresponds
• the node in the search tree that generated this node (this is called the parent
node);
• the operator that was applied to generate the node;
• the number of nodes on the path from the root to this node (the depth of the
node);
• the path cost of the path from the initial state to the node.
• The node data type is thus:
datatype node
components: STATE, PARENT-NODE, OPERATOR, DEPTH, PATH-COST
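The same five-component node can be written as a Python dataclass; this simply mirrors the slide's datatype and is not a prescribed implementation.

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # the state this node corresponds to
    parent: Optional['Node'] = None  # the node that generated this one
    operator: Any = None             # the operator applied to generate it
    depth: int = 0                   # number of nodes from the root
    path_cost: float = 0.0           # cost of the path from the initial state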
SEARCH STRATEGIES
• We will evaluate strategies in terms of four criteria:
• COMPLETENESS: is the strategy guaranteed to find a solution when there is one?
• TIME COMPLEXITY: how long does it take to find a solution?
• SPACE COMPLEXITY: how much memory does it need to perform the search?
• OPTIMALITY: does the strategy find the highest-quality solution when there are several different solutions?
Search Algorithm Terminologies
• Search: Searching is a step-by-step procedure for solving a search problem in a given search space.
A search problem can have three main factors:
1. Search space: the set of possible solutions a system may have.
2. Start state: the state from which the agent begins the search.
3. Goal test: a function that observes the current state and returns whether the goal state has been achieved.
• Search tree: a tree representation of a search problem. The root of the search tree is the root node, which corresponds to the initial state.
• Actions: a description of all the actions available to the agent.
• Transition model: a description of what each action does.
• Path cost: a function that assigns a numeric cost to each path.
• Solution: an action sequence that leads from the start node to the goal node.
• Optimal solution: a solution with the lowest cost among all solutions.
• Uninformed search: the term means that these strategies have no information about the number of steps or the path cost from the current state to the goal; all they can do is distinguish a goal state from a non-goal state. Uninformed search is also sometimes called blind search.
• Informed search (heuristic search): an informed search algorithm uses knowledge such as how far we are from the goal, the path cost, how to reach the goal node, and so on. This knowledge helps agents explore less of the search space and find the goal node more efficiently. Informed search is more useful for large search spaces. Because it uses the idea of a heuristic, it is also called heuristic search.
• Heuristic function: a heuristic is a function used in informed search to find the most promising path.
BREADTH-FIRST SEARCH
• Breadth-first search is the most common search strategy for
traversing a tree or graph.
• This algorithm searches breadthwise in a tree or graph, so it is called
breadth-first search.
• The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.
• The breadth-first search algorithm is an example of a general-graph
search algorithm.
• Breadth-first search is implemented using a FIFO queue data structure.
Advantages:
• BFS will provide a solution if any solution exists.
• If there is more than one solution for a given problem, BFS will find the minimal solution, i.e., the one requiring the fewest steps.
Disadvantages:
• It requires a lot of memory, since each level of the tree must be saved in memory in order to expand the next level.
• BFS needs a lot of time if the solution is far from the root node.
Example:
In the tree structure below, we show traversal using the BFS algorithm from the root node S to the goal node K. The BFS algorithm traverses in layers, so it follows the path shown by the dotted arrow, and the traversed path will be: S ---> A ---> B ---> C ---> D ---> G ---> H ---> E ---> F ---> I ---> K
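A minimal BFS in Python follows; the FIFO queue is what forces level-by-level expansion. The edge lists are an assumption chosen to reproduce the traversal order above, since the original figure is not reproduced here.

from collections import deque

graph = {
    'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['G', 'H'],
    'C': ['E', 'F'], 'D': ['I'], 'G': ['K'],
    'H': [], 'E': [], 'F': [], 'I': [], 'K': [],
}

def bfs(start, goal):
    frontier = deque([start])        # FIFO queue of nodes to expand
    visited = {start}
    order = []
    while frontier:
        node = frontier.popleft()
        order.append(node)
        if node == goal:
            return order
        for child in graph[node]:
            if child not in visited:
                visited.add(child)
                frontier.append(child)
    return None

print(bfs('S', 'K'))
# ['S', 'A', 'B', 'C', 'D', 'G', 'H', 'E', 'F', 'I', 'K']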
Time Complexity:
• The time complexity of BFS is the number of nodes traversed until the shallowest goal node: T(n) = 1 + b + b^2 + ... + b^d = O(b^d), where d is the depth of the shallowest solution and b is the branching factor (the number of successors at each node).
Space Complexity:
• The space complexity of BFS is given by the memory size of the frontier, which is O(b^d).
Completeness:
• BFS is complete: if the shallowest goal node is at some finite depth, then BFS will find a solution.
Optimality:
• BFS is optimal if the path cost is a non-decreasing function of the depth of the node.
DEPTH-FIRST SEARCH
• It is called depth-first search because it starts from the root node and follows each path to its greatest depth before moving to the next path.
• DFS uses a stack data structure for its implementation.
Advantages:
• DFS requires very little memory, as it only needs to store the stack of nodes on the path from the root node to the current node.
• It takes less time than BFS to reach the goal node (if it traverses along the right path).
Disadvantages:
• There is the possibility that many states keep recurring, and there is no guarantee of finding a solution.
• The DFS algorithm searches deep down and may sometimes enter an infinite loop.
• It will start searching from root node S and traverse A, then B, then D and E. After traversing E, it will backtrack, as E has no other successor and the goal node has not yet been found. After backtracking, it will traverse node C and then G, where it terminates, having found the goal node.
• Completeness: DFS is complete within a finite state space, as it will expand every node within a bounded search tree.
• Time Complexity: the time complexity of DFS is equivalent to the number of nodes traversed by the algorithm: T(n) = 1 + b + b^2 + ... + b^m = O(b^m), where m is the maximum depth of any node; m can be much larger than d (the depth of the shallowest solution).
• Space Complexity: DFS needs to store only a single path from the root node, so its space complexity is equivalent to the size of the fringe set, which is O(bm).
• Optimality: DFS is non-optimal, as it may take many steps or incur a high cost to reach the goal node.
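A matching DFS sketch, using an explicit stack, is shown below. As with the BFS example, the edges are assumed so as to reproduce the traversal described above (S, A, B, D, E, backtrack, C, G).

graph = {
    'S': ['A'], 'A': ['B', 'C'], 'B': ['D', 'E'],
    'C': ['G'], 'D': [], 'E': [], 'G': [],
}

def dfs(start, goal):
    stack = [start]                  # LIFO stack drives depth-first order
    visited = set()
    order = []
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        if node == goal:
            return order
        # Push children in reverse so the leftmost child is expanded first.
        stack.extend(reversed(graph[node]))
    return None

print(dfs('S', 'G'))   # ['S', 'A', 'B', 'D', 'E', 'C', 'G']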
INFORMED SEARCH (Heuristic Search)
• Informed search algorithms use domain knowledge.
• In an informed search, problem information is available that can guide the search. Informed search strategies can find a solution more efficiently than uninformed strategies.
• Informed search is also called heuristic search.
• A heuristic is a technique that is not always guaranteed to find the best solution, but is guaranteed to find a good solution in reasonable time.
• A heuristic function estimates how close a state is to the goal. It is written h(n), and it estimates the cost of an optimal path from the state to a goal state. The value of the heuristic function is always positive.
• Admissibility of the heuristic function is given as:
• h(n) <= h*(n)
• Here h(n) is the heuristic estimate and h*(n) is the true optimal cost. Hence an admissible heuristic must never overestimate the true cost.
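A concrete admissible heuristic makes this tangible: Manhattan distance for the 8-puzzle sums each tile's horizontal and vertical distance from its goal square and never overestimates the true number of moves. The goal layout below is an assumption matching the earlier 8-puzzle sketch.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal layout

def manhattan(state):
    # h(n): sum over tiles of |row offset| + |column offset| from the goal.
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue                  # the blank is not counted
        g = GOAL.index(tile)
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total

print(manhattan((1, 2, 3, 4, 5, 6, 7, 0, 8)))   # 1: one move from the goal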
HILL CLIMBING SEARCH
• Hill Climbing Algorithm
• The idea behind hill climbing is as follows (a sketch in code appears after the list):
1. Pick a random point in the search space.
2. Consider all the neighbors of the current state.
3. Choose the neighbor with the best quality and move to that state.
4. Repeat steps 2 and 3 until all the neighboring states are of lower quality.
5. Return the current state as the solution state.
• Also, if two neighbors have the same evaluation and are both the best quality, the algorithm chooses between them at random.
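The steps above translate into the following generic sketch; neighbors() and value() are problem-specific callables the caller supplies, and ties among equally good neighbors are broken at random, as noted.

import random

def hill_climbing(start, neighbors, value):
    current = start
    while True:
        candidates = list(neighbors(current))
        if not candidates:
            return current
        best_value = max(value(n) for n in candidates)
        if best_value <= value(current):
            return current            # no uphill neighbor: stop (step 4)
        best = [n for n in candidates if value(n) == best_value]
        current = random.choice(best) # random tie-breaking (step 3)

# Toy usage: maximize f(x) = -(x - 7)^2 over the integers.
f = lambda x: -(x - 7) ** 2
print(hill_climbing(0, lambda x: [x - 1, x + 1], f))   # 7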
Problems with Hill Climbing
• Plateau: a plateau is an area of the state space where the evaluation
function is essentially flat. The search will conduct a random walk.
• Local maxima: a local maximum, as opposed to a global maximum, is a peak
that is lower than the highest peak in the state space. Once on a local
maximum, the algorithm will halt even though the solution may be far from
satisfactory.
• Ridges: a ridge may have steeply sloping sides, so that the search reaches
the top of the ridge with ease, but the top may slope only very gently
toward a peak. Unless there happen to be operators that move directly
along the top of the ridge, the search may oscillate from side to side,
making little progress.
SIMULATED ANNEALING SEARCH
• A hill-climbing algorithm that never makes "downhill" moves towards states with lower value (or higher cost) is guaranteed to be incomplete, because it can get stuck on a local maximum. In contrast, a purely random walk, that is, moving to a successor chosen uniformly at random from the set of successors, is complete but extremely inefficient. Simulated annealing combines hill climbing with a random walk in a way that yields both efficiency and completeness.
• It is quite similar to hill climbing. Instead of picking the best move, however, it picks a random move. If the move improves the situation, it is always accepted. Otherwise, the algorithm accepts the move with some probability less than 1: a worsening move is accepted with probability e^(ΔE/T), where ΔE < 0 is the change in value. The probability thus decreases exponentially with the "badness" of the move, the amount by which the evaluation is worsened. The probability also decreases as the "temperature" T goes down: bad moves are more likely to be allowed at the start, when the temperature is high, and become more unlikely as T decreases. One can prove that if the schedule lowers T slowly enough, the algorithm will find a global optimum with probability approaching 1.
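A sketch of the algorithm follows; the acceptance rule is exactly the one described (always accept improvements, accept a worsening move with probability e^(ΔE/T)), while the geometric cooling schedule and toy objective are illustrative assumptions.

import math, random

def simulated_annealing(start, neighbors, value,
                        t0=10.0, cooling=0.995, t_min=1e-3):
    current, t = start, t0
    while t > t_min:
        nxt = random.choice(list(neighbors(current)))
        dE = value(nxt) - value(current)
        # Accept uphill moves always; downhill moves with prob e^(dE/T), dE < 0.
        if dE > 0 or random.random() < math.exp(dE / t):
            current = nxt
        t *= cooling                  # lower the temperature each step
    return current

# Toy usage: same objective as the hill-climbing example; the random-walk
# component lets the search escape local maxima on harder surfaces.
f = lambda x: -(x - 7) ** 2
print(simulated_annealing(0, lambda x: [x - 1, x + 1], f))   # typically 7 or nearby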
• Simulated annealing was first used extensively to solve VLSI layout
problems. It has been applied widely to factory scheduling and other
large-scale optimization tasks.
LOCAL SEARCH IN CONTINUOUS SPACES
We have considered algorithms that work only in discrete environments, but real-world environments are continuous.
Local search here amounts to maximizing a continuous objective function in a multi-dimensional vector space.
This is hard to do in general.
One immediate retreat:
✔ Discretize the space near each state
✔ Apply a discrete local search strategy (e.g., stochastic hill climbing, simulated annealing)
The objective often resists a closed-form solution:
✔ Estimate an empirical gradient from small probes of the objective
✔ This amounts to greedy hill climbing in a discretized state space
One can also employ the Newton-Raphson method to find maxima.
• Continuous problems exhibit the same difficulties: plateaus, ridges, local maxima, etc.
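The "empirical gradient" idea can be sketched as finite differences plus small uphill steps; the objective, step size, and iteration count below are illustrative assumptions.

def empirical_gradient_ascent(f, x, step=0.1, eps=1e-5, iters=1000):
    # Estimate each partial derivative by probing f, then step uphill.
    x = list(x)
    for _ in range(iters):
        grad = []
        for i in range(len(x)):
            probe = x[:]
            probe[i] += eps
            grad.append((f(probe) - f(x)) / eps)   # finite-difference slope
        x = [xi + step * gi for xi, gi in zip(x, grad)]
    return x

# Toy usage: the maximum of f(x, y) = -(x-1)^2 - (y+2)^2 is at (1, -2).
f = lambda v: -(v[0] - 1) ** 2 - (v[1] + 2) ** 2
print([round(c, 3) for c in empirical_gradient_ascent(f, [0.0, 0.0])])   # [1.0, -2.0]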
