1. What is Artificial Intelligence?
INTELLIGENCE
The capacity to learn and solve problems; in particular:
• the ability to solve novel problems (i.e., to solve new problems)
• the ability to act rationally (i.e., to act based on reason)
• the ability to act like humans
3. 1.1 What is involved in intelligence?
• Ability to interact with the real world – to perceive, understand, and act – e.g., speech recognition, understanding, and synthesis; image understanding; the ability to take actions and have an effect
• Reasoning and planning – modeling the external world given input; solving new problems, planning, and making decisions; the ability to deal with unexpected problems and uncertainties
4. Continue…
• Learning and adaptation – we are continuously learning and adapting; our internal models are always being "updated" – e.g., a baby learning to categorize and recognize animals
5. History of AI
• 1923: Karel Čapek's play "Rossum's Universal Robots" (R.U.R.) opens in London – the first use of the word "robot" in English.
• 1943: Foundations for neural networks laid.
• 1945: Isaac Asimov, a Columbia University alumnus, coined the term Robotics.
• 1956: John McCarthy coined the term Artificial Intelligence. Demonstration of the first running AI program at Carnegie Mellon University.
• 1958: John McCarthy invented the LISP programming language for AI.
• 1964: Danny Bobrow's dissertation at MIT showed that computers can understand natural language well enough to solve algebra word problems correctly.
6. 1990s: Major advances in all areas of AI −
• Significant demonstrations in machine learning
• Case-based reasoning
• Multi-agent planning
• Scheduling
• Data mining, Web crawlers
• Natural language understanding and translation
• Vision, virtual reality
• Games
8. 2. ARTIFICIAL INTELLIGENCE
• It is the study of how to make computers do things at which, at the moment, people are better. The term AI is defined by different authors in their own ways, and these definitions fall into four categories:
• 1. Systems that think like humans.
• 2. Systems that act like humans.
• 3. Systems that think rationally.
• 4. Systems that act rationally.
9. 2.1 SOME DEFINITIONS OF AI
• Building systems that think like humans: "The exciting new effort to make computers think … machines with minds, in the full and literal sense" -- Haugeland, 1985. "The automation of activities that we associate with human thinking, … such as decision-making, problem solving, learning" -- Bellman, 1978.
• Building systems that act like humans: "The art of creating machines that perform functions that require intelligence when performed by people" -- Kurzweil, 1990. "The study of how to make computers do things at which, at the moment, people are better" -- Rich and Knight, 1991.
• Building systems that think rationally: "The study of mental faculties through the use of computational models" -- Charniak and McDermott, 1985. "The study of the computations that make it possible to perceive, reason, and act" -- Winston, 1992.
• Building systems that act rationally: "A field of study that seeks to explain and emulate intelligent behavior in terms of computational processes" -- Schalkoff, 1990. "The branch of computer science that is concerned with the automation of intelligent behavior" -- Luger and Stubblefield, 1993.
10. Agents
• An agent:
– Perceives and acts
– Selects actions that maximize its utility function
– Has a goal
• Environment:
– Provides the input to, and receives the output of, the agent
12. Types of Agents
• Agents can be grouped into five classes based on their degree of perceived intelligence and capability. All of these agents can improve their performance and generate better actions over time.
• These are given below:
– Simple reflex agent
– Model-based reflex agent
– Goal-based agent
– Utility-based agent
– Learning agent
13. Simple Reflex Agent
• Simple reflex agents are the simplest agents. They make decisions on the basis of the current percept and ignore the rest of the percept history.
• These agents only succeed in a fully observable environment.
• The simple reflex agent does not consider any part of the percept history during its decision and action process.
• The simple reflex agent works on the condition-action rule, which maps the current state directly to an action – for example, a room-cleaner agent that acts only if there is dirt in the room.
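
To make the condition-action idea concrete, here is a minimal Python sketch of a simple reflex vacuum-cleaner agent. The percept format and the action names are illustrative assumptions, not something prescribed by the slides:

    # A simple reflex agent: the action depends only on the current percept.
    # Assumed percept format: (location, status), e.g., ("A", "Dirty").
    def simple_reflex_vacuum_agent(percept):
        location, status = percept
        # Condition-action rules: map the current state to an action.
        if status == "Dirty":
            return "Suck"
        elif location == "A":
            return "Right"
        else:  # location == "B"
            return "Left"

    print(simple_reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
    print(simple_reflex_vacuum_agent(("B", "Clean")))  # -> Left

Note that the agent keeps no memory at all: the same percept always produces the same action, which is exactly why it needs a fully observable environment.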
15. Problems with the simple reflex agent design approach:
– They have very limited intelligence.
– They have no knowledge of the non-perceptual parts of the current state.
– The condition-action rules are mostly too big to generate and to store.
– They are not adaptive to changes in the environment.
16. Model-Based Reflex Agent
• The model-based agent can work in a partially observable environment and track the situation.
• A model-based agent has two important factors:
– Model: knowledge about "how things happen in the world"; this is why it is called a model-based agent.
– Internal state: a representation of the current state based on the percept history.
• These agents have a model – "knowledge of the world" – and perform actions based on that model.
• Updating the agent's state requires information about:
– How the world evolves.
– How the agent's actions affect the world.
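
A minimal sketch of this update cycle in Python; the two-location vacuum world and the percept format are assumptions carried over from the previous example:

    # Model-based reflex agent: keeps an internal state that is updated
    # from each new percept (a toy stand-in for a real world model).
    class ModelBasedReflexAgent:
        def __init__(self):
            self.state = {}          # internal representation of the world
            self.last_action = None

        def update_state(self, percept):
            # "How the world evolves": record the latest reading per location.
            location, status = percept
            self.state[location] = status

        def act(self, percept):
            self.update_state(percept)
            location, status = percept
            # Condition-action rule applied to the *modeled* state:
            if status == "Dirty":
                action = "Suck"
            elif self.state.get("B") != "Clean":
                action = "Right"     # our model says B may still be dirty
            else:
                action = "Left"
            self.last_action = action
            return action

    agent = ModelBasedReflexAgent()
    print(agent.act(("A", "Clean")))  # -> Right (model has no reading for B yet)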
18. Goal-Based Agents
• Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
• The agent needs to know its goal, which describes desirable situations.
• Goal-based agents expand the capabilities of the model-based agent by having the "goal" information.
• They choose an action so that they can achieve the goal.
• These agents may have to consider a long sequence of possible actions before deciding whether the goal can be achieved. Such consideration of different scenarios is called searching and planning, which makes an agent proactive.
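
As a toy illustration of choosing actions by searching (the state graph and goal below are invented for the example), a goal-based agent can plan a sequence of actions that reaches its goal:

    # Goal-based agent: picks actions by searching for a path to the goal.
    # The state graph is an assumed toy example (no cycles reachable here).
    GRAPH = {"S": ["A", "B"], "A": ["G"], "B": ["A"], "G": []}

    def plan_to_goal(start, goal):
        # Breadth-first exploration of action sequences (paths).
        frontier = [[start]]
        while frontier:
            path = frontier.pop(0)
            if path[-1] == goal:
                return path          # the sequence of states leading to the goal
            for nxt in GRAPH[path[-1]]:
                frontier.append(path + [nxt])
        return None

    print(plan_to_goal("S", "G"))    # -> ['S', 'A', 'G']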
20. Utility-Based Agents
• These agents are similar to the goal-based agent but add an extra component of utility measurement, which distinguishes them by providing a measure of success in a given state.
• A utility-based agent acts based not only on goals but also on the best way to achieve a goal.
• The utility-based agent is useful when there are multiple possible alternatives and the agent has to choose the best action among them.
• The utility function maps each state to a real number, which indicates how efficiently each action achieves the goals.
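
A minimal sketch of the idea that the utility function maps each state to a real number and the agent picks the action leading to the highest-utility state; the states, actions, and numbers are invented for illustration:

    # Utility-based agent: choose the action whose resulting state has the
    # highest utility (toy transition table and utility values, assumed).
    TRANSITIONS = {"stay": "s0", "move": "s1", "charge": "s2"}
    UTILITY = {"s0": 0.2, "s1": 0.7, "s2": 0.9}  # state -> real number

    def utility_based_choice(actions):
        return max(actions, key=lambda a: UTILITY[TRANSITIONS[a]])

    print(utility_based_choice(["stay", "move", "charge"]))  # -> charge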
22. Learning Agents
• A learning agent in AI is a type of agent that can learn from its past experiences; that is, it has learning capabilities.
• It starts out acting with basic knowledge and is then able to act and adapt automatically through learning.
• A learning agent has four main conceptual components:
– Learning element: responsible for making improvements by learning from the environment.
– Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
– Performance element: responsible for selecting external actions.
– Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
• Hence, learning agents are able to learn, analyze their performance, and look for new ways to improve it.
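
The four components can be pictured as a class skeleton; the method bodies below are illustrative assumptions, not a prescribed design:

    # Skeleton of a learning agent's four conceptual components.
    class LearningAgent:
        def __init__(self, performance_standard):
            self.standard = performance_standard  # fixed standard used by the critic
            self.rules = {}                       # knowledge used by the performance element

        def critic(self, outcome):
            # Compares an outcome against the fixed performance standard.
            return outcome - self.standard        # feedback signal

        def learning_element(self, percept, feedback):
            # Improves the rules using the critic's feedback.
            self.rules[percept] = self.rules.get(percept, 0) + feedback

        def performance_element(self, percept):
            # Selects an external action from current knowledge.
            return "explore" if self.rules.get(percept, 0) <= 0 else "exploit"

        def problem_generator(self):
            # Suggests exploratory actions that yield new experiences.
            return "try-something-new"

    agent = LearningAgent(performance_standard=1.0)
    feedback = agent.critic(outcome=0.5)          # -> -0.5 (below the standard)
    agent.learning_element("room-A", feedback)
    print(agent.performance_element("room-A"))    # -> explore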
24. Formulating Problems
• Steps performed by a problem-solving agent:
– Goal formulation: the first and simplest step in problem-solving. It organizes the steps/sequence required to formulate one goal out of multiple goals, as well as the actions needed to achieve that goal. Goal formulation is based on the current situation and the agent's performance measure (discussed below).
– Problem formulation: the most important step of problem-solving; it decides what actions should be taken to achieve the formulated goal. Five components are involved in problem formulation:
– Initial state: the starting state, or initial step of the agent towards its goal.
– Actions: a description of the possible actions available to the agent.
– Transition model: a description of what each action does.
25. Continue…
– Goal test: determines whether the given state is a goal state.
– Path cost: assigns a numeric cost to each path. The problem-solving agent selects a cost function that reflects its performance measure.
Remember, an optimal solution has the lowest path cost among all solutions.
26. Continue…
Search: identifies the best possible sequence of actions to reach the goal state from the current state. It takes a problem as input and returns a solution as output.
Solution: the best of the candidate action sequences produced by the search, which may be proven to be the optimal solution.
Execution: executes the best solution found by the searching algorithm to reach the goal state from the current state.
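
The five components map naturally onto a small problem class that a search routine can take as input. This is a minimal sketch with assumed names, not a prescribed interface:

    # A search problem bundles the five components from the slides.
    class Problem:
        def __init__(self, initial, goal):
            self.initial = initial            # Initial state
            self.goal = goal

        def actions(self, state):             # Actions available in a state
            raise NotImplementedError

        def result(self, state, action):      # Transition model
            raise NotImplementedError

        def goal_test(self, state):           # Goal test
            return state == self.goal

        def step_cost(self, state, action):   # used to compute the path cost
            return 1

Concrete problems (8-puzzle, 8-queens, TSP below) subclass this by filling in actions and result.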
27. Example Problems
• Basically, there are two types of problem approaches:
• Toy problem: a concise and exact description of a problem, used by researchers to compare the performance of algorithms.
• Real-world problem: a problem based on the real world that requires a solution. Unlike a toy problem, it does not depend on descriptions, but we can have a general formulation of the problem.
28. Example Problems
• Some Toy Problems
– 8-puzzle problem: here we have a 3×3 matrix with movable tiles numbered from 1 to 8 and a blank space. The tile adjacent to the blank space can slide into that space. The objective is to reach a specified goal state (illustrated in a figure omitted here).
– The task is to convert the current (start) state into the goal state by sliding digits into the blank space.
30. Continue…
• The problem formulation is as follows:
– States: the location of each numbered tile and the blank tile.
– Initial state: we can start from any state as the initial state.
– Actions: the actions of the blank space are defined here, i.e., left, right, up, or down.
– Transition model: returns the resulting state for a given state and action.
– Goal test: identifies whether we have reached the correct goal state.
– Path cost: the number of steps in the path, where the cost of each step is 1.
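
A sketch of the actions and transition model for the 8-puzzle, representing a state as a tuple of 9 entries with 0 for the blank – a common but assumed encoding:

    # 8-puzzle: a state is a tuple of 9 tiles, with 0 marking the blank.
    def actions(state):
        i = state.index(0)                   # position of the blank
        row, col = divmod(i, 3)
        moves = []
        if col > 0: moves.append("Left")     # directions the blank can move
        if col < 2: moves.append("Right")
        if row > 0: moves.append("Up")
        if row < 2: moves.append("Down")
        return moves

    def result(state, action):
        i = state.index(0)
        delta = {"Left": -1, "Right": 1, "Up": -3, "Down": 3}
        j = i + delta[action]
        s = list(state)
        s[i], s[j] = s[j], s[i]              # slide the adjacent tile into the blank
        return tuple(s)

    start = (1, 2, 3, 4, 0, 5, 6, 7, 8)
    print(result(start, "Up"))               # -> (1, 0, 3, 4, 2, 5, 6, 7, 8)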
31. 8-Queens Problem:
• The aim of this problem is to place eight queens on a chessboard so that no queen can attack another. A queen can attack other queens in the same row, the same column, or along a diagonal.
• (The figure illustrating the problem and its correct solution is omitted here.)
33. 8-Queens Problem
• For this problem, there are two main kinds of formulation:
– Incremental formulation: starts from an empty state, and the operator adds a queen at each step.
• The following components are involved in this formulation:
– States: any arrangement of 0 to 8 queens on the chessboard.
– Initial state: an empty chessboard.
– Actions: add a queen to any empty square.
– Transition model: returns the chessboard with the queen added in a square.
– Goal test: checks whether 8 queens are placed on the chessboard without any attacks.
– Path cost: there is no need for a path cost because only final states are counted.
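
For the incremental formulation, placing one queen per column is a standard simplification; the goal test then reduces to a no-attack check like the sketch below (the representation is assumed):

    # 8-queens, incremental formulation: a state is a list of row positions,
    # one per already-filled column (so column attacks are impossible).
    def no_attack(state, row):
        col = len(state)                       # column where the new queen goes
        for c, r in enumerate(state):
            if r == row or abs(r - row) == abs(c - col):
                return False                   # same row or same diagonal
        return True

    def add_queen(state, row):
        # Transition model: returns the board with one more queen placed.
        return state + [row] if no_attack(state, row) else None

    print(add_queen([0, 4, 7], 5))             # -> [0, 4, 7, 5]
    print(add_queen([0, 4, 7], 4))             # -> None (same row as column 1)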
36. Some Real-World Problems
• Traveling salesperson problem (TSP): a touring problem in which the salesman can visit each city only once. The objective is to find the shortest tour while selling goods in each city.
• VLSI layout problem: millions of components and connections are positioned on a chip so as to minimize the area, circuit delays, and stray capacitances, and to maximize the manufacturing yield.
• The layout problem is split into two parts:
• Cell layout: the primitive components of the circuit are grouped into cells, each performing a specific function. Each cell has a fixed shape and size. The task is to place the cells on the chip without overlapping each other.
• Channel routing: finds a specific route for each wire through the gaps between the cells.
37. Continue…
• Protein design: the objective is to find a sequence of amino acids that will fold into a 3D protein with a property that cures some disease.
• Searching for solutions
– We have seen many problems. Now there is a need to search for solutions to solve them.
– In this section, we will understand how searching can be used by the agent to solve a problem.
– For solving different kinds of problems, an agent makes use of different strategies to reach the goal by searching for the best possible algorithm. This process of searching is known as a search strategy.
• Measuring problem-solving performance
– Before discussing different search strategies, the performance measure of an algorithm should be defined. There are four ways to measure the performance of an algorithm:
– Completeness: whether the algorithm is guaranteed to find a solution (if any solution exists).
– Optimality: whether the strategy finds an optimal solution.
– Time complexity: the time taken by the algorithm to find a solution.
38. Continue…
• Space complexity: the amount of memory required to perform the search.
• The complexity of an algorithm depends on the branching factor (the maximum number of successors), the depth of the shallowest goal node (i.e., the number of steps from the root to the node), and the maximum length of any path in the state space.
40. Informed Search vs. Uninformed Search

Informed Search                                    | Uninformed Search
---------------------------------------------------|----------------------------------------------------
It is also known as Heuristic Search.              | It is also known as Blind Search.
It uses knowledge for the searching process.       | It does not use knowledge for the searching process.
It finds a solution more quickly.                  | It finds a solution more slowly than informed search.
It may or may not be complete.                     | It is always complete.
Cost is low.                                       | Cost is high.
It consumes less time because of quick searching.  | It consumes moderate time because of slow searching.
There is a direction given about the solution.     | No suggestion is given regarding the solution.
Computational requirements are lessened.           | Computational requirements are comparatively higher.
Examples: Greedy Search, A* Search, AO* Search,    | Examples: Depth First Search (DFS), Breadth First
Hill Climbing Algorithm.                           | Search (BFS), Branch and Bound.
41. • Strategies are evaluated along the following dimensions:
– Completeness: does it always find a solution if one exists?
– Time complexity: number of nodes generated.
– Space complexity: maximum number of nodes in memory.
– Optimality: does it always find a least-cost solution?
• Time and space complexity are measured in terms of:
– b: maximum branching factor (average number of child nodes for a given node) of the search tree
– d: depth of the least-cost solution
– m: maximum depth of the state space (may be ∞)
42. Breadth-First Search
It starts from the root node, explores the neighboring nodes first, and then moves towards the next-level neighbours. It generates one tree at a time until the solution is found.
It can be implemented using a FIFO queue data structure. This method provides the shortest path to the solution.
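
A minimal BFS sketch with a FIFO queue over an assumed adjacency-list graph; it returns the path with the fewest edges:

    from collections import deque

    # Breadth-first search with a FIFO queue; returns the shortest path
    # (fewest edges) from start to goal in an unweighted graph.
    def bfs(graph, start, goal):
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()          # FIFO: expand the oldest node first
            node = path[-1]
            if node == goal:
                return path
            for nbr in graph[node]:
                if nbr not in visited:
                    visited.add(nbr)
                    frontier.append(path + [nbr])
        return None

    g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    print(bfs(g, "A", "D"))                    # -> ['A', 'B', 'D']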
43. Uniform Cost Search
• Sorting is done by increasing cost of the path to a node. It always expands the least-cost node. It is identical to breadth-first search if each transition has the same cost.
• It explores paths in increasing order of cost.
• Disadvantage − there can be multiple long paths with cost ≤ C*. Uniform cost search must explore them all.
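
A minimal uniform cost search sketch using a binary heap as the priority queue; the graph and step costs are invented for the example:

    import heapq

    # Uniform cost search: always expands the frontier node with the least
    # path cost g(n); reduces to BFS when all step costs are equal.
    def uniform_cost_search(graph, start, goal):
        frontier = [(0, start, [start])]       # (path cost, node, path)
        explored = set()
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return cost, path
            if node in explored:
                continue
            explored.add(node)
            for nbr, step in graph[node]:
                heapq.heappush(frontier, (cost + step, nbr, path + [nbr]))
        return None

    g = {"S": [("A", 1), ("B", 5)], "A": [("B", 2)], "B": [("G", 1)], "G": []}
    print(uniform_cost_search(g, "S", "G"))    # -> (4, ['S', 'A', 'B', 'G'])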
44. Disadvantage (of Breadth-First Search) −
Since each level of nodes is saved for creating the next one, it consumes a lot of memory space. The space requirement to store nodes is exponential.
Its complexity depends on the number of nodes. It can check duplicate nodes.
45. Depth-First Search
• It is implemented in recursion with a LIFO stack data structure. It creates the same set of nodes as the breadth-first method, only in a different order.
• The recursive nature of DFS can also be implemented using an explicit stack. The basic idea is as follows:
– Pick a starting node and push all its adjacent nodes into a stack.
– Pop a node from the stack to select the next node to visit, and push all its adjacent nodes into the stack.
– Repeat this process until the stack is empty. However, ensure that visited nodes are marked; this prevents you from visiting the same node more than once. If you do not mark visited nodes and visit a node more than once, you may end up in an infinite loop.
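
The steps above translate into the following iterative sketch; the example graph, which contains a cycle, is assumed:

    # Depth-first search with an explicit LIFO stack and a visited set.
    def dfs(graph, start, goal):
        stack = [start]
        visited = set()
        while stack:
            node = stack.pop()                 # LIFO: most recently pushed first
            if node in visited:
                continue                       # marking prevents infinite loops
            visited.add(node)
            if node == goal:
                return True
            stack.extend(graph[node])          # push all adjacent nodes
        return False

    g = {"A": ["B", "C"], "B": ["A", "D"], "C": ["D"], "D": []}  # A-B is a cycle
    print(dfs(g, "A", "D"))                    # -> True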
47. Breadth-First:
– When completeness is important.
– When optimal solutions are important.
Depth-First:
– When solutions are dense and low cost is important, especially space cost.
48. Bidirectional Search
• It searches forward from the initial state and backward from the goal state until both meet at a common state.
• The path from the initial state is concatenated with the inverse of the path from the goal state. Each search is done only up to half of the total path.
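
A minimal sketch of bidirectional search as two breadth-first frontiers expanded level by level until they share a node; the graph is an assumed undirected example:

    from collections import deque

    # Bidirectional search: alternate BFS from the start and from the goal
    # until the two frontiers meet at a common state.
    def bidirectional_search(graph, start, goal):
        f_seen, b_seen = {start}, {goal}
        f_front, b_front = deque([start]), deque([goal])
        while f_front and b_front:
            for front, seen, other in ((f_front, f_seen, b_seen),
                                       (b_front, b_seen, f_seen)):
                for _ in range(len(front)):    # expand one full level
                    node = front.popleft()
                    if node in other:
                        return node            # common state where the searches meet
                    for nbr in graph[node]:
                        if nbr not in seen:
                            seen.add(nbr)
                            front.append(nbr)
        return None

    g = {"S": ["A"], "A": ["S", "B"], "B": ["A", "G"], "G": ["B"]}
    print(bidirectional_search(g, "S", "G"))   # -> 'B', the meeting state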
50. Iterative Deepening Depth-First Search
• It performs a depth-first search to level 1, starts over, executes a complete depth-first search to level 2, and continues in this way until a solution is found.
• It never creates a node until all lower nodes have been generated, and it only saves a stack of nodes. The algorithm ends when it finds a solution at depth d. The number of nodes created at depth d is b^d, and at depth d-1 it is b^(d-1).
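
A minimal sketch of iterative deepening on an acyclic (tree-shaped) state space, with an assumed adjacency-list representation:

    # Iterative deepening: repeat depth-limited DFS with limits 0, 1, 2, ...
    def depth_limited(graph, node, goal, limit):
        if node == goal:
            return True
        if limit == 0:
            return False
        return any(depth_limited(graph, nbr, goal, limit - 1)
                   for nbr in graph[node])

    def iddfs(graph, start, goal, max_depth=10):
        for limit in range(max_depth + 1):
            if depth_limited(graph, start, goal, limit):
                return limit                   # the depth d at which the goal appears
        return None

    g = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
    print(iddfs(g, "A", "D"))                  # -> 2

The restarts look wasteful, but since the deepest level dominates the node count (b^d versus b^(d-1) one level up), the total work is only a constant factor more than a single depth-d search.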
52. Continue…
• Uninformed Search (Blind Search)
• This type of search strategy does not have any additional information about the states beyond the information provided in the problem definition. It can only generate the successors and distinguish a goal state from a non-goal state. This type of search does not maintain any internal state, which is why it is also known as blind search.
• There are the following types of uninformed searches:
• Breadth-first search
• Uniform cost search
• Depth-first search
• Depth-limited search
• Iterative deepening search
• Bidirectional search
53. Continue…
• Informed Search (Heuristic Search)
• This type of search strategy contains additional information about the states beyond the problem definition. It uses problem-specific knowledge to find solutions more efficiently. It maintains some sort of internal state via heuristic functions (which provide hints), so it is also called heuristic search.
• There are the following types of informed searches:
• Best-first search (greedy search)
• A* search
54. • Travelling Salesman Problem (TSP): given a set of cities and the distance between every pair of cities, the problem is to find the shortest possible route that visits every city exactly once and returns to the starting point.
• For example, consider a graph in which the tour 1-2-4-3-1 uses edges of cost 10, 25, 30, and 15. The cost of the tour is 10+25+30+15, which is 80. (The figure of the graph is omitted here.)
56. Continue…
• The following is a solution for the travelling salesman problem.
• Naive solution:
1) Consider city 1 as the starting and ending point.
2) Generate all (n-1)! permutations of the cities.
3) Calculate the cost of every permutation and keep track of the minimum-cost permutation.
4) Return the permutation with minimum cost.
• Time complexity: Θ(n!)
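
A runnable sketch of this naive solution using itertools.permutations. The distance matrix below is an assumed symmetric example chosen so that the best tour matches the 1-2-4-3-1 tour of cost 80 above; cities are 0-indexed in the code:

    from itertools import permutations

    # Naive TSP: fix city 0 as the start/end point and try all (n-1)! orderings.
    def tsp_naive(dist):
        n = len(dist)
        cities = range(1, n)                   # all cities except the start
        best_cost, best_tour = float("inf"), None
        for perm in permutations(cities):
            tour = (0,) + perm + (0,)
            cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
            if cost < best_cost:               # keep track of the minimum-cost tour
                best_cost, best_tour = cost, tour
        return best_cost, best_tour

    # Assumed symmetric distance matrix for 4 cities (0-indexed).
    d = [[0, 10, 15, 20],
         [10, 0, 35, 25],
         [15, 35, 0, 30],
         [20, 25, 30, 0]]
    print(tsp_naive(d))                        # -> (80, (0, 1, 3, 2, 0))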