Unit1_AI
GREATER NOIDA
INTRODUCTION
Unit: 1
9/30/24 3
RAJEEV KUMAR Introduction to Artificial Intelligence Unit 1
SYLLABUS
CO-PO MAPPING
Sr. No CO.K PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12
1 CO1 2 2 2 3 3 - - - - - - -
2 CO2 3 2 3 2 3 - - - - - - -
3 CO3 3 2 3 2 3 - - - - - - -
4 CO4 3 2 3 2 3 - - - - - - -
5 CO5 3 2 3 3 3 - - - - - - -
CO1 3 - - -
CO2 3 3 - -
CO3 3 3 - -
CO4 3 3 - -
CO5 3 3 - -
Program Educational Objectives (PEOs)
PEO 1: To have an excellent scientific and engineering breadth so as to comprehend, analyze, design and provide sustainable solutions for real-life problems using state-of-the-art technologies.
PEO 2: To have a successful career in industry, to pursue higher studies or to support entrepreneurial endeavors, and to face global challenges.
PEO 3: To have effective communication skills, a professional attitude, ethical values and a desire to learn specific knowledge in emerging trends and technologies for research, innovation, product development and contribution to society.
PEO 4: To have life-long learning for up-skilling and re-skilling for a successful professional career as an engineer, scientist, entrepreneur or bureaucrat for the betterment of society.
Subject code | Name of the faculty | Result | % of clear passed
Students must have a logical and practical skill set for analyzing various problems related to algorithms.
• https://www.youtube.com/watch?v=4jmsHaJ7xEA
• https://www.youtube.com/watch?v=oV74Najm6Nc
• https://www.youtube.com/watch?v=y5swZ2Q_lBw
• https://www.youtube.com/watch?v=ptWmh0ocveM
Ø Introduction:
• Introduction to Artificial Intelligence
• Historical developments of Artificial Intelligence
• Well defined learning problems
• Designing a Learning System
• Basics of problem-solving: problem representation paradigms
• State Space
• Problem reduction
• Constraint satisfaction
• Applications of AI
• Combination of 2 words: Artificial + Intelligence
• What is AI?
Ability of machines to
– Learn
– Think
– Behave
Like humans
Ø Year 1949: Donald Hebb demonstrated an updating rule for modifying the
connection strength between neurons. His rule is now called Hebbian
learning.
Ø Year 1950: Alan Turing, an English mathematician who pioneered machine learning, published "Computing Machinery and Intelligence" in 1950, in which he proposed a test that can check a machine's ability to exhibit intelligent behavior equivalent to human intelligence, now called the Turing test.
v Lecture
§ Well Defined Learning Problems
5. Issue of responsibility
6. Ethical challenges
v Lecture
§ Designing a Learning System
Learning: A Definition
Let us therefore define the target value V(b) for an arbitrary board state b in B, as follows:
1. if b is a final board state that is won, then V(b) = 100
2. if b is a final board state that is lost, then V(b) = -100
3. if b is a final board state that is drawn, then V(b) = 0
4. if b is not a final state in the game, then V(b) = V(b'), where b' is the best final board state that can be achieved starting from b and playing optimally until the end of the game.
Now that we have specified the ideal target function V, we must choose a
representation that the learning program will use to describe the
function V’ that it will learn. As with earlier design choices, we again have
many options.
To keep the discussion brief, let us choose a simple representation: for any given board state, the function V' will be calculated as a linear combination of the following board features:
• x1: the number of black pieces on the board
• x2: the number of red pieces on the board
• x3: the number of black kings on the board
• x4: the number of red kings on the board
• x5: the number of black pieces threatened by red (i.e., which can be captured on red's next turn)
• x6: the number of red pieces threatened by black
Thus, our learning program will represent V'(b) as a linear function of the form
V'(b) = w0 + w1·x1 + w2·x2 + w3·x3 + w4·x4 + w5·x5 + w6·x6
where w0 through w6 are numerical coefficients, or weights, to be chosen by the learning algorithm.
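As a quick sketch of this linear representation (the feature values and weights below are made-up numbers for illustration, not learned ones):

```python
# Sketch of the linear evaluation function V'(b) = w0 + w1*x1 + ... + w6*x6.
# Feature values and weights below are illustrative, not learned.

def v_hat(features, weights):
    """features: [x1..x6]; weights: [w0..w6]. Returns the linear combination."""
    value = weights[0]  # w0 is the constant (bias) term
    for x, w in zip(features, weights[1:]):
        value += w * x
    return value

# Hypothetical board: 3 black pieces, 2 red pieces, 1 black king, 0 red kings,
# 1 black piece threatened, 2 red pieces threatened.
features = [3, 2, 1, 0, 1, 2]
weights = [0.5, 1.0, -1.0, 2.0, -2.0, -0.5, 0.5]
print(v_hat(features, weights))
```

Learning then reduces to choosing the weights w0 … w6; the features themselves are computed directly from the board state.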
The first three items above correspond to the specification of the learning
task, whereas the final two items constitute design choices for the
implementation of the learning program.
Design a learning system
Specify the learning algorithm for choosing the weights wi to best fit the set of training examples {(b, Vtrain(b))}.
As a first step we must define what we mean by the best fit to the training data.
• One common approach is to define the best hypothesis, or set of weights, as that which minimizes the squared error E between the training values and the values predicted by the hypothesis:
E = Σ (Vtrain(b) − V'(b))², summed over the training examples (b, Vtrain(b)).
Several algorithms are known for finding weights of a linear function that
minimize E
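One such algorithm is the LMS (least mean squares) rule, which nudges each weight in the direction that reduces the error on one training example at a time. A minimal sketch (the learning rate, feature values and training value below are illustrative assumptions):

```python
# Sketch of the LMS weight-update rule:
#   w_i <- w_i + eta * (Vtrain(b) - V'(b)) * x_i
# where eta is a small learning rate.

def v_hat(features, weights):
    """Linear evaluation: w0 + w1*x1 + ... + w6*x6."""
    return weights[0] + sum(w * x for w, x in zip(weights[1:], features))

def lms_update(weights, features, v_train, eta=0.01):
    """One LMS step toward the training value v_train (updates in place)."""
    error = v_train - v_hat(features, weights)
    weights[0] += eta * error          # bias term uses x0 = 1
    for i, x in enumerate(features):
        weights[i + 1] += eta * error * x
    return weights

# Repeated updates shrink the error on this (made-up) training example.
w = [0.0] * 7
feats = [3, 2, 1, 0, 1, 2]
for _ in range(200):
    lms_update(w, feats, 100)
print(abs(100 - v_hat(feats, w)))
```

Each update moves the prediction a small step toward the training value, which is why, with a small enough eta, E decreases over repeated passes through the training examples.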
The final design of our checkers learning system can be naturally described
by four distinct program modules that represent the central components
in many learning systems. These four modules, summarized in Figure 1.1,
are as follows:
The Performance System is the module that must solve the given
performance task, in this case playing checkers, by using the learned
target function(s). It takes an instance of a new problem (new game) as
input and produces a trace of its solution (game history) as output. In our
case, the strategy used by the Performance System to select its next move
at each step is determined by the learned evaluation function V'.
Therefore, we expect its performance to improve as this evaluation
function becomes increasingly accurate.
The Critic takes as input the history or trace of the game and produces as
output a set of training examples of the target function. As shown in the
diagram, each training example in this case corresponds to some game state
in the trace, along with an estimate Vtrain(b) of the target function value for this
example. In our example, the Critic corresponds to the training rule given by
Equation (1.1).
The Generalizer takes as input the training examples and produces an output
hypothesis that is its estimate of the target function. It generalizes from the
specific training examples, hypothesizing a general function that covers these
examples and other cases beyond the training examples. In our example, the
Generalizer corresponds to the LMS algorithm, and the output hypothesis is
the function V' described by the learned weights w0, ..., w6.
The Experiment Generator takes as input the current hypothesis (the currently learned function) and outputs a new problem (i.e., an initial board state) for the Performance System to explore. Its role is to pick new practice problems that will maximize the learning rate of the overall system. In our example, a very simple Experiment Generator may always propose the same initial game board to begin a new game.
v Lecture
§ Basics of problem-solving
§ Problem representation paradigms
■Goal formulation is based on the current situation and the agent's performance measure.
■Problem-solving agents: find sequence of
actions that achieve goals.
■In this section we will use a map as an example; if you take a quick look you can deduce that each node represents a city, and the cost to travel from one city to another is denoted by the number over the edge connecting the nodes of those two cities.
■In order for an agent to solve a problem it should pass through two phases of formulation:
■ Goal Formulation:
– Problem solving is about having a goal we want to reach, (i.e.,
I want to travel from ‘A’ to ‘E’).
– Goals have the advantage of limiting the objectives the agent
is trying to achieve.
– We can say that a goal is a set of environment states in which our goal is satisfied.
■ Problem Formulation:
– A problem formulation is about deciding what actions and states to consider; we will come to this point shortly.
– We will describe our states as "in(CITYNAME)", where CITYNAME is the name of the city we are currently in.
■ Once our agent has found the sequence of cities it should pass through to reach its goal, it should start following this sequence.
■ The process of finding such a sequence is called search; a search algorithm is like a black box which takes a problem as input and returns a solution. Once the solution is found, the sequence of actions it recommends is carried out, and this is what is called the execution phase.
■ We now have a simple (formulate, search, execute) design for our problem-solving agent, so let's find out precisely how to formulate a problem.
Formulating Problems
■ Goal Test:
– we should be able to decide whether the current state is a goal state {i.e., is the current state in(E)?}.
■ Path cost:
– a function that assigns a numeric value to each path. Each step we take in solving the problem should be somehow weighted, so if I travel from A to E our agent will pass by many cities; travel between two consecutive cities should have some cost measure {i.e., traveling from 'A' to 'B' costs 20 km, which can be written as c(A, 20, B)}.
■ A solution to a problem is a path from the initial state to a goal state, and
solution quality is measured by the path cost, and the optimal solution
has the lowest path cost among all possible solutions.
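The (formulate, search, execute) design above can be sketched on a small hypothetical road map; the city names and distances below are made up, with states written as in(CITYNAME) and c(A, 20, B) meaning that going from A to B costs 20:

```python
# Minimal sketch of formulate-search-execute on a hypothetical city map.
from collections import deque

# graph[city] = {neighbor: step cost}  (illustrative distances)
graph = {
    "A": {"B": 20, "C": 15},
    "B": {"A": 20, "D": 10},
    "C": {"A": 15, "D": 30},
    "D": {"B": 10, "C": 30, "E": 25},
    "E": {"D": 25},
}

def bfs(start, goal):
    """Breadth-first search: returns a list of cities from start to goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:          # goal test: are we in(goal)?
            return path
        for neighbor in graph[path[-1]]:
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None                       # no solution exists

def path_cost(path):
    """Sum of the step costs along the path."""
    return sum(graph[a][b] for a, b in zip(path, path[1:]))

solution = bfs("A", "E")
print(solution, path_cost(solution))
```

Note that breadth-first search finds the path with the fewest steps, which is not necessarily the cheapest one; a cost-ordered search (uniform-cost search) would be needed to guarantee the optimal solution by path cost.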
EXAMPLE
PROBLEMS
Toy Problem
Vacuum
World
– Initial state:
■Our vacuum can be in any of the 8 states shown in the picture.
– State description:
■Successor function generates legal states resulting from
applying the three actions {Left, Right, and Suck}.
■The state space is shown in the picture; there are 8 world states.
– Goal test:
■Checks whether all squares are clean.
– Path cost:
■Each step costs 1, so the path cost is the sum of steps in
the path.
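The vacuum world formulation above can be sketched directly: a state is (agent location, dirt in square A, dirt in square B), which gives the 8 world states, and the successor function applies the three actions {Left, Right, Suck}.

```python
# Sketch of the vacuum world. A state is (location, dirt_in_A, dirt_in_B).

def successor(state, action):
    """Apply one of {Left, Right, Suck} and return the resulting state."""
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":
        if loc == "A":
            return ("A", False, dirt_b)
        return ("B", dirt_a, False)
    return state

def goal_test(state):
    """Goal: all squares are clean."""
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b

# Each step costs 1, so this 3-action plan has path cost 3.
state = ("A", True, True)
for action in ["Suck", "Right", "Suck"]:
    state = successor(state, action)
print(state, goal_test(state))
```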
EXAMPLE
PROBLEMS
8 Puzzle
8-Puzzle
■ Initial state:
– Our board can be in any of the states resulting from any configuration of the tiles.
■ State description:
– Successor function generates legal states resulting from applying the four actions
{move blank Up, Down, Left, or Right}.
– State description specifies the location of each of the eight tiles and the blank.
■ Goal test:
– Checks whether the state matches the goal configuration shown in the picture.
■ Path cost:
– Each step costs 1, so the path cost is the sum of steps in the path.
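The 8-puzzle successor function above can be sketched as follows; a state is written as a 9-tuple read row by row, with 0 standing for the blank:

```python
# Sketch of the 8-puzzle successor function: actions move the blank.

MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}

def successors(state):
    """Yield (action, new_state) for every legal blank move."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for action, delta in MOVES.items():
        if action == "Up" and row == 0: continue      # blank on top row
        if action == "Down" and row == 2: continue    # blank on bottom row
        if action == "Left" and col == 0: continue    # blank on left column
        if action == "Right" and col == 2: continue   # blank on right column
        target = blank + delta
        new = list(state)
        new[blank], new[target] = new[target], new[blank]
        yield action, tuple(new)

# Blank in the center: all four moves are legal.
start = (1, 2, 3, 4, 0, 5, 6, 7, 8)
print([a for a, _ in successors(start)])
```

With the blank in a corner only two moves are legal, which is why the branching factor of the 8-puzzle varies between 2 and 4.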
EXAMPLE
PROBLEMS
8-Queens Problem
8-Queens Problem
■ States: ???
■ Initial State: ???
■ Successor Function: ???
■ Goal Test: ???
EXAMPLE
PROBLEMS
Real World Problem
Continue…
■ Touring problems
■ Traveling salesperson
problem
■ Robot navigation
■ Automatic assembly
sequencing
Continue…
■ The structure of a node in the search tree can be as follows:
– State: the state in the state space to which this node corresponds.
– Parent-Node: the node in the search tree that generated this node.
– Action: the action that was applied to the parent to generate this node.
– Path-Cost: the cost of the path from the initial state to this node.
– Depth: the number of steps along the path from the initial state.
■ It is important to make a distinction between nodes and states. A node in the search tree is a data structure that holds a certain state and some info used to represent the search tree, whereas a state corresponds to a world configuration; that is, more than one node can hold the same state. This can happen if 2 different paths lead to the same state.
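The node structure above can be sketched as a small data class; the example values (city names, a cost of 20) are illustrative:

```python
# Sketch of the search-tree node structure. Distinct nodes may hold the
# same state when different paths reach that state.
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # the world configuration this node holds
    parent: Optional["Node"] = None  # the node that generated this one
    action: Optional[str] = None     # action applied to the parent
    path_cost: float = 0.0           # cost from the initial state to here
    depth: int = 0                   # steps from the initial state

def child_node(parent, action, state, step_cost):
    """Build a child node from a parent, an action, and the step's cost."""
    return Node(state, parent, action,
                parent.path_cost + step_cost, parent.depth + 1)

root = Node("A")
child = child_node(root, "go(B)", "B", 20)
print(child.depth, child.path_cost)
```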
Continue….
■ In AI, complexity is expressed by three factors b, d and m:
1. b the branching factor is the maximum number of
successors of any node.
2. d the depth of the shallowest goal node.
3. m the maximum length of any path in the state space.
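These factors matter because the number of nodes grows exponentially with depth: with branching factor b there are b**k nodes at depth k, so searching to depth d can touch on the order of b**d nodes. A small illustrative calculation:

```python
# With branching factor b, a tree searched to depth d holds
# 1 + b + b**2 + ... + b**d nodes in the worst case.

def nodes_up_to_depth(b, d):
    """Total number of nodes from the root (depth 0) down to depth d."""
    return sum(b ** k for k in range(d + 1))

print(nodes_up_to_depth(2, 3))    # 1 + 2 + 4 + 8
print(nodes_up_to_depth(10, 2))   # 1 + 10 + 100
```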
Problem solving
■ We want:
– To automatically solve a problem
■ We need:
– A representation of the problem
– Algorithms that use some strategy to solve the problem
defined in that representation
Example
Problem representation
■ General:
– State space: a problem is divided into a set of resolution steps
from the initial state to the goal state
– Reduction to sub-problems: a problem is arranged into a
hierarchy of sub-problems
■ Specific:
– Game resolution
– Constraints satisfaction
Constraint Satisfaction
1. Variables: the unknowns of the problem, each of which must be assigned a value.
2. Constraints: restrictions specifying which combinations of values the variables may take together.
3. Domains: A domain represents the set of possible values that a variable can take. It constrains the potential values a variable can be assigned during the problem-solving process.
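A constraint satisfaction problem, with its variables, domains and constraints, can be sketched on a small map-coloring instance (the regions, their adjacencies, and the colors below are illustrative assumptions), solved here by simple backtracking:

```python
# Sketch of a CSP: variables, domains, constraints, solved by backtracking.

variables = ["WA", "NT", "SA", "Q"]
domains = {v: ["red", "green", "blue"] for v in variables}
# Constraint: neighboring regions must take different colors.
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
             "SA": ["WA", "NT", "Q"], "Q": ["NT", "SA"]}

def consistent(var, value, assignment):
    """Does assigning value to var violate any constraint so far?"""
    return all(assignment.get(n) != value for n in neighbors[var])

def backtrack(assignment):
    """Assign variables one at a time, undoing choices that lead nowhere."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment):
            result = backtrack({**assignment, var: value})
            if result is not None:
                return result
    return None  # no value works: backtrack to the previous variable

solution = backtrack({})
print(solution)
```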
Problem Reduction
Problem Reduction
Daily QUIZ
ØAI is intelligence demonstrated by ________.
a.Machine
b.Human
c.Both a and b
d.Animals
v What are the core components of Learning System? What do you mean
by well defined Learning System?
v What are the basic attributes of types of training in a Learning System?
v How is machine learning related to AI?
v How will artificial intelligence change the future?
v What do you mean by a well-defined Learning System? Explain the steps to design a well-defined Learning System.
v Explain Knowledge Pyramid.
v Explain the Goal of Artificial Intelligence?
v What is the future of Artificial intelligence?
v Distinguish between strong and weak artificial intelligence?
v What are the three features of well-posed learning problem?
v Elaborate on the History of Artificial Intelligence.
v Explain the different steps to design a well-defined Learning System in detail.
v Explain well defined or well posed Learning System with one example.
Basics of problem-solving:
Ø How to represent the problem
Ø State space: state-space representation of the problem, satisfiability vs optimality
Ø Pattern classification problems
Ø Example domains