Unit - V Pattern Recognition: Dr. K. Sampath Kumar, SCSE/GU
Pattern Recognition
Dr.K.Sampath Kumar
SCSE/GU
Introduction to Pattern Recognition
The applications of Pattern Recognition can be found
everywhere.
[Block diagram: input object → preprocessing → measurements → feature extraction → classification → class label]
Basic ingredients (a minimal end-to-end sketch follows this list):
•Measurement space (e.g., image intensity, pressure)
•Features (e.g., corners, spectral energy)
•Classifier (producing soft or hard decisions)
•Decision boundary
•Training sample
•Probability of error
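A minimal end-to-end sketch of these ingredients, assuming synthetic 2-D Gaussian measurements and a nearest-mean classifier; every name and number below is illustrative, not taken from the slides.

import numpy as np

rng = np.random.default_rng(0)

# Measurement/feature space: 2-D points from two class-conditional Gaussians
n = 200
class0 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(n, 2))
class1 = rng.normal(loc=[2.5, 2.5], scale=1.0, size=(n, 2))

# Training sample: estimate one mean per class from the first half of each set
mu0 = class0[:n // 2].mean(axis=0)
mu1 = class1[:n // 2].mean(axis=0)

# Hard classifier: assign each point to the nearest class mean
# (the decision boundary is the perpendicular bisector of the two means)
def classify(x):
    return 0 if np.linalg.norm(x - mu0) < np.linalg.norm(x - mu1) else 1

# Probability of error estimated on the held-out second halves
test_X = np.vstack([class0[n // 2:], class1[n // 2:]])
test_y = np.array([0] * (n - n // 2) + [1] * (n - n // 2))
pred = np.array([classify(x) for x in test_X])
print("estimated error rate:", np.mean(pred != test_y))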
A Pattern Recognition Paradigm
Texture Discrimination
Shape Discrimination
Optical Character Recognition
Face Recognition & Discrimination
Are They From the Same Person?
Statistical Pattern Recognition
Outline
Kinds of probability
• Classical: Ratio of favorable outcomes to the total number of outcomes
  P(E) = N_E / N
• Relative Frequency: Measure of frequency of occurrence (see the simulation sketch after this list)
  P(E) = lim_{N→∞} N_E / N
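A small simulation sketch of the relative-frequency view (the event and all values are illustrative): the estimate N_E / N for a fair-die event approaches the classical ratio as N grows.

import numpy as np

rng = np.random.default_rng(1)
# Event E: rolling an even number with a fair six-sided die; classical P(E) = 3/6 = 0.5
for N in (100, 10_000, 1_000_000):
    rolls = rng.integers(1, 7, size=N)   # uniform integers 1..6
    N_E = np.count_nonzero(rolls % 2 == 0)
    print(N, N_E / N)                    # tends to 0.5 as N grows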
• Bayes' theorem: if A_1, ..., A_C are mutually exclusive and exhaustive events,
  then P(A_j | B) = P(B | A_j) P(A_j) / Σ_{i=1}^{C} P(B | A_i) P(A_i)
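A small numeric sketch of Bayes' theorem for C = 3 events; the priors and likelihoods are made up for illustration.

import numpy as np

prior = np.array([0.5, 0.3, 0.2])          # P(A_i) for a partition of the sample space
likelihood = np.array([0.10, 0.40, 0.70])  # P(B | A_i)

# P(A_j | B) = P(B | A_j) P(A_j) / sum_i P(B | A_i) P(A_i)
posterior = likelihood * prior / np.sum(likelihood * prior)
print(posterior, posterior.sum())          # the posteriors sum to 1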
Random variables
• Expected Value: E[X] = ∫_{−∞}^{∞} x f_X(x) dx
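A quick numerical check of the expected-value formula, assuming an exponential density with rate 2 (exact mean 1/2); the grid and integration limits are illustrative.

import numpy as np

lam = 2.0
x = np.linspace(0.0, 40.0, 400_001)
f_X = lam * np.exp(-lam * x)             # assumed density f_X(x)
dx = x[1] - x[0]
print("E[X] ≈", np.sum(x * f_X) * dx)    # close to 1/lam = 0.5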
Joint distribution function: F_XY(x, y) = P(X ≤ x, Y ≤ y)
Joint probability density function is given by
  f_XY(x, y) = ∂² F_XY(x, y) / ∂x ∂y
  F_XY(x, y) = ∫_{−∞}^{x} ∫_{−∞}^{y} f_XY(u, v) dv du
Marginal Density Functions
Since (X ≤ x) = (X ≤ x) ∩ (Y ≤ ∞),
  F_X(x) = P(X ≤ x) = P(X ≤ x, Y ≤ ∞) = F_XY(x, ∞)
  f_X(x) = d/dx F_X(x) = d/dx F_XY(x, ∞) = d/dx ∫_{−∞}^{x} [ ∫_{−∞}^{∞} f_XY(u, y) dy ] du
so
  f_X(x) = ∫_{−∞}^{∞} f_XY(x, y) dy
Similarly, f_Y(y) = ∫_{−∞}^{∞} f_XY(x, y) dx
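A sketch of the marginal-density relation above, assuming a bivariate Gaussian joint density (mean, covariance, and the evaluation point are illustrative): f_X(x0) is recovered by numerically integrating f_XY(x0, y) over y and compared with the known analytic marginal.

import numpy as np
from scipy.stats import multivariate_normal, norm

cov = np.array([[1.0, 0.6],
                [0.6, 2.0]])
joint = multivariate_normal(mean=[0.0, 0.0], cov=cov)

x0 = 0.8                                  # point at which to evaluate f_X
y = np.linspace(-10.0, 10.0, 4001)
f_slice = joint.pdf(np.column_stack([np.full_like(y, x0), y]))   # f_XY(x0, y)

dy = y[1] - y[0]
print("numerical  f_X(x0):", np.sum(f_slice) * dy)
print("analytical f_X(x0):", norm(0.0, np.sqrt(cov[0, 0])).pdf(x0))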
Bayesian Decision Theory
Decide w_j if P(w_j | x) > P(w_k | x) for all k ≠ j,
or, p(x | w_j) P(w_j) / p(x) > p(x | w_k) P(w_k) / p(x),
or, p(x | w_j) P(w_j) > p(x | w_k) P(w_k), k = 1, ..., C; k ≠ j
The probability of error P(e | x) is minimized when P(w_j | x) is maximum.
The average probability of error is
  P(e) = ∫ P(e | x) p(x) dx
For every x we make P(e | x) as small as possible, so that the integral is as small as possible.
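A numerical sketch of the average-error integral, assuming two 1-D Gaussian class-conditional densities with made-up priors; under the Bayes rule P(e | x) = min_j P(w_j | x), so the integrand becomes min_j p(x | w_j) P(w_j).

import numpy as np
from scipy.stats import norm

P_w = np.array([0.6, 0.4])                 # priors P(w_1), P(w_2)
like = [norm(0.0, 1.0), norm(2.0, 1.0)]    # class-conditional densities p(x | w_j)

x = np.linspace(-10.0, 12.0, 20_001)
joint = np.vstack([like[j].pdf(x) * P_w[j] for j in range(2)])   # p(x | w_j) P(w_j)

dx = x[1] - x[0]
P_error = np.sum(joint.min(axis=0)) * dx   # P(e) = ∫ min_j p(x | w_j) P(w_j) dx
print("Bayes error:", P_error)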
Conditional Risk & Bayes’ Risk
• Loss: a measure of the cost of making an error
  λ(a_i | w_j) = cost of assigning a pattern x to w_i when x ∈ w_j
• Conditional Risk
  The overall risk in choosing action a_i, to be made minimum for every x, is
  R(a_i | x) = Σ_{j=1}^{C} λ(a_i | w_j) P(w_j | x)
  Assuming the zero-one loss λ(a_i | w_j) = 0 for i = j and 1 for i ≠ j,
  R(a_i | x) = Σ_{j ≠ i} P(w_j | x) = 1 − P(w_i | x)
To minimize the average probability of error, choose the i that maximizes the posterior probability P(w_i | x). If the action a_i is chosen so that the conditional risk is minimized for every x, the resulting minimum overall risk is called the Bayes risk.
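A small sketch of the conditional-risk computation at a single x, using an illustrative posterior vector and the zero-one loss matrix; the minimum-risk action coincides with the MAP class, as stated above.

import numpy as np

post = np.array([0.1, 0.7, 0.2])     # P(w_j | x) for C = 3 classes (illustrative)
loss = np.array([[0, 1, 1],          # λ(a_i | w_j): rows = actions, columns = true class
                 [1, 0, 1],
                 [1, 1, 0]])

risk = loss @ post                   # R(a_i | x) = Σ_j λ(a_i | w_j) P(w_j | x)
print("conditional risks:", risk)    # zero-one loss gives 1 − P(w_i | x)
print("minimum-risk action:", np.argmin(risk), "= MAP class:", np.argmax(post))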
Bayes decision rule - Reject option
• Partition the sample space into 2 regions
  Reject region: R = {x : 1 − max_i P(w_i | x) > t}
  Accept region: A = {x : 1 − max_i P(w_i | x) ≤ t}
• Probability of rejection: r(t) = ∫_R p(x) dx
• R is empty when t ≥ (C − 1)/C, since max_i P(w_i | x) ≥ 1/C
• The error rate is e(t) = ∫_A (1 − max_i P(w_i | x)) p(x) dx
[Figure: posteriors P(w1|x) and P(w2|x) plotted against x; the reject region R lies around the decision boundary, where 1 − max_i P(w_i | x) exceeds t, with accept regions A on either side.]
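A numerical sketch of the reject option, assuming two 1-D Gaussian classes with equal priors (all values illustrative): reject x when 1 − max_i P(w_i | x) > t, then evaluate r(t) and e(t) on a grid for a few thresholds.

import numpy as np
from scipy.stats import norm

P_w = np.array([0.5, 0.5])
like = [norm(-1.0, 1.0), norm(1.0, 1.0)]

x = np.linspace(-10.0, 10.0, 20_001)
joint = np.vstack([like[i].pdf(x) * P_w[i] for i in range(2)])   # p(x | w_i) P(w_i)
p_x = joint.sum(axis=0)                                          # p(x)
post_max = (joint / p_x).max(axis=0)                             # max_i P(w_i | x)

dx = x[1] - x[0]
for t in (0.1, 0.3, 0.5):            # t ≥ (C − 1)/C = 0.5 makes the reject region empty
    reject = (1.0 - post_max) > t
    r_t = np.sum(p_x[reject]) * dx                               # r(t) = ∫_R p(x) dx
    e_t = np.sum((1.0 - post_max[~reject]) * p_x[~reject]) * dx  # e(t) over the accept region A
    print(f"t={t}: r(t)={r_t:.3f}, e(t)={e_t:.3f}")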