The document summarizes key concepts from Chapter 2 of the textbook "Pattern Classification" regarding Bayesian decision theory and the normal density. It discusses: 1) The univariate and multivariate normal density functions, defining the mean, variance, and covariance matrix. 2) How discriminant functions are derived for the normal density model, resulting in linear decision boundaries when features are independent with equal variance. 3) Examples of 2D multivariate normal distributions, showing how correlation between features and unequal variance affects the shape of probability contours.


Mansoura University

Faculty of Computers and Information


Department of Information Technology
First Semester, 2020-2021

[IT413P] Pattern Recognition


Grade: Four
Dr. Nagham Mekky
CHAPTER (2)
BAYESIAN DECISION THEORY
2.5 The Normal Density: Univariate Density
• We begin with the continuous univariate normal, or Gaussian, density:

$$p(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\,\exp\!\left[-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}\right]$$

where:
• μ = mean (or expected value) of x
• σ² = variance (the expected squared deviation of x from its mean)
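The slides contain no code; as an illustrative aside, here is a minimal Python sketch of this density (the function name univariate_normal_pdf is my own, and NumPy is assumed):

```python
import numpy as np

def univariate_normal_pdf(x, mu=0.0, sigma=1.0):
    # p(x) = 1/(sqrt(2*pi)*sigma) * exp(-0.5*((x - mu)/sigma)**2)
    coef = 1.0 / (np.sqrt(2.0 * np.pi) * sigma)
    return coef * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# The peak value, at x = mu, is 1/(sqrt(2*pi)*sigma) ~ 0.3989 for sigma = 1
print(univariate_normal_pdf(0.0))   # ~0.3989
print(univariate_normal_pdf(1.0))   # ~0.2420
```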
Multivariate normal density
p(x) ∼ N(μ, Σ)
The general multivariate normal density in d dimensions is written as

$$p(\mathbf{x}) = \frac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}}\,\exp\!\left[-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^{t}\,\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})\right]$$
• where x is a d-component column vector,
• μ is the d-component mean vector,
• Σ is the d-by-d covariance matrix,
• |Σ| and Σ⁻¹ are its determinant and inverse, respectively,
• and (x − μ)^t is the transpose of x − μ.
• The expected value of a vector or a matrix is found by taking the expected values of its components. In other words, if xi is the ith component of x, μi the ith component of μ, and σij the ijth component of Σ, then

$$\mu_i = E[x_i] \qquad \text{and} \qquad \sigma_{ij} = E\big[(x_i-\mu_i)(x_j-\mu_j)\big]$$
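Again as an aside (not from the slides), a direct NumPy translation of this density; the helper name multivariate_normal_pdf is illustrative:

```python
import numpy as np

def multivariate_normal_pdf(x, mu, sigma):
    # p(x) = exp(-0.5*(x-mu)^t Sigma^{-1} (x-mu)) / ((2*pi)^{d/2} |Sigma|^{1/2})
    d = len(mu)
    diff = x - mu
    inv = np.linalg.inv(sigma)     # Sigma^{-1}
    det = np.linalg.det(sigma)     # |Sigma|
    norm = 1.0 / ((2.0 * np.pi) ** (d / 2) * np.sqrt(det))
    return norm * np.exp(-0.5 * diff @ inv @ diff)

mu = np.zeros(2)
sigma = np.eye(2)
print(multivariate_normal_pdf(np.zeros(2), mu, sigma))  # 1/(2*pi) ~ 0.1592
```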
2.6 Discriminant Functions for the Normal Density
• In Sect. 2.4.1 we saw that minimum-error-rate classification can be achieved by use of the discriminant function

$$g_i(\mathbf{x}) = \ln p(\mathbf{x}\,|\,\omega_i) + \ln P(\omega_i)$$

• This expression can be readily evaluated if the densities p(x|ωi) are multivariate normal, i.e., if p(x|ωi) ∼ N(μi, Σi). In this case, then,

$$g_i(\mathbf{x}) = -\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_i)^{t}\,\Sigma_i^{-1}(\mathbf{x}-\boldsymbol{\mu}_i) - \frac{d}{2}\ln 2\pi - \frac{1}{2}\ln|\Sigma_i| + \ln P(\omega_i)$$
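A short sketch of this discriminant in Python (not from the slides; the function name g_i is my own), deciding by the largest gi(x):

```python
import numpy as np

def g_i(x, mu_i, sigma_i, prior_i):
    # g_i(x) = -0.5*(x-mu)^t Sigma^{-1} (x-mu) - (d/2)*ln(2*pi)
    #          - 0.5*ln|Sigma| + ln P(w_i)
    d = len(mu_i)
    diff = x - mu_i
    mahal = diff @ np.linalg.inv(sigma_i) @ diff   # squared Mahalanobis distance
    return (-0.5 * mahal
            - 0.5 * d * np.log(2.0 * np.pi)
            - 0.5 * np.log(np.linalg.det(sigma_i))
            + np.log(prior_i))

# Decide omega_1 vs omega_2 by comparing the two discriminants:
x = np.array([0.5, 0.5])
g1 = g_i(x, np.zeros(2), np.eye(2), 0.5)
g2 = g_i(x, np.array([2.0, 2.0]), np.eye(2), 0.5)
print("omega_1" if g1 > g2 else "omega_2")  # -> omega_1 (x is closer to mu_1)
```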
Multivariate Gaussian Density: Case I
• Let us examine the discriminant function and the resulting classification for a number of special cases.
• Σi = σ²I (a diagonal matrix with equal diagonal entries):
  • Features are statistically independent.
  • Each feature has the same variance.
Multivariate Gaussian Density: Case I (cont'd)
• In this case the constant terms −(d/2) ln 2π and −(1/2) ln |Σi| are the same for every class, so the discriminant reduces to

$$g_i(\mathbf{x}) = -\frac{\|\mathbf{x}-\boldsymbol{\mu}_i\|^{2}}{2\sigma^{2}} + \ln P(\omega_i)$$

• Expanding the quadratic form and dropping the x^t x term (also the same for all i) yields linear discriminant functions:

$$g_i(\mathbf{x}) = \mathbf{w}_i^{t}\mathbf{x} + w_{i0}, \qquad \mathbf{w}_i = \frac{1}{\sigma^{2}}\,\boldsymbol{\mu}_i, \qquad w_{i0} = -\frac{1}{2\sigma^{2}}\,\boldsymbol{\mu}_i^{t}\boldsymbol{\mu}_i + \ln P(\omega_i)$$

• A classifier that uses linear discriminant functions is called "a linear machine".
• We call wi0 the threshold or bias in the ith direction. (A sketch of such a classifier follows.)
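A minimal linear-machine sketch in Python, assuming Case I (Σi = σ²I); the name linear_machine is illustrative:

```python
import numpy as np

def linear_machine(x, means, priors, sigma2):
    # g_i(x) = w_i^t x + w_i0 with w_i = mu_i / sigma^2 and
    # w_i0 = -mu_i^t mu_i / (2*sigma^2) + ln P(w_i); pick the largest g_i.
    scores = []
    for mu, p in zip(means, priors):
        w = mu / sigma2
        w0 = -(mu @ mu) / (2.0 * sigma2) + np.log(p)
        scores.append(w @ x + w0)
    return int(np.argmax(scores))

means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
print(linear_machine(np.array([0.5, 0.5]), means, [0.5, 0.5], 1.0))  # -> 0
```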
Multivariate Gaussian Density: Case I (cont'd)
• Properties of the decision boundary:
  • It passes through the point x0:

$$\mathbf{x}_0 = \frac{1}{2}(\boldsymbol{\mu}_i+\boldsymbol{\mu}_j) - \frac{\sigma^{2}}{\|\boldsymbol{\mu}_i-\boldsymbol{\mu}_j\|^{2}}\,\ln\frac{P(\omega_i)}{P(\omega_j)}\,(\boldsymbol{\mu}_i-\boldsymbol{\mu}_j)$$

  • It is orthogonal to the line linking the means.
  • What happens when P(ωi) = P(ωj)? Then x0 lies halfway between the means; if P(ωi) ≠ P(ωj), x0 shifts away from the more likely mean.
  • If σ² is small relative to ||μi − μj||², the position of the boundary is relatively insensitive to the exact values of P(ωi) and P(ωj). (See the sketch after this list.)
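To make the shift of x0 concrete, a short Python sketch (not from the slides; boundary_point_x0 is an illustrative name):

```python
import numpy as np

def boundary_point_x0(mu_i, mu_j, sigma2, p_i, p_j):
    # x0 = (mu_i + mu_j)/2
    #      - sigma^2 / ||mu_i - mu_j||^2 * ln(P_i / P_j) * (mu_i - mu_j)
    diff = mu_i - mu_j
    shift = (sigma2 / (diff @ diff)) * np.log(p_i / p_j)
    return 0.5 * (mu_i + mu_j) - shift * diff

mu_i, mu_j = np.array([0.0, 0.0]), np.array([4.0, 0.0])
print(boundary_point_x0(mu_i, mu_j, 1.0, 0.5, 0.5))  # [2. 0.]: halfway for equal priors
print(boundary_point_x0(mu_i, mu_j, 1.0, 0.9, 0.1))  # ~[2.55 0.]: shifted away from the likelier mu_i
```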
Multivariate Gaussian Density: Case I (cont'd)
• If P(ωi) ≠ P(ωj), then x0 shifts away from the more likely mean; for sufficiently disparate priors the decision boundary need not lie between the two means.
[Figures omitted: decision boundaries for different combinations of prior probabilities.]
2-d Multivariate Normal Density
• Can you see much in this graph?
• At most you can see that the mean is around [0, 0], but you can't really tell whether x1 and x2 are correlated.
[Figure omitted.]

2-d Multivariate Normal Density
• How about this graph?
[Figure omitted.]
2-d Multivariate Normal Density
• Level-curves graph: p(x) is constant along each contour, a topographic map of the 3-d surface.
• Now we can see much more:
  • x1 and x2 are independent.
  • σ1² and σ2² are equal, so the level curves are circles centred at μ.
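A small Matplotlib sketch (not from the slides) reproducing this kind of level-curves picture for an independent, equal-variance 2-d normal:

```python
import numpy as np
import matplotlib.pyplot as plt

# Independent components with equal variances: level curves of p(x)
# are circles centred at mu.
mu = np.array([0.0, 0.0])
sigma2 = 1.0                                  # common variance of x1 and x2

x1, x2 = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
r2 = (x1 - mu[0]) ** 2 + (x2 - mu[1]) ** 2    # squared distance from the mean
p = np.exp(-0.5 * r2 / sigma2) / (2.0 * np.pi * sigma2)

plt.contour(x1, x2, p)                        # circular contours of constant p(x)
plt.gca().set_aspect("equal")
plt.xlabel("x1"); plt.ylabel("x2")
plt.show()
```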