Lecture W12ab
LDA
• Let us start with a data set which we can write as a matrix:

X = \begin{pmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,N} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,N} \\ \vdots & & \ddots & \vdots \\ x_{n,1} & x_{n,2} & \cdots & x_{n,N} \end{pmatrix}
• Each column is one data point and each row is a variable, but take care: sometimes the transpose convention is used.
The mean-adjusted data matrix
• We form the mean-adjusted data matrix by subtracting the mean of each variable:

U = \begin{pmatrix} x_{1,1} - m_1 & x_{1,2} - m_1 & \cdots & x_{1,N} - m_1 \\ x_{2,1} - m_2 & x_{2,2} - m_2 & \cdots & x_{2,N} - m_2 \\ \vdots & & \ddots & \vdots \\ x_{n,1} - m_n & x_{n,2} - m_n & \cdots & x_{n,N} - m_n \end{pmatrix}

• The covariance matrix is then

S = \frac{1}{N} U U^T
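The two steps above can be sketched in NumPy; the data values here are made up purely for illustration:

```python
import numpy as np

# Toy data set: n = 3 variables (rows), N = 5 data points (columns).
X = np.array([[2.0, 4.0, 6.0, 8.0, 10.0],
              [1.0, 3.0, 2.0, 5.0, 4.0],
              [0.5, 1.5, 1.0, 2.0, 2.5]])

n, N = X.shape
m = X.mean(axis=1, keepdims=True)   # per-variable means m_1 ... m_n
U = X - m                           # mean-adjusted data matrix
S = (U @ U.T) / N                   # covariance matrix S = (1/N) U U^T

print(S.shape)  # (3, 3)
```

Note that each row of U sums to zero by construction, and S is an n x n symmetric matrix.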
Geometric Idea
[Figure: two scatter plots of the same data in the (x1, x2) plane. PCA finds the directions (f1, f2) of maximum variance; LDA finds the single direction u onto which the projected classes are best separated.]
Method (Additional Notes)
• Let the between-class scatter matrix S_b be defined as

S_b = \sum_{i=1}^{g} N_i (\bar{x}_i - \bar{x})(\bar{x}_i - \bar{x})^T
• and the within-class scatter matrix S_w be defined as

S_w = \sum_{i=1}^{g} \sum_{j=1}^{N_i} (x_{i,j} - \bar{x}_i)(x_{i,j} - \bar{x}_i)^T = \sum_{i=1}^{g} (N_i - 1) S_i

where g is the number of classes, N_i is the number of points in class i, \bar{x}_i is the mean of class i, \bar{x} is the overall mean, and S_i is the sample covariance matrix of class i.
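A minimal NumPy sketch of the two scatter matrices; the two-class toy data set is made up for illustration:

```python
import numpy as np

# Toy 2-class data set: each class is an (N_i x n) array of points (rows = samples).
classes = [
    np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 3.0]]),   # class 1
    np.array([[6.0, 5.0], [7.0, 7.0], [8.0, 6.0]]),   # class 2
]

n = classes[0].shape[1]
x_bar = np.vstack(classes).mean(axis=0)   # overall mean

Sb = np.zeros((n, n))                     # between-class scatter
Sw = np.zeros((n, n))                     # within-class scatter
for Xi in classes:
    Ni = Xi.shape[0]
    xi_bar = Xi.mean(axis=0)
    d = (xi_bar - x_bar).reshape(-1, 1)
    Sb += Ni * (d @ d.T)                  # N_i (x_i - x)(x_i - x)^T
    C = Xi - xi_bar                       # mean-adjusted class data
    Sw += C.T @ C                         # equals (N_i - 1) * cov of class i
```

With g = 2 classes, Sb has rank 1, which is why two-class LDA yields a single discriminant direction.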
• Maximising Fisher's criterion leads to the eigenvalue problem:

S_w^{-1} S_b P - \lambda S_w^{-1} S_w P = 0
S_w^{-1} S_b P - \lambda P = 0
(S_w^{-1} S_b) P = \lambda P
Standard LDA (Additional Notes)
• If S_w is a non-singular matrix then Fisher's criterion is maximised when the projection matrix P_lda is composed of the eigenvectors of

S_w^{-1} S_b
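Putting the pieces together, a sketch of the full procedure: build the scatter matrices, solve the eigenproblem for S_w^{-1} S_b, and keep the leading eigenvectors as P_lda. The helper name `lda_projection` and the toy data are illustrative, not from the lecture:

```python
import numpy as np

def lda_projection(classes, k=1):
    """Return an n x k LDA projection matrix: the top-k eigenvectors of Sw^{-1} Sb.

    classes: list of (N_i x n) arrays, one per class (rows = samples).
    k: number of discriminant directions (at most g - 1 are informative).
    """
    n = classes[0].shape[1]
    x_bar = np.vstack(classes).mean(axis=0)
    Sb = np.zeros((n, n))
    Sw = np.zeros((n, n))
    for Xi in classes:
        Ni, xi_bar = Xi.shape[0], Xi.mean(axis=0)
        d = (xi_bar - x_bar).reshape(-1, 1)
        Sb += Ni * (d @ d.T)
        C = Xi - xi_bar
        Sw += C.T @ C
    # Solve (Sw^{-1} Sb) P = lambda P; assumes Sw is non-singular.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(evals.real)[::-1]          # sort by decreasing eigenvalue
    return evecs.real[:, order[:k]]

# Example: project two well-separated classes onto one discriminant direction.
c1 = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 3.0]])
c2 = np.array([[6.0, 5.0], [7.0, 7.0], [8.0, 6.0]])
P = lda_projection([c1, c2], k=1)
```

Projecting each class onto P (via `c1 @ P`, `c2 @ P`) gives 1-D scores whose class means are far apart relative to the within-class spread, which is exactly what Fisher's criterion rewards.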