Lecture 6
Orthogonal matrices
• independent basis, orthogonal basis,
orthonormal vectors, normalization
• Put orthonormal vectors into a matrix
– Generally a rectangular matrix – a matrix with orthonormal columns
– A square matrix with orthonormal columns – an orthogonal matrix Q
• Matrix A has non-orthogonal columns,
A → Q … Gram-Schmidt orthogonalization
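A minimal Matlab sketch of classical Gram-Schmidt (the matrix A below is just a made-up example with independent, non-orthogonal columns):
A = [1 1 0; 1 0 1; 0 1 1];           % example: independent but non-orthogonal columns
[m, n] = size(A);
Q = zeros(m, n);
for j = 1:n
    v = A(:,j);
    for i = 1:j-1
        v = v - (Q(:,i)' * A(:,j)) * Q(:,i);   % subtract projections onto previous q's
    end
    Q(:,j) = v / norm(v);            % normalize
end
Q' * Q                               % should be (close to) the identity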
• Q⁻¹ = Qᵀ
• Having Q simplifies many formulas, e.g.
AᵀAx̂ = Aᵀb   …   QᵀQx̂ = Qᵀb  ⇒  Ix̂ = Qᵀb  ⇒  x̂ = Qᵀb
• QR decomposition: A = QR
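A short Matlab sketch (made-up data) of how A = QR simplifies least squares, x̂ = R⁻¹Qᵀb:
A = [1 1; 1 2; 1 3; 1 4];            % made-up overdetermined system
b = [1; 2; 2; 4];
[Q, R] = qr(A, 0);                   % economy-size QR, Q has orthonormal columns
x_qr = R \ (Q' * b);                 % x_hat = R^-1 * Q' * b
x_ls = A \ b;                        % Matlab's own least-squares solution, for comparison
[x_qr x_ls]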
Determinants
det [a b; c d] = ad − bc
• Properties:
– Exchanging two rows/columns of a matrix reverses the sign of the determinant.
– The determinant equals 0 if two rows/columns are the same.
– The determinant of a triangular matrix is the product of its diagonal entries.
• determinant is a test for invertibility
– matrix is invertible if |A| ≠ 0
– matrix is singular if |A| = 0
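A quick Matlab check of the invertibility test on two made-up 2×2 matrices:
A = [1 2; 3 4];                      % ad - bc = 4 - 6 = -2, nonzero -> invertible
B = [1 2; 2 4];                      % second row is a multiple of the first -> det = 0, singular
det(A)
det(B)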
Eigenvalues and eigenvectors
Ax = λx
• The action of matrix A on an eigenvector x returns a vector parallel to x (scaled by λ).
• x … eigenvectors, λ … eigenvalues
• spectrum, trace (sum of λ), determinant (product
of λ)
• λ = 0 … singular matrix
• Find eigendecomposition by solving characteristic
equation (leading to characteristic polynomial)
det(A - λI) = 0
• Find the λ's first, then find the eigenvectors by Gaussian elimination, as solutions of (A − λI)x = 0.
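A small Matlab sketch of this two-step recipe on a made-up symmetric 2×2 matrix:
A = [2 1; 1 2];
poly(A)                              % coefficients of det(A - lambda*I): lambda^2 - 4*lambda + 3
lambda = roots(poly(A))              % eigenvalues 3 and 1
v1 = null(A - 3*eye(2))              % eigenvector: nonzero solution of (A - 3I)x = 0
v2 = null(A - 1*eye(2))              % eigenvector for lambda = 1
[V, D] = eig(A)                      % Matlab's eig does both steps at once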
• algebraic multiplicity of an eigenvalue
– multiplicity of the corresponding root
• geometric multiplicity of an eigenvalue
– number of linearly independent eigenvectors
with that eigenvalue
– degenerate (defective) matrix
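A standard defective example checked in Matlab (the matrix is a textbook illustration, not from the lecture):
A = [1 1; 0 1];                      % repeated eigenvalue lambda = 1, algebraic multiplicity 2
eig(A)
2 - rank(A - 1*eye(2))               % geometric multiplicity = n - rank(A - lambda*I) = 1
% only one independent eigenvector for lambda = 1 -> A is defective, not diagonalizable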
Diagonalization
• Square matrix A … diagonalizable if there exists an invertible matrix P such that P⁻¹AP is a diagonal matrix.
• Diagonalization … process of finding a
corresponding diagonal matrix.
• Diagonalization of a square matrix with independent eigenvectors means its eigendecomposition:
S⁻¹AS = Λ
• Which matrices are diagonalizable (i.e. can be eigendecomposed)?
– All λ's are different (a λ can be zero, the matrix is still diagonalizable, but then it is singular).
– If some λ's are the same, I must be careful, I must check the eigenvectors.
– If I have repeated eigenvalues, I may or may not have n independent eigenvectors.
– If the eigenvectors are independent, then it is diagonalizable (even if some eigenvalues repeat).
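A short Matlab sketch of the eigendecomposition on a made-up matrix with distinct eigenvalues:
A = [4 1; 2 3];                      % made-up matrix, eigenvalues 5 and 2
[S, Lambda] = eig(A);                % columns of S are eigenvectors, Lambda is diagonal
inv(S) * A * S                       % reproduces Lambda:  S^-1 A S = Lambda
S * Lambda * inv(S)                  % rebuilds A from its eigendecomposition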
A = UΣVᵀ: A is m × n, U is m × m, Σ is m × n, V is n × n
• Consider the case of a square full-rank matrix A (m × m).
• Look again at Av1 = σ1u1
– σ1u1 is a linear combination of the columns of A. Where does σ1u1 lie?
• In C(A)
– And where does v1 lie?
• In C(Aᵀ)
• In SVD I’m looking for an orthogonal basis
in row space of A that gets knocked over
into an orthogonal basis in column space
of A.
• Can I construct an orthogonal basis of
C(AT)?
• Of course I can, Gram-Schmidt does this.
• However, when I take such a basis and
multiply it by A, there is no reason why the
result should also be orthogonal.
• I am looking for the special setup, where A
turns orthogonal basis vectors from row
space into orthogonal basis in the column
space.
A [v1 v2 … vm] = [u1 u2 … um] diag(σ1, σ2, …, σm)
(the v's are basis vectors in the row space of A, the u's are basis vectors in the column space of A, the σ's are the multiplying factors)
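A small Matlab check of this point on a made-up 2×2 matrix: an arbitrary orthonormal basis of the row space does not stay orthogonal after multiplying by A, but the special basis v1, v2 from the SVD does:
A = [4 1; 2 3];                      % made-up nonsingular matrix
[Qr, ~] = qr(A');                    % some orthonormal basis of the row space (columns of Qr)
(A*Qr(:,1))' * (A*Qr(:,2))           % generally nonzero: A destroys this orthogonality
[~, ~, V] = svd(A);
(A*V(:,1))' * (A*V(:,2))             % ~0: A*v1 and A*v2 stay orthogonal (A*vi = sigma_i*ui)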
A = [4 4; −3 3] = [1 0; 0 1] [√32 0; 0 √18] [1/√2 1/√2; −1/√2 1/√2]
A = [1 1; 0 0] = [1 0; 0 1] [√2 0; 0 0] [1/√2 1/√2; 1/√2 −1/√2]
Matlab code
c1 = [1 2 4 8]'
c2 = [3 6 9 12]'
c3 = [12 16 3 5]'
c4 = c1 - c2 + 0.0000001*rand(4,1)   % fourth column is nearly dependent on c1 and c2
A = [c1 c2 c3 c4]
format long
[U,S,V] = svd(A)                     % the smallest singular value is tiny
U1 = U; S1 = S; V1 = V;
S1(4,4) = 0.0                        % drop the smallest singular value
B = U1*S1*V1'                        % rank-3 approximation of A
A - B
norm(A-B)
help norm
• m × n, m > n (overdetermined system)
– with all columns independent
A = [c1 c2 c3]
[U,S,V] = svd(A)
U(:,1:3)*S(1:3,1:3)*V'               % thin (economy) SVD rebuilds A
A [v1 v2 … vn] = [u1 u2 … un un+1 … um] Σ,   where Σ is m × n with σ1, …, σn on its diagonal
(complete the u's to an orthonormal basis for the whole column space Rᵐ)
A [v1 v2 … vm vm+1 … vn] = [u1 u2 … um] Σ,   where Σ is m × n with σ1, …, σm on its diagonal
(complete the v's to an orthonormal basis for the whole row space Rⁿ; these extra vectors come from the null space, which is orthogonal to the row space. Complete Σ by adding zeros.)
Avᵢ = σᵢuᵢ ,  i = 1, …, r
• Rank is?
– two, invertible, nonsingular
• I'm going to look for two vectors v1 and v2
in the row space, which of course is what?
– R2
• And I'm going to look for u1, u2 in the
column space, which is also R2, and I'm
going to look for numbers σ1, σ2 so that it
all comes out right.
SVD example
A = [4 4; −3 3]
• 1st step – compute AᵀA
AᵀA = [25 7; 7 25]
• 2nd step – find its eigenvectors (they’ll be the v's) and eigenvalues (squares of the σ's)
– the eigenvectors and eigenvalues can be guessed just by looking
– what are the eigenvectors and their corresponding eigenvalues?
• [1 1]ᵀ, λ1 = 32
• [−1 1]ᵀ, λ2 = 18
• Actually, I made one mistake, do you see which one?
• I didn’t normalize the eigenvectors, they should be [1/√2, 1/√2]ᵀ …
A = [4 4; −3 3] = UΣVᵀ = U [√32 0; 0 √18] [1/√2 1/√2; −1/√2 1/√2]
A = [4 4; −3 3] = UΣVᵀ = [1 0; 0 1] [√32 0; 0 √18] [1/√2 1/√2; −1/√2 1/√2]
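The hand computation can be checked in Matlab (svd may flip the signs of some columns of U and V):
A = [4 4; -3 3];
[U, S, V] = svd(A)
[sqrt(32) sqrt(18)]                  % the singular values found by hand above
norm(U*S*V' - A)                     % ~0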
SVD expansion
• Do you remember column times row
picture of matrix multiplication?
• If you look at the UΣVᵀ multiplication as a column-times-row problem, then you get the SVD expansion.
A = ∑ᵢ σᵢ uᵢ vᵢᵀ
Please note that if you keep p = m terms of the expansion (for m < n), then we need m² + m·n numbers, which is more than the m·n needed to store A itself. So there is apparently some border at which storing the SVD matrices increases the storage needs!
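A Matlab sketch of the rank-p expansion and its storage cost, on a made-up matrix:
A = rand(5, 8);                            % made-up m x n matrix (m = 5, n = 8)
[U, S, V] = svd(A);
p = 3;                                     % keep p rank-one terms of the expansion
Ap = zeros(size(A));
for i = 1:p
    Ap = Ap + S(i,i) * U(:,i) * V(:,i)';   % A ~ sum_i sigma_i * u_i * v_i'
end
norm(A - Ap)                               % approximation error
p * (5 + 8 + 1)                            % numbers stored for p terms, vs. 5*8 = 40 for A itself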
Numerical Linear Algebra
Matrix Decompositions
[1 2; 2 3] [x; y] = [4.001; 7.001]  →  (x, y) = (1.999, 1.001)   … small change in the right-hand side b
[1 2; 2 3.999] [x; y] = [4; 7.999]  →  (x, y) = (2, 1)   … small change in the coefficient matrix A
[1 2; 2 3.999] [x; y] = [4.001; 7.998]  →  (x, y) = (−3.999, 4.000)
stable
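A quick Matlab check of this example using cond:
A_good = [1 2; 2 3];
A_bad  = [1 2; 2 3.999];                   % nearly singular
cond(A_good)                               % modest condition number
cond(A_bad)                                % huge condition number
A_bad \ [4; 7.999]                         % gives [2; 1]
A_bad \ [4.001; 7.998]                     % gives roughly [-3.999; 4.000]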
A(0) = A
for k = 1, 2, …
Q(k)R(k) = A(k-1) QR factorization of A(k-1)
A(k) = R(k)Q(k) Recombine factors in reverse order
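A minimal Matlab sketch of this QR iteration on a made-up 2×2 matrix:
A = [4 1; 2 3];                      % made-up example, eigenvalues 5 and 2
Ak = A;
for k = 1:50
    [Q, R] = qr(Ak);                 % QR factorization of A(k-1)
    Ak = R * Q;                      % recombine factors in reverse order
end
diag(Ak)                             % diagonal approaches the eigenvalues
eig(A)                               % compare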