Linear Algebra & Singular Value Decomposition
Jul-Nov 2014
Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications.
They may be distributed outside this class only with the permission of the Instructor.
1.1 Eigenvalues and Eigenvectors
To understand eigenvalues and eigenvectors intuitively, consider the following discussion.
First, let us think about what a square matrix does to a vector. Consider a matrix $A \in \mathbb{R}^{n \times n}$, and let us see what the matrix $A$ acting on a vector $\vec{x}$ does to this vector. By action, we mean multiplication, i.e., we get a new vector $\vec{y} = A\vec{x}$.
The matrix acting on a vector $\vec{x}$ does two things to the vector:
- It scales the vector.
- It rotates the vector.
However, for any matrix $A$, there are some favoured vectors/directions. When the matrix acts on these favoured vectors, the action results in merely scaling the vector; there is no rotation. These favoured vectors are precisely the eigenvectors, and the amount by which each of them stretches or compresses is the corresponding eigenvalue.
For a more formal discussion:

Definition 1.1 Consider a square matrix $A \in \mathbb{R}^{n \times n}$. $\lambda \in \mathbb{C}$ is an eigenvalue of the matrix $A$ and a vector $\vec{x} \in \mathbb{C}^n$ is its eigenvector if $A\vec{x} = \lambda\vec{x}$, where $\vec{x} \neq \vec{0}$.
Example: $A = \begin{pmatrix} 2 & 1 \\ 5 & 6 \end{pmatrix}$, $\vec{v} = \begin{pmatrix} 1 \\ 5 \end{pmatrix}$, $A\vec{v} = \begin{pmatrix} 7 \\ 35 \end{pmatrix} = 7\vec{v}$.
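The example can be checked numerically. Below is a minimal sketch in Python with NumPy (the notes do not prescribe a language; this is an illustration, not part of the original derivation):

```python
import numpy as np

# The matrix and eigenvector from the example above.
A = np.array([[2.0, 1.0],
              [5.0, 6.0]])
v = np.array([1.0, 5.0])

# A acting on its eigenvector only scales it: no rotation.
print(A @ v)                    # [ 7. 35.] == 7 * v

# np.linalg.eig returns all eigenvalues and (as columns) eigenvectors.
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                  # 7 and 1 (order not guaranteed)
```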
1.1.1 Properties of Eigenvalues

Let $\lambda_1, \lambda_2, \ldots, \lambda_n$ be the eigenvalues of $A$. Then:
- $\operatorname{trace}(A) = \sum_{i=1}^{n} \lambda_i$
- $|A| = \prod_{i=1}^{n} \lambda_i$
- If $A$ is invertible, $\frac{1}{\lambda_i}$ is an eigenvalue of $A^{-1}$.
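These properties are easy to verify numerically; the following sketch reuses the matrix from the earlier example (which is invertible, as the last property requires):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 6.0]])
lam = np.linalg.eigvals(A)

# trace(A) equals the sum of the eigenvalues.
print(np.isclose(np.trace(A), lam.sum()))
# |A| equals the product of the eigenvalues.
print(np.isclose(np.linalg.det(A), lam.prod()))
# The eigenvalues of A^{-1} are the reciprocals 1/lambda_i.
lam_inv = np.linalg.eigvals(np.linalg.inv(A))
print(np.allclose(np.sort(lam_inv), np.sort(1.0 / lam)))
```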
Theorem 1.2 Consider a matrix $A_{n \times n}$. Let $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ be eigenvectors of $A$ and $\lambda_1, \lambda_2, \ldots, \lambda_n$ be the corresponding eigenvalues. If these eigenvalues are all distinct, then $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ are linearly independent.
Proof: Proof by contradiction.
Suppose $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ are linearly dependent. Consider the minimum value of $k$, $1 \le k \le n$, such that
$$\sum_{i=1}^{k} a_i \vec{v}_i = \vec{0}$$
for some coefficients $a_i$, all nonzero (if any $a_i$ were zero, a smaller $k$ would work). Applying $(A - \lambda_k I)$ to both sides:
$$(A - \lambda_k I) \sum_{i=1}^{k} a_i \vec{v}_i = \vec{0}$$
$$\sum_{i=1}^{k} (A - \lambda_k I) a_i \vec{v}_i = \vec{0}$$
$$\sum_{i=1}^{k} a_i (\lambda_i \vec{v}_i - \lambda_k \vec{v}_i) = \vec{0}$$
$$\sum_{i=1}^{k} a_i (\lambda_i - \lambda_k) \vec{v}_i = \vec{0}$$
When $i = k$, the term $a_i (\lambda_i - \lambda_k) \vec{v}_i = \vec{0}$ drops out, so
$$\sum_{i=1}^{k-1} a_i (\lambda_i - \lambda_k) \vec{v}_i = \vec{0}$$
Since the eigenvalues are distinct, $\lambda_i - \lambda_k \neq 0$ for $i < k$, so this is a nontrivial linear dependence among only $k-1$ of the eigenvectors. This is a contradiction, since we assumed that $k$ is the smallest such value. This concludes the proof.
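Theorem 1.2 can also be observed numerically: for a matrix with distinct eigenvalues, the matrix whose columns are its eigenvectors has full rank. A small sketch:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 6.0]])         # eigenvalues 7 and 1 are distinct
eigvals, V = np.linalg.eig(A)      # columns of V are eigenvectors
print(np.linalg.matrix_rank(V))    # 2 == n, so the eigenvectors are independent
```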
1.2 Diagonalisation
Definition 1.3 Suppose $A, B$ are two square matrices of size $n \times n$. We say $A, B$ are similar if $A = P^{-1} B P$ for some invertible matrix $P$.
Definition 1.4 Suppose $A$ is a square matrix of size $n \times n$. We say that $A$ is diagonalizable if there exists an invertible matrix $P$ such that $P^{-1} A P$ is a diagonal matrix.

So, our question is: which matrices are diagonalizable? The following theorem gives an answer.
Theorem 1.5 Suppose $A$ is a square matrix of size $n \times n$. Then $A$ is diagonalizable if and only if $A$ has $n$ linearly independent eigenvectors.
Proof: Suppose $A$ is diagonalizable. So, there is an invertible matrix $P$ such that $P^{-1} A P = D$ is a diagonal matrix. Write
$$P = \begin{pmatrix} \vec{p}_1 & \vec{p}_2 & \vec{p}_3 & \cdots & \vec{p}_n \end{pmatrix} \quad \text{and} \quad D = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}$$
where $\vec{p}_1, \vec{p}_2, \ldots, \vec{p}_n$ are the columns of $P$. We have $AP = PD$. So,
$$A \begin{pmatrix} \vec{p}_1 & \vec{p}_2 & \cdots & \vec{p}_n \end{pmatrix} = \begin{pmatrix} \vec{p}_1 & \vec{p}_2 & \cdots & \vec{p}_n \end{pmatrix} \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix}$$
Therefore, for $i = 1, 2, \ldots, n$ we have $A\vec{p}_i = \lambda_i \vec{p}_i$, and so the $\vec{p}_i$ are eigenvectors of $A$. Also, since $P$ is invertible, $\vec{p}_1, \vec{p}_2, \ldots, \vec{p}_n$ are linearly independent. So, $A$ has $n$ linearly independent eigenvectors. To prove the converse, assume $A$ has $n$ linearly independent eigenvectors $\vec{p}_1, \vec{p}_2, \ldots, \vec{p}_n$. Then, for $i = 1, 2, \ldots, n$ we have $A\vec{p}_i = \lambda_i \vec{p}_i$ for some $\lambda_i$. Write
$$P = \begin{pmatrix} \vec{p}_1 & \vec{p}_2 & \vec{p}_3 & \cdots & \vec{p}_n \end{pmatrix} \quad \text{and} \quad D = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}$$
It follows easily that $AP = PD$. Since the columns of $P$ are linearly independent, it follows that $P$ is invertible. Therefore, $P^{-1} A P = D$ is a diagonal matrix. So, the proof is complete.
Observations:
- The diagonalising matrix $S$ is not unique; for instance, rescaling any eigenvector gives another valid $S$, as the sketch below shows.
- Not all matrices are diagonalizable, i.e., there need not exist an invertible $S$ for which $S^{-1} A S$ is diagonal.
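Both points can be illustrated in NumPy: building $P$ from the eigenvectors diagonalises $A$, and rescaling the columns of $P$ diagonalises $A$ equally well (a sketch, using the same example matrix as before):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 6.0]])
eigvals, P = np.linalg.eig(A)          # columns of P are eigenvectors of A
D = np.linalg.inv(P) @ A @ P           # should be diag(lambda_1, ..., lambda_n)
print(np.round(D, 10))                 # off-diagonal entries are numerically zero

# Non-uniqueness: rescaling the eigenvectors (columns of P) still works.
P2 = P * np.array([3.0, -2.0])         # scale column 0 by 3, column 1 by -2
print(np.round(np.linalg.inv(P2) @ A @ P2, 10))
```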
We can see the usefulness of diagonalisation while calculating powers of matrices from the following example.

Example: Calculate $A^n$.
We know
$$A = S \Lambda S^{-1}$$
so
$$A^n = (S \Lambda S^{-1})(S \Lambda S^{-1}) \cdots (S \Lambda S^{-1}) = S \Lambda^n S^{-1},$$
since each interior $S^{-1} S$ cancels.
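In code, this means $A^n$ costs one eigendecomposition plus an elementwise power of the diagonal, instead of $n-1$ matrix multiplications. A minimal sketch, compared against NumPy's direct `matrix_power`:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 6.0]])
eigvals, S = np.linalg.eig(A)          # A = S Lambda S^{-1}

def matrix_power_eig(n):
    # Lambda^n is just the elementwise power of the eigenvalues.
    return S @ np.diag(eigvals ** n) @ np.linalg.inv(S)

print(np.round(matrix_power_eig(5)))
print(np.linalg.matrix_power(A, 5))    # direct computation, for comparison
```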
1.3 Positive Semidefinite Matrices

Definition 1.6 A symmetric matrix $A \in \mathbb{R}^{n \times n}$ is called positive semidefinite if $\vec{x}^T A \vec{x} \ge 0$ for all $\vec{x} \in \mathbb{R}^n$, and is called positive definite if $\vec{x}^T A \vec{x} > 0$ for all nonzero $\vec{x} \in \mathbb{R}^n$. If $A$ is positive semidefinite, we denote $A \succeq 0$.
Theorem 1.7 A symmetric matrix $A$ is positive semidefinite if and only if all eigenvalues of $A$ are non-negative.

Proof: For a matrix to be positive semidefinite,
$$\vec{x}^T A \vec{x} \ge 0 \quad \forall \vec{x}$$
But if $\vec{x}$ is an eigenvector of $A$ with eigenvalue $\lambda$, then
$$\vec{x}^T (\lambda \vec{x}) \ge 0$$
$$\lambda \, \vec{x}^T \vec{x} \ge 0$$
Since $\vec{x}^T \vec{x}$ is necessarily a positive number, in order for $\vec{x}^T A \vec{x}$ to be greater than or equal to $0$, we must have
$$\lambda \ge 0$$
Conversely, if all eigenvalues are non-negative, write $A = \sum_{i=1}^{n} \lambda_i \vec{u}_i \vec{u}_i^T$ by the spectral decomposition; then $\vec{x}^T A \vec{x} = \sum_{i=1}^{n} \lambda_i (\vec{u}_i^T \vec{x})^2 \ge 0$ for every $\vec{x}$.
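Theorem 1.7 gives a practical test for positive semidefiniteness: compute the eigenvalues and check their signs. A sketch, using `np.linalg.eigvalsh` (which assumes a symmetric input) and a small tolerance for floating-point error:

```python
import numpy as np

def is_psd(A, tol=1e-10):
    # A symmetric matrix is PSD iff all its eigenvalues are non-negative.
    return bool(np.all(np.linalg.eigvalsh(A) >= -tol))

print(is_psd(np.array([[2.0, -1.0], [-1.0, 2.0]])))  # True  (eigenvalues 1, 3)
print(is_psd(np.array([[1.0, 2.0], [2.0, 1.0]])))    # False (eigenvalues -1, 3)
```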
1.4 Singular Value Decomposition

Any matrix $A$ can be decomposed as
$$A = \sum_{i=1}^{n} \sigma_i \vec{u}_i \vec{v}_i^T$$
where the $\sigma_i$ are the singular values and $\vec{u}_i$, $\vec{v}_i$ are the left and right singular vectors of $A$.
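NumPy computes this decomposition directly; the sketch below recovers $A$ as the sum of the rank-one terms $\sigma_i \vec{u}_i \vec{v}_i^T$.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 6.0]])
U, sigma, Vt = np.linalg.svd(A)    # A = U diag(sigma) V^T

# Rebuild A as a sum of rank-1 outer products sigma_i * u_i v_i^T.
A_rebuilt = sum(sigma[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(sigma)))
print(np.allclose(A, A_rebuilt))   # True
```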