Internal 4 Sem
M.Sc PROJECT PRESENTATION
TOPIC: MATRIX DECOMPOSITION AND ITS APPLICATIONS
UNDER THE GUIDANCE OF Dr. SHOUVIK BHATTACHARYA
PREPARED BY:
DONALD LAMIN (230616018)
SHIRUP PHAWA (230616045)
Singular Value Decomposition: Any m by n matrix A can be factored into
A = UΣV^T = (orthogonal)(diagonal)(orthogonal).
The columns of U (m by m) are eigenvectors of AA^T, and the columns of V (n by n)
are eigenvectors of A^T A. The r singular values on the diagonal of Σ (m by n) are the
square roots of the nonzero eigenvalues of both AA^T and A^T A.
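As a quick numerical check of this factorization (a minimal sketch using NumPy; the 3 by 2 matrix is an arbitrary choice):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, -1.0],
              [1.0, 0.0]])                    # arbitrary 3 by 2 example

U, s, Vt = np.linalg.svd(A, full_matrices=True)
Sigma = np.zeros(A.shape)                     # Sigma is m by n, padded with zeros
Sigma[:len(s), :len(s)] = np.diag(s)

print(np.allclose(A, U @ Sigma @ Vt))         # A = U Sigma V^T
# singular values squared = nonzero eigenvalues of A^T A (and of AA^T)
print(np.allclose(np.sort(s**2), np.sort(np.linalg.eigvalsh(A.T @ A))))
```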
Remark 3 The SVD chooses those bases in an extremely special way. They are
more than just orthonormal. When A multiplies a column v_j of V, it produces σ_j
times a column u_j of U. That comes directly from AV = UΣ, looked at a column at a
time.
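Column by column this reads Av_j = σ_j u_j, which is easy to confirm (a small NumPy sketch; the example matrix is an arbitrary choice):

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, -1.0], [1.0, 0.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=True)

# A times the j-th column of V equals sigma_j times the j-th column of U
for j in range(len(s)):
    print(np.allclose(A @ Vt[j], s[j] * U[:, j]))
```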
Remark 4 Eigenvectors of AA^T and A^T A must go into the columns of U and V:
AA^T = (UΣV^T)(VΣ^T U^T) = U(ΣΣ^T)U^T  and  A^T A = V(Σ^T Σ)V^T.   (1)
From the first, U must be the eigenvector matrix for AA^T. The eigenvalue matrix in
the middle is ΣΣ^T, which is m by m with σ_1^2, ..., σ_r^2 on the diagonal. From the second, V must
be the eigenvector matrix for A^T A. The diagonal matrix Σ^T Σ has the same σ_1^2, ..., σ_r^2, but it is n
by n.
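Both identities in (1) can be verified numerically from the output of numpy.linalg.svd (a minimal sketch; the example matrix is again arbitrary):

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, -1.0], [1.0, 0.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=True)
Sigma = np.zeros(A.shape)
Sigma[:len(s), :len(s)] = np.diag(s)

print(np.allclose(A @ A.T, U @ (Sigma @ Sigma.T) @ U.T))    # AA^T = U ΣΣ^T U^T
print(np.allclose(A.T @ A, Vt.T @ (Sigma.T @ Sigma) @ Vt))  # A^T A = V Σ^T Σ V^T
```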
These checks are easy, but they do not prove the SVD. The proof is not
difficult, and it can come after the examples and applications.
Remark 5 Here is the reason that Av_j = σ_j u_j. Start with A^T Av_j = σ_j^2 v_j:
Multiply by A:   AA^T (Av_j) = σ_j^2 (Av_j).   (2)
This says that Av_j is an eigenvector of AA^T! We just moved the parentheses to (AA^T)(Av_j). The
length of this eigenvector Av_j is σ_j, because
v_j^T A^T Av_j = σ_j^2 v_j^T v_j   gives   ||Av_j||^2 = σ_j^2.
So the unit eigenvector is Av_j / σ_j = u_j. In other words, AV = UΣ.
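Both steps of this argument, that Av_j is an eigenvector of AA^T and that its length is σ_j, can be checked directly (a small NumPy sketch, again with an arbitrary example):

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, -1.0], [1.0, 0.0]])
U, s, Vt = np.linalg.svd(A)
v1, sigma1 = Vt[0], s[0]

# (AA^T)(Av_1) = sigma_1^2 (Av_1): Av_1 is an eigenvector of AA^T
print(np.allclose(A @ A.T @ (A @ v1), sigma1**2 * (A @ v1)))
# and its length is sigma_1
print(np.isclose(np.linalg.norm(A @ v1), sigma1))
```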
EXAMPLE 1 This A has only one column: rank r = 1. Then Σ has only the single singular value σ_1 = ||A||, and the SVD is A = UΣV^T with V = [1].
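For instance (a minimal sketch; the particular column vector is an arbitrary stand-in, chosen so that its length is 3):

```python
import numpy as np

A = np.array([[-1.0], [2.0], [2.0]])        # one column, rank r = 1
U, s, Vt = np.linalg.svd(A)

print(s)                                    # [3.] : sigma_1 = ||A|| = 3
print(Vt)                                   # V is the 1 by 1 matrix [±1]
print(np.isclose(s[0], np.linalg.norm(A)))
```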
We will pick a few of the important applications, after emphasizing one key
point. The SVD is terrific for numerically stable computations. The first reason is
that U and V are orthogonal matrices; they never change the length of a vector.
Since ||Ux|| = ||x||, multiplication by U cannot destroy the scaling. Such a statement cannot
be made for the other factor Σ; we could multiply by a large σ or (more
commonly) divide by a small σ, and overflow the computer. But Σ is still as good
as possible. It reveals exactly what is large and what is small, and the easy
availability of that information is the second reason for the popularity of the SVD.
We come back to this in the applications.
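That ||Ux|| = ||x|| for an orthogonal U is easy to see numerically (a small sketch; the random vector and seed are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, -1.0], [1.0, 0.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=True)

x = np.random.default_rng(0).standard_normal(3)
# the orthogonal factor never changes the length of a vector
print(np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x)))
```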
1. The effective rank The rank of a matrix is the number of independent rows,
and the number of independent columns. That can be hard to decide in
computations! In exact arithmetic, counting the pivots is correct. Real
arithmetic can be misleading, but discarding small pivots is not the answer.
Consider the following (with ε small):
[ε 2ε; 1 2],   [ε 1; 0 0],   [ε 1; 0 ε].
The first has rank 1, although roundoff error will probably produce a second
pivot. Both pivots will be small; how many do we ignore? The second has one
small pivot, but we cannot pretend that its row is insignificant. The third has
two pivots and its rank is 2, but its "effective rank" ought to be 1.
We go to a more stable measure of rank. The first step is to use A^T A or AA^T, which
are symmetric but share the same rank as A. Their eigenvalues, the singular
values squared, are not misleading. Based on the accuracy of the data, we
decide on a tolerance and count the singular values above it; that is the
effective rank. The examples above have effective rank 1 (when ε is very small).
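Counting singular values above a tolerance takes one line with NumPy (a minimal sketch; the tolerance 1e-6 and the value of ε are arbitrary illustrations, in practice the tolerance comes from the accuracy of the data):

```python
import numpy as np

def effective_rank(A, tol=1e-6):
    """Count the singular values of A that exceed tol."""
    return int(np.sum(np.linalg.svd(A, compute_uv=False) > tol))

eps = 1e-9
print(effective_rank(np.array([[eps, 2 * eps], [1, 2]])))  # 1
print(effective_rank(np.array([[eps, 1], [0, 0]])))        # 1
print(effective_rank(np.array([[eps, 1], [0, eps]])))      # 1, although rank is 2
```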
2. Polar decomposition Every real square matrix can be factored into A = QS, where Q is orthogonal
and S is symmetric positive semidefinite. If A is invertible then S is positive
definite.
For the proof we just insert V^T V = I into the middle of the SVD:
A = UΣV^T = (UV^T)(VΣV^T).   (3)
The factor S = VΣV^T is symmetric and semidefinite (because Σ is). The factor Q = UV^T is an
orthogonal matrix (because Q^T Q = VU^T UV^T = I). In the complex case, S becomes Hermitian instead
of symmetric and Q becomes unitary instead of orthogonal. In the invertible case
Σ is definite and so is S.
EXAMPLE 2 POLAR DECOMPOSITION: A = QS with Q = UV^T and S = VΣV^T.
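A numerical illustration (a minimal sketch built from numpy.linalg.svd; the 2 by 2 matrix is an arbitrary invertible choice):

```python
import numpy as np

A = np.array([[4.0, 4.0], [-3.0, 3.0]])     # arbitrary square example
U, s, Vt = np.linalg.svd(A)

Q = U @ Vt                                  # orthogonal factor
S = Vt.T @ np.diag(s) @ Vt                  # symmetric positive definite factor

print(np.allclose(A, Q @ S))                # A = QS
print(np.allclose(Q.T @ Q, np.eye(2)))      # Q is orthogonal
print(np.linalg.eigvalsh(S))                # eigenvalues of S are positive
```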
The SVD gives four basic relations between A and the singular vectors:
1. Av_i = σ_i u_i for 1 ≤ i ≤ r,   Av_i = 0 for i > r.
2. A^T u_i = σ_i v_i for 1 ≤ i ≤ r,   A^T u_i = 0 for i > r.
3. A^T Av_i = σ_i^2 v_i for 1 ≤ i ≤ r,   A^T Av_i = 0 for i > r.
4. AA^T u_i = σ_i^2 u_i for 1 ≤ i ≤ r,   AA^T u_i = 0 for i > r.
Proof: Given A = UΣV^T, where U and V are orthogonal matrices and the matrix Σ contains the singular
values of A.
Proof of (1):
AV = UΣV^T V = UΣ.
Hence,
Av_i = σ_i u_i for 1 ≤ i ≤ r, and Av_i = 0 for i > r.
Proof of (2):
A^T = (UΣV^T)^T = VΣ^T U^T, so A^T U = VΣ^T U^T U = VΣ^T.
Hence,
A^T u_i = σ_i v_i for 1 ≤ i ≤ r, and A^T u_i = 0 for i > r.
Proof of (3):
A^T A = (VΣ^T U^T)(UΣV^T) = VΣ^T ΣV^T, so A^T AV = VΣ^T Σ.
Hence,
A^T Av_i = σ_i^2 v_i for 1 ≤ i ≤ r, and A^T Av_i = 0 for i > r.
Proof of (4):
AA^T = (UΣV^T)(VΣ^T U^T) = UΣΣ^T U^T, so AA^T U = UΣΣ^T.
Hence,
AA^T u_i = σ_i^2 u_i for 1 ≤ i ≤ r, and AA^T u_i = 0 for i > r.
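All four properties can be confirmed at once (a minimal NumPy sketch on an arbitrary 3 by 2 example, where r = 2 and u_3 falls in the i > r range):

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, -1.0], [1.0, 0.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=True)
V = Vt.T
r = len(s)

for i in range(r):
    print(np.allclose(A @ V[:, i], s[i] * U[:, i]),            # (1)
          np.allclose(A.T @ U[:, i], s[i] * V[:, i]),          # (2)
          np.allclose(A.T @ A @ V[:, i], s[i]**2 * V[:, i]),   # (3)
          np.allclose(A @ A.T @ U[:, i], s[i]**2 * U[:, i]))   # (4)

print(np.allclose(A.T @ U[:, r:], 0))       # A^T u_i = 0 for i > r
```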
Computation of the SVD: Find the SVD of
A = [1 1; 1 -1; 1 0].
Compute V: A^T A = [3 0; 0 2], and
det(A^T A - λI) = (3 - λ)(2 - λ) = 0, so λ_1 = 3 and λ_2 = 2.
For λ = 2:
Let v = (a, b)^T. Then (A^T A - 2I)v = 0 gives
a = 0 ...(1) and 0·b = 0 ...(2).
From (1) and (2) we get
a = 0, b = 1, so v_2 = (0, 1)^T.
For λ = 3:
Let v = (a, b)^T. Then (A^T A - 3I)v = 0 gives -b = 0.
We get
a = 1, b = 0, so v_1 = (1, 0)^T and V = [1 0; 0 1].
Compute U:
Consider the matrix AA^T = [2 0 1; 0 2 1; 1 1 1].
Let det(AA^T - λI) = 0.
Where,
trace of AA^T = 5,
sum of the minors of the diagonal elements of AA^T = 6,
det(AA^T) = 0.
Therefore λ^3 - 5λ^2 + 6λ = λ(λ - 3)(λ - 2) = 0, and λ = 3, 2, 0.
For λ = 3:
Let u = (a, b, c)^T. Then (AA^T - 3I)u = 0 gives a = c and b = c.
We get
a = 1, b = 1, c = 1, and normalizing,
u_1 = (1/√3)(1, 1, 1)^T.
Similarly,
for λ = 2, u_2 = (1/√2)(1, -1, 0)^T,
for λ = 0, u_3 = (1/√6)(1, 1, -2)^T.
Compute Σ:
Since λ_1 = 3 and λ_2 = 2, σ_1 = √3 and σ_2 = √2.
Therefore, Σ = [√3 0; 0 √2; 0 0].
Thus,
UΣV^T = [1/√3 1/√2 1/√6; 1/√3 -1/√2 1/√6; 1/√3 0 -2/√6] [√3 0; 0 √2; 0 0] [1 0; 0 1]
= [1 1; 1 -1; 1 0]
= A.
Equivalently, as a sum of rank-one pieces,
A = σ_1 u_1 v_1^T + σ_2 u_2 v_2^T = [1 0; 1 0; 1 0] + [0 1; 0 -1; 0 0].
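The hand computation can be cross-checked with NumPy (a minimal sketch; numpy.linalg.svd may flip the signs of paired columns of U and V, so the comparison is against the singular values and the reassembled product rather than the entries of U and V):

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, -1.0], [1.0, 0.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=True)

print(np.allclose(s, [np.sqrt(3), np.sqrt(2)]))   # sigma_1 = sqrt(3), sigma_2 = sqrt(2)

Sigma = np.zeros(A.shape)
Sigma[:2, :2] = np.diag(s)
print(np.allclose(A, U @ Sigma @ Vt))             # U Sigma V^T reassembles A
```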