
M.Sc PROJECT PRESENTATION
TOPIC: MATRIX DECOMPOSITION AND ITS APPLICATIONS
PREPARED BY:
DONALD LAMIN (230616018)
SHIRUP PHAWA (230616045)
UNDER THE GUIDANCE OF Dr. SHOUVIK BHATTACHARYA
Singular Value Decomposition: Any m by n matrix A can be factored into
A = UΣV^T = (orthogonal)(diagonal)(orthogonal).
The columns of U (m by m) are eigenvectors of AA^T, and the columns of V (n by n)
are eigenvectors of A^T A. The r singular values on the diagonal of Σ (m by n) are the
square roots of the nonzero eigenvalues of both AA^T and A^T A.

Remark 1 For positive definite matrices, Σ is Λ and UΣV^T is identical to QΛQ^T. For other
symmetric matrices, any negative eigenvalues in Λ become positive in Σ. For
complex matrices, Σ remains real but U and V become unitary (the complex
version of orthogonal). We take complex conjugates in U^H and V^H, and the
factorization becomes A = UΣV^H.
Remark 2 The columns of U and V give orthonormal bases for all four
fundamental subspaces:
first r columns of U: column space of A
last m − r columns of U: left nullspace of A
first r columns of V: row space of A
last n − r columns of V: nullspace of A

Remark 3 The SVD chooses those bases in an extremely special way. They are
more than just orthonormal. When A multiplies a column v_i of V, it produces σ_i
times a column of U. That comes directly from AV = UΣ, looked at a column at a
time.
Remark 4 Eigenvectors of AA^T and A^T A must go into the columns of U and V:

AA^T = (UΣV^T)(VΣ^T U^T) = U(ΣΣ^T)U^T and, similarly, A^T A = V(Σ^T Σ)V^T.  (1)

From the first, U must be the eigenvector matrix for AA^T. The eigenvalue matrix in
the middle is ΣΣ^T, which is m by m with σ_1^2, ..., σ_r^2 on the diagonal. From the second, V must
be the eigenvector matrix for A^T A. The diagonal matrix Σ^T Σ has the same σ_1^2, ..., σ_r^2, but it is n
by n.
These checks are easy, but they do not prove the SVD. The proof is not
difficult, but it can follow the examples and applications.
Remark 5 Here is the reason that Av_i = σ_i u_i. Start with A^T A v_i = σ_i^2 v_i:

Multiply by A:   AA^T (Av_i) = σ_i^2 (Av_i)  (2)

This says that Av_i is an eigenvector of AA^T! We just moved the parentheses to (AA^T)(Av_i). The
length of this eigenvector Av_i is σ_i, because
v_i^T A^T A v_i = σ_i^2 v_i^T v_i gives ||Av_i||^2 = σ_i^2.
So the unit eigenvector is u_i = Av_i/σ_i. In other words, AV = UΣ.
EXAMPLE 1 This A has only one column: rank r = 1. Then Σ has only σ_1 = 3. For instance,
take A = (-1, 2, 2)^T:

SVD: A = UΣV^T = (1/3)[-1 2 2; 2 -1 2; 2 2 -1] [3; 0; 0] [1]

Here A^T A is 1 by 1 while AA^T is 3 by 3. They both have eigenvalue 9 (whose square
root is the 3 in Σ). The two zero eigenvalues of AA^T leave some freedom for the
eigenvectors in columns 2 and 3 of U. We made a choice which kept that matrix
orthogonal.
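
A quick numerical check of this factorization (a minimal numpy sketch; the column vector is the instance chosen above):

import numpy as np

A = np.array([[-1.0], [2.0], [2.0]])              # 3 by 1, rank 1
U = (1.0 / 3.0) * np.array([[-1.0,  2.0,  2.0],
                            [ 2.0, -1.0,  2.0],
                            [ 2.0,  2.0, -1.0]])
Sigma = np.array([[3.0], [0.0], [0.0]])           # 3 by 1, only sigma_1 = 3
V = np.array([[1.0]])

print(np.allclose(U.T @ U, np.eye(3)))            # U is orthogonal: True
print(np.allclose(U @ Sigma @ V.T, A))            # A = U Sigma V^T: True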
APPLICATIONS OF THE SVD

We will pick a few of the important applications, after emphasizing one key
point. The SVD is terrific for numerically stable computations. The first reason is
that U and V are orthogonal matrices; they never change the length of a vector.
Since ||Ux|| = ||x||, multiplication by U cannot destroy the scaling. Such a statement cannot
be made for the other factor Σ; we could multiply by a large σ or (more
commonly) divide by a small σ, and overflow the computer. But Σ is still as good
as possible. It reveals exactly what is large and what is small, and the easy
availability of that information is the second reason for the popularity of the SVD.
We come back to this in the applications below.
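
The stability point can be seen directly (a small numpy sketch, using an arbitrary random matrix): the orthogonal factors never change the length of a vector, while all the scaling sits in Σ.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
U, s, Vt = np.linalg.svd(A)                 # A = U diag(s) V^T

x = rng.standard_normal(5)
# Multiplying by the orthogonal factor U preserves the 2-norm.
print(np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x)))   # True
# All of A's scaling sits in the singular values, largest first.
print(s)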
1. The effective rank The rank of a matrix is the number of independent rows,
and the number of independent columns. That can be hard to decide in
computations! In exact arithmetic, counting the pivots is correct. Real
arithmetic can be misleading, but discarding small pivots is not the answer.
Consider the following, where ε is small (three representative 2 by 2 matrices):

[ε 2ε; 1 2]   and   [ε 1; 0 2]   and   [1 1; 1 1+ε]

The first has rank 1, although roundoff error will probably produce a second
pivot. Both pivots will be small; how many do we ignore? The second has one
small pivot, but we cannot pretend that its row is insignificant. The third has
two pivots and its rank is 2, but its "effective rank" ought to be 1.
We go to a more stable measure of rank. The first step is to use A^T A or AA^T, which
are symmetric but share the same rank as A. Their eigenvalues, the singular
values squared, are not misleading. Based on the accuracy of the data, we
decide on a tolerance, such as 10^-6, and count the singular values above it; that is the
effective rank. The examples above have effective rank 1 (when ε is very small).
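
A minimal numpy sketch of this rule (the tolerance 1e-6 is an assumed placeholder; in practice it should reflect the accuracy of the data):

import numpy as np

def effective_rank(A, tol=1e-6):
    """Count the singular values above tol: a stable estimate of rank."""
    s = np.linalg.svd(A, compute_uv=False)    # singular values only
    return int(np.sum(s > tol))

eps = 1e-9
print(effective_rank(np.array([[eps, 2 * eps], [1.0, 2.0]])))     # 1
print(effective_rank(np.array([[1.0, 1.0], [1.0, 1.0 + eps]])))   # 1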

2. Polar Decomposition Every complex number z is the product of a nonnegative
number r and a number on the unit circle: z = re^(iθ). That is the expression of z in
"polar coordinates." If we think of these numbers as 1 by 1 matrices, r
corresponds to a positive semidefinite matrix and e^(iθ) corresponds to an
orthogonal matrix. More exactly, since e^(iθ) is complex and satisfies e^(-iθ)e^(iθ) = 1, it forms a
1 by 1 unitary matrix: U^H U = I. We take the complex conjugate as well as the transpose, for U^H.
The SVD extends this "polar factorization" to matrices of any size:

Every real square matrix can be factored into A = QS, where Q is orthogonal
and S is symmetric positive semidefinite. If A is invertible then S is positive
definite.

For the proof we just insert V^T V = I into the middle of the SVD:

A = UΣV^T = (UV^T)(VΣV^T).  (3)

The factor S = VΣV^T is symmetric and semidefinite (because Σ is). The factor Q = UV^T is an
orthogonal matrix (because Q^T Q = VU^T UV^T = I). In the complex case, S becomes Hermitian instead
of symmetric and Q becomes unitary instead of orthogonal. In the invertible case
Σ is definite and so is S.
EXAMPLE 2 POLAR DECOMPOSITION:
A = QS with Q = UV^T and S = VΣV^T.

EXAMPLE 3 REVERSE POLAR DECOMPOSITION:
A = S'Q with S' = UΣU^T and the same Q = UV^T.

The exercises show how, in the reverse order, S changes but Q remains the
same. Both S and S' are symmetric positive definite when A is invertible.
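
Both factorizations fall out of the SVD in a few lines (a minimal numpy sketch; the 2 by 2 matrix is an arbitrary invertible choice):

import numpy as np

A = np.array([[2.0, -1.0],
              [1.0,  3.0]])               # arbitrary invertible matrix

U, s, Vt = np.linalg.svd(A)
Q  = U @ Vt                               # orthogonal factor
S  = Vt.T @ np.diag(s) @ Vt               # symmetric positive definite
S2 = U @ np.diag(s) @ U.T                 # reverse-order factor S'

print(np.allclose(A, Q @ S))              # A = QS: True
print(np.allclose(A, S2 @ Q))             # A = S'Q: True
print(np.allclose(Q.T @ Q, np.eye(2)))    # Q is orthogonal: True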
Properties of Singular Value Decomposition (SVD):

1. The singular values in Σ are always non-negative and arranged in
decreasing order.
2. If A is symmetric positive semidefinite, the SVD reduces to the eigenvalue
decomposition.
3. The rank of the matrix A is equal to the number of non-zero
singular values.
4. SVD provides a low-rank approximation of A using its largest
singular values, as the sketch below shows.
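
A minimal numpy sketch of property 4 (the test matrix and the choice k = 1 are illustrative): keeping the k largest singular values gives the best rank-k approximation of A.

import numpy as np

def low_rank(A, k):
    """Best rank-k approximation: keep the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-9]])          # effective rank 1
print(np.linalg.norm(A - low_rank(A, 1)))  # error is about 5e-10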
THEOREM:
Let A = UΣV^T be a singular value decomposition of an m × n real
matrix A of rank r,
where U and V are orthogonal matrices such that U^T U = I and V^T V = I,
and the matrix Σ contains the singular values σ_1 ≥ σ_2 ≥ ... ≥ σ_r > 0 of A.
Then,
1. AV = UΣ, so that
Av_i = σ_i u_i for i = 1, ..., r, and
Av_i = 0 for i = r+1, ..., n.
2. A^T U = VΣ^T, so that
A^T u_i = σ_i v_i for i = 1, ..., r, and
A^T u_i = 0 for i = r+1, ..., m.
3.
A^T A v_i = σ_i^2 v_i for i = 1, ..., r, and
A^T A v_i = 0 for i = r+1, ..., n.
4.
AA^T u_i = σ_i^2 u_i for i = 1, ..., r, and
AA^T u_i = 0 for i = r+1, ..., m.
Proof: Given, A = UΣV^T, where
U and V are orthogonal matrices and the matrix Σ contains the singular
values of A.

Proof of (1):
AV = (UΣV^T)V = UΣ(V^T V) = UΣ.
Reading this a column at a time, and using σ_i = 0 for i > r:
Hence,
1: Av_i = σ_i u_i for i = 1, ..., r.
2: Av_i = 0 for i = r+1, ..., n.
Proof of (2):

A^T U = (VΣ^T U^T)U = VΣ^T(U^T U) = VΣ^T.

So, column by column:

A^T u_i = σ_i v_i for i = 1, ..., r.
A^T u_i = 0 for i = r+1, ..., m.

Hence, A^T u_i = σ_i v_i, for i = 1, ..., r.
A^T u_i = 0, for i = r+1, ..., m.
Proof of (3):

A^T A = (VΣ^T U^T)(UΣV^T) = V(Σ^T Σ)V^T.
So,
A^T A V = V(Σ^T Σ),
and Σ^T Σ is diagonal with entries σ_i^2 (zero for i > r).
Hence,
A^T A v_i = σ_i^2 v_i, for i = 1, ..., r.
A^T A v_i = 0, for i = r+1, ..., n.
Proof of (4):

AA^T = (UΣV^T)(VΣ^T U^T) = U(ΣΣ^T)U^T.
So,
AA^T U = U(ΣΣ^T),
and ΣΣ^T is diagonal with entries σ_i^2 (zero for i > r).
Hence,
AA^T u_i = σ_i^2 u_i, for i = 1, ..., r.
AA^T u_i = 0, for i = r+1, ..., m.
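
All four identities can be checked numerically (a numpy sketch on an arbitrary random matrix, with Σ assembled in its full m × n shape):

import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 3
A = rng.standard_normal((m, n))
U, s, Vt = np.linalg.svd(A)               # full SVD: U is m x m, Vt is n x n
V = Vt.T
Sigma = np.zeros((m, n))
Sigma[:n, :n] = np.diag(s)

print(np.allclose(A @ V, U @ Sigma))                    # 1. AV = U Sigma
print(np.allclose(A.T @ U, V @ Sigma.T))                # 2. A^T U = V Sigma^T
print(np.allclose(A.T @ A @ V, V @ Sigma.T @ Sigma))    # 3. A^T A v_i = sigma_i^2 v_i
print(np.allclose(A @ A.T @ U, U @ Sigma @ Sigma.T))    # 4. A A^T u_i = sigma_i^2 u_i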
Computation for SVD:

Here, we provide an algorithm to calculate a singular value
decomposition of a matrix A.
Step 1 – compute A^T A.
Step 2 – compute the singular values of A: the square roots of the eigenvalues of A^T A.
Step 3 – compute the eigenvectors of A^T A: they form the columns of V.
Step 4 – compute the orthogonal matrix U from the eigenvectors of AA^T
(equivalently, u_i = Av_i/σ_i for i = 1, ..., r).
Step 5 – construct the matrix Σ from the singular values.
Step 6 – verify that A = UΣV^T.
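
The steps translate directly into code (a minimal numpy sketch; eigh is applied to A^T A, eigenvector signs are arbitrary, and a complete QR factorization fills in the left-nullspace columns of U as one possible choice):

import numpy as np

def svd_via_eigs(A, tol=1e-12):
    """Textbook SVD: eigen-decompose A^T A for V, then build U and Sigma."""
    m, n = A.shape
    lam, V = np.linalg.eigh(A.T @ A)        # Steps 1 and 3
    order = np.argsort(lam)[::-1]           # largest eigenvalue first
    lam, V = lam[order], V[:, order]
    s = np.sqrt(np.clip(lam, 0.0, None))    # Step 2: singular values
    r = int(np.sum(s > tol))                # numerical rank
    Ur = (A @ V[:, :r]) / s[:r]             # Step 4: u_i = A v_i / sigma_i
    Q, _ = np.linalg.qr(Ur, mode='complete')
    U = np.hstack([Ur, Q[:, r:]])           # complete U to an orthogonal matrix
    Sigma = np.zeros((m, n))                # Step 5
    k = min(m, n)
    Sigma[:k, :k] = np.diag(s[:k])
    return U, Sigma, V

A = np.array([[1.0, 1.0], [0.0, 1.0], [-1.0, 1.0]])
U, Sigma, V = svd_via_eigs(A)
print(np.allclose(A, U @ Sigma @ V.T))      # Step 6: True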
Example:
A = [1 1; 0 1; -1 1]
Solution:
Given A = [1 1; 0 1; -1 1].
Compute V:
Consider the matrix A^T A = [2 0; 0 3].
We have to find the eigenvalues and eigenvectors of A^T A.
Let det(A^T A − λI) = 0.
Then (2 − λ)(3 − λ) = 0, so the eigenvalues are λ_1 = 3 and λ_2 = 2.
Now,
For λ_1 = 3:
Let v_1 = (a, b)^T.
Then (A^T A − 3I)v_1 = 0 gives −a = 0,  …(1)
and a^2 + b^2 = 1.  …(2)
From (1) and (2) we get
a = 0, b = 1, so v_1 = (0, 1)^T.

For λ_2 = 2:
Let v_2 = (a, b)^T.
Then (A^T A − 2I)v_2 = 0 gives b = 0.
We get,
a = 1, b = 0, so v_2 = (1, 0)^T and V = [0 1; 1 0].
Compute U:
Consider the matrix AA^T = [2 1 0; 1 1 1; 0 1 2].
Let det(AA^T − λI) = λ^3 − c_2 λ^2 + c_1 λ − c_0 = 0.
Where,
c_2 = trace of AA^T = 5,
c_1 = sum of the 2 by 2 principal minors of AA^T = 6,
c_0 = det(AA^T) = 0.

Therefore, λ^3 − 5λ^2 + 6λ = 0, so λ = 3, 2, 0.

For λ = 3:
Let u = (a, b, c)^T.
Then (AA^T − 3I)u = 0.

We get
a = 1, b = 1, c = 1,
and the normalized eigenvector is u_1 = (1/√3)(1, 1, 1)^T.
Similarly,
for λ = 2, u_2 = (1/√2)(1, 0, −1)^T,
for λ = 0, u_3 = (1/√6)(1, −2, 1)^T.
Compute Σ:
Since σ_1 = √λ_1 = √3 and σ_2 = √λ_2 = √2,
Therefore, Σ = [√3 0; 0 √2; 0 0].
Thus,
UΣV^T
= [1/√3 1/√2 1/√6; 1/√3 0 −2/√6; 1/√3 −1/√2 1/√6] [√3 0; 0 √2; 0 0] [0 1; 1 0]
= [1 1; 1 0; 1 −1] [0 1; 1 0]
= [1 1; 0 1; −1 1]
= A.

Equivalently, as a sum of rank-one pieces,
A = σ_1 u_1 v_1^T + σ_2 u_2 v_2^T
= [0 1; 0 1; 0 1] + [1 0; 0 0; −1 0]
= [1 1; 0 1; −1 1].
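
A quick cross-check with numpy (note that np.linalg.svd may flip the signs of a u_i, v_i pair, so we compare the reconstruction rather than the individual factors):

import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [-1.0, 1.0]])
U, s, Vt = np.linalg.svd(A)
print(s)                                   # [1.732..., 1.414...] = [sqrt(3), sqrt(2)]
Sigma = np.zeros((3, 2))
Sigma[:2, :2] = np.diag(s)
print(np.allclose(A, U @ Sigma @ Vt))      # True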
REFERENCES:

1) G. Strang, Linear Algebra and Its Applications, 4th ed.
2) D. Poole, Linear Algebra: A Modern Introduction, Cengage Learning, 2015.
3) G. Strang, Introduction to Linear Algebra, Wellesley-Cambridge Press, 2010.
THANK YOU
