
Reminder of Linear Algebra

El Houcine Bergou
[email protected]

1/25

Vector notations and operations

▶ Rn : set of vectors with n ≥ 1 real components.


▶ x ∈ Rn is a column vector ; x⊤ will denote the corresponding row vector.
▶ xi will denote the i-th component of x.
▶ x + y is the component-wise addition.
▶ ∀λ ∈ R and x ∈ Rn , λx is the vector in Rn with i-th component equal to λxi .

Norm
Given x ∈ Rn , a norm of x, ∥x∥, is a nonnegative real number
satisfying :
▶ (Definiteness) : ∥x∥ ≥ 0 and ∥x∥ = 0 if and only if x = 0.
▶ (Homogeneity) : For any real number α and all x ∈ Rn ,
∥αx∥ = |α|∥x∥.
▶ (The triangle inequality) : For all x, y ∈ Rn , ∥x + y∥ ≤ ∥x∥ + ∥y∥.

2/25
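A quick numerical sketch of these operations and of the norm axioms, assuming NumPy; the vectors x, y and the scalar are arbitrary choices, not taken from the slides:

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0])   # a vector of R^3
y = np.array([0.5,  4.0, -1.0])

# Component-wise addition and scalar multiplication
print(x + y)        # component-wise sum: 1.5, 2.0, 2.0
print(2.5 * x)      # scalar multiple: 2.5, -5.0, 7.5

# Norm axioms, checked here for the Euclidean norm
nx, ny = np.linalg.norm(x), np.linalg.norm(y)
assert nx >= 0
assert np.isclose(np.linalg.norm(-3.0 * x), 3.0 * nx)    # homogeneity
assert np.linalg.norm(x + y) <= nx + ny + 1e-12          # triangle inequality
```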

Vector norm

Examples
▶ The Euclidean, or ℓ2 , norm : ∥x∥2 = √(x⊤ x) = (x1² + x2² + . . . + xn²)^{1/2} .
▶ The max, or ℓ∞ , norm : ∥x∥∞ = max_{1≤i≤n} |xi |.
▶ The p-norm, or ℓp norm : ∥x∥p = (|x1 |^p + |x2 |^p + . . . + |xn |^p)^{1/p} .

Exercise : Prove the previous claims.

3/25
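As a small sketch (assuming NumPy), the three example norms can be computed directly and compared with numpy.linalg.norm; the vector is an arbitrary choice:

```python
import numpy as np

x = np.array([3.0, -4.0, 12.0])

l2   = np.sqrt(np.sum(x**2))            # Euclidean (l2) norm
linf = np.max(np.abs(x))                # max (l-infinity) norm
p    = 3
lp   = np.sum(np.abs(x)**p)**(1.0 / p)  # p-norm

# Same results with the built-in helper
assert np.isclose(l2,   np.linalg.norm(x, 2))
assert np.isclose(linf, np.linalg.norm(x, np.inf))
assert np.isclose(lp,   np.linalg.norm(x, p))
print(l2, linf, lp)                     # 13.0, 12.0, ...
```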

Inner products

Definition
An inner product on Rn is a map (x, y) → ⟨x, y⟩ from Rn × Rn to R satisfying the following :
▶ (Definiteness) : ⟨x, x⟩ ≥ 0 and ⟨x, x⟩ = 0 if and only if x = 0.
▶ (Bilinearity) : For any real numbers α, β and all x, y, z ∈ Rn , ⟨αx + βy, z⟩ = α⟨x, z⟩ + β⟨y, z⟩.
▶ (Symmetry) : For all x, y ∈ Rn , ⟨x, y⟩ = ⟨y, x⟩.

Proposition (Inner product induces a norm)


▶ If ⟨·, ·⟩ is an inner product, then ∥x∥ = √⟨x, x⟩ is the norm induced by the inner product.
▶ In the rest, we will use ⟨x, y⟩ for the usual Euclidean dot product : ⟨x, y⟩ := Σ_{i=1}^{n} xi yi = x⊤ y.

Exercise : Prove the previous claims.

4/25
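A minimal sketch of the Euclidean dot product and the norm it induces (vectors and scalars chosen arbitrarily):

```python
import numpy as np

x = np.array([1.0, 2.0, -1.0])
y = np.array([3.0, 0.0,  2.0])
z = np.array([-1.0, 1.0, 4.0])
a, b = 2.0, -0.5

dot = lambda u, v: float(u @ v)          # <u, v> = sum_i u_i v_i

# Bilinearity and symmetry of the dot product
assert np.isclose(dot(a * x + b * y, z), a * dot(x, z) + b * dot(y, z))
assert np.isclose(dot(x, y), dot(y, x))

# Induced norm: sqrt(<x, x>) coincides with the Euclidean norm
assert np.isclose(np.sqrt(dot(x, x)), np.linalg.norm(x))
```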

Matrix notations and operations
▶ Rm×n is the set of real matrices with m rows and n columns
(m ≥ 1 and n ≥ 1). Note that Rm×1 ≈ Rm .
▶ We will use [A]ij or Aij for the coefficient (i, j) of A ;
▶ Let ai⊤ be the i-th row of A ; and aj be the j-th column of A.
▶ The following notations for A are equivalent :
([A]ij)_{1≤i≤m, 1≤j≤n} ,   the stacked rows (a1⊤ ; . . . ; am⊤ ),   or the columns [a1 , . . . , an ].

Let A = [a1 , a2 , . . . , an ] ∈ Rm×n ; the span of A is defined by

span(A) = { a = Σ_{i=1}^{n} αi ai | αi ∈ R ∀i } .

The span of A is a subspace of dimension ≤ min{m, n}.


5/25
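The dimension of span(A) can be checked numerically as the rank of A; a sketch (assuming NumPy) with an arbitrary 4 × 3 matrix whose third column depends on the first two:

```python
import numpy as np

a1 = np.array([1.0, 0.0, 2.0, -1.0])
a2 = np.array([0.0, 1.0, 1.0,  3.0])
a3 = 2.0 * a1 - a2                      # linearly dependent column
A  = np.column_stack([a1, a2, a3])      # A = [a1, a2, a3] in R^{4x3}

# dim(span(A)) = rank(A) <= min{m, n}
print(np.linalg.matrix_rank(A))         # 2
print(min(A.shape))                     # 3
```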

Product, Transpose and Symmetry

Let A ∈ Rm×p and B ∈ Rp×n ; the matrix product of A and B is the matrix AB ∈ Rm×n defined by

(AB)ij = Σ_{k=1}^{p} Aik Bkj .

Let A ∈ Rm×n ; the transpose of A, denoted A⊤ , is the real matrix with n rows and m columns such that

∀i = 1, . . . , n, ∀j = 1, . . . , m, [A⊤ ]ij = [A]ji .

For square matrices


▶ A⊤ ∈ Rn×n .
▶ A is said to be symmetric if A = A⊤ (e.g., diagonal matrices,
covariance matrices).

6/25
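A sketch checking the entrywise product formula, the transpose rule and a symmetry test; the matrices are random and serve only as an illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # A in R^{3x4}
B = rng.standard_normal((4, 2))   # B in R^{4x2}

# (AB)_{ij} = sum_k A_{ik} B_{kj}
i, j = 1, 0
assert np.isclose((A @ B)[i, j], sum(A[i, k] * B[k, j] for k in range(4)))

# [A^T]_{ij} = [A]_{ji}
assert np.isclose(A.T[2, 1], A[1, 2])

# A symmetric example: S = C + C^T satisfies S = S^T
C = rng.standard_normal((3, 3))
S = C + C.T
assert np.allclose(S, S.T)
```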

Matrix norms

Definition
Given an m by n real matrix A, a norm of A, ∥A∥∗ , is a nonnegative real number satisfying :
▶ (Definiteness) : ∥A∥∗ ≥ 0 and ∥A∥∗ = 0 if and only if A = 0.
▶ (Homogeneity) : For any real number α, ∥αA∥∗ = |α|∥A∥∗ .
▶ (The triangle inequality) : For all A, B, ∥A + B∥∗ ≤ ∥A∥∗ + ∥B∥∗ .
▶ (The compatibility, if m = n) : For all A, B, ∥AB∥∗ ≤ ∥A∥∗ ∥B∥∗ .

Example (Important norms)


▶ The induced matrix p-norm of A is defined by ∥A∥p = max_{x≠0} ∥Ax∥p / ∥x∥p .
▶ The Frobenius norm of A is given by ∥A∥F = ( Σ_{i=1}^{m} Σ_{j=1}^{n} Aij² )^{1/2} .
▶ Exercise : Prove these claims.

7/25
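Both example norms are available in NumPy; a sketch comparing the induced 2-norm (the largest singular value) with the Frobenius norm on a random matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))

# Induced 2-norm: max_{x != 0} ||Ax||_2 / ||x||_2 = largest singular value
two_norm = np.linalg.norm(A, 2)
assert np.isclose(two_norm, np.linalg.svd(A, compute_uv=False)[0])

# Frobenius norm: square root of the sum of squared entries
fro = np.linalg.norm(A, 'fro')
assert np.isclose(fro, np.sqrt(np.sum(A**2)))

# ||A||_2 <= ||A||_F always holds
assert two_norm <= fro + 1e-12
```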

Kernel, range and rank

Let A ∈ Rm×n .
▶ The kernel (null space) of A is the subspace given by

ker(A) := {x ∈ Rn |Ax = 0Rm }.

▶ The image space (range space, span) of A is the subspace given by

Im(A) := {y ∈ Rm |∃x ∈ Rn , y = Ax} = span(A).

▶ The rank of A, denoted rank(A), is the dimension of the image space. One has rank(A) ≤ min{m, n}.

Rank-nullity theorem
For any matrix A ∈ Rm×n , one has

dim (ker(A)) + rank(A) = n.

8/25
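The rank-nullity theorem can be verified numerically; a sketch (NumPy only) that builds a rank-deficient matrix, extracts a kernel basis from the SVD, and checks dim(ker(A)) + rank(A) = n:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r = 5, 4, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # rank r by construction

# Rank from the singular values, kernel basis from the last rows of V^T
_, s, Vt = np.linalg.svd(A)
tol = max(m, n) * np.finfo(float).eps * s[0]
rank = int(np.sum(s > tol))
kernel_basis = Vt[rank:].T                   # columns span ker(A)

assert rank == np.linalg.matrix_rank(A)
assert np.allclose(A @ kernel_basis, 0.0, atol=1e-10)
assert kernel_basis.shape[1] + rank == n     # rank-nullity
```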

Non-singularity, positivity, definiteness

Matrix inversion
A matrix A ∈ Rn×n is invertible (or non-singular) if there exists B ∈ Rn×n such that BA = AB = In , where In is the identity matrix of Rn×n .
In this case, B is called the inverse of A and is denoted by A−1 .

Positive (semi-)definiteness
A matrix A ∈ Rn×n is positive semi-definite if

∀x ∈ Rn , x⊤ Ax ≥ 0.

It is called positive definite when x⊤ Ax > 0 for every nonzero vector x.

9/25
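A sketch of both notions: inverting a (generically non-singular) random matrix, and testing positive semi-definiteness through the eigenvalues of the symmetric matrix B⊤B:

```python
import numpy as np

rng = np.random.default_rng(3)

# Inversion: a random square matrix is non-singular with probability one
A = rng.standard_normal((4, 4))
A_inv = np.linalg.inv(A)
assert np.allclose(A @ A_inv, np.eye(4), atol=1e-10)
assert np.allclose(A_inv @ A, np.eye(4), atol=1e-10)

# Positive semi-definiteness: B^T B is always positive semi-definite
B = rng.standard_normal((3, 4))
S = B.T @ B                           # symmetric 4x4, rank <= 3, so PSD but not PD
eigvals = np.linalg.eigvalsh(S)       # real eigenvalues of a symmetric matrix
assert np.all(eigvals >= -1e-10)      # all (numerically) nonnegative
x = rng.standard_normal(4)
assert x @ S @ x >= -1e-10            # x^T S x >= 0
```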

Eigenvalues and eigenvectors

Let A ∈ Rn×n , a real λ is called an eigenvalue of A if

∃ v ∈ Rn , ∥v∥ ≠ 0, Av = λv.

The vector v is then called an eigenvector of A (associated to the eigenvalue λ).
=⇒ Any symmetric matrix in Rn×n possesses n real eigenvalues.

Given two matrices A, B ∈ Rn×n , we introduce the following notations :
▶ λmin (A) (resp. λmax (A)) is the smallest (resp. largest) eigenvalue
of A ;
▶ A ⪰ B ⇐⇒ λmin (A) ≥ λmax (B).
▶ A≻B ⇐⇒ λmin (A) > λmax (B).
With these notations, A is positive semi-definite (resp. positive
definite) if and only if A ⪰ 0 (resp. A ≻ 0).

10/25
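A sketch computing the eigenpairs of a symmetric matrix with numpy.linalg.eigh, checking Av = λv and the criterion A ⪰ 0 ⇔ λmin(A) ≥ 0:

```python
import numpy as np

rng = np.random.default_rng(4)
C = rng.standard_normal((4, 4))
A = (C + C.T) / 2.0                    # symmetric => n real eigenvalues

lam, V = np.linalg.eigh(A)             # eigenvalues in increasing order, eigenvectors as columns of V
for i in range(4):
    assert np.allclose(A @ V[:, i], lam[i] * V[:, i], atol=1e-10)

lam_min, lam_max = lam[0], lam[-1]
print(lam_min, lam_max)

# A positive semi-definite example: lambda_min(C C^T) >= 0, i.e. C C^T is PSD
S = C @ C.T
assert np.linalg.eigvalsh(S)[0] >= -1e-10
```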

Spectral decomposition

Theorem
Any symmetric matrix in Rn×n possesses a spectral decomposition of
the form
A = V ΛV ⊤ ,
where
▶ V is an orthogonal matrix (i.e. V −1 = V ⊤ ) whose columns are
eigenvectors of A.
▶ Λ is a diagonal matrix with the n eigenvalues λ1 , . . . , λn of A on the diagonal.

Remark
▶ A spectral decomposition is not unique.
▶ The set of the eigenvalues of a matrix A is unique.

11/25
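A sketch of the spectral decomposition A = V Λ V⊤ obtained from numpy.linalg.eigh on a random symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(5)
C = rng.standard_normal((5, 5))
A = (C + C.T) / 2.0                    # symmetric

lam, V = np.linalg.eigh(A)             # A = V diag(lam) V^T
assert np.allclose(V @ np.diag(lam) @ V.T, A, atol=1e-10)   # reconstruction
assert np.allclose(V.T @ V, np.eye(5), atol=1e-10)          # V orthogonal: V^{-1} = V^T
```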

Singular Value Decomposition
Theorem
Any matrix in Rm×n possesses a singular value decomposition (SVD)
of the form
A = U ΣV ⊤ ,
where
▶ U ∈ Rm×m is an orthogonal matrix (i.e. U −1 = U ⊤ ) whose
columns are eigenvectors of AA⊤ .
▶ V ∈ Rn×n is an orthogonal matrix (i.e. V −1 = V ⊤ ) whose
columns are eigenvectors of A⊤ A.
▶ Σ ∈ Rm×n is a block diagonal matrix with nonnegative diagonal elements called singular values (given by the square roots of the eigenvalues of A⊤ A).

Remark
▶ The SVD is not unique.
▶ The singular values along the diagonal of Σ are usually arranged
in decreasing size, σ1 ≥ σ2 ≥ · · · ≥ σmin(m,n) ≥ 0.
12/25
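A sketch of the SVD with numpy.linalg.svd, checking the reconstruction A = U Σ V⊤, the orthogonality of U and V, and the link between singular values and the eigenvalues of A⊤A:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 3))

U, s, Vt = np.linalg.svd(A, full_matrices=True)   # U: 5x5, s: (3,), Vt: 3x3
Sigma = np.zeros((5, 3))
Sigma[:3, :3] = np.diag(s)

assert np.allclose(U @ Sigma @ Vt, A, atol=1e-10)       # A = U Sigma V^T
assert np.allclose(U.T @ U, np.eye(5), atol=1e-10)      # U orthogonal
assert np.allclose(Vt @ Vt.T, np.eye(3), atol=1e-10)    # V orthogonal

# Singular values = square roots of the eigenvalues of A^T A
eig = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
assert np.allclose(s, np.sqrt(np.clip(eig, 0.0, None)), atol=1e-8)
```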

Examples

Let A ∈ R3×2 . One has

A = U Σ V⊤ ,   U = [u1 , u2 , u3 ],   Σ = ( σ1 0 ; 0 σ2 ; 0 0 ) ∈ R3×2 ,   V⊤ = ( v1⊤ ; v2⊤ )

(rows separated by semicolons). One has also

A⊤ = V Σ⊤ U⊤ ,   V = [v1 , v2 ],   Σ⊤ = ( σ1 0 0 ; 0 σ2 0 ) ∈ R2×3 ,   U⊤ = ( u1⊤ ; u2⊤ ; u3⊤ ).

13/25

SVD in a compact form
Let r = rank(A) ≤ min(m, n), then only the first r singular values
are nonzero (σ1 ≥ σ2 ≥ · · · ≥ σr > 0). In this case, one can simplify
the SVD of A into its compact form
A = Ur Σr Vr⊤ = Σ_{i=1}^{r} σi ui vi⊤   (1)

where
▶ Ur = [u1 , . . . , ur ] ∈ Rm×r denotes the first r columns of U . The
columns of Ur form an orthonormal basis for Im(A)
(=Im(AA⊤ )).
 ⊤ 
v1
▶ Vr =  ... 
⊤ 
 ∈ Rr×n denotes the first r columns of V . The
vr⊤
columns of Vr form an orthonormal basis for Im(A⊤ )
(=Im(A⊤ A)).
▶ Σr ∈ Rr×r is a diagonal matrix composed of the positive singular values.

14/25
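A sketch of the compact SVD: for a matrix of rank r, keeping only the first r singular triplets reproduces A exactly (up to rounding):

```python
import numpy as np

rng = np.random.default_rng(7)
m, n, r = 6, 4, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # rank r by construction

U, s, Vt = np.linalg.svd(A, full_matrices=False)
Ur, Sr, Vr = U[:, :r], np.diag(s[:r]), Vt[:r].T                  # compact factors

assert np.allclose(Ur @ Sr @ Vr.T, A, atol=1e-10)                # A = Ur Sigma_r Vr^T
# Equivalently, A = sum_{i=1}^{r} sigma_i u_i v_i^T, as in (1)
A_sum = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(r))
assert np.allclose(A_sum, A, atol=1e-10)
```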

Truncated SVD (TSVD)

Let r = rank(A) ≤ min(m, n) ; then the compact SVD of A is given by Ur Σr Vr⊤ with

Ur = [u1 , . . . , ur ],   Vr = [v1 , . . . , vr ],   Σr = diag(σ1 , . . . , σr ).

For all k ≤ r, the decomposition given by Uk Σk Vk⊤ with

Uk = [u1 , . . . , uk ],   Vk = [v1 , . . . , vk ],   Σk = diag(σ1 , . . . , σk )

is called the truncated SVD, or k-SVD.

15/25
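A sketch of the truncated SVD on a random matrix; the check that the 2-norm error of the k-SVD equals σ_{k+1} is a standard property (Eckart–Young), used here only as a numerical sanity test:

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

def truncated_svd(k):
    """Rank-k approximation A_k = U_k Sigma_k V_k^T."""
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k]

for k in (1, 3, 5):
    Ak = truncated_svd(k)
    assert np.linalg.matrix_rank(Ak) == k
    assert np.isclose(np.linalg.norm(A - Ak, 2), s[k], atol=1e-10)
```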

Illustration : image compression using SVD
Consider an image of 200 × 320 pixels stored in a matrix A ∈ Rm×n (m = 200, n = 320).

[Figure : the 200 × 320 grayscale image stored in A.]
[Figure : plot of the singular values of A.]


▶ Compute the SVD of A → U, Σ, V .
▶ The rank of A is 200.
▶ Test many truncated SVDs : Ak = Uk Σk Vk⊤ for different k.
16/25

Illustration : image compression using SVD

Ak = Uk Σk Vk⊤ for k ∈ {3, 10, 20, 30, 40}.

[Figure : the original image A and its rank-k approximations A3 , A10 , A20 , A30 , A40 .]

17/25
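A sketch of the experiment on a synthetic 200 × 320 "image" (a real picture loaded into a NumPy array would be processed the same way; the test pattern below is only a placeholder):

```python
import numpy as np

# Synthetic 200x320 grayscale "image" standing in for the picture on the slide
m, n = 200, 320
yy, xx = np.mgrid[0:m, 0:n]
A = np.sin(xx / 15.0) * np.cos(yy / 10.0) \
    + 0.2 * np.random.default_rng(9).standard_normal((m, n))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

def compress(k):
    """Rank-k approximation A_k = U_k Sigma_k V_k^T and its storage cost."""
    Ak = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]
    storage = (m + n + 1) * k          # numbers kept instead of m * n
    return Ak, storage

for k in (3, 10, 20, 30, 40):
    Ak, storage = compress(k)
    rel_err = np.linalg.norm(A - Ak, 'fro') / np.linalg.norm(A, 'fro')
    print(f"k={k:2d}  storage={storage:6d} (vs {m * n})  relative error={rel_err:.3f}")
```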

Space saving using k-SVD

A ≈ Ak

U Σ V⊤ ≈ Uk Σk Vk⊤
storage : m × n numbers for A  vs  (m × k) + k + (k × n) numbers for (Uk , Σk , Vk )
i.e. mn  vs  (m + n + 1)k
200 × 320 = 64000 → (200 + 320 + 1) × 20 = 10420.

18/25

Linear systems

Definition
A linear system of m linear equations in n unknowns x1 , . . . , xn is given by

a11 x1 + a12 x2 + . . . + a1n xn = b1
a21 x1 + a22 x2 + . . . + a2n xn = b2
        ⋮
am1 x1 + am2 x2 + . . . + amn xn = bm

or, in compact form, Ax = b, where A ∈ Rm×n , x ∈ Rn and b ∈ Rm .

19/25

Rectangular linear systems

Ax = b, where A ∈ Rm×n , m ≠ n.

Is there any solution ? It depends.

▶ A unique solution in R2 :
x1 + x2 = 0
3x1 + 2x2 = 1
6x1 + 5x2 = 1

▶ Infinitely many solutions in R2 :
x1 + x2 = 0

▶ No solution :
x1 = 2
2x2 = 1
x1 + x2 = 0

20/25
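Whether Ax = b admits no, one, or infinitely many solutions can be read off from rank(A) and rank([A | b]) (Rouché–Capelli); a sketch applying this test to the three examples above:

```python
import numpy as np

def classify(A, b):
    """Solvability of Ax = b from the ranks of A and of the augmented matrix."""
    rA  = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))
    n = A.shape[1]
    if rAb > rA:
        return "no solution"
    return "unique solution" if rA == n else "infinitely many solutions"

A1 = np.array([[1.0, 1.0], [3.0, 2.0], [6.0, 5.0]]); b1 = np.array([0.0, 1.0, 1.0])
A2 = np.array([[1.0, 1.0]]);                          b2 = np.array([0.0])
A3 = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]]);  b3 = np.array([2.0, 1.0, 0.0])

print(classify(A1, b1))   # unique solution
print(classify(A2, b2))   # infinitely many solutions
print(classify(A3, b3))   # no solution
```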

The least-squares solution

Linear least-squares
A linear least-squares problem consists of finding the minimum of the function x → ∥Ax − b∥. The problem considered is of the form

min_{x∈Rn} ∥Ax − b∥

Properties
▶ This problem always has a solution (even if Ax = b has none).
▶ The solution depends on the rank of A.
▶ The SVD plays a major role.

21/25
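A sketch solving the least-squares problem for the inconsistent system of the previous slide with numpy.linalg.lstsq; the normal-equations check assumes A has full column rank, which holds here:

```python
import numpy as np

# The "no solution" system: least squares still gives a best-fit x
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = np.array([2.0, 1.0, 0.0])

x, residual, rank, sing_vals = np.linalg.lstsq(A, b, rcond=None)
print(x, residual)

# x minimizes ||Ax - b||: it satisfies the normal equations A^T A x = A^T b
assert np.allclose(A.T @ A @ x, A.T @ b, atol=1e-10)

# Any other candidate gives a residual at least as large
rng = np.random.default_rng(10)
for _ in range(5):
    x_other = x + 0.1 * rng.standard_normal(2)
    assert np.linalg.norm(A @ x - b) <= np.linalg.norm(A @ x_other - b) + 1e-12
```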

The least-squares solution

Definition
▶ For any matrix A ∈ Rm×n , one can define a linear operator that “inverts” the system Ax = b.
▶ Such an operator is called the pseudo-inverse of the matrix A, denoted by A+ .

22/25

The Pseudo-Inverse operator
Any matrix Σ ∈ Rm×n of the form

Σ = ( diag(σ1 , . . . , σr )  0 ; 0  0 ),

i.e. with σ1 , . . . , σr on the leading diagonal and zeros elsewhere, has as pseudo-inverse the matrix

Σ+ = ( diag(1/σ1 , . . . , 1/σr )  0 ; 0  0 ) ∈ Rn×m .
23/25

The Pseudo-Inverse operator

The Pseudo-inverse of A (in the general case)


Let A ∈ Rm×n and U ΣV ⊤ a given SVD for A. We call pseudo-inverse
of A the matrix
A+ := V Σ+ U ⊤ ∈ Rn×m .

▶ Other names : generalized inverse, [Moore-]Penrose inverse.


▶ The pseudo-inverse is the solution of the following equations in X :
AXA = A,   XAX = X,   (AX)⊤ = AX,   (XA)⊤ = XA.

▶ If rank(A) = m, then A+ = A⊤ (AA⊤ )−1 .


▶ If rank(A) = n, then A+ = (A⊤ A)−1 A⊤ .
▶ If rank(A) = n = m, then A+ = A−1 .

24/25
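A sketch building A⁺ from an SVD, comparing it with numpy.linalg.pinv, and checking the four defining equations and the full-column-rank special case (the random A has full column rank with probability one):

```python
import numpy as np

rng = np.random.default_rng(11)
A = rng.standard_normal((5, 3))               # full column rank (rank(A) = n) almost surely

# Pseudo-inverse from an SVD: A^+ = V Sigma^+ U^T
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_pinv = Vt.T @ np.diag(1.0 / s) @ U.T        # all singular values are nonzero here
assert np.allclose(A_pinv, np.linalg.pinv(A), atol=1e-10)

# The four defining equations with X = A^+
X = A_pinv
assert np.allclose(A @ X @ A, A, atol=1e-10)
assert np.allclose(X @ A @ X, X, atol=1e-10)
assert np.allclose((A @ X).T, A @ X, atol=1e-10)
assert np.allclose((X @ A).T, X @ A, atol=1e-10)

# rank(A) = n  =>  A^+ = (A^T A)^{-1} A^T
assert np.allclose(A_pinv, np.linalg.inv(A.T @ A) @ A.T, atol=1e-10)
```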

Least-squares and pseudo-inverse

Ax = b, where A ∈ Rm×n .

Theorem
For any b ∈ Rm , A+ b is the solution with minimal norm of the problem

min_{x∈Rn} ∥Ax − b∥

25/25
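A sketch of the theorem on an underdetermined system: A⁺b solves min_{x∈Rn} ∥Ax − b∥ and has the smallest norm among all solutions; the competing solution is obtained by adding a kernel direction:

```python
import numpy as np

rng = np.random.default_rng(12)
A = rng.standard_normal((2, 4))        # wide matrix: Ax = b has infinitely many solutions
b = rng.standard_normal(2)

x_plus = np.linalg.pinv(A) @ b         # A^+ b
assert np.allclose(A @ x_plus, b, atol=1e-10)        # an exact solution here

# Another solution: add a kernel direction (a right singular vector with zero singular value)
_, _, Vt = np.linalg.svd(A)            # full SVD: Vt is 4x4
kernel_dir = Vt[-1]                    # A @ kernel_dir ~ 0
x_other = x_plus + 0.7 * kernel_dir
assert np.allclose(A @ x_other, b, atol=1e-10)

# A^+ b has the smaller norm
assert np.linalg.norm(x_plus) <= np.linalg.norm(x_other) + 1e-12
```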
