Math6015-Lecture-02
Lecture 02: Chapter 6 (Approximation and Fitting)
Jingwei Liang
Institute of Natural Sciences and School of Mathematical Sciences
Email: [email protected]
Office: Room 355, No. 6 Science Building
Regression example
Least square regression
Non-negative least square
Least-norm Approximation
Regularized Approximation
Extra Materials
Image processing
Modern regularization
Applications
Non-smooth optimization
6.1 Regression example
Least square regression, non-negative least square
Least square regression
Matrix-vector representation,
$$A = \begin{bmatrix} x_1 & 1 \\ x_2 & 1 \\ \vdots & \vdots \\ x_m & 1 \end{bmatrix}, \qquad x = \begin{bmatrix} a \\ b \end{bmatrix}, \qquad y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{bmatrix} \qquad \text{and} \qquad \varepsilon = \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_m \end{bmatrix}.$$
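A minimal numpy sketch of this construction (the data points below are made up): build A as in the display above and solve the resulting least squares problem directly.

```python
import numpy as np

# made-up data points (x_i, y_i)
xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
ys = np.array([0.10, 0.45, 0.60, 0.85, 1.10])

# A has rows [x_i, 1], exactly as in the display above
A = np.column_stack([xs, np.ones_like(xs)])

# least squares fit: minimize ||A x - y||_2 with x = (a, b)
(a, b), *_ = np.linalg.lstsq(A, ys, rcond=None)
print(f"y = {a:.2f} x + {b:+.2f}")
```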
Least square regression
If $A^\top A$ is not invertible (so the least-squares solution is not unique), the gradient iteration still converges:
$$x^{(k+1)} = x^{(k)} - \gamma_k\, A^\top\bigl(Ax^{(k)} - y\bigr) \;\longrightarrow\; x^\star,$$
where $x^\star$ is "a" solution of the problem.
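A minimal numpy sketch of this iteration (the data are made up); a safe constant step size is γ = 1/σ_max(A)², which is what the sketch uses.

```python
import numpy as np

def ls_gradient_descent(A, y, n_iter=500):
    """Iterate x_{k+1} = x_k - gamma * A^T (A x_k - y).

    Works even when A^T A is not invertible, provided the step size is
    small enough; here gamma = 1 / sigma_max(A)^2.
    """
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2     # spectral norm of A, squared
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - gamma * A.T @ (A @ x - y)
    return x

# made-up example with a rank-deficient A (so A^T A is singular)
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5)) @ np.diag([1.0, 1.0, 1.0, 0.0, 0.0])
y = rng.standard_normal(20)
print(ls_gradient_descent(A, y))
```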
Least square regression
[Figure: least squares fit of the example data, y = 0.23x − 0.08.]
Non-negative least square
[Figure: unconstrained least squares fit y = 0.23x − 0.08 versus non-negative least squares fit y = 0.21x − 0.]
Other constraints on x
For example,
$$\operatorname*{minimize}_{x}\ \ \|Ax - y\|^2 \quad \text{subject to}\ \ x \in \Omega.$$
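One generic way to handle a simple constraint set Ω is projected gradient descent: a gradient step followed by a projection onto Ω. A minimal numpy sketch for the non-negative case Ω = {x : x ⪰ 0}, where the projection is just clipping at zero (data made up):

```python
import numpy as np

def projected_gradient_nnls(A, y, n_iter=1000):
    """minimize ||Ax - y||^2 subject to x >= 0, via projected gradient."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - gamma * A.T @ (A @ x - y)   # gradient step
        x = np.maximum(x, 0.0)              # projection onto {x : x >= 0}
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 4))
y = rng.standard_normal(30)
print(projected_gradient_nnls(A, y))        # every entry is >= 0
```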
6.2 Norm Approximation
Basic norm approximation problem
The basic norm approximation problem is
$$\operatorname{minimize}\ \ \|Ax - b\|,$$
where $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$ are problem data, $x \in \mathbb{R}^n$ is the variable, and $\|\cdot\|$ is a norm on $\mathbb{R}^m$. The vector $r = Ax - b$ is called the residual.
Basic norm approximation problem
Approximation interpretation
Approximating the vector b by a linear combination of the columns of A.
In this context, the problem is called a regression problem, and the columns of A are called the regressors. The solution
$$x_1 a_1 + \cdots + x_n a_n$$
is called the regression of b.
Basic norm approximation problem
Estimation interpretation: consider the measurement model
$$y = Ax^\star + v,$$
where $x^\star$ is a vector to be estimated, $v$ is measurement error (additive white Gaussian noise), and $y$ is the measurement.
The estimation problem, or linear inverse problem, is to estimate/approximate $x^\star$ given the measurement $y$.
Let $\tilde{x}$ be an estimate of $x^\star$. To quantify the "goodness" of the approximation, one criterion is that $A\tilde{x}$ be as close to $Ax^\star$ (namely $y$) as possible. Therefore, one solves
$$\operatorname{minimize}\ \ \|Ax - y\|.$$
Basic norm approximation problem
The ℓ1-norm approximation problem, minimize $\|Ax - b\|_1 = |r_1| + \cdots + |r_m|$, is called the sum of (absolute) residuals approximation problem, or a robust (why?) estimator.
Penalty function approximation
ℓp-norm equivalence
$$\operatorname{minimize}\ \bigl(|r_1|^p + \cdots + |r_m|^p\bigr)^{1/p} \quad \Longleftrightarrow \quad \operatorname{minimize}\ |r_1|^p + \cdots + |r_m|^p.$$
Problem. Penalty function approximation problem
$$\begin{array}{ll} \operatorname{minimize} & \phi(r_1) + \cdots + \phi(r_m) \\ \text{subject to} & r = Ax - b, \end{array}$$
where $\phi : \mathbb{R} \to \mathbb{R}$ is called the (residual) penalty function.
We assume that $\phi$ is convex, so that the optimization problem is still convex. In many cases, $\phi$ is moreover symmetric, nonnegative, and satisfies $\phi(0) = 0$.
Penalty function approximation
Interpretation
Given an x, we have a residual vector r = Ax − b.
For each ri, it is penalized by ϕ.
The total penalty is ϕ(r1 ) + · · · + ϕ(rm ).
We seek x such that the total penalty is minimized.
Penalty function approximation
[Figure: penalty functions ϕ(u): log barrier, quadratic, deadzone-linear.]
Scaling the penalty function by a positive number does not affect the solution of the penalty function approximation problem.
But the shape of ϕ matters!
Example. Penalty function approximation
Four cases of ϕ:
Absolute value ϕ(u) = |u|.
Quadratic function ϕ(u) = u².
Deadzone-linear ϕ(u) = max{0, |u| − a} (no penalty for residuals smaller than a).
Log barrier (cf. the previous slide).
[Figure: histograms of the residuals r obtained with each penalty.]
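A minimal CVXPY sketch of three of these penalties on made-up data (CVXPY and its default solver are assumed to be available); it shows how the choice of ϕ changes the residual distribution.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 30
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x = cp.Variable(n)
r = A @ x - b
a = 0.5  # deadzone width (made-up value)

penalties = {
    "absolute":        cp.sum(cp.abs(r)),
    "quadratic":       cp.sum_squares(r),
    "deadzone-linear": cp.sum(cp.maximum(cp.abs(r) - a, 0)),
}
for name, total_penalty in penalties.items():
    cp.Problem(cp.Minimize(total_penalty)).solve()
    res = np.abs(r.value)
    print(f"{name:16s} max|r| = {res.max():.2f}, #(|r| <= {a}) = {(res <= a).sum()}")
```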
Sensitivity to outliers or large errors
[Figure: penalty functions ϕ(u) and their growth for large residuals, illustrating sensitivity to outliers or large errors.]
Small residuals and ℓ1 -norm approximation
[Figure: with ℓ1-norm approximation many residuals are exactly zero, i.e. several of the equations in Ax = b hold with equality.]
Approximation with constraints
It is possible to add constraints to the basic norm approximation problem. When these constraints
are convex, the resulting problem is convex.
To rule out certain unacceptable approximations of the vector b, or to ensure that the approximator
Ax satisfies certain properties.
Constraints can arise from prior knowledge of the vector x to be estimated, or from prior knowledge of the estimation error v.
Constraints also arise in a geometric setting, when determining the projection of a point b onto a set more complicated than a subspace, for example a cone or polyhedron.
Examples
Non-negativity: x ⪰ 0.
Variable bounds (box constraints): ℓ ⪯ x ⪯ u.
6.3 Least-norm Approximation
Least-norm approximation problem
The basic least-norm problem is
$$\begin{array}{ll} \operatorname{minimize} & \|x\| \\ \text{subject to} & Ax = b, \end{array}$$
where $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$ are problem data, $x \in \mathbb{R}^n$ is the variable, and $\|\cdot\|$ is a norm on $\mathbb{R}^n$. The problem is interesting only when $Ax = b$ has more than one solution.
Least-norm approximation problem
Geometric interpretation
The objective is the length of x, which is also the distance between 0 and x.
The feasible set $\{x \in \mathbb{R}^n \mid Ax = b\}$ is affine.
The least-norm problem seeks the point in the affine set with minimum distance to 0, which is the projection (definition?) of 0 onto the affine set $\{x \in \mathbb{R}^n \mid Ax = b\}$.
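For the Euclidean norm this projection has a closed form, x⋆ = Aᵀ(AAᵀ)⁻¹b when A has full row rank; a minimal numpy sketch with made-up data (np.linalg.pinv returns the same minimum-norm solution):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 8))     # wide matrix: Ax = b is underdetermined
b = rng.standard_normal(3)

# least l2-norm solution: projection of 0 onto {x : Ax = b}
x_star = A.T @ np.linalg.solve(A @ A.T, b)

assert np.allclose(x_star, np.linalg.pinv(A) @ b)        # same point
print(np.linalg.norm(x_star), np.linalg.norm(A @ x_star - b))
```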
Illustration
[Figure: the affine feasible set Ax = b and the least-norm solution, i.e. the projection of 0 onto the set.]
6.4 Regularized Approximation
Previously
Two scenarios
For the approximation problem, b contains errors and possibly b ∉ range(A); one seeks to minimize ||Ax − b||.
For the least-norm problem, b = Ax holds exactly, and one seeks to minimize ||x||.
A good approximation Ax ≈ b with small x is less sensitive to errors in A than a good approximation with large x.
Previously
[Figure: the data-fidelity term γ||Ax − b||² together with a regularization term ϕ(x).]
Bi-criterion formulation
The two objectives, a small residual ||Ax − b|| and a small ||x||, are combined in the bi-criterion problem
$$\operatorname{minimize}\ (\text{w.r.t. } \mathbb{R}^2_+) \quad \bigl(\|Ax - b\|,\ \|x\|\bigr).$$
Bi-criterion formulation
Regularization is a common scalarization method used to solve the bi-criterion problem. Through a weighted sum, we arrive at
$$\operatorname{minimize}\ \ \|Ax - b\| + \gamma\|x\|,$$
where γ > 0 is the balancing parameter.
Special case
$$\operatorname{minimize}\ \ \|Ax - b\|_2^2 + \gamma\|x\|_2^2.$$
Bi-criterion formulation
Interpretations
In an estimation setting, the extra term penalizing large ||x|| can be interpreted as our prior
knowledge that ||x|| is not too large.
In an optimal design setting, the extra term adds the cost of using large values of the design
variables to the cost of missing the target specifications.
Bi-criterion formulation
Tikhonov regularization
$$\operatorname{minimize}\ \ \|Ax - b\|_2^2 + \gamma\|x\|_2^2.$$
Or more generally
$$\operatorname{minimize}\ \ \|Ax - b\|_2^2 + \gamma\|Lx\|_2^2,$$
where L is a bounded linear mapping.
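Both problems are quadratic, so they have closed-form solutions; for the general form, x = (AᵀA + γLᵀL)⁻¹Aᵀb. A minimal numpy sketch (data made up; L is taken to be a first-difference matrix):

```python
import numpy as np

def tikhonov(A, b, L, gamma):
    """Solve minimize ||Ax - b||_2^2 + gamma ||Lx||_2^2 in closed form."""
    return np.linalg.solve(A.T @ A + gamma * L.T @ L, A.T @ b)

rng = np.random.default_rng(0)
m, n = 50, 20
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
L = np.diff(np.eye(n), axis=0)      # (n-1) x n first-difference matrix
x_hat = tikhonov(A, b, L, gamma=10.0)
print(np.linalg.norm(A @ x_hat - b), np.linalg.norm(L @ x_hat))
```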
Example. Optimal input design
Linear dynamical system (or convolution system) with impulse response (convolution kernel) h
$$y(t) = \sum_{\tau=0}^{t} h(\tau)\, u(t - \tau), \qquad t = 0, 1, \ldots, N.$$
The optimal input design problem seeks the input signal {u(0), ..., u(N)} achieving the following goals (see the sketch after this list):
Output tracking: small error $J_{\mathrm{track}} = \frac{1}{N+1}\sum_{t=0}^{N}\bigl(y(t) - y_{\mathrm{des}}(t)\bigr)^2$.
Small input (low energy): $J_{\mathrm{mag}} = \frac{1}{N+1}\sum_{t=0}^{N} u(t)^2$.
Small input variations (smoothness): $J_{\mathrm{der}} = \frac{1}{N+1}\sum_{t=0}^{N-1}\bigl(u(t+1) - u(t)\bigr)^2$.
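Since y depends linearly on u, all three terms are quadratic in u, so the weighted-sum problem J_track + δ J_der + γ J_mag is a Tikhonov-type least squares problem. A minimal numpy sketch (the impulse response, desired output, and weights are all made up):

```python
import numpy as np

N = 200
t = np.arange(N + 1)
h = 0.9 ** t * np.cos(0.3 * t)               # made-up impulse response h(t)
y_des = np.sin(2 * np.pi * t / (N + 1))      # made-up desired output

# y = H u, with H lower-triangular Toeplitz: H[t, s] = h(t - s) for s <= t
H = np.array([[h[i - j] if i >= j else 0.0 for j in range(N + 1)]
              for i in range(N + 1)])
D = np.diff(np.eye(N + 1), axis=0)           # first-difference operator

def design(delta, gamma):
    # minimize ||H u - y_des||^2 + delta ||D u||^2 + gamma ||u||^2
    M = H.T @ H + delta * D.T @ D + gamma * np.eye(N + 1)
    return np.linalg.solve(M, H.T @ y_des)

u = design(delta=0.3, gamma=0.05)
print(np.mean((H @ u - y_des) ** 2))         # tracking error J_track
```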
The input is chosen by minimizing the weighted sum $J_{\mathrm{track}} + \delta J_{\mathrm{der}} + \gamma J_{\mathrm{mag}}$.
[Figure: optimal inputs u(t) and resulting outputs y(t). Top row (δ, γ) = (0, 0.005); middle row (δ, γ) = (0, 0.05); bottom row (δ, γ) = (0.3, 0.05).]
Regularization by ℓ1 -norm
Using the ℓ1-norm $\|x\|_1$ as the regularization term is a common heuristic for obtaining a sparse solution x, i.e. one with few nonzero entries.
Regularization by ℓ1 -norm
Equality constraint: with exact measurements Ax = b, one solves
$$\operatorname{minimize}\ \ \|x\|_1 \quad \text{subject to}\ \ Ax = b.$$
Regularization by ℓ1 -norm
Noisy measurement b = Ax + ε: one solves
$$\operatorname{minimize}\ \ \|Ax - b\|_2^2 + \gamma\|x\|_1.$$
[Figure: reconstructions with ℓ2-norm regularization versus ℓ1-norm regularization.]
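A minimal CVXPY sketch of the two regularizers on made-up data (CVXPY and its default solver assumed); the ℓ1-regularized solution typically has many exactly-zero entries, while the ℓ2-regularized one does not.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # sparse x
b = A @ x_true + 0.01 * rng.standard_normal(m)

x = cp.Variable(n)
gamma = 0.1
for name, reg in [("l1", cp.norm1(x)), ("l2 squared", cp.sum_squares(x))]:
    cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b) + gamma * reg)).solve()
    print(name, "nonzeros:", int(np.sum(np.abs(x.value) > 1e-4)))
```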
Example: blind image deconvolution
Example: sparse non-negative matrix factorization
[Diagram: a data matrix factorized as the product of two non-negative (and sparse) factors.]
Signal de-noising/smoothing
Given $x_{\mathrm{cor}} = x^\star + \varepsilon$, denoising aims to find an estimate $\hat{x}$ that resembles $x^\star$ by solving the bi-criterion problem
$$\operatorname{minimize}\ \bigl(\|\hat{x} - x_{\mathrm{cor}}\|,\ \phi(\hat{x})\bigr).$$
The function $\phi : \mathbb{R}^n \to \mathbb{R}$ is convex, and is called the regularization function or smoothing objective.
[Figure: the original signal x⋆ and the corrupted signal xcor.]
Signal de-noising/smoothing
Define
$$\nabla = \begin{bmatrix} -1 & 1 & 0 & \cdots & 0 & 0 & 0 \\ 0 & -1 & 1 & \cdots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & -1 & 1 & 0 \\ 0 & 0 & 0 & \cdots & 0 & -1 & 1 \end{bmatrix} \in \mathbb{R}^{(n-1)\times n},$$
which is called the discrete gradient or finite difference operator.
Signal de-noising/smoothing
Quadratic smoothing: take
$$\phi_{\mathrm{quad}}(\hat{x}) = \sum_{i=1}^{n-1} (\hat{x}_{i+1} - \hat{x}_i)^2 = \|\nabla\hat{x}\|_2^2,$$
which gives the regularization problem
$$\operatorname{minimize}\ \ \tfrac{1}{2}\|\hat{x} - x_{\mathrm{cor}}\|_2^2 + \gamma\|\nabla\hat{x}\|_2^2.$$
Closed-form solution: $\hat{x} = (I + 2\gamma\nabla^\top\nabla)^{-1} x_{\mathrm{cor}}$.
[Figure: quadratically smoothed reconstructions for different values of γ.]
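A minimal numpy sketch of this closed form, with the finite-difference operator ∇ built exactly as on the previous slide (the signal, noise level, and γ are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x_true = np.sign(np.sin(np.arange(n) / 60.0))        # made-up piecewise signal
x_cor = x_true + 0.3 * rng.standard_normal(n)        # corrupted signal

G = np.diff(np.eye(n), axis=0)                       # discrete gradient, (n-1) x n

def quad_smooth(x_cor, G, gamma):
    """argmin 0.5 ||x - x_cor||^2 + gamma ||G x||^2 = (I + 2 gamma G^T G)^{-1} x_cor."""
    n = len(x_cor)
    return np.linalg.solve(np.eye(n) + 2 * gamma * G.T @ G, x_cor)

x_hat = quad_smooth(x_cor, G, gamma=10.0)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_cor - x_true))
```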
Signal de-noising/smoothing
1D case. [Figure: the original signal x⋆ and the corrupted signal xcor.]
Signal de-noising/smoothing
Total variation smoothing: take
$$\phi_{\mathrm{tv}}(\hat{x}) = \sum_{i=1}^{n-1} |\hat{x}_{i+1} - \hat{x}_i| = \|\nabla\hat{x}\|_1.$$
Regularization problem
$$\operatorname{minimize}\ \ \tfrac{1}{2}\|\hat{x} - x_{\mathrm{cor}}\|_2^2 + \gamma\|\nabla\hat{x}\|_1.$$
[Figure: total variation reconstructions for different values of γ.]
[Figure: for comparison, the results obtained with the quadratic penalty $\|\nabla\hat{x}\|_2^2$.]
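The total variation problem has no closed form, but it is convex; a minimal CVXPY sketch on the same kind of made-up piecewise-constant signal (cp.diff plays the role of ∇):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n = 500
x_true = np.where(np.sin(np.arange(n) / 60.0) > 0, 1.0, -1.0)   # piecewise constant
x_cor = x_true + 0.3 * rng.standard_normal(n)

x_hat = cp.Variable(n)
gamma = 2.0
objective = 0.5 * cp.sum_squares(x_hat - x_cor) + gamma * cp.norm1(cp.diff(x_hat))
cp.Problem(cp.Minimize(objective)).solve()

# unlike the quadratic penalty, the TV estimate keeps the jumps sharp
print(np.linalg.norm(x_hat.value - x_true) / np.linalg.norm(x_cor - x_true))
```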
6.5 Extra Materials
From Gaussian denoising to heat equation
Image observation
f = ů + ε,
where ε is zero mean white Gaussian noise.
From Gaussian denoising to heat equation
Gaussian denoising
$$u = G_\sigma \star f,$$
where $G_\sigma$ is the Gaussian kernel $G_\sigma(x) = \frac{1}{2\pi\sigma^2}\exp\!\left(-\frac{\|x\|^2}{2\sigma^2}\right)$.
From Gaussian denoising to heat equation
Heat equation
$$\frac{\partial}{\partial t} u(x,t) = \Delta u(x,t), \quad t > 0, \qquad u(x,0) = f,$$
with appropriate boundary conditions. Solution: $u(x,t) = G_{\sqrt{2t}} \star f$.
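A minimal scipy sketch of Gaussian denoising; by the identification above, diffusing to time t corresponds to filtering with σ = √(2t) (the test image and noise level are made up):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
clean = np.zeros((128, 128))
clean[32:96, 32:96] = 1.0                            # made-up piecewise-constant image
f = clean + 0.2 * rng.standard_normal(clean.shape)   # noisy observation

t = 2.0                                              # diffusion time
u = gaussian_filter(f, sigma=np.sqrt(2 * t))         # u(., t) = G_sqrt(2t) * f
print(np.linalg.norm(u - clean) / np.linalg.norm(f - clean))
```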
Non-linear diffusion
Linear diffusion
PDEs appear as a natural way to denoise.
Linear PDEs (or convolutions) do not preserve edges.
Heat equation:
$$\Delta u(x,t) = \operatorname{div}\bigl(\nabla u(x,t)\bigr) = -\nabla^\top\bigl(\nabla u(x,t)\bigr).$$
Goal
Diffusion along the edges.
No diffusion across the edges.
Non-linear diffusion
ϵ-regularized TV flow
$$\frac{\partial}{\partial t} u(x,t) = \operatorname{div}\!\left(\frac{1}{\sqrt{\|\nabla u\|^2 + \epsilon^2}}\,\nabla u\right).$$
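A minimal numpy sketch of one explicit Euler step of this flow; the discrete divergence below uses backward differences as an (approximate, up to boundary terms) adjoint of the forward differences, and the step size and ϵ are made up.

```python
import numpy as np

def tv_flow_step(u, dt=0.1, eps=1e-2):
    """One explicit step of du/dt = div( grad u / sqrt(||grad u||^2 + eps^2) )."""
    # forward differences with replicated border (zero normal derivative)
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
    px, py = ux / mag, uy / mag
    # divergence via backward differences
    div = (np.diff(px, axis=1, prepend=px[:, :1])
           + np.diff(py, axis=0, prepend=py[:, :1]))
    return u + dt * div

# smoothing a made-up noisy image: flat regions are smoothed, edges are kept
rng = np.random.default_rng(0)
u = np.zeros((64, 64))
u[16:48, 16:48] = 1.0
u = u + 0.2 * rng.standard_normal(u.shape)
for _ in range(50):
    u = tv_flow_step(u)
```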
Optimization
Gradient flow
$$\frac{\partial}{\partial t} u(x,t) = -\nabla E(u), \quad t > 0, \qquad u(x,0) = f.$$
Optimization
Consider $E(u) = \tfrac{1}{2}\, e(\|\nabla u\|^2)$; then
$$\nabla E(u) = \nabla^\top\bigl(e'(\|\nabla u\|^2)\,\nabla u\bigr).$$
Heat equation: $e'(\|\nabla u\|^2) = 1$.
Non-linear diffusion: $e'(\|\nabla u\|^2) = \dfrac{1}{\sqrt{1 + \|\nabla u\|^2}}$.
TV flow: $e'(\|\nabla u\|^2) = \dfrac{1}{\|\nabla u\|}$.
Optimization
Optimization problem
$$\min_{u}\ E(u).$$
Gradient descent: $u^0 = f$,
$$u^{k+1} = u^k - \gamma\,\nabla E(u^k).$$
From diffusion to regularization
Previously
To preserve edges: move from isotropic diffusion to anisotropic diffusion.
Namely, there is certain prior information about u that we want to keep.
Regularization
Promoting prior information in the solution.
Making ill-posed problems "solvable", or preventing over-fitting.
Application
Signal/image processing, compressed sensing, inverse problems
Data science, machine learning
Statistics
...
Examples: Tikhonov regularization
From now on: discrete setting...
Example. Tikhonov regularization [Tikhonov ’63]: $\|\Gamma u\|^2$.
Examples: total variation
$\|\nabla u\|_1$
Higher-order TV...
Examples: wavelet frames
Example. Wavelet decomposition [Morlet, Meyer, Mallat, Daubechies and many others]
Family of functions {ψj,k : j, k ∈ Z}
Examples: dictionary
Examples: low rank
Examples: others
Other examples
ℓ1 -norm, ℓ1,2 -norm, ℓ0 -pseudo norm, ℓp -norm, p ∈ [0, 1]
Fourier transform, discrete cosine transform
Curvelet, shearlet...
Matrix: nuclear norm, rank function
Constraints: non-negativity, simplex, box constraint
Physics laws
...
Image denoising
Mathematical formulation
f = u + ε,
where
u is the true image, which is piecewise constant (hence the total variation prior).
ε is additive noise.
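A minimal scikit-image sketch of this model (the piecewise-constant test image and noise level are made up; denoise_tv_chambolle solves a total-variation-regularized denoising problem):

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
u_true = np.zeros((128, 128))
u_true[32:96, 32:96] = 1.0                              # made-up piecewise-constant image
f = u_true + 0.2 * rng.standard_normal(u_true.shape)    # f = u + noise

u_hat = denoise_tv_chambolle(f, weight=0.2)             # TV-regularized estimate of u
print(np.linalg.norm(u_hat - u_true) / np.linalg.norm(f - u_true))
```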
Medical imaging
Mathematical formulation
f = F u + ε,
where
u is the true image, which is piecewise constant (hence the total variation prior).
F is a partial Fourier transform.
ε is additive noise.
[Figure: original image, measurement, and reconstruction.]
Video decomposition
Mathematical formulation
f = l + s + ε,
where
l is the background, which is low rank (nuclear norm).
s is the foreground, which is sparse (ℓ1-norm).
ε is additive white Gaussian noise.
[Figure: video frames decomposed as f = l + s, background plus foreground.]
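A minimal CVXPY sketch of the convex low-rank plus sparse decomposition (the frame matrix and the weight λ are made up; in practice each column of F would be a vectorized video frame, and a solver that handles the nuclear norm, e.g. SCS, is assumed):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 30, 40, 2
background = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # low rank
foreground = np.zeros((m, n))
mask = rng.random((m, n)) < 0.05
foreground[mask] = 5.0 * rng.standard_normal(mask.sum())                 # sparse
F = background + foreground

L = cp.Variable((m, n))
S = cp.Variable((m, n))
lam = 1.0 / np.sqrt(max(m, n))
cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.abs(S))),
           [L + S == F]).solve()
print("rank(l) ~", np.linalg.matrix_rank(L.value, tol=1e-3),
      " nonzeros(s) ~", int(np.sum(np.abs(S.value) > 1e-3)))
```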
Non-smooth optimization: a typical example
$$\min_{x_1, x_2, \ldots, x_r}\ F(x_1, x_2, \ldots, x_r) + \sum_{i=1}^{r} R_i(K_i x_i),$$
where
F : smooth data fidelity term...
Ri : non-smooth regularization terms...
Ki : linear/nonlinear operators...
Optimization methods
$$\min_{x_1, x_2, \ldots, x_r}\ F(x_1, x_2, \ldots, x_r) + \sum_{i=1}^{r} R_i(K_i x_i)$$
[Diagram: the building blocks available to a method: the partial maps F(x_1, ·, ..., ·), ..., F(·, ..., ·, x_r); the operators K_1, ..., K_r and their adjoints K_1^*, ..., K_r^*; and the functions R_1, ..., R_r together with R_1^*, ..., R_r^*.]
Fin