
Lecture08 Restoration Deblur

The document summarizes image restoration techniques for linear, position-invariant degradations. It discusses analyzing noise and blur in images, and modeling degradation using convolution. Image restoration is an ill-posed problem due to ill-conditioning. Approaches covered include inverse filtering, Wiener filtering, and algebraic methods like unconstrained and constrained optimization. Maximum a posteriori probability is also discussed as formulating the solution from a statistical perspective.

Uploaded by Minu Choudhary

ECE 472/572 - Digital Image Processing
Lecture 8 - Image Restoration: Linear, Position-Invariant Degradations
10/10/11
Recap
 Analyze the noise
– Type of noise
• Salt-and-pepper (SAP), Gaussian
• Periodic noise
– How to identify the type of noise?
• Test pattern
• Histogram
– How to evaluate the noise level?
• RMSE
• PSNR
 Analyze the blur
– Linear, position-invariant degradation
• Spatially invariant model
• Modeled by convolution
• The point spread function (PSF)
– How to estimate the PSF?
– Deblurring - an ill-posed problem
• Ill-conditioning of the linear system
• Understand why image restoration is an ill-posed problem and what it means conceptually
 Noise removal
– Spatial domain
• Mean filters
• Order-statistics filters
• Adaptive filters
– Frequency domain
• Band-pass/band-reject filters
• Notch filters
• Optimal notch filter
 Different restoration approaches
– Frequency domain
• Inverse filter
• Wiener filter
– Spatial domain
• Unconstrained approach
• Constrained approach
• MAP
Questions

 What is the PSF? How do we estimate it?
 What is an ill-posed problem? What is an ill-conditioned system?
 What is the inverse filter, and what is its problem?
 What is the Wiener filter, and how does it address that problem?
 Unconstrained vs. constrained approaches (572)
 What is regularization? (572)
Image restoration

Degradation model  (x, y)

f (x, y) H + g (x, y)

4
Linear vs. Non-linear

 Many types of degradation can be approximated by linear, space-invariant processes
 Non-linear and space-variant models are more accurate, but they are
– difficult to solve, or
– unsolvable
Linear, position-invariant degradation model

 Sampling theorem (sifting property of the impulse): f(x, y) = ∫∫ f(α, β) δ(x − α, y − β) dα dβ
 Linearity - additivity: H[f1(x, y) + f2(x, y)] = H[f1(x, y)] + H[f2(x, y)]
 Linearity - homogeneity: H[k f(x, y)] = k H[f(x, y)]
 Space invariance: if g(x, y) = H[f(x, y)], then H[f(x − α, y − β)] = g(x − α, y − β)
 Convolution integral: g(x, y) = ∫∫ f(α, β) h(x − α, y − β) dα dβ
PSF - Point Spread Function

 Impulse response of the system H
– Superposition integral of the first kind: g(x, y) = ∫∫ f(α, β) h(x, α, y, β) dα dβ
– Convolution integral in the space-invariant case: g(x, y) = ∫∫ f(α, β) h(x − α, y − β) dα dβ
 Point spread function (PSF)
– Term used in optics - an impulse of light becomes a point of light, so the impulse response describes how that point is spread
– Completely characterizes the linear system
Estimate the degradation

 By observation
 By experiment
– g(x, y) = h(x, y) * f(x, y) + η(x, y)
– G(u, v) = H(u, v)F(u, v) + N(u, v)
– Image a known input, e.g., a bright impulse of strength A; neglecting noise, H(u, v) = G(u, v)/A
 By mathematical modeling
– Sec. 5.6.3
Image restoration – An ill-posed problem

 Degradation model: g = Hf + η
 H is ill-conditioned, which makes the image restoration problem ill-posed
– Small perturbations in g (e.g., noise) can produce large changes in the recovered f, so the solution is not stable
Ill-conditioning

 A linear system g = Hf is ill-conditioned when the condition number of H is large: small relative perturbations in g, such as noise, can be amplified into large relative errors in the recovered f
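The effect above can be demonstrated numerically. The following is a minimal 1-D sketch, not from the slides: the signal, blur width, and noise level are invented for illustration. A circulant Gaussian blur matrix H has a huge condition number, so even tiny noise in g wrecks the naive inverse solution, while the noise-free system inverts cleanly.

```python
import numpy as np

n = 64
x = np.arange(n)
# Circulant Gaussian blur matrix H: each row is a circularly shifted kernel.
kernel = np.exp(-0.5 * ((np.arange(n) - n // 2) / 2.0) ** 2)
kernel /= kernel.sum()
H = np.stack([np.roll(kernel, k - n // 2) for k in range(n)])

f = np.sin(2 * np.pi * x / n)                 # original signal
g = H @ f                                      # blurred, noise-free
g_noisy = g + 1e-6 * np.random.default_rng(0).standard_normal(n)

f_hat = np.linalg.solve(H, g_noisy)            # naive inverse restoration

print("cond(H)          :", np.linalg.cond(H))
print("error with noise :", np.linalg.norm(f_hat - f))
print("error w/o noise  :", np.linalg.norm(np.linalg.solve(H, g) - f))
```

Noise of magnitude 1e-6 is amplified by roughly the condition number at the worst frequencies, which is exactly the instability the slide describes.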
Example

[Figure: three restoration results - a noise-free degraded image restored with the exact H; the same image with sinusoidal noise restored with the exact H; and the noise-free image restored with an inexact H]
Different restoration approaches

 Frequency domain  Algebraic approaches


– Inverse filter – Unconstrained
– Wiener (minimum optimization
mean square error) – Constrained
filter optimization
– The regularization
theory

12
The block-circulant matrix

 Stack the rows of the images f, g, and η to make MN x 1 column vectors f, g, and n (also called the lexicographic representation of the original image). Correspondingly, H becomes an MN x MN matrix
 H is called a block-circulant matrix
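The practical payoff of the (block-)circulant structure is that the DFT diagonalizes H, so products with H reduce to FFTs instead of MN x MN matrix algebra. A minimal 1-D circulant sketch (the kernel h and signal f here are made up for illustration):

```python
import numpy as np

n = 8
h = np.array([0.5, 0.3, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0])  # periodic blur kernel
# Circulant matrix: column j holds the kernel circularly shifted by j,
# so H @ f performs circular convolution h * f.
H = np.column_stack([np.roll(h, j) for j in range(n)])

f = np.arange(n, dtype=float)
direct = H @ f                                             # matrix product
via_fft = np.fft.ifft(np.fft.fft(h) * np.fft.fft(f)).real  # same result via FFT

print(np.allclose(direct, via_fft))
```

The eigenvalues of a circulant matrix are exactly the DFT coefficients of its kernel, which is why frequency-domain filters like the inverse and Wiener filters can be applied pointwise.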
Inverse filter

 In most images, adjacent pixels are highly correlated, while the gray levels of widely separated pixels are only loosely correlated.
 Therefore, the autocorrelation function of typical images generally decreases away from the origin.
 The power spectrum of an image is the Fourier transform of its autocorrelation function; therefore, we can argue that the power spectrum of an image generally decreases with frequency.
 Typical noise sources have either a flat power spectrum or one that decreases with frequency more slowly than typical image power spectra.
 Therefore, the expected situation is for the signal to dominate the spectrum at low frequencies while the noise dominates at high frequencies.
Wiener filter (1942)

 Objective function: find an estimate f̂ of f such that the mean square error between them, e² = E{(f − f̂)²}, is minimized
 Potential problems:
– It weights all errors equally regardless of their location in the image, while the eye is considerably more tolerant of errors in dark areas and high-gradient areas.
– In minimizing the mean square error, the Wiener filter also smooths the image more than the eye would prefer.
Algebraic approach – Unconstrained restoration vs. Inverse filter

 Unconstrained restoration minimizes ||g − Hf̂||²; setting the gradient to zero gives f̂ = (HᵀH)⁻¹Hᵀg, which equals H⁻¹g when H⁻¹ exists
 Compared to the inverse filter: the unconstrained solution is exactly the inverse filter, so it inherits the same instability at frequencies where H(u, v) is near zero
Algebraic approach – Constrained restoration vs. Wiener filter

 Constrained restoration minimizes ||Qf̂||² subject to ||g − Hf̂||² = ||η||², giving f̂ = (HᵀH + γQᵀQ)⁻¹Hᵀg
 Compared to the Wiener filter: with an appropriate choice of the operator Q and the parameter γ, this solution takes the same form as the Wiener filter
Regularization theory

 Generally speaking, any regularization method analyzes a related well-posed problem whose solution approximates that of the original ill-posed problem.
 The well-posedness is achieved by implementing one or more of the following basic ideas:
– restriction of the data;
– change of the space and/or topologies;
– modification of the operator itself;
– the concept of regularization operators; and
– well-posed stochastic extensions of ill-posed problems.
Solution formulation

 For g = Hf + η, the regularization method constructs the solution as

f̂ = arg min over f of { u(f, g) + α v(f) }

 u(f, g) describes how the real image data is related to the degraded data. In other words, this term models the characteristics of the imaging system.
 v(f) is the regularization term, with the regularization operator v operating on the original image f and the regularization parameter α used to tune the weight of the regularization term.
 By adding the regularization term, the original ill-posed problem turns into a well-posed one; that is, the insertion of the regularization operator puts constraints on what f might be, which makes the solution more stable.
MAP (maximum a-posteriori probability)

 Formulate the solution from a statistical point of view: the MAP approach tries to find an estimate of the image f that maximizes the a-posteriori probability p(f|g):

f̂ = arg max over f of p(f|g)

 According to Bayes' rule, p(f|g) = p(g|f) p(f) / p(g)
– p(f) is the a-priori probability of the unknown image f. We call it the prior model.
– p(g) is the probability of g, which is a constant when g is given.
– p(g|f) is the conditional probability density function (pdf) of g given f. We call it the sensor model: a description of the noisy or stochastic processes that relate the original unknown image f to the measured image g.
MAP - Derivation

 Bayesian interpretation of regularization theory: since p(g) does not depend on f, maximizing p(f|g) is equivalent to minimizing

−ln p(g|f) − ln p(f)

– −ln p(g|f): the noise term
– −ln p(f): the prior term
The noise term

 Assume Gaussian noise of zero mean and standard deviation σ; then

p(g|f) ∝ exp(−||g − Hf||² / (2σ²))

so the noise term becomes −ln p(g|f) = ||g − Hf||² / (2σ²) + constant
The prior model

 The a-priori probability of an image defined by a Gibbs distribution is

p(f) = (1/Z) exp(−U(f) / T)

– U(f) is the energy function
– T is the temperature of the model
– Z is a normalization constant
The prior model (cont’)

 U(f), the prior energy function, is usually formulated based on the smoothness property of the original image. Therefore, U(f) should measure the extent to which smoothness is violated: it sums a punishment term over the differences between neighboring pixels.
The prior model (cont’)

 The regularization parameter adjusts how smooth the restored image is
 The k-th derivative models the difference between neighboring pixels; it can also be approximated by convolution with an appropriate kernel
The prior model – Kernel r

 A common choice is the 3 x 3 Laplacian kernel:

0  1  0
1 -4  1
0  1  0
The objective function

 Combine the noise term and the prior term into a single objective and use gradient descent to solve for f
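A minimal sketch of that final step, assuming the common quadratic objective J(f) = ||g − Hf||² + α||Lf||² with a Laplacian prior operator L (the setup, weight, and step size are invented for illustration); each gradient evaluation uses FFT-based circular convolution:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 128
f = np.zeros(n); f[40:70] = 1.0                 # original signal
d = np.minimum(np.arange(n), n - np.arange(n))
h = np.exp(-0.5 * (d / 3.0) ** 2); h /= h.sum() # Gaussian blur kernel

H = np.fft.fft(h)
g = np.fft.ifft(H * np.fft.fft(f)).real + 1e-3 * rng.standard_normal(n)

lap = np.zeros(n); lap[0], lap[1], lap[-1] = -2.0, 1.0, 1.0
L = np.fft.fft(lap)                             # prior (smoothness) operator

def conv(K, x):
    """Circular convolution with spectrum K, via the FFT."""
    return np.fft.ifft(K * np.fft.fft(x)).real

alpha, tau = 1e-5, 0.4                          # prior weight and step size

def J(x):                                       # noise term + prior term
    return np.sum((conv(H, x) - g) ** 2) + alpha * np.sum(conv(L, x) ** 2)

f_hat = g.copy()                                # start from the degraded image
for _ in range(300):
    grad = 2 * conv(np.conj(H), conv(H, f_hat) - g) \
         + 2 * alpha * conv(np.conj(L), conv(L, f_hat))
    f_hat = f_hat - tau * grad                  # gradient descent update

print("J before:", J(g), " J after:", J(f_hat))
```

Because the objective is quadratic, a fixed step size below the inverse of the largest curvature guarantees the objective decreases monotonically toward the regularized solution.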
