Lecture08 Restoration Deblur
Questions
Image restoration
f(x, y) → H → + → g(x, y), with noise η(x, y) added at the summing node:
g(x, y) = H[f(x, y)] + η(x, y)
Linear vs. Non-linear
Linear, position-invariant degradation model
Sampling theorem
Linearity - additivity
Linearity - homogeneity
Space invariant
Convolution integral
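The properties listed above can be checked numerically on a minimal sketch of the degradation model. This assumes circular (periodic) convolution via the FFT and hypothetical test data; `degrade` is an illustrative name, not from the slides:

```python
import numpy as np

def degrade(f, h, eta=None):
    """Linear, space-invariant degradation: g = h * f + eta, where *
    is circular convolution implemented via the FFT.
    f: input image, h: PSF, eta: optional additive noise image."""
    H = np.fft.fft2(h, s=f.shape)            # zero-pad the PSF to image size
    g = np.real(np.fft.ifft2(H * np.fft.fft2(f)))
    return g if eta is None else g + eta

# hypothetical test data: a unit impulse and a 3x3 box PSF
f = np.zeros((8, 8)); f[0, 0] = 1.0
h = np.full((3, 3), 1 / 9)
g = degrade(f, h)                            # impulse response = the PSF
```

Additivity and homogeneity follow from the linearity of convolution, and space invariance means a shifted input produces an equally shifted output.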
PSF - Point Spread Function
Estimate the degradation
By observation
By experiment
– g(x,y) = h(x,y)*f(x,y) + η(x,y)
– G(u,v) = H(u,v)F(u,v) + N(u,v)
– H(u,v) = G(u,v)/A, where A is the strength of the imaged impulse
By mathematical modeling
– Sec. 5.6.3
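The "by experiment" bullets can be illustrated with a simulated impulse: image a point source of known strength A through the system, and the observed spectrum divided by A gives H(u,v). The PSF below is a hypothetical stand-in (in a real experiment it is unknown):

```python
import numpy as np

# Simulated "experiment": image a bright impulse of strength A through
# the system.  The observed output is A times the PSF, so
# H(u, v) = G(u, v) / A.
A = 255.0
psf = np.zeros((16, 16)); psf[:3, :3] = 1 / 9     # stand-in PSF
g_impulse = A * psf                               # observed impulse image
H_est = np.fft.fft2(g_impulse) / A                # recovered transfer function
```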
Image restoration – An ill-posed problem
Degradation model
Example
The block-circulant matrix
Inverse filter
In most images, adjacent pixels are highly correlated, while the gray levels of widely separated pixels are only loosely correlated. Therefore, the autocorrelation function of typical images generally decreases away from the origin.
The power spectrum of an image is the Fourier transform of its autocorrelation function; therefore, we can argue that the power spectrum of an image generally decreases with frequency.
Typical noise sources have either a flat power spectrum or one that decreases with frequency more slowly than typical image power spectra. Therefore, the expected situation is for the signal to dominate the spectrum at low frequencies while the noise dominates at high frequencies.
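A minimal sketch of the direct inverse filter, assuming circular convolution and hypothetical demo data; the `eps` cutoff is an illustrative safeguard for the frequencies where |H| is near zero, which is exactly where the spectrum argument above says noise dominates:

```python
import numpy as np

def inverse_filter(g, h, eps=1e-3):
    """Direct inverse filter F_hat = G / H.  Frequencies where |H| < eps
    are zeroed: there the inverse filter would divide by (near) zero and
    amplify noise without bound, which is its main practical weakness."""
    H = np.fft.fft2(h, s=g.shape)
    G = np.fft.fft2(g)
    mask = np.abs(H) > eps
    F_hat = np.where(mask, G / np.where(mask, H, 1.0), 0.0)
    return np.real(np.fft.ifft2(F_hat))

# noiseless demo on hypothetical data: blur, then invert
rng = np.random.default_rng(0)
f = rng.random((8, 8))
h = np.full((3, 3), 1 / 9)
g = np.real(np.fft.ifft2(np.fft.fft2(h, s=f.shape) * np.fft.fft2(f)))
f_hat = inverse_filter(g, h)
```

With no noise and a well-conditioned H, the recovery is essentially exact; adding even mild noise ruins it at the frequencies H suppresses.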
Wiener filter (1942)
Potential problems:
– It weights all errors equally regardless of their location in the image, while the eye is considerably more tolerant of errors in dark areas and high-gradient areas.
– In minimizing the mean square error, the Wiener filter also smooths the image more than the eye would prefer.
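A minimal sketch of the parametric Wiener filter with a constant noise-to-signal ratio K (the common simplification when the true spectra are unknown); demo data are hypothetical:

```python
import numpy as np

def wiener_deconv(g, h, K=0.01):
    """Parametric Wiener filter with a constant noise-to-signal power
    ratio K:  F_hat = conj(H) / (|H|^2 + K) * G.
    K -> 0 recovers the inverse filter; larger K suppresses frequencies
    where |H| is small instead of amplifying noise there."""
    H = np.fft.fft2(h, s=g.shape)
    G = np.fft.fft2(g)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + K) * G))

# noiseless demo on hypothetical data
rng = np.random.default_rng(0)
f = rng.random((8, 8))
h = np.full((3, 3), 1 / 9)
g = np.real(np.fft.ifft2(np.fft.fft2(h, s=f.shape) * np.fft.fft2(f)))
```

Note that the single K applies the same trade-off everywhere in the image, which is exactly the first problem listed above.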
Algebraic approach – Unconstrained restoration vs. Inverse filter
Compared to:
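The unconstrained formulation this slide compares can be written out. Minimizing the squared residual with no constraint on the estimate leads back to the inverse filter (the standard least-squares result):

```latex
\hat{f} = \arg\min_{\hat{f}} \; \lVert g - H\hat{f} \rVert^2
\;\Rightarrow\;
H^{T}(g - H\hat{f}) = 0
\;\Rightarrow\;
\hat{f} = (H^{T}H)^{-1}H^{T} g = H^{-1} g \quad (H \text{ invertible}),
```

which in the frequency domain is F̂(u,v) = G(u,v)/H(u,v), i.e., the inverse filter.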
Regularization theory
Solution formulation
u(f, g) describes how the real image data are related to the degraded data. In other words, this term models the characteristics of the imaging system.
v(f) is the regularization term, with the regularization operator v operating on the original image f and the regularization parameter used to tune the weight of the regularization term.
By adding the regularization term, the original ill-posed problem turns into a well-posed one; that is, the insertion of the regularization operator puts constraints on what f might be, which makes the solution more stable.
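A concrete instance of this formulation is constrained least squares restoration: u(f,g) = ||g - h*f||², v(f) = ||p*f||² with p a Laplacian operator. The sketch below assumes circular convolution and hypothetical demo data; the frequency-domain minimizer is closed-form:

```python
import numpy as np

def cls_restore(g, h, lam=0.01):
    """Constrained least squares: minimize ||g - h*f||^2 + lam ||p*f||^2
    with p the Laplacian kernel.  Frequency-domain solution:
    F_hat = conj(H) * G / (|H|^2 + lam |P|^2)."""
    p = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], float)
    H = np.fft.fft2(h, s=g.shape)
    P = np.fft.fft2(p, s=g.shape)
    G = np.fft.fft2(g)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + lam * np.abs(P) ** 2)))

# hypothetical demo data: random image, 3x3 box blur, no noise
rng = np.random.default_rng(0)
f = rng.random((8, 8))
h = np.full((3, 3), 1 / 9)
g = np.real(np.fft.ifft2(np.fft.fft2(h, s=f.shape) * np.fft.fft2(f)))
```

With lam = 0 this reduces to the inverse filter; increasing lam stabilizes the solution by penalizing high-frequency (rough) estimates, which is the constraint on f described above.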
MAP (maximum a-posteriori probability)
Formulate the solution from a statistical point of view: the MAP approach tries to find an estimate of the image f that maximizes the a-posteriori probability p(f|g).
– P(f) is the a-priori probability of the unknown image f; we call it the prior model.
– P(g) is the probability of g, which is a constant when g is given.
– p(g|f) is the conditional probability density function (pdf) of g; we call it the sensor model, a description of the noisy or stochastic processes that relate the original unknown image f to the measured image g.
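The Bayes step behind these bullets, written out:

```latex
\hat{f}_{\mathrm{MAP}}
= \arg\max_{f} \; p(f \mid g)
= \arg\max_{f} \; \frac{p(g \mid f)\, P(f)}{P(g)}
= \arg\max_{f} \; p(g \mid f)\, P(f)
= \arg\min_{f} \; \big[ -\log p(g \mid f) - \log P(f) \big],
```

since P(g) does not depend on f. The first term becomes the noise (sensor) term and the second the prior term of the derivation.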
MAP - Derivation
Noise term
Prior term
The noise term
The prior model
The prior model (cont’)
The prior term acts as a punishment on differences between neighboring pixels.
The prior model (cont’)
The prior model – Kernel r
Laplacian kernel
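A minimal sketch of the Laplacian kernel used as the regularization operator r, with hypothetical test images; `prior_energy` is an illustrative name for the sum-of-squares prior penalty:

```python
import numpy as np

# 4-neighbor Laplacian kernel, a common choice for the regularization
# operator r: it responds to differences between a pixel and its
# neighbors, so smooth images incur a small prior penalty.
r = np.array([[ 0, -1,  0],
              [-1,  4, -1],
              [ 0, -1,  0]], float)

def prior_energy(f, r):
    """Sum of squared Laplacian responses (circular boundary)."""
    R = np.fft.fft2(r, s=f.shape)
    rf = np.real(np.fft.ifft2(R * np.fft.fft2(f)))
    return float(np.sum(rf ** 2))

flat = np.ones((8, 8))                      # perfectly smooth image
spiky = np.ones((8, 8)); spiky[4, 4] = 5.0  # one pixel disagrees
```

A flat image gets zero penalty, while the image with an outlier pixel is punished, matching the neighborhood-difference intuition above.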
The objective function
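Putting the noise term and the prior term together gives an objective E(f) = ||g - h*f||² + lam ||r*f||², which can be minimized iteratively. This sketch assumes circular convolution, runs plain gradient descent per frequency on hypothetical data, and checks the result against the closed-form minimizer:

```python
import numpy as np

# Hypothetical demo data: random 8x8 "image" blurred by a 3x3 box PSF.
rng = np.random.default_rng(1)
f = rng.random((8, 8))
h = np.full((3, 3), 1 / 9)
r = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], float)  # Laplacian prior

H = np.fft.fft2(h, s=f.shape)
R = np.fft.fft2(r, s=f.shape)
G = H * np.fft.fft2(f)                     # noiseless degraded spectrum
lam = 0.01

# Gradient descent on E(f) = ||g - h*f||^2 + lam ||r*f||^2, carried out
# per frequency: grad = 2 conj(H) (H F - G) + 2 lam |R|^2 F.
F = G.copy()                               # start from the degraded image
for _ in range(300):
    F = F - 0.2 * (2 * np.conj(H) * (H * F - G) + 2 * lam * np.abs(R) ** 2 * F)
f_gd = np.real(np.fft.ifft2(F))

# Closed-form minimizer (constrained least squares) for comparison.
f_cls = np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + lam * np.abs(R) ** 2)))
```

The iteration converges to the same solution as the closed form; iterative schemes like this matter when the objective gains terms with no closed-form solution.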