
EENG 860 Special Topics: Digital Image Processing

Lecture 9: Image Restoration In Frequency


Domain

Dr. Ahmadreza Baghaie


Department of Electrical and Computer Engineering
New York Institute of Technology

Spring 2020

Readings: Chapter 5 (sections 5.4-5.8)


1 / 39
How to read: read, take notes and experiment!
Table of Contents

● Periodic Noise Reduction Using Frequency Domain Filtering


● Linear, Position Invariant Degradations
● Estimating the Degradation Function
● Inverse Filtering
● Minimum Mean Square Error (Wiener) Filtering
● Constrained Least Squares Filtering
● Geometric Mean Filter

2 / 39
Image Degradation/Restoration Model


Image degradation is modeled as an operator ℋ acting on an input image f(x,y), together with an additive noise term n(x,y), to generate the degraded image g(x,y).

Given g(x,y), some knowledge about ℋ, and some knowledge about the additive noise term, the objective of restoration is to obtain an estimate f̂(x,y) of the original image.
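A minimal sketch of this model in NumPy, assuming an LPI degradation implemented as frequency-domain multiplication with a Gaussian blur transfer function plus additive Gaussian noise (the test image, blur width, and noise level are illustrative choices, not taken from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 256, 256
f = np.zeros((M, N))
f[96:160, 96:160] = 1.0                      # simple test image: a bright square

# Gaussian lowpass transfer function H(u, v), built centered and then shifted
u = np.arange(M).reshape(-1, 1) - M / 2
v = np.arange(N).reshape(1, -1) - N / 2
H = np.fft.ifftshift(np.exp(-(u**2 + v**2) / (2 * 30.0**2)))

blurred = np.real(np.fft.ifft2(H * np.fft.fft2(f)))   # H[f]: the degradation operator
n = rng.normal(0.0, 0.05, size=(M, N))                # additive noise term n(x, y)
g = blurred + n                                       # degraded image g(x, y)
```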

3 / 39
Noise Models

● Sources of noise in digital images arise mainly during image acquisition and/
or transmission.
● For example, in CCD cameras, light levels and sensor temperature are major factors in the amount of noise.
● During transmission, images are affected by interference in the transmission channel.
● For example, in wireless transmission, lightning or other atmospheric disturbances can cause noise.

4 / 39
Noise Probability Distribution Functions (PDF)

● Statistical behavior of the intensity values in the noise component is


characterized by a Probability Distribution Function (PDF).
● The noise component n(x,y) is an image with the same size as the input
image.
● We simulate noise images by generating an array whose intensity values are
random numbers with a specified PDF.
● Important Noise PDFs:
– Gaussian noise
– Rayleigh noise
– Erlang (Gamma) noise
– Exponential noise
– Uniform noise
– Salt-and-pepper noise

Mean: z̄ = E(z) = ∫_{−∞}^{+∞} z p(z) dz

Variance: σ² = V(z) = ∫_{−∞}^{+∞} (z − z̄)² p(z) dz

● Salt-and-pepper noise is applied differently.
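As a rough sketch, noise fields with these PDFs can be drawn directly from NumPy's random generator (all scale, shape, and probability parameters below are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(1)
sz = (256, 256)

gaussian    = rng.normal(loc=0.0, scale=20.0, size=sz)    # Gaussian noise
rayleigh    = rng.rayleigh(scale=20.0, size=sz)           # Rayleigh noise
erlang      = rng.gamma(2.0, 10.0, size=sz)               # Erlang (Gamma), integer shape k = 2
exponential = rng.exponential(scale=15.0, size=sz)        # Exponential noise
uniform     = rng.uniform(low=-30.0, high=30.0, size=sz)  # Uniform noise

# Salt-and-pepper noise is applied differently: it replaces pixel values
# rather than being added to them.
def salt_and_pepper(img, p_salt=0.02, p_pepper=0.02):
    out = img.astype(float).copy()
    r = rng.uniform(size=img.shape)
    out[r < p_pepper] = 0.0            # pepper: set to minimum intensity
    out[r > 1.0 - p_salt] = 255.0      # salt: set to maximum intensity
    return out
```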
5 / 39
Important Noise PDFs: Examples

6 / 39
Important Noise PDFs: Examples

7 / 39
Restoration for Noise-Only Degradation – Spatial
Filtering
● Mean Filters:
– Arithmetic Mean Filter
– Geometric Mean Filter
– Harmonic Mean Filter
– Contra-Harmonic Mean Filter
● Order-Statistic Filters:
– Median Filter
– Min/Max Filter
– Midpoint Filter
– Alpha-Trimmed Mean Filter
● Adaptive Filters:
– Adaptive, Local Noise Reduction Filter
– Adaptive Median Filter
8 / 39
Periodic Noise Reduction Using Frequency Domain
Filtering
● Periodic noise can be analyzed and filtered using frequency domain filtering.
● Periodic noise appears as concentrated bursts of energy in the Fourier
domain, at locations corresponding to the frequencies of the periodic
interference.
● We saw three selective filters before:
– Bandreject
– Bandpass
– Notch
● Notch filters are used more often in image restoration.
● Remember:

sin(2π u₀ x / M + 2π v₀ y / N)  ⇔  (jMN/2) [ δ(u + u₀, v + v₀) − δ(u − u₀, v − v₀) ]

cos(2π u₀ x / M + 2π v₀ y / N)  ⇔  (MN/2) [ δ(u + u₀, v + v₀) + δ(u − u₀, v − v₀) ]
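A quick numerical check of the sine pair above (the image size and frequencies are arbitrary): the DFT of a pure 2-D sinusoid is a conjugate pair of impulses of magnitude MN/2 located at (u₀, v₀) and (−u₀, −v₀).

```python
import numpy as np

M, N = 64, 64
u0, v0 = 8, 12
x = np.arange(M).reshape(-1, 1)
y = np.arange(N).reshape(1, -1)
s = np.sin(2 * np.pi * u0 * x / M + 2 * np.pi * v0 * y / N)

S = np.fft.fft2(s)
print(np.argwhere(np.abs(S) > 1e-6))   # [[ 8 12] [56 52]]: (u0, v0) and (M-u0, N-v0)
print(abs(S[u0, v0]))                  # ~2048 = M*N/2
```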
9 / 39
Notch Filtering

● Notch reject filter transfer functions are usually created as products of highpass filter transfer functions whose centers have been translated to the centers of the notches:

H_NR(u, v) = ∏_{k=1}^{Q} H_k(u, v) H_{−k}(u, v)

where H_k(u,v) and H_{−k}(u,v) are highpass filters with centers at (u_k, v_k) and (−u_k, −v_k), respectively.
● Frequency domain functions are symmetric about the center of the frequency rectangle, so the notches are specified as symmetric pairs.
● The distance computations for the filter transfer functions, carried out with respect to the translated centers, are:

D_k(u, v) = sqrt[ (u − M/2 − u_k)² + (v − N/2 − v_k)² ]

D_{−k}(u, v) = sqrt[ (u − M/2 + u_k)² + (v − N/2 + v_k)² ]

10 / 39
Notch Filtering

● Example: a Butterworth notch reject filter of order n with three notch pairs is:
H_NR(u, v) = ∏_{k=1}^{3} [ 1 / (1 + [D_{0k} / D_k(u, v)]^n) ] [ 1 / (1 + [D_{0k} / D_{−k}(u, v)]^n) ]
● Note that D0k is the same for each pair, but can be different for different pairs.
● A notch pass filter can be constructed by:

H_NP(u, v) = 1 − H_NR(u, v)
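A sketch of building such a filter with NumPy, following the product-of-highpass-pairs form above (the image size, notch centers, radius D0, and order n are illustrative values):

```python
import numpy as np

def butterworth_notch_reject(M, N, centers, D0, n):
    """H_NR(u, v): product of Butterworth highpass pairs of order n.
    `centers` lists notch offsets (u_k, v_k) from the center of the frequency
    rectangle; the symmetric mate at (-u_k, -v_k) is included automatically.
    D0 is the notch radius (the same radius is used for every pair here)."""
    u = np.arange(M).reshape(-1, 1)
    v = np.arange(N).reshape(1, -1)
    H = np.ones((M, N))
    for (uk, vk) in centers:
        Dk  = np.sqrt((u - M/2 - uk)**2 + (v - N/2 - vk)**2)
        Dmk = np.sqrt((u - M/2 + uk)**2 + (v - N/2 + vk)**2)
        H *= 1.0 / (1.0 + (D0 / (Dk  + 1e-8))**n)    # highpass centered at ( u_k,  v_k)
        H *= 1.0 / (1.0 + (D0 / (Dmk + 1e-8))**n)    # highpass centered at (-u_k, -v_k)
    return H

H_NR = butterworth_notch_reject(512, 512, centers=[(40, 30), (-20, 60), (0, 85)], D0=9, n=4)
H_NP = 1.0 - H_NR    # the corresponding notch pass filter
```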

11 / 39
Notch Filtering - Examples

● Using an ideal notch reject filter to eliminate the sinusoidal interference.

12 / 39
Notch Filtering - Examples

● Using a rectangular notch reject filter to reduce the horizontal scan lines.

13 / 39
Optimum Notch Filtering

● In previous examples, the interference patterns are easy to identify and


characterize in the frequency domain.
● What if there are multiple interference components? How can we ensure that as little image information as possible is removed during filtering? This requires optimizing the filter parameters.
● To achieve this, we start with a notch pass filter to extract the interference pattern. In the frequency and spatial domains this is:

N(u, v) = H_NP(u, v) G(u, v)

n(x, y) = IDFT[ H_NP(u, v) G(u, v) ]

● Given the assumption of additive interference, we can estimate the interference-free image as:

f̂(x, y) = g(x, y) − n(x, y)

14 / 39
Optimum Notch Filtering

● In practice, to account for the effects of components not present in the noise estimate, we use a weighted or modulated estimate:

f̂(x, y) = g(x, y) − w(x, y) n(x, y)

● Optimum notch filtering chooses w(x,y) to minimize the local variance of the restored image.
● Assume a neighborhood S_xy of odd size m×n centered on (x,y), and assume that w is essentially constant over the neighborhood and equal to its value at the center pixel. The variance of the restored image at location (x,y) can then be written as:

σ²(x, y) = (1/mn) Σ_{(r,c)∈S_xy} [ f̂(r, c) − f̂̄ ]²

         = (1/mn) Σ_{(r,c)∈S_xy} { [ g(r, c) − w(x, y) n(r, c) ] − [ ḡ − w(x, y) n̄ ] }²

where the bar denotes the average of the corresponding quantity over the neighborhood.

15 / 39
Optimum Notch Filtering

σ²(x, y) = (1/mn) Σ_{(r,c)∈S_xy} [ f̂(r, c) − f̂̄ ]²

         = (1/mn) Σ_{(r,c)∈S_xy} { [ g(r, c) − w(x, y) n(r, c) ] − [ ḡ − w(x, y) n̄ ] }²

● To minimize the variance with respect to w(x,y), we take the derivative and set it equal to zero:

∂σ²(x, y) / ∂w(x, y) = 0

● The result is:

w(x, y) = [ (gn)‾ − ḡ n̄ ] / [ (n²)‾ − (n̄)² ]

where (gn)‾ and (n²)‾ are the neighborhood averages of the products g(r,c)·n(r,c) and n²(r,c). For every pixel (x,y), this weight is computed and then used in the restoration equation f̂(x, y) = g(x, y) − w(x, y) n(x, y) from the previous slide.
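A compact sketch of this weight computation, assuming SciPy's uniform_filter is used for the neighborhood averages (the function name, neighborhood size, and the small epsilon in the denominator are implementation choices):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def optimum_notch_restore(g, n, size=5):
    """Restore g given the interference pattern n = IDFT[H_NP * G] extracted
    with a notch pass filter, using the optimum weights w(x, y) derived above.
    `size` is the odd neighborhood dimension (m = n = size)."""
    g = g.astype(float)
    n = n.astype(float)
    g_bar  = uniform_filter(g, size)        # local mean of g over S_xy
    n_bar  = uniform_filter(n, size)        # local mean of n
    gn_bar = uniform_filter(g * n, size)    # local mean of the product g*n
    n2_bar = uniform_filter(n * n, size)    # local mean of n^2
    w = (gn_bar - g_bar * n_bar) / (n2_bar - n_bar**2 + 1e-8)
    return g - w * n                        # f_hat(x, y) = g(x, y) - w(x, y) n(x, y)
```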

16 / 39
Optimum Notch Filtering - Example

● A digital image of Martian terrain corrupted by a complex, semi-periodic interference pattern.
● The figure shows the original image, its centered and uncentered Fourier spectra, the Fourier spectrum of the interference selected by an expert, the resulting spatial interference pattern, and the restored image.

17 / 39
Linear, Position-Invariant (LPI) Degradations

● The input-output relationship can be expressed as:

g(x, y) = ℋ[f(x, y)] + n(x, y)

● ℋ is linear if it satisfies the additivity and homogeneity properties:

ℋ[ a f₁(x, y) + b f₂(x, y) ] = a ℋ[f₁(x, y)] + b ℋ[f₂(x, y)]

● We say ℋ is position-invariant if:

ℋ[ f(x − α, y − β) ] = g(x − α, y − β)

18 / 39
Linear, Position-Invariant (LPI) Degradations

● It can be proven that if ℋ is a linear, position-invariant (LPI) operator, the degradation in the spatial domain is represented by convolution:

g(x, y) = h(x, y) ⊗ f(x, y) + n(x, y)

where h(x,y) is the impulse response, or point spread function (PSF), of the degradation operation.
● In the Fourier domain we have:

G(u, v) = H(u, v) F(u, v) + N(u, v)

● The goal of image restoration is to estimate the degradation operator; here, we focus only on linear, position-invariant degradation operators.
● Note: we do not use padding for frequency domain restorations, since here we only have the degraded images. Padding is only effective if it is applied to the images before degradation!

19 / 39
Degradation Function Estimation

● Since the degradation is modeled as convolution, image restoration is


commonly called image deconvolution.
● There are three ways to estimate the degradation function:
– Image observation
– Experimentation
– Mathematical modeling
● The process of image restoration by an estimated degradation function is
commonly called blind deconvolution.
● To see how these approaches work, let's first focus on a degradation-only case, with no noise component:

G(u, v) = H(u, v) F(u, v)

20 / 39
Estimation By Image Observation

● Without any knowledge of the degradation function, we can gather information from the image itself.
● For example, if the image is blurred, we can inspect a small section containing both image features and background to estimate how edges were blurred.

g_s(x, y): observed section
f̂_s(x, y): estimate of the restored section

H_s(u, v) = G_s(u, v) / F̂_s(u, v)

● Based on the assumption of linear, position-invariant (LPI) degradation operators, we can construct a function H(u,v) on a larger scale that has the same basic shape as H_s(u,v).
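An illustrative sketch of the ratio above (the helper name is an assumption, and in practice f̂_s would be constructed by hand, e.g. by sharpening or thresholding the observed section into its expected appearance):

```python
import numpy as np

def estimate_Hs(g_s, f_s_hat, eps=1e-6):
    """Small-scale estimate of the degradation: g_s is the observed subimage,
    f_s_hat is the manually constructed estimate of its undegraded appearance."""
    return np.fft.fft2(g_s) / (np.fft.fft2(f_s_hat) + eps)   # H_s = G_s / F_s_hat
```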

21 / 39
Estimation By Experimentation

● If we have access to equipment similar to the one used to acquire the degraded image, we can produce an accurate estimate of the degradation.
● Remember that the degradation operator is also called a point spread function (PSF) or an impulse response.
● This means that if we can simulate an impulse (a small, bright dot of light) and then image it with the equipment, we can estimate the degradation function as:

H(u, v) = G(u, v) / A

where G(u,v) is the Fourier transform of the observed image and the constant A is the Fourier transform of an impulse of strength A.

22 / 39
Estimation By Mathematical Modeling

● For estimation by mathematical modeling, we take into account the physical


properties of the degradation operation.
● For example, using the physical characteristics of atmospheric turbulence, we can model the degradation in aerial photos as:

H(u, v) = e^{−k (u² + v²)^{5/6}}

● Here k is a constant that depends on the nature of the turbulence.
● Note that the degradation function is very similar to the Gaussian lowpass
filter transfer function.
● By changing k, we can simulate blurring of aerial images.
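A sketch of generating this transfer function (k ≈ 0.0025 corresponds to the severe-turbulence case used in the textbook examples; building it about the center of the frequency rectangle is a convention choice):

```python
import numpy as np

def turbulence_H(M, N, k=0.0025):
    """Atmospheric turbulence model H(u, v) = exp(-k (u^2 + v^2)^(5/6)),
    centered on the frequency rectangle."""
    u = np.arange(M).reshape(-1, 1) - M / 2
    v = np.arange(N).reshape(1, -1) - N / 2
    return np.exp(-k * (u**2 + v**2)**(5.0 / 6.0))

# Simulated blurring of an image f (frequency-domain multiplication):
# g = np.real(np.fft.ifft2(np.fft.ifftshift(turbulence_H(*f.shape)) * np.fft.fft2(f)))
```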

23 / 39
Estimation By Mathematical Modeling

24 / 39
Estimation By Mathematical Modeling

● To see the general approach for such modeling, let’s derive the equations for
modeling blurring as a result of motion between the image and sensor during
image acquisition.
● Here, we want to model an image f(x,y) undergoing planar motion, where x₀(t) and y₀(t) are the time-varying components of motion in the x and y directions.
● We assume that the shutter opens and closes instantaneously.
● The captured image at any point is the integral of the exposure over the time that the shutter is open.
● If T is the duration of the exposure, the captured image is:

g(x, y) = ∫₀^T f[ x − x₀(t), y − y₀(t) ] dt

25 / 39
Estimation By Mathematical Modeling

● To derive H(u,v), we take the continuous Fourier transform of g(x,y):

G(u, v) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x, y) e^{−j2π(ux + vy)} dx dy

        = ∫_{−∞}^{∞} ∫_{−∞}^{∞} [ ∫₀^T f[ x − x₀(t), y − y₀(t) ] dt ] e^{−j2π(ux + vy)} dx dy

        = ∫₀^T [ ∫_{−∞}^{∞} ∫_{−∞}^{∞} f[ x − x₀(t), y − y₀(t) ] e^{−j2π(ux + vy)} dx dy ] dt

        = ∫₀^T F(u, v) e^{−j2π[u x₀(t) + v y₀(t)]} dt

        = F(u, v) ∫₀^T e^{−j2π[u x₀(t) + v y₀(t)]} dt

(the third step reverses the order of integration, and the fourth uses the Fourier transform shift property)
● This means that H(u,v) is:

H(u, v) = ∫₀^T e^{−j2π[u x₀(t) + v y₀(t)]} dt
26 / 39
Estimation By Mathematical Modeling

● If the motion variables x₀(t) and y₀(t) are known, H(u,v) can be obtained from this equation.
● For example, let's assume uniform linear motion at rates x₀(t) = at/T and y₀(t) = bt/T in the x and y directions. Then we have:

H(u, v) = ∫₀^T e^{−j2π[uat/T + vbt/T]} dt = [ T / (π(ua + vb)) ] sin[ π(ua + vb) ] e^{−jπ(ua + vb)}
● The discrete filter of size MxN is generated by sampling this function for
u=0,1,2,…,M-1 and v=0,1,2,…,N-1.
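A sketch of sampling this transfer function, assuming the uncentered grid u = 0, 1, …, M−1 and v = 0, 1, …, N−1 stated above (a = b = 0.1 and T = 1 are the parameter values used in the textbook's motion blur example; the guard against division by zero is an implementation detail):

```python
import numpy as np

def motion_blur_H(M, N, a=0.1, b=0.1, T=1.0):
    """H(u, v) = [T / (pi(ua + vb))] sin[pi(ua + vb)] exp(-j pi(ua + vb))."""
    u = np.arange(M, dtype=float).reshape(-1, 1)
    v = np.arange(N, dtype=float).reshape(1, -1)
    s = np.pi * (u * a + v * b)
    s = np.where(s == 0, 1e-12, s)      # the expression tends to T as s -> 0
    return (T / s) * np.sin(s) * np.exp(-1j * s)

# Simulated motion blur of an image f:
# g = np.real(np.fft.ifft2(motion_blur_H(*f.shape) * np.fft.fft2(f)))
```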

27 / 39
Inverse Filtering

● Now the question is: how should the restoration be done?


● The simplest approach is inverse filtering: to obtain the restored image, the transform of the degraded image is divided element-wise by the estimated degradation transfer function in the frequency domain:
F̂(u, v) = G(u, v) / H(u, v)

         = [ H(u, v) F(u, v) + N(u, v) ] / H(u, v)

         = F(u, v) + N(u, v) / H(u, v)
● This seems simple enough, especially if there is no noise component.
● But what if there is noise? Then even if we know the degradation function, we cannot recover the original image exactly, since we do not know N(u,v).
● Also, if the degradation function has zero or very small values, the ratio N(u,v)/H(u,v) will dominate the term F(u,v)!
28 / 39
Inverse Filtering - Example

● One approach to reducing the effects of zero or small values is to limit the filter frequencies to values near the origin of the frequency domain, which usually has the highest concentration of amplitudes.
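A sketch of this idea, assuming the full inverse result G/H is simply attenuated by a Butterworth lowpass function about the origin to implement the cutoff (the cutoff radius D0, the order, and the small epsilon are illustrative choices; other ways of limiting the frequencies are possible):

```python
import numpy as np

def limited_inverse_filter(G, H, D0, order=10):
    """Inverse filtering restricted to frequencies near the origin. G and H
    are assumed to be centered (fftshifted) M x N arrays; the result is the
    centered estimate F_hat(u, v)."""
    M, N = G.shape
    u = np.arange(M).reshape(-1, 1) - M / 2
    v = np.arange(N).reshape(1, -1) - N / 2
    D = np.sqrt(u**2 + v**2)
    lowpass = 1.0 / (1.0 + (D / D0)**(2 * order))   # Butterworth lowpass cutoff
    return (G / (H + 1e-10)) * lowpass
```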

29 / 39
Minimum Mean Square Error (Wiener) Filtering

● The inverse filtering approach does not explicitly account for noise in the restoration process.

● The minimum mean square error (Wiener) filtering approach considers both
the image and the noise as random variables.

● It aims to find an estimate f̂ that minimizes the mean square error with respect to the uncorrupted image:

e² = E{ (f − f̂)² }

with E{·} denoting the expected value of the argument.

● It is assumed that the noise and the image are uncorrelated, that one or the other has zero mean, and that the intensity levels in the estimate are a linear function of the levels in the degraded image.
30 / 39
Minimum Mean Square Error (Wiener) Filtering

● Considering all these assumptions, the restored image is given in the frequency domain by:

F̂(u, v) = [ H*(u, v) S_f(u, v) / ( S_f(u, v) |H(u, v)|² + S_n(u, v) ) ] G(u, v)

         = [ H*(u, v) / ( |H(u, v)|² + S_n(u, v)/S_f(u, v) ) ] G(u, v)

         = [ (1/H(u, v)) · |H(u, v)|² / ( |H(u, v)|² + S_n(u, v)/S_f(u, v) ) ] G(u, v)

● In this formulation:

H*(u, v): complex conjugate of H(u, v)
|H(u, v)|² = H*(u, v) H(u, v)
S_n(u, v) = |N(u, v)|²: power spectrum of the noise
S_f(u, v) = |F(u, v)|²: power spectrum of the undegraded image

● The derivation is outside the scope of the course.

31 / 39


Minimum Mean Square Error (Wiener) Filtering

F̂(u, v) = [ (1/H(u, v)) · |H(u, v)|² / ( |H(u, v)|² + S_n(u, v)/S_f(u, v) ) ] G(u, v)

● If the noise is zero, the Wiener filter reduces to the simple inverse filter.

● The signal-to-noise ratio measures the level of the original, undegraded image power relative to the level of the noise power:

SNR = [ Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} |F̂(u, v)|² ] / [ Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} |N(u, v)|² ]   (frequency domain)

    = [ Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} [ f̂(x, y) ]² ] / [ Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} [ f(x, y) − f̂(x, y) ]² ]   (spatial domain)

● To simplify the Wiener filter:

– We assume white noise, with a constant power spectrum;

– We assume the ratio of the power spectrum of the undegraded image to the power spectrum of the noise is a constant.

32 / 39
Minimum Mean Square Error (Wiener) Filtering

● Considering the simplifying assumptions, the Wiener filter can be represented as:

F̂(u, v) = [ (1/H(u, v)) · |H(u, v)|² / ( |H(u, v)|² + K ) ] G(u, v)

with K a constant that is added to all terms of |H(u, v)|².
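A minimal sketch of this simplified Wiener filter, assuming H is available as an array of the same size as the degraded image g and sampled with the origin at (0, 0); the value of K is tuned interactively, and 0.01 is only an illustrative starting point:

```python
import numpy as np

def wiener_restore(g, H, K=0.01):
    """Simplified Wiener filter with S_n/S_f replaced by the constant K."""
    G = np.fft.fft2(g)
    F_hat = (np.conj(H) / (np.abs(H)**2 + K)) * G    # H* / (|H|^2 + K), times G
    return np.real(np.fft.ifft2(F_hat))
```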

33 / 39
Minimum Mean Square Error (Wiener) Filtering -
Examples

34 / 39
Constrained Least Squares Filtering

● Wiener filtering is capable of image restoration; however, it requires the power spectra of the undegraded image and the noise to be known!

● Constrained least squares filtering requires only the mean and variance of the noise.

● To reduce sensitivity to noise, we can impose a measure of smoothness on the restored image, for example based on the second derivative of the image, the Laplacian.

● The frequency domain solution for this approach is:

F̂(u, v) = [ H*(u, v) / ( |H(u, v)|² + γ |P(u, v)|² ) ] G(u, v)

where γ is the adjustable parameter and P(u,v) is the Fourier transform of the function:

p(x, y) = [  0  −1   0
            −1   4  −1
             0  −1   0 ]

35 / 39
Constrained Least Squares Filtering - Example

● The functions P(u,v) and H(u,v) must have the same size. If H is M×N, then p(x,y) should be embedded in the center of an M×N array of zeros.

● To preserve the even symmetry of p(x,y), M and N must be even integers. If the given degraded image is not of even dimensions, then a row and/or a column must be removed before computing H.
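A sketch that puts these pieces together, assuming an even-sized degraded image g and a transfer function H of the same size sampled with the origin at (0, 0); since only |P(u, v)|² enters the formula, the position of p(x, y) inside the padded array affects only an irrelevant phase factor:

```python
import numpy as np

def cls_restore(g, H, gamma=0.001):
    """Constrained least squares filter; gamma = 0.001 is only an
    illustrative starting value for the adjustable parameter."""
    M, N = g.shape
    # Embed the 3x3 Laplacian p(x, y) in the center of an M x N array of zeros.
    p = np.zeros((M, N))
    p[M//2 - 1:M//2 + 2, N//2 - 1:N//2 + 2] = [[ 0, -1,  0],
                                               [-1,  4, -1],
                                               [ 0, -1,  0]]
    P = np.fft.fft2(p)
    G = np.fft.fft2(g)
    F_hat = (np.conj(H) / (np.abs(H)**2 + gamma * np.abs(P)**2)) * G
    return np.real(np.fft.ifft2(F_hat))
```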

36 / 39
Geometric Mean Filter

● The Wiener filter can be generalized in the form of the geometric mean filter:

F̂(u, v) = [ H*(u, v) / |H(u, v)|² ]^α [ H*(u, v) / ( |H(u, v)|² + β S_n(u, v)/S_f(u, v) ) ]^{1−α} G(u, v)

with α and β as non-negative real constants.

● When α = 1, the filter reduces to the inverse filter.

● When α = 0, the filter becomes a parametric Wiener filter, which reduces to the standard Wiener filter when β = 1.

● When α = 1/2 and β = 1, the filter is called a spectrum equalization filter.

● The geometric mean filter is useful in image restoration, since it represents a


family of filters as a single expression.
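A sketch of the geometric mean filter under the same conventions as the earlier sketches: G and H are same-size DFT arrays, and Sn_over_Sf stands for the ratio S_n/S_f (it may be approximated by a constant, in the spirit of the simplified Wiener filter):

```python
import numpy as np

def geometric_mean_filter(G, H, Sn_over_Sf, alpha=0.5, beta=1.0):
    """Geometric mean filter applied to the DFT G of the degraded image;
    alpha = 1/2 and beta = 1 give the spectrum equalization filter."""
    Hc = np.conj(H)
    H2 = np.abs(H)**2 + 1e-12                               # guard against division by zero
    inverse_part = (Hc / H2)**alpha                         # [H* / |H|^2]^alpha
    wiener_part  = (Hc / (H2 + beta * Sn_over_Sf))**(1.0 - alpha)
    return inverse_part * wiener_part * G
```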
37 / 39
What is Next?

● Fundamentals of Image Segmentation


● Point Detection
● Line Detection
● Edge Detection
● Image Thresholding

38 / 39
Questions?
[email protected]

39 / 39
