
Digital Image Processing
Image Enhancement in the Spatial Domain
So far when we have spoken about image grey level
values we have said they are in the range [0, 255]
– Where 0 is black and 255 is white
There is no reason why we have to use this range
– The range [0,255] stems from display technologies
For many of the image processing operations in this
lecture grey levels are assumed to be given in the
range [0.0, 1.0]
What Is Image Enhancement?
Image enhancement is the process of making
images more useful
The reasons for doing this include:
– Highlighting interesting detail in images
– Removing noise from images
– Making images more visually appealing
There are two broad categories (domains) of image
enhancement techniques:
– Spatial domain techniques (image plane)
• Techniques based on direct manipulation of pixels in an image
– Frequency domain techniques
• Techniques based on modifying the Fourier transform (or wavelet transform) of an image
There are also enhancement techniques based on various
combinations of methods from these two categories.
• For the moment we will concentrate on
techniques that operate in the spatial domain
Point Processing:

• Neighborhood = 1 × 1 pixel
• g depends only on the value of f at (x, y)
• T = gray-level (or intensity, or mapping) transformation function
g(x, y) = T[f(x, y)]
s = T(r)
where
r = gray level of f(x, y)
s = gray level of g(x, y)
Point Processing (Intensity Transformation)

s(x, y) = T{r(x, y)}

where r(x, y) is the original gray level, s(x, y) is the transformed gray level, and T is the transformation function.
3 Basic Gray-Level Transformation Functions:

• Linear functions (negative and identity transformations)
• Logarithm functions (log and inverse-log transformations)
• Power-law functions (nth power and nth root transformations)
Some Intensity Transformation Functions
1. Image Negatives
• Let [0, L−1] denote the intensity levels of the image.
• The image negative is obtained by s = L − 1 − r.

Example (L = 256, so s = 255 − r): a pixel with r = 255 maps to s = 255 − 255 = 0, and a pixel with r = 240 maps to s = 255 − 240 = 15, so a bright row [255 240 240 255] becomes the dark row [0 15 15 0].
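The transformation is easy to try directly. Below is a minimal NumPy sketch of s = L − 1 − r for an 8-bit image; the array values and variable names are illustrative, not the slide's exact figure.

import numpy as np

# Hypothetical 8-bit grayscale patch (L = 256); values are illustrative.
f = np.array([[255, 240, 240, 255],
              [250, 254, 255, 243],
              [ 10,  15,  15,  10]], dtype=np.uint8)

L = 256
s = (L - 1) - f.astype(np.int32)   # s = L - 1 - r for every pixel
s = s.astype(np.uint8)

print(s[0])                        # [0 15 15 0]: bright pixels become dark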
Log Transformations
s = c log (1+r)
Where c is a constant and r ≥ 0
• The log curve maps a narrow range of low gray-level values in the input image into a wider range of output levels.

• The opposite is true for higher values of input levels.

• It is used to expand the values of dark pixels in an image while compressing the higher-level values.
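A minimal NumPy sketch of s = c log(1 + r). Here c is chosen to rescale the output to [0, 255], which is one common convention and an assumption rather than anything prescribed by the slide.

import numpy as np

# Illustrative input with a very wide dynamic range (Fourier-spectrum-like).
r = np.array([[0.0, 10.0, 100.0, 1.5e6]])

c = 255.0 / np.log(1.0 + r.max())   # scale the result back to [0, 255]
s = c * np.log(1.0 + r)

print(np.round(s, 1))               # small values are spread out, large values compressed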
Log Transformations

Example: a Fourier spectrum with values in the range 0 to 1.5 × 10^6, and the result after applying the log transformation with c = 1, giving a range of 0 to 6.2. The Fourier spectrum is a classic test case for the log transformation.
Power-Law (Gamma) Transformation

s = c r^γ, where c and γ are positive constants.

The curve bends the grayscale components either to brighten the intensities (when γ < 1) or to darken them (when γ > 1).
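A minimal NumPy sketch of the power-law transform s = c r^γ, assuming the image is first normalised to [0, 1]; the value γ = 0.5 and the test values are only an illustration.

import numpy as np

img = np.array([[0, 64, 128, 192, 255]], dtype=np.uint8)

r = img / 255.0
gamma = 0.5                    # gamma < 1 brightens, gamma > 1 darkens
c = 1.0
s = c * np.power(r, gamma)

out = np.uint8(np.clip(s * 255.0, 0, 255))
print(out)                     # mid-tones are lifted: 128 -> ~180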
Power-Law (Gamma) transformation: example results
Thresholding Function
• Thresholding assumptions:
• The object or region of interest has an intensity distribution different from the background.
• Region pixels can therefore likely be identified by intensity alone: intensity > a, intensity < b, or a < intensity < b.
Thresholding examples: a threshold that is too low, a correct threshold, and a threshold that is too high; a second example compares threshold values of 64, 96, and 128.
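A minimal NumPy sketch of intensity and band thresholding; the threshold values echo the 64/96/128 example above and are otherwise arbitrary, as is the test image.

import numpy as np

img = np.array([[20, 80, 110, 200],
                [ 5, 95, 130, 250]], dtype=np.uint8)

def threshold(image, t):
    # Binary image: 255 where intensity > t, else 0.
    return np.where(image > t, 255, 0).astype(np.uint8)

for t in (64, 96, 128):
    print(t, threshold(img, t).ravel())

# Band thresholding (a < intensity < b) keeps only a chosen intensity range.
band = ((img > 64) & (img < 128)).astype(np.uint8) * 255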
Contrast Stretching:

• Produces higher contrast than the original image
• by darkening the levels below m in the original image
• and brightening the levels above m in the original image.
Piecewise Linear Transformations
a) Contrast stretching
(a) Increases the dynamic range of the gray levels in the image.
(b) A low-contrast image: can result from poor illumination, lack of dynamic range in the imaging sensor, or even a wrong setting of the lens aperture during image acquisition.
(c) Result of contrast stretching.
(d) Result of thresholding.
b) Intensity-level slicing
Highlighting a specific range of gray levels in an image
c) Bit-plane slicing
Highlighting the contribution
made to total image
appearance by specific bits
• Suppose each pixel is
represented by 8 bits
• Higher-order bits contain
the majority of the visually
significant data
• Useful for analyzing the
relative importance played by
each bit of the image
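A minimal NumPy sketch of bit-plane slicing for an 8-bit image; keeping only the two highest-order planes in the reconstruction is an illustrative choice, as are the pixel values.

import numpy as np

img = np.array([[200, 13], [129, 64]], dtype=np.uint8)

# Plane k holds bit k of every pixel (plane 7 = most significant).
planes = [((img >> k) & 1) for k in range(8)]

# Reconstruct keeping only the two highest-order planes, which carry
# most of the visually significant data.
approx = (planes[7] << 7) + (planes[6] << 6)
print(approx)   # 200 -> 192, 13 -> 0, 129 -> 128, 64 -> 64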
Histogram Processing:

Dark image: the components of the histogram are concentrated on the low side of the gray scale.
Bright image: the components of the histogram are concentrated on the high side of the gray scale.
Low-contrast image: the histogram is narrow and centered toward the middle of the gray scale.
High-contrast image: the histogram covers a broad range of the gray scale and the distribution of pixels is approximately uniform, with very few vertical lines being much higher than the others.
Dark Image and Bright Image
Low contrast Image and High contrast Image
Histogram Transformation:

s = T(r), where 0 ≤ r ≤ 1

T(r) satisfies:
(a) T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ 1;
(b) 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1.
Histogram Equalisation
Spreading out the frequencies in an image (or equalising the image) is a simple way to improve dark or washed-out images.
The formula for histogram equalisation is

s_k = T(r_k) = Σ_{j=1}^{k} p_r(r_j) = Σ_{j=1}^{k} n_j / n

where
– r_k: input intensity
– s_k: processed intensity
– k: the intensity range (e.g. 0.0 – 1.0)
– n_j: the frequency of intensity j
– n: the sum of all frequencies
• Histogram equalization can be used to improve the visual appearance of an image.
• Histogram equalization automatically determines a transformation function that produces an output image with a near-uniform histogram.
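A minimal NumPy sketch of histogram equalisation following s_k = Σ n_j / n, scaled by (L − 1) so the result maps back to 8-bit display values; the random low-contrast test image is only an illustration.

import numpy as np

def equalise(img, L=256):
    hist = np.bincount(img.ravel(), minlength=L)     # n_j for each level j
    cdf = np.cumsum(hist) / img.size                 # running sum of p_r(r_j)
    lut = np.round((L - 1) * cdf).astype(np.uint8)   # mapping r_k -> s_k
    return lut[img]

rng = np.random.default_rng(0)
dark = rng.integers(0, 64, size=(64, 64), dtype=np.uint8)   # low-contrast image
eq = equalise(dark)
print(dark.min(), dark.max(), "->", eq.min(), eq.max())     # spread over [0, 255]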
Histogram Equalization: input image and output image.
Comparison: original image, contrast stretching, and histogram equalization.
Digital Linear Filters



Basics of Spatial Filtering - Linear
Spatial filtering operations are performed directly on the pixel intensities of an image, not on the frequency components of the image.

g(x, y) = Σ_{s=−a}^{a} Σ_{t=−b}^{b} w(s, t) f(x + s, y + t)

where a = (m − 1)/2 and b = (n − 1)/2.
Image Filtering

Spatial filtering:
• Low-pass filtering mask (image blurring, smoothing)
• Non-linear filters (order-statistics filters, median filter)
• High-pass filter (sharpening, edge enhancement)
Linear Filtering:
• Linear filtering of an image f of size M×N with a filter mask of size m×n is given by the expression

g(x, y) = Σ_{s=−a}^{a} Σ_{t=−b}^{b} w(s, t) f(x + s, y + t)

where a = (m − 1)/2 and b = (n − 1)/2.

• To generate a complete filtered image this equation must be applied for x = 0, 1, 2, …, M − 1 and y = 0, 1, 2, …, N − 1.
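A minimal NumPy sketch of the linear filtering expression above, assuming zero padding at the image border (one of several possible border treatments); the 3 × 3 box mask and test image are illustrative.

import numpy as np

def spatial_filter(f, w):
    m, n = w.shape
    a, b = (m - 1) // 2, (n - 1) // 2
    padded = np.pad(f.astype(float), ((a, a), (b, b)))   # zero padding
    g = np.zeros_like(f, dtype=float)
    M, N = f.shape
    for x in range(M):
        for y in range(N):
            region = padded[x:x + m, y:y + n]   # neighborhood centered at (x, y)
            g[x, y] = np.sum(w * region)        # sum of products w(s,t) f(x+s, y+t)
    return g

f = np.arange(25, dtype=float).reshape(5, 5)
box = np.ones((3, 3)) / 9.0                     # 3x3 averaging (box) mask
print(spatial_filter(f, box)[2, 2])             # 12.0: average of the centre block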
Smoothing Spatial Filters:
• Used for blurring and for noise reduction.
• Blurring is used in preprocessing steps, such as:
• removal of small details from an image prior to object extraction
• bridging of small gaps in lines or curves
• Noise reduction can be accomplished by blurring with a linear filter and also with a nonlinear filter.
• The output is simply the average of the pixels contained in the neighborhood of the filter mask; such filters are called averaging filters or lowpass filters.
Smoothing Spatial Filters:
• Replacing the value of every pixel in an image by the average of the gray levels in its neighborhood reduces the "sharp" transitions in gray levels.
• Sharp transitions arise from random noise in the image and from edges of objects in the image.
• Thus, smoothing can reduce noise (desirable) but also blurs edges (undesirable).
Spatial Filtering:
• Uses a filter (also called a mask, kernel, template, or window).
• The values in a filter subimage are referred to as coefficients, rather than pixels.
• Our focus will be on masks of odd sizes, e.g. 3×3, 5×5, …
• Simply move the filter mask from point to point in an image.
• At each point (x, y), the response of the filter at that point is calculated using a predefined relationship.
Basics of Spatial Filtering
The response, R, of an m × n mask at any point (x, y) is

R = Σ_{i=1}^{mn} w_i z_i

Special consideration is needed when the center of the filter approaches the border of the image.
Fundamentals of Spatial Filtering (Masking)

Portion of a digital image (pixel values):    Mask (coefficients):
z1 z2 z3                                      w1 w2 w3
z4 z5 z6                                      w4 w5 w6
z7 z8 z9                                      w7 w8 w9

The center pixel is replaced with the response
R = w1 z1 + w2 z2 + … + w9 z9
Smoothing Spatial Filtering - Linear
Averaging (Low-Pass) Filters
Smoothing filters are used for:
- noise reduction
- smoothing of false contours
- reduction of irrelevant detail

An undesirable side effect of smoothing filters is that they blur edges.

A weighted average filter reduces blurring in the smoothing process (compare the box filter with the weighted average filter).
Smoothing Spatial Filtering - Linear Averaging (Low-Pass) Filters
Example results for filter sizes n = 3, 5, 9, 15, and 35.
Smoothing Spatial Filtering - Averaging & Thresholding
Example: filter size n = 15, threshold = 25% of the highest intensity.
Nonlinear Spatial Filtering

Nonlinear spatial filters operate on neighborhoods, and the mechanics of sliding a mask past an image are the same as just outlined. In general, however, the filtering operation is based conditionally on the values of the pixels in the neighborhood under consideration, and they do not explicitly use coefficients in the sum-of-products manner described previously.

Example: computation of the median is a nonlinear operation.
Smoothing Spatial Filtering
Order-Statistic Filters
Order-statistics filters are nonlinear spatial filters whose response is based on ordering (ranking) the pixels contained in the image area encompassed by the filter, and then replacing the value of the center pixel with the value determined by the ranking result.
3 × 3 Median filter: median[10 125 125 135 141 141 144 230 240] = 141
3 × 3 Max filter: max[10 125 125 135 141 141 144 230 240] = 240
3 × 3 Min filter: min[10 125 125 135 141 141 144 230 240] = 10

The median filter eliminates isolated clusters of pixels that are light or dark with respect to their neighbors and whose area is less than n²/2.
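A minimal NumPy sketch of the 3 × 3 median, max, and min responses using the slide's sample neighborhood; applied image-wide, each output pixel would be the chosen order statistic of its own neighborhood (scipy.ndimage.median_filter is a ready-made alternative).

import numpy as np

neigh = np.array([10, 125, 125, 135, 141, 141, 144, 230, 240])

print(np.median(neigh))   # 141.0  (median filter response)
print(neigh.max())        # 240    (max filter response)
print(neigh.min())        # 10     (min filter response)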
Order-Statistic Filters
Comparison: 3 × 3 average filter vs. 3 × 3 median filter.
Gaussian Low-Pass Filter:
Original image and the result of Gaussian low-pass filtering.
Smoothing Linear Filters:
(a) original image, 500 × 500 pixels;
(b)–(f) results of smoothing with square averaging filter masks of size n = 3, 5, 9, 15, and 35, respectively.

Note: a big mask is used to eliminate small objects from an image.
Linear Low-Pass Filter:
Original image, smoothing with a 15 × 15 mask, and thresholding.

After smoothing and thresholding, what remains are the largest and brightest objects in the image.
Original image "airplane" (512 × 512 × 8):
(a) original image;
(b) 3 × 3 averaging;
(c) 5 × 5 averaging;
(d) 7 × 7 averaging;
(e) 15 × 15 averaging.
Order-Statistics Filters (Nonlinear Filters):
• The response is based on ordering (ranking) the pixels contained in the image area encompassed by the filter.
• Examples (for an n × n mask containing pixels z_k):
• median filter: R = median{z_k | k = 1, 2, …, n × n}
• max filter: R = max{z_k | k = 1, 2, …, n × n}
• min filter: R = min{z_k | k = 1, 2, …, n × n}
Median Filters:

• Replace the value of a pixel by the median of the gray levels in the neighborhood of that pixel.

• Quite popular because, for certain types of random noise (impulse noise; salt-and-pepper), they provide excellent noise-reduction capabilities with considerably less blurring than linear smoothing filters of similar size.
Median Filters:

• Force the points with distinct gray levels to be more like their neighbors.

• Isolated clusters of pixels that are light or dark with respect to their neighbors, and whose area is less than n²/2 (one-half the filter area), are eliminated by an n × n median filter.

• Eliminated = forced to take the value of the median intensity of the neighbors.

• Larger clusters are affected considerably less.
Example: Median Filters
Additive Gaussian Noise:
Result of a low-pass filter (3 × 3 mean).
Median Filter:
Result of a median filter with radius = 2.
(a) salt & pepper noise 5%; (b) salt & pepper noise 20%; (c) 3 × 3 averaging; (d) 3 × 3 averaging.
Median Filter:
Results for filter sizes 3, 5, 7, and 9.
Sharpening Spatial Filters

To highlight fine detail or to enhance detail that has been blurred, either in error or as a natural effect of image acquisition.
Linear High-Pass Filter (LHPF)

• Blurring vs. sharpening: blurring can be done in the spatial domain by pixel averaging in a neighborhood.

• Since averaging is analogous to integration, sharpening must be accomplished by spatial differentiation.

• Thus, image differentiation enhances edges and other discontinuities (including noise) and deemphasizes areas with slowly varying gray-level values.
Sharpening Spatial Filters
The principal objective of sharpening is to highlight fine detail in an image or to enhance detail that has been blurred.
(Blurring example: each output pixel is the neighborhood average (1/9) Σ_{i=1}^{9} z_i.)

The derivatives of a digital function are defined in terms of differences.
Sharpening Spatial Filters
Requirements for a digital derivative:
First derivative
1) Must be zero in flat segments
2) Must be nonzero along ramps
3) Must be nonzero at the onset of a gray-level step or ramp
Second derivative
1) Must be zero in flat segments
2) Must be zero along ramps
3) Must be nonzero at the onset and end of a gray-level step or ramp

∂f/∂x = f(x + 1) − f(x)
∂²f/∂x² = f(x + 1) + f(x − 1) − 2 f(x)
Image Sharpening
• Edge enhancement for blurred images (e.g., blur caused by camera focus, narrow bandwidth, etc.)

• Image pixel derivatives:

1st derivative: df/dx = f(x+1) − f(x)
2nd derivative: d²f/dx² = [f(x+1) − f(x)] − [f(x) − f(x−1)] = f(x+1) + f(x−1) − 2f(x)
Using Second Derivatives For Image
Enhancement
•The 2nd derivative is more useful for image
enhancement than the 1st derivative
– Stronger response to fine detail
– Simpler implementation
– We will come back to the 1st order derivative later on
•The first sharpening filter we will look at is the
Laplacian
– Isotropic
– One of the simplest sharpening filters
– We will look at a digital implementation
The Laplacian
•The Laplacian is defined as follows:

∇²f = ∂²f/∂x² + ∂²f/∂y²

•where the partial 2nd-order derivative in the x direction is defined as follows:

∂²f/∂x² = f(x + 1, y) + f(x − 1, y) − 2 f(x, y)

•and in the y direction as follows:

∂²f/∂y² = f(x, y + 1) + f(x, y − 1) − 2 f(x, y)
The Laplacian (cont…)
•So, the Laplacian can be given as follows:

∇²f = [f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1)] − 4 f(x, y)

•We can easily build a filter based on this:

0  1  0
1 -4  1
0  1  0
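A minimal NumPy sketch of the Laplacian mask above together with the sharpening step g = f − ∇²f used when the center coefficient is negative; the zero padding and the single-bright-point test image are illustrative assumptions.

import numpy as np

lap_mask = np.array([[0,  1, 0],
                     [1, -4, 1],
                     [0,  1, 0]], dtype=float)

def laplacian(f):
    p = np.pad(f.astype(float), 1)                     # zero padding at the border
    out = np.zeros_like(f, dtype=float)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            out[x, y] = np.sum(lap_mask * p[x:x + 3, y:y + 3])
    return out

f = np.zeros((5, 5)); f[2, 2] = 100.0                  # a single bright point
lap = laplacian(f)
sharpened = f - lap                                     # g = f - Laplacian (centre coeff. negative)
print(lap[2, 2], sharpened[2, 2])                       # -400.0, 500.0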
The Laplacian (cont…)
•Applying the Laplacian to an image we get a new image that highlights edges and other discontinuities.
(Figure: original image, Laplacian-filtered image, and Laplacian-filtered image scaled for display.)
Laplacian Image Enhancement

Original image − Laplacian-filtered image = sharpened image.

•In the final sharpened image, edges and fine detail are much more obvious.
Image Sharpening
(a) original f(x); (b) blurred f(x); (c) 1st derivative df/dx; (d) 2nd derivative d²f/dx²; (e) f(x) − d²f/dx².
Sharpening Spatial Filters
Comparing the response between first- and second-order derivatives:
1) First-order derivatives produce thicker edges.
2) Second-order derivatives have a stronger response to fine detail, such as thin lines and isolated points.
3) First-order derivatives generally have a stronger response to a gray-level step.
4) Second-order derivatives produce a double response at step changes in gray level.
In general the second derivative is better than the first derivative for image enhancement. The principal use of the first derivative is for edge extraction.
1st Derivative Filtering

•Implementing 1st derivative filters is difficult in practice.
•For a function f(x, y) the gradient of f at coordinates (x, y) is given as the column vector:

∇f = [Gx, Gy]^T = [∂f/∂x, ∂f/∂y]^T

1st Derivative Filtering (cont…)

•The magnitude of this vector is given by:

|∇f| = mag(∇f) = [Gx² + Gy²]^(1/2) = [(∂f/∂x)² + (∂f/∂y)²]^(1/2)

•For practical reasons this can be simplified as:

|∇f| ≈ |Gx| + |Gy|
Use of the First Derivative for Edge Extraction - Gradient
First derivatives in image processing are implemented using the magnitude of the gradient.

∇f = [∂f/∂x, ∂f/∂y]^t

|∇f| = mag(∇f) = [(∂f/∂x)² + (∂f/∂y)²]^0.5 ≈ |Gx| + |Gy|

For a 3 × 3 neighborhood with pixels z1 … z9 (z5 at the center):
Roberts operator: Gx = (z9 − z5) and Gy = (z8 − z6)
Sobel operator: Gx = (z3 + 2z6 + z9) − (z1 + 2z4 + z7) and Gy = (z7 + 2z8 + z9) − (z1 + 2z2 + z3)
Use of the First Derivative for Edge Extraction - Gradient
(Figure: Roberts operator masks and Sobel operator masks.)
Sobel Operators
•Based on the previous equations we can derive the Sobel operators:

-1 -2 -1        -1  0  1
 0  0  0        -2  0  2
 1  2  1        -1  0  1

•To filter an image, it is filtered using both operators and the results are added together.
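A minimal NumPy sketch of the Sobel operators with the |Gx| + |Gy| magnitude approximation; the 1-pixel border is simply skipped, and the step-edge test image is illustrative.

import numpy as np

sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def gradient_magnitude(f):
    M, N = f.shape
    g = np.zeros((M, N))
    for x in range(1, M - 1):
        for y in range(1, N - 1):
            region = f[x - 1:x + 2, y - 1:y + 2]
            gx = np.sum(sobel_x * region)
            gy = np.sum(sobel_y * region)
            g[x, y] = abs(gx) + abs(gy)          # |Gx| + |Gy| approximation
    return g

f = np.zeros((5, 5)); f[:, 2:] = 100.0           # vertical step edge
print(gradient_magnitude(f)[2, :])               # strong response along the edge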
Sobel Example

An image of a contact
lens which is
enhanced in order to
make defects (at four
and five o’clock in the
image) more obvious

•Sobel filters are typically used for edge detection


1st & 2nd Derivatives

•Comparing the 1st and 2nd derivatives we can


conclude the following:
– 1st order derivatives generally produce thicker edges
– 2nd order derivatives have a stronger response to
fine detail e.g. thin lines
– 1st order derivatives have stronger response to grey
level step
– 2nd order derivatives produce a double response at
step changes in grey level
Use of the First Derivative for Edge Extraction - Gradient
(Worked numerical example: gradient masks applied to a small test image.)
Use of the First Derivative for Edge Extraction - Gradient
2nd Derivative - Laplacian

∇²f = ∂²f/∂x² + ∂²f/∂y²

∂²f/∂x² = f(x + 1, y) + f(x − 1, y) − 2 f(x, y)
∂²f/∂y² = f(x, y + 1) + f(x, y − 1) − 2 f(x, y)

∇²f = [f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1) − 4 f(x, y)]
Use of the 2nd Derivative for Enhancement - Laplacian

An isotropic filter's response is independent of the direction of the discontinuities in the image to which the filter is applied.

∇²f = ∂²f/∂x² + ∂²f/∂y²
Use of the 2nd Derivative for Enhancement - Laplacian

1  1  1
1 -8  1
1  1  1

The sharpened image is formed as:
g(x, y) = f(x, y) − ∇²f(x, y)   if the center coefficient of the Laplacian mask is negative
g(x, y) = f(x, y) + ∇²f(x, y)   if the center coefficient is positive
Use of the 2nd Derivative for Enhancement - Laplacian

∇²f = [f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1) − 4 f(x, y)]

g(x, y) = f(x, y) − ∇²f(x, y), implemented by subtracting the response of the mask

0  1  0
1 -4  1
0  1  0

from the original image.
Unsharp Masking and High-boost Filtering
What is Image Restoration?
Image restoration attempts to restore images
that have been degraded
– Identify the degradation process and attempt to
reverse it
– Similar to image enhancement, but more
objective
Enhancement vs. Restoration

Enhancement: "better" visual representation; subjective; no quantitative measures.
Restoration: removes the effects of the sensing environment; objective; mathematical, model-dependent quantitative measures.
Degradation Model

The input image f(x, y) passes through the degradation function h(x, y) and the noise n(x, y) is added, giving the degraded image g(x, y).

Degradation model: g = h * f + n


A Model of Image Degradation and Restoration
The degradation process is modeled as a degradation function that, together with an additive noise term, operates on the input image:

g(x, y) = h(x, y) * f(x, y) + η(x, y)

In the frequency domain representation:

G(u, v) = H(u, v) F(u, v) + N(u, v)

where f(x, y) is the input image, g(x, y) is the degraded image, h(x, y) is the degradation function, and η(x, y) is the additive noise.
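A minimal NumPy sketch of the spatial-domain model g = h * f + η, assuming a 3 × 3 blur for h and Gaussian η; every parameter value here is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(1)

f = np.zeros((32, 32)); f[8:24, 8:24] = 1.0      # original image: a bright square
h = np.ones((3, 3)) / 9.0                         # degradation function: 3x3 blur

# Convolve f with h (zero padding), then add the noise term eta.
pf = np.pad(f, 1)
blurred = np.zeros_like(f)
for x in range(32):
    for y in range(32):
        blurred[x, y] = np.sum(h * pf[x:x + 3, y:y + 3])

eta = rng.normal(0.0, 0.05, size=f.shape)         # additive Gaussian noise
g = blurred + eta                                  # degraded image
print(g.shape, round(g.mean(), 3))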
Restoration Model

f(x, y) → degradation model → restoration filter → f̂(x, y)

Unconstrained restoration: inverse filter, pseudo-inverse filter.
Constrained restoration: Wiener filter.
Restoration Model
The objective of restoration is to obtain an estimate f̂(x, y) of the original image f(x, y).

Generally, the more we know about H and η, the closer f̂(x, y) will be to f(x, y).

The approach used throughout most of this chapter is based on various types of image restoration filters.
Degradation/Restoration Process

f(x, y) → degradation function H → + η(x, y) → g(x, y) → restoration filter(s) → f̂(x, y)
Image Restoration
• Given g(x, y), some knowledge about the degradation function H, and some information about the additive noise η(x, y),
• the objective of restoration is to obtain an estimate f̂(x, y) of the original image.
Here f(x, y) is the input image, g(x, y) the degraded image, h(x, y) the degradation function, and η(x, y) the additive noise.
Noise and Images
The sources of noise in digital
images arise during image
acquisition (digitization) and
transmission
– Imaging sensors can be affected
by ambient conditions
– Interference can be added
to an image during transmission
Noise Models
Impulse noise is also called salt-and-pepper noise.
Example with Pa = Pb = 0.05: the input image is a constant gray level of 128; in the degraded image roughly 5% of the pixels have been replaced by pepper (0) or salt (255) values.

A noise level of p = 0.05 means that approximately 5% of the pixels are contaminated by salt or pepper noise.
Noise Models
• The principal source of noise in digital images
arise during image acquisition (digitization)
and/or transmission.
• The performance of imaging sensors is
affected by a variety of factors.
• Images are corrupted during transmission due
to interference in channel
Some Important Noise Models
These noise models are commonly found (a sampling sketch follows the list):
• Gaussian noise
• Rayleigh noise
• Erlang (Gamma) noise
• Exponential noise
• Uniform noise
• Impulse (salt-and-pepper) noise
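A minimal NumPy sketch of drawing samples from these noise models; all distribution parameters are illustrative assumptions, and the impulse noise is generated by replacing a fraction of pixels rather than by sampling a PDF.

import numpy as np

rng = np.random.default_rng(0)
shape = (256, 256)

gaussian    = rng.normal(loc=0.0, scale=20.0, size=shape)
rayleigh    = rng.rayleigh(scale=20.0, size=shape)           # NumPy's (unshifted) Rayleigh
erlang      = rng.gamma(shape=2.0, scale=10.0, size=shape)   # Erlang = Gamma with integer shape b
exponential = rng.exponential(scale=10.0, size=shape)        # scale = 1/a
uniform     = rng.uniform(low=-20.0, high=20.0, size=shape)

# Impulse (salt-and-pepper): replace a fraction Pa/Pb of pixels with 0/255.
img = np.full(shape, 128, dtype=np.uint8)
mask = rng.random(shape)
noisy = img.copy()
noisy[mask < 0.05] = 0            # pepper, Pa = 0.05
noisy[mask > 0.95] = 255          # salt,   Pb = 0.05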
Gaussian Noise
• The PDF of Gaussian noise is given by

p(z) = 1 / (√(2π) σ) · e^(−(z − μ)² / (2σ²)),   −∞ < z < ∞
Rayleigh Noise
• The PDF of Rayleigh noise is given by

p(z) = (2/b)(z − a) e^(−(z − a)²/b)   for z ≥ a
p(z) = 0                              for z < a

The mean and variance are given by

μ = a + √(πb/4)   and   σ² = b(4 − π)/4
Erlang (Gamma) Noise
• The PDF of Erlang noise is given by

p(z) = a^b z^(b−1) e^(−az) / (b − 1)!   for z ≥ 0
p(z) = 0                                for z < 0

The mean and variance are given by

μ = b/a   and   σ² = b/a²
Exponential Noise
• The PDF of exponential noise is given by

p(z) = a e^(−az)   for z ≥ 0
p(z) = 0           for z < 0

The mean and variance are given by

μ = 1/a   and   σ² = 1/a²

Note: this is a special case of the Erlang PDF, with b = 1.
Uniform Noise
• The PDF of uniform noise is given by

p(z) = 1/(b − a)   if a ≤ z ≤ b
p(z) = 0           otherwise

The mean and variance are given by

μ = (a + b)/2   and   σ² = (b − a)²/12
Impulse (Salt-and-Pepper) Noise
• The PDF of (bipolar) impulse noise is given by

p(z) = Pa   for z = a
p(z) = Pb   for z = b
p(z) = 0    otherwise

(Example images: exponential noise, uniform noise, and salt-and-pepper noise.)
Restoration of Noise-Only Degradation

We can consider a noisy image to be modelled as follows:

g(x, y) = f(x, y) + η(x, y)
G(u, v) = F(u, v) + N(u, v)

where f(x, y) is the original image pixel, η(x, y) is the noise term, and g(x, y) is the resulting noisy pixel.
Restoration in the Presence of Noise
• When the only degradation present in an image is noise:

g(x, y) = f(x, y) + η(x, y)

• The noise is unknown, so subtracting it from g(x, y) is not a realistic option.
• In fact, enhancement and restoration become almost indistinguishable disciplines in this particular case.
Mean Filters
• These are the simplest methods to reduce noise in the spatial domain:
– Arithmetic mean filter
– Geometric mean filter
– Harmonic mean filter
– Contraharmonic mean filter
• Let Sxy represent the set of coordinates in a rectangular subimage window of size m × n, centered at point (x, y).
Arithmetic Mean Filter
• Computes the average value of the corrupted image g(x, y) in the area defined by Sxy.
• The value of the restored image f̂ at any point (x, y) is

f̂(x, y) = (1/(mn)) Σ_{(s,t)∈Sxy} g(s, t)

This is implemented as the simple smoothing filter; it blurs the image to remove noise.
Geometric Mean Filter
• The geometric mean filter is given by the expression

f̂(x, y) = [ Π_{(s,t)∈Sxy} g(s, t) ]^(1/(mn))
Harmonic Mean Filter
• The harmonic mean filter operation is given by the expression

f̂(x, y) = mn / Σ_{(s,t)∈Sxy} 1/g(s, t)
Contraharmonic Mean Filter
• The contraharmonic mean filter operation is given by the expression

f̂(x, y) = Σ_{(s,t)∈Sxy} g(s, t)^(Q+1) / Σ_{(s,t)∈Sxy} g(s, t)^Q

where Q is called the order of the filter. This filter is well suited for reducing or virtually eliminating the effects of salt-and-pepper noise.
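A minimal NumPy sketch of the contraharmonic mean filter; the window size, the choice Q = 1.5, and the single-pepper-pixel test image are illustrative, and positive Q is used here because it attenuates pepper noise.

import numpy as np

def contraharmonic(g, m=3, n=3, Q=1.5):
    a, b = m // 2, n // 2
    p = np.pad(g.astype(float), ((a, a), (b, b)), mode='edge')
    out = np.zeros_like(g, dtype=float)
    eps = 1e-8                                   # avoid division by zero
    for x in range(g.shape[0]):
        for y in range(g.shape[1]):
            w = p[x:x + m, y:y + n] + eps
            out[x, y] = np.sum(w ** (Q + 1)) / np.sum(w ** Q)
    return out

g = np.full((7, 7), 100.0); g[3, 3] = 0.0        # one pepper pixel
print(round(contraharmonic(g, Q=1.5)[3, 3], 1))  # close to 100: pepper removed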
Order-Statistics Filters
• Order-statistics filters are spatial filters whose response is based on ordering (ranking) the pixels contained in the image area encompassed by the filter:
– Median filter
– Max and min filter
– Midpoint filter
– Alpha-trimmed mean filter
Median Filter
• Replaces the value of a pixel by the median of the gray levels in the region Sxy of that pixel:

f̂(x, y) = median_{(s,t)∈Sxy} {g(s, t)}
Max and Min Filters
• Using the 100th percentile results in the so-called max filter, given by

f̂(x, y) = max_{(s,t)∈Sxy} {g(s, t)}

This filter is useful for finding the brightest points in an image. Since pepper noise has very low values, it is reduced by this filter as a result of the max selection process in the subimage area Sxy.
• The 0th percentile filter is the min filter:

f̂(x, y) = min_{(s,t)∈Sxy} {g(s, t)}

This filter is useful for finding the darkest points in an image. Also, it reduces salt noise as a result of the min operation.
Midpoint Filter
• The midpoint filter simply computes the midpoint between the maximum and minimum values in the area encompassed by the filter:

f̂(x, y) = ½ [ max_{(s,t)∈Sxy} {g(s, t)} + min_{(s,t)∈Sxy} {g(s, t)} ]

Note: this filter works best for randomly distributed noise, like Gaussian or uniform noise.
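A minimal NumPy sketch of the midpoint filter, assuming replicated ('edge') padding at the image border; the uniform-noise test image is illustrative.

import numpy as np

def midpoint_filter(g, m=3, n=3):
    a, b = m // 2, n // 2
    p = np.pad(g.astype(float), ((a, a), (b, b)), mode='edge')
    out = np.zeros_like(g, dtype=float)
    for x in range(g.shape[0]):
        for y in range(g.shape[1]):
            w = p[x:x + m, y:y + n]
            out[x, y] = 0.5 * (w.max() + w.min())   # midpoint of the window
    return out

rng = np.random.default_rng(2)
g = 100.0 + rng.uniform(-10, 10, size=(6, 6))       # uniform noise around 100
print(round(midpoint_filter(g)[3, 3], 1))           # close to 100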
