DIP Unit 4 Q&A
1a) Draw the degradation/restoration model in image processing and describe each part presented in it. [L1][CO4] [6M]
2a) Explain the Gaussian and Rayleigh noises with their PDF expressions. [L2][CO4] [6M]
Gaussian noise:
It is also called electronic noise because it arises in amplifiers or detectors. Gaussian noise is caused by natural sources such as the thermal vibration of atoms and the discrete nature of radiation from warm objects. Gaussian noise generally disturbs the gray values in digital images.
The PDF of a Gaussian random variable z is
p(z) = (1 / (σ√(2π))) · exp(−(z − z̄)² / (2σ²))
Mean: z̄
Standard deviation: σ
Variance: σ²
Rayleigh noise:
Rayleigh noise is present in radar range images. The probability density function of Rayleigh noise is given by
p(z) = (2/b)(z − a) · exp(−(z − a)²/b) for z ≥ a, and p(z) = 0 for z < a
Mean: z̄ = a + √(πb/4)
Variance: σ² = b(4 − π)/4
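As a rough illustration (not part of the original answer), the following Python sketch uses NumPy to corrupt a grayscale image with Gaussian and Rayleigh noise; the function names and the parameter values (sigma, scale) are illustrative choices.

```python
import numpy as np

def add_gaussian_noise(img, mean=0.0, sigma=20.0):
    """Add zero-mean Gaussian (electronic) noise with standard deviation sigma."""
    noise = np.random.normal(mean, sigma, img.shape)
    return np.clip(img.astype(float) + noise, 0, 255).astype(np.uint8)

def add_rayleigh_noise(img, scale=20.0):
    """Add Rayleigh-distributed noise (non-negative, skewed to the right)."""
    noise = np.random.rayleigh(scale, img.shape)
    return np.clip(img.astype(float) + noise, 0, 255).astype(np.uint8)

# Example usage on a synthetic flat gray image:
img = np.full((64, 64), 128, dtype=np.uint8)
noisy_gaussian = add_gaussian_noise(img)
noisy_rayleigh = add_rayleigh_noise(img)
```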
2b) Explain the Erlang and Exponential noises with their PDF expressions. [L2][CO4] [6M]
Erlang (gamma) noise:
The PDF of Erlang noise is given by
p(z) = (a^b z^(b−1) / (b − 1)!) · e^(−az) for z ≥ 0, and p(z) = 0 for z < 0
where the parameters are such that a > 0, b is a positive integer, and "!" indicates factorial. The mean and variance of this density are given by
Mean: z̄ = b/a
Variance: σ² = b/a²
Exponential noise:
The PDF of exponential noise is given by
p(z) = a·e^(−az) for z ≥ 0, and p(z) = 0 for z < 0, where a > 0
Mean: z̄ = 1/a
Variance: σ² = 1/a²
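As a sketch (assuming NumPy and a grayscale uint8 image; the rate a and shape b values are illustrative), Erlang and exponential noise can be sampled as follows.

```python
import numpy as np

def add_erlang_noise(img, a=0.1, b=2):
    """Erlang (gamma) noise with rate a > 0 and integer shape b; mean b/a, variance b/a^2."""
    noise = np.random.gamma(shape=b, scale=1.0 / a, size=img.shape)
    return np.clip(img.astype(float) + noise, 0, 255).astype(np.uint8)

def add_exponential_noise(img, a=0.05):
    """Exponential noise with rate a > 0; mean 1/a, variance 1/a^2."""
    noise = np.random.exponential(scale=1.0 / a, size=img.shape)
    return np.clip(img.astype(float) + noise, 0, 255).astype(np.uint8)
```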
3a) Explain the Uniform and Impulse noises with their PDF expressions. [L1][CO4] [6M]
Uniform noise:
The PDF of uniform noise is given by
p(z) = 1/(b − a) for a ≤ z ≤ b, and p(z) = 0 otherwise
Mean: z̄ = (a + b)/2
Variance: σ² = (b − a)²/12
The uniform density is useful as the basis for the many random number generators that are used in simulations.
Impulse (salt & pepper) noise:
Impulse noise appears as randomly scattered light and dark dots in an image. The PDF of bipolar (impulse) noise is given by
p(z) = Pa for z = a, p(z) = Pb for z = b, and p(z) = 0 otherwise
If b > a, gray level b appears as a light dot in the image, while level a appears as a dark dot.
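A minimal NumPy sketch of both models (the probabilities pa, pb and the range [a, b] are illustrative, and the salt/pepper values are fixed at 255/0 here):

```python
import numpy as np

def add_uniform_noise(img, a=-20.0, b=20.0):
    """Uniform noise in [a, b]; mean (a + b)/2, variance (b - a)^2 / 12."""
    noise = np.random.uniform(a, b, img.shape)
    return np.clip(img.astype(float) + noise, 0, 255).astype(np.uint8)

def add_salt_pepper_noise(img, pa=0.02, pb=0.02):
    """Bipolar impulse noise: pepper (dark dots) with probability pa, salt (light dots) with pb."""
    out = img.copy()
    r = np.random.rand(*img.shape)
    out[r < pa] = 0                          # dark dots (level a)
    out[(r >= pa) & (r < pa + pb)] = 255     # light dots (level b)
    return out
```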
3b) Explain the Normal and Gamma noises with their PDF expressions. [L2][CO4] [6M]
Normal (Gaussian) noise:
The normal noise PDF is the Gaussian PDF given in Q2(a) above, with mean z̄ and variance σ².
Gamma noise:
Gamma noise is generally seen in laser-based images. It obeys the gamma distribution and is represented as
p(z) = (a^b z^(b−1) / (b − 1)!) · e^(−az) for z ≥ 0, and p(z) = 0 for z < 0
where the parameters are such that a > 0, b is a positive integer, and "!" indicates factorial. The mean and variance of this density are given by
Mean: z̄ = b/a
Variance: σ² = b/a²
4a) Explain restoration in the presence of noise only using arithmetic and geometric mean filters. [L2][CO4] [6M]
When the only degradation present in an image is noise, i.e. g(x,y) = f(x,y) + η(x,y), or G(u,v) = F(u,v) + N(u,v), the noise terms are unknown, so subtracting them from g(x,y) or G(u,v) is not a realistic approach. In the case of periodic noise it is possible to estimate N(u,v) from the spectrum of G(u,v).
Then N(u,v) can be subtracted from G(u,v) to obtain an estimate of the original image. Spatial filtering can be used when only additive noise is present. The following techniques can be used to reduce the noise effect:
i) Mean Filters
(a) Arithmetic mean filter:
This is the simplest mean filter. Let Sxy represent the set of coordinates in a sub-image window of size m × n centered at point (x,y). The arithmetic mean filter computes the average value of the corrupted image g(x,y) in the area defined by Sxy.
The value of the restored image f̂ at any point (x,y) is the arithmetic mean computed using the pixels in the region defined by Sxy:
f̂(x,y) = (1/mn) Σ over (s,t) in Sxy of g(s,t)
This operation can be implemented using a convolution mask in which all coefficients have the value 1/mn. A mean filter smooths local variations in an image, and noise is reduced as a result of blurring. For every pixel in the image, the pixel value is replaced by the mean value of its neighboring pixels, which results in a smoothing effect in the image.
(b) Geometric mean filter:
An image restored using a geometric mean filter is given by the expression
f̂(x,y) = [ Π over (s,t) in Sxy of g(s,t) ]^(1/mn)
Here, each restored pixel is given by the product of the pixels in the sub-image window, raised to the power 1/mn. A geometric mean filter achieves smoothing comparable to the arithmetic mean filter, but it tends to lose less image detail in the process.
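A minimal sketch of both filters using NumPy/SciPy (assuming a grayscale image array; the window size m × n = 3 × 3 and the small eps used to avoid log(0) are illustrative choices):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def arithmetic_mean_filter(img, m=3, n=3):
    """Replace each pixel by the average of its m x n neighbourhood Sxy."""
    return uniform_filter(img.astype(float), size=(m, n))

def geometric_mean_filter(img, m=3, n=3, eps=1e-6):
    """Product of the window pixels raised to the power 1/(m*n),
    computed as exp(mean(log g)) for numerical stability."""
    g = img.astype(float) + eps              # avoid log(0)
    return np.exp(uniform_filter(np.log(g), size=(m, n)))
```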
4b) Write the expressions for the harmonic and contraharmonic mean filters and state their importance. [L1][CO4] [6M]
(c) Harmonic mean filter:
The harmonic mean filtering operation is given by the expression
f̂(x,y) = mn / Σ over (s,t) in Sxy of [1/g(s,t)]
The harmonic mean filter works well for salt noise but fails for pepper noise. It also does well with Gaussian noise.
(d) Contraharmonic mean filter:
The contraharmonic mean filtering operation yields a restored image based on the expression
f̂(x,y) = [ Σ over (s,t) in Sxy of g(s,t)^(Q+1) ] / [ Σ over (s,t) in Sxy of g(s,t)^Q ]
where Q is called the order of the filter. This filter is well suited for reducing or virtually eliminating the effects of salt-and-pepper noise. For positive values of Q, the filter eliminates pepper noise; for negative values of Q it eliminates salt noise.
It cannot do both simultaneously. Note that the contraharmonic filter reduces to the arithmetic mean filter if Q = 0, and to the harmonic mean filter if Q = −1.
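A sketch of the contraharmonic (and, as the special case Q = −1, harmonic) mean filter, assuming NumPy/SciPy and a grayscale image; note that a ratio of window means equals the ratio of window sums, so uniform_filter can be reused here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def contraharmonic_mean_filter(img, Q=1.5, m=3, n=3, eps=1e-6):
    """Contraharmonic mean of order Q over an m x n window:
    Q > 0 removes pepper noise, Q < 0 removes salt noise,
    Q = 0 is the arithmetic mean and Q = -1 the harmonic mean."""
    g = img.astype(float) + eps
    num = uniform_filter(g ** (Q + 1), size=(m, n))   # mean of g^(Q+1)
    den = uniform_filter(g ** Q, size=(m, n))         # mean of g^Q
    return num / den

def harmonic_mean_filter(img, m=3, n=3):
    """Harmonic mean filter: mn / sum(1/g); good for salt and Gaussian noise."""
    return contraharmonic_mean_filter(img, Q=-1.0, m=m, n=n)
```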
5a) Explain the method of inverse filtering for image restoration. [L2][CO4] [6M]
The simplest approach to restoration is direct inverse filtering, where an estimate F̂(u,v) of the transform of the original image is computed simply by dividing the transform of the degraded image, G(u,v), by the degradation function H(u,v).
It is expressed as
F̂(u,v) = G(u,v) / H(u,v)
The divisions are between individual elements of the functions. Since G(u,v) is given by
G(u,v) = H(u,v)F(u,v) + N(u,v)
substitution gives
F̂(u,v) = F(u,v) + N(u,v)/H(u,v)
This tells us that even if the degradation function is known, the undegraded image cannot be recovered exactly [as the inverse Fourier transform of F̂(u,v)], because N(u,v) is a random function whose Fourier transform is not known.
If the degradation function has zero or very small values, the ratio N(u,v)/H(u,v) can easily dominate the estimate F̂(u,v). One approach to get around the zero or small-value problem is to limit the filter frequencies to values near the origin.
H(0,0) is equal to the average value of h(x,y), and this is usually the highest value of H(u,v) in the frequency domain. Thus, by limiting the analysis to frequencies near the origin, the probability of encountering zero values is reduced.
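A sketch of direct inverse filtering with the cut-off described above (it assumes the degraded image g and the degradation function H are same-sized 2-D arrays, H in the usual unshifted FFT layout; the radius value and the small 1e-8 guard are illustrative):

```python
import numpy as np

def inverse_filter(g, H, radius=40):
    """Direct inverse filtering F_hat = G / H, applied only to frequencies within
    `radius` of the origin (where |H| is usually largest), so that the ratio
    N/H does not blow up where H has very small values."""
    M, N = g.shape
    G = np.fft.fftshift(np.fft.fft2(g))
    Hs = np.fft.fftshift(H)
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)       # distance from the origin
    F_hat = np.where(D <= radius, G / (Hs + 1e-8), G)    # leave G untouched outside the cut-off
    return np.real(np.fft.ifft2(np.fft.ifftshift(F_hat)))
```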
5b) Give the advantages and disadvantages of the inverse filtering. [L2][CO4] [6M]
Inverse filtering is a restoration technique for deconvolution: when the image is blurred by a known lowpass filter, it is possible to recover the image by inverse filtering or generalized inverse filtering. It is the quickest and easiest way to restore such an image.
However, inverse filtering is very sensitive to additive noise. Because the inverse filter is a form of highpass filter, it responds very badly to any noise present in the image, since noise tends to be high frequency.
Wiener filtering executes an optimal tradeoff between inverse filtering and noise smoothing. It removes the additive noise and inverts the blurring simultaneously, so it is optimal in terms of mean square error: it minimizes the overall mean square error in the combined process of inverse filtering and noise smoothing. For this reason, the Wiener filter is usually used instead of the inverse filter when noise is present.
6a) Explain the method of the least mean square filters for image restoration. [L2][CO4] [6M]
The inverse filtering approach has poor performance in the presence of noise. The Wiener (least mean square) filtering approach incorporates both the degradation function and the statistical characteristics of noise into the restoration process. The objective is to find an estimate f̂ of the uncorrupted image f such that the mean square error between them is minimized. The error measure is given by
e² = E{(f − f̂)²}
where E{·} is the expected value of the argument.
We assume that the noise and the image are uncorrelated, that one or the other has zero mean, and that the gray levels in the estimate are a linear function of the levels in the degraded image. Under these conditions, the minimum of the error function is given in the frequency domain by
F̂(u,v) = [ H*(u,v) / ( |H(u,v)|² + Sη(u,v)/Sf(u,v) ) ] · G(u,v)
where H*(u,v) is the complex conjugate of the degradation function, Sη(u,v) is the power spectrum of the noise, and Sf(u,v) is the power spectrum of the undegraded image.
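A sketch of the resulting (parametric) Wiener filter in the frequency domain; the unknown ratio of power spectra is replaced by a constant K, which is tuned by hand in this illustration (g and H are assumed to be same-sized 2-D arrays):

```python
import numpy as np

def wiener_filter(g, H, K=0.01):
    """Parametric Wiener filter: F_hat = [conj(H) / (|H|^2 + K)] * G,
    where the constant K stands in for the noise-to-signal power ratio."""
    G = np.fft.fft2(g)
    F_hat = (np.conj(H) / (np.abs(H) ** 2 + K)) * G
    return np.real(np.fft.ifft2(F_hat))
```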
6b) Discuss the method of constrained least squares restoration for image restoration. [L2][CO4] [6M]
The Wiener filter has the disadvantage that we need to know the power spectra of the undegraded image and of the noise. Constrained least squares filtering requires knowledge of only the mean and variance of the noise. These parameters can usually be calculated from a given degraded image, which is the advantage of this method, and the method produces an optimal result for each image to which it is applied. The method is based on an optimality criterion: we express the criterion as a measure of smoothness, such as the second derivative (Laplacian) of the image, and minimize it subject to a constraint on the noise.
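A sketch of constrained least squares filtering in the frequency domain, using the Laplacian as the smoothness measure; gamma would normally be adjusted iteratively from the noise mean and variance, but is a hand-set constant in this illustration:

```python
import numpy as np

def cls_filter(g, H, gamma=0.01):
    """Constrained least squares restoration:
    F_hat = [conj(H) / (|H|^2 + gamma * |P|^2)] * G,
    where P(u, v) is the transform of the Laplacian used as the smoothness criterion."""
    M, N = g.shape
    lap = np.array([[0, -1, 0],
                    [-1, 4, -1],
                    [0, -1, 0]], dtype=float)
    P = np.fft.fft2(lap, s=(M, N))           # zero-padded transform of the Laplacian
    G = np.fft.fft2(g)
    F_hat = (np.conj(H) / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)) * G
    return np.real(np.fft.ifft2(F_hat))
```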
Segmentation is an important stage of an image recognition system, because it extracts the objects of interest for further processing, such as description or recognition.
7b) Explain the region-based approach for image segmentation. [L2][CO5] [6M]
Region-Based Segmentation:
The objective of segmentation is to partition an image into regions. In earlier approaches, this problem was addressed by finding boundaries between regions based on discontinuities in gray levels, or by thresholds based on the distribution of pixel properties, such as gray-level values or color. Region-based segmentation, by contrast, finds the regions directly.
Basic formulation: Let R represent the entire image region. We may view segmentation as a process that partitions R into n subregions, R1, R2, ..., Rn.
Region Growing:
Region growing is a procedure that groups pixels or subregions into larger regions based on predefined criteria. The basic approach is to start with a set of "seed" points and from these grow regions by appending to each seed those neighboring pixels that have properties similar to the seed (such as specific ranges of gray level or color); a sketch of this procedure is given below.
Basically, growing a region should stop when no more pixels satisfy the criteria for inclusion in that region. Criteria such as gray level, texture, and color are local in nature and do not take into account the "history" of region growth.
The use of these types of descriptors is based on the assumption that a model of expected results is at least partially available.
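A minimal sketch of region growing from a single seed, assuming a 2-D grayscale NumPy array, 4-connectivity, and a simple "gray level within thresh of the seed" criterion (all of these choices are illustrative):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, thresh=10):
    """Grow a region from `seed` = (row, col), appending 4-connected neighbours
    whose gray level differs from the seed value by at most `thresh`."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = float(img[seed])
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(img[ny, nx]) - seed_val) <= thresh:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask
```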
Region Splitting and Merging:
The procedure just discussed grows regions from a set of seed points. An alternative is to subdivide an image initially into a set of arbitrary, disjoint regions and then merge and/or split the regions in an attempt to satisfy the required conditions.
A split-and-merge algorithm that iteratively works toward satisfying these constraints is developed as follows.
Let R represent the entire image region and select a predicate P. One approach for segmenting R is to subdivide it successively into smaller and smaller quadrant regions so that, for any region Ri, P(Ri) = TRUE.
We start with the entire region. If P(R) = FALSE, we divide the image into quadrants. If P is FALSE for any quadrant, we subdivide that quadrant into subquadrants, and so on.
This particular splitting technique has a convenient representation in the form of a so-called quadtree (that is, a tree in which each node has exactly four descendants). The root of the tree corresponds to the entire image and each node corresponds to a subdivision; in the usual illustration, only R4 is subdivided further.
8a) Illustrate the clustering techniques for image segmentation with an example. [L2][CO5] [6M]
A simple clustering of pixels into two classes can be obtained by thresholding:
IT(x,y) = 1 if I(x,y) ≥ T, and IT(x,y) = 0 if I(x,y) < T
where I(x,y) are the original image pixels and IT(x,y) is the thresholded image. Since IT contains only two values (1 for foreground pixels and 0 for background pixels), it is called a binary image. It can serve as a mask, because each location (x,y) of the original image has a value of 1 in the mask if it is a feature pixel. In practice, some feature pixels will have intensity values below the threshold value, and some background pixels will lie above the threshold value, because of image inhomogeneities and additive noise.
Image thresholding is the easiest way to separate image background and foreground. This image thresholding can also be viewed as a form of image segmentation. To apply thresholding techniques we should use a grayscale image; when thresholding, that grayscale image will be converted to a binary image.
Common thresholding techniques include:
1) Simple thresholding
2) Global thresholding: histogram-improvement methods and threshold-computing methods
Threshold-computing methods include:
a) Adaptive thresholding
b) Threshold to zero
c) Binary thresholding
d) Otsu thresholding (a sketch of this method is given below)
e) Truncate thresholding
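As referenced in item (d) above, here is a sketch of Otsu's threshold computation for an 8-bit grayscale image, implemented directly with NumPy (libraries such as OpenCV or scikit-image provide ready-made versions):

```python
import numpy as np

def otsu_threshold(img):
    """Pick the threshold T that maximises the between-class variance of the
    histogram, then return T and the binary image IT (1 = foreground, 0 = background)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()        # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0     # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t, (img >= best_t).astype(np.uint8)
```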
9b) Discuss edge detection with the help of the following operators: i) Gradient ii) Roberts iii) Prewitt iv) Sobel. [L2][CO5] [6M]
i) Gradient:
The two functions that can be expressed in terms of the directional derivatives are the gradient magnitude and the gradient orientation: it is possible to compute the magnitude |∇f| of the gradient and the orientation φ(∇f).
The gradient magnitude gives the amount of difference between pixels in the neighbourhood, which gives the strength of the edge. The gradient magnitude is defined by
|∇f| = √[ (∂f/∂x)² + (∂f/∂y)² ]
The magnitude of the gradient gives the maximum rate of increase of f(x,y) per unit distance in the direction of ∇f. The gradient orientation gives the direction of the greatest change, which presumably is the direction across the edge. The gradient orientation is given by
φ = tan⁻¹[ (∂f/∂y) / (∂f/∂x) ]
where the angle is measured with respect to the x-axis. The direction of the edge at (x,y) is perpendicular to the direction of the gradient vector at that point.
ii) Roberts:
The main objective is to determine the differences between adjacent pixels; one way to find an edge is to explicitly use {+1, −1} masks that calculate the difference between adjacent pixels. Mathematically, these are called forward differences. The simplest way to implement the first-order partial derivatives is by using the Roberts cross-gradient operator.
The partial derivatives can be implemented by approximating them with two 2 × 2 masks. The Roberts operator masks are given by
Gx = [[−1, 0], [0, 1]],  Gy = [[0, −1], [1, 0]]
These filters have the shortest support, so the position of the edges is more accurate, but the problem with the short support is vulnerability to noise: the Roberts kernels are, in practice, too small to reliably find edges in the presence of noise.
iii) Prewitt:
The Prewitt kernels are named after Judy Prewitt. Prewitt kernels are based on the idea of the central difference, and the Prewitt edge detector is a much better operator than the Roberts operator. Considering the arrangement of pixels about the central pixel [i, j], the constant c in the mask expressions gives the emphasis placed on pixels closer to the centre of the mask, and Gx and Gy are the approximations at [i, j]. Setting c = 1, the Prewitt operator masks are obtained as
Gx = [[−1, 0, 1], [−1, 0, 1], [−1, 0, 1]],  Gy = [[−1, −1, −1], [0, 0, 0], [1, 1, 1]]
The Prewitt masks have longer support. A Prewitt mask differentiates in one direction and averages in the other direction, so the edge detector is less vulnerable to noise.
iv) Sobel:
The Sobel kernels are named after Irwin Sobel. The Sobel kernel relies on central differences, but gives greater weight to the central pixels when averaging; the Sobel kernels can be thought of as 3 × 3 approximations to first derivatives of Gaussian kernels. The partial derivatives of the Sobel operator are calculated with the masks
Gx = [[−1, 0, 1], [−2, 0, 2], [−1, 0, 1]],  Gy = [[−1, −2, −1], [0, 0, 0], [1, 2, 1]]
The noise-suppression characteristics of a Sobel mask are better than those of a Prewitt mask.
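A sketch that applies the Sobel kernels above and combines them into the gradient magnitude and orientation (assuming SciPy and a 2-D grayscale array):

```python
import numpy as np
from scipy.ndimage import convolve

def sobel_edges(img):
    """Gradient magnitude and orientation from the Sobel kernels."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)   # approximates df/dx
    ky = kx.T                                  # approximates df/dy
    gx = convolve(img.astype(float), kx)
    gy = convolve(img.astype(float), ky)
    magnitude = np.hypot(gx, gy)               # edge strength |grad f|
    orientation = np.arctan2(gy, gx)           # angle w.r.t. the x-axis
    return magnitude, orientation
```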
10a) Discuss the Laplacian operator in edge detection. Also mention its drawbacks. [L2][CO5] [6M]
The Laplacian, ∇²f = ∂²f/∂x² + ∂²f/∂y², is a second-order derivative operator that highlights regions of rapid intensity change. In practice it is combined with Gaussian smoothing as the Laplacian of Gaussian (LoG): a Gaussian-based operator which uses the Laplacian to take the second derivative of an image. This works well when the transition of the grey level is abrupt. It works on the zero-crossing method, i.e. where the second-order derivative crosses zero, that location corresponds to a maximum of the first derivative and is declared an edge location. Here the Gaussian operator reduces the noise and the Laplacian operator detects the sharp edges.
The Gaussian function is defined by the formula
G(x,y) = exp(−(x² + y²) / (2σ²))
where σ is the standard deviation, which controls the amount of smoothing.
Advantages:
1) It is easy to detect edges and their various orientations.
2) It has fixed characteristics in all directions.
Limitations (drawbacks):
1) It is very sensitive to noise.
2) The localization error may be severe at curved edges.
3) It generates noisy responses that do not correspond to edges, so-called "false edges".
10b) Discuss the concept of the Laplacian of Gaussian (LoG) operator for edge detection. [L2][CO5] [6M]
Laplacian filters are derivative filters used to find areas of rapid change (edges) in images. Since derivative filters are very sensitive to noise, it is common to smooth the image (e.g., using a Gaussian filter) before applying the Laplacian. This two-step process is called the Laplacian of Gaussian (LoG) operation.
Step 1: the image is smoothed by convolving it with a Gaussian kernel.
Step 2: the Laplacian operator is applied to the result obtained in Step 1.
Because convolution is associative, the two steps can be combined by differentiating the Gaussian kernel itself. On differentiating the Gaussian kernel, we get the LoG kernel
∇²G(x,y) = [ (x² + y² − 2σ²) / σ⁴ ] · exp(−(x² + y²) / (2σ²))
which is convolved with the image in a single pass; edges are then located at the zero crossings of the result.
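A sketch of the one-pass LoG operation using SciPy's gaussian_laplace, followed by a crude zero-crossing test (sigma and the zero-crossing threshold are illustrative choices):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_edges(img, sigma=2.0, zero_thresh=0.01):
    """Smooth with a Gaussian of width sigma and apply the Laplacian in one step,
    then mark sign changes (zero crossings) of the response as edge pixels."""
    resp = gaussian_laplace(img.astype(float), sigma=sigma)
    edges = np.zeros(resp.shape, dtype=bool)
    # sign change between vertical neighbours
    edges[:-1, :] |= (np.sign(resp[:-1, :]) != np.sign(resp[1:, :])) & (np.abs(resp[:-1, :]) > zero_thresh)
    # sign change between horizontal neighbours
    edges[:, :-1] |= (np.sign(resp[:, :-1]) != np.sign(resp[:, 1:])) & (np.abs(resp[:, :-1]) > zero_thresh)
    return edges
```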