
DIP Unit 4 Q&A

The document discusses various types of noises that affect digital images and their probability density function expressions. It also explains different image restoration techniques like inverse filtering, mean filters, and their advantages and disadvantages. Mean filters are used to reduce noise when the only degradation present is additive noise. Inverse filtering directly estimates the original image by dividing the degraded image Fourier transform by the degradation function Fourier transform but it is sensitive to noise.


UNIT-4

1a) Draw the degradation/restoration model in image processing and describe each part presented in it. [L1][CO4] [6M]

The degradation process is modeled as a degradation function that, together with an additive noise term, operates on an input image f(x, y) to produce a degraded image g(x, y).
 Given g(x, y), some knowledge about the degradation function H, and some knowledge about the additive noise term η(x, y), the objective of restoration is to obtain an estimate f̂(x, y) of the original image.
 The estimate should be as close as possible to the original input image and, in general, the more we know about H and η, the closer f̂(x, y) will be to f(x, y). The degraded image is given in the spatial domain by

g(x, y) = h(x, y) * f(x, y) + η(x, y)

where h(x, y) is the spatial representation of the degradation function and the symbol * indicates convolution.
 Convolution in the spatial domain is equivalent to multiplication in the frequency domain, hence

G(u, v) = H(u, v) F(u, v) + N(u, v)

where the terms in capital letters are the Fourier transforms of the corresponding terms in the above equation.

Figure: Model of the image degradation/restoration process.
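The degradation model above can be sketched numerically. This is a minimal illustration, not part of the original text: the 3 × 3 box-blur kernel and the Gaussian noise level are assumed purely for demonstration.

```python
# Sketch of the degradation model g = h*f + eta, applied in the frequency
# domain, where the convolution becomes an element-wise multiplication.
import numpy as np

def degrade(f, h, noise_sigma=5.0, seed=0):
    """Degrade image f with blur kernel h plus additive Gaussian noise."""
    rng = np.random.default_rng(seed)
    F = np.fft.fft2(f)
    # Zero-pad h to the image size so H(u, v) aligns with F(u, v).
    H = np.fft.fft2(h, s=f.shape)
    eta = rng.normal(0.0, noise_sigma, f.shape)     # additive noise term
    g = np.real(np.fft.ifft2(H * F)) + eta
    return g, H

f = np.zeros((32, 32)); f[12:20, 12:20] = 255.0    # toy input image
h = np.ones((3, 3)) / 9.0                          # assumed 3x3 box blur
g, H = degrade(f, h)
```

Because the blur and noise are applied exactly as in the equation G(u, v) = H(u, v) F(u, v) + N(u, v), the same H can later be reused by the restoration filters discussed below.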


1b) Differentiate between image enhancement and image restoration. [L4][CO4] [6M]

(i) Image enhancement techniques are heuristic procedures designed to manipulate an image in order to take advantage of the psychophysical aspects of the human visual system, whereas image restoration techniques are basically reconstruction techniques by which a degraded image is reconstructed using prior knowledge of the degradation phenomenon.
(ii) Image enhancement can be implemented by spatial and frequency domain techniques, whereas image restoration can be implemented by frequency domain and algebraic techniques.
(iii) The computational complexity of image enhancement is relatively low compared to that of image restoration, since algebraic methods require manipulation of a large number of simultaneous equations. Under some conditions, however, the computational complexity can be reduced to the same level as that required by traditional frequency domain techniques.
(iv) Image enhancement techniques are problem oriented, whereas image restoration techniques are general and are oriented towards modeling the degradation and applying the reverse process in order to reconstruct the original image.
(v) Masks are used in spatial domain methods for image enhancement, whereas masks are not used in image restoration techniques.
(vi) Contrast stretching is considered an image enhancement technique because it is based on the pleasing aspects of the viewer, whereas removal of image blur by applying a deblurring function is considered an image restoration technique.

2a) Explain the Gaussian and Rayleigh noises with their PDF expressions. [L2][CO4] [6M]

Gaussian noise
It is also called electronic noise because it arises in amplifiers or detectors. Gaussian noise is caused by natural sources such as thermal vibration of atoms and the discrete nature of radiation from warm objects. Gaussian noise generally disturbs the gray values in digital images.
 The PDF of a Gaussian random variable z is

p(z) = (1 / (√(2π) σ)) e^(−(z − μ)² / (2σ²))

 Mean: μ
 Standard deviation: σ
 Variance: σ²
 Approximately 70% of its values will be in the range [(μ − σ), (μ + σ)].
 Approximately 95% of its values will be in the range [(μ − 2σ), (μ + 2σ)].

Rayleigh noise:
Rayleigh noise is present in radar range images. The probability density function of Rayleigh noise is given as

p(z) = (2 / b)(z − a) e^(−(z − a)² / b) for z ≥ a, and p(z) = 0 for z < a

Mean: μ = a + √(πb / 4)

Variance: σ² = b(4 − π) / 4
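The two PDFs above can be checked empirically by sampling. The sketch below is illustrative only; the parameter values (μ, σ, a, b) are assumptions chosen for the demonstration, and the Rayleigh samples are drawn with the inverse-CDF method implied by the PDF.

```python
# Sample Gaussian and Rayleigh noise and compare sample statistics with
# the closed-form mean/variance expressions given above.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Gaussian noise with assumed mu = 0, sigma = 10.
mu, sigma = 0.0, 10.0
gauss = rng.normal(mu, sigma, n)

# Rayleigh noise with assumed a = 0, b = 200, sampled by inverting the
# CDF 1 - exp(-(z - a)^2 / b):  z = a + sqrt(-b * ln(1 - u)).
a, b = 0.0, 200.0
u = rng.random(n)
rayleigh = a + np.sqrt(-b * np.log(1.0 - u))

rayleigh_mean = a + np.sqrt(np.pi * b / 4.0)   # theoretical mean
rayleigh_var = b * (4.0 - np.pi) / 4.0         # theoretical variance
```

With 100 000 samples the empirical mean and variance land very close to the theoretical expressions, which is a quick sanity check on the PDF formulas.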

2b) Explain the Erlang and Exponential noises with their PDF expressions. [L2][CO4] [6M]

Erlang (gamma) noise:

Gamma noise is generally seen in laser-based images. It obeys the gamma distribution. It is represented as

p(z) = (a^b z^(b−1) / (b − 1)!) e^(−az) for z ≥ 0, and p(z) = 0 for z < 0

where the parameters are such that a > 0, b is a positive integer, and "!" indicates factorial. The mean and variance of this density are given by

Mean: μ = b / a

Variance: σ² = b / a²

Exponential noise
The PDF of exponential noise is

p(z) = a e^(−az) for z ≥ 0, and p(z) = 0 for z < 0, where a > 0

Mean: μ = 1 / a

Variance: σ² = 1 / a²
3a) Explain the Uniform and Impulse noises with their PDF expressions. [L1][CO4] [6M]

Uniform Noise:

 The PDF of uniform noise is given by

p(z) = 1 / (b − a) for a ≤ z ≤ b, and p(z) = 0 otherwise

 The mean and variance of this noise are

μ = (a + b) / 2 and σ² = (b − a)² / 12

 The uniform density is useful as the basis for random number generators used in simulations.
Impulse (salt & pepper) Noise:

 Impulse noise corrupts an image by replacing the affected pixels with one of two extreme values. The PDF of bipolar (impulse) noise is given by

p(z) = Pa for z = a, p(z) = Pb for z = b, and p(z) = 0 otherwise

If b > a, gray level b will appear as a light dot in the image, and level a will appear as a dark dot.
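The bipolar PDF above can be sketched as a noise generator. The probabilities Pa and Pb and the values a = 0 (pepper) and b = 255 (salt) are assumed for illustration.

```python
# Minimal salt-and-pepper noise generator following the bipolar impulse
# PDF: each pixel becomes a with probability pa, b with probability pb,
# and is left untouched otherwise.
import numpy as np

def add_salt_pepper(img, pa=0.05, pb=0.05, a=0, b=255, seed=0):
    rng = np.random.default_rng(seed)
    out = img.copy()
    r = rng.random(img.shape)
    out[r < pa] = a                      # dark (pepper) dots
    out[(r >= pa) & (r < pa + pb)] = b   # light (salt) dots
    return out

img = np.full((64, 64), 128, dtype=np.uint8)   # flat mid-gray test image
noisy = add_salt_pepper(img)
```

On a flat image the result contains exactly three gray levels, matching the three-valued PDF of the impulse model.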
3b) Explain the Normal and Gamma noises with their PDF expressions. [L2][CO4] [6M]

Normal (Gaussian):
The normal (Gaussian) PDF is p(z) = (1 / (√(2π) σ)) e^(−(z − μ)² / (2σ²)), with mean μ and variance σ² (see Q2a).

Gamma:
Gamma noise is generally seen in laser-based images. It obeys the gamma distribution. It is represented as

p(z) = (a^b z^(b−1) / (b − 1)!) e^(−az) for z ≥ 0, and p(z) = 0 for z < 0

where a > 0, b is a positive integer, and "!" indicates factorial.

Mean: μ = b / a

Variance: σ² = b / a²

4a) Explain restoration in the presence of noise only using arithmetic and geometric mean filters. [L2][CO4] [6M]
 When the only degradation present in an image is noise, i.e. g(x, y) = f(x, y) + η(x, y) or G(u, v) = F(u, v) + N(u, v), the noise terms are unknown, so subtracting them from g(x, y) or G(u, v) is not a realistic approach. In the case of periodic noise it is possible to estimate N(u, v) from the spectrum of G(u, v).
 Then N(u, v) can be subtracted from G(u, v) to obtain an estimate of the original image. Spatial filtering can be done when only additive noise is present. The following techniques can be used to reduce the noise effect:
i) Mean Filter
(a) Arithmetic Mean filter:

 It is the simplest mean filter. Let Sxy represent the set of coordinates in the subimage of size m × n centered at point (x, y). The arithmetic mean filter computes the average value of the corrupted image g(x, y) in the area defined by Sxy.
 The value of the restored image f̂ at any point (x, y) is the arithmetic mean computed using the pixels in the region defined by Sxy:

f̂(x, y) = (1 / mn) Σ_{(s,t)∈Sxy} g(s, t)

 This operation can be implemented using a convolution mask in which all coefficients have value 1/mn. A mean filter smooths local variations in an image, and noise is reduced as a result of blurring. For every pixel in the image, the pixel value is replaced by the mean value of its neighboring pixels, which results in a smoothing effect in the image.
(b) Geometric Mean filter:
 An image restored using a geometric mean filter is given by the expression

f̂(x, y) = [ Π_{(s,t)∈Sxy} g(s, t) ]^(1/mn)

Here, each restored pixel is given by the product of the pixels in the subimage window, raised to the power 1/mn. A geometric mean filter achieves smoothing comparable to the arithmetic mean filter, but it tends to lose less image detail in the process.
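The two mean filters can be sketched directly from their formulas. This is an illustrative implementation with an assumed 3 × 3 window and edge padding; the small offset added before the logarithm in the geometric mean is a numerical-stability assumption, since the product form is undefined at zero pixels.

```python
# Arithmetic and geometric mean filters over an m x n window Sxy.
import numpy as np

def arithmetic_mean(g, m=3, n=3):
    """Replace each pixel by the average of its m x n neighbourhood."""
    pad = np.pad(g.astype(float), ((m // 2,), (n // 2,)), mode="edge")
    out = np.zeros_like(g, dtype=float)
    for i in range(g.shape[0]):
        for j in range(g.shape[1]):
            out[i, j] = pad[i:i + m, j:j + n].mean()
    return out

def geometric_mean(g, m=3, n=3):
    """Product of the neighbourhood raised to 1/(m*n), computed as
    exp(mean(log)) for numerical stability (small offset assumed)."""
    pad = np.pad(g.astype(float) + 1e-6, ((m // 2,), (n // 2,)), mode="edge")
    out = np.zeros_like(g, dtype=float)
    for i in range(g.shape[0]):
        for j in range(g.shape[1]):
            out[i, j] = np.exp(np.log(pad[i:i + m, j:j + n]).mean())
    return out

flat = np.full((8, 8), 100.0)   # both filters leave a flat image unchanged
am = arithmetic_mean(flat)
gm = geometric_mean(flat)
```

On a constant image both filters return the same constant, which is a quick check that the window bookkeeping is correct.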
4b) Write the expressions for the harmonic and contraharmonic mean filters and explain their importance. [L1][CO4] [6M]

(c) Harmonic Mean filter:

 The harmonic mean filtering operation is given by the expression

f̂(x, y) = mn / Σ_{(s,t)∈Sxy} (1 / g(s, t))

The harmonic mean filter works well for salt noise but fails for pepper noise. It also does well with Gaussian noise.
(d) Contraharmonic mean filter:

 The contraharmonic mean filtering operation yields a restored image based on the expression

f̂(x, y) = Σ_{(s,t)∈Sxy} g(s, t)^(Q+1) / Σ_{(s,t)∈Sxy} g(s, t)^Q

where Q is called the order of the filter. This filter is well suited for reducing or virtually eliminating the effects of salt-and-pepper noise.

 For positive values of Q, the filter eliminates pepper noise. For negative values of Q, it eliminates salt noise.
 It cannot do both simultaneously. Note that the contraharmonic filter reduces to the arithmetic mean filter if Q = 0, and to the harmonic mean filter if Q = −1.
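The contraharmonic expression can be sketched as follows. The window size, the order Q, and the single-pepper-pixel test image are illustrative assumptions; the small offset keeps the denominator finite for zero pixels when Q < 0.

```python
# Contraharmonic mean filter of order Q: sum(g^(Q+1)) / sum(g^Q).
# Q > 0 removes pepper noise; Q < 0 removes salt noise; Q = 0 reduces to
# the arithmetic mean and Q = -1 to the harmonic mean.
import numpy as np

def contraharmonic(g, Q, m=3, n=3):
    pad = np.pad(g.astype(float) + 1e-6, ((m // 2,), (n // 2,)), mode="edge")
    out = np.zeros_like(g, dtype=float)
    for i in range(g.shape[0]):
        for j in range(g.shape[1]):
            w = pad[i:i + m, j:j + n]
            out[i, j] = (w ** (Q + 1)).sum() / (w ** Q).sum()
    return out

# A flat image with one pepper (dark) pixel: a positive Q pushes the
# dark dot back toward the surrounding gray level.
img = np.full((9, 9), 200.0)
img[4, 4] = 0.0
restored = contraharmonic(img, Q=1.5)
```

Note how the positive-Q filter restores the pepper pixel almost exactly, whereas a plain arithmetic mean would only dilute it.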
5 a) Explain the method of inverse filtering for image restoration. [L2][CO4] [6M]

 The simplest approach to restoration is direct inverse filtering, where F̂(u, v), an estimate of the transform of the original image, is computed simply by dividing the transform of the degraded image, G(u, v), by the degradation function:

F̂(u, v) = G(u, v) / H(u, v)

 The divisions are between individual elements of the functions. But G(u, v) is given by G(u, v) = H(u, v) F(u, v) + N(u, v), so

F̂(u, v) = F(u, v) + N(u, v) / H(u, v)

 This tells us that even if the degradation function is known, the undegraded image [the inverse Fourier transform of F(u, v)] cannot be recovered exactly, because N(u, v) is a random function whose Fourier transform is not known.
 If the degradation function has zero or very small values, then the ratio N(u, v)/H(u, v) could easily dominate the estimate F̂(u, v). One approach to get around the zero or small-value problem is to limit the filter frequencies to values near the origin.
 H(0, 0) is equal to the average value of h(x, y), and this is usually the highest value of H(u, v) in the frequency domain. Thus, by limiting the analysis to frequencies near the origin, the probability of encountering zero values is reduced.
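A minimal sketch of direct inverse filtering, including the frequency-limiting idea from the last bullet. The `radius` cutoff, the `eps` guard against near-zero H values, and the noiseless box-blur test case are all illustrative assumptions.

```python
# Direct inverse filter F_hat = G / H, with optional restriction of the
# division to frequencies near the origin to avoid dividing by tiny H.
import numpy as np

def inverse_filter(g, H, radius=None, eps=1e-3):
    G = np.fft.fft2(g)
    Hs = np.where(np.abs(H) < eps, eps, H)   # guard against tiny |H|
    F_hat = G / Hs
    if radius is not None:
        # Keep the inverse-filtered result only near the origin; pass
        # G through unchanged at higher frequencies.
        u = np.fft.fftfreq(g.shape[0])[:, None]
        v = np.fft.fftfreq(g.shape[1])[None, :]
        mask = np.sqrt(u ** 2 + v ** 2) <= radius
        F_hat = np.where(mask, F_hat, G)
    return np.real(np.fft.ifft2(F_hat))

f = np.zeros((32, 32)); f[10:20, 10:20] = 1.0
h = np.ones((3, 3)) / 9.0                       # assumed known blur
H = np.fft.fft2(h, s=f.shape)
g = np.real(np.fft.ifft2(H * np.fft.fft2(f)))   # noiseless blur only
restored = inverse_filter(g, H)
```

With a known blur and no noise the inverse filter recovers the image essentially exactly; adding even modest noise to g makes the same call diverge badly, which is the sensitivity discussed in the next answer.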

5 b) Give the advantages and disadvantages of the inverse filtering. [L2][CO4] [6M]

 Inverse filtering is a restoration technique for deconvolution, i.e., when the image is blurred by a known lowpass filter, it is possible to recover the image by inverse filtering or generalized inverse filtering.
 However, inverse filtering is very sensitive to additive noise. Wiener filtering executes an optimal tradeoff between inverse filtering and noise smoothing.
 It removes the additive noise and inverts the blurring simultaneously. The Wiener filter is optimal in terms of the mean square error because it minimizes the overall mean square error in the process of inverse filtering and noise smoothing.
 The quickest and easiest way to restore a degraded image is by inverse filtering. Because the inverse filter is a form of highpass filter, it responds very badly to any noise present in the image, since noise tends to be high frequency.
 In summary, inverse filtering is a simple deconvolution technique but is very sensitive to additive noise; the Wiener filter, which removes the additive noise and inverts the blurring simultaneously, is usually used instead.
6a) Explain the method of least mean square (Wiener) filters for image restoration. [L2][CO4] [6M]

 The inverse filtering approach has poor performance. The Wiener filtering approach incorporates both the degradation function and the statistical characteristics of noise into the restoration process. The objective is to find an estimate f̂ of the uncorrupted image f such that the mean square error between them is minimized. The error measure is given by

e² = E{(f − f̂)²}

where E{·} is the expected value of the argument.

 We assume that the noise and the image are uncorrelated and that one or the other has zero mean. The gray levels in the estimate are a linear function of the levels in the degraded image. Under these conditions, the minimum of the error function is given in the frequency domain by

F̂(u, v) = [ (1 / H(u, v)) · |H(u, v)|² / (|H(u, v)|² + Sη(u, v) / Sf(u, v)) ] G(u, v)

where H(u, v) = degradation function
H*(u, v) = complex conjugate of H(u, v)
|H(u, v)|² = H*(u, v) H(u, v)
Sη(u, v) = |N(u, v)|² = power spectrum of the noise
Sf(u, v) = |F(u, v)|² = power spectrum of the undegraded image

 The power spectrum of the undegraded image is rarely known. An approach used frequently when these quantities are not known or cannot be estimated is to replace the ratio by a constant, giving

F̂(u, v) = [ (1 / H(u, v)) · |H(u, v)|² / (|H(u, v)|² + K) ] G(u, v)

where K is a specified constant.
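The constant-K form of the Wiener filter can be sketched directly; note that (1/H)·|H|²/(|H|² + K) simplifies to H*/(|H|² + K). The value of K and the noiseless box-blur test case are assumptions for illustration.

```python
# Parametric Wiener filter: F_hat = [H* / (|H|^2 + K)] G, where K stands
# in for the unknown noise-to-signal power ratio S_eta / S_f.
import numpy as np

def wiener_filter(g, H, K=0.01):
    G = np.fft.fft2(g)
    F_hat = (np.conj(H) / (np.abs(H) ** 2 + K)) * G
    return np.real(np.fft.ifft2(F_hat))

f = np.zeros((32, 32)); f[10:20, 10:20] = 1.0
h = np.ones((3, 3)) / 9.0
H = np.fft.fft2(h, s=f.shape)
g = np.real(np.fft.ifft2(H * np.fft.fft2(f)))   # blurred observation
restored = wiener_filter(g, H, K=1e-3)
```

Unlike the plain inverse filter, the K term keeps the gain bounded where |H| is small, so the filter degrades gracefully when noise is present.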

6b) Discuss the method of constrained least squares restoration for image restoration. [L2][CO4] [6M]

 The Wiener filter has the disadvantage that we need to know the power spectra of the undegraded image and the noise. Constrained least squares filtering requires knowledge of only the mean and variance of the noise. These parameters can usually be calculated from a given degraded image, which is the advantage of this method. The method also yields an optimal result for each image to which it is applied.
 The optimality criterion for restoration is based on a measure of smoothness, such as the second derivative of an image (the Laplacian). We seek the minimum of a criterion function C defined as

C = Σ_x Σ_y [∇²f̂(x, y)]²

 subject to the constraint

‖g − Hf̂‖² = ‖η‖²

 The frequency domain solution to this optimization problem is given by

F̂(u, v) = [ H*(u, v) / (|H(u, v)|² + γ |P(u, v)|²) ] G(u, v)

where γ is a parameter that must be adjusted so that the constraint is satisfied.

 P(u, v) is the Fourier transform of the Laplacian operator
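The frequency-domain solution can be sketched as below. Here γ is held fixed for simplicity rather than adjusted iteratively to satisfy the constraint, and the Laplacian kernel and the noiseless test case are illustrative assumptions.

```python
# Constrained least squares filter:
#   F_hat = [H* / (|H|^2 + gamma * |P|^2)] G,
# where P(u, v) is the transform of the Laplacian kernel p(x, y).
import numpy as np

def cls_filter(g, H, gamma=0.01):
    p = np.array([[0, -1, 0],
                  [-1, 4, -1],
                  [0, -1, 0]], dtype=float)     # Laplacian operator
    P = np.fft.fft2(p, s=g.shape)
    G = np.fft.fft2(g)
    F_hat = (np.conj(H) / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)) * G
    return np.real(np.fft.ifft2(F_hat))

f = np.zeros((32, 32)); f[10:20, 10:20] = 1.0
h = np.ones((3, 3)) / 9.0
H = np.fft.fft2(h, s=f.shape)
g = np.real(np.fft.ifft2(H * np.fft.fft2(f)))
restored = cls_filter(g, H, gamma=1e-4)
```

Because P(0, 0) = 0 (the Laplacian kernel sums to zero), the filter leaves the DC component untouched, while the γ|P|² term suppresses amplification at high frequencies where |H| is small.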

7 a) Give the importance of image segmentation in image processing. [L2][CO5] [6M]

Segmentation is an important stage of an image recognition system, because it extracts the objects of our interest for further processing such as description or recognition.

 Segmentation of an image is used in practice for the classification of image pixels. Segmentation techniques are used to isolate the desired object from the image in order to perform analysis of the object.
 For example, a tumour, a cancer, or a block in the blood flow can be easily isolated from its background with the help of an image segmentation technique.
 Various techniques are available for the segmentation of monochrome images. The segmentation of color images is more complicated, as each pixel in a color image is vector valued.
 Existing image segmentation techniques have the limitations of over-segmentation and high sensitivity to noise.
 Earlier techniques frequently use statistical methods to find features in complete brain MRI or CT images.
 Furthermore, the segmentation process takes more time for processing, and some of the features are redundant and overlap.
 The determination of features of the whole image is a difficult task too. Due to the presence of redundant and overlapped features, an accurate result cannot be obtained. These methods make use of both statistical and non-statistical features.

7 b) Explain the Region based Approach for image segmentation. [L2][CO5] [6M]

Region-Based Segmentation:
 The objective of segmentation is to partition an image into regions. Previously, we approached this problem by finding boundaries between regions based on discontinuities in gray levels, or accomplished segmentation via thresholds based on the distribution of pixel properties, such as gray-level values or color. Region-based segmentation finds the regions directly.
 Basic Formulation: Let R represent the entire image region. We may view segmentation as a process that partitions R into n subregions, R1, R2, ..., Rn.
Region Growing:

 Region growing is a procedure that groups pixels or subregions into larger regions based on predefined criteria. The basic approach is to start with a set of "seed" points and from these grow regions by appending to each seed those neighboring pixels that have properties similar to the seed (such as specific ranges of gray level or color).
 Growing a region should stop when no more pixels satisfy the criteria for inclusion in that region. Criteria such as gray level, texture, and color are local in nature and do not take into account the "history" of region growth.
 The use of these types of descriptors is based on the assumption that a model of expected results is at least partially available.
Region Splitting and Merging:

 The procedure just discussed grows regions from a set of seed points. An alternative is to subdivide an image initially into a set of arbitrary, disjoint regions and then merge and/or split the regions in an attempt to satisfy the segmentation conditions.
 A split-and-merge algorithm that iteratively works toward satisfying these constraints is developed as follows.
 Let R represent the entire image region and select a predicate P. One approach for segmenting R is to subdivide it successively into smaller and smaller quadrant regions so that, for any region Ri, P(Ri) = TRUE.
 We start with the entire region. If P(R) = FALSE, we divide the image into quadrants. If P is FALSE for any quadrant, we subdivide that quadrant into subquadrants, and so on.
 This particular splitting technique has a convenient representation in the form of a so-called quadtree (that is, a tree in which nodes have exactly four descendants), as illustrated in the figure. Note that the root of the tree corresponds to the entire image and that each node corresponds to a subdivision. In this case, only R4 was subdivided further.
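The region-growing procedure described above can be sketched with a simple breadth-first flood from a seed. The 4-connectivity, the gray-level tolerance `tol`, and the toy image are assumptions for the demonstration.

```python
# Region growing from a seed pixel: append 4-connected neighbours whose
# gray level is within `tol` of the seed value; stop when no neighbour
# satisfies the criterion.
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    h, w = img.shape
    seed_val = float(img[seed])
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not region[ny, nx]
                    and abs(float(img[ny, nx]) - seed_val) <= tol):
                region[ny, nx] = True
                q.append((ny, nx))
    return region

img = np.zeros((20, 20), dtype=np.uint8)
img[5:15, 5:15] = 200                     # bright 10x10 square object
mask = region_grow(img, seed=(10, 10), tol=10)
```

Starting from a seed inside the square, growth stops exactly at the object boundary because the background differs from the seed value by far more than the tolerance.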

8a) Illustrate the clustering techniques for image segmentation with an example. [L2][CO5] [6M]

 Clustering refers to the classification of objects into groups according to certain properties of these objects. In the clustering technique, an attempt is made to extract a feature vector from local areas in the image.
 A standard procedure for clustering is to assign each pixel to the class of the nearest cluster mean. Clustering methods can be divided into two categories: hierarchical and partitional.
 Hierarchical clustering techniques are based on the use of a proximity matrix indicating the similarity between every pair of data points to be clustered. The end result is a tree of clusters representing the nested groups of patterns and the similarity levels at which groupings change. The clustering methods differ in the rules by which two small clusters are merged or a large cluster is split. The two main categories of algorithms used in the hierarchical clustering framework are agglomerative and divisive.
 Agglomerative algorithms seek to merge clusters into larger and larger ones by starting with N single-point clusters. They can be divided into three classes: (i) the single-link algorithm, (ii) the complete-link algorithm, and (iii) the minimum-variance algorithm.
 The single-link algorithm merges two clusters according to the minimum distance between the data samples from the two clusters.
 Accordingly, the algorithm tends to produce clusters with elongated shapes. In contrast, the complete-link algorithm incorporates the maximum distance between data samples in clusters, and its application always results in compact clusters.
 The quality of hierarchical clustering depends on how the dissimilarity measurement between two clusters is defined. The minimum-variance algorithm combines two clusters so as to minimise a cost function, namely, to form a new cluster with the minimum increase of the cost function.
 This algorithm has attracted considerable interest in vector quantisation, where it is termed the pairwise-nearest-neighbour algorithm. Divisive clustering begins with the entire dataset in the same cluster, followed by iterative splitting of the dataset until single-point clusters are attained on the leaf nodes.
 It follows the reverse clustering strategy from agglomerative clustering. On each node, the divisive algorithm conducts a full search over all possible pairs of clusters for the data samples on the node. Hierarchical algorithms include COBWEB, CURE, and CHAMELEON.
 Partition-based clustering uses an iterative optimisation procedure that aims at minimising an objective function f, which measures the goodness of clustering. Partition-based clusterings are composed of two learning steps: the partitioning of each pattern to its closest cluster and the computation of the cluster centroids.
 A common feature of partition-based clusterings is that the clustering procedure starts from an initial solution with a known number of clusters. The cluster centroids are usually computed based on the optimality criterion such that the objective function is minimised. Partitional algorithms are categorised into partitioning relocation algorithms and density-based partitioning.
 Algorithms of the first type are further categorised into probabilistic clustering, K-medoids, and K-means. The second type of partitional algorithms, called density-based partitioning, includes algorithms such as DBSCAN, OPTICS, DBCLASD, and DENCLUE.
 The K-means method is the simplest method in unsupervised classification. Clustering algorithms do not require training data. K-means clustering is an iterative procedure: it clusters data by iteratively computing a mean intensity for each class and segmenting the image by classifying each pixel into the class with the closest mean.
Fuzzy Clustering
 Clustering methods can be classified as either hard or fuzzy depending on whether a
pattern data belongs exclusively to a single cluster or to several clusters with different
degrees. In hard clustering, a membership value of zero or one is assigned to each pattern
data, whereas in fuzzy clustering, a value between zero and one is assigned to each
pattern by a membership function.
 In general, fuzzy clustering methods can be considered to be superior to those of their
hard counterparts since they can represent the relationship between the input pattern
data and clusters more naturally.
 Fuzzy clustering seeks to minimise a heuristic global cost function by exploiting the fact that each pattern has some graded membership in each cluster. The clustering criterion allows multiple cluster assignments for each pattern.
 The fuzzy K-means algorithm iteratively updates the cluster centroids and estimates the class membership function using the gradient descent approach.
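The K-means procedure described above (assign each pixel to the nearest class mean, then recompute the means) can be sketched on pixel intensities. The initialisation by spreading the means over the intensity range, and the two-level toy image, are assumptions for the demonstration.

```python
# Hard K-means on pixel intensities: alternate between assigning each
# pixel to the nearest class mean and recomputing the means.
import numpy as np

def kmeans_segment(img, k=2, iters=20):
    x = img.astype(float).ravel()
    means = np.linspace(x.min(), x.max(), k)   # spread initial means
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - means[None, :]), axis=1)
        for c in range(k):
            if (labels == c).any():
                means[c] = x[labels == c].mean()
    return labels.reshape(img.shape), means

img = np.zeros((16, 16)); img[4:12, 4:12] = 250.0   # two intensity classes
labels, means = kmeans_segment(img, k=2)
```

On a clean two-level image the means converge to the two intensity values and the label image reproduces the object/background partition; fuzzy K-means would instead assign each pixel a graded membership in both classes.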

8 b) Discuss the basics of the intensity thresholding. [L2][CO5] [6M]


The purpose of segmentation is to separate one or more regions of interest in an image from regions that do not contain relevant information, called the background. Depending on the image, segmentation can be a very complex process, and a comprehensive overview of the most relevant segmentation techniques could fill an entire chapter.
When the distribution of feature pixels and background pixels is approximately Gaussian, a characteristic intensity distribution with two peaks emerges in the histogram. Such a distribution is called bimodal because there are two mode values: one for the background and one for the feature. Intensities are spread around the modal values because of additive noise and intensity inhomogeneities of the features. The simplest approach to segmentation is the selection of a suitable intensity threshold, as indicated in the figure. All pixels with a value higher than the threshold value are classified as feature pixels, and all pixels with a lower value are classified as background pixels. Most commonly, a new image is created by using

IT(x, y) = 1 if I(x, y) ≥ T, and IT(x, y) = 0 otherwise

where I(x, y) are the original image pixels and IT(x, y) is the thresholded image. Since IT contains only two values (1 for feature pixels and 0 for background pixels), it is called a binary image. It can serve as a mask, because each location (x, y) of the original image has a value of 1 in the mask if it is a feature pixel. In practice, some feature pixels will have intensity values below the threshold value, and some background pixels will lie above the threshold value, because of image inhomogeneities and additive noise.
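The thresholding rule above can be sketched in a few lines. The bimodal toy image (background near 50, feature near 200) and the threshold T = 125 placed between the two modes are illustrative assumptions.

```python
# Binary intensity thresholding: 1 for feature pixels (I >= T),
# 0 for background pixels.
import numpy as np

def threshold(img, T):
    return (img >= T).astype(np.uint8)

# Bimodal toy image: Gaussian background around 50, feature around 200.
rng = np.random.default_rng(0)
img = rng.normal(50, 10, (32, 32))
img[8:24, 8:24] = rng.normal(200, 10, (16, 16))
mask = threshold(img, T=125)
```

The result is a binary mask that selects the bright feature region; as noted above, noise can still push individual pixels across the threshold in either direction.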

9 a) List out the different types of thresholding. [L1][CO5] [6M]

 Image thresholding is the easiest way to separate image background and foreground; it can also be viewed as a form of image segmentation. To apply thresholding techniques, we should use a grayscale image. When thresholding, that grayscale image is converted to a binary image.

1) Simple thresholding
2) Global thresholding: histogram improvement methods and threshold computing methods

Threshold computing methods include:
a) Adaptive thresholding
b) Threshold to zero
c) Binary thresholding
d) Otsu thresholding
e) Truncate thresholding

9 b) Discuss the Edge detection with the help of the following operators: [L2][CO5] [6M]
i) Gradient ii) Roberts iii) Prewitt iv) Sobel.

 Edge detection is the process of finding meaningful transitions in an image. Edge detection is one of the central tasks at the lower levels of image processing.
 The points where sharp changes in brightness occur typically form the border between different objects. These points can be detected by computing intensity differences in local image regions.
i) Gradient Operator: A gradient is a two-dimensional vector that points in the direction in which the image intensity grows fastest. The gradient operator ∇ is given by

∇f = [ ∂f/∂x , ∂f/∂y ]ᵀ = [ Gx , Gy ]ᵀ

 The two functions that can be expressed in terms of the directional derivatives are the gradient magnitude and the gradient orientation. It is possible to compute the magnitude |∇f| of the gradient and the orientation φ(∇f).
 The gradient magnitude gives the amount of difference between pixels in the neighbourhood, which gives the strength of the edge. The gradient magnitude is defined by

|∇f| = √(Gx² + Gy²)

 The magnitude of the gradient gives the maximum rate of increase of f(x, y) per unit distance in the gradient orientation of ∇f. The gradient orientation gives the direction of the greatest change, which presumably is the direction across the edge. The gradient orientation is given by

φ = tan⁻¹(Gy / Gx)

where the angle is measured with respect to the x-axis. The direction of the edge at (x, y) is perpendicular to the direction of the gradient vector at that point.
ii) Roberts:
 The main objective is to determine the differences between adjacent pixels; one way to find an edge is to explicitly use {+1, −1} masks, which calculate the difference between adjacent pixels. Mathematically, these are called forward differences. The Roberts kernels are, in practice, too small to reliably find edges in the presence of noise. The simplest way to implement the first-order partial derivative is by using the Roberts cross-gradient operator.

 The partial derivatives can be implemented by approximating them with two 2 × 2 masks. The Roberts operator masks are given by

Gx = [ 1 0 ; 0 −1 ]   Gy = [ 0 1 ; −1 0 ]

 These filters have the shortest support, so the position of the edges is more accurate, but the problem with the short support of the filters is their vulnerability to noise.

iii) Prewitt:

 The Prewitt kernels are named after Judith Prewitt. Prewitt kernels are based on the idea of the central difference. The Prewitt edge detector is a much better operator than the Roberts operator. Consider the arrangement of pixels about the central pixel [i, j].
 The constant c in the derivative expressions reflects the emphasis given to pixels closer to the centre of the mask; Gx and Gy are the approximations at [i, j]. Setting c = 1, the Prewitt operator masks are obtained as

Gx = [ −1 0 1 ; −1 0 1 ; −1 0 1 ]   Gy = [ −1 −1 −1 ; 0 0 0 ; 1 1 1 ]

 The Prewitt masks have longer support. The Prewitt mask differentiates in one direction and averages in the other direction, so the edge detector is less vulnerable to noise.
iv) Sobel:

 The Sobel kernels are named after Irwin Sobel. The Sobel kernel relies on central differences, but gives greater weight to the central pixels when averaging. The Sobel kernels can be thought of as 3 × 3 approximations to first derivatives of Gaussian kernels. The Sobel masks in matrix form are given as

Gx = [ −1 0 1 ; −2 0 2 ; −1 0 1 ]   Gy = [ −1 −2 −1 ; 0 0 0 ; 1 2 1 ]

The noise-suppression characteristics of a Sobel mask are better than those of a Prewitt mask.
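The gradient magnitude and orientation formulas, applied with the Sobel masks, can be sketched as follows. The naive sliding-window correlation, the edge padding, and the vertical-step test image are assumptions for the demonstration.

```python
# Sobel edge detection: compute Gx and Gy with the 3x3 Sobel masks, then
# the gradient magnitude sqrt(Gx^2 + Gy^2) and orientation atan2(Gy, Gx).
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def filter3x3(img, k):
    """Naive same-size 3x3 correlation with edge padding."""
    pad = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (pad[i:i + 3, j:j + 3] * k).sum()
    return out

def sobel_edges(img):
    gx = filter3x3(img, SOBEL_X)
    gy = filter3x3(img, SOBEL_Y)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    orientation = np.arctan2(gy, gx)   # angle of the gradient vector
    return magnitude, orientation

img = np.zeros((10, 10)); img[:, 5:] = 100.0   # vertical step edge
mag, ang = sobel_edges(img)
```

For the vertical step the response concentrates on the two columns touching the edge, with the gradient pointing along +x (orientation 0), i.e. across the edge, while flat regions give zero magnitude.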
10a) Discuss the Laplacian operator in edge detection. Also mention its drawbacks. [L2][CO5] [6M]
Laplacian of Gaussian (LoG): It is a Gaussian-based operator which uses the Laplacian to take the second derivative of an image. It works well when the transition of the grey level is abrupt. It relies on the zero-crossing method, i.e., where the second-order derivative crosses zero, that particular location corresponds to a maximum of the first derivative and is called an edge location. Here the Gaussian operator reduces the noise and the Laplacian operator detects the sharp edges.
The Gaussian function is defined by the formula

G(x, y) = e^(−(x² + y²) / (2σ²))

where σ is the standard deviation.

And the LoG operator is computed as

∇²G(x, y) = ((x² + y² − 2σ²) / σ⁴) e^(−(x² + y²) / (2σ²))
Advantages:
1) Easy to detect edges and their various orientations.
2) Has fixed characteristics in all directions.
Limitations:
1) Very sensitive to noise.
2) The localization error may be severe at curved edges.
3) It generates noisy responses that do not correspond to edges, so-called "false edges".

10b) Discuss the concept of the Laplacian of Gaussian (LoG) operator for edge detection. [L2][CO5] [6M]

Laplacian filters are derivative filters used to find areas of rapid change (edges) in images. Since derivative filters are very sensitive to noise, it is common to smooth the image (e.g., using a Gaussian filter) before applying the Laplacian. This two-step process is called the Laplacian of Gaussian (LoG) operation.

Step 1: smoothing of the input image f(m, n)

The input image f(m, n) is smoothed by convolving it with a Gaussian mask h(m, n) to get the resultant smooth image g(m, n):

g(m, n) = f(m, n) * h(m, n)

Step 2: the Laplacian operator is applied to the result obtained in Step 1. This is represented by

∇²g(m, n) = ∇²[f(m, n) * h(m, n)] = f(m, n) * ∇²h(m, n)

On differentiating the Gaussian kernel, we get

∇²h(m, n) = ((m² + n² − 2σ²) / σ⁴) e^(−(m² + n²) / (2σ²))