
DIP ASSIGNMENT QUESTIONS

Unit-1

1 Explain the significance of sampling and quantization in DIP.

Ans. Sampling determines the spatial resolution of an image, i.e. its number of pixels. The total number of pixels in an image can be calculated as Pixels = total number of rows * total number of columns. For example, if we have a total of 36 pixels, we have a square image of 6 x 6. More samples eventually result in more pixels, so taking 36 samples of our continuous signal along the x axis corresponds to 36 pixels of the image. The number of samples is also directly related to the number of sensors on the CCD array.
Quantization is related to gray-level resolution. If a signal is quantized into 5 different levels, the image formed from it would contain only 5 different intensity values; it would be more or less a black-and-white image with a few shades of gray.
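
A minimal sketch of quantization in Python (assuming an 8-bit grayscale file named 'sample.png'; the filename and the choice of 5 levels are only illustrative):

# Quantization sketch: map the 256 input gray levels down to 5 levels
import cv2
import numpy as np

img = cv2.imread('sample.png', 0)            # 8-bit grayscale input, 256 levels

levels = 5                                   # target number of gray levels
step = 256 / levels
quantized = (np.floor(img / step) * step).astype(np.uint8)

cv2.imwrite('quantized.png', quantized)      # output contains only 5 distinct gray values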

2 Define digital image. Explain the components of the general purpose image processing system.
Ans. A digital image is a two-dimensional function f(x, y) in which both the spatial coordinates (x, y) and the amplitude (intensity) values are finite and discrete. A general purpose image processing system consists of the following components.
Image Sensors:
Image sensors sense the intensity, amplitude, coordinates and other features of the image and pass the result to the image processing hardware. This stage also includes the problem domain.
Image Processing Hardware:
Image processing hardware is the dedicated hardware used to process the instructions obtained from the image sensors. It passes the result to the general purpose computer.
Computer:
The computer used in the image processing system is a general purpose computer of the kind we use in daily life.
Image Processing Software:
Image processing software is the software that includes all the mechanisms and algorithms used in the image processing system.
Mass Storage:
Mass storage stores the pixels of the images during the processing.
Hard Copy Device:
Once the image is processed, it is stored on a hard copy device, which can be a pen drive or any external ROM device.
Image Display:
It includes the monitor or display screen that displays the processed images.
Network:
Network is the connection of all the above elements of the image processing system.

3 What is the halftoning technique? Give the logic to implement a halftoned image from a gray-level image.

Ans. Halftoning, or analog halftoning, is a process that simulates shades of gray by varying the size of tiny black dots arranged in a regular pattern. This technique is used in printers as well as the publishing industry. If you inspect a photograph in a newspaper, you will notice that the picture is composed of black dots even though it appears to be composed of grays. This is possible because of the spatial integration performed by our eyes: they blend fine details and record the overall intensity. Digital halftoning is similar, except that the image is decomposed into a grid of halftone cells, and the elements of an image (the dots that halftoning uses to simulate shades of gray) are simulated by filling the appropriate halftone cells. The more black dots a halftone cell contains, the darker the cell appears. For example, in Figure 4.1, a tiny dot located at the center is simulated in digital halftoning by filling the center halftone cell; likewise, a medium-size dot located at the top-left corner is simulated by filling the four cells at the top-left corner. The large dot covering most of the area in the third image is simulated by filling all halftone cells.

Figure 4.1 A sample of digital halftoning

Three common methods for generating digital halftone images are listed below; a short dithering sketch follows the list.

1. patterning
2. dithering
3. error diffusion
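
As a rough sketch of the logic for the dithering approach (ordered dithering with the standard 2x2 Bayer matrix; the input filename is only a placeholder), a gray-level image can be converted to a binary halftoned image as follows:

# Ordered-dither halftoning sketch: compare each pixel against a tiled Bayer threshold matrix
import cv2
import numpy as np

img = cv2.imread('gray_image.png', 0)            # 8-bit gray-level input
m, n = img.shape

# 2x2 Bayer matrix scaled to the 0-255 range
bayer = (np.array([[0, 2],
                   [3, 1]]) + 0.5) * (255 / 4)

# Tile the threshold matrix over the whole image
threshold = np.tile(bayer, (m // 2 + 1, n // 2 + 1))[:m, :n]

# Pixels brighter than the local threshold become white dots, the rest black
halftone = np.where(img > threshold, 255, 0).astype(np.uint8)

cv2.imwrite('halftone.png', halftone)
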
Unit-2

a) A 2-dimensional DFT can be obtained using a 1-dimensional DFT algorithm twice. Explain.
Ans. By the 2-D Fourier transform,

F(u,v) = \frac{1}{N} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y) \, W_N^{ux} W_N^{vy}, \qquad W_N = e^{-j 2\pi / N}
       = \frac{1}{N} \sum_{x=0}^{N-1} W_N^{ux} \left[ \sum_{y=0}^{N-1} f(x,y) \, W_N^{vy} \right]

Row transform:
F(x,v) = \sum_{y=0}^{N-1} f(x,y) \, W_N^{vy}

Column transform:
F(u,v) = \frac{1}{N} \sum_{x=0}^{N-1} F(x,v) \, W_N^{ux}
The 2-D Fourier transform of an input image can therefore be obtained by performing a row-wise 1-D transform followed by a column-wise 1-D transform.
Fast Fourier Algorithm to find DFT of an image:

● Perform row-wise transform using the FFT flowgraph.
● Perform column-wise transform using the FFT flowgraph.
● Scale by 1/N.
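
A minimal NumPy sketch of this separability (the 8x8 random block and the use of np.fft.fft for each 1-D pass are only illustrative):

# 2-D DFT via two passes of the 1-D DFT: rows first, then columns
import numpy as np

f = np.random.rand(8, 8)                 # any N x N input image block
N = f.shape[0]

# Row-wise 1-D DFT (transform along y for every row x)
row_pass = np.fft.fft(f, axis=1)

# Column-wise 1-D DFT of the intermediate result (transform along x), scaled by 1/N
F = np.fft.fft(row_pass, axis=0) / N

# The result matches a direct 2-D DFT with the same scaling
assert np.allclose(F, np.fft.fft2(f) / N)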

b) Write short notes on:


i) Walsh transform
The system of Walsh functions (or, simply, the Walsh system) may be viewed as a discrete, digital counterpart of the continuous, analog system of trigonometric functions on the unit interval. Unlike the trigonometric functions, Walsh functions are only piecewise continuous and, in fact, are piecewise constant. The functions take only the values -1 and +1, on sub-intervals defined by dyadic fractions.

Both systems form a complete, orthonormal set of functions, an orthonormal basis in the Hilbert space L^2[0,1] of square-integrable functions on the unit interval. Both are systems of bounded functions, unlike, say, the Haar system or the Franklin system.

Both the trigonometric and the Walsh systems admit a natural extension by periodicity from the unit interval to the real line ℝ. Furthermore, both Fourier analysis on the unit interval (the Fourier series) and on the real line (the Fourier transform) have digital counterparts defined via the Walsh system: the Walsh series analogous to the Fourier series, and the Hadamard transform analogous to the Fourier transform.
ii) Hadamard transform
The Hadamard transform H_m is a 2^m × 2^m matrix, the Hadamard matrix (scaled by a normalization factor), that transforms 2^m real numbers x_n into 2^m real numbers X_k.
The Hadamard transform can be defined in two ways: recursively, or by using the binary (base-2) representation of the indices n and k.
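
A small sketch of the recursive definition (building the normalized H_m from H_{m-1} with NumPy; the value m = 3 is only illustrative):

# Recursive construction of the (normalized) Hadamard matrix H_m
import numpy as np

def hadamard(m):
    # H_0 is the 1x1 matrix [1]; H_m is built from two copies of H_{m-1}
    if m == 0:
        return np.array([[1.0]])
    h = hadamard(m - 1)
    return np.block([[h,  h],
                     [h, -h]]) / np.sqrt(2)

# Transform 2^m real numbers x_n into 2^m real numbers X_k
m = 3
x = np.random.rand(2 ** m)
X = hadamard(m) @ x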

Unit-3

a) Explain the difference between operations involving a 3*3 mask for median filtering and average filtering.
Ans. The average (mean) filter replaces each pixel by the mean of its 3*3 neighbourhood, a linear operation that smooths noise but also blurs edges; the median filter replaces each pixel by the median of that neighbourhood, a nonlinear operation that removes salt-and-pepper noise while preserving edges.
Average filter -

# Low pass spatial domain filtering
# to observe the blurring effect

import cv2
import numpy as np

# Read the image as grayscale
img = cv2.imread('sample.png', 0)

# Obtain the number of rows and columns of the image
m, n = img.shape

# Develop the averaging filter (3, 3) mask
mask = np.ones([3, 3], dtype=int)
mask = mask / 9

# Convolve the 3x3 mask over the image
img_new = np.zeros([m, n])

for i in range(1, m-1):
    for j in range(1, n-1):
        temp = (img[i-1, j-1]*mask[0, 0] + img[i-1, j]*mask[0, 1] + img[i-1, j+1]*mask[0, 2]
                + img[i, j-1]*mask[1, 0] + img[i, j]*mask[1, 1] + img[i, j+1]*mask[1, 2]
                + img[i+1, j-1]*mask[2, 0] + img[i+1, j]*mask[2, 1] + img[i+1, j+1]*mask[2, 2])
        img_new[i, j] = temp

img_new = img_new.astype(np.uint8)
cv2.imwrite('blurred.tif', img_new)
Median filter-
# Median spatial domain filtering

import cv2
import numpy as np

# Read the image as grayscale
img_noisy1 = cv2.imread('sample.png', 0)

# Obtain the number of rows and columns of the image
m, n = img_noisy1.shape

# Traverse the image. For every 3x3 area,
# find the median of the pixels and
# replace the center pixel by the median
img_new1 = np.zeros([m, n])

for i in range(1, m-1):
    for j in range(1, n-1):
        temp = [img_noisy1[i-1, j-1], img_noisy1[i-1, j], img_noisy1[i-1, j+1],
                img_noisy1[i, j-1],   img_noisy1[i, j],   img_noisy1[i, j+1],
                img_noisy1[i+1, j-1], img_noisy1[i+1, j], img_noisy1[i+1, j+1]]
        temp = sorted(temp)
        img_new1[i, j] = temp[4]   # the median is the 5th of the 9 sorted values

img_new1 = img_new1.astype(np.uint8)
cv2.imwrite('new_median_filtered.png', img_new1)

b) What would happen to the dynamic range of an image if all the slopes in the contrast stretching algorithm (l, m, n) are less than 1? Answer using an example.
Unit-4

a) Discuss the method of edge detection. Show how Prewitt and Sobel masks are used for detecting diagonal edges.

Ans. The Prewitt operator consists of two 3×3 templates that detect horizontal and vertical edges. When the weight at the central pixels, for both Prewitt templates, is doubled, this gives the famous Sobel edge-detection operator, which, again, consists of two masks to determine the edge in vector form. The Sobel operator was the most popular edge-detection operator until the development of edge-detection techniques with a theoretical basis. It proved popular because it gave, overall, a better performance than other contemporaneous edge-detection operators, such as the Prewitt operator. The templates for the Sobel operator can be found in Figure 4.10.
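
Since the textbook figure is not reproduced here, a sketch of one common convention for the diagonal (45-degree) Prewitt and Sobel kernels, applied with OpenCV's filter2D, is given below; the input filename is only a placeholder, and the 135-degree kernels are obtained by flipping these.

# Diagonal edge detection with 45-degree Prewitt and Sobel masks
import cv2
import numpy as np

img = cv2.imread('sample.png', 0).astype(np.float32)

prewitt_45 = np.array([[ 0,  1,  1],
                       [-1,  0,  1],
                       [-1, -1,  0]], dtype=np.float32)

# Sobel: the corner weights on the anti-diagonal through the centre are doubled relative to Prewitt
sobel_45 = np.array([[ 0,  1,  2],
                     [-1,  0,  1],
                     [-2, -1,  0]], dtype=np.float32)

edges_prewitt = cv2.filter2D(img, -1, prewitt_45)
edges_sobel = cv2.filter2D(img, -1, sobel_45)

cv2.imwrite('prewitt_diag.png', np.clip(np.abs(edges_prewitt), 0, 255).astype(np.uint8))
cv2.imwrite('sobel_diag.png', np.clip(np.abs(edges_sobel), 0, 255).astype(np.uint8))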

The Mathcad implementation of these masks is very similar to the implementation of the Prewitt operator (Code 4.2), again operating on a 3×3 subpicture. This is the standard formulation of the Sobel templates, but how do we form larger templates, say 5×5 or 7×7? Few textbooks state its original derivation, but it has been attributed (Heath et al., 1997) as originating from a PhD thesis (Sobel, 1970). Unfortunately a theoretical basis that can be used to calculate the coefficients of larger templates is rarely given. One approach to a theoretical basis is to consider the optimal forms of averaging and of differencing. Gaussian averaging has already been stated to give optimal averaging. The binomial expansion gives the integer coefficients of a series that, in the limit, approximates the normal distribution. Pascal's triangle gives sets of coefficients for a smoothing operator which, in the limit, approaches the coefficients of a Gaussian smoothing operator. Pascal's triangle is then:

1
1 1
1 2 1
1 3 3 1
1 4 6 4 1

b) Explain global and adaptive thresholding techniques.

Ans. A global thresholding technique is one which makes use of a single threshold value for the whole image, whereas a local thresholding technique makes use of unique threshold values for the partitioned subimages obtained from the whole image.
Adaptive thresholding is a method where the threshold value is calculated for smaller regions, so there are different threshold values for different regions. In OpenCV, you can perform an adaptive threshold operation on an image using the adaptiveThreshold() method (of the Imgproc class in the Java API, or cv2.adaptiveThreshold in Python).
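
A short Python sketch contrasting the two (the filename, the global threshold of 127, the 11x11 block size and the constant C = 2 are only illustrative values):

# Global vs adaptive thresholding with OpenCV
import cv2

img = cv2.imread('sample.png', 0)

# Global: a single threshold (127) applied to the whole image
_, global_th = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# Adaptive: the threshold for each pixel is the local 11x11 mean minus C = 2
adaptive_th = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                    cv2.THRESH_BINARY, 11, 2)

cv2.imwrite('global_threshold.png', global_th)
cv2.imwrite('adaptive_threshold.png', adaptive_th)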

UNIT-5

a) What are various arithmetic and logical operations performed on images?


Ans. Image arithmetic applies one of the standard arithmetic operations or a logical operator to two or more images. The operators are applied in a pixel-by-pixel way, i.e. the value of a pixel in the output image depends only on the values of the corresponding pixels in the input images. Hence, the images must be of the same size. Although image arithmetic is the simplest form of image processing, it has a wide range of applications. A main advantage of arithmetic operators is that the process is very simple and therefore fast.
Logical operators are often used to combine two (mostly binary) images. In the case of
integer images, the logical operator is normally applied in a bitwise way.
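
A brief sketch of pixel-wise arithmetic and bitwise logical operations with OpenCV (the two input filenames are placeholders; both images are assumed to be grayscale and of the same size):

# Pixel-wise arithmetic and bitwise logical operations on two equal-sized images
import cv2

img1 = cv2.imread('image1.png', 0)
img2 = cv2.imread('image2.png', 0)

added      = cv2.add(img1, img2)           # saturating addition
subtracted = cv2.subtract(img1, img2)      # saturating subtraction
anded      = cv2.bitwise_and(img1, img2)   # bitwise AND (intersection for binary images)
ored       = cv2.bitwise_or(img1, img2)    # bitwise OR (union for binary images)
notted     = cv2.bitwise_not(img1)         # bitwise NOT (complement)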

b) Explain opening and closing operations on images by using suitable examples.

Ans. Opening and closing are dual operations used in digital image processing for restoring an eroded image. Opening is generally used to restore or recover the original image to the maximum possible extent. Closing is generally used to smooth the contour of the distorted image and to fuse back narrow breaks and long thin gulfs. Closing is also used for getting rid of small holes in the obtained image.

The combination of Opening and Closing is generally used to clean up artifacts in the
segmented image before using the image for digital analysis.
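
A minimal OpenCV sketch of the two operations (the input filename is a placeholder, and the 5x5 square structuring element is an arbitrary choice):

# Opening (erosion then dilation) and closing (dilation then erosion)
import cv2
import numpy as np

img = cv2.imread('binary_image.png', 0)
kernel = np.ones((5, 5), np.uint8)        # 5x5 square structuring element

opened = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)    # removes small bright specks / noise
closed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)   # fills small holes and narrow breaks

cv2.imwrite('opened.png', opened)
cv2.imwrite('closed.png', closed)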
