
In the name of Allah, the Most Gracious, the Most Merciful

SUDAN INTERNATIONAL UNIVERSITY

Kassala Center [Engineering College]
Faculty of Engineering
HUC - Biomedical Engineering Department

Digital Image Processing Worksheet Paper

Lecturer: Dr. Amir Ahmed Omer Yousif
Questions & Answers

******************************************************************************************
Question 1:
a) Define Image
An image may be defined as a two-dimensional light-intensity function f(x, y), where x and y
denote spatial coordinates and the amplitude (value) of f at any point (x, y) is called the
intensity, gray level, or brightness of the image at that point.

b) What is Dynamic Range?

The range of values spanned by the gray scale is called the dynamic range of an image. An
image has high contrast if its dynamic range is high, and a dull, washed-out gray look if its
dynamic range is low.

c) Define Brightness
Brightness of an object is its perceived luminance, which depends on the luminance of its
surround. Two objects in different surroundings may have identical luminance but different
brightness.

d) What is meant by gray level?

Gray level refers to a scalar measure of intensity that ranges from black, through grays, to
white.

e) What is meant by a color model?

A color model is a specification of a 3-D coordinate system and a subspace within that system
where each color is represented by a single point.

f) List the hardware-oriented color models


1. RGB model

2. CMY model
3. YIQ model
4. HSI model

******************************************************************************************
Question 2:

a) List two kinds of image representation methods?


• Boundary Representation
• Region Representation

b) Name the types of connectivity and explain

a) 4-connectivity:
Two pixels p and q with values from V are 4-connected if q is in the set N4(p).
b) 8-connectivity:
Two pixels p and q with values from V are 8-connected if q is in the set N8(p).
c) m-connectivity (mixed connectivity):
Two pixels p and q with values from V are m-connected if
i. q is in N4(p), or
ii. q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
c) Name four types of image sharpening filters

1) Prewitt filter
2) Roberts filter
3) Sobel filter
4) Unsharp filter

d) Give the formulas for the D4 (city-block) distance and the D8 (chessboard) distance.

For pixels p = (x, y) and q = (s, t):

The D4 (city-block) distance is defined by

D4(p, q) = |x - s| + |y - t|

The D8 (chessboard) distance is defined by

D8(p, q) = max(|x - s|, |y - t|)
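The two distance formulas can be sketched directly in Python (the function names `d4` and `d8` are mine, chosen to match the notation above):

```python
def d4(p, q):
    """City-block distance: |x - s| + |y - t|."""
    (x, y), (s, t) = p, q
    return abs(x - s) + abs(y - t)

def d8(p, q):
    """Chessboard distance: max(|x - s|, |y - t|)."""
    (x, y), (s, t) = p, q
    return max(abs(x - s), abs(y - t))

print(d4((0, 0), (3, 4)))  # 7
print(d8((0, 0), (3, 4)))  # 4
```

Note that D8 counts king's moves on a chessboard, so diagonal steps cost the same as horizontal or vertical ones, which is why D8 ≤ D4 for any pair of pixels.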

e) Explain the concept, steps and equations involved in the unsharp
masking technique in image enhancement.

Unsharp masking:
The process of subtracting an unsharp (smoothed) version of an image from the original
image to obtain a sharpened image (high-pass filtering).
The process consists of the following steps:
a) Blur the original image.
b) Subtract the blurred image from the original (the resulting difference is called the mask).
c) Add the mask to the original image.
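The three steps above can be sketched in Python with NumPy; this is a minimal illustration using a simple box blur, not a production filter (the helper names are mine):

```python
import numpy as np

def box_blur(img, k=3):
    """Blur by averaging over a k x k neighborhood (edge-replicated borders)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def unsharp_mask(img, amount=1.0):
    """Steps a)-c): blur, subtract to form the mask, add the mask back."""
    blurred = box_blur(img)        # a) blur the original image
    mask = img - blurred           # b) subtract the blurred image (the mask)
    return img + amount * mask     # c) add the mask to the original

img = np.array([[10, 10, 10, 10],
                [10, 50, 50, 10],
                [10, 50, 50, 10],
                [10, 10, 10, 10]], dtype=float)
sharpened = unsharp_mask(img)
```

Pixels on the bright side of an edge end up brighter than the original and pixels on the dark side end up darker, which is exactly the edge-overshoot that makes the result look sharper.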

******************************************************************************************
Question 3:

a) What are the steps involved in digital image processing?


1. Image Acquisition
2. Preprocessing
3. Segmentation
4. Representation and Description
5. Recognition and Interpretation

b) What is recognition and interpretation?

Recognition is a process that assigns a label to an object based on the information
provided by its descriptors. Interpretation means assigning meaning to a recognized object.

The elements of a DIP system are:
1. Image acquisition
2. Storage
3. Processing
4. Display

c) List the categories of digital storage


1. Short term storage for use during processing.
2. Online storage for relatively fast recall.
3. Archival storage for infrequent access.

******************************************************************************************
Question 4:

a) Define Resolution
Resolution is the smallest discernible detail in an image. Spatial resolution is the smallest
discernible detail in an image, and gray-level resolution refers to the smallest discernible
change in gray level.

b) What is meant by pixel?

A digital image is composed of a finite number of elements, each of which has a particular
location and value. These elements are referred to as pixels, image elements, picture
elements, or pels.

c) Find the number of bits required to store a 256 × 256 image with 32
gray levels
32 gray levels = 2^5, so k = 5 bits per pixel.
256 × 256 × 5 = 327,680 bits.

d) Write the expression to find the number of bits to store a digital image?

For an M × N image with 2^k gray levels, the number of bits required is

b = M × N × k

When M = N, this equation becomes

b = N^2 × k
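The formula b = M × N × k can be checked against the worked example above with a short sketch (the function name is mine):

```python
def storage_bits(M, N, levels):
    """Bits to store an M x N image whose gray levels number 2**k."""
    k = levels.bit_length() - 1   # levels = 2**k, so k = log2(levels)
    return M * N * k

b = storage_bits(256, 256, 32)
print(b)  # 327680
```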

e) Write short notes on the neighbors of a pixel.

A pixel p at coordinates (x, y) has 4 neighbors, i.e., 2 horizontal and 2 vertical neighbors,
whose coordinates are (x+1, y), (x-1, y), (x, y-1), (x, y+1). These are called the direct
neighbors and are denoted by N4(p).
The four diagonal neighbors of p have coordinates (x+1, y+1), (x+1, y-1), (x-1, y-1),
(x-1, y+1) and are denoted by ND(p).
The eight neighbors of p, denoted by N8(p), are the combination of the 4 direct neighbors
and the 4 diagonal neighbors.
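The three neighbor sets can be written out as a small Python sketch (the lowercase helper names mirror the N4/ND/N8 notation above but are my own):

```python
def n4(p):
    """Direct (horizontal and vertical) neighbors of p = (x, y)."""
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y - 1), (x, y + 1)}

def nd(p):
    """Diagonal neighbors of p."""
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y - 1), (x - 1, y + 1)}

def n8(p):
    """All eight neighbors: the direct plus the diagonal ones."""
    return n4(p) | nd(p)

print(sorted(n4((1, 1))))  # [(0, 1), (1, 0), (1, 2), (2, 1)]
```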

******************************************************************************************
Question 5:

a) Explain sampling and quantization:

For computer processing, the image function f(x, y) must be digitized both spatially and in
amplitude. Digitization of the spatial coordinates is called image sampling, and amplitude
digitization is called gray-level quantization.

Sampling:
Consider a digital image of size 1024 × 1024 with 256 gray levels, with the display area used
for the image kept the same. The pixels in the lower-resolution images are duplicated in
order to fill the entire display; this pixel replication produces a checkerboard effect, which
is visible in the images of lower resolution. It is difficult to differentiate a 512 × 512
image from the 1024 × 1024 image under this effect, although a slight increase in graininess
and a small decrease in sharpness are noticeable. A 256 × 256 image shows a fine checkerboard
pattern at the edges and more pronounced graininess throughout the image. These effects are
much more visible in a 128 × 128 image and become quite pronounced in 64 × 64 and 32 × 32
images.

Quantization:
Quantization concerns the effects produced when the number of bits used to represent the
gray levels in an image is decreased. This can be illustrated by reducing the number of gray
levels used to represent a 1024 × 1024 image. The 256-, 128-, and 64-level images are
visually identical for all practical purposes. The 32-level image, however, develops
ridge-like structures in areas of smooth gray. This effect, caused by using an insufficient
number of gray levels in smooth areas of a digital image, is called false contouring, and it
is clearly visible in images displayed using 16 or fewer gray levels.
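Gray-level reduction of the kind described above can be sketched with a simple requantization step; on a smooth ramp, the few remaining levels form the visible bands of false contouring (the helper name is mine):

```python
import numpy as np

def requantize(img, levels):
    """Map 8-bit values onto `levels` evenly spaced gray levels."""
    step = 256 // levels
    return (img // step) * step

ramp = np.arange(256, dtype=np.uint8)   # smooth 0..255 gradient
coarse = requantize(ramp, 16)           # only 16 gray levels remain
print(len(np.unique(coarse)))  # 16
```

Each run of identical output values in `coarse` corresponds to one flat band that the eye perceives as a contour in what should be a smooth gradient.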

******************************************************************************************
Question 6:

Explain the basic Elements of digital image processing:


The five elements of a digital image processing system are:
• Image acquisition
• Storage
• Processing
• Communication
• Display
1) Image acquisition:
Two devices are required to acquire a digital image:
1) a physical sensing device, which produces an electrical signal proportional to the
amount of light energy sensed, and
2) a digitizer, a device for converting the electrical output into digital form.
2) Storage:
An 8-bit image of size 1024 × 1024 requires one million bytes of storage. There are three
types of storage:
1. Short-term storage:
Used during processing; it is provided by computer memory and consists of frame buffers
that can store one or more images and can be accessed quickly at video rates.
2. Online storage:
Used for relatively fast recall; it normally uses magnetic disks. Winchester disks with
hundreds of megabytes are commonly used.
3. Archival storage:
Passive storage devices used for infrequent access; magnetic tape and optical discs are the
usual media. High-density magnetic tape can store 1 megabit in about 13 feet of tape.
3) Processing:
Processing of a digital image involves procedures that are expressed in terms of algorithms.
With the exception of image acquisition and display, most image processing functions can be
implemented in software; specialized hardware is needed only when increased speed is
required. Large-scale image processing systems are still used for massive imaging
applications, but the trend is toward general-purpose small computers equipped with image
processing hardware.
4) Communication:
Communication in image processing involves local communication between image processing
systems and remote communication from one point to another, usually in connection with the
transmission of image data. Hardware and software for communication are available for most
computers. A telephone line can transmit at a maximum rate of 9600 bits per second, so
transmitting a 512 × 512, 8-bit image at this rate would require about 5 minutes. Wireless
links using intermediate stations such as satellites are much faster, but they are costly.
5) Display:
Monochrome and color TV monitors are the principal display devices used in modern image
processing systems. The monitors are driven by the outputs of the hardware in the display
module of the computer.

******************************************************************************************
Question 7:

a) What is the need for transforms?

Most signals and images are represented in the time or spatial domain, i.e., as functions
measured over time or space. This representation is not always the best for processing. For
many image processing applications, a mathematical transform is applied to the signal or
image to obtain further information that is not readily available in the raw signal.

b) What is an image transform?

An image can be expanded in terms of a discrete set of basis arrays called basis images.
These basis images can be generated by unitary matrices. Alternatively, a given N × N image
can be viewed as an N^2 × 1 vector. An image transform provides a set of coordinates or
basis vectors for the vector space.

c) What are the applications of image transforms?

1) To reduce bandwidth
2) To reduce redundancy
3) To extract features

d) Give the condition for perfect transformation

The transpose of the transform matrix must equal its inverse (i.e., the transform matrix
must be unitary/orthogonal).

e) Specify the properties of the 2-D Fourier transform

The properties are
• Separability
• Translation
• Periodicity and conjugate symmetry
• Rotation
• Distributivity and scaling
• Average value
• Laplacian
• Convolution and correlation
• Sampling
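The first property, separability, is easy to demonstrate numerically: a 2-D DFT equals a 1-D DFT applied to the rows followed by a 1-D DFT applied to the columns. A minimal sketch with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((8, 8))

direct = np.fft.fft2(img)                                  # direct 2-D DFT
rows_then_cols = np.fft.fft(np.fft.fft(img, axis=1), axis=0)  # two 1-D passes

print(np.allclose(direct, rows_then_cols))  # True
```

Separability is what makes the 2-D FFT fast: an N × N transform costs two sets of N one-dimensional FFTs instead of one quadratic-size computation.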

******************************************************************************************
Question 8:

a) Specify the objective of image enhancement technique.


The objective of enhancement technique is to process an image so that the result is
more suitable than the original image for a particular application.

b) List the 2 categories of image enhancement.
• Spatial domain methods refer to the image plane itself; approaches in this category are
based on direct manipulation of the pixels in the image.
• Frequency domain methods are based on modifying the Fourier transform of the image.

c) What is the purpose of image averaging?

An important application of image averaging is in the field of astronomy, where imaging at
very low light levels is routine and sensor noise frequently renders single images virtually
useless for analysis. Averaging K uncorrelated noisy images of the same scene reduces the
noise variance by a factor of K.
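The noise reduction from averaging can be seen in a short simulation (synthetic data of my own, chosen only to illustrate the effect): averaging K = 25 noisy copies should shrink the noise standard deviation by roughly sqrt(25) = 5.

```python
import numpy as np

rng = np.random.default_rng(1)
scene = np.full((64, 64), 100.0)                  # the true, noise-free scene
K = 25
noisy = [scene + rng.normal(0, 10, scene.shape)   # K noisy acquisitions
         for _ in range(K)]

single_err = np.std(noisy[0] - scene)             # noise level of one image
avg_err = np.std(np.mean(noisy, axis=0) - scene)  # noise level after averaging
print(single_err / avg_err)  # roughly 5, i.e. sqrt(K)
```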

d) What is meant by masking?

• A mask is a small 2-D array in which the values of the mask coefficients determine the
nature of the process.
• Enhancement techniques based on this type of approach are referred to as mask processing.

******************************************************************************************
Question 9:

a) What are the three methods of estimating the degradation function?


1. Observation
2. Experimentation
3. Mathematical modeling.

b) What is meant by Noise probability density function?


The spatial noise descriptor is the statistical behavior of gray level values in the noise
component of the model.

c) What are the types of noise models?


• Gaussian noise
• Rayleigh noise
• Erlang noise
• Exponential noise
• Uniform noise
• Impulse noise

d) What is a Median filter?


The median filter replaces the value of a pixel by the median of the gray levels in the
neighborhood of that pixel.

e) What are the maximum filter and minimum filter?
The 100th percentile filter is the maximum filter, used for finding the brightest points in
an image. The 0th percentile filter is the minimum filter, used for finding the darkest
points in an image.
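The median, maximum, and minimum filters are all order-statistic filters and can share one sketch; this minimal version (helper name mine) processes interior pixels of a 3 × 3 neighborhood only, for brevity:

```python
import numpy as np

def percentile_filter(img, stat):
    """Apply an order statistic (np.median, np.max, np.min) over 3x3 windows."""
    out = img.astype(float).copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = stat(img[i - 1:i + 2, j - 1:j + 2])
    return out

img = np.array([[10, 10, 10],
                [10, 255, 10],   # single bright impulse ("salt" noise)
                [10, 10, 10]], dtype=float)

median = percentile_filter(img, np.median)
print(median[1, 1])  # 10.0 -- the impulse is removed
```

The example shows why the median filter is preferred for impulse noise: the lone outlier never reaches the output, whereas a mean filter would smear it into its neighbors.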

f) Write the applications of sharpening filters

1. Applications ranging from electronic printing and medical imaging to industrial inspection
2. Autonomous target detection in smart weapons

******************************************************************************************
Question 10:

a) Explain the RGB image color model

In the RGB model, each color appears in its primary spectral components of red, green, and
blue. The model is based on a Cartesian coordinate system, and the color subspace of
interest is the unit cube: the primary RGB values are at three corners; cyan, magenta, and
yellow are at three other corners; black is at the origin; and white is at the corner
farthest from the origin. In this model the gray scale extends from black to white along the
line joining those two points.

Images represented in the RGB color model consist of three component images, one for each
primary color. When each of the red, green, and blue component images is an 8-bit image,
each RGB color pixel is said to have a depth of 24 bits. The total number of colors in a
24-bit RGB image is (2^8)^3 = 16,777,216.

Acquiring a color image is basically the process shown in the figure: a color image can be
acquired by using three filters, sensitive to red, green, and blue. When we view a color
scene with a monochrome camera equipped with one of these filters, the result is a
monochrome image whose intensity is proportional to the response of that filter.

b) Explain the CMY image color model

Cyan, magenta, and yellow are the secondary colors of light (and the primary colors of
pigments). When a surface coated with cyan pigment is illuminated with white light, no red
light is reflected from the surface: cyan subtracts red light from reflected white light,
which itself is composed of equal amounts of red, green, and blue light. Most devices that
deposit colored pigments on paper require CMY data input or perform an RGB-to-CMY conversion
internally:

C = 1 - R
M = 1 - G
Y = 1 - B

All color values are assumed to have been normalized to the range [0, 1]; light reflected
from a surface coated with pure cyan does not contain red. RGB values can be obtained easily
from a set of CMY values by subtracting the individual CMY values from 1. Combining equal
amounts of cyan, magenta, and yellow produces black; in practice a separate black is added,
giving rise to the CMYK color model used in four-color printing.
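The conversion equations above and their inverse can be sketched in a few lines (function names are mine; values normalized to [0, 1] as stated):

```python
import numpy as np

def rgb_to_cmy(rgb):
    """C = 1 - R, M = 1 - G, Y = 1 - B, for values in [0, 1]."""
    return 1.0 - np.asarray(rgb, dtype=float)

def cmy_to_rgb(cmy):
    """Inverse conversion: subtract the CMY values from 1."""
    return 1.0 - np.asarray(cmy, dtype=float)

red = [1.0, 0.0, 0.0]
print(rgb_to_cmy(red))  # [0. 1. 1.] -- pure red maps to zero cyan
```

Because each conversion is its own inverse, converting RGB to CMY and back returns the original values exactly.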

c) Explain the HSI image color model

The HSI Color Model
The RGB, CMY, and other color models are not well suited for describing colors in terms
that are practical for human interpretation. For example, one does not refer to the color
of an automobile by giving the percentage of each of the primaries composing its color.

When humans view a color object, we describe it by its hue, saturation, and brightness.
• Hue is a color attribute that describes a pure color.
• Saturation gives a measure of the degree to which a pure color is diluted by white
light.
• Brightness is a subjective descriptor that is practically impossible to measure. It
embodies the achromatic notion of intensity and is one of the key factors in describing
color sensation.
• Intensity is the most useful descriptor of monochromatic images.

&&&&&

