UNIT 2
SNO CONTENTS
1 SYLLABUS
2 LECTURE NOTES
3 PRACTICE QUIZ
1. SYLLABUS
UNIT 2: Fundamentals of Image Processing - II
Image sampling and quantization, basic relationships between pixels –
neighborhood, adjacency, connectivity and distance measures,
mathematical operations in image processing.
2. LECTURE NOTES
To create a digital image, we need to convert the continuous sensed data into
digital form. This involves two processes: sampling and quantization. Consider a
continuous image, f(x, y), that we want to convert to digital form. An image may
be continuous with respect to the x- and y-coordinates, and also in amplitude. To
convert it to digital form, we have to sample the function in both coordinates
and in amplitude. Digitizing the coordinate values is called sampling; digitizing
the amplitude values is called quantization.
Figure (a) shows an example of a continuous image. Figure (b) shows a plot of the
amplitude (gray-level) values of the continuous image along the line segment
AB. Figure (c): to sample this function, we take equally spaced samples along line
AB. The location of each sample is given by a vertical tick mark in the bottom
part of the figure, and the samples are shown as small white squares superimposed
on the function.
The set of these discrete locations gives the sampled function. However, the
values of the samples still span (vertically) a continuous range of gray-level
values. In order to form a digital function, the gray-level values also must be
converted (quantized) into discrete quantities. The right side of Figure (c) shows
the gray-level scale divided into eight discrete levels, ranging from black to
white. The vertical tick marks indicate the specific value assigned to each of the
eight gray levels. The continuous gray levels are quantized simply by assigning
one of the eight discrete gray levels to each sample, depending on the vertical
proximity of the sample to a tick mark.
The digital samples result from both sampling and quantization. This digitization
process requires decisions about the values of M and N (the number of rows and
columns) and about the number of discrete gray levels (quantization levels), L.
There are no requirements on M and N other than that they must be positive
integers. Due to processing, storage and hardware considerations, the number of
gray levels is typically an integer power of 2:
L = 2^k
The number of bits required to store a digitized image is then b = M × N × k.
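To make the quantization step concrete, the sketch below (an illustrative example that is not part of the original notes) reduces an 8-bit grayscale array to L = 2^k gray levels with NumPy; the array img and the value of k are assumed inputs.

import numpy as np

def quantize(img, k):
    """Quantize an 8-bit grayscale image to L = 2**k discrete gray levels."""
    L = 2 ** k                          # number of quantization levels
    step = 256 / L                      # width of each quantization bin
    levels = np.floor(img / step)       # bin index of every pixel (0 .. L-1)
    # map each bin index back to a representative 8-bit gray value
    return (levels * (255 / (L - 1))).astype(np.uint8)

# Example: reduce a random 8-bit "image" to 8 gray levels (k = 3)
img = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
print(quantize(img, k=3))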
Concept of Resolution
The resolution (i.e., the degree of discernible detail) of an image is strongly
dependent on both the image size N and the number of gray levels L. In an image
sampling procedure, the sampling rate (or pixel clock) of the frame grabber or
digitizer determines the spatial resolution of the digitized image. The pixels in the
lower-resolution image (b) are duplicated in order to fill out the display field.
Spatial Resolution:
The spatial resolution of an image is a measure of how much detail the image
holds. Image resolution quantifies how close two lines (say, one dark and
one light) can be to each other and still be visibly resolved. The resolution can be
specified as a number of lines per unit distance, say 10 lines per mm or 5 line pairs
per mm. Another measure of image resolution is dots per inch, i.e. the number of
discernible dots (pixels) per inch (ppi).
Fig 1.12. (a) Original high-resolution image (b) Low-resolution image
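As an illustration of the effect shown in Fig 1.12, the sketch below (assuming an 8-bit grayscale NumPy array img; the parameter factor is a made-up name) lowers spatial resolution by keeping only every factor-th sample and then duplicates the pixels to fill out the original display field, as described above.

import numpy as np

def lower_spatial_resolution(img, factor):
    """Subsample the image by 'factor', then duplicate pixels back to the
    original size so the loss of detail is visible at the same display size."""
    small = img[::factor, ::factor]                 # keep every 'factor'-th sample
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)   # toy 8x8 "image"
low = lower_spatial_resolution(img, factor=2)
print(low.shape)   # (8, 8): same size, half the effective resolution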
Intensity Resolution:
Intensity (gray-level) resolution refers to the smallest discernible change in
intensity level; in practice it is determined by the number of bits used to quantize
the intensity of each pixel.
In digital imaging, a pixel (or picture element) is a single point in a raster image.
The pixel is the smallest addressable screen element; it is the smallest unit of
picture that can be controlled. Each pixel has its own address. The address of a
pixel corresponds to its coordinates.
Pixels are normally arranged in a 2-dimensional grid, and are often represented
using dots or squares. Each pixel is a sample of an original image; more samples
typically provide more accurate representations of the original. The intensity of
each pixel is variable. In color image systems, a color is typically represented by
three or four component intensities such as red, green, and blue, or cyan,
magenta, yellow, and black.
The number of distinct colors that can be represented by a pixel depends on the
number of bits per pixel (bpp). A 1 bpp image uses 1-bit for each pixel, so each
pixel can be either on or off. Each additional bit doubles the number of colors
available, so a 2 bpp image can have 4 colors, and a 3 bpp image can have 8
colors:
For color depths of 15 or more bits per pixel, the depth is normally the sum of the
bits allocated to each of the red, green, and blue components. High color,
usually meaning 16 bpp, normally has five bits for red and blue and six bits for
green, as the human eye is more sensitive to errors in green than in the other two
primary colors. For applications involving transparency, the 16 bits may be
divided into five bits each of red, green, and blue, with one bit left for
transparency. A 24-bit depth allows 8 bits per component. On some systems, 32-bit
depth is available: this means that each 24-bit pixel has an extra 8 bits to describe
its opacity (for purposes of combining with another image).
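The relationship between bits per pixel and the number of representable colors, and the 5-6-5 high-color layout described above, can be sketched as follows (an illustrative example; the helper names are made up and not part of any standard library):

def colors_for_bpp(bpp):
    """Number of distinct colors representable with 'bpp' bits per pixel."""
    return 2 ** bpp

def pack_rgb565(r, g, b):
    """Pack 8-bit R, G, B components into a 16-bit high-color value:
    5 bits red, 6 bits green, 5 bits blue."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

print(colors_for_bpp(1), colors_for_bpp(2), colors_for_bpp(3))   # 2 4 8
print(colors_for_bpp(24))                                        # 16777216
print(hex(pack_rgb565(255, 255, 255)))                           # 0xffff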
Neighborhood
A pixel p at coordinates (x, y) has four horizontal and vertical neighbors, located
at (x+1, y), (x-1, y), (x, y+1) and (x, y-1). This set of pixels is called the 4-neighbors
of p, denoted N4(p). The four diagonal neighbors of p, located at (x+1, y+1),
(x+1, y-1), (x-1, y+1) and (x-1, y-1), are denoted ND(p). Together with the
4-neighbors they form the 8-neighbors of p, denoted N8(p).
Adjacency
Let V be the set of intensity values used to define adjacency. Two pixels p and q
with values from V are 4-adjacent if q is in the set N4(p), 8-adjacent if q is in N8(p),
and m-adjacent (mixed adjacent) if q is in N4(p), or q is in ND(p) and the set
N4(p) ∩ N4(q) has no pixels whose values are from V.
Two pixels p and q are said to be connected to each other if there exists a path
from p to q consisting of a sequence of pixels such that every two successive pixels
in the sequence are adjacent. The set C of pixels, any two members of which are
connected to each other by a path which comprises pixels in the set C, is called a
connected component.
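A minimal sketch of how connected components can be extracted from a binary image using 4-adjacency and a breadth-first search (an illustrative implementation, not taken from the notes; img is assumed to be a small binary NumPy array):

import numpy as np
from collections import deque

def connected_components_4(img):
    """Label the 4-connected components of a binary image.
    Pixels of each component share one positive label in the output."""
    labels = np.zeros(img.shape, dtype=int)
    current = 0
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            if img[x, y] and labels[x, y] == 0:
                current += 1                      # start a new component
                queue = deque([(x, y)])
                labels[x, y] = current
                while queue:
                    i, j = queue.popleft()
                    # visit the 4-neighbors (up, down, left, right)
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < img.shape[0] and 0 <= nj < img.shape[1]
                                and img[ni, nj] and labels[ni, nj] == 0):
                            labels[ni, nj] = current
                            queue.append((ni, nj))
    return labels

img = np.array([[1, 1, 0, 0],
                [0, 1, 0, 1],
                [0, 0, 0, 1]])
print(connected_components_4(img))    # two components: labels 1 and 2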
Region
Let R be a subset of the pixels in an image. We call R a region of the image if R is a
connected set of pixels.
Boundary pixels
The boundary (also called border or contour) of a region R is the set of pixels in
the region that have one or more neighbors that are not in R. If R happens to be
an entire image, then its boundary is defined as the set of pixels in the first and
last rows and columns in the image. This extra definition is required because an
image has no neighbors beyond its borders. Normally, when we refer to a region,
we are referring to a subset of an image, and any pixels in the boundary of the
region that happen to coincide with the border of the image are included
implicitly as part of the region boundary.
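Following the definition above, the boundary of a binary region can be found by keeping the region pixels that have at least one 4-neighbor outside the region. A small illustrative sketch, assuming region is a binary NumPy array (pixels on the image border are counted as boundary, consistent with the definition above):

import numpy as np

def boundary_pixels(region):
    """Return a mask of region pixels that have at least one 4-neighbor
    lying outside the region (i.e., the region boundary)."""
    # pad with zeros so pixels on the image border count as having
    # an "outside" neighbor
    padded = np.pad(region, 1, mode="constant")
    inside = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
              padded[1:-1, :-2] & padded[1:-1, 2:])   # all 4-neighbors in region
    return region & ~inside

region = np.array([[0, 1, 1, 1, 0],
                   [0, 1, 1, 1, 0],
                   [0, 1, 1, 1, 0]], dtype=bool)
print(boundary_pixels(region).astype(int))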
A Digital Path
A digital path (or curve) from pixel p with coordinates (x, y) to pixel q with
coordinates (s, t) is a sequence of distinct pixels with coordinates
(x0, y0), (x1, y1), ..., (xn, yn),
where (x0, y0) = (x, y) and (xn, yn) = (s, t), and pixels (xi, yi) and (xi-1, yi-1) are
adjacent for 1 ≤ i ≤ n, where n is the length of the path.
If (x0, y0) = (xn, yn), the path is closed.
We can specify 4-, 8- or m-paths depending on the type of adjacency specified.
Figure: (a) Arrangement of pixels; (b) pixels that are 8-adjacent (shown dashed)
to the center pixel; (c) m-adjacency.
In figure (b) the paths between the top-right and bottom-right pixels are 8-paths,
while the path between the same two pixels in figure (c) is an m-path.
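To make the path definition concrete, here is a small illustrative check (not from the notes; the function names are made up) that a sequence of pixel coordinates forms a 4-path or an 8-path, i.e., that every two successive pixels are 4-adjacent or 8-adjacent:

def is_4_adjacent(p, q):
    """True if q is a horizontal or vertical neighbor of p."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1]) == 1

def is_8_adjacent(p, q):
    """True if q is one of the 8 neighbors of p."""
    return p != q and max(abs(p[0] - q[0]), abs(p[1] - q[1])) == 1

def is_path(pixels, adjacent):
    """Check that every pair of successive pixels satisfies 'adjacent'."""
    return all(adjacent(pixels[i - 1], pixels[i]) for i in range(1, len(pixels)))

path = [(0, 0), (1, 1), (2, 1), (2, 2)]
print(is_path(path, is_4_adjacent))   # False: (0,0) -> (1,1) is a diagonal step
print(is_path(path, is_8_adjacent))   # True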
DISTANCE MEASURES
For pixels p, q and z with coordinates (x, y), (s, t) and (v, w) respectively, D is a
distance function (or metric) if
(a) D(p, q) ≥ 0 (D(p, q) = 0 if and only if p = q),
(b) D(p, q) = D(q, p), and
(c) D(p, z) ≤ D(p, q) + D(q, z).
The Euclidean distance between p and q is defined as:
De(p, q) = sqrt((x - s)^2 + (y - t)^2)
Pixels having a Euclidean distance less than or equal to some value r from (x, y)
are the points contained in a disk of radius r centered at (x, y).
The D4 distance (also called city-block distance) between p and q is defined as:
D4(p, q) = |x - s| + |y - t|
Pixels having a D4 distance from (x, y) less than or equal to some value r form a
diamond centered at (x, y).
Example:
The pixels with distance D4 ≤ 2 from (x, y) form the following contours of constant
distance:
        2
    2   1   2
2   1   0   1   2
    2   1   2
        2
The pixels with D4 = 1 are the 4-neighbors of (x, y).
The D8 distance (also called chessboard distance) between p and q is defined as:
D8(p, q) = max(|x - s|, |y - t|)
Pixels having a D8 distance from (x, y) less than or equal to some value r form a
square centered at (x, y).
Example:
The pixels with D8 distance ≤ 2 from (x, y) form the following contours of constant
distance:
2   2   2   2   2
2   1   1   1   2
2   1   0   1   2
2   1   1   1   2
2   2   2   2   2
The pixels with D8 = 1 are the 8-neighbors of (x, y).
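The three distance measures can be compared directly with the short sketch below (illustrative only; the coordinates are hypothetical):

import math

def d_euclidean(p, q):
    """Euclidean distance De(p, q)."""
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def d4(p, q):
    """City-block distance D4(p, q) = |x - s| + |y - t|."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    """Chessboard distance D8(p, q) = max(|x - s|, |y - t|)."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_euclidean(p, q))   # 5.0
print(d4(p, q))            # 7
print(d8(p, q))            # 4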
3. PRACTICE QUIZ
1. Digitization of the spatial co-ordinates (x, y) is called
a) Gray Level quantization
b) finite sampling
c) Image sampling
d) Image quantization
2. The number of shades of gray in a six-bit image is
a) 256
b) 128
c) 64
d) 32
3. Which of the following refers to the joining of objects and components of a
region in an image?
a) adjacent
b) connectivity
c) merging
d) linking
4. The D8 distance is also called
a) Euclidean distance
b) City block distance
c) Chess board distance
d) None of the above
5. Radio wave band encompasses
a) audio
b) AM
c) FM
d) Both B & C
6. Hard x-rays are used in
a) medicines
b) lithoscopy
c) Industry
d) Radar
7. The number of bits required to store an image is given by the formula
a) b = NxK
b) b = MxNxK
c) b = MxK
d) b = MxN
References:
1. S. Jayaraman, S. Esakkirajan, T. Veerakumar, "Digital Image Processing", Tata
McGraw Hill.
2. William K. Pratt, "Digital Image Processing", John Wiley, 3rd Edition, 2004.
3. Rafael C. Gonzalez, Richard E. Woods and Steven L. Eddins, "Digital Image
Processing Using MATLAB", Tata McGraw Hill, 2010.