UNIT 2

Unit 2 covers the fundamentals of image processing, focusing on image sampling, quantization, and pixel relationships. It explains the processes of converting continuous images to digital form, the representation of digital images, and the concepts of resolution, intensity, and pixel adjacency. Additionally, it includes a practice quiz to reinforce learning and lists prescribed textbooks and references for further study.


SNO  CONTENTS

1    SYLLABUS

2    LECTURE NOTES

     1.1 Image Sampling and Quantization
     1.2 Representation of Digital Images
     1.3 Basic relationships between pixels – neighborhood, adjacency,
         connectivity and distance measures
     1.4 Mathematical operations in image processing

3    PRACTICE QUIZ

1. SYLLABUS
UNIT 2
Fundamentals of Image Processing - II
Image sampling and quantization, basic relationships between pixels –
neighborhood, adjacency, connectivity and distance measures,
mathematical operations in image processing.

2. LECTURE NOTES

1.1 Image Sampling and Quantization

To create a digital image, we need to convert the continuous sensed data into
digital form. This involves two processes: sampling and quantization. Consider a
continuous image, f(x, y), that we want to convert to digital form. An image may
be continuous with respect to the x- and y-coordinates, and also in amplitude. To
convert it to digital form, we have to sample the function in both coordinates
and in amplitude. Digitizing the coordinate values is called sampling; digitizing
the amplitude values is called quantization.


Fig 1.10: Image Sampling and quantization

Figure (a) shows an example of a continuous image. Figure (b) shows a plot of
amplitude (gray-level) values of the continuous image along the line segment
AB. Figure (c): to sample this function, we take equally spaced samples along line
AB. The location of each sample is given by a vertical tick mark in the bottom
part of the figure, and the samples are shown as small white squares superimposed
on the function.
The set of these discrete locations gives the sampled function. However, the
values of the samples still span (vertically) a continuous range of gray-level
values. In order to form a digital function, the gray-level values must also be
converted (quantized) into discrete quantities. The right side of Figure (c)
shows the gray-level scale divided into eight discrete levels, ranging from black
to white. The vertical tick marks indicate the specific value assigned to each of
the eight gray levels. The continuous gray levels are quantized simply by
assigning one of the eight discrete gray levels to each sample; the assignment is
made depending on the vertical proximity of a sample to a vertical tick mark.
The digital samples result from applying both sampling and quantization. This
digitization process requires decisions about the values of M, N, and the number,
L, of discrete gray levels (quantization levels). There are no requirements on M
and N other than that they must be positive integers. However, due to processing,
storage, and hardware considerations, the number of gray levels is typically an
integer power of 2:

L = 2^k

Then the number of bits, b, required to store a digital image is

b = M × N × k

When M = N, the equation becomes

b = N² × k

When an image can have 2^k gray levels, it is referred to as a "k-bit image". An
image with 256 possible gray levels is called an "8-bit image" (256 = 2^8).
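The storage formula above can be sketched in a few lines of Python (illustrative only; the function name is ours, not from the notes):

```python
def image_storage_bits(M, N, k):
    """Bits needed to store an M x N image with L = 2**k gray levels."""
    return M * N * k

# An 8-bit (k = 8, L = 256) image of size 1024 x 1024:
b = image_storage_bits(1024, 1024, 8)
print(b)        # 8388608 bits
print(b // 8)   # 1048576 bytes (1 MB)
```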

1.2 Representation of Digital images


The result of sampling and quantization is a matrix of real numbers. Assume that
an image f(x, y) is sampled so that the resulting digital image has M rows and N
columns. The values of the coordinates (x, y) now become discrete quantities;
the value of the coordinates at the origin is (x, y) = (0, 0), and the next
coordinate values along the first row are (x, y) = (0, 1), and so on. This does
not mean that these are the actual values of the physical coordinates when the
image was sampled. The complete M × N digital image can then be written as a
matrix:

f(x, y) = | f(0,0)     f(0,1)     ...  f(0,N-1)   |
          | f(1,0)     f(1,1)     ...  f(1,N-1)   |
          | ...        ...        ...  ...        |
          | f(M-1,0)   f(M-1,1)   ...  f(M-1,N-1) |

Each element of this matrix array is called an image element, picture element,
pixel, or pel. The matrix can also be represented in conventional matrix form
with elements a(i, j) = f(x = i, y = j).


Fig 1.11: Representations of Digital Image

Concept of Resolution
The resolution (i.e., the degree of discernible detail) of an image is strongly
dependent on both the size N and the number of gray levels L. In an image
sampling procedure, the sampling rate (or pixel clock) of the frame grabber or
digitizer determines the spatial resolution of the digitized image. The pixels in
a lower-resolution image (Fig 1.12 (b)) are duplicated in order to fill out the
display field.

Spatial Resolution:
The spatial resolution of an image is a measure of how much detail the image
holds. Image resolution quantifies how close two lines (say, one dark and one
light) can be to each other and still be visibly resolved. The resolution can be
specified as a number of lines per unit distance, say 10 lines per mm or 5 line
pairs per mm. Another measure of image resolution is dots per inch, i.e., the
number of discernible dots (pixels) per inch (ppi).


The following figure shows the same image at different sampling resolutions.

Fig 1.12: (a) Original high-resolution image; (b) low-resolution image

Intensity Resolution:

Intensity resolution of an image refers to the smallest possible intensity change
that can be distinguished in the image. As we increase the intensity range of an
image by increasing the number of possible gray (intensity) levels used to
represent it, we also increase the intensity resolution. The number of intensity
levels is commonly chosen as 2^8 = 256. A display capable of showing an 8-bit
image can quantize the gray intensity or color intensity in fixed increments of
1/256 of the full intensity range.
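The uniform quantization described above can be sketched in plain Python (illustrative only; the function name `requantize` and the sample row are ours). Dropping the low-order bits of a k_in-bit value maps it to one of 2^k_out discrete levels, exactly as in the eight-level example of Fig 1.10:

```python
def requantize(value, k_in=8, k_out=3):
    """Reduce a k_in-bit gray value to one of 2**k_out discrete levels
    by dropping the low-order bits (uniform quantization)."""
    return value >> (k_in - k_out)

# 8-bit samples taken along a scan line, quantized to 8 levels:
row = [0, 30, 64, 100, 128, 200, 255]
print([requantize(v) for v in row])   # [0, 0, 2, 3, 4, 6, 7]
```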


1.3 BASIC RELATIONSHIPS BETWEEN PIXELS

In digital imaging, a pixel (or picture element) is a single point in a raster image.
The pixel is the smallest addressable screen element; it is the smallest unit of
picture that can be controlled. Each pixel has its own address. The address of a
pixel corresponds to its coordinates.
Pixels are normally arranged in a 2-dimensional grid, and are often represented
using dots or squares. Each pixel is a sample of an original image; more samples
typically provide more accurate representations of the original. The intensity of
each pixel is variable. In color image systems, a color is typically represented by
three or four component intensities such as red, green, and blue, or cyan,
magenta, yellow, and black.
The number of distinct colors that can be represented by a pixel depends on the
number of bits per pixel (bpp). A 1 bpp image uses 1 bit for each pixel, so each
pixel can be either on or off. Each additional bit doubles the number of colors
available, so a 2 bpp image can have 4 colors, a 3 bpp image can have 8 colors,
and in general an n bpp image can have 2^n colors.
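The doubling rule can be verified with a one-line Python check (illustrative only):

```python
# Number of representable colors doubles with each extra bit per pixel.
for bpp in (1, 2, 3, 8, 16, 24):
    print(f"{bpp} bpp -> {2 ** bpp} colors")
```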

For color depths of 15 or more bits per pixel, the depth is normally the sum of
the bits allocated to each of the red, green, and blue components. High color,
usually meaning 16 bpp, normally has five bits for red and blue and six bits for
green, as the human eye is more sensitive to errors in green than in the other
two primary colors. For applications involving transparency, the 16 bits may
instead be divided into five bits each of red, green, and blue, with one bit left
for transparency. A 24-bit depth allows 8 bits per component. On some systems,
32-bit depth is available: this means that each 24-bit pixel has an extra 8 bits
to describe its opacity (for purposes of combining with another image).

Neighborhood

In a 2-D coordinate system, each pixel p in an image can be identified by a pair
of spatial coordinates (x, y). A pixel p has two horizontal neighbors, (x−1, y)
and (x+1, y), and two vertical neighbors, (x, y−1) and (x, y+1).


These 4 pixels together constitute the 4-neighbors of pixel p, denoted as N4(p).


The pixel p also has 4 diagonal neighbors which are: (x+1, y+1), (x+1, y−1), (x−1,
y+ 1), (x−1, y−1).
The set of 4 diagonal neighbors forms the diagonal neighborhood denoted as
ND(p).
The set of 8 pixels surrounding the pixel p forms the 8-neighborhood denoted as
N8(p).
We have N8(p) = N4(p) ∪ ND(p).
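The three neighborhoods can be sketched in Python, treating pixel coordinates as (x, y) tuples (function names are ours; no image bounds checking is done here):

```python
def n4(p):
    """4-neighbors of p: the two horizontal and two vertical neighbors."""
    x, y = p
    return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}

def nd(p):
    """Diagonal neighbors of p."""
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(p):
    """8-neighborhood: N8(p) = N4(p) ∪ ND(p)."""
    return n4(p) | nd(p)

print(sorted(n4((1, 1))))   # [(0, 1), (1, 0), (1, 2), (2, 1)]
print(len(n8((1, 1))))      # 8
```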

Adjacency

The concept of adjacency has a slightly different meaning from neighborhood.
Adjacency takes into account not just the spatial neighborhood but also intensity
groups. Suppose we define a set S of intensities that are considered to belong
to the same group. Two pixels p and q are termed adjacent if both of them
have intensities from set S and both also conform to some definition of
neighborhood. Hence two pixels p and q are termed 4-adjacent if they have
intensities from set S and q belongs to N4(p). Likewise, p and q are 8-adjacent
if they have intensities from set S and q belongs to N8(p).
There is a third type of adjacency, m-adjacency, i.e., mixed adjacency. Two
pixels p and q with intensity values from set S are m-adjacent if
• q is a member of set N4(p), OR
• q is a member of set ND(p) and the set N4(p) ∩ N4(q) has no pixels whose
intensity values are from set S.
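The two conditions for m-adjacency can be sketched in Python. Here an image is assumed to be a dict mapping (x, y) coordinates to intensities, and the 3 × 3 example grid is a hypothetical arrangement of 0/1 pixels (the function names `n4`, `nd`, `m_adjacent` are ours):

```python
def n4(p):
    x, y = p
    return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}

def nd(p):
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def m_adjacent(p, q, img, S):
    """True if p and q are m-adjacent with respect to the intensity set S."""
    if img.get(p) not in S or img.get(q) not in S:
        return False
    if q in n4(p):                      # condition 1: q in N4(p)
        return True
    if q in nd(p):                      # condition 2: q in ND(p) and
        common = n4(p) & n4(q)          # N4(p) ∩ N4(q) has no pixel in S
        return all(img.get(r) not in S for r in common)
    return False

img = {(0, 0): 0, (0, 1): 1, (0, 2): 1,
       (1, 0): 0, (1, 1): 1, (1, 2): 0,
       (2, 0): 0, (2, 1): 0, (2, 2): 1}
S = {1}
# (0,2) and (1,1) are diagonal, but (0,1) already links them 4-adjacently:
print(m_adjacent((0, 2), (1, 1), img, S))   # False
print(m_adjacent((1, 1), (2, 2), img, S))   # True
```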
Connected Components

Two pixels p and q are said to be connected to each other if there exists a path
from p to q, i.e., a sequence of pixels such that every two successive pixels in
the sequence are adjacent. The set C of pixels, any two members of which are
connected to each other by a path comprising only pixels in C, is called a
connected component.
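A connected component can be extracted with a breadth-first traversal over the neighborhood relation. This sketch uses 8-adjacency on a dict-of-pixels image (the function names and the small example grid are ours, not from the notes):

```python
from collections import deque

def n8(p):
    """8-neighborhood of p = (x, y)."""
    x, y = p
    return {(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)} - {p}

def connected_component(seed, img, S):
    """All pixels 8-connected to seed through pixels with intensities in S."""
    if img.get(seed) not in S:
        return set()
    comp, frontier = {seed}, deque([seed])
    while frontier:
        p = frontier.popleft()
        for q in n8(p):
            if q not in comp and img.get(q) in S:
                comp.add(q)
                frontier.append(q)
    return comp

img = {(0, 0): 1, (0, 1): 1, (0, 2): 0,
       (1, 0): 0, (1, 1): 1, (1, 2): 0,
       (2, 0): 0, (2, 1): 0, (2, 2): 1}
print(sorted(connected_component((0, 0), img, {1})))
# [(0, 0), (0, 1), (1, 1), (2, 2)]
```

Note that (2, 2) joins the component only because 8-adjacency counts the diagonal step from (1, 1); under 4-adjacency it would form a separate component.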
Region

A region R is a subset of pixels in an image such that all pixels in R form a
connected component.

Boundary pixels

The boundary (also called border or contour) of a region R is the set of pixels in
the region that have one or more neighbors that are not in R. If R happens to be
an entire image, then its boundary is defined as the set of pixels in the first
and last rows and columns of the image. This extra definition is required because
an image has no neighbors beyond its borders. Normally, when we refer to a region,
we are referring to a subset of an image, and any pixels in the boundary of the
region that happen to coincide with the border of the image are included
implicitly as part of the region boundary.
A Digital Path

A digital path (or curve) from pixel p with coordinates (x, y) to pixel q with
coordinates (s, t) is a sequence of distinct pixels with coordinates
(x0, y0), (x1, y1), …, (xn, yn),
where (x0, y0) = (x, y) and (xn, yn) = (s, t), and pixels (xi, yi) and
(xi−1, yi−1) are adjacent for 1 ≤ i ≤ n, where n is the length of the path.
If (x0, y0) = (xn, yn), the path is closed.
We can specify 4-, 8-, or m-paths depending on the type of adjacency specified.

Figure: (a) Arrangement of pixels; (b) pixels that are 8-adjacent (shown dashed)
to the center pixel; (c) m-adjacency.

In figure (b) the paths between the top-right and bottom-right pixels are
8-paths, and the path between the same two pixels in figure (c) is an m-path.

DISTANCE MEASURES
For pixels p, q, and z with coordinates (x, y), (s, t), and (v, w) respectively,
D is a distance function (or metric) if
 D(p, q) ≥ 0 (D(p, q) = 0 iff p = q),
 D(p, q) = D(q, p), and
 D(p, z) ≤ D(p, q) + D(q, z).

The Euclidean distance between p and q is defined as:

De(p, q) = [(x − s)² + (y − t)²]^(1/2)

Pixels having a distance less than or equal to some value r from (x, y) are the
points contained in a disk of radius r centered at (x, y).

The D4 distance (also called city-block distance) between p and q is defined as:

D4(p, q) = |x − s| + |y − t|

Pixels having a D4 distance from (x, y) less than or equal to some value r form a
diamond centered at (x, y).

Example:
The pixels with distance D4 ≤ 2 from (x, y) form the following contours of
constant distance:

        2
    2   1   2
2   1   0   1   2
    2   1   2
        2

The pixels with D4 = 1 are the 4-neighbors of (x, y).

The D8 distance (also called chessboard distance) between p and q is defined as:

D8(p, q) = max(|x − s|, |y − t|)

Pixels having a D8 distance from (x, y) less than or equal to some value r form a
square centered at (x, y).

Example:
The pixels with D8 distance ≤ 2 from (x, y) form the following contours of
constant distance:

2   2   2   2   2
2   1   1   1   2
2   1   0   1   2
2   1   1   1   2
2   2   2   2   2
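The three distance measures can be sketched directly from their definitions (plain Python; function names are ours):

```python
import math

def d_e(p, q):
    """Euclidean distance between pixels p = (x, y) and q = (s, t)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d4(p, q):
    """City-block (D4) distance: |x - s| + |y - t|."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    """Chessboard (D8) distance: max(|x - s|, |y - t|)."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_e(p, q))   # 5.0
print(d4(p, q))    # 7
print(d8(p, q))    # 4
```

Note that D8(p, q) ≤ De(p, q) ≤ D4(p, q) always holds, since max(|dx|, |dy|) ≤ √(dx² + dy²) ≤ |dx| + |dy|.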


3. PRACTICE QUIZ
1. Digitization of the spatial coordinates (x, y) is called
a) Gray Level quantization
b) finite sampling
c) Image sampling
d) Image quantization
2. The number of shades of gray in a six-bit image is
a) 256
b) 128
c) 64
d) 32
3. Which of the following terms refers to the joining of objects and components
of a region in an image?
a) adjacent
b) connectivity
c) merging
d) linking
4. D8 distance is also called
a) Euclidean distance
b) City block distance
c) Chess board distance
d) None of the above
5. Radio wave band encompasses
a) audio
b) AM
c) FM
d) Both B & C
6. Hard X-rays are used in
a) medicines
b) lithoscopy
c) Industry
d) Radar
7. The number of bits required to store an image is given by the formula
a) b = NxK
b) b = MxNxK
c) b = MxK
d) b = MxN


8. Digitizing the amplitude values is called


a) radiance
b) Illuminance
c) sampling
d) quantization
9. Which of the following colors possesses the longest wavelength in the visible
spectrum?
a) Yellow
b) Red
c) Blue
d) Violet
10. What is the smallest possible value of the gradient image?
a) 1
b) 0
c) e
d) e-


PRESCRIBED TEXT BOOKS & REFERENCE BOOKS

Text Books:
1. R. C. Gonzalez & R. E. Woods, "Digital Image Processing", Addison
Wesley/Pearson Education, 3rd Edition, 2010.
2. A. K. Jain, "Fundamentals of Digital Image Processing", PHI.

References:
1. S. Jayaraman, S. Esakkirajan, T. Veerakumar, "Digital Image Processing", Tata
McGraw Hill.
2. William K. Pratt, "Digital Image Processing", John Wiley, 3rd Edition, 2004.
3. Rafael C. Gonzalez, Richard E. Woods and Steven L. Eddins, "Digital Image
Processing Using MATLAB", Tata McGraw Hill, 2010.
