Digital Image Processing
Introduction
Contents
•This lecture will cover:
–Image Processing Fields
–Computerized Processes Types
–Digital Image Definition
–Fundamental steps in Digital Image Processing
Image Processing Fields
• Computer Graphics: The creation of images
Amplitude: the maximum deviation from the average or equilibrium value of any repeatedly changing quantity, such as the position of a vibrating object, pressure, velocity, voltage, or current.
Digital Image Types: Intensity Image
[Figure: a grayscale intensity image shown alongside its matrix of pixel values; a color image stores one such matrix for each of the RGB components.]
Image Types: Binary Image
Binary data
0 0 0 0
0 0 0 0
1 1 1 1
1 1 1 1
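A binary image like the one above can be produced from a grayscale image by thresholding. A minimal sketch in NumPy, assuming 8-bit input and an illustrative threshold of 128:

```python
import numpy as np

def to_binary(image, threshold=128):
    """Convert a grayscale image (2-D array) to a binary image.

    Pixels at or above the threshold become 1, the rest become 0.
    The threshold of 128 is an illustrative choice, not a fixed rule.
    """
    return (np.asarray(image) >= threshold).astype(np.uint8)

gray = np.array([[ 10,  20,  30,  40],
                 [ 50,  60,  70,  80],
                 [200, 210, 220, 230],
                 [240, 250, 255, 128]])
print(to_binary(gray))
```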
Image Types: Index Image
Index image
Each pixel contains an index number pointing to a color in a color table.
Color Table
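The index-to-color lookup described above can be sketched with NumPy fancy indexing; the 4-entry color table below is a hypothetical example:

```python
import numpy as np

# A hypothetical 4-entry color table (RGB triples) for illustration.
color_table = np.array([[  0,   0,   0],    # index 0: black
                        [255,   0,   0],    # index 1: red
                        [  0, 255,   0],    # index 2: green
                        [255, 255, 255]])   # index 3: white

# Each pixel of the index image stores a row number into the table.
index_image = np.array([[0, 1],
                        [2, 3]])

# Fancy indexing expands the index image into a full RGB image.
rgb = color_table[index_image]
print(rgb.shape)   # (2, 2, 3)
```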
• JPEG – the Joint Photographic Experts Group format is the most popular lossy compression method and the current standard. File names end with ".jpg". It supports raster-based 8-bit grayscale or 24-bit color images at compression ratios of more than 16:1 while preserving the fidelity of the reconstructed image.
• EPS – the Encapsulated PostScript language format from Adobe Systems uses a metafile of 1–24-bit colors with compression.
• JPEG 2000
Fundamental Steps in Digital Image Processing:
Outputs of these processes generally are images.
[Diagram: starting from the problem domain, the stages include Image Acquisition, Wavelets, Image Restoration, Segmentation, and Object Recognition.]
Fundamental Steps in DIP:
(Description)
Step 1: Image Acquisition
The image is captured by a sensor (e.g. a camera) and, if the output of the camera or sensor is not already in digital form, digitized using an analogue-to-digital converter.
Fundamental Steps in DIP:
(Description)
Step 2: Image Enhancement
The process of manipulating an image so that the result is more suitable than the original for a specific application.
Important Tip: The more accurate the segmentation, the more likely recognition is to succeed.
Fundamental Steps in DIP:
(Description)
Step 9: Representation and Description
- Representation: Decide whether the data should be represented as a boundary or as a complete region. This almost always follows the output of a segmentation stage.
- Boundary Representation: Focus on external shape
characteristics, such as corners and inflections
- Region Representation: Focus on internal properties,
such as texture or skeleton shape
Fundamental Steps in DIP:
(Description)
Step 9: Representation and Description
- Choosing a representation is only part of the solution for
transforming raw data into a form suitable for subsequent
computer processing (mainly recognition)
[Diagram: a typical general-purpose DIP system, with image sensors capturing the problem domain.]
Components of an Image Processing
System
1. Image Sensors
Two elements are required to acquire digital
images. The first is the physical device that is
sensitive to the energy radiated by the object
we wish to image (Sensor). The second,
called a digitizer, is a device for converting
the output of the physical sensing device into
digital form.
Components of an Image Processing
System
2. Specialized Image Processing Hardware
Usually consists of the digitizer, mentioned before, plus
hardware that performs other primitive operations, such as an
arithmetic logic unit (ALU), which performs arithmetic and
logical operations in parallel on entire images.
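The idea of performing arithmetic and logical operations "in parallel on entire images" maps naturally onto vectorized array operations. A small sketch in NumPy (the operand values are arbitrary):

```python
import numpy as np

a = np.array([[10, 20], [30, 40]], dtype=np.uint8)
b = np.array([[ 5,  5], [ 5,  5]], dtype=np.uint8)

# One expression operates on every pixel at once,
# much like the ALU-on-entire-images idea described above.
summed = a + b                # pixel-wise addition
masked = a & 0b11110000       # pixel-wise logical AND (keep high nibble)
print(summed)
print(masked)
```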
The lens and the ciliary muscle focus the light reflected from objects onto the retina to form an image of the objects.
(Images from Rafael C. Gonzalez and Richard E. Wood, Digital Image Processing, 2nd Edition.)
1. The lens contains 60–70% water and 6% fat.
2. The iris diaphragm controls the amount of light that enters the eye.
3. Light receptors in the retina:
- About 6–7 million cones for bright-light vision, called photopic vision.
- The density of cones is about 150,000 elements/mm2.
- Cones are involved in color vision.
- Cones are concentrated in the fovea, an area of about 1.5 x 1.5 mm2.
- About 75–150 million rods for dim-light vision, called scotopic vision.
- Rods are sensitive to low levels of light and are not involved in color vision.
4. The blind spot is the region where the optic nerve emerges from the eye; the absence of receptors there gives it its name.
5. Fovea: a circular indentation in the center of the retina, about 1.5 mm in diameter, dense with cones.
• Photoreceptors around the fovea are responsible for spatial vision (still images).
• Photoreceptors around the periphery are responsible for detecting motion.
Image Formation in the Human Eye
Mach Band Effect
[Figure: intensity-vs-position plots; overshoot and undershoot are perceived near the boundary between regions A and B.]
Simultaneous Contrast
[Figure: all small squares have exactly the same intensity, but they appear progressively darker as the background becomes lighter.]
Single sensor
Line sensor
Array sensor
Effect of Quantization Levels
[Figure: the same image requantized to 64, 32, 16, 8, 4, and 2 gray levels; from 16 levels down, false contouring is easy to see.]
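The false contouring described above can be reproduced by requantizing an image to fewer gray levels. A minimal sketch, assuming 8-bit input and evenly spaced levels:

```python
import numpy as np

def requantize(image, levels):
    """Requantize an 8-bit grayscale image to the given number of levels.

    A minimal sketch: each pixel is mapped to the lower edge of its bin
    among `levels` evenly spaced gray values in [0, 255].
    """
    img = np.asarray(image, dtype=np.float64)
    step = 256 / levels
    return (np.floor(img / step) * step).astype(np.uint8)

gray = np.arange(0, 256, dtype=np.uint8).reshape(16, 16)
print(requantize(gray, 4))   # only 4 distinct gray values remain
```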
IMAGE INTERPOLATION
• Interpolation: constructing new data points within the range of a discrete set of known data points.
• Image interpolation is a tool used for zooming, shrinking, and geometric correction of an image (re-sampling).
• Image interpolation refers to "guessing" the intensity values at missing locations.
• Why do we need image interpolation?
– We want BIG images
• When we watch a video clip on a PC, we like to see it in full-screen mode
– We want GOOD images
• If a block of an image gets damaged during transmission, we want to repair it
– We want COOL images
• Manipulating images digitally can render fancy artistic effects, as we often see in movies
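The simplest interpolation scheme, nearest-neighbor, assigns each new pixel the value of the closest original pixel. A sketch for integer zoom factors (the function name is my own):

```python
import numpy as np

def nearest_neighbor_zoom(image, factor):
    """Zoom an image by an integer factor using nearest-neighbor
    interpolation: each new pixel copies the closest original pixel."""
    image = np.asarray(image)
    rows = np.arange(image.shape[0] * factor) // factor
    cols = np.arange(image.shape[1] * factor) // factor
    return image[np.ix_(rows, cols)]

small = np.array([[1, 2],
                  [3, 4]])
print(nearest_neighbor_zoom(small, 2))
```

Each original pixel simply becomes a factor x factor block in the output, which is why nearest-neighbor zooming looks blocky at large factors.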
Zooming and shrinking
• Zooming requires the creation of new pixel locations and the assignment of gray levels to them.
• Bilinear example: given the neighborhood

125 170 129
172 170 175
125 128 128

fit v(x, y) = ax + by + cxy + d to the four known neighbors at (2,2), (2,4), (4,2), and (4,4), with gray levels 125, 170, 172, and 170, then evaluate v at the new location (3,3):

125 = a(2) + b(2) + c(4) + d
170 = a(2) + b(4) + c(8) + d
172 = a(4) + b(2) + c(8) + d
170 = a(4) + b(4) + c(16) + d
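The four equations above form a small linear system; solving it and evaluating v at (3,3) can be sketched as:

```python
import numpy as np

# Solve for the bilinear coefficients a, b, c, d in
#   v(x, y) = a*x + b*y + c*x*y + d
# using the four known pixels from the example above:
#   v(2,2)=125, v(2,4)=170, v(4,2)=172, v(4,4)=170.
A = np.array([[2, 2,  4, 1],
              [2, 4,  8, 1],
              [4, 2,  8, 1],
              [4, 4, 16, 1]], dtype=float)
v = np.array([125, 170, 172, 170], dtype=float)
a, b, c, d = np.linalg.solve(A, v)

# Interpolate the gray level at the in-between point (3, 3).
x, y = 3, 3
print(a*x + b*y + c*x*y + d)   # 159.25
```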
• Image shrinking is done in a similar manner as
just described for zooming.
• For example, to shrink an image by half, we
delete every other row and column.
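The shrink-by-half rule just described (delete every other row and column) is a one-line slicing operation in NumPy:

```python
import numpy as np

def shrink_by_half(image):
    """Shrink an image to half size by deleting every other
    row and column, as described above."""
    return np.asarray(image)[::2, ::2]

img = np.arange(16).reshape(4, 4)
print(shrink_by_half(img))   # keeps rows/columns 0 and 2
```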
Bicubic interpolation
• Bicubic interpolation is more sophisticated
and produces smoother edges than bilinear
interpolation.
• Here, each new pixel value is computed as a bicubic function of the 16 pixels in the nearest 4 x 4 neighborhood of the corresponding pixel in the original image.
• This is the method most commonly used by image editing software, printer drivers, and many digital cameras for re-sampling images.
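Bicubic interpolation weights the 16 neighbors with a cubic convolution kernel. The slide does not specify which kernel, so the sketch below uses the common cubic convolution kernel with a = -0.5 as an illustrative choice:

```python
def cubic_kernel(x, a=-0.5):
    """Cubic convolution kernel often used in bicubic interpolation.

    a = -0.5 is a common choice; other variants use different values.
    """
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5*a * x**2 + 8*a * x - 4*a
    return 0.0

# The kernel is 1 at the sample itself and 0 at the other integer
# offsets, so existing pixels are reproduced exactly.
print(cubic_kernel(0))   # 1.0
print(cubic_kernel(1))   # 0.0
```

In 2-D, the weight for each of the 16 neighbors is the product of this kernel evaluated on the horizontal and vertical distances to the new pixel.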
Basic Relationship of Pixels
Coordinates are measured from the origin (0,0) at the top-left corner of the image.

4-neighbors of p at (x,y):
N4(p) = { (x+1, y), (x-1, y), (x, y+1), (x, y-1) }

Diagonal neighbors of p:
ND(p) = { (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1) }

8-neighbors of p:
N8(p) = N4(p) ∪ ND(p)
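The neighborhood definitions above translate directly into code; a minimal sketch (the function names are my own):

```python
def n4(p):
    """4-neighbors of pixel p = (x, y)."""
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    """Diagonal neighbors of p."""
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(p):
    """8-neighbors: the union of N4(p) and ND(p)."""
    return n4(p) | nd(p)

print(sorted(n8((1, 1))))
```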
[Figure: two pixel subsets S1 and S2, and pixels p and q.]
Path (cont.)
• An 8-path from p to q results in some ambiguity.
• An m-path from p to q solves this ambiguity.
Distance Measure (Euclidean)
For pixels p, q, and z, with coordinates (x,y), (s,t), and (v,w), respectively, D is a distance function or metric if
(a) D(p,q) ≥ 0,
(b) D(p,q) = D(q,p), (symmetry)
(c) D(p,z) ≤ D(p,q) + D(q,z). (triangle inequality)
[Figure: example grids showing a path between pixels p and q and the distance values between them.]
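Alongside the Euclidean distance, DIP texts commonly use the city-block (D4) and chessboard (D8) metrics; a sketch of all three (the D4/D8 names follow that convention):

```python
import math

def d_euclidean(p, q):
    """Euclidean distance between pixels p = (x, y) and q = (s, t)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d4(p, q):
    """City-block distance: |x - s| + |y - t|."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    """Chessboard distance: max(|x - s|, |y - t|)."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_euclidean(p, q))  # 5.0
print(d4(p, q))           # 7
print(d8(p, q))           # 4
```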
Linear and Nonlinear Operation
Linear operation
H(af + bg) = a H( f ) + b H( g )
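The linearity condition above can be checked numerically for a candidate operator; here the sum-of-pixels operator serves as an illustrative linear H:

```python
import numpy as np

def H(image):
    """An example linear operator: the sum of all pixel values."""
    return np.sum(image)

f = np.array([[1, 2], [3, 4]])
g = np.array([[5, 6], [7, 8]])
a, b = 2, 3

# Linearity: H(a*f + b*g) == a*H(f) + b*H(g)
print(H(a*f + b*g) == a*H(f) + b*H(g))   # True
```

A nonlinear operator, such as taking the maximum pixel value, would generally fail this test.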
Revision
• Image Processing Fields
• Computerized Processes Types
• Digital Image Definition
• Fundamental steps in Digital Image Processing and components of an IP system
• Visual Perception: Human Eye
• Image Sensors
• Digital Image Acquisition Process
• A Simple Image Formation Model
• Image Sampling and Quantization
• Representing Digital Images
• Image Interpolation
• Basic Relationship of Pixels
• Distance Measure
• Linear and Nonlinear Operation