RS-Lecture 11-DigitalImageProcessing
Al Albayt University
Faculty of Engineering
Surveying Engineering Department
Digital Image Processing: Main Topics
▪ Image Enhancement
▪ Image Transformations
▪ Image Classification
Digital Image Processing
Digital image processing is the computer manipulation of the digital values in an image for image restoration and correction, image enhancement, and feature extraction.
Digital Image Pre-Processing
Image Restoration and Rectification
Geometric Correction
What are geometric errors? Why is geometric correction needed?
Geometric correction procedure
The steps to follow for geometric correction are as follows:
1- Determination of parameters (GCPs): identify the image coordinates (i.e. row, column) of several clearly discernible points, called ground control points (GCPs), in the distorted image (A: A1 to A4), and match them to their true positions in ground coordinates (e.g. latitude, longitude). The true ground coordinates are typically measured from a map (B: B1 to B4), either in paper or digital format. The number and distribution of ground control points will influence the accuracy of the geometric correction.
Geometric correction procedure
2- Determination of the proper transformation equations to apply to the original (row and column) image coordinates to map them into their new ground coordinates. The order of the polynomials is chosen according to the geometric distortions; usually a third-order polynomial at most is sufficient for existing remote sensing images. To use a higher-order transformation, more GCPs are needed.
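To make step 2 concrete, here is a minimal sketch (in Python/NumPy) of fitting a first-order (affine) polynomial from image (row, column) coordinates to ground coordinates by least squares. The GCP coordinates are made up purely for illustration.

```python
import numpy as np

# Hypothetical GCPs: image coordinates (row, col) and their true
# ground coordinates (easting, northing). Values are illustrative only.
img_rc = np.array([[10.0, 12.0], [15.0, 200.0], [180.0, 30.0], [190.0, 210.0]])
ground_en = np.array([[500100.0, 3600900.0], [500950.0, 3600880.0],
                      [500120.0, 3600050.0], [500980.0, 3600030.0]])

# First-order (affine) polynomial: E = a0 + a1*col + a2*row (same form for N).
# At least 3 GCPs are needed for a first-order fit; extra GCPs are used
# in a least-squares sense and improve the estimate.
A = np.column_stack([np.ones(len(img_rc)), img_rc[:, 1], img_rc[:, 0]])
coef_e, *_ = np.linalg.lstsq(A, ground_en[:, 0], rcond=None)
coef_n, *_ = np.linalg.lstsq(A, ground_en[:, 1], rcond=None)

# Residuals at the GCPs give the RMS error of the fit.
pred = np.column_stack([A @ coef_e, A @ coef_n])
rmse = np.sqrt(np.mean(np.sum((pred - ground_en) ** 2, axis=1)))
print("affine coefficients (E):", coef_e)
print("RMSE at GCPs:", rmse)
```

A higher-order polynomial simply adds columns (row*col, row**2, col**2, ...) to the design matrix, which is why it requires more GCPs.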
3- Resampling: the output grid of the corrected image is filled with pixel values taken from the original image. Common resampling methods include the following.
1- Nearest neighbour: this resampling uses the digital value of the pixel in the original image which is nearest to the new pixel location in the corrected image.
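A minimal sketch of nearest-neighbour resampling in NumPy. The `inverse_map` function is a hypothetical placeholder standing in for the inverse of the fitted transformation. Because the method copies original digital values unchanged, it is often preferred when the original DNs must be preserved for later analysis.

```python
import numpy as np

def nearest_neighbour_resample(src, inverse_map, out_shape):
    """Fill an output grid with the value of the nearest source pixel.
    `inverse_map` maps output (row, col) arrays back to fractional
    source (row, col) coordinates."""
    out_r, out_c = np.indices(out_shape)
    src_r, src_c = inverse_map(out_r, out_c)
    # Nearest neighbour: round to the closest source pixel, clip to bounds.
    r = np.clip(np.rint(src_r).astype(int), 0, src.shape[0] - 1)
    c = np.clip(np.rint(src_c).astype(int), 0, src.shape[1] - 1)
    return src[r, c]

# Toy example: a shift of 0.4 pixel; output values are copies of
# original DNs, never interpolated averages.
img = np.arange(16, dtype=np.uint8).reshape(4, 4)
shifted = nearest_neighbour_resample(img, lambda r, c: (r + 0.4, c + 0.4), (4, 4))
print(shifted)
```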
Image Enhancement
▪ Point operations: modify the brightness value of each pixel independently (e.g. contrast stretching).
▪ Local operations: modify the value of each pixel based on neighbouring brightness values (e.g. spatial filtering).
Contrast stretch
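Contrast stretching is a point operation and can be sketched in a few lines of Python/NumPy. This is a hypothetical percentile-based linear stretch; the 2% and 98% cut-offs are illustrative assumptions, not values from the slides.

```python
import numpy as np

def linear_stretch(band, low_pct=2, high_pct=98):
    """Percentile-based linear contrast stretch to the full 0-255
    display range. Each pixel is rescaled independently of its
    neighbours, which is what makes this a point operation."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    stretched = (band.astype(float) - lo) / (hi - lo)
    return np.clip(stretched * 255, 0, 255).astype(np.uint8)

# A dull band occupying only DNs 80-130 is expanded to use 0-255.
band = np.random.randint(80, 131, size=(100, 100), dtype=np.uint8)
out = linear_stretch(band)
print(band.min(), band.max(), "->", out.min(), out.max())
```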
Spatial filtering
[Figure: a simple 3×3 filter (weights r, s, t, u, v, w, x, y, z) placed over a 3×3 neighbourhood (pixels a to i, centre pixel e) of the original image f(x, y).]
e_processed = v*e + r*a + s*b + t*c + u*d + w*f + x*g + y*h + z*i
The filtering procedure involves moving a 'window' or kernel of a few pixels in dimension, with an odd size (e.g. 3×3, 5×5), over each pixel in the image, applying a mathematical calculation using the pixel values under that window, and replacing the central pixel with the new value.
The above is repeated for every pixel in the original image to generate the filtered image.
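The procedure can be sketched directly in Python/NumPy. This is an illustrative, unoptimized implementation (real software uses fast convolution routines); border pixels are simply left unchanged here as a simplifying assumption.

```python
import numpy as np

def filter3x3(image, kernel):
    """Slide a 3x3 kernel over every interior pixel and replace the
    centre pixel with the weighted sum of its neighbourhood, exactly
    as in the e_processed formula above."""
    out = image.astype(float).copy()
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            window = image[i - 1:i + 2, j - 1:j + 2].astype(float)
            out[i, j] = np.sum(window * kernel)
    return out
```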
Low-pass filter
• A low-pass filter is designed to emphasize larger, homogeneous areas of similar tone and reduce (remove) the smaller detail in an image.
• Thus, low-pass filters generally serve to smooth the appearance of an image. Average and median filters are examples of low-pass filters.
• An average filter simply averages all of the pixels in a neighbourhood around a central value.
• Low-pass filters are especially useful for removing noise from images, and also for highlighting gross detail.
[Figure: a simple 3×3 smoothing filter, with all nine weights equal to 1/9, placed over a 3×3 neighbourhood of the original image pixels of f(x, y).]

Original image neighbourhood:
104 100 108
 99 106  98
 95  90  85

3×3 smoothing filter:
1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9

e = 1/9*106 + 1/9*104 + 1/9*100 + 1/9*108 + 1/9*99 + 1/9*98 + 1/9*95 + 1/9*90 + 1/9*85 = 98.3333
The above is repeated for every pixel in the original image to generate the smoothed image.
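The worked example can be checked with a few lines of NumPy, using the neighbourhood values from the slide:

```python
import numpy as np

# The 3x3 neighbourhood from the slide and a 1/9 smoothing kernel.
window = np.array([[104, 100, 108],
                   [ 99, 106,  98],
                   [ 95,  90,  85]], dtype=float)
kernel = np.full((3, 3), 1.0 / 9.0)
print(np.sum(window * kernel))  # 98.333..., matching the slide
```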
[Figure: low-pass filter example.]
High-pass Filters
High-pass filters do the opposite of low-pass filters and serve to sharpen (highlight) the appearance of fine detail in an image:
• Remove blurring from images
• Highlight edges
A high-pass filter is the basis for most sharpening methods. An image is sharpened when contrast is enhanced between adjoining areas with little variation in brightness or darkness. The kernel of a high-pass filter is designed to increase the brightness of the centre pixel relative to neighbouring pixels. The kernel array usually contains a single positive value at its centre, which is completely surrounded by negative values. A typical 3 by 3 kernel for a high-pass filter is sketched below.
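One commonly used kernel of the form just described, with weights summing to 1 so that overall image brightness is preserved, is shown here. It is a typical choice rather than the specific array from the slide, and the `filter3x3` helper from the spatial-filtering sketch above is assumed.

```python
import numpy as np

# A common 3x3 high-pass (sharpening) kernel: a single positive centre
# weight surrounded by negative values. The weights sum to 1, so flat
# areas keep their brightness while edges are accentuated.
high_pass = np.array([[-1, -1, -1],
                      [-1,  9, -1],
                      [-1, -1, -1]], dtype=float)

# Applied with the filter3x3 sketch from the spatial-filtering section,
# it brightens pixels that differ from their neighbours:
# sharpened = np.clip(filter3x3(band, high_pass), 0, 255)
```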
Image Transformations
Image transformations typically involve the manipulation of multiple
bands of data, whether from a single multispectral image or from two
or more images of the same area acquired at different times (i.e.
multitemporal image data).
Image Transformations
Arithmetic operations
Basic image transformations apply simple arithmetic operations to the image data; a common example is the band ratio b4/b3.
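As a sketch, assuming Landsat TM-style band numbering in which band 4 is near-infrared and band 3 is red, the ratio can be computed pixel by pixel in NumPy. The small epsilon guarding against division by zero is an implementation choice, and the band values are hypothetical.

```python
import numpy as np

def band_ratio(b4, b3):
    """Pixel-by-pixel ratio of two bands (e.g. near-infrared / red).
    A small epsilon avoids division by zero in dark pixels."""
    return b4.astype(float) / (b3.astype(float) + 1e-6)

# Hypothetical NIR and red bands: vegetated pixels (high NIR, low red)
# yield large ratio values; bare soil and water yield small ones.
b4 = np.array([[200.0, 60.0], [180.0, 40.0]])
b3 = np.array([[ 50.0, 55.0], [ 45.0, 38.0]])
print(band_ratio(b4, b3))
```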
Principal components analysis
Problem:
• Multispectral remote sensing datasets comprise a set of variables (the spectral bands), which are
usually correlated to some extent
• That is, variations in the DN in one band may be mirrored by similar variations in another band
(when the DN of a pixel in Band 1 is high, it is also high in Band 3, for example).
Solution:
▪ Principal Component Analysis (PCA) of the statistical characteristics of multi-band data sets is used to produce uncorrelated output bands, to reduce the dimensionality (i.e. the number of bands) of the data, and to concentrate (statistically) the maximum amount of information from the original data into the smallest number of new components.
▪ Principal component transformations are applied either as an enhancement operation prior to visual interpretation or as a preprocessing procedure prior to automated classification of the data.
▪ PCA "bands" produce more colorful color composite images than spectral color composite images, because the variance in the data has been maximized.
▪ By selecting which PCA Bands to exclude in further processing, you can
reduce the amount of data you are handling, eliminate noise, and reduce
computational requirements.
▪ Highlight changes in imagery collected over the same area at different times.
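A minimal PCA sketch in NumPy, computing the principal components of a (bands, rows, cols) image stack from the band-to-band covariance matrix. The toy data below are hypothetical; the two correlated bands illustrate how PC1 absorbs almost all of the variance.

```python
import numpy as np

def pca_transform(stack):
    """Principal components of a multi-band image. `stack` has shape
    (bands, rows, cols); the output components are uncorrelated and
    ordered by decreasing variance."""
    bands, rows, cols = stack.shape
    X = stack.reshape(bands, -1).astype(float)   # one row per band
    X -= X.mean(axis=1, keepdims=True)           # centre each band
    cov = np.cov(X)                              # bands x bands covariance
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]            # largest variance first
    pcs = eigvecs[:, order].T @ X                # project onto components
    return pcs.reshape(bands, rows, cols), eigvals[order]

# Two strongly correlated toy bands: PC1 captures almost all the variance.
b1 = np.random.rand(50, 50)
stack = np.stack([b1, b1 * 0.9 + 0.05 * np.random.rand(50, 50)])
components, variances = pca_transform(stack)
print(variances / variances.sum())  # proportion of variance per component
```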
Image Classification
Digital image classification uses the spectral information represented by the digital numbers in one or more spectral bands, and attempts to classify each individual pixel based on this spectral information. This type of classification is termed spectral pattern recognition.
The objective is to assign all pixels in the image to particular classes or themes (e.g. water, coniferous forest, deciduous forest, corn, wheat, etc.).
The resulting classified image is a mosaic of pixels, each of which belongs to a particular theme, and is essentially a thematic "map" of the original image.
Image Classification
• We need to distinguish between information classes and spectral classes.
•Information classes are those categories of interest that the analyst is actually
trying to identify in the imagery, such as different kinds of crops, different forest types
or tree species, different geologic units or rock types, etc.
•Spectral classes are groups of pixels that are uniform (or near-similar) with
respect to their brightness values in the different spectral channels of the data.
• The objective is to match the spectral classes in the data to the information classes.
• Rarely is there a simple one-to-one match between these two types of classes. Also, unique spectral classes may appear which do not necessarily correspond to any information class of particular use or interest to the analyst.
• Alternatively, a broad information class (e.g. forest) may contain a number of spectral sub-
classes with unique spectral variations. Using the forest example, spectral sub-classes
may be due to variations in age, species, and density, or perhaps as a result of shadowing
or variations in scene illumination.
• It is the analyst's job to decide on the utility of the different spectral classes and their correspondence to useful information classes.
Image Classification Procedures
(1) Design the image classification scheme (system): the classes are usually information classes such as urban, agriculture, forest areas, etc.
(2) Conduct field studies and collect ground information and other ancillary data of the study area.
(3) Select representative areas on the image and analyze the initial clustering results or generate training signatures.
(4) Run the classification, using a supervised or unsupervised algorithm (see below).
(5) Post-processing: complete geometric correction and filtering, and decorate the classified map.
Supervised classification:
- Selection of training areas is based on the analyst's familiarity with the study area (field work). The analyst is "supervising" the categorization of classes.
- The spectral information in all bands for the pixels comprising the training areas is used to recognize spectrally similar areas for each class, i.e. to create "signatures".
- Each pixel in the image is then compared to these signatures and labelled as the class it most closely "resembles" digitally.
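As an illustration of the supervised approach, here is a minimal minimum-distance-to-means classifier in NumPy. The slides do not name a specific decision rule, so this algorithm and the example signatures are assumptions for the sketch.

```python
import numpy as np

def min_distance_classify(stack, signatures):
    """Minimum-distance-to-means classifier: each pixel is assigned to
    the class whose mean signature vector is closest in spectral space.
    `stack` is (bands, rows, cols); `signatures` is (classes, bands),
    the per-band means of the training areas."""
    bands, rows, cols = stack.shape
    X = stack.reshape(bands, -1).astype(float).T          # (pixels, bands)
    # Euclidean distance from every pixel to every class mean.
    d = np.linalg.norm(X[:, None, :] - signatures[None, :, :], axis=2)
    return d.argmin(axis=1).reshape(rows, cols)           # class index per pixel

# Hypothetical 2-band image and two class signatures (e.g. water, vegetation).
stack = np.random.rand(2, 4, 4) * 100
signatures = np.array([[10.0, 5.0], [60.0, 80.0]])
print(min_distance_classify(stack, signatures))
```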
Unsupervised classification:
ISODATA and K-means are the most widely used algorithms for unsupervised classification.
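A minimal K-means sketch in NumPy (ISODATA adds automatic splitting and merging of clusters on top of this basic assign/update loop). The image data here are random placeholders.

```python
import numpy as np

def kmeans_classify(stack, k, iters=20, seed=0):
    """Plain K-means on the pixels' spectral vectors: each pixel is a
    point in spectral space, grouped into k spectral classes."""
    bands = stack.shape[0]
    X = stack.reshape(bands, -1).astype(float).T          # (pixels, bands)
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest cluster centre...
        labels = np.linalg.norm(X[:, None] - centres[None], axis=2).argmin(axis=1)
        # ...then move each centre to the mean of its assigned pixels.
        for c in range(k):
            if np.any(labels == c):
                centres[c] = X[labels == c].mean(axis=0)
    return labels.reshape(stack.shape[1:])

# Hypothetical 3-band image clustered into 4 spectral classes; it is
# then the analyst's job to match these to information classes.
stack = np.random.rand(3, 60, 60)
print(np.unique(kmeans_classify(stack, k=4)))
```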