RS-Lecture 11-DigitalImageProcessing

The document discusses topics related to digital image processing, including: 1) Image pre-processing techniques like geometric correction to remove distortions and rectify images. 2) Image enhancement methods like contrast stretching to improve image quality and highlight features. 3) Spatial filtering operations that modify pixel values based on neighboring pixels to smooth, sharpen, or detect edges in images. The document provides details on common digital image processing techniques applied to remotely sensed imagery.

Uploaded by

Mohammad Shorman

Course of

Principles of Remote Sensing

Al Albayt University
Faculty of Engineering
Surveying Engineering Department

Dr. Hussein Harahsheh


First Semester 2021/2022

Dr. Hussein Harahsheh 1


Lecture 11
Digital Image Processing

Digital Image Processing: Main Topics

▪ Image Pre-processing: image restoration and rectification

▪ Image Enhancement

▪ Image Transformations

▪ Image Classification

▪ Data Integration and Data Fusion

▪ Elements of Visual Interpretation and Analysis

Digital Image Processing
Digital image processing is the computer manipulation of the digital
values in an image for image restoration and correction, image
enhancement, and feature extraction.

A digital image processing system consists of computer hardware and
image processing software for analyzing digital data.

Digital Image Pre-Processing
Image Restoration and Rectification
Geometric Correction
What are geometric errors? Why geometric correction?

▪ Remotely sensed imagery typically exhibits internal and
external geometric error. It is important to recognize the
source of the internal and external error and whether it is
systematic (predictable) or nonsystematic (random).

▪ It is usually necessary to preprocess remotely sensed data
and remove geometric distortion so that individual picture
elements (pixels) are in their proper planimetric (x, y) map
locations.
▪ This allows remote sensing–derived information to be
related to other thematic data.
▪ Geometrically corrected imagery can be used to extract
accurate distance, polygon area, and direction (bearing)
information.

Geometric correction procedure
The steps to follow for geometric correction are as follows:
1- Determination of parameters (GCPs): Identifying the image coordinates (i.e. row,
column) of several clearly discernible points, called ground control points (or GCPs), in
the distorted image (A - A1 to A4), and matching them to their true positions in ground
coordinates (e.g. latitude, longitude). The true ground coordinates are typically measured
from a map (B - B1 to B4), either in paper or digital format. The number and distribution of
ground control points will influence the accuracy of the geometric correction.
Geometric correction procedure
2- Determination of the proper transformation equations to apply to the original (row and column)
image coordinates to map them into their new ground coordinates. Depending on the geometric
distortions, the order of the polynomials will be determined. Usually a polynomial of at most
third order will be sufficient for existing remote sensing images.
To use a higher order of transformation, more GCPs are needed.

1st order transformation (also referred to as an affine transformation):
• X' = a0 + a1x + a2y
• Y' = b0 + b1x + b2y
• X', Y' = image coordinates; x, y = map coordinates
• 6 unknowns, need a minimum of 3 GCPs

2nd order polynomial:
• X' = a0 + a1x + a2y + a3x² + a4y² + a5xy
• Y' = b0 + b1x + b2y + b3x² + b4y² + b5xy
• X', Y' = image coordinates; x, y = map coordinates
• 12 unknowns, need a minimum of 6 GCPs

a0, b0: translation
a1, b1: rotation and scaling in the x direction
a2, b2: rotation and scaling in the y direction
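As a sketch of how the coefficients above are estimated in practice, the affine model can be fitted to the GCPs by least squares. The GCP values below are hypothetical, not from the lecture:

```python
import numpy as np

# Hypothetical GCPs: map coordinates (x, y) and the matching image
# coordinates (X', Y') identified in the distorted image.
map_xy = np.array([[10.0, 20.0], [15.0, 80.0], [90.0, 25.0], [85.0, 75.0]])
img_XY = np.array([[12.1, 18.9], [16.8, 79.2], [91.5, 24.3], [87.0, 74.1]])

# Design matrix for the 1st-order (affine) model:
#   X' = a0 + a1*x + a2*y,   Y' = b0 + b1*x + b2*y
A = np.column_stack([np.ones(len(map_xy)), map_xy[:, 0], map_xy[:, 1]])

# Two least-squares fits: 6 unknowns in total, so >= 3 GCPs are needed.
a, *_ = np.linalg.lstsq(A, img_XY[:, 0], rcond=None)   # a0, a1, a2
b, *_ = np.linalg.lstsq(A, img_XY[:, 1], rcond=None)   # b0, b1, b2

def map_to_image(x, y):
    """Transform a map coordinate into image (row/column) space."""
    return a[0] + a[1] * x + a[2] * y, b[0] + b[1] * x + b[2] * y
```

With more than the minimum number of GCPs the system is overdetermined, and the least-squares residuals feed directly into the accuracy check of step 3.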
3- Accuracy check: The accuracy of the geometric correction should be checked and
verified. If the accuracy does not meet the criteria, the method or the data used
should be checked and corrected in order to avoid the errors.
Accuracy is usually measured using the Root Mean Square (RMS) error.
The amount of error that the user is willing to tolerate depends on the application.
The re-transformed pixel should fall within the window formed by the source pixel and a
radius buffer. For example, an RMS error of 0.5 pixel would still put the transformed
coordinate within the source pixel.
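A minimal sketch of the RMS check, using hypothetical residuals (differences, in pixels, between the re-transformed and the original GCP positions):

```python
import numpy as np

# Hypothetical residuals at four GCPs, in pixels.
dx = np.array([ 0.3, -0.4,  0.2, -0.1])   # column (x) residuals
dy = np.array([-0.2,  0.3, -0.3,  0.4])   # row (y) residuals

per_point_rms = np.sqrt(dx**2 + dy**2)         # error at each GCP
total_rms = np.sqrt(np.mean(dx**2 + dy**2))    # error of the whole fit

# An application-dependent tolerance; 0.5 pixel keeps the transformed
# coordinate within the source pixel.
acceptable = total_rms <= 0.5
print(total_rms, acceptable)   # -> 0.412... True
```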
4- Intensity interpolation (resampling): Resampling is used to determine the digital
values to place in the new pixel locations of the corrected output image. The resampling
process calculates the new pixel values from the original digital pixel values in the
uncorrected image. There are three common methods for resampling:

1- Nearest neighbor:
This resampling uses the digital value from the pixel in the
original image which is nearest to the new pixel location in
the corrected image

2- Bilinear interpolation:
This resampling takes a weighted average of the four pixels
in the original image nearest to the new pixel location.

3- Cubic convolution:
This resampling calculates a distance-weighted average of a
block of sixteen pixels from the original image which surround
the new output pixel location. As with bilinear interpolation,
this method results in completely new pixel values.
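The first two resampling methods can be sketched for a single interior pixel (border handling and cubic convolution are omitted to keep the example short; the image values are hypothetical):

```python
import numpy as np

# A tiny "uncorrected" image of digital numbers.
img = np.array([[104, 100, 108],
                [ 99, 106,  98],
                [ 95,  90,  85]], dtype=float)

def nearest(img, r, c):
    """Nearest neighbour: take the DN of the closest original pixel."""
    return img[int(round(r)), int(round(c))]

def bilinear(img, r, c):
    """Bilinear: weighted average of the four surrounding pixels."""
    r0, c0 = int(np.floor(r)), int(np.floor(c))
    dr, dc = r - r0, c - c0
    return (img[r0,     c0]     * (1 - dr) * (1 - dc) +
            img[r0,     c0 + 1] * (1 - dr) * dc +
            img[r0 + 1, c0]     * dr       * (1 - dc) +
            img[r0 + 1, c0 + 1] * dr       * dc)

print(nearest(img, 0.6, 0.6))    # -> 106.0 (the DN of pixel [1, 1])
print(bilinear(img, 0.5, 0.5))   # -> 102.25 (mean of 104, 100, 99, 106)
```

Nearest neighbour preserves the original DNs (useful if the image will be classified afterwards), while bilinear and cubic convolution produce completely new values.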
Image Enhancement
• Enhancements are used to make visual interpretation and
understanding of imagery easier by increasing the apparent
distinction between the features of the scene.
• Enhancement operations are normally applied to image data
after the appropriate restoration procedures have been
performed.
Image Histogram
The key to understanding image enhancements is to understand
the concept of an image histogram. A histogram is a graphical
representation of the brightness values that comprise an image. The brightness
values (i.e. 0-255) are displayed along the x-axis of the graph. The frequency
of occurrence of each of these values in the image is shown on the y-axis.
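Computing such a histogram is straightforward with NumPy; the band below is a hypothetical low-contrast example whose DNs occupy only part of the 0–255 range:

```python
import numpy as np

rng = np.random.default_rng(0)
band = rng.integers(60, 120, size=(100, 100))   # narrow DN range: low contrast

# One bin per possible brightness value: x-axis 0-255, y-axis frequency.
hist, _ = np.histogram(band, bins=256, range=(0, 256))

print(hist.sum())                          # -> 10000 (every pixel counted once)
print(hist[:60].sum(), hist[120:].sum())   # -> 0 0 (unused dynamic range)
```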
Interpretations of Histogram
▪ “Unbalanced” histograms do not fully utilize the dynamic range:
▪ Low-contrast image: narrow luminance range
▪ Under-exposed image: concentrated on the dark side
▪ Over-exposed image: concentrated on the bright side

▪ A “balanced” histogram gives a more pleasant look and reveals
rich details

Image Enhancement

▪ Point operations:
▪ modify the brightness value of each pixel
independently – Contrast Stretching

▪ Local operations:
▪ modify the value of each pixel based on
neighboring brightness values – Spatial
Filtering
Contrast stretch

▪ When image data are acquired, the detected energy does not
necessarily fill the entire grey-level range that the sensor is
capable of recording. This can result in a large concentration of
values in a small region of grey levels, producing an image with
very little contrast among the features.

▪ A contrast stretch is used to expand the narrow range of
brightness values of an input image over a wider range of gray
values.

[Figure: transfer function mapping the input gray-level range a–b
onto the output gray-level range o–u]

▪ Certain features may reflect more energy than others. This
results in good contrast within the image and features that are
easy to distinguish.

▪ When features reflect nearly the same level of energy, the
contrast between the features in the image is low.
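A minimal linear (min–max) contrast stretch in NumPy, assuming an 8-bit output range; real software usually stretches between chosen percentiles rather than the absolute extremes:

```python
import numpy as np

def linear_stretch(band, lo=0, hi=255):
    """Expand the band's narrow DN range [a, b] linearly onto [lo, hi]."""
    a, b = band.min(), band.max()
    return ((band - a) * (hi - lo) / (b - a) + lo).astype(np.uint8)

band = np.array([[60, 70],
                 [80, 120]])          # little contrast: DNs span only 60-120
print(linear_stretch(band))           # DNs now span the full 0-255 range
```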
Spatial Filtering
Spatial filters are designed to highlight or suppress specific features in an image
based on their spatial frequency. Spatial filtering is useful for many applications,
including smoothing, sharpening, noise removal, and edge detection.
• Spatial filters pass (emphasize) or suppress (de-emphasize) image data of various spatial
frequencies
• Spatial frequency refers to the number of changes in brightness value, per unit distance,
for any area within a scene
• High spatial frequency → rough areas
– High frequency corresponds to image elements of the smallest size
– An area with high spatial frequency will have rapid changes in digital values with
distance (e.g. dense urban areas and street networks)
• Low spatial frequency → smooth areas
– Low frequency corresponds to image elements of (relatively) large size.
– An object with a low spatial frequency only changes slightly over many pixels and will
have gradual transitions in digital values (e.g. a lake or a smooth water surface).
Spatial Filtering Procedure
The filtering procedure involves moving a 'window' or kernel a few pixels in
dimension, with an odd size (e.g. 3x3, 5x5), over each pixel in the image, applying a
mathematical calculation using the pixel values under that window, and replacing the
central pixel with the new value. The process used to apply filters to an image is known as
convolution.
The following examples in this section will focus on some of the basic
filters applied within the spatial domain using the CONVOL function:

• Low Pass Filtering
• High Pass Filtering
• Edge Detection Filters:
  • Directional Filtering
  • Laplacian Filtering
The Spatial Filtering Process

Original image neighbourhood     Simple 3×3 filter
(centre pixel e):                (weights):
a b c                            r s t
d e f                            u v w
g h i                            x y z

e(processed) = r*a + s*b + t*c +
               u*d + v*e + w*f +
               x*g + y*h + z*i
This is repeated for every pixel in the original image to generate the filtered image.
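The procedure above can be sketched directly in Python (borders are simply left unchanged here; real packages offer several border strategies):

```python
import numpy as np

def convolve3x3(image, kernel):
    """Slide a 3x3 kernel over the image; each interior pixel is replaced
    by the weighted sum of its 3x3 neighbourhood."""
    out = image.astype(float)          # astype makes a copy of the image
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            window = image[i - 1:i + 2, j - 1:j + 2]
            out[i, j] = np.sum(window * kernel)
    return out

img = np.array([[104, 100, 108],
                [ 99, 106,  98],
                [ 95,  90,  85]], dtype=float)
mean_kernel = np.full((3, 3), 1 / 9)         # simple averaging (low-pass) filter
print(convolve3x3(img, mean_kernel)[1, 1])   # -> 98.333...
```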
Low-pass filter
• A low-pass filter is designed to emphasize larger, homogeneous areas of
similar tone and reduce (remove) the smaller detail in an image.
• Thus, low-pass filters generally serve to smooth the appearance of an image.
Average and median filters are examples of low-pass filters.
• An average filter simply averages all of the pixels in a neighbourhood around
a central value.
• Especially useful in removing noise from images; also useful for highlighting
gross detail.
Simple averaging filter:

1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9
Low Pass “Smoothing” Spatial Filtering

Original image pixels     3×3 smoothing filter
(neighbourhood):          (simple average):
104 100 108               1/9 1/9 1/9
 99 106  98               1/9 1/9 1/9
 95  90  85               1/9 1/9 1/9

e = 1/9*104 + 1/9*100 + 1/9*108 +
    1/9*99  + 1/9*106 + 1/9*98  +
    1/9*95  + 1/9*90  + 1/9*85
  = 98.3333

The above is repeated for every pixel in the original image
to generate the smoothed image
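If SciPy is available, the same 3×3 smoothing (and the median filter mentioned earlier) is a one-liner; the centre value matches the worked example:

```python
import numpy as np
from scipy import ndimage

img = np.array([[104, 100, 108],
                [ 99, 106,  98],
                [ 95,  90,  85]], dtype=float)

# 3x3 mean (low-pass) filter; 'reflect' mirrors the image at its borders.
smooth = ndimage.uniform_filter(img, size=3, mode='reflect')

# 3x3 median filter: removes salt-and-pepper noise while preserving edges.
median = ndimage.median_filter(img, size=3, mode='reflect')

print(smooth[1, 1])   # -> 98.33... (as in the worked example)
print(median[1, 1])   # -> 99.0 (median of the nine neighbourhood DNs)
```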
Low-pass filter
[Figure: example of a low-pass filtered image]
High-pass Filters
High-pass filters do the opposite of low-pass filters and serve to sharpen
(highlight) the appearance of fine detail in an image:
Remove blurring from images
Highlight edges

A high-pass filter is the basis for most sharpening methods. An image is sharpened when
contrast is enhanced between adjoining areas with little variation in brightness or darkness.

The kernel of the high-pass filter is designed to increase the brightness of the center pixel
relative to neighboring pixels. The kernel array usually contains a single positive value at its
center, which is completely surrounded by negative values.

Streets and highways, and some streams and ridges, are greatly emphasized. The trademark of
a high-pass filter image is that linear features commonly are defined as bright lines with a
dark border.
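One commonly used 3×3 kernel of this form (a single positive centre surrounded by negative weights, summing to 1 so that homogeneous areas keep their DN) can be applied with SciPy; the edge image below is a synthetic example:

```python
import numpy as np
from scipy import ndimage

# A common 3x3 high-pass (sharpening) kernel.
high_pass = np.array([[-1., -1., -1.],
                      [-1.,  9., -1.],
                      [-1., -1., -1.]])

# A vertical edge between a dark region (DN 10) and a bright one (DN 50).
img = np.array([[10., 10., 50., 50.],
                [10., 10., 50., 50.],
                [10., 10., 50., 50.],
                [10., 10., 50., 50.]])

sharp = ndimage.convolve(img, high_pass, mode='reflect')
print(sharp[1, :])   # -> [ 10. -110.  170.   50.]: the edge is exaggerated
```

Uniform areas are unchanged (the weights sum to 1), while pixels on either side of the edge are pushed darker and brighter, which is exactly the bright-line/dark-border signature described above.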
High-pass Filters
[Figure: example of a high-pass filtered image]
Image Transformations
Image transformations typically involve the manipulation of multiple
bands of data, whether from a single multispectral image or from two
or more images of the same area acquired at different times (i.e.
multitemporal image data).
Image Transformations
Arithmetic operations
Basic image transformations apply simple arithmetic operations to the
image data

Image subtraction is often used to identify changes that have occurred
between images collected on different dates.
Image Ratioing
Image division or image ratioing serves to highlight subtle (small) variations in the
spectral responses of various surface covers.

Example illustrating the concept of spectral ratioing: healthy vegetation reflects
strongly in the near-infrared portion of the spectrum while absorbing strongly in the
visible red. Other surface types, such as soil and water, show near-equal reflectances
in both the near-infrared and red portions. Thus, a ratio image of Landsat TM Band
4 (Near-Infrared) divided by Band 3 (Red) would result in ratios much greater than
1.0 for vegetation, and ratios around 1.0 for soil and water. The discrimination
of vegetation from other surface cover types is thereby significantly enhanced.
[Figure: TM Band 4, TM Band 3, and the ratio image b4/b3]
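The Band 4 / Band 3 example can be sketched numerically with hypothetical DNs for three surface types:

```python
import numpy as np

# Hypothetical DNs for vegetation, soil, and water pixels.
b4 = np.array([120.0, 60.0, 20.0])   # TM Band 4 (near-infrared)
b3 = np.array([ 30.0, 55.0, 22.0])   # TM Band 3 (red)

ratio = b4 / b3
print(ratio)   # -> [4.0, ~1.09, ~0.91]: >> 1 for vegetation, ~1 otherwise
```

In practice a small offset or an `np.where` guard is added to avoid division by zero, and the closely related normalized difference (b4 - b3) / (b4 + b3) is often preferred because it is bounded.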
Principal components analysis

Problem:
• Multispectral remote sensing datasets comprise a set of variables (the spectral bands), which are
usually correlated to some extent
• That is, variations in the DN in one band may be mirrored by similar variations in another band
(when the DN of a pixel in Band 1 is high, it is also high in Band 3, for example).

Solution:
▪ Principal Component Analysis (PCA) of the statistical characteristics of multi-
band data sets is used to produce uncorrelated output bands and to reduce the
dimensionality (i.e. the number of bands) in the data, and maximize
(statistically) the amount of information from the original data into the least
number of new components.
▪ Principal component transformations can be applied either as an enhancement
operation prior to visual interpretation or as a preprocessing procedure prior
to automated classification of the data.
▪ PCA “bands” produce more colorful color composite images than spectral
color composite images because the variance in the data has been maximized.
▪ By selecting which PCA bands to exclude in further processing, you can
reduce the amount of data you are handling, eliminate noise, and reduce
computational requirements.
▪ PCA can also highlight changes in imagery collected over the same area at different times.
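A compact sketch of the transformation on a hypothetical two-band image (flattened to pixels × bands), using the eigen-decomposition of the band covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-band data in which band 2 is strongly correlated with band 1.
band1 = rng.normal(100.0, 20.0, 1000)
band2 = 0.9 * band1 + rng.normal(0.0, 5.0, 1000)
X = np.column_stack([band1, band2])

Xc = X - X.mean(axis=0)                  # centre each band
cov = np.cov(Xc, rowvar=False)           # band covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]        # largest variance first
pcs = Xc @ eigvecs[:, order]             # uncorrelated principal components

explained = eigvals[order] / eigvals.sum()
print(explained)   # nearly all the variance lands in the first component
```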
Principal components analysis
[Figure: example of principal component images]
Image Classification
Digital image classification uses the spectral information represented by the
digital numbers in one or more spectral bands, and attempts to classify each
individual pixel based on this spectral information. This type of classification is
termed spectral pattern recognition.
The objective is to assign all pixels in the image to particular classes or themes
(e.g. water, coniferous forest, deciduous forest, corn, wheat, etc.).
The resulting classified image is comprised of a mosaic of pixels, each of which
belong to a particular theme, and is essentially a thematic "map" of the original
image.
Image Classification
• We need to distinguish between information classes and spectral classes.
•Information classes are those categories of interest that the analyst is actually
trying to identify in the imagery, such as different kinds of crops, different forest types
or tree species, different geologic units or rock types, etc.
•Spectral classes are groups of pixels that are uniform (or near-similar) with
respect to their brightness values in the different spectral channels of the data.

• The objective is to match the spectral classes in the data to the information classes.

• Rarely is there a simple one-to-one match between these two types of classes. Also,
unique spectral classes may appear which do not necessarily correspond to any pre-defined
information class of particular use or interest to the analyst.

• Alternatively, a broad information class (e.g. forest) may contain a number of spectral sub-
classes with unique spectral variations. Using the forest example, spectral sub-classes
may be due to variations in age, species, and density, or perhaps as a result of shadowing
or variations in scene illumination.

• It is the analyst's job to decide on the utility of the different spectral classes and their
correspondence to useful information classes.
Image Classification Procedures
(1) Design an image classification scheme (system): these are usually information classes
such as urban, agriculture, forest areas, etc. Conduct field studies and collect ground
information and other ancillary data of the study area.

(2) Preprocessing of the image, including radiometric, atmospheric, geometric and
topographic corrections, image enhancement, and initial image clustering.

(3) Select representative areas on the image and analyze the initial clustering results or
generate training signatures.

(4) Image classification methods:

(1) Pixel-based classification
(2) Sub-pixel-based classification
(3) Object-based classification

(5) Post-processing: complete geometric correction & filtering and classification decorating.

(6) Accuracy assessment: compare classification results with field studies.


Supervised Classification
A pixel-labelling algorithm is used to assign a pixel to an information class.
-The analyst identifies in the imagery homogeneous representative samples of the
different cover types (information classes) or training areas.

-Selection of training areas is based on the analyst's familiarity with the study area
(field work). The analyst is "supervising" the categorization of classes.

- Spectral information in all bands for the pixels comprising the training areas is used
to recognize spectrally similar areas for each class; this means creating "signatures".

- Each pixel in the image is compared to these signatures and labeled as the class it
most closely "resembles" digitally.

Some examples of this kind of approach are:

1. Maximum likelihood
2. Minimum distance
3. Artificial neural network
4. Decision tree classifier
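As an illustration, the minimum-distance approach listed above can be sketched in a few lines, using hypothetical two-band training signatures:

```python
import numpy as np

# Mean spectral "signatures" (DN in two bands) from the training areas.
signatures = {
    "water":  np.array([ 20.0, 15.0]),
    "forest": np.array([ 45.0, 90.0]),
    "urban":  np.array([110.0, 95.0]),
}

def classify(pixel):
    """Label the pixel with the class whose training-area mean is
    closest in spectral (feature) space."""
    return min(signatures, key=lambda c: np.linalg.norm(pixel - signatures[c]))

print(classify(np.array([25.0, 20.0])))     # -> water
print(classify(np.array([100.0, 100.0])))   # -> urban
```

Maximum likelihood replaces the plain Euclidean distance with a probability computed from each class's covariance, but the pixel-by-pixel labelling loop is the same.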
Unsupervised Classification
Unsupervised classification requires no advance information about the classes of
interest.
It examines the data and breaks it into the most prevalent natural spectral groupings,
or clusters, present in the data.
The analyst then identifies these clusters as land cover classes through familiarity
with the study area.

1. ISODATA and
2. K-means
are the most used algorithms for unsupervised classification.
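A plain k-means sketch on hypothetical two-band pixel data (ISODATA extends this with cluster splitting and merging):

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Cluster pixels into k spectral groupings with no advance
    information about the classes present."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # Assign every pixel to its nearest cluster centre.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centre to the mean of the pixels assigned to it.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# Two hypothetical spectral groupings (e.g. water and vegetation pixels).
pts = np.vstack([np.random.default_rng(1).normal([20.0, 15.0], 2.0, (50, 2)),
                 np.random.default_rng(2).normal([45.0, 90.0], 2.0, (50, 2))])
labels, centers = kmeans(pts, k=2)
print(np.sort(centers[:, 0]))   # the centres recover the two group means
```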
