Unit II
30-Jul-19
Praveen S
GEOSPATIAL TECHNIQUES
Digital Image Processing
Syllabi
Digital Image Processing
Basics of Digital Image Processing: Image Restoration: Geometric Corrections,
Co-Registration of data, Ground Control Points (GCPs), Atmospheric Corrections,
Solar Illumination Corrections, Image Enhancement: Concept of Color, Color
Composites, Linear and Non Linear Contrast Stretching, Filtering Techniques,
Edge Enhancement, Density Slicing, Information Extraction: Multispectral
GMR Institute of Technology
Content
GEOMETRIC CORRECTIONS
The flux radiance registered by a remote sensing system ideally represents the radiant energy
leaving surfaces of the earth such as vegetation, urban land and water bodies. Unfortunately, this
energy flux is interspersed with errors, both internal and external, which exist as noise within the
data. The internal errors, also known as systematic errors, are created by the sensor and hence are
systematic and quite predictable. The external errors are largely due to perturbations of the
platform or atmospheric scene characteristics. Image pre-processing is the set of techniques used
to correct this degradation/noise, thereby producing a corrected image that replicates the surface
characteristics as closely as possible. The transformation of a remotely sensed image so that it
possesses the scale and projection properties of a given map projection is called geometric
correction or georeferencing. A related technique, essential for georeferencing, is registration,
which fits the coordinate system of one image to that of a second image of the same area.
Systematic Errors
The sources of systematic errors in a remote sensing system are explained below :
• Scan skew
Caused when the ground swath is not normal to the ground track but is skewed because of the
forward motion of the platform during the time of scan.
• Platform velocity
Caused by a change in the speed of the platform, resulting in along-track scale distortion.
• Earth rotation
Caused by the rotation of the earth during the scan period, resulting in along-scan distortion.
While the satellite moves along its orbital track, the earth rotates beneath it with a surface
velocity that depends on the latitude of the nadir point, so successive scan lines are progressively
displaced. This can be corrected provided we know the distance travelled by the satellite and its
velocity.
• Mirror scan velocity
Caused when the rate of scanning is not constant, resulting in along-scan geometric distortion.
• Aspect ratio
Sensors like the MSS of Landsat produce images whose pixels are not square. The instantaneous
field of view of the MSS is 79 m, while the spacing between pixels along each scan line is 56 m.
This results in pixels that are not square, due to over-sampling in the across-track direction.
a. Altitude: Caused by departures of the sensor altitude from nominal, resulting in a change of scale.
b. Attitude: Errors due to attitude variations can be attributed to the roll, pitch and yaw of the
satellite. A schematic showing the roll, pitch and yaw distortions pertaining to an aircraft is
depicted. Some of these errors can be corrected with knowledge of the platform ephemeris,
ground control points, sensor characteristics and spacecraft velocity.
[Figure: Schematic representation of the systematic and non-systematic distortions]
RADIOMETRIC CORRECTIONS
The energy registered by the sensor will not be exactly equal to that emitted or reflected from
the terrain surface, owing to radiometric and geometric errors that alter the original data. Of
these, the geometric error types and their methods of correction have been discussed in the
previous lecture. Radiometric errors can be sensor-driven or due to atmospheric attenuation.
Before remote sensing images are analysed, it is essential that these error types are identified
and removed.
Some detectors generate noise that is a function of the relative gain/offset differences among the
detectors within a band, which results in banding (striping). Such errors can be corrected using a
histogram-based approach.
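As a minimal NumPy sketch of this histogram-based destriping idea: lines produced by the same detector recur every n-th scan line, so each detector's lines can be rescaled to match the whole band's statistics. The detector count and the mean/standard-deviation matching rule here are generic illustrative assumptions, not the specification of any particular sensor.

```python
import numpy as np

def destripe(band, n_detectors):
    """Adjust each detector's scan lines so their mean and standard
    deviation match the whole band's statistics (removes banding)."""
    band = band.astype(float)
    target_mean, target_std = band.mean(), band.std()
    out = band.copy()
    for d in range(n_detectors):
        lines = band[d::n_detectors, :]          # every n-th scan line
        m, s = lines.mean(), lines.std()
        gain = target_std / s if s > 0 else 1.0
        out[d::n_detectors, :] = (lines - m) * gain + target_mean
    return out
```

After correction, the per-detector line statistics agree, so the visible striping pattern disappears while overall image contrast is preserved.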
ATMOSPHERIC CORRECTIONS
The DN measured or registered by a sensor is composed of two components: the actual radiance
of the pixel, which we wish to record, and an atmospheric component. The magnitude of the
radiance leaving the ground is attenuated by atmospheric absorption, and its directional
properties are altered by scattering. Other sources of error are the varying illumination geometry,
which depends on the sun's azimuth and elevation angles, and the ground terrain. As atmospheric
properties vary from time to time, it becomes essential to correct the radiance values for
atmospheric effects. However, because the atmosphere is a highly dynamic and complex system,
it is practically impossible to fully model the interactions between the atmospheric system and
electromagnetic radiation.
[Figure: Atmospheric correction to DN measured by remote sensing sensors]
The means of correcting for atmospheric attenuation are described below.
Image-based methods
Some of the simple techniques are based on the histogram minimum method and regression. The
extent to which the atmosphere alters the true DN is best seen by examining the DN histograms
for the various bands. Many scenes contain very dark pixels (such as those in deep shadow), and it
might be assumed that they should have a DN of zero. A first-order atmospheric correction may be
applied to remotely sensed datasets by assuming that the offsets are due solely to atmospheric
effects and subtracting the offset from each DN.
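The histogram minimum (dark-object subtraction) idea can be sketched in a few lines of NumPy. Using a low percentile rather than the strict minimum is an assumption added here for robustness against stray dark pixels; it is not part of the method as stated above.

```python
import numpy as np

def dark_object_subtract(band, percentile=0.1):
    """First-order atmospheric correction: treat the darkest DNs as the
    atmospheric offset (path radiance) and subtract it from every pixel."""
    offset = np.percentile(band, percentile)
    return np.clip(band - offset, 0, None)   # DNs cannot go negative
```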
Empirical method
This method relies on a priori knowledge of the reflectance of two targets, one light and one dark.
The radiances recorded by the sensor can be calculated from the DNs of the images, and the line
joining the two target points can be defined to determine the intercept representing the
atmospheric radiance. It has also been argued that this correction step can be avoided: when the
training data and the data to be classified are both measured on the same relative scale, the
atmospheric attenuation in both sources tends to cancel out.
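The two-target (empirical line) calculation above amounts to fitting a straight line through two known points. A minimal sketch, where the example DN and reflectance values are purely illustrative:

```python
def empirical_line(dn_dark, ref_dark, dn_light, ref_light):
    """Fit reflectance = gain * DN + offset from a dark and a light target
    of known reflectance. The intercept (reflectance at DN extrapolated to
    zero) represents the atmospheric radiance term."""
    gain = (ref_light - ref_dark) / (dn_light - dn_dark)
    offset = ref_dark - gain * dn_dark
    return gain, offset
```

Applying the returned gain and offset to every DN in the band converts the image to (relative) surface reflectance.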
GROUND CONTROL POINTS
Remotely sensed images obtained raw from the satellites contain systematic and
non-systematic geometric errors. Some of the errors can be rectified with
additional information about the satellite ephemeris, sensor characteristics,
etc. Others can be corrected by using ground control points (GCPs).
These are well-defined points on the surface of the earth whose coordinates
can be estimated easily on a map as well as on the image.
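Once GCPs have been located on both the map and the image, a low-order polynomial transform can be fitted between the two coordinate systems. The sketch below fits a first-order (affine) transform by least squares using NumPy; the choice of an affine model is an illustrative assumption (higher-order polynomials are used when distortions are more complex).

```python
import numpy as np

def fit_affine(img_xy, map_xy):
    """Least-squares affine transform (first-order polynomial) from image
    pixel coordinates to map coordinates, using three or more GCPs."""
    img_xy = np.asarray(img_xy, float)
    map_xy = np.asarray(map_xy, float)
    A = np.column_stack([img_xy, np.ones(len(img_xy))])  # rows [x, y, 1]
    coeff, *_ = np.linalg.lstsq(A, map_xy, rcond=None)   # 3x2 coefficients
    return coeff

def apply_affine(coeff, xy):
    """Map image coordinates through the fitted transform."""
    xy = np.asarray(xy, float)
    return np.column_stack([xy, np.ones(len(xy))]) @ coeff
```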
CONCEPTS OF COLOUR
The field of digital image processing relies on mathematical and probabilistic formulations accompanied by
human intuition and analysis based on visual and subjective judgements. Among the various elements of
image interpretation, colour plays a crucial role in identifying and extracting objects from an image. Colour
image processing can be broadly classified into two categories: full-colour and pseudo-colour processing.
Full-colour images are usually acquired by a colour TV camera or colour scanner, whereas pseudo-colour
images are created by assigning colours to a grey-scale image of monochrome intensity. A beam of light
passing through a glass prism branches into a continuous spectrum of light ranging from violet, blue, green,
yellow and orange to red. The colours that all living beings perceive in an object are basically due to the
nature of the light reflected from the object. Within the electromagnetic spectrum, visible light is composed
of a very narrow band of frequencies. Achromatic light (i.e., light without colour) can be characterized by a
scalar measure of intensity ranging from black, through grey, to white. Chromatic light, on the other hand,
can be described by radiance, luminance and brightness. Radiance, measured in watts, refers to the total
amount of energy that flows from a light source. Of this, the amount of light energy perceived by an observer
is termed luminance, measured in lumens (lm).
[Figure: Chromaticity Diagram]
Colour Space
RGB models are usually used in video cameras, CMYK models in colour
printing, and the IHS model resembles the way humans interpret colour.
RGB
In the RGB model, the primary colours red, green and blue are used within a
Cartesian coordinate system. The RGB colour model is shown in the figure,
where the primaries red, green and blue occupy three corners of a cube, black
lies at the origin, white at the opposite corner, and cyan, magenta and yellow
occupy the remaining three corners. The cube shown is a unit cube, with the
underlying assumption that all colour values have been normalized. Pixel depth
is the name given to the number of bits used to represent each pixel within an
RGB space. A 24-bit RGB image contains (2^8)^3 = 16,777,216 colours.
IHS colour model
The RGB and CMY models fail to describe colours in a way that is of practical
interest to humans, who define the colour of an object in terms of its hue,
saturation and brightness. The IHS colour model represents intensity (I), hue (H)
and saturation (S) within a colour image and hence is suitable for algorithms
based on colour descriptions that are more intuitive to humans. Within the IHS
sphere, the intensity axis represents variations in brightness (from black at 0
to white at 255). Hue represents the dominant wavelength of colour (0 at the
midpoint of the red tones, increasing anticlockwise to 255). Saturation
represents the purity of colour and ranges from 0 at the centre of the sphere
to 255 at the circumference. A saturation of 0 represents a completely impure
colour, in which all wavelengths are equally represented (grey tones).
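One common formulation of the RGB-to-IHS transform can be sketched as below. Note the conventions differ from the 0-255 sphere described above: this sketch assumes RGB components normalized to 0-1 and returns hue in degrees, with hue arbitrarily set to 0 for pure greys (where it is undefined).

```python
import math

def rgb_to_ihs(r, g, b):
    """Convert normalized RGB (0-1) to intensity, hue (degrees), saturation."""
    i = (r + g + b) / 3.0
    mn = min(r, g, b)
    s = 0.0 if i == 0 else 1.0 - mn / i        # purity: 0 = grey, 1 = pure
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0                                 # grey: hue undefined
    else:
        h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        if b > g:
            h = 360.0 - h                       # lower half of the colour circle
    return i, h, s
```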
IMAGE FILTERING
IMAGE TRANSFORMATION
IMAGE CLASSIFICATION
Image registration is the process of transforming different sets of data into
one coordinate system. The data may be multiple photographs, or data from
different sensors, times, depths, or viewpoints. Two common forms are
image-to-image registration and image-to-map registration. For example,
selected image data of the Khorat area was rectified with reference to the
1:50 000 scale topographic maps (image-to-map registration).
IMAGE ENHANCEMENT alters the visual impact that the image has on the
interpreter. Common enhancement techniques include:
a. Contrast enhancement
b. Intensity, hue, and saturation transformations
c. Density slicing
d. Edge enhancement
e. Making digital mosaics
f. Producing synthetic stereo images
Contrast Enhancement
Contrast ratio has a strong influence on the resolving power and detection
capability of images, and techniques for improving image contrast are among
the most widely used enhancement processes. The sensitivity range of a remote
sensing detector is designed to record a wide range of terrain brightness, from
black basalt plateaus to white sea beds, under a wide range of lighting conditions.
Nonlinear Contrast Stretch: Nonlinear contrast enhancement redistributes DN
values using a nonlinear function; a common example is histogram equalization,
in which values are reassigned so that each display level contains an
approximately equal number of pixels.
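Both stretches can be sketched with NumPy. The percentile clip points in the linear stretch and the 256-level display range are illustrative assumptions; the equalization follows the standard cumulative-histogram look-up-table approach.

```python
import numpy as np

def linear_stretch(band, low_pct=2, high_pct=98):
    """Linear contrast stretch: map the [low, high] percentile DN range
    onto the full 0-255 display range, clipping the tails."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    out = (band.astype(float) - lo) / (hi - lo) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

def equalize(band, levels=256):
    """Nonlinear stretch by histogram equalization: build a look-up table
    from the cumulative histogram so display levels are used evenly."""
    hist, _ = np.histogram(band, bins=levels, range=(0, levels))
    cdf = hist.cumsum() / hist.sum()
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[band.astype(np.uint8)]
```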
Density Slicing
Density slicing converts the continuous grey tone of an image into a series of density intervals, or
slices, each corresponding to a specified range of DN values. Slices may be displayed as areas
bounded by contour lines. This technique emphasizes subtle grey-scale differences that may
otherwise be imperceptible to the viewer.
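Density slicing reduces to binning DN values at chosen boundaries; NumPy's `digitize` does exactly this. The boundary values below are purely illustrative.

```python
import numpy as np

def density_slice(band, boundaries):
    """Convert continuous grey tones into discrete slices: pixels within
    each DN interval receive one integer slice label, which can then be
    displayed as a distinct grey level or colour."""
    return np.digitize(band, bins=boundaries)
```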
Edge Enhancement
Most interpreters are concerned with recognizing linear features in images, such as joints and
lineaments; geographers map man-made linear features such as highways and canals. Some
linear features occur as narrow lines against a background of contrasting brightness; others are
the linear contact between adjacent areas of different brightness. In all cases, linear features are
formed by edges. Some edges are marked by pronounced brightness differences, but others are
subtle and may be difficult to recognize. Contrast enhancement may emphasize the brightness
differences associated with some linear features. This procedure, however, is not specific to
linear features, because all elements of the scene are enhanced equally, not just the linear
elements. Digital filters have been developed specifically to enhance edges in images; they fall
into two categories: directional and non-directional.
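A directional edge filter can be sketched with the classic Sobel kernels, combined into a non-directional edge magnitude. The plain-loop sliding-window filter below is written for clarity, not speed, and uses 'valid' output (no border padding), an illustrative choice.

```python
import numpy as np

def filter2d(img, kernel):
    """Minimal 'valid' sliding-window filter (cross-correlation)."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Directional (Sobel) kernels
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # vertical edges
SOBEL_Y = SOBEL_X.T                                        # horizontal edges

def edge_magnitude(img):
    """Non-directional edge strength from the two directional responses."""
    gx = filter2d(img, SOBEL_X)
    gy = filter2d(img, SOBEL_Y)
    return np.hypot(gx, gy)
```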
INFORMATION EXTRACTION
Image restoration and enhancement processes utilize computers to provide
corrected and improved images for study by human interpreters; the computer
makes no decisions in these procedures. Processes that identify and extract
information, however, do utilize the computer's decision-making capability to
identify and extract specific pieces of information. A human operator must
instruct the computer and must evaluate the significance of the extracted
information.
For any pixel in a multispectral image, the DN values in the different bands are
commonly highly correlated. This correlation means that if the reflectance of a
pixel in one band (IRS band 2, for example) is known, one can predict the
reflectance in adjacent bands (IRS bands 1 and 3). The correlation also means
that there is much redundancy in a multispectral data set. If this redundancy
could be reduced, the amount of data required to describe a multispectral image
could be compressed.
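The standard way to remove this inter-band redundancy is a principal components transformation. A minimal NumPy sketch, assuming the bands are stacked as an array of shape (n_bands, rows, cols):

```python
import numpy as np

def principal_components(bands):
    """Principal components of a multispectral image. Returns PC images
    in the same (n_bands, rows, cols) shape, ordered by decreasing
    variance, so most information concentrates in the first few PCs."""
    n, r, c = bands.shape
    X = bands.reshape(n, -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)          # centre each band
    cov = np.cov(X)                             # n x n band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]           # largest variance first
    pcs = eigvecs[:, order].T @ X
    return pcs.reshape(n, r, c)
```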
CHANGE DETECTION IMAGES
A change-detection image is produced by subtracting the DN values of one image
from those of a second, co-registered image acquired at a different time. The
next step is to plot these values as an image in which a neutral grey tone
represents zero; black and white tones represent the maximum negative and
positive differences, respectively. Contrast stretching is employed to emphasize
the differences. The agricultural practice of seasonally alternating between
cultivated and fallow fields can be clearly shown by the light and dark tones in
the difference image. Change-detection processing is also useful for producing
difference images from other remote sensing data, such as between night-time
and daytime thermal IR images.
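The subtract-and-recentre step can be sketched as below; the 256-level display range and the stretch-to-full-range rule are illustrative choices.

```python
import numpy as np

def difference_image(before, after, levels=256):
    """Change-detection image: subtract co-registered DNs and re-centre
    on mid-grey, so zero change plots as neutral grey, negative change
    as dark tones, and positive change as light tones."""
    diff = after.astype(float) - before.astype(float)
    mid = (levels - 1) / 2.0
    scale = mid / max(abs(diff).max(), 1)       # stretch to full range
    return np.clip(diff * scale + mid, 0, levels - 1).astype(np.uint8)
```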
Spatial filters are commonly grouped by the spatial frequencies they pass:
• Low frequency (smoothing)
• High frequency (edge and detail)
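The two filter types above can be sketched as 3x3 convolution kernels: a low-pass (local mean) kernel smooths the image, and a complementary high-pass kernel boosts local detail. The specific 3x3 kernels are common illustrative choices, not the only ones.

```python
import numpy as np

# Low-pass: 3x3 mean filter. High-pass: identity minus the mean filter,
# so its weights sum to zero and it responds only to local variation.
LOW_PASS = np.full((3, 3), 1 / 9.0)
HIGH_PASS = -LOW_PASS.copy()
HIGH_PASS[1, 1] += 1.0                          # centre weight 8/9

def filter2d(img, kernel):
    """Minimal 'valid' sliding-window filter (no border padding)."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out
```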
IMAGE CLASSIFICATION
This is the science of converting RS data into meaningful categories
representing the surface conditions or classes. It may be based on:
• Spectral pattern recognition
• Spatial pattern recognition
• Temporal pattern recognition
Types of Classification
Classification is the process of labelling a pixel or group of pixels in an image
on the basis of similarity in (statistical) properties (spectral, spatial,
temporal). It is the most widely used approach for preparing thematic maps in
RS. The traditional methods of classification mainly follow two approaches:
• Unsupervised
• Supervised
Unsupervised approach
• The unsupervised approach attempts to identify spectrally homogeneous
clusters of pixels within the image.
• It results in spectral groupings that may have an unclear meaning from the
user's point of view. Having established these groupings, the analyst then
tries to associate an information class with each one.
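One common way to find spectrally homogeneous clusters is k-means. A minimal sketch, where the deterministic initialization (centres taken at evenly spaced pixel indices) and the fixed iteration count are simplifying assumptions rather than a production clustering scheme:

```python
import numpy as np

def kmeans_classify(pixels, k, iters=20):
    """Cluster pixel spectra (n_pixels x n_bands) into k spectral groups.
    Returns a cluster label per pixel; the analyst must afterwards attach
    an information class to each spectral cluster."""
    pixels = np.asarray(pixels, float)
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centres = pixels[idx].copy()                 # simple deterministic init
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - centres[None], axis=2)
        labels = d.argmin(axis=1)                # assign to nearest centre
        for c in range(k):
            if np.any(labels == c):
                centres[c] = pixels[labels == c].mean(axis=0)
    return labels
```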
Supervised approach
• In the supervised approach to classification, the image analyst supervises
the pixel categorization process by specifying, to the computer algorithm,
numerical descriptors of the various land cover types present in the scene.
• Representative sample sites of known cover types, called training areas or
training sites, are used to compile a numerical interpretation key that
describes the spectral attributes of each feature type of interest
(training stage; see figure).
• Each pixel in the data set is then compared numerically to each category
in the interpretation key and labelled with the name of the category it most
resembles (classification stage).
• The final result is a thematic map in which each pixel has a fixed label or
class assigned to it.
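The training and classification stages above can be sketched with a minimum-distance-to-means classifier, one of the simplest supervised schemes. Using class mean vectors as the "interpretation key" and Euclidean distance as the comparison rule are the defining assumptions of this particular classifier; other rules (e.g. maximum likelihood) use richer statistics.

```python
import numpy as np

def train_min_distance(training_pixels):
    """Training stage: compute a mean spectral vector for each class from
    its training-site pixels. training_pixels: dict name -> (n, bands)."""
    return {name: np.asarray(p, float).mean(axis=0)
            for name, p in training_pixels.items()}

def classify_min_distance(pixels, class_means):
    """Classification stage: label each pixel with the class whose mean
    spectrum is nearest (minimum Euclidean distance)."""
    names = list(class_means)
    means = np.stack([class_means[n] for n in names])
    d = np.linalg.norm(
        np.asarray(pixels, float)[:, None, :] - means[None], axis=2)
    return [names[i] for i in d.argmin(axis=1)]
```

The class names and band values in the test below are hypothetical two-band examples.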