Unit III: Satellite Data Products and Processing
The raw remote sensing data recorded through the medium of electromagnetic radiation contain many
systematic distortions and errors. The data have to be processed to remove these distortions and errors.
Finally, the data are converted into various products to be supplied to users for various applications. There
is thus a remarkable difference between the raw remote sensing data as recorded and the processed
remote sensing data supplied to users.
Raw remote sensing data is like a block of aluminium metal before it is shaped into a usable
utensil or vessel. Turning raw satellite images into products is like collecting a raw material and
processing it into a finished one, just as aluminium ore (bauxite) is processed into aluminium metal
that can then be used to make utensils and vessels.
Data Products
The first step in starting a geoinformatics-related project is to place an order for the procurement of
remote sensing data. There are two types of data used in geoinformatics: raster data and vector data.
When we talk of remote sensing data, we always mean raster data. In its simplest form, raster data is a
matrix of cells or pixels.
a) Standard Products: These products are generated from information received directly from the satellites, after applying the
necessary radiometric and geometric corrections.
i). Path-Row Based Standard Products: The data are recorded by the sensors along the track/path of the satellite, covering a
specific width across the track. To procure the desired products, the user specifies the path and row of the scene, the sensor,
the sub-scene (if any), the number of scenes, the date of the satellite pass, the band numbers/band combination (for
photographic products), and the product code (specified by the data provider); in India, for example, orders are placed with the
National Data Centre (NDC) of the National Remote Sensing Centre (NRSC), Hyderabad.
ii). Shift Along Track (SAT) Products: If a user's area of interest falls between two successive scenes of the same path (one
below the other), the data can be supplied by sliding the scene in the along-track direction. These products are called
Shift-Along-Track products. In this case, the percentage of shift (10% to 90%, in multiples of 10%) has to be specified by the
user in addition to the other necessary information about both scenes.
iii). Quadrant Products: In the IRS series of satellites, a LISS III full scene (path/row based product) is divided into 4 nominal
quadrants designated by the letters A, B, C and D. Each of these quadrants corresponds to one full scene of PAN data
(B & W data), which is further divided into 9 sub-scenes numbered 1 to 9. While placing a request for these products, users
need to specify the quadrant number in addition to the details specified for path/row based products.
iv). Georeferenced Products: Georeferencing is the process of transforming remote sensing data or a map to a coordinate
and projection system. This means all the objects/elements in the remote sensing data and the map get a specific
geographic location in terms of longitude and latitude.
v). Basic Stereo Products: A stereo-pair comprises two images of the same area acquired on different dates, or on the same
day from different angles. These products are used for generating digital elevation models or 3D visualizations of the study
area. The Cartosat-1 mission provides along-track stereo images.
When standard products are processed according to the specifications and requirements of users, they become value added
products. These products are basically of four types, viz.
● Geocoded Products: These products are also known as georeferenced products. Geocoding is the process of determining
geographic coordinates for place names, street addresses, and codes (e.g., zip codes).
● Merged Products: Remote sensing satellites record multiband/multispectral (MSS) data and single-band/panchromatic (PAN)
data. Panchromatic/mono-band data have a higher spatial resolution, while multi-band data have a comparatively lower spatial
resolution. When MSS data are merged with panchromatic data, we get MSS data with the spatial resolution of the
panchromatic data. Data can be merged only when both the MSS and PAN data have been registered to the same
georeferencing system (see the pan-sharpening sketch after this list).
● Ortho Products: Geometrically corrected products, with corrections for the displacement caused by tilt and relief, are
called ortho products. Ortho images therefore show ground objects in their true planimetric positions, like the
positions of objects on a map. The basic inputs required for ortho-image generation are: (i) a Digital Elevation Model,
(ii) Ground Control Points, (iii) satellite ephemeris (orbit and attitude information), and (iv) radiometrically corrected
satellite data.
● Template Registered Products: In specific cases, such as the study and monitoring of crops, data of the
same area with similar geometric fidelity, registered temporally, are required. Such data sets are called
template registered products. In India, data from the AWiFS sensor are envisaged for extensive use in crop monitoring.
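As referenced in the Merged Products item above, injecting PAN spatial detail into MSS bands can be illustrated with a small sketch. Below is a minimal Brovey-transform pan-sharpening example, assuming the MS bands have already been resampled and co-registered to the PAN grid; the array shapes and synthetic data are assumptions of this example, not any provider's actual product format.

```python
import numpy as np

def brovey_pansharpen(ms, pan):
    """Pan-sharpen co-registered multispectral data with the Brovey transform.

    ms  : float array of shape (3, H, W) -- MS bands resampled to the PAN grid
    pan : float array of shape (H, W)    -- higher-resolution panchromatic band
    Both inputs must already be registered to the same georeferencing system.
    """
    intensity = ms.sum(axis=0) + 1e-10   # per-pixel intensity; avoid divide-by-zero
    return ms * (pan / intensity)        # rescale each MS band by the PAN/intensity ratio

# Synthetic data standing in for real registered imagery
ms = np.random.rand(3, 512, 512)
pan = np.random.rand(512, 512)
sharpened = brovey_pansharpen(ms, pan)
print(sharpened.shape)  # (3, 512, 512) -- MS bands carrying PAN spatial detail
```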
Information extracted from remote sensing data is also provided as a useful product by data providers. For example,
vegetation index maps or sea surface temperature profiles extracted from various remote sensing data products are called
derived products. These derived products are generated by further processing/analysing the data and are readily usable
by the user. Based on the output medium, data products are available on both photographic and digital media.
Photographic products can be supplied as films or prints. Output products can vary in scale from 1:1 million to 1:50,000 or
even 1:25,000.
Products Based on Output Media/Scale
● Photographic media
● Digital media.
Remotely sensed data may contain noise and other deficiencies arising from the onboard sensors
or from radiative transfer processes. Therefore, we often apply further preprocessing techniques to
deal with such flaws. These different processing techniques are generally referred to as image
processing.
There are several image processing techniques used in Earth observation, and we tend to
categorize them into four broad categories: Preprocessing, Transformation, Correction, and
Classification.
Pre-Processing Techniques
Some distortions need to be corrected before carrying out analysis and post-processing
techniques. The image preparation and processing operations carried out before analysis to
correct or minimize image distortions from imaging systems, sensors, and observing
conditions are often referred to as pre-processing techniques.
Typical pre-processing operations include, but are not limited to, the following types:
● Radiometric Correction
● Atmospheric Corrections
● Geometric Correction
Radiometric Correction
Radiometric distortions usually come from two sources: sensor characteristics and differences in
illumination conditions. Under this type of distortion, the values recorded when the image was taken
may not coincide with the energy actually emitted or reflected by the objects. Therefore, these
radiometric distortions must be handled before image interpretation and analysis.
Radiometric corrections are classified into two broad categories: sun angle/topography
radiometric corrections and sensor irregularity radiometric corrections.
Sun angle/topography radiometric corrections correct the effects of the diffusion of sunlight,
especially on water surfaces and in mountains, by estimating the shading curve.
Sensor irregularity corrections, on the other hand, involve removing radiometric noise caused by
changes in sensor sensitivity or degradation of the sensor. The correction process in this
category establishes a new relationship between calibrated irradiance measurements and the sensor
output signal; the process is therefore also called calibration.
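As an illustration of the calibration step, here is a minimal sketch assuming the common linear sensor model (radiance = gain × DN + offset); the gain and offset values are invented for the example, not any sensor's published coefficients.

```python
import numpy as np

def dn_to_radiance(dn, gain, offset):
    """Convert raw digital numbers (DN) to at-sensor spectral radiance
    using the linear calibration model L = gain * DN + offset."""
    return gain * dn.astype(np.float64) + offset

# Illustrative 8-bit band with assumed (not sensor-specific) coefficients
dn = np.random.randint(0, 256, size=(100, 100))
radiance = dn_to_radiance(dn, gain=0.055, offset=1.2)  # units assumed: W/(m^2 sr um)
```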
Figure: Comparison of images before and after radiometric correction. (a) Original November 2005 image with contrails
(bands 4, 3, 2 composite). (b) The same November 2005 image after radiometric correction. (c) Original February 2006
image without contrails. (d) The February 2006 image after radiometric correction.
Atmospheric Corrections
Radiation from the Earth's surface goes through various atmospheric interactions before reaching the
sensor. It is difficult to get an unobstructed view of the scene, which may be affected by, for example, clouds and
aerosols in the atmosphere. Many images therefore contain atmospheric noise that distorts image
interpretation and thus needs to be corrected.
Atmospheric correction methods also fall into two broad categories: absolute correction methods
and relative correction methods.
An absolute correction method considers several time-dependent parameters, including the solar zenith
angle, the total optical depth of the aerosol, the irradiance at the top of the atmosphere, and the sensor viewing
geometry, to correct the atmospheric distortions.
However, absolute correction methods are complex, and exact measurements of atmospheric
conditions are challenging to obtain. We therefore often use relative correction methods, which involve
normalizing multiple images of a given scene, collected on different dates, against a reference scene.
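One simple, widely used image-based correction in this spirit is dark-object subtraction (DOS), which assumes the darkest pixels of a scene should have near-zero reflectance, so their observed value approximates the additive haze to remove. A minimal sketch follows; the 1st-percentile haze estimate is an assumption of this example.

```python
import numpy as np

def dark_object_subtraction(band):
    """Subtract the scene's dark-object value (estimated here as the
    1st percentile) from every pixel to remove additive haze."""
    haze = np.percentile(band, 1)
    return np.clip(band - haze, 0, None)   # keep values non-negative

band = np.random.rand(256, 256) * 100 + 8  # synthetic band with additive haze
corrected = dark_object_subtraction(band)
```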
Geometric Correction
Because remotely sensed data are recorded in motion, geometric distortions often arise from variations in altitude,
the sensor, or the Earth itself. Ideally, a given pixel or grid point would correspond to exactly the same location on the
ground in two different images taken at different times.
Therefore, we need to undertake geometric corrections to remove these geometric distortions, establishing
the relationship between the image Coordinate Reference System (CRS) and the geographic CRS to
improve the images' spatial coincidence.
Geometric corrections are achieved through image-to-map rectification or image-to-image registration (co-
registration), by establishing affine relationships between the image CRS and the geographic CRS using ground
control points, as sketched below.
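A minimal sketch of establishing such an affine relationship from ground control points by least squares; the GCP coordinates here are invented for illustration.

```python
import numpy as np

def fit_affine(img_xy, map_xy):
    """Fit map_x = a*col + b*row + c and map_y = d*col + e*row + f
    from ground control points by least squares (needs >= 3 GCPs)."""
    A = np.column_stack([img_xy, np.ones(len(img_xy))])  # rows: [col, row, 1]
    coeffs, *_ = np.linalg.lstsq(A, map_xy, rcond=None)  # shape (3, 2)
    return coeffs

# Hypothetical GCPs: (column, row) in the image vs (easting, northing) on the map
img_pts = np.array([[10, 20], [200, 30], [50, 180], [220, 210]], dtype=float)
map_pts = np.array([[500100, 699800], [501900, 699750],
                    [500450, 698300], [502050, 698050]], dtype=float)
T = fit_affine(img_pts, map_pts)
rectified = np.column_stack([img_pts, np.ones(4)]) @ T  # image -> map coordinates
```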
Traditional geometric corrections are time-consuming and require manual identification of control points. With
advances in remote sensing technology, however, providers now supply orthorectification, which requires
more information than georeferencing with ground control points alone. Orthorectification corrects distortions from
sensor tilt and the Earth's terrain (relief displacement) and is necessary for most earth observation
applications.
Basic Principles of Visual Interpretation
When we look at aerial and space images, we see various objects of different sizes, shapes, and colors. Some of these
objects may be readily identifiable while others may not, depending on our own individual perceptions and experience. When
we can identify what we see on the images and communicate this information to others, we are practicing visual image
interpretation. The images contain raw image data. These data, when processed by a human interpreter’s brain, become
usable information.
Aerial and space images contain a detailed record of features on the ground at the time of data acquisition. An image
interpreter systematically examines the images and, frequently, other supporting materials such as maps and reports of field
observations. Based on this study, an interpretation is made as to the physical nature of objects and phenomena appearing in
the images. Interpretations may take place at a number of levels of complexity, from the simple recognition of objects on the
earth’s surface to the derivation of detailed information regarding the complex interactions among earth surface and
subsurface features.
Visual perception is the ability to interpret information and surroundings from the effects of visible light reaching the eye.
Interpretation is the process of extracting qualitative and quantitative information about objects from aerial photographs or
satellite images.
Visual image interpretation is very useful in various fields such as geography, geology, agriculture, forestry, environment,
ocean studies, wetlands, conservation of natural resources, urban and regional planning, defence and many other purposes.
Elements of Image Interpretation
A systematic study of aerial and space images usually involves several basic characteristics of features shown on an image.
The exact characteristics useful for any specific task and the manner in which they are considered depend on the field of
application. However, most applications consider the following basic characteristics, or variations of them: shape, size,
pattern, tone (or hue), texture, shadows, site, association, and spatial resolution.
Shape refers to the general form, configuration, or outline of individual objects. In the case of stereoscopic images, the
object’s height also defines its shape. The shape of some objects is so distinctive that their images may be identified solely
from this criterion. The Pentagon building near Washington, DC, is a classic example. All shapes are obviously not this
diagnostic, but every shape is of some significance to the image interpreter.
Size of objects on images must be considered in the context of the image scale. A small storage shed, for example, might be
misinterpreted as a barn if size were not considered. Relative sizes among objects on images of the same scale must also be
considered.
Pattern relates to the spatial arrangement of objects. The repetition of certain general forms or relationships is characteristic
of many objects, both natural and constructed, and gives objects a pattern that aids the image interpreter in recognizing them.
For example, the ordered spatial arrangement of trees in an orchard is in distinct contrast to that of natural forest tree stands.
Tone (or hue) refers to the relative brightness or color of objects on an image. Without differences in tone or hue, the
shapes, patterns, and textures of objects could not be discerned.
Texture is the frequency of tonal change on an image. Texture is produced by an aggregation of unit features that may be
too small to be discerned individually on the image, such as tree leaves and leaf shadows. It is a product of their individual
shape, size, pattern, shadow, and tone. It determines the overall visual “smoothness” or “coarseness” of image features.
Shadows are important to interpreters in two opposing respects: (1) The shape or outline of a shadow affords an impression
of the profile view of objects (which aids interpretation) and (2) objects within shadows reflect little light and are difficult to
discern on an image (which hinders interpretation). For example, the shadows cast by various tree species or cultural
features (bridges, silos, towers, poles, etc.) can definitely aid in their identification on airphotos.
Site refers to topographic or geographic location and is a particularly important aid in the identification of vegetation types.
For example, certain tree species would be expected to occur on well-drained upland sites, whereas other tree species
would be expected to occur on poorly drained lowland sites. Also, various tree species occur only in certain geographic
areas (e.g., redwoods occur in California, but not in Indiana).
Association refers to the occurrence of certain features in relation to others. For example, a Ferris wheel might be difficult to
identify if standing in a field near a barn but would be easy to identify if in an area recognized as an amusement park.
Spatial resolution depends on many factors, but it always places a practical limit on interpretation because some objects
are too small or have too little contrast with their surroundings to be clearly seen on the image.
Other factors, such as image scale, spectral resolution, radiometric resolution, date of acquisition, and even the condition of
images (e.g., torn or faded historical photographic prints), also affect the success of image interpretation activities.
Equipment for Visual Interpretation
Three types of equipment are commonly used in mapping: the HME, the LFOE, and light tables.
The first two are optical projection instruments.
High Magnification Enlarger (HME): The HME, developed at SAC, is a versatile aid for the visual interpretation of remotely
sensed data in the form of transparencies, i.e. B/W or FCC diapositives. Enlargements of up to 20 times are possible with
this instrument, enabling a 1:1 million diapositive such as a LANDSAT TM transparency to be enlarged to the
1:50,000 scale corresponding to SOI topographic sheets. Even higher magnifications have been attempted with
this equipment.
Large Format Optical Enlarger (LFOE): The large format optical enlarger has also been developed at SAC. It is used
for two- and four-times enlargement of 240 mm diapositives. Images at 1:1M scale, such as LANDSAT TM and MSS data,
can be enlarged to the 1:250,000 scale, corresponding to SOI topographical sheets at that scale. IRS LISS II
images at 1:500,000 scale can be enlarged to 1:250,000 using an LFOE with a two-times enlargement
capability. It can also project multispectral images in 70 mm format for easy comparison.
Light tables: In addition to the optical projection instruments, light tables are an interpretation aid used for delineating
features from hardcopy paper prints. Interpretation is carried out at the scale of the paper print.
Handling images for interpretation is relatively easy on a light table.
Figure: Schematic presentation of the interpretation process.
Ground Truth
Ground truthing a satellite image means collecting information at a particular location. It
allows satellite image data to be related to real features and materials on the ground. This
information is frequently used to calibrate remote sensing data and to compare results with
conditions on the ground.
Ground truth data are typically collected by visiting a site and performing observations there, such as
surveying the location and measuring its properties and features, e.g., the area covered by forest,
agriculture, water, buildings, and other land classes. Ground truth is important in the initial supervised
classification of an image. These data are often used to assess the performance of satellite image
classification, where each pixel of the image is compared with the corresponding ground truth data to find
a match. The objective is to minimize the error between the segmented satellite image and the ground
truth information, as sketched below. Nowadays, much software and many image processing tools are
available to perform classification and segmentation of satellite images, yielding good results. However,
problems arise when ground truth information is not available for a particular geographical location.
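A minimal sketch of comparing classified pixels against ground truth with an error (confusion) matrix; the class codes and label arrays are invented for illustration.

```python
import numpy as np

def error_matrix(truth, predicted, n_classes):
    """Build a confusion matrix comparing classified pixels with ground truth."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(truth.ravel(), predicted.ravel()):
        m[t, p] += 1                       # row = truth class, column = predicted class
    return m

# Hypothetical class codes: 0 = water, 1 = forest, 2 = agriculture
truth     = np.array([0, 0, 1, 1, 2, 2, 2, 1])
predicted = np.array([0, 1, 1, 1, 2, 0, 2, 1])
cm = error_matrix(truth, predicted, n_classes=3)
overall_accuracy = np.trace(cm) / cm.sum() # proportion of pixels that match
print(cm, overall_accuracy)
```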
Figure: True color composite satellite image showing ground truth points, with photographs of major mangrove species in
different localities of the Sundarbans; the five dominant species types of the study area are also shown.
Color Composite : False and True Color Composite
Sensors on earth observing satellites measure the amount of electromagnetic radiation (EMR)
that is reflected or emitted from the Earth’s surface. These sensors, known as multispectral
sensors, simultaneously measure data in multiple regions of the electromagnetic spectrum,
including visible light, near and short wave infrared. The range of wavelengths measured by a
sensor is known as a band and is commonly described by the wavelength of the energy. Bands
can represent any portion of the electromagnetic spectrum, including ranges not visible to the
eye, such as the infrared or ultraviolet sections.
Each band of a multispectral image can be displayed one band at a time as a grayscale image,
or in a combination of three bands at a time as a color composite image. The three primary
colors of light are red, green, and blue. Computer screens can display an image in three
different bands at a time, by using a different primary color for each band. When we combine
these three images we get a color composite image.
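A minimal sketch of combining three bands into a displayable color composite, assuming three co-registered bands held as NumPy arrays; the min-max scaling used for display is an assumption of this example.

```python
import numpy as np

def make_composite(band_r, band_g, band_b):
    """Stack three single-band images into an RGB color composite,
    scaling each band to 0..1 for display."""
    def scale(b):
        b = b.astype(np.float64)
        return (b - b.min()) / (b.max() - b.min() + 1e-10)
    return np.dstack([scale(band_r), scale(band_g), scale(band_b)])

# True color: red, green, blue bands assigned to the R, G, B display channels
r, g, b = (np.random.rand(64, 64) for _ in range(3))
rgb = make_composite(r, g, b)   # shape (64, 64, 3), ready for display
```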
Natural or True Color Composites
A natural or true color composite displays the visible red, green, and blue bands in the corresponding red, green, and blue
display channels, so that the scene appears much as it would to the human eye.
Image Enhancement
Noise removal, in particular, is an important precursor to most enhancements. Without it, the image interpreter is left with the
prospect of analyzing enhanced noise. The most commonly applied digital enhancement techniques can be categorized as
contrast manipulation, spatial feature manipulation, or multi-image manipulation:
1. Contrast manipulation. Gray-level thresholding, level slicing, and contrast stretching.
2. Spatial feature manipulation. Spatial filtering, edge enhancement, and Fourier analysis.
3. Multi-image manipulation. Multispectral band ratioing and differencing, vegetation and other indices, principal components,
canonical components, vegetation components, intensity–hue–saturation (IHS) and other color space transformations, and
decorrelation stretching.
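As a small illustration of the first category above, here is a minimal linear contrast stretch sketch; the 2%/98% percentile cutoffs are an assumption of this example.

```python
import numpy as np

def linear_stretch(band, low_pct=2, high_pct=98):
    """Linearly stretch pixel values between two percentile cutoffs
    to fill the full 0..255 display range."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    stretched = (band - lo) / (hi - lo + 1e-10)
    return (np.clip(stretched, 0, 1) * 255).astype(np.uint8)

band = np.random.normal(120, 15, size=(256, 256))  # low-contrast synthetic band
enhanced = linear_stretch(band)                    # now spans the 0..255 range
```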
Image Classification
What is Image Classification?
Image classification refers to a process in computer vision that can classify an image according to
its visual content. For example, an image classification algorithm may be designed to tell if an
image contains a human figure or not. While detecting an object is trivial for humans, robust image
classification is still a challenge in computer vision applications.
Image classification is the process of assigning land cover classes to pixels; for example, classes
include water, urban, forest, agriculture, and grassland.
Image classification refers to the task of extracting information classes from a multiband raster
image. The resulting raster from image classification can be used to create thematic maps.
Depending on the interaction between the analyst and the computer during classification, there
are two types of classification: Supervised and Unsupervised.
Image classification assigns the pixels in an image to categories or classes of interest, for example built-up areas, water
bodies, green vegetation, bare soil, rocky areas, cloud, and shadow. In order to classify a set of data into different classes or
categories, the relationship between the data and the classes into which they are classified must be well understood.
Classification techniques were originally developed out of research in the field of pattern recognition.
Supervised Classification
Supervised classification is mainly a human-guided classification, in which human image analysts play a crucial role. They
specify the multispectral reflectance or emittance values of each land cover or land use class. In short, the analysts
supervise the pixel classification process through three stages: training, allocation, and testing.
In training, the analysts identify a sample of pixels of known class membership gathered from reference
data. Such data may include aerial photographs or existing maps. The training pixels are used to derive various statistics
for each land cover class. In the allocation stage, image pixels are classified and allocated to the classes with which they
show the greatest similarity based on the training statistics. Lastly, in the testing stage, a group of testing pixels is selected
and the assigned class identities are compared against the reference data and the spectral properties of each pixel in the
image. The results are summarized in an error matrix recording the agreements and disagreements of the test samples. On
completing the three stages, an analyst can evaluate the image classification for each land cover class.
Apart from this, a large number of supervised classification algorithms have been developed; these include maximum
likelihood, minimum-distance-to-means, and parallelepiped classifiers.
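Of these, the minimum-distance-to-means classifier is the simplest to sketch. A minimal example follows; the two-band pixel values and class means stand in for real training statistics.

```python
import numpy as np

def min_distance_classify(pixels, class_means):
    """Assign each pixel to the class whose training mean is nearest
    in spectral space (minimum-distance-to-means classification).

    pixels      : (N, B) array of N pixels with B band values
    class_means : (C, B) array of mean spectra derived from training pixels
    """
    # Euclidean distance from every pixel to every class mean
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return d.argmin(axis=1)   # index of the nearest class for each pixel

# Hypothetical training means for water, vegetation, bare soil (2 bands)
means = np.array([[20.0, 10.0], [40.0, 90.0], [80.0, 60.0]])
pixels = np.array([[22.0, 12.0], [42.0, 85.0], [75.0, 58.0]])
labels = min_distance_classify(pixels, means)   # -> [0, 1, 2]
```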
Unsupervised Classification
Unsupervised classification is where the grouping of pixels with common characteristics is based on
software analysis of an image, without the user defining training fields for each land cover class. All this is
done without the help of training data or prior knowledge. The image analyst's responsibility is to
determine the correspondences between the spectral classes that the algorithm defines.
In unsupervised classification, there are two basic steps to follow: generating clusters and
assigning classes. Using remote sensing software, an analyst first creates clusters and identifies the
number of groups to generate. After this, they assign a land cover class to each cluster. All this is made
possible by algorithms such as the following (see the sketch after this list):
● K-means
● Iterative Self-Organizing Data Analysis (ISODATA)
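A minimal sketch of the cluster-generation step using k-means, assuming scikit-learn is available; the synthetic two-band image and the choice of five clusters are assumptions of this example.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic 2-band image reshaped to (pixels, bands) for clustering
image = np.random.rand(100, 100, 2)
pixels = image.reshape(-1, 2)

# Step 1: generate clusters (the analyst chooses the number of groups)
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pixels)
cluster_map = kmeans.labels_.reshape(100, 100)

# Step 2: the analyst then assigns a land cover class to each cluster,
# e.g. {0: "water", 1: "forest", ...}, based on reference information.
```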
Normalized Satellite Indices
Normalized difference indices are used in remote sensing to analyze and classify surface cover
types.
In the simplest terms possible, the Normalized Difference Vegetation Index (NDVI) measures the greenness and density
of the vegetation captured in a satellite image. Healthy vegetation has a very characteristic spectral reflectance curve,
which we can exploit by calculating the difference between two bands: visible red and near-infrared. NDVI is that
difference expressed as a normalized number ranging from -1 to 1: NDVI = (NIR - Red) / (NIR + Red).
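A minimal NDVI computation sketch, assuming the red and near-infrared bands are already available as arrays.

```python
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red); ranges from -1 to 1, with
    higher values indicating greener, denser vegetation."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / (nir + red + 1e-10)   # small term avoids divide-by-zero

red = np.random.rand(128, 128)
nir = np.random.rand(128, 128)
vi = ndvi(red, nir)   # values near 1 over healthy vegetation
```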
Snow cover is as bright as clouds, which makes it difficult to differentiate from cloud cover.
However, at 1.6 µm, snow cover absorbs sunlight and therefore appears darker than the clouds.
This enables an effective distinction between clouds and snow cover; imagery at these
wavelengths therefore demonstrates the ability to separate clouds from snow.
The Normalized Difference Snow Index (NDSI) is a measure of the relative magnitude of the difference in reflectance
between the visible (green) and shortwave infrared (SWIR) bands: NDSI = (Green - SWIR) / (Green + SWIR). This is
useful for snow mapping: snow is not only very reflective in the visible parts of the electromagnetic spectrum but also
highly absorptive in the NIR and shortwave infrared parts, whereas the reflectance of most clouds remains high in those
same parts of the spectrum. This allows a good separation of most clouds from snow.
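Analogously, a minimal NDSI sketch using the green and SWIR bands; the 0.4 snow threshold is a common rule of thumb, assumed here for illustration.

```python
import numpy as np

def ndsi(green, swir):
    """NDSI = (Green - SWIR) / (Green + SWIR); snow is bright in the
    green band but absorptive in SWIR, so snow pixels score high."""
    green = green.astype(np.float64)
    swir = swir.astype(np.float64)
    return (green - swir) / (green + swir + 1e-10)

# Pixels above the (assumed) 0.4 threshold are flagged as likely snow
snow_mask = ndsi(np.random.rand(64, 64), np.random.rand(64, 64)) > 0.4
```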
Shuttle Radar Topography Mission (SRTM): The SRTM radar contained two types of antenna panels, C-band and
X-band. The near-global topographic maps of Earth called Digital Elevation Models (DEMs) are made from the radar
data.
Moderate Resolution Imaging Spectroradiometer (MODIS): Terra MODIS and Aqua MODIS view the entire Earth's
surface every 1 to 2 days, acquiring data in 36 spectral bands, or groups of wavelengths. These data improve our
understanding of global dynamics and processes occurring on the land, in the oceans, and in the lower atmosphere.
OCEANSAT-2: OceanSat-1 or IRS-P4 was the first Indian satellite built specifically for ocean applications. It was a part
of the Indian Remote Sensing satellite series. The satellite carried an Ocean Colour Monitor and a Multi-frequency
Scanning Microwave Radiometer for oceanographic studies.
CARTOSAT-1: The Cartosat satellites are a series of Indian optical earth observation satellites built and operated by
ISRO. Cartosat carries a state-of-the-art panchromatic camera that takes black and white pictures of the earth in the
visible region of the electromagnetic spectrum. The data from the satellite are used for detailed mapping and other
cartographic applications at the cadastral level, urban and rural infrastructure development and management, as well
as applications in Land Information Systems (LIS) and Geographical Information Systems (GIS).
Copernicus Open Access Hub
Sentinel-1: Sentinel-1 is the first satellite constellation of the Copernicus Programme, conducted by the European
Space Agency. The mission is composed of a constellation of two satellites, Sentinel-1A and Sentinel-1B, which share
the same orbital plane. They carry a C-band synthetic-aperture radar instrument, which provides data collection in all
weather, day or night. This instrument has a spatial resolution of down to 5 m and a swath of up to 400 km.