Unit III Satellite Data Products & Processing

The document discusses satellite data products and processing in remote sensing, emphasizing the importance of data pre-processing to correct distortions and convert raw data into usable products. It outlines various types of data products based on processing levels, output media, and area coverage, including standard, value-added, and derived products. Additionally, it covers essential image interpretation principles and techniques, highlighting the significance of visual characteristics such as shape, size, and texture in analyzing remote sensing images.


Remote Sensing

BE E&TC 2019 Elective IV


Unit III Satellite Data Products & Processing

Satellite Data Analysis: Data Products and Their Characteristics,


Data Pre-processing – Atmospheric, Radiometric, Geometric
Corrections; Basic Principles of Visual Interpretation; Equipment
for Visual Interpretation; Ground Truth; Color Composites: False
and True Color Composites; Image Enhancements; Classification –
Supervised and Unsupervised; Normalized Satellite Indices – NDVI,
NDWI, GDVI, NDSI etc.; Remote Sensing Data Sources: USGS,
Bhuvan, ESA, Sentinel etc.
Satellite Data Analysis

The raw remote sensing data recorded through the medium of electromagnetic radiation contain many
systematic distortions and errors. The data have to be processed to remove these distortions and errors,
and are finally converted into various products supplied to users for different applications. There is a
remarkable difference between the raw remote sensing data as recorded and the processed remote
sensing data supplied to users.

Raw remote sensing data are like a block of aluminium metal before it is shaped into a usable
utensil/vessel. Working with raw satellite images is much like collecting a raw material and processing it
into a finished product, just as aluminium ore (bauxite) is processed into aluminium metal that is then
used to make utensils/vessels.
Data Products
The first step while starting a geoinformatics-related project is to place an order for the
procurement of remote sensing data. Two types of data are used in geoinformatics: raster data
and vector data. When we talk of remote sensing data, we generally mean raster data. In its
simplest form, raster data consists of a matrix of cells or pixels organised into rows and columns
(a grid), where each cell contains a value representing information, such as reflected
electromagnetic radiation (EMR), temperature, or height values. Raster data products include
digital aerial photographs, imagery from satellites, digital pictures, or even scanned maps.

Figure: Arrangement of grid cells or pixels in raster data
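The grid structure described above can be sketched in code. A minimal illustration in Python, where the cell values are hypothetical digital numbers rather than real sensor data:

```python
# A raster is a matrix of cells (pixels) organised into rows and columns.
# Each cell holds a value, e.g. a digital number (DN) representing
# reflected EMR, temperature, or height. The values below are made up.
raster = [
    [12, 15, 17, 20],   # row 0
    [14, 16, 70, 72],   # row 1
    [13, 60, 68, 71],   # row 2
]

rows = len(raster)        # number of rows in the grid
cols = len(raster[0])     # number of columns in the grid
value = raster[1][2]      # the cell at row 1, column 2

print(rows, cols, value)  # 3 4 70
```

Real raster products store the same row-and-column structure in binary formats (e.g. GeoTIFF), together with metadata describing what each cell value means.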
The data from various sensors are
presented in a form and format with
specified radiometric and geometric
accuracy which can be readily used by
scientists and engineers. Remote
Sensing data can be procured by a
number of users for various applications
and information extraction, in the form of
a ‘Data Product’. This may be in the
form of photographic output for visual
processing or in a digital format
amenable for further computer
processing.
Data Formats
Remote sensing data products are generated in certain ‘Data Formats’ of which users must be
aware for various practical reasons. Pre-processed remote sensing data are generated into a
number of products, like hardcopy prints on various types of paper and digital data on various
types of computer-compatible media, like tapes, compact discs (CDs), DVDs, and various other
computer-compatible storage devices. If the data product is a hardcopy print, it is impossible to
carry out any further processing or conversion before use. But if the product is in digital form, the
data can be converted into a processed digital image, and certain processing may be required
before any image analysis operation is performed.

Types of Data Products

The types of remote sensing data products depend on:


● Level of processing
● Output media/scale and
● Area of coverage.
Level of Processing
Based on the level of processing, data products are classified into the following three types:

a) Standard Products : These products are generated using information received directly from the satellites and by applying
necessary radiometric and geometric corrections

i). Path-Row Based Standard Products: The data are recorded by the sensors along the track/path of the satellite,
covering a specific width (across the satellite track/path). To procure the desired products, the user specifies the path and
row of the scene, sensor, sub-scene (if any), number of scenes, date of pass of the satellite, band numbers/band
combination (for photographic products) and the product code (specified by the data provider). In India, for example, the
provider is the National Data Centre (NDC) of the National Remote Sensing Centre (NRSC), Hyderabad.

ii). Shift Along Track (SAT) Products: In case a user’s area of interest falls between two successive scenes of the same
path (one below the other), the data can be supplied by sliding the scene in along-the-track direction. These products are
called Shift-Along-Track Products. In this case, the percentage (10% to 90% in multiples of 10%) of shift has to be specified by
the user in addition to the other necessary information about both the scenes.

iii). Quadrant Products: In the IRS series of satellites, a LISS III full scene
(path/row based product) is divided into 4 nominal quadrants designated
by the letters A, B, C and D. Each of these quadrants corresponds to one
full scene of PAN (black-and-white) data, and is further divided into 9
sub-quadrants numbered 1 to 9, which are the PAN sub-scenes. While
placing a request for these products, users need to specify the quadrant
number in addition to the details specified for path/row based products.
Figure: Path-row based, shift-along-track and stereo-based products
iv). Georeferenced Products: Georeferencing is the process of transforming remote sensing data or a map to a
coordinate and projection system. This means that all the objects/elements in the remote sensing data and the map get a
specific geographic location in terms of longitude and latitude.

v). Basic Stereo Products: A stereo-pair comprises two images of the same area, acquired on different dates or on the
same day from different angles. These products are used for generating digital elevation models or 3D visualization of the
study area. The Cartosat-1 mission provides along-track stereo images.

b). Value Added Products

When standard products are processed according to specifications and requirements of users, these products get converted into
value added products. These products are basically of four types viz.

● Geocoded Products: These products are also known as georeferenced products. Geocoding is the process of determining
geographic coordinates for place names, street addresses, and codes (e.g., zip codes).

● Merged Products: Remote sensing satellites record multiband/multispectral (MSS) data and single-band/panchromatic
(PAN) data. Panchromatic/mono-band data have higher spatial resolution, while multi-band data have comparatively low
spatial resolution. When MSS data are merged with panchromatic data, we get MSS data with the spatial resolution of the
panchromatic data. Data can be merged only when both the MSS and PAN data have been registered to the same
georeferencing system.
● Ortho Products: Geometrically corrected products, with corrections for displacement caused by tilt and relief, are
called Ortho Products. Therefore, ortho images show ground objects in their true planimetric positions, like the
positions of objects in a map. The basic inputs required for ortho-image generation are: (i) Digital Elevation Model,
(ii) Ground Control Points, (iii) Satellite ephemeris (orbit and altitude information), and (iv) Radiometrically corrected
satellite data.
● Template Registered Products: In specific cases, such as the study of crops and their monitoring, data of the
same area having similar geometric fidelity and temporally registered are required. Such data sets are called
template registered products. In India, data from AWiFS sensor is envisaged for extensive use in crop monitoring.

c). Derived Products

Information extracted from remote sensing data is also provided as useful products by data providers. For example, a
vegetation index map or sea surface temperature profiles extracted from various remote sensing data products are called
derived products. These derived products are generated by further processing/analysing the data, and are readily usable
by the user. Based on output media, data products are available on both photographic and digital media. Photographic
products can be supplied as films or prints, at output scales ranging from 1:1 million to 1:50,000 or even 1:25,000.
Products Based on Output Media/Scale

NRSC provides remote sensing products in two types of media:

● Photographic media
● Digital media

Photographic products are supplied on film or paper prints. The scale of these photographic
products can range from 1:1M to 1:5000.
Products Based on Area of Coverage
Major types of remote sensing data products are based on area of coverage
Data Pre-Processing

Remotely sensed data might contain noise and other deficiencies arising from the sensors on board
or from radiative transfer processes. Therefore, we often apply further preprocessing techniques to
deal with such flaws. These different processing techniques are generally referred to as image
processing.

Image processing is a method of performing operations on an image in order to obtain an
enhanced image or to extract useful information from it. It is a type of signal processing in which
the input is an image and the output may be an image or the characteristics/features associated
with that image (digital image processing).

There are several image processing techniques used in Earth observation, and we tend to
categorize them into four broad categories: preprocessing, transformation, correction, and
classification.
Pre-Processing Techniques

Some distortions need to be corrected before carrying out analysis and post-processing
techniques. The image preparation and processing operations carried out before analysis, to
correct or minimize image distortions arising from imaging systems, sensors, and observing
conditions, are often referred to as pre-processing techniques.

Some typical pre-processing operations include, but are not limited to, the following types:

● Radiometric Correction
● Atmospheric Corrections
● Geometric Correction
Radiometric Correction

Radiometric distortions usually come from two sources: sensor characteristics and differences in
illumination conditions. Under this type of distortion, the values recorded in the image may not
coincide with the energy actually emitted or reflected by the objects. Therefore, these radiometric
distortions must be handled before image interpretation and analysis.

Radiometric corrections are classified into two broad categories: Sun Angle/Topography
Radiometric Corrections and Sensor Irregularities Radiometric Corrections.

The Sun Angle/Topography radiometric corrections correct the effects of diffusion of sunlight,
especially in the water surface and mountains, by estimating the shading curve.

On the other hand, Sensor irregularity corrections involve removing radiometric noise from
changes in sensor sensitivity or degradation of the sensor. The correction process under this
category calculates new relationships between calibrated irradiance measurement and sensor
output signal. Therefore, the process is also called Calibration.
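The calibration step described above is commonly expressed as a linear relationship between the raw digital number and at-sensor radiance. A minimal sketch; the gain and offset values are hypothetical illustration constants, not real sensor coefficients:

```python
# Sensor calibration sketch: convert raw digital numbers (DNs) to
# at-sensor radiance using the linear relationship L = gain * DN + offset.
# The gain and offset below are illustrative, not real sensor constants.
def dn_to_radiance(dn, gain, offset):
    """Return at-sensor spectral radiance for a single digital number."""
    return gain * dn + offset

band = [10, 50, 100, 255]       # hypothetical raw DNs from one band
gain, offset = 0.037, 3.2       # hypothetical calibration constants
radiance = [dn_to_radiance(dn, gain, offset) for dn in band]
print(radiance)
```

In practice the gain and offset for each band are supplied in the product metadata by the data provider, so calibration is a per-band lookup followed by this per-pixel arithmetic.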
Figure: Comparison of images before and after radiometric correction. (a) Original November 2005
image with contrails (bands 4, 3, 2 composite). (b) November 2005 image with contrails after
radiometric correction (bands 4, 3, 2 composite). (c) Original February 2006 image without
contrails (bands 4, 3, 2 composite). (d) February 2006 image without contrails after radiometric
correction (bands 4, 3, 2 composite).
Atmospheric Corrections
Radiation from the Earth's surface goes through different atmospheric interactions before reaching the
sensor. It is difficult to get a clear view of the scene, which may be affected by, for example, clouds and
aerosols in the atmosphere. Therefore, many images contain atmospheric noise that distorts image
interpretation and thus needs to be corrected.

Atmospheric correction methods also fall into two broad categories: absolute correction methods and
relative correction methods.

An absolute correction method considers several time-dependent parameters, including the solar zenith
angle, the total optical depth of the aerosol, the irradiance at the top of the atmosphere, and the sensor
viewing geometry, to correct the atmospheric distortions.

However, absolute correction methods are complex, and exact measurements of atmospheric conditions
are challenging to obtain. We therefore often use relative correction methods, which normalize multiple
images of a given scene, collected on different dates, against a reference scene.
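One simple form of relative correction is to rescale a subject image so that its brightness statistics match those of the reference scene. A minimal sketch, assuming a single band flattened to a list of pixels; the pixel values are hypothetical:

```python
# Relative atmospheric correction sketch: linearly rescale a subject image
# so its mean and standard deviation match those of a reference scene.
# The pixel values below are hypothetical DNs from two acquisition dates.
import statistics

def relative_normalize(subject, reference):
    """Rescale subject pixels to the reference scene's mean and std dev."""
    s_mean, s_std = statistics.mean(subject), statistics.pstdev(subject)
    r_mean, r_std = statistics.mean(reference), statistics.pstdev(reference)
    gain = r_std / s_std
    return [(p - s_mean) * gain + r_mean for p in subject]

subject   = [30, 40, 50, 60, 70]   # hazier, brighter acquisition date
reference = [10, 20, 30, 40, 50]   # clear reference scene
normalized = relative_normalize(subject, reference)
print(normalized)
```

Operational relative methods refine this idea, e.g. by fitting the regression only on pseudo-invariant features (pixels assumed unchanged between dates) rather than on the whole scene.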
Geometric Correction
As we record remotely sensed data in motion, we often have geometric distortions due to altitude, sensor,
or Earth variations. Ideally, a pixel would correspond to exactly the same ground location or grid point in
two different images taken at different times.

Therefore, we need to undertake geometric corrections to remove these geometric distortions, establishing
the relationship between the image Coordinate Reference System (CRS) and the geographic CRS to
improve the images' spatial coincidence.

Geometric corrections are achieved through image-to-map rectification or image-to-image registration
(co-registration), by establishing affine relationships between the image CRS and the geographic CRS
using ground control points.

The traditional geometric corrections are time-consuming and require manual identification. However, with
advancements in remote sensing technologies, providers now include orthorectification, which requires
more information than georeferencing with ground control points. Orthorectification corrects distortions from
sensor tilt and the Earth’s terrain (relief displacement) and is necessary for most earth observation
applications.
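The affine relationship mentioned above can be sketched with a six-coefficient geotransform, following the convention used by tools such as GDAL; the origin and pixel size below are hypothetical:

```python
# Georeferencing sketch: a six-coefficient affine geotransform maps image
# (col, row) coordinates to map coordinates (x, y). The coefficients below
# (origin, pixel size, rotation terms) are hypothetical example values.
def pixel_to_map(col, row, gt):
    """gt = (x_origin, x_pixel_size, x_rot, y_origin, y_rot, y_pixel_size)."""
    x0, dx, rx, y0, ry, dy = gt
    x = x0 + col * dx + row * rx
    y = y0 + col * ry + row * dy
    return x, y

# 30 m pixels, north-up image (no rotation), hypothetical map origin
gt = (500000.0, 30.0, 0.0, 4200000.0, 0.0, -30.0)
print(pixel_to_map(0, 0, gt))    # upper-left corner -> (500000.0, 4200000.0)
print(pixel_to_map(10, 5, gt))   # -> (500300.0, 4199850.0)
```

In image-to-map rectification the six coefficients are estimated by least squares from ground control points; orthorectification additionally corrects each pixel for relief displacement using a digital elevation model.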
Basic Principles of Visual Interpretation
When we look at aerial and space images, we see various objects of different sizes, shapes, and colors. Some of these
objects may be readily identifiable while others may not, depending on our own individual perceptions and experience. When
we can identify what we see on the images and communicate this information to others, we are practicing visual image
interpretation. The images contain raw image data. These data, when processed by a human interpreter’s brain, become
usable information.

Aerial and space images contain a detailed record of features on the ground at the time of data acquisition. An image
interpreter systematically examines the images and, frequently, other supporting materials such as maps and reports of field
observations. Based on this study, an interpretation is made as to the physical nature of objects and phenomena appearing in
the images. Interpretations may take place at a number of levels of complexity, from the simple recognition of objects on the
earth’s surface to the derivation of detailed information regarding the complex interactions among earth surface and
subsurface features.

Visual perception is the ability to interpret information and surroundings from the effects of visible light reaching the eye.
Interpretation is the process of extracting qualitative and quantitative information about objects from aerial photographs or
satellite images.

Visual image interpretation is very useful in various fields such as geography, geology, agriculture, forestry, environment,
ocean studies, wetlands, conservation of natural resources, urban and regional planning, defence and many other purposes.
Elements of Image Interpretation
A systematic study of aerial and space images usually involves several basic characteristics of features shown on an image.
The exact characteristics useful for any specific task and the manner in which they are considered depend on the field of
application. However, most applications consider the following basic characteristics, or variations of them: shape, size,
pattern, tone (or hue), texture, shadows, site, association, and spatial resolution

Shape refers to the general form, configuration, or outline of individual objects. In the case of stereoscopic images, the
object’s height also defines its shape. The shape of some objects is so distinctive that their images may be identified solely
from this criterion. The Pentagon building near Washington, DC, is a classic example. All shapes are obviously not this
diagnostic, but every shape is of some significance to the image interpreter.

Size of objects on images must be considered in the context of the image scale. A small storage shed, for example, might be
misinterpreted as a barn if size were not considered. Relative sizes among objects on images of the same scale must also be
considered.

Pattern relates to the spatial arrangement of objects. The repetition of certain general forms or relationships is characteristic
of many objects, both natural and constructed, and gives objects a pattern that aids the image interpreter in recognizing them.
For example, the ordered spatial arrangement of trees in an orchard is in distinct contrast to that of natural forest tree stands.
Tone (or hue) refers to the relative brightness or color of objects on an image. Without differences in tone or hue, the
shapes, patterns, and textures of objects could not be discerned.

Texture is the frequency of tonal change on an image. Texture is produced by an aggregation of unit features that may be
too small to be discerned individually on the image, such as tree leaves and leaf shadows. It is a product of their individual
shape, size, pattern, shadow, and tone. It determines the overall visual “smoothness” or “coarseness” of image features.

Shadows are important to interpreters in two opposing respects: (1) The shape or outline of a shadow affords an impression
of the profile view of objects (which aids interpretation) and (2) objects within shadows reflect little light and are difficult to
discern on an image (which hinders interpretation). For example, the shadows cast by various tree species or cultural
features (bridges, silos, towers, poles, etc.) can definitely aid in their identification on airphotos.

Site refers to topographic or geographic location and is a particularly important aid in the identification of vegetation types.
For example, certain tree species would be expected to occur on well-drained upland sites, whereas other tree species
would be expected to occur on poorly drained lowland sites. Also, various tree species occur only in certain geographic
areas (e.g., redwoods occur in California, but not in Indiana).

Association refers to the occurrence of certain features in relation to others. For example, a Ferris wheel might be difficult to
identify if standing in a field near a barn but would be easy to identify if in an area recognized as an amusement park.

Spatial resolution depends on many factors, but it always places a practical limit on interpretation because some objects
are too small or have too little contrast with their surroundings to be clearly seen on the image.
Other factors, such as image scale, spectral resolution, radiometric resolution, date of acquisition, and even the condition
of images (e.g., torn or faded historical photographic prints), also affect the success of image interpretation activities.
Equipment for Visual Interpretation
Three types of equipment are commonly used in mapping: the HME, the LFOE and light tables. The first two are optical
projection instruments.

High Magnification Enlarger (HME): The HME, developed at SAC, is a versatile aid for the visual interpretation of
remotely sensed data in the form of transparencies, i.e. B/W or FCC diapositives. Enlargements of up to 20 times are
possible with this instrument, enabling a 1:1 million diapositive, such as a LANDSAT TM transparency, to be enlarged up
to the 1:50,000 scale corresponding to SOI topographic sheets. Even higher magnifications have also been attempted
with this equipment.

Large Format Optical Enlarger (LFOE): The large format optical enlarger has also been developed at SAC. It is used
for two- and four-times enlargement of 240 mm diapositives. Images at 1:1M scale, such as LANDSAT TM and MSS
data, can be enlarged up to the 1:250,000 scale, corresponding to SOI topographical sheets at this scale. IRS LISS II
images at 1:500,000 scale can be enlarged to 1:250,000 using an LFOE with two-times enlargement capability. It can
also project multi-spectral images in 70 mm format for easy comparison.

Light tables: In addition to the optical projection instruments, light tables are an interpretation aid used for delineating
features from hardcopy paper prints. Interpretation at the scale of the paper print can be carried out, and handling of
images for interpretation is relatively easy using light tables.
Figures: Enlargers, diapositives and light tables; schematic presentation of the interpretation process
Ground Truth
Ground truth for a satellite image means the collection of information at a particular location. It
allows satellite image data to be related to real features and materials on the ground. This
information is frequently used to calibrate remote sensing data and to compare results with
ground truth.

Ground truth data are typically collected by visiting a site and performing measurements, such as
surveys, at that particular location: measuring different properties and features, such as the area
covered by forest, agriculture, water, buildings and other land classes, through surface observations.
Ground truth is important in the initial supervised classification of an image.

These data are often used to assess the performance of satellite image classification, where each
pixel of the image is compared with the corresponding ground truth data to find a match. The objective
is to minimize the error between the classified satellite image and the ground truth information.
Nowadays, much software and many image processing tools are available to perform classification
and segmentation of satellite images with good results. However, a problem arises when ground
truth information is not available for a particular geographical location.
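The pixel-by-pixel comparison against ground truth is usually summarized in an error (confusion) matrix. A minimal sketch; the class labels and sample pixels are hypothetical:

```python
# Comparing classified pixels against ground truth: tally an error
# (confusion) matrix and compute overall accuracy.
# The labels below are hypothetical test-pixel results.
from collections import Counter

def error_matrix(truth, predicted):
    """Count (truth_class, predicted_class) pairs over the test pixels."""
    return Counter(zip(truth, predicted))

truth     = ["water", "water", "forest", "urban", "forest", "urban"]
predicted = ["water", "forest", "forest", "urban", "forest", "water"]

matrix = error_matrix(truth, predicted)
correct = sum(n for (t, p), n in matrix.items() if t == p)
overall_accuracy = correct / len(truth)
print(matrix)
print(overall_accuracy)   # 4 of the 6 test pixels agree with ground truth
```

Off-diagonal entries of the matrix (where truth and prediction differ) show exactly which classes are being confused with each other, which guides refinement of the training data.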
Figure: True color composite satellite image showing ground truth points, and photographs of major
mangrove species in different localities of the Sundarbans. The five dominant species types of the
study area are also shown.
Color Composite : False and True Color Composite

Sensors on earth observing satellites measure the amount of electromagnetic radiation (EMR)
that is reflected or emitted from the Earth’s surface. These sensors, known as multispectral
sensors, simultaneously measure data in multiple regions of the electromagnetic spectrum,
including visible light, near and short wave infrared. The range of wavelengths measured by a
sensor is known as a band and is commonly described by the wavelength of the energy. Bands
can represent any portion of the electromagnetic spectrum, including ranges not visible to the
eye, such as the infrared or ultraviolet sections.

Each band of a multispectral image can be displayed one band at a time as a grayscale image,
or in a combination of three bands at a time as a color composite image. The three primary
colors of light are red, green, and blue. Computer screens can display an image in three
different bands at a time, by using a different primary color for each band. When we combine
these three images we get a color composite image.
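Combining three bands into a composite can be sketched as stacking co-registered grids into per-pixel (R, G, B) triplets. A minimal illustration with 2×2-pixel bands and hypothetical 8-bit values:

```python
# Building a color composite: three co-registered bands are assigned to
# the red, green and blue display channels. The 2x2-pixel bands below
# hold hypothetical 8-bit brightness values.
red_band   = [[200, 180], [ 40,  30]]
green_band = [[190, 170], [ 90,  80]]
blue_band  = [[180, 160], [ 20,  15]]

composite = [
    [(red_band[r][c], green_band[r][c], blue_band[r][c]) for c in range(2)]
    for r in range(2)
]
print(composite[0][0])   # bright grey pixel -> (200, 190, 180)
```

A true color composite feeds the visible red, green and blue bands into these channels; a false color composite simply feeds a different band (e.g. near-infrared) into one of them, with no change to the stacking itself.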
Natural or True Color Composites

A natural or true color composite is an image displaying a combination of the visible red, green
and blue bands in the corresponding red, green and blue channels on the computer. The
resulting composite resembles what would be observed naturally by the human eye: vegetation
appears green, water appears dark blue to black, and bare ground and impervious surfaces
appear light grey and brown. Many people prefer true color composites, as the colors appear
natural to our eyes, but subtle differences between features are often difficult to recognize.
Natural color images can be low in contrast and somewhat hazy due to the scattering of blue
light by the atmosphere.
False Color Composites
False color images are a representation of a multi-spectral image produced using bands other
than the visible red, green and blue as the red, green and blue components of the image display.
False color composites allow us to visualize wavelengths that the human eye cannot see
(i.e. near-infrared). Using bands such as near-infrared increases the spectral separation and
often increases the interpretability of the data. There are many different false color composites,
which can highlight many different features.
Landsat 8 Reflective Bands

Landsat 8 measures different ranges of wavelengths along the electromagnetic spectrum. Each
of these ranges is known as a band, and in total Landsat 8 has 11 bands. The first 7 of these
bands are in the visible and infrared part of the spectrum, are commonly known as the "reflective
bands", and are captured by the Operational Land Imager (OLI) on board Landsat 8. In addition
to the 7 bands listed in the table to the right, there is also a panchromatic or black-and-white
band (Band 8) and a cirrus band (Band 9) that is used to detect cirrus clouds. Landsat 8 also has
a Thermal Infrared Sensor (TIRS), which collects data in two thermal infrared bands.
Band Combinations for Landsat 8

Band combinations are selected for a number of reasons, and it is helpful to understand the
spectral reflectance profiles of the features you are interested in studying. For example, in the
NIR false color composite shown above, healthy vegetation appears bright red because it
reflects more near-infrared than green.

Though there are many possible combinations of wavelength bands, the table to the right lists
some that are commonly used. The band combinations are listed by band number in the order
red, green, blue (RGB).
Image Transformation / Enhancement
The primary goal of image enhancement is to improve the visual interpretability of an image by increasing the apparent
distinction between the features in the scene. The range of possible image enhancement and display options available to the
image analyst is virtually limitless. Most enhancement techniques may be categorized as either point or neighborhood
operations. Point Operations modify the brightness value of each pixel in an image data set independently. Neighborhood
Operations modify the value of each pixel based on neighboring brightness values. Either form of enhancement can be
performed on single-band (monochrome) images or on the individual components of multi-image composites. The resulting
images may also be recorded or displayed in black and white or in color.

Noise removal, in particular, is an important precursor to most enhancements. Without it, the image interpreter is left with the
prospect of analyzing enhanced noise. The most commonly applied digital enhancement techniques can be categorized as
contrast manipulation, spatial feature manipulation, or multi-image manipulation.

1. Contrast manipulation. Gray-level thresholding, level slicing, and contrast stretching.

2. Spatial feature manipulation. Spatial filtering, edge enhancement, and Fourier analysis.

3. Multi-image manipulation. Multispectral band ratioing and differencing, vegetation and other indices, principal components,
canonical components, vegetation components, intensity–hue–saturation (IHS) and other color space transformations, and
decorrelation stretching.
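Contrast stretching, listed above, is a classic point operation: each pixel is remapped independently of its neighbours. A minimal min-max linear stretch sketch, with hypothetical low-contrast digital numbers:

```python
# Linear (min-max) contrast stretch: a point operation that remaps each
# pixel independently so the band's values span the full display range.
def contrast_stretch(band, out_min=0, out_max=255):
    """Stretch a 2-D band (list of rows) to [out_min, out_max]."""
    lo = min(min(row) for row in band)        # darkest input value
    hi = max(max(row) for row in band)        # brightest input value
    scale = (out_max - out_min) / (hi - lo)
    return [[round((p - lo) * scale) + out_min for p in row] for row in band]

band = [[60, 70], [80, 100]]     # hypothetical low-contrast DNs
print(contrast_stretch(band))    # [[0, 64], [128, 255]]
```

A neighborhood operation such as spatial filtering would instead compute each output pixel from a window of surrounding values, which is the distinction drawn in the paragraph above.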
Image Classification
What is Image Classification ?

Image classification refers to a process in computer vision that can classify an image according to
its visual content. For example, an image classification algorithm may be designed to tell if an
image contains a human figure or not. While detecting an object is trivial for humans, robust image
classification is still a challenge in computer vision applications.

Image classification is the process of assigning land cover classes to pixels. For example, classes
include water, urban, forest, agriculture and grassland.

Image classification refers to the task of extracting information classes from a multiband raster
image. The resulting raster from image classification can be used to create thematic maps.
Depending on the interaction between the analyst and the computer during classification, there
are two types of classification: Supervised and Unsupervised.
Image classification assigns pixels in the image to categories or classes of interest, for example
built-up areas, water bodies, green vegetation, bare soil, rocky areas, cloud and shadow. In order
to classify a set of data into different classes or categories, the relationship between the data and
the classes into which they are classified must be well understood. To achieve this by computer,
the computer must be trained:

(i) Training is key to the success of classification;
(ii) Classification techniques were originally developed out of research in the pattern recognition field.

Figure: Digital image processing and result assessment
Supervised Classification
Supervised imagery is mainly a human-guided classification. Human image analysts play a crucial role. They specify the
multispectral reflectance emittance values of each land cover class or land use. In short, the analysts will supervise the
pixel classification process through three stages; training, allocation, and testing.

Training is where the analyst identifies a sample of pixels of known class membership gathered from reference data, such as aerial photographs or existing maps. The training pixels are used to derive various statistics for each land cover class. In the allocation stage, pixels are classified and allocated to the classes to which they show the greatest similarity based on those statistics. Lastly, in the testing stage, a group of testing pixels is selected and their class identities are compared against the reference data and the spectral properties of each pixel in the image. The results are summarized in an error matrix recording the agreements and disagreements of the test samples. On completion of the three stages, the analyst can evaluate the image classification for each land cover class.
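The error matrix produced in the testing stage can be reduced to a single overall accuracy figure: the correctly classified test pixels (the matrix diagonal) divided by the total number of test pixels. A minimal sketch, with illustrative class names and counts:

```python
import numpy as np

def overall_accuracy(error_matrix):
    """Overall accuracy = correctly classified test pixels (the diagonal)
    divided by the total number of test pixels."""
    m = np.asarray(error_matrix, dtype=float)
    return m.trace() / m.sum()

# Illustrative 3-class error matrix: rows = reference class,
# columns = classified result (water, vegetation, built-up).
em = [[50,  2,  3],
      [ 4, 60,  6],
      [ 1,  5, 69]]
print(overall_accuracy(em))  # (50 + 60 + 69) / 200 = 0.895
```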

Apart from this, a large number of supervised classification methods have been developed. These algorithms include:

● Maximum Likelihood Classifier
● Minimum Distance-to-Means Classifier
● Mahalanobis Distance Classifier
● K-Nearest Neighbors Classifier
● Support Vector Machine
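Of the classifiers listed, the minimum distance-to-means approach is the simplest to sketch: class mean vectors are derived from the training pixels, and each image pixel is allocated to the class whose mean is nearest in spectral space. The two-band values below are purely illustrative:

```python
import numpy as np

def train_means(training_pixels):
    """Derive the mean spectral vector of each class from its training pixels."""
    return {cls: np.mean(pix, axis=0) for cls, pix in training_pixels.items()}

def classify(pixels, means):
    """Allocate each pixel to the class with the nearest mean (Euclidean distance)."""
    classes = list(means)
    centers = np.stack([means[c] for c in classes])           # (n_classes, n_bands)
    d = np.linalg.norm(pixels[:, None, :] - centers, axis=2)  # (n_pixels, n_classes)
    return [classes[i] for i in d.argmin(axis=1)]

# Illustrative training samples in two bands (e.g. red and NIR values).
training = {"water":      np.array([[10, 5], [12, 6]]),
            "vegetation": np.array([[30, 90], [34, 95]])}
means = train_means(training)
print(classify(np.array([[11, 6], [32, 92]]), means))  # ['water', 'vegetation']
```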
Unsupervised Classification

Unsupervised classification is where the groupings of pixels with common characteristics are based on
software analysis of an image without the user defining training fields for each land cover class. All this is
done without the help of training data or prior knowledge. The image analyst’s responsibility is to
determine the correspondences between the spectral classes that the algorithm defines.

In unsupervised classification, there are two basic steps: generating clusters and assigning classes. Using remote sensing software, the analyst first creates clusters and specifies the number of groups to generate. After this, they assign land cover classes to each cluster. All this is made possible by algorithms such as:

● K-means
● Iterative Self-Organizing Data Analysis (ISODATA)
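The cluster-generation step can be sketched with a minimal hand-rolled k-means (in practice a library implementation such as scikit-learn's KMeans would typically be used; the two-band pixel values here are illustrative):

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Minimal k-means: repeatedly assign each pixel to its nearest cluster
    centre, then recompute every centre as the mean of its assigned pixels."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([pixels[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

# Two well-separated spectral groups in two bands; k-means recovers them,
# after which the analyst would assign a land cover class to each cluster.
pix = np.array([[10, 5], [12, 6], [11, 4], [80, 90], [82, 88], [79, 91]], float)
labels, _ = kmeans(pix, k=2)
print(labels)  # first three pixels share one label, last three the other
```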
Normalized Satellite Indices

Normalized difference indices are used in remote sensing to analyze and classify surface cover types.

Usually, a Landsat 8 image is used to generate output layers for:

Normalized Difference Water Index (NDWI),
Normalized Difference Vegetation Index (NDVI), and
Normalized Difference Snow Index (NDSI).


Normalized Difference Vegetation Index

In the simplest terms possible, the Normalized Difference Vegetation Index (NDVI) measures the greenness and the density of the vegetation captured in a satellite image. Healthy vegetation has a very characteristic spectral reflectance curve, which we exploit by calculating the normalized difference between two bands – visible red and near-infrared. NDVI is that difference expressed as a number ranging from -1 to 1.

The NDVI of a crop or a plant, calculated regularly over a period of time, can reveal a lot about changes in its condition. In other words, we can use NDVI to estimate plant health remotely.
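This band arithmetic is simple to compute per pixel as NDVI = (NIR - Red) / (NIR + Red). A minimal sketch with illustrative reflectance values (dense vegetation, bare soil, water):

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1.
    Pixels where both bands are zero are mapped to 0 to avoid dividing by zero."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    denom = nir + red
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1, denom))

# Illustrative reflectances: vegetation (high NIR, low red) scores high,
# bare soil sits near zero, water comes out negative.
print(ndvi(nir=[0.50, 0.20, 0.05], red=[0.05, 0.18, 0.10]))
```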
Normalized Difference Water Index

The Normalized Difference Water Index (NDWI) is used to highlight open water features in a satellite image, allowing a water body to “stand out” against the soil and vegetation.

NDWI = (Green – NIR)/(Green + NIR)

Whenever there is a need to detect a water body, sharpen its outline on the map, or monitor changes in its clarity, the NDWI index is applied. Beyond the visible spectrum, towards the infrared, water reflects almost no light. NDWI makes use of this property to outline water bodies on the map and monitor water turbidity.
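Using the formula above, a simple water mask can be derived by thresholding NDWI at zero (positive values typically indicate open water; the threshold and reflectance values here are illustrative):

```python
import numpy as np

def ndwi(green, nir):
    """NDWI = (Green - NIR) / (Green + NIR); water reflects green light
    but almost no NIR, so open water yields positive values."""
    green, nir = np.asarray(green, float), np.asarray(nir, float)
    return (green - nir) / (green + nir)

green = np.array([0.30, 0.12, 0.15])   # illustrative reflectances
nir   = np.array([0.05, 0.40, 0.45])
mask = ndwi(green, nir) > 0            # simple open-water mask
print(mask)  # only the first pixel (high green, low NIR) flags as water
```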
Normalized Difference Snow Index
Snow is a common global climatological phenomenon, well known to be a serious component of the hydrological cycle and an ecological hazard. In some parts of the world, snow is usually limited to higher ground and is regarded as one of the most destructive natural hazards. Consequently, identification of snow cover is important for hazard mitigation and catchment management, and is also significant for hydrological and weather forecasting. When identifying the presence of snow, satellite instruments observe at 0.66 μm and 1.6 μm. The atmosphere is transparent at these wavelengths, while snow is very reflective at 0.66 μm but not reflective at 1.6 μm.

Snow cover is as bright as clouds, which makes it difficult to distinguish from cloud cover. However, at 1.6 μm, snow absorbs sunlight and therefore appears darker than clouds. Observations at these two wavelengths therefore make it possible to separate clouds from snow effectively.

NDSI is a measure of the relative magnitude of the reflectance difference between the visible (green) and shortwave infrared (SWIR) bands. It exploits the contrast between two bands (one in the near-infrared or shortwave infrared and one in the visible part of the spectrum), which is useful for snow mapping. Snow is not only very reflective in the visible part of the electromagnetic spectrum but also highly absorptive in the NIR or shortwave infrared part, while cloud reflectance remains high in both parts of the spectrum. This allows good separation of most clouds from snow.

The index is calculated from two bands of a satellite image acquired at a specific time and location:

Band 2: Visible Green, 0.53 – 0.61 micrometers
Band 5: Short Wave Infrared, 1.55 – 1.75 micrometers
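Combining the two bands above gives NDSI = (Green - SWIR) / (Green + SWIR); a threshold of about 0.4 is commonly used to flag snow. A minimal sketch with illustrative reflectances:

```python
import numpy as np

def ndsi(green, swir):
    """NDSI = (Green - SWIR) / (Green + SWIR); snow is bright in the green
    band but strongly absorptive in SWIR, so snow pixels score high,
    while clouds (bright in both bands) stay low."""
    green, swir = np.asarray(green, float), np.asarray(swir, float)
    return (green - swir) / (green + swir)

green = np.array([0.80, 0.70, 0.20])   # snow, cloud, soil (illustrative)
swir  = np.array([0.05, 0.60, 0.25])
snow = ndsi(green, swir) > 0.4         # commonly used snow-mapping threshold
print(snow)  # only the snow pixel exceeds the threshold
```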
Remote Sensing Data Sources
United States Geological Survey (USGS)
● The Advanced Very High Resolution Radiometer (AVHRR): Broad-band, four- or five-channel scanner, sensing in the visible, near-infrared, and thermal infrared portions of the electromagnetic spectrum.

● Shuttle Radar Topography Mission (SRTM): The SRTM radar contained two types of antenna panels, C-band and X-band. Near-global topographic maps of Earth, called Digital Elevation Models (DEMs), are made from the radar data.

● LANDSAT: The Landsat program is the longest-running enterprise for acquisition of satellite imagery of Earth. The most recent, Landsat 8, was launched on February 11, 2013. The instruments on the Landsat satellites have acquired millions of images. Data is acquired in the visible, NIR, SWIR and TIR regions of the electromagnetic spectrum.

● The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER): An imaging instrument on board Terra, the flagship satellite of NASA's Earth Observing System, launched in December 1999. ASTER is a cooperative effort between NASA, Japan's Ministry of Economy, Trade and Industry, and Japan Space Systems. Data is acquired in the VNIR, SWIR and TIR regions of the electromagnetic spectrum. ASTER data are used to create detailed maps of land surface temperature, reflectance, and elevation.

● Moderate Resolution Imaging Spectroradiometer (MODIS): Terra MODIS and Aqua MODIS view the entire Earth's surface every 1 to 2 days, acquiring data in 36 spectral bands, or groups of wavelengths. These data improve our understanding of global dynamics and processes occurring on the land, in the oceans, and in the lower atmosphere.

● RESOURCESAT: Resourcesat is an advanced remote sensing satellite built by ISRO. Data is acquired in the visible, VNIR and SWIR regions of the electromagnetic spectrum.
BHUVAN
● Scatterometer Satellite-1 (SCATSAT-1): SCATSAT-1 is a satellite providing weather forecasting, cyclone prediction, and tracking services to India. It was developed by ISRO Satellite Centre, Bangalore, whereas its payload was developed by Space Applications Centre, Ahmedabad. The satellite carries a Ku-band scatterometer.

● OCEANSAT-2: OceanSat-1 or IRS-P4 was the first Indian satellite built specifically for ocean applications. It was part of the Indian Remote Sensing satellite series. The satellite carried an Ocean Colour Monitor and a Multi-frequency Scanning Microwave Radiometer for oceanographic studies.

● RESOURCESAT-1, 2: The Resourcesat is designed to provide multispectral, monoscopic and stereoscopic imagery of the Earth's surface with its advanced on-board sensors. A Linear Imaging and Self Scanning sensor (LISS-III), an Advanced Wide Field Sensor (AWiFS) and a high-resolution multispectral sensor (LISS-IV) constitute the main payload of Resourcesat.

● Hyper-Spectral Imaging Satellite (HySIS): HySIS is an Earth observation satellite which provides hyperspectral imaging services to India for a range of applications in agriculture, forestry and in the assessment of geography such as coastal zones and inland waterways. The data will also be accessible to India's defence forces. HySIS carries two payloads: the first is the Visible Near Infrared (VNIR) and the second is the Shortwave Infrared Range (SWIR).

● CARTOSAT-1: The Cartosat satellites are a series of Indian optical Earth observation satellites built and operated by ISRO. Cartosat carries a state-of-the-art panchromatic camera that takes black and white pictures of the Earth in the visible region of the electromagnetic spectrum. The data from the satellite is used for detailed mapping and other cartographic applications at cadastral level, urban and rural infrastructure development and management, as well as applications in Land Information Systems (LIS) and Geographical Information Systems (GIS).
Copernicus Open Access Hub
● Sentinel-1: Sentinel-1 is the first satellite constellation of the Copernicus Programme, conducted by the European Space Agency. The mission is composed of two satellites, Sentinel-1A and Sentinel-1B, which share the same orbital plane. They carry a C-band synthetic-aperture radar instrument which provides data collection in all weather, day or night, with a spatial resolution of down to 5 m and a swath of up to 400 km.

● Sentinel-2: Sentinel-2 is an Earth observation mission from the Copernicus Programme that systematically acquires optical imagery at high spatial resolution (10 m to 60 m) over land and coastal waters. The mission is a constellation of two twin satellites, Sentinel-2A and Sentinel-2B. Multispectral data with 13 bands in the visible, near-infrared, and shortwave infrared parts of the spectrum are collected.

● Sentinel-3: Sentinel-3 is an Earth observation satellite constellation developed by the European Space Agency as part of the Copernicus Programme. It currently (as of 2019) consists of two satellites, Sentinel-3A and Sentinel-3B; two more, Sentinel-3C and Sentinel-3D, are on order. The applications of Sentinel-3 are diverse: using its collection of on-board sensors, Sentinel-3 can detect ocean and land temperature and colour change. The Ocean and Land Colour Instrument has a 300 m resolution with 21 distinct bands, allowing global coverage in less than four days.
Indian Meteorological Department (IMD)
Rainfall Information:
• River basins
• Rainfall statistics
• Normal rainfall maps

Monsoon Information:
• Onset and advance map of monsoon
• Daily monsoon activity
• Week-by-week and cumulative rainfall activity
• End-of-season monsoon report
• Monsoon forecast verification

Cyclone Information:
• Tropical weather outlook
• Track of cyclone disturbances
• Wind warnings
• Storm surge warnings

Climate Services:
• Daily rainfall map
• Daily temperature map
• Standardized precipitation index
• Time series of temperature and rainfall

City Forecast:
• Temperature and weather forecast in cities


Summary

● Satellite Data Analysis: Data Products and their Characteristics
● Data Pre-processing – Atmospheric, Radiometric, Geometric Corrections
● Basic Principles of Visual Interpretation
● Color Composite: False and True Color Composite
● Classifications – Supervised and Unsupervised
● Normalized Satellite Indices
● Remote Sensing Data Sources: USGS, Bhuvan, ESA, Sentinel etc.
Thank You
