
REMOTE SENSING

What is remote sensing?


Remote = “far away”; Sensing = “believing, observing or acquiring some information”

Remote sensing means acquiring information about things from a distance.
Or
Remote sensing is the science and art of acquiring information about the earth's surface without actually being in contact with it. This is done by sensing and recording reflected or emitted energy and processing, analysing and applying the information.
OR
Remote sensing is defined as the science and art of acquiring information about a material object by making measurements, at a distance from (without coming into physical contact with) it, of the electromagnetic (EM) energy it radiates.

These measurements are usually made using a sophisticated instrument called the sensor (e.g. a camera, an MSS, a radar), a device which carries out the measurements through the light (electromagnetic radiation) reflected/emitted by the object.

Thus remote sensing includes all the methods of obtaining pictures and other forms of electromagnetic records, mostly of the earth's surface, from a distance, and their processing and interpretation.
Remote sensing = Science and art of obtaining information about an object, area or phenomenon through analysis of the data acquired by a device that is not in contact with the object, area or phenomenon under investigation

Aerial Photography = Science of making photographs from the air for studying the surface of the earth

Photogeology = The technique of aerial photo-interpretation when used to collect information for the purpose of deciphering the geology of an area

Photogrammetry = The science and art of determining the size and shape of objects by analysing images recorded on film or electronic media; or the art of making measurements from images or aerial photographs
Electromagnetic Energy/ Electromagnetic Radiation (EMR)

All energy that moves with the speed of light in a harmonic wave pattern (light, heat,
radio waves) . Electromagnetic Energy is identified in terms of wavelength and
frequency.

First requirement of remote sensing is to have an energy source to illuminate the target, unless the remotely sensed energy is being emitted by the target itself. Remote sensing sensors need a source of energy to illuminate the earth's surface.

Sources of energy applicable to remote sensing are
a) Natural, e.g. sunlight and the earth's emission
b) Artificial, e.g. flashlight, radar and LASER

Whether the energy is radiated from an external (natural or artificial) source or emitted from the object itself, it is in the form of EMR
Electromagnetic spectrum
The electromagnetic spectrum is the ordering of EM radiation according to wavelength, frequency or energy. The EM spectrum is most commonly presented between cosmic rays and radio waves and is divided into various regions or domains.
Primary colours : Blue, Green, Red

No single primary colour can be created from the other two, but all other colours can be formed by combining the primary colours in various proportions

Wavelength of visible spectrum

We see sunlight as a uniform or homogeneous colour, but it is actually composed of various wavelengths of radiation
 The process of separating the constituent colours in white light is known as Dispersion

Red: 0.620 - 0.7 µm
Orange: 0.592 - 0.620 µm
Yellow: 0.578 - 0.592 µm
Green: 0.500 - 0.578 µm
Blue: 0.446 - 0.500 µm
Violet: 0.4 - 0.446 µm
 Electromagnetic Spectrum : from cosmic rays to radio waves

 Remote sensing is generally performed within the range of the ultraviolet to microwave regions

 Different bands of EMR are used for different types of remote sensing

 Cosmic rays, Gamma rays and X-rays are not used in remote sensing

 Longer wavelengths beyond the microwave portion of the EM spectrum are television and radio waves
Different bands of the electromagnetic spectrum

A Radiation by the energy source
B Interaction with the earth's atmosphere
C Interaction with the target
D Recording of energy by the sensor
E Transmission, reception and processing
F Interpretation and analysis
G Applications

(Figure: EMR passing through vacuum and the earth's atmosphere)

 EMR propagates through the vacuum and then through the earth's atmosphere

 No change occurs in vacuum

 The atmosphere may affect not only the speed of radiation but also its wavelength, its intensity, and its spectral distribution

 These effects are caused by two main mechanisms: Scattering and Absorption


ABSORPTION

 Absorption is the process by which radiant energy is absorbed and converted into other forms of energy
 An absorption band is a range of wavelengths (or frequencies) in the EM spectrum in which radiant energy is absorbed by a substance.

 Absorption causes the atmosphere to close down completely in certain regions of the spectrum, which is not desirable for remote sensing as no energy is available to be sensed

 The main absorbers are Ozone, Carbon dioxide and Water vapour

Ozone serves to absorb the harmful ultraviolet radiation from the sun

 Water vapour absorbs radiation strongly in the far infrared (thermal infrared) portion of the spectrum
 Water vapour in the atmosphere absorbs incoming longwave infrared (thermal) and shortwave microwave radiation

 Absorption of electromagnetic energy in specific regions of the spectrum influences where we can look for remote sensing purposes

 Those areas of the spectrum which are not severely influenced by atmospheric absorption, and thus are useful to remote sensors, are called Atmospheric Windows

 All spectral regions are affected to some extent by absorption in the atmosphere, but some spectral regions are less affected, nearly transparent, and useful for remote sensing
SCATTERING

 Unpredictable diffusion of radiation by particles in the atmosphere

Occurs when particles or large gas molecules present in the atmosphere interact with EMR and cause the EMR to be redirected from its original path

Scattering depends on
 Wavelength of the radiation
 Diameter of the particles or gaseous molecules
 Distance the radiation travels through the atmosphere

 Selective Scattering and Non- selective scattering


Selective Scattering

Rayleigh Scattering - Also known as Molecular Scattering. Occurs when the effective diameter of the matter (air molecules, e.g. oxygen and nitrogen) is many times smaller than the wavelength of the incident EMR:
wavelength of the incident EMR >> diameter of the matter

 Scattering takes place in the upper 4.5 km of the atmosphere

 Responsible for the blue appearance of the sky, as the shorter violet and blue wavelengths are more efficiently scattered than the longer green and red wavelengths

 Dominant for near-ultraviolet and visible radiation and becomes negligible for wavelengths longer than 1 micrometre

 That is why most remote sensing systems avoid detecting and recording wavelengths in the ultraviolet and blue portions of the spectrum
Mie Scattering - Also known as Non-Molecular Scattering. Occurs when the effective diameter of the matter is approximately equal to the wavelength of the incident EMR

 Caused by particles with radii between 0.1 and 10 µm, such as dust, smoke and aerosols

 Scattering takes place in the lower 4.5 km of the atmosphere, where essentially spherical particles may be present
Non – selective scattering

 Occurs when particles are greater than 10 times the wavelength of the incident EMR

 Non-selective scattering takes place in the lowest portion of the atmosphere

 Scattering is non-selective, i.e. all wavelengths of light are scattered, not just blue, green or red

 Clouds appear white because the water droplets and ice crystals that make up clouds and fog banks scatter all wavelengths of visible light equally well

 Non-selective scattering of approximately equal proportions of blue, green and red light always appears white to the casual observer
Refraction
 Bending of light when it passes from one medium to another

When EMR encounters substances of different densities, like air and water, refraction takes place

 Refraction occurs because the media are of different densities and the speed of the EMR is different in each

Reflection

 Reflection is the process whereby radiation bounces off an object

 Differs from scattering: the direction associated with scattering is unpredictable, whereas in the case of reflection it is predictable
Interaction with earth surfaces
 Electromagnetic radiation that passes through the
earth's atmosphere without being absorbed or scattered
reaches the earth's surface to interact in different ways
with different materials constituting the surface.

 The incident energy on any given earth surface interacts with it, and the energy is reflected, absorbed and transmitted

 The proportion of energy reflected, absorbed and transmitted varies with the feature, depending on the type and condition of the material constituting the feature
 The energy interaction is given by the relationship
E_I(λ) = E_R(λ) + E_A(λ) + E_T(λ)
or E_R(λ) = E_I(λ) - E_A(λ) - E_T(λ)

E_I = incident energy
E_R = reflected energy
E_A = absorbed energy
E_T = transmitted energy
In remote sensing, we are most interested in
measuring the radiation reflected from targets

There are two main types of reflections

– Specular reflector

– Diffuse or Lambertian reflector

This is mainly a function of the roughness of the object

• Specular reflector = has a flat surface from which mirror-like reflection takes place. Here the angle of reflection is equal to the angle of incidence. Contains no spectral information.
When the surface is smooth, we get a mirror-like or smooth reflection where all (or almost all) of the incident energy is reflected in one direction. This is called Specular Reflection

• Lambertian reflector = has a rough surface from which uniform reflection in all directions takes place. Contains spectral information.
When the surface is rough, the energy is reflected uniformly in almost all directions. This is called Diffuse Reflection
 But most earth surface features are neither perfectly specular nor diffuse reflectors.

 Their characteristics are intermediate between the two: they reflect some energy in all directions but reflect a larger portion in the specular direction

 Hence remote sensing is mostly interested in measuring the diffuse reflectance properties of features
SPECTRAL REFLECTANCE CURVE
• Shows the relationship of the electromagnetic spectrum with the associated percent reflectance for any given material

• The amount of solar radiation that reflects, absorbs, or transmits for any given material varies with wavelength

• This important property of matter makes it possible to identify different substances or classes and separate them by their spectral signatures

• Gives no information about the absorption and transmittance of the radiant energy; only reflected energy
• At some wavelengths, dry soil reflects more energy than green vegetation, but at other wavelengths it absorbs more and reflects less than does the vegetation
Aerial Photography
 A photograph taken from the air with the camera axis pointing downwards at the time of exposure is known as an aerial photograph
Or
The science of taking photographs from a point in the air for the purpose of making some type of study of the surface of the earth

► An aerial camera mounted on an aircraft takes the aerial photograph

► The camera can be positioned so that it can photograph the ground from any angle above the horizon

► Aerial photographs are of immense value, particularly for mapping and interpreting earth surface features.
Types of aerial photographs
Depending on the optical axis of the camera

Vertical Photographs
A photograph taken with the optical axis of the camera pointing vertically downwards. A photograph with the optical axis inclined at an angle of up to ±3° from the vertical is considered a vertical photograph

 Oblique Photographs
A photograph taken with the optical axis of the camera tilted is called an oblique photograph (tilt is the deviation of the camera axis from the vertical). These photographs cover a large area of the ground, but the quality of the image deteriorates towards the far end of the photograph
- Low Oblique Photographs
- High Oblique Photographs
Low oblique photographs – in which the horizon does not appear. These photographs are sometimes used to compile reconnaissance maps of inaccessible areas

High oblique photographs – in which the tilt of the camera is sufficient to contain the horizon
Convergent Photographs

 Low oblique photographs taken with two cameras exposed simultaneously at successive camera stations, with the camera axes tilted from the vertical at a fixed angle in the direction of the flight line, so that the forward exposure of the first station forms a stereo pair with the backward exposure of the next station.

The angle of tilt is along the flight line – forward or backward
Trimetrogon Photographs

Photographs taken simultaneously with three cameras held in a single mount

One is held vertically and photographs the area below the plane

The others are held obliquely at an angle of 60° from the vertical and photograph the areas adjacent to the area being photographed by the vertical camera

Trimetrogon photographs are used for the rapid production of reconnaissance maps at small scales
According to lens type/system

 Single lens photography

Three lens photography

Four lens photography

Nine lens Photography

Continuous strip Photography


• According to film, filter or photographic
equipment

– Black and white Photography

– Infra red photography

– Colour Photography

– Colour Infra-red photography

– Thermal infra-red photography

– Radar imagery
– Black and white photography
Records all the reflections of the visible spectrum
Most suited to general photo-interpretation

– Infra-red photography
Records only the red and infra-red parts of the spectrum
Suited for water-vegetation discrimination

– Colour photography
Records all the reflections of the visible spectrum in colour or near-natural colours
Detailed investigations in mineral prospecting, forestry, agriculture, industry, town planning etc

– Colour infra-red photography
Records spectral colours and infra-red in combination, resulting in false colours
Plant and crop discrimination, land-water-vegetation discrimination, water pollution etc

– Thermal infra-red photography
Records only the thermal infra-red emissions of objects
Suited for studies involving temperature variation, like geothermal studies, water pollution etc

– Radar imagery
Records reflections of radar waves
Topographic studies, morpho-tectonic studies and the general condition of the ground

– Spectrozonal photography
Records only a selective part of the spectrum
Different parts of the spectrum are suited to different studies
How aerial Photograph are taken
 Aerial photographs of a given area are taken by covering the area in a series of parallel flight lines

 Generally the plane is flown along a flight line in one direction so that it obtains a strip of photographs of the section of the area covered by that flight line

 Its path is then reversed by 180° to obtain a strip of photographs of the section of the area covered by the next flight line

 The process is continued till the entire area is photographed
 The strips of photographs along the flight lines are taken in such a way that some area between the strips along two successive flight lines overlaps, and no area is left unphotographed between the flight lines

 The area common to two successive strips is known as side lap or lateral lap

 Side lap is generally maintained at about 30%

 The area common to two successive photographs in the flight direction is called the forward overlap

 Forward overlap is usually maintained at 60%, so that every point on the ground is represented on at least two consecutive photographs
The quality and usefulness of the photographs depends on
Flight and weather conditions
The camera lens
Film developing
Photo printing processes
Photographs should be as vertical as possible and free from the elements of tilt and tip

Drift and crab are common defects produced due to errors in flying or in the flight line
Drift
Due to side winds, the pre-determined direction of the plane is influenced and the straightness of the run is lost.
The flight path deviates from its original flight line in the direction of the wind.
Errors in taking photographs due to side wind are known as drift.

Crab
If drift is corrected by turning into the wind without reorienting the camera, crab will result.
Photographic flight planning/ mission
• Purpose of photography - large scale or small scale, general or detailed mapping; purpose of the photography: geology, forestry, environmental study etc

• Area to be photographed - shape, size, terrain elevation, water conditions, vegetation

• Type of photography - whether vertical, oblique, B/W, infra-red, colour or any other type

• Scale of the photography - large scale (1:50,000) or small scale (1:70,000 or less)

• Aerial camera and lenses

• Flight direction - generally from east to west, but depends also on the shape, layout and physical configuration of the area

• Time of photography - deep shadows when the sun is low are not good for photography

• Season of photography

• Overlaps
Geometric Characteristics of Aerial Photographs
Scale of the photograph
 A photographic scale is an expression that states that one unit (any unit) of distance on a photograph represents a specific number of units of actual ground distance

 Expressed as unit equivalents, representative fractions or ratios

 E.g. if 1 mm on a photograph represents 25 m on the ground, the scale of the photograph can be represented as

 1 mm = 25 m (unit equivalent)

 1/25,000 (representative fraction)

 1:25,000 (ratio)
 On a vertical aerial photograph, scale can be expressed as

S = f / H = focal length of the camera lens / flying height above the datum

 E.g. for an aerial photograph with f = 6 inches and H = 15,000 feet, the scale is

S = 6 inches / 15,000 feet = 0.5 feet / 15,000 feet = 1/30,000 or 1:30,000
 If the focal length and flying height are not known, the scale of the photograph may also be determined by computing the ratio of the distance between two known points on the aerial photograph to the same distance on the ground

 S = photo scale = photo distance / ground distance

 In this case it is better to take the measurements along lines where the radial displacement of images due to relief is minimum

 An accurate scale is not possible if high relief is present, resulting in radial displacement of images
 Photographic scale varies directly with the focal length

When a camera with a longer focal length (f) is used, a greater scale is obtained

Photographic scale varies inversely with the flying height (H) above mean sea level or the ground datum line (H - h)

 The greater the flying height, the smaller the scale of the photograph
Large scale and small scale

 Which photograph has a larger scale?

 A 1:10,000 scale photo covering several city blocks
 A 1:50,000 photo that covers an entire city

 The photo covering the larger area (the entire city), i.e. 1:50,000, is the large scale product – Not true

 The larger scale product is the 1:10,000 image because it shows ground features at a larger, more detailed size

 The 1:50,000 scale photo of the entire city shows ground features at a much smaller, less detailed size

 Hence, in spite of its larger coverage, the 1:50,000 scale photo is termed the smaller scale product
 A convenient way to remember scale comparison is that small objects are smaller on a “smaller” scale photograph than on a “larger” scale photograph

 Scale comparisons can also be made by comparing the magnitudes of the representative fractions involved (i.e. 1/50,000 is smaller than 1/10,000)

 Aerial photographs have a general scale rather than a uniform scale
Q. A camera equipped with a 152 mm focal length lens is used to make a vertical photograph from a flying height of 2780 m above mean sea level. If the terrain is flat and located at an elevation of 500 m, what is the scale of the photograph?

Q. Assume that a vertical photograph was taken at a flying height of 5000 m above sea level using a camera with a 152 mm focal length lens.
a. Determine the photo scale at points A and B, which lie at elevations of 1200 m and 1960 m.
b. What ground distance corresponds to a 20.1 mm photo distance measured at each of these elevations?
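A minimal Python sketch (not part of the original slides) working through these two questions with the scale relation S = f / (H - h):

```python
# Photo scale from focal length and flying height above the terrain.
def scale_denominator(f_m, H_m, h_m):
    """Return the denominator D of the scale 1:D, i.e. (H - h) / f."""
    return (H_m - h_m) / f_m

# Q1: f = 152 mm, H = 2780 m above MSL, flat terrain at 500 m
print(scale_denominator(0.152, 2780, 500))    # 15000.0 -> scale 1:15,000

# Q2a: f = 152 mm, H = 5000 m; point A at 1200 m, point B at 1960 m
den_A = scale_denominator(0.152, 5000, 1200)  # 25000.0 -> 1:25,000
den_B = scale_denominator(0.152, 5000, 1960)  # 20000.0 -> 1:20,000

# Q2b: ground distance = photo distance * scale denominator
print(0.0201 * den_A)                         # 502.5 m at A
print(0.0201 * den_B)                         # 402.0 m at B
```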
Relief displacement
→ All the elevations and depressions on an aerial photograph have their images displaced from their original position on the ground because of the central perspective

→ With the exception of objects at the nadir point or principal point in vertical aerial photographs

→ On a photograph, areas of terrain at higher elevations lie closer to the camera at the time of exposure and therefore appear larger than corresponding areas lying at lower elevations

→ The higher points with reference to the datum line are displaced away from the principal point and the lower points are displaced towards the principal point

→ This distortion is called relief displacement

→ Thus, due to relief displacement, we can determine the height of a building on a single vertical photograph

→ Stereoscopic vision is possible due to relief displacement

→ This makes the relief of the terrain visible, which is a very important factor in photo-interpretation

→ These displacements of images are radial from the principal point
d / r = h / H

d = (h × r) / H

h = (d × H) / r

h = height of the object
H = flying height
d = relief displacement
r = radial distance of the image from the principal point
→ Relief displacement is directly proportional to the difference in elevation “h” between the top of the object whose image is displaced and the datum surface

→ Relief displacement is directly proportional to the radial distance “r” between the displaced image and the principal point

→ Relief displacement is inversely proportional to the flying altitude “H” of the camera above the datum
Example
• An image of a hill is 3.5 inches from the principal point, h is 2,000 ft and H is 14,000 ft. What is the relief displacement of the hill?

Here, r = 3.5 inches
h = 2,000 ft
H = 14,000 ft

So, d = (h × r) / H = (2,000 × 3.5) / 14,000 = 0.5 inch
Q. If H is known to be 2,000 ft, r is measured as 0.025 ft and d is measured as 0.0012 ft, then find out the height of the building.

Q. S = 1:20,000; r of an image = 100 mm; h of an object = 1,000 ft; f = 6 inches. What is the “d” needed to correct the radial plot?
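A similar sketch (again not from the slides) applying d = rh/H and h = dH/r to these questions; units are kept as given, so the answer carries the unit of the input it scales:

```python
def displacement(r, h, H):
    """Relief displacement d = r * h / H (d shares r's unit; h and H share a unit)."""
    return r * h / H

def object_height(d, r, H):
    """Object height h = d * H / r."""
    return d * H / r

# Q1: H = 2,000 ft, r = 0.025 ft, d = 0.0012 ft
print(object_height(0.0012, 0.025, 2000))  # 96.0 ft

# Q2: S = 1:20,000 with f = 6 in = 0.5 ft, so H = 0.5 * 20,000 = 10,000 ft
H = 0.5 * 20000
print(displacement(100, 1000, H))          # 10.0 mm, since r was given in mm
```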
Tilt and image displacement
 The camera axis deviates from its true vertical position

 Almost all vertical photographs are tilted accidentally

 The tilt factor affects the displacement of images and the scale variation on an aerial photograph

 The effect of tilt is prominent when there is much relief in an area

 E.g. if H = 12,000 ft and r = 3 inches, then d for a 20 ft high object will be only 0.005 inch

Thus the relief displacement is very small and is very difficult to measure accurately

Tilt changes the scale of the photograph

Greater scale on the side tilted downward and smaller scale towards the side tilted upward
(Figure: tilted photograph – smaller scale on the upward-tilted side, greater scale on the downward-tilted side)
In the case of a tilted photograph the scale can be expressed as

 S = (f sec t - y sin t) / (H - h)

t = tilt angle
y = distance of the image from the isocentre measured in the direction of tilt
Fiducial Marks
Index marks rigidly connected with the camera lens and forming images on the negative
Adjusted in such a way that the intersection of lines drawn between opposite fiducial marks defines the position of the principal point of the photograph
The lines joining the opposite fiducial marks on a photograph are known as fiducial axes

Principal point
 It is the point on the aerial photograph where the perpendicular from the interior perspective centre of the camera lens meets the plane of the aerial photograph
Perspective centre

 Point of origin or termination of bundles of perspective rays

 In a perfect camera lens, perspective rays from the interior perspective centre to the photographic images enclose the same angle as do the corresponding rays from the exterior perspective centre to the objects photographed
Conjugate principal point (CPP)
 It is the principal point of an aerial photograph represented on an adjacent aerial photograph
Nadir Point
 The nadir point is the point at which a vertical line through the perspective centre of the camera lens falls on the plane of the aerial photograph.
 The point on the ground vertically beneath the perspective centre of the camera lens is known as the ground nadir point
Isocentre
 The point on the photo that falls on a line halfway between the principal point and the nadir point
Focal length
 Distance measured along the lens axis from the rear nodal point to the plane of best average definition over the entire field used in the vertical camera

Air base
 Line joining two air stations along the line of flight

Photo base
 The air base is called the photo base on an aerial photograph
 Represents the length of the air base on a photograph
 On an aerial photograph, the distance between the principal point and the conjugate principal point represents the photo base
Isocentre
 The point on the photo that falls on a line halfway between the principal point and the nadir point

 It is a unique point common to the plane of a photograph, its principal plane and the plane of an assumed truly vertical photograph taken from the same camera station and having an equal principal distance

 In truly vertical photographs, the principal point, nadir point and isocentre coincide with each other

 The isocentre is significant in oblique photographs, as it is the centre from which the radial displacement of the images takes place
Platforms
Based on platform
The remote sensors must reside on a stable platform away from the target or surface being observed to collect and record energy reflected or emitted from the target or surface

Thus the base on which remote sensors are mounted is called a remote sensing platform, either stationary or moving

Based on its altitude above the earth's surface, a platform may be classified as

Ground borne

Air borne

Space borne
Ground borne platform :

The sensor base stands on the ground

Sensors are used to record detailed information about the surface of the earth, mainly for collecting ground truth

Used for close range, high accuracy applications, e.g. bridge and dam monitoring, landslide erosion mapping etc

 Can be used to better characterize the target being imaged by other sensors, making it possible to better understand the information in the imagery

 Sensors are placed on ladders, scaffolding, tall buildings, cherry pickers, cranes etc
Aerial or airborne platform

Balloons, kites and aircraft are generally used to acquire aerial photographs for photo-interpretation and photogrammetric purposes

 Typical platforms range from balloons and kites for low altitude remote sensing to aircraft and satellites for aerial and space remote sensing

 Balloons and kites were the early platforms for remote sensing and are currently not used

Aircraft are the main aerial platforms, although helicopters are also used occasionally

 The higher the altitude, the larger the area that can be viewed by the sensor

 Thus the altitude determines the ground resolution

 The size of the object that can be discriminated by a sensor depends on the resolving power of the sensor and the height of the platform
Advantages of using aircraft as remote sensing platforms are

 Flexibility of operation over any desired area at any convenient time

 High resolution of data for applications

 Capability of imaging large areas economically

 Convenience of selecting different scales

 Possibility of repetitive surveys

 Adequate control at all times

Due to limitations of operating altitude and range, the aircraft finds its greatest applications in local or regional programs rather than measurements on a global scale
Space borne or space platforms

 Remote sensing conducted mainly from satellites is also called satellite remote sensing

 Satellite platforms enable large area coverage at frequent intervals in uniform solar illumination conditions, and are highly suited for applications based on synoptic measurements over large areas and where periodic observations over the same area are required

 Satellites used for remote sensing are generally of two types, depending on altitude:

 Geosynchronous or geostationary orbit

 Sun synchronous orbit


Geosynchronous or Geostationary orbit

 A geostationary satellite is any satellite placed in a geostationary orbit, which is a circular orbit oriented in the plane of the earth's equator

 Geostationary satellites are stationary with respect to the earth, placed at an altitude of approximately 35,800 km directly above the earth's equator

 They revolve in the same direction as the earth rotates

 The primary advantage of the orbit is that the satellite is stationary and allows continuous viewing of the portion of the earth within the line of sight of the satellite's sensors

 Disadvantages - extreme distance of earth viewing, potential shortage of spacecraft parking space due to the uniqueness of the orbital requirement, and the need for several satellites to obtain global coverage

 A single geostationary satellite can provide one-third coverage of the entire planet

Sun synchronous orbit

 These satellites are placed in lower Earth orbit, unlike geostationary satellites

 Sun synchronous orbits are polar orbits

A sun synchronous orbit always passes over the same part of the earth at approximately the same local sun time each day

 The advantage of putting a satellite in polar orbit is that the Earth rotates beneath it, so a single satellite can achieve global coverage.
Based on Energy source -
The sun provides a suitable source of energy for remote sensing, and the energy is either reflected, as it is for visible and reflective infrared wavelengths, or absorbed and then re-emitted, as it is for thermal infrared wavelengths

Passive remote sensing
Remote sensing systems which measure energy that is naturally available are called passive remote sensing systems

Active remote sensing
Active remote sensing systems provide their own energy source for illumination. The sensor emits radiation which is directed towards the target
Based on Imaging Media
Photographic Imaging –
Uses chemical reactions on the surface of light sensitive film to detect and record energy variations. Widely used earlier for aerial platforms. Only possible within the photographic region (0.3-0.9 µm) of the electromagnetic spectrum

Digital Imaging –
Sensors use electronic transducers such as charge-coupled devices (CCDs). It is a newer technique that is used from satellites as well as aeroplanes. Satellite remote sensing is based on digital imaging, and the recorded data are transmitted from the satellite to earth via digital communication
Based on the regions of Electromagnetic Spectrum

Optical remote sensing - performed within the optical region
Photographic remote sensing - performed within the photographic region
Thermal remote sensing - uses the thermal region
Microwave remote sensing - uses the microwave region

Optical and photographic remote sensing record reflected energy from the earth's surface (exception: LiDAR), with the sun as the source of energy

Thermal and microwave remote sensing use energy emitted from the earth's surface
Active microwave remote sensing uses artificially generated energy, and the backscattered energy (reflection in the opposite direction to the incident active microwave rays) is recorded by the sensor
Based on Number of Bands
Images may be collected in a single band or in more than one band, and remote sensing can be classified based on the number of bands to which a sensor is sensitive

Panchromatic remote sensing - records data in a single band of the electromagnetic spectrum. Generally images are collected within the visible region (0.4-0.7 µm), or in some instances a wider region is used (0.3-0.9 µm)

Multi-spectral remote sensing - records data in multiple bands of the electromagnetic spectrum. May be performed in the optical, thermal as well as microwave regions. Sensors and imaging techniques are different for different regions
Hyper-spectral remote sensing -
Continuous sampling of narrow intervals of the spectrum. It is an extension of the techniques employed in multispectral remote sensing. Multispectral remote sensors provide images with a few relatively broad wavelength bands, whereas hyperspectral remote sensing collects image data continuously in dozens to hundreds of narrow adjacent spectral bands. Generally performed within the optical regions of the electromagnetic spectrum.

Also known as hyperspectral spectroscopy, imaging spectroscopy and narrow-band imaging

(Figure: hyperspectral image)
Sensors
A device used for observations, usually consisting of sophisticated lenses with filter coatings, in which the detectors are placed, to focus the area observed onto a plane

Detectors are sensitive to the particular region of the electromagnetic spectrum in which the sensor is designed to operate and produce output

Sensor Resolutions (resolving power)

 Spatial Resolution
 Spectral Resolution
 Radiometric Resolution
 Temporal Resolution
Spatial Resolution

 Actual area covered on the ground per pixel of the image

 Detail of information decreases as the spatial resolution decreases

(Example pixel sizes: 0.5 m × 0.5 m, 5.0 m × 5.0 m, 20 m × 20 m)


Spectral Resolution
 Spectral resolution refers to the number of bands, their individual widths, and the entire range of the electromagnetic spectrum covered by the bands
 Different features on the earth's surface have different spectral responses that characterize the reflectance and/or emittance of a feature or target over a wide variety of wavelengths

 Black and white films record wavelengths extending over the visible region of the EM spectrum. Their spectral resolution is fairly coarse, as the various wavelengths of the visible region are not individually distinguished and the overall reflectance of the entire visible region is recorded.

 Colour film is also sensitive to reflected energy over the visible portion of the spectrum, but has a higher spectral resolution, as it is individually sensitive to reflected energy at the blue, green and red wavelengths of the spectrum

 Multispectral sensors have a higher spectral resolution than panchromatic sensors

 Hyper-spectral sensors have a higher spectral resolution than multispectral sensors

(Figure: hyperspectral image)
Radiometric Resolution
 The sensitivity of the remote sensing detectors to differences in signal strength as they record the radiation flux reflected or emitted from the terrain

 While the arrangement of pixels describes the spatial structure of an image, the radiometric characteristics describe the actual information content in the image

When an image is acquired on film or by a sensor, its sensitivity to the magnitude of the electromagnetic energy determines the radiometric resolution

 Defines the discriminable signal levels. The radiometric resolution of an imaging system describes its ability to discriminate very slight differences in energy.

 The finer the resolution of a sensor, the more sensitive it is for detecting small differences in reflected or emitted energy
 The number of radiometric levels, brightness levels or grey levels is expressed in terms of the number of binary (base two) digits (bits)

Each bit records an exponent of base 2, e.g. 1 bit = 2^1 = 2 levels

The maximum number of brightness levels available depends on the number of bits used in representing the energy recorded.

 If a sensor uses 8 bits to record the data, there are 2^8 = 256 digital values available, ranging from 0 to 255

(Compare: an 8 bit image with a 2 bit image)
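A quick illustrative check (added here) of the bits-to-levels relation:

```python
# Number of brightness levels available for a given bit depth: levels = 2**bits.
for bits in (1, 2, 6, 8, 11, 16):
    print(bits, "bits ->", 2 ** bits, "levels")   # e.g. 8 bits -> 256 levels (DN 0-255)
```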
Temporal Resolution

The revisit period of a sensor

 The temporal resolution of a remote sensing system refers to how often it records imagery of a particular area, i.e. the frequency of repetitive coverage

 E.g. the IRS-1A sensor had a 22 day temporal resolution, meaning it could acquire an image of a particular area repetitively at 22 day intervals

 Lower temporal resolution refers to infrequent repeat coverage, while higher temporal resolution refers to frequent repeat coverage
Swath and Nadir
DIGITAL IMAGE PROCESSING

 PRE PROCESSING

 IMAGE ENHANCEMENT

IMAGE TRANSFORMATION

IMAGE CLASSIFICATION
PRE PROCESSING (image restoration/image correction/ image rectification)

The correction of deficiencies and the removal of flaws from the raw
data are called pre-processing.
 atmospheric correction
 sun-illumination geometry
 surface induced geometric distortions
 spacecraft velocity and altitude variation
 effects of earth’s rotation
 skew effects
 abnormalities of instrument performance

Pre-processing techniques are categorized into two broad groups

 Radiometric corrections

 Geometric corrections
Radiometric corrections

 RC normally involves processing the digital image to improve the fidelity of the brightness values

 To reduce the influence of errors or inconsistencies in image brightness values which limit the image interpretation and image analysis process

 The emitted and reflected energy observed by the sensor does not coincide with the energy emitted or reflected from the same object observed from a short distance

 Radiometric correction is performed to obtain the real irradiance or reflectance from the source or object

The sensed or observed energy is influenced by

 The sun's azimuth and elevation
 Atmospheric conditions, e.g. fog or aerosols
 The sensor's response
Radiometric corrections are classified into three groups

 Detector response calibration
De-striping
Removal of missing scan lines
Random noise removal
Vignetting removal

 Sun angle and topographic correction

 Atmospheric correction
De-striping

 Banding or striping occurs if a detector goes out of adjustment

 It also occurs due to improper data recording or transmission

 Striping shows readings consistently greater than or less than those of the other detectors for the same band over the same ground coverage
 Removal of strips or patterns of lines with constantly high or low digital numbers is known as de-striping

 Striping was very common in early LANDSAT-MSS data due to variation and drift over time of the six MSS detectors.

 De-striping is done by constructing histograms for each detector of the problem band

 E.g. for the six detectors of LANDSAT-MSS, histograms are calculated for lines 1, 7, 13, ...; lines 2, 8, 14, ...; etc.

 Then the mean and standard deviation are calculated for each of the six histograms
IMAGE HISTOGRAM
 It can be corrected by adopting one detector's data as the standard and adjusting the brightness of all pixels recorded by each other detector so that their mean brightness and standard deviation match those of the standard detector

DN_new = (σ_d / σ_i) × DN_old + m_d - (σ_d / σ_i) × m_i

DN_new = new DN (brightness value)
DN_old = old DN (brightness value)
m_d and σ_d = reference values of mean brightness and standard deviation
m_i and σ_i = mean and standard deviation of the detector under consideration
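A minimal numpy sketch of this adjustment, assuming six detectors that each recorded every sixth line (as in MSS); the function and its interface are illustrative, not a standard library routine:

```python
import numpy as np

def destripe(img, n_detectors=6, ref=0):
    """Rescale each detector's lines so their mean and standard deviation match
    the reference detector's: DN_new = (s_d/s_i)*DN_old + m_d - (s_d/s_i)*m_i."""
    out = img.astype(float).copy()
    m_d = img[ref::n_detectors].mean()          # reference mean brightness
    s_d = img[ref::n_detectors].std()           # reference standard deviation
    for i in range(n_detectors):
        lines = img[i::n_detectors].astype(float)   # lines i, i+6, i+12, ...
        m_i, s_i = lines.mean(), lines.std()
        out[i::n_detectors] = (s_d / s_i) * lines + m_d - (s_d / s_i) * m_i
    return out
```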
Removal of missing scan lines

 A detector error which causes line drop out

 The error occurs when a detector either completely fails to function or becomes temporarily saturated during a scan

 Creates a horizontal streak which is the result of a missing line, partial line or defective data along a scan line
 Line drop out is corrected by replacing the bad line with a line of estimated data values based on the lines above and below it

 In the case of a one-line loss, the lost line is replaced by averaging the two neighbouring lines (see the sketch after this list)

 If the noise pixel at column x and line y has value DN_xy, then the value is calculated as
DN_xy = (DN_x,y-1 + DN_x,y+1) / 2
 If two lines are lost, the first line is recovered by repeating the previous line, while the second line is recovered by repeating the subsequent line

If three consecutive lines are lost, the first line is recovered by repeating the previous line and the third lost line is recovered by repeating the subsequent line.

The second line is replaced by the average of the first and third lines.

The lost lines are not recovered in the case of more than three consecutive line losses
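A sketch of the single-line case (the two- and three-line rules follow the same pattern); `bad_line` is an assumed index of a dropped scan line:

```python
import numpy as np

def fill_dropped_line(img, bad_line):
    """Replace a lost scan line with the average of the lines above and below it."""
    out = img.astype(float).copy()
    out[bad_line] = (out[bad_line - 1] + out[bad_line + 1]) / 2.0
    return out
```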
RANDOM NOISE REMOVAL
 Odd pixels that have spurious DN values occur frequently in images, and if they are not systematic they are considered random noise

 This type of noise is characterized by non-systematic variations in grey level from pixel to pixel

 Also called bit errors or shot noise

 Such noise is often referred to as being “spikey” in character, and it causes images to have a “salt and pepper” or “snowy” appearance
 Bit errors or noise values normally change much more abruptly than true image values

 Noise can be identified by comparing each pixel in an image with its neighbours - a noisy pixel shows marked differences in DN from the adjacent pixels

 Noisy pixel values can be replaced by the average of the neighbouring pixel values

 Moving windows of 3×3 or 5×5 pixels are typically used in such procedures.
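A sketch of this window-based cleaning using a 3×3 median; the threshold is an arbitrary illustrative value, and border pixels are left untouched for brevity:

```python
import numpy as np

def despike(img, threshold=50):
    """Replace pixels that differ sharply from their 3x3 neighbourhood median."""
    out = img.astype(float).copy()
    rows, cols = img.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            med = np.median(img[r-1:r+2, c-1:c+2])
            if abs(float(img[r, c]) - med) > threshold:
                out[r, c] = med
    return out
```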
ATMOSPHERIC CORRECTION
 Solar radiation is absorbed or scattered by the atmosphere during transmission to the ground surface

 Reflected or emitted radiation from the target is also absorbed or scattered by the atmosphere before it reaches a sensor

 The sensor receives the direct reflected or emitted radiation from the target, the scattered radiation from the target, and the scattered radiation from the atmosphere

 Scattering is wavelength dependent, so the scattered radiation varies from band to band
ATMOSPHERIC CORRECTION

L_s = ρ T L_tot + L_p

where
L_s = energy recorded by the sensor
L_tot = total incident energy in a specific spectral band
ρ = ratio of reflected to incident energy (reflectance)
T = atmospheric transmittance
L_p = path radiance in a specific spectral band
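Rearranging the relation above gives an estimate of surface reflectance; the numbers below are placeholders, and in practice L_p and T come from an atmospheric model or a dark-object estimate:

```python
# Solving L_s = rho * T * L_tot + L_p for reflectance:
#   rho = (L_s - L_p) / (L_tot * T)
Ls, Lp, Ltot, T = 120.0, 15.0, 800.0, 0.85   # placeholder values
rho = (Ls - Lp) / (Ltot * T)
print(round(rho, 3))                          # 0.154
```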
 Atmospheric effects are not considered errors, as they are part of the signal received by the sensor

 Atmospheric correction is required especially for scene matching and change detection analysis

A number of algorithms and techniques have been developed to correct the atmospheric effects, e.g.
 Using the radiative transfer equation
 Using ground truth data
 Using a special sensor to measure aerosol density and water vapour density together with an imaging sensor for atmospheric correction
GEOMETRIC CORRECTION

 Geometric errors occur due to
- earth curvature
- platform motion
- relief displacement
- nonlinearities in scanning motion
- earth rotation etc.

 Geometric errors contain both systematic and non-systematic errors
Systematic errors

 These errors are constant and can be predicted in advance

Mainly three types of error:

1. Scan skew - caused by the forward motion of the spacecraft during the time of each mirror sweep; the ground swath scanned is not normal to the ground track

2. Known mirror velocity variation - used to correct the mirror distortion due to the velocity of the scan mirror not being constant from start to finish of each scan line
3. Cross-track distortion - occurs in all images acquired by cross-track scanners

 Results from sampling pixels along a scan line at constant time intervals

 The width of a pixel is proportional to the tangent of the scan angle and is therefore wider at either margin of the scan line, which compresses the pixels
Non-systematic errors

 The earth's surface is spherical, but we use a flat map to represent phenomena on the surface
 The coordinates on the spherical surface are transformed to a flat sheet using a map projection
 Remote sensing images are transformed so that they have the required scale and projection properties

Two types of non-systematic corrections:
- image to ground geocorrection (or georeferencing)
- image to image correction (or registration)
- Image to ground correction is the correction of digital images to ground coordinates using GCPs collected from a map or collected from the ground using GPS

- It is the process of projecting image data onto a plane and making it conform to a map projection system

- If GCPs are collected from the ground, it is known as image to ground georeferencing

- If GCPs are collected from an existing map, it is known as image to map georeferencing
- Image to image correction

 Fitting of the coordinate system of one image to that of a second image of the same area

 It involves matching the coordinate systems of two digital images, with one image acting as the reference image and the other as the image to be rectified

 It is the process of making image data conform to another image

 A map coordinate system is not necessarily involved
Coordinate transformation

 Image to ground and image to image corrections involve rearrangement of the input pixels onto a new grid

 Polynomial equations are used to convert the source coordinates to rectified coordinates

 The order of the polynomials is determined by the geometric errors present

 The distribution and number of GCPs influence the accuracy of the geometric correction
 The distribution of control points should be random but almost equally spaced, including the corner areas

 The accuracy of geometric correction is represented by the root mean square (RMS) error, in pixel units
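For illustration, a sketch of a first-order (affine) polynomial fit from GCPs by least squares, with the RMS of the residuals as the accuracy measure; all coordinates are invented:

```python
import numpy as np

# Five GCPs: image (column, row) and corresponding map (easting, northing).
src = np.array([[10, 12], [200, 15], [20, 180], [210, 190], [110, 95]], float)
dst = np.array([[500100, 4200050], [500480, 4200040], [500120, 4199700],
                [500500, 4199680], [500300, 4199870]], float)

A = np.column_stack([src, np.ones(len(src))])   # design matrix [x, y, 1]
coef, *_ = np.linalg.lstsq(A, dst, rcond=None)  # 3x2 affine coefficients
residuals = A @ coef - dst                      # misfit at each GCP (map units)
rms = np.sqrt((residuals ** 2).mean())
print(coef, rms)
```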
RESAMPLING AND INTERPOLATION

 To transform and geometrically correct the raw distorted


image, resampling is used to determine the digital value
to place in the new pixel locations of the corrected
output location

 Calculate new pixel values from the original digital pixel


values of uncorrected image

 This transformation can be done by different sampling


methods to determine the pixel values in the corrected
image
Nearest Neighbour

 Uses the value of the closest input pixel for the output pixel value

 Original data are retained - output values are the original input values

 Produces choppy, stair-stepped effects; the image has a rough appearance related to the original un-rectified data

 Some data values may be lost while other data may be duplicated

(Figure: nearest neighbour interpolation)
Bilinear interpolation

 Estimates the cell value by calculating the weighted average of the four closest input cells based on distance

 The stair-step effect caused by the nearest neighbour approach is reduced and the image looks smooth

 Alters the original data and reduces contrast by averaging neighbouring values together
Cubic Convolution
 Uses the weighted average of the nearest sixteen pixels to the output pixel

 The output is similar to bilinear interpolation, but the smoothing effect caused by the averaging of surrounding pixel values is more dramatic

 The stair-step effect caused by nearest neighbour is reduced and the image looks smooth

 Alters the original data and reduces contrast by averaging neighbouring values together
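For illustration, a minimal sketch of the bilinear rule described above (no bounds checking; cubic convolution extends the same idea to a 4×4 neighbourhood):

```python
import numpy as np

def bilinear(img, x, y):
    """Distance-weighted average of the four input pixels surrounding (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] +
            dx * (1 - dy) * img[y0, x0 + 1] +
            (1 - dx) * dy * img[y0 + 1, x0] +
            dx * dy * img[y0 + 1, x0 + 1])
```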
IMAGE INTERPRETATION
Basic principles

1. An image taken from the air or space is a pictorial presentation of the pattern of the landscape

2. The pattern is composed of indicators of objects and events that relate to the physical, biological and cultural composition of the landscape

3. Similar conditions in similar circumstances and surroundings reflect similar patterns, and unlike conditions reflect unlike patterns

4. The type and amount of information that can be extracted is proportional to the knowledge, skill and experience of the analyst
Elements of photo interpretation
 An image is a pictorial representation of the energy reflected and/or emitted from the scene in different parts of the electromagnetic spectrum

 Objects appear in many shapes, sizes, tones, patterns etc on an image, depending upon the scene, source and sensor characteristics

 Understanding the elements of photo interpretation is important to describe the perception or observation and the significance of the object
 Tone
 Colour
 Texture
 Pattern
 Shape
 Size
 Shadow
 site, situation and association
 Scale
(Example images illustrating tone, colour, texture, shape, size and shadow)
Drainage anomaly
 Drainage Pattern - the arrangement of the streams of an area

 Drainage is controlled by the physical character and attitude of the surface rocks. Hence interpretation of drainage helps to understand LITHOLOGY and STRUCTURE

 The dominant pattern should be considered normal and any deviation an anomaly

 Thus a drainage anomaly is a local deviation from the regional drainage/drainage pattern
 The anomaly may be in planimetry, pattern, depth and width of valley, various floodplain characteristics or the presence of other features

 It may be present with some causal features and structures, or it may occur without such features which could explain its presence

 Drainage anomalies help in interpreting the region more correctly, providing clues to the structural features and topographic conditions
Common anomalies are
 Linear streams - occur due to linear features such as faults, fractures, joints, non-resistant rocks

 Radial drainage - associated with domes

 Abrupt drainage density changes - corresponding to lithological variations

 Stream piracy - due to tectonic or non-tectonic causes

 Linearity of sink holes, springs - indicative of fractures, faults, joints and other linear features

 Presence of an isolated pond, marsh or alluvial fill along the path of a mature stream may indicate damming by subsidence or by uplift directly downstream
Anomalous drainage generally indicates structural effects such as:
Anomalous Trunk Stream
 Incised drainage - contemporaneous structural uplift (Dibru River at Doomdooma)
 Narrowed floodplain - change in lithology due to folding or faulting at the site of the anomaly
 Broadened floodplain - structural influence on the downstream side of the anomaly
 Abrupt change in direction - local uplift, folding or faulting
 Alignments of bends - fold or fault axis
 Abrupt change in meander frequency - local fold or fault zones normal to the drainage course
Anomalous Tributary Streams

 Trellis - anticlinal flanks, fault blocks, steeply dipping beds
 Radial - younger surface = dome; older = horizontal beds
 Annular - doming or upwarping [Dikhou-Jhanji (Rudrasagar) Uplift]
 Parallel pattern - gentle dip slopes
 Alignment of linear drainage - fault/fracture
 Rectangular pattern - fault/joint system
 Local angular pattern - local fault/fracture
 Abnormal convergence and branching - fault or fold
 Ponding - structural obstacle downstream
 Alignment of waterfalls - fault
OTHER ANOMALIES
 Abrupt and localized appearance of meanders - reduction in stream gradient due to active structure

 Compressed meanders - doming

 Abrupt localized braiding - rising structure

 Anomalous ponds, marshes - damming by subsidence or by uplift directly downstream

 Anomalous breadth of levees - subsidence due to downwarping

 Flying levee - subsidence

 Anomalous curves and turns - domal upwarp across the path of the stream
GIS

Geographic Information System


Geographical Information System
• Geography
• Information system
• Geography
 Scientific study of geospatial patterns and processes
 Identifies and accounts for the location and distribution of human and physical phenomena on the earth's surface
• Information system involves a system containing electronic records
 Input of source documents
 Recording on an electronic medium
 Output records along with related documents and any indexes
 Information system - an interactive combination of people, computer hardware and software, communication services and procedures, designed to provide a continuous flow of information to the people who need it to make decisions or perform analysis
GIS is a computer based information system used to digitally represent and analyse geospatial or geographical data. It is both a database system with specific capabilities for spatially referenced data and a set of operations for working with the data.

GIS is a systematic integration of computer hardware, software and spatial data for capturing, storing, displaying, updating, manipulating and analysing, in order to solve complex management problems.

GIS
Defined as an information system that is used to input, store, retrieve, manipulate, analyse and output geospatial data, in order to support decision making for the planning and management of land use, natural resources, environment, transportation, urban facilities and other administrative records
COMPONENTS

HARDWARE
SOFTWARE
PROCEDURE/METHODOLOGY
DATA
USER
HARDWARE
• Computer hardware system on which the GIS runs
• Key component to store, display and print spatial data
• GIS on whole spectrum of computer system
– Personal computer (PC)
– Multiuser Super Computer
• Central Processing Unit (CPU)
• Input devices such as digitizers, scanners, GPS receivers
• Storage devices such as magnetic tapes and disks, CD-ROMs and other optical disks
• Output devices such as printers, plotters
SOFTWARE
• Programmes that run on computers
– To manage the computer
– To perform specific functions
• Modules by which the GIS database is organized, integration operations are done and outputs are processed
• Includes GIS software, database and drawing software
– GIS software provides the functions and tools that are necessary to store, analyse and display geographic information
– ArcGIS, Geomatica, ERDAS etc
• Some can work on desktop computers, PCs
• Some can work in a networked server-based environment
• Some have both capabilities
• Web based GIS – e.g. Google Earth
PROCEDURE/METHODOLOGY

• Defined methods to analyse the data and produce accurate results
• Access protocols, standards and guidelines
• Besides hardware, software and databases, institutional frameworks and policies are also important for a functional GIS
• A successful GIS operates according to a well designed plan and business rules, which are the models and operating practices unique to each organization
DATA
• Geospatial data and Attribute data
• GIS facilitates the integration of spatial and attribute data, and this makes it unique in contrast to other database systems
• Collected from field, existing topographical
maps, satellite images and aerial photos,
available tabular data and from other such
commercial data providers
USER
• Users of GIS technology are responsible for day to day
operations of the GIS
• Includes
– Technical specialist
– Planners
– Administrators
– End users
• GIS has limited value without the people who manage the system and develop plans for applying it.
• GIS does not exist in isolation of users
• Must always be people to plan, implement and operate
the system as well as to make decisions based on the
output
DATA FORMAT
• RASTER DATA

• VECTOR DATA
RASTER DATA
• Data expressed as a matrix or array of grid cells or pixels
• Records spatial information in a regular grid or matrix
organized as a set of rows and columns
• Each cell within this grid contains a number
representing a particular geographic feature
– Soil type
– Elevation
– Landuse
– Slope etc.
• Stores information about geographic features that vary
continuously over a surface such as reflectance,
elevation, ground water depth etc
• Satellite images, aerial photographs, scanned
documents or maps
VECTOR DATA
• Positional data in the form of (x,y) coordinates
• Record spatial information as (x,y) coordinates in
a rectangular coordinate system
• Point feature recorded as single x,y location
• Line features, including polygons, are recorded as an ordered series of x,y coordinates
• GIS is an attempt to model the real world in its
entirety in a computer system so that with little
cost, time and effort any planning, management
and predictive models could be possible

• Geographic features on earth are essentially


represented by five different types of spatial
objects
– Point
– Line
– Polygon
– Surface
– Network

– Point ,Line and Polygon = basic spatial objects


• Point - features having a specific location but without extent in any direction, e.g. village locations, cities, oil wells, water wells, hotels etc
• Line - connects two data points. It is a set of line segments and represents a linear feature, e.g. river, road, streams, pipeline etc
• Also represents non-geographical boundaries, e.g. political boundaries, contour lines, voting districts, school zones etc
• Polygon - an area or region, defined by the lines that make up its boundary. A closed shape defined by a connected sequence of x,y coordinate pairs, where the first and last coordinate pairs are the same and all others are unique.
Data structure
• The form or format of the data stored and
manipulated on the computer is called data
structure. There are several methods for
storage and representation of the spatial
entities in the computer

• Raster data structure


• Vector data structure
RASTER DATA STRUCTURE

• RUN LENGTH ENCODING


• CHAIN ENCODING
• BLOCK ENCODING
• QUADTREE AND BINARY TREE ENCODING
RUN LENGTH ENCODING (RLE)

• Allows the cells in each mapping unit to be stored per row, for each class, in terms of beginning cell, end cell and attribute

(Figure: an 8 × 8 raster = 64 cells. The shaded portion of the raster can be stored in RLE by storing the beginning cell number and end cell number per row, from left to right)

1,1 5,8 (row 5)
1,8 (row 6)
1,8 (row 7)
2,2 5,8 (row 8)

26 cells of the shaded region have been completely coded by 12 values


• Several methods of conversion exist for RLE
• RLE can also store the beginning cell number and its run length per row, from left to right:

1,1 5,4 (row 5)
1,8 (row 6)
1,8 (row 7)
2,1 5,4 (row 8)

26 cells of the shaded region have been completely coded by 12 values

Useful for bitonal or binary raster data

(Figure: example of a bitonal (0/1) raster)
 In the case of multiple values, data can be encoded as a pair of numbers: first the run length and then the cell value

0 0 0 0 0 0 0 0
0 0 0 1 1 1 1 1
0 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
2 1 1 1 2 2 2 2
2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2
0 2 0 0 2 2 2 2

8,0 (row 1)
3,0 5,1 (row 2)
1,0 7,1 (row 3)
8,1 (row 4)
1,2 3,1 4,2 (row 5)
8,2 (row 6)
8,2 (row 7)
1,0 1,2 2,0 4,2 (row 8)

The 64 cell raster can be coded with 30 values
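A sketch of run-length encoding one raster row as (run length, value) pairs, reproducing the row 2 code above:

```python
def rle_row(row):
    """Encode a list of cell values as (run_length, value) pairs."""
    runs, count = [], 1
    for prev, cur in zip(row, row[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((count, prev))
            count = 1
    runs.append((count, row[-1]))
    return runs

print(rle_row([0, 0, 0, 1, 1, 1, 1, 1]))   # [(3, 0), (5, 1)], i.e. "3,0 5,1" (row 2)
```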


CHAIN ENCODING
• Represents the raster boundary of a region by giving a starting point and the cardinal directions (east, north, west, south)
• Chain codes can be stored using integer data types and therefore provide a very compact way of storing

0 0 0 0 0 0 0 0
0 0 0 1 1 1 1 1
0 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
2 1 1 1 2 2 2 2
2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2
0 2 0 0 2 2 2 2

• The raster can be encoded starting from row = 5 and column = 1 in the clockwise direction (considering east = 0, north = 1, west = 2, south = 3)

Origin Value Sequence
5,1 2 23,50,21,40,43,42,21,42,23,21,22,31
BLOCK ENCODING
• Block encoding is a generalization of RLE to two dimensions
• Square blocks are counted instead of sequences
• Codes consist of the origin coordinate, radius and value of the cell
• The origin can be the centre cell or the bottom left cell

0 0 0 0 0 0 0 0
0 0 0 1 1 1 1 1
0 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
2 1 1 1 2 2 2 2
2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2
0 2 0 0 2 2 2 2

Block encoding:
Block origin  Radius  Value
5,1           1       2
7,1           2       2
7,3           2       2
8,2           1       2
8,5           4       2
Quadtree and Binary tree Encoding
• Divides a geographic area into square cells using the principle of recursive subdivision of a non-homogeneous square array of cells into four equal sized quadrants
• Quartering is continued to a suitable level until a square is found to be homogeneous

(Figure: quadtree subdivision with numbered quadrants)
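A sketch of the recursive quartering just described, assuming a square raster whose side is a power of two; leaves are emitted as (row, column, size, value) tuples:

```python
import numpy as np

def quadtree(a, r=0, c=0, size=None):
    """Recursively quarter any non-homogeneous square block of raster `a`."""
    if size is None:
        size = a.shape[0]                     # start with the whole raster
    block = a[r:r+size, c:c+size]
    if (block == block[0, 0]).all():          # homogeneous block -> leaf node
        return (r, c, size, int(block[0, 0]))
    half = size // 2                          # otherwise quarter and recurse
    return [quadtree(a, r,      c,      half),
            quadtree(a, r,      c+half, half),
            quadtree(a, r+half, c,      half),
            quadtree(a, r+half, c+half, half)]
```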
