
CSE 542 GIS and Remote Sensing - Dr. Khaemba W. A
CSE 542: GIS AND REMOTE SENSING

REMOTE SENSING:
Photographic sensors and platforms: The cameras and basic photographic sequence, black and white,
colour and infra-red films, spectral sensitivity of films, resolution.
Non-photographic optical sensors and Platforms: Sensor measurements, design considerations, Choice
of sensor and data processing. Fundamental analogue and digital image analysis.
Applications: Soil, land use, water resources, planning, terrain evolution.

GEOGRAPHIC INFORMATION SYSTEMS:


Data types: Geometry, Topology, Attribute data, vector data format, raster data format.
Data acquisition and sources of digital data: Primary Sources (Land Surveying, GNSS,
Photogrammetry, Remote Sensing), Secondary Data Sources (Scanning, Digitizing), Digital data import
and format conversion
Data Analysis: Queries, Spatial operations, Topological operations, Thematic operations.
Data quality: Metadata, Errors. GIS applications.
Laboratory work: Practical exercises with GIS and Remote Sensing software

REFERENCES

1. Lillesand, T.M. and R.W. Kiefer (2000). Remote Sensing and Image Interpretation.
2. Mitchell, A. (1999). The ESRI Guide to GIS Analysis, Vol. 1: Geographic Patterns and Relationships.
3. Longley, P.A., M.F. Goodchild, D.J. Maguire and D.W. Rhind (2005). Geographic Information Systems and Science. John Wiley & Sons Ltd.
4. Chang, K. (2005, 2008, 2010, 2012). Introduction to Geographic Information Systems, 2nd, 4th, 5th and 6th Ed.
5. Clarke, K.C. (2001). Getting Started with Geographic Information Systems, 3rd Ed.
6. Lo, C.P. and A.K.W. Yeung. Concepts and Techniques of Geographic Information Systems.
7. Bossler, J.D. (2002). Manual of Geospatial Science and Technology.

1 INTRODUCTION
Remote sensing is the science (and to some extent, art) of acquiring information about the Earth's surface without actually being in contact with it. This is done by sensing and recording reflected or emitted energy and processing, analyzing, and applying that information.

In much of remote sensing, the process involves an interaction between incident radiation and the targets of
interest. This is exemplified by the use of imaging systems where the following seven elements are
involved. Note, however that remote sensing also involves the sensing of emitted energy and the use of
non-imaging sensors.

Remote Sensing process

1. Energy Source or Illumination (A) - the first requirement for remote sensing is to have an energy
source which illuminates or provides electromagnetic energy to the target of interest.
2. Radiation and the Atmosphere (B) - as the energy travels from its source to the target, it will come in
contact with and interact with the atmosphere it passes through. This interaction may take place a
second time as the energy travels from the target to the sensor.
3. Interaction with the Target (C) - once the energy makes its way to the target through the atmosphere,
it interacts with the target depending on the properties of both the target and the radiation.
4. Recording of Energy by the Sensor (D) - after the energy has been scattered by, or emitted from the
target, we require a sensor (remote - not in contact with the target) to collect and record the
electromagnetic radiation.

5. Transmission, Reception, and Processing (E) - the energy recorded by the sensor has to be
transmitted, often in electronic form, to a receiving and processing station where the data are processed
into an image (hardcopy and/or digital).
6. Interpretation and Analysis (F) - the processed image is interpreted, visually and/or digitally or
electronically, to extract information about the target which was illuminated.
7. Application (G) - the final element of the remote sensing process is achieved when we apply the
information we have been able to extract from the imagery about the target in order to better understand
it, reveal some new information, or assist in solving a particular problem.
These seven elements comprise the remote sensing process from beginning to end.

As the energy travels from a source (the sun) to the target (the Earth’s surface) it interacts with the Earth’s
atmosphere. The total amount of radiation that strikes an object (the incident radiation) is equal to:

    incident radiation = reflected radiation + absorbed radiation + transmitted radiation

where the reflected radiation is the energy reflected off the object, the absorbed radiation is the energy absorbed by the object, and the transmitted radiation is the energy transmitted through the object.
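The energy balance above can be sketched numerically. This is a minimal illustration, not a radiometric model: the incident flux and the three fractions (reflectance, absorptance, transmittance) are made-up example values, chosen only so they sum to 1 as they must for any real surface.

```python
# Energy balance at a target: incident = reflected + absorbed + transmitted.
# The fractions below are illustrative, not measured values.

def partition_radiation(incident, reflectance, absorptance, transmittance):
    """Split incident radiation (e.g. W/m^2) into its three components."""
    assert abs(reflectance + absorptance + transmittance - 1.0) < 1e-9, \
        "fractions must sum to 1"
    return (incident * reflectance,
            incident * absorptance,
            incident * transmittance)

# Example: 1000 W/m^2 striking a leaf-like surface (hypothetical fractions).
reflected, absorbed, transmitted = partition_radiation(1000.0, 0.45, 0.40, 0.15)
```

Whatever the fractions, the three components always add back up to the incident amount, which is the point of the balance equation.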

The Electromagnetic Spectrum


The electromagnetic spectrum ranges from the shorter wavelengths (including gamma and x-rays) to the
longer wavelengths (including microwaves and broadcast radio waves). There are several regions of the
electromagnetic spectrum which are useful for remote sensing.

2 PHOTOGRAPHIC SENSORS AND PLATFORMS
2.1 Cameras and Aerial Photography
Cameras and their use for aerial photography are the simplest and oldest of sensors used for remote sensing of the Earth's surface. Cameras are framing systems which acquire a near-instantaneous "snapshot" of an area (A) of the surface. Camera systems are passive optical sensors that use a lens (B) (or system of lenses, collectively referred to as the optics) to form an image at the focal plane (C), the plane at which an image is sharply defined.

Photographic films are sensitive to light from 0.3 µm to 0.9 µm in wavelength covering the ultraviolet
(UV), visible, and near-infrared (NIR). Panchromatic films are sensitive to the UV and the visible portions
of the spectrum. Panchromatic film produces black and white images and is the most common type of film
used for aerial photography. UV photography also uses panchromatic film, but a filter is used with the
camera to absorb and block the visible energy from reaching the film. As a result, only the UV reflectance
from targets is recorded. UV photography is not widely used, because of the atmospheric scattering and
absorption that occurs in this region of the spectrum. Black and white infrared photography uses film
sensitive to the entire 0.3 to 0.9 µm wavelength range and is useful for detecting differences in vegetation
cover, due to its sensitivity to IR reflectance.
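The film sensitivity ranges just described can be captured in a small lookup, useful when deciding whether a given wavelength would register on a given film. The ranges are the nominal figures from the text, not precise film specifications.

```python
# Nominal spectral sensitivity ranges (micrometres) from the text above.
FILM_SENSITIVITY = {
    "panchromatic": (0.3, 0.7),        # UV + visible
    "black_and_white_ir": (0.3, 0.9),  # UV + visible + near-IR
}

def film_records(film, wavelength_um):
    """True if the film type is nominally sensitive at this wavelength."""
    lo, hi = FILM_SENSITIVITY[film]
    return lo <= wavelength_um <= hi

# Near-IR energy at 0.8 um is recorded by B&W IR film but not panchromatic,
# which is why IR film reveals vegetation differences that panchromatic misses.
```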

Colour and false colour (or colour infrared, CIR) photography involves the use of a three layer film with
each layer sensitive to different ranges of light. For a normal colour photograph, the layers are sensitive to
blue, green, and red light - the same as our eyes. These photos appear to us the same way that our eyes see
the environment, as the colours resemble those which would appear to us as "normal" (i.e. trees appear
green, etc.). In colour infrared (CIR) photography, the three emulsion layers are sensitive to green, red, and
the photographic portion of near-infrared radiation, which are processed to appear as blue, green, and red,
respectively. In a false colour photograph, targets with high near-infrared reflectance appear red, those with
a high red reflectance appear green, and those with a high green reflectance appear blue, thus giving us a
"false" presentation of the targets relative to the colour we normally perceive them to be.
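The CIR band-to-colour assignment above is easy to get backwards, so here it is as a one-line mapping: the green-, red- and near-IR-sensitive layers are displayed as blue, green and red respectively. The reflectance values are hypothetical, chosen to mimic healthy vegetation.

```python
# Sketch of the colour-infrared display convention described above.

def cir_display(green, red, nir):
    """Map (green, red, NIR) scene reflectance to an (R, G, B) display tuple."""
    return (nir, red, green)  # NIR -> red, red -> green, green -> blue

# Healthy vegetation: strong NIR, moderate green, low red -> appears red.
r, g, b = cir_display(green=0.15, red=0.05, nir=0.60)
```

Because the red display channel carries the NIR signal, anything with high near-IR reflectance (vegetation) dominates in red, exactly as the text describes.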

Most aerial photographs are classified as either oblique or vertical, depending on the orientation of the camera relative to the ground during acquisition. Oblique aerial photographs are taken with the camera pointed to the side of the aircraft. High oblique photographs usually include the horizon, while low oblique photographs do not. Oblique photographs can be useful for covering very large areas in a single image and for depicting terrain relief and scale. However, they are not widely used for mapping, as distortions in scale from the foreground to the background preclude easy measurements of distance, area, and elevation.

Vertical photographs taken with a single-lens frame camera are the most common form of aerial photography for remote sensing and mapping purposes. These cameras are specifically built for capturing a
rapid sequence of photographs while limiting geometric distortion. They are often linked with navigation
systems onboard the aircraft platform, to allow for accurate geographic coordinates to be instantly assigned
to each photograph. Most camera systems also include mechanisms which compensate for the effect of the
aircraft motion relative to the ground, in order to limit distortion as much as possible. When obtaining
vertical aerial photographs, the aircraft normally flies in a series of lines, each called a flight line. Photos
are taken in rapid succession looking straight down at the ground, often with a 50-60 percent overlap (A)
between successive photos. The overlap ensures total coverage along a flight line and also facilitates
stereoscopic viewing. Successive photo pairs display the overlap region from different perspectives and
can be viewed through a device called a stereoscope to see a three-dimensional view of the area, called a
stereo model. Many applications of aerial photography use stereoscopic coverage and stereo viewing.
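The forward-overlap figures above translate directly into flight planning: the distance between successive exposures (the air base) is the photo's ground coverage times (1 - overlap). The line length and coverage below are hypothetical example values.

```python
import math

# Sketch of flight-line exposure planning from the 50-60% overlap figures above.

def photos_per_line(line_length_m, photo_ground_m, overlap_fraction):
    """Photos needed along one flight line with the given forward overlap."""
    air_base = photo_ground_m * (1.0 - overlap_fraction)  # spacing between exposures
    return math.ceil(line_length_m / air_base) + 1        # +1 closes the line

# A 10 km line, 2 km of ground coverage per photo, 60% forward overlap:
n = photos_per_line(10_000, 2_000, 0.60)
```

Raising the overlap from 0% to 60% more than doubles the number of exposures, which is the cost of guaranteed stereoscopic coverage.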


Aerial photographs are most useful when fine spatial detail is more critical than spectral information, as
their spectral resolution is generally coarse when compared to data captured with electronic sensing
devices. The geometry of vertical photographs is well understood and it is possible to make very accurate
measurements from them, for a variety of different applications (geology, forestry, mapping, etc.). The
science of making measurements from photographs is called photogrammetry and has been performed
extensively since the very beginnings of aerial photography. Photos are most often interpreted manually by
a human analyst (often viewed stereoscopically). They can also be scanned to create a digital image and
then analyzed in a digital computer environment.

Multiband photography uses multi-lens systems with different film-filter combinations to acquire photos
simultaneously in a number of different spectral ranges. The advantage of these types of cameras is their
ability to record reflected energy separately in discrete wavelength ranges, thus providing potentially better
separation and identification of various features. However, simultaneous analysis of these multiple
photographs can be problematic. Digital cameras, which record electromagnetic radiation electronically,
differ significantly from their counterparts which use film. Instead of using film, digital cameras use a
gridded array of silicon coated CCDs (charge-coupled devices) that individually respond to electromagnetic
radiation. Energy reaching the surface of the CCDs causes the generation of an electronic charge which is
proportional in magnitude to the "brightness" of the ground area. A digital number for each spectral band is
assigned to each pixel based on the magnitude of the electronic charge. The digital format of the output
image is amenable to digital analysis and archiving in a computer environment, as well as output as a
hardcopy product similar to regular photos. Digital cameras also provide quicker turn-around for
acquisition and retrieval of data and allow greater control of the spectral resolution. Although parameters

vary, digital imaging systems are capable of collecting data with a spatial resolution of 0.3 m and a spectral resolution of 0.012 µm to 0.3 µm. The size of the pixel array varies between systems, but typically ranges from 512 x 512 to 2048 x 2048.
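The charge-to-digital-number step above can be sketched as a simple quantization: the (clipped) charge is scaled into the range of an 8-bit integer. The charge values and the full-scale level here are hypothetical; a real sensor's transfer function also involves gain and offset calibration.

```python
# Sketch of CCD charge -> digital number (DN) conversion described above.

def charge_to_dn(charge, full_scale, bits=8):
    """Quantize a charge reading into a DN of the given bit depth (clipped)."""
    levels = 2 ** bits - 1
    fraction = min(max(charge / full_scale, 0.0), 1.0)  # clip to sensor range
    return round(fraction * levels)

# A pixel at half the sensor's full-scale charge, with 8-bit output:
dn = charge_to_dn(0.5, 1.0)
```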

2.2 Platforms
Cameras can be used on a variety of platforms including ground-based stages, helicopters, aircraft, and
spacecraft. Very detailed photographs taken from aircraft are useful for many applications where
identification of detail or small targets is required. The ground coverage of a photo depends on several
factors, including the focal length of the lens, the platform altitude, and the format and size of the film. The
focal length effectively controls the angular field of view of the lens and determines the area "seen" by the
camera. Typical focal lengths used are 90mm, 210mm, and most commonly, 152mm. The longer the focal
length, the smaller the area covered on the ground, but with greater detail (i.e. larger scale). The area
covered also depends on the altitude of the platform. At high altitudes, a camera will "see" a larger area on
the ground than at lower altitudes, but with reduced detail (i.e. smaller scale). Aerial photos can provide
fine detail down to spatial resolutions of less than 50 cm. A photo's exact spatial resolution varies as a
complex function of many factors which vary with each acquisition of data.
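The focal-length and altitude relationships above reduce to two small formulas: the scale denominator is flying height divided by focal length, and ground coverage is the film side scaled up by that same factor. The 152 mm lens is the common focal length named in the text; the 230 mm film format and the flying height are assumed example values.

```python
# Sketch of photo scale and ground coverage from focal length and altitude.

def photo_scale_denominator(focal_length_mm, flying_height_m):
    """Scale 1:N where N = flying height / focal length (same units)."""
    return flying_height_m / (focal_length_mm / 1000.0)

def ground_coverage_m(film_side_mm, focal_length_mm, flying_height_m):
    """Ground side length covered by one photo frame."""
    return (film_side_mm / 1000.0) * flying_height_m / (focal_length_mm / 1000.0)

# 152 mm lens flown at 1520 m above ground gives scale 1:10,000,
# so a 230 mm square frame covers 2300 m on a side.
n = photo_scale_denominator(152, 1520)
side = ground_coverage_m(230, 152, 1520)
```

Doubling the flying height doubles N (smaller scale) and doubles the ground side, which is the altitude trade-off the paragraph describes.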

2.3 Photographic sequence


Many photographic procedures, particularly black and white techniques, employ a two-phase negative-to-positive sequence. In this process, the "negative" and "positive" materials are typically film and paper prints. Each of these materials consists of a light-sensitive photographic emulsion coated onto a base. The generalized cross sections of black and white film and print paper are shown in the figures below. In both cases, the emulsion consists of a thin layer of light-sensitive silver halide crystals, or grains, held in place by a solidified gelatin. Paper is the base material for paper prints; various plastics are used for film bases.
When exposed to light, the silver halide crystals within an emulsion undergo a photochemical reaction, forming an invisible latent image. Upon treatment with suitable agents in the development process, these exposed silver salts are reduced to silver grains that appear black, forming a visible image.
The negative-to-positive sequence of black and white photography is depicted in the figure below. In figure
2.3a the letter F is shown to represent a scene that is imaged through a lens system and recorded as a latent
image on the film. When processed, the film crystals exposed to light are reduced to silver. Those areas on
the negative that were not exposed are clear after processing because crystals in these areas are dissolved as
part of the development process. Those areas of the film that were exposed become various shades of gray,
depending on the amount of exposure. Hence a negative image of reversed tonal rendition is produced. In

figure 2.3b, the negative is illuminated and reprojected through an enlarger lens so that it is focused on print paper, again forming a latent image. When printed, the paper print produces dark areas where light was transmitted through the negative and light areas where the illuminating light was decreased by the negative.
The final result is a realistic rendering of the original scene whose size is determined by the enlarger setup.

Most aerial photographic paper prints are produced using the negative-to-positive sequence and a contact
printing procedure (fig 2.3c). Here, the film is exposed and processed as usual, resulting in a negative of
reversed scene geometry and brightness. The negative is then placed in emulsion-to-emulsion contact with
print paper. Light is passed through the negative, thereby exposing the print paper. When processed, the
image on the print is a positive representation of the original ground scene at the size of the negative.
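The two-stage tonal logic above amounts to applying the same inversion twice. Treating brightness as an 8-bit value, a minimal sketch (with made-up scene tones) looks like this:

```python
# Sketch of the negative-to-positive sequence: each photographic stage
# reverses tones, so two stages restore the original rendition.

def invert(tones):
    """One stage: bright scene areas become dark on the result, and vice versa."""
    return [255 - t for t in tones]

scene = [0, 64, 200, 255]      # dark ... bright
negative = invert(scene)       # reversed tonal rendition on the film
positive = invert(negative)    # contact print: second inversion restores the scene
```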
Positive images need not be printed on print paper. For example, transparent positives are often made on

plastic-based or glass-based emulsions. These types of images are referred to as diapositives or
transparencies.

2.4 Spectral Sensitivity of black and White films


Black and white aerial photographs are normally made with either panchromatic film or infrared-sensitive film. The generalized spectral sensitivities for each of these film types are shown in the figure below. Panchromatic film has long been the standard film type for aerial photography. As can be seen from the figure, the spectral sensitivity of panchromatic film extends over the UV and visible portions of the spectrum, while that of infrared-sensitive film extends not only over the UV and visible energy but also into the near-IR.

Black and white IR photography has often been used to distinguish between deciduous and coniferous trees.
It is of interest to note what determines the boundaries of the spectral sensitivity of black and white film
materials. One can photograph over a range of about 0.3μm to 0.9μm. The 0.9μm limit stems from the
photochemical instability of emulsion materials that are sensitive beyond this wavelength. (Certain films for
scientific experimentation are sensitive out to about 1.2 μm and form the only exception to this rule. These
films are not commonly available and typically require long exposure times, making them unsuitable for
aerial photography.) Figure below shows a comparison between panchromatic and black and white IR aerial
photographs.


The image tones shown in this figure are very typical. Healthy green vegetation reflects much more
sunlight in the near IR than in the visible part of the spectrum; therefore, it appears lighter in tone on black
and white infrared photographs than on panchromatic photographs. Note that the trees are much lighter
toned in (b) than in (a). Note also that the limits of the stream water and the presence of water and wet soils
in the fields can be seen more distinctly in the black and white IR photographs than in panchromatic

photographs because sunlight reflection from water and wet soils in the near IR is considerably less than in
the visible part of the electromagnetic spectrum.

To date, the applications of aerial UV photography have been limited in number, due primarily to strong atmospheric scattering of UV energy. A notable exception is the use of UV photography in monitoring oil films on water. Minute traces of floating oil, often invisible on other types of photography, can be detected in UV photography.

2.5 Spectral Sensitivity of Color Film


The basic cross-sectional structure and spectral sensitivity of colour film are shown in the figure below. As shown in (a), the top film layer is sensitive to blue light, the second layer to green and blue light, and the third to red and blue light. Because the bottom two layers have blue sensitivity as well as the desired green and red sensitivities, a blue-absorbing filter layer is introduced between the first and second photosensitive layers. This filter layer blocks the passage of blue light beyond the blue-sensitive layer, effectively resulting in selective sensitization of each of the film layers to the blue, green, and red primary colours. The yellow (blue-absorbing) filter has no permanent effect on the appearance of the film because it is dissolved during processing.


From the standpoint of spectral sensitivity, the three layers of colour film can be thought of as three black
and white silver halide emulsions (fig. b). Again, the colours physically present in each of these layers after
the film is processed are not blue, green, and red. Rather, after processing, the blue-sensitive layer contains yellow dye, the green-sensitive layer contains magenta dye, and the red-sensitive layer contains cyan dye.
The amount of dye introduced in each layer is inversely related to the intensity of the corresponding
primary light present in the scene photographed. When viewed in composite, the dye layers produce the
visual sensation of the original scene.
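The subtractive dye logic above can be sketched with normalized 0-1 intensities: each layer's dye amount is inversely related to its primary's intensity, and viewing the composite subtracts each dye's complementary primary, reproducing the scene. This is a deliberately idealized linear model; real dye response curves are nonlinear, and the values are purely illustrative.

```python
# Idealized sketch of colour film's subtractive dye reproduction.

def develop_dyes(blue, green, red):
    """Dye amounts (yellow, magenta, cyan), inverse to each primary's intensity."""
    return (1.0 - blue, 1.0 - green, 1.0 - red)

def view_composite(yellow, magenta, cyan):
    """Each dye absorbs its complementary primary; what passes is the scene."""
    return (1.0 - yellow, 1.0 - magenta, 1.0 - cyan)

dyes = develop_dyes(blue=0.2, green=0.7, red=0.5)   # hypothetical scene
reproduced = view_composite(*dyes)                   # matches the scene primaries
```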

2.6 Color Infrared Film

In contrast to "normal" color film, color IR film is manufactured to record green, red, and the photographic portion (0.7 to 0.9 µm) of the near-IR scene energy in its three emulsion layers. The dyes developed in each of these layers are again yellow, magenta, and cyan. The result is a "false color" film in which blue images result from objects reflecting primarily green energy, green images result from objects reflecting primarily red energy, and red images result from objects reflecting primarily in the near-IR portion of the spectrum. The basic structure and spectral sensitivity of color IR film are shown in the figure below. (Note that there are some overlaps in the sensitivities of the layers.)

Various combinations of the primary colors and complementary colors, as well as black and white, can also be reproduced on the film, depending on scene reflectance. For example, an object with a high reflectance

in both green and near IR would produce a magenta image (blue plus red). It should be noted that most
color IR films are designed to be used with a yellow (blue absorbing) filter over the camera lens.
Color IR film was developed during World War II to detect painted targets that were camouflaged to look like vegetation. Because healthy vegetation reflects IR energy much more strongly than it does green energy, it generally appears in various tones of red on color IR film. However, objects painted green generally have low IR reflectance. Thus they appear blue on the film and can be readily discriminated from healthy green vegetation. Because of this genesis, color IR film has often been referred to as "camouflage detection film". With its vivid color portrayal of near-IR energy, color IR has become an extremely useful film for resource analysis.

2.7 Film Resolution


Spatial resolution is an expression of the optical quality of an image produced by a particular camera system. Resolution is influenced by a host of parameters, such as the resolving power of the film and camera lens used to obtain an image, any uncompensated image motion during exposure, the conditions of film processing, and so on. Some of these elements are quantifiable. For example, we can measure the resolving power of a film by photographing a standard test chart. Such a chart is shown in the figure below.

It consists of three parallel lines separated by spaces equal to the width of the lines. Successive groups
systematically decrease in size within the chart. The resolving power of the film is the reciprocal of the
center-to-center distance (in millimeters) of the lines that are just distinguishable in the test chart image
when viewed under a microscope. Hence film resolving power is expressed in units of lines per millimeter.
Film resolving power is sometimes referred to in units of "line pairs" per millimeter. In this case, the term line pair refers to a line and a space of equal width, as shown in the figure. The terms lines per millimeter and line pairs per millimeter refer to the same line spacing and can be used interchangeably.

Film resolving power is specified at a particular contrast ratio between the lines and their background. This is done because resolution is very strongly influenced by contrast. Typical aerial film resolutions are shown in the table below. Note the significant difference in film resolution between the 1000:1 and 1.6:1 contrast ratios.

The effects of scale and resolution can be combined to express image quality in terms of a ground
resolution distance (GRD). This distance extrapolates the dynamic system resolution to a ground distance.
We can express this as

    GRD = (reciprocal of image scale) / (system resolution)

For example, a photograph at a scale of 1:50,000 taken with a system having a dynamic resolution of 40 lines/mm would have a ground resolution distance of

    GRD = 50,000 / 40 = 1250 mm = 1.25 m
This result assumes that we are dealing with an original film at the scale at which it was exposed.
Enlargements would show some loss of image definition in the printing and enlarging process. In summary,
the ground resolution distance provides a framework for comparing the expected capabilities of various
images to record spatial detail. However this and any other measure of spatial resolution must be used with
caution because many unpredictable variables enter into what can and can not be detected, recognized, or
identified on an aerial photograph.
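The GRD computation is simple enough to wrap in a helper, following the worked 1:50,000 at 40 lines/mm example in the text:

```python
# Ground resolution distance: (reciprocal of image scale) / (system resolution).

def ground_resolution_distance_m(scale_denominator, resolution_lines_per_mm):
    """GRD in metres for a given photo scale 1:N and system resolution."""
    grd_mm = scale_denominator / resolution_lines_per_mm
    return grd_mm / 1000.0

grd = ground_resolution_distance_m(50_000, 40)   # the 1.25 m example above
```

As the caveat above notes, this is a comparative yardstick, not a guarantee of what can actually be detected on a given photograph.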

3 NON-PHOTOGRAPHIC OPTICAL SENSORS AND PLATFORMS


3.1 Satellite Characteristics: Orbits and Swaths
Although ground-based and aircraft platforms may be used, satellites provide a great deal of the remote
sensing imagery commonly used today. Satellites have several unique characteristics which make them
particularly useful for remote sensing of the Earth's surface.

The path followed by a satellite is referred to as its orbit. Satellite orbits are matched to the capability and
objective of the sensor(s) they carry. Orbit selection can vary in terms of altitude (their height above the
Earth's surface) and their orientation and rotation relative to the Earth.

Satellites at very high altitudes, which view the same portion of the Earth's surface at all times, have geostationary orbits. These geostationary satellites, at altitudes of approximately 36,000 kilometers, revolve at speeds which match the rotation of the Earth, so they seem stationary relative to the Earth's surface. This allows the satellites to observe and collect information continuously over specific areas. Weather and communications satellites commonly have these types of orbits. Due to their high altitude, some geostationary weather satellites can monitor weather and cloud patterns covering an entire hemisphere of the Earth.
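The altitude figures above can be sanity-checked with Kepler's third law, T = 2π √(a³/µ). Earth's radius and gravitational parameter are standard constants; the 700 km altitude is an assumed, typical near-polar example and not a value from the text.

```python
import math

# Circular-orbit period from altitude via Kepler's third law.
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0       # Earth's mean radius, m

def orbital_period_hours(altitude_km):
    a = R_EARTH + altitude_km * 1000.0   # semi-major axis of a circular orbit
    return 2.0 * math.pi * math.sqrt(a ** 3 / MU_EARTH) / 3600.0

geo = orbital_period_hours(36_000)   # close to 24 h: appears stationary
leo = orbital_period_hours(700)      # roughly 1.6 h: many orbits per day
```

The ~24-hour period at 36,000 km is exactly why a geostationary satellite keeps pace with the Earth's rotation, while a low near-polar orbiter sweeps out about 14-15 orbits per day.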

Many remote sensing platforms are designed to follow an orbit (basically north-south) which, in conjunction with the Earth's rotation (west-east), allows them to cover most of the Earth's surface over a certain period of time. These are near-polar orbits, so named for the inclination of the orbit relative to a line running between the North and South poles. Many of these satellite orbits are also sun-synchronous, such that they cover each area of the world at a constant local time of day, called local sun time. At any given latitude, the position of the sun in the sky as the satellite passes overhead will be the same within the same season. This ensures consistent illumination conditions when acquiring images in a specific season over successive years, or over a particular area over a series of days. This is an important factor for monitoring changes between images or for mosaicking adjacent images together, as they do not have to be corrected for different illumination conditions.

Most of the remote sensing satellite platforms today are in near-polar orbits, which means that the satellite travels northwards on one side of the Earth and then toward the southern pole on the second half of its orbit. These are called ascending and descending passes, respectively. If the orbit is also sun-synchronous, the ascending pass is most likely on the shadowed side of the Earth while the descending pass is on the sunlit side. Sensors recording reflected solar energy only image the surface on a descending pass, when solar illumination is available. Active sensors which provide their own illumination, or passive sensors that record emitted (e.g. thermal) radiation, can also image the surface on ascending passes.


As a satellite revolves around the Earth, the sensor "sees" a certain portion of the Earth's surface. The area imaged on the surface is referred to as the swath. Imaging swaths for space-borne sensors generally vary between tens and hundreds of kilometers wide. As the satellite orbits the Earth from pole to pole, its east-west position wouldn't change if the Earth didn't rotate. However, as seen from the Earth, it seems that the satellite is shifting westward because the Earth is rotating (from west to east) beneath it.

This apparent movement allows the satellite swath to cover a new area with each consecutive pass. The satellite's orbit and the rotation of the Earth work together to allow complete coverage of the Earth's surface after it has completed one complete cycle of orbits.

If we start with any randomly selected pass in a satellite's orbit, an orbit cycle will be completed when the satellite retraces its path, passing over the same point on the Earth's surface directly below the satellite (called the nadir point) for a second time. The exact length of time of the orbital cycle will vary with each satellite. The interval of time required for the satellite to complete its orbit cycle is not the same as the "revisit period". Using steerable sensors, a satellite-borne instrument can view an area (off-nadir) before and after the orbit passes over a target, thus making the revisit time less than the orbit cycle time.

The revisit period is an important consideration for a number of monitoring applications, especially when
frequent imaging is required (for example, to monitor the spread of an oil spill, or the extent of flooding). In
near-polar orbits, areas at high latitudes will be imaged more frequently than the equatorial zone due to the
increasing overlap in adjacent swaths as the orbit paths come closer together near the poles.
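The latitude effect above follows from simple geometry: the east-west spacing between adjacent ground tracks shrinks roughly as the cosine of latitude, while the swath width stays the same. The swath width and equatorial track spacing below are hypothetical example values, not figures for any particular satellite.

```python
import math

# Sketch of adjacent-swath overlap growing with latitude.

def swath_overlap_fraction(swath_km, equator_spacing_km, latitude_deg):
    """Fraction of the swath shared with the adjacent pass at this latitude."""
    spacing = equator_spacing_km * math.cos(math.radians(latitude_deg))
    return max(0.0, 1.0 - spacing / swath_km)

equator = swath_overlap_fraction(185, 172, 0)    # small overlap at the equator
polar = swath_overlap_fraction(185, 172, 60)     # much larger overlap at 60 deg
```

Near the poles the spacing approaches zero and adjacent swaths almost coincide, which is why high-latitude areas are imaged far more frequently than the equatorial zone.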


3.1.1 Comparison of Platforms

Advantages of an aircraft platform


There are many airborne remote sensing systems available to the image analyst. Aircraft platforms offer the possibility of low flying altitudes, high resolution imagery (down to 10 x 10 cm pixels), frequent revisits, and variable flight paths. Overall collection parameters for aircraft are more flexible than for satellite platforms, with the aircraft platform being more amenable to changing mission plans and relocation. With many of the aircraft systems, there is the potential for simultaneous collection of differential GPS (DGPS) and inertial navigation information along with the imagery, thus providing the possibility of accurate georeferencing.

Disadvantages of an aircraft platform


The most obvious disadvantage of selecting an aircraft platform is that the lower flying altitude will limit the area of coverage. Unlike a space-borne sensor system, which can have areal coverage on the order of 100s to 1,000s of square kilometres, an aircraft system must make multiple flights to cover an area similar to a satellite's.
Mosaicking of multiple photographs or flight lines thus becomes an important factor when selecting an aircraft platform. Though imaging systems on board aircraft are improving, the stability of these systems in terms of geometric distortions is always a concern. The analyst must always take into consideration the effects of aircraft roll, pitch, and yaw on the resulting remote sensing data collection.

Advantages of a satellite platform:


Satellites orbit in repetitive geo-synchronous or sun-synchronous orbits. The orbits are relatively stable and result in remote sensor data that does not have as much geometric distortion. A satellite's altitude, orbit, and path are generally fixed with each mission, and thus provide data sets that have similar spatial resolutions, swaths (or field-of-view for geo-synchronous satellites), and illumination. Many new satellite-borne sensor systems have the capability to acquire images of the Earth not just below the satellite at nadir, but at locations off-nadir. This off-nadir pointing capability is especially useful when performing disaster assessment.
Disadvantages of a satellite platform:

In general, there are relatively few limitations to consider when selecting a satellite platform. In the past, a lack of available high-resolution data (because of high, fixed orbits) was a concern, but today's new generation of satellite-based sensors offers high spatial resolution data collection (ground resolutions < 1 x 1 m) at relatively high fixed orbits. Timeliness may also be perceived as a limitation, but currently there are several satellite-based systems that provide near real-time data. Unfortunately, when a satellite remote sensing system has a problem it is not possible to retrieve and fix it as with aircraft-based sensor systems.

3.1.2 Space Scanning

Basic imaging principle

While the platform is moving over the ground the scanner images are built up sequentially line-by-line.
This results in a continuous strip of the imaged scene.

Early scanning Principles:


Optical-mechanical scanning with scanning mirrors:

3.1.3 More Recent Space scanning Principles:

Nowadays most scanners are "pushbroom" arrays. A complete line is recorded at a time, i.e. all detectors in a linear array (a CCD array: charge-coupled devices) record in parallel. More and more frame-type CCDs are expected to be used in the development of new space scanners; these are geometrically similar to conventional frame cameras. Of special importance for three-dimensional data collection is the acquisition of overlapping image pairs (stereo coverage). There are three different configurations for stereo data recording:
a) Across-track configuration: (e.g. used by SPOT and IRS-1C/1D).
The across-track configuration makes use of a mirror that can be rotated in the cross-track direction
(perpendicular to the flight path); pointing off-nadir to either side of the orbital track (flight line) – under
command from a ground station. Stereo coverage is obtained from two separate orbits.

Cross-track configuration with opposite pointing directions

b) Along-track configuration: (e.g. used by sensors MOMS, OPS and MEIS).


Along-track scanning sensor systems are equipped with two or more CCD line arrays for simultaneous data recording: one array points in the forward-looking direction down the flight line, the other points in the backward direction along the same line. Stereo coverage is obtained by using the fore- and aft-pointing rays.

Data capture:
• During a single flight the forward and backward pointing rays record the same piece of terrain with a short time delay (e.g. 20 sec for MOMS).
• This results in different positions in space for corresponding image lines of the fore and aft channels, giving stereo coverage of the swath of the imaged Earth's surface.

c) Flexible pointing configuration (Principle of the American high resolution satellites: Quickbird,
IKONOS).
• The sensor can be commanded to point in any direction at viewing angles up to 45° (IKONOS)
• Stereo coverage: across-track, along-track or any intermediate direction can be selected for acquisition
• Uses so-called gimballed mirrors; in principle, whole-body movement of the spacecraft is also possible

3.2 General Characteristics of Remote Sensing Instruments and Systems


Here we consider satellite-borne sensors which operate in the visible, infrared and microwave region of the
electromagnetic spectrum. Important properties of RS instruments/ systems are:
• Temporal resolution or revisit time is the time that elapses between successive dates of imagery acquisition
• Spatial, spectral and radiometric resolution of the imaging RS instrument

• Number of spectral bands in which data are collected ranges from several bands (multispectral
instruments) to several hundred bands (hyperspectral instruments)

3.2.1 Temporal Resolution (Revisit time)

Temporal resolution refers to the length of time it takes for a satellite to complete one entire orbit cycle.
The revisit period of a satellite sensor is usually several days. Therefore the absolute temporal resolution of
a remote sensing system to image the exact same area at the same viewing angle a second time is equal to
this period. However, because of some degree of overlap in the imaging swaths of adjacent orbits for most
satellites and the increase in this overlap with increasing latitude, some areas of the Earth tend to be
reimaged more frequently. Also, some satellite systems are able to point their sensors to image the same
area between different satellite passes separated by periods from one to five days. Thus, the actual temporal
resolution of a sensor depends on a variety of factors, including the satellite/sensor capabilities, the swath
overlap, and latitude.

The ability to collect imagery of the same area of the Earth's surface at different periods of time is one of
the most important elements for applying remote sensing data. Spectral characteristics of features may
change over time and these changes can be detected by collecting and comparing multi-temporal imagery.
For example, during the growing season, most species of vegetation are in a continual state of change and
our ability to monitor those subtle changes using remote sensing is dependent on when and how frequently
we collect imagery. By imaging on a continuing basis at different times we are able to monitor the changes
that take place on the Earth's surface, whether they are naturally occurring (such as changes in natural
vegetation cover or flooding) or induced by humans (such as urban development or deforestation). The time
factor in imaging is important when:
• persistent clouds offer limited clear views of the Earth's surface (often in the tropics)
• short-lived phenomena (floods, oil slicks, etc.) need to be imaged
• multi-temporal comparisons are required (e.g. the spread of a forest disease from one year to the next)
• the changing appearance of a feature over time can be used to distinguish it from near-similar features (wheat / maize)

The revisit time ranges from several minutes to several days:
• several minutes, if the satellite is effectively stationary with respect to a fixed point on the Earth's surface (a geostationary orbit with a fixed observation point), e.g. Meteosat;
• several hours or days, if the satellite moves relative to the Earth's surface on a polar orbit, e.g. the NOAA satellites (National Oceanic and Atmospheric Administration) with a repeat cycle time of the order of hours, or Landsat and SPOT with repeat cycle times of several days.

The temporal resolution of a polar orbiting satellite is set by the choice of orbit parameters: inclination, orbital shape and height. Typical Earth observation satellites can be characterized by:
• near-circular, polar orbit: e.g. Landsat 4 and 5 have an equatorial crossing time of 9.45 am (local Sun time, descending node) and an inclination of 98.2°. The inclination is the angle between the orbital plane and the Earth's equatorial plane
• sun-synchronous orbit: the satellite passes over a given point on the Earth's surface at the same Sun time each day

The relationship between the orbit period T and the orbit radius r is given by the following equation:

    T = 2 π r √( r / (g R²) )

where g = acceleration due to gravity = 9.81 m/sec²,
R = Earth radius = 6380 km,
and r = R + h (h = orbit altitude).
Example: For an orbit altitude of h ≈ 800 km, an orbit period of T ≈ 100 min is obtained.
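The example above can be checked numerically; a minimal sketch in plain Python (the function name is illustrative):

```python
import math

def orbital_period(h_km, g=9.81, R_km=6380.0):
    """Orbital period T = 2*pi*r*sqrt(r / (g*R^2)) for a circular orbit
    at altitude h above the Earth's surface, with r = R + h."""
    r = (R_km + h_km) * 1000.0   # orbit radius in metres
    R = R_km * 1000.0            # Earth radius in metres
    return 2 * math.pi * r * math.sqrt(r / (g * R * R))  # seconds

# The document's example: h ~ 800 km gives T ~ 100 minutes
print(round(orbital_period(800.0) / 60.0, 1))  # → 100.8
```

As the equation suggests, raising the altitude lengthens the period: `orbital_period(1000.0)` exceeds `orbital_period(800.0)`.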

Easy to see from the equation above is that the greater the orbit altitude h of a satellite, the longer is the orbital period T (time taken for a complete orbit). The temporal resolution is also influenced by the swath width: the wider the swath, the shorter is the time interval for imaging the same point on the Earth's surface. Example: Landsat TM (Thematic Mapper) has a swath width of 185 km; AVHRR (Advanced Very High Resolution Radiometer) of NOAA has a swath width of 3000 km. A benefit of pointable sensors like the SPOT HRV is that the revisit time, in theory, is much shorter than the orbital pattern and swath width would indicate.

3.2.2 Spatial resolution.

For some remote sensing instruments, the distance between the target being imaged and the platform plays a large role in determining the detail of information obtained and the total area imaged by the sensor. Sensors onboard platforms far away from their targets typically view a larger area, but cannot provide great detail. Compare what an astronaut onboard the space shuttle sees of the Earth to what you can see from an airplane. The astronaut might see your whole province or country in one glance, but couldn't distinguish individual houses. Flying over a city or town, you would be able to see individual buildings and cars, but you would be viewing a much smaller area than the astronaut. There is a similar difference between satellite images and airphotos.

The detail discernible in an image is dependent on the spatial resolution of the sensor and refers to the size of the smallest possible feature that can be detected. Spatial resolution of passive sensors depends primarily on their Instantaneous Field of View (IFOV). The IFOV is the angular cone of visibility of the sensor (A) and determines the area on the Earth's surface which is "seen" from a given altitude at one particular moment in time (B). The size of the area viewed is determined by multiplying the IFOV by the distance from the ground to the sensor (C). This area on the ground is called the resolution cell and determines a sensor's maximum spatial resolution.

For a homogeneous feature to be detected, its size generally has to be equal to or larger than the resolution
cell. If the feature is smaller than this, it may not be detectable as the average brightness of all features in
that resolution cell will be recorded. However, smaller features may sometimes be detectable if their
reflectance dominates within a particular resolution cell allowing sub-pixel or resolution cell detection.
Most remote sensing images are composed of a matrix of picture elements, or pixels, which are the smallest units of an image. Image pixels are normally square and represent a certain area on an image. It is important to distinguish between pixel size and spatial resolution: they are not interchangeable. If a sensor has a spatial resolution of 20 metres and an image from that sensor is displayed at full resolution, each pixel represents an area of 20 m x 20 m on the ground. In this case the pixel size and resolution are the same. However, it is possible to display an image with a pixel size different from the resolution. Many posters of satellite images of the Earth have their pixels averaged to represent larger areas, although the original spatial resolution of the sensor that collected the imagery remains the same.

Images where only large features are visible are said to have coarse or low resolution. In fine or high
resolution images, small objects can be detected. Military sensors for example, are designed to view as
much detail as possible, and therefore have very fine resolution. Commercial satellites provide imagery
with resolutions varying from a few metres to several kilometres. Generally speaking, the finer the
resolution, the less total ground area can be seen. The ratio of distance on an image or map, to actual ground
distance is referred to as scale. If you had a map with a scale of 1:100,000, an object of 1cm length on the
map would actually be an object 100,000 cm (1 km) long on the ground. Maps or images with small "map-to-ground ratios" are referred to as small scale (e.g. 1:100,000), and those with larger ratios (e.g. 1:5,000) are called large scale.

There exist different criteria to measure spatial resolution:
- geometrical properties of the imaging system,
- ability to distinguish between point targets,
- ability to measure the periodicity of repetitive targets,
- ability to measure spectral properties of small targets.

IFOV can be measured as the angle α or as the equivalent distance on the ground (more or less the diameter of a circle).

From this figure it can be seen that the IFOV is the area on the ground that is viewed by one detector element from a given altitude at any given instant of time. Because no satellite has a perfectly stable orbit, its height above the Earth will vary. This means the actual IFOV (area on the ground) might differ significantly from the nominal IFOV.

Example: Landsat 1-3 had a nominal altitude of 913 km, but the actual varied between 880 km and 940 km.
The nominal IFOV of Landsat MSS is generally specified as 79 m, the actual resolution (IFOV) is between
76 and 81 m.
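The resolution cell size follows from altitude × angular IFOV (small-angle approximation). A sketch using the angular IFOV implied by the Landsat MSS figures above (79 m from 913 km, ≈ 0.0865 mrad); the function name is illustrative:

```python
def ground_ifov(altitude_km, ifov_mrad):
    """Resolution cell size (m) = altitude * angular IFOV (small angles)."""
    return altitude_km * 1000.0 * ifov_mrad * 1e-3

# Angular IFOV implied by the nominal figures: 79 m from 913 km
ifov = 79.0 / (913.0 * 1000.0) * 1e3   # in milliradians, ~0.0865 mrad

for h in (880.0, 913.0, 940.0):        # actual altitude varied 880-940 km
    print(h, round(ground_ifov(h, ifov), 1))  # ~76.1, 79.0, 81.3 m
```

This reproduces the quoted variation of the actual IFOV between roughly 76 and 81 m.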

3.2.3 Spectral resolution

Different classes of features and details in an image can often be distinguished by comparing their
responses over distinct wavelength ranges. Broad classes, such as water and vegetation, can usually be
separated using very broad wavelength ranges - the visible and near infrared. Other more specific classes,
such as different rock types, may not be easily distinguishable using either of these broad wavelength
ranges and would require comparison at much finer wavelength ranges to separate them. Thus, we would
require a sensor with higher spectral resolution. Spectral resolution describes the ability of a sensor to
define fine wavelength intervals. The finer the spectral resolution, the narrower the wavelength range for a
particular channel or band.


Black and white film records wavelengths extending over much, or all of the visible portion of the
electromagnetic spectrum. Its spectral resolution is fairly coarse, as the various wavelengths of the visible
spectrum are not individually distinguished and the overall reflectance in the entire visible portion is
recorded. Colour film is also sensitive to the reflected energy over the visible portion of the spectrum, but
has higher spectral resolution, as it is individually sensitive to the reflected energy at the blue, green, and
red wavelengths of the spectrum. Thus, it can represent features of various colours based on their
reflectance in each of these distinct wavelength ranges.

Many remote sensing systems record energy over several separate wavelength ranges at various spectral
resolutions. These are referred to as multi-spectral sensors. Advanced multi-spectral sensors called
hyperspectral sensors, detect hundreds of very narrow spectral bands throughout the visible, near-infrared,
and mid-infrared portions of the electromagnetic spectrum. Their very high spectral resolution facilitates
fine discrimination between different targets based on their spectral response in each of the narrow bands.

Most of the digital images collected by satellite-borne sensors are multi-band or multi-spectral.
Spectral resolution refers to the width of these spectral bands. Important aspects of spectral resolution
are - the position in the spectrum;
- the width and
- the numbers of spectral bands.
This will determine the degree to which individual targets (vegetation species, rock types, etc.) can be discriminated on the multi-spectral image. The discrimination power using multi-spectral images is higher than the discrimination power using any single band taken on its own.

Example: Different rock types might only be separable if the recording device is capable of collecting data in a narrow wave band. A wide-band instrument would simply average out the differences. Recent developments include the use of hyperspectral sensors (imaging spectrometers), e.g. AVIRIS (airborne visible infrared imaging spectrometer), which acquires data in 224 narrow spectral bands.

3.2.4 Radiometric resolution

While the arrangement of pixels describes the spatial structure of an image, the radiometric characteristics describe the actual information content in an image. Every time an image is acquired on film or by a sensor, its sensitivity to the magnitude of the electromagnetic energy determines the radiometric resolution. The radiometric resolution of an imaging system describes its ability to discriminate very slight differences in energy. The finer the radiometric resolution of a sensor, the more sensitive it is to detecting small differences in reflected or emitted energy.

Imagery data are represented by positive digital numbers which vary from 0 to (one less than) a selected power of 2. This range corresponds to the number of bits used for coding numbers in binary format. Each bit records an exponent of power 2 (e.g. 1 bit = 2¹ = 2). The maximum number of brightness levels available depends on the number of bits used in representing the energy recorded. Thus, if a sensor used 8 bits to record the data, there would be 2⁸ = 256 digital values available, ranging from 0 to 255. However, if only 4 bits were used, then only 2⁴ = 16 values ranging from 0 to 15 would be available. Thus, the radiometric resolution would be much less. Image data are generally displayed in a range of grey tones, with black representing a digital number of 0 and white representing the maximum value (for example, 255 in 8-bit data). By comparing a 2-bit image with an 8-bit image, we can see that there is a large difference in the level of detail discernible depending on their radiometric resolutions.


Comparing a 2-bit image (left) with an 8-bit image (right)
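The bit-depth arithmetic above, and the loss of detail when data are re-quantized to fewer bits, can be sketched as follows (the `requantize` helper is an illustrative name, not part of any standard library):

```python
def brightness_levels(bits):
    """Number of digital values available for a given bit depth: 2**bits."""
    return 2 ** bits

def requantize(dn, from_bits=8, to_bits=2):
    """Map a digital number from one bit depth down to a coarser one."""
    return dn * (2 ** to_bits) // (2 ** from_bits)

print(brightness_levels(8), brightness_levels(4), brightness_levels(2))  # 256 16 4

# An 8-bit grey ramp (0-255) collapses to only four distinct 2-bit values:
print(sorted({requantize(dn) for dn in range(256)}))  # [0, 1, 2, 3]
```

This is why a 2-bit image shows only four grey tones while an 8-bit image of the same scene shows up to 256.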

3.3 Image Analysis

Interpretation and analysis of remote sensing imagery involves the identification and/or measurement of
various targets in an image in order to extract useful information about them. Targets in remote sensing
images may be any feature or object which can be observed in an image, and have the following
characteristics:

• Targets may be a point, line, or area feature. This means that they can have any form, from a bus in a
parking lot or plane on a runway, to a bridge or roadway, to a large expanse of water or a field.
• The target must be distinguishable; it must contrast with other features around it in the image.

Much interpretation and identification of targets in remote sensing imagery is performed manually or visually, i.e. by a human interpreter. In many cases this is done using imagery displayed in a pictorial or photograph-type format, independent of what type of sensor was used to collect the data and how the data were collected. In this case we refer to the data as being in analog format.


Remote sensing images can also be represented in a computer as arrays of pixels, with each pixel
corresponding to a digital number, representing the brightness level of that pixel in the image. In this case,
the data are in a digital format. Visual interpretation may also be performed by examining digital imagery
displayed on a computer screen. Both analogue and digital imagery can be displayed as black and white
(also called monochrome) images, or as colour images by combining different channels or bands
representing different wavelengths.

When remote sensing data are available in digital format, digital processing and analysis may be performed using a computer. Digital processing may be used to enhance data as a prelude to visual interpretation. Digital processing and analysis may also be carried out to automatically identify targets and extract information completely without manual intervention by a human interpreter. However, rarely is digital processing and analysis carried out as a complete replacement for manual interpretation. Often, it is done to supplement and assist the human analyst.

Manual interpretation and analysis dates back to the early beginnings of remote sensing for air photo interpretation. Digital processing and analysis is more recent, with the advent of digital recording of remote sensing data and the development of computers. Both manual and digital techniques for interpretation of remote sensing data have their respective advantages and disadvantages.

• Manual interpretation requires little, if any, specialized equipment, while digital analysis requires
specialized, and often expensive, equipment.
• Manual interpretation is often limited to analyzing only a single channel of data or a single image at a
time due to the difficulty in performing visual interpretation with multiple images. The computer
environment is more amenable to handling complex images of several or many channels or from several
dates. In this sense, digital analysis is useful for simultaneous analysis of many spectral bands and can
process large data sets much faster than a human interpreter.
• Manual interpretation is a subjective process, meaning that the results will vary with different
interpreters. Digital analysis is based on the manipulation of digital numbers in a computer and is thus
more objective, generally resulting in more consistent results. However, determining the validity and
accuracy of the results from digital processing can be difficult.

It is important to reiterate that visual and digital analyses of remote sensing imagery are not mutually
exclusive. Both methods have their merits. In most cases, a mix of both methods is usually employed when
analyzing imagery. In fact, the ultimate decision of the utility and relevance of the information extracted at
the end of the analysis process, still must be made by humans.

3.3.1 Elements of visual interpretation

Analysis of remote sensing imagery involves the identification of various targets in an image, and those
targets may be environmental or artificial features which consist of points, lines, or areas. Targets may be
defined in terms of the way they reflect or emit radiation. This radiation is measured and recorded by a
sensor, and ultimately is depicted as an image product such as an air photo or a satellite image.

Viewing objects from directly above also provides a very different perspective than what we are familiar
with. Combining an unfamiliar perspective with a very different scale and lack of recognizable detail can
make even the most familiar object unrecognizable in an image. Finally, we are used to seeing only the
visible wavelengths, and the imaging of wavelengths outside of this window is more difficult for us to
comprehend. Recognizing targets is the key to interpretation and information extraction. Observing the
differences between targets and their backgrounds involves comparing different targets based on any, or all,
of the visual elements of tone, shape, size, pattern, texture, shadow, and association.

3.3.2 Digital Image Processing

In today's world of advanced technology where most remote sensing data are recorded in digital format,
virtually all image interpretation and analysis involves some element of digital processing. Digital image
processing may involve numerous procedures including formatting and correcting of the data, digital
enhancement to facilitate better visual interpretation, or even automated classification of targets and
features entirely by computer. In order to process remote sensing imagery digitally, the data must be
recorded and available in a digital form suitable for storage on a computer tape or disk. Obviously, the other
requirement for digital image processing is a computer system, sometimes referred to as an image analysis
system, with the appropriate hardware and software to process the data. Several commercially available
software systems have been developed specifically for remote sensing image processing and analysis.

Most of the common image processing functions available in image analysis systems can be categorized
into the following four categories:

• Preprocessing
• Image Enhancement
• Image Transformation
• Image Classification and Analysis

(a) Preprocessing
Preprocessing functions involve those operations that are normally required prior to the main data analysis
and extraction of information, and are generally grouped as radiometric or geometric corrections.

Radiometric corrections include correcting the data for sensor irregularities and unwanted sensor or
atmospheric noise, and converting the data so they accurately represent the reflected or emitted radiation
measured by the sensor. Geometric corrections include correcting for geometric distortions due to sensor-Earth geometry variations, and conversion of the data to real world coordinates (e.g. latitude and longitude) on the Earth's surface.

Pre-processing operations, sometimes referred to as image restoration and rectification, are intended to correct for sensor- and platform-specific radiometric and geometric distortions of data. Radiometric corrections may be necessary due to variations in scene illumination and viewing geometry, atmospheric conditions, and sensor noise and response. Each of these will vary depending on the specific sensor and platform used to acquire the data and the conditions during data acquisition. Also, it may be desirable to convert and/or calibrate the data to known (absolute) radiation or reflectance units to facilitate comparison between data.

Variations in illumination and viewing geometry between images (for optical sensors) can be corrected by
modeling the geometric relationship and distance between the area of the Earth's surface imaged, the sun,
and the sensor. This is often required so as to be able to more readily compare images collected by different
sensors at different dates or times, or to mosaic multiple images from a single sensor while maintaining
uniform illumination conditions from scene to scene.

(b) Image Enhancement

Enhancements are used to make it easier for visual interpretation and understanding of imagery. The
advantage of digital imagery is that it allows us to manipulate the digital pixel values in an image. Although
radiometric corrections for illumination, atmospheric influences, and sensor characteristics may be done
prior to distribution of data to the user, the image may still not be optimized for visual interpretation.
Remote sensing devices, particularly those operated from satellite platforms, must be designed to cope with
levels of target/background energy which are typical of all conditions likely to be encountered in routine
use. With large variations in spectral response from a diverse range of targets (e.g. forest, deserts,
snowfields, water, etc.) no generic radiometric correction could optimally account for and display the
optimum brightness range and contrast for all targets. Thus, for each application and each image, a custom
adjustment of the range and distribution of brightness values is usually necessary.

In raw imagery, the useful data often populates only a small portion of the
available range of digital values (commonly 8 bits or 256 levels). Contrast
enhancement involves changing the original values so that more of the available
range is used, thereby increasing the contrast between targets and their
backgrounds. The key to understanding contrast enhancements is to understand
the concept of an image histogram. A histogram is a graphical representation of
the brightness values that comprise an image. The brightness values (i.e. 0-255)
are displayed along the x-axis of the graph. The frequency of occurrence of each
of these values in the image is shown on the y-axis.

By manipulating the range of digital values in an image, graphically represented by its histogram, we can
apply various enhancements to the data. There are many different techniques and methods of enhancing
contrast and detail in an image; we will cover only a few common ones here. The simplest type of
enhancement is a linear contrast stretch. This involves identifying lower and upper bounds from the
histogram (usually the minimum and maximum brightness values in the image) and applying a
transformation to stretch this range to fill the full range. In our example, the minimum value (occupied by
actual data) in the histogram is 84 and the maximum value is 153. These 70 levels occupy less than one-third of the full 256 levels available. A linear stretch uniformly expands this small range to cover the full range
of values from 0 to 255. This enhances the contrast in the image with light toned areas appearing lighter and
dark areas appearing darker, making visual interpretation much easier. This graphic illustrates the increase
in contrast in an image before (left) and after (right) a linear contrast stretch.
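The stretch described above can be sketched as a per-pixel mapping; the function name is illustrative and the 84-153 bounds come from the example:

```python
def linear_stretch(dn, lo=84, hi=153, out_max=255):
    """Linearly map the input range [lo, hi] onto [0, out_max],
    clipping digital numbers that fall outside the range."""
    dn = max(lo, min(hi, dn))                 # clip to the input range
    return round((dn - lo) * out_max / (hi - lo))

# Endpoints map to black and white; a mid-range value lands mid-grey
print(linear_stretch(84), linear_stretch(153), linear_stretch(118))  # 0 255 126
```

Applying this to every pixel spreads the original 70 occupied levels over the full 0-255 range, which is what makes light areas lighter and dark areas darker.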

(c) Image transformations

Image transformations typically involve the manipulation of multiple bands of data, whether from a single
multispectral image or from two or more images of the same area acquired at different times (i.e.
multitemporal image data). Either way, image transformations generate "new" images from two or more sources which highlight particular features or properties of interest better than the original input images.

Basic image transformations apply simple arithmetic operations to the


image data. Image subtraction is often used to identify changes that
have occurred between images collected on different dates. Typically,
two images which have been geometrically registered, are used with the
pixel (brightness) values in one image (1) being subtracted from the
pixel values in the other (2). Scaling the resultant image (3) by adding a constant (127 in this case) to the
output values will result in a suitable 'difference' image. In such an image, areas where there has been little
or no change (A) between the original images, will have resultant brightness values around 127 (mid-grey
tones), while those areas where significant change has occurred (B) will have values higher or lower than
127 - brighter or darker depending on the 'direction' of change in reflectance between the two images. This
type of image transform can be useful for mapping changes in urban development around cities and for
identifying areas where deforestation is occurring, as in this example.
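The subtraction-and-offset procedure described above can be sketched in a few lines (a minimal illustration with NumPy; all pixel values are invented):

```python
import numpy as np

def change_image(date1, date2, offset=127):
    """Subtract two registered images and shift by a constant so that
    'no change' pixels sit at mid-grey (127), as described above."""
    diff = date2.astype(int) - date1.astype(int) + offset
    return np.clip(diff, 0, 255).astype(np.uint8)

before = np.array([[100, 100], [100, 200]], dtype=np.uint8)
after  = np.array([[100,  60], [180, 200]], dtype=np.uint8)
print(change_image(before, after))   # → [[127  87] [207 127]]
```

Unchanged pixels land on 127, pixels that darkened fall below it (87) and pixels that brightened rise above it (207), matching the 'direction of change' interpretation in the text.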

(d) Image classification and analysis

A human analyst attempting to classify features in an image uses the elements of visual interpretation to
identify homogeneous groups of pixels which represent various features or land cover classes of interest.
Digital image classification uses the spectral information represented by the digital numbers in one or
more spectral bands, and attempts to classify each individual pixel based on this spectral information. This
type of classification is termed spectral pattern recognition. In either case, the objective is to assign all
pixels in the image to particular classes or themes (e.g. water, coniferous forest, deciduous forest, corn,
wheat, etc.). The resulting classified image is comprised of a mosaic of pixels, each of which belong to a
particular theme, and is essentially a thematic "map" of the original image.

When talking about classes, we need to distinguish between information classes and spectral classes.
Information classes are those categories of interest that the analyst is actually trying to identify in the
imagery, such as different kinds of crops, different forest types or tree species, different geologic units or
rock types, etc. Spectral classes are groups of pixels that are uniform (or near-similar) with respect to their
brightness values in the different spectral channels of the data. The objective is to match the spectral classes
in the data to the information classes of interest. Rarely is there a simple one-to-one match between these
two types of classes. Rather, unique spectral classes may appear which do not necessarily correspond to any
information class of particular use or interest to the analyst. Alternatively, a broad information class (e.g.
forest) may contain a number of spectral sub-classes with unique spectral variations. Using the forest
example, spectral sub-classes may be due to variations in age, species, and density, or perhaps as a result of
shadowing or variations in scene illumination. It is the analyst's job to decide on the utility of the different
spectral classes and their correspondence to useful information classes.

Common classification procedures can be broken down into two broad
subdivisions based on the method used: supervised classification and
unsupervised classification. In a supervised classification, the
analyst identifies in the imagery homogeneous representative samples
of the different surface cover types (information classes) of interest.
These samples are referred to as training areas. The selection of
appropriate training areas is based on the analyst's familiarity with the
geographical area and their knowledge of the actual surface cover types present in the
image. Thus, the analyst is "supervising" the categorization of a set of specific classes. The numerical
information in all spectral bands for the pixels comprising these areas are used to "train" the computer to
recognize spectrally similar areas for each class. The computer uses a special program or algorithm (of
which there are several variations), to determine the numerical "signatures" for each training class. Once the
computer has determined the signatures for each class, each pixel in the image is compared to these
signatures and labeled as the class it most closely "resembles" digitally. Thus, in a supervised classification
we are first identifying the information classes which are then used to determine the spectral classes which
represent them.
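As a sketch of the comparison step, here is minimum-distance-to-means, one of the simplest of the several algorithm variants the text mentions. The two-band signatures and pixel values are invented for illustration:

```python
import numpy as np

def min_distance_classify(pixels, signatures):
    """Label each pixel with the class whose mean signature is nearest
    (minimum-distance-to-means, a simple supervised decision rule)."""
    # distance of every pixel to every class-mean signature
    d = np.linalg.norm(pixels[:, None, :] - signatures[None, :, :], axis=2)
    return d.argmin(axis=1)

# Mean signatures derived from training areas (hypothetical 2-band values)
signatures = np.array([[ 30.0,  20.0],    # class 0: water
                       [ 80.0, 120.0],    # class 1: forest
                       [160.0, 150.0]])   # class 2: urban
pixels = np.array([[32.0, 22.0], [150.0, 155.0], [75.0, 118.0]])
print(min_distance_classify(pixels, signatures))   # → [0 2 1]
```

Each pixel is assigned to the training class it most closely "resembles" numerically, which is the essence of the supervised approach.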

Unsupervised classification in essence reverses the supervised classification process. Spectral classes are
grouped first, based solely on the numerical information in the data, and are then matched by the analyst to
information classes (if possible). Programs, called clustering algorithms, are used to determine the natural
(statistical) groupings or structures in the data. Usually, the analyst specifies how many groups or clusters
are to be looked for in the data. In addition to specifying the desired number of classes, the analyst may also
specify parameters related to the separation distance among the clusters and the variation within each
cluster. The final result of this iterative clustering process may result in some clusters that the analyst will
want to subsequently combine, or clusters that should be broken down further - each of these requiring a
further application of the clustering algorithm. Thus, unsupervised classification is not completely without
human intervention. However, it does not start with a pre-determined set of classes as in a supervised
classification.
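A minimal sketch of such a clustering algorithm is k-means, one of many variants. The pixel values below are invented, chosen to form two obvious spectral groupings:

```python
import numpy as np

def kmeans(pixels, k, n_iter=20, seed=0):
    """Minimal k-means: group pixels into k spectral classes using only
    the numerical structure of the data (no training areas)."""
    rng = np.random.default_rng(seed)
    centres = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(n_iter):
        # assign each pixel to the nearest cluster centre
        labels = np.linalg.norm(pixels[:, None] - centres[None], axis=2).argmin(axis=1)
        # move each centre to the mean of its assigned pixels
        for c in range(k):
            if np.any(labels == c):
                centres[c] = pixels[labels == c].mean(axis=0)
    return labels, centres

pixels = np.array([[10., 12.], [11., 10.], [200., 198.], [202., 201.]])
labels, centres = kmeans(pixels, k=2)
print(labels)   # the two dark and the two bright pixels fall in separate clusters
```

The analyst would then inspect the resulting spectral clusters and decide which information classes (if any) they correspond to, merging or splitting them as the text describes.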

4 APPLICATIONS OF REMOTE SENSING IN CIVIL ENGINEERING


Applications of remote sensing to civil engineering may include the following:
• site studies including determination of foundation conditions, location of possible bridge sites,
identification of spoil areas and possible borrow pits, identification of major hazard areas (e.g. poorly
drained soils, unstable ground) and location of possible sources of construction material
• engineering geology: satellite imagery can be used in exploration for placers, sand, gravel and clay, and
in emplacing engineering structures
• water resources: surface water (supply, pollution), underground water, snow and ice mapping
• small-scale topographic mapping from satellite sensors
• generation of thematic mapping information appropriate to engineering projects
• aerial traffic investigations using helicopters and airborne video cameras
• environmental monitoring including investigations of heat loss from buildings, septic tank seepage into
water supplies, sewage outfall, and oil spillage at sea
• etc.

The following sections illustrate some of these applications.

4.1 Hydrology

Hydrology is the study of water on the Earth's surface, whether flowing above ground, frozen in ice or
snow, or retained by soil. Hydrology is inherently related to many other applications of remote sensing,
particularly forestry, agriculture and land cover, since water is a vital component in each of these
disciplines. Most hydrological processes are dynamic, not only between years, but also within and between
seasons, and therefore require frequent observations. Remote sensing offers a synoptic view of the spatial
distribution and dynamics of hydrological phenomena, often unattainable by traditional ground surveys.
Radar has brought a new dimension to hydrological studies with its active sensing capabilities, allowing the
time window of image acquisition to include inclement weather conditions or seasonal or diurnal darkness.

Examples of hydrological applications include:

• wetlands mapping and monitoring,
• soil moisture estimation,
• snow pack monitoring / delineation of extent,
• measuring snow thickness,
• determining snow-water equivalent,
• river and lake ice monitoring,
• flood mapping and monitoring,
• glacier dynamics monitoring (surges, ablation)
• river /delta change detection
• drainage basin mapping and watershed modelling
• irrigation canal leakage detection

Flood Delineation & Mapping

A natural phenomenon in the hydrological cycle is flooding. Flooding is necessary to replenish soil fertility
by periodically adding nutrients and fine grained sediment; however, it can also cause loss of life,
temporary destruction of animal habitat and permanent damage to urban and rural infrastructure. Inland
floods can result from disruption to natural or man-made dams, catastrophic melting of ice and snow, rain,
river ice jams and / or excessive runoff in the spring.

Remote sensing techniques are used to measure and monitor the areal extent of the flooded areas, to
efficiently target rescue efforts and to provide quantifiable estimates of the amount of land and
infrastructure affected. Incorporating remotely sensed data into a GIS allows for quick calculations and
assessments of water levels, damage, and areas facing potential flood danger. Users of this type of data
include flood forecast agencies, hydropower companies, conservation authorities, city planning and
emergency response departments, and insurance companies (for flood compensation). The identification
and mapping of floodplains, abandoned river channels, and meanders are important for planning and
transportation routing.

Many of these users of remotely sensed data need the information during a crisis and therefore require
"near-real time turnaround". Turnaround time is less demanding for those involved in hydrologic
modelling, calibration/validation studies, damage assessment and the planning of flood mitigation. Flooding
conditions are relatively short term and generally occur during inclement weather, so optical sensors,
although typically having high information content for this purpose, cannot penetrate through the cloud
cover to view the flooded region below. For these reasons, active SAR sensors are particularly valuable for
flood monitoring. RADARSAT in particular offers a high turnaround interval, from when the data is
acquired by the sensor, to when the image is delivered to the user on the ground. The land / water interface
is quite easily discriminated with SAR data, allowing the flood extent to be delineated and mapped. The
SAR data is most useful when integrated with a pre-flood image, to highlight the flood-affected areas, and
then presented in a GIS with cadastral and road network information.
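As a rough sketch of how the land / water discrimination might be automated: smooth open water returns little energy to the radar and appears dark, so a simple backscatter threshold can separate water from land and, compared against the pre-flood image, delineate the newly flooded pixels. The -15 dB cutoff and all sample values are invented; operational work uses calibrated, speckle-filtered data:

```python
import numpy as np

def flood_extent(pre_db, post_db, water_threshold=-15.0):
    """Flag pixels that are dark (low backscatter, i.e. water) in the
    flood image but were land in the pre-flood reference image."""
    water_before = pre_db < water_threshold
    water_after = post_db < water_threshold
    return water_after & ~water_before    # True = flood-affected

pre  = np.array([[-8.0, -20.0], [-9.0, -7.0]])    # dB backscatter, pre-flood
post = np.array([[-8.0, -20.0], [-18.0, -17.0]])  # same scene during flood
print(flood_extent(pre, post))   # only the two newly dark pixels are True
```

The permanent water body (upper right) is excluded because it was already water in the reference image, which is why the pre-flood scene matters.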

4.2 Digital Elevation Models

The availability of digital elevation models (DEMs) is critical for performing geometric and radiometric
corrections for terrain on remotely sensed imagery, and allows the generation of contour lines and terrain
models, thus providing another source of information for analysis.

Present mapping programs are rarely implemented with only planimetric considerations. The demand for
digital elevation models is growing with increasing use of GIS and with increasing evidence of
improvement in information extracted using elevation data (for example, in discriminating wetlands, flood
mapping, and forest management). The incorporation of elevation and terrain data is crucial to many
applications, particularly if radar data is being used, to compensate for foreshortening and layover effects,
and slope induced radiometric effects. Elevation data is used in the production of popular topographic
maps.

Elevation data, integrated with imagery is also used for generating perspective views, useful for tourism,
route planning, to optimize views for developments, to lessen visibility of forest clearcuts from major
transportation routes, and even golf course planning and development. Elevation models are integrated into
the programming of cruise missiles, to guide them over the terrain. Resource management,
telecommunications planning, and military mapping are some of the applications associated with DEMs.
There are a number of ways to generate elevation models. One is to create point data sets by collecting
elevation data from altimeter or Global Positioning System (GPS) data, and then interpolating between the
points. This is extremely time and effort consuming. Traditional surveying is also very time consuming and
limits the timeliness of regional scale mapping.
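One common way to carry out the interpolation between such points is inverse distance weighting (IDW). The text does not prescribe a method, so this is simply one illustrative choice, with invented spot heights:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse-distance-weighted interpolation of point elevations:
    nearer control points influence the estimate more strongly."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)          # avoid division by zero at data points
    w = 1.0 / d**power
    return (w @ z_known) / w.sum(axis=1)

# Hypothetical GPS/altimeter spot heights: (x, y) in metres, z in metres
pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
z   = np.array([100.0, 120.0, 140.0])
grid = np.array([[5.0, 5.0], [0.0, 0.0]])
print(idw(pts, z, grid))   # → [120. 100.]
```

The query point equidistant from all three control points gets their mean, while a query coinciding with a control point reproduces its elevation; repeating this over a regular grid of query points yields a raster DEM.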

Generating DEMs from remotely sensed data can be cost effective and efficient. A variety of sensors and
methodologies to generate such models are available and proven for mapping applications. Two primary
methods of generating elevation data are 1. Stereogrammetry techniques using airphotos (photogrammetry),
and 2. Radar interferometry.

Stereogrammetry involves the extraction of elevation information from stereo overlapping images, typically
airphotos, SPOT imagery, or radar. To give an example, stereo pairs of airborne SAR data are used to find
point elevations, using the concept of parallax. Contours (lines of equal elevation) can be traced along the
images by operators constantly viewing the images in stereo.
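A common form of the parallax relationship behind this is h = H * dp / (Pb + dp), where H is the flying height above the datum, Pb the absolute (photo-base) parallax, and dp the differential parallax measured for the feature. The numbers below are invented for illustration:

```python
def height_from_parallax(flying_height, base_parallax, dp):
    """Object/terrain height from differential parallax:
    h = H * dp / (Pb + dp)  (standard stereo parallax equation)."""
    return flying_height * dp / (base_parallax + dp)

# Hypothetical measurements: H = 3000 m, Pb = 90 mm, dp = 1.5 mm
print(height_from_parallax(3000.0, 90.0, 1.5))   # ≈ 49.2 m
```

Measuring dp at many points in the stereo model is what lets operators derive spot elevations and trace contours.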

The potential of radar interferometric techniques to measure terrain height, and to detect and measure
minute changes in elevation and horizontal base, is becoming quickly recognized. Interferometry involves
the gathering of precise elevation data using successive passes (or dual antenna reception) of spaceborne or
airborne SAR. Subsequent images from nearly the same track are acquired and instead of examining the
amplitude images, the phase information of the returned signals is compared. The phase images are
coregistered, and the differences in phase value for each pixel are measured and displayed as an
interferogram. A computation of phase "unwrapping" or phase integration, and geometric rectification are
performed to determine altitude values. High accuracies have been achieved in demonstrations using both
airborne (in the order of a few centimetres) and spaceborne data (in the order of 10m).

Primary applications of interferometry include high quality DEM generation, monitoring of surface
deformations (measurement of land subsidence due to natural processes, gas removal, or groundwater
extraction; volcanic inflation prior to eruption; relative earth movements caused by earthquakes), and
hazard assessment and monitoring of natural landscape features and fabricated structures, such as dams.

This type of data would be useful for insurance companies who could better measure damage due to natural
disasters, and for hydrology-specialty companies and researchers interested in routine monitoring of ice
jams for bridge safety, and changes in mass balance of glaciers or volcano growth prior to an eruption.

From elevation models, contour lines can be generated for topographic maps, slope and aspect models can
be created for integration into (land cover) thematic classification datasets or used as a sole data source, or
the model itself can be used to orthorectify remote sensing imagery and generate perspective views.

4.3 Land Cover & Land Use

Although the terms land cover and land use are often used interchangeably, their actual meanings are quite
distinct. Land cover refers to the surface cover on the ground, whether vegetation, urban infrastructure,
water, bare soil or other. Identifying, delineating and mapping land cover is important for global monitoring
studies, resource management, and planning activities. Identification of land cover establishes the baseline
from which monitoring activities (change detection) can be performed, and provides the ground cover
information for baseline thematic maps.

Land use refers to the purpose the land serves, for example, recreation, wildlife habitat, or agriculture. Land
use applications involve both baseline mapping and subsequent monitoring, since timely information is
required to know what current quantity of land is in what type of use and to identify the land use changes
from year to year. This knowledge will help develop strategies to balance conservation, conflicting uses,
and developmental pressures. Issues driving land use studies include the removal or disturbance of
productive land, urban encroachment, and depletion of forests.

It is important to distinguish this difference between land cover and land use, and the information that can
be ascertained from each. The properties measured with remote sensing techniques relate to land cover,
from which land use can be inferred, particularly with ancillary data or a priori knowledge.

Land cover / use studies are multidisciplinary in nature, and thus the participants involved in such work are
numerous and varied, ranging from international wildlife and conservation foundations, to government
researchers, and forestry companies. Regional government agencies have an operational need for land cover
inventory and land use monitoring, as it is within their mandate to manage the natural resources of their
respective regions. In addition to facilitating sustainable management of the land, land cover and use
information may be used for planning, monitoring, and evaluation of development, industrial activity, or
reclamation. Detection of long term changes in land cover may reveal a response to a shift in local or
regional climate conditions.

Resource managers involved in parks, oil, timber, and mining companies, are concerned with both land use
and land cover, as are local resource inventory or natural resource agencies. Changes in land cover will be
examined by environmental monitoring researchers, conservation authorities, and departments of municipal
affairs, with interests varying from tax assessment to reconnaissance vegetation mapping. Governments are
also concerned with the general protection of national resources, and become involved in publicly sensitive
activities involving land use conflicts.

Land use applications of remote sensing include the following:

• natural resource management
• wildlife habitat protection
• baseline mapping for GIS input
• urban expansion / encroachment
• routing and logistics planning for seismic / exploration / resource extraction activities
• damage delineation (tornadoes, flooding, volcanic, seismic, fire)
• legal boundaries for tax and property evaluation
• target detection - identification of landing strips, roads, clearings, bridges, land/water interface

Land Use Change (Rural / Urban)


As the Earth's population increases and national economies continue to move away from agriculture based
systems, cities will grow and spread. The urban sprawl often infringes upon viable agricultural or
productive forest land, neither of which can resist or deflect the overwhelming momentum of urbanization.
City growth is an indicator of industrialization (development) and generally has a negative impact on the
environmental health of a region.

The change in land use from rural to urban is monitored to estimate populations, predict and plan direction
of urban sprawl for developers, and monitor adjacent environmentally sensitive areas or hazards.
Temporary refugee settlements and tent cities can be monitored and population amounts and densities
estimated.

Analyzing agricultural vs. urban land use is important for ensuring that development does not encroach on
valuable agricultural land, and to likewise ensure that agriculture is occurring on the most appropriate land
and will not degrade due to improper adjacent development or infrastructure.

With multi-temporal analyses, remote sensing gives a unique perspective of how cities evolve. The key
element for mapping rural to urban landuse change is the ability to discriminate between rural uses
(farming, pasture, forests) and urban use (residential, commercial, recreational). Remote sensing methods
can be employed to classify types of land use in a practical, economical and repetitive fashion, over large
areas.

Requirements for rural / urban change detection and mapping applications are 1) high resolution to obtain
detailed information, and 2) multispectral optical data to make fine distinctions among various land use
classes. Sensors operating in the visible and infrared portions of the spectrum are the most useful data
sources for land use analysis.

5 DATA TYPES

Geospatial data comprise the spatial and attribute components. Spatial data describe the locations of spatial
features, whereas attribute data describe the characteristics of spatial features. The spatial data models that
are commonly used in 2-dimensional GIS mapping are the vector model and the raster model. The vector
model uses the geometric objects of point, line, and area to represent simple spatial features. Although ideal
for discrete features with well defined locations and shapes, the vector data model does not work well with
spatial phenomena that vary continuously over the space such as precipitation, elevation etc. A better option
for representing continuous phenomena is the raster data model. The raster data model uses a regular grid to
cover the space.

5.1 Geometry data


The geometry data forms the base of a GIS. It relates all data to a coordinate system, defining the geometric
primitives point, line, and area or raster cell.

Graphical description
The graphical description defines the representation of the geometry data on an output device:
Point: symbol, size, direction, colour
Line: width, colour, pattern
Polygon: colour, fill pattern (with direction and interval)
Raster: grey value, colour

Graphic data = Geometry data + graphical description


The graphic data is the combination of the geometry and its graphical description, generally completed by
text.

The key question is: how do we represent the features of a map in the computer? The computer contains a
digital representation of the map, which it can manipulate and present.

Figure 1: data conversions


5.2 Attributive data


Attributive data comprise all descriptive data without a geometrical representation, such as text, measured
values, and statistical data associated with a spatial entity. These data are normally stored in tables. An attribute table is organized into
rows and columns. Each row represents a spatial feature, each column describes a characteristic, and the
intersection of a column and a row shows the value of a particular characteristic for a particular feature. A
row is called a record and a column is also called a field. The georelational data model (e.g., a coverage)
stores spatial data and attribute data separately and links the two by the feature ID (Table 1 below). In this
table the soils coverage uses SOIL-ID to link spatial and attribute data. The two data sets are synchronized
so that they can be queried, analysed, and displayed in unison. The object-based data model (e.g., a
geodatabase) combines both spatial and attribute data in a single system. Each spatial feature has a unique
object ID and an attribute to store its geometry as illustrated in table 2 below. Although the two data models
handle the storage of spatial data differently, both operate in the same relational database environment.
Table 1: Example of the georelational data model.
Record Soil-ID Area Perimeter
1 1 106.39 495.86
2 2 8310.84 508,382.38
3 3 554.11 13,829.50
4 4 531.83 19,000.03

Table 2: Object-based data model.


Object ID Shape Shape-Length Shape-Area
1 polygon 106.39 495.86
2 polygon 8310.84 508,382.38
3 polygon 554.11 13,829.50
4 polygon 531.83 19,000.03
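The ID-based link in the georelational model above can be sketched as a simple keyed join. The soil_type attribute values are invented, added only to show the two tables being combined:

```python
# Spatial records keyed by Soil-ID, attribute records kept in a separate
# table, mirroring the georelational model in Table 1 above.
spatial = {1: {"area": 106.39, "perimeter": 495.86},
           2: {"area": 8310.84, "perimeter": 508382.38}}
attributes = {1: {"soil_type": "loam"},     # hypothetical attribute values
              2: {"soil_type": "clay"}}

# Link the two data sets through the shared key (Soil-ID)
joined = {sid: {**spatial[sid], **attributes[sid]} for sid in spatial}
print(joined[1])   # → {'area': 106.39, 'perimeter': 495.86, 'soil_type': 'loam'}
```

This is the "synchronization" the text refers to: as long as both tables carry the same stable key, they can be queried and displayed in unison.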

5.2.1 Types of attribute data


One method of classifying attribute data is by data type. The data type determines how an attribute is stored
in a GIS. Depending on the GIS package, the available data types can vary. Common data types are
number, text (or character), date, and binary large object (BLOB). Data types for numbers include integer
(for numbers without decimal digits) and float (for numbers with decimal digits). Moreover, depending on
the designated computer memory, an integer can be short or long and a float can be single precision or
double precision. BLOB stores images, multimedia,
and the geometry of spatial features as long sequences of binary numbers.

5.2.2 Attribute data entry


The first step in attribute data entry is to define each field in the table. A field definition usually includes
the field name, the data type, and the data width (the number of spaces reserved for the field). The width should be
large enough for the largest number, including the sign, or the longest string in the data. The data type must
follow data types allowed in the GIS package.
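In a relational database, a field definition of this kind might look as follows. This is a sketch using SQLite; the table name, field names, and widths are invented, and note that SQLite itself does not enforce the declared width:

```python
import sqlite3

# Define each field (name, type, width) before entering attribute data
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE soils (
                   soil_id  INTEGER,       -- feature key
                   suit     INTEGER,       -- suitability code
                   depth    REAL,          -- soil depth in metres
                   soiltype VARCHAR(20))""")
con.execute("INSERT INTO soils VALUES (1, 3, 1.25, 'loam')")
print(con.execute("SELECT soiltype FROM soils WHERE soil_id = 1").fetchone())
# → ('loam',)
```

Each inserted row must respect the declared data types, which is the point the text makes about following the data types allowed in the GIS package.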

5.2.3 Storing attribute data


Attribute data are often called tabular data and are normally stored in a relational database. Data
on different types of objects are usually stored in separate tables, each dedicated to a single object type. In
each table, line formats and lengths are identical throughout. The number of columns may be extended by
combining several tables, either by using a common access key or by entering new attributes manually. In
principle, table design is independent of whether the geometrical data to which attributes refer are in form
of vector data or raster data. However, table content must be relevant to the objects, so each object or line
must have a stable identity or access key. Data available in existing computerized registers are not always
in convenient tabular form. As a result, conversions and roundabout methods must often be used to access
data for GIS uses.

5.2.4 Adding and deleting fields


We regularly download data from the Internet for GIS projects. Often the data set contains far more
attributes than we need. It is a good idea to delete those fields that are not needed. This not only reduces
confusion in using the data set but also saves computer time for data processing. Adding a field is required
for the classification or computation of new data. To add a field, we must define the field in the same way as for
attribute data entry. Therefore, adding or deleting a field must be performed through either the data set or
the feature attribute table.

5.3 Vector and Raster Data Models


5.3.1 The grid or raster representation
The raster model represents reality through selected surfaces arranged in a regular pattern. Reality is thus
generalized in terms of uniform, regular cells, which are usually square but may be rectangular, triangular, or
hexagonal. The raster model is in many ways a mathematical model, as represented by the regular cell
pattern. Because squares are often used and a pictorial view of them resembles a classic grid of squares, it is
sometimes called the grid model. Geometric resolution of the model depends on the size of the cells.

Each cell in a raster carries a value, which represents the characteristic of a spatial phenomenon at the
location denoted by its row and column. Depending on the coding of its cell values, a raster can be either an
integer or a floating-point raster. An integer value has no decimal digits, whereas a floating-point value
does. Integer cell values usually represent categorical data, which may or may not be ordered. A land cover
raster may use 1 for urban land use, 2 for forested land, 3 for water body, and so on. A wildlife habitat
raster, on the other hand, may use the same integer numbers to represent ordered categorical data of
optimal, marginal, and unsuitable habitats. Floating-point cell values represent continuous, numeric data.
For example, a precipitation raster may have precipitation values of 20.15, 12.23, and so forth.
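The integer/float distinction maps directly onto array data types. A small sketch, reusing the land cover codes and precipitation values from the text:

```python
import numpy as np

# Integer raster: unordered categorical codes (1 = urban, 2 = forest, 3 = water)
landcover = np.array([[1, 2], [2, 3]], dtype=np.uint8)

# Floating-point raster: continuous precipitation values in mm
precip = np.array([[20.15, 12.23], [18.40, 25.05]], dtype=np.float32)

print("forest cells:", (landcover == 2).sum())            # count a category
print("mean precipitation:", round(float(precip.mean()), 2))  # a continuous statistic
```

Counting cells makes sense for the categorical raster, while averaging makes sense only for the continuous one, which is why the coding of cell values matters.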

The cell size determines the resolution of the raster data model. A cell size of 10 metres means that each
cell measures 100 square metres (10 x 10 metres). A cell size of 30 metres, on the other hand, means that
each cell measures 900 square metres (30 x 30 metres). Therefore a 10-metre raster has a finer (higher)
resolution than a 30-metre raster. A large cell size cannot represent the precise location of spatial features,
thus increasing the chance of having mixed features such as forest, pasture and water in a cell. These
problems lessen when a raster uses a smaller cell size. But a small cell size increases the data volume and
the data processing time.
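The resolution/volume trade-off is quadratic in the cell size, which a quick calculation makes concrete (the 9 km x 9 km study area is an invented example):

```python
# Cell area and cell count both scale with the square of the linear cell size
for cell in (30, 10):
    n = (9000 // cell) ** 2           # cells needed to cover a 9 km square
    print(f"{cell} m cells: {cell * cell} m2 each, {n:,} cells in total")
```

Moving from 30 m to 10 m cells shrinks each cell from 900 m2 to 100 m2 but multiplies the number of cells (and hence the data volume and processing time) by nine.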

A raster may have a single band or multiple bands. Each cell in a multiband raster is associated with more
than one cell value. An example of a multiband raster is a satellite image, which may have five, seven, or
more bands at each cell location. Each cell in a single-band raster has only one cell value. An example of
a single band raster is an elevation raster, which has one elevation value at each cell location.

Advantages of the grid or raster representation:


• simple concept
• easy management within the computer; many computer languages deal effectively with matrices
• map overlay and algebra is simple: cell by cell
• native format of satellite imagery
• suitable for scanned images
• modelling and interpolating is simple
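The "cell by cell" overlay and map algebra mentioned in the list above can be sketched as follows. The land cover codes and the 10-degree slope cutoff are illustrative assumptions:

```python
import numpy as np

# Cell-by-cell overlay of two co-registered rasters: a site is 'suitable'
# where the land cover is forest (code 2) AND the slope is gentle (< 10 deg)
landcover = np.array([[2, 1], [2, 3]])
slope_deg = np.array([[5.0, 3.0], [15.0, 2.0]])

suitable = (landcover == 2) & (slope_deg < 10.0)
print(suitable)   # only the forested, gentle-slope cell is True
```

Because both rasters share the same grid, the overlay is a single element-wise expression, which is exactly why raster map algebra is considered simple.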

Disadvantages of the grid or raster representation:


• fixed resolution can't be improved. So when combining rasters of various resolutions, you must accept
the coarsest resolution.
• information loss at any resolution, increasingly expensive storage and processing requirements to
increase resolution
• large amount of data especially at high resolution
• not appropriate for high-quality cartography (line drawing)
• slow transformations of projections (must transform each cell)
• some kinds of raster analysis (e.g. networks) are difficult or at least not 'natural' as there is no
correspondence to real-world objects, i.e. it is difficult to link additional attributive data
5.3.2 The vector representation
The basis of the vector model is the assumption that the real world can be divided into clearly defined
elements, each of which consists of an identifiable object with its own geometry of points, lines, or areas. In
principle, every point on a map and every point in the terrain it represents is uniquely located using two or
three numbers in a coordinate system such as in the northing, easting, and elevation Cartesian coordinate
system. On maps, coordinate systems are commonly displayed in graticules with location numbers along
the map edges. On the ground, coordinate systems are imaginary, yet marked out by survey control stations.
Points, lines and areas are mathematical figures that are used to describe the positions and extensions of
geographical objects.
Points
Points are the fundamental and simplest form of geographical objects and are zero-dimensional because
they have no extension. Each point is represented by a coordinate pair. Points are stored in the computer
with their 'exact' coordinates.
Lines
Lines consist of points linked together with line segments. The points represent start, break, or end points of
the line. They can be connected to form lines (straight or described by some other parametric function, e.g.
an arc of a circle, a spline, etc). A line has two points as a boundary: a start point and an end point. Lines
are one-dimensional, as they stretch in only one direction. Two is the lowest number of coordinate pairs
representing a straight line between two points. Mathematically a vector is a straight line having both
magnitude and direction. Therefore, a straight line between two coordinate points on a digital map is a
vector – hence the concept of vector data used in GIS and the designation of vector-based systems. There
are a number of different line types such as string, link, chain, arc and so forth.

Areas
An area is represented by a single line that encloses a space, thus forming a closed polygon. The surrounding
line, called a ring, has to start and end at the same point in order for the area to be closed and defined. Areas
are two dimensional because they stretch in two dimensions. The term polygon is often used to designate an
area. Area is the most complicated of the geometric primitives. It is possible to have complex areas with
different variants of inner rings defining holes, and holes in holes. Thus an area has a closed line in the
outer boundary, but can also have several inner closed boundaries.

Each of these spatial entities may have an identifier which is a key to an attached database containing the
attributes (tabular data) about the entity. All the information about a set of spatial entities can be kept
together. For example, a point which represents at a certain scale a population centre may have a database
entry for its name, population, mean income etc. A line which represents a road at a certain scale may have
a database entry for its route number, number of lanes, traffic capacity etc. A polygon which represents a
soil map unit may have a database entry for the various characteristics (depth, parent material, field texture,
etc.).

Advantages of the vector representation


• precision is limited only by the quality of the original data
• very storage-efficient, since only points about which there is information, or which form parts of lines
and boundaries, are stored
• structuring the data logically is very easy
• explicit topology makes some kinds of spatial analysis easy
• high-quality output

Disadvantages of the vector representation


• time-consuming data capture (digitizing, field survey)
• not suitable for continuous surfaces, such as scanned or remotely sensed images and models based on
them
• more expensive hardware and, especially, software


Figure 2: Vector vs Raster representation

6 SOURCES OF DATA AND METHODS OF CAPTURE


The most expensive part of a GIS is data construction. Converting from paper maps to digital maps used to
be the first step in constructing a database. But in recent years this situation has changed as digital data
clearinghouses have become commonplace on the Internet. Now we can look at what is available in the
public domain before deciding to create new data. Many government agencies at the federal, state, regional
and local levels have set up Spatial Data Infrastructures (SDIs) for distributing GIS data. Creating a GIS
database is a complex operation which may involve data capture, verification, and structuring processes.
Because raw geographical data are available in many different analogue or digital forms, such as maps,
aerial photographs, satellite images, or tables, a spatial database can be built in several, not mutually
exclusive ways. These are:
• Acquire geodata in digital form from a data supplier
• Digitize existing analogue maps
• Carry out one’s own survey of geographical entities
• Interpolate from point observations to a continuous surface

In all cases, the data must be geometrically registered to a generally accepted and properly defined
coordinate system and coded so that they can be stored in the internal database structure of the GIS being

used. The desired result should be a current, complete database which can support subsequent data analysis
and modelling.
6.1 Primary Sources (Land Surveying, GNSS, Photogrammetry, Remote Sensing)
New GIS data can be created from a variety of data sources. These include field data from surveying and
Global Navigation Satellite System (GNSS), photogrammetry and remote sensing sources.

(i) Survey Data


Survey data consist of distances, directions and elevations. Distances can be measured using a tape or an
electronic distance measurement (EDM) instrument. In GIS, field survey typically provides data for
determining parcel boundaries. Coordinate geometry provides the methods for creating geospatial data of
points, lines, and polygons from survey data. A typical example of this is the tacheometric survey process
illustrated in the figure below.
In tacheometry, a ground surface is divided into small parts and measured by points. The measurement of
horizontal and vertical directions is combined with optical or electronic distance measurements for the
determination of 3D coordinates. The accuracy of tacheometry lies in the range of a few to tens of
centimeters and depends on the size of the area and the accuracy specification of the equipment.

Figure 3: Basic principle of tacheometry

A tacheometer measures horizontal directions rather than horizontal angles directly. A horizontal direction
is referenced to the direction of north (in the illustration, the directions t_AB and t_A1). The horizontal
angle β1 is the difference between the two horizontal directions (β1 = t_A1 − t_AB). With the Cartesian
coordinates of A known, point 1 is defined by the horizontal distance S_A1 and the horizontal angle β1; its
Cartesian coordinates can then be computed by a simple polar computation, and the elevation of the
neighboring ground point 1 can be determined trigonometrically.
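The polar computation described above can be sketched as follows. The function name and sample values
are illustrative only:

```python
import math

def polar_to_cartesian(xa, ya, t_ab_deg, beta1_deg, s_a1):
    """Compute the coordinates of point 1 from station A by polar computation.
    t_ab_deg: known direction (bearing from north) A->B, in degrees
    beta1_deg: measured horizontal angle between directions A->B and A->1
    s_a1: measured horizontal distance A->1
    """
    t_a1 = math.radians(t_ab_deg + beta1_deg)  # direction A->1
    # Surveying convention: bearings are measured clockwise from north,
    # so the easting uses sin and the northing uses cos.
    x1 = xa + s_a1 * math.sin(t_a1)  # easting
    y1 = ya + s_a1 * math.cos(t_a1)  # northing
    return x1, y1

# Station A at (1000, 2000), direction A->B = 30 deg, beta1 = 60 deg, distance 100 m:
# direction A->1 is then 90 deg, so point 1 lies due east of A.
x1, y1 = polar_to_cartesian(1000.0, 2000.0, 30.0, 60.0, 100.0)
```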

The figure below outlines the processing steps of a common tacheometric survey. Classical data
processing still contains numerous manual operation steps before the data can be integrated into a digital
system (CAD, GIS) where they can be further processed. Using modern electronic tacheometers, an
automatic data flow can be achieved from the collection of the data in the field to the production of maps.
Object-coded data acquisition assigns an object code to each measured point; this enables further
processing in a GIS to be undertaken directly.

Figure 4: Stages of a surveying process

(ii) GNSS Data


A satellite navigation system with global coverage may be termed a global navigation satellite system or
GNSS. The United States Global Positioning System (GPS) and the Russian GLONASS are fully
operational global GNSSs. GPS is a satellite-supported radio navigation system developed and maintained
by the US military to determine the position of objects on the earth's surface worldwide, independently of
meteorological conditions and in real time.

In spite of its military origin, numerous civilian applications for GPS have developed over time. The
control of the system and the determination of achievable accuracies remain, however, a preserve of the
American military. The current GPS system traces back to the NAVSTAR (NAVigation System with Time
and Ranging) GPS and has been used for geodetic tasks since 1983.

The satellite system is divided into three segments:

• The space segment (see figure below), which is composed of 24 satellites (plus reserve satellites)
positioned in six circular orbital planes at an altitude of approx. 20 000 km with an inclination of 55
degrees. They orbit the earth in approx. 12 hours. This constellation guarantees continuous coverage by
at least four satellites at every location on earth.

Figure 5: GPS space segment (see system constellation).

• The control segment consisting of a master control center, three monitor stations and several other ground
station antennas distributed worldwide. The monitor stations check the altitude, position, speed and
overall health of orbiting satellites. This information is relayed to the main control center. The main
control center then uses these data to predict the behavior of each satellite’s orbit and clock. This is then
uploaded as navigation data via the ground station antennas to individual satellites.
• The user segment is composed, depending on the end user, of different receiver equipment. The
equipment receives satellite signals in real time and uses them for the computation of positions on the
earth's surface.

The basic principle of GPS is the measurement of so-called pseudodistances between a ground point and
four satellites. This distance measurement occurs via the determination of a time difference: both the
receiver (user segment) and the satellites produce synchronous signals, and the satellite signals are
recorded with some delay by the receiver. This delay ΔT is proportional to the distance between the ground
point and the satellite and can be converted, allowing for the error in the time measurement, to a distance
by multiplication with the speed of light.
This is referred to as a pseudodistance because there is a time synchronization error between the satellite
and receiver clocks. Using these distances and the satellite coordinates in a suitable reference system (WGS84,

World Geodetic System 1984), the coordinates of the ground point can be calculated by intersection. Three
distance measurements are adequate for positioning a new point; the fourth measurement is used for the
synchronization between the satellite clocks and the receiver clock.
Figure 6: GPS measurement principle.
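The intersection computation can be sketched numerically as follows. This is a simplified illustration only:
it assumes NumPy is available, ignores real-world error sources such as atmospheric delays, satellite clock
errors and earth rotation, and all names and values are hypothetical.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sats, pseudoranges, iters=10):
    """Gauss-Newton solution for receiver position and clock bias from
    >= 4 satellite positions (m) and measured pseudoranges (m)."""
    x = np.zeros(4)  # unknowns: X, Y, Z and the clock error expressed as c*dt
    for _ in range(iters):
        rho = np.linalg.norm(sats - x[:3], axis=1)     # geometric ranges
        predicted = rho + x[3]                         # modelled pseudoranges
        H = np.hstack([(x[:3] - sats) / rho[:, None],  # partial derivatives
                       np.ones((len(sats), 1))])
        dx, *_ = np.linalg.lstsq(H, pseudoranges - predicted, rcond=None)
        x += dx
    return x[:3], x[3] / C  # position (m) and receiver clock offset (s)

# Synthetic check: five satellites, a known ground point and a 1 ms clock error.
sats = np.array([[26e6, 0, 0], [0, 26e6, 0], [0, 0, 26e6],
                 [18e6, 18e6, 0], [18e6, 0, 18e6]])
truth = np.array([6.371e6, 1e5, 2e5])
pr = np.linalg.norm(sats - truth, axis=1) + C * 1e-3
pos, dt = solve_position(sats, pr)
```

With error-free synthetic pseudoranges the iteration recovers both the ground point and the receiver clock
offset; with real measurements the residuals would absorb the un-modelled error sources.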

(iii) Photogrammetry and Remote Sensing


Photogrammetric and remote sensing methods make use of the fact that electromagnetic radiation (e.g.
sunlight as natural, radar as artificial radiation) is reflected by objects on the earth's surface in different
ways. The reflected radiation can be collected with a sensor (camera, scanning system) and stored in
analogue form, e.g. on film, or in digital form on electronic media. The data gathered in this way are then
subjected to processing or evaluation (e.g. image measurements, image interpretation, classification) and
incorporated into a GIS.

Photogrammetry can be defined as the art, science and technology of obtaining reliable information about
physical objects and the environment through processes of recording, measuring and interpreting
photographic images and patterns of recorded radiant electromagnetic energy and other phenomena.
Analysis procedures range from obtaining distances, areas, elevations to generating DEMS, orthophotos,
thematic GIS data etc. Photogrammetry traditionally deals with images of the visible spectrum.

Optical photographic systems are the most commonly used for taking vertical aerial images with aerial
cameras. The most common sensor platform for the cameras is nowadays the airplane. Generally an area of
interest is flown by a plane at an altitude of between 300 and 7500 m in parallel flight strips (EW or NS
direction), which overlap one another laterally by about 20-30%. In order to be able to derive
three-dimensional information, it is required that every ground point be mapped in at least two neighboring
photos (the minimum forward overlap of neighboring pictures is therefore approx. 60%). Depending on the
intended use, aerial images are stored on black-and-white emulsions (panchromatic or infrared-sensitized)
or on color film.
Currently most aerial photographs are taken by digital cameras, and the resulting digital photographs are
ready to be incorporated into a GIS or used to digitize physical features in a photogrammetric workstation.

Measurements are captured from overlapping pairs of photographs using stereoplotters. These build a
model and allow 3D measurements to be captured, edited, stored and plotted. Stereoplotters have undergone
three major generations of development: analog (optical), analytical, and digital. Photogrammetric techniques
are particularly suitable for highly accurate capture of contours, digital elevation models, and almost any
type of object that can be identified on an aerial photograph or image.

Figure 7: Aerial photography with satellite positioning
Figure 8: Satellite as a platform for a scanning system

Remote sensing is the science and art of obtaining useful information about an object, area or phenomenon
through the analysis of data acquired by a device that is not in contact with the object, area or phenomenon
under investigation (Lillesand and Kiefer). Remote sensing systems are mostly mounted on satellite
platforms. They collect electromagnetic radiation over a range that goes far beyond that of photographic
systems (i.e. they also collect thermal infrared or microwaves besides the visible light spectrum).

The results are stored digitally as raster data. A distinction is made between two categories of scanning
systems in remote sensing: passive and active systems, as illustrated in the figure below.
In the case of passive systems the radiation comes from a natural source (the sun), reaches an object (the
earth's surface) and is reflected and/or absorbed by it. Absorbed radiation causes a warming of the object,
which then emits thermal radiation (thermal infrared, IR). The radiation reflected or emitted by the object is
then recorded by the passive sensor, which may be a camera, a scanner or a thermal sensor, depending on
the spectral sensitivity.


Figure 9: Passive vs Active sensors

In the case of active systems the scanner system itself emits electromagnetic radiation and receives it after
it has been reflected by an object on the earth's surface. In this case digital data acquisition occurs
independently of natural radiation and the weather situation. Active systems primarily consist of radar
(radio detection and ranging) methods, in which microwave pulses are sent from the antenna of the radar.
Objects in the radiation field reflect the waves, which are then partially received back by the antenna. The
distance of the object is computed from the pulse return time, and its direction from the antenna angle.
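The range computation of such a radar system is simple: because the pulse travels to the object and back,
the distance is half the product of the return time and the speed of light. A minimal sketch:

```python
C = 299_792_458.0  # speed of light, m/s

def radar_range(return_time_s):
    """Distance to an object from the two-way travel time of a radar pulse.
    The pulse covers the distance twice, hence the division by 2."""
    return C * return_time_s / 2.0

d = radar_range(66.7e-6)  # an echo received after 66.7 microseconds -> roughly 10 km
```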

The data recorded by optical sensors are normally processed using different computational techniques
depending on the application. Image processing encompasses the preparation of photograph copies,
enlargements, transformations, vertical projections and gray-value operations (e.g. digital filtering), while
image interpretation serves to derive information from the aerial or satellite images. Interpretation work is
undertaken with the use of stereoscopes, color mixing projectors (for mixing multi-spectral images) or
instruments for the recognition of temporal changes based on referenced multi-temporal images. The main
goal of photo and image processing is the determination of positional and height information of points on
the ground that appear in at least two images (a stereo pair).

An evaluation of satellite image data may be used for the automatic classification of objects, whereby
pixels are assigned to certain feature classes (e.g. type of soil coverage) based on the statistical gray-value
distribution in the different spectral ranges. Satellite images can be digitally processed to produce a wide
variety of thematic data for a GIS project. Land use/land cover data such as USGS National Land Cover
Data are typically derived from satellite images. Satellite images provide timely data and, if collected at
regular intervals, they can also provide temporal data that are valuable for recording and monitoring
changes in terrestrial and aquatic environments.

6.2 Secondary Data Sources (Scanning, Digitizing)


(i) Scanning
Objective
Maps are scanned to:
• Use digital scan image data as base maps for other (vector) mapped information
• Convert scanned data to vector data for use in GIS
Preparation
Scanning requires that a map scanned:
• Be of high cartographic quality, with clearly defined lines, text, and symbols
• Be clean without extraneous stains
• Have lines that are not too thin (preferably of width 0.1 mm or greater)

Scanning process
Scanning is the method that converts an analog map into a scanned file, which is then converted to vector
format through tracing. A scanner converts an analog map into a scanned image file in raster format. The
simplest type of map to be scanned is a black and white map where black lines represent map features and
white areas represent the background. Scanning converts the map into a binary scanned file in raster
format, in which each pixel has a value of either 1 (map feature) or 0 (background). Map features are
shown as raster lines, i.e. series of connected pixels in the scanned file. Color maps can also be scanned by
a scanner that can recognize colors.

Scanning consists of two operations: scanning and binary encoding. Scanning a map in gray-scale mode
produces a regular grid of pixels with 256 gray-scale levels, which range numerically from 0 (complete
black) to 255 (pure white). Map scanners normally use a pixel size of 25 μm, 50 μm, or 100 μm (1 μm =
0.001 mm). As a black and white line map comprises only black lines against a white background,
scanning must clearly differentiate black from white. Accordingly, the scanner must be precisely adjusted
to a gray-scale threshold corresponding to the map's demarcation between black and white.
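The binary encoding step can be sketched as a simple threshold operation; the threshold value and function
name here are illustrative:

```python
def binarize(gray_pixels, threshold=128):
    """Threshold a gray-scale scan (0 = black ... 255 = white) into a binary
    raster: 1 for map features (dark pixels), 0 for background (light pixels)."""
    return [[1 if v < threshold else 0 for v in row] for row in gray_pixels]

scan = [[250, 12, 245],
        [240,  8, 250]]
binary = binarize(scan)  # the dark middle column becomes a raster line of 1s
```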

(ii) Digitizing
Digitizing is the process of converting data from analogue to digital format. Manual digitizing uses a
digitizing table with a built-in electronic mesh which can sense the position of the cursor. To transmit the
x, y coordinates of a point to the connected computer, the operator simply clicks a button on the stylus after
lining up the cursor's crosshairs with the point. Many GIS packages have a built-in digitizing module for
manual digitizing. Digitizing usually begins with a set of control points (also called tics), which are later
used for converting the digitized map to real-world coordinates. Digitizing point features is simple: each
point is clicked once to record its location. Digitizing line features can follow either point mode or stream
mode. In point mode the operator selects the points to digitize; in stream mode, points are recorded at a
preset time or distance interval. Point mode is preferred if the features to be digitized have many
straight-line segments. Because the vector data model treats a polygon as a series of lines, digitizing
polygon features is the same as digitizing line features.

Although digitizing itself is mostly manual, the quality of digitizing can be improved with planning and
checking. An integrated approach is useful in digitizing different layers of a GIS database that share
common boundaries. For example, soils, vegetation types, and land-use types may share common
boundaries in a study area. Digitizing these boundaries only once and using them on each layer not only
saves time in digitizing but also ensures the matching of the layers. A rule of thumb in digitizing line or
polygon features is to digitize each line once and only once, to avoid duplicate lines. Duplicate lines are
seldom exactly on top of one another, even given the high accuracy of the digitizing table. One way to
reduce the number of duplicate lines is to put a transparent sheet on top of the source map and to mark off
each line on the transparent sheet after it is digitized. This method can also reduce the number of missing
lines.


By attaching a map sheet onto the tablet and tracing important points, lines and areas on the map with a
stylus, as shown in the figure below, the user is able to digitally collect coordinates of the features, which
are later transformed into the required coordinate system based on control (pass) points on the map.
Geometry and topology as well as the descriptive information are encoded directly into the corresponding
objects of the GIS. This method is particularly useful when complex map contents with irregular
geometries, many symbols and heterogeneous objects have to be collected from small sections of the map.
Manual digitizing is so time-consuming and labor intensive that the current trend is towards the adoption of
semiautomatic and automatic digitizing methods.

Figure 10: Manual Digitizing on a tablet

On-Screen Digitizing
This is manual digitizing on the computer monitor using a data source such as a digital aerial photograph or
other remotely sensed data that has been differentially rectified, i.e. corrected to remove image
displacements caused by camera tilt and terrain relief. On-screen digitizing is an efficient method for
editing or updating an existing layer, such as adding new trails or roads that are not on an existing layer but
are on a new digital image.


Manual digitizing vs Scanning


The advantages of manual digitizing are that it:
• Can be performed on inexpensive equipment
• Requires little training
• Does not need particularly high map quality
The disadvantages are that it:
• Is tedious
• Is time-consuming
The advantages of scanning are that it:
• Is easily performed
• Provides rapid compilation of digital map images
The disadvantages are that:
• Raster data files are large and consume memory space
• Attribute data cannot be linked directly to line and polygon structures in raster data
• Raster data lack “intelligence” and hence can be used only as bases for drawings
• Improvement of data quality can be labor intensive
• Clean maps with well defined lines are required
• Relatively expensive equipment is required where standards of accuracy are high

(iii) Raster to Vector: Vectorization


GIS applications sometimes require data in a form differing from that which is available. As a result, many
GISs now have facilities for automatic conversion between vector and raster formats. Raster data are
converted to vector data through vectorization; the reverse process is rasterization. In vectorization, areas
containing the same cell values are converted to polygons with attribute values equivalent to the
pre-conversion cell values.

Vectorization turns raster lines into vector lines in a process called tracing. Tracing or vectorization
involves three basic steps: line thinning, line extraction and topological reconstruction. Tracing can be
semiautomatic or manual. In semiautomatic mode, the user selects a starting point on the image map and
lets the computer trace all the connecting raster lines. In manual mode, the user determines the raster line
to be traced and the direction of tracing, e.g. by determining the position of each vertex.

Results of automatic tracing depend on the robustness of the tracing algorithm that is built into the GIS
package. Although no single tracing algorithm will work satisfactorily with different types of maps under
different conditions, some algorithms are better than others. Examples of problems that must be solved by
the tracing algorithm include: how to trace an intersection, where the width of a raster line may double or
triple; how to continue when a raster line is broken or when two raster lines are close together; and how to
separate a line from a polygon.

Some scanners vectorize as they scan. Otherwise, vectorization is usually part of the subsequent editing
process, or part of the process of automatic line tracing. The vectorization of lines, symbols, etc., may be
summarized in six steps:
1. A number of pixels forming a structure, such as a line are registered
2. All pixels transverse to the line, except those in its center, are stripped off (skeleton plotting)
3. Starting at one end, the pixels are connected one by one along the line (linearization)
4. Line curvatures are checked against set maxima which, if exceeded, indicate that the line is no longer
straight, so linearization terminates
5. Coordinates are determined for the start and end points of the terminated straight-line segment, and a
vector along the segment is formed accordingly
6. Little by little, lines and structures are assigned coordinates and vectorization continues.
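Steps 3 to 5 (linearization) can be sketched as follows. The deviation test and threshold are illustrative
assumptions, not the algorithm of any particular GIS package:

```python
def linearize(pixels, max_dev=0.5):
    """Collapse an ordered chain of raster line pixels (x, y) into straight-line
    vectors: extend the current segment until some intermediate pixel deviates
    from the start-end chord by more than max_dev pixels, then start a new one."""
    def deviation(p, a, b):
        # perpendicular distance of pixel p from the straight line through a and b
        (ax, ay), (bx, by), (px, py) = a, b, p
        dx, dy = bx - ax, by - ay
        length = (dx * dx + dy * dy) ** 0.5
        if length == 0:
            return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
        return abs(dx * (ay - py) - dy * (ax - px)) / length

    vectors, start = [], 0
    for i in range(2, len(pixels)):
        a, b = pixels[start], pixels[i]
        if any(deviation(pixels[j], a, b) > max_dev for j in range(start + 1, i)):
            vectors.append((pixels[start], pixels[i - 1]))  # terminate the segment
            start = i - 1
    vectors.append((pixels[start], pixels[-1]))
    return vectors

# An L-shaped chain of pixel centers collapses into two vectors:
chain = [(0, 0), (0, 1), (0, 2), (0, 3), (1, 3), (2, 3), (3, 3)]
segments = linearize(chain)  # ((0,0),(0,3)) and ((0,3),(3,3))
```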

Vectorization may be subject to errors and defects, including:


• Deformations or interruptions of lines intersecting at nodes
• Vectorization of extraneous stains and particles on the original map
• Vectorization of alphanumeric information and text
• Unintentional line breaks, resulting in divided vectors
• Dotted-line symbols (trails, soil-type boundaries, etc.), resulting in many small vectors
• Smooth curves that become jagged, i.e. introduction of unwanted inflection points

First, scanning uses the machine and computer algorithms to do most of the work, thus avoiding human
errors caused by fatigue or carelessness. Secondly, on-screen digitizing has both the scanned image and
vector lines on the same screen, making tracing more flexible than manual digitizing. With on-screen
digitizing, the operator can zoom in or out and can move around the raster image with ease. Manual
digitizing requires the operator's attention on both the digitizing table and the computer monitor, and the
operator can easily get tired. Thirdly, semi-automated vectorization has been reported to be more
cost-effective than manual digitizing.

7 DATA ANALYSIS
7.1 Queries
The basic GIS functions are those that display all or a selection of the objects under investigation. This may
include graphic elements and selected attributes. Such analysis operations are known as query functions.
They can be divided into two categories: spatial and attribute.
An attribute query relies on the non-geometric datasets linked to spatial data. Queries are generally broken
down into a series of elements that initially identify the field of interest; the search can then be carried out
using specific characteristics. Results may be in the form of a table, or displayed as spatial data. Criteria for
attribute queries may be based on numeric or textual searches or alphanumeric matching. In addition, the
database software associated with the GIS may have a query language available. Such languages are
designed to enable the user to tailor queries in a particular manner.

The nature of an entity is given by its attributes, its whereabouts by its geographical location or coordinates,
and the spatial relations between different entities in terms of proximity and connectivity (topology).
Attributes of geographic entities can be divided into three parts:
• Location (latitude, longitude, elevation)
• Qualitative or quantitative descriptors of some non-spatial property (e.g. parcel number, name of the
owner, land cover of a piece of land)
• Derivations of the spatial properties of the entity itself (e.g. area, shape, contiguity)
As in conventional information systems, new attributes can be attached to entities as the result of a
database operation. For example, a new attribute (or a new value of an attribute) can be computed for land
parcels larger than a given size, or for those having owners living abroad. For displaying information, the
new attribute could be the color or the symbol chosen to represent this kind of entity on the map. The new
attribute can be derived by any legitimate method of logical and mathematical analysis, including
operations on the proximity and topological properties of entities.

Logical searches in databases normally employ set algebra or Boolean algebra. Set algebra uses the three
operators equal to, greater than, and less than, and combinations thereof, for example:
= equal, <> not equal, < less than, > greater than, <= less than or equal, >= greater than or equal.

Boolean algebra uses the AND, OR, NOR, and NOT operators to test whether a statement is true or false,
e.g. A AND B, A OR B, A NOR B, A NOT B.
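A combined set-algebra and Boolean-algebra selection can be sketched on a hypothetical attribute table,
e.g. selecting parcels larger than 500 m² whose owners do not live abroad (all records invented for
illustration):

```python
# Hypothetical parcel records (rows of an attribute table)
parcels = [
    {"id": 1, "area": 650, "owner_abroad": True},
    {"id": 2, "area": 420, "owner_abroad": False},
    {"id": 3, "area": 800, "owner_abroad": False},
]

# Set algebra (area > 500) combined with Boolean algebra (... AND NOT owner_abroad)
selected = [p for p in parcels if p["area"] > 500 and not p["owner_abroad"]]
# only parcel 3 satisfies both conditions
```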

7.2 Spatial operations and Location Analysis


All spatial query functions are based around the GIS features (points, lines, polygons, surfaces) and there
are five geometric search functions: circle, rectangle, polygon, line and point search. These can be
developed into a host of more complicated queries.

An essential capability of any GIS is the ability to perform spatial queries, i.e. queries using the spatial
relationships between different features. Spatial queries are questions like:
• Find all water wells in the county
• Find all cities within a distance of 20 km of a river
• Find all lakes which meet a forest
• Show all the houses within a distance of 5 km of the shopping center
• Find all farmland with a river or brook as its boundary
Mostly the spatial queries are combined with queries on feature attributes, like:
• Find all parcels within a distance of 2 km of a motorway that are larger than 500 m²
• Find all parcels of Farmer Smith that are within a distance of 300 m of the river A
As these examples show, distances are very important for spatial operations. This is why buffering is an
essential method in any GIS.
Once the entities have been selected and tagged, the procedure for attribute analysis can be applied, either
per entity, or collectively. For example, the minimum and maximum water levels could be extracted for a
given year for each groundwater well, or the average water level of all wells could be computed. The result
of these computations can be used to tag the enclosing polygon, which can be displayed with a new color,
shading, or label.

ArcGIS offers the following spatial operators (the naming of the operators is not standardized, so
alternative formulations are given in parentheses):
• contain: The contain operator identifies those features of set A that contain (surround) features of set B.
• are within (are contained by): The are within operator identifies features fully surrounded by another
feature.
• completely contain (entirely contain): This operator works very much like the contain operator except
for boundary cases, i.e. if there is any common position (vertex) on the boundary, the larger feature
cannot completely contain the smaller one, but can only contain it.
• are completely within (are entirely contained by): see above
• intersect (crosses, overlap): The intersect operator shows all features which are wholly or partially
inside another feature.
• are within a distance of (meet with a distance = 0): Using this operator is very similar to performing a
buffering operation and finding the features that overlap the buffer zone.
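A basic building block behind the within-style operators is the point-in-polygon test. A minimal sketch
using the ray-casting method (boundary cases are ignored for simplicity):

```python
def point_in_polygon(px, py, ring):
    """Ray-casting test: is point (px, py) inside the polygon whose outer
    boundary is given by ring, a closed list of (x, y) vertices?"""
    inside = False
    for (x1, y1), (x2, y2) in zip(ring, ring[1:]):
        # does this edge cross the horizontal ray running right from the point?
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:
                inside = not inside  # each crossing flips inside/outside
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10), (0, 0)]  # closed ring
point_in_polygon(5, 5, square)   # inside
point_in_polygon(15, 5, square)  # outside
```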

Most GIS packages work with similar spatial operators, though the naming might be slightly different.
Below is an extract from the Geomedia GIS software.


(a) Buffering
Operations of the type 'A is within / beyond distance D from B', where D is a simple crow's-flight distance,
are carried out with the help of a buffering command. This is used to draw a zone around the initial entity
where the boundaries of the zone, or buffer, are all at distance D from the coordinates of the original entity.
If that entity is a point, then the zone is a circle; if a straight line, a rectangle with rounded ends; if an
irregular line or polygon, an enlarged version of the same. The buffer is in effect a new polygon that may be
used as a temporary aid to spatial query or that is itself added to the database. The determination of whether
a feature is inside/outside or overlaps the buffer zone is then carried out using the spatial operations
described above.
Buffering is the process of creating a new feature or features (buffer zones) that encompass the area around
a point, line or area feature within a specified distance or distances.

Buffering may be the most common spatial operation performed in a GIS or desktop mapping system. The
product of a buffering operation is a buffer zone, which can be one or more new area features. Some of the
different types of buffer zones are described below:

(i) Buffer zones around points:


The basic type of buffer zone is a single “circle” around a single point. The quotation marks indicate that in
a GIS, circles are usually stored as polygons. If the buffer radius is bigger than the distance between points,
the zones will overlap. In this case it must be decided whether this is acceptable or whether the buffer
zones should be merged.
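Storing a circular buffer as a polygon can be sketched as follows; the vertex count and names are
illustrative:

```python
import math

def point_buffer(x, y, radius, n=32):
    """Approximate a circular buffer zone around a point as a closed polygon
    with n vertices (in a GIS, circles are usually stored as polygons)."""
    ring = [(x + radius * math.cos(2 * math.pi * i / n),
             y + radius * math.sin(2 * math.pi * i / n)) for i in range(n)]
    ring.append(ring[0])  # close the ring: start point == end point
    return ring

zone = point_buffer(100.0, 200.0, 25.0)  # 33 vertices, first == last
```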

(ii) Buffer zones around linear features


Buffering linear features offers another way of creating buffer zones. Buffers around linear features can be created as squared (flat-ended) buffers or as rounded buffers. Rounded buffers realize the idea of equal distance exactly, but squared buffers are easier to calculate, need less memory to store and less time to work with.
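The corner coordinates of a squared (flat-ended) buffer follow directly from a perpendicular offset of the segment endpoints. A minimal sketch, assuming the two endpoints are distinct (a rounded buffer would additionally need arc vertices at both ends, hence more storage):

```python
import math

def squared_buffer(p1, p2, d):
    """Flat-ended ('squared') buffer of half-width d around the segment
    p1-p2: the four corners of a rectangle obtained by offsetting both
    endpoints perpendicular to the segment. Assumes p1 != p2."""
    (x1, y1), (x2, y2) = p1, p2
    length = math.hypot(x2 - x1, y2 - y1)
    # unit vector perpendicular to the segment direction
    nx, ny = -(y2 - y1) / length, (x2 - x1) / length
    return [(x1 + d * nx, y1 + d * ny),
            (x2 + d * nx, y2 + d * ny),
            (x2 - d * nx, y2 - d * ny),
            (x1 - d * nx, y1 - d * ny)]
```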


Like buffers around point features, linear buffers may or may not be merged. An important factor to consider when buffering linear features is the various ways in which linear objects in the world can be represented in a GIS. For example, a street can be stored from start to finish as a single linear feature, with many vertices along the way that represent turns in the road.

It is also possible to represent a road as a set of road segments, with a separate segment connecting each pair of vertices along the road, or with small groups of vertices combined into one segment. In that case the road is represented by a number of feature instances in a record set. The segment-based method is usually the state of the art.

The sketch shows the different results of applying unmerged buffers to differently modelled street geometry. To create a single buffer without overlapping and intersecting lines, merged buffer creation should be used.

(iii) Buffer zones around area features


Creating buffer zones around area features is basically quite simple, but there are some important points to
remember.

• Area buffering can take place inside or outside the area.
• Area buffers can either be merged or unmerged.

Example:

Suppose that you have a GIS database for a neighbourhood which has only one church and only one
kindergarten. The church and the kindergarten are 150 m apart. The database contains the following
features:
• Buildings with their attributes like “building number”, “land use” (e.g. church, residential, school,
kindergarten...) etc.
• Parcels etc
Describe the workflow (step by step) to find all residential houses within a distance of 100 m to the church
and within a distance of 70 m to the kindergarten. Note that a sketch would be helpful to describe the
workflow.

Solution:
1. Select the church (buildings, LandUse = Church) as a query called "Church".
2. Create an outside buffer with a distance of 100 m around the church. Store the buffer in your database
as "Church_plus100".
3. Find all residential houses (buildings, LandUse = Residential) which are completely within the buffer
zone Church_plus100 and show these houses in red.

4. Find all of these houses within a distance of 70 m of the kindergarten (buildings, LandUse =
Kindergarten). To do this, create a buffer zone Kindergarten_plus70.

5. Find all the residential houses that are completely within the buffer zone Church_plus100 and that
touch the buffer zone Kindergarten_plus70.
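Under simplifying assumptions (each building reduced to a representative point, so the buffer tests collapse to plain distance comparisons), the workflow above can be sketched in Python. The records, field names and coordinates are purely hypothetical:

```python
import math

# Hypothetical records: each building reduced to a representative point.
buildings = [
    {"id": 1, "land_use": "church",       "xy": (0.0, 0.0)},
    {"id": 2, "land_use": "kindergarten", "xy": (150.0, 0.0)},
    {"id": 3, "land_use": "residential",  "xy": (60.0, 20.0)},
    {"id": 4, "land_use": "residential",  "xy": (95.0, 0.0)},
    {"id": 5, "land_use": "residential",  "xy": (300.0, 0.0)},
]

def within(b, centre, d):
    """For point features, 'inside a buffer of radius d' is just a
    distance test against the buffer's centre."""
    return math.dist(b["xy"], centre["xy"]) <= d

# Step 1: select the church (and, analogously, the kindergarten)
church = next(b for b in buildings if b["land_use"] == "church")
kindergarten = next(b for b in buildings if b["land_use"] == "kindergarten")

# Steps 2-5: apply both buffer conditions to the residential houses
hits = [b["id"] for b in buildings
        if b["land_use"] == "residential"
        and within(b, church, 100.0)        # inside Church_plus100
        and within(b, kindergarten, 70.0)]  # touches Kindergarten_plus70
```

With real area features, the buffers would be stored as polygons and the tests carried out with the spatial operators described earlier rather than raw distances.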

(b) Overlays
Overlay is used in data integration. It is a technical process whose results can be used in realistic forms of spatial analysis. Overlays refer to the ability to bring together two or more datasets. There are three groups, namely:
• Point in polygon - overlays point objects on areas and produces, in effect, an 'is contained within' relationship.
• Line on polygon - allows line objects to be overlaid on area (polygon) objects. This also produces an 'is contained within' relationship.
• Polygon on polygon - allows one polygon theme/coverage/layer to be overlaid on or combined with another polygon coverage. This relates to adjacency.
Polygon overlay
Polygon overlay is a spatial operation in which a first thematic layer containing polygons is superimposed
onto another to form a new thematic layer with new polygons. This technique may be likened to placing
map overlays on top of each other on a light table. The corners of each new polygon are at the intersections
of the borders of the original polygons; hence, computing the coordinates of border intersections is a vital
function in polygon overlay. The computations are relatively straightforward, but they must be able to cope
with all conceivable geometric situations, including vertical lines, parallel lines, and so on. Computing the
intersections of a large number of polygons can be very time-consuming.

If areas are stored as links in a topological model, fewer intersections need to be computed, thus reducing
the computing time. The new intersections are identified as nodes and the lines between the nodes as links.
The new nodes and links then constitute a new topological structure. Consider as an example polygon C4 in
figure 14.15, which is a combination of polygons C and 4.

Each new polygon is a new object that is represented by a row in the attribute table. Each object has a new
attribute, which is represented by a column in the attribute table. Superimposing and comparing two
geometrical data sets of differing origin and accuracy often gives rise to a large number of small polygons. For example, two polygons representing land areas may have slightly differing geometric borders along a lake,
yet the borders may in fact coincide. Superimposing such polygons produces a proliferation of small spurious polygons called slivers (fig 14.16), which may be counteracted automatically by laying a small zone (a fuzzy tolerance) around each line.
tolerance-around each line. If two zones intersect when the polygons are superimposed, the lines they
surround combine into one line. Smaller polygons may also be removed later, using area size, shape, and
other criteria. In practice, however, it is difficult to set limits that reduce the number of undesirable small
polygons while retaining smaller polygons that are useful.

In addition to performing the overlay computations, the system can present a new image of the new
structure, borders between polygons of like identity being removed to form joint polygons. This
combination process may be either automatic or controlled by the user.
The overall procedure for polygon overlay is to:
1. Compute intersection points
2. Form nodes and links
3. Establish topology and hence new objects/new IDs
4. Remove excessive numbers of small polygons where necessary and join like polygons.
5. Compile new attributes and additions to the attribute table
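Step 1 of the procedure above, computing intersection points, reduces to solving two parametric line equations; the determinant test below is one standard way to cope with parallel and vertical lines. A minimal sketch:

```python
def segment_intersection(p1, p2, p3, p4, eps=1e-12):
    """Intersection point of segments p1-p2 and p3-p4, or None.
    The determinant test handles vertical and parallel lines without
    any special cases."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    denom = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if abs(denom) < eps:      # parallel or collinear: no single point
        return None
    # parameters along each segment; both must lie in [0, 1]
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / denom
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / denom
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None
```

In a full overlay, each returned point would become a new node, from which the new links and topology are built (steps 2 and 3).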

Polygon overlay is a sizable operation that may require relatively long processing times, even on the most powerful computers. The process may be used to clip geographical windows in a database. For
example, a township border in one thematic layer may be used to clip all other thematic layers in order to
produce a collection of data for that township only (fig 14.17).

Overlay produces a thematic map comprising thematically comparable homogeneous units [integrated terrain units (ITUs)] and an expanded attribute table. Arithmetic, logical, and statistical operations may be performed on the attributes to reveal patterns, connections, and conflicts, for example when simulating and studying alternatives. The advantages are that the number of alternatives considered is not limited by system capacity, that all alternatives can be based on the same data, and that all available information may be used.

Points in polygons
Just as polygons may be superimposed on other polygons, so may points be superimposed on polygons (fig
14.18). The points are then assigned the attributes of the polygons on which they are superimposed. The
relevant geometric operation merely associates points with polygons, so no new geometry is created in point-in-polygon operations. The approach requires computing the intersections of the polygon border with lines through the points, to test whether the points lie within the polygon. Attribute
tables are updated after all points are associated with polygons.
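The line-through-the-point test described above is commonly implemented as the even-odd (ray-casting) algorithm. A minimal sketch:

```python
def point_in_polygon(pt, ring):
    """Even-odd (ray-casting) test: cast a horizontal ray from pt and
    count its crossings with the polygon border. An odd count means the
    point is inside. `ring` is a list of (x, y) vertices; the first
    vertex need not be repeated at the end."""
    x, y = pt
    inside = False
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's y level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:       # crossing lies to the right of pt
                inside = not inside
    return inside
```

Once a point is found to lie inside a polygon, it is assigned that polygon's attributes and the attribute table is updated.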

Lines in polygons
Lines may be superimposed on polygons (fig 14.19), with the result that a new set of lines contains
attributes of both the original lines and the polygons. These particular computations are similar to those
used in polygon overlay: intersections are computed, nodes and links are formed, topology is established,
and attribute tables are updated. The new attributes can be analyzed after the geometrical operations are
carried out.
7.3 Operations using Topology
Topology is the description of the relative location of geographic phenomena independent of their exact
position. In digital data, topological relationships such as connectivity, adjacency and relative position are
usually expressed as relationships between nodes, links and polygons. For example, the topology of a line
includes its from- and to-nodes, and its left and right polygons.

In these operations the features are directly linked in the database; the linkage can be spatial, as in the
contiguity case where A is a direct neighbor of B, or the case where A is connected to B by a topological
network that models roads, rivers or other line features. Inter-entity distances over the network or other
measures of connectivity such as travel times, attractiveness of a route, etc. can be used to determine
indices of interaction.

The analysis of connectivity over a topologically directed net is much used in automated route finding (car and truck navigation systems) and for the optimum location of emergency services. In ArcGIS, such analysis is carried out with the Network Analyst extension. Transport times from A to B may be analysed either in terms of crow's-flight distance or along the different routes in a network, to determine expected travel times under different road conditions.
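Expected travel times over such a network can be computed with a shortest-path algorithm such as Dijkstra's; the small road network below is purely hypothetical:

```python
import heapq

def travel_times(graph, source):
    """Dijkstra's algorithm over a topological network: shortest travel
    time from `source` to every reachable node. `graph` maps each node
    to a list of (neighbour, time) links."""
    best = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        t, node = heapq.heappop(queue)
        if t > best.get(node, float("inf")):
            continue  # stale queue entry, already improved
        for nbr, w in graph.get(node, []):
            nt = t + w
            if nt < best.get(nbr, float("inf")):
                best[nbr] = nt
                heapq.heappush(queue, (nt, nbr))
    return best

# Hypothetical network (times in minutes): the direct link A-B is slow,
# the detour via C is faster.
roads = {
    "A": [("B", 10.0), ("C", 3.0)],
    "C": [("B", 4.0)],
    "B": [],
}
```

Changing the edge weights (e.g. for wet-weather speeds) and re-running the computation gives expected travel times for different road conditions.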


7.4 Data quality


Information about the quality of data should be contained in dedicated metadata. Unfortunately, this is not always the case. A critical evaluation of the data's suitability and reliability for the desired purpose has to be made during the capture or transfer process. The most important aspects are:
- Origin of the data
- Positional accuracy
- Attribute precision
- Consistency
- Completeness
- How current the data is

7.4.1 Positional accuracy


In ordinary mapping, accuracy is inversely proportional to the map scale number: a map at scale 1:1000 is more accurate than one at scale 1:100,000. The use of insufficiently accurate data may eventually prove more expensive than the original acquisition of the data. A GIS can plot smooth lines on a map and perform convincing computations and analyses on the basis of its stored data, regardless of the accuracy of the original data entered. For instance, a GIS can perform cut-and-fill calculations from data accurate to ±0.2 m just as readily as from far less accurate data, and plot equally fine profiles from both. However, the potential consequences of using the less accurate set of data are severe: the results may be totally useless or, at best, cause serious miscalculation of construction costs.
Positional accuracy is proposed by ISO (2001) to cover the following elements:
1. Absolute or external accuracy – closeness of reported coordinate values to values accepted as or being true
2. Relative or internal accuracy – closeness of the relative position of features in a data set to their
respective relative position accepted as or being true
3. Gridded data position accuracy – closeness of gridded data position values to values accepted as or
being true.
The preferred test for positional accuracy is comparison with an independent source of higher accuracy. When the dates of the test data and the source differ, one must ensure that the results relate to positional errors and not to temporal effects. Accuracy requirements are often stated as a standard deviation or RMS error. Gross errors should be rejected; the limit is commonly set at three times the standard deviation or RMS error for the data in question.
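The RMS comparison against an independent source of higher accuracy, with rejection at three times the RMS, can be sketched as follows (function names are illustrative):

```python
import math

def rms_error(reported, true):
    """Root-mean-square of the coordinate discrepancies between a data
    set and an independent source of higher accuracy."""
    sq = [(rx - tx) ** 2 + (ry - ty) ** 2
          for (rx, ry), (tx, ty) in zip(reported, true)]
    return math.sqrt(sum(sq) / len(sq))

def gross_errors(reported, true, limit_factor=3.0):
    """Indices of points whose positional error exceeds
    limit_factor * RMS - candidates for rejection."""
    rms = rms_error(reported, true)
    return [i for i, ((rx, ry), (tx, ty)) in enumerate(zip(reported, true))
            if math.hypot(rx - tx, ry - ty) > limit_factor * rms]
```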

7.4.2 Attribute precision


Considerations of accuracy usually focus on the accuracy of geometrical data, although in practice the
accuracy of attribute data is equally important. The proposed elements for defining the accuracy of attribute
data are:
• Classification correctness – comparison of the classes assigned to features or their attributes to a
universe of discourse, e.g. ground truth
• Non-quantitative attribute correctness
• Quantitative attribute accuracy
Attribute data often form the basis for classification. Classification correctness is determined by comparing the actual classification with the description of each class. Quality thus depends on a good description of the classes and on well-trained staff performing the classification. In some cases, misclassification may involve simple sorting errors: an object of type A is put in class B. Thus a residential building may be classified as a commercial building if the demarcation between the two classes is unclear. In other cases the class structure itself may be faulty, as when there is no class C for objects containing elements of both class A and class B.
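Classification correctness against ground truth can be quantified as the proportion of matching classes, together with a tally of the confusions. A minimal sketch:

```python
from collections import Counter

def classification_correctness(assigned, truth):
    """Proportion of features whose assigned class matches the ground
    truth, plus a tally of the misclassifications as
    (assigned class, true class) pairs."""
    assert len(assigned) == len(truth)
    matches = sum(a == t for a, t in zip(assigned, truth))
    confusion = Counter((a, t) for a, t in zip(assigned, truth) if a != t)
    return matches / len(truth), confusion
```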

7.4.3 Consistency
Logical consistency comprises the following:
Conceptual consistency – adherence to the rules of the schema. This means that it should be stated when
elements in a database are established on the basis of new rules, for example new relations, as compared
with what is specified in the data model.
Domain consistency – adherence of values to the value domain. This means that it should be stated if any of the data have values that exceed the specified limits – for example, if an object type has a specified accuracy of ±1 m whereas some objects are registered to only ±10 m.

Format consistency – data sets are to be stored in accordance with the physical structure or format specified. Data sets that do not meet the format specifications have reduced quality, which causes problems for users.
Topological consistency – correctness of the explicitly encoded topological characteristics, i.e. a database with lines should not contain topological errors such as overshoots and undershoots, unclosed polygons, double lines, or incorrect intersections of lines.
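One of the listed checks, 'polygons not closed', can be automated by testing whether each ring's last vertex coincides with its first. A minimal sketch:

```python
def unclosed_rings(rings, tol=1e-9):
    """Indices of polygon rings whose last vertex does not coincide
    (within tol) with their first vertex - i.e. 'polygon not closed'
    topological errors."""
    errors = []
    for i, ring in enumerate(rings):
        (x1, y1), (x2, y2) = ring[0], ring[-1]
        if abs(x1 - x2) > tol or abs(y1 - y2) > tol:
            errors.append(i)
    return errors
```

Checks for overshoots, undershoots and double lines follow the same pattern: a geometric predicate applied systematically to every feature.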
7.4.4 Completeness
Completeness describes how fully data on an object type have been entered relative to a specification – for example, whether attribute data as well as digital geometric data have been entered for all properties, all roads, all buildings, and so on in an area. There should be no commission (excess data present in a data set) or omission (data absent from a data set). Data completeness should therefore be recorded for the data entered and stored, to inform users of the limitations entailed.
7.4.5 Quality control
Normally, it is better to control data during production rather than after it. Data can then be supplied with a “quality guarantee”, and users will be able to assess the data's areas of application. It is well known that geographical data are processed (transformed, etc.) and analysed several times and by different institutions. Ideally, therefore, each process should be subject to quality control. A few simple and general rules may help to reduce the problems of inaccuracies:
• Employ verification routines to ensure quality
• Verify data as early as possible
• Verify data at several stages of their manipulation
• Know the nature of the data, be it geometry data or attribute data
• Be critical in all data uses
• Apply processing results carefully
• State inaccuracies associated with results and analyses

8 APPLICATIONS TO CIVIL ENGINEERING


Civil engineering is about developing and sustaining infrastructure. The profession covers many areas of
interest and a broad range of expertise. As a result, civil engineers work with a voluminous amount of data
from a variety of sources. Geographic information system (GIS) technology provides the tools for creating,
managing, analyzing, and visualizing the data associated with developing and managing infrastructure. GIS
allows civil engineers to manage and share data and turn it into easily understood reports and visualizations
that can be analyzed and communicated to others. This data can be related to both a project and its broader
geographic context. It also helps organizations and governments work together to develop strategies for
sustainable development. Thus, GIS is playing an increasingly important role in civil engineering
companies, supporting all phases of the infrastructure life cycle.

A centralized information system provides civil engineers with the IT framework for maintaining and
deploying critical data and applications across every aspect of the infrastructure project life cycle including
planning and design, data collection and management, spatial analysis, construction, and operations
management and maintenance. This architecture provides the tools to assemble intelligent GIS applications
and improve a project process by giving engineers, construction contractors, surveyors, and analysts a
single data source from which to work. Centrally hosting applications and data makes it easy to manage,
organize, and integrate geographic data, including CAD data, from existing databases to visualize, analyze,
and make decisions. The system helps combat data communication errors, eliminating the need for
multiple, flat files in disparate systems.

An engineering information system based on enterprise GIS technology streamlines activities from field
data collection to project management. With this single relational database, you are connected to all your
clients; construction sites; and inventory, network, and maintenance data.

1. Planning
It contains high-level planning functions for site location including environmental impact mitigation,
economic analysis, regulatory permitting, alternative siting analysis, routing utilities, what-if scenarios,
visualization of concept options, data overlay, modeling, and benefit/cost alternatives analysis.

2. Data Collection
It has specific functions to collect precise site data used for predesign analysis; design; and calculations
including field survey, topography, soils, subsurface geology, traffic, lidar, photogrammetry, imaging,
sensitive environmental areas, wetlands, hydrology, and other site-specific design-grade data. Mobile GIS is
the expansion of GIS from the office into the field. Wireless connectivity, geoservices, and Web mapping
applications enable communication with and coordination of service technicians and contractors in the
field. This increases efficiency and provides access to previously unavailable data for users who may have
limited GIS experience.

3. Environmental Analysis
It provides analysis to support design including hydrology analysis, volume calculations, soil load analysis,
traffic capacity, environmental impact, slope stability, materials consumption, runoff, erosion control, and
air emissions. During environmental analysis, engineers can view project maps, site photos, CAD files, survey measurements, and 3D renderings. Analysing the environment with a GIS reveals patterns, trends, and relationships that are not clearly evident without visualization of the data. Some of the analyses that can be performed with GIS are:
• Water distribution analysis
• Traffic management analysis
• Soil analysis
• Site feasibility analysis
• Environment impact analysis
• Volume or Area analysis of catchment
• River or canals pattern analysis
• Temperature and humidity analysis


4. Design
It allows creation of new infrastructure data for new civil works including grading, contouring,
specifications, cross sections, design calculations, mass haul plans, environmental mitigation plans, and
equipment staging. This includes integration with traditional design tools such as CAD and databases for
new design capabilities.

5. Construction
It provides the mechanics and management for building new infrastructure including takeoffs; machine
control; earth movement; intermediate construction, volume and material, and payment calculations;
materials tracking; logistics; schedules; and traffic management. To keep the construction within budget
and on schedule, GIS guides the efficient use of on-site resources by:
• Timely usage of construction equipment
• Working hours
• Effects of seasonal fluctuations
• Optimizing routes for dumpers and concrete trucks
• Earth filling and cutting
• Calculation of volumes and areas of each constructed phase, thereby helping in estimation and valuation
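The area part of such calculations is typically a shoelace-formula computation over the boundary vertices. A minimal sketch (a first volume estimate would additionally multiply the area by a design depth):

```python
def polygon_area(ring):
    """Planar area by the shoelace formula. `ring` lists the vertices
    of a closed boundary; the first vertex need not be repeated at
    the end."""
    n = len(ring)
    s = 0.0
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```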

6. Data Collection: As-Built Surveying


GIS provides the tools to collect precise site data and document existing conditions. With as-built surveying
infrastructure data, operators use defined, operational, industry-standard data models. As-built surveying
with GIS technology permits the surveyor to deliver data directly into the operational GIS, eliminating costly
data conversion and reducing errors.

7. Operations/Maintenance
It models utility and infrastructure networks and integrates other related types of data such as raster images
and CAD drawings. Spatial selection and display tools allow you to visualize scheduled work, ongoing
activities, recurring maintenance problems, and historical information. The topological characteristics of a
GIS database can support network tracing and can be used to analyze specific properties or services that
may be impacted by such events as stoppages, main breaks, and drainage defects.
