Figure 1.24 Uninhabited aerial vehicles (UAVs) used for environmental applications of remote
sensing. (a) NASA’s Ikhana UAV, with imaging sensor in pod under left wing. (Photo courtesy
NASA Dryden Flight Research Center and Jim Ross.) (b) A vertical takeoff UAV mapping seagrass
and coral reef environments in Florida. (Photo courtesy Rick Holasek and NovaSol.)
1.9 SUCCESSFUL APPLICATION OF REMOTE SENSING
The student should now begin to appreciate that successful use of remote sensing
is premised on the integration of multiple, interrelated data sources and analysis
procedures. No single combination of sensor and interpretation procedure is
appropriate to all applications. The key to designing a successful remote sensing
effort involves, at a minimum, (1) clear definition of the problem at hand,
(2) evaluation of the potential for addressing the problem with remote sensing
techniques, (3) identification of the remote sensing data acquisition procedures
appropriate to the task, (4) determination of the data interpretation procedures
to be employed and the reference data needed, and (5) identification of the
criteria by which the quality of information collected can be judged.
50 CHAPTER 1 CONCEPTS AND FOUNDATIONS OF REMOTE SENSING
All too often, one (or more) of the above components of a remote sensing
application is overlooked. The result may be disastrous. Many programs exist
with little or no means of evaluating the performance of remote sensing systems
in terms of information quality. Many people have acquired burgeoning
quantities of remote sensing data with inadequate capability to interpret them.
In some cases an inappropriate decision to use (or not to use) remote sensing
has been made because the problem was not clearly defined and the constraints
or opportunities associated with remote sensing methods were not clearly
understood. A clear articulation of the information requirements of a particular
problem, and the extent to which remote sensing might meet these requirements
in a timely manner, is paramount to any successful application.
The success of many applications of remote sensing is improved considerably
by taking a multiple-view approach to data collection. This may involve
multistage sensing, wherein data about a site are collected from multiple
altitudes. It may involve multispectral sensing, whereby data are acquired
simultaneously in several spectral bands. Or, it may entail multitemporal
sensing, where data about a site are collected on more than one occasion.
In the multistage approach, satellite data may be analyzed in conjunction
with high altitude data, low altitude data, and ground observations (Figure 1.25).
Each successive data source might provide more detailed information over
smaller geographic areas. Information extracted at any lower level of observation
may then be extrapolated to higher levels of observation.
A commonplace example of the application of multistage sensing techniques
is the detection, identification, and analysis of forest disease and insect problems.
From space images, the image analyst could obtain an overall view of the major
vegetation categories involved in a study area. Using this information, the areal
extent and position of a particular species of interest could be determined, and
representative subareas could be studied more closely at a more refined stage of
imaging. Areas exhibiting stress on the second-stage imagery could be delineated.
Representative samples of these areas could then be field checked to document
the presence and particular cause of the stress.
After analyzing the problem in detail by ground observation, the analyst
would use the remotely sensed data to extrapolate assessments beyond the small
study areas. By analyzing the large-area remotely sensed data, the analyst can
determine the severity and geographic extent of the disease problem. Thus, while
the question of specifically what the problem is can generally be evaluated only
by detailed ground observation, the equally important questions of where, how
much, and how severe can often be best handled by remote sensing analysis.
In short, more information is obtained by analyzing multiple views of the
terrain than by analysis of any single view. In a similar vein, multispectral
imagery provides more information than data collected in any single spectral
band. When the signals recorded in the multiple bands are analyzed in
conjunction with each other, more information becomes available than if only
a single band were used
or if the multiple bands were analyzed independently. The multispectral
approach is being used extensively in computer-based GISs (Section 1.10). The
GIS environment permits the synthesis, analysis, and communication of
virtually unlimited sources and types of biophysical and socioeconomic data,
as long as they can be geographically referenced. Remote sensing can be thought
of as the "eyes" of such systems, providing repeated, synoptic (even global)
visions of earth resources from an aerial or space vantage point.
Remote sensing affords us the capability to literally see the invisible. We
can begin to see components of the environment on an ecosystem basis, in that
remote sensing data can transcend the cultural boundaries within which much of
our current resource data are collected. Remote sensing also transcends
disciplinary boundaries. It is so broad in its application that nobody "owns" the
field. Important contributions are made to, and benefits derived from, remote
sensing by both the "hard" scientist interested in basic research and the "soft"
scientist interested in its operational application.
There is little question that remote sensing will continue to play an
increasingly broad and important role in the scientific, governmental, and
commercial sectors alike. The technical capabilities of sensors, space platforms,
data communication and distribution systems, GPSs, digital image processing
systems, and GISs are improving on almost a daily basis. At the same time, we
are witnessing the evolution of a spatially enabled world society. Most
importantly, we are becoming increasingly aware of how interrelated and fragile
the elements of our global resource base really are, and of the role that remote
sensing can play in inventorying, monitoring, and managing earth resources and
in modeling and helping us to better understand the global ecosystem and its
dynamics.
TABLE 1.1 Example Point, Line, and Area Features and Typical Attributes Contained in a GIS
The data in a GIS may be kept in individual standalone files (e.g.,
"shapefiles"), but increasingly a geodatabase is used to store and manage spatial
data. This is a type of relational database, consisting of tables with attributes in
columns and data records in rows (Table 1.2), and explicitly including locational
information for each record. While database implementations vary, there are
certain desirable characteristics that will improve the utility of a database in a
GIS. These characteristics include flexibility, to allow a wide range of database
queries and operations; reliability, to avoid accidental loss of data; security, to
limit access to authorized users; and ease of use, to insulate the end user from
the details of the database implementation.
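The table-with-explicit-location idea above can be sketched with a small relational table. This is a minimal illustration, not any particular geodatabase product; the table name, attributes, and coordinates are invented for the example.

```python
import sqlite3

# Hypothetical geodatabase-style table: attribute columns plus explicit
# locational information (x, y) for each data record.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE wells (
        well_id   INTEGER PRIMARY KEY,
        depth_m   REAL,   -- attribute column
        yield_lpm REAL,   -- attribute column
        x REAL, y REAL    -- location of each record (map coordinates)
    )
""")
con.executemany(
    "INSERT INTO wells VALUES (?, ?, ?, ?, ?)",
    [(1, 55.0, 120.0, 305200.0, 4771350.0),
     (2, 80.0,  60.0, 305450.0, 4771600.0)],
)

# A query mixing attribute and locational criteria, the kind of
# flexibility the text calls for.
rows = con.execute(
    "SELECT well_id FROM wells WHERE depth_m > 60 AND x < 305500"
).fetchall()
print(rows)  # [(2,)]
```

Because location is stored alongside the attributes, spatial filters and attribute filters can be combined in a single query.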
One of the most important benefits of a GIS is the ability to spatially
interrelate multiple types of information stemming from a range of sources.
This concept is illustrated in Figure 1.26, where we have assumed that a
hydrologist wishes to use a GIS to study soil erosion in a watershed. As shown,
the system contains data from a range of source maps (a) that have been
geocoded on a cell-by-cell basis to form a series of layers (b), all in geographic
registration. The analyst can then manipulate and overlay the information
contained in, or derived from, the various data layers. In this example, we
assume that assessing the potential for soil erosion throughout the watershed
involves the simultaneous cell-by-cell consideration of three types of data
derived from the original data layers: slope, soil erodibility, and surface runoff
potential. The slope information can be computed from the elevations in the
topography layer. The erodibility, which is an attribute associated with each soil
type, can be extracted from a relational database management system
incorporated in the GIS. Similarly, the runoff
potential is an attribute associated with each land cover type (land cover data
can be derived from remotely sensed imagery).
Figure 1.26 GIS analysis procedure for studying potential soil erosion.
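The cell-by-cell overlay described above can be sketched with toy arrays. The layer values, the 30 m cell size, and the additive weighting are all assumptions made for illustration; a real analysis would use an empirically based erosion model.

```python
import numpy as np

# Toy 4x4 versions of three geographically registered layers
# (values are invented, not taken from Figure 1.26).
elevation = np.array([[310., 312., 316., 322.],
                      [310., 313., 318., 326.],
                      [311., 314., 320., 330.],
                      [311., 315., 322., 334.]])
erodibility = np.array([[1, 1, 2, 2],
                        [1, 1, 2, 2],
                        [0, 0, 1, 2],
                        [0, 0, 1, 2]], dtype=float)
runoff = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [0, 1, 1, 2],
                   [0, 1, 1, 2]], dtype=float)

# Slope derived from the topography layer by finite differences
# (30 m grid cell size assumed).
dy, dx = np.gradient(elevation, 30.0)
slope = np.hypot(dx, dy)                 # rise over run, per cell

# Simultaneous cell-by-cell consideration of the three derived layers:
# a simple additive index with hypothetical weights.
erosion_index = 10.0 * slope + erodibility + runoff
high_risk = erosion_index > 4.0          # cells of highest potential
```

Because all layers share the same grid, the overlay reduces to elementwise arithmetic; here the steep, erodible, high-runoff cells in the upper right of the toy arrays score highest.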
map of the entire nation. Another common constraint is that the compilation
dates of different source maps must be reasonably close in time. For example, a
GIS analysis of wildlife habitat might yield incorrect conclusions if it were based
on land cover data that are many years out of date. On the other hand, since other
types of spatial data are less changeable over time, the map compilation date
might not be as important for a layer such as topography or bedrock geology.
Most GISs use two primary approaches to represent the locational
component of geographic information: a raster (grid-based) or vector
(point-based) format. The raster data model that was used in our soil erosion
example is illustrated in Figure 1.27b. In this approach, the location of
geographic objects or conditions is defined by the row and column position of
the cells they occupy. The value stored for each cell indicates the type of object
or condition that is found at that location over the entire cell. Note that the finer
the grid cell size used, the more geographic specificity there is in the data file.
A coarse grid requires less data storage space but will provide a less precise
geographic description of the original data. Also, when using a very coarse grid,
several data types and/or attributes may occur in each cell, but the cell is still
usually treated as a single homogeneous unit during analysis.
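The cell-size trade-off above can be demonstrated by coarsening a small raster. The land cover codes and the majority rule are assumptions for illustration; other resampling rules exist.

```python
import numpy as np

# Fine-resolution raster; cell values are land cover codes
# (hypothetical: 1 = water, 2 = forest, 3 = urban).
fine = np.array([[1, 1, 2, 2],
                 [1, 2, 2, 2],
                 [3, 3, 2, 2],
                 [3, 3, 3, 2]])

def coarsen(grid):
    """Aggregate 2x2 blocks into single cells using the majority class,
    so each coarse cell is treated as one homogeneous unit."""
    out = np.empty((grid.shape[0] // 2, grid.shape[1] // 2), dtype=int)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            block = grid[2*i:2*i+2, 2*j:2*j+2].ravel()
            out[i, j] = np.bincount(block).argmax()
    return out

coarse = coarsen(fine)
print(coarse)  # [[1 2] [3 2]] -- 4 cells instead of 16
```

The coarse grid needs a quarter of the storage, but minority classes within each block (e.g., the single forest cell in the lower-left block) are lost, which is exactly the precision penalty the text describes.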
The vector data model is illustrated in Figure 1.27c. Using this format,
feature boundaries are converted to straight-sided polygons that approximate
the original regions. These polygons are encoded by determining the coordinates
of their vertices, called points or nodes, which can be connected to form lines or
arcs. Topological coding includes "intelligence" in the data structure relative to
the spatial relationships (connectivity and adjacency) among features. For
example, topological coding keeps track of which arcs share common nodes and
which polygons are to the left and right of a given arc. This information
facilitates such spatial operations as overlay analysis, buffering, and network
analysis.
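The arc-node structure just described can be sketched as plain data. The node coordinates, arc table, and two polygons ("A", "B") are invented for the example; real topological formats store the same left/right and from/to information.

```python
# Hypothetical arc-node topology for two adjacent polygons A and B.
nodes = {1: (0.0, 0.0), 2: (4.0, 0.0), 3: (4.0, 3.0), 4: (0.0, 3.0)}

# arc: (from_node, to_node, left_polygon, right_polygon); 0 = outside
arcs = {
    "a1": (1, 2, 0, "A"),
    "a2": (2, 3, 0, "A"),
    "a3": (3, 1, "B", "A"),   # shared boundary between A and B
    "a4": (3, 4, 0, "B"),
    "a5": (4, 1, 0, "B"),
}

def neighbors(poly):
    """Find polygons adjacent to poly using only the left/right codes,
    without any coordinate geometry."""
    adj = set()
    for _from, _to, left, right in arcs.values():
        if left == poly and right != 0:
            adj.add(right)
        if right == poly and left != 0:
            adj.add(left)
    return adj

print(neighbors("A"))  # {'B'}
```

Adjacency falls out of the encoding itself: because arc "a3" records A on its right and B on its left, no geometric intersection test is needed, which is why topological coding speeds up overlay and network operations.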
Raster and vector data models each have their advantages and disadvantages.
Raster systems tend to have simpler data structures; they afford greater
computational efficiency in such operations as overlay analysis; and they
represent features having high spatial variability and/or "blurred boundaries"
(e.g., between pure and mixed vegetation zones) more effectively. On the other
hand, raster data volumes are relatively greater; the spatial resolution of the data
is limited to the size of the cells comprising the raster; and the topological
relationships among spatial features are more difficult to represent. Vector data
formats have the advantages of relatively lower data volumes, better spatial
resolution, and the preservation of topological data relationships (making such
operations as network analysis more efficient). However, certain operations
(e.g., overlay analysis) are more complex computationally in a vector data
format than in a raster format.
As we discuss frequently throughout this book, digital remote sensing images
are collected in a raster format. Accordingly, digital images are inherently
compatible spatially with other sources of information in a raster domain.
Because of this, "raw" images can be easily included directly as layers in a
raster-based GIS. Likewise, such image processing procedures as automated
land cover classification (Chapter 7) result in the creation of interpreted or
derived data files in a raster format. These derived data are again inherently
compatible with the other sources of data represented in a raster format. This
concept is illustrated in Plate 2, in which we return to our earlier example of
using overlay analysis to assist in soil erosion potential mapping for an area in
western Dane County, Wisconsin. Shown in (a) is an automated land cover
classification that was produced by processing Landsat Thematic Mapper (TM)
data of the area. (See Chapter 7 for additional information on computer-based
land cover classification.) To assess the soil erosion potential in this area, the
land cover data were merged with information on the intrinsic erodibility of the
soil present (b) and with land surface slope information (c). These latter forms
of information were already resident in a GIS covering the area. Hence, all data
could be combined for analysis in a mathematical model, producing the soil
erosion potential map shown in (d). To assist the viewer in interpreting the
landscape patterns shown in Plate 2, the GIS was also used to visually enhance
the four data sets with topographic shading based on a DEM, providing a
three-dimensional appearance.
For the land cover classification in Plate 2a, water is shown as dark blue,
nonforested wetlands as light blue, forested wetlands as pink, corn as orange,
other row crops as pale yellow, forage crops as olive, meadows and grasslands as
yellow-green, deciduous forest as green, evergreen forest as dark green,
low-intensity urban areas as light gray, and high-intensity urban areas as dark
gray. In (b), areas of low soil erodibility are shown in dark brown, with
increasing soil erodibility indicated by colors ranging from orange to tan. In (c),
areas of increasing steepness of slope are shown as green, yellow, orange, and
red. The soil erosion potential map (d) shows seven colors depicting seven levels
of potential soil erosion. Areas having the highest erosion potential are shown in
dark red. These areas tend to have row crops growing on inherently erodible
soils with sloping terrain. Decreasing erosion potential is shown in a spectrum
of colors from orange through yellow to green. Areas with the lowest erosion
potential are
1.11 SPATIAL DATA FRAMEWORKS FOR GIS AND REMOTE SENSING
If one is examining an image purely on its own, with no reference to any outside
source of spatial information, there may be no need to consider the type of
coordinate system used to represent locations within the image. In many cases,
however, analysts will be comparing points in the image to GPS-located
reference data, looking for differences between two images of the same area, or
importing an image into a GIS for quantitative analysis. In all these cases, it is
necessary to know how the column and row coordinates of the image relate to
some real-world map coordinate system.
Because the shape of the earth is approximately spherical, locations on the
earth's surface are often described in an angular coordinate or geographical
system, with latitude and longitude specified in degrees (°), minutes (′), and
seconds (″). This system originated in ancient Greece, and it is familiar to many
people today. Unfortunately, the calculation of distances and areas in an angular
coordinate system is complex. More significantly, it is impossible to accurately
represent the three-dimensional surface of the earth on the two-dimensional
planar surface of a map or image without introducing distortion in one or more
of the following elements: shape, size, distance, and direction. Thus, for many
purposes we wish to mathematically transform angular geographical coordinates
into a planar, or Cartesian (X, Y), coordinate system. The result of this
transformation process is referred to as a map projection.
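As a concrete illustration of such a transformation, the sketch below converts angular coordinates to planar (X, Y) values using the spherical Mercator formulas, a conformal cylindrical projection. The spherical-earth simplification and the sample point (Madison, Wisconsin, approximately) are assumptions for the example; production software would use an ellipsoidal datum.

```python
import math

R = 6378137.0  # WGS84 semi-major axis, in meters (sphere assumed here)

def geographic_to_mercator(lat_deg, lon_deg):
    """Transform angular (lat, lon) coordinates into planar Cartesian
    (X, Y) coordinates via the spherical Mercator projection."""
    lam = math.radians(lon_deg)
    phi = math.radians(lat_deg)
    x = R * lam
    y = R * math.log(math.tan(math.pi / 4 + phi / 2))
    return x, y

x, y = geographic_to_mercator(43.07, -89.40)  # roughly Madison, WI
```

Note the distortion trade-off discussed above: Mercator preserves local shapes (it is conformal) but badly exaggerates areas at high latitudes, so no single projection serves every purpose.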
While many types of map projections have been defined, they can be grouped
into several broad categories based either on the geometric models used or on
the spatial properties that are preserved or distorted by the transformation.
Geometric models for map projection include cylindrical, conic, and azimuthal
or planar surfaces. From a map user's perspective, the spatial properties of
map projections may be more important than the geometric model used. A
conformal map projection preserves angular relationships, or shapes, within local
areas; over large areas, angles and shapes become distorted. An azimuthal (or
zenithal) projection preserves absolute directions relative to the central point of
projection. An equidistant projection preserves equal distances, for some but not
all points: scale is constant either for all distances along meridians or for all
distances from one or two points. An equal-area (or equivalent) projection
preserves equal areas. Because a detailed explanation of the relationships among
these properties is beyond the scope of this discussion, suffice it to say that no
two-dimensional map projection can accurately preserve all of these properties,
but certain subsets of these characteristics can be preserved in a single
projection. For example, the azimuthal equidistant projection preserves both
direction and distance, but only relative to the central point of the projection;
directions and distances between other points are not preserved.
In addition to the map projection associated with a given image, GIS data
layer, or other spatial data set, it is often necessary to consider the datum used
with that map projection. A datum is a mathematical definition of the
three-dimensional solid (generally a slightly flattened ellipsoid) used to
represent the surface of the earth. The actual planet itself has an irregular shape
that does not correspond perfectly to any ellipsoid. As a result, a variety of
different datums have been described, some designed to fit the surface well in
one particular region (such as the North American Datum of 1983, or NAD83)
and others designed to best approximate the planet as a whole. Most GISs
require that both a map projection and a datum be specified before performing
any coordinate transformations.
To apply these concepts to the process of collecting and working with
remotely sensed images, note that most such images are initially acquired with
rows and columns of pixels aligned with the flight path (or orbit track) of the
imaging platform, be it a satellite, an aircraft, or a UAV. Before the images can
be mapped, or used in combination with other spatial data, they need to be
georeferenced. Historically, this process typically involved identification of
visible control points whose true geographic coordinates were known. A
mathematical model would then be used to transform the row and column
coordinates of the raw image into a defined map coordinate system. In recent
years, remote sensing platforms have been outfitted with sophisticated systems
to record their exact position and angular orientation. These systems,
incorporating an inertial measurement unit (IMU) and/or multiple onboard GPS
units, enable highly precise modeling of the viewing geometry of the sensor,
which in turn is used for direct georeferencing of the sensor data, relating them
to a defined map projection without the necessity of additional ground control
points.
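The historical control-point approach above can be sketched as a least-squares fit of an affine model mapping image (column, row) coordinates to map (E, N) coordinates. The four ground control points below are fabricated so that an exact affine relationship exists; real GCPs carry measurement error, and more complex models (see Chapters 3 and 7) handle terrain effects.

```python
import numpy as np

# Hypothetical ground control points: (col, row) in the raw image and
# the known map coordinates (Easting, Northing) of the same features.
pixel = np.array([[100, 200], [800, 200], [400, 900], [800, 900]], float)
map_xy = np.array([[300500.0, 4774000.0],
                   [304000.0, 4774000.0],
                   [302000.0, 4770500.0],
                   [304000.0, 4770500.0]])

# Solve map_xy = [col, row, 1] @ coeffs by least squares (>= 3 GCPs).
G = np.hstack([pixel, np.ones((len(pixel), 1))])
coeffs, *_ = np.linalg.lstsq(G, map_xy, rcond=None)

def georeference(col, row):
    """Transform raw image (col, row) into map coordinates (E, N)."""
    return np.array([col, row, 1.0]) @ coeffs

E, N = georeference(100, 200)
```

With an IMU/GPS-equipped platform, direct georeferencing replaces this GCP fit by modeling the sensor's position and orientation instead, but the end product, a pixel-to-map transformation, is the same.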
Once an image has been georeferenced, it may be ready for use with other
spatial information. On the other hand, some images may have further geometric
distortions, perhaps caused by varying terrain, or other factors. To remove these
distortions, it may be necessary to orthorectify the imagery, a process discussed
in Chapters 3 and 7.
1.12 VISUAL IMAGE INTERPRETATION
When we look at aerial and space images, we see various objects of different
sizes, shapes, and colors. Some of these objects may be readily identifiable while
others may not, depending on our own individual perceptions and experience.
When we can identify what we see on the images and communicate this
information to others, we are practicing visual image interpretation. The images
contain raw image data. These data, when processed by a human interpreter's
brain, become usable information.
Image interpretation is best learned through the experience of viewing
hundreds of remotely sensed images, supplemented by a close familiarity with
the environment and processes being observed. Given this fact, no textbook
alone can fully train its readers in image interpretation. Nonetheless, Chapters 2
through 8 of this book contain many examples of remote sensing images,
examples that we hope our readers will peruse and interpret. To aid in that
process, the remainder of this chapter presents an overview of the principles and
methods typically employed in image interpretation.
Aerial and space images contain a detailed record of features on the ground at
the time of data acquisition. An image interpreter systematically examines the
images and, frequently, other supporting materials such as maps and reports of
field observations. Based on this study, an interpretation is made as to the
physical nature of objects and phenomena appearing in the images.
Interpretations may take place at a number of levels of complexity, from the
simple recognition of objects on the earth's surface to the derivation of detailed
information regarding the complex interactions among earth surface and
subsurface features. Success in image interpretation varies with the training and
experience of the interpreter, the nature of the objects or phenomena being
interpreted, and the quality of the images being utilized. Generally, the most
capable image interpreters have keen powers of observation coupled with
imagination and a great deal of patience. In addition, it is important that the
interpreter have a thorough understanding of the phenomenon being studied as
well as knowledge of the geographic region under study.
The challenge of developing these interpretation skills continues to be mitigated
by the extensive use of aerial and space imagery in such day-to-day activities as
navigation, GIS applications, and weather forecasting.
A systematic study of aerial and space images usually involves several basic
characteristics of features shown on an image. The exact characteristics useful
for any specific task, and the manner in which they are considered, depend on
the field of application. However, most applications consider the following basic
characteristics, or variations of them: shape, size, pattern, tone (or hue), texture,
shadows, site, association, and spatial resolution (Olson, 1960).
Shape refers to the general form, configuration, or outline of individual
objects. In the case of stereoscopic images, the object's height also defines its
shape. The shape of some objects is so distinctive that their images may be
identified solely from this criterion. The Pentagon building near Washington,
DC, is a classic example. Obviously, not all shapes are this diagnostic, but every
shape is of some significance to the image interpreter.
Size of objects on images must be considered in the context of the image
scale. A small storage shed, for example, might be misinterpreted as a barn if size
were not considered. Relative sizes among objects on images of the same scale
must also be considered.
Pattern relates to the spatial arrangement of objects. The repetition of certain
general forms or relationships is characteristic of many objects, both natural
and constructed, and gives objects a pattern that aids the image interpreter in
recognizing them. For example, the ordered spatial arrangement of trees in an
orchard is in distinct contrast to that of natural forest tree stands.
Tone (or hue) refers to the relative brightness or color of objects on an image.
Figure 1.8 showed how relative photo tones could be used to distinguish between
deciduous and coniferous trees on black and white infrared photographs. Without
differences in tone or hue, the shapes, patterns, and textures of objects could not
be discerned.
Texture is the frequency of tonal change on an image. Texture is produced by
an aggregation of unit features that may be too small to be discerned individually
on the image, such as tree leaves and leaf shadows. It is a product of their
individual shape, size, pattern, shadow, and tone. It determines the overall visual
"smoothness" or "coarseness" of image features. As the scale of the image is
reduced, the texture of any given object or area becomes progressively finer and
ultimately disappears. An interpreter can often distinguish between features with
similar reflectances based on their texture differences. An example would be
the smooth texture of green grass as contrasted with the rough texture of green
tree crowns on medium-scale airphotos.
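Texture as "frequency of tonal change" can be quantified very simply: one common measure is the standard deviation of tones in a moving window. The two synthetic patches below (an even-toned "grass" patch and a checkerboard standing in for tree crowns and their shadows) are invented to mimic the grass-versus-canopy example.

```python
import numpy as np

def local_std(image, half=1):
    """Texture image: standard deviation of tones in a moving window
    (window half-width `half`, clipped at the image edges)."""
    rows, cols = image.shape
    out = np.zeros_like(image, dtype=float)
    for i in range(rows):
        for j in range(cols):
            win = image[max(i - half, 0):i + half + 1,
                        max(j - half, 0):j + half + 1]
            out[i, j] = win.std()
    return out

# Even-toned patch (smooth texture, e.g., green grass).
smooth = np.full((5, 5), 120.0)
# Alternating bright/dark patch (rough texture, e.g., tree crowns
# and their shadows): values of 80 and 160 in a checkerboard.
rough = np.indices((5, 5)).sum(0) % 2 * 80 + 80.0

print(local_std(smooth).mean(), local_std(rough).mean())
```

Even though both patches could have identical mean brightness (similar overall reflectance), their texture measures differ sharply, which is exactly how an interpreter separates grass from tree crowns.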
Shadows are important to interpreters in two opposing respects: (1) the
shape or outline of a shadow affords an impression of the profile view of objects
(which aids interpretation), and (2) objects within shadows reflect little light and
are difficult to discern on an image (which hinders interpretation). For example,
the shadows cast by various tree species or cultural features (bridges, silos,