

characteristics and limitations, as discussed in Chapter 6. Whether employing a passive or active system, the remote sensing analyst needs to
keep in mind the nonuniformity and other characteristics of the energy
source that provides illumination for the sensor.
2. The atmosphere. The atmosphere normally compounds the problems introduced by energy source variation. To some extent, the atmosphere always modifies the strength and spectral distribution of the energy received by a sensor. It restricts where we can look spectrally, and its effects vary with wavelength, time, and place. The importance of these effects, like source variation effects, is a function of the wavelengths involved, the sensor used, and the sensing application at hand. Elimination of, or compensation for, atmospheric effects via some form of calibration is particularly important in those applications where repetitive observations of the same geographic area are involved.
3. The energy–matter interactions at the earth's surface. Remote sensing would be simple if every material reflected and/or emitted energy in a unique, known way. Although spectral response patterns such as those in Figure 1.9 play a central role in detecting, identifying, and analyzing earth surface materials, the spectral world is full of ambiguity. Radically different material types can have great spectral similarity, making identification difficult. Furthermore, the general understanding of the energy–matter interactions for earth surface features is at an elementary level for some materials and virtually nonexistent for others.
4. The sensor. An ideal sensor would be highly sensitive to all wavelengths, yielding spatially detailed data on the absolute brightness (or radiance) from a scene as a function of wavelength, throughout the spectrum, across wide areas on the ground. This "supersensor" would be simple and reliable, require virtually no power or space, be available whenever and wherever needed, and be accurate and economical to operate. At this point, it should come as no surprise that an ideal "supersensor" does not exist. No single sensor is sensitive to all wavelengths or energy levels. All real sensors have fixed limits of spatial, spectral, radiometric, and temporal resolution.
The choice of a sensor for any given task always involves trade-offs. For example, photographic systems generally have very fine spatial resolution, providing a detailed view of the landscape, but they lack the broad spectral sensitivity obtainable with nonphotographic systems. Similarly, many nonphotographic systems are quite complex optically, mechanically, and/or electronically. They may have restrictive power, space, and stability requirements. These requirements often dictate the type of platform, or vehicle, from which a sensor can be operated. Platforms can range from stepladders to aircraft (fixed-wing or helicopters) to satellites. In recent years, uninhabited aerial vehicles (UAVs) have become an increasingly important platform for remote sensing data acquisition.

While the development of UAV technology for military applications has received a great deal of attention from the media, such systems are also ideally suited for many civilian applications, particularly in environmental monitoring, resource management, and infrastructure management (Laliberte et al., 2010). UAVs can range from palm-size radio-controlled airplanes and helicopters to large aircraft weighing tens of tons and controlled from thousands of km away. They can be completely controlled through human intervention, or they can be partially or fully autonomous in their operation. Figure 1.24 shows two different types of UAVs used for environmental applications of remote sensing. In Figure 1.24(a), the Ikhana UAV is a fixed-wing aircraft based on the design of a military UAV but operated by NASA for civilian scientific research purposes. (The use of Ikhana for monitoring wildfires is discussed in Section 4.10, and imagery from this system is illustrated in Figure 4.34 and Plate 9.) In contrast, the UAV shown in Figure 1.24(b) is a vertical takeoff UAV, designed in the form of a helicopter. In this photo the UAV is carrying a lightweight hyperspectral sensor used to map marine environments such as seagrass beds and coral reefs.
Depending on the sensor–platform combination needed in a particular application, the acquisition of remote sensing data can be a very expensive endeavor, and there may be limitations on the times and places that data can be collected. Airborne systems require detailed flight planning in advance, while data collection from satellites is limited by the platform's orbit characteristics.
5. The data processing and supply system. The capability of current remote sensors to generate data far exceeds the capacity to handle these data. This is generally true whether we consider "manual" image interpretation procedures or digital analyses. Processing sensor data into an interpretable format can be—and often is—an effort entailing considerable thought, hardware, time, and experience. Also, many data users would like to receive their data immediately after acquisition by the sensor in order to make the timely decisions required in certain applications (e.g., agricultural crop management, disaster assessment). Fortunately, the distribution of remote sensing imagery has improved dramatically over the past two decades. Some sources now provide in-flight data processing immediately following image acquisition, with near real-time data downloaded over the Internet. In some cases, users may work with imagery and other spatial data in a cloud computing environment, where the data and/or software are stored remotely, perhaps even distributed widely across the Internet. At the opposite extreme—particularly for highly specialized types of imagery or for experimental or newly developed remote sensing systems—it may take weeks or months before data are made available, and the user may need to acquire not just the data but highly specialized or custom software for data processing. Finally, as discussed
in Section 1.6, most remote sensing applications require the collection and analysis of additional reference data, an operation that may be complex, expensive, and time consuming.

Figure 1.24 Uninhabited aerial vehicles (UAVs) used for environmental applications of remote sensing. (a) NASA's Ikhana UAV, with imaging sensor in pod under left wing. (Photo courtesy NASA Dryden Flight Research Center and Jim Ross.) (b) A vertical takeoff UAV mapping seagrass and coral reef environments in Florida. (Photo courtesy Rick Holasek and NovaSol.)
6. The users of remotely sensed data. Central to the successful application of any remote sensing system is the person (or persons) using the remote sensor data from that system. The "data" generated by remote sensing procedures become "information" only if and when someone understands their generation, knows how to interpret them, and knows how best to use them. A thorough understanding of the problem at hand is paramount to the productive application of any remote sensing methodology. Also, no single combination of data acquisition and analysis procedures will satisfy the needs of all data users.

Whereas the interpretation of aerial photography has been used as a practical resource management tool for nearly a century, other forms of remote sensing are relatively new, technically complex, and unconventional means of acquiring information. In earlier years, these newer forms of remote sensing had relatively few satisfied users. Since the late 1990s, however, as new applications continue to be developed and implemented, increasing numbers of users are becoming aware of the potentials, as well as the limitations, of remote sensing techniques. As a result, remote sensing has become an essential tool in many aspects of science, government, and business alike.
One factor in the increasing acceptance of remote sensing imagery by end users has been the development and widespread adoption of easy-to-use geovisualization systems such as Google Maps, Google Earth, NASA's WorldWind, and other web-based image services. By allowing more potential users to become comfortable and familiar with the day-to-day use of aerial and satellite imagery, these and other software tools have facilitated the expansion of remote sensing into new application areas.

1.9 SUCCESSFUL APPLICATION OF REMOTE SENSING

The student should now begin to appreciate that successful use of remote sensing is premised on the integration of multiple, interrelated data sources and analysis procedures. No single combination of sensor and interpretation procedure is appropriate to all applications. The key to designing a successful remote sensing effort involves, at a minimum, (1) clear definition of the problem at hand, (2) evaluation of the potential for addressing the problem with remote sensing techniques, (3) identification of the remote sensing data acquisition procedures appropriate to the task, (4) determination of the data interpretation procedures to be employed and the reference data needed, and (5) identification of the criteria by which the quality of information collected can be judged.

All too often, one (or more) of the above components of a remote sensing application is overlooked. The result may be disastrous. Many programs exist with little or no means of evaluating the performance of remote sensing systems in terms of information quality. Many people have acquired burgeoning quantities of remote sensing data with inadequate capability to interpret them. In some cases an inappropriate decision to use (or not to use) remote sensing has been made, because the problem was not clearly defined and the constraints or opportunities associated with remote sensing methods were not clearly understood. A clear articulation of the information requirements of a particular problem and the extent to which remote sensing might meet these requirements in a timely manner is paramount to any successful application.
The success of many applications of remote sensing is improved considerably
by taking a multiple-view approach to data collection. This may involve multistage
sensing, wherein data about a site are collected from multiple altitudes. It may
involve multispectral sensing, whereby data are acquired simultaneously in sev-
eral spectral bands. Or, it may entail multitemporal sensing, where data about a
site are collected on more than one occasion.
In the multistage approach, satellite data may be analyzed in conjunction
with high altitude data, low altitude data, and ground observations (Figure 1.25).
Each successive data source might provide more detailed information over smal-
ler geographic areas. Information extracted at any lower level of observation may
then be extrapolated to higher levels of observation.
A commonplace example of the application of multistage sensing techniques is the detection, identification, and analysis of forest disease and insect problems. From space images, the image analyst could obtain an overall view of the major vegetation categories involved in a study area. Using this information, the areal extent and position of a particular species of interest could be determined and representative subareas could be studied more closely at a more refined stage of imaging. Areas exhibiting stress on the second-stage imagery could be delineated. Representative samples of these areas could then be field checked to document the presence and particular cause of the stress.
After analyzing the problem in detail by ground observation, the analyst
would use the remotely sensed data to extrapolate assessments beyond the small
study areas. By analyzing the large-area remotely sensed data, the analyst can
determine the severity and geographic extent of the disease problem. Thus, while
the question of specifically what the problem is can generally be evaluated only by
detailed ground observation, the equally important questions of where, how
much, and how severe can often be best handled by remote sensing analysis.
In short, more information is obtained by analyzing multiple views of the ter-
rain than by analysis of any single view. In a similar vein, multispectral imagery
provides more information than data collected in any single spectral band. When
the signals recorded in the multiple bands are analyzed in conjunction with each
other, more information becomes available than if only a single band were used
or if the multiple bands were analyzed independently. The multispectral approach forms the heart of numerous remote sensing applications involving discrimination of earth resource types, cultural features, and their condition.

Figure 1.25 Multistage remote sensing concept.
Again, multitemporal sensing involves sensing the same area at multiple
times and using changes occurring with time as discriminants of ground condi-
tions. This approach is frequently taken to monitor land use change, such as sub-
urban development in urban fringe areas. In fact, regional land use surveys might
call for the acquisition of multisensor, multispectral, multistage, multitemporal
data to be used for multiple purposes!
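To make the multispectral and multitemporal ideas concrete, the short Python sketch below computes a normalized difference vegetation index (NDVI) by combining red and near-infrared bands, then differences the index between two dates. The arrays and values are purely illustrative stand-ins for co-registered image bands, not data from any particular sensor.

import numpy as np

# Hypothetical co-registered reflectance arrays (rows x columns) for two dates.
red_t1 = np.array([[0.08, 0.10], [0.30, 0.32]])
nir_t1 = np.array([[0.45, 0.48], [0.33, 0.35]])
red_t2 = np.array([[0.09, 0.11], [0.12, 0.13]])
nir_t2 = np.array([[0.44, 0.47], [0.50, 0.52]])

def ndvi(nir, red):
    # Combining two bands highlights green vegetation more clearly than
    # either band can on its own (the multispectral idea).
    return (nir - red) / (nir + red)

ndvi_date1 = ndvi(nir_t1, red_t1)
ndvi_date2 = ndvi(nir_t2, red_t2)

# Multitemporal sensing: the change between dates becomes the discriminant.
ndvi_change = ndvi_date2 - ndvi_date1
print(ndvi_change)

Cells whose index rises or falls sharply between the two dates would flag possible land use change for closer inspection.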
In any approach to applying remote sensing, not only must the right mix of
data acquisition and data interpretation techniques be chosen, but the right mix
of remote sensing and "conventional" techniques must also be identified. The student must recognize that remote sensing is a tool best applied in concert with
others; it is not an end in itself. In this regard, remote sensing data are currently

being used extensively in computer-based GISs (Section 1.10). The GIS environ-
ment permits the synthesis, analysis, and communication of virtually unlimited
sources and types of biophysical and socioeconomic data—as long as they can be
geographically referenced. Remote sensing can be thought of as the "eyes" of such systems, providing repeated, synoptic (even global) visions of earth resources from an aerial or space vantage point.
Remote sensing affords us the capability to literally see the invisible. We
can begin to see components of the environment on an ecosystem basis, in that
remote sensing data can transcend the cultural boundaries within which much of
our current resource data are collected. Remote sensing also transcends disciplinary boundaries. It is so broad in its application that nobody "owns" the field. Important contributions are made to—and benefits derived from—remote sensing by both the "hard" scientist interested in basic research and the "soft" scientist interested in its operational application.
There is little question that remote sensing will continue to play an increas-
ingly broad and important role in the scientific, governmental, and commercial
sectors alike. The technical capabilities of sensors, space platforms, data commu-
nication and distribution systems, GPSs, digital image processing systems, and
GISs are improving on almost a daily basis. At the same time, we are witnessing
the evolution of a spatially enabled world society. Most importantly, we are
becoming increasingly aware of how interrelated and fragile the elements of our
global resource base really are and of the role that remote sensing can play in
inventorying, monitoring, and managing earth resources and in modeling and
helping us to better understand the global ecosystem and its dynamics.

1.10 GEOGRAPHIC INFORMATION SYSTEMS (GIS)


We anticipate that the majority of individuals using this book will at some point
in their educational backgrounds and/or professional careers have experience
with geographic information systems. The discussion below is provided as a brief
introduction to such systems primarily for those readers who might lack such
background.
Geographic information systems are computer-based systems that can deal
with virtually any type of information about features that can be referenced by geo-
graphical location. These systems are capable of handling both locational data and
attribute data about such features. That is, not only do GISs permit the automated
mapping or display of the locations of features, but also these systems provide a
capability for recording and analyzing descriptive characteristics ("attributes") of the features. For example, a GIS might contain not only a map of the locations of roads but also a database of descriptors about each road. These attributes might include information such as road width, pavement type, speed limit, number of traffic lanes, date of construction, and so on. Table 1.1 lists other examples of attributes that might be associated with a given point, line, or area feature.

TABLE 1.1 Example Point, Line, and Area Features and Typical Attributes Contained in a GIS*

Point feature: Well (depth, chemical constituency)
Line feature: Power line (service capacity, age, insulator type)
Area feature: Soil mapping unit (soil type, texture, color, permeability)

*Attributes shown in parentheses.

The data in a GIS may be kept in individual standalone files (e.g., "shapefiles"), but increasingly a geodatabase is used to store and manage spatial data. This is a type of relational database, consisting of tables with attributes in columns and data records in rows (Table 1.2), and explicitly including locational information for each record. While database implementations vary, there are certain desirable characteristics that will improve the utility of a database in a GIS. These characteristics include flexibility, to allow a wide range of database queries and operations; reliability, to avoid accidental loss of data; security, to limit access to authorized users; and ease of use, to insulate the end user from the details of the database implementation.
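As a minimal illustration of the relational idea (attributes in columns, records in rows, with explicit location), the following Python sketch uses the standard sqlite3 module. The table name, columns, and values are hypothetical, and a production geodatabase would store full geometries rather than a single point per record.

import sqlite3

con = sqlite3.connect(":memory:")   # throwaway database for the sketch
cur = con.cursor()

# Attribute columns plus explicit locational columns for each record.
cur.execute("""CREATE TABLE wells (
    id INTEGER PRIMARY KEY,
    depth_m REAL,
    chemistry TEXT,
    lon REAL,
    lat REAL)""")
cur.execute("INSERT INTO wells VALUES (1, 85.3, 'high nitrate', -89.52, 43.07)")
con.commit()

# A query that mixes attribute and locational criteria.
for row in cur.execute(
        "SELECT id, depth_m FROM wells WHERE depth_m > 50 AND lat > 43.0"):
    print(row)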
One of the most important benefits of a GIS is the ability to spatially inter-
relate multiple types of information stemming from a range of sources. This con-
cept is illustrated in Figure 1.26, where we have assumed that a hydrologist
wishes to use a GIS to study soil erosion in a watershed. As shown, the system
contains data from a range of source maps (a) that have been geocoded on a cell-
by-cell basis to form a series of layers (b), all in geographic registration. The ana-
lyst can then manipulate and overlay the information contained in, or derived
from, the various data layers. In this example, we assume that assessing the
potential for soil erosion throughout the watershed involves the simultaneous
cell-by-cell consideration of three types of data derived from the original data lay-
ers: slope, soil erodibility, and surface runoff potential. The slope information can
be computed from the elevations in the topography layer. The erodibility, which
is an attribute associated with each soil type, can be extracted from a relational
database management system incorporated in the GIS. Similarly, the runoff
potential is an attribute associated with each land cover type (land cover data can be obtained through interpretation of aerial photographs or satellite images). The analyst can use the system to interrelate these three sources of derived data (c) in each grid cell and use the result to locate, display, and/or record areas where combinations of site characteristics indicate high soil erosion potential (i.e., steep slopes and highly erodible soil–land cover combinations).

TABLE 1.2 Relational Database Table Format

ID Number*    Street Name    Lanes  Parking   Repair Date  ...
143897834     "Maple Ct"     2      Yes       2012/06/10   ...
637292842     "North St"     2      Seasonal  2006/08/22   ...
347348279     "Main St"      4      Yes       2015/05/15   ...
234538020     "Madison Ave"  4      No        2014/04/20   ...

*Each data record, or "tuple," has a unique identification, or ID, number.

Figure 1.26 GIS analysis procedure for studying potential soil erosion.
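A minimal Python sketch of this cell-by-cell overlay logic is given below. The three arrays stand in for co-registered slope, soil erodibility, and runoff potential layers, and the multiplicative combination and threshold are illustrative only; they are not the model used in the text or in Plate 2.

import numpy as np

# Hypothetical co-registered layers on the same grid (values are illustrative).
slope_pct   = np.array([[ 2.0, 12.0], [18.0,  4.0]])      # derived from the topography layer
erodibility = np.array([[ 0.2,  0.4], [ 0.6,  0.3]])      # attribute joined from soil type
runoff      = np.array([[ 0.3,  0.7], [ 0.8,  0.2]])      # attribute joined from land cover

# Cell-by-cell combination of the three derived layers.
erosion_index = slope_pct * erodibility * runoff

# Flag cells whose combination of site characteristics suggests high potential.
high_potential = erosion_index > 3.0
print(erosion_index)
print(high_potential)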
The above example illustrates the GIS analysis function commonly referred
to as overlay analysis. The number, form, and complexity of other data analyses
possible with a GIS are virtually limitless. Such procedures can operate on the
system’s spatial data, the attribute data, or both. For example, aggregation is an
operation that permits combining detailed map categories to create new, less
detailed categories (e.g., combining "jack pine" and "red pine" categories into a single "pine" category). Buffering creates a zone of specified width around one or more features (e.g., the area within 50 m of a stream). Network analysis permits such determinations as finding the shortest path through a street network, determining the stream flows in a drainage basin, or finding the optimum location for a fire station. Intervisibility operations use elevation data to permit viewshed mapping of what terrain features can be "seen" from a specified location. Similarly, many GISs permit the generation of perspective views portraying terrain surfaces from a viewing position other than vertical.
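As one concrete example of these operations, the sketch below buffers a stream line using the shapely package (one commonly used geometry library); the coordinates and buffer distance are illustrative.

from shapely.geometry import LineString

# A stream digitized as a line (coordinates in meters, illustrative only).
stream = LineString([(0, 0), (120, 40), (260, 55)])

# Buffering: the zone within 50 m of the stream becomes a polygon that can
# then be overlaid with other layers (e.g., land cover near water).
zone_50m = stream.buffer(50.0)
print(round(zone_50m.area, 1))   # area of the buffer zone in square meters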
One constraint on the use of multiple layers in a GIS is that the spatial scale
at which each of the original source maps was compiled must be compatible with
the others. For example, in the analysis shown in Figure 1.26 it would be inap-
propriate to incorporate both soil data from very high resolution aerial photo-
graphs of a single township and land cover digitized from a highly generalized
map of the entire nation. Another common constraint is that the compilation
dates of different source maps must be reasonably close in time. For example, a
GIS analysis of wildlife habitat might yield incorrect conclusions if it were based
on land cover data that are many years out of date. On the other hand, since other
types of spatial data are less changeable over time, the map compilation date
might not be as important for a layer such as topography or bedrock geology.
Most GISs use two primary approaches to represent the locational compo-
nent of geographic information: a raster (grid-based) or vector (point-based) for-
mat. The raster data model that was used in our soil erosion example is illustrated
in Figure 1.27b. In this approach, the location of geographic objects or conditions
is defined by the row and column position of the cells they occupy. The value stored for each cell indicates the type of object or condition that is found at that location over the entire cell. Note that the finer the grid cell size used, the more geographic specificity there is in the data file. A coarse grid requires less data storage space but will provide a less precise geographic description of the original data. Also, when using a very coarse grid, several data types and/or attributes may occur in each cell, but the cell is still usually treated as a single homogeneous unit during analysis.
The vector data model is illustrated in Figure 1.27c. Using this format, feature
boundaries are converted to straight-sided polygons that approximate the original
regions. These polygons are encoded by determining the coordinates of their
vertices, called points or nodes, which can be connected to form lines or arcs.
Topological coding includes "intelligence" in the data structure relative to the spatial relationship (connectivity and adjacency) among features. For example, topological coding keeps track of which arcs share common nodes and what polygons are to the left and right of a given arc. This information facilitates such spatial operations as overlay analysis, buffering, and network analysis.
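The contrast between the two data models can be sketched in a few lines of Python; the class codes and coordinates below are invented for illustration.

import numpy as np

# Raster model: each cell carries a class code; location is implicit in the
# row/column position, and precision is limited by the cell size.
raster = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
])  # 1 = forest patch, 0 = other

# Vector model: the same patch stored as an explicit polygon whose vertices
# (nodes) are map coordinates; topology (shared arcs, adjacency) can be
# coded alongside.
forest_polygon = [(10.0, 10.0), (30.0, 10.0), (30.0, 30.0),
                  (10.0, 30.0), (10.0, 10.0)]

print(raster.sum(), len(forest_polygon) - 1)  # forest cells vs. polygon vertices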
Raster and vector data models each have their advantages and disadvantages.
Raster systems tend to have simpler data structures; they afford greater computa-
tional efficiency in such operations as overlay analysis; and they represent features having high spatial variability and/or "blurred boundaries" (e.g., between pure and mixed vegetation zones) more effectively. On the other hand, raster data volumes are relatively greater; the spatial resolution of the data is limited to the size of the cells comprising the raster; and the topological relationships among spatial features are more difficult to represent. Vector data formats have the advantages of relatively lower data volumes, better spatial resolution, and the preservation of topological data relationships (making such operations as network analysis more efficient). However, certain operations (e.g., overlay analysis) are more complex computationally in a vector data format than in a raster format.

Figure 1.27 Raster versus vector data formats: (a) landscape patches, (b) landscape represented in raster format, and (c) landscape represented in vector format.
As we discuss frequently throughout this book, digital remote sensing images
are collected in a raster format. Accordingly, digital images are inherently compa-
tible spatially with other sources of information in a raster domain. Because
of this, "raw" images can be easily included directly as layers in a raster-based GIS. Likewise, such image processing procedures as automated land cover classification (Chapter 7) result in the creation of interpreted or derived data files in a raster format. These derived data are again inherently compatible with the other sources of data represented in a raster format. This concept is illustrated in Plate 2, in which we return to our earlier example of using overlay analysis to assist in soil erosion potential mapping for an area in western Dane County, Wisconsin. Shown in (a) is an automated land cover classification that was produced by processing Landsat Thematic Mapper (TM) data of the area. (See Chapter 7 for additional information on computer-based land cover classification.) To assess
the soil erosion potential in this area, the land cover data were merged with infor-
mation on the intrinsic erodibility of the soil present (b) and with land surface
slope information (c). These latter forms of information were already resident in a
GIS covering the area. Hence, all data could be combined for analysis in a mathe-
matical model, producing the soil erosion potential map shown in (d). To assist
the viewer in interpreting the landscape patterns shown in Plate 2, the GIS was
also used to visually enhance the four data sets with topographic shading based
on a DEM, providing a three-dimensional appearance.
For the land cover classification in Plate 2a, water is shown as dark blue, non-
forested wetlands as light blue, forested wetlands as pink, corn as orange, other
row crops as pale yellow, forage crops as olive, meadows and grasslands as
yellow-green, deciduous forest as green, evergreen forest as dark green, low-
intensity urban areas as light gray, and high-intensity urban areas as dark gray. In
(b), areas of low soil erodibility are shown in dark brown, with increasing soil
erodibility indicated by colors ranging from orange to tan. In (c), areas of increas-
ing steepness of slope are shown as green, yellow, orange, and red. The soil
erosion potential map (d) shows seven colors depicting seven levels of potential
soil erosion. Areas having the highest erosion potential are shown in dark red.
These areas tend to have row crops growing on inherently erodible soils with
sloping terrain. Decreasing erosion potential is shown in a spectrum of colors
from orange through yellow to green. Areas with the lowest erosion potential are indicated in dark green. These include forested regions, continuous-cover crops, and grasslands growing on soils with low inherent erodibility, and flat terrain.
Remote sensing images (and information extracted from such images), along
with GPS data, have become primary data sources for modern GISs. Indeed, the
boundaries between remote sensing, GIS, and GPS technology have become
blurred, and these combined fields will continue to revolutionize how we inventory, monitor, and manage natural resources on a day-to-day basis. Likewise, these technologies are assisting us in modeling and understanding biophysical processes at all scales of inquiry. They are also permitting us to develop and communicate cause-and-effect "what-if" scenarios in a spatial context in ways never before possible.

1.11 SPATIAL DATA FRAMEWORKS FOR GIS AND REMOTE SENSING

If one is examining an image purely on its own, with no reference to any outside
source of spatial information, there may be no need to consider the type of coor-
dinate system used to represent locations within the image. In many cases, how-
ever, analysts will be comparing points in the image to GPS-located reference
data, looking for differences between two images of the same area, or importing
an image into a GIS for quantitative analysis. In all these cases, it is necessary
to know how the column and row coordinates of the image relate to some real-
world map coordinate system.
Because the shape of the earth is approximately spherical, locations on the
earth's surface are often described in an angular coordinate or geographical system, with latitude and longitude specified in degrees (°), minutes (′), and seconds (″). This system originated in ancient Greece, and it is familiar to many people today. Unfortunately, the calculation of distances and areas in an angular coordinate system is complex. More significantly, it is impossible to accurately represent the three-dimensional surface of the earth on the two-dimensional planar surface of a map or image without introducing distortion in one or more of the following elements: shape, size, distance, and direction. Thus, for many purposes we wish to mathematically transform angular geographical coordinates into a planar, or Cartesian (X, Y), coordinate system. The result of this transformation process is referred to as a map projection.
While many types of map projections have been defined, they can be grouped
into several broad categories based either on the geometric models used or on
the spatial properties that are preserved or distorted by the transformation.
Geometric models for map projection include cylindrical, conic, and azimuthal
or planar surfaces. From a map user’s perspective, the spatial properties of
map projections may be more important than the geometric model used. A con-
formal map projection preserves angular relationships, or shapes, within local
areas; over large areas, angles and shapes become distorted. An azimuthal (or
zenithal) projection preserves absolute directions relative to the central point of
projection. An equidistant projection preserves equal distances, for some but not
all points—scale is constant either for all distances along meridians or for all dis-
tances from one or two points. An equal-area (or equivalent) projection preserves
equal areas. Because a detailed explanation of the relationships among these
properties is beyond the scope of this discussion, suffice it to say that no two-
dimensional map projection can accurately preserve all of these properties, but
certain subsets of these characteristics can be preserved in a single projection.
For example, the azimuthal equidistant projection preserves both direction and
distance—but only relative to the central point of the projection; directions and
distances between other points are not preserved.
In addition to the map projection associated with a given image, GIS data
layer, or other spatial data set, it is also often necessary to consider the datum
used with that map projection. A datum is a mathematical definition of the three-dimensional solid (generally a slightly flattened ellipsoid) used to represent the surface of the earth. The actual planet itself has an irregular shape that does not correspond perfectly to any ellipsoid. As a result, a variety of different datums have been described; some designed to fit the surface well in one particular region (such as the North American Datum of 1983, or NAD83) and others designed to best approximate the planet as a whole. Most GISs require that both a map projection and a datum be specified before performing any coordinate transformations.
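In practice, such transformations are handled by software. The sketch below uses the pyproj package (one widely used library) to convert a geographic latitude/longitude pair on the WGS84 datum to planar UTM coordinates; the EPSG codes and the sample point are illustrative.

from pyproj import Transformer

# Geographic (angular) coordinates on the WGS84 datum to planar UTM zone 16N
# coordinates in meters. EPSG codes bundle the projection and datum choices.
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32616", always_xy=True)

lon, lat = -89.40, 43.07          # an illustrative point (longitude, latitude)
x, y = to_utm.transform(lon, lat)
print(round(x, 1), round(y, 1))   # easting and northing in meters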
To apply these concepts to the process of collecting and working with remo-
tely sensed images, most such images are initially acquired with rows and columns of pixels aligned with the flight path (or orbit track) of the imaging platform, be it a satellite, an aircraft, or a UAV. Before the images can be mapped, or used in combination with other spatial data, they need to be georeferenced. Historically, this process typically involved identification of visible control points whose true geographic coordinates were known. A mathematical model would then be used to transform the row and column coordinates of the raw image into a defined map coordinate system. In recent years, remote sensing platforms have been outfitted with sophisticated systems to record their exact position and angular orientation. These systems, incorporating an inertial measurement unit (IMU) and/or multiple onboard GPS units, enable highly precise modeling of the viewing geometry of the sensor, which in turn is used for direct georeferencing of the sensor data—relating them to a defined map projection without the necessity of additional ground control points.
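The control-point approach can be illustrated with a simple six-parameter affine model fitted by least squares; the pixel positions and map coordinates below are invented, and operational georeferencing typically uses more control points and more elaborate models.

import numpy as np

# Hypothetical ground control points: (column, row) in the raw image and the
# corresponding (X, Y) map coordinates, e.g., surveyed or measured with GPS.
cols  = np.array([ 120.0, 2040.0, 1010.0,  300.0])
rows  = np.array([  80.0,  150.0, 1900.0, 1750.0])
map_x = np.array([305200.0, 310050.0, 307480.0, 305680.0])
map_y = np.array([4771300.0, 4771120.0, 4766700.0, 4767050.0])

# Six-parameter affine model: map coordinate = a0 + a1*col + a2*row.
design = np.column_stack([np.ones_like(cols), cols, rows])
coef_x, *_ = np.linalg.lstsq(design, map_x, rcond=None)
coef_y, *_ = np.linalg.lstsq(design, map_y, rcond=None)

def to_map(col, row):
    # Apply the fitted transformation to any pixel position in the image.
    return (coef_x[0] + coef_x[1] * col + coef_x[2] * row,
            coef_y[0] + coef_y[1] * col + coef_y[2] * row)

print(to_map(500, 500))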
Once an image has been georeferenced, it may be ready for use with other
spatial information. On the other hand, some images may have further geometric
distortions, perhaps caused by varying terrain, or other factors. To remove these
distortions, it may be necessary to orthorectify the imagery, a process discussed
in Chapters 3 and 7.

1.12 VISUAL IMAGE INTERPRETATION

When we look at aerial and space images, we see various objects of different sizes,
shapes, and colors. Some of these objects may be readily identifiable while others
may not, depending on our own individual perceptions and experience. When we can
identify what we see on the images and communicate this information to others, we
are practicing visual image interpretation. The images contain raw image data. These
data, when processed by a human interpreter’s brain, become usable information.
Image interpretation is best learned through the experience of viewing hun-
dreds of remotely sensed images, supplemented by a close familiarity with the
environment and processes being observed. Given this fact, no textbook alone can
fully train its readers in image interpretation. Nonetheless, Chapters 2 through 8
of this book contain many examples of remote sensing images, examples that we
hope our readers will peruse and interpret. To aid in that process, the remainder
of this chapter presents an overview of the principles and methods typically
employed in image interpretation.
Aerial and space images contain a detailed record of features on the ground at
the time of data acquisition. An image interpreter systematically examines the ima-
ges and, frequently, other supporting materials such as maps and reports of field
observations. Based on this study, an interpretation is made as to the physical nat-
ure of objects and phenomena appearing in the images. Interpretations may take
place at a number of levels of complexity, from the simple recognition of objects on
the earth’s surface to the derivation of detailed information regarding the complex
interactions among earth surface and subsurface features. Success in image inter-
pretation varies with the training and experience of the interpreter, the nature of
the objects or phenomena being interpreted, and the quality of the images being
utilized. Generally, the most capable image interpreters have keen powers of obser-
vation coupled with imagination and a great deal of patience. In addition, it is
important that the interpreter have a thorough understanding of the phenomenon
being studied as well as knowledge of the geographic region under study.

Elements of Image Interpretation

Although most individuals have had substantial experience in interpreting "conventional" photographs in their daily lives (e.g., newspaper photographs), the
interpretation of aerial and space images often departs from everyday image
interpretation in three important respects: (1) the portrayal of features from an
overhead, often unfamiliar, vertical perspective; (2) the frequent use of wave-
lengths outside of the visible portion of the spectrum; and (3) the depiction of
the earth’s surface at unfamiliar scales and resolutions (Campbell and Wynne, 2011).
While these factors may be insignificant to the experienced image interpreter, they
can represent a substantial challenge to the novice image analyst! However, even

this challenge continues to be mitigated by the extensive use of aerial and space
imagery in such day-to-day activities as navigation, GIS applications, and weather
forecasting.
A systematic study of aerial and space images usually involves several basic
characteristics of features shown on an image. The exact characteristics useful
for any specific task and the manner in which they are considered depend on the field of application. However, most applications consider the following basic
characteristics, or variations of them: shape, size, pattern, tone (or hue), texture,
shadows, site, association, and spatial resolution (Olson, 1960).
Shape refers to the general form, configuration, or outline of individual objects. In the case of stereoscopic images, the object's height also defines its shape. The shape of some objects is so distinctive that their images may be identified solely from this criterion. The Pentagon building near Washington, DC, is a classic example. All shapes are obviously not this diagnostic, but every shape is of some significance to the image interpreter.
Size of objects on images must be considered in the context of the image
scale. A small storage shed, for example, might be misinterpreted as a barn if size
were not considered. Relative sizes among objects on images of the same scale
must also be considered.
Pattern relates to the spatial arrangement of objects. The repetition of certain
general forms or relationships is characteristic of many objects, both natural
and constructed, and gives objects a pattern that aids the image interpreter in
recognizing them. For example, the ordered spatial arrangement of trees in an
orchard is in distinct contrast to that of natural forest tree stands.
Tone (or hue) refers to the relative brightness or color of objects on an image.
Figure 1.8 showed how relative photo tones could be used to distinguish between
deciduous and coniferous trees on black and white infrared photographs. Without
differences in tone or hue, the shapes, patterns, and textures of objects could not
be discerned.
Texture is the frequency of tonal change on an image. Texture is produced by
an aggregation of unit features that may be too small to be discerned individually
on the image, such as tree leaves and leaf shadows. It is a product of their indivi-
dual shape, size, pattern, shadow, and tone. It determines the overall visual
"smoothness" or "coarseness" of image features. As the scale of the image is reduced, the texture of any given object or area becomes progressively finer and ultimately disappears. An interpreter can often distinguish between features with similar reflectances based on their texture differences. An example would be the smooth texture of green grass as contrasted with the rough texture of green tree crowns on medium-scale airphotos.
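Although texture is judged visually by the interpreter, the same idea can be expressed numerically as local tonal variability. The Python sketch below computes the standard deviation of brightness within a small moving window over an invented image; it is only a stand-in for the visual impression of smoothness or coarseness described above.

import numpy as np

# A tiny brightness image: the left half is tonally smooth, the right half
# varies strongly from pixel to pixel (a rougher texture).
img = np.array([
    [100, 101, 100,  40, 200,  60],
    [101, 100, 101, 210,  50, 190],
    [100, 101, 100,  55, 205,  45],
], dtype=float)

def local_std(a, size=3):
    # Standard deviation inside each size-by-size window, a simple proxy for
    # the "frequency of tonal change" that defines texture.
    pad = size // 2
    padded = np.pad(a, pad, mode="edge")
    out = np.zeros_like(a)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].std()
    return out

print(local_std(img).round(1))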
Shadows are important to interpreters in two opposing respects: (1) The
shape or outline of a shadow affords an impression of the profile view of objects (which aids interpretation) and (2) objects within shadows reflect little light and are difficult to discern on an image (which hinders interpretation). For example,
the shadows cast by various tree species or cultural features (bridges, silos,
