Principles of Remote Sensing
Editors
Lucas L. F. Janssen
Gerrit C. Huurneman

Authors
Wim H. Bakker
Ben G. H. Gorte
John A. Horn
Lucas L. F. Janssen
Christine Pohl
Anupma Prakash
Colin V. Reeves
Michael J. C. Weir
Tsehaie Woldai
Contents

1  Introduction to remote sensing (L. L. F. Janssen) . . . 25
   1.1  Spatial data acquisition . . . 26
   1.2  Application of remote sensing . . . 32
   1.3  Structure of this textbook . . . 43

2  Electromagnetic energy and remote sensing (T. Woldai) . . . 49
   2.1  Introduction . . . 50
   2.2  Electromagnetic energy . . . 52
        2.2.1  Waves and photons . . . 53
        2.2.2  Sources of EM energy . . . 56
        2.2.3  Electromagnetic spectrum . . . 58
        2.2.4  Active and passive remote sensing . . . 60
   2.3  Energy interaction in the atmosphere . . . 61
        2.3.1  Absorption and transmission . . . 63
        2.3.2  Atmospheric scattering . . . 65
   2.4  Energy interactions with the Earth's surface . . . 70

3  Sensors and platforms (L. L. F. Janssen & W. H. Bakker) . . . 83
   3.1  Introduction . . . 84
   3.2  Sensors . . . 85
        3.2.1  Passive sensors . . . 87
        3.2.2  Active sensors . . . 97
   3.3  Platforms . . . 101
        3.3.1  Airborne remote sensing . . . 102
        3.3.2  Spaceborne remote sensing . . . 104
   3.4  Image data characteristics . . . 107
   3.5  Data selection criteria . . . 109

4  Aerial cameras (J. A. Horn) . . . 118
   4.1  Introduction . . . 119
   4.2  Aerial camera . . . 122
        4.2.1  Lens cone . . . 123
        4.2.2  Film magazine and auxiliary data . . . 125
        4.2.3  Camera mounting . . . 127
   4.3  Spectral and radiometric characteristics . . . 128
        4.3.1  General sensitivity . . . 130
        4.3.2  Spectral sensitivity . . . 131
        4.3.3  Monochrome photography . . . 132
        4.3.4  True colour and colour infrared photography . . . 134
        4.3.5  Scanning . . . 138
   4.4  Spatial characteristics . . . 139
        4.4.1  Scale . . . 140
        4.4.2  Spatial resolution . . . 142
   4.5  Aerial photography missions . . . 143
   4.6  Advances in aerial photography . . . 147

5  Multispectral scanners (W. H. Bakker) . . . 154
   5.1  Introduction . . . 155
   5.2  Whiskbroom scanner . . . 156
        5.2.1  Spectral characteristics . . . 157
        5.2.2  Geometric characteristics . . . 159
   5.3  Pushbroom scanner . . . 160
        5.3.1  Spectral characteristics . . . 162
        5.3.2  Geometric characteristics . . . 163
   5.4  Some operational spaceborne multispectral scanners . . . 164
        5.4.1  Meteosat-5 . . . 165
        5.4.2  NOAA-15 . . . 166
        5.4.3  Landsat-7 . . . 168
        5.4.4  SPOT-4 . . . 171
        5.4.5  IRS-1D . . . 172
        5.4.6  IKONOS . . . 173
        5.4.7  Terra . . . 175
        5.4.8  EO-1 . . . 177

6  RADAR (C. Pohl) . . . 184
   6.1  What is radar? . . . 185
   6.2  Principles of imaging radar . . . 187
   6.3  Geometric properties of radar . . . 192
        6.3.1  Radar viewing geometry . . . 193
        6.3.2  Spatial resolution . . . 195
        6.3.3  Synthetic Aperture Radar (SAR) . . . 198
   6.4  Distortions in radar images . . . 199
        6.4.1  Scale distortions . . . 200
        6.4.2  Terrain-induced distortions . . . 201
        6.4.3  Radiometric distortions . . . 205
   6.5  Interpretation of radar images . . . 209
        6.5.1  Microwave signal and object interactions . . . 210
        6.5.2  Scattering patterns . . . 213
        6.5.3  Applications of radar . . . 214
   6.6  Advanced radar processing techniques . . . 215

7  Remote sensing below the ground surface (C. V. Reeves) . . . 219
   7.1  Introduction . . . 220
   7.2  Gamma-ray surveys . . . 221
   7.3  Gravity and magnetic anomaly mapping . . . 223
   7.4  Electrical imaging . . . 227
   7.5  Seismic surveying . . . 229

8  Radiometric aspects (A. Prakash) . . . 235
   8.1  Introduction . . . 236
   8.2  Cosmetic corrections . . . 237
        8.2.1  Periodic line dropouts . . . 238
        8.2.2  Line striping . . . 240
        8.2.3  Random noise or spike noise . . . 242
   8.3  Atmospheric corrections . . . 243
        8.3.1  Haze correction . . . 244
        8.3.2  Sun angle correction . . . 245
        8.3.3  Skylight correction . . . 247

9  Geometric aspects (L. L. F. Janssen & M. J. C. Weir) . . . 252
   9.1  Introduction . . . 253
   9.2  Relief displacement . . . 255
   9.3  Two-dimensional approaches . . . 258
        9.3.1  Georeferencing . . . 259
        9.3.2  Geocoding . . . 262
   9.4  Three-dimensional approaches . . . 265
        9.4.1  Monoplotting . . . 266
        9.4.2  Orthoimage production . . . 268
        9.4.3  Stereoplotting . . . 269

10  Image enhancement and visualisation (B. G. H. Gorte) . . . 275
    10.1  Introduction . . . 276
    10.2  Perception of colour . . . 277
         10.2.1  Tri-stimuli model . . . 278
         10.2.2  Colour spaces . . . 280
    10.3  Visualization of image data . . . 286
         10.3.1  Histograms . . . 287
         10.3.2  Single band image display . . . 290
    10.4  Colour composites . . . 293
         10.4.1  Application of RGB and IHS for image fusion . . . 295
    10.5  Filter operations . . . 297
         10.5.1  Noise reduction . . . 299
         10.5.2  Edge enhancement . . . 300

11  Visual image interpretation (L. L. F. Janssen) . . . 306

12  Digital image classification (L. L. F. Janssen & B. G. H. Gorte) . . . 341
    12.1  Introduction . . . 342
    12.2  Principle of image classification . . . 344
         12.2.1  Image space . . . 345
         12.2.2  Feature space . . . 346
         12.2.3  Image classification . . . 349
    12.3  Image classification process . . . 351
         12.3.1  Preparation for image classification . . . 353
         12.3.2  Supervised image classification . . . 355
         12.3.3  Unsupervised image classification . . . 356
         12.3.4  Classification algorithms . . . 359
    12.4  Validation of the result . . . 365
    12.5  Problems in image classification . . . 368

Glossary . . . 379
Appendix . . . 409
List of Figures

1.1–1.8   (pages 28, 28, 34, 35, 37, 38, 39, 43)
2.1–2.8   (pages 50, 53, 54, 56, 58, 62, 63, 64)
2.9   Rayleigh scattering . . . 66
2.10  Rayleigh scattering affects the colour of the sky . . . 67
2.11  Effects of clouds in optical remote sensing . . . 69
2.12  Specular and diffuse reflection . . . 70
2.13  Reflectance curve of vegetation . . . 73
2.14  Reflectance curves of soil . . . 75
2.15  Reflectance curves of water . . . 76
3.1   Overview of sensors . . . 86
3.2   Example video image . . . 90
3.3   Example multispectral image . . . 91
3.4   TSM derived from imaging spectrometer data . . . 93
3.5   Example thermal image . . . 95
3.6   Example microwave radiometer image . . . 96
3.7   DTM derived by laser scanning . . . 98
3.8   Example radar image . . . 100
3.9   Roll, pitch and yaw angles . . . 102
3.10  Meteorological observation system . . . 106
3.11  An image file comprises a number of bands . . . 107
4.1–4.13   (pages 120, 121, 122, 124, 125, 128, 131, 133, 134, 135, 137, 140, 144)
5.1–5.4    (pages 156, 158, 160, 167)
6.1–6.7    (pages 186, 190, 191, 192, 193, 194, 205)
7.1–7.5    (pages 222, 224, 226, 228, 230)
8.4–8.7    (pages 239, 240, 242, 245)
9.1–9.7    (pages 255, 257, 258, 263, 264, 266, 270)
10.1–10.12 (pages 278, 281, 282, 283, 289, 290, 291, 292, 294, 296, 297, 300)
11.3–11.9  (pages 316, 319, 320, 326, 327, 334, 335)
12.1–12.12 (pages 345, 346, 347, 348, 350, 351, 352, 358, 360, 362, 364, 370)
List of Tables

5.1–5.9    (pages 165, 166, 168, 170, 171, 172, 173, 176, 178)
9.1 and 10.1–10.5   (pages 288–300)
11.1–11.5  (pages 323, 324, 328, 329, 330)
Appendix tables     (pages 409, 410, 410, 410)
Preface
Principles of Remote Sensing is the basic textbook on remote sensing for all students enrolled in the 2000–2001 educational programmes at ITC. As well as being a basic textbook for the institute's regular MSc and PM courses, Principles of Remote Sensing will be used in various short courses and possibly also by ITC's sister institutes. The first edition is an extensively revised version of an earlier text produced for the 1999–2000 programme. Principles of Remote Sensing and the companion volume, Principles of Geographic Information Systems [6], are published in the ITC Educational Textbook series. We need to go back to the 1960s to find a similar official ITC textbook on subjects related to remote sensing: the ITC Textbooks on Photogrammetry and Photo-interpretation, published in English and French [11, 12].

You may wonder why ITC has now produced its own introductory textbook while there are already many books on the subject available on the market. Principles of Remote Sensing is different in various respects. First of all, it has been developed for the specific ITC student population, thereby taking into account their entry level and knowledge of the English language. The textbook relates to the typical ITC application disciplines and, among others, provides an introduction to techniques that acquire sub-surface characteristics.
As the textbook is used at the start of the programmes, it tries to stimulate conceptual and abstract thinking by providing and explaining some fundamental, though simple, equations (in general, no more than one equation per chapter). Principles of Remote Sensing aims to provide a balanced approach to traditional photogrammetric and remote sensing subjects: three sensors (aerial camera, multispectral scanner and radar) are dealt with in more or less the same detail. Finally, compared with other introductory textbooks, which often focus on the technique, Principles of Remote Sensing also introduces processes. In this sense, it provides a frame of reference for the more detailed subjects dealt with later in the programme.
Acknowledgements
This textbook is the result of a process to define and develop material for a core curriculum. This process started in 1998 and was carried out by a working group comprising Rolf de By, Michael Weir, Cees van Westen and myself, chaired by Ineke ten Dam and supported by Erica Weijer. This group put much effort into the definition and realization of the earlier version of the two core textbooks. Ineke also supervised the process leading to this result. My fellow working group members are gratefully acknowledged for their support.

This textbook could not have materialized without the efforts of the (co-)authors of the chapters: Wim Bakker, Ben Gorte, John Horn, Christine Pohl, Colin Reeves, Michael Weir and Tsehaie Woldai. Many other colleagues contributed one way or another to either the earlier version or this version of Principles of Remote Sensing: Paul Hofstee, Gerrit Huurneman, Yousif Hussin, David Rossiter, Rob Soeters, Ernst Schetselaar, Andrew Skidmore, Dhruba Shrestha and Zoltan Vekerdy.

The design and implementation of the textbook layout, of both the hard-copy and the electronic document, is the work of Rolf de By. Using the LaTeX typesetting system, Rolf realized a well-structured and visually attractive document to study. Many of the illustrations in the book have been provided by the authors, supported by Job Duim and Gerard Reinink. Final editing of the illustrations was done by Wim Feringa, who also designed the cover.

Michael Weir has done a tremendous job in checking the complete textbook for English spelling and grammar. We know that our students will profit from this.
The work on this textbook was greatly stimulated through close collaboration with the editor of Principles of Geographic Information Systems, Rolf de By.
Lucas L. F. Janssen, Enschede, September 2000
Gerrit C. Huurneman, Enschede, August 2001
Chapter 1
Introduction to remote sensing
1.1 Spatial data acquisition
All ITC students, one way or another, deal with georeferenced data. They might be involved in the collection of data, the processing of data, the analysis of data, or the actual use of data for decision making. In the end, data are acquired to yield information for management purposes: water management, land management, resources management, et cetera. By data we mean representations that can be operated upon by a computer; by information we mean data that have been interpreted by human beings (Principles of GIS, Chapter 1). This textbook focusses on the methods used to collect georeferenced, or geo-spatial, data. The need for spatial data is best illustrated by some examples.

- An agronomist is interested in forecasting the overall agricultural production of a large area. This requires data on the area planted with different crops and data on biomass production to estimate the yield.

- An urban planner needs to identify areas in which dwellings have been built illegally. The different types of houses and their configuration need to be determined. The information should be in a format that enables integration with other socio-economic information.

- An engineer needs to determine the optimal configuration for siting relay stations for a telecommunication company. The optimal configuration primarily depends on the form of the terrain and on the location of obstacles such as buildings.

- A mining engineer is asked to explore an area and to provide a map of the surface mineralogy. In addition, s/he should give a first estimate of the effect of water pumping on the neighbouring agricultural region.
(Figure 1.1: ground-based spatial data acquisition: from the real world, via observation and measurement, to a spatial database.)

remote sensing methods, which are based on the use of image data acquired by a sensor, such as an aerial camera, a scanner or a radar. Taking a remote sensing approach means that information is derived from the image data, which form a (limited) representation of the real world (Figure 1.2).

(Figure 1.2: remote sensing based data acquisition: real world, sensor, image data, observation and measurement, spatial database.)
1.2 Application of remote sensing
The textbook Principles of GIS [6] introduced the example of studying the El Niño effect to illustrate aspects of database design and the functionality of spatial information systems. The example departs from a database table that stores a number of parameters derived from buoys, that is, from in situ measurements. These measurements can be analysed as they are (per buoy, over time). Most often, however, spatial interpolation techniques will be used to generate maps that enable analysis of spatial and temporal patterns.

Now, let us consider the analysis of Sea Surface Temperature (SST) patterns and discuss some particular aspects related to taking a remote sensing approach to the El Niño case.
(Figure 1.3: the remote sensing process: real world, RS sensor, image data, analysis, spatial database.)
(Figure 1.4: sea surface temperature as determined from NOAA-AVHRR data. Courtesy of NOAA.)
(Figure 1.8: the chapters of this textbook related to the remote sensing process: electromagnetic energy (Chapter 2), the sensors (Chapters 3 to 7), image data, observation and measurement, and the spatial database.)
1.3 Structure of this textbook
Summary

A large part of human activity and interest has a geographic component. For planning, monitoring and decision making, there is a need for georeferenced (geo-spatial) data.

Ground-based methods and remote sensing based methods for spatial data acquisition are distinguished. Remote sensing methods rely on the measurement of electromagnetic energy from a distance (aerospace).

A remote sensing approach is usually complemented by ground-based methods and the use of numerical models. To choose the most relevant remote sensing data acquisition, you first have to define the information requirements of your application.
Questions

The following questions can help you to study Chapter 1.

1. To what extent are Geographic Information Systems (GIS) applied by your organization (or company)?

2. Which ground-based and which remote sensing methods are used by your organization (or company) to collect georeferenced data?

3. Remote sensing data and derived data products are available on the internet. Locate three web-based catalogues or archives that comprise remote sensing image data.
Chapter 2
Electromagnetic energy and remote
sensing
2.1 Introduction
(Figure 2.1: a remote sensing sensor measures reflected or emitted energy; an active sensor has its own source of energy.)
Many sensors used in remote sensing measure reflected sunlight. Some sensors, however, detect energy emitted by the Earth itself or provide their own energy (Figure 2.1). A basic understanding of EM energy, its characteristics and its interactions is required to understand the principle of the remote sensor. This knowledge is also needed to interpret remote sensing data correctly. For these reasons, this chapter introduces the basic physics of remote sensing.

In Section 2.2, EM energy, its sources and the different parts of the electromagnetic spectrum are explained. Between the remote sensor and the Earth's surface is the atmosphere, which influences the energy that travels from the Earth's surface to the sensor. The main interactions between EM waves and the atmosphere are described in Section 2.3. Section 2.4 introduces the interactions that take place at the Earth's surface.
2.2 Electromagnetic energy
(Figure 2.2: an electromagnetic wave, with its electric and magnetic fields, wavelength λ and velocity of light c.)

One characteristic of electromagnetic waves is particularly important for understanding remote sensing. This is the wavelength, λ, defined as the distance between successive wave crests (Figure 2.2). Wavelength is measured in metres (m) or some fraction of a metre, such as nanometres (nm, 10⁻⁹ m) or micrometres (µm, 10⁻⁶ m). (For an explanation of units and prefixes refer to Appendix 1.)

The frequency, ν, is the number of cycles of a wave passing a fixed point over a specific period of time. Frequency is normally measured in hertz (Hz).
Wavelength and frequency are related by:

    c = λ × ν.    (2.1)

In this equation, c is the speed of light (3 × 10⁸ m/s), λ is the wavelength (m), and ν is the frequency (cycles per second, Hz).

The shorter the wavelength, the higher the frequency. Conversely, the longer the wavelength, the lower the frequency (Figure 2.3).
(Figure 2.3: short wavelengths correspond to high frequency and high energy; long wavelengths to low frequency and low energy.)
The amount of energy carried by a photon is given by:

    Q = h × ν,    (2.2)

where Q is the energy of a photon (J), h is Planck's constant (6.6262 × 10⁻³⁴ J s), and ν the frequency (Hz). From Equation 2.2 it follows that the longer the wavelength, the lower its energy content. Gamma rays (around 10⁻⁹ m) are the most energetic.
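As a quick check on Equations 2.1 and 2.2, the short Python sketch below computes frequency and photon energy for a few example wavelengths; the wavelengths chosen are illustrative, not taken from the text:

    # Frequency and photon energy from wavelength (Equations 2.1 and 2.2)
    c = 3.0e8        # speed of light (m/s)
    h = 6.6262e-34   # Planck's constant (J s)

    examples = [("visible light", 0.5e-6), ("thermal infrared", 10e-6), ("radio wave", 1.0)]
    for name, lam in examples:
        nu = c / lam   # Equation 2.1: frequency from wavelength
        q = h * nu     # Equation 2.2: photon energy from frequency
        print(name, "- frequency:", nu, "Hz, photon energy:", q, "J")

The output confirms the rule stated above: photons of visible light carry far more energy than radio-wave photons.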
(Figure 2.4: blackbody radiation curves based on the Stefan-Boltzmann law, with temperatures in K, plotted against wavelength (µm).)
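The Stefan-Boltzmann law referred to in the caption of Figure 2.4 states that the total energy emitted by a blackbody grows with the fourth power of its temperature. A minimal sketch follows; the constants and Wien's displacement law for the peak wavelength are standard physics used here only for illustration:

    # Total blackbody emittance (Stefan-Boltzmann) and peak wavelength (Wien)
    SIGMA = 5.6697e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)
    WIEN = 2898.0       # Wien's displacement constant (um K)

    for t in (300, 673, 873):    # kelvin; 673 and 873 appear in Figure 2.4
        m = SIGMA * t ** 4       # total emitted energy (W/m^2)
        peak = WIEN / t          # wavelength of maximum emission (um)
        print(t, "K:", round(m), "W/m^2, emission peak at", round(peak, 2), "um")

Doubling the temperature thus increases the emitted energy sixteen-fold and shifts the emission peak to shorter wavelengths, which is the pattern the curves in Figure 2.4 show.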
(Figure 2.5: the electromagnetic spectrum, from gamma rays and X-rays through ultraviolet (UV), the visible range (blue around 0.4 µm, green around 0.5 µm, red around 0.7 µm), near-, mid- and thermal-infrared, to microwaves and television and radio waves.)
2.3 Energy interaction in the atmosphere
The most important source of energy is the Sun. Before the Sun's energy reaches the Earth's surface, three fundamental interactions in the atmosphere are possible: absorption, transmission and scattering. The energy that is transmitted is then reflected or absorbed by the surface material (Figure 2.6).
(Figure 2.6: energy interactions in the atmosphere: incident energy from the Sun is partly absorbed and scattered; the RS sensor receives direct and scattered radiation reflected by the Earth's surface, together with thermal and atmospheric emission.)
(Figure 2.7: atmospheric transmission (percent) as a function of wavelength (µm), showing the absorption bands of H₂O, CO₂ and O₂.)
(Figure 2.8: the solar radiation curve outside the atmosphere, which resembles a 6000 K blackbody curve, and the curve at the Earth's surface, with absorption bands of O₃, O₂, H₂O and CO₂, plotted against wavelength (µm).)
The solar spectrum as observed both with and without the influence of the Earth's atmosphere is shown in Figure 2.8. First of all, look at the radiation curve of the Sun (measured outside the influence of the Earth's atmosphere), which resembles a blackbody curve at 6000 K. Secondly, compare this curve with the radiation curve measured at the Earth's surface. The relative dips in this curve indicate the absorption by different gases in the atmosphere.
(Figure 2.9: Rayleigh scattering of blue and red light.)
In the absence of particles and scattering, the sky would appear black. In daytime, the Sun's rays travel the shortest distance through the atmosphere. In that situation, Rayleigh scattering causes a clear sky to be observed as blue, because blue is the shortest wavelength the human eye can observe. At sunrise and sunset, however, the Sun's rays travel a longer distance through the Earth's atmosphere before they reach the surface. All the shorter wavelengths are scattered after some distance and only the longer wavelengths reach the Earth's surface. As a result, the sky appears orange or red (Figure 2.10).

In the context of satellite remote sensing, Rayleigh scattering is the most important type of scattering. It causes a distortion of the spectral characteristics of the reflected light when compared to measurements taken on the ground: through the Rayleigh effect the shorter wavelengths are overestimated. In colour photos
taken from high altitudes, it accounts for the blueness of these pictures. In general, Rayleigh scattering diminishes the contrast in photos, and thus has a negative effect on the possibilities for interpretation. When dealing with digital image data (as provided by scanners), the distortion of the spectral characteristics of the surface may limit the possibilities for image classification.

(Figure 2.10: Rayleigh scattering causes us to perceive a blue sky during daytime and a red sky at sunset.)
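The strength of Rayleigh scattering falls off steeply with wavelength; the usual statement is that it is inversely proportional to the fourth power of the wavelength. That λ⁻⁴ dependence is a standard result assumed here (the text above only says that shorter wavelengths are scattered more strongly), and it makes the blue-sky effect easy to quantify:

    # Relative Rayleigh scattering of blue versus red light (lambda^-4 law)
    blue, red = 0.4, 0.7          # wavelengths in micrometres
    ratio = (red / blue) ** 4     # scattering is proportional to 1 / lambda^4
    print("blue light scatters about", round(ratio, 1), "times more than red")

This prints a factor of roughly 9, which is why a clear daytime sky looks blue and why high-altitude colour photos take on a blue cast.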
(Figure 2.11: effects of clouds in optical remote sensing: clouds create a shadow zone and a zone of no penetration.)
2.4 Energy interactions with the Earth's surface
In land and water applications of remote sensing we are most interested in the reflected radiation, because this tells us something about surface characteristics. Reflection occurs when radiation bounces off the target and is then redirected. Absorption occurs when radiation is absorbed by the target. Transmission occurs when radiation passes through a target. Two types of reflection, which represent the two extremes of the way in which energy is reflected from a target, are specular reflection and diffuse reflection (Figure 2.12). In the real world, usually a combination of both types is found.

Specular reflection, or mirror-like reflection, typically occurs when a surface is smooth and all (or almost all) of the energy is directed away from the surface in a single direction. It is most likely to occur when the Sun is high in the sky. Specular reflection can be caused, for example, by a water surface.

(Figure 2.12: (a) specular and (b) diffuse reflection.)
(Figure 2.13: an idealized reflectance curve of healthy vegetation: percent reflectance against wavelength (0.4 to 2.6 µm), showing chlorophyll absorption in the blue and red, higher reflectance in the green, and the colour-IR sensitive region in the near-infrared.)
Vegetation

The reflectance characteristics of vegetation depend on the properties of the leaves, including the orientation and the structure of the leaf canopy. The proportion of the radiation reflected in the different parts of the spectrum depends on leaf pigmentation, leaf thickness and composition (cell structure), and on the amount of water in the leaf tissue. Figure 2.13 shows an ideal reflectance curve of healthy vegetation. In the visible portion of the spectrum, the reflection of blue and red light is comparatively low, since these portions are absorbed by the plant (mainly by chlorophyll) for photosynthesis, and the vegetation reflects relatively more green light. The reflectance in the near-infrared is highest, but the amount depends on the leaf development and the cell structure.
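The red and near-infrared behaviour described above is what makes vegetation easy to separate from other surfaces in multispectral data. A small illustration follows, with reflectance values that are assumptions read off curves such as Figures 2.13 and 2.14, not measurements:

    # Red versus near-infrared contrast for vegetation and bare soil
    # (reflectance fractions are illustrative assumptions)
    targets = {
        "healthy vegetation": {"red": 0.08, "nir": 0.45},
        "bare soil":          {"red": 0.20, "nir": 0.25},
    }
    for name, r in targets.items():
        print(name, "- NIR/red ratio:", round(r["nir"] / r["red"], 1))

Healthy vegetation shows a near-infrared to red ratio several times larger than that of soil, which is why infrared reflection tells something about the type and health of vegetation.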
(Figure 2.14: reflectance curves of five soils, labelled a to e: percent reflectance against wavelength (400 to 2400 nm).)
(Figure 2.15: reflectance curve of water: percent reflectance against wavelength (400 to 2400 nm).)
Summary

Remote sensing is based on the measurement of electromagnetic (EM) energy. EM energy propagates through space in the form of sine waves characterized by electric (E) and magnetic (M) fields, which are perpendicular to each other. EM energy can be modelled either by waves or by energy-bearing particles called photons. One property of EM waves that is particularly important for understanding remote sensing is the wavelength (λ), defined as the distance between successive wave crests and measured in metres (m), nanometres (nm, 10⁻⁹ m) or micrometres (µm, 10⁻⁶ m). The frequency is the number of cycles of a wave passing a fixed point in a specific period of time, and is measured in hertz (Hz). Since the speed of light is constant, wavelength and frequency are inversely related to each other: the shorter the wavelength, the higher the frequency, and vice versa.

All matter with a temperature above absolute zero (0 K) radiates EM energy due to molecular agitation. Matter that is capable of absorbing and re-emitting all EM energy received is known as a blackbody. All matter with a certain temperature radiates electromagnetic waves of various wavelengths, depending on its temperature. The total range of wavelengths is commonly referred to as the electromagnetic spectrum. It extends from gamma rays to radio waves. The amount of energy detected by a remote sensing system is a function of the interactions on the way to the object, the object itself, and the interactions on the way back to the sensor.

The interactions of the Sun's energy with physical materials, both in the atmosphere and at the Earth's surface, cause this energy to be reflected, absorbed, transmitted or scattered. Electromagnetic energy travelling through the atmosphere is partly absorbed by molecules. The most efficient absorbers of solar radiation in the atmosphere are ozone (O₃), water vapour (H₂O) and carbon dioxide (CO₂).
Questions

The following questions can help you to study Chapter 2.

1. What are the advantages and disadvantages of airborne remote sensing compared to spaceborne remote sensing in terms of atmospheric disturbance?

2. How important are laboratory spectra in understanding remote sensing images?
7. Indicate True or False: Only the wavelength region outside the main absorption bands of the atmospheric gases can be used for remote sensing.
8. Indicate True or False: The amount of energy detected by a remote sensing
sensor is a function of how energy is partitioned between its source and
the materials with which it interacts on its way to the detector.
Chapter 3
Sensors and platforms
3.1 Introduction
In Chapter 2, the underlying principle of remote sensing was explained. Depending on the surface characteristics, electromagnetic energy from the Sun or from an active sensor is reflected, or energy may be emitted by the Earth itself. This energy is measured and recorded. The resulting data can be used to derive information about surface characteristics.

The measurements of electromagnetic energy are made by sensors that are attached to a static or moving platform. Different types of sensors have been developed for different applications (Section 3.2). Aircraft and satellites are generally used to carry one or more sensors (Section 3.3). General references with respect to missions and sensors are [13, 14]. Please refer to ITC's Aerospace Sensor Database for a complete and up-to-date overview.

The sensor-platform combination determines the characteristics of the resulting image data. For example, when a particular sensor is operated from a higher altitude, the total area imaged is increased, while the level of detail that can be observed is reduced (Section 3.4). Based on your information needs and on time and budgetary criteria, you can determine which image data are most appropriate (Section 3.5).
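The altitude trade-off just described can be made concrete with a small sketch. The field-of-view and IFOV numbers below are arbitrary assumptions chosen for illustration, not the parameters of any particular sensor:

    # Swath width and ground detail as a function of platform altitude
    import math

    fov = math.radians(90.0)   # total field of view (assumed)
    ifov = 2.5e-3              # instantaneous field of view in radians (assumed)

    for h in (1_000, 5_000, 800_000):          # altitude in metres
        swath = 2 * h * math.tan(fov / 2)      # width of the imaged strip (m)
        spot = h * ifov                        # smallest observable ground spot (m)
        print("h =", h, "m: swath", round(swath / 1000, 1), "km, spot", round(spot, 1), "m")

Raising the platform widens the imaged strip in direct proportion to altitude, but the ground spot grows at the same rate: exactly the area-versus-detail trade-off stated above.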
3.2 Sensors
(Figure 3.1: overview of sensors, arranged by wavelength domain. Passive sensors: gamma-ray spectrometer; aerial camera and video camera (visible domain); multispectral scanner and imaging spectrometer (optical domain); thermal scanner; passive microwave radiometer (microwave domain). Active sensors: laser scanner (optical domain); radar altimeter and imaging radar (microwave domain).)
Gamma-ray spectrometer

The gamma-ray spectrometer measures the amount of gamma rays emitted by the upper soil or rock layers due to radioactive decay. The energy measured in specific wavelength bands provides information on the abundance of (radio-isotopes that relate to) specific minerals. Therefore, the main application is found in mineral exploration. Gamma rays have a very short wavelength, on the order of picometres (pm). Because of the large atmospheric absorption of these waves, this type of energy can only be measured up to a few hundred metres above the Earth's surface. Example data acquired by this sensor are given in Figure 7.1.
Aerial camera

The camera system (lens and film) is mostly operated from aircraft, for aerial photography. Low-orbiting satellites and NASA Space Shuttle missions also apply conventional camera techniques. The film types used in the camera enable electromagnetic energy in the range between 400 nm and 900 nm to be recorded. Aerial photographs are used in a wide range of applications. The rigid and regular geometry of aerial photographs, in combination with the possibility to acquire stereo-photography, has enabled the development of photogrammetric procedures for obtaining precise 3D coordinates. Although aerial photos are used in many applications, principal applications include medium and large scale (topographic) mapping and cadastral mapping. Today, analogue photos are often scanned so that they can be stored and processed in digital systems. Various examples of aerial photos are shown in Chapter 4.
Video camera

Video cameras are sometimes used to record image data. Most video sensors are only sensitive to the visible colours, although a few are able to record the near-infrared part of the spectrum (Figure 3.2). Until recently, only analogue video cameras were available. Today, digital video cameras are increasingly available, some of which are applied in remote sensing. Mostly, video images serve to provide low-cost image data for qualitative purposes, for example to provide additional visual information about an area captured with another sensor (e.g., a laser scanner or radar).

(Figure 3.2: analogue false colour video image of De Lopikerwaard (NL). Courtesy of Syntoptics.)
Multispectral scanner

The multispectral scanner is an instrument that mainly measures the reflected sunlight in the optical domain. A scanner systematically scans the Earth's surface, thereby measuring the energy reflected from the viewed area. This is done simultaneously for several wavelength bands, hence the name multispectral scanner. A wavelength band is an interval of the electromagnetic spectrum for which the average reflected energy is measured. The reason for measuring a number of distinct wavelength bands is that each band is related to specific characteristics of the Earth's surface. For example, reflection characteristics of blue light give information about the mineral composition; reflection characteristics of infrared light tell something about the type and health of vegetation. The definition of the wavebands of a scanner, therefore, depends on the applications for which the sensor has been designed. An example of multispectral data for geological applications is given in Figure 3.3.

(Figure 3.3: Landsat TM colour composite (RGB = bands 4, 5, 7) of Yemen.)
Imaging spectrometer

The principle of the imaging spectrometer is similar to that of the multispectral scanner, except that spectrometers measure only very narrow (5–10 nm) spectral bands. This results in an almost continuous reflectance curve per pixel, rather than values for relatively broad spectral bands. The spectral curves measured depend on the chemical composition of the material. Imaging spectrometer data, therefore, can be used to determine the mineral composition of the surface or the chlorophyll content of the surface water (Figure 3.4).

(Figure 3.4: Total Suspended Matter concentration of the North Sea derived from SeaWiFS (OrbView-2) data. Courtesy of CCZM, Rijkswaterstaat.)
Thermal scanner

Thermal scanners measure thermal data in the range of 10–14 µm. Wavelengths in this range are directly related to an object's temperature. Data on cloud, land and sea surface temperature are extremely useful for weather forecasting. For this reason, most remote sensing systems designed for meteorology include a thermal scanner. Thermal scanners can also be used to study the effects of drought (water stress) on agricultural crops, and to monitor the temperature of cooling water discharged from thermal power plants. Another application is in the detection of coal fires (Figure 3.5).

(Figure 3.5: night-time airborne thermal scanner image of a coal mining area. Dark tones represent relatively cold surfaces, whilst light tones represent relatively warm spots. Most of the warm spots are due to underground coal fires, apart from the largest light patch, which is a lake.)
Radiometer

EM energy with very long wavelengths (1–100 cm) is emitted from the soil and rocks on, or just below, the Earth's surface. The depth from which this energy is emitted depends on the properties, such as the water content, of the specific material. Radiometers are used to detect this energy. The resulting data can be used in mineral exploration, soil mapping and soil moisture estimation (Figure 3.6).
Laser scanner

Laser scanners are mounted on aircraft and use a laser beam (infrared light) to measure the distance from the aircraft to points located on the ground. This distance measurement is then combined with exact information on the aircraft's position to calculate the terrain elevation. Laser scanning is mainly used to produce detailed, high-resolution Digital Terrain Models (DTMs) for topographic mapping (Figure 3.7). Laser scanning is increasingly used for other purposes, such as the production of detailed 3D models of city buildings and for measuring tree heights in forestry.
Radar altimeter

Radar altimeters are used to measure the topographic profile parallel to the satellite orbit. They provide profiles (single lines of measurements) rather than image data. Radar altimeters operate with wavelengths in the 1–6 cm domain and are able to determine height with a precision of 2–4 cm. Radar altimeters are useful for measuring relatively smooth surfaces, such as oceans, and for small scale mapping of continental terrain models. Sample results of radar altimeter measurements are given in Figure 1.6 and Figure 7.2.
Imaging radar

Radar instruments operate in the 1–100 cm domain. As in multispectral scanning, different wavelength bands are related to particular characteristics of the Earth's surface. The radar backscatter (Figure 3.8) is influenced by the illuminating signal (microwave parameters) and the illuminated surface characteristics (orientation, roughness, dielectric constant/moisture content). Since radar is an active sensor system and the applied wavelengths are able to penetrate clouds, it has all-weather, day-and-night acquisition capability. The combination of two radar images of the same area can provide information about terrain heights. Combining two radar images acquired at different moments can be used to precisely assess changes in height or vertical deformations (SAR interferometry).

(Figure 3.8: ERS SAR image of a delta in Kalimantan, Indonesia. The image allows three different forest types to be distinguished.)
3.3 Platforms
In remote sensing, the sensor is mounted on a platform. In general, remote sensing sensors are attached to moving platforms such as aircraft and satellites. Static platforms are occasionally used in an experimental context. For example, using a multispectral sensor mounted on a pole, the changing reflectance characteristics of a specific crop during the day or season can be assessed.

Airborne observations are carried out using aircraft with specific modifications to carry sensors. An aircraft that carries an aerial camera or a scanner needs a hole in its floor. Sometimes Ultra Light Vehicles (ULVs), balloons, airships or kites are used for airborne remote sensing. Airborne observations are possible from 100 m up to 30–40 km height. Until recently, the navigation of an aircraft was one of the most difficult and crucial parts of airborne remote sensing. In recent years, the availability of satellite navigation technology has significantly improved the quality of flight execution.

For spaceborne remote sensing, satellites are used. Satellites are launched into space with rockets. Satellites for Earth Observation are positioned in orbits between 150 and 36,000 km altitude. The specific orbit depends on the objectives of the mission, e.g., continuous observation of large areas or detailed observation of smaller areas.
(Figure 3.9: the roll, pitch and yaw angles of an aircraft.)
Today, most aircraft are equipped with satellite navigation technology, which yields the approximate position (RMS-error of less than 30 m). More precise positioning and navigation (up to decimetre accuracy) is possible using so-called differential approaches. In this textbook we refer to satellite navigation in general, which comprises the American GPS system, the Russian Glonass system and the proposed European Galileo system.

In aerial photography the measurements are stored on hard-copy material: the negative film. For other sensors, e.g., a scanner, the digital data can be stored
on tape or on mass memory devices. Tape recorders offer the fastest way to store the vast amount of data. The recorded data only become available after the aircraft has returned to its base.

Owning, operating and maintaining survey aircraft, as well as employing a professional flight crew, is an expensive undertaking. In the past, survey aircraft were owned mainly by large national survey organizations that required large amounts of photography. There is an increasing trend towards contracting specialized private aerial survey companies. Still, this requires a basic understanding of the process involved. A sample contract for outsourcing aerial photography is provided by the American Society for Photogrammetry and Remote Sensing at their ASPRS web site.
same area, is determined by the repeat cycle together with the pointing capability of the sensor. Pointing capability refers to the possibility of the sensor-platform combination to look sideways. Pushbroom scanners, such as those mounted on SPOT, IRS and IKONOS (Section 5.4), have this possibility. The following orbit types are most common for remote sensing missions:

- Polar, or near-polar, orbit. These are orbits with an inclination angle between 80 and 100 degrees, which enable observation of the whole globe. The satellite is typically placed in orbit at 600–800 km altitude.

- Sun-synchronous orbit. An orbit chosen in such a way that the satellite always passes overhead at the same local solar time is called sun-synchronous. Most sun-synchronous orbits cross the equator at mid-morning (around 10:30 h). At that moment the Sun angle is low and the resulting shadows reveal terrain relief. Sun-synchronous orbits allow a satellite to record images at two fixed times during one 24-hour period: one during the day and one at night. Examples of near-polar sun-synchronous satellites are Landsat, SPOT and IRS.

- Geostationary orbit. This refers to orbits in which the satellite is placed above the equator (inclination angle 0°) at a distance of some 36,000 km. At this distance, the orbital period of the satellite is equal to the rotation period of the Earth. The result is that the satellite is at a fixed position relative to the Earth. Geostationary orbits are used for meteorological and telecommunication satellites.

Today's meteorological weather satellite systems use a combination of geostationary satellites and polar orbiters. The geostationary satellites offer a continuous view, while the polar orbiters offer a higher resolution (Figure 3.10).
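The claim that a satellite at some 36,000 km keeps pace with the rotating Earth can be checked with Kepler's third law, a standard result of orbital mechanics that is not derived in this text:

    # Orbital period of a circular orbit from Kepler's third law
    import math

    GM = 3.986e14        # Earth's gravitational parameter (m^3/s^2)
    R_EARTH = 6.371e6    # mean Earth radius (m)

    for alt_km in (700, 36_000):
        a = R_EARTH + alt_km * 1000                     # orbit radius (m)
        period = 2 * math.pi * math.sqrt(a ** 3 / GM)   # period (s)
        print(alt_km, "km altitude: period", round(period / 3600, 1), "hours")

A 700 km polar orbit circles the Earth in roughly 1.6 hours (about 14 orbits per day), while the 36,000 km orbit takes close to 24 hours, matching the Earth's rotation.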
The data of spaceborne sensors need to be sent to the ground for further analysis and processing. Some older spaceborne systems used film cartridges that fell back to a designated area on Earth. Nowadays, practically all Earth Observation satellites apply satellite communication technology for downlink of the data. The acquired data are sent down to a receiving station, or to another communication satellite that downlinks the data to receiving antennae on the ground. If the satellite is outside the range of a receiving station, the data can be temporarily stored by a tape recorder in the satellite and transmitted later. One of the trends is that small receiving units (consisting of a small dish with a PC) are being developed for local reception of image data.

(Figure 3.10: meteorological observation system comprised of geostationary and polar satellites.)
3.4 Image data characteristics
Remote sensing image data are more than a picture: they are measurements of EM energy. Image data are stored in a regular grid format (rows and columns). The single elements are called pixels, an abbreviation of 'picture elements'. For each pixel, the measurements are stored as Digital Number values, or DN-values. Typically, a separate layer is stored for each wavelength band measured (Figure 3.11).

(Figure 3.11: an image file comprises a number of bands; for each pixel, one DN-value is stored per band.)
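A minimal sketch of the data structure in Figure 3.11 follows, with arbitrary DN-values, assuming the common case of one byte (0–255) per measurement:

    # A 3-band image stored as rows x columns x bands of DN-values
    import numpy as np

    rows, cols, bands = 100, 100, 3
    image = np.zeros((rows, cols, bands), dtype=np.uint8)   # all DN-values start at 0

    image[10, 20] = (45, 81, 57)        # set the DN-values of a single pixel
    print("DN-values at row 10, column 20:", image[10, 20])
    print("file size:", image.nbytes, "bytes")              # one byte per DN-value

Each wavelength band occupies its own layer, so the DN-values of one pixel form a small vector with one entry per band.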
3.5 Data selection criteria
Summary

This chapter has introduced the principle of remote sensing observations: a sensor attached to a moving platform. Aircraft and satellites are the main platforms used in remote sensing. Both types of platform have their advantages and disadvantages. Two main categories of sensors are distinguished. Passive sensors depend on an external source of energy, such as the Sun. Active sensors have their own source of energy. A sensor carries out measurements of reflected or emitted electromagnetic (EM) energy. The energy measured in specific wavelength bands is related to (Earth) surface characteristics. The measurements are usually stored as image data. The characteristics of image data are related to the resolutions of the sensor-platform system (spatial, spectral and radiometric). Depending on the spatio-temporal phenomena of interest, the most appropriate remote sensing data can be determined. In practice, data availability and costs determine which remote sensing data are used.
Questions
The following questions can help you to study Chapter 3.
1. Think of an application, define the spatio-temporal characteristics of interest and determine the type of remote sensing image data required.
2. How many of the sensor types introduced in Section 3.2 were already
known to you?
3. Which types of sensors are used in your discipline or field-of-interest?
4. Describe two differences between aircraft and satellite remote sensing and
their implications for the data acquired.
5. Which two types of satellite orbits are mainly used for Earth observation?
6. List and describe four characteristics of image data.
Chapter 4
Aerial cameras
4.1 Introduction
Aerial photography has been used since the early 20th century to provide spatial data for a wide range of applications. It is the oldest, yet still the most commonly applied, remote sensing technique. Photogrammetry is the science and technique of making measurements from photos or image data. Nowadays, almost all topographic maps are based on aerial photographs. Aerial photographs also provide the accurate data required for many cadastral surveys and civil engineering projects. Aerial photography is a useful source of information for specialists such as foresters, geologists and urban planners. General references for aerial photography are [10, 16, 17].

Two broad categories of aerial photography can be distinguished: vertical photography and oblique photography (Figure 4.1). In most mapping applications, vertical aerial photography is required. Vertical aerial photography is produced with a camera mounted in the floor of an aircraft. The resulting image is rather similar to a map and has a scale that is approximately constant throughout the image area. Usually, vertical aerial photography is also taken in stereo, in which successive photos have a degree of overlap to enable stereo-interpretation and stereo-measurements.

Oblique photographs are obtained when the axis of the camera is not vertical. Oblique photographs can be made using a hand-held camera and shooting through the (open) window of an aircraft. The scale of an oblique photo varies from the foreground to the background. This scale variation complicates the measurement of positions from the image and, for this reason, oblique photographs are rarely used for mapping purposes. Nevertheless, oblique images can be useful for purposes such as viewing sides of buildings and for inventories of wildlife.
[Figure 4.1: vertical (a) and oblique (b) aerial photography; the nadir is indicated in each case.]
This chapter focusses on the camera, films and methods used for vertical
aerial photography. First of all, Section 4.2 introduces the aerial camera and its
main components. Photography is based on exposure of a film, processing and
printing. The type of film applied largely determines the spectral and radiometric characteristics of the printed products (Section 4.3). Section 4.4 focusses on
the geometric characteristics of aerial photography. In Section 4.5 some aspects
of aerial photography missions are introduced. In the advanced Section 4.6,
some technological developments are discussed.
4.2
Aerial camera
A camera used for vertical aerial photography for mapping purposes is called an
aerial survey camera. In this section the standard aerial camera is introduced.
At present, there are only two major manufacturers of aerial survey cameras,
namely Zeiss and Leica. These two companies produce the RMK-TOP and the
RC-30 respectively. Just like a typical hand-held camera, the aerial survey camera contains a number of common components as well as a number of specialized ones necessary for its specific role. Figure 4.3 shows a schematic drawing
of an aerial camera. The large size of the camera results from the need to acquire
images of large areas with a high spatial resolution. This is realized by using
a large film size. Modern aerial survey cameras produce negatives measuring
23 cm × 23 cm. Up to 600 photographs may be recorded on a single roll of film.
Figure 4.3: Schematic drawing of an aerial survey camera, showing its main components: film magazine, intervalometer, camera body, camera mount, viewfinder, lens cone and exposure control system.
[Figure: the lens cone, showing the optical axis, lens system, diaphragm, shutter, and the coloured and A.V. filters.]
[Figure: auxiliary data recorded next to the image: a watch, a message pad (here "Enschede, 25 June 1991"), the frame number (2205), a spirit level, and the fiducial marks in the corners.]
A vacuum plate is used for flattening the film at the instant of exposure. So-called fiducial marks are recorded in all corners of the film. The fiducial marks are used to determine the principal point of the photograph.
4.3
Spectral and radiometric characteristics
[Figure: cross-section of a photographic film: the light-sensitive emulsion is coated on a base layer, backed by an anti-halation layer.]
The film emulsion type applied determines the spectral and radiometric characteristics of the photograph. Two terms are important in this context:
Spectral sensitivity describes the range of wavelengths to which the emulsion is sensitive. For the study of vegetation the near-infrared wavelengths yield much information and should be recorded; for other purposes a standard colour photograph can normally provide the optimal basis for interpretation.
Sensitization techniques are used not only to increase the general sensitivity but also to produce films that are sensitive to longer wavelengths. By adding sensitizing dyes to the basic silver halide emulsion, the energy of longer light wavelengths becomes sufficient to produce latent images. In this way a monochrome film can be made sensitive to green, red or infrared wavelengths (Section 3.3.2).
A black-and-white (monochrome) type of film has one emulsion layer. Using sensitization techniques, different types of monochrome films are available. Most common are the panchromatic and the infrared-sensitive film. The sensitivity curves of these films are shown in Figure 4.7.
Figure 4.7: Spectral sensitivity curves of a panchromatic film (a) and a black/white infrared film (b). Note the difference in scaling on the x-axis: the wavelength axis runs from 400 to 700 nm in (a) and from 400 to 900 nm in (b); the infrared film is sensitive to blue, green, red and infrared wavelengths.
[Figure 4.8: printing a negative: a dense part of the negative transmits little light, while a clear part transmits much light onto the positive print.]
and forms only a small amount of silver in the positive image. Conversely, in
clear areas of the negative, much light passes and forms large quantities of silver
in the print (see Figure 4.8).
[Figure: colour film processing: exposure of the subject (e.g., green or red) forms, via colour couplers, yellow, magenta and cyan dye layers, which together reproduce the blue, green and red image.]
4.3.5 Scanning
Classical photogrammetric techniques as well as visual photo-interpretation generally employ hard-copy photographic images. These can be the original negatives, positive prints or diapositives. Digital photogrammetric systems, as well
as geographic information systems, require digital photographic images. A scanner is used to convert a film or print into a digital form. The scanner samples
the image with an optical detector and measures the brightness for small areas
(pixels). The brightness values are then represented as a digital number (DN)
on a given scale. In the case of a monochrome image, a single measurement is
made for each pixel area. In the case of a coloured image, separate red, green
and blue values are measured. For simple visualization purposes, a standard
office scanner can be used; but high metric quality scanners are required if the
digital photos are to be used in precise photogrammetric procedures.
In the scanning process, the setting of the size of the scanning aperture is most relevant. This is also referred to as the scanning density and is expressed in dots per inch (dpi; 1 inch = 2.54 cm). The dpi-setting depends on the detail required for the application and is usually limited by the scanner. Office scanners permit around 600 dpi (43 µm), whilst photogrammetric scanners may produce 3600 dpi (7 µm).
For a monochrome 23 × 23 cm negative, 600 dpi scanning results in a file size of 9 × 600 = 5,400 rows and the same number of columns. Assuming that 1 byte is used per pixel (i.e., there are 256 grey levels), the resulting file requires 29 Mbyte of disk space. When the scale of the negative is given, the ground pixel size of the resulting image can be calculated. Assuming a photo scale of 1:18,000, the first step is to calculate the size of one dot: 25.4 mm / 600 dots = 0.04 mm per dot. The next step is to relate this to the scale: 0.04 mm × 18,000 = 720 mm in the terrain. The ground pixel size of the resulting image is therefore 0.72 metre.
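The following Python fragment is a sketch of the arithmetic above; note that the text rounds 23 cm to 9 inches and 0.042 mm to 0.04 mm, so the exact values differ slightly. All variable names are illustrative.

dpi = 600                                 # scanning density
negative_inch = 23.0 / 2.54               # 23 cm negative, about 9 inches
rows = negative_inch * dpi                # about 5,400 rows (and columns)
file_mb = rows * rows / 1e6               # 1 byte per pixel (256 grey levels)
dot_mm = 25.4 / dpi                       # size of one scanned dot on the film (mm)
ground_pixel_m = dot_mm * 18_000 / 1000   # dot size on the terrain at 1:18,000
print(round(rows), round(file_mb), round(ground_pixel_m, 2))  # 5433 30 0.76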
4.4
Spatial characteristics
Two important properties of an aerial photograph are scale and spatial resolution. These properties are determined by sensor (lens cone and film) and platform (flying height) characteristics. Lens cones are produced with different focal lengths. Focal length is the most important property of a lens cone since, together
with flying height, it determines the photo scale. The focal length also determines the angle of view of the camera. The longer the focal length, the narrower
the angle of view. The 152 mm lens is the most commonly used lens.
4.4.1 Scale
The relationship between the photo scale factor, s, flying height, H, and lens focal
length, f, is given by
s = H / f.    (4.1)
Hence, the same scale can be achieved with different combinations of focal
length and flying height. If the focal length of a lens is decreased whilst the
flying height remains constant, then (also refer to Figure 4.12):
The image scale factor will increase and the size of the individual details in
the image becomes smaller. In the example shown in Figure 4.12, using a
150 mm and 300 mm lens at H=2000 m results in a scale factor of 13,333
and 6,666 respectively;
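As a minimal Python sketch of Equation 4.1, the example of Figure 4.12 can be reproduced as follows; the function name is an illustrative assumption.

def scale_factor(flying_height_m, focal_length_mm):
    """Photo scale factor s = H / f, with H and f in the same units."""
    return flying_height_m / (focal_length_mm / 1000.0)

print(int(scale_factor(2000, 150)))  # 13333, i.e. photo scale 1:13,333
print(int(scale_factor(2000, 300)))  # 6666,  i.e. photo scale 1:6,666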
[Figure 4.12: at a flying height H = 2000 m, a 150 mm lens (a) covers 3066 m on the ground with an angle of view of 74°, whereas a 300 mm lens (b) covers 1533 m with an angle of view of 41°.]
4.5
Aerial photography missions
Figure 4.13: Example survey area for aerial photography. Note the forward overlap of successive photographs and the sideways overlap between flight lines.
4.6
Advances in aerial photography
The most significant improvements to standard aerial photography made during the last decade can be summarized as follows:
Global Navigation Satellite Systems (GPS-USA, Glonass-SU and proposed
Galileo-EU) provide a means of achieving accurate navigation. They offer precise positioning of the aircraft along the survey run, ensuring that
the photographs are taken at the correct points. This method of navigation is especially important in survey areas where topographic maps do
not exist, are old, are of small scale or of poor quality. It is also helpful
in areas where the terrain has few features (deserts, forests etc) because in
these cases conventional visual navigation is particularly difficult. The major aerial camera manufacturers (as well as some independent suppliers),
now offer complete software packages that enable the flight crew to plan,
execute and evaluate an entire aerial survey mission. Before the mission,
the boundaries of the survey area to be photographed are first entered into
the software (either from a keyboard or digitizing table), along with basic
mission parameters such as the required scale, lens focal length, overlap
requirements, terrain elevation et cetera. The program then calculates the
optimum positions of the required flight lines as well as the positions of the
individual photo-centres. This pre-calculated information is then stored
on a diskette and taken into the aircraft in order to execute the mission.
During the flight, the crew is presented with a moving map display showing the position of the aircraft, the required flight lines and the individual photo-centres. Either flying the aircraft manually or on auto-pilot, the
display is followed and the software instructs the camera precisely where
to take the photographs. After the mission, the co-ordinates of the photo-centres can be evaluated.
Summary
The characteristics of oblique and vertical aerial photography are distinguished.
Vertical aerial photography requires a specially adapted aircraft. The execution
of photo flights is performed mainly by specialized companies or governmental
units. The main components of an aerial camera system are the lens and film.
The lens, in combination with the flying height, determines the photo scale factor. The film type used determines which wavelengths bands are recorded. The
most commonly used film types are panchromatic, black-and-white infrared,
true-colour and false-colour infrared. Another characteristic of the film is the
general sensitivity, which is related to the size of the grains. After exposure,
the film is developed and printed. The printed photo can be scanned to use the
photo in a digital environment.
There have been many technological developments to improve mission execution as well as the image quality itself. Most recent in this development is the digital camera, which directly yields digital data.
Questions
The following questions can help you to study Chapter 4.
1. Consider an area of 500 km2 that needs aerial photo coverage for topographic mapping at 1:50,000. Which specifications would you give on film,
photo scale, overlap, et cetera?
2. Go to the Internet and locate three catalogues (archives) of aerial photographs. Compare the descriptions and specifications of the photographs
(in terms of scale, resolution, format, . . . ).
Chapter 5
Multispectral scanners
5.1
Introduction
5.2
Whiskbroom scanner
Q = h × ν,

where Q is the energy of a photon (J), h is Planck's constant (6.6262 × 10⁻³⁴ J s) and ν is the frequency (Hz). The solid state detector measures the amount of energy (J) during a specific time period, which results in J/s = Watt (W).
The range of input radiance, between a maximum and a minimum level, that
a detector can handle is called the dynamic range. This range is converted into the
range of a specified data format. Typically an 8-bit, 10-bit or 12-bit data format is used. The 8-bit format allows 2⁸ = 256 levels or DN-values; similarly, the 12-bit format allows 2¹² = 4096 DN-values. The smallest difference in input level that can be distinguished is called the radiometric resolution. Consider a dynamic range of energy between 0.5 and 3 W. Using 100 or 250 DN-values results in a radiometric resolution of 25 mW and 10 mW respectively.
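The Python sketch below, using the values quoted above, illustrates both the photon energy relation and the radiometric resolution obtained for a given number of DN levels; all names are illustrative.

PLANCK_H = 6.6262e-34                 # Planck's constant (J s)

def photon_energy(frequency_hz):
    """Energy of one photon, Q = h * frequency (J)."""
    return PLANCK_H * frequency_hz

def radiometric_resolution(range_min_w, range_max_w, levels):
    """Smallest distinguishable difference in input level (W)."""
    return (range_max_w - range_min_w) / levels

print(radiometric_resolution(0.5, 3.0, 100))  # 0.025 W = 25 mW
print(radiometric_resolution(0.5, 3.0, 250))  # 0.01 W  = 10 mW
print(2 ** 8, 2 ** 12)                        # 256 and 4096 DN levels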
Figure 5.2: Normalized spectral response curve of a specific sensor. It shows that the setting of this band ranges from approximately 570 to 710 nm.
D = β × H, where D is the diameter of the ground resolution cell (m), β is the Instantaneous Field of View (IFOV, in radians) and H is the height of the sensor above the terrain (m).
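Assuming β denotes the IFOV, a one-line Python check using the AVHRR figures quoted later in this chapter (IFOV 1.4 mrad, orbit about 850 km) reproduces the roughly 1 km ground cell at nadir:

def ground_cell_diameter(ifov_rad, height_m):
    """Diameter of the ground resolution cell: D = beta * H."""
    return ifov_rad * height_m

print(ground_cell_diameter(1.4e-3, 850_000))  # 1190 m, about 1 km at nadir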
5.3
Pushbroom scanner
The pushbroom scanner is based on the use of Charge-Coupled Devices (CCDs) for measuring the electromagnetic energy (Figure 5.3). A CCD-array is a line of photo-sensitive detectors that function similarly to solid state detectors. A single element can be as small as 5 µm. Today, two-dimensional CCD-arrays are used in digital cameras and video recorders. The CCD-arrays used in remote sensing are more sensitive and have larger dimensions. The first satellite sensor using this technology was SPOT-1 HRV. High resolution sensors such as IKONOS and Orbview-3 also apply the pushbroom principle.
The pushbroom scanner records one entire line at a time. The principal advantage over the whiskbroom scanner is that each position (pixel) in the line has its own detector element.
5.4
This section gives some details about specific spaceborne missions that carry
multispectral scanners and describes some of their applications.
5.4.1 Meteosat-5
Meteosat is a geostationary satellite that is used in the world meteorological programme. The programme comprises seven satellites in total. The first Meteosat
satellite was placed in orbit in 1977. Meteosat satellites are owned by the European organisation Eumetsat. At this moment, Meteosat-5 is operational with
Meteosat-6 as a back-up.
Table 5.1: Meteosat-5 VISSR characteristics.

System: Meteosat-5
Orbit: Geostationary, 0° longitude
Sensor: VISSR (Visible and Infrared Spin Scan Radiometer)
Swath width: Full Earth disc (FOV = 18°)
Off-track viewing: Not applicable
Revisit time: 30 minutes
Spectral bands (µm): 0.5–0.9 (VIS), 5.7–7.1 (WV), 10.5–12.5 (TIR)
Ground pixel size: 2.5 km (VIS and WV), 5 km (TIR)
Data archive at: www.eumetsat.de
The spectral bands of the VISSR sensor are chosen for observing phenomena
that are relevant to meteorologists: a panchromatic band (VIS), a mid-infrared
band, which gives information about the water vapour (WV) present in the atmosphere, and a thermal band (TIR). In the case of clouds, the thermal data relate
to the cloud top temperature, which is used for rainfall estimates and forecasts.
Under cloud-free conditions the thermal data relate to the surface temperature
of land and sea.
5.4.2 NOAA-15
NOAA stands for National Oceanic and Atmospheric Administration, which is a US-government body. The sensor on board the NOAA missions that is relevant for Earth observation is the Advanced Very High Resolution Radiometer (AVHRR). Today, two NOAA satellites (-14, -15) are operational.
Table 5.2: NOAA-15 AVHRR characteristics.

System: NOAA-15
Orbit: 850 km, 98.8°, sun-synchronous
Sensor: AVHRR-3 (Advanced Very High Resolution Radiometer)
Swath width: 2800 km (FOV = 110°)
Off-track viewing: No
Revisit time: 2–14 times per day, depending on latitude
Spectral bands (µm): 0.58–0.68 (1), 0.73–1.10 (2), 3.55–3.93 (3), 10.3–11.3 (4), 11.4–12.4 (5)
Spatial resolution: 1 km (at nadir), 6 km (at limb), IFOV = 1.4 mrad
Data archive at: www.saa.noaa.gov
As the AVHRR sensor has a very wide FOV (110°) and is at a large distance
from the Earth, the whiskbroom principle causes a large difference in the ground
cell measured within one scanline (Figure 5.4). The standard image data products of AVHRR yield image data with equally sized ground pixels.
AVHRR data are used primarily in day-to-day meteorological forecasting
where they give more detailed information than Meteosat. In addition, there are
many land and water applications.
Figure 5.4: The NOAA/AVHRR sensor, which observes an area of 1 × 1 km² at the centre and 6 × 3 km² at the edge.
5.4.3 Landsat-7
The Landsat programme is the oldest Earth Observation programme. It started
in 1972 with the Landsat-1 satellite carrying the MSS multispectral sensor. After
1982, the Thematic Mapper (TM) replaced the MSS sensor. Both MSS and TM
are whiskbroom scanners. In April 1999 Landsat-7 was launched carrying the
ETM+ scanner. Today, only Landsat-5 and -7 are operational.
Table 5.3: Landsat-7 ETM+ characteristics.

System: Landsat-7
Orbit: 705 km, 98.2°, sun-synchronous, 10:00 AM crossing, 16 days repeat cycle
Sensor: ETM+ (Enhanced Thematic Mapper)
Swath width: 185 km (FOV = 15°)
Off-track viewing: No
Revisit time: 16 days
Spectral bands (µm): 0.45–0.52 (1), 0.52–0.60 (2), 0.63–0.69 (3), 0.76–0.90 (4), 1.55–1.75 (5), 10.4–12.50 (6), 2.08–2.34 (7), 0.50–0.90 (PAN)
Spatial resolution: 15 m (PAN), 30 m (bands 1–5, 7), 60 m (band 6)
Data archive at: earthexplorer.usgs.gov
There are many applications of Landsat Thematic Mapper data: land cover
mapping, land use mapping, soil mapping, geological mapping, sea surface
temperature mapping, et cetera. For land cover and land use mapping Landsat
Thematic Mapper data are preferred, e.g., over SPOT multispectral data, because
of the inclusion of middle infrared bands. Landsat Thematic Mapper is the only
non-meteorological satellite that has a thermal infrared band. Thermal data are
Table 5.4: Principal applications of the Landsat Thematic Mapper bands.

Band 1 (0.45–0.52 µm): Designed for water body penetration, making it useful for coastal water mapping. Also useful for soil-vegetation discrimination and forest type mapping.
Band 2 (0.52–0.60 µm): Designed to measure the green reflectance peak of vegetation, for vegetation discrimination and vigour assessment.
Band 3 (0.63–0.69 µm): Designed to sense in a chlorophyll absorption region, aiding in plant species differentiation.
Band 4 (0.76–0.90 µm): Useful for determining vegetation types, vigour and bio-mass content, for delineating water bodies, and for soil moisture discrimination.
Band 5 (1.55–1.75 µm): Indicative of vegetation moisture content and soil moisture. Also useful for differentiation of snow from clouds.
Band 6 (10.4–12.5 µm): Useful in vegetation stress analysis, soil moisture discrimination, and thermal mapping applications.
Band 7 (2.08–2.35 µm): Useful for discrimination of mineral and rock types. Also sensitive to vegetation moisture content.
5.4.4 SPOT-4
SPOT stands for Système Pour l'Observation de la Terre. SPOT-1 was launched in 1986. SPOT is owned by a consortium of the French, Swedish and Belgian governments. It was the first operational pushbroom CCD sensor with off-track viewing capability to be put into space. At that time, the 10 m panchromatic spatial resolution was unprecedented. In March 1998 a significantly improved SPOT-4 was launched: the HRVIR sensor has 4 instead of 3 bands and the VEGETATION instrument was added. VEGETATION has been designed for frequent (almost daily) and accurate monitoring of the globe's landmasses.
Table 5.5: SPOT-4 HRVIR characteristics.

System: SPOT-4
Orbit: 835 km, 98.7°, sun-synchronous, 10:30 AM crossing, 26 days repeat cycle
Sensor: two HRVIR sensors (High Resolution Visible and Infrared)
Swath width: 60 km (3000 pixels CCD-array)
Off-track viewing: Yes, ±27° across-track
Revisit time: 4–6 days (depending on latitude)
Spectral bands (µm): 0.50–0.59 (1), 0.61–0.68 (2), 0.79–0.89 (3), 1.58–1.75 (4), 0.61–0.68 (PAN)
Spatial resolution: 10 m (PAN), 20 m (bands 1–4)
Data archive at: sirius.spotimage.fr
5.4.5 IRS-1D
India puts much effort into remote sensing and has many operational missions
and missions under development. The most important Earth Observation programme is the Indian Remote Sensing (IRS) programme. Launched in 1995
and 1997, two identical satellites, IRS-1C and IRS-1D, can deliver image data
at high revisit times. IRS-1C and IRS-1D carry three sensors: the Wide Field
Sensor (WiFS) designed for regional vegetation mapping, the Linear Imaging
Self-Scanning Sensor 3 (LISS3), which yields multispectral data in four bands
with a spatial resolution of 24 m, and the PAN.
In this subsection, the characteristics of the PAN sensor are given. For a number of years, up to the launch of IKONOS in September 1999, the IRS-1C and -1D
were the civilian satellites with the highest spatial resolution. Applications are
similar to those of SPOT and Landsat.
Table 5.6: IRS-1D PAN characteristics.

System: IRS-1D
Orbit: 817 km, 98.6°, sun-synchronous, 10:30 AM crossing, 24 days repeat cycle
Sensor: PAN (Panchromatic Sensor)
Swath width: 70 km
Off-track viewing: Yes, ±26° across-track
Revisit time: 5 days
Spectral bands (µm): 0.50–0.75
Spatial resolution: 6 m
Data archive at: www.spaceimaging.com
5.4.6 IKONOS
IKONOS was the first commercial high resolution satellite to be placed into orbit. IKONOS is owned by SpaceImaging, a USA-based Earth observation company. The other commercial high resolution satellites foreseen are: Orbview-3 (OrbImage), Quickbird (EarthWatch), and EROS-A1 (West Indian Space). IKONOS was launched in September 1999 and regular data ordering has been taking place since March 2000.
The OSA sensor onboard is based on the pushbroom principle and can simultaneously take panchromatic and multispectral images. IKONOS delivers the highest spatial resolution so far achieved by a civilian satellite. Apart from the high spatial resolution it also has a high radiometric resolution, using 11-bit quantization.
Many applications of the IKONOS data are foreseen. The owner expects that the application fields are able to pay for the commercially priced data.
Table 5.7: IKONOS OSA characteristics.

System: IKONOS
Orbit: 680 km, 98.2°, sun-synchronous, 10:30 AM crossing, 14 days repeat cycle
Sensor: Optical Sensor Assembly (OSA)
Swath width: 11 km (12 µm CCD elements)
Off-track viewing: Yes, ±50° omnidirectional
Revisit time: 1–3 days
Spectral bands (µm): 0.45–0.52 (1), 0.52–0.60 (2), 0.63–0.69 (3), 0.76–0.90 (4), 0.45–0.90 (PAN)
Spatial resolution: 1 m (PAN), 4 m (bands 1–4)
Data archive at: www.spaceimaging.com
5.4.7 Terra
EOS (Earth Observing System) is the centerpiece of NASA's Earth Science mission. The EOS AM-1 satellite, later renamed Terra, is the flagship of the fleet and was launched in December 1999. It carries five remote sensing instruments, including MODIS and ASTER. ASTER, the Advanced Spaceborne Thermal Emission and Reflectance Radiometer, is a high resolution imaging spectrometer. The instrument is designed with three bands in the visible and near-infrared spectral range with a 15 m resolution, six bands in the short-wave infrared with a 30 m resolution, and five bands in the thermal infrared with a 90 m resolution. The VNIR and SWIR bands have a spectral bandwidth in the order of 10 nm. ASTER consists of three separate telescope systems, each of which can be pointed at selected targets. By pointing to the same target twice, ASTER can acquire high-resolution stereo images. The swath width of the image is 60 km and the revisit time is about 5 days.
MODIS, the Moderate-Resolution Imaging Spectroradiometer, observes the entire surface of the Earth every 1–2 days with a whisk-broom scanning imaging radiometer. Its wide field of view (over 2300 km) provides images of daylight reflected solar radiation and day/night thermal emissions over the entire globe. Its spatial resolution ranges from 250 to 1000 m.
Table 5.8: Terra ASTER characteristics.

System: Terra
Orbit: 705 km, 98.2°, sun-synchronous, 10:30 AM crossing, 16 days repeat cycle
Sensor: ASTER
Swath width: 60 km
Off-track viewing: Yes, ±8.5° (SWIR and TIR), ±24° (VNIR)
Revisit time: 5 days
Spectral bands (µm): VNIR: 0.56 (1), 0.66 (2), 0.81 (3); SWIR: 1.65 (1), 2.17 (2), 2.21 (3), 2.26 (4), 2.33 (5), 2.40 (6); TIR: 8.3 (1), 8.65 (2), 9.10 (3), 10.6 (4), 11.3 (5)
Spatial resolution: 15 m (VNIR), 30 m (SWIR), 90 m (TIR)
Data archive at: terra.nasa.gov
5.4.8 EO-1
The EO-1 mission is part of the NASA New Millennium Program and is focused on new sensor and spacecraft technologies that can directly reduce the cost of Landsat and related Earth monitoring systems. The EO-1 satellite is in an orbit that covers the same ground track as Landsat-7, approximately one minute later. This enables EO-1 to obtain images of the same ground area at nearly the same time, so that results from Landsat ETM+ and the three primary EO-1 instruments can be compared directly. The three primary instruments on the EO-1 spacecraft are the Hyperion, the Linear Etalon Imaging Spectrometer Array (LEISA) Atmospheric Corrector (LAC), and the Advanced Land Imager (ALI).
Hyperion is a grating imaging spectrometer with a 30 m ground sample distance over a 7.5 km swath, providing 10 nm (sampling interval) contiguous bands of the solar reflected spectrum from 400–2500 nm. LAC is an imaging spectrometer covering the spectral range from 900–1600 nm, which is well-suited to monitoring the atmospheric water absorption lines for correction of atmospheric effects in multispectral imagers such as ETM+ on Landsat.
The Earth Observing-1 (EO-1) Advanced Land Imager (ALI) is a technology verification instrument. Operating in a pushbroom fashion at an orbit of 705 km, the ALI provides Landsat-type panchromatic and multispectral bands. These bands have been designed to mimic six Landsat bands, with three additional bands covering 0.433–0.453, 0.845–0.890 and 1.20–1.30 µm. The ALI also contains wide-angle optics designed to provide a continuous 15° × 1.625° field of view for a fully populated focal plane with 30 m resolution for the multispectral pixels and 10 m resolution for the panchromatic pixels.
Table 5.9: EO-1 ALI characteristics.

System: EO-1
Orbit: 705 km, 98.7°, sun-synchronous, 10:30 AM crossing, 16 days repeat cycle
Sensor: ALI (Advanced Land Imager)
Swath width: 37 km
Off-track viewing: No
Revisit time: 16 days
Spectral bands (µm): As Landsat-7, plus 0.433–0.453, 0.845–0.890 and 1.20–1.30
Spatial resolution: 10 m (PAN), 30 m (other bands)
Data archive at: eo1.gsfc.nasa.gov
Summary
The multispectral scanner is a sensor that collects data in various wavelength
bands of the EM spectrum. The scanner can be mounted on an aircraft or on a
satellite. There are two types of scanners: whiskbroom and pushbroom scanners. They use solid state detectors and CCD-arrays respectively for measuring
the level of energy. The resulting image data store the level of energy as Digital
Numbers, which are calculated during the quantization process. Multispectral
scanners provide multi-band data.
In terms of geometrically reliable data the pushbroom scanner performs best.
In terms of measuring many spectral bands (including thermal infrared) the current whiskbroom scanners are the best.
Questions
The following questions can help you to study Chapter 5.
1. Compare multispectral scanner data with scanned aerial photographs.
Which similarities and differences can you identify?
2. Go to the Internet and check the availability of multispectral image data
of your country (area of interest). First determine which range of spatial
resolution you are interested in.
7. What is quantization and to which part of the scanning process does it relate?
8. Which range of spatial resolutions is encountered with today's multispectral (and panchromatic) scanners?
Chapter 6
RADAR
6.1
What is radar?
So far, you have learned about remote sensing using the visible and infrared
part of the electromagnetic spectrum. Microwave remote sensing uses electromagnetic waves with wavelengths between 1 cm and 1 m (Figure 2.5). These
relatively longer wavelengths have the advantage that they can penetrate clouds
and are largely independent of atmospheric conditions such as haze. In microwave remote
sensing there are active and passive sensors. Passive sensors operate similarly
to thermal sensors by detecting naturally emitted microwave energy. They are
used in meteorology, hydrology and oceanography. In active systems, the sensor transmits microwave signals from an antenna to the Earth's surface, where
they are backscattered. The part of the electromagnetic energy that is scattered
into the direction of the antenna is detected by the sensor as illustrated in Figure 6.1. There are several advantages to be gained from the use of active sensors,
which have their own energy source:
It is possible to acquire data at any time including during the night (similar
to thermal remote sensing).
Since the waves are created actively, the signal characteristics are fully controlled (e.g., wavelength, polarization, incidence angle, et cetera) and can be
adjusted according to the desired application.
Active sensors are divided into two groups: imaging and non-imaging sensors.
RADAR sensors belong to the most commonly used active imaging microwave sensors. The term RADAR is an acronym for RAdio Detection And Ranging: "radio" refers to the microwaves used and "ranging" to the measurement of distance. Radar
sensors were originally developed and used by the military. Nowadays, radar
sensors are widely used in civil applications too, such as environmental monitoring. The group of non-imaging microwave instruments comprises altimeters, which collect distance information (e.g., sea surface height), and scatterometers, which acquire information about object properties (e.g., wind speed).
This chapter will focus on the principles of imaging radar and its applications. The interpretation of radar imagery is less intuitive than that of optical remote sensing, because the physical interaction of the waves with the Earth's surface is different. The chapter will explain which interactions take place and how radar images can be interpreted.
6.2
Pr = (G² × λ² × σ × Pt) / ((4π)³ × R⁴),    (6.1)

where
Pr is the received energy,
G is the antenna gain,
λ is the wavelength,
Pt is the transmitted energy,
σ is the radar cross section; it is a function of the object characteristics and the size of the illuminated area, and
R is the range from the antenna to the object.
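As an illustration of Equation 6.1, the Python sketch below computes the received energy for arbitrary, made-up parameter values; it is not modelled on any particular radar system. Note the strong 1/R⁴ fall-off with range.

import math

def received_power(pt, gain, wavelength, sigma, r):
    """Radar equation: Pr = G^2 * lambda^2 * sigma * Pt / ((4*pi)^3 * R^4)."""
    return (gain ** 2 * wavelength ** 2 * sigma * pt) / ((4 * math.pi) ** 3 * r ** 4)

# Illustrative values: 1 kW transmitted, gain 1000, 5.6 cm wavelength,
# radar cross section 1 m^2, range 850 km.
print(received_power(pt=1e3, gain=1e3, wavelength=0.056, sigma=1.0, r=850e3))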
Figure 6.2: Microwave spectrum and band identification by letters; the frequencies range from 0.3 to 100 GHz (wavelengths from 100 cm down to 0.3 cm).
[Figure: an electromagnetic wave, showing the wavelength λ, the magnetic field and the velocity of light c.]
6.3
The platform carrying the radar sensor moves along its orbit in the flight direction (Figure 6.4). The ground track of the orbit/flight path lies on the Earth's surface at nadir. The microwave beam illuminates an area, or swath, on the Earth's surface, with an offset from the nadir. The along-track direction is called azimuth; the direction perpendicular to it (across-track) is called range.
[Figure 6.4: radar viewing geometry: the look angle and incidence angle, the slant range, the ground track, the azimuth direction, and the near, mid and far range across the radar swath.]
[Figure 6.5: the local incidence angle: for a sloping scattering surface, the local incidence angle between the radar wave and the surface differs from the nominal incidence angle measured from the vertical.]
Figure 6.6: Geometric distortions in radar imagery due to terrain elevation: foreshortening (F), layover (L) and shadow (S), shown in the slant range geometry for a sensor at a given altitude.
6.4
Due to the side-looking viewing geometry, radar images suffer from serious geometric and radiometric distortions. In radar imagery, you encounter variations
in scale (slant range to ground range conversion), foreshortening, layover and shadows (terrain elevation). Interference due to the coherency of the signal causes
speckle effects.
6.5
The brightness of features in a radar image depends on the strength of the backscattered signal. In turn, the amount of energy that is backscattered depends on various factors. An understanding of these factors will help you to interpret radar
images properly.
6.6
Radar data provide a wealth of information that is not only based on a derived
intensity image but also on other data properties that measure characteristics of
the objects. One example is radar interferometry, an advanced processing method
that takes advantage of the phase information of the microwave. If you look at
two waves that are emitted with a slight offset you obtain a phase difference
between the two waves. The offset can be based on two antennas mounted
on the same platform (e.g., aircraft or space shuttle) or based on two different orbits/passes. The range difference is calculated by measuring the phase
difference between the two backscattered waves received at the antenna. With
the knowledge of the position of the platform with respect to the Earth's surface, the elevation of the object is determined. Phase differences are displayed in so-called interferograms, where different colours represent variations in height. The
interferogram is used to produce digital elevation models. Differential interferometry is based on the creation of two interferograms of successive radar data
acquisitions. These interferograms are subtracted from each other in order to
illustrate the changes that have occurred. These are useful for change detection,
e.g., earthquake damage assessment.
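A minimal sketch of the underlying idea, assuming the common two-way-path convention in which a phase difference corresponds to a range difference of (λ / 4π) × Δφ; the numbers are illustrative only.

import math

def range_difference(phase_diff_rad, wavelength_m):
    """Range difference corresponding to a measured phase difference,
    assuming a two-way signal path (hence the factor 4*pi)."""
    return (wavelength_m / (4 * math.pi)) * phase_diff_rad

# A 5.6 cm wavelength and a phase difference of pi/2 give a 7 mm range difference
print(range_difference(math.pi / 2, 0.056))  # 0.007 m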
Summary
In this chapter, the principles of imaging radar and its applications have been
introduced. The microwave interactions with the surface have been explained
to illustrate how radar images are interpreted. Radar sensors measure distances
and detect backscattered signal intensities. In radar processing, special attention has to be paid to geometric corrections and speckle reduction for improved
interpretation. Radar data have many potential applications in the fields of geology, oceanography, hydrology, environmental monitoring, land use and land
cover mapping and change detection.
Questions
The following questions can help to study Chapter 6.
1. List three major differences between optical and microwave remote sensing.
2. What type of information can you extract from imaging radar data?
3. What are the limitations of radar images in terms of visual interpretation?
4. What kind of processing is necessary to prepare radar images for interpretation? Which steps are obligatory and which are optional?
5. Search the Internet for successful applications of radar images from ERS-1/2, Radarsat and other sensors.
Chapter 7
Remote sensing below the ground
surface
7.1
Introduction
7.2
Gamma-ray surveys
7.3
Gravity and magnetic anomaly mapping
The Earth has a gravity field and a magnetic field. The former we experience as the weight of any mass and its tendency to accelerate towards the centre of the Earth when dropped. The latter is comparatively weak but is exploited, for example, in the design of the magnetic compass that points towards magnetic north when used in the field. Rocks that have abnormal density or magnetic properties (particularly rocks lying in the uppermost few kilometres of the Earth's crust) distort the broad gravity and magnetic fields of the main body of the Earth by tiny but perceptible amounts, producing local gravity and magnetic anomalies. Careful and detailed mapping of these anomalies over any area reveals complex patterns that are related to the structure and composition of the bedrock geology. Both methods therefore provide important windows on the geology, even when it is completely concealed by cover formations such as soil, water, younger sediments and vegetation. The unit of measurement in gravimetry is the milli-Gal (mGal), an acceleration of 10⁻⁵ m/s². The normal acceleration due to gravity (g) is about 9.8 m/s² (980,000 mGal) and, to be useful, a gravity survey must be able to detect changes in g as small as 1 mGal or about 1 part per million (ppm) of the total acceleration. This may be achieved easily by reading a gravimeter at rest on the ground surface, but is still at the limit of technical capability from a moving vehicle such as an aircraft.
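A quick arithmetic check of these figures (a sketch, using the values quoted above):

g_mgal = 980_000            # normal gravity g expressed in mGal (9.8 m/s2)
print(1 / g_mgal * 1e6)     # 1 mGal is about 1.02 ppm of g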
Conventional gravity surveys are ground-based and therefore slow and costly; the systematic scanning of the Earth's surface by gravity survey is still confined largely to point observations that lack the continuity of coverage achievable with other geophysical methods. An exception is over the world's oceans, where radar altimetry of the sea-level surface from a satellite has been achieved with a precision of better than 10 cm. The sea-surface is an equipotential surface with
[Figure 7.3: a geophysical image (a) and the corresponding geological map (b).]
7.4
Electrical imaging
Solid rocks are normally rather resistive to the passage of electricity. The presence of water (groundwater) in pores, cracks and fissures and the electrical properties of certain minerals nevertheless allow applied currents to flow through the
large volume of the subsurface. This has been exploited in methods developed
to permit the mapping of subsurface electrical conductivity in two and three dimensions. While seldom of such regional (geological mapping) application as
gravity and magnetic methods, electrical methods have found application both
in the search for groundwater and in mineral exploration where certain ore minerals have distinctive electrical properties.
Where the ground is stratified, an electrical sounding can be interpreted to reveal the layering in terms of the resistivity or conductivity of each layer. Electrical profiling can be used to reveal lateral variations in rock resistivity, such as often occur across fissures and faults. Ground-based methods that require physical contact between the apparatus and the ground by way of electrodes are supplemented by so-called electromagnetic (EM) methods, where current is induced to flow in the ground by the passage of an alternating current (typically of low audio frequency) through a transmitter coil. EM methods require no electrical contact with the ground and can therefore also be operated from an aircraft, increasing the speed of survey and the uniformity of the data coverage. Airborne EM surveys have been developed largely by the mineral exploration community, since many important ore bodies, such as the massive sulphide ores of the base metals, are highly conductive and stand out clearly from their host rocks through electrical imaging (Figure 7.4). Other important ore bodies are made up of disseminated sulphides that display an electrochemical property known as chargeability. Mapping of chargeability variations is the objective in induced polarization (IP) surveys.
7.5
Seismic surveying
Virtually all new discoveries of oil and gas are these days made possible by seismic imaging of the Earth's subsurface. Such surveys probably account for
over 90 per cent of the expenditure on geophysical surveys for all exploration
purposes. Seismic waves are initiated by a small explosion or a vibratory source
at the surface, in a shallow borehole or in the water above marine areas. Energy
in a typically sub-audio frequency range (10 to 100 Hz) radiates from the source
and is reflected off changes in acoustic properties of the rock, typically changes
in lithology from one stratum to the next, and is detectable from depths of
many kilometres.
By deploying a suitable array of seismic sources and receiving reflected energy at a large number of receiving stations known as geophones, an image of
the subsurface may be built up in three dimensions. This involves processing an
enormous amount of data to correct for multiple reflections and the geometry of
the source-receiver configurations. To achieve the detail necessary for the successful siting of expensive, deep exploratory wells, most surveys now carried
out are known as 3D surveys, though isolated lines of 2D survey, typical of earlier decades, are still carried out for reconnaissance purposes in new areas. The
accuracy and precision of the seismic method in mapping the subsurface (Figure 7.5) is now sufficient not only to find trapped oil and gas but also to assess
the volume and geometry of the reservoir to plan optimum extraction strategies.
Repeated surveys during the production lifetime of a given field (time lapse
seismic) permit the draw-down to be monitored and so maximize the recovery
of the oil and gas in a field. Similar seismic technology, adapted to more modest
scales of exploration, can be applied for shallow investigations (depths of a few
tens of metres), useful in groundwater exploration and site investigation.
Summary
Geophysical methods therefore provide a wide range of possible methods of
imaging the subsurface. Some are used routinely, others only for special applications. All are potentially useful to the alert geoscientist.
Gravity and magnetic anomaly mapping has been carried out for almost
50 years. While most countries have national programmes, achievements to date
are somewhat variable from country to country. The data are primarily useful
for geological reconnaissance at scales from 1:250,000 to 1:1,000,000. Gamma-ray
spectrometry, flown simultaneously with aeromagnetic surveys, has joined the
airborne geophysical programmes supporting geological mapping in the past
decade. All three methods are therefore used primarily by national geological
surveys to support basic geoscience mapping, alongside conventional field and
photo-geology, and to set the regional scene for dedicated mineral and oil exploration. It is normal that the results are published at nominal cost for the benefit
of all potential users.
Geophysical surveys for mineral exploration are applied on those more limited
areas (typically at scales 1:50,000 to 1:10,000) selected as being promising for
closer (and more expensive!) examination. Typically this might start with an
airborne EM and magnetometer survey that would reveal targets suitable for
detailed investigation with yet more expensive methods (such as EM and IP)
on the ground. Once accurately located in position (x, y) and depth, the most
promising anomalies can be tested further by drilling.
Groundwater exploration has historically relied on electrical sounding and profiling, but has been supplemented in some cases by EM profiling and sounding and by shallow seismic surveys. Regrettably, poor funding usually dictates that such surveys are less thorough and systematic than is the case in mineral exploration.
Questions
The following questions can help you to study Chapter 7.
1. Make a list of geophysical maps (and their scales) that you are aware of in
your own country (or that part of it you are familiar with).
2. Trace the geophysical features revealed in Figure 7.3(a) on a transparent
overlay and compare your result with the geological map in Figure 7.3(b).
Chapter 8
Radiometric aspects
8.1
Introduction
[Figure 8.1: pre-processing of image data comprises cosmetic corrections, radiometric and atmospheric corrections, and geometric corrections.]
8.2
Cosmetic corrections
Cosmetic corrections involve all those operations that are aimed at correcting
visible errors and noise in the image data. Defects in the data may be in the form
of periodic or random missing lines (line dropouts), line striping, and random
or spike noise. These effects can be identified visually and automatically.
Figure 8.2: Simulated Landsat MSS image of a coastal area (a) and the corresponding Digital Numbers (DN) of a subset of it (b).
[Figure 8.3: the same DN subset affected by line dropouts (b): the defective lines contain zeros.]
The first step in the restoration process is to calculate the average DN-value
per scan line for the entire scene. The average DN-value for each scan line is then
compared with this scene average. Any scan line deviating from the average by
more than a designated threshold value is identified as defective. In regions
of very diverse land cover, better results can be achieved by considering the
histogram for sub-scenes and processing these sub-scenes separately.
The next step is to replace the defective lines. For each pixel in a defective line, a new value can be estimated, for example by averaging the corresponding pixels in the lines directly above and below.
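A sketch of this two-step procedure; the threshold value and the neighbouring-line interpolation are illustrative assumptions.

import numpy as np

def repair_dropouts(img, threshold):
    """Detect scan lines whose mean DN deviates from the scene mean by more
    than `threshold`, and replace them by the average of adjacent lines."""
    out = img.astype(float).copy()
    line_means = out.mean(axis=1)                   # average DN per scan line
    bad = np.abs(line_means - out.mean()) > threshold
    for i in np.flatnonzero(bad):
        above = out[i - 1] if i > 0 else out[i + 1]
        below = out[i + 1] if i < out.shape[0] - 1 else out[i - 1]
        out[i] = (above + below) / 2.0              # interpolate the defective line
    return out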
[Figure 8.4: the DN subset after correction of the line dropouts (b): the defective lines have been replaced by values interpolated from the neighbouring lines.]
[Figure 8.5: the DN subset affected by line striping (b): the lines recorded by one miscalibrated detector show systematically higher values.]
Though several procedures can be adopted to correct this effect, the most popular is histogram matching. Separate histograms corresponding to each detector unit are constructed and matched. Taking one response as the standard, the gain (rate of increase of DN) and offset (relative shift of mean) for all other detector units are suitably adjusted, and new DN-values are computed and assigned.
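A sketch of histogram matching for de-striping, assuming the scan lines of the detectors repeat cyclically (as in Landsat MSS) and that matching the mean and standard deviation of each detector to a reference detector is sufficient; names and parameters are illustrative.

import numpy as np

def destripe(img, n_detectors, ref=0):
    """Adjust gain and offset of each detector's scan lines so that their
    mean and standard deviation match those of a reference detector."""
    out = img.astype(float).copy()
    ref_mean = out[ref::n_detectors].mean()
    ref_std = out[ref::n_detectors].std()
    for d in range(n_detectors):
        lines = out[d::n_detectors]
        gain = ref_std / lines.std()                 # match the spread of DN-values
        offset = ref_mean - gain * lines.mean()      # match the mean DN-value
        out[d::n_detectors] = gain * lines + offset
    return out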
[Figure 8.6: the DN subset affected by random (spike) noise (b): isolated pixels deviate strongly from the values of their neighbours.]
8.3
Atmospheric corrections
All reflected and emitted radiation leaving the Earth's surface is attenuated, mainly due to absorption and scattering by the constituents of the atmosphere (refer to Section 2.1). These atmospherically induced distortions occur twice in the case of reflected sunlight (on the way down and up through the atmosphere) and once in the case of emitted radiation. The distortions are wavelength dependent. Their effect on remote sensing data can be reduced by applying atmospheric correction techniques. These corrections are related to the influence of
haze,
sun angle, and
skylight.
Figure 8.7: Effects of seasonal changes on the solar elevation angle in the northern hemisphere (after Lillesand and Kiefer, 1994).
DN′ = DN / sin(θ)    (8.1)

Here, DN is the input pixel value, DN′ is the output pixel value, and θ is the solar elevation angle. Note that since the angle θ is smaller than 90°, sin(θ) will be smaller than 1 and DN′ will be larger than DN.
When multitemporal data sets of the same area are available, a relative sun
angle correction can be performed. In such cases, the image with higher sun
elevation angle is taken as a reference and the radiometric values of the other
image are adjusted to it.
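Both corrections can be sketched in a few lines of Python/NumPy (the DN array and the elevation angles are assumptions for illustration):

import numpy as np

def sun_angle_correction(dn, sun_elevation_deg):
    """Normalize DN-values to an overhead sun, following Equation 8.1."""
    return dn / np.sin(np.radians(sun_elevation_deg))

def relative_sun_angle_correction(dn, sun_elev_deg, ref_sun_elev_deg):
    """Adjust an image to a reference image acquired at a higher solar elevation."""
    return dn * np.sin(np.radians(ref_sun_elev_deg)) / np.sin(np.radians(sun_elev_deg))

# Example: a winter scene (sun at 25 degrees) adjusted to a summer
# reference scene (sun at 55 degrees); values are hypothetical.
band = np.array([[40.0, 52.0], [47.0, 61.0]])
print(sun_angle_correction(band, 25.0))            # absolute correction
print(relative_sun_angle_correction(band, 25.0, 55.0))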
Summary
Radiometric corrections constitute an important step in the pre-processing of remotely sensed data. They comprise cosmetic corrections and corrections that reduce the influence of atmospheric and illumination parameters. Atmospheric corrections are particularly important for generating image mosaics and for comparing multitemporal remote sensing data. Such corrections should, however, be applied with care, and only after understanding the physical principles behind them.
Questions
The following questions can help you to study Chapter 8.
1. Should radiometric corrections be performed before or after geometric corrections, and why?
2. Why is the effect of haze more pronounced in shorter wavelength bands?
3. In relative sun angle correction, why is it preferred to take the image with
a larger sun angle as the reference image?
4. In a change detection study, if there were images from different years but
from the same season, would it still be necessary to perform atmospheric
corrections? Why or why not?
Chapter 9
Geometric aspects
9.1
Introduction
photogrammetry [25]. Deriving 3D measurements from radar data is in the discipline of interferometry and radargrammetry. This chapter deals with a limited
number of concepts and topics. In Section 9.2 relief displacement is introduced.
Section 9.3 and Section 9.4 introduce 2D and 3D approaches respectively. 2D
approaches relate to methods and techniques that neglect terrain elevation differences. In 3D approaches, methods for correction of relief displacement are
introduced. If a DTM is not available, 3D coordinates can be measured directly
from stereo pairs. Note that spatial referencing in general and an introduction
to map projections have been introduced in Principles of GIS [6].
9.2

Relief displacement

A characteristic of most sensor systems is the distortion of the geometric relationship between the image data and the terrain, caused by relief differences on the ground. This effect is most apparent in aerial photographs and airborne scanner data. The effect of relief displacement is illustrated in Figure 9.1. Consider the situation on the left, in which a true vertical aerial photograph is taken of a flat terrain. The distances (A–B) and (a–b) are proportional to the total width of the scene and its image on the negative, respectively. In the left-hand situation, using the scale factor, we can compute (A–B) from a measurement of (a–b) on the negative. In the right-hand situation, there is significant terrain relief difference. As you can now observe, the distance between a and b in the
9.3

Two-dimensional approaches

Figure 9.3: Coordinate system of the image defined by rows and columns (a) and map coordinate system with x- and y-axes (b).
9.3.1 Georeferencing

The simplest way to link an image to a map projection system is to use a geometric transformation. A transformation is a function that relates the coordinates of two systems. A transformation relating (x, y) to (i, j) is typically defined by linear equations, such as x = 3 + 5i and y = -2 + 2.5j.

Using the above transformation, for example, image position (i = 5, j = 8) relates to map coordinates (x = 28, y = 18). Once such a transformation has been determined, the map coordinates for each image pixel can be calculated. The resulting image is called a georeferenced image. It allows the superimposition of vector data and the storage of the data in map coordinates when applying on-screen digitizing. Note that the image as such remains stored in the original (i, j) raster structure, and that its geometry is not altered.
The process of georeferencing includes two steps: selection of the appropriate type of transformation, and determination of the transformation parameters. The type of transformation depends mainly on the sensor-platform system used. For aerial photographs (of a flat terrain) usually a so-called perspective transformation is used to correct for the effect of tilt and roll (Section 3.2). A more general transformation is the polynomial transformation, which enables 1st, 2nd up to nth order transformations. In many situations a 1st order transformation is adequate. A 1st order transformation relates map coordinates (x, y) to image coordinates (i, j) as follows:

x = a + bi + cj    (9.2)
y = d + ei + fj    (9.3)

Equations 9.2 and 9.3 require that six parameters (a . . . f) be determined. The transformation parameters can be determined by means of ground control points
Table 9.1: Five ground control points with image coordinates (i, j), map coordinates (x, y), transformed coordinates (xc, yc) and residual errors (dx, dy).

   i     j     x     y      xc        yc        dx       dy
  254    68   958   155   958.552   154.935    0.552   -0.065
  149    22   936   151   934.576   150.401   -1.424   -0.599
   40   132   916   176   917.732   177.087    1.732    1.087
   26   269   923   206   921.835   204.966   -1.165   -1.034
  193   228   954   189   954.146   189.459    0.146    0.459
(GCPs). GCPs are points that can be clearly identified in the image and in a source that is in the required map projection system. One possibility is to use topographical maps of an adequate scale. The operator then needs to identify identical points on both sources, e.g., road crossings, waterways, or typical morphological structures. Another possibility is to identify points in the image and to measure their coordinates in the field by satellite positioning.

The result of GCP selection is a set of related points (Table 9.1). To solve the above equations, at least three GCPs are required. However, to calculate the error of the transformation, more points are required. Based on the set of GCPs, the computer calculates the optimal transformation parameters using a best-fit procedure. The errors that remain after transformation are called residual errors. Their magnitude is an indicator of the quality of the transformation.

Table 9.1 shows an example in which five GCPs have been used. Based on the given (i, j) and (x, y), one can determine the following transformation:

x = 902.76 + 0.206i + 0.051j, and
y = 152.579 - 0.044i + 0.199j.
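This best-fit procedure can be reproduced with an ordinary least-squares solution; a sketch in Python/NumPy using the GCPs of Table 9.1 (variable names are illustrative):

import numpy as np

# Ground control points from Table 9.1: image (i, j) and map (x, y).
i = np.array([254, 149, 40, 26, 193], dtype=float)
j = np.array([68, 22, 132, 269, 228], dtype=float)
x = np.array([958, 936, 916, 923, 954], dtype=float)
y = np.array([155, 151, 176, 206, 189], dtype=float)

# Design matrix for the 1st-order (affine) transformation of
# Equations 9.2 and 9.3: x = a + b*i + c*j and y = d + e*i + f*j.
A = np.column_stack([np.ones_like(i), i, j])

(a, b, c), *_ = np.linalg.lstsq(A, x, rcond=None)
(d, e, f), *_ = np.linalg.lstsq(A, y, rcond=None)
print(a, b, c)   # approx. 902.76, 0.206, 0.051
print(d, e, f)   # approx. 152.579, -0.044, 0.199

# Transformed coordinates and residuals (columns xc, yc, dx, dy of Table 9.1).
xc, yc = A @ [a, b, c], A @ [d, e, f]
dx, dy = xc - x, yc - y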
The residual errors can be summarized by the root mean square (RMS) error. The RMS error in the x-direction, mx, is calculated from the residuals δxi of the n GCPs:

mx = sqrt( (1/n) · Σi δxi² )    (9.4)

For the y-direction, a similar equation can be used to calculate my. The overall error, mtotal, is calculated by:

mtotal = sqrt( mx² + my² )    (9.5)

For the example data set given in Table 9.1, the residuals have been calculated; the respective values of mx, my and mtotal are 1.159, 0.752 and 1.381. The RMS error is a quantitative method to check the accuracy of the transformation. However, the RMS error does not take into account the spatial distribution of the GCPs: it is only valid for the area that is bounded by the GCPs. In the selection of GCPs, therefore, you should preferably include points located near the edges of an image.
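Continuing with the residuals of Table 9.1, the RMS errors follow directly (a small NumPy check of the figures quoted above):

import numpy as np

dx = np.array([0.552, -1.424, 1.732, -1.165, 0.146])
dy = np.array([-0.065, -0.599, 1.087, -1.034, 0.459])

m_x = np.sqrt(np.mean(dx ** 2))     # Equation 9.4 -> approx. 1.159
m_y = np.sqrt(np.mean(dy ** 2))     # approx. 0.752
m_total = np.hypot(m_x, m_y)        # Equation 9.5 -> approx. 1.381
print(m_x, m_y, m_total)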
9.3.2 Geocoding

The previous section explained that two-dimensional coordinate systems, for example an image system and a map projection system, can be related using geometric transformations. This georeferencing approach is useful in many situations. In other situations, however, a geocoding approach, in which the image grid itself is also transformed, is required. Geocoding is required when different images need to be combined or when the image data are used in a GIS environment that requires all data to be stored in the same map projection. The effect of georeferencing and geocoding is well illustrated by Figure 9.5.

Geocoding is georeferencing with subsequent resampling of the image raster. This means that a new image raster is defined along the xy-axes of the selected map projection. The geocoding process comprises two main steps: first, each new raster element is projected (using the transformation parameters) onto the original image; second, a (DN) value for the new pixel is determined and stored. As the orientation and size of the original (input) and required (output) rasters differ, there is no exclusive one-to-one relationship between the elements (pixels) of these rasters. Therefore, interpolation methods are required to decide on the new value of each pixel. Various resampling algorithms are available (Figure 9.4). The main methods are nearest neighbour, bilinear interpolation and cubic convolution. In nearest neighbour resampling, the output pixel is assigned the value of the nearest pixel in the original image. In bilinear interpolation, the weighted mean is calculated over the four nearest pixels in the original image. Cubic convolution applies a polynomial approach based on the values of the 16 surrounding pixels. The choice of resampling algorithm depends, among others, on the ratio between input and output pixel size and on the purpose of the resampled image data.
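The two simpler resampling methods can be sketched for a single output position (a hedged illustration in Python/NumPy; in a real geocoding run the fractional position (i, j) would come from projecting each new raster element back onto the original image):

import numpy as np

def nearest_neighbour(img, i, j):
    """Assign the value of the nearest input pixel (i, j are fractional positions)."""
    return img[int(round(i)), int(round(j))]

def bilinear(img, i, j):
    """Weighted mean of the four nearest input pixels."""
    i0, j0 = int(np.floor(i)), int(np.floor(j))
    di, dj = i - i0, j - j0
    return ((1 - di) * (1 - dj) * img[i0, j0] +
            (1 - di) * dj       * img[i0, j0 + 1] +
            di       * (1 - dj) * img[i0 + 1, j0] +
            di       * dj       * img[i0 + 1, j0 + 1])

img = np.array([[10., 20.], [30., 40.]])
print(nearest_neighbour(img, 0.4, 0.7))   # 20.0
print(bilinear(img, 0.4, 0.7))            # 25.0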
9.4

Three-dimensional approaches

Unlike the previous section, which was concerned only with 2D coordinates, in this section the third dimension is also relevant. The following processes will be explained in the subsequent sections:

• Monoplotting, an approach to correct for terrain relief during the digitizing of terrain features from aerial photos, resulting in relatively accurate (x, y) coordinates.

• Orthoimage production, an approach to correct image data for terrain relief and store the image in a specific map projection. Orthoproducts can be used as a backdrop to other data, or used to directly determine the (x, y) geometry of the features of interest. Today, many organizations and companies sell orthoproducts.

• Stereoplotting, which is used to extract 3D information from stereo pairs. Stereo pairs allow for the determination of (x, y, z) positions. 3D information extraction requires an orientation process. Examples of 3D products are a Digital Terrain Model (DTM) and large-scale databases, for instance related to urban constructions, roads, or cadastral parcels.
9.4.1 Monoplotting
Suppose you need to derive accurate positions from an aerial photograph expressed in a specific map projection. This can be achieved for a flat terrain using
a true vertical photograph and a georeferencing approach. However, if there are
significant terrain relief differences you need to correct for relief displacement.
For this purpose the method of monoplotting has been developed.
Monoplotting is based on reconstruction of the position of the camera at the
moment of exposure relative to a Digital Terrain Model (DTM) of the terrain.
This is achieved by identification of a number (at least four) of ground control
points (GCPs) for which both the photo and map coordinates must be known.
The applied DTM should be stored in the required map projection system and
the heights should be expressed in an adequate vertical reference system. When
digitizing features from the photograph, the computer uses the DTM to calculate
9.4.3 Stereoplotting

The basic process of stereoplotting is to form a stereo model of the terrain and to digitize features by measurements made in this stereo model. A stereo model is a special combination of two photographs of the same area taken from different positions. For this purpose, aerial photographs are usually flown with 60% overlap between subsequent photos. Stereo pairs can also be derived from other sensors, such as multispectral scanners and imaging radar.

The measurements made in a stereo model refer to a phenomenon called parallax. Parallax refers to the fact that an object photographed from different positions has different relative positions in the two images. Since this effect is directly related to relative height, measurement of parallax differences yields height information (Figure 9.7).

A stereo model enables (parallax) measurement using a special (3D) cursor. If the stereo model is appropriately oriented (see below), the parallax measurements yield (x, y, z) coordinates. To view and navigate in a stereo model, various hardware solutions may be used. So-called analogue and analytical plotters were used in the past. These instruments are called plotters because the features delineated in the stereo model were directly plotted onto film (for reproduction). Today, digital photogrammetric workstations (DPWs) are increasingly used. Stereovision, the impression of depth, is realized in a DPW using a dedicated combination of monitor and special spectacles (e.g., polarized).

To form a stereo model for 3D measurements, the stereo model needs to be oriented. The orientation process involves three steps:

1. First, the relation between the film-photo and the camera system is defined. This is the so-called inner orientation. It requires identification of the fiducial marks on the photos and the exact focal length.
Summary

This chapter has introduced some general geometric aspects of dealing with image data. A basic issue in dealing with remote sensing data is terrain relief, which can be neglected (2D approaches) or taken into account (3D approaches). In both cases there is the possibility to keep the image data stored in their (i, j) system and relate them to other data through coordinate transformations (georeferencing and monoplotting). The other possibility is to change the image raster into a specific map projection system using resampling techniques (geocoding and orthoimage production). A true 3D approach is stereoplotting, which applies parallax differences, as observed in stereo pairs, to measure (x, y, z) coordinates of terrain and objects.
Questions
The following questions can help you to study Chapter 9.
1. Suppose your organization develops a GIS application for road maintenance. What would be the consequences of using georeferenced versus
geocoded image data as a backdrop?
2. Think of two situations in which image data are applied and in which you
need to take relief displacement into account.
3. For a transformation of a specific image into a specific coordinate system,
an mtotal error of two pixels is given. What additional information do you
need to assess the quality of the transformation?
4. Calculate the map position (x, y) for image position (10, 20) using the following two equations: x = 10 + 5i - j and y = 5 + 2i + 2j.
5. Explain the purpose of monoplotting. What inputs do you need?
Chapter 10
Image enhancement and
visualisation
10.1
Introduction
Many of the figures in the previous chapters have presented examples of remote sensing image data. There is a need to visualize image data at most stages of the remote sensing process. For example, the procedures for georeferencing, explained in Chapter 9, cannot be performed without visual examination to measure the location of ground control points on the image. It is, however, in the process of information extraction that visualization plays the most important role. This is particularly so in the case of visual interpretation (Chapter 11), but also during automated classification procedures (Chapter 12).

Because many remote sensing projects make use of multispectral data, this chapter focuses on the visualization of colour imagery. An understanding of how we perceive colour is required at two main stages in the remote sensing process. In the first instance, it is required to produce optimal pictures from (multispectral) image data on the computer screen or as (printed) hard-copy. Thereafter, the theory of colour perception plays an important role in the subsequent interpretation of these pictures. Section 10.2 deals with the theory of colour perception and colour definition. Section 10.3 gives the basic principles you need to understand and interpret the colours of a displayed image. The last section (Section 10.5) introduces some filter operations for enhancing specific characteristics of the image.
10.2

Perception of colour

Colour perception takes place in the human eye and the associated part of the brain. Colour perception concerns our ability to identify and distinguish colours, which, in turn, enables us to identify and distinguish entities in the real world. It is not completely known how human vision works, or what exactly happens in the eyes and brain before someone decides that an object is, for example, light blue. Some theoretical models, supported by experimental results, are, however, generally accepted. Colour perception theory is applied whenever colours are reproduced, for example in colour photography, TV, printing and computer animation.
(Figure: sensitivity of the human eye for blue, green and red light, plotted against wavelength from 400 to 700 nm.)
In the additive colour scheme, all visible colours can be expressed as combinations of red, green and blue, and can therefore be plotted in a three-dimensional space with R, G and B along the axes. The space is bounded by minimum and maximum values (intensities) for red, green and blue, defining the so-called RGB cube.
(Figure: the RGB colour cube, with black [0,0,0], white [1,1,1], red [1,0,0], green [0,1,0], blue [0,0,1], yellow [1,1,0], cyan [0,1,1] and magenta [1,0,1] at its corners; the achromatic line runs from black to white.)
Figure 10.4 illustrates the correspondence between the RGB and the IHS systems. Although the mathematical model behind this description is tricky, the description itself is, in fact, more natural: light, pale red, for example, is easier to imagine than a lot of red with considerable amounts of green and blue. The result, however, is the same. Since the IHS scheme deals with colour perception, which is
10.3

In this section, various ways of visualizing single-band and multi-band image data are introduced. The section starts with an explanation of the concept of the image histogram. The histogram plays a crucial role in realizing optimal contrast of images. An advanced subsection deals with the application of the RGB-IHS transformation to integrate different types of image data.
10.3.1 Histograms

A number of important characteristics of a single-band image, such as a panchromatic satellite image, a scanned monochrome photograph or a single band from a multi-band image, are found in the histogram of that image. The histogram describes the distribution of the pixel values (Digital Numbers, DN) of that image. In the usual case, the DN-values range between 0 and 255. A histogram indicates the number of pixels for each value in this range; in other words, the histogram contains the frequencies of the DN-values in an image. Histogram data can be represented either in tabular form or graphically. The tabular representation (Table 10.1) normally shows five columns. From left to right these are:

• DN: Digital Numbers, in the range [0 . . . 255]
• Npix: the number of pixels in the image with this DN (frequency)
• Perc: frequency as a percentage of the total number of image pixels
• CumNpix: cumulative number of pixels in the image with values less than or equal to DN
• CumPerc: cumulative frequency as a percentage of the total number of image pixels

Histogram data can be further summarized in some characteristic statistics: mean, standard deviation, minimum and maximum, as well as the 1% and 99% values (Table 10.2). The standard deviation is a statistical measure of the spread of the values around the mean. The 1% value, for example, defines the cut-off value below which only 1% of all the values are found. The 1% and 99% values can be used to define an optimal stretch for visualization.
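For illustration, the columns of such a table can be computed directly from a band with NumPy (the random test image is an assumption; only the shape of the computation matters):

import numpy as np

band = np.random.randint(14, 174, size=(270, 271)).astype(np.uint8)  # hypothetical 8-bit band

npix = np.bincount(band.ravel(), minlength=256)   # Npix per DN in [0, 255]
perc = 100.0 * npix / npix.sum()                  # Perc
cum_npix = np.cumsum(npix)                        # CumNpix
cum_perc = 100.0 * cum_npix / npix.sum()          # CumPerc

# The 1% and 99% cut-off values used for an optimal stretch.
low = np.searchsorted(cum_perc, 1.0)    # first DN whose CumPerc reaches 1%
high = np.searchsorted(cum_perc, 99.0)  # first DN whose CumPerc reaches 99%
print(low, high)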
Table 10.1: Example histogram in tabular form (rows for some DN-values are omitted).

  DN    Npix    Perc   CumNpix   CumPerc
   0       0    0.00         0      0.00
  13       0    0.00         0      0.00
  14       1    0.00         1      0.00
  15       3    0.00         4      0.01
  16       2    0.00         6      0.01
  51      55    0.08       627      0.86
  52      59    0.08       686      0.94
  53      94    0.13       780      1.07
  54     138    0.19       918      1.26
 102    1392    1.90     25118     34.36
 103    1719    2.35     26837     36.71
 104    1162    1.59     27999     38.30
 105    1332    1.82     29331     40.12
 106    1491    2.04     30822     42.16
 107    1685    2.31     32507     44.47
 108    1399    1.91     33906     46.38
 109    1199    1.64     35105     48.02
 110    1488    2.04     36593     50.06
 111    1460    2.00     38053     52.06
 163     720    0.98     71461     97.76
 164     597    0.82     72058     98.57
 165     416    0.57     72474     99.14
 166     274    0.37     72748     99.52
 173       3    0.00     73100    100.00
 174       0    0.00     73100    100.00
 255       0    0.00     73100    100.00
Table 10.2: Summary statistics for the example histogram given above.

  StdDev   Min   Max   1%-value   99%-value
   27.84    14   173         53         165
Figure 10.5: Standard histogram and cumulative histogram corresponding with Table 10.1.
Using the original image values to control the monitor values usually results in an image with little contrast, since only a limited number of grey values are used. In the example introduced in the previous section (Table 10.2), only 173 - 14 = 159 out of 255 grey levels would be used. To optimize the range of grey values, a transfer function maps DN-values into grey shades on the monitor (Figure 10.7). The transfer function can be chosen in a number of ways. Linear contrast stretch is obtained by finding the DN-values where the cumulative histogram of the image passes 1% and 99%: DNs below the 1% value become black (0), DNs above the 99% value become white (255), and grey levels for the intermediate values are found by linear interpolation. Histogram equalization, or histogram stretch, shapes the transfer function according to the cumulative histogram. As a result, the DNs in the image are distributed as equally as possible over the available grey levels (Figure 10.7).
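Both transfer functions can be sketched in a few lines of NumPy (an illustrative sketch, not a full implementation; the percentile-based stretch mirrors the 1%/99% rule described above):

import numpy as np

def linear_stretch(band, low_pct=1.0, high_pct=99.0):
    """Linear contrast stretch between the 1% and 99% cut-off values."""
    low, high = np.percentile(band, [low_pct, high_pct])
    out = (band.astype(float) - low) / (high - low) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

def histogram_equalization(band):
    """Shape the transfer function according to the cumulative histogram."""
    npix = np.bincount(band.ravel(), minlength=256)
    cum = np.cumsum(npix) / band.size             # cumulative frequency in [0, 1]
    transfer = np.round(255.0 * cum).astype(np.uint8)
    return transfer[band]                         # look-up table applied per pixel

band = np.random.randint(14, 174, size=(100, 100)).astype(np.uint8)
stretched = linear_stretch(band)
equalized = histogram_equalization(band)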
(Figure 10.7: transfer functions mapping DN to grey shade for three cases: no stretch, linear stretch and histogram stretch.)
10.4

Colour composites

The previous section explained the visualization of single-band images. When dealing with a multi-band image, any combination of three bands can, in principle, be used as input to the RGB channels of the monitor. The choice should be made based on the intended application of the image data. To increase contrast, the three bands can be subjected to linear contrast stretch or histogram equalization.

Sometimes a true colour composite, where the RGB channels relate to the red, green and blue wavelength bands of a scanner, is made. A popular choice is to link RGB to the near-infrared, red and green bands, respectively, to yield a false colour composite (Figure 10.9). The results look similar to prints of colour-infrared photography (CIR). As explained in Chapter 4, the three layers in a false colour infrared film are sensitive to the NIR, R and G parts of the spectrum and are made visible as R, G and B, respectively, in the printed photo. The most striking characteristic of false colour composites is that vegetation appears in a red-purple colour: in the visible part of the spectrum, plants reflect mostly green light (this is why plants appear green), but their infrared reflection is even higher. Therefore, vegetation in a false colour composite shows as a combination of some blue but even more red, resulting in a reddish tint of purple.

Depending on the application, band combinations other than true or false colour may be used. Land-use categories can often be distinguished quite well by assigning a combination of Landsat TM bands 5-4-3 or 4-5-3 to RGB. Combinations that display the near-infrared band as green show vegetation in a green colour and are, therefore, called pseudo-natural colour composites (Figure 10.9).
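A sketch of how such composites are assembled (the band arrays and the simple min-max scaling are assumptions for illustration):

import numpy as np

def to_display(band):
    """Scale a band linearly to the 0-255 display range."""
    b = band.astype(float)
    return np.clip(255 * (b - b.min()) / (b.max() - b.min() + 1e-9), 0, 255).astype(np.uint8)

# Hypothetical bands of a partly vegetated scene.
green = np.array([[40., 60.], [35., 55.]])
red = np.array([[30., 70.], [25., 65.]])
nir = np.array([[120., 40.], [110., 35.]])

# False colour composite: NIR, red and green linked to the R, G and B channels.
false_colour = np.dstack([to_display(nir), to_display(red), to_display(green)])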
(Figure 10.9: pseudo-natural colour composite (3,5,2).)
combinations of data, for example TM with SPOT Pan, SPOT XS with ERS-SAR, multispectral satellite imagery with B/W aerial photography, et cetera.

(Figure: fusion scheme in which the three SPOT XS bands are transformed from RGB to IHS, the intensity component is replaced by the contrast-stretched SPOT Pan band, and the result is transformed back to RGB to yield the colour composite.)
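A hedged sketch of this fusion scheme, using matplotlib's HSV conversion as a stand-in for the IHS transform (the HSV "value" approximates intensity; the array names and scaling are assumptions):

import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def fuse(xs_rgb, pan):
    """Replace the intensity of a colour composite by a sharper panchromatic band.

    xs_rgb: (rows, cols, 3) multispectral composite scaled to [0, 1]
    pan:    (rows, cols) panchromatic band scaled to [0, 1], assumed already
            resampled to the same grid and contrast-stretched
    """
    hsv = rgb_to_hsv(xs_rgb)
    hsv[..., 2] = pan          # V (value) stands in for the intensity component
    return hsv_to_rgb(hsv)

xs = np.random.rand(4, 4, 3)   # hypothetical SPOT XS composite
pan = np.random.rand(4, 4)     # hypothetical SPOT Pan band
fused = fuse(xs, pan)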
10.5

Filter operations

A further step in producing optimal images for interpretation is the use of filter operations. Filter operations are local image transformations: a new image is calculated in which the value of each pixel depends on the values of its neighbours in the original image. Filter operations are usually carried out on a single band. Filters are used for spatial image enhancement, for example to reduce noise or to sharpen blurred images. Filter operations are also used extensively in various semi-automatic procedures that are outside the scope of this chapter.

To define a filter, a kernel is used. A kernel defines the output pixel value as a linear combination of the pixel values in a neighbourhood around the corresponding position in the input image. For a specific kernel, a so-called gain can be calculated as follows:

gain = 1 / Σ ki    (10.1)

The sum Σ ki runs over all kernel coefficients ki. In general, the sum of the kernel coefficients, after multiplication by the gain, should be equal to 1 to result in an

Figure 10.11: Input and output result of a filtering operation: the neighbourhood in the original image determines the value of the output. In this situation a smoothing filter was applied.
1  1  1
1  1  1
1  1  1

In the above kernel, all pixels have an equal contribution to the calculation of the result. It is also possible to define a weighted average. To emphasize the value of the central pixel, a larger value can be put in the centre of the kernel; as a result, less drastic blurring takes place. In addition, it is necessary to take into account that the horizontal and vertical neighbours influence the result more strongly than the diagonal ones, because the direct neighbours are closer to the central pixel. The resulting kernel, for which the gain is 1/16 = 0.0625, is given in Table 10.4.
1  2  1
2  4  2
1  2  1
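Applying such a kernel is a convolution; a minimal sketch using SciPy with the weighted smoothing kernel of Table 10.4 (the test band is an assumption):

import numpy as np
from scipy.ndimage import convolve

# Kernel coefficients sum to 16, so the gain is 1/16 = 0.0625.
kernel = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]]) / 16.0

band = np.random.randint(0, 256, size=(50, 50)).astype(float)
smoothed = convolve(band, kernel, mode='nearest')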
An edge-enhancing kernel combines negative neighbour weights with a large positive centre weight:

-1  -1  -1
-1  16  -1
-1  -1  -1

Figure 10.12: Original image (middle), edge enhanced image (left) and smoothed image (right).
Summary

The way we perceive colour is most intuitively described by the hue component of the IHS colour space. The colour space used to describe colours on a computer monitor is the RGB space.

When displaying an image on screen (or as hard-copy), many choices need to be made: the selection of bands, the sequence in which these are linked to the red, green and blue channels of the monitor, the use of stretching techniques, and the possible use of (spatial) filtering techniques.

The histogram, and the derived cumulative histogram, is the basis for all stretching methods. Stretching, or contrast enhancement, is realized using transfer functions.

Filter operations are based on the use of a kernel. The weights of the coefficients in the kernel determine the effect of the filter, which can be, for example, to smooth or to sharpen the original image.
Questions

The following questions can help you to study Chapter 10.

1. How many possibilities are there to visualize a 4-band image using a computer monitor?

2. You are shown a picture in which grass looks green and houses are red; what is your conclusion? Now you are shown a picture in which grass shows as purple and houses are black; what is your conclusion now?

3. What would be a reason for not applying histogram equalization by default to all image data?
4. Can you think of a situation in your own context where you would probably use filters to optimize interpretation of image data?
4. Which technique is used to maximize the range of colours (or grey values)
when displaying an image?
5. Using an example, explain how a filter works.
Chapter 11
Visual image interpretation
11.1
Introduction
Up to now, we have been dealing with the acquisition of image data. The data acquired still need to be interpreted (or analysed) to extract the required information. In general, information extraction methods from remote sensing imagery can be subdivided into two groups:

• Information extraction based on visual analysis or interpretation of the data. Typical examples of this approach are visual interpretation methods for land use or soil mapping. Also the generation and updating of topographic maps from aerial photographs is based on visual interpretation. Visual image interpretation is introduced in this chapter.

• Information extraction based on semi-automatic processing by the computer. Examples include the automatic generation of DTMs, image classification and the calculation of surface parameters. Image classification is introduced in Chapter 12.

The most intuitive way to extract information from remote sensing images is by visual image interpretation, which is based on man's ability to relate colours and patterns in an image to real-world features. Chapter 10 has explained the different methods used to visualize remote sensing image data.

In some situations, pictures are studied to find evidence of the presence of features, for example when studying natural vegetation patterns. Most often, the result of the interpretation is made explicit by digitizing the geometric and thematic data of relevant objects (mapping). The digitizing of 2D features (points, lines and areas) is carried out using a digitizer tablet or by on-screen digitizing. 3D features interpreted in stereo pairs can be digitized using stereoplotters or digital photogrammetric workstations.
In Section 11.2 some theory about image understanding is explained. Visual image interpretation is used to produce spatial information in all of ITC's fields of interest: urban mapping, soil mapping, geomorphological mapping, forest mapping, natural vegetation mapping, cadastral mapping, land use mapping and many others. As visual image interpretation is application specific, it is illustrated by two examples (soil mapping and land cover mapping) in Section 11.3. The last section (11.4) addresses some aspects of quality.
11.2
316
tacles comprise one red and one green glass. This method is known as the
anaglyph system and is particularly suited to viewing overlapping images on
a computer screen. An approach used in digital photogrammetric systems is to
apply polarization for the left and right images. Polarized spectacles make the
left image visible to the left eye and the right image to the right eye.
11.3
Figure 11.4: Panchromatic photograph to be interpreted.

the interpreter finds and draws master lines dividing major landscapes (mountains, hill land, plateau, valley, . . . ). Each landscape is then divided into relief types (e.g., sharply dissected plateau), each of which is further divided by lithology (e.g., fine-bedded shales and sandstones), and finally by detailed landform (e.g., scarp slope). The landform consists of a topographic form, a geomorphic position and a geochronological unit, which together determine the environment in which the soil formed. A legend category usually comprises many areas (polygons) with the same photo-interpretation characteristics. Figure 11.4 shows a photograph and
Figure 11.5: Photo-interpretation transparency related to the aerial photo shown in Figure 11.4.

Figure 11.5 shows the interpretation units that resulted from its stereo interpretation.

In the next phase, a sample area of the map is visited in the field to study the soil. The sampled area covers between 10 and 20% of the total area and comprises all legend classes introduced in the previous stage. The soils are described in the field, and samples are taken for laboratory analysis, to determine their characteristics (layering, particle-size distribution, density,
(Table: aerial photo-interpretation legend with columns Landscape, Relief, Lithology, Landform and API Code. The Hilland landscape, with a dissected-ridge relief in loess, colluvium from loess and loess over old river alluvium, comprises landforms such as summit, shoulder & backslope, scarp, toe slope, slope and bottom (API codes Hi111 to Hi411); the high terrace and old floodplain in old alluvium comprise tread, abandoned channel and abandoned, channelized floodplain (API codes Pl311 to Pl411).)
(Figure: a scene with units labelled Buildings, Road, Grass and Trees, shown in panels (a) and (b).)
Level 1                  Level 2                            Level 3
1. Artificial surfaces   1.1. Urban fabric                  1.1.2. Discontinuous urban fabric
                         1.2. Industrial, commercial        1.2.1. Industrial or commercial units
                              and transport units           1.2.3. Port areas
                                                            1.2.4. Airports
                         1.3. Mine, dump and                1.3.1. Mineral extraction sites
                              construction sites
                         1.4. Artificial non-agricultural
                              vegetated areas
Level 1                  Level 2                            Level 3
2. Agricultural areas    2.1. Arable land                   2.1.1. Non-irrigated arable land
                                                            2.1.2. Permanently irrigated land
                         2.2. Permanent crops               2.2.2. Fruit trees and berry plantations
                                                            2.2.3. Olive groves
                         2.3. Pastures                      2.3.1. Pastures
                         2.4. Heterogeneous                 2.4.1. Annual crops associated with
                              agricultural areas                   permanent crops
                                                            2.4.2. Complex cultivation patterns
                                                            2.4.3. Land principally occupied by
                                                                   agriculture, with significant
                                                                   areas of natural vegetation
                                                            2.4.4. Agro-forestry
3. Forests and semi-     ...                                ...
   natural areas
4. Wetlands              ...                                ...
5. Water bodies          ...                                ...
Table 11.5: CORINE's extended description for class 1.3.1 (Mineral extraction sites). Source: [5]
11.4
Quality aspects
definition (crisp or ambiguous) and the instructions and methods used. Two examples are given here to give you an intuitive idea. Figure 11.8 shows two interpretation results for the same area; note that the results differ in the total number of objects (map units) and in the (line) generalization. Figure 11.9 compares 13 individual interpretation results of a geomorphological interpretation. As in the previous example, large differences are found along the boundaries. In addition, you can also conclude that for some objects (map units) there was no agreement on the thematic attribute.
Summary

Visual image interpretation is one of the methods used to extract information from remote sensing image data. For that purpose, images need to be visualized on screen or in hard-copy. The human vision system is used to interpret the colours and patterns in the picture. Spontaneous recognition and logical inference (reasoning) are distinguished.

Interpretation keys or guidelines are required to instruct the image interpreter. In such guidelines, the (seven) interpretation elements can be used to describe how to recognize certain objects. Guidelines also provide a classification scheme, which defines the thematic classes of interest and their (hierarchical) relationships. Finally, guidelines give rules on the minimum size of objects to be included in the interpretation.

When dealing with a new area or a new application, no guidelines are available, and an iterative approach is required to establish the relationship between features observed in the picture and the real world.

In all interpretation and mapping processes the use of ground observations is essential: (i) to acquire knowledge of the local situation, (ii) to gather data for areas that cannot be mapped from the images, and (iii) to check the result of the interpretation.

The quality of the result of visual image interpretation depends on the experience and skills of the interpreter, the appropriateness of the image data applied and the quality of the guidelines being used.
Questions

The following questions can help you to study Chapter 11.

1. What is the relationship between image visualization and image interpretation?

2. Describe (for a colleague) how to recognize a road in an aerial photo (make use of the interpretation elements).

3. Why is it necessary to have a sound conceptual model of how soils form in the landscape to apply the aerial photo-interpretation method presented in Section 11.3.1? What are the advantages of this approach in terms of efficiency and thematic accuracy, compared to an analysis based on interpretation elements only?
4. Describe a relatively simple method to check the quality (in terms of replicability) of visual image interpretation.
5. Which products in your professional environment are based on visual image interpretation?
6. Consider the CORINE nomenclature; identify three classes which can be
accurately mapped; also identify three classes that can be expected to be
exchanged (confused) with other classes.
Chapter 12
Digital image classification
12.1
Introduction
operates in the feature space. Section 12.3 gives an overview of the classification process, the steps involved and the choices to be made. The result of an image classification needs to be validated to assess its accuracy (Section 12.4). Finally, two major problems in image classification are addressed in Section 12.5.
12.2
(Figure 12.1: a single pixel in a multi-band image and its DN-values in bands 1, 2 and 3.)
(Figure 12.2: plotting the feature vector of a pixel in a two-dimensional (v1, v2) and a three-dimensional (v1, v2, v3) feature space.)

Similarly, this approach can be visualized for a three-band situation in a three-dimensional graph. A graph that shows the values of the feature vectors is called a feature space or feature space plot. Figure 12.2 illustrates how a feature vector (related to one pixel) is plotted in the feature space for two and three bands. Usually we only find two-axis feature space plots.

Note that plotting values is difficult for a four- or more-dimensional case. A practical solution when dealing with four or more bands is that all the possible
combinations of two bands are plotted separately. For four bands, this already
yields six combinations: bands 1 and 2, 1 and 3, 1 and 4, bands 2 and 3, 2 and 4,
and bands 3 and 4.
Plotting the combinations of the values of all the pixels of one image yields a
large cluster of points. Such a plot is also referred to as a scatterplot (Figure 12.3).
A scatterplot provides information about the combinations of pixel values that
occur within the image. Note that some combinations will occur more frequently
and can be visualized by using intensity or colour.
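A scatterplot of this kind can be produced with a two-dimensional histogram, in which frequent DN combinations get a brighter colour (an illustrative sketch; the two synthetic, correlated bands are assumptions):

import numpy as np
import matplotlib.pyplot as plt

band1 = np.random.normal(100, 20, 10000)                 # hypothetical DN-values, band 1
band2 = band1 * 0.6 + np.random.normal(40, 10, 10000)    # correlated band 2

plt.hist2d(band1, band2, bins=256, range=[[0, 255], [0, 255]], cmap='viridis')
plt.xlabel('band 1 (DN)')
plt.ylabel('band 2 (DN)')
plt.show()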
Distance in the feature space is expressed as Euclidian distance, and the units are DN (as this is the unit of the axes). In a two-dimensional feature space the distance can be calculated according to Pythagoras' theorem. In the situation of Figure 12.4, the distance between (10, 10) and (40, 30) equals the square root of (40 - 10)² + (30 - 10)². For three or more dimensions, the distance is calculated in a similar way.
Figure 12.4: Euclidian distance between two points is calculated using Pythagoras' theorem.
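The same distance in NumPy (a one-line check of the example above):

import numpy as np

p, q = np.array([10, 10]), np.array([40, 30])
distance = np.linalg.norm(p - q)   # sqrt(30**2 + 20**2), approx. 36.06 DN
print(distance)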
Figure 12.5: Feature space showing the respective clusters of six classes (grass, water, trees, houses, bare soil and wheat); note that each class occupies a limited area in the feature space.
12.3

The process of image classification (Figure 12.6) typically involves five steps:

1. Selection and preparation of the image data. Depending on the cover types to be classified, the most appropriate sensor, the most appropriate date(s) of acquisition and the most appropriate wavelength bands should be selected (Section 12.3.1).

(Figure 12.6: the classification process, in which remote sensing data and training data serve as input to the actual classification step, producing the classification result.)

2. Definition of the clusters in the feature space. Here two approaches are possible: supervised classification and unsupervised classification. In a supervised classification, the operator defines the clusters during the training process (Section 12.3.2); in an unsupervised classification, a clustering algorithm automatically finds and defines a number of clusters in the feature space (Section 12.3.3).

3. Selection of the classification algorithm. Once the spectral classes have been defined in the feature space, the operator needs to decide how the pixels (based on their DN-values) are assigned to the classes. The assignment can be based on different criteria (Section 12.3.4).
The above points are elaborated in the next sections. Most examples deal with a two-dimensional situation (two bands) for reasons of simplicity and visualization. In principle, however, image classification can be carried out on any n-dimensional data set. Visual image interpretation, on the other hand, limits itself to an image composed of a maximum of three bands.
(Figure: principle of the box classifier in a two-dimensional feature space.)

The disadvantage of the box classifier is the overlap between the classes. In such a case, a pixel is arbitrarily assigned the label of the first box it encounters.
Figure 12.10: Principle of the minimum distance to mean classification in a two-dimensional situation. The decision boundaries are shown for a situation without threshold distance (upper right) and with threshold distance (lower right).
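A minimal sketch of the minimum distance to mean rule, including the optional threshold distance (the class means and pixel values are invented for illustration):

import numpy as np

def minimum_distance_to_mean(pixels, class_means, threshold=None):
    """Assign each feature vector to the class with the nearest mean.

    pixels:      (n, bands) array of feature vectors
    class_means: (k, bands) array of class mean vectors from training
    threshold:   optional distance beyond which a pixel stays unknown (-1)
    """
    # Euclidian distance from every pixel to every class mean.
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    labels = np.argmin(d, axis=1)
    if threshold is not None:
        labels[d.min(axis=1) > threshold] = -1   # unknown
    return labels

means = np.array([[30., 40.], [120., 90.], [200., 180.]])  # hypothetical training means
pix = np.array([[35., 38.], [118., 95.], [90., 250.]])
print(minimum_distance_to_mean(pix, means, threshold=50.0))  # -> [0, 1, -1]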
Figure 12.11: Principle of the maximum likelihood classification. The decision boundaries are shown for a situation without threshold distance (upper right) and with threshold distance (lower right).
12.4

Image classification results in a raster file in which the individual raster elements are class-labelled. As image classification is based on samples of the classes, the actual quality of the result should be checked and quantified afterwards. This is usually done by a sampling approach: a number of raster elements are selected, and for each of them the classification result is compared with the true-world class. The comparison is done by creating an error matrix, from which different accuracy measures can be calculated. The true-world classes are preferably derived from field observations. Sometimes, sources of an assumed higher accuracy, such as aerial photos, are used as a reference.

Various sampling schemes have been proposed to select the pixels to test. Choices to be made relate to the design of the sampling strategy, the number of samples required, and the area of the samples. Recommended sampling strategies in the context of land cover data are simple random sampling or stratified random sampling. The number of samples may be related to two factors in accuracy assessment: (1) the number of samples that must be taken in order to reject a data set as being inaccurate; or (2) the number of samples required to determine the true accuracy, within some error bounds, for a data set. Sampling theory is used to determine the number of samples required. The number of samples must be traded off against the area covered by a sample unit. A sample unit can be a point, but also an area of some size; it can be a single raster element, but may also include the surrounding raster elements. Among other considerations, the optimal sample area size depends on the heterogeneity of the class.

Once the sampling has been carried out and the data collected, an error matrix can be established (Table 12.1). Other terms for this table are confusion matrix or contingency matrix. In the table, four classes (A, B, C, D) are listed. A total
Table 12.1: Error matrix; the columns list the reference data (A-D), the rows the classification result (a-d).

               A     B     C     D   Total   Error of     User
                                             Commission   Accuracy
  a           35    14    11     1     61       43%         57%
  b            4    11     3     0     18       39%         61%
  c           12     9    38     4     63       40%         60%
  d            2     5    12     2     21       90%         10%
  Total       53    39    64     7    163
  Error of
  Omission   34%   72%   41%   71%
  Producer
  Accuracy   66%   28%   59%   29%
of 163 samples were collected. From the table you can read that, for example, 53 cases of A were found in the real world (reference), while the classification result yields 61 cases of a; in 35 cases they agree.

The first and most commonly cited measure of mapping accuracy is the overall accuracy, or Proportion Correctly Classified (PCC). The overall accuracy is the number of correctly classified pixels (i.e., the sum of the diagonal cells in the error matrix) divided by the total number of pixels checked. In Table 12.1 the overall accuracy is (35 + 11 + 38 + 2)/163 = 53%. The overall accuracy yields one figure for the result as a whole.

Most other measures derived from the error matrix are calculated per class. Error of omission refers to those sample points that are omitted from the interpretation result. Consider class A, for which 53 samples were taken; 18 of these 53 samples were interpreted as b, c or d. This results in an error of omission of 34%.
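The accuracy measures discussed here can be derived from the error matrix in a few lines (a sketch using the counts of Table 12.1):

import numpy as np

# Error matrix of Table 12.1: rows = classification result (a-d),
# columns = reference data (A-D).
m = np.array([[35, 14, 11, 1],
              [4, 11, 3, 0],
              [12, 9, 38, 4],
              [2, 5, 12, 2]])

overall = np.trace(m) / m.sum()          # 86/163, approx. 0.53
producer = np.diag(m) / m.sum(axis=0)    # per class: 0.66, 0.28, 0.59, 0.29
user = np.diag(m) / m.sum(axis=1)        # per class: 0.57, 0.61, 0.60, 0.10
omission = 1 - producer                  # e.g., 18/53 = 0.34 for class A
commission = 1 - user
print(overall, producer, user)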
12.5
Table 12.2: Spectral classes distinguished during classification can be aggregated to land cover classes. 1-to-n and n-to-1 relationships can exist between land cover and land use classes.
The other main problem and limitation of image classification is that each pixel is assigned to only one class. When dealing with (relatively) small pixels, this is not a problem. When dealing with (relatively) large pixels, however, more than one land cover class may occur within a single pixel. The spectral value of the pixel is then an average of the reflectance of the land cover types present within the pixel. In a standard classification these contributions cannot be traced back, and the pixel will be assigned to one of the classes present, or even to an altogether different class. This phenomenon is usually referred to as the mixed pixel, or mixel (Figure 12.12). The problem of mixed pixels is inherent in image classification, which assigns each pixel to one thematic class only. The solution is to use a different approach, for example one that assigns the pixel to more than one class. This brief introduction to the problem of mixed pixels also highlights the importance of using data with the appropriate spatial resolution.
(Figure 12.12: a mixed pixel; the terrain contains several cover types, but the image grid averages them into single pixel values.)
Summary

Digital image classification is a technique to derive thematic classes from image data. The input is multi-band image data; the output is a raster file containing thematic (nominal) classes. In the process of image classification the role of the operator and of additional (field) data is significant. The operator needs to provide the computer with training data and to select the appropriate classification algorithm. The training data are defined based on knowledge (derived from field work or from secondary sources) of the area being processed. Based on the similarity between the pixel values (feature vectors) and the training classes, a pixel is assigned to one of the classes defined by the training data.

An integral part of image classification is validation of the results. Again, independent data are required. The result of the validation process is an error matrix, from which different measures of error can be calculated.
Questions

The following questions can help you to study Chapter 12.

1. Compare digital image classification with visual image interpretation in terms of the input of the operator/photo-interpreter and in terms of output.

2. What would be typical situations in which to apply digital image classification?

3. Another wording for image classification is "partitioning of the feature space". Explain what is meant by this.
Bibliography
[1] Stan Aronoff. Geographic Information Systems: A Management Perspective. WDL Publications, Ottawa, 1989.

[2] Henk J. Buiten and Jan G. P. W. Clevers. Land Observation by Remote Sensing: Theory and Applications, volume 3 of Current Topics in Remote Sensing. Gordon & Breach, 1993.

[3] Peter A. Burrough and Andrew U. Frank. Geographic Objects with Indeterminate Boundaries. GISDATA Series. Taylor & Francis, London, 1996.

[4] Peter A. Burrough and R. McDonnell. Principles of Geographical Information Systems. Oxford University Press, Oxford, 1998.

[5] European Community. CORINE Land Cover Technical Guide. ECSC-EEC-EAEC, Brussels, Belgium, 1993. EUR 12585 EN.

[6] Rolf A. de By, editor. Principles of Geographic Information Systems, volume 1 of ITC Educational Textbook Series. International Institute for Aerospace Survey and Earth Sciences, Enschede, second edition, 2001.
[7] J. A. Deckers, F. O. Nachtergaele, and O. C. Spaargaren, editors. World Reference Base for Soil Resources: Introduction. ACCO, Leuven, Belgium, 1998.

[8] G. Edwards and K. E. Lowell. Modeling uncertainty in photo-interpreted boundaries. Photogrammetric Engineering and Remote Sensing, 60(4):337–391, 1996.

[9] P. Gunn. Airborne magnetic and radiometric surveys. Journal of Australian Geology and Geophysics, 17:12–16, 1997. Special Issue.

[10] John Horn. Aerial Photography. ITC Lecture Notes PHM.80. ITC, Enschede, The Netherlands, 2000.

[11] ITC. ITC Textbook of Photo-interpretation. Four volumes, 1963–1974.

[12] ITC. ITC Textbook of Photogrammetry. Five volumes, 1963–1974.

[13] Jane's. Jane's Space Directory 1997–1998. Alexandria, Jane's Information Group, 13th edition, 1997.

[14] J. H. Kramer. Observation of the Earth and its Environment: Survey of Missions and Sensors. Springer Verlag, Berlin, Germany, third edition, 1996.

[15] Robert Laurini and Derek Thompson. Fundamentals of Spatial Information Systems, volume 37 of The APIC Series. Academic Press, London, 1992.

[16] Thomas M. Lillesand and Ralph W. Kiefer. Remote Sensing and Image Interpretation. John Wiley & Sons, New York, NY, third edition, 1994.
[17] Keith R. McCloy. Resource Management Information Systems. Taylor & Francis, London, U.K., 1995.
[18] Hans Middelkoop. Uncertainty in a GIS, a test for quantifying interpretation output. ITC Journal, 1990(3):225–232, 1990.
[19] Martien Molenaar. An Introduction to the Theory of Spatial Object Modelling. Research Monographs in GIS Series. Taylor & Francis, London, 1998.
[20] V. Perdigao and A. Annoni. Technical and Methodological Guide for Updating CORINE Land Cover Data Base. EC-JRC, EEA, Brussels, Belgium, 1997. EUR 17288 EN.
[21] Donna J. Peuquet and D. F. Marble, editors. Introductory Readings in Geographic Information Systems. Taylor & Francis, London, 1990.
[22] Colin V. Reeves. Continental scale and global scale geophysical anomaly mapping. ITC Journal, 1998(2):91–98, 1998.
[23] David G. Rossiter. Lecture Notes: Methodology for Soil Resource Inventories. ITC Lecture Notes SOL.27. ITC, Enschede, The Netherlands, 2nd revised edition, 2000.
[24] F. F. Sabins. Remote Sensing: Principles and Interpretation. Freeman & Co., New York, NY, third edition, 1996.
[25] Toni Schenk. Digital Photogrammetry, volume 1. TerraScience, Laurelville, 1999.
[26] W. Smith and D. Sandwell. Measured and estimated seafloor topography, version 4.2. Poster RP1, World Data Center for Marine Geology and Geophysics, 1997.
[27] Michael F. Worboys. GIS: A Computing Perspective. Taylor & Francis, London, U.K., 1995.
[28] Alfred J. Zinck. Physiography & Soils. ITC Lecture Notes SOL.41. ITC, Enschede, The Netherlands, 1988.
Glossary
A
Absorption The process in which electromagnetic energy is converted within an object into other forms of energy (e.g., heat).
Active sensor Sensor with a built-in source of energy. The sensor both emits and receives energy (e.g., radar and laser).
Additive colours The additive principle of colours is based on the three primary colours of light: red, green, blue. All three primary colours together produce white. Additive colour mixing is used, for example,
on computer screens and television.
B
Backscatter The microwave signal reflected by elements of an illuminated
surface in the direction of the radar antenna.
Band
C
Charge coupled device (CCD) Semi-conductor elements usually aligned as a
linear (scanner) or surface array (video, digital camera). CCDs produce image data.
Class
Cluster
Colour
Colour film Also known as true colour film; used in (aerial) photography. The principle of colour film is to add sensitized dyes to the silver halide: the cyan, magenta and yellow dye layers record red, green and blue light respectively.
Colour infrared film Film with specific sensitivity for infrared wavelengths.
Typically used in surveys of vegetation.
D
Di-electric constant Parameter that describes the electrical properties of a medium. Reflectivity of a surface and penetration of microwaves into the
material are determined by this parameter.
Digital Elevation Model (DEM) Special case of a DTM. A DEM stores terrain
elevation (surface height) by means of a raster. Elevation refers to a
height expressed with respect to a specific reference.
Digital Terrain Model (DTM) Term indicating a digital description of the terrain relief. A DTM can be stored in different manners (contour lines,
TIN, raster) and may also contain semantic, relief-related information
(breaklines, saddlepoints).
E
Earth Observation (EO) Term indicating the collection of remote sensing techniques performed from space.
Electromagnetic energy Energy with both electric and magnetic components.
Both the wave model and photon model are used to explain this phenomenon. The measurement of reflected and emitted electromagnetic
energy is an essential aspect in remote sensing.
Electromagnetic spectrum The complete range of all wavelengths, from gamma rays (10⁻¹² m) up to very long radio waves (10¹² m).
Emission
Emissivity The radiant energy of an object compared to the energy of a blackbody of the same temperature, expressed as a ratio.
Error matrix Matrix that compares samples taken from the source to be evaluated with observations that are considered as correct (reference). The
error matrix allows calculation of quality parameters such as overall
accuracy, error of omission and error of commission.
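As a worked example (numbers invented for illustration): if the row of class A in the error matrix holds 50 reference samples of which 40 were labelled correctly, the error of omission for A is (50 - 40)/50 = 20%; if the column of class A holds 60 samples labelled as A, of which the same 40 are correct, the error of commission is (60 - 40)/60, approximately 33%.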
F
False colour infrared film see Colour infrared film.
Feature space The mathematical space describing the combinations of observations (DN values in the different bands) of a multispectral or multiband image. A single observation is defined by a feature vector.
Feature space plot A two- or three-dimensional graph in which the observations made in different bands are plotted against each other.
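A minimal sketch of such a plot, assuming two bands of an image are available as numpy arrays and using matplotlib:

    import matplotlib.pyplot as plt

    def feature_space_plot(band1, band2):
        """Plot the DN values of two bands against each other."""
        plt.scatter(band1.ravel(), band2.ravel(), s=1)
        plt.xlabel('band 1 (DN)')
        plt.ylabel('band 2 (DN)')
        plt.title('Two-dimensional feature space')
        plt.show()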
Field of view (FOV) The total swath as observed by a sensor-platform system.
Sometimes referred to as total field of view. It can be expressed as an
angle or by the absolute value of the width of the observation.
Filter (1) Physical product made of glass and used in remote sensing devices to block certain wavelengths, e.g., an ultraviolet filter. (2) Mathematical operator used in image processing to modify the signal, e.g., a smoothing filter.
G
Geo-spatial data Data that includes positions in the geographic space. In this
book, usually abbreviated to spatial data.
Geocoding Process of transforming and resampling image data in such a way that they can be used simultaneously with data stored in a specific map projection. Inputs to a geocoding process are image data and control points; the output is a geocoded image. A specific category of geocoded images comprises orthophotos and orthoimages.
Geographic information Information derived from spatial data, and in the
context of this book, from image data. Information is what is relevant in a certain application context.
Geographic Information System (GIS) A software package that accommodates
the capture, analysis, manipulation and presentation of georeferenced
data. It is a generic tool applicable to many different types of use (GIS
applications).
Georeferencing Process of relating an image to a specific map projection. As a result, vector data stored in this projection can, for example, be superimposed on the image. Inputs to a georeferencing process are image data and the coordinates of ground control points; the output is a georeferenced image.
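A minimal sketch of one common variant of the underlying computation, assuming an affine transformation between image (row, column) and map (x, y) coordinates, estimated from the ground control points by least squares:

    import numpy as np

    def fit_affine(image_rc, map_xy):
        """Least-squares affine transform from image (row, col) to map (x, y).

        image_rc : (n, 2) array of GCP image coordinates (n >= 3)
        map_xy   : (n, 2) array of the same GCPs in map coordinates
        """
        design = np.hstack([image_rc, np.ones((len(image_rc), 1))])  # [row, col, 1]
        coeffs, *_ = np.linalg.lstsq(design, map_xy, rcond=None)
        return coeffs                                                # (3, 2) matrix

    def to_map(coeffs, row, col):
        """Apply the fitted transform to one image position."""
        return np.array([row, col, 1.0]) @ coeffs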
Global Navigation Satellite System (GNSS) The Global Navigation Satellite
System is a global infrastructure for the provision of positioning and
timing information. It consists of the American GPS and Russian
Glonass systems. There is also a proposed European Galileo system.
Ground control points (GCPs) Points used to define or validate a geometric transformation process. The name indicates that these points have been measured on the ground; they should be recognizable both in the image and in the real world.
Ground range Range direction of the side-looking radar image as projected
onto the horizontal reference plane.
Ground truth A term that may include different types of observations and measurements performed in the field. The name is imprecise because it suggests that these observations are 100% accurate and reliable, which may be difficult to achieve.
H
Histogram Tabular or graphical representation showing the (absolute and/or relative) frequency of values. In the context of image data it describes the distribution of the (DN) values of a set of pixels.
Histogram equalization Process used in the visualization of image data to optimize the overall image contrast. Based on the histogram, all available grey levels or colours are distributed in such a way that they occur with (approximately) equal frequency in the result.
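A minimal sketch, assuming an 8-bit image stored as an integer numpy array; the new grey level of each DN value is its scaled cumulative frequency:

    import numpy as np

    def equalize(image, levels=256):
        """Redistribute grey levels so they occur with (near-)equal frequency."""
        hist = np.bincount(image.ravel(), minlength=levels)
        cdf = hist.cumsum() / hist.sum()                  # cumulative frequency in [0, 1]
        lut = np.round(cdf * (levels - 1)).astype(np.uint8)
        return lut[image]                                 # apply as a look-up table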
I
Image
Incidence angle Angle between the line of sight from the sensor to an element of an imaged scene and the vertical direction to the scene. One must distinguish between the nominal incidence angle, determined by the geometry of the radar and the Earth's geoidal surface, and the local incidence angle, which takes into account the mean slope of the pixel in the image.
Infrared waves Electromagnetic radiation in the infrared region of the electromagnetic spectrum. Near-infrared (700–1200 nm), middle infrared (1200–2500 nm) and thermal infrared (8–14 µm) are distinguished.
Instantaneous field of view (IFOV) The area observed on the ground by a sensor, which can be expressed by an angle or in ground surface units.
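As a worked example (values invented for illustration): expressed as an angle β, the IFOV corresponds at nadir to a ground element of diameter D = β × H, with H the flying height; for β = 2.5 mrad and H = 4000 m this gives D = 2.5 × 10⁻³ × 4000 m = 10 m.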
Interferometry Computational process that makes use of the interference of
two coherent waves. In the case of imaging radar, two different paths
for imaging cause phase differences from which an interferogram can
be derived. In SAR applications, interferometry is used for constructing a DEM.
Interpretation elements The elements used by the human vision system to interpret a picture or image. The interpretation elements are: tone, texture, shape, size, pattern, site, association and resolution.
L
Latent image When exposed to light, the silver halide crystals within the photographic emulsion undergo a chemical reaction, which results in an invisible latent image. The latent image is made visible by the development process, in which the exposed silver halide is converted into silver grains that appear black.
Look angle The angle of viewing relative to the vertical (nadir) as perceived
from the sensor.
M
Microwaves Electromagnetic radiation in the microwave window, which ranges from 1 to 100 cm.
Mixel Contraction of mixed pixel. The term is used in the context of image classification when different spectral classes occur within the area covered by one pixel.
N
Nadir The point (or line) directly under the platform during acquisition of image data.
O
Objects Objects are real-world features with clearly identifiable geometric characteristics. In a computer environment, objects are modelled using an object-based approach, in contrast to a field-based approach, which is more suited to continuous phenomena.
Orbit
The path of a satellite through space. Types of orbits used for remote
sensing satellites are, for example, (near) polar and geostationary.
P
Panchromatic Indication of one (wide or narrow) spectral band in the visible
and near-infrared part of the electromagnetic spectrum.
Passive sensor Sensor that records energy that is produced by external sources
such as the Sun and the Earth.
Pattern recognition Term for the collection of techniques used to detect and
identify patterns. Patterns can be found in the spatial, spectral and
temporal domains. An example of spectral pattern recognition is image classification; an example of spatial pattern recognition is segmentation.
Photogrammetry The science and techniques of making measurements from
photos or image data. Photogrammetric procedures are required for
accurate measurements from stereo pairs of aerial photos, image data
or radar data.
Photograph Image obtained by using a camera. The camera produces a negative film, which can be printed into a positive paper product.
Pixel Contraction of picture element; the elementary cell of a digital image.
Pixel value The representation of the energy measured at a point, usually expressed as a Digital Number (DN-) value.
Q
Quantization The number of discrete levels applied to store the energy as measured by a sensor, e.g., 8-bit quantization allows 256 levels of energy.
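A minimal sketch of linear quantization, assuming the energy range (e_min, e_max) recordable by the sensor is known:

    import numpy as np

    def quantize(energy, e_min, e_max, bits=8):
        """Map measured energy linearly onto 2**bits discrete DN levels."""
        levels = 2 ** bits                               # e.g., 256 for 8-bit data
        scaled = (energy - e_min) / (e_max - e_min)      # normalize to [0, 1]
        return np.clip(np.round(scaled * (levels - 1)), 0, levels - 1).astype(int)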
R
Radar Acronym for Radio Detection And Ranging. Radars are active sensors operating at wavelengths between 1 and 100 cm.
Radiance
Radiometric resolution See resolution.
RAR Acronym for Real Aperture Radar.
Raster
A regularly spaced set of cells with associated (field) values. In contrast to a grid, the associated values represent cell values, not point
values. This means that the value for a cell is assumed to be valid for
all locations within the cell.
Reflectance The ratio of the reflected radiation to the total irradiation. Reflectance depends on the wavelength.
Reflection
Remote sensing (RS) Remote sensing comprises the instrumentation, techniques and methods used to observe the Earth's surface at a distance and to interpret the images or numerical values obtained in order to acquire meaningful information about particular objects on Earth.
Resampling Process of generating a raster with another orientation and/or a different cell size, and of assigning DN values using one of the following methods: nearest neighbour selection, bilinear interpolation or cubic convolution.
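A minimal sketch of the nearest neighbour variant; the transform function is a hypothetical stand-in for whatever maps an output cell position back into the input raster (for instance the inverse of a fitted georeferencing transform):

    import numpy as np

    def resample_nearest(src, transform, out_shape):
        """Fill an output raster by picking the nearest input cell."""
        out = np.zeros(out_shape, dtype=src.dtype)
        for r in range(out_shape[0]):
            for c in range(out_shape[1]):
                sr, sc = transform(r, c)                  # position in the input raster
                sr, sc = int(round(sr)), int(round(sc))   # nearest input cell
                if 0 <= sr < src.shape[0] and 0 <= sc < src.shape[1]:
                    out[r, c] = src[sr, sc]
        return out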
Resolution Indicates the smallest observable (measurable) difference at which objects can still be distinguished. In the remote sensing context the term is used for spatial, spectral and radiometric resolution.
RMS error Root Mean Squared error. A statistical measure of accuracy, similar
to standard deviation, indicating the spread of the measured values
around the true value.
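As a worked example (residuals invented for illustration): for measured-minus-true differences of 1, -2 and 2, the RMS error is sqrt((1 + 4 + 4) / 3) = sqrt(3), approximately 1.73.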
S
SAR Acronym for Synthetic Aperture Radar.
Scale
Scanner (1) Remote sensing sensor based on the scanning principle, e.g., a multispectral scanner. (2) Office device to convert analogue products (photo, map) into digital raster format.
Slant range Image direction as measured along the sequence of line of sight
rays from the radar to each reflecting point in the scene.
Spatial data In the broad sense, spatial data is any data with which position is
associated.
Spatial resolution See resolution.
Speckle Interference of backscattered waves stored in the cells of a radar image. It causes the return signals to be extinguished or amplified, resulting in random dark and bright pixels in the image.
Spectral resolution See resolution.
Specular reflection
T
Training stage Part of the image classification process in which pixels representative of a certain class are identified. Training results in a training set that comprises the statistical characteristics (signatures) of the classes of interest.
Transmittance The ratio of the radiation transmitted to the total irradiation.
V
Variable, interval A variable that is measured on a continuous scale, but with
no natural zero. It cannot be used to form ratios.
Variable, nominal A variable that is organized in classes, with no natural order, i.e., cannot be ranked.
Variable, ordinal A variable that is organized in classes with a natural order,
and so it can be ranked.
Variable, ratio A variable that is measured on a continuous scale, and with a
natural zero, so can be used to form ratios.
Viewing angle The angle under which a scene is observed by the sensor.
W
Wavelength Minimum distance between two events of a recurring feature in a periodic sequence, such as the crests of a wave. Wavelength is expressed as a distance (e.g., µm or nm).
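Wavelength λ and frequency ν are related through the speed of light c (see Appendix A): λ = c / ν. For example, a 30 GHz microwave signal has λ = (3 × 10⁸ m/s) / (3 × 10¹⁰ s⁻¹) = 10⁻² m = 1 cm.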
Index
absorptance, 56
active sensor, 60
  radar, 185
additive colours, 282
aerial camera, 111, 122
  digital, 148
aerial photography
  oblique, 119
  vertical, 119
aerospace surveying, 30
altitude, 104
angle
  inclination, 104
atmospheric window, 63
blackbody, 56
charge coupled device, 149, 160
classification algorithm
  box, 360
error of omission, 366
georeferencing, 259
GIS, 253
ground control point, 260, 270
ground observations, 34
ground pixel size, 108
ground truth, 332
ground-based observations, 28
land cover, 325
land use, 325, 368
latent image, 132
mapping, 307
mapping unit
  minimum, 326
mirror stereoscope, 315
monoplotting, 266
multispectral scanner, 155
  IKONOS-OSA, 173
  IRS-1D, 172
  Landsat-ETM+, 168
  Meteosat-VISSR, 165
  NOAA-AVHRR, 166
  SPOT-HRVIR, 171
orbit
  geostationary, 105
  period, 104
  polar, 105
  repeat cycle, 104
  sun-synchronous, 105
orientation, 269
orthoimage, 268
orthophoto, 268
overall accuracy, 366
overlap, 144
passive sensor, 60
pattern, 313
photogrammetry, 119, 253
photography
  colour infrared, 136
  monochrome, 132
photon, 55, 157
pixel, 107
  mixed, 369
platform, 101
  aircraft, 102
  satellite, 104
pocket stereoscope, 315
quality
  geometric transformation, 261
  image classification, 365
  photo-interpretation, 333
quantization, 108
radar, 100
  azimuth direction, 192
  bands, 190
  cross section, 212
  equation, 187
  foreshortening, 202
  ground range, 193
  ground range resolution, 197
  imaging, 187
  incidence angle, 193
  layover, 203
  polarisation, 191
  range direction, 192
  real aperture, 195
  slant range, 193
  slant range resolution, 196
  synthetic aperture, 198
receiving station, 106
reflectance curve, 72
  soil, 75
  vegetation, 74
  water, 76
reflection, 70
  diffuse, 71
  specular, 71
relief displacement, 255
Remote Sensing, 30
replicability, 333
resampling, 262
resolution
  radiometric, 107, 130, 157
  spatial, 108, 142, 159
  spectral, 107, 131, 158, 162
revisit time, 108
spaceborne missions
  operational, 111
spatial-temporal characteristics, 353
spatio-temporal phenomena, 110
speckle, 206, 299
spectral band, 158
spectral sensitivity, 128
Stefan-Boltzmann's Law, 56
stereo model, 269
stereogram, 315
stereoplotting, 269
stereoscopic vision, 315
stereoviewing, 269
subtractive colours, 134, 285
superimposition, 259, 270
texture, 313
three-dimensional, 253
tie point, 270
tone, 312
transfer function, 290
transmission, absorption, 61
two-dimensional, 253
updating, 270
validation, 365
viewing angle, 159
wavelength, 53
wavelength band, 91, 107
Appendix A
SI units & prefixes
Quantity     SI unit
Length       metre (m)
Time         second (s)
Temperature  kelvin (K)
Energy       joule (J)
Power        watt (W) (J/s)

Prefix     Multiplier
tera (T)   10¹²
giga (G)   10⁹
mega (M)   10⁶
kilo (k)   10³
centi (c)  10⁻²
milli (m)  10⁻³
micro (µ)  10⁻⁶
nano (n)   10⁻⁹
pico (p)   10⁻¹²

Unit        SI equivalent
centimetre  10⁻² m
millimetre  10⁻³ m
micron      10⁻⁶ m
micrometre  10⁻⁶ m
nanometre   10⁻⁹ m

Parameter       Value
speed of light  2.9979 × 10⁸ m/s
°C              (°C + 273.15) K
inch            2.54 cm
foot            30.48 cm
mile            1,609 m
about