INTRODUCTION TO DIGITAL IMAGES AND DIGITAL ANALYSIS TECHNIQUES:

A Basic Course for the Appreciation of Digital Analysis of Remotely Sensed Multispectral Data

INSTRUCTIONS
(EXERCISE FORMS)
(EXERCISE SOLUTIONS)

T. T. Alföldi
Applications Division

Technical Note 78-1


Printed March 1978
Reprinted October 1986
Digital Version August 1996

Canada Centre for Remote Sensing


Natural Resources Canada
(formerly: Energy, Mines and Resources Canada)

Ottawa, Canada
ABSTRACT
This document provides a self-teaching mechanism for the student of digital
multispectral image analysis. The detailed step-by-step instructions and the
corresponding figures lead the reader through some of the basic analysis procedures
used in the study of satellite images for a variety of earth resources applications. No
mathematical ability is required. Using only pencil (and eraser) the reader is led
through steps which mimic the actions of a computer. Once this exercise is completed,
the reader should have a functional knowledge of the basic multispectral analysis
methods and data presentation formats.

RÉSUMÉ
Le présent document permet au lecteur d'étudier par lui-même l'analyse numérique d'images
multispectrales; des instructions détaillées et des graphiques illustrent chaque étape; le lecteur
prend connaissance des procédés fondamentaux d'analyse utilisés pour l'étude des images
transmises par satellite et appliquée à la détection d'une gamme de ressources terrestres. Aucune
aptitude spéciale pour les mathématiques n'est requise. Le lecteur franchit uniquement à l'aide
d'un crayon et d'une gomme à effacer, les étapes qui reproduisent les opérations d'un ordinateur.
Une fois cet exercice terminé, le lecteur devrait posséder des connaissances fonctionnelles de la
méthode fondamentale d'analyse spectrale et des modes de présentation des données.

AVAILABILITY/PERMISSION
This digital document is a faithful reproduction of the original ‘Technical Note’ published in
1978, with a few adjustments made to bring it up to date. It has three components: instructions,
exercise forms and exercise solutions. It may be reproduced in whole for non-commercial use
only. Its source must be acknowledged on all copies. Additional copies may be accessed at:
http://www.ccrs.nrcan.gc.ca/

TABLE OF CONTENTS

1. INTRODUCTION

2. DATA ARRANGEMENT AND PRESENTATION

3. CLASSIFICATION USING ONE BAND

4. MULTISPECTRAL CLASSIFICATION (RECTANGULAR)

5. MULTISPECTRAL CLASSIFICATION (VECTOR)

6. INTERPRETATIONS (FROM SPECTRAL SIGNATURES)

7. UNSUPERVISED CLASSIFICATION

8. FURTHER STUDY

1. INTRODUCTION
A simulation of ‘hands-on’ processing of digital images has been designed to give the new user
familiarity with the manipulation of digital data. Although the original intent of this exercise was to provide
a self-teaching mechanism for potential users of the IMAGE-100 at the Canada Centre for Remote
Sensing (CCRS), the concepts contained herein are sufficiently general as to be applicable to most digital
image processing systems. If an instructor is available, the practical exercise should be preceded by a
description of the basic characteristics of digital, multispectral remotely-sensed images: spatial, temporal,
spectral and radiometric.

Note:
To permit hand simulation with only paper and pencil, the following simplifications have been made:

a) The ‘picture’ to be processed by the reader is reduced in size to 49 picture elements (pixels), compared to
the several millions typical of digital images.

b) A multispectral image often contains from four to 24 spectral channels or bands. For hand simulation
purposes, the ‘pseudo’ images have been designed with only two spectral bands (or dimensions in
feature space).

c) The number of intensity levels which may be recorded by a sensor is typically 64 to 256. This is
impractical to manipulate by hand, so the reader's image has merely 10 levels.

The manual techniques to be described are very similar to the tasks performed by computer.
However, the computer's great speed permits it to handle much larger images with more channels and
greater radiometric range.

2. DATA ARRANGEMENT AND PRESENTATION

Figure 1 shows the data from two bands of a small segment of a satellite scene, with brightness
information quantified into 10 levels (from 0 to 9) for each band. One band ‘A’ is red-sensitive and the
other band ‘B’ covers a portion of the reflective infrared. The format of the data in this figure is
"line-interleaved". For the 7 x 7 image represented on this tape, the first seven numbers correspond to the
pixel intensities in the first line of band ‘A’, from left to right in the picture. The next seven numbers are for
the same first line but of band ‘B’ data. This is followed by the next seven numbers which are for line No.
2, band ‘A’, and so on. For the 7 x 7 picture area, there are 7 lines x 7 pixels x 2 bands = 98 numbers. It is
useful to arrange the numbers in a geometrically convenient form.
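For readers who wish to see the same rearrangement done by machine, the short Python sketch below separates a line-interleaved stream of 98 numbers into two 7 x 7 band matrices. The tape values here are invented stand-ins (random digits from 0 to 9), not the data of Figure 1.

import numpy as np

LINES, PIXELS, BANDS = 7, 7, 2          # 7 x 7 image, two bands ('A' and 'B')

# Stand-in for the 98 numbers read off the tape in line-interleaved order:
# seven band 'A' values for line 1, seven band 'B' values for line 1,
# seven band 'A' values for line 2, and so on.
tape = np.random.randint(0, 10, size=LINES * BANDS * PIXELS)

# Reshape to (line, band, pixel), then pull the two bands apart.
interleaved = tape.reshape(LINES, BANDS, PIXELS)
band_a = interleaved[:, 0, :]           # 7 x 7 matrix for band 'A'
band_b = interleaved[:, 1, :]           # 7 x 7 matrix for band 'B'

print(band_a)
print(band_b)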

INSTRUCTION No. 1:

Beginning at the ‘start’, mark off every seven numbers from left to
right, and label the first seven for band ‘A’, the second seven for band
‘B’, the third seven for band ‘A’, and so on.

Figure 2 prepares the format of the digital image. Band ‘A’ and band ‘B’ represent the same area
on the earth's surface, but are coded separately because they represent different portions of the
electromagnetic spectrum (or colours of light).

INSTRUCTION No. 2:

Insert the numbers from the tape into their appropriate geometric
position thus: The first seven numbers of band ‘A’ from the tape
are positioned as pixels 1 - 7 of line No. 1 of the band ‘A’ matrix
in Figure 2. The next seven numbers from the tape are for pixels
1-7 of line No. 1 of band ‘B’ in Figure 2. Continue in this manner
until the two matrices of Figure 2 are filled.

The digital or numerical maps constructed in Figure 2 may now be converted into another format
which will permit a visual appreciation of the image. A "grey map" is produced to synthesize a visual image
from the numbers at hand.

The computer-produced grey maps usually present a problem, in that the number of intensity
levels (64 to 256) greatly exceeds the number of available grey ‘shades’ which the printing device can
generate. Each pixel in the 7 x 7 image is coded at a particular intensity level from 0-9 (10 levels). It is
appropriate in this simple example then, to use three shades of grey, in an attempt to represent the 10-level
image.

INSTRUCTION No. 3:

For each of the band ‘A’ and ‘B’ digital images (Figure 2),
transform the numerical values of each pixel into a "shade"
of grey according to the following conversion:

Numerical Value   0 1 2 3 4 5 6 7 8 9
Grey Level        (three shades of grey, darkest for the lowest values, as shown on the exercise form)
and sketch these ‘transformed‘ pixels into Figure 3. Note that
the smallest intensities are represented as the ‘darkest’.
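A rough computational equivalent of this grey-map conversion is sketched below in Python. The grouping of the ten levels into three shades is assumed here to be 0-2 (dark), 3-6 (medium) and 7-9 (light) purely for illustration; the grouping actually used in the exercise is the one printed on the exercise form.

import numpy as np

# Hypothetical 7 x 7 band with intensities 0-9 (not the Figure 2 data).
band = np.random.randint(0, 10, size=(7, 7))

# Assumed grouping of the ten levels into three grey shades:
#   0-2 -> dark, 3-6 -> medium, 7-9 -> light
bins = [3, 7]                           # boundaries between the shades
shade_index = np.digitize(band, bins)   # 0 = dark, 1 = medium, 2 = light
symbols = np.array(['#', '+', '.'])     # darkest symbol for the lowest intensities

grey_map = symbols[shade_index]
for row in grey_map:
    print(' '.join(row))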

Note that in the completed Figure 3, there are some similar and some dissimilar patterns appearing
when bands ‘A’ and ‘B’ are compared. Although some environmental spatial patterns are beginning to
appear, it is obvious that the grey maps represent much less information than is inherent in the digital maps.

Following are a number of techniques which are intended to give a better understanding of the
data.

An ‘intensity profile’ gives a one-dimensional view of a single cross-section of the data. It is a
popular technique in photographic analysis as well, where a ‘density profile’ is constructed.

INSTRUCTION No. 4:

An intensity profile will be constructed for line number 6 of each
of the band ‘A’ and band ‘B’ images. Use the graph outlines
prepared in Figure No. 4. For band ‘A’, line No. 6, determine the
intensity level for each pixel using the digital map, and plot a point
to correspond to that pixel number and intensity level. When all
seven points are plotted, join the points, progressing from pixel
numbers 1 to 7. Repeat the same procedure for band ‘B’.

The completed Figure 4 now represents the intensities of reflected light as found on the ground
track which corresponds to line No. 6 of the image. These intensity profiles may be produced for any line
drawn across the image (at any angle). The inherent limitations of this technique reduce its usefulness. A
more informative data arrangement is described below.
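Before moving on, note that a profile such as the one constructed in Instruction No. 4 amounts to nothing more than extracting one row from each band matrix. The short Python sketch below illustrates this, with invented 7 x 7 matrices standing in for the Figure 2 data.

import numpy as np

band_a = np.random.randint(0, 10, size=(7, 7))   # stand-ins for Figure 2
band_b = np.random.randint(0, 10, size=(7, 7))

line_no = 6                              # lines are numbered 1 to 7 in the text
profile_a = band_a[line_no - 1, :]       # intensities along line 6, band 'A'
profile_b = band_b[line_no - 1, :]       # intensities along line 6, band 'B'

for pixel, (a, b) in enumerate(zip(profile_a, profile_b), start=1):
    print(f"pixel {pixel}: band A = {a}, band B = {b}")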

A one-dimensional histogram offers a graphical representation of the data distribution for a single
band. The plot (see Figure 5) shows the number of pixels which have a particular intensity level. This is an
abstract, but important concept. Alternately, one may ask: ‘What area of the image corresponds to a
particular intensity level?’

INSTRUCTION No. 5:

For band ‘A’, count the number of pixels which have an intensity of
zero. Use the digital image in Figure 2. Enter this number in the space
provided below the left graph of Figure 5. Now count the number of
times that the intensity level 1 occurs and similarly record it on Figure
5. Continue for all levels. Check that the sum of these values is 49 (=
7 x 7). Now plot these values on the graph and join the plotted points
with straight lines, progressing from left to right. Similarly construct
the histogram for band ‘B’.
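The counting procedure of Instruction No. 5 corresponds to the short Python sketch below, again with an invented band standing in for the Figure 2 data.

import numpy as np

band_a = np.random.randint(0, 10, size=(7, 7))   # stand-in for Figure 2, band 'A'

# Number of pixels at each of the ten intensity levels (0-9).
hist_a = np.bincount(band_a.ravel(), minlength=10)

assert hist_a.sum() == 49                        # 7 x 7 pixels in total
for level, count in enumerate(hist_a):
    print(f"level {level}: {'*' * count}  ({count})")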

There are several observations to be made regarding the appearance of these two histograms. First,
the fact that the two histograms are significantly different means that there is different information (and
perhaps useful information) available from the two bands concerning the same pixel (or ground area).
Second, note the various peaks of the histograms. Each peak separated from neighbouring peaks by valleys
is called a ‘mode’ of the histogram. Often it is found that a mode corresponds to a particular feature on the
ground. The presence of several of these modes (a multi-modal histogram) leads to the conclusion that
several (different) environmental features have been imaged.

Next, consider the band ‘B’ histogram. There are two major modes in this histogram, separated by
the valley at intensity level 2. Since band ‘B’ is a reflective infrared band, knowledge of the infrared
reflection characteristics of land and water can help identify these two modes. Water strongly absorbs
infrared, resulting in low reflectivity. The typically vegetation-covered land surfaces will have high reflection
during the summer months. Thus, the assumption is made that the left peak or mode designates water while
the large mode on the right is of the land surfaces. By counting the number of pixels in each mode, we
already have an idea of the relative size of areas of land and water in the image, even though we haven’t
seen the image as yet!

The ‘spectral signature’ of any one pixel is the combination of its intensity levels in the two bands.
This characteristic may be plotted in a two-dimensional (2-D) histogram. In a 2-D histogram, the two axes
depict the intensity levels for the two bands (see Figure 6). What is to be plotted is the frequency of
occurrence of any one combination of band ‘A’ and band ‘B’ intensities. Stated in another manner, a 2-D
histogram indicates the number of pixels which have a particular combination of intensities in the two
bands. As an example, refer to pixel number 6 of line 2 (using Figure 2). In band ‘A’ it has an intensity
value of 4, and in band ‘B’ a value of 6. Thus in Figure No. 6 it would be plotted at the coordinates 4,6 for
the band ‘A’ and band ‘B’ axes respectively. What the completed histogram will show, is the data
distribution of both bands simultaneously.
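In computational terms the 2-D histogram is a 10 x 10 table of counts indexed by the two intensities of each pixel, as in the Python sketch below (with invented band matrices standing in for Figure 2).

import numpy as np

band_a = np.random.randint(0, 10, size=(7, 7))   # stand-ins for Figure 2
band_b = np.random.randint(0, 10, size=(7, 7))

# hist2d[a, b] = number of pixels whose band 'A' intensity is a and whose
# band 'B' intensity is b.
hist2d = np.zeros((10, 10), dtype=int)
for a, b in zip(band_a.ravel(), band_b.ravel()):
    hist2d[a, b] += 1

assert hist2d.sum() == 49                        # every pixel falls in one cell
print(hist2d)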

INSTRUCTION No. 6:

Plot the intensity coordinates for each pixel on Figure 6. Use the
digital maps of Figure 2. When the intensity coordinates for a pixel
are found in Figure 6, place a tick mark in the appropriate square.
These tick marks will be summed later to find the total in any one
square. Only the pixels in the first three lines of Figure 2 should be
plotted, since the last four lines are already plotted as may be seen in
Figure 6.

INSTRUCTION No. 7:

Transfer the data of Figure 6 into Figure 7 by summing the tick
marks in each square and placing the numerical value in the
corresponding square of Figure 7.

Each ‘square’ or location in the completed 2-D histogram is called a ‘cell’ or sometimes a ‘vector’.
The number in any one cell of Figure 7 depicts the frequency of occurrence of that particular set of intensity
coordinates to be found in the original 7 x 7 image. This 2-D histogram plot is also the spectral signature
domain. Cells which are close to each other in this plot have near-similar spectral characteristics. A much
more sophisticated analysis is possible using the two spectral axes simultaneously (2-D histogram), than
separately (1-D histograms), as shown in the examples below.

3. CLASSIFICATION USING ONE BAND

Figure 8 represents a ground verification map for a particular environmental feature: ‘forest’. Three
sites have been positively identified on the ground as being forested terrain, and the experts who collected
this data are further assured that the combination of these three sites is representative of all forest types to be
found in the area covered by the remotely sensed image. The ground verification map has been
geometrically registered to the image, so that the same point on the map and the image may now be
referenced by ‘line’ and ‘pixel’ coordinates. By using the spectral signature of these verified ‘forest’ areas,
one may find all other ‘forest’ pixels. This can be done by searching all pixels in the scene for similar
spectral signatures. The first task is then to define the spectral characteristics of the given training sites.

INSTRUCTION No. 8:

Find the intensities corresponding to each of the three test sites for
band ‘A’. Use Figure 8 to obtain the geometric locations, and extract
the intensity levels from the corresponding locations of Figure 2.
Enter these values below Figure 8. The category or class, ‘forest’,
may then be assumed to be characterized by the range of intensities
found in band ‘A’. The range is defined by the minimum and
maximum value from these three samples. Enter these also below
Figure 8. Repeat the process for band ‘B’.

The actual classification process now involves searching for all pixels which have an intensity level
falling within the range of intensities found in the test sites. This is done separately for each band.
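The single-band procedure of Instructions No. 8 and 9 may be sketched in Python as follows. The training-site coordinates used here are made up for illustration; in the exercise they come from Figure 8.

import numpy as np

band_a = np.random.randint(0, 10, size=(7, 7))   # stand-in for Figure 2, band 'A'

# Hypothetical (line, pixel) coordinates of the three verified 'forest'
# sites, numbered from 1 as in Figure 8 (invented for this sketch).
forest_sites = [(2, 3), (4, 5), (6, 2)]

samples = np.array([band_a[l - 1, p - 1] for l, p in forest_sites])
lo, hi = samples.min(), samples.max()            # intensity range defining 'forest'

# Theme map: True wherever the band 'A' intensity falls inside the range.
theme_a = (band_a >= lo) & (band_a <= hi)
print(f"band 'A' forest range: {lo}-{hi}")
print(theme_a.astype(int))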

INSTRUCTION No. 9:

Using the band ‘A’ digital image of Figure 2, scan all of the pixels in
the image for intensities falling in the range defined for band ‘A’ in
the previous instruction. For any pixel with a band ‘A’ intensity
falling in this range (inclusive of the minimum and maximum
intensities), darken the corresponding pixel in Figure No. 9A. Repeat
the process using the band ‘B’ digital image of Figure 2, the band ‘B’
intensity range defined in the previous instruction, and map onto
Figure 9B.

The two forest theme maps in Figure 9A and 9B represent the same environmental feature (=
forest), yet are different because each map was generated using information from one band only. The
procedure used to produce these theme maps is akin to a rudimentary form of intensity ‘slicing’. One
specific range of intensities was sliced from the total available range. A more valid or ‘correct’ classification
may be produced if the intensities in both bands were to be considered simultaneously.

4. MULTISPECTRAL CLASSIFICATION (RECTANGULAR)

In order to classify an image in a multi-spectral (or multi-band) mode, the intensities in all bands
must be considered simultaneously. In Figure 10A, the range of intensities in band ‘A’ representing ‘forest’
is represented by the shaded area from band ‘A’ intensity 2 to 5 inclusive. Similarly for band ‘B’, ‘forest’ is
represented by the shaded area of intensities 3 to 7 inclusive. The overlap of these two individual intensity
ranges in this two-dimensional diagram is a cross-hatched area representing the multispectral (rectangularly-
defined) spectral signature of ‘forest’. In order to produce a multispectral classification of ‘forest’, it is
necessary to find all pixels of the image whose spectral coordinates fall inside the cross-hatched rectangular
area.
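Using the ranges quoted above (band ‘A’ 2 to 5 and band ‘B’ 3 to 7, both inclusive), the rectangular classification reduces to the logical AND of two range tests, as the Python sketch below shows (with invented band matrices standing in for Figure 2).

import numpy as np

band_a = np.random.randint(0, 10, size=(7, 7))   # stand-ins for Figure 2
band_b = np.random.randint(0, 10, size=(7, 7))

# 'Forest' ranges from the text: band 'A' 2-5 and band 'B' 3-7, inclusive.
in_a = (band_a >= 2) & (band_a <= 5)
in_b = (band_b >= 3) & (band_b <= 7)

# A pixel is 'forest' only if it satisfies BOTH criteria, i.e. its spectral
# coordinates fall inside the cross-hatched rectangle.
forest = in_a & in_b
print(forest.astype(int))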

INSTRUCTION No. 10:

Using the digital images of Figure 2, scan all pixels for band ‘A’
intensities of 2, 3, 4, or 5. When a pixel has one of these band ‘A’
intensities, check if its band ‘B’ intensity is 3, 4, 5, 6 or 7. If a pixel
agrees with both of these criteria, then shade in the corresponding
pixel in Figure 10B. Only the last four lines of the image need to be
considered, since the first three lines are already mapped in this
manner.

As a check on this classification, note that a total of 21 pixels should have been identified as
‘forest’. Further, it may be noted that the forest map of Figure 10B is the logical overlap between forest
maps 9A and 9B.

In review, a rectangular classification, as has been performed above, requires ‘representative
samples’ of the environmental feature to be mapped. The intensities of these sample pixels are collected in
each band. The range of intensities which correspond to the feature of interest is plotted as spectral band
versus spectral band (feature space). The rectangle thus defined in two-dimensional feature space is the
spectral signature of the environmental feature. When more than two dimensions are used (hundreds of
bands on airborne multispectral sensors are not uncommon), N bands can produce an N-dimensional
‘rectangular parallelepiped’ spectral signature, which is analogous to the (two-dimensional) rectangle
produced above.

5. MULTISPECTRAL CLASSIFICATION (VECTOR)

A further refinement of the spectral signature, as defined by rectangular multispectral classification,
is possible. To illustrate this, it is appropriate to demonstrate the most basic limitation of the rectangular
classification technique.

INSTRUCTION No. 11:

The forest map of Figure 10B now has been verified on the ground by
visual inspection, and homogeneous stands of coniferous and
deciduous forest identified. Figure 11A shows the spatial distribution
of these two forest types. The task is to delineate the portions of
spectral feature space which correspond to each of the two forest
types. For each pixel identified in the ground verification map of
Figure No. 11A, find the corresponding band ‘A’ and band ‘B’
intensities in Figure 2. Plot each such spectral coordinate in Figure
11B using symbols ‘C’ (coniferous) and ‘D’ (deciduous). Draw a
rectangle to define the spectral signature of ‘coniferous forest’. The
two vertical sides of this rectangle are the lower and upper limits of
the range of intensities for band ‘A’. The top and bottom lines of the
rectangle are the upper and lower limits of the range of intensities for
band ‘B’. Draw a similar rectangle for the spectral signature of
‘deciduous forest’. Note that the two rectangles will partially overlap.

It is this overlapping of spectral signatures in feature space which is a major limiting factor to the
usefulness of the rectangular classification method. If a particular pixel has spectral coordinates which fall
into the overlap region, then one is at a loss whether to label this pixel as ‘coniferous’ or ‘deciduous’.

The multispectral vector classifier looks inside any feature space rectangle to identify each spectral
coordinate (also known as "cell" or ‘vector’) and the number of pixels which are associated with each
coordinate. This defines the data density distribution in feature space.
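A minimal Python sketch of such a vector classifier follows. The labelled cells are invented for illustration; in the exercise they are the ‘C’ and ‘D’ cells plotted in Figure 11B, and a pixel is classified only if its spectral coordinates coincide exactly with one of them.

import numpy as np

# Hypothetical labelled cells of feature space:
# (band 'A' intensity, band 'B' intensity) -> class symbol.
labelled_cells = {
    (3, 5): 'C', (3, 6): 'C', (4, 6): 'C',   # 'coniferous' cells (made up)
    (4, 4): 'D', (5, 4): 'D', (5, 5): 'D',   # 'deciduous' cells (made up)
}

# Stand-ins for the band 'A' and band 'B' digital images of the scene to classify.
band_a = np.random.randint(0, 10, size=(7, 7))
band_b = np.random.randint(0, 10, size=(7, 7))

# Exact cell match required; merely falling inside the rectangle is not enough.
theme = np.full((7, 7), '.', dtype='<U1')
for i in range(7):
    for j in range(7):
        theme[i, j] = labelled_cells.get((band_a[i, j], band_b[i, j]), '.')

for row in theme:
    print(' '.join(row))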

The next task is to use a multispectral vector classification scheme to map coniferous and
deciduous forests in another scene. The first scene has been used in Figure 11A and 11B to ‘train’ the
classifier. In other words the first scene contains the test sites from which the spectral signatures of the two
forest types have been defined. One may extrapolate these spectral signatures to another area (scene No. 2)
to look for similar environmental features. Here, we must recognize the fact that similar spectral signatures
are the only themes which a machine or an algorithm will produce. It is the analyst who must make the
assumption that similar spectral signatures indicate similar environmental phenomena.

INSTRUCTION No. 12:

Figures 12A and 12B contain the digital images of band ‘A’ and
band ‘B’ of the new scene (No. 2). Figure 12C is the feature space
representation of the spectral signatures of coniferous (C) and
deciduous (D) forests, as previously constructed. Scan the new scene
pixel by pixel for the first four lines (the last three lines have been
mapped already). In order to classify any one pixel as one of the two
forest types, it must agree in band ‘A’ and band ‘B’ intensities with
one of the cells in Figure 12C, which is marked ‘C’ or ‘D’. It is NOT
sufficient for the spectral coordinates to merely fall within the
rectangular limits. It is mandatory that the spectral coordinates being
considered coincide with a cell marked by ‘C’ or ‘D’. Only in this
manner will ambiguities related to the overlap region be avoided.
Those pixels thus identified in Figure 12C should be marked
appropriately as ‘C’ or ‘D’ on the theme map of Figure 12D.

The above procedure is sometimes called ‘N-dimensional training’, to refer to the fact that the
intensities of more than one band are considered simultaneously. This type of training and classification
scheme is also known as ‘non-parametric’ in that the absolute location of spectral coordinates in feature
space is the criterion for defining an environmental feature, and NOT statistical parameters such as mean
and standard deviation.

6. INTERPRETATIONS (FROM SPECTRAL SIGNATURES)

Although ground verification is essential in assigning environmentally valid names to features
found on an image, it is nevertheless possible to make useful deductions about an unverified feature from its
spatial and spectral characteristics.

INSTRUCTION No. 13:

As has been previously described, a one-dimensional histogram of
the infrared band ‘B’ will display a marked difference between a low
intensity mode corresponding to water, and a high intensity mode
corresponding to land features. This may be observed on the
one-dimensional histogram of band ‘B’ of scene No. 2, plotted in
Figure 13A. The left mode of the histogram comprising band ‘B’
intensities less than 2 will relate to water pixels to a high degree of
certainty. (The major conflicting phenomenon would be shadow areas
due to clouds or mountains which would also result in low
intensities.) Find those pixels in Figure 12B which have intensities
less than 2 and mark the corresponding pixels in Figure 12D as ‘W’.

Figure 13B shows the data density of spectral feature space for scene
No. 2. Note the cluster of three low-intensity cells which correspond
to ‘water’. Sum the number of pixels corresponding to these three
cells and check that the number of pixels marked ‘W’ in Figure 12D
agrees with this sum.

The portions of feature space which were previously defined as
representing coniferous and deciduous forest are indicated in Figure
13B. It is reasonable to assume that the portion of feature space which
lies between two specific spectral signatures may represent a mixed
environmental target. Thus, the cells lying between the areas shown
as ‘coniferous’ and ‘deciduous’ may be the spectral representation of
the mixture of these two forest types, namely mixedwood. Identify
the pixels which are represented by these (three) cells, and mark them
on Figure 12D as ‘M’. Verify that the correct number of pixels have
been identified as mixedwood by summing the densities of the three
cells in Figure 13B.
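The water identification in Instruction No. 13 is simply a threshold on the infrared band: any pixel with a band ‘B’ intensity of less than 2 is marked ‘W’. A Python sketch (with an invented band ‘B’ matrix standing in for Figure 12B) follows.

import numpy as np

band_b = np.random.randint(0, 10, size=(7, 7))   # stand-in for Figure 12B

# Band 'B' intensities below 2 are taken to be water.
water = band_b < 2
print(f"{water.sum()} pixels marked 'W' (water)")
print(np.where(water, 'W', '.'))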

7. UNSUPERVISED CLASSIFICATION

The previous classification schemes used a training area in the image from which a spectral
signature was defined, and for which the investigator had some prior knowledge. This spectral signature
was then extrapolated throughout the image, or to another image, in search of other spectrally similar areas.
This is known as a variation of ‘supervised classification’. An investigator may prefer to reverse the
procedure. It is often useful to delineate spectrally dissimilar areas of an image, even when nothing is
known about the environmental character of the resulting subdivisions or classes. These classes are then
mapped and the map taken into the field to identify the classes. This procedure is known as ‘unsupervised
classification’ since no training sites are involved. The main advantage of using an unsupervised
classification technique is that the classes are subdivided based on their statistical characteristics usually
covering large geographical areas, rather than depending on a ‘training sample’ which may be quite
unrepresentative of the class variability over the whole scene to be mapped.

There are a large number of mathematical algorithms which use various schemes to locate and
separate the statistically "cohesive" clusters in feature space, which are likely to have environmental
significance. Most of these algorithms rely on finding areas of high pixel density (in feature space) which
are separated by regions of low density. The following task serves to illustrate this by a less sophisticated
algorithm than is actually used in practice.

INSTRUCTION No. 14:

The feature space representation of scene No. 2 is reproduced in
Figure 14A. Into Figure 14B, copy those cells from Figure 14A
which have a count (density) of 3 or more.

To be observed now in Figure 14B are three clusters of high density
cells. These groupings of high density cells are each only the nucleus
of a cluster. The next step is to define the boundaries of each whole
cluster. For reference, the cluster with a nucleus of two cells will be
called cluster ‘A’ in feature space and class ‘A’ when it is finally
mapped geographically (by line and pixel). Identify each cell which
touches the nucleus cells of cluster ‘A’, by marking such cells with
the letter ‘A’. There should be 10 such cells marked, counting even
those cells which touch with a corner only.

Repeat the process for cluster ‘B’ (with three nucleus cells) using the
letter ‘B’ for the neighbouring cells, and also for cluster ‘C’ (with one
nucleus cell) and using the letter ‘C’. There will be a point of
ambiguity where two clusters overlap and a cell is identified as
belonging to the neighbourhood of two clusters. A decision must be
forced, so identify this conflict cell as belonging to the cluster with the
larger nucleus.
Draw the boundary for each cluster enclosing its complete
neighbourhood in Figure 14B. There should be 11 cells inside the
boundary for cluster ‘A’, 15 cells in cluster ‘B’, and six cells in
cluster ‘C’. Transfer just the boundaries back to the original feature
space in Figure 14A.

For the final representation of feature space as divided into clusters,
transfer those cells in Figure 14A which fall inside the boundary for
cluster ‘A’ and which display a count or density of 1 or greater, into
Figure 14C, and identify those cells by the letter ‘A’. Repeat the
procedure for clusters ‘B’ and ‘C’ using the appropriate designation.

Now that feature space has been (pseudo-) statistically subdivided
into cohesive clusters, it remains to map these clusters into their
geographical locations. This will require that for each pixel of Figure
14D which is to be classified, its spectral coordinates must be
obtained from the digital maps of Figures 12A and 12B. These
coordinates are located in Figure 14C, and the representative symbol
(A, B, or C) allocated to the pixel in Figure 14D. Only the last three
lines of pixels need to be considered, since the first four lines have
been mapped.
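The clustering procedure of Instruction No. 14 can be approximated in Python as follows. This is only a sketch under simplifying assumptions: the feature-space densities and the scene No. 2 images are invented, and the scipy connected-component routine stands in for the hand grouping of touching nucleus cells.

import numpy as np
from scipy.ndimage import label, binary_dilation

rng = np.random.default_rng(0)

# Stand-in 10 x 10 feature-space density (counts per cell), as in Figure 14A.
density = rng.integers(0, 5, size=(10, 10))

# Step 1: nucleus cells are those with a count (density) of 3 or more.
nuclei = density >= 3

# Step 2: group touching nucleus cells (corners count) into separate clusters.
eight = np.ones((3, 3), dtype=int)               # 8-connected structuring element
nucleus_labels, n_clusters = label(nuclei, structure=eight)

# Step 3: each cluster's neighbourhood is every cell touching its nucleus;
# where neighbourhoods overlap, the cluster with the larger nucleus wins
# (larger nuclei are written last and so overwrite smaller ones).
sizes = [(nucleus_labels == k).sum() for k in range(1, n_clusters + 1)]
order = sorted(range(1, n_clusters + 1), key=lambda k: sizes[k - 1])
cluster_map = np.zeros_like(nucleus_labels)
for k in order:
    grown = binary_dilation(nucleus_labels == k, structure=eight)
    cluster_map[grown] = k

# Step 4: keep only occupied cells (count of 1 or more) inside each boundary.
cluster_map[density == 0] = 0

# Step 5: map the clusters back to geographic space by classifying each pixel
# according to the cluster of its (band 'A', band 'B') cell.
band_a = rng.integers(0, 10, size=(7, 7))        # stand-ins for Figures 12A and 12B
band_b = rng.integers(0, 10, size=(7, 7))
classes = cluster_map[band_a, band_b]            # 0 means 'unclassified'
print(classes)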

The unsupervised classification shown in Figure 14D now requires that the individual classes (A,
B, C) be identified environmentally. This may be done by a variety of techniques, notably airphoto
interpretation, or actually visiting the site, if practical. But it is not necessary to completely cover the scene in
question. Another advantage of the unsupervised classification scheme is that it can direct fieldwork or other
forms of ground verification, to convenient, small and representative locations, where environmentally valid
names may be assigned to the classes. For instance, the location marked by an asterisk in Figure 14D
would be a suitable location to identify classes ‘A’, ‘B’, and ‘C’ because of their proximity to each other.
Just a few such locations need to be investigated for a large scene, in order to be able to name the classes
with confidence. Sometimes this field investigation procedure is used to establish classification accuracy
figures.

Also, during a ground verification exercise, it may be opportune to identify those areas which are
‘unclassified’ (such as pixel No. 4, line No. 6 on Figure 14D). While these unclassified areas may be due
to imperfections in the processing technique or a ‘noisy’ image, areas which appear large and cohesive may
be identified as a particular environmental feature.

8. FURTHER STUDY

A few examples are shown of documents which deal with similar topics and are instructional in
format. They vary, however, in complexity, approach and emphasis.

Grabau, W.E., "Pixel Problems", Miscellaneous Paper M-76-9, Mobility and Environmental Systems
Laboratory, U.S. Army Engineering Waterways Experiment Station, P.O. Box 631, Vicksburg,
Miss. 39180. May, 1976.

Landgrebe, D.A., "Machine Processing for Remotely Acquired Data", LARS Information Note 031573,
Purdue University, West Lafayette, Indiana. 1973.

Lindenlaub, J. and J. Russell, "An Introduction to Quantitative Remote Sensing", LARS Information Note
110474, Purdue University, West Lafayette, Indiana, 1974.

Orhaug, T. and I. Akersten, "A Workshop Introduction to Digital Image Processing", FOA Report
D-30053-E1, Research Institute of Swedish National Defense, S-104 50 Stockholm 80, Sweden.
September 1976.

Smith, J.A., L.D. Miller, and T. Ells, "Pattern Recognition Routines for Graduate Training in the Automatic
Analysis of Remote Sensing Imagery -- RECOG1", Science Series No. 3A, Colorado State
University, Fort Collins, Colorado. February, 1972.

Readers are reminded that the above list was compiled in 1978. Since that date, a wealth of instructional
documentation on this topic has been produced by many authors and can be found in remote sensing
journals, symposia proceedings, textbooks and the World Wide Web.
