Digital Elevation Model
Digital Elevation Model (DEM) is a generic term for both topographic and bathymetric data, and can be found in various forms. The word "model" means that the digital elevation data can be used, analysed and modelled directly by a computer in three dimensions, which reduces the need for labour-intensive human interpretation (Maune, 2001). A DEM implies the elevation of the bare earth surface, without vegetation and man-made features. The term DEM is usually used together with DTM, but the latter may also incorporate the elevation of significant topographic features on the land, together with mass points and breaklines that are regularly spaced, for a better representation of the true bare-earth shape. The term DTM originated at the Massachusetts Institute of Technology between about 1955 and 1960, using digital data obtained from stereo models (Lodwick, 1983). According to the U.S. Geological Survey (USGS), "a DEM is the digital cartographic representation of the elevation of the terrain at regularly spaced intervals in x and y directions, using z-values referenced to a common datum" (Maune, 2001). A Digital Surface Model (DSM) is similar to a DEM or DTM except that the DSM includes the elevations of vegetation and man-made structures.
The resolution of a DEM defines the terrain detail that the elevation model can represent. A coarse terrain model may miss terrain details, whereas the detailed terrain properties captured by a high-resolution model require massive data storage and longer processing time. In hydrological applications, for instance, stream detail is crucial, and under the above constraints a uniformly spaced DEM may fail to capture the streamlines. Therefore, a new form of hydro-enforced DEM is now being produced by the USGS, the Federal Emergency Management Agency (FEMA) and others who need drainage patterns to appear in the DEM.
New emerging technologies have improved the quality of DEMs, for instance laser altimetry, in particular LiDAR (Light Detection and Ranging). LiDAR can produce accurate, high-resolution DEMs by direct measurement of the ground surface. With 2000-5000 height measurements per second, 15 cm vertical precision and 1 m horizontal resolution, LiDAR is considered an expensive method of acquiring DSMs and DEMs compared to conventional methods such as photogrammetry and geodetic surveying. Nevertheless, LiDAR data are decreasing in cost as techniques of acquisition and processing (notably, filtering out vegetation and man-made structures) improve in efficiency and economies of scale make the data more competitive in the marketplace. Another technique, whose terrain accuracy and resolution are comparable to optical remote sensing methods, is radar interferometry (InSAR). InSAR requires simultaneous or repeated signal acquisitions by synthetic-aperture mapping radar and has delivered a near-global DEM at a uniform horizontal resolution of 90 m.
The Shuttle Radar Topography Mission (SRTM) obtained elevation data on a near-global scale to generate the most complete high-resolution digital topographic database of the Earth (about 80 percent of the Earth's land surface). SRTM is an international project spearheaded by the National Geospatial-Intelligence Agency (NGA) and the National Aeronautics and Space Administration (NASA). There are two versions of SRTM elevation data, with 90 m and 30 m horizontal resolution, available for the whole globe and for the United States respectively. At the early stage of SRTM data acquisition, however, one fifth of the Earth's land area was excluded. Moreover, the average relative accuracy of the dataset is 10 m, and for the 90 m dataset with serious errors the accuracy could potentially be 30 m (Pike, 2002). This is partly attributed to the limited ability of radar to penetrate dense vegetation cover. Furthermore, some elevation cells are left blank (voids) and water bodies may not appear flat.
Throughout this report, the term DTM will refer to the bare earth surface.
DTMs are used for many purposes, such as terrain modelling, map visualization and hydrologic modelling (Wechsler, 2003). The required DTM resolution is highly dependent on the application; for instance, a high-resolution DTM (between 2.25 and 3.25 m) is important for modelling processes at the micro scale. In general the resolution is also governed by the surface landscape: low-relief landscapes are less sensitive to resolution effects, while a larger range of relief needs higher resolution (Anderson et al.). An accurate DTM is essential for urban flood modelling, since many obstacles are involved in controlling the behaviour of floodwater. Structural elements in the floodplain such as river embankments and elevated roads have a significant effect on flood extent, and thus it is crucial to use a good DTM to represent such obstacles (Bishop and Catalano, 2001; Werner, 2001). The irregular nature of the urban environment makes a fixed grid the easier approach compared to an unstructured grid for representing topography (Bishop and Catalano, 2001). In their urban flood modelling, Schmitt et al. (2004) ensured that the DTM includes "distinct levels of street cross section, side-walks and street curbs as well as the border line between public (street, side-walk) and private space". However, including all these elements increases the computational time. Moreover, such detailed information can only be obtained through detailed conventional ground elevation surveys or through LiDAR.
LiDAR can produce dense elevation information with a vertical accuracy of 15 to 20 cm and sub-metre planimetric accuracy (Cobby et al., 2001). Mark et al. (2004) suggested that a DTM with a cell size of 1 m by 1 m to 5 m by 5 m is sufficient to represent urban elements such as road width, side-walk width, houses and buildings in urban flood analysis. Moreover, a 1 m by 1 m horizontal resolution is not necessary to obtain accurate flood levels, and a 5 m by 5 m DTM can be used for quick assessment of the model results. Blomgren (1999) used a DEM with a horizontal resolution of 5 to 10 m and a vertical accuracy of less than 1.0 m to accurately represent narrow features such as road embankments and dunes in his flood modelling. Major roads should be among the important elements in the DTM, since they behave as artificial channels during a flood (Mark et al., 2004). Buildings are also important elements in controlling floodwater behaviour. On the one hand, treating buildings as solid blocks might overestimate flood extent and flood depth; on the other hand, treating buildings as bare earth with a rough surface could underestimate flood depth and flood extent. El-Ashmawy (2003) showed that buildings have a significant effect on flood extent when the built-up area is 10 percent or more; in fact, treating a building as a solid block raises the flood height twice as much as treating it as a partially solid block.
Tennakoon (2004) found that the flow pattern remains essentially unchanged when solid-block building structures are removed and a high surface roughness is used instead. Treating building structures as a rough surface helps to reduce the overestimation of flood depth. Moreover, building structures treated as solid blocks in an area with 10 percent building density increase the floodwater depth by 25 cm (El-Ashmawy, 2003). This suggests that areas with less than 10 percent building density do not need a special representation of building structures (i.e. a partially solid structure, which allows intrusion of water during a flood) (El-Ashmawy, 2003).
Roads and other prominent man-made features influence the floodwater flow; thus features smaller than the final DTM resolution need to be enlarged to at least the pixel resolution (Tennakoon, 2004). However, this assumption is only valid up to a certain DTM resolution relative to the feature size, since exaggerating feature size might overestimate the contribution of such elements in flood modelling. Table 3-1 shows the optimum pixel size of the DTM for various scales of flood modelling applications.
Table 3-1: Optimum pixel size for various model applications (Tennakoon, 2004)
The required DTM accuracy for flood modelling depends on the characteristics of the study area. Maune (2001) pointed out that hydraulic and hydrologic modelling of low-lying areas, for instance floodplains, requires higher vertical accuracy and higher resolution of elevation data than modelling of hills and mountains. The direction of water flow is crucial and difficult to determine in relatively flat areas, whereas in hilly and mountainous areas the flow direction is defined by the steep slopes.
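One common way to derive flow direction from a gridded DTM is the D8 scheme, in which each cell drains towards its steepest downslope neighbour; in flat areas no neighbour is lower and the direction becomes ambiguous, which is why flat terrain is problematic. The Python sketch below illustrates this idea only; it is a minimal illustration under that assumption, not the method used by the cited studies.

```python
import numpy as np

def d8_flow_direction(dem, cellsize=1.0):
    """Return, for each interior cell, the index (0-7) of the steepest
    downslope neighbour, or -1 where no neighbour is lower (flat cell or pit)."""
    # Neighbour offsets (row, col); diagonals are sqrt(2)*cellsize away.
    offsets = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]
    dists = [np.hypot(dr, dc) * cellsize for dr, dc in offsets]

    rows, cols = dem.shape
    direction = np.full((rows, cols), -1, dtype=int)

    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            best_slope, best_dir = 0.0, -1
            for k, (dr, dc) in enumerate(offsets):
                slope = (dem[r, c] - dem[r + dr, c + dc]) / dists[k]
                if slope > best_slope:          # strictly downslope only
                    best_slope, best_dir = slope, k
            direction[r, c] = best_dir          # -1 marks flats and pits
    return direction
```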
A fixed-size grid is a common approach in Geographical Information System (GIS) packages for representing a continuous terrain surface. However, fixed-size gridded DEMs suffer from a sampling-size (spatial resolution) problem: rough terrain is usually undersampled at a coarse sampling size, while flat terrain is oversampled (Raaflaub and Collins, 2005). Inevitably, DEM generation is prone to errors introduced by erroneous or artificial elevation samples. According to Raaflaub and Collins (2005), artificial pits or sink features in a DTM are a hydrologically serious problem. In fact, "pits generally appear in flatter area where even a one meter error can be enough to produce a closed depression" (Raaflaub and Collins, 2005). Artificial pits can cause serious problems during flood modelling, where water appears trapped and stays longer than it should. "However, most pits can be considered errors since fluvial processes will not normally produce such features at the scale resolved by DEMs" (Band, 1986, cited in Raaflaub and Collins, 2005). Therefore, pits should be analysed and removed if necessary after the DTM construction.
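As a simple illustration of pit removal, the sketch below raises any interior cell that lies below all eight of its neighbours up to the lowest neighbouring elevation. It handles only single-cell pits; larger artificial depressions require iterative or priority-flood filling algorithms, and the function name and approach here are illustrative assumptions rather than the procedure used by the USGS or the cited authors.

```python
import numpy as np

def fill_single_cell_pits(dem):
    """Raise every interior cell that is lower than all eight neighbours
    up to the lowest neighbouring elevation (single-cell pits only)."""
    filled = dem.astype(float).copy()
    rows, cols = filled.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            neighbours = filled[r - 1:r + 2, c - 1:c + 2].copy()
            neighbours[1, 1] = np.inf            # exclude the centre cell
            lowest = neighbours.min()
            if filled[r, c] < lowest:            # the cell is an artificial pit
                filled[r, c] = lowest            # raise it to its pour level
    return filled
```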
Various interpolation methods have been used with the aim of filling the gaps between sampled measurements (Demirhan et al., 2003). The interpolation is based on the available measurements of the surrounding areas. In DEM interpolation, a gridded surface of unknown elevations is created from known elevation measurements. In general, interpolation techniques can be classified into several families, such as classification models, trend surface analysis, Fourier series, proximal or nearest neighbours, moving averages and inverse distance weighting, splines and, finally, kriging. Kidner (2003) pointed out that DTM accuracy depends on the interpolation method. In his research he found that interpolation techniques that take the local terrain neighbourhood into account are more consistent and accurate; he showed that bilinear interpolation reduced the root mean square error (RMSE) by up to 20 percent. However, different interpolation approaches should be used for different surface characteristics, so there is no single interpolation method that is best for all situations (Maune, 2001). Moreover, a single interpolation technique may have many parameters, and different areas might require different parameter values; for instance, a coastal plain might require different parameter values than a floodplain or a mountainous area. Tomaž Podobnikar (2005) pointed out that the quality of a DEM may differ depending on its intended use, the quality of the source data, the interpolation algorithm and the operator's experience. In addition, one of the major challenges faced by most interpolation algorithms is the presence of noise in the elevation dataset (Demirhan et al., 2003). They also pointed out that this noise may originate from measurement errors, or from overlapping or closely spaced elevation measurements with markedly different elevation values.
The basic principle of terrain interpolation is that elevation points that are closer together tend to be more alike than points that are farther apart (Demirhan et al., 2003; Maune, 2001). The change in elevation from one point to another is closely related to the distance between the points; in other words, the elevation difference is spatially dependent. Therefore, in most interpolation algorithms the points closest to the predicted location have the greatest influence on the interpolation. This principle is normally expressed through a weighting method, in which the weight value decreases as the surrounding measurements lie farther away from the point to be predicted. Interpolation methods can be classified into two main approaches, namely deterministic and probabilistic methods. In principle, the deterministic interpolation methods use the surrounding elevation samples directly in the interpolation. The probabilistic methods share a similar principle, but in addition the selection of the surrounding elevation measurements is based on their correlation with the predicted location. For instance, geostatistical interpolation methods use autocorrelation values calculated from point pairs to develop a spatial-dependence variogram model. The weights assigned to the surrounding measurements are then based on both the distance and the correlation between the predicted points and the sample points.
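The weighting idea shared by both approaches can be written as a weighted average of the surrounding samples. In the sketch below, the notation is illustrative rather than taken from the cited sources: z-hat(x0) is the predicted elevation, z(xi) are the surrounding measurements, di their distances to the prediction point and p a user-chosen decay exponent.

```latex
\hat{z}(\mathbf{x}_0) = \sum_{i=1}^{n} w_i \, z(\mathbf{x}_i),
\qquad \sum_{i=1}^{n} w_i = 1,
\qquad
w_i \propto
\begin{cases}
  d_i^{-p} & \text{deterministic, e.g. IDW}\\[4pt]
  \text{derived from the variogram } \gamma(h) & \text{probabilistic, e.g. kriging}
\end{cases}
```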
Figure 3-1: Inverse Distance Weighted (IDW) interpolation method; (a) weighting function with normalized distance on the X axis and weight value on the Y axis, (b) predicted point marked as "A".
The IDW interpolation method is among the simplest interpolation methods (figure 3-1). IDW uses a linearly weighted combination of the values of nearby points to determine the value at a new point, with the weight defined as a decreasing function of distance. However, this purely distance-based assumption is not always valid for terrain interpolation (Maune, 2001).
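A minimal Python sketch of IDW at a single prediction point is given below; the decay exponent `power` and the function name are illustrative choices, not part of the cited sources.

```python
import numpy as np

def idw_interpolate(sample_xy, sample_z, query_xy, power=2.0):
    """Inverse Distance Weighted interpolation at one query point.

    sample_xy : (n, 2) coordinates of known elevation points
    sample_z  : (n,)   elevations at those points
    query_xy  : (2,)   location where the elevation is predicted
    power     : decay exponent; 2 is a common (but arbitrary) choice
    """
    pts = np.asarray(sample_xy, dtype=float)
    z = np.asarray(sample_z, dtype=float)
    d = np.linalg.norm(pts - np.asarray(query_xy, dtype=float), axis=1)

    # If the query coincides with a sample, return that sample's elevation.
    if np.any(d == 0):
        return float(z[d == 0][0])

    w = d ** -power              # weight decreases with distance
    return float(np.sum(w * z) / np.sum(w))
```

For example, `idw_interpolate([(0, 0), (10, 0), (0, 10)], [5.0, 7.0, 6.0], (2, 3))` returns an elevation close to 5.4, dominated by the nearest sample at (0, 0).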
The second interpolation method is the natural neighbour interpolation method, known as "area stealing", which uses area to calculate the weights. The result is a smooth and conservative terrain surface. The key strength of this interpolation lies in how it finds the appropriate input samples (neighbours) for the height interpolation at a given point. The nearest points are chosen based on Thiessen polygons, which is crucial to ensure that the interpolated value depends only on a local subset of the data. No search radius is required, which is an advantage because a fixed search radius becomes a problem when the input samples are not evenly distributed. The weighting scheme is based on the amount of area that is "stolen" from the Thiessen polygons of the input samples. Samples that are not selected as neighbours have zero weight (figure 3-2). The linear interpolation scheme produces a smooth interpolation everywhere except at the sample points, while the non-linear scheme produces a smooth interpolated surface everywhere, including at the sample points.
Figure 3-2: (a) the location of a query point that needs an interpolated height relative to the position of the TIN elements; (b) the interpolation finds the closest surrounding nodes in all directions to the query point and establishes a relationship to them for use in the height estimation (Maune, 2001).
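To make the "area stealing" idea concrete, the sketch below approximates natural-neighbour weights on a fine auxiliary grid: each grid cell is first assigned to its nearest sample (a discrete Thiessen tessellation), the cells that the query point would "steal" are identified, and each sample's weight is its share of that stolen area. This is a coarse discrete approximation written purely for illustration, under the assumption of a small grid spacing, and is not an exact Voronoi construction or a library routine.

```python
import numpy as np

def natural_neighbor_weights(samples_xy, query_xy, grid_res=0.01):
    """Approximate 'area stealing' (natural neighbour) weights on a fine grid.

    grid_res should be small relative to the extent of the sample coordinates.
    """
    pts = np.asarray(samples_xy, dtype=float)
    q = np.asarray(query_xy, dtype=float)

    # Fine auxiliary grid covering the samples and the query point.
    lo = np.minimum(pts.min(axis=0), q) - 5 * grid_res
    hi = np.maximum(pts.max(axis=0), q) + 5 * grid_res
    gx, gy = np.meshgrid(np.arange(lo[0], hi[0], grid_res),
                         np.arange(lo[1], hi[1], grid_res))
    grid = np.column_stack([gx.ravel(), gy.ravel()])

    # Discrete Thiessen tessellation: the nearest sample owns each grid cell.
    d_samples = np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=2)
    owner = d_samples.argmin(axis=1)

    # Cells the query point would steal once inserted into the tessellation.
    stolen = np.linalg.norm(grid - q, axis=1) < d_samples.min(axis=1)

    # Each sample's weight is its share of the stolen area; non-neighbours get zero.
    counts = np.bincount(owner[stolen], minlength=len(pts)).astype(float)
    return counts / counts.sum() if counts.sum() > 0 else counts

# Usage: interpolate a height at (0.3, 0.4) from four surrounding samples.
samples = [(0, 0), (1, 0), (0, 1), (1, 1)]
heights = np.array([10.0, 12.0, 11.0, 13.0])
weights = natural_neighbor_weights(samples, (0.3, 0.4))
estimate = float(weights @ heights)
```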