1711.08681v1
Univ. Bretagne-Sud, UMR 6074, IRISA, F-56000 Vannes, France
Abstract
In this work, we investigate various methods to deal with semantic label-
ing of very high resolution multi-modal remote sensing data. Especially, we
study how deep fully convolutional networks can be adapted to deal with
multi-modal and multi-scale remote sensing data for semantic labeling. Our
contributions are three-fold: a) we present an efficient multi-scale approach
to leverage both a large spatial context and the high resolution data, b) we
investigate early and late fusion of Lidar and multispectral data, c) we val-
idate our methods on two public datasets with state-of-the-art results. Our
results indicate that late fusion makes it possible to recover errors stemming
from ambiguous data, while early fusion allows for better joint-feature
learning but at the cost of higher sensitivity to missing data.
Keywords: Deep Learning, Remote Sensing, Semantic Mapping, Data
Fusion
1. Introduction
Remote sensing has benefited a lot from deep learning in the past few
years, mainly thanks to progress achieved in the computer vision community
on natural RGB images. Indeed, most deep learning architectures designed
for multimedia vision can be used on remote sensing optical images. This
resulted in significant improvements in many remote sensing tasks such as
2. Related Work
Semantic labeling of remote sensing data relates to the dense pixel-wise
classification of images, which is called either “semantic segmentation” or
“scene understanding” in the computer vision community. Deep learning has
proved itself to be both effective and popular on this task, especially since
the introduction of Fully Convolutional Networks (FCN) [12]. By replacing
standard fully connected layers of traditional Convolutional Neural Networks
(CNN) by convolutional layers, it was possible to densify the single-vector
output of the CNN to achieve a dense classification at 1:8 resolution. The
first FCN model was quickly improved and extended into several variants.
Some improvements have been based on convolutional auto-encoders
with a symmetrical architecture such as SegNet [8] and DeconvNet [13]. Both
use a bottleneck architecture in which the feature maps are upsampled to
match the original input resolution, therefore performing pixel-wise predic-
tions at 1:1 resolution. These models have however been outperformed on
multimedia images by more sophisticated approaches, such as removing the
pooling layers from standard CNN and using dilated convolutions [14] to pre-
serve most of the input spatial information, which resulted in models such
as the multi-scale DeepLab [15] which performs predictions at several reso-
lutions using separate branches and produces 1:8 predictions. Finally, the
rise of the residual networks [9] was soon followed by new architectures de-
rived from ResNet [16, 17]. These architectures leverage the state-of-the-art
effectiveness of residual learning for image classification by adapting them
for semantic segmentation, again at a 1:8 resolution. All these architectures
were shown to perform especially well on popular natural image semantic
segmentation benchmarks such as Pascal VOC [18] and COCO [19].
On the other hand, deep learning has also been investigated for multi-
modal data processing. Using dual-stream autoencoders, [20] successfully
jointly processed audio-video data using an architecture with two branches,
one for audio and one for video that merge in the middle of the network.
Moreover, processing RGB-D (or 2.5D) data has a significant interest for the
computer vision and robotics communities, as many embedded sensors can
sense both optical and depth information. Relevant architectures include two
parallel CNNs merging into the same fully connected layers [21] (for
RGB-D data classification) and two CNN streams merging in the middle [22]
(for fingertip detection). FuseNet [10] extended this idea to fully convolu-
tional networks for semantic segmentation of RGB-D data by integrating an
early fusion scheme into the SegNet architecture. Finally, the recent work
of [23] builds on the FuseNet architecture to incorporate residual learning
and multiple stages of refinement to obtain high resolution multi-modal
predictions on RGB-D data. These models can be used to learn jointly from several
heterogeneous data sources, although they focus on multimedia images.
As deep learning significantly improved computer vision tasks, remote
sensing adopted those techniques and deep networks have often been used
for Earth Observation. Since the first successful use of patch-based CNN for
roads and buildings extraction [24], many models were built upon the deep
learning pipeline to process remote sensing data. For example, [25] performed
multiple label prediction (i.e. both roads and buildings) in a single CNN. [26]
extended the approach to multispectral images including visible and infrared
bands. Although successful, the patch-based classification approach only
produces coarse maps, as an entire patch gets associated with only one label.
Dense maps can be obtained by sliding a window over the entire input, but
this is an expensive and slow process. Therefore, for urban scenes with dense
labeling in very high resolution, superpixel-based classification [27] of urban
remote sensing images was a successful approach that classified homogeneous
regions to produce dense maps, as it combines the patch-based approach with
an unsupervised pre-segmentation. By concatenating the features fed to
the SVM classifier, [28, 29] extended this framework to multi-scale
processing using a superpixel-based pyramidal approach. Other approaches
for semantic segmentation included patch-based prediction with mixed deep
and expert features [30], that used prior knowledge and feature engineering
to improve the deep network predictions. Multi-scale CNN predictions have
been investigated by [31] with a pyramid of images used as input to an en-
semble of CNN for land cover use classification, while [1] used several convo-
lutional blocks to process multiple scales. Lately, semantic labeling of aerial
images has moved to FCN models [5, 4, 32]. Indeed, Fully Convolutional Net-
works such as SegNet or DeconvNet, which directly perform pixel-wise classification,
are very well suited for semantic mapping of Earth Observation data, as
they can capture the spatial dependencies between classes without the need
for pre-processing such as a superpixel segmentation, and they produce high
resolution predictions. These approaches have again been extended for so-
phisticated multi-scale processing in [33] using both the expensive pyramidal
approach with an FCN and the multiple resolutions output inspired from [15].
Multiple scales allow the model to capture spatial relationships for objects
of different sizes, from large arrangements of buildings to individual trees,
allowing for a better understanding of the scene. To enforce a better spatial
regularity, probabilistic graphical models such as Conditional Random Fields
(CRF) post-processing have been used to model relationships between neigh-
boring pixels and integrate these priors in the prediction [34, 5, 35], although
this adds expensive computations that significantly slow down the inference. On the
other hand, [33] proposed a network that learnt both the semantic labeling
and the explicit inter-class boundaries to improve the spatial structure of the
predictions. However, these explicit spatial regularization schemes are ex-
pensive. In this work, we aim to show that these are not necessary to obtain
semantic labeling results that are competitive with the state-of-the-art.
Previously, works investigated fusion of multi-modal data for remote sens-
ing. Indeed, complementary sensors can be used on the same scene to mea-
sure several properties that give different insights on the semantics of the
scene. Therefore, data fusion strategies can help obtain better models that
can use these multiple data modalities. To this end, [30] fused optical and
Lidar data by concatenating deep and expert features as inputs to random
forests. Similarly, [35] integrated expert features from the ancillary data (Li-
dar and NDVI) into their higher-order CRF to improve the main optical
classification network. The work of [2] investigated late fusion of Lidar and
optical data for semantic segmentation using prediction fusion that required
no feature engineering by combining two classifiers with a deep learning end-
to-end approach. This was also investigated in [36] to fuse optical and Open-
StreetMap for semantic labeling. During the Data Fusion Contest (DFC)
2015, [29] proposed an early fusion scheme of Lidar and optical data based
on a stack of deep features for superpixel-based classification of urban remotely
sensed data. In the DFC 2016, [37] performed land cover classification and
traffic analysis by fusing multispectral and video data at a late stage. Our
goal is to thoroughly study end-to-end deep learning approaches for multi-
modal data fusion and to compare early and late fusion strategies for this
task.
3. Method description
3.1. Semantic segmentation of aerial images
Semantic labeling of aerial images requires a dense pixel-wise classification
of the images. Therefore, we can use FCN architectures to achieve this,
using the same techniques that are effective for natural images. We choose
the SegNet [8] model as the base network in this paper. SegNet is based
on an encoder-decoder architecture that produces an output with the same
resolution as the input, as illustrated in Fig. 1. This is a desirable property
as we want to label the data at original image resolution, therefore producing
maps at 1:1 resolution compared to the input. SegNet makes such a task possible,
as the decoder is able to upsample the feature maps using the unpooling
operation. We also compare this base network to a modified version of the
ResNet-34 network [9] adapted for semantic segmentation.

Figure 1: SegNet architecture [8] for semantic labeling of remote sensing data. See text
for more detailed explanations of each layer.

Figure 2: Illustration of the max-pooling / unpooling mechanism: activations are relocated
at the argmax indices stored during pooling.
The encoder from SegNet is based on the convolutional layers from VGG-
16 [38]. It has 5 convolution blocks, each containing 2 or 3 convolutional
layers of kernel 3 × 3 with a padding of 1 followed by a rectified linear unit
(ReLU) and a batch normalization (BN) [39]. Each convolution block is
followed by a max-pooling layer of size 2 × 2. Therefore, at the end of the
encoder, the feature maps are each of size W/32 × H/32, where the original image
has a resolution W × H.
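To make the encoder structure concrete, the following is a minimal PyTorch-style sketch of the five encoder blocks described above (our own illustration with channel sizes assumed from VGG-16; it is not the authors' implementation):

    import torch.nn as nn

    def encoder_block(in_ch, out_ch, n_convs):
        # One SegNet encoder block: n_convs x (conv 3x3, padding 1 -> BN -> ReLU).
        layers = []
        for i in range(n_convs):
            layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                       nn.BatchNorm2d(out_ch),
                       nn.ReLU(inplace=True)]
        return nn.Sequential(*layers)

    # Five blocks following the VGG-16 configuration; each 2x2 max-pooling halves
    # the resolution, so the final feature maps are W/32 x H/32.
    blocks = nn.ModuleList([encoder_block(i, o, n) for (i, o, n) in
                            [(3, 64, 2), (64, 128, 2), (128, 256, 3),
                             (256, 512, 3), (512, 512, 3)]])
    pool = nn.MaxPool2d(2, stride=2, return_indices=True)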
The decoder performs both the upsampling and the classification. It
learns how to restore the full spatial resolution while transforming the en-
coded feature maps into the final labels. Its structure is symmetrical with
respect to the encoder. Pooling layers are replaced by unpooling layers as
described in [40]. The unpooling relocates the activation from the smaller fea-
ture maps into a zero-padded upsampled map. The activations are relocated
at the indices computed at the pooling stages, i.e. the argmax from the max-
pooling (cf. Fig. 2). This unpooling makes it possible to place the highly-abstracted
features of the decoder back at the salient points of the low-level geometrical
feature maps of the encoder. This is especially effective on small objects
that would otherwise be misplaced or misclassified. After the unpooling, the
convolution blocks densify the sparse feature maps. This process is repeated
until the feature maps reach the input resolution.
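As an illustration of this pooling/unpooling mechanism, here is a small PyTorch sketch (our toy example; tensor sizes are arbitrary):

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 64, 128, 128)                       # encoder feature maps
    pooled, indices = F.max_pool2d(x, 2, stride=2, return_indices=True)

    # Unpooling: each activation is placed back at the argmax position remembered
    # from the max-pooling; every other position is zero. The following convolution
    # block of the decoder then densifies this sparse, upsampled map.
    unpooled = F.max_unpool2d(pooled, indices, kernel_size=2, stride=2)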
According to [9], residual learning helps train deeper networks and achieved
new state-of-the-art classification performance on ImageNet, as well as state-
of-the-art semantic segmentation results on the COCO dataset. Conse-
quently, we also compare our methods applied to the ResNet-34 architecture.
The ResNet-34 model uses four residual blocks. Each block is composed of 2 or
3 convolutions of 3 × 3 kernels and the input of the block is summed into
the output using a skip connection. As in SegNet, convolutions are followed
by Batch Normalization and ReLU activation layers. The skip connection
can be either the identity if the tensor shapes match, or a 1 × 1 convolution
that projects the input feature maps into the same space as the output ones
if the number of convolution planes changed. In our case, to keep most of
the spatial resolution, we keep the initial 2 × 2 max-pooling but reduce the
stride of all convolutions to 1. Therefore, the output of the ResNet-34 model
is a 1:2 prediction map. To upsample this map back to full resolution, we
perform an unpooling followed by a standard convolutional block.
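A possible way to express this adaptation on top of torchvision's ResNet-34 is sketched below (our interpretation of the description above; the upsampling head and its width are assumptions, and the authors' exact code may differ):

    import torch.nn as nn
    from torchvision.models import resnet34

    num_classes = 6
    base = resnet34()

    # Set every convolution stride to 1 so that the initial max-pooling is the only
    # downsampling left: the network then outputs a 1:2 resolution map.
    for m in base.modules():
        if isinstance(m, nn.Conv2d):
            m.stride = (1, 1)

    # Assumed upsampling head: an unpooling (using the indices of the initial
    # pooling) would precede this standard convolutional block to recover 1:1.
    head = nn.Sequential(nn.Conv2d(512, num_classes, 3, padding=1),
                         nn.BatchNorm2d(num_classes),
                         nn.ReLU(inplace=True))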
Finally, both networks use a softmax layer to compute the multinomial
logistic loss, averaged over the whole patch:
$$\mathrm{loss} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{k} y^i_j \log\left(\frac{\exp(z^i_j)}{\sum_{l=1}^{k}\exp(z^i_l)}\right), \qquad (1)$$
where N is the number of pixels in the input image, k the number of classes
and, for a given pixel i, y^i denotes its label and (z^i_1, . . . , z^i_k) the prediction
vector. This means that we only minimize the average pixel-wise classification
loss without any spatial regularization, as it will be learnt by the network
during training. We do not use any post-processing, e.g. a CRF, as it would
significantly slow down the computations for little to no gain.

Figure 3: Multi-scale deep supervision of SegNet with 3 branches on remote sensing data.
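In PyTorch terms, the objective of Eq. (1) corresponds to the standard per-pixel cross-entropy averaged over the patch, e.g. (a minimal sketch with illustrative tensor shapes, not the authors' code):

    import torch
    import torch.nn as nn

    k, H, W = 6, 128, 128                       # classes and patch size (illustrative)
    logits = torch.randn(1, k, H, W)            # prediction vectors z for each pixel
    labels = torch.randint(0, k, (1, H, W))     # ground-truth label y for each pixel

    # CrossEntropyLoss applies the softmax of Eq. (1) and averages the per-pixel
    # negative log-likelihood over the N pixels of the patch.
    loss = nn.CrossEntropyLoss()(logits, labels)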
smaller maps are then interpolated to full resolution and averaged to obtain
the final full resolution semantic map.
Let P_full denote the full resolution prediction, P_down_d the prediction at
the downscale factor d and f_d the bilinear interpolation that upsamples a map
by a factor d. Therefore, we can aggregate our multi-resolution predictions
using a simple summation (with f_0 = Id), e.g. if we use four scales:
$$P_{\mathrm{full}} = \sum_{d \in \{0,2,4,8\}} f_d(P_{\mathrm{down}_d}) = P_0 + f_2(P_2) + f_4(P_4) + f_8(P_8). \qquad (2)$$
This ensures that earlier layers still have a meaningful gradient, even when
the global optimization is converging. As argued in [42], deeper layers now
only have to learn how to refine the coarser predictions from the lower reso-
lutions, which helps the overall learning process.
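The aggregation of Eq. (2) is straightforward to implement; a small sketch (ours, with hypothetical prediction maps at each downscale factor):

    import torch
    import torch.nn.functional as F

    k, H, W = 6, 256, 256
    # Hypothetical prediction maps at downscale factors 0 (full), 2, 4 and 8.
    preds = {0: torch.randn(1, k, H, W),
             2: torch.randn(1, k, H // 2, W // 2),
             4: torch.randn(1, k, H // 4, W // 4),
             8: torch.randn(1, k, H // 8, W // 8)}

    # Eq. (2): bilinearly upsample every coarse map to full resolution and sum them.
    # During training, each branch can also receive its own loss on a downsampled
    # ground truth, which provides the deep supervision discussed above.
    p_full = sum(F.interpolate(p, size=(H, W), mode='bilinear', align_corners=False)
                 for p in preds.values())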
(a) FuseNet architecture [10] for early fusion of remote sensing data.
be the auxiliary data (cf. Fig. 5a). There is a conceptual imbalance in the
way the two sources are dealt with. We suggest an alternative architecture
with a third “virtual” branch that does not have this imbalance, which might
improve performance.
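For reference, the FuseNet fusion being discussed can be summarized as follows (a simplified sketch with our own naming; the auxiliary activations are summed into the main branch at each encoder stage):

    def fusenet_stage(x_main, x_aux, conv_main, conv_aux, pool):
        # One FuseNet encoder stage: the auxiliary features are added to the main
        # branch before pooling, which makes the two sources play asymmetric roles.
        x_main = conv_main(x_main)
        x_aux = conv_aux(x_aux)
        fused = x_main + x_aux
        return pool(fused), pool(x_aux)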
Instead of computing the sum of the two sets of feature maps, we suggest
an alternative fusion process to obtain the multi-modal joint-features. We
introduce a third encoder that does not correspond to any real modality, but
instead to a virtual fused data source. At stage n, the virtual encoder takes
as input its previous activations concatenated with both activations from the
other encoders. These feature maps are passed through a convolutional block
to learn a residual that is summed with the average feature maps from the
other encoders. This is illustrated in Fig. 5b. This strategy makes FuseNet
symmetrical and therefore relieves us of the choice of the main source, which
would be an additional hyperparameter to tune. This architecture will be
named V-FuseNet in the rest of the paper for Virtual-FuseNet.
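A sketch of one stage of this virtual branch, under our reading of the description above (the function names and the averaging constant are ours):

    import torch

    def vfusenet_stage(x_main, x_aux, x_virt, conv_main, conv_aux, conv_mix, pool):
        # V-FuseNet encoder stage: the virtual branch concatenates its previous
        # activations with those of the two real encoders, passes them through a
        # convolutional block to learn a residual, and adds this residual to the
        # average of the two encoders' feature maps.
        x_main = conv_main(x_main)
        x_aux = conv_aux(x_aux)
        residual = conv_mix(torch.cat([x_virt, x_main, x_aux], dim=1))
        x_virt = residual + 0.5 * (x_main + x_aux)
        return pool(x_main), pool(x_aux), pool(x_virt)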
(a) Residual correction [2] for late fusion using two SegNets.
(b) Residual correction [2] for late fusion using two ResNets.
Figure 6: Architectures of the altered baseline FCNs to fit the residual correction framework.
a smooth classification map. Then, we re-train the correction module in a
residual fashion. The residual correction network therefore learns a small
offset to apply to each pixel-probabilities. This is illustrated in Fig. 6a for
the SegNet architecture and Fig. 6b for the ResNet architecture.
Let R be the number of outputs on which to perform residual correction,
P_0 the ground truth, P_i the i-th prediction and ε_i the error term of P_i w.r.t.
the ground truth. We predict P′, the sum of the averaged predictions and
the correction term c, which is inferred by the fusion network:
$$P' = P_{\mathrm{avg}} + c = \frac{1}{R}\sum_{i=1}^{R} P_i + c = P_0 + \frac{1}{R}\sum_{i=1}^{R} \epsilon_i + c, \qquad (3)$$
As our residual correction module is optimized to minimize the loss, we
enforce:
$$\|P' - P_0\| \to 0, \qquad (4)$$
which translates into a constraint on c and the ε_i:
$$\left\|\frac{1}{R}\sum_{i=1}^{R} \epsilon_i + c\right\| \to 0. \qquad (5)$$
As this offset c is learnt in a supervised way, the network can infer which
input to trust depending on the predicted classes. For example, if the aux-
iliary data is better for vegetation detection, the residual correction will at-
tribute more weight to the prediction coming out of the auxiliary SegNet.
This module can be generalized to n inputs, even with different network
architectures. This architecture will be denoted SegNet-RC (for SegNet-
Residual Correction) in the rest of the paper.
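A minimal sketch of such a correction module (our simplified version of the idea, with an arbitrary hidden width; the actual module in [2] may differ):

    import torch
    import torch.nn as nn

    class ResidualCorrection(nn.Module):
        # Late fusion: average the R prediction maps and learn the offset c of Eq. (3).
        def __init__(self, n_inputs, n_classes, width=64):
            super().__init__()
            self.fusion = nn.Sequential(
                nn.Conv2d(n_inputs * n_classes, width, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(width, n_classes, 3, padding=1))

        def forward(self, predictions):          # list of (B, n_classes, H, W) tensors
            p_avg = torch.stack(predictions).mean(dim=0)
            c = self.fusion(torch.cat(predictions, dim=1))
            return p_avg + c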
Table 1: Validation results on Vaihingen.
4. Experiments
4.1. Datasets
We validate our method on the two image sets of the ISPRS 2D Semantic
Labeling Challenge 1 . These datasets are comprised of very high resolution
aerial images over two cities in Germany: Vaihingen and Potsdam. The
goal is to perform semantic labeling of the images on six classes: buildings,
impervious surfaces (e.g. roads), low vegetation, trees, cars and clutter. Two
online leaderboards (one for each city) are available and report test metrics
obtained on held-out test images.
ISPRS Vaihingen. The Vaihingen dataset has a resolution of 9 cm/pixel with
tiles of approximately 2100 × 2100 pixels. There are 33 images, of which
16 have a public ground truth. Tiles consist of Infrared-Red-Green (IRRG)
images and DSM data extracted from the Lidar point cloud. We also use the
normalized DSM (nDSM) from [43].

1 http://www2.isprs.org/commissions/comm3/wg4/semantic-labeling.html

Table 3: Final results on the Vaihingen dataset.
learning rate of the pre-initialized weights is set to half the learning rate of the
new weights, as suggested in [2].
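In practice this amounts to using two parameter groups in the optimizer; a toy sketch (the module names, base learning rate and choice of SGD are illustrative assumptions, not values from the paper):

    import torch.nn as nn
    import torch.optim as optim

    # Stand-ins: 'encoder' would hold the pre-initialized (e.g. VGG-16) weights
    # and 'decoder' the newly initialized ones.
    encoder = nn.Conv2d(3, 64, 3, padding=1)
    decoder = nn.Conv2d(64, 6, 3, padding=1)

    base_lr = 0.01                               # illustrative value only
    optimizer = optim.SGD([
        {'params': encoder.parameters(), 'lr': base_lr / 2},   # half the learning rate
        {'params': decoder.parameters(), 'lr': base_lr},
    ], lr=base_lr, momentum=0.9)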
Results are cross-validated on each dataset using a 3-fold split. Final
models for testing on the held-out data are re-trained on the whole training
set.
4.3. Results
Table 1 details the cross-validated results of our methods on the Vaihingen
dataset. We show the pixel-wise accuracy and the average F1 score over all
classes. The F1 score over a class is defined by:
$$F1_i = 2\,\frac{\mathrm{precision}_i \times \mathrm{recall}_i}{\mathrm{precision}_i + \mathrm{recall}_i}, \qquad (7)$$
$$\mathrm{recall}_i = \frac{tp_i}{C_i}, \qquad \mathrm{precision}_i = \frac{tp_i}{P_i}, \qquad (8)$$
where tp_i is the number of true positives for class i, C_i the number of pixels
belonging to class i, and P_i the number of pixels attributed to class i by
the model. As per the evaluation instructions from the challenge organizers,
these metrics are computed after eroding the borders by a 3px radius circle
and discarding those pixels.
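For completeness, these per-class metrics can be computed from a confusion matrix as in the NumPy sketch below (our own code following Eqs. (7)-(8); the border erosion is assumed to have been applied beforehand):

    import numpy as np

    def per_class_f1(confusion):
        # confusion[i, j]: number of pixels of true class i predicted as class j.
        tp = np.diag(confusion).astype(float)     # true positives tp_i
        C = confusion.sum(axis=1)                 # pixels belonging to class i
        P = confusion.sum(axis=0)                 # pixels attributed to class i
        recall = tp / np.maximum(C, 1)
        precision = tp / np.maximum(P, 1)
        return 2 * precision * recall / np.maximum(precision + recall, 1e-12)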
Table 2 details the results of the multi-scale approach. “No branch” de-
notes the reference single-scale SegNet model. The first branch was added
after the 4th convolutional block of the decoder (downscale = 2), the sec-
ond branch after the 3rd (downscale = 4) and the third branch after the 2nd
(downscale = 8).
Table 3 and Table 4 show the final results of our methods on the held-out
test data from the Vaihingen and Potsdam datasets respectively.
5. Discussion
5.1. Baselines and preliminary experiments
As a baseline, we train standard SegNets and ResNets on the IRRG and
composite versions of the Vaihingen and Potsdam datasets. These models
are already competitive with the state-of-the-art as is, with a significant
advance for the IRRG version. Especially, the car class has an average F1
score of ≈ 59.0% on the composite images whereas it reaches ≈ 85.0% on
the IRRG tiles. Nonetheless, we know that the composite tiles contain DSM
(a) IRRG image (b) Ground truth
Figure 7: Effect of the multi-scale prediction strategy on an excerpt of the ISPRS Vaihingen
dataset. Small objects or surfaces with ambiguous spatial context are regularized by the
multi-scale prediction aggregation.
(white: roads, blue: buildings, cyan: low vegetation, green: trees, yellow: cars)
(a) RGB image (b) Composite image (c) Ground truth
Figure 8: Effect of the fusion strategy on an excerpt of the ISPRS Potsdam dataset.
Confusion between impervious surfaces and buildings is significantly reduced thanks to
the contribution of the nDSM in the V-FuseNet strategy.
(white: roads, blue: buildings, cyan: low vegetation, green: trees, yellow: cars)
less subject to overfitting. Overall, ResNet and SegNet obtain similar results,
with ResNet being more stable. However, ResNet requires significantly more
memory compared to SegNet, especially when using the fusion schemes. No-
tably, we were not able to use the V-FuseNet scheme with ResNet-34 due to
the memory limitation (12 GB) of our GPUs. Nonetheless, these results show
that the investigated data fusion strategies can be applied to several flavors
of Fully Convolutional Networks and that our findings should generalize to
other base networks from the state-of-the-art.
(a) SegNet can perform arguably better than the ground truth. (b) SegNet sometimes
overfits on geometrical aberrations. (Each panel shows, from left to right: IRRG image,
ground truth, SegNet prediction.)
Figure 9: Disputable inconsistencies between our predictions and the ground truth.
(white: roads, blue: buildings, cyan: low vegetation, green: trees, yellow: cars)
Figure 10: Errors in the Vaihingen nDSM are poorly handled by both fusion methods.
Here, an entire building goes missing.
predictions. This makes it easier for subsequent human interpretation or
post-processing, such as vectorization or shapefile generation, especially on
the man-made structures.
As a side effect of this investigation, our tests showed that the downscaled
outputs were still quite accurate. For example, the prediction downscaled by
a factor 8 was on average only 0.5% below the full resolution prediction in
accuracy, with the difference mostly residing in the “car” class. This is unsurprising
as cars are usually ≈ 30 px long in the full resolution tile and therefore cover
only 3-4 pixels in the downscaled prediction, which makes them harder to
see. Still, the good average accuracy of the downscaled outputs seems
to indicate that the decoder from SegNet could be reduced to its first con-
volutional block without losing too much accuracy. This technique could be
used to reduce the inference time when small objects are irrelevant while
maintaining a good accuracy on the other classes.
(a) Predictions from various models on a patch of the Vaihingen dataset (from left to
right: IRRG image, ground truth, SegNet, FuseNet, SegNet-RC).
(b) SegNet confidence heat maps for various classes using several inputs (SegNet IRRG
confidence for buildings, roads and cars; SegNet composite confidence for buildings and cars).
and to fuse them to alleviate the uncertainty around the cars in the rooftop
parking lot. This works well on Vaihingen as both the IRRG and composite
sources achieve a global accuracy higher than 85%. However, on Potsdam,
the composite SegNet is less informative and achieves only 79% accuracy, as
the annotations are more precise and the dataset overall more challenging
for a data source that relies only on Lidar and NDVI. Therefore, the residual
correction fails to make the most of the two data sources. This analysis is
supported by the fact that, on the Vaihingen validation set, the residual cor-
rection achieves a better global accuracy with ResNets than with SegNets,
thanks to the stronger ResNet-34 trained on the composite source.
Meanwhile, the FuseNet architecture learns a joint representation of the
two data sources, but faces the same pitfall as the standard SegNet model:
edge cases such as cars on rooftop parking lots disappear. However, the
joint-features are significantly stronger and the decoder can perform a better
classification using this multi-modal representation, therefore improving the
global accuracy of the model.
In conclusion, the two fusion strategies can be used for different use cases.
Late fusion by residual correction is more suited to combine several strong
classifiers that are confident in their predictions, while the FuseNet early
fusion scheme is more adapted for integrating weaker ancillary data into the
main learning pipeline.
On the held-out testing set, the V-FuseNet strategy does not perform as
well as expected. Its global accuracy is marginally below that of the original FuseNet
model, although F1 scores on smaller and harder classes are improved, espe-
cially “clutter” which is improved from 49.3% to 51.0%. As the “clutter” class
is ignored in the dataset metrics, this is not reflected in the final accuracy.
help alleviate overfitting and improve robustness by training on synthetic
data, as proposed in [47].
6. Conclusion
In this work, we investigate deep neural networks for semantic labeling
of multi-modal very high-resolution urban remote sensing data. Especially,
we show that fully convolutional networks are well-suited to the task and
obtain excellent results. We present a simple deep supervision trick that
extracts semantic maps at multiple resolutions, which helps train the net-
work and improves the overall classification. Then, we extend our work to
non-optical data by integrating the digital surface model extracted from Lidar
point clouds. We study two methods for multi-modal remote sensing data
processing with deep networks: early fusion with FuseNet and late fusion us-
ing residual correction. We show that both methods can efficiently leverage
the complementarity of the heterogeneous data, although on different use
cases. While early fusion allows the network to learn stronger features, late
fusion can recover errors on hard pixels that are missed by all the other mod-
els. We validated our findings on the ISPRS 2D Semantic Labeling datasets
of Potsdam and Vaihingen, on which we obtained results competitive with
the state-of-the-art.
Acknowledgements
The Vaihingen dataset was provided by the German Society for Pho-
togrammetry, Remote Sensing and Geoinformation (DGPF) [11]:
http://www.ifp.uni-stuttgart.de/dgpf/DKEP-Allg.html. The authors thank
the ISPRS for making the Vaihingen and Potsdam datasets available and
organizing the semantic labeling challenge. Nicolas Audebert’s work is sup-
ported by the Total-ONERA research project NAOMI.
References
[1] X. Chen, S. Xiang, C. L. Liu, C. H. Pan, Vehicle Detection in Satellite
Images by Hybrid Deep Convolutional Neural Networks, IEEE Geo-
science and Remote Sensing Letters 11 (10) (2014) 1797–1801.
[2] N. Audebert, B. Le Saux, S. Lefèvre, Semantic Segmentation of Earth
Observation Data Using Multimodal and Multi-scale Deep Networks,
in: Asian Conference on Computer Vision (ACCV16), Taipei, Taiwan,
2016.
[9] K. He, X. Zhang, S. Ren, J. Sun, Deep Residual Learning for Image
Recognition, in: Proceedings of the IEEE Conference on Computer Vi-
sion and Pattern Recognition, Las Vegas, USA, 2016.
[11] M. Cramer, The DGPF test on digital aerial camera evaluation –
overview and test design, Photogrammetrie – Fernerkundung – Geoin-
formation 2 (2010) 73–82.
[17] H. Zhao, J. Shi, X. Qi, X. Wang, J. Jia, Pyramid scene parsing net-
work, in: Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition (CVPR) Workshops, Honolulu, USA, 2017.
[19] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan,
P. Dollár, C. L. Zitnick, Microsoft COCO: Common Objects in Context, in: Computer
Vision – ECCV 2014, no. 8693 in Lecture Notes in Computer Science,
Springer International Publishing, 2014, pp. 740–755.
[23] S.-J. Park, K.-S. Hong, S. Lee, RDFNet: RGB-D multi-level residual feature
fusion for indoor semantic segmentation, in: The IEEE International
Conference on Computer Vision (ICCV), 2017.
2-D Contest, IEEE Journal of Selected Topics in Applied Earth Obser-
vations and Remote Sensing PP (99) (2016) 1–13.
[28] N. Audebert, B. Le Saux, S. Lefèvre, How useful is region-based clas-
sification of remote sensing images in a deep learning framework?, in:
2016 IEEE International Geoscience and Remote Sensing Symposium
(IGARSS), Beijing, China, 2016, pp. 5091–5094.
[29] A. Lagrange, B. Le Saux, A. Beaupere, A. Boulch, A. Chan-Hon-Tong,
S. Herbin, H. Randrianarivo, M. Ferecatu, Benchmarking classification
of earth-observation data: From learning explicit features to convolu-
tional networks, in: IEEE International Geosciences and Remote Sens-
ing Symposium (IGARSS), 2015, pp. 4173–4176.
[30] S. Paisitkriangkrai, J. Sherrah, P. Janney, A. Van Den Hengel, Effective
semantic pixel labelling with convolutional networks and Conditional
Random Fields, in: Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition Workshops, Boston, USA, 2015, pp.
36–43.
[31] Q. Liu, R. Hang, H. Song, Z. Li, Learning Multi-Scale Deep Fea-
tures for High-Resolution Satellite Image Classification, arXiv preprint
arXiv:1611.03591.
[32] M. Volpi, D. Tuia, Dense semantic labeling of subdecimeter resolution
images with convolutional neural networks, IEEE Transactions on Geo-
science and Remote Sensing 55 (2) (2017) 881–893.
[33] D. Marmanis, K. Schindler, J. D. Wegner, S. Galliani, M. Datcu,
U. Stilla, Classification With an Edge: Improving Semantic Image
Segmentation with Boundary Detection, arXiv preprint arXiv:1612.01337.
[34] G. Lin, C. Shen, A. Van Den Hengel, I. Reid, Efficient piecewise training
of deep structured models for semantic segmentation, in: Proceedings
of the IEEE Conference on Computer Vision and Pattern Recognition,
Boston, USA, 2015.
[35] Y. Liu, S. Piramanayagam, S. T. Monteiro, E. Saber, Dense semantic
labeling of very-high-resolution aerial imagery and LiDAR with fully-
convolutional neural networks and higher-order crfs, in: Proceedings
of the IEEE Conference on Computer Vision and Pattern Recognition
(CVPR) Workshops, Honolulu, USA, 2017.
[43] M. Gerke, Use of the Stair Vision Library within the ISPRS 2D Semantic
Labeling Benchmark (Vaihingen), Tech. rep., International Institute for
Geo-Information Science and Earth Observation (2015).
[44] K. He, X. Zhang, S. Ren, J. Sun, Delving Deep into Rectifiers: Surpass-
ing Human-Level Performance on ImageNet Classification, in: Proceed-
ings of the IEEE International Conference on Computer Vision, 2015,
pp. 1026–1034.
[45] J. Hoffman, S. Gupta, T. Darrell, Learning with side information
through modality hallucination, in: Proceedings of the IEEE Confer-
ence on Computer Vision and Pattern Recognition, Las Vegas, USA,
2016, pp. 826–834.