FADNet: A Fast and Accurate Network for Disparity Estimation
Qiang Wang1,∗, Shaohuai Shi1,∗, Shizhen Zheng1, Kaiyong Zhao1,†, Xiaowen Chu1,†

∗ Authors have contributed equally.
† Corresponding authors.
1 Department of Computer Science, Hong Kong Baptist University, {qiangwang,csshshi,szzheng,kyzhao,chxw}@comp.hkbu.edu.hk
resources to accurately predict the disparity, especially for those 3D convolution based networks, which makes them difficult to deploy in real-time applications. On the other hand, existing computation-efficient networks lack expression capability on large-scale datasets, so they cannot make accurate predictions in many scenarios. To this end, we propose an efficient and accurate deep network for disparity estimation named FADNet with three main features: 1) it exploits efficient 2D based correlation layers with stacked blocks to preserve fast computation; 2) it combines residual structures to make the deeper model easier to learn; 3) it contains multi-scale predictions so as to exploit a multi-scale weight scheduling training technique to improve the accuracy. We conduct experiments to demonstrate the effectiveness of FADNet on two popular datasets, Scene Flow and KITTI 2015. Experimental results show that FADNet achieves state-of-the-art prediction accuracy and runs an order of magnitude faster than existing 3D models. The code of FADNet is available at https://github.com/HKBU-HPML/FADNet.

Fig. 1: Performance illustrations. (a) A challenging input image. (b) Result of PSMNet [6], which consumes 13.99 GB of GPU memory and runs in 399.3 ms for one stereo image pair on an Nvidia Tesla V100 GPU. (c) Result of our FADNet, which consumes only 1.62 GB of GPU memory and runs in 18.7 ms for one stereo image pair on the same GPU. (d) Ground truth.
I. INTRODUCTION

Deep learning has been widely deployed in many computer vision tasks. Disparity estimation (also referred to as stereo matching) is a classical and important problem in computer vision applications such as 3D scene reconstruction, robotics, and autonomous driving. While traditional methods based on hand-crafted feature extraction and matching cost aggregation, such as Semi-Global Matching (SGM) [1], tend to fail on textureless and repetitive regions in the images, recent deep neural network (DNN) techniques surpass them with decent generalization and robustness to those challenging patches, and achieve state-of-the-art performance on many public datasets [2][3][4][5][6][7]. The DNN-based methods for disparity estimation are end-to-end frameworks which take stereo images (left and right) as input to the neural network and predict the disparity directly. The architecture of the DNN is essential to accurate estimation, and existing architectures can be categorized into two classes: encoder-decoder networks with 2D convolution (ED-Conv2D) and cost volume matching with 3D convolution (CVM-Conv3D). Besides, recent studies [8][9] begin to reveal the potential of automated machine learning (AutoML) for neural architecture search (NAS) on stereo matching, while some others [5][10] focus on creating large-scale datasets with high-quality labels. In practice, to measure whether a DNN model is good enough, we need to evaluate not only its accuracy on unseen samples (whether it can estimate the disparity correctly), but also its time efficiency (whether it can generate the results in real time).

In ED-Conv2D methods, stereo matching neural networks [2][3][5] were first proposed for end-to-end disparity estimation by exploiting an encoder-decoder structure. The encoder part extracts the features from the input images, and the decoder part predicts the disparity with the generated features. The disparity prediction is optimized as a regression or classification problem using large-scale datasets (e.g., Scene Flow [5], IRS [10]) with disparity ground truth. The correlation layer [11][5] was then proposed to increase the learning capability of DNNs in disparity estimation, and it has proved successful in learning strong features at multiple levels of scale [11][5][12][13][14]. To further improve the capability of the models, residual networks [15][16][17] were introduced into ED-Conv2D networks, since the residual structure makes much deeper networks easier to train [18]. The ED-Conv2D methods have proved computationally efficient, but they cannot achieve very high estimation accuracy.
To address the accuracy problem of disparity estimation, researchers have proposed CVM-Conv3D networks to better capture the features of stereo images and thus improve the estimation accuracy [3][19][6][7][20].
The key idea of the CVM-Conv3D methods is to generate the cost volume by concatenating left feature maps with their corresponding right counterparts across each disparity level [19][6]. The features of the cost volume are then automatically extracted by 3D convolution layers. However, 3D operations in DNNs are compute-intensive and hence very slow even on current powerful AI accelerators (e.g., GPUs). Although the 3D convolution based DNNs can achieve state-of-the-art disparity estimation accuracy, they are difficult to deploy due to their resource requirements. On one hand, they require a large amount of memory to hold the model, so only a limited set of accelerators (like the Nvidia Tesla V100 with 32 GB memory) can run these models. On the other hand, it takes several seconds to generate a single result even on the very powerful Tesla V100 GPU using CVM-Conv3D models. The memory consumption and the inefficient computation make the CVM-Conv3D methods difficult to deploy in practice. Therefore, it is crucial to address both the accuracy and efficiency problems for real-world applications.

To this end, we propose FADNet, a Fast and Accurate Disparity estimation Network based on ED-Conv2D architectures. FADNet achieves high accuracy while keeping a fast inference speed. As illustrated in Fig. 1, our FADNet easily obtains performance comparable to the state-of-the-art PSMNet [6], while running approximately 20× faster and consuming 10× less GPU memory. In FADNet, we first exploit multiple stacked 2D convolution layers with fast computation; then we combine state-of-the-art residual architectures to improve the learning capability; and finally we introduce multi-scale outputs so that FADNet can exploit multi-scale weight scheduling to improve the training speed. These features enable FADNet to efficiently predict the disparity with high accuracy compared to existing work. Our contributions are summarized as follows:

• We propose an accurate yet efficient DNN architecture for disparity estimation named FADNet, which achieves prediction accuracy comparable to CVM-Conv3D models while running an order of magnitude faster than the 3D-based models.
• We develop a multiple-round training scheme with multi-scale weight scheduling for FADNet, which improves the training speed yet maintains the model accuracy.
• We achieve state-of-the-art accuracy on the Scene Flow dataset with up to 20× and 45× faster disparity prediction speed than PSMNet [6] and GANet [7], respectively.

The rest of the paper is organized as follows. We introduce related work on DNN-based stereo matching in Section II. Section III introduces the methodology and implementation of our proposed network. We demonstrate our experimental results in Section IV. We finally conclude the paper in Section V.

II. RELATED WORK

There exist many studies that use deep learning to estimate image depth from monocular, stereo, and multi-view images. Although monocular vision is low cost and commonly available in practice, it does not explicitly introduce any geometrical constraint, which is important for disparity estimation [21]. On the contrary, stereo vision leverages the cross-reference between the left and the right views, and usually shows greater performance and robustness in geometrical tasks. In this paper, we mainly discuss the work related to stereo images for disparity estimation, which is classified into two categories: 2D based and 3D based CNNs.

In 2D based CNNs, end-to-end architectures with mainly convolution layers [5][22] were proposed for disparity estimation; they use two stereo images as input, generate the disparity directly, and optimize the disparity as a regression task. However, these models are pure 2D CNN architectures that can hardly capture the matching features, so their estimation results are unsatisfactory. To address this problem, the correlation layer, which can express the relationship between the left and right images, was introduced into the end-to-end architecture (e.g., DispNetCorr1D [5], FlowNet [11], FlowNet2 [23], DenseMapNet [24]). The correlation layer significantly increases the estimation performance compared to pure CNNs, but existing architectures are still not accurate enough for production.

3D based CNNs were further proposed to increase the estimation performance [3][19][6][7][20]; they employ 3D convolutions with a cost volume. The cost volume is mainly formed by concatenating left feature maps with their corresponding right counterparts across each disparity level [19][6], and the features of the generated cost volumes can be learned by 3D convolution layers. The 3D based CNNs can automatically learn to regularize the cost volume and have achieved state-of-the-art accuracy on various datasets. However, the key limitation of the 3D based CNNs is their high computation resource requirements. For example, training GANet [7] with the Scene Flow [5] dataset takes weeks even using very powerful Nvidia Tesla V100 GPUs. Even though they achieve good accuracy, they are difficult to deploy due to their very low time efficiency. To this end, we propose a fast and accurate DNN model for disparity estimation.

III. MODEL DESIGN AND IMPLEMENTATION

Our proposed FADNet exploits the structure of DispNetC [5] as a backbone, but it is extensively reformed to take care of both accuracy and inference speed, a balance that is lacking in existing studies. We first change the structure in terms of branch depth and layer type by introducing two new modules, the residual block and point-wise correlation. Then we exploit a multi-scale residual learning strategy for training the refinement network. Finally, a loss weight training schedule is used to train the network in a coarse-to-fine manner.

A. Residual Block and Point-wise Correlation

DispNetC and DispNetS, both from the study in [5], use an encoder-decoder structure equipped with five feature extraction and down-sampling layers and five feature deconvolution layers.
Fig. 2: The network architecture of FADNet, which takes the left (L) and right (R) images as input. Legend: Dual-ResBlock; Point-Wise Correlation; DeConvolution; Disparity; Concatenate; Element-Wise Addition (⊕); Conv 3×3 with stride 1; Conv 3×3 with stride 2.
While conducting feature extraction and down-sampling, DispNetC and DispNetS first adopt a convolution layer with a stride of 1 and then a convolution layer with a stride of 2, so that they consistently shrink the feature map size by half. We call this two-layer convolution with size reduction Dual-Conv; it is shown in the bottom-left corner of Fig. 2. DispNetC equipped with Dual-Conv modules and a correlation layer achieves an end-point error (EPE) of 1.68 on the Scene Flow dataset, as reported in [5].

The residual block, originally derived in [15] for image classification tasks, is widely used to learn robust features and to train very deep networks, since it addresses the vanishing gradient problem. Thus, we replace the convolution layers in the Dual-Conv module with residual blocks to construct a new module called Dual-ResBlock, also shown in the bottom-left corner of Fig. 2. With Dual-ResBlock, we can make the network deeper without training difficulty, as the residual block allows us to train very deep models. Therefore, we further increase the number of feature extraction and down-sampling layers from five to seven. Finally, DispNetC and DispNetS evolve into two new networks with better learning ability, called RB-NetC and RB-NetS respectively, as shown in Fig. 2.
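To make the module concrete, below is a minimal PyTorch-style sketch of the Dual-ResBlock pattern: a stride-1 residual block followed by a stride-2 residual block that halves the spatial size. The class names, channel widths, and normalization choices are our own illustration and do not necessarily match the released FADNet code.

import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Basic residual block: two 3x3 convolutions with a (projected) skip connection."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection when the shape changes, so the element-wise addition is valid.
        self.skip = (nn.Identity() if stride == 1 and in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + self.skip(x))

class DualResBlock(nn.Module):
    """Dual-ResBlock: mirrors the Dual-Conv pattern (stride 1 then stride 2),
    with each convolution replaced by a residual block."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            ResBlock(in_ch, out_ch, stride=1),
            ResBlock(out_ch, out_ch, stride=2),  # halves the feature map size
        )

    def forward(self, x):
        return self.block(x)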
One of the most important contributions of DispNetC is the correlation layer, which aims to find correspondences between the left and right images. Given two multi-channel feature maps f1 and f2 with w, h, and c as their width, height, and number of channels, the correlation layer calculates their cost volume using Eq. (1):

$c(x_1, x_2) = \sum_{o \in [-k,k] \times [-k,k]} \langle f_1(x_1 + o), f_2(x_2 + o) \rangle$,   (1)

where k is the kernel size of cost matching, and $x_1$ and $x_2$ are the centers of two patches from f1 and f2 respectively. Computing all patch combinations involves $c \times K^2 \times w^2 \times h^2$ multiplications (with K = 2k + 1 the patch width) and produces a cost matching map of size w × h. Given a maximum searching range D, we fix $x_1$ and shift $x_2$ along the x-axis from −D to D with a stride of two. Thus, the final output cost volume size is w × h × D.
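For reference, here is a naive PyTorch-style sketch of the patch-based correlation of Eq. (1) for a single horizontal displacement. It is our own illustration, written for clarity rather than speed; practical implementations use custom CUDA kernels and zero-pad the borders instead of wrapping.

import torch
import torch.nn.functional as F

def patch_correlation(f1, f2, d, k=1):
    # Eq. (1) for a fixed horizontal displacement d: channel-wise inner
    # products aggregated over a (2k+1) x (2k+1) patch.
    # f1, f2: (B, C, H, W) feature maps; returns (B, 1, H, W).
    f2_shift = torch.roll(f2, shifts=d, dims=3)      # x2 = x1 + (d, 0); borders wrap here
    cost = (f1 * f2_shift).sum(dim=1, keepdim=True)  # <f1(.), f2(.)> over channels
    # Summing over the patch offsets o is a box filter on the channel-wise products.
    return F.avg_pool2d(cost, 2 * k + 1, stride=1, padding=k) * (2 * k + 1) ** 2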
However, the correlation operation assumes that each pixel in the patch contributes equally to the point-wise convolution results, which may lose the ability to learn more complicated matching patterns. Here we propose point-wise correlation, composed of two modules. The first module is a classical convolution layer with a kernel size of 3 × 3 and a stride of 1. The second is an element-wise multiplication defined by Eq. (2):

$c(x_1, x_2) = \langle f_1(x_1), f_2(x_2) \rangle$,   (2)

where we remove the patch convolution of Eq. (1). Since the maximum valid disparity is 192 in the evaluated datasets, the maximum search range for the original image resolution is no more than 192. Recall that the correlation layer is placed after the third Dual-ResBlock, whose output feature resolution is 1/8, so a proper searching range value should not be less than 192/8 = 16. We set a marginally larger value of 20. We also tested other values, such as 10 and 40, which do not surpass the version using 20. The reason is that applying a too small or too large search range may lead to under-fitting or over-fitting.
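Under our reading of this design, a minimal PyTorch-style sketch of the point-wise correlation is given below: a shared 3 × 3, stride-1 convolution applied to both feature maps, followed by channel-wise inner products (Eq. (2)) over horizontal shifts from −D to D with a stride of two. The class name, the shared-weight choice, and the border handling are our assumptions rather than the released implementation.

import torch
import torch.nn as nn

class PointWiseCorrelation(nn.Module):
    # Point-wise correlation: a learnable 3x3 stride-1 convolution followed by
    # Eq. (2), i.e. channel-wise inner products without patch aggregation.
    def __init__(self, channels, max_disp=20, stride=2):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, stride=1, padding=1)
        self.shifts = list(range(-max_disp, max_disp + 1, stride))

    def forward(self, f1, f2):
        f1, f2 = self.conv(f1), self.conv(f2)
        costs = []
        for d in self.shifts:
            f2_shift = torch.roll(f2, shifts=d, dims=3)  # horizontal shift; real code would zero-pad
            costs.append((f1 * f2_shift).sum(dim=1, keepdim=True))  # <f1(x1), f2(x2)>
        return torch.cat(costs, dim=1)  # cost volume of shape (B, num_shifts, H, W)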
Table I lists the accuracy improvement brought by applying the proposed Dual-ResBlock and point-wise correlation. We train the models using the same dataset and training schemes. It is observed that RB-NetC outperforms DispNetC with a much lower EPE, which indicates the effectiveness of the residual structure. We also notice that setting a proper searching range for the correlation layer helps further improve the model accuracy.

TABLE I: Model accuracy improvement of Dual-ResBlock and point-wise correlation with different D.

Model      D    Training EPE    Test EPE
DispNetC   20   2.89            2.80
RB-NetC    10   2.28            2.06
RB-NetC    20   2.09            1.76
RB-NetC    40   2.12            1.83

Note that $d_s$ is the ground-truth disparity at scale $1/2^s$ and $\hat{d}_s$ is the predicted disparity at scale $1/2^s$. The loss function is applied separately to the seven scales of outputs, which generates seven loss values. The loss values are then accumulated with the loss weights.

TABLE II: Multi-scale loss weight scheduling.

Round   w0     w1     w2     w3     w4     w5      w6
1       0.32   0.16   0.08   0.04   0.02   0.01    0.005
2       0.6    0.32   0.08   0.04   0.02   0.01    0.005
3       0.8    0.16   0.04   0.02   0.01   0.005   0.0025
4       1.0    0      0      0      0      0       0
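The schedule in Table II can be applied as sketched below, assuming an L1 loss per scale with the ground truth downsampled to each output's resolution (both the loss form and the interpolation mode are our assumptions).

import torch
import torch.nn.functional as F

# One row of Table II per training round; w0 weights the full-resolution output.
LOSS_WEIGHTS = [
    [0.32, 0.16, 0.08, 0.04, 0.02, 0.01, 0.005],    # round 1
    [0.6,  0.32, 0.08, 0.04, 0.02, 0.01, 0.005],    # round 2
    [0.8,  0.16, 0.04, 0.02, 0.01, 0.005, 0.0025],  # round 3
    [1.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0],      # round 4: full resolution only
]

def multi_scale_loss(preds, gt_disp, round_idx):
    # preds: list of 7 predicted disparity maps; scale s has resolution 1/2^s.
    # gt_disp: full-resolution ground truth of shape (B, 1, H, W).
    total = 0.0
    for w, pred in zip(LOSS_WEIGHTS[round_idx], preds):
        if w == 0.0:
            continue  # scales scheduled out in this round contribute nothing
        # Downsample the ground truth to the prediction's scale (our choice of interpolation).
        gt_s = F.interpolate(gt_disp, size=pred.shape[-2:], mode='bilinear', align_corners=False)
        total = total + w * F.l1_loss(pred, gt_s)
    return total

Training then proceeds round by round, shifting the loss weight toward the full-resolution output, which realizes the coarse-to-fine training described at the start of Section III.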
Fig. 3: Results of disparity prediction for Scene Flow testing data. The leftmost column shows the left images of the stereo pairs. The remaining three columns show the disparity maps estimated by (a) DispNetC [5], (b) PSMNet [6], and (c) FADNet.

Fig. 4: Results of disparity prediction for KITTI 2015 testing data. The leftmost column shows the left images of the stereo pairs. The remaining three columns show the disparity maps estimated by (a) DispNetC [5], (b) PSMNet [6], and (c) FADNet, as well as their error maps.
V. CONCLUSION AND FUTURE WORK

In this paper, we proposed an efficient yet accurate neural network, FADNet, for end-to-end disparity estimation, which embraces both time efficiency and estimation accuracy on the stereo matching problem. The proposed FADNet exploits point-wise correlation layers, residual blocks, and a multi-scale residual learning strategy to make the model accurate in many scenarios while preserving a fast inference time. We compared FADNet with existing state-of-the-art 2D and 3D based methods on two popular datasets in terms of accuracy and speed. Experimental results showed that FADNet achieves comparable accuracy while running much faster than the 3D based models. Compared to the 2D based models, FADNet is more than two times more accurate.

We have two future directions following the findings of this paper. First, we would like to develop fast disparity inference with FADNet on edge devices. Since their computational capability is much lower than that of the server GPUs used in our experiments, it is necessary to explore model compression techniques, including pruning, quantization, and so on. Second, we would also like to apply AutoML [9] to search for a well-performing network structure for disparity estimation.

ACKNOWLEDGEMENTS

This research was supported by Hong Kong RGC GRF grant HKBU 12200418. We thank the anonymous reviewers for their constructive comments and suggestions. We would also like to thank the NVIDIA AI Technology Centre (NVAITC) for providing the GPU clusters for some experiments.
REFERENCES

[1] H. Hirschmuller, “Stereo processing by semiglobal matching and mutual information,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 328–341, 2007.
[2] S. Zagoruyko and N. Komodakis, “Learning to compare image patches via convolutional neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 4353–4361.
[3] J. Zbontar and Y. LeCun, “Stereo matching by training a convolutional neural network to compare image patches,” Journal of Machine Learning Research, vol. 17, no. 1-32, p. 2, 2016.
[4] A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. van der Smagt, D. Cremers, and T. Brox, “FlowNet: Learning optical flow with convolutional networks,” in The IEEE International Conference on Computer Vision (ICCV), December 2015.
[5] N. Mayer, E. Ilg, P. Hausser, P. Fischer, D. Cremers, A. Dosovitskiy, and T. Brox, “A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 4040–4048.
[6] J.-R. Chang and Y.-S. Chen, “Pyramid stereo matching network,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
[7] F. Zhang, V. Prisacariu, R. Yang, and P. H. Torr, “GA-Net: Guided aggregation net for end-to-end stereo matching,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
[8] T. Saikia, Y. Marrakchi, A. Zela, F. Hutter, and T. Brox, “AutoDispNet: Improving disparity estimation with AutoML,” in The IEEE International Conference on Computer Vision (ICCV), October 2019.
[9] X. He, K. Zhao, and X. Chu, “AutoML: A survey of the state-of-the-art,” arXiv preprint arXiv:1908.00709, 2019.
[10] Q. Wang, S. Zheng, Q. Yan, F. Deng, K. Zhao, and X. Chu, “IRS: A large synthetic indoor robotics stereo dataset for disparity and surface normal estimation,” arXiv preprint arXiv:1912.09678, 2019.
[11] A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. van der Smagt, D. Cremers, and T. Brox, “FlowNet: Learning optical flow with convolutional networks,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 2758–2766.
[12] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox, “FlowNet 2.0: Evolution of optical flow estimation with deep networks,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
[13] E. Ilg, T. Saikia, M. Keuper, and T. Brox, “Occlusions, motion and depth boundaries with a generic network for disparity, optical flow or scene flow estimation,” in The European Conference on Computer Vision (ECCV), September 2018.
[14] Z. Liang, Y. Feng, Y. Guo, H. Liu, W. Chen, L. Qiao, L. Zhou, and J. Zhang, “Learning for disparity estimation through feature constancy,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2811–2820.
[15] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
[16] A. E. Orhan and X. Pitkow, “Skip connections eliminate singularities,” arXiv preprint arXiv:1701.09175, 2017.
[17] W. Zhan, X. Ou, Y. Yang, and L. Chen, “DSNet: Joint learning for scene segmentation and disparity estimation,” in 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019, pp. 2946–2952.
[18] X. Du, M. El-Khamy, and J. Lee, “AMNet: Deep atrous multiscale stereo disparity estimation networks,” arXiv preprint arXiv:1904.09099, 2019.
[19] A. Kendall, H. Martirosyan, S. Dasgupta, P. Henry, R. Kennedy, A. Bachrach, and A. Bry, “End-to-end learning of geometry and context for deep stereo regression,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 66–75.
[20] G.-Y. Nie, M.-M. Cheng, Y. Liu, Z. Liang, D.-P. Fan, Y. Liu, and Y. Wang, “Multi-level context ultra-aggregation for stereo matching,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 3283–3291.
[21] Y. Luo, J. Ren, M. Lin, J. Pang, W. Sun, H. Li, and L. Lin, “Single view stereo matching,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
[22] J. Pang, W. Sun, J. S. Ren, C. Yang, and Q. Yan, “Cascade residual learning: A two-stage convolutional neural network for stereo matching,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 887–895.
[23] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox, “FlowNet 2.0: Evolution of optical flow estimation with deep networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2462–2470.
[24] R. Atienza, “Fast disparity estimation using dense networks,” in 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018, pp. 3207–3212.
[25] J. Pang, W. Sun, J. S. Ren, C. Yang, and Q. Yan, “Cascade residual learning: A two-stage convolutional neural network for stereo matching,” in The IEEE International Conference on Computer Vision (ICCV) Workshops, October 2017.
[26] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 248–255.
[27] M. Menze, C. Heipke, and A. Geiger, “Joint 3D estimation of vehicles and scene flow,” in ISPRS Workshop on Image Sequence Analysis (ISA), 2015.