[Figure: estimated trajectories (x/m vs. y/m) and tracked feature counts over time for VINS-RGBD, VIW-Fusion, and Ground-Fusion against the ground truth (GT).]
Fig. 4. (a) Wheel anomaly analysis and (b) trajectory of different methods in the Anomaly sequence.

Fig. 5. (a) ATE RMSE (m) of SLAM systems on wheel anomaly sequences. (b) The solid lines denote the value of each method, and dashed lines denote their corresponding thresholds. Grey shading denotes areas where at least two stationary conditions are satisfied.

and Slope1, the robot moves on a rough road and a steep slope respectively; in Loop2, the robot traverses a hall with a carpeted aisle loop, where the wheels slip considerably; sequence Corridor1 is a zigzag route in a long corridor. Table III shows that our method achieves the best performance in all these sequences. We further conducted ablation tests to verify the effect of IMU angular velocity as a substitute for wheel angular velocity. We selected two sequences with sharp turns, Corridor1 and Loop2. The results in Table IV indicate that the IMU-odometer measurements contribute to better localization accuracy.

TABLE IV
ATE RMSE (m) OF SLAM SYSTEMS ON SELECTED SEQUENCES

Seq.        Odom    IMU-Odom   Baseline   Ours
Corridor1   2.17    1.03       0.89       0.66
Loop2       17.60   6.53       4.66       1.53
Anomaly     0.78    0.77       0.62       0.07
Static      3.54    3.51       2.35       0.01
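To make the Odom versus IMU-Odom comparison above concrete, the sketch below contrasts planar dead reckoning that takes the yaw rate from differential-drive wheel kinematics with a variant that keeps the wheel linear velocity but substitutes the gyroscope's angular velocity. This is a minimal illustration under an assumed unicycle model and placeholder function names, not the Ground-Fusion implementation.

```python
import numpy as np

def dead_reckon(v_body, omega_z, dt):
    """Integrate planar poses [x, y, yaw] from forward velocity (m/s) and yaw rate (rad/s)."""
    poses = [np.zeros(3)]
    for v, w in zip(v_body, omega_z):
        x, y, yaw = poses[-1]
        yaw_mid = yaw + 0.5 * w * dt          # midpoint integration of the unicycle model
        poses.append(np.array([x + v * np.cos(yaw_mid) * dt,
                               y + v * np.sin(yaw_mid) * dt,
                               yaw + w * dt]))
    return np.array(poses)

def wheel_only(v_left, v_right, track_width, dt):
    """Yaw rate from differential-drive kinematics; corrupted whenever the wheels slip."""
    v_left, v_right = np.asarray(v_left), np.asarray(v_right)
    return dead_reckon(0.5 * (v_left + v_right), (v_right - v_left) / track_width, dt)

def imu_odometer(v_left, v_right, gyro_z, dt):
    """Same wheel linear velocity, but yaw rate taken from the gyroscope instead."""
    v_left, v_right = np.asarray(v_left), np.asarray(v_right)
    return dead_reckon(0.5 * (v_left + v_right), np.asarray(gyro_z), dt)

# Toy case: the wheels report straight motion while the gyroscope observes a real turn.
n, dt = 200, 0.01
v_l = v_r = np.full(n, 1.0)
gyro = np.full(n, 0.5)
print(wheel_only(v_l, v_r, track_width=0.5, dt=dt)[-1])   # ends near [2, 0, 0]
print(imu_odometer(v_l, v_r, gyro, dt=dt)[-1])            # ends on a curved path, yaw = 1 rad
```

During sharp turns the gyroscope yaw rate is far less affected by wheel slip, which is consistent with the drop in error from the Odom column to the IMU-Odom column in Table IV.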
Moreover, we test our method on sequences with wheel anomalies. In the Anomaly sequence, the robot body moves as the carpet beneath it is pulled away, while the robot wheels do not move. Conversely, in the Static sequence, the robot is suspended so that the robot body does not move even when the wheels are moving. The results of different methods on the two sequences are shown in Table IV, where "Baseline" denotes Ground-Fusion without wheel anomaly detection and "Ours" denotes our full method. As depicted in Figure 4(a), a wheel anomaly is evident between 20 s and 40 s; our full method adeptly eliminates the erroneous wheel odometer readings there. Figure 4(b) shows that only our full method matches the ground-truth trajectory well. Similarly, our method effectively detects the wheel anomaly in the Static sequence.
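The detection logic behind Figure 4(a) can be pictured as thresholding a per-interval anomaly value that measures how much the wheel-predicted motion disagrees with an inertial (or visual-inertial) prediction. The snippet below is an illustrative sketch only; the function name, inputs, and threshold value are assumptions rather than the paper's exact formulation.

```python
import numpy as np

def wheel_anomaly_flags(d_wheel, d_imu, thre=0.025):
    """Flag intervals where wheel-predicted translation disagrees with the IMU prediction.

    d_wheel, d_imu: per-interval translation estimates in metres (e.g. from preintegration).
    thre:           anomaly threshold, playing the role of the dashed line in Fig. 4(a).
    Returns a boolean array; True marks intervals whose wheel readings should be discarded.
    """
    anomaly_value = np.abs(np.asarray(d_wheel) - np.asarray(d_imu))
    return anomaly_value > thre

# Example: the wheels report no motion while the body (carpet pulled away) actually moves.
d_wheel = [0.00, 0.00, 0.00, 0.12]
d_imu = [0.08, 0.10, 0.09, 0.11]
print(wheel_anomaly_flags(d_wheel, d_imu))  # [ True  True  True False]
```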
B. Outdoor Tests

We used a ground vehicle equipped with the full sensor suite for data collection, with all the sensors well-synchronized and calibrated. We recorded sequences in various scenarios4 and chose the three most challenging ones for this paper: in sequence Lowspeed, the ground vehicle moved at a low speed and made several stops; sequence Tree was under dense tree cover, causing occlusion of the GNSS satellites; in sequence Switch, the vehicle transitioned from outdoors to indoors, and then returned outdoors again.

4 https://github.com/SJTU-ViSYS/M2DGR-plus

GNSS challenge: We evaluate our method under GNSS challenges against baseline methods, with the localization results shown in Table V. Overall, Ground-Fusion outperforms the baseline methods in all these cases. In Lowspeed, when the robot is stationary, GVINS fails to localize due to Doppler noise, and VINS-GW5 also drifts, while Ground-Fusion works robustly by removing unreliable GNSS measurements. As shown in Figure 5(a), in Switch, VINS-GW experiences severe drift because the loosely-coupled GNSS signals deteriorate significantly as the robot approaches the indoor area, while both GVINS and our method remain unaffected thanks to their tightly-coupled integration. In sequence Tree, GVINS falters due to unstable visual features, while our method remains robust thanks to tightly-coupled wheel and depth measurements.

5 https://github.com/Wallong/VINS-GPS-Wheel

TABLE V
ATE RMSE (m) OF SLAM SYSTEMS ON SAMPLE SEQUENCES

Sequence        Lowspeed   Switch   Tree
Raw Odom        8.88       4.95     2.88
SPP             2.54       6.60     3.03
GVINS [8]       fail       1.40     fail
VINS-RGBD [4]   4.72       1.70     2.27
VINS-GW         20.68      33.61    3.26
Ours            0.63       1.32     0.55
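As discussed above, Ground-Fusion "removes unreliable GNSS measurements"; one way to picture such a step is a per-epoch gate applied before GNSS factors enter the optimization. The following sketch is a stand-in under assumed fields and thresholds (satellite count, elevation mask, carrier-to-noise ratio, Doppler noise), not the paper's actual criterion.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SatObs:
    """One satellite observation at an epoch (illustrative fields only)."""
    elevation_deg: float   # satellite elevation angle
    cn0_dbhz: float        # carrier-to-noise density ratio
    doppler_std: float     # reported Doppler measurement noise

def gate_gnss(obs: List[SatObs], min_sats: int = 5, min_elev_deg: float = 15.0,
              min_cn0_dbhz: float = 30.0, max_doppler_std: float = 0.5,
              stationary: bool = False) -> List[SatObs]:
    """Keep satellites that pass per-observation checks; drop the whole epoch if too
    few remain, or if Doppler is unreliable while the robot is stationary."""
    kept = [o for o in obs
            if o.elevation_deg >= min_elev_deg and o.cn0_dbhz >= min_cn0_dbhz]
    if stationary:
        # Doppler velocity is dominated by noise when the robot does not actually move.
        kept = [o for o in kept if o.doppler_std <= max_doppler_std]
    return kept if len(kept) >= min_sats else []

# A tree-covered epoch: most satellites are low or noisy, so the epoch is skipped.
epoch = [SatObs(12, 28, 0.8), SatObs(40, 35, 0.2), SatObs(25, 33, 0.3)]
print(len(gate_gnss(epoch)))  # 0
```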
Zero velocity update (ZUPT): We visualize the three stationary detection methods together with the GT distance on the Lowspeed sequence in Figure 5(b). The figure shows that a single sensor might misclassify the motion state; for instance, the wheel-based method fails to detect the stationary state at approximately 110 seconds. By contrast, our scheme combines the three sensors and detects the stationary state reliably. Quantitatively, the ATE RMSE in Lowspeed decreased by 0.05 m after ZUPT.
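Figure 5(b) marks the robot as stationary where at least two of the three per-sensor conditions hold. A minimal sketch of such 2-of-3 voting followed by a zero-velocity pseudo-measurement is given below; the statistics and thresholds are placeholders, not tuned values from the actual system.

```python
import numpy as np

def is_stationary(wheel_speed, acc_window, flow_px,
                  wheel_thre=0.01, acc_var_thre=0.05, flow_thre=0.5):
    """2-of-3 stationary vote from wheel, IMU, and camera cues.

    wheel_speed: mean wheel speed over the window (m/s)
    acc_window:  accelerometer samples over the window, shape (N, 3)
    flow_px:     median optical-flow magnitude between consecutive frames (pixels)
    """
    votes = [
        wheel_speed < wheel_thre,                                    # wheel condition
        np.var(np.linalg.norm(acc_window, axis=1)) < acc_var_thre,   # IMU condition
        flow_px < flow_thre,                                         # visual condition
    ]
    return sum(votes) >= 2

def zupt_residual(velocity_state):
    """Zero-velocity pseudo-measurement: a residual that pulls the velocity state to zero."""
    return np.asarray(velocity_state) - np.zeros(3)

# Example: quiet accelerometer, still wheels, and negligible optical flow -> apply ZUPT.
acc = np.tile([0.0, 0.0, 9.81], (200, 1)) + 0.01 * np.random.randn(200, 3)
if is_stationary(wheel_speed=0.0, acc_window=acc, flow_px=0.2):
    print(zupt_residual([0.03, -0.01, 0.00]))  # small residual to be driven to zero
```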
V. CONCLUSION

This paper presents a tightly-coupled RGBD-Wheel-IMU-GNSS SLAM system to achieve reliable localization for ground vehicles. Our system features robust initialization through three strategies. We have also devised effective anomaly detection and handling methods to address corner cases, with experimental results demonstrating the superiority of our system.
REFERENCES

[1] J. Yin, C. Liang, X. Li, Q. Xu, H. Wang, T. Fan, Z. Wu, and Z. Zhang, "Design, sensing and control of service robotic system for intelligent navigation and operation in internet data centers," in 2023 IEEE 19th International Conference on Automation Science and Engineering (CASE). IEEE, 2023, pp. 1–8.
[2] C. Cadena, L. Carlone, H. Carrillo, Y. Latif, D. Scaramuzza, J. Neira, I. Reid, and J. J. Leonard, "Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age," IEEE Transactions on Robotics, vol. 32, no. 6, pp. 1309–1332, 2016.
[3] A. Martinelli, "Closed-form solution of visual-inertial structure from motion," International Journal of Computer Vision, vol. 106, pp. 138–152, 2013.
[4] Z. Shan, R. Li, and S. Schwertfeger, "Rgbd-inertial trajectory estimation and mapping for ground robots," Sensors, vol. 19, no. 10, p. 2251, 2019.
[5] K. J. Wu, C. X. Guo, G. Georgiou, and S. I. Roumeliotis, "Vins on wheels," in 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017, pp. 5155–5162.
[6] T. Hua, L. Pei, T. Li, J. Yin, G. Liu, and W. Yu, "M2c-gvio: Motion manifold constraint aided gnss-visual-inertial odometry for ground vehicles," Satellite Navigation, vol. 4, no. 1, pp. 1–15, 2023.
[7] J. Yin, H. Jiang, J. Wang, D. Yan, and H. Yin, "A robust and efficient ekf-based gnss-visual-inertial odometry," in 2023 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2023, pp. 1–5.
[8] S. Cao, X. Lu, and S. Shen, "Gvins: Tightly coupled gnss–visual–inertial fusion for smooth and consistent state estimation," IEEE Transactions on Robotics, 2022.
[9] J. Yin, T. Li, H. Yin, W. Yu, and D. Zou, "Sky-gvins: A sky-segmentation aided gnss-visual-inertial system for robust navigation in urban canyons," Geo-spatial Information Science, vol. 0, no. 0, pp. 1–11, 2023.
[10] J. Yin, A. Li, T. Li, W. Yu, and D. Zou, "M2dgr: A multi-sensor and multi-scenario slam dataset for ground robots," IEEE Robotics and Automation Letters, 2022.
[11] J. Yin, H. Yin, C. Liang, H. Jiang, and Z. Zhang, "Ground-challenge: A multi-sensor slam dataset focusing on corner cases for ground robots," in 2023 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2023, pp. 1–5.
[12] T. Qin, P. Li, and S. Shen, "Vins-mono: A robust and versatile monocular visual-inertial state estimator," IEEE Transactions on Robotics, vol. 34, no. 4, pp. 1004–1020, 2018.
[13] J. Liu, W. Gao, and Z. Hu, "Visual-inertial odometry tightly coupled with wheel encoder adopting robust initialization and online extrinsic calibration," in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019, pp. 5391–5397.
[14] L. Gui, C. Zeng, S. Dauchert, J. Luo, and X. Wang, "A zupt aided initialization procedure for tightly-coupled lidar inertial odometry based slam system," Journal of Intelligent & Robotic Systems, vol. 108, no. 3, p. 40, 2023.
[15] C. Campos, R. Elvira, J. J. G. Rodríguez, J. M. Montiel, and J. D. Tardós, "Orb-slam3: An accurate open-source library for visual, visual–inertial, and multimap slam," IEEE Transactions on Robotics, 2021.
[16] J. Liu, X. Li, Y. Liu, and H. Chen, "Rgb-d inertial odometry for a resource-restricted robot in dynamic environments," IEEE Robotics and Automation Letters, vol. 7, no. 4, pp. 9573–9580, 2022.
[17] C. Yu, Z. Liu, X.-J. Liu, F. Xie, Y. Yang, Q. Wei, and Q. Fei, "Ds-slam: A semantic visual slam towards dynamic environments," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018, pp. 1168–1174.
[18] M. Quan, S. Piao, M. Tan, and S.-S. Huang, "Tightly-coupled monocular visual-odometric slam using wheels and a mems gyroscope," IEEE Access, vol. 7, pp. 97374–97389, 2019.
[19] G. Peng, Z. Lu, S. Chen, D. He, and L. Xinde, "Pose estimation based on wheel speed anomaly detection in monocular visual-inertial slam," IEEE Sensors Journal, vol. 21, no. 10, pp. 11692–11703, 2020.
[20] S. Hewitson and J. Wang, "Gnss receiver autonomous integrity monitoring (raim) performance analysis," GPS Solutions, vol. 10, pp. 155–170, 2006.
[21] D. He, W. Xu, N. Chen, F. Kong, C. Yuan, and F. Zhang, "Point-lio: Robust high-bandwidth light detection and ranging inertial odometry," Advanced Intelligent Systems, p. 2200459, 2023.
[22] J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers, "A benchmark for the evaluation of rgb-d slam systems," in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2012, pp. 573–580.
[23] K. Y. Leung, Y. Halpern, T. D. Barfoot, and H. H. Liu, "The utias multi-robot cooperative localization and mapping dataset," The International Journal of Robotics Research, vol. 30, no. 8, pp. 969–974, 2011.
[24] X. Shi, D. Li, P. Zhao, Q. Tian, Y. Tian, Q. Long, C. Zhu, J. Song, F. Qiao, L. Song, et al., "Are we ready for service robots? The openloris-scene datasets for lifelong slam," in 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020, pp. 3139–3145.
[25] I. Skog, P. Handel, J.-O. Nilsson, and J. Rantakokko, "Zero-velocity detection—an algorithm evaluation," IEEE Transactions on Biomedical Engineering, vol. 57, no. 11, pp. 2657–2666, 2010.
[26] B. D. Lucas, T. Kanade, et al., An iterative image registration technique with an application to stereo vision. Vancouver, 1981, vol. 81.
[27] J. Engel, V. Koltun, and D. Cremers, "Direct sparse odometry," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 3, pp. 611–625, 2017.
[28] O. Kähler, V. A. Prisacariu, C. Y. Ren, X. Sun, P. Torr, and D. Murray, "Very high frame rate volumetric integration of depth images on mobile devices," IEEE Transactions on Visualization and Computer Graphics, vol. 21, no. 11, pp. 1241–1250, 2015.
[29] T. Whelan, S. Leutenegger, R. Salas-Moreno, B. Glocker, and A. Davison, "Elasticfusion: Dense slam without a pose graph," in Robotics: Science and Systems, 2015.
[30] W. Wang, Y. Hu, and S. Scherer, "Tartanvo: A generalizable learning-based vo," in Conference on Robot Learning. PMLR, 2021, pp. 1761–1772.
[31] J. Redmon and A. Farhadi, "Yolov3: An incremental improvement," arXiv preprint arXiv:1804.02767, 2018.