Visual-Inertial Navigation: Challenges and Applications
IROS 2019 Full-day Workshop: November 8, 2019, Macau, China; Room: LG-R16 @ Venetian
Macao Resort Hotel
Updates
(10/25) The workshop program is now available (at the end of this page). Note: Each poster
will have an (up to) 5-minute spotlight oral presentation and a 45-minute poster presentation.
(7/28) We are pleased to announce that there will be a limited number of IET-CSR Travel
Grants to (partially) support students traveling to the workshop to present their work.
Information about how to apply will be posted later.
(7/26) By popular request, the paper submission deadline has been extended to: August 15, 2019
(7/26) Page limits do not include references (say n pages of references), i.e., 6+n pages for
research papers, 4+n for field reports, and 2+n for demo papers.
(7/3) We are pleased to announce that LORD Sensing will sponsor the LORD Best Paper
Award (with MicroStrain IMUs as prizes)!
(7/2) Please prepare papers following the IROS template, available in LaTeX and MS Word.
(7/1) We are pleased to announce that there will be a Special Issue of IET Cyber-
Systems and Robotics, which will invite some of the best papers presented at this workshop.
Overview
As cameras and IMUs become ubiquitous, visual-inertial navigation systems (VINS), which
provide high-precision 3D motion estimation, hold great potential in a wide range of applications,
from augmented reality (AR) and aerial navigation to autonomous driving, in part because of the
complementary sensing capabilities and the decreasing cost and size of these sensors. While
visual-inertial navigation, alongside SLAM, has witnessed tremendous progress in the past
decade, certain critical aspects of the design of visual-inertial systems remain poorly explored,
greatly hindering the widespread deployment of these systems in practice. For example, many
VINS algorithms are not yet robust to high dynamics and poor lighting conditions; they are not
yet accurate enough for long-term, large-scale operations, in particular in life-critical scenarios;
and they are not yet able to provide the semantic and cognitive understanding needed to support
high-level decision making. This workshop brings together researchers in robotics, computer
vision, and AI, from both academia and industry, to share their insights and thoughts on the R&D
of VINS. The goal of this workshop is to bring forward the latest breakthroughs and cutting-edge
research on visual-inertial navigation and beyond, to open discussions about technical challenges
and future research directions for the community, and to identify new applications of this
emerging technology.
Call for Contributions
We welcome submissions of papers describing VINS-related work in progress, preliminary results,
novel concepts, and industry experiences. All submitted papers will be reviewed by at least two
experts (see Program Committee below) on the basis of technical quality, relevance, significance,
and clarity. Topics of interest to this workshop include, but are not necessarily limited to:
Visual-inertial odometry
Visual-inertial perception
Visual SLAM
Sensor calibration
High-speed visual control and estimation of aerial vehicles
Deep learning for visual SLAM
Cooperative visual-inertial navigation
Multi-sensor fusion
Co-design of hardware and software of VINS
Simulations and benchmarking of visual-inertial navigation
Visual perception in challenging and dynamic environments
Human motion modeling
Field robotics
AR/VR
All accepted papers will appear on the workshop website. Note that the authors retain all
intellectual property rights to their contributions to the workshop. We will also be exploring the
possibility of a journal special issue for the best contributions at the workshop.
Important Dates
August 1, 2019 → August 15, 2019: Paper submission deadline
Please submit your papers/reports/demos via email: iros2019-vins-[email protected]
Organizers
Guoquan (Paul) Huang, University of Delaware
Shaojie Shen, Hong Kong University of Science and Technology
Michael Kaess, CMU
Stergios Roumeliotis, University of Minnesota
John Leonard, MIT
Program Committee
Kevin Eckenhoff, University of Delaware / Facebook
Chao Guo, Google
Shoudong Huang, University of Technology Sydney
Mingyang Li, Alibaba
Yasir Latif, University of Adelaide
Yong Liu, Zhejiang University
Agostino Martinelli, INRIA Rhône-Alpes
Benzun Pious Wisely Babu, Bosch
Yue Wang, Zhejiang University
Poster Papers
Dengshen Chen, Yuanlong Yu, and Xiang Gao: Semi-Supervised Deep Learning Framework
for Monocular Visual Odometry
Geoff Fink and Claudio Semini: Proprioceptive Sensor Dataset for Quadrupeds
Wanlong Li, Yu Tang, Chao Ding, Xueshi Li, and Feng Wen: Visual-Inertial Ego-Motion
Estimation using Rolling-Shutter Camera in Autonomous Driving
Jiajun Lyu, Jinhong Xu, Xingxing Zuo, and Yong Liu: An Efficient LiDAR-IMU Calibration
Method Based on Continuous-Time Trajectory
Yongseok Lee, Hanbyeol Yoon, Jinuk Heo, WonHa Lee, and Dongjun Lee: Wearable Visual-
Inertial Hand Tracking Interface Regardless of Environment and Occlusion
Patrick Geneva, Kevin Eckenhoff, Woosik Lee, Yulin Yang, and Guoquan Huang: OpenVINS:
A Research Platform for Visual-Inertial Estimation
Ziqiang Wang, Chengcheng Guo, Lin Zhao, Mei Li, and Xinyu Qi: Direct Sparse Visual-
Inertial Odometry with Stereo Cameras
He Zhang, Lingqiu Jin, and Cang Ye: A Depth-Enhanced Visual Inertial Odometry for a
Robotic Navigation Aid for Blind People
Joshua Jaekel and Michael Kaess: Robust Multi-Stereo Visual-Inertial Odometry
Program
9:00-9:05AM: Welcome and Introduction
9:05-9:45AM: Stergios Roumeliotis (UMN): A Short Tutorial on VINS
9:45-10:15AM: Davide Scaramuzza (Zurich): Visual Inertial SLAM: Current Status and the
Road Ahead
10:15-10:45AM: Laurent Kneip (ShanghaiTech): Dimensionality Reduction in Visual-Inertial
SLAM
10:45-11:15AM: Maurice Fallon (Oxford): VILENS - the Challenge of Visual Navigation on
Quadruped Robots
11:15-11:45AM: COFFEE BREAK
11:45-12:15PM: Luca Carlone (MIT): Chasing a Chimera: from VIN to real-time high-level
understanding
12:15-1:00PM: Poster Spotlight (5 minutes per paper)
1:00-2:00PM: LUNCH; Poster Setup
2:00-2:30PM: Guofeng Zhang (ZJU): Robust VI-SLAM and HD-Map Reconstruction for
Location-based Augmented Reality
2:30-3:00PM: Giuseppe Loianno (NYU): Challenges and Opportunities for Visual Inertial
Navigation of Aerial Robots
3:00-3:30PM: Michael Kaess (CMU): Robust Multi-Stereo Visual-Inertial Odometry
3:45-4:15PM: COFFEE BREAK
4:15-4:45PM: Ross Hartley (Amazon): Contact-aided Invariant Extended Kalman Filtering
for Legged Robot State Estimation
4:45-5:30PM: Poster Session
5:30-5:45PM: Concluding Remarks (incl. LORD Best Paper Award announcement)