
Robot Localization and Mapping

Peter Karkus
23 March 2021
Localization: Where am I?

Mapping: What is around me?

Planning: What should I do?


Robot Localization and Mapping

Localization: estimate robot location given a map

Mapping: build map given robot location

SLAM: simultaneously estimate robot location and build map

Map: information about the environment, spatially organized


Outline

• Robot localization

• SLAM

• Deep learning results


The robot localization problem
What we have:
• $u_{1:t} = \{u_1, u_2, u_3, \ldots, u_t\}$ -- robot's controls
• $z_{1:t} = \{z_1, z_2, z_3, \ldots, z_t\}$ -- sensor observations
• $m$ -- map of the environment
• $b(s_0)$ -- initial belief

What we want:
• $x_{0:t} = \{x_0, x_1, x_2, \ldots, x_t\}$ -- robot path
• or only $x_t$ -- robot pose
The robot localization problem
Sensor:
• LIDAR, sonar, RGB camera, depth camera, radio sensor, GPS, etc.
Map representation:
• occupancy grid, floor map, landmarks, point cloud, graph, sketch, etc.
Robot location:
• 2.5D pose, 6D pose, other state
Problem setting:
• tracking / global localization / “kidnapped” robot
• static environment / dynamic environment
• active localization / passive localization
Probabilistic formulation
What we want:
$p(x_t \mid z_{1:t}, u_{1:t}, m) = b(x_t)$

Bayes filter:
$b(x_t) = \eta \, p(z_t \mid x_t, m) \int p(x_t \mid x_{t-1}, u_t) \, b(x_{t-1}) \, dx_{t-1}$
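As a rough sketch of how this recursion is approximated with particles (Monte Carlo localization), the step below carries a set of weighted pose samples through one prediction and one correction; `motion_model` and `measurement_likelihood` are hypothetical stand-ins for the models introduced on the following slides.

```python
import numpy as np

def bayes_filter_step(particles, weights, u_t, z_t, grid_map,
                      motion_model, measurement_likelihood):
    """One Bayes filter update, approximated with K weighted particles.

    particles: (K, 3) array of poses (x, y, theta)
    weights:   (K,) array summing to 1
    motion_model(particles, u_t) -> predicted particles (samples from p(x_t | x_{t-1}, u_t))
    measurement_likelihood(z_t, particles, grid_map) -> (K,) values of p(z_t | x_t, m)
    """
    # Prediction: sample x_t ~ p(x_t | x_{t-1}, u_t)
    particles = motion_model(particles, u_t)
    # Correction: reweight by the measurement likelihood p(z_t | x_t, m)
    weights = weights * measurement_likelihood(z_t, particles, grid_map)
    weights /= np.sum(weights)  # eta: normalization constant
    return particles, weights
```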
Robot localization algorithms

• Kalman filtering

• Histogram filtering

• Particle filtering (Monte-Carlo localization)


Particle filter localization
Sensor:
• LIDAR, sonar, RGB camera, depth camera, radio sensor, GPS, etc.
Map representation:
• occupancy grid, floor map, landmarks, point cloud, graph, sketch, etc.
Robot location:
• 2.5D pose, 6D pose, other state
Problem setting:
• tracking / global localization / “kidnapped” robot
• static environment / dynamic environment
• active localization / passive localization
Occupancy grid map
(Figure: grid cells marked occupied vs. free; University of Freiburg)


LIDAR sensor
Sensor noise

Figures from Wikipedia, Probabilistic Robotics


Particle filter localization
Measurement model (beam model)
We want: $p(z_t \mid x_t, m)$
Ray casting in the map gives the expected distance $z_t^{k*}$ for each beam.
Assume laser beams are independent.
For each beam $k$: compare the measured range $z_t^k$ against the expected distance $z_t^{k*}$.

Figures from Probabilistic Robotics
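A hedged sketch of the standard beam-model likelihood from Probabilistic Robotics (Ch. 6); ray casting is omitted, and the mixture weights and noise parameters below are arbitrary illustration values, not from the slides.

```python
import numpy as np

def beam_model_likelihood(z, z_star, z_max=10.0,
                          sigma_hit=0.2, lambda_short=0.5,
                          w_hit=0.8, w_short=0.1, w_max=0.05, w_rand=0.05):
    """p(z_t | x_t, m) for one scan, assuming independent beams.

    z:      (K,) measured beam ranges
    z_star: (K,) expected ranges z_t^{k*} from ray casting at pose x_t
    Each beam likelihood is a mixture of four components: hit, short, max, random.
    """
    # Gaussian around the expected range (measurement noise)
    p_hit = np.exp(-0.5 * ((z - z_star) / sigma_hit) ** 2) / (sigma_hit * np.sqrt(2 * np.pi))
    # Exponential for unexpected objects in front of the expected obstacle
    p_short = np.where(z <= z_star, lambda_short * np.exp(-lambda_short * z), 0.0)
    # Point mass at max range (no return / sensor failure)
    p_max = np.where(np.isclose(z, z_max), 1.0, 0.0)
    # Uniform random noise over the measurement range
    p_rand = np.where(z < z_max, 1.0 / z_max, 0.0)

    p_beam = w_hit * p_hit + w_short * p_short + w_max * p_max + w_rand * p_rand
    # Beams are independent, so the scan likelihood is the product over beams
    return np.prod(p_beam)
```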


Motion model
We want: $p(x_t \mid x_{t-1}, u_t)$

Robot moves from $(\bar{x}, \bar{y}, \bar{\theta})$ to $(\bar{x}', \bar{y}', \bar{\theta}')$.
Odometry information: $u_t = (\delta_{\mathrm{rot1}}, \delta_{\mathrm{trans}}, \delta_{\mathrm{rot2}})$

Figures from Probabilistic Robotics
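A minimal sketch of sampling from the odometry motion model, following the form in Probabilistic Robotics; the noise parameters `alphas` are assumed placeholder values that would be tuned per robot.

```python
import numpy as np

def sample_odometry_motion_model(particles, u, alphas=(0.1, 0.1, 0.05, 0.05)):
    """Sample x_t ~ p(x_t | x_{t-1}, u_t) under the odometry motion model.

    particles: (K, 3) poses (x, y, theta)
    u: (delta_rot1, delta_trans, delta_rot2) computed from consecutive odometry poses
    """
    d_rot1, d_trans, d_rot2 = u
    a1, a2, a3, a4 = alphas
    K = len(particles)

    # Perturb each motion component with zero-mean Gaussian noise
    rot1 = d_rot1 + np.random.randn(K) * np.sqrt(a1 * d_rot1**2 + a2 * d_trans**2)
    trans = d_trans + np.random.randn(K) * np.sqrt(a3 * d_trans**2 + a4 * (d_rot1**2 + d_rot2**2))
    rot2 = d_rot2 + np.random.randn(K) * np.sqrt(a1 * d_rot2**2 + a2 * d_trans**2)

    # Apply the noisy motion to every particle
    x, y, theta = particles[:, 0], particles[:, 1], particles[:, 2]
    new = np.empty_like(particles)
    new[:, 0] = x + trans * np.cos(theta + rot1)
    new[:, 1] = y + trans * np.sin(theta + rot1)
    new[:, 2] = theta + rot1 + rot2
    return new
```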


Particle filter localization
Issues
• Need to choose the number of particles $K$
• Hard to recover from failures (kidnapped robot); see the resampling sketch below
• Noise is not time-invariant
• Sensor cannot be too accurate, or particle weights collapse
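A minimal sketch of low-variance (systematic) resampling with a small fraction of particles re-injected uniformly at random, which is one common way to let the filter recover from localization failure; `frac_random` and `bounds` are assumptions for illustration, not part of the lecture.

```python
import numpy as np

def low_variance_resample(particles, weights, frac_random=0.02, bounds=None):
    """Systematic resampling; optionally inject a few uniform random particles.

    particles: (K, D) states, weights: (K,) normalized weights
    bounds: (lo, hi) arrays of length D defining the map extent for random injection
    """
    K = len(particles)
    positions = (np.arange(K) + np.random.uniform()) / K
    idx = np.searchsorted(np.cumsum(weights), positions)
    new_particles = particles[idx].copy()

    # Replace a small fraction with uniformly sampled states (kidnapped-robot recovery)
    if frac_random > 0 and bounds is not None:
        n_rand = max(1, int(frac_random * K))
        lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
        new_particles[:n_rand] = lo + np.random.rand(n_rand, particles.shape[1]) * (hi - lo)

    weights = np.full(K, 1.0 / K)  # reset weights to uniform after resampling
    return new_particles, weights
```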
Simultaneous Localization and Mapping (SLAM)
The SLAM problem
What we have:
• $u_{1:t} = \{u_1, u_2, u_3, \ldots, u_t\}$ -- robot's controls
• $z_{1:t} = \{z_1, z_2, z_3, \ldots, z_t\}$ -- sensor observations
• $b(s_0)$ -- initial belief

What we want:
• $x_t$ -- robot path
• $m$ -- map of the environment
Probabilistic formulation
What we want:
$p(x_t, m \mid z_{1:t}, u_{1:t})$

Distribution over maps:
Each grid cell $m_i$ is a binary random variable with occupancy probability $p(m_i)$ (e.g. 0.0, 0.5, 1.0).

$p(m) = p(m_1, m_2, \ldots, m_N) = \prod_i p(m_i)$

Assumes
• Environment is static
• Cells are independent
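A tiny worked example of the independence assumption: the probability of one particular map hypothesis is the product of its per-cell probabilities. The numbers below are made up for illustration.

```python
import numpy as np

# A 2x2 occupancy grid: each cell m_i is a Bernoulli variable, stored as p(m_i = occupied)
p_occupied = np.array([[0.9, 0.1],
                       [0.5, 0.0]])

# Probability of the map where only the top-left cell is occupied:
# product over cells of p(m_i) or 1 - p(m_i), depending on the hypothesis
map_hypothesis = np.array([[1, 0],
                           [0, 0]])
p_map = np.prod(np.where(map_hypothesis == 1, p_occupied, 1.0 - p_occupied))
print(p_map)  # 0.9 * 0.9 * 0.5 * 1.0 = 0.405
```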
Why is SLAM hard?
“Chicken or egg” problem
• Need a map to localize
• Need location to build map

Data association problem

Loop closure problem


SLAM algorithms

• Filtering-based algorithms

• Optimization-based algorithms
Particle SLAM
What we want: $p(x_t, m \mid z_{1:t}, u_{1:t})$
Each particle can represent a location + map. But the state space is huge!
Idea: factorized state space (Rao-Blackwellized particle filter)
• Particle state: $x_t$ pose sample; $m$ parametric map distribution
• Map update as if the location were correct
• Measurement update as if the map were correct
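A minimal sketch of one Rao-Blackwellized particle SLAM step under these assumptions; `motion_model`, `measurement_likelihood` and `inverse_sensor_model` are hypothetical stand-ins for the models on the surrounding slides, and resampling is omitted.

```python
import numpy as np

class SlamParticle:
    """One particle: a pose sample plus a parametric map (occupancy grid in log-odds)."""
    def __init__(self, pose, map_shape):
        self.pose = pose                     # sampled x_t
        self.log_odds = np.zeros(map_shape)  # per-cell map distribution
        self.weight = 1.0

def particle_slam_step(particles, u_t, z_t, motion_model,
                       measurement_likelihood, inverse_sensor_model):
    for p in particles:
        # 1. Sample a new pose from the motion model
        p.pose = motion_model(p.pose, u_t)
        # 2. Measurement update, as if the particle's map were correct
        p.weight *= measurement_likelihood(z_t, p.pose, p.log_odds)
        # 3. Map update, as if the particle's pose were correct
        p.log_odds += inverse_sensor_model(z_t, p.pose, p.log_odds.shape)
    total = sum(p.weight for p in particles)
    for p in particles:
        p.weight /= total
    return particles  # resampling step omitted for brevity
```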
Particle SLAM
Map update
Treat each cell independently:
$\mathbf{m}_t^k = p(m \mid z_{1:t}, x_{1:t}) = \prod_i p(m_i \mid z_{1:t}, x_{1:t})$

Inverse sensor model: $p(m_i \mid z_t, x_t)$
(Figure courtesy of C. Stachniss)

Binary Bayes filter update:

$\frac{p(m_i \mid z_{1:t}, x_{1:t})}{1 - p(m_i \mid z_{1:t}, x_{1:t})} = \frac{p(m_i \mid z_t, x_t)}{1 - p(m_i \mid z_t, x_t)} \cdot \frac{p(m_i \mid z_{1:t-1}, x_{1:t-1})}{1 - p(m_i \mid z_{1:t-1}, x_{1:t-1})} \cdot \frac{1 - p(m_i)}{p(m_i)}$

(what we want = inverse sensor model × last step × prior)
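Taking the logarithm of the odds product above turns the update into a sum, which is how occupancy grids are usually maintained in practice; a minimal sketch:

```python
import numpy as np

def log_odds_update(l_prev, l_inv_sensor, l_prior=0.0):
    """Binary Bayes filter for occupancy grids in log-odds form:
    l_t(m_i) = l_{t-1}(m_i) + inverse_sensor_model(m_i, z_t, x_t) - l_0(m_i)
    """
    return l_prev + l_inv_sensor - l_prior

def log_odds_to_prob(l):
    """Recover p(m_i | z_{1:t}, x_{1:t}) from its log-odds value."""
    return 1.0 - 1.0 / (1.0 + np.exp(l))
```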
Particle SLAM

Magicc Lab, Brigham Young University


Modern Visual 3D SLAM

T. Whelan et al. Deformation-based Loop Closure for Large Scale Dense RGB-D SLAM, IROS, 2013
Deep learning results
1. Particle Filter Networks
Localization challenge: RGB camera

Abstract 2D floor map vs. images of a rich 3D world


Can we use a particle filter?

We don't have a measurement model for RGB images!


Can we use deep learning?

Floor map

Robot location

RGB image
Can we use deep learning?

Training:
House3D dataset, 200 environments, 45k trajectories

Testing:
in unseen environments
Can we use deep learning?

Floor map

Robot location

RGB image

It is difficult to learn multi-modal belief tracking.


Idea: encode particle filter in neural network
Idea: encode particle filter in neural network

Transition Measurement
model model

Floor map

Particle Filter

Robot location

RGB image
Particle Filter Networks (PF-net)

Transition Measurement
model model

Floor map

Particle Filter

Robot location

RGB image

P. Karkus, D. Hsu, W.S. Lee. Particle Filter Networks with Application to Visual Localization. CoRL 2018
Learned measurement model

(Architecture figure: the observation $o_t$, a 56x56x3 image, is encoded by a CNN into 14x14x16 features; a spatial transform crops a 28x28 local map from the global map $M$ at the particle pose $s_t^k$, and a CNN encodes it into 14x14x8 features; the concatenated 14x14x24 features pass through a CNN and LFC/FC layers to produce the particle weight $l_t^k$.)
P. Karkus, D. Hsu, W.S. Lee. Particle Filter Networks with Application to Visual Localization. CoRL 2018
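A rough PyTorch sketch of the structure in the figure above; the layer sizes, module names, and the weight head are assumptions read off the figure, not the released PF-net code.

```python
import torch
import torch.nn as nn

class LearnedMeasurementModel(nn.Module):
    """Maps (observation o_t, local map at particle pose s_t^k) to a log particle weight l_t^k."""
    def __init__(self):
        super().__init__()
        self.obs_cnn = nn.Sequential(                  # encode the 56x56x3 camera image o_t
            nn.Conv2d(3, 16, 5, stride=4, padding=2), nn.ReLU())
        self.map_cnn = nn.Sequential(                  # encode the 28x28 local map crop
            nn.Conv2d(1, 8, 5, stride=2, padding=2), nn.ReLU())
        self.joint = nn.Sequential(                    # combine features and regress a weight
            nn.Conv2d(16 + 8, 16, 3, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(16 * 14 * 14, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, obs, local_map):
        # obs: (K, 3, 56, 56) image repeated per particle
        # local_map: (K, 1, 28, 28) map crop, spatially transformed to the particle pose
        f_obs = self.obs_cnn(obs)                      # (K, 16, 14, 14)
        f_map = self.map_cnn(local_map)                # (K, 8, 14, 14)
        return self.joint(torch.cat([f_obs, f_map], dim=1)).squeeze(-1)  # (K,) log weights
```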
Particle Filter Networks (PF-net)

P. Karkus, D. Hsu, W.S. Lee. Particle Filter Networks with Application to Visual Localization. CoRL 2018
2. Differentiable SLAM-net
Differentiable SLAM-net

(Architecture figure: RGB(D) input → differentiable SLAM module with learned mapping, transition, and observation models → map & pose estimate → weighted D* planner → subgoal → controller → robot action)
Differentiable SLAM: learning particle SLAM for downstream visual navigation. P. Karkus, S. Cai, D. Hsu. CVPR, 2021 (to appear)
Habitat challenge
SLAM-net results

Differentiable SLAM: learning particle SLAM for downstream visual navigation. P. Karkus, S. Cai, D. Hsu. CVPR, 2021 (to appear)
SLAM-net results

CVPR Habitat challenge leaderboard


Spot navigation (ongoing work)
Open questions
• Dynamic environments and relocalization
• SLAM + decision making
• Semantic maps
• Metric vs. topological maps
• Long-term autonomy
• Multi-robot SLAM
References
• Probabilistic Robotics (Chapters 6 & 8).
  S. Thrun, W. Burgard, D. Fox. MIT Press, 2005.

• FastSLAM 2.0: An improved particle filtering algorithm for simultaneous localization and mapping that provably converges.
  M. Montemerlo, S. Thrun, D. Koller, B. Wegbreit. IJCAI, 2003.

• Particle Filter Networks with Application to Visual Localization.
  P. Karkus, D. Hsu, W.S. Lee. Conference on Robot Learning (CoRL), 2018.

• Differentiable SLAM: learning particle SLAM for downstream visual navigation.
  P. Karkus, S. Cai, D. Hsu. CVPR, 2021 (to appear).
Thank you
