
2014 IEEE International Conference on Robotics & Automation (ICRA)
Hong Kong Convention and Exhibition Center
May 31 - June 7, 2014. Hong Kong, China

Outlier Rejection for Visual Odometry using Parity Space Methods

Arun Das∗ and Steven L. Waslander†
University of Waterloo, Waterloo, ON, Canada, N2L 3G1

Abstract— Typically, random sample consensus (RANSAC) approaches are used to perform outlier rejection for visual odometry; however, the use of RANSAC can be computationally expensive. The parity space approach (PSA) provides a methodology to perform computationally efficient consistency checks on observations, without having to explicitly compute the system state. This work presents two outlier rejection techniques, Group Parity Outlier Rejection and Parity Initialized RANSAC, which use the parity space approach to perform rapid outlier rejection. Experiments demonstrate the proposed approaches are able to compute solutions with increased accuracy and improved run-time when compared to RANSAC.

I. INTRODUCTION

In situations where observed data is fitted to a parameterized model, the presence of outliers in the measurements can corrupt the solution. Many approaches exist which aim to attenuate the effect of outliers, or remove them from the measurement set altogether. Robust statistical methods, such as the L-estimator, M-estimator, R-estimator [1] and least median of squares [2], were proposed in order to reduce the effect of outliers on the final solution. Batch heterogeneous outlier removal algorithms have also been suggested for SLAM applications [3]; however, the approach is mainly limited to scenarios where the number of observed measurements is small.

In order to perform outlier rejection where many measurements are observed, the RANSAC algorithm [4] is typically used, and has been successful in many computer vision [5], visual odometry (VO) [6], [7] and SLAM [8], [9] applications. Many extensions of the RANSAC algorithm have been proposed, such as Maximum Likelihood Sample Consensus (MLESAC) [10], which assumes known probability distributions of the inliers and outliers to evaluate the sample hypothesis, and Locally Optimized RANSAC (LO-RANSAC) [11], where the maximum inlier set is refined at each iteration using a local optimization technique.

The general issue with the RANSAC approach is having to perform a sufficiently large number of iterations in order to achieve a model with a high confidence level. The number of required iterations can become large when the outlier ratio is high. The Randomized RANSAC (R-RANSAC) approach [12], [13] was suggested to reduce the number of iterations required. In the R-RANSAC approach, the model computed from the random sample set is first verified using a small subset of the measurements prior to evaluating the fitness cost using the full observation set. Although effective, R-RANSAC still requires the computation of the model parameters at every iteration, which is computationally expensive for VO and SLAM applications.

An alternative method for outlier detection is the parity space approach (PSA), which was first developed in order to perform fault detection and isolation for instrument clusters [14], but has also been applied to fault detection for nuclear power stations [15] and in flight avionics [16]. The PSA has also been applied in a visual SLAM formulation [17], where batches of image features are tested for outliers using a PSA consistency test. Using the measurement model of the system, the PSA projects the measurements into the parity space, where the presence of outliers can be detected. Since the projection onto the parity space is computed using only the measurement model, the state of the system is not required.

Visual odometry estimates the egomotion of the camera through examination of the changes that motion induces on the camera images. A typical VO approach is to use corresponding image features between successive camera frames and estimate the incremental motion, which can be successfully performed using both monocular [18], [19] and stereo cameras [7], [5], [20]. In order to provide an accurate motion estimate, feature correspondences should not contain outliers, and typically a rejection scheme such as RANSAC is used.

In this work, we propose two methods which use parity space consistency testing to perform outlier rejection, and demonstrate their effectiveness when used in a VO application. In the first method, Group Parity Outlier Rejection (GPOR), a parity space test is performed on subgroups of the measurement vector, and groups which fail the test are discarded. The second method, Parity Initialized RANSAC (PI-RANSAC), improves the RANSAC algorithm by performing a parity space consistency test on the randomly selected sample set. The parity space test is computationally efficient and is shown to generate accurate parameter estimates with significantly fewer iterations when compared to RANSAC.

The GPOR and PI-RANSAC methods are validated using the KITTI vision dataset [21]. The experiments demonstrate that the GPOR approach consistently outperforms RANSAC in terms of run-time while providing comparable accuracy in the solution, and the PI-RANSAC method is able to provide solutions with an average 44.45% increase in accuracy and an average 68.95% improvement in run-time when compared to RANSAC.

∗ PhD Student, Mechanical and Mechatronics Engineering, University of Waterloo; [email protected]
† Assistant Professor, Mechanical and Mechatronics Engineering, University of Waterloo; [email protected]

978-1-4799-3685-4/14/$31.00 ©2014 IEEE
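The sensitivity of RANSAC to the outlier ratio, noted in the introduction, can be made concrete with the standard iteration-count analysis (this is textbook RANSAC theory from [4], not part of this paper's contribution): to find at least one all-inlier sample with confidence p, roughly N = log(1 − p) / log(1 − w^s) iterations are needed, where w is the inlier fraction and s the minimal sample size. A minimal sketch:

```python
import math

def ransac_iterations(inlier_ratio, sample_size, confidence=0.99):
    """Iterations needed so that, with probability `confidence`,
    at least one sample is drawn entirely from the inliers."""
    p_good_sample = inlier_ratio ** sample_size
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_good_sample))

# The required iteration count grows quickly as the outlier ratio rises.
print(ransac_iterations(0.5, 3))   # 50% outliers, 3-point sample -> 35
print(ransac_iterations(0.3, 3))   # 70% outliers -> 169
```

This growth with the outlier ratio is exactly the cost that the parity space pre-checks in the following sections aim to avoid.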


II. PROBLEM FORMULATION

A. Visual Odometry Method

The visual odometry algorithm used in this work was initially developed by [6] and [7]. An open source implementation is available at http://www.cvlibs.net/software/libviso/. A brief overview of the approach is presented here.

To determine the egomotion of the stereo camera between two successive frames, feature detection and matching is performed between the images of the stereo pair at the previous time-step and the images of the stereo pair at the current time-step. Denote the ith 3D feature point from the previous time-step as di ∈ R3, where di consists of the x, y and z components of the point in the camera frame.

Given NY features from the previous time-step, denote the full set of 3D feature points as Y = {d1, ..., dNY}. Let the full state which defines the egomotion of the camera be denoted as x ∈ SE(3). Provided an estimate of the egomotion is available, each feature from the previous time-step can be transformed according to the egomotion estimate and re-projected back into the pixel co-ordinates of the stereo images for the current time-step. Denote the left camera reprojection mapping, Θx(d)l : R3 → R2, as

Θx(d)l = K(Tx(d)),   (1)

where K is the camera matrix and Tx(d) : R3 → R3 is the transformation which maps a point, d, from the left camera frame of the previous time-step to the left camera frame of the current time-step according to the egomotion state x. A re-projection mapping into the right camera image is similarly denoted as Θx(d)r.

Denote a measurement of the feature corresponding to a point, di ∈ Y, in the current left image as zdli ∈ R2, and similarly a measurement in the current right image as zdri ∈ R2. Since each corresponding feature results in two measurements from the left and right image, and each measurement contains two elements for the u and v pixel co-ordinates respectively, a single feature actually provides 4 independent measurements. Using the measured features of the current frame and the re-projected feature point locations from the previous frame, a cost function, ΛY(x) : SE(3) → R, which penalizes the reprojection error over the entire set of corresponding features from feature set Y, can be defined as

ΛY(x) = Σ_{di ∈ Y} ( ||zdli − Θx(di)l||² + ||zdri − Θx(di)r||² ).   (2)

Finally, the egomotion of the camera is estimated by optimizing the cost given by Equation (2) over the features contained in Y.

B. Outlier Rejection

Loosely speaking, an outlier can be defined as a measurement which, by some measure, is inconsistent with the majority of the measurements. In a typical RANSAC approach, a hypothesis model is constructed from a sample set of measurements randomly selected from the full measurement set. Each measurement is then compared to the hypothesis model and is added to an inlier consensus set if an application specific fitness measure is satisfied. In the case of visual odometry, the reprojection error for the feature must be less than a given threshold to be considered an inlier. The entire process of constructing hypothesis models from random samples and determining the inliers is repeated for a fixed number of iterations, while keeping track of the largest consensus set. A final model for the system can then be generated using the largest inlier consensus set.

III. PARITY SPACE FAULT DETECTION

In this section, a brief overview of the parity space fault detection methodology is presented. For a thorough discussion on parity space fault detection and isolation, the reader is directed to [22], [23]. Although the approach is demonstrated using a linear measurement model, it should be noted that a nonlinear measurement model, h(x), can be linearized about an operating point, x0, to generate an approximate linear model,

ŷ = H̄x,   (3)

where H̄ is the linearized measurement model, H̄ = ∂h/∂x |x0, and ŷ = y − h(x0) are the shifted measurements according to the operating point. Although the linearization introduces a state dependent measurement matrix to the parity space approach, the linearization error is negligible assuming the chosen operating point is sufficiently close to the true state [22]. In the case of egomotion estimation using visual odometry, performing the linearization assuming zero movement between successive frames is acceptable, provided the frame-rate of the camera is sufficiently high.

A. Parity Vector Generation

Suppose that k measurements are observed at any given time-step. For a system with n states, the measurement equation is defined as

y = Hx + e + f,   (4)

where y ∈ Rk is the measurement vector, H ∈ Rk×n is the measurement model, x ∈ Rn is the state vector, e ∈ Rk is the additive measurement noise vector, and f ∈ Rk is the fault vector, which models the error in the measurements under the assumption that the measurement set contains an outlier. If y contains no outliers, the fault vector f is the zero vector. The additive measurement noise, e, is assumed to be drawn from a Gaussian distribution with zero mean and covariance Q = σe Ik×k, where σe ∈ R is the measurement noise variance.

The parity space approach seeks to transform the measurements into a vector space, known as the parity space, where outlier rejection can be performed. The projection of the measurements into the parity space is a linear transformation defined by the matrix V ∈ R(k−n)×k, whose rows span the parity space.

For the parity space approach, it is required that V is an orthogonal matrix that is also orthogonal to H, or

V H = 0,   (5)
V V T = I(k−n).   (6)

Although any V which satisfies Equations (5) and (6) is sufficient, a simple method for computing V exists. Suppose the matrix W ∈ Rk×k is given as

W = I − H(H T H)−1 H T.   (7)

If W is post-multiplied by H, it is clear that Equation (5) is satisfied. To satisfy Equation (6), a Gram-Schmidt orthogonalization procedure is performed on W, resulting in an orthogonal matrix V. The parity vector, p ∈ R(k−n), is defined as the projection of the measurements, y, onto the parity space, and can be used to determine if the measurement vector contains an outlier. Once V is known, the parity vector is calculated as p = V y, and when substituted into Equation (4), the expression for the parity vector becomes p = V Hx + V e + V f. Due to Equation (5), the term V Hx is zero, and the state is removed from the problem. The resulting parity vector is

p = V e + V f.   (8)

With the noise assumptions E[e] = 0 and E[eeT] = Q, if there are no outliers in the data, then the parity vector is normally distributed as p ∼ N(0, V QV T). In order to model an inconsistency in the measurements, assume that the ith measurement is an outlier. The outlier can be modelled using the vector f by setting the ith component of f, fi, to a non-zero value. Then, the parity vector is distributed as p ∼ N(vi fi, V QV T), where vi is the ith column of V. Therefore, when an outlier is present, the mean of the parity vector is shifted by magnitude fi in the fault direction given by vi.

B. Outlier Detection Test

Under the assumption of no outliers in the measurement vector, the magnitude of the parity vector is generally small, as governed by the measurement noise Q. Conversely, when an outlier is present, the magnitude of the parity vector is dominated by the size of the fault component, fi. Thus, the magnitude of the parity vector can be used to construct an outlier detection test statistic, given as λ = pT p. The test statistic, λ, is chi-squared (χ2) distributed with k − n degrees of freedom; therefore, a χ2 test can be used to determine the presence of an outlier. Suppose a false alarm probability of α is desired. The critical threshold, δ, which satisfies the probability P(λ > δ) = α, can be determined using a χ2 distribution look-up table. Next, define the null hypothesis, H0, and the alternative hypothesis, H1, as

H0: y contains no outliers
H1: y contains an outlier

If λ ≤ δ, the measurements are consistent, there are no outliers present, and the null hypothesis is accepted. Otherwise, if λ > δ, the test indicates that there is an inconsistency in the measurements due to an outlier, and the null hypothesis is rejected.

C. Probability of Missed Detection

Define the probability of missed detection, Pm, as the probability that the outlier detection test declares no outliers are present when, in fact, the measurement vector contains an outlier. Generally, Pm is difficult to determine accurately, as it requires the integration of the parity vector distribution, which has been shifted by the fault vector fi vi, over the hypersphere defined by the detection test threshold, δ. However, an upper bound on Pm can be determined by marginalizing the parity vector distribution in the direction of the fault vector, resulting in a one-dimensional distribution which can be easily integrated. The upper bound on Pm is given as

Pm < ∫−δ..δ (1/(√(2π) σe)) exp(−(ρ − fi)²/(2σe²)) dρ.   (9)

Although this is an acceptable upper bound, in cases where the parity space dimension is relatively low ((k − n) = 2), it is possible to compute a tighter approximation for the upper bound by further integrating Equation (9) over the disk defined by the detection threshold, δ.

IV. PROPOSED APPROACHES

In addition to the detection of an outlier within the measurement vector, the isolation of the inconsistent measurement is also possible within the parity space framework [22], [23]. However, the approach is only well suited for the isolation of one outlier, whereas in most visual odometry applications, the removal of multiple outliers is necessary. As such, performing the parity space outlier isolation analysis is unsuitable, and a different approach is required to handle outliers from camera feature matches.

A. Group Parity Outlier Rejection

To remove multiple outliers from the measurement vector, the Group Parity Outlier Rejection (GPOR) strategy is proposed. A similar approach is discussed in [17]; however, the method of selection for the group is unclear. The GPOR strategy simply divides the measurement vector into groups of size g, and performs the parity space outlier detection test on each individual group. If the test fails, all features within the group are discarded. The selection of the group size is an important parameter, as a large group size will lead to a high rate of false positives and will discard many good measurements, while a group size that is too small will increase the likelihood of including a group that contains multiple outliers that happen to be in agreement. Excessive discarding of good measurements will also occur if the outlier ratio is large, and so GPOR can be thought of as conservative in its assessment of outliers. The method is therefore most applicable for data with reasonably low outlier ratios and allows for rapid outlier removal that scales linearly with the number of measurements.
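The parity space test of Section III and the GPOR grouping can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation: the measurement model H, noise level, group size, and injected fault are invented for the example, and V is obtained by orthonormalizing the projector W of Equation (7) via an SVD rather than explicit Gram-Schmidt (either yields a valid parity basis satisfying Equations (5) and (6)).

```python
import numpy as np
from scipy.stats import chi2

def parity_matrix(H):
    """Parity basis V with V @ H = 0 and V @ V.T = I (Eqs. 5-6)."""
    k, n = H.shape
    W = np.eye(k) - H @ np.linalg.inv(H.T @ H) @ H.T   # projector of Eq. (7)
    U = np.linalg.svd(W)[0]
    return U[:, :k - n].T          # rows span the left null space of H

def group_test(H, y, sigma_e, alpha=0.01):
    """Chi-squared parity consistency test: True if the group looks outlier-free."""
    p = parity_matrix(H) @ y       # parity vector, independent of the state
    lam = p @ p                    # test statistic lambda = p^T p
    delta = sigma_e**2 * chi2.ppf(1.0 - alpha, H.shape[0] - H.shape[1])
    return lam <= delta

# GPOR sketch: split the measurements into groups of g and keep passing groups.
rng = np.random.default_rng(1)
sigma_e, g, n = 0.01, 6, 2
x_true = np.array([1.0, -2.0])
keep = []
for group in range(4):
    H = rng.standard_normal((g, n))
    y = H @ x_true + rng.normal(0.0, sigma_e, g)
    if group == 0:
        y[2] += 1.0                # inject one gross outlier into group 0
    if group_test(H, y, sigma_e):
        keep.append(group)
print(keep)                        # group 0 should be discarded
```

Note how the state x_true never enters the test: the fault is detected purely from the projection of y onto the parity space.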

B. Parity Initialized RANSAC

Since the GPOR algorithm is best suited to measurements with a lower outlier ratio, another approach, such as RANSAC, is required for situations where a large percentage of the observations are corrupt. It is evident that the performance of the RANSAC algorithm is heavily dependent on the inlier to outlier ratio of the measurement data. When the outlier percentage is high, many RANSAC iterations are required in order to generate a correct model with sufficiently high certainty.

In certain applications, running the RANSAC algorithm for many iterations becomes computationally expensive. In the case of the presented visual odometry algorithm, a nonlinear optimization must be performed at every iteration in order to generate the candidate model from the random sample set. Furthermore, determination of the inliers using the pixel reprojection error is on the order of O(k). As the number of measurements, k, can become quite large in practice, performing fewer RANSAC iterations is desirable to improve computational performance.

In order to reduce the number of required RANSAC iterations without sacrificing solution quality, we propose the Parity Initialized RANSAC (PI-RANSAC) algorithm, where a parity space consistency check on the randomly selected measurement set is performed at each iteration. Denote the candidate sample set as S, where S is selected randomly from a uniform distribution over the entire 3D feature set, Y, such that S ∼ U(Y) and |S| = NS. Recall that each 3D point in S, di, is taken from the previous image-pair, and has an associated feature correspondence, zdi, in the current image-pair. To that end, define yS as the measurement vector containing only the feature correspondences according to the points from S, or

yS = [zdlu1 zdlv1 zdru1 zdrv1 ... zdluNS zdlvNS zdruNS zdrvNS]T.   (10)

Instead of using yS to calculate the model, as one would with a typical RANSAC approach, the PI-RANSAC algorithm performs a parity space consistency check to determine if yS contains an outlier. If the consistency check fails, another random sample set is selected. Once the consistency test is passed, the model is computed using yS, and the inlier set for the current iteration is determined. A feature is considered an inlier if the pixel reprojection error is less than a predefined threshold, ε. By using the parity based check on the random measurement sample, the computational burden of model computation and determination of the full inlier set at each iteration is replaced with a simple parity space test that is on the order of O(c³), where c is the number of sampled measurements. It is important to note that although the parity check scales cubically, mainly due to computation of the V matrix, in typical applications c << k. The entire process is repeated for a fixed number of iterations; upon termination, the algorithm returns the largest inlier set.

The effectiveness of the PI-RANSAC approach is dependent on the accuracy of the parity space test of the random sample set. If the parity check were 100% effective, meaning Pm = 0, only one PI-RANSAC iteration would be required. However, such an approach is only possible if the distribution of the fault vector is completely known, which is an unreasonable assumption in practice. In the case of outlier rejection for visual odometry, the magnitude of the fault, fi, is unknown. However, the larger the difference between the detection threshold and the expected value of the fault magnitude, the lower the probability of a missed detection.

Using the probability of a missed detection, it is possible to determine the probability that the PI-RANSAC algorithm will return a bad model as a function of the number of iterations. After performing j iterations, the probability of an incorrect model, Pb, is

Pb = (Pm^fi)^j,   (11)

where Pm^fi denotes the probability of a missed detection from the parity space test, given a fault vector magnitude of fi. Figure 1 illustrates the probability of returning an incorrect model as a function of the PI-RANSAC iterations performed, for a range of fault vector magnitudes. It can be seen that even when the expected value of the fault vector magnitude is equal to the parity space test detection threshold (fi = δ), the probability of returning an incorrect model is 0.0009 after only 10 iterations.

Fig. 1. Probability of the PI-RANSAC algorithm returning an incorrect model as a function of the number of performed iterations, with a detection threshold of δ = 6σe. Plots are shown for normalized fault vector magnitudes ranging from fi = δ to fi = 10σe.

V. EXPERIMENTAL RESULTS

In order to validate the proposed methods, a series of experiments are performed using camera data from the KITTI Vision Benchmark Dataset [21]. The image data is collected using two Point Grey Flea 2 cameras mounted on an automotive platform driving in urban environments. Ground truth of the vehicle motion is recorded by an OXTS RT 3003 integrated GPS/IMU, capable of providing position data at an accuracy of approximately 10 cm. The outlier rejection methods are applied to the LIBVISO2 implementation of the visual odometry method outlined in [7]. The mapping between the naming convention of the data sequences and the names as provided by the KITTI Dataset is shown in Table I.
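Stepping back to the analysis of Section IV-B: the missed-detection bound of Equation (9) and the iteration probability of Equation (11) can be checked numerically. The sketch below is illustrative only; it evaluates the Gaussian integral of Equation (9) via the error function, and reproduces the worst case quoted for Figure 1, where fi = δ = 6σe: roughly half of the faulty samples slip through each test, and the probability of an incorrect model after 10 iterations drops to about 0.001.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_missed(delta, f_i, sigma_e):
    """Bound of Eq. (9): mass of N(f_i, sigma_e^2) inside [-delta, delta]."""
    return phi((delta - f_i) / sigma_e) - phi((-delta - f_i) / sigma_e)

def p_bad_model(j, delta, f_i, sigma_e):
    """Eq. (11): probability that all j sampled sets pass despite the fault."""
    return p_missed(delta, f_i, sigma_e) ** j

sigma_e = 1.0
delta = 6.0 * sigma_e                         # detection threshold used in Fig. 1
pm = p_missed(delta, delta, sigma_e)          # worst case f_i = delta -> ~0.5
pb = p_bad_model(10, delta, delta, sigma_e)   # after 10 iterations -> ~0.001
print(round(pm, 3), round(pb, 4))
```

Larger fault magnitudes shrink Pm rapidly, which is why the curves in Figure 1 for fi > δ fall off far more steeply.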

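As a self-contained illustration of the PI-RANSAC loop (not the paper's stereo pipeline), the sketch below applies parity-gated sampling to a toy linear model y = ax + b corrupted by gross outliers. All data, thresholds, the line model, and the resample cap are invented for the example; as in Section IV-B, a failed consistency check triggers reselection of the sample, and no model is fit for that draw.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Toy data: y = 2x + 1 with small noise; 40% of points hit by gross outliers.
a_true, b_true, sigma_e = 2.0, 1.0, 0.05
x = rng.uniform(0.0, 10.0, 200)
y = a_true * x + b_true + rng.normal(0.0, sigma_e, 200)
bad = rng.random(200) < 0.4
y[bad] += rng.uniform(5.0, 20.0, bad.sum())

def parity_matrix(H):
    """Rows spanning the left null space of H (the parity space)."""
    k, n = H.shape
    W = np.eye(k) - H @ np.linalg.inv(H.T @ H) @ H.T
    U = np.linalg.svd(W)[0]
    return U[:, :k - n].T

def pi_ransac(x, y, iters=10, ns=6, eps=0.2, alpha=0.01, max_draws=5000):
    delta = sigma_e**2 * chi2.ppf(1.0 - alpha, ns - 2)
    best = np.zeros(len(x), dtype=bool)
    done = 0
    for _ in range(max_draws):       # cap on resampling, added for the sketch
        if done == iters:
            break
        idx = rng.choice(len(x), ns, replace=False)
        H = np.column_stack([x[idx], np.ones(ns)])
        p = parity_matrix(H) @ y[idx]
        if p @ p > delta:            # consistency check failed: resample,
            continue                 # skipping the (expensive) model fit
        done += 1
        a, b = np.linalg.lstsq(H, y[idx], rcond=None)[0]
        inliers = np.abs(y - (a * x + b)) < eps
        if inliers.sum() > best.sum():
            best = inliers
    H_all = np.column_stack([x[best], np.ones(best.sum())])
    return np.linalg.lstsq(H_all, y[best], rcond=None)[0]

a_est, b_est = pi_ransac(x, y)
print(round(a_est, 2), round(b_est, 2))   # close to (2.0, 1.0)
```

Because contaminated samples are rejected before any model is fit, 10 gated iterations suffice here despite the 40% outlier ratio, mirroring the behaviour reported for PR-10 in Section V.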
TABLE I: Sequence Name Mapping

Sequence Name   KITTI Dataset
S01             2011_09_26_drive_0067
S02             2011_09_30_drive_0016
S03             2011_09_30_drive_0020
S04             2011_09_30_drive_0027
S05             2011_09_30_drive_0033

Using the VO data, the Group Parity Outlier Rejection (GPOR) technique, as well as the PI-RANSAC algorithm performed for 10 iterations (PR-10), is compared to the RANSAC algorithm performed with 10, 50, 100 and 1000 iterations (denoted in the tables as R-10, R-50, R-100, R-1000). The incremental motion computed between frames using each outlier rejection technique is integrated over the test sequence to recover the egomotion of the camera, and is compared to the provided ground truth. To determine the effect which the number of features has on the outlier rejection techniques, the experiments are performed using small and large feature sets. The number of features is controlled by increasing the maximum number of allowable features maintained by the bucketing algorithm [6]. Other than the number of features and RANSAC iterations, all other parameters for the visual odometry are set to the default values. For the PI-RANSAC implementation, the detection threshold is set to δ = 0.5, and for the GPOR method the group size was set to g = 3. The results from the experiments for the small and large feature sets are presented in Tables II and III, respectively.

For the small feature count experiment, the PI-RANSAC approach computes the solution with the least average position error for all five test sequences. Although the average position error should generally decrease as the number of RANSAC iterations is increased, in some cases a large number of RANSAC iterations returns an accidental largest consensus set which does not accurately reflect the state of the system [24]. To that end, the PI-RANSAC approach is compared to the best solution discovered by RANSAC over 10, 50, 100 and 1000 iterations. In Tables II and III, the percent improvement of the PI-RANSAC solution over the best RANSAC solution is computed using the bold quantities. It is evident that the PI-RANSAC approach is very effective, as after only 10 iterations it was able to compute a solution with an average 44.45% increase in accuracy and an average 68.95% improvement in run-time compared to the best RANSAC solutions.

A similar trend is seen with the experiments performed on the large feature data set, with the exception of R-1000 for the S02 sequence, where the average position error for R-1000 is marginally better compared to PR-10. Significant computational benefit is seen for the PI-RANSAC method on the large feature sets, as in the best case a 97.19% speed-up was observed when compared to RANSAC. Although the GPOR method did not produce the solutions with the best accuracy relative to ground truth, the average position errors are indeed comparable to both PI-RANSAC and RANSAC, and it consistently achieved the best run-time in all test cases.

Fig. 2. Vehicle motion (x-z position, in metres) generated using visual odometry and outlier rejection techniques for S05, plotted against ground truth.

Example plots comparing the calculated vehicle motion trajectories using GPOR, R-10, and PR-10 for sequence S05 are presented in Figure 2.

One issue with evaluation of the integrated visual odometry motion is that a poor solution in the middle of the sequence of position estimates will affect all subsequent positions. Thus, the relative error between the frame-to-frame positions and the relative motion derived from the ground truth are computed, and summary statistics are presented in Table IV for both small and large feature counts. As the grouping for the GPOR method is based purely on the measurement vector, the test may have a tendency to reject a high percentage of good measurements when the outlier ratio is large. It is apparent that for the sequences with large feature counts, GPOR produced the solutions with the smallest average relative error. The result suggests that it is possible to use a large number of measurements with a low outlier ratio to attenuate the effect of feature deprivation. In such cases, a simple test such as GPOR can rapidly produce accurate solutions and can outperform RANSAC methods in both accuracy and runtime.

VI. CONCLUSION

This work presents two outlier rejection approaches, GPOR and PI-RANSAC, that are based within the parity space framework, and applies them to a stereo camera based VO application. The GPOR algorithm allows for rapid outlier rejection by performing parity space tests on subgroups of features of the measurement vector, resulting in a fast outlier removal strategy that scales linearly with the number of measurements. The PI-RANSAC algorithm improves upon RANSAC by performing a fast parity space consistency test on the random sample of measurements, ensuring that only consistent samples are evaluated for the maximum inlier set. Future work includes thorough analysis of the parity space approach with nonlinear models, evaluation using synthetic data with known outlier ratios, and application of the proposed methods to problems such as SLAM and point cloud registration.

TABLE II: Accuracy and runtime comparison using a small feature set (less than 300 features on average).

                     S01      S02      S03      S04      S05
avg. feature count   299      211      234      292      245
Average Position Error (m)
GPOR                 8.59     2.64     8.82     13.75    22.00
PR-10                7.39     0.89     4.39     3.43     19.81
R-10                 20.19    3.39     8.78     9.75     28.75
R-50                 14.36    2.03     9.53     9.05     27.63
R-100                12.14    3.17     7.95     10.04    26.04
R-1000               13.46    1.90     7.83     12.89    30.49
Improvement (%)      39.13    53.16    43.93    62.10    23.92
Average Run Time (ms)
GPOR                 0.95     0.70     0.74     0.85     0.79
PR-10                2.80     4.00     4.80     4.00     4.60
R-10                 1.30     1.00     1.00     1.30     1.00
R-50                 6.40     4.80     5.10     5.60     5.00
R-100                12.10    9.00     9.50     11.00    13.00
R-1000               68.70    30.00    40.00    70.60    56.00
Speed-Up (%)         76.86    86.67    88.00    28.57    64.62

TABLE III: Accuracy and runtime comparison using a large feature set (greater than 1000 features on average).

                     S01      S02      S03      S04      S05
avg. feature count   2070     1320     1231     1594     1408
Average Position Error (m)
GPOR                 3.18     3.09     6.24     12.23    22.75
PR-10                6.61     1.62     6.82     6.20     18.67
R-10                 14.31    3.56     14.66    10.72    23.29
R-50                 9.75     3.01     8.79     10.01    21.00
R-100                10.97    1.57     7.75     10.03    21.79
R-1000               9.26     1.56     10.70    7.95     21.60
Improvement (%)      28.62    -3.85    12.00    22.02    11.11
Average Run Time (ms)
GPOR                 6.50     4.10     3.70     5.10     4.50
PR-10                9.90     8.80     8.30     12.10    9.30
R-10                 8.20     5.40     5.00     6.30     5.60
R-50                 38.10    25.10    22.80    30.90    26.40
R-100                70.90    45.30    43.40    58.40    50.90
R-1000               352.60   188.60   161.60   341.30   256.30
Speed-Up (%)         97.19    95.33    80.88    96.45    64.77

TABLE IV: Comparison of average frame-by-frame relative error for both small and large average feature counts.

Average Relative Position Error (m), small feature set
                     S01      S02      S03      S04      S05
avg. feature count   299      211      234      292      245
GPOR                 0.049    0.090    0.084    0.060    0.082
PR-10                0.051    0.086    0.060    0.042    0.088
R-10                 0.091    0.103    0.097    0.061    0.115
R-50                 0.064    0.094    0.096    0.065    0.104
R-100                0.060    0.091    0.079    0.060    0.104
R-1000               0.061    0.085    0.078    0.072    0.103

Average Relative Position Error (m), large feature set
avg. feature count   2070     1320     1231     1594     1408
GPOR                 0.038    0.083    0.064    0.056    0.082
PR-10                0.051    0.096    0.087    0.058    0.083
R-10                 0.069    0.111    0.078    0.061    0.089
R-50                 0.052    0.091    0.094    0.064    0.102
R-100                0.052    0.073    0.083    0.061    0.117
R-1000               0.048    0.087    0.101    0.060    0.107

REFERENCES

[1] P. Huber, Robust Statistics. New York: Wiley, 1974.
[2] P. J. Rousseeuw, "Least median of squares regression," Journal of the American Statistical Association, vol. 79, no. 388, pp. 871–880, 1984.
[3] C. H. Tong and T. D. Barfoot, "Batch heterogeneous outlier rejection for feature-poor SLAM," in IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, May 2011, pp. 2630–2637.
[4] M. A. Fischler and R. C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, vol. 24, no. 6, pp. 381–395, June 1981.
[5] R. I. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed. Cambridge University Press, 2004.
[6] B. Kitt, A. Geiger, and H. Lategahn, "Visual odometry based on stereo image sequences with RANSAC-based outlier rejection scheme," in IEEE Intelligent Vehicles Symposium (IVS), San Diego, CA, June 2010, pp. 486–492.
[7] A. Geiger, J. Ziegler, and C. Stiller, "StereoScan: Dense 3D reconstruction in real-time," in IEEE Intelligent Vehicles Symposium (IVS), Baden-Baden, Germany, June 2011, pp. 963–968.
[8] B. Morisset, R. B. Rusu, A. Sundaresan, K. Hauser, M. Agrawal, J.-C. Latombe, and M. Beetz, "Leaving flatland: Toward real-time 3D navigation," in IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan, May 2009, pp. 3786–3793.
[9] G. Klein and D. Murray, "Parallel tracking and mapping for small AR workspaces," in IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Nara, Japan, November 2007, pp. 225–234.
[10] P. H. S. Torr and A. Zisserman, "MLESAC: a new robust estimator with application to estimating image geometry," Journal of Computer Vision and Image Understanding, vol. 78, no. 1, pp. 138–156, 2000.
[11] O. Chum, J. Matas, and S. Obdrzalek, "Enhancing RANSAC by generalized model optimization," in Asian Conference on Computer Vision (ACCV), Jeju, Korea, January 2004, pp. 812–817.
[12] J. Matas and O. Chum, "Randomized RANSAC with sequential probability ratio test," in IEEE International Conference on Computer Vision (ICCV), Beijing, China, October 2005, pp. 1727–1732.
[13] O. Chum and J. Matas, "Randomized RANSAC with T(d,d) test," in British Machine Vision Conference (BMVC), Cardiff, UK, September 2002, pp. 448–457.
[14] J. E. Potter and M. C. Suman, "Threshold-less redundancy management with arrays of skewed instruments," AGARD, Tech. Rep. AGARDOGRAPH-224 (pp. 15–25), 1977.
[15] A. Ray, M. Desai, and J. Deyst, "Fault detection and isolation in a nuclear reactor," Journal of Energy, vol. 7, no. 1, pp. 79–85, 1983.
[16] S. Hall, P. Motyka, E. Gai, and J. J. Deyst, "In-flight parity vector compensation for FDI," IEEE Transactions on Aerospace and Electronic Systems, vol. AES-19, no. 5, pp. 668–676, 1983.
[17] D. Tornqvist, T. Schon, and F. Gustafsson, "Detecting spurious features using parity space," in IEEE International Conference on Control, Automation, Robotics and Vision (ICARCV), Hanoi, Vietnam, December 2008, pp. 353–358.
[18] D. Nister, O. Naroditsky, and J. Bergen, "Visual odometry for ground vehicle applications," Journal of Field Robotics, vol. 23, no. 1, pp. 3–20, 2006.
[19] J.-P. Tardif, Y. Pavlidis, and K. Daniilidis, "Monocular visual odometry in urban environments using an omnidirectional camera," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Nice, France, September 2008, pp. 2531–2538.
[20] A. Comport, E. Malis, and P. Rives, "Accurate quadrifocal tracking for robust 3D visual odometry," in IEEE International Conference on Robotics and Automation (ICRA), San Diego, CA, October 2007, pp. 40–45.
[21] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets robotics: The KITTI dataset," International Journal of Robotics Research, vol. 32, no. 11, pp. 1231–1237, 2013.
[22] F. Gustafsson, "Statistical signal processing approaches to fault detection," Annual Reviews in Control, vol. 31, no. 1, pp. 41–54, 2007.
[23] F. Gustafsson, Adaptive Filtering and Change Detection. London: Wiley, 2000.
[24] Q. Fan, "Matching slides to presentation videos," Ph.D. dissertation, Department of Computer Science, University of Arizona, 2008.

