
Master’s Thesis
Some title that describes the master’s thesis
by

C. Theunisse
to be defended on ... .

Student number: 5072379


Project duration: November 13, 2023 – February 19, 2024
Supervisors: Dr. C. Pek, TU Delft, supervisor
Dr. L. J. Slooten, ALTEN, supervisor
Abstract

— TODO: ....... —

Contents

1 Real World struggles
  1.1 Trajectory planner and follower
    1.1.1 Implement a trajectory follower
    1.1.2 Account for uncertainty in the trajectory planner
  1.2 Field of view approximation
  1.3 Localization: EKF filter
    1.3.1 IMU
    1.3.2 Odometry
2 Estimating uncertainties - Uncertainty framework - Error modelling
  2.1 Localization error (true state - estimated state)
  2.2 Trajectory following error (estimated state - trajectory state)
  2.3 FOV uncertainty
    2.3.1 Uncertainty Lidar
    2.3.2 Delay Lidar
    2.3.3 Pose uncertainty
  2.4 — TODO: —
    2.4.1 Model fitting
3 Error Modeling and Calibration
  3.1 State errors
    3.1.1 Error in estimated state
    3.1.2 Error in trajectory following
  3.2 Field of View (FOV)
    3.2.1 Lidar Uncertainty
    3.2.2 Delays Lidar/FOV uncertainty
  3.3 Final uncertainty discussion

Chapter 1

Real World struggles

1.1 Trajectory planner and follower


No trajectory follower was needed in simulation, so this component is completely new. The error between the planned trajectory and the actually driven trajectory needs to be accounted for.

1.1.1 Implement a trajectory follower


The trajectory from Foresee the Unseen is a list of states: positions, orientations, velocities and time steps. The trajectory follower cannot consider all of them; the controller should track the path and the velocities.
If the velocities are tracked well, the error on the time steps is limited. With a certain planning horizon, the vehicle always ends at a standstill. An upper limit on the error on the time steps / distance can be calculated given an error on the velocity. Accelerations should be limited to be able to give an upper limit on the velocity error.
If the path is tracked well, the error on the positions and orientations remains limited. Both errors can be determined by testing. Again, certain constraints should be defined: a minimum corner radius and possibly bounded accelerations.

Which trajectory follower? This is not the main goal, so an existing software package was preferred. No ROS 2 package solves the problem of just following a trajectory, probably because this is robot specific; most packages focus on trajectory planning instead. An implementation from the PythonRobotics repository [1] is used: https://github.com/AtsushiSakai/PythonRobotics?tab=readme-ov-file#path-tracking.
Velocity control is somewhat difficult, since the battery voltage is not constant and no sensor measures it. Motor control is based on a PWM signal, so a lower battery voltage needs a higher PWM signal to reach the same speed.

• MPC requires a model; the changing battery voltage might invalidate the model, and especially nonlinear MPC requires considerable computational power.

• LQR uses the derivative of the error; the discrete position updates from the Lidar localization will introduce high derivatives.

• Stanley control has only two parameters to tune and no derivatives. It separates velocity and steering control, so a velocity PID can account for the battery voltage problem. Easy to understand.

The steering angle delta needs to be converted to a difference between the motor commands. Show the calculations -> the conversion does not depend on the velocity (it is proportional).
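
To make the chosen approach concrete, the sketch below shows a Stanley-style steering law and a separate velocity PID. This is a minimal illustration, not the exact controller used on the robot, and the gains (k, kp, ki, kd) are hypothetical placeholders that would have to be tuned.

```python
import numpy as np

def stanley_steering(heading_error, cross_track_error, v, k=0.5, eps=1e-3):
    """Stanley law: steering angle = heading error + arctan(k * e / v)."""
    return heading_error + np.arctan2(k * cross_track_error, v + eps)

class VelocityPID:
    """PID on the measured velocity; the integral term can absorb the
    slowly varying offset caused by the dropping battery voltage."""
    def __init__(self, kp=1.0, ki=0.5, kd=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, v_ref, v_measured, dt):
        error = v_ref - v_measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```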

Desired accuracy:

• lateral direction: should stay within lane

• longitudinal direction: account for a certain error

• account for velocity error


1.1.2 Account for uncertainty in the trajectory planner


To make this work in the real world -> change the corner smoothing.

This does not involve the orientation error -> ensure / prove that the ego vehicle stays within the lane boundaries. Different start/stop scenarios. Choose the lane width, velocity etc. such that the vehicle stays in the lane.
According to [2], the average passenger car width in Europe is 1802 mm. Lane width on Dutch 80 km/h roads: 2.75 m (https://www.publicspaceinfo.nl/media/uploads/files/PROVZUIDHO_2012_0004_1.pdf). US highways: 3.7 m (https://en.wikipedia.org/wiki/Interstate_Highway_standards). German highways: 3.5 m (https://en.wikipedia.org/wiki/Lane).

1.2 Field of view approximation


No obstacle tracking is needed – Foresee the Unseen uses the same approach. The area under and behind detected obstacles is part of the occluded area and should therefore be tracked. This area can have any velocity and any orientation (given some constraints), which will also include the actual vehicle.
The FOV is taken from the Lidar point cloud – no uncertainty from the datmo module – this also accounts for non-rectangular shapes and does not rely on the way datmo tracks these shapes. – Uncertainties caused by turning need to be accounted for.

Use the old pose / transform for the FOV

1.3 Localization: EKF filter


1.3.1 IMU
Covariance matrices are needed for the sensor data. The IMU datasheet does not provide the variances. Therefore, the IMU values were recorded for 12 minutes and the variances were estimated from the data. However, especially the orientation (integrated by the Madgwick algorithm CITATION) tends to drift, which makes the plain variance calculation unreliable. Instead, the mean square successive difference (MSSD) [3] is used, which gives an estimate of the variance in the presence of drift by assuming that subsequent measurements have the same mean. The values are given in table ??.
However, an accurate variance is needed for a 0.5 second interval, which involves 40 measurements. Therefore, the variances (and thus the means) are simply calculated over these intervals, and the resulting standard deviations are averaged in the proper way, as the root mean square: sqrt(sum of all N variances / N). This is compared to the overall standard deviation using the r-ratio [4], which quantifies the amount of drift; 1.0 means no drift. -> This assumes proper tuning etc. -> It is just a lower bound, because too fast movements, improper tuning etc. introduce extra error.
The angular velocity and linear acceleration also drift, but this is not accounted for, so the variance is taken over the whole interval. Current operations only use the robot for at most a few minutes, so the variance of a 12.5 min dataset is sufficient. This could be solved by recalibrating at standstill.
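
A sketch of the two statistics mentioned above, assuming the recorded IMU samples are available as a 1-D numpy array:

```python
import numpy as np

def mssd_variance(x):
    """Mean square successive difference estimate of the variance [3].
    Robust to slow drift because subsequent samples have (almost) the
    same mean: E[(x[i+1] - x[i])^2] = 2 * sigma^2 for i.i.d. noise."""
    diffs = np.diff(x)
    return np.sum(diffs ** 2) / (2 * (len(x) - 1))

def r_ratio(x):
    """Ratio of the MSSD variance estimate to the plain sample variance
    [4]; close to 1.0 for drift-free data, smaller when drift inflates
    the sample variance."""
    return mssd_variance(x) / np.var(x, ddof=1)
```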

1.3.2 Odometry
Determine the uncertainty on the wheel encoders and convert it to a position and orientation uncertainty. The error on the wheel encoders is ... — TODO: The error on the wheel encoders is not the only problem; there is also the rotational position. —
The left wheel is the least accurate.
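
As a sketch of the conversion above, standard differential-drive kinematics with first-order variance propagation is shown below; the wheel radius and track width are hypothetical values, not the robot's actual parameters.

```python
WHEEL_RADIUS = 0.033  # [m], hypothetical
TRACK_WIDTH = 0.160   # [m], hypothetical distance between the wheels

def wheel_to_body_velocity(omega_left, omega_right):
    """Differential-drive kinematics: wheel speeds [rad/s] to the linear
    and angular velocity of the robot."""
    v = WHEEL_RADIUS * (omega_right + omega_left) / 2.0
    omega = WHEEL_RADIUS * (omega_right - omega_left) / TRACK_WIDTH
    return v, omega

def wheel_to_body_variance(var_left, var_right):
    """First-order propagation of the wheel-speed variances; a less
    accurate left wheel simply shows up as var_left > var_right."""
    var_v = (WHEEL_RADIUS / 2.0) ** 2 * (var_left + var_right)
    var_omega = (WHEEL_RADIUS / TRACK_WIDTH) ** 2 * (var_left + var_right)
    return var_v, var_omega
```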
Chapter 2

Estimating uncertainties - Uncertainty framework - Error modelling

The uncertainties are split up for each part of the software package. The resulting overall uncertainty can be found by integrating or by Monte Carlo simulations.

The following sources of uncertainty are identified and should be accounted for in the trajectory planning: ... — TODO: Introduction with graph that presents the sources of the uncertainties. —

For each uncertainty, a plan/method is described to determine it (try to find some sources). Subsequently, (a simplified version of) this method is applied to estimate the specific uncertainty.

Uncertainty consists of two sources (accuracy vs. precision): biases (caused by systematic errors), expressed as the mean error (µ_e), and imprecision (caused by random errors), expressed as the standard deviation (σ_e) or variance (σ²_e) of the error. Uncertainties can be fixed, proportional, or a combination of both, as shown in figure ??. Biases are typically observed in sensor measurements and can be removed by calibration: the estimation of these biases and the correction of the measurements for them.

2.1 Localization error (true state - estimated state)


Find by using / analyzing the Kalman filter output » maybe use the control inputs.

Error/uncertainty estimation of the SLAM: the uncertainty in the SLAM localization is reported by the SLAM software package and is assumed to be correct, since estimating it is outside the scope of this project.

The uncertainty between these global updates increases over time and is calculated by the EKF based on the process noise, which can be tuned. The uncertainty can be verified a posteriori by inspecting the average error between the new SLAM position and the position estimated from the IMU and odometry velocities.

2.2 Trajectory following error (estimated state - trajectory state)


Goal: a model for the lateral and longitudinal error (the orientation error might be hidden in the lateral error). The model should depend on the acceleration, velocity and road radius and describe the relationship between the error (bias and variance) and these parameters.

A very general model could cover trajectories with different road radii and different distances before and after the corner. However, such a model would be complex to use in the calculations and would require a lot of data. Therefore, the different sources of uncertainty are considered separately.
Sources of uncertainty — longitudinal error

• accelerating or decelerating results in a velocity error, since the PID controller for the velocity introduces some delay. Add a longitudinal error for accelerations not equal to 0.

• a velocity error results in a longitudinal error, since the distance is the integral of the velocity: model the relationship between the distance and the longitudinal error.

Sources of uncertainty — lateral error

• random noise causes a lateral error on the straight parts, but no bias here.

• a longitudinal error in the corners results in a lateral error (corner cutting). First identify typical longitudinal errors and then test them in the corners. The model should describe the relationship between the longitudinal error and the lateral error in the corners.

2.3 FOV uncertainty


2.3.1 Uncertainty Lidar
The Lidar used is an RPLidar A1 from SLAMTEC, which uses triangulation; the intended range is 0.15 to 12 m (spec sheet: https://www.slamtec.ai/wp-content/uploads/2023/11/LD108_SLAMTEC_rplidar_datasheet_A1M8_v3.0_en.pdf). The uncertainty in the measurements (range/angle errors) of the Lidar needs to be modelled: biases and variances in the angle measurements and the range measurements.

Foresee the Unseen: max velocity = 10, ref velocity = 8.33, max acc = 3.5, max dec = 5, view range = 100.
Racing bot: max velocity = 0.5, max acc = 0.2, max dec = 0.2, view range = 5 » the test area is not bigger anyway, and at 5 m the beams diverge too much and tend to go over the obstacles.

Useful paper but not accessible: Assessing Accuracy of a Laser Rangefinder in Estimating Grassland Bird Density.

Estimate biases / calibration

Angle: The mounting angle relative to the vehicle is needed. Assume a fixed bias for the angle -> based on the triangulation principle??? Place the vehicle parallel (or perpendicular) to a wall > wheel axis perpendicular to the wall. Fit a line to the wall in the point cloud and determine the angle that makes the heading axis parallel (or perpendicular) to the wall.

Range: For the range, detect a straight wall. If this wall results in a straight line in the point cloud, the range bias is the same for all ranges and is therefore fixed. However, if this wall appears curved, the range bias contains at least a proportional part. The biases can be measured by comparing measurements to the ground truth. The distance measurements can be obtained by detecting that same wall and measuring the distance to the closest point on the wall. Based on symmetry, the true value for the distance to the whole wall can be found. Comparing the true value to the estimated value results in a model for the bias at each distance (bias(dist) = prop * dist + fixed).
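
A minimal sketch of this bias model fit, assuming the measured and ground truth ranges from the wall experiment are available as numpy arrays:

```python
import numpy as np

def fit_range_bias(measured_range, true_range):
    """Least-squares fit of bias(dist) = prop * dist + fixed."""
    bias = measured_range - true_range
    prop, fixed = np.polyfit(true_range, bias, deg=1)
    return prop, fixed

def calibrate_range(measured_range, prop, fixed):
    """Remove the fitted bias from a raw range measurement (using the
    measured range as an approximation of the true range)."""
    return measured_range - (prop * measured_range + fixed)
```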

Estimate variances

Angle: The Lidar reports the range for a fixed number of angle increments. There are different sources of uncertainty: 1) the angular resolution (fixed and already dealt with -> requires a certain object size) [5], 2) beam divergence [5] and 3) clock/angle measurement uncertainty. This means that an object can be detected at a certain (reported) angle in one time step, but not in another. Calculate the mean and the standard deviation by detecting a box with a clear edge.

Range: The imprecision can simply be calculated as the standard deviation or variance of each measurement on this wall over a certain period of time.

2.3.2 Delay Lidar

The time necessary to complete a Lidar scan also introduces uncertainty, since the measurement is only yielded after the whole scan is completed. The rotation rate of the Lidar was measured with a tachometer: 456 rpm, which is 7.6 Hz. The maximum delay between measuring and publishing a single range is therefore the time necessary to complete a scan, which is 1/7.6 = 0.132 s. This can be solved by so-called deskewing of the Lidar measurement. However, another workaround that uses much less computational power and is easier to implement, is to simply add a padding and an angular margin to the FOV to account for movements during the scanning. -> Integrate the movement over the last seconds (use tf??)
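
A sketch of the padding workaround; it conservatively assumes the full scan period as the delay and takes the current velocity estimates as the worst-case motion during the scan.

```python
SCAN_PERIOD = 1.0 / 7.6  # [s], one full Lidar rotation at 456 rpm

def fov_margins(v, omega, scan_period=SCAN_PERIOD):
    """FOV margins for the worst-case delay of one scan: shrink the FOV
    by the distance the robot may have travelled during the scan and
    widen the angular uncertainty by the angle it may have turned."""
    padding = abs(v) * scan_period             # [m]
    angular_margin = abs(omega) * scan_period  # [rad]
    return padding, angular_margin
```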

2.3.3 Pose uncertainty


A Lidar measurement is related to a pose to calculate the FOV. The uncertainty (variance) on the angle can be
directly translated to a variance / margin on the angle of the FOV. The absolute uncertainty on the position
can be translated to a padding on the FOV. This is similar to taking the underapproximative intersection of
the FOVs at each possible position given the uncertainty.
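
The padding can be applied as a negative polygon buffer, which realizes exactly this underapproximative intersection; a sketch assuming the shapely library and a FOV given as a list of (x, y) vertices:

```python
from shapely.geometry import Polygon

def shrink_fov(fov_points, position_margin):
    """Underapproximate the FOV for an uncertain pose: eroding the FOV
    polygon by the position margin is equivalent to intersecting the
    FOVs of all poses within that margin."""
    return Polygon(fov_points).buffer(-position_margin)
```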

2.4 — TODO: —
• EKF variance estimation analysis / tuning

• Find / determine trajectory following error

• FOV:

– Estimate Lidar uncertainties


– Calculate other lidar uncertainties

2.4.1 Model fitting


— TODO: Verify these claims – add citations — A Gaussian process (GP) will be used, since it can explicitly model the uncertainty: it finds the posterior distribution over all possible functions, and since the model is nonparametric, it can fit any function. Because the goal of the error model is to describe both the mean error and the uncertainty on the error, two GPs with separate kernels are fitted, one for the mean value and one for the standard deviation (source: https://arxiv.org/pdf/1212.6246.pdf). This approach is worked out in chapter 3.
Chapter 3

Error Modeling and Calibration

— TODO: Make link between error and uncertainty — The real world introduces all types of uncertainties
compared to a simulation environment. Uncertainties mean that values cannot be determined with a perfect
accuracy and therefore introduce errors in the calculations. The planner should account for these uncertain-
ties to be able to maintain a legal safety guarantee in the real world. In reality, there is always a tiny probability
of a massive error, so the best we can do is to ensure that the values are within a certain confidence interval.
The confidence level can, for example, be chosen based on the consequences of a value being outside this
interval. However, the desired confidence level is a decision that should not be made by the developers of
AVs, but should be the result of a broader social discussion resulting in the necessary regulations. The task of
developers of AVs is to provide the ability to choose the confidence interval for an algorithm.

To be able to model and account for the encountered uncertainties, they are separately modelled for each
part of the algorithm. The uncertainties introduced by the real world can generally be assigned to one of two
quantities. The first one is the state, which involves the uncertainty between the true state and the estimated
state on the one hand and the uncertainty between the estimated state and the planned state on the other
hand. The other quantity is the field of view (FOV) and is affected by uncertainty in the Lidar measurements
and the inevitable, accompanying delays. An overview is given in figure 3.1. — TODO: Mention something
about given a general approach to find the model and own implementation/simplification —

Figure 3.1: — TODO: Update this figure —

The assumption is that the uncertainties are caused by errors in the different measurements — TODO: Also
mention trajectory following – break down errors in the state and in the FOV — that are Gaussian distributed.
Since the errors in the measurements consist of both a systematic deviation or bias and a variance, a model
is needed to describe both the mean and the variance of this uncertainty or error, which is often referred to as error modeling. Gaussian Processes will be used to obtain a model of the error from the data, because they explicitly model the uncertainty and because they are nonparametric, so no assumption has to be made about the underlying function.

— TODO: Gaussian Process description: Verify these claims – add citations – Read some papers – Better merge these parts — We want to use a Gaussian process since it can also explicitly model the uncertainty. It finds the posterior distribution over all possible functions. Since the model is nonparametric, it can fit any function. The chosen kernel defines the prior over the set of functions (also referred to as the model) and is one of the most important hyperparameters. GPs are commonly considered a machine learning method, but they are mathematically well grounded, and for small datasets an exact solution exists. Hyperparameters are learned by maximizing the log likelihood. Bigger datasets require variational learning(?), which is proven to optimize a lower bound on the approximate posterior.

"Noise-free training function values f are not directly observed i.e. are latent variables."

A GP can also determine the noise on the input data, so there is uncertainty on the model fit and there is uncertainty on the data. The noise on the input data is also determined by maximizing the log likelihood of the parameters. Most GP use cases assume homoskedastic noise on the input data, or they consider a fixed heteroskedastic noise and optionally optimize an additional homoskedastic noise. However, the goal of the error model is to model both the mean error and the uncertainty on the error. A solution is to fit two GPs, one for the mean value and one for the standard deviation, each with a separate kernel (source: https://arxiv.org/pdf/1212.6246.pdf).

— TODO: Describe a proper learning procedure: early stopping, test/train-split. —

f_1(x) ∼ GP(0, k_1(·, ·))
f_2(x) ∼ GP(0, k_2(·, ·))
loc(x) = f_1(x)
scale(x) = transform(f_2(x))                                        (3.1)
y_i | f_1, f_2, x_i ∼ N(loc(x_i), scale(x_i)²)
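
A minimal sketch of this two-GP idea with scikit-learn, assuming a 1-D input x and error observations y; instead of the joint (variational) optimization of the cited paper, the scale GP is simply trained on binned empirical standard deviations, with exp() as the positivity transform.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_loc_scale(x, y, n_bins=15):
    """Fit one GP for the mean error (loc) and one for the log of the
    standard deviation (scale), as in equation 3.1."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    centers, stds = [], []
    for i in range(n_bins):
        sel = idx == i
        if np.sum(sel) > 2:  # skip (nearly) empty bins
            centers.append(x[sel].mean())
            stds.append(y[sel].std(ddof=1))
    centers, stds = np.array(centers), np.array(stds)

    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-4)
    gp_loc = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp_loc.fit(x[:, None], y)
    gp_scale = GaussianProcessRegressor(kernel=kernel)
    gp_scale.fit(centers[:, None], np.log(stds))
    return gp_loc, gp_scale

def predict_loc_scale(gp_loc, gp_scale, x_new):
    loc = gp_loc.predict(x_new[:, None])
    scale = np.exp(gp_scale.predict(x_new[:, None]))  # transform = exp
    return loc, scale
```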
— TODO: Maybe??? do a Monte Carlo simulation to predict the probability of an accident or a crucial violation of the sets based on the other confidence intervals (Bayesian Network) —

— TODO: Something about computational delays —

— TODO: Something about calibration of the transforms – the lidar transform was found by measuring the position by hand and aligning the robot with the wall – the IMU transform was found by measuring by hand -> should be hidden in the position error —

— TODO: How to deal with / describe uncertainties in the calibration process: maybe give an estimate and compare it to the found uncertainties: lidar uncertainty +- 5 cm, tape measure +- 0.5 mm —

3.1 State errors


The state of the AV is described by the pose, which consists of the position and orientation and the velocities,
both linear and angular. Since a 2D world is considered, the position is given by a 2D vector [x, y]ᵀ and the
orientation is given by the orientation around the z-axis, the yaw (ψ). The considered AV is nonholonomic, so
the velocity is described by the linear velocity in the orientation direction v and the angular velocity around
the z-axis (ψ̇). An implementation on a real AV introduces two inevitable sources of error. Firstly, an error
between the true state and the estimated state and secondly, an error between the estimated state and the
desired or planned state, since it is not possible to perfectly follow a trajectory.

• In simulation: vehicle is just at the planned state; real world: error in estimated state and error in
trajectory following.

• Error in estimated state: An extended Kalman filter (EKF) is used to merge the different sensor readings

• Error in trajectory following: A model is made to predict the error in the trajectory following

3.1.1 Error in estimated state


— TODO: Something about update rates EKF – Used SLAM module etc – probably not here?? —

— TODO: Short description Kalman filter - give equations / matrices — The state is estimated using a combi-
nation of different sensors which are merged with two extended Kalman filters. The first EKF is used to merge
the linear velocity estimate from the wheel encoders and the angular velocity estimate from the IMU into one
velocity estimate and is further referred to as velocity EKF. This velocity estimate is put together with the pose
estimate from the SLAM algorithm in another EKF, the position EKF, to get the global position. The different
sensors and the noise matrices of the EKFs are discussed below. A diagram of the connection between the
different sensors and EKFs is given in figure ?? — TODO: Add plot for structure of EKFs —.

Wheel encoders - The linear velocity is computed based on the absolute wheel orientations returned by the encoders. The linear velocity is used in the trajectory planner, which also accounts for the uncertainty of the velocity. Therefore, the uncertainty on the linear velocity needs to be estimated. This is achieved by letting the wheels spin freely at a constant speed and comparing the velocity calculated from the wheel encoder values with the velocity measured with a tachometer. The measurements are shown in figure 3.2, and the mean and the standard deviation of the error are calculated. The mean is used to calibrate the parameters used in the velocity calculations, and the standard deviation is used to set the variance on the velocity estimate. A constant variance is used to minimize the complexity and the computational load.
— TODO: Discuss shortcomings: only constant velocities —

[Figure: wheel encoder linear velocity error (raw, calibrated, bias); velocity error [m/s] vs. measured tachometer velocity [m/s]]
Figure 3.2: The error in the linear velocity estimate based on the wheel encoders.

IMU - The angular velocity is obtained from the IMU, which contains a gyroscope that measures the angular velocities directly. The angular velocity is not directly used in the trajectory planner, so its reported uncertainty can be less precise than that of the linear velocity. Besides, it is more difficult to obtain a reliable ground truth for the angular velocity. The motors of the robot were used to let the robot rotate around its axis, which is assumed to happen at a constant angular velocity. The total rotation accumulated over 30 seconds, as measured by the SLAM algorithm, was used together with this assumption to determine the constant angular velocity. This velocity was used to calculate the bias and calibrate the IMU readings, and the Mean Square Successive Difference (MSSD) — TODO: describe MSSD — was used to estimate the variance, since the angular velocity is actually not constant.
— TODO: Discuss shortcomings: MSSD —

SLAM algorithm - The ROS package SLAM Toolbox is used to get a pose estimate for the robot relative to its environment. The SLAM algorithm provides the covariance matrix with the pose, which is assumed to describe the uncertainty properly and is not investigated further.

[Figure: bias and MSSD of the angular velocity estimate at different average angular velocities]

Figure 3.3: The error in the angular velocity estimate based on the IMU.

Process Noise EKFs - The position EKF integrates the velocity reported by the velocity EKF to produce a pose
estimate during the intervals between the SLAM measurements, since the SLAM only updates at 1 Hz. The variance reported on this estimate can be tuned by adjusting the process noise matrix. Assuming that the pose reported by the SLAM algorithm is on average correct, the uncertainty reported by the position EKF is compared against the pose estimated by the SLAM algorithm. For a series of SLAM and EKF poses, this means that at least n% of all SLAM updates should be within the n%-confidence interval of the uncertainty on the EKF estimate.
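
A sketch of this a posteriori check: for each SLAM update, the confidence level at which it falls inside the EKF uncertainty is computed from the squared Mahalanobis distance (2-DOF chi-square for the 2D position), and the empirical coverage can then be compared with the nominal one.

```python
import numpy as np
from scipy.stats import chi2

def confidence_levels(slam_xy, ekf_xy, ekf_cov):
    """For every SLAM update, return the smallest confidence interval of
    the EKF position estimate that still contains it."""
    levels = []
    for z, mu, P in zip(slam_xy, ekf_xy, ekf_cov):
        d = z - mu
        m2 = d @ np.linalg.solve(P, d)      # squared Mahalanobis distance
        levels.append(chi2.cdf(m2, df=2))   # 2 DOF for the 2D position
    return np.asarray(levels)

# For a well tuned filter roughly n% of the updates lie inside the
# n%-interval, e.g. np.mean(levels <= 0.90) should be about 0.90.
```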
— TODO: More thorough explanation of the plots – standstill and small velocities are not used — The confidence interval for the position and orientation is visualized in figure 3.4, and the resulting occurrence frequency of each confidence interval is shown in figure 3.5, which indicates that the uncertainty estimate is a bit conservative.
The process noise of the velocity EKF is tuned such that the uncertainty does not explode, but the filtering effect is also minimized. The process noise of the position EKF is tuned to make the uncertainty prediction conservative enough, as shown in figure 3.5. The matrices are given in equation 3.2.

Q_velocity EKF = diag(0.1, 0.1, 0.2, 0.01, 0.1)
                                                                    (3.2)
Q_position EKF = diag(0.01, 0.01, 0.01, 0.01, 0.05)

3.1.2 Error in trajectory following


— TODO: Clearly state:

• Which errors and WHY?

• Which datasets and WHY?

• Which models/parameters and WHY?



[Figure: 2D positions with uncertainty (confidence interval = 48.7%) and orientation with uncertainty (confidence interval = 24.3%); SLAM and EKF position/yaw with confidence intervals]
Figure 3.4: — TODO: caption —

[Figure: cumulative occurrence frequency of the position and orientation errors vs. confidence interval, for SLAM intervals of 0.5, 1, 2 and 3 s]

Figure 3.5: — TODO: Good explaination about the plots and the interpretation —

The planner of the AV finds a collision-free trajectory for the AV to follow. Since it is impossible for the
AV to perfectly track a trajectory, a model should be made that describes how much the AV is expected to
deviate from the planned trajectory. The description of the trajectory planner together with the definition of
a trajectory and the description of the trajectory follower can be found in respectively ?? and ??.

The model for the trajectory following error describes the error between the goal pose given by the planned trajectory and the actual, estimated pose. This results in the following three types of errors — TODO: explain why absolute errors - this ignores a deviation to one side, for example because of a less strong motor, but it eliminates the need to test all corner trajectories mirrored —:

Definition (Longitudinal error). The difference between the current position and the goal position along the
path defined by the subsequent waypoints.

Definition (Lateral error). The absolute value of the difference between the current position and the closest
point on the path defined by the subsequent waypoints measured perpendicular to the path.

Definition (Orientation error). The absolute value of the difference between the current orientation and the
goal orientation defined along the path.

[Figure: robot and path with tangent, goal state, and the longitudinal, lateral and orientation errors]
Figure 3.6: Visualization of the different trajectory following errors.

The trajectory follower is implemented to track the goal velocity (more details can be found in ??) instead of a goal position. Errors in the velocity can therefore accumulate into an increasing longitudinal error in the position, since no feedback about the position is used in the velocity calculation. Therefore, the change of the longitudinal error over time is modelled, since this describes the observed errors better; it is further referred to as the longitudinal error rate. This makes sense because the trajectory follower tracks the goal velocity, which has units of meters per second (m/s), the same as the longitudinal error rate.

Definition (Longitudinal error rate). The rate of change over time (the time derivative) of the longitudinal error.

The trajectory follower uses a point on the path close to the robot for the feedback on the orientation and the lateral position to calculate the steering angle. The orientation error and the lateral error are therefore modelled directly instead of through their rate of change, since these values are not susceptible to drift like the longitudinal error.

Datasets

Generating a model requires data about the errors, obtained along trajectories similar to those encountered in normal driving scenarios. Ideally, all types of trajectories would be tested: varying accelerations, corner radii, velocities, start and end points, and start and end velocities. However, that would require a very large number of trajectories to create a reliable dataset, while the goal of this thesis is to get an insight into the biggest sources of uncertainty and the scenarios that result in the biggest uncertainties. Therefore, the data was collected using trajectories that isolate every source of uncertainty as much as possible.
More specifically, two types of test trajectories are used: trajectories that only contain straight parts and trajectories that contain a corner. The straight trajectories are tested for different combinations of maximum velocity, maximum acceleration and length. When considering trajectories with a corner, not only the length but also the start and end point should vary to create a dataset that captures real scenarios as much as possible. Since the corner radius should also vary to account for different road geometries, the set of possible trajectories grows exponentially. Therefore, the acceleration is 0 for all corner trajectories, and these trajectories start and end with the same constant velocity. The parameters used to create the trajectories are shown in table 3.1.

— TODO: mention that the acceleration is skewed – velocities other than 0.25 and 0.5 have acc != 0, while at 0.25 and 0.5 almost all data has acc == 0 – the curvature is better distributed: all velocities have the same number of samples for curvature == [0., 0.67, 1.] —
— TODO: Mention assumptions: corners are 90 degrees, this specific trajectory follower, the length of the straight part, the length of the straight part after the corner -> roughly based on real scenarios, sees 5 seconds ahead or so, ... meters — This results in three different scenarios, which are divided into three datasets.

• straight-without-corners: velocity and acceleration (radius = inf -> straight)

• corners: radius and velocity (acceleration = 0)

• straight-with-corners: straight parts after a corner; radius and velocity (acceleration = 0)

Trajectory shape | Velocities [m/s] | Accelerations [m/s²] | Curvature [rad/m] | Length [m] | Repetitions [-]
Straight | [0.25, 0.5] | [0.125, 0.25] | [0.] | [5.68, 5.0, 4.32, 3.64, 2.96, 2.28, 1.6, 0.92, 0.27] | 1
Corner | [0.25, 0.5] | [0.] | [0.67, 1.] | 90 degrees corner + 1.5 m straight | 4

Table 3.1: Every possible combination was tested for each trajectory shape, so 2 · 2 · 1 · 9 · 1 = 36 straight trajectories and 2 · 1 · 2 · 1 · 4 = 16 corner trajectories.

Longitudinal error

— TODO: Say something about i.i.d. — The longitudinal error rate (ė_long) is calculated by taking the difference in the longitudinal error (e_long) between two time steps and dividing it by the time difference between these two time steps (equation 3.3). The autocorrelation of the data is close to 0, which indicates that the data is i.i.d. The goal is to find which parameters influence the longitudinal error rate. As mentioned before, different trajectories were tested, which resulted in different error datasets depending on different parameters, namely the velocity, the acceleration and the radius.

[Figure: autocorrelation vs. lags of the lateral error, orientation error and longitudinal error rate data for the straight-wo-corners, straight-w-corners and corners datasets]

Figure 3.7: — TODO: make a caption —

ė_long = (e_long(t_i+1) − e_long(t_i)) / (t_i+1 − t_i)                (3.3)

To learn whether the longitudinal error rate depends on the acceleration, the data from the straight-without-corners dataset was used to plot the acceleration against the longitudinal error rate, since this is the only dataset that contains accelerations other than 0. The data was fitted with a Gaussian Process (GP) to capture the trend. The data and the resulting fit are shown in figure ??a. There is a clearly visible, approximately linear relationship between the acceleration and the longitudinal error rate, so the longitudinal error rate is acceleration-dependent. To investigate whether a corner in the trajectory influences the longitudinal error rate, the error rate is shown in relation to the corner radius, using the data from the straight-without-corners and corners datasets to display the relation between the corner curvature and the error rate. Only the data points at the parameters the two datasets have in common are used, that is, a velocity of 0.25 or 0.5 m/s and an acceleration of 0 m/s². The results in figure ??b indicate that the mean of the error is uncorrelated with the curvature, but the uncertainty does increase linearly with the curvature. Therefore, the uncertainty of the longitudinal error rate is curvature-dependent.
The third parameter that was varied for the different trajectories is the velocity. To isolate the effect of the velocity on the error rate and filter out the influence of the acceleration, only the measurements where the acceleration is zero are used. Since all the datasets contain data with varying velocity, the data from all the datasets is used where again the velocity = [0.25, 0.5] m/s and the acceleration = 0 m/s². This leads to the same conclusion as for the curvature: only the uncertainty of the longitudinal error rate is velocity-dependent.
[Figure: acceleration, curvature and velocity vs. the longitudinal error rate, with the error mean and 1σ and 2σ bands; a) straight-without-corners dataset, b) straight-without-corners and corners datasets where acc. == 0, c) all datasets where acc. == 0]

Figure 3.8: — TODO: make a caption —


The error model for the longitudinal error rate should be based on either the velocity and the acceleration or the velocity and the curvature, since there is no dataset that contains error data over a range of all three parameters. Simply combining the datasets would leave a huge gap, since there are no measurements for the region where acceleration > 0 and curvature > 0. Since the mean error rate depends on the acceleration, the straight-without-corners dataset will be used to build the model, with the acceleration and the velocity as parameters. The standard deviation predicted by this model is multiplied by a constant to account for the higher uncertainty at increasing curvatures. This constant is based on the corner with the biggest curvature in the road ahead and is calculated by comparing the uncertainty at a curvature of 0 with the uncertainty at that specific curvature (equation 3.4).

scale factor = σ(curvature = road curvature) / σ(curvature = 0)       (3.4)

— TODO: some comment that it is not possible to use a different model for the corners and the straight parts, since the corner influences the straight part -> that is the very reason for the straight-WITH-corners dataset. —

Estimating the longitudinal error rate with this approach only gives a rough estimate, but should be enough
for the purpose of this thesis, since the focus is more on accounting for all sources of uncertainty and identi-
fying the most dominant ones.

The trajectory planner needs the longitudinal error, so the longitudinal error rate needs to be numerically integrated. The longitudinal error rate can simply be sampled from the model, using the velocity and acceleration at each state in the list of states defining the trajectory (— TODO: Refer to definition of trajectory —). The longitudinal error follows from numerically integrating the error rates as described in equation 3.5.
ė_long ∼ N(µ_rate, σ²_rate)
e_long ∼ N( Σ_{i=1..n} µ_rate,i · Δt , Σ_{i=1..n} σ²_rate,i · Δt² )    (3.5)

However, the value of Δt should then be the same as the time between the subsequent longitudinal measurements used to calculate the rate (eq. 3.3), where (t_i+1 − t_i) is typically 0.033 seconds. The mean of the rate stays the same for a different time step, but the variance changes. The formula to convert the variance is given in equation 3.6, where Δt_rate and Δt_traj are respectively the time step at which the data was recorded and the time step of the trajectory planner. σ²_rate is the variance predicted by the model and σ²_traj is the converted variance, valid for the time step used in the trajectory planner (Δt_traj). The formula uses the fact that the integrated variance should be the same in both cases. n_rate and n_traj are the numbers of integration steps, and since the total time is the same for both integrations, the following equality holds: n_rate · Δt_rate = n_traj · Δt_traj.
σ²_integral = Σ_{i=1..n_rate} σ²_rate,i · Δt²_rate = Σ_{i=1..n_traj} σ²_traj,i · Δt²_traj
n_rate · σ²_rate · Δt²_rate = n_traj · σ²_traj · Δt²_traj
σ²_rate · Δt_rate = σ²_traj · Δt_traj                                 (3.6)
σ²_traj = (Δt_rate / Δt_traj) · σ²_rate
With the above formula, equation 3.5 can be rewritten to account for a different time step, resulting in equation 3.7.

e_long ∼ N( Σ_{i=1..n} µ_rate,i · Δt_traj , Σ_{i=1..n} σ²_rate,i · (Δt_rate / Δt_traj) · Δt²_traj )    (3.7)
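
A sketch of this integration, assuming the mean and variance of the error rate have been sampled from the model at every trajectory state:

```python
import numpy as np

def integrate_longitudinal_error(mu_rate, var_rate, dt_rate, dt_traj):
    """Numerically integrate the longitudinal error rate (equation 3.7):
    the mean scales with dt_traj, while the variance is first converted
    from the recording time step dt_rate to the planner time step."""
    mean = np.cumsum(mu_rate * dt_traj)
    var = np.cumsum(var_rate * (dt_rate / dt_traj) * dt_traj ** 2)
    return mean, var  # distribution of e_long at every trajectory state
```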

Lateral error

The lateral error is determined by calculating the perpendicular distance to the closest point on the path. However, the obtained measurements are not i.i.d. (independent and identically distributed), since the lateral errors are strongly correlated over time: the change of the lateral error is limited by the dynamics of the robot. The autocorrelation measures the dependency of sequential measurements in a time series and indicates a strong dependency between subsequent measurements — TODO: autocorrelation —. The Ljung-Box test [6] uses the autocorrelation and provides a formal approach to quantify the probability that the data is i.i.d. More specifically, it calculates the probability of observing similar or more extreme data given that the hypothesis that the data is i.i.d. is true. A probability smaller than 0.05 is considered too small for the hypothesis to hold and therefore means that the data is not i.i.d. Therefore, the lateral error data is subsampled such that the Ljung-Box test value is above 0.05.
Only one in every 60 lateral error measurements from each trajectory is used, which is one measurement every 2 seconds. With this subsample frequency, 52 out of 55 trajectories have a Ljung-Box test value bigger than 0.05, meaning that 94.5% of the trajectories lie within a 95%-confidence interval.
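
A sketch of this subsampling procedure, using the Ljung-Box implementation from statsmodels; the maximum step and the number of lags are hypothetical choices.

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

def subsample_step_for_iid(errors, alpha=0.05, max_step=120):
    """Increase the subsampling step until the Ljung-Box test no longer
    rejects the hypothesis that the data is i.i.d. (p-value > alpha)."""
    errors = np.asarray(errors)
    for step in range(1, max_step + 1):
        sub = errors[::step]
        if len(sub) < 8:  # too few samples left to test
            break
        lags = [min(10, len(sub) // 2)]
        p = acorr_ljungbox(sub, lags=lags, return_df=True)["lb_pvalue"].iloc[0]
        if p > alpha:
            return step, p
    return None, None
```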

[Figure: acceleration, curvature and velocity vs. the lateral error, with the error mean and 1σ and 2σ bands; a) straight-without-corners dataset, b) straight-without-corners and corners datasets where acc. == 0, c) all datasets where acc. == 0]

Figure 3.9: — TODO: make a caption —

The same approach as for the longitudinal error rate is used to inspect the relation between the lateral error and the trajectory parameters acceleration, curvature and velocity. The same type of graphs is used, but this time a log-normal distribution is fitted instead of a normal distribution, since the errors are strictly positive. The relation between the acceleration and the lateral error (fig. ??.a) shows that the lateral error is almost zero for positive accelerations. This is because the trajectories always start at the current position of the robot, so the initial lateral error is 0, and positive accelerations are only present at the beginning of a trajectory, meaning that positive accelerations are related to low lateral errors. Although the data is not particularly reliable for positive accelerations, it does indicate that the error at acceleration 0 is approximately similar to or higher than the error at other accelerations, meaning that a dataset with no accelerations other than 0 can be used.

Error | Parameters | Dataset(s) | Additional
Longitudinal rate | acceleration & velocity | straight-without-corners | multiplication factor for the standard deviation based on the (error, curvature) model (trained on all datasets)
Lateral | curvature & velocity | straight-without-corners [acc == 0] & corners | -
Orientation | curvature & velocity | straight-without-corners [acc == 0] & corners | -

Table 3.2: — TODO: Some caption —

Figure ??.b depicts the correlation between the curvature and the lateral error, which shows that only the uncertainty increases slightly with an increasing curvature. In other words, the uncertainty of the lateral error is curvature-dependent.
The velocity is graphed against the lateral error in figure ??.c, which indicates that the lateral error is velocity-dependent.

Since the correlation with the acceleration in the straight-without-corners dataset is not reliable, the error model for the lateral error is based on the curvature and the velocity. This means that the data from the straight-without-corners dataset where the acceleration is 0 is used together with the corners dataset.

Orientation error

The orientation error is determined by calculating the difference between the current orientation and the tangent of the path at the closest point. Similar to the lateral error, the measurements of the orientation error are not i.i.d. — TODO: autocorrelation, as in figure ... — The same approach is used to determine the subsample frequency based on the Ljung-Box test: one in 30 data points is used for the orientation error, i.e. one measurement each second. This results in 52 out of 55 trajectories with a Ljung-Box test value bigger than 0.05, so again 94.5% of the trajectories lie within a 95%-confidence interval.

[Figure: acceleration, curvature and velocity vs. the orientation error, with the error mean and 1σ and 2σ bands; a) straight-without-corners dataset, b) straight-without-corners and corners datasets where acc. == 0, c) all datasets where acc. == 0]

Figure 3.10: — TODO: make a caption —

The relationship of the orientation error (also fitted with a log-normal distribution) with the parameters acceleration, curvature and velocity is visualized in figure ??, similar to the other errors. This leads to the conclusion that the orientation error does not depend on the acceleration. On the other hand, the orientation error is strongly curvature-dependent and also depends on the velocity.

The same approach as for the lateral error is used for the orientation error: the error model depends on the curvature and the velocity. The straight-without-corners dataset is used for the data points where the acceleration is 0, together with the corners dataset.

3.2 Field of View (FOV)


The measurements of a rotating Lidar sensor, which result in a 2D view, are used to determine the field of view (FOV): the area that is visible to the AV. Uncertainties on the FOV are therefore introduced by 1) uncertainties in the Lidar measurements and 2) delays in the Lidar measurements, since the Lidar rotates at a finite frequency. The uncertainty or error distribution — TODO: Think about terminology: uncertainty == error distribution — in the Lidar measurements should be described with a model. The raw Lidar measurements can then be adjusted based on this model to ensure that the true value is equal or bigger with a certain confidence.

3.2.1 Lidar Uncertainty


• Want to make an empirical model - hard to find in literature - most papers focus on either a very de-
tailed physical model: beam size etc. or on a very general model that just handles the errors in the
calibration

• Empirical lidar model: "A LiDAR Error Model for Cooperative Driving Simulations"

• model based on theta and range, but theta hard to estimate so take the marginal probability –> Not
susceptible to skewed dataset

— TODO: Talk about the literature here — The goal is to find a model that gives the uncertainty of a raw Lidar measurement. Hardly any examples of such a model were found in the literature, since most research either focuses on a very detailed, physical error model or considers just the calibration error in the transformations to other sensor frames. — TODO: An example of a similar approach is ... —
A Lidar measurement consists of a list of detected ranges, each related to a measurement angle. Thus, two models are needed: one for the error in the ranges and one for the error in the angles. The errors are modelled as functions of the measured range and the angle of incidence. This means that the error is assumed to be independent of the measurement angle, which is reasonable since the same range finder is used for every direction by rotating it around. — TODO: Maybe figure —. However, before discussing the models and the methods used to obtain the necessary data, the errors need to be properly defined.
Definition (Range error). The error between the measured range and the true range at a certain angle, where
the range is the distance to the closest obstacle. Positive for measured ranges larger than the true range.
Definition (Angle error). The angle that is added at the boundaries of a detection; positive for extra angle. — TODO: Think about this — The error in the size of an obstacle at its boundaries, expressed as an error in the angle.
— TODO: Figure that explains both error types —

Range error model

To build a model for the error on the range, measured ranges are needed together with the ground truth range and the incidence angle. To obtain enough data without determining a massive number of ground truth values by hand, the Lidar was set up to scan a straight wall. The shortest distance to the wall was measured by hand, and the orientation of the wall was determined by fitting a line to the points accumulated from 100 scans with the least squares method. The distance and the orientation of the wall can then be used to determine the ground truth range and the incidence angle for every measurement; see figure 3.11.
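
A sketch of how the ground truth can be derived from the fitted wall, assuming the scan is available as an (N, 2) array of Cartesian points in the Lidar frame:

```python
import numpy as np

def wall_ground_truth(points, shortest_distance):
    """Fit the wall with a least-squares line and derive, for every
    point, the true range and the incidence angle; shortest_distance is
    the hand-measured perpendicular distance to the wall."""
    slope, _ = np.polyfit(points[:, 0], points[:, 1], deg=1)  # wall line
    beam_angles = np.arctan2(points[:, 1], points[:, 0])
    normal_angle = np.arctan(slope) + np.pi / 2  # wall normal direction
    incidence = beam_angles - normal_angle
    # A beam at this incidence hits the wall at range d / cos(incidence).
    true_range = shortest_distance / np.cos(incidence)
    return true_range, np.abs(incidence)
```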

The wall was scanned 100 times at 6 different distances. The resulting error for the measurements is shown in figure 3.12, where the color of the points indicates the error. The distance is bounded between 0.15 m, which is set by the Lidar manufacturer, and 5 m, since a bigger wall was not available and this should be enough for the experiments anyway. Moreover, it can be observed that the errors in the measured range start to explode at a certain inclination angle, and beyond some angle the Lidar is unable to make any detections at all. The FOV estimation should therefore account for a maximum inclination angle. Although the maximum inclination angle seems to decrease with the range, it is easier to use an absolute limit on the inclination angle, which is set at 1.222 rad or 70 degrees.
[Figure: wall scanning setup with the hand-measured distance and the maximum inclination angle]

Figure 3.11: — TODO: Should I keep this picture —

Figure 3.12: — TODO: Update this figure —
When the measurements beyond the maximum inclination angle are removed, the Gaussian Processes can be fitted to the data. As described above, one GP is learned to fit the mean and another one describes the noise on the input. The models depend on both the inclination angle and the range. However, since it is difficult to reliably obtain the inclination angle from raw Lidar data, and doing so would introduce new assumptions, the goal is to have a model that depends solely on the measured range. This is achieved by marginalizing the distribution, which is a better approach than directly fitting a model on the measured range, since the data does not contain the same distribution of inclination angles at every range. The resulting distribution is shown in figure ?? (left 3D, right 2D).
Next, the described Gaussian Processes are fitted to model the mean and the standard deviation of the error. — TODO: Define some proper learning procedure. —

Angle error model

Similarly, to obtain a model for the error on the angle, the angle error needs to be determined for a given measured range and incidence angle. Again, the goal is to obtain as much reliable ground truth data as possible while measuring as few ground truth values as possible by hand. Important for this method is that the absolute orientation of the Lidar, and therefore of the robot, is known. — TODO: Should I describe the earlier method?? It tells what actually does not work -> at least something about the expectation that a big part of the error comes from errors in the actual rotation —.

— TODO: Explain more thoroughly — — TODO: Maybe first use the range error model on this data — The robot was placed on a grid and a plate was placed at different positions on the grid to obtain data for different ranges and angles. The approach is visualized in figure ??. The position of the furthest edge of the plate is known, since the plate was placed on the grid. The measurement error on the edge of the plate can be calculated using this information and the actual measurements. The data is visualized in figure 3.13 and the resulting model trained on this data in figure ??.

[Figure: clustered measurements at each plate position on the grid; mean error [m] vs. measured range [m] and inclination angle [rad], with the range/angle limits]

Figure 3.13: — TODO: Should I keep this picture —

3.2.2 Delays Lidar/FOV uncertainty


• Skewed point cloud: NOT NECESSARY https://github.com/rsasaki0109/laser_deskew/blob/master/src/laser_deskew.cpp

• Maybe republish the Lidar scan in the map frame, so that it is only transformed at the time of measurement. For debugging.

• Add padding based on the time delay.

3.3 Final uncertainty discussion


• Cannot capture all dependencies. Tests/data should be acquired as much as possible in the normal environments encountered by AVs, or with the same materials, to prevent missing aspects.

• Maybe mention the uncertainty in the calibration of the different frames: IMU, laser, encoder (gives the linear velocity in the base_link frame). Give a rough estimate and possibly calculate its influence. Most of this uncertainty is hidden in the position estimate, since the uncertainty of the EKF-integrated pose (based on IMU and encoders) is compared to the SLAM pose. So the IMU and encoder frames are on one hand and the laser frame on the other; only the uncertainty in the velocity is used by the trajectory planner. – Ignore this: SLAM sets the laser -> base_link -> odom -> map frame transform, so it is leading.
To-Dos Code

Mandatory

• The trajectory planner should account for the computational delay

• FOV: use the old transform + the variance on that transform -> a pose buffer or so might be better

• Actions / lifecycle nodes -> get obstacles working

• Transform the FOV with the tf at the moment of capturing

Optional

• Smooth trajectories

• Deskew the point cloud

Bibliography

[1] A. Sakai, D. Ingram, J. Dinius, K. Chawla, A. Raffin, and A. Paques, "PythonRobotics: A Python code collection of robotics algorithms," 2018. arXiv: 1808.10703 [cs.RO].
[2] The International Council on Clean Transportation (ICCT), "European Vehicle Market Statistics," Tech. Rep., 2021. [Online]. Available: https://theicct.org/wp-content/uploads/2021/12/ICCT-EU-Pocketbook-2021-Web-Dec21.pdf.
[3] J. von Neumann, R. H. Kent, H. R. Bellinson, and B. I. Hart, "The mean square successive difference," The Annals of Mathematical Statistics, vol. 12, no. 2, pp. 153–162, 1941, ISSN: 0003-4851. [Online]. Available: http://www.jstor.org/stable/2235765 (visited on 03/13/2024).
[4] R. Haeckel and B. Schneider, Clinical Chemistry and Laboratory Medicine (CCLM), vol. 21, no. 8, pp. 491–498, 1983. DOI: 10.1515/cclm.1983.21.8.491. [Online]. Available: https://doi.org/10.1515/cclm.1983.21.8.491.
[5] C. Glennie, "Rigorous 3D error analysis of kinematic scanning LIDAR systems," Journal of Applied Geodesy, vol. 1, no. 3, pp. 147–157, Nov. 2007, ISSN: 1862-9024. DOI: 10.1515/jag.2007.017. [Online]. Available: https://www.degruyter.com/document/doi/10.1515/jag.2007.017/html (visited on 04/03/2024).
[6] G. M. Ljung and G. E. P. Box, "On a measure of lack of fit in time series models," Biometrika, vol. 65, no. 2, pp. 297–303, 1978.

