Positioning and Docking of an AGV in a Clinical Environment
DAVID ANDERSSON
ERIC MÅNSSON
The purpose of this project was to investigate options and find a solution for docking an Automatic Guided Vehicle (AGV) to a predefined path within a unique working cell. Precision, repeatability, a short docking time and a minimal workload for the operator should be ensured to preserve the quality of the tasks. The primary aim was to be able to maneuver the AGV between predefined positions with an accuracy of ±5 mm.
By investigating prior work (articles) covering AGV control, we decided to proceed with dead-reckoning using odometry and a vision-based system together with an optical path. A path tracking technique called Quadratic curve was used together with the Extended Kalman filter algorithm for location estimation.
Dead-reckoning was used to reach the optical path, after which the vision-based system was enabled to track it. Over 30 repetitions of the docking phase, we reached a mean offset (Euclidean distance) of 5.1 mm with a standard deviation of 3.0 mm. The data were derived from the Kalman filter states, with the measurements performed by the vision-based system as offsets from the indicated stopping points.
We came to the conclusion that odometry requires only simple calculations and is easy to implement, but suffers from sensitivity to both stochastic (floor irregularities, collisions, skid) and systematic (cumulative) errors. The kinematic model was affected by our inability to measure the vehicle dimensions within tight tolerances.
The vision-based system proved, as predicted, to be very accurate but also sensitive to the light conditions. The low speed of the AGV (maximum 200 mm/s) was very advantageous in relation to the relatively low camera frame rate (approximately 15 frames per second). The accuracy of the vision system was limited by the frame rate. Note that computer vision is computationally heavy and thus quite demanding on system performance.
With an improved prototype, access to calibration tools for camera mounting and for assessing the necessary dimensions, favorable light conditions and an optimized object detection algorithm, we believe that we could have reached well within the requested accuracy.
Contents

1 Introduction
  1.1 Background
  1.2 Purpose
  1.3 Objective
  1.4 Limitations
  1.5 Method

2 Design Architecture
  2.1 AGV Perception
    2.1.1 Optical
    2.1.2 Vision-based
    2.1.3 Magnetic
    2.1.4 Encoders
    2.1.5 Other techniques
    2.1.6 Sensor fusion
    2.1.7 Evaluation of sensor techniques
    2.1.8 Selection of sensor techniques
  2.2 AGV Navigation
    2.2.1 Path planning
    2.2.2 Selection of path planning technique
    2.2.3 Localization
    2.2.4 Selection of localization technique
  2.3 Model representation
    2.3.1 Extended Kalman filter
    2.3.2 Overview
  2.4 Software development
  2.5 AGV prototype environment and conditions
    2.5.1 AGV prototype description
    2.5.2 PC system information
  2.6 Error awareness
  2.7 Summary

3 Implementation
  3.1 Components used for realizing the solution
  3.2 Graphical User Interface
    3.2.1 Simulation
    3.2.2 Verification
    3.2.3 Interacting
  3.3 Software Implementation
  3.4 Camera measurements
  3.5 Extended Kalman filter
  3.6 Odometry/dead-reckoning test
  3.7 Summary

4 Results
  4.1 Kalman filter simulation on odometry
  4.2 Odometry/dead-reckoning calibration
  4.3 Dead-reckoning error accumulation
  4.4 Docking phase
  4.5 Aligning phase
  4.6 Software execution times
  4.7 Exclusions

Bibliography

A Use-Case

B Time schedule
1 Introduction

The main purpose of this chapter is to introduce the reader to the full scope of this project, including background, purpose, objective, limitations and work method. The time schedule Gantt chart can be found in Appendix B. This chapter serves as an organized template for verification and validation in the wrap-up phase of the project.
1.1 Background
Imagine a manufacturer where several departments with multiple cells each have an installed automated robot that performs a certain task. Due to variations in the inflow of materials or orders to each department/cell, the usage efficiency of each individual robot will never go above 80%. Also, a complete breakdown of such an installation will have a significant economic impact upon the manufacturer.
Consider the potential benefits if this robot could be mobilized and easily transferred between departments and cells. The installation and maintenance costs of the robots would decrease significantly, since fewer mobile robots can replace the installed ones. The ability to easily swap out a malfunctioning robot would also decrease the severity of a total breakdown. The potential is very demonstrable in the medical equipment sector. Equipment could be transferred to ski resorts during the season. Immobilized patients could receive medical attention at home. Hospitals and care givers could share heavy installations to increase efficiency. The list goes on. The bottom line is that modern technology allows for lean, light and portable solutions that could substitute for installed technical equivalents.
An innovative company would like us to investigate options and find a solution for docking an Automatic Guided Vehicle (AGV) to a predefined path within a unique working cell. Secondarily, the unit shall be able to automatically align to a set of devices in the working cell. The purpose of the automated docking is to facilitate the operator's work. Precision, repeatability, a short docking time and a minimal workload for the operator should be ensured to preserve the quality of the tasks. Other desired features will be discussed later in this chapter. Due to confidentiality constraints, the reader has to settle for the sole fact that the AGV is supposed to be operating in a clinical environment.
1.2 Purpose
The purpose of our task is primarily to enable automated docking of this AGV to a predefined path which is parallel to a work station in a unique working cell. Once
docked, the unit shall align to an operating tool on the station in a parallel manner
when commanded by the user. Secondarily, the operating tool of the AGV shall follow
the height (z-position) of the operating tool belonging to the working station. The
working cell environment is illustrated in Figure 1.1 below.
Figure 1.1: In the unique working cell (room), the AGV is placed manually within a marked area. A unique working cell ID is acquired which describes the necessary cell environment parameters. Upon command, the AGV docks automatically to the predefined path, which is parallel to the work station.
1.3 Objective
The following list contains the requirements for the whole AGV unit, as specified by the company. Items are listed in descending order of priority. The automated control features of the AGV are described in 6(a)-(e).
1. The added feature of preference should be an option, i.e. the basic version of the
unit shall not bear costs for the selected features.
2. It should be possible to add the feature during field operations or at the working
cell.
3. The added feature should be applicable to different kinds of working cells with
different setups of operating equipment.
4. It should be easy and fast to install the added feature at the working cell.
(a) The installation at one working cell should be performed in less than four
hours.
(b) Installation without mechanical tampering on the working site floor is preferred.
5. The added feature should be robust against soiling, wear and tear.
6. The added features will exist in different versions with certain levels of performance.
(a) The unit is only aligning to a predefined position (point) within the working
cell.
(b) The unit is following a predefined path, but the drive command is given
manually by the operator.
(c) The unit is aligning to the work station automatically, along the predefined
path.
(d) The operating tool of the unit is adjusting height with regard to input coming
from the work station.
(e) Combinations of the above features.
7. The accuracy of the absolute position must be within ±5 mm with verifiable repeatability. This condition is not relevant while the unit is moving.
8. The unit should be able to recognize which working cell it is currently situated in.
The ID of the working cell will determine which features, choices and information are available to the operator.
The main research questions can be summarized as follows
AGV positioning. Is it possible to achieve a positioning system for the AGV within
an accuracy of 5 mm?
AGV navigation. Is it possible to navigate the AGV to a predefined path and enable
following that path, without any mechanical tampering on the floor?
Working cell identification. Is it possible for the AGV to identify which working cell
it is currently situated in?
Problem identification
Many of the challenging problems in this project will probably arise from the combination of the given objectives. Combining a mobile unit and high-accuracy positioning is identified as the greatest challenge, especially considering the preference for no mechanical tampering on the working site floor.
Furthermore, continuously verifying the accuracy of the estimated stop position, choosing sensor techniques that allow for robustness, developing software with all required functions (such as sensor fusion) and solving all required interfacing are identified as major challenges. The interfacing can be divided into AGV-sensors, sensors-PC and PC-user.
The parallel and height alignment mentioned above do not have to be performed in real time, but can be executed with a certain (limited) delay. From a more practical perspective, we have to find a solution for docking the unit and allowing it to align to the parallel movement of the working station. The software architecture must be able to manage common errors and remain robust against user operations and/or extraordinary sensor inputs that might result in severe consequences. If it fits within our time frame, we would like to implement the z-alignment of the operating tool versus the working station.
1.4 Limitations
The following items will not be covered by the project work
• Handling of the unit until it has reached its position for automated docking.
• Handling of the unit after it has been used in the automated mode.
• Software-related safety philosophy according to any specific ISO standard. The company will not rely on our solution being final in that sense, but we will have to consider and work with such principles in mind.
• Hardware-related safety philosophy, including collision avoidance for the AGV.
• Real time following of the work station.
• Selection of hardware and/or software outside our working prototype.
• Selection of DC motors and frame body.
1.5 Method
This section covers the planned work flow and approach for solving the specified tasks.
The work flow described below follows the Gantt-chart available in Appendix B. The
Use-Case (Appendix A) will define the technical objectives of the project. The method
and template for the Use-Case was provided by the company. Major check-ups with
the company will be performed at the Toll Gates after each phase. Minor check-ups are
scheduled every second week.
Definition Phase
The Use-Case describes interaction between user and system in a step-by-step manner.
This will be the primary benchmark for our working progress. The Design Architecture
can be initiated once a crude draft of the Use-Case is complete. It has to be noted that
the Use-Case will be continuously updated during the project as more information and
details are gathered.
The task of defining the Design Architecture includes searching for different system solutions. These solutions are rough approaches to reach the objectives, and will not include detailed information on hardware and software components. Information for creating the different system solutions will be gathered through brainstorming, consulting with supervisors, web searches and literature searches. We will also attempt to acquire first-hand insight via study visits at hospitals and environments where similar solutions are used (e.g. AGVs in warehouses).
Design Phase
The first priority of the Design Phase is to evaluate and choose an approach which fulfills
the benchmarks of the Use-Case according to preferences, feasibility and the demands
specified by the company. The Test Specification determines how the Detailed Design will be tested and evaluated, and is tailored to the Use-Case. The Detailed Design phase includes choosing hardware and producing software.
The Test Specification is solely a communication tool between us and the company and
will not be included in this report.
Verification Phase
The Verification Phase starts with merging the designed system with the complete prototype setup provided by the company. A large portion of this phase will be dedicated to adapting the designed system to the prototype (the complete setup with all wheels, bottom plate, motors, sensors and system logic). The resulting solution, with modifications, will be explained in a separate Test Report which is communicated to the company at Toll Gate 3 (TG 3). The Test Report will not be included in this report.
Validation Phase
The validation process will be carried out as a workshop involving the company as a
jury. A demonstration of the prototype with the added features will be a final measure
of achieved results.
2 Design Architecture

In this chapter we will discuss a few possible approaches to solve the task. As stated in the introduction, the unit can be considered an AGV during the docking and alignment phase. With this condition, the acquisition of possible solutions is fairly straightforward. The utility of AGVs is increasing, which is noticeable in several industries. Besides being able to perform multiple-shift operations and repetitive sequences, they allow a more rigorous tracking of material, improve safety and reduce labor costs and material damage [1].
All AGVs today are equipped with embedded systems (sensors, bumpers and/or cameras) [2]. The AGV concept used for this unit is Dual rear drive and steer wheels, which is suitable for forward- or single-direction movement [2]. The steering concept is called Steered-Wheel steer control, which is similar to that of a shopping cart.
Figure 2.1 below illustrates a common architecture in AGV design. The sensors acquire pathway and ambient information. The actuators represent activation of engine movement, lifting/loading or any action the AGV is designed to perform. The communication infrastructure manages information exchange with other AGVs and the main controller system. The AGV Controller is the information hub for all hardware. The control sensitivity is determined by a logical map. The map can be magnetic tape, a virtual path, laser guidance et cetera. The AGV Agent embeds the system logic.
The working prototype for this project will not have any main controller system since
there is only one unit (AGV). In other words, there will be no need for any communication
infrastructure initially. The focus will mainly be on the actuators controlling the DC
motors for the rear wheel pair.
2.1.1 Optical
There are several ways to navigate AGVs with optical systems. Izosaki et al. [4] suggested a solution with cameras in the ceiling equipped with a band-pass filter for IR light. An IR LED marker pattern was attached to the AGV to be able to detect its position. Unfortunately, the maximum error of 11.7 mm provided by this solution is not sufficient for our requirements [4].
Another very common optical approach is the use of lasers, which can either navigate by reflectors in the environment or find Euclidean distances to certain landmarks [5]. The mean error reported here was 13 mm, but these values vary with the weight and stature of the AGV. Many of the optical systems are not constructed to handle relatively small deviations such as ours (±5 mm). The specifications that are emphasized by the AGV-system OEMs are mostly maximum payload and AGV speed [6]. In a short conversation with Atab, a Swedish manufacturer of AGVs, the company stated that most systems require a precision down to ±5 mm. The main issue with the laser-navigated AGVs is that the system requires a rotating device that emits beams to the reflectors. Such a device might not be suitable in a clinical environment due to moving personnel and other equipment in the surroundings which can block the beam path. The state-of-the-art systems are seemingly tailored for larger navigation patterns such as warehouse complexes.
2.1.2 Vision-based
Passive systems such as vision-based sensors are a growing application in the automotive industry [7]. One obvious downside of this approach is the relatively computationally heavy image processing. Note that the computational load in a clinical environment with low speed and simple patterns (such as a color tape against the floor) would be significantly mitigated. Also, a vision-based system in a clinical environment would be considerably more robust due to the lack of changing environmental conditions such as rain, fog and sun. The low requirements on the performance parameters of the image acquisition device, such as resolution, depth, stereo vision and colors, will also decrease the costs [7].
2.1.3 Magnetic
Today there are a few different methods/ideas for using magnetic fields as guidance systems. One idea that is being tested in the automotive industry is based on bipolar cylinder magnets that are hidden in the road at equal distances, creating binary codes by alternating which pole faces upwards [8]. In this way it can provide both lane guidance and information about road conditions.
A more common application is guidance of AGVs in the manufacturing industry, where the magnetic systems often consist of roughly a magnetic tape and a magnetic field sensor [3]. There are numerous companies that provide complete solutions towards the industry, but there are also providers of the separate components.
These kinds of systems are often developed towards a specific application. Hence, literature-based research is often unavailable or nonexistent, which means that information about the systems is mostly found in data sheets. A common factor for these systems is that there often is not a high demand for positioning accuracy.
Designing a magnetic tape guidance system that has an accuracy better than ±5 mm
will set high demands on both hardware and software.
2.1.4 Encoders
Shaft encoders convert the angular motion of a shaft to bits or an analogue signal. Shaft encoders allow for dead-reckoning, which is the simple positioning technique that determines the AGV position based on the distance traveled by each wheel. This measuring technique is generally referred to as odometry [9]. The accuracy of the positioning is limited by the number of pulses per wheel revolution produced by the encoder. The odometry has an inherent systematic error since the dimensions of the mechanical system cannot be measured exactly. Thus, it will accumulate error over time that must be accounted for.
An odometric system is also relatively sensitive to stochastic errors in the model. If the AGV should slip or skid due to irregularities in the floor, or slightly bump into something, the system will suffer from an offset bias in the measurements. Unless these stochastic processes are accounted for in the model, the AGV will not be able to recover the true position without another, complementary sensor input.
Each of the above listed systems comes with both positive and negative aspects, which are gathered below.
Optical
Prō
+ Suitable to navigate a fleet of AGVs.
Contrā
- Not suitable in an environment with moving equipment and personnel that
might block the optical path.
- High accuracy, optic systems are quite expensive and installation-heavy.
Vision-based guidance
Prō
+ Allows for high resolution depending on hardware.
+ May be easier to realize absolute positioning.
Contrā
- Can demand high computational power.
- Sensitive in some environments.
Magnetic

Contrā
- Magnetic fields are complex.
- May be sensitive for interference.
- It may be difficult to realize absolute positions due to the complex geometries of the magnetic fields.
Encoders
Prō
+ Simple implementation.
+ Fast calculations.
Contrā
- Accumulates error over time due to uncertainties in geometries.
- Relatively sensitive to stochastic errors (such as wheel slip).
- Cannot recover from errors alone.
Contrā
- Low accuracy, depending on the distance between each diode.
The path planning algorithm is considered complete if and only if the AGV states, a trajectory with discrete nodes, sub targets and a main target exist such that the AGV may complete its task [16]. There are mainly three rather significant simplifications that will constitute the foundation of the path planning algorithm. One may argue regarding the information overlap in the following statements. First, since the AGV is operating at relatively low speed (maximum operating speed below 50 mm/s when in parallel alignment mode), dynamics such as inertia, slip and lateral acceleration rarely require any consideration. This advantage also facilitates control issues on the cognitive level of the AGV. The second simplification is that a differential-drive AGV, such as in this project, is holonomic¹. The states describing the position of the AGV (x, y, θ) have three degrees of freedom (DoF), with the holonomic constraints (ẋ, ẏ, θ̇) [16]. Since a differential-drive AGV can rotate around its center point, the movement can be regarded as holonomic. Third, the AGV is modeled as a point in a 2-dimensional Euclidean space.
Although it usually constitutes a very large portion of the problem solving, obstacle avoidance is not within the scope of this project. The present environment will not include any obstacles of concern. In the general case, different approaches represent obstacles in different manners (polygons, Voronoi diagrams, cell decomposition, potential fields etc.) [16].
¹ Holonomicity describes the relation between the total degrees of freedom of the AGV, or more generally of any system, versus the number of controllable degrees of freedom. If the number of controllable degrees of freedom is less than the total number, the system (AGV) is considered non-holonomic. If the numbers are equal, the system is considered holonomic.
We decided to evaluate three different techniques for the implementation of path planning, namely Follow-the-carrot, Pure pursuit and Quadratic curve. These techniques are presented below.
Follow-the-carrot
Follow-the-carrot is one of the most intuitive and simple path planning algorithms that exist. A line is drawn between the local AGV origin and the current "carrot" point (sub target). The orientation error is defined as the angle between the local AGV coordinate system and the carrot point. Some control regulator (P/PI/PID) minimizes this error and the AGV drives towards the point. The obvious downsides of this implementation are that it cuts corners and allows for large oscillations if the carrot points are adjacent and the AGV is operating at higher speed [17].
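As a small illustration, one follow-the-carrot step could be sketched as below. This is our own minimal C++ sketch: the proportional gain kP, the nominal speed v and the wheel-speed mapping are illustrative assumptions, not values or interfaces from this project.

    #include <cmath>

    struct Pose { double x, y, theta; };  // AGV pose in global coordinates

    // One follow-the-carrot step: P-control on the orientation error towards
    // the current carrot point, mapped onto differential-drive wheel speeds.
    void followTheCarrotStep(const Pose& agv, double carrotX, double carrotY,
                             double kP, double v, double& vLeft, double& vRight)
    {
        // Orientation error = bearing to the carrot point minus current heading,
        // wrapped to [-pi, pi].
        double err = std::atan2(carrotY - agv.y, carrotX - agv.x) - agv.theta;
        err = std::atan2(std::sin(err), std::cos(err));

        double w = kP * err;   // rotational correction from the P-regulator
        vLeft  = v - w;        // slower inner wheel
        vRight = v + w;        // faster outer wheel
    }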
Pure pursuit
The pure pursuit algorithm calculates a circular segment between the local AGV coordinate system and the goal point. The local origin and the goal point share the same circle center. A steering angle related to the circular segment is calculated and translated into corresponding outputs to the differential-drive wheels. The goal point is continuously updated during the pursuit. The algorithm can be viewed as follows [17]
(a) Compute an (arbitrary) point close to the AGV with a common center point
to the origin of the AGV.
(b) Compute the Euclidean distance between these points.
3. Transform the goal point coordinates into the local AGV coordinates.
5. Move the AGV towards the goal point along the circular segment.
This algorithm is normally more stable than the Follow-the-carrot approach [17]. The smoothing of the path into a curvature allows for fewer oscillations.
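A sketch of the corresponding computation is given below, assuming the standard pure pursuit relation in which the curvature of the circular segment is 2·y_l / l², where y_l is the lateral coordinate of the goal point in the AGV frame and l the look-ahead distance; the names and the wheel-speed mapping are our own, not taken from [17].

    #include <cmath>

    struct Pose { double x, y, theta; };

    // One pure pursuit step: transform the goal point into the AGV frame,
    // compute the curvature of the circular segment and map it to wheel speeds.
    void purePursuitStep(const Pose& agv, double goalX, double goalY,
                         double v, double wheelBase, double& vLeft, double& vRight)
    {
        // Goal point in the local AGV frame (x forward, y to the left).
        double dx = goalX - agv.x, dy = goalY - agv.y;
        double xl =  std::cos(agv.theta) * dx + std::sin(agv.theta) * dy;
        double yl = -std::sin(agv.theta) * dx + std::cos(agv.theta) * dy;

        // Curvature of the circle through the AGV and the goal point whose
        // center lies on the local y-axis: kappa = 2*yl / l^2.
        double l2 = xl * xl + yl * yl;             // squared look-ahead distance
        double kappa = (l2 > 0.0) ? 2.0 * yl / l2 : 0.0;

        // Differential-drive wheel speeds for speed v and turn rate w = v*kappa.
        double w = v * kappa;
        vLeft  = v - 0.5 * wheelBase * w;
        vRight = v + 0.5 * wheelBase * w;
    }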
Quadratic curve
Yoshizawa et al. [18] proposed a quadratic curve path tracking method, which they claim
is simple to implement, requires low computational power and follows the desired path
accurately. This section will summarize that proposal.
The quadratic curve algorithm uses a reference point along the desired path, a given length away from the AGV position, and a quadratic link is then calculated between the two points. As a second step, the control inputs to the AGV are calculated as rotational and translational velocities such that the AGV will move along the quadratic link. A graphical explanation is shown in Figure 2.2.
Implementation of the method requires a few mathematical expressions, defining the reference state vector as X_r = (x_r, y_r, θ_r)^T and the AGV state vector as X_c = (x_c, y_c, θ_c)^T.
The quadratic curve in local AGV coordinates is calculated by

    y = A x^2,  \qquad  A = \frac{\mathrm{sign}(e_x)\, e_y}{e_x^2}    (2.1)

where the error vector, e, is calculated by transforming the global error, (X_r − X_c), according to the following equation

    e = \begin{bmatrix} e_x \\ e_y \\ e_\theta \end{bmatrix}
      = \begin{bmatrix} \cos\theta_c & \sin\theta_c & 0 \\ -\sin\theta_c & \cos\theta_c & 0 \\ 0 & 0 & 1 \end{bmatrix} (X_r - X_c)    (2.2)
To make the AGV move along the quadratic curve, the AGV control signals are derived as

    v = \mathrm{sign}(e_x)\, \sqrt{\dot{x}^2 (1 + 4A^2 x^2)}    (2.3)

    \omega = \frac{2A\dot{x}^3}{v^2}    (2.4)
where ω is the rotational velocity and v is the translational velocity. Thus the AGV will move forward when e_x > 0 and backward otherwise. x at time n∆t ≤ t < (n + 1)∆t is given by

    x = K_n (t - n\Delta t),  \qquad  K_n = \mathrm{sign}(e_x)\, \frac{\alpha}{1 + |A_n|}    (2.5)
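To make the use of equations (2.1)-(2.4) concrete, a minimal sketch of one control step is given below. The variable names are ours; the handling of x and ẋ over the sampling interval (eq. (2.5)) and the saturation of the control signals are left out.

    #include <cmath>

    struct State { double x, y, theta; };  // state vector (x, y, theta)

    // One quadratic-curve tracking step: compute the local error (eq. 2.2),
    // the quadratic link y = A x^2 (eq. 2.1) and the controls (eq. 2.3-2.4)
    // for a given point x, with speed xDot, along the local curve.
    void quadraticCurveStep(const State& Xr, const State& Xc,
                            double x, double xDot, double& v, double& omega)
    {
        // Global error rotated into the local AGV frame.
        double dx = Xr.x - Xc.x, dy = Xr.y - Xc.y;
        double ex =  std::cos(Xc.theta) * dx + std::sin(Xc.theta) * dy;
        double ey = -std::sin(Xc.theta) * dx + std::cos(Xc.theta) * dy;

        // Quadratic link between the AGV and the reference point.
        double sgn = (ex >= 0.0) ? 1.0 : -1.0;
        double A = sgn * ey / (ex * ex);

        // Translational and rotational velocity.
        v = sgn * std::sqrt(xDot * xDot * (1.0 + 4.0 * A * A * x * x));
        omega = 2.0 * A * xDot * xDot * xDot / (v * v);
    }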
2.2.3 Localization
Localization is a process where the AGV recognizes its position in a defined reference
system by estimation techniques. Typically, the reference system is a three-dimensional
Cartesian system constituting the room occupied (partly) by the AGV. Below we will
discuss a couple of approaches that we came across in the pre-study phase.
Kalman filtering
To determine the state of the system, i.e. the position and velocity of the unit, a typical
approach is to use a Kalman filter algorithm. The Kalman filter utilizes data fusion
between sampled measurement data from the unit and a linear dynamic model of the
system. The recursive algorithm estimates the internal state of the system from a set
of measurements that are noisy. In this context, the linear dynamic system would be
represented by a kinematic model of the AGV, with measurements from the wheels
to determine position and velocity. Weighting between these two information sources
will produce estimated state variables. For a regular Kalman filter, there are some requirements for providing an optimal algorithm:
• The dynamic system is linear. There are extensions that can handle nonlinearities, but there are drawbacks. One is that a linearization can produce unstable filters if some assumptions are incorrect [19].
• The parameters of the dynamic system must be accurately represented.
• The noise sources have a Gaussian distribution and remain Gaussian after passing
through the dynamics of the linear system [20].
[Figure: odometry geometry of the differential-drive AGV — wheel paths A to A′, B to B′ and C to C′, turn radius R, rear wheel distance L, traveled distances ∆Dl, ∆D, ∆Dr and heading change ∆θ.]
R L
The rear wheel pair of the AGV has a common axis along the circular segment B to B’.
This represents the ”true” position of the AGV. The left wheel travels along the path A
to A’ and the right wheel travels along the path from C to C’. L indicates the distance
between the rear wheels. The distance traveled ∆D by the common axis and the changed
angle ∆θ yields
∆Dr = (L + R)∆θ, ∆Dl = R∆θ
Giving,
∆D = (∆Dr + ∆Dl )/2 (2.9)
∆θ = (∆Dr − ∆Dl )/L (2.10)
∆D is thus the average of the two wheel paths ∆Dr and ∆Dl, and ∆θ is proportional to their difference. Further, the position P of the AGV in 2-D space at a given sample k is represented by

    P_k = (X_k, Y_k, \theta_k)
If the previous position is given by Xk−1 and Yk−1 and the previous orientation is given
by θk−1 , there is a common method to approximate the position along an arbitrary path
given by
    X_k = X_{k-1} + \Delta D_k \cos\!\left(\theta_{k-1} + \frac{\Delta\theta_k}{2}\right)    (2.11)

    Y_k = Y_{k-1} + \Delta D_k \sin\!\left(\theta_{k-1} + \frac{\Delta\theta_k}{2}\right)    (2.12)

    \theta_k = \theta_{k-1} + \Delta\theta_k    (2.13)
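A direct implementation of this update for one encoder sample could look as follows (a sketch with our own naming; the conversion from encoder pulses to ∆Dl and ∆Dr is described in Section 2.3.2).

    #include <cmath>

    struct Pose { double x, y, theta; };

    // Dead-reckoning update, eq. (2.9)-(2.13): advance the pose one sample given
    // the distances traveled by the right and left wheel and the wheel distance L.
    Pose odometryUpdate(const Pose& prev, double dDr, double dDl, double L)
    {
        double dD     = 0.5 * (dDr + dDl);  // eq. (2.9), distance of the axis midpoint
        double dTheta = (dDr - dDl) / L;    // eq. (2.10), change in heading

        Pose next;
        next.x     = prev.x + dD * std::cos(prev.theta + 0.5 * dTheta);  // eq. (2.11)
        next.y     = prev.y + dD * std::sin(prev.theta + 0.5 * dTheta);  // eq. (2.12)
        next.theta = prev.theta + dTheta;                                // eq. (2.13)
        return next;
    }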
The AGV can then be described by the non-linear state-space model

    x_k = f(x_{k-1}) + w_{k-1}
    z_k = h(x_k) + v_k    (2.14)

where w_k is the system noise, which covers all disturbances and uncertainties in the subsystems, such as slip, wheel diameter, rear wheel distance, uneven substrate and constraints on all wheels. In addition, the vision-based system has pixel errors and image artifacts that contribute, as well as systematic errors such as camera placement bias. v_k represents all measurement/observation noises. The system and measurement noises, with covariance matrices Q and R respectively, are assumed to be zero mean random variables with Gaussian distributions that depend on the state variable x_k
" # " # " #!
wk 0 Q 0
=N ∼ ,
vk 0 0 R
The state transition matrix F is defined by the following Jacobian

    F_{k-1} = \left.\frac{\partial f}{\partial x}\right|_{x_k = \hat{x}_{k|k-1}}

where x, y and θ are given by equations (2.11), (2.12) and (2.13) respectively. The EKF algorithm is then calculated as follows. The superscript (−) denotes predicted values and the hat accent (ˆ) denotes estimated values. Steps 2 to 6 are iterated for each measured sample.
    \hat{x}^-_k = F \hat{x}_{k-1}

    P^-_k = F P_{k-1} F^T + Q

    K_k = P^-_k H^T (H P^-_k H^T + R)^{-1}

    \hat{x}_k = \hat{x}^-_k + K_k (z_k - H \hat{x}^-_k)

    P_k = P^-_k - K_k H P^-_k
Note that the measurement matrix H will be different depending on the measurement
subsystem. The odometry will measure Ḋl,k and Ḋr,k (or rather pulses translated into
speed) and the vision-based system will measure yk and θk at each sampling instant
respectively. zk is a measurement at time k.
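For reference, one predict-update cycle of the filter is sketched below. The Eigen library is used here purely for compact matrix algebra and is an assumption of the sketch (the project used a different C++ matrix library); H, R and z are supplied by whichever subsystem delivered the measurement, as noted above.

    #include <Eigen/Dense>

    // One Extended Kalman filter cycle with linearized transition matrix F.
    // xHat and P are updated in place; H, R and z come from the active
    // measurement subsystem (odometry or camera).
    void ekfStep(Eigen::VectorXd& xHat, Eigen::MatrixXd& P,
                 const Eigen::MatrixXd& F, const Eigen::MatrixXd& Q,
                 const Eigen::MatrixXd& H, const Eigen::MatrixXd& R,
                 const Eigen::VectorXd& z)
    {
        // Prediction of state and error covariance.
        Eigen::VectorXd xPred = F * xHat;
        Eigen::MatrixXd PPred = F * P * F.transpose() + Q;

        // Kalman gain.
        Eigen::MatrixXd S = H * PPred * H.transpose() + R;
        Eigen::MatrixXd K = PPred * H.transpose() * S.inverse();

        // Measurement update of state and error covariance.
        xHat = xPred + K * (z - H * xPred);
        P    = PPred - K * H * PPred;
    }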
Consider, as an illustration, a train traveling along a railroad, described by two parameters: its position along the railroad and its speed. A normal assumption is then that the speed of the train will not change between two samples if the sample time is small enough. So what happens when the train accelerates? This is where the Q matrix comes in. If the acceleration can be of magnitude a and the system states are x and v, then the Q matrix can be calculated by²:
" # " #
0 h ih i 0 0
Q = BQ1 BT = a × ∆T 0 v =
v 0 v 2 × a × ∆T
where B is selected as the columns of the system model that represent speed (system change per time unit) and the diagonal of Q1 should be assigned in the order of magnitude of the system's acceleration. Determining the Q matrix in this manner would probably not be necessary if the system were linear and had a constant sampling time, since Q would then be constant and a trial-and-error method could be sufficient. But in the opposite case, a linearized system with a varying sampling time, the method described above enables improved compliance.
² Lars Hammarstrand, Post-Doc Research Fellow at the Department of Signals and Systems, Chalmers University of Technology
2.3.2 Overview
Figure 2.4 below illustrates the estimation sequence used in this project. The vision system collects observations yo, θo at a sample rate limited by the frames per second that the PC and/or camera can process. A rotary encoder collects observations (pulses) as often as the device allows. The pulses are translated into the respective wheel speeds, determined by the time between samples, the wheel radius and the number of pulses per wheel revolution. Using the previous reference states xR, yR, θR from the system model (stored in software variables), the Kalman filter fuses the data in different steps (described earlier in Section 2.2.3) into new estimated states.
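As a small illustration, the pulse-to-speed translation amounts to the following (the argument names are placeholders; the actual pulse count per revolution and the gearing of the prototype are described in Chapter 3).

    // Translate an encoder pulse count over one sample interval into a wheel speed.
    // pulses: pulses counted since the last sample, ppr: pulses per wheel revolution,
    // wheelRadius in meters, dt in seconds; returns the wheel speed in m/s.
    double pulsesToWheelSpeed(long pulses, long ppr, double wheelRadius, double dt)
    {
        const double pi = 3.14159265358979323846;
        double revolutions = static_cast<double>(pulses) / static_cast<double>(ppr);
        double distance = revolutions * 2.0 * pi * wheelRadius;  // arc length traveled
        return distance / dt;
    }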
The AGV Control "box" contains the quadratic curve algorithm, which tracks the estimated states in conjunction with the respective virtual path stored in a database.
Figure 2.4: The Extended Kalman filter fuses observed data from the camera yo , θo and the
odometry xo , yo , θo and generates new estimated states in conjunction with the non-linear
system model.
Stochastic errors
• Skid.
• Sudden change of illuminating conditions.
• Mechanical deformation.
• Floor irregularities.
• Collision.
• Sudden image artifacts.
Systematic errors
• Wheel diameter.
• Rear wheel distance.
• Camera fixation (angle, placement).
• Image acquisition noise.
2.7 Summary
The AGV prototype is based on a power-driven wheelchair intended for combined indoor/outdoor use. The AGV concept is dual rear drive and steer wheels with a steering mode called steered-wheel steer control. A vision-based system fused with dead-reckoning from encoder measurements is used for the AGV perception. An algorithm called quadratic curve performs the path planning sequence in the navigation. The AGV estimates its position with an Extended Kalman filter algorithm.
3 Implementation
3.1 Components used for realizing the solution

1. PC laptop
2. PhidgetEncoder HighSpeed 4-Input
3. PhidgetAnalog 4-Output
4. Web camera
5. Voltage divider
For the odometry measurements, two pulse encoders from Hengstler¹ were used, one for each wheel. They produce 720 pulses per revolution, or 2880 when also counting the rising and falling edges of each pulse. The encoder shaft was connected to the drive shaft via a couple of cogs and a timing belt. Due to the method of mounting the encoders the relationship

¹ http://www.hengstler.com/
stage) of 2.5 volt and joystick maneuvers that resulted in equally sized voltage changes
in opposite directions on the two channels. The embedded motor controller had only
current controlled feedback.
3.2.1 Simulation
Before the developed software was executed on the real system, a number of simulations and tests were performed with the assistance of the GUI. The most important steps are listed below:
• Simulating path generation.
• Testing Kalman filter with simulated values.
• Testing motor control algorithm, i.e. path tracking.
• Simulating software sequences.
• Testing image processing algorithms.
These tests would have been very hard, and in some cases impossible, to perform without the GUI.
3.2.2 Verification
When the software was first executed on the target system, the GUI was mainly used to verify measurements from the encoders and the camera. Later on it also served as a tool for verifying Kalman filter parameters and the global position, and for calibrating the rear-axis length and tire radius of the AGV.
3.2.3 Interacting
In the final software version, user input is only needed on a few occasions, but there are possibilities to get a lot of information about the system's current state. An example of when user input is needed is demonstrated in Figure 3.2.
³ http://www.robertnz.net/index.html
⁴ http://www.openframeworks.cc/about/
⁵ http://www.codeblocks.org/
3.4 Camera measurements

Figure 3.4: Description of parameters extracted from camera images when a line and calibration point are detected.
The accuracy of these measurements is determined by the camera resolution, the distance between the camera focal point and the floor, and the view angle of the camera. With these three parameters, a relation between camera pixels and real-world distance can be determined. This setup can be viewed in Figure 3.5. In our setup, one pixel represented approximately 0.2 mm. The image noise affected our measurements by one or two pixels, which corresponds to a tolerance radius of less than 0.5 mm on the centre point of a measurement.
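The relation can be sketched as below; the height and view angle in the usage comment are arbitrary example numbers, not the actual mounting values of our setup.

    #include <cmath>

    // Millimetres on the floor represented by one image pixel, given the camera
    // mounting height above the floor, its horizontal view angle and the image width.
    double mmPerPixel(double heightMm, double viewAngleRad, int imageWidthPx)
    {
        // Width of the floor strip covered by the camera's field of view.
        double floorWidthMm = 2.0 * heightMm * std::tan(0.5 * viewAngleRad);
        return floorWidthMm / static_cast<double>(imageWidthPx);
    }

    // Example (hypothetical numbers): a camera 300 mm above the floor with a 60 degree
    // view angle and 640 pixels image width gives roughly 0.5 mm per pixel.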
Figure 3.5: Illustrating important variables to consider when mounting the camera.
Regarding the software part of these measurements, there is mainly one parameter, the threshold, that needs to be tuned depending on the light conditions and the contrast between the background image and the captured frame. The background image can either be manually set to e.g. white or be a captured image of the floor (without any obstacles present). The latter was not an option in our case, since the camera was moving and the floor had a canescent mottled pattern. Every captured frame is then compared with the background image, generating a new image. As a last step before the image is processed with object detection algorithms, the threshold is applied. This action discards any pixels that do not satisfy the threshold value, i.e. do not have enough contrast compared to the background image.
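A minimal sketch of this differencing and thresholding step is shown below. It is written against OpenCV purely as an illustration; the project's image processing was built on other tooling, and the threshold value passed in would have to be tuned to the light conditions as described above.

    #include <opencv2/opencv.hpp>

    // Compare a captured frame against the background image and keep only the
    // pixels with enough contrast, producing a binary mask for object detection.
    cv::Mat segmentForeground(const cv::Mat& frame, const cv::Mat& background,
                              double thresholdValue)
    {
        cv::Mat grayFrame, grayBackground, diff, mask;
        cv::cvtColor(frame, grayFrame, cv::COLOR_BGR2GRAY);
        cv::cvtColor(background, grayBackground, cv::COLOR_BGR2GRAY);

        // Per-pixel absolute difference between the frame and the background.
        cv::absdiff(grayFrame, grayBackground, diff);

        // Discard pixels that do not satisfy the threshold value.
        cv::threshold(diff, mask, thresholdValue, 255, cv::THRESH_BINARY);
        return mask;
    }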
" #
0 0 0 1 0
Hodo =
0 0 0 0 1
0 1 0 0 0
0 0 1 0 0
Hcam =
0 0 0 1 0
0 0 0 0 1
In the same manner, the measurement noise covariance matrix R depends on which measurement is being performed

    R_{odo} = \begin{bmatrix} r_{k1} & 0 \\ 0 & r_{k1} \end{bmatrix}

    R_{cam} = \begin{bmatrix} r_{k2} & 0 & 0 & 0 \\ 0 & r_{k2} & 0 & 0 \\ 0 & 0 & r_{k1} & 0 \\ 0 & 0 & 0 & r_{k1} \end{bmatrix}
where r_{k1} = 10⁻³ represents the error covariance from the odometry and r_{k2} = 10⁻⁶ represents the error covariance from the camera measurements. These were empirically set by simulations and test runs. The size of the measurement z_k depends on the number of measurements performed. The system noise covariance matrix Q was as follows

    Q = B Q_1 B^T

and

    Q_1 = \begin{bmatrix} q_1 & 0 \\ 0 & q_1 \end{bmatrix}
where q_1 = 0.01·∆t and ∆t is the sampling time interval. The initial error covariance matrix P_0 was empirically set to

    P_0 = \begin{bmatrix} p_0 & 0 & 0 & 0 & 0 \\ 0 & p_0 & 0 & 0 & 0 \\ 0 & 0 & p_0 & 0 & 0 \\ 0 & 0 & 0 & p_0 & 0 \\ 0 & 0 & 0 & 0 & p_0 \end{bmatrix}

where p_0 = 10⁻¹⁰. The scalar p_0 represents a small uncertainty covariance since we initially define the AGV position.
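The matrices above can be assembled as in the sketch below (Eigen is again used only as an illustrative matrix library, and the state ordering implied by the H matrices is our reading; the numerical values are the ones stated in this section).

    #include <Eigen/Dense>

    // Assemble the measurement and covariance matrices of this section for the
    // five-state model. Hodo/Rodo are used for odometry updates, Hcam/Rcam for
    // camera updates; Q1 enters Q = B * Q1 * B^T and P0 is the initial covariance.
    void buildEkfMatrices(double dt,
                          Eigen::MatrixXd& Hodo, Eigen::MatrixXd& Hcam,
                          Eigen::MatrixXd& Rodo, Eigen::MatrixXd& Rcam,
                          Eigen::MatrixXd& Q1, Eigen::MatrixXd& P0)
    {
        const double rk1 = 1e-3;      // odometry error covariance
        const double rk2 = 1e-6;      // camera error covariance
        const double q1  = 0.01 * dt; // system noise level
        const double p0  = 1e-10;     // small initial uncertainty

        Hodo = Eigen::MatrixXd::Zero(2, 5);
        Hodo(0, 3) = 1.0;  Hodo(1, 4) = 1.0;

        Hcam = Eigen::MatrixXd::Zero(4, 5);
        Hcam(0, 1) = 1.0;  Hcam(1, 2) = 1.0;  Hcam(2, 3) = 1.0;  Hcam(3, 4) = 1.0;

        Rodo = rk1 * Eigen::MatrixXd::Identity(2, 2);

        Rcam = Eigen::MatrixXd::Identity(4, 4);
        Rcam(0, 0) = rk2;  Rcam(1, 1) = rk2;
        Rcam(2, 2) = rk1;  Rcam(3, 3) = rk1;

        Q1 = q1 * Eigen::MatrixXd::Identity(2, 2);
        P0 = p0 * Eigen::MatrixXd::Identity(5, 5);
    }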
3.6 Odometry/dead-reckoning test

This was done until the AGV drove straight and the traveled distance error between the real-world measurement and the software's estimate was less than 10 mm. Although not very scientific, there was really no way around it using odometry alone with the crude prototype model that was provided.
In the second step, the AGV should drive three laps along the virtual path of the NASCAR-like track with a path aim of 450 mm (corresponding to the reference point in the quadratic-curve algorithm). The AGV should be able to drive both clockwise and counterclockwise with roughly equal performance. The x and y offsets from the real path should be measured with an inch ruler to provide sufficient information for the calibration.
The path, as mentioned earlier, is NASCAR-like and has a total path length Ltot of
3.7 Summary
The components used in the project were a PC laptop, a PhidgetEncoder HighSpeed 4-Input (USB interface), a PhidgetAnalog 4-Output (USB interface), a web camera, a voltage divider, two optical pulse encoders and the power-driven wheelchair. A Graphical User Interface (GUI) was made to facilitate simulation, verification and interaction with the AGV unit. Simulations started in a MATLAB environment and were later ported to C++. The vision-based system extracts unique environment information from a predefined binary pattern. It also measures the AGV displacement from an optical path.
4 Results
Although the benchmark above is crude, it reveals that the current calibration is within acceptable levels. Figures 4.2a and 4.2b show the result of three laps on the tracks described in Section 3.6. Due to the software setting to aim at the path 450 mm ahead (a consequence of the quadratic curve algorithm), the AGV will "cut corners", as seen in both figures. The path aim relates to the smoothness behavior of the AGV control. We can also derive from Figures 4.2a and 4.2b that the filter states do not deviate much from lap to lap.
With a regular inch ruler, the following data in Table 4.1 were acquired. The values
are momentary states in the Kalman filter.
Due to the difference in wheel diameter, the AGV will behave differently depending on whether it makes a left turn or a right turn. Determining a fitting rear axis distance L that matches both lap directions was difficult. It is almost impossible to distinguish between very small changes in parameters, such as the rear axis distance, and variations from stochastic noise (bumps et cetera) in the environment.
The key knowledge gained from this calibration procedure is the performance of the odometry system on its own. We have already identified that this system will not satisfy the demands on its own.
Note that the angle θ is growing for each lap. The physical angle was not measured.
(a) x or y-coordinate offset per wheel rotation. (b) θ offset per wheel rotation.
The complete data set of measurements can be found in Appendix C. The average
Euclidean distance to the calibration point was calculated to dist = 0.0051 m with a
standard deviation σdist = 0.0030 m. Examining the bar plot in Figure 4.4a yields a
good perspective of the AGV performance.
The corresponding scatterplot of all measured x̂, ŷ can be seen in Figure 4.4b. Together
Another interesting figure in this sequence might be the GUI representation at that instance. A snapshot of the docking phase can be seen in Figure 4.5. The AGV (red) is illustrated on the virtual path (green). As stated in Chapter 3, the GUI has been a most supportive tool. The center of the circle in Figure 4.5 represents the global origin (0,0) in the two-dimensional Euclidean space.
Figure 4.5: The lower right box is the camera perspective, displaying the AGV stopped
at the calibration point (circle). The white line and circle with the subtext ”offSet” is a
software function that represents the position of the operating tool.
4.6 Software execution times

The average cycle time for the whole software was measured to be 17 ms, though the minimum and maximum values are quite far away from that value. The main reason for these scattered measurements is the fact that the execution of most program sequences is conditional. For instance, when the algorithm that identifies station IDs is executed, the cycle time more than doubles due to heavy computations.
4.7 Exclusions
Reviewing the objectives in Section 1.3, we particularly addressed items 5 to 8. Item 6(d) was excluded due to lack of time, and item 6(c) is partially fulfilled by the fact that the AGV can align to predefined coordinates along the optical path, representing standard locations for the working tool.
Whether we successfully completed item 5 (robustness against soiling, wear and tear) is arguable. If the wheels are dirty they will soil the textile tape after some cycles. The same goes for footsteps. The current optical marker thus needs regular cleaning.
5 Discussion and conclusion
donated prototype were obsolete due to the wear and deformations and there were no
sophisticated tools available for us to determine them. As seen in Section 4.3, the model
is quite sensitive to inaccuracies in the mechanical dimensions.
The rear wheels had pneumatic tyres, which had tiresome consequences during the
calibration process. The desiccated tyre tubes of the condemned power-driven wheelchair
deflated after hours of calibration work which set us back both in time and morale. We
had to procure new tyres from a local store.
The same applies to the rear axis/wheel distance. Determining the actual contact
point of the wheels and then actually measuring the distance within a tolerance of ±1
mm is impracticable. The skewed frame body might also contribute to an axis length displacement while turning.
The optimum conditions would have been solid-core wheels with a heavy rubber enfoldment. The wheels should have been industrially mounted, with exact dimensions and tolerances provided.
5.2.2 Odometry
Using pulse encoders in combination with dead-reckoning as the main positioning technique allowed for easy simulation and getting started at an early stage. One should be aware that this technique requires a well-defined mechanical system to be able to deliver high-accuracy positioning. A mismatch between the model (measurements of tyre radius and rear-axis length) and the mechanical system will rapidly result in extensive positioning errors as the mismatch grows.
The choice of pulse encoders will, of course, also play a role. The number of pulses per revolution is the main variable of interest, though we were delayed by the fact that the voltage output from the encoders was not suited for our hardware. If one pulse represents 1 mm of travel for a wheel, the positioning accuracy will be limited to 1 mm. We mounted our pulse encoders with a gear ratio which resulted in a resolution of about ten pulses per millimeter traveled, which from our point of view was more than enough.
of the AGV. The report we examined for the quadratic curve algorithm demonstrated
good simulation results with a similar kinematic model for the AGV.
The localization was also performed without the Extended Kalman filter. The performance was almost as good within the short distance that the AGV was planned to travel. To really assess the difference, we should have had an equal set (30) of measurements without the filter active. Note that the Extended Kalman filter is not an optimum filter, for several reasons. One of the optimality criteria is the Minimum Mean Square Error (MMSE); the MMSE estimator is linear in the data only if the densities are Gaussian and the models linear. Since the EKF is used when the model is non-linear, it cannot be MMSE. From an academic point of view, it was nevertheless a very worthwhile experience to have implemented it.
With a camera setup facing downwards, as in this project, a stereo system would not
add much redundancy but rather add unnecessary computational load. With a different
stereo system setup, for instance two cameras facing forward, the system could have
provided another distance dimension (x) to the filtering algorithm.
One fundamental issue remains. The attachment point of both cameras has to be known, and another camera might also be another source of error. Calibrating the attachment points for two cameras requires an acceptable angle and distance bias. Although, as mentioned earlier, such uncertainties can be embedded in the measurement model.
What about another system to measure the absolute position? Sensory alternatives were discussed in Section 2.1. It is difficult to imagine a simpler system than the odometry with the rotary encoders. Also, considering cost and implementation, odometry has an advantage compared to other positioning systems.
A gyro would have been a suitable complement to compensate for the inadequacies of odometry. In addition to providing further redundancy to the dead-reckoning, a gyro can help manage stochastic errors such as collisions, skid and floor irregularities.
It is debatable whether further redundancy would add significant value to the performance of the AGV positioning. Since the distance between the docking point and the calibration point is short, and several other points along the optical path provide absolute positions, the current setup is probably enough.
5.5 Conclusions
Returning to the main research questions stated in the introduction, we can conclude as follows:
AGV positioning. With the use of odometry and a vision-based system, we reached a mean offset of 0.0051 m with a standard deviation of 0.0030 m. With improved mechanics and a suitable motor controller, we believe that we can achieve the requested accuracy of ±5 mm. We encountered problems with calibrating the software (embedding the kinematic model) for the crude prototype dimensions we had.
We used the Extended Kalman filter algorithm for location estimation, which performed satisfactorily but allows for improvement. The path tracking algorithm called Quadratic curve controlled the motor output in a smooth manner.
AGV navigation. The AGV could successfully navigate without any mechanical tampering on the floor. Odometry is simple and easy to implement, but is sensitive to stochastic errors and by nature accumulates (systematic) errors. Vision-based systems are precise and very potent in acquiring information, but are limited by the relatively computationally heavy algorithms that make acquisition difficult at higher speeds.
As an optical marker, we used a black textile tape which had good optical properties (non-reflective) but was quite susceptible to soiling. The camera we had was cheap and had an attachment unfit for the purpose of adjusting angle and placement.
Working cell identification. The vision-based system identified the current working cell by interpreting a binary pattern on the floor, corresponding to a unique working cell ID. The vision-based system also acquired parameters from the binary pattern regarding the working cell geometry to determine the virtual path.
As a final measure of success we used the feedback received from the company that provided us with the task. They acknowledged our solution as successful and recognized it as a potential add-on for future products.
Improved front wheel pair. Castor wheels with a smaller offset distance between the center axis of the shaft and the contact point give better compliance when changing direction. The new wheels also have less friction and a solid core.
Improved rear wheel pair. The rear wheel pair will have a solid aluminum core supported by a steel ring with a tough rubber enfoldment. These wheels are robust to aging and will (hopefully) not change dimensions over time.
Improved body frame. The body frame will be an extruded aluminum structure. The alpha version had a skewed geometry.
Improved camera attachment. Calibration of the camera position with real calibration tools and a proper attachment will improve the measurement model.
Improved light conditions. A surrounding apron and proper diffuse light sources in
the undercarriage will enhance the vision system significantly. This also includes
an improved optical marker and a camera optimized for the task.
Improved motor controller. The motor controller hardware in the alpha version had issues dealing with small output voltages, resulting in a chopping motion at low speeds. The beta motor controller will have feedback directly from the encoders rather than being current-controlled.
Additional aspects worth considering in future work and improvements are improvement of the object-detection algorithm, a system model with bias factors, and porting the setup to a lean and efficient environment. Improving the object-detection algorithm (not presented in this report) is one step on the road to enhancing the system performance. As of now, there is much redundancy in the algorithm to ensure that the "correct" object is being tracked. Since we are searching for relatively basic geometries and few objects at one sampling instance, we did not put much effort into streamlining the code or evaluating program cycles.
Another improvement to the localization estimation could have been to add the bias factors to the measurement model. This issue has already been covered. Porting the setup to a lean and efficient environment could most certainly increase the FPS of the camera, thus improving the accuracy when reaching calibration points along the optical path. Consider that the program is running in a Windows 7 (32-bit) PC environment with an Intel® Core™ Duo CPU T6600 @ 2.20 GHz, a GeForce GT 130M and 3 GB of usable RAM. It would probably be better with a Linux kernel free from heavy background processes.
Bibliography
[1] T. Meyer, “AGVs: The Future Is Now,” Material Handling Management, p. 14,
October 2009.
[2] D. E. Mulcahy, Materials Handling Handbook, 1st ed., D. E. Mulcahy, Ed. McGraw-
Hill, 1999.
[5] D. Ronzoni, R. Olmi, C. Secchi, and C. Fantuzzi, “AGV Global Localization Using
Indistinguishable Artificial Landmarks,” Shanghai, China, May 2011.
[6] Anonymous, “Laser point the way for Indumat’s AGV vehicle,” Motor Transport,
pp. 16–17, 2004.
[10] J. P. Huissoon, "Curved ultrasonic array transducer for AGV applications," Ultrasonics, no. 4, pp. 221–225, 1989.
[14] A. Azenha and A. Carvalho, “Dynamic analysis of AGV control under dead-
reckoning algorithm,” Robotica, vol. 26, pp. 635–641, March 2008.
[15] C. Soria, E. Freire, and R. Carelli, “Stable AGV corridor navigation based on data
and control signal fusion,” Latin American applied research, no. 2, June 2006.
[17] M. Lundgren, “Path Tracking and Obstacle Avoidance for a Miniature Robot,”
Master’s thesis, Umeå University, 2003.
[19] S. J. Julier and J. K. Uhlmann, “A New Extension of the Kalman Filter to Nonlinear
Systems,” Department of Engineering Science, The University of Oxford, Tech. Rep.,
1997.
[20] R. E. Kalman, "A new approach to linear filtering and prediction problems," Trans. ASME Ser. D, J. Basic Eng., no. 82, pp. 24–45, 1960.
[21] J.-C. Yoo and Y.-S. Kim, "Alpha–beta-tracking index (α–β–Λ) tracking filter," Signal Processing, vol. 83, no. 1, pp. 169–180, July 2003.
[22] C. M. Wang, Location estimation and uncertainty analysis for mobile robots.
Springer-Verlag New York Inc., USA.
[23] J. Coombs and R. Prabhu, "OpenCV on TI's DSP+ARM platforms: Mitigating the challenges of porting OpenCV to embedded platforms," http://www.ti.com/, July 2011.
A Use-Case ver. 1.2
As mentioned in the introduction, a Use-Case describes the interaction between user and system in a step-by-step manner. The purpose of a Use-Case in this context is that it provides a sequential template for software development. Although it is quite simple and lean, it gives a general picture of how the unit is supposed to act and whether that behavior fits what the company had in mind. The Use-Case is updated continuously. More detailed information will be added during the project as more data is acquired on how to solve the task. This version was final at the time of completion of the project.
Docking
This Use-Case describes the docking sequence of the unit (a minimal state-machine sketch
of the normal flow is given after the Special requirements below).
Pre-conditions
No error indications from the unit (light or sound signaling).
A site is predefined by tape or an equivalent marking to indicate where the unit is
to be placed in order to initiate docking. The site also has an optical marker along
which the unit shall be able to perform parallel tracking.
The station ID at the docking site is linked to a global coordinate system which
contains the calibration point coordinates, the virtual path of the optical marker for
parallel tracking and the docking point coordinates themselves.
Normal flow
1. The user maneuvers the unit to the predefined marked site (docking site),
facing a predetermined direction.
2. A vision system acquires station ID, angle and distance offset from a prede-
fined optical binary pattern.
3. The unit indicates (display/light/sound signaling) that it is ready to initiate
docking.
4. The user commands the unit by pushing button(s) to initiate docking.
5. The unit calculates a virtual path based on the acquired angle and distance
offset between the current position and a predefined optical path.
6. The unit awaits the user pushing the dead-man's grip button(s) on a handheld
remote device to enable driving along the virtual path towards the predefined optical path.
7. The unit drives along the virtual path by dead-reckoning using odometry
measurements on each wheel.
8. The unit reaches the predefined optical path and initializes vision tracking.
The vision measurements are fused with the odometry to eliminate the inherent
systematic (cumulative) errors of odometry alone.
9. The unit drives along the predefined optical path until it recognizes the cal-
ibration site (pattern recognition) that indicates an absolute position along
the path.
10. The unit stops at the calibration site and acquires an offset angle and dis-
tance from the calibration marker and indicates docking mode complete (dis-
play/light/sound signaling).
Alternative flow
Start at 7. The user releases the button(s) before the unit has reached the calibra-
tion point.
- 8. The unit stops immediately and puts the docking sequence on hold.
- 9. The unit awaits a new command from the user to continue with the docking
sequence.
Continue at 7.
Alternative flow
Start at 7. The unit has driven the maximum allowed distance along the optical
path without recognizing the calibration point.
- The unit changes direction to make another attempt on finding the calibration
point.
Continue at 1.
Post-conditions
The unit is ready to begin alignment from the predefined calibration site along the
path.
Special requirements
The button(s) commanding the unit to perform the docking sequence must be held
continuously, to ensure that the sequence is monitored and can be cancelled easily.
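The normal and alternative flows above can be summarized as a small state machine. The sketch below is a minimal illustration of that structure, assuming hypothetical state and event names; it simplifies the alternative flows (for example, resuming from hold always returns to driving along the virtual path) and is not the project's implementation.

    from enum import Enum, auto

    class DockState(Enum):
        AT_SITE = auto()        # step 1: unit maneuvered to the marked docking site
        READY = auto()          # steps 2-3: station ID acquired, docking can start
        VIRTUAL_PATH = auto()   # steps 5-7: dead-reckoning along the virtual path
        OPTICAL_PATH = auto()   # steps 8-9: vision tracking along the optical path
        HOLD = auto()           # alternative flow: dead-man grip released
        DOCKED = auto()         # step 10: stopped at the calibration site

    # (state, event) -> next state; event names are hypothetical, not the
    # project's actual signal identifiers.
    TRANSITIONS = {
        (DockState.AT_SITE, "station_id_acquired"):        DockState.READY,
        (DockState.READY, "dock_button_pressed"):          DockState.VIRTUAL_PATH,
        (DockState.VIRTUAL_PATH, "optical_path_reached"):  DockState.OPTICAL_PATH,
        (DockState.OPTICAL_PATH, "calibration_site_found"): DockState.DOCKED,
        # Releasing the dead-man grip puts the sequence on hold from either
        # driving state; pressing it again resumes (simplified here).
        (DockState.VIRTUAL_PATH, "deadman_released"):      DockState.HOLD,
        (DockState.OPTICAL_PATH, "deadman_released"):      DockState.HOLD,
        (DockState.HOLD, "deadman_pressed"):               DockState.VIRTUAL_PATH,
    }

    def next_state(state, event):
        """Return the next docking state, or stay in the current one."""
        return TRANSITIONS.get((state, event), state)

    if __name__ == "__main__":
        state = DockState.AT_SITE
        for event in ("station_id_acquired", "dock_button_pressed",
                      "optical_path_reached", "calibration_site_found"):
            state = next_state(state, event)
            print(event, "->", state.name)

Driving the function with the normal-flow events in order walks the machine from AT_SITE to DOCKED, while a "deadman_released" event during either driving state parks it in HOLD until the grip is pressed again.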
Parallel alignment against the work station operating tool
This Use-Case describes the parallel alignment of the unit against the work station operating tool.
Pre-conditions
No error indications from the unit (display/light/sound signaling).
The docking sequence is completed.
Position data from work station is available and offset between calibration site and
work station is verified.
Normal flow
1. The user moves the work station operating tool to desired position.
2. The user commands the unit by pushing button(s) to initiate parallel align-
ment of the unit against the work station.
3. The unit drives towards the acquired position of the work station, correspond-
ing to a parallel position on the path.
4. When the unit approaches the desired position, it decelerates and stops within
a ±5 mm accuracy of this position (a deceleration sketch is given after this Use-Case).
5. The unit indicates (display/light/sound signaling) that it is within a ±5 mm
accuracy of this position and is ready to perform work task.
Alternative flow
Start at 3. The user releases the button(s) before the alignment sequence is com-
plete.
- 4. The unit stops immediately and puts the alignment sequence on hold.
- 5. The unit awaits a new command from the user to continue with the alignment
sequence.
Continue at 3.
Alternative flow
Start at 3. The unit loses tracking of its position or path.
- 4. The unit stops immediately and puts the alignment sequence on hold.
- 5. The unit indicates (display/light/sound signaling) that it has lost position or
path.
- 6. The unit is set to manual driving mode and is required to be moved manually
to the docking position.
Continue at 1 in Docking.
Alternative flow
Start at 2 or 3. The unit fails to acquire the position of the work station operating
tool, or acquires an invalid position for it.
- 4. The unit stops immediately and puts the alignment sequence on hold.
- 5. The unit indicates (display/light/sound signaling) that it has failed to acquire
the position of the work station operating tool or has acquired an invalid position
for it.
- 6. The unit alerts the user to contact a technician and/or enables manual drive
mode.
Post-conditions
The unit is ready to Perform work task or Release unit from docked mode.
Special requirements
The button(s) commanding the unit to perform the alignment sequence must be held
continuously, to ensure that the sequence is monitored and can be cancelled easily.
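As referenced in step 4 above, stopping within ±5 mm requires the unit to start decelerating no later than when the remaining distance equals the braking distance v²/(2a). The sketch below is a minimal, hypothetical stop controller illustrating that rule; the speed, deceleration and tolerance constants are placeholders, not the project's tuned values.

    V_MAX = 0.2        # assumed maximum speed [m/s]
    A_BRAKE = 0.1      # assumed deceleration [m/s^2]
    TOLERANCE = 0.005  # stopping accuracy [m], i.e. +/- 5 mm

    def speed_command(remaining_distance, current_speed):
        """Return the commanded speed given the remaining distance to the stop."""
        distance = abs(remaining_distance)
        if distance <= TOLERANCE:
            return 0.0                                   # within tolerance: stop
        braking_distance = current_speed ** 2 / (2.0 * A_BRAKE)
        if distance <= braking_distance:
            # ramp the speed down so that v^2 = 2 * a * distance holds
            return min(current_speed, (2.0 * A_BRAKE * distance) ** 0.5)
        return V_MAX

The tolerance check comes first so that the commanded speed is exactly zero once the unit is inside the ±5 mm window, regardless of the current speed estimate.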
Perform work task
This Use-Case describes the execution of a work task while the unit is docked.
Pre-conditions
No error indications from the unit (display/light/sound signaling).
Operating tool parallel alignment completed.
Normal flow
1. The user commands the unit to perform work task by pressing button(s).
2. The unit indicates (display/light/sound signaling) "work task in progress" and
starts to execute the work task.
3. The unit indicates (display/light/sound signaling) "work task completed".
Alternative flow
Start at 2. The user releases the button(s) before the work task execution is
complete.
- 3. The unit stops immediately and puts the work task execution on hold.
- 4. The unit awaits a new command from the user to continue with the work task
execution.
Continue at 2.
Post-conditions
The unit is ready to Perform work task or Release unit from docked mode or initiate
Parallel alignment against the work station operating tool.
Special requirements
The button(s) commanding the unit to perform the work task must be held
continuously, to ensure that the task execution is monitored and can be cancelled easily.
Release unit from docked mode
This Use-Case describes the release of the unit from docked mode back to manual driving.
Pre-conditions
No error indications from the unit (display/light/sound signaling).
In Docked mode.
Normal flow
1. The user commands the unit to Release from docked mode by pressing but-
ton(s).
2. The unit drives to and stops at the docking position.
3. The unit indicates (display/light/sound signaling) that it is ready for manual
driving mode.
4. The user commands manual driving mode.
5. The unit is no longer in docked mode and can be driven manually.
Alternative flow
Start at 2. The user releases the button(s) before the unit has reached docking
position.
- 3. The unit stops immediately and puts the release sequence on hold.
- 4. The unit awaits a new command from the user to continue with the release sequence.
Continue at 2.
Post-conditions
The unit is ready for manual driving.
Special requirements
The button(s) commanding the unit to perform the release sequence must be held
continuously, to ensure that the sequence is monitored and can be cancelled easily.
B Time schedule
[Gantt chart: project time schedule for January–June 2012 (weeks 01–25). Definition Phase: Planning Report, Study visits, Use-Case, Design Architectures. Design Phase: Familiarize with LAB equipment, Evaluation of Design Architectures, Test Specification, Detailed Design. Verification Phase: Hardware Merge and Initial Testing, Prototype based Modification, Test Report. Validation Phase: Workshop/Demo. Finalization Phase: Final Report. Toll Gates: TG 1, TG 2, TG 3.]
Time schedule evaluation
The Definition Phase planning was matched almost to the day with the expected
progress. The Use-Case, though, was continuously updated during the project. The
Design Phase was completed earlier than planned due to a fast decision to drop the
initial idea of proceeding with a magnetic solution for AGV guidance. The first two
phases are also the most uncertain ones, and at the planning stage we felt we kept a good pace.
What delayed the progress was the lead time for ordering the hardware and for installing
and mounting the encoders, which was needed before we could start testing our prepared
software. Once all the hardware was in place, much correction of the software was needed
to make the AGV behave as we wanted. Tuning the mechanical dimensions of the kinematic
model and selecting proper values for the covariance matrices of the Extended Kalman filter
were very time consuming.
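To make the tuning effort concrete, the sketch below shows one odometry prediction step of an Extended Kalman filter for a differential-drive vehicle, the stage where both the mechanical dimensions (wheel base) and the process noise covariance enter. The wheel base and the Q values below are placeholders, not the values tuned in the project.

    import numpy as np

    WHEEL_BASE = 0.50                  # assumed distance between drive wheels [m]
    Q = np.diag([1e-6, 1e-6, 1e-5])    # assumed process noise covariance (x, y, theta)

    def ekf_predict(state, P, d_left, d_right):
        """One odometry prediction step for the state [x, y, theta]."""
        d = 0.5 * (d_left + d_right)               # distance travelled by the centre
        d_theta = (d_right - d_left) / WHEEL_BASE  # change of heading
        heading = state[2] + 0.5 * d_theta         # midpoint heading along the arc

        # Non-linear motion model f(state, u)
        state_pred = np.array([state[0] + d * np.cos(heading),
                               state[1] + d * np.sin(heading),
                               state[2] + d_theta])

        # Jacobian of f with respect to the state, used to propagate the covariance
        F = np.array([[1.0, 0.0, -d * np.sin(heading)],
                      [0.0, 1.0,  d * np.cos(heading)],
                      [0.0, 0.0,  1.0]])
        P_pred = F @ P @ F.T + Q
        return state_pred, P_pred

Small errors in the assumed wheel base enter directly into the predicted heading, which is why tuning these values against the real vehicle took time.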
Once satisfied with the solution, we prepared a demo for an Open House session as
part of the Validation Phase. The demo consisted of a NASCAR-style oval track, much
like the ones used in the calibration stage of the odometry, but with the optical path
along one long side to demonstrate the vision-based system.
C Docking phase data
Offsets from the indicated stopping points for the 30 repetitions of the docking phase,
derived from the Kalman filter states. x̂, ŷ and the Euclidean distance (Dist.) are given
in metres; θ̂ is the estimated heading offset. A sketch for computing summary statistics
from these values follows the table.
x̂       ŷ       Dist.   θ̂
0.001 0.004 0.0041 0.029
-0.003 0.000 0.0030 -0.146
0.002 -0.005 0.0054 0.100
0.001 -0.012 0.0120 0.079
0.000 0.007 0.0070 0.023
0.003 -0.011 0.0114 0.191
0.000 -0.004 0.0040 0.017
-0.001 -0.003 0.0032 -0.042
0.001 -0.010 0.0100 0.077
0.001 -0.006 0.0061 0.048
0.002 -0.008 0.0082 0.109
0.001 -0.009 0.0091 0.065
-0.002 0.001 0.0022 -0.116
-0.001 -0.003 0.0032 -0.037
0.000 -0.006 0.0085 0.020
-0.002 -0.003 0.0036 -0.115
-0.001 -0.002 0.0022 -0.035
-0.001 -0.004 0.0041 -0.067
0.000 -0.004 0.0040 -0.014
0.001 0.004 0.0041 0.093
0.001 0.003 0.0032 -0.040
0.000 0.005 0.0050 0.035
0.000 -0.001 0.0010 -0.009
0.000 0.003 0.0030 -0.143
-0.001 -0.003 0.0032 -0.063
-0.004 0.000 0.0040 0.200
0.001 0.004 0.0041 0.065
-0.003 0.000 0.0030 -0.154
0.003 -0.009 0.0095 0.171
0.001 0.002 0.0022 -0.049
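As noted above the table, the summary statistics can be recomputed from the x̂ and ŷ columns. The sketch below is a minimal example assuming the four columns have been saved to a plain-text file named docking_phase_data.txt (a hypothetical filename), one repetition per line.

    import math
    import statistics

    # Each line: x-hat  y-hat  Dist.  theta-hat (whitespace-separated)
    with open("docking_phase_data.txt") as f:
        rows = [[float(value) for value in line.split()]
                for line in f if line.strip()]

    # Euclidean offset per repetition, recomputed from x-hat and y-hat [m]
    offsets = [math.hypot(x, y) for x, y, _dist, _theta in rows]

    print("mean offset [mm]  :", 1000.0 * statistics.mean(offsets))
    print("std deviation [mm]:", 1000.0 * statistics.stdev(offsets))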