Orientation Estimation and Movement Recognition Using Low Cost Sensors
June 2017
Contact Information:
Author(s):
Álvaro López Revuelta
E-mail: [email protected]
University advisors:
Benny Lövström
Department of Applied Signal Processing
E-mail: [email protected]
Ronnie Lövström
Department of Applied Signal Processing
E-mail: [email protected]
Contents

Abstract
1 Introduction
  1.1 Outline of the Thesis
  1.2 Research Questions
  1.3 Related works
2 Orientation
  2.1 Rotation Matrix
  2.2 Euler Angles
  2.3 Quaternions
3 Sensors
  3.1 Accelerometer
  3.2 Gyroscope
  3.3 Magnetometer
    3.3.1 Hard Iron
    3.3.2 Soft Iron
  3.4 Sensor Parameters
  3.5 Euler Angles without fusion
    3.5.1 Accelerometer
    3.5.2 Gyroscope
    3.5.3 Magnetometer
4 Sensor Fusion
  4.1 Complementary Filter
  4.2 DCM
    4.2.1 Initialize DCM
    4.2.2 Calculate heading
    4.2.3 Update matrix
    4.2.4 Normalize
    4.2.5 Correct drift
    4.2.6 Calculate Euler
  4.3 Madgwick
5 Experimental Setup
  5.1 Components and budget
  5.2 Hardware Setup
  5.3 Software
6 Testing and Results
7 Study Case
  7.1 Introduction
  7.2 Tests and Results
8 Summary and Conclusions
9 Future Work
References
List of Figures

6.7 Gyroscope drift over time.
6.8 Euler estimation with complementary filter, testing resilience to external accelerations.
6.9 Static comparative between DCM and Madgwick.
6.10 Static error between DCM and Madgwick.
6.11 Slow movement comparative between DCM and Madgwick.
6.12 Slow movements error between DCM and Madgwick.
6.13 Fast movement comparative between DCM and Madgwick.
6.14 Fast movements error between DCM and Madgwick.
Chapter 1
Introduction
1.2 Research Questions
The following research questions are addressed in this thesis:
• Can the orientation be estimated in high acceleration scenarios using low cost sensors such as accelerometers, gyroscopes and magnetometers, using each sensor separately?
• Can the orientation be estimated in high acceleration scenarios using low cost sensors such as accelerometers, gyroscopes and magnetometers, using sensor fusion techniques?
• Can these algorithms run on low cost hardware such as an Arduino, at frequencies on the order of 50-100 Hz?
Another approach to estimate the orientation and solve the drift is to use sensor fusion algorithms. This is the approach addressed by this thesis. In [24], [23] an algorithm called Madgwick is proposed, which uses the quaternion form for the estimation and combines accelerometer and magnetometer data by formulating an optimization problem solved with gradient descent. Another approach that also uses sensor fusion is presented in [5], [14], where a DCM (Direction Cosine Matrix) is used as the core of the algorithm.
Chapter 2
Orientation
Representing the attitude of a rigid body is a well known topic in the aerospace field, where it plays an important role in aircraft and UAVs, but it has also become relevant in other fields. In the following chapter the three main mathematical ways of representing the attitude of a rigid body in a three dimensional space are presented: the Rotation Matrix [15][16] in Sec. 2.1, Euler Angles [7] in Sec. 2.2 and Quaternions [22] in Sec. 2.3.

It is also important to know the relation between these three representations, which can be found in [8], [18], [12]. Note that the ZYX convention will be used throughout the thesis; [2] contains a good compilation of all possible representations. The most important relations, always assuming ZYX, can be found in Eq. 2.12, Eq. 2.26, Eq. 2.27 and Eq. 2.28.

2.1 Rotation Matrix

In two dimensions, a rotation by an angle θ can be represented by the matrix in Eq. 2.1.
$$R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \quad (2.1)$$
A rotation matrix is orthogonal, which means that $R^{-1} = R^T$ and its determinant is $\det(R) = \pm 1$. When $\det(R) = 1$ the matrix is called proper and when $\det(R) = -1$ it is called improper; an improper matrix includes a reflection, so it does not represent a pure rotation.
So a vector with components $(x, y)$ can be rotated by an angle $\theta$ using Eq. 2.2, where
$$R_x(\theta) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix} \quad (2.4)$$

$$R_y(\theta) = \begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{pmatrix} \quad (2.5)$$

$$R_z(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad (2.6)$$
An example of the rotation described above is presented in Fig. 2.1, where a rotation is produced about the z axis by an angle α.
An interesting property of rotation matrices is that applying two successive rotations, described by the matrices $R_1$ and $R_2$ as in Eq. 2.7 and Eq. 2.8, is equivalent to applying a single rotation $R_3$ as in Eq. 2.9:

$$\vec{v}\,' = R_1 \vec{v} \quad (2.7)$$

$$\vec{v}\,'' = R_2 \vec{v}\,' \quad (2.8)$$

$$\vec{v}\,'' = R_2 R_1 \vec{v} = R_3 \vec{v} \quad (2.9)$$

where $R_3 = R_2 R_1$ is the new equivalent rotation matrix. Note that the rotations are applied in reverse order.
So any rotation can be described as one rotation about each of the axes x, y, z, and that can be represented by a 3x3 rotation matrix as in Eq. 2.10. The coefficients $a_{ij}$ can be obtained by multiplying the individual rotation matrices. Note that the result depends on the order of the rotations: rotating about x, y, z is not the same as rotating about z, y, x.

$$\begin{pmatrix} x_1' \\ x_2' \\ x_3' \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \quad (2.10)$$
There is also one particular case of rotation matrix, called DCM or Direction Cosine Matrix, which is often used in the orientation literature. As the name indicates, it is a matrix composed of the cosines of the unsigned angles that the body axes form with the reference axes. The reference axes are denoted by $(x, y, z)$, the body axes by $(x', y', z')$ and the angles by $\theta_{x'x}$ and so on. The general expression for a DCM matrix is Eq. 2.11.

$$R_{DCM} = \begin{pmatrix} \cos\theta_{x'x} & \cos\theta_{x'y} & \cos\theta_{x'z} \\ \cos\theta_{y'x} & \cos\theta_{y'y} & \cos\theta_{y'z} \\ \cos\theta_{z'x} & \cos\theta_{z'y} & \cos\theta_{z'z} \end{pmatrix} \quad (2.11)$$
Later on in the thesis, the DCM algorithm for calculating the Euler angles will be explained. That algorithm uses a DCM matrix to perform sensor fusion, and in the last step the Euler angles are calculated. The conversion from a DCM matrix to Euler angles can be performed using Eq. 2.12. Note that pitch, roll and yaw are the Euler angles explained in Sec. 2.2, and that $R_{ij}$ stands for the element in row $i$ and column $j$ of the matrix.

$$\begin{pmatrix} \text{pitch} \\ \text{roll} \\ \text{yaw} \end{pmatrix} = \begin{pmatrix} -\arcsin(R_{31}) \\ \arctan\left(\dfrac{R_{32}}{R_{33}}\right) \\ \arctan\left(\dfrac{R_{21}}{R_{11}}\right) \end{pmatrix} \quad (2.12)$$
2.2 Euler Angles

Note that the order in which the angles are listed is not important, but the order of rotation is. For this thesis the rotation order ZYX (see the DCM section) will be used. Eq. 2.13 is a very important expression, which represents the rotation about the three axes in the ZYX order. Note that, for brevity, $\cos(x)$ is written $c(x)$ and $\sin(x)$ is written $s(x)$.
$$R(\phi, \theta, \psi) = \begin{pmatrix} c(\phi) & -s(\phi) & 0 \\ s(\phi) & c(\phi) & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} c(\theta) & 0 & s(\theta) \\ 0 & 1 & 0 \\ -s(\theta) & 0 & c(\theta) \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & c(\psi) & -s(\psi) \\ 0 & s(\psi) & c(\psi) \end{pmatrix} \quad (2.13)$$
After doing the maths, Eq. 2.14 gives the rotation matrix for the angles $\phi, \theta, \psi$ using the order Z, Y, X, which can also be denoted as $R_{ZYX}(\phi, \theta, \psi)$.

$$R = \begin{pmatrix} c(\phi)c(\theta) & c(\phi)s(\theta)s(\psi) - s(\phi)c(\psi) & c(\phi)s(\theta)c(\psi) + s(\phi)s(\psi) \\ s(\phi)c(\theta) & s(\phi)s(\theta)s(\psi) + c(\phi)c(\psi) & s(\phi)s(\theta)c(\psi) - c(\phi)s(\psi) \\ -s(\theta) & c(\theta)s(\psi) & c(\theta)c(\psi) \end{pmatrix} \quad (2.14)$$
Note that in some literature these angles are also referred to as α, β, γ, equivalent to φ, θ, ψ. An example of the rotation order is represented in Fig. 2.3.
Euler angles seem intuitive and easy to work with. However, they have a limitation called gimbal lock. This singularity occurs in a three-gimbal system when two of the three gimbals become aligned, and one degree of freedom is lost. An example is shown in Fig. 2.4. Setting a pitch of $\theta = 90$ degrees in Eq. 2.14 gives Eq. 2.15.

$$R = \begin{pmatrix} 0 & c(\phi)s(\psi) - s(\phi)c(\psi) & c(\phi)c(\psi) + s(\phi)s(\psi) \\ 0 & s(\phi)s(\psi) + c(\phi)c(\psi) & s(\phi)c(\psi) - c(\phi)s(\psi) \\ -1 & 0 & 0 \end{pmatrix} \quad (2.15)$$
Using some basic trigonometric relations, Eq. 2.15 becomes Eq. 2.16. In this expression it can be seen that changing $\phi$ or $\psi$ has the same effect: one degree of freedom has been lost. In this case a pitch of 90 degrees leads to gimbal lock.

$$R = \begin{pmatrix} 0 & -\sin(\phi - \psi) & \cos(\phi - \psi) \\ 0 & \cos(\phi - \psi) & \sin(\phi - \psi) \\ -1 & 0 & 0 \end{pmatrix} \quad (2.16)$$
2.3 Quaternions
Quaternions were introduced by Hamilton (1843) and are also used to represent the attitude of a rigid body in space. They can be viewed as an extension of complex numbers: a complex number can represent a rotation in a two dimensional plane, and a quaternion extends this idea to three axes. Four values are needed for its representation, one real component $q_0$ and three imaginary ones $q_1, q_2, q_3$, as shown in Eq. 2.17.

$$q = q_0 + q_1 i + q_2 j + q_3 k \quad (2.17)$$

It can also be represented as a column vector $\vec{q}$, shown in Eq. 2.18, where $q_1, q_2, q_3$ are the components multiplied by $i, j, k$ respectively.

$$\vec{q} = [q_0 \;\; q_1 \;\; q_2 \;\; q_3]^T \quad (2.18)$$

The quaternion that represents a rotation of $\theta$ around an axis defined by the unit vector $\vec{n} = [n_x \;\; n_y \;\; n_z]$ is given by Eq. 2.19:

$$q = \left[\cos\left(\tfrac{\theta}{2}\right) \;\; n_x \sin\left(\tfrac{\theta}{2}\right) \;\; n_y \sin\left(\tfrac{\theta}{2}\right) \;\; n_z \sin\left(\tfrac{\theta}{2}\right)\right] \quad (2.19)$$
The Hamilton product of two quaternions is given in Eq. 2.23:

$$\begin{aligned} (a_1 + b_1 i + c_1 j + d_1 k)(a_2 + b_2 i + c_2 j + d_2 k) = {} & a_1 a_2 - b_1 b_2 - c_1 c_2 - d_1 d_2 \\ & + (a_1 b_2 + b_1 a_2 + c_1 d_2 - d_1 c_2)\, i \\ & + (a_1 c_2 - b_1 d_2 + c_1 a_2 + d_1 b_2)\, j \\ & + (a_1 d_2 + b_1 c_2 - c_1 b_2 + d_1 a_2)\, k \end{aligned} \quad (2.23)$$

Eq. 2.24 determines all the possible combinations of $i, j, k$ products, so for example $k = ij$ or $ji = -k$:

$$i^2 = j^2 = k^2 = ijk = -1 \quad (2.24)$$
The Hamilton product is very interesting for the scope of this project, because it can combine two rotations into a single one. For example, if the quaternion in Eq. 2.20, which represents a rotation of $\theta = \pi/2$ about the axis $\vec{n} = [1 \;\; 0 \;\; 0]$, is applied twice, the resulting rotation is $\theta = \pi$ about the same axis. Doing the maths, the Hamilton product of that quaternion with itself gives Eq. 2.25.

$$q = [0 \;\; 1 \;\; 0 \;\; 0] \quad (2.25)$$

This is the same as substituting $\theta = \pi$ and $\vec{n} = [1 \;\; 0 \;\; 0]$ in Eq. 2.19; both results are the same.
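As an illustration of how Eq. 2.23 can be used in practice, a minimal C function computing the Hamilton product could look like the following sketch (the quaternion is stored as an array {q0, q1, q2, q3} and the function name is only illustrative):

```c
/* Hamilton product r = p * q (Eq. 2.23). Quaternions are stored as
 * arrays {q0, q1, q2, q3}, with q0 the real part. Illustrative sketch. */
void quat_multiply(const float p[4], const float q[4], float r[4])
{
    r[0] = p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3];
    r[1] = p[0]*q[1] + p[1]*q[0] + p[2]*q[3] - p[3]*q[2];
    r[2] = p[0]*q[2] - p[1]*q[3] + p[2]*q[0] + p[3]*q[1];
    r[3] = p[0]*q[3] + p[1]*q[2] - p[2]*q[1] + p[3]*q[0];
}
```

Calling this function with the quaternion of Eq. 2.20 as both arguments gives [0 1 0 0], in agreement with Eq. 2.25.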
Now that the main ways of representing the attitude of a body in space have been explained, it is important to know how to convert between the different representations. For this project the ZYX (or 3,2,1) convention will be used, and the most important expressions are given here. Using Eq. 2.26, the Euler angles $\theta_1, \theta_2, \theta_3$ can be converted to the quaternion representation.
$$\begin{pmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{pmatrix} = \begin{pmatrix} \cos(\theta_1/2)\cos(\theta_2/2)\cos(\theta_3/2) + \sin(\theta_1/2)\sin(\theta_2/2)\sin(\theta_3/2) \\ \cos(\theta_1/2)\cos(\theta_2/2)\sin(\theta_3/2) - \sin(\theta_1/2)\sin(\theta_2/2)\cos(\theta_3/2) \\ \cos(\theta_1/2)\sin(\theta_2/2)\cos(\theta_3/2) + \sin(\theta_1/2)\cos(\theta_2/2)\sin(\theta_3/2) \\ \sin(\theta_1/2)\cos(\theta_2/2)\cos(\theta_3/2) - \cos(\theta_1/2)\sin(\theta_2/2)\sin(\theta_3/2) \end{pmatrix} \quad (2.26)$$
In Eq. 2.27 the conversion from quaternion to rotation matrix is given, and in Eq. 2.28 the expression for converting from quaternion to Euler angles.
$$R = \begin{pmatrix} 2q_0^2 - 1 + 2q_1^2 & 2(q_1 q_2 + q_0 q_3) & 2(q_1 q_3 - q_0 q_2) \\ 2(q_1 q_2 - q_0 q_3) & 2q_0^2 - 1 + 2q_2^2 & 2(q_2 q_3 + q_0 q_1) \\ 2(q_1 q_3 + q_0 q_2) & 2(q_2 q_3 - q_0 q_1) & 2q_0^2 - 1 + 2q_3^2 \end{pmatrix} \quad (2.27)$$
$$\begin{pmatrix} \phi \\ \theta \\ \psi \end{pmatrix} = \begin{pmatrix} \arctan\left(\dfrac{2(q_2 q_3 - q_0 q_1)}{2q_0^2 - 1 + 2q_3^2}\right) \\ -\arctan\left(\dfrac{2(q_1 q_3 + q_0 q_2)}{\sqrt{1 - (2 q_1 q_3 + 2 q_0 q_2)^2}}\right) \\ \arctan\left(\dfrac{2(q_1 q_2 - q_0 q_3)}{2q_0^2 - 1 + 2q_1^2}\right) \end{pmatrix} \quad (2.28)$$
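As a sketch, Eq. 2.28 can be translated directly into C. The function below assumes the sign conventions of Eq. 2.27, uses atan2 so the correct quadrant is preserved and uses the arcsine form for the middle component; function and variable names are illustrative only:

```c
#include <math.h>

/* Quaternion {q0,q1,q2,q3} to Euler angles (Eq. 2.28), in radians.
 * atan2f is used instead of arctan to preserve the quadrant. Sketch only. */
void quat_to_euler(const float q[4], float *phi, float *theta, float *psi)
{
    *phi   = atan2f(2.0f*(q[2]*q[3] - q[0]*q[1]),
                    2.0f*q[0]*q[0] - 1.0f + 2.0f*q[3]*q[3]);
    *theta = -asinf(2.0f*(q[1]*q[3] + q[0]*q[2]));
    *psi   = atan2f(2.0f*(q[1]*q[2] - q[0]*q[3]),
                    2.0f*q[0]*q[0] - 1.0f + 2.0f*q[1]*q[1]);
}
```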
Chapter 3
Sensors
In this chapter, the sensors used for the project are described: accelerometers in Sec. 3.1, gyroscopes in Sec. 3.2 and magnetometers in Sec. 3.3, explaining what they measure and how to calibrate them. Then, in Sec. 3.4 the most important parameters of the sensors are explained, which are essential when selecting a sensor for an application. To conclude, in Sec. 3.5 a first approach to calculating the Euler angles without sensor fusion (using each sensor separately) is given, highlighting the limitations of each method.
3.1 Accelerometer
These sensors are used in many fields and industries: aerospace, biology, structural monitoring, medical applications, navigation, transport and consumer electronics. In the study case of this project, the accelerometer will be used to determine the orientation of a body in three dimensions.
If the device is rotated by 45 degrees (π/4 rad), the 1g acceleration pointing towards the floor will still exist, but it will affect the sensor in a different way, because the sensor has been rotated. This gives a first idea of how to calculate the orientation of a body using only an accelerometer, but it is only valid if there are no other external accelerations.
It is very important to calibrate the sensors before using them. For the accelerometer the procedure is quite simple and only requires a few calculations that do not take much CPU time. Note that these operations have to be done in each iteration, right after reading the sensor values. The first step is to check whether the accelerometer readings are correct, using the gravity acceleration of 1g as a reference. First put the accelerometer on a flat surface and measure the acceleration: if the sensor were ideal, the measurement would be −1g, and if the sensor is turned upside down the value would be +1g. However, this is not always true; there are always some errors due to mechanical characteristics, humidity, pressure or temperature. Knowing this, the values to record are the following; the minimum will be close to −1g and the maximum to +1g:
• $A^X_{min}$, $A^X_{max}$
• $A^Y_{min}$, $A^Y_{max}$
• $A^Z_{min}$, $A^Z_{max}$
Once the minimum and maximum values are known, the offsets can be calculated using Eq. 3.1. Let $A^X_O, A^Y_O, A^Z_O$ be the offsets for each axis.

$$\begin{pmatrix} A^X_O \\ A^Y_O \\ A^Z_O \end{pmatrix} = \begin{pmatrix} (A^X_{min} + A^X_{max})/2 \\ (A^Y_{min} + A^Y_{max})/2 \\ (A^Z_{min} + A^Z_{max})/2 \end{pmatrix} \quad (3.1)$$
Note that if the sensor were ideal, $A_{min}$ and $A_{max}$ would be exactly $-1g$ and $+1g$ and the offsets would be zero. The next step is to calculate the scale factors $A^X_S, A^Y_S, A^Z_S$ given by Eq. 3.2.

$$\begin{pmatrix} A^X_S \\ A^Y_S \\ A^Z_S \end{pmatrix} = \begin{pmatrix} \dfrac{gravity}{A^X_{max} - A^X_O} \\ \dfrac{gravity}{A^Y_{max} - A^Y_O} \\ \dfrac{gravity}{A^Z_{max} - A^Z_O} \end{pmatrix} \quad (3.2)$$
Once the offset and scale factor values are calculated, the last step is to correct each measurement by subtracting the offset and scaling the value, as given in Eq. 3.3. Let $A^X_{meas}, A^Y_{meas}, A^Z_{meas}$ be the measurements for the x, y, z axes before applying the calibration and $A^X_{cal}, A^Y_{cal}, A^Z_{cal}$ the calibrated measurements.

$$\begin{pmatrix} A^X_{cal} \\ A^Y_{cal} \\ A^Z_{cal} \end{pmatrix} = \begin{pmatrix} (A^X_{meas} - A^X_O)\, A^X_S \\ (A^Y_{meas} - A^Y_O)\, A^Y_S \\ (A^Z_{meas} - A^Z_O)\, A^Z_S \end{pmatrix} \quad (3.3)$$
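A minimal C sketch of this calibration, assuming the offset and scale are computed once from the recorded minima and maxima and then applied to every reading (names and the reference constant are illustrative), could be:

```c
#define GRAVITY 1.0f  /* reference value of 1 g; adjust to the sensor's raw units */

/* Compute offset and scale from the recorded minima and maxima (Eq. 3.1, 3.2). */
void accel_compute_calibration(const float amin[3], const float amax[3],
                               float offset[3], float scale[3])
{
    for (int i = 0; i < 3; i++) {
        offset[i] = (amin[i] + amax[i]) / 2.0f;
        scale[i]  = GRAVITY / (amax[i] - offset[i]);
    }
}

/* Apply Eq. 3.3 to a raw measurement, right after each reading. */
void accel_apply_calibration(const float meas[3], const float offset[3],
                             const float scale[3], float cal[3])
{
    for (int i = 0; i < 3; i++)
        cal[i] = (meas[i] - offset[i]) * scale[i];
}
```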
3.2 Gyroscope
Gyroscopes are another type of sensor [9]; they measure angular rates in degrees/s or rad/s. They are also used in many fields such as aerospace, navigation or consumer electronics.
3.3 Magnetometer
A magnetometer is a device that measures magnetic field, typically expressed in Tesla (T) or Gauss (G). In the study case of this project, a triple axis magnetometer (x, y, z) will be used. The application will be to measure the Earth's magnetic field, which allows the orientation in the yaw plane to be determined. Although magnetometers are a great tool, they are very influenced by their surroundings: if some kind of metal or magnet is close to the magnetometer, the measurements will drift and become unreliable. However, some of these effects can be fixed with a good calibration.
$$\beta = \frac{y_{max} - y_{min}}{2} \quad (3.6)$$
However, correcting hard iron effects is not always enough to get accurate measurements from the magnetometer. For more accuracy, soft iron distortions must also be corrected.
$$\theta = \arcsin\left(\frac{y_1}{r}\right) \quad (3.8)$$
Once the ellipse is rotated, the next step is to scale the major axis in order to convert the ellipse into a circle. Let $q$ and $r$ be the major and minor axes shown in 3.9 and $\sigma$ the scale factor calculated according to Eq. 3.9.

$$\sigma = \frac{q}{r} \quad (3.9)$$
For this thesis, a three dimensional calibration is needed for the three magnetometer axes x, y, z. Instead of an ellipse, an ellipsoid is used, which is the extension to three dimensions and is described by nine parameters $A, B, C, D, E, F, G, H, I$ as shown in Eq. 3.10.
Now let $M^X_{cal}, M^Y_{cal}, M^Z_{cal}$ be the calibrated magnetometer values calculated using Eq. 3.11, $M^X_{meas}, M^Y_{meas}, M^Z_{meas}$ the measured values, $e_{11} \ldots e_{33}$ the ellipsoid transform matrix and $C^X, C^Y, C^Z$ the centre of the ellipsoid. See [6] for further information.

$$\begin{pmatrix} M^X_{cal} \\ M^Y_{cal} \\ M^Z_{cal} \end{pmatrix} = \begin{pmatrix} e_{11} & e_{12} & e_{13} \\ e_{21} & e_{22} & e_{23} \\ e_{31} & e_{32} & e_{33} \end{pmatrix} \begin{pmatrix} M^X_{meas} - C^X \\ M^Y_{meas} - C^Y \\ M^Z_{meas} - C^Z \end{pmatrix} \quad (3.11)$$
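A short C sketch of Eq. 3.11, assuming the ellipsoid centre and transform matrix have already been obtained offline from the ellipsoid fit (names are illustrative), could be:

```c
/* Apply the ellipsoid-fit magnetometer correction of Eq. 3.11: subtract the
 * ellipsoid centre C and multiply by the 3x3 transform matrix e. */
void mag_apply_calibration(const float meas[3], const float centre[3],
                           const float e[3][3], float cal[3])
{
    float shifted[3];
    for (int i = 0; i < 3; i++)
        shifted[i] = meas[i] - centre[i];

    for (int i = 0; i < 3; i++)
        cal[i] = e[i][0]*shifted[0] + e[i][1]*shifted[1] + e[i][2]*shifted[2];
}
```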
3.4 Sensor Parameters
The most important parameters of a sensor are the following:
• Range: The range of a sensor indicates the minimum and maximum values that it can measure. If the real variable goes above or below these values, the output will be clipped to the limit, so the measurement will not be correct. It is expressed in the same units that the sensor measures: g for the accelerometer, degrees/s for the gyroscope and G for the magnetometer.
• Accuracy: The difference that exists between the real value and the measured one. Note that a sensor can be very precise but not accurate.
• Noise: It depends on the internal characteristics of the sensor and the operation mode. It is typically measured in LSB rms, but in some applications it is given as a function of the operation frequency.
There are also other characteristics such as power or current consumption, operating temperature, supply voltages, connection interface, dimensions or weight. All of them can be checked in the datasheet of the sensor.
3.5 Euler Angles without fusion
In this section a first approach to calculating the Euler angles without sensor fusion is presented, using each sensor separately.
3.5.1 Accelerometer
Pitch and roll angles can be calculated using only an accelerometer [30]. Note that yaw estimation is not possible using this technique. Pitch is given in degrees by Eq. 3.12 and roll by Eq. 3.13. Remember from previous sections that $A^X_{cal}, A^Y_{cal}, A^Z_{cal}$ are the calibrated x, y, z values of the accelerometer.

$$pitch^A = \arctan\left(\frac{-A^X_{cal}}{\sqrt{(A^Y_{cal})^2 + (A^Z_{cal})^2}}\right) \frac{180}{\pi} \quad (3.12)$$

$$roll^A = \arctan\left(\frac{-A^Y_{cal}}{A^Z_{cal}}\right) \frac{180}{\pi} \quad (3.13)$$
This method is very simple and does not require many operations. However, the pitch/roll calculations are only valid when there are no external accelerations acting on the device: if the device is moving up and down, the pitch and roll become unreliable. This will be discussed in the next sections.
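A small C sketch of Eq. 3.12 and 3.13 (names are illustrative; atan2f could be used for the roll to avoid the division by zero when the z component vanishes):

```c
#include <math.h>

#define RAD_TO_DEG (180.0f / 3.14159265358979f)

/* Pitch and roll in degrees from a calibrated accelerometer reading
 * (Eq. 3.12 and 3.13). Only valid when no external acceleration is present. */
void accel_pitch_roll(const float acal[3], float *pitch, float *roll)
{
    *pitch = atanf(-acal[0] / sqrtf(acal[1]*acal[1] + acal[2]*acal[2])) * RAD_TO_DEG;
    *roll  = atanf(-acal[1] / acal[2]) * RAD_TO_DEG;
}
```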
3.5.2 Gyroscope
Using only the angular velocity given by the gyroscope, pitch/roll/yaw can be calculated with a very simple operation [43]. The relation between the position of an object and its velocity is well known and given by Eq. 3.14, where the position $x$ is the integral of the velocity $v$ plus the starting point $x_0$.

$$x(t) = x_0 + \int_0^t v(t)\, dt \quad (3.14)$$

In this case, the angular speed in deg/s is known, and the position in deg is the desired variable. In discrete time, it can be calculated using Eq. 3.15.

$$\theta_{n+1} = \theta_n + \omega \Delta t \quad (3.15)$$
where $\theta$ is the angle in deg, $\omega$ is the angular speed in deg/s and $\Delta t$ is the time between measurements. Remembering from previous sections that $G^X_{cal}, G^Y_{cal}, G^Z_{cal}$ are the calibrated gyroscope measurements for the x, y, z axes, pitch/roll/yaw can be calculated as indicated in Eq. 3.16.

$$\begin{pmatrix} pitch_{n+1} \\ roll_{n+1} \\ yaw_{n+1} \end{pmatrix} = \begin{pmatrix} pitch_n + G^X_{cal} \Delta t \\ roll_n + G^Y_{cal} \Delta t \\ yaw_n + G^Z_{cal} \Delta t \end{pmatrix} \quad (3.16)$$
Note that in this project the axes are aligned so that x corresponds to the pitch, y to the roll and z to the yaw, but a different setup would lead to a different order.
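A minimal C sketch of the integration in Eq. 3.16, assuming the axis-to-angle mapping described above (names are illustrative):

```c
/* Integrate the calibrated gyroscope rates (deg/s) over one sampling period
 * dt (s) as in Eq. 3.16. angles[] holds pitch, roll, yaw in degrees and is
 * updated in place. */
void gyro_integrate(float angles[3], const float gcal[3], float dt)
{
    angles[0] += gcal[0] * dt;  /* pitch from x */
    angles[1] += gcal[1] * dt;  /* roll  from y */
    angles[2] += gcal[2] * dt;  /* yaw   from z */
}
```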
3.5.3 Magnetometer
The yaw or heading can be obtained from the three axis magnetometer measurements using Eq. 3.17, where $M^X_{cal}, M^Y_{cal}, M^Z_{cal}$ are the calibrated x, y, z measurements from the magnetometer.

$$yaw = \arctan\left(\frac{M^Y_{cal}}{M^X_{cal}}\right) \quad (3.17)$$
However, this way of getting the yaw is not accurate: if the sensor is tilted (pitch or roll different from zero) the result is not valid. To solve this, there is a method called tilt compensation [33].
Chapter 4
Sensor Fusion
Sensor fusion consists of combining data derived from different sensors in order to obtain a resulting variable that has less uncertainty than using the sources individually. In this chapter, different sensor fusion techniques to estimate the orientation of a body using a three-axis accelerometer, gyroscope and magnetometer are explained. The first and simplest one is the Complementary Filter described in Sec. 4.1, which only uses the gyroscope and accelerometer. The second one is the DCM algorithm found in Sec. 4.2, which uses a rotation matrix and a PI controller. The last algorithm, based on quaternions, is the Madgwick filter explained in Sec. 4.3 which, despite being the most complex, is implemented very efficiently.
4.1 Complementary Filter
The gyroscope gives a smooth and accurate estimation over short periods of time but drifts over long periods; the accelerometer, on the other hand, provides a reference that is reliable over long periods of time but not in short periods of time. The complementary filter combines both so that each sensor is used where it performs best. Note that yaw is not corrected by the accelerometer.
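The exact filter constants used in the thesis are not reproduced here, but a generic complementary filter for one angle can be sketched in C as follows (the weight 0.98 is only an illustrative value and would have to be tuned):

```c
/* Generic complementary filter sketch: the gyroscope integration is trusted in
 * the short term and the accelerometer angle pulls the estimate back in the
 * long term. ALPHA close to 1 favours the gyroscope. */
#define ALPHA 0.98f

float complementary_filter(float prev_angle, float gyro_rate,
                           float accel_angle, float dt)
{
    return ALPHA * (prev_angle + gyro_rate * dt) + (1.0f - ALPHA) * accel_angle;
}
```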
4.2 DCM
Another way of calculating the orientation of a rigid body in space is the DCM (Direction Cosine Matrix) algorithm [5], [14], [37], [36], [32]. It uses a rotation matrix (see Sec. 2.1) to keep track of the rotation, and at the end the Euler angles are obtained using Eq. 2.12. In Fig. 4.1 a diagram of the algorithm is presented. Note that the core of the algorithm is the rotation matrix denoted by R, which is updated using gyroscope data and corrected using the magnetometer and accelerometer. A PI control system is used to correct the drift, where P corresponds to the proportional term and I to the integral one. At the end of each iteration, the Euler angles are calculated from the DCM matrix.
The algorithm can be divided into the following steps, assuming that the input data is calibrated:
• Initialize DCM: This part is only executed in the first iteration. It initializes the DCM matrix with the first available data. See Sub. 4.2.1.
• Normalize: Keep the vectors orthogonal and normalize them. See Sub. 4.2.4.
• Correct drift: Correct the drift of the gyroscope using a PI controller. See Sub. 4.2.5.
• Calculate Euler: Calculate the Euler angles from the DCM matrix. See Sub. 4.2.6.
4.2.1 Initialize DCM

$$\vec{aux} = \vec{u}_x \times (\vec{A}_{cal} \times \vec{u}_x) \quad (4.2)$$

Using the calculated $\vec{aux}$ vector, the roll can be calculated using Eq. 4.3, where $aux_y$ and $aux_z$ are the second and third components of the vector.

$$roll = \arctan\left(\frac{aux_y}{aux_z}\right) \quad (4.3)$$
The last step to feed the matrix is calculating the yaw. This step is explained in Sub. 4.2.2.

4.2.2 Calculate heading

$$yaw = \arctan\left(\frac{-M^Y_{cal}\, c(roll) + M^Z_{cal}\, s(roll)}{M^X_{cal}\, c(pitch) + M^Y_{cal}\, s(roll) s(pitch) + M^Z_{cal}\, c(roll) s(pitch)}\right) \quad (4.4)$$
4.2.3 Update matrix

$$\vec{\Omega} = \vec{G}_{cal} + \vec{\Omega}_I + \vec{\Omega}_P \quad (4.5)$$

So in each step, the rotation matrix is updated using the gyroscope readings and corrected according to the PI controller that will be explained later.
4.2.4 Normalize
This step is very important for the performance of the algorithm. Remember from Sec. 2.1 the properties of a rotation matrix: by definition a rotation matrix is orthogonal, which means that $R^T R = R R^T = I$. The problem is that numerical errors keep accumulating and break the orthogonality condition, which results in bad performance because the matrix no longer represents a rotation. In order to avoid this, two steps are done in each iteration: first the vectors are made orthogonal again, then they are normalized. The Gram-Schmidt orthogonalization method is a well known process, but for this purpose a different approach is taken: instead of fixing one of the vectors and recalculating the others, both vectors are changed, applying half of the error to each one.
$$Y_{orthogonal} = Y - \frac{error}{2} X \quad (4.8)$$

The error is calculated as the dot product $X \cdot Y$. Remember that the dot product $a \cdot b$ is calculated as $|a||b| \cos(\theta)$, where $\theta$ is the angle between both vectors; if they were orthogonal, $\cos(\theta)$ would be zero and the error would also be zero.
To normalize the vectors, instead of dividing by the exact norm, a different approach presented in [32] is used. It assumes that the norm is not much greater than one, so a Taylor expansion can be used. The normalization of a vector $\vec{u}$ using this approach is shown in Eq. 4.9.

$$\vec{u}_{norm} = \frac{1}{2} (3 - \vec{u} \cdot \vec{u})\, \vec{u} \quad (4.9)$$
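A C sketch of this renormalization step, applying Eq. 4.8 to both row vectors and then the Taylor approximation of Eq. 4.9 (the third row of the matrix can be rebuilt afterwards as the cross product of the first two; names are illustrative):

```c
/* Keep the first two DCM rows orthogonal and normalized (Eq. 4.8 and 4.9). */
void dcm_renormalize(float x[3], float y[3])
{
    /* Error = x . y (zero when the rows are orthogonal); split it half/half. */
    float error = x[0]*y[0] + x[1]*y[1] + x[2]*y[2];
    float xo[3], yo[3];
    for (int i = 0; i < 3; i++) {
        xo[i] = x[i] - (error / 2.0f) * y[i];
        yo[i] = y[i] - (error / 2.0f) * x[i];
    }

    /* Taylor-expansion normalization of Eq. 4.9: u <- 0.5 * (3 - u.u) * u. */
    float dx = xo[0]*xo[0] + xo[1]*xo[1] + xo[2]*xo[2];
    float dy = yo[0]*yo[0] + yo[1]*yo[1] + yo[2]*yo[2];
    for (int i = 0; i < 3; i++) {
        x[i] = 0.5f * (3.0f - dx) * xo[i];
        y[i] = 0.5f * (3.0f - dy) * yo[i];
    }
}
```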
4.2.5 Correct drift
The first step is to calculate the error in pitch and roll. This is done using the cross product between the calibrated acceleration vector $\vec{A}_{cal}$ and the last row of the DCM matrix $(DCM_{31}, DCM_{32}, DCM_{33})$, see Eq. 4.12.
$$\vec{\Omega}_P = error_{pitch/roll}\, K^P_{pitch/roll}\, |\vec{A}_{cal}| \quad (4.13)$$

$$\vec{\Omega}_I = \vec{\Omega}_I + error_{pitch/roll}\, K^I_{pitch/roll}\, |\vec{A}_{cal}| \quad (4.14)$$
Now it is time to correct the yaw drift, using the yaw obtained from the magnetometer as in Eq. 4.15.

$$\vec{\Omega}_P = \vec{\Omega}_P + error_{yaw}\, K^P_{yaw} \quad (4.16)$$

$$\vec{\Omega}_I = \vec{\Omega}_I + error_{yaw}\, K^I_{yaw} \quad (4.17)$$
In the next iteration these values will be used to correct the drift of the gyroscope readings, as explained in Sub. 4.2.3.
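A possible C sketch of this PI correction, with the gains and the accelerometer weight left as placeholders to be tuned (the structure follows Eq. 4.13, 4.14, 4.16 and 4.17; types and variable names are illustrative, not the thesis implementation):

```c
/* PI drift correction sketch. err_pr is the pitch/roll error vector from the
 * accelerometer, err_yaw the yaw error vector from the magnetometer heading,
 * and acc_weight a weight based on the accelerometer magnitude |A_cal|. The
 * resulting terms are later added to the gyroscope vector as in Eq. 4.5. */
typedef struct { float p[3]; float i[3]; } pi_state_t;

void dcm_drift_correction(pi_state_t *s,
                          const float err_pr[3], float acc_weight,
                          const float err_yaw[3],
                          float kp_pr, float ki_pr, float kp_yaw, float ki_yaw)
{
    for (int n = 0; n < 3; n++) {
        /* Pitch/roll error from the accelerometer (Eq. 4.13, 4.14). */
        s->p[n]  = err_pr[n] * kp_pr * acc_weight;
        s->i[n] += err_pr[n] * ki_pr * acc_weight;
        /* Yaw error from the magnetometer heading (Eq. 4.16, 4.17). */
        s->p[n] += err_yaw[n] * kp_yaw;
        s->i[n] += err_yaw[n] * ki_yaw;
    }
}
```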
4.3 Madgwick
The Madgwick algorithm (Madgwick, 2010) [24][23] is an orientation filter that can be applied to an IMU (accelerometer + gyroscope) or a MARG array (accelerometer + gyroscope + magnetometer). It provides the orientation of the device as a quaternion, which can also be transformed into the Euler angles θ, φ, ψ. One of the benefits of this filter over the typical Kalman filter approach is its low computational complexity: the IMU version only requires 109 scalar arithmetic operations per update and the MARG version 277, which makes the Madgwick algorithm very convenient for microprocessors with low size, cost and energy consumption. The author reports good behaviour even at low sampling rates, close to 10 Hz. Implementations of the algorithm are provided in different programming languages [26], [27], [25], [41], and some improvements and error corrections can be found in [1], [34].
The DCM algorithm presented in Sec. 4.2 used a rotation matrix to compute the orientation, and at the end of each iteration the Euler angles were computed from that matrix. The Madgwick algorithm takes another approach, using the quaternion representation to determine the orientation.
Before going in depth into the maths behind the Madgwick filter, it is important to understand what the different sensors can provide. For the study case of this thesis the MARG approach will be used, so it is assumed that an accelerometer, gyroscope and magnetometer are present in the device. For simplification, the data is assumed to be calibrated as explained in Ch. 3, where the accelerometer and gyroscope are calibrated for offset and the magnetometer for soft and hard iron effects. Knowing that, these are the steps that will be followed to obtain the estimated quaternion in each iteration. Note that S stands for sensor and E for earth, so the quaternion always describes the rotation of the sensor frame with respect to the earth frame.
• First of all, the gyroscope measurements are used to compute the quaternion denoted by ${}^S_E q_{\omega,t}$. It represents the rotation at time t described by the angular speed measured in that iteration.
• The next step is to use the measurements provided by the accelerometer and the magnetometer to obtain a second estimate, denoted ${}^S_E q_{\nabla,t}$, by formulating an optimization problem that is solved with gradient descent.
• Finally, using both estimations ${}^S_E q_{\omega,t}$ and ${}^S_E q_{\nabla,t}$, a quaternion ${}^S_E q_{est,t}$ is obtained with a sensor fusion filter similar to the complementary filter. This last quaternion expresses the orientation of the body at each time t, and it is almost trivial to get the Euler angles from it.
As said before, the first step is to compute the orientation from the gyroscope. In Eq. 4.18 the data from the gyroscope in rad/s is arranged in ${}^S\omega$. The orientation ${}^S_E q_{\omega,t}$ can be calculated by integrating the quaternion derivative ${}^S_E \dot{q}_{\omega,t}$ as presented in Eq. 4.19 and 4.20. Note that $\Delta t$ is the sampling period, ${}^S_E \hat{q}_{est,t-1}$ is the previous estimation and ${}^S\omega_t$ is the gyroscope measurement at time t.

$${}^S\omega = [0 \;\; \omega_x \;\; \omega_y \;\; \omega_z] \quad (4.18)$$

$${}^S_E \dot{q}_{\omega,t} = \frac{1}{2}\, {}^S_E \hat{q}_{est,t-1} \otimes {}^S\omega_t \quad (4.19)$$

$${}^S_E q_{\omega,t} = {}^S_E \hat{q}_{est,t-1} + {}^S_E \dot{q}_{\omega,t}\, \Delta t \quad (4.20)$$
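A C sketch of Eq. 4.18-4.20, reusing the Hamilton product function sketched in Sec. 2.3 (names are illustrative):

```c
void quat_multiply(const float p[4], const float q[4], float r[4]);

/* Predict the orientation from the gyroscope alone (Eq. 4.18 - 4.20):
 * q_dot = 0.5 * q_prev (x) [0, wx, wy, wz], then integrate over dt. */
void quat_from_gyro(const float q_prev[4], const float w_rad[3],
                    float dt, float q_out[4])
{
    const float omega[4] = { 0.0f, w_rad[0], w_rad[1], w_rad[2] };
    float q_dot[4];

    quat_multiply(q_prev, omega, q_dot);          /* q_prev (x) omega        */
    for (int i = 0; i < 4; i++)
        q_out[i] = q_prev[i] + 0.5f * q_dot[i] * dt;  /* the 1/2 factor of Eq. 4.19 */
}
```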
The next step is to use the acceleration provided by the accelerometer to obtain an orientation quaternion. Note that an accelerometer measures not only the earth's gravity but also the linear accelerations due to the motion of the sensor. This is a problem that will be discussed in Ch. 6, because here it is assumed that the accelerometer only measures gravity.
Since the direction of the earth's field is known in the earth frame, a measurement in the sensor frame allows the orientation of the sensor relative to the earth frame to be determined. However, the solution is not a unique value but a line of infinite points, because the accelerometer cannot estimate the heading, only the pitch and roll.

$${}^S_E \hat{q} = [q_1 \;\; q_2 \;\; q_3 \;\; q_4] \quad (4.21)$$
$${}^E\hat{d} = [0 \;\; d_x \;\; d_y \;\; d_z] \quad (4.22)$$

$${}^S\hat{s} = [0 \;\; s_x \;\; s_y \;\; s_z] \quad (4.23)$$
The estimation ${}^S_E \hat{q}$ can be formulated as an optimization problem described by Eq. 4.24, where the objective function is Eq. 4.25.

$${}^S_E q_{k+1} = {}^S_E q_k - \mu \frac{\nabla f({}^S_E \hat{q}, {}^E\hat{d}, {}^S\hat{s})}{\left\| \nabla f({}^S_E \hat{q}, {}^E\hat{d}, {}^S\hat{s}) \right\|} \quad (4.26)$$
In the same way, the earth's magnetic field can be considered to have components along only one horizontal axis and the vertical axis, as represented in Eq. 4.30, where there are only two non-zero components. The measurements of the magnetometer ${}^S\hat{m}$ are represented in Eq. 4.31.

$${}^E\hat{b} = [0 \;\; b_x \;\; 0 \;\; b_z] \quad (4.30)$$

$${}^S\hat{m} = [0 \;\; m_x \;\; m_y \;\; m_z] \quad (4.31)$$
Following the same steps as before, the objective function $f_b({}^S_E \hat{q}, {}^E\hat{b}, {}^S\hat{m})$ and the Jacobian matrix $J_b({}^S_E \hat{q}, {}^E\hat{b})$ are shown in Eq. 4.32 and 4.33 respectively.
$$f_b({}^S_E \hat{q}, {}^E\hat{b}, {}^S\hat{m}) = \begin{pmatrix} 2b_x(0.5 - q_3^2 - q_4^2) + 2b_z(q_2 q_4 - q_1 q_3) - m_x \\ 2b_x(q_2 q_3 - q_1 q_4) + 2b_z(q_1 q_2 + q_3 q_4) - m_y \\ 2b_x(q_1 q_3 + q_2 q_4) + 2b_z(0.5 - q_2^2 - q_3^2) - m_z \end{pmatrix} \quad (4.32)$$

$$J_b({}^S_E \hat{q}, {}^E\hat{b}) = \begin{pmatrix} -2b_z q_3 & 2b_z q_4 & -4b_x q_3 - 2b_z q_1 & -4b_x q_4 + 2b_z q_2 \\ -2b_x q_4 + 2b_z q_2 & 2b_x q_3 + 2b_z q_1 & 2b_x q_2 + 2b_z q_4 & -2b_x q_1 + 2b_z q_3 \\ 2b_x q_3 & 2b_x q_4 - 4b_z q_2 & 2b_x q_1 - 4b_z q_3 & 2b_x q_2 \end{pmatrix} \quad (4.33)$$
As discussed before, neither the accelerometer nor the magnetometer measurements alone can provide a complete solution for the orientation of the body, because the solution is not unique (remember that the accelerometer cannot estimate the heading). To achieve a complete solution, both sensors have to be combined.
$$f_{g,b}({}^S_E \hat{q}, {}^S\hat{a}, {}^E\hat{b}, {}^S\hat{m}) = \begin{pmatrix} f_g({}^S_E \hat{q}, {}^S\hat{a}) \\ f_b({}^S_E \hat{q}, {}^E\hat{b}, {}^S\hat{m}) \end{pmatrix} \quad (4.34)$$

$$J_{g,b}({}^S_E \hat{q}, {}^E\hat{b}) = \begin{pmatrix} J_g({}^S_E \hat{q}) \\ J_b({}^S_E \hat{q}, {}^E\hat{b}) \end{pmatrix} \quad (4.35)$$
So the estimated orientation can be calculated using the gradient descent step shown in Eq. 4.36, where ${}^S_E q_{\nabla,t}$ is the orientation at time t. The subscript $\nabla$ denotes that the orientation is computed using gradient descent.

$${}^S_E q_{\nabla,t} = {}^S_E q_{est,t-1} - \mu \frac{\nabla f}{\|\nabla f\|} \quad (4.36)$$

$$\nabla f = J_{g,b}^T({}^S_E \hat{q}_{est,t-1}, {}^E\hat{b})\, f_{g,b}({}^S_E \hat{q}_{est,t-1}, {}^S\hat{a}, {}^E\hat{b}, {}^S\hat{m}) \quad (4.37)$$
Note that a conventional optimization problem runs until a certain error is minimized or a number of iterations is completed. However, in this case, as the author explains, it is enough to compute one iteration per time sample. Also, the step size $\mu$ should in principle be recalculated in each iteration, but in order to save calculations it can be set as in Eq. 4.38.
$$\begin{aligned} \nabla f^{\,0}_{g,b} = {} & 4q_1 b_x^2 q_3^2 + 4q_1 b_x^2 q_4^2 - 8b_x b_z q_1 q_2 q_4 + 4q_1 b_z^2 q_2^2 + 4q_1 b_z^2 q_3^2 \\ & + 2m_x b_z q_3 + 2m_y b_x q_4 - 2m_y b_z q_2 - 2m_z b_x q_3 \\ & + 4q_1 q_2^2 - 2a_y q_2 + 4q_1 q_3^2 + 2a_x q_3 \end{aligned} \quad (4.39)$$
$$\begin{aligned} \nabla f^{\,1}_{g,b} = {} & 2q_1(2q_1 q_2 - a_y + 2q_3 q_4) \\ & - (2b_x q_3 + 2b_z q_1)\left(m_y + 2b_x(q_1 q_4 - q_2 q_3) - 2b_z(q_1 q_2 + q_3 q_4)\right) \\ & - (2b_x q_4 - 4b_z q_2)\left(m_z - 2b_x(q_1 q_3 + q_2 q_4) + 2b_z\left(q_2^2 + q_3^2 - \tfrac{1}{2}\right)\right) \\ & - 2q_4(a_x + 2q_1 q_3 - 2q_2 q_4) + 4q_2\left(2q_2^2 + 2q_3^2 + a_z - 1\right) \\ & - 2b_z q_4\left(m_x + 2b_z(q_1 q_3 - q_2 q_4) + 2b_x\left(q_3^2 + q_4^2 - \tfrac{1}{2}\right)\right) \end{aligned} \quad (4.40)$$

$$\begin{aligned} \nabla f^{\,2}_{g,b} = {} & 2q_1(a_x + 2q_1 q_3 - 2q_2 q_4) \\ & - (2b_x q_2 + 2b_z q_4)\left(m_y + 2b_x(q_1 q_4 - q_2 q_3) - 2b_z(q_1 q_2 + q_3 q_4)\right) \\ & + (4b_x q_3 + 2b_z q_1)\left(m_x + 2b_z(q_1 q_3 - q_2 q_4) + 2b_x\left(q_3^2 + q_4^2 - \tfrac{1}{2}\right)\right) \\ & - (2b_x q_1 - 4b_z q_3)\left(m_z - 2b_x(q_1 q_3 + q_2 q_4) + 2b_z\left(q_2^2 + q_3^2 - \tfrac{1}{2}\right)\right) \\ & + 2q_4(2q_1 q_2 - a_y + 2q_3 q_4) + 4q_3\left(2q_2^2 + 2q_3^2 + a_z - 1\right) \end{aligned} \quad (4.41)$$

$$\begin{aligned} \nabla f^{\,3}_{g,b} = {} & (2b_x q_1 - 2b_z q_3)\left(m_y + 2b_x(q_1 q_4 - q_2 q_3) - 2b_z(q_1 q_2 + q_3 q_4)\right) \\ & - 2q_2(a_x + 2q_1 q_3 - 2q_2 q_4) \\ & + (4b_x q_4 - 2b_z q_2)\left(m_x + 2b_z(q_1 q_3 - q_2 q_4) + 2b_x\left(q_3^2 + q_4^2 - \tfrac{1}{2}\right)\right) \\ & + 2q_3(2q_1 q_2 - a_y + 2q_3 q_4) \\ & - 2b_x q_2\left(m_z - 2b_x(q_1 q_3 + q_2 q_4) + 2b_z\left(q_2^2 + q_3^2 - \tfrac{1}{2}\right)\right) \end{aligned} \quad (4.42)$$
Now that ${}^S_E q_{\omega,t}$ and ${}^S_E q_{\nabla,t}$ are known, the next step is to merge both to get the final estimation of the orientation ${}^S_E q_{est,t}$. This fusion is done using Eq. 4.43.

$${}^S_E q_{est,t} = \gamma_t\, {}^S_E q_{\nabla,t} + (1 - \gamma_t)\, {}^S_E q_{\omega,t}, \qquad 0 \le \gamma_t \le 1 \quad (4.43)$$

After some simplifications and assumptions [24], the expression in Eq. 4.43 can be simplified to Eq. 4.44, which is the equation that runs on the Arduino and is the core of the algorithm.

$${}^S_E q_{est,t} = {}^S_E \hat{q}_{est,t-1} + \left({}^S_E \dot{q}_{\omega,t} - \beta \frac{\nabla f}{\|\nabla f\|}\right) \Delta t \quad (4.44)$$
Note that β represents the divergence rate of the gyroscope measurement, so it has to be adjusted for different sensors and sampling frequencies. Note also that $\nabla f$ is a four dimensional vector whose components were shown in Eq. 4.39, 4.40, 4.41 and 4.42. $\Delta t$ is the sampling period, ${}^S_E \hat{q}_{est,t-1}$ is the previous estimation and ${}^S_E \dot{q}_{\omega,t}$ was described in Eq. 4.19.
If necessary, the last step is to convert from the quaternion to the Euler angle representation, which is more convenient and intuitive. That can be done as explained before with Eq. 2.28.
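A compact C sketch of the fusion step of Eq. 4.44, assuming the gradient of Eq. 4.39-4.42 and the quaternion derivative of Eq. 4.19 have already been computed for the current sample (the final re-normalization keeps the quaternion at unit length; names are illustrative):

```c
#include <math.h>

/* One fusion step following Eq. 4.44: the gyroscope prediction q_dot_w is
 * integrated while the normalized gradient, scaled by beta, pulls the estimate
 * towards the accelerometer/magnetometer solution. q_est holds the previous
 * estimate on entry and the new one on return. Illustrative sketch only. */
void madgwick_fuse(float q_est[4], const float q_dot_w[4],
                   const float grad_f[4], float beta, float dt)
{
    float gnorm = sqrtf(grad_f[0]*grad_f[0] + grad_f[1]*grad_f[1] +
                        grad_f[2]*grad_f[2] + grad_f[3]*grad_f[3]);

    for (int i = 0; i < 4; i++) {
        float correction = (gnorm > 0.0f) ? beta * grad_f[i] / gnorm : 0.0f;
        q_est[i] += (q_dot_w[i] - correction) * dt;
    }

    /* Keep the quaternion at unit length. */
    float qnorm = sqrtf(q_est[0]*q_est[0] + q_est[1]*q_est[1] +
                        q_est[2]*q_est[2] + q_est[3]*q_est[3]);
    for (int i = 0; i < 4; i++)
        q_est[i] /= qnorm;
}
```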
Chapter 5
Experimental Setup
In this chapter the experimental setup of the thesis is explained, divided into three parts. The first one describes all the components used for the project and the budget (Sec. 5.1), the second one describes the hardware setup (Sec. 5.2), and the last part explains the most important facts about the software (Sec. 5.3).
5.1 Components and budget
Arduino is a very well known company that manufactures different kinds of microprocessor and microcontroller boards with different sizes and purposes, but with one thing in common: they are open source. There is a huge community behind it that is continually improving the product and collaborating in forums. One of the most important factors that led to choosing Arduino as the development platform is that it can be programmed in C through the Arduino IDE, which saves a lot of time compared with other kinds of microcontrollers.
In particular, the Arduino MKR1000 [3] (see Fig. 5.1) was chosen because of its reduced size and integrated WiFi module, which makes it very suitable for IoT applications.
The SparkFun 9DOF Sensor Stick [13], shown in Fig. 5.2, was chosen because it gathers the three needed sensors into one single small board. It can be connected to any device with only four pins: ground, voltage, SDA and SCL. The last two pins allow interaction with the sensors, to send data (i.e. configuration parameters) and receive data (i.e. the different sensor measurements).
The Sensor Stick contains three different sensors, whose specifications can be found in Table 5.1 [11], Table 5.2 [19] and Table 5.3 [17] for the accelerometer, gyroscope and magnetometer respectively.
In Table 5.4 a budget for the project is presented. The budget also includes a battery, which allows the Arduino and sensor to operate stand-alone without the need of being plugged into USB. The battery has a capacity of 1400 mAh, which may be oversized for the project but was the only one available at that moment. It is important to keep in mind that, due to the Arduino battery charging circuit, the battery has to be greater than 700 mAh or otherwise it can be damaged while charging. This happens because charging is done at 350 mA, and the typical charge rate of batteries is C/2.
5.2 Hardware Setup
In Fig. 5.4 and 5.5 the final experimental setup of the project can be seen. It is composed of the Arduino MKR1000, the sensor board with the accelerometer, magnetometer and gyroscope, a battery and a switch to turn the device on and off.
5.3 Software
Once the hardware setup used for the project has been explained, it is time to describe the software that runs on the Arduino to test the different algorithms. All the programming is done using the Arduino IDE (Integrated Development Environment), which allows the Arduino to be programmed in C (see Fig. 5.6).
In order to interact with the sensors easily, the Wire library [20] is used. It provides high level functions to read from and write to registers using the I2C serial protocol. For sending data from the Arduino to the computer, the WiFi101 library [28] is used. In [4] a function for calibrating the sensors is provided.
The flow of execution is shown in Fig. 5.7, where five major parts are presented. First the code is initialized, then data is read through I2C. After that, each reading is calibrated before feeding the sensor fusion algorithm. Finally, the resulting data is sent either through the serial port or in UDP packets via WiFi. On the computer side, different Matlab scripts have been developed in order to post-process or plot the incoming data from the Arduino.
Figure 5.5: Hardware setup. Arduino, sensor, battery and switch in relation to a coin.

Chapter 6
Testing and Results
After having studied different sensor fusion techniques from a theoretical point of view in Ch. 4, in this chapter they are studied experimentally, using a specific sensor and a specific microprocessor. One of the main problems detected during the study of the algorithms is the lack of an absolute and perfect reference against which to compare them. This could be done with quite expensive optical tracking sensors, but unfortunately that was not possible. However, good results can also be achieved by comparing the algorithms with each other on the same device.
6.1 Calibration
Previously, in Ch. 3, the calibration of the accelerometer, magnetometer and gyroscope was explained. Each sensor is calibrated through a different process in order to compensate for different problems.
In the case of the accelerometer, an offset and a scale factor are used. These two parameters are calculated from the minimum and maximum values on each axis, and the readings are then corrected as shown in Eq. 3.3. Note that Eq. 3.1 is used to calculate the offset and Eq. 3.2 the scale. In Table 6.1 the minimum and maximum measured values of the gravity are displayed.
The magnetometer also has to be calibrated and is the most vulnerable sensor. The magnetometer can be calibrated against hard iron effects only or against both hard and soft iron effects; in this project the full ellipsoid fit calibration was used, because it is more accurate and only requires a few more operations. To perform this calibration, the sensor has to be rotated in all directions in order to sample the magnetic field along every axis. After that, a three dimensional plot can be used to visualize the data. An ideal measurement would be a perfect sphere, but due to external influences of ferromagnetic materials or external fields this is not the case. In Fig. 6.1 the measured field is represented in red, and in blue the compensated measurements after performing the ellipsoid fit calibration method (see Sec. 3.3).
One of the major problems when calibrating the magnetometer was the presence of the WiFi module on the Arduino, close to the magnetometer. Having a battery close to the sensor also introduced variations in the measurements. As a result, the device has to be calibrated with the WiFi module switched on and with the battery attached. In Fig. 6.2 the difference between measurements with the WiFi turned on and off is displayed. The error is quite big and, in practice, if this calibration is not done the output Euler angles are not reliable at all.
6.2 Accelerometer
As explained before in Eq. 3.12 and 3.13, the accelerometer by itself can be used to estimate the attitude (pitch and roll) of an object. It cannot be used to estimate the heading or yaw because of the nature of gravity.
In Fig. 6.3 an example of pitch and roll estimation with the accelerometer is presented. The y-axis represents the angle and the x-axis the time. In this test the device was turned to the left and to the right while being held in the hand.
Figure 6.1: Magnetometer measurements before (Original) and after (Compensated) the ellipsoid fit calibration.
Figure 6.2: Magnetometer bias with and without WiFi module working.
Figure 6.3: Euler angles estimated with the accelerometer while turning the device to the left and right (angle in degrees versus time in seconds).
The same test but pitching the device forward is represented in Fig. 6.4. As seen in the figure, when the pitch gets close to 90 degrees the measurements start to change randomly. Note also that the roll should be zero but is not. This happens because when the pitch is 90 degrees the y and z components should be zero, but they are slightly larger and contain a bit of noise, which causes this behaviour.
During the tests shown in Fig. 6.3 and 6.4 the sensor was moved slowly, so the acceleration that the sensor measured was essentially the pure earth gravity. In the test displayed in Fig. 6.5 the sensor is moved forward and backward at a frequency on the order of 2 or 3 Hz. In this case the Euler angles should not change, because the device is moving along one axis without changing its orientation, but the result is noisy measurements that do not make any sense. When the accelerometer is corrupted by external accelerations, the formulas used to calculate the attitude are no longer valid. It can be concluded that the accelerometer by itself can be a first cheap approach (in terms of money and computational cost), but it is not suitable for applications where the heading is required, nor where the device is moving and experiencing external accelerations.
Figure 6.4: Euler angles estimated with the accelerometer while pitching the device forward (angle in degrees versus time in seconds).
Figure 6.5: Euler estimation with accelerometer, moving the sensor forward and
backwards.
Figure 6.6: Euler angles estimated by integrating the gyroscope with the device static for ten minutes (angle in degrees versus time in seconds).
6.3 Gyroscope
As described by Eq. 3.16, the gyroscope can be used by itself to estimate the orientation of a body in terms of pitch, roll and yaw with a simple integration. However, cheap MEMS sensors are not very accurate or precise, so the integral is fed with the measurement x plus a small amount of noise n. After the integration, a small error is added to the angle estimation in each iteration, which makes it drift over time; this accumulated error is referred to as drift. For the test in Fig. 6.6 and 6.7 the device was left in a static position for ten minutes. The expected output would be constant values of pitch, roll and yaw, but due to the explained effect there is a drift over time: for the yaw it is about 1/10 deg/min, for the pitch 2 deg/min and for the roll 6 deg/min. This drift makes cheap MEMS gyroscopes unsuitable for many applications, especially those lasting more than just a few seconds.
Note that there are techniques, such as Kalman filtering, that can estimate the drift of the gyroscope as a state and compensate for it, but their computational complexity is much higher and not always suitable for low cost microprocessors.
Figure 6.7: Gyroscope drift over time.
The complementary filter combines the gyroscope and the accelerometer to take the best of each sensor. Note that this method estimates the pitch and roll as a fusion but not the yaw, which is taken only from the gyroscope and whose defects are already known.
Figure 6.8: Euler estimation with complementary filter, testing resilience to external accelerations.
• Static performance
In the first test both algorithms were evaluated with the device static, left on the table. In Fig. 6.9 the output Euler angles of each algorithm are displayed and in Fig. 6.10 the difference between them is plotted. Both algorithms seem to perform equally well, with the DCM one looking a little noisier. Note that neither of the algorithms has been post-processed with any filtering.
The second test determines the behaviour when the device is moved slowly, without applying big accelerations. This is equivalent to a person walking around the room with the sensor attached. In Fig. 6.11 the Euler angles over time are represented and in Fig. 6.12 the difference is plotted. The error is bigger than in the static case.
Figure 6.9: Static comparative between DCM and Madgwick.
Figure 6.10: Static error between DCM and Madgwick (pitch difference RMS 0.044 deg, roll difference RMS 0.095 deg).
Figure 6.11: Slow movement comparative between DCM and Madgwick.
In the last test, the sensor is moved very fast, so the accelerometer also measures external accelerations. The Euler angles calculated by each algorithm are shown in Fig. 6.13 and the difference between both algorithms in Fig. 6.14. In this case the algorithms perform quite differently: the Madgwick algorithm seems to be much more accurate when external acceleration is present. This conclusion was reached by watching the performance of both algorithms in real time (see the attached video dcmmadgwick.mov).
In Table 6.3 the RMS values of the error between both algorithms can be seen. In the scenarios where the device is moving, the error is bigger. It is also important to note that the difference in the yaw estimation is bigger than in the pitch and roll.
Figure 6.12: Slow movements error between DCM and Madgwick (pitch difference RMS 0.50 deg, roll difference RMS 0.73 deg).
Figure 6.13: Fast movement comparative between DCM and Madgwick.
Figure 6.14: Fast movements error between DCM and Madgwick (pitch difference RMS 3.3 deg, roll difference RMS 6.8 deg).
• Converting to the sensor units and applying the calibration are tasks that do not require much time.
• Transmitting the data over the serial port takes much less time than sending a UDP packet.
• The Madgwick algorithm is very optimized and takes less time than the DCM, in spite of being more complicated.
It is also important to determine the upper bound of the device (sensor + Arduino MKR1000). Assuming that the device has to read data from the sensors, apply the calibration, run an algorithm and send the data, the sampling frequency will never be more than 160-200 Hz, depending on the specific configuration.
Chapter 7
Study Case
In previous chapters different sensor fusion techniques have been studied and compared, observing their advantages and disadvantages. Due to its low execution time and good performance, the Madgwick algorithm has been chosen. In this chapter a practical scenario is presented: a movement recognition algorithm is described in Sec. 7.1 and evaluated in Sec. 7.2.
The proposed algorithm runs on the hardware setup explained in Ch. 5, using the Madgwick algorithm for the Euler angle calculations and sending all the data through a UDP port to Matlab. Once the data is received, the proposed algorithm is executed and the different movements are recognized. One practical application of this scenario would be to attach the sensor to a bike, skateboard or snowboard to recognize the different movements that the user is doing.
In order to provide a more practical application, the Plotly [31] libraries have been used. Plotly is a web based service that offers APIs to interact with its content from Matlab or Python. Real time data can be streamed from Matlab to a Plotly plot, and that content can be seen in any web browser with a simple URL.
7.1 Introduction
The algorithm is fed with the Euler angles and the accelerations along each axis. Magnetometer and gyroscope data are also sent and available, but not used. The algorithm can be tuned with different thresholds to adapt it to different scenarios (i.e. bike or skateboard). These are the movements that are recognized by default:
• θ = 360: A 360 degree turn around the θ axis, which corresponds to the roll.
• ψ = 180: A 180 degree turn around the ψ axis, which corresponds to the yaw.
In Fig. 7.1 and 7.2 the execution flow of the algorithm is displayed. Note that this part runs in Matlab and its inputs are the Euler angles and the acceleration values. The important stages to take into account are:
• Unwrap: Note that the Euler angles are restricted to the range -180/+180 for roll, -90/+90 for pitch and -180/+180 for yaw. So if two turns are performed, e.g. about the roll axis, the angle will go from -180 to 180 in the first turn, then jump to -180 again and go to 180. An approach similar to the phase unwrapping used in signal processing is applied in the algorithm: the idea is to avoid those jumps and obtain a continuous angle, so after two turns the unwrapped output will be 720 degrees (a sketch of this step is given after this list).
• Recognition: In the recognition phase there are two steps. The first one is to detect when the user has finished a trick; in the case of a skateboard this is detected when an impulsive acceleration appears on the z-axis, which means that the skateboard has "landed". After that, the movement that has been performed, if any, is identified.
• Yaw offset: When the skater starts moving in a specific direction (yaw), that direction is set as zero. This is detected when a strong acceleration appears on the y-axis. The threshold value can be tuned for different scenarios.
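Although this part of the processing runs in Matlab in the project, the unwrap idea can be sketched in C as follows (names and the 180 degree threshold are illustrative):

```c
/* Angle unwrap sketch: whenever the angle jumps by more than 180 degrees
 * between two consecutive samples, a full turn is added or subtracted so the
 * output grows continuously (e.g. two roll turns end at 720 degrees). */
float unwrap_angle(float prev_unwrapped, float prev_raw, float new_raw)
{
    float delta = new_raw - prev_raw;
    if (delta > 180.0f)
        delta -= 360.0f;
    else if (delta < -180.0f)
        delta += 360.0f;
    return prev_unwrapped + delta;
}
```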
7.2 Tests and Results
A first test is shown in Fig. 7.3. In this example the yaw is reset at t = 3.6 s and a 360 degree turn is done about the roll axis. It can be seen that after the trick the unwrap is reset. In Fig. 7.4 the raw data from the accelerometer, magnetometer and gyroscope is shown; it can be clearly seen that the recognition is much more straightforward using the processed Euler angles than using the raw sensor data.
Figure 7.3: Time and Euler angles plot for a simple flip of 360 degrees.
Figure 7.4: Raw gyroscope, accelerometer and magnetometer data for the 360 degree roll turn.
Figure 7.5: Time and Euler angles plot for a 180 degrees flip along the z (yaw
axis).
A similar test is shown in Fig. 7.5, where the movement performed was a rotation of 180 degrees about the yaw axis. In Fig. 7.6 the raw data of the sensors is also shown.
In Fig. 7.7 the advantage of performing the unwrap can be seen. In this case the performed movement was several turns about the roll axis; the exact number of turns can be calculated by dividing the final angle by 360.
Now that some basic examples of the performance of the algorithm have been described, it is time to run a test of around one minute in which different movements were done. In Fig. 7.8 a time versus trick plot is shown; for example, at time t = 19.8 s trick number one was performed, which corresponds to a 360 degree turn about the roll axis θ. The processed Euler angle data is shown in Fig. 7.9.
Note that Fig. 7.8 and 7.9 are Matlab plots that are only available on the computer that is receiving and processing the data. In addition, the program streams real time data to a Plotly plot. This allows anyone with the related address to watch the detected tricks in real time on any platform, just by having a simple URL. In Fig. 7.10 an example is shown on a smart phone, where
Figure 7.6: Sensor measurements for a 180 degrees flip along the z (yaw axis).
Figure 7.7: Time and Euler angles plot for several flips to illustrate the unwrap.
Figure 7.8: Detected trick number over time for the one-minute test.
Figure 7.9: Euler angles over time when different tricks were made.
the x-axis is time and the y-axis is the movement that was detected, from 1 to n. Streaming the data to Plotly is very convenient because it can be visualized on many platforms concurrently and it is stored in the cloud, which means that it can be accessed at any time.
Chapter 8
Summary and Conclusions
During this thesis, the existing orientation representation techniques have been described in Ch. 2: quaternions, Euler angles and rotation matrices. Later on, in Ch. 3 and Ch. 4, different methods to calculate the orientation using low cost sensors such as an accelerometer, magnetometer and gyroscope were presented. In Ch. 5 the experimental setup for testing the algorithms was described, and in Ch. 6 this setup was used to empirically test the performance of the different orientation algorithms. After determining the most suitable algorithm for the study case, which was the Madgwick one due to its accuracy and computational cost, an application of the thesis was presented in Ch. 7. In this application the device is attached to a moving object, like a skateboard or a bike, and through some data processing in Matlab different movements are detected.
Through the different tests that were performed, these are the conclusions
that were reached:
• Gyroscopes can provide a really good estimation of the orientation even when strong external accelerations are applied. However, low cost gyroscopes are quite noisy and, after the integration used to estimate the angle from the angular speed, they tend to drift at high rates. This makes gyroscopes alone only valid over short time periods (i.e. a few minutes).
However, yaw estimation has no fusion and is taken directly from the gyroscope, so the drift is still present.
• Calibration is extremely important to achieve good performance. Accelerometer and gyroscope calibration is quite easy to apply, with only a scale/offset and an offset respectively, and is not as critical as the magnetometer one. There are several ways of calibrating the magnetometer; the one used in this thesis, known as ellipsoid fit, works well and is not very computationally demanding.
• The accelerometer and gyroscope can be calibrated just once, as they do not drift much with pressure or temperature. If they drift a little, it does not compromise the performance of the algorithm at all.
• The magnetometer has to be calibrated before every use because it is very influenced by its surroundings: ferromagnetic materials or other external magnetic fields make the measurements totally unreliable. In the device used in the experimental setup the WiFi module was placed very close to the magnetometer, and this was taken into account for the calibration.
• The DCM sensor fusion algorithm is able to estimate pitch, roll and yaw without any drift using the accelerometer, magnetometer and gyroscope, but it is very influenced by external accelerations. It is also quite computationally efficient, so it can run on low cost microprocessors such as an Arduino.
• The Madgwick algorithm was selected as the most suitable algorithm for the application of the thesis, where the orientation has to be estimated in high acceleration scenarios. Its computational cost is slightly higher than the DCM's, so it can also run on low cost microprocessors.
• The Kalman Filter and its variations, like the Extended Kalman Filter, are very well known techniques that can perform sensor fusion or estimate the drift of the gyroscope as an additional state. However, their computational cost is quite high and they cannot be implemented on an Arduino.
• Low cost sensors can provide a good estimation of the orientation, but note that the error can be of some degrees. Aerospace grade IMUs or AHRS can cost from some hundreds to several thousand euros, so they cannot be compared with one costing a few euros.
• As said before, accelerometers, magnetometers or gyroscopes by themselves cannot provide a full estimation of the orientation: accelerometers cannot estimate yaw, magnetometers cannot estimate pitch, and gyroscopes can estimate all the angles but with drift. Sensor fusion techniques are very useful in scenarios like this, where each sensor provides some information that the others do not.
Chapter 9
Future Work
In this chapter some future work ideas and research lines are presented, both for hardware and software. Regarding hardware, these are some ideas that could be improved in future versions of the device:
• The current device used for running the experiments is based on an Arduino MKR1000 plus a nine degrees of freedom sensor. As seen in the budget, it costs above 100 euros, so for a final product it is important to keep it cheap. Using other microprocessors and sensors bought directly from the supplier, a price of less than 20 euros can be achieved.
• The device has three different ICs, one for each sensor (accelerometer, magnetometer, gyroscope). There are options on the market that gather all of them into a single IC, which would allow the size to be reduced.
• Depending on the application, the battery size would need to be changed. In the application described in this thesis a battery life of 2 or 3 hours should be enough, so smaller batteries can be used in order to reduce the weight.
• The device uses UDP sockets to send data to the receiver over a WiFi network. This means that an access point is needed, which might be a drawback. A Bluetooth module could be another option instead of the WiFi one.
Related to software, sensor fusion algorithms and data processing:
• The Madgwick and DCM fusion algorithms worked quite well for estimating the orientation of the sensor. However, the DCM implementation is not as optimized as the Madgwick one, so optimizing it could be future work.
• Sensor fusion using Kalman filters on low cost microprocessors is a complex task because of the lack of computational resources. Studying its performance on an Arduino could be an interesting project.
• In this project, a simple IoT application was tested. An interesting follow-up would be to use many devices like the one presented in this project for a particular application.
References
[1] Madgwick Sensor fusion on LSM9DS0. [Online forum comment], 2014. Available at: Link.
[4] Peter Bartz. Razor AHRS Arduino Code. [Computer program], 2013. Available at: Link.
[6] Peter Bartz. Razor 9DOF Attitude and Heading Reference System, 2016. Available at: Link.
[7] Mona Berciu. Rotations and the Euler Angles. 2014. Available at: Link.
[10] Pieter-Jan Van de Maele. Reading an IMU Without Kalman: The Complementary Filter. 2013. Available at: Link.
[12] James Diebel. Representing Attitude: Euler Angles, Unit Quaternions, and Rotation Vectors. 2006. Available at: Link.
[14] Aleš Janota et al. Improving the Precision and Speed of Euler Angles Computation from Low-Cost Rotation Sensor Data. University of Žilina, 2015. Available at: Link.
[16] Herbert Goldstein, Charles Poole, John Safko. Classical Mechanics. Addison Wesley, 3rd edition. Available at: Link.
[18] Noel H. Hughes. Quaternion to Euler Angle Conversion for Arbitrary Rotation Sequence Using Geometric Methods. Available at: Link.
[20] Keller Kindt. Arduino Wire Library. [Computer program], 2016. Available at: Link.
[21] Christopher Konvalin. Compensating for Tilt, Hard-Iron, and Soft-Iron Effects. 2009. Available at: Link.
[22] Jack B. Kuipers. Quaternions and Rotation Sequences. Calvin College, 2000. Available at: Link.
[27] Sebastian O. H. Madgwick. Open Source AHRS with x-IMU. 2013. Available at: Link.
[28] Sandeep Mistry. WiFi Library for the Arduino WiFi 101 Shield. [Computer program], 2016. Available at: Link.
[31] Plotly. Plotly Library for Matlab. [Computer program], 2017. Available at: Link.
[32] William Premerlani and Paul Bizard. Direction Cosine Matrix IMU: Theory. 2009. Available at: Link.
[34] Tobias Simon. Madgwick IMU/AHRS and Fast Inverse Square Root. [Online forum comment], 2012. Available at: Link.
[37] Erick Macias et al. Nine-Axis Sensor Fusion Using the Direction Cosine Matrix Algorithm on the MSP430F5xx Family. Texas Instruments, 2012. Available at: Link.
[39] Xia Yan et al. Research on Random Drift Modeling and a Kalman Filter Based on the Differential Signal of MEMS Gyroscope. IEEE, 2013. Available at: Link.
[42] Andrea Vitali. Ellipsoid or Sphere Fitting for Sensor Calibration. 2016. Available at: Link.
[43] Gordon Wetzstein. Inertial Measurement Units II: Sensor Fusion and Quaternions. Stanford. Available at: Link.