A. Relative Position Initialization Filter

For complete details of this filter's formulation and a performance analysis, the reader is pointed to a companion paper [4]. Here, we motivate the high-level concept behind its design.

In this method, no prior information about the relative pose of each platform is needed. Instead, the motion of each platform is used to construct a graph with the range measurements between platforms and the principal component distance traveled over multiple locations. The constructed graph presented in Figure 2 is analogous to a five-bar linkage. Depending on the platforms' trajectories, four cases arise:

1) If the platforms are traveling along intersecting lines, there are exactly four solutions for the relative position.
2) If the platforms are traveling parallel with different velocities, there are exactly two solutions for the relative position.
3) If the platforms are traveling parallel with equal velocities, there are an infinite number of solutions for the relative position.
4) If the platforms are traveling along the same line in opposite directions, there is exactly one solution for the relative position.

In this problem, only cases 1 and 2 are considered because cases 3 and 4 are easily detectable and are less likely than cases 1 and 2. Using the constructed linkage, the relative position in polar coordinates can be obtained by solving the loop closure equations of the linkage.

$$\alpha \cos\theta_3 = \beta \qquad (1)$$
for predicting the relative pose and the smoothed angular coordinate for updating the measurements. Once the filters converge, the coordinate frame transformation is calculated for each of the four solutions. At this point, one of the platforms must turn in a different direction and continue traveling at a new constant velocity. Using the estimated coordinate frame transformation, the relative position is propagated during this maneuver using INS. Now, the expected distance between platforms can be calculated using the INS solution. Using the difference between the ranging radio measurement and the expected distance, the likelihood is calculated for each of the four solutions. The solution with the highest likelihood is the solution nearest to the correct solution.
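This disambiguation step lends itself to a short sketch. The following is a minimal illustration only; the function and variable names are hypothetical, and the Gaussian range-noise model is an assumption made for the example, not a detail taken from the paper:

```python
import numpy as np

def most_likely_solution(candidates, measured_ranges, sigma_r=0.5):
    """Score each of the four candidate relative-position solutions against
    the ranging-radio measurements gathered during the maneuver.

    candidates      : list of (N, 3) arrays -- relative position of platform A
                      in platform B's frame at N epochs, each candidate
                      propagated through the maneuver with the relative INS
    measured_ranges : (N,) array of peer-to-peer range measurements (m)
    sigma_r         : assumed 1-sigma Gaussian ranging noise (m)
    """
    log_likelihoods = []
    for rel_pos in candidates:
        expected = np.linalg.norm(rel_pos, axis=1)  # expected inter-platform distance
        residual = measured_ranges - expected       # measured-minus-expected ranges
        log_likelihoods.append(-0.5 * np.sum((residual / sigma_r) ** 2))
    return int(np.argmax(log_likelihoods))          # index of the most likely solution
```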
B. Computer Vision Relative Pose Estimation

A commonly used model of image formation is the pinhole camera model, which assumes that the model for a camera p can be parameterized using three elements: a 3×3 upper triangular intrinsic camera parameter matrix, $K^p$; a 3×3 rotation matrix, $R_W^p$, representing a rotation of 3D world points to camera-centered 3D points; and lastly a 3-element vector, $c^p$, representing camera p's position in world coordinates. Given these parameters, the full 3×4 perspective projection matrix can be defined by Equation 3.

$$P^p = K^p \begin{bmatrix} R_W^p & -R_W^p c^p \end{bmatrix} \qquad (3)$$

Then the transformation projecting a 3D world point $\tilde{y}$ onto a 2D image point $\tilde{x}^p$ using the homogeneous coordinate representation can be defined by Equation 4.

$$\tilde{x}^p = P^p \tilde{y} \qquad (4)$$
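As a concrete illustration of Equations 3 and 4, the sketch below builds $P^p$ and projects a world point; the intrinsic and pose values are illustrative, not taken from the paper:

```python
import numpy as np

def projection_matrix(K, R_wc, c_w):
    """P = K [R | -Rc] (Equation 3): world-to-pixel projection for camera p."""
    return K @ np.hstack((R_wc, -R_wc @ c_w.reshape(3, 1)))

def project(P, y_w):
    """Project a 3D world point to pixel coordinates (Equation 4)."""
    y_h = np.append(y_w, 1.0)   # homogeneous world point
    x_h = P @ y_h               # homogeneous image point
    return x_h[:2] / x_h[2]     # dehomogenize

# Illustrative values (not from the paper):
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R = np.eye(3)                   # camera aligned with world axes
c = np.array([0.0, 0.0, -10.0]) # camera 10 m behind the world origin
print(project(projection_matrix(K, R, c), np.zeros(3)))  # -> principal point (320, 240)
```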
Supposing that two such camera models for cameras A and B were available, the relative pose can be directly calculated by subtracting the positions and calculating the rotation matrix between cameras A and B.

$$R_A^B = R_W^B (R_W^A)^T \qquad (5)$$
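Equation 5 can be checked numerically; the rotations below are arbitrary illustrative values, not data from the paper:

```python
import numpy as np
from scipy.spatial.transform import Rotation

R_wa = Rotation.from_euler('z', 30, degrees=True).as_matrix()  # world -> camera A
R_wb = Rotation.from_euler('z', 75, degrees=True).as_matrix()  # world -> camera B
R_ab = R_wb @ R_wa.T  # Equation 5: rotation taking A-frame points to B-frame
# The residual between the two cameras should be the remaining 45-degree rotation
assert np.allclose(R_ab, Rotation.from_euler('z', 45, degrees=True).as_matrix())
```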
With the goal of associating pixels between two images and calculating the relative transformation, a three-step process is employed. First, the sets of images from the two cameras are searched to find corresponding images using a modification of latent Dirichlet allocation (LDA) [5] coupled to a Hidden Markov Model (HMM). We extract features from the imagery and perform vector quantization in feature space to create a discrete set of "words" (so named because the LDA model was developed for document analysis). The LDA model is a hierarchical statistical model that captures the co-occurrence of features in imagery. Finding corresponding images proceeds by using the statistical model to calculate the maximum likelihood match of the set of features from platform B to the models learned on the images of platform A. The HMM is used to enforce time consistency between observations.

Next, we use frame-to-frame matching to align the top matching candidate images from the set of corresponding images. A feature-matching approach using the Scale-Invariant Feature Transform (SIFT) [6] is used in this study.

Lastly, given the set of correspondences between any two images and the 3×3 intrinsic camera parameter matrix, $K$, the five-point algorithm [7] can be used to compute an estimate of the essential matrix.

$$E = R_A^B \left[ t_A^B \right]_{\times} \qquad (6)$$

where $R_A^B$ represents the relative rotation from platform A's coordinate frame to platform B's and $t_A^B$ represents the relative translation between the two coordinate frames. Thus, a 3D point in platform A's coordinate frame can be transformed into platform B's coordinate frame given the true relative pose.

$$x^B = R_A^B x^A + t_A^B \qquad (7)$$

Since $E$ is recovered from correspondences, only the direction of $t_A^B$ can be computed due to the projective ambiguity. The relative pose from those correspondences is calculated using the five-point algorithm in a RANdom SAmple Consensus (RANSAC) [8] loop for robustness.
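As a rough sketch of this step, OpenCV ships a five-point estimator wrapped in RANSAC; the paper does not state which implementation it uses, and the intrinsics, synthetic geometry, and tuning values below are illustrative assumptions:

```python
import cv2
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # illustrative intrinsics

# Synthetic correspondences generated from a known relative pose (Equation 7):
rng = np.random.default_rng(0)
pts3d = rng.uniform([-5, -5, 10], [5, 5, 20], (50, 3))  # 3D points in A's frame
R_ab = cv2.Rodrigues(np.array([0.0, 0.1, 0.0]))[0]      # small rotation about y
t_ab = np.array([1.0, 0.0, 0.0])                        # baseline along x

def proj(X):
    """Pinhole projection with intrinsics K (Equations 3-4, camera at origin)."""
    uvw = X @ K.T
    return uvw[:, :2] / uvw[:, 2:]

pts_a = proj(pts3d)                   # observations from camera A
pts_b = proj(pts3d @ R_ab.T + t_ab)   # observations from camera B

# Five-point algorithm inside a RANSAC loop (OpenCV's implementation):
E, inlier_mask = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
# Recover R_A^B and the direction of t_A^B; scale is unobservable, per the
# projective ambiguity discussed above.
_, R_est, t_est, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=inlier_mask)
```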
C. Relative Navigation Filter

The nonlinear estimator used is an unscented Kalman filter (UKF). For details on UKF implementations, the readers are referred to [9] and [10]. The states estimated by the filter are the relative position, relative velocity, and relative orientation between platform B and platform A. All of the estimated states are represented in the body frame of platform B.

$$x = \begin{bmatrix} x_{AB}^B \\ y_{AB}^B \\ z_{AB}^B \\ \dot{x}_{AB}^B \\ \dot{y}_{AB}^B \\ \dot{z}_{AB}^B \\ \alpha_{AB}^B \\ \beta_{AB}^B \\ \gamma_{AB}^B \end{bmatrix} = \begin{bmatrix} r_{AB}^B \\ v_{AB}^B \\ \theta_{AB}^B \end{bmatrix} \qquad (8)$$

where $x_{AB}^B$ represents the relative position of platform A with respect to platform B along the x-axis of the body frame of platform B. Similarly, $\alpha_{AB}^B$ represents the relative rotation of platform A around the x-axis of platform B's body frame in order to match orientation with platform B. The nonlinear Kalman filter predicts relative pose using relative INS integration and issues measurement updates provided by computer vision, peer-to-peer ranging radio, and magnetometer sensor data. The filter estimates the relative position and velocity vectors of platform A in the body frame of platform B, as well as the attitude that represents the transformation from platform A to B, $R_A^B$.
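As a structural sketch only — the paper does not name a software package, and FilterPy, the constant-velocity prediction, and all numeric values below are illustrative assumptions — the nine-state vector with a ranging-radio update might be wired into an off-the-shelf UKF like this:

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

def fx(x, dt):
    """Prediction sketch: a constant-velocity stand-in for the relative INS
    integration the paper actually uses for the time update."""
    x = x.copy()
    x[0:3] += x[3:6] * dt  # position integrates velocity
    return x               # attitude states held constant in this sketch

def hx_range(x):
    """Ranging-radio measurement model: inter-platform distance."""
    return [np.linalg.norm(x[0:3])]

# State (Equation 8): [x, y, z, xdot, ydot, zdot, alpha, beta, gamma]^T in B's body frame
sigmas = MerweScaledSigmaPoints(n=9, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=9, dim_z=1, dt=0.1,
                            fx=fx, hx=hx_range, points=sigmas)
ukf.x = np.zeros(9)
ukf.x[0] = 100.0                # e.g., A starts 100 m along B's x-axis
ukf.P *= 10.0                   # loose initial covariance
ukf.R = np.array([[0.5 ** 2]])  # illustrative 0.5 m ranging noise

ukf.predict()
ukf.update([100.8])             # one peer-to-peer range measurement (m)
# Note: a real implementation must handle angle wrap-around for alpha, beta,
# gamma when averaging sigma points (see [9], [10]); omitted in this sketch.
```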
1) Relative Inertial Navigation: The inertial navigation formulation used in this study was also used by Fosbury and Crassidis in [11], where the authors derived full navigation equations and error-state expressions for relative navigation applications. Fosbury and Crassidis presented this work for its
TABLE III
Initial Condition Error Distributions

TABLE IV
IMU Quality Monte Carlo Distribution

TABLE V
Computer Vision Update Frequency Monte Carlo Distribution

    Computer Vision Frequency (Hz)   # of Runs
    1                                105
    2                                 95
    5                                101
    10                               111
    25                                88

Fig. 4. ENU Representation of Flight Path 2

TABLE I
Inertial Measurement Unit Scaling Factors

Fig. 7. CDF of Position and Attitude Error from 500 trials (considering last 2/3 of each flight)
geometry, where the flight in which the aircraft started closer together yielded more accurate pose estimation than the two flights with larger initial separation.

[Figure: "Sensitivity to Flight Path" — CDFs of 3D RSS relative position (m) and 3D RSS relative attitude (deg) for Paths 1, 2, and 3.]

Fig. 10. CDF of Position and Attitude Error from Varying Computer Vision Update Frequency (considering last 2/3 of each flight). ["Sensitivity to Computer Vision Update Rate" panels: CDFs of 3D RSS relative position (m) and 3D RSS relative attitude (deg) for 1, 2, 5, 10, and 25 Hz.]

Fig. 12. CDF of Position and Attitude Error from ESF Values of Ranging Radio and Magnetometer Sensors (considering last 2/3 of each flight)

Fig. 13. CDF of Position Error from Initial Condition Errors in Position and Velocity Estimation (considering last 2/3 of each flight)

Fig. 14. CDF of Position and Attitude Error from Initial Condition Errors in Attitude Estimation (considering last 2/3 of each flight)
integration of inertial navigation and computer vision. For computer vision updates, a statistical model was used to rapidly determine overlapping content between two sets of imagery such that relative pose can be extracted. Using a set of flight data imagery, a statistical error model was developed and used to assess updates of this nature being fused with other sensors. Based on this analysis, it is shown that this algorithm can provide relative pose estimates in a GPS-denied setting with enough accuracy to assist in the problem of platform target handoff. In particular, considering overall
ACKNOWLEDGEMENT
This material is based upon work supported by the United
States Air Force under Contract No. FA8650-15-M-1943.
Approved for public release, case number 88ABW-2016-0102.
REFERENCES

[1] A. J. Kerns, D. P. Shepard, J. A. Bhatti, and T. E. Humphreys, "Unmanned aircraft capture and control via GPS spoofing," Journal of Field Robotics, vol. 31, no. 4, pp. 617–636, 2014.
[2] R. Sharma and C. Taylor, "Cooperative navigation of MAVs in GPS denied areas," in Multisensor Fusion and Integration for Intelligent Systems. IEEE, 2008.
[3] J. Knuth and P. Barooah, "Collaborative 3D localization of robots from relative pose measurements using gradient descent on manifolds," in Robotics and Automation. IEEE, 2012.
[4] J. Strader, Y. Gu, J. Gross, M. DePetrillo, and J. Hardy, "Cooperative relative localization for moving UAVs with single link range measurements," in Proceedings of the IEEE/ION Position, Location and Navigation Symposium. IEEE/ION, 2016.
[5] D. M. Blei, A. Y. Ng, and M. I. Jordan, "Latent Dirichlet allocation," Journal of Machine Learning Research, vol. 3, pp. 993–1022, 2003.
[6] D. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[7] H. Stewenius, C. Engels, and D. Nistér, "Recent developments on direct relative orientation," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 60, no. 4, pp. 284–294, 2006.
[8] M. Fischler and R. Bolles, "Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981.
[9] M. Rhudy and Y. Gu, "Understanding nonlinear Kalman filters, part II: An implementation guide," 2013.
[10] D. Simon, Optimal State Estimation: Kalman, H-infinity, and Nonlinear Approaches. John Wiley & Sons, 2006.
[11] A. M. Fosbury and J. L. Crassidis, "Relative navigation of air vehicles," Journal of Guidance, Control, and Dynamics, vol. 31, no. 4, pp. 824–834, 2008.
[12] R. A. Mayo, "Relative quaternion state transition relation," Journal of Guidance, Control, and Dynamics, vol. 2, no. 1, pp. 44–48, 1979.
[13] J. N. Gross, Y. Gu, and M. B. Rhudy, "Robust UAV relative navigation with DGPS, INS, and peer-to-peer radio ranging," IEEE Transactions on Automation Science and Engineering, vol. 12, no. 3, pp. 935–944, 2015.
[14] R. Watson, V. Sivaneri, and J. Gross, "Performance characterization of tightly-coupled GNSS precise point positioning inertial navigation within a simulation environment," in 2016 AIAA Guidance, Navigation, and Control Conference. AIAA, 2016.
[15] P. D. Groves, Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems. Artech House, 2013.
[16] "HG1930 datasheet," https://aerospace.honeywell.com/products/communication-nav-and-surveillance/inertial-navigation/inertial-measurement-units/hg1930, 2015.