Article
Vision-Less Sensing for Autonomous Micro-Drones †
Simon Pikalov, Elisha Azaria , Shaya Sonnenberg, Boaz Ben-Moshe ∗ and Amos Azaria
Computer Science Department, Ariel University, Ariel 40700, Israel; [email protected] (S.P.);
[email protected] (E.A.); [email protected] (S.S.); [email protected] (A.A.)
* Correspondence: [email protected]
† This paper is an extended version of our paper published in: Ben-Moshe, B.; Landau, Y.; Marbel, R.; Mishiner,
A. Bio-Inspired Micro Drones. In Proceedings of the 2018 IEEE International Conference on the Science of
Electrical Engineering in Israel (ICSEE), Eilat, Israel, 12–14 December 2018.
Abstract: This work presents a concept of intelligent vision-less micro-drones, which are motivated by flying animals such as insects, birds, and bats. The presented micro-drone (named BAT: Blind Autonomous Tiny-drone) can perform bio-inspired complex tasks without the use of cameras. The BAT uses LIDARs and self-emitted optical flow in order to perform obstacle avoidance and maze solving. The controlling algorithms were implemented on an onboard microcontroller, allowing the BAT to be fully autonomous. We further present a method for using the information collected by the drone to generate a detailed mapping of the environment. A complete model of the BAT was implemented and tested in several scenarios, both in simulation and in field experiments, in which it was able to explore and map complex buildings autonomously, even in total darkness.
to allow the drone to maintain position. Modern flight controllers for drones, such as
Pixhawk [6], support several inertial and navigation sensors (e.g., MEMS gyroscope, accelerometer, magnetometer, barometer, and GNSS receiver). When fused together, these sensors allow relatively robust navigation in outdoor conditions. In indoor flights, GNSS navigation is
mostly insufficient; in such cases, cameras and range sensors are commonly used for visual
navigation and for obstacle detection and avoidance [7,8]. The vision of having a sustainable swarm of autonomous aerial vehicles [9] has attracted researchers from both academia and
industry [10–13]. Yet, even the concept of a single autonomous drone still encapsulates
a wide range of challenges [14,15]. Recent improvements in hardware and software for
edge deep learning platforms [16,17] allows micro-drones to use visual sensors for obstacle
avoidance and navigation [18]. In this paper, we present a vision-less alternative approach;
we conjecture that in many real-world natural cases, the suggested framework performs
better than vision-based solutions. There have been several previous attempts to deploy autonomous drones for mapping indoor environments, most of which require vision sensors. Dowling et al. [19] presented a method for mapping indoor environments using
a drone. While their results seem promising, their approach uses the Erle-copter drone,
which is relatively large (at least 10 times larger than BAT). In addition, their approach
requires substantial computing power using a Raspberry Pi 3b. Similarly, Zhang et al. [20]
proposed a method for 3D cave mapping for archaeology applications using a drone. Their
proposed drone is intended for use with human support for controlling it, and thus cannot
be seen as a fully autonomous drone. While the exact drone type is not specified in the paper, the sensors and computing power described preclude the use of a micro-drone.
Li et al. [21] propose a method for using a drone to map mine environments. They
show the efficiency of their method by running experiments both in simulation and in the
real world with their developed drone. They propose the use of the DJI Matrice 100 drone,
which weighs over 3 kg and has a clearance of over one meter. As stated, all these attempts
require large drones, and thus cannot benefit from all of the advantages of micro-drones.
Figure 1. Our Blind Autonomous Tiny-drone (BAT), which is based on the Tello commercial tiny-drone (A). BAT is equipped with a multi-ranger LIDAR array (front, left, right, up, and back) marked by (C), and it uses a WiFi-based microcontroller (B) to control the Tello as if it were a remote control. Prop guards protect BAT (E), four down-facing LEDs are used for low-light conditions (D), and a small LoRa (UHF) antenna (F) is used to transmit mapping data to the mapping station.
1.2. Motivation
Motivated by bio-inspired robotics, this paper considers challenges related to sustainable robotics [22]; in particular, we aim to design and construct an aerial robot that can perform some kind of robotic life-cycle rather than dedicated predefined tasks. The micro-drone's
sustainability algorithm will provide it with the basic abilities needed to survive in its environment while performing the predefined mapping task (e.g., sensing and searching). The scope of sustainable robotics research is wide and involves multiple fields of research (Robotics, Machine Learning, Multi-Agent Systems, and Human–Machine Interaction). The general use case of a swarm of autonomous micro-drones is not a single well-defined task but rather a general-purpose mission. Therefore, in order to accomplish such a vision, we start by defining the following individual (bio-inspired) capabilities for a single drone.
• Obstacle detection: this property is required for detecting hazards while flying and
acting accordingly.
• Path planning: smart planning of the flight path is crucial for the drone's resource saving and mission efficiency.
• Mapping: allowing the micro-drone to map and learn its environment, in a way that
the next mission can benefit from information the drone has acquired in the last one;
moreover, this info can be shared between drones.
• Communication with others: allowing drones to share information and perform coordinated missions.
This research focuses on the notion of sustainable robotics for a single micro-drone that is autonomous and does not need vision for navigation.
in each coordinate. The optical flow sensor assists the drone with hovering over a given location without drift; it is also used to calculate the drone's relative velocity. The optical flow sensor is only used internally by the drone to compute the velocity and cannot be accessed directly. The ToF range sensor, located at the bottom of the drone facing the ground, is used to determine the relative height of the drone from an object below it, which is essential for climbing staircases and avoiding obstacles from below. Using the barometer, the drone can detect a relative altitude (height) with respect to the takeoff point.
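As a concrete illustration of how a relative altitude can be derived from barometric pressure, the following minimal C++ sketch applies the standard hypsometric approximation; the helper name and constants are our own illustration and not part of BAT's firmware.

```cpp
#include <cmath>

// Approximate relative altitude (in meters) from barometric pressure,
// using the standard-atmosphere hypsometric approximation.
// p_hPa is the current pressure; p0_hPa is the pressure sampled at takeoff.
float relativeAltitude(float p_hPa, float p0_hPa) {
    // 44330 m and the exponent 1/5.255 come from the barometric formula
    // for the troposphere at standard temperature.
    return 44330.0f * (1.0f - std::pow(p_hPa / p0_hPa, 1.0f / 5.255f));
}
```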
2.2.1. Microcontroller
To command BAT autonomously and to process the various sensor data, an ESP32
based microcontroller was used; we have mainly focused on the WiFi LoRa 32 (V2) by
HELTEC. This small form-factor microcontroller weighs about 5 g and has an ESP32 (dual-core 32-bit MCU + ULP core) microprocessor, which includes WiFi, Bluetooth, and a LoRa node chip (SX1278) with its external antenna.
2.2.3. LEDs
The optical flow sensor of the Tello drone requires at least 100 lumens per square meter (lux) in order to function properly, which may be challenging in a dark environment. The optical flow sensor is not only used for obtaining the measurements required for mapping; poor lighting conditions may also cause the drone to drift even when it is not receiving any commands from the microcontroller. Furthermore, the Tello drone does not accept any commands from the microcontroller in poor lighting conditions. Therefore, we added four down-facing LEDs to BAT's legs in order to provide additional lighting; in practice, the LEDs provide enough light even in a completely dark environment.
2.3. Communication
Several different communication protocols were used in this project. The drone and the ESP32 communicate over WiFi utilizing two UDP ports: one for commanding the drone, using both RC commands (throttle, roll, pitch, and yaw) and higher-level commands (i.e., "takeoff", "land", etc.), and the second for receiving the information from the drone's on-board sensors. The multi-ranger deck is connected to the I²C bus; after assigning each sensor its unique I²C address, all of the sensors can be addressed independently. LoRa communication is used to send the data back to the mapping station; human intervention, if needed, is also accomplished via LoRa communication. Bluetooth is also used to send the data back to the mapping station; this link is less reliable and is used mainly when BAT is close to the mapping station. See Figure 2 for a communication diagram.
Figure 2. The communication diagram of BAT. BAT has a companion computer based on an ESP32 microcontroller, which supports WiFi, Bluetooth, and LoRa. The WiFi link controls the (original) Tello as if it were a remote control, the communication with the mapping station is performed via Long Range (LoRa), and the Bluetooth modem is reserved for short-range debugging and can in the future serve as a basis for drone-to-drone (mesh) communication.
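To make the command path concrete, the following ESP32 (Arduino core) sketch shows the WiFi/UDP link described above. The Tello SDK text commands and ports (8889 for commands, 8890 for the state stream) are the publicly documented ones; the SSID, helper names, and command values are illustrative and not BAT's actual firmware.

```cpp
#include <WiFi.h>
#include <WiFiUdp.h>

// Tello SDK endpoints (documented by the Tello SDK): text commands are sent to
// 192.168.10.1:8889 and the drone streams its state back on UDP port 8890.
static const IPAddress TELLO_IP(192, 168, 10, 1);
static const uint16_t  TELLO_CMD_PORT   = 8889;
static const uint16_t  TELLO_STATE_PORT = 8890;

WiFiUDP cmdLink;    // commands and acknowledgements
WiFiUDP stateLink;  // on-board sensor feedback

// Send a single SDK text command such as "command", "takeoff", or "rc 0 20 0 0".
void sendTello(const char* cmd) {
  cmdLink.beginPacket(TELLO_IP, TELLO_CMD_PORT);
  cmdLink.print(cmd);
  cmdLink.endPacket();
}

void setup() {
  WiFi.begin("TELLO-XXXXXX");          // join the drone's own access point (placeholder SSID)
  while (WiFi.status() != WL_CONNECTED) delay(100);
  cmdLink.begin(TELLO_CMD_PORT);       // local port for command replies
  stateLink.begin(TELLO_STATE_PORT);   // local port for the state stream
  sendTello("command");                // switch the Tello into SDK mode
  sendTello("takeoff");
}

void loop() {
  // RC channels: roll, pitch, throttle, yaw, each in the range [-100, 100].
  sendTello("rc 0 20 0 0");            // gentle forward flight (illustrative values)
  // Drain the state stream (orientation, velocities, ToF height, barometer, ...).
  char buf[256];
  if (stateLink.parsePacket() > 0) {
    int len = stateLink.read(buf, sizeof(buf) - 1);
    if (len > 0) buf[len] = '\0';      // parse the key:value pairs here
  }
  delay(50);
}
```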
3. Controlling Algorithm
Recall that BAT's goal is to explore and map an unknown indoor environment. Therefore, it aims to maximize the newly visited regions, while not revisiting known areas or traveling in an endless loop, and while performing a safe flight and avoiding obstacles. To that end, BAT's control algorithm is based on the concept of the wall-follower algorithm, which follows the right wall [23]. Clearly, the algorithm can be mirrored to follow the left wall, which can be useful if BAT wishes to return to the takeoff point. BAT has the following discrete states:
• Ground: BAT is on the ground. This is the initial state.
• Takeoff: BAT starts flying upwards and gets to a predefined altitude (e.g., 1 m).
• Control: The main control loop, with the following sub-states:
  – Rotate C.C.W.: BAT slightly rotates counterclockwise (to align with the right wall).
  – Emergency: BAT brakes to avoid crashing.
  – Tunnel: BAT centers in between the left and right walls while maintaining the desired speed.
  – Turn C.W.: BAT turns 90 degrees clockwise (to find the right wall).
  – Fly Forward: BAT flies forward while making minor adjustments to stay within predefined bounds, i.e., its distance from the right wall and the desired speed.
The role of the high-level control algorithm is to select the next state from the five possible states (Rotate C.C.W., Emergency, Tunnel, Turn C.W., Fly Forward) each time the Control state is reached (see Figure 3); a sketch of this selection logic is given below.
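The following C++ sketch only illustrates the shape of this state-selection logic based on the ranger readings; the threshold names and values are placeholders, and the decision rules actually used by BAT are those given in Algorithm 1 and Appendix A.

```cpp
enum class State { Ground, Takeoff, Control,
                   RotateCCW, Emergency, Tunnel, TurnCW, FlyForward };

// Illustrative selection of the next state from the front, right, and left
// ranges (in meters). The threshold names and values are placeholders;
// BAT's actual decision rules and constants are in Algorithm 1 / Appendix A.
State selectNextState(float front, float right, float left) {
  const float EMERGENCY_DIST = 0.3f;   // brake distance (assumed)
  const float WALL_DIST      = 1.0f;   // "wall ahead / wall lost" distance (assumed)
  const float TUNNEL_WIDTH   = 1.5f;   // narrow-corridor width (assumed)

  if (front < EMERGENCY_DIST)      return State::Emergency;   // brake to avoid crashing
  if (front < WALL_DIST)           return State::RotateCCW;   // align with the right wall
  if (left + right < TUNNEL_WIDTH) return State::Tunnel;      // center between the walls
  if (right > WALL_DIST)           return State::TurnCW;      // right wall lost: turn to find it
  return State::FlyForward;                                   // follow the right wall
}
```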
BAT's desired value for the roll, pitch, and throttle channels is determined based on the range from the corresponding direction, using a proportional–integral–derivative (PID) controller for each control channel [24]. A PID controller is not used for the yaw, because it is controlled by the logic of the state machine to enable right-wall navigation. When there are no obstacles, BAT accelerates until it reaches the predefined maximal speed and height.
Before we provide the PID controller formula, we introduce the following notation: g represents the desired goal value; e(t) represents the current error, i.e., the difference between g and the current measurement; u(t) denotes the weighted combined value computed by the controller at time t; and Kp, Ki, and Kd are manually defined constants that represent the proportional, integral, and derivative gains, i.e., the weight given to the current, past, and predicted future error, respectively.
Finally, the PID is based on the following formula:
$$u(t) = K_p \cdot e(t) + K_i \int e(t)\,\Delta t + K_d \cdot \frac{\Delta e}{\Delta t} \qquad (1)$$
Note that in order to obtain a robust and reliable controller, it is recommended practice to constrain the range of the controller output; the sketch below includes such a clamp. BAT's control-loop logic is described in Algorithm 1. The constants as implemented in BAT's controllers can be found in Appendix A.
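As a concrete reading of Equation (1), the sketch below performs one discrete PID step per control channel, including the recommended output clamp; the gains and limits shown are placeholders, not BAT's actual constants (which appear in Appendix A).

```cpp
// One discrete PID step for a single control channel (roll, pitch, or throttle),
// following Equation (1). The gains and the output clamp are placeholders;
// the constants used on BAT are listed in Appendix A.
struct Pid {
  float kp, ki, kd;        // proportional, integral, and derivative gains
  float outMin, outMax;    // constraint range for the controller output
  float integral = 0.0f;   // running sum of e(t)*dt
  float prevError = 0.0f;  // e at the previous step, for the de/dt term

  float step(float goal, float measurement, float dt) {
    const float e = goal - measurement;      // current error e(t)
    integral += e * dt;                      // integral term
    const float derivative = (e - prevError) / dt;
    prevError = e;
    float u = kp * e + ki * integral + kd * derivative;
    if (u > outMax) u = outMax;              // clamp to the constraint range
    if (u < outMin) u = outMin;
    return u;
  }
};
```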
4. Mapping
The main goal of BAT is to provide a mapping of an indoor environment. The map-
ping is performed using a coordinate system that is relative to BAT’s starting point and
orientation; we denote this coordinate system as the global one. The mapping station
performs the mapping based on the information received from BAT. The mapping station
computes a geometrical model of the environment, which can be visualized in 2D (see Figure 4) and 3D (see Figure 5) graphical representations.
Figure 4. A 2D mapping of a building as computed in the simulator. BAT is marked in red, its path is marked by a dotted magenta trace, its current ranging is marked by blue rays, and the building's 2D map is marked by black dots and lines.
Figure 5. A 3D mapping of a building as transmitted by an actual BAT to the mapping station. BAT (A) is marked in red, its current ranging (B) is marked by blue rays, its path (D) is marked in magenta, and its starting point is marked as (E). The building's 3D point-cloud (C) is marked by black tiles.
4.1. Data
BAT provides the following data:
• Time: the time in seconds since the start of the mission.
• Yaw, Pitch, Roll: BAT’s orientation in degrees around the global axes y, x, and z.
• Vx, Vy, Vz: the velocity relative to the global coordinate system, in m/s.
• Ranges: the range (distance) in meters from the closest object in six directions (up, down, left, right, front, and back) with respect to BAT's current position and orientation. We use Rfront to denote the range to the object in front of BAT, similarly for all the other directions.
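For concreteness, one telemetry sample could be represented by a plain struct such as the following; the field names and layout are our own illustration and not the actual packet format transmitted by BAT.

```cpp
// One telemetry sample as listed above; the field names and layout are an
// illustration only, not the actual packet format transmitted by BAT.
struct BatSample {
  float time;                    // seconds since the start of the mission
  float yaw, pitch, roll;        // orientation [deg] around the global axes
  float vx, vy, vz;              // velocity [m/s] in the global coordinate system
  float rangeUp, rangeDown;      // distance [m] to the closest object in each
  float rangeLeft, rangeRight;   // of the six directions, relative to BAT's
  float rangeFront, rangeBack;   // current position and orientation
};
```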
Py and Pz are computed using the same method. In order to add the objects that were in BAT's proximity to the geometrical model, each provided range is projected from BAT's position and orientation to the global coordinates using the following method. First, the ranges are converted to a vector representation, in $\mathbb{R}^3$, in relation to BAT's position and orientation, such that the front range is converted to $\vec{R}_{front} = (0, 0, R_{front})^T$. Similarly, all other ranges are converted to a vector representation with respect to their direction. That is, ranges in opposite directions (e.g., front and back) have opposite signs, and ranges of different axes are transformed to vector representations with non-zero values at different entries (e.g., $\vec{R}_{down} = (0, -R_{down}, 0)^T$). The resulting vector $\vec{R}$ is transformed by

$$\begin{pmatrix} O_x \\ O_y \\ O_z \end{pmatrix} =
\begin{pmatrix}
\cos\alpha\cos\beta & \cos\alpha\sin\beta\sin\gamma - \sin\alpha\cos\gamma & \cos\alpha\sin\beta\cos\gamma + \sin\alpha\sin\gamma \\
\sin\alpha\cos\beta & \sin\alpha\sin\beta\sin\gamma + \cos\alpha\cos\gamma & \sin\alpha\sin\beta\cos\gamma - \cos\alpha\sin\gamma \\
-\sin\beta & \cos\beta\sin\gamma & \cos\beta\cos\gamma
\end{pmatrix}
\vec{R} +
\begin{pmatrix} P_x \\ P_y \\ P_z \end{pmatrix}$$
After the calculation is performed, the geometrical model is used for composing visualiza-
tions of BAT’s environment.
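For readers who prefer code, the following minimal C++ helper applies the rotation matrix above to a body-frame range vector and adds BAT's position; the helper name, the Vec3 type, and the assumption that the angles are given in radians are our own illustration.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Project a body-frame range vector R into the global frame using the rotation
// matrix above and BAT's position P. alpha, beta, and gamma are the rotation
// angles of the matrix, assumed here to be given in radians.
Vec3 projectRange(const Vec3& R, const Vec3& P,
                  float alpha, float beta, float gamma) {
  const float ca = std::cos(alpha), sa = std::sin(alpha);
  const float cb = std::cos(beta),  sb = std::sin(beta);
  const float cg = std::cos(gamma), sg = std::sin(gamma);

  Vec3 O;
  O.x = (ca * cb) * R.x + (ca * sb * sg - sa * cg) * R.y + (ca * sb * cg + sa * sg) * R.z + P.x;
  O.y = (sa * cb) * R.x + (sa * sb * sg + ca * cg) * R.y + (sa * sb * cg - ca * sg) * R.z + P.y;
  O.z = (-sb)     * R.x + (cb * sg)                * R.y + (cb * cg)                * R.z + P.z;
  return O;
}
```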
4.3. 2D
The 2D visualization is a top-down view of the environment (see Figure 4 for an
example). This visualization is easier to understand, but it does not represent the objects’
and obstacles' heights. Once the mapping station receives BAT's map, an offline loop-closure algorithm (see [25,26]) can be used, on the mapping station side, to improve the map and reduce the drift.
4.4. 3D
The 3D visualization is a perspective view of the environment. This visualization
provides a more complete and realistic view of the environment, but it might be more difficult to perceive the building scheme, as a change along the Y-axis (i.e., going up or down) results in the walls being visualized at different heights. In addition, it is hard to differentiate
between a real object and noise caused by inaccurate data.
small size of our BAT and the use of brushed motors, magnetic field sensors are unreliable and are therefore not used in most micro-drones.
• Optical flow: has a drift that is correlated with the lighting and ground-texture conditions. In most cases, the error is below 10% of the distance.
• Barometer: evaluating the relative height from the air pressure measured by the barometer may result in a drift of up to 10 cm per minute; yet, in our tests, during a 10 min flight the expected altitude error was usually smaller than 30 cm.
The data collected by the drone during a flight of approximately 5 min at an average speed of 0.5 m/s allows us, in most cases, to compute a map with an expected error of 1–2 m.
5. Experimental Results
In this section, we present two sets of experiments: one in simulation (using a simulated BAT) and the other a field test of BAT in the real world.
5.1. Simulation
Using the Microsoft AirSim platform, we developed a custom indoor 3D map. We
further developed a model that simulates all the features of the real-world BAT, including
the range-deck and the Tello drone, using the true physical parameters, such as the drone
mass and dimensions. The simulation includes the following components:
• A modeling of the indoor environment, which includes obstacles and a starting point.
• BAT’s state, which includes its position, velocity and orientation, as well as the sensor
reading, with artificially added noise.
• BAT’s autonomous flight-controlling algorithm.
The developed simulator allows us to first test the algorithm in the simulated environment and, only if the performance in the simulator is satisfactory, to deploy the same algorithm on the real BAT. This method eliminated hardware problems during algorithm development and allowed us to solve the algorithmic part first, and only then deal with the real-world, hardware-related issues.
Figure 6 presents a screenshot of the simulation. The simulated BAT successfully flew in the simulator without colliding with obstacles and managed to explore a complex building with an area of 600 m² over a period of 5 min; see [27] for the complete BAT simulation.
Figure 6. A 3D screenshot of the simulation. The magenta line represents the path of the simu-
lated BAT.
5.2. Real-World
The real-world testing was performed indoors in seven different buildings. Recall that all the computation is performed on BAT's microcontroller, and it did not receive any external commands (see Figures 7 and 8). BAT was able to fly for approximately 5 min at a time (until the battery was drained), without crashing and without returning to the
same spot twice. See [28] for a video of BAT exploring a two-story building and flying up
the stairs from the first floor to the second, without crashing into the walls. BAT successfully
explored all the buildings, and its collected data was used to compose a mapping of these
buildings. Figure 9 shows the 2D mapping as transmitted by BAT to the mapping station. Figure 10 shows the 3D mapping as transmitted by BAT and presented using a point-cloud viewer. During BAT's flights, three to four students were watching and following BAT; their main locations were captured by BAT.
Figure 9. A 2D mapping by BAT, performed in a two-story building: BAT's position is marked in magenta; the left and right sensor ranges are marked in black. Left: BAT has mapped the right side of the first floor and then turned right and started to go upstairs; Right: BAT has reached the second floor. Note that there are several "noisy spots" due to students following BAT.
Figure 10. A 3D mapping of a two-story building. BAT's up, right, and down sensor ranges and its own position are shown as brown, green, black, and magenta lines, respectively. BAT's starting point is marked by (a); (b) marks the stairs to the second floor, as shown in the lower-right image; changes in the ceiling height are marked by (c).
Author Contributions: Conceptualization, B.B.-M. and A.A.; Investigation, S.P., E.A., S.S., B.B.-
M. and A.A.; Methodology, S.P., E.A., B.B.-M. and A.A.; Software, S.P. and E.A.; Hardware, S.S.;
Validation, S.P. and E.A.; Visualization, E.A. and S.P.; Writing—original draft, S.P., E.A., S.S., B.B.-M.
and A.A. All authors have read and agreed to the published version of the manuscript.
Funding: This work was supported in part by the Ministry of Science & Technology, Israel.
Conflicts of Interest: The authors declare no conflict of interest.
Appendix A.
Appendix A.1. LiDAR Sensor
The VL53L1X sensor contains a sensing array of SPADs (single-photon avalanche diodes), an integrated 940 nm invisible light source based on an eye-safe Class 1 VCSEL (vertical-cavity surface-emitting laser), and a low-power embedded microcontroller. For every measurement, the laser beam is emitted from the VCSEL and reflected back from an obstacle to the 16 × 16 SPAD array; the embedded microcontroller derives the distance and the accuracy of the measurement and sends them to the HELTEC microcontroller (over the I²C bus).
The sensors are configured in a continuous ranging mode, for which an "inter-measurement period" and a "ranging duration" (timing budget) can be set. Optimizing each of these parameters can help with measurement accuracy and repeatability; for example, a longer inter-measurement period allows ranging to a farther distance, but it may reduce the measurement frequency and thus make smooth flight less reliable.
All of the VL53L1X sensors have the same default I²C address (0x29), so running all the sensors simultaneously would cause an address conflict; therefore, every sensor must be assigned a unique I²C address. To overcome this issue, the on-board PCA9534 I²C GPIO expander is utilized: all of the VL53L1X sensors' XSHUT pins are connected to PCA9534 I/O pins and can be controlled by software. On startup, the sensors are cycled one by one, released from standby mode, and assigned a new I²C address as part of the sensor setup.
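The following sketch outlines this startup sequence, assuming a Pololu-style VL53L1X Arduino driver, a PCA9534 at its default address 0x20, and a one-to-one mapping of expander pins to XSHUT lines; all three are assumptions on our part, and the actual BAT firmware may differ.

```cpp
#include <Wire.h>
#include <VL53L1X.h>   // Pololu-style driver (assumed); ST's API can be used instead

// PCA9534 I2C GPIO expander driving the XSHUT lines of the five rangers.
// 0x20 is the default address with A0..A2 tied low (an assumption here),
// as is the mapping of expander pin i to the XSHUT of sensor i.
static const uint8_t PCA9534_ADDR = 0x20;
static const uint8_t REG_OUTPUT   = 0x01;   // output port register
static const uint8_t REG_CONFIG   = 0x03;   // pin direction: 0 = output, 1 = input

static const int NUM_SENSORS = 5;           // front, left, right, up, back
VL53L1X sensors[NUM_SENSORS];

void pcaWrite(uint8_t reg, uint8_t value) {
  Wire.beginTransmission(PCA9534_ADDR);
  Wire.write(reg);
  Wire.write(value);
  Wire.endTransmission();
}

void setup() {
  Wire.begin();
  pcaWrite(REG_CONFIG, 0x00);   // all expander pins as outputs
  pcaWrite(REG_OUTPUT, 0x00);   // hold every sensor in standby (XSHUT low)

  for (int i = 0; i < NUM_SENSORS; i++) {
    // Raise XSHUT of sensors 0..i; sensor i wakes up at the default address 0x29.
    pcaWrite(REG_OUTPUT, (1 << (i + 1)) - 1);
    delay(10);
    sensors[i].setTimeout(500);
    sensors[i].init();
    sensors[i].setAddress(0x30 + i);   // move it off 0x29 before waking the next one
    sensors[i].startContinuous(50);    // 50 ms inter-measurement period (illustrative)
  }
}

void loop() {
  for (int i = 0; i < NUM_SENSORS; i++) {
    uint16_t range_mm = sensors[i].read();   // blocking read of the latest range
    (void)range_mm;   // in BAT this would feed the control loop and the map stream
  }
}
```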
Figure A1. Wiring diagram. The sensor has 5 pins: PWR, GND, SDA, SCL, and XSHUT. The XSHUT pin can be used to hold the sensor in standby mode, which is useful for setting up multiple sensors.
References
1. Floreano, D.; Wood, R.J. Science, technology and the future of small autonomous drones. Nature 2015, 521, 460. [CrossRef]
[PubMed]
2. Floreano, D.; Zufferey, J.C.; Srinivasan, M.V.; Ellington, C. Flying Insects and Robots; Springer: Berlin/Heidelberg, Germany, 2009.
3. Holland, R.A. Orientation and navigation in bats: known unknowns or unknown unknowns? Behav. Ecol. Sociobiol. 2007,
61, 653–660. [CrossRef]
4. Iida, F. Biologically inspired visual odometer for navigation of a flying robot. Robot. Auton. Syst. 2003, 44, 201–208. [CrossRef]
5. Ruffier, F.; Viollet, S.; Amic, S.; Franceschini, N. Bio-inspired optical flow circuits for the visual guidance of micro air vehicles. In
Proceedings of the 2003 International Symposium on Circuits and Systems (ISCAS’03), Bangkok, Thailand, 25–28 May 2003.
6. Meier, L.; Tanskanen, P.; Heng, L.; Lee, G.H.; Fraundorfer, F.; Pollefeys, M. PIXHAWK: A micro aerial vehicle design for
autonomous flight using onboard computer vision. Auton. Robot. 2012, 33, 21–39. [CrossRef]
7. Duan, H.; Li, P. Bio-Inspired Computation in Unmanned Aerial Vehicles; Springer: Berlin/Heidelberg, Germany, 2014.
8. Aguilar, W.G.; Casaliglla, V.P.; Pólit, J.L. Obstacle avoidance based-visual navigation for micro aerial vehicles. Electronics 2017,
6, 10. [CrossRef]
9. Bürkle, A.; Segor, F.; Kollmann, M. Towards autonomous micro UAV swarms. J. Intell. Robot. Syst. 2011, 61, 339–353. [CrossRef]
10. Hambling, D. Swarm Troopers: How Small Drones Will Conquer the World; Archangel Ink: Venice, FL, USA, 2015.
11. Condliffe, J. A 100-Drone Swarm, Dropped from Jets, Plans Its Own Moves. Available online: https://www.technologyreview.
com/2017/01/10/154651/a-100-drone-swarm-dropped-from-jets-plans-its-own-moves (accessed on 2 August 2021).
12. Werner, D. Drone Swarm: Networks of Small UAVs Offer Big Capabilities. Available online: https://wiki.nps.edu/display/
CRUSER/2013/06/20/Drone+Swarm%3A+Networks+of+Small+UAVs+Offer+Big+Capabilities (accessed on 2 August 2021).
13. Miller, I.D.; Cladera, F.; Cowley, A.; Shivakumar, S.S.; Lee, E.S.; Jarin-Lipschitz, L.; Bhat, A.; Rodrigues, N.; Zhou, A.; Cohen, A.;
et al. Mine tunnel exploration using multiple quadrupedal robots. IEEE Robot. Autom. Lett. 2020, 5, 2840–2847. [CrossRef]
14. Wang, N.; Catal, O.; Verbelen, T.; Hartmann, M.; Dhoedt, B. Towards bio-inspired unsupervised representation learning for
indoor aerial navigation. arXiv 2021, arXiv:2106.09326.
15. Rogers, J.G.; Gregory, J.M.; Fink, J.; Stump, E. Test Your SLAM! The SubT-Tunnel dataset and metric for mapping. In
Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August
2020; pp. 955–961.
16. Palossi, D.; Loquercio, A.; Conti, F.; Flamand, E.; Scaramuzza, D.; Benini, L. Ultra low power deep-learning-powered autonomous
nano drones. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018), ETH
Zurich, Piscataway, NJ, USA, 1–5 October 2018.
17. Piyasena, D.; Lam, S.K.; Wu, M. Edge Accelerator for Lifelong Deep Learning using Streaming Linear Discriminant Analysis.
In Proceedings of the 2021 IEEE 29th Annual International Symposium on Field-Programmable Custom Computing Machines
(FCCM), Orlando, FL, USA, 9–12 May 2021; p. 259.
18. Palossi, D.; Loquercio, A.; Conti, F.; Flamand, E.; Scaramuzza, D.; Benini, L. A 64-mW DNN-based visual navigation engine for
autonomous nano-drones. IEEE Internet Things J. 2019, 6, 8357–8371. [CrossRef]
19. Dowling, L.; Poblete, T.; Hook, I.; Tang, H.; Tan, Y.; Glenn, W.; Unnithan, R.R. Accurate indoor mapping using an autonomous
unmanned aerial vehicle (UAV). arXiv 2018, arXiv:1808.01940.
20. Zhang, G.; Shang, B.; Chen, Y.; Moyes, H. SmartCaveDrone: 3D cave mapping using UAVs as robotic co-archaeologists. In
Proceedings of the 2017 International Conference on Unmanned Aircraft Systems (ICUAS), Miami, FL, USA, 13–16 June 2017;
pp. 1052–1057.
21. Li, H.; Savkin, A.V.; Vucetic, B. Autonomous area exploration and mapping in underground mine environments by unmanned
aerial vehicles. Robotica 2020, 38, 442–456. [CrossRef]
22. Weisz, J.; Huang, Y.; Lier, F.; Sethumadhavan, S.; Allen, P. Robobench: Towards sustainable robotics system benchmarking. In
Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May
2016; pp. 3383–3389.
23. Baltazar, R.; Cervantes, A.; Zamudio, V. A Simple Wall Follower NXT Robot to Localization and Mapping in an unknown
environment. In Workshop Proceedings of the 7th International Conference on Intelligent Environments; IOS Press: Amsterdam,
The Netherlands, 2011; pp. 74–84.
24. Willis, M. Proportional-Integral-Derivative Control. Available online: http://educypedia.karadimov.info/library/PID.pdf
(accessed on 2 August 2021).
25. Bosse, M.; Zlot, R. Keypoint design and evaluation for place recognition in 2D lidar maps. Robot. Auton. Syst. 2009, 57, 1211–1224.
[CrossRef]
26. Montemerlo, M.; Thrun, S.; Koller, D.; Wegbreit, B. FastSLAM 2.0: An improved particle filtering algorithm for simultaneous
localization and mapping that provably converges. IJCAI 2003, 3, 1151–1156.
27. Azaria, E. BAT: Believe It or not I’m Walking on Simulated Air. Available online: https://youtu.be/tYiBW78trgM (accessed on 6
June 2021).
28. Pikalov, S. BAT: Believe It or not I’m Walking on Air. Available online: https://youtu.be/AuPLlUi7e7w (accessed on 6 June 2021).
29. Thrun, S.; Montemerlo, M. The graph SLAM algorithm with applications to large-scale mapping of urban structures. Int. J. Robot.
Res. 2006, 25, 403–429. [CrossRef]
30. PMD. Time-of-Flight (ToF). Available online: https://pmdtec.com/picofamily/ (accessed on 6 June 2021).
31. Stone, P.; Sutton, R.S.; Kuhlmann, G. Reinforcement learning for robocup soccer keepaway. Adapt. Behav. 2005, 13, 165–188.
[CrossRef]
32. Van Hasselt, H.; Guez, A.; Silver, D. Deep reinforcement learning with double Q-learning. In Proceedings of the AAAI Conference
on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016.
33. Silver, D.; Huang, A.; Maddison, C.J.; Guez, A.; Sifre, L.; Van Den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam,
V.; Lanctot, M.; et al. Mastering the game of Go with deep neural networks and tree search. Nature 2016, 529, 484–489. [CrossRef]
[PubMed]
34. Li, J.; Monroe, W.; Ritter, A.; Galley, M.; Gao, J.; Jurafsky, D. Deep reinforcement learning for dialogue generation. arXiv 2016,
arXiv:1606.01541.
35. Li, Y.; Wen, Y.; Guan, K.; Tao, D. Transforming Cooling Optimization for Green Data Center via Deep Reinforcement Learning.
arXiv 2017, arXiv:1709.05077.
36. Bojarski, M.; Del Testa, D.; Dworakowski, D.; Firner, B.; Flepp, B.; Goyal, P.; Jackel, L.D.; Monfort, M.; Muller, U.; Zhang, J.; et al.
End to end learning for self-driving cars. arXiv 2016, arXiv:1604.07316.
37. Azar, A.T.; Koubaa, A.; Ali Mohamed, N.; Ibrahim, H.A.; Ibrahim, Z.F.; Kazim, M.; Ammar, A.; Benjdira, B.; Khamis, A.M.;
Hameed, I.A.; et al. Drone Deep Reinforcement Learning: A Review. Electronics 2021, 10, 999. [CrossRef]