Intelligent Raspberry Pi for UAV Swarm Scheme
Article Info
Received: January 04, 2021
Accepted: April 03, 2021
Published: April 26, 2021

Keywords: Artificial Intelligence; UAV; A* algorithm; Obstacle avoidance; Swarm; Autonomous flight

ABSTRACT
Swarms of drones are increasingly being requested to carry out missions that single drones cannot complete. Particularly in civil security, high needs emerge in surveillance and observation of hostile areas. Existing solutions do not meet this demand, as they utilise heavy infrastructures without consideration of the quality of service (especially in terms of throughput). This paper proposes a light and efficient solution to synchronise a swarm of drones based only on ad hoc communications to position drones. The proposed method operates a semi-autonomous leader drone, while all the others (the followers) are autonomous and follow the Leader using commands sent by the Leader via a communication channel. The proposed method emulates squadrons' fundamental flight operations. The test results show that the process is very efficient, with all follower drones responding efficiently.
their flight. These algorithms include path planning, object avoidance, detection, identification, and tracking. This process depends on several indicators, including creating a squadron of drones, because it provides many advantages compared to non-swarms, such as area coverage, speed, and coordinated effects (R. Arnold 2019) (B. Abruzzo 2019). One more important advantage is that it allows collective control of many agents while a limited number of mission objectives can be prepared. UAV swarms can be equipped to communicate and take decisions quickly (R. Arnold 2019), and this is proven to be more efficient than using one drone with many decision-making individuals.

Each drone can receive information that helps in search and rescue processes, such as a missing person's speed, location, and other data. Meanwhile, human decision-makers and rescuers cannot receive or process data as quickly as it is transmitted, whereas drones running artificial intelligence algorithms can process data and take decisions in milliseconds. This combination of advantages clearly makes the squadron more efficient. Second, the drones are equipped with cameras to assist in the search and rescue process; the camera is one of the most important elements of drone surveillance. A drone is able to cover large spaces, but coverage depends on the shooting height, which affects the apparent location and size of the object.

Moreover, the visibility and size of the object (S. Sambolek 2019) are affected by lighting and weather. Making object detection efficient requires creating an artificial intelligence model and training it to detect and recognise a group of objects, such as a human, a rock, or a tree. Lately, deep learning, especially models based on convolutional neural networks, has contributed to achieving object detection.

Third, when a group of drones is flying, they have to communicate and coordinate to execute the required operations efficiently. The basic data exchange is based on long-range protocols (LoRa) (Redmon 2016) and IEEE 802.11s (Seller 2013); for example, this data is the path information for each drone to take, which opens up the possibility for many applications in wide-area coverage. Long Range Wide Area Network (LoRaWAN (The Things Network 2020), at the Medium Access Control (MAC) layer) as well as LoRa (at the physical (PHY) layer) are attractive as they allow users to reach long distances (on the order of several kilometres) in Line-of-Sight (LOS) applications (Callebaut 2019) (Petäjäjärvi 2015), e.g., inside the swarm. In addition, IEEE 802.11 (Wi-Fi) (IEEE 2016) has been adopted in many applications, especially those using live video broadcasting.

The remainder of this paper is organised as follows: Section 2 discusses UAV architecture and sensor payload. Section 3 presents autonomous obstacle avoidance. Section 4 describes the ground control unit. Section 5 presents the results and discussion. Finally, Section 6 provides some conclusions.

II. UAV ARCHITECTURE AND SENSOR PAYLOAD

In this study, a mix of Raspberry Pi 3 and 4 boards is used to control the swarm of drones autonomously. This confirms that the Raspberry Pi 3 and 4 can perform the same task, and their differences do not hinder operational capabilities. Along with the Raspberry Pi, each drone also carries two cameras (visual and infrared) plus ultrasound sensors to aid in positioning the drones within the swarm and to detect obstacles during the execution of drone duties. In this section, the overall structure of the proposed system is presented.

A. System Overview

Among the huge set of applications for drones, there is a real interest in video surveillance and observation. A fixed-wing drone or a quadcopter can be operated to realise such a mission. The possibilities are limited in the sense that the covered area is reduced (because of radio range and the characteristics of a single drone). To cover an extended area or allow long-distance real-time video, we propose a basic drone swarm built on the notion of Leader and followers. In the proposed swarm, followers act as relays; the Leader is always human-piloted and responsible for real-time video capture, deciding on targets, and operational changes. The drones and the control centre are all interconnected using an internet network with extended coverage, and all are capable of peer-to-peer communication. This solves the problem of leader loss due to technical or operational problems: if for any reason the Leader is lost or damaged during an operation, the command centre will promote a follower to leader, and that drone will then control the remaining followers.
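The leader-loss handling described in the system overview can be illustrated with a short sketch: the command centre watches heartbeats and, when the Leader's heartbeat goes stale, promotes the follower with the freshest telemetry. This is only an illustrative sketch of the rule; the `Drone` class, the heartbeat fields, and the 5-second timeout are our own assumptions, not values from the implemented system.

```python
import time

# Illustrative assumption: a 5 s heartbeat timeout marks the Leader as lost.
HEARTBEAT_TIMEOUT_S = 5.0

class Drone:
    """Minimal stand-in for a swarm member as seen by the command centre."""
    def __init__(self, drone_id, role):
        self.drone_id = drone_id
        self.role = role                  # "leader" or "follower"
        self.last_heartbeat = time.monotonic()

def promote_on_leader_loss(drones, now=None):
    """If the Leader's heartbeat is stale, promote the freshest follower."""
    now = time.monotonic() if now is None else now
    leader = next((d for d in drones if d.role == "leader"), None)
    if leader and now - leader.last_heartbeat <= HEARTBEAT_TIMEOUT_S:
        return leader                     # Leader still alive, nothing to do
    if leader:
        leader.role = "lost"
    followers = [d for d in drones if d.role == "follower"]
    if not followers:
        return None                       # no candidate left in the swarm
    new_leader = max(followers, key=lambda d: d.last_heartbeat)
    new_leader.role = "leader"
    return new_leader
```

In the real system, this decision would be taken by the command centre from telemetry received over the peer-to-peer links described above.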
B. System Component

port on the Pi USB on the PixHawk; the two boards communicate using the MAVLink protocol. A library called DroneKit was downloaded onto the Pi, which contained all the various dependencies for Pi and PixHawk communication.

C. Equation of Motion

Perception is the first step in any collision avoidance system. To detect obstacles, the drone should perceive its surroundings and environment. For this, it needs to be equipped with one or more sensors working as a perception unit (C. H. R. Everett 1989). For remote sensing systems, sensors such as imaging sensors with diverse resolutions are the essential components. Some of the sensors used in observation are LiDAR, visual cameras, thermal or IR cameras, and solid-state or optomechanical devices (C. A. Wargo 2014). The usage of sensors is quite diverse, depending on the needs. To detect an obstacle, different types of sensors are used, which can be mainly categorised into two sets:

1. Passive Sensors

Visual sensors, or cameras, rely on capturing images of the environment and objects from which useful information can be extracted. Visual cameras are typically monocular, stereo, or event-based (S. Saha 2014), (S. A. S. Mohamed 2020). The benefits of using cameras are their small size, lower weight, lower power consumption, flexibility, and easy mounting. In contrast, the disadvantages of such sensors are, e.g., their high dependency on weather conditions, lack of image clarity, as well as sensitivity to lighting conditions and background colour contrast. These factors have a significant impact on the outcome, as the quality of the captured image drops drastically if any of them comes into play.

An improved Inverse Perspective Mapping (IPM) with a vertical plane model is used to perform coarse obstacle detection in the bottom third of the image, which makes it only appropriate for slow-moving robots (about 1 m/s). Afterwards, the obstacles are segmented using the Markov random field (MRF) framework, and the distance between the robot and the nearest obstacle is obtained.

2. Active Sensors

Active sensors work on the mechanism of emitting radiation and reading the reflected radiation. An active sensor has a transmitter (source) and a receiver/detector. The transmitter emits a signal in the form of a light wave, an electrical signal, or an acoustic signal; this signal bounces off an object, and the sensor's receiver reads the reflected signal (M. Rouse 2020), (Active Sensors 2020). Their ability to penetrate the atmosphere in most conditions is due to the fact that they emit their own signal.

Ultrasonic sensors work on the principle of emitting sound waves and listening to their reflection back from an object to calculate the distance between the object and the sensor (S. Chaulya 2016). The sound waves are generated at a frequency too high for the human hearing band, i.e., 25-50 kHz (J. M. Armingol 2018). The basic principle used to calculate the distance is similar to the one used by radars or LiDARs: emit a wave, wait until the wave bounced off an object arrives, and calculate the distance based on how long the wave took to reflect, using this simple formula:

d = (v · t) / 2     (5)

where d is the distance, v is the speed of the wave, and t is the time of flight.

Ultrasonic sensors are readily available and are much cheaper than most of the other ranging sensors on the market. Unlike LiDARs, ultrasonic sensors are unaffected by an object's transparency; for instance, LiDARs have difficulty detecting clear glass, while ultrasonic sensors are not affected by the colour of the objects.

VOLUME 02, NO: 01. 2021
Abdelrahman R. S. Almassri | Intelligent Raspberry Pi for UAV Swarm Scheme.

III. Autonomous Obstacle Avoidance

The study requires developing autonomous navigation for the UAV without any external control (remote controls or another control source) while respecting real-time constraints and ensuring high-performance and robust navigation. Indeed, our UAV had to calculate an optimal path from the World Geodetic System (WGS) coordinates indicated by the server to optimise battery consumption and increase operating performance to cover the search area. The main features of our UAV are:

• The calculation of the optimal UAV navigation trajectory to cover a search area.
• Automatic piloting of the Raspberry Pi 3B+ embedded Android from the Ubuntu server.
• Detection of obstacles during the navigation of the UAV.
• Live streaming of the mission.

In the sequel, we detail the features mentioned above, starting with the preparation of the area to cover and the computation of the optimal trajectory.

A. Optimal Trajectory Computation

To be able to design a suitable algorithm for the optimal trajectory computation, the terrain must be modelled as an oriented graph. However, intelligent modelling is chosen instead of basing the graph on conventional meshing; this allows for a reduced calculation time on the graph without losing precision (which is the classical problem with meshing). The modelling was based on the notion of a visibility graph. It integrated the consideration of obstacles, danger zones, and the non-holonomy constraint of the aircraft, which implies a maximum steering angle. The resulting graph was then cleaned to keep only the strict minimum necessary for trajectory calculation. The generation of the graph can require much computation time, but this generation is done only once, before the planning stage, and therefore does not affect the trajectory computation times.

The objective of the first part of the problem was to find a path between the start (0) and the goal (1) by
avoiding obstacles (−1). Many algorithms find a path connecting two points on this graph; among the best known are the Dijkstra algorithm and the A* algorithm.

In this part, we illustrate the tests performed to validate and verify the behaviour of the A* algorithm in order to compute an optimal path from the WGS coordinates sent by the server. Figure 4 shows a scenario to find the optimal path from Point 1 (s_start) to the endpoint (s_goal) while passing through several nodes s1 to s4. To illustrate the advantages of this algorithm, we used the previous example of a map without obstacles, with s_start at (100, 50) and s_goal at (200, 50). We present the A* function:

o Search for object
o Track an object
o Surveillance
• Replace the Leader in any of the swarm

The swarm will send its video feeds back to the ground control; the user will further decide whether there will be any changes or modifications to the assigned task. The user's command will be transmitted to the Leader, and the Leader will pass the new instructions to all assigned followers. The Leader and the followers will execute their assigned tasks autonomously.
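Since the A* listing itself is not reproduced here, the following is a minimal sketch of an A* search between the points used in the test above, s_start = (100, 50) and s_goal = (200, 50). The 4-connected grid, the uniform step cost, and the Manhattan heuristic are our own illustrative assumptions, not the exact graph representation used in the paper.

```python
import heapq

def astar(start, goal, blocked=frozenset()):
    """Minimal A* on a 4-connected grid with a Manhattan-distance heuristic.
    `blocked` is a set of (x, y) cells that may not be entered (obstacles)."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]      # entries are (f, g, node)
    came_from = {}
    best_g = {start: 0}
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node == goal:
            path = [node]
            while node in came_from:        # walk predecessors back to start
                node = came_from[node]
                path.append(node)
            return path[::-1]
        if g > best_g.get(node, float("inf")):
            continue                        # stale heap entry, skip it
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in blocked:
                continue
            ng = g + 1                      # uniform step cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                came_from[nxt] = node
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None                             # goal unreachable
```

On the obstacle-free map of the example, the returned path degenerates to the straight segment from (100, 50) to (200, 50), i.e., 100 unit steps.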
The leader drone, once a command was received, executed it in sequence:

• take-off
• transmit the command to the Follower
• hover for 2 minutes

FIGURE 7. Transmitted coordinates & Leader actual coordinates (latitude vs. longitude).
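The three-step sequence above can be expressed as a simple ordered command script. The sketch below only simulates that sequencing logic for illustration; the `FakeDrone` class and its methods are our own stand-ins, not the DroneKit calls running on the real Pi.

```python
HOVER_SECONDS = 120  # "hovered for 2 minutes"

class FakeDrone:
    """Stand-in for the real flight-controller link; records actions taken."""
    def __init__(self, name):
        self.name = name
        self.log = []

    def take_off(self):
        self.log.append("take-off")

    def send_to_followers(self, command, followers):
        self.log.append(f"relay:{command}")
        for follower in followers:
            follower.log.append(f"received:{command}")

    def hover(self, seconds):
        self.log.append(f"hover:{seconds}s")

def execute_leader_sequence(leader, followers, command):
    """Run the Leader's three steps strictly in order, as in the test flight."""
    leader.take_off()
    leader.send_to_followers(command, followers)
    leader.hover(HOVER_SECONDS)
    return leader.log
```

The design point being illustrated is that the Leader relays the command before entering its hold pattern, so followers can start executing while the Leader hovers.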
avoidance," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Oct. 2010, pp. 87-92.

Madawalagama, S., "Low Cost Aerial Mapping with Consumer Grade Drones," 37th Asian Conference on Remote Sensing, 2016.

M. B. van Leeuwen and F. C. A. Groen, "Vehicle detection with a mobile camera: Spotting midrange, distant, and passing cars," IEEE Robot. Autom. Mag., vol. 12, no. 1, pp. 37-43, Mar. 2005.

M. Rouse and M. Haughn, "What is Active Sensor? Definition from WhatIs.com." Accessed: Mar. 13, 2020. [Online]. Available: https://internetofthingsagenda.techtarget.com/definition/active-sensor

O. B. A. Seller and N. Sornin, "Low Power Long Range Transmitter," European Patent EP2763321A1, 5 February 2013.

The Things Network (TTN), 2020. [Online]. Available: https://www.thethingsnetwork.org/ (accessed on 29 December 2020).

W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, "SSD: Single shot multibox detector," in European Conference on Computer Vision, Springer, Cham, 2016, pp. 21-37.