

Review
Advances and Challenges in Drone Detection and Classification
Techniques: A State-of-the-Art Review
Ulzhalgas Seidaliyeva 1,*, Lyazzat Ilipbayeva 2, Kyrmyzy Taissariyeva 1, Nurzhigit Smailov 1 and Eric T. Matson 3

1 Department of Electronics, Telecommunications and Space Technologies, Satbayev University, Almaty 050013, Kazakhstan; [email protected] (K.T.)
2 Department of Radio Engineering, Electronics and Telecommunications, International IT University, Almaty 050040, Kazakhstan
3 Department of Computer and Information Technology, Purdue University, West Lafayette, IN 47907-2021, USA; [email protected]
* Correspondence: [email protected]

Abstract: The rapid development of unmanned aerial vehicles (UAVs), commonly known as drones, has brought a unique set of opportunities and challenges to both the civilian and military sectors. While drones have proven useful in sectors such as delivery, agriculture, and surveillance, their potential for abuse in illegal airspace invasions, privacy breaches, and security risks has increased the demand for improved detection and classification systems. This state-of-the-art review presents a detailed overview of current improvements in drone detection and classification techniques, highlighting novel strategies used to address the rising concerns about UAV activities. We investigate the threats and challenges posed by drones' dynamic behavior, size and speed diversity, battery life, and other factors. Furthermore, we categorize the key detection modalities, including radar, radio frequency (RF), acoustic, and vision-based approaches, and examine their distinct advantages and limitations. The review also discusses the importance of sensor fusion methods and other detection approaches, including wireless fidelity (Wi-Fi), cellular, and Internet of Things (IoT) networks, for improving the accuracy and efficiency of UAV detection and identification.

Keywords: unmanned aerial vehicles (UAVs); UAV detection; drone detection; drone identification; UAV identification; UAV classification; drone classification; drone localization; detection technologies; radio frequency (RF); radar; acoustic; visual; sensor fusion; drone incidents; drone threats; machine learning based drone detection; deep learning based UAV identification

Citation: Seidaliyeva, U.; Ilipbayeva, L.; Taissariyeva, K.; Smailov, N.; Matson, E.T. Advances and Challenges in Drone Detection and Classification Techniques: A State-of-the-Art Review. Sensors 2024, 24, 125. https://doi.org/10.3390/s24010125

Academic Editor: Andrey V. Savkin

Received: 1 November 2023; Revised: 5 December 2023; Accepted: 15 December 2023; Published: 26 December 2023

Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

In recent years, as a result of the ongoing development of technology, micro unmanned aerial vehicles (UAVs), more often referred to as drones, have seen improvements in their technical capabilities and an expansion in their range of applications. Due to their ability to fly long-range distances and their compact size, mobility, and payload options, the potential applications for drones have expanded from personal to military usage. Drones play a significant role in modern life as a result of their low cost and ease of usage in many sectors of daily life, from official government work like border security and wildfire surveillance to civilian private sector work such as first aid, disaster management, monitoring of crowded places, farming, delivery services, internet communications, and general filmmaking [1]. Therefore, drone technology's democratization has resulted in broad acceptance, making the sky more accessible than ever before. Along with the benefits, the increased use of drones has created substantial issues in ensuring the security, privacy, and safety of both airborne and ground-based organizations. Effective drone detection and classification systems have become a key concern for regulatory agencies, security services, and aviation authorities all over the world. The central goal of this review paper is to investigate the advancements in drone detection and classification algorithms that have evolved as a result of these issues. It aims to present a comprehensive overview of the state-of-the-art technologies that are now in use or are being developed. This involves an investigation
of several detection modalities, such as radar, radio frequency (RF) analysis, acoustic
sensors, and camera sensors, and the fusion of these systems into comprehensive detection
networks. Moreover, the paper covers the importance of sensor fusion methods and other
detection approaches, including Wi-Fi fingerprinting, 5G, and IoT networks, for improving
the robustness of UAV detection and identification systems.
The key objectives of this review paper are to:
- describe the most recent drone incidents and threat categories;
- identify and describe the main approaches utilized for drone detection and classification;
- analyze the advantages and limitations of various techniques;
- emphasize the interference factors in real-world scenarios for individual sensor modalities and suggest additional integrated approaches.
In order to enforce a consistent set of rules, drone regulations are now being prioritized
throughout all nations. However, as technology has advanced, it has become possible to create non-registered, bespoke drones that might fly in prohibited areas [2]. Numerous incidents
and events involving the usage of drones flying near vital infrastructure, over restricted
locations, or during important public meetings have been recorded in the media in recent
years [1]. To demonstrate the necessity for a drone detection system, below we briefly
discuss a couple of the threat categories in the context of concrete incidents.

1.1. Significance of UAV Detection: Drone Threat Categories and Incidents


Recently, the horizon of drone use has expanded rapidly from military to smart
agriculture. As the use of drones spreads across industries, the ability to detect and identify
them effectively becomes increasingly critical in order to avoid illegal or harmful drone
activity. The motivation for this research derives from the increasing use of drones and the
hazards they bring to privacy, security, and safety. This research is significant because it
provides a contemporary and relevant overview of drone detection technologies, which are
vital for national security, privacy, and safety. Drones serve as delivery vehicles for various
payloads, such as GPS devices, infrared cameras, batteries, and sensors. These drones
often use high-energy batteries, such as lithium batteries, to provide a flight duration of
20–40 min. Drones that can fly several kilometers away from their operator and stay in
the air for thirty minutes may be purchased for around USD 1000. This therefore gives
terrorist organizations the ability to utilize drones both for the delivery of explosives or toxic materials and for the surveillance of targets and the gathering of data from them [3]. Therefore,
drones with long-range flight capability, heavy-payload-carrying capacity, and wind and
weather resistance may serve as a means for transporting dangerous goods.
In order to determine proper anti-drone system goals, we list the factors of unmanned
aerial vehicles that represent the greatest threats to society (Figure 1).
Drone attacks: Because unmanned aerial vehicles are capable of carrying explosives as well as biological and chemical weapons, drone attacks fall under the first
category of possible threats. These explosives might be used to attack a variety of targets,
such as specific individuals, public institutions, business organizations, and even whole
nations [4].
Illegal smuggling: Smuggling is the second-leading threat category for drone use.
For border patrols and jail staff, drone drug smuggling has grown to be a serious issue.
Sometimes weapons or other illegal items are smuggled beyond the reach of ground-based
security. Border locations have a wide range of weather; therefore, smuggling drones need
to be able to operate in adverse weather conditions [1].
Drone espionage threats: Drones with powerful cameras may also be used to spy on people, businesses, and governmental institutions from a distance. Such use constitutes a drone hazard in the form of privacy invasion or espionage [3].
Drone collisions: Accidentally or purposefully launching a remote-controlled drone near an aircraft or in its flight path might threaten the safety of the crew and passengers and might damage property [3].

Figure 1. The main drone threat categories: drone attacks, illegal smuggling, drone espionage, and
drone collisions.

In 2023, a surge of drone incidents continued to make the news, highlighting the need for strong counter-drone solutions and rational rules for using small UAVs. These incidents ranged from smuggling and illegal drone intrusions to collisions and technological failures and affected a variety of industries. The majority of rogue drone actions occurred in sensitive areas such as airports, border crossings, prisons, communities, and neighborhoods. The total number of reported drone incidents since January 2023 is around 150 [5]. Incidents related to the delivery of drug contraband at borders and surveillance of border patrols using drones make up 38% of total drone incidents (Figure 2). In particular, drone smuggling has increased across the India–Pakistan border and is occurring several times a month [6] (Figure 3). Approximately 84% of reported drone smuggling incidents took place across the India–Pakistan border, 12% across the Jordan–Syria border, and 4% were attempted at the Israel–Lebanon border. In addition, drone smuggling and the delivery of illegal items into prisons by visitors or corrupt jail personnel have always been issues. A total of 28% of all reported drone incidents involved drones delivering illegal items and drugs into prisons. More than half of the drone smuggling cases occurred in prisons in the United States and Canada; the rest were recorded in prisons in the United Kingdom. Drone smuggling incidents have also been reported in prisons in Spain, France, India, and Ecuador. In 2023, apart from the prison and border sectors, airports also faced challenges due to illegally launched drones: 18% of reported drone incidents were related to drones flying close to runways. According to an analysis of global drone incidents for 2023, Dublin Airport was the most affected by drone incidents. In addition, thousands of passengers at several airports in the UK and the USA were inconvenienced because drones flew too close to planes and caused temporary suspensions of flights and flight delays. As a result of the illegal use of weaponized drones, various communities and neighborhoods, mostly in the USA, India, Israel, and the UK, have faced violence-related challenges [6]. Figure 3 illustrates the number of different drone threats for each month, from January 2023 until the beginning of December 2023.

Figure 2. Recorded drone incidents by main threat category.

Figure 3. Worldwide drone incidents from January until December 2023. Four types of threats are
highlighted in four colors: drones breaching borders in blue, drones plaguing prisons in orange, drone
danger and disruptions at airports in gray, and drone threats to communities and neighborhoods
in yellow.

A list of the significant drone incidents is provided in Table 1.



Table 1. The list of recent drone incidents reported in the press worldwide.

| Date and Location | Type of Threat Category | Incident Details | Response |
| 13 August 2023, Absecon, NJ, USA | drone attack | a business owner of a heating and air conditioning company accused of using a drone to drop harmful chemicals into commercial and residential pools [7] | drone spotted by authorities |
| 3 June 2022, American Canyon, CA, USA | drone attack | a 55-year-old man was detained after using his drone to discharge lit illegal M-80-style explosive devices [8] | drone intercepted by authorities |
| 11 July 2023, County Durham, UK | drone smuggling | a 56-year-old man was accused of using a drone to fly contraband items into Stockton's Holme House prison [9] | a police dog spotted and caught the man |
| 16–19 June 2023, Kingston, Canada | drone smuggling | over a kilogram of cannabis and other unauthorized items were confiscated at Collins Bay Institution [10] | the Correctional Service of Canada seized the flying drone |
| 22 June 2023, Canterbury, New Zealand | drone espionage | a lifestyle block owner shot down a drone above his property because he thought it was being controlled by a thief [11] | the incident was not reported to the police |
| 25 May 2023, Lebanon border | drone espionage and surveillance | a DJI quadcopter flying over the border from Lebanon was shot down [12] | Israeli forces shot down the drone using electronic warfare techniques |
| 2 December 2022, Stansted Airport, UK | near collision | a holiday plane carrying up to 189 passengers had a lucky escape from an illegally flown drone that was only a few feet away [13] | the incident was reported by the UK Airprox Board |
| 14 May 2023, Gatwick Airport, UK | near collision | all runways were closed due to a drone sighting [14] | the incident was reported to the police and the proper aviation authorities |
| 22 May 2023, Gao International Airport, Mali | drone collision | all the runways were closed due to a drone crash [15] | the incident was brought under control by the airport staff and security forces |
| 22 May 2023, Katowice Airport, Poland | drone collision | a low-cost Wizz Air airliner flew dangerously close to a huge quadcopter with only 50 m of clearance [16] | the incident was reported to the police and the proper aviation authorities |

The countries with the highest number of drone occurrences, based on analysis of recent drone incidents reported by the press worldwide, are India, the USA, the UK, and Canada. Most of the drone incidents were related to illegal items being delivered into
prisons using drones and drone flight over airports or near borders or restricted zones.
The majority of drone models involved in recent drone incidents were the DJI Mavic 2, DJI
Phantom 4, DJI Mavic 3, etc. All of the above-mentioned incidents highlight the necessity
for UAV detection systems to stop unauthorized drone applications. The goal of such a
detection system is to identify drones’ existence as well as their location and other details,
including their type, direction, and velocity. In light of this, this research problem has
recently attracted a lot of interest from both businesses and academia. The research problem is compounded by the rapid evolution of drone technology, which has yielded drones of tiny size, low flying altitude, and low radar cross-section. Also, drones may readily enter no-fly zones, and they are more difficult to detect than conventional large air targets. Consequently, detection of suspicious drones becomes more challenging [17].

1.2. Key Challenges to UAV Detection


The UAV detection and classification task presents numerous significant challenges,
which researchers [18,19] and business experts are currently addressing. Some of these
challenges include:
1. Size and speed diversity: UAVs are available in a range of sizes, from tiny consumer
drones to bigger commercial or military-grade aircraft. The majority of tiny UAVs fly at
speeds of 15 m per second or less. However, large-sized UAVs have a higher top speed
of up to 100 m/s [18,19]. Since various UAVs may have distinct flying characteristics
and shapes, these size and speed variances make UAV detection and classification tasks
more challenging.
2. Dynamic behavior and similarity to other flying objects: It is challenging to monitor and
precisely identify UAVs because of their high speed and unpredictable flying patterns and
behaviors. The difficulty for detection and classification systems increases when dealing
with UAVs' dynamic nature. Moreover, the strong resemblance of drones to other objects, such as birds or airplanes, makes it difficult to distinguish them from other flying objects. Therefore,
a drone detection system must be fast enough not to miss a drone in video frames and yet
accurate enough not to confuse the drone with another flying object. This is such an urgent
problem that a drone vs. bird challenge is organized each year [20].
3. Different detection ranges and altitudes: UAVs may fly at various altitudes, between a
few meters and many kilometers above the ground. A UAV’s range refers to the region
from which it may be remotely controlled. While larger drones have a range of hundreds of
kilometers, smaller drones have a range of only a few meters. Hence, aerial platforms might
be divided into low-altitude and high-altitude platforms. Due to their quick deployment
and inexpensive cost, low-altitude platforms are mostly used for malicious purposes.
As malicious drones frequently fly at low altitudes, traditional countermeasure systems
may be ineffective or even harmful owing to possible collateral damage [18].
4. Environmental conditions: Environmental factors that can impair the effectiveness of
UAV detection systems include varying weather conditions [17] (such as rain, fog, or wind);
urban locations with plenty of obstructions, such as buildings and trees; terrain; background
noise; adverse lighting conditions [21]; etc. Severe environmental factors can affect the
precision and robustness of sensors, such as radar or optical systems, resulting in false positives
or false negatives in UAV identification. Thus, it is an ongoing challenge to adapt and modify
detection and classification algorithms to deal with various environmental circumstances.
5. Limited battery life: One of the biggest issues that UAVs encounter is their limited battery life. Increasing the battery size of UAVs is not possible since this would
increase weight, which is another key consideration. Most research has concluded that
lithium–polymer (LiPo) batteries are the most commonly utilized battery type in drones;
nevertheless, lithium–iron–phosphate (LiFePO4) batteries are believed to be safer and
have a longer life cycle [22,23]. Drones can fly in the air for 30–40 min before landing,
and swapping batteries might be troublesome. Temperature, humidity, altitude, and flight
speed all have a major influence on drone batteries. However, the charging strategies for
these batteries are equally critical. The right charging protocol is crucial for extending
battery life and ensuring consistent performance. To mitigate the impact of UAV flight
time limitations imposed by limited on-board battery life, it is critical to reduce energy
consumption in UAV operations [22].
These challenges drive continuing research into the creation of novel detection and
classification methods and the advancement of sensor technology. Addressing these challenges calls for multidisciplinary study and cooperation amongst
specialists in artificial intelligence (AI), computer vision, and signal processing. Also,
overcoming these challenges can aid in ensuring the security, privacy, and safety of both
people and vital infrastructure.

2. Drone Detection Technologies


In recent years, many research works have been published to address UAV detection,
tracking, and classification problems. The main drone detection technologies are: radar
sensors [24–36], RF sensors [37–47], audio sensors [48–60], and camera sensors using visual
UAV characteristics [61–71]. Based on the above-mentioned sources, the advantages and
disadvantages of each drone detection technology are compared in Table 2. In addition,
the academic and industrial communities have begun to pay attention to bimodal and
multi-modal sensor fusion methods; however, these methodologies are still in the early
stages of development and implementation.

Table 2. Comparison of different drone detection technologies.

| Detection Technique | Principle of Operation | Advantages | Disadvantages |
| Radar-based | employs radio waves to detect and locate nearby objects | long range; all-weather performance; ability to recognize micro-Doppler signatures (MDS); speed and direction measurement | limited detection capability due to low radar cross section (RCS); limited performance at low altitudes and speeds; high cost and complexity of deployment |
| RF-based | captures wireless signals to detect the radio frequency signals from drones | long-range detection and identification; resistance to all weather conditions; ability to capture signals and communication spectra from the UAV and its operator; ability to distinguish different types of UAVs | unable to identify autonomous drones; interference with other RF sources; vulnerable to hackers |
| Acoustic-based | detects drones by their unique sound signatures | cost-effective; no line-of-sight (LoS) required; quick deployment | background noise; limited detection range; vulnerability to wind conditions |
| Vision-based | captures drone visual data using camera sensors | visual confirmation; non-intrusive; cost-effective | limited detection range and requires LoS; weather and lighting dependence |

2.1. Radar-Based Detection


Principles of radar-based detection: Radar, which stands for "Radio Detection and Ranging", is generally considered one of the most trustworthy sensing devices for UAV detection, since it has traditionally been utilized for aircraft detection for both military and civilian purposes (such as aviation) [17]. Radar is an electromagnetic technology that employs radio waves to detect and locate nearby objects. Any radar system works on the basis of echo-based measurements and consists of a radar transmitter that sends out short electromagnetic waves in the radio or microwave band, transmitting and receiving antennas, a radar receiver that receives the reflected signals from the target, and a processor that identifies the objects' attributes. Therefore, radar can calculate important object characteristics, including distance, velocity, azimuth, and elevation [24].
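
To make this echo-based measurement principle concrete, the following minimal sketch (an illustration only, not code from any cited system; all numeric values are hypothetical) computes target range from the pulse round-trip delay and radial velocity from the measured Doppler shift:

```python
# Illustrative sketch of basic echo-based radar measurements.
# Not taken from the reviewed systems; all numbers are hypothetical.

C = 3.0e8  # speed of light, m/s

def range_from_delay(round_trip_delay_s: float) -> float:
    """Target range from the pulse round-trip time: R = c * t / 2."""
    return C * round_trip_delay_s / 2.0

def radial_velocity_from_doppler(doppler_shift_hz: float, carrier_hz: float) -> float:
    """Radial velocity from the Doppler shift: v = f_d * c / (2 * f_c)."""
    return doppler_shift_hz * C / (2.0 * carrier_hz)

if __name__ == "__main__":
    # A reflection arriving 6.67 microseconds after transmission -> ~1 km away.
    print(f"range: {range_from_delay(6.67e-6):.1f} m")
    # A +1 kHz Doppler shift on a 10 GHz (X-band) carrier -> ~15 m/s closing speed.
    print(f"velocity: {radial_velocity_from_doppler(1e3, 10e9):.1f} m/s")
```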
Types of radar: Radars come in two varieties: active radars and passive radars. Active radar transmits a signal and then receives the reflected signal to detect objects; therefore, if the radar sensor illuminates objects, it is defined as active [25]. In contrast, passive radar does not emit any kind of signal; instead, it relies on external signal sources such as natural sources of energy (the Sun and stars) as well as other signal sources, including cellular signals, frequency modulation (FM) radio, etc. Active radars are frequently referred to simply as radars in the literature. If the transmitting and receiving antennas are separated, an active radar is said to be bistatic; otherwise, it is referred to as monostatic. An (active) radar transmits either trains of electromagnetic energy pulses or a continuous wave; in the first case, these are known as pulse radars, and in the second case, these are known as continuous wave (CW) radars, such as stepped frequency continuous wave (SFCW) radar. Pulse-Doppler radar is an alternative form of radar that combines the characteristics of the two radar systems mentioned above. Depending on operating frequency band designations, radar sensors have several classifications. Radar frequency bands in accordance with Institute of Electrical and Electronics Engineers (IEEE) standards and their typical applications are presented in [25].
Several common varieties of radar are briefly explained below:
(1) Surveillance radar: The typical application for this kind of radar is long-range surveillance and detection. It has extensive coverage and can detect UAVs up to a few kilometers
away. Radars used for surveillance often operate in the X-band or S-band frequencies and
feature an elevated platform to increase the detection range. In [26], the authors proposed
a reliable bird and drone target classification method using motion characteristics and a
random forest model. Different flight patterns and motion characteristics of birds and
drones, such as velocity, direction, and acceleration, extracted from the surveillance radar
data were used as the primary features for the classification system.
(2) Millimeter-wave (mmWave) radar: The use of mmWave technology in radar systems
can be an effective tool for UAV detection due to its abilities to penetrate various weather
conditions, improve resolution, and assist in the detection of tiny drones. A mmWave
radar uses radio waves with wavelengths ranging from 1 to 10 mm and can detect the
presence of drones by measuring the radio waves reflected off their surfaces. In [2], the
authors presented a novel drone classification approach using deep learning techniques
and radar cross section (RCS) signatures extracted from millimeter-wave (mmWave) radar.
The majority of drone classification techniques typically convert RCS signatures into images
before doing classification using a convolutional neural network (CNN). Due to the added
computational complexity caused by converting every signature into an image, CNN-based
drone classification shows low classification accuracy concerning the dynamic characteris-
tics of a drone. Thus, by adding a weight optimization model that can minimize computing
cost by preventing the gradient from flowing through hidden states of the long short-term
memory (LSTM) model, the authors presented an enhanced LSTM. Experimental results
showed that the accuracy of the long short-term memory adaptive-learning-rate-optimizing
(LSTM-ALRO)-based drone classification model is much greater than the accuracy of the
CNN- and Google-based models.
(3) Pulse-Doppler radar: This type of radar emits short radio wave pulses and detects
the frequency shift brought on by the motion of an unmanned aerial vehicle (UAV) even in
the presence of background noise or interference [27].
(4) Continuous wave (CW) radar: This type of radar detects unmanned aerial vehicles
(UAVs) by continuously transmitting radio waves and analyzing the frequency shift in the
reflected signal [28].
(5) Frequency-modulated continuous wave (FMCW) radar: FMCW radar continually
transmits an electromagnetic signal with a fluctuating frequency over time and uses the
difference in frequencies between emitted and reflected signals to determine the range
and velocity of objects [17]. These signals, often known as chirps, differ from CW in that the operational frequency is varied throughout the transmission [29]. Due to their
constant pulsing, inexpensive cost of hardware components, and superior performance,
FMCW and CW radars are recommended for use in UAV detection and identification [30].
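
As a worked illustration of the FMCW principle (with assumed chirp parameters, not those of any cited system), the delay of the received chirp appears as a beat frequency f_b between the transmitted and received signals, giving the range as R = c·f_b·T_c/(2B), where B is the sweep bandwidth and T_c the chirp duration:

```python
# Minimal FMCW range calculation; chirp parameters are assumed for illustration.

C = 3.0e8          # speed of light, m/s
BANDWIDTH = 150e6  # chirp sweep bandwidth B, Hz (assumed)
CHIRP_TIME = 1e-3  # chirp duration T_c, s (assumed)

def fmcw_range(beat_frequency_hz: float) -> float:
    """Range from the beat frequency between the TX and RX chirps.

    The received chirp is delayed by t = 2R/c, so the instantaneous
    frequency difference (beat) is f_b = (B / T_c) * t, which gives
    R = c * f_b * T_c / (2 * B).
    """
    return C * beat_frequency_hz * CHIRP_TIME / (2.0 * BANDWIDTH)

if __name__ == "__main__":
    # A 100 kHz beat corresponds to a target at ~100 m with these parameters.
    print(f"range: {fmcw_range(100e3):.1f} m")
```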
Micro-Doppler effect in radar: Compared to conventional measurements like radar cross section (RCS) and speed, the micro-Doppler modulations created by moving blades can serve as more effective signatures for detecting UAVs in radar signals. Recent years have seen a
significant increase in the importance of the detection and classification of UAVs using the
radar micro-Doppler effect [28]. Due to the rotating blades of drones modulating incident
radar waves, it is known that drones cause micro-Doppler shifts in radar echoes. Specific
parts of an object that move separately from the rest provide a micro-Doppler signature.
These signatures may be produced by drones simply by rotating their propeller blades [31].
As well, the drone’s moving components, such as rotors or wings, produce distinctive
radar echoes known as micro-Doppler signatures. By examining these signatures, the
authors of [32] proposed a novel approach that focuses on developing some patterns and
characteristics to identify various small drones based on blade types. The presence or
absence of these micro-Doppler shifts, which are produced in the spectra of drones, helps to
separate drone signals from background noise like birds or humans. Distinctions between
drone types were made using the variations in the micro-Doppler parameters developed by
the authors, such as the Doppler frequency difference (DFD) and the Doppler magnitude
ratio (DMR). X-band pulse-Doppler radar was used to analyze radar signatures of different
small drone types such as multi-rotor drones with only lifting blades, fixed-wing drones
with only puller blades, and hybrid vertical take-off and landing (VTOL) drones with both
lifting and puller blades. Experimental results demonstrated that for all three types of
drones, lifting blades produced greater micro-Doppler signals than puller blades.
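
The following sketch illustrates how such a micro-Doppler signature can be simulated and visualized: a blade return whose Doppler oscillates at the rotor rate is superimposed on the body return, and a short-time Fourier transform (STFT) spectrogram reveals the periodic blade modulation. All signal parameters are assumed for illustration and do not reproduce any cited experiment:

```python
# Illustrative micro-Doppler simulation: a rotating blade imposes a
# sinusoidally varying Doppler shift on top of the body return, which
# shows up as periodic "flashes" in an STFT spectrogram. All parameters
# are assumptions for demonstration.
import numpy as np
from scipy.signal import stft
import matplotlib.pyplot as plt

fs = 20_000          # sampling rate of the baseband radar signal, Hz
t = np.arange(0, 0.5, 1 / fs)
body_doppler = 200   # bulk Doppler of the drone body, Hz
rotor_rate = 60      # rotor revolutions per second
blade_mod = 1_500    # peak micro-Doppler excursion of the blade tips, Hz

# Body return plus a weaker blade return whose Doppler oscillates at the rotor rate.
body = np.exp(2j * np.pi * body_doppler * t)
blade_phase = (2 * np.pi * body_doppler * t
               + (blade_mod / rotor_rate) * np.sin(2 * np.pi * rotor_rate * t))
blade = 0.3 * np.exp(1j * blade_phase)
echo = body + blade + 0.05 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

# The STFT spectrogram reveals the micro-Doppler signature.
f, tau, Z = stft(echo, fs=fs, nperseg=256, noverlap=192, return_onesided=False)
plt.pcolormesh(tau, np.fft.fftshift(f), np.abs(np.fft.fftshift(Z, axes=0)), shading="auto")
plt.xlabel("time, s"); plt.ylabel("Doppler frequency, Hz")
plt.title("Simulated rotor micro-Doppler spectrogram")
plt.show()
```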
Even when using very sensitive radar systems, it is not possible to differentiate
between multi-copters and birds with sufficient accuracy using either the physical size or the
radar cross section (RCS). Investigations of multi-copters for their micro-Doppler signatures
and comparison with those of birds have shown excellent findings in related studies [33,36].
In [33], the authors presented the characteristic radar micro-Doppler properties of three
different drone models and four bird species of different sizes obtained by processing
a phase-coherent radar system at K-band (24 GHz) and W-band (94 GHz) frequencies.
The experimental outcomes clearly showed that there are considerable differences between
the micro-Doppler signatures of birds and proved that a K-band or millimeter-wave radar
system is capable of detecting drones with excellent fidelity. The findings demonstrated that
micro-Doppler signatures such as bird wing beat signatures, helicopter rotor modulation
(HERM) lines, micro-Doppler dispersion across the Doppler axis, and rotor blade flash
might all be employed as classification features for accurate target recognition. Thus, target
classification may be accomplished using the micro-Doppler effect based on the feature
“micro-Doppler signatures”. Flying multi-copters can be detected using the radar micro-Doppler signatures produced by the micro-movements of rotating propellers. In the case of
multi-copters, rotating rotor blades are the primary source of micro-Doppler signatures,
while the wing beats of birds provide these signatures. The authors of [33] demonstrated
that drones and birds may be consistently distinguished by analyzing the radar return,
including the micro-Doppler characteristic. Further, the work [31] addresses the problem of
classifying various drone types such as DJI and Parrot using radar signals at X- and W-band
frequencies. Convolutional neural networks (CNNs) were used to analyze the short-time
Fourier transform (STFT) spectrograms of the simulated radar signals emitted by the drones.
The experimental results demonstrated that a neural network that was trained using data
from an X-band radar with a 2 kHz pulse repetition frequency outperformed a CNN trained
using the aforementioned W-band radar. In [34], the authors proposed a deep-learning-based technique for the detection and classification of radar micro-Doppler signatures of
multi-copters. Radar micro-Doppler signature images of rotating-wing-copters and various
other non-rotating objects were collected using continuous wave (CW) radar, and then
the micro-Doppler images were labeled and fed to a trained CNN model. Experimental
measurements showed 99.4% recognition accuracy with various short-range radar sensors.
In [35], the authors examined micro-Doppler data obtained from a custom-built 10 GHz
continuous wave (CW) radar system that was specifically designed for use with a range of
targets, including UAVs and birds, in various scenarios. Support vector machines (SVMs)
were used for performing different classification types, such as drone size classification,
drone and bird binary classification, as well as multi-class-specific classification among
the five classes. The main shortcoming of the research is the limited conditions for data
collection, and the authors hope to address this limitation in their future work.

2.2. Radio Frequency (RF)-Based Detection


Principles of RF-based detection: As electronic components aboard drones, such as radio transmitters and Global Positioning System (GPS) receivers, emit energy that may be detected by RF sensors, RF-based UAV detection is considered one of the most efficient detection techniques. An RF-based UAV detection system is made up of the UAV itself, a UAV
remote controller or ground control station, as well as two receiver units for registering
the power of the received signal produced by the UAV. One of the receivers captures the
RF signals’ lower band, while the upper band of the RF signals is recorded by the second
receiver (see Figure 4). The communication signals between the controller device and the
UAV are considered RF signals. Drones often operate on a variety of frequencies. However,
the majority of commercial drones communicate with their ground controllers by using RF
signals in the 2.4 GHz Industrial, Scientific, and Medical (ISM) frequency band. Usually,
the drone’s frequency is unknown, and the RF scanner passively listens to the signals sent
between the UAV and its controller [17]. RF scanner technologies that capture wireless
signals can be used to detect the presence of UAVs in the target area. Therefore, UAV
detection in no-fly zones is most frequently accomplished by intercepting and analyzing
the RF-transmitted signals between the UAV and the ground control station. Typically,
these signals include up-link control signals from the ground station and down-link data
signals (position and video data) from the drone [37].

Figure 4. RF-based UAV detection system components: UAV, UAV remote controller, and two receiver
units for capturing lower and upper bands of RF signals [37].
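
At its simplest, such a passive RF scanner is an energy detector: it estimates the power spectral density of the captured band and flags activity when the energy in a sub-band rises well above the noise floor. The sketch below illustrates this idea on simulated baseband I/Q samples; the sampling rate, channelization, and threshold margin are assumptions, not parameters taken from [37]:

```python
# Toy RF energy detector for a 2.4 GHz ISM-band scanner. It assumes a receiver
# has already mixed the band down to baseband I/Q samples; all parameters
# (sampling rate, channel width, margin) are illustrative.
import numpy as np
from scipy.signal import welch

fs = 50e6  # complex sampling rate of the receiver, Hz (assumed)

def band_power_db(iq: np.ndarray, f_lo: float, f_hi: float) -> float:
    """Average PSD (dB) inside [f_lo, f_hi] Hz relative to the band center."""
    f, psd = welch(iq, fs=fs, nperseg=4096, return_onesided=False)
    mask = (f >= f_lo) & (f <= f_hi)
    return 10 * np.log10(np.mean(psd[mask]))

def drone_link_present(iq: np.ndarray, noise_floor_db: float, margin_db: float = 10.0) -> bool:
    """Flag a possible UAV control/video link if any 20 MHz sub-band
    rises well above the measured noise floor."""
    edges = np.arange(-fs / 2, fs / 2, 20e6)  # 20 MHz channels across the band
    return any(
        band_power_db(iq, lo, lo + 20e6) > noise_floor_db + margin_db
        for lo in edges
    )

# Example: noise-only capture vs. a capture containing a narrowband burst.
rng = np.random.default_rng(0)
noise = (rng.standard_normal(200_000) + 1j * rng.standard_normal(200_000)) / np.sqrt(2)
tone = 5 * np.exp(2j * np.pi * 8e6 * np.arange(200_000) / fs)  # burst at +8 MHz offset
floor = band_power_db(noise, -fs / 2, fs / 2 - 1)
print(drone_link_present(noise, floor))         # False
print(drone_link_present(noise + tone, floor))  # True
```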

Recent related studies on the use of RF sensors for UAV detection and classification
may be found in [37–47].
State-of-the-art in RF-based UAV detection, classification, and identification: Classification is one of the crucial tasks of supervised machine learning; in a labeled dataset, each sample belongs to exactly one of a fixed set of classes. The problem is binary classification when traditional machine learning (ML)- or deep learning (DL)-based UAV detection and identification is performed using only two labels, such as “drone” and “no drone”, “drone” and “bird”, etc. On the other hand, multi-class classification means drone identification
based on different models, payloads, number of rotors, and flight modes [37].
Drone detection and identification (DDI) systems using ML algorithms: A novel hierarchical
learning approach for efficient detection and identification of RF signals for different
UAV types was proposed in [37]. The proposed approach is performed based on several
stages, such as problem formulation specifying the system model and dataset [39], a
data pre-processing stage including smoothing filters, an ensemble learning approach
for solving multi-class classification tasks based on the voting principle between two ML
algorithms such as eXtreme Gradient Boosting (XGBoost) and k-nearest neighbors (KNN),
and model evaluation based on appropriate metrics. The experiment was performed
based on a publicly available DroneRF [39] dataset consisting of 227 segments stored in a
comma-separated values (CSV) format. The hierarchical learning approach consists of four
classifiers that work in a hierarchical way. The first classifier performs binary classification
by specifying the presence of a UAV (in this case, there are two labels: “UAV” and “no
UAV”). If a UAV is detected, then the second classifier defines the type of UAV (in this case,
there are three labels: Parrot AR, Phantom 3, and Parrot Bebop). If the detected UAV is a
Parrot Bebop, then the third classifier is activated and specifies the flight mode of the Bebop
drone between four flight modes: on, hovering, flying, and flying with video recording.
If the detected UAV’s model is Parrot AR, then the fourth classifier defines the flight mode
of AR. The DJI Phantom 3 has only one flight mode. To evaluate the detection system, false
negative rate (FNR), false discovery rate (FDR), and F1-score metrics were defined using the
values of precision and recall. The experiment outcomes demonstrated that with an increase
in the number of classes, the classification accuracy decreased. Despite such a challenge,
the authors could reach a classification accuracy of about 99% with their proposed method.
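The cascade structure described above can be sketched as follows; this is a structural illustration of the hierarchical voting idea (using scikit-learn and XGBoost), with random placeholder features standing in for the FFT-based features computed from the DroneRF segments, not the authors' released code:

```python
# Structural sketch of the hierarchical DDI approach of [37]: a cascade of
# stages, each a soft-voting ensemble of XGBoost and KNN. The random feature
# matrices below are placeholders for FFT-based DroneRF features.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier

def make_stage() -> VotingClassifier:
    # One hierarchy stage: XGBoost and KNN vote on predicted probabilities.
    return VotingClassifier(
        estimators=[("xgb", XGBClassifier()), ("knn", KNeighborsClassifier())],
        voting="soft",
    )

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 64))       # placeholder feature vectors
has_uav = rng.integers(0, 2, 400)        # stage 1 labels: UAV present?
uav_type = rng.integers(0, 3, 400)       # stage 2: 0=Bebop, 1=AR, 2=Phantom
flight_mode = rng.integers(0, 4, 400)    # stages 3-4: four flight modes

stage1 = make_stage().fit(X, has_uav)
stage2 = make_stage().fit(X[has_uav == 1], uav_type[has_uav == 1])
bebop_sel = (has_uav == 1) & (uav_type == 0)
ar_sel = (has_uav == 1) & (uav_type == 1)
bebop_stage = make_stage().fit(X[bebop_sel], flight_mode[bebop_sel])
ar_stage = make_stage().fit(X[ar_sel], flight_mode[ar_sel])

def classify(x: np.ndarray) -> str:
    # Run one feature vector through the cascade.
    if stage1.predict(x[None])[0] == 0:
        return "no UAV"
    t = stage2.predict(x[None])[0]
    if t == 0:
        return f"Bebop, mode {bebop_stage.predict(x[None])[0]}"
    if t == 1:
        return f"AR, mode {ar_stage.predict(x[None])[0]}"
    return "Phantom 3, single flight mode"

print(classify(X[0]))
```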
Similar research was performed in [38], wherein the authors provided a comprehensive comparison of six ML technologies (XGBoost, adaptive boosting (AdaBoost), decision tree, random forest, KNN, and multilayer perceptron (MLP)) for RF-based drone identification, also using the DroneRF dataset [39]. A two-step experiment, including
discrete Fourier transform (DFT)-based feature extraction in the frequency domain and
ML-based classification, showed that XGBoost provides state-of-the-art outcomes. Another
similar work was conducted in [40], wherein the authors proposed an RF-based machine
learning drone detection and identification system that leverages low-band RF signals
for communication between the drone and the flight controller. The study was carried
out utilizing the DroneRF dataset [39], which is accessible to the public and contains
227 segments (186 “Drone” segments and 41 “No drone” segments) of RF signal-strength
data obtained from three different kinds of drones. These three drones have four different
operational modes: on, hovering, flying without video recording, and flying with video
recording. These drone operating modes are labeled as Modes 1–4. Information about
the Drone RF dataset, including the quantity of segments and raw samples for each kind
of drone, is briefly summarized in Table 3 [39]. Additionally, Table 4 demonstrates the
detection and identification cases for the open-source dataset [39]. Three machine learning
models were created to detect and identify drones using the XGBoost algorithm under the
three scenarios: drone presence, drone presence and type, as well as drone presence, type
and operating mode. According to the experimental results, in comparison to the other two
scenarios, the accuracy of the classifier decreases when utilizing an RF signature to identify
and detect the operating modes of drones. This implies that utilizing a signal’s frequency
components as a signature to identify drone activity has limited efficacy.
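A minimal sketch of this kind of pipeline, using DFT magnitude features and an XGBoost classifier in the spirit of [38,40], is given below; the synthetic segments, segment length, and labels are placeholders for the DroneRF recordings:

```python
# Sketch of DFT-based feature extraction feeding an XGBoost classifier,
# in the spirit of [38,40]. Synthetic segments stand in for the DroneRF
# recordings; segment length and class labels are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

def dft_features(segment: np.ndarray, n_bins: int = 256) -> np.ndarray:
    """Magnitude spectrum of a raw RF segment, averaged down to n_bins."""
    spectrum = np.abs(np.fft.rfft(segment))
    usable = (spectrum.size // n_bins) * n_bins
    return spectrum[:usable].reshape(n_bins, -1).mean(axis=1)

rng = np.random.default_rng(1)
segments = rng.standard_normal((300, 10_000))   # placeholder raw RF segments
labels = rng.integers(0, 10, 300)               # placeholder 10-class labels

X = np.stack([dft_features(s) for s in segments])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
clf = XGBClassifier().fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.2f}")  # ~chance on random placeholders
```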

Table 3. An overview of the DroneRF dataset [39].

| Drone Type | Number of Segments | Number of Samples | Ratio, % |
| Parrot Bebop | 84 | 1680 × 10^6 | 37 |
| Parrot AR | 81 | 1620 × 10^6 | 35.68 |
| DJI Phantom 3 | 21 | 420 × 10^6 | 9.25 |
| No Drone | 41 | 820 × 10^6 | 18.06 |
Table 4. Detection and identification cases for the DroneRF dataset [39].

| Case Name | Class Name | Number of Segments |
| 2-class problem | Drone | 186 |
| | No Drone | 41 |
| 4-class problem | Bebop | 84 |
| | AR | 81 |
| | Phantom | 21 |
| | No drone | 41 |
| 10-class problem | Bebop mode 1 | 21 |
| | Bebop mode 2 | 21 |
| | Bebop mode 3 | 21 |
| | Bebop mode 4 | 21 |
| | AR mode 1 | 21 |
| | AR mode 2 | 21 |
| | AR mode 3 | 21 |
| | AR mode 4 | 18 |
| | Phantom mode 1 | 21 |
| | No drone | 41 |

Drone detection and identification (DDI) systems using DL algorithms: The authors of [41]
trained a deep neural network (DNN) with four fully connected layers and confirmed the
dataset’s validity using 10-fold cross-validation. By contrast, in [42], the authors attempted
to tackle the drone detection issue and achieve better accuracy by using a simple CNN
rather than DNN. The research in [43] established a new technique for RF-signal-based UAV
detection and identification using transfer learning. Firstly, a sparse RF signal was sampled
using compressed sensing, and then, in the preprocessing part, a wavelet transform was
used to extract time–frequency features of non-stationary RF sample signals. Three-level
hierarchical classification was performed using a Visual Geometry Group (VGG)-16 network retrained on the preprocessed compressively sensed RF signals to detect and identify UAV presence, model, and flight mode. UAV detection, type, and operating mode identification capabilities for various compression rates are assessed using the publicly available dataset DroneRF [39]. The experimental findings demonstrated that the proposed technique performs well and has a greater detection rate and recognition accuracy than other
methods [40–42]. In [44], the authors suggested a compressively sensed RF-signal-based
multi-stage deep learning approach for UAV detection and identification. The authors
replaced the traditional data sampling theorem with the compressive sensing theory to
sample and preprocess the initial data. Then, two neural network models were designed to
detect the presence of UAVs as well as to classify the UAV model and flight modes. A DNN
network containing five dense layers was used for UAV presence detection. The CNN
network consisted of six 1D convolutional layers for feature extraction followed by pooling
layers for reducing data size and two fully connected layers for classifying the output data;
the network was designed to identify and classify UAV types and flight modes. Different
experiments were conducted using the open-access DroneRF dataset [39]. Experimen-
tal results were compared to those of other research works [37,38,40–42] carried out on
the same dataset [39]. By comparing all of these methods, the authors of [44] proved the efficiency of the compressed sensing technique for sampling the communication signals between the controller and the UAV. Furthermore, the experiment outcomes
demonstrated that even at extremely low sample rates, the approach employed in their
work performs better than alternative learning strategies in terms of evaluation metrics
such as accuracy, F1-score, and recall. Another study uses a multi-channel 1-dimensional
convolutional neural network (1D-CNN) for drone presence detection and drone type
and flight state identification [45]. The authors suggested channelizing the spectrum into
several channels and using each channel as a different input for the classifier. Therefore,
the whole Wi-Fi frequency spectrum is separated into eight distinct channels, whereby
each channel provides unique information about the drone’s RF activity and flight mode.
The proposed multi-channel model consists of channelized input data, a feature extractor
with two stages of convolutional and pooling layers, as well as a fully connected layer,
or MLP, for classification tasks. The experiment was conducted on the DroneRF dataset [39],
which features drones operating in various modes. Experimental outcomes demonstrated
that the proposed multi-channel 1DCNN performs noticeably better than the methods
described in [41]. Related works on DDI using ML and DL methods were briefly compared
(see Table 5). A number of deep learning and machine learning models were tested by
the authors of [46] for drone RF feature extraction and classification tasks. The proposed
drone detection solution started with preprocessing raw data of drone RF signals from
the publicly available DroneRF dataset [39]. Then, feature extraction methods such as
root mean square (RMS) and zero crossing rate (ZCR) for extracting time domain features;
discrete Fourier transform (DFT), power spectral density (PSD), and mel-frequency cepstral
coefficients (MFCCs) for extracting frequency domain features; as well as fused methods
were used to extract time and frequency domain features. Several drone detection solutions
were performed during the experiment. In the first solution, extracted features were fed
into an XGBoost machine learning classifier, whereas in the second solution, both feature
extraction and classification were performed using a 1D-CNN deep learning model; in the
third solution, features were extracted using a 1D-CNN deep learning model and classified
using machine learning classifiers and vice versa. K-fold cross-validation was used to
compare the performance of each solution. The authors solved the class imbalance problem
by using the Synthetic Minority Oversampling Technique (SMOTE) data augmentation
method, whereby synthetic data points are generated based on the original points for the
minority class. This helped to improve the classification performance of the proposed
drone detection approach. According to the experimental results, PSD features with the
XGBoost classifier showed better results than other solutions. In [47], the authors proposed
a deep learning model based on time–frequency multiscale convolutional neural networks
(TFMS-CNNs) for drone detection and identification. The proposed model is made up
of two parallel networks with a layered design: one of the networks takes input in the
frequency domain and the other one in the time domain. In the preprocessing stage, the raw
RF segments were transformed into frequency domain signals using the discrete Fourier
transform. The two parallel networks consist of 1D convolutional blocks and global max
pooling layers. Experimental results showed that the proposed TFMS-CNN approach,
which incorporates both time and frequency domain data, significantly outperforms previous works' models [41–43] that were trained solely on frequency domain signals in terms
of accuracy and F1-scores. Table 5 below summarizes the comparison of these ML- and
DL-based drone detection and identification methods based on the DroneRF [39] dataset.
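
To make the multi-channel architecture of [45] more tangible, the following Keras sketch builds a model with channelized spectral inputs, a two-stage convolutional feature extractor, and an MLP classification head; all layer sizes and the 10-class output are assumptions, since the paper's exact configuration is not reproduced here:

```python
# Structural sketch of a multi-channel 1D-CNN in the spirit of [45]:
# the spectrum is split into several sub-channels, a feature extractor
# with two convolution/pooling stages processes them, and an MLP head
# performs classification. Layer sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

N_CHANNELS = 8   # e.g., the Wi-Fi band split into eight sub-channels
SAMPLES = 1024   # spectral samples per channel (assumed)
N_CLASSES = 10   # e.g., the DroneRF 10-class problem

inputs = layers.Input(shape=(SAMPLES, N_CHANNELS))  # each sub-band is one input channel
x = layers.Conv1D(32, kernel_size=7, activation="relu")(inputs)   # stage 1
x = layers.MaxPooling1D(pool_size=4)(x)
x = layers.Conv1D(64, kernel_size=5, activation="relu")(x)        # stage 2
x = layers.MaxPooling1D(pool_size=4)(x)
x = layers.Flatten()(x)
x = layers.Dense(128, activation="relu")(x)                       # MLP head
outputs = layers.Dense(N_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```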

Table 5. Comprehensive comparison of existing DDI methods based on DroneRF dataset [39].

| Preprocessing | Feature Extraction | Classification Method | Accuracy for 10-Class Problem, % | Ref. |
| data engineering: a smoothing filter with a window of 15 was applied to each RF signal in order to remove noise and clutter | feature engineering: data segmentation using fast Fourier transform (FFT) | hierarchical approach based on voting principle between two ML algorithms such as XGBoost and KNN | 99.2% | [37] |
| given RF raw segments | feature extraction in frequency domain using discrete Fourier transform (DFT) | six ML algorithms: XGBoost, AdaBoost, decision tree, random forest, k-nearest neighbor, multilayer perceptron | XGBoost algorithm performs well for 10-class problem: 79.25% | [38] |
| given RF raw segments | discrete Fourier transform (DFT) of lower band (LB) and upper band (UB) segments | XGBoost | 70.09% | [40] |
| given RF raw segments | convolutional layer extracts features from the data | DNN network with four fully connected layers | not given | [41] |
| reshape function is applied on the input data | by using filters on the input data, 1D convolutional layer extracts features from the data; average pooling layer followed by conv layer reduces the space dimension of the extracted data | dense or fully connected layer with activation function performs classification | 59.20% | [42] |
| compressive sampling | wavelet transform extracts richer time–frequency information | transfer-learning-based VGG-16 | 90.2% | [43] |
| compressive-sensing-based sampling; additionally, ZCC and PSD computation for CNN network | 1D convolutional layers followed by pooling layers extract the features for CNN network | 5 dense layers for DNN network; 2 fully connected layers and activation functions for CNN network perform classification tasks | 99.3% | [44] |
| data channelizing | feature extraction with two stages of convolutional and pooling layers | fully connected layer or MLP for classification tasks | 87.4% | [45] |
| SMOTE data augmentation method for solving imbalanced data problem | RMS and ZCR for extracting time domain features, as well as DFT, PSD, and MFCC for extracting frequency domain features | XGBoost ML and 1D-CNN DL algorithms | 99.51% | [46] |
| DFT transformed the raw RF segments into frequency domain signals | both time and frequency domain features are extracted using two parallel networks | fully connected layers of time–frequency multiscale convolutional neural network | 87.67% | [47] |

2.3. Acoustic-Based Detection


Principles of acoustic-based detection: Audio-based methods have been shown to be
promising for drone identification in recent years. Due to their engines, propeller blades,
aerodynamic features, etc., flying drones generate a variety of distinct acoustic signatures,
which can be used to aid in detection purposes. Nevertheless, the sound produced by
propeller blades is frequently employed for detection because it has a comparatively larger
amplitude. Numerous research works have examined the sound produced by drones, using
characteristics like frequency, amplitude, modulation, and duration to identify a drone’s
existence [48–60].
Acoustic-based drone detection relies on the use of specialized, highly sensitive audio
sensors like microphones or microphone arrays to capture the noises made by propeller
blades of drones, and the resulting audio signals are analyzed based on different methods
such as correlation/autocorrelation or machine learning to identify the presence, type, and
capabilities of drones [48]. Usually, the term “drone detection” refers to the identification
of unauthorized drone activity, such as entering a no-fly zone or filming something on
camera. However, the expanded process of intruder drone detection often entails figuring
out its presence, type, model, and other capabilities such as size, speed, altitude, direction,
position, etc. Finding out whether a drone is in the region of interest refers to “drone
presence detection” or just the “drone detection” problem. However, determining the
precise model or type of the spotted drone is the drone identification problem. The exact
geographic position of the identified drone is ascertained using drone localization. Drone
fingerprinting is the procedure of processing a captured audio signal and mapping it
to those of UAV IDs stored in a database. Drone presence detection performs a binary
classification task. Drone identification can be represented as binary classification when
determining if a drone is authorized or not, or as multi-class classification by identifying
the detected drone by its model, activity, size, unique ID, etc. [17].
Drone detection and identification using ML algorithms: The acoustic-based drone detection problem was considered in [49], wherein the authors proposed an inexpensive and
robust machine-learning-based drone detection system. Two online databases including
different drones’ augmented audio signals were used for drone data preparation. The “No
Drone” database is made up of various ambient noises that were gathered from sound
databases on YouTube and from the BBC. A total of 26 mid-level audio features, called 'mel-frequency cepstral coefficients' (MFCCs), were extracted from both drone and non-drone
data. Different duplicate values and outliers were removed in the preprocessing stage.
As MFCC features represent high-dimensional drone spectral envelopes, principal component analysis (PCA) was applied to reduce the high-dimensional attributes. K-medoid
clustering was applied on the normalized PCA to obtain the ground truth for the total
number of drones in the drone database. Two ML classifiers, balanced random forest (BRF)
and multi-layer perceptron (MLP), performed binary classification by identifying real-time
drone presence in the target area. Experimental results demonstrated that for the test
dataset, MLP, with an F1-score of 0.83, gave better results than BRF (the F1-score for BRF
was 0.75). The distance dependence of an acoustic-based UAV detection system based on
various machine learning algorithms was studied in [50]. The dataset consisted of sounds
from several drone models (DJI Phantom 3, Parrot AR, Cheerson CX 10, and Hobbyking
FPV250) as well as motorbikes, helicopters, airplanes, and building sites; the labels were
“drone” and “no drone”. Since the recordings contain no data for frequencies higher than 8 kHz, the signals were resampled from their original 44.1 kHz sampling frequency to 16 kHz. In the feature extraction stage, several spectral features in the
frequency domain, like MFCC, delta MFCC, delta-delta MFCC, pitch, centroid, etc., were
calculated for every frame. The extracted features were fed to ML classifiers, such as the
least squares linear classifier, MLP, radial basis function network (RBFN), support vector
machine (SVM), and RF, and these classifiers were compared using distance-based error
probabilities. A more-thorough evaluation of the models was provided by using receiver
operating characteristic (ROC) curves, for which RF seemed to be the best detector in
terms of probability of detection. On the other hand, simple linear classifiers performed best in terms of distance-based performance variation. Subsequently, the authors suggested
combining these two algorithms to achieve the best results. Another similar work was conducted in [51], wherein the authors focused on finding the best parametric representation for ML-based audio drone identification. The drone database used consisted of two commercially available drones' (Bebop and Mambo) propeller noise recordings for both “Drone” and “No Drone” classes. In the feature extraction part, five acoustic features—MFCC, Gammatone cepstral coefficients (GTCC), linear prediction coefficients (LPC), zero-crossing rate
(ZCR), and spectral roll-off (SRO)—were extracted to examine optimal audio descriptors
for the drone detection problem. In the classification part, an SVM classifier was employed
to classify drone audio data. Experimental findings proved that GTCC can adequately
describe the acoustic characteristics of a drone and produce the best outcomes compared to
the other feature sets; furthermore, the MFCC and LPC feature sets are also useful for audio
drone detection, as demonstrated by their considerable detection performance; on the other
hand, ZCR and SRO, as single-value features, perform the worst based on sole utilization.
Nevertheless, combining all five audio features yields the best classification outcomes.
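
A minimal sketch of this common feature-plus-SVM acoustic pipeline is shown below, using MFCCs only (librosa provides no built-in GTCC) and synthetic clips as placeholders for real drone and background recordings:

```python
# Sketch of the acoustic pipeline common to [49-51]: extract cepstral
# features from audio clips and classify them with an SVM. Synthetic
# noise clips stand in for real drone/background recordings.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

SR = 16_000  # sampling rate, matching the 16 kHz resampling used in [50]

def clip_features(audio: np.ndarray) -> np.ndarray:
    """Mean and std of 13 MFCCs over the clip -> a 26-dim feature vector."""
    mfcc = librosa.feature.mfcc(y=audio, sr=SR, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

rng = np.random.default_rng(2)
# Placeholder 1 s clips: label 1 = "drone", 0 = "no drone".
clips = [rng.standard_normal(SR).astype(np.float32) for _ in range(100)]
labels = rng.integers(0, 2, 100)

X = np.stack([clip_features(c) for c in clips])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.2f}")  # ~chance on random placeholders
```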
In [52], the authors presented an ML framework for amateur drone (ADr) identification
among other noises in a loud environment. Four distinct sound datasets featuring birds,
drones, thunderstorms, and airplanes were gathered for the multi-class classification task.
Linear predictive cepstral coefficients (LPCC) and MFCC features were extracted from
acoustic sounds. Following feature extraction, these sounds were accurately classified
using support vector machines (SVMs) with different kernels. The experimental findings
confirmed that the SVM cubic kernel with MFCC performs better than the LPCC approach
by reaching an accuracy of about 96.7% for ADr identification. In future works, the authors
plan to improve detection accuracy by utilizing deep neural networks with a huge amount
of acoustic data. A robust ML-based drone detection framework for acoustic surveillance
was established in [53]. The dataset was captured in various sound environments and
included various UAV types for training and unseen test sets, as well as sounds of ambient
noises such as birds, wind, engines, and building noise. Different block-wise temporal
and spectral domain features were extracted to distinguish UAV sounds from background
noises. The extracted relevant features were fed into an SVM classifier that enabled the
researchers to recognize and classify sounds based on their acoustic profiles. The exper-
imental results were evaluated with a confusion matrix first, whereby robust detection
findings were produced with a very low percentage of false positives. Based on other
evaluation metrics such as accuracy, F1-score, sensitivity, etc., the proposed SVM-based
technique outperformed spectrogram-CNN on the testing dataset. Additionally, the au-
thors checked the robustness of the proposed approach by dividing the binary classes into
several subclasses, including specific types of noise disruptions and problematic sound
occurrences. The result was a decrease in performance in identifying recordings from
more remote or silent UAVs. Also, bird calls and building sounds from the noise category
frequently caused confusion, leading to false-positive UAV detection. The authors claimed
that these limitations can be fixed by extending training audio recordings, combining
audio–visual data into the detection process, or integrating many acoustic sensor outputs
into array processing systems. Similar research on acoustic-based UAV recognition based
on analyzing UAV rotor sounds was presented in [54]. Hexacopters and quadcopters were
employed for creating drone and non-drone datasets from scratch, which were grown
incrementally from 21 to 696 samples. The samples include several scenarios, such as UAVs
landing, taking off, hovering, and doing flyby passes. Feature extraction was performed by
MFCCs representing the short-term sound power spectrum. The extracted features were
trained on an SVM classifier. For the CNN classifier, a spectrogram saved as a 224 × 224 jpg
was made for each sample. The validation accuracy of both classifiers was checked for 21,
110, and 284 samples. Experimental results indicated that in the 21-, 110-, and 284-sample
tests, SVM was more accurate than CNN. Because the training data were small and imbalanced,
the CNN's results declined steeply in the 284-sample test, which led the authors to shift
their focus to the SVM model; on the 696-sample test, it achieved a data
validation accuracy of 92%.
Drone detection and identification using DL algorithms: The authors of [55] concentrated
on deep learning approaches for drone detection and identification, including convolutional
neural networks (CNNs), recurrent neural networks (RNNs), and convolutional recurrent
neural networks (CRNNs). The dataset was acquired by recording Bebop and Mambo
drone propeller sound samples with a smartphone’s built-in microphone while the drone
was hovering and flying in a quiet indoor space. Preprocessing started with reformatting,
which converted the audio file type, sample rate, bitrate, channel, etc. Then, segmentation
of the formatted audio files was performed to optimize the model’s training for real-time
implementation. The authors overlapped real-world background noises with the drone
audio as a data augmentation method. For the first experiment—the drone detection
problem—the data acquired for the Bebop and Mambo drones were combined into a single
entity labeled as “drone”, and all other audio clips were labeled as “not a drone”. The
second experiment focused on the drone identification problem, for which a multi-class
classification problem was addressed to identify Bebop and Mambo drone models and
unknown noises, including other background noises. The results indicated that in terms of
all evaluation metric values, CNN outperformed RNN; however, compared to CNN and
convolutional recurrent neural network (CRNN) methods, RNN required the least amount
of training time; further, CRNN outperformed RNN but showed decreased performance
compared to CNN across all evaluation metrics. Considering that the RNN algorithm is
designed to work best with sequential data, short audio clips may have led to a decline in
its performance. CRNN and CNN showed close results in terms of performance; however,
in terms of training time, CRNN was significantly quicker than CNN. In [56], a CNN-based
detection system with a STFT feature extraction technique was used to compare drum
sounds and tiny fan sounds to a UAV’s hovering acoustic signal. The acoustic signal
data from a drum and a tin were recorded in a silent laboratory with no other noise.
UAV hovering acoustic data was provided by IEEE SPCUP 2019. In the preprocessing
stage, STFT was applied to the noise produced by the UAV to convert one-dimensional
data into two-dimensional features. The proposed simple CNN model consists of seven
layers. According to the experimental outcomes, the CNN-based model for UAV detection
showed a high detection rate. However, the authors suggested that in the future, small and
noise-free data should be verified in a more populated outdoor area.
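As background for the STFT preprocessing step in [56], the sketch below turns a one-dimensional acoustic signal (an assumed NumPy array signal sampled at fs) into a two-dimensional log-magnitude spectrogram suitable as a CNN input; the window parameters are illustrative choices, not those of the cited work.

```python
import numpy as np
from scipy.signal import stft

fs = 44100                                    # sampling rate (Hz)
# signal: 1-D NumPy array of microphone samples (assumed available)
f, t, Z = stft(signal, fs=fs, nperseg=1024, noverlap=512)
log_spec = 20 * np.log10(np.abs(Z) + 1e-10)   # 2-D log-magnitude spectrogram
# log_spec has shape (freq_bins, time_frames) and can be fed to a CNN
```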
Another CNN-based multi-class UAV sound classification with a large-scale dataset was proposed in [57]. The
large-scale UAV dataset introduced contains audio files of 10 distinct UAVs, ranging from
toy drones to Class I drones, with each file including a 5-second recording of the flying
drone sound. In the feature extraction part, MFCCs were extracted from audio samples
in two steps: first, the spectrum was mapped from the Hz scale to the Mel scale and its
logarithm was taken; next, a discrete cosine transform was applied to the log-Mel magnitudes.
The resulting MFCC is a cepstral feature derived from the Mel-frequency representation.
The extracted features were then fed into a simple
CNN model to train the drone classification system. The obtained results were evaluated by
loss and accuracy plots as well as evaluation metrics. The resulting F1-score values proved
that the CNN-based framework coped with the classification task of different UAV models:
showing overall test accuracy around 97.7% and test loss around 0.05%. In [58], audio-
based UAV detection and identification employing DNN, CNN, LSTM, convolutional
long short-term memory (CLSTM), and transformer encoders (TE) was benchmarked with
the dataset used in [55]. In addition to the dataset from [55], the authors gathered their
own varied-identification audio dataset, which included seven different kinds of sound
categories, such as ’drone’, ’drone-membo’, ’drone-bebop’, ’no-UAV’, ’helicopter’, ’aircraft’,
and ’drone-hovering’. Mel-frequency cepstral coefficients (MFCC) were calculated for each
audio frame for the feature extraction module. The extracted features were fed into deep
neural network models for UAV sound detection and UAV identification tasks. A DNN
model was built with three hidden layers, and the output layer used a softmax
activation function. On the other hand, the CNN model with small-sized inputs had three
convolutional layers without any subsequent pooling layers and had a hidden layer with a
softmax activation function. The LSTM was built using one 128-unit LSTM layer followed
by a hidden output layer with the same activation function as previous models. Likewise,
CLSTM employed a convolutional layer and an LSTM layer followed by a hidden output
layer. Finally, the TE was built with a positional encoder, two attention heads, a 128-unit
encoder layer, and a softmax activation mechanism. A comprehensive comparison of
existing ML and DL-based drone detection methods based on acoustic signatures is
shown in Table 6.
Table 6. Comprehensive comparison of existing drone detection methods based on acoustic features.

| Dataset | Extracted Features | Classification Method | Experiment Results | Ref. |
| --- | --- | --- | --- | --- |
| Two online databases | 26 mid-level MFCCs | BRF and MLP | F1-score for MLP was 0.83, and for BRF was 0.75 | [49] |
| Dataset of several drone models' audio (DJI Phantom 3, Parrot AR, Cheerson CX 10, Hobbyking FPV250) | MFCC, delta MFCC, delta–delta MFCC, pitch, centroid | Least squares linear classifier, MLP, radial basis function network (RBFN), SVM, and RF | ROC curves; RF classifier showed the best detection probability; simple linear classifier performed best in terms of distance-based performance variation | [50] |
| Drone database of Bebop and Mambo drones' propeller noise recordings | MFCC, GTCC, LPC, ZCR, and SRO | SVM | Combination of the five audio features yields the best classification outcomes | [51] |
| Sound datasets of birds, drones, thunderstorms, and airplanes | LPCC and MFCC | SVM with different kernels | SVM cubic kernel with MFCC performs better than the LPCC approach, reaching an accuracy of about 96.7% | [52] |
| Sound dataset of various UAV types | Different block-wise temporal and spectral domain features | SVM classifier | Experimental results evaluated with confusion matrix, accuracy, F1-score, and sensitivity | [53] |
| Sound data of hexacopters and quadcopters | MFCCs | SVM and CNN classifiers | Data validation accuracy for SVM reached 92% | [54] |
| Drone dataset acquired by recording the Bebop and Mambo drone propellers' sounds | Audio reformatted (file type, sample rate, bitrate, channel) and then segmented | CNN, RNN, and CRNN | CNN outperformed the other methods | [55] |
| Dataset of drum and tiny fan sounds plus hovering UAVs | STFT | CNN | CNN-based model for UAV detection showed a high detection rate | [56] |
| Large-scale UAV dataset containing audio files of 10 distinct UAVs | MFCCs | Simple CNN model | Overall test accuracy around 97.7% and test loss around 0.05% | [57] |
| Sound data of 7 UAV classes | MFCCs | DNN, CNN, LSTM, CLSTM, and TE | LSTM model outperformed with high accuracy | [58] |
| Drone sound dataset gathered by flying several drone models with and without payloads | MFCCs | SimpleRNN, LSTM, BiLSTM, and GRU | GRU outperformed in distinguishing loaded and unloaded UAVs | [60] |

Researchers have compared the detection and identification capabilities of all of these
proposed models with the models established in [55], such as RNN, CNN, and CRNN, for
the same dataset. Based on the UAV detection experimental findings, the LSTM model
showed the best results compared to all the other models. In the UAV identification
experiment, all the proposed models showed better identification accuracy than the models
in [55]: specifically, the LSTM model outperformed with high accuracy.
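To make the recurrent models above concrete, the following minimal Keras sketch mirrors the reported LSTM layout (a single 128-unit LSTM layer followed by a softmax output); the input shape, class count, and training call are illustrative assumptions, not the authors' code.

```python
from tensorflow.keras import layers, models

time_steps, n_mfcc, num_classes = 100, 13, 7   # assumed dataset dimensions

model = models.Sequential([
    layers.Input(shape=(time_steps, n_mfcc)),  # sequence of MFCC frames
    layers.LSTM(128),                          # single 128-unit LSTM layer
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=20, validation_split=0.1)
```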
The authors of [59] proposed an enhanced dataset version of their study conducted in [55] by employing
real-like drone audio clips artificially created by generative adversarial networks (GANs).
By comparing experiments with and without GAN-generated data, the authors concluded
that employing GANs is a promising way to bridge the gap imposed by the scarcity of
drone acoustic datasets while simultaneously improving classifier performance in the
majority of cases; in the cases for which no significant improvement was observed, the
classifier performed similarly to when GAN was not
used. Real-time UAV sound recognition based on different RNN networks was conducted
in [60]. A drone sound dataset was gathered by flying several drone models with and
without payloads. To help distinguish UAV sounds from sources of false alarms, data on
ambient noises were also gathered, including noises of wind, motorbike riding, and canopy
whispers. All the collected data were labeled as “Unloaded UAV”, “Loaded UAV”, or
“Background noise” and were examined in the time and frequency domains to determine
the intervals of the information domains. Once the desired frequency range was selected,
special filters were used to filter out any sounds below 16,000 Hz. Thus, mel-scale features
were extracted by employing a variety of time and frequency hyperparameters. UAV sound
recognition models were trained based on RNNs such as SimpleRNN, LSTM, bidirectional
long short-term memory (BiLSTM), and gated recurrent unit (GRU). Experimental results
proved the efficacy of using RNNs for UAV sound recognition, as they outperformed CNN
models. For comparison between RNN models, the GRU architecture was discovered to be
an efficient model for distinguishing loaded and unloaded UAVs as well as background
noise with high accuracy.
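As a companion sketch for the RNN-based recognizers discussed above, the snippet below swaps a GRU layer into the same template for the three classes used in [60] ("Unloaded UAV", "Loaded UAV", "Background noise"); the layer width and input shape are illustrative assumptions rather than the authors' configuration.

```python
from tensorflow.keras import layers, models

# Three classes: unloaded UAV, loaded UAV, background noise
gru_model = models.Sequential([
    layers.Input(shape=(100, 40)),   # assumed (frames, mel features)
    layers.GRU(64),                  # GRU cell in place of LSTM
    layers.Dense(3, activation="softmax"),
])
gru_model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
```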

2.4. Vision-Based Detection


Principles of visual-based detection: A visual detection system is based on capturing
drone visual data, such as images or videos, using camera sensors and then detecting
drones in them using computer-vision-based object detection algorithms. The process of
capturing visual data from objects is called image acquisition and consists of three main
steps: first, energy is reflected from the object of interest; then, it is focused by an optical
system; and finally, the amount of energy is measured using a camera sensor (see Figure 5).

Figure 5. The main steps of the image acquisition process are: energy is reflected from the UAV, the
reflected energy is focused by an optical system, and a camera sensor measures the amount of energy.

Related works on drone object detection based on visual data can be divided into two
major categories depending on whether the authors employed a deep learning model or
dealt with feature extraction using traditional handcrafted features. However, recent studies
on visual-based drone detection and identification have mostly employed learned features
or deep learning models [61–71]. Deep learning for object detection with convolutional
neural networks (CNNs) has two approaches: one-stage detection and two-stage detection.
One-stage detectors such as the you only look once (YOLO) and single-shot detector (SSD)
algorithms lack the region proposal stage common in two-stage detectors and therefore
offer very high inference speed. Two-stage detectors first generate several candidate
bounding boxes for the objects in the image and then classify and refine them, which
generally yields more accurate localization and identification. The most well-known
two-stage detectors are the region-based convolutional neural network (R-CNN), fast R-CNN,
faster R-CNN, etc., which are shown in Figure 6.

Figure 6. Visual-based object detection methods. Object detection with low-level hand-crafted
features uses edges, blobs, and color information. Object detection with learned features uses
extensive DL models.
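To illustrate how such one-stage detectors are typically applied, the sketch below loads a pretrained YOLOv5 model through PyTorch Hub and runs inference on a single frame; the image path is a placeholder, and detecting drones specifically would require fine-tuning on a drone dataset, as done in the works reviewed below.

```python
import torch

# Load a pretrained YOLOv5s model from the ultralytics hub
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Run inference on a single frame (placeholder path)
results = model("frame_0001.jpg")
results.print()                      # class, confidence, box summary
detections = results.xyxy[0]         # tensor: [x1, y1, x2, y2, conf, class]
```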

Object detection with learned features based on one-stage detectors: In [61], the authors
proposed a novel anti-drone YOLOv5s approach to detect low-altitude small-sized UAVs,
which is mainly oriented towards border defense applications. Initially, the low-altitude,
small-size object detection problem was solved based on the optimization method for
feature enhancement. Secondly, as the YOLOv5 base model includes numerous parameters
that are unsuitable for use in embedded systems, the authors used ghost modules and depth-
wise separable convolution to decrease the number of model parameters. The proposed
lightweight detection approach improved the mAP value by 2.2% and the recall value
by 1.8%. Another study in [62] focused on drone detection using single-shot detectors
such as YOLOv4, YOLOv5, and detection transformer (DETR). In the preprocessing stage,
images from the open-access D-Drone dataset were resized, labeled to obtain ground-truth
labels, and then converted to YOLO and common objects in context (COCO) formats.
Detection models were trained on one-stage detectors, and the results were evaluated
based on evaluation metrics. Experimental outcomes showed that YOLOv5 outperformed
the other two detectors in terms of average precision (AP) with 99% and an intersection
over Union (IoU) of 0.84. The DETR model showed the worst results. The authors in [63]
addressed the real-time mini-UAV detection issue in terms of accuracy and speed of
detection by using the one-stage YOLOv5 model. The main contributions of the work
are the creation of a custom mini-UAV dataset including drones flying in low-visibility
circumstances using a Dahua multisensory network pan, tilt, zoom (PTZ) camera and the
redesign of the YOLOv5 model for the recognition of extremely tiny flying objects from
an aerial perspective based on features learned by deep CNN. As the receptive field size
of the baseline YOLOv5 model is insufficient to detect small flying objects, the structure
of the original YOLOv5 was redesigned with two improvements: to collect additional
texture and contour information from small mini-UAVs, a fourth scale is added to the
three scales of YOLOv5 feature maps; and to decrease feature information loss for small
mini-UAVs, feature maps from the backbone network are incorporated into the extra
fourth scale. Experimental results demonstrated that all evaluation metric values for
improved YOLOv5 outperformed the baseline model. Drone and bird detection based on
YOLOv4 and YOLOv5 was conducted in [64]. A custom dataset compiled from several
online sources consists of 900 images: 664 images for drones and 236 images for birds.
By evaluating the experimental results, the authors concluded that YOLOv5 is quicker
at object detection and overcomes the problem of detecting tiny drones and birds in real
time with high accuracy. Some limitations of the current study include a small dataset,
a class imbalance problem, and a failure to account for background noise. An automated
image-based drone detection system based on the fine-tuned YOLOv5 method and transfer
learning was presented in [65]. The dataset is available online from Kaggle and consists
of 1359 drone images acquired from an Earth-to-drone perspective and labeled for binary
classification tasks as drone or background classes. The performance of the proposed
model was evaluated using the transfer learning pretrained model weights. To highlight
the superiority of the proposed approach, the obtained results were compared to other
versions of YOLO, such as YOLOv3 and YOLOv4, as well as the two-stage Mask R-CNN
model. The comparative findings proved the efficacy and superiority of the suggested
framework, especially for distant and tiny drone detection. A comparative study of drone
detection based on one-stage detectors such as YOLO and MobileNetSSDv2 was presented
in [66]. Two open-source datasets were used for drone classification. Experimental results
showed that the YOLO architecture delivered high performance in terms of precision and accuracy while
retaining comparable FPS and memory usage. A fine-tuned YOLOv4-based automated
drone and drone-like object detection framework was established in [67]. A dataset of
drone and bird images was gathered from different internet sources. The total number of
gathered images was 2395, of which 479 images were for bird classes and 1916 images were
labeled for drone classes. To evaluate the performance of the proposed model, the trained
model was tested on custom videos in which two types of drones were flown at three
different altitudes. According to the experimental results, the YOLOv4 model showed
appropriate performance with a mAP of 74.36%. Overall, the fine-tuned YOLOv4 was
able to overcome issues with speed, accuracy, and model overfitting. The same authors
continued this task in [68], wherein the YOLOv5 detector was employed on the same
dataset. The main contribution of this study relied on the usage of a data augmentation
technique to artificially solve data shortage issues; thus, the total number of images reached
5749. To evaluate the model, evaluation metrics such as mAP, FPS, F1-score, etc., were
calculated. The proposed technique surpassed the authors’ previous methodology in [67]
with a mAP of 0.904. Due to its lightweight construction, YOLOv5 detected objects faster
than YOLOv4.
Object detection with learned features based on two-stage detectors: The authors in [69]
presented a comprehensive approach for detecting and tracking unmanned aerial vehicles
(UAVs) by proposing a visible-light dataset called the ’DUT Anti-UAV’ dataset. The sug-
gested fusion system reliably identifies and tracks UAVs in real-time by combining image
processing, object detection, and tracking methods. The authors evaluated state-of-the-art
one-stage and two-stage detection methods, including 14 detectors and 8 trackers retrained
on their own dataset. The research also underlines the difficulties with camera-based
UAV detection, such as differences in appearance caused by various UAV types, weather,
and environmental conditions. To overcome these difficulties, the authors suggested using
online adaptation approaches, whereby the detection model may be continually modified
to accommodate new UAV features. Numerous experiments revealed that, depending
on the detection algorithm, the tracking performance gain might vary, and the authors’
fusion approach could greatly boost the tracking efficiency of all the trackers. However,
Faster-RCNN with the VGG-16 version was a superior fusion option for the majority of
trackers. Drone detection using a two-stage Mask R-CNN detector with two backbones,
such as residual network (ResNet)-50 and MobileNet, was presented in [70]. The model
was trained on a dataset [71] with 1000 images of drones. The two backbone networks were
compared in terms of mean average precision (mAP). Based on the experimental results,
Mask R-CNN with ResNet-50 outperformed Mask R-CNN with the MobileNet backbone.
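As a two-stage counterpart to the earlier one-stage sketch, the following snippet runs torchvision's pretrained Mask R-CNN with a ResNet-50 FPN backbone on a single frame; the pretrained COCO weights and the image path are placeholders, since the cited works retrain such models on drone imagery.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained two-stage detector: Mask R-CNN with ResNet-50 FPN backbone
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

img = to_tensor(Image.open("frame_0001.jpg").convert("RGB"))
with torch.no_grad():
    out = model([img])[0]            # dict with boxes, labels, scores, masks

keep = out["scores"] > 0.5           # confidence threshold
print(out["boxes"][keep], out["labels"][keep])
```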
The accuracy of any UAV detection system might be hindered by different interference
factors in a real-world scenario. Table 7 lists the main examples of real-life interference
factors for each detection technology.

Table 7. Real-world interference factors for drone detection technologies.

| Detection Technology | Interference Factor | Real-World Scenario | Impact on Detection |
| --- | --- | --- | --- |
| Radar | Birds and wildlife | Bird echoes | Reduces the likelihood of distinguishing radar echoes from drones [72] |
| RF | Frequency overlap; signal jamming | Presence of wireless communication signals from Wi-Fi and Bluetooth sources [73–75]; near military installations or during jamming attacks | Interference from other devices operating on similar frequencies; inability to detect RF signals from drones |
| Acoustic | Ambient noise; background noise; wind | Noise of high-traffic areas, such as motorbike riding [60]; sounds of birds, airplanes, rain, and thunderstorms [76]; open fields or coastal areas | Masking of the drone's acoustic signature; distortion of sound waves, making detection difficult |
| Visual | Lighting [21] and weather conditions [17]; obstructions; moving objects [20] | Nighttime [21] or varying weather conditions such as fog, rain, and snow [17]; forested areas or cluttered environments; birds, airplanes, insects, and moving parts of scenes [20] | Reduced visibility; line-of-sight blockages preventing visual identification; increased false negatives |

2.5. Sensor Fusion and Other Methods for Drone Detection


The review of the four detection methodologies presented in the previous subsections
shows that each of the drone detection modalities has its own set of limitations, and a solid
anti-drone system may be supplemented by integrating several modalities. Sensor fusion
systems can integrate audio and visual features from acoustic and image sensors [77–79] or
combine radar and visual imaging systems [80,81]; RF and image sensors [82]; radar, RF,
and camera sensors [83]; optical camera, audio, and radar sensors [84]; as well as visible,
thermal, and audio sensors [85] to enhance drone identification, tracking, and classification.
By integrating the capabilities of several sensory modalities, the method increases the
robustness and accuracy of detection systems. Sensor fusion techniques such as early
fusion and late fusion can be applied to drone detection systems. These approaches define
when and how data from various sensors are merged during the detection and identification
of drones.
Early fusion (sometimes referred to as 'data-level' or 'feature-level' fusion) entails
integrating raw data or extracted features from various sensors prior to any additional
processing or analysis. The goal is to integrate data as soon as possible, which results in a
more thorough representation of the information (see Figure 7).

Figure 7. Early sensor fusion. Sensor fusion is accomplished by fusing raw data from individual
sensors, and the fused data are fed to a single detection system.
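A minimal feature-level fusion sketch is given below: per-sample acoustic and visual feature matrices (hypothetical acoustic_feats, visual_feats, and labels) are normalized per modality and concatenated before a single classifier; it illustrates the general idea rather than any cited system.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier

# acoustic_feats: (n_samples, n_audio_dims); visual_feats: (n_samples, n_img_dims)
# Normalize each modality so neither dominates, then concatenate (early fusion)
fused = np.hstack([
    StandardScaler().fit_transform(acoustic_feats),
    StandardScaler().fit_transform(visual_feats),
])
clf = RandomForestClassifier(n_estimators=100).fit(fused, labels)
```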

Late fusion (sometimes referred to as 'decision-level' fusion) is a sensor fusion
approach in which the decisions or confidence scores indicating the drone's presence from
numerous separate sensors are integrated at a later stage in the processing pipeline (see
Figure 8). These individual decisions or confidence scores are then aggregated to form
a final decision. Aggregation is critical in late fusion because it uses a variety of sensors
to increase the overall accuracy and resilience of the detection system. Voting systems,
weighted averages, Bayesian approaches, and machine learning models are some examples
of popular aggregation approaches used in late fusion for drone detection.

Figure 8. General scheme for late fusion. Each sensor modality is trained separately to detect drones.
Sensor fusion is achieved by fusing the detection decisions of individual sensor modalities.
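Its decision-level counterpart can be sketched as a weighted average of per-sensor confidence scores; the weights below are illustrative assumptions about sensor reliability, not values from the cited works.

```python
import numpy as np

def late_fusion(scores, weights, threshold=0.5):
    """Aggregate per-sensor drone-presence confidences by weighted average."""
    scores, weights = np.asarray(scores), np.asarray(weights)
    fused = np.dot(weights, scores) / weights.sum()
    return fused >= threshold, fused

# e.g., radar, RF, acoustic, camera confidences with assumed reliabilities
is_drone, conf = late_fusion([0.7, 0.4, 0.9, 0.6], [1.0, 0.5, 0.8, 1.2])
```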

Drone detection and identification based on Wi-Fi signal and RF fingerprint: As we indicated
when describing the main threats posed by drones in Section 1.1, to stop illegal activities
such as threatening privacy, smuggling, and even terrorism acts, a solid anti-drone system
is needed. Generally, most drones employ a radio communication (RC) transmitter and
receiver pair to control them; however, these days, Wi-Fi and Bluetooth have made it
possible for drone manufacturers to create gadget controllers that operate on smartphones
and tablets. If the drone uses wireless communication, then Wi-Fi enables the transmission
of massive amounts of data to and from UAVs using Wi-Fi-related (802.11-family) protocols
such as 802.11a, 802.11b, 802.11g, and 802.11n within a set control radius. These tiny
UAVs have a larger potential for imposing hazards across several sectors of society than
drones that just use RC controllers with no Wi-Fi- or Bluetooth-linked mobile applications.
Therefore, drone detection and identification using RF signals can be performed based
on RF fingerprinting and Wi-Fi fingerprinting approaches. In both approaches, the RF
communication signal between the drone and its controller is captured by employing
an RF sensing device [73]. RF fingerprinting techniques extract physical layer features
and signatures from the captured RF communication signal, while in Wi-Fi fingerprinting
approaches, medium access control (MAC) and network layer features are extracted. Multi-
stage drone detection and classification using RF fingerprints in the presence of wireless
interference signals was conducted in [73]. In the first stage, the captured RF data are
preprocessed using wavelet-based multiresolution analysis. The preprocessed RF data
are modeled using a two-state Markov-model-based Naïve Bayes detection method to
differentiate the RF signal from the noise class. In the second stage, if present, signals from
Wi-Fi and Bluetooth sources are recognized based on the detected RF signal’s bandwidth
and modulation features. After identifying the input signal as a UAV controller signal, it is
classified using several ML algorithms. The best classification accuracy achieved by a k-NN
classifier was 98.13% for classifying 15 various controllers based on only three features.
The experiment demonstrated that the suggested method can categorize the same make and
model of UAV controller without reducing overall accuracy. In [74], the authors developed
a mechanism for detecting UAVs with both an RC controller and Wi-Fi- or Bluetooth-linked
mobile applications based on radio communication and an internet protocol (IP) address.
The detection system was divided into two stages. The first stage involves searching for
the presence of any unfamiliar IP address in the range of interest. As the UAV has a Wi-Fi
connection, it has its own IP address. The drone detection system employs an algorithm
that stores certain IP addresses of known devices in its database. Then, utilizing a Wi-Fi
adapter, the algorithm continues to verify whether any unfamiliar device is connected
to the router by matching its IP address with those stored in the database. In the event
that any unfamiliar device is connected to the router, the system outputs its IP address.
The second stage entails determining whether the detected unfamiliar device is a UAV or
not by using an RF receiver. Overall, the detection system prototype could be able to detect
drone presences over a range of 100 m, even when birds appear in the region of interest.
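To illustrate the first, IP-based stage of such a system, the sketch below performs an ARP scan with scapy and flags addresses missing from a whitelist; the subnet and whitelist are placeholders, and classifying an unknown host as a UAV would still require the RF-receiver stage described above.

```python
from scapy.all import ARP, Ether, srp  # requires root privileges

KNOWN_DEVICES = {"192.168.1.10", "192.168.1.11"}  # placeholder whitelist

def find_unknown_hosts(subnet="192.168.1.0/24", timeout=2):
    """ARP-scan the subnet and return IPs not present in the whitelist."""
    pkt = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=subnet)
    answered, _ = srp(pkt, timeout=timeout, verbose=False)
    found = {resp.psrc for _, resp in answered}
    return found - KNOWN_DEVICES

# Any IP returned here is an unfamiliar device to pass to the RF stage
print(find_unknown_hosts())
```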
Drone detection and identification based on cellular and IoT networks: With the arrival of
5G and the Internet of Things (IoT), a low-cost bistatic radar system was introduced in [86].
The proposed system uses 5G non-line-of-sight (NLOS) signals bounced off the drone’s
body and the drone’s Wi-Fi-received signal strength indicator (RSSI) emission to identify
and locate the drone. A k-NN classifier was used to train these two signatures for real-time
drone location prediction. The system was designed for indoor and outdoor environ-
ments using three HackRF-One SDRs: two HackRF SDRs act as 5G signal transmitters and
receivers, while the third HackRF SDR works as a Wi-Fi RSSI receiver. The experimental re-
sults showed that the proposed cost-effective system can perform real-time drone detection
with zero false negative (FN) outcomes. An ensemble-based IoT-enabled drone detection
approach using transfer learning and background subtraction was proposed in [87]. The en-
semble model includes two DL models: YOLOv4 and ResNet-50. The proposed detection
scheme consists of a Raspberry Pi, cameras equipped with IoT, and ensemble models of
DL. A hybrid dataset of birds and drones was collected from publicly available datasets.
All the images were preprocessed and sent to the background subtraction module. Then,
foreground images were sent to the YOLOv4 and ResNet-50 models. In terms of detection
accuracy, the suggested approach outperformed competing existing systems. Wireless
communication is based on radio frequency waves, occupies a large transmission spectrum,
and has similar hardware components as radar sensing, which leads to the use of 5G [88,89]
and beyond-5G (B5G) networks [90] (such as 6G, which is still in the research phase) for
detecting, identifying, and managing UAVs. Further, cellular networks have high-speed
connectivity and low latency that are critical for the real-time control and monitoring of
UAVs. In [88], single- and multi-rotor UAVs were identified by transmitting 5G millimeter
waves and employing a joint algorithm such as short-time Fourier transform (STFT) and a
Bessel function basis. Drone identification utilizing a passive radar and a fully operating
5G new radio (NR) network as an illumination source was presented in [89]. An intelligent
charging–offloading approach for an air–ground integrated network in B5G wireless com-
munication was suggested in [90]. The proposed approach used the UAV as a mobile edge
computing (MEC) server and mobile power supply in order to extend the sensor network’s
lifetime and improve system performance.
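In the spirit of the k-NN localization used in [86], the sketch below fits a k-nearest-neighbors model on fingerprint vectors that are assumed to combine a 5G NLOS echo feature with a Wi-Fi RSSI reading, each labeled with a coarse location; all values are illustrative placeholders rather than measured data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each row: assumed fingerprint [5G NLOS echo feature, Wi-Fi RSSI (dBm)]
X_train = np.array([[0.82, -40], [0.31, -55], [0.12, -70], [0.05, -80]])
y_train = np.array(["zone_A", "zone_B", "zone_C", "outside"])

knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(knn.predict([[0.78, -42]]))    # predict drone zone from a live reading
```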

3. Discussion and Conclusions


A survey of current drone detection and classification techniques indicates a fast-
expanding area characterized by considerable technological advances and novel approaches.
The fast development in the usage of UAVs has prompted serious issues about privacy,
security, and safety. As a result, developing effective UAV detection algorithms has become
critical. The central goal of this review article was to offer a complete overview of present
UAV detection and classification approaches, methodologies, and frameworks. Therefore,
the paper offered an overview of several UAV detection approaches, such as radar-based,
acoustic-based, RF-based, and visual-based approaches. However, the discussion also
emphasized the inherent challenges that persist. The size and speed diversity of drones,
their dynamic behavior and similarity to other flying objects, and their limited battery life
make the detection task more challenging. Additionally, different interference factors in
real-world scenarios, including adverse weather and lighting conditions, urban locations
with plenty of obstructions such as buildings and trees, ambient and background noise,
wind, the presence of wireless communication signals from Wi-Fi and Bluetooth sources,
bird echoes, etc., present unique challenges for each detection modality.
From Table 2, it is evident that no single detection method is a panacea; each of the
drone detection modalities has its own set of limitations, and a solid anti-drone system
may be supplemented by integrating several modalities. Integrating the capabilities of
several sensory modalities can increase the robustness and accuracy of a detection system.
According to the reviewed research works, early and late fusion can be applied to integrate
individual sensor modalities. These approaches define when and how data from various
sensors are merged during the detection and identification of drones. In early fusion, raw
data or relevant features from various sensors are combined before the model, which is
trained by normalizing data ranges or concatenating the feature vectors. The primary
benefit of early fusion is that it allows the detection system to fully utilize the combined
data, perhaps resulting in more accurate detection and classification results. Early fusion
necessitates that all sensor data be consistent in terms of size, resolution, and temporal
alignment. Therefore, it might be more computationally intensive, as the fused dataset
may be huge and complicated. In late fusion, the decisions or confidence scores indi-
cating a drone’s presence from individual sensors are combined at the detection phase.
The independent decisions are then aggregated using different aggregation approaches.
These days, most drone manufacturers produce Wi-Fi- and Bluetooth-enabled UAVs
with gadget controllers that operate on smartphones and tablets. These tiny UAVs have
a larger potential for imposing hazards in several sectors of society than drones that just
use RC controllers with no Wi-Fi- or Bluetooth-linked mobile applications. Therefore,
drone detection and identification using RF signals based on Wi-Fi fingerprinting is also
an important field. Wi-Fi fingerprinting approaches extract the MAC and network layer
features in the feature extraction stage. Regarding this approach, different multistage drone
detection methods using Wi-Fi and RF fingerprinting have been proposed by researchers.
With the arrival of 5G networks and the Internet of Things (IoT), drone detection
systems have been expanded. Wireless communication is based on radio frequency waves,
occupies large transmission spectra, and has hardware components similar to those used
for radar sensing. Therefore, these properties of cellular networks lead to their use as bistatic
radar systems for effective drone detection.
This review article seeks to offer a complete overview of the state-of-the-art approaches,
methodologies, and challenges in the field of UAV detection and classification, paving the
way for developments that will answer the rising concerns about UAV operations. We
hope that this review article will be a helpful resource for academics, engineers, and policy-
makers working in UAV detection and classification since it consolidates the information
and insights gathered from a variety of research activities. It might also shed light on
future research paths, highlighting prospective avenues to improve the efficacy, efficiency,
and dependability of UAV detection systems.

Author Contributions: Conceptualization, U.S.; methodology, U.S.; investigation, U.S.; resources,
U.S., K.T. and E.T.M.; writing—original draft preparation, U.S.; writing—review and editing, U.S.,
N.S. and E.T.M.; visualization, U.S., N.S. and K.T.; supervision, E.T.M. and L.I. All authors have read
and agreed to the published version of the manuscript.
Funding: This research was funded by the Science Committee of the Ministry of Science and Higher
Education of the Republic of Kazakhstan (grant No. AP14971031).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflicts of interest.

Abbreviations
The following abbreviations are used in this manuscript:

AdaBoost Adaptive boosting


ADr Amateur drone
AI Artificial intelligence
AP Average precision
BiLSTM Bidirectional long short-term memory
BRF Balanced random forest
CW Continuous wave
CLSTM Convolutional long-short term memory
CNN Convolutional neural network
COCO Common objects in context
CRNN Convolutional recurrent neural network
CSV Comma-separated value
DETR Detection transformer
DIY Do-it-yourself
DFD Doppler frequency difference
DFT Discrete Fourier transform
DMR Doppler magnitude ratio
DNN Deep neural network
FMCW Frequency-modulated continuous wave
FDR False discovery rate
FNR False negative rate
GAN Generative adversarial network
GPS Global Positioning System
GTCC Gammatone cepstral coefficients
GRU Gated recurrent unit
HERM Helicopter rotor modulation
ISM Industrial, Scientific, and Medical
IoU Intersection over union
IEEE Institute of Electrical and Electronics Engineers
KNN k-nearest neighbors


LoS Line-of-sight
LSTM Long short-term memory
LSTM-ALRO Long short-term memory-adaptive learning rate optimizing
LPC Linear prediction coefficients
LPCC Linear predictive cepstral coefficients
MAC Medium access control
mAP Mean average precision
mmWave Millimeter-wave
MFCC Mel-frequency cepstral coefficients
MLP Multilayer perceptron
PCA Principal component analysis
PTZ Pan, tilt, zoom
PSD Power spectral density
RADAR Radio detection and ranging
RBFN Radial basis function network
ResNet Residual networks
RF Radio frequency
RCS Radar cross section
R-CNN Region-based convolutional neural network
RMS Root mean square
RNN Recurrent neural network
ROC Receiver operating characteristic
STFT Short-time Fourier transform
SFCW Stepped frequency continuous wave
SMOTE Synthetic minority oversampling technique
SRO Spectral roll-off
SSD Single shot detector
SVM Support vector machine
TFMS-CNN Time–frequency multiscale convolutional neural networks
TE Transformer encoder
UAV Unmanned aerial vehicle
VGG Visual geometry group
VTOL Vertical take-off and landing
XGBoost eXtreme gradient boosting
YOLO You only look once
ZCR Zero-crossing rate
ZCC Zero-centered compression

References
1. Samaras, S.; Diamantidou, E.; Ataloglou, D.; Sakellariou, N.; Vafeiadis, A.; Magoulianitis, V.; Lalas, A.; Dimou, A.; Zarpalas, D.;
Votis, K.; et al. Deep Learning on Multi-Sensor Data for Counter UAV Applications—A Systematic Review. Sensors 2019, 19, 4837.
[CrossRef] [PubMed]
2. Fu, R.; Al-Absi, M.A.; Kim, K.-H.; Lee, Y.-S.; Al-Absi, A.A.; Lee, H.-J. Deep Learning-Based Drone Classification Using Radar
Cross Section Signatures at mmWave Frequencies. IEEE Access 2021, 9, 161431–161444. [CrossRef]
3. Counter Drone Tactics: Which Drones Are a Real Threat, and Which Aren't? Available online: https://www.ifsecglobal.com/drones/counter-drone-tactics-which-drones-are-a-real-threat-and-which-arent/ (accessed on 19 February 2021).
4. Park, S.; Kim, H.T.; Lee, S.; Joo, H.; Kim, H. Survey on Anti-Drone Systems: Components, Designs, and Challenges. IEEE Access
2021, 9, 42635–42659. [CrossRef]
5. Worldwide Drone Incidents. Available online: https://dedrone.com/resources/incidents-new/all?bd17d27c_page=1 (accessed on 25 October 2023).
6. Drone Incident Review: First Half of 2023. Available online: https://d-fendsolutions.com/blog/drone-incident-review-first-half-2023/ (accessed on 8 August 2023).
7. Drone Pilot Arrested for Dropping Dye into Nearby Pools. Available online: https://dronedj.com/2023/09/06/newjersey-drone-dye-pool-arrest/ (accessed on 6 September 2023).
8. California Man Arrested for Dropping Illegal Fireworks from Drone. Available online: https://dronedj.com/2022/06/07/drone-fireworks-arrest-california/ (accessed on 7 June 2022).
9. Drone Used to Fly Items into Stockton's Holme House Prison. Available online: https://www.bbc.com/news/uk-england-tees-66166266 (accessed on 11 July 2023).
10. Over a Kilogram of Cannabis among Items Seized at Collins Bay Institution. Available online: https://www.kingstonist.com/news/over-a-kilogram-of-cannabis-among-items-seized-at-collins-bay-institution/ (accessed on 23 June 2023).
11. Canterbury Man Shoots Down Drone over His Property. Available online: https://www.odt.co.nz/star-news/star-christchurch/canterbury-man-shoots-down-drone-over-his-property (accessed on 22 June 2023).
12. Small Drone from Lebanon Downed over Northern Border Town. Available online: https://www.timesofisrael.com/small-drone-from-lebanon-downed-over-northern-border-town/ (accessed on 25 May 2023).
13. Boeing 737 Travelling at 200 mph and Carrying up to 189 Passengers Missed Drone by Just 6 ft in One of the Closest Ever Near-Misses in UK Airspace, Report Reveals. Available online: https://www.dailymail.co.uk/news/article-10268343/Boeing-737-travelling-200mph-carrying-189-passengers-missed-drone-just-6ft.html (accessed on 2 December 2021).
14. Gatwick Airport Forced to Shut Runway for Almost an Hour over 'Suspected Drone'. Available online: https://news.sky.com/story/gatwick-airport-force-to-shut-runway-for-almost-an-hour-over-suspected-drone-12880784 (accessed on 14 May 2023).
15. Drone Crash Shuts Down Mali's Gao Airport. Available online: https://bnn.network/breaking-news/drone-crash-shuts-down-malis-gao-airport/ (accessed on 23 May 2023).
16. UAV in Near Miss with Passenger Plane. Available online: https://www.thefirstnews.com/article/UAV-in-near-miss-with-passenger-plane-38464 (accessed on 16 May 2023).
17. Khan, M.A.; Menouar, H.; Eldeeb, A.; Abu-Dayya, A.; Salim, F.D. On the Detection of Unauthorized Drones—Techniques and
Future Perspectives: A Review. IEEE Sens. J. 2022, 22, 11439–11455. [CrossRef]
18. Mohsan, S.A.H.; Othman, N.Q.H.; Li, Y.; Alsharif, M.H.; Khan, M.A. Unmanned aerial vehicles (UAVs): Practical aspects,
applications, open challenges, security issues, and future trends. Intell. Serv. Robot. 2023, 16, 109–137. [CrossRef]
19. Chiper, F.-L.; Martian, A.; Vladeanu, C.; Marghescu, I.; Craciunescu, R.; Fratu, O. Drone Detection and Defense Systems: Survey
and a Software-Defined Radio-Based Solution. Sensors 2022, 22, 1453. [CrossRef]
20. Seidaliyeva, U.; Akhmetov, D.; Ilipbayeva, L.; Matson, E.T. Real-Time and Accurate Drone Detection in a Video with a Static
Background. Sensors 2020, 20, 3856. [CrossRef]
21. Al-Adwan, R.S.; Al-Habahbeh, O.M. Unmanned Aerial Vehicles Sensor-Based Detection Systems Using Machine Learning
Algorithms. Int. J. Mech. Eng. Robot. Res. 2022, 11, 663–668. [CrossRef]
22. Abeywickrama, H.V.; Jayawickrama, B.A.; He, Y.; Dutkiewicz, E. Comprehensive Energy Consumption Model for Unmanned
Aerial Vehicles, Based on Empirical Studies of Battery Performance. IEEE Access 2018, 6, 58383–58394. [CrossRef]
23. Mohsan, S.A.H.; Othman, N.Q.H.; Khan, M.A.; Amjad, H.; Żywiołek, J.A. Comprehensive Review of Micro UAV Charging
Techniques. Micromachines 2022, 13, 977. [CrossRef]
24. Taha, B.; Shoufan, A. Machine Learning-Based Drone Detection and Classification: State-of-the-Art in Research. IEEE Access 2019,
7, 138669–138682. [CrossRef]
25. Batool, S.; Frezza, F.; Mangini, F.; Simeoni, P. Introduction to Radar Scattering Application in Remote Sensing and Diagnostics:
Review. Atmosphere 2020, 11, 517. [CrossRef]
26. Liu, J.; Xu, Q.; Chen, W. Classification of Bird and Drone Targets Based on Motion Characteristics and Random Forest Model
Using Surveillance Radar Data. IEEE Access 2021, 9, 160135–160144. [CrossRef]
27. Wang, C.; Tian, J.; Cao, J.; Wang, X. Deep Learning-Based UAV Detection in Pulse-Doppler Radar. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–12. [CrossRef]
28. Li, S.; Chai, Y.; Guo, M.; Liu, Y. Research on Detection Method of UAV Based on micro-Doppler Effect. In Proceedings of the 39th
Chinese Control Conference (CCC), Shenyang, China, 27–29 July 2020; pp. 3118–3122.
29. Coluccia, A.; Parisi, G.; Fascista, A. Detection and Classification of Multirotor Drones in Radar Sensor Networks: A Review.
Sensors 2020, 20, 4172. [CrossRef] [PubMed]
30. Yousaf, J.; Zia, H.; Alhalabi, M.; Yaghi, M.; Basmaji, T.; Shehhi, E.A.; Gad, A.; Alkhedher, M.; Ghazal, M. Drone and Controller
Detection and Localization: Trends and Challenges. Appl. Sci. 2022, 12, 12612. [CrossRef]
31. Raval, D.; Hunter, E.; Hudson, S.; Damini, A.; Balaji, B. Convolutional Neural Networks for Classification of Drones Using Radars.
Drones 2021, 5, 149. [CrossRef]
32. Yan, J.; Hu, H.; Gong, J.; Kong, D.; Li, D. Exploring Radar Micro-Doppler Signatures for Recognition of Drone Types. Drones 2023,
7, 280. [CrossRef]
33. Rahman, S.; Robertson, D.A. Radar micro-Doppler signatures of drones and birds at K-band and W-band. Sci. Rep. 2018, 8, 17396.
[CrossRef]
34. Samuell, G.; Maurer, P.; Hassan, A.; Frangenberg, M. A Deep Learning Approach for Multi-copter Detection using mm-Wave Radar
Sensors: Application of Deep Learning for Multi-copter detection using radar micro-Doppler signatures. In Proceedings of the
ICRAI 2021: 2021 7th International Conference on Robotics and Artificial Intelligence, Guangzhou, China, 19–22 November 2021.
35. Narayanan, R.M.; Tsang, B.; Bharadwaj, R. Classification and Discrimination of Birds and Small Drones Using Radar Micro-
Doppler Spectrogram Images. Signals 2023, 4, 337–358. [CrossRef]
36. Leonardi, M.; Ligresti, G.; Piracci, E. Drones Classification by the Use of a Multifunctional Radar and Micro-Doppler Analysis.
Drones 2022, 6, 124. [CrossRef]
37. Nemer, I.; Sheltami, T.; Ahmad, I.; Yasar, A.U.-H.; Abdeen, M.A.R. RF-Based UAV Detection and Identification Using Hierarchical
Learning Approach. Sensors 2021, 21, 1947. [CrossRef] [PubMed]
38. Zhang, Y. RF-based drone detection using machine learning. In Proceedings of the 2021 2nd International Conference on
Computing and Data Science (CDS), Stanford, CA, USA, 28–29 January 2021; pp. 425–428.
39. Allahham, M.S.; Al-Sa’d, M.F.; Al-Ali, A.; Mohamed, A.; Khattab, T.; Erbad, A. DroneRF dataset: A dataset of drones for RF-based
detection, classification and identification. Data Brief 2019, 26, 104313. [CrossRef] [PubMed]
40. Medaiyese, O.O.; Syed, A.; Lauf, A.P. Machine Learning Framework for RF-Based Drone Detection and Identification System.
In Proceedings of the 2021 2nd International Conference on Smart Cities, Automation and Intelligent Computing Systems
(ICON-SONICS), Tangerang, Indonesia, 12–13 October 2021; pp. 58–64.
41. Al-Sa’D, M.; Al-Ali, A.; Mohamed, A.; Khattab, T.; Erbad, A. RF-based drone detection and identification using deep learning
approaches: An initiative towards a large open-source drone database. Future Gener. Comput. Syst. 2019, 100, 86–97. [CrossRef]
42. Al-Emadi, S.; Al-Senaid, F. Drone detection approach based on radio frequency using convolutional neural network. In
Proceedings of the 2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT), Doha, Qatar, 2–5
February 2020; pp. 29–34.
43. Allahham, M.S.; Khattab, T.; Mohamed, A. Deep Learning for RF-Based Drone Detection and Identification: A Multi-Channel 1-D
Convolutional Neural Networks Approach. In Proceedings of the 2020 IEEE International Conference on Informatics, IoT, and
Enabling Technologies (ICIoT), Doha, Qatar, 2–5 February 2020; pp. 112–117.
44. He, Z.; Huang, J.; Qian, G. UAV Detection and Identification Based on Radio Frequency Using Transfer Learning. In Proceedings
of the 2022 IEEE 8th International Conference on Computer and Communications (ICCC), Chengdu, China, 9–12 December 2022;
pp. 1812–1817.
45. Mo, Y.; Huang, J.; Qian, G. Deep Learning Approach to UAV Detection and Classification by Using Compressively Sensed RF
Signal. Sensors 2022, 22, 3072. [CrossRef] [PubMed]
46. Inani, K.N.; Sangwan, K.S. Machine Learning based framework for Drone Detection and Identification using RF signals. In
Proceedings of the 2023 4th International Conference on Innovative Trends in Information Technology (ICITIIT), Kottayam, India,
11–12 February 2023; pp. 1–8.
47. Mandal, S.; Satija, U. Time–Frequency Multiscale Convolutional Neural Network for RF-Based Drone Detection and Identification.
IEEE Sens. Lett. 2023, 7, 1–4. [CrossRef]
48. Fagiani, F.R.E. UAV Detection and Localization System Using an Interconnected Array of Acoustic Sensors and Machine Learning
Algorithms. Ph.D. Dissertation, Purdue University, West Lafayette, IN, USA, 2021.
49. Ahmed, C.A.; Batool, F.; Haider, W.; Asad, M.; Hamdani, S.H.R. Acoustic Based Drone Detection Via Machine Learning. In
Proceedings of the 2022 International Conference on IT and Industrial Technologies (ICIT), Chiniot, Pakistan, 3–4 October 2022;
pp. 1–6.
50. Tejera-Berengue, D.; Zhu-Zhou, F.; Utrilla-Manso, M.; Gil-Pita, R.; Rosa-Zurera, M. Acoustic-Based Detection of UAVs Using
Machine Learning: Analysis of Distance and Environmental Effects. In Proceedings of the 2023 IEEE Sensors Applications
Symposium (SAS), Ottawa, ON, Canada, 18–20 July 2023; pp. 1–6.
51. Salman, S.; Mir, J.; Farooq, M.T.; Malik, A.N.; Haleemdeen, R. Machine Learning Inspired Efficient Audio Drone Detection using
Acoustic Features. In Proceedings of the 2021 International Bhurban Conference on Applied Sciences and Technologies (IBCAST),
Islamabad, Pakistan, 12–16 January 2021; pp. 335–339.
52. Anwar, M.Z.; Kaleem, Z.; Jamalipour, A. Machine Learning Inspired Sound-Based Amateur Drone Detection for Public Safety
Applications. IEEE Trans. Veh. Technol. 2019, 68, 2526–2534. [CrossRef]
53. Ohlenbusch, M.; Ahrens, A.; Rollwage, C.; Bitzer, J. Robust Drone Detection for Acoustic Monitoring Applications. In Proceedings
of the 2020 28th European Signal Processing Conference (EUSIPCO), Amsterdam, The Netherlands, 18–21 January 2021; pp. 6–10.
54. Solis, E.R.; Shashev, D.V.; Shidlovskiy, S.V. Implementation of Audio Recognition System for Unmanned Aerial Vehicles. In
Proceedings of the 2021 International Siberian Conference on Control and Communications (SIBCON), Kazan, Russia, 13–15 May
2021; pp. 1–8.
55. Al-Emadi, S.; Al-Ali, A.; Mohammad, A.; Al-Ali, A. Audio Based Drone Detection and Identification using Deep Learning. In
Proceedings of the 2019 15th International Wireless Communications and Mobile Computing Conference (IWCMC), Tangier,
Morocco, 24–28 June 2019; pp. 459–464.
56. Kim, B.; Jang, B.; Lee, D.; Im, S. CNN-based UAV Detection with Short Time Fourier Transformed Acoustic Features. In
Proceedings of the 2020 International Conference on Electronics, Information, and Communication (ICEIC), Barcelona, Spain,
19–22 January 2020; pp. 1–3.
57. Wang, Y.; Chu, Z.; Ku, I.; Smith, E.C.; Matson, E.T. A Large-Scale UAV Audio Dataset and Audio-Based UAV Classification Using
CNN. In Proceeding of the 2022 Sixth IEEE International Conference on Robotic Computing (IRC), Naples, Italy, 5–7 December
2022; pp. 186–189.
58. Katta, S.S.; Nandyala, S.; Viegas, E.K.; AlMahmoud, A. Benchmarking Audio-based Deep Learning Models for Detection and
Identification of Unmanned Aerial Vehicles. In Proceedings of the 2022 Workshop on Benchmarking Cyber-Physical Systems and
Internet of Things (CPS-IoTBench), Milan, Italy, 3–6 May 2022; pp. 7–11.
59. Al-Emadi, S.; Al-Ali, A.; Al-Ali, A. Audio-Based Drone Detection and Identification Using Deep Learning Techniques with
Dataset Enhancement through Generative Adversarial Networks. Sensors 2021, 21, 4953. [CrossRef]
60. Utebayeva, D.; Ilipbayeva, L.; Matson, E.T. Practical Study of Recurrent Neural Networks for Efficient Real-Time Drone Sound
Detection: A Review. Drones 2023, 7, 26. [CrossRef]
61. Shang, Y.; Liu, C.; Qiu, D.; Zhao, Z.; Wu, R.; Tang, S. AD-YOLOv5s-based UAV detection for low-altitude security. Int. J. Micro Air
Veh. 2023, 15, 17568293231190017. [CrossRef]
62. Kabir, M.S.; Ndukwe, I.K.; Awan, E.Z.S. Deep Learning Inspired Vision based Frameworks for Drone Detection. In Proceedings of
the 2021 International Conference on Electrical, Communication, and Computer Engineering (ICECCE), Kuala Lumpur, Malaysia,
12–13 June 2021; pp. 1–5.
63. Delleji, T.; Chtourou, Z. An Improved YOLOv5 for Real-time Mini-UAV Detection in No-Fly Zones. In Proceedings of the 2nd
International Conference on Image Processing and Vision Engineering, Online, 22–24 April 2022; pp. 174–181.
64. Selvi, S.S.; Pavithraa, S.; Dharini, R.; Chaitra, E. Deep Learning Approach to Classify Drones and Birds. In Proceedings of the
2022 IEEE 2nd Mysore Sub Section International Conference (MysuruCon), Mysuru, India, 16–17 October 2022; pp. 1–5.
65. Al-Qubaydhi, N.; Alenezi, A.; Alanazi, T.; Senyor, A.; Alanezi, N.; Alotaibi, B.; Alotaibi, M.; Razaque, A.; Abdelhamid, A.A.;
Alotaibi, A. Detection of Unauthorized Unmanned Aerial Vehicles Using YOLOv5 and Transfer Learning. Electronics 2022,
11, 2669. [CrossRef]
66. Pansare, A.; Sabu, N.; Kushwaha, H.; Srivastava, V.; Thakur, N.; Jamgaonkar, K.; Faiz, M.Z. Drone Detection using YOLO and
SSD A Comparative Study. In Proceedings of the 2022 International Conference on Signal and Information Processing (IConSIP),
Pune, India, 26–27 August 2022; pp. 1–6.
67. Singha, S.; Aydin, B. Automated Drone Detection Using YOLOv4. Drones 2021, 5, 95. [CrossRef]
68. Aydin, B.; Singha, S. Drone Detection Using YOLOv5. Eng 2023, 4, 416–433. [CrossRef]
69. Zhao, J.; Zhang, J.; Li, D.; Wang, D. Vision-Based Anti-UAV Detection and Tracking. IEEE Trans. Intell. Transp. Syst. 2022, 23,
25323–25334. [CrossRef]
70. Mubarak, A.S.; Vubangsi, M.; Al-Turjman, F.; Ameen, Z.S.; Mahfudh, A.S.; Alturjman, S. Computer Vision-Based Drone Detection
Using Mask R-CNN. In Proceedings of the 2022 International Conference on Artificial Intelligence in Everything (AIE), Lefkosa,
Cyprus, 2–4 August 2022; pp. 540–543.
71. Ozel, M. Available online: https://www.kaggle.com/dasmehdixtr/drone-dataset-uav (accessed on 25 December 2021).
72. Gong, J.; Yan, J.; Li, D.; Kong, D.; Hu, H. Interference of Radar Detection of Drones by Birds. Prog. Electromagn. Res. M 2019, 81,
1–11. [CrossRef]
73. Ezuma, M.; Erden, F.; Anjinappa, K.C.; Ozdemir, O.; Guvenc, I. Detection and Classification of UAVs Using RF Fingerprints in the
Presence of Wi-Fi and Bluetooth Interference. IEEE Open J. Commun. Soc. 2020, 1, 60–76. [CrossRef]
74. Sharma, V.; Kumari, M. Drone Detection Mechanism using Radiocommunication Technology and Internet Protocol Address. In
Proceedings of the 2019 International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 27–29
November 2019; pp. 449–453.
75. Swinney, C.J.; Woods, J.C. The Effect of Real-World Interference on CNN Feature Extraction and Machine Learning Classification
of Unmanned Aerial Systems. Aerospace 2021, 8, 179. [CrossRef]
76. Zahoor, U.; Altaf, M.; Bilal, M.; Nkenyereye, L.; Bashir, A.K. Amateur Drones Detection: A machine learning approach utilizing
the acoustic signals in the presence of strong interference. arXiv 2020, arXiv:2003.01519.
77. Jamil, S.; Fawad; Rahman, M.; Ullah, A.; Badnava, S.; Forsat, M.; Mirjavadi, S.S. Malicious UAV Detection Using Integrated Audio
and Visual Features for Public Safety Applications. Sensors 2020, 20, 3923. [CrossRef]
78. Kim, J.; Lee, D.; Kim, Y.; Shin, H.; Heo, Y.; Wang, Y.; Matson, E.T. Deep Learning Based Malicious Drone Detection Using Acoustic
and Image Data. In Proceedings of the 2022 Sixth IEEE International Conference on Robotic Computing (IRC), Naples, Italy , 5–7
December 2022; pp. 91–92.
79. Liu, H.; Wei, Z.; Chen, Y.; Pan, J.; Lin, L.; Ren, Y. Drone Detection Based on an Audio-Assisted Camera Array. In Proceedings
of the 2017 IEEE Third International Conference on Multimedia Big Data (BigMM), Laguna Hills, CA, USA, 19–21 April 2017;
pp. 402–406.
80. Abdelsamad, S.E.; Abdelteef, M.A.; Elsheikh, O.Y.; Ali, Y.A.; Elsonni, T.; Abdelhaq, M.; Alsaqour, R.; Saeed, R.A. Vision-Based
Support for the Detection and Recognition of Drones with Small Radar Cross Sections. Electronics 2023, 12, 2235. [CrossRef]
81. Mehta, V.; Dadboud, F.; Bolic, M.; Mantegh, I. A Deep Learning Approach for Drone Detection and Classification Using Radar
and Camera Sensor Fusion. In Proceedings of the 2023 IEEE Sensors Applications Symposium (SAS), Ottawa, ON, Canada, 18–20
July 2023; pp. 1–6.
82. Aledhari, M.; Razzak, R.; Parizi, R.M.; Srivastava, G. Sensor Fusion for Drone Detection. In Proceedings of the 2021 IEEE 93rd
Vehicular Technology Conference (VTC2021-Spring), Helsinki, Finland, 25–28 April 2021; pp. 1–7.
83. Vidyasagar, P.M.; Shoaib, S.; Shiva, P.M.; Kashyap, S.R.; Lethan, M.N. Detection and Surveillance of UAVs Based on RF and Radar
Technology. Int. Res. J. Eng. Technol. (IRJET) 2021, 08, 4463–4466.
84. Lee, H.; Han, S.; Byeon, J.I.; Han, S.; Myung, R.; Joung, J.; Choi, J. CNN-Based UAV Detection and Classification Using Sensor
Fusion. IEEE Access 2023, 11, 68791–68808. [CrossRef]
85. Svanström, F.; Englund, C.; Alonso-Fernandez, F. Real-Time Drone Detection and Tracking with Visible, Thermal and Acoustic
Sensors. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021;
pp. 7265–7272.
86. Teo, M.I.; Seow, C.K.; Wen, K. 5G Radar and Wi-Fi Based Machine Learning on Drone Detection and Localization. In Proceedings
of the 2021 IEEE 6th International Conference on Computer and Communication Systems (ICCCS), Chengdu, China, 23–26 April
2021; pp. 875–880.
87. Singh, J.; Sharma, K.; Wazid, M.; Das, A.K.; Vasilakos, A.V. An Ensemble-Based IoT-Enabled Drones Detection Scheme for a Safe
Community. IEEE Open J. Commun. Soc. 2023, 4, 1946–1956. [CrossRef]
88. Yang, T.; Zhao, J.; Hong, T.; Chen, W.; Fu, X. Automatic Identification Technology of Rotor UAVs Based on 5G Network
Architecture. In Proceedings of the IEEE International Conference on Networking, Architecture and Storage (NAS), Chongqing,
China, 11–14 October 2018; pp. 1–9.
89. Maksymiuk, R.; Płotka, M.; Abratkiewicz, K.; Samczyński, P. Network-Based Passive Radar for Drone Detection. In Proceedings
of the 24th International Radar Symposium (IRS), Berlin, Germany, 24–26 May 2023; pp. 1–10.
90. Wang, J.; Jin, C.; Tang, Q.; Xiong, N.N.; Srivastava, G. Intelligent Ubiquitous Network Accessibility for Wireless-Powered MEC in
UAV-Assisted B5G. IEEE Trans. Netw. Sci. Eng. 2021, 8, 2801–2813. [CrossRef]

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
