Development of a Vision-Based Anti-Drone Identification Friend or Foe Model to Recognize Birds and Drones Using Deep Learning
Introduction
Recently, drones have become widespread in several domains thanks to the progress made in Artificial Intelligence (AI), security, and transportation (Gupta, Kumari, and Tanwar 2021; Kumar et al. 2021; Kurt et al. 2021; Serrano-Hernandez, Ballano, and Faulin 2021; Spanaki et al. 2021). Thus, they are considered a revolution of the 21st century. With the rapid development of intelligent techniques, ranging from image and speech recognition to the processing of radio and acoustic signals, drone detection has been tackled with several sensing modalities. For instance, one paper (Casabianca and Zhang 2021) leverages the acoustic signal to develop a CNN in order to improve drone detection performance.
An anti-drone system should be able to recognize the main types of airborne targets and, especially, distinguish between the most prevalent ones: drones and birds. Both share the same airspace and altitudes, mainly the low-altitude airspace up to an upper limit of 32,000 ft. At these altitudes, recognizing flying targets turns out to be a real challenge, because their similarities increase the likelihood of false detections.
Distinguishing accurately between drones and birds is therefore a significant task for an anti-drone system. Bird flights can have far-reaching impacts within safeguarded areas and critical infrastructures such as airports and military bases. It is thus highly important to distinguish birds from drones effectively, to avoid failed interceptions and collateral damage where anti-drone measures are deployed. For instance, bird strikes with deployed drones and airplanes can result in severe damage, posing a safety concern and a potential accident: the collision of a bird with a drone can damage the frame and the parts of the drone, compromising flight safety. Mitigating potential negative impacts on birds requires an appropriate recognition model within the anti-drone system. Birds become false-positive detections if the recognition model labels a bird as a drone; the system is then falsely triggered, causing a failed neutralization and, given the complexity of the system, unnecessary alerts or interruptions. As anti-drone technologies aim to detect and mitigate threats from drones, they must distinguish them from harmless airborne objects such as birds.
Furthermore, identifying airborne targets is crucial, since drones and birds have similar movement behaviors and radar cross-sections, fly at very close altitudes and speeds, and produce radar signals with similar amplitudes and similar fluctuations of the time series and spectrum structure (Gong et al. 2019). Moreover, as the distance increases, the generated radio and acoustic signals can no longer be recognized and distinguished properly (Alaparthy, Mandal, and Cummings 2021; Fuhrmann et al. 2017; Patel, Fioranelli, and Anderson 2018; Torvik, Olsen, and Griffiths 2016). For this reason, a thorough study of the detection and identification model is of great importance for reinforcing aerial security and building effective anti-drone systems. Based on the research work done in [10,17–19], visual detection is the most advantageous in view of the quality and quantity of information delivered by Electro-Optical (EO) and Infrared (IR) sensors. Recent visual recognition tasks with image detection and tracking use Convolutional Neural Networks (CNNs) as a foundational component to process and extract visual features from input images and to provide a probability distribution over a set of categories (Isaac-Medina et al. 2021).
Related Work
To date, recent advances in AI and Machine Learning (ML) have significantly accelerated improvements to both drones and anti-drone systems. Anti-drone systems have thus become intelligent and autonomous in terms of decision-making. Moreover, their capabilities are enhanced by fusing AI approaches with deep learning algorithms and computer vision techniques, most notably image recognition.
The four phases of the DITI process aim to automate some or all of the operations to improve system performance using AI techniques. The anti-drone system thus becomes a cutting-edge device, adopting and fusing knowledge from AI, the Internet of Things, and robotics to respond effectively to the security threats posed by malicious drones (Choi 2022; Ding et al. 2018). Since an anti-drone system relies mainly on its detection model, it is important to select a suitable algorithm and methodology, optimal parameters, and appropriate data to achieve satisfactory results.
One cannot discuss AI and computer vision without discussing image recognition and classification algorithms and CNN architectures. The advances achieved have reinforced the effectiveness of visual detection in anti-drone systems, in terms of both accuracy and processing time. Drone recognition has shifted from traditional methods that use low-level handcrafted features and classical classifiers (Ganti and Kim 2016; Gökçe et al. 2015; Lai, Mejias, and Ford 2011; Rozantsev, Lepetit, and Fua 2016; Unlu, Zenou, and Rivière 2018; Wang et al. 2018; Wu, Sui, and Wang 2017) to more automated ones, represented above all by deep neural networks (Wang, Fan, and Wang 2021), considered a "black-box" solution for most problems (Osco et al. 2021; Seidaliyeva et al. 2020). In fact, the effectiveness of an anti-drone system depends mainly on the reliability and validity of the results delivered by its recognition model.
For this reason, several recent research studies have focused on developing novel drone detection strategies that leverage different AI approaches, with the purpose of reaching high-confidence results. In the following, we review the existing models in the literature as single and binary airborne-target recognition models, covering both detection and classification.
Proposed Methodology
In most previous studies, attention has centered on improving the detection of drones as unique targets, while the real challenge consists of distinguishing drones from non-drone objects. In this study, we develop a model able to recognize and distinguish drones and birds. We also propose to enhance the recognition system by developing a twofold dataset representing the largest combination of backgrounds and foregrounds, in different environments and weather conditions.
Among the challenges in visual-based drone detection and recognition are the small apparent size of distant targets, the visual similarity between drones and birds, and the variety of backgrounds, environments, and weather conditions.
Data Acquisition
In supervised learning, everything is about learning from the data: the model is data-driven (Taha and Shoufan 2019). The training process learns the input-output relationship, mapping the function that binds the output to the inputs from the acquired knowledge, moving from obvious to underlying patterns. Once trained, the model makes predictions by assigning unseen data to the category it belongs to, using the acquired knowledge.
Dataset preparation is thus a crucial step for our supervised image recognition task. In fact, we are looking for both high quality and large quantity of data to create a model with minimal bias and variance that represents a wide range of real cases. In this work, we have carefully collected images of different types of drones and birds, with the purpose of covering the largest possible combination of the most frequently encountered aerial targets in different environments. We have selected 20,000 instances, mainly from (Caltech-UCSD Birds-200-2011, n.d.; Pawełczyk and Wojtyra 2020). The dataset includes different types and categories of drones and birds captured at different altitudes, in different weather conditions, and at different locations. It is important to note that the selected dataset ensures that the trained model can adapt to a variety of situations and environments.
Figure 1. A selection of bird and drone images from the collected twofold dataset.
Figure 2. Transfer learning framework from pre-trained models to our target model, divided into (a) a training phase and (b) a testing phase.
System Setup
In this section, we detail the setup and configuration of the proposed model, adjusted according to the anti-drone system's requirements and needs.
Experimental Design
The proposed models are trained on a laptop with an NVIDIA GeForce RTX 3050 GPU and an Intel Core i7-11370H processor (3.3 GHz, 12 MB cache) with 16 GB of memory, and on a desktop with an NVIDIA Quadro P4000 GPU and an Intel Xeon W-2155 processor (3.30 GHz) with 32 GB of memory, both running Windows. Our experiments are executed using the TensorFlow deep learning framework.
During the training process, the parameters and hyperparameters are carefully selected and then tested to be in accordance with the binary classification task. Table 2 details the parameters and hyperparameters used. The parameters are determined from the general context, whereas the hyperparameters are adjustable values that are tuned several times before finding the optimal ones.
Moreover, layer regularization techniques are used to avoid the risk of overfitting and to speed up the training process, which results in lower variance. In this work, we have found that randomly dropping 20% of the neurons at each iteration is optimal for our problem.
Since we are developing a backbone model to be integrated within the anti-drone system, it is important to save the best-performing weights of the model for each epoch to .h5 files using the ModelCheckpoint technique. Further, EarlyStopping stops training when performance worsens, in particular when the model stops improving, so as to retain the optimal generalization performance. After several trials, the early-stopping patience was adapted to each set of epochs, stopping when the approximation and complexity errors approach each other and the variance part begins to dominate, as explained in detail in (Prechelt n.d.).
System Model
The overall research flow diagram with the significant steps followed is illustrated in Figure 3. We start by feeding the selected dataset to the model; the data are augmented and split into three sets: training, validation, and testing. The training process then starts using the specified parameters and techniques, with validation performed in parallel to assess the reliability of the tuned hyperparameters. Thereafter, we test the model on an unseen set of images to obtain its performance as numerical confidence scores and visual results.
Evaluation Protocol
In this paper, we are using predefined metrics to assess the proposed models.
Used Metrics
In order to evaluate the performance of the proposed classification models and methods, we adopt accuracy, precision, recall, F1 score, and the confusion matrix as performance metrics. These metrics allow a better assessment of models and methods for binary classification problems, since they rely on verified and missed detections: True Positives (TP) are correctly classified drones, True Negatives (TN) are correctly classified birds, False Positives (FP) are birds falsely classified as drones, and False Negatives (FN) are drones falsely classified as birds. Table 3 details the relation between true and predicted labels.
The used metrics are defined by the following equations. The accuracy indicates the proportion of correctly detected drones and birds:

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{1}$$

The precision and recall show the validity of the positively detected aerial targets:

$$\text{Precision} = \frac{TP}{TP + FP} \tag{2}$$

$$\text{Recall} = \frac{TP}{TP + FN} \tag{3}$$

The F1 score is the harmonic mean of precision and recall; it is typically used for binary classification, with positive and negative samples:

$$F_1 = 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}} \tag{4}$$

The confusion matrix presents the rates of true and false positives across the true and predicted labels of the two classes in matrix format.
From this table, it can be seen that the number of layers and parameters has a high impact on performance, since the largest models extract the greatest amount of informative and discriminative features.
Performance Behavior
The results are reported in Table 5, which confirms the reliability of the retained model. Further, we have investigated the models' behavior over 50 epochs, confirming that EfficientNetB6 has the best performance, as shown in Figure 5.
Ablation Experiment
As explained earlier, we have carefully selected appropriate Data Augmentation (DA) and Fine-Tuning (FT) regularization techniques for developing the aforementioned model. In order to assess and verify their respective contributions to the overall performance, we have conducted ablation experiments and compared the resulting variants with the proposed model. The ablation experiment is presented in Table 7.
We have tested our retained model without the DA and FT techniques, and then with each of them separately, to analyze their impact. Based on the presented results, it can be seen that integrating each technique significantly improves the results.
Table 8. Comparison between our proposed model and the existing ones.

Papers                         Accuracy    Precision   Recall    F1 score
(Samadzadegan et al. 2022)     83%         84%         84%       83%
(Pawełczyk and Wojtyra 2020)   70%         -           -         60.2%
(Coluccia et al. 2019)         -           -           73%       -
(Coluccia et al. 2021)         -           80%         -         -
(Coluccia et al. 2022)         -           79.6%       91.0%     -
Our model                      98.115%     98.184%     98.115%   98.115%
Prediction Visualization
Discussion
The developed model successfully responds to anti-drone needs and fulfills the related challenges and requirements to recognize efficiently and properly the most frequently encountered airborne targets, which are drones and birds. The conducted experiments have shown that using transfer learning and fine-tuning significantly enhances detection performance. Through these experiments, we have found that using the backbone model allows for feature extraction in the real-time detection module, letting it focus more on decision-making than on raw data processing. The proposed model serves as a backbone for real-time detection during anti-drone deployment, alongside a complementary real-time detection model that includes more targets such as airplanes, airframes, and buildings (Yasmine, Maha, and Hicham 2023).
Disclosure Statement
The authors have no relevant financial or non-financial interests to disclose.
Funding
The authors declare that no funds, grants, or other support were received during the preparation of this manuscript.
ORCID
Yasmine Ghazlane http://orcid.org/0000-0002-9665-1005
References
Akyon, F. C., S. O. Altinuc, and A. Temizel. 2022. Slicing aided hyper inference and fine-tuning
for small object detection. 2022 IEEE International Conference on Image Processing (ICIP),
966–70. doi:10.1109/ICIP46576.2022.9897990
Alaparthy, V., S. Mandal, and M. Cummings. 2021. Machine learning vs. human performance in the real-time acoustic detection of drones. 2021 IEEE Aerospace Conference (50100), 1–7. doi:10.1109/AERO50100.2021.9438533
Allahham, M. S., T. Khattab, and A. Mohamed. 2020. Deep learning for RF-Based drone
detection and identification: A multi-channel 1-D convolutional neural networks approach.
2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT),
112–17. doi:10.1109/ICIoT48696.2020.9089657
Al-Sa’d, M. F., A. Al-Ali, A. Mohamed, T. Khattab, and A. Erbad. 2019. RF-based drone
detection and identification using deep learning approaches: An initiative towards a large
open source drone database. Future Generation Computer Systems 100:86–97. doi:10.1016/j.
future.2019.05.007.
Ashraf, M. W., W. Sultani, and M. Shah. 2021. Dogfight: Detecting drones from drones videos.
arXiv 2103:17242 [Cs]. http://arxiv.org/abs/2103.17242.
Behera, D. K., and A. Bazil Raj. 2020. Drone detection and classification using deep learning.
2020 4th International Conference on Intelligent Computing and Control Systems (ICICCS),
1012–16. doi:10.1109/ICICCS48265.2020.9121150
Caltech-UCSD Birds-200-2011. n.d. Accessed March 7, 2022. http://www.vision.caltech.edu/
visipedia/CUB-200-2011.html
Casabianca, P., and Y. Zhang. 2021. Acoustic-based UAV detection using late fusion of deep neural networks. Drones 5 (3):54. doi:10.3390/drones5030054.
Çetin, E., C. Barrado, and E. Pastor. 2020. Counter a drone in a complex neighborhood area by deep reinforcement learning. Sensors 20 (8):2320. doi:10.3390/s20082320.
Çetin, E., C. Barrado, and E. Pastor. 2021. Improving real-time drone detection for
counter-drone systems. The Aeronautical Journal 125 (1292):1871–96. doi:10.1017/aer.
2021.43.
Choi, Y.-J. 2022. Security threat scenarios of drones and anti-drone technology. Academy of Strategic Management Journal 21 (1):7.
Coluccia, A., A. Fascista, A. Schumann, L. Sommer, A. Dimou, D. Zarpalas, M. Méndez, D. de
la Iglesia, I. González, J.-P. Mercier, et al. 2021. Drone vs. Bird detection: Deep learning
algorithms and results from a grand challenge. Sensors 21 (8):2824. doi:10.3390/s21082824.
Coluccia, A., A. Fascista, A. Schumann, L. Sommer, A. Dimou, D. Zarpalas, N. Sharma,
M. Nalamati, O. Eryuksel, K. A. Ozfuttu, et al. 2022. Drone-vs-bird detection challenge at
ICIAP 2021. In Image analysis and processing. ICIAP 2022 workshops, ed. P. L. Mazzeo,
E. Frontoni, S. Sclaroff, and C. Distante, vol. 13374, 410–21. Springer International
Publishing. doi:10.1007/978-3-031-13324-4_35.
Coluccia, A., A. Fascista, A. Schumann, L. Sommer, M. Ghenescu, T. Piatrik, G. De Cubber,
M. Nalamati, A. Kapoor, M. Saqib, et al. 2019. Drone-vs-Bird detection challenge at IEEE
AVSS2019. 2019 16th IEEE International Conference on Advanced Video and Signal Based
Surveillance (AVSS), 1–7. doi:10.1109/AVSS.2019.8909876
Ding, G., Q. Wu, L. Zhang, Y. Lin, T. A. Tsiftsis, and Y.-D. Yao. 2018. An amateur drone
surveillance system based on the cognitive internet of things. IEEE Communications
Magazine 56 (1):29–35. doi:10.1109/MCOM.2017.1700452.
Exploring the efficacy of transfer learning in mining image-based software artifacts. n.d.-b. Journal of Big Data. Accessed July 21, 2022. https://journalofbigdata.springeropen.com/articles/10.1186/s40537-020-00335-4
Fuhrmann, L., O. Biallawons, J. Klare, R. Panhuber, R. Klenke, and J. Ender. 2017. Micro-
doppler analysis and classification of UAVs at Ka band. 2017 18th International Radar
Symposium (IRS), 1–9. doi:10.23919/IRS.2017.8008142
Fujii, S., K. Akita, and N. Ukita. 2021. Distant bird detection for safe drone flight and its dataset. 2021 17th International Conference on Machine Vision and Applications (MVA), 1–5. doi:10.23919/MVA51890.2021.9511386
Ganti, S. R., and Y. Kim. 2016. Implementation of detection and tracking mechanism for small
UAS. 2016 International Conference on Unmanned Aircraft Systems (ICUAS), 1254–60.
doi:10.1109/ICUAS.2016.7502513
Garcia, A. J., J. Min Lee, and D. S. Kim. 2020. Anti-drone system: A visual-based drone
detection using neural networks. 2020 International Conference on Information and
Communication Technology Convergence (ICTC), 559–61. doi:10.1109/ICTC49870.2020.
9289397
Ge, R., M. Lee, V. Radhakrishnan, Y. Zhou, G. Li, and G. Loianno. 2022. Vision-based relative detection and tracking for teams of micro aerial vehicles. arXiv:2207.08301. http://arxiv.org/abs/2207.08301
Gökçe, F., G. Üçoluk, E. Şahin, and S. Kalkan. 2015. Vision-based detection and distance
estimation of micro unmanned aerial vehicles. Sensors (Basel, Switzerland) 15 (9):23805–46.
doi:10.3390/s150923805.
Gong, J., J. Yan, D. Li, D. Kong, and H. Hu. 2019. Interference of radar detection of drones by
birds. Progress in Electromagnetics Research M 81:1–11. doi:10.2528/PIERM19020505.
Gupta, R., A. Kumari, and S. Tanwar. 2021. Fusion of blockchain and artificial intelligence for
secure drone networking underlying 5G communications. Transactions on Emerging
Telecommunications Technologies 32 (1). doi:10.1002/ett.4176.
Hua, J., L. Zeng, G. Li, and Z. Ju. 2021. Learning for a robot: Deep reinforcement learning, imitation learning, transfer learning. Sensors 21 (4):1278. doi:10.3390/s21041278.
Imai, S., S. Kawai, and H. Nobuhara. 2020. Stepwise PathNet: A layer-by-layer
knowledge-selection-based transfer learning algorithm. Scientific Reports 10 (1):8132.
doi:10.1038/s41598-020-64165-3.
Isaac-Medina, B. K. S., M. Poyser, D. Organisciak, C. G. Willcocks, T. P. Breckon, and
H. P. H. Shum. 2021. Unmanned aerial vehicle visual detection and tracking using deep
neural networks: A performance benchmark. arXiv 2103.13933 [Cs]. http://arxiv.org/abs/
2103.13933
Kangunde, V., R. S. Jamisola, and E. K. Theophilus. 2021. A review on drones controlled in
real-time. International Journal of Dynamics and Control 9 (4):1832–46. doi:10.1007/s40435-
020-00737-5.
Kannojia, S. P., and G. Jaiswal. 2018. Effects of varying resolution on performance of CNN
based image classification an experimental study. International Journal of Computer Sciences
& Engineering 6 (9):451–56. doi:10.26438/ijcse/v6i9.451456.
Kolamunna, H., T. Dahanayaka, J. Li, S. Seneviratne, K. Thilakaratne, A. Y. Zomaya, and
A. Seneviratne. 2021. DronePrint: Acoustic signatures for open-set drone detection and
identification with online data. Proceedings of the ACM on Interactive, Mobile, Wearable and
Ubiquitous Technologies 5 (1):1–31. doi:10.1145/3448115.
Krizhevsky, A., I. Sutskever, and G. E. Hinton. 2017. ImageNet classification with deep
convolutional neural networks. Communications of the ACM 60 (6):84–90. doi:10.1145/
3065386.
Kumar, A., S. Bhatia, K. Kaushik, S. M. Gandhi, S. G. Devi, J. De, D. A. Pacheco, and A. Mashat.
2021. Survey of promising technologies for quantum drones and networks. Institute of
Electrical and Electronics Engineers Access 9:125868–911. doi:10.1109/ACCESS.2021.
3109816.
Kurt, A., N. Saputro, K. Akkaya, and A. S. Uluagac. 2021. Distributed connectivity main
tenance in swarm of drones during post-disaster transportation applications. IEEE
Transactions on Intelligent Transportation Systems 22 (9):6061–73. doi:10.1109/TITS.
2021.3066843.
Lai, J., L. Mejias, and J. J. Ford. 2011. Airborne vision-based collision-detection system. Journal
of Field Robotics 28 (2):137–57. doi:10.1002/rob.20359.
Lee, Z. W., W. H. Chin, and H. W. Ho. 2023. Air-to-air micro air vehicle interceptor with an
embedded mechanism and deep learning. Aerospace Science and Technology 135:108192.
doi:10.1016/j.ast.2023.108192.
Ling, S., F. Zhu, and X. Li. 2015. Transfer learning for visual categorization: A survey. IEEE
Transactions on Neural Networks and Learning Systems 26 (5):1019–34. doi:10.1109/TNNLS.
2014.2330900.
Lin Tan, L. K., B. C. Lim, G. Park, K. H. Low, and V. C. Seng Yeo. 2021. Public acceptance of
drone applications in a highly urbanized environment. Technology in Society 64:101462.
doi:10.1016/j.techsoc.2020.101462.
Mahdavi, F., and R. Rajabi. 2020. Drone detection using convolutional neural networks. 2020
6th Iranian Conference on Signal Processing and Intelligent Systems (ICSPIS), 1–5. doi:10.
1109/ICSPIS51611.2020.9349620
Oh, H. M., H. Lee, and M. Y. Kim. 2019. Comparing Convolutional Neural Network (CNN) models for machine learning-based drone and bird classification of anti-drone system. 2019 19th International Conference on Control, Automation and Systems (ICCAS), 87–90.
Osco, L. P., J. Marcato Junior, A. P. Marques Ramos, L. A. de Castro Jorge, S. N. Fatholahi, J. de
Andrade Silva, E. T. Matsubara, H. Pistori, W. N. Gonçalves, and J. Li. 2021. A review on
deep learning in UAV remote sensing. International Journal of Applied Earth Observation
and Geoinformation 102:102456. doi:10.1016/j.jag.2021.102456.
Park, S., H. T. Kim, S. Lee, H. Joo, and H. Kim. 2021. Survey on anti-drone systems:
Components, designs, and challenges. Institute of Electrical and Electronics Engineers
Access 9:42635–59. doi:10.1109/ACCESS.2021.3065926.
Patel, J. S., F. Fioranelli, and D. Anderson. 2018. Review of radar classification and RCS
characterisation techniques for small UAVs or drones. IET Radar, Sonar & Navigation
12 (9):911–19. doi:10.1049/iet-rsn.2018.0020.
Pawełczyk, M. Ł., and M. Wojtyra. 2020. Real world object detection dataset for quadcopter
unmanned aerial vehicle detection. Institute of Electrical and Electronics Engineers Access
8:174394–409. doi:10.1109/ACCESS.2020.3026192.
Prechelt, L. n.d. Early stopping – but when?
Rozantsev, A., V. Lepetit, and P. Fua. 2016. Detecting flying objects using a single moving camera. IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (5):879–92.
Samadzadegan, F., F. Dadrass Javan, F. Ashtari Mahini, and M. Gholamshahi. 2022. Detection
and recognition of drones based on a deep convolutional neural network using visible
imagery. Aerospace 9 (1):31. doi:10.3390/aerospace9010031.
Schindler, A., T. Lidy, and A. Rauber. n.d. Comparing shallow versus deep neural network architectures for automatic music genre classification.
Seidaliyeva, U., D. Akhmetov, L. Ilipbayeva, and E. T. Matson. 2020. Real-time and accurate
drone detection in a video with a static background. Sensors 20 (14):3856. doi:10.3390/
s20143856.
Serrano-Hernandez, A., A. Ballano, and J. Faulin. 2021. Selecting freight transportation modes
in last-mile urban distribution in Pamplona (Spain): An option for drone delivery in smart
cities. Energies 14 (16):4748. doi:10.3390/en14164748.
Shi, X., C. Yang, W. Xie, C. Liang, Z. Shi, and J. Chen. 2018. Anti-drone system with multiple
surveillance technologies: Architecture, implementation, and challenges. IEEE
Communications Magazine 56 (4):68–74. doi:10.1109/MCOM.2018.1700430.
Spanaki, K., E. Karafili, U. Sivarajah, S. Despoudi, and Z. Irani. 2021. Artificial intelligence and food security: Swarm intelligence of AgriTech drones for smart AgriFood operations. Production Planning & Control 33 (16):1498–516. doi:10.1080/09537287.2021.1882688.
Svaigen, A. R., L. M. S. Bine, G. L. Pappa, L. B. Ruiz, and A. A. F. Loureiro. 2021. Automatic drone identification through rhythm-based features for the internet of drones. 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI), 1417–21. doi:10.1109/ICTAI52525.2021.00225
Taha, B., and A. Shoufan. 2019. Machine learning-based drone detection and classification:
State-of-the-art in research. Institute of Electrical and Electronics Engineers Access
7:138669–82. doi:10.1109/ACCESS.2019.2942944.
Torvik, B., K. E. Olsen, and H. Griffiths. 2016. Classification of birds and UAVs based on radar
polarimetry. IEEE Geoscience and Remote Sensing Letters 13 (9):1305–09. doi:10.1109/LGRS.
2016.2582538.
Unlu, E., E. Zenou, and N. Rivière. 2018. Using shape descriptors for UAV detection. Electronic
Imaging 2017 (9):128–5. https://hal.archives-ouvertes.fr/hal-01740282.
Upadhyay, M., S. K. Murthy, and A. A. B. Raj. 2021. Intelligent system for real time detection
and classification of aerial targets using CNN. 2021 5th International Conference on
Intelligent Computing and Control Systems (ICICCS), 1676–81. doi:10.1109/ICICCS51141.
2021.9432136
U.S.C. Title 49—TRANSPORTATION. n.d.-c. Accessed March 15, 2021. https://www.govinfo.gov/content/pkg/USCODE-2018-title49/html/USCODE-2018-title49-subtitleVII-partA-subpartiii-chap448-sec44801.htm
Wang, P., E. Fan, and P. Wang. 2021. Comparative analysis of image classification algorithms
based on traditional machine learning and deep learning. Pattern Recognition Letters
141:61–67. doi:10.1016/j.patrec.2020.07.042.
Wang, Z., L. Qi, Y. Tie, Y. Ding, and Y. Bai. 2018. Drone detection based on FD-HOG
descriptor. 2018 International Conference on Cyber-Enabled Distributed Computing and
Knowledge Discovery (CyberC), 433–33. doi:10.1109/CyberC.2018.00084
Wu, Y., Y. Sui, and G. Wang. 2017. Vision-based real-time aerial object localization and
tracking for UAV sensing system. Institute of Electrical and Electronics Engineers Access
5:23969–78. doi:10.1109/ACCESS.2017.2764419.
Yang, X., Y. Zhang, W. Lv, and D. Wang. 2021. Image recognition of wind turbine blade
damage based on a deep learning model with transfer learning and an ensemble learning
classifier. Renewable Energy 163:386–97. doi:10.1016/j.renene.2020.08.125.
Yasmine, G., G. Maha, and M. Hicham. 2022. Survey on current anti-drone systems: Process,
technologies, and algorithms. International Journal of System of Systems Engineering
12 (3):235–70. doi:10.1504/IJSSE.2022.125947.
Yasmine, G., G. Maha, and M. Hicham. 2023. Anti-drone systems: An attention based improved YOLOv7 model for a real-time detection and identification of multi-airborne target. Intelligent Systems with Applications 20:200296. doi:10.1016/j.iswa.2023.200296.
Zeng, Y., Q. Duan, X. Chen, D. Peng, Y. Mao, and K. Yang. 2021. UAVData: A dataset for
unmanned aerial vehicle detection. Soft Computing 25 (7):5385–93. doi:10.1007/s00500-020-
05537-9.
Zhao, J., J. Zhang, D. Li, and D. Wang. 2022. Vision-based anti-UAV detection and tracking. arXiv:2205.10851. http://arxiv.org/abs/2205.10851
Zheng, Y., Z. Chen, D. Lv, Z. Li, Z. Lan, and S. Zhao. 2021. Air-to-air visual detection of
micro-UAVs: An experimental evaluation of deep learning. IEEE Robotics and Automation
Letters 6 (2):1020–27. doi:10.1109/LRA.2021.3056059.
Zhong, G., X. Ling, and L. Wang. 2019. From shallow feature learning to deep learning:
Benefits from the width and depth of deep architectures. WIREs Data Mining and
Knowledge Discovery 9 (1). doi:10.1002/widm.1255.
Zhu, W., B. Braun, L. H. Chiang, and J. A. Romagnoli. 2021. Investigation of transfer learning
for image classification and impact on training sample size. Chemometrics and Intelligent
Laboratory Systems 211:104269. doi:10.1016/j.chemolab.2021.104269.