Article
An Intelligent Real-Time Object Detection System on Drones
Chao Chen 1 , Hongrui Min 1,2 , Yi Peng 1,2 , Yongkui Yang 1 and Zheng Wang 1, *
1 Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China;
[email protected] (C.C.); [email protected] (H.M.); [email protected] (Y.P.); [email protected] (Y.Y.)
2 University of Chinese Academy of Sciences, Beijing 101408, China
* Correspondence: [email protected]
Abstract: Drones have been widely used in everyday life and they can help deal with various tasks,
including photography, searching, and surveillance. Nonetheless, it is difficult for drones to perform
customized online real-time object detection. In this study, we propose an intelligent real-time object
detection system for drones. It is composed of an FPGA and a drone. A neural-network (NN) engine
is designed on the FPGA for NN model acceleration. The FPGA receives activation data from an NN
model, which are assembled into the data stream. Multiple fetch and jump pointers catch required
activation values from the data stream, which are then filtered and sent to each thread independently.
To accelerate processing speed, multiple processing elements (PEs) deal with tasks in parallel by
using multiple weights and threads. The image data are transferred from the drone host to the FPGA and are processed at high speed by the NN engine. The NN engine results are returned to the host, which uses them to adjust the flying route accordingly. Experimental results reveal that our proposed
FPGA design well utilizes FPGA computing resources with 81.56% DSP and 72.80% LUT utilization
rates, respectively. By using the Yolov3-tiny model for fast object detection, our system can detect
objects at the speed of 8 frames per second and achieves a much lower power consumption compared
to state-of-the-art methods. More importantly, the intelligent object detection techniques provide
more pixels for the target of interest and they can increase the detection confidence score from 0.74 to
0.90 and from 0.70 to 0.84 for persons and cars, respectively.
Keywords: neural network accelerator; FPGA; object detection; intelligent system; machine learning
classification and detection. The drone sends image data to the FPGA via the host, which are
assembled into the data stream for access by multiple fetch and jump pointers. Activation
values are fetched for each thread and then filtered. Processing elements tackle tasks in
parallel by using multiple weights and threads, which significantly speeds up the running
of deep-learning neural networks (NNs).
Then, we build an intelligent system using the FPGA on drones, such that the drone can flexibly adjust its flying route, based on the FPGA results, to generate more pixels for objects, which helps detect them more accurately. Experimental results reveal that our
proposed FPGA design well utilizes FPGA computing resources, with 81.56% DSP and
72.80% LUT utilization rates, respectively. By using the Yolov3-tiny model for fast object
detection, our system can detect objects at the speed of 8 frames per second (FPS) and
achieve a much lower power consumption compared to state-of-the-art methods. More
importantly, the intelligent object detection techniques provide more pixels for the target of interest, and they can increase the detection confidence score from 0.74 to 0.90 and from 0.70 to 0.84 for persons and cars, respectively.
The rest of the paper is organized as follows. Section 2 introduces related work; materials and methods are described in Section 3; results and discussion are presented in Section 4; and Section 5 concludes the paper.
2. Related Work
Researchers have proposed plenty of object detection algorithms, especially with the
help of AI techniques. AI algorithms have been proven to be effective in object detection
in terms of detection accuracy and speed. In this section, we review existing related
work with AI techniques in different aspects, i.e., FPGA acceleration, AI object detection,
and intelligent systems.
Ma et al. [9] propose an FPGA accelerator for Yolov2-tiny, where the Xilinx Virtex-7
485T FPGA is used for development. They support operators of convolution, pooling,
ReLU, and fully connected layers. The input size is 448 × 448 and the system generates
7 × 7 × 2 bounding boxes. The essential contribution is that they optimize the hardware
implementation of the fully connected layer and the detection speed is fast. It is worth
mentioning that in the new Yolov3, the fully connected layer is not used.
Guo et al. [10] develop an FPGA accelerator using Xilinx Zynq 7020 FPGA board. This
design can be applied to various neural networks and it supports Yolov2, whose operators
are the same as those in [9]. Its detection speed is low compared to the work in [9].
Ding et al. [11] achieve high-speed detection using a Xilinx Virtex-7 690T FPGA. They
accelerate the convolution by utilizing FFT. The operators supported include convolution,
max pool, leaky ReLU, etc. The main contributions are that they implement the fully-connected layer using convolution and compress the model weights to 4.95 MB. This is much smaller than the original, and it comes at a cost in accuracy.
In addition, the related studies all accelerate Yolov2—which consists of nine convolu-
tional layers—and do not support Yolov3. The operators in Yolov2 are limited. Our design
can accelerate Yolov3, which includes 13 convolutional layers and new operators such as
up-sampling and concatenation. The previous studies generate bounding boxes that are then post-processed by CPUs, whereas our design includes a post-processing module in the FPGA, which greatly accelerates the post-processing. In addition, since the previous studies all include SoCs that run an operating system, their power consumption is inevitably high, and they are not suitable for low-power scenarios.
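To make concrete the kind of CPU-side work that an on-FPGA post-processing module replaces, the sketch below decodes one YOLO output grid into boxes and applies non-maximum suppression in Python. All names, thresholds, and the assumption that offsets and scores are already sigmoid-activated are illustrative choices for this sketch, not the authors' implementation.

```python
# Illustrative YOLO-style post-processing: grid decoding followed by greedy NMS.
# Function names, thresholds and tensor layout are assumptions for this sketch.
import numpy as np

def decode_grid(pred, stride, anchors, conf_thresh=0.5):
    """Decode one S x S x A x (5 + C) output grid into candidate boxes.
    Assumes tx, ty, objectness, and class scores are already sigmoid-activated."""
    S = pred.shape[0]
    boxes = []
    for gy in range(S):
        for gx in range(S):
            for a, (aw, ah) in enumerate(anchors):
                tx, ty, tw, th, obj = pred[gy, gx, a, :5]
                cls_scores = pred[gy, gx, a, 5:]
                score = obj * cls_scores.max()
                if score < conf_thresh:
                    continue
                cx = (gx + tx) * stride               # box centre in pixels
                cy = (gy + ty) * stride
                w, h = aw * np.exp(tw), ah * np.exp(th)
                boxes.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2,
                              score, cls_scores.argmax()])
    return np.array(boxes)

def nms(boxes, iou_thresh=0.45):
    """Greedy non-maximum suppression over [x1, y1, x2, y2, score, cls] rows."""
    if len(boxes) == 0:
        return boxes
    order = boxes[:, 4].argsort()[::-1]               # highest score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        order = rest[iou < iou_thresh]                # drop heavily overlapping boxes
    return boxes[keep]
```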
Finally, we review intelligent systems in this section. Many sensors are equipped in various systems to help gather surrounding
information that can be used to make intelligent decisions. Tactile sensors are used in
advanced intelligent systems, such as robotics and human-machine interfaces, to achieve
safe and precise operations in uncertain environments [19]. Vehicle visual sensors are
adopted by Li et al. to perform real-time semantic understanding and segmentation of
urban scenes [20]. Qiu et al. [21] build intelligent human motion capture systems using
wearable sensors. Haseeb et al. [22] propose a sensor-cloud architecture for intelligent
Internet of Things to provide data security and network scalability.
In addition, machine learning (ML) is becoming increasingly important in intelligent
systems [23,24]. A large amount of data has been generated from equipped sensors and the Internet, which offers a route to making systems more intelligent by training them on these data with ML. Ha et al. [25] summarize sensing applications and investigate how machine-learning algorithms are used to develop intelligent sensor systems. The
standard extreme learning machine achieves the best diagnosis result with an accuracy of
80%. Prencipe et al. [26] propose a system that utilizes a 2.5D convolutional neural network
(CNN) with the multi-class focal Dice loss, such that liver segments can be classified for
surgical planning. An intelligent system is developed to manage and control plant leaf
diseases using feature fusion and PCA-LDA classification [27].
Haq et al. [28] propose an intelligent system that takes advantage of decision trees
to classify healthy and diabetic subjects using clinical data. González et al. [29] build an
intelligent system to solve a closed-loop supply-chain problem using fuzzy logic with
machine learning. Deepa et al. [30] perform healthcare analysis with an AI-based intelligent system using a Ridge-Adaline Stochastic Gradient Descent Classifier (RASGDC). Zekić-Sušac et al. [31] utilize a deep neural network (DNN) and tree-partitioning methods to predict energy consumption in smart cities, such that an intelligent system is developed for efficient energy management.
The architecture of the NN engine on the FPGA is displayed in Figure 2. Data are
first transferred to the FPGA. Then the NN engine assembles the data into the data stream,
which are used for the following processing. Specifically, data streams are sent to multiple
fetch and jump pointers. Note that we adopt a multi-threading strategy such that parallel
computation can be achieved using multiple weights and buffers.
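As a software analogy of this multi-threading strategy, the sketch below runs several worker threads, each holding its own weight buffer and consuming activation tiles from a shared stream, with results re-ordered afterwards. The PE count, tile shapes, and queue layout are assumptions made for illustration, not the FPGA microarchitecture.

```python
# A software analogy of the multi-threaded PE arrangement; queue layout, tile
# size and PE count are assumptions for illustration, not the hardware design.
import queue
import threading
import numpy as np

NUM_PES = 3                       # a few PEs, as sketched in Figure 2
activation_tiles = queue.Queue()  # shared data stream feeding all threads
results = queue.Queue()           # stands in for the out-of-order commit buffer

def pe_worker(pe_id, weights):
    """Each PE thread owns a weight buffer and multiplies incoming tiles."""
    while True:
        item = activation_tiles.get()
        if item is None:                 # poison pill terminates the thread
            break
        tile_id, tile = item
        results.put((tile_id, pe_id, tile @ weights))

weight_buffers = [np.random.rand(16, 8) for _ in range(NUM_PES)]  # one buffer per PE
threads = [threading.Thread(target=pe_worker, args=(i, weight_buffers[i]))
           for i in range(NUM_PES)]
for t in threads:
    t.start()

for tile_id in range(6):                               # feed activation tiles
    activation_tiles.put((tile_id, np.random.rand(4, 16)))
for _ in threads:
    activation_tiles.put(None)
for t in threads:
    t.join()

# Results arrive out of order; sorting by tile_id mimics the in-order construction.
ordered = sorted((results.get() for _ in range(6)), key=lambda r: r[0])
```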
Figure 2. The architecture of the NN engine on the FPGA: instruction fetch/decode, a finite state machine, and program registers drive the thread control; fetch-and-jump (F&J) units and activation filtering feed thread buffers via dynamic thread allocation; weight buffers and processing elements (PEs) produce write-back (WB) results that pass through an out-of-order commit buffer and in-order construction.
A fetch and jump (F&J) mechanism is used to independently obtain required activation
data for each thread from the data stream. Fetch pointers—which are programmed in
accordance with the kernel structure—access the data stream in parallel, and fetch the
activation data into specific L1 buffers. Together with a low-cost jump logic, a fetch pointer
catches the required data individually. Then, we filter the activation data with sliding
windows and use a dynamic thread-allocation module to mark thread buffer and thread
status. At the same time, neural network weights are put into weight buffers. The activation
data and weights are used to process image data using multiple processing elements (PEs)
for NN acceleration, whose outputs are written to the out-of-order commit buffer. The out-
of-order data are reconstructed by the in-order construction module and they are sent to
the external data sink.
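The sketch below mimics, in Python, how a fetch pointer with a jump offset could gather one thread's activation values from the flat data stream and assemble them into a sliding window. The row width, kernel size, and indexing scheme are assumptions chosen to illustrate the idea, not the RTL behaviour.

```python
# Illustrative fetch-and-jump gathering of activations from a flat data stream.
# Row width, kernel size, and buffer layout are assumptions for this sketch.
import numpy as np

def fetch_and_jump(stream, start, fetch_len, jump, repeats):
    """Fetch `fetch_len` values, jump `jump` values ahead, and repeat.
    This mirrors a pointer that reads one kernel row and then skips to the
    next image row inside the flat data stream."""
    out = []
    pos = start
    for _ in range(repeats):
        out.extend(stream[pos:pos + fetch_len])
        pos += jump
    return np.array(out)

# A 6x6 activation map flattened into the data stream.
row_width = 6
stream = np.arange(36, dtype=np.float32)

# Gather the 3x3 window whose top-left corner is at (row 1, col 2):
window = fetch_and_jump(stream,
                        start=1 * row_width + 2,  # offset of the top-left element
                        fetch_len=3,              # one kernel row per fetch
                        jump=row_width,           # jump to the next image row
                        repeats=3)                # three kernel rows
print(window.reshape(3, 3))   # the filtered 3x3 activation window for one thread
```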
Figure 3. The architecture of the intelligent system: the camera sends image data to the drone host (an ARM SoC controller running a Linux OS); the host exchanges flight status and commands with the flight controller, and forwards the image data and deep-learning NN model to the FPGA, which returns the result.
From Figure 3, we can see that the drone host is the central unit of the intelligent
system and it is in charge of the system management. The camera sends image data to the
drone host for processing. The host forwards the image data to the FPGA for AI processing
using ML models, and the result is returned to the host. Meanwhile, the host sends flight
status and commands to the flight controller—a separate module for flight control and
adjustment—throughout the flight journey, such that the drone can fly smoothly and safely.
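A minimal host-side management loop consistent with Figure 3 might look like the sketch below. The transport functions (grab_frame, fpga_infer, send_flight_command) and the simple route policy are hypothetical placeholders, since the paper does not expose a host API; the 0.125 s cycle time is chosen only to match the reported 8 FPS.

```python
# Hypothetical host-side management loop for the architecture in Figure 3.
# grab_frame, fpga_infer and send_flight_command are placeholder callables,
# not an API defined by the paper.
import time

def plan_route(detections):
    """Trivial placeholder policy: approach if something was detected."""
    return "approach_target" if detections else "continue_search"

def manage_flight(grab_frame, fpga_infer, send_flight_command, period_s=0.125):
    """Forward camera frames to the FPGA, read back detections, and issue
    flight commands; 0.125 s per cycle corresponds to the reported 8 FPS."""
    while True:
        frame = grab_frame()                  # image data from the camera
        if frame is None:                     # camera stopped: end the mission loop
            break
        detections = fpga_infer(frame)        # NN engine result from the FPGA
        command = plan_route(detections)      # host decides the next move
        send_flight_command(command)          # forwarded to the flight controller
        time.sleep(period_s)
```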
The intelligent system picture is shown in Figure 4. As explained in previous sections,
the system includes four key components, i.e., the host, the camera, the flight controller,
and the FPGA. In addition, other components are necessary to support the system, including
the battery for power supply and GPS for outdoor positioning.
Figure 4. The intelligent system hardware: a Kintex-7 410T FPGA, a battery, GPS, the flight controller, a Realsense T265 camera, and a Raspberry Pi 4B host.
Figure 5. The workflow of intelligent object detection: the drone searches an area and runs object detection on captured images; if a target of interest is found and its confidence score reaches the threshold, the image is reported, otherwise the drone increases the pixels of the object and detects again.
Specifically, in the workflow of Figure 5, the drone first starts flying in an area of
interest and then looks around to search for different types of targets. Depending on the
battery, a drone can typically fly for between 10 and 30 minutes, so power consumption is a concern for drones.
While flying, the drone captures images using equipped sensors, including pressure
sensors, humidity sensors, accelerometer sensors, image cameras, etc. With the help of
deep learning, it is feasible to detect objects using images. Therefore, we use the camera to
capture images during the flight.
Upon the image data generation, the host transfers the image data to the FPGA
for object detection. The FPGA contains the NN engine that can run various NN models,
including ResNet-50, Yolov3, and Yolov3-tiny models. These models can be used to perform
object classification and detection tasks with low power consumption.
Based on the object detection results, the system evaluates if there is any target of
interest (TOI) in the search area. The TOI is the target that the user specifies in advance,
and it may be a person, a car, or other objects. If no TOI is found, the drone moves to the
next area of interest.
When a TOI is found, our system looks into the confidence score of the object detection.
We use a confidence-score threshold as the tuning hyperparameter for the user to adjust.
When the confidence score is greater than or equal to the threshold, the system is confident
that the image provides enough information and then sends the image to the remote machine for reporting. The drone can then start searching the next area.
If the confidence score is below the threshold value, the drone flies around and
attempts to take images of the object from different angles. By approaching the target,
the system can generate images that may contain more pixels for the TOI. In this case,
the confidence score will be effectively increased.
Note that the confidence score denotes the uncertainty of the object detection result
from Yolo. By using a preset threshold, the uncertainty in decision-making can be reduced.
When the threshold is set to a low value (e.g., 0.60), the uncertainty is high and the object
may not be that clear in the image; when it is set to a high value (e.g., 0.95), the uncertainty
decreases but the system may not be able to reach the threshold, which results in a deadlock.
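The decision logic of Figure 5 can be summarised by the short sketch below. The helper names (report_image, approach_target, next_area), the default threshold of 0.8, and the bounded retry count are assumptions introduced only to make the workflow concrete; the retry bound is one simple way to avoid the deadlock mentioned above.

```python
# A compact rendering of the Figure 5 decision logic; the helper callables and
# the maximum number of approach attempts are assumptions, not the paper's values.
def handle_detection(detections, toi_classes, threshold=0.8, max_attempts=5,
                     report_image=print, approach_target=lambda d: None,
                     next_area=lambda: None):
    """Report the image when a target of interest reaches the threshold;
    otherwise approach the target to gather more pixels for it."""
    toi = [d for d in detections if d["class"] in toi_classes]
    if not toi:
        next_area()                               # no TOI: search the next area
        return "searching"
    best = max(toi, key=lambda d: d["score"])
    for _ in range(max_attempts):                 # bounded retries avoid a deadlock
        if best["score"] >= threshold:            # confident enough: report and move on
            report_image(best)
            next_area()
            return "reported"
        approach_target(best)                     # fly closer for more pixels on the TOI
        best["score"] = min(1.0, best["score"] + 0.05)  # stand-in for re-detection
    return "gave_up"

# e.g. handle_detection([{"class": "person", "score": 0.60}], {"person"})
# returns "reported" after a few simulated approaches.
```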
Figure 6 illustrates the principle of intelligent object detection. The target of interest
is the person, and in the first image, the object detection confidence score is 0.60 and it
is below the threshold (e.g., 0.8). By providing more pixels of the person, the system can
produce a more effective object detection with a higher confidence score—i.e., 0.90—and
report the image accordingly.
Figure 6. The illustration of intelligent object detection by providing more pixels for the target of
interest, which effectively increases the object detection confidence score.
4.2. Power
Figure 7 illustrates the power consumption of the FPGA. The total on-chip power is 3.529 W, of which dynamic power is dominant, accounting for 93%. The on-chip power is not large and can be supplied by portable DC power sources. Thus, we adopt a battery bank to supply a 5 V voltage for the FPGA.
There are two power sources for the intelligent system, i.e., a battery for the flying of
the drone and a battery for the FPGA. Table 3 lists the battery-life duration of the drone
and the FPGA. We can see that although the FPGA exhibits low-power characteristics and can work for 2 hours, the system's duration is determined by the drone's battery, which is limited to 11 minutes.
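As a rough consistency check on the 2 h FPGA runtime (the capacity of the battery bank is not stated, so the figures below are only an estimate derived from the reported power, voltage, and runtime):

```latex
E_{\text{FPGA}} \approx P \cdot t = 3.529\,\text{W} \times 2\,\text{h} \approx 7.1\,\text{Wh},
\qquad
Q \approx \frac{7.1\,\text{Wh}}{5\,\text{V}} \approx 1.4\,\text{Ah},
```

so a modest 5 V battery bank is sufficient for the FPGA, while the 11 min flight battery remains the limiting factor for the whole system.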
NN Model      Weight Size   Weight Loading Time   Processing Time
ResNet-50     147 MB        16.01 s               0.8 s
Yolov3        384 MB        39.31 s               1.17 s
Yolov3-tiny   51 MB         5.88 s                0.13 s
[Figure: a bar chart comparing ResNet-50, Yolov3, and Yolov3-tiny on a 0–50 s scale, consistent with the weight loading times above.]
With our intelligent object detection techniques, we empirically set the confidence
score threshold to 0.8 and the drone approaches the detected target when the score is less
than the threshold. On the one hand, this generates more detection results, such that wrong detections may be spotted. On the other hand, it produces more pixels as the drone approaches, which helps improve the accuracy of object detection.
Figure 10 displays the person-detection result using intelligent object detection tech-
niques. We can see that the confidence score of the person has improved from 0.74 to
0.90. In addition, the misclassified bikes have been corrected from person to bicycle with a
confidence score of 0.50.
5. Conclusions
In this research, we propose an intelligent object detection system on drones. We
implement a power-efficient neural network engine on the FPGA and use it as the NN
model accelerator. The image data are transferred by the host from the camera to the FPGA.
To improve the accuracy of object detection, our intelligent detection technique approaches the target and generates images with more pixels for it. Experimental results indicate that our
system can achieve real-time detection at the speed of 8 FPS. Compared to other approaches,
our work achieves low power consumption at 3.529 W with a lower latency of 130 ms.
In addition, it can increase the detection confidence score from 0.74 to 0.90 and from 0.70 to
0.84 for persons and cars, respectively. In future work, we plan to apply our approach to
various intelligent applications—such as smart city monitoring and fault diagnosis using AI
techniques—and combine our design with state-of-the-art edge-computing and blockchain
technologies to deal with data security issues while capturing images in real-time [34,35].
Author Contributions: Conceptualization, C.C. and Z.W.; methodology, C.C.; software, H.M.; valida-
tion, Y.P.; data curation, Y.Y.; writing—original draft preparation, C.C.; writing—review and editing,
Z.W. and Y.Y. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the Key-Area Research and Development Program of Guang-
dong Province (grant number 2019B010155003), the National Natural Science Foundation of China
(NSFC 61902355), and the Guangdong Basic and Applied Basic Research Foundation (grant number
2020B1515120044).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Shahmoradi, J.; Talebi, E.; Roghanchi, P.; Hassanalian, M. A comprehensive review of applications of drone technology in the
mining industry. Drones 2020, 4, 34.
2. Krul, S.; Pantos, C.; Frangulea, M.; Valente, J. Visual SLAM for indoor livestock and farming using a small drone with a monocular
camera: A feasibility study. Drones 2021, 5, 41.
3. Moshref-Javadi, M.; Winkenbach, M. Applications and Research avenues for drone-based models in logistics: A classification and
review. Expert Syst. Appl. 2021, 177, 114854.
4. Daud, S.M.S.M.; Yusof, M.Y.P.M.; Heo, C.C.; Khoo, L.S.; Singh, M.K.C.; Mahmood, M.S.; Nawawi, H. Applications of drone in
disaster management: A scoping review. Sci. Justice 2022, 62, 30–42.
5. Rapuano, E.; Meoni, G.; Pacini, T.; Dinelli, G.; Furano, G.; Giuffrida, G.; Fanucci, L. An FPGA-based hardware accelerator for CNNs inference on board satellites: Benchmarking with Myriad 2-based solution for the CloudScout case study. Remote Sens. 2021, 13, 1518.
6. Wang, Z.; Zhou, L.; Xie, W.; Chen, W.; Su, J.; Chen, W.; Du, A.; Li, S.; Liang, M.; Lin, Y.; et al. Accelerating hybrid and compact
neural networks targeting perception and control domains with coarse-grained dataflow reconfiguration. J. Semicond. 2020,
41, 022401.
7. Wang, J.; Gu, S. FPGA Implementation of Object Detection Accelerator Based on Vitis-AI. In Proceedings of the 2021 11th
International Conference on Information Science and Technology (ICIST), Chengdu, China, 21–23 May 2021; pp. 571–577.
https://doi.org/10.1109/ICIST52614.2021.9440554.
8. Li, W.; Liewig, M. A survey of AI accelerators for edge environment. In Proceedings of the World Conference on Information
Systems and Technologies, Budva, Montenegro, 7–10 April 2020; pp. 35–44.
9. Ma, J.; Chen, L.; Gao, Z. Hardware Implementation and Optimization of Tiny-YOLO Network. In Proceedings of the Digital TV and
Wireless Multimedia Communication; Zhai, G., Zhou, J., Yang, X., Eds.; Springer Singapore: Singapore, 2018; pp. 224–234.
10. Guo, K.; Sui, L.; Qiu, J.; Yao, S.; Han, S.; Wang, Y.; Yang, H. From model to FPGA: Software-hardware co-design for efficient
neural network acceleration. In Proceedings of the 2016 IEEE Hot Chips 28 Symposium (HCS), Cupertino, CA, USA, 21–23
August 2016; pp. 1–27. https://doi.org/10.1109/HOTCHIPS.2016.7936208.
11. Ding, C.; Wang, S.; Liu, N.; Xu, K.; Wang, Y.; Liang, Y. REQ-YOLO: A Resource-Aware, Efficient Quantization Framework for
Object Detection on FPGAs. In Proceedings of the 2019 ACM/SIGDA International Symposium on Field-Programmable Gate
Arrays, Seaside, CA, USA, 24–26 February 2019; Association for Computing Machinery: New York, NY, USA, 2019; FPGA ’19; pp.
33–42. https://doi.org/10.1145/3289602.3293904.
12. Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in
Context. In Proceedings of the ECCV, European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014.
13. Girshick, R. Fast r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December
2015; pp. 1440–1448.
14. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 779–788.
15. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted
windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October
2021; pp. 10012–10022.
16. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient
convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861.
17. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014;
pp. 580–587.
18. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
19. Liu, Y.; Bao, R.; Tao, J.; Li, J.; Dong, M.; Pan, C. Recent progress in tactile sensors and their applications in intelligent systems. Sci.
Bull. 2020, 65, 70–88.
20. Li, Y.; Shi, J.; Li, Y. Real-Time Semantic Understanding and Segmentation of Urban Scenes for Vehicle Visual Sensors by Optimized
DCNN Algorithm. Appl. Sci. 2022, 12, 7811. https://doi.org/10.3390/app12157811.
21. Qiu, S.; Zhao, H.; Jiang, N.; Wu, D.; Song, G.; Zhao, H.; Wang, Z. Sensor network oriented human motion capture via wearable
intelligent system. Int. J. Intell. Syst. 2022, 37, 1646–1673. https://doi.org/10.1002/int.22689.
22. Haseeb, K.; Almogren, A.; Ud Din, I.; Islam, N.; Altameem, A. SASC: Secure and Authentication-Based Sensor Cloud Architecture
for Intelligent Internet of Things. Sensors 2020, 20, 2468. https://doi.org/10.3390/s20092468.
23. Injadat, M.; Moubayed, A.; Nassif, A.B.; Shami, A. Machine learning towards intelligent systems: Applications, challenges, and
opportunities. Artif. Intell. Rev. 2021, 54, 3299–3348.
24. Janiesch, C.; Zschech, P.; Heinrich, K. Machine learning and deep learning. Electron. Mark. 2021, 31, 685–695.
25. Ha, N.; Xu, K.; Ren, G.; Mitchell, A.; Ou, J.Z. Machine Learning-Enabled Smart Sensor Systems. Adv. Intell. Syst. 2020, 2, 2000063.
https://doi.org/10.1002/aisy.202000063.
26. Prencipe, B.; Altini, N.; Cascarano, G.D.; Brunetti, A.; Guerriero, A.; Bevilacqua, V. Focal Dice Loss-Based V-Net for Liver
Segments Classification. Appl. Sci. 2022, 12, 3247. https://doi.org/10.3390/app12073247.
27. Ali, S.; Hassan, M.; Kim, J.Y.; Farid, M.I.; Sanaullah, M.; Mufti, H. FF-PCA-LDA: Intelligent Feature Fusion Based PCA-LDA
Classification System for Plant Leaf Diseases. Appl. Sci. 2022, 12, 3514. https://doi.org/10.3390/app12073514.
28. Haq, A.U.; Li, J.P.; Khan, J.; Memon, M.H.; Nazir, S.; Ahmad, S.; Khan, G.A.; Ali, A. Intelligent machine learning approach for
effective recognition of diabetes in E-healthcare using clinical data. Sensors 2020, 20, 2649.
29. González Rodríguez, G.; Gonzalez-Cava, J.M.; Méndez Pérez, J.A. An intelligent decision support system for production planning
based on machine learning. J. Intell. Manuf. 2020, 31, 1257–1273.
30. Deepa, N.; Prabadevi, B.; Maddikunta, P.K.; Gadekallu, T.R.; Baker, T.; Khan, M.A.; Tariq, U. An AI-based intelligent system for
healthcare analysis using Ridge-Adaline Stochastic Gradient Descent Classifier. J. Supercomput. 2021, 77, 1998–2017.
31. Zekić-Sušac, M.; Mitrović, S.; Has, A. Machine learning based system for managing energy efficiency of public sector as an
approach towards smart cities. Int. J. Inf. Manag. 2021, 58, 102074.
32. Husni, N.L.; Sari, P.A.R.; Handayani, A.S.; Dewi, T.; Seno, S.A.H.; Caesarendra, W.; Glowacz, A.; Oprzędkiewicz, K.; Sułowicz, M. Real-Time Littering Activity Monitoring Based on Image Classification Method. Smart Cities 2021, 4, 1496–1518. https://doi.org/10.3390/smartcities4040079.
33. Glowacz, A. Thermographic Fault Diagnosis of Ventilation in BLDC Motors. Sensors 2021, 21, 7245. https://doi.org/10.3390/s21217245.
34. Gadekallu, T.R.; Pham, Q.V.; Nguyen, D.C.; Maddikunta, P.K.R.; Deepa, N.; Prabadevi, B.; Pathirana, P.N.; Zhao, J.; Hwang, W.J.
Blockchain for edge of things: Applications, opportunities, and challenges. IEEE Internet Things J. 2021, 9, 964–988.
35. Khan, A.A.; Laghari, A.A.; Gadekallu, T.R.; Shaikh, Z.A.; Javed, A.R.; Rashid, M.; Estrela, V.V.; Mikhaylov, A. A drone-based
data management and optimization using metaheuristic algorithms and blockchain smart contracts in a secure fog environment.
Comput. Electr. Eng. 2022, 102, 108234.