Article
Forest Fire Detection and Notification Method Based on AI and
IoT Approaches
Kuldoshbay Avazov 1, An Eui Hyun 1, Alabdulwahab Abrar Sami S 1, Azizbek Khaitov 2, Akmalbek Bobomirzaevich Abdusalomov 1 and Young Im Cho 1,*
Abstract: There is a high risk of bushfire in spring and autumn, when the air is dry. "Do not bring any flammable substances, such as matches or cigarettes." "Cooking or wood fires are permitted only in designated areas." These are some of the regulations enforced when hiking or visiting a vegetated forest. However, humans tend to disobey or disregard guidelines and the law. Therefore, to preemptively stop people from accidentally starting a fire, we created a technique that allows early fire detection and classification to ensure the utmost safety of the living things in the forest. Some relevant studies on forest fire detection have been conducted in the past few years; however, there are still insufficient studies on early fire detection and notification systems that monitor fire disasters in real time using advanced approaches. Therefore, we came up with a solution based on the convergence of the Internet of Things (IoT) and You Only Look Once Version 5 (YOLOv5). The experimental results show that IoT devices were able to validate some of the falsely detected or undetected fires that YOLOv5 reported. Each report is recorded and sent to the fire department for further verification and validation. Finally, we compared the performance of our method with that of recently reported fire detection approaches, employing widely used performance metrics to test the achieved fire classification results.
Keywords: bushfire; fire detection; forest environment; YOLOv5; fire-like lights

Academic Editor: Paolo Bellavista
Received: 29 November 2022; Revised: 19 January 2023; Accepted: 29 January 2023; Published: 31 January 2023

1. Introduction

Given its direct impact on public safety and the environment, early fire detection is a difficult but crucial problem. To avoid harm and property damage, advanced technology requires appropriate methods for detecting fires as soon as possible [1]. According to UNEP, "Wildfires are becoming more intense and more frequent, ravaging communities and ecosystems in their path" [2]. Without swift action, wildfires continue to burn for days, contributing to the climate crisis and the loss of lives. Due to the climate crisis, the world is starting to face anomalous fluctuations in water levels, changes in temperature, and the extinction of some protected animals, which will affect the balance of life in the future. Therefore, we must take wildfire problems seriously before they become catastrophic. Installing early fire detection in the forest, along with an automatic notification system to alert the fire department, can prevent countless problems.

It has always been a challenge to control fires on a global scale. In 2019, there were 40,030 fires in South Korea, resulting in 284 deaths and 2219 injuries, according to the Korean National Fire Agency. In addition, property damage totaled KRW 2.2 billion as a result of 110 fires and 0.8 fire-related deaths daily. Two significant Korean cities experienced fires in 2020 that resulted in the deaths of over 50 people in each location. A 33-story tower block in Ulsan burned down, and a warehouse fire broke out in Incheon [3].
However, these incidents occurred in Korea alone. Imagine a wildfire the size of a country continuing for weeks. According to UNEP, the intensity and frequency of wildfires are increasing, wreaking havoc on the ecosystems and populations they pass through. These wildfires happen for many reasons: they are due either to naturally caused bushfires or to human error. If bushfires are detected early, the spread of wildfires can be prevented. Furthermore, if we can stop people from starting a fire as they light it, we can prevent further incidents. In our research, we found some studies on fire alarms installed in forests; however, our own study identified several challenges these systems may face:
• Sensors are one of the most widely used techniques to determine whether there is a fire. We found that using only a smoke detection sensor can lead to false alarms, because smells can come from other sources, such as someone smoking.
• Remote cameras are used in some other fire detection systems to determine whether there is a fire. This kind of surveillance requires human employees to continuously monitor the cameras, and the employees can sometimes fall asleep.
• R-CNN-based approaches are also used efficiently to identify and eliminate fire catastrophes. However, this method can make errors and incorrectly classify candidate fire regions as real fires, which may also lead to false alarms.
Identifying these errors made by other researchers and companies gave us innovative ideas. When humans lose one of their senses, the other senses become more enhanced; blindfold someone, and their remaining senses sharpen. This realization came through serendipity. Walking past an air purifier multiple times after a workout, we noticed the machine working extra hard: it could not see us, but it could smell us first. Similarly, when a pot of soup is left boiling for too long, you can smell the burnt scent before you can see it, and then you double-check to see what is burning. We wanted our system to do both, so we converged the ideas from all the research into one product that would prevent wildfires from happening, as sketched below.
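As a minimal sketch of this "smell first, then look" fusion, the snippet below raises an alert only when the smoke sensor and the vision model agree. It assumes a boolean sensor reading and a detector confidence score; the 0.5 threshold is an illustrative assumption, not a value from our experiments.

```python
# Hedged sketch: fuse a smoke-sensor reading with a vision-model confidence.
def should_alert(sensor_triggered: bool, fire_confidence: float,
                 threshold: float = 0.5) -> bool:
    # Alert only when both modalities agree, suppressing single-modality
    # false alarms (cooking smells, red shirts, fire-like lights).
    return sensor_triggered and fire_confidence >= threshold

print(should_alert(True, 0.83))   # True: sensor and camera agree
print(should_alert(False, 0.83))  # False: fire-like light with no smoke reading
```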
The rest of this manuscript is structured into five more sections. Section 2 reviews the
literature on traditional and deep learning methods used to identify particular fire regions.
Section 3 includes a description of how the fire detection system works. Section 4 presents
the results of the experiment. Section 5 highlights certain limitations of the proposed
method. Lastly, Section 6 covers the conclusion of the paper and the discussion of the
proposed method’s future directions.
2. Related Work
Forest fire detection technologies can be divided into two main categories: methods based on machine learning, deep learning, and computer vision; and sensor-based methods. Recent studies have observed that deep learning has driven the popularity of object detection in industry [4]. The most common deep learning approaches to detecting objects are image-based convolutional neural networks (CNNs) [5], fully convolutional networks [6], spatio-spectral deep neural networks [7], and faster R-CNNs [8].
A real-time color-based program did not provide better output because of smoke and shadow. In [12], fire was detected based on the dynamic textures of smoke and flame using linear dynamical systems (LDSs).
Figure 1. Overall flow chart of the system.
When working with devices and sensors and testing beta products, a Raspberry Pi 4 or Arduino is used because its pins and sensors are easily accessible; a small polling sketch follows the list below. However, this can be bypassed by using a desktop running Windows 10 Pro and its GPU to process the video, and using the Raspberry Pi 4 or Arduino only for the smoke sensors.
Listed below are the hardware requirements:
• Raspberry Pi 4 or Arduino;
• Raspberry Pi 4 power cable;
• Raspberry Pi 4 Internet cable;
• Solid-state drive (SSD) or hard drive;
• Internet access;
• Desktop Windows 10 Pro (optional/recommended);
• MQ-2 flammable gas & smoke sensor;
• Pin wires;
• Breadboard;
• Camera.
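As a minimal illustration of the sensing side, the sketch below polls an MQ-2 digital output on a Raspberry Pi 4. The pin number and the active-low trigger behavior are wiring assumptions, not values specified in this paper.

```python
# Minimal sketch: poll the MQ-2 sensor's digital output on a Raspberry Pi 4.
# Assumptions: the sensor's DO pin is wired to BCM GPIO 17 and pulls LOW when
# the on-board smoke threshold is exceeded.
import time
import RPi.GPIO as GPIO

MQ2_PIN = 17  # hypothetical wiring; adjust to your breadboard layout

GPIO.setmode(GPIO.BCM)
GPIO.setup(MQ2_PIN, GPIO.IN)

try:
    while True:
        if GPIO.input(MQ2_PIN) == GPIO.LOW:  # threshold exceeded -> possible smoke
            print("Smoke detected: trigger camera recording")
        time.sleep(1)  # poll once per second
finally:
    GPIO.cleanup()
```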
3.4. Dataset
The dataset collection and generation processes drew on the Robmarkcole and Glenn-Jocher databases, as shown in Table 1 [34,35]. For the experiment, we used a secondary dataset that contains a collection of indoor and outdoor fire images [36]. The dataset provides two folders: train and val. The full fire image dataset was split into training (75%) and test (25%) sets. The train folder was for the training images, and the val folder was for image validation. Both folders contained a set of unlabeled as well as labeled images to train, test, and validate the model. Additionally, since we did not have access to wildfires, we used YouTube videos containing different shapes and types of fire to test our model and check its accuracy.
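A minimal sketch of such a 75/25 split into YOLOv5-style train/val folders is shown below; the source and destination paths are hypothetical, and label files are omitted for brevity.

```python
# Hedged sketch of the 75/25 train/val split described above, assuming a flat
# directory of fire images (all paths are placeholders).
import random
import shutil
from pathlib import Path

src = Path("fire_images")   # hypothetical source folder of collected images
dst = Path("dataset")       # YOLOv5-style layout: dataset/train, dataset/val

images = sorted(src.glob("*.jpg"))
random.seed(42)             # fixed seed so the split is reproducible
random.shuffle(images)

split = int(0.75 * len(images))  # 75% train, 25% val
for subset, files in (("train", images[:split]), ("val", images[split:])):
    out = dst / subset
    out.mkdir(parents=True, exist_ok=True)
    for f in files:
        shutil.copy(f, out / f.name)
```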
3.4.2. YOLOv5
YOLOv5 is a recently released CNN that distinguishes between static and moving objects in real time, with notable performance and good accuracy. This model processes the full image region using a single neural network, divides it into different components, and then predicts the candidate bounding boxes and probabilities for each component. The YOLOv5 network is an evolution of the YOLOv1-YOLOv4 networks and is composed of three architectures: the head, which generates YOLO layers for multi-scale prediction; the neck, which enhances information flow based on the path aggregation network (PANet); and the backbone, based on cross-stage partial (CSP) connections integrated into Darknet [37,38]. The data are given to CSPDarknet for feature extraction before being transferred to PANet for feature fusion. As seen in Figure 2, the YOLO layer uses three independent feature maps to produce detection results (class, score, location, and size).
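For illustration, the sketch below loads a custom-trained YOLOv5 checkpoint through the ultralytics/yolov5 torch.hub interface [35] and filters detections by confidence. The weights path, input frame, and 0.5 threshold are assumptions for the example, not our exact configuration.

```python
# Illustrative sketch of running a retrained YOLOv5 fire model on one frame.
import torch

# "best.pt" is a placeholder for the retrained fire-detection weights.
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

results = model("frame.jpg")            # accepts image paths, arrays, or PIL images
detections = results.pandas().xyxy[0]   # columns: xmin, ymin, xmax, ymax, confidence, class, name
fires = detections[detections["confidence"] > 0.5]  # keep confident fire boxes
print(fires)
```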
Figure 2. YOLOv5 network structure [39].
We used the Raspberry Pi 4 Model B to detect smell and sight using the MQ-2 flammable gas and smoke sensor and a Logitech C920 webcam. Due to its portability, we were able to put this device anywhere to detect smoke and fire; therefore, it could also be installed in the forest. After detecting a fire with the MQ-2 flammable gas and smoke sensor, the device records it with the Logitech C920 webcam and sends the video to the personal computer specified in Table 2. The computer then uses the Anaconda console to process the video and automatically detects whether there is a fire through YOLOv5. Once the fire is confirmed, the video is sent to the fire department for further action.
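A hedged sketch of that final notification step is shown below. The endpoint URL and payload fields are hypothetical placeholders; the paper does not specify the fire department's interface.

```python
# Hedged sketch: post the confirmed fire clip to a (hypothetical) alert endpoint.
import requests

def notify_fire_department(video_path: str, location: str) -> None:
    # URL and field names are assumptions for illustration only.
    with open(video_path, "rb") as clip:
        response = requests.post(
            "https://example-fire-department.org/api/alerts",
            data={"location": location},
            files={"video": clip},
            timeout=30,
        )
    response.raise_for_status()  # surface transport errors instead of silently failing

notify_fire_department("detected_fire.mp4", "hypothetical forest test site")
```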
The proposed method detected fire accidents, even multiple fires and flames, in both indoor and outdoor environments.

As seen in Figure 4, the confidence level was mostly above 80%. This is because we retrained the data to obtain a better output. We retrained the data on YOLOv5x, initially setting the batch size to 16 and the number of epochs to 3, and observed that the accuracy level was average. There was a big difference between YOLOv5s and YOLOv5x: had we kept the pre-trained data, we would have obtained approximately 30%, whereas, as can be seen, retraining produced a big jump in confidence level.

Figure 4. Visible experiment results in forest fire scenes.

Figure 5. Visible experiment results in indoor and outdoor environments with fire scenes.
Figure 5 shows that, after increasing the number of epochs to 10, the confidence level went up a little. This research shows and proves that the more you train the data, the better the outcome. These fire detection methods are evaluated through several calculations. The IoU, or Jaccard Index, is used to determine whether a prediction correctly matches an object; it is defined as the intersection of the predicted box and the actual box divided by their union [42,43]. In other words, it is an effective metric for evaluating detection results, defined as the area of overlap between the detected fire region and the ground truth divided by the area of union between the detected fire region and the ground truth (1):

IoU = (groundTruth ∩ prediction) / (groundTruth ∪ prediction)    (1)
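As a worked example of Equation (1), the small function below computes IoU for two axis-aligned boxes in (xmin, ymin, xmax, ymax) form; the sample boxes are arbitrary toy values.

```python
# Worked sketch of Equation (1): IoU between a predicted and a ground-truth box.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero area if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # 400 / 2800 ≈ 0.143
```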
The FM score and IoU values range between 0 and 1, and these metrics reach their best values at 1.
To understand mAP, the precision and recall calculations are needed, as we detailed in previous research [44]. TP stands for "true positives", FP stands for "false positives", and FN stands for "false negatives". Precision, the proportion of true positive predictions among all positive predictions, is the positive predictive value. The average precision and recall rates of the fire detection techniques can be calculated using the following equations:
Precision = TP / (TP + FP)    (2)
Recall = TP / (TP + FN)
We need to determine the AP for each class in order to calculate the mAP; however, we have only one class. A precision–recall (PR) curve is obtained by plotting these precision and recall values, and the average precision (AP) is the area under the PR curve. The PR curve has a zig-zag shape, as recall rises steadily while precision generally falls with intermittent increases. The AP, which in VOC 2007 was defined as the mean of the precision values at a set of 11 equally spaced recall levels [0, 0.1, ..., 1] (0 to 1 at a step size of 0.1), describes the shape of the precision–recall curve rather than the AUC. In VOC 2010, however, the computation of the AP changed so that, instead of just 11 points, all points are taken into account [45].
AP = (1/11) ∑_{r ∈ {0, 0.1, ..., 1}} p_interp(r)    (3)
To interpolate the precision at each recall level r, the greatest precision measured at any recall level exceeding r is used.
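A worked sketch of Equation (3) follows; the precision–recall points are toy values from a hypothetical fire detector, not measured results.

```python
# Worked sketch of Equation (3): 11-point interpolated AP (VOC 2007 style).
import numpy as np

def ap_11_point(recalls, precisions):
    recalls = np.asarray(recalls, dtype=float)
    precisions = np.asarray(precisions, dtype=float)
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 11):  # r in {0, 0.1, ..., 1}
        mask = recalls >= r
        # p_interp(r): greatest precision at any recall level of at least r.
        p_interp = precisions[mask].max() if mask.any() else 0.0
        ap += p_interp / 11.0
    return ap

# Toy PR points from a hypothetical detector.
print(ap_11_point([0.2, 0.4, 0.6, 0.8], [1.0, 0.9, 0.7, 0.5]))  # ~0.655
```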
Using quantitative and qualitative performance data, we compare the robustness of previously introduced publications against the proposed approach across several classifications, as shown in Table 3. According to the scores, the proposed approach degraded when distant and small-region flames occurred, but it successfully distinguished fake or non-fire scenery from actual fires with fast processing times.
5. Limitations
Our research team faced many limitations due to the time it takes to train the data. We were not able to run enough training sessions because we had a limited number of computers and needed to conduct other research to improve the accuracy. Therefore, some of the images came out with a low confidence level; however, those were images containing small fire regions (Figure 6). Furthermore, YOLOv5 would mistakenly recognize red shirts or red blinking lights as fires. To efficiently identify the target data and address these difficulties, we are creating a sizable fire image dataset that includes fire and non-fire photos for model training and testing, utilizing data augmentation methodologies (see the sketch after Figure 6).
Figure 6. Small-size fire region detected images.
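The sketch below illustrates the kind of augmentation pipeline mentioned above for growing a fire/non-fire image set; the specific transforms and parameters are assumptions, not our exact recipe.

```python
# Hedged sketch: image-level augmentations for enlarging a fire image dataset.
# Note: for detection training, bounding-box labels must be transformed
# alongside the images; this classification-style pipeline omits that step.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3),
    transforms.RandomRotation(degrees=15),
    transforms.RandomResizedCrop(size=640, scale=(0.8, 1.0)),
])
# Usage: augmented = augment(pil_image) for each source image.
```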
The Internet of Things (IoT) has evolved into a free interchange of useful information between various real-world devices. Several technological issues must be overcome in order to improve fire detection and warning accuracy, which can be separated into five major issues: security and privacy, storage and cloud computing, energy, communication, and compatibility and standardization [46].
6. Conclusions
In this paper, we have introduced our new technology to reduce wildfires using AI and IoT devices and sensors. We believe that the proposed system can be used effectively to curb the rapid worsening of the world climate crisis and the loss of lives. The system can be installed in forests to detect smoke, let the AI model pinpoint the exact fire location, and notify the fire department so that the fire cannot continue for days. Finally, we hope that this technology will be effective in other countries to prevent wildfires worldwide.

Recent studies have shown that, in order to promote safety in our daily lives, it is critical to quickly identify fire accidents in their early phases. As a result, we hope to carry out more research in this area and enhance our findings. Our goal is to identify fire occurrences in real time with fewer false positives using the YOLOv6 and YOLACT models. Future goals include improving the accuracy of the approach and addressing wrongly detected situations where regions share the same color as fire. Using 3D CNN and 3D U-Net in the IoT environment, we intend to create a compact model with reliable fire-detection performance and without communication issues.
Author Contributions: This manuscript was designed and written by K.A. and A.E.H. A.B.A. conceived the main idea of this study. A.A.S.S. wrote the program and conducted all experiments. A.K. and Y.I.C. supervised the study and contributed to the analysis and discussion of the algorithm and the experimental results. All authors have read and agreed to the published version of the manuscript.

Funding: This study was funded by the Korea Agency for Technology and Standards in 2022 (project numbers K_G012002073401 and K_G012002234001) and by the Gachon University research fund of 2021 (GCU-202106340001).

Data Availability Statement: Not applicable.

Acknowledgments: The authors would like to express their sincere gratitude and appreciation to the supervisor, Young Im Cho (Gachon University), for her support, comments, remarks, and engagement over the period in which this manuscript was written. Moreover, the authors would like to thank the editor and anonymous referees for their constructive comments toward improving the contents and presentation of this paper.

Conflicts of Interest: The authors declare no conflict of interest.
References
1. Korea Forest Service 2019, Korea Forest Service Website, Korean Government. Available online: https://english.forest.go.kr
(accessed on 10 November 2022).
2. Nairobi 2022, United Nations Environment Programme Website. Available online: https://www.unep.org (accessed on 10 November 2022).
3. Korean Statistical Information Service. Available online: http://kosis.kr (accessed on 10 August 2021).
4. Mukhiddinov, M.; Muminov, A.; Cho, J. Improved Classification Approach for Fruits and Vegetables Freshness Based on Deep
Learning. Sensors 2022, 22, 8192. [CrossRef] [PubMed]
5. Larsen, A.; Hanigan, I.; Reich, B.J.; Qin, Y.; Cope, M.; Morgan, G.; Rappold, A.G. A deep learning approach to identify smoke
plumes in satellite imagery in near-real time for health risk communication. J. Expo. Sci. Environ. Epidemiol. 2021, 31, 170–176.
[CrossRef] [PubMed]
6. Toan, N.T.; Thanh Cong, P.; Viet Hung, N.Q.; Jo, J. A deep learning approach for early wildfire detection from hyperspectral
satellite images. In Proceedings of the 2019 7th International Conference on Robot Intelligence Technology and Applications
(RiTA), Daejeon, Korea, 1–3 November 2019; pp. 38–45. [CrossRef]
7. Li, P.; Zhao, W. Image fire detection algorithms based on convolutional neural networks. Case Stud. Therm. Eng. 2020, 19, 100625.
[CrossRef]
8. Tang, Z.; Liu, X.; Chen, H.; Hupy, J.; Yang, B. Deep Learning Based Wildfire Event Object Detection from 4K Aerial Images
Acquired by UAS. AI 2020, 1, 166–179. [CrossRef]
9. Toulouse, T.; Rossi, L.; Celik, T.; Akhloufi, M. Automatic fire pixel detection using image processing: A comparative analysis of
rule-based and machine learning-based methods. Signal Image Video Process. 2016, 10, 647–654. [CrossRef]
10. Jiang, Q.; Wang, Q. Large space fire image processing of improving canny edge detector based on adaptive smoothing. In
Proceedings of the 2010 International Conference on Innovative Computing and Communication and 2010 Asia-Pacific Conference
on Information Technology and Ocean Engineering, Macao, China, 30–31 January 2010; pp. 264–267.
11. Celik, T.; Demirel, H.; Ozkaramanli, H.; Uyguroglu, M. Fire detection using statistical color model in video sequences. J. Vis.
Commun. Image Represent. 2007, 18, 176–185. [CrossRef]
12. Dimitropoulos, K.; Barmpoutis, P.; Grammalidis, N. Spatio temporal flame modeling and dynamic texture analysis for automatic
video-based fire detection. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 339–351. [CrossRef]
13. Park, M.; Ko, B.C. Two-Step Real-Time Night-Time Fire Detection in an Urban Environment Using Static ELASTIC-YOLOv3 and
Temporal Fire-Tube. Sensors 2020, 20, 2202. [CrossRef] [PubMed]
14. Abdusalomov, A.B.; Islam, B.M.S.; Nasimov, R.; Mukhiddinov, M.; Whangbo, T.K. An Improved Forest Fire Detection Method
Based on the Detectron2 Model and a Deep Learning Approach. Sensors 2023, 23, 1512. [CrossRef]
15. Muhammad, K.; Ahmad, J.; Mehmood, I.; Rho, S.; Baik, S.W. Convolutional Neural Networks Based Fire Detection in Surveillance
Videos. IEEE Access 2018, 6, 18174–18183. [CrossRef]
16. Pan, H.; Badawi, D.; Cetin, A.E. Computationally Efficient Wildfire Detection Method Using a Deep Convolutional Network
Pruned via Fourier Analysis. Sensors 2020, 20, 2891. [CrossRef]
17. Li, T.; Zhao, E.; Zhang, J.; Hu, C. Detection of Wildfire Smoke Images Based on a Densely Dilated Convolutional Network.
Electronics 2019, 8, 1131. [CrossRef]
18. Kim, B.; Lee, J. A Video-Based Fire Detection Using Deep Learning Models. Appl. Sci. 2019, 9, 2862. [CrossRef]
19. Wu, S.; Zhang, L. Using popular object detection methods for real time forest fire detection. In Proceedings of the 11th International
Symposium on Computational Intelligence and Design (SCID), Hangzhou, China, 8–9 December 2018; pp. 280–284.
20. Imran; Iqbal, N.; Ahmad, S.; Kim, D.H. Towards Mountain Fire Safety Using Fire Spread Predictive Analytics and Mountain Fire
Containment in IoT Environment. Sustainability 2021, 13, 2461. [CrossRef]
21. Gagliardi, A.; Saponara, S. AdViSED: Advanced Video SmokE Detection for Real-Time Measurements in Antifire Indoor and
Outdoor Systems. Energies 2020, 13, 2098. [CrossRef]
22. Xu, R.; Lin, H.; Lu, K.; Cao, L.; Liu, Y. A Forest Fire Detection System Based on Ensemble Learning. Forests 2021, 12, 217. [CrossRef]
23. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. Adv. Neural
Inf. Process. Syst. 2015, 28. [CrossRef]
24. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Computer
Vision—ECCV 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; ECCV 2016. Lecture Notes in Computer Science; Springer:
Cham, Switzerland, 2016; Volume 9905. [CrossRef]
25. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
26. Abdusalomov, A.; Baratov, N.; Kutlimuratov, A.; Whangbo, T.K. An improvement of the fire detection and classification method
using YOLOv3 for surveillance systems. Sensors 2021, 21, 6519. [CrossRef]
27. Avazov, K.; Mukhiddinov, M.; Makhmudov, F.; Cho, Y.I. Fire Detection Method in Smart City Environments Using a Deep
Learning-Based Approach. Electronics 2021, 1, 73. [CrossRef]
28. Sisias, G.; Konstantinidou, M.; Kontogiannis, S. Deep Learning Process and Application for the Detection of Dangerous Goods
Passing through Motorway Tunnels. Algorithms 2022, 15, 370. [CrossRef]
29. Voudiotis, G.; Moraiti, A.; Kontogiannis, S. Deep Learning Beehive Monitoring System for Early Detection of the Varroa Mite.
Signals 2022, 3, 506–523. [CrossRef]
30. Kontogiannis, S.; Asiminidis, C. A Proposed Low-Cost Viticulture Stress Framework for Table Grape Varieties. IoT 2020, 1,
337–359. [CrossRef]
31. Ahrens, M.; Maheshwari, R. Home Structure Fires; National Fire Protection Association: Quincy, MA, USA, 2021.
32. Mukhiddinov, M.; Abdusalomov, A.B.; Cho, J. Automatic Fire Detection and Notification System Based on Improved YOLOv4 for
the Blind and Visually Impaired. Sensors 2022, 22, 3307. [CrossRef]
33. Abdusalomov, A.B.; Mukhiddinov, M.; Kutlimuratov, A.; Whangbo, T.K. Improved Real-Time Fire Warning System Based on
Advanced Technologies for Visually Impaired People. Sensors 2022, 22, 7305. [CrossRef]
34. Robmarkcole 2022, Fire-Detection-from-Images, Github. Available online: https://github.com/robmarkcole/fire-detection-from-
images (accessed on 10 November 2022).
35. Glenn Jocher 2022, Yolov5, Github. Available online: https://github.com/ultralytics/yolov5 (accessed on 10 November 2022).
36. Valikhujaev, Y.; Abdusalomov, A.; Cho, Y.I. Automatic fire and smoke detection method for surveillance systems based on dilated
CNNs. Atmosphere 2020, 11, 1241. [CrossRef]
37. Redmon, J. Darknet: Open-Source Neural Networks in C. 2013–2016. Available online: http://pjreddie.com/darknet/ (accessed
on 22 October 2022).
38. Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020,
arXiv:2004.10934.
39. Mukhiddinov, M.; Abdusalomov, A.B.; Cho, J. A Wildfire Smoke Detection System Using Unmanned Aerial Vehicle Images Based
on the Optimized YOLOv5. Sensors 2022, 22, 9384. [CrossRef] [PubMed]
40. Safarov, F.; Temurbek, K.; Jamoljon, D.; Temur, O.; Chedjou, J.C.; Abdusalomov, A.B.; Cho, Y.-I. Improved Agricultural Field
Segmentation in Satellite Imagery Using TL-ResUNet Architecture. Sensors 2022, 22, 9784. [CrossRef]
41. Sharma, A. Training the YOLOv5 Object Detector on a Custom Dataset. 2022. Available online: https://pyimg.co/fq0a3 (accessed
on 22 October 2022).
42. Ayvaz, U.; Gürüler, H.; Khan, F.; Ahmed, N.; Whangbo, T. Automatic speaker recognition using mel-frequency cepstral coefficients
through machine learning. Comput. Mater. Contin. 2022, 71, 5511–5521. [CrossRef]
43. Nodirov, J.; Abdusalomov, A.B.; Whangbo, T.K. Attention 3D U-Net with Multiple Skip Connections for Segmentation of Brain
Tumor Images. Sensors 2022, 22, 6501. [CrossRef]
44. Abdusalomov, A.; Whangbo, T.K. An improvement for the foreground recognition method using shadow removal technique for
indoor environments. Int. J. Wavelets Multiresolut. Inf. Process. 2017, 15, 1750039. [CrossRef]
45. AlZoman, R.M.; Alenazi, M.J.F. A Comparative Study of Traffic Classification Techniques for Smart City Networks. Sensors 2021,
21, 4677. [CrossRef] [PubMed]
46. Pereira, F.; Correia, R.; Pinho, P.; Lopes, S.I.; Carvalho, N.B. Challenges in Resource-Constrained IoT Devices: Energy and
Communication as Critical Success Factors for Future IoT Deployment. Sensors 2020, 20, 6420. [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.