Signal Processing
journal homepage: www.elsevier.com/locate/sigpro
Article history: Received 16 June 2021; Revised 20 August 2021; Accepted 30 August 2021; Available online 31 August 2021

Keywords: Computer vision; Deep learning; Aerial images processing; Wildfire detection system; Smoke detection system; Unmanned aerial vehicle

Abstract: Wildfire is one of the most critical natural disasters threatening wildlands and forest resources. Traditional firefighting systems, which are based on ground crew inspection, have several limitations and can expose firefighters' lives to danger. Thus, remote sensing technologies have become one of the most demanded strategies to fight wildfires, especially UAV-based remote sensing technologies. They have been adopted to detect forest fires at their early stages, before they become uncontrollable. Autonomous early wildfire detection from UAV-based visual data using different deep learning algorithms has attracted significant interest in the last few years. To this end, in this paper we focus on detecting wildfires at their early stages in forest and wildland areas, using deep learning-based computer vision algorithms, to prevent and then reduce disastrous losses in terms of human lives and forest resources.

© 2021 Elsevier B.V. All rights reserved.
https://doi.org/10.1016/j.sigpro.2021.108309
A. Bouguettaya, H. Zarzour, A.M. Taberkit et al. Signal Processing 190 (2022) 108309
Fig. 1. UAV-based remote sensing system flowchart for forest fire detection and concerned authorities’ notifications.
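The workflow in Fig. 1 (frames captured by the UAV, fire/smoke detection, then notification of the concerned authorities) can be summarized as a simple monitoring loop. This is an illustrative sketch only: `Detection`, `monitor`, and the `detector`/`notify` callbacks are hypothetical placeholders, not an API from any of the reviewed systems.

```python
# Hypothetical sketch of the Fig. 1 pipeline: each geotagged frame is run
# through a fire/smoke detector, and any confident detection triggers a
# notification carrying the UAV's current position (e.g. over LoRaWAN/5G,
# as discussed in the text). All names here are placeholders.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # "flame" or "smoke"
    confidence: float  # detector score in [0, 1]

def monitor(frames, detector, notify, threshold=0.5):
    """Run the detector on each (gps, frame) pair; report confident hits."""
    alerts = []
    for gps, frame in frames:
        for det in detector(frame):
            if det.confidence >= threshold:
                alerts.append((gps, det.label))
                notify(gps, det.label)  # alert the concerned authorities
    return alerts
```

The loop deliberately separates detection from notification, mirroring the flowchart's split between the onboard vision module and the communication link.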
detection of wildfires is fundamental to ensure that fire remains manageable [42]. Several advanced techniques have been proposed over the years to help official authorities and first responders identify wildfires at their early stages and allocate the right resources to extinguish them. Most of them are based on terrestrial and spatial technologies, including watchtowers equipped with various sensors and satellite platforms. However, these methods face several limitations that can reduce fire detection performance. Watchtowers suffer from a limited range of view and high construction cost, and they are also very exposed to destruction by the fire, causing additional costs. Satellites, by contrast, provide a very large field of view, but they have limitations in terms of cost, flexibility, and spatial/temporal image resolution, making timely fire spot detection very difficult [3,89].

Recently, UAV-based early wildfire detection and warning systems that integrate various remote sensing technologies and deep learning-based computer vision techniques have emerged as promising technologies for wildfire monitoring [47,64,89] (Fig. 1). Instead of sending ground crews to dangerous environments or using classical techniques that have many limitations in terms of cost and efficiency, UAVs equipped with visual remote sensing technologies have been proposed as new and promising tools that could help in monitoring and fighting wildfires. Combining UAVs and deep learning architectures can be very useful to detect fires at their early stages and send valuable information to the concerned authorities using efficient communication technologies, including LoRaWAN and 5G [43,49]. In the last few years, several deep learning-based fire and smoke detection algorithms have been proposed, achieving impressive results. Most of the developed detection algorithms are based on Convolutional Neural Networks (CNNs), including different versions of YOLO [15,65–67], R-CNN and its variants [33,34,68], SSD [59], U-Net [70], and DeepLab [25]. Other deep learning architectures can also be used for fire detection, such as Long Short-Term Memory (LSTM) [41], Deep Belief Networks [40], and Generative Adversarial Networks (GAN) [35]. However, these algorithms demand powerful hardware to be executed in real-time. Therefore, recent technological advances in terms of processing power, sensing devices, and development software are making wildfire detection using powerful deep learning-based computer vision algorithms on UAV platforms possible. Nowadays, UAVs can detect, localize, and notify the concerned authorities in a small amount of time.

In this study, we aim to provide the most reliable techniques, based on deep learning and UAV technologies, that could help in fighting wildfires at their early stages, before they become uncontrollable. The contributions of this paper are as follows:

• Presenting the influence of recent UAV-based visual remote sensing technologies and deep learning-based computer vision algorithms on improving firefighting by detecting fires in forests and wildlands at their early stages.

• Helping researchers and firefighters decide what remote sensing and what algorithms they should use according to the
Fig. 2. Number of published documents by year from 2013 to early 2021 on the topics related to early wildfire detection, UAV imagery, and deep learning algorithms (a)
from SCOPUS database, (b) from Google Scholar database.
structure of the covered areas and the mission they aim to achieve.

• Discussing different UAV-based fire/smoke detection difficulties, including the variations of smoke/fire appearance and the choice of architecture, among others.

In the literature review process, we performed a systematic search on the Scopus and Google Scholar databases with the keywords "deep learning", "wildfire", "forest fire", "smoke", "drone", and "UAV". We obtained a total of 670 papers on Scopus and 620 papers on Google Scholar relevant to the topic of early wildfire detection from UAV imagery using deep learning algorithms. Next, we manually selected the most relevant papers and excluded unrelated ones by applying inclusion/exclusion criteria, obtaining a total of around 40 strongly related articles. Fig. 2 shows the number of papers published per year in this field from 2013 to early 2021; the number has increased dramatically since 2018. The statistics in Fig. 2 represent the number of papers that target wildfire detection using UAV technologies, whether using deep learning algorithms or other techniques. However, in this paper, we reviewed only the papers that use both UAVs and deep learning algorithms.

Fig. 3 shows the number of published papers for 10 countries (from the Scopus database), where the United States takes the lead with more than 194 scientific papers from 2013 to early 2021, followed by China with 146 papers.

The rest of the paper is organized as follows. In Section 2, we present the different visual remote sensing technologies used for wildfire detection. In Section 3, we present the different deep learning algorithms used to detect wildfires from images/videos collected through cameras mounted on UAV platforms. Sections 4 and 5 are dedicated to the discussion and conclusions.

2. Vision-based remote sensing technologies

Visual data from UAV-based remote sensing technologies provide valuable information to those fighting destructive wildfires. This information could be employed to save human lives and forest resources. Thus, it could be used to control fire spreading by identifying the most vulnerable regions. Moreover, search and rescue operations could be carried out to help people trapped in the middle of the forest and save animals while keeping firefighters safe. To this end, several studies have targeted early fire detection in wildlands and forests from visual data collected through different platforms equipped with different types of cameras and visual sensors.

There are three main remote sensing-based methods to detect fire/smoke in forest and wildland areas. Satellites are considered the most used remote sensing technology for many forestry applications. Several studies have adopted satellite imagery to detect wildfires and fire smoke in forest regions, which could help reduce their risks [1,22,32,50]. However, satellite images are not the best solution for early forest fire detection due to their low spatial resolution, which makes small fire spot detection very difficult or impossible in most cases [56]. The satellites' temporal resolution is another major limitation that restricts forest monitoring efficacy, as they are not always available to provide continuous information about the forest state [56,61]. Moreover, cloudy and bad weather conditions prevent satellites from collecting clear data of the forests [11,27,80].

Advanced high-resolution fixed cameras mounted on the ground are another available solution to monitor forest fires. Terrestrial early wildfire detection systems are mainly based on optical/thermal cameras that are mostly mounted on watchtowers. These methods were adopted by many researchers and authorities to detect forest fires, such as in Govil et al. [36]. Most of the time, terrestrial techniques combine visual sensors with other types of sensors, like humidity, smoke, and temperature sensors, to improve fire/smoke detection performance. These sensors may work very well in closed environments, like buildings, but they struggle in open spaces like forests because they need to be in proximity to the fire or smoke. Moreover, they are not able to provide some valuable information, such as the fire size and location. Similarly, on-ground cameras, including those mounted on watchtowers, can cover only limited areas and need to be placed carefully to ensure adequate visibility. Therefore, a very large number of sensors must be installed to cover a whole forest area, which makes this approach very expensive.

Unmanned Aerial Vehicle (UAV) platforms have emerged as new, efficient technologies that combine the advantages of satellites and on-ground systems. They can cover larger areas than terrestrial techniques and can provide images with higher spatial and temporal resolutions than satellites. Moreover, their operational cost is much lower than that of satellite and terrestrial technologies. Thus, UAVs equipped with adequate remote sensing technologies are considered the best choice for wildfire disaster monitoring. UAVs use different types of sensors to collect valuable information about the forest state. Using the right information in the right way could help UAVs identify fire areas and inform the concerned authorities at the right time, allowing a reduction in wildfire losses and risks. In this section, we aim to present the most used cameras for forest fire detection, monitoring, and fighting.

2.1. Optical cameras

The majority of remote sensing cameras mounted on UAV platforms can acquire only the visible bands, which range from around 400 nm to about 700 nm; we call them optical (or RGB) cameras.
Fig. 3. Number of published documents by country from 2013 to early 2021 on the topic of early wildfire detection from UAV imagery using deep learning algorithms (from
SCOPUS database).
Their wide use is due to many factors such as low cost, high spatial resolution, ease of use, and light weight [27]. These advantages make optical cameras very suitable for small-size UAVs, especially for low-cost forestry applications. They can be helpful for object detection from UAV imagery in various ecological tasks, including fire/smoke identification. UAVs equipped with optical cameras are capable of capturing high-resolution images that can be used to detect wildfire smoke and flames at their early stages easily, especially with cameras that have good visibility characteristics [11,29,36]. However, most of these cameras have a limited field of view, which obliges us to use more than one camera or take many photos to cover a larger field of view [11].

Recently, other optical cameras were developed to overcome some limitations of conventional optical cameras. The authors in Barmpoutis and Stathaki [10] and Barmpoutis et al. [11] adopted a newly introduced CMOS 360° optical camera mounted on a UAV platform to capture unlimited-field-of-view images for early forest fire detection. Converting the equirectangular projections acquired with the 360° cameras to stereographic projections can reduce false wildfire detections, as the region of interest is always located at the center of the image [11]. In 2009, Microsoft released another special type of optical sensor, called the RGB-D camera [86], to solve the problem of recovering depth information from 2D images [83]. This type of camera combines an RGB camera with a depth sensor to capture 2D images and calculate the distance between the targeted object and the UAV. An RGB-D camera was adopted in Novac et al. [64] to identify forest fire properties, such as the fire's height and exact size. However, optical cameras still face several issues: it is impossible to detect wildfire smoke at night and very hard to detect wildfire flames in dense forests, where they can be hidden by tall trees. Moreover, visible camera sensors are very sensitive to environmental conditions, such as sunlight angle, clouds, and shadows.

2.2. Thermal infrared cameras

Recent advances in camera sensor technologies have helped to develop robust, lightweight thermal cameras at competitive prices, which makes it possible to install such impressive cameras on UAV platforms easily. They are capable of capturing different levels of temperature. Nowadays, thermal camera technologies are widely used as a new solution for wildfire monitoring to overcome some limitations of optical cameras. They are able to measure the thermal radiation emitted by objects, making them more suitable than optical cameras for early fire detection. Optical cameras can confuse fire with other similar objects that have the same color and cannot detect hidden fire flames in dense forests. Thermal cameras, however, turn UAVs into an impressive tool that is independent of light and able to detect covered wildfire flames through the thermal radiation emitted by the fire within the Middle Wavelength InfraRed (MWIR) and Long Wavelength InfraRed (LWIR) spectral ranges (Fig. 4) [9,75]. Sousa et al. [75] explored the effectiveness of thermal images, acquired from static and UAV platforms, in detecting fire outbreaks. The authors in Shamsoshoara et al. [72] used a DJI Matrice 200 equipped with an infrared camera to collect thermal heatmaps, providing a valuable heat distribution dataset. Thus, it could be helpful to train a deep learning model to detect hidden flames and to improve early forest fire detection thanks to the thermal images' characteristics. Thermal cameras mounted on UAV platforms can solve several limitations of optical cameras, but they come with their own challenges and limitations, including thermal distance problems and low spatial resolution [89,94].

Combining data gathered from thermal, optical, or other types of sensors is another solution for accurate early wildfire detection. Recently, sensor fusion has emerged as one of the most important topics, widely used in different fields, including autonomous vehicles, agricultural applications, and even wildfire detection from UAV platforms. This discipline can improve early wildfire detection accuracy by combining the information collected through multiple types of sensors. Benzekri et al. [13] adopted a sensor fusion method using a network of wireless sensors to measure different parameters, including temperature and the amount of carbon monoxide. The collected data were processed using deep learning algorithms to decide whether a forest fire had been identified.
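The decision-level fusion idea described above can be sketched as follows. This is a minimal illustration under assumed values: the hotspot temperature threshold (150 °C) and the fixed fusion weights are invented for the example; studies such as Benzekri et al. [13] learn the decision with deep models rather than hand-set weights.

```python
# Toy late (decision-level) sensor fusion: an RGB smoke score and a
# thermal hotspot score are combined into a single fire score. The
# threshold and weights below are assumptions for illustration only.
def hotspot_score(thermal_celsius, ignition_temp=150.0):
    """Fraction of thermal pixels above an (assumed) hotspot threshold."""
    hot = sum(1 for t in thermal_celsius if t >= ignition_temp)
    return hot / len(thermal_celsius)

def fuse(rgb_smoke_score, thermal_frame, w_rgb=0.4, w_thermal=0.6):
    """Weighted late fusion of the two modality scores into one score."""
    return w_rgb * rgb_smoke_score + w_thermal * hotspot_score(thermal_frame)
```

Weighting the thermal channel more heavily reflects the text's point that thermal radiation is less easily confused with fire-colored objects than RGB appearance is.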
Fig. 4. (a) Optical camera-based image, (b) Thermal camera-based image [82].
To reduce the false detection rate, they used a UAV platform to survey the regions of interest. Also, in Kanand et al. [49], the authors used a VTOL platform hovering above the forest, equipped with optical and thermal cameras for wildfire detection. The RGB camera was used for smoke detection in the daytime, while the thermal camera was adopted to identify hotspots at night.

3. UAV and deep learning-based computer vision algorithms for early wildfire detection

Computer vision is the science that gives machines, including UAVs, the ability to perceive their environment visually and respond to it according to the targeted mission. It is inspired by the biological visual system, where the eyes are replaced by camera sensors and the brain by computer vision algorithms, which allow UAVs to extract meaningful information from digital images/videos acquired through cameras. There are two main types of computer vision techniques adopted for wildfire detection. The first is traditional machine learning-based methods, which rely on handcrafted feature extraction and color transformations [9,44]. However, manual feature selection and engineering take a very long time and need domain experts to select the valuable features that can make machine learning algorithms more efficient. Also, these techniques struggle with complex problems, such as fire detection in dense forests with a cluttered background. The second type is based on deep learning algorithms, which can extract relevant and strong features automatically. Computer vision is not a new topic, and it has been researched for many decades. However, it achieved state-of-the-art results only recently with the advance of Convolutional Neural Network (CNN) architectures and new hardware (GPUs) and software (TensorFlow, PyTorch, Keras) technologies. Today, deep learning architectures are revolutionizing the world, enabling machines to accomplish difficult and complex tasks that were impossible a few years ago. Thanks to recent advances in computing speed and sensing technologies, these architectures, especially CNNs, achieve state-of-the-art results on the most complex problems of image processing and computer vision. However, computer vision techniques face various difficulties and challenges that can affect their performance. Among these challenges, we find viewpoint variation, changing light conditions, flame/smoke appearance variations, scale issues, occlusion, clutter and dense environments, and object class variations, to name a few. Even with all of these difficulties, deep learning-based computer vision techniques have recently achieved impressive results in many fields, including vehicle detection [12], facial recognition [39], self-driving cars [79], and plant disease identification [71], among others.

Recently, Unmanned Aerial Vehicles (UAVs) have been increasingly used in various forestry applications, including forest scouting, search and rescue operations, forest resources surveying, and forest fire fighting. They could be one of the most powerful innovative tools to solve such problems. The choice of UAV platforms over other available technologies is due to several properties like low cost, high flexibility, the ability to fly at different altitudes, and ease of use. Moreover, thanks to recent advances in hardware and software technologies, it is possible to process heavy and complex visual data on the UAV itself. In recent years, fire and smoke detection in wildlands and forests using deep learning-based computer vision techniques has attracted a lot of interest. Two main visual features, flame and smoke, can help UAVs identify wildfire sources autonomously using deep learning algorithms. Flame and smoke are the most important visual features for early and precise wildfire detection. Some studies have focused on fire detection through flame [37,64]. Other studies have targeted fire detection by smoke [2,87], which seems more suitable for early detection, because the fire at its early stage could be hidden, especially in dense forests [42]. Recently, many studies have focused on detecting both flame and smoke at the same time to overcome some limitations of targeting only one object (flame or smoke). Early wildfire detection using UAVs and deep learning algorithms can be achieved in three main ways: wildfire image classification, wildfire detection based on object detection algorithms, and semantic segmentation-based wildfire detection. However, these techniques need a very large amount of data and high processing power in the training process. Also, we need to choose the right architecture carefully and decide how to train it with the right data. Therefore, in this section, we aim to present the state-of-the-art deep learning algorithms adopted for the early identification of wildfires.

3.1. Image classification-based methods

Image classification-based methods rely on classifying input images into different categories, such as images that contain fire instances or not (Fig. 5). Deep CNN architectures are the best choice for the image classification task [92] due to their ability to extract highly representative features from 2D images. Over the years, different CNN architectures were developed that achieved an accuracy higher than human level. Recently, several studies have adopted CNNs to classify UAV-based forest fire images. The authors in Srinivas and Dua [76] proposed applying a basic CNN
Table 1
Studies targeting early wildfire detection using deep learning-based image classification.
Ref | Method | Flame/Smoke | Camera type | Hardware | Accuracy (%) | FPS | Processing speed (s)
Lee et al. [56] | AlexNet | Flame/Smoke | RGB images | GTX Titan X | 94.8 | / | 7.7
Lee et al. [56] | GoogLeNet | Flame/Smoke | RGB images | GTX Titan X | 99 | / | 11.6
Lee et al. [56] | VGG-13 | Flame/Smoke | RGB images | GTX Titan X | 86.2 | / | 10.2
Lee et al. [56] | Modified GoogLeNet | Flame/Smoke | RGB images | GTX Titan X | 96.9 | / | 10
Lee et al. [56] | Modified VGG-13 | Flame/Smoke | RGB images | GTX Titan X | 96.2 | / | 7.9
Zhang et al. [87] | SVM-RAW (Train set 1) | Flame | RGB images | / | 92.2 | / | 0.16
Zhang et al. [87] | SVM-RAW (Train set 2) | Flame | RGB images | / | 74 | / | 0.16
Zhang et al. [87] | SVM-Pool5 (Train set 1) | Flame | RGB images | / | 95.6 | / | /
Zhang et al. [87] | SVM-Pool5 (Train set 2) | Flame | RGB images | / | 89 | / | /
Zhang et al. [87] | CNN-RAW (Train set 1) | Flame | RGB images | / | 93.1 | / | 2.1
Zhang et al. [87] | CNN-RAW (Train set 2) | Flame | RGB images | / | 88.6 | / | 2.1
Zhang et al. [87] | CNN-Pool5 (Train set 1) | Flame | RGB images | / | 97.3 | / | 1.4
Zhang et al. [87] | CNN-Pool5 (Train set 2) | Flame | RGB images | / | 90.1 | / | 1.4
Srinivas and Dua [76] | AlexNet-like CNN | Flame | RGB images | Tesla K80 | 95 | / | /
Chen et al. [26] | CNN-9 | Flame/Smoke | RGB images | / | 53 | / | /
Chen et al. [26] | CNN-9 (hm + na) | Flame/Smoke | RGB images | / | 61 | / | /
Chen et al. [26] | CNN-17 (hm + na) | Flame/Smoke | RGB images | / | 86 | / | /
Novac et al. [64] | VGG-16 | Flame | RGB-D | / | 99.74 | 1 | /
Novac et al. [64] | ResNet-50 | Flame | RGB-D | / | 99.38 | 16.4 | /
Novac et al. [64] | Inception v3 | Flame | RGB-D | / | 99.29 | 13.7 | /
Novac et al. [64] | DenseNet | Flame | RGB-D | / | 99.65 | 14 | /
Novac et al. [64] | NASNetMobile | Flame | RGB-D | / | 98.94 | 12 | /
Novac et al. [64] | MobileNet v2 | Flame | RGB-D | / | 99.47 | 19.2 | /
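The binary fire / no-fire classifiers summarized in Table 1 share one structure: convolutional feature extraction, pooling, and a sigmoid output. The following is a from-scratch NumPy sketch of that structure with a single toy filter, meant only to illustrate the decision head; the cited studies use full-scale networks such as AlexNet or VGG.

```python
# Toy forward pass of a binary fire classifier: conv -> ReLU -> global
# average pool -> sigmoid. A single hand-set filter stands in for the
# learned convolutional stack of a real CNN (illustration only).
import numpy as np

def conv2d_valid(img, kernel):
    """Single-channel 'valid' 2-D convolution (no padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def classify_fire(img, kernel, weight, bias):
    """Return a fire probability in (0, 1) for a single-channel image."""
    feat = np.maximum(conv2d_valid(img, kernel), 0.0)       # ReLU
    pooled = feat.mean()                                    # global average pool
    return 1.0 / (1.0 + np.exp(-(weight * pooled + bias)))  # sigmoid head
```

A sigmoid output above 0.5 is read as "fire", matching the binary set-up described for Srinivas and Dua [76].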
architecture to classify forest fire images. They stacked convolutional and pooling layers in an AlexNet-like architecture followed by a flattening layer and two dense layers, using a sigmoid activation function in the last one for binary classification. With such an architecture, they achieved an acceptable accuracy of 95% (Table 1). Similarly, the authors in Lee et al. [56] adopted five different CNN architectures to classify images captured by UAVs into fire and non-fire classes. The architectures used are AlexNet [55], GoogLeNet [78], VGG-Net [73], and modified versions of GoogLeNet and VGG-Net, which achieved accuracies of 94.8%, 99%, 86.2%, 96.9%, and 96.2%, respectively (Table 1). Chen et al. [27] proposed a CNN-based wildfire detection at its early stage using a hexacopter equipped with a Sony A7 optical camera. Before feeding the images to the developed CNN model, they passed them through preprocessing techniques like histogram equalization and non-linear filtering to enhance the data quality and reduce noise. The proposed model is capable of classifying fire and non-fire scenes using a nine-layer CNN, achieving good results. However, they trained and tested the developed model on a simulated dataset, which may not fit real scenarios. Chen et al. improved their work in Chen et al. [26] by adopting two 17-layer CNN architectures, one for smoke image classification and the second for flame image classification. Also, Zhang et al. [87] proposed a vision-based method to classify and give the exact localization of forest fires. This method can be mounted on UAV platforms to perform forest fire detection. They adopted two CNN architectures, where the first CNN classifies the whole input image as fire or not, while the second localizes the fire. However, image classification-based methods are the most basic applications and may struggle to classify images that contain only small spots of fire, making them less suitable for early wildfire detection.

3.2. Object detection-based methods

Unlike image classification, object detection algorithms are capable of identifying and localizing the object of interest in an input image/video by drawing a rectangular bounding box around the targeted object [74], which in our case is fire flames and smoke (Fig. 6). However, compared to the image classification task, object detection algorithms require more computational resources for both training and inference. Several object detection algorithms have been proposed over the past few years, achieving very good performance. These algorithms can be divided into two principal groups: two-stage and single-stage detectors. Two-stage, or region-based, algorithms consist of two main parts. Regions of interest that may contain fire instances are generated in the first stage, using selective search or a Region Proposal Network (RPN), while the second part is responsible for classifying each of these regions depending on whether the targeted object occurs in them.
Fig. 6. Example of object detection operation; (a) Flame and smoke detection, (b) Flame detection [42].
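The bounding-box detectors discussed in this section (YOLO, SSD, Faster R-CNN) all rely on intersection-over-union (IoU) to score box overlap and on non-maximum suppression (NMS) to merge duplicate flame/smoke boxes like those in Fig. 6. A generic sketch of these two steps, assuming boxes in (x1, y1, x2, y2) form; this is a standard formulation, not code from any reviewed paper.

```python
# Generic IoU + greedy NMS for axis-aligned boxes (x1, y1, x2, y2).
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep highest-scoring boxes, drop heavy overlaps."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep
```

Single-stage detectors apply NMS as the final step of their one-pass pipeline, which is part of why they reach the inference speeds reported below.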
The R-CNN family are the best-known and most efficient two-stage detection algorithms. On the other hand, single-stage detectors skip the region proposal generation step and process the input image in one single pass, providing higher detection speed while keeping remarkable accuracy. Among the most efficient one-stage detectors, we cite the different YOLO variants, SSD, and RetinaNet [58]. Other object detection algorithms that achieve good results and could be used to detect forest fires were recently proposed. Zhifan Zhu and Zechao Li proposed a Local and Mid-range feature Propagation (LMP) object detection algorithm to detect objects in videos [91]. Zhou et al. [90] proposed a CAD framework for real-time object detection. Also, a Facebook research team developed an object detection algorithm called DETR (DEtection TRansformer) [21], which is based on a transformer network, providing impressive results. These algorithms are very effective for object detection tasks, including fire detection. However, in this paper, we only reviewed the object detection algorithms that have been used in the literature to detect wildfires.

The authors in Kinaneva et al. [52] and Barmpoutis et al. [11] showed the great results achieved using Faster R-CNN algorithms to detect both smoke and flames in UAV imagery. As shown in Table 2, the Faster R-CNN used in Barmpoutis et al. [11] provides the second-best results among the tested object detection algorithms, achieving F1-Score rates of 72.7%, 70.6%, and 71.5% for flame, smoke, and both flame and smoke, respectively. The SSD algorithm was adopted in many studies, providing acceptable results for identifying forest fires, but it is considered the least effective detector for the wildfire detection application, as shown in the works of [2,11,84] presented in Table 2. From 2016 to 2018, Redmon developed three versions of one of the most effective and widely used object detection algorithms, which provide the best tradeoff between accuracy and speed. YOLO was adopted in several wildfire-related studies. Alexandrov et al. [2] adopted five different techniques to detect forest fire smoke from UAV-based RGB imagery, three of which are based on deep learning: Faster R-CNN, SSD, and YOLOv2. Among these detectors, the YOLOv2 detector achieved impressive results, providing the best inference speed (FPS = 6), precision (100%), recall (98.3%), F1-score (99.14%), and accuracy (98.3%) (Table 2). YOLOv3, YOLOv3-Tiny, YOLOv3-SPP, YOLOv4, CSResNext50-Panet-SPP, and SSD-ResNet were adopted in Yadav [84] for wildfire detection. YOLOv3 provides the best mean Average Precision (mAP) of 89.5% on the emergency fire dataset, while YOLOv3-SPP achieved a slightly better mAP (97.81%) than YOLOv3 (97.6%) on the single flame dataset. However, YOLOv3-Tiny achieved the lowest inference time of around 0.2 s, making it more suitable for real-time operations. Similarly, to identify wildfires at their early stages, the authors in Jiao et al. [46,47] proposed modified versions of YOLOv3-tiny and YOLOv3, respectively, used to detect flame and smoke instances from UAV imagery in real-time. To improve small spot detection, Jiao et al. [47] added four DBL layers (DBL = Darknet convolutional layer, Batch Normalization layer, and Leaky ReLU layer), achieving a precision rate of 82% while providing an inference speed of 6.5 FPS on the DJI MANIFOLD onboard computer. Also, Goyal et al. [37] adopted YOLOv3 as the main architecture to identify wildfires and notify the concerned authorities as fast as possible, achieving an F1-Score of around 91%, which is a good result. The methods based on object detection techniques are more effective than the classification-based ones, both in terms of processing speed and precision. They perform the wildfire detection task as an end-to-end operation to improve the inference speed, making them more suitable for real-time applications.

3.3. Semantic segmentation-based methods

The application of deep learning-based computer vision algorithms is not restricted to image classification and object detection; they can also be used for semantic and instance segmentation. Semantic segmentation algorithms are considered among the most effective deep learning techniques for forest fire identification; they classify each pixel in the image according to the object class it belongs to (flame, smoke, forest) (Fig. 7). Therefore, semantic segmentation algorithms are more powerful than bounding box-based object detection techniques. However, they are more complex and demand higher computational performance and a longer time for annotating training images. Several semantic segmentation techniques were proposed over the years to identify wildfires in digital images and videos captured through UAV platforms with higher precision, including DeepLab [25], U-Net [70], SegNet [8], and CTNet [57], among others.

The famous Google DeepLabV3+ architecture was adopted by Barmpoutis et al. [11], who applied two Inception-ResNet v2-based [77] DeepLabV3+ networks for smoke and fire detection and localization in forest regions. The raw images, acquired with an RGB 360° camera mounted on a UAV, were converted from equirectangular format to stereographic format before being fed to the semantic segmentation algorithm. The proposed system achieved remarkable results with a reduced number of sensors, which may reduce the complexity of the system. Zhao et al. [89] developed a saliency detection algorithm based on a deep CNN architecture to localize and segment fire areas from UAV imagery, achieving an accuracy of 98%. They used color and texture information to identify areas that were more likely to represent fire spots.
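Segmentation quality in studies like these is typically reported per class as pixel-wise intersection-over-union. A minimal sketch of that metric, with class ids (0 = background/forest, 1 = flame, 2 = smoke) chosen purely for illustration:

```python
# Per-class pixel IoU between a predicted segmentation mask and a
# ground-truth mask. Masks are flat sequences of per-pixel class ids;
# the class numbering is an assumption for this example.
def per_class_iou(pred, truth, num_classes=3):
    """Return {class_id: IoU}; NaN for classes absent from both masks."""
    ious = {}
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, truth) if p == c or t == c)
        ious[c] = inter / union if union else float("nan")
    return ious
```

Averaging the per-class values gives the mean IoU commonly used to compare architectures such as DeepLabV3+ and U-Net.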
Table 2. Studies targeting early wildfire detection using deep learning-based object detection.

| Ref | Method | Flame/Smoke | Camera type | Hardware | Precision (%) | Recall (%) | F1-Score (%) | mAP (%) | FPS |
|---|---|---|---|---|---|---|---|---|---|
| Alexandrov et al. [2] | YOLOv2 | Smoke | Optical | – | 100 | 98.3 | 99.14 | – | 6 |
| | Faster R-CNN | Smoke | | | 100 | 95.9 | 97.9 | – | – |
| Jiao et al. [47] | YOLOv3-tiny | Flame/Smoke | Optical/Infrared | DJI MANIFOLD | 82 | 79 | 81 | 79.84 | 3.2–6.5 |
| Jiao et al. [46] | YOLOv3 | Flame/Smoke | Optical/Infrared | NVIDIA RTX 2080 | 84 | 78 | 81 | 78.92 | 30–82.4 |
| Barmpoutis et al. [11] | SSD | Flame | 360° Optical | – | – | – | 69.7 | – | – |
| | | Smoke | | | | | 67.3 | | |
| | | Flame/Smoke | | | | | 67.6 | | |
| | YOLOv3 | Flame | | | | | 80.6 | | |
| | | Smoke | | | | | 78.3 | | |
| | | Flame/Smoke | | | | | 78.8 | | |
| | Faster R-CNN | Flame | | | | | 72.7 | | |
| | | Smoke | | | | | 72.7 | | |
| | | Flame/Smoke | | | | | 70.6 | | |
| Goyal et al. [37] | YOLOv3 | Flame | Optical | – | 90 | 92 | 91 | – | – |
| Hossain et al. [42] | Proposed method (color + multi-space LBP + ANN) | Flame | Optical | Intel Core i7-9750H | 89 | 80 | 84 | – | – |
| | | Smoke | | | 93 | 88 | 90 | | |
| | Color + multi-space LBP + SVM | Flame | | | 89 | 72 | 80 | | |
| | | Smoke | | | 90 | 86 | 88 | | |
| | Color + multi-space LBP + random forest classifier | Flame | | | 92 | 57 | 71 | | |
| | | Smoke | | | 92 | 78 | 84 | | |
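The mAP values reported in Table 2 summarize ranked detections as the area under a precision–recall curve, computed per class and then averaged. A minimal finite-sum sketch of per-class AP is given below; it is illustrative only (hypothetical inputs), not the evaluation code used by the cited studies.

```python
def average_precision(is_tp, num_gt):
    """AP for one class: `is_tp` flags the score-ranked detections as
    true/false positives; `num_gt` is the number of ground-truth objects."""
    ap, tp, fp = 0.0, 0, 0
    prev_recall = 0.0
    for hit in is_tp:
        tp, fp = tp + hit, fp + (not hit)
        recall = tp / num_gt
        precision = tp / (tp + fp)
        ap += precision * (recall - prev_recall)  # rectangle under the P-R curve
        prev_recall = recall
    return ap

def mean_average_precision(per_class_flags, per_class_gt):
    """mAP: the mean of per-class AP values."""
    aps = [average_precision(f, g) for f, g in zip(per_class_flags, per_class_gt)]
    return sum(aps) / len(aps)
```

Ranking matters: the same set of detections yields a higher AP when the true positives are scored above the false positives, which is exactly the behavior a confidence-calibrated detector is rewarded for.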
Fig. 7. Example of semantic segmentation operation; a) Input image, b) Flame/Smoke detection [11].
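Segmentation outputs like the one in Fig. 7 are usually scored pixel-wise rather than box-wise. A small sketch of pixel-wise IoU (Jaccard) and Dice on flat 0/1 masks follows; the masks are hypothetical, and this is not the metric implementation of [11].

```python
def mask_iou_dice(pred, truth):
    """Pixel-wise IoU (Jaccard) and Dice for two flat 0/1 masks of equal length."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    iou = inter / union if union else 1.0    # empty masks count as a perfect match
    dice = 2 * inter / total if total else 1.0
    return iou, dice
```

Dice weights the overlap twice relative to the mask sizes, so it is more forgiving than IoU for small fire or smoke regions, which is one reason segmentation papers often report both.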
Table 3. Studies targeting early wildfire detection using deep learning-based semantic segmentation.

| Ref | Method | Flame/Smoke | Camera type | Precision (%) | Recall (%) | F1-Score (%) | Accuracy (%) | Processing speed (s) |
|---|---|---|---|---|---|---|---|---|
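The Precision, Recall, F1-Score, and Accuracy columns of Tables 2 and 3 all derive from the TP/TN/FP/FN counts defined in Section 3.5.2. A compact sketch with made-up counts:

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)   # of the predicted fires, how many are real
    recall = tp / (tp + fn)      # of the real fires, how many were found
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts: 90 detected fires, 80 correct rejections,
# 10 false alarms, 20 missed fires.
acc, prec, rec, f1 = classification_metrics(90, 80, 10, 20)
```

With these counts the model looks strong on precision (0.9) but weaker on recall (about 0.82), the typical asymmetry for early-stage smoke that covers few pixels.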
According to Tables 2 and 3, the authors in Barmpoutis et al. [11] show that semantic segmentation provides better results to achieve flame and/or smoke detection from UAV imagery.

3.4. Other deep learning-based methods

There are several different types of deep learning techniques that were adopted in various studies targeting early wildfire detection enhancement. However, to the best of our knowledge, these techniques have not yet been used for forest fire detection from visual data acquired from UAV platforms, but they are still worth mentioning. Usually, CNNs perform very well on static images, achieving impressive results. However, they still face some limitations in the case of sequential data like videos, because they do not consider the temporal image variations over time [51]. The Recurrent Neural Network (RNN) is another important deep learning architecture that can provide a solution for such a problem, giving deep neural networks the concept of memory via the hidden state. Different RNN types (RNN, GRU, and LSTM) were adopted by Benzekri et al. [13] to detect forest fires at their early stages from a wireless sensor network mounted on the ground, achieving remarkable accuracies of 99.89%, 99.82%, and 99.77% for GRU, LSTM, and RNN, respectively. To confirm the wildfire occurrence, they used a UAV to hover on top of the region of interest. Also, Cao et al. [20] proposed the Attention-enhanced Bidirectional LSTM (ABi-LSTM) algorithm for early forest fire smoke identification from videos, achieving an impressive accuracy of 97.8% while providing lower false detection rates. In [45], the authors combine a CNN-based detector and a lightweight version of LSTM for wildfire smoke detection in real-time from videos acquired through cameras mounted on watchtowers. To improve the detection speed of the LSTM architecture, they reduced the number of layers and cells constituting the original LSTM architecture. The YOLOv3 architecture is used to detect wildfire smoke, while the lightweight student LSTM is used for fire verification by analyzing smoke motion. Luo et al. [60] developed the Slight Smoke Perceptual Network (SSPN) for smoke detection from videos. The proposed architecture is divided into two parts, where they used a CNN architecture to extract static features and an LSTM architecture for dynamic feature extraction. However, it is not an end-to-end fire detection technique, which could affect the detection speed.
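The temporal models discussed above (LSTM/GRU verification of per-frame CNN outputs) exploit the fact that real smoke persists across frames while many false alarms are transient. A much simpler moving-average filter is sketched below purely to illustrate that idea; it is a stand-in, not any of the cited architectures, and the scores are hypothetical per-frame CNN confidences.

```python
from collections import deque

def smooth_alarms(frame_scores, window=5, threshold=0.6):
    """Raise an alarm only when the mean smoke score over a sliding
    window of frames exceeds the threshold, filtering one-frame spikes."""
    recent = deque(maxlen=window)
    alarms = []
    for score in frame_scores:
        recent.append(score)
        alarms.append(sum(recent) / len(recent) >= threshold)
    return alarms

# One spurious spike (frame 1) is rejected; the sustained run of high
# scores at the end eventually triggers the alarm.
scores = [0.1, 0.9, 0.1, 0.1, 0.8, 0.8, 0.8, 0.9, 0.9]
```

An LSTM plays the same role with a learned, content-dependent memory instead of a fixed window, which is why the verification stages above reduce false alarms without retraining the frame-level detector.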
An evolved CNN architecture, called the Generative Adversarial Network (GAN), was invented in 2014 by Ian Goodfellow [35], which is considered a great advancement in the deep learning field. Mostly, GANs are used for data augmentation, generating new unseen instances of a targeted object. Therefore, they could be very helpful to overcome the lack of wildfire datasets. Recently, GANs were used even for wildfire smoke and flame detection. Aslan et al. [6] proposed a two-stage training approach adopting Deep Convolutional Generative Adversarial Networks (DCGANs) to detect wildfire smoke. They trained the DCGAN regularly with real images and noise vectors, while the discriminator was trained separately, without the generator, using images that contain smoke. Similarly, in Aslan et al. [5], a DCGAN architecture with temporal slices was used for flame detection in video, achieving good results with a negligible false alarm rate. Another deep learning algorithm, called the Deep Belief Network (DBN), was adopted in Kaabi et al. [48] for early smoke identification of forest fires in video scenes, achieving a detection rate of 95%. Although most of these techniques have not yet been used to detect forest fires from UAV-based images/videos, they may achieve even better results than the ones obtained through the use of CNN-based methods.

3.5. Datasets and evaluation metrics

Data collection is one of the most important steps to build an effective deep learning-based wildfire detection model, where the data type, size, and quality have a significant impact on the performance of deep learning-based approaches. In any research field, standard datasets are critical for fairly evaluating the performance of any deep learning-based model. However, there is a lack of accessible wildfire image/video datasets, especially those collected from UAV platforms. Therefore, in this section, we aim to present some commonly used datasets in the literature, as well as the most important evaluation metrics used to evaluate deep learning-based wildfire detection models.

3.5.1. Datasets for wildfire detection

FLAME dataset. The FLAME dataset [72] is a publicly available dataset that consists of fire images and videos acquired through UAV platforms equipped with RGB and thermal camera sensors. It contains RGB/FLIR videos and RGB images acquired through DJI Phantom 3 Professional and DJI Matrice 200 drones equipped with a Zenmuse X4S camera, a FLIR Vue Pro thermal camera, and the DJI Phantom 3 camera. The first video is a 16 min raw video recorded at 29 Frames Per Second (FPS) using the Zenmuse X4S camera. Similarly, the second video is another 16 min raw video recorded at 29 FPS through the Zenmuse X4S camera, showing the behavior of one pile from the beginning of the burning process. Both the first and second videos have a resolution of 1280 × 720. The third, fourth, and fifth videos are 89 s, 5 min, and 25 min WhiteHot, GreenHot, and fusion heatmap videos, respectively. These videos were recorded using the FLIR Vue Pro R thermal camera with a resolution of 640 × 512 at 30 FPS. The sixth one is 17 min of high-quality RGB video acquired from the DJI Phantom 3 camera at 30 FPS with a resolution of 3840 × 2160. The seventh and eighth repositories contain 39,375 and 8617 images resized to 254 × 254 pixels that could be used to perform the image classification task. The ninth and tenth repositories have more than 2000 high-resolution fire images and masks to achieve the fire segmentation task. This dataset was mainly created for fire/no-fire image classification and fire image segmentation. It could also be used to perform fire detection from RGB and thermal UAV imagery.

Fire detection 360-degree dataset. The fire detection 360-degree dataset [11] is another dataset that consists of 150 360° equirectangular images that contain synthetic and real fire events in forest and urban regions. These images were collected using a 360° CMOS optical camera mounted on a UAV platform equipped with GPS technology. This dataset could be used to perform wildfire detection and segmentation tasks.

Other datasets. Due to the shortage of UAV-based visual datasets, most of the wildfire datasets used in the literature that contain aerial images were extracted from other datasets that consist of fire images in different environments (including forests) or were collected from various news reports and search engines, such as Google, Baidu, YouTube, Yandex, Flickr, and Bing, among others. Several studies that targeted wildfire detection from UAV platforms used visual datasets collected from different search engines. For example, the authors in Hossain et al. [42], Lee et al. [56], Novac et al. [64], Zhang et al. [87], and Zhao et al. [89] collected datasets that consist of aerial images containing forest fires from different search engines. Other studies [76,87] have extracted relevant data that contains forest fire aerial images from pre-existing datasets, such as FireSmoke [28], FireDetectionImage [19], Flickr-FireSmoke [23], and the Fire detection dataset [24]. Also, the authors in Zhang et al. [88] inserted real and simulated smoke instances on forest backgrounds to overcome the shortage of datasets. However, all of the aforementioned techniques could affect the performance of the deep learning model due to the quality of the data, which is acquired using different platforms, like satellites and airborne sensors, that do not have the same characteristics as UAV platforms.

3.5.2. Evaluation metrics

Deep learning model evaluation is the process that allows determining the effectiveness of the trained model to identify wildfires. After the development of the deep learning model, we need to find out how good it is through different evaluation metrics. Before presenting the evaluation metrics, we need to present some basic quantities that are used to calculate them: True Positive (TP), True Negative (TN), False Negative (FN), and False Positive (FP). These counts can be obtained from the confusion matrix. A TP is when there is a fire in the input image/video and the model correctly predicted that there is a fire. A True Negative (TN) is when the model correctly predicted that there is no fire instance in the input image/video. A False Negative (FN) is when the input image/video contains a fire instance but the model incorrectly predicted that there is none. A False Positive (FP) is when the model incorrectly predicted that there is a fire while there are no fire instances in the input image/video. Depending on the study's goal, several evaluation metrics could be measured to evaluate deep learning models, including accuracy, precision, recall, and F1-score. Therefore, in this section, the most important evaluation metrics are presented.

Accuracy. The accuracy represents the simplest and most used evaluation metric to measure the performance of the trained deep learning model. According to Eq. (1), the accuracy metric refers to the number of correct predictions out of the whole number of predictions:

Accuracy = (TP + TN) / (TP + TN + FP + FN)    (1)

However, accuracy is not always the best evaluation metric, because it can provide misleading results in the case of imbalanced data, which could affect our judgment of the model's performance. Therefore, we need to perform more evaluation measurements, including Precision, Recall, F1-score, and Average Precision.

Precision. The Precision rate (also known as the positive predictive value) is another way to evaluate how good a model is. It shows how many of the instances the model predicted as fire actually contain fire. The following equation is
used to determine the Precision rate:

Precision = TP / (TP + FP)    (2)

Recall. The Recall rate (also known as sensitivity) complements the Precision rate: it indicates how many of the images that contain fire instances the model correctly detected. Unlike the Precision rate metric, which focuses on false-positive values, the Recall rate focuses on the false-negative part. The Recall rate can be determined using the following equation:

Recall = TP / (TP + FN)    (3)

F1-score. The F1-score is another evaluation metric that considers both Precision and Recall rates. The F1-score represents a weighted harmonic mean of the Precision and Recall rates, and it can be calculated using the following equation:

F1-score = (2 × Precision × Recall) / (Precision + Recall)    (4)

Average precision (AP) and mean average precision (mAP). The AP and mAP are other popular evaluation metrics that are widely adopted for measuring the performance of deep learning-based object detection algorithms. The AP metric is calculated as the area under the precision-recall curve across all the Recall values that vary between 0 and 1, where a higher score means a better model and vice versa. Furthermore, mAP is the average value of the AP across all classes: we calculate the AP for each class and average them. In the literature, these two terms (AP and mAP) are frequently used interchangeably. The AP and mAP metrics can be calculated according to the following equations:

AP = ∫₀¹ P(R) dR    (5)

mAP = (1/N) Σᵢ₌₁ᴺ APᵢ    (6)

where P, R, and N denote the Precision rate, the Recall rate, and the number of classes, respectively.

Frame rate. The frame rate, also called frames per second (FPS), is a very important metric that provides information about the detection speed. Higher frame rates mean a faster model that could perform wildfire detection in real-time. The frame rate depends on the complexity of the selected deep learning model architecture and the hardware used.

4.1. Wildfire characteristics and camera sensors

Flame and smoke are the main visual features that could be used to identify wildfires at their early stages. Several studies targeted fire flame detection from UAV imagery using deep learning-based techniques, such as object detection and semantic segmentation. However, it is very challenging to detect wildfires from flame features, especially at their beginning, for several reasons. Detecting wildfire flames visually is still very hard in dense forests and in cluttered images containing objects with features similar to fire flames. Moreover, most fires do not even have a visible flame at their start, where they could be covered by their smoke. To overcome such a problem, some researchers have adopted thermal cameras to measure the thermal radiation emitted by the fire [37], where they have achieved acceptable results. Also, instead of flame detection, other studies are based on smoke detection, achieving impressive results in detecting wildfires at their early stage [2,42]. Smoke detection is more suitable to detect wildfires at their beginning because smoke appears earlier than fire flame and can be seen from farther distances while covering larger areas, making it easier to detect. However, these systems struggle to detect smoke during nighttime and can also confuse smoke with other similar objects like fog, clouds, and chimney smoke. To this end, several researchers have proposed algorithms that can detect both flame and smoke at the same time [11,26,52,56,89]. As shown in Table 2, fusing thermal and optical sensors to detect wildfire smoke and flame, as done in Jiao et al. [47] and Jiao et al. [46], is a very useful solution that could improve the detection performance in both day and night times.

4.2. UAV imagery data considerations

The lack of available UAV-based forest fire datasets is one of the biggest problems facing deep learning developers and researchers. Many solutions have been proposed to overcome such a limitation. Lee et al. [56] gathered their training dataset of wildfires by extracting frames from aerial videos available on the internet. The authors in Hossain et al. [42] created a dataset by collecting images available on the web from different image search engines and press reports. The created dataset in Hossain et al. [42] consists of aerial images with different resolutions of recent wildfires, including the California and Australia wildfires. Data augmentation is another effective solution that is adopted by many researchers [42,56]. The authors in Hossain et al. [42] and Lee et al. [56] used data augmentation techniques based on random cropping, resizing, and horizontal and vertical flipping. Moreover, the impact of data augmentation on the performance of wildfire detection was investigated in Yadav [84], where they achieved the best average precision by augmenting 50% of the raw data. Similarly, as shown in Table 3, the authors in Zhao et al. [89] showed the impact of data augmentation on the detection accuracy. Collecting and labeling more data could be a very good solution to improve detection accuracy, but it might be very costly and is not always feasible. Thus, other studies are based on tuning some hyperparameters and transferring a pre-trained model's knowledge to achieve acceptable results with only a few data [84,87]. The transfer learning technique is about transferring the knowledge from a model pre-trained on a specific dataset targeting one domain to another model that targets another, related domain. Thus, the authors of [87] fine-tuned a pre-trained AlexNet model to overcome the lack of data. Similarly, in Allauddin et al. [4] and Kinaneva et al. [53], the authors used one of the state-of-the-art one-stage detectors, called SSD_MobileNet-V1, for wildfire detection from UAV imagery.

Detecting wildfires and notifying the concerned authorities in real-time can prevent disastrous losses, but it is still one of the biggest challenges facing researchers. Some studies have adopted lightweight models deployed on the UAV itself to perform wildfire detection as fast as possible, like the work done in Jiao et al. [47]. However, this comes at the cost of reduced detection accuracy, leading to an increased false alarm rate [16]. In [37], the authors used a neural accelerator stick to improve YOLOv3 inference speed on a Raspberry Pi 3, achieving an F1-Score of 91% (Table 2). Other studies have adopted complex and deep architectures to improve accuracy. Lee et al. [56] developed a system based on a deep CNN architecture and a UAV platform for wildfire detection. Thus, AlexNet, GoogLeNet, VGG-13, and modified versions of GoogLeNet and VGG-13 CNN architectures were evaluated, where GoogLeNet achieves the best accuracy of 99%. The evaluated architectures provide high accuracy, but they take a considerable amount of time to classify each
image due to the large number of trainable parameters: they achieved a classification time of 7.743 s/image in the best case, when applying the AlexNet architecture. Such algorithms are therefore not suitable to be implemented on UAV platforms due to the limitation of computational resources. These systems collect data using UAV platforms and transmit them to powerful on-ground computational systems, or do the processing task on the cloud. The adopted UAV in Jiao et al. [47] embeds a lightweight DJI MANIFOLD on-board computer to perform flame/smoke detection through a modified YOLOv3-tiny. They achieved acceptable results on forest flame/smoke detection, with a precision rate of 82%. However, the limited processing power of the on-board computer still cannot perform the detection operation in real-time, achieving only 3.2 frames per second (FPS). Hence, they proposed to perform the detection task on the ground station using the YOLOv3 architecture [46]. They achieved similar results in terms of precision, recall, and F1-Score, but with a higher processing speed of around 80 FPS due to the good processing power of the adopted hardware (NVIDIA RTX 2080). Similarly, YOLOv3 was adopted in Goyal et al. [37] to detect wildfires as fast as possible and notify the concerned authorities in real-time. According to Table 2, they achieved an F1-Score of around 91%, which is a good result. However, the proposed system takes a relatively long time to detect forest fires, which is done in the first 12 h of its initialization. Srinivas and Dua [76] proposed a whole IoT system that can detect forest fires with an accuracy of 95% while notifying the concerned authorities in real-time. The proposed detection method was based on an AlexNet-like CNN architecture, where the processing is done on the cloud to improve detection speed. Moreover, the UAV flying time could be increased by performing the heavy computation on the on-ground station. However, we need strong algorithms to secure such a system against attackers and hackers, which could be very challenging and complex.

4.4. Deep learning model architecture selection and accuracy

The architecture of the selected network plays a crucial role in the whole performance of the wildfire detection system. As shown in Table 1, the authors in Chen et al. [26] investigated the impact of the selected CNN architecture and image preprocessing operations. They achieved an accuracy rate of 86% by applying a 17-layer CNN with some image preprocessing techniques, while they achieved accuracies of 53% and 61% by applying CNN-9 and CNN-9 (hm + na), respectively. Also, the proposed model in Barmpoutis et al. [11], which is based on the DeepLabV3+ architecture, provided the best results, achieving an F1-Score of 94.6% against other tested architectures, such as SSD (67.6%), FireNet (71.7%), YOLOv3 (78.8%), Faster R-CNN (71.5%), and U-Net (71.9%). It is also worth mentioning that fire detection through flame achieved slightly better results than through smoke detection, which could be due to the presence of objects with smoke-like features resulting in higher false alarm rates. Moreover, as shown in Tables 1 and 2, the proposed system has a very low missed detection rate, achieving a recall of 99.3%. However, the proposed system still faces some limitations, making it unable to detect forest fires at night time. In [89], the authors showed the impact of batch size and dropout ratio on the accuracy of the model. Also, they compared four different model architectures. The Fire_Net architecture provides the best trade-off between speed and accuracy, achieving a validation accuracy of 98% and a processing speed of 41.5 ms. According to the results presented in Table 2, the proposed ANN-based approach in Hossain et al. [42] achieved the best results against the other presented methods, including a state-of-the-art deep learning-based detector (YOLOv3): they achieved an F1-Score of 84% for flame detection and 90% for smoke detection, against 62% and 77% achieved by YOLOv3, while providing a near-real-time processing speed of 19 fps. The proposed approach is based on resizing the input images before feeding them to the ANN, which could affect the quality of the image, resulting in bad classification. Also, it has some limitations in identifying smoke blocks with a smooth texture. As shown in Table 2, YOLOv3 provides the highest precision rate with the lowest false alarm, but it provides the worst recall rate, which means a higher missed detection rate, especially for fire flame and smoke represented by a small number of pixels. However, recent deep learning advancements could improve the YOLOv3 algorithm and solve some of its limitations. Also, Alexandrov et al. [2] compared different classical and deep learning-based approaches to detect wildfire smoke. As shown in Table 2, YOLOv2 and Faster R-CNN achieved F1-Scores of 99.14% and 97.9%, respectively, with very low false alarm and missed detection rates, where they both achieved precision rates of 100% and recall rates of 98.3% and 95.9%, respectively. However, compared to classical approaches, deep learning models are very slow, achieving 6 fps as the best detection speed for YOLOv2.

5. Conclusions

UAV-based remote sensing technologies play a very important role in vision-based forest monitoring systems. Therefore, combining them with recent deep learning-based computer vision algorithms and powerful computational hardware may provide smart UAVs that are able to navigate, detect forest fires, and notify the concerned authorities autonomously, without any human intervention. UAVs are capable of easily providing high-resolution images in real-time from very hard and complex forest and wildland locations, making them the most suitable platforms for wildfire identification and monitoring. Thus, in this study, we investigated several deep learning methods and approaches for wildfire early detection from UAV imagery. According to the reported works in the literature, deep learning techniques showed impressive results both in speed and accuracy, which should help firefighters to intervene as fast as possible to reduce wildfire risks.

Different deep learning-based methods were reviewed for early wildfire smoke and flame detection, namely image classification, object detection, and semantic segmentation. In general, techniques based on object detection algorithms are the most adopted among all of them due to their high accuracy and ease compared to image classification and semantic segmentation, respectively. Other deep learning algorithms are presented in this review, which can improve wildfire detection, especially in the case of fire detection from video scenes. However, these algorithms have not yet been tested on UAV-based images. Thus, LSTM algorithms will be investigated in future works to improve wildfire early detection from streaming videos acquired through UAV platforms. Also, GANs will be investigated to solve the problem of dataset scarcity, which could help generate new instances of fire scenes.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

[1] D. Akca, E. Stylianidis, D. Poli, A. Gruen, O. Altan, M. Hofer, K. Smagas, V.S. Martin, A. Walli, E. Jimeno, A. Garcia, Pre- and post-fire comparison of forest areas in 3D, in: O. Altan, M. Chandra, F. Sunar, T.J. Tanzi (Eds.), Intelligent Systems for Crisis Management, Springer International Publishing, Cham, 2019, pp. 265–294, doi:10.1007/978-3-030-05330-7_11.
[2] D. Alexandrov, E. Pertseva, I. Berman, I. Pantiukhin, A. Kapitonov, Analysis of machine learning methods for wildfire security monitoring with an unmanned aerial vehicles, in: 2019 24th Conference of Open Innovations Association (FRUCT), 2019, pp. 3–9, doi:10.23919/FRUCT.2019.8711917.
[3] A.A.A. Alkhatib, A review on forest fire detection techniques, Int. J. Distrib. Sens. Netw. 10 (3) (2014) 597368, doi:10.1155/2014/597368.
[4] M.S. Allauddin, G.S. Kiran, G.R. Kiran, G. Srinivas, G.U.R. Mouli, P.V. Prasad, Development of a surveillance system for forest fire detection and monitoring using drones, in: IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium, 2019, pp. 9361–9363, doi:10.1109/IGARSS.2019.8900436.
[5] S. Aslan, U. Güdükbay, B.U. Töreyin, A.E. Çetin, Deep convolutional generative adversarial networks for flame detection in video, in: N.T. Nguyen, B.H. Hoang, C.P. Huynh, D. Hwang, B. Trawiński, G. Vossen (Eds.), Computational Collective Intelligence, Springer International Publishing, Cham, 2020, pp. 807–815, doi:10.1007/978-3-030-63007-2_63.
[6] S. Aslan, U. Güdükbay, B.U. Töreyin, A.E. Çetin, Early wildfire smoke detection based on motion-based geometric image transformation and deep convolutional generative adversarial networks, in: ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, pp. 8315–8319, doi:10.1109/ICASSP.2019.8683629.
[7] B. Aydin, E. Selvi, J. Tao, M.J. Starek, Use of fire-extinguishing balls for a conceptual system of drone-assisted wildfire fighting, Drones 3 (1) (2019), doi:10.3390/drones3010017.
[8] V. Badrinarayanan, A. Kendall, R. Cipolla, SegNet: a deep convolutional encoder-decoder architecture for image segmentation, IEEE Trans. Pattern Anal. Mach. Intell. 39 (12) (2017) 2481–2495, doi:10.1109/TPAMI.2016.2644615.
[9] P. Barmpoutis, P. Papaioannou, K. Dimitropoulos, N. Grammalidis, A review on early forest fire detection systems using optical remote sensing, Sensors 20 (22) (2020), doi:10.3390/s20226442.
[10] P. Barmpoutis, T. Stathaki, A novel framework for early fire detection using terrestrial and aerial 360-degree images, in: J. Blanc-Talon, P. Delmas, W. Philips, D. Popescu, P. Scheunders (Eds.), Advanced Concepts for Intelligent Vision Systems, Springer International Publishing, Cham, 2020, pp. 63–74, doi:10.1007/978-3-030-40605-9_6.
[11] P. Barmpoutis, T. Stathaki, K. Dimitropoulos, N. Grammalidis, Early fire detection based on aerial 360-degree sensors, deep convolution neural networks and exploitation of fire dynamic textures, Remote Sens. 12 (19) (2020), doi:10.3390/rs12193177.
[12] B. Benjdira, T. Khursheed, A. Koubaa, A. Ammar, K. Ouni, Car detection using unmanned aerial vehicles: comparison between Faster R-CNN and YOLOv3, in: 2019 1st International Conference on Unmanned Vehicle Systems-Oman (UVS), 2019, pp. 1–6, doi:10.1109/UVS.2019.8658300.
[13] W. Benzekri, A.E. Moussati, O. Moussaoui, M. Berrajaa, Early forest fire detection system using wireless sensor network and deep learning, Int. J. Adv. Comput. Sci. Appl. 11 (5) (2020), doi:10.14569/IJACSA.2020.0110564.
[14] M. Bo, L. Mercalli, F. Pognant, D. Cat Berro, M. Clerico, Urban air pollution, climate change and wildfires: the case study of an extended forest fire episode in northern Italy favoured by drought and warm weather conditions, Energy Rep. 6 (2020) 781–786, doi:10.1016/j.egyr.2019.11.002.
[15] A. Bochkovskiy, C.-Y. Wang, H.-Y.M. Liao, YOLOv4: optimal speed and accuracy of object detection, arXiv preprint arXiv:2004.10934 (2020).
[16] A. Bouguettaya, A. Kechida, A.M. Taberkit, A survey on lightweight CNN-based object detection algorithms for platforms with limited computational resources, Int. J. Inform. Appl. Math. 2 (2) (2019) 28–44.
[17] J.L. Boylan, C. Lawrence, The development and validation of the bushfire psychological preparedness scale, Int. J. Disaster Risk Reduct. 47 (2020) 101530, doi:10.1016/j.ijdrr.2020.101530.
[18] F. Bu, M.S. Gharajeh, Intelligent and vision-based fire detection systems: a survey,
[28] DeepQuestAI, Fire-Smoke-Dataset, 2019. https://github.com/DeepQuestAI/Fire-Smoke-Dataset.
[29] K. Dimitropoulos, P. Barmpoutis, N. Grammalidis, Higher order linear dynamical systems for smoke detection in video surveillance applications, IEEE Trans. Circuits Syst. Video Technol. 27 (5) (2017) 1143–1154, doi:10.1109/TCSVT.2016.2527340.
[30] C.A. Emmerton, C.A. Cooke, S. Hustins, U. Silins, M.B. Emelko, T. Lewis, M.K. Kruk, N. Taube, D. Zhu, B. Jackson, M. Stone, J.G. Kerr, J.F. Orwin, Severe western Canadian wildfire affects water quality even at large basin scales, Water Res. 183 (2020) 116071, doi:10.1016/j.watres.2020.116071.
[31] A.I. Filkov, T. Ngo, S. Matthews, S. Telfer, T.D. Penman, Impact of Australia's catastrophic 2019/20 bushfire season on communities and environment. Retrospective analysis and current trends, J. Saf. Sci. Resil. 1 (1) (2020) 44–56, doi:10.1016/j.jnlssr.2020.06.009.
[32] L. Giglio, W. Schroeder, C.O. Justice, The collection 6 MODIS active fire detection algorithm and fire products, Remote Sens. Environ. 178 (2016) 31–41, doi:10.1016/j.rse.2016.02.054.
[33] R. Girshick, Fast R-CNN, in: 2015 IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1440–1448, doi:10.1109/ICCV.2015.169.
[34] R. Girshick, J. Donahue, T. Darrell, J. Malik, Rich feature hierarchies for accurate object detection and semantic segmentation, in: 2014 IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 580–587, doi:10.1109/CVPR.2014.81.
[35] I.J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative adversarial nets, in: Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS'14, MIT Press, Cambridge, MA, USA, 2014, pp. 2672–2680, doi:10.5555/2969033.2969125.
[36] K. Govil, M.L. Welch, J.T. Ball, C.R. Pennypacker, Preliminary results from a wildfire detection system using deep learning on remote camera images, Remote Sens. 12 (1) (2020), doi:10.3390/rs12010166.
[37] S. Goyal, A. Kaur, H. Vohra, A. Singh, A YOLO based technique for early forest fire detection, Int. J. Innov. Technol. Explor. Eng. (IJITEE) 9 (2020) 1357–1362, doi:10.35940/ijitee.F4106.049620.
[38] K. Grala, R.K. Grala, A. Hussain, W.H. Cooke, J.M. Varner, Impact of human factors on wildfire occurrence in Mississippi, United States, Forest Policy Econ. 81 (2017) 38–47, doi:10.1016/j.forpol.2017.04.011.
[39] C. Herrmann, D. Willersinn, J. Beyerer, Low-resolution convolutional neural networks for video face recognition, in: 2016 13th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), 2016, pp. 221–227, doi:10.1109/AVSS.2016.7738017.
[40] G.E. Hinton, S. Osindero, Y.-W. Teh, A fast learning algorithm for deep belief nets, Neural Comput. 18 (7) (2006) 1527–1554, doi:10.1162/neco.2006.18.7.1527.
[41] S. Hochreiter, J. Schmidhuber, Long short-term memory, Neural Comput. 9 (8) (1997) 1735–1780, doi:10.1162/neco.1997.9.8.1735.
[42] F.M.A. Hossain, Y.M. Zhang, M.A. Tonima, Forest fire flame and smoke detection from UAV-captured images using fire-specific color features and multi-color space local binary pattern, J. Unmanned Veh. Syst. 8 (4) (2020) 285–309, doi:10.1139/juvs-2020-0009.
[43] G. Hristov, J. Raychev, D. Kinaneva, P. Zahariev, Emerging methods for early detection of forest fires using unmanned aerial vehicles and LoRaWAN sensor networks, in: 2018 28th EAEEIE Annual Conference (EAEEIE), 2018, pp. 1–9, doi:10.1109/EAEEIE.2018.8534245.
[44] A. Jadon, M. Omama, A. Varshney, M.S. Ansari, R. Sharma, FireNet: a specialized lightweight fire & smoke detection model for real-time IoT applications,
vey, Image Vis. Comput. 91 (2019) 103803, doi:10.1016/j.imavis.2019.08.007. arXiv preprint arXiv:1905.11922(2019).
[19] Cair, Fire-detection-image-dataset, 2017. https://github.com/cair/ [45] M. Jeong, M. Park, J. Nam, B.C. Ko, Light-weight student LSTM for real-time
Fire- Detection- Image- Dataset. wildfire smoke detection, Sensors 20 (19) (2020), doi:10.3390/s20195508.
[20] Y. Cao, F. Yang, Q. Tang, X. Lu, An attention enhanced bidirectional LSTM [46] Z. Jiao, Y. Zhang, L. Mu, J. Xin, S. Jiao, H. Liu, D. Liu, A YOLOv3-based learn-
for early forest fire smoke recognition, IEEE Access 7 (2019) 154732–154742, ing strategy for real-time UAV-based forest fire detection, in: 2020 Chinese
doi:10.1109/ACCESS.2019.2946712. Control And Decision Conference (CCDC), 2020, pp. 4963–4967, doi:10.1109/
[21] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, S. Zagoruyko, End– CCDC49329.2020.9163816.
to-end object detection with transformers, in: European Conference on Com- [47] Z. Jiao, Y. Zhang, J. Xin, L. Mu, Y. Yi, H. Liu, D. Liu, A deep learning based
puter Vision, Springer, 2020, pp. 213–229. forest fire detection approach using UAV and YOLOv3, in: 2019 1st Inter-
[22] L.C. Carvalheiro, S.O. Bernardo, M.D.M. Orgaz, Y. Yamazaki, Forest fires mapping national Conference on Industrial Artificial Intelligence (IAI), 2019, pp. 1–5,
and monitoring of current and past forest fire activity from meteosat second doi:10.1109/ICIAI.2019.8850815.
generation data, Environ. Model. Softw. 25 (12) (2010) 1909–1914, doi:10.1016/ [48] R. Kaabi, M. Sayadi, M. Bouchouicha, F. Fnaiech, E. Moreau, J.M. Ginoux, Early
j.envsoft.2010.06.003. smoke detection of forest wildfire video using deep belief network, in: 2018
[23] M.T. Cazzolato, L.P. Avalhais, D.Y. Chino, J.S. Ramos, J.A. de Souza, J.F. Rodrigues 4th International Conference on Advanced Technologies for Signal and Image
Jr., A.J. Traina, FiSmo: a compilation of datasets from emergency situations Processing (ATSIP), 2018, pp. 1–6, doi:10.1109/ATSIP.2018.8364446.
for fire and smoke analysis, in: Brazilian Symposium on Databases-SBBD, SBC, [49] T. Kanand, G. Kemper, R. König, H. Kemper, Wildfire detection and disas-
2017, pp. 213–223. ter monitoring system using UAS and sensor fusion technologies, Int. Arch.
[24] A.E. Cetin, Computer vision based fire detection software, 2007, Photogramm., Remote Sens. Spat. Inf. Sci. XLIII-B3-2020 (2020) 1671–1675,
[25] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, A.L. Yuille, Deeplab: seman- doi:10.5194/isprs-archives-XLIII-B3-2020-1671-2020.
tic image segmentation with deep convolutional nets, atrous convolution, and [50] V. Khryashchev, R. Larionov, Wildfire segmentation on satellite images us-
fully connected CRFs, IEEE Trans. Pattern Anal. Mach. Intell. 40 (4) (2018) 834– ing deep learning, in: 2020 Moscow Workshop on Electronic and Network-
848, doi:10.1109/TPAMI.2017.2699184. ing Technologies (MWENT), 2020, pp. 1–5, doi:10.1109/MWENT47943.2020.
[26] Y. Chen, Y. Zhang, J. Xin, G. Wang, L. Mu, Y. Yi, H. Liu, D. Liu, UAV image- 9067475.
based forest fire detection approach using convolutional neural network, in: [51] G. Kim, J. Kim, S. Kim, Fire detection using video images and temporal vari-
2019 14th IEEE Conference on Industrial Electronics and Applications (ICIEA), ations, in: 2019 International Conference on Artificial Intelligence in Informa-
2019, pp. 2118–2123, doi:10.1109/ICIEA.2019.8833958. tion and Communication (ICAIIC), 2019, pp. 564–567, doi:10.1109/ICAIIC.2019.
[27] Y. Chen, Y. Zhang, J. Xin, Y. Yi, D. Liu, H. Liu, A UAV-based forest fire detection 8669083.
algorithm using convolutional neural network, in: 2018 37th Chinese Control [52] D. Kinaneva, G. Hristov, J. Raychev, P. Zahariev, Application of artificial intel-
Conference (CCC), 2018, pp. 10305–10310, doi:10.23919/ChiCC.2018.8484035. ligence in UAV platforms for early forest fire detection, in: 2019 27th Na-
13
A. Bouguettaya, H. Zarzour, A.M. Taberkit et al. Signal Processing 190 (2022) 108309
tional Conference with International Participation (TELECOM), 2019, pp. 50–53, [75] M.J. Sousa, A. Moutinho, M. Almeida, Classification of potential fire outbreaks:
doi:10.1109/TELECOM48729.2019.8994888. a fuzzy modeling approach based on thermal images, Expert Syst. Appl. 129
[53] D. Kinaneva, G. Hristov, J. Raychev, P. Zahariev, Early forest fire detection using (2019) 216–232, doi:10.1016/j.eswa.2019.03.030.
drones and artificial intelligence, in: 2019 42nd International Convention on [76] K. Srinivas, M. Dua, Fog computing and deep CNN based efficient approach
Information and Communication Technology, Electronics and Microelectronics to early forest fire detection with unmanned aerial vehicles, in: S. Smys,
(MIPRO), 2019, pp. 1060–1065, doi:10.23919/MIPRO.2019.8756696. R. Bestak, A. Rocha (Eds.), Inventive Computation Technologies, Springer In-
[54] Y. Kountouris, Human activity, daylight saving time and wildfire occurrence, ternational Publishing, Cham, 2020, pp. 646–652.
Sci. Total Environ. 727 (2020) 138044, doi:10.1016/j.scitotenv.2020.138044. [77] C. Szegedy, S. Ioffe, V. Vanhoucke, A.A. Alemi, Inception-v4, inception-res-
[55] A. Krizhevsky, I. Sutskever, G.E. Hinton, Imagenet classification with deep con- net and the impact of residual connections on learning, in: Proceedings of
volutional neural networks, Commun. ACM 60 (6) (2017) 84—90, doi:10.1145/ the Thirty-First AAAI Conference on Artificial Intelligence, AAAI’17, AAAI Press,
3065386. 2017, pp. 4278—4284.
[56] W. Lee, S. Kim, Y.-T. Lee, H.-W. Lee, M. Choi, Deep neural networks for wild [78] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Van-
fire detection with unmanned aerial vehicle, in: 2017 IEEE International Con- houcke, A. Rabinovich, Going deeper with convolutions, in: 2015 IEEE Con-
ference on Consumer Electronics (ICCE), 2017, pp. 252–253, doi:10.1109/ICCE. ference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 1–9,
2017.7889305. doi:10.1109/CVPR.2015.7298594.
[57] Z. Li, Y. Sun, J. Tang, CTNet: context-based tandem network for semantic seg- [79] V. Totakura, B.R. Vuribindi, E.M. Reddy, Improved safety of self-driving car
mentation, arXiv preprint arXiv:2104.09805(2021). using voice recognition through CNN, IOP Conf. Ser. 1022 (2021) 012079,
[58] T.-Y. Lin, P. Goyal, R. Girshick, K. He, P. Dollr, Focal loss for dense object detec- doi:10.1088/1757-899x/1022/1/012079.
tion, in: 2017 IEEE International Conference on Computer Vision (ICCV), 2017, [80] D.C. Tsouros, S. Bibi, P.G. Sarigiannidis, A review on UAV-based applications
pp. 2999–3007, doi:10.1109/ICCV.2017.324. for precision agriculture, Information 10 (11) (2019), doi:10.3390/info10110349.
[59] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, A.C. Berg, SSD: https://www.mdpi.com/2078-2489/10/11/349
single shot multibox detector, in: B. Leibe, J. Matas, N. Sebe, M. Welling (Eds.), [81] S. Vardoulakis, G. Marks, M.J. Abramson, Lessons learned from the australian
Computer Vision – ECCV 2016, Springer International Publishing, Cham, 2016, bushfires: climate change, air pollution, and public health, JAMA Intern. Med.
pp. 21–37, doi:10.1007/978- 3- 319- 46448- 0_2. 180 (5) (2020) 635–636, doi:10.1001/jamainternmed.2020.0703.
[60] S. Luo, X. Zhang, M. Wang, J.-H. Xu, X. Zhang, A slight smoke perceptual net- [82] A. Viseras, J. Marchal, M. Schaab, J. Pages, L. Estivill, Wildfire monitoring and
work, IEEE Access 7 (2019) 42889–42896, doi:10.1109/ACCESS.2019.2906695. hotspots detection with aerial robots: measurement campaign and first re-
[61] J.R. Martinez-de Dios, B.C. Arrue, A. Ollero, L. Merino, F. Gmez-Rodriguez, Com- sults, in: 2019 IEEE International Symposium on Safety, Security, and Rescue
puter vision techniques for forest fire perception, Image Vis. Comput. 26 (4) Robotics (SSRR), 2019, pp. 102–103, doi:10.1109/SSRR.2019.8848961.
(2008) 550–562, doi:10.1016/j.imavis.2007.07.002. [83] Y. Xiao, V.R. Kamat, C.C. Menassa, Human tracking from single RGB-d cam-
[62] C. Maxouris, Here’s just how bad the devastating australian fires are – era using online learning, Image Vis. Comput. 88 (2019) 67–75, doi:10.1016/j.
by the numbers, 2020, https://edition.cnn.com/2020/01/06/us/australian-fires- imavis.2019.05.003.
by- the- numbers- trnd/index.html. [84] R. Yadav, Deep learning based fire recognition for wildfire drone automation,
[63] M.H. Mockrin, H.K. Fishler, S.I. Stewart, After the fire: perceptions of land use Can. Sci. Fair J. 3 (2) (2020) 1–8.
planning to reduce wildfire risk in eight communities across the united states, [85] G. Zanchi, L. Yu, C. Akselsson, K. Bishop, S. Köhler, J. Olofsson, S. Belyazid, Sim-
Int. J. Disaster Risk Reduct. 45 (2020) 101444, doi:10.1016/j.ijdrr.2019.101444. ulation of water and chemical transport of chloride from the forest ecosystem
[64] I. Novac, K.R. Geipel, G.J.E. de Domingo, L.G.d. Paula, K. Hyttel, D. Chrysosto- to the stream, Environ. Model. Softw. 138 (2021) 104984, doi:10.1016/j.envsoft.
mou, A framework for wildfire inspection using deep convolutional neural net- 2021.104984.
works, in: 2020IEEE/SICE International Symposium on System Integration (SII), [86] C. Zhang, T. Huang, Q. Zhao, A new model of RGB-d camera calibration based
2020, pp. 867–872, doi:10.1109/SII46433.2020.9026244. on 3Dcontrol field, Sensors 19 (23) (2019), doi:10.3390/s19235082.
[65] J. Redmon, S. Divvala, R. Girshick, A. Farhadi, You only look once: unified, real- [87] Q. Zhang, J. Xu, L. Xu, H. Guo, Deep convolutional neural networks for for-
time object detection, in: 2016 IEEE Conference on Computer Vision and Pat- est fire detection, in: Proceedings of the 2016 International Forum on Man-
tern Recognition (CVPR), 2016, pp. 779–788, doi:10.1109/CVPR.2016.91. agement, Education and Information Technology Application, Atlantis Press,
[66] J. Redmon, A. Farhadi, Yolo90 0 0: better, faster, stronger, in: 2017 IEEE Confer- 2016/01, pp. 568–575, doi:10.2991/ifmeita-16.2016.105.
ence on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 6517–6525, [88] Q.-x. Zhang, G.-h. Lin, Y.-m. Zhang, G. Xu, J.-j. Wang, Wildland forest fire smoke
doi:10.1109/CVPR.2017.690. detection based on faster r-CNN using synthetic smoke images, Procedia Eng.
[67] J. Redmon, A. Farhadi, Yolov3: an incremental improvement, arXiv preprint 211 (2018) 441–446, doi:10.1016/j.proeng.2017.12.034. 2017 8th International
arXiv:1804.02767 (2018). Conference on Fire Science and Fire Protection Engineering (ICFSFPE 2017)
[68] S. Ren, K. He, R. Girshick, J. Sun, Faster r-CNN: towards real-time object detec- [89] Y. Zhao, J. Ma, X. Li, J. Zhang, Saliency detection and deep learning-based
tion with region proposal networks, IEEE Trans. Pattern Anal. Mach. Intell. 39 wildfire identification in UAV imagery, Sensors 18 (3) (2018), doi:10.3390/
(6) (2017) 1137–1149, doi:10.1109/TPAMI.2016.2577031. s18030712.
[69] M. Rodrigues, P.J. Gelabert, A. Ameztegui, L. Coll, C. Vega-Garcia, Has COVID- [90] H. Zhou, Z. Li, C. Ning, J. Tang, Cad: scale invariant framework for real-time
19 halted winter-spring wildfires in the mediterranean? Insights for wild- object detection, in: 2017 IEEE International Conference on Computer Vision
fire science under a pandemic context, Sci. Total Environ. 765 (2021) 142793, Workshops (ICCVW), 2017, pp. 760–768, doi:10.1109/ICCVW.2017.95.
doi:10.1016/j.scitotenv.2020.142793. [91] Z. Zhu, Z. Li, Online video object detection via local and mid-range feature
[70] O. Ronneberger, P. Fischer, T. Brox, U-Net: convolutional networks for biomed- propagation, in: Proceedings of the 1st International Workshop on Human–
ical image segmentation, in: International Conference on Medical Image Com- Centric Multimedia Analysis, HuMA’20, Association for Computing Machinery,
puting and Computer-Assisted Intervention, Springer, 2015, pp. 234–241. New York, NY, USA, 2020, pp. 73–82. 10.1145/3422852.3423477
[71] M.H. Saleem, J. Potgieter, K.M. Arif, Plant disease detection and classification [92] M. Zong, R. Wang, X. Chen, Z. Chen, Y. Gong, Motion saliency based multi-
by deep learning, Plants 8 (11) (2019), doi:10.3390/plants8110468. stream multiplier resnets for action recognition, Image Vis. Comput. 107 (2021)
[72] A. Shamsoshoara, F. Afghah, A. Razi, L. Zheng, P.Z. Fulé, E. Blasch, Aerial im- 104108, doi:10.1016/j.imavis.2021.104108.
agery pile burn detection using deep learning: the FLAME dataset, Comput. [93] V. Zope, T. Dadlani, A. Matai, P. Tembhurnikar, R. Kalani, IoT sensor and
Netw. 193 (2021) 108001, doi:10.1016/j.comnet.2021.108001. deep neural network based wildfire prediction system, in: 2020 4th Inter-
[73] K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale national Conference on Intelligent Computing and Control Systems (ICICCS),
image recognition, arXiv preprint arXiv:1409.1556(2014). 2020, pp. 205–208, doi:10.1109/ICICCS48265.2020.9120949.
[74] R. Solovyev, W. Wang, T. Gabruseva, Weighted boxes fusion: ensembling boxes [94] A.E. AGetin, K. Dimitropoulos, B. Gouverneur, N. Grammalidis, O. GAnay,
from different object detection models, Image Vis. Comput. 107 (2021) 104117, Y.H. Habiboǧlu, B.U. Töreyin, S. Verstockt, Video fire detection review, Digit.
doi:10.1016/j.imavis.2021.104117. Signal Process. 23 (6) (2013) 1827–1843, doi:10.1016/j.dsp.2013.07.003.
14