
A UAV Forest Fire Detection Method Based on Dual-Light Vision Images


2023 CAA Symposium on Fault Detection, Supervision and Safety for Technical Processes (SAFEPROCESS) | 979-8-3503-3775-4/23/$31.00 ©2023 IEEE | DOI: 10.1109/SAFEPROCESS58597.2023.10295827

Xuesong Xie, Jiasheng Wang, Youmin Zhang*, Jing Xin, Lingxia Mu, Xianghong Xue

Xuesong Xie, Jiasheng Wang, Jing Xin, Lingxia Mu, and Xianghong Xue are with the Department of Automation and Information Engineering, Xi’an University of Technology, Xi’an, China.
Youmin Zhang is with the Department of Mechanical, Industrial and Aerospace Engineering, Concordia University, Montreal, Canada.

Abstract—Forest fires pose numerous hazards and have serious effects on the environment. Unmanned aerial vehicles (UAVs), with their fast and flexible response, can provide important help for forest fire detection. In order to detect the occurrence of forest fires effectively and quickly, a UAV-based forest fire detection method using dual-light vision images is proposed. The main idea of the method is to use visible light images in the YCrCb color space together with infrared (IR) grayscale images to obtain a flame mask and complete flame detection. Meanwhile, visible light images in the HSV color space are used to obtain a smoke mask and complete smoke detection, which also supports possible earlier fire detection. Finally, the proposed method is verified on a DJI M300 RTK UAV platform with an H20T camera as the onboard sensor.

Keywords—UAV, Dual-light image, Forest fire detection

I. INTRODUCTION

The occurrence of forest fires is a problem that cannot be ignored. Prevention and early detection are effective means of reducing the potential hazard. The method proposed in this paper targets the early detection of forest fires in order to facilitate quick and timely warnings for subsequent fire fighting missions.

In recent years, with the development of unmanned aerial vehicle (UAV) technology, more and more researchers and developers have started to use UAVs as a new tool for fire monitoring, detection, and fighting. There is a huge opportunity for the application of UAVs to forest fire monitoring and detection, and many researchers are devoted to this area [1-3]. For example, in [4], a forest fire detection method combining YCrCb color space rules with the Lucas-Kanade optical flow method is proposed, using both static and dynamic features from visible images. In [5], a rapid automatic fire detection method is proposed based on UAV-observed infrared images, where threshold segmentation and the optical flow method are used to accomplish the detection task. In [6], a background model is established using a Gaussian mixture model (GMM), and early smoke detection of forest fires is accomplished by determining the suspected smoke area through wavelet energy analysis. In [7], the authors combine their own color rules with optical flow motion features for flame and smoke detection. In [8], fire detection is also completed by using self-defined color rules and then segmenting the flame. In [9], smoke and flame are treated separately: smoke detection is done by setting color rules and a frame-difference method, while fire is detected in infrared images by adaptive thresholding and calculating the perimeter area. In [10], fires are effectively detected from UAV images processed with color features and wavelet analysis. In [11], detection is performed using filtering, color space conversion, threshold segmentation, morphological processing, and other operations. In [12-14], fires are detected by training classifiers on features such as color, texture, and motion. In [15-16], images are pre-processed and then fed into a deep neural network for training and testing. In [17-18], fire images are fed directly into a deep neural network for training and testing, with corresponding performance evaluation.

The methods in [4], [5], and [7] all extract motion features with the optical flow method; however, since the real UAV detection platform is itself in motion, the motion features of the foreground target are not obvious, which can lead to poor detection results. The works in [12-18] involve a large pre-processing workload and are time consuming and relatively complex. Moreover, most of these studies consider only daytime detection and do not address nighttime detection. In this paper, a UAV-based forest fire detection method is proposed based on dual-light vision. The method mainly uses both visible images and infrared grayscale images. The flame mask image is extracted by setting thresholds in the YCrCb color space of the visible image and in the infrared gray space to complete flame detection. The smoke mask image is then extracted by setting a threshold in the HSV color space of the visible image to complete smoke detection. The computational cost is effectively reduced, and the detection task can be completed accurately and quickly.

II. METHOD

In vision-based fire detection, both visible and infrared images provide useful information. Therefore, a dual-light vision detection method is proposed, as shown in Fig. 1. The pipeline consists of visible/infrared image registration, color space transformation, threshold setting, morphological operations, mask generation, and result display.

Fig. 1: Schematic structure of the proposed method
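To make the Fig. 1 pipeline concrete, the following minimal Python/OpenCV skeleton sketches how the blocks could be chained per frame. It is an illustration rather than the authors' implementation: the three callables `register_pair`, `flame_mask`, and `smoke_mask` are hypothetical placeholders for the registration and threshold-rule steps detailed in the remainder of this section.

```python
import time
import cv2

def run_detection(visible_src, infrared_src, register_pair, flame_mask, smoke_mask):
    """Skeleton of the Fig. 1 pipeline; the three callables are placeholders
    for the registration, flame-mask, and smoke-mask steps of Section II."""
    cap_vis = cv2.VideoCapture(visible_src)
    cap_ir = cv2.VideoCapture(infrared_src)
    while True:
        ok_v, vis = cap_vis.read()
        ok_i, ir = cap_ir.read()
        if not (ok_v and ok_i):                    # stop when either stream ends
            break
        t0 = time.time()
        vis_reg, ir_reg = register_pair(vis, ir)   # align the dual-light pair
        fire = flame_mask(vis_reg, ir_reg)         # YCrCb + infrared rules
        smoke = smoke_mask(vis_reg)                # HSV rules (daytime only)
        fps = 1.0 / max(time.time() - t0, 1e-6)    # measured before visualization
        # ...overlay `fire` and `smoke` on vis_reg and display the result here
    cap_vis.release()
    cap_ir.release()
```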

First, the visible and infrared images are aligned; the useful information in both images is then used to obtain the mask image of each detection object, and finally the results are visualized.

A. Visible and Infrared Image Registration

The UAV platform is a DJI M300 RTK quadrotor equipped with an H20T camera as the sensor, which provides both visible and infrared images. Due to the hardware structure of the camera, the two image streams must be aligned before the dual-light data can be used together. Fig. 2 illustrates why registration is needed: Fig. 2(a) shows the physical locations of the lenses, and Fig. 2(b) shows their focal distances.

The focal distance of the H20T zoom lens differs from that of the infrared lens, which results in different image sizes and different picture content for the same spatial objects captured by the two lenses. The physical positions of the two lenses are also different, so deviations appear in the X-direction and Y-direction. Because of these problems, the two images cannot be used together directly; they are therefore aligned before being used in the subsequent steps. The experimental results later in the paper also show that the fast registration method proposed here is feasible.

Fig. 2: The reason for registration. (a) Physical location of the visible optical center O1 and the infrared optical center O2. (b) Focal distance of the two lenses to their CCDs.

According to Eqs. (1) and (2), the scaling factor $\delta_X$ and offset $\lambda_X$ in the X-direction can be calculated; those in the Y-direction are obtained similarly. In Eqs. (1)-(2), $A_{VX}$ and $B_{VX}$ denote the pixel coordinates of two points A and B in the X-direction of the visible image, while $A_{IRX}$ and $B_{IRX}$ denote the pixel coordinates of the same two points in the X-direction of the corresponding infrared image. The scaling factor used in this paper is 0.47 in the X-direction and 0.65 in the Y-direction; the offset is 162 pixels in the X-direction and 95 pixels in the Y-direction.

$\delta_X = \dfrac{A_{VX} - B_{VX}}{A_{IRX} - B_{IRX}}$  (1)

$\lambda_X = A_{VX} - A_{IRX}$  (2)

Fig. 3(a) shows the visible and infrared images obtained from the H20T, while Fig. 3(b) shows the same images registered using the scaling factors and offsets given above.

Fig. 3: The images before and after registration. (a) Visible and infrared original images. (b) Visible and infrared registered images.
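As a concrete illustration (not the authors' code), the Python/OpenCV sketch below applies one plausible reading of Eqs. (1)-(2) together with the image sizes reported in Section III: the infrared frame is rescaled by the factors (0.47, 0.65) and the visible frame is cropped at the offsets (162, 95), so that both end up as roughly 300×332 images covering the same scene region. The function name `register_pair`, the crop origin, and the interpolation mode are assumptions.

```python
import cv2

# Registration parameters reported in the paper (X, Y); their exact use below
# (scale the IR frame, crop the visible frame at the offset) is an assumed
# interpretation of Eqs. (1)-(2).
SCALE_X, SCALE_Y = 0.47, 0.65
OFFSET_X, OFFSET_Y = 162, 95

def register_pair(visible, infrared):
    """Return an aligned (visible_reg, infrared_reg) pair of equal size."""
    # Rescale the infrared frame: 640x512 * (0.47, 0.65) -> roughly 300x332.
    ir_reg = cv2.resize(infrared, None, fx=SCALE_X, fy=SCALE_Y,
                        interpolation=cv2.INTER_LINEAR)
    h, w = ir_reg.shape[:2]
    # Crop the matching window out of the 1920x1080 visible frame,
    # starting at the measured X/Y pixel offsets.
    vis_reg = visible[OFFSET_Y:OFFSET_Y + h, OFFSET_X:OFFSET_X + w]
    return vis_reg, ir_reg
```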
B. Threshold Rule Design

Color features are among the most intuitive features and are widely used by researchers [4], [7-12], occupying a significant place among fire-detection features. Commonly used color spaces include RGB, HSV, YCrCb, and HLS. Infrared information is also very helpful for fire detection: objects with high thermal radiation appear very bright in infrared images, and flames match this characteristic precisely. This paper uses color feature information and infrared information to design flame and smoke threshold segmentation rules for the fire detection task.

After the two images are registered, they can be used directly. The pseudocode of the flame detection process is shown in Algorithm 1.

Algorithm 1 Fire detection
While True:
    visible, infrared ← registration
    Y, Cr, Cb ← split(visible)
    Gray ← BGR2GRAY(infrared)
    mask1 ← morphologyEx(threshold(|Cr − Cb|))
    mask2 ← morphologyEx(threshold(Gray))
    mask_fire ← bitwise_AND(mask1, mask2)
    Return mask_fire

First, the infrared image is converted into a grayscale image and suspicious flame areas are extracted by thresholding; the threshold is set to 240 and the result is binarized. Next, the visible image is converted into the YCrCb color space and split into its Y, Cr, and Cb channels; pixels whose absolute Cr-Cb difference is greater than 43 are treated as flame regions. This mainly follows [4], whose authors found a threshold of 43 to be optimal after receiver operating characteristic (ROC) analysis. After obtaining the two binary images above, the flame mask image is determined by combining them. The extraction process of the flame mask image is shown in Fig. 4.

Fig. 4: Flame mask image extraction process. The visible image is converted to YCrCb and thresholded on the Cr-Cb difference, the infrared image is converted to grayscale and thresholded, and a bitwise AND of the two binary images gives the flame mask.
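To make Algorithm 1 and Fig. 4 concrete, here is a minimal Python/OpenCV sketch of the flame-mask step using the thresholds stated above (Cr-Cb difference greater than 43, infrared gray level greater than 240). It is an illustrative re-implementation rather than the authors' code; in particular, the morphological kernel size is an assumption, since the paper does not specify it.

```python
import cv2
import numpy as np

def flame_mask(visible_reg, infrared_reg, cr_cb_thresh=43, ir_thresh=240):
    """Flame mask from a registered visible/infrared pair (cf. Algorithm 1)."""
    # Visible branch: |Cr - Cb| > 43 in the YCrCb color space (threshold from [4]).
    ycrcb = cv2.cvtColor(visible_reg, cv2.COLOR_BGR2YCrCb)
    _, cr, cb = cv2.split(ycrcb)
    _, mask1 = cv2.threshold(cv2.absdiff(cr, cb), cr_cb_thresh, 255,
                             cv2.THRESH_BINARY)

    # Infrared branch: high thermal radiation, grayscale level > 240.
    gray = cv2.cvtColor(infrared_reg, cv2.COLOR_BGR2GRAY)
    _, mask2 = cv2.threshold(gray, ir_thresh, 255, cv2.THRESH_BINARY)

    # Morphological opening to suppress isolated noise (3x3 kernel assumed).
    kernel = np.ones((3, 3), np.uint8)
    mask1 = cv2.morphologyEx(mask1, cv2.MORPH_OPEN, kernel)
    mask2 = cv2.morphologyEx(mask2, cv2.MORPH_OPEN, kernel)

    # Flame = color evidence AND thermal evidence.
    return cv2.bitwise_and(mask1, mask2)
```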

The absolute difference between the Cr and Cb channels yields suspected flame areas, while the infrared grayscale image yields high-heat areas. Finally, the two mask images are combined with a bitwise AND to obtain the final flame mask image.

For smoke detection, the HSV color space is used. The pseudocode of the smoke detection process is shown in Algorithm 2.

Algorithm 2 Smoke detection
While True:
    visible, infrared ← registration
    H, S, V ← split(visible)
    mask1 ← morphologyEx(threshold(|S − V|))
    mask2 ← morphologyEx(threshold(H))
    mask_smoke ← bitwise_AND(mask1, mask2)
    Return mask_smoke

First, the registered visible image is converted into an HSV color space image and split into its H, S, and V channels. Pixels whose absolute S-V difference exceeds a threshold, here set to 140, are treated as suspicious smoke areas. The H channel is the hue channel; by setting the threshold interval to (75, 105), pixels whose hue falls within this interval are also treated as suspected smoke areas. Finally, the two binary images obtained above are combined to produce the smoke mask image. It should be noted that, due to the small size of the experimental fire, the smoke is difficult to observe at night in either the visible or the infrared image, so nighttime smoke detection is not considered. The extraction process of the smoke mask image is shown in Fig. 5.

Fig. 5: Smoke mask image extraction process. The visible image is converted to HSV, the V-S difference and the H channel are thresholded into binary images, and a bitwise AND of the two gives the smoke mask.

The H channel captures areas whose color is close to that of smoke, while the absolute difference between the V and S channels captures areas where brightness and saturation differ strongly, as is typical of bright, weakly saturated smoke. Finally, the two mask images are combined with a bitwise AND to obtain the final smoke mask image.
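Analogously to the flame step, a minimal Python/OpenCV sketch of Algorithm 2 is given below, using the stated thresholds (|S − V| greater than 140 and hue within (75, 105)). Whether the hue bounds are inclusive and the size of the morphological kernel are assumptions; the code illustrates the rule rather than reproducing the authors' implementation.

```python
import cv2
import numpy as np

def smoke_mask(visible_reg, sv_thresh=140, hue_lo=75, hue_hi=105):
    """Smoke mask from the registered visible image (cf. Algorithm 2)."""
    hsv = cv2.cvtColor(visible_reg, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)

    # Bright, weakly saturated pixels: |S - V| greater than 140.
    _, mask1 = cv2.threshold(cv2.absdiff(s, v), sv_thresh, 255,
                             cv2.THRESH_BINARY)

    # Hue channel restricted to the (75, 105) interval (OpenCV hue range 0-179).
    mask2 = cv2.inRange(h, hue_lo, hue_hi)

    # Morphological opening to remove speckle noise (3x3 kernel assumed).
    kernel = np.ones((3, 3), np.uint8)
    mask1 = cv2.morphologyEx(mask1, cv2.MORPH_OPEN, kernel)
    mask2 = cv2.morphologyEx(mask2, cv2.MORPH_OPEN, kernel)

    return cv2.bitwise_and(mask1, mask2)
```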
III. EXPERIMENTAL RESULTS AND ANALYSIS

The experiments are conducted using the DJI M300 RTK UAV platform with the H20T as the onboard camera. First, the fire video data required for the experiments were collected with the H20T camera; the collected video data were then processed on the ground side and the results analyzed. The algorithm in this paper is intended for all-weather fire detection, so the experiments are divided into two time periods: nighttime and daytime.

Considering practical conditions, all experimental data are real video streams captured by the experimental platform. It is important to note that if the flight speed is too fast, the infrared sensor of the H20T camera responds slowly and the picture lags; the visible and infrared video data used in this paper were therefore captured at a slower flight speed. It should also be noted that the FPS (frames per second) figure is measured from the moment the dual-light data are read until just before the result is visualized. In addition, after registration of the dual-light data, the visible image size changes from 1920×1080 to 300×332 and the infrared image size changes from 640×512 to 300×332. A computer with an i7-4790 CPU and 8 GB of RAM was used. The UAV platform and examples of the captured images are shown in Fig. 6.

Fig. 6: UAV platform and images captured. (a) UAV platform. (b) Images captured.

A. Test Results and Statistics

Five videos are used in this experiment, two captured at night and three during the daytime. Table I gives the daytime statistics at flight heights of 6 m, 10 m, and 15 m, and Table II gives the nighttime statistics at a flight height of 6 m.

TABLE I. STATISTICS OF DAYTIME TEST RESULTS

                                       day1_6     day2_10    day3_15
total_frame                            1199       1572       725
miss_fire_frame/fire_frame             33/289     76/835     25/125
miss_smoke_frame/smoke_frame           118/945    156/1467   146/725
false_frame (smoke)                    25         136        0
false_rate (false_frame/total_frame)   2.08%      8.65%      0.00%
True_miss_fire_rate                    11.42%     9.10%      20.00%
True_miss_smoke_rate                   12.48%     10.57%     20.13%
Accuracy_fire_rate                     97.24%     95.16%     96.55%
Accuracy_smoke_rate                    88.07%     81.21%     79.86%

In Table I, the True_miss_fire/smoke_rate is the ratio of missed frames to positive-sample frames, and the Accuracy_fire/smoke_rate is obtained by comparing the sum of false and missed frames with the total number of frames.
TABLE II. STATISTICS OF NIGHT TEST RESULTS

                             night1_6   night2_6
total_frame                  899        300
miss_fire_frame/fire_frame   3/899      0/122
false_frame                  0          0
false_rate                   0.00%      0.00%
True_miss_fire_rate          0.33%      0.00%
Accuracy_fire_rate           99.67%     100%

B. Nighttime Test Results and Analysis

For nighttime detection, only video data at a height of 6 m were collected, owing to the limitations of environmental conditions and safety concerns; experiments at other heights would require a more secure and reliable site. The experimental image data of this paper therefore form a varied dataset covering day and night, different flight heights, and two scenes. The first row of Fig. 7 shows the detection results during flight at an altitude of 6 m at night.

The nocturnal environment is usually a simple scene, so once a fire occurs it is a very obvious target in the image. In most cases, the fire detection task can be accomplished simply with color threshold rules designed on the visible images, but when complemented by infrared information, more potential interference targets can be filtered out. Flame detection works well with the combined visible and infrared information, and no false detections occur; the flame mask images also represent the flames well. The detection results show that the proposed method performs satisfactorily at night, while the FPS also meets the requirements of real-time detection.

C. Daytime Test Results and Analysis

For daytime detection, video stream data were captured at heights of 6 m, 10 m, and 15 m, respectively. Data collected at different altitudes are used for actual forest fire detection tasks; the range of scene the UAV can capture and the size of the target object both vary with flight altitude. The second row of Fig. 7 shows the detection results at a daytime height of 6 m, the third row shows the results at 10 m, and the last row shows the results at 15 m. It should be noted that the detection speed of the proposed method reaches around 200 FPS.

The daytime environment is relatively complex and contains more interference targets. The detection results show that flame detection can eliminate the interference of red leaves through the infrared information and performs well. Smoke detection performs comparatively poorly: for smaller and thinner smoke, the mask image is sometimes incomplete. In addition, the missed detection rates for smoke and flame are relatively high, which is likely related to the target becoming relatively small as the UAV height increases, or to the target being very small to begin with. In general, the proposed method can essentially meet the requirements of simplicity, speed, and accuracy.

D. Comparison of Different Methods

To further demonstrate the effectiveness of the proposed method, comparative experiments are conducted for two cases. The first verifies that adding infrared information is effective, by removing the infrared rule from the proposed method; the second contrasts the proposed method with the color rules designed in [8].

The experimental results are shown in Fig. 8. The first row of Fig. 8 shows the detection results using the color rule designed in [8], which applies only to flame detection; the smoke mask is therefore entirely black. In the flame mask image, many false flame detections occur under the interference of yellow and red leaves whose colors are similar to flame. The second row of Fig. 8 shows the results of the proposed method without infrared information; there are still many false flame detections. The last row of Fig. 8 shows the results of the full proposed method. Under the combined effect of visible and infrared information, interference from flame-colored targets is effectively avoided, which to some extent demonstrates the effectiveness of the proposed dual-light forest fire detection method.

IV. CONCLUSIONS

Considering the realistic UAV-based forest fire detection problem, an effective detection algorithm must respond quickly, simply, and accurately after a forest fire occurs. We therefore propose a dual-light vision-based forest fire detection method. We combine the useful information of visible and infrared images to obtain the flame and smoke mask images by thresholding, and thereby complete the detection task. We also conducted experimental validation and analysis using videos taken by a DJI M300 UAV with an H20T onboard camera. The proposed method achieves fast, simple, and accurate fire detection, with a processing speed of around 200 FPS, a flame detection accuracy above 95%, and a smoke detection accuracy above 80%.

These preliminary experiments have, to some extent, demonstrated the effectiveness of using dual-light images for forest fire detection. The extraction of the smoke mask images in this paper did not use infrared information, because the flame and smoke targets in the self-captured dual-light data are too small to reveal smoke-related information in the infrared images. Moreover, using only threshold constraints to extract the smoke mask may lack robustness. Future work will focus on improving the accuracy and effectiveness of smoke detection, and on considering deep learning and other effective methods. Much work remains to further improve the performance of the algorithm and to deploy it on the UAV platform.

Fig. 7: The detection results. Columns: (a) Original image, (b) Flame mask, (c) Smoke mask, (d) Detection result.

Fig. 8: The detection results of different methods. Columns: (a) Original image, (b) Flame mask, (c) Smoke mask, (d) Detection result.
ACKNOWLEDGMENT

This work was supported in part by the National Natural Science Foundation of China (No. 61833013 and No. 61903297), the Young Talent Fund of the University Association for Science and Technology in Shaanxi, China (No. 20210114), and the Natural Sciences and Engineering Research Council of Canada. We would also like to thank all the lab colleagues for their great support and help during the outdoor experimental flight tests.

REFERENCES

[1] A. Gaur, A. Singh, A. Kumar and A. Kumar, "Video flame and smoke-based fire detection algorithms: A literature review," Fire Technology, 2020, vol. 56, no. 5, pp. 1943-1980.
[2] P. Barmpoutis, P. Papaioannou, K. Dimitropoulos and N. Grammalidis, "A review on early forest fire detection systems using optical remote sensing," Sensors, 2020, vol. 20, no. 22, p. 6442.
[3] C. Yuan, Y. M. Zhang and Z. X. Liu, "A survey on technologies for automatic forest fire monitoring, detection, and fighting using unmanned aerial vehicles and remote sensing techniques," Canadian Journal of Forest Research, 2015, vol. 45, no. 7, pp. 783-792.
[4] H. Y. Sun, G. M. Song, Z. Wei, Y. Zhang and S. S. Liu, "Bilateral teleoperation of an unmanned aerial vehicle for forest fire detection," 2017 IEEE International Conference on Information and Automation (ICIA), 2017, pp. 586-591.
[5] C. Yuan, Z. X. Liu and Y. M. Zhang, "Fire detection using infrared images for UAV-based forest fire surveillance," 2017 International Conference on Unmanned Aircraft Systems (ICUAS), 2017, pp. 567-572.
[6] N. M. Cang, W. J. Yu and X. Y. Wu, "Smoke detection for early forest fire in aerial photography based on GMM background and wavelet energy," 2021 IEEE International Conference on Power Electronics Computer Applications (ICPECA), 2021, pp. 763-765.
[7] H. D. Ngoc and H. N. Trung, "Aerial forest fire surveillance - Evaluation of forest fire detection model using aerial videos," 2019 International Conference on Advanced Technologies for Communications (ATC), 2019, pp. 142-148.
[8] D. Dzigal, A. Akagic, E. Buza, A. Brdjanin and N. Dardagan, "Forest fire detection based on color spaces combination," 2019 11th International Conference on Electrical and Electronics Engineering (ELECO), 2019, pp. 595-599.
[9] X. S. Yang, L. B. Tang, H. S. Wang and X. X. He, "Early detection of forest fire based on unmanned aerial vehicle platform," 2019 IEEE International Conference on Signal, Information and Data Processing (ICSIDP), 2019, pp. 1-4.
[10] Z. T. Jiao, Y. M. Zhang, J. Xin, Y. M. Yi, D. Liu and H. Liu, "Forest fire detection with color features and wavelet analysis based on aerial imagery," 2018 Chinese Automation Congress (CAC), 2018, pp. 2206-2211.
[11] C. Yuan, Z. X. Liu and Y. M. Zhang, "UAV-based forest fire detection and tracking using image processing techniques," 2015 International Conference on Unmanned Aircraft Systems (ICUAS), 2015, pp. 639-643.
[12] S. Ma, Y. M. Zhang, J. Xin, Y. M. Yi, D. Liu and H. Liu, "An early forest fire detection method based on unmanned aerial vehicle vision," 2018 Chinese Control and Decision Conference (CCDC), 2018, pp. 6344-6349.
[13] F. M. A. Hossain, Y. M. Zhang, C. Yuan and C. Y. Su, "Wildfire flame and smoke detection using static image features and artificial neural network," 2019 1st International Conference on Industrial Artificial Intelligence (IAI), 2019, pp. 1-6.
[14] F. M. A. Hossain, Y. M. Zhang and M. A. Tonima, "Forest fire flame and smoke detection from UAV-captured images using fire-specific color features and multi-color space local binary pattern," Journal of Unmanned Vehicle Systems, 2020, vol. 8, no. 4, pp. 285-309.
[15] T. Gupta, H. Y. Liu and B. Bhanu, "Early wildfire smoke detection in videos," 2020 25th International Conference on Pattern Recognition (ICPR), 2021, pp. 8523-8530.
[16] Y. M. Luo, L. Zhao, P. Z. Liu and D. T. Huang, "Fire smoke detection algorithm based on motion characteristic and convolutional neural networks," Multimedia Tools and Applications, 2018, vol. 77, no. 12, pp. 15075-15092.
[17] D. Kinaneva, G. Hristov, J. Raychev and P. Zahariev, "Application of artificial intelligence in UAV platforms for early forest fire detection," 2019 27th National Conference with International Participation (TELECOM), 2019, pp. 50-53.
[18] A. Shamsoshoara, F. Afghah, A. Razi, L. M. Zheng, P. Z. Fulé and E. Blasch, "Aerial imagery pile burn detection using deep learning: The FLAME dataset," Computer Networks, 2021, vol. 193, p. 108001.

