


Asian Journal of Computer Science and Technology
ISSN: 2249-0701 Vol.10 No.2, 2021, pp.14-19
© The Research Publication, www.trp.org.in
DOI: https://doi.org/10.51983/ajcst-2021.10.2.2883

Fire Detection Using Image Processing


B. Swarajya Lakshmi
Assistant Professor, Department of Computer Science and Engineering,
Santhiram Engineering College, Nandyal, Andhra Pradesh, India
E-mail: [email protected]

Abstract - Fire disasters have always been a threat to homes and businesses even with the various systems in place to prevent them. They cause property damage, injuries and even death. Preparedness is vital when dealing with fires. They spread uncontrollably and are difficult to contain. To contain them, it is necessary for the fire to be detected early. Image fire detection relies heavily on an algorithmic analysis of images. However, in common detection algorithms the accuracy is lower, the detection is delayed, and a large amount of computation is required, including image features being extracted manually. Therefore, in this paper a novel image fire detection approach based on an advanced object detection CNN model, YOLO v3, is proposed. The average precision of the algorithm based on YOLO v3 reaches 81.76%, and it also has stronger robustness of detection performance, thereby satisfying the requirements of real-time detection.
Keywords: Fire Detection, Image Processing, Machine Learning, YOLO v3

I. INTRODUCTION

Fire alarms are present in a lot of buildings, industrial parks and workplaces. These fire alarms are usually based on sensors which detect certain characteristics of fire such as smoke, radiation, or heat. However, these alarms depend on the fire particles reaching the given sensor. Apart from the inherent delay in detecting the fire due to the time taken for particles to reach the sensor, these alarms are basic and do not provide crucial information such as the intensity, location and size of the fire. Many of the places with a fire alarm system also have a surveillance system. These surveillance cameras can be incorporated into the fire detection process using object detection, which has become an important area of research. The object detection is based on image processing. Vision-based fire detection systems have several advantages. Already installed surveillance cameras can be used, and if they are not present, CCD (charge-coupled device) cameras can be installed, which are fairly inexpensive. The most important advantage is the detection time, because vision-based systems do not require smoke or heat to diffuse. Another advantage is the area covered: if the camera is placed at a vantage point, it can cover a lot of open space, which is a very big improvement over conventional sensors, which are better suited to confined spaces. Lastly, in the case of a false alarm, the informed authority can check the surveillance feed to monitor the fire location.

Fig. 1 Graphical image annotation tool used to label object bounding boxes in images

AJCST Vol.10 No.2 July-December 2021 14



II. THE PROPOSED FRAMEWORK

A. Object Detection Using YOLOv3

You Only Look Once (YOLO) is an effective real-time object detection system. In YOLO, a single neural network is applied to the full image. The network divides the image into regions and predicts bounding boxes and probabilities for each region; the predicted probabilities weight the bounding boxes.

YOLO has a lot of advantages over other systems. The predictions are based on the global context of the image, and it makes predictions with a single network evaluation, unlike R-CNN. This makes it a lot faster than R-CNN and even Fast R-CNN. For our project we will be using Tiny YOLO, since we will be running on a Raspberry Pi. The Tiny-YOLO architecture is around 442% faster than other YOLO versions. Its small model size and fast inference speed make it suitable for an embedded deep learning device such as a Raspberry Pi.

The Common Objects in Context (COCO) dataset, which is one of the most widely used datasets, does not provide support for fire detection, so we have to train a custom model. This can be done by creating a custom dataset. First, we collected images which fit our criterion, which is having a fire. Then we labelled them using a graphical image annotation tool.

Fig. 2 GUI used to create bounding boxes in images
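To make the region-division idea concrete, here is an illustrative sketch (not the paper's code) of how an object's center point maps to the YOLO grid cell responsible for predicting it, assuming a 416×416 input and a 13×13 grid:

```python
def grid_cell(cx, cy, img_size=416, grid=13):
    # YOLO divides the image into a grid; the cell containing the
    # object's center is responsible for predicting that object.
    stride = img_size / grid          # 416 / 13 = 32 pixels per cell
    return int(cx // stride), int(cy // stride)

# an object centered at pixel (200, 120) falls in grid cell (6, 3)
print(grid_cell(200, 120))  # -> (6, 3)
```

Each such cell then predicts bounding boxes whose probabilities weight the final detections.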

III. YOLO v3 ALGORITHM

YOLO v3 uses Darknet-53 to generate the small-scale feature map, which is down-sampled 32 times from the original image. For example, if the size of the original image is 416×416, the size of this feature map is 13×13. The small-scale feature map is used to detect large objects. By up-sampling the small-scale feature map and concatenating it with a feature map from an earlier layer, YOLO v3 generates a large-scale feature map. Small objects are detected from the large-scale feature map, which combines the complex features of the deeper layers with the location information of the earlier layers. The three scales of feature maps are down-sampled 8, 16, and 32 times from the original image. In ResN there are N Res units connected in series. The concatenation operation is denoted by concat; unlike the Add operation in residual layers, concat expands the dimensions of the feature maps.
Fig. 3 Diagram of the fire detection algorithm based on YOLO v3
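The three detection scales described above can be checked with simple arithmetic (a sketch, assuming a 416×416 input):

```python
img = 416
# strides of the three YOLO v3 detection scales
for stride in (8, 16, 32):
    # each scale is the input down-sampled by its stride
    print(img // stride)  # prints 52, then 26, then 13
```

The 13×13 map detects large objects, while the 52×52 map, built by up-sampling and concatenation, detects small ones.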




On the other hand, the Add operation simply adds the feature maps together without changing their dimensions. To predict a multilabel classification per bounding box, YOLO v3 uses independent sigmoid functions. This means that each bounding box can belong to multiple categories, such as fire and smoke, which makes this design useful for detecting regions where fire and smoke appear simultaneously.

Fig. 4 Diagram of the small-scale feature map based on YOLO v3
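A minimal sketch (with made-up logits) of why independent sigmoids permit multilabel output: each class score is squashed separately, so both "fire" and "smoke" can exceed the threshold at once, unlike a softmax, which forces the class scores to compete:

```python
import math

def sigmoid(x):
    # independent per-class activation used by YOLO v3
    return 1.0 / (1.0 + math.exp(-x))

# hypothetical per-class logits for one predicted bounding box
logits = {"fire": 2.0, "smoke": 1.5}
scores = {cls: sigmoid(z) for cls, z in logits.items()}

# with independent sigmoids, both classes can pass a 0.5 threshold
labels = [cls for cls, s in scores.items() if s > 0.5]
print(labels)  # -> ['fire', 'smoke']
```

This is exactly the situation the design targets: a region containing both fire and smoke simultaneously.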

1. CBL: The smallest component in the YOLOv3 network structure, consisting of three operations in sequence: Conv, BN, and Leaky ReLU.
2. Res unit: Borrowed from ResNet; the residual structure allows the network to be built deeper.
3. ResN: One CBL followed by X residual components constitutes a large component in YOLOv3. The CBL in front of each Res module performs down sampling, so after successive Res modules the resulting feature map sizes are 416 → 52 → 26 → 13.

IV. TRAINING ALGORITHM

A. Fire Image Dataset

A large amount of data is required in the fire image dataset for training algorithms based on CNNs. However, current small-scale image/video fire databases cannot meet this need. Table I shows some small-scale image/video datasets. Therefore, in this paper we collected and labelled 1400 such images to give a good foundation for our dataset. For convenience, to test the code and to create a custom model, we used Google Colab. Our custom object detector was trained on this dataset using Darknet. The following output is one of the examples observed.

Fig. 5 Object detection output from Google Colab
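Darknet-style YOLO labels store one object per image line as `class x_center y_center width height`, with all coordinates normalized to [0, 1]. As a hedged illustration (the class index and values below are hypothetical), a labelled dataset can be sanity-checked with a small parser:

```python
def parse_yolo_label(line):
    # one Darknet label line: "class cx cy w h", coordinates in [0, 1]
    cls, cx, cy, w, h = line.split()
    cx, cy, w, h = map(float, (cx, cy, w, h))
    assert all(0.0 <= v <= 1.0 for v in (cx, cy, w, h)), "coordinates must be normalized"
    return int(cls), (cx, cy, w, h)

# e.g. class 0 (say, "fire") centered slightly left of and below the middle
print(parse_yolo_label("0 0.45 0.60 0.30 0.25"))  # -> (0, (0.45, 0.6, 0.3, 0.25))
```

Running such a check over all 1400 label files before training catches malformed annotations early.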
TABLE I SMALL SCALE FIRE IMAGE/VIDEO DATABASES

Institutions                                Format   Object                     Website
Kaggle                                      Image    Fire, Smoke, Disturbance   https://www.kaggle.com/pylake1337/firedataset
National Fire Research Laboratory, NIST     Image    Fire                       https://www.nist.gov/topics/fire
State Key Laboratory of Fire Science,
University of Science and Technology        Image    Fire, Smoke                https://smoke.ustc.edu.cn/datasets.htm




V. RESULTS AND DISCUSSION

A. Performance of Testset1

Testset1 is a benchmark fire image database consisting of 700 images: 578 fire images and 122 images containing no fire. The number of videos played was 8, the number of true detections of fire in the videos was 59, the number of false detections was 19, and the number of "true false" cases was 5, giving a true detection percentage of 71.08%. In Testset1 the number of true detections is less than the expected number of true detections; therefore, a more detailed evaluation is conducted on Testset2.

Fig. 7 Output of Testset1

We tested our trained model on a video of fire; the output image shows that the fire is detected by a red rectangle labelled with the percentage confidence of the detection. However, the output on Testset1 shows that the model fails to detect fire shadowed by smoke, so it is important that fire covered by smoke should also be detected.

B. Performance of Testset2

Testset2 is a benchmark fire image database consisting of 1400 images, which includes 478 smoke samples and 896 fire samples. Testset2 is very challenging, as it collects images from more scenarios containing a large number of smoke-like and fire-like disturbances. It is therefore more suitable for evaluating the performance of the proposed algorithms.

Fig. 6 Structure of the fire image dataset

TABLE II STRUCTURE OF THE FIRE IMAGE DATASET

                  Objects             Disturbances
Scenario      Smoke     Fire     Smoke-like   Fire-like   Images
Indoor          376      576          541         539        634
Outdoor         295      793          235         696        766
Total           671     1359          776        1235       1400

Fig. 8 Output of Testset2

The previous output failed to detect the fire overshadowed by smoke, so we retrained our model after adding the smoke dataset to the fire dataset. The output on Testset2 shows that both the fire and the smoke are detected.

C. Quantitative Analysis of Results

A common metric for measuring an object detection algorithm is intersection over union (IOU). IOU measures the overlap between the ground truth annotations and the predicted bounding boxes. In object detection, the model predicts multiple bounding boxes for each object and, based on the confidence score of each bounding box, removes unnecessary boxes below a threshold value.

IOU = area of intersection / area of union
F1 = weighted average of precision and recall
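As a hedged illustration (not code from the paper), IOU for two axis-aligned boxes given as (x1, y1, x2, y2) corners can be computed as:

```python
def iou(box_a, box_b):
    # boxes as (x1, y1, x2, y2) with x1 < x2 and y1 < y2
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# identical boxes overlap fully
print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # -> 1.0
# partial overlap: intersection 25, union 100 + 100 - 25 = 175
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # -> 0.142857...
```

Predicted boxes whose IOU with a higher-confidence box exceeds the threshold are the ones suppressed during post-processing.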




1. TP = True positive
2. TN = True negative
3. FP = False positive
4. FN = False negative

True Positives (TP): correctly predicted positive values, meaning the actual class is yes and the predicted class is also yes.

True Negatives (TN): correctly predicted negative values, meaning the actual class is no and the predicted class is also no.

(False positives and false negatives occur when the actual class contradicts the predicted class.)

False Positives (FP): the actual class is no but the predicted class is yes.

False Negatives (FN): the actual class is yes but the predicted class is no.

Precision: Precision measures how accurate the predictions are, i.e., the percentage of predictions that are correct: how many of the predictions the model made were right.

Recall: Recall is the ratio of correctly predicted positive observations to all observations in the actual class.

Fig. 9 Ratio of correctly predicted positive observations

The total number of times these images were played was 203. Out of those 203 plays, the true positive value was 159 and the false negative value was 13. The false positive and true negative values were 24 and 7 respectively. The accuracy came out to be 81%.

D. Real-Time Output

Fig. 10 Output of our project in real time

The Raspberry Pi is connected to the internet and the program runs on the Pi, showing an image preview from the camera that is used to detect fire. The camera captures a frame every 3 seconds; as soon as fire is detected, a rectangle with the percentage of fire detected appears, and the program sends a notification to the phone through the Pushover API. The notification is sent with emergency priority, so it will not go away until the user acknowledges it.
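The notification step can be sketched as follows (illustrative only: the token and user key are placeholders, and per Pushover's API, emergency priority 2 requires `retry` and `expire` parameters so the alert repeats until acknowledged):

```python
import urllib.parse
import urllib.request

def build_pushover_payload(message):
    # Emergency priority (2) keeps re-alerting until the user acknowledges.
    return {
        "token": "APP_TOKEN_PLACEHOLDER",   # hypothetical application token
        "user": "USER_KEY_PLACEHOLDER",     # hypothetical user key
        "message": message,
        "priority": 2,                      # emergency priority
        "retry": 30,                        # re-alert every 30 seconds
        "expire": 3600,                     # stop retrying after an hour
    }

def send_alert(message):
    # POSTs the payload to Pushover's message endpoint (network call)
    data = urllib.parse.urlencode(build_pushover_payload(message)).encode()
    req = urllib.request.Request("https://api.pushover.net/1/messages.json", data=data)
    return urllib.request.urlopen(req)

payload = build_pushover_payload("Fire detected!")
print(payload["priority"])  # -> 2
```

In the deployed system, `send_alert` would be called whenever a frame crosses the detection threshold.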

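Using the counts reported above (TP = 159, FN = 13, FP = 24, TN = 7 over 203 plays), the standard metrics can be reproduced as a quick check (a sketch, not the paper's code):

```python
tp, fn, fp, tn = 159, 13, 24, 7
total = tp + fn + fp + tn            # 203 plays in total

accuracy = (tp + tn) / total         # (159 + 7) / 203, roughly 0.818, i.e. ~81%
precision = tp / (tp + fp)           # roughly 0.869
recall = tp / (tp + fn)              # roughly 0.924
f1 = 2 * precision * recall / (precision + recall)  # roughly 0.896

print(f"accuracy={accuracy:.3f} precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```

The computed accuracy matches the ~81% figure reported in the text.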



VI. CONCLUSION

In chapter one, we mentioned the disadvantages of the existing fire detectors and how they have difficulties detecting fires that are out of their range. We also stated the objectives and scope of our project. In chapter two we presented a literature survey based on different previous research papers. In the third chapter we explained how this project uses a CNN to detect fire and described object detection using YOLOv3; we also ran it on some image samples, showed the output, and explained the working of the system with the help of a block diagram and a flowchart. In the fourth chapter we explained what type of camera we will be using for images, the use of GPS, and how we use Tiny YOLO for implementation on the Raspberry Pi. The advanced object detection CNN YOLO v3 is used to improve the performance of image fire detection technology and to develop image fire detection algorithms. With the proposed algorithms, complex image fire features can be extracted automatically and fire can be detected in different scenes. The results of the evaluation experiments are as follows.

1. In Testset1 the number of true detections is less than the expected number of true detections.
2. The algorithm based on YOLO v3 is the most accurate, with 81.7% accuracy, detects fire the most quickly, and is the most robust.
3. True detection on Testset2 has higher accuracy than on the previous Testset1.


