
Fire Safety Journal 44 (2009) 147–158

journal homepage: www.elsevier.com/locate/firesaf

Fire detection in video sequences using a generic color model


Turgay Çelik, Hasan Demirel
Department of Electrical and Electronic Engineering, Eastern Mediterranean University, Gazimağusa, TRNC, Mersin 10, Turkey

Article history: Received 8 December 2006; received in revised form 7 May 2008; accepted 8 May 2008; available online 7 July 2008.

Keywords: Fire detection; Generic color model; Image processing

Abstract: In this paper, a rule-based generic color model for flame pixel classification is proposed. The proposed algorithm uses the YCbCr color space to separate the luminance from the chrominance more effectively than color spaces such as RGB or rgb. The performance of the proposed algorithm is tested on two sets of images, one of which contains fire and the other fire-like regions. The proposed method achieves up to a 99% fire detection rate. The results are compared with two other methods in the literature, and the proposed method is shown to have both a higher detection rate and a lower false alarm rate. Furthermore, the proposed color model can be used for real-time fire detection in color video sequences; we also present results for the segmentation of fire in video using only the color model proposed in this paper.

© 2008 Elsevier Ltd. All rights reserved.

1. Introduction

Fire detection systems are among the most important components of surveillance systems used to monitor buildings and the environment as part of an early warning mechanism that preferably reports the start of a fire. Currently, almost all fire detection systems use built-in sensors that depend primarily on the reliability and the positional distribution of the sensors. The sensors must be distributed densely for a high-precision fire detection system. In a sensor-based fire detection system, coverage of large areas in outdoor applications is impractical due to the requirement of a regular distribution of sensors in close proximity.

Due to the rapid developments in digital camera technology and video processing techniques, there is a strong trend toward replacing conventional fire detection techniques with computer vision-based systems. In general, computer vision-based fire detection systems employ three major stages [1–4]. The first stage is flame pixel classification; the second is moving object segmentation; and the last is the analysis of candidate regions. This analysis is usually based on two figures of merit: the shape of the region and the temporal changes of the region.

The fire detection performance depends critically on the performance of the flame pixel classifier, which generates the seed areas on which the rest of the system operates. The flame pixel classifier is thus required to have a very high detection rate and preferably a low false alarm rate. Few algorithms in the literature deal directly with flame pixel classification. Flame pixel classification has been considered in both grayscale and color video sequences. Krull et al. [5] used low-cost CCD cameras to detect fires in the cargo bay of long-range passenger aircraft. The method uses statistical features based on grayscale video frames, including mean pixel intensity, standard deviation, and second-order moments, along with non-image features such as humidity and temperature, to detect fire in the cargo compartment. The system is used commercially in parallel with standard smoke detectors to reduce the false alarms caused by the smoke detectors. The system also provides a visual inspection capability which helps the aircraft crew to confirm the presence or absence of fire. However, the statistical image features are not considered for use as part of a standalone fire detection system. Most of the work on flame pixel classification in color video sequences is rule based. Chen et al. [1] used raw R, G, and B information and developed a set of rules to classify flame pixels. Instead of using a rule-based color model as in Chen et al., Töreyin et al. [2] used a mixture of Gaussians in RGB space obtained from a training set of flame pixels. In a more recent paper, the same authors employed Chen's flame pixel classification method along with motion information and Markov field modeling of the flame flicker process [3]. Marbach et al. [6] used the YUV color model for the representation of video data, where the time derivative of the luminance component Y was used to declare candidate fire pixels, and the chrominance components U and V were used to classify the candidate pixels as belonging to the fire sector or not. In addition to luminance and chrominance, they incorporated motion into their work. They report that their algorithm detects less than one false alarm per week; however, they do not mention the number of tests conducted. Horng et al. [7] used the HSI color model to roughly segment fire-like regions for brighter and darker environments. Initial segmentation is followed by removing lower-intensity and lower-saturation pixels

Corresponding author. Tel.: +90 392 6301436; fax: +90 392 3650240. E-mail address: [email protected] (T. Çelik).

0379-7112/$ - see front matter © 2008 Elsevier Ltd. All rights reserved. doi:10.1016/j.firesaf.2008.05.005

in order to get rid of spurious fire-like regions such as smoke. They also introduced a metric based on binary contour difference images to classify the burning degree of fire flames into classes such as "no fire", "small", "medium", and "big" fires. They report a 96.94% detection rate, together with results including false positives and false negatives for their algorithms. However, there is no attempt to reduce the false positives and false negatives by changing their threshold values. Celik et al. [4] used normalized RGB (rgb) values for a generic color model for the flame. The normalized RGB is proposed in order to alleviate the effects of changing illumination. The generic model is obtained using statistical analysis carried out in the r–g, r–b, and g–b planes. Due to

Fig. 1. Original RGB color images in column (a), and R, G, and B channels in columns (b)–(d), respectively.

the distribution nature of the sample fire pixels in each plane, three lines are used to specify a triangular region representing the region of interest for the fire pixels. Triangular regions in the respective r–g, r–b, and g–b planes are therefore used to classify a pixel: a pixel is declared to be a fire pixel if it falls into all three of the triangular regions in the r–g, r–b, and g–b planes. Even though the normalized RGB color space overcomes to some extent the effects of variations in illumination, further improvement can be achieved by using the YCbCr color space, which makes it possible to separate luminance/illumination from chrominance.

In this paper we propose to use the YCbCr color space to construct a generic chrominance model for flame pixel classification. In addition to translating the rules developed in RGB and normalized rgb to the YCbCr color space, new rules are developed in YCbCr color space which further alleviate the harmful effects of changing illumination and improve detection performance. The flame pixel classification rates of the proposed system, with the new rules and the new generic chrominance model, are compared with previously introduced flame pixel classification models. The proposed model gives a 99.0% correct flame pixel classification rate with a 31.5% false alarm rate. This is a significant improvement over other methods in the literature.

2. Classification of flame pixels

Each digital color image is composed of three color planes: red, green, and blue (R, G, and B). Each color plane corresponds to a color receptor in the human eye working at a different wavelength. The combination of the RGB color planes gives devices the ability to represent color in a digital environment. Each color plane is quantized into discrete levels; generally 256 quantization levels (8 bits per color plane) are used for each plane. For instance, white is represented by (R, G, B) = (255, 255, 255) and black by (R, G, B) = (0, 0, 0). A color image consists of pixels, where each pixel is represented by a spatial location (x, y) in a rectangular grid and a color vector (R(x, y), G(x, y), B(x, y)) corresponding to that location. Each pixel in a color image containing a

Table 1
Mean values of R, G, and B planes of fire regions for images given in Fig. 2

Row index in Fig. 2   Mean of R   Mean of G   Mean of B
1                     218         137         97
2                     152         84          75
3                     211         158         105

Fig. 2. Original RGB images are given in column (a) and corresponding fire regions, manually labeled with green color, are given in column (b).

fire blob (region containing fire), the value of the Red channel is greater than that of the Green channel, and the value of the Green channel is greater than that of the Blue channel at the same spatial location. Furthermore, the flame color has high saturation in the Red channel [1,4]. For instance, in Fig. 1 column (a) shows samples of digital color images, and columns (b)–(d) show the R, G, and B color planes (channels), respectively. It can be noticed from Fig. 1 that for the fire regions the R channel has higher intensity values than the G
Fig. 3. RGB color images in column (a) and its Y, Cb, and Cr channels in columns (b)–(d), respectively.
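The R > G > B ordering reported in Table 1 is easy to verify numerically. The sketch below (NumPy; the array names and the synthetic patch are illustrative, not from the paper) computes per-region channel means the way Table 1 does:

```python
import numpy as np

def channel_means(rgb_image: np.ndarray, fire_mask: np.ndarray):
    """Mean R, G, B over the pixels selected by a boolean fire mask.

    rgb_image: H x W x 3 uint8 array; fire_mask: H x W bool array.
    Both names are illustrative, not from the paper.
    """
    region = rgb_image[fire_mask]          # K x 3 matrix of fire pixels
    r_mean, g_mean, b_mean = region.mean(axis=0)
    return r_mean, g_mean, b_mean

# Synthetic example: a flame-colored patch inside a dark background.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3] = (218, 137, 97)             # row 1 of Table 1
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
r, g, b = channel_means(img, mask)
assert r > g > b                           # the ordering used by the rules in [1,4]
```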

channel, and the G channel has higher intensity values than the B channel.

In order to explain this idea better, we picked sample images from Fig. 1(a) and segmented their fire pixels, shown in green in Fig. 2(b). We then calculated the mean values of the R, G, and B planes in the segmented fire regions of the original images. The results are given in Table 1 for the images in Fig. 2. It is clear that, on average, fire pixels show the characteristic that their R intensity value is greater than their G value, and their G intensity value is greater than their B value.

Even though the RGB color space can be used for pixel classification, it has the disadvantage of illumination dependence: if the illumination of the image changes, the fire pixel classification rules cannot perform well. Furthermore, it is not possible to separate a pixel's value into intensity and chrominance. The chrominance can be used to model the color of fire rather than its intensity, which gives a more robust representation of fire pixels. It is therefore necessary to transform the RGB color space into one of the color spaces where the separation between intensity and chrominance is more discriminative. Because of the linear conversion between the RGB and YCbCr color spaces, we use the YCbCr color space to model fire pixels. The conversion from RGB to YCbCr is formulated as follows [8]:

$$\begin{bmatrix} Y \\ Cb \\ Cr \end{bmatrix} = \begin{bmatrix} 0.2568 & 0.5041 & 0.0979 \\ -0.1482 & -0.2910 & 0.4392 \\ 0.4392 & -0.3678 & -0.0714 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} + \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix} \quad (1)$$

where Y is luminance, and Cb and Cr are the ChrominanceBlue and ChrominanceRed components, respectively. The range of Y is [16, 235]; the ranges of Cb and Cr are [16, 240].

For a given image, one can define the mean values of the three components in the YCbCr color space as

$$Y_{mean} = \frac{1}{K}\sum_{i=1}^{K} Y(x_i, y_i), \quad Cb_{mean} = \frac{1}{K}\sum_{i=1}^{K} Cb(x_i, y_i), \quad Cr_{mean} = \frac{1}{K}\sum_{i=1}^{K} Cr(x_i, y_i) \quad (2)$$

where (x_i, y_i) is the spatial location of the i-th pixel, Y_mean, Cb_mean, and Cr_mean are the mean values of the luminance, ChrominanceBlue, and ChrominanceRed channels, and K is the total number of pixels in the image.

The rules defined for the RGB color space, i.e. R ≥ G ≥ B and R ≥ R_mean [4,1], can be translated into YCbCr space as

$$Y(x, y) > Cb(x, y) \quad (3)$$

$$Cr(x, y) > Cb(x, y) \quad (4)$$

where Y(x, y), Cb(x, y), and Cr(x, y) are the luminance, ChrominanceBlue, and ChrominanceRed values at the spatial location (x, y). Eqs. (3) and (4) imply, respectively, that the flame luminance should be greater than the ChrominanceBlue, and that the ChrominanceRed should be greater than the ChrominanceBlue. Eqs. (3) and (4) can be interpreted as a consequence of the fact that the flame has high saturation in the red color channel (R). In Fig. 3, we show the RGB images and the corresponding Y, Cb, and Cr channel responses for the images shown in Fig. 1. The validity of Eqs. (3) and (4) can easily be observed for the fire regions.

Similar to Table 1, we picked sample images from Fig. 1(a) and segmented their fire pixels as shown in Fig. 2(b). We then calculated the mean values of the Y, Cb, and Cr planes in the segmented fire regions

Table 2
Mean values of Y, Cb, and Cr planes of fire regions of images given in Fig. 2

Row index in Fig. 2   Mean of Y   Mean of Cb   Mean of Cr
1                     151         98           166
2                     125         114          158
3                     160         97           155
Fig. 4. RGB input image and its Y, Cb, and Cr channels: (a) original RGB image, (b) Y channel, (c) Cb channel, and (d) Cr channel.
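Eqs. (1)–(4) translate directly into a few vectorized operations. A minimal NumPy sketch (function names are ours, not the authors'):

```python
import numpy as np

# BT.601 studio-swing conversion matrix and offset from Eq. (1).
M = np.array([[ 0.2568,  0.5041,  0.0979],
              [-0.1482, -0.2910,  0.4392],
              [ 0.4392, -0.3678, -0.0714]])
OFFSET = np.array([16.0, 128.0, 128.0])

def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
    """Apply Eq. (1) to an H x W x 3 RGB image, returning float YCbCr."""
    return rgb.astype(np.float64) @ M.T + OFFSET

def rules_3_and_4(ycbcr: np.ndarray) -> np.ndarray:
    """Eqs. (3) and (4): Y > Cb and Cr > Cb, evaluated per pixel."""
    y, cb, cr = ycbcr[..., 0], ycbcr[..., 1], ycbcr[..., 2]
    return (y > cb) & (cr > cb)

# Toy flame-colored image; Eq. (2) is just per-channel averaging.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[:] = (255, 100, 20)
ycc = rgb_to_ycbcr(img)
y_mean, cb_mean, cr_mean = ycc.reshape(-1, 3).mean(axis=0)
print(rules_3_and_4(ycc).all())
```

For the flame-like color (255, 100, 20), Eq. (1) gives Y ≈ 133.9, Cb ≈ 69.9, Cr ≈ 201.8, so both rules hold.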

of the original images. The results are given in Table 2 for the images in Fig. 2. It is clear that, on average, the fire pixels show the characteristic that their Y value is greater than their Cb value, and their Cr value is greater than their Cb value.

Besides these two rules (Eqs. (3) and (4)), since the flame region is generally the brightest region in the observed scene, the mean values of the three channels over the whole image, Y_mean, Cb_mean, and Cr_mean, contain valuable information. For the flame region, the value of the Y component is greater than the mean Y component of the overall image, while the value of the Cb component is in general smaller than the mean Cb value of the overall image. Furthermore, the Cr component of the flame region is greater than the mean Cr component. These observations, verified over numerous experiments with images containing fire regions, are formulated as the following rule:

$$F(x, y) = \begin{cases} 1, & \text{if } Y(x, y) > Y_{mean},\ Cb(x, y) < Cb_{mean},\ Cr(x, y) > Cr_{mean} \\ 0, & \text{otherwise} \end{cases} \quad (5)$$

where F(x, y) = 1 indicates that a pixel satisfying the condition in Eq. (5) is labeled as a fire pixel.

Fig. 4 shows the three channels for a representative image containing fire in more detail; the rule in (5) can be easily verified. It can also be observed from the representative fire image (Fig. 4(c) and (d)) that there is a significant difference between the Cb and Cr components of the flame pixels: the Cb component is predominantly "black" while the Cr component is predominantly "white". This fact is formulated as a rule as follows:

$$F_t(x, y) = \begin{cases} 1, & \text{if } |Cb(x, y) - Cr(x, y)| \geq t \\ 0, & \text{otherwise} \end{cases} \quad (6)$$

where t is a constant.

The value of t is determined using a receiver operating characteristics (ROC) [9] analysis of Eq. (6) on an image set consisting of 1000 images. Fig. 5 shows a few samples from this set. Note that the set contains a variety of images, including ones with changing illumination and lighting. Furthermore, the images are selected so that fire-like colored objects are also included in the set, for instance the Sun, which produces a fire-like color. Some images in the set do not contain any fire. The image set consists of random images collected from the internet, from both indoor and outdoor environments. The ROC curve for the image set is given in Fig. 6, where hand-segmented fire images are used to create the ROC curve. The rules (2) through (6) are applied to the hand-segmented fire

Fig. 6. Receiver operating characteristics for t.

Fig. 5. Samples from set of images used in ROC curve analysis.
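Rules (5) and (6) can be sketched the same way; here t = 40, the value the ROC analysis below selects (the toy arrays are illustrative, not from the paper):

```python
import numpy as np

def rule_5(y, cb, cr):
    """Eq. (5): fire if Y > Y_mean, Cb < Cb_mean, and Cr > Cr_mean."""
    return (y > y.mean()) & (cb < cb.mean()) & (cr > cr.mean())

def rule_6(cb, cr, t=40):
    """Eq. (6): fire if |Cb - Cr| >= t (t = 40 chosen via ROC analysis)."""
    return np.abs(cb.astype(np.int32) - cr.astype(np.int32)) >= t

# Toy 2x2 example: one flame-like pixel among neutral ones.
y  = np.array([[200,  80], [ 90,  85]])
cb = np.array([[ 70, 128], [128, 128]])
cr = np.array([[200, 128], [128, 128]])
print(rule_5(y, cb, cr) & rule_6(cb, cr))
```

Only the top-left pixel survives both rules: it is brighter than the image mean, its Cb is low, its Cr is high, and |Cb − Cr| = 130 ≥ 40.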



images with different values of t, changing from 1 to 100. For each value of t, we calculate the corresponding true and false positive rates on the image set and tabulate them. A true positive is defined as the decision that an image contains fire when it does; a false positive is the decision that an image contains fire when it does not. The ROC curve consists of 100 data points corresponding to the different t values, some of which are labeled in Fig. 6 with blue letters, i.e. a–e. For each point on the ROC curve there are three values: the true positive rate, the false positive rate, and t. For instance, for the point labeled a, the true positive rate is 60%, the false positive rate is 6%, and the corresponding t is 96. Using the ROC curve, different values of t can be selected with respect to the required true positive and false positive rates.

Since fire detection systems should not miss any fire alarm, the value of t should be selected so that the system's true positive rate is high enough. It is clear from Fig. 6 that a high true positive rate also implies a high false positive rate. Using this tradeoff, in our experiments the value of t is picked such that the detection rate is over 90% and the false alarm rate is less than 40% (point d), which corresponds to t = 40.

In addition to the above rules, a statistical analysis of the chrominance information of flame pixels over a larger set of images is performed. For this purpose, a set of 1000 images containing fire at different resolutions was collected from the Internet. Samples from this set are shown in Fig. 7. The collected set of images has a wide range of illumination and camera effects. The fire regions in the 1000 images are manually segmented, and the histogram of a total of 16,309,070 pixels is created in the Cb–Cr chrominance plane. Fig. 8 shows the distribution of flame pixels in the Cb–Cr plane. The area containing flame pixels in the Cb–Cr plane can be modeled using the intersections of three polynomials denoted by fu(Cr), fl(Cr), and fd(Cr). The
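The threshold sweep behind Fig. 6 can be reproduced in outline as follows (a sketch; the ground-truth masks are assumed to be the hand-segmented ones described above):

```python
import numpy as np

def roc_points(cb, cr, truth):
    """Sweep t = 1..100 over Eq. (6) and return (t, TPR, FPR) triples.

    cb, cr: stacked chrominance arrays; truth: boolean ground-truth
    fire masks of the same shape (hand-segmented, as in the paper).
    """
    diff = np.abs(cb.astype(np.int32) - cr.astype(np.int32))
    points = []
    for t in range(1, 101):
        detected = diff >= t
        tp = np.count_nonzero(detected & truth)
        fp = np.count_nonzero(detected & ~truth)
        tpr = tp / max(np.count_nonzero(truth), 1)
        fpr = fp / max(np.count_nonzero(~truth), 1)
        points.append((t, tpr, fpr))
    return points
```

Picking the smallest t whose TPR stays above 0.9 while the FPR stays below 0.4 reproduces the paper's operating point (t = 40) in spirit.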

Fig. 7. Samples from set of images used in extracting chrominance model for fire-pixels.

Fig. 8. 3-D distribution of hand labeled flame pixels in Cb–Cr color plane and three polynomials, fu(Cr), fl(Cr), and fd(Cr), bounding the flame region.

Fig. 9. Fire detection in a still image: (a) original image, (b) fire segmentation using only (3), (c) fire segmentation using only (4), (d) fire segmentation using only (5), (e) fire
segmentation using only (6), (f) fire segmentation using only (8), (g) fire segmentation using combination (3)–(6) and (8), and (h) segmented color image which consists of
fire.

equations for the polynomials are derived using a least-squares estimation technique [10]:

$$\begin{aligned} f_u(Cr) ={}& -2.6\times10^{-10}\,Cr^7 + 3.3\times10^{-7}\,Cr^6 - 1.7\times10^{-4}\,Cr^5 + 5.16\times10^{-2}\,Cr^4 \\ & - 9.10\,Cr^3 + 9.60\times10^{2}\,Cr^2 - 5.60\times10^{4}\,Cr + 1.40\times10^{6} \\ f_l(Cr) ={}& -6.77\times10^{-8}\,Cr^5 + 5.50\times10^{-5}\,Cr^4 - 1.76\times10^{-2}\,Cr^3 + 2.78\,Cr^2 \\ & - 2.15\times10^{2}\,Cr + 6.62\times10^{3} \\ f_d(Cr) ={}& 1.81\times10^{-4}\,Cr^4 - 1.02\times10^{-1}\,Cr^3 + 2.17\times10\,Cr^2 - 2.05\times10^{3}\,Cr + 7.29\times10^{4} \end{aligned} \quad (7)$$

The region bounded by the three polynomials is depicted in Fig. 8; the boundaries of the region corresponding to the polynomials are shown in red. Once this region is obtained, it is easy to define another rule for classifying the flame pixels. We formulate this in Eq. (8) as follows:

$$F_{CbCr}(x, y) = \begin{cases} 1, & \text{if } Cb(x, y) \geq f_u(Cr(x, y)) \wedge Cb(x, y) \leq f_d(Cr(x, y)) \wedge Cb(x, y) \leq f_l(Cr(x, y)) \\ 0, & \text{otherwise} \end{cases} \quad (8)$$

where F_CbCr(x, y) indicates whether the pixel at spatial location (x, y) falls into the region defined by the boundaries formulated in Eq. (7), with 1 indicating that it is included in this region and 0 that it is not, and ∧ is the binary AND operator.

With the derived set of rules in the YCbCr color space, given in Eqs. (3)–(6) and (8), one can classify whether or not a given pixel is a flame pixel. The overall segmentation process is illustrated step by step in Fig. 9. As can be seen from Fig. 9, each rule on its own produces false alarms, but their combination produces a result suitable for identifying the fire regions in the corresponding color image.

Table 3
Performance of the proposed algorithm compared with two similar algorithms in the literature

Color model        Detection rate in fire set   False alarm rate in non-fire set
RGB [1]            0.939                        0.664
rgb [4]            0.970                        0.584
YCbCr, proposed    0.990                        0.315

3. Performance analysis

The performance of the proposed flame pixel classifier model is compared with the models defined in [1,4]. The model defined by Chen et al. [1] uses raw RGB values and rules defined over the RGB space, while the model defined by Celik et al. [4] uses rgb values.

Performance analysis is carried out using a set of 751 color images of size 256 × 256 which are entirely different from the sets used in creating the fire color model. The set consists of 332 images which contain flame; the rest is a collection of images which do not contain any flame. It should be noted that this set may contain flame-like objects such as the Sun, a red rose, etc.
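Since Cr takes integer values, Eq. (7) can be evaluated once into lookup tables, after which Eq. (8) costs only three comparisons per pixel, as Section 4 notes. A sketch using the coefficients as transcribed above (the signs and exponents are reconstructed from the damaged print, so treat the numeric values as indicative only):

```python
import numpy as np

# Polynomial coefficients of Eq. (7), highest power first (as reconstructed).
FU = [-2.6e-10, 3.3e-7, -1.7e-4, 5.16e-2, -9.10, 9.60e2, -5.60e4, 1.40e6]
FL = [-6.77e-8, 5.50e-5, -1.76e-2, 2.78, -2.15e2, 6.62e3]
FD = [1.81e-4, -1.02e-1, 2.17e1, -2.05e3, 7.29e4]

# Lookup tables over the integer Cr range [16, 240] (cf. Section 4).
CR = np.arange(16, 241)
LUT_FU = np.polyval(FU, CR)
LUT_FL = np.polyval(FL, CR)
LUT_FD = np.polyval(FD, CR)

def rule_8(cb, cr):
    """Eq. (8): Cb >= fu(Cr) AND Cb <= fd(Cr) AND Cb <= fl(Cr)."""
    idx = np.clip(cr, 16, 240).astype(np.int64) - 16
    return (cb >= LUT_FU[idx]) & (cb <= LUT_FD[idx]) & (cb <= LUT_FL[idx])
```

The lookup-table design is what makes Eq. (7) cost zero arithmetic operations at classification time in the paper's complexity analysis.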

Fig. 10. Demonstration of fire segmentation in four different video sequences (a–d); frame numbers 1, 11, 21, 31, 41, 51, 61, and 71 are selected for visualization.




In Table 3, we tabulate the flame detection results together with false alarm rates. It is clear that the proposed model outperforms the model of Celik et al., which uses rgb values, both in detection rate and in false alarm rate. The proposed model also performs better than the model of Chen et al., which operates in the RGB color space.

The performance improvement is expected, since the YCbCr color space has the ability to discriminate luminance from chrominance information. Since the chrominance predominantly represents information without the effect of luminance, the chrominance-based rules and the color model defined in the chrominance plane are more descriptive of flame behavior.

We have demonstrated fire segmentation for an outdoor uncontrolled environment using four different video sequences from the Wildland Fire Operations Research Group in Canada [11]. The sequences consist of views of different forest fire scenes recorded from a helicopter. The segmentation results using the proposed generic color model are shown in Fig. 10. Each video sequence consists of consecutively recorded frames, and the model defined in this paper is applied to each frame. Each frame and its corresponding binary map of fire pixels are shown in Fig. 10, where it is clear that the proposed color model robustly detects fire regions in the given video sequences.

Fig. 11. Demonstration of fire segmentation in an indoor video sequence collected from Celik et al. [4].
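Applying the model to video is simply the per-frame composition of the rules. A sketch (Eq. (8) omitted for brevity; frames can come from any decoder that yields RGB arrays):

```python
import numpy as np

def classify_frame(rgb_frame, t=40):
    """Combine rules (3)-(6) on one frame; Eq. (8) omitted here for brevity."""
    ycc = rgb_frame.astype(np.float64) @ np.array(
        [[0.2568, -0.1482,  0.4392],
         [0.5041, -0.2910, -0.3678],
         [0.0979,  0.4392, -0.0714]]) + np.array([16.0, 128.0, 128.0])
    y, cb, cr = ycc[..., 0], ycc[..., 1], ycc[..., 2]
    mask = (y > cb) & (cr > cb)                                   # Eqs. (3), (4)
    mask &= (y > y.mean()) & (cb < cb.mean()) & (cr > cr.mean())  # Eq. (5)
    mask &= np.abs(cb - cr) >= t                                  # Eq. (6)
    return mask

def segment_video(frames, t=40):
    """Yield a binary fire map for each RGB frame (any iterable of arrays)."""
    for frame in frames:
        yield classify_frame(frame, t)
```

Because every step is a per-pixel comparison or a channel mean, the per-frame cost stays linear in the frame size, consistent with the complexity analysis in Section 4.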

We also demonstrated fire segmentation for an indoor controlled environment using a video sequence from Töreyin et al. [3]. The video sequence shows fire in a controlled environment where the fire spread changes with time. The fire segmentation results are presented in Fig. 11. It is clearly noticeable from Fig. 11 that the proposed color model robustly detects fire regions in the given video sequence.

4. Computational complexity

In order to perform in real time, the proposed model should be cheap in computational power. The computational complexity of the proposed algorithm is analyzed in this section. Let us assume that the size of the input RGB image is H × W, where H and W are the height and width of the input RGB image, respectively. We evaluate the computational complexity of the proposed algorithm by counting the numbers of additions/subtractions (adds/subs), divisions/multiplications (divs/muls), and comparisons (comps).

The first stage of the algorithm is the conversion from RGB to the YCbCr color space. For each pixel at spatial location (x, y), the conversion from RGB to YCbCr using Eq. (1) requires 9 adds/subs and 9 divs/muls, so the total number of operations required to convert all pixels is 9HW adds/subs and 9HW divs/muls. After the conversion, the next stage is the calculation of the statistical parameters given in Eq. (2), which requires 3(HW − 1) adds/subs and 3 divs/muls. Eqs. (3) and (4) are simple binary comparisons, each requiring HW comps. Eq. (5) uses the statistics derived from Eq. (2) to make simple binary comparisons and requires 3HW comps. The absolute value in Eq. (6) is actually a comparison checking whether or not the number is greater than 0; using this, Eq. (6) requires HW adds/subs and 2HW comps.

The analytic equations defined in Eq. (7) require high-order numerical calculations. Since Cr is an integer variable in the range [16, 240], we use a lookup table consisting of the values of the analytic functions defined in Eq. (7) for each value of Cr, so no arithmetic operations are needed for Eq. (7) at classification time. The created lookup table for Eq. (7) is used in Eq. (8). The binary AND operator in Eq. (8) is simply a comparison checking whether the binary values input to the AND operator are all binary 1 or 0; using this, the total number of operations for Eq. (8) is 6HW comps.

The final classification of whether a given pixel is a flame pixel or not is done by applying a binary AND to the results of Eqs. (3)–(6) and Eq. (8). The final stage therefore requires 5HW comps.

The total number of arithmetic operations required for the proposed method is tabulated in Table 4: 13HW − 1 adds/subs, 9HW + 3 divs/muls, and 18HW comps. The number of arithmetic operations is thus linear in the image size, and hence the proposed algorithm is very cheap in computational complexity.

Table 4
Arithmetic operations used in the proposed method

Equation               adds/subs    divs/muls   comps
(1)                    9HW          9HW         –
(2)                    3(HW − 1)    3           –
(3)                    –            –           HW
(4)                    –            –           HW
(5)                    –            –           3HW
(6)                    HW           –           2HW
(7)                    –            –           –
(8)                    –            –           6HW
Final classification   –            –           5HW
Total operations       13HW − 1     9HW + 3     18HW

5. Conclusions

In this paper, a generic color model for flame pixel classification is proposed. The proposed color model uses the YCbCr color space, which is better at discriminating luminance from chrominance and hence is more robust to illumination changes than the RGB or rgb color spaces. The performance of the proposed color model is tested on two sets of images, one containing fire and the other containing fire-like regions. The proposed color model achieves a 99.0% flame detection rate and a 31.5% false alarm rate. The results are compared with two other methods in the literature, and the performance improvement of the proposed method in both correct fire detection rate and false alarm rate is demonstrated.

The number of arithmetic operations for the proposed color model is linear in the image size, and the algorithm is very cheap in computational complexity. This makes it suitable for real-time applications.

The proposed color model can be used for fire detection in video sequences. We have shown that the proposed algorithm performs well in segmenting fire regions in video sequences. In our future work, we will perform a time analysis of fire regions in video sequences by measuring the spread of the fire regions. Furthermore, the flicker nature of fire will be considered as future work.

References

[1] T. Chen, P. Wu, Y. Chiou, An early fire-detection method based on image processing, in: Proceedings of the IEEE International Conference on Image Processing, 2004, pp. 1707–1710.
[2] B.U. Töreyin, Y. Dedeoğlu, A.E. Çetin, Flame detection in video using hidden Markov models, in: Proceedings of the IEEE International Conference on Image Processing, 2005, pp. 1230–1233.
[3] B.U. Töreyin, Y. Dedeoğlu, U. Güdükbay, A.E. Çetin, Computer vision based method for real-time fire and flame detection, Pattern Recognition Lett. 27 (1) (2006) 49–58.
[4] T. Celik, H. Demirel, H. Ozkaramanli, Automatic fire detection in video sequences, in: Proceedings of the European Signal Processing Conference (EUSIPCO 2006), Florence, Italy, September 2006.
[5] W. Krull, I. Willms, R.R. Zakrzewski, M. Sadok, J. Shirer, B. Zeliff, Design and test methods for a video-based cargo fire verification system for commercial aircraft, Fire Saf. J. 41 (4) (2006) 290–300.
[6] G. Marbach, M. Loepfe, T. Brupbacher, An image processing technique for fire detection in video images, Fire Saf. J. 41 (4) (2006) 285–289.
[7] Wen-Bing Horng, Jim-Wen Peng, Chih-Yuan Chen, A new image-based real-time flame detection method using color analysis, in: Proceedings of the IEEE International Conference on Networking, Sensing and Control (ICNSC), 2005, pp. 100–105.
[8] C.A. Poynton, A Technical Introduction to Digital Video, Wiley, New York, 1996.
[9] D.M. Green, J.M. Swets, Signal Detection Theory and Psychophysics, Wiley, New York, 1966.
[10] J.H. Mathews, K.D. Fink, Numerical Methods using MATLAB, Prentice-Hall, Englewood Cliffs, NJ, 1999.
[11] Wildland Fire Operations Research Group, http://fire.feric.ca/index.htm (retrieved August 11, 2006).
