Neural Network

The document presents a neural network approach for fire detection that uses temperature, smoke concentration, and carbon monoxide concentration as inputs. It trains a neural network model using these multisensor inputs to detect fires in the early stages. The method provides strong fault tolerance, anti-jamming capability, and early detection while reducing false alarms. When one detector detects a fire, the neural network fusion center queries other detectors to make a final decision.

Neural network

Traditional fire detection systems cannot meet the real needs of complex fire alarm systems.
We present a fire detection approach based on neural networks. It takes the temperature, smoke concentration, and CO concentration measured in the initial stage of a fire as system inputs and, using the self-learning and adaptive features of neural networks, fuses these multisensor signals in a neural network simulation model that is trained and evaluated for fire detection.

[Figure: Network structure. The inputs (temperature, smoke concentration, gas/CO concentration) feed a hidden layer and an output layer that produces three outputs: fire probability, smoldering probability, and no-fire probability.]
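
To make this structure concrete, the following is a minimal sketch in Python/NumPy of a network with that shape. The hidden-layer size (10), the sigmoid hidden activation, the softmax output, and the example sensor values are assumptions for illustration only; the document does not specify them.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 3, 10, 3                     # 3 sensor inputs -> hidden layer -> 3 probabilities

W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))    # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))   # hidden -> output weights
b2 = np.zeros(n_out)

def forward(x):
    """x = [temperature, smoke concentration, CO concentration], already normalised."""
    h = sigmoid(W1 @ x + b1)                         # hidden-layer activations
    return softmax(W2 @ h + b2)                      # [P(fire), P(smoldering), P(no fire)]

print(forward(np.array([0.8, 0.6, 0.4])))            # example (made-up) sensor reading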

The method has strong fault tolerance and anti-jamming capability, increases early fire detection capability, and reduces both the missed-detection rate and the false fire alarm rate, which greatly affect safety monitoring.

When one detector's output exceeds its detection limit, the hardware circuit sends a local fire indication to the system and forwards the measured environmental characteristics to the information fusion center. On receiving a local fire signal from a single detector, the fusion center repeatedly queries the other detectors in the system before generating a final decision.
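
A minimal sketch of this query-and-decide logic in Python is given below. The detection limits, the way the other detectors are queried, and the majority-vote rule that stands in for the neural-network fusion center are all illustrative assumptions; the document does not define them.

# Hypothetical detection limits for each sensor channel (illustrative values only).
LIMITS = {"temperature": 57.0, "smoke": 0.15, "co": 30.0}

def local_alarm(reading):
    """A detector raises a local fire indication if any channel exceeds its limit."""
    return any(reading[ch] > LIMITS[ch] for ch in LIMITS)

def fusion_decision(triggering_reading, query_other_detectors, votes_needed=2):
    """On a local alarm, query the remaining detectors and combine their answers.

    query_other_detectors: a callable returning the current readings of the other
    detectors. A simple majority vote stands in for the trained fusion network here.
    """
    if not local_alarm(triggering_reading):
        return "no fire"
    others = query_other_detectors()
    confirmations = 1 + sum(local_alarm(r) for r in others)
    return "fire" if confirmations >= votes_needed else "no fire"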
I- Model Representation

Input layer: holds our three inputs (temperature, smoke concentration, CO concentration).
Hidden layer: where all the computations are done.
Each hidden node has a predefined “activation” function which defines whether the node will be “activated”, or how “active” it will be, based on the summed (weighted) value of its inputs, as in the sketch below.
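
For example, with a sigmoid activation (an assumed choice; the document does not name the function), a node computes the weighted sum of its inputs plus a bias and squashes the result into the range 0 to 1:

import numpy as np

def node_output(inputs, weights, bias):
    """Sum of products (SOP) followed by a sigmoid activation (assumed)."""
    sop = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-sop))

# Illustrative normalised readings: temperature, smoke concentration, CO concentration.
print(node_output(np.array([0.8, 0.6, 0.4]), np.array([0.5, 0.3, 0.2]), 0.1))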
Cost function: a measure of the difference between the true output and the predicted output (true – predicted).
Our goal in all these computations is to optimize the cost function, that is, to find the weights that give its minimum value.
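
A common concrete choice, assumed here, is the squared error between the true and predicted outputs:

def squared_error(true_output, predicted_output):
    """Cost for one example: half the squared difference (the 1/2 is a common convention)."""
    return 0.5 * (true_output - predicted_output) ** 2

print(squared_error(1.0, 0.73))   # e.g. target 1.0 (fire) against a prediction of 0.73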

Backpropagation Algorithm
Backpropagation minimizes the cost function by adjusting the weights, which requires computing the partial derivative of the cost function with respect to each weight.

Forward pass: input and weights → sum of products (SOP) → predicted output → prediction error.

Backward pass: prediction error → predicted output → SOP → input weights → updated weights.
The weights can be considered as the strength of the effect that each node has on the next node.

Therefore, upon reaching the last stage of the forward pass, we obtain a prediction error between the real output (measured in advance) and the predicted output.

To reduce this error, we need a way to measure how much one quantity changes when another changes, and that is exactly what the derivative gives us.

∂E/∂W = (∂E/∂Y) × (∂Y/∂S) × (∂S/∂W)
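
Here E is the prediction error, Y the predicted output, S the sum of products (SOP), and W a weight. A minimal sketch of this chain rule for a single sigmoid node with a squared-error cost (both assumed choices, as in the earlier sketches):

import numpy as np

def gradient_single_node(inputs, weights, bias, target):
    """Chain rule dE/dW = dE/dY * dY/dS * dS/dW for one sigmoid node with squared-error cost."""
    s = np.dot(weights, inputs) + bias        # S: sum of products
    y = 1.0 / (1.0 + np.exp(-s))              # Y: predicted output
    dE_dY = y - target                        # from E = 0.5 * (target - y)**2
    dY_dS = y * (1.0 - y)                     # derivative of the sigmoid
    dS_dW = inputs                            # S is linear in the weights
    return dE_dY * dY_dS * dS_dW              # one gradient entry per weight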

Once we have the derivative of the error with respect to the weights, we adjust the network weights with updated ones:

W(new) = W(old) – learning rate × ∂E/∂W

We continue updating the weights according to these derivatives and re-training the network until an acceptable error is reached, as in the sketch below.
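
Putting the pieces together, here is a sketch of the full update loop for a single node. The learning rate, stopping tolerance, and toy training data are illustrative assumptions only:

import numpy as np

def train_single_node(samples, targets, learning_rate=0.5, tolerance=1e-3, max_epochs=10000):
    """Repeat forward pass, gradient computation, and weight update until the error is acceptable."""
    rng = np.random.default_rng(0)
    weights = rng.normal(scale=0.1, size=samples.shape[1])
    bias = 0.0
    for _ in range(max_epochs):
        total_error = 0.0
        for x, t in zip(samples, targets):
            s = np.dot(weights, x) + bias                  # SOP
            y = 1.0 / (1.0 + np.exp(-s))                   # prediction
            total_error += 0.5 * (t - y) ** 2              # cost for this example
            grad_w = (y - t) * y * (1.0 - y) * x           # dE/dW via the chain rule above
            grad_b = (y - t) * y * (1.0 - y)
            weights -= learning_rate * grad_w              # W(new) = W(old) - lr * dE/dW
            bias -= learning_rate * grad_b
        if total_error < tolerance:
            break
    return weights, bias

# Toy data (made up): [temperature, smoke, CO] readings with a fire (1) / no-fire (0) target.
X = np.array([[0.9, 0.8, 0.7], [0.1, 0.05, 0.02], [0.85, 0.7, 0.6], [0.2, 0.1, 0.05]])
T = np.array([1.0, 0.0, 1.0, 0.0])
w, b = train_single_node(X, T)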

Implementation of BPNN:
Everything explained above happens for a single weight in the neural network, yet one network may contain tens of neurons, and one neuron may require hundreds of epochs to reach the weights that give the lowest possible error rate.

We use Matlab as a simulator to create a model of the neural network.

We needed predefined outputs (targets) to build and train the network.

We obtained a performance value of 2.19 × 10⁻⁷ and a mean squared error of 0.0025898.
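
The authors report these figures from Matlab's neural-network tooling; as a language-neutral illustration of how predefined targets and the mean squared error fit together, here is a small Python sketch. The one-hot target encoding for the three output classes is an assumption based on the network structure shown earlier, and the example predictions are made up:

import numpy as np

# Hypothetical one-hot targets for the three outputs: [fire, smoldering, no fire].
TARGETS = {"fire": np.array([1.0, 0.0, 0.0]),
           "smoldering": np.array([0.0, 1.0, 0.0]),
           "no fire": np.array([0.0, 0.0, 1.0])}

def mean_squared_error(predictions, targets):
    """Average squared difference between the network outputs and the predefined targets."""
    predictions, targets = np.asarray(predictions), np.asarray(targets)
    return np.mean((predictions - targets) ** 2)

# Example: two network outputs compared against their labelled targets.
preds = [[0.90, 0.05, 0.05], [0.10, 0.20, 0.70]]
labels = [TARGETS["fire"], TARGETS["no fire"]]
print(mean_squared_error(preds, labels))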
