
VIRUDHUNAGAR

TAMIL NADU

Face Recognition Using
Back Propagation
&
Radial Basis Function
Neural Networks

DONE BY
VIJAY RAJKUMAR.C
SARAVANA BABU DOSS.G
III E.C.E
Contacts:
[email protected]
9944287326

Abstract:
This paper describes the implementation of face recognition using a neural network classifier on multi-scale features of the face (the eyes, nose, mouth and remaining portion of the face). The proposed system has three parts: pre-processing, multi-scale feature extraction and face classification using a neural network. The basic idea of the proposed method is to construct facial features from multi-scale image patches of the different face components. These multi-scale features (eyes, nose, mouth and remaining portion of the face) become the input to a neural network classifier, which uses a multi-layer network (MLN) trained with the back propagation algorithm and a radial basis function (RBF) network to recognize familiar (trained) faces as well as faces with variations in expression, illumination changes and with spectacles (glasses). The strength of the proposed algorithm lies in its use of a single neural network as the classifier, which gives a straightforward approach to face recognition. The proposed algorithm was tested on the FERET face database with 500 images of 100 subjects (300 faces for training, 200 for testing), and the results are encouraging (99% recognition rate) compared with other face recognition techniques.

Introduction:
Machine recognition of faces is becoming increasingly important because of its wide range of commercial and law enforcement applications, including forensic identification, access control, border surveillance and human-computer interaction. Given that all human faces share the same basic features (eyes, nose and mouth) arranged in the same general configuration, the capacity to distinguish one face from another must depend on fine-grained analysis of the face's components as well as on holistic information. Because component-based and holistic face representations have complementary strengths and weaknesses, a well-designed visual perception system should employ both types of representation for face recognition. Many recent works show that PCA and LDA are effective methods for face recognition. However, PCA is generally not well suited for classification because it does not use any class information, while LDA carries the risk of poor generalization.

Proposed Method:
It is generally believed that human beings put different emphasis on different parts of a face, e.g. the eyes, nose, cheeks, forehead and the other remaining parts. Existing approaches put the same emphasis on all parts of the face, which results in lower recognition rates. In our approach, we select four different observers (the two eyes, the nose, the mouth and the remaining portion of the face), assuming that the eye coordinates are known. We then pass these patches (except the eyes) through a low-pass (Gaussian) filter to smooth those parts of the image and reduce the effect of noise. The patches seen by the different observers are then combined into a single image vector. This image vector is used as the input to an artificial neural network, and the network is trained to recognize all the faces in the image database. We start with 2D face images that are normalized to zero mean and unit variance; four different observers are then chosen, i.e. the eyes, nose, mouth and remaining portion. The flow chart shows the step-by-step procedure for obtaining the feature vector for a given face image.
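As a small illustration (assumed, not taken from the paper), normalizing each face image to zero mean and unit variance can be done as follows:

```python
import numpy as np

def normalize_face(img):
    """Normalize a 2D face image to zero mean and unit variance."""
    img = img.astype(np.float64)
    return (img - img.mean()) / img.std()
```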

The dimensionality of these face components is then reduced by simple down-sampling (eyes 1:1, nose 1:2, mouth 1:4, remaining portion of the face 1:8). The resulting 2D image patches can be represented by a matrix A = [a_1, a_2, ..., a_T], where each column a_i is a one-dimensional image vector obtained by scanning the two-dimensional image patches in lexicographical order and writing them into a column vector, and T is the number of training images. Because different dimensionality reductions are used for the different face components, the size of the final image column vector is N x 1, where N is the total number of pixels taken from all four image patches, which is much smaller than the original full image.
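As a minimal sketch (not from the paper), the following Python snippet shows how down-sampled patches could be flattened in lexicographical (row-major) order and stacked column-wise into the N x T training matrix A described above; the helper names and input handling are assumptions.

```python
import numpy as np

def patches_to_column(patches):
    """Flatten each 2D patch in row-major (lexicographical) order and
    concatenate them into a single N x 1 feature column."""
    return np.concatenate([p.reshape(-1) for p in patches])

def build_training_matrix(patch_sets):
    """Stack one feature column per training image into an N x T matrix A."""
    columns = [patches_to_column(ps) for ps in patch_sets]
    return np.stack(columns, axis=1)  # A = [a_1, a_2, ..., a_T]
```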

Implementation:
The algorithm is described in the following steps (a code sketch is given after the list):
1. Start with a face image that has been normalized and pre-processed, with known eye coordinates.
2. Based on the eye coordinates, estimate the size of the left eye and right eye. Keep the original image resolution for the eye observers.
3. Apply a low-pass (Gaussian) filter to the whole image.
4. Crop the nose patch and mouth patch from the filtered image. The locations of the nose and mouth patches are kept the same for all images.
5. For the nose observer, reduce the image resolution to half by selecting one pixel out of every two. For the mouth, keep one pixel out of every four. Apply this in both the x and y dimensions.
6. Using the above patches, extract the remaining portion of the face and keep only one pixel out of every eight, again in both the x and y dimensions.
7. Convert the image patches obtained above into a single image vector of dimension N x 1 (in this case 336 x 1; it changes with the resolution).
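The sketch below illustrates these steps with NumPy and SciPy; the patch coordinates, Gaussian width and helper names are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_multiscale_features(img, eye_boxes, nose_box, mouth_box, rest_mask, sigma=2.0):
    """img: normalized 2D face image (zero mean, unit variance).
    eye_boxes, nose_box, mouth_box: (top, bottom, left, right) crops assumed known;
    rest_mask: boolean mask selecting the remaining portion of the face."""
    # Step 2: eyes at full resolution (1:1)
    eyes = [img[t:b, l:r] for (t, b, l, r) in eye_boxes]
    # Step 3: smooth the whole image with a low-pass (Gaussian) filter
    smoothed = gaussian_filter(img, sigma=sigma)
    # Steps 4-5: nose at 1:2 and mouth at 1:4 (keep every 2nd / 4th pixel in x and y)
    t, b, l, r = nose_box
    nose = smoothed[t:b:2, l:r:2]
    t, b, l, r = mouth_box
    mouth = smoothed[t:b:4, l:r:4]
    # Step 6: remaining portion of the face at 1:8
    rest = smoothed[::8, ::8][rest_mask[::8, ::8]]
    # Step 7: concatenate everything into a single N x 1 feature vector
    parts = [p.reshape(-1) for p in eyes] + [nose.reshape(-1), mouth.reshape(-1), rest.reshape(-1)]
    return np.concatenate(parts)
```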

Back Propagation:

Back propagation is the generalization of the Widrow-Hoff learning rule to multiple-layer networks with nonlinear, differentiable transfer functions. Input vectors and the corresponding target vectors are used to train the network until it can approximate the mapping that associates each input vector with its specific output vector.

In the proposed neural network architecture, a multi-layer network is used with a hidden layer of size 90. The number of input nodes equals the number of face features, 336 (it changes with the resolution), and the number of output nodes equals the number of faces to be recognized, in this case 40. The logsig and purelin transfer functions are used, with a goal error of 0.0001.
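As a rough approximation (not the authors' implementation, which uses logsig and purelin transfer functions), a comparable multi-layer back propagation network can be sketched with scikit-learn; note that MLPClassifier uses a softmax output layer rather than a linear one, and the training data variables are assumed.

```python
from sklearn.neural_network import MLPClassifier

# X_train: (n_images, 336) multi-scale feature vectors; y_train: subject labels.
# One hidden layer of 90 logistic (logsig-like) units, trained by back propagation.
mlp = MLPClassifier(hidden_layer_sizes=(90,),
                    activation='logistic',
                    solver='sgd',
                    learning_rate_init=0.01,
                    tol=1e-4,          # stop near the paper's goal error of 0.0001
                    max_iter=5000)
# mlp.fit(X_train, y_train)
# accuracy = mlp.score(X_test, y_test)
```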
Radial Basis Function Network:

A radial basis function network is an artificial neural network that uses radial functions as activation functions. A radial basis function (RBF) is a real-valued function whose value depends only on the distance from the origin. RBF networks are claimed to be more accurate than those based on back propagation (BP), and they provide a guaranteed, globally optimal solution through a simple linear optimization. One advantage of radial basis networks over back propagation is that, if the input signal is non-stationary, the localized nature of the hidden-layer response makes the network less susceptible to such changes. The training parameters of the network include the learning rate and the error tolerance.
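The following is a minimal, self-contained sketch of one common way to build an RBF network classifier (Gaussian hidden units with K-means centres and a linear output layer solved by least squares); it is an illustration under assumed parameters, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

class SimpleRBFNetwork:
    def __init__(self, n_centers=90, width=1.0):
        self.n_centers = n_centers   # assumed number of hidden RBF units
        self.width = width           # assumed Gaussian width (spread)

    def _hidden(self, X):
        # Gaussian RBF activations: each value depends only on the distance to a centre
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def fit(self, X, y):
        self.centers = KMeans(n_clusters=self.n_centers, n_init=10).fit(X).cluster_centers_
        H = self._hidden(X)
        # One-hot targets; linear output weights from a least-squares solve
        self.classes_, y_idx = np.unique(y, return_inverse=True)
        T = np.eye(len(self.classes_))[y_idx]
        self.W, *_ = np.linalg.lstsq(H, T, rcond=None)
        return self

    def predict(self, X):
        return self.classes_[np.argmax(self._hidden(X) @ self.W, axis=1)]
```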

Results and Discussion:

We conducted our experiments on the FERET face image database. The images were acquired under variable illumination and facial expressions, with and without spectacles, during different photo sessions, and the size of the face also varies. We created a database of 500 frontal images in total, i.e. 100 subjects each with 5 variations. Out of the 500 images, 300 (3 variations for each of the 100 subjects) were chosen randomly for training and the remaining 200 (2 variations for each of the 100 subjects) were set aside for testing.

After multi-scale feature extraction, the original image of size 192 x 128 is drastically reduced to a column vector of size 336 x 1 (it changes with the resolution). Such image vectors for all the images are collected and given as input to the neural network, which is used as the classifier for the actual face recognition. It has been observed that the efficiency and accuracy increase consistently because of the use of artificial neural networks. We investigated the performance of this technique for face recognition with two types of network: a multi-layer network trained with back propagation and an RBF network. We focus on the difficult problem of recognizing a large number of known human faces with variations in expression. The multi-resolution system achieves a recognition rate of 96% for back propagation and 99% for the RBF neural network.
We analysed our system using the two types of neural network: the multi-layer network (back propagation) and the RBF network. 96% of the test images were recognized by the multi-layer network (back propagation), while 99% were recognized by the RBF network. This implies that the recognition rate is higher when the system is implemented with an RBF network than with a multi-layer network using back propagation. This recognition rate is considerably higher than that of other techniques, including the eigenfeature technique.

A change in the resolution ratio of the discriminating features of the face (left eye, right eye, nose, mouth and the remaining portion of the face) can also change the recognition rate. We carried out this analysis using the MLN (back propagation) and radial basis function networks on images with no changes in facial expression. Comparing the first and last entries in TABLE 2, the recognition rate drops sharply from 96% to 86% (from 99% to 88% in the case of the RBF network). Here we observed two important aspects. The first is that changing the resolution ratio of the eyes from 1:1 to 1:2 causes a comparatively drastic decrease in the recognition rate, because the entire implementation is based on the position of the eyes and their resolution ratio. The second is that, for faces without any change in facial expression, changing the resolution ratio of the remaining portion of the face has a much smaller effect.

Conclusion and Future Work:

In this paper, we have proposed a methodology that puts different emphasis on different components of the face image based on human knowledge about the human face, which results in dimensionality reduction, i.e. coarse-level feature extraction, in the first stage. In the second stage, the dimensionality is further reduced by the artificial neural network, where only the trained network weights need to be stored to recognize the faces in the database. The proposed methodology therefore extracts face features by combining human knowledge about the discriminating features of the human face with the statistical results drawn from the training data. This combination is necessary and useful because neither the current human knowledge about which face features are discriminating nor the limited amount of training data (compared with the high dimension of the face image vector) can be fully trusted on its own. It is therefore not a surprise that the recognition accuracy is consistently improved by merging human knowledge with the knowledge drawn from the training data. Indeed, the experiments on the large face database show a consistent accuracy improvement for the proposed approach across different resolutions of the face features and different neural networks. Further research will investigate the connection between the visual similarity or difference of two persons when viewed holistically and when viewed component-wise; this will answer the question of how to arbitrate between the two viewing approaches.
*********
