Deepak Result Paper
Abstract—The face is an important part of the body for identifying a person and is widely used in computer vision applications. Face detection is a challenging task because of issues such as changes in the appearance of faces, variations in pose, noise, distortion and illumination conditions. The objective of the present study is to build a real-time, efficient face recognition system with OpenCV so as to apply it to real-time applications that need a low recognition time but high accuracy. In this paper, a framework for efficient face detection using a fusion of PCA and LBP is presented. The image features are represented in a reduced feature space by using PCA, a dimensionality reduction technique, and the performance of LBP is compared with the ULBP algorithm. All simulations are carried out in the OpenCV tool.

Keywords— PCA, Eigen faces, Eigen space, Face Detection, Face Recognition.

I. INTRODUCTION

The term biometric is derived from the Greek words "bio", meaning life, and "metric", meaning to measure. Biometrics refers to the identification or verification of a person based on his or her physiological and behavioural characteristics. In contrast to traditional security systems, which may be cracked or faked, current biometric technologies are based on various unique aspects of the human body, such as the face, fingerprint, palm print, iris, retina, voice and gait [1].

Among these technologies, face recognition is one of the most relevant applications of image processing. Face detection is the process of identifying faces on the basis of discriminant facial features such as the eyes, ears, eyebrows, nose, lips, hair, cheeks and forehead. Because the face is such a visible and distinctive part of the body, it makes it easy to identify and recognize a person.

One of the main goals of face recognition is the understanding of the complex human visual system and of how humans represent faces in order to discriminate different identities with high accuracy. Face recognition approaches fall into two main categories: feature-based and holistic. Feature-based approaches rely on the detection and characterization of individual facial features, generally the eyes, nose and mouth, and their geometrical relationships. Holistic or global approaches, on the other hand, encode the entire facial image and treat the resulting facial "code" as a point in a high-dimensional space [2].

The human face is a complex, natural object that tends not to have easily identified edges and features. Because of this, it is difficult to develop a mathematical model of the face that can be used as prior knowledge when analyzing a particular image. Computational models of face recognition are interesting because they can contribute not only to theoretical knowledge but also to practical applications. Despite the complexity of the face detection process, many applications based on human face detection have been developed recently, such as surveillance systems, digital monitoring, intelligent robots, notebooks, PC cameras, digital cameras and 3G cell phones [3].

There are several techniques for face detection and recognition, such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Hausdorff-distance measures, Elastic Graph Matching (EGM), eigenspace-based face recognition, hybrid neural and dual-eigenspace methods, Fisherfaces and artificial neural networks.

The identification of a person by their facial image can be done in a number of different ways, such as by capturing an image of the face in the visible spectrum using an inexpensive camera or by using the infrared patterns of facial heat emission. Facial recognition in visible light typically models key features from the central portion of a facial image. Using a wide assortment of cameras, visible-light systems extract features from the captured images that do not change over time, while avoiding superficial features such as facial expressions or hair. Several approaches to modelling facial images in the visible spectrum are Principal Component Analysis, Local Feature Analysis, neural networks, elastic graph theory and multi-resolution analysis [4].
Figure 1: General Architecture of a Face Recognition System (a query image and an image database are passed to a feature extraction stage, and an ANN-based technique produces the recognition output).

The paper is organized as follows. Section II reviews the literature related to the system. Section III describes the proposed system, including the PCA and LBP algorithms. Section IV presents the results of the proposed system. Finally, the conclusion is given in Section V.

II. LITERATURE REVIEW

In this analysis of face recognition systems, some of the important studies are discussed. A multi-algorithm method detects the face of an individual by combining four algorithms, namely PCA, DCT, template matching using correlation and a partitioned iterative function system. Image-quality-based adaptive face recognition mainly used the multi-resolution property of wavelet transforms to extract facial features. Face detection based on feature analysis and edge detection mainly consists of three phases: image pre-processing, skin colour segmentation and finally the determination of the face. A multi-view face recognition system based on eigenfaces uses PCA to extract the features; this method used the Cr space instead of grey levels [5].

Another method for LDA-based face recognition selects the optimal components using an E. coli bacterial foraging strategy (EBF). A GA-PCA algorithm was developed to find the optimal eigenvalues and corresponding eigenvectors in LDA. A technique that combined weighted eigenfaces with a BP-based network divided the test face image into nine sub-blocks and then gave different weights to different parts of the image according to their importance at the recognition stage. A method based on the Hausdorff distance, which computes eigenfaces from edge images, showed that different face regions have different degrees of importance in face recognition. For multi-dimensional data such as 3-D images, a hidden Markov eigenface model has been proposed in which the eigenfaces are integrated into separable-lattice hidden Markov models [6].

III. DESCRIPTION OF PROPOSED SYSTEM

The objective of the present study is to build a real-time, efficient face recognition system with OpenCV so as to apply it in real-time applications that need a low recognition time but high accuracy. We do face recognition almost on a daily basis: most of the time we look at a face and recognize it instantaneously if we are already familiar with it. This natural ability, if imitated by machines, can prove invaluable in real-life applications such as access control and national and international security and defence. Presently available face recognition methods mainly rely on two approaches. The first, local face recognition, uses the facial features of a face (e.g. nose, mouth, eyes) to associate the face with a person. The second, global face recognition, uses the whole face to identify a person. These two approaches have been implemented in one way or another by various algorithms.
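The real-time goal stated above is usually met by first detecting the face in every frame and then handing the cropped face region to the recognition stage. A minimal OpenCV detection loop is sketched below; the bundled haarcascade_frontalface_default.xml file, camera index 0 and the detectMultiScale parameters are assumptions rather than choices made in this paper.

import cv2

# Minimal real-time face detection loop (sketch). The detected face crops
# would be resized and passed on to the PCA/LBP recognition stage.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
capture = cv2.VideoCapture(0)        # assumed camera index

while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()

In a complete system, the grey-scale crops gray[y:y+h, x:x+w] would be resized and fed to the PCA or LBP matcher described next.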
The eigenface method for human face recognition is remarkably clean and simple. Where other face recognition methods are forced to identify features and classify the relative distances between them, the eigenface method simply evaluates the entire image as a whole. These properties make the method practical in real-world implementations. The technique converts each two-dimensional image into a one-dimensional vector, which is then decomposed into orthogonal principal components. The basic concept behind the eigenface method is information reduction: in the process of decomposition, a large amount of data is discarded as not containing significant information, since around 90% of the total variance in the face is contained in 5-10% of the components. This means that the data needed to identify an individual is a fraction of the data present in the image. Even a small image contains an enormous amount of information, and of all the possible things that could be represented in a given image, pictures that look like faces clearly occupy a small portion of this image space.
Because of this, we seek a way to break down pictures that is better suited to representing face images than images in general. To do this, we generate "base faces" and then represent any image being analyzed by the system as a linear combination of these base faces.

Any human face can be considered a combination of these standard (base) faces, so any image analyzed by the system is a linear combination of them. If three base faces were chosen, then three coefficients would represent the intensity of each base face in any image. The technique is similar to how colours are represented: base colours are chosen and all other colours are expressed in terms of them. If we wanted to represent purple, we would choose coefficients so that the intensities of red and blue were approximately equal and the coefficient of green was zero.

1. PCA Algorithm

Assume that M sample images are being used. Each sample image is referred to as An, where n indicates the nth sample image (1 ≤ n ≤ M). Each An should be a column vector. Images are made of pixels, each having (x, y) coordinates, with (0, 0) at the upper left corner. The steps involved in PCA are as follows:
Step 1: The size of the resulting An column vector depends on the size of the sample images. If the sample images are x pixels across and y pixels tall, the column vector will be of size (x*y) × 1.
Step 2: Calculate the average image Ø. This average image is a column vector of the same size as the sample images ((x*y) × 1):
Ø = (1/M) ∑ AL,  1 ≤ L ≤ M    (1)
Step 3: Calculate the difference faces by subtracting the average face from each sample image. Each is a column vector of the same size as the sample image vectors ((x*y) × 1):
On = An − Ø    (2)
Step 4: The total scatter, or covariance, matrix is calculated from the difference faces. The covariance matrix is defined by A·A^T, where
A = [O1 O2 O3 … OM]    (3)
The matrix A is of size (x*y) × M.
Step 5: The eigenvectors of this matrix are the eigenfaces; they satisfy
A·A^T uk = λk uk    (4)
where uk is the kth eigenface of the training data.
Step 6: This feature space can be used for image classification. Measure the weight vector, which is found by multiplying the transpose of the eigenface matrix U by the vector obtained by subtracting the average face image Ø from a sample or test image An:
w = U^T (An − Ø)    (5)
The weights form a vector W^T = [w1, w2, …, wm'] that describes the contribution of each eigenface in representing the input face image, treating the eigenfaces as a basis set for face images. This vector is then used in a standard pattern recognition algorithm.
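As a concrete illustration of Steps 1-6, the NumPy sketch below computes the eigenfaces from flattened training images and projects a test image onto them. Solving the small M × M system A^T·A instead of the full (x*y) × (x*y) covariance matrix, normalising each eigenface, and the nearest-neighbour matching mentioned afterwards are implementation assumptions, not details taken from the paper.

import numpy as np

def train_eigenfaces(samples, num_components):
    # samples: array of shape (M, x*y), one flattened training face per row.
    A_rows = np.asarray(samples, dtype=np.float64)
    avg = A_rows.mean(axis=0)                 # average face (Step 2)
    A = (A_rows - avg).T                      # columns O1..OM, shape (x*y, M) (Steps 3-4)

    # Eigenvectors of the small M x M matrix A^T A; u_k = A v_k then gives the
    # eigenfaces of A A^T without forming the huge covariance matrix (Step 5).
    eigvals, eigvecs = np.linalg.eigh(A.T @ A)
    order = np.argsort(eigvals)[::-1][:num_components]
    U = A @ eigvecs[:, order]                 # eigenfaces as columns, shape (x*y, m')
    U /= np.linalg.norm(U, axis=0)            # normalise each eigenface

    train_weights = U.T @ A                   # weight vectors of the training set (Step 6)
    return avg, U, train_weights

def project(test_image, avg, U):
    # Step 6: w = U^T (An - avg) for a flattened test image.
    return U.T @ (np.asarray(test_image, dtype=np.float64) - avg)

A probe face would then be assigned the identity of the training image whose weight vector is closest, for example via np.argmin(np.linalg.norm(train_weights - w[:, None], axis=0)).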
2. Local Binary Pattern (LBP)

LBP thresholds all pixels in a specific neighbourhood against the value of the central pixel of that neighbourhood to compute a new value for this central pixel. Consequently, if the central pixel is corrupted by noise for any reason, the comparison between this corrupted pixel and its neighbours will not be accurate. Also, according to the LBP strategy, assigning the value 1 to all pixels greater than or equal to the central pixel value and the value 0 to all pixels less than it can produce inferior results: one pixel may have a value only slightly less than the central pixel while another is significantly less, yet by the LBP definition both pixels are assigned the value 0, which is undesirable.
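A minimal sketch of this thresholding rule for a single 3 × 3 neighbourhood is given below; the clockwise bit ordering and the example values are assumptions chosen only to illustrate the weakness just described.

import numpy as np

def lbp_code_3x3(patch):
    # Basic LBP for the centre of a 3x3 patch: every neighbour greater than or
    # equal to the centre contributes a 1-bit, every smaller neighbour a 0-bit,
    # regardless of how much smaller it is.
    centre = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= centre:
            code |= 1 << bit
    return code

# A neighbour of 119 and a neighbour of 20 both yield a 0-bit when the centre is 120.
example = np.array([[119, 130, 140],
                    [ 20, 120, 125],
                    [ 90, 121, 122]])
print(lbp_code_3x3(example))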
3. LBP Operator

The local binary pattern operator is an image operator which transforms an image into an array, or image, of integer labels describing the small-scale appearance of the image. These labels, or their statistics, most commonly the histogram, are then used for further image analysis. The most widely used versions of the operator are designed for monochrome still images, but it has also been extended to colour (multi-channel) images as well as to video and volumetric data.
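The sketch below applies the operator to a whole greyscale image, producing the image of integer labels referred to above; the eight-neighbour, radius-1 configuration and the bit ordering are assumptions.

import numpy as np

def lbp_image(gray):
    # Label image of basic 8-neighbour LBP codes (0-255) for the interior pixels.
    g = np.asarray(gray, dtype=np.int32)
    centre = g[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
    labels = np.zeros_like(centre)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        labels |= (neighbour >= centre).astype(np.int32) << bit
    return labels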
4. Flow Chart of Process
In the LBP approach to texture classification, the occurrences of the LBP codes in an image are collected into a histogram, and classification is then performed by computing simple histogram similarities. However, applying the same approach directly to facial image representation results in a loss of spatial information, so the texture information must be codified while also preserving its location. One way to achieve this is to use the LBP texture descriptors to build several local descriptions of the face and combine them into a global description. Such local descriptions have been gaining interest lately, which is understandable given the limitations of holistic representations; local feature-based methods appear to be more robust against variations in pose or illumination than holistic methods.

The LBP process consists of six steps: first, a face image is captured; the image is pre-processed; the face image is divided into several blocks; a histogram is calculated for each block; the block histograms are concatenated into a single descriptor of the face image; and finally this descriptor is used to decide whether the face is recognized by LBP or not.
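The block-histogram construction and a simple histogram similarity can be sketched as follows; the 8 × 8 grid, the 256 histogram bins and the chi-square distance are assumptions, since the paper does not fix these parameters.

import numpy as np

def lbp_block_histogram(lbp_labels, grid=(8, 8), bins=256):
    # Divide the LBP label image into blocks, histogram each block and
    # concatenate the normalised block histograms into one face descriptor.
    h, w = lbp_labels.shape
    bh, bw = h // grid[0], w // grid[1]
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = lbp_labels[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hist, _ = np.histogram(block, bins=bins, range=(0, bins))
            hists.append(hist / max(hist.sum(), 1))
    return np.concatenate(hists)

def chi_square_distance(h1, h2, eps=1e-10):
    # Simple histogram similarity used to match a probe descriptor to the gallery.
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))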
IV. RESULTS

We have used two commonly used databases for face recognition, the ORL database and the AR database. The ORL database includes 400 face images taken from 40 subjects, with each subject providing ten face images. For some subjects, the images were taken at different times, with varying lighting, facial expressions (open/closed eyes, smiling/not smiling) and facial details (glasses/no glasses). Each face image from the ORL database has been resized to a 40 × 56 matrix using a down-sampling algorithm. Some of the images are used as training samples and the remaining images serve as the test samples.
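One possible way to load and prepare such data with OpenCV is sketched below; the folder layout (one directory per subject containing .pgm files), the five-images-per-subject training split and the use of INTER_AREA down-sampling are assumptions rather than details given in the paper.

import glob
import cv2
import numpy as np

def load_orl(root, train_per_subject=5, size=(40, 56)):
    # Load ORL-style data (root/s1/1.pgm ... root/s40/10.pgm), down-sample each
    # face and split the ten images per subject into training and test sets.
    # Note: cv2.resize takes (width, height); swap the tuple if 40 x 56 is
    # meant as rows x columns.
    train, test = [], []
    for label, subject_dir in enumerate(sorted(glob.glob(f"{root}/s*"))):
        for idx, path in enumerate(sorted(glob.glob(f"{subject_dir}/*.pgm"))):
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            img = cv2.resize(img, size, interpolation=cv2.INTER_AREA)
            sample = (img.flatten().astype(np.float64), label)
            (train if idx < train_per_subject else test).append(sample)
    return train, test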
Table 2: Comparison of Detection Time Using LBP

Technique    Time
LBP 8        0.8 sec
LBP 10       1.0 sec
LBP 16       2.0 sec

Table 3: Performance Comparison of LBP and ULBP

No. of training faces    PCA    LDA    Basic LBP    ULBP
1                        68     68     69           68
2                        80     72     81           81
3                        84     84     86           85
4                        87     87     90           91
5                        90     89     92           93
6                        92     91     94           94
7                        94     93     95           95
8                        94     94     96           96
9                        95     95     97           97

Table 3 shows the performance comparison of the LBP and ULBP systems. It shows that ULBP gives better results than basic LBP for different numbers of training faces.
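The text does not define ULBP explicitly; assuming it denotes the uniform LBP variant, the sketch below shows the usual uniformity test (at most two 0/1 transitions in the circular pattern). Keeping one histogram bin per uniform code and a single bin for all non-uniform codes reduces each block histogram from 256 to 59 bins, which tends to give a more compact and stable descriptor.

def is_uniform(code, bits=8):
    # A pattern is uniform if it has at most two bitwise 0/1 transitions when
    # traversed circularly (e.g. 00001110).
    transitions = 0
    for i in range(bits):
        if ((code >> i) & 1) != ((code >> ((i + 1) % bits)) & 1):
            transitions += 1
    return transitions <= 2

# With 8 neighbours there are 58 uniform patterns, so a ULBP histogram has
# 58 + 1 = 59 bins instead of 256.
uniform_codes = [c for c in range(256) if is_uniform(c)]
assert len(uniform_codes) == 58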
V. CONCLUSION

Vision is a key component for building artificial systems that can perceive and understand their environment. The most useful and unique features of the face image are extracted in the feature extraction phase, and in the classification phase the face image is compared with the images from the database. The proposed method represents the local features of the face and matches them with the most similar face image in the database. Increasing the speed (frames per second) and the resolution handled by the LBP algorithm in real-time applications remains our objective. This paper investigated a promising method of face recognition using PCA in the OpenCV software, which is used here for feature extraction and verification. The principal component analysis method for face recognition is motivated by an information-theoretic approach that decomposes face images into a small set of characteristic feature images called "eigenfaces", which may be thought of as the principal components of the initial training set of face images. The simulation results show that the proposed algorithm is very efficient in comparing images either of the same person with variations in facial expressions or between the images of different persons.

REFERENCES

[1] David Edmundson and Gerald Schaefer, "Fast Mobile Image Retrieval", IEEE International Conference on Systems, Man, and Cybernetics, 2013, pp. 882-887.
[2] Suraya Abu Bakar and Muhammad Suzuri Hitam, "Investigating the Properties of Zernike Moments for Robust Content Based Image Retrieval", IEEE International Conference on Computer Applications Technology (ICCAT), 2013, pp. 5285-5290.
[3] S. Pradeep and L. Malliga, "Content Based Image Retrieval System and Segmentation of Medical Image Database With Fuzzy Values", IEEE International Conference on Computational Science and Computational Intelligence, 2014, pp. 3834-3840.
[4] G. Nandhakumar and V. Saranya, "IRMA: Improvisation of Image Retrieval with Markov Chain Based on Annotation", IEEE International Conference on Computational Science, 2014, pp. 3841-3847.
[5] R. Grace and R. Manimegalai, "Medical Image Retrieval System in Grid Using Hadoop Framework", IEEE International Conference on Computational Science and Computational Intelligence, pp. 144-148, 2014.
[6] M. Dass and M. Ali, "Image Retrieval Using Interactive Genetic Algorithm", IEEE International Conference on Computational Science and Computational Intelligence, pp. 215-220, 2014.
[7] K. Belloulata and L. Belallouche, "Region Based Image Retrieval Using Shape-Adaptive DCT", IEEE China Summit & International Conference on Signal and Information Processing (ChinaSIP), pp. 470-474, July 2014.
[8] X. Yang and X. Qian, "Scalable Mobile Image Retrieval by Exploring Contextual Saliency", IEEE Transactions on Image Processing, vol. 24, no. 6, pp. 1709-1721, June 2015.
[9] Peizhong Liu and Jing-Ming Guo, "Fusion of Deep Learning and Compressed Domain Features for Content Based Image Retrieval", IEEE Transactions on Image Processing, vol. 26, no. 12, pp. 5706-5717, Dec. 2017.
[10] Yansheng Li and Yongjun Zhang, "Large-Scale Remote Sensing Image Retrieval by Deep Hashing Neural Networks", IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 2, pp. 950-965, February 2018.
[11] J. M. Guo, H. Prasetyo, and N. J. Wang, "Effective Image Retrieval System Using Dot-Diffused Block Truncation Coding Features", IEEE Transactions on Multimedia, vol. 17, no. 9, pp. 1576-1590, June 2015.
[12] J. M. Guo and Y. F. Liu, "Improved Block Truncation Coding Using Optimized Dot Diffusion", IEEE Transactions on Image Processing, vol. 23, no. 3, pp. 1269-1275, Mar. 2010.
[13] Mohamed Elhoseiny and Sheng Huang, "Weather Classification With Deep Convolutional Neural Networks", IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 2249-2253.
[14] C. Szegedy et al., "Going Deeper with Convolutions", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-9, 2015.
[15] J. Wan, D. Wang, S. C. H. Hoi, P. Wu, J. Zhu, Y. Zhang, and J. Li, "Deep Learning for Content-Based Image Retrieval: A Comprehensive Study", Proceedings of the ACM International Conference on Multimedia, pp. 157-166, 2014.
[16] G. J. Burghouts and J. M. Geusebroek, "Material-Specific Adaptation of Color Invariant Features", Pattern Recognition Letters, vol. 30, pp. 306-313, 2009.