
ASSOSA UNIVERSITY

FACULTY OF COMPUTING AND INFORMATICS


DEPARTMENT OF COMPUTER SCIENCE

FACE RECOGNITION

BY:
Chala Dechasa
Abiyot Girima
Bereket Mengistu
Feysal kashif
Abdumalik Nadi

December 20, 2016

Assosa, Ethiopia
Table of Contents

Abstract
1. Introduction
2. Overview of Face Recognition
3. The Face Recognition Process
3.1. Face Detection
3.1.1. Detection under Different Conditions
3.2. Feature Extraction
3.3. Face Recognition
3.4. How Face Recognition Is Done
3.5. Methods of Face Recognition
3.5.1. Knowledge based methods
3.5.2. Feature invariant methods
3.5.3. Template matching methods
3.5.4. Appearance based approaches
4. Results and Discussion
5. Future Work (Recommendations)
6. Conclusion

Abstract
Face recognition aims to detect and identify faces in still images and in image sequences from video. In recent years face recognition has received substantial attention from both the research community and the market, yet it remains very challenging in real applications. Many face recognition algorithms, along with their variations, have been developed over the years, and a number of face databases and published performance evaluations are available in the public domain. This report reviews the main steps and methods of face recognition and outlines future research directions based on current recognition results.

1. Introduction
Nowadays the information age is rapidly changing the way we work. Everyday transactions are handled electronically in computerized form instead of through manual systems. This growth in electronic services has created a greater demand for fast and accurate user identification and authentication. Conventional methods of identification based on the possession of identification cards, or on knowledge such as a social security number or a password, are not altogether reliable: identification cards can be lost or misplaced, and passwords can be forgotten or shared. A face, in contrast, is directly connected to its owner; it cannot be borrowed, stolen or easily forged. Face recognition technology can therefore address this problem.

Image recognition technologies attempt to identify objects, people, buildings, places, logos, and anything else of value to users and enterprises. In recent years face recognition has received great attention from researchers in the biometrics, pattern recognition, and computer vision communities, and the machine learning and computer graphics communities are also increasingly involved. This common interest among researchers working in diverse fields is motivated by our remarkable human ability to recognize people and by the fact that human activity is a primary concern both in everyday life and on the Internet. In addition, a large number of commercial and security applications require face recognition technologies. Face recognition has become one of the most successful applications of image recognition and artificial intelligence.

2. Overview of Face Recognition
Face recognition as a concept dates back to the 1960s, when the first semi-automated system for face recognition required an administrator to locate features such as the eyes, ears, nose, and mouth on the photographs before it calculated distances and ratios to a common reference point, which were then compared to reference data. In the 1970s, Goldstein, Harmon, and Lesk used 21 specific subjective markers, such as hair color and lip thickness, to automate the recognition. The problem with both of these early solutions was that the measurements and locations were manually computed.

In 1988, Kirby and Sirovich applied principal component analysis, a standard linear algebra technique, to the face recognition problem. This was considered something of a milestone, as it showed that fewer than one hundred values were required to accurately code a suitably aligned and normalized face image.

In 1991, Turk and Pentland discovered that, when using the eigenface technique, the residual error could be used to detect faces in images, a discovery that enabled reliable real-time automated face recognition systems. Although the approach was somewhat constrained by environmental factors, it nonetheless created significant interest in furthering the development of automated face recognition technologies. The technology first captured the public's attention through the media reaction to a trial implementation at the January 2001 Super Bowl, which captured surveillance images and compared them to a database of digital mugshots. This demonstration initiated much-needed analysis of how to use the technology to support national needs while being considerate of the public's social and privacy concerns.

As one of the most successful applications of image analysis and understanding, face recognition has recently gained significant attention and, over the last few years, has become a popular area of research in computer vision.

Today, face recognition technology is being used to combat passport fraud, support law enforcement, identify missing children, and minimize benefit and identity fraud.

3. The Face Recognition Process
To recognize a face, the structure of face recognition technology is divided into three steps: Face Detection, Feature Extraction, and Face Recognition. Diagrammatically this is shown below.

Figure 1. Structure of face recognition.

3.1. Face Detection:


The main function of detection is to determine whether human faces appear in a given image and, if so, where they are located. The expected outputs of this step are patches containing each face in the input image. In order to make the subsequent face recognition system more robust and easier to design, face alignment is performed to normalize the scales and orientations of these patches. Besides serving as pre-processing for face recognition, face detection can be used for region-of-interest detection, retargeting, and video and image classification.
Face detection must deal with several known challenges, which can be attributed to the following factors (a minimal detection sketch follows the list):

 Pose variation: The ideal condition for face detection would be one in which only frontal images are involved, but, as stated, this is very unlikely in general uncontrolled conditions. Moreover, the performance of face detection algorithms drops severely when there are large pose variations, which makes this a major research issue. Pose variation can arise from the subject's movements or the camera's angle.
 Feature occlusion: The presence of elements such as beards, glasses or hats introduces high variability. Faces can also be partially covered by objects or by other faces.
 Facial expression: Facial appearance also varies greatly because of different facial gestures.
 Imaging conditions: Different cameras and lighting conditions can affect the quality of an image, and hence the appearance of a face.
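
To make the detection step concrete, the following is a minimal sketch (not taken from the report) of locating face patches with OpenCV's bundled Haar cascade; the file name "group.jpg" and the detector parameters are illustrative assumptions.

# Hypothetical sketch: detecting face patches with OpenCV's bundled Haar cascade,
# assuming opencv-python is installed and an image "group.jpg" exists.
import cv2

img = cv2.imread("group.jpg")                      # input image (assumed path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)       # the detector works on grayscale

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Each detection is a bounding box; the cropped patch is the input to later steps.
patches = [img[y:y + h, x:x + w] for (x, y, w, h) in faces]
print(f"found {len(faces)} face(s)")

The cropped patches would then be passed on to the alignment and feature extraction steps described below.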

3.1.1. Detection under Different Conditions
Controlled environment: This is the simplest case. Images are taken under controlled lighting and against a plain background, so simple edge detection techniques can be used to detect faces.

Color images: Typical skin colors can be used to find faces, although this can be unreliable if lighting conditions change. Moreover, human skin color varies a lot, from nearly white to almost black. However, several studies show that the major variation lies in intensity rather than chrominance, so chrominance is a useful feature. It is not easy to establish a solid representation of human skin color, but there are attempts to build robust face detection algorithms based on it.
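
As an illustration of the chrominance idea only, the following sketch builds a crude skin-color mask in the YCrCb space; the threshold values are assumptions, not values from the report.

# Hypothetical sketch: a crude skin-color mask in YCrCb space, where chrominance
# (Cr, Cb) is thresholded and intensity (Y) is ignored, as the text suggests.
import cv2
import numpy as np

img = cv2.imread("frame.jpg")                          # assumed input path
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
lower = np.array([0, 135, 85], dtype=np.uint8)         # Y ignored, illustrative Cr/Cb bounds
upper = np.array([255, 180, 135], dtype=np.uint8)
mask = cv2.inRange(ycrcb, lower, upper)                # 255 where a pixel looks skin-like
skin_ratio = cv2.countNonZero(mask) / mask.size
print(f"skin-like pixels: {skin_ratio:.1%}")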

Images in motion: Real-time video gives the chance to use motion detection to localize faces. Nowadays, most commercial systems must locate faces in videos, and there is a continuing challenge to achieve the best detection results with the best possible performance. Another motion-based approach is eye-blink detection, which has many uses aside from face detection.

3.2. Feature Extraction:


After the face detection step, human face patches are extracted from images. Directly using these patches for face recognition has some disadvantages. First, each patch usually contains over 1000 pixels, which is too many to build a robust recognition system on directly. Second, face patches may be taken from different camera alignments, with different facial expressions and illumination, and may suffer from occlusion and clutter. To overcome these drawbacks, feature extraction is performed to gather the informative content, reduce the dimensionality, and clean up noise, as sketched below.
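
A minimal sketch of such preprocessing, assuming OpenCV and the face patches produced by the detection sketch above, might look like this; the 32x32 patch size is an illustrative choice.

# Hypothetical sketch of basic feature preparation: resize each detected patch to a
# fixed size, equalize its histogram to reduce lighting effects, and flatten it into
# a vector. "patches" is assumed to come from the detection sketch above.
import cv2
import numpy as np

def patch_to_vector(patch, size=(32, 32)):
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)     # drop color information
    gray = cv2.resize(gray, size)                      # common scale for all faces
    gray = cv2.equalizeHist(gray)                      # crude illumination normalization
    return gray.flatten().astype(np.float32) / 255.0   # fixed-length (1024-d) vector

# vectors = [patch_to_vector(p) for p in patches]      # ready for dimension reduction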

3.3. Face Recognition:


After each face has been represented as a feature vector, the last step is to recognize its identity. In order to achieve automatic recognition, a face database has to be built. For each person, several images are taken, and their features are extracted and stored in the database. When an input face image arrives, we perform face detection and feature extraction, and compare its features to each face class stored in the database. Many algorithms have been proposed to deal with this classification problem, and they are discussed in later sections. Consider the following image, which illustrates the structure of the recognition process:

The example above shows how the three steps work on an input image.

(a) The input image and the result of face detection that is the red rectangle.

(b) The extracted face patch

(c) The feature vector after feature extraction

(d) The input vector is compared with the stored vectors in the database using classification techniques to determine the most probable class (the red rectangle). Here each face patch is expressed as a d-dimensional vector.

There are two general applications of face recognition: identification and verification. Face identification means that, given a face image, we want the system to tell who the person is, or the most probable identity. It is a one-to-many matching process that compares a query face image against all the template images in a face database to determine the identity of the query face. The test image is identified by locating the image in the database that has the highest similarity with it. The identification process is a closed test, which means the sensor takes an observation of an individual who is known to be in the database. The test subject's (normalized) features are compared to the other features in the system's database, a similarity score is computed for each comparison, and these similarity scores are then ranked in descending order. In face verification, by contrast, given a face image and a claimed identity, we want the system to tell whether the claim is true or false. A minimal matching sketch follows this paragraph.
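
The following is a minimal sketch of the one-to-many matching described above, using cosine similarity and a descending ranking; the gallery names and feature values are invented for illustration.

# Hypothetical sketch of one-to-many identification: cosine similarity between a
# query feature vector and enrolled vectors, ranked in descending order.
import numpy as np

gallery = {                                   # enrolled feature vectors (assumed values)
    "alice": np.array([0.9, 0.1, 0.3]),
    "bob":   np.array([0.2, 0.8, 0.5]),
}
query = np.array([0.85, 0.15, 0.25])          # features of the test image (assumed)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = sorted(((cosine(query, v), name) for name, v in gallery.items()),
                reverse=True)                 # descending similarity ranking
print(scores[0])                              # most probable identity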

Face recognition technology is among the least intrusive and fastest biometric technologies. It works with the most obvious individual identifier: the human face. Facial expression recognition work contributes a resilient face recognition model based on mapping behavioral characteristics onto physiological biometric characteristics. The physiological characteristics of the human face associated with various expressions, such as happiness, sadness, fear, anger, surprise and disgust, correspond to geometrical structures that are stored as base matching templates for the recognition system.

3.4. How Face Recognition Is Done


Instead of requiring people to place their hand on a reader (a process not acceptable in some cultures, as well as a source of illness transfer) or to precisely position their eye in front of a scanner, face recognition systems continuously take pictures of people's faces as they enter a defined area. There is no delay, and in most cases the subjects are entirely unaware of the process. They do not feel "under surveillance" or that their privacy has been invaded.

Facial recognition analyzes the characteristics of a person's face from images input through a digital video camera. It measures the overall facial structure, including the distances between the eyes, nose, mouth, and jaw edges. These measurements are retained in a database and used for comparison when a user stands before the camera. This biometric has been widely, and perhaps wildly, touted as a fantastic system for recognizing potential threats, whether a terrorist, a scam artist, or a known criminal. It is projected that biometric facial recognition technology will soon overtake fingerprint biometrics as the most popular form of user authentication.

Every face has numerous distinguishable landmarks, the different peaks and valleys that make up facial features. Some of those measured by facial recognition technology are listed below; a small sketch of turning such measurements into a feature vector follows the list.

 Distance between the eyes.


 Width of the nose.
 Depth of the eye sockets.
 The shape of the cheekbones.
 The length of the jaw line.
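
Purely for illustration, the sketch below turns a few assumed landmark coordinates into a small geometric feature vector; the landmark names and positions are hypothetical, and a real system would obtain them from a landmark detector.

# Hypothetical sketch: building a tiny geometric feature vector from assumed
# landmark coordinates (pixel positions); the points and names are made up.
import math

landmarks = {                      # assumed (x, y) positions in the image
    "left_eye": (120, 95), "right_eye": (180, 96),
    "nose_left": (138, 130), "nose_right": (162, 131),
    "nose_tip": (150, 140), "chin": (150, 210),
}

def dist(a, b):
    return math.dist(landmarks[a], landmarks[b])

features = [
    dist("left_eye", "right_eye"),     # distance between the eyes
    dist("nose_left", "nose_right"),   # width of the nose
    dist("nose_tip", "chin"),          # rough proxy for the jaw-line length
]
print(features)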

3.5. Methods of Face Recognition

In the past few years automatic face recognition has been extensively studied due to its important role in a number of application domains, including visual surveillance, access control, and government-issued identity documents such as driver licenses and passports.

Several factors make face recognition a hard and complicated task: glasses and beards, environmental factors such as lighting conditions and background, and the natural variation of human faces in color, age and size. Since the problem has existed for many years, much work has been done to solve it. Categorizing the different schemes and strategies for face recognition is not easy, but the various methods can be grouped into four categories:

i. Knowledge based methods.


ii. Feature invariant methods.
iii. Template matching methods.
iv. Appearance-based approaches.

Let us see them one by one

3.5.1. Knowledge based methods.


These methods use pre-defined rules, based on human knowledge, to determine whether a face is present. They encode human knowledge of what constitutes a typical face, usually by capturing the relationships between facial features, so that a face is represented by a set of human-coded rules which then guide the face search process. The advantage of knowledge-based techniques is that it is easy to write rules describing facial features and their relationships. Their disadvantages are the difficulty of translating human knowledge into precise rules and of extending these methods to detect faces in different poses.

3.5.1.1. Hierarchical knowledge-based method


This method is composed of a multi-resolution hierarchy of images and specific rules defined at each image level. The hierarchy is built by image subsampling. The face detection procedure starts from the highest layer in the hierarchy (the lowest resolution) and extracts possible face candidates based on the general appearance of faces. The middle and bottom layers then apply more detailed rules, such as the alignment of facial features, to verify each face candidate.

3.5.1.2. Horizontal / vertical projection
This method uses a fairly simple image processing technique: horizontal and vertical projection. Based on the observation that human eyes and mouths have lower intensity than other parts of the face, the two projections are computed on the test image and local minima are detected as facial feature candidates, which together constitute a face candidate. Finally, each face candidate is validated by further detection rules concerning, for example, the eyebrows and nostrils. A minimal projection sketch is given below.
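
A minimal sketch of the projection idea, assuming SciPy is available and a grayscale image file "face.jpg" exists, is shown below; the minima-detection window is an arbitrary choice.

# Hypothetical sketch of horizontal/vertical projection: sum pixel intensities
# along rows and columns, then look for local minima (dark bands such as eyes
# and mouth).
import cv2
import numpy as np
from scipy.signal import argrelextrema

gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)
row_proj = gray.sum(axis=1)                 # horizontal projection (one value per row)
col_proj = gray.sum(axis=0)                 # vertical projection (one value per column)

row_minima = argrelextrema(row_proj, np.less, order=10)[0]   # candidate eye/mouth rows
col_minima = argrelextrema(col_proj, np.less, order=10)[0]   # candidate eye columns
print("candidate feature rows:", row_minima)
print("candidate feature columns:", col_minima)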

3.5.2. Feature invariant methods


Feature invariant methods aim to find structural features of the face that are robust to pose and lighting variations, that is, to detect invariant facial features that exist even when the pose, viewpoint or lighting conditions vary. The main advantage of feature-oriented face detection approaches is that these features are invariant to rotation. Their main drawback is the difficulty of locating facial features against a complex background. These approaches are used for detecting features such as the eyes, nose, ears, mouth and lips.

3.5.3. Template matching methods


These methods use pre-stored face templates to judge whether an image contains a face. Usually, these approaches use correlation operations to locate faces in images. The templates are hand-coded, not learned, and separate templates have to be created for different poses. These methods are used for face localization and detection by computing the correlation of an input image with a standard face pattern; a minimal correlation sketch is given below.
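
The sketch below illustrates this correlation-based matching with OpenCV's normalized cross-correlation; the file names and the 0.7 threshold are assumptions.

# Hypothetical sketch of template matching: normalized cross-correlation between
# an input image and a stored face template via OpenCV.
import cv2

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("face_template.jpg", cv2.IMREAD_GRAYSCALE)

result = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)     # best correlation score and position

h, w = template.shape
if max_val > 0.7:                                  # illustrative threshold
    print("face-like region at", max_loc, "to", (max_loc[0] + w, max_loc[1] + h))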

3.5.4. Appearance based approaches.


The templates in appearance-based methods are learned from example images. In general, appearance-based methods rely on techniques from statistical analysis and machine learning to find the relevant characteristics of face images. Some appearance-based methods work within a probabilistic framework: an image or feature vector is treated as a random variable with some probability of belonging to the face class or not. Another approach is to define a discriminant function between the face and non-face classes. These methods are also used in feature extraction for face recognition. The most relevant methods and tools are the following:

 Eigenface-based: Sirovich and Kirby developed a method for efficiently representing faces using PCA (Principal Component Analysis). The goal of this approach is to represent a face as a set of coordinates in a low-dimensional system whose basis vectors were referred to as eigenpictures. Later, Turk and Pentland used this approach to develop an eigenface-based algorithm for recognition; a minimal sketch is given after this list.
 Distribution-based: These systems were first proposed for object and pattern detection by Sung. The idea is to collect a sufficiently large number of sample views of the pattern class we wish to detect, covering all the sources of image variation we wish to handle, and then to choose an appropriate feature space that represents the pattern class as a distribution of all its permissible image appearances. The system matches a candidate picture against the distribution-based canonical face model. Finally, a trained classifier identifies instances of the target pattern class among background image patterns, based on a set of distance measurements between the input pattern and the distribution-based class representation in the chosen feature space. Algorithms such as PCA or Fisher's discriminant can be used to define the subspace representing facial patterns.
 Neural Networks: Many pattern recognition problems, such as object recognition and character recognition, have been successfully addressed by neural networks. These systems can be used in face detection in different ways. Some early work used neural networks to learn the face and non-face patterns, defining detection as a two-class problem; the real challenge was to represent the "images not containing faces" class. Another approach is to use neural networks to find a discriminant function that classifies patterns using distance measures. Some approaches have tried to find an optimal boundary between face and non-face pictures using a constrained generative model.
 Hidden Markov Model: This statistical model has also been used for face detection. The challenge is to build a proper HMM so that the output probability can be trusted. The states of the model correspond to facial features, which are often defined as strips of pixels, and the probabilistic transitions between states usually correspond to the boundaries between these pixel strips. As with Bayesian methods, HMMs are commonly used along with other methods to build detection algorithms.

 Information-Theoretical Approach: Markov Random Fields (MRF) can be used to
model contextual constraints of a face pattern and correlated features. The Markov
process maximizes the discrimination between classes (an image has a face or not) using
the Kullback–Leibler divergence. Therefore, this method can be applied in face detection.
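
As a sketch of the eigenface idea referenced in the first bullet, the following computes principal components of a training set with an SVD and matches a query by nearest neighbour in the reduced space; the data are random stand-ins, not real face images, and the sizes are illustrative assumptions.

# Hypothetical eigenface sketch: PCA on flattened, aligned face images via SVD,
# then nearest-neighbour matching in the reduced space.
import numpy as np

rng = np.random.default_rng(0)
train = rng.random((40, 32 * 32))             # 40 synthetic "faces", 32x32 pixels (assumed)
mean_face = train.mean(axis=0)
centered = train - mean_face

# Rows of vt are the principal directions of the training set: the eigenfaces.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:20]                          # keep the top 20 components

def project(x):
    return eigenfaces @ (x - mean_face)       # coordinates in eigenface space

gallery = np.array([project(f) for f in train])
query = project(train[3] + 0.01 * rng.random(32 * 32))   # a noisy copy of face 3
best = int(np.argmin(np.linalg.norm(gallery - query, axis=1)))
print("closest training face:", best)         # expected: 3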

4. Results and Discussion
Many applications of face recognition have been envisaged, and some of them have been hinted at above. Commercial applications have so far only scratched the surface of the potential.
Installations so far are limited in their ability to handle pose, age and lighting variations, but as
technologies to handle these effects are developed, huge opportunities for deployment exist in
many domains:

Access control: Face verification, matching a face against a single enrolled exemplar, is well within the capabilities of current personal computer hardware. Since PC cameras have become widespread, their use for face-based PC logon has become feasible, though take-up seems to be very limited. Increased ease of use over password protection is hard to argue for with today's somewhat unreliable and unpredictable systems, and in few domains is there motivation to progress beyond the combination of password and physical security that protects most enterprise computers. As biometric systems tend to be third-party software add-ons, they do not yet have full access to the greater hardware security guarantees afforded by boot-time and hard-disk passwords.

Surveillance: The application domain where most interest in face recognition is being shown is probably surveillance. Video is the medium of choice for surveillance because of the richness and type of information that it contains, and naturally, for applications that require identification, face recognition is the best biometric for video data, though gait and lip motion recognition have some potential. Face recognition can be applied without the subject's active participation, and indeed without the subject's knowledge. Automated face recognition can be applied 'live' to search for a watch-list of 'interesting' people, or after the fact, using surveillance footage of a crime to search through a database of suspects. Other deployments use face recognition as a general identification system.

5. Future Work (Recommendations)
Face recognition systems used today work very well under good conditions, although all systems work much better with frontal images and constant lighting. All current face recognition algorithms fail under the varying conditions under which humans are able to identify other people. Next-generation person recognition systems will need to recognize people in real time and in much less constrained situations.

We believe that identification systems that are robust in natural environments, in the presence of noise and illumination changes, cannot rely on a single modality, so fusion with other modalities is essential. Technology used in smart environments has to be unobtrusive and allow users to act freely. Wearable systems in particular require their sensing technology to be small, low-powered and easily integrated with the user's clothing. Considering all these requirements, identification systems that combine face recognition and speaker identification seem to us to have the most potential for widespread application.

Cameras and microphones today are very small and lightweight and have been successfully integrated with wearable systems. Audio- and video-based recognition systems have the critical advantage that they use the same modalities humans use for recognition. Finally, researchers are beginning to demonstrate that unobtrusive audio- and video-based person identification systems can achieve high recognition rates without requiring the user to be in a highly controlled environment.

Experimental evaluation of facial expression systems suggests that modelling expressions can improve face recognition rates. Having examined techniques to deal with expression variation, future work may investigate the face classification problem and the optimal fusion of color and depth information in more depth. Further study can also be directed at matching genetic factors (alleles) to the geometric factors of facial expressions, and a genetic-evolution framework for facial expression systems can be studied to suit the requirements of different security models such as criminal detection and the protection of confidential government information.

6. Conclusion
Today, machines are able to automatically verify identity information for secure transactions, for surveillance and security tasks, and for access control to buildings. These applications usually work in controlled environments, and recognition algorithms can take advantage of the environmental constraints to obtain high recognition accuracy. However, next-generation face recognition systems are going to have widespread application in smart environments, where computers and machines act more like helpful assistants.

To achieve this goal, computers must be able to reliably identify nearby people in a manner that fits naturally within the pattern of normal human interactions. They must not require special interactions and must conform to human intuitions about when recognition is likely. This implies that future smart environments should use the same modalities as humans and have approximately the same limitations. These goals now appear to be within reach; however, substantial research remains to be done to make person recognition technology work reliably, under widely varying conditions, using information from single or multiple modalities.
