A Feasibility Study on Real-Time Gender Recognition
IJARCCE
nCORETech
LBS College of Engineering, Kasaragod
Vol. 5, Special Issue 1, February 2016
Abstract: Human gender can be recognized by an automated system using features that distinguish males from females. Real-time gender recognition from facial features is a challenging task. To address this, measurements of facial features are used for gender detection. Various techniques are used for recognizing gender from facial features; the facial parameters considered are the eyes, jaw, eyebrows, and lips. Real-time gender recognition is necessary for improving human-robot interaction (HRI) and has a wide variety of applications in government, forensic, and commercial fields in which reliable personal identification is vital. The performance of gender recognition is also affected by non-ideal images exhibiting motion blur, poor contrast, and varying expressions. This paper is a study and analysis of gender recognition systems that use various techniques based on the facial and skeletal features of the human body.
Keywords: Feature extraction, Laplacian of Gaussian edge detection, inter-ocular distance, RGB-D camera, Human-Robot Interaction (HRI).
I. INTRODUCTION
Gender detection is "the genetic process used to specify an individual's physical characteristics". The need for gender detection came into existence with increased requirements in airport security, but detecting the gender of a person only from facial features is a difficult task. Gender detection is an active research topic in pattern recognition, widely used in man-machine interfaces, visual communication, security, surveillance, and many other areas. Systems that perform real-time analysis are becoming desirable in many applications. Gender classification differs across races: the shapes and sizes of the facial parameters of Indian and African subjects, for example, are different. Changes in the facial features cause changes in the facial parameters, which affect gender detection, so the design should account for the variation in facial parameters across races.
This paper is a study and analysis of various gender recognition systems which use different facial and skeletal features for recognition. Section II, on facial feature extraction for real-time human gender classification [1], discusses a processing technique using Laplacian of Gaussian filters: facial features are detected and the distances between them are computed, the GTAV face database is used for evaluation, and a Support Vector Machine (SVM) classifies the gender. Section III, on gender recognition based on 3D human body shape for human-robot interaction [2], uses 3D body-shape information for gender recognition; the system relies on differences in human body shape, examining shoulder width, torso length, and chest statistics. Section IV presents the multi-scale Independent Component Analysis (ICA) texture pattern for gender recognition [3], a texture feature adaptively learned in a sparse-representation framework; a sparse classifier is used, and the basis images of the facial representation are obtained by changing the size and coefficients of the mask. Section V covers an efficient and accurate real-time gender recognition system on a Field Programmable Gate Array (FPGA) [4], in which facial features are extracted and the distances between them, such as the inter-ocular distance, eye-to-nose distance, and eye-to-mid-lip distance, are calculated for gender classification. These distances are measured by MATLAB code, which LabVIEW then converts and interfaces with the FPGA kit for real-time recognition.
II. FACIAL FEATURE EXTRACTION FOR REAL-TIME HUMAN GENDER CLASSIFICATION
Generally, for gender classification, the GWT features are considered; the major features are the geometric arrangement of colour, hair, and moustache. For age classification, wrinkles, flabs of skin, and texture spots are considered. Appearance-based methods for facial images use a nonlinear SVM classifier and compare its results with traditional classifiers. Modern techniques such as large ensembles of RBF classifiers and Radial Basis Function (RBF) networks are also used to refine the classification. Principal Component Analysis (PCA) is used to represent each image as a feature vector in a low-dimensional space. A genetic algorithm then selects a subgroup of features from the low-dimensional feature vectors, discarding the eigen-features that do not seem to encode important gender detection information.
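The PCA step above can be illustrated with a short sketch; the synthetic array of face images and the choice of scikit-learn are assumptions made purely for illustration and are not prescribed by the surveyed papers.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: N aligned grayscale face images of size 64x64,
# flattened into row vectors (one row per face).
face_images = np.random.rand(200, 64 * 64)  # placeholder for a real dataset

# Represent each face as a low-dimensional feature vector, as described above.
pca = PCA(n_components=50)          # keep the 50 strongest components
features = pca.fit_transform(face_images)

print(features.shape)               # (200, 50): one 50-D vector per face
```

A genetic algorithm would then search over subsets of these 50 components, keeping only those that aid gender discrimination.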
The Sobel edge detector is used for image processing: at each point in an image, the output of the operator is either the corresponding gradient vector or the norm of this vector. The operator convolves the image with a separable, integer-valued filter in both the horizontal and vertical directions, giving inexpensive approximations of the image derivatives.
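A minimal sketch of the Sobel computation just described is given below; the use of OpenCV and the file path are illustrative assumptions, since the paper does not prescribe a library.

```python
import cv2
import numpy as np

# Load a face image in grayscale (path is a placeholder).
img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)

# Convolve with the separable Sobel kernels in the horizontal
# and vertical directions to approximate the image derivatives.
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)

# The edge strength at each pixel is the norm of the gradient vector.
magnitude = np.sqrt(gx ** 2 + gy ** 2)
edges = np.uint8(255 * magnitude / magnitude.max())
```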
Figure 2.2: (a) Original input image from the GTAV face database; (b) after Laplacian of Gaussian edge detection.
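Since Figure 2.2 refers to Laplacian of Gaussian (LoG) edge detection, a minimal sketch of that filtering step follows; the OpenCV calls, kernel sizes, and file path are illustrative assumptions rather than values taken from the paper.

```python
import cv2

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Laplacian of Gaussian: smooth with a Gaussian first to suppress noise,
# then apply the Laplacian to highlight edges (zero crossings).
blurred = cv2.GaussianBlur(img, (5, 5), sigmaX=1.0)
log_edges = cv2.Laplacian(blurred, cv2.CV_64F, ksize=5)
```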
Initially, the facial features are identified so that the ratios of distances between them, which decide the gender of an individual, can be computed. The ratios are calculated by the classifier from four key parameters of the face:
1) Inter-ocular distance: the distance between the midpoints of the right and left eyeballs.
2) Eye to nose: the distance between the midpoint of the line joining the two eyes and the nose tip in the image.
3) Lips to nose: the distance from the nose tip to the midpoint of the lips in the image.
4) Lips to eyes: the distance from the midpoint of the lips to the line joining the two eyes in the image.
These global features are extracted with a rectangular box drawn from the starting point of each feature over a certain area. This is done for all the features of the face image, as shown in Figure 2.3, and the distances between the features are calculated, as sketched below. As the result has only two classes of data, an SVM is used for recognition; to improve the performance of the SVM, nearest-neighbour classification is also used.
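A minimal sketch of this distance-ratio and SVM stage is given below; the landmark coordinates, the normalisation by inter-ocular distance, and the scikit-learn classifier are illustrative assumptions, since the paper only states that the distances feed an SVM.

```python
import numpy as np
from sklearn.svm import SVC

def face_ratios(left_eye, right_eye, nose_tip, lip_mid):
    """Distance parameters of the face, normalised by the inter-ocular distance."""
    eye_mid = (np.array(left_eye) + np.array(right_eye)) / 2.0
    inter_ocular = np.linalg.norm(np.array(left_eye) - np.array(right_eye))
    eye_to_nose = np.linalg.norm(eye_mid - nose_tip)
    lips_to_nose = np.linalg.norm(np.array(nose_tip) - lip_mid)
    lips_to_eyes = np.linalg.norm(np.array(lip_mid) - eye_mid)
    return [eye_to_nose / inter_ocular,
            lips_to_nose / inter_ocular,
            lips_to_eyes / inter_ocular]

# Hypothetical training data: one ratio vector per face, labels 0 = female, 1 = male.
X_train = np.random.rand(100, 3)       # placeholder feature vectors
y_train = np.random.randint(0, 2, 100)

clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)

# Classify a new face from its (placeholder) landmark pixel positions.
pred = clf.predict([face_ratios((30, 40), (70, 40), (50, 65), (50, 85))])
```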
III. REAL-TIME GENDER RECOGNITION BASED ON 3D HUMAN BODY SHAPE FOR HUMAN-ROBOT INTERACTION
Considering the HRI scenario and the differences in human body shape, this work provides a real-time gender recognition system for HRI. It processes depth images from an RGB-D camera (Kinect). Gender recognition methods can be divided into face-based methods using 2D images, gait-based methods using 2D video sequences, and human-body-shape-based methods using 3D laser scans.
A. Gender Recognition System
For the gender recognition process, the depth images from the Kinect are used. The 3D skeletal joint positions are extracted from the depth image by the method of Shotton et al. [6]. Machine learning methods based on an SVM are then applied to the 3D skeletal information for gender recognition.
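A rough sketch of turning such skeletal measurements into an SVM feature vector is shown below; the joint dictionary, placeholder coordinates, and the scikit-learn classifier are assumptions for illustration, whereas the original system uses Kinect skeletal tracking and its own training data.

```python
import numpy as np
from sklearn.svm import SVC

def skeletal_features(joints):
    """joints: dict of 3D positions (metres) for the upper-body joints used here."""
    shoulder_width = np.linalg.norm(joints["shoulder_left"] - joints["shoulder_right"])
    torso_length = np.linalg.norm(joints["shoulder_center"] - joints["spine"])
    return [shoulder_width, torso_length]

# Hypothetical skeleton from a single depth frame.
skeleton = {
    "shoulder_left": np.array([-0.20, 1.40, 2.0]),
    "shoulder_right": np.array([0.20, 1.40, 2.0]),
    "shoulder_center": np.array([0.0, 1.45, 2.0]),
    "spine": np.array([0.0, 1.10, 2.0]),
}

# Placeholder training data: body measurements with labels 0 = female, 1 = male.
X_train = np.random.rand(50, 2)
y_train = np.random.randint(0, 2, 50)

clf = SVC(kernel="linear").fit(X_train, y_train)
print(clf.predict([skeletal_features(skeleton)]))
```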
B. 3D Human Body Shape
The gender of a person can be distinguished by analysing the chest region as well as body-size information. Short-distance interaction with a robot makes it hard to obtain upper-body information about a human, such as shoulder width and torso length, so the
chest region is considered as the 3D information about the body. The upper-body joint positions are shoulder left (SL), shoulder right (SR), head to chest (HC), head to spine (SP), and shoulder centre (SC), as shown in Fig. 3.1.
Figure 3.1: 3D analysis.
D. Chest Statistics
The chest region of a female differs from that of a male, so gender can be determined by analysing chest parameters such as the chest altitude and the chest surface-normal distribution, which the 3D depth images provide. These statistics are calculated over the chest region (the yellow rectangle in Fig. 3.1) based on the positions of SL, SR, SC, and SP. The chest altitude is the height of the chest region projected onto the person's heading orientation, where the chest-region vector CR is taken from the origin of the camera to the chest-region point. The surface-normal distribution is represented by the mean and variance of the angle between the local surface normal (LO) and the person's orientation, so a flat chest tends to have a small mean and variance.
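The surface-normal statistics described above can be sketched as follows; the point normals and heading vector used here are synthetic placeholders, since the paper derives these values from the Kinect depth image.

```python
import numpy as np

def chest_normal_stats(normals, heading):
    """Mean and variance of the angle between local surface normals and the
    person's heading orientation, over points in the chest region."""
    heading = heading / np.linalg.norm(heading)
    normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    cos_angles = np.clip(normals @ heading, -1.0, 1.0)
    angles = np.arccos(cos_angles)
    return angles.mean(), angles.var()

# Hypothetical chest-region normals (one vector per depth pixel)
# and the person's heading direction towards the camera.
normals = np.random.randn(500, 3)
heading = np.array([0.0, 0.0, -1.0])

mean_angle, var_angle = chest_normal_stats(normals, heading)
# A flatter chest yields a smaller mean and variance of these angles.
```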
IV. MULTI-SCALE ICA TEXTURE PATTERN FOR GENDER RECOGNITION
Fig. 4.1: Multi-scale ICA basis images of sizes 5 × 5 (top row) and 7 × 7 (bottom row) learned from the FERET database.
An encoded image can be obtained with a group of filters at a given scale. To obtain richer information, ICA filters of multiple scales are used to generate the coded images. Each encoded image is divided into non-overlapping sub-regions, the histograms of the sub-regions are concatenated into a compact feature, and the histograms at the various scales are then concatenated to obtain the proposed feature. This extraction process of the MITP is shown in Fig. 4.2.
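A simplified sketch of the multi-scale encoding and histogram concatenation is given below; the random filter banks and the argmax encoding rule are simplifying assumptions for illustration, whereas the paper learns ICA basis images from training faces and uses its own encoding.

```python
import numpy as np
from scipy.signal import convolve2d

def mitp_like_feature(image, filter_banks, grid=(4, 4), bins=16):
    """Concatenate sub-region histograms of images encoded with multi-scale filters."""
    h, w = image.shape
    feature = []
    for filters in filter_banks:                 # one bank per scale (e.g. 5x5, 7x7)
        # Encode the image: index of the maximally responding filter at each pixel
        # (a simplification of the paper's encoding).
        responses = np.stack([convolve2d(image, f, mode="same") for f in filters])
        encoded = responses.argmax(axis=0)
        # Split the encoded image into non-overlapping sub-regions and histogram each.
        for i in range(grid[0]):
            for j in range(grid[1]):
                block = encoded[i * h // grid[0]:(i + 1) * h // grid[0],
                                j * w // grid[1]:(j + 1) * w // grid[1]]
                hist, _ = np.histogram(block, bins=bins, range=(0, len(filters)))
                feature.append(hist)
    return np.concatenate(feature)               # histograms of all scales concatenated

# Placeholder filters of two scales; in the paper these are learned ICA basis images.
banks = [np.random.randn(16, 5, 5), np.random.randn(16, 7, 7)]
face = np.random.rand(64, 64)
print(mitp_like_feature(face, banks).shape)
```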
Fig. 4.2: Whole process of the proposed multi-scale ICA texture pattern (MITP) feature extraction.
The classifier used is the sparse classifier (SC), which is based on the idea that a test sample from any class can be represented as a linear combination of the training samples of the same class.
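A minimal sketch of classification by class-wise reconstruction, in the spirit of the sparse classifier, is shown below; a least-squares fit per class is used here purely to keep the example short, whereas the paper solves a sparse (l1) coding problem.

```python
import numpy as np

def reconstruction_classify(test, train_samples, train_labels):
    """Assign the class whose training samples best reconstruct the test vector.
    (Least-squares per class; the paper uses a sparse l1 solution instead.)"""
    best_label, best_residual = None, np.inf
    for label in np.unique(train_labels):
        A = train_samples[train_labels == label].T    # columns = training samples
        coeffs, *_ = np.linalg.lstsq(A, test, rcond=None)
        residual = np.linalg.norm(test - A @ coeffs)
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label

# Hypothetical texture feature vectors with labels 0 = female, 1 = male.
X = np.random.rand(40, 128)
y = np.random.randint(0, 2, 40)
query = np.random.rand(128)
print(reconstruction_classify(query, X, y))
```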
V. REAL-TIME GENDER RECOGNITION ON FPGA
In this system [4], the following distances between facial features are measured for gender classification:
a) The distance between the centre points of the eyes, calculated by taking the centres of the eyeballs as centroids.
b) The position of the eyes, used to carry out the further analysis.
c) The distance from the centroid of the eyes to two points above the eyes, covering the area of the eyebrows.
d) The distance from below the eyes to the ears, to determine the presence of hair around the ears.
e) The distance from the centroid of the eyes to the nose point, and from the centroid to the jaw, for determining the shape of the jaw.
f) The distance from the centroid to the lower mid-lip, for detecting facial hair.
Figure 3.2: Detection of eyebrows and nose; measurement of the distance from the eye line to the nose.
Figure 3.3: Detection of the mouth and measurement of the distance between the eye line and the lower lip.
C. FPGA Module
The myRIO kit is used for the real-time operation. It is a LabVIEW-programmable device with a customizable Xilinx FPGA and a dual-core ARM Cortex-A9 processor with non-volatile memory, designed to operate in standalone mode. A webcam is connected to the FPGA device over USB 2.0, and a push button and a 16x2 LCD are connected to the device through the Expansion Port (MXP).
REFERENCES
[1]. H. D. Vankayalapati, R. S. Vaddi, L. N. P. Boggavarapu, and K. R. Anne (Department of Information Technology, V R Siddhartha Engineering College, Vijayawada, India), "Extraction of facial features for the real-time human gender classification," Proceedings of ICETECT 2011, IEEE, 2011.
[2]. Ren C. Luo and Xiehao Wu (International Center of Excellence on Intelligent Robotics and Automation Research, National Taiwan University), "Real-time Gender Recognition Based on 3D Human Body Shape for Human-Robot Interaction," HRI '14, Bielefeld, Germany, March 3-6, 2014, ACM.
[3]. M. Wu, J. Zhou, and J. Sun, "Multi-scale ICA texture pattern for gender recognition," Electronics Letters, vol. 48, no. 11, 24 May 2012.
[4]. Aniket Ratnakar and Gaurav More, "Real Time Gender Recognition on FPGA," International Journal of Scientific & Engineering Research, vol. 6, issue 2, February 2015.
[5]. Moeini, A., Faez, K., and Moeini, H., "Expression-invariant 3D face reconstruction from a single image by facial expression generic elastic models," J. Electron. Imaging, 2014, 23, pp. 5-9.
[6]. J. Shotton, T. Sharp, A. Kipman, A. Fitzgibbon, M. Finocchio, A. Blake, M. Cook, and R. Moore. Real-time human pose recognition in parts
from single depth images. Communications of the ACM, 56(1):116–124, 2013.
[7]. Jing Xuan Chen, Jianchu Guo, and Xianju Wu, "3-D Real-Time Image Matching Based on Kinect Skeleton," CCECE, Toronto, Canada, IEEE, 2014.
[8]. Jian-Gang Wang and Wei-Yun Yau, "Real-time beard detection by combining image decolorization and texture detection with applications to facial gender recognition," CIBIM, IEEE Workshop, 16-19 April 2013, pp. 58-65.
[9]. Ramesha K. et al., "Feature Extraction based Face Recognition, Gender and Age Classification," International Journal on Computer Science and Engineering, vol. 02, no. 01S, pp. 14-23, 2010.
[10]. G. Mallikarjuna Rao, G. R. Babu, G. Vijaya Kumari, and N. Krishna Chaitanya, "Methodological Approach for Machine based Expression and Gender Classification," IEEE International Advance Computing Conference, pp. 1369-1374, 6-7 March 2009.
[11]. Erno Makinen and Roope Raisamo, “Evaluation of Gender Classification Methods with Automatically Detected and Aligned Faces,” IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 3, pp. 541-547, March 2008.
[12]. Hui-Cheng Lain and Bao-Liang Lu, “Age Estimation using a Min-Max modular Support Vector Machine,” Twelfth International Conference
on Neural Information Processing, pp. 83-99, November, 2005.
[13]. Patsadu O., Nukoolkit C., and Watanapa B., "Human gesture recognition using Kinect camera," 2012 International Joint Conference on Computer Science and Software Engineering (JCSSE), IEEE, 2012, pp. 28-32.