A Survey On Deep Learning Face Age Estimation Model
Abstract—Face age estimation is a type of study in computer vision and pattern recognition. Designing an age estimation or classification model requires data as training samples for the machine to learn. The deep learning method has improved estimation accuracy, and the number of deep learning age estimation models developed has grown. Furthermore, the availability of numerous datasets is making the method an increasingly attractive approach. However, face age databases mostly have limited ethnic subjects, often only one or two ethnicities, which may result in ethnic bias during age estimation, thus impeding progress in understanding face age estimation. This paper reviewed the available face age databases and deep learning age estimation models, and discussed issues related to ethnicity when estimating age. The review revealed changes in deep learning architectural designs from 2015 to 2020, the frequently used face databases, and the number of different ethnicities considered. Although model performance has improved, the widespread use of a specific few multi-race databases, such as the MORPH and FG-NET databases, suggests that most age estimation studies are biased against non-Caucasian/non-White subjects. There are two primary reasons for face age research's failure to further discover and understand the effects of ethnic traits on a person's facial aging process: the lack of multi-race databases and the exclusion of ethnic traits. Additionally, this study presented a framework for accounting for ethnicity in face age estimation research and several suggestions on collecting and expanding multi-race databases. The given framework and suggestions are also applicable to other secondary factors (e.g. gender) that affect face age progression and may help further improve future face age estimation research.

Keywords—Deep learning; face age estimation; face database; ethnicity bias

I. INTRODUCTION

Facial aging is a complex biological process. Most researchers in the computer vision and pattern recognition fields have already found multiple ways to extract information from the face for age estimation/classification. However, not all extracted information can help the system learn. When the system learns from only a specific ethnic sample, it may not estimate/classify the age of subjects of other ethnicities correctly, even after the face age estimation system is improved.

Earlier face aging models combined extractors and classifiers to extract specific aging features and accurately classify the facial image into its correct age. The downside of this approach is that the data needed for learning are usually structured and quantitatively limited; too little or too much data could lead to models learning incorrect patterns, resulting in inaccurate age classification. Meanwhile, deep learning is another approach that could help algorithms improve the computer's ability to discover common facial aging traits (e.g. aging wrinkles) within vast amounts of data and classify the facial image into its correct age. However, face age databases mostly have limited ethnic subjects, often only one or two ethnicities, which may result in ethnic bias during age estimation, thus impeding progress in understanding face age estimation.

In this study, the review on face age estimation/classification/distribution examined problems regarding:

1) What face databases are frequently used in age estimation studies, and how many different ethnicities are in those databases?
2) What deep learning techniques are used in facial aging research? How did the techniques change through time? And do they account for different ethnicities in their studies?
3) What are the most used deep learning network architectures, and what are their strengths and weaknesses?
4) How can more face images of people of different ethnicities be obtained in a time of restrictions (e.g. due to quarantine)?

Accordingly, this study surveyed the available face age databases, the most used databases in this type of research, and the deep learning techniques used for the face age estimation (or distribution, or classification) model design. More than 50 papers (2015-2020) that used the deep learning method for face age studies were reviewed. The aim of this paper is to survey the different deep learning face age estimation methods and how they account for different ethnicities. By understanding the different deep learning face age estimation methods and the problem of ethnic bias in their face age estimation, we can discover significant racial traits that could help distinguish unique aging patterns used to solve racial face age estimation problems in real-life applications. Moreover, a framework for studying CNN face age estimation while considering the ethnicities of the subjects is included in this paper to help guide future face age estimation studies that use either the deep learning approach or the standard machine learning approach.

The remainder of this paper is structured as follows: Section 2 mentions several related works regarding deep learning and early face age estimation; Section 3 explains human facial aging and the differences in the process between several races; Section 4 surveys the face age image databases that can be used for facial age estimation studies and shows the quantities of
each race in each database (if any); Section 5 explains the face age estimation model and reviews the different deep learning techniques proposed between 2015-2020 as well as the databases used, and also highlights the importance of ethnic traits in age estimation; Section 6 discusses the relevant open issues regarding ethnic characteristics; Section 7 discusses several possible solutions to solve the problems; Section 8 presents the conclusions; and Section 9 mentions the future directions.

II. RELATED WORK

The deep learning model has two primary processes: 1) training and 2) inferring. The training phase is the process of labelling large quantities of data (i.e. identifying and memorising the data's matching characteristics). Meanwhile, in the inferring phase, the deep learning model decides on the label for new data using the knowledge gained from the earlier training phase. Manual feature extraction on the data is unnecessary because the model's neural network architecture can learn the features directly from the data, eliminating the need for hand-crafted feature engineering. This learning capability is advantageous when working on large quantities of unstructured data (multiple formats such as text and pictures). Recently, deep learning, such as the convolutional neural network (CNN), has become well known in the image processing and pattern recognition fields for its capability to 'learn' from a large number of images and perform specific tasks accurately. The deep learning method can fit the parameters of multi-layered networks of nodes to vast amounts of data before extrapolating outputs from new inputs. Knowing the commonly used network designs in face age estimation studies, along with their strengths and weaknesses, is therefore worthwhile.
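To make these two phases concrete, the following minimal sketch (an illustrative PyTorch example, not a model from any of the surveyed papers) trains a tiny CNN on one labelled batch of face crops and then infers an age group for a new image; the 64x64 input size, the eight age groups, and the random stand-in data are assumptions made purely for illustration.

# Minimal sketch of the two deep learning phases: training on labelled face
# images, then inferring the label of a new image. All data here are stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyAgeCNN(nn.Module):
    # Small CNN that learns aging features directly from face pixels.
    def __init__(self, num_age_groups: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_age_groups)

    def forward(self, x):                          # x: (batch, 3, 64, 64) face crops
        return self.classifier(self.features(x).flatten(1))

model = TinyAgeCNN()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

# 1) Training phase: fit the network to a labelled batch (random stand-in data).
images = torch.rand(8, 3, 64, 64)                  # pretend these are preprocessed face crops
age_labels = torch.randint(0, 8, (8,))             # pretend these are age-group labels
loss = F.cross_entropy(model(images), age_labels)
loss.backward()
optimiser.step()

# 2) Inferring phase: predict the age group of a previously unseen face.
model.eval()
with torch.no_grad():
    predicted_group = model(torch.rand(1, 3, 64, 64)).argmax(dim=1)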
Recently, face age estimation studies using the deep learning approach to estimate a person's age based on aging features, such as the facial skull shape and aging wrinkles, have increased. These aging features are a person's regular facial aging changes that occur through the years. Nevertheless, considering ethnicity in age estimation can pose a different problem, since each ethnicity/race has been confirmed to have a different rate of facial aging [1, 2, 3, 4]. For example, a 20-year-old White subject would look older than a 20-year-old Asian because of the differences in their facial bones and skin structures [2]. For the CNN model to learn correctly, large datasets containing multiple races in equal ratios are needed.

Although many face databases are available for age estimation, most are racially biased and have only one or two significant ethnicities. Unbalanced ethnic samples can create problems because age estimation models depend solely on these databases. A bias might occur, for example, when estimating the age of an Asian subject if the majority of the ethnicities available in a database are Caucasian/White, due to the differences in facial structure and rate of skin aging [1, 2]. In most previous face age estimation/classification/distribution studies, all sample databases were used for training and testing while utilising different deep learning methods that match their research aim(s) and main objective(s). However, ethnic traits are usually ignored, resulting in very few analyses of racial traits' effects on the face age estimation process. There are a few reasons for this exclusion: researchers mainly consider racial traits as age-invariant features; capturing a person's face aging progression in a controlled/uncontrolled environment is difficult; and capturing more than 100 face images of people of different ethnicities in equal quantities can be time-consuming and costly. Nonetheless, it is undeniable that the facial aging process differs between races; therefore, ethnicities should be considered in future research when experimenting with the next CNN age estimation model. Moreover, analyses of ethnic age differences can contribute to a better understanding of human facial aging.

III. HUMAN FACIAL AGING – ETHNICITIES

Face features and expressions are fundamental ways of human communication. Many studies have observed facial appearance and examined ways to apply the knowledge to real-world applications. One of these studies is face age estimation, which is research on estimating a person's age based on observations of facial appearance. Over the years, multiple facial traits have been used to help determine a person's age, including the shape of the face, skin texture, skin features, and skin colour contrast [5, 6]. The two predetermined features are as follows: 1) face shape change, particularly the cranium bones that grow with time; this process predominately occurs during the childhood-to-adulthood transition; 2) development of wrinkles or face texture as facial muscles weaken due to decreased elasticity; this process occurs during the transition from adulthood to the senior stage [7, 8].

Fig. 1. Different ethnic facial aging features for four women aged over 60 years old. From left to right: Caucasian, East Asian, Latino/Hispanic, and African (all images were taken from [13]).

Fig. 2. Facial feature and aging difference for an adult Caucasian (top left) and Asian (bottom left); the Caucasian baby's face is on the top right and the Asian baby's on the bottom right (images were taken from [2], except for the Caucasian baby, from [14]).
Internal and external forces act upon the outer and inner skin as a person ages, causing some level of damage and changing the skin's appearance. As demonstrated in [9, 10], older skin was perceived to have a different colour contrast and luminosity than younger skin. Healthy young skin, which is plumper and emits radiant colour, has a smooth and uniformly fine texture that reflects light evenly. Meanwhile, aged skin tends to be rough and dry with more wrinkles, freckles, and age spots, and emits a dull colour [11, 12]. However, ethnicity can affect these aging rates because of differences in skull structure and skin type [1] (see Fig. 1). For instance, the skin of a Caucasian subject will gradually develop more aging wrinkles than that of an Asian subject as age increases from 20 to 39 years old. This phenomenon is due to the different skull and skin structures of each ethnicity. Caucasians have a significantly angular face, while Asian faces tend to be broader and less angular, similar to a baby's broad face [2] (see Fig. 2). Due to this broader facial structure, soft-tissue loss in Asians is seen and felt to a lesser extent. Another example is the difference between Caucasian and African-American skin. Black skin's epidermis contains a thicker stratum corneum with more active fibroblasts than Caucasian skin, making it less affected by photoaging [3, 4]. Although black skin does not tend to get fine lines like white skin, it does fold when getting older. Such information should be considered to design a more accurate age estimation model that can apply proper age estimation/classification knowledge when dealing with specific ethnic subjects.

IV. FACE AGE DATABASE

Designing face age estimation models requires many samples for training and testing. Several studies collected face samples and then made them available to the public so that others might use them in their research. Furthermore, a shared database may serve as a benchmark against which other models can be compared and improved. Table I shows the face databases with age information or labels (henceforth called Face Age Databases). Only two databases captured face images in a controlled environment (MORPH and FACES). In contrast, the rest captured face images in either a partially controlled or uncontrolled environment. Meanwhile, the FG-NET database has the fewest samples and subjects, while the IMDB+WIKI database offers the most samples and subjects.

Table I reveals that most of the subjects in the databases are Caucasian/White, whereas Table II provides the ethnic count. Correspondingly, the ethnic percentage is shown in Fig. 3, which reveals very few databases with non-Caucasian/non-White ethnicities (White = 80%; Black = 3%; Asian = 8%; and Others = 9%). This gap creates an imbalance in the databases when ethnicity is considered to estimate the age of non-Caucasian/non-White races. Moreover, not all databases have ethnic information (e.g. IMDB+WIKI, FERET, and Webface). The lack of ethnic labels can make it difficult for face age model researchers to divide samples into their appropriate ethnicities, which is eventually treated as one of their research limitations.
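As a simple illustration of the imbalance check discussed above, the sketch below tallies per-image ethnicity labels and prints each group's share; the label list is hypothetical, and, as noted, many databases do not ship such metadata at all.

# Sketch: compute the ethnic share of a face age database from per-image labels.
# The example labels are hypothetical; real databases may lack this metadata entirely.
from collections import Counter

labels = ["White", "White", "Asian", "Black", "White", "Others"]  # one label per image

counts = Counter(labels)
total = sum(counts.values())
for ethnicity, count in counts.most_common():
    print(f"{ethnicity}: {count / total:.1%}")   # e.g. White: 50.0%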
Therefore, it can be inferred that most of these databases can be used in all deep learning areas: EA, CA, and DA.
TABLE III. DEEP LEARNING FACE AGE RESEARCH AREA AND AGE DATABASES USED FOR TRAINING AND TESTING (FROM 2015-2020)
TABLE IV. SUMMARY OF BEST CNN MODEL PERFORMANCE ON SELECTED DATABASES (FROM 2015-2020)
C. Deep Learning Technique Strengths and Weaknesses

A review of the different deep learning architectural networks used in previous studies revealed several techniques that are frequently used for face age estimation. Table V summarises the network architectures frequently used in the age estimation studies reviewed, as well as their strengths and weaknesses. As previously stated, the main goal of deep learning face age estimation is to find the best method for learning the face aging features from a large sample of data and then use the information to distinguish the different ages of test subjects. Each study's architecture was chosen based on its research aim and objectives, such as the problem(s) to solve that can help improve face age classification/estimation/distribution. The problems include face detection, landmark localisation, optimisation, regression, classification, feature extraction, residual learning, sampling technique, layer size (depth and width), discriminative distance, learning speed, training and/or testing process, and others. This study identified several known network architectures that were frequently used in comparison to the others [93]. Among these network architectures are the following:
TABLE V. SUMMARY OF NETWORK ARCHITECTURES MOSTLY USED BY AGE ESTIMATION STUDIES IN THIS SURVEY
Architecture | Background (referred from [93]) | Learning Methodology | Strength | Weakness | Author(s) that Used the Architecture
LeNet | Invented in 1998 by Yann Lecun; first popular CNN architecture. | Spatial exploitation | Small and simple design; a good introduction to neural networks for beginners. | Problems in detecting all aging features require extensive training; speed and accuracy are outperformed by newer network architectures. | [45]
AlexNet | Introduced in 2012 at the ImageNet Large Scale Visual Recognition Challenge; uses ReLU, dropout and overlap pooling; first major CNN model that used GPUs for training. | Spatial exploitation | Using GPUs for training leads to faster training of models; ReLU helps lessen the loss of features and improves model training speed. | Authors need to find design solutions to compete with other, newer network architectures that are more accurate and faster. | [54, 82]
VGG-Net | Visual Geometry Group (VGG) network introduced in 2014; groups multiple convolution layers with smaller kernel sizes. | Spatial exploitation | Homogenous topology; smaller kernels; good architecture for benchmarking face age estimation; pre-trained networks for VGG-Net are freely available. | Computationally expensive as more layers are added; face age estimation studies need to consider the vanishing gradient problem to improve the estimation performance. | [35, 39, 42, 49, 51, 54, 55, 57, 58, 65, 73]
GoogleNet | Introduced by researchers at Google in 2014; introduced the block concept and the split-transform-merge idea; in a single layer, multiple types of 'feature extractors' are present to help the network perform better. | Spatial exploitation | Trains faster than VGG-Net; smaller pre-trained size than VGG-Net; training network has many options to solve tasks. | Heterogeneous topology design requires face age estimation studies to make thorough customisation from module to module. | [38, 41, 43]
ResNet | Introduced in 2015; residual learning; identity mapping-based skip connections. | Depth + multi-path | Capable of skipping learned feature(s), reducing training time and improving accuracy; solves the vanishing gradient problem faced by VGG-Net; possible to train very deep networks that generalise well. | Computationally expensive as more layers are added. | [69, 71]
Novel Arch. | Most designs were expanded, modified, or built from scratch based on previously available architectures (e.g. AlexNet, VGG-Net, etc.). | - | Specialises in learning face representation for different ages; improves several parts of the network based on the study's aim and objectives. | Caters to a very specific problem(s); time-consuming when building from scratch. | [42, 43, 44, 46, 47, 48, 50, 52, 53, 54, 56, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89]
1) LeNet: Yann Lecun invented the LeNet architecture in 1998 to perform optical character recognition (OCR), and its design is smaller and simpler than the rest of the network architectures. For beginners, this network is a good way to learn neural networks, and it can be used for face age estimation studies, such as in [45]. However, due to its simple design, the network requires additional improvements that the designer must build from scratch if used for face age estimation. It is also outclassed by newer models in terms of speed and accuracy when used as is, with no modifications.

2) AlexNet: Alex Krizhevsky introduced the AlexNet architecture in 2012, and it was the first major CNN model to use graphics processing units (GPUs) for training, which aided training speed. Meanwhile, ReLU, dropout, and overlap pooling were used to reduce feature loss and improve training speed. This architecture design was used in [54] and [82] for face age classification and estimation, respectively. Their accuracy performance, however, was inferior to that of the model that used the LeNet network design [45] (see Table IV). This implies that, even though AlexNet is a newer network than LeNet, proper modification, structuring, and organisation of the AlexNet network are still required to achieve the best face age estimation (or classification) performance.

3) VGG-Net: Introduced in 2014, the VGG model improves training accuracy by improving its depth structure. The addition of more layers with smaller kernels increases nonlinearity, which is good for deep learning. This study discovered that VGG-Net is the most commonly used network model among the many available (11 papers). One possible explanation is that the VGG pre-trained networks are freely available online. Although it is the best architecture for benchmarking on the face age estimation task, the performance obtained by studies that used this model is not the best, but it is also not the worst. This could be due to the vanishing gradient problem, one of the main challenges faced when using VGG-Net, which occurs when the number of layers exceeds 20, causing the model to fail to converge to the minimum error percentage. When this happens, the learning rate slows to the point where no changes are made to the model's weights. Furthermore, using VGG-Net can be time-consuming because the training process can exceed a week, especially if it is built from scratch. As a result, when using the VGG-Net network for face age estimation, users must address the vanishing gradient problem as well as the training time.
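Since the free availability of pre-trained weights is one reason for VGG-Net's popularity, the sketch below shows one common way to reuse a torchvision VGG-16 backbone and retrain only a new age head, which also avoids training the deep stack from scratch; the 101 one-year age classes, the frozen backbone, and the random stand-in batch are illustrative assumptions, not a training recipe prescribed by the surveyed studies.

# Sketch: reuse a pre-trained VGG-16 backbone and fine-tune only a new age head.
# The 101 age classes and the frozen convolutional layers are illustrative choices.
import torch
import torch.nn as nn
from torchvision import models

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)  # freely available weights
for param in vgg.features.parameters():
    param.requires_grad = False                 # keep the pre-trained convolutional filters

vgg.classifier[6] = nn.Linear(4096, 101)        # replace the final layer with 101 age classes

optimiser = torch.optim.Adam(vgg.classifier[6].parameters(), lr=1e-4)
images = torch.rand(4, 3, 224, 224)             # VGG-16 expects 224x224 RGB crops (stand-in data)
ages = torch.randint(0, 101, (4,))
loss = nn.functional.cross_entropy(vgg(images), ages)
loss.backward()
optimiser.step()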
4) GoogleNet: A class of architecture designed by Google researchers that won ImageNet 2014. Instead of a sequential architecture design, GoogleNet opted for a split-transform-and-merge design, in which a single layer can have multiple types of "feature extractors". In addition, GoogleNet has a smaller pre-trained size and trains faster than VGG-Net [93]. One drawback of GoogleNet is that almost every module must be customised. As a result, when designing a face age estimation model using GoogleNet, users must customise from module to module. This study discovered that only [38, 40, 41] used this network architecture.

5) ResNet: ResNet was introduced in 2015 and provides residual learning to help solve the vanishing gradient problem (faced by the VGG-Net architecture). Furthermore, ResNet can have a deeper network (more layers) than VGG-Net while avoiding performance degradation. ResNet is built on the concept that if a feature has already been learned, it can be skipped and focus can be given to newer features, thereby improving training time and accuracy. On the other hand, the ResNet structure design is primarily concerned with how deep the structure should be. If ResNet is chosen for face age estimation, the designer must consider how the network should be structured to learn multiple aging features. Adding more layers is one of the common ideas; however, this could result in a longer learning time for the model (it can take several weeks), so the designer must also account for this. This study discovered that only a few face age estimation studies used the ResNet architecture or concept in their design [69, 71].

6) New Arch: a network architecture created by expanding previous architectures, modifying them, or building the network from scratch. These architectures were created specifically to find the best network approach for learning how to best estimate age. For example, a facial image with a specific age can be affected by facial variations caused by external factors, such as lighting, which can lead to a neighbouring age category being predicted as the final bias. The study in [80] attempted to address this problem by proposing a network composed of a generator that could generate discriminative hard examples (taken from features extracted by a deep CNN) to complement the training space for robust feature learning, and a discriminator that could determine the authenticity of the generated sample using a pre-trained age ranker [80]. This approach offers designers the 'freedom' to create the best solution to a given problem. The designs can be based on available networks and further modified to their preferences, rather than being limited to the original design architecture. This study found that most of the previous studies, particularly those conducted in 2020, tend to propose their own architectural network design. However, one major drawback of this design approach is that the designer may take a long time to modify/create networks when compared to using available networks.

D. Model Performance Evaluation

Multiple protocols and performance calculations were used in the studies to evaluate model performance. Table IV shows the performance of the CNN models used in the studies on the databases that they were tested on. The evaluation protocol is a method for studies to determine the optimal number of training and testing datasets for their chosen databases. Meanwhile, the performance calculation allows studies to compare the estimation/classification/distribution accuracy of their own model to that of others. Because of the numerous ways of designing protocols and performance calculations, problems
arise when performance is compared on the same database but using different evaluation methods. This has resulted in a unanimous 'agreement' among most of the studies that specific performance calculation(s) should be used for comparison's sake on a specific database. Among the performance calculations used to evaluate the accuracy of face age deep learning models are the following (a small computation sketch is given after the list):

1) Mean absolute error (MAE): a widely used performance evaluation for age estimation studies that measures the error between the predicted and actual ages. MORPH, FG-NET, ChaLearn2015, and ChaLearn2016 are examples of databases that used this evaluation method. The model performance improves as the MAE value decreases.
2) e-Error: the performance calculation used in apparent age estimation. This evaluation metric was used to compare the performance of studies that used the ChaLearn2015 [32] and ChaLearn2016 [33] datasets. The lower the e-Error, the better the performance.
3) Accuracy of an exact match (AEM): a method of calculating accuracy as the percentage of correctly estimated/classified ages over the total number of test images used. This type of evaluation metric was used by the Adience database. The higher the AEM value, the better the performance. Some studies went so far as to include the standard deviation value in their evaluation.
4) Accuracy error of one age category (AEO): another type of evaluation metric used on the Adience database, in which errors of one age group are also counted as correct age classifications. The higher the AEO value obtained, the better the overall model performance.
5) Cumulative score (CS): defined as the percentage of images with an error of no more than a certain number of years. The evaluation is usually shown as a curve on a graph (not depicted in this paper), with the x-axis representing the error level in years and the y-axis representing the cumulative score (as a percentage). This type of evaluation was sometimes combined with the MAE evaluation method in studies that used MORPH, FG-NET, and other earlier-year databases. Meanwhile, studies that used the MegaAge-Asian database present some of their results in terms of CA(θ), where θ is the allowable age error corresponding to the cumulative accuracy; several of these are shown in Table IV.
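The sketch below makes the MAE and CS calculations above concrete on a handful of made-up predictions; the θ = 5 tolerance, the sample ages, and the ChaLearn-style e-error line are included only as illustrative assumptions rather than values from the surveyed studies.

# Sketch: MAE and cumulative score (CS) for a set of hypothetical age predictions.
import numpy as np

true_ages = np.array([23, 35, 41, 60, 18])        # ground-truth ages (illustrative values)
pred_ages = np.array([25, 31, 45, 58, 26])        # model outputs (illustrative values)

errors = np.abs(pred_ages - true_ages)
mae = errors.mean()                               # lower is better
cs_at_5 = (errors <= 5).mean() * 100              # CS(theta=5): % of images within 5 years
print(f"MAE = {mae:.2f} years, CS(5) = {cs_at_5:.0f}%")

# Apparent-age e-error (ChaLearn-style formula, shown here only as an illustration):
# each image carries the mean and std of its human-annotated apparent ages.
mu, sigma, prediction = 30.0, 4.0, 33.0
e_error = 1 - np.exp(-((prediction - mu) ** 2) / (2 * sigma ** 2))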
Because the studies reviewed from 2015-2020 (see Table IV) used different databases, analysing and comparing their performance progress was difficult. Therefore, only the most frequently used databases were chosen and averaged to create a line chart depicting the performance progress of face age research from 2015-2020. Fig. 5 illustrates the average yearly performance for two different databases: MORPH and FG-NET. As shown in Fig. 5, the MAE values for the MORPH database decreased from 2015-2020, but not for FG-NET. The chart may imply that models applied to the MORPH database improved over the six years, whereas those applied to FG-NET did not. Table VI shows the average MAE and its standard deviation for each year; the improvement might be valid for MORPH since most of the standard deviations obtained are low (< 0.3). However, the implication for FG-NET may be invalid because only a few studies used this database in 2015-2016, and most of the standard deviations for 2017-2020 are high (> 0.4), meaning that the MAE results obtained by the different studies are too far apart. Among the databases, the performance on the MORPH database appears to be the best. The samples captured in a controlled environment help the models to better identify aging features because unwanted factors (e.g. occlusion) are absent. Meanwhile, the low quantity (1,002 images) and low quality (old images captured in an uncontrolled environment) of the FG-NET samples might hinder the CNN model learning process in the studies. Nonetheless, some studies were able to obtain low MAE values using the FG-NET database: [62] with MAE = 2.00 and [89] with MAE = 2.71.

Regarding publishers, from 2015-2020 (see Fig. 6), IEEE is the publisher with the most reviewed papers in this study, Elsevier is in second place, and Springer is in third. The bar chart in Fig. 6 shows that the number of published papers increased in 2016, but then declined until 2018 and remained relatively low until 2020. The figure seems to imply that the deep learning approach is becoming less attractive to the face age research community, but this is most likely not the case. When a more robust, advanced, and practical deep learning technique becomes available, a resurgence may occur.

Fig. 5. Line chart showing face age research's performance progress from 2015-2020 for MORPH and FG-NET (based on Table VI).
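The yearly averages plotted in Fig. 5 and reported in Table VI amount to a simple grouping of reported MAE values by publication year; a minimal sketch with placeholder numbers (not the surveyed results) is given below.

# Sketch: average MAE and standard deviation per publication year (placeholder values).
from statistics import mean, stdev

reported = [(2015, 3.8), (2015, 3.5), (2016, 3.1), (2016, 2.9), (2017, 2.8)]  # (year, MAE)

for year in sorted({year for year, _ in reported}):
    values = [mae for y, mae in reported if y == year]
    spread = stdev(values) if len(values) > 1 else 0.0
    print(f"{year}: mean MAE = {mean(values):.2f}, std = {spread:.2f}")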
environment). Researchers must also decide whether to capture a single face image or multiple faces at once. However, the size and quality of the faces in the video may differ between users. Therefore, this should be taken into account when trying to use this approach to collect samples from volunteers, who can be co-workers or students (if the researcher is also an educator). Moreover, additional information about volunteers, such as their age and ethnicity, can be directly requested and recorded for research purposes. Microsoft Teams, Zoom, and Google Meet are some of the communication platforms available for use. Fig. 8 shows an example of face images captured using Microsoft Teams (single face or multiple faces).

When collecting samples, it is likely that some people will not be willing to help or give any personal information. Therefore, proper planning of target subject selection before collecting their face images is required.

Fig. 8. Samples of face images captured using Microsoft Teams (images taken from [97]).
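One lightweight way to turn such captured meeting frames into usable samples is sketched below using OpenCV's stock frontal-face detector; the file names, the recorded metadata values, and the one-crop-per-face convention are illustrative assumptions rather than a protocol from the reviewed studies.

# Sketch: crop faces from a captured video-call screenshot and record volunteer metadata.
# File names and the metadata values are illustrative assumptions.
import csv
import cv2

frame = cv2.imread("teams_screenshot.png")                      # captured meeting frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

with open("samples.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for i, (x, y, w, h) in enumerate(faces):
        crop_name = f"volunteer_{i}.png"
        cv2.imwrite(crop_name, frame[y:y + h, x:x + w])         # one crop per detected face
        writer.writerow([crop_name, 27, "Asian"])               # self-reported age, ethnicity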
It would be interesting to develop the suggested model framework with different ethnic races for face age recognition. Significant racial traits might be discovered, which can further distinguish the aging processes between different ethnic people. This discovery could further improve the understanding of racial aging traits, particularly concerning the face and the development of a model that can learn and identify those traits. Additionally, using the suggested sample collection method to collect and capture one's own samples may help ease the collection process. Aside from face age studies, the collected face images/samples can also be used for other facial image studies, such as emotion recognition and ethnic recognition. These suggestions, however, are beyond the scope of this study and will be considered in future research.

VIII. CONCLUSION

The analysis in this paper focused on ethnic consideration in the datasets used over the last six years for accurate age estimation using the deep learning approach. This paper specifically
analysed 53 papers on deep learning face age estimation, model
performance, selected databases, and whether or not any face
ethnicity traits analysis was performed when estimating age.
This paper also highlighted 19 database papers that promote the
use of publicly available databases for age estimation research,
as well as information on multiple database ethnicities.
Although the deep learning approach improves face age
estimation over time, it can be further enhanced by
understanding how ethnicity affects face age estimation and
designing an evaluation protocol that takes the subjects’ ethnic
traits into account. Moreover, a sizeable multi-racial database is
needed for the investigation of aging in different ethnic groups.
Therefore, it is crucial to collect the necessary information to
create an extensive database with well-distributed age and
ethnic labels. Suggestions for capturing samples were also
provided to help researchers in increasing their ethnic-specific
samples for private or public use.
IX. FUTURE DIRECTION
Making the collected ethnic-specific samples public and
sharing them via web image collection sites can increase
interest in conducting more ethnicity-based face age estimation
research. More robust deep learning face age estimation models
can be developed by performing more such studies, sample
collection, and analyses in the future. Future research could also
discover significant racial traits that could help distinguish
unique aging patterns used to solve racial face age estimation
problems in real-life applications. Proper planning and key
considerations must be made when collecting samples, such as
ensuring personal data privacy or a subject’s consent.
Additionally, it would be good to reiterate the benefit of having
more samples for studies beyond facial age recognition.
ACKNOWLEDGMENT
The authors are grateful to the Faculty of Information
Science and Technology, The National University of Malaysia, for supporting and contributing to this study under grant code GGPM-2019-038.
REFERENCES
[1] N. A. Vashi, M. B. D. C. Maymone, & R. V. Kundu, "Aging differences in ethnic skin," The Journal of Clinical and Aesthetic Dermatology, vol. 9, no. 1, pp. 31, 2016.
[2] Y. Shirakabe, Y. Suzuki, & S. M. Lam, "A new paradigm for the aging Asian face," Aesthetic Plastic Surgery, vol. 27, no. 5, pp. 397-402, 2003.
[3] A. E. Brissett, & M. C. Naylor, "The aging african-american face," Facial Plastic Surgery, vol. 26, no. 2, pp. 154-163, 2010.
[4] M. O. Harris, "The aging face in patients of color: Minimally invasive surgical facial rejuvenation—A targeted approach," Dermatologic Therapy, vol. 17, no. 2, pp. 206-211, 2004.
[5] M. G. Rhodes, "Age estimation of faces: A review," Applied Cognitive Psychology: The Official Journal of the Society for Applied Research in Memory and Cognition, vol. 23, no. 1, pp. 1-12, 2009.
[6] M. S. Zimbler, M. S. Kokoska, & J. R. Thomas, "Anatomy and pathophysiology of facial aging," Facial Plastic Surgery Clinics of North America, vol. 9, no. 2, pp. 179-187, 2001.
[7] N. Ramanathan, R. Chellappa, & S. Biswas, "Age progression in human faces: A survey," Journal of Visual Languages and Computing, vol. 15, pp. 3349-3361, 2009.
[8] G. Y. Guo, & T. S. Huang, "Age synthesis and estimation via faces: A survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 11, pp. 1955-1976, 2010.
[9] R. Russell, et al., "Facial contrast is a cue for perceiving health from the face," Journal of Experimental Psychology: Human Perception and Performance, vol. 42, no. 9, pp. 1354, 2016.
[10] C. Trojahn, G. Dobos, A. Lichterfeld, U. Blume-Peytavi, & J. Kottner, "Characterizing facial skin aging in humans: disentangling extrinsic from intrinsic biological phenomena," BioMed Research International, 2015.
[11] M. S. Zimbler, M. S. Kokoska, & J. R. Thomas, "Anatomy and pathophysiology of facial aging," Facial Plastic Surgery Clinics of North America, vol. 9, no. 2, pp. 179-187, 2001.
[12] T. Igarashi, K. Nishino, & S. K. Nayar, "The appearance of human skin: A survey," Foundations and Trends in Computer Graphics and Vision, vol. 3, no. 1, pp. 1-95, 2007.
[13] N. A. Vashi, M. B. D. C. Maymone, & R. V. Kundu, "Aging differences in ethnic skin," The Journal of Clinical and Aesthetic Dermatology, vol. 9, no. 1, pp. 31, 2016.
[14] Stock photo caucasian baby. 2021. Crello. [online] Available at: <https://crello.com/unlimited/stock-photos/166742418/stock-photo-caucasian-baby/> [Accessed 22 July 2021].
[15] A. Bastanfard, M. A. Nik, & M. M. Dehshibi, "Iranian face database with age, pose and expression," in Machine Vision, pp. 50-55, 2007.
[16] S. Setty, et al., "Indian movie face database: a benchmark for face recognition under wide variations," in 2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG), pp. 1-5, December, 2013.
[17] Z. Niu, M. Zhou, L. Wang, X. Gao, & G. Hua, "Ordinal regression with multiple output cnn for age estimation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4920-4928, 2016.
[18] Y. H. Kwon & N. da Vitoria Lobo, "Age classification from facial images," Computer Vision and Image Understanding, vol. 74, no. 1, pp. 1-21, 1999.
[19] R. Angulu, J. R. Tapamo, & A. O. Adewumi, "Age estimation via face images: a survey," EURASIP Journal on Image and Video Processing, 2018, no. 1, pp. 42, 2018.
[20] O. F. Osman, & M. H. Yap, "Computational intelligence in automatic face age estimation: A survey," IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 3, no. 3, pp. 271-285, 2018.
[21] P. J. Phillips, H. Wechsler, J. Huang, & P. J. Rauss, "The FERET database and evaluation procedure for face-recognition algorithms," Image and Vision Computing, vol. 16, no. 5, pp. 295-306, 1998.
[22] A. Lanitis, C. J. Taylor, & T. F. Cootes, "Toward automatic simulation of aging effects on face images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 4, pp. 442-455, 2002.
[23] M. Minear, & D. C. Park, "A lifespan database of adult facial stimuli," Behavior Research Methods, Instruments, & Computers, vol. 36, no. 4, pp. 630-633, 2004.
[24] P. J. Phillips, et al., "Overview of the face recognition grand challenge," in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 1, pp. 947-954, June, 2005.
[25] K. Ricanek, & T. Tesafaye, "Morph: A longitudinal image database of normal adult age-progression," in IEEE 7th International Conference on Automatic Face and Gesture Recognition (FGR06), pp. 341-345, April, 2006.
[26] Y. Fu & T. S. Huang, "Human age estimation with regression on discriminative aging manifold," IEEE Transactions on Multimedia, vol. 10, no. 4, pp. 578-584, 2008.
[27] A. C. Gallagher, & T. Chen, "Understanding images of groups of people," in 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 256-263, June, 2009.
[28] N. C. Ebner, M. Riediger, & U. Lindenberger, "FACES—A database of facial expressions in young, middle-aged, and older women and men: Development and validation," Behavior Research Methods, vol. 42, no. 1, pp. 351-362, 2010.
[29] S. Zheng, "Visual image recognition system with object-level image representation," Doctoral dissertation, 2012.
[30] E. Eidinger, R. Enbar, & T. Hassner, "Age and gender estimation of unfiltered faces," IEEE Transactions on Information Forensics and Security, vol. 9, no. 12, pp. 2170-2179, 2014.
[31] B. C. Chen, C. S. Chen, & W. H. Hsu, "Cross-age reference coding for age-invariant face recognition and retrieval," in European Conference on Computer Vision, Springer, Cham, pp. 768-783, September, 2014.
[32] S. Escalera, et al., "Chalearn looking at people 2015: Apparent age and cultural event recognition datasets and results," in Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 1-9, 2015.
[33] S. Escalera, et al., "Chalearn looking at people and faces of the world: Face analysis workshop and challenge 2016," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 1-8, 2016.
[34] S. Moschoglou, A. Papaioannou, C. Sagonas, J. Deng, I. Kotsia, & S. Zafeiriou, "Agedb: the first manually collected, in-the-wild age database," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 51-59, 2017.
[35] R. Rothe, R. Timofte, & L. Van Gool, "Deep expectation of real and apparent age from a single image without facial landmarks," International Journal of Computer Vision, vol. 126, no. 2-4, pp. 144-157, 2018.
[36] E. Agustsson, et al., "Apparent and real age estimation in still images with deep residual regressors on appareal database," in 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), pp. 87-94, May, 2017.
[37] V. Carletti, A. Greco, G. Percannella, & M. Vento, "Age from faces in the deep learning revolution," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.
[38] X. Liu, et al., "Agenet: Deeply learned regressor and classifier for robust apparent age estimation," in Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 16-24, 2015.
[39] R. Rothe, R. Timofte, & L. Van Gool, "Dex: Deep expectation of apparent age from a single image," in Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 10-15, 2015.
[40] Y. Zhu, Y. Li, G. Mu, & G. Guo, "A study on apparent age estimation," in Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 25-31, 2015.
[41] Z. Kuang, C. Huang, & W. Zhang, "Deeply learned rich coding for cross-dataset facial age estimation," in Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 96-101, 2015.
[42] X. Yang, et al., "Deep label distribution learning for apparent age estimation," in Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 102-108, 2015.
[43] X. Wang, R. Guo, & C. Kambhamettu, "Deeply-learned feature for age estimation," in 2015 IEEE Winter Conference on Applications of Computer Vision, pp. 534-541, January, 2015.
[44] S. Li, J. Xing, Z. Niu, S. Shan, & S. Yan, "Shape driven kernel adaptation in convolutional neural network for robust facial traits recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 222-230, 2015.
[45] I. Huerta, C. Fernández, C. Segura, J. Hernando, & A. Prati, "A deep analysis on age estimation," Pattern Recognition Letters, vol. 68, pp. 239-249, 2015.
[46] G. Levi, & T. Hassner, "Age and gender classification using convolutional neural networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 34-42, 2015.
[47] R. Ranjan, S. Zhou, J. Cheng Chen, A. Kumar, A. Alavi, V. M. Patel, & R. Chellappa, "Unconstrained age estimation with deep convolutional neural networks," in Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 109-117, 2015.
[48] J. C. Chen, A. Kumar, R. Ranjan, V. M. Patel, A. Alavi, & R. Chellappa, "A cascaded convolutional neural network for age estimation of unconstrained faces," in 2016 IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS), pp. 1-8, September, 2016.
[49] F. Gurpinar, H. Kaya, H. Dibeklioglu, & A. Salah, "Kernel ELM and CNN based facial age estimation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 80-86, 2016.
[50] Z. Niu, M. Zhou, L. Wang, X. Gao, & G. Hua, "Ordinal regression with multiple output cnn for age estimation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4920-4928, 2016.
[51] R. Rothe, R. Timofte, & L. Van Gool, "Some like it hot-visual guidance for preference prediction," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5553-5561, 2016.
[52] Y. Yang, F. Chen, X. Chen, Y. Dai, Z. Chen, J. Ji, & T. Zhao, "Video system for human attribute analysis using compact convolutional neural network," in 2016 IEEE International Conference on Image Processing (ICIP), pp. 584-588, September, 2016.
[53] Y. Dong, Y. Liu, & S. Lian, "Automatic age estimation based on deep learning algorithm," Neurocomputing, vol. 187, pp. 4-10, 2016.
[54] G. Ozbulak, Y. Aytar, & H. K. Ekenel, "How transferable are CNN-based features for age and gender classification?" in 2016 International Conference of the Biometrics Special Interest Group (BIOSIG), pp. 1-6, September, 2016.
[55] G. Antipov, M. Baccouche, S. A. Berrani, & J. L. Dugelay, "Apparent age estimation from face images combining general and children-specialized deep learning models," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 96-104, 2016.
[56] Z. Huo, X. Yang, C. Xing, Y. Zhou, P. Hou, J. Lv, & X. Geng, "Deep age distribution learning for apparent age estimation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 17-24, 2016.
[57] M. Uricár, R. Timofte, R. Rothe, J. Matas, & L. Van Gool, "Structured output svm prediction of apparent age, gender and smile from deep features," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 25-33, 2016.
[58] R. Can Malli, M. Aygun, & H. Kemal Ekenel, "Apparent age estimation using ensemble of deep learning models," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 9-16, 2016.
[59] B. Hebda, & T. Kryjak, "A compact deep convolutional neural network architecture for video based age and gender estimation," in 2016 Federated Conference on Computer Science and Information Systems (FedCSIS), pp. 787-790, September, 2016.
[60] Z. Hu, Y. Wen, J. Wang, M. Wang, R. Hong, & S. Yan, "Facial age estimation with age difference," IEEE Transactions on Image Processing, vol. 26, no. 7, pp. 3087-3097, 2016.
[61] Z. Tan, J. Wan, Z. Lei, R. Zhi, G. Guo, & S. Z. Li, "Efficient group-n encoding and decoding for facial age estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 11, pp. 2610-2623, 2017.
[62] R. Ranjan, S. Sankaranarayanan, C. D. Castillo, & R. Chellappa, "An all-in-one convolutional neural network for face analysis," in 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), pp. 17-24, May, 2017.
[63] H. Liu, J. Lu, J. Feng, & J. Zhou, "Ordinal deep learning for facial age estimation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 29, no. 2, pp. 486-501, 2017.
[64] H. Liu, J. Lu, J. Feng, & J. Zhou, "Group-aware deep feature learning for facial age estimation," Pattern Recognition, vol. 66, pp. 82-94, 2017.
[65] G. Antipov, M. Baccouche, S. A. Berrani, & J. L. Dugelay, "Effective training of convolutional neural networks for face-based gender and age prediction," Pattern Recognition, vol. 72, pp. 15-26, 2017.
[66] J. Xing, K. Li, W. Hu, C. Yuan, & H. Ling, "Diagnosing deep learning models for high accuracy age estimation from a single image," Pattern Recognition, vol. 66, pp. 106-116, 2017.
[67] K. Li, J. Xing, W. Hu, & S. J. Maybank, "D2C: Deep cumulatively and comparatively learning for human age estimation," Pattern Recognition, vol. 66, pp. 95-105, 2017.
[68] L. Hou, D. Samaras, T. M. Kurc, Y. Gao, & J. H. Saltz, "Convnets with smooth adaptive activation functions for regression," Proceedings of Machine Learning Research, vol. 54, pp. 430, 2017.
[69] K. Zhang, et al., "Age group and gender estimation in the wild with deep ror architecture," IEEE Access, vol. 5, pp. 22492-22503, 2017.
[70] F. Wang, H. Han, S. Shan, & X. Chen, "Deep multitask learning for joint prediction of heterogeneous face attributes," in 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), pp. 173-179, May, 2017.
[71] H. Liu, J. Lu, J. Feng, & J. Zhou, "Label-sensitive deep metric learning for facial age estimation," IEEE Transactions on Information Forensics and Security, vol. 13, no. 2, pp. 292-305, 2017.
[72] H. Han, A. K. Jain, F. Wang, S. Shan, & X. Chen, "Heterogeneous face attribute estimation: A deep multitask learning approach," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 11, pp. 2597-2609, 2017.
[73] J. Wan, Z. Tan, Z. Lei, G. Guo, & S. Z. Li, "Auxiliary demographic information assisted age estimation with cascaded structure," IEEE Transactions on Cybernetics, vol. 48, no. 9, pp. 2531-2541, 2018.
[74] M. Duan, K. Li, & K. Li, "An Ensemble CNN2ELM for Age Estimation," IEEE Transactions on Information Forensics and Security, vol. 13, no. 3, 2018.
[75] H. F. Yang, B. Y. Lin, K. Y. Chang, & C. S. Chen, "Joint estimation of age and expression by combining scattering and convolutional networks," ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), vol. 14, no. 1, pp. 9, 2018.
[76] B. Yoo, Y. Kwak, Y. Kim, C. Choi, & J. Kim, "Deep facial age estimation using conditional multitask learning with weak label expansion," IEEE Signal Processing Letters, vol. 25, no. 6, pp. 808-812, 2018.
[77] S. Taheri, & Ö. Toygar, "On the use of DAG-CNN architecture for age estimation with multi-stage features fusion," Neurocomputing, vol. 329, pp. 300-310, 2019.
[78] W. Im, S. Hong, S. E. Yoon, & H. S. Yang, "Scale-Varying Triplet Ranking with Classification Loss for Facial Age Estimation," in Computer Vision – ACCV 2018, Lecture Notes in Computer Science, vol. 11365, Springer, Cham, 2018.
[79] O. Sendik, and Y. Keller, "DeepAge: Deep Learning of face-based age estimation," Signal Processing: Image Communication, vol. 78, pp. 368-375, 2019.
[80] S. Penghui, L. Hao, W. Xin, Y. Zhenhua, & S. Wu, "Similarity-aware deep adversarial learning for facial age estimation," in 2019 IEEE International Conference on Multimedia and Expo (ICME), pp. 260-265, July, 2019.
[81] C. Miron, V. Manta, R. Timofte, A. Pasarica, & R. I. Ciucu, "Efficient convolutional neural network for apparent age prediction," in 2019 IEEE 15th International Conference on Intelligent Computer Communication and Processing (ICCP), pp. 259-262, September, 2019.
[82] N. Savov, M. Ngo, S. Karaoglu, H. Dibeklioglu, & T. Gevers, "Pose and Expression Robust Age Estimation via 3D Face Reconstruction from a Single Image," in Proceedings of the IEEE International Conference on Computer Vision Workshops, 2019.
[83] H. Liu, J. Lu, J. Feng, & J. Zhou, "Ordinal Deep Learning for Facial Age Estimation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 29, no. 2, pp. 486-501, 2019.
[84] H. Liu, P. Sun, J. Zhang, S. Wu, Z. Yu, & X. Sun, "Similarity-Aware and Variational Deep Adversarial Learning for Robust Facial Age Estimation," IEEE Transactions on Multimedia, 2020.
[85] P. Li, Y. Hu, X. Wu, R. He, & Z. Sun, "Deep label refinement for age estimation," Pattern Recognition, vol. 100, p. 107178, 2020.
[86] N. Liu, F. Zhang, & F. Duan, "Facial Age Estimation Using a Multitask Network Combining Classification and Regression," IEEE Access, vol. 8, pp. 92441-92451, 2020.
[87] X. Liu, Y. Zou, H. Kuang, & X. Ma, "Face Image Age Estimation Based on Data Augmentation and Lightweight Convolutional Neural Network," Symmetry, vol. 12, no. 1, pp. 146, 2020.
[88] J. C. Xie, & C. M. Pun, "Deep and Ordinal Ensemble Learning for Human Age Estimation From Facial Images," IEEE Transactions on Information Forensics and Security, vol. 15, pp. 2361-2374, 2020.
[89] M. Xia, X. Zhang, L. Weng, & Y. Xu, "Multi-Stage Feature Constraints Learning for Age Estimation," IEEE Transactions on Information Forensics and Security, vol. 15, pp. 2417-2428, 2020.
[90] G. B. Huang, M. Mattar, T. Berg, & E. Learned-Miller, "Labeled faces in the wild: A database for studying face recognition in unconstrained environments," Workshop on Faces in 'Real-Life' Images: Detection, Alignment, and Recognition, October, 2008.
[91] Z. Liu, P. Luo, X. Wang, & X. Tang, "Deep learning face attributes in the wild," in Proceedings of the IEEE International Conference on Computer Vision, pp. 3730-3738, 2015.
[92] O. Russakovsky, et al., "Imagenet large scale visual recognition challenge," International Journal of Computer Vision, vol. 115, no. 3, pp. 211-252, 2015.
[93] A. Khan, A. Sohail, U. Zahoora, & A. S. Qureshi, "A survey of the recent architectures of deep convolutional neural networks," Artificial Intelligence Review, vol. 53, no. 8, pp. 5455-5516, 2020.
[94] K. Ricanek, Y. Wang, C. Chen, & S. J. Simmons, "Generalized multi-ethnic face age-estimation," in 2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems, pp. 1-6, September, 2009.
[95] J. D. Akinyemi, & O. F. Onifade, "An ethnic-specific age group ranking approach to facial age estimation using raw pixel features," in 2016 IEEE Symposium on Technologies for Homeland Security (HST), pp. 1-6, May, 2016.
[96] M. Shin, J. H. Seo, & D. S. Kwon, "Face image-based age and gender estimation with consideration of ethnic difference," in 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 567-572, 2017.
[97] Microsoft.com. 2021. Online Meeting Software, Video Conferencing | Microsoft Teams. [online] Available at: <https://www.microsoft.com/en-my/microsoft-teams/online-meetings> [Accessed 3 May 2021].