Age Prediction From Facial Images Using Deep Learning Architecture

Abstract – Predicting age and gender from images is a common computer vision problem with many practical applications. However, this problem faces many difficulties because a person's age can be affected by genetics, living environment, diet, health, gender, and other factors. Therefore, the accuracy of the prediction model may decrease due to the enormous diversity and variability in the data. In this study, we use three models, Unet, MobileNets, and EfficientNets, to test the performance of predicting a person's age and gender from their photos. In addition, we adjust the learning rate parameter to find optimal performance. The best result for gender prediction is achieved by the Unet model, with the highest accuracy of 97.22 %; the MobileNets model gives the best age prediction results, with MAE = 2.248; and a learning rate of 0.001 yields optimal performance for the models in our study.

Keywords – Age prediction, computer vision, face image, Unet.

I. INTRODUCTION

The human face expresses a diversity of shapes, including features such as the eyes, nose, mouth, chin, eyebrows, and skin colour [1]. When combined with facial expressions and other contours, these characteristics create a unique human face, enabling us to recognise and distinguish individuals. This distinctiveness is evident across all cultures worldwide, influencing how we interact with someone young versus someone elderly. Recognising a person's age and gender from the face can help us communicate more appropriately, rather than relying solely on estimates from external facial features. Additionally, this capability has critical applications in many different fields. In security and law enforcement [2], determining age and gender from facial images can aid in criminal identification and in detecting falsified personal information, thereby addressing public security concerns. In the medical field [3], information about a person's age and gender can be crucial for diagnosing illnesses, treatment, and health management, especially in early aging detection. In technology, it can be used to personalise user experiences on online platforms, from marketing to fraud detection [4].

Estimating a person's age and gender from the face remains challenging due to differences in facial features and appearance and the lack of comprehensive databases. Recently, convolutional neural network (CNN) methods have been applied to age estimation and image classification with notable improvements [5], [6]. In forensic research [7], VGG16 was used to classify age groups and predict gender from images of the human eye area. The results showed that the VGG16 model effectively predicted gender and age from eye-area images.

This study used the UTKFace dataset, which contains over 20 000 face images. We aim to build a model to classify gender and age from faces. We implemented three models, Unet, MobileNets, and EfficientNets, and tested and compared different learning rates: 0.001, 0.005, and 0.01.

The remaining parts of the article are organised as follows: Section II presents research on age and gender prediction. Section III describes in detail the dataset, the data pre-processing, and the algorithms used for comparison. Results and explanations for each algorithm are presented in Section IV. Section V summarises potential future research directions and concludes with the research implications.

II. RELATED WORK

There have been many studies on predicting people's age and gender from their faces. The study [8] introduced the Consistent Rank Logits (CORAL) method to adapt popular CNN architectures and compared the results on datasets such as MORPH-2, CACD, and AFAD. The MORPH-2 dataset contains 55 608 face images of subjects ranging from 16–70 years old, CACD contains 159 449 images ranging from 14–62 years old, and the Asian Face Age Dataset (AFAD) contains 165 501 faces between 15–40 years old. They chose the ResNet-34 architecture, and the results were: OR-CNN method (AVG ± SD of MAE: 2.83 ± 0.03, AVG ± SD of RMSE: 3.97 ± 0.11) and CORAL-CNN method (AVG ± SD of MAE: 2.64 ± 0.02, AVG ± SD of RMSE: 3.65 ± 0.04). The experimental results showed that CORAL improved the performance of convolutional neural networks for age estimation on three independent datasets.
The study [9] compared the effect of initialising the AdienceNet, CaffeNet, GoogLeNet, and VGG-16 models with pre-trained weights on two real-world datasets, including the Adience database, which contains 26 580 images of 2284 subjects with binary gender labels and one label from eight different age ranges. They compared image pre-processing, model bootstrapping, and architectural choices on the Adience dataset and discussed how they affected performance. Using LRP to visualise how the model interacts with specific input samples directly, they demonstrated that bootstrapping the model appropriately, by tracing the overfitting phenomenon back through the network, leads to a more comprehensive view of the input. In combination with simple pre-processing steps, the study achieved superior performance for gender classification on the Adience test set.

In the study [10], the authors used YOLO to locate faces in images in a Web application written in the Python programming language and used a model based on the EfficientNet architecture to predict age. The MIVIA Age Dataset has 575 073 photos of more than 9000 people of different ages. Face recognition accuracy was 94.6 %, and the age estimation MAE was 2.89.

A deep learning solution was introduced to estimate age from a facial image without using facial landmarks [11]. The authors crawled 523 051 facial images from the IMDb and Wikipedia websites to create IMDB-WIKI and used the Deep EXpectation (DEX) system to estimate age. First, they selected an off-the-shelf face detector to obtain the face's position and size (scale) in each image. Next, they found that performance increased when considering the context around the face, so they expanded the detected face by 40 % of its width and height at all edges. Then, they used a convolutional neural network with the VGG-16 architecture, pre-trained on ImageNet, to classify the images. Finally, they estimated age as the expected value over the softmax outputs. When identifying age, the MAE was 3.306.

A scheme for extracting aging characteristics and automatic age estimation is presented in [12], with the basic idea of learning a low-dimensional aging manifold embedded in an appropriate subspace. They then designed a new method, locally adjusted robust regression (LARR), to learn and predict aging patterns. They tested age estimation and age-specific face recognition on the FG-NET database of 1002 high-resolution colour or grayscale face images with significant variations in lighting, pose, and expression, covering 82 people with ages ranging from 0 to 69 and an age gap of up to 45 years. When identifying age, the MAE was 5.04.

In [13], the authors focused on using transfer learning to classify age groups from sclera images; 2000 sclera images were collected from 250 people of different ages, and Otsu thresholding was used to segment the images using morphological processes. The segmented images were trained and tested on four different pre-trained models (VGG16, ResNet50, MobileNetV2, EfficientNet-B1), which were compared on different performance metrics. ResNet-50 was shown to outperform the others, with an accuracy, precision, recall, and F1-score of 95 %, while VGG-16, EfficientNet-B1, and MobileNetV2 achieved 94 %, 93 %, and 91 %, respectively. The study findings suggest an aging pattern in the sclera that can be used to classify age.

A new framework for extracting age characteristics based on CNN, dimensionality reduction, and classification (SVR, PLS, CCA) was built [14] for estimating age. Compared to previous CNN-based models, this approach utilises feature maps obtained at different layers rather than only the features obtained in the top layer. Experiments were conducted on two datasets, MORPH and FG-NET. When identifying age on MORPH and FG-NET, the MAE was 4.77 and 4.26, respectively. The study [15] also proposed a multi-output CNN to solve the age and gender classification problem. High performance was achieved on both the MORPH dataset and the Asian Face Age Dataset (AFAD). When identifying age on MORPH and AFAD, the MAE was 3.27 and 3.34, respectively. Additionally, the authors proposed solving the ordinal regression problem using an end-to-end deep learning method: an ordinal regression problem was transformed into a series of binary classification sub-problems solved collectively by the proposed multi-output CNN learning algorithm.

A nutritional recommendation system for users based on age and gender prediction was built [16], pre-processing the image data before passing it through a feature extraction stage using a deep neural network (DNN). The system was evaluated by classification rate, precision, and recall using the Adience and UTKFace datasets. On the other hand, the study [17] focused on age estimation, age classification, and gender classification from static images of an individual, using 20 000 photos from the UTKFace dataset. The results showed that simple linear regression trained on the extracted features outperformed CNN, ResNet50, and ResNeXt50 training for age estimation. Specifically, for the Separable Conv2D + spatial dropout + Xavier uniform initialisation method, the age estimation MAE was 6.080, the age classification accuracy was 78.279 %, and the gender classification accuracy was 91.269 %.

Research [18] showed that improving the accuracy of models that determine age and sex from images is sometimes hampered by external factors such as lighting, makeup, and pose, so some models cannot accurately predict age. In terms of methods, they performed the following tasks: load the data (IMDB-WIKI); detect faces (using the Viola-Jones algorithm); crop and resize the faces (227×227×3); extract features using a CNN; apply PCA to the extracted features, reducing their dimensionality from 4096 to 500; train and test using the CNN; and visualise and analyse the results. For the gender prediction model, the accuracy was 95.5 % with MAE 0.871 and MSE 0.8900, and for the age prediction model, the accuracy was 89.50 % with MAE 0.1441 and MSE 0.3517.

The study [19] showed that gender prediction was possible from images recorded with a phone's front camera. The authors used VGG and ResNet models pre-trained on the large ImageNet dataset as feature representations, combined with pattern classifiers, to predict gender.
The results showed that the VGG-16 model achieved 85.3 % accuracy using an SVM on the eye region, including single eyes, taken with an iPhone 5s. The male classification rate (MCR) and female classification rate (FCR) were 91.7 % and 73.7 %, respectively. When combining the SVM with an MLP at the score level, accuracy increased slightly to 85.7 %, while the MCR and FCR changed to 91.5 % and 77.6 %. There were no significant differences between the VGG-16 and VGG-19 models. With the ResNet model, the best accuracy, about 87.3 %, was achieved by the SVM on the Oppo device; the MCR and FCR were 90.2 % and 84.8 %, respectively. Combining the SVM with the MLP at the score level increased the accuracy to 89.0 %, with an MCR and FCR of 87.5 % and 88.0 %, respectively.

Studies have thus shown many positive results in identifying a person's age and gender from real-time images and videos. In this study, we performed age and gender prediction with three models: Unet, MobileNets, and EfficientNets. We tested the performance of the models through accuracy and the MAE index as the number of epochs and the learning rate changed. The implementation method is presented in detail in the next section.

III. METHODS

This section introduces the research method, which consists of four parts. First, Section III-A presents the UTKFace dataset, from which more than 8000 face images are used. Next, Section III-B introduces the image pre-processing steps, since the initial data is imbalanced. After that, we train our data using three models, Unet, EfficientNet, and MobileNets; Section III-C describes the architecture and settings of each model. Section III-D provides information about the programming language settings, the necessary libraries, and the computer configuration, and defines the evaluation metrics: MAE for age prediction and accuracy for gender prediction. Figure 1 describes in detail the implementation architecture of our model.
A. Dataset

We utilised the UTKFace dataset (https://susanqq.github.io/UTKFace/), comprising over 20 000 facial images annotated with age, gender, and ethnicity information and ranging from 0 to 116 years old. The age group of 20–30 years old constitutes the majority, while the elderly group represents a minority compared to other age groups; hence, in this study, we excluded images with ages ranging from 86 to 100 to avoid age group imbalance. The dataset predominantly comprises individuals of white ethnicity (approximately 5000 images, over 50 %), followed by a minority of black individuals (only around 3–5 %), with other ethnicities accounting for proportions ranging from 10–15 %. Female subjects outnumber male subjects. Due to memory constraints, we only utilised the first 8000 images. Figure 2 depicts a detailed description of our dataset.

The label of each image in the dataset is annotated according to the structure [age]_[gender]_[race]_[date&time], with the extension in jpg format. Here, [age] is an integer from 0 to 116 representing age; [gender] is 0 (male) or 1 (female); [race] is an integer from 0 to 4 representing White, Black, Asian, Indian, and Others; and [date&time], in the format yyyymmddHHMMSSFFF, shows the date and time the image was collected in UTKFace. For example, the filename 25_0_1_20170116174525125.jpg denotes a 25-year-old black male.

Fig. 2. Description of the dataset in our study.
B. Image Pre-processing

As mentioned earlier, the set of more than 9000 photos drawn from the UTKFace library is not uniform, and our goal is to compare the performance of three models, EfficientNet, U-Net, and MobileNet, to determine which model is most suitable based on factors such as the number of parameters, performance, and accuracy. The dataset consists of photographs that have already been cropped to show only the subject's face. Therefore, our data pre-processing focuses on rebalancing the dataset to ensure class uniformity.

Our image pre-processing steps loop through all image files in the directory and extract the age and gender information from the image file names. After reading each image file, we use "ImageOps.fit" to resize the image to fit each model. Once the images are read and processed, we create a data frame from the collected data. We then remove unnecessary data, excluding samples with ages outside the range of 4–80 or with a gender label of 3. We randomly select a portion (30 %) of the samples under four years old and add them back to the dataset to balance the data. Finally, we normalise the image data by dividing by 255 so that the pixel values lie in the range [0, 1]. These pre-processing steps prepare the data for input into a machine-learning model that predicts age and gender from the images. A minimal sketch of this pipeline is given below.
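The sketch below illustrates the pre-processing steps just described; the helper names and the 224×224 target size are our own choices for illustration, not code from the original implementation.

import os
import numpy as np
import pandas as pd
from PIL import Image, ImageOps

IMG_SIZE = (224, 224)  # model input size used in our experiments

def load_utkface(directory):
    # UTKFace filenames follow [age]_[gender]_[race]_[date&time].jpg.
    records = []
    for name in os.listdir(directory):
        if not name.endswith(".jpg"):
            continue
        parts = name.split("_")
        if len(parts) < 4:
            continue  # skip files that do not match the naming scheme
        age, gender = int(parts[0]), int(parts[1])
        img = Image.open(os.path.join(directory, name)).convert("RGB")
        img = ImageOps.fit(img, IMG_SIZE)                   # crop/resize to model input
        pixels = np.asarray(img, dtype=np.float32) / 255.0  # scale pixels to [0, 1]
        records.append({"age": age, "gender": gender, "image": pixels})
    return pd.DataFrame(records)

def rebalance(df, seed=42):
    # Keep ages 4-80 with valid gender labels, then add back a random 30 %
    # of the under-four samples to balance the classes.
    kept = df[(df.age >= 4) & (df.age <= 80) & (df.gender.isin([0, 1]))]
    young = df[df.age < 4].sample(frac=0.3, random_state=seed)
    return pd.concat([kept, young]).reset_index(drop=True)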
C. Classification and Object Detection

In this section, we apply machine learning methods to create a model that predicts a person's age and gender. The modelling process involves selecting from various models and employing different techniques during testing. In this case, we used MobileNets [20], Unet [21], and EfficientNets [22] for prediction. We aim to identify the classification model best suited to this analysis problem, ensuring the highest possible accuracy and performance; a sketch of the shared model structure follows.
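To make the shared structure concrete, the sketch below (our illustration, using keras.applications backbones rather than the paper's exact code) attaches two output heads, age regression and gender classification, to an interchangeable ImageNet-pretrained backbone:

import tensorflow as tf

def build_model(backbone_name="MobileNetV2", learning_rate=0.001):
    # Any tf.keras.applications backbone can be slotted in by name.
    backbone_cls = getattr(tf.keras.applications, backbone_name)
    backbone = backbone_cls(include_top=False, weights="imagenet",
                            input_shape=(224, 224, 3))
    # Shared features feed two heads: age (regression) and gender (binary).
    x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
    age = tf.keras.layers.Dense(1, name="age")(x)
    gender = tf.keras.layers.Dense(1, activation="sigmoid", name="gender")(x)
    model = tf.keras.Model(backbone.input, [age, gender])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss={"age": "mae", "gender": "binary_crossentropy"},
                  metrics={"age": "mae", "gender": "accuracy"})
    return model

Calling build_model("MobileNetV2") or build_model("EfficientNetB3") then swaps the backbone without touching the heads.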
D. Environmental Settings

To run and train models with a significant number of parameters, we train on Google Colab Pro, where the code is sent to Google's server, executed, and the results are returned. We select the Tesla K80 GPU environment with approximately 12 GB of RAM, as well as the V100 GPU. The Python programming language is used along with the necessary libraries, including Keras in TensorFlow, scikit-learn, Pandas, NumPy, the operating system interface (os), the Python Imaging Library (PIL), TensorFlow, and Segmentation Models. The dataset is divided into four subsets to serve two goals: age identification and gender recognition. Each goal has a training dataset and a test dataset, and for each test dataset, 20 % of representative data from each group is selected for testing. This study uses two metrics to evaluate model performance: the Mean Absolute Error (MAE) for age prediction and accuracy for gender prediction. MAE is a measure commonly used in regression problems to assess the quality of a prediction model. It measures the average absolute difference between the predicted and actual values; the lower the MAE, the more accurate the model.
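For n test samples with actual ages y_i and predicted ages ŷ_i, MAE is defined as

MAE = (1/n) · Σ_{i=1}^{n} |y_i − ŷ_i|.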
IV. EXPERIMENTAL RESULTS

This section presents the results of the study's experimental scenarios. We set up the models with default parameters, including an input image size of 224×224, a batch size of 64, and 100 epochs, and varied the learning rate over 0.001, 0.005, and 0.01 to test the resulting performance changes. We present detailed results in Section IV-A for the Unet model, Section IV-B for the MobileNets model, and Section IV-C for the EfficientNets model. Additionally, we compare our study with previous studies in Section IV-D.

A. Performance of the Unet Model

To optimise the training process, we combined Unet with ResNet34, using it as a feature-extraction encoder for the input images. By doing so, the model does not need to learn basic features from scratch, reducing the complexity of the training process and the total number of parameters that need to be trained. We experimented with different learning rates (lr), including 0.001, 0.005, and 0.01, to select the rate best suited to the environment: a learning rate that is too high may cause the model to "jump over" the optimal point, while a learning rate that is too low may result in slow learning or cause the model to get stuck in a local optimum.

Fig. 3. Loss function results when changing the learning rate in the Unet model: (a) gender with lr = 0.001; (b) age with lr = 0.001; (c) gender with lr = 0.005; (d) age with lr = 0.005; (e) gender with lr = 0.01; (f) age with lr = 0.01.
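A minimal sketch of this encoder reuse is given below. It assumes qubvel's classification_models package to obtain an ImageNet-pretrained ResNet34 (the paper itself lists the companion segmentation_models toolkit among its libraries); the pooled two-head design is our illustration, as the exact head layers are not specified.

import tensorflow as tf
from classification_models.tfkeras import Classifiers

# Obtain an ImageNet-pretrained ResNet34 (the encoder family used with Unet).
ResNet34, _ = Classifiers.get("resnet34")
encoder = ResNet34(input_shape=(224, 224, 3), weights="imagenet",
                   include_top=False)
encoder.trainable = False  # reuse pre-trained features; nothing learned from scratch

x = tf.keras.layers.GlobalAveragePooling2D()(encoder.output)
age = tf.keras.layers.Dense(1, name="age")(x)
gender = tf.keras.layers.Dense(1, activation="sigmoid", name="gender")(x)
model = tf.keras.Model(encoder.input, [age, gender])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss={"age": "mae", "gender": "binary_crossentropy"},
              metrics={"age": "mae", "gender": "accuracy"})

Freezing the encoder is what removes the need to learn basic features from scratch: only the two small output heads are trained.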
C. Performance of the EfficientNets Model

Similarly to the previous two models, we experimented with different learning rates for the EfficientNets model. The results in Table III demonstrate that a learning rate of 0.001 achieves the best gender prediction performance on both the training and validation datasets, with accuracies of 92.03 % and 91.47 %, respectively; likewise, the best MAE scores are 2.3060 (train) and 4.6693 (val). Both balance charts show minor discrepancies between the training and testing sets when using this model. At a learning rate of 0.005, gender prediction performance on the training and validation datasets is 96.52 % and 70.99 % accuracy, respectively. At a learning rate of 0.01, gender prediction performance on the training and validation datasets is 56.44 % and 60.20 % accuracy, respectively, and the best MAE scores are 4.7328 (train) and 6.8154 (val). Both balance charts show relatively minor discrepancies between the training and testing sets, but the accuracy in gender prediction is relatively low. Figure 7 illustrates the loss function results for predicting age and gender at different learning rates.

TABLE III
PERFORMANCE OF THE EFFICIENTNETB3 MODEL WHEN CHANGING THE LEARNING RATE

Learning Rate | Accuracy of Gender | Age MAE
0.001 | 0.9203 | 2.3060
0.005 | 0.9652 | 4.8394
0.01 | 0.5644 | 4.7328

To test the model, we let it recognise images randomly taken from the data warehouse, featuring different ages and genders. Figure 8 shows the prediction results using the EfficientNet model. The results show that the subjects' genders are correctly identified; however, their predicted ages differ from the true ages by three years or more. This discrepancy may be attributed to the quality of the photos, but the difference is still within an acceptable range.
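A sketch of this spot-check is shown below; spot_check is a hypothetical helper, and model and df are assumed to come from the earlier sketches.

import numpy as np

def spot_check(model, df, n=4, seed=0):
    # Draw n random samples and compare predictions with ground truth.
    sample = df.sample(n, random_state=seed)
    batch = np.stack(sample["image"].to_list())
    age_pred, gender_pred = model.predict(batch, verbose=0)
    for true_age, true_g, a, g in zip(sample["age"], sample["gender"],
                                      age_pred.ravel(), gender_pred.ravel()):
        print(f"true: {true_age} y / {'F' if true_g else 'M'}  ->  "
              f"predicted: {a:.1f} y / {'F' if g >= 0.5 else 'M'}")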
TABLE IV
COMPARISON OF THE MODEL PERFORMANCE WITH RELATED STUDIES

Model | Accuracy | MAE
Unet | 97.220 | 4.722
MobileNets | 87.450 | 2.821
EfficientNets | 96.520 | 4.839
CNN [18] | 95.500 | 0.871
SVMs [25] | 79.700 | –
VGG-16 [19] | 85.300 | –
VGG16 [7] | 95.730 | –
AutoML [23] | 69.400 | –
Inception V3 [24] | 91.820 | –
YOLO [10] | 94.600 | 2.890
DCNN [16] | 91.269 | 6.080

V. CONCLUSION

Predicting age and gender has never been easy, and it faces many challenges, such as discrepancies in the data, ambiguity in age and gender, and peripheral factors like lighting and photo background. The complexity and diversity of the data increase the difficulty of training a model with stable and optimal performance. In this study, we used the U-Net, MobileNets, and EfficientNets models to address the problem of recognising and classifying age and gender from images. The U-Net model achieved 97.22 % accuracy for gender prediction and an MAE of 4.722 for age prediction. With the MobileNets model, the accuracy was 87.45 % for gender prediction, with an MAE of 2.448 for age prediction. Finally, the EfficientNets model achieved 96.52 % accuracy for gender prediction and an MAE of 2.306 for age prediction. The best results were obtained when the learning rate was set to 0.001. In the future, we will continue to extract features based on each facial characteristic to support individual anti-aging efforts.

CONFLICT OF INTEREST STATEMENT

All authors declare that they have no conflicts of interest.

REFERENCES

[1] M. Maithri, U. Raghavendra, A. Gudigar, J. Samanth, P. D. Barua, M. Murugappan, Y. Chakole, and U. R. Acharya, "Automated emotion recognition: Current trends and future perspectives," Computer Methods and Programs in Biomedicine, vol. 215, p. 106646, 2022. https://doi.org/10.1016/j.cmpb.2022.106646
[2] T. L. Johnson, N. N. Johnson, V. Topalli, D. McCurdy, and A. Wallace, "Police facial recognition applications and violent crime control in U.S. cities," Cities, vol. 155, p. 105472, Dec. 2024. https://doi.org/10.1016/j.cities.2024.105472
[3] F. Mauvais-Jarvis, N. B. Merz, P. J. Barnes, R. D. Brinton, J.-J. Carrero, D. L. DeMeo, G. J. De Vries, C. N. Epperson, R. Govindan, S. L. Klein, et al., "Sex and gender: modifiers of health, disease, and medicine," The Lancet, vol. 396, no. 10250, pp. 565–582, 2020.
[4] A. Mutemi and F. Bacao, "E-commerce fraud detection based on machine learning techniques: Systematic literature review," Big Data Mining and Analytics, vol. 7, no. 2, pp. 419–444, 2024.
[5] A. Abdolrashidi, M. Minaei, E. Azimi, and S. Minaee, "Age and gender prediction from face images using attentional convolutional network," arXiv:2010.03791, Oct. 2020. https://doi.org/10.48550/arXiv.2010.03791
[6] D. Yi, Z. Lei, and S. Li, "Age estimation by multi-scale convolutional network," in Asian Conference on Computer Vision, 2014, pp. 144–158. https://doi.org/10.1007/978-3-319-16811-1_10
[7] Ö. F. Akmeşe, H. Çizmeci, S. Özdem, F. Özdemir, E. Deniz, R. Mazman, M. Erdoğan, and E. Erdoğan, "Prediction of gender and age period from periorbital region with VGG16," Chaos Theory and Applications, vol. 5, no. 2, pp. 105–110, July 2023. https://doi.org/10.51537/chaos.1257597
[8] W. Cao, V. Mirjalili, and S. Raschka, "Rank consistent ordinal regression for neural networks with application to age estimation," Pattern Recognition Letters, vol. 140, pp. 325–331, Dec. 2020. https://doi.org/10.1016/j.patrec.2020.11.008
[9] W. Samek, A. Binder, S. Lapuschkin, and K.-R. Müller, "Understanding and comparing deep neural networks for age and gender classification," in 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), Venice, Italy, Oct. 2017, pp. 1629–1638. https://doi.org/10.1109/ICCVW.2017.191
[10] G. Castellano, B. D. Carolis, N. Marvulli, M. Sciancalepore, and G. Vessio, "Real-time age estimation from facial images using YOLO and EfficientNet," in Computer Analysis of Images and Patterns, N. Tsapatsoulis, A. Panayides, T. Theocharides, A. Lanitis, C. Pattichis, and M. Vento, Eds. Springer, Cham, Oct. 2021, pp. 275–284. https://doi.org/10.1007/978-3-030-89131-2_25
[11] R. Rothe, R. Timofte, and L. V. Gool, "Deep expectation of real and apparent age from a single image without facial landmarks," International Journal of Computer Vision, vol. 126, pp. 144–157, Apr. 2018. https://doi.org/10.1007/s11263-016-0940-3
[12] G. Guo, Y. Fu, C. R. Dyer, and T. S. Huang, "Image-based human age estimation by manifold learning and locally adjusted robust regression," IEEE Transactions on Image Processing, vol. 17, no. 7, pp. 1178–1188, July 2008. https://doi.org/10.1109/TIP.2008.924280
[13] P. O. Odiona, M. N. Musab, and S. U. Shuaibua, "Age prediction from sclera images using deep learning," Journal of the Nigerian Society of Physical Sciences, vol. 4, no. 3, Aug. 2022. https://doi.org/10.46481/jnsps.2022.787
[14] X. Wang, R. Guo, and C. Kambhamettu, "Deeply-learned feature for age estimation," in Proceedings of the 2015 IEEE Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, Jan. 2015, pp. 534–541. https://doi.org/10.1109/WACV.2015.77
[15] Z. Niu, M. Zhou, L. Wang, X. Gao, and G. Hua, "Ordinal regression with multiple output CNN for age estimation," in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, Jun. 2016, pp. 4920–4928. https://doi.org/10.1109/CVPR.2016.532
[16] S. Haseena, S. Saroja, R. Madavan, A. Karthick, B. Pant, and M. Kifetew, "Prediction of the age and gender based on human face images based on deep learning algorithm," Computational and Mathematical Methods in Medicine, vol. 2022, pp. 1–16, Aug. 2022. https://doi.org/10.1155/2022/1413597
[17] V. Sheoran, S. Joshi, and T. R. Bhayani, "Age and gender prediction using deep CNNs and transfer learning," in Communications in Computer and Information Science, S. K. Singh, P. Roy, B. Raman, and P. Nagabhushan, Eds. Springer Singapore, Mar. 2021, pp. 293–304. https://doi.org/10.1007/978-981-16-1092-9_25
[18] B. Agrawal and M. Dixit, "Age estimation and gender prediction using convolutional neural network," in International Conference on Sustainable and Innovative Solutions for Current Challenges in Engineering & Technology, 2019, pp. 163–175. https://doi.org/10.1007/978-3-030-44758-8_15
[19] A. Rattani, N. Reddy, and R. Derakhshani, "Convolutional neural networks for gender prediction from smartphone-based ocular images," IET Biometrics, vol. 7, pp. 423–430, Feb. 2018. https://doi.org/10.1049/iet-bmt.2017.0171
[20] U. Kulkarni, S. M. Meena, S. V. Gurlahosur, and G. Bhogar, "Quantization friendly MobileNet (QF-MobileNet) architecture for vision based applications on embedded platforms," Neural Networks, vol. 136, pp. 28–39, Apr. 2021. https://doi.org/10.1016/j.neunet.2020.12.022
[21] O. Ronneberger, P. Fischer, and T. Brox, "U-net: Convolutional networks for biomedical image segmentation," in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, Lecture Notes in Computer Science, vol. 9351, N. Navab, J. Hornegger, W. Wells, and A. Frangi, Eds. Springer, Cham, 2015, pp. 234–241. https://doi.org/10.1007/978-3-319-24574-4_28