
FACIAL AGE ESTIMATION MODELS FOR DEEP LEARNING

ABSTRACT:

Automated age estimation from face images is the process of assigning either an exact age or a specific age range to a facial image. In this paper, a comparative study of the current techniques suitable for this task is performed, with an emphasis on lightweight models suitable for implementation. We investigate both the modern deep learning architectures suitable for feature extraction and the variants of framing the problem itself as classification, regression, or soft-label classification. The models are evaluated on the Adience dataset for age-group classification and on the FG-NET dataset for exact age estimation. To gather in-depth insights into automated age estimation, and in contrast to existing studies, we additionally compare the performance of both classification and regression on the same dataset. We propose a novel loss function that combines the regression and classification approaches and show that it outperforms the other considered approaches. At the same time, with a lightweight backbone, such an architecture is suitable for practical implementation.
TABLE OF CONTENTS

CHAPTER NO. TITLE

ABSTRACT
LIST OF FIGURES
LIST OF SYMBOLS

1. CHAPTER 1 : INTRODUCTION
1.1 GENERAL
1.2 SCOPE OF THE PROJECT
1.3 OBJECTIVE
1.4 EXISTING SYSTEM
1.4.1 EXISTING SYSTEM DISADVANTAGES
1.5 LITERATURE SURVEY
1.6 PROPOSED SYSTEM
1.6.1 PROPOSED SYSTEM ADVANTAGES
2. CHAPTER 2 : PROJECT DESCRIPTION
2.1 GENERAL
2.2 METHODOLOGIES
2.2.1 MODULES NAME
2.2.2 MODULES EXPLANATION
2.3 TECHNIQUE OR ALGORITHM
3. CHAPTER 3 : REQUIREMENTS
3.1 GENERAL
3.2 HARDWARE REQUIREMENTS
3.3 SOFTWARE REQUIREMENTS
4. CHAPTER 4 : SYSTEM DESIGN
4.1 GENERAL
4.2 UML DIAGRAMS
4.2.1 USE CASE DIAGRAM
4.2.2 CLASS DIAGRAM
4.2.3 OBJECT DIAGRAM
4.2.4 STATE DIAGRAM
4.2.5 ACTIVITY DIAGRAM
4.2.6 SEQUENCE DIAGRAM
4.2.7 COLLABORATION DIAGRAM
4.2.8 COMPONENT DIAGRAM
4.2.9 DATA FLOW DIAGRAM
4.2.10 DEPLOYMENT DIAGRAM
4.2.11 SYSTEM ARCHITECTURE
5. CHAPTER 5 : DEVELOPMENT TOOLS
5.1 GENERAL
5.2 HISTORY OF PYTHON
5.3 IMPORTANCE OF PYTHON
5.4 FEATURES OF PYTHON
5.5 LIBRARIES USED IN PYTHON
6. CHAPTER 6 : IMPLEMENTATION
6.1 GENERAL
6.2 IMPLEMENTATION
7. CHAPTER 7 : SNAPSHOTS
7.1 GENERAL
7.2 VARIOUS SNAPSHOTS
8. CHAPTER 8 : SOFTWARE TESTING
8.1 GENERAL
8.2 DEVELOPING METHODOLOGIES
8.3 TYPES OF TESTING
9. CHAPTER 9 : FUTURE ENHANCEMENT
9.1 FUTURE ENHANCEMENTS
10. CHAPTER 10 : CONCLUSION AND REFERENCES
10.1 CONCLUSION
10.2 REFERENCES

LIST OF FIGURES

FIGURE NO. NAME OF THE FIGURE

4.1 Use Case Diagram
4.2 Class Diagram
4.3 Object Diagram
4.4 State Diagram
4.5 Activity Diagram
4.6 Sequence Diagram
4.7 Collaboration Diagram
4.8 Component Diagram
4.9 Data Flow Diagram
4.10 Deployment Diagram
4.11 Architecture Diagram

LIST OF SYMBOLS

NOTATION

S.NO NAME DESCRIPTION

1. Class Represents a collection of similar entities grouped together, with public (+) and private (-) attributes.
2. Association Represents a static relationship between classes. Roles represent the way the two classes see each other.
3. Actor Represents an external user or system that interacts with the system.
4. Aggregation Aggregates several classes into a single class.
5. Relation (uses) Used for additional process communication.
6. Relation (extends) Used when one use case is similar to another use case but does a bit more.
7. Communication Communication between various use cases.
8. State State of the process.
9. Initial State Initial state of the object.
10. Final State Final state of the object.
11. Control Flow Represents the control flow between the states.
12. Decision Box Represents a decision-making process based on a constraint.
13. Use Case Interaction between the system and the external environment.
14. Component Represents physical modules which are a collection of components.
15. Node Represents the physical hardware on which components are deployed and executed.
16. Data Process/State A circle in a DFD represents a state or process which has been triggered due to some event or action.
17. External Entity Represents external entities such as keyboards, sensors, etc.
18. Transition Represents communication that occurs between processes.
19. Object Lifeline Represents the vertical (time) dimension over which an object exists and communicates.
20. Message Represents the message exchanged.
CHAPTER 1
INTRODUCTION

1.1 GENERAL
Automated age estimation (AAE) from face images can be defined as the process of assigning
either an exact age or a specific age range to a facial image. AAE has a wide scope of
applications in human-computer interaction, security systems, biometric systems, the advertising industry, etc. Therefore, age estimation has become a topic of interest for both industry and the academic community. In spite of a large body of work dealing with facial age estimation, it is
still a challenging problem, as the aging process significantly differs from one person to another.
This is caused by internal factors such as genes, changes in the shape and the size of the face, but
also by external factors like lifestyle and living conditions of an individual. It has been shown
that in some cases it is very difficult to accurately infer the age of a person visually even for a
human. While automated age estimation methods that approach or even surpass human performance have been proposed, there is still significant room for improvement, especially in unconstrained conditions. Several examples that have proven
difficult to correctly classify in this study are shown in Fig. 1, along with the closest model
predictions and ground-truth labels. A typical pipeline of a state-of-the-art age estimation system
consists of three steps: (i) pre-processing, including face detection and normalization, (ii) feature
extraction, and (iii) applying the age estimation algorithm (Fig. 2). Regarding feature extraction,
AAE systems can be divided into two groups: (i) systems that use hand-crafted features and (ii)
systems based on deep learning. The systems that use hand-crafted features work reasonably well
on images taken in constrained conditions (i.e. single face, frontally aligned, simple background
etc.) [1], [5]. However, with the recent development of in-the-wild datasets, hand-crafted methods
have increasingly been surpassed by deep learning models for feature extraction. Deep learning
models, especially convolutional neural networks (CNNs), have proven themselves to be more
robust to noise, variations in appearance, pose and lighting present in unconstrained datasets [6].
The problem of automated age estimation can be broadly framed either as a classification
problem or as a regression problem [1], [7]. When framing age estimation as a classification
problem, the classifier predicts an age group, e.g. ‘‘35 to 39 years old’’. Classification with soft
labels is another possibility, in which class assignments are not binary. When framing age
estimation as a regression problem, the goal is to predict the exact age as a number.
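
For illustration, a minimal sketch of soft-label encoding: an exact age can be spread over neighbouring age classes with a discrete Gaussian, so that the classifier is not penalised as heavily for predicting an adjacent age. The class count and the width of the Gaussian below are assumptions for illustration, not values taken from the experiments.

import numpy as np

def soft_age_label(true_age, num_classes=101, sigma=2.0):
    # Spread the exact age over classes 0..num_classes-1 with a discrete
    # Gaussian centred on the true age, then normalise to a distribution.
    ages = np.arange(num_classes)
    weights = np.exp(-0.5 * ((ages - true_age) / sigma) ** 2)
    return weights / weights.sum()

# Example: an image labelled 35 also assigns probability mass to nearby ages.
label = soft_age_label(35)
print(label[33:38].round(3))
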
1.2 SCOPE OF THE PROJECT
The scope involves leveraging deep learning techniques to develop robust and accurate facial age
estimation models capable of predicting ages from facial images, accounting for diverse factors
and ensuring practical deployment in various domains where age estimation is crucial.
1.3 OBJECTIVE
Facial age estimation models in deep learning pursue several objectives to create accurate and
reliable systems for predicting ages from facial images. These models aim to accurately gauge
age, considering facial features indicative of aging, such as wrinkles and texture, while ensuring
robustness across various demographics, including different ethnicities and genders. A key goal
is to develop models that generalize well, effectively estimating ages for new faces not seen
during training. Addressing challenges like variations in lighting, expressions, and accessories is
vital to ensure the models' reliability in real-world scenarios.
1.4 EXISTING SYSTEM:
We perform a comparative study of the current techniques suitable for the automated age
estimation task, with an emphasis on running age estimation on embedded devices. Moreover,
the intended application is adapting multimedia content to the age of a viewer, which does not
require high age estimation accuracy. We investigate both the suitable modern deep learning
architectures for feature extraction and the variants of framing the problem itself as
classification, regression or soft label classification. To gather in-depth insights into automated
age estimation and in contrast to existing studies, we additionally compare the performance of
both classification and regression on the same dataset.

1.4.1 EXISTINGSYSTEM DISADVANTAGES:


 VGG-16 may struggle with tasks where attending to specific regions of an image is critical for accurate analysis.
 VGG-16 is slow and memory-hungry because of its many layers and large number of parameters.
 It is trained on a fixed dataset and may not generalize well to new, unseen data.
1.5 LITERATURE SURVEY
Title: Age from faces in the deep learning revolution

Author: V. Carletti, A. Greco, G. Percannella, and M. Vento

Year: 2020.

Description: Face analysis includes a variety of specific problems, such as face detection, person identification, and gender and ethnicity recognition, just to name the most common ones; in the last two decades, significant research efforts have been devoted to the challenging task of age estimation from faces, as witnessed by the high number of published papers. The explosion of the deep learning paradigm, which is driving a spectacular increase in performance, is in the public eye; consequently, the number of approaches based on deep learning is impressively
growing and this also happened for age estimation. The exciting results obtained have been
recently surveyed on almost all the specific face analysis problems; the only exception stands for
age estimation, whose last survey dates back to 2010 and does not include any deep learning
based approach to the problem. This paper provides an analysis of the deep methods proposed in
the last six years; these are analyzed from different points of view: the network architecture
together with the learning procedure, the used datasets, data preprocessing and augmentation,
and the exploitation of additional data coming from gender, race and face expression. The review
is completed by discussing the results obtained on public datasets, so as the impact of different
aspects on system performance, together with still open issues.
Title: Deep learning approach for facial age classification: A survey of the state-of-the-art

Author: O. Agbo-Ajala and S. Viriri

Year: 2021.

Description: Age estimation using face images is an exciting and challenging task. The traits
from the face images are used to determine age, gender, ethnic background, and emotion of
people. Among this set of traits, age estimation can be valuable in several potential real-time
applications. The traditional hand-crafted methods relied on for age estimation cannot correctly estimate the age. The availability of huge datasets for training and an increase in computational power have made deep learning with convolutional neural networks a better method for age estimation; a convolutional neural network learns discriminative feature descriptors directly from image pixels. Several convolutional neural network approaches have been proposed by many researchers, and these have made a significant impact on the results and
performances of age estimation systems. In this paper, we present a thorough study of the state-
of-the-art deep learning techniques which estimate age from human faces. We discuss the
popular convolutional neural network architectures used for age estimation, present a critical
analysis of the performance of some deep learning models on popular facial aging datasets, and
study the standard evaluation metrics used for performance evaluations. Finally, we try to
analyze the main aspects that can increase the performance of the age estimation system in
future.
Title: Comprehensive analysis of the literature for age estimation from facial images

Author: A. S. Al-Shannaq and L. A. Elrefaei

Year:2019.

Description: Recently, vast attention has grown in the field of computer vision, especially in face recognition, detection, and facial landmark localization. Many significant features can be directly derived from the human face, such as age, gender, and race. Estimating the age can be defined as the automatic process of classifying a facial image into an exact age or a specific age range. Practically, age estimation from the face is still a challenging problem due to the effects of many internal factors, such as gender and race, and external factors, such as environment and lifestyle. Huge efforts have been devoted to reaching an acceptable and satisfactory accuracy for the age estimation task. In this review paper we try to: analyze the main aspects that can
increase the performance of the age estimation system, present the hand-crafted based models
and deep learning-based models and show how the evaluations are being conducted, discuss the
proposed algorithms and models in age estimation, show the main limitations and challenges
facing the age estimation process. Also, different aging databases that contain age annotations
are discussed. At the end, few guidelines and the future prospective related to age estimation are
investigated.
Title: Facial age estimation using machine learning techniques: An overview

Author: K. ELKarazle, V. Raman, and P. Then

Year:2022

Description: Automatic age estimation from facial images is an exciting machine learning topic that has attracted researchers' attention over the past several years. Numerous human-computer interaction applications, such as targeted marketing, content access control, or soft-biometrics systems, employ age estimation models to carry out secondary tasks such as user filtering or identification. Despite the vast array of applications that could benefit from automatic age estimation, building an automatic age estimation system comes with issues such as data disparity, the unique ageing pattern of each individual, and facial photo quality. This paper provides a survey on the standard methods of building automatic age estimation models, the benchmark datasets for building these models, and some of the latest proposed pieces of literature that introduce new age estimation methods. Finally, we present and discuss the standard evaluation metrics used to assess age estimation models. In addition to the survey, we discuss the identified gaps in the reviewed literature and present recommendations for future research.
Title: Facial age estimation using tensor-based subspace learning and deep random forests

Author: O. Guehairia, F. Dornaika, A. Ouamane, and A. Taleb-Ahmed

Year:2022.

Description: Recently, the estimation of facial age has attracted much attention. This letter extends and improves a recently developed method (Guehairia et al., 2020) for fusing multiple deep facial features for age estimation. This method was based on deep random forests (DRFs). We propose a new pipeline that integrates tensor-based subspace learning before applying DRFs. Deep face features of a training set are represented as a 3D tensor. Multilinear Whitened Principal Component Analysis (MWPCA) and Tensor Exponential Discriminant Analysis (TEDA) are used to extract the most discriminative information. The tensor subspace features are then fed into DRFs to predict age. Experiments conducted on five public face databases show that our method can compete with many state-of-the-art methods.
1.6 PROPOSED SYSTEM
Deep learning techniques excel in various domains such as image and speech recognition, natural
language processing, recommendation systems, and autonomous vehicles, among others. The
success of deep learning stems from its ability to learn intricate patterns and representations
directly from raw data, leading to remarkable performance in tasks that involve understanding
and interpreting complex information.

1.6.1 PROPOSED SYSTEM ADVANTAGES:


 Deep learning can capture very complex patterns in data, such as recognizing objects in images or understanding speech, better than other methods.
 It scales well with large amounts of data.
 It learns important features from the data itself, without hand-crafted feature engineering.
CHAPTER 2

PROJECT DESCRIPTION

2.1 GENERAL:

Automated age estimation has been an actively researched topic in recent years, as detailed in a
number of comparative surveys. While earlier works predominantly focused on explicitly
modeling the aging process using various computer vision techniques and hand-crafted features,
current age estimation methods typically apply some form of deep learning.
2.2 METHODOLOGIES

2.2.1 MODULES NAME:

The modules are:
 Data Preparation
 Selecting a Model
 Model Building
 Training the Model
 Model Evaluation
 Saving the Model
2.2.2 MODULES EXPLANATION:

1) Data Preparation
Gather a dataset relevant to your task, ensuring it's labeled appropriately for supervised learning
tasks. Preprocess the data by resizing images, normalizing values, or encoding features as
necessary.
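
As an illustration of this step, a minimal preprocessing sketch using OpenCV is shown below; the input size and normalisation scheme are assumptions and should be matched to the chosen backbone.

import cv2
import numpy as np

def prepare_image(path, size=(224, 224)):
    # Read a face image, convert BGR to RGB, resize to a fixed resolution,
    # and scale pixel values to [0, 1].
    img = cv2.imread(path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, size)
    return img.astype(np.float32) / 255.0

# Hypothetical usage: stack prepared images into X and collect ages into y.
# X = np.stack([prepare_image(p) for p in image_paths])
# y = np.array(ages, dtype=np.float32)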

2) Selecting a Model
The selection of a model should align with the problem at hand, considering factors like the
nature of the data, task complexity, interpretability, and computational resources available for
training and deployment. It often involves a balance between accuracy, interpretability, and
practicality for the given problem context.
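
As a hedged example of such a choice, the sketch below builds a lightweight backbone (MobileNetV2 from Keras, chosen here purely for illustration) with a small age-group head; the number of classes follows the eight Adience-style age groups and is an assumption.

import tensorflow as tf

def build_age_model(num_classes=8, input_shape=(224, 224, 3)):
    # Lightweight backbone (MobileNetV2, used only as an example) with
    # global average pooling and a small softmax head over age groups.
    backbone = tf.keras.applications.MobileNetV2(
        include_top=False, weights="imagenet",
        input_shape=input_shape, pooling="avg")
    x = tf.keras.layers.Dropout(0.2)(backbone.output)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(backbone.input, outputs)

Heavier backbones (e.g. VGG-16 or ResNet variants) can be dropped in by swapping the application constructor, which keeps comparisons between backbones straightforward.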

3) Model Building
Model building is an iterative process that involves experimenting, training, evaluating, and
refining models to create the most effective and accurate solution for the given problem.

4) Training the Model

Training a model in machine learning is a pivotal process, a journey where data becomes
actionable insights. It begins by preparing the dataset, refining and partitioning it into training
and validation sets. Choosing the right model architecture or algorithm follows, establishing the
framework for learning. As data flows through the model, it calculates predictions and assesses
the error between these predictions and the actual outcomes. Through backpropagation, the
model adjusts its internal parameters, fine-tuning its understanding of the data's patterns.
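
A minimal Keras training sketch consistent with the description above is given below; build_age_model is the illustrative builder from the earlier sketch, and X_train, y_train, X_val, y_val are assumed to come from the data-preparation step.

import tensorflow as tf

model = build_age_model()  # illustrative builder from the earlier sketch
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# The arrays are assumed to come from the data-preparation step; the labels
# are one-hot (or soft) distributions over age groups.
history = model.fit(X_train, y_train,
                    validation_data=(X_val, y_val),
                    epochs=30, batch_size=32)
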
5) Model Evaluation
Model evaluation in machine learning serves as the litmus test, scrutinizing the capabilities of a
trained model in handling new, unseen data. It initiates with assessing the model's performance
on a separate validation set, distinct from the training data, using diverse metrics tailored to the
problem's nature. Techniques like cross-validation augment this process, enhancing reliability,
especially with limited data. The evaluation journey delves deeper, exploring potential overfitting or underfitting issues, ensuring the model captures essential patterns without getting entangled in noise or missing critical insights. Comparison among models and hyperparameter variations unveils the optimal choice for deployment, backed by insights gained from
visualizations like confusion matrices or ROC curves.
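
The sketch below illustrates two of the metrics mentioned above for the hypothetical group classifier from the earlier sketches: a confusion matrix over age groups and, for exact-age models, the mean absolute error computed from the expected age of the predicted distribution.

import numpy as np
from sklearn.metrics import confusion_matrix

probs = model.predict(X_val)            # per-image class probabilities
pred_groups = probs.argmax(axis=1)      # hard age-group predictions
true_groups = y_val.argmax(axis=1)      # labels assumed one-hot / soft
print(confusion_matrix(true_groups, pred_groups))

# For exact-age models the usual metric is mean absolute error in years; with
# a distribution output, the expected age is the probability-weighted class
# index (true_ages is assumed to hold ground-truth ages in years).
expected_age = probs @ np.arange(probs.shape[1])
# mae = np.mean(np.abs(true_ages - expected_age))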

6) Saving the Model


Saving the model marks the culmination of its training and optimization, preserving its learned
knowledge for future use. After rigorous training and fine-tuning, the model's parameters,
weights, and architecture are encapsulated and stored in a designated format.
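
In Keras, for example, this step can be as simple as the following; the file names are placeholders and the exact format depends on the Keras version.

# Save the full model (architecture + weights) or only the weights.
model.save("age_model.keras")
model.save_weights("age_model.weights.h5")

# Later, the trained model can be restored for inference:
restored = tf.keras.models.load_model("age_model.keras")
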
2.3 TECHNIQUE USED OR ALGORITHM USED

2.3.1 EXISTING TECHNIQUE: -

VGG-16 Architecture
VGG-16 is a convolutional neural network architecture designed for image recognition. It is called VGG-16 because it has 16 weight layers that work together to analyze an image. Each layer extracts different features, such as edges or textures. A distinguishing characteristic is its exclusive use of small 3x3 convolutional filters stacked in depth. It performs well at recognizing different objects in images, such as distinguishing cars from cats, and is widely used because it is accurate and straightforward to apply.
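
For reference, a hedged sketch of loading the pretrained VGG-16 convolutional backbone in Keras (ImageNet weights, original classifier head removed) so that an age head can be attached:

import tensorflow as tf

# Pretrained VGG-16 convolutional backbone without its original 1000-class head.
vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                  input_shape=(224, 224, 3), pooling="avg")
vgg.summary()  # roughly 14.7 million parameters in the convolutional layers
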
2.3.2 PROPOSED TECHNIQUE USED OR ALGORITHM USED:
 Deep Learning Technique

The term "deep" refers to the multiple layers that compose these neural networks. These layers
progressively extract higher-level features from raw data. In deep learning, each layer of the
neural network transforms the input data and passes it to the next layer, allowing the network to
automatically learn representations of the data. These learned representations become more
abstract and complex as they move through deeper layers, enabling the network to understand
intricate relationships within the data.
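
A minimal toy CNN illustrating this layer-by-layer feature extraction is sketched below; the layer sizes and the single regression output are illustrative assumptions, not the architecture used in the experiments.

import tensorflow as tf

toy_cnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),   # low-level: edges, textures
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),   # mid-level: facial parts
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu"),  # high-level: face patterns
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1)                            # e.g. a regressed age
])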

CHAPTER 3

REQUIREMENTS ENGINEERING

3.1 GENERAL

We can see from the results that on each database, the error rates are very low due to the
discriminatory power of features and the regression capabilities of classifiers. Comparing the
highest accuracies (corresponding to the lowest error rates) to those of previous works, our
results are very competitive.

3.2 HARDWARE REQUIREMENTS

The hardware requirements may serve as the basis for a contract for the implementation of the system and should therefore be a complete and consistent specification of the whole system. They are used by software engineers as the starting point for the system design. They should state what the system should do, not how it should be implemented.

 PROCESSOR : INTEL CORE 2 DUO
 RAM : 4 GB DDR RAM
 HARD DISK : 250 GB
3.3 SOFTWARE REQUIREMENTS

The software requirements document is the specification of the system. It should include both a
definition and a specification of requirements. It is a set of what the system should do rather than
how it should do it. The software requirements provide a basis for creating the software
requirements specification. It is useful in estimating cost, planning team activities, performing tasks, and tracking the team's progress throughout the development activity.

 Operating System : Windows 7/8/10

 Platform : Spyder3

 Programming Language : Python

 Front End : Spyder3

3.4 FUNCTIONAL REQUIREMENTS

A functional requirement defines a function of a software system or its component. A function is described as a set of inputs, the behavior, and outputs. For this system, the primary functional requirement is to accept a facial image as input and return an estimated age or age group as output.
3.5 NON-FUNCTIONAL REQUIREMENTS

The major non-functional Requirements of the system are as follows

Usability

The system is designed as a completely automated process; hence there is little or no user intervention.

Reliability

The system is reliable because of the qualities inherited from the chosen platform, Python. Code built using Python is more reliable.

Performance

The system is developed in a high-level language and uses advanced back-end technologies, so it responds to the end user on the client system within a very short time.

Supportability

The system is designed to be cross-platform. It is supported on a wide range of hardware and software platforms.

Implementation

The system is implemented in a web environment using Jupyter Notebook. Windows 10 Professional is used as the platform, and the user interface is provided through the Jupyter Notebook server.
CHAPTER 4

DESIGN ENGINEERING

4.1 GENERAL

Design Engineering deals with the various UML [Unified Modelling language] diagrams
for the implementation of project. Design is a meaningful engineering representation of a thing
that is to be built. Software design is a process through which the requirements are translated into
representation of the software. Design is the place where quality is rendered in software
engineering.
4.2 UML DIAGRAMS

4.2.1 USE CASE DIAGRAM

Fig 4.1: Use Case Diagram. Use cases: Data Input, Data Analysis, Data Preprocessing, Model Building, Training the Model, User Input, Model Prediction, Result.

EXPLANATION:
The main purpose of a use case diagram is to show what system functions are performed for
which actor. The roles of the actors in the system can be depicted. The above diagram has the user as the actor, who carries out each step of the workflow to obtain the result.
4.2.2 CLASS DIAGRAM

Fig 4.2: Class Diagram. Classes: Data Input, Data Analysis, Data Preprocessing, User Input, Train Model, Model Building, and Model Prediction (face detection, age estimation, gender estimation), with attributes and methods drawn from pandas, numpy, scikit-learn, TensorFlow, and OpenCV (e.g. read_csv(), read_image(), .fit(), image_detect(), draw_rectangle()).

EXPLANATION

The class diagram represents how the classes, with their attributes and methods, are linked together to perform the overall workflow. The diagram above shows the various classes involved in our project.
4.2.3 OBJECT DIAGRAM

Fig 4.3: Object Diagram. Objects: Data Input, Data Analysis, Data Preprocessing, User Input, Train Model, Model Building, Model Prediction.

EXPLANATION:

The above diagram describes the flow of objects between the classes. An object diagram shows a complete or partial view of the structure of a modeled system. It represents how instances of the classes, with their attributes and methods, are linked together.
4.2.4 STATE DIAGRAM

Fig 4.4: State Diagram. States: Data Input, Data Analysis, Preprocessing, Model Building, Model Train, User Input, Model Prediction.

EXPLANATION:

A state diagram is a loosely defined diagram that shows workflows of stepwise activities and actions, with support for choice, iteration, and concurrency. State diagrams require that the system described is composed of a finite number of states; sometimes this is indeed the case, while at other times it is a reasonable abstraction. Many forms of state diagrams exist, which differ slightly and have different semantics.
4.2.5 ACTIVITY DIAGRAM

Fig 4.5: Activity Diagram. Activities: Data Input, Data Analysis, Data Preprocessing, Model Building, Model Training, Prediction, Result.

EXPLANATION:
Activity diagrams are graphical representations of workflows of stepwise activities and
actions with support for choice, iteration and concurrency. In the Unified Modeling Language,
activity diagrams can be used to describe the business and operational step-by-step workflows of
components in a system. An activity diagram shows the overall flow of control.
4.2.6 SEQUENCE DIAGRAM

Fig 4.6: Sequence Diagram. Lifelines: User Input, Data Input, Data Analysis, Data Preprocessing, Model Building, Model Training, Model Prediction. Messages: collecting dataset, dataset analysis, removing unwanted data, applying model, entering user input data, predicted result shown to user.

EXPLANATION:

A sequence diagram in Unified Modeling Language (UML) is a kind of interaction


diagram that shows how processes operate with one another and in what order. It is a construct of
a Message Sequence Chart. A sequence diagram shows object interactions arranged in time
sequence. It depicts the objects and classes involved in the scenario and the sequence of
messages exchanged between the objects needed to carry out the functionality of the scenario.
4.2.7 COLLABORATION DIAGRAM

Fig 4.7: Collaboration Diagram. Objects: Data Input, Data Analysis, Data Preprocessing, Model Building, Model Training, User Input, Model Prediction. Messages: 1: collecting dataset, 2: dataset analysis, 3: removing unwanted data, 4: applying model, 5: entering user input data, 6: model prediction, 7: result shown to user.

EXPLANATION:
A collaboration diagram, also called a communication diagram or interaction diagram, is
an illustration of the relationships and interactions among software objects in the Unified
Modeling Language (UML). The concept is more than a decade old although it has been refined
as modeling paradigms have evolved.
4.2.8 COMPONENT DIAGRAM

Fig 4.8: Component Diagram. Components: Data Input, Data Analysis, Data Preprocessing, User Input, Training the Model, Model Building, Model Prediction, Result.

EXPLANATION

In the Unified Modeling Language, a component diagram depicts how components are wired together to form larger components and/or software systems. They are used to illustrate the structure of arbitrarily complex systems. Here the user's input flows through the data and model components, and the prediction result is returned to the user. All boxes are components and the arrows indicate dependencies.
4.2.9 DATA FLOW DIAGRAM

Level 0: Dataset → Analysis → Preprocessing → Model Training using OpenCV2.

Level 1: User Input → Model Trained with OpenCV2 → Model Prediction → Result.
Fig 4.9: Data Flow Diagrams

EXPLANATION:

A data flow diagram (DFD) is a graphical representation of the "flow" of data through an
information system, modeling its process aspects. Often they are a preliminary step used to create an
overview of the system which can later be elaborated. DFDs can also be used for the visualization of data
processing (structured design).

A DFD shows what kinds of data will be input to and output from the system, where the data will
come from and go to, and where the data will be stored. It does not show information about the timing of
processes, or information about whether processes will operate in sequence or in parallel.
4.2.10 DEPLOYMENT DIAGRAM

Fig 4.10: Deployment Diagram. Nodes: Data Input, Data Analysis, Data Preprocessing, User Input, Model Building, Model Train, Model Prediction.

EXPLANATION:

Deployment Diagram is a type of diagram that specifies the physical hardware on which the
software system will execute. It also determines how the software is deployed on the underlying
hardware. It maps the software pieces of a system to the devices that are going to execute them.

SYSTEM ARCHITECTURE:
Fig 4.11: System Architecture
CHAPTER 5

DEVELOPMENT TOOLS

5.1 Python

Python is a high-level, interpreted, interactive and object-oriented scripting language. Python is


designed to be highly readable. It uses English keywords frequently, whereas other languages
use punctuation, and it has fewer syntactical constructions than other languages.

5.2 History of Python

Python was developed by Guido van Rossum in the late eighties and early nineties at the
National Research Institute for Mathematics and Computer Science in the Netherlands.

Python is derived from many other languages, including ABC, Modula-3, C, C++, Algol-68,
SmallTalk, and Unix shell and other scripting languages.

Python is copyrighted. Like Perl, Python source code is freely available, under the Python Software Foundation License (an open-source, GPL-compatible license).

Python is now maintained by a core development team at the institute, although Guido van
Rossum still holds a vital role in directing its progress.

5.3 Importance of Python


 Python is Interpreted − Python is processed at runtime by the interpreter. You do not
need to compile your program before executing it. This is similar to PERL and PHP.
 Python is Interactive − You can actually sit at a Python prompt and interact with the
interpreter directly to write your programs.
 Python is Object-Oriented − Python supports Object-Oriented style or technique of
programming that encapsulates code within objects.
 Python is a Beginner's Language − Python is a great language for the beginner-level
programmers and supports the development of a wide range of applications from simple
text processing to WWW browsers to games.
5.4 Features of Python

 Easy-to-learn − Python has few keywords, simple structure, and a clearly defined
syntax. This allows the student to pick up the language quickly.
 Easy-to-read − Python code is more clearly defined and visible to the eyes.
 Easy-to-maintain − Python's source code is fairly easy-to-maintain.
 A broad standard library − Python's bulk of the library is very portable and cross-
platform compatible on UNIX, Windows, and Macintosh.
 Interactive Mode − Python has support for an interactive mode which allows interactive
testing and debugging of snippets of code.
 Portable − Python can run on a wide variety of hardware platforms and has the same
interface on all platforms.
 Extendable − You can add low-level modules to the Python interpreter. These modules
enable programmers to add to or customize their tools to be more efficient.
 Databases − Python provides interfaces to all major commercial databases.
 GUI Programming − Python supports GUI applications that can be created and ported
to many system calls, libraries and windows systems, such as Windows MFC,
Macintosh, and the X Window system of Unix.
 Scalable − Python provides a better structure and support for large programs than shell
scripting.

Apart from the above-mentioned features, Python has a big list of good features, few are listed
below −

 It supports functional and structured programming methods as well as OOP.


 It can be used as a scripting language or can be compiled to byte-code for building large
applications.
 It provides very high-level dynamic data types and supports dynamic type checking.
 It supports automatic garbage collection.
 It can be easily integrated with C, C++, COM, ActiveX, CORBA, and Java.

5.5 Libraries used in python

 numpy - mainly useful for its N-dimensional array objects.

 pandas - Python data analysis library, including structures such as dataframes.

 matplotlib - 2D plotting library producing publication quality figures.

 scikit-learn - the machine learning algorithms used for data analysis and data mining
tasks.

Figure : NumPy, Pandas, Matplotlib, Scikit-learn
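
A small illustrative snippet combining these libraries; the label file and column names are hypothetical.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split

# Hypothetical label table: one row per image with its file name and age.
labels = pd.DataFrame({"file": ["a.jpg", "b.jpg", "c.jpg"], "age": [23, 47, 8]})
train_df, val_df = train_test_split(labels, test_size=0.2, random_state=0)

plt.hist(labels["age"], bins=np.arange(0, 101, 5))
plt.xlabel("age (years)")
plt.ylabel("number of images")
plt.title("Age distribution of the dataset")
plt.show()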

CHAPTER 6

IMPLEMENTATION

6.1 GENERAL

Coding:
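
The full code listing is not reproduced here. As an illustration only, a minimal inference sketch consistent with the pipeline described in the design chapter (OpenCV face detection followed by the trained age model) might look like the following; the model file name and parameters are assumptions.

import cv2
import numpy as np
import tensorflow as tf

# Illustrative only: detect faces with OpenCV's Haar cascade and run the
# trained age model on each crop. The model file name is an assumption.
model = tf.keras.models.load_model("age_model.keras")
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        face = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2RGB)
        face = cv2.resize(face, (224, 224)).astype(np.float32) / 255.0
        probs = model.predict(face[np.newaxis], verbose=0)[0]
        pred = int(np.argmax(probs))  # predicted age class or age-group index
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, str(pred), (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("age estimation", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
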
CHAPTER 7

SNAPSHOTS

General:
This project is implemented as an application using Python; the server process is maintained using SOCKET & SERVERSOCKET, and the design part is handled by Cascading Style Sheets.

SNAPSHOTS

CHAPTER 8
SOFTWARE TESTING

8.1 GENERAL
The purpose of testing is to discover errors. Testing is the process of trying to discover
every conceivable fault or weakness in a work product. It provides a way to check the
functionality of components, subassemblies, assemblies, and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its
requirements and user expectations and does not fail in an unacceptable manner. There are
various types of test. Each test type addresses a specific testing requirement.

8.2 DEVELOPING METHODOLOGIES


The test process is initiated by developing a comprehensive plan to test the general
functionality and special features on a variety of platform combinations. Strict quality control
procedures are used. The process verifies that the application meets the requirements specified in
the system requirements document and is bug free. The following are the considerations used to
develop the framework from developing the testing methodologies.

8.3 Types of Tests

8.3.1 Unit testing


Unit testing involves the design of test cases that validate that the internal program logic
is functioning properly, and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application. It is done after the completion of an individual unit, before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform
basic tests at component level and test a specific business process, application, and/or system
configuration. Unit tests ensure that each unique path of a business process performs accurately
to the documented specifications and contains clearly defined inputs and expected results.

8.3.2 Functional test


Functional tests provide systematic demonstrations that functions tested are available as
specified by the business and technical requirements, system documentation, and user manuals.
Functional testing is centered on the following items:
Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures: interfacing systems or procedures must be invoked.

8.3.3 System Test


System testing ensures that the entire integrated software system meets requirements. It tests a
configuration to ensure known and predictable results. An example of system testing is the
configuration oriented system integration test. System testing is based on process descriptions
and flows, emphasizing pre-driven process links and integration points.

8.3.4 Performance Test


The performance test ensures that the output is produced within the time limits, and measures the time taken by the system for compiling, responding to users, and handling requests sent to the system to retrieve results.

8.3.5 Integration Testing


Software integration testing is the incremental integration testing of two or more integrated
software components on a single platform to produce failures caused by interface defects.
The task of the integration test is to check that components or software applications, e.g.
components in a software system or – one step up – software applications at the company level –
interact without error.

8.3.6 Acceptance Testing


User Acceptance Testing is a critical phase of any project and requires significant participation
by the end user. It also ensures that the system meets the functional requirements.

Acceptance testing for Data Synchronization:


 The Acknowledgements will be received by the Sender Node after the Packets are
received by the Destination Node
 The Route add operation is done only when there is a Route request in need
 The Status of Nodes information is done automatically in the Cache Updation process

8.3.7 Build the test plan


Any project can be divided into units that can be processed in detail. A testing strategy is then carried out for each of these units. Unit testing helps to identify possible bugs in the individual components, so the components that have bugs can be identified and rectified.
CHAPTER 9
FUTURE ENHANCEMENT

9.1 FUTURE ENHANCEMENTS:

The next series of experiments explored several model configurations for exact age estimation: simple regression, fine-grained classification with soft labels, and our novel hybrid approach combining regression with soft-label fine-grained classification through our original custom loss. The hybrid approach outperformed both the classification and the regression models.
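
One plausible form of such a combined loss, sketched in Keras purely as an illustration (not necessarily the exact formulation used in the experiments), adds an L1 penalty on the expected age implied by the predicted distribution to the soft-label cross-entropy:

import tensorflow as tf

def combined_age_loss(alpha=1.0, beta=0.1, num_classes=101):
    # Cross-entropy against soft age labels plus an L1 penalty on the expected
    # age implied by the predicted distribution; alpha and beta are assumed
    # weighting factors that would need tuning.
    ages = tf.range(num_classes, dtype=tf.float32)
    cce = tf.keras.losses.CategoricalCrossentropy()
    def loss(y_true, y_pred):
        soft_term = cce(y_true, y_pred)  # y_true: soft label distribution
        true_age = tf.reduce_sum(y_true * ages, axis=-1)
        pred_age = tf.reduce_sum(y_pred * ages, axis=-1)
        reg_term = tf.reduce_mean(tf.abs(true_age - pred_age))
        return alpha * soft_term + beta * reg_term
    return loss

Used with model.compile(loss=combined_age_loss()), it behaves like any built-in Keras loss, provided the labels are supplied as soft distributions over the age classes.
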
CHAPTER 10

CONCLUSION AND REFERENCES

10.1 CONCLUSION

We considered several approaches to the age estimation problem. All evaluated architectures used standard convolutional backbones for feature extraction, while the output head was configured according to the defined task. In all experiments we pretrained the backbone on a large face image dataset. The first approach was based on classification into predefined age groups, and it was evaluated on the Adience dataset. The experiments showed that models using backbones of very different capacity obtained similar results, thereby supporting the use of a lighter model appropriate for embedded implementation. The hybrid approach performed the best, obtaining a state-of-the-art result on the FG-NET dataset. A final experiment was designed to compare the different approaches on a common task and dataset. To that end we adapted the exact-age dataset FG-NET for the age-group estimation task.
10.2 REFERENCES

[1] Y. Fu, G. Guo, and T. S. Huang, ‘‘Age synthesis and estimation via faces: A survey,’’ IEEE
Trans. Pattern Anal. Mach. Intell., vol. 32, no. 11, pp. 1955–1976, Sep. 2010.
[2] H. Han, C. Otto, and A. K. Jain, ‘‘Age estimation from face images: Human vs. machine
performance,’’ in Proc. Int. Conf. Biometrics (ICB), Jun. 2013, pp. 1–8.
[3] V. Carletti, A. Greco, G. Percannella, and M. Vento, ‘‘Age from faces in the deep learning
revolution,’’ IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 9, pp. 2113–2132, Sep. 2020.
[4] O. Agbo-Ajala and S. Viriri, ‘‘Deep learning approach for facial age classification: A survey
of the state-of-the-art,’’ Artif. Intell. Rev., vol. 54, no. 1, pp. 179–213, Jan. 2021.
[5] A. Lanitis, ‘‘Comparative evaluation of automatic age progression methodologies,’’
EURASIP J. Adv. Signal Process., vol. 2008, no. 1, 2008, Art. no. 239480.
[6] R. Angulu, J. R. Tapamo, and A. O. Adewumi, ‘‘Age estimation via face images: A
survey,’’ EURASIP J. Image Video Process., vol. 2018, no. 1, pp. 1–35, Dec. 2018.
[7] A. S. Al-Shannaq and L. A. Elrefaei, ‘‘Comprehensive analysis of the literature for age
estimation from facial images,’’ IEEE Access, vol. 7, pp. 93229–93249, 2019.
[8] A. Othmani, A. R. Taleb, H. Abdelkawy, and A. Hadid, ‘‘Age estimation from faces using
deep learning: A comparative analysis,’’ Comput. Vis. Image Understand., vol. 196, Jul. 2020,
Art. no. 102961.
[9] P. Punyani, R. Gupta, and A. Kumar, ‘‘Neural networks for facial age estimation: A survey
on recent advances,’’ Artif. Intell. Rev., vol. 53, no. 5, pp. 3299–3347, Jun. 2020.
[10] A. A. Shejul, K. S. Kinage, and B. E. Reddy, ‘‘Comprehensive review on facial based
human age estimation,’’ in Proc. Int. Conf. Energy, Commun., Data Anal. Soft Comput.
(ICECDS), Aug. 2017, pp. 3211–3216.
[11] K. ELKarazle, V. Raman, and P. Then, ‘‘Facial age estimation using machine learning
techniques: An overview,’’ Big Data Cognit. Comput., vol. 6, no. 4, p. 128, Oct. 2022.
[12] G. Levi and T. Hassner, ‘‘Age and gender classification using convolutional neural
networks,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. Workshops (CVPRW), Jun.
2015, pp. 34–42.
[13] X. Wang, R. Guo, and C. Kambhamettu, ‘‘Deeply-learned feature for age estimation,’’ in
Proc. IEEE Winter Conf. Appl. Comput. Vis., Jan. 2015, pp. 534–541.
[14] A. Lanitis, C. J. Taylor, and T. F. Cootes, ‘‘Toward automatic simulation of aging effects on
face images,’’ IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 4, pp. 442–455, Apr. 2002.
[15] G. Panis, A. Lanitis, N. Tsapatsoulis, and T. F. Cootes, ‘‘Overview of research on facial
ageing using the FG-NET ageing database,’’ IET Biometrics, vol. 5, no. 2, pp. 37–46, May 2016.
[16] D. Deb, L. Best-Rowden, and A. K. Jain, ‘‘Face recognition performance under aging,’’ in
Proc. IEEE Conf. Comput. Vis. Pattern Recognit. Workshops (CVPRW), Jul. 2017, pp. 548–556.
[17] R. Rothe, R. Timofte, and L. Van Gool, ‘‘Deep expectation of real and apparent age from a
single image without facial landmarks,’’ Int. J. Comput. Vis., vol. 126, pp. 144–157, Apr. 2016.
[18] K. Simonyan and A. Zisserman, ‘‘Very deep convolutional networks for large-scale image
recognition,’’ in Proc. Int. Conf. Learn. Represent. (ICLR), 2015, pp. 1–14. [Online]. Available:
https://arxiv.org/abs/1409.1556
[19] K. Zhang, ‘‘Age group and gender estimation in the wild with deep RoR architecture,’’
IEEE Access, vol. 5, pp. 22492–22503, 2017.
[20] K. Zhang, ‘‘Fine-grained age estimation in the wild with attention LSTM networks,’’ IEEE
Trans. Circuits Syst. Video Technol., vol. 30, no. 9, pp. 3140–3152, Sep. 2020.
[21] P. Rodríguez, G. Cucurull, J. M. Gonfaus, F. X. Roca, and J. González, ‘‘Age and gender
recognition in the wild with deep attention,’’ Pattern Recognit., vol. 72, pp. 563–571, Dec. 2017.
[22] A. Garain, B. Ray, P. K. Singh, A. Ahmadian, N. Senu, and R. Sarkar, ‘‘GRA_Net: A deep
learning model for classification of age and gender from facial images,’’ IEEE Access, vol. 9,
pp. 85672–85689, 2021.
[23] O. Guehairia, F. Dornaika, A. Ouamane, and A. Taleb-Ahmed, ‘‘Facial age estimation using
tensor based subspace learning and deep random forests,’’ Inf. Sci., vol. 609, pp. 1309–1317,
Sep. 2022.
[24] S. Chen, C. Zhang, M. Dong, J. Le, and M. Rao, ‘‘Using ranking-CNN for age estimation,’’
in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jul. 2017, pp. 742–751.
[25] W. Cao, V. Mirjalili, and S. Raschka, ‘‘Rank consistent ordinal regression for neural
networks with application to age estimation,’’ Pattern Recognit. Lett., vol. 140, pp. 325–331,
Dec. 2020.
[26] N.-H. Shin, S.-H. Lee, and C.-S. Kim, ‘‘Moving window regression: A novel approach to
ordinal regression,’’ in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun.
2022, pp. 18739–18748.
[27] X. Geng, C. Yin, and Z.-H. Zhou, ‘‘Facial age estimation by learning from label
distributions,’’ IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 10, pp. 2401–2412, Oct.
2013.
[28] R. Díaz and A. Marathe, ‘‘Soft labels for ordinal regression,’’ in Proc. IEEE/CVF Conf.
Comput. Vis. Pattern Recognit. (CVPR), Jun. 2019, pp. 4733–4742.
[29] G. Antipov, M. Baccouche, S. Berrani, and J. Dugelay, ‘‘Effective training of
convolutional neural networks for face-based gender and age prediction,’’ Pattern Recognit., vol.
72, pp. 15–26, Dec. 2017.
[30] H. Pan, H. Han, S. Shan, and X. Chen, ‘‘Mean-variance loss for deep age estimation from a
face,’’ in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., Jun. 2018, pp. 5285–5294.
[31] Z. Zhao, P. Qian, Y. Hou, and Z. Zeng, ‘‘Adaptive mean-residue loss for robust facial age
estimation,’’ in Proc. IEEE Int. Conf. Multimedia Expo. (ICME), Jul. 2022, pp. 1–6.
[32] Q. Li, J. Wang, Z. Yao, Y. Li, P. Yang, J. Yan, C. Wang, and S. Pu, ‘‘Unimodal-
concentrated loss: Fully adaptive label distribution learning for ordinal regression,’’ in Proc.
IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2022, pp. 20481–20490.
[33] C. Zhang, S. Liu, X. Xu, and C. Zhu, ‘‘C3AE: Exploring the limits of compact model for age estimation,’’ in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2019, pp. 12579–12588.
[34] Y. Deng, S. Teng, L. Fei, W. Zhang, and I. Rida, ‘‘A multifeature learning and fusion network for facial age estimation,’’ Sensors, vol. 21, no. 13, p. 4597, Jul. 2021.
[35] A. Greco, A. Saggese, M. Vento, and V. Vigilante, ‘‘Effective training of convolutional
neural networks for age estimation based on knowledge distillation,’’ Neural Comput. Appl., vol.
34, no. 24, pp. 21449–21464, Dec. 2022.
