
Age Progression/Regression by Conditional Adversarial Autoencoder

Zhifei Zhang∗, Yang Song∗, Hairong Qi


The University of Tennessee, Knoxville, TN, USA
{zzhang61, ysong18, hqi}@utk.edu
arXiv:1702.08423v1 [cs.CV] 27 Feb 2017

Abstract

“If I provide you a face image of mine (without telling you the actual age when I took the picture) and a large amount of face images that I crawled (containing labeled faces of different ages but not necessarily paired), can you show me what I would look like when I am 80 or what I was like when I was 5?” The answer is probably a “No.” Most existing face aging works attempt to learn the transformation between age groups and thus would require paired samples as well as a labeled query image. In this paper, we look at the problem from a generative modeling perspective such that no paired samples are required. In addition, given an unlabeled image, the generative model can directly produce the image with the desired age attribute. We propose a conditional adversarial autoencoder (CAAE) that learns a face manifold, traversing on which smooth age progression and regression can be realized simultaneously. In CAAE, the face is first mapped to a latent vector through a convolutional encoder, and then the vector is projected to the face manifold conditional on age through a deconvolutional generator. The latent vector preserves personalized face features (i.e., personality) and the age condition controls progression vs. regression. Two adversarial networks are imposed on the encoder and generator, respectively, forcing the generation of more photo-realistic faces. Experimental results demonstrate the appealing performance and flexibility of the proposed framework by comparing with the state of the art and ground truth.

[Figure 1: a query image is projected onto the manifold M for progression/regression and the result is returned.] Figure 1. We assume the face images lie on a manifold (M), and images are clustered according to their ages and personality along different directions. Given a query image, it will first be projected onto the manifold; then, after a smooth transformation on the manifold, the corresponding images will be projected back with aging patterns.

1. Introduction

Face age progression (i.e., prediction of future looks) and regression (i.e., estimation of previous looks), also referred to as face aging and rejuvenation, aims to render face images with or without the “aging” effect while still preserving personalized features of the face (i.e., personality). It has tremendous impact on a wide range of applications, e.g., face prediction of wanted/missing persons, age-invariant verification, entertainment, etc. The area has been attracting a lot of research interest despite the extreme challenge of the problem itself. Most of the challenges come from the rigid requirement on the training and testing datasets, as well as the large variation presented in face images in terms of expression, pose, resolution, illumination, and occlusion.

The rigid requirement on the dataset refers to the fact that most existing works require the availability of paired samples, i.e., face images of the same person at different ages, and some even require paired samples over a long age span, which is very difficult to collect. For example, the largest aging dataset, “Morph” [11], only captured images with an average time span of 164 days for each individual. In addition, existing works also require the query image to be labeled with the true age, which can be inconvenient. Given the training data, existing works normally divide it into different age groups and learn a transformation between the groups; therefore, the query image has to be labeled in order to correctly position it.
∗ With equal contribution.

Although age progression and regression are equally important, most existing works focus on age progression. Very few works can achieve good performance in face rejuvenation, especially in rendering a baby face from an adult, because they mainly rely on surface-based modeling that simply removes the texture from a given image [17, 13, 7]. On the other hand, researchers have made great progress on age progression. For example, the physical model-based methods [26, 25, 13, 21] parametrically model biological facial change with age, e.g., muscle, wrinkle, skin, etc. However, they suffer from complex modeling, require a dataset sufficient to cover a long time span, and are computationally expensive. The prototype-based methods [27, 11, 23, 28] tend to divide training data into different age groups and learn a transformation between groups. However, some can preserve personality but induce severe ghosting artifacts, and others smooth out the ghosting effect but lose personality. Most relax the requirement of paired images over a long time span by learning the aging pattern between two adjacent age groups; nonetheless, they still need paired samples over a short time span.

In this paper, we investigate the age progression/regression problem from the perspective of generative modeling. The rapid development of generative adversarial networks (GANs) has shown impressive results in face image generation [18, 30, 20, 16]. In this paper, we assume that the face images lie on a high-dimensional manifold as shown in Fig. 1. Given a query face, we could find the corresponding point (face) on the manifold. Stepping along the direction of age changing, we will obtain the face images of different ages while preserving personality. We propose a conditional adversarial autoencoder (CAAE) network to learn the face manifold. By controlling the age attribute, it is flexible to achieve age progression and regression at the same time. Because it is difficult to directly manipulate on the high-dimensional manifold, the face is first mapped to a latent vector through a convolutional encoder, and then the vector is projected to the face manifold conditional on age through a deconvolutional generator. Two adversarial networks are imposed on the encoder and generator, respectively, forcing the generation of more photo-realistic faces.

The benefit of the proposed CAAE can be summarized in four aspects. First, the novel network architecture achieves both age progression and regression while generating photo-realistic face images. Second, we deviate from the popular group-based learning, thus not requiring paired samples in the training data or labeled faces in the test data, making the proposed framework much more flexible and general. Third, the disentanglement of age and personality in the latent vector space helps preserve personality while avoiding ghosting artifacts. Finally, CAAE is robust against variations in pose, expression, and occlusion.

2. Related Work

2.1. Age Progression and Regression

In recent years, the study of face age progression has been very popular, with approaches mainly falling into two categories: physical model-based and prototype-based. Physical model-based methods model the biological pattern and physical mechanisms of aging, e.g., the muscles [24], wrinkles [22, 25], facial structure [21, 13], etc., through either parametric or non-parametric learning. However, in order to better model the subtle aging mechanism, they require a large face dataset with a long age span (e.g., from 0 to 80 years old) for each individual, which is very difficult to collect. In addition, physical model-based approaches are computationally expensive.

On the other hand, prototype-based approaches [1, 11] often divide faces into groups by age and take, e.g., the average face of each group as its prototype. Then, the difference between prototypes from two age groups is considered the aging pattern. However, the aged face generated from an averaged prototype may lose the personality (e.g., wrinkles). To preserve the personality, [23] proposed a dictionary learning based method: the age pattern of each age group is learned into the corresponding sub-dictionary. A given face is decomposed into two parts, age pattern and personal pattern; the age pattern is transferred to the target age pattern through the sub-dictionaries, and then the aged face is generated by synthesizing the personal pattern and the target age pattern. However, this approach presents serious ghosting artifacts. The deep learning-based method [28] represents the state of the art, where an RNN is applied on the coefficients of eigenfaces for age pattern transition. All prototype-based approaches perform group-based learning, which requires the true age of testing faces to localize the transition state and thus might not be convenient. In addition, these approaches only provide age progression from younger faces to older ones; to achieve flexible bidirectional age changes, the model may need to be retrained inversely.

Face age regression, which predicts the rejuvenating result, is comparatively more challenging. Most age regression works so far [17, 7] are physical model-based, where the textures are simply removed based on a learned transformation over facial surfaces. Therefore, they cannot achieve photo-realistic results for baby face prediction.

2.2. Generative Adversarial Network

Generating realistically appealing images remained challenging until the rapid advancement of the generative adversarial network (GAN). The original GAN work [8] introduced a novel framework for training generative models. It simultaneously trains two models: 1) the generative model G captures the distribution of training samples and learns to generate new samples imitating the training data, and 2) the discriminative model D discriminates the generated samples from the training samples. G and D compete with each other in a min-max game as in Eq. 1, where z denotes a vector randomly sampled from a certain distribution p(z) (e.g., Gaussian or uniform), and the data distribution is p_data(x), i.e., the training data x ∼ p_data(x):

    min_G max_D  E_{x∼p_data(x)}[log D(x)] + E_{z∼p(z)}[log(1 − D(G(z)))]    (1)

The two parts, G and D, are trained alternately.
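As a toy illustration (our sketch, not the authors' code), the two expectation terms of Eq. 1 can be estimated from mini-batch discriminator outputs; `gan_objective` is a hypothetical helper that assumes D outputs probabilities in (0, 1):

```python
import numpy as np

def gan_objective(d_real, d_fake):
    """Monte-Carlo estimate of the value function in Eq. 1.

    d_real: discriminator outputs D(x) on a batch of real samples.
    d_fake: discriminator outputs D(G(z)) on a batch of generated samples.
    D ascends this objective; G descends the second term only.
    """
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# A discriminator that separates real from fake well attains a higher
# value than one stuck at chance level (D = 0.5 everywhere).
confident = gan_objective(np.full(8, 0.9), np.full(8, 0.1))
chance = gan_objective(np.full(8, 0.5), np.full(8, 0.5))
```

At the saddle point of the game, D(x) = 0.5 for all inputs, so the value settles at 2 log 0.5 ≈ −1.386, which is the `chance` baseline above.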
One of the biggest issues of GAN is that the training process is unstable and the generated images are often noisy and incomprehensible. During the last two years, several approaches [20, 19, 9, 3, 4, 10, 18] have been proposed to improve the original GAN from different perspectives. For example, DCGAN [20] adopted deconvolutional and convolutional neural networks to implement G and D, respectively. It also provided empirical guidance on how to build a stable GAN, e.g., replacing pooling with strided convolutions and using batch normalization. CGAN [19] modified GAN from unsupervised learning into semi-supervised learning by feeding the conditional variable (e.g., the class label) into the data. The low resolution of the generated image is another common drawback of GAN. [4, 10] extended GAN into sequential or pyramid GANs to handle this problem, where the image is generated step by step, and each step utilizes the information from the previous step to further improve the image quality. Some GAN-related works have shown visually impressive results of randomly drawn face images [29, 18, 30, 20, 16]. However, GAN generates images from random noise, so the output image cannot be controlled. This is undesirable in age progression and regression, where we have to ensure the output face looks like the same person as queried.

[Figure 2: traversal in the latent space (personality and age axes) and on the manifold M.] Figure 2. Illustration of traversing on the face manifold M. The input faces x1 and x2 are encoded to z1 and z2 by an encoder E, which represents the personality. Concatenated with random age labels l1 and l2, the latent vectors [z1, l1] and [z2, l2] are constructed, as denoted by the rectangular points. The colors indicate correspondence of personality. Arrows and circle points denote the traversing direction and steps, respectively. Solid arrows direct traversing along the age axis while preserving the personality. The dotted arrow performs a traversal across both the age and personality axes. The traversal in the latent space is mapped to the face manifold M by a generator G, as illustrated by the points and arrows with corresponding markers and colors. Each point on M is a face image, thus achieving age progression and regression.

3. Traversing on the Manifold

We assume the face images lie on a high-dimensional manifold, on which traversing along a certain direction can achieve age progression/regression while preserving the personality. This assumption will be demonstrated experimentally in Sec. 4.2. However, modeling the high-dimensional manifold is complicated, and it is difficult to directly manipulate (traverse) on the manifold. Therefore, we will learn a mapping between the manifold and a lower-dimensional space, referred to as the latent space, which is easier to manipulate. As illustrated in Fig. 2, faces x1 and x2 are mapped to the latent space by E (i.e., an encoder), which extracts the personal features z1 and z2, respectively. Concatenating with the age labels l1 and l2, two points are generated in the latent space, namely [z1, l1] and [z2, l2]. Note that the personality z and age l are disentangled in the latent space; thus, we can simply modify age while preserving the personality. Starting from the red rectangular point [z2, l2] (corresponding to x2) and evenly stepping bidirectionally along the age axis (as shown by the solid red arrows), we obtain a series of new points (red circle points). Through another mapping G (i.e., a generator), those points are mapped to the manifold M, generating a series of face images that present the age progression/regression of x2. By the same token, the green points and arrows demonstrate the age progression/regression of x1 based on the learned manifold and the mappings. If we move the point along the dotted arrow in the latent space, both personality and age will be changed, as reflected on M. We will learn the mappings E and G to ensure the generated faces lie on the manifold, which indicates that the generated faces are realistic and plausible for a given age.

4. Approach

In this section, we first present the pipeline of the proposed conditional adversarial autoencoder (CAAE) network (Sec. 4.1) that learns the face manifold (Sec. 4.2). The CAAE incorporates two discriminator networks, which are the key to generating more realistic faces; Sections 4.3 and 4.4 demonstrate their effectiveness, respectively. Finally, Section 4.5 discusses the differences of the proposed CAAE from other generative models.

4.1. Conditional Adversarial Autoencoder

The detailed structure of the proposed CAAE network is shown in Fig. 3.
[Figure 3: layer-by-layer diagram (not reproduced). The encoder E maps the 128×128×3 input through strided 5×5 convolutions (Conv_1–Conv_4) and a fully connected layer to z (50 units in the diagram); the generator G maps the concatenated [z, l] through fully connected layers and deconvolutions (Deconv_1–Deconv_4) back to a 128×128×3 face, trained with an L2 loss against the input; Dz applies fully connected layers (FC_1–FC_4) to z or to samples from the uniform prior p(z); Dimg resizes the label and concatenates it to its first convolutional layer (64×64×(n+16)), followed by Conv_1–Conv_4 and fully connected layers.]

Figure 3. Structure of the proposed CAAE network for age progression/regression. The encoder E maps the input face to a vector z
(personality). Concatenating the label l (age) to z, the new latent vector [z, l] is fed to the generator G. Both the encoder and the generator
are updated based on the L2 loss between the input and output faces. The discriminator Dz imposes the uniform distribution on z, and the
discriminator Dimg forces the output face to be photo-realistic and plausible for a given age label.
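The latent-vector construction described in the caption (concatenating the label l to z) can be sketched in a few lines of numpy. This is our illustration, not the authors' code: `build_latent` is a hypothetical helper, the ten age groups and the mapping of the one-hot label into [−1, 1] follow Secs. 5.1–5.2, and the 50-unit z follows the figure:

```python
import numpy as np

AGE_GROUPS = ["0-5", "6-10", "11-15", "16-20", "21-30",
              "31-40", "41-50", "51-60", "61-70", "71-80"]

def build_latent(z, group_idx):
    """Concatenate the personality vector z with the age label l.

    z: encoder output, assumed already squashed into [-1, 1] by tanh.
    group_idx: index of the target age category.
    The one-hot label is mapped to {-1, 1}, since -1 plays the role of 0.
    """
    l = np.full(len(AGE_GROUPS), -1.0)
    l[group_idx] = 1.0
    return np.concatenate([z, l])

z = np.tanh(np.random.default_rng(0).normal(size=50))  # 50-d personality code
zl = build_latent(z, AGE_GROUPS.index("61-70"))        # generator input [z, l]
```

Feeding `zl` to G while holding z fixed and sweeping `group_idx` from 0 to 9 is exactly the age-axis traversal of Fig. 2.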

The input and output face images are 128 × 128 RGB images x ∈ R^{128×128×3}. A convolutional neural network is adopted as the encoder. Convolutions of stride 2 are employed instead of pooling (e.g., max pooling) because strided convolution is fully differentiable and allows the network to learn its own spatial downsampling [20]. The output of the encoder, E(x) = z, preserves the high-level personal features of the input face x. The output face conditioned on a certain age can be expressed by G(z, l) = x̂, where l denotes the one-hot age label. Unlike existing GAN-related works, we incorporate an encoder to avoid random sampling of z, because we need to generate a face with a specific personality, which is incorporated in z. In addition, two discriminator networks are imposed on E and G, respectively. Dz regularizes z to be uniformly distributed, smoothing the age transformation. Dimg forces G to generate photo-realistic and plausible faces for arbitrary z and l. The effectiveness of the two discriminators is further discussed in Secs. 4.3 and 4.4, respectively.

4.2. Objective Function

The real face images are supposed to lie on the face manifold M, so the input face image x ∈ M. The encoder E maps the input face x to a feature vector, i.e., E(x) = z ∈ R^n, where n is the dimension of the face feature. Given z and conditioned on a certain age label l, the generator G generates the output face x̂ = G(z, l) = G(E(x), l). Our goal is to ensure the output face x̂ lies on the manifold while sharing the personality and age with the input face x (during training). Therefore, the input and output faces are expected to be similar, as expressed in Eq. 2, where L(·, ·) denotes the L2 norm:

    min_{E,G}  L(x, G(E(x), l))    (2)

Simultaneously, the uniform distribution is imposed on z through Dz, the discriminator on z. We denote the distribution of the training data as p_data(x); then the distribution of z is q(z|x). Assume p(z) is a prior distribution, and let z* ∼ p(z) denote random sampling from p(z). A min-max objective function can be used to train E and Dz:

    min_E max_{Dz}  E_{z*∼p(z)}[log Dz(z*)] + E_{x∼p_data(x)}[log(1 − Dz(E(x)))]    (3)

By the same token, the discriminator on face images, Dimg, and G with condition l can be trained by

    min_G max_{Dimg}  E_{x,l∼p_data(x,l)}[log Dimg(x, l)] + E_{x,l∼p_data(x,l)}[log(1 − Dimg(G(E(x), l), l))]    (4)

Finally, the overall objective function becomes

    min_{E,G} max_{Dz,Dimg}  λ L(x, G(E(x), l)) + γ TV(G(E(x), l))
        + E_{z*∼p(z)}[log Dz(z*)] + E_{x∼p_data(x)}[log(1 − Dz(E(x)))]
        + E_{x,l∼p_data(x,l)}[log Dimg(x, l)] + E_{x,l∼p_data(x,l)}[log(1 − Dimg(G(E(x), l), l))]    (5)

where TV(·) denotes the total variation, which is effective in removing ghosting artifacts. The coefficients λ and γ balance smoothness and high resolution.

Note that the age label is resized and concatenated to the first convolutional layer of Dimg to make it discriminative on both age and human face. By sequentially updating the network with Eqs. 2, 3, and 4, we can finally learn the manifold M, as illustrated in Fig. 4.
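To make the non-adversarial terms of Eq. 5 concrete, here is a small numpy sketch (ours, not the authors' code) of the L2 reconstruction term and of one common discretization of total variation; the paper does not spell out its exact TV formula, so the neighbor-difference form below is an assumption:

```python
import numpy as np

def l2_loss(x, x_hat):
    """Reconstruction term L(x, G(E(x), l)) of Eq. 2, with L(.,.) the L2 norm."""
    return np.sqrt(np.sum((x - x_hat) ** 2))

def tv_loss(img):
    """Total variation TV(.): summed absolute differences between
    vertically and horizontally adjacent pixels (assumed discretization)."""
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

lam, gamma = 100.0, 10.0  # balancing coefficients; values reported in Sec. 5.2

flat = np.zeros((4, 4))                           # perfectly smooth patch
ghost = np.indices((4, 4)).sum(axis=0) % 2 * 1.0  # checkerboard "ghosting"
# TV penalizes the high-frequency checkerboard but leaves the flat patch
# at zero, which is how the gamma-weighted term suppresses ghosting.
smooth_tv, ghost_tv = tv_loss(flat), tv_loss(ghost)
```

With these, the generator's non-adversarial loss would be `lam * l2_loss(x, x_hat) + gamma * tv_loss(x_hat)`, to which the adversarial log-terms of Eqs. 3 and 4 are added.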
[Figure 4: grid of generated faces spanning age (horizontal) and personality (vertical).] Figure 4. Illustration of the learned face manifold M. The horizontal axis indicates the traversal of age, and the vertical axis indicates different personalities.

[Figure 5: 2-D latent distributions and morphing sequences with and without Dz.] Figure 5. Effect of Dz, which forces z to a uniform distribution. For simplicity, z is illustrated in a 2-D space. Blue dots indicate z's mapped from training faces through the encoder. With Dz, the distribution of z approaches uniform. Otherwise, z may present “holes”. The rectangular points denote the corresponding z mapped from the input faces x1 and x2, and the dotted arrow indicates the traversal from z1 to z2. The intermediate points along the traversal are supposed to generate a series of plausible morphing faces from x1 to x2. Without Dz, the learned z presents a sparse distribution along the path of traversal, causing the generated faces to look unreal. The series of figures at the bottom shows the traversal with and without Dz.
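The traversal sketched in Fig. 5 amounts to decoding evenly spaced points between two latent codes. A minimal numpy sketch (ours, with toy 2-D z's as in the figure) of the interpolation step, where each path point would then be fed to the generator G:

```python
import numpy as np

def traverse(z1, z2, steps=5):
    """Evenly spaced latent points from z1 to z2 (the dotted arrow in Fig. 5).
    Decoding each point with G would yield the morphing faces."""
    alphas = np.linspace(0.0, 1.0, steps)
    return np.array([(1.0 - a) * z1 + a * z2 for a in alphas])

z1 = np.array([-0.5, 0.8])  # toy 2-D codes in [-1, 1], as in the figure
z2 = np.array([0.7, -0.2])
path = traverse(z1, z2)
# With Dz imposing a uniform prior, every point on this path stays in a
# densely populated region of the latent space, avoiding the "holes".
```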
4.3. Discriminator on z
The discriminator on z, denoted by Dz, imposes a prior distribution (e.g., a uniform distribution) on z. Specifically, Dz aims to discriminate the z generated by the encoder E, while E is simultaneously trained to generate z that can fool Dz. This adversarial process forces the distribution of the generated z to gradually approach the prior. We use the uniform distribution as the prior, forcing z to evenly populate the latent space with no apparent “holes”. As shown in Fig. 5, the generated z's (depicted by blue dots in a 2-D space) present a uniform distribution under the regularization of Dz, while the distribution of z exhibits a “hole” without the application of Dz. Exhibition of the “hole” indicates that face images generated by interpolating between arbitrary z's may not lie on the face manifold, yielding unrealistic faces. For example, given two faces x1 and x2 as shown in Fig. 5, we obtain the corresponding z1 and z2 by E under the conditions with and without Dz, respectively. Interpolating between z1 and z2 (dotted arrows in Fig. 5), the generated faces are expected to show a realistic and smooth morphing from x1 to x2 (bottom of Fig. 5). However, the morphing without Dz actually presents distorted (unrealistic) faces in the middle (indicated by the dashed box), which corresponds to the interpolated z's passing through the “hole”.

4.4. Discriminator on Face Images

Inheriting the principle of GAN, the discriminator Dimg on face images forces the generator to yield more realistic faces. In addition, the age label is imposed on Dimg to make it discriminative against unnatural faces conditional on age. Although minimizing the distance between the input and output images as expressed in Eq. 2 forces the output face to be close to the real ones, Eq. 2 does not ensure the framework generates plausible faces for unsampled inputs. For example, given a face that is unseen during training and a random age label, the pixel-wise loss could only make the framework generate a face close to the trained ones in a manner of interpolation, causing the generated face to be very blurred. Dimg will discriminate the generated faces from real ones in aspects of realism, age, resolution, etc. Fig. 6 demonstrates the effect of Dimg. Comparing the generated faces with and without Dimg, it is obvious that Dimg assists the framework in generating more realistic faces. The outputs without Dimg can also present aging, but the effect is not as obvious as with Dimg, because Dimg enhances the texture, especially for older faces.

4.5. Differences from Other Generative Networks

In this section, we comment on the similarities and differences of the proposed CAAE with other generative networks, including GAN [8], the variational autoencoder (VAE) [12], and the adversarial autoencoder (AAE) [18].

VAE vs. GAN: VAE uses a recognition network to predict the posterior distribution over the latent variables, while GAN uses an adversarial training procedure to directly shape the output distribution of the network via back-propagation [18]. Because VAE follows an encoding-decoding scheme, we can directly compare the generated
images to the inputs, which is not possible when using a GAN. A downside of VAE is that it uses mean squared error instead of an adversarial network in image generation, so it tends to produce more blurry images [14].

AAE vs. GAN and VAE: AAE can be treated as a combination of GAN and VAE, which maintains the autoencoder network like VAE but replaces the KL-divergence loss with an adversarial network as in GAN. Instead of generating images from random noise as in GAN, AAE utilizes the encoder part to learn the latent variables approximating a certain prior, making the style of generated images controllable. In addition, AAE better captures the data manifold than VAE.

CAAE vs. AAE: The proposed CAAE is more similar to AAE. The main difference is that CAAE imposes discriminators on the encoder and the generator, respectively. The discriminator on the encoder guarantees smooth transitions in the latent space, and the discriminator on the generator assists in generating photo-realistic face images. Therefore, CAAE generates higher-quality images than AAE, as discussed in Sec. 4.4.

[Figure 6: input faces (true ages 35, 29, and 8) and faces generated in four age groups (0–5, 16–20, 41–50, 61–70), without and with Dimg.] Figure 6. Effect of Dimg, which forces the generated faces to be more realistic in aspects of age and resolution. The first column shows the original faces, and their true ages are marked at the top. The right four columns are faces generated through the proposed framework, without (the upper row) or with (the lower row) Dimg. The generated faces fall in four age groups as indicated at the top of each column.

5. Experimental Evaluation

In this section, we first clarify the process of data collection (Sec. 5.1) and the implementation of the proposed CAAE (Sec. 5.2). Then, both qualitative and quantitative comparisons with prior works and the ground truth are performed in Sec. 5.3. Finally, the tolerance to occlusion and variation in pose and expression is illustrated in Sec. 5.4.

5.1. Data Collection

We first collect face images from the Morph dataset [11] and the CACD [2] dataset. The Morph dataset [11] is the largest with multiple ages for each individual, including 55,000 images of 13,000 subjects from 16 to 77 years old. The CACD [2] dataset contains 13,446 images of 2,000 subjects. Because both datasets have limited images of newborn or very old faces, we crawl images from the Bing and Google search engines based on keywords, e.g., baby, boy, teenager, 15 years old, etc. Because the proposed approach does not require multiple faces from the same subject, we simply randomly choose around 3,000 images from the Morph and CACD datasets and crawl 7,670 images from the web. The age and gender of the crawled faces are estimated based on the image caption or the result from an age estimator [15]. We divide age into ten categories, i.e., 0–5, 6–10, 11–15, 16–20, 21–30, 31–40, 41–50, 51–60, 61–70, and 71–80. Therefore, we can use a one-hot vector of ten elements to indicate the age of each face during training. The final dataset consists of 10,670 face images with a uniform distribution over gender and age. We use a face detection algorithm with 68 landmarks [5] to crop out and align the faces, making the training more attainable.

5.2. Implementation of CAAE

We construct the network according to Fig. 3 with a kernel size of 5 × 5. The pixel values of the input images are normalized to [−1, 1], and the output of E (i.e., z) is also restricted to [−1, 1] by the hyperbolic tangent activation function. Then, the desired age label (the one-hot vector) is concatenated to z, constructing the input of G. For fair concatenation, the elements of the label are also confined to [−1, 1], where −1 corresponds to 0. Finally, the output is also in the range [−1, 1] through the hyperbolic tangent function. Normalizing the input may make the training process converge faster. Note that we do not use batch normalization for E and G because it blurs personal features and makes output faces drift far away from the inputs in testing. However, batch normalization makes the framework more stable if applied to Dimg. All intermediate layers of each block (i.e., E, G, Dz, and Dimg) use the ReLU activation function.

In training, λ = 100, γ = 10, and the four blocks are updated alternately through stochastic gradient descent using a mini-batch size of 100. Face and age pairs are fed to the network. After about 50 epochs, plausible generated faces can be obtained. During testing, only E and G are active. Given an input face without a true age label, E maps the image to z. Concatenating an arbitrary age label to z, G will generate a photo-realistic face corresponding to the age
[Figure 7: input faces (ages 7, 5, 8, 0, 35, 45, 5, 3), prior results (age groups 16–20, 58–68, 31–40, 15–20, 61–80, 60–80, 69–80, 69–80), and our results across all ten age groups from 0–5 to 71–80.] Figure 7. Comparison to prior works of face aging. The first column shows input faces, and the second column shows the best aged faces cited from prior works. The remaining columns are our results from both age progression and regression. The red boxes indicate the results comparable to the prior works.

and personality.

5.3. Qualitative and Quantitative Comparison

To evaluate whether the proposed CAAE can generate more photo-realistic results, we compare ours with the ground truth and with the best results from prior works [28, 11, 23, 25], respectively. We choose FGNET [13] as the testing dataset, which has 1002 images of 82 subjects aged from 0 to 69.

Comparison with ground truth: In order to verify whether the personality has been preserved by the proposed CAAE, we qualitatively and quantitatively compare the generated faces with the ground truth. The qualitative comparison is shown in Fig. 8, which shows appealing similarity. To quantitatively evaluate the performance, we pair the generated faces with the ground truth whose age gap is larger than 20 years. There are 856 pairs in total. We design a survey to compare the similarity, in which 63 volunteers participate. Each volunteer is presented with three images: an original image X, a generated image A, and the corresponding ground truth image B in the same age group. They are asked whether the generated image A looks similar to the ground truth B, or whether they are not sure. We ask the volunteers to randomly choose 45 questions and leave the rest blank. We receive 3208 votes in total, with 48.38% indicating that the generated image A is the same person as the ground truth, 29.58% indicating they are not, and 22.04% not sure. The voting results demonstrate that we can effectively generate photo-realistic faces at different ages while preserving personality.

Comparison with prior work: We compare the performance of our method with some prior works [28, 11, 23, 25] for face age progression and with Face Transformer (FT) [6] for face age regression. To demonstrate the advantages of CAAE, we use the same input images collected from those prior works and perform long age span progression. To compare with prior works, we cite their results as shown
[Figure 8: triplets of (input age, generated age group, ground-truth age): 15→31–40 vs. 35, 22→31–40 vs. 38, 17→51–60 vs. 50, 30→6–10 vs. 7, 18→31–40 vs. 35, 35→21–30 vs. 24, 19→6–10 vs. 7, 42→61–70 vs. 61.] Figure 8. Comparison to the ground truth.

in Fig. 7. We also compare with age regression works using the FT demo [6], as shown in Fig. 9. Our results clearly show higher fidelity, demonstrating the capability of CAAE in achieving smooth face aging and rejuvenation. CAAE better preserves the personality even over a long age span. In addition, our results provide richer texture (e.g., wrinkles for old faces), making old faces look more realistic. Another survey is conducted to statistically evaluate the performance compared with prior works, where for each testing image the volunteer is asked to select the better result from CAAE or the prior work, or to indicate that it is hard to tell. We collect 235 paired images of 79 subjects from previous works [28, 11, 23, 25]. We receive 47 responses and 1508 votes in total, with 52.77% indicating CAAE is better, 28.99% indicating the prior work is better, and 18.24% indicating they are equal. This result further verifies the superior performance of the proposed CAAE.

[Figure 9: rows Input, FT, Ours.] Figure 9. Comparison to prior work in rejuvenation. The first row

as shown in Fig. 10. It is worth noting that previous works [28, 11] often apply face normalization to alleviate the variation of pose and expression, but they may still suffer from the occlusion issue. In contrast, the proposed CAAE obtains the generated faces without the need to remove these variations, paving the way to robust performance in real applications.

[Figure 10: input faces and CAAE outputs from younger to older ages.] Figure 10. Tolerance to occlusion and variation in pose and expression. The very left column shows the input faces, and the right columns are faces generated by CAAE from younger to older ages. The first input face presents a relatively dramatic expression, the second input shows only the face profile, and the last one is partially occluded by facial marks.

6. Discussion and Future Works

In this paper, we proposed a novel conditional adversarial autoencoder (CAAE), which first achieves face age progression and regression in a holistic framework. We deviated from the conventional routine of group-based training by learning a manifold, making the aging progression/regression more flexible and manipulable: from an arbitrary query face, without knowing its true age, we can freely produce faces at different ages while preserving the personality. We demonstrated that with two discriminators imposed on the generator and encoder, respectively, the framework generates more photo-realistic faces. The flexibility, effectiveness, and robustness of CAAE have been demonstrated through extensive evaluation.

The proposed framework has great potential to serve as a general framework for face-age-related tasks. More specifically, we trained four sub-networks, i.e., E, G, Dz, and Dimg, but only E and G are utilized in the testing stage. Dimg is trained conditional on age; therefore, it is able to tell whether a given face corresponds to a certain age, which is exactly the task of age estimation. For the encoder E, it maps faces to a latent vector (face feature), which pre-
shows the input faces, the middle row shows the baby faces gener-
serves the personality regardless of age. Therefore, E could
ated by FT [6] and the last row shows our results.
be considered a candidate for cross-age recognition. The
proposed framework could be easily applied to other image
generation tasks, where the characteristics of the generated
5.4. Tolerance to Pose, Expression, and Occlusion
image can be controlled by the conditional label. In the fu-
As mentioned above, the input images have large vari- ture, we would extend current work to be a general frame-
ation in pose, expression, and occlusion. To demonstrate work, simultaneously achieving age progressing (E and G),
the robustness of CAAE, we choose the faces with expres- cross-age recognition (E), face morphing (G), and age esti-
sion variation, non-frontal pose, and occlusion, respectively, mation (Dimg ).
References

[1] D. M. Burt and D. I. Perrett. Perception of age in adult Caucasian male faces: Computer graphic manipulation of shape and colour information. Proceedings of the Royal Society of London B: Biological Sciences, 259(1355):137–143, 1995.
[2] B.-C. Chen, C.-S. Chen, and W. H. Hsu. Cross-age reference coding for age-invariant face recognition and retrieval. In Proceedings of the European Conference on Computer Vision, 2014.
[3] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, 2016.
[4] E. L. Denton, S. Chintala, R. Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems, pages 1486–1494, 2015.
[5] Dlib C++ Library. http://dlib.net/. [Online].
[6] Face Transformer (FT) demo. http://cherry.dcs.aber.ac.uk/transformer/. [Online].
[7] Y. Fu and N. Zheng. M-face: An appearance-based photorealistic model for multiple facial attributes rendering. IEEE Transactions on Circuits and Systems for Video Technology, 16(7):830–842, 2006.
[8] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[9] K. Gregor, I. Danihelka, A. Graves, D. J. Rezende, and D. Wierstra. DRAW: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.
[10] D. J. Im, C. D. Kim, H. Jiang, and R. Memisevic. Generating images with recurrent adversarial networks. arXiv preprint arXiv:1602.05110, 2016.
[11] I. Kemelmacher-Shlizerman, S. Suwajanakorn, and S. M. Seitz. Illumination-aware age progression. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3334–3341. IEEE, 2014.
[12] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[13] A. Lanitis, C. J. Taylor, and T. F. Cootes. Toward automatic simulation of aging effects on face images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(4):442–455, 2002.
[14] A. B. L. Larsen, S. K. Sønderby, and O. Winther. Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300, 2015.
[15] G. Levi and T. Hassner. Age and gender classification using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 34–42, 2015.
[16] M. Y. Liu and O. Tuzel. Coupled generative adversarial networks. In Advances in Neural Information Processing Systems, 2016.
[17] Z. Liu, Z. Zhang, and Y. Shan. Image-based surface detail transfer. IEEE Computer Graphics and Applications, 24(3):30–35, 2004.
[18] A. Makhzani, J. Shlens, N. Jaitly, and I. Goodfellow. Adversarial autoencoders. In International Conference on Learning Representations, 2016.
[19] M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
[20] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In International Conference on Learning Representations, 2016.
[21] N. Ramanathan and R. Chellappa. Modeling age progression in young faces. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 1, pages 387–394. IEEE, 2006.
[22] N. Ramanathan and R. Chellappa. Modeling shape and textural variations in aging faces. In IEEE International Conference on Automatic Face & Gesture Recognition, pages 1–8. IEEE, 2008.
[23] X. Shu, J. Tang, H. Lai, L. Liu, and S. Yan. Personalized age progression with aging dictionary. In Proceedings of the IEEE International Conference on Computer Vision, pages 3970–3978, 2015.
[24] J. Suo, X. Chen, S. Shan, W. Gao, and Q. Dai. A concatenational graph evolution aging model. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(11):2083–2096, 2012.
[25] J. Suo, S.-C. Zhu, S. Shan, and X. Chen. A compositional and dynamic model for face aging. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(3):385–401, 2010.
[26] Y. Tazoe, H. Gohara, A. Maejima, and S. Morishima. Facial aging simulator considering geometry and patch-tiled texture. In ACM SIGGRAPH 2012 Posters, page 90. ACM, 2012.
[27] B. Tiddeman, M. Burt, and D. Perrett. Prototyping and transforming facial textures for perception research. IEEE Computer Graphics and Applications, 21(5):42–50, 2001.
[28] W. Wang, Z. Cui, Y. Yan, J. Feng, S. Yan, X. Shu, and N. Sebe. Recurrent face aging. In IEEE Conference on Computer Vision and Pattern Recognition, pages 2378–2386. IEEE, 2016.
[29] X. Yu and F. Porikli. Ultra-resolving face images by discriminative generative networks. In European Conference on Computer Vision, pages 318–333. Springer, 2016.
[30] J. Zhao, M. Mathieu, and Y. LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.
