Elastic Net Hypergraph Learning for Image Clustering and Semi-Supervised Learning
Q. Liu, Y. Sun, C. Wang, T. Liu, and D. Tao, IEEE Transactions on Image Processing, DOI 10.1109/TIP.2016.2621671
Abstract—Graph models are emerging as a very effective tool for learning the complex structures and relationships hidden in data. Generally, the critical purpose of graph-oriented learning algorithms is to construct an informative graph for image clustering and classification tasks. In addition to the classical K-nearest-neighbor and r-neighborhood methods for graph construction, the l1-graph and its variants are emerging methods for finding the neighboring samples of a center datum, where the corresponding ingoing edge weights are simultaneously derived from the sparse reconstruction coefficients of the remaining samples. However, the pair-wise links of the l1-graph are not capable of capturing the high-order relationships between the center datum and its prominent data in sparse reconstruction. Meanwhile, from the perspective of variable selection, the l1 norm sparse constraint, regarded as a LASSO model, tends to select only one datum from a group of data that are highly correlated and to ignore the others. To cope with both of these drawbacks simultaneously, we propose a new elastic net hypergraph learning model, which consists of two steps. In the first step, a Robust Matrix Elastic Net model is constructed to find the canonically related samples in a somewhat greedy way, achieving the grouping effect by adding the l2 penalty to the l1 constraint. In the second step, a hypergraph is used to represent the high-order relationships between each datum and its prominent samples by regarding them as a hyperedge. Subsequently, a hypergraph Laplacian matrix is constructed for further analysis. New hypergraph learning algorithms, including unsupervised clustering and multi-class semi-supervised classification, are then derived. Extensive experiments on face and handwriting databases demonstrate the effectiveness of the proposed method.

Keywords—Hypergraph, matrix elastic net, group selection, data clustering, semi-supervised learning.

Manuscript received November 02, 2014; revised July 21, 2015, September 14, 2015, and February 19, 2016; accepted October 9, 2016. This work was supported in part by the Natural Science Foundation of China under Grant 61272223, Grant 61300162, and Grant 61672292, in part by the Australian Research Council under Projects DP-140102164 and FT-130101457, and in part by the Natural Science Foundation of Jiangsu Province, China, under Grant BK2012045 and Grant BK20131003.

Q. Liu and Y. Sun are with the Jiangsu Key Laboratory of Big Data Analysis Technology, CICAEET, Nanjing University of Information Science and Technology, Nanjing 210044, China (e-mail: [email protected]; [email protected]).

C. Wang is with Nanjing Technical Vocational College, Nanjing, China (e-mail: [email protected]).

T. Liu and D. Tao are with the Centre for Artificial Intelligence and the Faculty of Engineering and Information Technology, University of Technology Sydney, 81 Broadway Street, Ultimo, NSW 2007, Australia (e-mail: [email protected]; [email protected]).

I. INTRODUCTION

Graph models are widely regarded as an effective tool for representing the association relationships and intrinsic structures hidden in data. Generally, a graph model takes each data point as a vertex and links a pairwise edge to represent the association relationship between two data points. In this way, data clustering is usually formulated as a graph partition problem without any assumption on the form of the clusters [1], [2]. Graphs are also widely used as a basic tool in many machine learning methods such as subspace learning [3], [4], [5], manifold learning [6], [7], [8] and semi-supervised learning [9], [10].

Related work: How to construct an informative graph is a key issue in all graph-based learning methods. The K-Nearest Neighbors (KNN) graph and the r-neighborhood graph are two popular methods for graph construction. KNN connects each vertex to its k nearest neighbors, where k is an integer that controls the local relationships of the data. The r-neighborhood graph connects each center vertex to the vertices falling inside a ball of radius r, where r is a parameter that characterizes the local structure of the data. Although simple, these two methods have some disadvantages. For example, due to the use of a uniform neighborhood size, they cannot produce the datum-adaptive neighborhoods that determine the graph structure, and thus they are unable to capture well the local distribution of the data. To achieve better performance, some similarity measurement functions [11], e.g., the indicator function, the Gaussian kernel function and the cosine distance, are employed to encode the graph edge weights. However, real-world data are often contaminated by noise and corruption, and thereby the similarities estimated by directly measuring corrupted data may seriously deviate from the ground truth.

Recently, Cheng et al. [12] proposed a robust and datum-adaptive method called the l1-graph, in which sparse representation is introduced into graph construction. The l1-graph simultaneously determines both the neighboring samples of a datum and the corresponding edge weights by the sparse reconstruction from the remaining samples, with the objective of minimizing the reconstruction error and the l1 norm of the reconstruction coefficients. Compared with the conventional graphs constructed by the KNN and r-neighborhood methods, the l1-graph has some nice properties, e.g., robustness to noise and the datum-adaptive ability. Inspired by the l1-graph, a non-negative constraint is imposed on the sparse representation coefficients in [13]. Tang et al. constructed a KNN-sparse graph for image annotation by finding datum-wise one-vs-kNN sparse reconstructions of all samples [14]. All these methods use multiple pair-wise edges (i.e., the non-zero prominent coefficients) to represent the relationships between the center datum and the prominent datums. However, the center datum has close relationships with all the prominent datums jointly, and this relationship is high-order rather than pair-wise.
The pair-wise links in the l1-graph are not capable of capturing such high-order relationships, because some valuable information may be lost by breaking a multivariate relationship into multiple pair-wise edge connections. In general, it is crucial to establish effective representations for these high-order relationships in image clustering and analysis tasks [15].

In terms of variable selection using a linear regression model, the l1 norm constrained sparse representation problem in the l1-graph can be regarded as a LASSO problem [16], which takes the center datum as the response and the remaining data as the covariate predictors [17]. According to the extensive studies in [17], [18], the l1 norm in LASSO has the shortcoming that each variable is estimated independently, and therefore the relationships and structures between the variables are not considered. More precisely, if there is a group of highly correlated variables, then LASSO tends to select one variable from the group and ignore the others. In fact, it has been empirically observed that the prediction performance of LASSO is dominated by ridge regression when high correlations exist between the predictors [18]. Intuitively, we expect all the related data points to be selected as a group to predict the response. To this end, group sparsity techniques, e.g., the lp,q mixed norm, are suitable choices, because they favor the selection of multiple correlated covariates to represent the response [19]. However, group sparsity regularization needs to know the grouping information, and in many cases, unfortunately, we are unaware of it.

Motivation: In contrast to a pair-wise graph, a hypergraph is a generalization of a graph in which each edge (called a hyperedge) is capable of connecting more than two vertices [20], [21]. In other words, vertices with similar characteristics can all be enclosed by a hyperedge, and thus the high-order information of the data, which is very useful for learning tasks, can be captured in an elegant fashion. Taking the clustering problem as an example, it is often necessary to consider three or more data points together to determine whether they belong to the same cluster [22], [23]. As a consequence, hypergraphs have gained much attention in recent years. Agarwal et al. [24], [25] applied the hypergraph to data clustering, in which clique averaging is performed to transform a hypergraph into a usual pair-wise graph. Zass and Shashua [26] adopted the hypergraph in image matching by using convex optimization. Hypergraphs were applied to the problem of multilabel learning in [27] and to video segmentation in [28]. In [29], Tian et al. proposed a semi-supervised learning method called HyperPrior to classify gene expression data by using probe alignment as a constraint. The work in [21] presented the basic concept of the hypergraph Laplacian and a hypergraph Laplacian based learning algorithm. In [30], Huang et al. formulated the task of image clustering as a problem of hypergraph partition. In [31], a hypergraph ranking was designed for image retrieval. However, almost all of the above methods use a simple KNN strategy to construct the hyperedges: a hyperedge is generated from the neighborhood relationship between each sample and its K nearest neighbors, which cannot adaptively match the local data distribution. Hong et al. integrated the idea of sparse representation to construct a semantic correlation hypergraph (SCHG) for image retrieval [32], which uses the top K highest sparse coefficients to build a hyperedge. However, such a fixed-order hyperedge still cannot adapt well to the local data distribution. In addition, SCHG also adopts the l1 norm as the sparsity measurement criterion, and so suffers the same shortcomings as LASSO and the l1-graph. In a nutshell, the fundamental problem of an informative hypergraph model is how to define hyperedges that represent the complex relationship information, especially the group structure hidden in the data.

Our work: In this paper, we propose a new elastic net hypergraph learning method for image clustering and semi-supervised classification. Our algorithm consists of two steps. In the first step, we construct a robust matrix elastic net model by adding the l2 penalty to the l1 constraint to achieve the group selection effect. The Least Angle Regression (LARS) [16], [18] algorithm is used to find the canonically related samples and obtain the representation coefficient matrix in a somewhat greedy way, unlike the convex optimization algorithms adopted in [12] and [3]. In the second step, based on the obtained reconstruction, a hyperedge is used to represent the high-order relationship between a datum and its prominent reconstruction samples in the elastic net, resulting in an elastic net hypergraph. A hypergraph Laplacian matrix is then constructed to find the spectrum signature and geometric structure of the data set for subsequent analysis. Compared to previous works, the proposed method can both achieve grouped selection and capture the high-order group information of the data via the elastic net hypergraph. Lastly, new hypergraph learning algorithms, including unsupervised and semi-supervised learning, are derived based on the elastic net hypergraph. Experiments on the Extended Yale B and PIE face databases and on the USPS handwriting database demonstrate the effectiveness of the proposed method. The main innovations of our paper are summarized below:

• A Robust Matrix Elastic Net is designed to find the canonical groups of predictors from the dictionary to reconstruct the response sample. More specifically, if there is a group of samples among which the mutual correlations are very high, our model tends to recognize them as a group and automatically includes the whole group in the model once one of its samples is selected (group selection), which is very helpful for further analysis.

• In order to link a sample with its selected groups of predictors, an elastic net hypergraph model, instead of the traditional pair-wise graph, is proposed, where a hyperedge represents the high-order relationship between one datum and its prominent reconstruction samples in the elastic net. This paper is devoted to constructing an informative hypergraph for image analysis. Our model can effectively represent the complex relationship information, especially the group structure hidden in the data, which is beneficial for the clustering and semi-supervised learning derived upon the constructed elastic net hypergraph.

In the following sections, we first introduce the preliminaries of hypergraphs. Section III details the construction of the elastic net hypergraph. Section IV presents the clustering and semi-supervised learning algorithms defined on the constructed hypergraph, and Section V reports the experimental results and analysis.
Let X = [x1, x2, . . . , xn] ∈ Rd×n denote the data matrix, whose columns are n data points drawn from a d-dimensional feature space. In practice, the data points X may be contaminated by a gross error S,

$$X = X_0 + S, \tag{4}$$

where X0 and X represent the clean data and the observed data respectively, and S = [s1, s2, . . . , sn] ∈ Rd×n is the error matrix. The i-th sample is contaminated by the error si, which can present as noise, missing entries, outliers or corruption [33]. The clean data X0 can then be represented by a linear combination of atoms from the dictionary A = [a1, a2, . . . , am] ∈ Rd×m (m is the number of atoms in A) as

$$X = AZ + S, \tag{5}$$

where Z = [z1, z2, . . . , zn] ∈ Rm×n is the coefficient matrix, and zi is the representation of xi over the dictionary A. The dictionary A is often redundant and over-complete, hence there can be many feasible solutions to problem (5). A popular method is to impose the common l1 sparsity criterion, known as sparse linear representation. Intuitively, the sparsity of the coding coefficient vector can be measured by the l0 norm, which counts the nonzero coefficients in the representation. It has been shown that, under certain conditions, l1 norm optimization provides a sparse solution whose nonzero support is similar to that of the l0 norm optimization [34].

From the viewpoint of variable selection, the sparse linear representation problem can be cast as a problem of sparse covariate selection via a linear regression model, taking the dictionary matrix A as an observation of the covariates and the query matrix X as the response [17]. The l1 norm constrained sparse linear representation can be regarded as a LASSO model, which seeks to predict an output by linearly combining a small subset of the features that describe the data. As a result of efficient optimization algorithms and the well-developed theory for generalization properties and variable selection consistency, l1 norm regularization has become a popular tool for variable selection and model estimation. However, the l1 norm has the shortcoming that each variable is estimated independently, regardless of its position in the input feature vector. If there is a group of variables among which the pair-wise correlations are very high, then LASSO tends to select only one variable from the group and does not care which one is selected; it lacks the ability to reveal the grouping information. It has been empirically observed that if there are high correlations between predictors, the prediction performance of LASSO is dominated by ridge regression. To overcome these limitations, the elastic net adds a quadratic part to the l1 regularization, which can be regarded as a combination of LASSO and ridge regression. Here we take sample-specific corruption as an example: S indicates the phenomenon that a fraction of the data points (i.e., columns xi of the data matrix X) is contaminated by a large error. By using the sample set X itself as the dictionary, the matrix elastic net is modeled by

$$\min_{Z,S} \ \|Z\|_1 + \lambda \|Z\|_F^2 + \gamma \|S\|_{2,1} \quad \text{s.t.} \quad X = XZ + S, \ \operatorname{diag}(Z) = 0, \tag{6}$$

where the "entrywise" l1 norm of the matrix Z is defined by ∥Z∥1 = Σ_{i=1}^{m} Σ_{j=1}^{n} |z_{i,j}|, ∥Z∥F is the Frobenius norm of the matrix Z, and ∥·∥2,1 denotes the l2,1 mixed norm for dealing with sample-specific corruptions, computed as the sum of the l2 norms of the columns of the matrix: ∥S∥2,1 = Σ_{j=1}^{n} ∥sj∥2. λ is the weight parameter of the quadratic part and γ is the regularization parameter that trades off the proportions of XZ and S. The additional constraint diag(Z) = 0 is introduced to avoid the trivial solution of representing a point as a linear combination of itself. In other words, each datum is reconstructed by a linear combination of the remaining samples, which can be used to discover the group structures and relationships hidden in the data. The elastic net regularization encourages the grouping effect, favoring the selection of multiple correlated data points to represent the test sample.

We now solve model (6). First, by replacing S with X − XZ, we can transform Eq. (6) into the following equivalent problem,

$$\min_{Z} \ \|Z\|_1 + \lambda \|Z\|_F^2 + \gamma \|X - XZ\|_{2,1} \quad \text{s.t.} \quad \operatorname{diag}(Z) = 0. \tag{7}$$

This objective yields the elastic net decomposition of all the samples, and it can indeed be solved in a column-by-column fashion; that is, it is equivalent to solving for the elastic net decomposition zi of each sample xi separately. Inspired by [12], we cope with the constraint z_{i,i} = 0 by eliminating the sample xi from the sample matrix X, so that the elastic net decomposition of sample xi can be formulated as

$$\min_{z_i'} \ \|z_i'\|_1 + \lambda \|z_i'\|_2^2 + \gamma \|x_i - B_i z_i'\|_2, \tag{8}$$

where the dictionary matrix Bi = [x1, x2, . . . , x_{i−1}, x_{i+1}, . . . , xn] ∈ Rd×(n−1) and the decomposition coefficient zi′ ∈ R^{n−1}. Eq. (8) is a typical elastic net model as in [18]. Thus, we directly adopt the LARS-EN algorithm [16], [18] to solve Eq. (8), which can compute the entire elastic net regularization path with the computational effort of a single ordinary least squares fit. Since Eq. (8) is a convex problem, LARS-EN provably converges to the global minimizer. After all the samples have been processed, the coefficient matrix is augmented to an n × n matrix by inserting zeros at the diagonal elements. Finally, we obtain the coefficient matrix Z and the clean data X0 = XZ from the given observation matrix X; the gross error S can accordingly be computed as X − XZ. In terms of the reconstruction relationship of each vertex, we can define a hyperedge as the current vertex together with its reconstruction, and predict the cluster or label information through the hypergraph defined on the obtained elastic net representation.
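To make the column-wise solution of Eq. (8) concrete, the sketch below decomposes every sample with scikit-learn's coordinate-descent ElasticNet. Note the hedges: coordinate descent stands in for LARS-EN, a squared loss stands in for the unsquared l2 data term of Eq. (8), and alpha/l1_ratio are generic stand-ins for the trade-offs governed by λ and γ; the function name is ours, not the authors'.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def elastic_net_coefficients(X, alpha=0.05, l1_ratio=0.7):
    """Column-wise elastic net decomposition, a sketch of Eqs. (7)-(8).

    X: d x n matrix of unit-length sample columns.
    Returns an n x n coefficient matrix Z with diag(Z) = 0.
    """
    d, n = X.shape
    Z = np.zeros((n, n))
    for i in range(n):
        mask = np.arange(n) != i              # drop x_i to enforce z_ii = 0
        B = X[:, mask]                        # dictionary B_i of Eq. (8)
        enet = ElasticNet(alpha=alpha, l1_ratio=l1_ratio,
                          fit_intercept=False, max_iter=5000)
        enet.fit(B, X[:, i])                  # reconstruct x_i from the rest
        Z[mask, i] = enet.coef_               # re-insert at the correct rows
    return Z
```

A downstream step can then recover the error estimate as S = X − XZ, matching the discussion above.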
B. Hyperedge construction

Given the data, each sample xi forms a vertex of the hypergraph G and can be represented by the other samples as in Eq. (6), where zi is its vector of sparse coefficients, naturally characterizing the importance of the other samples for the reconstruction of xi. Such information is useful for recovering the clustering relationships among the samples. Although there are many zero components in zi, sample xi is mainly associated with only a few samples that have prominent non-zero coefficients in its reconstruction. Thus, we design a quantitative rule to select the prominent samples and define the incidence matrix H of an ENHG as

$$h(v_i, e_j) = \begin{cases} 1, & \text{if } |z_{ij}| > \theta \\ 0, & \text{otherwise}, \end{cases} \tag{9}$$

where θ is a small threshold; for example, θ can be set to the mean value of |zi|. A vertex vi is thus assigned to hyperedge ej according to whether the reconstruction coefficient z_{ij} exceeds the threshold θ. We take each sample as a centroid and form a hyperedge from the centroid and the most relevant samples selected in the elastic net reconstruction. The number of neighbors selected by Eq. (9) is adaptive to each datum, which is propitious to capturing the local grouping information of non-stationary data.

C. Computation of hyperedge weights

The hyperedge weight also plays an important role in the hypergraph model. In [32], the non-zero coefficients are directly taken to measure the pair-wise similarity between two samples in a hyperedge. This is unreasonable, because the non-zero coefficients naturally represent the reconstruction relationship, not an explicit degree of similarity. In this paper, we take each sparse representation vector zi as the sparse feature of xi, and we measure the similarity between two samples by the dot product of their sparse vectors,

$$M(i, j) = |\langle z_i, z_j \rangle|. \tag{10}$$

The affinity matrix can be calculated as M = Z^T Z, and the hyperedge weight w(ei) is computed as

$$w(e_i) = \sum_{v_j \in e_i,\, j \neq i} h(v_j, e_i)\, M(i, j). \tag{11}$$

Based on this definition, a compact hyperedge (local group) with higher inner-group similarities is assigned a higher weight, and a weighted hypergraph G = (V, E, W) is subsequently constructed. The construction of the ENHG model is summarized in Algorithm 1.

Algorithm 1: The process of constructing an elastic net hypergraph (ENHG)
Input: Data matrix X = [x1, x2, . . . , xn] ∈ Rd×n, regularization parameters λ, γ and threshold θ.
Procedure:
1: Normalize all the samples to zero mean and unit length.
2: Solve the following problem to obtain the optimal solution Z: min_{Z,S} ∥Z∥1 + λ∥Z∥F² + γ∥S∥2,1 s.t. X = XZ + S, diag(Z) = 0.
3: Obtain the incidence matrix H of the ENHG from the reconstruction coefficients Z: h(vi, ej) = 1 if |z_{ij}| > θ, and 0 otherwise.
4: Derive the affinity matrix from the similarity of the reconstruction coefficients: M(i, j) = |⟨zi, zj⟩|.
5: Compute the hyperedge weight w(ei) = Σ_{vj∈ei, j≠i} h(vj, ei) M(i, j).
6: Return the incidence matrix H and the hyperedge weight matrix W of the ENHG.
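A minimal NumPy sketch of steps 3-5 of Algorithm 1 follows; it assumes the coefficient matrix Z from step 2 is available, uses the per-datum mean of |zi| as the threshold θ, and includes each centroid in its own hyperedge. The function name and these settings are illustrative choices, not the authors' exact implementation.

```python
import numpy as np

def build_enhg(Z):
    """Incidence matrix H and hyperedge weights w from the elastic net
    coefficients Z (n x n, diag(Z) = 0); a sketch of Algorithm 1, steps 3-5."""
    A = np.abs(Z)
    theta = A.mean(axis=0)                    # per-datum threshold: mean of |z_i|
    H = (A > theta[None, :]).astype(float)    # Eq. (9): h(v_i, e_j) = 1 iff |z_ij| > theta
    np.fill_diagonal(H, 1.0)                  # each centroid belongs to its own hyperedge
    M = np.abs(Z.T @ Z)                       # Eq. (10): M(i, j) = |<z_i, z_j>|
    w = (H * M).sum(axis=0) - np.diag(M)      # Eq. (11), excluding the j = i term
    return H, w
```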
IV. LEARNING WITH ELASTIC NET HYPERGRAPH

A well-designed graph is critical for graph-oriented learning algorithms. In this section, we briefly introduce how to benefit from the ENHG for clustering and classification tasks. Based on the proposed ENHG model, a hypergraph Laplacian matrix is constructed to find the spectrum signature and geometric structure of the data set for subsequent image analysis. We then formulate two learning tasks, i.e., spectral clustering and semi-supervised classification for image analysis, in terms of operations on our elastic net hypergraph. The principal idea is to perform spectral decomposition on the Laplacian matrix of the hypergraph model to obtain its eigenvectors and eigenvalues [21]. Following [21], our elastic net hypergraph Laplacian matrix is computed as

$$L = I - D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2}, \tag{12}$$

where Dv and De are the diagonal matrices of the vertex degrees and the hyperedge degrees, respectively. Based on the elastic net hypergraph model and its Laplacian matrix, we can design different learning algorithms.

A. Hypergraph spectral clustering

Clustering, i.e., partitioning similar items into dissimilar groups, is widely used in data analysis and is applied in various areas such as statistics, computer science, biology and the social sciences. Spectral clustering is a popular algorithm for this task and is a powerful technique for partitioning simple graphs. Following [21], we develop an ENHG-based spectral clustering method. Its main steps are as follows:

1) Calculate the normalized hypergraph Laplacian matrix by Eq. (12).
2) Calculate the eigenvectors of L corresponding to the first k eigenvalues (sorted in ascending order), denoting these eigenvectors by C = [c1, c2, . . . , ck].
3) Denote the i-th row of C by yi (i = 1, . . . , n), and cluster the points (yi)_{i=1,...,n} in Rk into k clusters with the K-means algorithm.
4) Finally, assign xi to cluster j if the i-th row of the matrix C is assigned to cluster j.
[Fig. 3: Coefficient paths of the predictor samples under the LASSO model (top) and the elastic net model (bottom), plotted against s = |z5|1 / max(|z5|1).]

The hypergraph semi-supervised learning model can be formulated as the following regularization problem,

$$\arg\min_{F} \ R_{\mathrm{emp}}(F) + \lambda\,\Omega(F), \tag{13}$$

where Ω(F) is a regularizer on the hypergraph, Remp(F) is an empirical loss, and λ > 0 is the regularization parameter. The regularizer Ω(F) on the hypergraph is defined by

$$\Omega(F) = \frac{1}{2}\sum_{e \in E}\,\sum_{u,v \in e} \frac{w(e)\,H(u,e)\,H(v,e)}{\delta(e)} \left( \frac{f(u)}{\sqrt{d(u)}} - \frac{f(v)}{\sqrt{d(v)}} \right)^{2} = \operatorname{Tr}(F^{T} L F), \tag{14}$$

where Tr is the matrix trace and L is the normalized hypergraph Laplacian of Eq. (12).
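As a concrete instance of problem (13), choosing the squared empirical loss Remp(F) = ∥F − Y∥F² (an assumption on our part; the paper leaves Remp generic) yields the closed-form minimizer F = (I + λL)⁻¹Y, in the spirit of the hypergraph transduction of [21]. A sketch:

```python
import numpy as np

def enhg_semi_supervised(L, labels, lam=1.0):
    """Sketch of Eq. (13) under an assumed squared loss ||F - Y||_F^2:
    minimizing ||F - Y||_F^2 + lam * Tr(F^T L F) over F yields
    F = (I + lam * L)^{-1} Y.

    labels: length-n integer array with the class id for labeled
    samples and -1 for unlabeled ones.
    """
    n = L.shape[0]
    classes = np.unique(labels[labels >= 0])
    Y = np.zeros((n, classes.size))
    for j, c in enumerate(classes):
        Y[labels == c, j] = 1.0                     # one-hot targets for labeled points
    F = np.linalg.solve(np.eye(n) + lam * L, Y)     # closed-form minimizer of (13)
    return classes[np.argmax(F, axis=1)]            # label each vertex by its top score
```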
[Fig. 4: Reconstruction coefficients of the response image under the LASSO model and our elastic net model, plotted against the sample index.]

A regression method exhibits the grouping effect if the regression coefficients of a group of highly correlated variables tend to be equal (up to a change of sign if negatively correlated). Theorem 1 of [18] quantifies the relationship between the consistency of the coefficient paths of samples xi and xj and their correlation ρ = xi^T xj.
To empirically inspect the group selection effect of our elastic net model, we perform a number of evaluation experiments on the Extended Yale Face Database B [37] and examine the consistency of the solution path. We select the first four individuals as the sample set X. Each individual has 64 near-frontal images under different illuminations. We take each sample as a vertex, so the hypergraph size is equal to the number of training samples, and X is the sample matrix. The evaluation experiment on the fifth face image of the first individual is presented for illustration. The response image (the fifth image) and some of the predictor images (the 6th to 15th) are shown in Fig. 2.

Fig. 3 compares the solution paths of the fifth face image of the first individual (the response) in our elastic net model and the LASSO model. The coefficient paths of the sixth to fifteenth samples (the predictors) in the LASSO and the elastic net model are displayed. We adopt s = |z5|1 / max(|z5|1) as the horizontal axis; the vertical axis represents the coefficient value of each predictor. The LASSO paths are unstable and unsmooth. In contrast, the elastic net has much smoother solution paths, and the coefficient paths of highly related samples tend to coincide with each other, which clearly shows the group selection effect. Fig. 4 presents the reconstruction coefficients of the fifth face image of the first individual using the LASSO model and our elastic net model, respectively. The parameters are set appropriately, such that the two models find roughly the same number of non-zero coefficients. A number of highly correlated samples surrounding the prominent samples are selected in the elastic net, which also demonstrates the group selection effect. By contrast, the prominent samples spread independently in the LASSO model.

[Fig. 5: Robustness and adaptiveness of hyperedge construction in our elastic net hypergraph. A sample is used as the response for illustration and the remaining 639 samples from the first ten individuals are utilized as the dictionary to represent this response sample image. (a) sparse noise and (b) data missing.]

To evaluate the robustness of the hyperedge construction in our elastic net hypergraph, we select the first ten individuals as the sample set. Each individual has 64 samples, so there are 640 samples in total and accordingly 640 vertices in the constructed hypergraph. Among the sample set,
a sample from the first individual is used as the response for illustration and the remaining 639 samples are utilized as the dictionary to represent this response sample image. Fig. 5 shows the results. The horizontal axis indicates the index number of the samples in the dictionary, which ranges from 1 to 639. The vertical axis indicates the distribution of the reconstruction coefficients of the remaining samples in the elastic net, and the response samples contaminated by increasing degrees of corruption (sparse noise and data missing) are shown in the right column. Those samples whose coefficients are beyond the threshold θ, indicated by the red dashed line, are enclosed by the hyperedge. By this selection strategy, the number of neighbors, i.e., the size of the hyperedge in the ENHG, is adaptive to the distinctive neighborhood structure of each datum, which is valuable for applications with non-homogeneous data distributions. Although the sparse error in the response sample increases, the distribution of the prominent samples in the elastic net does not show significant changes, and the indices of the prominent samples beyond the threshold θ remain. The main reason for this stability is that the elastic net model can separate the error from the corrupted sample. Fig. 6 shows the extracted components of some face images. We can see that our model can effectively remove the shadows. Compared with the hypergraphs constructed by the KNN and r-neighborhood methods, the proposed elastic net hypergraph (ENHG) has two inherent advantages. First, the ENHG is robust owing to the elastic net reconstruction from the remaining samples and the explicit consideration of data corruption. Second, the size of each hyperedge is datum-adaptive and automatically determined, instead of the uniform global setting in the KNN and r-neighborhood methods.

[Fig. 6: Some examples of using our model to correct the corruptions in faces. Left: the original data; middle: the corrected data; right: the error.]

V. EXPERIMENT RESULTS AND ANALYSIS

[Fig. 7: Sample images used in our experiments.]

We conduct experiments on three public databases: the Extended Yale Face Database B [37], the PIE face database, and the USPS handwritten digit database [38], which are widely used to evaluate clustering and classification algorithms.

• Extended Yale Face Database B: This database has 38 individuals, and each subject has approximately 64 near-frontal images under different illuminations. Following [37], we crop the images by fixing the eyes and resize them to 32 × 32 pixels, and we select the first 10, 15, 20 and 30 subjects as well as the full subject set for the respective experiments.

• PIE Face Database: This database contains 41,368 images of 68 subjects with different poses, illuminations and expressions. Similar to [39], we select the first 15 and 25 subjects and only use the images of five near-frontal poses (C05, C07, C09, C27, C29) under different illuminations and expressions. Each image is cropped and resized to 32 × 32 pixels.

• USPS Handwritten Digit Database: This database contains ten classes (the digits 0-9) and 9,298 handwritten digit images in total. 200 images are randomly selected from each category for the experiments. All of these images are normalized to 16 × 16 pixels.

Fig. 7 shows sample images from the above three databases. As in [40], we normalize the samples so that they have unit norm. To further evaluate the performance of the proposed methods, we compare them to seven state-of-the-art graph-based algorithms, including:

• G-graph: We adopt the Euclidean distance as the similarity measure and use a Gaussian kernel to compute a weight for each edge of the graph.

• LE-graph: Following [8], we construct the LE-graph used in the Laplacian Eigenmaps algorithm.
TABLE II: Comparison of the clustering performance (accuracy, AC, and normalized mutual information, NMI) of spectral clustering algorithms based on the ENHG and other methods on the Extended Yale Face Database B.

| Cluster# | Metric | G-graph | LE-graph | l1-graph | SSC | LRR | KNN-HG | SCHG | l1-Hypergraph | ENHG |
|---|---|---|---|---|---|---|---|---|---|---|
| K=10 | AC | 0.172 | 0.420 | 0.758 | 0.821 | 0.822 | 0.507 | 0.775 | 0.873 | 0.928 |
| K=10 | NMI | 0.091 | 0.453 | 0.738 | 0.811 | 0.814 | 0.495 | 0.702 | 0.846 | 0.922 |
| K=15 | AC | 0.136 | 0.464 | 0.762 | 0.801 | 0.816 | 0.494 | 0.791 | 0.896 | 0.921 |
| K=15 | NMI | 0.080 | 0.494 | 0.759 | 0.767 | 0.802 | 0.464 | 0.749 | 0.866 | 0.914 |
| K=20 | AC | 0.113 | 0.478 | 0.793 | 0.797 | 0.801 | 0.534 | 0.782 | 0.884 | 0.918 |
| K=20 | NMI | 0.080 | 0.492 | 0.786 | 0.781 | 0.792 | 0.485 | 0.742 | 0.866 | 0.912 |
| K=30 | AC | 0.080 | 0.459 | 0.821 | 0.819 | 0.807 | 0.512 | 0.773 | 0.876 | 0.911 |
| K=30 | NMI | 0.090 | 0.507 | 0.803 | 0.814 | 0.806 | 0.484 | 0.737 | 0.856 | 0.933 |
| K=38 | AC | 0.080 | 0.443 | 0.785 | 0.794 | 0.785 | 0.486 | 0.764 | 0.826 | 0.881 |
| K=38 | NMI | 0.110 | 0.497 | 0.776 | 0.787 | 0.781 | 0.473 | 0.723 | 0.804 | 0.915 |
TABLE III: Comparison of the clustering performance (AC and NMI) of spectral clustering algorithms based on the ENHG and other methods on the PIE database.

| Cluster# | Metric | G-graph | LE-graph | l1-graph | SSC | LRR | KNN-HG | SCHG | l1-Hypergraph | ENHG |
|---|---|---|---|---|---|---|---|---|---|---|
| K=15 | AC | 0.144 | 0.158 | 0.786 | 0.798 | 0.802 | 0.554 | 0.792 | 0.801 | 0.821 |
| K=15 | NMI | 0.090 | 0.114 | 0.762 | 0.803 | 0.813 | 0.503 | 0.769 | 0.775 | 0.839 |
| K=25 | AC | 0.131 | 0.149 | 0.771 | 0.782 | 0.794 | 0.554 | 0.781 | 0.788 | 0.813 |
| K=25 | NMI | 0.087 | 0.106 | 0.753 | 0.766 | 0.760 | 0.503 | 0.763 | 0.757 | 0.828 |
TABLE IV: Comparison of the clustering performance (AC and NMI) of spectral clustering algorithms based on the ENHG and other methods on the USPS database.

| Cluster# | Metric | G-graph | LE-graph | l1-graph | SSC | LRR | KNN-HG | SCHG | l1-Hypergraph | ENHG |
|---|---|---|---|---|---|---|---|---|---|---|
| K=4 | AC | 0.516 | 0.711 | 0.980 | 0.989 | 0.992 | 0.911 | 0.986 | 0.990 | 0.996 |
| K=4 | NMI | 0.482 | 0.682 | 0.968 | 0.969 | 0.971 | 0.803 | 0.970 | 0.972 | 0.984 |
| K=6 | AC | 0.424 | 0.690 | 0.928 | 0.936 | 0.957 | 0.871 | 0.925 | 0.945 | 0.980 |
| K=6 | NMI | 0.351 | 0.542 | 0.917 | 0.928 | 0.937 | 0.762 | 0.916 | 0.927 | 0.942 |
| K=8 | AC | 0.412 | 0.602 | 0.898 | 0.908 | 0.910 | 0.779 | 0.907 | 0.910 | 0.955 |
| K=8 | NMI | 0.252 | 0.503 | 0.905 | 0.894 | 0.903 | 0.641 | 0.882 | 0.910 | 0.911 |
| K=10 | AC | 0.338 | 0.582 | 0.856 | 0.881 | 0.889 | 0.765 | 0.801 | 0.886 | 0.932 |
| K=10 | NMI | 0.213 | 0.489 | 0.872 | 0.866 | 0.871 | 0.636 | 0.822 | 0.870 | 0.874 |
TABLE V: Classification accuracy rates (%) of various graphs under different percentages of labeled samples (shown in parentheses after the dataset name). The bold numbers are the highest accuracy rates under each labeling percentage.

| Dataset | G-graph | LE-graph | l1-graph | KNN-HG | SCHG | l1-Hypergraph | ENHG |
|---|---|---|---|---|---|---|---|
| Extended Yale B (10%) | 66.49 | 70.79 | 76.34 | 71.80 | 77.68 | 82.15 | 90.71 |
| Extended Yale B (20%) | 65.34 | 69.97 | 80.46 | 75.54 | 81.80 | 83.48 | 92.36 |
| Extended Yale B (30%) | 33.72 | 71.85 | 81.90 | 77.67 | 82.84 | 85.36 | 93.94 |
| Extended Yale B (40%) | 66.28 | 71.34 | 83.61 | 80.59 | 83.55 | 86.90 | 94.34 |
| Extended Yale B (50%) | 66.90 | 71.60 | 84.75 | 80.80 | 84.48 | 87.08 | 95.07 |
| Extended Yale B (60%) | 67.52 | 71.48 | 88.48 | 81.79 | 89.46 | 90.42 | 95.28 |
| PIE (10%) | 65.72 | 67.75 | 78.29 | 68.74 | 79.35 | 80.24 | 88.32 |
| PIE (20%) | 66.94 | 69.58 | 82.82 | 70.18 | 84.74 | 84.55 | 94.93 |
| PIE (30%) | 69.89 | 73.48 | 87.94 | 74.39 | 88.78 | 89.29 | 96.47 |
| PIE (40%) | 71.54 | 76.38 | 90.99 | 76.14 | 90.33 | 91.75 | 97.32 |
| PIE (50%) | 73.04 | 78.35 | 93.39 | 78.76 | 92.66 | 93.71 | 97.65 |
| PIE (60%) | 74.91 | 80.44 | 95.00 | 79.95 | 94.12 | 94.87 | 98.44 |
| USPS (10%) | 96.87 | 96.79 | 88.33 | 96.51 | 97.08 | 97.20 | 97.36 |
| USPS (20%) | 97.78 | 97.90 | 91.11 | 98.17 | 98.12 | 98.29 | 98.27 |
| USPS (30%) | 98.45 | 98.47 | 93.08 | 98.78 | 98.87 | 98.85 | 98.90 |
| USPS (40%) | 98.80 | 98.82 | 95.96 | 99.08 | 99.08 | 99.10 | 99.08 |
| USPS (50%) | 99.18 | 99.14 | 97.31 | 99.39 | 99.41 | 99.39 | 99.40 |
| USPS (60%) | 99.35 | 99.28 | 98.86 | 99.51 | 99.50 | 99.52 | 99.54 |
[Fig. 8: Spectral clustering results of our model as a function of λ for several values of γ (γ = 0.08, 0.18, 1.8); both AC and NMI are plotted.]

[Fig. 9: Semi-supervised classification accuracy rates of our model as a function of λ for several values of γ.]
The AC and NMI scores reach their peaks at this point. As γ continues to increase towards 10, the noise component S can no longer be removed well, and the AC and NMI scores decrease slowly. Fig. 11 presents the semi-supervised classification results, which are similar to the spectral clustering results; however, the range of variation for semi-supervised classification is smaller than for spectral clustering. The value of λ controls the proportion of the l2 norm in the constraint. Although the curves of the AC and NMI scores corresponding to each value of λ demonstrate a similar pattern of variability, the maximum scores of each curve are certainly different.

[Fig. 10: Spectral clustering results of our model as a function of γ for several values of λ (λ = 0.01, 1, 10).]

VI. CONCLUSIONS
This paper proposed a novel elastic net hypergraph (ENHG) for two learning tasks, namely spectral clustering and semi-supervised classification, which has three important properties: adaptive hyperedge construction, reasonable hyperedge weight calculation, and robustness to data noise. The hypergraph structure and the hyperedge weights are simultaneously derived by solving a problem of robust elastic net representation of the whole data. The robust elastic net encourages a grouping effect, whereby strongly correlated samples tend to be simultaneously selected or rejected by the model. The ENHG represents the high-order relationship between one datum and its prominent reconstruction samples by regarding them as a hyperedge. Extensive experiments show that the ENHG is more effective and more suitable than other graphs for many popular graph-based machine learning tasks.

REFERENCES

[1] Jianbo Shi and Jitendra Malik, "Normalized cuts and image segmentation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 8, pp. 888-905, 2000.
[2] Andrew Y. Ng, Michael I. Jordan, and Yair Weiss, "On spectral clustering: Analysis and an algorithm," in Advances in Neural Information Processing Systems, vol. 2, pp. 849-856, 2001.
[3] Ehsan Elhamifar and René Vidal, "Sparse subspace clustering: Algorithm, theory, and applications," IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 11, pp. 2765-2781, 2013.
[4] Guangcan Liu, Zhouchen Lin, et al., "Robust recovery of subspace structures by low-rank representation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 1, pp. 171-184, 2013.
[37] Kuang-Chih Lee, Jeffrey Ho, and David J. Kriegman, "Acquiring linear subspaces for face recognition under variable lighting," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 5, pp. 684-698, 2005.
[38] Jonathan J. Hull, "A database for handwritten text recognition research," IEEE Trans. Pattern Anal. Mach. Intell., vol. 16, no. 5, pp. 550-554, 1994.
[39] Deng Cai, Xiaofei He, et al., "Graph regularized non-negative matrix factorization for data representation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 8, pp. 1548-1560, 2011.
[40] John Wright, Yi Ma, Julien Mairal, Guillermo Sapiro, Thomas S. Huang, and Shuicheng Yan, "Sparse representation for computer vision and pattern recognition," Proceedings of the IEEE, vol. 98, no. 6, pp. 1031-1044, 2010.
[41] Shuicheng Yan and Huan Wang, "Semi-supervised learning by sparse representation," in Proceedings of the SIAM International Conference on Data Mining, 2009, pp. 792-801.

CanTian Wang received the M.Sc. degree in system analysis and integration from Nanjing University of Information Science and Technology, China, in 2014. He is currently a Lecturer in the Internet of Things at Nanjing Technical Vocational College. His research interests include hypergraph theory, image segmentation, and Internet of Things technology.