
PATTERNS 2012 : The Fourth International Conferences on Pervasive Patterns and Applications

On A Type-2 Fuzzy Clustering Algorithm

Leehter Yao and Kuei-Sung Weng


Dept. of Electrical Engineering
National Taipei University of Technology
Taipei, Taiwan
e-mail: [email protected]; [email protected]

Abstract - A Type-2 fuzzy clustering algorithm that integrates Type-2 fuzzy sets with the Gustafson-Kessel algorithm is proposed in this paper. The proposed Type-2 Gustafson-Kessel algorithm (T2GKA) is essentially a combination of probabilistic and possibilistic clustering schemes. It will be shown that the T2GKA is less susceptible to noise than the Type-1 GKA, since it suppresses the influence of inliers and outliers. The clustering results show the robustness of the proposed T2GKA: a reasonable amount of noise data does not affect its clustering performance. A drawback of the conventional GKA is that it can only find clusters of approximately equal volume. To overcome this difficulty, this work uses an algorithm called the Directed Evaluation Ellipsoid Cluster Volume (DEECV) to effectively evaluate the proper ellipsoid volume. The proposed T2GKA is thus essentially a DEECV-based learning algorithm integrated with the T2GKA clustering scheme. The experimental results show that the T2GKA can learn suitably sized cluster volumes that follow the varying structure of the dataset.

Keywords- ellipsoids; probabilistic; possibilistic; fuzzy c-means; Gustafson-Kessel algorithm; Type-2 fuzzy clustering

I. INTRODUCTION

Clustering shows powerful capabilities for determining a finite number of clusters that partition a dataset. Hruschka et al. [1] presented a survey of evolutionary algorithms for clustering that profiles the clustering area by focusing on the topics that have received the most attention in the literature. Based on partition-based concepts, fuzzy clustering algorithms can be classified into probabilistic fuzzy clustering and possibilistic fuzzy clustering. The fuzzy c-means (FCM) algorithm proposed by Bezdek [2] is a widely used and efficient method for clustering and classification. Because FCM employs the Euclidean norm to measure dissimilarity, it inherently imposes a spheroid onto the clusters regardless of the actual data distribution. In [3] and [4], Gustafson and Kessel proposed the G-K algorithm (GKA), which measures dissimilarity with an adaptive distance norm based on the cluster centers and the covariance matrices of the data points. Because the distance norm employed in the GKA has the Mahalanobis form, the GKA can be considered as utilizing ellipsoids to cluster the prototype data points. However, the GKA assumes fixed

ellipsoid volumes before iteratively calculating the cluster centers.
FCM and GKA are probabilistic fuzzy clustering approaches. In a noisy environment, probabilistic fuzzy clustering forces noise points to belong to one or more clusters, seriously distorting the main dataset structure. To relieve this drawback, Krishnapuram and Keller proposed a possibilistic fuzzy clustering method called the possibilistic c-means (PCM) [5-6]. Possibilistic fuzzy clustering evaluates the membership of a datum to a cluster depending only on the distance of the datum to that cluster, not on its distances to other clusters. Possibilistic fuzzy clustering can alleviate the influence of noise, but it is very sensitive to initialization and sometimes generates coincident clusters.
To avoid the various FCM and PCM problems, Pal et al. proposed a new model called the possibilistic fuzzy c-means (PFCM) [7]. The PFCM is a hybridization of the PCM and FCM models. It solves the noise sensitivity defect of FCM and overcomes the coincident clusters problem of PCM. However, the PFCM model has four parameters that must be learned, and in an uncertain environment searching for the best four parameters is difficult. All of the aforementioned fuzzy clustering methods have membership values of Type-1. In real application domains, the prototype data may involve many uncertain factors. Because the membership functions of Type-1 fuzzy sets are crisp, they cannot directly model these uncertainties. Type-2 membership functions, on the other hand, are themselves fuzzy and can model the uncertainties appropriately.
The Type-2 fuzzy set concept was introduced by Zadeh [8]. The advances in Type-2 fuzzy sets and systems [9] are largely attributed to their three-dimensional membership functions, which can handle more uncertainty in real application problems. Recent research [10-13] has shown that the uncertainty in fuzzy systems can be captured with Type-2 fuzzy sets. In [14], the interval Type-2 fuzzy set was incorporated into the FCM to observe the effect of managing uncertainty from two fuzzifiers. Type-2 fuzzy sets have been used to manage uncertainties in various domains where the performance of Type-1 fuzzy sets is not satisfactory. For instance, [15-17] used Type-2 fuzzy sets for handling uncertainty in pattern recognition. Zarandi et al.


[18] presented a systematic Type-2 fuzzy expert system for diagnosing human brain tumors.
When clustering methods are combined with Type-2 fuzzy sets, the prototype data can be clustered more properly and accurately. We extend the Type-1 membership values to Type-2 by assigning a possibilistic membership function to each Type-1 membership value. Possibility theory, introduced by Zadeh [19], is a mathematical counterpart of probability theory that deals with uncertainty using fuzzy sets. The Type-2 membership values are obtained by weighting each Type-1 membership value with its corresponding secondary membership function. In this paper, we use an unbounded Gaussian (normal distribution) function as the secondary membership function [20-21].
Using the aforementioned concepts, we combine probabilistic and possibilistic methods to build Type-2 fuzzy sets. We present a Type-2 GKA (T2GKA) that is an extension of the conventional GKA. The membership values for each prototype datum are extended to Type-2 fuzzy memberships by assigning a membership grade to the Type-1 memberships. The higher the membership value of a prototype datum, the larger the contribution that datum makes in determining the cluster center location. The experimental results show that the T2GKA is less susceptible to noise than the Type-1 GKA.
To overcome the T2GKA's inability to determine the appropriate ellipsoid size, a Directed Evaluation Ellipsoid Cluster Volume (DEECV) scheme is proposed in this paper, so that the proper cluster volume can be evaluated directly instead of assigning every cluster an equal volume during the clustering learning. The determinant of the Mahalanobis norm-inducing matrix is utilized in this paper to measure the ellipsoid size [22, 23]. The DEECV is developed to intelligently estimate the proper ellipsoid size; with this value determined, the learning efficiency can be further improved. The proposed T2GKA is thus essentially a DEECV-based learning algorithm integrated with the T2GKA clustering scheme.
II. COMBINING PROBABILISTIC AND POSSIBILISTIC METHODS TO BUILD TYPE-2 FUZZY SETS
We focus on providing a Type-2 fuzzy set model that prevents uncertain outliers from affecting the clustering learning results. We build the Type-2 fuzzy sets based on the following concept. For every prototype data point, the ordered set of memberships to each of the clusters $\{\mu_1, \ldots, \mu_c\}$ spans a c-dimensional space, and sets of specific membership values are represented as points in this space. The possibility distribution is obtained by transforming the Type-1 probability distribution through an unbounded Gaussian function centered around the Type-1 membership value. For each given point, the possibilistic membership value indicates the strength of its attribution to any cluster, independent of the rest. Figure 1 shows two points x1 and x2 that have the same Type-1 membership value but different possibility values.


Figure 1. Two points with the same membership value but different possibility values.

The idea in building Type-2 fuzzy sets is based on the fact that, for the same Type-1 membership value, the secondary membership function should weight a larger possibility value more than a smaller possibility value. The secondary membership function proposed here is based on competitive learning theory and originates from the rival-penalized competitive learning (RPCL) in [24]. The basic idea of RPCL is that, for each input, the winner unit is modified to adapt to the input while its rival is delearned with a smaller learning rate; RPCL thus rewards the winner and punishes the rival. A Type-2 fuzzy set is defined as an object $\tilde{A}$ of the following form:

$$\tilde{A} = \{(u, t, \mu_{\tilde{A}}(i))\}, \qquad (1)$$

where $\mu_{\tilde{A}}(i)$ is an unbounded Gaussian function representing the secondary membership function of the element $(u, t)$, $u \in U$, $\mu_{\tilde{A}}(i) \in [0,1]$ in $\tilde{A}$. We relate the Type-1 membership value and the Type-2 membership value with the following equations:

$$u = u \cdot \max(\mu_{\tilde{A}}(i)), \qquad (2)$$

$$t = u \cdot \mu_{\tilde{A}}(i), \qquad (3)$$

where $u$ represents the primary membership value and $t$ represents the Type-2 membership value. Here $\mu_{\tilde{A}}(i)$ is an unbounded Gaussian function representing the secondary membership function:

$$\mu_{\tilde{A}}(i) = \exp\!\left(-\frac{1}{2}\left(\frac{a-b}{b}\right)^{2}\right). \qquad (4)$$
2
Under the aforementioned concepts, reducing the Type-2 fuzzy sets involves complicated operations. For the input data points $x_k$, $k = 1, \ldots, N$, we use the possibility value $p_{ik}$ as the standard deviation of the unbounded Gaussian function, and $(p_{ik} - 1)$ as the distance between $p_{ik}$ and the center of the Gaussian function; the secondary membership function is then designed as

$$e^{-0.5\left(\frac{p_{ik}-1}{p_{ik}}\right)^{2}}.$$
The confidence intervals for varying possibility values $p_{ik}$, built around the same prototype datum $x_{ik}$ with membership value $\mu_{ik}$, are nested. A unimodal numerical possibility distribution may also be viewed as a nested set of confidence intervals. The confidence intervals of the unbounded Gaussian function are $\pm 2\sigma$, corresponding to a 95% confidence level. For example, the Type-1 membership value $\mu = 0.5$ has a secondary membership function with different possibility values, as shown in Fig. 2.

Figure 2. The secondary membership function with different possibility values.
The Type-2 membership values can be obtained using the following equation:

$$t_{ik} = \mu_{ik}\,\mu_{\tilde{A}}(t_{ik}) = \mu_{ik}\,e^{-\frac{1}{2}\left(\frac{p_{ik}-1}{p_{ik}}\right)^{2}}, \qquad (5)$$
where $t_{ik}$ ($\mu_{ik}$) denotes the Type-2 (Type-1) membership, and $p_{ik}$ denotes the membership degree expressing the possibility that the datum is a member of the corresponding cluster. For example, for the Type-1 membership value $\mu_{ik} = 0.5$, the following evaluation illustrates how Type-2 fuzzy sets assign secondary membership values for different possibility values. For the prototype data points $x_k$, $k = 1, \ldots, N$, with Type-1 membership value $\mu_{ik} = 0.5$ and possibility value $p_{ik} = 1.0$, the Type-2 membership value obtained from (5) is $t_{ik} = 0.5$. For the same Type-1 membership value $\mu_{ik} = 0.5$ but possibility value $p_{ik} = 0.1$, we obtain $t_{ik} = 1.2884 \times 10^{-18} \approx 0$.
By the design of our secondary membership function, for the same Type-1 membership value, a larger possibility value yields a larger Type-2 membership value than a smaller possibility value does. Using the aforementioned concepts, we combine the probability and possibility membership values and propose the Type-2 Gustafson-Kessel Algorithm (T2GKA).
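As a quick sanity check of (5), the following Python sketch (the variable names are ours, not the paper's) reproduces both worked cases above:

```python
import numpy as np

def type2_membership(mu, p):
    """Type-2 membership t_ik per (5): weight the Type-1 membership mu_ik
    by the secondary membership exp(-0.5*((p_ik - 1)/p_ik)**2)."""
    return mu * np.exp(-0.5 * ((p - 1.0) / p) ** 2)

print(type2_membership(0.5, 1.0))  # 0.5: full possibility keeps the membership
print(type2_membership(0.5, 0.1))  # ~1.2884e-18: low possibility suppresses it
```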
III. THE TYPE-2 G-K ALGORITHM (T2GKA)

A drawback of the GK algorithm is that it can only find clusters of approximately equal volume. To overcome this drawback, an algorithm called the Directed Evaluation Ellipsoid Cluster Volume (DEECV) is proposed in this paper to effectively evaluate the proper ellipsoid volume. The proposed T2GKA is essentially a DEECV-based learning algorithm integrated with the T2GKA clustering scheme.

A. The Type-2 G-K Algorithm (T2GKA)

Based on the prototype data points $x_k$, $k = 1, \ldots, N$, and given a random initial Type-1 fuzzy partition matrix $U^{(0)} = T^{(0)}$, the T2GKA learns the Type-2 fuzzy partition matrix $T$, the coordinates of all cluster centers $V$, and the norm-inducing matrices $A_i$, $i = 1, \ldots, c$, by minimizing

$$J_{T2GKA}(T, V, A) = \sum_{i=1}^{c}\sum_{k=1}^{N}(t_{ik})^{m} D_{ikA_i}^{2} + \sum_{i=1}^{c}\lambda_i\left(\det(A_i) - \rho_i\right) + \sum_{k=1}^{N}\beta_k\left(\sum_{i=1}^{c} t_{ik} - 1\right), \qquad (6)$$

where $t_{ik}$ has the same meaning of membership and the same constraints as in FCM. The distance between the $k$-th prototype data point and the $i$-th cluster center is defined as the Mahalanobis norm

$$D_{ikA_i} = \left((x_k - v_i)^{T} A_i (x_k - v_i)\right)^{1/2}. \qquad (7)$$

For the $i$-th cluster, the ellipsoid $\varepsilon_i(\cdot)$ is defined as

$$\varepsilon_i(x) = (x - v_i)^{T} A_i (x - v_i) = 1, \quad i = 1, \ldots, c. \qquad (8)$$
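In code, (7) is a one-liner; the sketch below (NumPy, with illustrative names) computes the distance and can be used to test whether a point lies on the ellipsoid of (8):

```python
import numpy as np

def mahalanobis(x, v, A):
    """D_ikAi per (7): sqrt((x - v)^T A (x - v))."""
    d = x - v
    return float(np.sqrt(d @ A @ d))

# A point x lies on the ellipsoid of (8) exactly when mahalanobis(x, v, A) == 1.
```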

Since the volume of $\varepsilon_i(\cdot)$ is inversely proportional to the determinant of $A_i$, $\det(A_i)$ is utilized as a measure of the ellipsoid volume for the T2GKA. If the determinant of $A_i$ is given as $\rho_i$, $A_i$ is constrained by

$$\det(A_i) = \rho_i, \quad \rho_i > 0, \quad i = 1, \ldots, c. \qquad (9)$$

The optimization of $J_{T2GKA}(T, V, A)$ can be solved by differentiation as follows:

$$A_i = \left(\rho_i \det(F_i)\right)^{1/n} F_i^{-1}, \quad i = 1, \ldots, c, \qquad (10)$$

$$F_i = \frac{\sum_{k=1}^{N}(t_{ik})^{m}(x_k - v_i)(x_k - v_i)^{T}}{\sum_{k=1}^{N}(t_{ik})^{m}}. \qquad (11)$$

To avoid the covariance matrix becoming singular in the iterative process, a scaled identity matrix is added to the covariance matrix, i.e.,

$$F_i := (1-\gamma) F_i + \gamma \left(\det(F_0)\right)^{1/n} I, \qquad (12)$$

where $\gamma \in [0,1]$ is a tuning factor with a small value and $F_0$ is the covariance matrix of the whole data set, held fixed.
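Equations (10)-(12) map directly onto a few lines of NumPy. The sketch below uses our own variable names and assumed shapes (X is N x n data, t_i is the length-N membership column for cluster i); the value of gamma is illustrative:

```python
import numpy as np

def norm_inducing_matrix(X, t_i, v_i, rho_i, F0, m=2.0, gamma=0.01):
    """F_i per (11), regularized per (12), and A_i per (10)."""
    n = X.shape[1]
    w = t_i ** m                                   # fuzzy weights (t_ik)^m
    D = X - v_i
    F = (w[:, None] * D).T @ D / w.sum()           # fuzzy covariance (11)
    F = (1.0 - gamma) * F + gamma * np.linalg.det(F0) ** (1.0 / n) * np.eye(n)  # (12)
    return (rho_i * np.linalg.det(F)) ** (1.0 / n) * np.linalg.inv(F)           # (10)
```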

The coordinates of each cluster center as well as the membership elements of the partition matrix are updated using the following equations:

$$v_i = \frac{\sum_{k=1}^{N}(t_{ik})^{m} x_k}{\sum_{k=1}^{N}(t_{ik})^{m}}, \qquad (13)$$



and

$$t_{ik} = \left[\sum_{j=1}^{c}\left(\frac{D_{ikA_i}}{D_{jkA_j}}\right)^{2/(m-1)}\right]^{-1}, \quad 1 \le i \le c,\ 1 \le k \le N. \qquad (14)$$
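Similarly, (13) and (14) can be sketched as vectorized updates over the c x N distance matrix (again with illustrative names; this is not the authors' code):

```python
import numpy as np

def update_centers(X, T, m=2.0):
    """Cluster centers v_i per (13); T is the (c, N) Type-2 partition matrix."""
    W = T ** m
    return (W @ X) / W.sum(axis=1, keepdims=True)

def update_memberships(D, m=2.0):
    """Memberships t_ik per (14); D is the (c, N) matrix of distances (7)."""
    ratio = (D[:, None, :] / D[None, :, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=1)
```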


For each given point, the possibilistic membership value, indicating the strength of the attribution to any cluster, is independent from the rest. We calculate the possibilistic membership values simultaneously using

$$p_{ik} = \frac{1}{1 + \left(\dfrac{D_{ikA_i}^{2}}{\eta_i}\right)^{1/(m-1)}}. \qquad (15)$$

We determine a reasonable value of $\eta_i$ by computing

$$\eta_i = K\,\frac{\sum_{k=1}^{N}\mu_{ik}^{m} D_{ikA_i}^{2}}{\sum_{k=1}^{N}\mu_{ik}^{m}}; \qquad (16)$$

usually $K = 1$ is chosen. For each given point, using the possibilistic membership value, the Type-2 membership values can then be updated using equation (5).
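A compact sketch of (15) and (16) in the same NumPy style (mu holds the Type-1 memberships; K = 1 by default):

```python
import numpy as np

def possibilistic_memberships(D, mu, m=2.0, K=1.0):
    """p_ik per (15) with eta_i per (16); D and mu are (c, N) arrays."""
    w = mu ** m
    eta = K * (w * D ** 2).sum(axis=1) / w.sum(axis=1)                   # (16)
    return 1.0 / (1.0 + (D ** 2 / eta[:, None]) ** (1.0 / (m - 1.0)))    # (15)
```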

B. The Directed Evaluation Ellipsoid Cluster Volume (DEECV)

Without knowing the distribution range of the prototype data points a priori, a tentative value $a$ is first assigned to every volume parameter $\rho_i$, $i = 1, \ldots, c$. With $\rho_i = a$, $i = 1, \ldots, c$, the T2GKA is applied to calculate the tentative ellipsoid $\varepsilon_i$ with center $v_i$, the covariance matrix $F_i$, and the norm-inducing matrix $A_i$, $i = 1, \ldots, c$. Denote $B_i$ as the set of data points belonging to the cluster corresponding to $\varepsilon_i$ and $x_j^i$ as the $j$-th data point belonging to $B_i$. Let $\bar{x}^i$ be the data point with the largest Mahalanobis distance $L_i$ among all data points in $B_i$, i.e.,

$$\bar{x}^i = \operatorname*{Argmax}_{x_j^i \in B_i}\left(\left\|x_j^i - v_i\right\|_{A_i}\right) \qquad (17)$$

and

$$L_i = \max_{x_j^i \in B_i}\left(\left\|x_j^i - v_i\right\|_{A_i}\right), \qquad (18)$$

where $\|\cdot\|_{A_i}$ denotes the Mahalanobis norm with the norm-inducing matrix $A_i$ as in (7). According to (7) and (10),

$$(\bar{x}^i - v_i)^{T}\left(\rho_i \det(F_i)\right)^{1/n} F_i^{-1}(\bar{x}^i - v_i) = L_i. \qquad (19)$$

It is thus obvious that if the initialization process appropriately adjusts the initial ellipsoid volumes so that the farthest data point $\bar{x}^i$ with the largest Mahalanobis norm lies right on the initialized ellipsoid, all of the ellipsoid volumes will be scaled to the range of the solutions. As shown in (8), the data points on the ellipsoids have a Mahalanobis distance of 1. Dividing both sides of (19) by $L_i$,

$$(\bar{x}^i - v_i)^{T}\left(\frac{a}{L_i^{n}}\det(F_i)\right)^{1/n} F_i^{-1}(\bar{x}^i - v_i) = 1. \qquad (20)$$

Therefore, the appropriate initial volume for the $i$-th ellipsoid, leading to the result that all data points are included in the ellipsoid with tentative value $a$, can be defined as

$$\rho_{i\_initial} = \frac{a}{L_i^{n}}, \quad i = 1, \ldots, c. \qquad (21)$$

It is worth noting that if $\bar{x}^i$ is an outlier for the cluster corresponding to $\varepsilon_i$, $L_i$ will be unreasonably large. This results in an inaccurate initial ellipsoid volume $\rho_i$ according to (21). For data with too much noise, an outlier detection scheme is therefore required to determine the outliers and filter them out before applying the directed initialization. Let $\bar{d}_i$ be the average Mahalanobis distance among all data points belonging to $B_i$:

$$\bar{d}_i = \frac{\sum_{j=1}^{|B_i|}\left\|x_j^i - v_i\right\|_{A_i}}{|B_i|}, \qquad (22)$$

where $|B_i|$ denotes the number of data points in $B_i$. For all data points in $B_i$, the farthest data point and its maximum Mahalanobis distance can be determined using (17) and (18), respectively. Outliers distort the clustering learning results and should be removed. With a predetermined threshold $\theta$ (in this paper, we set $\theta = 0.1$) on the possibility membership value $p_{ik}$, any data point $x^i$ belonging to the $i$-th cluster whose possibility membership value is below the threshold and that satisfies the criterion

$$\frac{\left\|x^i - v_i\right\|_{A_i}}{\bar{d}_i} \ge \theta \qquad (23)$$

is considered an outlier and can be removed from $B_i$. The outlier detection scheme of (22) and (23) is recursively applied to every cluster of data points until no outlier is detected. After filtering out the outliers in every cluster, the accuracy of the proper ellipsoid volume calculated according to (21) for the T2GKA's directed evaluation can be greatly improved.
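The whole directed evaluation, including the recursive outlier filter, fits in a short routine. The sketch below is our reading of (17)-(23), with illustrative names; the outlier rule is approximated by dropping points whose normalized distance exceeds a cutoff:

```python
import numpy as np

def deecv_initial_volume(Xi, v_i, A_i, a, cutoff=3.0):
    """rho_i per (21) after recursive outlier removal per (22)-(23).

    Xi: (Ni, n) points of cluster i; a: tentative volume; cutoff: assumed
    normalized-distance threshold (the paper couples it with p_ik < 0.1).
    """
    n = Xi.shape[1]
    while True:
        diff = Xi - v_i
        d = np.sqrt(np.einsum('kj,jl,kl->k', diff, A_i, diff))  # distances per (7)
        keep = d / d.mean() < cutoff                             # (22)-(23)
        if keep.all():
            break
        Xi = Xi[keep]                                            # drop outliers
    return a / d.max() ** n                                      # (18) and (21)
```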

IV. COMPUTER SIMULATIONS

We used the following computational conditions for all datasets: 1) the termination tolerance is $\varepsilon = 10^{-6}$; 2) $D_{ikA}$ for FCM, FCMPCM, and PFCM is the Euclidean norm; 3) $D_{ikA}$ for GKA and T2GKA is the Mahalanobis norm; 4) the number of clusters $c$ is 7 for the 7cluster dataset, 5 for the 5 same-circle and sinusoidal datasets, and 2 for all other datasets.
Example 1: Two artificial 2-dimensional datasets, X400 and X550, are designed. X400 is a mixture of two 2-variate normal distributions with mean vectors $(5.0, 6.0)^{T}$ and $(5.0, 12.0)^{T}$. Each cluster has 200 points, while X550 is an augmented version of X400 with an additional 150 points uniformly distributed over $[0, 15] \times [0, 11]$. For dataset X400, the clustering results in Table I show that all five algorithms produce good terminal centroids.
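For concreteness, the two datasets can be generated along the following lines (a sketch: the paper does not specify the cluster covariances, so unit covariance is assumed here):

```python
import numpy as np

rng = np.random.default_rng(0)
c1 = rng.multivariate_normal([5.0, 6.0], np.eye(2), 200)     # first cluster
c2 = rng.multivariate_normal([5.0, 12.0], np.eye(2), 200)    # second cluster
X400 = np.vstack([c1, c2])
noise = rng.uniform([0.0, 0.0], [15.0, 11.0], (150, 2))      # 150 uniform noise points
X550 = np.vstack([X400, noise])                               # augmented dataset
```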

TABLE I. THE TERMINAL CENTROIDS LEARNED BY FCM, FCMPCM, PFCM, GK, AND T2GKA IN THE DATASETS X400 AND X550, EXAMPLE 1

Clustering Algorithm            X400 centroids (x1, x2)                X550 centroids (x1, x2)
FCM: m=2                        (4.9794, 5.9531), (4.9407, 12.0593)    (5.5711, 5.4143), (5.1885, 11.6395)
FCMPCM: eta=2                   (5.0017, 6.0094), (4.9973, 12.0102)    (5.0076, 6.0091), (4.9968, 12.0103)
PFCM: a=1, b=1, m=2, eta=2      (4.9843, 5.9746), (4.9566, 12.0506)    (5.3716, 5.7308), (5.1281, 11.6642)
PFCM: a=1, b=0.1, m=2, eta=2    (4.9800, 5.9558), (4.9427, 12.0582)    (5.5410, 5.4604), (5.1804, 11.6445)
GKA: m=2                        (4.9782, 5.9538), (4.9397, 12.0568)    (5.1064, 5.4443), (5.5502, 11.4151)
T2GKA: m=2                      (5.0048, 6.0239), (5.0097, 12.0837)    (5.0137, 5.9593), (4.9743, 12.1031)

Figure 3. The T2GKA clustering results with the proper cluster volumes for the dataset X550, Example 1.


When we cluster dataset X550, we hope that the 150 noise points are ignored and the cluster centers are found close to the true centroids $V_{true}$. Table I lists the terminal centroids of dataset X550 for all five algorithms. Because PCM is very sensitive to initialization and sometimes generates coincident clusters, we used the FCM clustering results to initialize PCM; the other four clustering methods were run directly. To roughly assess how each method accounts for inliers and outliers, we estimated $E_A = \left\|V_{true} - V_A\right\|^{2}$, where $A$ denotes FCM, FCMPCM, PFCM, GK, or T2GKA. The results are $E_{FCM} = 0.4173$, $E_{FCMPCM} = 0.0001$, $E_{PFCM} = 0.3714$ ($a=1$, $b=0.1$, $m=2$, $\eta=2$), $E_{PFCM} = 0.1699$ ($a=1$, $b=1$, $m=2$, $\eta=2$), $E_{GKA} = 0.4825$, and $E_{T2GKA} = 0.0066$. The T2GKA clustering results with the proper cluster volumes for dataset X550 are shown in Fig. 3. Comparing the five methods' $E_A$ values, $E_{FCMPCM}$ is smaller than the others, but its membership values for each cluster are independent of the other clusters, so we cannot rely on the membership values to decide which cluster a data point belongs to. Except for $E_{FCMPCM}$, the $E_{T2GKA}$ value is smaller than those of the other methods. The clustering results show the robustness of the proposed T2GKA because a reasonable amount of noise data does not affect its clustering performance.
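Numerically, the reported values are reproduced by averaging the squared centroid errors over the two clusters; e.g., for the T2GKA centroids in Table I (this averaging is our interpretation of the norm in $E_A$):

```python
import numpy as np

V_true = np.array([[5.0, 6.0], [5.0, 12.0]])
V_t2gka = np.array([[5.0137, 5.9593], [4.9743, 12.1031]])   # from Table I (X550)
E = np.mean(np.sum((V_true - V_t2gka) ** 2, axis=1))        # per-cluster average
print(round(E, 4))  # 0.0066, matching the reported E_T2GKA
```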
Example 2: To verify that the proposed method can follow the prototype dataset structure and learn the proper cluster centers, the 5 same-circle dataset was designed, with each cluster containing 300 prototype data points. The 5 same-circle dataset is a mixture of five 2-variate normal distributions with mean vectors $(0.0, 3.0)^{T}$, $(5.0, 3.0)^{T}$, $(0.0, -3.0)^{T}$, $(5.0, -3.0)^{T}$, and $(2.5, 0.0)^{T}$. The T2GKA clustering results with the proper cluster centers for the 5 same-circle dataset are shown in Fig. 4. For this dataset, $E_{FCM} = 0.0042$, $E_{FCMPCM} = 0.0003$, $E_{PFCM} = 0.0039$ ($a=1$, $b=0.1$, $m=2$, $\eta=2$), $E_{PFCM} = 12.2009$ ($a=1$, $b=1$, $m=2$, $\eta=2$), $E_{GKA} = 0.0036$, and $E_{T2GKA} = 0.0026$. Comparing the five methods' $E_A$ values, and again excepting $E_{FCMPCM}$, the $E_{T2GKA}$ value is smaller than those of the other methods. The clustering results show the robustness of the proposed T2GKA because a reasonable amount of noise data does not affect its clustering performance.

Figure 4. Clustering results using 5 ellipsoids for the prototype data points in the dataset 5 same-circle, Example 2.
Example 3: To verify that the proposed method can follow the prototype dataset structure and learn the proper cluster volumes, two artificial datasets named 7cluster and sinusoidal were designed, containing 700 and 200 prototype data points, respectively. The 700 prototype data points in the 7cluster dataset are clustered into 7 clusters with different sizes and orientations, each cluster containing 100 prototype data points; the dataset is a mixture of seven 2-variate distributions with varying deviations, with mean vectors including $(5.0, 1.0)^{T}$, $(1.0, 5.0)^{T}$, $(1.0, 1.0)^{T}$, $(5.0, 5.0)^{T}$, $(2.0, 2.0)^{T}$, and $(-2.0, 2.0)^{T}$. The prototype data points in the sinusoidal dataset are generated by $x_2 = 10^{-4}\sin(0.001 x_1^{2})\,x_1^{3} + \varepsilon$, where $x_1 \in [0, 100]$ and $\varepsilon$ is normally distributed random noise, Normal(0, 25). The T2GKA clustering results with the proper cluster volumes for the 7cluster and sinusoidal datasets are shown in Figs. 5 and 6, respectively. The proposed T2GKA is essentially a DEECV-based learning algorithm integrated with the T2GKA clustering scheme, and the experimental results show that it can learn suitably sized cluster volumes that follow the varying structure of the dataset.
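Under this reading of the generating equation, the sinusoidal dataset can be sketched as follows (assuming Normal(0, 25) means variance 25, i.e., standard deviation 5):

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.uniform(0.0, 100.0, 200)
eps = rng.normal(0.0, 5.0, 200)   # Normal(0, 25): variance 25 -> std 5 (assumed)
x2 = 1e-4 * np.sin(0.001 * x1 ** 2) * x1 ** 3 + eps
sinusoidal = np.column_stack([x1, x2])   # 200 prototype data points
```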

Figure 5. Clustering results using 7 ellipsoids for the prototype data points in the dataset 7cluster, Example 3.

Figure 6. Clustering results using 5 ellipsoids for the prototype data points in the dataset sinusoidal, Example 3.

V. CONCLUSIONS

This paper presented an efficient combined probabilistic and possibilistic method for building Type-2 fuzzy sets. Utilizing this concept, we presented a Type-2 GKA (T2GKA) that is an extension of the conventional GKA. The experimental results showed that the T2GKA is less susceptible to noise than the Type-1 GKA. The clustering results showed the robustness of the proposed T2GKA because a reasonable amount of noise data does not affect its clustering performance.

The DEECV was proposed to effectively evaluate the proper ellipsoid volume. The proposed T2GKA is essentially a DEECV-based learning algorithm integrated with the T2GKA clustering scheme. The experimental results showed that the T2GKA can learn suitably sized cluster volumes that follow the varying structure of the dataset.

REFERENCES

[1] E. R. Hruschka, R. J. G. B. Campello, A. A. Freitas, and A. C. P. L. F. de Carvalho, "A survey of evolutionary algorithms for clustering," IEEE Trans. Syst., Man, Cybern., pt. C, vol. 39, no. 2, pp. 133-155, March 2009.
[2] J. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms, Plenum Press, New York, 1981.
[3] D. E. Gustafson and W. C. Kessel, "Fuzzy clustering with a fuzzy covariance matrix," in Proc. IEEE Conf. Decision Contr., San Diego, CA, pp. 761-766, 1979.
[4] R. Babuska, Fuzzy Modeling for Control, Kluwer Academic Publishers, Massachusetts, 1998.
[5] R. Krishnapuram and J. Keller, "A possibilistic approach to clustering," IEEE Trans. Fuzzy Syst., vol. 1, no. 2, pp. 98-110, May 1993.
[6] R. Krishnapuram and J. Keller, "The possibilistic c-means algorithm: insights and recommendations," IEEE Trans. Fuzzy Syst., vol. 4, no. 3, pp. 385-393, August 1996.
[7] N. R. Pal, K. Pal, J. M. Keller, and J. C. Bezdek, "A possibilistic fuzzy c-means clustering algorithm," IEEE Trans. Fuzzy Syst., vol. 13, no. 4, pp. 517-530, August 2005.
[8] L. A. Zadeh, "The concept of a linguistic variable and its application to approximate reasoning-I," Inform. Sci., vol. 8, no. 3, pp. 199-249, 1975.
[9] J. Mendel, "Advances in Type-2 fuzzy sets and systems," Inform. Sci., vol. 177, pp. 84-110, 2007.
[10] N. N. Karnik, J. M. Mendel, and Q. Liang, "Type-2 fuzzy logic systems," IEEE Trans. Fuzzy Syst., vol. 7, no. 6, pp. 643-658, December 1999.
[11] Q. Liang and J. M. Mendel, "Interval Type-2 fuzzy logic systems: theory and design," IEEE Trans. Fuzzy Syst., vol. 8, no. 5, pp. 535-550, October 2000.
[12] J. M. Mendel, Uncertain Rule-Based Fuzzy Logic Systems: Introduction and New Directions, Prentice-Hall, Upper Saddle River, NJ, 2001.
[13] S. Coupland and R. John, "Geometric Type-1 and Type-2 fuzzy logic systems," IEEE Trans. Fuzzy Syst., vol. 15, no. 1, pp. 3-15, February 2007.
[14] C. Hwang and F. C. H. Rhee, "Uncertain fuzzy clustering: interval Type-2 fuzzy approach to c-means," IEEE Trans. Fuzzy Syst., vol. 15, no. 1, pp. 107-120, February 2007.
[15] H. B. Mitchell, "Pattern recognition using type-II fuzzy sets," Inform. Sci., vol. 170, pp. 409-418, 2005.
[16] J. Zeng and Z. Q. Liu, "Type-2 fuzzy sets for pattern recognition: the state-of-the-art," Journal of Uncertain Systems, vol. 1, no. 3, pp. 163-177, 2007.
[17] J. Zeng, L. Xie, and Z. Q. Liu, "Type-2 fuzzy Gaussian mixture models," Pattern Recognition, vol. 41, pp. 3636-3643, 2008.
[18] M. H. Fazel Zarandi, M. Zarinbal, and M. Izadi, "Systematic image processing for diagnosing brain tumors: a Type-II fuzzy expert system approach," Applied Soft Computing, vol. 11, pp. 285-294, January 2011.
[19] L. A. Zadeh, "Fuzzy sets as a basis for a theory of possibility," Fuzzy Sets and Systems, vol. 1, no. 1, pp. 3-28, 1978.
[20] D. Dubois, L. Foulloy, G. Mauris, and H. Prade, "Probability-possibility transformations, triangular fuzzy sets and probabilistic inequalities," Reliab. Comput., vol. 10, no. 4, pp. 273-297, 2004.
[21] G. Mauris, "Expression of measurement uncertainty in a very limited knowledge context: a probability theory-based approach," IEEE Trans. Instrum. Meas., vol. 56, no. 3, pp. 731-735, June 2007.
[22] L. Yao, "Nonparametric learning of decision regions via the genetic algorithm," IEEE Trans. Syst., Man, Cybern., vol. 26, no. 2, pp. 313-321, April 1996.
[23] L. Vandenberghe, S. Boyd, and S. P. Wu, "Determinant maximization with linear matrix inequality constraints," SIAM J. Matrix Anal. Appl., vol. 19, no. 2, pp. 499-533, 1998.
[24] L. Xu, A. Krzyzak, and E. Oja, "Rival penalized competitive learning for clustering analysis, RBF net, and curve detection," IEEE Trans. Neural Netw., vol. 4, no. 4, pp. 636-649, July 1993.