Cao 2012
1. Introduction
There is no doubt that credit-risk assessment has become an increasingly important
area for financial institutions. All banking institutions and their regulators attempt
to search for a precise internal credit system to measure the credit quality of their
borrowers. Effective credit-risk assessment has become a crucial factor for gaining
competitive advantages in credit markets, which can help financial institutions to
grant credit to credit-worthy customers and reject non-credit-worthy customers to
reduce loss.
858 J. Cao et al.
banking sector has been simple, relying mostly on subjective judgment. In addition,
it is difficult to grasp the changes of loans in the future and to control loan risk, so the
application has been limited.1
Numerous methods have been put forward to construct a satisfactory multiclass
model to evaluate loan risk. In particular, statistical methods, artificial intelligence,
machine-learning techniques (e.g., artificial neural networks (ANNs) and support-vector
machines (SVMs)), as well as other hybrid approaches have been used to solve
the five-category loan classification (FCLC) problem. Xue and Ke2 proposed a two-stage
credit-evaluation model combining rough sets and Elman neural networks to study
five-category loan-risk classification in China's commercial banks. An integrated model
of rough sets and a BP neural network for FCLC has been proposed by Ke and Feng.1
In light of the present status of China's commercial banks, Peng et al.3 proposed a
multiclass logistic regression model for FCLC. An SVM ensemble method based on the
fuzzy integral4 has been presented to deal with five-category loan-risk classification.
This method aggregates the outputs of the component SVMs, weighting each
component by an importance measure derived from fuzzy logic.
SVM, one of the newer techniques for pattern classification, has been successfully
and widely used in pattern recognition, regression estimation, probabilistic density
estimation, time-series prediction, software-defect prediction and credit-risk
evaluation.5-10 For binary classification, SVM constructs an optimal separating
hyperplane between the positive and negative classes with the maximal margin. It can
be formulated as a quadratic programming problem involving inequality constraints.
In most of these applications, SVM's generalization performance either matches or is
significantly better than that of other competing methods.11 The least-squares SVM
(LS-SVM) was proposed more recently,12,13 and involves equality constraints only.
Hence, the solution is obtained by solving a system of linear equations. Extensive
empirical study12 has shown that LS-SVM is comparable to SVM in terms of
generalization performance.
Although classical SVM and LS-SVM models have good classification performance,
they are sensitive to sample and parameter settings. Moreover, classical SVM
is a binary classification method, while many research problems are multiclass
classification problems, such as five-category loan classification, image recognition,
and fault diagnosis. Thus, how to apply SVM to multiclass classification
effectively is one of the most important issues.
Novel Five-Category Loan-Risk Evaluation Model 859
In recent years, multiclass SVM algorithms have been applied to deal with multiclass
classification. One approach is to treat the issue as a collection of binary
classification problems; methods such as 1-v-r (one-versus-rest) and 1-v-1
(one-versus-one) have been used widely in the SVM literature to solve multiclass
classification problems. Another way to solve multiclass problems is to construct a
decision function that considers all classes at once.14,15
Several multiclass SVM models16 have been constructed to study the credit-scoring
problem, and comparisons have been made among these algorithms. For the consumer
behavior of credit-card customers, a credit-card behavior-evaluation model based on
genetic algorithms and multiclass SVM has been constructed.17 Kim and Ahn18 proposed a
Int. J. Info. Tech. Dec. Mak. 2012.11:857-874. Downloaded from www.worldscientific.com
by UNIVERSITY OF NEW ENGLAND LIBRARIES on 01/23/15. For personal use only.
new credit-rating model using multiclass SVMs with an ordinal pairwise partitioning
strategy.
However, the literature on multiclass SVM in commercial-bank credit-risk
assessment is still relatively sparse in the field of credit-risk
evaluation. Additionally, it has not been long since FCLC approaches were
introduced in China's commercial banks, and loan credit data are insufficient. Thus,
there are few evaluation models for five-category credit-risk classification in
commercial banks. Constructing effective and practical loan-risk rating tools that
predict the real condition of loans is therefore of great significance for improving
credit-risk management.
In this paper, the traditional binary classification model is extended to the
multiclass case, making it suitable for the FCLC problem in China's commercial banks.
LS-SVM is employed to establish sub-classifiers using the 1-v-1 technique. The
method takes into account both the classification results and the importance of each
sub-classifier to the final decision. To achieve high classification performance,
the parameters of the 1-v-1 LS-SVM are optimized by an improved particle-swarm
optimization (PSO) algorithm. Empirical studies have been carried out, and the
results demonstrate that the model performs well.
The rest of this paper is organized as follows. In Sec. 2, the methodologies are
first introduced, including LS-SVM, multiclass SVM and the improved PSO, and then
the framework of the PSO-improved 1-v-1 LS-SVM model and related strategies are
illustrated. To verify the effectiveness of the proposed methods, empirical analysis of
the models is given in Sec. 3. In Sec. 4, conclusions and a discussion are
presented.
2. Methodology
2.1. Least-squares support-vector machines
LS-SVM is a variant of SVM which leads to solving a linear Karush-Kuhn-Tucker
(KKT) system.20 A nice aspect of nonlinear SVM is that one solves nonlinear
regression problems by means of convex quadratic programs. LS-SVM differs
from SVM in that the quadratic program is transformed into a system of linear
equations, where the nonlinear mapping φ(·) maps the input data into a
high-dimensional feature space.
The optimization of LS-SVM applies the approximate function:

\[
\min_{w,b,\xi}\; J(w,\xi) = \frac{1}{2} w^{T} w + \frac{C}{2} \sum_{i=1}^{n} \xi_i^{2}
\tag{2}
\]
\[
\text{subject to}\quad y_i = \varphi(x_i)^{T} w + b + \xi_i,\qquad i = 1,\ldots,n,
\]
where ξ_i ≥ 0 is the non-negative slack variable and C > 0 is a penalty parameter on
the training error. This optimization model can be solved using the Lagrangian
method, which is almost equivalent to the method for solving the optimization
problem in the separable case:

\[
L(w, b, \xi; \alpha) = \frac{1}{2}\|w\|^{2} + \frac{C}{2} \sum_{i=1}^{n} \xi_i^{2}
- \sum_{i=1}^{n} \alpha_i \left( \varphi(x_i)^{T} w + b + \xi_i - y_i \right).
\tag{3}
\]
According to Eq. (5), the following linear equation is obtained by eliminating the
parameters w and ξ:

\[
\begin{bmatrix} 0 & e^{T} \\ e & \Omega + C^{-1} I \end{bmatrix}_{(n+1)\times(n+1)}
\begin{bmatrix} b \\ \alpha \end{bmatrix}
=
\begin{bmatrix} 0 \\ y \end{bmatrix},
\tag{6}
\]

where e = (1, \ldots, 1)^{T} and Ω is the kernel matrix with entries
\(\Omega_{ij} = \varphi(x_i)^{T}\varphi(x_j) = K(x_i, x_j)\).
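As an illustrative sketch (not the authors' implementation), training an LS-SVM amounts to assembling and solving this (n+1)×(n+1) linear system. The NumPy code below shows the idea; the RBF kernel and the toy hyperparameters C and σ are assumptions for illustration:

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Pairwise squared distances -> RBF Gram matrix Omega
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_train(X, y, C=10.0, sigma=1.0):
    n = len(y)
    Omega = rbf_kernel(X, sigma)
    # Block system: [[0, e^T], [e, Omega + I/C]] @ [b; alpha] = [0; y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = Omega + np.eye(n) / C
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # b, alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    # Decision function: f(x) = sum_i alpha_i K(x_i, x) + b
    sq_tr = np.sum(X_train**2, axis=1)
    sq_new = np.sum(X_new**2, axis=1)
    d2 = sq_tr[:, None] + sq_new[None, :] - 2 * X_train @ X_new.T
    K = np.exp(-d2 / (2 * sigma**2))
    return np.sign(alpha @ K + b)
```

Note that, unlike standard SVM training, no quadratic-programming solver is needed; a single call to a dense linear solver suffices.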
The kernel functions used most frequently are the polynomial, sigmoid, and RBF
kernels. The RBF is applied most frequently because it can classify multi-dimensional
data, unlike a linear kernel function. Additionally, the RBF has fewer parameters
than a polynomial kernel. Extensive empirical study has shown that the RBF kernel
function has better generalization performance.15 Thus, the RBF kernel has been
chosen, and its expression is described as follows:
\[
K(x_i, x_j) = \exp\!\left( -\frac{\|x_i - x_j\|^{2}}{2\sigma^{2}} \right),
\tag{8}
\]

where σ is the parameter that controls the width of the RBF kernel.
(1) 1-v-r SVM

The 1-v-r (one-versus-rest) method constructs k classifiers, where the ith classifier
separates the ith class from all the remaining classes by solving:

\[
\min_{w^{i}, b^{i}, \xi^{i}} \; \frac{1}{2} (w^{i})^{T} w^{i} + C \sum_{j=1}^{l} \xi_j^{i}
\]
\[
\text{s.t.}\quad (w^{i})^{T} \varphi(x_j) + b^{i} \ge 1 - \xi_j^{i}, \quad \text{if } y_j = i,
\tag{9}
\]
\[
\phantom{\text{s.t.}\quad} (w^{i})^{T} \varphi(x_j) + b^{i} \le -1 + \xi_j^{i}, \quad \text{if } y_j \ne i,
\]
\[
\phantom{\text{s.t.}\quad} \xi_j^{i} \ge 0, \quad j = 1, \ldots, l,
\]

where the training data x_j are mapped to a higher-dimensional space by the function
φ, and C is the penalty parameter.
(2) 1-v-1 SVM

Another major method is called the one-against-one method.21 This method constructs
k(k-1)/2 classifiers, where each one is trained on data from two classes. For
training data from the ith and the jth classes, the following binary classification
problem is solved:
\[
\min_{w^{ij}, b^{ij}, \xi^{ij}} \; \frac{1}{2} (w^{ij})^{T} w^{ij} + C \sum_{t} \xi_t^{ij}
\]
\[
\text{s.t.}\quad (w^{ij})^{T} \varphi(x_t) + b^{ij} \ge 1 - \xi_t^{ij}, \quad \text{if } y_t = i,
\tag{10}
\]
\[
\phantom{\text{s.t.}\quad} (w^{ij})^{T} \varphi(x_t) + b^{ij} \le -1 + \xi_t^{ij}, \quad \text{if } y_t = j,
\]
\[
\phantom{\text{s.t.}\quad} \xi_t^{ij} \ge 0.
\]
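Each of the k(k-1)/2 pairwise subproblems sees only the samples of its two classes, relabeled +1 and -1. A minimal sketch of how these training subsets could be formed (a hypothetical helper, not from the paper):

```python
from itertools import combinations

def pairwise_training_sets(X, y, classes):
    """For each class pair (i, j), keep only samples of those two classes
    and relabel them +1 (class i) / -1 (class j)."""
    sets = {}
    for i, j in combinations(classes, 2):
        Xs, ys = [], []
        for xk, yk in zip(X, y):
            if yk == i:
                Xs.append(xk); ys.append(1)
            elif yk == j:
                Xs.append(xk); ys.append(-1)
        sets[(i, j)] = (Xs, ys)
    return sets
```

For k = 5 loan categories this yields 10 binary training sets, one per sub-classifier.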
There are different methods for doing the future testing after all k(k-1)/2
classifiers are constructed. After some tests, we decided to use the following voting
strategy.22
If sign((w^{ij})^T φ(x) + b^{ij}) determines that x is in the ith class, the vote for
the ith class is increased by one; otherwise, the vote for the jth class is increased
by one. Then we predict that x is in the class with the largest vote. The voting
approach described above is also called the "Max Wins" strategy. In case two classes
have identical votes, though it may not be a good strategy, the one with the smaller
index is simply selected.
Practically, we solve the dual of Eq. (10), which has the same number of variables
as the number of data points in the two classes. Hence, if on average each class has
l/k data points, we have to solve k(k-1)/2 quadratic programming problems, each of
which has about 2l/k variables.
Each particle changes its velocity according to the cognition and social parts as
follows:

\[
v_{k+1} = \chi \left[ \omega(t) v_k + c_1 r_1 (pbest_k - x_k) + c_2 r_2 (gbest_k - x_k) \right],
\tag{12}
\]
\[
x_{k+1} = x_k + v_{k+1},
\tag{13}
\]
\[
\chi = \frac{2}{\left| 2 - c - \sqrt{c^{2} - 4c} \right|}, \qquad
\omega(t) = 0.9 - 0.5\,\frac{t}{T_{\max}},
\]

where c_1 indicates the cognition learning factor, c_2 indicates the social learning
factor, r_1 and r_2 are random numbers uniformly distributed in U(0,1), and T_max is
the maximum number of iterations. As t increases, the inertia weight ω decreases from
0.9 to 0.4.
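A minimal sketch of one swarm update under Eqs. (12) and (13). It assumes c = c_1 + c_2 inside the constriction factor; the defaults c_1 = c_2 = 2.05 are an illustrative assumption (giving the familiar χ ≈ 0.73), not the paper's exact setting:

```python
import random

def pso_step(swarm, vel, pbest, gbest, t, T_max, c1=2.05, c2=2.05):
    """One velocity/position update: constriction factor chi combined with
    a linearly decreasing inertia weight w(t) = 0.9 - 0.5 * t / T_max."""
    c = c1 + c2                      # assume c > 4 so the sqrt is real
    chi = 2.0 / abs(2.0 - c - (c * c - 4.0 * c) ** 0.5)
    w = 0.9 - 0.5 * t / T_max
    for i in range(len(swarm)):
        for d in range(len(swarm[i])):
            r1, r2 = random.random(), random.random()
            vel[i][d] = chi * (w * vel[i][d]
                               + c1 * r1 * (pbest[i][d] - swarm[i][d])
                               + c2 * r2 * (gbest[d] - swarm[i][d]))
            swarm[i][d] += vel[i][d]
```

In the full algorithm this step is iterated, re-evaluating each particle's fitness and refreshing pbest and gbest after every update.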
The k(k-1)/2 sub-classifiers are used to judge an unknown sample x, and the following
strategy is employed: if a classifier judges that x belongs to the ith class, then the
vote for the ith class is increased by one; otherwise, the vote for the jth class is
increased by one. Then we predict that x is in the class with the largest vote.
The optimal parameters will greatly improve the accuracy of LS-SVM,27,28 and therefore
parameter selection is a key issue for the successful application of the algorithm.
In this paper, the improved PSO algorithm is used for parameter selection. To
implement our proposed approach, we use the RBF kernel function (defined
by Eq. (17)) for the LS-SVM classifier because it can be used to analyze higher-
dimensional data. The formulation is given below.
\[
K(x_i, x_j) = \exp\!\left( -\frac{\|x_i - x_j\|^{2}}{2\sigma^{2}} \right).
\tag{17}
\]
where n is the number of test samples, y_i is the actual value, \(\hat{y}_i\) is the
predicted value, and f is the fitness value. Using the improved PSO algorithm, the
parameter-selection process of the 1-v-1 LS-SVM is shown in Fig. 1.
The detailed steps are described as follows:
Fig. 1. The process of optimizing the 1-v-1 LS-SVM parameters with PSO.
Step 5: The iteration terminates when the number of iterations reaches the
predetermined maximum, and the current best individual is returned as the
result; otherwise, return to Step 2.
Step 6: Through the above process, the optimal parameters σ and C of the 1-v-1
LS-SVM are obtained.
Step 1: Sample-data selection and index-system construction. The loan data set
attributes have to be acquired first, and the index system is then
constructed based on current credit and loan-rating research.
Step 2: Data preprocessing and normalization. Continuous and symbolic variables
have to be preprocessed; symbolic variables are preprocessed by a
discretization operation. Normalization processing of the data before training is crucial
with the goal of speeding up the convergence of the model and reducing the
impact of the imbalance of the data capacity on the network classifier.

Fig. 2. The proposed model based on PSO and 1-v-1 multiclass classification LS-SVM.
Step 3: Construction of the multiclass PSO-improved 1-v-1 LS-SVM model. The
processed data are input into the multiclass classification model. The
optimal parameters, including σ and C, are searched for and determined by
the improved PSO.
Step 4: Classification output. The trained FCLC model is employed to determine the
category of each test sample.
Step 5: Algorithm comparison. For the same samples, the 1-v-1 LS-SVM, 1-v-r
LS-SVM, 1-v-1 SVM, and 1-v-r SVM models are tested, respectively, and
their results are compared with the output of the proposed multiclass
LS-SVM model.
3. Experimental Analysis
3.1. Index system and sample data
In order to verify the availability of the proposed approaches, a loan-default data
set consisting of 412 samples was acquired from one of China's rural credit
cooperatives. In the data set there are 16 samples with seriously incomplete
financial data; thus, 396 samples are used for analysis. Loan levels include Passed,
Special Mention, Substandard, Doubtful, and Loss, represented by the numbers 1
through 5, respectively. The composition of the sample is shown in Table 1.
For each experiment, 264 customers were selected randomly as the training sample
set, while the remaining 132 samples were used as the test sample set. The training
sample set contained 86 "Passed" customers, 78 "Special Mention" customers, 26
"Substandard" customers, 56 "Doubtful" customers, and 18 "Loss" customers. The
test sample set was composed of 40 "Passed" customers, 37 "Special Mention"
Table 2. Complete attribute table of personal loans in the rural credit cooperative.

Element: Index
Loan-identification attributes: Loan-contract ID, loan-account ID, account ID, currency, customer ID
Customer-status attributes: Birth date, gender, marital status, nationality, language, country of residence, legal status, annual income range, household income range, highest education, highest degree, position, occupation, title, years of work, political status, religion, dependent population, the amount for the month, year of marriage, number of children, personal interest, investment interest, risk type, whether to accept marketing
Customer-loan-account attributes: Loan-attribute ID, release amount, loan purpose, release date, due date, interest settlement date, interest settlement cycle
Table 3 (partial). Demographic attributes of the loan account.

Marital status (char): 22-Married; 23-Remarried; 30-Widowed; 40-Divorced; 90-Marital status unspecified
Legal status (char, C4): 0-Farmers; 1-Non-farmers; 2-Farmers
Annual income range (numeric, C5): 1: 0-2,000; 2: 2,000-5,000; 3: 5,000-10,000; 4: 10,000-30,000; 5: 30,000-60,000; 6: 60,000-100,000; 7: 100,000-300,000; 8: 300,000-1,000,000; 9: >1,000,000
Household annual income range (numeric, C6): 1: 0-2,000; 2: 2,000-5,000; 3: 5,000-10,000; 4: 10,000-30,000; 5: 30,000-60,000; 6: 60,000-100,000; 7: 100,000-300,000; 8: 300,000-1,000,000; 9: >1,000,000
Highest degree (char, C7): 10-Graduate; 20-Bachelor; 30-College; 40-Secondary vocational school; 50-Technical school; 60-Senior high school; 70-Junior high school; 80-Primary school; 90-Illiterate or semi-literate; 99-Unknown
Position (char, C8): 1-Senior leader; 2-Intermediate leadership; 3-General staff; 4-Other; 9-Unknown
Occupation (char, C9): 0-Other; 1-Cadre; 2-Worker; 3-Individual worker; 4-Farmer; 5-Soldier
Title (char, C10): 0-No; 1-Senior; 2-Intermediate; 3-Junior; 9-Unknown
values of all sample data in attribute C_i, respectively; x_ij denotes the ith
attribute in the jth sample; and x'_ij denotes the data after normalization.
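The normalization formula itself is truncated in the extracted text; assuming the standard min-max scaling that the surrounding symbols (per-attribute minimum and maximum) suggest, it could be sketched as:

```python
def min_max_normalize(samples):
    """Scale each attribute to [0, 1]: x'_ij = (x_ij - min_i) / (max_i - min_i)."""
    n_attr = len(samples[0])
    mins = [min(row[i] for row in samples) for i in range(n_attr)]
    maxs = [max(row[i] for row in samples) for i in range(n_attr)]
    return [
        [(row[i] - mins[i]) / (maxs[i] - mins[i]) if maxs[i] > mins[i] else 0.0
         for i in range(n_attr)]
        for row in samples
    ]
```

Scaling all attributes to a common range prevents large-valued fields (e.g. release amount) from dominating the RBF distance computation.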
3.3. Experiments
Our implementation is carried out in MATLAB 7.1, a mathematical
development environment, by extending LIBSVM version 2.82, which was originally
designed by Chang and Lin.31 The empirical evaluation is performed on an AMD
Turion 64 X2 Dual-Core CPU running at 2.00 GHz with 2 GB RAM. The experiment
configuration is shown in Table 4.
According to the nonlinear characteristics of the customer loan data, the RBF kernel
function \(K(x, x_k) = \exp(-\|x - x_k\|^{2}/2\sigma^{2})\) is used. The improved
PSO is used to select the parameters of the proposed model. In our proposed model,
the population size is 20, and the maximum number of iterations is 100. We set
c_1 = c_2 = 2, χ = 0.729, ω = 0.9.
In our experiment, we compare the performance of our classifier with four other
popular methods: (I) 1-v-1 LS-SVM, (II) 1-v-r LS-SVM, (III) 1-v-1 SVM, and (IV)
1-v-r SVM. We set σ = 2.1 and C = 7.3.
Fig. 3. Comparison of the classification results with different models in three experiments.
4. Conclusion
In this paper, a novel model based on 1-v-1 LS-SVM and an improved PSO is applied
to five-category loan-risk classification. LS-SVM is a powerful machine-learning
technique for solving complex nonlinear function-estimation problems. The improved
PSO extends classical PSO with the constriction factor χ and the inertia weight ω,
and is used to optimize the parameters of LS-SVM. In the empirical studies, the data
set is derived from rural credit cooperatives, and five models are constructed in the
case analysis: the proposed model, 1-v-1 LS-SVM, 1-v-r LS-SVM, 1-v-1 SVM, and
1-v-r SVM. Three experiments verify the effectiveness of the proposed model,
demonstrating higher accuracy than the other four models. To sum up, the proposed
model provides a new approach to solving the FCLC problem for Chinese commercial
banks.
Acknowledgment
The authors would like to thank the anonymous referees for their valuable comments
and suggestions, which helped to improve the quality of the paper immensely. This
work is partially supported by NSFC (60804047); the Science and Technology Project
of Jiangsu Province, China (BE2010201); the Ministry of Education Humanities and
Social Sciences Research Project (11YJCZH005); the Jiangsu Provincial Department of
Education Philosophy and Social Science Project (2010SJB790025); and the Priority
Academic Program Development of Jiangsu Higher Education Institutions.
References
1. K. L. Ke and Z. X. Feng, Five-category classification of loan risk based on integration of rough sets and neural network, Control Theory and Application 25(4) (2008) 759-763.
2. F. Xue and K. L. Ke, Five-category evaluation of commercial bank's loan based on integration of rough sets and neural network, Systems Engineering Theory & Practice 1 (2008) 40-45.
3. J. G. Peng, H. B. Tu, J. He and Y. H. Zhou, The application of ordered logistic regression model in the default probability measure, The Theory and Practice of Finance and Economics 30(160) (2009) 1-7.
4. C. Wu, Y. J. Guo and H. Xia, The model of credit risk assessment in commercial banks on fuzzy integral support vector machines ensemble, Operations Research and Management Science 18(2) (2009) 115-119.
5. J. P. Li, L. W. Wei, G. Li and W. X. Xu, An evolution-strategy based multiple kernels multi-criteria programming approach: The case of credit decision making, Decision Support Systems 51(2) (2011) 292-298.
6. L. W. Wei, Z. Y. Chen and J. P. Li, Evolution strategies based adaptive Lp LS-SVM, Information Sciences 181(14) (2011) 3000-3016.
7. J. L. Liu, J. P. Li, W. X. Xu and Y. Shi, A weighted Lq adaptive least squares support vector machine classifiers - robust and sparse approximation, Expert Systems with Applications 38(3) (2011) 2253-2259.
8. L. Yu, X. Yao, S. Y. Wang and K. K. Lai, Credit risk evaluation using a weighted least squares SVM classifier with design of experiment for parameter selection, Expert Systems with Applications 38 (2011) 15392-15399.
9. G. Wang and J. Ma, A hybrid ensemble approach for enterprise credit risk assessment based on support vector machine, Expert Systems with Applications 39 (2012) 5325-5331.
10. Y. Peng, G. Kou, G. Wang, W. Wu and Y. Shi, Ensemble of software defect predictors: An AHP-based evaluation method, International Journal of Information Technology & Decision Making 10(1) (2011) 187-206.
11. C. J. C. Burges, A tutorial on support vector machines for pattern recognition, Data Mining and Knowledge Discovery 2(2) (1998) 121-167.
12. T. V. Gestel, J. A. K. Suykens, B. Baesens, S. Viaene, J. Vanthienen, G. Dedene, B. De Moor and J. Vandewalle, Benchmarking least squares support vector machine classifiers, Machine Learning 54(1) (2004) 5-32.
13. J. A. K. Suykens, T. V. Gestel, J. D. Brabanter, B. D. Moor and J. Vandewalle, Least Squares Support Vector Machines (World Scientific Pub. Co., Singapore, 2002).
14. J. Weston and C. Watkins, Support vector machines for multi-class pattern recognition, ESANN'99 (1999).
15. Y. Peng, G. Kou, G. Wang and Y. Shi, FAMCDM: A fusion approach of MCDM methods to rank multiclass classification algorithms, Omega 39(6) (2011) 677-689.
16. G. Tsoumakas and I. Katakis, Multi-label classification: An overview, International Journal of Data Warehousing and Mining 3(3) (2007) 1-13.
17. C. Campbell, Kernel methods: A survey of current techniques, Neurocomputing 48(1-4) (2002) 63-84.
18. K. J. Kim and H. C. Ahn, A corporate credit rating model using multi-class support vector machines with an ordinal pairwise partitioning approach, Computers and Operations Research 39 (2012) 1800-1811.
19. Y. Peng, G. Kou, Y. Shi and Z. Chen, A descriptive framework for the field of data mining and knowledge discovery, International Journal of Information Technology & Decision Making 7(4) (2008) 639-682.
20. L. Zhang and B. Zhang, Geometrical representation of McCulloch-Pitts neural model and its applications, IEEE Transactions on Neural Networks 10(4) (1999) 925-929.
21. S. Knerr, L. Personnaz and G. Dreyfus, Single-layer learning revisited: A stepwise procedure for building and training a neural network, in Neurocomputing: Algorithms, Architectures and Applications, ed. J. Fogelman (Springer-Verlag, New York, 1990).
22. J. Friedman, Another approach to polychotomous classification, Dept. Statist., Stanford Univ., Stanford, CA (1996), http://www-stat.stanford.edu/reports/friedman/poly.ps.z.
23. J. Kennedy and R. C. Eberhart, Particle swarm optimization, Proceedings of IEEE International Conference on Neural Networks (IEEE Press, Piscataway, 1995), pp. 1942-1948.
24. K. W. Xia, Y. Dong and H. L. Du, Oil layer recognition model of LS-SVM based on improved PSO algorithm, Control and Decision 22(12) (2007) 1385-1389.
25. Y. Shi and R. C. Eberhart, Parameter selection in particle swarm optimization, Evolutionary Programming VII, Proceedings of the Seventh Annual Conference on Evolutionary Programming, New York (1998).
26. C. W. Hsu and C. J. Lin, A comparison of methods for multi-class support vector machines, IEEE Transactions on Neural Networks 13 (2002) 415-425.
27. M. Pardo and G. Sberveglieri, Classification of electronic nose data with support vector machines, Sensors and Actuators B: Chemical 107(2) (2005) 730-737.
28. S. W. Lin, K. C. Ying, S. C. Chen et al., Particle swarm optimization for parameter determination and feature selection of support vector machines, Expert Systems with Applications 35(4) (2008) 1817-1824.
29. M. H. Jiang and X. C. Yuan, Construction and application of GA-SVM model for personal credit scoring, Journal of Hefei University of Technology 31(2) (2008) 267-283.
30. H. K. Lu and J. Cao, Personal credit scoring model based on integration of rough set and GA-neural network, Journal of Nanjing University of Science and Technology (Natural Science) 33 (2009) 1-5.
31. C. C. Chang and C. J. Lin, LIBSVM: A library for support vector machines (2008), http://www.csie.ntu.edu.tw/~cjlin/libsvm/index.html.