


Journal of Basic & Applied Sciences, 2017, 13, 459-465

Classification Techniques in Machine Learning: Applications and Issues

Aized Amin Soofi* and Arshad Awan

Department of Computer Science, Allama Iqbal Open University, Islamabad, Pakistan


Abstract: Classification is a data mining (machine learning) technique used to predict group membership for data instances. Several classification techniques can be used for this purpose. In this paper, we first present the basics of classification and then discuss some major classification methods, including Bayesian networks, decision tree induction, the k-nearest neighbor classifier, and Support Vector Machines (SVM), together with their strengths, weaknesses, potential applications, and issues along with their available solutions. The goal of this study is to provide a comprehensive review of different classification techniques in machine learning. This work will be helpful both for academia and for newcomers to the field of machine learning who wish to strengthen their grasp of the basics of classification methods.

Keywords: Machine learning, classification, classification review, classification applications, classification algorithms, classification issues.

1. INTRODUCTION

Machine Learning (ML) is a vast interdisciplinary field that builds upon concepts from computer science, statistics, cognitive science, engineering, optimization theory, and many other disciplines of mathematics and science [1]. There are numerous applications of machine learning, but data mining is the most significant among them [2]. Machine learning can mainly be classified into two broad categories: supervised machine learning and unsupervised machine learning.

Unsupervised machine learning is used to draw conclusions from datasets consisting of input data without labeled responses [3]; in other words, in unsupervised learning the desired output is not given. Supervised machine learning techniques attempt to find the relationship between input attributes (independent variables) and a target attribute (dependent variable) [4]. Supervised techniques can be further classified into two main categories: classification and regression. In regression the output variable takes continuous values, while in classification the output variable takes class labels [5].

Classification is a data mining (machine learning) approach used to predict group membership for data instances [6]. Although there is a variety of available techniques in machine learning, classification is the most widely used [7]. Classification is a popular task in machine learning, especially in planning and knowledge discovery, and it is regarded as one of the most intensively studied problems by researchers in the machine learning and data mining fields [8]. A general model of supervised learning (classification techniques) is shown in Figure 1.

Although classification is a well-known technique in machine learning, it suffers from issues such as handling missing data. Missing values in a dataset can cause problems during both the training and classification phases. Some potential reasons for missing data presented in [9] include: non-entry of a record due to a misunderstanding, data deemed irrelevant at the time of entry, data removed because of deviation from other documented data, and equipment malfunction.

The missing data problem can be overcome by approaches [10] such as: ignoring the missing data, replacing all missing values with a single global constant, replacing a missing value with the feature mean for the given class, or manually inspecting samples with missing values and inserting a feasible or probable value. In this work we focus only on selected classification methods.
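To make these strategies concrete, the following minimal sketch illustrates three of them on an invented NumPy feature matrix; the values, the global constant -1.0, and the NaN encoding are ours for illustration and are not taken from [10].

```python
# A minimal sketch of the missing-data strategies listed above, using an
# invented feature matrix in which NaN marks missing entries.
import numpy as np

X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [4.0, np.nan],
              [6.0, 5.0]])
y = np.array([0, 0, 1, 1])

# Strategy 1: ignore (drop) samples that contain missing values.
complete = ~np.isnan(X).any(axis=1)
X_dropped, y_dropped = X[complete], y[complete]

# Strategy 2: replace every missing value with a single global constant.
X_constant = np.where(np.isnan(X), -1.0, X)

# Strategy 3: replace a missing value with the feature mean of its class.
X_class_mean = X.copy()
for label in np.unique(y):
    rows = y == label
    class_means = np.nanmean(X[rows], axis=0)  # per-feature mean, NaNs ignored
    X_class_mean[rows] = np.where(np.isnan(X[rows]), class_means, X[rows])

print(X_class_mean)  # NaNs filled with the mean of the same class
```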

This paper is organized as follows. In Section 2 the methodology of the review is presented. Section 3 is divided into four subsections in which the selected classification techniques are discussed: a logic-based technique (decision trees) in Section 3.1, statistical learning techniques (Bayesian networks) in Section 3.2, k-nearest neighbor classifiers in Section 3.3, and Support Vector Machines in Section 3.4.

*Address correspondence to this author at the Department of Computer Science, Allama Iqbal Open University, Islamabad, Pakistan; Tel: +923217680092; E-mail: [email protected]




Figure 1: Supervised learning classification techniques.

2. METHODOLOGY

A literature search was performed using databases including IEEE Xplore, Google Scholar, and ScienceDirect, together with some related web pages, restricted to material written in English. The keywords used for the literature search were: machine learning, data mining, classification, classification review, classification applications, and classification algorithms. These keywords were used alone and in combination for the initial collection of research material. Only those articles that contain relevant data about the applications, challenges, and solutions of classification techniques were included in this review. It is difficult to provide an exhaustive review of all supervised machine learning classification methods in a single article; therefore we focused only on commonly used classification techniques: decision trees (ID3 and C4.5), Bayesian networks, k-nearest neighbor, and Support Vector Machines. Applications of the different classification techniques are presented in Table 1, and issues of classification techniques with their available solutions are presented in Table 2.

3. CLASSIFICATION TECHNIQUES

The major classification techniques are discussed in this section with their basic working, advantages, and disadvantages.

3.1. Decision Tree Induction

Decision tree algorithms are among the most commonly used algorithms in classification [11]. The decision tree provides an easily understandable modeling technique and also simplifies the classification process [12]. The decision tree is a transparent mechanism: it facilitates users in following the tree structure easily in order to see how a decision is made [13]. In this section the basic philosophy of decision tree methods is discussed along with their strengths, limitations, and applications.

The core objective of a decision tree is to produce a model that predicts the value of a target variable based on numerous input variables [6]. Usually, decision tree algorithms are constructed in two phases: (i) tree growth, in which the training set is split recursively based on locally optimal criteria until most of the records belonging to each partition carry the same class label [14]; and (ii) tree pruning, in which the size of the tree is reduced, making it easier to understand [15]. In this section we focus on the ID3 and C4.5 decision tree algorithms.

The ID3 (Iterative Dichotomiser 3) decision tree algorithm was introduced in 1986 [16, 17]. It is one of the most widely used algorithms in the areas of data mining and machine learning due to its effectiveness and simplicity [16]. The ID3 algorithm is based on information gain. Some of the strengths and weaknesses of the ID3 decision tree are presented in [18]: its strengths include being easy to understand and considering the whole training set in the final decision, while its weaknesses include no backtracking search, inability to handle missing values, and no global optimization.
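To illustrate the information-gain criterion that drives ID3, the following minimal sketch computes the entropy reduction obtained by splitting on a single categorical attribute; the toy "outlook" data is invented for demonstration.

```python
# A minimal sketch of ID3's splitting criterion: information gain is the
# drop in Shannon entropy achieved by partitioning on one attribute.
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(labels).values())

def information_gain(rows, labels, attribute):
    """Entropy reduction achieved by splitting on one attribute."""
    base = entropy(labels)
    subsets = {}
    for row, label in zip(rows, labels):
        subsets.setdefault(row[attribute], []).append(label)
    weighted = sum(len(s) / len(labels) * entropy(s) for s in subsets.values())
    return base - weighted

# Toy example: ID3 would split on the attribute with the highest gain.
rows = [{"outlook": "sunny"}, {"outlook": "sunny"},
        {"outlook": "rain"}, {"outlook": "rain"}]
labels = ["no", "no", "yes", "yes"]
print(information_gain(rows, labels, "outlook"))  # 1.0: a perfect split
```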

Table 1: Classification Techniques Applications

ID3: predicting student performance [20]; land capability classification [31]; tolerance-related knowledge acquisition [32]; computer crime forensics [33]; fraud detection [34].

C4.5: decision making on loan applications by debtors [35]; predicting software defects [36]; thrombosis collagen diseases [37]; electricity price prediction [38]; coal logistics customer analysis [39]; selecting question pools [40].

Bayesian network: automatic and interactive image segmentation [41]; traffic incident detection [42]; signature verification [43]; efficient patrolling of nurses [44]; examining dental pain [45]; telecommunication and internet networks [46].

K-nearest neighbor: microarray data classification [47]; phoneme prediction [48]; face recognition [49]; agarwood oil quality grading [50]; classification of nuclear receptors and their subfamilies [51]; short-term traffic flow forecasting [52]; plant leaf recognition [53].

SVM: scene classification [54]; predicting corporate financial distress [55]; induction motor fault diagnosis [56]; analog circuit fault diagnosis [57]; enterprise market competition [58].

C4.5 is a famous algorithm for producing decision trees. It is an extension of the ID3 algorithm that minimizes the drawbacks of ID3. In the pruning phase, C4.5 tries to eliminate unhelpful branches by going back through the tree once it has been generated and replacing them with leaf nodes [19]. The strengths of C4.5 include handling training data with missing feature values, dealing with both discrete and continuous features, and providing both pre- and post-pruning [18, 20]. Its weaknesses include unsuitability for small datasets [18] and high processing time compared to other decision trees.
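C4.5 itself is not implemented in common Python libraries, so the following sketch approximates its grow-then-prune workflow with scikit-learn's DecisionTreeClassifier, which implements the closely related CART algorithm; the entropy splitting criterion and the pruning strength ccp_alpha=0.02 are illustrative choices, not C4.5's exact procedure.

```python
# A sketch of the two-phase workflow: grow a full tree, then prune it back.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Phase (i): grow a full tree on the training set.
full_tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
full_tree.fit(X_train, y_train)

# Phase (ii): prune; ccp_alpha > 0 collapses weak branches into leaves,
# much as C4.5 replaces unhelpful branches after the tree is generated.
pruned_tree = DecisionTreeClassifier(criterion="entropy",
                                     ccp_alpha=0.02, random_state=0)
pruned_tree.fit(X_train, y_train)

print("full tree leaves:", full_tree.get_n_leaves(),
      "accuracy:", full_tree.score(X_test, y_test))
print("pruned tree leaves:", pruned_tree.get_n_leaves(),
      "accuracy:", pruned_tree.score(X_test, y_test))
```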
3.2. Bayesian Networks

A Bayesian Network (BN) is a graphical model for probability relationships among a set of variables [21]. A BN structure S is a directed acyclic graph (DAG), and the nodes in S are in one-to-one correspondence with the features X. The arcs represent causal influences between the nodes, while the absence of possible arcs in S encodes conditional independencies [2]. Normally, Bayesian network learning tasks can be separated into two subtasks: (a) learning the DAG structure of the network, and (b) determining its parameters.

One of the problems with Bayesian network classifiers is that they usually require continuous attributes to be discretized. The process of converting continuous attributes into discrete ones introduces classification issues [22, 23], which may include noise, missing information, and sensitivity of the attributes to changes in the class variables [24]. The other approach to Bayesian network classification, in which continuous attributes are not converted into discrete ones, requires estimation of each attribute's conditional density [23].
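The two routes described above, discretizing continuous attributes versus estimating their conditional densities, can be sketched with naive Bayes, the simplest Bayesian-network classifier. This is our illustration rather than the method of [24]: GaussianNB fits a single Gaussian per attribute and class instead of a kernel density, and the choice of five bins is arbitrary.

```python
# Contrasting discretized versus continuous treatment of attributes
# in a naive Bayes classifier (the simplest Bayesian-network classifier).
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import CategoricalNB, GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer

X, y = load_iris(return_X_y=True)

# Route 1: discretize each continuous attribute, then model it as categorical.
# min_categories keeps the category axis fixed across cross-validation folds.
discretized = make_pipeline(
    KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="uniform"),
    CategoricalNB(min_categories=5),
)

# Route 2: keep attributes continuous and estimate a Gaussian
# conditional density per attribute and class.
continuous = GaussianNB()

print("discretized:", cross_val_score(discretized, X, y, cv=5).mean())
print("continuous: ", cross_val_score(continuous, X, y, cv=5).mean())
```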

Table 2: Classification Techniques Issues and Solutions

Decision tree (ID3 and C4.5):
- Multi-valued attributes: algorithm combining ID3 and an association function (AF) [62].
- Complex information entropy and attributes with more values: modification of the attribute selection methods, a pre-pruning strategy, and the rainforest approach [63].
- Noisy data classification: enhanced algorithm with the Taylor formula [64]; Credal-C4.5 tree [65].

Bayesian network:
- Conditional density estimation of attributes: Gaussian kernel function [24].
- Inference over large-domain discrete and continuous variables: decision-tree structured conditional probabilities [66]; greedy learning algorithm [67].

K-nearest neighbor:
- Space requirement: prototype selection [68].
- Time requirement: feature selection and extraction methods [69].
- Multi-dimensional data finding: R-tree index [70].
- KNN scaling over multimedia datasets: multimedia KNN query processing system [30].

SVM:
- Controlling the false positive rate: Risk Area SVM (RA-SVM) [71].
- Low sparsity of the SVM classifier: Cluster Support Vector Machine (CLSVM) [72].
- Multi-label classification: fuzzy SVMs (FSVMs) [73].

To overcome the problem of conditional density estimation of attributes, a Gaussian kernel function with stable constraints for evaluating attribute densities was used in [24]. Experiments performed on datasets from the UCI machine learning repository indicate that, when the Gaussian kernel function is used in Bayesian network classifiers, continuous attributes provide better classification accuracy compared to other techniques.

Some of the advantages of Bayesian networks presented in [25] include: (i) smoothness properties, meaning minor changes in the Bayesian network model do not influence the working of the system; (ii) flexible applicability, meaning an identical Bayesian network model can be used for resolving both regression and classification issues; and (iii) handling missing data, as a Bayesian network has the capability to fill in missing data by integrating over all possibilities of the missing values.

3.3. K-Nearest Neighbor

In the k-nearest neighbor (KNN) technique, the nearest neighbors are determined with respect to the value of k, which defines how many nearest neighbors need to be examined to describe the class of a sample data point [26]. Nearest neighbor techniques are divided into two categories: structure-based KNN and structure-less KNN. The structure-based technique deals with the basic structure of the data, where the structure has less mechanism associated with training data samples [27]. In the structure-less technique the entire data is divided into sample data points and training data; the distance is calculated between the sample points and all training points, and the point with the smallest distance is known as the nearest neighbor [28].

One of the main advantages of the KNN technique is that it is effective for large training data and robust to noisy training data [29]. Scaling KNN queries over enormous high-dimensional multimedia datasets is a challenging issue for KNN classifiers. To overcome this issue, a high-performance multimedia KNN query processing system [30] was introduced, in which fast distance-based pruning methods are coupled with a proposed Distance-Precomputation based R-tree (DPR-Tree) index structure. Input/output cost is reduced by this coupling, but it increases the computational work of the KNN search.

Two important obstacles with nearest-neighbor-based classifiers are highlighted in [59]: the space requirement and the classification time. Different methods have been introduced to overcome the space requirement issue. The k-Nearest Neighbor Mean Classifier (k-NNMC) was introduced in [59]. k-NNMC independently searches the k nearest neighbors for every training pattern class and calculates the mean of the given k neighbors. It is shown experimentally, using numerous standard datasets, that the classification accuracy of the proposed classifier is better than that of other classifiers such as the weighted k-nearest neighbor classifier (Wk-NNC) [60], and that it can be combined efficiently with any space reduction and indexing methods.

The advantages of KNN include simplicity, transparency, robustness to noisy training data, and ease of understanding and implementation; its disadvantages include computational complexity, memory limitations, poor run-time performance for large training sets, and susceptibility to problems caused by irrelevant attributes [28, 61].
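A minimal structure-less KNN example in the sense described above, where distances from a query point to all training points determine the label; the dataset and the tested values of k are illustrative only.

```python
# A sketch of structure-less KNN: the value of k is the key free parameter.
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scaling matters: raw distances are dominated by large-range attributes.
for k in (1, 5, 15):
    knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=k))
    knn.fit(X_train, y_train)
    print(f"k={k:2d} accuracy={knn.score(X_test, y_test):.3f}")
```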

3.4. Support Vector Machines

Vapnik proposed a machine learning method based on statistical learning theory, which is known as the Support Vector Machine (SVM) [74]. SVM is considered one of the most prominent and convenient techniques for solving problems related to the classification of data [75] and to learning and prediction [76]. Support vectors are the data points that lie closest to the decision surface [77]. SVM performs the classification of data vectors by a hyperplane in a high-dimensional space [78]. The maximal margin classifier is the simplest, most basic form of SVM, which addresses the simplest classification problem: linearly separable training data with binary classes [27]. The maximal margin classifier is used to find the hyperplane with maximal margin in real-world problems [79].

The main advantage of SVM is its capability to deal with a wide variety of classification problems, including high-dimensional and not linearly separable problems. One major drawback of SVM is that it requires a number of key parameters to be set correctly to attain excellent classification results [80].
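Because classification quality hinges on setting these key parameters correctly, a common remedy is to search over them with cross-validation. The following sketch is illustrative: the grid values for the regularization constant C and the RBF kernel width gamma are guesses, not recommendations from the cited works.

```python
# Tuning SVM's key parameters (C and gamma) via cross-validated grid search.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = make_pipeline(StandardScaler(), SVC())
grid = GridSearchCV(pipe, {"svc__C": [0.1, 1, 10, 100],
                           "svc__gamma": ["scale", 0.01, 0.001]}, cv=5)
grid.fit(X_train, y_train)

print("best parameters:", grid.best_params_)
print("test accuracy:  ", grid.score(X_test, y_test))
```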

4. CONCLUSION

In this paper, various popular classification techniques of machine learning have been discussed with their basic working mechanisms, strengths, and weaknesses. Their potential applications and issues, along with available solutions, have also been highlighted. Classification methods are typically strong in modeling interactions. The discussed classification techniques can be applied to different types of datasets, e.g. health or financial data. It is difficult to determine which technique is superior to the others, because each technique has its own merits, demerits, and implementation issues. The selection of a classification technique depends on the user's problem domain. Although a lot of work has been done in the classification domain, it still requires the attention of the research community to overcome issues that keep arising from new classification problems, such as the classification of Big Data.

REFERENCES

[1] Ghahramani Z. Unsupervised learning, in Advanced Lectures on Machine Learning, ed: Springer, 2004; pp. 72-112. https://doi.org/10.1007/978-3-540-28650-9_5
[2] Kotsiantis SB, Zaharakis I, Pintelas P. Supervised machine learning: A review of classification techniques, 2007.
[3] Zhang D, Nunamaker JF. Powering e-learning in the new millennium: an overview of e-learning and enabling technology. Information Systems Frontiers 2003; 5: 207-218. https://doi.org/10.1023/A:1022609809036
[4] Maimon O, Rokach L. Introduction to supervised methods, in Data Mining and Knowledge Discovery Handbook, ed: Springer, 2005; pp. 149-164.
[5] Ng A. CS229 Lecture notes.
[6] Kesavaraj G, Sukumaran S. A study on classification techniques in data mining, in Computing, Communications and Networking Technologies (ICCCNT), 2013 Fourth International Conference on, 2013; pp. 1-7.
[7] Singh M, Sharma S, Kaur A. Performance analysis of decision trees. International Journal of Computer Applications 2013; 71.
[8] Baradwaj BK, Pal S. Mining educational data to analyze students' performance. arXiv preprint arXiv:1201.3417, 2012.
[9] Dunham MH. Data Mining: Introductory and Advanced Topics. Pearson Education India, 2006.
[10] Kantardzic M. Data Mining: Concepts, Models, Methods, and Algorithms. John Wiley & Sons, 2011.
[11] Twa MD, Parthasarathy S, Roberts C, Mahmoud AM, Raasch TW, Bullimore MA. Automated decision tree classification of corneal shape. Optometry and Vision Science 2005; 82: 1038. https://doi.org/10.1097/01.opx.0000192350.01045.6f
[12] Brodley CE, Utgoff PE. Multivariate versus univariate decision trees. Citeseer, 1992.
[13] Jang J-SR. ANFIS: adaptive-network-based fuzzy inference system. IEEE Transactions on Systems, Man and Cybernetics 1993; 23: 665-685. https://doi.org/10.1109/21.256541
[14] Rutkowski L, Pietruczuk L, Duda P, Jaworski M. Decision trees for mining data streams based on the McDiarmid's bound. IEEE Transactions on Knowledge and Data Engineering 2013; 25: 1272-1279. https://doi.org/10.1109/TKDE.2012.66
[15] Patil DD, Wadhai V, Gokhale J. Evaluation of decision tree pruning algorithms for complexity and classification accuracy, 2010.
[16] Quinlan JR. Induction of decision trees. Machine Learning 1986; 1: 81-106. https://doi.org/10.1007/BF00116251
[17] Quinlan JR. Simplifying decision trees. International Journal of Man-Machine Studies 1987; 27: 221-234. https://doi.org/10.1016/S0020-7373(87)80053-6
[18] Sharma S, Agrawal J, Agarwal S. Machine learning techniques for data mining: A survey, in Computational Intelligence and Computing Research (ICCIC), 2013 IEEE International Conference on, 2013; pp. 1-6.
[19] Bhukya DP, Ramachandram S. Decision tree induction: an approach for data classification using AVL-tree. International Journal of Computer and Electrical Engineering 2010; 2: 660. https://doi.org/10.7763/IJCEE.2010.V2.208
[20] Adhatrao K, Gaykar A, Dhawan A, Jha R, Honrao V. Predicting students' performance using ID3 and C4.5 classification algorithms. arXiv preprint arXiv:1310.2071, 2013.
[21] Phyu TN. Survey of classification techniques in data mining, in Proceedings of the International MultiConference of Engineers and Computer Scientists, 2009; pp. 18-20.
[22] Yang Y, Webb GI. Discretization for naive-Bayes learning: managing discretization bias and variance. Machine Learning 2009; 74: 39-74. https://doi.org/10.1007/s10994-008-5083-5

[23] Friedman N, Goldszmidt M. Discretizing continuous attributes while learning Bayesian networks, in ICML 1996; pp. 157-165.
[24] Wang S-C, Gao R, Wang L-M. Bayesian network classifiers based on Gaussian kernel density. Expert Systems with Applications 2016.
[25] Myllymäki P. Advantages of Bayesian Networks in Data Mining and Knowledge Discovery. Available: http://www.bayesit.com/docs/advantages.html
[26] Cover T, Hart P. Nearest neighbor pattern classification. IEEE Transactions on Information Theory 1967; 13: 21-27. https://doi.org/10.1109/TIT.1967.1053964
[27] Wu X, Kumar V, Quinlan JR, Ghosh J, Yang Q, Motoda H, et al. Top 10 algorithms in data mining. Knowledge and Information Systems 2008; 14: 1-37. https://doi.org/10.1007/s10115-007-0114-2
[28] Bhatia N. Survey of nearest neighbor techniques. arXiv preprint arXiv:1007.0085, 2010.
[29] Teknomo K. Strengths and weaknesses of K Nearest Neighbor. Available: http://people.revoledu.com/kardi/tutorial/KNN/Strength%20and%20Weakness.htm
[30] Li H, Liu L, Zhang X, Wang S. Hike: A high performance kNN query processing system for multimedia data, in 2015 IEEE Conference on Collaboration and Internet Computing (CIC), 2015; pp. 296-303. https://doi.org/10.1109/CIC.2015.44
[31] Kumar N, Obi Reddy G, Chatterjee S, Sarkar D. An application of ID3 decision tree algorithm for land capability classification. Agropedology 2013; 22: 35-42.
[32] Shao X, Zhang G, Li P, Chen Y. Application of ID3 algorithm in knowledge acquisition for tolerance design. Journal of Materials Processing Technology 2001; 117: 66-74. https://doi.org/10.1016/S0924-0136(01)01016-0
[33] Tan Y, Qi Z, Wang J. Applications of ID3 algorithms in computer crime forensics, in Multimedia Technology (ICMT), 2011 International Conference on, 2011; pp. 4854-4857.
[34] Zou K, Sun W, Yu H, Liu F. ID3 decision tree in fraud detection application, in Computer Science and Electronics Engineering (ICCSEE), 2012 International Conference on, 2012; pp. 399-402.
[35] Amin RK, Indwiarti, Sibaroni Y. Implementation of decision tree using C4.5 algorithm in decision making of loan application by debtor (Case study: Bank pasar of Yogyakarta Special Region), in Information and Communication Technology (ICoICT), 2015 3rd International Conference on, 2015; pp. 75-80.
[36] Li B, Shen B, Wang J, Chen Y, Zhang T. A scenario-based approach to predicting software defects using compressed C4.5 model, in Computer Software and Applications Conference (COMPSAC), 2014 IEEE 38th Annual, 2014; pp. 406-415.
[37] Soliman SA, Abbas S, Salem ABM. Classification of thrombosis collagen diseases based on C4.5 algorithm, in 2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS), 2015; pp. 131-136. https://doi.org/10.1109/IntelCIS.2015.7397209
[38] Hehui Q, Zhiwei Q. Feature selection using C4.5 algorithm for electricity price prediction, in 2014 International Conference on Machine Learning and Cybernetics, 2014; pp. 175-180. https://doi.org/10.1109/ICMLC.2014.7009113
[39] Duan F, Zhao Z, Zeng X. Application of decision tree based on C4.5 in analysis of coal logistics customer, in Intelligent Information Technology Application, 2009. IITA 2009. Third International Symposium on, 2009; pp. 380-383.
[40] Seet AM, Zualkernan IA. An adaptive method for selecting question pools using C4.5, in 2010 10th IEEE International Conference on Advanced Learning Technologies, 2010; pp. 86-88.
[41] Zhang L, Ji Q. A Bayesian network model for automatic and interactive image segmentation. IEEE Transactions on Image Processing 2011; 20: 2582-2593.
[42] Zhang K, Taylor MA. Effective arterial road incident detection: a Bayesian network based algorithm. Transportation Research Part C: Emerging Technologies 2006; 14: 403-417. https://doi.org/10.1016/j.trc.2006.11.001
[43] Xiao X, Leedham G. Signature verification using a modified Bayesian network. Pattern Recognition 2002; 35: 983-995. https://doi.org/10.1016/S0031-3203(01)00088-7
[44] Aoki S, Shiba M, Majima Y, Maekawa Y. Nurse call data analysis using Bayesian network modeling, in Aware Computing (ISAC), 2010 2nd International Symposium on, 2010; pp. 272-277.
[45] Chattopadhyay S, Davis RM, Menezes DD, Singh G, Acharya RU, Tamura T. Application of Bayesian classifier for the diagnosis of dental pain. Journal of Medical Systems 2012; 36: 1425-1439. https://doi.org/10.1007/s10916-010-9604-y
[46] Bashar A, Parr G, McClean S, Scotney B, Nauck D. Knowledge discovery using Bayesian network framework for intelligent telecommunication network management, in Knowledge Science, Engineering and Management, ed: Springer, 2010; pp. 518-529.
[47] Kumar M, Rath SK. Microarray data classification using fuzzy K-nearest neighbor, in Contemporary Computing and Informatics (IC3I), 2014 International Conference on, 2014; pp. 1032-1038.
[48] Rizwan M, Anderson DV. Using k-nearest neighbor and speaker ranking for phoneme prediction, in Machine Learning and Applications (ICMLA), 2014 13th International Conference on, 2014; pp. 383-387.
[49] Kasemsumran P, Auephanwiriyakul S, Theera-Umpon N. Face recognition using string grammar fuzzy K-nearest neighbor, in 2016 8th International Conference on Knowledge and Smart Technology (KST), 2016; pp. 55-59.
[50] Ismail N, Rahiman MHF, Taib MN, Ali NAM, Jamil M, Tajuddin SN. The grading of agarwood oil quality using k-Nearest Neighbor (k-NN), in Systems, Process & Control (ICSPC), 2013 IEEE Conference on, 2013; pp. 1-5.
[51] Tiwari AK, Srivastava R. Feature based classification of nuclear receptors and their subfamilies using fuzzy K-nearest neighbor, in Computer Engineering and Applications (ICACEA), 2015 International Conference on Advances in, 2015; pp. 24-28.
[52] Li S, Shen Z, Xiong G. A k-nearest neighbor locally weighted regression method for short-term traffic flow forecasting, in 2012 15th International IEEE Conference on Intelligent Transportation Systems, 2012; pp. 1596-1601.
[53] Munisami T, Ramsurn M, Kishnah S, Pudaruth S. Plant leaf recognition using shape features and colour histogram with K-nearest neighbour classifiers. Procedia Computer Science 2015; 58: 740-747. https://doi.org/10.1016/j.procs.2015.08.095
[54] Mandhala VN, Sujatha V, Devi BR. Scene classification using support vector machines, in Advanced Communication Control and Computing Technologies (ICACCCT), 2014 International Conference on, 2014; pp. 1807-1810.
[55] Zhao Y, Zhu S, Yu J, Wang L. Predicting corporate financial distress by PCA-based support vector machines, in 2010 International Conference on Networking and Information Technology, 2010; pp. 373-376. https://doi.org/10.1109/ICNIT.2010.5508491
[56] Aydin I, Karakose M, Akin E. Artificial immune based support vector machine algorithm for fault diagnosis of induction motors, in Electrical Machines and Power Electronics, 2007. ACEMP '07. International Aegean Conference on, 2007; pp. 217-221.

[57] Yehui L, Yuye Y, Liang H. Fault diagnosis of analog circuit based on support vector machines, in Communications Technology and Applications, 2009. ICCTA '09. IEEE International Conference on, 2009; pp. 40-43.
[58] Jialong H, Yanbin W. Classification of the enterprise market competition based on support vector machines, in 2010 Chinese Control and Decision Conference, 2010; pp. 1644-1647. https://doi.org/10.1109/CCDC.2010.5498321
[59] Viswanath P, Sarma TH. An improvement to k-nearest neighbor classifier, in Recent Advances in Intelligent Computational Systems (RAICS), 2011 IEEE, 2011; pp. 227-231. https://doi.org/10.1109/RAICS.2011.6069307
[60] Dudani SA. The distance-weighted k-nearest-neighbor rule. IEEE Transactions on Systems, Man, and Cybernetics 1976; SMC-6: 325-327. https://doi.org/10.1109/TSMC.1976.5408784
[61] Cunningham P, Delany SJ. k-Nearest neighbour classifiers, 2007.
[62] Chen J, Luo D-L, Mu F-X. An improved ID3 decision tree algorithm, in Computer Science & Education, 2009. ICCSE '09. 4th International Conference on, 2009; pp. 127-130.
[63] Thakur D, Markandaiah N, Raj DS. Re optimization of ID3 and C4.5 decision tree, in Computer and Communication Technology (ICCCT), 2010 International Conference on, 2010; pp. 448-450.
[64] Huang M, Niu W, Liang X. An improved decision tree classification algorithm based on ID3 and the application in score analysis, in 2009 Chinese Control and Decision Conference, 2009; pp. 1876-1879. https://doi.org/10.1109/CCDC.2009.5192865
[65] Mantas CJ, Abellán J. Credal-C4.5: Decision tree based on imprecise probabilities to classify noisy data. Expert Systems with Applications 2014; 41: 4625-4637. https://doi.org/10.1016/j.eswa.2014.01.017
[66] Mori J, Mahalec V. Inference in hybrid Bayesian networks with large discrete and continuous domains. Expert Systems with Applications 2016; 49: 1-19. https://doi.org/10.1016/j.eswa.2015.11.019
[67] Hobæk Haff I, Aas K, Frigessi A, Lacal V. Structure learning in Bayesian Networks using regular vines. Computational Statistics & Data Analysis 2016; 101: 186-208. https://doi.org/10.1016/j.csda.2016.03.003
[68] Babu VS, Viswanath P. Rough-fuzzy weighted k-nearest leader classifier for large data sets. Pattern Recognition 2009; 42: 1719-1731. https://doi.org/10.1016/j.patcog.2008.11.021
[69] Duda RO, Hart PE, Stork DG. Pattern Classification. John Wiley & Sons, 2012.
[70] Guttman A. R-trees: a dynamic index structure for spatial searching. ACM, 1984; 14.
[71] Moraes D, Wainer J, Rocha A. Low false positive learning with support vector machines. Journal of Visual Communication and Image Representation 2016; 38: 340-350. https://doi.org/10.1016/j.jvcir.2016.03.007
[72] Carrizosa E, Nogales-Gómez A, Romero Morales D. Clustering categories in support vector machines. Omega.
[73] Abe S. Fuzzy support vector machines for multilabel classification. Pattern Recognition 2015; 48: 2110-2117. https://doi.org/10.1016/j.patcog.2015.01.009
[74] Vapnik VN. The Nature of Statistical Learning Theory, 1995.
[75] Nizar A, Dong Z, Wang Y. Power utility nontechnical loss analysis with extreme learning machine method. IEEE Transactions on Power Systems 2008; 23: 946-955. https://doi.org/10.1109/TPWRS.2008.926431
[76] Xiao H, Peng F, Wang L, Li H. Ad hoc-based feature selection and support vector machine classifier for intrusion detection, in 2007 IEEE International Conference on Grey Systems and Intelligent Services, 2007; pp. 1117-1121. https://doi.org/10.1109/GSIS.2007.4443446
[77] Berwick R. An Idiot's Guide to Support Vector Machines (SVMs).
[78] Ahmad I, Abdulah AB, Alghamdi AS. Towards the designing of a robust intrusion detection system through an optimized advancement of neural networks, in Advances in Computer Science and Information Technology, ed: Springer, 2010; pp. 597-602.
[79] Han J, Kamber M, Pei J. Data Mining: Concepts and Techniques. Elsevier, 2011.
[80] SVM. Available: http://www.nickgillian.com/wiki/pmwiki.php/GRT/SVM

Received on 07-08-2017 Accepted on 11-08-2017 Published on 29-08-2017

https://doi.org/10.6000/1927-5129.2017.13.76

© 2017 Soofi and Awan; Licensee Lifescience Global.


This is an open access article licensed under the terms of the Creative Commons Attribution Non-Commercial License
(http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted, non-commercial use, distribution and reproduction in
any medium, provided the work is properly cited.

