
Application of Genetic Algorithms and Gaussian Naïve Bayesian Approach in Pipeline for Cognitive State Classification

Shantipriya Parida
Carrier Software & Core Network, Huawei Technologies India Pvt Ltd, Bangalore, India
[email protected]

Satchidananda Dehuri
Department of Systems Engineering, Ajou University, Suwon, South Korea
[email protected]

Sung-Bae Cho
Department of Computer Science, Yonsei University, Seoul, South Korea
[email protected]

Abstract—In this paper an application of genetic algorithms (GAs) and the Gaussian Naïve Bayesian (GNB) approach is studied to explore brain activity by decoding specific cognitive states from functional magnetic resonance imaging (fMRI) data. In fMRI data analysis, however, the large number of attributes poses a serious problem for classifying cognitive states: it significantly increases the computational cost and memory usage of a classifier. To address this problem, we use a GA to select an optimal set of attributes and then a GNB classifier in a pipeline to classify different cognitive states. The experimental outcomes prove its worth in successfully classifying different cognitive states. A detailed comparison with popular machine learning classifiers illustrates the value of such a GA-Bayesian approach applied in a pipeline for fMRI data analysis.

Keywords—functional Magnetic Resonance Imaging (fMRI); Genetic Algorithms; Gaussian Naïve Bayes; Decision Tree; Support Vector Machine

I. INTRODUCTION

Neuroimaging has shown that it is possible to decode a person's conscious experience from their brain activity using non-invasive techniques [1]. fMRI is a non-invasive technique, based on Blood Oxygen Level Dependent (BOLD) contrast, that measures neural activity [2]. State-of-the-art machine learning techniques are popularly used by neuroscientists for a variety of fMRI data analyses [3]. In fMRI data analysis, the challenging task is to deal with high dimensional data, a problem also known as the "curse of dimensionality": a single fMRI volume of the brain typically contains tens of thousands of voxels [4]. Data analysis and classification problems become harder as the dimensionality of the data increases [5]. In fMRI data analysis, feature selection techniques can mitigate this problem by selecting relevant features, which are then passed as input to machine learning classifiers. GAs have been successfully applied in medical domains to return the best set of features from high dimensional data [6]. The Bayesian approach has also been successfully applied to the analysis of fMRI data. These motivations drive us to combine their best attributes in a pipeline for building a robust and accurate classifier. The strength of the Bayesian approach is that it naturally leads to a distribution that can be used to make inferences for models containing more complex parameters than simple amplitudes and variances [7].

The rest of the paper is organized as follows. In Section II the preliminaries of this work are discussed. The overall framework of the proposed method for combining the GA and GNB in a pipeline is explained in Section III. The details of the fMRI data, the experimental setup, and an overview of the comparative methods are given in Section IV. The experimental results, analysis, and comparative study are presented in Section V. Conclusions and future research directions are discussed in Section VI.

II. PRELIMINARIES

The machine learning techniques used in the proposed method are explained in this section.

A. Gaussian Naïve Bayesian

Over the decades, Bayesian statistical decision theory has gained attention in diverse research areas. Its importance in perception has been realized recently, as it provides a rigorous mathematical framework for describing the tasks that a perceptual system performs [11]. In comparison to other machine learning models such as neural networks and support vector machines, the Bayes model has the advantage of modeling inner relationships by incorporating prior knowledge through probability theory [12]. Using the training data, the GNB classifier estimates the probability distribution over fMRI observations, conditioned on the subject's cognitive state. It classifies a new example x = <x1, ..., xn> by estimating the probability P(ci | x) of cognitive state ci given fMRI observation x. It estimates P(ci | x) using the following equation, under the assumption that the features are conditionally independent given the class:

    P(ci | x) = p(ci) P(x | ci) / Sum_k p(ck) P(x | ck),    (1)

where P(x | ck) = Prod_j P(xj | ck) can be estimated from the training set. Extensions of GNB in the context of fMRI are GNB-pooled and hierarchical GNB, discussed in [13] and [14], respectively.

B. Genetic Algorithms

Genetic algorithms are parallel, iterative optimizers that have been successfully applied to a large number of optimization problems, including classification tasks. Given a set of feature vectors of the form X = {x1, x2, ..., xd}, the GA produces a transformed set of vectors of the form X' = {w1x1, w2x2, ..., wdxd}, where wi is a weight associated with feature i. The feature values are normalized and scaled by the associated weight before being used for training, testing, and classification [8].

A GA follows Darwin's survival-of-the-fittest principle, in which the next generation is produced from the current generation using three operators: reproduction, crossover, and mutation [9]. The fittest chromosomes move on to the next generation. The classification accuracy is returned as a measure of the quality of the transformation matrix, which the GA uses to search for a transformation that minimizes the dimensionality of the transformed patterns while maximizing classification accuracy [10].

III. PROPOSED METHOD

The overall framework of the proposed technique is shown in Fig. 1. In the proposed technique, the processed fMRI data (.mat) file is supplied as input to the GA for selecting the most promising features from the high dimensional dataset (details in Subsection III A). The selected features are used to construct the GNB classifier for classification (cf. Subsection III B). The popular k-fold cross validation method is used for validating the data. We have partitioned the data by group (the class of each observation). The classification accuracy of the constructed classifier is determined from the confusion matrix based on the classification results obtained on the test data.

Figure 1. Block diagram of the proposed technique.

We have compared the performance of the proposed technique with popular classifiers such as the Decision Tree (DT), Support Vector Machine (SVM), and Multilayer Perceptron Network (MLP), which are described in Section IV B.

A. Genetic Algorithms for Feature Selection from fMRI

Here we discuss the working of the GA, from the generation of the initial population pool, through the fitness function and genetic operators, to the parameter configuration used for feature selection.

The initial population is generated by populating a matrix with population-size rows and independent-variable (genome length) columns. The values in this matrix are integers randomly selected from the processed input data based on ranking, as shown in Fig. 2.

Figure 2. Structure of the initial population matrix.

The fitness of the population is estimated using the fitness function shown in Fig. 3. We have used the fitness function provided by the GA Toolbox, which maximizes the separability of two classes using a linear combination of the posterior probability and the empirical error rate of a linear classifier (classify).

Figure 3. Genetic Algorithm based feature selection.

We have chosen the Roulette Wheel Selection (RWS) method, in which a circle is divided into N sectors, the width of each sector being proportional to an individual's fitness value; random selection is then made as if rotating a roulette wheel [19]. Two point crossover is selected [20] with a given crossover rate: two points are selected on the parent strings, and everything between the two points is swapped between the parents to generate child strings. The mutation operator randomly alters the values of genes in a parent string.

978-1-4799-2572-8/14/$31.00 ©2014 IEEE
2014 IEEE International Advance Computing Conference (IACC)
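The experiments in the paper use Matlab's GA Toolbox; purely as an illustrative sketch (not the authors' code, and with a simplified genome representation assumed here), the roulette wheel selection, two point crossover, and uniform mutation operators described above could look like this in Python:

```python
import random

def roulette_wheel_select(population, fitnesses):
    """Pick one individual; selection probability is proportional to fitness."""
    total = sum(fitnesses)
    spin = random.uniform(0, total)          # where the "wheel" stops
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if spin <= running:
            return individual
    return population[-1]                     # guard against float rounding

def two_point_crossover(parent_a, parent_b, rate=0.8):
    """Swap the segment between two random cut points (crossover rate 0.8, as in Table I)."""
    if random.random() > rate:
        return parent_a[:], parent_b[:]
    i, j = sorted(random.sample(range(len(parent_a)), 2))
    child_a = parent_a[:i] + parent_b[i:j] + parent_a[j:]
    child_b = parent_b[:i] + parent_a[i:j] + parent_b[j:]
    return child_a, child_b

def uniform_mutation(genome, n_features, rate=0.01):
    """Replace each gene (a feature index) with a random index, with probability `rate`."""
    return [random.randrange(n_features) if random.random() < rate else g
            for g in genome]
```

In this sketch a genome is assumed to be a list of selected feature (voxel) indices; the parameter values mirror those in Table I.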


At each step of evolution, crossover and mutation are applied stochastically, so their probabilities of occurrence must be set a priori [21]. Uniform mutation is selected as the mutation function. The parameters configured for the genetic algorithm are shown in Table I.

Table I. GA Parameters

Genetic Algorithm Parameter | Value
Population Size | 48
Number of Generations | 100
Selection function | Roulette
Crossover function/rate | Two point crossover / 0.8
Mutation function/rate | Uniform / 0.01

Based on the above initial population and parameter configuration, the input is passed to the GA Toolbox's genetic algorithm function, which returns the best features. The genetic algorithm function is run multiple times (as it is stochastic) to obtain the best set of features, which can contribute significantly to the subsequent GNB classification.

B. Gaussian Naïve Bayesian for Cognitive State Classification

The best features selected by the GA are then given as input to the GNB for classifying the true class labels. The best features obtained from the GA are in matrix form [X × Y], where X denotes the observations (the data of the best features) and Y denotes the class labels for the observations. We have divided the dataset into two parts in the ratio 80-20: 80% of the data is used for training the GNB classifier and 20% is kept for testing. The GNB classifier is trained using the training data, and the classification accuracy of the model is estimated on the test data. A confusion matrix is prepared to obtain the classification accuracy.

IV. EXPERIMENTAL STUDY

The experimental setup along with the data set description is given in this section.

A. Experimental Setup and Data Preparation

The experiment is carried out on a 32-bit platform with an Intel 2.70 GHz processor and 4.00 GB RAM, running the Windows 7 operating system. The programs for the experiment are coded in Matlab R2010a (The MathWorks©). The GA toolbox functions provided by Matlab are used for feature selection. The fMRI data set is taken from Carnegie Mellon University (CMU)'s public StarPlus fMRI data repository. The data is for a single subject (`04847') and is partitioned into trials. During some of these intervals, the subject simply rested or gazed at a fixation point on the screen. In other trials, the subject was shown a picture and a sentence, and instructed to press a button to indicate whether the sentence correctly described the picture, as shown in Fig. 4.

Figure 4. Picture Sentence Study.

For these trials, the sentence and picture were presented in sequence, with the picture presented first on half of the trials and the sentence presented first on the other half. Forty such trials were available for each subject. The timing within each such trial is as follows:

• The first stimulus (sentence or picture) was presented at the beginning of the trial (image=1).
• Four seconds later (image=9) the stimulus was removed, replaced by a blank screen.
• Four seconds later (image=17) the second stimulus was presented. This remained on the screen for four seconds, or until the subject pressed the mouse button, whichever came first.
• A rest period of 15 seconds (30 images) was added after the second stimulus was removed from the screen. Thus, each trial lasted a total of approximately 27 seconds (approximately 54 images).

The images were collected every 500 msec. There are 54 trials and 2800 snapshots. The data is stored in a [54 × 1] cell array with one cell per 'trial' in the experiment. Each element in the cell array is an [N × V] array of observed fMRI activations, with 4698 voxels (features) per snapshot. A sample of voxel activity at a specific time course is shown in Fig. 5.

Figure 5. Voxel activity at a particular time course.

The genetic algorithm is applied to reduce the number of features (V) of each [N × V] array. During the initial population generation we have ignored Cond=0, which marks data to be ignored, and Cond=1, which indicates that the segment is a rest or fixation interval. The dataset of features and class labels supplied to the genetic algorithm is [4698 × 2196]. The number of samples used for training and testing is shown in Table II.

We have partitioned the training and test samples in an 80-20 ratio using 'cvpartition', based on the group/class label, for classification without Feature Selection (FS) and based on the number of selected samples (after applying GA FS).
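The paper's GNB stage (Subsection III B) is implemented in Matlab; as a minimal Python sketch only, not the authors' code, the classifier of Eq. (1) with per-feature Gaussian likelihoods can be written as:

```python
import numpy as np

def fit_gnb(X, y):
    """Estimate per-class priors, means, and variances from training data (Eq. (1))."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (len(Xc) / len(X),        # prior p(c)
                     Xc.mean(axis=0),         # per-feature Gaussian mean
                     Xc.var(axis=0) + 1e-9)   # per-feature variance (smoothed)
    return params

def predict_gnb(params, X):
    """Assign each row to the class maximizing the log posterior of Eq. (1)."""
    preds = []
    for x in X:
        scores = {}
        for c, (prior, mean, var) in params.items():
            # log p(c) + sum_j log N(x_j; mean_j, var_j): conditional independence
            log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)
            scores[c] = np.log(prior) + log_lik
        preds.append(max(scores, key=scores.get))
    return np.array(preds)
```

With an 80-20 split of a feature matrix and its labels (mirroring the 'cvpartition' step), training uses `fit_gnb` on the 80% portion and accuracy is the fraction of held-out rows where `predict_gnb` matches the true labels.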


Table II. Dataset for Training and Testing

Algorithm | Train | Test
GNB without FS | 1757 (80%) | 439 (20%)
GA FS + GNB | 907 (80%) | 226 (20%)
DT without FS | 1757 (80%) | 439 (20%)
GA FS + DT | 891 (80%) | 222 (20%)
SVM without FS | 1757 (80%) | 439 (20%)
MLP without FS | 1757 (80%) | 439 (20%)

All twenty-five Regions of Interest (ROIs), namely 'CALC', 'LDLPFC', 'LFEF', 'LIFG', 'LIPL', 'LIPS', 'LIT', 'LOPER', 'LPPREC', 'LSGA', 'LSPL', 'LT', 'LTRIA', 'RDLPFC', 'RFEF', 'RIPL', 'RIPS', 'RIT', 'ROPER', 'RPPREC', 'RSGA', 'RSPL', 'RT', 'RTRIA', and 'SMA', are considered in the experiment.

B. Methods for Comparative Study

This section gives a brief overview of the popular machine learning classifiers considered for our comparative study.

1) MLP: Although neural network classifiers are powerful and widely used, they rely on a number of parameter choices to specify the network architecture and to control the training process [17].

An MLP is a special type of feed-forward network with three or more layers and nonlinear transfer functions in the hidden layer neurons. MLPs are able to associate training patterns with outputs for nonlinearly separable data. Feed-forward networks are particularly suitable for applications in medical imaging such as fMRI data analysis, where the inputs and outputs are numerical and pairs of input/output vectors provide a clear basis for training in a supervised manner.

As shown in Fig. 6, the number of input units in the input layer is based on the number of features extracted from the fMRI images. The number of neurons in the output layer is one, to represent the classes, and the number of neurons in the hidden layers is decided as described in [18].

Figure 6. Three layer feed forward network.

2) SVM: The SVM is one of the most popular state-of-the-art maximum-margin classification algorithms used in fMRI studies. Given two classes, the SVM algorithm attempts to find a linear decision boundary (separating hyperplane) using the decision function

    D(u_r) = w · u_r + w0,

where w defines the linear decision boundary and is chosen to maximize the margin, the distance between the boundaries defined by D = +1 and D = -1, between the two class distributions.

During fMRI experimental design, class labels such as stimulus A and stimulus B are each assigned a unique class. The experiment consists of a series of brain images collected as the class label changes [16].

3) Decision Tree: Decision tree classifiers represent their classification knowledge in tree form, where each interior node is a test of an attribute. The basic DT algorithm builds a tree top-down using the standard greedy search principle, based on recursive partitioning. The partitioning algorithm includes stopping, splitting, and pruning rules [15].

The advantage of the DT classifier is its capability to break down a complex structure into a collection of simpler structures, thus providing a solution that is easy to interpret.

C. Parameter Setting

For the MLP, we have used the Matlab "nntraintool" to train the network; its training performance is shown in Fig. 7.

Figure 7. MLP training.

The parameters and options used in the Matlab tools for DT and SVM are given in Table III.

Table III. DT and SVM Parameters

Algorithm | Structure | Function
DT | Tree type | 'classregtree' (classification and regression tree)
SVM | 'Method' to separate hyperplane | 'QP' (Quadratic Programming)

V. RESULTS AND ANALYSIS

The experimental results and the comparison study are explained here.


A. Classification Accuracy

The confusion matrix of the proposed method on the test data is listed in Table IV. The element in the ith row and jth column gives the number of examples of class i assigned to class j by the classifier. The class 'Picture' denotes pictures before sentences and the class 'Sentence' denotes sentences before pictures.

Table IV. Confusion Matrix

Algorithm | Actual | Predicted Picture | Predicted Sentence
GNB without FS | Picture | 129 (TP) | 91 (FN)
 | Sentence | 33 (FP) | 186 (TN)
GA FS + GNB | Picture | 217 (TP) | 2 (FN)
 | Sentence | 6 (FP) | 1 (TN)
DT without FS | Picture | 129 (TP) | 91 (FN)
 | Sentence | 96 (FP) | 123 (TN)
GA FS + DT | Picture | 214 (TP) | 3 (FN)
 | Sentence | 9 (FP) | 0 (TN)
SVM without FS | Picture | 190 (TP) | 30 (FN)
 | Sentence | 37 (FP) | 182 (TN)
MLP without FS | Picture | 199 (TP) | 21 (FN)
 | Sentence | 15 (FP) | 204 (TN)

The performance has been estimated using the following measures:

    Accuracy = (TP + TN) / (TP + TN + FP + FN) * 100%,    (2)
    Sensitivity = TP / (TP + FN) * 100%,    (3)
    Specificity = TN / (TN + FP) * 100%,    (4)

where TP (True Positives) = correctly classified positive cases, TN (True Negatives) = correctly classified negative cases, FP (False Positives) = incorrectly classified negative cases, and FN (False Negatives) = incorrectly classified positive cases.

Table V. Performance Values

Algorithm | Sensitivity | Specificity | Accuracy
GNB without FS | 40.94 | 84.93 | 71.75
GA FS + GNB | 99.54 | 14.28 | 96.46
DT without FS | 51.19 | 56.16 | 57.40
GA FS + DT | 100 | 0 | 94.69
SVM without FS | 51.07 | 83.1 | 84.73
MLP without FS | 49.37 | 93.15 | 91.79

B. Comparison Result

We have compared the results in terms of number of features vs. accuracy for the classifiers, as shown in Table VI. The computation times of the classifiers are shown in Table VII.

Table VI. Comparison Matrix (No. of Features vs. Accuracy)

Algorithm | Number of Features | Accuracy
GNB without FS | 2196 | 71.75
GA FS + GNB | 1133 | 96.46
DT without FS | 2196 | 57.40
GA FS + DT | 1113 | 94.69
SVM without FS | 2196 | 84.73
MLP without FS | 2196 | 91.79

Table VII. Comparison Matrix (No. of Features vs. Computation Time)

Algorithm | Sample Size (Features and class labels) | Computation Time (Seconds)
GNB without FS | [2196 × 2196] | 766.893
GA FS + GNB | [1133 × 2196] | 685.061
DT without FS | [2196 × 2196] | 14.588
GA FS + DT | [1113 × 2196] | 7.313
SVM without FS | [2196 × 2196] | 3470.071
MLP without FS | [2196 × 2196] | 24.414

Figure 8. Comparison Graph.

Fig. 8 illustrates the accuracy comparison of the classifiers. The generation vs. fitness value over 51 generations is shown in Fig. 9.
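The measures in Eqs. (2)-(4) follow directly from the confusion matrix counts; as a small worked sketch (not part of the paper's Matlab code), using the GNB-without-FS entries of Table IV:

```python
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn) * 100   # Eq. (2)

def sensitivity(tp, fn):
    return tp / (tp + fn) * 100                    # Eq. (3)

def specificity(tn, fp):
    return tn / (tn + fp) * 100                    # Eq. (4)

# GNB without FS (Table IV): TP=129, FN=91, FP=33, TN=186
print(round(accuracy(129, 186, 33, 91), 2))   # 71.75, matching Table V
print(round(specificity(186, 33), 2))          # 84.93, matching Table V
```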


Figure 9. GA fitness value with generation.

VI. CONCLUSIONS AND FUTURE RESEARCH DIRECTIONS

Research on feature selection techniques has been active for decades, attempting to improve well-known algorithms or to develop new ones [22]. In this paper we have used a genetic algorithm based feature selection approach to reduce the high dimensional fMRI data, which in turn helps reduce the computational cost. The selected best features, which are far fewer than in the original fMRI data set, are supplied as input to the GNB. This results in less computation time as well as optimal memory usage without sacrificing classification accuracy. The experimental results and comparison matrices (as shown in Fig. 8) emphasize the efficiency of the technique.

In summary, the major contributions of the proposed approach are: i) the enormous reduction in the number of features compared to the original fMRI data set; ii) the faster execution time in cognitive state classification; and iii) high classification accuracy compared to popular machine learning classifiers, which strongly argues for the potential of the technique to be explored further. Potential research directions are: i) applying the approach to multiple subjects and different cognitive tasks; ii) a comparative analysis of the GA-Bayesian approach against different feature selection techniques combined with Bayesian classifiers; and iii) although we have used powerful classifiers, extending the comparison study to other machine learning classifiers used in cognitive classification.

The proposed approach is effective in resolving a key fMRI data analysis issue: high dimensional data, a.k.a. the "curse of dimensionality". The significant reduction in data size optimizes the computation cost, which strongly argues for the usefulness of the technique.

ACKNOWLEDGMENT

The authors gratefully acknowledge the support of the Original Technology Research Program for Brain Science through the National Research Foundation (NRF) of Korea (NRF: 2010-0018948) funded by the Ministry of Education, Science, and Technology.

REFERENCES

[1] J. D. Haynes and G. Rees, "Decoding mental states from brain activity in humans," Nature Reviews Neuroscience, vol. 7, no. 7, pp. 523-534, 2006.
[2] G. Marrelec, H. Benali, P. Ciuciu, and J. B. Poline, "Bayesian estimation of the hemodynamic response function in functional MRI," in AIP Conference Proceedings, vol. 617, p. 229, May 2002.
[3] S. Parida and S. Dehuri, "Applying machine learning techniques for cognitive state classification," in IJCA Proceedings on International Conference in Distributed Computing and Internet Technology (ICDCIT), pp. 40-45, 2013.
[4] R. D. Raizada and N. Kriegeskorte, "Pattern information fMRI: New questions which it opens up and challenges which face it," International Journal of Imaging Systems and Technology, vol. 20, no. 1, pp. 31-41, 2010.
[5] A. Janecek, W. N. Gansterer, M. Demel, and G. Ecker, "On the relationship between feature selection and classification accuracy," Journal of Machine Learning Research - Proceedings Track, vol. 4, pp. 90-105, 2008.
[6] T. D. Wager and T. E. Nichols, "Optimization of experimental design in fMRI: a general framework using a genetic algorithm," NeuroImage, vol. 18, no. 2, pp. 293-309, 2003.
[7] J. Kershaw, B. A. Ardekani, and I. Kanno, "Application of Bayesian inference to fMRI data analysis," IEEE Transactions on Medical Imaging, vol. 18, no. 12, pp. 1138-1153, 1999.
[8] M. L. Raymer, W. F. Punch, E. D. Goodman, L. A. Kuhn, and A. K. Jain, "Dimensionality reduction using genetic algorithms," IEEE Transactions on Evolutionary Computation, vol. 4, no. 2, pp. 164-171, 2000.
[9] C. L. Huang and C. J. Wang, "A GA-based feature selection and parameters optimization for support vector machines," Expert Systems with Applications, vol. 31, no. 2, pp. 231-240, 2006.
[10] A. Kharrat, K. Gasmi, M. B. Messaoud, N. Benamrane, and M. Abid, "A hybrid approach for automatic classification of brain MRI using genetic algorithm and support vector machine," Leonardo Journal of Sciences, vol. 17, pp. 71-82, 2010.
[11] W. S. Geisler and R. L. Diehl, "A Bayesian approach to the evolution of perceptual and cognitive systems," Cognitive Science, vol. 27, no. 3, pp. 379-402, 2003.
[12] Z. Wang, R. M. Hope, Z. Wang, Q. Ji, and W. D. Gray, "Cross-subject workload classification with a hierarchical Bayes model," NeuroImage, vol. 59, no. 1, pp. 64-69, 2012.
[13] X. Wang, R. A. Hutchinson, and T. M. Mitchell, "Training fMRI classifiers to discriminate cognitive states across multiple subjects," in Advances in Neural Information Processing Systems, 2003.
[14] I. Rustandi, "Hierarchical Gaussian naive Bayes classifier for multiple-subject fMRI data," in NIPS Workshop: New Directions on Decoding Mental States from fMRI Data, 2006.
[15] M. Cernak, "A comparison of decision tree classifiers for automatic diagnosis of speech recognition errors," Computing and Informatics, vol. 29, no. 3, pp. 489-501, 2012.
[16] S. J. Peltier, J. M. Lisinski, D. C. Noll, and S. M. LaConte, "Support vector machine classification of complex fMRI data," in Engineering in Medicine and Biology Society (EMBC 2009), Annual International Conference of the IEEE, pp. 5381-5384, 2009.
[17] L. I. Kuncheva and J. J. Rodríguez, "Classifier ensembles for fMRI data analysis: an experiment," Magnetic Resonance Imaging, vol. 28, no. 4, pp. 583-593, 2010.
[18] J. Jiang, P. Trundle, and J. Ren, "Medical image analysis with artificial neural networks," Computerized Medical Imaging and Graphics, vol. 34, no. 8, pp. 617-631, 2010.
[19] D. Sharma, V. Singh, and C. Sharma, "GA based scheduling of FMS using roulette wheel selection process," in Proceedings of the International Conference on Soft Computing for Problem Solving (SocProS 2011), pp. 931-940, 2012.
[20] K. F. Man, K. S. Tang, and S. Kwong, "Genetic algorithms: concepts and applications [in engineering design]," IEEE Transactions on Industrial Electronics, vol. 43, 1996.
[21] L. Scrucca, "GA: A package for genetic algorithms in R," Journal of Statistical Software, vol. 53, pp. 1-37, 2012.
[22] H. Liu and L. Yu, "Toward integrating feature selection algorithms for classification and clustering," IEEE Transactions on Knowledge and Data Engineering, vol. 17, no. 4, pp. 491-502, 2005.
