
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, VOL. 27, NO. 5, MAY 2023

A Domain Generative Graph Network for EEG-Based Emotion Recognition

Yun Gu, Xinyue Zhong, Cheng Qu, Chuanjun Liu, and Bin Chen

Abstract—Emotion is a human attitude experience and the corresponding behavioral response to objective things. Effective emotion recognition is important for the intelligence and humanization of brain-computer interfaces (BCI). Although deep learning has been widely used in emotion recognition in recent years, emotion recognition based on electroencephalography (EEG) is still a challenging task in practical applications. Herein, we propose a novel hybrid model that employs generative adversarial networks to generate potential representations of EEG signals while combining graph convolutional neural networks and long short-term memory networks to recognize emotions from EEG signals. Experimental results on the DEAP and SEED datasets show that the proposed model achieves promising emotion classification performance compared with state-of-the-art methods.

Index Terms—EEG emotion recognition, generative adversarial networks (GAN), graph convolutional neural networks (GCNN), latent representation, long short-term memory (LSTM).

Manuscript received 30 March 2022; revised 2 December 2022 and 28 January 2023; accepted 31 January 2023. Date of publication 3 February 2023; date of current version 5 May 2023. This work was supported in part by the National Natural Science Foundation of China under Grant 61801400. (Corresponding author: Bin Chen.)
Yun Gu, Xinyue Zhong, Cheng Qu, and Bin Chen are with the Institute of Chongqing Key Laboratory of Non-linear Circuit and Intelligent Information Processing, College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China (e-mail: [email protected]; [email protected]; [email protected]; [email protected]).
Chuanjun Liu is with the Department of Electronics, Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka 819-0395, Japan (e-mail: [email protected]).
Digital Object Identifier 10.1109/JBHI.2023.3242090

I. INTRODUCTION

Emotions reflect a person's current physiological and psychological state and have an important impact on people's cognition, communication and decision-making [1]. How to accurately and effectively identify emotions is of great practical importance to the research and development of BCI. Due to the high complexity and abstraction of emotions, the criteria for classifying emotions have not yet been unified. The difficulty of emotion classification lies in how to capture the subtle and varied signals of emotional change, which are usually elicited by stimuli from the external environment and accompanied by changes in physiological signals. Compared to non-physiological signals, such as facial expression images [2], [3], body postures [4], [5] and speech signals [6], [7], physiological signals (such as EEG, EOG, EMG and ECG) reflect the emotional state of the subject more realistically and are not easily disguised. EEG is a signal that records changes in scalp potential, reflecting the relationship between emotional state and cortical activity to a certain extent, and thus represents the emotional state of a person more directly. EEG has been intensively studied as a non-invasive BCI in recent years because of its high temporal resolution, noninvasiveness, simplicity of operation, low cost and good classification performance [1].

EEG signal features are mainly classified into three categories: time-domain, frequency-domain and time-frequency-domain features [8]. Since EEG devices usually acquire EEG signals in the time domain, time-domain features are the easiest to obtain and mainly include event-related potentials [9], signal statistics [10], [11], higher-order zero-crossing analysis [12], Hjorth parameters [13] and fractal dimension [14]. The frequency-domain characteristics of the EEG signal are closely related to human mental activity, and the signal is usually decomposed by the Fourier transform into several frequency bands, including the δ band (1–3 Hz), θ band (4–7 Hz), α band (8–13 Hz), β band (14–30 Hz) and γ band (31–50 Hz) [9], [15], [16], [17]. Features such as power spectral density, event-related synchronization, event-related desynchronization, higher-order spectra, differential entropy (DE), differential asymmetry (DASM), rational asymmetry (RASM) and energy spectrum (ES) are then extracted from each frequency band. Many studies use feature selection to extract more discriminative features [18], [19], [20]. However, the Fourier transform operates over the entire time domain, so it cannot localize the moments corresponding to each frequency component of a non-stationary signal. To capture both the global and local information of the signal, time-frequency-domain features are more often used, and the sliding-window method is applied to process the signal over different time periods and combine it with time-domain features. Shi et al. [21] first proposed the DE feature and verified that DE computed on five frequency bands characterizes EEG well. Duan et al. [22] extracted DE, DASM, RASM and ES features from multi-channel EEG data and combined them with machine learning to obtain good classification results.
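Under the common assumption that each band-filtered EEG segment is approximately Gaussian, DE reduces to the closed form 0.5·ln(2πeσ²), where σ² is the signal variance within that band. The following sketch (our illustration, not the authors' code; the band edges, filter order and Gaussian assumption are ours) shows how band-wise DE features of the kind used throughout this paper can be computed:

```python
# Illustrative sketch: band-wise differential entropy (DE) under a Gaussian
# assumption, DE = 0.5 * ln(2*pi*e*sigma^2).
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {"theta": (4, 7), "alpha": (8, 13), "beta": (14, 30), "gamma": (31, 50)}

def band_de(eeg, fs=128):
    """eeg: (channels, samples) array for one window; returns (bands, channels) DE."""
    feats = []
    for low, high in BANDS.values():
        b, a = butter(4, [low, high], btype="bandpass", fs=fs)
        filtered = filtfilt(b, a, eeg, axis=-1)
        var = filtered.var(axis=-1) + 1e-12           # per-channel band power proxy
        feats.append(0.5 * np.log(2 * np.pi * np.e * var))
    return np.stack(feats)                            # shape: (FN bands, CN channels)

window = np.random.randn(32, 6 * 128)                 # e.g., a 6-s, 32-channel DEAP window
print(band_de(window).shape)                          # (4, 32)
```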
Currently, researchers usually describe emotion with either discrete or continuous models. The discrete model divides emotions into a limited set of basic emotions [23]. In contrast, based on cognitive evaluation, the continuous model divides the emotion space into the two dimensions of valence-arousal (VA) [24] or the three dimensions of valence-arousal-dominance (VAD) [25], where valence indicates the degree of pleasure, arousal indicates the degree of activation, and dominance indicates the degree of control. Nowadays, the most popularly applied model is the Circumplex Model of Affect, which includes only valence and arousal, so the VA model has been studied more extensively than the VAD model. Thus, our study is based on the Circumplex Model of Affect.

With the appearance of deep learning, deep learning methods have been widely used to solve the emotion classification of EEG signals. For example, Yang et al. [26] used multi-band DE features as EEG features and employed a continuous CNN to deal with the EEG emotion recognition problem. Liu et al. [27] put forward a spatial-temporal convolution attention neural network which fuses the spatial-temporal features of EEG signals with the weights of dual attention learning for emotion classification. Tao et al. [16] proposed an attention-based convolutional recurrent neural network (ACRNN) for EEG-based emotion recognition. To learn the correlation between EEG signals and other physiological signals, Ma et al. [28] proposed a multimodal residual LSTM network (MM-ResLSTM) which shares the weights of each modality in each layer of the LSTM to learn the correlation between EEG and the other physiological signals and to obtain deep feature representations related to emotions. Wang et al. [29] proposed a multimodal LSTM combined with the traditional supervised classification loss function to significantly improve the effectiveness of emotion classification on the SEED dataset. Yang et al. [30] used CNN modules to convert the EEG signal sequences into two-dimensional sequences for extracting channel correlations between EEG electrodes, with LSTM modules to extract contextual information.

GCNN, as a hot research topic in the deep learning field, has also been introduced into EEG-based emotion recognition to alleviate the subject variability problem. Song et al. [9] used graphs to model multi-channel EEG features by optimizing a weighted graph of the functional relationships between each pair of electrodes in an EEG device. Focusing on the strength of the functional relationships between each pair of electrodes, they proposed the dynamical GCNN (DGCNN) for the first time and showed good performance on the SEED and DREAMER datasets. Zhang et al. [17] introduced sparse constraints to modify the DGCNN model, solving a constrained minimization problem to ensure the convergence of the network and improve emotion classification performance. Wang et al. [31] introduced the broad learning system and proposed a model that combines the dynamical GCNN with the broad learning system.

Although most studies have already obtained high emotion classification accuracies in subject-dependent experiments, the variability between individuals is large, so many EEG emotion recognition models cannot obtain satisfactory results. For the subject-independent task, environmental changes, individual needs and cognition have a critical impact on the emotions of individuals. To alleviate individual variability in EEG emotion recognition, Yin et al. [15] combined GCNN with LSTM to extract graph-domain and time-domain features from EEG signals and obtained promising results on the DEAP dataset. Nevertheless, the performance of the model on subject-independent tasks is unsatisfactory. Recently, inspired by the research of Goodfellow et al. [32], researchers have shifted from the problem of subject variability to the adaptation problem between the source and target domains, reducing their distribution differences. Due to the asymmetry between the left and right hemispheres of the brain, Li et al. [33] proposed a bi-hemispheres domain adversarial neural network (BiDANN) model built on an adversarial mechanism. The network maps the EEG signals of the left and right brain hemispheres into easily distinguishable feature spaces, making the feature representation of the data easier to classify. It applies one global and two local domain discriminators in the prediction process to reduce the difference in distribution between testing and training data. Ma et al. [34] proposed a domain residual network based on the domain-adversarial network [35]. A domain generalization approach is introduced to reduce the effect of subject variability in emotion recognition. The model structure is similar to that of a residual network, with the advantage that it is a domain generalization framework that does not require any information about the subjects in the target domain.

Herein, we introduce the idea of generative adversarial learning into a hybrid model of GCNN and LSTM, called the Domain Generative Graph Network (DGGN). Domain discriminators and feature generators are employed to mitigate the differences in feature distributions between the source and target domains in order to perform the EEG emotion classification task. The main contributions of this work are summarized as follows:
1) We construct a dynamic adjacency matrix from the changing EEG features to describe the intrinsic relationships between different brain channels, so as to extract deep structural features.
2) A novel graph-convolution-based deep learning framework is proposed, combining GCNN and LSTM, which applies an adversarial learning strategy to generate potential representations of EEG signals.
3) The proposed model achieves competitive performance compared to the state-of-the-art models on two benchmark EEG datasets.
The remainder of this paper is organized as follows. Section II gives a brief introduction to the background of the model. Our model is introduced in Section III. The experiments are described in detail in Section IV. Finally, conclusions and future work are given in Section V.

Fig. 1. The schematic framework of DGGN. EEG source inputs: the extracted feature sequence with the size of the sliding window. EEG target inputs: the next sample of the extracted feature sequence. After a series of pre-processing and DE feature extraction, we divide the EEG raw inputs into EEG source inputs and EEG target inputs, whose domain labels are 0 and 1, respectively. The EEG source inputs generate feature vectors through G, while D maximizes the similarity between the two inputs, so as to make the generated feature vectors more discriminating. Finally, the optimized feature vectors are sent to C to predict the emotion.

II. PRELIMINARY RELATED WORK

In this section, we briefly introduce some preliminary knowledge about GCNN, LSTM and GAN, which are the basis of the proposed model.

A. GCNN

GCNN is a neural network for revealing the complex dependencies inherent in graph-structured data sources. GCNN allows greater flexibility and a wider representation space for reasoning from graph-embedded node and edge information. The great success of GCNN is partly attributed to the fact that GCNN provides a fusion strategy to learn node embeddings based on topology and node features, and its fusion process is supervised by an end-to-end learning framework. However, the ability of GCNN to fuse network topology and node features is limited, and the biggest obstacle is that the correlation between data and classification tasks is usually very complex and unknowable [36]. Hence, it is still a challenge to overcome.
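As a concrete reference for the graph-convolution operation discussed above, the sketch below implements one generic GCN layer of the Kipf-Welling form H' = σ(D̂^(-1/2)(A+I)D̂^(-1/2)HW); it is an illustrative baseline of the technique, not the specific spectral formulation used by DGGN, and the toy adjacency is random:

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h, adj):
        a_hat = adj + torch.eye(adj.size(-1), device=adj.device)   # add self-loops
        deg_inv_sqrt = a_hat.sum(-1).clamp(min=1e-6).pow(-0.5)
        a_norm = deg_inv_sqrt.unsqueeze(-1) * a_hat * deg_inv_sqrt.unsqueeze(-2)
        return torch.relu(self.lin(a_norm @ h))

# Example: 32 EEG channels (nodes), 4 frequency-band features per node.
h = torch.randn(32, 4)
adj = torch.rand(32, 32); adj = (adj + adj.T) / 2                   # symmetric toy adjacency
print(GCNLayer(4, 16)(h, adj).shape)                                # torch.Size([32, 16])
```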

B. LSTM

Precisely because of the complex correlation between data and classification tasks and the redundancy of the data, the strategy of fusing recurrent neural networks, which happen to excel on EEG temporal data, came into being. LSTM addresses the problem that the output of a network depends only on the current input, without considering the interaction of inputs at different moments [37]. To solve the long-term dependence problem that can occur in recurrent neural network back-propagation, gating units and linear connections were introduced. LSTM aims to map the input sequence to a series of potential representations through a complex chain of neural network transformations, adaptively remembering and forgetting information over the entire sequence.

C. GAN

In practical applications of emotion recognition, it is still challenging to solve the problem of subject variability. GAN is widely used in research fields such as computer vision and natural language processing, and aims to generate data that do not exist in the real world but whose distribution is similar to that of real data [32].

GAN consists of a generator and a discriminator and is inspired by game theory. The goal of the generator is to generate a picture as realistic as possible to cheat the discriminator, while the goal of the discriminator is to separate the pictures generated by the generator from the real ones. In this way, GAN constitutes a dynamic "game process" that reaches the optimal state when the discriminator cannot determine whether the data comes from the real dataset or from the generator.

TABLE I: MAIN SYMBOLS AND CORRESPONDING DEFINITIONS IN DGGN (table content not included in this extraction)

III. METHODS

Before introducing the DGGN model, we first provide the notations and definitions of the main parameters (see Table I). In addition, we use the subscript symbols G, D and C to denote the generator, discriminator and classifier, respectively; for example, θ_G denotes the learnable parameters of the generator.

To further enhance the discriminative ability, our model employs three main modules: G, the feature reconstruction task, which aims to obtain the potential spatial representation of the EEG signal; D, the output discrimination task, trained adversarially and jointly with G, which aims to learn the feature distribution of the source domain and reduce the possible feature distribution gap between the source and target domains; and C, the output prediction task, which is trained jointly with the trained G to predict the emotion. The schematic framework of the DGGN model is given in Fig. 1.

We obtain two types of input signals from DE features through a series of pre-processing steps on the raw EEG signals. One of the input signals is the EEG source input, which represents the extracted feature sequence with the size of the sliding window. The other is the EEG target input, which represents the next sample of the extracted feature sequence. The source input is conveyed to G, which is pre-trained to minimize the error in reproducing the EEG signals, inducing the network to learn a compressed representation that captures the most significant statistical information of the input. For unconstrained optimization in the latent space, the latent vectors must be able to represent valid EEG signals that capture the emotional attributes. The EEG signal features are encoded into the continuous representation of G, which needs to be relevant to our optimization goal. We train D with the label set {0, 1} as the binary domain label, where the source domain label is set to 0 and the target domain label is set to 1. D determines the relevance of this potential vector to the EEG target input, and G is optimized continuously based on gradients in the continuous potential space. By maximizing the difference between them, we can obtain more critical common features. Finally, C is used to predict the emotion state of the optimized potential vector of each EEG signal.

A. Generator

The goal of G is to improve the EEG signal classification performance by re-extracting more discriminative features from the already pre-processed EEG signal features. The framework of G is shown in Fig. 2. Before the EEG signal features are sent to G, pre-processing of the raw EEG signal is necessary. We adopt the same pre-processing method as [15] to process the raw EEG signal. Yang et al. [30] showed that removing the baseline signal is a simple and effective pre-processing step to enhance the accuracy of emotion recognition. Firstly, the corresponding subject's resting baseline signal is subtracted from the stimulus signal data in the EEG data. A sliding window of 6 s with a step size of 3 s is adopted to divide the noise-reduced EEG data and obtain a set of DE feature matrices. To perform unconstrained optimization in the potential space, the neurons in the potential space must be effectively encoded. Given the powerful nonlinear representation capability of GCNN and the superiority of LSTM in processing temporal data, we choose a joint GCNN-LSTM framework to construct the generator network.

Fig. 2. The framework of G. Upper: the architecture of the graph domain feature extractor. Lower: the architecture of the time domain feature extractor. The dynamic adjacency matrices are constructed by utilizing the changing EEG features to obtain the graph input signals, which are sent to multiple GCN layers to obtain the graph vectors with the graph domain features. Finally, the graph vectors are sent to LSTMs to obtain the EEG latent outputs with the time domain features.

G is divided into two parts: the graph domain and time domain feature extractors. GCN layers are used to extract the dynamic relationships between EEG channels over a period of time, and the graph domain features are extracted from the DE feature matrix of the EEG data over T seconds. We assume that X = [X_1, · · · , X_l] ∈ R^(T×FN×CN) denotes a segment of EEG source input, where FN denotes the length of the EEG feature vector, i.e., the number of frequency bands, CN denotes the number of EEG feature vectors, i.e., the number of channels, and T denotes the length of the EEG feature sequence. We regard the channels of the DE signals as nodes of the graph and the frequency band features as node features. The graph input signal is determined by X and A, where the dynamic adjacency matrix A is determined by the distance function [38]. The graph features

X_g = [X_1^g, · · · , X_l^g] = [g(X_1), · · · , g(X_l)]    (1)

extracted by the GCN layers are fed into the LSTM second by second. To study the time dependence between the EEG feature vectors and further obtain more effective temporal features X_l, the graph domain features X_g extracted by the GCNN are processed by the LSTM, and the obtained potential representation of the EEG features has certain discriminatory properties:

X_l = [X_1^l, · · · , X_l^l] = [l(X_1^g), · · · , l(X_l^g)]    (2)
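A minimal sketch of how such a graph-then-LSTM generator could be assembled is given below. It is our own illustration under stated assumptions: the dynamic adjacency matrix is approximated by a Gaussian kernel of pairwise node distances (a stand-in for the distance function of [38]), self-loops are omitted, and the layer sizes are placeholders rather than the paper's settings:

```python
import torch
import torch.nn as nn

def dynamic_adjacency(x, sigma=1.0):
    """Gaussian kernel of pairwise node distances (assumed stand-in for the
    distance function of [38]). x: (CN, FN) node features for one time step."""
    d = torch.cdist(x, x)
    return torch.exp(-d.pow(2) / (2 * sigma ** 2))

class Generator(nn.Module):
    """Per-second graph feature extractor followed by an LSTM over the T steps."""
    def __init__(self, fn=4, cn=32, g_dim=16, latent_dim=64):
        super().__init__()
        self.w = nn.Linear(fn, g_dim, bias=False)        # graph-convolution weight
        self.lstm = nn.LSTM(cn * g_dim, latent_dim, batch_first=True)

    def graph_conv(self, nodes, adj):
        deg = adj.sum(-1).clamp(min=1e-6).pow(-0.5)
        a_norm = deg.unsqueeze(-1) * adj * deg.unsqueeze(-2)
        return torch.relu(self.w(a_norm @ nodes))        # graph-domain features, cf. eq. (1)

    def forward(self, x):                                # x: (T, FN, CN)
        steps = []
        for xt in x:                                     # one DE feature matrix per second
            nodes = xt.transpose(0, 1)                   # (CN, FN): channels as graph nodes
            g = self.graph_conv(nodes, dynamic_adjacency(nodes))
            steps.append(g.reshape(-1))
        seq = torch.stack(steps).unsqueeze(0)            # (1, T, CN*g_dim)
        out, _ = self.lstm(seq)
        return out[:, -1]                                # latent representation, cf. eq. (2)

print(Generator()(torch.randn(6, 4, 32)).shape)           # torch.Size([1, 64])
```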
B. Discriminator

The goal of D is to eliminate the differences in the feature distribution between the source and target domains. We use simple multi-layer perceptron (MLP) layers to construct D. G is trained to deceive D, while D is trained to make the correct judgment, i.e., to distinguish the generated fake samples from the real ones. The two learn against each other so that the distribution of the fake data generated by G approximates that of the real data as closely as possible. The loss function of GAN can be expressed as follows:

min_G max_D L(D, G) = E_{x∼P_data(x)}[log D(x)] + E_{z∼P_z(z)}[log(1 − D(G(z)))]    (3)

The GAN training procedure is described in Algorithm 1.

C. Classifier

The features of the EEG signal are obtained from the enhanced potential representation of the input produced by the generator trained with GAN. The feature visualization experiments (see the Experiments section for details) verify that the trained generator can effectively separate the EEG signal features in the hidden vector space. C consists of the trained G and MLP layers. The hidden vector representation is mapped to the corresponding labels through a nonlinear softmax activation function. The loss function of C is expressed as:

L_C = cross_entropy(p, l)    (4)

where p is the classifier prediction result and l is the classification label. The cross-entropy function is a commonly used loss function that measures the discrepancy between the model prediction and the true label. The training algorithm of C is described in Algorithm 2.

Algorithm 1: GAN Training Algorithm.
  Input: Source data set z ∈ R^(N×T×FN×CN) and target data set x ∈ R^(N×FN×CN); source domain label set {1} and target domain label set {0}.
  Output: Optimized parameters θ̂_D, θ̂_G.
  1: Initialize the parameters of the discriminator and the generator, the learning rate λ and the batch size k.
  2: repeat
  3:   Update the discriminator by maximizing its stochastic gradient: θ_D ← θ_D − λ ∂L_D(x, z)/∂θ_D
  4:   Update the generator by minimizing its stochastic gradient: θ_G ← θ_G − λ ∂L_G(x, z)/∂θ_G
  5: until the iteration satisfies the predefined condition.

Algorithm 2: The Training Algorithm of the Classifier.
  Input: Source data set z ∈ R^(N×T×FN×CN); ground-truth label set Y_s ∈ R^(N×M) of the source data set z.
  Output: Optimized parameters θ̂_D, θ̂_G.
  1: Initialize the model parameters, the learning rate λ and the batch size k.
  2: Train GAN with Algorithm 1.
  3: Cut the generator (G) of the trained GAN and attach it to the MLP.
  4: repeat
  5:   Update the classifier by minimizing its stochastic gradient: θ_C ← θ_C − λ ∂L_C(z)/∂θ_C
  6: until the iteration satisfies the predefined condition.
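The sketch below mirrors the two-stage procedure of Algorithms 1 and 2 in PyTorch. It is an assumption-laden illustration rather than the released implementation: generator here is any module mapping a batch of source segments to (batch, latent_dim) latent codes, and the linear projection used to compare raw target samples with generated latent codes is our own simplification.

```python
import torch
import torch.nn as nn

def train_dggn(generator, source_loader, target_loader, target_dim, num_classes,
               latent_dim=64, epochs=10, lr=1e-4):
    """Sketch of Algorithms 1-2. source_loader yields (segments, labels); target_loader
    yields raw target samples whose flattened size is target_dim."""
    disc = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
    clf = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, num_classes))
    proj = nn.Linear(target_dim, latent_dim)       # assumed embedding of the target sample
    bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()
    opt_d = torch.optim.RMSprop(list(disc.parameters()) + list(proj.parameters()), lr=lr)
    opt_g = torch.optim.RMSprop(generator.parameters(), lr=lr)

    # Algorithm 1: adversarial pre-training (source domain label 0, target label 1).
    for _ in range(epochs):
        for (z_src, _), x_tgt in zip(source_loader, target_loader):
            fake = generator(z_src)
            real = proj(x_tgt.flatten(1))
            d_loss = bce(disc(fake.detach()), torch.zeros(len(fake), 1)) \
                   + bce(disc(real), torch.ones(len(real), 1))
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()
            g_loss = bce(disc(generator(z_src)), torch.ones(len(z_src), 1))
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # Algorithm 2: cut the trained G, attach the MLP classifier, minimize eq. (4).
    opt_c = torch.optim.RMSprop(list(generator.parameters()) + list(clf.parameters()), lr=lr)
    for _ in range(epochs):
        for z_src, y in source_loader:
            loss_c = ce(clf(generator(z_src)), y)   # L_C = cross_entropy(p, l)
            opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    return generator, clf
```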

IV. EXPERIMENTS AND DISCUSSION

In this part, we first briefly introduce the datasets and describe the pre-processing steps. Then, we perform two types of experiments to evaluate the EEG emotion classification performance of the proposed DGGN model: subject-dependent and subject-independent experiments. Finally, we provide a brief discussion of the experimental results.

A. Datasets

To validate the performance of the DGGN model, we conducted extensive experiments on two benchmark emotion recognition datasets: DEAP [39] and SEED [40].

DEAP: The DEAP dataset is a multimodal dataset of EEG signals, peripheral physiological signals and corresponding scores recorded from 32 subjects (16 males and 16 females, mean age 26.9, age range 19 to 37) after watching 40 one-minute music videos with different emotional orientations. The scores are based on the Self-Assessment Manikin (SAM), which contains ratings of valence, arousal, dominance and liking on a scale of one to nine, with the magnitude indicating the strength of the indicator.

SEED: The SEED dataset consists of EEG data from 15 subjects (7 males and 8 females, mean age 23.27, STD 2.37). While the participants watched 15 four-minute Chinese film clips in three sessions, the EEG data were recorded with a 62-channel recording cap and down-sampled to 200 Hz. The EEG data were later processed with a 0.3–50 Hz band-pass filter and manually checked to remove EOG and EMG noise. Three categories of emotions (positive, neutral and negative) were considered during the experiments.

B. Data Preprocessing

For the DEAP dataset, we took the raw EEG signals from the 32 channels in the dataset, with oculomotor, eye-movement and power-line noise removed, down-sampled at 128 Hz. Since the 4–45 Hz band is associated with emotional activity [1], the irrelevant band signals are filtered out using a band-pass filter. We extracted the DE features of the EEG signals on four bands (θ band (4–7 Hz), α band (8–13 Hz), β band (14–30 Hz) and γ band (31–50 Hz)), with each trial containing a 60 s stimulated signal and a 3 s resting signal. Specifically, the DE features were computed with a suitable sliding window to obtain more data samples and more reliable results. The sliding window size was set to 6 s, with a 3 s step size. For the arousal, valence and dominance self-ratings, we took a threshold at the median value of 5 for binary classification, and thresholds of 3 and 6 were used for ternary classification.

For the SEED dataset, the same feature extraction procedure was applied. For each subject, EEG recordings were taken in three chronologically discontinuous sessions, each of which repeated the same experiment. To ensure consistency of assessment, we only used the first session of each subject in our experiments, as the first session reflects a more reliable mood than the two subsequent sessions. In addition, since the SEED dataset does not contain arousal information, we only recognized positive and negative emotions.
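A sketch of the windowing and labeling described above is shown below (our code, not the authors'); the per-second baseline averaging is an assumption about how the resting signal is subtracted, while the 6 s window, 3 s step and the binary threshold of 5 follow the text:

```python
import numpy as np

def windows_and_labels(trial, rating, fs=128, base_s=3, win_s=6, step_s=3):
    """trial: (channels, samples) with the first base_s seconds being the resting
    baseline; rating: 1-9 self-assessment score for one dimension."""
    base = trial[:, :base_s * fs]
    stim = trial[:, base_s * fs:]
    # Subtract the per-second mean of the resting baseline from the stimulus signal.
    baseline = base.reshape(base.shape[0], base_s, fs).mean(axis=1)
    stim = stim - np.tile(baseline, (1, stim.shape[1] // fs))
    # Cut 6-s windows with a 3-s step.
    win, step = win_s * fs, step_s * fs
    starts = range(0, stim.shape[1] - win + 1, step)
    segs = np.stack([stim[:, s:s + win] for s in starts])
    labels = np.full(len(segs), int(rating > 5))        # binary split at the median of 5
    return segs, labels

segs, labels = windows_and_labels(np.random.randn(32, 63 * 128), rating=6.5)
print(segs.shape, labels[:3])                            # (19, 32, 768) [1 1 1]
```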
C. Experimental Protocol

To fully evaluate our model, we implemented two types of experiments: subject-dependent and subject-independent experiments. For the subject-dependent experiments, training and testing data were obtained from the same subject. For the subject-independent experiments, training and testing data were obtained from different subjects.
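The two protocols can be expressed with standard scikit-learn splitters, as sketched below; this is a generic illustration of the split logic, not the exact fold construction used in the paper:

```python
import numpy as np
from sklearn.model_selection import KFold, LeaveOneGroupOut

# Toy data: 600 windows from 30 subjects (20 windows each).
X = np.random.randn(600, 4, 32)
y = np.random.randint(0, 2, 600)
subjects = np.repeat(np.arange(30), 20)

# Subject-dependent: split the windows of a single subject with k-fold CV.
mask = subjects == 0
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X[mask]):
    pass  # fit on X[mask][train_idx], evaluate on X[mask][test_idx]

# Subject-independent: hold out all windows of one subject at a time.
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    pass  # the test fold contains only the held-out subject's windows
```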

TABLE II: PARAMETERS SETTING OF DGGN MODEL (table content not included in this extraction)

D. Model Training Platform and Parameters Setting

In our experiments, the proposed model was implemented in the PyTorch 1.8 framework and trained on an NVIDIA GeForce RTX 3080 GPU with the RMSProp optimizer to minimize the cross-entropy loss. The learning rate was initialized to 0.0001 and adjusted periodically using cosine annealing decay [41] until training was stopped when the model parameters reached the optimum. The specific parameters of DGGN model training are shown in Table II. The source code of DGGN is available at https://github.com/greeyun/DGGN.
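This optimizer configuration corresponds to the following PyTorch setup (a sketch; the stand-in model, batch contents and T_max are placeholders, while the RMSProp optimizer, the initial learning rate of 0.0001 and the cosine-annealing schedule follow the text):

```python
import torch
import torch.nn as nn

model = nn.Linear(128, 2)                       # stand-in for the full DGGN classifier
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50)
criterion = nn.CrossEntropyLoss()

for epoch in range(50):
    x, y = torch.randn(16, 128), torch.randint(0, 2, (16,))
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    optimizer.step()
    scheduler.step()                            # cosine-annealed learning-rate decay [41]
```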
E. Experiments on DEAP Dataset

For the DEAP dataset, we used the same experimental protocol as [15] to evaluate the DGGN model for a fair comparison. The same 3 × 5-fold cross-validation was applied to validate the DGGN model, where the average performance over the 5-fold validation is regarded as the final experimental result.

In the subject-dependent experiments, we compared its performance with the latest deep learning algorithms [15], [16], [28], [30], [42], [43], [44], [45], [46], [47].

Fig. 3 shows the accuracy of valence and arousal classification using our model versus the other methods for each subject. It can be seen that the classification accuracy of the DGGN model is relatively stable and significantly higher than that of the other models. It is noteworthy that most of the models, including ours, have lower classification accuracy on the 22nd subject, which may be due to the large variability in environmental changes, individual needs and cognition experienced by different subjects. In addition, certain errors in the experiment may also lead to the unsatisfactory results.

Fig. 3. Average accuracy using different models on (a) valence and (b) arousal classification tasks for each subject.

TABLE III: COMPARISONS OF THE AVERAGE ACCURACIES AND STANDARD DEVIATIONS (%) OF SUBJECT-DEPENDENT EXPERIMENTS ON DEAP DATASET USING DIFFERENT METHODS (table content not included in this extraction)

Table III shows the average classification results of the DGGN model and the other models. It can be seen that the performance of our model is significantly better in terms of the average accuracy of the binary classification of subject-dependent valence and arousal, with improvements of 2.60% and 3.81%, respectively. In addition, DGGN yielded a promisingly reduced standard deviation (to 2.23% and 2.56%), which implies that the model has reliable stability.

To further demonstrate the recognition ability of the DGGN model, we used the t-SNE embedding method to visualize the relevant feature distributions of the first three subjects (Fig. 4). It can be clearly seen that the features learned by DGGN have better separability between the positive and negative emotions.

Fig. 4. Visualization of t-SNE before and after feature extraction on DEAP dataset. (a-c) The original and (d-f) DGGN-extracted data feature distributions. (The blue and red indicate the negative and positive emotion, respectively.)
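A visualization of this kind can be reproduced with scikit-learn's t-SNE, as sketched below with random placeholder features standing in for the DGGN latent vectors:

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

feats = np.random.randn(200, 64)                 # e.g., latent vectors of one subject
labels = np.random.randint(0, 2, 200)            # 0 = negative, 1 = positive

emb = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(feats)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, cmap="coolwarm", s=8)
plt.title("t-SNE of latent EEG features")
plt.show()
```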

TABLE IV: COMPARISONS OF THE AVERAGE ACCURACIES AND STANDARD DEVIATIONS (%) OF SUBJECT-INDEPENDENT EXPERIMENTS ON DEAP DATASET USING DIFFERENT METHODS (table content not included in this extraction)

TABLE V: THE AVERAGE ACCURACIES, F1 SCORE AND STANDARD DEVIATIONS (%) OF SUBJECT-INDEPENDENT EXPERIMENTS ON DEAP DATASET (table content not included in this extraction)

In the subject-independent experiments, we also conducted comparison experiments with the state-of-the-art methods using the same 5-fold cross-validation (Table IV). We found that DGGN has the best recognition performance on the DEAP dataset. This can be explained by the adversarial training of DGGN, which can effectively extract more discriminative features from EEG signals. Thus, the variability of the subjects' individual features is reduced, and subject-independent emotion recognition can be addressed more effectively. Moreover, to demonstrate the insensitivity of DGGN to EEG labels, we extended the classification labels to three types for each dimension (Table V).

The function of the adversarial generative learning introduced by DGGN is to obtain potentially representative information about the subject individuals. Meanwhile, the variability between subject individuals can be minimized more discriminatively to improve the effectiveness of DGGN. To verify the validity of our adversarial learning strategy, we implemented a contrast model, DGGN-S, as a baseline without adversarial learning. As illustrated in Table IV, DGGN-S exhibits an excellent performance on EEG emotion classification. This may be because the optimized GCNN kernel within DGGN-S can alleviate the overfitting problem of the local neighborhood structure of graphs with a very wide distribution of node degrees. In addition, DGGN achieved the best classification performance (valence 94.87%, arousal 94.42% and dominance 94.78%), which is the highest binary classification result of the subject-independent experiments on the DEAP dataset. Hence, we believe that the GAN in our proposed model can effectively reduce subject variability and is effective in improving EEG emotion recognition performance.

The average accuracies are 96.14%, 94.05% and 92.07% (positive) and 86.58%, 89.23% and 90.99% (negative), as shown in Fig. 5. The classification accuracy of high valence and arousal is better than that of low valence and arousal, which means that positive emotions are more easily identified by the model. These results are similar to previous studies [33], [40].

In addition, under the existing platform setting, the complexity of the proposed model is measured in floating-point operations (FLOPs). The model requires 3.40 MFLOPs of computation, and the memory occupied by the model is 1.2 MB. The pre-training time is 7.21 s per epoch and the training time is 2.59 s per epoch. We can see that the proposed model achieves high accuracy with low complexity and high memory-usage efficiency under the condition of changing the traditional training strategy. Hence, the effectiveness of DGGN for EEG emotion recognition is further demonstrated.
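Parameter counts and the in-memory size of a PyTorch model can be checked directly, as sketched below with a toy module (FLOP counts generally require a separate profiling tool and are not shown):

```python
import torch.nn as nn

def model_footprint(model: nn.Module):
    """Count trainable parameters and their in-memory size in megabytes."""
    n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
    n_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
    return n_params, n_bytes / 1024 ** 2

toy = nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 2))
print(model_footprint(toy))                      # e.g. (32962, 0.125...)
```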
Fig. 5. The confusion matrix of DEAP and SEED datasets on subject-independent experiments using the DGGN model.

TABLE VI: COMPARISONS OF THE AVERAGE ACCURACIES AND STANDARD DEVIATIONS (%) ON SEED DATASET USING DIFFERENT METHODS (table content not included in this extraction)

F. Experiments on SEED Dataset

To prove the effectiveness of our method, we used leave-one-out cross-validation rather than k-fold cross-validation, because the former is more challenging. We used leave-one-clip-out cross-validation for the subject-dependent experiments. For each subject, who watched n video clips, we trained a model using n−1 clips and tested on the remaining clip. The final results were averaged over all the tests in which each clip was used once for testing. For the subject-independent experiments, the leave-one-subject-out cross-validation strategy was used for each subject. Likewise, the final results were averaged over all the tests in which the data of each subject were used once for testing.

As shown in Table VI, the proposed method achieves a competitive performance. Our model achieved a binary classification accuracy of 97.28±2.70% in the subject-dependent experiments and 83.84±10.26% in the subject-independent experiments for discriminating the positive and negative emotional states on the SEED dataset. DGGN achieves better prediction performance than SVM [40], DBN [40], GELM [53], DGCNN [31] and DGGN-S. Furthermore, the performance of DGGN is comparable with the latest model SparseDGCNN [17] under the same experimental protocol. In addition, we can also see that the standard deviation of DGGN is lower than that of the others, which indicates the robustness of the proposed DGGN.

G. Friedman and Nemenyi Test

To further prove the efficiency and stability of DGGN, the average accuracies of valence, arousal and dominance in Table IV were extracted for the Friedman and Nemenyi tests. The Friedman test evaluates the statistical significance of differences in the mean ranks of the methods. Specifically, we used the Nemenyi test, a common post-hoc test, to determine which methods differ statistically in performance. Firstly, the obtained Friedman test value τ_F = 9.81 is larger than the critical value of 2.764 at significance level α = 0.05; hence, the performance of the algorithms in the tables is different. Secondly, the critical range cd of the Nemenyi test is equal to 6.06 at significance level α = 0.05 (Fig. 6). According to the critical range cd and the average rankings of the models, the range of DGGN on the horizontal axis only partially overlaps with the other models except DGGN-S, and DGGN has the smallest average ranking, which means the proposed model has the best performance. In comparison, the ranges of the remaining models on the horizontal axis overlap considerably, implying that there is no significant difference between them.

Fig. 6. Friedman test chart. The blue dots represent the average rankings of the models in the Friedman test, and the length of all horizontal lines passing through the black dots represents the critical range cd.
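Statistics of this kind can be reproduced with SciPy's Friedman test plus the standard Nemenyi critical-difference formula cd = q_α·sqrt(k(k+1)/(6N)), as sketched below on toy accuracies (the q_α values are the usual α = 0.05 table entries; the numbers are placeholders, not the paper's results):

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Toy accuracy table: rows = evaluation settings, columns = k competing models.
acc = np.array([[0.948, 0.921, 0.910, 0.905],
                [0.944, 0.918, 0.902, 0.899],
                [0.947, 0.925, 0.915, 0.908]])
n, k = acc.shape

stat, p = friedmanchisquare(*acc.T)               # Friedman test across the k models
ranks = (-acc).argsort(axis=1).argsort(axis=1).mean(axis=0) + 1   # average ranks (1 = best)

Q05 = {2: 1.960, 3: 2.343, 4: 2.569, 5: 2.728, 6: 2.850}          # Nemenyi q at alpha=0.05
cd = Q05[k] * np.sqrt(k * (k + 1) / (6.0 * n))    # critical difference between mean ranks
print(stat, p, ranks, cd)
```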
V. CONCLUSION

In summary, we proposed a novel hybrid model that employs GAN to generate potential representations of EEG signals and combines GCNN and LSTM to identify emotions from EEG signals. Extensive experimental results show that the proposed model has a competitive performance compared to the state-of-the-art methods on two benchmark datasets. The possible reason is that our model can generate more discriminative potential representations of EEG features, which improves the reduction of individual variation across EEG subjects. It is worth noting that a core limitation of the DGGN model is its sensitivity to mode collapse, which is present in many GAN-related studies. In future work, we will introduce transfer learning or a well-designed reward mechanism to alleviate this problem.

REFERENCES

[1] S. M. Alarcao and M. J. Fonseca, "Emotions recognition using EEG signals: A survey," IEEE Trans. Affect. Comput., vol. 10, no. 3, pp. 374–393, Jul.–Sep. 2019, doi: 10.1109/TAFFC.2017.2714671.
[2] Y. Liu, J. Zhang, W. Yan, S. Wang, G. Zhao, and X. Fu, "A main directional mean optical flow feature for spontaneous micro-expression recognition," IEEE Trans. Affect. Comput., vol. 7, no. 4, pp. 299–310, Oct. 2016, doi: 10.1109/TAFFC.2015.2485205.
[3] X. Huang, S. Wang, X. Liu, G. Zhao, X. Feng, and M. Pietikainen, "Discriminative spatiotemporal local binary pattern with revisited integral projection for spontaneous facial micro-expression recognition," IEEE Trans. Affect. Comput., vol. 10, no. 1, pp. 32–47, Jan. 2019, doi: 10.1109/TAFFC.2017.2713359.
[4] D. Glowinski, N. Dael, A. Camurri, G. Volpe, M. Mortillaro, and K. Scherer, "Toward a minimal representation of affective gestures," IEEE Trans. Affect. Comput., vol. 2, no. 2, pp. 106–118, Apr.–Jun. 2011, doi: 10.1109/T-AFFC.2011.7.
[5] F. Noroozi, C. A. Corneanu, D. Kaminska, T. Sapinski, S. Escalera, and G. Anbarjafari, "Survey on emotional body gesture recognition," IEEE Trans. Affect. Comput., vol. 12, no. 2, pp. 505–523, Apr.–Jun. 2021, doi: 10.1109/TAFFC.2018.2874986.
[6] R. Panda, R. Malheiro, and R. P. Paiva, "Novel audio features for music emotion recognition," IEEE Trans. Affect. Comput., vol. 11, no. 4, pp. 614–626, Oct.–Dec. 2020, doi: 10.1109/TAFFC.2018.2820691.
[7] K. P. Seng, L.-M. Ang, and C. S. Ooi, "A combined rule-based & machine learning audio-visual emotion recognition approach," IEEE Trans. Affect. Comput., vol. 9, no. 1, pp. 3–13, Jan.–Mar. 2018, doi: 10.1109/TAFFC.2016.2588488.
[8] R. Jenke, A. Peer, and M. Buss, "Feature extraction and selection for emotion recognition from EEG," IEEE Trans. Affect. Comput., vol. 5, no. 3, pp. 327–339, Jul.–Sep. 2014, doi: 10.1109/TAFFC.2014.2339834.
[9] T. Song, W. Zheng, P. Song, and Z. Cui, "EEG emotion recognition using dynamical graph convolutional neural networks," IEEE Trans. Affect. Comput., vol. 11, no. 3, pp. 532–541, Jul.–Sep. 2020, doi: 10.1109/TAFFC.2018.2817622.
[10] B. Lu, L. Zhang, and J. Kwok, "Neural information," in Proc. 18th Int. Conf., 2011, Art. no. 7062, doi: 10.1007/978-3-642-24955-6.
[11] R. W. Picard, E. Vyzas, and J. Healey, "Toward machine emotional intelligence: Analysis of affective physiological state," IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 10, pp. 1175–1191, Oct. 2001, doi: 10.1109/34.954607.
[12] P. C. Petrantonakis and L. J. Hadjileontiadis, "Emotion recognition from EEG using higher order crossings," IEEE Trans. Inf. Technol. Biomed., vol. 14, no. 2, pp. 186–197, Mar. 2010, doi: 10.1109/TITB.2009.2034649.
[13] S. Oh, Y. Lee, and H. Kim, "A novel EEG feature extraction method using hjorth parameter," Int. J. Electron. Elect. Eng., vol. 2, pp. 106–110, 2014, doi: 10.12720/ijeee.2.2.106-110.
[14] M. L. Gavrilova, C. J. K. Tan, and A. Kuijper, Eds., Transactions on Computational Science XVIII: Special Issue on Cyberworlds, vol. 7848. Berlin, Germany: Springer, 2013, doi: 10.1007/978-3-642-38803-3.
[15] Y. Yin, X. Zheng, B. Hu, Y. Zhang, and X. Cui, "EEG emotion recognition using fusion model of graph convolutional neural networks and LSTM," Appl. Soft Comput., vol. 100, Mar. 2021, Art. no. 106954, doi: 10.1016/j.asoc.2020.106954.
[16] W. Tao et al., "EEG-based emotion recognition via channel-wise attention and self attention," IEEE Trans. Affect. Comput., to be published, doi: 10.1109/TAFFC.2020.3025777.
[17] G. H. Zhang, M. J. Yu, Y. J. Liu, G. Z. Zhao, D. Zhang, and W. M. Zheng, "SparseDGCNN: Recognizing emotion from multichannel EEG signals," IEEE Trans. Affect. Comput., to be published, doi: 10.1109/TAFFC.2021.3051332.
[18] Q. Lin et al., "Designing individual-specific and trial-specific models to accurately predict the intensity of nociceptive pain from single-trial fMRI responses," NeuroImage, vol. 225, Jan. 2021, Art. no. 117506, doi: 10.1016/j.neuroimage.2020.117506.
[19] A. M. Anter, M. A. Elaziz, and Z. Zhang, "Real-time epileptic seizure recognition using Bayesian genetic whale optimizer and adaptive machine learning," Future Gener. Comput. Syst., vol. 127, pp. 426–434, 2022.
[20] A. M. Anter, G. Huang, L. Li, L. Zhang, Z. Liang, and Z. Zhang, "A new type of fuzzy-rule-based system with chaotic swarm intelligence for multiclassification of pain perception from fMRI," IEEE Trans. Fuzzy Syst., vol. 28, no. 6, pp. 1096–1109, Jun. 2020, doi: 10.1109/TFUZZ.2020.2979150.
[21] L. Shi, Y. Jiao, and B. Lu, "Differential entropy feature for EEG-based vigilance estimation," in Proc. 35th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., Osaka, Jul. 2013, pp. 6627–6630, doi: 10.1109/EMBC.2013.6611075.
[22] R. Duan, J. Zhu, and B. Lu, "Differential entropy feature for EEG-based emotion classification," in Proc. 6th Int. IEEE/EMBS Conf. Neural Eng., San Diego, CA, USA, Nov. 2013, pp. 81–84, doi: 10.1109/NER.2013.6695876.
[23] P. Ekman and W. V. Friesen, "Constants across cultures in the face and emotion," J. Pers. Social Psychol., vol. 17, no. 2, pp. 124–129, 1971, doi: 10.1037/h0030377.
[24] J. A. Russell, "A circumplex model of affect," J. Pers. Social Psychol., vol. 39, no. 6, pp. 1161–1178, 1980, doi: 10.1037/h0077714.
[25] A. Mehrabian, "Pleasure-arousal-dominance: A general framework for describing and measuring individual differences in temperament," Curr. Psychol., vol. 14, no. 4, pp. 261–292, Dec. 1996, doi: 10.1007/BF02686918.
[26] Y. Yang, Q. Wu, Y. Fu, and X. Chen, "Continuous convolutional neural network with 3D input for EEG-based emotion recognition," in Proc. Neural Inf., 2018, pp. 433–443, doi: 10.1007/978-3-030-04239-4_39.
[27] S. Liu et al., "3DCANN: A spatio-temporal convolution attention neural network for EEG emotion recognition," IEEE J. Biomed. Health Inform., vol. 26, no. 11, pp. 5321–5331, Nov. 2022, doi: 10.1109/JBHI.2021.3083525.
[28] J. Ma, H. Tang, W. Zheng, and B. Lu, "Emotion recognition using multimodal residual LSTM network," in Proc. 27th ACM Int. Conf. Multimedia, Nice, France, Oct. 2019, pp. 176–183, doi: 10.1145/3343031.3350871.
[29] Y. Wang et al., "EEG-based emotion recognition with similarity learning network," in Proc. 41st Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., 2019, pp. 1209–1212, doi: 10.1109/EMBC.2019.8857499.
[30] Y. Yang, Q. Wu, M. Qiu, Y. Wang, and X. Chen, "Emotion recognition from multi-channel EEG through parallel convolutional recurrent neural network," in Proc. IEEE Int. Joint Conf. Neural Netw., 2018, pp. 1–7, doi: 10.1109/IJCNN.2018.8489331.
[31] X. Wang, T. Zhang, X. Xu, L. Chen, X. Xing, and C. L. P. Chen, "EEG emotion recognition using dynamical graph convolutional neural networks and broad learning system," in Proc. IEEE Int. Conf. Bioinf. Biomed., 2018, pp. 1240–1244, doi: 10.1109/BIBM.2018.8621147.
[32] I. Goodfellow et al., "Generative adversarial networks," Commun. ACM, vol. 63, no. 11, pp. 139–144, Oct. 2020, doi: 10.1145/3422622.
[33] Y. Li, W. Zheng, Y. Zong, Z. Cui, T. Zhang, and X. Zhou, "A bi-hemisphere domain adversarial neural network model for EEG emotion recognition," IEEE Trans. Affect. Comput., vol. 12, no. 2, pp. 494–504, Apr.–Jun. 2021, doi: 10.1109/TAFFC.2018.2885474.
[34] T. Gedeon, K. W. Wong, and M. Lee, "Neural information," in Proc. 26th Int. Conf., 2019, Art. no. 11953, doi: 10.1007/978-3-030-36708-4.
[35] J. A. Urigüen and B. Garcia-Zapirain, "EEG artifact removal—State-of-the-art and guidelines," J. Neural Eng., vol. 12, no. 3, Jun. 2015, Art. no. 031001, doi: 10.1088/1741-2560/12/3/031001.
[36] X. Wang, M. Zhu, D. Bo, P. Cui, C. Shi, and J. Pei, "AM-GCN: Adaptive multi-channel graph convolutional networks," in Proc. 26th ACM SIGKDD Int. Conf. Knowl. Discov. Data Mining, 2020, pp. 1243–1253, doi: 10.1145/3394486.3403177.
[37] X. Shi, Z. Chen, H. Wang, D. Yeung, W. Wong, and W. Woo, "Convolutional LSTM network: A machine learning approach for precipitation nowcasting," in Proc. Adv. Neural Inf. Process. Syst., 2015, pp. 802–810.
[38] D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst, "The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains," IEEE Signal Process. Mag., vol. 30, no. 3, pp. 83–98, May 2013, doi: 10.1109/MSP.2012.2235192.
[39] S. Koelstra et al., "DEAP: A database for emotion analysis; using physiological signals," IEEE Trans. Affect. Comput., vol. 3, no. 1, pp. 18–31, Jan.–Mar. 2012, doi: 10.1109/T-AFFC.2011.15.
[40] W. Zheng and B. Lu, "Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks," IEEE Trans. Auton. Ment. Develop., vol. 7, no. 3, pp. 162–175, Sep. 2015, doi: 10.1109/TAMD.2015.2431497.
[41] I. Loshchilov and F. Hutter, "SGDR: Stochastic gradient descent with warm restarts," in Proc. Int. Conf. Pattern Recognit., 2017, pp. 1–16.
[42] S. Tripathi, "Using deep and convolutional neural networks for accurate emotion classification on DEAP dataset," in Proc. 29th Innov. Appl. Artif. Intell. Conf., 2017, Art. no. 7.
[43] S. Alhagry, A. A. Fahmy, and R. A. El-Khoribi, "Emotion recognition based on EEG using LSTM recurrent neural network," Int. J. Adv. Comput. Sci. Appl., vol. 8, no. 10, pp. 1–4, 2017, doi: 10.14569/IJACSA.2017.081046.
[44] E. S. Salama, R. A. El-Khoribi, M. E. Shoman, and M. A. Wahby, "EEG-based emotion recognition using 3D convolutional neural networks," Int. J. Adv. Comput. Sci. Appl., vol. 9, no. 8, pp. 1–4, 2018, doi: 10.14569/IJACSA.2018.090843.
[45] D. Huang, S. Chen, C. Liu, L. Zheng, Z. Tian, and D. Jiang, "Differences first in asymmetric brain: A bi-hemisphere discrepancy convolutional neural network for EEG emotion recognition," Neurocomputing, vol. 448, pp. 140–151, Aug. 2021, doi: 10.1016/j.neucom.2021.03.105.
[46] Y. Zhu and Q. Zhong, "Differential entropy feature signal extraction based on activation mode and its recognition in convolutional gated recurrent unit network," Front. Phys., vol. 8, Jan. 2021, Art. no. 629620, doi: 10.3389/fphy.2020.629620.
[47] L. Feng, C. Cheng, M. Zhao, H. Deng, and Y. Zhang, "EEG-based emotion recognition using spatial-temporal graph convolutional LSTM with attention mechanism," IEEE J. Biomed. Health Inform., vol. 26, no. 11, pp. 5406–5417, Nov. 2022, doi: 10.1109/JBHI.2022.3198688.
[48] J. Zhang, M. Chen, S. Hu, Y. Cao, and K. Robert, "PNN for EEG-based emotion recognition," in Proc. IEEE Int. Conf. Syst., Man, Cybern., 2016, pp. 002319–002323, doi: 10.1109/SMC.2016.7844584.
[49] Z. Wang, T. Gu, Y. Zhu, D. Li, H. Yang, and W. Du, "FLDNet: Frame-level distilling neural network for EEG emotion recognition," IEEE J. Biomed. Health Inform., vol. 25, no. 7, pp. 2533–2544, Jul. 2021, doi: 10.1109/JBHI.2021.3049119.
[50] X. Du et al., "An efficient LSTM network for emotion recognition from multichannel EEG signals," IEEE Trans. Affect. Comput., vol. 13, no. 3, pp. 1528–1540, Jul.–Sep. 2022, doi: 10.1109/TAFFC.2020.3013711.
[51] H. Chao and L. Dong, "Emotion recognition using three-dimensional feature and convolutional neural network from multichannel EEG signals," IEEE Sensors J., vol. 21, no. 2, pp. 2024–2034, Jan. 2021, doi: 10.1109/JSEN.2020.3020828.
[52] H. Cui, A. Liu, X. Zhang, X. Chen, K. Wang, and X. Chen, "EEG-based emotion recognition using an end-to-end regional-asymmetric convolutional neural network," Knowl.-Based Syst., vol. 205, Oct. 2020, Art. no. 106243, doi: 10.1016/j.knosys.2020.106243.
[53] W. Zheng, J. Zhu, and B. Lu, "Identifying stable patterns over time for emotion recognition from EEG," IEEE Trans. Affect. Comput., vol. 10, no. 3, pp. 417–429, Jul.–Sep. 2019.