
Computers in Biology and Medicine 183 (2024) 109225


Real-time sub-milliwatt epilepsy detection implemented on a spiking neural network edge inference processor

Ruixin Li a,b,1, Guoxu Zhao a,1, Dylan Richard Muir c, Yuya Ling b, Karla Burelo c, Mina Khoe c, Dong Wang a,∗, Yannan Xing b, Ning Qiao b,c

a State Key Laboratory of Digital Medical Engineering, Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Sanya, 572025, China
b Chengdu SynSense Tech. Co. Ltd., 1577 Tianfu Road, Chengdu, 610041, Sichuan, China
c SynSense, Thurgauerstrasse 60, Zürich, 8050, Switzerland

∗ Corresponding author. E-mail addresses: [email protected] (D. Wang), [email protected] (Y. Xing).
1 Li and Zhao contributed equally to this work as first authors.

https://doi.org/10.1016/j.compbiomed.2024.109225
Received 16 November 2023; Received in revised form 5 June 2024; Accepted 26 September 2024; Available online 16 October 2024

ARTICLE INFO

Keywords: Spiking neural network; Neuromorphic processor; Electroencephalogram; Seizure detection; Ultra-low power

ABSTRACT

Analyzing electroencephalogram (EEG) signals to detect the epileptic seizure status of a subject presents a challenge to existing technologies aimed at providing timely and efficient diagnosis. In this study, we aimed to detect interictal and ictal periods of epileptic seizures using a spiking neural network (SNN). Our proposed approach provides an online and real-time preliminary diagnosis of epileptic seizures and helps to detect possible pathological conditions.

To validate our approach, we conducted experiments using multiple datasets. We utilized a trained SNN to identify the presence of epileptic seizures and compared our results with those of related studies. The SNN model was deployed on Xylo, a digital SNN neuromorphic processor designed to process temporal signals. Xylo efficiently simulates spiking leaky integrate-and-fire neurons with exponential input synapses, and has much lower energy requirements than traditional approaches to signal processing, making it an ideal platform for developing low-power seizure detection systems.

Our proposed method achieves a high test accuracy of 93.3% and 92.9% when classifying ictal and interictal periods. At the same time, the application has an average power consumption of 87.4 μW (IO power) + 287.9 μW (compute power) when deployed to Xylo. Our method demonstrates excellent low-latency performance when tested on multiple datasets. Our work provides a new solution for seizure detection, and it is expected to be widely used in portable and wearable devices in the future.

1. Introduction

The measurement of electroencephalogram (EEG) is a safe and non-invasive method for recording brain signals by attaching electrodes to the scalp [1]; the continuously recorded data reflect the electrical activity of neurons and the electrical potential across the entire brain surface. As the characteristic patterns of EEG signals are distinguishable at different neural states, brain activities can be analyzed and studied using EEG [2].

Epilepsy is a common neurological disorder characterized by recurrent episodes of abnormal brain discharges, behavioral seizures, or abnormalities in sensation, emotion or consciousness [3]. Currently, there are more than 50 million epileptic patients worldwide, so the diagnosis and treatment of epilepsy are of great social and economic value. By analyzing EEG signals, the different degrees of abnormal discharge in epileptic patients can be diagnosed and monitored. This can help to determine the type and location of epilepsy and to choose appropriate treatment options during diagnosis, and it can help to monitor the treatment effect, adjust the treatment plan in a timely manner, and avoid unnecessary drug side effects during treatment. In recent years, with the development of computer and artificial intelligence (AI) technologies, EEG detection has been widely used in epilepsy research. By analyzing and mining large amounts of EEG data, it becomes feasible to explore the pathophysiological characteristics of epilepsy and to develop more accurate methods for the diagnosis and treatment of epilepsy [4,5].


Real-time EEG monitoring plays a crucial role in medical diagnosis by providing essential information about the health status of the human body [6,7]. It can aid doctors in timely disease detection and is vital for formulating and adjusting treatment plans. Moreover, real-time monitoring of biological signals is essential in scientific research fields such as neuroscience, where researchers can investigate the relationship between specific behaviors or cognitive tasks and brain activity using EEG signals. Given the time-consuming and labor-intensive nature of manual detection, an artificial intelligence model is urgently required to perform real-time monitoring while accurately identifying abnormal EEG signals.

Artificial neural networks (ANNs) are now widely used for solving problems in signal analysis, and have been employed in numerous fields such as speech processing, computer vision and natural language processing [8]. However, there are significant differences between traditional ANNs and real biological neural networks. Traditional ANNs have inputs and outputs that are both real numbers, while information transmission in the human brain takes the form of discrete action potentials, or spikes. The spiking neural network (SNN) is the third generation of neural networks; it models the behavior of biological neurons, where information is transmitted in the form of discrete electrical impulses known as spikes. SNNs have shown promise in accurately identifying patterns in time-series data such as EEG signals, due to their ability to efficiently handle temporal information. Our research focuses on real-time detection of EEG signals using low-power neuromorphic chips that deploy spiking neural networks for inference. These chips can quickly and efficiently detect epileptic signals and trigger an appropriate response.

Our study utilized the CHB-MIT dataset from Boston Children's Hospital [9] and the Siena Scalp EEG dataset from the University of Siena [10]. The WaveSense neural network model [11] was combined with time-series analysis and delta-sigma coding techniques for detecting and identifying epilepsy signals based on spike characteristics. The feasibility and accuracy of this method were further verified by experiments on Xylo, a low-power neuromorphic processor for signal processing inference. Our experimental results demonstrate ultra-low power consumption and high accuracy when deployed to the Xylo processor, suggesting the great promise of this SNN-based epilepsy identification method as a new epilepsy diagnostic tool. Our work can enhance the efficiency of epilepsy diagnosis, reduce the power consumption during signal acquisition, and provide guidance for the practical application and dissemination of spiking neural networks.

2. Related work

Historically, EEG signal identification has been performed manually, which is a time-consuming and subjective process that is susceptible to human error. Although linear signal processing techniques such as time-domain, frequency-domain, and time–frequency analysis have become popular [12], they fail to accurately capture the nonlinear characteristics of the complex electrical activity of the brain. With the development of nonlinear EEG data classification methods, artificial neural networks have become an essential tool for nonlinear analysis and identification of EEG signals. Various neural network models have been explored, including spatiotemporal convolutional neural networks [13], fuzzy function-based classifiers [14], and long short-term memory recurrent neural networks [15]. Traditional ANNs are highly dependent on feature extraction, and the quality of feature extraction can greatly affect their classification accuracy; past researchers have therefore spent considerable time and effort developing suitable feature extraction techniques. In contrast, SNN models show potential for processing complex spatiotemporal patterns without the need for manual feature extraction.

As early as 2007, Ghosh-Dastidar developed an efficient SNN model for EEG classification and epilepsy detection, which uses RProp as the training algorithm and achieved a classification accuracy of 92.5% [16]. In 2009, Ghosh-Dastidar and Adeli developed a new multi-spiking neural network (MuSpiNN) model and a new supervised learning algorithm called Multi-SpikeProp, which is used to train MuSpiNN; on a complex EEG classification problem the model achieves a classification accuracy of 90.7% to 94.8% [17]. The creation of the NeuCube framework by Kasabov et al. in 2014, based on SNNs, marked the world's first development environment for building brain-inspired artificial intelligence. Studies have shown that the NeuCube model can improve the accuracy of brain spatiotemporal data classification compared to standard machine learning techniques [18]. In 2022, Burelo et al. detected interictal high-frequency oscillations (HFO) in scalp EEG using a spiking neural network; the presence of HFOs is associated with active epilepsy, and the detector reached an accuracy of 80%. However, HFOs are not a mandatory requirement for epileptic seizures, and the network was not implemented on a neuromorphic chip [19].

If SNN-based EEG signal analysis is to be applied clinically, the SNN must be deployed to neuromorphic inference hardware. Currently, there are few studies on epilepsy detection based on spiking neural networks that can be implemented on neuromorphic chips.

3. Spiking neural network

Spiking neurons are a mathematical construct inspired by the dynamics and behavior of biological neurons. Spiking neurons receive inputs and communicate with other neurons through discrete binary events known as spikes. Neurons temporally integrate these signals in small dynamical systems representing synapses and neuron membranes, and when a membrane state passes a configurable threshold, they emit a spike to communicate with other connected neurons. Similar to an ANN, layers of spiking neurons are connected to each other through linear weights, which effectively scale the strength of the inputs received by the synapses [20].

Here we describe the leaky integrate-and-fire (LIF) neuron [21], one of the simplest spiking neuron models. The state of the neuron, known as the membrane potential V_mem(t), evolves over time depending on its previous state as well as its inputs [22]. The equivalent circuit of a LIF neuron is shown in Fig. 1, and the dynamics of this state are described by Eq. (1):

RC \frac{dV_{mem}(t)}{dt} = -V_{mem}(t) + I(t)R    (1)

In this equation, RC dV_mem(t)/dt represents the rate of voltage change across the capacitor, −V_mem(t) represents the decay of the membrane potential due to the leakage resistance, and I(t)R represents the voltage change caused by the input current. Overall, the equation describes how the membrane potential increases when the neuron receives input current and decays through leakage when there is no input. The working mechanism of the LIF neuron model is as follows: when the membrane potential V_mem(t) exceeds a certain threshold, the neuron generates a "spike" (i.e., it "fires"), and the membrane potential is then reset to a lower value, simulating the discharge process of a biological neuron. Fig. 2 illustrates this process in detail. The input current I_app(t) is the sum of weighted input signals filtered by a set of synapses, where each input x(t) is weighted independently by ω, can be positive or negative, and is subject to synaptic filtering with time constant τ. The result is a spatially summed time series. The input current I_app(t) travels to the neuron soma, which acts as a low-pass filter and integrates the input over time, updating the internal state variable V_m. The soma performs this integration and applies a threshold to decide whether or not to spike. After a spike is produced, the voltage V_m is reset to a value V_reset. Finally, the resulting spike is transmitted to the other neurons in the network; a network consisting of many such LIF neurons is called an SNN. SNNs can adopt a wide range of network architectures, similar to standard ANNs. Neurons in different layers are connected through synapses with dynamically adjustable weights, which transmit signals from the input layer to the next layer. Different topological structures have different characteristics.
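To make the LIF dynamics of Eq. (1) concrete, the following minimal Python sketch integrates the membrane equation with a forward-Euler step and applies the threshold-and-reset rule described above. It is an illustrative toy model under our own parameter choices and naming; it is not the Xylo or Rockpool implementation.

```python
import numpy as np

def lif_response(i_app, dt=1e-3, R=1.0, C=0.02, v_th=0.6, v_reset=0.0):
    """Forward-Euler simulation of a single LIF neuron (Eq. (1)).

    i_app : array of input current values, one per time step.
    Returns the membrane-potential trace and a binary spike train.
    """
    tau = R * C                          # membrane time constant (s)
    v = np.zeros(len(i_app))             # membrane potential V_mem(t)
    spikes = np.zeros(len(i_app), dtype=int)
    v_prev = 0.0
    for t, i_t in enumerate(i_app):
        # dV = dt/tau * (-V + I*R): leak towards 0, pushed up by the input current
        v_prev += dt / tau * (-v_prev + i_t * R)
        if v_prev > v_th:                # threshold crossing -> emit a spike
            spikes[t] = 1
            v_prev = v_reset             # reset after firing
        v[t] = v_prev
    return v, spikes

# Example: a constant input current produces regular firing
v_trace, spike_train = lif_response(np.full(1000, 0.8))
print(spike_train.sum(), "spikes in 1 s of simulated input")
```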


Fig. 1. Equivalent circuit of the LIF neuron model.

Fig. 2. Block diagram of a leaky integrate-and-fire spiking neuron.

Fig. 3. Xylo comprises a hidden population of 1000 LIF neurons and a readout population of 8 LIF neurons. Dense input and output weights are available, along with sparse recurrent weights that can target up to 32 hidden neurons per neuron. The inputs, which consist of 16 channels, and the outputs, which consist of 8 channels, are transmitted through asynchronous firing events.

Feedforward SNNs are composed of an input layer, one or more hidden layers, and an output layer, with dense connections passing between each layer in a strictly forward manner [23]. Recurrent SNNs are also common, raising additional complexities with regard to stability and learning algorithms [24]. Architectures with lateral inhibition ("Winner-Take-All" networks) are also used for SNNs [25], with reference to concepts from neuroscience [26]. In this work we choose feedforward SNNs, due to their simplicity and ease of training.

4. Xylo neuromorphic processor

4.1. Architecture and functional capabilities

Xylo is an application-specific integrated circuit (ASIC) that uses an all-digital approach to simulate spiking leaky integrate-and-fire neurons with exponential input synapses. It is designed to be energy efficient and highly configurable, with the ability to adjust synaptic and membrane time constants, thresholds, and biases for individual neurons. Xylo supports a wide range of network architectures, including recurrent networks, residual spiking networks, and other arbitrary configurations. The overall logical architecture of Xylo is illustrated in Fig. 3. It has up to 1000 digital LIF hidden neurons with independently configurable time constants and thresholds, and each neuron supports up to 31 spikes generated per time step. Xylo also has 8-bit input, recurrent, and readout weights with bit-shift decay on synaptic and membrane potentials. It supports one output alias per hidden neuron, one input synapse, and one output spike per time step in the readout layer. Furthermore, the Xylo ASIC allows for various clock frequencies, and the network time step dt can be chosen freely. Overall, Xylo is a flexible and powerful architecture suitable for many applications [27].

To minimize power consumption and improve performance and reliability, the Xylo chip incorporates a low-power digital circuit design. The chip features sparse recurrent weights, with a maximum of 32 targets per hidden neuron, reducing energy and computation by connecting only a few neurons. Compared to dense connections, sparse connections also decrease memory bandwidth requirements, computation time, and power consumption. To measure this power consumption, the Xylo development kit integrates an onboard current monitor, allowing users to sample real-time current data at a frequency of 1000 Hz via an ADC (analog-to-digital converter). The kit includes two distinct power tracks: "core" and "IO". The core power track covers the total power consumption of the Xylo frontend and processing core, including RAM read/write operations, logic circuit operations, event routing, and other essential processes. The IO power track pertains to the chip interface power, primarily Serial Peripheral Interface (SPI) operations.

The Xylo chip employs an off-chip training and on-chip inference approach. Off-chip training refers to the phase in which an external computing platform is used to train the neural network model; during this phase, the model learns features and weights from a large amount of labeled data, typically using the backpropagation algorithm to optimize the network parameters. Once the model is trained and the parameters are set, these parameters (weights, biases, thresholds, etc.) are converted into a format suitable for the Xylo chip, which may include quantization and encoding of the parameters to fit the hardware architecture of Xylo. In the on-chip inference phase, the converted model parameters are loaded onto the Xylo chip, which uses its digital spiking neural network to perform real-time, event-driven inference. Designed for energy efficiency and real-time processing, the Xylo chip is capable of executing complex signal processing tasks with very low power consumption.

4.2. Mapping and quantization

For deploying SNNs with Rockpool [28], the structure of an SNN is converted to a computational graph. Nodes in the computational graph contain the parameters of a neuron model, such as thresholds and biases, as well as higher-level structures such as dense weights, and they represent the connections between the individual layers of the network. When an SNN is deployed to a neuromorphic chip, the topology of the neural network needs to be mapped to the actual physical layout of the chip, which enables the chip to implement the same computational process as the original SNN. The mapping process converts the neuron graph to a form which matches the hardware architecture; in order to build the hardware-equivalent configuration, neuron IDs, weights, inputs and outputs, and other required information are extracted from the graph. In addition, when performing calculations on Xylo, it is necessary to quantize the floating-point parameters of the trained SNN to low-precision integer values. To convert weights and thresholds, one finds the absolute maximum of all input weights to a neuron and calculates a scaling factor such that this value maps to a range of ±128; the threshold is scaled by the same factor. After scaling, weights and thresholds are rounded to the nearest integer, which is the quantization step.


5. Experiments

5.1. Dataset

5.1.1. Siena Scalp EEG Database
The first dataset used in this work is the scalp electroencephalography database of the Department of Neurology and Neurophysiology at the University of Siena, Italy [10]. This database contains EEG records from 14 patients, including 8 males (aged 25–71) and 6 females (aged 20–58). The recordings were acquired with video-EEG monitoring at a sampling rate of 512 Hz, and the electrode arrangement followed the standard 10–20 system. Most records also include 1–2 electrocardiogram signals. According to the standards of the International League Against Epilepsy, clinicians carefully checked the clinical and electrophysiological data of each patient and made a diagnosis and classification of epilepsy. In these 14 patients' records, EEG signals were recorded before and after 1–10 seizures, and the recording time ranged from 145 to 1408 min.

5.1.2. CHB-MIT Scalp EEG Database
The second dataset we used was collected by Boston Children's Hospital and contains 23 cases involving 22 children with intractable epilepsy [9,29]. The epileptic seizures of these patients were recorded after discontinuing antiepileptic drugs, with monitoring for several days to evaluate the potential for surgical intervention. The subjects in this database include 5 males and 17 females, aged between 1.5 and 22 years. Cases chb21 and chb01 are data from the same female subject, with a gap of 1.5 years between them.

5.2. Preprocessing

We begin by describing the preprocessing of the Siena Scalp EEG dataset. In this dataset, the electrodes are arranged according to the standard 10–20 system; however, the order of the electrodes differs between subjects. In addition, the researchers also captured ECG signals from some subjects that are unrelated to this experiment. The electrodes common to all subjects in the standard 10–20 system are kept, and the order of all electrodes is sorted uniformly. After these operations, electrode positioning is performed on the original signal and the electrode reference is reselected: the data are re-referenced to the average of all signals. Subsequently, the influence of the acquisition environment, such as power-line interference in the acquisition system, needs to be eliminated. In addition, studies have shown that EEG signals only contain valid information at 1–80 Hz. Because a large number of repetitive preprocessing operations are required in this study, the EEG data of all patients were processed with 1–80 Hz band-pass filtering, followed by 50 Hz notch filtering to eliminate power-line interference. To verify the effect of filtering and notching, we show the comparison of power spectral density maps before and after filtering in Fig. 4A.
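As an illustration of this filtering step, the sketch below applies a 1–80 Hz Butterworth band-pass filter followed by a 50 Hz notch filter using SciPy. The filter order and notch quality factor are our own choices for the example; the paper does not specify them.

```python
import numpy as np
from scipy import signal

def bandpass_notch(eeg, fs=512.0, band=(1.0, 80.0), notch_hz=50.0, q=30.0):
    """1-80 Hz band-pass followed by a 50 Hz notch, applied channel-wise.

    eeg : array of shape (n_channels, n_samples); fs : sampling rate in Hz.
    """
    # 4th-order Butterworth band-pass, zero-phase filtering
    sos = signal.butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = signal.sosfiltfilt(sos, eeg, axis=-1)
    # IIR notch at the power-line frequency
    b, a = signal.iirnotch(notch_hz, Q=q, fs=fs)
    return signal.filtfilt(b, a, filtered, axis=-1)

# Example on random data standing in for a 10 s, 4-channel recording
cleaned = bandpass_notch(np.random.randn(4, 5120))
print(cleaned.shape)  # (4, 5120)
```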
Independent component analysis (ICA) is widely used to remove ECG, eye-movement, myoelectric and head-movement artifacts from EEG, and its effect is remarkable [30]. ICA is a linear transformation method based on statistical principles, which separates data or signals into linear combinations of statistically independent non-Gaussian signal sources [31,32]. Since the collected signal is a mixture of spontaneous EEG and various noise sources, it meets the conditions of use of the ICA algorithm. After observing the signal before and after ICA (Fig. 4B), the appropriate independent components were selected for removal. Fig. 4C, D, E and F show the independent component information from ICA: the spatial distribution of each component on the scalp surface (the brighter the area, the greater the contribution of the independent component to that area), the energy distribution of the component at different frequencies, the time series reconstructing the trials of the independent components and the time of each trial, and the total variance of the waveform of each component over time for different numbers of independent components [33].

In addition to the above operations, steps such as resampling and baseline correction are also involved, which are not described in detail here. For the CHB-MIT dataset, our preprocessing pipeline closely resembles the pipeline used for the Siena Scalp EEG Database. However, the electrodes used in this dataset vary across subjects and time periods, which results in inconsistent matrix dimensions when fed into the spiking neural network. Therefore, we only applied the aforementioned preprocessing steps to the C3-P3 and C4-P4 electrode channels, which exhibited the most pronounced epilepsy symptoms. The preprocessed signal is then converted to a spiking time series using a sigma-delta encoder [34]. This encoder converts the original signal into spike events by partitioning the signal's amplitude range into multiple equidistant intervals; when the signal crosses one of these interval boundaries, it generates either an up or a down spike depending on the polarity of the signal's slope at that point. The result of this encoding is shown in Fig. 5, in which the highlighted portion indicates the epileptic phase. Since the frequency and amplitude of the original signal increase during the patient's seizure, so does the number of spikes. With this encoding, the information from each channel of EEG data is transformed into two separate time series of spiking events, corresponding to the upward and downward trends of the EEG signal, respectively.
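A minimal sketch of this level-crossing (sigma-delta style) encoding is given below, assuming equidistant amplitude levels whose spacing is a free parameter; the function and its details are illustrative assumptions, not the exact encoder of [34].

```python
import numpy as np

def sigma_delta_encode(x, step):
    """Encode a 1-D signal into 'up' and 'down' spike trains.

    Each time the signal crosses one or more amplitude levels spaced by
    `step`, the corresponding number of up spikes (rising slope) or down
    spikes (falling slope) is emitted at that sample.
    """
    levels = np.floor(x / step).astype(int)        # index of the level each sample sits in
    crossings = np.diff(levels, prepend=levels[0])
    up = np.where(crossings > 0, crossings, 0)     # levels crossed while rising
    down = np.where(crossings < 0, -crossings, 0)  # levels crossed while falling
    return up, down

# Example: a noisy sine wave; high-amplitude activity yields more spikes
t = np.linspace(0, 1, 512)
sig = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.randn(t.size)
up_spikes, down_spikes = sigma_delta_encode(sig, step=0.05)
print(up_spikes.sum(), down_spikes.sum())
```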


Fig. 4. (A) Comparison of the power spectrum before and after filtering and notching. (B) EEG comparison after removing the selected independent components; red is the signal before ICA and black is the signal after ICA. This comparison can be used to check the influence of a specific component on the overall signal after its removal. (C) Spatial distribution of each component on the scalp surface; the brighter the area, the greater the contribution of the independent component to that area. (D) The energy distribution of the independent component at different frequencies. (E) The time series reconstructing the trials of the independent components and the time of each trial. (F) The total variance of the waveform of each component over time for different numbers of independent components.

Fig. 5. Original signal and corresponding spike time series; the data within the red lines correspond to the epileptic seizure period.

5.3. Network

The model employed during training is WaveSense [11]. This model takes spiking time series data as input and is constructed using a network architecture based on spiking neural networks, drawing inspiration from the architecture of WaveNet [35]. The model adopts a stacked network architecture, in which multiple blocks are stacked together to gradually extract higher-level features of the input signals. Each block consists of several computational modules, including dilated temporal convolutional layers, gated convolutional layers, and pooling layers. The dilated temporal convolutional layer uses multiple synaptic projections with different time constants to achieve dilated temporal convolution, the gated convolutional layer uses gating mechanisms to regulate information flow, and the pooling layer is used to reduce the dimensionality of the feature maps. These computational modules extract the temporal features of the input signal and transform them into classification or regression results. The WaveSense model performs audio and signal processing tasks by converting the input signal into a spike sequence and processing it with a series of spiking neural layers. The model has been tested on multiple datasets and has achieved excellent performance. Advantages of the network include its strong generalization ability and robustness, its suitability for various low-dimensional signal processing tasks, and its ability to be implemented on neuromorphic hardware with low power consumption and high efficiency. The WaveSense model can be trained using the backpropagation algorithm and compared to other deep learning models. In this experiment, we set the number of neurons in the hidden layer of the readout to 16, the number of synapses in the dilated layer of each block to 2, and the membrane potential time constant of the neurons to 0.002. The firing threshold of all neurons was set to 0.6. Fig. 6 shows an overview of the model architecture, including data processing, spike encoding, the stacked network architecture, and the output layer, which converts low-dimensional signals into category probability distributions.

Fig. 6. Overview of the entire architecture.

5.4. Training and deploying

Experiments demonstrate that optimal model prediction accuracy, together with reduced average latency, is achieved by segmenting the signal into 5-s samples. Consequently, the initial dataset is partitioned into numerous 5-s trials, each accompanied by a corresponding label. These trials are then processed to create spike time series data, which is fed into the network. The processed samples are randomly split into training and testing datasets at a 4:1 ratio, and the network is trained for 150 epochs with a learning rate of 0.0005 and an output dimensionality of 2. Back-propagation through time (BPTT) [36] is used to train the SNN, and the peak current of the neurons on each segment is calculated by extracting the synaptic current of the output layer. The prediction is made by choosing the neuron with the highest peak current, and the cross-entropy loss [37] is calculated against the label to obtain L_CE. The optimization technique employed is Adam [38]. It is noteworthy that the model is intended to be used in streaming mode, which necessitates an appropriate loss function. The activity of LIF neurons may change substantially during learning, resulting in either a lack of spikes or excessive energy consumption in neuromorphic implementations [39]. To maintain sparse activity and limit the activity of these neurons, an activity regularizer term is included in the loss function. The final loss function is given by Eq. (2):

L = L_{CE} + \sum_{i} \sum_{t} \left( \frac{N_i^t \, \Theta(N_i^t - l)}{T \cdot N_{neurons}} \right)^2    (2)

The activation loss is determined by the network's population size N_neurons and the total number of excess spikes produced in response to an input of duration T time steps. The excess spikes are those that exceed a threshold l, and they are summed over all neurons i and time bins t, where Θ is the Heaviside step function.

The network performance was evaluated using four criteria: accuracy measures overall correctness, sensitivity measures correct positive identification, specificity measures correct negative identification, and the F1 score considers both precision and recall. The model parameter curves and loss curves during training are shown in Fig. 7.
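The regularized loss of Eq. (2) can be written compactly. The sketch below assumes the per-neuron, per-time-bin spike counts are available as a matrix and that the cross-entropy term has already been computed; the function name, argument layout, and the reading of Θ as a strict "greater than" are our own assumptions based on the reconstruction of Eq. (2), not the released training code.

```python
import numpy as np

def activity_regularized_loss(l_ce, spike_counts, l=1, n_neurons=None):
    """Eq. (2): cross-entropy plus a squared penalty on excess spikes.

    l_ce         : scalar cross-entropy loss already computed for the sample.
    spike_counts : array of shape (n_neurons, T); N_i^t spike counts per time bin.
    l            : spike-count threshold above which activity is penalized.
    """
    n_neurons = n_neurons or spike_counts.shape[0]
    T = spike_counts.shape[1]
    # Theta(N - l) * N: keep the full count only where it exceeds the threshold
    excess = np.where(spike_counts > l, spike_counts, 0.0)
    reg = np.sum((excess / (T * n_neurons)) ** 2)
    return l_ce + reg

# Example: 16 neurons, 10 time bins, mostly sparse activity
counts = np.random.poisson(0.5, size=(16, 10))
print(activity_regularized_loss(0.35, counts, l=1))
```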


Fig. 7. (A) Curves of the relevant parameters during training on the CHB-MIT dataset. (B) Curves of the relevant parameters during training on the Siena scalp dataset. (C) The loss curve for the CHB-MIT dataset during training. (D) The loss curve for the Siena scalp dataset during training.

Since the results on the test set better reflect the generalization ability of the model on unseen data, we only show the curves of the test set. Once training is finished, the network is deployed onto the neuromorphic chip Xylo. This process primarily involves the mapping and quantization techniques described earlier [28]. Afterwards, the original data are fed to the Xylo processor and the model predictions are observed.

6. Results

6.1. Network performance

We visualized the parameters described above, and the loss value was tracked during the iteration process; the resulting curves are shown in Fig. 7. Since the results on the test set better reflect the generalization ability of the model on unseen data, only the curves of the test set are shown. The model parameter curves corresponding to the two datasets are depicted. In both datasets, the following measures were obtained: accuracy of 93.3% and 92.9%, sensitivity of 90.4% and 89.7%, specificity of 96.7% and 90.1%, and F1 scores of 91.2% and 92.3%. However, it is worth noting that when the model is deployed on the Xylo processor, a certain loss in accuracy relative to the simulation occurs; after the final test on the two datasets, the on-chip accuracy reaches 89.87% and 88.62%.

In deep learning, model latency refers to the time interval between when a model receives input data and when it outputs results. Model latency is critical for real-time applications [40]; in this application, EEG monitoring of epilepsy patients, the model must quickly detect epileptic signals and raise an alarm within a short time. Therefore, evaluating the latency of the SNN model in this experiment helps determine its usability and practicality in real-time applications, as well as optimize the real-time performance of the model.

The SNN was deployed on the Xylo processor, and delay measurements were performed using the CHB-MIT dataset. Each time step was set to 0.5 s, and it was found that the majority of 5-s epileptic signals were detected within 0–1 s, with a median delay of 0.5 s. Favorable results were also achieved using the Siena scalp dataset, with a delay of 0.75 s. The visualization of our findings is presented in Fig. 8.

Fig. 8. (A) Latency test results for a subset of the positive samples. The vertical-axis value of a black dot indicates the time at which epilepsy was correctly predicted, while a red dot indicates that the model failed to detect epilepsy even after the corresponding sample time had elapsed. (B) Latency test results for a subset of the negative samples. A red dot indicates that the model wrongly classified a non-epileptic signal as epileptic at the corresponding time on the y-axis, while a black dot denotes that the model still classified the signal as non-epileptic after the sampling time of the sample had passed.

In order to fully showcase the methods and implementation of our research, we have publicly released the complete source code of the project on GitHub. The code for the project related to the Siena Scalp EEG Database can be accessed at https://github.com/liruixinxinxin/siena_work, and for the CHB-MIT Scalp EEG Database it is available at https://github.com/liruixinxinxin/epilepsy_complete. These repositories include detailed documentation of the algorithms and implementation procedures, aimed at facilitating further verification and replication of our research results.

6.2. Real-time monitoring and power consumption

For power consumption measurement, we enable the power recording function by setting "power_record = True" in the code. This setting calls a function of the Samna interface, allowing the system to automatically read voltage and current information from the chip's registers.
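To make explicit how average power figures can be obtained from such register readings, the short sketch below averages hypothetical current traces sampled at 1000 Hz and multiplies them by the supply voltages listed in Table 1 (core 1.8 V, IO 2.5 V). The arrays and their values are placeholders for illustration only; this is simple post-processing arithmetic, not the Samna API.

```python
import numpy as np

# Hypothetical current samples (in amperes) read from the on-board monitor at 1000 Hz
io_current = np.random.uniform(32e-6, 38e-6, size=1000)      # ~1 s of IO-track samples
core_current = np.random.uniform(155e-6, 162e-6, size=1000)  # ~1 s of core-track samples

V_IO, V_CORE = 2.5, 1.8            # supply voltages from Table 1 (volts)

io_power_uw = np.mean(io_current) * V_IO * 1e6        # average IO power in microwatts
core_power_uw = np.mean(core_current) * V_CORE * 1e6  # average core power in microwatts

print(f"IO power:   {io_power_uw:.1f} uW")
print(f"core power: {core_power_uw:.1f} uW")
```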


Table 1
Comparison of chip power consumption with other similar works. 1 Multilayer perceptron. 2 Support vector machine. 3 Long short-term memory. 4 Convolutional neural network. 5 Spiking neural network. 6 Hilbert transform. 7 Discrete wavelet transform.

Work | [41] | [42] | [43] | [44] | [45] | [46] | This work
Processor | M2GL025-VF256 IGLOO2 FPGA by Microsemi | FPGA | Nexys 4 Artix 7 FPGA | M2GL025-VF256 IGLOO2 FPGA | Neuromorphic chip | Xilinx Zedboard FPGA | Neuromorphic processor Xylo
Power (μW) | 159 700 | 1589 | 284 000 | 448 | 160 000 | 166 000 | 287.9
Network architecture | MLP1 and ANN | ANN | SVM2 | SVM2 | LSTM3 and CNN4 | MLP1 and ANN | SNN5
Technology used | / | TSMC 0.18 μm CMOS 1P9M | / | TSMC 65 nm CMOS | 55 nm CMOS | / | 28 nm CMOS
Classification method | Feature extraction using HT6, classification with MLP1 | Feature extraction using HT6, classification with MLP1 | Feature extraction using DWT7, classification with SVM2 | Classification using lifting wavelet transform and SVM2 | Classification using 2D-CNN4 and Bi-LSTM3 | Three-layer MLP1, trained and tested on FPGA | Real-time stream data analysis, parallel processing
Chip size | / | 1409.12 × 1402.36 μm2 | / | 0.98 mm2 | / | / | 6.5 mm2
Power voltage (V) | / | Core 1.8, I/O 3.3 | / | 1 | / | Core 1.8, IO 3.3 | Core 1.8, IO 2.5
Frequency | / | 8 MHz | / | 1 MHz | / | 100 MHz | 250 MHz
SRAM | / | 144 × 18 bits | / | 256 × 32 bits | / | / | 124 KB
Measurement method | Actual measurement | Actual measurement | Theoretical simulation estimate | Actual measurement | Actual measurement | Actual measurement | Real-time actual measurement

This method, which obtains precise data directly from the hardware, enables researchers to monitor and evaluate the chip's energy consumption in real time under various operating conditions. A real-time epilepsy detection experiment was conducted and the power consumption during the monitoring process was measured. As an example, the first seizure of patient No. 1 from CHB-MIT was used (shown in Fig. 9). The data sample reveals that patient No. 1 experienced an epileptic seizure from 2996 s to 3036 s; the model issued a red alarm at 2997.5 s and ended the alarm at 3033.5 s. False alarms may occur during this process, and these were addressed by post-processing in the algorithm. Following an analysis of the model's accuracy and the trade-off between false alarms and sensitivity, the algorithm triggers or cancels alerts only when four consecutive test values are either 1 or 0. The purpose of this specific threshold is to minimize false alarms while retaining a high sensitivity to epileptic seizures. The device exhibited a consistent average IO power in the range 80.2–95.5 μW, with an average consumption of 87.4 μW. The digital SNN core and control logic showed stable power consumption within 280.3–292.1 μW, averaging 287.9 μW. Our experimental results confirm that the power consumption of the device is in the microwatt range, significantly lower than the power consumption reported in previous research involving traditional on-chip biosignal detection. Table 1 displays a comparison of power consumption and other metrics with similar works.
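The four-consecutive-value rule described above can be expressed as a small state machine. The following sketch is one plausible reading of that post-processing step; the class, its parameters and naming are our own, not the released code.

```python
class AlarmFilter:
    """Raise the alarm only after 4 consecutive positive (1) predictions,
    and cancel it only after 4 consecutive negative (0) predictions."""

    def __init__(self, n_consecutive=4):
        self.n = n_consecutive
        self.alarm_on = False
        self._run_value = None    # value of the current run of identical predictions
        self._run_length = 0

    def update(self, prediction: int) -> bool:
        """Feed one per-window prediction (0 or 1); return the current alarm state."""
        if prediction == self._run_value:
            self._run_length += 1
        else:
            self._run_value = prediction
            self._run_length = 1
        if self._run_length >= self.n:
            self.alarm_on = (self._run_value == 1)
        return self.alarm_on

# Example: a single spurious 1 does not trigger the alarm, four in a row do
f = AlarmFilter()
print([f.update(p) for p in [0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0]])
```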
6.3. Performance comparison with existing work

From several experiments, summarized in Table 2, we observe that the overall performance of the SNN model is somewhat lower than that of traditional artificial neural networks. However, in this project the scale of the SNN model is relatively small, which enables the model to better mimic the nervous system, process time-series data more effectively, and use computing resources more efficiently. In deep learning models, the number of weights and the power consumption of the model are usually related: the more weights there are, the greater the complexity and computational load of the model, which may require more computing resources and higher power consumption during deployment. In addition, the number of weights also affects the model's storage and data transmission requirements, which may likewise lead to higher power consumption. By using a very small number of weights in this model, the experimental results demonstrate that microwatt-level power consumption can be achieved. Currently, many related studies on power consumption remain at the stage of theoretical estimation [57,58]. In contrast, we have measured and visualized the power values during real-time epilepsy detection. While some studies may achieve similarly low power consumption, they often suffer from significantly higher latency [58,59]. Our study, however, achieved a balanced outcome, ensuring low power consumption, low latency, and high accuracy.

7. Discussion and future work

In this research, a real-time epilepsy detection method based on an SNN is proposed and implemented on a hardware platform using the neuromorphic processor Xylo. This study presents several advantages and innovations. First, a third-generation spiking neural network was utilized. In processing EEG signals, compared with an ANN, an SNN can better simulate the behavior of neurons, especially the process in which neurons fire spikes within a certain time window after receiving inputs; therefore, the SNN can more accurately capture the key features of EEG signals and process them accurately. Second, instead of providing complete or buffered input signals to the neural network, our SNN analyzes the EEG signals in a real-time streaming mode. This working mode improves the efficiency and lowers the latency of the method, which significantly enhances the clinical usability of EEG-based seizure detection. Compared with buffering the input signal, analyzing the signal in real-time streaming mode does not require storing large amounts of data, which avoids wasting storage space and introducing delays; this is very important for designing low-power and implantable brain–computer interfaces and related devices. In addition, analyzing EEG signals in real-time streaming mode can help the SNN better adapt to changes in different real-time EEG data streams, thereby improving the method's adaptability and generalization ability. Finally, our network is deployed on the neuromorphic chip Xylo instead of traditional von Neumann architecture chips; this is no longer the traditional approach of simply simulating biological signals with an SNN.


Fig. 9. (A) The officially marked epileptic seizure period is displayed in light red on the original signal plot, while the 5-s scanning window is represented in blue. The application allows for quick diagnosis during real-time signal recording. (B) The time interval corresponding to each vertex of a triangle is the period in the original signal corresponding to the predicted result. By diagnosing the original signal, the model identifies the silent period (green), the epileptic period (red), and false detections automatically recognized by the model (yellow). (C) Power consumption tracked over a period of time.

Table 2
Comparison of results on the CHB-MIT Scalp EEG and Siena Scalp EEG datasets. 1 Short-time Fourier transform. 2 Convolutional neural network. 3 Binary single-dimension. 4 Recurrent neural network. 5 Hierarchical neural network. 6 Support vector machine. 7 Artificial neural network. Acc: Accuracy. Sens: Sensitivity. Spec: Specificity. H/W Deploy: Hardware deployment.

Dataset | Citation | Method | Acc. % | Sens. % | Spec. % | No. weights | H/W Deploy
CHB-MIT | [47] | STFT1-CNN2 | N.A. | 81.2 | – | 119K | No
CHB-MIT | [48] | BSD3-CNN | N.A. | 94.6 | – | 53.1K | No
CHB-MIT | [49] | Deep CNN | 90.0 | 90.2 | 88.0 | 70.9K | No
CHB-MIT | [50] | CNN | 61.0 | 59.0 | – | 8.93M | No
CHB-MIT | [51] | HNN5 | 98.9 | – | – | >27.3M | No
CHB-MIT | [52] | CNN+RNN4 | 96.2 | 98.2 | 94.0 | – | No
CHB-MIT | [53] | CNN | 95.5 | 94.7 | – | 1.57M | No
CHB-MIT | [54] | CNN | 96.9 | 97.0 | 96.7 | 10,854 | No
CHB-MIT | This work | SNN | 93.3 | 90.4 | 96.7 | 2.4K | Yes
Siena | [55] | SVM6 | 87.5 | 85.0 | 90.2 | – | No
Siena | [56] | ANN7 | 94.0 | 74.0 | – | – | No
Siena | This work | SNN | 92.9 | 89.7 | 90.1 | 2.4K | Yes

Unlike the traditional von Neumann architecture, neuromorphic chips natively use neuron and synapse models for computation, have strong parallel computing capabilities, and can efficiently process large datasets in streaming mode. In addition, near-memory-compute neuromorphic processors, which integrate memory closely with computing resources, consume significantly less energy than traditional von Neumann architectures. Our results highlight the broad prospects for the application of neuromorphic chips in the fields of edge machine learning and artificial intelligence inference.

Our method achieves relatively high accuracy in detecting epilepsy, but there is still room for improvement. Specifically, the accuracy of the proposed method is not as high as that of a traditional ANN implementation. Additionally, the generalization ability of the model can be improved. This is a trade-off between accuracy and model simplicity (and low power).

Our proposed network requires the smallest amount of resources while achieving comparable training metrics (Table 2). Additionally, in Table 1 we compare the processor types, network architectures, technologies used, classification methods, chip sizes, power supply voltages, frequencies, SRAM capacities, and measurement methods of different works. It can be seen that the neuromorphic processor Xylo used in this work has the lowest power consumption (287.9 μW), the best among similar works.

Our proposed solution can assist medical practitioners in accurately diagnosing patients with epilepsy, thereby facilitating timely and effective treatment plans and enhancing the quality of life of such patients. Furthermore, our work paves the way for the development of low-power wearable devices for biosignal detection, and presents novel insights and tools for neurological disease diagnosis and treatment, as well as other fields of clinical medicine and brain science research. In addition, this method can be extended to process other biosignals, such as the classification of electromyography (EMG) and electrocardiography (ECG); real-time streaming signal analysis can provide more efficient and reliable solutions for the detection and treatment of muscle and heart diseases [60]. Therefore, the potential applications of SNNs in the medical domain are extensive, and they hold considerable research and practical significance.

CRediT authorship contribution statement

Ruixin Li: Writing – original draft, Visualization, Validation, Software, Resources, Project administration, Methodology, Data curation. Guoxu Zhao: Writing – review & editing, Supervision, Project administration, Formal analysis, Conceptualization. Dylan Richard Muir: Writing – review & editing, Visualization, Supervision, Investigation, Conceptualization. Yuya Ling: Conceptualization. Karla Burelo: Methodology. Mina Khoe: Methodology. Dong Wang: Writing – review & editing, Supervision, Methodology, Investigation, Funding acquisition, Formal analysis, Conceptualization. Yannan Xing: Writing – review & editing, Software, Investigation, Formal analysis, Data curation, Conceptualization. Ning Qiao: Supervision, Methodology, Formal analysis, Conceptualization.

Declaration of competing interest

The authors of this paper hereby declare that there are no financial, personal, or other relationships that could or might be perceived to influence the objectivity of our research and the results presented in this paper. We affirm that our research work has been conducted independently and has not been unduly influenced by any funding agencies, commercial entities, or other organizations.

Acknowledgments

This work was financially supported by the National Natural Science Foundation of China (32260244), the Hainan Provincial Natural Science Foundation of China (524YXQN416, 824CXTD424), the Science and Technology Special Fund of Hainan Province (ZDYF2022SHFZ289) and the Research Foundation of the One Health Collaborative Innovation Center of Hainan University (XTCX2022JKC01).

References

[1] P.L. Nunez, R. Srinivasan, Electroencephalogram, Scholarpedia 2 (2) (2007) 1348.
[2] M. Teplan, et al., Fundamentals of EEG measurement, Meas. Sci. Rev. 2 (2) (2002) 1–11.
[3] S. Noachtar, J. Rémi, The role of EEG in epilepsy: a critical review, Epilepsy Behav. 15 (1) (2009) 22–33.
[4] S.J. Smith, EEG in the diagnosis, classification, and management of patients with epilepsy, J. Neurol. Neurosurg. Psychiatry 76 (suppl 2) (2005) ii2–ii7.
[5] J. Engel, A practical guide for routine EEG studies in epilepsy, J. Clin. Neurophysiol. 1 (2) (1984) 109–142.
[6] J.H. Ju, Y.J. Park, J. Park, B.G. Lee, J. Lee, J.Y. Lee, Real-time driver's biological signal monitoring system, Sensors Mater. 27 (1) (2015) 51–59.
[7] J. Chen, R.V. Dalal, A.N. Petrov, A. Tsai, S.E. O'Leary, K. Chapin, J. Cheng, M. Ewan, P.-L. Hsiung, P. Lundquist, et al., High-throughput platform for real-time monitoring of biological processes by multicolor single-molecule fluorescence, Proc. Natl. Acad. Sci. USA 111 (2) (2014) 664–669.
[8] L. Zhang, S. Zhou, T. Zhi, Z. Du, Y. Chen, TDSNN: From deep neural networks to deep spike neural networks with temporal-coding, in: AAAI, Vol. 33, 2019, pp. 1319–1326.
[9] A.L. Goldberger, L.A. Amaral, L. Glass, J.M. Hausdorff, P.C. Ivanov, R.G. Mark, J.E. Mietus, G.B. Moody, C.-K. Peng, H.E. Stanley, PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals, Circulation 101 (23) (2000) e215–e220.
[10] P. Detti, Siena scalp EEG database (version 1.0.0), PhysioNet (2020).
[11] P. Weidel, S. Sheik, WaveSense: Efficient temporal convolutions with spiking neural networks for keyword spotting, 2021, arXiv preprint arXiv:2111.01456.
[12] L.C. Parra, C.D. Spence, A.D. Gerson, P. Sajda, Recipes for the linear analysis of EEG, NeuroImage 28 (2) (2005) 326–341.
[13] Z. Gao, X. Wang, Y. Yang, C. Mu, Q. Cai, W. Dang, S. Zuo, EEG-based spatio-temporal convolutional neural network for driver fatigue evaluation, IEEE Trans. Neural Netw. Learn. Syst. 30 (9) (2019) 2755–2763.
[14] Y. Mohammadi, M. Hajian, M.H. Moradi, Discrimination of depression levels using machine learning methods on EEG signals, in: ICEE, IEEE, 2019, pp. 1765–1769.
[15] K.M. Tsiouris, V.C. Pezoulas, M. Zervakis, S. Konitsiotis, D.D. Koutsouris, D.I. Fotiadis, A long short-term memory deep learning network for the prediction of epileptic seizures using EEG signals, Comput. Biol. Med. 99 (2018) 24–37.
[16] S. Ghosh-Dastidar, H. Adeli, Improved spiking neural networks for EEG classification and epilepsy and seizure detection, Integr. Comput. Aided Eng. 14 (3) (2007) 187–212.
[17] S. Ghosh-Dastidar, H. Adeli, A new supervised learning algorithm for multiple spiking neural networks with application in epilepsy and seizure detection, Neural Netw. 22 (10) (2009) 1419–1431.
[18] N.K. Kasabov, NeuCube: A spiking neural network architecture for mapping, learning and understanding of spatio-temporal brain data, Neural Netw. 52 (2014) 62–76.
[19] K. Burelo, G. Ramantani, G. Indiveri, J. Sarnthein, A neuromorphic spiking neural network detects epileptic high frequency oscillations in the scalp EEG, Sci. Rep. 12 (1) (2022) 1798.
[20] Y. Wang, S. Duan, F. Chen, Efficient asynchronous federated neuromorphic learning of spiking neural networks, Neurocomputing 557 (2023) 126686, http://dx.doi.org/10.1016/j.neucom.2023.126686.
[21] Y.-H. Liu, X.-J. Wang, Spike-frequency adaptation of a generalized leaky integrate-and-fire model neuron, J. Comput. Neurosci. 10 (2001) 25–45.
[22] S. Ghosh-Dastidar, H. Adeli, Third generation neural networks: Spiking neural networks, in: Advances in Computational Intelligence, Springer, 2009, pp. 167–178.
[23] G. Susi, A. Cristini, M. Salerno, Path multimodality in a feedforward SNN module, using LIF with latency model, Neural Netw. World 26 (4) (2016) 363.
[24] V. Demin, D. Nekhaev, Recurrent spiking neural network learning based on a competitive maximization of neuronal activity, Front. Neuroinform. 12 (2018) 79.
[25] Y. Guo, H. Wu, B. Gao, H. Qian, Unsupervised learning on resistive memory array based spiking neural networks, Front. Neurosci. 13 (2019) 812.
[26] H.K. Hartline, H.G. Wagner, F. Ratliff, Inhibition in the eye of Limulus, J. Gen. Physiol. 39 (5) (1956) 651–673.
[27] H. Bos, D. Muir, Sub-mW neuromorphic SNN audio processing applications with Rockpool and Xylo, 2022, arXiv preprint arXiv:2208.12991.
[28] D. Muir, F. Bauer, P. Weidel, Rockpool documentation, 2019, http://dx.doi.org/10.5281/zenodo.6381006.
[29] A.H. Shoeb, Application of Machine Learning to Epileptic Seizure Onset Detection and Treatment, Ph.D. thesis, Massachusetts Institute of Technology, 2009.
[30] I. Winkler, S. Haufe, M. Tangermann, Automatic classification of artifactual ICA-components for artifact removal in EEG signals, Behav. Brain Funct. 7 (2011) 1–15.
[31] A. Subasi, M.I. Gursoy, EEG signal classification using PCA, ICA, LDA and support vector machines, Expert Syst. Appl. 37 (12) (2010) 8659–8666.
[32] F.C. Viola, S. Debener, J. Thorne, T.R. Schneider, Using ICA for the analysis of multi-channel EEG data, in: Simultaneous EEG-fMRI, Oxford Univ. Press, 2010, pp. 121–133.
[33] A. Flexer, H. Bauer, J. Pripfl, G. Dorffner, Using ICA for removal of ocular artifacts in EEG recorded from blind subjects, Neural Netw. 18 (7) (2005) 998–1005.
[34] F. Corradi, G. Indiveri, A neuromorphic event-based neural recording system for smart brain-machine-interfaces, IEEE Trans. Biomed. Circuits Syst. 9 (5) (2015) 699–709.
[35] A.v.d. Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, K. Kavukcuoglu, WaveNet: A generative model for raw audio, 2016, arXiv preprint arXiv:1609.03499.


[36] E.O. Neftci, H. Mostafa, F. Zenke, Surrogate gradient learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks, IEEE Signal Process. Mag. 36 (6) (2019) 51–63.
[37] Z. Zhang, M. Sabuncu, Generalized cross entropy loss for training deep neural networks with noisy labels, in: ANIPS, Vol. 31, 2018.
[38] D.P. Kingma, J. Ba, Adam: A method for stochastic optimization, 2014, arXiv preprint arXiv:1412.6980.
[39] M. Sorbaro, Q. Liu, M. Bortone, S. Sheik, Optimizing the energy consumption of spiking neural networks for neuromorphic applications, Front. Neurosci. 14 (2020) 662.
[40] S. Naveen, M.R. Kounte, M.R. Ahmed, Low latency deep learning inference model for distributed intelligent IoT edge clusters, IEEE Access 9 (2021) 160607–160621.
[41] H.G. Daoud, A.M. Abdelhameed, M. Bayoumi, FPGA implementation of high accuracy automatic epileptic seizure detection system, in: MWSCAS, IEEE, 2018, pp. 407–410.
[42] C. Tsou, C.-C. Liao, S.-Y. Lee, Epilepsy identification system with neural network hardware implementation, in: AICAS, IEEE, 2019, pp. 163–166.
[43] K. Meddah, H. Zairi, B. Bessekri, H. Cherrih, M. Kedir-Talha, FPGA implementation of epileptic seizure detection based on DWT, PCA and support vector machine, in: EDiS, IEEE, 2020, pp. 141–146.
[44] Y. Wen, Y. Zhang, L. Wen, H. Cao, G. Ai, M. Gu, P. Wang, H. Chen, A 65nm/0.448 mW EEG processor with parallel architecture SVM and lifting wavelet transform for high-performance and low-power epilepsy detection, Comput. Biol. Med. 144 (2022) 105366.
[45] Y. Liu, L. Chen, X. Li, Y. Wu, S. Liu, J. Wang, S. Hu, Q. Yu, T. Chen, Y. Liu, Epilepsy detection with artificial neural network based on as-fabricated neuromorphic chip platform, AIP Adv. 12 (3) (2022).
[46] B. Gupta, Y.K. Balivada, A. Kumar, M. Sameer, et al., FPGA based artificial neural network processor for detection of epileptic seizure, 2023, Authorea Preprints.
[47] N.D. Truong, A.D. Nguyen, L. Kuhlmann, M.R. Bonyadi, J. Yang, S. Ippolito, O. Kavehei, Convolutional neural networks for seizure prediction using intracranial and scalp electroencephalogram, Neural Netw. 105 (2018) 104–111.
[48] S. Zhao, J. Yang, Y. Xu, M. Sawan, Binary single-dimensional convolutional neural network for seizure prediction, in: ISCAS, IEEE, 2020, pp. 1–5.
[49] Y. Zhang, Y. Guo, P. Yang, W. Chen, B. Lo, Epilepsy seizure prediction on EEG using common spatial pattern and convolutional neural network, IEEE J. Biomed. Health Inform. 24 (2) (2019) 465–474.
[50] P. Handa, N. Goel, Epileptic seizure detection using rhythmicity spectrogram and cross-patient test set, in: SPIN, IEEE, 2021, pp. 898–902.
[51] D. Hu, J. Cao, X. Lai, Y. Wang, S. Wang, Y. Ding, Epileptic state classification by fusing hand-crafted and deep learning EEG features, IEEE Trans. Circuits Syst. II Express Briefs 68 (4) (2020) 1542–1546.
[52] M. Varlı, H. Yılmaz, Multiple classification of EEG signals and epileptic seizure diagnosis with combined deep learning, J. Comput. Sci. 67 (2023) 101943.
[53] E. Abdellatef, H.M. Emara, M.R. Shoaib, F.E. Ibrahim, M. Elwekeil, W. El-Shafai, T.E. Taha, A.S. El-Fishawy, E.-S.M. El-Rabaie, I.M. Eldokany, et al., Automated diagnosis of EEG abnormalities with different classification techniques, Med. Biol. Eng. Comput. (2023) 1–23.
[54] D. Cimr, H. Fujita, H. Tomaskova, R. Cimler, A. Selamat, Automatic seizure detection by convolutional neural networks with computational complexity analysis, Comput. Methods Programs Biomed. 229 (2023) 107277.
[55] Y. Xiong, J. Li, D. Wu, F. Dong, J. Liu, L. Jiang, J. Cao, Y. Xu, Seizure detection algorithm based on fusion of spatio-temporal network constructed with dispersion index, Biomed. Signal Process. Control 79 (2023) 104155.
[56] S.E. Sánchez-Hernández, R.A. Salido-Ruiz, S. Torres-Ramos, I. Román-Godínez, Evaluation of feature selection methods for classification of epileptic seizure EEG signals, Sensors 22 (8) (2022) 3066.
[57] Y. Yang, J.K. Eshraghian, N.D. Truong, A. Nikpour, O. Kavehei, Neuromorphic deep spiking neural networks for seizure detection, Neuromorph. Comput. Eng. 3 (1) (2023) 014010.
[58] F. Manzouri, M. Zöllin, S. Schillinger, M. Dümpelmann, R. Mikut, P. Woias, L.M. Comella, A. Schulze-Bonhage, A comparison of energy-efficient seizure detectors for implantable neurostimulation devices, Front. Neurol. 12 (2022) 703797.
[59] A. Bahr, M. Schneider, M.A. Francis, H.M. Lehmann, I. Barg, A.-S. Buschhoff, P. Wulff, T. Strunskus, F. Faupel, Epileptic seizure detection on an ultra-low-power embedded RISC-V processor using a convolutional neural network, Biosensors 11 (7) (2021) 203.
[60] M. Hammad, A. Maher, K. Wang, F. Jiang, M. Amrani, Detection of abnormal heart conditions based on characteristics of ECG signals, Measurement 125 (2018) 634–644.

