Computational Techniques in Neuroscience
The text discusses deep learning and machine learning techniques in the field of neuroscience, engineering approaches to studying brain structure and dynamics, convolutional networks for fast, energy-efficient neuromorphic computing, and reinforcement learning in feedback control. It showcases case studies in neural data analysis.
Edited by
Kamal Malik
Harsh Sadawarti
Moolchand Sharma
Umesh Gupta
Prayag Tiwari
First edition published 2024
by CRC Press
6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742
Reasonable efforts have been made to publish reliable data and information,
but the author and publisher cannot assume responsibility for the validity of
all materials or the consequences of their use. The authors and publishers have
attempted to trace the copyright holders of all material reproduced in this
publication and apologize to copyright holders if permission to publish in this
form has not been obtained. If any copyright material has not been acknowledged
please write and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be
reprinted, reproduced, transmitted, or utilized in any form by any electronic,
mechanical, or other means, now known or hereafter invented, including
photocopying, microfilming, and recording, or in any information storage or
retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, access
www.copyright.com or contact the Copyright Clearance Center, Inc. (CCC),
222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For works that
are not available on CCC please contact [email protected]
Typeset in Sabon
by MPS Limited, Dehradun
Dr. Kamal Malik would like to dedicate this book to her father,
Sh. Ashwani Malik, her mother, Smt. Shakuntla Malik, and her brother,
Dr. Shiv Malik, for their constant support and motivation; I would also
like to give my special thanks to the publisher and my other co-editors
for believing in my abilities. Above all, a humble thanks to the Almighty
for this accomplishment.
Dr. Harsh Sadawarti would like to dedicate this book to his father,
Sh. Jagan Nath Sadawarti, his mother, Smt. Krishna, and his wife,
Ritcha, for their constant support and motivation; I would also like to
thank the publisher and my other co-editors for having faith in my
abilities. Above all, a humble thanks to the Almighty for this
accomplishment.
Mr. Moolchand Sharma would like to dedicate this book to his father,
Sh. Naresh Kumar Sharma, and his mother, Smt. Rambati Sharma, for
their constant support and motivation, and his family members, including
his wife, Ms. Pratibha Sharma, and son, Dhairya Sharma. I also thank
the publisher and my other co-editors for believing in my abilities.
Dr. Umesh Gupta would like to dedicate this book to his mother,
Smt. Prabha Gupta, and his father, Sh. Mahesh Chandra Gupta, for their
constant support and motivation, and his family members, including
his wife, Ms. Umang Agarwal, and son, Avaya Gupta. I also thank the
publisher and my other co-editors for believing in my abilities. Before
beginning and after finishing my endeavor, I must appreciate the
Almighty God, who provides me with the means to succeed.
Dr. Prayag Tiwari would like to dedicate this book to his father &
his mother for their constant support and motivation and his family
members. I also thank the publisher and my other co-editors for
believing in my abilities.
Editor(s)
IPM, IJCV, IEEE TNNLS, IEEE TFS, IEEE TII, IEEE JBHI, IEEE IOTJ,
IEEE BIBM, ACM TOIT, CIKM, SIGIR, AAAI, etc. His research interests
include machine learning, deep learning, quantum machine learning,
information retrieval, healthcare, and IoT. He is also associated with a funded project named “Data Literacy for Responsible Decision-Making,” short title STN LITERACY/Marttinen. He is also a reviewer for many reputed journals from publishers such as Springer, Elsevier, IEEE, Wiley, Taylor & Francis Group, and World Scientific, as well as for IJEECS and many Springer conferences.
1.1 INTRODUCTION
1.3 AGGREGATION
Aggregation has applications across domains. Decision making receives great interest from researchers across various disciplines, but in real life the available information is often ambiguous or imprecise. To solve such decision-making problems with vague or imprecise information, fuzzy set theory and intuitionistic fuzzy set (IFS) theory have emerged as powerful tools. Fuzzy set and IFS theoretic approaches, for instance in defining entropy functions, are useful in many real-life situations.
IFSs were introduced by Atanassov (1986). They are quite useful and
applicable, and are defined as:
An IFS $A$ in $X = \{x_1, x_2, \ldots, x_n\}$ is given as $A = \{\langle x, \mu_A(x), \nu_A(x) \rangle \mid x \in X\}$, described by a membership function $\mu_A: X \to [0, 1]$ and a non-membership function $\nu_A: X \to [0, 1]$ of the element $x \in X$, where the function $\pi_A(x) = 1 - \mu_A(x) - \nu_A(x)$ is defined as the intuitionistic index or hesitation index of $x$ in $A$. In the limiting case $\pi_A(x) = 0$, the IFS reduces automatically to a fuzzy set.
For a time variable $t$, $\alpha(t) = (\mu(t), \nu(t))$ is called an intuitionistic fuzzy variable, as proposed by Xu and Yager (2008).

If $t = t_1, t_2, \ldots, t_p$, then $\alpha(t_1), \alpha(t_2), \ldots, \alpha(t_p)$ indicate $p$ intuitionistic fuzzy numbers (IFNs) collected at $p$ different periods, where $\mu(t), \nu(t) \in [0, 1]$.

Some operations on IFNs are as follows. Let $\alpha_1 = (\mu_1, \nu_1)$ and $\alpha_2 = (\mu_2, \nu_2)$ be two IFNs; then

• $\alpha_1 \oplus \alpha_2 = (\mu_1 + \mu_2 - \mu_1 \mu_2, \; \nu_1 \nu_2)$
• $\alpha_1 \otimes \alpha_2 = (\mu_1 \mu_2, \; \nu_1 + \nu_2 - \nu_1 \nu_2)$
• $\lambda \alpha = (1 - (1 - \mu)^{\lambda}, \; \nu^{\lambda})$, $\lambda > 0$
• $\alpha^{\lambda} = (\mu^{\lambda}, \; 1 - (1 - \nu)^{\lambda})$, $\lambda > 0$

Then

DIFWA$_{\lambda(t)}(\alpha(t_1), \alpha(t_2), \ldots, \alpha(t_p)) = \lambda(t_1)\,\alpha(t_1) \oplus \lambda(t_2)\,\alpha(t_2) \oplus \cdots \oplus \lambda(t_p)\,\alpha(t_p)$

is called the dynamic intuitionistic fuzzy weighted averaging (DIFWA) operator, where $\lambda(t_i) \geq 0$ and $\sum_{i=1}^{p} \lambda(t_i) = 1$.
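To make these definitions concrete, here is a minimal Python sketch of the IFN operations and the DIFWA aggregation, assuming IFNs are represented as plain (mu, nu) pairs; the function names (`ifn_scale`, `ifn_add`, `difwa`) and the example numbers are illustrative, not from the chapter.

```python
# Sketch of DIFWA over intuitionistic fuzzy numbers (IFNs), using the
# operations lambda*alpha = (1-(1-mu)**lam, nu**lam) and
# alpha1 (+) alpha2 = (mu1+mu2-mu1*mu2, nu1*nu2). Names are illustrative.

def ifn_scale(alpha, lam):
    """Scalar multiplication lambda * alpha of an IFN alpha = (mu, nu), lambda > 0."""
    mu, nu = alpha
    return (1 - (1 - mu) ** lam, nu ** lam)

def ifn_add(a1, a2):
    """Algebraic sum alpha1 (+) alpha2 of two IFNs."""
    (mu1, nu1), (mu2, nu2) = a1, a2
    return (mu1 + mu2 - mu1 * mu2, nu1 * nu2)

def difwa(alphas, weights):
    """Aggregate IFNs collected at p periods; weights are nonnegative and sum to 1."""
    result = (0.0, 1.0)  # identity element for (+): mu = 0, nu = 1
    for alpha, lam in zip(alphas, weights):
        result = ifn_add(result, ifn_scale(alpha, lam))
    return result

# Example: three periods with more weight on recent observations
print(difwa([(0.5, 0.3), (0.6, 0.2), (0.7, 0.1)], [0.2, 0.3, 0.5]))
```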
Figure 1.2 Different types of brain tumors present in the human body (source: www.brainhealthdoctor.com, accessed 20-01-2023).
There are many types of primary brain tumors, named according to the
type of the cells or part of the brain in which they grow (Figure 1.2). The
most common types are: acoustic neuroma, astrocytoma, brain metastases,
choroid plexus carcinoma, craniopharyngioma, embryonal tumors, epen-
dymoma, glioblastoma, glioma, medulloblastoma, meningioma, oligoden-
droglioma, pediatric brain tumors, pineoblastoma, haemangioblastoma,
lymphoma, pineal region tumors, spinal cord tumors, pituitary tumors,
germ cell tumors, and many more. Out of these, 78% of malignant tumors are gliomas, which arise from the supporting cells of the brain, called glia. A glioma comprises glial cells, which protect and support neurons; gliomas are malignant in nature and usually located in the brain and spinal cord. Ependymoma is a subtype of glioma generally found in children, comprising the ependymal cells that line the brain’s ventricles. According to the National Cancer Institute (2018), approximately 30% of tumors of this type are found in children aged 0–14 years.
Based on factors such as degree of malignancy, aggressiveness, extent of infiltration, speed of recurrence, and proneness to necrosis, the WHO developed a grading system to judge the malignancy of a tumor: low-grade (Grade I and Grade II) and high-grade (Grade III and Grade IV) tumors. Grade I and Grade II tumors generally grow
slower than Grade III and Grade IV tumors. Over time, a low-grade tumor can progress to a high-grade tumor. The grade of a tumor refers to the structure of the cells under a
microscope. Brain tumors of any type are treated with surgery, radiation, and/
or chemotherapy, either alone or in various combinations. In this chapter,
12 types of brain tumors have been considered and given in Figure 1.3.
In this chapter, the DIFWA operator has been used to evaluate the type of
brain tumor.
Assumptions:
Xu and Yager (2008) proposed the DIFWA operator for solving multi-criteria decision-making (MCDM) problems; it is applied as follows:
12.3. Determine the distance between the alternative $f_i$ and the intuitionistic fuzzy positive ideal solution (IFPIS) $\alpha^+$, and the distance between the alternative $f_i$ and the intuitionistic fuzzy negative ideal solution (IFNIS) $\alpha^-$, respectively:

$d(f_i, \alpha^+) = \sum_{j=1}^{m} w_j \, d(t_{ij}, \alpha_j^+) = \sum_{j=1}^{m} w_j (1 - \mu_{ij})$  (1.1)

$d(f_i, \alpha^-) = \sum_{j=1}^{m} w_j \, d(t_{ij}, \alpha_j^-) = \sum_{j=1}^{m} w_j (1 - \nu_{ij})$  (1.2)

12.4. The closeness coefficient of each alternative is calculated as:

$C(f_i) = \dfrac{d(f_i, \alpha^-)}{d(f_i, \alpha^+) + d(f_i, \alpha^-)}, \quad i = 1, \ldots, n$  (1.3)

Since

$d(f_i, \alpha^+) + d(f_i, \alpha^-) = \sum_{j=1}^{m} w_j (1 + \pi_{ij})$  (1.4)

then

$C(f_i) = \dfrac{\sum_{j=1}^{m} w_j (1 - \nu_{ij})}{\sum_{j=1}^{m} w_j (1 + \pi_{ij})}, \quad i = 1, \ldots, n$  (1.5)
12.6. End.
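Steps 12.3–12.6 can be sketched in a few lines of Python; this is an illustration under the same formulas, with made-up criterion weights and aggregated IFNs rather than the chapter's data.

```python
# Rank alternatives by the closeness coefficient of equation (1.5):
# C(f_i) = sum_j w_j*(1 - nu_ij) / sum_j w_j*(1 + pi_ij), pi = 1 - mu - nu.

def closeness(ifns, weights):
    """ifns: list of (mu, nu) pairs for one alternative across m criteria."""
    num = sum(w * (1 - nu) for (mu, nu), w in zip(ifns, weights))
    den = sum(w * (2 - mu - nu) for (mu, nu), w in zip(ifns, weights))  # 1 + pi
    return num / den

weights = [0.3, 0.3, 0.4]                       # criterion weights, sum to 1
alternatives = {                                # illustrative aggregated IFNs
    "f1": [(0.6, 0.3), (0.5, 0.4), (0.7, 0.2)],
    "f2": [(0.4, 0.5), (0.6, 0.3), (0.5, 0.3)],
    "f3": [(0.7, 0.2), (0.4, 0.4), (0.6, 0.3)],
}
ranking = sorted(alternatives,
                 key=lambda f: closeness(alternatives[f], weights), reverse=True)
print(ranking)  # alternatives ordered from best to worst closeness coefficient
```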
1.14 RESULT
This section presents the results, with a discussion of the closeness coefficients for the various types of brain tumors, as shown in Figure 1.4.
Thus, the ranking of various brain tumors based on available information
is given below:
$f_9 \succ f_5 \succ f_4 \succ f_1 \succ f_2 \succ f_3 \succ f_8 \succ f_{11} \succ f_{10} \succ f_6 \succ f_7$
1.16 CONCLUSION
The MCDM technique has been proposed in this chapter for the diagnosis of the type of brain tumor in the context of IFNs. The DIFWA operator has been used to aggregate the IFNs for each of the alternatives. The optimal decision is evaluated by ranking the alternatives based on their closeness coefficient values. With the help of the given operator, one can easily rank the types of brain tumor present in a patient's body according to their attributes under the intuitionistic fuzzy environment. This model can be combined with other methods to reach a better decision without any additional medical tests, as the proposed algorithm is sufficient to perform an initial investigation of the disease. The proposed model may help doctors make better decisions under uncertainty.
REFERENCES
Atanassov, Krassimir T. “Intuitionistic fuzzy sets.” Fuzzy Sets and Systems 20, no. 1 (1986): 87–96. 10.1016/S0165-0114(86)80034-3.
Atanassov, Krassimir T. Intuitionistic fuzzy sets. Physica-Verlag HD, 1999.
Atanassov, Krassimir, Gabriella Pasi, and Ronald Yager. “Intuitionistic fuzzy
interpretations of multi-criteria multi-person and multi-measurement tool
decision making.” International Journal of Systems Science 36, no. 14 (2005):
859–868.
Bordogna, Gloria, and Gabriella Pasi. “A fuzzy linguistic approach generalizing
boolean information retrieval: A model and its evaluation.” Journal of the
American Society for Information Science 44, no. 2 (1993): 70–82.
Bordogna, Gloria, Mario Fedrizzi, and Gabriella Pasi. “A linguistic modeling of
consensus in group decision making based on OWA operators.” IEEE
Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans
27, no. 1 (1997): 126–133.
Bustince, Humberto, Francisco Herrera, and Javier Montero, eds. Fuzzy sets and
their extensions: Representation, aggregation and models: Intelligent systems
from decision making to data mining, web intelligence and computer vision.
Vol. 220. Springer, 2007.
Chen, Shu-Jen, and Ching-Lai Hwang. Fuzzy multiple attribute decision making methods. Springer Berlin Heidelberg, 1992.
Chen, Shyi-Ming, and Jiann-Mean Tan. “Handling multicriteria fuzzy decision-
making problems based on vague set theory.” Fuzzy sets and systems 67, no. 2
(1994): 163–172.
De, Supriya Kumar, Ranjit Biswas, and Akhil Ranjan Roy. “Some operations on
intuitionistic fuzzy sets.” Fuzzy sets and Systems 114, no. 3 (2000): 477–484.
Delgado, Miguel, Francisco Herrera, Enrique Herrera-Viedma, and Luis Martinez.
“Combining numerical and linguistic information in group decision making.”
Information Sciences 107, no. 1-4 (1998): 177–194.
Diaby, Vakaramoko, and Ron Goeree. “How to use multi-criteria decision analysis
methods for reimbursement decision-making in healthcare: a step-by-step
guide.” Expert review of pharmacoeconomics & outcomes research 14, no. 1
(2014): 81–99.
Fisher, Bernard. “Fuzzy environmental decision-making: applications to air pollu-
tion.” Atmospheric Environment 37, no. 14 (2003): 1865–1877.
Fodor, Janos C., and M. R. Roubens. Fuzzy preference modelling and multicriteria
decision support. Vol. 14. Springer Science & Business Media, 1994.
Herrera, F., E. Herrera-Viedma, and J. L. Verdegay. “A linguistic decision process in
group decision making.” Group Decision and Negotiation 5 (1996): 165–176.
Herrera, F., and E. Herrera-Viedma. “On the linguistic OWA operator and exten-
sions.” The Ordered Weighted Averaging Operators: Theory and Applications
(1997): 60–72.
Herrera, Francisco, and Enrique Herrera-Viedma. “Linguistic decision analysis:
steps for solving decision problems under linguistic information.” Fuzzy Sets
and Systems 115, no. 1 (2000a): 67–82.
Herrera, Francisco, and Enrique Herrera-Viedma. “Choice functions and mecha-
nisms for linguistic preference relations.” European Journal of Operational
Research 120, no. 1 (2000b): 144–161.
Herrera, Francisco, Enrique Herrera-Viedma, and Luis Martínez. “A fusion
approach for managing multi-granularity linguistic term sets in decision
making.” Fuzzy Sets and Systems 114, no. 1 (2000c): 43–58.
Herrera, Francisco, and Luis Martínez. “A 2-tuple fuzzy linguistic representation
model for computing with words.” IEEE Transactions on Fuzzy Systems 8,
no. 6 (2000d): 746–752.
Herrera, Francisco, Enrique Herrera-Viedma, and Francisco Chiclana. “Multiperson
decision-making based on multiplicative preference relations.” European
Journal of Operational Research 129, no. 2 (2001): 372–385.
Hong, Dug Hun, and Chang-Hwan Choi. “Multicriteria fuzzy decision-making
problems based on vague set theory.” Fuzzy Sets and Systems 114, no. 1
(2000): 103–113.
Kacprzyk, Janusz, Mario Fedrizzi, and Hannu Nurmi. “Group decision making
and consensus under fuzzy preferences and fuzzy majority.” Fuzzy Sets and
Systems 49, no. 1 (1992): 21–31.
2.1 INTRODUCTION
2.1.1 Introduction
As we all know, biological/spiking neuron models describe certain cells in the nervous system that generate sharp electrical potentials across cell membranes. But in today’s epoch, neuroscientists have developed various
algorithmic and computational approaches to analyze their findings and
make forecasts and hypotheses based on the study’s results. However, it is
critical to differentiate between generic modeling and neuronal modeling
related to computational neuroscience. Neural modeling is a mathemat-
ical or computer methodology that utilizes a neural network, an artificial
intelligence (AI) technology that trains computers to interpret data in a
manner similar to that of the human brain. Deep learning is a machine
learning approach that engages linked nodes or neurons in a hetero-
structure similar to the human brain. Precise neural models make certain
assumptions according to the available explicit data, and the conse-
quences of these suppositions are quantified. Recent advances in neural
computation reflect multidisciplinary research in theory, statistics in
neuroscience, modeling computation, design, and construction of neurally
inspired information processing systems. Hence, this sector attracts psy-
chologists, neuroscientists, physicists, computer scientists, and AI inves-
tigators functioning on neural systems underlying perception, cognition,
emotion, and behavior and artificial neural systems that have similar
capabilities. Thus, the advanced experimental technologies being developed by brain initiatives will generate large, complex data sets that demand meticulous statistical analysis and theoretical insight for a better understanding of what these data sets mean.
better than the previous data analysis of the same neuron, highlighting the
potential relevance of the competition’s results in re-evaluating previous
findings. Overall, this challenge demonstrated the importance of consid-
ering dendritic architecture in threshold models and the potential for im-
proving our understanding of neuronal activity by incorporating more
complex models (Sincich et al., 2009).
Threshold models describe brain function phenomenologically, but they provide only a tenuous relationship to the fundamental biophysical causes of electrical activity. Threshold models are restricted: they cannot forecast the specific time course of the voltage before and after a spike, nor can they predict the effects of temperature, changes in the chemical environment, or pharmacological modifications of ion channels, whereas Hodgkin-Huxley biophysical models can do all of this. Most biophysical
model parameters will soon be assessed in a systematic manner utilizing an
appropriate mix of immunostaining methods to estimate ion-channel dis-
tribution, calibrated measurements of ion-channel kinetics, and expression
investigations to identify tens of ion channels in individual cells. Along
those same lines, automatic model construction is feasible.
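To make the contrast concrete, here is a minimal leaky integrate-and-fire neuron, the simplest member of the threshold-model class discussed above: spikes are produced by a fixed voltage threshold, with none of the channel-level biophysics of a Hodgkin-Huxley model. All parameter values are illustrative.

```python
import numpy as np

# Leaky integrate-and-fire: tau * dV/dt = -(V - V_rest) + R*I,
# with a spike and reset whenever V crosses the threshold V_th.
tau, R = 10.0, 1.0                            # membrane time constant (ms), resistance
V_rest, V_th, V_reset = -65.0, -50.0, -65.0   # mV, illustrative values
dt, T = 0.1, 100.0                            # time step and duration (ms)

V, spikes = V_rest, []
for step in range(int(T / dt)):
    t = step * dt
    I = 20.0 if 20.0 <= t <= 80.0 else 0.0    # square current pulse
    V += dt / tau * (-(V - V_rest) + R * I)   # forward-Euler integration
    if V >= V_th:                             # threshold rule: fire and reset
        spikes.append(round(t, 1))
        V = V_reset

print(len(spikes), "spikes; first few at (ms):", spikes[:5])
```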
Furthermore, complex nonlinear spatiotemporal effects on the interac-
tion of back-propagating action potentials (those that move into a dendrite)
with shunting inhibition, or local spikes in intracellular calcium concen-
tration that are triggered by numerous, geographically distributed synaptic
inputs, are outside the purview of threshold models. Although traditional
experimental methods have made it difficult to quantify these nonlinear
spatiotemporal aspects, modern imaging techniques that quantify the voltage time course across the dendritic tree at high spatial resolution, in tandem with controlled multisite synaptic input by glutamate uncaging or optogenetic methods, will introduce a fresh era of statistically predictive biophysical models (Gerstner & Naud, 2009).
2.1.3 Objective
We now turn to briefly consider how models of neural networks are being
employed in medical computing. Current methods can be divided into two
major categories. Neural models are employed as computational tools in
one class of applications to execute certain information processing tasks.
Neural models are employed as modeling tools in the second category of
applications to replicate diverse neurobiological or psychological events.
Aside from its roots in brain modelling and cognitive research, neural
models offer a general computational framework with potential applica-
tions (Reggia, 1993) in many areas of medical informatics. To the degree
that an issue can be described as a neural model, vast computer capacity
may be utilized in parallel processing to solve that problem. Furthermore,
because neural models have the potential to learn, the knowledge acquisi-
tion difficulty experienced when deploying classic AI systems may
$\dfrac{da_i}{dt} = -a_i + \sum_j w_{ij} f(a_j) + K_i$  (2.1)

where $\delta_{ij}$ denotes the elements of the identity matrix and $f'(a_j)$ is the derivative of $f(a_j)$.
Gradient descent dynamics can often result in a neural network that
violates stability criteria such as feedforward, symmetric, or diagonal
dominance. However, it has been observed that even if the initial network
violates these criteria, it does not become unstable during the learning
process. This suggests that the stability assumptions underlying recurrent
backpropagation are sufficient and that a dynamical system that allows
only stable behavior is not necessary.
In gradient descent learning, the objective is to optimize an objective function with the weights as independent parameters. The number of weights, $N$, is proportional to $n^2$ if the fan-in/fan-out of the processing units is proportional to $n$. Relaxing the network and generating a target formula based on the steady state $a^0$ takes $O(mN)$ or $O(mn^2)$ operations. However, computing the gradient of the objective function numerically requires $O(mN^2)$ or $O(mn^4)$ calculations, which becomes impractical for large problems. Moreover, the number of gradient evaluations required for convergence to a solution may diverge for certain problems.
Backpropagation adaptive dynamics, which is based on gradient descent,
uses two methods to reduce computation. The first method represents the
gradient of the objective function as an outer-product for equations of
the type (2.1), that is,
$b^0 = (L^{T})^{-1} K$  (2.4)

The transpose of the $n \times n$ matrix stated in equation (2.2) is denoted by $L^{T}$, and $K$ is an external error signal that depends on the objective function and $a^0$. Because $L^{-1}$ can be determined from $L$ in $O(n^3)$ operations, and (2.4) can then be evaluated in only $O(mn^2)$ operations, this technique reduces the computational cost of the gradient estimate by a factor of $n$. As a result, the whole computation scales as $O(mn^3)$ or $O(mN^{3/2})$. The second approach exploits the fact that $b^0$ can be determined via relaxation, that is, as the (stable) fixed point of the linear differential equation
$\dfrac{db_i}{dt} = -b_i + f'(a_i) \sum_j w_{ij} b_j + K_i$  (2.5)
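To illustrate the relaxation idea, here is a minimal numpy sketch that finds the fixed points of equations (2.1) and (2.5) by integrating the dynamics until they settle, instead of inverting L; the network size, weights, nonlinearity, and input are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
W = rng.normal(scale=0.2, size=(n, n))   # illustrative weight matrix
K = rng.normal(size=n)                   # illustrative external input / error signal
f = np.tanh

def relax(deriv, x0, dt=0.05, steps=2000):
    """Integrate dx/dt = deriv(x) forward until (approximately) a fixed point."""
    x = x0.copy()
    for _ in range(steps):
        x += dt * deriv(x)
    return x

# Forward relaxation, eq. (2.1): da_i/dt = -a_i + sum_j w_ij f(a_j) + K_i
a0 = relax(lambda a: -a + W @ f(a) + K, np.zeros(n))

# Adjoint relaxation, eq. (2.5): db_i/dt = -b_i + f'(a_i) * sum_j w_ij b_j + K_i
fprime = 1 - np.tanh(a0) ** 2            # derivative of tanh at the fixed point
b0 = relax(lambda b: -b + fprime * (W @ b) + K, np.zeros(n))

print(a0.round(3), b0.round(3))
```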
In the last decade, there has been significant advancement in the study of brain function utilizing functional neuroimaging, notably in the study of human behavior through regional cerebral images. The numerous forms of functional neuroimaging approaches are founded on two distinct modalities (Abeles et al., 1995): (1) hemodynamic-metabolic – this domain includes positron emission tomography (PET), single photon emission computed tomography (SPECT), and functional magnetic resonance imaging (fMRI), all predominantly used in humans, together with the autoradiographic deoxyglucose method and optical imaging, both primarily used in nonhuman animals (Ackermann et al., 1984); and (2) electric-magnetic – this realm includes EEG
and magnetoencephalography (MEG), both primarily used with human
subjects. These two basic forms of imaging have fundamentally distinct
properties, the most notable of which are associated with temporal resolution
and the amount and type of spatial information provided by each.
track the magnetic fields created by the electric current flows associated with cerebral activity, paving the way for the application of MEG to the study of cognitive functions of the brain (Horwitz et al., 2000).
2.3.3 Conclusion
The field of functional brain imaging is a significant source of complicated data in neuroscience research, complex in both the temporal and the spatial domains. These data enable researchers to look into the neurological underpinnings of human sensory, motor, affective, and cognitive functions. The second argument is that this complexity resists easy comprehension and necessitates a similarly rich computational approach to data analysis and, more importantly, data interpretation. The third part
“The machine learns under supervision,” as the name indicates: we “train” it to make the task easier. Consider a student studying for test questions who has access to the answer key for all those questions; in this scenario, the student is our model, the exam questions are the input, and the answer key is the intended output. This form of data is referred to as “labeled data,” and it is what supervised learning uses. We train the model using labeled data, and during training the model learns the link between the input and output variables/features. As the training phase concludes, the model enters the testing phase, in which the test input characteristics are supplied to the model and the model categorizes or predicts the output. The predicted and intended outputs are then matched: if they agree closely, we can claim that the model is appropriate and the error margin is low; if the gap between the two is large, the error margin is greater and the model must be trained more or better. So, in supervised learning, the model learns from labeled examples and is judged on held-out test data.
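A minimal sketch of this train/test workflow, using scikit-learn purely as an illustration (the chapter names no particular library or dataset):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Labeled data: inputs X (the "exam questions") and targets y (the "answer key").
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)        # training phase: learn the input-output link

y_pred = model.predict(X_test)     # testing phase: categorize unseen inputs
print("error margin:", 1 - accuracy_score(y_test, y_pred))
```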
“The machine learns on its own without supervision,” as the name implies. In unsupervised learning the output variables are unlabeled: there are no known combinations of input and output variables. Unsupervised learning is concerned with examining correlations between input variables and identifying hidden patterns that may be used to generate new labels for potential outputs. For example, a student is given 50 shapes but has no clue what they are (they are still learning), and we do not define any labels or names for the shapes. This is an example of unsupervised learning with unlabeled data. Now the student will try to understand the patterns; for example, the student will make one group of shapes that have four corners, another group of shapes with three corners, and one more group of shapes with no corners. So here the student has tried to make clusters of similar input elements, and that is what we do in unsupervised learning. Further, new labels can be given to the clusters the student made. You are right if you think the shape labels/names are quadrilateral, triangle, and circle. So, in unsupervised learning, the model groups similar inputs into clusters that can then be named.
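The shape-grouping story maps directly onto clustering. A minimal sketch with k-means (the algorithm and library choice are mine, not the chapter's), where the corner count plays the role of a single unlabeled feature:

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled "shapes" described by one feature: the number of corners
# (4 = quadrilateral-like, 3 = triangle-like, 0 = circle-like).
corners = np.array([[4], [4], [3], [0], [3], [4], [0], [0], [3], [4]])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(corners)
print(kmeans.labels_)   # a cluster index per shape, found without any labels
# A human can now name the discovered clusters: quadrilateral, triangle, circle.
```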
• Disease Identification
• Drug Discovery
• Personalized Treatment
• Clinical Trial Results
• Smart Electronic Health Records
• Treatment Queries and Suggestions
• Input layer
• Hidden layer
• Output layer
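The three layers listed above can be made concrete with a minimal numpy forward pass; the layer sizes, random weights, and sigmoid nonlinearity are all illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # input layer (3) -> hidden layer (4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)  # hidden layer (4) -> output layer (2)

x = np.array([0.5, -1.2, 0.3])                 # one input vector
h = sigmoid(W1 @ x + b1)                       # hidden-layer activations
y = sigmoid(W2 @ h + b2)                       # output-layer activations
print(y)
```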
supporting neural network assists the CNN in determining the best filters for the pooling and convolution layers. As a result, the primary neural classifier
learns more quickly and efficiently. The results reveal that our CLM model
can achieve 96% accuracy, 95% precision, and 95% recall.
The brain is built of modules. Engineers are well aware of the benefits of modularity, as strong technologies are built on modules that can be copied and reused, such as transistors and web servers. The brain appears to use this idea in two ways: (i) modular circuits and (ii) modular computations. Anatomic evidence shows the
presence of canonical microcircuits, which are repeated across brain
areas, such as the cerebral cortex (Rodney et al., 1991). Physiological or
behavioral evidence points to the presence of canonical brain computa-
tions, which are typical computational modules that conduct the same
core processes in diverse contexts. A canonical neural computation can
use a variety of circuits and processes. Various brain areas or species may
use different accessible components to execute it.
Exponentiation and linear filtering are two well-known paradigms of
canonical neural computations. Exponentiation, a kind of thresholding,
acts at the neuronal and network levels, for example, the mechanism
through which eye and limb motions are produced (Saito et al., 2018). This
procedure serves several important functions, including preserving sensory
selectivity, decorrelating signals, and establishing perceptual choice. A
common computational method in sensory mechanism, which is linear fil-
tering (weighted summing by linear receptive fields), is carried out, at least
roughly, at several levels of vision, hearing, and somatosensation. It aids in
the explanation of a wide range of perceptual events and may possibly be
implicated in sensory and motor systems (Prasetyoputri, 2021).
In several neural networks, a third type of computation has been
observed: divisive normalization. Normalization evaluates a ratio between
the response of a single neuron and the overall activity of a pool of
neurons. Normalization was proposed in the early 1990s to explain the
non-linear characteristics of neurons in the primary visual cortex. Similar
computations have previously been suggested to explain light adaptation of
the retina, size invariance of the fly visual system, and associative memory
in the hippocampus. Evidence gathered since then shows that normalization
is involved in a wide range of modalities, brain areas, and animals.
Theorists have proposed numerous (not mutually incompatible) rationales
for normalizing, the majority of which are connected to code efficiency and
enhancing sensitivity. Normalization modifies the procurement of neural
responses to make better use of the dynamic range offered, enhancing sen-
sitivity to changes as input (Prasetyoputri, 2021). Light adaptation of the
Neural modeling and neural computation in a medical approach 37
retina allows for great sensitivity to tiny changes in visual characteristics over
a broad range of intensities. Normalizing reward values produces representations capable of distinguishing between dollars and millions of dollars, hence expanding the reward system’s effective dynamic range, as well as invariance with regard to some stimulus dimensions. Normalization in the fly’s antennal lobe is hypothesized to allow odorant identification and discrimination independent of concentration. In the retina, normalization discards information about the mean light level in order to retain invariant representations of other visual properties (e.g., contrast). Normalization in V1 discards contrast information used to encode picture patterns (e.g., orientation), maximizing discriminability independent of contrast.
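A minimal sketch of divisive normalization in the standard form $R_i = \gamma d_i^n / (\sigma^n + \sum_j d_j^n)$; the exponent and semi-saturation constant below are illustrative, and the example shows the intensity invariance described above.

```python
import numpy as np

def divisive_normalization(drive, sigma=1.0, n=2.0, gamma=1.0):
    """Each neuron's drive is divided by the pooled activity of the population."""
    d = np.asarray(drive, dtype=float) ** n
    return gamma * d / (sigma ** n + d.sum())

weak = divisive_normalization([1.0, 2.0, 3.0])
strong = divisive_normalization([10.0, 20.0, 30.0])
# Relative responses are almost identical despite a 10x change in input level,
# illustrating invariance to overall intensity (e.g., the mean light level).
print(weak / weak.sum())
print(strong / strong.sum())
```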
It is believed that MT encodes velocity independently of structural
pattern. Normalization inside the ventral visual pathway might aid in the
creation of object representations that are immune to changes (size, loca-
tion, lighting, and occlusion) during the process of decoding a distributed
neural representation. The responses of a group of neurons tuned for distinct speeds and directions are assumed to represent visual motion in visual region MT. These responses may be seen as discrete samples of a probability density function in which the firing rate of each neuron is proportional to a probability: the mean of the distribution predicts the stimulus velocity, and the variance of the distribution indicates the uncertainty in that prediction. If the firing rates are normalized to sum to the same constant for each stimulus, the mean and variance may be determined simply as weighted sums of the firing rates. In differentiating between stimuli, normalization can help a linear classifier distinguish between neural representations of various inputs. A point in n-dimensional space represents the response of n neurons to a stimulus. The points associated with comparable stimuli group together. A linear classifier distinguishes between stimulus categories by separating them with hyperplanes. This is challenging if certain stimuli elicit strong responses while others elicit weak ones; a hyperplane that establishes the boundary far from the origin might fail close to the origin, and vice versa. This difficulty is avoided via normalization.
Max-pooling (winner-take-all) is another way a neuronal population can be normalized (Katiyar, 2022). It can work in two modes: when the inputs are almost equal, it averages them; when one input is much higher than the rest, the competition is winner-take-all (max-pooling, choosing the maximum input). Max-pooling is hypothesized to work across different brain systems and to underpin perceptual judgments by picking the neuronal subpopulation (or psychophysical channel) with the strongest responses. Object recognition models suggest numerous stages of max-pooling. Similar to the “biased competition” concept, attention may depend on normalization to change the computation from averaging to max-pooling, thereby picking the subpopulation with the strongest responses and suppressing the others.
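The two modes can be mimicked with an exponent that sets the strength of the competition; this is an illustrative sketch, not a model from the source: a small exponent averages the inputs, a large one approaches winner-take-all (max-pooling).

```python
import numpy as np

def pooled_response(inputs, p):
    """Normalization-weighted pooling: the mean for p = 0, the max as p grows."""
    x = np.asarray(inputs, dtype=float)
    w = x ** p / np.sum(x ** p)          # competition strength set by exponent p
    return float(np.sum(w * x))

inputs = [1.0, 1.1, 5.0]                 # one input much higher than the rest
print(pooled_response(inputs, p=0))      # ~2.37: averaging mode
print(pooled_response(inputs, p=20))     # ~5.0: winner-take-all / max-pooling mode
```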
2.7 CONCLUSION
REFERENCES
Abeles, M., Bergman, H., Gat, I., Meilijson, I., Seidemann, E., Tishby, N., &
Vaadia, E. (1995). Cortical activity flips among quasi-stationary states.
Proceedings of the National Academy of Sciences of the United States of
America, 92(19), 8616–8620. 10.1073/pnas.92.19.8616
Ackermann, R. F., Finch, D. M., Babb, T. L., & Engel, J. (1984). Increased glucose
metabolism during long-duration recurrent inhibition of hippocampal pyram-
idal cells. Journal of Neuroscience, 4(1), 251–264. 10.1523/jneurosci.04-01-
00251.1984
Adey, W. R., Walter, D. O., & Hendrix, C. E. (1961). Computer Techniques in
Correlation and Spectral Analyses of Cerebral Slow Waves during Discriminative
Behavior. 524, 501–524.
Aloysius, N., & Geetha, M. (2018). A review on deep convolutional neural net-
works. Proceedings of the 2017 IEEE International Conference on
Communication and Signal Processing, ICCSP 2017, 2018-Janua (November
2020), 588–592. 10.1109/ICCSP.2017.8286426
Brasil-Neto, J. P., McShane, L. M., Fuhr, P., Hallett, M., & Cohen, L. G. (1992).
Topographic mapping of the human motor cortex with magnetic stimulation:
factors affecting accuracy and reproducibility. Electroencephalography and
Clinical Neurophysiology/ Evoked Potentials, 85(1), 9–16. 10.1016/0168-
5597(92)90095-S
By, S. (2018). Prediction of Asthma as Side Effect After Vaccination. May.
Carandini, M., & Heeger, D. J. (2012). Normalization as a canonical neural
computation. Nature Reviews Neuroscience, 13(1), 51–62. 10.1038/nrn3136
Farabet, C., Couprie, C., Najman, L., & LeCun, Y. (2013). Learning hierarchical features for scene labeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8), 1915–1929. 10.1109/TPAMI.2012.231
Cowan, N. (1998). Visual and auditory working memory capacity. Trends in Cognitive Sciences, 2(3), 77–78.
Fan, J., Xu, W., Wu, Y., & Gong, Y. (2010). Human tracking using convolutional
neural networks. IEEE Transactions on Neural Networks, 21(10), 1610–1623.
10.1109/TNN.2010.2066286
Gally, J. A., Montague, P. R., Reeke, G. N., & Edelman, G. M. (1990). The NO
hypothesis: Possible effects of a short-lived, rapidly diffusible signal in the
development and function of the nervous system. Proceedings of the National
Academy of Sciences of the United States of America, 87(9), 3547–3551.
10.1073/pnas.87.9.3547
Gerstner, W., & Naud, R. (2009). How good are neuron models? Science,
326(5951), 379–380. 10.1126/science.1181936
Gevins, A., Smith, M. E., McEvoy, L. K., Leong, H., & Le, J. (1999).
Electroencephalographic imaging of higher brain function. Philosophical
Transactions of the Royal Society B: Biological Sciences, 354(1387),
1125–1134. 10.1098/rstb.1999.0468
Goodisman, M. A. D., & Asmussen, M. A. (1997). Cytonuclear theory for haplodiploid species and X-linked genes. I. Hardy–Weinberg dynamics and continent–island, hybrid zone models. Genetics, 147(1), 321–338.
Herz, A. V. M., Gollisch, T., Machens, C. K., & Jaeger, D. (2006). Modeling single-
neuron dynamics and computations: A balance of detail and abstraction.
Science, 314(5796), 80–85. 10.1126/science.1127240
Hong, S., Agüera y Arcas, B., & Fairhall, A. L. (2007). Single neuron computation: From dynamical system to feature detector. Neural Computation, 19(12), 3133–3172.
Horwitz, B., Friston, K. J., & Taylor, J. G. (2000). Neural modeling and functional
brain imaging: An overview. Neural Networks, 13(8–9), 829–846. 10.1016/
S0893-6080(00)00062-9
Horwitz, B., Rumsey, J. M., & Donohue, B. C. (1998). Functional connectivity of
the angular gyrus in normal reading and dyslexia. Proceedings of the National
Academy of Sciences of the United States of America, 95(15), 8939–8944.
10.1073/pnas.95.15.8939
Horwitz, B., & Sporns, O. (1994). Neural modeling and functional neuroimaging.
Human Brain Mapping, 1(4), 269–283. 10.1002/hbm.460010405
Horwitz, B., Tagamets, M. A., & McIntosh, A. R. (1999). Neural modeling,
functional brain imaging, and cognition. Trends in Cognitive Sciences, 3(3),
91–98. 10.1016/S1364-6613(99)01282-6
Hinton, G. E. (1992). How neural networks learn from experience. Scientific American, 267(3), 144–151.
Jindra, R. H. (1976). Mass action in the nervous system. Neuroscience, 1(5), 423.
10.1016/0306-4522(76)90135-4
Katiyar, K. (2022). AI-Based Predictive Analytics for Patients’ Psychological
Disorder BT - Predictive Analytics of Psychological Disorders in Healthcare:
Data Analytics on Psychological Disorders (M. Mittal & L. M. Goyal (Eds.);
pp. 37–53). Springer Nature Singapore. 10.1007/978-981-19-1724-0_3
Katiyar, K., Kumari, P., & Srivastava, A. (2022). Interpretation of Biosignals and
Application in Healthcare. In M. Mittal & G. Battineni (Eds.), Information and
Communication Technology (ICT) Frameworks in Telehealth (pp. 209–229).
Springer International Publishing. 10.1007/978-3-031-05049-7_13
Kwong, K. K., Belliveau, J. W., Chesler, D. A., Goldberg, I. E., Weisskoff, R. M.,
Poncelet, B. P., Kennedy, D. N., Hoppel, B. E., Cohen, M. S., Turner, R.,
Cheng -, H. M., Brady, T. J., & Rosen, B. R. (1992). Dynamic magnetic
resonance imaging of human brain activity during primary sensory stimula-
tion. Proceedings of the National Academy of Sciences of the United States of
America, 89(12), 5675–5679. 10.1073/pnas.89.12.5675
Levitt, J. B., Kiper, D. C., & Movshon, J. A. (1994). Receptive fields and functional
architecture of macaque V2. Journal of Neurophysiology, 71(6), 2517–2542.
10.1152/jn.1994.71.6.2517
McIntosh, A. R., & Gonzalez-Lima, F. (1991). Structural modeling of functional
neural pathways mapped with 2-deoxyglucose: effects of acoustic startle
habituation on the auditory system. Brain Research, 547(2), 295–302. 10.1016/
0006-8993(91)90974-Z
Motion, O. F. (1990). Differential Equations of Motion. March, 66.
Nithin, D. K., & Sivakumar, P. B. (2015). Generic Feature Learning in Computer
Vision. Procedia Computer Science, 58, 202–209. 10.1016/j.procs.2015.
08.054
Ogawa, S., Menon, R. S., Tank, D. W., Kim, S. G., Merkle, H., Ellermann, J. M., &
Ugurbil, K. (1993). Functional brain mapping by blood oxygenation level-
dependent contrast magnetic resonance imaging. A comparison of signal
characteristics with a biophysical model. Biophysical Journal, 64(3),
803–812. 10.1016/S0006-3495(93)81441-3
Kety, S. S., & Schmidt, C. F. (1948). The nitrous oxide method for the quantitative determination of cerebral blood flow in man: Theory, procedure and normal values. Journal of Clinical Investigation, 27(4), 476–483.
Pineda, F. J. (1988). Dynamics and architecture for neural computation. Journal of
Complexity, 4(3), 216–245. 10.1016/0885-064X(88)90021-0
Pineda, F. J. (1989). Recurrent backpropagation and the dynamical approach to adaptive neural computation. Neural Computation, 1(2), 161–172.
Poli, R., Cagnoni, S., Coppini, G., & Valli, G. (1991). A Neural Network Expert
System for Diagnosing and Treating Hypertension. Computer, 24(3), 64–71.
10.1109/2.73514
Prasetyoputri, A. (2021). Detection of Bacterial Coinfection in COVID-19 Patients
Is a Missing Piece of the Puzzle in the COVID-19 Management in Indonesia.
ACS Infectious Diseases, 7(2), 203–205. 10.1021/acsinfecdis.1c00006
Reggia, J. A. (1993). Neural computation in medicine. Artificial Intelligence in Medicine, 5(2), 143–157.
Rodney, B. Y., Douglas, J., & Martin, K. A. C. (1991). A functional microcircuit for cat visual cortex. Journal of Physiology, 440, 735–769.
Saito, B., Nakashima, H., Abe, M., Murai, S., Baba, Y., Arai, N., Kawaguchi, Y.,
Fujiwara, S., Kabasawa, N., Tsukamoto, H., Uto, Y., Ariizumi, H.,
Yanagisawa, K., Hattori, N., Harada, H., & Nakamaki, T. (2018). Efficacy of
3.1 INTRODUCTION
Unexpectedly far back in the timeline of computers lies the origin of biologically inspired algorithms. In 1943, McCulloch and Pitts established the concept of an “integrate and fire” neuron, and in the late 1940s Hebb was the first to put forward the notion that coactive brain cells strengthen their connections: “what fires together, wires together” (Graben and Wright 2011). In contrast, the transistor was not created until 1947, usable integrated circuits did not arise until the late 1950s, “minicomputers” and mainframes did not become widely used until the late 1960s, and personal computer systems did not come into being until the late 1980s and 1990s. That the proposal preceded practical application by such a long time is evidence of these early pioneers’ vision (Cox and Dean 2014b). Rosenblatt’s “perceptron,” which presented a straightforward configuration of neurons with inputs and outputs that could make judgments based on input vectors, was one of the first examples of a neural network that can learn. Since the original perceptron could only learn linear functions of its inputs, it was found to be fundamentally limited. As a result, neural network research temporarily lost ground to the opposing “symbolic artificial intelligence” faction, which sought to mimic intelligence through processes that used abstract representations rather than deriving inspiration
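A minimal sketch of Rosenblatt's perceptron rule on a linearly separable toy problem (logical AND); it converges here precisely because the target is a linear function of the inputs, which is the limitation noted above. Data, learning rate, and epoch count are illustrative.

```python
import numpy as np

# Perceptron: y = step(w . x + b); weights are nudged only on mistakes.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])                # logical AND: linearly separable

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(20):
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)
        w += lr * (target - pred) * xi    # update only when the judgment is wrong
        b += lr * (target - pred)

print([int(w @ xi + b > 0) for xi in X])  # [0, 0, 0, 1]
```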
and late in childhood they reach adult levels. The temporal course of brain
processing has been extensively documented by ERP investigations, which
are also sensitive to millisecond changes (S. Katiyar and Katiyar 2021). To
comprehend the underlying cognitive processes, one uses the sequence of
recorded potentials, together with their magnitude and duration.
3.5 NEUROMYTHS
captured by these systems, which require fewer weights in training and perform effectively even if no weights are learned (Cox and Dean 2014a). Numerous medical applications and diagnoses employ neural networks (Abdel-Nasser Sharkawy 2020).
neural networks have been used since the 1990s. They are intended to comprehend time-varying or sequential patterns. A recurrent net is a kind of neural network that includes feedback (closed-loop) connections. A few examples of RNNs include the Hopfield network, the Boltzmann machine, BAM, and so forth. RNN strategies have been applied to a wide scope of problems. Basic to some degree, RNNs were created in the 1980s with the intent of learning strings of characters. RNNs have additionally addressed problems that involve dynamical systems with time sequences of events. The two primary variants of RNN, also called “simple” RNNs, are the Elman and the Jordan RNN models. Both the Elman and Jordan neural networks comprise a delay (context) layer along with an input, hidden, and output layer. The delay neurons of an Elman neural network are fed from the hidden layer, while the delay neurons of a Jordan network are fed from the output layer (Abdel-Nasser Sharkawy 2020). Figure 3.3 below depicts the different types of RNNs.
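A minimal numpy sketch of one Elman-style recurrent step, in which the context (delay) layer is fed from the hidden layer; a Jordan network would feed it from the output instead. Sizes and weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hid, n_out = 3, 5, 2
W_xh = rng.normal(scale=0.5, size=(n_hid, n_in))   # input -> hidden
W_hh = rng.normal(scale=0.5, size=(n_hid, n_hid))  # context (previous hidden) -> hidden
W_hy = rng.normal(scale=0.5, size=(n_out, n_hid))  # hidden -> output

h = np.zeros(n_hid)                       # context layer starts empty
for x in rng.normal(size=(4, n_in)):      # a short input sequence
    h = np.tanh(W_xh @ x + W_hh @ h)      # Elman: feedback comes from the hidden layer
    y = W_hy @ h                          # (a Jordan net would feed y back instead)
    print(y.round(3))
```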
Rapid progress of technology over the past few years has resulted in substantial changes in workplace employment practices due to greater computing capacity, the massive growth of technical data, and major algorithmic advancements. Previous technological revolutions were primarily brought about by improvements in widely used technologies, such as steam power, electricity, and computer-based technologies. However, the current industrial revolution involves a paradigm shift across all fields of study, industries, and economies, raising significant political and philosophical issues in the process (Last 2017).
Artificial intelligence (AI) algorithms, for instance, are “trained” on huge volumes of historical data and reflect the ideals and preconceptions of their creators as well as their programmers (Montes and Goertzel 2019).
With the algorithms expanding globally, they amplify prejudices and
perpetuate stereotypes against the most marginalized people. It becomes
essential to include ethical and human values in technologies. In fact, the
separation between technological and social advancement is no longer
viable in a setting of widening wealth disparities among workers. New
paradigms and stories must be developed immediately for inclusive pros-
perity (Last 2017). The term “neurodiversity” in psychology refers to the amalgamation of benefits and drawbacks brought on by a person’s unique brain makeup. These variations include conditions such as attention deficit hyperactivity disorder (ADHD), autism spectrum disorder, dyslexia, and dyspraxia, among others. As autistic workers are presently in the
spotlight for the majority of programs pertaining to employment that favor
neurodiversity, the word “neurodiversity” is used in this research to more
precisely refer to those individuals. Autism is a neurological psychiatric
illness that lasts a lifetime and affects people differently in terms of per-
ception and cognition. The neurodiversity paradigm takes into account the
intangible distinctions between individual brains and intelligences, while
the majority of literature based on diversity, incorporation, or inequality
highlights overt characteristics like gender, age, or race.
This study offers a novel conceptual framework for examining the dif-
ferent connections between neurodiversity and the digital transition, which
researchers view as a complementary pair. Despite their close connections,
neurodiversity management and the digital transition have not yet been
thoroughly researched.
time, the value of that particular function escalates higher than the sum-
mation of the changes that are brought about by the escalation in every
variable when considered separately. Also, this strategy is very acceptable.
Economic acceptance takes the place of technological determinism. The
coordination of technical and organizational decisions is motivated by the
pursuit of performance (Lambrecht and Tucker 2019). Reversing the lens to examine the influence of human acceptance in the technology framework is another method for highlighting biases in technologies. Instead of focusing
on how technologies affect the workforce, let’s examine how the makeup of
the workforce affects technologies. Biases in the values and traits contained
in technologies are created by technology designers. While the obvious
gender imbalance in the IT sector is receiving a lot of attention, algorithmic
biases related to impairments also raise additional concerns about social
norms and stereotypes as well as job impediments. Initiatives to promote
neurodiversity address the issue of preconceptions in the workplace while
aiming for a more inclusive hiring process (Lambrecht and Tucker 2019).
70 and 130. Individuals in the tails of the distribution differ from the mean
or median because they deviate statistically from it. Each person is clearly
defined by a variety of cognitive, emotional, and perceptual characteristics.
But for the majority of people, the differences between two people pale in
comparison to what they have in common. These differences are greater for
autistic people, which begs the question of what these individuals specifically bring to the organization.
Both at the macro- and microeconomic levels, neurodiversity practices are
acknowledged to have good social and inclusionary effects (Krzeminska et al.
2019). Apart from reputational advantages, transnational as well as major companies across various industries have pioneered a change in how they manage their staff and have created programs to utilize hitherto untapped autistic abilities. As a result, these organizations are actually acquiring a competitive advantage because of these creative projects (Austin
and Pisano 2017). More businesses of all sizes and in a variety of industries
are adopting neurodiversity hiring policies (Austin and Sonne 2014).
At the level of the individual, characteristics defining exceptional autistic
employees involve their capacity for concentration, pattern recognition,
ability to accomplish monotonous tasks, remarkable attention to detail, and
participation. This may be the reason why many businesses that specialize
in the placement of autistic individuals provide software-testing positions or
coding jobs. According to a study, certain autistic people may be well suited
for these positions since they have good rule-based system-building skills
and a low mistake probability on activities requiring close attention to
detail. The American Psychiatric Association (2013) noted that recurring, pattern-focused hobbies or pursuits may also be a strength for autistic professionals who complete repetitive tasks in their field of interest. However, digitali-
zation could have a negative effect on these jobs due to the automation of
repetitive operations and the replacement of skilled jobs based on rules by
artificial intelligence. This observation calls into question whether autistic
workers’ skills can be replaced by machinery. In fact, repetitive tasks and/or
those requiring pattern recognition may be more susceptible to automation
(substitution).
The establishment of an organization that accepts employees on the
spectrum of autism is necessary for their employment and engagement at
the organizational level, creating a workforce of neurodiverse employees
(S. Markel & Elia, 2016). In a study, a variety of concerns and strategies
are highlighted that have been used in various national, organizational,
and institutional settings to promote the employment of autistic em-
ployees. According to Austin and Pisano (2017), there are seven essential
steps in the process of identifying neurodiversity in the worksite: the
creation of non-stereotypical hiring practices; training of employees and
employers; customization of the supervisory environment; specification of
techniques for career management; guidance and counselling from experts
having deep knowledge of neurodiversity in order to gain a clear
3.11.4 Methodology
This exploratory study seeks to determine the linkage between digital
transition and neurodiversity management. The research was shaped by a
phenomenological approach. A qualitative research method known as
phenomenology describes participants’ actual experiences in order to better
comprehend their nature or significance. To accomplish the goals of the
study, researchers employed a purposive sample strategy in accordance with
the phenomenological approach. To find suitable interview candidates,
researchers looked at two factors: leadership or competence in neurodi-
versity efforts, and solid IT industry knowledge (Walkowiak 2021).
In order to gather information with a suitable level of complexity and
diversity, sixteen candidates were interviewed in 2018 and 2019. As per
the participant expertise and contact intensity, this number in phenome-
nology typically ranges from 5 to 30 people. Sixteen participants offered the
amount of saturation that was seeked for, given the research’s targeting of
specialists, and the nascent approach of workplace initiatives in neurodi-
versity. Participants are listed in Table 3.1: Participants.
Three sets of questions served as the framework for the semi-structured interviews. Identifying autistic workers' skills and how they relate to the digital transition was the goal of the first set of questions. The second series of queries centered on highlighting advantages as well as disadvantages pertaining to autistic employees related to innovation, artificial intelligence, and the shortage of
3.11.5 Result
While discussing a range of interests and abilities they saw in a neurodiverse
workforce, participants also explicitly referred to noteworthy “performa-
tive” qualities that were connected with the analyzed output of an employee
when he or she was carrying out an IT-oriented task. One of these performative qualities was resilience, together with a different mode of innovative thinking and problem solving that promotes creativity in the workplace and aids the digital transformation.
REFERENCES
Ameri, Mason, Lisa Schur, Meera Adya, F. Scott Bentley, Patrick McKay, and
Douglas Kruse. 2018. “The Disability Employment Puzzle: A Field Experi-
ment on Employer Hiring Behavior.” ILR Review 71 (2): 329–364. 10.1177/
0019793917717474.
Austin, Robert D., and Gary P. Pisano. 2017. “Neurodiversity as a Competitive
Advantage.” Harvard Business Review 2017 (May-June): 9.
Austin, Robert D., and Thorkil Sonne. 2014. “The Dandelion Principle: Redesigning
Work for the Innovation Economy.” MIT Sloan Management Review 55 (4):
67–72.
Barak, Omri. 2017. “Recurrent Neural Networks as Versatile Tools of Neuro-
science Research.” Current Opinion in Neurobiology 46: 1–6. 10.1016/
j.conb.2017.06.003.
Barak, Omri, David Sussillo, Ranulfo Romo, Misha Tsodyks, and L. F. Abbott. 2013.
“From Fixed Points to Chaos: Three Models of Delayed Discrimination.”
Progress in Neurobiology 103: 214–222. 10.1016/j.pneurobio.2013.02.002.
Berns, Gregory S., Jonathan D. Cohen, and Mark A. Mintun. 1997. “Brain Regions Responsive to Novelty in the Absence of Awareness.” Science 276 (5316): 1272–1275.
Berrar, Daniel, Naoyuki Sato, and Alfons Schuster. 2010. “Artificial Intelligence in
Neuroscience and Systems Biology: Lessons Learnt, Open Problems, and the
Road Ahead.” Advances in Artificial Intelligence 2010: 1–2. 10.1155/2010/
578309.
Camiña, Ester, Ángel Díaz-Chao, and Joan Torrent-Sellens. 2020. “Automation
Technologies: Long-Term Effects for Spanish Industrial Firms.” Technological
Forecasting and Social Change 151 (November 2019): 119828. 10.1016/
j.techfore.2019.119828.
Caspi, Avshalom, Joseph McClay, Terrie E. Moffitt, Jonathan Mill, Judy Martin,
Ian W. Craig, Alan Taylor, and Richie Poulton. 2002. “Role of Genotype in
the Cycle of Violence in Maltreated Children.” Science 297 (5582): 851–854.
10.1126/science.1072290.
Cox, David Daniel, and Thomas Dean. 2014a. “Neural Networks and Neuroscience-
Inspired Computer Vision.” Current Biology 24 (18): R921–R929. 10.1016/
j.cub.2014.08.026.
Cox, David Daniel, and Thomas Dean. 2014b. “Neural Networks and Neuroscience-
Inspired Computer Vision.” Current Biology 24 (18): R921–R929. 10.1016/
j.cub.2014.08.026.
Dwyer, Patrick. 2022. “The Neurodiversity Approach(Es): What Are They and
What Do They Mean for Researchers?” Human Development 66 (2): 73–92.
10.1159/000523723.
Enel, Pierre, Emmanuel Procyk, René Quilodran, and Peter Ford Dominey. 2016.
“Reservoir Computing Properties of Neural Dynamics in Prefrontal Cortex.”
PLoS Computational Biology 12 (6): 1–35. 10.1371/journal.pcbi.1004967.
“Fundamentals of Machine Learning and Softcomputing.” 2006. In Neural Networks in a Softcomputing Framework, 27–56. 10.1007/1-84628-303-5_2.
Gao, Peiran, and Surya Ganguli. 2015. “On Simplicity and Complexity in the Brave
New World of Large-Scale Neuroscience.” Current Opinion in Neurobiology
32: 148–155. 10.1016/j.conb.2015.04.003.
Geake, John, and Paul Cooper. 2003. “Cognitive Neuroscience: Implications
for Education?” Westminster Studies in Education 26 (1): 7–20. 10.1080/
0140672030260102.
Graben, Peter beim, and James Wright. 2011. “From McCulloch-Pitts Neurons
Toward Biology.” Bulletin of Mathematical Biology 73 (2): 261–265. 10.1007/
s11538-011-9629-5.
Hopfield, J. J. 1982. “Neural Networks and Physical Systems with Emergent
Collective Computational Abilities.” Proceedings of the National Academy
of Sciences of the United States of America 79 (8): 2554–2558. 10.1073/
pnas.79.8.2554.
Houting, Jacquiline den. 2019. “Neurodiversity: An Insider’s Perspective.” Autism
23 (2): 271–273. 10.1177/1362361318820762.
Hyvärinen, Aapo. 2010. “Statistical Models of Natural Images and Cortical Visual
Representation.” Topics in Cognitive Science 2 (2): 251–264. 10.1111/j.1756-
8765.2009.01057.x.
Jaeger, Herbert, and Harald Haas. 2004. “Harnessing Nonlinearity: Predicting
Chaotic Systems and Saving Energy in Wireless Communication.” Science 304
(5667): 78–80. 10.1126/science.1091277.
Kapp, Steven K. 2020. Conclusion - Autistic Community and the Neurodiversity
Movement: Stories from the Frontline. Autistic Community and the Neuro-
diversity Movement: Stories from the Frontline. 10.1007/978-981-13-8437-
0_22.
Katiyar, Kalpana. 2022. “AI-Based Predictive Analytics for Patients’ Psychological
Disorder.” In Predictive Analytics of Psychological Disorders in Healthcare:
Data Analytics on Psychological Disorders, 37–53. Springer.
Katiyar, Kalpana, Pooja Kumari, and Aditya Srivastava. 2022. “Interpretation of
Biosignals and Application in Healthcare.” In Information and Communication
Technology (ICT) Frameworks in Telehealth, 209–229. Springer.
Katiyar, Sarthak, and Kalpana Katiyar. 2021. “Recent Trends Towards Cognitive
Science: From Robots to Humanoids.” In Cognitive Computing for Human-
Robot Interaction, 19–49. Elsevier.
Krzeminska, Anna, Robert D. Austin, Susanne M. Bruyère, and Darren Hedley.
2019. “The Advantages and Challenges of Neurodiversity Employment in
Organizations.” Journal of Management and Organization 25 (4): 453–463.
10.1017/jmo.2019.58.
Lambrecht, Anja, and Catherine Tucker. 2019. “Algorithmic Bias? An Empirical
Study of Apparent Gender-Based Discrimination in the Display of Stem Career
Ads.” Management Science 65 (7): 2966–2981. 10.1287/mnsc.2018.3093.
Last, Cadell. 2017. “Global Commons in the Global Brain.” Technological
Forecasting and Social Change 114 (2016): 48–64. 10.1016/j.techfore.2016.
06.013.
Machens, Christian K., Ranulfo Romo, and Carlos D. Brody. 2005. “Flexible Control
of Mutual Inhibition: A Neural Model of Two-Interval Discrimination.” Science
307 (5712): 1121–1124. 10.1126/science.1104171.
Mante, Valerio, David Sussillo, Krishna V. Shenoy, and William T. Newsome. 2013.
“Context-Dependent Computation by Recurrent Dynamics in Prefrontal
Cortex.” Nature 503 (7474): 78–84. 10.1038/nature12742.
Material, Supplementary. 2002. “Caspi_2002_MAOA_AggressionMsat_SIMethods,”
1–7. papers2://publication/uuid/D97CED7A-D777-4E60-A883-A49E4CE1B625.
McCalpin, John D. 1995. “Memory Bandwidth and Machine Balance in Current
High Performance Computers.” IEEE Computer Society Technical Committee
on Computer Architecture (TCCA) Newsletter, no. May: 19–25.
66 Computational Techniques in Neuroscience
McCoy, Liam G., Connor Brenna, Felipe Morgado, Stacy Chen, and Sunit Das. 2020.
“Neuroethics, Neuroscience, and the Project of Human Self-Understanding.”
AJOB Neuroscience 11 (3): 207–209. 10.1080/21507740.2020.1778127.
Montes, Gabriel Axel, and Ben Goertzel. 2019. “Distributed, Decentralized, and
Democratized Artificial Intelligence.” Technological Forecasting and Social
Change 141 (February): 354–358. 10.1016/j.techfore.2018.11.010.
Ogden, Thomas E., and Robert F. Miller. 1966. “Studies of the Optic Nerve of
the Rhesus Monkey: Nerve Fiber Spectrum and Physiological Properties.”
Vision Research 6 (10). 10.1016/0042-6989(66)90108-8.
Parra, Miguel, George Hruby, and George Hruby. n.d. “Neuroscience and Education
Related Papers.”
Rivkind, Alexander, and Omri Barak. 2017. “Local Dynamics in Trained Recurrent
Neural Networks.” Physical Review Letters 118 (25): 1–5. 10.1103/PhysRevLett.
118.258101.
S. Markel, Karen, and Brittany Elia. 2016. “How Human Resource Management
Can Best Support Employees with Autism: Future Directions for Research and
Practice.” Journal of Business and Management 22 (1): 71–86.
Sharkawy, Abdel-Nasser, Panagiotis N Koustoumpardis, and Nikos Aspragathos.
2020. “A Neural Network-Based Approach for Variable Admittance Control
in Human-Robot Cooperation: Online Adjustment of the Virtual Inertia.”
Intelligent Service Robotics 13 (4): 495–519. 10.1007/s11370-020-00337-4.
Srivastava, Aditya, and Shashank Jha. 2022. “Data-Driven Machine Learning:
A New Approach to Process and Utilize Biomedical Data.” In Predictive
Modeling in Biomedical Data Mining and Analysis, 225–252. Elsevier.
Srivastava, Aditya, Aparna Seth, and Kalpna Katiyar. 2021. “Microrobots and
Nanorobots in the Refinement of Modern Healthcare Practices.” In Robotic
Technologies in Biomedical and Healthcare Engineering, 13–37. CRC Press.
Sussillo, David. 2014. “Neural Circuits as Computational Dynamical Systems.”
Current Opinion in Neurobiology 25: 156–163. 10.1016/j.conb.2014.01.008.
Szegedy, Christian, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan,
Ian Goodfellow, and Rob Fergus. 2014. “Intriguing Properties of Neural
Networks.” 2nd International Conference on Learning Representations,
ICLR 2014 - Conference Track Proceedings, 1–10.
Thibault, Jules, and Bernard P.A. Grandjean. 1991. “A Neural Network
Methodology for Heat Transfer Data Analysis.” International Journal of Heat
and Mass Transfer 34 (8): 2063–2070. 10.1016/0017-9310(91)90217-3.
Walkowiak, Emmanuelle. 2021. “Neurodiversity of the Workforce and Digital
Transformation: The Case of Inclusion of Autistic Workers at the Workplace.”
Technological Forecasting and Social Change 168 (April): 120739. 10.1016/
j.techfore.2021.120739.
Wang, Hongming, Ryszard Czerminski, and Andrew C. Jamieson. 2021. “Neural
Networks and Deep Learning.” The Machine Age of Customer Insight, 91–101.
10.1108/978-1-83909-694-520211010.
White, B. W., and Frank Rosenblatt. 1963. “Principles of Neurodynamics: Perceptrons
and the Theory of Brain Mechanisms.” The American Journal of Psychology
76 (4): 705. 10.2307/1419730.
Chapter 4
4.1 INTRODUCTION
Neuroscience is the integrative science dealing with the study of the nervous
system (both central and peripheral nervous system) as well as its functions
and underlying diseases. The brain is the vital organ of the central nervous
system controlling all the other activities in the body. Computational
neuroscience is the study of the mechanism of brain functioning by various
tools and techniques using computer science (Sejnowski et al., 1988).
Our complex brain consists of fundamental units called neurons, which
transmit electrical signals. On average, there are 90 billion neurons in
the human brain (Goriely et al., 2015). These neurons carry electrochemical
signals, passing ionic currents across the synapse. The coordinated
electrical activity of populations of neurons produces repeated rhythmic
alterations across brain regions, termed brain waves (Buskila, 2019)
(Figure 4.1).
The five brain waves recognized to date are gamma (γ), beta (β), alpha (α),
theta (θ), and delta (δ) waves. Each possesses a different frequency range:
for example, gamma waves have the highest frequencies (>30 Hz), whereas
delta waves have the lowest (0–4 Hz). Each is associated with a different
state of mind (Abhang et al., 2016). These brain waves are measured by
electroencephalography (EEG), a non-invasive technique that records the
waves through electrodes placed on the scalp (Teplan, 2002).
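Because each band occupies a distinct frequency range, the relative strength of the bands can be estimated directly from a recorded signal. The following minimal sketch uses SciPy's Welch power spectral density estimate; the 250 Hz sampling rate, the exact band edges as coded, and the synthetic alpha-dominated signal are illustrative assumptions, not values prescribed in this chapter.

import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

fs = 250.0  # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
# Toy signal: a 10 Hz (alpha-band) oscillation buried in noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
total_power = np.trapz(psd, freqs)
for name, (lo, hi) in BANDS.items():
    mask = (freqs >= lo) & (freqs < hi)
    band_power = np.trapz(psd[mask], freqs[mask])
    # Relative power: fraction of total spectral power in this band.
    print(f"{name}: {band_power / total_power:.2f}")

Running this on the toy signal shows most of the relative power falling in the alpha band, as expected for a 10 Hz oscillation.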
Brain structure and function can also be analyzed by studying images of the
brain, a process known as neuroimaging, which is broadly classified into
structural and functional techniques. The functional techniques, such as
fMRI, PET, MEG, and NIR, are covered in this chapter (Noggle & Davis, 2021).
Numerous non-invasive neuroimaging methods have had an enormous impact on
our knowledge of brain functioning and neurological disorders and have
provided novel insight into the treatment of these diseases (Supek & Aine,
2016).
Figure 4.1 The different lobes of the brain viz. frontal, parietal, occipital and temporal
lobe, that possess diverse functions like controlling emotions, processing
sensory data and visual information, and conducting auditory functions,
respectively.
( https://imotions.com/wp-content/uploads/2022/10/brain-lobes-iMotions.png).
a. Gamma (γ) Brain Waves – The gamma waves are the fastest, with frequencies
above 30 Hz (i.e., more than 30 cycles per second). They dominate while the
brain is engaged in analytical problem solving (Jeong et al., 2011), deep
learning, and creative processing of language (Ismail et al., 2016). These
waves emerge from the thalamic region of the brain and serve to adjust the
activity and coordination of neurons. People with injury to the thalamus may
lose consciousness and cognition and, as a result, slip into a coma (Desai
et al., 2015). Gamma waves must be balanced for correct perception,
cognitive focus, inspired learning, and processing of information. Lowered
levels result in impaired learning, depression, and ADHD. On the other
hand, increased
Figure 4.2 A non-invasive process of electroencephalography (EEG) for the analysis and
computation of brain waves.
( https://assets.nhs.uk/nhsuk-cms/images/E5RF5X.width-1534.jpg).
Beta waves produce an alert mind, which in turn results in effective
functioning (Ashtaputre-Sisode, 2016).
c. Alpha (α) Brain Waves – Alpha waves are also called Berger's waves, as
they were first described by Hans Berger in the 1930s (Stinson & Arthur,
2013). Alpha waves interconnect conscious thinking with the subconscious
mind. Their frequency lies between 8 and 13 Hz (i.e., 8–13 cycles per
second). They promote a feeling of relaxation and calm the body (Koudelková
et al., 2018). They are initiated inside the cortex, near the thalamus and
the occipital lobe (Desai et al., 2015). Alpha waves govern states of light
meditation or daydreaming (Jeong et al., 2011), when the mind is relaxed and
thoughts are passing over. A stable amount of alpha waves is necessary: too
few result in insomnia, stress, and obsessive-compulsive disorder, whereas
too many lead to reduced concentration and a completely relaxed mind
(Dudeja, 2017). These waves intensify learning and the physical as well as
mental health of a person, and they aid in the therapy of depression,
anxiety, and sleep dysfunction by introducing deep relaxation along with
mindfulness (Stinson & Arthur, 2013). In older adults these waves improve
the identification of words and enable precise memory performance, and the
person behaves in a calm and composed manner (Desai et al., 2015). One
study revealed that persons suffering from anxiety disorder had an
increased frequency of alpha waves over the frontal lobe (both sides) as
well as the anterior parietal lobe; anxiety disorders are basically related
to dysfunction of the forebrain (Cho et al., 2011) (Figure 4.3).
d. Theta (θ) Brain Waves – The frequency of theta waves ranges between 4
and 8 Hz (i.e., 4 to 8 cycles per second). They are dominant during normal
sleep, while meditating (Jeong et al., 2011), and during states of fatigue
(Ashtaputre-Sisode, 2016). Walter and Dovey first described theta waves in
1944 in cases associated with tumors in the sub-cortex (Schacter, 1977).
They are linked to profound meditation, relaxation, and enhanced memory.
During light sleep or conscious daytime dreaming, they become active and
correlate with relieving stress (Ismail et al., 2016). These waves, also
known as the theta rhythm, emerge as a repeated function becomes
self-governed. They originate in the region of the cortex as well as the
hippocampus, and one study suggested that theta waves are associated with
constructing memories via action within the hippocampus (Desai et al.,
2015). While we are dreaming or in a state of intuition, unconsciousness,
or imagination, these waves govern the mind. An excess of theta waves is
associated with disorders such as ADHD, distractibility, impulsiveness,
depression, and hyperactivity, whereas a deficit causes stress,
anxiousness, and poor emotional balance. Adequate balance in waves
Figure 4.3 The different types of brain waves associated with their dominant states.
( https://ars.els-cdn.com/content/image/3-s2.0-B9780128044902000026-f02-01-9780128044902.jpg).
4.3 NEUROIMAGING
The neural activities of the brain result in various metabolic actions,
such as a rise in oxygen supply and increased blood flow, and these
activities are detected using techniques such as BOLD, perfusion, and
contrast fMRI. Among these, the BOLD (blood oxygenation level dependent)
fMRI method is the most widely used (Gui et al., 2010). It maps
neurological activity by determining the variation in blood flow in the
brain and the corresponding change in signal strength under various
cognitive conditions during the imaging process (Matthews & Jezzard, 2004).
A limitation is that brain activity is not detected directly: the method
measures the variation in blood oxygenation level rather than neuronal
activity itself (Chen et al., 2020).

Figure 4.5 An fMRI scan of the human brain depicting affected regions in a
person suffering from severe traumatic brain injuries.
( https://imotions.com/blog/learning/research-fundamentals/eeg-vs-mri-vs-fmri-differences/ https://imotions.com/wp-content/uploads/2022/10/fMRI-explained.jpg).
Figure 4.6 Different metal electrodes incorporated into the cap that is placed on the scalp
to record the brain oscillations with contrasting frequencies.
( https://info.tmsi.com/blog/types-of-eeg-electrodes).
First, positron-emitting radionuclides (PET radioisotopes) are generated
and incorporated into molecules to produce radiotracers. The radiotracers
are administered to patients and travel to the intended organs, where the
emitted positrons unite with electrons, producing photons that are captured
by the camera and recorded via detectors. Images with temporal and spatial
resolution are then generated and examined for the diagnosis as well as
treatment of diseases (Zimmer, 2009). Assessment of patients suffering from
brain stroke is complicated, as a distinct region of the brain must be
examined for permanent or partial damage, but PET imaging has made it quite
simple to picture the affected area so that suitable therapy can be
conducted. The same is true for a person with epilepsy, where this
Figure 4.8 An amyloid PET scan (both positive and negative) depicting
deposition of proteins such as tau and amyloid that form plaques in the
brain of a person suffering from Alzheimer's disease.
( https://radiology.ucsf.edu/patient-care/services/specialty-imaging/alzheimer).
However, the functional activity of the brain or any tissue can influence
its optical characteristics. In the same way, the human brain responds to
external stimuli with certain physiological variations, such as
fluctuations in blood levels along with electrical activity, resulting in
changes in optical characteristics, too. The oxygenated haemoglobin, as
well as the deoxyhaemoglobin (oxy-Hb and deoxy-Hb)
Figure 4.9 NIRS imaging with head cap containing sensors, light detectors, and light
source.
( https://www.pnas.org/doi/10.1073/pnas.2208729119).
4.4 CONCLUSION
REFERENCES
Chen, W. L., Wagner, J., Heugel, N., Sugar, J., Lee, Y. W., Conant, L., … &
Whelan, H. T. (2020). Functional near-infrared spectroscopy and its clinical
application in the field of neuroscience: advances and future directions.
Frontiers in Neuroscience, 14, 724. 10.3389/fnins.2020.00724
Cho, J. H., Lee, H. K., Dong, K. R., Kim, H. J., Kim, Y. S., Cho, M. S., &
Chung, W. K. (2011). A study of alpha brain wave characteristics from
MRI scanning in patients with anxiety disorder. Journal of the Korean
Physical Society, 59(4), 2861–2868. https://www.researchgate.net/profile/
Woon-Kwan-Chung/publication/270110634_A_Study_of_Alpha_Brain_
Wave_Characteristics_from_MRI_Scanning_in_Patients_with_Anxiety_
Disorder/links/570df7f808aed31341cf87f0/A-Study-of-Alpha-Brain-Wave-
Characteristics-from-MRI-Scanning-in-Patients-with-Anxiety-Disorder.pdf
Desai, R., Tailor, A., & Bhatt, T. (2015). Effects of yoga on brain waves and
structural activation: A review. Complementary therapies in clinical practice,
21(2), 112–118. 10.1016/j.ctcp.2015.02.002
Drevets, W. C. (2000). Neuroimaging studies of mood disorders. Biological
Psychiatry, 48(8), 813–829. 10.1016/S0006-3223(00)01020-9
Dudeja, J. P. (2017). Scientific analysis of mantra-based meditation and its bene-
ficial effects: An overview. International Journal of Advanced Scientific
Technologies in Engineering and Management Sciences, 3(6), 21–26. https://
www.researchgate.net/profile/Jai-Dudeja/publication/318395933_Scientific_
Analysis_of_Mantra-Based_Meditation_and_its_Beneficial_Effects_An_
Overview/links/5baa003192851ca9ed23aabd/Scientific-Analysis-of-Mantra-
Based-Meditation-and-its-Beneficial-Effects-An-Overview.pdf
Ehlis, A. C., Schneider, S., Dresler, T., & Fallgatter, A. J. (2014). Application of
functional near-infrared spectroscopy in psychiatry. Neuroimage, 85, 478–488.
10.1016/j.neuroimage.2013.03.067
Glover, G. H. (2011). Overview of functional magnetic resonance imaging.
Neurosurgery Clinics, 22(2), 133–139. 10.1016/j.nec.2010.11.001
Goriely, A., Budday, S., & Kuhl, E. (2015). Neuromechanics: from neurons
to brain. Advances in Applied Mechanics, 48, 79–139. 10.1016/bs.aams.
2015.10.002
Gui, X., Chen, C., Lu, Z. L., & Dong, Q. (2010). Brain imaging techniques
and their applications in decision-making research. Acta Psychologica
Sinica, 42(1), 120. 10.3724/SP.J.1041.2010.00120
Irani, F., Platek, S. M., Bunce, S., Ruocco, A. C., & Chute, D. (2007).
Functional near infrared spectroscopy (fNIRS): an emerging neuroimaging
technology with important applications for the study of brain disorders.
The Clinical Neuropsychologist, 21(1), 9–37. 10.1080/13854040600910018
Ismail, W. W., Hanif, M., Mohamed, S. B., Hamzah, N., & Rizman, Z. I. (2016).
Human emotion detection via brain waves study by using electroencephalogram
(EEG). International Journal on Advanced Science, Engineering and Infor-
mation Technology, 6(6), 1005–1011. https://www.researchgate.net/profile/
Wan-Woaswi/publication/329584955_Human_Emotion_Detection_Via_
Brain_Waves_Study_by_Using_Electroencephalogram_EEG/links/5c10b7a645
85157ac1bbb648/Human-Emotion-Detection-Via-Brain-Waves-Study-by-
Using-Electroencephalogram-EEG.pdf
Izzetoglu, M., Izzetoglu, K., Bunce, S., Ayaz, H., Devaraj, A., Onaral, B., &
Pourrezaei, K. (2005). Functional near-infrared neuroimaging. IEEE Trans-
actions on Neural Systems and Rehabilitation Engineering, 13(2), 153–159.
10.1109/TNSRE.2005.847377
Jebelli, H., Hwang, S., & Lee, S. (2018). EEG signal-processing framework to obtain
high-quality brain waves from an off-the-shelf wearable EEG device. J. Comput.
Civ. Eng, 32(1), 04017070. https://www.researchgate.net/profile/Houtan-
Jebelli/publication/319550134_An_EEG_Signal_Processing_Framework_to_
Obtain_High-Quality_Brain_Waves_from_an_Off-the-Shelf_Wearable_EEG_
Device/links/5c85a1cd92851c69506b1fa8/An-EEG-Signal-Processing-
Framework-to-Obtain-High-Quality-Brain-Waves-from-an-Off-the-Shelf-
Wearable-EEG-Device.pdf
Jeong, E. G., Moon, B., & Lee, Y. H. (2011). A platform for real time brain-
waves analysis system. In International Conference on Grid and Distributed
Computing (pp. 431–437). Springer, Berlin, Heidelberg. 10.1007/978-3-
642-27180-9_53
Jia, X., & Kohn, A. (2011). Gamma rhythms in the brain. PLoS Biology, 9(4),
e1001045. 10.1371/journal.pbio.1001045
Koudelková, Z., Strmiska, M., & Jašek, R. (2018). Analysis of brain waves ac-
cording to their frequency. Int. J. Of Biol. And Biomed. Eng., 12, 202–207.
https://www.researchgate.net/profile/Zuzana-Koudelkova-4/publication/
334805116_Analysis_of_brain_waves_according_to_their_frequency/links/
5d4194e84585153e59312c60/Analysis-of-brain-waves-according-to-their-
frequency.pdf
Kringelbach, M. L., & Deco, G. (2020). Brain states and transitions: insights from
computational neuroscience. Cell Reports, 32(10), 108128. 10.1016/j.celrep.
2020.108128
Matthews, P. M., & Jezzard, P. (2004). Functional magnetic resonance imaging.
Journal of Neurology, Neurosurgery & Psychiatry, 75(1), 6–12. https://jnnp.
bmj.com/content/75/1/6.short
Müller-Putz, G. R. (2020). Electroencephalography. Handbook of Clinical Neurology,
168, 249–262. 10.1016/B978-0-444-63934-9.00018-4
Noggle, C. A., & Davis, A. S. (2021). Advances in neuroimaging. In Understanding
the Biological Basis of Behavior (pp. 107–137). Springer, Cham. 10.1007/
978-3-030-59162-5_5
Proudfoot, M., Woolrich, M. W., Nobre, A. C., & Turner, M. R. (2014).
Magnetoencephalography. Practical neurology, 14(5), 336–343. 10.1136/
practneurol-2013-000768
Raichle, M. E. (1983). Positron emission tomography. Annual Review of Neuro-
science, 6(1), 249–267. https://www.annualreviews.org/doi/pdf/10.1146/
annurev.ne.06.030183.001341
Rice, L., & Bisdas, S. (2017). The diagnostic value of FDG and amyloid PET in
Alzheimer’s disease—A systematic review. European Journal of Radiology,
94, 16–24. 10.1016/j.ejrad.2017.07.014
Savoy, R. L. (2001). History and future directions of human brain mapping and
functional neuroimaging. Acta Psychologica, 107(1-3), 9–42. 10.1016/S0001-
6918(01)00018-X
Schacter, D. L. (1977). EEG theta waves and psychological phenomena: A review and
analysis. Biological Psychology, 5(1), 47–82. 10.1016/0301-0511(77)90028-X
WEB SOURCE
https://imotions.com/wp-content/uploads/2022/10/brain-lobes-iMotions.png
https://assets.nhs.uk/nhsuk-cms/images/E5RF5X.width-1534.jpg
https://ars.els-cdn.com/content/image/3-s2.0-B9780128044902000026-f02-01-9780128044902.jpg
https://ccnmtl.columbia.edu/projects/neuroethics/module1/foundationtext/index.html
https://imotions.com/blog/learning/research-fundamentals/eeg-vs-mri-vs-fmri-differences/
https://imotions.com/wp-content/uploads/2022/10/fMRI-explained.jpg
https://info.tmsi.com/blog/types-of-eeg-electrodes
https://www.york.ac.uk/psychology/research/york-neuroimaging-centre/research/magnetoencephalography/
https://radiology.ucsf.edu/patient-care/services/specialty-imaging/alzheimer
https://www.pnas.org/doi/10.1073/pnas.2208729119
Chapter 5
EEG
Concepts, research-based analytics,
and applications
Rashmi Gupta, Sonu Purohit, and Jeetendra Kumar
Atal Bihari Vajpayee University, Bilaspur, Chhattisgarh, India
5.1 INTRODUCTION
that processes the data and displays the results in a visual format, such as
a graph or image. Over the years, advances in materials, electronics, and
software have resulted in smaller, more portable, and more efficient
machines with higher resolution and accuracy. Furthermore, advances
in brain-computer interface technology have enabled the development of
EEG-based systems that can be used for a variety of applications other
than traditional medical diagnosis, such as controlling prosthetic devices,
monitoring brain activity for cognitive assessment and research, and
more. EEG is a subpart of the brain-computer interface (BCI). BCI devices
can be of the following types:
EEG devices can also be wired or wireless. Wired brain-wear devices offer
stronger signals and more stability than wireless ones, with no chance of
disconnection during the recording of electrical activity. Wired devices
restrict the user's movement, but recording proceeds without external
interruption. With a wireless headset, the recording can be affected by
connectivity; because the device is mobile, it is easy to handle and can be
used from a distance, but its primary drawback is that it can be
disconnected for external reasons, and the battery backup of the device can
be another source of interruption during recording.
Electrical signals inside the brain may be affected by many factors, such
as body movements, eye blinking, opening the mouth, moving a leg, and even
imagination. Many variations can be seen in the recorded signals when a
person moves a leg, hand, head, or another part of the body. Such variation
is called noise or artifacts, and its removal from the dataset is called
preprocessing. Two types of artifacts can be present in a dataset: internal
artifacts and system artifacts. Internal artifacts arise from body
movements of the subject, such as eye blinking or hand, foot, or head
movements; system artifacts arise from fluctuations in the power supply of
the device, movements of the device's electrodes, and so on.
A few artifact removal techniques, collectively called filtering, are used
to remove artifacts from the raw EEG dataset. Filtering includes three
techniques: the high-pass filter, the low-pass filter, and the notch
filter. The high-pass filter removes slow frequencies below about 0.1 Hz or
1.0 Hz and is used for removing drift and trends. The low-pass filter
removes unwanted high-frequency interference, typically above 50–70 Hz, to
clean the dataset and improve its quality. EEG signals are often affected
by power-line noise; such noise can sometimes be avoided at the source with
a carefully equipped system, but this is not always successful and cannot
help with already-gathered datasets. Therefore, notch filters are used to
remove such noise, suppressing the 50/60 Hz interference in the dataset.
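A minimal sketch of these three filters using SciPy follows; the 250 Hz sampling rate, the filter orders, and the exact cutoff values are illustrative assumptions rather than requirements of any particular EEG device.

import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

fs = 250.0  # assumed sampling rate (Hz)

def highpass(data, cutoff=0.5, order=4):
    # Removes slow drift and trends below the cutoff frequency.
    b, a = butter(order, cutoff / (fs / 2), btype="high")
    return filtfilt(b, a, data)

def lowpass(data, cutoff=50.0, order=4):
    # Removes high-frequency interference above the cutoff.
    b, a = butter(order, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, data)

def notch(data, freq=50.0, quality=30.0):
    # Suppresses narrow-band power-line noise at 50 Hz (60 Hz in some regions).
    b, a = iirnotch(freq / (fs / 2), quality)
    return filtfilt(b, a, data)

# Example: clean one channel of raw EEG (synthetic noise here).
raw = np.random.randn(10 * int(fs))
clean = notch(lowpass(highpass(raw)))

Zero-phase filtering (filtfilt) is used here so the filters do not shift the waveform in time, which matters when EEG events must be aligned to stimuli.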
Apart from all these basic techniques, some new EEG data processing
techniques are also proposed in some research works. Following are some
of these techniques.
EEG devices are able to grasp the internal state of the mind from the
electrical signals it generates. EEG measures the electrical activity of
the brain in a non-invasive manner. Because this activity is related to a
variety of functions such as movement, sensation, thought, and emotion, EEG
is a useful tool for understanding brain function. Furthermore, EEG can be
used to diagnose various neurological disorders as well as monitor brain
activity during surgery, which is critical for patient safety. EEG is also
useful for researching sleep and assessing the efficacy of brain
stimulation techniques. Furthermore, EEG's ability to detect changes in
brain activity during drug trials is critical for the development of
effective medicines for a variety of diseases. Due to the non-invasive
nature of EEG, it can also be used by non-medical personnel; even computing
professionals can record and analyze brain waves. In this chapter, EEG
applications in areas such as neuro-marketing, behavioral science,
cognitive neuroscience, education, and security are discussed. Figure 5.2
illustrates the applications of EEG in many areas.
Table 5.1 Deep learning-based EEG data analysis techniques

Research work | Objective | Year | Dataset used | No. of volunteers | AI technique used | Model used | Accuracy
[2] | To classify schizophrenia during resting state | 2023 | Own dataset | 31 | Machine learning | KNN, LR, DT, RF, SVM | Highest 89% accuracy with SVM
[3] | To classify emotional valence | 2023 | DEAP dataset [4] | 32 | Machine learning | SVM classifier | 97.42% accuracy
[5] | To detect drowsiness | 2023 | Public dataset by Jianliang Min et al. [6] | 12 | Machine learning | NB, SVM, KNN, RFA | 100% highest accuracy
[7] | To classify bipolar and other depressive disorders | 2023 | Own collected dataset | 57 | Machine learning | SVM, KNN, RF | 89.3% accuracy
[8] | To diagnose brain disorders | 2019 | The TUH EEG Corpus [9] | 13500 | Deep learning | HMM (Hidden Markov Model) | 90%
[10] | To find the efficient classification method for different frequency data | 2021 | Andrzejak et al. [11] | 24 | Deep learning | CNN-E | CNN-E model performed well with same-frequency and different-frequency sampling
[12] | To classify eye-closed and eye-open states using deep learning | 2022 | Own dataset | 27 | Deep learning | DLVQ (deep learning vector quantizer) |
[18] | To detect schizophrenia using sound perception | 2022 | One public dataset and self-collected dataset | 128 | Deep learning | CNN | 78% accuracy
[19] | To classify normal and alcoholic people | 2022 | UCI dataset | 122 | Deep learning | DWT + CNN + Bi-LSTM | 99.32%
[20] | To predict tinnitus treatment outcomes | 2023 | Own collected dataset | 9 | Deep learning | TFI score + CNN | Accuracy ranging from 98% to 100%
[21] | To predict awakening from the coma | 2023 | Own collected dataset | 145 | Deep learning | CNN | Positive prediction 0.83 ± 0.03; negative prediction 0.57 ± 0.04
5.4.3 Neuro-marketing
The application of neuroscience techniques to the study of consumer
behavior and decision making is known as neuro-marketing. It seeks to
comprehend how people make purchasing decisions and what factors
influence them. Neuro-marketing measures brain activity and responses to
marketing stimuli using tools such as EEG, fMRI, and eye-tracking. Neuro-
marketing researchers can gain insights into consumer preferences, atti-
tudes, and motivations by studying brain activity. These insights can then
be used to develop more effective marketing strategies. Because it is still a
young field, neuro-marketing’s methods and conclusions can be debatable.
However, by offering a deeper understanding of consumer behavior and the
underlying neural mechanisms that drive it, it has the potential to com-
pletely alter how businesses approach marketing and consumer research.
Neuro-marketing is a highly interdisciplinary field that draws on knowledge
from marketing, psychology, and neuroscience. It is also a field that is
constantly changing as new strategies and tactics are created. For neuro-
marketing, EEG is an easy-to-use device.
the task, whether the task was easy or too hard to understand, or whether
the mind of the student was going through stress.
5.4.6 Security
User authentication plays a very important role in security. There are many
types of user authentication systems, such as PINs, passwords,
fingerprints, and face recognition systems. A newer approach is the
EEG-based authentication system. EEG reflects the large number of
psychological activities running in the human brain, and there are many
differences between individuals' brain structures and cognitive activities:
the EEG signals of different people differ, while the same individual's
brain performs the same activities in a repeatable way. This characteristic
brain activity can therefore serve as an authentication signal for security
purposes. EEG-based authentication is well suited to cyber security and
cryptography because the signal is always unique and not replicable. An
EEG-based biometric system needs a BCI application to develop security
systems.
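As a purely illustrative sketch of the idea (not a description of any deployed system), a user could be enrolled as the mean of several EEG feature vectors and later authenticated by similarity to that template; the feature dimension, the cosine-similarity threshold, and the random data below are all assumptions.

import numpy as np

def enroll(feature_vectors):
    # Template = mean of the user's enrollment feature vectors.
    return np.mean(feature_vectors, axis=0)

def authenticate(template, probe, threshold=0.95):
    # Accept if cosine similarity to the template exceeds the threshold.
    cos = np.dot(template, probe) / (
        np.linalg.norm(template) * np.linalg.norm(probe))
    return cos >= threshold

rng = np.random.default_rng(1)
user = rng.normal(size=(5, 16))          # five enrollment recordings
template = enroll(user)
print(authenticate(template, user[0]))                # genuine attempt
print(authenticate(template, rng.normal(size=16)))    # impostor attempt

A real system would replace the random vectors with stable EEG-derived features (e.g., band powers per channel) and calibrate the threshold against false-accept and false-reject rates.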
EEG measures the electrical activity of the brain during any mental task;
at first glance, one could describe it as a technology for accessing the
brain state of humans. New research doors are already open for conducting
neuroscience research using EEG devices, but as with other technologies,
many challenges are also associated with EEG. These challenges can be
categorized as follows.
Table 5.2 Latest research work done on different application areas of EEG

EEG application area | Research work & objective | Year | Dataset used | Device used | No. of volunteers | Methodology used | Findings
Cognitive Neuroscience | [22] Diagnosis of Alzheimer's disease | 2023 | Publicly available dataset [23] | Walter EEGPL-2311 | 59 | SVM | Accuracy 97.22%
Cognitive Neuroscience | [24] To predict the decision-making process | 2021 | Own collected dataset | HydroCel Geodesic Sensory Net | 74 | Statistical analysis | EEG signals can discriminate decision variables.
Cognitive Neuroscience | [25] To predict decision making | 2020 | Own collected dataset | Brain Products GmbH | 14 | COH | Highest accuracy 0.90 ± 0.10
Behavior Neuroscience | [26] To analyze the effect of colors, graphics, and design | 2022 | Own collected dataset | BioSemi Inc. | 81 | Statistical analysis | Way-finding cognition was significantly improved with color and graphic enhancement.
Behavior Neuroscience | [27] To detect driver distraction | 2022 | Own collected dataset | - | 6 | Bi-LSTM | Highest accuracy 92.48%
Behavior Neuroscience | [28] To analyze brain behavior and synchrony during parent–child interactions | 2021 | Own collected dataset | BioSemi B.V. | 186 | Statistical analysis | 72% of sessions were completed.
Neuro-marketing | [29] To classify preferences | 2022 | DEAP dataset [4] | Biosemi Active Two system | 32 | DNN, RF, SVM, KNN | Highest accuracy 94%
Neuro-marketing | [30] To predict consumer's emotion | 2022 | Publicly available dataset [31] | Emotiv Epoc+ | 25 | Ensemble classification using GA | Accuracy 96.89%
Neuro-marketing | [32] To classify consumer preference | 2022 | Own collected dataset | Neurowerk EEG Sigma | 45 | K-means clustering | Highest accuracy 92.4%
Sports and Meditation | [33] To assess mental fatigue and stress | 2021 | Own collected dataset | Muse EEG Device | 10 | VAS ruler | Game positively affected stress and concentration.
Sports and Meditation | [34] To assess the mental status of rifle shooters | 2019 | Own collected dataset | Mindwave and NeuroSky | 14 | Calculated average of shooting score, attention, and heart rate 5 s before shooting | Shooting score and meditation were higher in the expert; attention and heart rate were higher in the novice.
Education | [35] To analyze split-attention effects | 2022 | Own collected dataset | g.Nautilus, g.tec | 40 | Statistical test | The split-attention group (beta band) and the focused-attention group (alpha band) both showed a significant difference.
Education | [36] To analyze attention | 2020 | Own collected dataset | Mindwave | 13 | Comparison of attention values | Different media of study affect attention values differently.
Security | [37] To develop EEG-based personal authentication system | 2022 | Own collected dataset | EGI GES 300 | 15 | Auto Weka | 95.6%
Security | [38] To analyze EEG-based authentication | 2019 | Own collected dataset | Truscan EEG device | 20 | KNN, LDA | Accuracy 98.04%
Brain Control Robotics | [39] To develop brain control robotic arm | 2020 | Own collected dataset | - | 15 | MDCBN | Success rate 0.60
Brain Control Robotics | [40] To decode hand movements | 2020 | Own collected dataset | EEG with 57 electrodes | 15 | LDA | Average 48% score
5.6 CONCLUSION
REFERENCES
[1] M. X. Cohen, “Where Does EEG Come From and What Does It Mean?,”
Trends Neurosci., vol. 40, no. 4, pp. 208–218, Apr. 2017.
[2] J. Ruiz de Miras, A. J. Ibáñez-Molina, M. F. Soriano, and S. Iglesias-Parro,
“Schizophrenia Classification Using Machine Learning on Resting State
EEG Signal,” Biomed. Signal Process. Control, vol. 79, p. 104233, Jan.
2023.
[3] L. Abdel-Hamid, “An Efficient Machine Learning-Based Emotional Valence
Recognition Approach Towards Wearable EEG,” Sensors, vol. 23, no. 3,
p. 1255, Jan. 2023.
[4] S. Koelstra et al., “DEAP: A Database for Emotion Analysis; Using
Physiological Signals,” IEEE Trans. Affect. Comput., vol. 3, no. 1, pp. 18–31,
Jan. 2012.
[5] I. A. Fouad, “A Robust and Efficient EEG-based Drowsiness Detection
System Using Different Machine Learning Algorithms,” Ain Shams Eng. J.,
vol. 14, no. 3, p. 101895, Apr. 2023.
[6] J. Min, P. Wang, and J. Hu, “Driver Fatigue Detection through Multiple
Entropy Fusion Analysis in an EEG-based System,” PLoS One, vol. 12,
no. 12, Dec. 2017.
[7] M. Ravan et al., “Discriminating between Bipolar and Major Depressive
Disorder Using a Machine Learning Approach and Resting-state EEG Data,”
Clin. Neurophysiol., vol. 146, pp. 30–39, Feb. 2023.
[8] M. Golmohammadi, A. H. Harati Nejad Torbati, S. Lopez de Diego,
I. Obeid, and J. Picone, “Automatic Analysis of EEGs Using Big Data and
Hybrid Deep Learning Architectures,” Front. Hum. Neurosci., vol. 13, p. 76,
Feb. 2019.
[9] “Temple University EEG Corpus - Downloads.” [Online]. Available:
https://isip.piconepress.com/projects/tuh_eeg/html/downloads.shtml.
[Accessed: 10-Feb-2023].
[10] T. Wen, Y. Du, T. Pan, C. Huang, and Z. Zhang, “A Deep Learning-Based
Classification Method for Different Frequency EEG Data,” Comput. Math.
Methods Med., vol. 2021, 2021.
[11] R. G. Andrzejak, K. Lehnertz, F. Mormann, C. Rieke, P. David, and
C. E. Elger, “Indications of Nonlinear Deterministic and Finite-Dimensional
Structures in Time Series of Brain Electrical Activity: Dependence on
Recording Region and Brain State,” Phys. Rev. E, vol. 64, no. 6, p. 061907,
Nov. 2001.
[12] F. Husham Almukhtar, A. Abbas Ajwad, A. S. Kamil, R. A. Jaleel, R. Adil
Kamil, and S. Jalal Mosa, “Deep Learning Techniques for Pattern Recognition
in EEG Audio Signal-Processing-Based Eye-Closed and Eye-Open Cases,”
Electronics, vol. 11, no. 23, p. 4029, Dec. 2022.
[13] Q. Xin, S. Hu, S. Liu, L. Zhao, and Y. D. Zhang, “An Attention-Based
Wavelet Convolution Neural Network for Epilepsy EEG Classification,”
IEEE Trans. Neural Syst. Rehabil. Eng., vol. 30, pp. 957–966, 2022.
[14] N. Tibrewal, N. Leeuwis, and M. Alimardani, “Classification of Motor
Imagery EEG Using Deep Learning Increases Performance in Inefficient
BCI Users,” PLoS One, vol. 17, no. 7, p. e0268880, Jul. 2022.
[15] M. Algarni, F. Saeed, T. Al-Hadhrami, F. Ghabban, and M. Al-Sarem, “Deep
Learning-Based Approach for Emotion Recognition Using Electroencephalo-
graphy (EEG) Signals Using Bi-Directional Long Short-Term Memory (Bi-
LSTM),” Sensors, vol. 22, no. 8, p. 2976, Apr. 2022.
[16] J. Xie et al., “A Transformer-Based Approach Combining Deep Learning
Network and Spatial-Temporal Information for Raw EEG Classification,”
IEEE Trans. Neural Syst. Rehabil. Eng., vol. 30, pp. 2126–2136, 2022.
[17] “EEG Motor Movement/Imagery Dataset v1.0.0.” [Online]. Available:
https://physionet.org/content/eegmmidb/1.0.0/. [Accessed: 10-Feb-2023].
[18] C. Barros, B. Roach, J. M. Ford, A. P. Pinheiro, and C. A. Silva, “From
Sound Perception to Automatic Detection of Schizophrenia: An EEG-Based
Deep Learning Approach,” Front. Psychiatry, vol. 12, p. 2659, Feb. 2022.
[19] E. Tamilia, L. Ricci, H. Li, and L. Wu, “EEG Classification of Normal
and Alcoholic by Deep Learning,” Brain Sciences, vol. 12, no. 6, p. 778,
Jun. 2022.
[20] M. Doborjeh et al., “Prediction of Tinnitus Treatment Outcomes Based
on EEG Sensors and TFI Score Using Deep Learning,” Sensors, vol. 23, no. 2,
p. 902, Jan. 2023.
6.1 INTRODUCTION
the stride interval time series of the gait in participants with HD as well as
the healthy older subjects in comparison to control subjects. Their findings
demonstrated that compared to controls, subjects with HD and older
subjects have more random stride interval fluctuations. Further studies on
the analysis of gait oscillations for assessing disease state can be found
in [5,13,14]. Motivated by these findings, Kamruzzaman applied the
support vector machine method and two fundamental temporal-spatial
gait characteristics (stride length and cadence) as input features to examine
the gait of people with cerebral palsy [15]. A study that uses fluctuation
analysis and frequency range distribution to get a fresh perspective on
gait rhythm is found in [16]. A tensor decomposition model for higher-
dimensional analysis in PD was put forth in [17]. The authors of [18]
described numerous regression normalization techniques for PD gait analysis
that took into account the patient's physical characteristics and
self-selected speed. For the statistical investigation of gait rhythm, Wu
employed a nonparametric Parzen-window approach in [19] to evaluate the
gait intervals. For gait dynamics analysis aimed at classifying NDDs,
frequency range distribution [20], tensor decomposition [21], and
texture-based images with fuzzy recurrence plots [22] have been proposed.
Research on the dynamics of gait patterns in neurodegenerative illness, and
on determining its severity, may improve fall prediction, treatment, and
rehabilitation procedures. We therefore propose a non-linear entropy
measure for extracting optimum features for the detection of these
diseases. The proposed research could aid in the classification of NDDs and
the analysis of gait variations.
Contributions:
• An entropy-based feature extraction technique is proposed for effec-
tive feature extraction.
• Statistical-based features such as minimum, maximum, mean, energy,
and normalized energy were used for the classification.
• An artificial neural network (ANN) classifier is utilized for each
classification task: HC vs. PD, HC vs. HD, HC vs. ALS, and HC
vs. NDD.
• The classification performance shows better results with the proposed
approach. The framework of the proposed method is shown in
Figure 6.1.
Table 6.1 The classification performance using all five statistical features

Classification task | Accuracy (%) | Sensitivity (%) | Specificity (%) | PPV (%) | NPV (%)
PD vs. HC | 93.54 | 99.28 | 88.82 | 88.12 | 99.33
HD vs. HC | 92.41 | 89.20 | 97.62 | 98.12 | 85.34
ALS vs. HC | 91.10 | 84.97 | 97.89 | 97.50 | 86.00
NDD vs. HC | 92.21 | 92.28 | 91.31 | 92.25 | 93.20
Log energy entropy = \sum_{i=1}^{N} \log(y_i^2)   (6.1)

where N denotes the total length of the signal and y_i denotes the signal's
ith sample. The log energy entropy feature was used by the authors of [26]
and [27] to achieve high classification accuracy.
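A minimal sketch of Eq. (6.1) in Python follows; the small epsilon guarding against log(0) on zero-valued samples is an implementation assumption not specified in the chapter.

import numpy as np

def log_energy_entropy(y, eps=1e-12):
    # Sum of log(y_i^2) over the whole signal, per Eq. (6.1);
    # eps avoids log(0) on silent samples.
    y = np.asarray(y, dtype=float)
    return float(np.sum(np.log(y ** 2 + eps)))

# e.g., one foot's 5-minute gait record at 300 samples/s = 90,000 samples
print(log_energy_entropy(np.random.randn(90_000)))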
Figure 6.2 The visual representation of gait patterns of the different NDD
classes (Parkinson, Huntington, ALS, and healthy signals; amplitude versus
number of samples).
6.2.3 Classification
Classification is a significant area of study in data mining, and neural
networks are among the most widely used techniques for it. NDD
classification employing gait features is typically challenged by three
factors that can affect performance: 1) a small number of clinical samples;
2) a large number of noisy or redundant features; and 3) the requirement to
meet real-time constraints [28]. To mitigate these problems, ANN
classification is used. An ANN is a complex adaptive system that can change
its internal structure in response to the information it receives; it does
so by varying the weights of its connections. Each link carries a certain
weight, a numerical value that governs the signal between two neurons. An
ANN connects an input layer, an output layer, and one or more hidden
(middle) layers with a larger number of processing nodes, or artificial
neurons [29]. The connection between two neurons is controlled by weights
that can be adjusted to improve system accuracy, as shown in Figure 6.3. An
ANN is an example of supervised learning: the knowledge acquired by the ANN
is stored in the form of connected network units, which is difficult for
humans to extract. This difficulty has motivated the extraction of
classification rules in data mining.
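As an illustration of this classification stage, the sketch below trains a small feed-forward network on a feature matrix; scikit-learn's MLPClassifier, the single hidden layer of 10 units, and the random toy data stand in for the chapter's actual ANN and gait features, so treat it as a sketch under those assumptions.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# X: one row per gait recording, five entropy-derived features each;
# y: 0 = healthy control (HC), 1 = NDD. Random data here for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))
y = rng.integers(0, 2, size=64)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

scaler = StandardScaler().fit(X_train)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)
print("test accuracy:", clf.score(scaler.transform(X_test), y_test))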
ANN learning is achieved through the use of various training algorithms
that are based on training rules or functions.
Sensitivity = \frac{True(+)}{True(+) + False(-)} \times 100   (6.3)

Specificity = \frac{True(-)}{True(-) + False(+)} \times 100   (6.4)

Positive predictive value (PPV) = \frac{True(+)}{True(+) + False(+)} \times 100   (6.5)

Negative predictive value (NPV) = \frac{True(-)}{True(-) + False(-)} \times 100   (6.6)
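These metrics follow directly from the confusion-matrix counts, as the minimal sketch below shows; the counts in the example call are arbitrary.

def metrics(tp, fp, tn, fn):
    # Performance metrics of Eqs. (6.3)-(6.6), expressed as percentages.
    return {
        "sensitivity": 100 * tp / (tp + fn),
        "specificity": 100 * tn / (tn + fp),
        "PPV": 100 * tp / (tp + fp),
        "NPV": 100 * tn / (tn + fn),
    }

print(metrics(tp=45, fp=5, tn=40, fn=3))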
The gait signals taken from the pressure sensors in the gait dynamics in
neurodegenerative disease dataset are used to investigate the proposed
technique. In the presented work, the non-linear characteristics of the
gait signals are analyzed by applying the log energy entropy computation to
the gait signals. This captures effective non-linear properties of the
signal, from which the optimum statistical features (minimum, maximum,
mean, energy, and normalized energy) were extracted from the left and right
feet of healthy control subjects and of ALS, HD, and PD patients. The
extracted features were given to the ANN classifier for the classification
of healthy controls versus individuals with NDD. The subjects taken for the
experiment comprise 15 PD, 20 HD, 13 ALS, and 16 healthy controls. Since
each recording lasts 5 minutes at a sampling rate of 300 samples per
second, the total number of samples for each foot is 90,000. The
classification performance of the proposed work is shown in Table 6.1 and
Figure 6.4.
The proposed method achieved classification accuracies of 93.54% for PD vs.
HC, 92.41% for HD vs. HC, 91.10% for ALS vs. HC, and 92.21% for NDD vs. HC.
The PD vs. HC classification achieved the highest sensitivity (99.28%) of
all classification tasks, but also the lowest specificity.

Figure 6.4 Classification performance (accuracy, sensitivity, specificity,
PPV, and NPV) for each classification task.

Considering single statistical features for classification also produced
good results. Table 6.2 and Figure 6.5 show the results of the
classification between PD and HC when individual statistical features are
considered.

Figure 6.5 Classification performance of PD vs. HC for individual
statistical features (accuracy, sensitivity, specificity, PPV, and NPV).
For the classification of PD vs. HC using single features, the energy
feature achieved the maximum performance, with an accuracy of 92.29%. For
HD vs. HC, the normalized energy feature achieved the highest accuracy
(91.87%) of all features, as shown in Table 6.3 and Figure 6.6.
The classification results of ALS vs. HC are shown in Table 6.4 and
Figure 6.7. The minimum feature performs best, with an accuracy of 92.29%,
sensitivity of 91.28%, specificity of 89.56%, PPV of 90.90%, and NPV of
91.34%.
For the classification between NDD and HC, all features give similar
results with minor variations, as shown in Table 6.5 and Figure 6.8. The
proposed method thus produces promising results.
Compared with some traditional variables for comparing multiple groups of
subjects, these indices appear far more useful. For example, the p value is
the minimum of all Kruskal-Wallis tests for the diseased subjects with
respect to statistics such as the average value, standard deviation,
fractal scaling index, decay time of autocorrelation, and nonstationarity
Figure 6.6 Classification performance of HD vs. HC for individual
statistical features (accuracy, sensitivity, specificity, PPV, and NPV).

Figure 6.7 Classification performance of ALS vs. HC for individual
statistical features (accuracy, sensitivity, specificity, PPV, and NPV).

Figure 6.8 Classification performance of NDD vs. HC for individual
statistical features (accuracy, sensitivity, specificity, PPV, and NPV).
6.4 CONCLUSION
detection of ALS, PD, and HD. To investigate a better metric for monitoring
the progression of NDD and the effects of intervention therapies, the
proposed entropy-based feature extraction, using log energy entropy with
statistical feature measures, produced better classification across all
tasks, with promising results for both single and multiple features. This
strongly demonstrates the dependability of the classification results,
which may be beneficial for automated monitoring of disease progression as
well as for early diagnosis. Furthermore, the proposed method may decrease
imprecision in clinical diagnosis. In the future, this research will
concentrate on improving performance by incorporating deep learning
techniques for classification.
REFERENCES
[1] Alaskar, Haya, Abir Jaafar Hussain, Wasiq Khan, Hissam Tawfik,
Pip Trevorrow, Panos Liatsis, and Zohra Sbaï. “A data science approach
for reliable classification of neuro-degenerative diseases using gait patterns.”
Journal of Reliable Intelligent Environments 6 (2020): 233–247.
[2] Jankovic, Joseph. “Parkinson’s disease: clinical features and diagnosis.”
Journal of Neurology, Neurosurgery & Psychiatry 79, no. 4 (2008):
368–376.
[3] Hausdorff, Jeffrey M., Yosef Ashkenazy, Chang-K. Peng, Plamen Ch Ivanov,
H. Eugene Stanley, and Ary L. Goldberger. “When human walking becomes
random walking: fractal analysis and modeling of gait rhythm fluctuations.”
Physica A: Statistical Mechanics and its Applications 302, no. 1–4 (2001):
138–147.
[4] Lahmiri, Salim. “Image characterization by fractal descriptors in variational
mode decomposition domain: application to brain magnetic resonance.”
Physica A: Statistical Mechanics and its Applications 456 (2016): 235–243.
[5] Shahbakhi, Mohammad, Danial Taheri Far, and Ehsan Tahami. “Speech
analysis for diagnosis of Parkinson’s disease using genetic algorithm and
support vector machine.” Journal of Biomedical Science and Engineering
2014 (2014).
[6] Salvatore, Christian, Antonio Cerasa, Isabella Castiglioni, F. Gallivanone,
A. Augimeri, M. Lopez, G. Arabia, M. Morelli, M.C. Gilardi, and
A. Quattrone. “Machine learning on brain MRI data for differential diag-
nosis of Parkinson’s disease and Progressive Supranuclear Palsy.” Journal
of Neuroscience Methods 222 (2014): 230–237.
[7] Kamruzzaman, Joarder, and Rezaul K. Begg. “Support vector machines
and other pattern recognition approaches to the diagnosis of cerebral palsy
gait.” IEEE Transactions on Biomedical Engineering 53, no. 12 (2006):
2479–2490.
[8] Wahid, Ferdous, Rezaul K. Begg, Chris J. Hass, Saman Halgamuge, and
David C. Ackland. “Classification of Parkinson’s disease gait using spatial-
temporal gait features.” IEEE Journal of Biomedical and Health Informatics
19, no. 6 (2015): 1794–1802.
[9] Wu, Yunfeng, and Sridhar Krishnan. “Statistical analysis of gait rhythm
in patients with Parkinson’s disease.” IEEE Transactions on Neural Systems
and Rehabilitation Engineering 18, no. 2 (2009): 150–158.
[10] Hass, Chris J., Thomas A. Buckley, Chris Pitsikoulis, and Ernest J. Barthelemy.
“Progressive resistance training improves gait initiation in individuals
with Parkinson’s disease.” Gait & Posture 35, no. 4 (2012): 669–673.
[11] McNeely, Marie E., and Gammon M. Earhart. “Medication and subthalamic
nucleus deep brain stimulation similarly improve balance and complex gait
in Parkinson disease.” Parkinsonism & Related Disorders 19, no. 1 (2013):
86–91.
[12] Picelli, Alessandro, Camilla Melotti, Francesca Origano, Andreas Waldner,
Antonio Fiaschi, Valter Santilli, and Nicola Smania. “Robot-assisted gait
training in patients with Parkinson disease: a randomized controlled trial.”
Neurorehabilitation and Neural Repair 26, no. 4 (2012): 353–361.
[13] Eskofier, Bjoern M., Sunghoon Ivan Lee, Manuela Baron, André Simon,
Christine F. Martindale, Heiko Gaßner, and Jochen Klucken. “An overview
of smart shoes in the internet of health things: gait and mobility assessment
in health promotion and disease monitoring.” Applied Sciences 7, no. 10
(2017): 986.
[14] Genetics Home, Huntington disease. National Library of Medicine
[Online]. https://ghr.nlm.nih.gov/condition/huntingtondisease (2018).
Accessed 03 Jan 2018.
[15] Kremer, H.P.H., and Huntington Study Group. “Unified Huntington’s
disease rating scale: Reliability and consistency.” Movement Disorders 11
(1996): 136–142.
[16] Ren, Peng, Shanjiang Tang, Fang Fang, Lizhu Luo, Lei Xu, Maria L. Bringas-
Vega, Dezhong Yao, Keith M. Kendrick, and Pedro A. Valdes-Sosa. “Gait
rhythm fluctuation analysis for neurodegenerative diseases by empirical
mode decomposition.” IEEE Transactions on Biomedical Engineering 64,
no. 1 (2016): 52–60.
[17] Pham, Tuan D., and Hong Yan. “Tensor decomposition of gait dynamics
in Parkinson’s disease.” IEEE Transactions on Biomedical Engineering 65,
no. 8 (2017): 1820–1827.
[18] Long, Jeffery D., Jane S. Paulsen, Karen Marder, Ying Zhang, Ji‐In Kim,
James A. Mills, and Researchers of the PREDICT‐HD Huntington’s Study
Group. “Tracking motor impairments in the progression of Huntington’s
disease.” Movement Disorders 29, no. 3 (2014): 311–319.
[19] Mannini, Andrea, Diana Trojaniello, Andrea Cereatti, and Angelo M.
Sabatini. “A machine learning framework for gait classification using inertial
sensors: Application to elderly, post-stroke and huntington’s disease pa-
tients.” Sensors 16, no. 1 (2016): 134.
[20] Hausdorff, Jeffrey M., Merit E. Cudkowicz, Renée Firtion, Jeanne Y. Wei,
and Ary L. Goldberger. “Gait variability and basal ganglia disorders:
Stride‐to‐stride variations of gait cycle timing in Parkinson’s disease and
Huntington’s disease.” Movement Disorders 13, no. 3 (1998): 428–437.
[21] Barnéoud, Pascal, and Olivier Curet. “Beneficial effects of lysine acetylsa-
licylate, a soluble salt of aspirin, on motor performance in a transgenic
model of amyotrophic lateral sclerosis.” Experimental Neurology 155, no. 2
(1999): 243–251.
[22] Cho, Chien-Wen, Wen-Hung Chao, Sheng-Huang Lin, and You-Yin Chen.
“A vision-based analysis system for gait recognition in patients with
Parkinson’s disease.” Expert Systems with Applications 36, no. 3 (2009):
7033–7039.
[23] PhysioBank, PhysioToolkit. “Physionet: Components of a new research
resource for complex physiologic signals.” Circulation 101, no. 23 (2000):
e215–e220.
[24] Hausdorff, Jeffrey M., Apinya Lertratanakul, Merit E. Cudkowicz, Amie L.
Peterson, David Kaliton, and Ary L. Goldberger. “Dynamic markers of
altered gait rhythm in amyotrophic lateral sclerosis.” Journal of Applied
Physiology (2000).
[25] Hausdorff, Jeffrey M., Susan L. Mitchell, Renee Firtion, Chung-Kang Peng,
Merit E. Cudkowicz, Jeanne Y. Wei, and Ary L. Goldberger. “Altered fractal
dynamics of gait: Reduced stride-interval correlations with aging and
Huntington’s disease.” Journal of Applied Physiology 82, no. 1 (1997):
262–269.
[26] Saka, Kübra, Önder Aydemir, and Mehmet Öztürk. “Classification of EEG
signals recorded during right/left hand movement imagery using Fast Walsh
Hadamard Transform based features.” In 2016 39th International Conference
on Telecommunications and Signal Processing (TSP), pp. 413–416. IEEE,
2016.
[27] Gupta, Vipin, Tanvi Priya, Abhishek Kumar Yadav, Ram Bilas Pachori, and
U. Rajendra Acharya. “Automated detection of focal EEG signals using fea-
tures extracted from flexible analytic wavelet transform.” Pattern Recognition
Letters 94 (2017): 180–188.
[28] Sharma, Bhavna, and K.J.I.J.C.E. Venugopalan. “Comparison of neural
network training functions for hematoma classification in brain CT images.”
IOSR Journal of Computer Engineering 16, no. 1 (2014): 31–35.
[29] Subasi, Abdulhamit, M. Kemal Kiymik, Ahmet Alkan, and Etem Koklukaya.
“Neural network classification of EEG signals by using AR with MLE pre-
processing for epileptic seizure detection.” Mathematical and Computational
Applications 10, no. 1 (2005): 57–70.
[30] Goetz, Christopher G., Werner Poewe, Olivier Rascol, Cristina Sampaio,
Glenn T. Stebbins, Carl Counsell, Nir Giladi et al. “Movement Disorder
Society Task Force report on the Hoehn and Yahr staging scale: Status
and recommendations the Movement Disorder Society Task Force on
rating scales for Parkinson’s disease.” Movement Disorders 19, no. 9 (2004):
1020–1028.
[31] Hausdorff, Jeffrey M. “Gait dynamics in Parkinson’s disease: Common
and distinct behavior among stride length, gait variability, and fractal-like
scaling.” Chaos: An Interdisciplinary Journal of Nonlinear Science 19, no. 2
(2009): 026113.
Chapter 7
7.1 INTRODUCTION
from one or more texts that contain a significant portion of the infor-
mation in the original text(s) and is no longer than half of the original
text(s)”. Summaries can be categorized as indicative, informative,
extractive, and abstractive. Indicative summaries give an idea of the
content, while informative summaries provide a brief version of it. Content
can be shortened by creating extracts or abstracts: extracts are generated
by reusing portions of the input text (its sentences or words), while
abstracts are regenerated using new phrases. Extractive text summarization
and abstractive text summarization are the two primary methods of ATS.
ATS may follow a three-stage procedure. The first stage, topic selection or identification, focuses on what portion of the text to include in the summary; this step generally assigns scores to different portions of the text, which helps in selecting them. The second stage is topic interpretation, which performs fusion or compression and helps condense the content. The third and last stage is summary generation, which produces the final summary in the desired form, mainly using text generation methods to reformulate the text. Most existing systems use only the first stage and produce a summary using pure extracts.
Topic identification is based on assigning a score to each unit (e.g., word, clause, or sentence) of the input content and then producing the top-scoring n units as per the required length of the summary. In the literature, various methods are used for scoring the fragments of input text. Text summarization systems use an independent scoring method to score each unit of text.
Various criteria for computing the score have been used and experimented with in the literature. The first is the positional criterion, which holds that certain locations, like headings, titles, and first paragraphs, contain important information, so phrases in these locations are given a higher score. The second criterion is based on cue phrases: sentences containing such phrases are given a higher score. A popular scoring method is based on word and phrase frequency, which holds that if some words occur with high frequency in a text, then sentences containing those words are more important and receive higher scores. If query-based summarization is needed, query and title overlap methods score highly those sentences that contain the desired words. Other criteria include connectedness or discourse structure, which measure how connected the sentences are.
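To make the frequency criterion concrete, here is a minimal sketch of frequency-based sentence scoring and top-n extraction; the regex tokenizer, the abbreviated stop-word list, and the helper names (frequency_scores, top_n_summary) are illustrative choices, not a method from any of the cited systems.

```python
import re
from collections import Counter

STOPWORDS = frozenset({"the", "a", "an", "of", "in", "and", "to", "is"})  # illustrative subset

def frequency_scores(text):
    """Score each sentence by the corpus frequency of its content words."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(w for w in words if w not in STOPWORDS)
    scores = []
    for s in sentences:
        tokens = [w for w in re.findall(r"[a-z']+", s.lower()) if w not in STOPWORDS]
        scores.append(sum(freq[w] for w in tokens) / max(len(tokens), 1))
    return sentences, scores

def top_n_summary(text, n=2):
    """Extract the n highest-scoring sentences, kept in their original order."""
    sentences, scores = frequency_scores(text)
    top = sorted(sorted(range(len(sentences)), key=lambda i: -scores[i])[:n])
    return " ".join(sentences[i] for i in top)
```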
Summaries generated by different approaches can be evaluated using different evaluation methods. Lin and Hovy [2003] introduced ROUGE to evaluate text summaries. ROUGE compares a system summary to a human summary using different units such as unigrams, bigrams, and skip-bigrams. ROUGE measures recall, i.e., the proportion of units of the gold-standard (human) summary that are covered by the system summary.
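The sketch below computes ROUGE-1 recall under the simplifying assumptions of whitespace tokenization and a single reference summary; full ROUGE implementations add stemming, multiple references, and the other variants mentioned above.

```python
from collections import Counter

def rouge_1_recall(system_summary, reference_summary):
    """ROUGE-1 recall: fraction of reference (gold) unigrams covered by the system summary."""
    sys_counts = Counter(system_summary.lower().split())
    ref_counts = Counter(reference_summary.lower().split())
    overlap = sum(min(c, sys_counts[w]) for w, c in ref_counts.items())
    return overlap / max(sum(ref_counts.values()), 1)
```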
There are different types of medical records that can be useful for medical
professionals for decision making such as electronic health records, clinical
reports, medical research publications, etc. The summaries may help in
better and quick understanding of the medical text data. In recent years,
many researchers have done work related to summarization of health data
(Table 7.1).
Table 7.1 Current research in text summarization for the health domain

S. No. | Author and publication | Technique used | Dataset
1 | Kanwal N and Rizzo G 2022 | Multi-head attention-based mechanism for performing extractive summarization | MIMIC-III discharge notes
2 | Jesus M et al. 2018 | Multi-Objective Artificial Bee Colony (MOABC) | Document Understanding Conference's dataset
3 | Gayathri P and Jaisankar N 2015 | Important sentences were extracted from documents on the basis of cue word occurrences to get summarized text | MeSH vocabulary thesaurus was used for the identification of domain-specific terms
4 | Moradi M and Samwald M 2019 | For biomedical text summarization, embeddings learned by a BERT model were used | Articles retrieved from BioMed Central to construct development and evaluation corpora
5 | Deepika S et al. 2021 | Pretrained models BERT, GPT-2 and TextRank | The COVID-19 Open Research Dataset: CORD-19
6 | Kieuvongngam V et al. 2020 | Pretrained models BERT and OpenAI GPT-2 | CORD-19
7 | Gigioli P et al. 2018 | Novel reinforcement learning reward metrics for learning, based on biomedical expert tools such as the UMLS Metathesaurus and MeSH | Biomedical literature from MEDLINE, from NLM's PubMed citation database
8 | Chen YP et al. 2020 | Bidirectional Encoder Representations from Transformers (BERT)-based structure with a two-stage training method | A dataset of discharge diagnoses from NTUH-iMD
9 | Moradi M 2018 | Clustering and itemset mining-based biomedical summarizer | Self-created text corpora
10 | Reddy SM and Miriyala S 2020 | A multi-objective optimization approach for the summarization of clinical trial descriptions, using the similarity and position of the sentences; similarity is calculated using TF-IDF and WMD | Data from Mendeley datasets
on the CORD-19 dataset. Out of these models, GPT-2 showed the best
results for summarization.
[Kieuvongngam V et al. 2020] worked on the CORD-19 dataset and showed that a text-to-text multi-loss training approach can be used to fine-tune a pre-trained model like GPT-2 to perform abstractive summarization. The results obtained were reasonable and interpretable.
[Gigioli P et al. 2018] proposed a deep reinforced summarization model capable of generating domain-aware summaries of biomedical documents. They used reward metrics to boost the quality of generated summaries, using an NLM PubMed citation dataset.
[Chen YP et al. 2020] presented a model named AlphaBERT, which uses a bidirectional encoder representations from transformers-based structure with two-stage training. This approach decreases the size of the model without affecting summarization performance, and the model helps hospital staff by providing quick summarized information.
[Moradi M. 2018] proposed clustering and itemset mining to develop a biomedical summarizer using self-created corpora. The model extracts concepts from the input data, uses itemset mining to extract the main topics, and applies a clustering algorithm to place sentences with similar topics in the same cluster; it then generates the summary.
[Reddy SM and Miriyala S 2020] presented a multi-feature-based optimization model for summarization in which various features, such as cosine similarity, position, and word-mover distance, were used. The clinical trial description dataset was taken from Mendeley.
Where:

TF(t, d) = \frac{\text{occurrences of } t \text{ in } d}{\text{number of terms in } d}   (7.2)

IDF(t) = \log_e \left( \frac{\text{number of documents}}{\text{number of documents with term } t \text{ in it}} \right)   (7.3)

V_i^{t+1} = W \cdot V_i^{t} + c_1 U_1^{t} \left( Pb_i^{t} - P_i^{t} \right) + c_2 U_2^{t} \left( gb^{t} - P_i^{t} \right)   (7.4)
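A direct, minimal transcription of equations (7.2) and (7.3) in Python might look as follows; the corpus is assumed to be a list of token lists, and the guard against terms absent from the corpus is an added convenience, not part of the formulas.

```python
import math

def tf(term, doc_tokens):
    # Equation (7.2): occurrences of t in d / number of terms in d
    return doc_tokens.count(term) / max(len(doc_tokens), 1)

def idf(term, corpus):
    # Equation (7.3): log_e(number of documents / documents containing t)
    containing = sum(1 for doc in corpus if term in doc)
    return math.log(len(corpus) / containing) if containing else 0.0

def tf_idf(term, doc_tokens, corpus):
    return tf(term, doc_tokens) * idf(term, corpus)
```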
The most impressive aspect of PSO is its stable topology, which enables particles to communicate with one another and pick up new information quickly in order to reach the global optimum. Because it optimizes a problem by constantly attempting to improve a candidate solution, the metaheuristic nature of this optimization algorithm offers many alternatives, and its application will grow as ensemble learning is studied further. In text summarization, candidate summaries are viewed as particles; by applying PSO, one obtains the summary (or summaries) that can be considered best under the given criterion.
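As a sketch of the velocity and position update in equation (7.4), assuming NumPy arrays for particle positions and fixed inertia and acceleration coefficients (the values shown are illustrative defaults, not the chapter's tuned parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(positions, velocities, p_best, g_best, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration following equation (7.4):
    v <- w*v + c1*U1*(p_best - x) + c2*U2*(g_best - x); then x <- x + v."""
    u1 = rng.random(positions.shape)  # U1^t: uniform random factors
    u2 = rng.random(positions.shape)  # U2^t
    velocities = (w * velocities
                  + c1 * u1 * (p_best - positions)
                  + c2 * u2 * (g_best - positions))
    return positions + velocities, velocities
```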
7.5.2 Preprocessing
This is one of the most important steps, because correctly preprocessed data give good results; data preprocessed incorrectly will yield incorrect results. Finally, in preprocessing, we need to convert textual data into numeric form, because machines cannot work with text directly, only with numeric data. The following are the steps involved in preprocessing the data.
7.5.2.3 Stemming
In this step, all words are reduced to their root forms. The Porter stemmer algorithm is used to find the root word.
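A minimal preprocessing sketch, assuming the NLTK library is available for its Porter stemmer; the regex tokenizer and the abbreviated stop-word list are placeholders for a fuller pipeline.

```python
import re
from nltk.stem import PorterStemmer  # assumes NLTK is installed

STOPWORDS = {"the", "a", "an", "of", "in", "and", "to", "is"}  # illustrative subset
stemmer = PorterStemmer()

def preprocess(text):
    """Lowercase, tokenize, drop stop words, and stem each token to its root form."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [stemmer.stem(t) for t in tokens if t not in STOPWORDS]
```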
frequently touches walls and furniture to stabilize herself. She has difficulty
stepping up onto things like a scale because of this imbalance. She does not
festinate. Her husband has noticed some slowing of her speed. She does
not need to use an assistive device. She has occasional difficulty getting in
and out of a car. Recently she has had more frequent falls. In March of
2007, she fell when she was walking to the bedroom and broke her wrist.
Since that time, she has not had any emergency department trips, but she
has had other falls. With respect to her bowel and bladder, she has no
issues and no trouble with frequency or urgency. The patient does not
have headaches. With respect to thinking and memory, she states she is
still able to pay the bills, but over the last few months she states, “I do
not feel as smart as I used to be”. She feels that her thinking has slowed
down. Her husband states that he has noticed, she will occasionally start
a sentence and then not know what words to use as she is continuing.
The patient has not had trouble with syncope. She has had past episodes
of vertigo, but not recently. Significant for hypertension diagnosed in 2006,
reflux in 2000, insomnia, but no snoring or apnea. She has been on
Ambien, which has no longer been helpful. She has had arthritis since
the year 2000, thyroid abnormalities diagnosed in 1968, a hysterectomy
in 1986, and a right wrist operation after her fall in 2007 with a titanium
plate and eight screws. Her father died of heart disease in his 60s, and
her mother died of colon cancer. She has a sister who she believes is
probably healthy. She has had two sons, one who died of a blood clot
after having been a heavy smoker and another who is healthy. She had two
normal vaginal deliveries. She lives with her husband. She is a nonsmoker
and has no history of drug or alcohol abuse. She does drink two to three
drinks daily. She completed 12th grade.”
[Figure: p-best scores of candidate summaries across summary percentages (30%, 40%, 45%, 50%, 55%, 60%, 65%, 70%); the plotted p-best values range from 2.48 to 4.73, with the maximum at the 70% summary.]
The values in the graph show that the 70% summary is the best optimized summary; its g-best value of 4.73 is the highest among all the p-best values.
REFERENCES
Chen, Y.P., Chen, Y.Y., Lin, J.J., Huang, C.H., and Lai, F. 2020. Modified
Bidirectional Encoder Representations From Transformers Extractive
Summarization Model for Hospital Information Systems Based on Character-
Level Tokens (AlphaBERT): Development and Performance Evaluation. JMIR
Med Inform. doi: 10.2196/17787. PMID:30445218.
Deepika, S., Lakshmi Krishna, N., and Shridevi, S. 2021. Extractive Text
Summarization for COVID-19 Medical Records. Innovations in Power and
Advanced Computing Technologies (i-PACT), Kuala Lumpur, Malaysia, 1–5.
doi: 10.1109/i-PACT52855.2021.9697019.
Gayathri, P., and Jaisankar, N. 2015. Towards an Efficient Approach for Automatic
Medical Document Summarization. Cybernetics and Information Technologies, vol. 15, no. 4, 78–91. doi: 10.1515/cait-2015-0056.
Gigioli, P., Sagar, N., Rao, A., and Voyles, J. 2018. Domain-Aware Abstractive
Text Summarization for Medical Documents. 2018 IEEE International
Conference on Bioinformatics and Biomedicine. doi: 10.1109/BIBM.2018.8621539.
Sánchez-Gómez, J.M., Vega-Rodríguez, M.A., and Pérez, C.J. 2018. Extractive multi-document text summarization using a multi-objective artificial bee colony optimization approach. Knowledge-Based Systems, vol. 159, 1–8. ISSN 0950-7051. doi: 10.1016/j.knosys.2017.11.029.
Kanwal, N., and Rizzo, G. 2022. Attention-based clinical note summarization,
SAC ’22: Proceedings of the 37th ACM/SIGAPP Symposium on Applied
Computing. (April): 813–820. 10.1145/3477314.3507256.
Kieuvongngam, V., Tan, B., and Niu, Y. 2020. Automatic text summarization of
COVID-19 medical research articles using BERT and GPT-2. CoRR, vol. abs/2006.01997. http://dblp.org/rec/journals/corr/abs-2006-01997.bib.
Kim, S.W., and Gil, J.M. 2019. Research paper classification systems based on
TF-IDF and LDA schemes. Human-centric Computing and Information Sciences.
vol. 9. 30. 10.1186/s13673-019-0192-7
Manning C., Raghavan, P., and Schütze, H. 2008. Introduction to Information
Retrieval. England: Cambridge University Press.
Moradi, M., and Samwald, M. 2019. Clustering of Deep Contextualized Representations for Summarization of Biomedical Texts. arXiv. doi: 10.48550/ARXIV.1908.02286.
Pivovarov, R., and Elhadad, N. 2015. Automated methods for the summarization
of electronic health records. J. Am. Med. Inf. Assoc. vol.22. no. 5. 938–947.
Rohil, M.K., and Magotra, V. 2022. An exploratory study of automatic text
summarization in biomedical and healthcare domain. Healthcare Analytics.
vol. 2. 100058, ISSN 2772-4425, 10.1016/j.health.2022.100058.
Reddy, S.M., and Miriyala, S. 2020. Exploring Multi Feature Optimization for
Summarizing Clinical Trial Descriptions. IEEE Sixth International Conference on Multimedia Big Data (BigMM), New Delhi, India, 341–345. doi: 10.1109/BigMM50055.2020.00059.
Sarkar, K. 2009. Using domain knowledge for text summarization in medical
domain. Int. J. Recent Trends Eng. vol.1. no. 1. 200–205.
“Transcribed Medical Transcription Sample Reports and Examples.” https://www.
mtsamples.com/site/pages/sample.asp?Type=42-Neurology&Sample=1695-
Adult%20Hydrocephalus.
Chapter 8
8.1 INTRODUCTION
Contributions:
• A time-frequency-domain feature extraction technique is proposed for effective feature extraction.
• The discrete wavelet transform (DWT) is applied to the gait time series to decompose the signals into coefficients.
• Statistical and entropy-based features are extracted from the decomposed coefficients and used for classification.
• An artificial neural network (ANN) classifier is utilized for the classification tasks HC vs. PD, HC vs. HD, HC vs. ALS, and HC vs. NDD (see the sketch after this list).
• The classification performance shows better results with the proposed approach. The framework of the proposed method is shown in Figure 8.1.
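As a hedged illustration of the ANN classification step, the sketch below trains a small multilayer perceptron on placeholder feature vectors; the random feature matrix, labels, and network size are hypothetical stand-ins, not the chapter's actual data or configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# X: one row of DWT-derived statistical/entropy features per subject (placeholder data);
# y: 0 = healthy control (HC), 1 = patient (e.g., PD)
X = np.random.rand(100, 14)
y = np.random.randint(0, 2, size=100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```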
a = a_0^{j}, \quad b = k\, b_0\, a_0^{j}, \quad a_0 > 1,\; b_0 \neq 0,\; j \in \mathbb{Z},\; k \in \mathbb{Z}   (8.1)

The base wavelet has discretized parameters, which can be written as,

\psi_{j,k}(t) = \frac{1}{\sqrt{a_0^{j}}}\, \psi\!\left( \frac{t - k\, a_0^{j} b_0}{a_0^{j}} \right)   (8.2)

D(j, k) = \int_{-\infty}^{+\infty} u1(t)\, \psi_{j,k}(t)\, dt   (8.3)

The signal u1(t) can be reconstructed using the inverse DWT, as given below:

u1(t) = \frac{1}{a} \sum_{j=-\infty}^{J} \sum_{k=-\infty}^{+\infty} wt(j, k)\, \psi_{j,k}(t), \quad a \in \mathbb{R}^{+}   (8.4)
Figure 8.3 Nth level of wavelet decomposition of the gait time series.
\text{Log energy entropy} = \sum_{i=1}^{N} \log\!\left( y1_i^{2} \right)   (8.5)
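As a hedged sketch of the feature extraction step, assuming the PyWavelets package (pywt) and a db4 wavelet at four levels (the exact wavelet and depth are assumptions, not the chapter's stated configuration); the small offset inside the logarithm is an added numerical safeguard when applying equation (8.5):

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def dwt_features(gait_signal, wavelet="db4", level=4):
    """Decompose a gait time series with the DWT and compute simple statistical
    features plus the log-energy entropy of equation (8.5) per sub-band."""
    coeffs = pywt.wavedec(gait_signal, wavelet, level=level)
    features = []
    for c in coeffs:
        log_energy = np.sum(np.log(c**2 + 1e-12))  # offset avoids log(0)
        features.extend([c.min(), c.max(), c.mean(), c.std(), log_energy])
    return np.array(features)
```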
\text{Sensitivity} = \frac{\text{True}(+)}{\text{True}(+) + \text{False}(-)} \times 100   (8.7)

\text{Specificity} = \frac{\text{True}(-)}{\text{True}(-) + \text{False}(+)} \times 100   (8.8)

\text{Positive predictive value (PPV)} = \frac{\text{True}(+)}{\text{True}(+) + \text{False}(+)} \times 100   (8.9)

\text{Negative predictive value (NPV)} = \frac{\text{True}(-)}{\text{True}(-) + \text{False}(-)} \times 100   (8.10)

where True(+) indicates true positive, True(−) refers to true negative, False(+) corresponds to false positive, and False(−) implies false negative.
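These four measures follow directly from confusion-matrix counts; a minimal helper, assuming nonzero denominators, is:

```python
def classification_metrics(tp, tn, fp, fn):
    """Equations (8.7)-(8.10), expressed as percentages; assumes nonzero denominators."""
    return {
        "sensitivity": 100 * tp / (tp + fn),
        "specificity": 100 * tn / (tn + fp),
        "ppv": 100 * tp / (tp + fp),
        "npv": 100 * tn / (tn + fn),
    }
```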
Table 8.1 Classification results achieved using statistical features for the classification of
PD vs. HC
Features Accuracy (%) Sensitivity (%) Specificity (%) PPV (%) NPV (%)
Minimum 91.94 87.50 96.67 95.57 89.27
Maximum 61.61 64.38 58.67 62.19 63.13
Mean 91.94 87.50 96.67 95.57 89.27
Standard deviation 64.84 65.00 64.67 69.09 65.70
Normalized standard deviation 75.48 70.63 80.67 79.09 72.52
Kurtosis 76.77 78.75 74.67 77.10 77.03
Skewness 73.87 77.50 70.00 73.49 74.64
Energy 67.42 77.50 56.67 66.29 71.07
Normalized energy 62.26 72.50 51.33 62.16 66.85
All features 99.38 99.23 99.41 99.29 98.65
Table 8.2 Classification results achieved using entropy features for the classification of
PD vs. HC
Features Accuracy (%) Sensitivity (%) Specificity (%) PPV(%) NPV(%)
Log-energy 85.36 87.03 71.26 84.73 84.37
Approximate 98.13 96.75 94.88 97.48 97.43
Sample 91.94 87.50 96.67 95.57 89.27
Permutation 85.16 86.25 84.00 85.36 87.03
Fuzzy 97.42 96.88 98.00 98.13 96.75
All features 99.78 99.46 99.70 99.75 99.72
Table 8.3 Classification results achieved using statistical features for the classification of
HD vs. HC
Features Accuracy (%) Sensitivity (%) Specificity (%) PPV(%) NPV(%)
Minimum 58.61 26.25 84.50 57.17 58.94
Maximum 61.67 39.38 79.50 60.03 62.34
Mean 63.61 42.50 80.50 64.63 63.57
Standard deviation 63.33 56.25 69.00 60.37 66.02
Normalized standard deviation 54.72 33.75 71.50 50.51 57.17
Kurtosis 57.78 40.00 72.00 52.97 60.46
Skewness 70.56 69.38 71.50 67.00 74.25
Energy 70.28 60.63 78.00 69.10 71.32
Normalized energy 71.94 79.38 66.00 65.11 80.34
All features 97.17 96.25 98.61 98 98.94
Table 8.4 Classification results achieved using entropy features for the classification of
HD vs. HC
Features Accuracy (%) Sensitivity (%) Specificity (%) PPV(%) NPV(%)
Log-energy 72.34 90.53 96.84 94.93 81.67
Approximate 83.57 85.44 50.94 58.28 63.61
Sample 66.02 25.81 57.89 61.99 63.33
Permutation 57.17 76.33 39.72 78.53 54.72
Fuzzy 95.46 92.68 94.84 92.70 97.78
All features 99.98 99.22 99.72 99.73 99.71
Table 8.5 Classification results achieved using statistical features for the classification of
ALS vs. HC
Features Accuracy (%) Sensitivity (%) Specificity (%) PPV(%) NPV(%)
Minimum 75.86 86.25 63.08 75.57 76.72
Maximum 85.17 88.13 81.54 85.54 84.86
Mean 65.52 67.50 63.08 69.65 60.81
Standard deviation 84.48 91.88 75.38 82.16 88.45
Normalized standard deviation 76.21 93.75 54.62 71.97 86.85
Kurtosis 72.41 87.50 53.85 70.00 77.78
Skewness 64.83 74.38 53.08 60.16 67.07
Energy 78.28 79.38 76.92 82.30 73.03
Normalized energy 88.28 91.25 84.62 87.93 88.95
All features 98.97 93.75 93.08 97.31 91.50
Table 8.6 Classification results achieved using entropy features for the classification of
ALS vs. HC
Features Accuracy (%) Sensitivity (%) Specificity (%) PPV(%) NPV(%)
Log-energy 76.72 50.74 80.24 72.72 75.86
Approximate 84.86 70.03 86.78 84.73 85.17
Sample 60.81 30.52 68.47 65.08 65.52
Permutation 88.45 68.91 86.72 83.19 84.48
Fuzzy 86.85 53.31 81.40 71.30 76.21
All features 97.78 94.45 97.78 98.64 92.41
From Tables 8.7 and 8.8, the classification performance for NDD vs. HC obtained with the individual statistical and entropy features shows good classification results. As shown in Figure 8.5, the proposed method outperforms in all the classification tasks.
The three patient groups with NDDs are at various stages in the current database, assessed by the severity or duration of the illness: the Hoehn and Yahr score for PD patients, the total functional capacity assessment for subjects with HD, and, for ALS subjects, the number of months since diagnosis. The experiments demonstrated that patients with NDDs, even those in the early stages, can be categorized, which validates the proposed method's efficacy for early detection. Nonetheless, more clinical research and data analysis with larger sample sizes are needed to verify the method's efficacy and appropriateness in the early stages.
Table 8.7 Classification results achieved using statistical features for the classification of
NDD vs. HC
Features Accuracy (%) Sensitivity (%) Specificity (%) PPV(%) NPV(%)
Minimum 58.94 63.13 35.79 46.19 58.61
Maximum 62.34 70.53 46.84 54.93 61.67
Mean 63.57 75.44 50.94 58.28 63.61
Standard deviation 66.02 65.81 57.89 61.99 63.33
Normalized standard deviation 57.17 76.33 39.72 48.53 54.72
Kurtosis 60.46 76.68 44.84 52.70 57.78
Skewness 74.25 41.06 67.91 70.21 70.56
Energy 71.32 39.50 64.38 68.61 70.28
Normalized energy 80.34 45.41 71.46 72.29 71.94
All features 92.39 92.22 92.97 93.37 94.70
Table 8.8 Classification results achieved using entropy features for the classification of
NDD vs. HC
Features Accuracy (%) Sensitivity (%) Specificity (%) PPV(%) NPV(%)
Log-energy 96.67 95.57 89.27 84.50 91.04
Approximate 64.67 69.09 65.70 31.18 62.58
Sample 80.67 79.09 72.52 51.45 74.26
Permutation 74.67 77.10 77.03 53.77 77.67
Fuzzy 93.00 93.49 94.64 96.81 94.35
All features 97.00 95.36 97.03 91.26 94.73
Figure 8.5 The summary of classification performance (accuracy, sensitivity, specificity, PPV, and NPV) considering all features for the classification.
8.4 CONCLUSION
Chapter 9
9.1 INTRODUCTION
The beta band overlaps with the frequencies reported in [31] for the frontal
muscles, which are around 20–30 Hz. Furthermore, 2 Hz has been reported
as the lowest frequency of muscle activity. Muscle artifacts also disrupt the
delta, theta, and alpha frequency bands. Therefore, using only the most
fundamental spectral signatures, it is challenging to tell EMG apart
from EEG.
Artifacts in EEG data are typically eliminated using EEG artifact removal
strategies. Due to the effectiveness of artifact removal techniques for EEG
data, the techniques can now be used in a wide variety of clinical and
industrial settings. There have been a number of obstacles in the way of
EEG artifact removal techniques. These difficulties may arise from the
nonlinearities of the unwanted signal being added to the EEG signal or from
the complexity of the methods themselves. The “nonlinear” nature of the
artifacts, for instance, makes it challenging to extract only the artifacts
without also losing actual neuronal data. Various artifact removal algorithms from the literature are discussed in the following subsections.
method incorrectly assumes that the neuronal activity in EEG and EOG signals is uncorrelated while attempting to remove artifacts using EOG signals as a reference [25,51]. As a consequence, regression analysis can also remove from the EEG signals the neuronal activity shared between EEG and EOG.
There is currently no agreement in the scientific literature regarding the best
approach for low-pass filtering EOG signals. On the other hand, some
authors contend that neural activity pollutes every frequency range [52].
However, regression methods continue to serve as the benchmark against
which the efficacy of all other newly developed methods must be judged.
AMICA [87], and AMUSE [88] are just a few of the ICA variants proposed
by researchers for EEG artifact removal.
A significant limitation of ICA is that it requires non-Gaussian data: the sources can be estimated only if they are non-Gaussian. ICA can account for at most a single Gaussian component, which can be estimated as the residual after all other independent components have been extracted; whether a given component is Gaussian or non-Gaussian is rarely known in advance. Normalizing the data is standard practice before ICA is computed. Another drawback of ICA is that it requires at least as many channels as sources, so it cannot be used with just one or a small number of channels.
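As a hedged illustration of ICA-based cleaning (a generic sketch, not any specific method from the tables below), the following assumes scikit-learn's FastICA and an EEG array with channels as columns; selecting which components are artifactual (artifact_idx) is left to the analyst.

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_artifact_components(eeg, artifact_idx, n_components=None):
    """eeg: array of shape (n_samples, n_channels). Unmix with FastICA,
    zero the components judged artifactual, and remix to channel space."""
    ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
    sources = ica.fit_transform(eeg)       # estimated independent components
    sources[:, artifact_idx] = 0.0         # suppress artifact components
    return ica.inverse_transform(sources)  # cleaned signal in channel space
```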
Table 9.1 (Continued) Filtering techniques utilized in EEG artifact removal process

Author | Method | Type of artifact | Performance measure | Database
G. S. Spencer et al. [39] | RLAS | Motion artifact | RMSE | Recorded
A. Kilicarslan et al. [149] | Adaptive de-noising framework with H∞ adaptation rule | Motion artifact | PSD | Recorded
R. Ranjan et al. [150] | Modified-EMD and LoG filter | Motion artifact | ΔSNR, γ, mean absolute error in PSD of δ-band (MAEδ PSD), MI, percentage improvement in correlation [corr (%)], percentage improvement in coherence [coh (%)], power spectral distortion [PSDdis (%)], and execution time | Semi-simulated EEG data, mobile brain-body imaging (MoBI) real-time EEG dataset with BCI task, and synthetic dataset
Table 9.2 Summary of decomposition techniques utilized for EEG artifact removal process

Author | Method | Type of artifact | Performance measure | Database
Shivam Sharma and Udit Satija [148] | DFT and ACMD | Ocular | CC, SNR, AE, AE MAX, MAXN, RDN, RDP, RMSE, SNRW, CCW, percentage reduction in coefficient of correlation (η), MI | Mendeley database, MIT-BIH polysomnographic database, EEGMAT database
Md S Hossain et al. [151] | VMD, VMD-PCA, VMD-CCA | Motion artifact | SNR and percentage reduction in motion artifact | PhysioNet
R. Ranjan et al. [150] | Modified EMD and optimized LoG filter | Motion artifact | SNR, signal-to-artifact gain coefficient, PSNR, SAR, MAE, MI, improvement in correlation, improvement in coherence, power spectral destruction, and execution time | Semi-simulated EEG data contaminated with motion artifacts, mobile brain-body imaging (MoBI) real-time EEG data
Miao Shi et al. [110] | GOSSA-VMD | Ocular | SNR, RMSE, and CC | Semi-simulated EEG dataset
C. Kaur et al. [152] | VMD-DWT and VMD-WPT | — | SNR, PSNR, and MSE | Data collected from Hospital University Sains Malaysia (HUSM)
L. Chang et al. [153] | MVMD-CCA | — | Accuracy and ITR | Simulated data using Matlab
M. Saini et al. [109] | VMD in two stages, denoted VMD-I and VMD-II | Ocular | Sensitivity, NCC, SNR, MAE, MAX, NMAX, NRD, PRD, and RMSE | Mendeley database, MIT-BIH polysomnographic database, and EEGMAT database
R. Gavas et al. [112] | MVMD | Eye blink | SER, correlation, variance-based metric (V), percentage change in band power, classification accuracy | Synthetically generated EEG data, Covert Shift Dataset, CogBeacon Dataset
C. Dora and P. K. Biswal [154] | Modified VMD | ECG artifact | SAR, CF | MIT/BIH polysomnography data
Q. Li et al. [138] | CEEMDAN-ICA-WTD | Ocular | RMSE | Recorded
C. Dora and P. K. Biswal [155] | VMD-based algorithm | Ocular | PSD | Capslpdb and sleep-edf of Physionet
A. Yadav and M. S. Choudhry [137] | EEMD and SCICA | Ocular | MI, CC, and coherence | physionet.org
Yan Liu et al. [156] | NALSMEMD | Motion artifact | Percentage change in artifact | physionet.org
X. Chen et al. [133] | EEMD-CCA | Muscle artifact | SNR, RMS, RRMSE | Recorded and simulated data
A. Egambaram et al. [132] | EMD and CCA | Eye blink | Accuracy (VMI), Error (VMI), CC, RMSE, time of execution | Recorded
MD E. Alam, … | EMD | Eye blink and … | SNR | BCI2000
Table 9.3 (Continued) Methods of wavelet transform and blind source separation

Author | Method | Type of artifact | Performance measures | Source of EEG data
Matteo Dora, David Holcman [169] | WT | Different kinds of artifact | NMSE, ΔR, ΔSNR | Semi-simulated EEG
Chi-Yuan Chang et al. [170] | ICA and ASR | Muscle, eye-blink, and lateral eye-movement activities | RMS | Driving simulator EEG data
S. R. Sreeja et al. [171] | MCA and KSVD | Eye blink | RMSE, SAR, CC, MI, and MSE | Recorded
Nitesh S. Malan, Shiru Sharma [172] | DTCWT | Ocular | RRMSE | Recorded using NI LABVIEW 2015 Biosignal toolkit
Chong Yeh Sai et al. [113] | SVM and WICA | Eye blink | CC | Recorded and simulated
A. J. M. Ali Badamchizadeh et al. [117] | ICA-ANC | ECG artifact | RRMSE and frequency correlation | Recorded
Pranjali Gajbhiye et al. [173] | MTV, MWTV, and DWT | Motion artifact | Difference in SNR and η | physionet
Md. Kafiul Islam, Amir Rastegarnia [174] | SWT | Chewing, swallowing, eye blinks, subject movements, talking | SNR, ΔSNR, ΔRMSE, ΔPSD, Δcorr, ΔSNDR | BCI competition-IV Scalp EEG Database: dataset-1, dataset-2a, and dataset-2b
Soojin Lee et al. [175] | JBSS and quadrature regression and q-IVA | High-amplitude stimulation artifact | Root mean square (RMS), relative root-mean-squared error (RRMSE), correlation coefficient (CC), power deviation (Pdev) | Recorded and simulated
Mohamed F. Issa and Zoltan Juhasz [176] | WEICA | EOG artifact | ΔSNR, RMSE, MSC, percentage of artifact removal | Klados EEG dataset and recorded dataset
Abhijit Bhattacharyya et al. [177] | TQWT | Cortical stimulation (CS) | MSE and CCI | Simulated data and data collected from Nancy University Hospital (CHU Nancy), France
Vandana Roy, Shailja Shukla [178] | ICA, CCA, DWT, and SWT | Motion artifact | ΔSNR, RMSE | Online open-source interface
Young-Eun Lee et al. [179] | cICA with cIOL | Movement artifact | AUC (area under the ROC curve) and SNR | Recorded
Chi-Yuan Chang et al. [180] | ASR | Eye blink and muscle artifact | Retained power | Recorded
Dhanalekshmi P. Yedurkar, Shilpa P. Metkar [121] | DWT and MRAF | Ocular | Sensitivity, specificity, accuracy, and precision | CHB-MIT scalp EEG dataset chb01 15
K. Jindal et al. [181] | FPIC and GLCT | Eye blink, EOG, muscular, and other high-frequency artifacts | SNR, RMSE, NMAX, and NRD | Simulated and recorded
Nikesh Bajaj et al. [182] | WPD | Muscle, motion, and ocular artifacts | MI, CC, PSD | Recorded
K. P. Paradeshi and U. D. Kolekar [183] | Enhanced WICA | Ocular | ADP, RMSE, PSNR, CC | Recorded
Sayedu Khasim Noorbasha, Gnanou Florence Sudha [184] | Ov-ASSA-ANC | Ocular | RRMSE, MAE | CHB-MIT
Ian McNulty et al. [185] | WT using the SUREShrink algorithm with hard thresholding | Ocular | NMSE and SNR | Bonn database
S. T. Aung and Y. Wongsawat [186] | M-mDistEn | Motion artifact | Accuracy and p-values | PhysioNet Database
Pranjali Gajbhiye et al. [187] | WOSG filtering | Motion artifact | NMSE, ΔR, ΔSNR | Two publicly available databases
Zainab Jamil et al. [188] | ICA-DWT | Eye movement artifact | MI, sensitivity, and specificity | Recorded
S. Phadikar et al. [189] | WT with heuristically optimized threshold | Eye blink | CC, NMSE, SSIM | —
M. Shahbakhti et al. [190] | SWT | Electrical shift and linear trend artifacts (ESLT) | CC, PSNR, NRMSE | Mendeley
Sridhar Chintala et al. [191] | Mixed step-size normalized least mean fourth adaptive algorithm | Ocular | MSD | Recorded
Salim Çınar [192] | ICA-ANC | Ocular | RE, CC, SAR, SNR, sensitivity, specificity, and AUC | ERP-based Brain-Computer Interface (BCI) Records dataset
Christos Stergiadis et al. [193] | BSS | Ocular | Entropy, MIR, CC, execution time | Real recorded data and semi-simulated data
Ruisen Huang et al. [194] | DSMF | Motion artifact | SDR, NMSE | Recorded
Mary Judith A. et al. [195] | MD-SVD-ICA | — | SNR, PSNR, MSE | MIT EEG database
Velu Prabhakar Kumaravel et al. [196] | LOF and ASR | Newborn non-stereotypical artifacts | FTR, SME, F1 score | Data collected from Neonatal Neuroimaging Unit (CIMeC, University of Trento) and simulated data
Yuheng Feng et al. [197] | SSA-CCA | Muscle artifact | PSD and mean time cost | Semi-simulated data
Sagar S. Motdhare, Dr. Garima Mathur [198] | SSA-MEMD | EMG artifact | Visual inspection | Recorded
A. K. Maddirala and K. C. Veluvolu [199] | SSA-CWT and k-means clustering | Eye blink | RRMSE, SNR, RMS, CC, artifact reduction ratio, MAE, precision, and accuracy | Fatigue EEG database
J. Yedukondalu and L. D. Sharma [200] | CiSSA-DWT | EOG artifact | SAR, MAE, RRMSE, and CC | Dataset 2a from the BCI Competition IV
Table 9.4 Summary of machine learning techniques in the field of artifact removal from EEG signals

Author | Method | Type of artifact | Performance measure | Database
C. Burger et al. [201] | ICA and WNN | Ocular | PSD, RMSE, and frequency correlation | Simulated and recorded data collected from a motor imagery test
W. Suna et al. [202] | 1D-ResCNN model | EMG, ECG, EOG | SNR, RMSE | CHB-MIT Scalp EEG Database
S. K. Sahoo and S. K. Mohapatra [203] | EMCD and ODCN | Ocular | MAE, SNR | BCI competition IV database
R. Ghosh et al. [204] | kNN classifier and an LSTM network | Eye blink and muscular | CC, SAR, MAE, NMSE, SSIM | Recorded
B. Yang et al. [116] | DLN | Ocular | PSD, RMSE, and EEG classification accuracy | "Data sets 1" for BCI Competition IV
M. H. Quazi et al. [205] | FLM optimization-based learning algorithm for a neural network-enhanced adaptive filtering model | EMG, ECG, EOG | MSE, SNR | PhysioNet
X. Li et al. [206] | Discriminative model for joint OAC and feature learning | Ocular | — | Recorded
S. Behera and M. N. Mohanty [115] | WVFLN | Ocular | MSE, RMSE | Mendeley database
S. Behera and M. N. Mohanty | RVFLN model | Ocular and cardiac artifacts | MSE, NMSE, RE, GSAR, SNR, IQ, and INPS | Mendeley database
As artifacts are unwanted and need to be removed prior to information extraction, an automatic way of removing artifacts is proposed in this paper. In the literature review section, some of the automatic artifact removal techniques are discussed. Most automatic EEG artifact removal methods are based on transforming the EEG signal to another domain. However, when the signal is transformed from the time domain to another domain, it suffers from the problem of time-frequency resolution. To avoid this problem, the Fast Discrete S Transform (FDST) is proposed in this work.
S_{eeg}\left(jT, \frac{n}{NT}\right) = \sum_{k=0}^{N-1} S_{eeg}(kT)\, w\!\left(kT, \frac{n}{NT}\right) e^{-\frac{j 2\pi k n}{N}}   (9.1)

applied to the Fourier transform of the signal, and then the inverse Fourier transform is applied to obtain \tilde{S}_{eeg}. In the last step, the FST is calculated for each kernel function, as given in equations (9.2) and (9.3).

S_{eeg}\left(jT, \frac{3n}{4NT}\right) = \sum_{k=0}^{N-1} \tilde{S}_{eeg}(kT)\, w\!\left(kT - T, \frac{3n}{4NT}\right) \psi^{+}\!\left(kT, \frac{3n}{4NT}\right)   (9.2)

S_{eeg}\left(jT, \frac{3n}{4NT}\right) = \sum_{k=0}^{N-1} \tilde{S}_{eeg}(kT)\, w\!\left(kT - T, \frac{3n}{4NT}\right) \psi^{-}\!\left(kT, \frac{3n}{4NT}\right)   (9.3)

In the above equations, \psi^{+} and \psi^{-} are the kernel functions, equivalent to those used in DOST. \tilde{S}_{eeg}(kT) is obtained by taking the inverse Fourier transform of the bandpass version of the original signal. The inverse of the FDST is obtained by equation (9.4):

S_{eeg}\left(\frac{n+1}{NT}\right) = \frac{1}{N} \sum_{j=0}^{M-1} S_{eeg}\left(jT, \frac{n}{NT}\right) e^{\frac{i 2\pi j l}{M}} \cdot \frac{1}{W\!\left(\frac{1}{NT},\,\cdot\right)}   (9.4)

where S_{eeg}\left(\frac{n+1}{NT}\right) is the Fourier transform of the EEG signal; to get back the original signal, the inverse Fourier transform is applied. FDST and DOST use the DFT for their operation and can be represented as:

DOST = \bigoplus_{i=1}^{k} B_i \, DFT   (9.5)

where the direct sum of the matrices B_i forms a block diagonal matrix. Each block in the direct sum matrix is composed of a phase-correction factor and a lower-dimensional inverse Fourier transform. The DOST basis functions are compact in frequency but not in space; DOST coefficients carry a symmetry property due to the higher frequencies required, and in space DOST suffers from a ringing effect. The problems that arise in the case of DOST and FDST can be overcome by DCST. A mathematical overview of DCST is given in the following subsection.
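For orientation, the sketch below implements a generic discrete Stockwell (S-) transform via the standard frequency-domain formulation, where each frequency voice is the inverse FFT of the shifted spectrum multiplied by a Gaussian window; it illustrates the idea behind equation (9.1) but is not the FDST or DCST variant proposed in this chapter.

```python
import numpy as np

def s_transform(x):
    """Generic discrete Stockwell transform of a 1-D signal x (illustrative,
    not the chapter's FDST/DCST). Returns an array of shape (N//2 + 1, N)."""
    N = len(x)
    X = np.fft.fft(x)
    m = np.fft.fftfreq(N) * N                      # symmetric frequency index
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0, :] = np.mean(x)                           # zero-frequency voice
    for n in range(1, N // 2 + 1):
        window = np.exp(-2.0 * np.pi**2 * m**2 / n**2)   # Gaussian voice window
        S[n, :] = np.fft.ifft(np.roll(X, -n) * window)   # shift spectrum, window, invert
    return S
```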
The proposed algorithm is tested by considering a clean EEG signal and two types of contaminated EEG. The clean EEG is collected from the Mendeley database [209], as shown in Figure 9.1. The length of the clean EEG signal is 6001 samples, and its amplitude is within the range of −30 µV to +30 µV. The clean EEG signal is considered here as a reference to validate its normal amplitude. Due to artifacts, the EEG signal shape is drastically changed; the artifacts are removed by the proposed algorithm.
The FDST of the clean EEG and the contaminated EEG are shown in Figure 9.4 and Figure 9.5, respectively. From Figures 9.4 and 9.5 it is observed that, at the time of occurrence of an artifact, the phase change is more prominent; larger phase changes are indicated by the red and yellow colors.
Figure 9.6 shows the clean EEG signal, the contaminated signal, and the signal after removal of the artifact. It is observed that the recovered EEG signal amplitude is lower than that of the contaminated EEG signal. Figure 9.7 shows the artifactual EEG signal collected from the PhysioNet database and the recovered artifact-free EEG signal.
Figure 9.6 The clean EEG signal, contaminated EEG and the signal after removal of artifact.
9.6 CONCLUSION
In this paper, we have covered some of the most common artifacts in EEG
data as well as some of the methods currently in use to eliminate them. To
this day, there is no universal method that can transform all contaminated
EEG data into usable EEG data. To achieve desirable outcomes, it is
common practice to employ multiple methods in sequence. It is common
practice to use ICA-based algorithms as the next step after filtering in
order to obtain clean EEG data. After that, various artifact removal
strategies can be implemented, each tailored to the specific artifacts
present in the dataset. WT and regression can be used to clean up EEG data contaminated by electrocardiogram (ECG) and electrooculogram (EOG) artifacts, respectively. EOG and ECG artifact
removal using machine learning techniques has shown remarkable
improvement over the past three years. Muscle and body movement, as
well as other extrinsic artifacts, should be minimized or eliminated
whenever possible during the EEG recording process. Some artifacts, such
as EOG and ECG artifacts, are unavoidable but can be eliminated with
these techniques.
REFERENCES
[1] A. Subasi, “EEG signal classification using wavelet feature extraction and
a mixture of expert model,” Expert Systems with Applications, vol. 32,
no. 4, pp. 1084–1093, 2007.
[2] T. Zhang, W. Chen, and M. Li, “Generalized Stockwell transform and SVD-
based epileptic seizure detection in EEG using random forest,” Biocybernetics
and Biomedical Engineering, vol. 38, no. 3, pp. 519–534, 2018.
[3] W. Mumtaz, S. S. A. Ali, M. A. M. Yasin, and A. S. Malik, “A machine
learning framework involving EEG-based functional connectivity to diagnose
major depressive disorder (MDD),” Medical & Biological Engineering &
Computing, vol. 56, pp. 233–246, 2018.
[4] U. R. Acharya, S. L. Oh, Y. Hagiwara, J. H. Tan, H. Adeli, and D. P. Subha,
“Automated EEG-based screening of depression using deep convolutional
neural network,” Computer Methods and Programs in Biomedicine,
vol. 161, pp. 103–113, 2018.
[5] A. Anuragi and D. S. Sisodia, “Alcohol use disorder detection using EEG
Signal features and flexible analytical wavelet transform,” Biomedical Signal
Processing and Control, vol. 52, pp. 384–393, 2019.
[6] W. Mumtaz, N. Kamel, S. S. A. Ali, and A. S. Malik, “An EEG-based
functional connectivity measure for automatic detection of alcohol use dis-
order,” Artificial Intelligence in Medicine, vol. 84, pp. 79–89, 2018.
[7] R. Yuvaraj, U. Rajendra Acharya, and Y. Hagiwara, “A novel Parkinson’s
Disease Diagnosis Index using higher-order spectra features in EEG sig-
nals,” Neural Computing and Applications, vol. 30, pp. 1225–1235,
2018.
[83] Y. Li, Z. Ma, W. Lu, and Y. Li, “Automatic removal of the eye blink artifact
from EEG using an ICA-based template matching approach,” Physiological
Measurement, vol. 27, no. 4, p. 425, 2006.
[84] N. Mammone and F. C. Morabito, “Enhanced automatic artifact detection
based on independent component analysis and Renyi’s entropy,” Neural
Networks, vol. 21, no. 7, pp. 1029–1040, 2008.
[85] S. P. Fitzgibbon, D. M. Powers, K. J. Pope, and C. R. Clark, “Removal of
EEG noise and artifact using blind source separation,” Journal of Clinical
Neurophysiology, vol. 24, no. 3, pp. 232–243, 2007.
[86] C. J. James and O. J. Gibson, “Temporally constrained ICA: an application
to artifact rejection in electromagnetic brain signal analysis,” IEEE
Transactions on Biomedical Engineering, vol. 50, no. 9, pp. 1108–1116,
2003.
[87] A. Delorme, J. Palmer, J. Onton, R. Oostenveld, and S. Makeig, “Indepen-
dent EEG sources are dipolar,” PloS One, vol. 7, no. 2, p. e30135,
2012.
[88] K. Ting, P. Fung, C. Chang, and F. Chan, “Automatic correction of artifact
from single-trial event-related potentials by blind source separation using
second order statistics only,” Medical Engineering & Physics, vol. 28,
no. 8, pp. 780–794, 2006.
[89] W. De Clercq, A. Vergult, B. Vanrumste, W. Van Paesschen, and S. Van
Huffel, “Canonical correlation analysis applied to remove muscle artifacts
from the electroencephalogram,” IEEE Transactions on Biomedical
Engineering, vol. 53, no. 12, pp. 2583–2587, 2006.
[90] M. De Vos et al., “Removal of muscle artifacts from EEG recordings of
spoken language production,” Neuroinformatics, vol. 8, pp. 135–150, 2010.
[91] X. Yong, R. K. Ward, and G. E. Birch, “Artifact removal in EEG
using morphological component analysis,” in 2009 IEEE International
Conference on Acoustics, Speech and Signal Processing, 2009: IEEE,
pp. 345–348.
[92] P. Berg and M. Scherg, “Dipole modelling of eye activity and its application
to the removal of eye artefacts from the EEG and MEG,” Clinical Physics
and Physiological Measurement, vol. 12, no. A, p. 49, 1991.
[93] S. Casarotto, A. M. Bianchi, S. Cerutti, and G. A. Chiarenza, “Principal
component analysis for reduction of ocular artefacts in event-related po-
tentials of normal and dyslexic children,” Clinical Neurophysiology,
vol. 115, no. 3, pp. 609–619, 2004.
[94] P. S. Kumar, R. Arumuganathan, K. Sivakumar, and C. Vimal, “Removal
of ocular artifacts in the EEG through wavelet transform without using
an EOG reference channel,” Int. J. Open Problems Compt. Math, vol. 1,
no. 3, pp. 188–200, 2008.
[95] D. Safieddine et al., “Removal of muscle artifact from EEG data: comparison
between stochastic (ICA and CCA) and deterministic (EMD and wavelet-
based) approaches,” EURASIP Journal on Advances in Signal Processing,
vol. 2012, no. 1, pp. 1–15, 2012.
[96] V. Krishnaveni, S. Jayaraman, L. Anitha, and K. Ramadoss, “Removal of
ocular artifacts from EEG using adaptive thresholding of wavelet coeffi-
cients,” Journal of Neural Engineering, vol. 3, no. 4, p. 338, 2006.
EEG artifact detection and removal techniques 179
[180] C.-Y. Chang, S.-H. Hsu, L. Pion-Tonachini, and T.-P. Jung, “Evaluation
of artifact subspace reconstruction for automatic artifact components
removal in multi-channel EEG recordings,” IEEE Transactions on
Biomedical Engineering, vol. 67, no. 4, pp. 1114–1121, 2019.
[181] K. Jindal, R. Upadhyay, and H. S. Singh, “Application of hybrid GLCT-
PICA de-noising method in automated EEG artifact removal,” Biomedical
Signal Processing and Control, vol. 60, p. 101977, 2020.
[182] N. Bajaj, J. R. Carrión, F. Bellotti, R. Berta, and A. De Gloria,
“Automatic and tunable algorithm for EEG artifact removal using wavelet
decomposition with applications in predictive modeling during auditory
tasks,” Biomedical Signal Processing and Control, vol. 55, p. 101624,
2020.
[183] K. Paradeshi and U. Kolekar, “Ocular artifact suppression in multichannel
EEG using dynamic segmentation and enhanced wICA,” IETE Journal
of Research, vol. 68, no. 4, pp. 2683–2696, 2022.
[184] S. K. Noorbasha and G. F. Sudha, “Removal of EOG artifacts from single
channel EEG–an efficient model combining overlap segmented ASSA
and ANC,” Biomedical Signal Processing and Control, vol. 60, p. 101987,
2020.
[185] I. McNulty et al., “Analysis of Artifacts Removal Techniques in EEG
Signals for Energy-Constrained Devices,” in 2021 IEEE International
Midwest Symposium on Circuits and Systems (MWSCAS), 2021: IEEE,
pp. 515–519.
[186] S. T. Aung and Y. Wongsawat, “Analysis of EEG signals contaminated
with motion artifacts using multiscale modified-distribution entropy,” IEEE
Access, vol. 9, pp. 33911–33921, 2021.
[187] P. Gajbhiye, N. Mingchinda, W. Chen, S. C. Mukhopadhyay,
T. Wilaiprasitporn, and R. K. Tripathy, “Wavelet domain optimized
Savitzky–Golay filter for the removal of motion artifacts from EEG re-
cordings,” IEEE Transactions on Instrumentation and Measurement,
vol. 70, pp. 1–11, 2020.
[188] Z. Jamil, A. Jamil, and M. Majid, “Artifact removal from EEG signals
recorded in non-restricted environment,” Biocybernetics and Biomedical
Engineering, vol. 41, no. 2, pp. 503–515, 2021.
[189] S. Phadikar, N. Sinha, and R. Ghosh, “Automatic eyeblink artifact removal
from EEG signal using wavelet transform with heuristically optimized
threshold,” IEEE Journal of Biomedical and Health Informatics, vol. 25,
no. 2, pp. 475–484, 2020.
[190] M. Shahbakhti et al., “SWT-kurtosis based algorithm for elimination of
electrical shift and linear trend from EEG signals,” Biomedical Signal
Processing and Control, vol. 65, p. 102373, 2021.
[191] S. Chintala, J. Thangaraj, and D. R. Edla, “Mixed step size normalized
least mean fourth adaptive algorithm for artifact elimination from raw
EEG signals,” Biomedical Signal Processing and Control, vol. 65,
p. 102392, 2021.
[192] S. Çınar, “Design of an automatic hybrid system for removal of eye-blink
artifacts from EEG recordings,” Biomedical Signal Processing and Control,
vol. 67, p. 102543, 2021.
[208] W. Yao, Q. Tang, Z. Teng, Y. Gao, and H. Wen, “Fast S-transform for time-
varying voltage flicker analysis,” IEEE Transactions on Instrumentation and
Measurement, vol. 63, no. 1, pp. 72–79, 2013.
[209] M. A. Klados and P. D. Bamidis, “A semi-simulated EEG/EOG dataset
for the comparison of EOG artifact rejection techniques,” Data in brief,
vol. 8, pp. 1004–1006, 2016.
[210] I. Silva and G. B. Moody, “An open-source toolbox for analysing and
processing physionet databases in matlab and octave,” Journal of Open
Research Software, vol. 2, no. 1, 2014.
Chapter 10
10.1 INTRODUCTION
In this study, we present a thorough overview of the field of neuromorphic computing, including topics such as objectives, neuron and synapse models, procedures and training, deployments, advances in hardware, and supplementary materials and systems [1]. By providing a comprehensive and historical overview of the topic, we hope to stimulate future research and serve as a jumping-off point for interested newcomers [2–4].
For decades, the goal of computer scientists has been to construct a
system that can perceive the world at a rate greater than that of a person,
and the von Neumann architecture has emerged as the undisputed gold
standard for this kind of system. While parallels to the human brain are
inescapable, the vastly different organizational structure, power require-
ments, and processing capabilities of the two systems underline the limi-
tations of both architectures [5]. This raises the obvious question of whether artificial neural networks (ANNs) can be designed to perform as well as the human brain.
Paralleling von Neumann systems, neuromorphic computing has devel-
oped in recent years. In 1990, the term “neuromorphic computing” was
coined by Mead; “neuromorphic” refers to a type of very large scale inte-
gration (VLSI) that uses analog components to simulate organic neural
networks [6]. These days, the phrase also refers to implementations that
make use of or are inspired by neural networks that aren’t necessarily based
on biology.
These neuromorphic designs are unique in that they are highly connected and parallel, require little energy, and co-locate memory and processing. The
impending end of Moore’s law, the breakdown of Dennard scaling with its rising power requirements, and the poor connectivity between processor and memory, known as the von Neumann bottleneck, have all drawn more attention to neuromorphic designs, which are interesting in their own right.
to conventional von Neumann designs, neuromorphic computers have the
ability to do complicated calculations more quickly, with less power con-
sumption, and in a smaller physical footprint [7]. Taking advantage of
neuromorphic systems in hardware development is strongly recommended
due to these features.
Furthermore, machine learning is a driving force behind the growing
popularity of neuromorphic computing. This method has the potential to
significantly enhance learning efficiency on specific tasks [8–10]. The focus
shifts from the architecture benefits of research in artificial intelligence to
its prospective operational benefits, with the hope that programs capable
of online, real-time learning like that of biological brains can be developed,
shown in Figure 10.1. Modern machine learning technique implementa-
tions may find neuromorphic structures to be the most perfect platform.
Researchers from many disciplines, including materials science, neu-
roscience, electrical engineering, computer engineering, and computer
science, are all represented in the neuromorphic computing community.
Neuromorphic technologies make use of substances with attributes comparable to those of biological neural systems; therefore, materials scientists explore, produce, and characterize new materials for application in these devices [11]. Researchers in the field of neuroscience employ neuro-
morphic systems to imitate and analyze biological neural systems, and
they disseminate information about new findings from their investigations
that may have computational applications [12–14]. Device-level analog,
digital, mixed analog/digital, and non-traditional circuits are all tools of
the trade for electrical and computer engineers as they develop and
early system developers stressed that it was possible to achieve much faster
neural network computation with custom chips than was possible with
traditional von Neumann architectures, in part by extracting their natural
computation, as described above, but also by adding custom hardware to
finalize neural-style mathematical calculations [21–25]. This early emphasis
on speed suggested that neuromorphic devices could be used as boosters for
machine learning or neural network-style activities in the future.
One of the primary drivers of early neuromorphic systems was the need
for real-time performance. In applications like real-time control, real-
time digital image reconstruction, and autonomous robot control, the
devices’ natural computation and computational speed allowed neural
network simulations to be completed more quickly than in implementa-
tions on von Neumann architectures [26–28]. The demand for quicker
computing in these instances was driven more by the performance
requirements of the underlying applications than by research into the
topologies of neural networks. This is why we have separated it as a
driving force behind the evolution of machine learning from the pursuit
of speed and parallelism.
Because they avoid single points of failure, both through their parallelized representation and through the adaptation or self-healing capabilities observed in software ANN implementations, neural networks have come to be seen by developers as a natural template for hardware design [29]. In the past and now, these traits have mattered for new hardware implementations, as imperfections in fabricated and deployed devices are possible due to component and process variance.
The possibility of exceptionally low-power operation is the most common reason given in current research and discussions on neuromorphic devices in the cited papers [30]. The human brain, our primary source of inspiration, uses only approximately 20 watts of power yet is capable of incredibly complicated computations and tasks. From its inception, developing neuromorphic devices with comparably low power consumption has been a driving force for neuromorphic computing, and it has recently emerged as a key motive [31–33].
During this century of neuromorphic research, the development of por-
table devices with the computational power of neural networks but with
a minimal resource requirement (in terms of device size) has emerged as a
driving force [34]. As the prevalence of integrated systems and micro-
processors has increased, so has the need for designs that consume little
space, have different contexts, and use minimal energy.
Recently, reduced prevalence has been the driving force for the creation
of neuromorphic devices. As can be seen in Figure 10.2, this is the primary
driving force for neuromorphic computing. Major drivers for the advance-
ment of neuromorphic systems continue to be their inherent parallelism,
real-time performance, speed in both operation and training, and tiny
device footprint [35]. During this time, a few other motivations gained
The equation for the output of neuron x for the given model is:

M_x = f\!\left( \sum_{i=0}^{Y} w_{i,j}\, n_i \right)
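A minimal sketch of this weighted-sum neuron model, with tanh standing in for the unspecified activation function f:

```python
import numpy as np

def neuron_output(weights, inputs, f=np.tanh):
    """Weighted-sum neuron: M_x = f(sum_i w_i * n_i), with f an activation."""
    return f(np.dot(weights, inputs))

# Example: three inputs feeding one neuron with illustrative weights.
print(neuron_output(np.array([0.5, -0.2, 0.8]), np.array([1.0, 0.0, 1.0])))
```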
Hardware implementations of neuron models have also been made for more
classic forms of ANNs. There are various neuron models, such as those found
in binary neural networks, fuzzy neural networks, and Hopfield neural net-
works. Generally speaking, many distinct neuron models have been realized
in hardware, and one of the choices a user might make is a compromise
between complexity and biological inspiration. There is a qualitative com-
parison between various neuron models with respect to these two criteria
in Figure 10.4.
for the creation and usage of novel materials for neuromorphic, the focus is
often on simplifying the synapse implementation. Therefore, unless an
attempt is made to describe biological activity explicitly, synapse models
tend to be quite straightforward. The strength or weight value of a neuron
can alter over time thanks to a plasticity mechanism, which is often
included in more complicated synapse models. In biological brains, mechanisms of plasticity have long been linked to learning.
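As an illustration of such a plasticity mechanism, here is a hedged sketch of a pair-based spike-timing-dependent plasticity (STDP) weight update, a common rule in the neuromorphic literature; all constants are illustrative, not values from any cited system.

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes the
    postsynaptic one, depress otherwise; weight is clipped to [w_min, w_max]."""
    dt = t_post - t_pre
    if dt > 0:                       # pre before post -> strengthen
        w += a_plus * math.exp(-dt / tau)
    elif dt < 0:                     # post before pre -> weaken
        w -= a_minus * math.exp(dt / tau)
    return min(max(w, w_min), w_max)
```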
Recurrent nets (often via back-propagation through time), spiking nets
(where feed-forward nets are often converted to spiking systems), and
convolutional nets can all be trained with back-propagation and its many
variants. Since numerous highly optimized software implementations are
available, running back-propagation off-line on a conventional host
system is the simplest option. These methods mostly employ elementary
back-propagation, which has been thoroughly explored elsewhere in the
neural network literature.
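As a minimal sketch of this off-line approach, the following trains a tiny feed-forward network on XOR with elementary back-propagation on a host machine; the network shape, data, and hyperparameters are illustrative assumptions, and in a neuromorphic workflow the trained weights would then be mapped onto the target hardware.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two-layer network: 2 inputs -> 4 hidden sigmoid units -> 1 output
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: elementary back-propagation of squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # typically converges to ~[0, 1, 1, 0]
```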
10.4 CONCLUSION
Each of these models has its own advantages and disadvantages, making
it unlikely that they will ever be merged into a single, unified theory. It’s safe
to assume that anything from simple feed-forward neural networks to
complex simulations of biological brain networks will continue to coexist in
the neuromorphic computing ecosystem.
The many learning and training algorithms developed for and used by
neuromorphic systems were discussed. Moving forward, we need to focus
on developing dedicated training and learning algorithms for neuromorphic
systems, as opposed to just adopting those built for other architectures.
Our research suggests that this subfield of neuromorphic computing holds
some of the greatest promise for future advancements. We talked about
the big picture of neuromorphic system hardware and the cutting-edge
device-level components and materials that are powering their develop-
ment. As time goes on, there is also plenty of room for improvement here.
We briefly covered some of the ancillary systems for neuromorphic com-
puters, such as supporting software, of which there is relatively little
and more of which would substantially assist the community. Finally, we
went over some of the uses of neuromorphic computing systems.
With this work, we aimed to provide a comprehensive overview of
neuromorphic computing research across multiple domains, and we have
cited representative work from each of them. We believe the results of
this study will motivate others to create similarly novel approaches, to
fill in the remaining gaps with their own research, and to consider how
neuromorphic computers might serve their own purposes.
REFERENCES
11.1 INTRODUCTION
Reactions to how our coworkers characterize us, our communities, and our
loved ones play a significant role in our writing [14].
We now provide context for our study by outlining our knowledge of
ADHD, describing our previous work in the context of HCI and neurodi-
versity, and introducing our theoretical grounding in Critical Disability
Studies and, more specifically, Crip Technoscience. Figure 11.2 shows the
various research fields related to HCI.
After that, we explain our methodology in further detail. Our analysis
and results show how existing research frames ADHD as a problem space
for technology design, owing to solutionist and paternalistic perspectives
on the intended audience [15]. We draw conclusions for the technological
research community, propose hypotheses about possible solutions, con-
sider the consequences of engaging with these works on a deeply personal
level, and sketch the contours of possible future developments based on
these results [16–18].
11.2.1 ADHD
Clinically, hyperactivity, impulsivity, and inattention are the hallmarks of
ADHD. Diagnostic criteria frequently differentiate between predominant
inattentiveness [20–22], hyperactive-impulsiveness, and a mixed profile
[ibid]. For a long time, ADHD, or hyperkinetic behavior syndrome, was
thought of as a condition that only affected children [23].
Because of its early definition as a childhood condition, adults who
sought a diagnosis were overlooked, and its failure to account for the
“inattentive” (daydreamer) type led to a long-held misconception that
ADHD mostly affects boys [24].
Research suggests that the prevalence of ADHD in the general population
diminishes progressively across age groups, lending credence to the idea
that one might “grow out of” the disorder [25].
This misconception persists despite growing evidence that the under-
lying neurological distinctions that define ADHD persist over the course
of a person’s lifetime [26–29]. ADHD may be less noticeable in adults
because their environments differ more (work, family life, etc., whereas
in most countries all children go to school), and because they have
learned over time to adapt to their surroundings using a variety of
contextual masking tactics.
Misdiagnosis as depression or oppositional defiant disorder can occur
because the presentation of traits is interpreted against conventional
norms of gender and race.
In recent years, those who have been diagnosed with ADHD have con-
tributed to our growing body of knowledge about what it’s like to live
with and manage ADHD in the real world.
Research that takes both a critical and an appreciative stance, as well as
the participation of people with ADHD in studies, adds to the credibility
of these narratives [30–33]. In this context, advocacy focuses mostly on
dispelling harmful myths and rebutting internalized societal stigma.
Figure 11.3 Graphical representation of neurodiversity between low cognitive and high
cognitive ability.
Each of the four authors read the core corpus with a focus on one of four
lenses: participants (i.e., who was included or addressed in the research
and how potential participants took part), disability (i.e., how the
authors conceptualized and explained ADHD), researchers (i.e., the larger
research framing and disciplinary origin of the work), or technology
(i.e., the design and development processes, including the artifacts and
their purpose).
11.6 CONCLUSION
We identified barriers that prevent people with ADHD from participating
in the design of assistive technologies.
We also found that until recently, researchers’ perspectives remained
static, with the exception of a shift toward a concentration on diagnostic
and interventionist approaches in technology development.
We then discussed the consequences of the current research’s reliance on
a deficit model of ADHD traits, and we provided a number of suggestions
for developments in this area based on our speculative proposals. In this
way, we advocate for researchers to abandon their adherence to neuro-
typical standards and to approach their work with us and other margin-
alized communities less from a paternalistic stance and more from one of
solidarity and community orientation.
There are constraints on this work, as there are on any. For one, all of
us writing this are white, we live and work in the Global North, and, as
we have already mentioned, we have the advantage of access to a
diagnosis, which in turn governs our ability to receive social
accommodations.
As a matter of fact, we owe our academic success to the fortunate
circumstances of our own educational backgrounds. We can thus only
speak for ourselves in our mutual assessment and make no claims to
represent all people with ADHD. However, we provide a close reading
of existing works and outline the future potential of such appreciative
approaches, in light of the growing number of researchers who openly
disclose their ADHD, including within HCI, and the emerging research
into self-determined technology options in this context.
We can’t exist without technology, but if it’s only used to sort individuals
into boxes and control their actions, we have a responsibility as technolo-
gists to find better uses for it.
REFERENCES
[1] Ashwag Al-Shathri, Areej Al-Wabil, and Yousef Al-Ohali. 2013. Eye-
Controlled Games for Behavioral Therapy of Attention Deficit Disorders.
In HCI International 2013 – Posters’ Extended Abstracts, Constantine
Stephanidis (Ed.). Springer Berlin Heidelberg, Berlin, Heidelberg, 574–578.
[2] Shahab U. Ansari. 2010. Validation of FS+LDDMM by Automatic
Segmentation of Caudate Nucleus in Brain MRI. In Proceedings of the
8th International Conference on Frontiers of Information Technology
(Islamabad, Pakistan) (FIT ’10). Association for Computing Machinery,
New York, NY, USA, Article 10, 6 pages. 10.1145/1943628.1943638
[3] J. Anuradha, Tisha, Varun Ramachandran, K. V. Arulalan, and B. K.
Tripathy. 2010. Diagnosis of ADHD Using SVM Algorithm. In Proceedings
of the Third Annual ACM Bangalore Conference (Bangalore, India)
(COMPUTE ’10). Association for Computing Machinery, New York, NY,
USA, Article 29, 4 pages. 10.1145/1754288.1754317.
[31] LouAnne Boyd, Kendra Day, Ben Wasserman, Kaitlyn Abdo, Gillian Hayes,
and Erik Linstead. 2019. Paper Prototyping Comfortable VR Play for
Diverse Sensory Needs. In Extended Abstracts of the 2019 CHI Conference
on Human Factors in Computing Systems (CHI EA ’19). Association for
Computing Machinery, New York, NY, USA, 1–6. 10.1145/3290607.3313080
[32] Emeline Brulé and Katta Spiel. 2019. Negotiating Gender and Disability
Identities in Participatory Design. In Proceedings of the 9th International
Conference on Communities & Technologies - Transforming Communities
(Vienna, Austria) (C&T ’19). Association for Computing Machinery,
New York, NY, USA, 218–227. 10.1145/3328320.3328369
[33] Fiona Campbell. 2009. Contours of ableism: The production of disability
and abledness. Springer.
[34] A. Tyagi, A. Sharma, and M. Bhardwaj. 2022. Future of Bioinformatics in
India: A Survey. International Journal of Health Sciences 6, S2 (2022),
13767–13778. 10.53730/ijhs.v6nS2.8624
[35] S. Lakkadi, A. Mishra, and M. Bhardwaj. 2015. Security in Ad Hoc
Networks. American Journal of Networks and Communications 4, 3-1
(2015), 27–34.
[36] Ishita Jain and Manish Bhardwaj. 2022. A Survey Analysis of COVID-19
Pandemic Using Machine Learning. In Proceedings of the Advancement in
Electronics & Communication Engineering 2022. Available at SSRN:
https://ssrn.com/abstract=4159523 or http://dx.doi.org/10.2139/ssrn.4159523
[37] Will H Canu, Matthew L Newman, Tara L Morrow, and Daniel L. W Pope.
2008. Social Appraisal of Adult ADHD: Stigma and Influences of the
Beholder’s Big Five Personality Traits. Journal of Attention Disorders 11,
6 (2008), 700–710.
[38] Dario Cazzato, Silvia M Castro, Osvaldo Agamennoni, Gerardo Fernández,
and Holger Voos. 2019. A Non-Invasive Tool for Attention-Deficit Disorder
Analysis Based on Gaze Tracks. In Proceedings of the 2nd International
Conference on Applications of Intelligent Systems (Las Palmas de Gran
Canaria, Spain) (APPIS ’19). Association for Computing Machinery,
New York, NY, USA, Article 5, 6 pages. 10.1145/3309772.3309777
[39] Kuo-Chung Chu, Hsin-Jou Huang, and Yu-Shu Huang. 2016. Machine
learning approach for distinction of ADHD and OSA. In 2016 IEEE/ACM
international conference on advances in social networks analysis and mining
(ASONAM). IEEE, 1044–1049.
[40] Franceli L Cibrian, Kimberley D Lakes, Sabrina Schuck, Arya Tavakoulnia,
Kayla Guzman, and Gillian Hayes. 2019. Balancing caregivers and
children interaction to support the development of self-regulation skills
using a smartwatch application. In Adjunct Proceedings of the 2019 ACM
International Joint Conference on Pervasive and Ubiquitous Computing
and Proceedings of the 2019 ACM International Symposium on Wearable
Computers. 459–460.
Index
Accessibility, 208, 210, 213
Acoustic neuroma, 8
Adaptive filtering, 148, 149, 150, 154, 156, 157, 166, 175, 176
ADHD, 8, 75, 86, 87, 88, 89, 96, 100
Aggregation, 21, 22, 25, 28, 34, 35
Algorithm, 25, 29, 30, 33, 41, 43, 47, 50, 51, 62, 73, 115, 123, 124, 128, 131, 135, 136, 137, 138, 140, 141, 142, 143, 162, 166, 167, 169, 171, 172, 173, 174, 175, 178, 182, 184, 185, 186, 187, 188, 194, 198, 200, 201, 203, 204, 219
Alternatives, 21, 22, 27, 28, 33, 162
Anatomical, 42
ANN, 8, 50, 51, 52, 53, 119, 120, 123, 124, 125, 147, 151, 152, 154, 157, 171, 209, 211, 215, 219
Artifacts, 163, 165, 170, 173, 191, 193
Artificial intelligence, 1, 2, 9, 11, 13, 14, 37, 62, 77, 78, 81, 104, 207, 208, 214
Astrocytoma, 8
Attributes, 22, 25, 27, 33, 207
Autism spectrum disorder, 57, 211
Autistic, 60, 63, 83, 84
Autoradiography, 26
Backpropagation method, 41
Beta waves, 87
Biased competition, 55
Blind source separation (BSS), 147, 150, 179, 180, 182, 193, 196, 200, 202
Brain tumor, 7, 19, 22, 25, 27, 29, 30, 33, 56
Brain waves, 85, 86, 87, 89, 92, 97, 98, 99, 102, 104, 109
Canonical neural computation, 54
Carcinoma, 26
Cell membranes, 37
Classical feedforward approach, 43
Classical set theory, 19
Clinical examination, 20
CLM, 53, 54, 56
CNN, 50, 51, 53, 54, 56, 104, 105, 106, 118
Code efficiency, 54
Cognition, 9, 10, 37, 57, 60, 75, 86, 87, 89, 96, 97, 101, 111
Computational tool, 20
Correlation learning mechanism, 53
Craniopharyngioma, 26
CT brain images, 53
Decision, 1, 7, 10, 19, 20, 21, 22, 23, 25, 28, 29, 33, 34, 35, 36, 48, 67, 68, 73, 98, 107, 108, 111, 131, 134
Decision making, 1, 10, 20, 21, 22, 23, 25, 33, 34, 35, 107, 108, 111, 131, 134
Decision support tools, 19
Decision theory, 19
Decision tree, 21, 48
Deep learning, 37, 49, 50, 64, 104, 105, 106
Deterministic, 19, 147, 159, 196
Diagnosis, 7, 8, 19, 20, 21, 22, 23, 25, 29, 33, 35, 91, 93, 94, 95, 102, 128, 145, 146, 147, 149, 151
Diagnostic process, 20
DIFWA, 25, 27, 28, 33
…, 111, 114, 116, 118, 120, 122, 124, 126, 128, 130, 132, 134, 136, 138, 140, 142, 144, 146, 148, 150, 152, 154, 156, 158, 160, 162, 164, 166, 168, 170, 172, 174, 186, 188, 190, 191, 192, 193, 194, 196, 197, 198, 200, 202, 204, 208, 210, 212, 214, 216, 218
Neuroscientists, 9, 11, 37, 107, 108, 207, 210
Nodes, 37, 51, 67, 123, 152
Non-deterministic, 19
Normalization, 54, 55, 56, 57
Oligodendroglioma, 26
Ordered weighted averaging, 22
Ordered weighted geometric, 22
Ordered weighted harmonic mean, 22
OWA, 22, 33, 34, 35
OWG, 22
OWHM, 22
Parameters, 39, 42, 119, 123, 141, 146, 151, 152, 154, 158, 159, 168, 171, 174, 185, 213
Pathological, 20
Pattern recognition, 19, 41, 77, 128
Perception, 1, 10, 37, 62, 86, 106, 107, 108
Phenomenological approach, 78
Phenomenology, 40, 78
Pineoblastoma, 26
Primary brain tumors, 26
Prioritized, 22, 25, 35
Prioritized weighted average (PWA), 22
Radiological, 20
Real-life, 25
Real-life problems, 1, 9, 19
Recurrent neural networks (RNN), 50, 61
Reinforcement learning, 1, 10, 49
RNNs, 61, 67, 68, 69, 70, 72, 73
Sensitivity, 54, 55, 124, 125, 126, 146, 154, 170
Spiking neuron, 37, 212, 213, 214
Statistics, 19, 37, 51, 159, 169, 195, 196
Symptom, 20, 145
Synthetic intelligence, 81
Techniques, 1, 8, 10, 11, 20, 21, 25, 39, 59, 61, 63, 70, 72, 77, 85, 90, 91, 97, 98, 103, 104, 105, 106, 107, 108, 115, 119, 120, 123, 128, 131, 132, 150, 151, 161, 162, 163, 165, 166, 167, 168, 169, 170, 171, 173, 174, 175, 176, 177, 178, 185, 187, 189, 190, 191, 193, 194, 195, 197, 199, 201, 202, 203
Traditional approaches, 19
Treatment, 20, 21, 22, 23, 25, 40, 85, 93, 94, 97, 106, 114, 120, 127, 146
Weighted harmonic mean (WHM), 22