
Enhanced Image Classification through Customized Convolutional Spiking Neural Network

Ashok Kumar Saini, Department of Electrical Engineering, Malaviya National Institute of Technology, Jaipur, India
Rajesh Kumar, Department of Electrical Engineering, Malaviya National Institute of Technology, Jaipur, India
Naveen Gehlot, Department of Electrical Engineering, Malaviya National Institute of Technology, Jaipur, India
Seema Verma, Department of Electrical and Electronics Engineering Education, NITTTR Bhopal, India

2024 Parul International Conference on Engineering and Technology (PICET) | DOI: 10.1109/PICET60765.2024.10716183

Abstract—Spiking Neural Networks (SNNs) are deemed to provide biological realism. They also offer more computational power than Artificial Neural Networks (ANNs) due to their utilization of spikes for information transmission and encoding. However, their shallow structures impose structural limitations, restricting the feature extraction capabilities of conventional SNNs. This study aims to improve the feature extraction capability of SNNs by leveraging the proficient feature extraction skills of Convolutional Neural Networks (CNNs). Our proposed model, the Customized Convolutional Spiking Neural Network (CCSNN), combines a CNN for feature learning with an SNN for cognitive skills. On the Digit-MNIST, Fashion-MNIST, and Letter-MNIST datasets, CCSNN surpasses previous models using fewer neurons and less training data, enhancing the biological realism of image classification models. In this study, CCSNN achieved accuracies of 99.10%, 91.80%, and 99.30% on the Digit-MNIST, Fashion-MNIST, and Letter-MNIST datasets, respectively, compared to the conventional SNN.

Index Terms—Spiking Neural Network (SNN), MNIST, Convolutional Neural Network (CNN), Classification

I. INTRODUCTION

Neurons serve as the fundamental units within the sophisticated neurological network. The human brain uses neurons for information processing, contributing to our understanding of its structure and functioning [1]. Building upon this understanding of the brain's structure, scientists and researchers replicate this methodology, commonly known as Artificial Neural Networks (ANNs), extensively utilized in the field of artificial intelligence [2]. Despite the notable successes of ANNs in various tasks, their fundamental principles still deviate significantly from how the brain biologically processes information, as noted in the review by Taherkhani et al. [3]. The biological rationality of ANNs is compromised because they can only process the spatial-dimension information of the data, and their internal data processing is based on analog information [4].

The recognition of spatial-dimensional information processing mechanisms has given rise to the development of Spiking Neural Networks (SNNs), which aspire to replicate biological behaviors. In the early 1950s, Hodgkin and Huxley [5] initiated investigations into the electrochemical properties of neurons and presented a mathematical description of spike firing patterns. By the late 1980s, the identification of synchronized oscillations in the cat's visual cortex had garnered significant attention within the neuroscience community, an observation that underpins SNNs [6]. From a researcher's perspective, the manner in which SNNs process sensory data closely resembles the processing in the human brain, marking the evolution of a new generation of neural networks.

SNNs represent and process data using discrete spike trains, with more physiologically understandable spiking neurons as their fundamental unit [7]. The spike train incorporates different types of informational features, including time, place, frequency, and phase [8]. In SNNs, a neuron becomes active solely upon receiving an input spike. Consequently, neurons that remain inactive, devoid of any input spikes, can be transitioned into a low-power mode to conserve energy. In addition, SNNs are valuable for handling spatio-temporal patterns due to their temporal dynamics [9].

Recent research has predominantly emphasized regression and classification accuracy, potentially diminishing the attention given to factors such as energy consumption and computing costs during both training and deployment. Each year witnesses significant advancements characterized by improved recall accuracy on industry-standard benchmarks. However, these improvements often lead to increased computational demands, heightened energy consumption, and greater data storage requirements, as noted by Han et al. [10] and Strubell et al. [11]. Researchers in both industry and academia are actively moving toward SNN techniques to address these challenges and pave the way for future advancements.


Fig. 1: Customized Convolutional Spiking Neural Network Model Architecture
However, when compared to existing deep learning models for image classification, conventional SNNs encounter a notable limitation in feature extraction. This constraint arises from their composition of solely a fully connected layer of physiologically based neurons. A single fully connected layer of SNNs lacks the capability to uncover and capture certain deeper and hidden information, in contrast to the intricate structures of deep learning models. Recent advancements in deep learning models for image classification, such as Convolutional Neural Networks (CNNs), have demonstrated remarkable success in various areas of computer vision, particularly in classification tasks [12]–[14]. The superior performance of CNNs is accompanied by higher model complexity, with current CNNs typically having tens of millions of parameters. Due to this complexity, CNNs can effectively learn intricate patterns from training data, prompting an exploration of computer models for broad pattern recognition from a biological standpoint. Finally, the feature extraction ability of CNNs has been utilized to enhance the performance of SNNs.

Rikiya Yamashita et al. [15] described a category of ANNs that has garnered significant attention in numerous computer vision tasks and is increasingly recognized across various domains. These networks, known as CNNs, have become prominent, particularly in image-related applications. CNNs utilize several architectural components, such as convolution layers, pooling layers, and fully connected layers (ANNs), enabling them to autonomously and adaptively acquire spatial hierarchies of features through the process of backpropagation.

P. U. Diehl et al. [16] suggest an SNN-based model for digit recognition that incorporates mechanisms close to biological realism. Specifically, the model utilizes conductance-based synapses rather than current-based synapses, implements spike-timing-dependent plasticity involving time-dependent weight adjustments, incorporates lateral inhibition, and features an adaptive spiking threshold.

Drawing upon the foundation laid by the SNNs investigated in [16], this study introduces architectural modifications to enhance the capability of SNNs in representing image inputs. In this study, the proposed approach, termed Customized Convolutional Spiking Neural Networks (CCSNNs), constitutes a class of networks designed to learn features from grid-like data autonomously. The notable success of CNNs in various image-processing applications inspires this design. In order to improve training efficiency, small neuron patches may share parameters (fostering collaborative learning) or evolve independently to acquire distinct properties. The incorporation of new inhibitory connections facilitates swift convergence to a robust data representation and ensures commendable classification accuracy. Within this framework, different subpopulations of neurons engage in competitive dynamics, each contesting to represent isolated portions of the input space.

The rest of the article is organized as follows: Section II outlines the methodology of the SNN and explains the customization of the Convolutional Neural Network layer before the SNN, introducing the proposed Customized Convolutional Spiking Neural Network, referred to as CCSNN. Section III presents the results of SNN and CCSNN on the three MNIST datasets (Digit-MNIST, Letter-MNIST, and Fashion-MNIST). Finally, Section IV concludes this study.

II. METHODOLOGY

This section outlines the procedure for executing the study depicted in Figure 1. As shown in the figure, the study commences with a comprehensive understanding of the CCSNN architecture and its components. After grasping the concept of the SNN architecture, the convolutional layer is customized to propose the CCSNN architecture. This customization of the convolutional layer is performed for feature extraction, and these extracted features are then fed into the SNN as inputs to enhance the model's performance.

A. Spiking Neural Network Architecture

Spiking Neural Networks (SNNs) stand as the third evolutionary stage of ANNs. Unlike conventional ANNs that process values, either real or integer, SNNs handle data in the form of spike trains, where a sequence of spikes carries the information. Notably, SNNs excel in processing temporal patterns, going beyond the spatial capabilities of ANNs, thereby offering enhanced computational prowess. In terms of computational requirements, SNNs demand only a single line toggling between logical levels '0' and '1', as depicted in Figure 2.

Fig. 2: Basic Architecture of Spiking Neural Network

Fig. 3: Different Neuron Models of Spiking Neural Network

Fig. 4: Different Encoding and Decoding Methods of Spiking Neural Network
Researchers have proposed various mathematical models of the SNN, including the Integrate and Fire (IF), Leaky Integrate and Fire (LIF), and Adaptive Leaky Integrate and Fire (ALIF) models, among others, as illustrated in Figure 3 [17], [18]. Components of the SNN basic architecture are described in the following subsections.

1) Encoding: Neural coding or encoding is the process by which the nervous system converts sensory information and internal states into sequences of electrical signals known as action potentials or spikes. These spikes convey information between neurons and allow for the communication of information throughout the nervous system.

There are several different encoding schemes used by neurons, including rate coding, temporal coding, phase coding, burst coding, and so on, as shown in Figure 4. This study uses the rate coding scheme for converting sensory information into electrical signals. The standard approach for producing spike sequences numerically relies on equation (1), which provides the occurrence probability of a spike within a brief time interval:

$P\{\text{1 spike during } \delta t\} \approx r\,\delta t \quad (1)$

Here $r$ is the instantaneous firing rate, and $\delta t$ is the time step. This equation serves as a method for creating a train of spikes by initially partitioning time into short intervals, each lasting $\delta t$. Subsequently, a series of random numbers denoted as $x[i]$ and uniformly distributed between 0 and 1 is generated. For every time interval, if $x[i]$ is less than or equal to $r\,\delta t$, a spike is produced; otherwise, no spike is produced. It is crucial to note that this procedure is effective only when $\delta t$ is exceptionally small, specifically when $r\,\delta t \ll 1$. Typically, a $\delta t$ value of 1 msec should be adequate. However, a limitation of this approach is that each spike is assigned to a distinct time frame rather than a consistent time value.
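To make the rate-coding procedure concrete, the following minimal Python sketch implements the bin-wise rule of equation (1); the helper name rate_encode, the 100 Hz maximum rate, and the 100 ms window are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rate_encode(r, duration=0.1, dt=0.001, rng=None):
    """Poisson-style spike train via equation (1): in each bin of
    width dt, draw x[i] ~ U(0, 1) and emit a spike iff x[i] <= r*dt.
    Valid only while r*dt << 1."""
    rng = rng or np.random.default_rng()
    n_bins = int(duration / dt)             # partition time into short intervals
    x = rng.uniform(0.0, 1.0, size=n_bins)  # x[i] uniformly distributed in (0, 1)
    return (x <= r * dt).astype(np.uint8)   # 1 = spike, 0 = no spike

# Encode a normalized pixel intensity as a firing rate (illustrative
# mapping: intensity in [0, 1] scaled to a 100 Hz maximum rate).
pixel = 0.8
spikes = rate_encode(r=100.0 * pixel, duration=0.1, dt=0.001)

# Rate decoding is the inverse step: estimate the rate from the spike count.
print(spikes.sum(), "spikes ->", spikes.sum() / 0.1, "Hz (estimated)")
```

The final line previews the decoding step described next: under rate coding, the receiver recovers the signal simply by counting spikes over the observation window.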
2) Decoding: Decoding is the process by which the nervous system of the human brain converts the information received from previous neurons in the form of a sequence of electrical spikes. The method of decoding relies on the method of encoding by which the sensory information is converted into spikes, as shown in Figure 4.

3) Neuron Model: This study employs the leaky integrate-and-fire (LIF) neuron model with exponentially decaying synaptic current kernels. In the conventional LIF model, neurons are treated like electrical apparatuses. The membrane potential $V_m(t)$ of the neuron serves as an elementary internal variable or activation state variable. A neuron has different types of ions, i.e., $Na^+$, $K^+$, $Cl^-$, etc., and when they cross a membrane, they encounter a capacitance ($C = C_m$) coupled to the membrane and a resistance ($R = R_m$). The soma acts as a leaky integrator (a first-order low-pass filter), with its integration time constant $T_m = R_m C_m$ dictating the reaction time of the impulse function's exponential decay rate. The passive leakage current flowing across $R_m$ brings the membrane potential $V_m(t)$ toward zero, whereas an active membrane input pumping current opposes it, holding a resting membrane potential at $V_m(t) = V_L$.

Internal parameters governing individual neuron behavior include the resting potential of the membrane $V_L$ and the time constant of the membrane $T_m$. The membrane potential $V_m(t)$ is impacted by three factors: passive leakage of current, an active pumping current, and external inputs leading to time-varying changes in membrane conductance. These influences are considered under a set of specified digital conditions. The following equation represents the model's dynamics:

$\underbrace{\frac{dV_m}{dt}}_{\text{Activation}} = \underbrace{\frac{V_L}{T_m}}_{\text{Active Pumping}} - \underbrace{\frac{V_m(t)}{T_m}}_{\text{Leakage}} + \underbrace{\frac{1}{C_m} I_{inj}}_{\text{External Input}} \quad (2)$

If $V_m(t) > V_{thresh}$, the neuron emits a spike at $t_f$ and resets $V_m(t) \to V_{reset}$, where $I_{inj}$ is the current injected into the neuron in the form of a continuous sum of input spike trains, and $W_j$ is the synaptic weight of the corresponding input spike train $X_j$:

$I_{inj} = \sum_{j=1}^{n} W_j X_j \quad (3)$

The leaky integrate-and-fire (LIF) neurons exhibit a characteristic process involving the integration of current, accumulation of membrane potential, and subsequent exponential decay over time. Following the integration of the preneuron current, the membrane potential of the postneuron accumulates, diminishing exponentially until it surpasses the firing threshold. Upon reaching this threshold, the LIF neuron emits an output spike and resets the membrane potential.

In the event that $V_m(t) \geq V_{thresh}$, a generated spike of the neuron is described by

$y(t) = \delta(t - t_f) \quad (4)$

where $t_f$ denotes the spike firing time, and $V_m(t)$ is then set to $V_{reset}$. Then comes an essentially refractory time interval during which $V_m(t)$ progressively recuperates from $V_{reset}$ to the resting membrane potential $V_L$. While inducing spike firing during this refractory time interval is challenging, it remains possible. As a result, the neuron's output constitutes a continuous sequence of spikes, expressed as

$y(t) = \sum_i \delta(t - t_i) \quad (5)$

where $t_i$ represents the times of spike firing.
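As a worked illustration of equations (2)–(5), the sketch below integrates the LIF dynamics with a simple forward-Euler step. The parameter values (time constant, threshold, weights), time units, and the helper name lif_simulate are illustrative assumptions, and the refractory interval is omitted for brevity; this is not the paper's implementation.

```python
import numpy as np

def lif_simulate(spike_trains, weights, T_m=10.0, C_m=1.0, V_L=0.0,
                 V_thresh=1.0, V_reset=0.0, dt=1.0):
    """Forward-Euler integration of equation (2),
    dVm/dt = VL/Tm - Vm(t)/Tm + Iinj/Cm, with Iinj given by the
    weighted sum of input spike trains of equation (3). A spike is
    emitted and Vm reset whenever Vm exceeds V_thresh, per equation
    (4); the collected spike times form the output train of equation
    (5). Time is in milliseconds; refractory period omitted."""
    n_steps = spike_trains.shape[1]
    V_m = V_L
    out = np.zeros(n_steps, dtype=np.uint8)
    for t in range(n_steps):
        I_inj = np.dot(weights, spike_trains[:, t])        # equation (3)
        V_m += (V_L / T_m - V_m / T_m + I_inj / C_m) * dt  # equation (2)
        if V_m > V_thresh:                                 # threshold crossing
            out[t] = 1                                     # spike at t_f
            V_m = V_reset                                  # reset, equation (4)
    return out

# Three presynaptic neurons firing randomly (~20% per 1 ms bin) for 100 ms.
rng = np.random.default_rng(0)
inputs = (rng.uniform(size=(3, 100)) < 0.2).astype(np.uint8)
print(lif_simulate(inputs, np.array([0.3, 0.5, 0.2])).sum(), "output spikes")
```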
B. Customized Convolutional Spiking Neural Network (CCSNN)

The proposed CCSNN strategy involves a two-step process. In the initial step, datasets serve as inputs to CNN models, and distinct feature sets are extracted. Moving to the second step, an SNN is trained using these extracted features. The emphasis on more efficient features in this training process is targeted to enhance the overall accuracy of the classification. In both steps of this proposed strategy, CNNs play a pivotal role in contributing to the improved classification performance of SNNs in the study. The procedural steps and the comprehensive design of this proposed strategy are visually presented in Figure 1.

SNNs rely on the precise timing of spikes for information processing, and with a large number of features, maintaining precise spike timing across all neurons becomes challenging. SNNs might struggle to capture relationships between numerous features efficiently. The suggested strategy, CCSNN, enables us to carry out the classification task by feeding the features extracted from CNN models to the SNN. The CNN reduces the dimensionality of the input space; with a lower-dimensional input space, the complexity of the SNN is reduced and features can be captured more efficiently, as sketched below.
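As a sketch of this two-step strategy, the PyTorch module below follows one plausible reading of the 784-32C3-64C3-P2-256 network-size notation reported in TABLE III (two 3x3 convolutions, a 2x2 max-pool, and a 256-unit fully connected layer). The class name, activation choices, and layer details are assumptions, since the paper does not publish its implementation.

```python
import torch
import torch.nn as nn

class CNNFeatureExtractor(nn.Module):
    """Step 1 of CCSNN: extract a compact 256-dimensional feature
    vector from a 28x28 grayscale image. Layer sizes follow an
    assumed reading of the 784-32C3-64C3-P2-256 notation."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3), nn.ReLU(),   # 32C3: 28x28 -> 26x26
            nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(),  # 64C3: 26x26 -> 24x24
            nn.MaxPool2d(2),                              # P2:   24x24 -> 12x12
        )
        self.fc = nn.Linear(64 * 12 * 12, 256)            # 256 output features

    def forward(self, x):                                 # x: (B, 1, 28, 28)
        return torch.relu(self.fc(self.features(x).flatten(1)))

# Step 2 (not shown): rate-encode the 256 features into spike trains,
# e.g. with rate_encode above, and train the SNN classifier on them.
feats = CNNFeatureExtractor()(torch.rand(8, 1, 28, 28))
print(feats.shape)  # torch.Size([8, 256])
```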
III. RESULTS AND DISCUSSION

To assess the effectiveness of both the brain-inspired SNN model and the proposed CCSNN (Customized Convolutional Spiking Neural Network) model, we conducted experiments using the Digit-MNIST, Fashion-MNIST, and Letter-MNIST datasets. Detailed information about these experimental datasets is provided in TABLE I. The models' performance is analyzed by employing these datasets to check the classification report. A common environment is established before conducting the performance analysis to ensure a fair comparison of the models. Hyperparameters are set up according to the datasets to achieve a consistent and accurate evaluation of the models under the same input parameter setup, as outlined in TABLE II.

TABLE I: Dataset Description

  Dataset  | Digit-MNIST  | Letter-MNIST | Fashion-MNIST
  Image    | 28 x 28 gray | 28 x 28 gray | 28 x 28 gray
  Train    | 60000        | 319240       | 60000
  Test     | 10000        | 53210        | 10000
  Category | 10           | 26           | 10

TABLE II: Hyperparameters of CCSNN

  Hyperparameters    | Digit-MNIST | Letter-MNIST | Fashion-MNIST
  Batch Size         | 1000        | 1000         | 1000
  Training Epoch     | 100         | 100          | 100
  Learning Rate (lr) | 0.001       | 0.001        | 0.001
  Decay Multiplier   | 0.9         | 0.9          | 0.9
  Threshold          | 1           | 1            | 1
  Time Steps         | 25          | 25           | 25

For the same setup of hyperparameters, performance analysis is conducted on the models for 100 epochs, and the results are depicted in Figures 5 and 6. Figure 5 illustrates the accuracy curves of the SNN and CCSNN models, while Figure 6 illustrates the convergence of model loss after completing 100 epochs. Referring to the trained models demonstrated in Section II, a detailed comparison of model accuracy is outlined in TABLE III. As indicated by the results in TABLE III, the CCSNN model exhibits superior accuracy across all datasets. Specifically, for the Digit-MNIST and Letter-MNIST datasets, the CCSNN model achieves accuracies of 99.10% and 99.30%, respectively, outperforming the SNN model, which attains accuracies of only 97.60% and 97.70%, respectively. Furthermore, a comparative analysis of our proposed model with existing literature on all datasets is presented in TABLE IV. The table shows that our proposed model achieves higher accuracy than all existing literature.

TABLE III: Comparison between SNN and CCSNN

  Dataset       | Model | Network Size            | Accuracy
  Digit-MNIST   | SNN   | 784-444-10              | 97.60%
  Fashion-MNIST | SNN   | 784-444-10              | 89.60%
  Letter-MNIST  | SNN   | 784-444-26              | 97.70%
  Digit-MNIST   | CCSNN | 784-32C3-64C3-P2-256-10 | 99.10%
  Fashion-MNIST | CCSNN | 784-32C3-64C3-P2-256-10 | 91.80%
  Letter-MNIST  | CCSNN | 784-32C3-64C3-P2-256-26 | 99.30%

Fig. 5: Comparative Analysis of Accuracy Curves for SNN and CCSNN across All Datasets [panels (a)–(c) and (d)–(f): Digit-MNIST, Fashion-MNIST, Letter-MNIST; train and validation curves over 100 epochs]

Fig. 6: Comparative Analysis of Loss Curves for SNN and CCSNN across All Datasets [panels (a)–(c) and (d)–(f): Digit-MNIST, Fashion-MNIST, Letter-MNIST; train and validation curves over 100 epochs]

TABLE IV: Comparison of Classification Performance with Previous Studies

  Model         | Dataset       | Method               | Accuracy | Ref.
  BSNN          | Digit-MNIST   | Surrogate gradient   | 99.05%   | [19]
  LC-SNN        | Digit-MNIST   | STDP                 | 95.07%   | [20]
  sym-STDP-SNN  | Digit-MNIST   | DA-STDP              | 96.73%   | [21]
  Spiking CNN   | Digit-MNIST   | Probabilistic-STDP   | 98.36%   | [22]
  Ours (CCSNN)  | Digit-MNIST   | STDP-BP              | 99.10%   | -
  BSNN          | Fashion-MNIST | Surrogate gradient   | 87.92%   | [19]
  sym-STDP-SNN  | Fashion-MNIST | DA-STDP              | 84.89%   | [21]
  TDSNN         | Fashion-MNIST | ST-RSBP              | 90.13%   | [23]
  EMSTDP        | Fashion-MNIST | STDP                 | 85.31%   | [24]
  Ours (CCSNN)  | Fashion-MNIST | STDP-BP              | 91.80%   | -
  ELM           | Letter-MNIST  | OPIUM                | 96.36%   | [25]
  MLP (SPA+MSE) | Letter-MNIST  | Adagrad              | 97.34%   | [26]
  SVM (RBF)     | Letter-MNIST  | SDW(+)               | 90.75%   | [27]
  Equiv-CloGAN  | Letter-MNIST  | Stochastic generator | 87.89%   | [28]
  Ours (CCSNN)  | Letter-MNIST  | STDP-BP              | 99.30%   | -

IV. CONCLUSION AND FUTURE SCOPE

This article proposes the brain-inspired cognitive model CCSNN, which combines the connectivity-pattern-extracting capabilities of CNNs with the biological plausibility of SNNs. The paper demonstrates the system's performance on the MNIST dataset and its variants, showing that CCSNN matches or surpasses the cognitive SNN model while utilizing fewer neurons and training samples. CCSNN notably achieves the highest accuracy across all three datasets compared to the existing literature. The potential advantages of employing this structure extend to the development of neuromorphic devices and VLSI. This research introduces additional biological realism to contemporary image classification models, aiming to elucidate the brain's processes in high-level vision tasks. Future work will explore the integration of CCSNN with various regularization methods, and the foundational concept of CCSNN may find application in diverse network types and medical scenarios.

REFERENCES

[1] A. Vijayvargiya, R. Kumar, and P. Sharma, "PC-GNN: Pearson correlation-based graph neural network for recognition of human lower limb activity using sEMG signal," IEEE Transactions on Human-Machine Systems, 2023.
[2] J. Ahire, Artificial Neural Networks: The Brain Behind AI. Lulu.com, 2018.
[3] A. Taherkhani, A. Belatreche, Y. Li, G. Cosma, L. P. Maguire, and T. M. McGinnity, "A review of learning in biologically plausible spiking neural networks," Neural Networks, vol. 122, pp. 253–272, 2020.
[4] A. Alhendi, A. S. Al-Sumaiti, M. Marzband, R. Kumar, and A. A. Z. Diab, "Short-term load and price forecasting using artificial neural network with enhanced Markov chain for ISO New England," Energy Reports, vol. 9, pp. 4799–4815, 2023.
[5] A. L. Hodgkin and A. F. Huxley, "A quantitative description of membrane current and its application to conduction and excitation in nerve," The Journal of Physiology, vol. 117, no. 4, p. 500, 1952.
[6] D. H. Hubel and T. N. Wiesel, "Receptive fields of single neurones in the cat's striate cortex," The Journal of Physiology, vol. 148, no. 3, p. 574, 1959.
[7] S. G. Wysoski, L. Benuskova, and N. Kasabov, "Evolving spiking neural networks for audiovisual information processing," Neural Networks, vol. 23, no. 7, pp. 819–835, 2010.
[8] C. Tang, D. Chehayeb, K. Srivastava, I. Nemenman, and S. J. Sober, "Millisecond-scale motor encoding in a cortical vocal area," PLoS Biology, vol. 12, no. 12, p. e1002018, 2014.
[9] J. Hu, H. Tang, K. C. Tan, and H. Li, "How the brain formulates memory: A spatio-temporal model research frontier," IEEE Computational Intelligence Magazine, vol. 11, no. 2, pp. 56–68, 2016.
[10] S. Han, H. Mao, and W. J. Dally, "Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding," arXiv preprint arXiv:1510.00149, 2015.
[11] E. Strubell, A. Ganesh, and A. McCallum, "Energy and policy considerations for deep learning in NLP," arXiv preprint arXiv:1906.02243, 2019.
[12] N. Gehlot, A. Vijayvargiya, R. Kumar, A. R. Garg, and U. Desai, "C-LVQ: A convolutional neural network with learning vector quantization for the diagnosis of COVID-19," in 2023 IEEE 20th India Council International Conference (INDICON), pp. 55–60, IEEE, 2023.
[13] A. Vijayvargiya, B. Singh, N. Kumari, and R. Kumar, "sEMG-based deep learning framework for the automatic detection of knee abnormality," Signal, Image and Video Processing, vol. 17, no. 4, pp. 1087–1095, 2023.
[14] N. Rajawat, B. S. Hada, M. Meghawat, S. Lalwani, and R. Kumar, "Advanced identification of Alzheimer's disease from brain MRI images using convolution neural network," in Proceedings of 2nd International Conference on Artificial Intelligence: Advances and Applications (ICAIAA 2021), pp. 219–229, Springer, 2022.
[15] R. Yamashita, M. Nishio, R. K. G. Do, and K. Togashi, "Convolutional neural networks: an overview and application in radiology," Insights into Imaging, vol. 9, pp. 611–629, 2018.
[16] P. U. Diehl and M. Cook, "Unsupervised learning of digit recognition using spike-timing-dependent plasticity," Frontiers in Computational Neuroscience, vol. 9, p. 99, 2015.
[17] K. Yamazaki, V.-K. Vo-Ho, D. Bulsara, and N. Le, "Spiking neural networks and their applications: A review," Brain Sciences, vol. 12, no. 7, p. 863, 2022.
[18] S. Yadav, S. Chaudhary, and R. Kumar, "Comparative analysis of biological spiking neuron models for classification task," in 2023 14th International Conference on Computing Communication and Networking Technologies (ICCCNT), pp. 1–6, IEEE, 2023.
[19] J. K. Eshraghian and W. D. Lu, "The fine line between dead neurons and sparsity in binarized spiking neural networks," arXiv preprint arXiv:2201.11915, 2022.
[20] D. J. Saunders, D. Patel, H. Hazan, H. T. Siegelmann, and R. Kozma, "Locally connected spiking neural networks for unsupervised feature learning," Neural Networks, vol. 119, pp. 332–340, 2019.
[21] Y. Hao, X. Huang, M. Dong, and B. Xu, "A biologically plausible supervised learning method for spiking neural networks using the symmetric STDP rule," Neural Networks, vol. 121, pp. 387–395, 2020.
[22] A. Tavanaei and A. S. Maida, "Multi-layer unsupervised learning in a spiking convolutional neural network," in 2017 International Joint Conference on Neural Networks (IJCNN), pp. 2023–2030, IEEE, 2017.
[23] L. Zhang, S. Zhou, T. Zhi, Z. Du, and Y. Chen, "TDSNN: From deep neural networks to deep spike neural networks with temporal-coding," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 1319–1326, 2019.
[24] A. Shrestha, H. Fang, Q. Wu, and Q. Qiu, "Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks," in Proceedings of the International Conference on Neuromorphic Systems, pp. 1–8, 2019.
[25] G. Cohen, S. Afshar, J. Tapson, and A. van Schaik, "EMNIST: Extending MNIST to handwritten letters," in 2017 International Joint Conference on Neural Networks (IJCNN), pp. 2921–2926, IEEE, 2017.
[26] S. Madireddy, A. Yanguas-Gil, and P. Balaprakash, "Multilayer neuromodulated architectures for memory-constrained online continual learning," in 4th Lifelong Machine Learning Workshop at ICML 2020, 2020.
[27] U. Buatoom and M. U. Jamil, "Improving classification performance with statistically weighted dimensions and dimensionality reduction," Applied Sciences, vol. 13, no. 3, p. 2005, 2023.
[28] A. Rios and L. Itti, "Closed-loop memory GAN for continual learning," arXiv preprint arXiv:1811.01146, 2018.
