
Article
Electromyogram-Based Classification of Hand and Finger
Gestures Using Artificial Neural Networks
Kyung Hyun Lee † , Ji Young Min † and Sangwon Byun *

Department of Electronics Engineering, Incheon National University, Incheon 22012, Korea;


[email protected] (K.H.L.); [email protected] (J.Y.M.)
* Correspondence: [email protected]
† These authors contributed equally to the work.

Abstract: Electromyogram (EMG) signals have been increasingly used for hand and finger gesture
recognition. However, most studies have focused on the wrist and whole-hand gestures and not
on individual finger (IF) gestures, which are considered more challenging. In this study, we
develop EMG-based hand/finger gesture classifiers based on fixed electrode placement using machine
learning methods. Ten healthy subjects performed ten hand/finger gestures, including seven IF
gestures. EMG signals were measured from three channels, and six time-domain (TD) features were
extracted from each channel. A total of 18 features was used to build personalized classifiers for ten
gestures with an artificial neural network (ANN), a support vector machine (SVM), a random forest
(RF), and a logistic regression (LR). The ANN, SVM, RF, and LR achieved mean accuracies of 0.940,
0.876, 0.831, and 0.539, respectively. One-way analyses of variance and F-tests showed that the ANN
achieved the highest mean accuracy and the lowest inter-subject variance in the accuracy, respectively,
suggesting that it was the least affected by individual variability in EMG signals. Using only TD
features, we achieved a higher ratio of gestures to channels than other similar studies, suggesting
 that the proposed method can improve the system usability and reduce the computational burden.

Keywords: electromyogram; EMG; machine learning; physiological signal; hand-finger movement; gesture recognition; classification; time-domain features; artificial neural network; prosthetic hand

Citation: Lee, K.H.; Min, J.Y.; Byun, S. Electromyogram-Based Classification of Hand and Finger Gestures Using Artificial Neural Networks. Sensors 2022, 22, 225. https://doi.org/10.3390/s22010225

Academic Editors: Roberto Merletti and Jesus Lozano

Received: 16 November 2021; Accepted: 27 December 2021; Published: 29 December 2021

Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction
For decades, electromyograms (EMGs) have been employed to control prosthetic
limbs, such as hands and wrists. In theory, EMG signals recorded from specific muscles
associated with hand and finger gestures can be used to control a variety of movements.
However, individual finger (IF) gestures are considered to be more difficult to classify
than whole-hand and wrist gestures due to the complexity and subtlety of muscle usage
for IF movements [1]. Most finger gesture prediction models rely on EMG signals from a
large number of channels, which results in the high cost and complexity of the system [2].
For these reasons, many previous studies focused on classifying whole-hand or wrist
gestures [3–6], but recent advances in computing power and machine learning algorithms
have allowed the classification of IF gestures with a small number of channels without
sacrificing the accuracy or response time.
Feature extraction is an important process for gesture recognition systems to extract
the critical information hidden in the raw EMG signals [7]. EMG features are extracted
from three domains, namely the time domain (TD), the frequency domain (FD), and the
time–frequency domain (TFD) [7], each of which has advantages and disadvantages in
gesture classification. Features in the TD are fast and easy to implement because they
do not require additional transformation and are calculated directly from the raw EMG
signals [7]. In addition, previous studies suggested that the TD features can represent the
transient state of the gestures well [8–10]. However, the TD features are prone to errors
due to the non-stationarity of the EMG signals [11]. Efforts have been made to improve
the performance by extracting features from different domains, but this can result in an
increase in computational cost.
The response time of the gesture recognition system needs to be short enough to
be perceived as real-time recognition by users. Because this latency includes the time
required to extract features, simpler implementation of TD features can be beneficial to
reduce the total response time, which is one of the strengths of using the TD features in
gesture classification. Therefore, TD features have been widely tested for EMG-based
hand/finger gesture recognition, along with FD or TFD features [12–18]. Various machine
learning methods, including support vector machine (SVM), k-nearest neighbors (KNN),
artificial neural network (ANN), convolutional neural network (CNN), and probabilistic
neural network (PNN), have been implemented as classification algorithms. Most studies
achieved at least a 90% accuracy in classifying four to ten hand/finger gestures [12–18].
Although these studies demonstrated successful recognition of gestures, the achievement
of both high accuracy and low response time is still a challenge.
Particularly, ANN algorithms have been extensively tested in EMG-based studies,
which investigated various types of EMG data for a wide range of applications. For example,
some studies tested both TD and FD features as input data [19,20], and others applied raw
EMG signals directly to the ANN without a feature extraction process [21,22]. Additionally,
the application of research was not limited to hand/finger gesture recognition but included
the prediction of force load [23,24] and the detection of neuromuscular disorders [25].
However, studies that applied ANN algorithms to TD features only, excluding other
features or data types, for hand/finger gesture recognition, have not been conducted as
extensively as other EMG studies, which used both TD and FD features or raw EMG signals
for various purposes [26]. Some studies used commercially available wearables, such as
the Myo armband, to extract TD features from multi-channel signals and build ANN-based
classifiers [27–29]. However, it is difficult to specify the position of the electrodes relative
to muscles with these wearables. Because information on muscles is crucial for optimizing
personalized EMG sensors for users with various physiological conditions, the electrode
locations need to be estimated precisely.
Therefore, in this study, we developed a real-time hand/finger gesture recognition
system based on fixed electrode placement using only TD features. We employed the
ANN and three other popular machine learning algorithms, namely SVM, random forest
(RF), and logistic regression (LR), as classifiers, and their performances were statistically
compared. A total of ten gestures, including seven IF gestures, were classified. We limited
the number of channels used for recording EMG signals to reduce the complexity and
improve the usability of the recognition system. Hence, three channels, which are relatively
few compared to those used in previous studies [12–18], were used to record EMG signals
from three different muscles on a forearm. Six TD features were extracted from each
channel, and therefore, a total of 18 TD features were used as input data. Ten healthy
subjects were recruited, and for each subject, personalized classifiers were built and tested.

2. Materials and Methods


2.1. Participants
EMG data were collected from ten healthy male subjects (mean age ± SD, 24.5 ± 1.5).
All subjects were right-handed and did not suffer from any neurological condition. Before
the experiment, a guide was provided to acquaint the participants with the experimental
procedures, and informed consent was obtained from all subjects. This study was ap-
proved by the Institutional Review Board of Incheon National University, Incheon, Korea
(No. 7007971-201901-002) and performed according to the relevant guidelines.

2.2. Equipment and Software


We followed the guidelines suggested in [30]. All subjects were isolated from the main
supply during the experiment. The EMG signals were acquired using MyoWare Muscle
sensors (SparkFun Electronics, Niwot, CO, USA) (Figure 1). This sensor has been frequently
used in previous EMG studies because of its low cost, easy-to-customize features, and
favorable performance reported in validation studies, which showed it to be comparable to more
expensive commercial EMG systems [31,32]. The sensor is powered by a 5 V supply and
consists of three electrodes: mid-muscle, end-muscle, and reference [33]. EMG signals
were differentially amplified with an adjustable gain of 201 × (R_gain/1 kΩ) (CMRR 110 dB; the input
impedance at 60 Hz is not available). For the electrodes, we used Ag/AgCl electrodes
for surface EMG (H124SG, Covidien, Dublin, Ireland), based on conductive and adhesive
hydrogel with a 201 mm2 gel area, a 251 mm2 adhesive area, and an 80 mm2 sensor area.
Analog EMG signals were collected with a data acquisition (DAQ) system (NI DAQ USB-
6361, National Instruments, Austin, TX, USA) to digitize the signals with a sampling rate of
2000 Hz. LabVIEW 2017 (National Instruments) was used to record the signals and remove
noise by digital filters. Data processing and machine learning modeling were performed
using Python (version 3.7, https://www.python.org/, accessed on 29 December 2021) with
scikit-learn (version 0.21.3, https://scikit-learn.org, accessed on 29 December 2021) and
TensorFlow (version 2.1.0, https://www.tensorflow.org/, accessed on 29 December 2021).
Statistical analyses were performed using R (version 4.0.2, https://www.r-project.org/,
accessed on 29 December 2021).

Figure 1. EMG sensor and electrodes used in the experiments. The image is modified from [33].

2.3. Experimental Setup and Data Acquisition


The EMG sensors were placed on the flexor carpi radialis, flexor carpi ulnaris, and
brachioradialis, which are the forearm muscles associated with the selected hand and
finger gestures (Figure 2) [1,34]. To improve the EMG signal measurement accuracy, the
electrodes were placed on the midline of the muscle belly between an innervation zone
and a myotendon junction. The electrodes were placed parallel to the muscle fibers [35].
To determine the attachment location, we used anatomical information and methods
recommended in previous literature [36]. After the attachment, electrode placements were
confirmed by muscle contraction performed by the subject. Each electrode pair had its own
reference electrode, which was placed as close to the elbow and as distant from the targeted
muscles as possible. The subjects sat in a comfortable chair and relaxed their arm and hand
before performing gestures. The entire data acquisition process proceeded for each subject
without repositioning the electrodes until the end.

Figure 2. Placement of EMG sensors and electrodes on a forearm.

Ten different gestures—nine non-rest gestures and a rest gesture—were tested and
classified as follows: rock, scissors, paper, one, three, four, good, okay, finger gun, and rest
(Figure 3). Among the non-rest gestures, rock and paper were considered as whole-hand
gestures, and the other seven were considered as IF gestures. A whole-hand gesture is
defined as moving all five fingers in the same direction, and the individual finger gesture
involves moving at least one finger in a different direction. Although these definitions of
finger gestures are not commonly used in the literature, we separated the whole-hand and
individual finger gestures to indicate the more complex nature of movements for certain
finger gestures. The rock and paper gestures have been regularly included and studied as
a finger gesture in previous studies. Therefore, we included rock and paper as common
gestures to be compared with other studies.

Figure 3. Ten hand and finger gestures used for classification. Two whole-hand gestures (rock and
paper) and seven IF gestures (scissors, one, three, four, good, okay, and finger gun) are included.

The subjects were asked to perform the gestures in the order shown in Figure 4.
The recording of a 5-s rest gesture and a 5-s non-rest gesture is referred to as a set. Five
repeated sets formed a round. The subjects conducted four rounds sequentially for each
non-rest gesture. Between rounds, a 10 s rest interval was also given. After completing four
rounds of a given gesture, participants took a 5-min rest to relax their muscles before
performing a new gesture.

Figure 4. Experimental procedure. Four rounds were conducted for each of the nine non-rest gestures: rock,
scissors, paper, one, three, four, good, okay, and finger gun. The subjects repeated a set of a 5-s rest
gesture and a 5-s non-rest gesture five times in each round.

Signals measured from three EMG sensors were recorded simultaneously using LabVIEW.
EMG signals have frequencies mostly in the range of 20–500 Hz. Noise in the signals was
reduced using two digital filters implemented in LabVIEW: a bandpass filter (Butterworth,
4th order, 20–500 Hz) and a bandstop filter (Butterworth, 7th order, 59.5–60.5 Hz).
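For readers reimplementing this preprocessing outside LabVIEW, the following is a minimal Python/SciPy sketch of comparable filters (a 4th-order Butterworth bandpass at 20–500 Hz and a 7th-order Butterworth bandstop at 59.5–60.5 Hz, at the 2000 Hz sampling rate); the exact LabVIEW implementations used in the study may differ, and the zero-phase filtering shown here is an offline choice.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 2000  # sampling rate (Hz)

# 4th-order Butterworth bandpass (20-500 Hz) and 7th-order bandstop (59.5-60.5 Hz),
# designed as second-order sections for numerical stability
bandpass = butter(4, [20, 500], btype="bandpass", fs=FS, output="sos")
bandstop = butter(7, [59.5, 60.5], btype="bandstop", fs=FS, output="sos")

def denoise(raw_emg: np.ndarray) -> np.ndarray:
    """Zero-phase band-pass plus power-line band-stop filtering of one EMG channel."""
    return sosfiltfilt(bandstop, sosfiltfilt(bandpass, raw_emg))
```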

2.4. Data Preprocessing


Figure 5 shows a flow chart of the data processing steps. An overlapping sliding
window was adopted for data segmentation (Figure 6). The length of a moving window
was 250 ms, and the window was advanced in increments of 25 ms (90% overlap). Each
segmented window was annotated with one of the ten gestures. In particular, to annotate
windows in transient states changing from rest to non-rest gestures, a threshold identifying
gesture activation from the rest state was calculated from the first 4 s of rest data in every
round; the highest value was selected as the activation threshold, as shown below:

Threshold = λ × Baseline_max (1)

where λ is an empirical coefficient [37]. The range of λ was determined by evaluating
Baseline_max values for each gesture in every round. The smallest Baseline_max among these
values was defined as λ = 1. The ratio of the largest to the smallest Baseline_max was defined
as the highest value of λ. To find an optimal threshold, we increased λ from 1 to the
highest value. Then, the time at which an EMG signal exceeded the threshold was defined
as an activation point. Because the three muscles did not activate simultaneously when
performing a gesture, the activation points from each channel were different. Therefore, we
used the earliest activation time from the three channels. EMG signals were assumed to be
in the activated state for 5 s from the activation point. A segmented window was annotated
with a non-rest gesture if more than 50% of the window was in an activated state.
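A hedged Python sketch of this segmentation and annotation scheme is given below (250 ms window = 500 samples and 25 ms step = 50 samples at 2000 Hz; the array layout, function names, and label conventions are ours, not taken from the study's released code).

```python
import numpy as np

FS = 2000
WIN = int(0.250 * FS)    # 250 ms window  -> 500 samples
STEP = int(0.025 * FS)   # 25 ms increment -> 50 samples (90% overlap)
ACTIVE_LEN = 5 * FS      # signals assumed active for 5 s after the activation point

def activation_point(emg: np.ndarray, baseline_max: float, lam: float) -> int:
    """Earliest sample (over all channels) exceeding Threshold = lam * baseline_max (Equation (1))."""
    threshold = lam * baseline_max
    idx = np.flatnonzero(np.any(np.abs(emg) > threshold, axis=0))
    return int(idx[0]) if idx.size else -1

def segment_and_label(emg: np.ndarray, act_idx: int, gesture: int, rest: int = 0):
    """Slide a 250-ms window in 25-ms steps over emg (channels x samples) and label each window."""
    n = emg.shape[1]
    active = np.zeros(n, dtype=bool)
    if act_idx >= 0:
        active[act_idx:act_idx + ACTIVE_LEN] = True
    windows, labels = [], []
    for start in range(0, n - WIN + 1, STEP):
        windows.append(emg[:, start:start + WIN])
        # non-rest label only if more than 50% of the window lies in the activated state
        labels.append(gesture if active[start:start + WIN].mean() > 0.5 else rest)
    return np.stack(windows), np.array(labels)
```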

Figure 5. Flow chart of data processing steps.

Figure 6. Moving window for EMG signal segmentation on one channel, where the 90% overlapping
window technique was applied. W1, W2, and W3 denote the moving windows, which have a length
of 250 ms, and τ denotes the interval between the windows, which is 25 ms.

2.5. Feature Extraction


A total of six TD features, including Hudgins’ features, were extracted from the 250-
ms-long segmented datasets: the root mean square (RMS), variance (VAR), mean absolute
value (MAV), slope sign change (SSC), zero crossing (ZC), and waveform length (WL)
(Table 1) [8,38,39]. These features have been most widely used for real-time EMG signal
analyses owing to their relatively low computational requirements. The extracted features
were normalized using standardization to achieve zero mean and unit variance before classification.

Table 1. Equations of the time-domain features used in this study.

Time-Domain Feature | Formula
Root mean square (RMS) | $\mathrm{RMS} = \sqrt{\tfrac{1}{N}\sum_{i=1}^{N} x_i^2}$
Variance (VAR) | $\mathrm{VAR} = \tfrac{1}{N}\sum_{i=1}^{N} (x_i - \mu)^2$, with $\mu = \tfrac{1}{N}\sum_{i=1}^{N} x_i = 0$, so $\mathrm{VAR} = \mathrm{RMS}^2$
Mean absolute value (MAV) | $\mathrm{MAV} = \tfrac{1}{N}\sum_{i=1}^{N} |x_i|$
Slope sign change (SSC) | $\mathrm{SSC} = \sum_{i=2}^{N-1} f\big[(x_i - x_{i-1}) \times (x_i - x_{i+1})\big]$, where $f(x) = 1$ if $x \geq 0$ and $0$ otherwise
Zero crossing (ZC) | $\mathrm{ZC} = \sum_{i=1}^{N-1} \big[\operatorname{sgn}(x_i \times x_{i+1}) \cap |x_i - x_{i+1}| \geq \text{threshold}\big]$, where $\operatorname{sgn}(x) = 1$ if $x \geq \text{threshold}$ and $0$ otherwise
Waveform length (WL) | $\mathrm{WL} = \sum_{i=1}^{N-1} |x_{i+1} - x_i|$

$N$: number of samples used for calculation; $x_i$: $i$th sample of measurement; the windows used for calculation of the features are shown in Figure 6 ($N$ = 500 with a 250 ms window).
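The six features in Table 1 can be computed per channel and window with a few lines of NumPy; the sketch below follows the common forms of ZC and SSC (amplitude-gated sign and slope-sign changes) and leaves the thresholds as parameters, since their exact values are not reported here.

```python
import numpy as np

def td_features(x: np.ndarray, zc_thr: float = 0.0, ssc_thr: float = 0.0) -> np.ndarray:
    """Six time-domain features (Table 1) for one 250-ms window of a single channel."""
    rms = np.sqrt(np.mean(x ** 2))
    var = np.mean((x - x.mean()) ** 2)           # equals RMS^2 when the signal mean is zero
    mav = np.mean(np.abs(x))
    diff = np.diff(x)
    wl = np.sum(np.abs(diff))                    # waveform length
    zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(diff) >= zc_thr))       # zero crossings
    ssc = np.sum((x[1:-1] - x[:-2]) * (x[1:-1] - x[2:]) >= ssc_thr)    # slope sign changes
    return np.array([rms, var, mav, ssc, zc, wl])

# 18-dimensional observation: 6 features per channel x 3 channels
# obs = np.concatenate([td_features(window[ch]) for ch in range(3)])
```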

2.6. Modeling
We developed personalized classifiers with the datasets obtained from each subject
(Appendix A). In each subject dataset, the number of segmented rest gesture observations
was nine times higher than the number of non-rest gesture observations, resulting in a class
imbalance. When a model is developed from imbalanced data, it cannot perform well on a
minority class because training algorithms are designed to reduce errors from inaccurate
prediction. If the dataset is highly imbalanced, the algorithm will reduce the error by
predicting the majority class and failing to learn the minority class. Most machine learning
algorithms perform best when each class has an equal number of samples. Under-sampling
is one of the methods used to overcome the data imbalance problem, which matches the
number of samples of each class by randomly removing samples from the majority class, as
previously applied in an EMG study [40]. Therefore, we under-sampled the rest gesture to
obtain a similar number of observations for each class to maximize the performance of the
classifiers. We split each subject dataset into a training dataset (90% of the data) and a test
dataset (10% of the data, Figure 5).
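A minimal sketch of this balancing and splitting step with scikit-learn might look as follows; X is assumed to hold the 18 standardized features per window and y the gesture labels (0 for rest), and the stratification of the 90/10 split is our choice rather than something stated in the text.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def undersample_rest(X, y, rest_label=0, seed=0):
    """Randomly drop rest windows so the rest class is comparable to each non-rest class."""
    rng = np.random.default_rng(seed)
    rest_idx = np.flatnonzero(y == rest_label)
    other_idx = np.flatnonzero(y != rest_label)
    n_keep = min(rest_idx.size, int(round(other_idx.size / 9)))  # nine non-rest gestures
    keep = np.concatenate([rng.choice(rest_idx, size=n_keep, replace=False), other_idx])
    return X[keep], y[keep]

X_bal, y_bal = undersample_rest(X, y)
X_train, X_test, y_train, y_test = train_test_split(
    X_bal, y_bal, test_size=0.10, stratify=y_bal, random_state=0)
```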
The following four machine learning methods were used to develop the classifiers and
identify the ten gestures in each subject dataset: ANN, SVM, RF, and LR. One of the aims of
this study was to test whether traditional TD features can be used for the ANN to develop
a multi-class classification model in an EMG-based hand/finger gesture recognition system.
Since the pre-calculated TD features were used as input data for classifiers, we chose to
adopt a multilayer perceptron model for the ANN.
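As an illustration, a multilayer perceptron of this kind can be sketched in Keras as below, using hyperparameter values from the ranges listed later in this section (for example, four hidden layers of 1000 neurons with batch normalization, dropout of 0.3, and Adam with a learning rate of 0.001); the activation function and layer ordering are our assumptions, and the study's actual architecture was selected per subject by grid search.

```python
import tensorflow as tf

def build_ann(n_features=18, n_classes=10, n_hidden=4, n_neurons=1000, dropout=0.3):
    """MLP over the 18 TD features with a softmax output over the ten gestures."""
    model = tf.keras.Sequential([tf.keras.layers.InputLayer(input_shape=(n_features,))])
    for _ in range(n_hidden):
        model.add(tf.keras.layers.Dense(n_neurons, activation="relu"))  # activation is assumed
        model.add(tf.keras.layers.BatchNormalization())
        model.add(tf.keras.layers.Dropout(dropout))
    model.add(tf.keras.layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_ann()
# model.fit(X_train, y_train, batch_size=1024, epochs=2000, verbose=0)
```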
The parameters for each machine learning model were tuned using stratified ten-fold
cross-validation (CV) grid search processes in the training dataset (Figure 5). In brief, the
training dataset was randomly divided into ten subparts of equal size. Nine subparts were
used for training the classifier with a grid of parameters, and the remaining subpart was
used for validation and accuracy evaluation. This process was repeated ten times, with
each of the ten subparts used exactly once for validation. Then, ten results from the folds
were averaged, and the averaged accuracy values were compared. Because the aim of this
study was to build a personalized model for EMG recognition, we focused on evaluating
individual performance of the model of each subject. Therefore, we applied a ten-fold CV
to each subject model instead of a validation method based on the entire dataset, such as
the leave-one-subject-out method.
The following parameters were optimized: the number of hidden layers (2, 3, and 4),
the neurons in each layer (300, 600, and 1000), the dropout rate (0.2 and 0.3), and the use
of batch normalization for the ANN; kernel (linear and rbf), C (1, 10, 100, and 1000), and
gamma (1, 0.1, 0.01, 0.001, and 0.0001) for the SVM; the number of trees (100, 500, 1000)
and class weight (balanced subsample and none) for RF; penalty (L1, L2, elasticnet, and
none), C (1, 0.1, 0.01, 0.001, and 0.0001), class weight (balanced and none), and solver (lbfgs
and saga) for LR. The Adam optimizer, a batch size of 1024, a learning rate of 0.001, and
2000 epochs were used when training and optimizing the ANN classifiers. The optimal
parameters, i.e., those that resulted in the highest average accuracy from the ten folds, were
selected for each classifier. The final optimized model was built by training the entire
training set with the best parameters and was applied to the test dataset (Figure 5).
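For the classical models, the stratified ten-fold grid search can be sketched with scikit-learn as follows (shown for the SVM; the RF and LR searches follow the same pattern with their own grids, and the shuffling seed is illustrative).

```python
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

param_grid = {                       # grid values listed above
    "kernel": ["linear", "rbf"],
    "C": [1, 10, 100, 1000],
    "gamma": [1, 0.1, 0.01, 0.001, 0.0001],
}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
search = GridSearchCV(SVC(), param_grid, cv=cv, scoring="accuracy", n_jobs=-1)
search.fit(X_train, y_train)         # per-subject training set (90% of the data)
best_svm = search.best_estimator_    # refit on the whole training set with the best parameters
test_accuracy = best_svm.score(X_test, y_test)
```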
In addition, we estimated the performance of ANN-based classifiers in real-time
decoding. To build a classifier model, data from the first to third round were used as the
training dataset and those from the fourth round were used as the test dataset to evaluate
the accuracy for each subject.

2.7. Statistical Analysis


The classification accuracy for each subject and each machine learning method was
evaluated separately. Then, to compare the performance among different machine learning
methods, the mean and standard deviation (SD) of the accuracies of all the subjects were
used. We evaluated the accuracy of the machine learning method using the one-way
analysis of variance (ANOVA) with the Games–Howell post-hoc tests. The variance of
the accuracies of the different machine learning methods was evaluated using F-tests with
the false discovery rate (FDR) correction. Additionally, we compared the accuracies of the
ANN-based classifiers of different feature combinations using the one-way ANOVA with
the Games–Howell post-hoc tests. The P-values were calculated between groups in the
analysis, and p < 0.05 was considered statistically significant.
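The statistical analyses were run in R; purely as an illustration, the pairwise F-tests of equal variance with FDR correction could be sketched in Python as follows (the Welch ANOVA and Games–Howell post-hoc tests are omitted, and the accuracies dictionary of ten per-subject values per method is assumed).

```python
from itertools import combinations
import numpy as np
from scipy.stats import f
from statsmodels.stats.multitest import multipletests

def variance_f_test(a, b):
    """Two-sided F-test for equality of variances between two sets of accuracies."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    F = np.var(a, ddof=1) / np.var(b, ddof=1)
    dfn, dfd = a.size - 1, b.size - 1
    p = 2.0 * min(f.cdf(F, dfn, dfd), f.sf(F, dfn, dfd))
    return F, min(p, 1.0)

# accuracies = {"ANN": [...], "SVM": [...], "RF": [...], "LR": [...]}  # ten values each
pairs = list(combinations(accuracies, 2))
raw_p = [variance_f_test(accuracies[a], accuracies[b])[1] for a, b in pairs]
reject, p_fdr, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")
```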

3. Results
3.1. Classifier Assessment
Ten healthy male subjects participated in the experiment. We developed machine
learning-based classifiers to identify the ten gestures for each subject using ANN, SVM, RF,
and LR. The optimized classifiers were built with the training datasets based on the best
parameters, which were determined using a grid search process, and their performances
were estimated using the test datasets (Figure 5). Table 2 shows the best parameters selected
by a grid search process for each subject and machine learning method. Table 3 and Figure 7
describe the classification accuracies of the machine learning methods. The mean accuracies
(95% confidence intervals) for the ANN, SVM, RF, and LR were 0.940 (0.935–0.945), 0.874
(0.858–0.890), 0.831 (0.809–0.853), and 0.539 (0.483–0.595), respectively. ANN showed the
highest accuracy in all subjects. The highest accuracy of 0.952 was obtained in subject #5
with ANN, and the lowest accuracy of 0.435 was obtained in subject #4 with LR.

Table 2. Best parameters selected by a grid search process for each subject and machine learning method.

Method  Parameter                  #1       #2       #3       #4       #5       #6       #7       #8       #9       #10
ANN     Number of hidden layers    4        3        4        4        3        4        3        4        4        4
ANN     Number of neurons          1000     1000     1000     1000     1000     1000     1000     1000     1000     1000
ANN     Dropout rate               0.3      0.3      0.3      0.3      0.2      0.3      0.3      0.3      0.3      0.3
ANN     Batch normalization        applied  applied  applied  applied  applied  applied  applied  applied  applied  applied
SVM     C                          10       10       100      100      100      100      10       100      100      100
SVM     Gamma                      1        1        1        1        1        1        1        1        1        1
SVM     Kernel                     rbf      rbf      rbf      rbf      rbf      rbf      rbf      rbf      rbf      rbf
RF      Number of trees            1000     1000     1000     1000     500      1000     500      1000     1000     1000
RF      Class weight               BAL      BAL      none     BAL      none     BAL      none     none     none     none
LR      Penalty                    L2       none     none     none     none     L2       none     none     none     L2
LR      C                          1        1        1        0.1      1        1        0.001    1        1        1
LR      Class weight               none     BAL      none     none     none     BAL      none     BAL      none     none
LR      Solver                     lbfgs    saga     lbfgs    saga     lbfgs    lbfgs    saga     lbfgs    lbfgs    lbfgs
BAL: balanced class weight (balanced subsample for RF).

Table 3. Classifier accuracies for each subject and machine learning method.

#1 #2 #3 #4 #5 #6 #7 #8 #9 #10 Mean 95% CI


ANN 0.947 0.939 0.941 0.927 0.952 0.928 0.942 0.942 0.935 0.944 0.940 0.935–0.945
SVM 0.898 0.866 0.857 0.824 0.905 0.856 0.883 0.886 0.866 0.899 0.874 0.858–0.890
RF 0.818 0.804 0.817 0.779 0.878 0.791 0.861 0.865 0.849 0.847 0.831 0.809–0.853
LR 0.520 0.454 0.515 0.435 0.672 0.442 0.611 0.600 0.502 0.639 0.539 0.483–0.595
ANN: artificial neural network; SVM: support vector machine; RF: random forest; LR: logistic regression; CI:
confidence interval.

Figure 7. Classifier accuracies for all ten subjects according to the machine learning method.

3.2. Statistical Comparison of the Performances of the Machine Learning Methods


Figure 8 shows the mean and standard deviation of the accuracy of the different ma-
chine learning methods obtained with each subject. The one-way ANOVA was conducted
to determine whether the main effect of the machine learning method on the accuracy was
statistically significant. Because Levene's test indicated that the assumption of equal
variances was violated (p < 0.001), the Welch F-ratio was reported. There was a significant
effect of the machine learning method on the accuracy: F (3, 16.367) = 110.23, p < 0.001.
In addition, Games–Howell post-hoc tests revealed that the ANN achieved significantly
higher accuracy than the other three methods (p < 0.001 for all three comparisons) and
that the SVM achieved significantly higher accuracy than RF (p = 0.025) and LR (p < 0.001).
Furthermore, RF showed significantly higher accuracy than LR (p < 0.001).

Figure 8. Mean and standard deviation (error bars) of accuracy values obtained with the ANN, SVM,
RF, and LR, and plot of the accuracy obtained with each subject (black circles). The machine learning
methods significantly affected the accuracy (p < 0.001, one-way ANOVA). Post-hoc comparisons
between machine learning methods were conducted using the Games–Howell method (* p < 0.05,
*** p < 0.001). The variances in accuracy of the different machine learning methods were determined
using F-tests with the FDR correction.

The distribution of the accuracies (Figure 8) suggests that the ANN and LR had the
smallest and largest inter-subject variance in accuracy, respectively, among the methods
considered. We used F-tests to statistically compare the variances in accuracy between
the methods with P-values corrected by the FDR method. Pairwise comparisons of the
variances revealed that the ANN had significantly smaller variance than the other three
methods (p = 0.003 with SVM, p < 0.001 with RF and LR), the SVM had significantly
smaller variance than LR (p = 0.002), and RF showed significantly smaller variance than LR
(p = 0.012) (Table 4).

Table 4. p-values from statistical comparisons of the variance in accuracy between machine learning
methods. F-tests were used with the FDR correction for p-values.

ANN SVM RF
SVM 0.003 - -
RF <0.001 0.386 -
LR <0.001 0.002 0.012
ANN: artificial neural network; SVM: support vector machine; RF: random forest; LR: logistic regression.

3.3. Confusion Matrices of ANN-Based Classifiers


The ANN-based classifiers achieved the highest performance, with a mean accuracy of
0.940 in the test datasets (Table 3). Figure 9 presents confusion matrices of the ANN-based
classifiers in the test datasets. A combined confusion matrix (Figure 9A) was calculated by
averaging entries from confusion matrices of individual subjects (Figure 9B). On average,
non-rest gestures were classified with a sensitivity of at least 0.96, while the rest state was
classified with a 0.70 sensitivity (Figure 9A). All individual confusion matrices also showed
relatively low sensitivity of the rest state compared to the non-rest gestures (Figure 9B).
These results suggest that misclassification was mostly due to the prediction of the rest
signals as non-rest gestures.

Figure 9. Confusion matrices of ANN-based classifiers in the test datasets. True and predicted labels
are shown on the horizontal and vertical axes, respectively. (A) Average of all subjects. (B) Individual subjects.
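One possible way to compute such a combined matrix, sketched with scikit-learn under the assumption that per-subject true and predicted labels are available, is to row-normalize each subject's confusion matrix and then average across subjects.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def normalized_cm(y_true, y_pred, n_classes=10):
    """Confusion matrix with each row (true gesture) normalized to sum to one."""
    cm = confusion_matrix(y_true, y_pred, labels=list(range(n_classes))).astype(float)
    rows = cm.sum(axis=1, keepdims=True)
    return np.divide(cm, rows, out=np.zeros_like(cm), where=rows > 0)

# per_subject = [(y_true_1, y_pred_1), ..., (y_true_10, y_pred_10)]
combined = np.mean([normalized_cm(t, p) for t, p in per_subject], axis=0)
```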

3.4. Performance Comparison of ANN-Based Classifiers According to Feature Combinations


We evaluated the performance of ANN-based classifiers based on different feature
combinations. Three features, RMS, VAR, and MAV, are closely related to each other, and
redundancy among them has been suggested [7]. To explore the redundancy of these
related features, we evaluated the performance of ANN-based classifiers according to the
different feature combinations. We used ZC, SSC, and WL as the base feature set and added
one, two, or three features selected from the group of RMS, VAR, and MAV. Hence, a total
of eight combinations were tested, as shown in Figure 10 and Table 5. The base feature
set (ZC/SSC/WL) showed the lowest accuracy, and combining all six features achieved
the highest accuracy. One-way ANOVA revealed that the effect of the feature combination
on the accuracy was significant (p < 0.001). Post-hoc comparisons were run and indicated
interesting results (post-hoc graphs in Figure 10). The accuracy was significantly improved
by adding features to the base feature set regardless of the combination of RMS, VAR, and
MAV. However, after adding one or more features to the base feature set, there was no
significant difference in accuracy between any pairs of comparisons except for two cases.
The accuracy evaluated from ZC/SSC/WL + VAR was significantly lower than that from
ZC/SSC/WL + MAV + RMS and ZC/SSC/WL + MAV + RMS + VAR.

Figure 10. Box plots comparing accuracy of ANN-based classifiers according to feature combinations.
The plot shows the median (thick line in the box), interquartile range (the box), range (whiskers),
and accuracies obtained from each subject (black dots). The effect of the feature combination on the
accuracy was significant (p < 0.001, one-way ANOVA). Lines above the box plots display post-hoc
comparisons between feature combinations conducted using the Games–Howell method (* p < 0.05).

Table 5. Accuracy of ANN-based classifiers according to feature combinations.

Feature Combination Mean Accuracy ± SD


ZC/SSC/WL 0.884 ± 0.028
ZC/SSC/WL + MAV 0.926 ± 0.012
ZC/SSC/WL + RMS 0.930 ± 0.011
ZC/SSC/WL + VAR 0.921 ± 0.011
ZC/SSC/WL + MAV + RMS 0.938 ± 0.011
ZC/SSC/WL + MAV + VAR 0.934 ± 0.009
ZC/SSC/WL + RMS + VAR 0.933 ± 0.011
ZC/SSC/WL + MAV + RMS + VAR 0.940 ± 0.008
ZC: zero crossing; SSC: slope sign change; WL: waveform length; MAV: mean absolute value; RMS: root mean
square; VAR: variance.

3.5. Estimation of Real-Time Performance Using ANN-Based Classifiers


To estimate the real-time performance of ANN-based classifiers, a classifier model was
built with data from the first to the third round, and data from the fourth round were used
as the test dataset to evaluate the accuracy. Figure 11 shows the classification accuracy for
each subject. The mean accuracy (SD) was 0.616 (0.0530). The highest accuracy of 0.675 was
obtained in subject #8, and the lowest accuracy of 0.544 was obtained in subject #4.

Figure 11. Estimation of real-time performance using ANN-based classifiers.

4. Discussion
We demonstrated the performance of personalized hand/finger gesture classifiers
based on TD features only. Four machine learning methods—ANN, SVM, RF, and LR—
were implemented to classify ten gestures, including seven IF gestures. The ANN method
achieved the highest mean accuracy of 0.940 (Figure 8), suggesting that a relatively large
number of gestures can be detected using only three EMG channels. In addition, the
ANN-based classifiers showed the lowest variance in the accuracy (Table 4), suggesting that
their performances were affected by inter-subject variability in EMG signals significantly
less than those of the other methods.
Table 6 shows previous studies in which EMG signals were used to classify hand/finger
gestures using machine learning methods. Direct comparisons of the results of different
studies are difficult, owing to methodological reasons. Thus, we included recently pub-
lished, personalized EMG recognition studies that used TD features and focused on IF
gestures, and we excluded studies that used commercially available multi-channel wear-
able devices, which cannot specify the electrode positions relative to the muscles. Table 6
also lists important parameters used: the number of subjects, feature types, the number
of features, the number of gestures (NG ), the number of channels (NCh ), the ratio of the
number of gestures to that of channels (NCh /NG ), the window length, machine learning
methods used, and accuracy [1].
We chose to use TD features to reduce the time delay due to the computational load,
thereby building a more suitable system for real-time detection. TD features are rapid
and straightforward to calculate; they can be extracted directly from raw EMG signals
without any transformation [41]. Previous studies suggested that TD features performed
better in classifying EMG signals in both transient and steady states than the features from
other domains [8–10]. However, other studies also demonstrated that the combination of
features from multiple domains, including FD and TFD, can improve the performance [42].
Our results suggest that TD features can indeed achieve high performance if an algorithm
suitable for classification is chosen.

Table 6. Recent EMG-based hand/finger gesture recognition studies that used TD features and
focused on IF gestures. Studies that used commercially available multi-channel wearable devices
were excluded.

Reference | Number of Subjects | Feature Types | Number of Features | Number of Gestures (NG) | Number of Channels (NCh) | NG/NCh | Window Length | ML Method | Accuracy
Palkowski & Redlarski, 2016 [12] | N/A | TD | 6 | 6 (2 W + 2 WH + 2 IF) | 2 | 3 | N/A | SVM | 0.981
Fu et al., 2017 [13] | 5 | TD-AR | 65 | 8 (8 IF) | 6 | 1.33 | 125 ms | PNN | 0.922
Shi et al., 2018 [14] | 13 | TD | 8 | 4 (WH + 3 IF) | 2 | 2 | 250 ms | KNN | 0.938
Sharma & Gupta, 2018 [15] | 4 | TD, FD | 33 | 9 (8 IF + R) | 3 | 3 | 125 ms | SVM | 0.901
Qi et al., 2020 [16] | N/A | TD, FD | 64 | 9 (2 WH + 2 W + 4 IF + R) | 16 | 0.56 | N/A | ANN | 0.951
Arteaga et al., 2020 [17] | 20 | TD, FD | 24 | 6 (5 IF + WH) | 4 | 1.5 | N/A | KNN | 0.975
Fajardo et al., 2021 [18] | N/A | TD, FD, TFD, features extracted by CNN | 198 | 10 (6 W + 3 WH + IF, highest *); 4 (lowest *) | 1 | 10; 4 | 750 ms | CNN | 0.657; 0.952
This study | 10 | TD | 18 | 10 (2 WH + 7 IF + R) | 3 | 3.34 | 250 ms | ANN | 0.940

TD: time domain; AR: auto-regressive; FD: frequency domain; TFD: time–frequency domain; W: wrist; WH: whole hand; IF: individual finger; R: rest state; ML: machine learning; SVM: support vector machine; PNN: probabilistic neural network; KNN: k-nearest neighbors; ANN: artificial neural network; CNN: convolutional neural network; N/A: not applicable. * The study carried out by Fajardo et al. [18] tested four to ten gestures; the results for the highest and the lowest numbers of gestures are listed.

Despite recent advances in ANN technology, to the best of our knowledge, ANN
algorithms have been rarely applied to TD features for classifying hand/finger gestures.
Classical machine learning methods, such as SVM and KNN, have been applied to TD
features and achieved 0.94–0.98 accuracy in classifying four to six gestures [12,14]. Fu et al.
used a probabilistic neural network with the features from a TD auto-regressive model
to classify eight gestures and achieved a 0.922 accuracy [13]. Qi et al. [16] achieved an
accuracy of 0.951 in classifying nine gestures using an ANN algorithm with TD and FD
features. However, these studies were performed with a much greater number of channels
and features than the present study.
The number of gestures classified in this study was greater than that in most previous
studies (Table 6). More importantly, seven out of ten gestures used in this study were
associated with the movement of IF (Table 6, Figure 3). Fajardo et al. [18] classified ten
gestures, including one IF gesture, but achieved an accuracy of 0.657. Qi et al. [16] classified
nine gestures, including four IF gestures, with an accuracy of 0.951. Fu et al. [13] and
Sharma and Gupta [15] classified eight IF gestures, but they achieved accuracies lower
than that obtained in this study. The ANN classifiers used in this study achieved a high
accuracy in the classification of seven IF gestures. Therefore, we achieved high performance
in the classification of various IF gestures, which are more challenging to classify than
whole-hand and wrist gestures [1], using TD features and ANN algorithms.
We only used three channels to measure EMG signals, and therefore, the ratio of the
number of gestures to channels was 3.34, which was higher than that used in most previous
studies. Palkowski and Redlarski [12] and Shi et al. [14] used two EMG channels and
classified a smaller number of wrist and whole-hand gestures. Sharma and Gupta [15]
used three channels to classify nine gestures, but their accuracy was relatively low. Fajardo
et al. [18] used a higher ratio of the number of gestures to channels. They used a single
channel and classified four to ten gestures. However, in their study, the accuracy decreased
significantly as the number of gestures increased; it decreased from 0.952 to 0.657 as the
number of gestures increased from four to ten, suggesting that the number of differentiable
classes is small if only one channel is used to measure EMG signals. The use of a large
number of channels would increase the cost and complexity of signal acquisition hardware
and the computation time for classification, which would affect the usability of the recognition
system; moreover, the accuracy would not necessarily improve [43]. Therefore, it
is important to find an optimal number of channels for the high-accuracy classification of
various hand gesture movements. We successfully demonstrated the classification of a
large number of gestures using few electrodes, which is advantageous for finger gesture
recognition systems.
The entire data acquisition process proceeded for each subject without repositioning
the electrodes until the end. Therefore, the retraining of a classifier model to account for
the electrode repositioning was not required. However, if the electrodes were repositioned
during the measurement, new training might be needed for each repositioning to achieve
the best performance in classification. Before attaching the electrodes to each target muscle,
we carefully examined the forearm of the subject to find the correct positions to the best
of our ability. This effort helped to reduce the effects of electrode location or individual
anatomy but did not eliminate them completely. More studies are required in the future to
test the effect of electrode repositioning and anatomical variability on the performance of
classifiers and to develop a retraining process for adjustment.
Although the ANN showed the highest accuracy in this study, comparisons between
algorithms should be bounded by the current dataset and analyzed with caution. The
success of a certain algorithm is not solely decided by the superiority of the algorithm
itself but is significantly affected by the characteristics of the dataset as well. For example,
previous studies demonstrated that ANN models showed lower performance than other
classical methods, such as SVM [44]. Therefore, the ANN method proposed in this study
may not work well for other EMG datasets. Indeed, a vanilla ANN architecture is prone to
overfitting. However, recent ANN techniques, such as dropout and batch normalization,
have been employed in EMG studies to overcome overfitting [45]. We also applied these
recent techniques to the ANN, which may play an important role in improving the overall
performance of the ANN-based classifiers. In the future, we will test additional kernel
functions or parameters for SVM models and other machine learning methods for more
in-depth comparisons among algorithms.
It is important to note that classifiers based on ANN algorithms showed a signifi-
cantly lower variance in the accuracy than those based on the other algorithms (Table 4),
indicating that their performance was less affected by individual variability. Classifiers
for hand/finger gesture recognition should be trained and built based on personalized
datasets owing to inter-subject variability in EMG signals [46–49]. In particular, inter-
subject variability between amputees is higher than that between non-amputees. Therefore,
achieving consistently high accuracy in subjects with various physiological conditions is a
prerequisite for the development of prosthetic control systems based on EMG signals.
The sensitivity for the rest gesture was lower than the sensitivities for the non-rest gestures in the
ANN-based classifier (Figure 9). The confusion matrices indicated that misclassification was
caused mainly by the prediction of rest gestures as non-rest gestures. The rest and non-rest
gestures were alternated every 5 s during the experiment. Therefore, all transient states in
the measured signals represented transitions between the rest and non-rest gestures.
The prediction errors were associated with the EMG signals in these transient states. A
similar issue, i.e., prediction errors clustering around a transition zone, was reported in
previous studies [50–52] and remains challenging. To address this issue, post-processing
approaches, such as majority voting and confidence-based rejection, have been suggested
and performed, resulting in a decrease in the error rate [53,54]. However, implementing
post-processing would increase the overall computation time, forcing trade-off decisions
between accuracy and delay.
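As an example of the kind of post-processing mentioned above, a sliding majority vote over the last k window-level predictions can be sketched as follows; the value of k and the tie-breaking rule are illustrative choices rather than settings taken from the cited studies.

```python
from collections import Counter, deque

class MajorityVote:
    """Smooth a stream of window-level gesture predictions with a sliding majority vote."""

    def __init__(self, k: int = 5):
        self.history = deque(maxlen=k)

    def update(self, label: int) -> int:
        self.history.append(label)
        # most frequent label among the last k predictions (ties broken by earliest occurrence)
        return Counter(self.history).most_common(1)[0][0]

# voter = MajorityVote(k=5)
# smoothed = [voter.update(p) for p in window_predictions]
```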
We compared the accuracy of ANN-based classifiers according to feature combinations
(Figure 10). Adding one feature to the base feature set significantly improved the accuracy,
suggesting that critical information not present in the base features was provided by
RMS, VAR, and MAV. However, adding more than one feature did not lead to a further
increase in the accuracy, except for the case of ZC/SSC/WL + VAR. Adding three features
resulted in the highest mean accuracy, but statistical significance was observed only when
compared to the base feature set and ZC/SSC/WL + VAR. These results suggest that
applying RMS, VAR, and MAV simultaneously may cause redundancy in input data
for classification [7]. Therefore, although combining all TD features showed the best
performance in the current study, redundancy in the input data may need to be addressed,
using feature selection methods [55,56]. This issue would have greater significance when
the reduction in computational cost is considered crucial in system development.
We estimated the real-time performance of ANN-based classifiers (Figure 11). Com-
pared to the offline decoding (Figure 7), the real-time classification showed a considerable
decrease in accuracy, with the mean accuracy dropping from 0.940 to 0.616. A significant
difference between offline and real-time performance has been reported in previous studies.
Ortiz-Catalan et al. [57] demonstrated the offline and real-time classification for ten hand
gestures using four EMG channels. Using an MLP model, accuracies of 0.912 and 0.609
were achieved for offline and real-time tests, respectively. Similarly, Abbaspour et al. [58]
classified ten hand gestures using four EMG channels and demonstrated significant dif-
ference in accuracy between the offline and real-time decoding. They tested nine different
machine learning algorithms, including MLP, and all of them resulted in a substantial
decrease in accuracy. Their MLP model achieved accuracies of 0.917 and 0.698 for offline
and real-time tests, respectively. These results suggest that offline performance does not
necessarily translate to real-time systems. Abbaspour et al. [58] suggested that the differ-
ence in accuracy could be decreased by having subjects practice the gestures. More studies
on various aspects, including consistency in muscle contraction, algorithm optimization,
and evaluation, will be conducted in the future to improve real-time performance.
It is crucial to find an optimal window length while keeping the data processing time
as short as possible. Adopting a longer segmented window would increase the accuracy
of the classifiers as more information would be used for gesture recognition, but it would
also increase the controller delay and computational burden [53,59]. Previous studies
suggested that the window length for EMG signal classification should be less than 300 ms
for real-time response and that the optimal length range is 150–250 ms [60,61]. However,
few studies reported the window lengths used, so the comparison is difficult (Table 6).
With the same window length, Shi et al. [14] achieved a similar accuracy as that achieved
in this study, but they only classified four gestures using two EMG channels. Fu et al. [13]
and Sharma and Gupta [15] used a 125-ms window but used multiple-domain features
and a larger number of features, which may increase the data processing time. Fajardo
et al. [18] used a 750-ms window and extracted a large number of features from various
domains, which may not be appropriate for real-time applications. In this study, we used
a 250-ms window and only extracted TD features to minimize the response time without
deteriorating the performance of the classifiers. In the future, we will use a shorter window
length to further reduce the response time while considering the trade-off between the response
time and accuracy.
This study had some limitations. First, only healthy male subjects were recruited.
Because we aim to develop personalized classifiers for gesture recognition, more hetero-
geneous conditions need to be tested, such as female subjects, amputees, or subjects
with weak muscles. Particularly, to develop an application for amputees, we may need
to implement more EMG channels and more sophisticated models. A previous study
demonstrated that different approaches were required for healthy subjects and amputees
to optimize classification accuracy [62]. Additionally, because each amputee has different
muscle conditions and mobility, the proposed method in this study may not be applicable
for practical prosthetic solutions [63]. Second, we did not optimize electrode positions.
Our results suggest that a relatively large number of gestures can be detected using only
three EMG channels. As the number of channels decreases, the spatial coverage of EMG
signals becomes limited, in which case optimizing electrode positions might be critical for
improving gesture recognition [64]. A previous study based on fixed electrode placement
used two EMG channels to recognize hand/wrist gestures and demonstrated that electrode
position optimization improved the classification performance [64]. Because our study
was also based on just three channels, optimizing electrode positions might benefit the
performance. However, we used fixed electrode positions throughout the experiments and
did not test other positions. In addition, the number of subjects should be increased to
demonstrate the consistency of the high performance of the classifiers, i.e., low variance in
the accuracy, with a heterogeneous population. Finally, the size of and distance between
the electrodes (inter-electrode distance, IED) used in this study were larger than recom-
mended [65,66]. The gel area of 201 mm2 corresponds to a diameter of 16 mm. Due to the
filtering effect of surface electrodes, smaller electrodes with a diameter of 3–5 mm have
been recommended. While the IED in this study was 29.4 mm, an IED between 8 and
10 mm has been recommended to reduce crosstalk contamination in EMG signals. To
determine the optimum IED, the length of targeted muscle may need to be considered.
Therefore, the quality of EMG signals might be considerably affected by the filtering effect
and crosstalk. If we use the standard electrode size and IED to measure signals, we may
observe different results, which will be further investigated in future studies.

5. Conclusions
We developed EMG-based hand/finger gesture classifiers based on ANN, SVM, RF,
and LR algorithms, and we tested the classifiers on ten healthy subjects performing ten
hand/finger gestures, including seven IF gestures. We achieved a mean accuracy of 0.940
in the classification of gestures with an ANN-based classifier. We only used TD features
but achieved a higher ratio of the number of gestures to channels than other similar
studies, demonstrating that the method can improve the recognition system usability while
reducing the computational burden. The ANN-based classifiers also showed the lowest
inter-subject variance in accuracy, suggesting that this method was the least affected by
individual variability. In future studies, we will perform additional tests with a larger, more
heterogeneous population to further evaluate the performance of the proposed method.

Author Contributions: Conceptualization, K.H.L., J.Y.M. and S.B.; Methodology, K.H.L., J.Y.M. and
S.B.; Formal Analysis, K.H.L., J.Y.M. and S.B.; Writing—Original Draft Preparation, K.H.L., J.Y.M.
and S.B.; Writing—Review & Editing, K.H.L., J.Y.M. and S.B.; Visualization, K.H.L. and J.Y.M.;
Supervision, S.B.; Project Administration, S.B. All authors have read and agreed to the published
version of the manuscript.
Funding: This research was supported by the Incheon National University Research Grant in 2017
and by the National Research Foundation of Korea (NRF) grant funded by the Korea government
(MSIT) (No. 2020R1F1A1049236).
Institutional Review Board Statement: This study was conducted according to the relevant guide-
lines and approved by the Institutional Review Board of Incheon National University, Incheon, Korea
(No. 7007971-201901-002).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: Not available.
Acknowledgments: We thank the National IT Industry Promotion Agency (NIPA) for the high-
performance computing support program in 2020 and 2021.
Conflicts of Interest: The authors declare no conflict of interest.

Appendix A
The codes for modeling are available at https://github.com/Bioelectronics-Laboratory/
EMG_hand_finger_gestures_classification (accessed on 29 December 2021).

References
1. Jiralerspong, T.; Nakanishi, E.; Liu, C.; Ishikawa, J. Experimental Study of Real-Time Classification of 17 Voluntary Movements
for Multi-Degree Myoelectric Prosthetic Hand. Appl. Sci. 2017, 7, 1163. [CrossRef]
2. Li, G.; Schultz, A.E.; Kuiken, T.A. Quantifying pattern recognition-based myoelectric control of multifunctional transradial
prostheses. IEEE Trans. Neural Syst. Rehabil. Eng. 2010, 18, 185–192. [CrossRef] [PubMed]

3. Ahsan, M.R.; Ibrahimy, M.I.; Khalifa, O.O. Electromygraphy (EMG) signal based hand gesture recognition using artificial neural
network (ANN). In Proceedings of the 2011 4th International Conference on Mechatronics (ICOM), Kuala Lumpur, Malaysia,
17–19 May 2011. [CrossRef]
4. Phinyomark, A.; Quaine, F.; Charbonnier, S.; Serviere, C.; Tarpin-Bernard, F.; Laurillau, Y. Feature extraction of the first difference
of EMG time series for EMG pattern recognition. Comput. Methods Programs Biomed. 2014, 117, 247–256. [CrossRef]
5. Shim, H.-M.; Lee, S. Multi-channel electromyography pattern classification using deep belief networks for enhanced user
experience. J. Cent. South Univ. 2015, 22, 1801–1808. [CrossRef]
6. Wu, Y.; Hu, X.; Wang, Z.; Wen, J.; Kan, J.; Li, W. Exploration of feature extraction methods and dimension for sEMG signal
classification. Appl. Sci. 2019, 9, 5343. [CrossRef]
7. Phinyomark, A.; Phukpattaranont, P.; Limsakul, C. Feature reduction and selection for EMG signal classification. Expert Syst.
Appl. 2012, 39, 7420–7431. [CrossRef]
8. Hudgins, B.; Parker, P.; Scott, R.N. A New Strategy for Multifunction Myoelectric Control. IEEE Trans. Biomed. Eng. 1993, 40,
82–94. [CrossRef]
9. Englehart, K.; Hudgins, B. A Robust, Real-Time Control Scheme for Multifunction Myoelectric Control. IEEE Trans. Biomed. Eng.
2003, 50, 848–854. [CrossRef]
10. Yang, D.; Zhao, J.; Jiang, L.; Liu, H. Dynamic hand motion recognition based on transient and steady-state emg signals. Int. J.
Hum. Robot. 2012, 9, 1–18. [CrossRef]
11. Nazmi, N.; Rahman, M.A.A.; Yamamoto, S.I.; Ahmad, S.A.; Malarvili, M.B.; Mazlan, S.A.; Zamzuri, H. Assessment on stationarity
of EMG signals with different windows size during isotonic contractions. Appl. Sci. 2017, 7, 1050. [CrossRef]
12. Palkowski, A.; Redlarski, G. Basic Hand Gestures Classification Based on Surface Electromyography. Comput. Math. Methods Med.
2016, 2016, 6481282. [CrossRef]
13. Fu, J.; Xiong, L.; Song, X.; Yan, Z.; Xie, Y. Identification of finger movements from forearm surface EMG using an augmented
probabilistic neural network. In Proceedings of the 2017 IEEE/SICE International Symposium on System Integration (SII), Taipei,
Taiwan, 11–14 December 2017. [CrossRef]
14. Shi, W.T.; Lyu, Z.J.; Tang, S.T.; Chia, T.L.; Yang, C.Y. A bionic hand controlled by hand gesture recognition based on surface EMG
signals: A preliminary study. Biocybern. Biomed. Eng. 2018, 38, 126–135. [CrossRef]
15. Sharma, S.; Gupta, R. On the use of temporal and spectral central moments of forearm surface EMG for finger gesture classification.
In Proceedings of the 2018 2nd International Conference on Micro-Electronics and Telecommunication Engineering (ICMETE),
Ghaziabad, India, 20–21 September 2018. [CrossRef]
16. Qi, J.; Jiang, G.; Li, G.; Sun, Y.; Tao, B. Surface EMG hand gesture recognition system based on PCA and GRNN. Neural Comput.
Appl. 2020, 32, 6343–6351. [CrossRef]
17. Arteaga, M.V.; Castiblanco, J.C.; Mondragon, I.F.; Colorado, J.D.; Alvarado-Rojas, C. EMG-driven hand model based on the
classification of individual finger movements. Biomed. Signal Process. Control 2020, 58, 101834. [CrossRef]
18. Fajardo, J.M.; Gomez, O.; Prieto, F. EMG hand gesture classification using handcrafted and deep features. Biomed. Signal Process.
Control. 2021, 63, 102210. [CrossRef]
19. Arozi, M.; Caesarendra, W.; Ariyanto, M.; Munadi, M.; Setiawan, J.D.; Glowacz, A. Pattern recognition of single-channel sEMG
signal using PCA and ANN method to classify nine hand movements. Symmetry 2020, 12, 541. [CrossRef]
20. Mendes Junior, J.J.A.; Freitas, M.L.B.; Campos, D.P.; Farinelli, F.A.; Stevan, S.L.; Pichorim, S.F. Analysis of influence of segmenta-
tion, features, and classification in sEMG processing: A case study of recognition of brazilian sign language alphabet. Sensors
2020, 20, 4359. [CrossRef] [PubMed]
21. Asif, A.R.; Waris, A.; Gilani, S.O.; Jamil, M.; Ashraf, H.; Shafique, M.; Niazi, I.K. Performance evaluation of convolutional neural
network for hand gesture recognition using EMG. Sensors 2020, 20, 1642. [CrossRef]
22. Gonzalez-Ibarra, J.C.; Soubervielle-Montalvo, C.; Vital-Ochoa, O.; Perez-Gonzalez, H.G. EMG pattern recognition system based
on neural networks. In Proceedings of the 2012 11th Mexican International Conference on Artificial Intelligence, San Luis Potos,
Mexico, 27 October–4 November 2012. [CrossRef]
23. Dorgham, O.; Al-Mherat, I.; Al-Shaer, J.; Bani-Ahmad, S.; Laycock, S. Smart system for prediction of accurate surface electromyog-
raphy signals using an artificial neural network. Futur. Internet 2019, 11, 25. [CrossRef]
24. Karabulut, D.; Ortes, F.; Arslan, Y.Z.; Adli, M.A. Comparative evaluation of EMG signal features for myoelectric controlled
human arm prosthetics. Biocybern. Biomed. Eng. 2017, 37, 326–335. [CrossRef]
25. Elamvazuthi, I.; Duy, N.H.X.; Ali, Z.; Su, S.W.; Khan, M.K.A.A.; Parasuraman, S. Electromyography (EMG) based Classification of
Neuromuscular Disorders using Multi-Layer Perceptron. Procedia Comput. Sci. 2015, 76, 223–228. [CrossRef]
26. Ariyanto, M.; Caesarendra, W.; Mustaqim, K.A.; Irfan, M.; Pakpahan, J.A.; Setiawan, J.D.; Winoto, A.R. Finger movement
pattern recognition method using artificial neural network based on electromyography (EMG) sensor. In Proceedings of the
2015 International Conference on Automation, Cognitive Science, Optics, Micro Electro-Mechanical System, and Information
Technology (ICACOMIT), Bandung, Indonesia, 29–30 October 2015. [CrossRef]
27. Kurniawan, S.R.; Pamungkas, D. MYO Armband sensors and Neural Network Algorithm for Controlling Hand Robot.
In Proceedings of the 2018 International Conference on Applied Engineering (ICAE), Batam, Indonesia, 3–4 October 2018.
[CrossRef]
28. Zhang, Z.; Yang, K.; Qian, J.; Zhang, L. Real-time surface EMG pattern recognition for hand gestures based on an artificial neural
network. Sensors 2019, 19, 3170. [CrossRef]
29. Yang, K.; Zhang, Z. Real-time pattern recognition for hand gesture based on ANN and surface EMG. In Proceedings of the 2019
IEEE 8th Joint International Information Technology and Artificial Intelligence Conference (ITAIC), Chongqing, China, 24–26
May 2019. [CrossRef]
30. Merletti, R. Standards for Reporting EMG Data. J. Electromyogr. Kinesiol. 1999, 9, III–IV.
31. Heywood, S.; Pua, Y.H.; McClelland, J.; Geigle, P.; Rahmann, A.; Bower, K.; Clark, R. Low-cost electromyography—Validation
against a commercial system using both manual and automated activation timing thresholds. J. Electromyogr. Kinesiol. 2018, 42,
74–80. [CrossRef]
32. Del Toro, S.; Wei, Y.; Olmeda, E.; Ren, L.; Guowu, W.; Díaz, V. Validation of a Low-Cost Electromyography (EMG) System via a
Commercial and Accurate EMG Device: Pilot Study. Sensors 2019, 19, 5214. [CrossRef]
33. SparkFun Electronics. Electromyography Sensor for Microcontroller Applications: MyoWare™ Muscle Sensor (AT-04-001)
Datasheet. Available online: https://cdn.sparkfun.com/assets/a/3/a/f/a/AT-04-001.pdf (accessed on 26 October 2021).
34. Chu, J.U.; Moon, I.; Mun, M.S. A real-time EMG pattern recognition system based on linear-nonlinear feature projection for a
multifunction myoelectric hand. IEEE Trans. Biomed. Eng. 2006, 53, 2232–2239. [CrossRef]
35. Zahak, M. Signal Acquisition Using Surface EMG and Circuit Design Considerations for Robotic Prosthesis. In Computational
Intelligence in Electromyography Analysis—A Perspective on Current Applications and Future Challenges; IntechOpen: London, UK, 2012.
[CrossRef]
36. Barbero, M.; Merletti, R.; Rainoldi, A. Atlas of Muscle Innervation Zones: Understanding Surface EMG and Its Applications; Springer:
Milano, Italy, 2012; ISBN 978-88-470-2462-5.
37. Wang, M.; Wang, X.; Peng, C.; Zhang, S.; Fan, Z.; Liu, Z. Research on EMG segmentation algorithm and walking analysis based
on signal envelope and integral electrical signal. Photon. Netw. Commun. 2019, 37, 195–203. [CrossRef]
38. Zardoshti-Kermani, M.; Wheeler, B.C.; Badie, K.; Hashemi, R.M. EMG feature evaluation for movement control of upper extremity
prostheses. IEEE Trans. Rehabil. Eng. 1995, 3, 324–333. [CrossRef]
39. Challis, R.E.; Kitney, R.I. Biomedical signal processing (in four parts)—Part 3 The power spectrum and coherence function.
Med. Biol. Eng. Comput. 1991, 29, 225–241. [CrossRef]
40. Said, S.; Karar, A.S.; Beyrouthy, T.; Alkork, S.; Nait-Ali, A. Biometrics verification modality using multi-channel sEMG wearable
bracelet. Appl. Sci. 2020, 10, 6960. [CrossRef]
41. Nazmi, N.; Rahman, M.A.A.; Yamamoto, S.I.; Ahmad, S.A.; Zamzuri, H.; Mazlan, S.A. A review of classification techniques of
EMG signals during isotonic and isometric contractions. Sensors 2016, 16, 1304. [CrossRef]
42. Abbaspour, S.; Lindén, M.; Gholamhosseini, H.; Naber, A.; Ortiz-Catalan, M. Evaluation of surface EMG-based recognition
algorithms for decoding hand movements. Med. Biol. Eng. Comput. 2020, 58, 83–100. [CrossRef]
43. Hargrove, L.J.; Englehart, K.; Hudgins, B. A comparison of surface and intramuscular myoelectric signal classification.
IEEE Trans. Biomed. Eng. 2007, 54, 847–853. [CrossRef] [PubMed]
44. Phinyomark, A.; Khushaba, R.N.; Scheme, E. Feature extraction and selection for myoelectric control based on wearable EMG
sensors. Sensors 2018, 18, 1615. [CrossRef] [PubMed]
45. Phinyomark, A.; Scheme, E. EMG pattern recognition in the era of big data and deep learning. Big Data Cogn. Comput. 2018, 2, 21.
[CrossRef]
46. Atzori, M.; Castellini, C.; Müller, H. Spatial Registration of Hand Muscle Electromyography Signals. In Proceedings of the 7th
International Workshop on Biosignal Interpretation (BSI2012), Como, Italy, 2–4 July 2012.
47. Martens, J.; Daly, D.; Deschamps, K.; Fernandes, R.J.P.; Staes, F. Intra-individual variability of surface electromyography in front
crawl swimming. PLoS ONE 2015, 10, e0144998. [CrossRef]
48. Winter, D.A.; Yack, H.J. EMG profiles during normal human walking: Stride-to-stride and inter-subject variability.
Electroencephalogr. Clin. Neurophysiol. 1987, 67, 402–411. [CrossRef]
49. Guidetti, L.; Rivellini, G.; Figura, F. EMG patterns during running: Intra- and inter-individual variability. J. Electromyogr. Kinesiol.
1996, 6, 37–48. [CrossRef]
50. Jiang, N.; Lorrain, T.; Farina, D. A state-based, proportional myoelectric control method: Online validation and comparison with
the clinical state-of-the-art. J. Neuroeng. Rehabil. 2014, 11, 110. [CrossRef]
51. Lorrain, T.; Jiang, N.; Farina, D. Influence of the training set on the accuracy of surface EMG classification in dynamic contractions
for the control of multifunction prostheses. J. Neuroeng. Rehabil. 2011, 8, 25. [CrossRef]
52. Hargrove, L.J.; Scheme, E.J.; Englehart, K.B.; Hudgins, B.S. Multiple binary classifications via linear discriminant analysis for
improved controllability of a powered prosthesis. IEEE Trans. Neural Syst. Rehabil. Eng. 2010, 18, 49–57. [CrossRef]
53. Englehart, K.; Hudgins, B.; Parker, P.A. A wavelet-based continuous classification scheme for multifunction myoelectric control.
IEEE Trans. Biomed. Eng. 2001, 48, 302–311. [CrossRef]
54. Scheme, E.J.; Hudgins, B.S.; Englehart, K.B. Confidence-based rejection for improved pattern recognition myoelectric control.
IEEE Trans. Biomed. Eng. 2013, 60, 1563–1570. [CrossRef]
55. Jović, A.; Brkić, K.; Bogunović, N. A review of feature selection methods with applications. In Proceedings of the 2015 38th
International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija,
Croatia, 25–29 May 2015. [CrossRef]
56. Remeseiro, B.; Bolon-Canedo, V. A review of feature selection methods in medical applications. Comput. Biol. Med. 2019, 112,
103375. [CrossRef]
57. Ortiz-Catalan, M.; Brånemark, R.; Håkansson, B. BioPatRec: A modular research platform for the control of artificial limbs based
on pattern recognition algorithms. Source Code Biol. Med. 2013, 8, 1–18. [CrossRef]
58. Abbaspour, S.; Naber, A.; Ortiz-Catalan, M.; Gholamhosseini, H.; Lindén, M. Real-time and offline evaluation of myoelectric
pattern recognition for the decoding of hand movements. Sensors 2021, 21, 5677. [CrossRef]
59. Asghari Oskoei, M.; Hu, H. Myoelectric control systems-A survey. Biomed. Signal Process. Control 2007, 2, 275–294. [CrossRef]
60. Wang, N.; Chen, Y.; Zhang, X. The recognition of multi-finger prehensile postures using LDA. Biomed. Signal Process. Control
2013, 8, 706–712. [CrossRef]
61. Khushaba, R.N.; Takruri, M.; Miro, J.V.; Kodagoda, S. Towards limb position invariant myoelectric pattern recognition using
time-dependent spectral features. Neural Netw. 2014, 55, 42–58. [CrossRef]
62. Daley, H.; Englehart, K.; Hargrove, L.; Kuruganti, U. High density electromyography data of normally limbed and transradial
amputee subjects for multifunction prosthetic control. J. Electromyogr. Kinesiol. 2012, 22, 478–484. [CrossRef]
63. Parajuli, N.; Sreenivasan, N.; Bifulco, P.; Cesarelli, M.; Savino, S.; Niola, V.; Esposito, D.; Hamilton, T.J.; Naik, G.R.; Gunawardana,
U.; et al. Real-time EMG based pattern recognition control for hand prostheses: A review on existing methods, challenges and
future implementation. Sensors 2019, 19, 4596. [CrossRef]
64. He, J.; Sheng, X.; Zhu, X.; Jiang, C.; Jiang, N. Spatial Information Enhances Myoelectric Control Performance with only Two
Channels. IEEE Trans. Ind. Inform. 2019, 15, 1226–1233. [CrossRef]
65. Merletti, R.; Cerone, G.L. Tutorial. Surface EMG detection, conditioning and pre-processing: Best practices. J. Electromyogr.
Kinesiol. 2020, 54, 102440. [CrossRef]
66. Merletti, R.; Muceli, S. Tutorial. Surface EMG detection in space and time: Best practices. J. Electromyogr. Kinesiol. 2019, 49, 102363.
[CrossRef]