0% found this document useful (0 votes)
64 views17 pages

A Machine Learning Approach For Fall Detection and Daily Living Activity Recognition

This article discusses the development of a machine learning framework for fall detection and daily activity recognition using data from wearable sensors. It reviews existing approaches for fall detection systems, which are generally either based on wearable devices or context-aware sensors. The proposed framework uses acceleration and angular velocity data from wearables to extract time-domain and frequency-domain features for classifying seven different activities, including falls, using algorithms like artificial neural networks, K-nearest neighbors, support vector machines and ensemble bagged trees. Evaluation shows the framework can accurately detect falls and recognize daily activities using both acceleration-only and combined acceleration/angular velocity features.

Uploaded by

Praveen
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
64 views17 pages

A Machine Learning Approach For Fall Detection and Daily Living Activity Recognition

This article discusses the development of a machine learning framework for fall detection and daily activity recognition using data from wearable sensors. It reviews existing approaches for fall detection systems, which are generally either based on wearable devices or context-aware sensors. The proposed framework uses acceleration and angular velocity data from wearables to extract time-domain and frequency-domain features for classifying seven different activities, including falls, using algorithms like artificial neural networks, K-nearest neighbors, support vector machines and ensemble bagged trees. Evaluation shows the framework can accurately detect falls and recognize daily activities using both acceleration-only and combined acceleration/angular velocity features.

Uploaded by

Praveen
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 17

This article has been accepted for publication in a future issue of this journal, but has not been

fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2019.2906693, IEEE Access

A Machine Learning Approach for Fall Detection


and Daily Living Activity Recognition
Ali Chelli, Member, IEEE, and Matthias Pätzold, Senior Member, IEEE

Abstract— The number of older people in western countries is falls a major public health problem worldwide. The number
constantly increasing. Most of them prefer to live independently of fatal falls per year is estimated by the WHO to be equal
and are susceptible to fall incidents. Falls often lead to serious to 420,000 per year [4]. After a fall, rapid medical care can
or even fatal injuries which are the leading cause of death for
elderlies. To address this problem, it is essential to develop robust significantly reduce the potential damage from fall injuries,
fall detection systems. In this context, we develop a machine resulting in a higher survival rate. For this reason, fall detection
learning framework for fall detection and daily living activity systems that can detect and report falls as fast as possible are
recognition. We use acceleration and angular velocity data of great importance.
from two public databases to recognize seven different activities During the last years, the development of fall detection
including falls and activities of daily living. From the acceleration
and angular velocity data, we extract time and frequency domain systems has become a hot research topic. A plethora of
features and provide them to a classification algorithm. In this fall detection systems are being developed using different
work, we test the performance of four algorithms for classifying approaches. We can categorize the existing fall detection
human activities. These algorithms are artificial neural network systems into two main classes: (i) wearable device-based
(ANN), K-nearest neighbors (KNN), quadratic support vector systems and (ii) context-aware systems [5]. Wearable device-
machine (QSVM), and ensemble bagged tree (EBT). New features
that improve the performance of the classifier are extracted based systems utilize a device that is worn by the user to detect
from the power spectral density of the acceleration. In a first falls. These devices integrate a gyroscope and an accelerometer
step, only the acceleration data are used for activity recognition. that can measure the acceleration and the angular velocity.
Our results reveal that the KNN, ANN, QSVM, and EBT The movement and activity of the user results in a temporal
algorithms could achieve an overall accuracy of 81.2%, 87.8%, variation of the measured acceleration and angular velocity
93.2%, and 94.1%, respectively. The accuracy of fall detection
reaches 97.2% and 99.1% without any false alarms for the data, leaving different fingerprints for different activities. By
QSVM and EBT algorithms, respectively. In a second step, we analyzing the measured acceleration and angular velocity data,
extract features from the autocorrelation function and the power it is possible to determine the type of activity performed by
spectral density of both the acceleration and the angular velocity the user. Several studies have investigated the performance
data, which improves the classification accuracy. By using the of wearable device-based systems [6]–[10]. A big advantage
proposed features, we could achieve an overall accuracy of 85.8%,
91.8%, 96.1%, and 97.7% for the KNN, ANN, QSVM, and EBT of wearable device-based fall detection systems is that they
algorithms, respectively. The accuracy of fall detection reaches can recognize human activity without compromising the user
100% for both the QSVM and EBT algorithms without any false privacy. Widely used smartphones with built-in accelerometer
alarm, which is the best achievable performance. and gyroscope can also be used to measure the acceleration
Index Terms—Fall detection, activity recognition, machine and angular velocity as the user moves and performs various
learning, acceleration data, angular velocity data, feature extrac- activities. The measured data can be analyzed in real time
tion. to detect falls. This fall detection approach is very attractive
because it requires no new equipment and is therefore cost-
I. I NTRODUCTION effective. For wearable device-based systems, if the user
forget to wear the device, it becomes impossible to monitor
Advances in the diagnosis and treatment of diseases have
the person activity. This represents the major limitation of
led to an increase in life expectancy. In every country, the
wearable device-based systems.
percentage of elderlies in the society is increasing. The World
Context-aware systems represent the second main category
Health Organization (WHO) estimates that by 2050 the num-
of fall detection systems. These systems are based on sensors
ber of people over 60 years will exceed two billion [1]. With
placed in the area around the user to be monitored. The sensors
increasing age, people become more susceptible to falls. In
used for monitoring encompass floor sensors, pressure sensors,
fact, as the age increases from 65 to over 70 years, the rate of
microphones, and cameras. Context-aware systems can include
falls and fall related injuries rises from 28% to 42% according
a single or many types of sensors which are deployed in
to the WHO [2]. For people over 65 years of age, fall related
specific areas. This makes fall detection impossible if the user
injuries were the leading cause of death in 2013 [3]. Moreover,
leaves the monitoring area. The most common type of context-
fall related injuries cause a significant costs for society, making
aware systems is video surveillance. To detect falls, a camera
A. Chelli and M. Pätzold are with the Faculty of Engineering and is used to capture a series of images which is subsequently
Science, University of Agder, 4898 Grimstad, Norway (e-mails:{ali.chelli, processed by a classification algorithm to determine whether
matthias.paetzold}@uia.no). a fall has occurred or not [11]. The use of video surveillance
This work is an extended version of a paper submitted to the IEEE
Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC for activity recognition and fall detection has been extensively
2018), Bologna, Italy, September 2018. investigated in the literature [12]–[17]. The main shortcoming

2169-3536 (c) 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2019.2906693, IEEE Access

of video surveillance systems is that they can compromise user accuracy of 93.5% and a precision of 94.2%. For our solution,
privacy. For this reason, video surveillance is considered illegal we use a feature vector of length 3 and achieve a fall detection
in some countries [18]. Moreover, context-aware systems are accuracy and precision of 96.8% and 100%, respectively. Thus,
susceptible to external events (e.g., changes in illuminance), we outperform the fall detection systems in [22] and [23] in
and have high installation costs. terms of accuracy and precision by using less features.
To evaluate the performance of fall detection systems, we Our second main contribution consists in proposing new
need records of actual falls. However, it is very difficult features that improve the classification accuracy of ADLs.
to collect real-world fall data, especially for older people. For instance, we have proposed new power spectral density
Generally, we need to monitor people for several weeks to (PSD) features that enhance the classification accuracy, espe-
obtain records of few actual falls. In the end, these few falls cially, for the activities walking, walking upstairs, and walking
are not enough to accurately evaluate the performance of the downstairs. In the literature, several features were extracted
developed fall detection system. Therefore, only a few studies from the PSD, such as the largest frequency value [6] and
have adopted this approach [19]–[21]. In the absence of data of the mean frequency value [24]. However, in this paper, we
actual falls, most researchers utilize simulated falls performed extract the main peaks of the PSD and use them as a feature
by volunteers. In addition to falls, these volunteers carry out for activity classification. To the best of our knowledge, this
activities of daily living (ADL) to check the accuracy of the feature has never been utilized before in activity classification.
developed fall detection system and its ability to differentiate Moreover, we extract additional novel features, such as the
between falls and ADL. peaks of the autocorrelation function (ACF), and the peaks
In the literature, several activity datasets are publicly avail- of the cross-correlation function (CCF), which are extracted
able which allow evaluating fall detection methods and assess- from the triaxial acceleration and the triaxial angular velocity
ing their performance on real-world data. An ADL database signals. These proposed new features allow a more accurate
which comprises acceleration and angular velocity data are distinction between different activities.
provided in [6], where a script describing the set of activities In this work, we combine the fall and ADL data from
to be carried out was provided to the participants. A total the datasets provided in [7] and [6]. These real-world data
of 30 participants of different genders, ages, and weights are then utilized to evaluate the performance of the proposed
contributed to this experiment. The experiment consisted in machine learning framework in human activity recognition.
performing ADL activities including: standing, sitting, walk- The acceleration and angular velocity signals are divided into
ing, walking upstairs, walking downstairs, and lying. To collect buffers of 2.56 s length. From each buffer, we extract a feature
the acceleration and angular velocity data, a smartphone was vector of length 66, in a first step. To improve the accuracy
attached to the waist of each participant. On average, the total of the classification, more features are extracted from each
time of recording for each participant was 192 seconds. It buffer, such that the length of the feature vector increases to
is worth mentioning that the dataset in [6] does not include 328. Note that the lengths of the considered feature vectors
fall data, but only ADL activities. Fall related data can be (66 and 328) are smaller than the number of features used
found in some public databases [7]–[10]. The authors of [7] in existing baseline solutions. We utilize 70% of the data to
provide a fall dataset which was performed by 42 participants. train the classifier, while 30% of the data are used to test
Both acceleration and angular velocity data were collected the trained classifier. For a feature vector of length 66, we
during this experiment. The participants in this experiment achieve a similar performance compared to existing solutions
were young healthy adults who performed planned falls. This [24], while for a feature vector of length 328, our approach
fact makes the collected data different from that of real falls outperforms existing solutions.
of elderly people. Due to the difficulty of gathering enough In this paper, we assess the performance of four different
real fall data from older people, the use of mimicked fall data classification algorithms, namely, the artificial neural network
for testing the performance of fall detection system is a well- (ANN), K-nearest neighbors (KNN), quadratic support vector
accepted approach by the researchers on this topic. machine (QSVM), and ensemble bagged tree (EBT). In a
In this paper, we propose a machine learning framework for first step, only the acceleration data are used for feature
fall detection and activity recognition. Our first main contri- extraction. A feature vector of length 66 is built and provided
bution is related to the features used for fall detection. More as input to the classification algorithm. Our results reveal
specifically, we use the mean value of the triaxial acceleration that the KNN algorithm has the worst performance with an
and achieve a fall detection accuracy and precision of 96.8% overall accuracy of 81.2%. The EBT algorithm has the best
and 100%, respectively. Even though, the mean value of the performance with an overall accuracy of 94.1%. The ANN
triaxial acceleration is not intrinsically a new feature since it and the QSVM algorithms achieve an overall accuracy of
was used in previous work [6] to classify ADLs, the mean 87.8% and 93.2%, respectively. The accuracy of fall detection
value for triaxial acceleration was not utilized as a feature in reaches 97.2% and 99.1% for the QSVM and EBT algorithms,
the classification of falls [22], [23]. Note that by extracting respectively, without any false alarm. In a second step, we
only the mean value of the triaxial acceleration, we construct extract features from both the acceleration and the angular
a feature vector of size 3. In [22], a feature vector of length velocity data and construct a feature vector of length 328. This
4 is used for fall detection. This resulted in a fall detection increase in the number of features improves the performance
accuracy of 92% and a precision of 81%, while in [23] a of the four classification algorithms. The KNN, the ANN, the
feature vector of length 23 is utilized leading to a fall detection QSVM, and the EBT algorithms achieve an overall accuracy

2169-3536 (c) 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2019.2906693, IEEE Access

types of activities: falling, walking, walking upstairs, walking


of 85.8%, 91.8%, 96.1%, and 97.7%, respectively. Moreover, downstairs, sitting, standing, and lying. Since the data obtained
the accuracy of fall detection reaches 100% for both QSVM from the two databases in [6] and [7] is provided as an input
and EBT without any false alarm, which is the best achievable to the classification algorithm, this data must be homogenous.
performance. The data from [7] are organized into buffers of length 2.56 s
The remainder of the paper is organized as follows. Sec- to make it consistent with the data from the first database [6].
tion II describes the machine learning framework, the different Moreover, we select the fall data from 30 participants given
blocks in this framework, and their roles. We discuss the in [7]. The collected triaxial acceleration data can be written
time domain and frequency domain features in Section III. as
In Section IV, we assess the accuracy and the precision of our
ax (t) = agx (t) + abx (t) (1)
proposed solution using first the features from the acceleration
data only and then using features from both the acceleration ay (t) = agy (t) + aby (t) (2)
and angular velocity data. Finally, Section V offers concluding az (t) = agz (t) + abz (t) (3)
remarks.
where ax (t), ay (t), and az (t) refer to the acceleration data
measured along the x-axis, y-axis, and z-axis, respectively.
II. F RAMEWORK D ESCRIPTION
The acceleration ax (t) in (1) is expressed as a sum of two
Our objective is to determine the user’s activity based on terms: (i) agx (t) which stands for the gravity contribution to
the measured acceleration and angular velocity data. In this the acceleration along the x-axis and (ii) abx (t) which denotes
section, we provide an overview of the framework used for the body movement contribution to the acceleration along the
classifying ADLs as well as fall events and explain the activity x-axis. Similarly, the accelerations ay (t) and az (t) are written
recognition strategy. Fig. 1 illustrates the activity recognition as a sum of two terms as shown in (2) and (3). Henceforth,
framework which encompasses: (i) the input acceleration and the terms ai (t) (i = x, y, z) and abi (t) (i = x, y, z) are
angular velocity data obtained from the smartphone, (ii) the referred to as the total acceleration and the body acceleration,
feature extraction block, and (iii) the classification algorithm. respectively. Since the body acceleration abi (t) (i = x, y, z)
In the following, we discuss each component of this frame- reflects the impact of the body movement on the measured
work. acceleration, the use of the body acceleration for activity
recognition should intuitively yield a better classification ac-
curacy. Hence, we must filter out the gravity contribution to
the measured total acceleration to obtain the body accelera-
tion. Generally, the contribution of gravity to the acceleration
varies slowly which implies that the gravity component in
the frequency domain occurs at frequencies near to 0 Hz.
This is as opposed to the contribution of the body movement
which occurs at frequencies larger than 0 Hz. Therefore, we
can eliminate the gravity contribution by applying a high-pass
filter to the total acceleration ai (t) (i = x, y, z). To this end,
we use a Chebyshev filter of Type II [25] with a stopband
Fig. 1. Activity recognition framework.
attenuation of 60 dB and stopband frequency of 0.4 Hz. It is
worth mentioning that Type II Chebyshev filters are sharper
than Butterworth filters which allows filtering out the gravity
A. Data Description and Preprocessing contribution [25]. Moreover, Type II Chebyshev filters can
The triaxial angular velocity and acceleration data are extract the body acceleration from the total acceleration with
obtained from two public databases. The first database in [6] negligible distortions, since Type II Chebyshev filters have no
comprises six types of activities: walking, walking upstairs, ripples for frequencies larger than the passband frequency [25].
walking downstairs, sitting, standing, and lying. A total of 30 From the total acceleration signal provided by the smart-
participants were involved in this experiment. A smartphone phone, we can obtain other signals, such as the triaxial body
was attached to the waist of the participants to collect accelera- acceleration signal and the magnitude of the body acceleration
tion and angular velocity data. The sampling frequency of the signal. By increasing the number of signals from which
collected data was 50 Hz. The data have then been divided features are extracted, we can improve the accuracy of human
into buffers of 2.56 s length with 50% overlap. Each data activity recognition. The triaxial body acceleration signal
buffer is labeled with the corresponding actual activity using abi (t) (i = x, y, z) is extracted from the total acceleration
the ground truth and contains both the triaxial acceleration signal by applying a high-pass filter to the acceleration data
and the triaxial angular velocity of a specific participant. ai (t) (i = x, y, z). The magnitude of the body acceleration
In addition to the ADL data set, we acquired acceleration can be expressed as
q
and angular velocity data for fall events from the public kab (t)k = [abx (t)]2 + [aby (t)]2 + [abz (t)]2 . (4)
database in [7]. Our aim is to develop a framework that uses
the acceleration and angular velocity data to classify seven Beside the triaxial total acceleration, the smartphone is

2169-3536 (c) 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2019.2906693, IEEE Access

reduce the complexity of activity recognition and improve


equipped with a gyroscope that can measure the angular the classification accuracy. This renders feature extraction a
velocity. The angular velocities around the x-, y-, and z- corner stone in achieving a high classification accuracy with
axes are denoted as ωx (t), ωy (t), and ωz (t), respectively. The reasonable complexity.
unit of the measured angular velocities is radians per second The task of feature extraction consists in finding a finite
(rad/s). By integrating the angular velocity with respect to set of measures that captures quantitative descriptions and
time, we can obtain the time-variant angular position. The enables differentiating between various classes of activity.
angular rotation around the x-, y-, and z-axes are referred to as Typical features include statistical quantities extracted from
the pitch, the roll, and the yaw, respectively. Since the triaxial the acceleration signal, such as the mean value, the standard
angular velocities are not affected by the gravity, these data deviation, and higher order moments [26], [27].
reflect the impact of the body movement. Thus, the gyroscope In the following, we consider a simple example to explain
data can be used directly without any filtering. Moreover, how feature extraction can help to determine the type of
we compute the magnitude of the angular velocity and use activity performed by the user. We assume for simplicity that
it to extract additional features to improve the classification the collected acceleration data pertain to two activities: lying1
accuracy of the proposed framework. The signals used for and standing. Our objective is to determine the user activity
features extraction are listed in Table I. based on the observed acceleration data. By carefully studying
TABLE I
the acceleration data, we find certain properties in the data that
T HE SIGNALS UTILIZED FOR FEATURE EXTRACTION . can be used to recognize the performed activity. For instance,
for lying, the acceleration data az (t) has a mean value that
Signal Name Notation is close to 0 m/s2 . In contrast, for standing the mean value
Triaxial total acceleration ai (t) (i = x, y, z)
Triaxial body acceleration abi (t) (i = x, y, z) of the acceleration az (t) is around 10 m/s2 . We assume now
Magnitude of body acceleration kab (t)k that we receive a new acceleration data buffer which could
Triaxial angular velocity ωi (t) (i = x, y, z) be measured either while the user was lying or standing.
Magnitude of angular velocity kω(t)k Our task is to recognize the activity that was performed by
the user when this acceleration buffer was recorded. There
are two possible outcomes: (i) the user activity is lying or
(ii) the user activity is standing. A simple way to recognize
B. Feature Extraction
the user activity consists of evaluating the mean value of the
This section offers an overview of the concept of feature ex- acceleration az (t). Then, this mean value is provided to the
traction and highlights its importance in obtaining an accurate classification algorithm. If the mean value of the acceleration
classification. The acceleration and angular velocity signals are data az (t) is close to 0 m/s2 , the classifier would decide that
provided as input to the feature extraction block as shown in the performed activity is lying. Otherwise, if the mean value
Fig. 1. Afterwards, the output of the feature extraction block of the acceleration data az (t) is close to 10 m/s2 , the classifier
is used by the classification algorithm to recognize human decides that the performed activity is standing.
activities. In this example, a distinction was made between two
It is worth noting that if we directly provide the classifica- activities using the collected acceleration data. To achieve
tion algorithm with raw acceleration and angular velocity data, this goal, we have used a single feature. This feature is the
the classification algorithm will fail to distinguish different mean value of the acceleration az (t). In this paper, we raise
types of activities and the classification accuracy would be a much more complicated problem. Our aim is to achieve
very poor. In classification problems, the aim is to distinguish a good classification accuracy for seven types of activities.
between different classes of activities. A good feature must Therefore, we need to extract a large number of features. In
achieve this objective. For instance, a good feature can have Section III, we discuss in detail all the features used in our
a specific range of values for each activity, and these value proposed solution to achieve a high classification accuracy.
ranges do not overlap. In this case, by knowing the value of the
considered feature, we can find out to which range it belongs
and consequently recognize the type of the performed activity. C. Classification Algorithm
Moreover, a good feature must be general enough such that it
The objective of the classification algorithm is to recognize
allows identifying the activity associated with new data. These
the user activity based on the acceleration and gyroscope
are two criteria that must be fulfilled by a good feature. Note
data. We use a supervised learning approach to achieve this
that raw data does not fulfill any of these criteria.
objective. As a first step, the algorithm is exposed to a large
Additionally, the raw data is generally contaminated with
set of labeled data2 , the so-called training data. Based on the
noise and artifacts which makes it very difficult for the
training data, the classification algorithm can tune its internal
classifier to find any pattern in the data. Moreover, if the
parameters to reduce the misclassification rate as much as
raw data is used, the dimensionality of the feature vector
possible. After the training phase, the classification accuracy
becomes huge and makes the processing of that feature
vector complex and time consuming. Feature extraction helps 1 The acceleration data for the activity lying has been recorded while the
to reduce the dimensionality of the problem and decreases participant is laying down and in the phase right before laying down.
therefore its complexity. By selecting the right features, we 2 The class of the data is given to the classification algorithm.

2169-3536 (c) 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2019.2906693, IEEE Access

of the algorithm is assessed using a new set of data, called the


test data. in a feature vector which is provided to the classification al-
First, we recall that the data are organized in buffers of gorithm. The trained classification algorithm maps this feature
length 2.56 s. Each of these buffers is labeled with an activity vector to one of the seven activity classes. The accuracy of this
identity (ID) indicating to which class the data buffer belongs. classification depends strongly on the extracted features.
The activity IDs are numbered from 1 to 7. The activity IDs In this section, we discuss the features which are extracted
1, 2, 3, 4, 5, 6, and 7 correspond to walking, walking up- from the signals presented in Table I. We explain the methods
stairs, walking downstairs, sitting, standing, lying, and falling, used to obtain these features and highlight their impact on
respectively. For example, if the data buffer has an activity improving the classification accuracy. The set of features can
ID equal to 4, this implies that the data buffer was recorded be divided into two main categories: time domain features
while the participant is sitting. The data buffer provided to the and frequency domain features of the acceleration and the
feature extraction block contains raw acceleration and angular gyroscope data. The time domain features include the mean
velocity data. The feature extraction block extracts the set of value, the root mean square, the main maxima and minima, and
features described in Section III. After computing the value of the peaks of the ACF and the peaks of the CCF of the signals
each feature for the considered data buffer, these features are given in Table I. The frequency domain features include the
stacked in a vector, known as the feature vector. This vector is value and location of the main peaks of the PSD and the energy
provided to the classification algorithm which must recognize in different frequency bands of the signals listed in Table I.
the type of activity performed by the user, while the data The sample mean of the total acceleration is our first
buffer was recorded. To achieve a good classification accuracy, statistical feature. By exploring the mean value of the total
the classification algorithm must first be trained to learn the acceleration of different activities, we find that for lying the
underlying pattern of each activity. During the training phase, mean value of ax (t) is equal to 0 m/s2 , while for activities
the classification algorithm is exposed to labeled data to opti- where the human body is in a vertical position, such as
mize its internal parameters such that the classification error standing and walking, the mean value of the acceleration ax (t)
is minimized. Subsequently, we can assess the performance of is equal to 10 m/s2 . Using this property, we can differentiate
the trained algorithm using the test data. Once a new buffer is lying from other activities. The histogram of the accelerations
received, the corresponding feature vector is determined and ax (t) and az (t) pertaining to the activities standing and lying
provided to the classifier. This latter computes the likelihood is illustrated in Fig. 2. It can be seen from this figure that
that this buffer belongs to one of the seven possible activity the mean value of ax (t) equals 10 m/s2 for standing and
classes. The algorithm then declares that the buffer belongs to 0 m/s2 for lying. On the other hand, the mean value of the
the activity with the highest likelihood score. For example, acceleration az (t) equals 0 m/s2 and 5 m/s2 for standing and
if Class 5 has the highest score for a given buffer, then lying, respectively.
the algorithm would declare that the user was standing. To
find out whether the decision of the algorithm is right or
wrong, we compare it with the ground truth (the labeled data).
This process is repeated for each buffer in the test data. By
combining all the results, we generate a confusion matrix that
shows the accuracy and the precision of the classifier for each
activity. In this paper, we evaluate the performance of four
classification algorithms to recognize human activity based on
the collected acceleration and angular velocity data. These four
classification algorithms are the ANN, KNN, QSVM, and EBT
algorithm. Principles and background information about the
ANN, KNN, QSVM, and EBT algorithms can be found in
[28]–[30].

III. F EATURE E XTRACTION


The raw acceleration and angular velocity signals could be
utilized as inputs to the classification algorithm. However, in Fig. 2. Histogram of the accelerations ax (t) and az (t) for the activities
this case, the accuracy of the activity recognition would be standing and lying.
very poor. To solve this problem, it is important to extract a
set of features from the acceleration and the angular velocity Note that the orientation of the accelerometer axes when
signals. These features should have different value ranges for lying is different compared to activities with vertical body
different activities. During the training phase, the classification posture, such as standing and walking. Thus, depending on
algorithm is exposed to a large set of labeled data. For each the body posture (vertical or horizontal), we observe different
activity, the classification algorithm has to learn the value mean values of ax (t). We recall that the contribution of the
range of each feature. When a new acceleration and angular body acceleration is negligible in comparison to the gravity
velocity signal is received, the features are extracted and stored pertaining to the activities standing and lying. For standing,

2169-3536 (c) 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2019.2906693, IEEE Access

the gravitational field contribution equals 10 m/s2 along the x-


axis of the accelerometer3 , while the contribution of the body
acceleration is zero along all axes. As opposed to standing,
the impact of the gravitational field for lying is equal to
0 m/s2 along the x- and y-axes of the accelerometer and
10 m/s2 along the z-axis. Fig. 2 shows that the mean value
of az (t) is 5 m/s2 for lying. This is because the collected
acceleration data are recorded while the user is performing
lying and during lying which makes the mean value of az (t)
smaller than 10 m/s2 . But even with this error, an accurate
classification of the lying activity is obtained by extracting the
mean value of the total acceleration. Note that this feature has
not been considered in previous studies.
In addition to the total acceleration, we evaluate the sample
mean of the other signals provided in Table I to obtain
additional features. More specifically, we compute the mean
Fig. 3. Histogram of the RMS of the body accelerations abx (t) and aby (t)
values of the magnitude of the body acceleration kab (t)k, the for the activities sitting and walking downstairs.
triaxial angular velocity ωi (t) (i = x, y, z), and the magnitude
of the angular velocity kω(t)k.
The second feature that we extract is the root mean square
(RMS) of the body acceleration. The RMS is also known as the and reduce the impact of noise. The Savitzky-Golay smoothing
quadratic mean. The body acceleration abi (t) (i = x, y, z) is method reduces the noise while preserving the underlying
obtained by applying a high-pass filter to the total acceleration pattern and the peaks in the data. By exploring the histograms
ai (t) (i = x, y, z). This filtering removes the contribution of of different activities, we find that the ranges of the acceler-
the gravitational field. The RMS of the body acceleration can ation values vary. For instance, the acceleration mean value
be expressed as for the activities walking and standing is the same, but the
s dynamic range of the accelerations is different. Consequently,
1 T  b 2
Z
b,RMS
ai = ai (t) dt for i = x, y, z (5) by extracting the main maxima and minima of the acceleration,
T 0 we can reduce the misclassification rate for walking and
where T is the length of the buffer which is equal to 2.56 s. standing. Note that this feature improves the classification
Beside the RMS of the body acceleration, we incorporate addi- accuracy of all activities.
tional features obtained by computing the RMS of the angular It is worth mentioning that the above features allow distin-
velocity ωi (t) (i = x, y, z), the RMS of the magnitude of the guishing between activities that exhibit very different acceler-
body acceleration kab (t)k, and the RMS of the magnitude of ation patterns, i.e., activities with different acceleration mean
the angular velocity kω(t)k. values and variances. Nevertheless, for activities with similar
In Fig. 3, we illustrate the histogram of the RMS of the body statistical properties, the classification based on the above
acceleration for the activities sitting and walking downstairs. features would result in poor accuracy. For example, we notice
From this figure, it can be deduced that the RMS of the body that the activities walking, walking downstairs, and walking
acceleration abx (t) for sitting is less than 0.2, while for walking upstairs have similar mean and variance. If we would use only
downstairs the RMS of abx (t) is larger than 0.2. In contrast, the above features to classify the activities walking, walking
the RMS of the body acceleration aby (t) is smaller than 0.1 for upstairs, and walking downstairs, we would find a misclassifi-
the activity sitting and mostly larger than 0.1 for the activity cation error of more than 15%. We must investigate how these
walking downstairs. acceleration signals vary over time to discriminate acceleration
By examining the RMS for the activity standing, which is signals associated with these activities. More specifically, we
a static activity similar to sitting, we notice that the RMS must measure the rate of oscillations of the acceleration.
of abx (t) and aby (t) is less than 0.1 and 0.2, respectively. On Actually, people tend to move slower when walking upstairs
the other hand, for the dynamic activities, such as walking, compared to walking downstairs which results in a higher
walking upstairs, walking downstairs, and falling the RMS of rate of oscillations if the person is walking downstairs. By
abx (t) and aby (t) is larger than 0.1 and 0.2, respectively. Thus, extracting the peaks of the PSD, we can obtain a quantitative
this feature allows differentiating between static and dynamic description of the rate and shape of the oscillations of the
activities. acceleration signal.
The third feature is the main maxima and minima of the
Our forth feature quantifies the rate of change and shape
triaxial body acceleration abi (t) (i = x, y, z). We apply a
of the oscillation of the body acceleration signal abi (t) (i =
Savitzky-Golay filter [31] to the body acceleration to smooth it
x, y, z). This feature is extracted from the PSD of the accel-
3 If the person is standing, the x-axis of the accelerometer corresponds to eration, which can be obtained as follows. First, we compute
the z-axis of the earth-centered coordinate system. the ACF Rabi (τ ) of the body acceleration abi (t) (i = x, y, z)

2169-3536 (c) 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2019.2906693, IEEE Access

as Therefore, the ACF Rabi (τ ) of the triaxial acceleration can


Z T
1 ∗
abi (t (6) be approximated by P harmonics with weights Sabi (fp ) and
abi (t)

Rabi (τ ) = + τ) dτ.
2T −T frequency fp . The ACF Rabi (τ ) of the triaxial acceleration
b contains information pertaining to the time variation of
The PSD Sabi (f ) of the body acceleration ai (t) (i = x, y, z)
the acceleration signal. Moreover, the ACF enables finding
can be obtained by applying the Fourier transform to the ACF
repeating patterns in the acceleration signal as well as
Rabi (τ ) as
identifying the fundamental frequency and main harmonic
n o Z ∞ frequencies embedded in the acceleration signal. Thus, by
Sabi (f ) = F Rabi (τ ) = Rabi (τ )e−j2πf τ dτ for i = x, y, z.
extracting the location (fp ) and the value (Sabi (fp )) of P
−∞
(7) peaks of the PSD Sabi (f ), we capture quantitative information
From the PSD Sabi (f ), we extract the location and the value on the time variation of the acceleration signal as shown in
of the main PSD peaks. Our hypothesis is that these PSD (10).
peaks capture well the time variation in the acceleration signal
and allow identifying the fundamental frequency and main In Fig. 4, we illustrate the PSDs Sabx (f ) and Saby (f ) of the
harmonic frequencies embedded in the acceleration signal. body accelerations ab (t) and ab (t) for the activities walking
x y
Thus, with the help of the PSD feature, we can distinguish and walking upstairs. From this figure, we see that most of the
between different activities, since each activity leads to dif- information is confined to the range from 0 to 10 Hz. The peak
ferent time variations in the acceleration signal as well as to locations and values hold useful information on the shape and
different shapes and rates of oscillations. In the following, we rate of the signal oscillations in the time domain. From the
provide arguments supporting our hypothesis that the proposed PSD curves S b (f ) and S b (f ), we observe a fundamental
ax ay
PSD feature describes well the observed time variations in the frequency f around 1 Hz and a number of harmonics at
0
acceleration signal. Thus, the PSD feature allows improving positions that are multiples of f . The relative amplitudes of
0
the classification accuracy for different activities. the spectral peaks are closely related to the shape of oscillation
Our analysis of the PSD Sabi (f ) of the triaxial body accel- of the signal, whereas the spacing between the spectral peaks
eration reveals that this PSD is narrowband and can generally indicates the rate of oscillation of the signal.
be considered equal to zero outside the frequency interval
For the activity walking upstairs, it can be seen from the
[fmin , fmax ]4 . Using this property, we can write the inverse
PSDs Sabx (f ) and Saby (f ) that the spectral peaks are closer
Fourier transform of the PSD as
n o Z ∞ together and pushed to the left compared to the spectral
F −1
Sabi (f ) = Sabi (f )e j2πf τ
df for i = x, y, z. peaks for the activity walking. This means that the rate of
−∞ oscillation for walking is higher than that of the activity
Z fmax
walking upstairs. Besides, for the activity walking upstairs,
= Sabi (f )ej2πf τ df the amplitude of the peaks to the right of the fundamental
fmin
N
frequency f0 decreases quickly. This implies that the shape
(8) of the oscillation for walking upstairs is smoother compared
X
j2πfn τ
≈ Sabi (fn )e ∆f
n=1
to walking. This can be explained by Newton’s second law of
motion which states that the sum of forces is equal to the mass
where the approximation in (8) is obtained using [32, Eq.
times the acceleration [33]. When people are walking upstairs,
(7)] and ∆f = (fmax − fmin )/N . Note that as N → ∞
the gravity impact makes the body acceleration smaller and the
the approximation in (8) becomes an equality. In (8), the
shape of its oscillations smoother compared to walking on a
number of terms in the sum can be reduced from N to P
flat surface, where the gravity has almost no impact on the
(P  N ) by selecting the terms with the P largest weights
body acceleration.
Sabi (fp ) (p = 1, 2, . . . , P ). Thus, we can write
The classification accuracy for walking, walking downstairs,
X N XP and walking upstairs is improved by using the spectral peaks
Sabi (fn )ej2πfn τ ∆f ≈ Sabi (fp )ej2πfp τ ∆f. (9) features. We recall that the use of other features, such as the
n=1 p=1 mean, the RMS, and the maxima does not yield an accurate
Note that the P components on the right-hand side of (9) classification for these activities. The proposed frequency
coincide with the P peaks of the PSD, which we extract as a domain feature enhances the accuracy of the classification
feature to distinguish the activities walking, walking upstairs, algorithm, especially for the activities walking, walking down-
and walking downstairs. stairs, and walking upstairs. Beside the spectral peaks of the
b
On the other hand, the ACF Rabi (τ ) can be expressed as the body acceleration ai (t) (i = x, y, z), we extract as well the
inverse Fourier transform of the PSD Sabi (f ). Using (8) and spectral peaks of the angular velocity ωi (t) (i = x, y, z),
(9), we can write the spectral peaks of the magnitude of the body acceleration
kab (t)k, and the spectral peaks of the magnitude of the angular
P
velocity kω(t)k.
n o X
Rabi (τ ) = F −1 Sabi (f ) ≈ Sabi (fp )ej2πfp τ ∆f. (10)
p=1 The fifth feature is extracted from the ACF of the body
acceleration abi (t) (i = x, y, z). More specifically, we estimate
4 Typically f the values and the location of the first maximum and the
min = 0 Hz and fmax = 10 Hz.

2169-3536 (c) 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2019.2906693, IEEE Access

40 20

20
0

0
-20

-20
-40
-40

-60
-60

-80
-80

-100 -100
0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10

Fig. 4. PSDs Sab (f ) and Sab (f ) of the body accelerations abx (t) and aby (t) pertaining to the activities walking and walking upstairs.
x y

second peak of the body acceleration ACF. These features


contain information pertaining to the shape and rate of change of the proposed features for improving the accuracy of the
of the oscillations of the acceleration signal. Such features classification, we arrange the features into three subsets:
can improve the classification of activities that have similar Subset A, Subset B, and Subset C. Subset A comprises the
statistical properties (i.e., similar mean values and variances) mean value of the triaxial total acceleration which is referred to
but have a different rate and shape of oscillations. Additionally, as the first feature in Section III. Subset B includes the features
we extract similar features from the ACF of the angular from Subset A augmented with the peaks extracted from the
velocity ωi (t) (i = x, y, z), the ACF of the body acceleration PSD and the ACF of the body acceleration which represent the
kab (t)k, and the ACF of the magnitude of the angular velocity fourth and the fifth features as described in Section III. Finally,
kω(t)k. Subset C encompasses the features from Subset B in addition
Our sixth feature quantifies the energy in different frequency to the RMS and the main maxima and minima of the body
bands of the triaxial body acceleration abi (t) (i = x, y, z). acceleration. The feature vector of Subset C has a length of
To extract this feature, we first obtain the PSD of the body 66 and contains features extracted only from the acceleration
acceleration. Then, we divide the frequency spectrum into 10 data.
bands and evaluate the energy confined in each frequency We consider an ANN classification algorithm with one
band. To improve the classification accuracy, we extract as well hidden layer, which comprises 25 nodes. The performance of
the energy in different bands of the triaxial angular velocity this ANN algorithm is assessed using the features of Subset A.
ωi (t) (i = x, y, z), the magnitude of the body acceleration The obtained results are provided in the confusion matrix in
kab (t)k, and the magnitude of the angular velocity kω(t)k. Fig. 5. In this figure, the diagonal cells show the number and
The seventh feature is extracted from the cross-correlation the percentage of correct classifications by the trained ANN
function (CCF) between the body acceleration on different algorithm. For instance, in 131 cases the classifier correctly
axes. More specifically, we estimate the values and the lo- predicts the walking activity. These 131 cases represent 4.1%
cations of the first three peaks of the CCF. These peaks pro- of the 3200 buffers that are being classified during the test
vide information about the level of resemblance between the phase by the trained ANN classifier. Similarly, the ANN
body acceleration measured on different axes. We determine algorithm successfully predicted the class of 305, 132, 478,
the CCF between the body acceleration signal pairs (abx (t), 226, 603, and 121 data buffers as pertaining to the activities
aby (t)), (abx (t), abz (t)), and (aby (t), abz (t)). Then, we extract the walking upstairs, walking downstairs, sitting, standing, and
locations and values of the first three peaks of these CCFs. falling, respectively.
By observing a given column of the confusion matrix in
IV. E XPERIMENTAL R ESULTS Fig. 5, it is possible to know the accuracy of the algorithm
for a given class5 . For example, the first column shows the
In this section, we assess the performance of the proposed
results associated with the activity walking. The first row of
activity recognition framework. The dataset is divided into two
Column 1 contains the number 131, which implies that in
random independent sets: the training set and the test set. We
131 cases the activity walking was successfully recognized by
use 70% of the data for training and 30% for testing. In our
the ANN algorithm. The value in the second row of Column
investigation, we evaluate the performance of the ANN, the
1 indicates that in 105 cases the algorithm misclassified the
KNN, the QSVM, and the EBT classification algorithms.
activity walking as walking upstairs. Similarly, the value in
row j (j = 2, . . . , 7) of Column 1 indicates the number of
A. Classification Based on the Acceleration Signal
5 Throughout the paper the words class and activity are used interchange-
In a first step, we extract features only from the triaxial
ably. Class i corresponds to the activity with ID i(i = 1, 2, . . . , 7). The
total acceleration ai (t) (i = x, y, z) and the triaxial body number i(i = 1, 2, . . . , 7) located to the left of the confusion matrix in
acceleration abi (t) (i = x, y, z). To emphasize the importance Fig. 5 indicates that the predicted class is Class i.

2169-3536 (c) 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2019.2906693, IEEE Access

33, 76, and 1 case, respectively. Out of 633 sitting predic-


tions, 155 predictions are wrong which represents 24.5%.
131 48 84 25 158 0 0 29.4%
1
4.1% 1.5% 2.6% 0.8% 4.9% 0.0% 0.0% 70.6% The classification precision of activity j is provided in row
105 305 40 13 81 0 0 56.1% j(j = 1, . . . , 7) of Column 8. The classification precision for
2
3.3% 9.5% 1.3% 0.4% 2.5% 0.0% 0.0% 43.9% the activities walking, walking upstairs, walking downstairs,
3
65 30 132 0 1 0 0 57.9% sitting, standing, lying, and falling are equal to 29.4%, 56.1%,
2.0% 0.9% 4.1% 0.0% 0.0% 0.0% 0.0% 42.1%
57.9%, 75.5%, 36.3%, 99.5%, and 100%.
36 9 33 478 76 0 1 75.5%
4
1.1% 0.3% 1.0% 14.9% 2.4% 0.0% 0.0% 24.5% It is worth mentioning that the accuracy and the precision
182 63 106 45 226 0 0 36.3%
of the classification have different meaning. The accuracy
5
5.7% 2.0% 3.3% 1.4% 7.1% 0.0% 0.0% 63.7% focuses on the actual activity and indicates the percentage of
6
0 0 0 0 0 603 3 99.5% successful classifications out of the actual buffers belonging
0.0% 0.0% 0.0% 0.0% 0.0% 18.8% 0.1% 0.5%
to a given class. In contrast, the precision of a classification
7
0 0 0 0 0 0 121 100% focuses on the predicted activity and quantifies the percentage
0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 3.8% 0.0%
of successful classification out of the buffers predicted to
25.2% 67.0% 33.4% 85.2% 41.7% 100% 96.8% 62.4%
74.8% 33.0% 66.6% 14.8% 58.3% 0.0% 3.2% 37.6% belong to a certain activity.
1 2 3 4 5 6 7 Fig. 5 shows that the classifier recognizes with high ac-
curacy the activities lying and falling. These two activities
are almost never confused with the remaining five activities.
Fig. 5. Confusion matrix of the ANN algorithm obtained using the features As shown by the acceleration histograms in Fig. 2, one can
from Subset A. visually differentiate between the activity lying and other
activities based on the range of values of the acceleration
along the x- and z-axis. This explains the high accuracy in
cases for which the activity walking was misclassified as the recognizing the activity lying, which reaches 100%. Similarly,
activity with the ID j 6 . The accuracy for activity one indicates Fig. 10 compares the histograms of the mean value of the
the percentage of successful classification for the activity acceleration for falling and standing. Using Fig. 10, one can
walking. This accuracy is obtained by dividing two numbers: distinguish falls from non-falls based on the range of the mean
(i) the first quantity is the number of buffers pertaining to the value of the acceleration. This clarifies the high fall detection
activity walking and that are correctly classified7 and (ii) the accuracy, which reaches 96.8%.
second quantity is the total number of buffers pertaining to the On the other hand, Fig. 5 demonstrates that the classifier
activity walking8 . For the activity walking, the classification confuses the activities walking, walking upstairs, walking
accuracy equals 25.2% as shown in the first column of Row downstairs, sitting, and standing, since all of them have
8. For example, considering the falling events which are similar patterns for the histogram of the mean value of the
represented over the seventh column. In total there are 125 acceleration.9 For instance, the algorithm misclassifies walking
falls in the considered test data. In 121 cases, the fall events are as sitting in 182 cases and as walking upstairs in 105 cases.
correctly recognized by the classifier which yields an accuracy In Table II, we provide the confusion matrix of the binary
of 96.8%. The classifier fails to recognize fall events in 4 classification problem, where we classify the data into fall
cases which means that 3.2% of the classifications for fall and non-fall classes. The non-fall class includes the activ-
events are incorrect. The classification accuracy of activity j ities walking, walking upstairs, walking downstairs, sitting,
is provided in Column j (j = 1, . . . , 7) of Row 8. The clas- standing, and lying. Table II is obtained when using the ANN
sification accuracy for the activities walking upstairs, walking classifier with the features from Subset A. From the binary
downstairs, sitting, standing, lying, and falling are equal to confusion matrix, we can compute the false negative (FN) and
67%, 33.4%, 85.2%, 41.7%, 100%, and 96.8%, respectively. false positive (FP) as well as the FN rate and FP rate for fall
Overall the ANN classifier was able to successfully predict detection. From Table II, we see that the number of FP equals
the user activity in 62.4% of the cases. 0 and the number of FN is 4. The FP rate can be computed
By looking at a given row of the confusion matrix in Fig. 5, as
we can evaluate the prediction precision for a given class. For
instance, let us consider the fourth row which corresponds FP FP
FP Rate = =
to sitting. The activity sitting is correctly predicted in 478 Number of actual non−falls FP + TN
cases and wrongly predicted in 155 cases, which implies a 0
= = 0%. (11)
precision of 75.5% for the predictions of the activity sitting. 3075
The activities walking, walking upstairs, walking downstairs,
standing and falling are misclassified as sitting, in 36, 9, Since the FP rate is equal to 0%, this implies that if the
classifier is given a non-fall event, it never recognizes it as
6 The activity IDs 1, 2, 3, 4, 5, 6, and 7 correspond to walking, walking a fall. Thus, the fall detection system has zero false alarm.
upstairs, walking downstairs, sitting, standing, lying, and falling, respectively. The FN rate can be computed by dividing the number of FN
7 This number, which is shown in the first row of the first column, is equal
to 131 [see the confusion matrix in Fig. 5].
8 By summing the numbers in Column 1 located in Rows 1-7, we get the 9 We recall that the only feature used by the classifier in Fig. 5 is the
total number of buffers that actually belong to the activity walking. mean value of the triaxial acceleration.

2169-3536 (c) 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2019.2906693, IEEE Access

10

by the number of actual falls, i.e., The precision of the ANN algorithm achieved with the
FN FN features of the Subsets A, B, and C is provided in Table IV.
FN Rate = = This table shows that the precision of the predicted falls
Number of actual falls TP + FN
4 reaches 100% regardless of whether we use the features of
= = 3.2%. (12) Subset A, B, or C. This implies that there are no false alarms
125
and that all fall events detected by the algorithm are real falls.
The FN rate indicates the percentage of undetected falls by On the contrary, as the number of features increases, the false
the system. It is desirable that the fall detection system has a alarm rate for walking decreases. For example, if we use the
very low FN rate. features of Subset B instead of Subset A, the classification
Using Table II, we can compute the accuracy and precision precision for walking is enhanced by 48.2%. Additionally,
for fall detection as follows if we use the features of Subset C instead of Subset A, the
TP 121 classification precision is improved by 28.6%, 28.1%, 8.3%,
Accuracy = = = 96.8% (13)
TP + FN 121 + 4 and 47.2%, respectively, for the activities walking upstairs,
TP 121 walking downstairs, sitting, and standing.
Precision = = = 100%. (14)
TP + FP 121
TABLE IV
P RECISION OF THE ANN CLASSIFIER FOR VARIOUS ACTIVITIES AND
DIFFERENT FEATURE SUBSETS .
TABLE II
C ONFUSION MATRIX OF THE BINARY CLASSIFICATION PROBLEM OF THE
ANN ALGORITHM OBTAINED USING THE FEATURES FROM S UBSET A. Precision %
Features Wal. Up. Dow. Sit. Sta. Ly. Fal.
Subset A 29.4 56.1 57.9 75.5 36.3 99.5 100
Actual Non-Fall Actual Fall
Subset B 77.6 79.9 80.8 83.7 82 99.8 100
Predicted Non-Fall 3075 (TN) 4 (FN)
Subset C 84.2 84.7 86 83.8 83.5 99.8 100
Predicted Fall 0 (FP) 121 (TP)

The confusion matrix of the ANN classifier obtained by


To demonstrate the importance of the proposed features in using the features of Subset C is illustrated in Fig. 6. The
improving the classification accuracy, we assess the perfor- diagonal cells of the confusion matrix provide the number
mance of the ANN classifier using the features of the Subsets and the percentage of correct classifications. For example, the
A, B, and C. Table III shows the classification accuracy results classifier correctly predicts fall events in 121 cases. These 121
of the ANN algorithm. It can be noticed from this table that cases represent 3.8% of the total number of buffers which are
as the set of features becomes larger, the overall accuracy of classified by the ANN algorithm during the test phase. From
the classifier is improved. For example, if we consider the the remaining diagonal cells of the confusion matrix in Fig. 6,
activity walking, we see that if we use the features from we can conclude that the trained ANN algorithm successfully
Subset A, we achieve a poor classification accuracy of 25.2%. predicts the class of the activities walking, walking upstairs,
If we use the features of Subset B instead of Subset A as an walking downstairs, sitting, and standing in 459, 370, 332,
input to the ANN algorithm, the classification accuracy for 476, 450, and 603 cases, respectively.
walking is enhanced by more than 60%. Note that Subset B
encompasses the features from Subset A augmented with the
peaks of the PSD and the ACF. These additional features
hold information pertaining to the shape and rate of the 1
459 49 34 0 0 0 3 84.2%
14.3% 1.5% 1.1% 0.0% 0.0% 0.0% 0.1% 15.8%
oscillations of the acceleration signals which improves the
36 370 29 0 2 0 0 84.7%
classification accuracy for most activities. Moreover, if we use 2
1.1% 11.6% 0.9% 0.0% 0.1% 0.0% 0.0% 15.3%
the features of Subset B instead of Subset A, we observe that 21 33 332 0 0 0 0 86.0%
3
the classification accuracy is improved by 11.8%, 39.2%, and 0.7% 1.0% 10.4% 0.0% 0.0% 0.0% 0.0% 14.0%

43.7% for the activities walking upstairs, walking downstairs, 4


0 2 0 476 90 0 0 83.8%
0.0% 0.1% 0.0% 14.9% 2.8% 0.0% 0.0% 16.2%
and standing, respectively. From Table III, we observe that
by using the features of Subset C, the classification accuracy 5
3 1 0 85 450 0 0 83.5%
0.1% 0.0% 0.0% 2.7% 14.1% 0.0% 0.0% 16.5%
is enhanced furthermore. The ANN algorithm achieves an
0 0 0 0 0 603 1 99.8%
accuracy of 96.8% and 100% for the activities falling and 6
0.0% 0.0% 0.0% 0.0% 0.0% 18.8% 0.0% 0.2%
lying when using the features of Subset C. 0 0 0 0 0 0 121 100%
7
0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 3.8% 0.0%
TABLE III
88.4% 81.3% 84.1% 84.8% 83.0% 100% 96.8% 87.8%
ACCURACY OF THE ANN CLASSIFIER FOR VARIOUS ACTIVITIES AND 11.6% 18.7% 15.9% 15.2% 17.0% 0.0% 3.2% 12.2%
DIFFERENT FEATURE SUBSETS .
1 2 3 4 5 6 7
Accuracy %
Features Wal. Up. Dow. Sit. Sta. Ly. Fal. Overall
Subset A 25.2 67 33.4 85.2 41.7 100 96.8 62.4
Subset B 85.6 78.8 72.6 80.8 85.4 100 93.8 85.1 Fig. 6. Confusion matrix of the ANN algorithm obtained using the features
Subset C 88.4 81.3 84.1 84.8 83 100 96.8 87.8 from Subset C.

2169-3536 (c) 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2019.2906693, IEEE Access

11

To assess the accuracy of the algorithm for a given activity,


we must observe the corresponding column for that activity in that of the QSVM algorithm, we observe that the QSVM
the confusion matrix in Fig. 6. For instance, let us consider algorithm outperforms the ANN algorithm in terms of the
the first column of the confusion matrix which pertain to overall accuracy by 5.4%. Moreover, the QSVM algorithm has
the activity walking. In the test data, there are 519 walking better accuracy and precision for most activities compared to
buffers. In 459 cases, the walking activity is successfully the ANN algorithm. If the QSVM algorithm is utilized instead
recognized by the algorithm which implies an accuracy of of the ANN algorithm, the prediction precision is improved by
88.4%. The activity walking is misclassified in 60 cases which 9.2%, 8.6%, 9.2%, 2.3%, and 6.5%, respectively, for the ac-
represent 11.6% of the actual walking buffers. Overall the tivities walking, walking upstairs, walking downstairs, sitting,
ANN algorithm successfully predicts the user activity in 87.8% and standing. Additionally, for the activities walking, walking
of the cases. upstairs, walking downstairs, sitting, standing, and falling, the
To determine the algorithm precision for a given activity, classification accuracy is improved by 7.5%, 14.6%, 5.7%,
we must look at the row corresponding to that activity in the 4.7%, 3.5%, and 0.4%, respectively, if we use the QSVM
confusion matrix in Fig. 6. For example, let us consider the algorithm instead of the ANN algorithm.
fifth row which pertain to the activity standing. In 450 cases,
the activity standing is predicted correctly. This implies that
the prediction precision for standing is 83.5%. The activities
496 12 23 0 0 0 0 93.4%
sitting, walking upstairs, and walking are misclassified as 1
15.5% 0.4% 0.7% 0.0% 0.0% 0.0% 0.0% 6.6%
standing in 85, 1, and 3 cases, respectively. Out of 539 stand-
10 445 20 1 0 0 1 93.3%
ing predictions, 89 predictions are wrong which represents 2
0.3% 13.9% 0.6% 0.0% 0.0% 0.0% 0.0% 6.7%
16.5%. 11 7 378 0 0 0 1 95.2%
3
From Fig. 6, we observe that the classifier clearly distin- 0.3% 0.2% 11.8% 0.0% 0.0% 0.0% 0.0% 4.8%

guishes lying and falling from the other activities. For the 4
0 0 0 478 77 0 0 86.1%
0.0% 0.0% 0.0% 14.9% 2.4% 0.0% 0.0% 13.9%
remaining five activities, the classifier confuses standing and
sitting together, since these two activities are static. Besides, 0 0 0 55 494 0 0 90.0%
5
0.0% 0.0% 0.0% 1.7% 15.4% 0.0% 0.0% 10.0%
the classifier does not differentiate well the dynamic activities
0 0 0 0 0 583 1 99.8%
walking, walking upstairs, and walking downstairs. However, 6
0.0% 0.0% 0.0% 0.0% 0.0% 18.2% 0.0% 0.2%
the misclassification rate among dynamic activities drops 0 0 0 0 0 0 106 100%
7
significantly by using the features from Subset B instead of 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 3.3% 0.0%

those from Subset A, as shown in Table III. This demonstrates 95.9% 95.9% 89.8% 89.5% 86.5% 100% 97.2% 93.2%
4.1% 4.1% 10.2% 10.5% 13.5% 0.0% 2.8% 6.8%
that the PSD features allow achieving a higher accuracy in
1 2 3 4 5 6 7
recognizing dynamic activities, since each of these activities
has its own rate and shape of oscillations as discussed in
Section III. Note that the classifier rarely misclassifies dynamic
activities as static and vice versa. For example, the number of Fig. 7. Confusion matrix of the QSVM algorithm obtained using the features
from Subset C.
misclassifications for the activity walking as standing drops
from 182 to 3 by using the feature Subset C instead of
In Table VI, we provide the confusion matrix of the binary
Subset A. This reveals that the features in Subset C allow
classification problem, where we classify the data into fall and
distinguishing static and dynamic activities.
non-fall classes. Table VI is obtained when using the QSVM
In Table V, we provide the confusion matrix of the binary
algorithm with the features from Subset C. From Table VI, we
classification problem, where we classify the data into fall and
see that the number of FP equals 0 and the number of FN is 3.
non-fall classes. Table V is obtained when using the ANN
The FP rate is equal to 0%, while the FN rate is 2.75%. The
classifier with the features from Subset C. From Table V, we
accuracy and precision for fall detection are equal to 97.25%
see that the number of FP equals 0 and the number of FN is
and 100%, respectively.
4. The FP rate and FN rate can be computed using (11) and
(12), which results in 0% and 3.2%, respectively. Utilizing (13) TABLE VI
and (14), we can compute the accuracy and precision of fall C ONFUSION MATRIX OF THE BINARY CLASSIFICATION PROBLEM OF THE
detection which are equal to 96.8% and 100%, respectively. QSVM ALGORITHM OBTAINED USING THE FEATURES FROM S UBSET C.

TABLE V Actual Non-Fall Actual Fall


C ONFUSION MATRIX OF THE BINARY CLASSIFICATION PROBLEM OF THE Predicted Non-Fall 3090 (TN) 3 (FN)
ANN ALGORITHM OBTAINED USING THE FEATURES FROM S UBSET C. Predicted Fall 0 (FP) 106 (TP)

Actual Non-Fall Actual Fall


Predicted Non-Fall 3075 (TN) 4 (FN) In Figs. 8 and 9, we provide the confusion matrices for
Predicted Fall 0 (FP) 121 (TP) the KNN and the EBT algorithms, respectively. These results
are obtained using the features from Subset C. From Fig. 8,
The confusion matrix for the QSVM classifier is provided it can be noticed that the KNN algorithm has the worst
in Fig. 7. Comparing the confusion matrices of the ANN with overall accuracy compared to the ANN, QSVM, and EBT

2169-3536 (c) 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2019.2906693, IEEE Access

12

485 20 27 0 0 0 0 91.2%
1
15.2% 0.6% 0.8% 0.0% 0.0% 0.0% 0.0% 8.8%
algorithms, while the EBT algorithm has the best performance 15 433 18 1 0 0 0 92.7%
2
in terms of the overall accuracy. By using the EBT algorithm 0.5% 13.5% 0.6% 0.0% 0.0% 0.0% 0.0% 7.3%

instead of the KNN algorithm, we can improve the overall 3


17 9 376 0 0 0 0 93.5%
0.5% 0.3% 11.8% 0.0% 0.0% 0.0% 0.0% 6.5%
accuracy by 12.9%. The EBT algorithm significantly improves
0 0 0 498 44 0 0 91.9%
the classification accuracy of most activities compared to the 4
0.0% 0.0% 0.0% 15.6% 1.4% 0.0% 0.0% 8.1%
KNN algorithm. In particular, the EBT algorithm enhances
0 1 0 35 527 0 0 93.6%
the classification accuracy for the activities walking, walking 5
0.0% 0.0% 0.0% 1.1% 16.5% 0.0% 0.0% 6.4%
upstairs, walking downstairs, sitting, standing, and falling by 0 0 0 0 0 584 1 99.8%
6
14.1%, 25.7%, 22.9%, 11.7%, 9.3%, and 6.4%, respectively. 0.0% 0.0% 0.0% 0.0% 0.0% 18.3% 0.0% 0.2%

Moreover, the use of the EBT algorithm improves the preci- 7


0 0 0 0 0 0 108 100%
0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 3.4% 0.0%
sion of the predictions and significantly reduces false alarms
93.8% 93.5% 89.3% 93.3% 92.3% 100% 99.1% 94.1%
compared to the KNN algorithm. More specifically, if the 6.2% 6.5% 10.7% 6.7% 7.7% 0.0% 0.9% 5.9%
EBT algorithm is utilized instead of the KNN algorithm, the
1 2 3 4 5 6 7
precision of the prediction for the activities walking, walking
upstairs, walking downstairs, sitting, standing, and falling is
improved by 22.7%, 23.5%, 13.7%, 10%, 11.2%, and 3.8%,
Fig. 9. Confusion matrix of the EBT algorithm obtained using the features
respectively. from Subset C.

411 112 75 0 0 0 2 68.5%


1
12.8% 3.5% 2.3% 0.0% 0.0% 0.0% 0.1% 31.5% with the features from Subset C. Table VIII shows that the
2
73 314 67 0 0 0 0 69.2% number of FP and FN are equal to 0 and 1, respectively. The
2.3% 9.8% 2.1% 0.0% 0.0% 0.0% 0.0% 30.8%
FN rate equals 0.91%, whereas the FP rate is 0%. For fall
3
31 34 280 0 0 0 6 79.8% detection, the accuracy and precision reach 99.09% and 100%,
1.0% 1.1% 8.8% 0.0% 0.0% 0.0% 0.2% 20.2%
respectively.
0 0 0 435 96 0 0 81.9%
4
0.0% 0.0% 0.0% 13.6% 3.0% 0.0% 0.0% 18.1%
TABLE VIII
1 3 0 97 474 0 0 82.4% C ONFUSION MATRIX OF THE BINARY CLASSIFICATION PROBLEM OF THE
5
0.0% 0.1% 0.0% 3.0% 14.8% 0.0% 0.0% 17.6%
EBT ALGORITHM OBTAINED USING THE FEATURES FROM S UBSET C.
0 0 0 0 0 582 0 100%
6
0.0% 0.0% 0.0% 0.0% 0.0% 18.2% 0.0% 0.0% Actual Non-Fall Actual Fall
0 0 0 1 1 2 102 96.2% Predicted Non-Fall 3090 (TN) 1 (FN)
7
0.0% 0.0% 0.0% 0.0% 0.0% 0.1% 3.2% 3.8% Predicted Fall 0 (FP) 108 (TP)
79.7% 67.8% 66.4% 81.6% 83.0% 99.7% 92.7% 81.2%
20.3% 32.2% 33.6% 18.4% 17.0% 0.3% 7.3% 18.8%

1 2 3 4 5 6 7

B. Comparison
In this work, we obtained the acceleration data for ADL
Fig. 8. Confusion matrix of the KNN algorithm obtained using the features
from Subset C.
activities and falls from two different databases. This fact
makes it difficult to compare our results to existing work
In Table VII, we provide the confusion matrix of the binary in the literature. In [24], the authors use the support vector
classification problem. Table VII is obtained when using machine (SVM) algorithm to classify six ADL activities using
the KNN algorithm with the features from Subset C. From the same acceleration data that we use in this paper. Therefore,
Table VII, we see that the number of FP equals 4 and the we can roughly compare our results to those obtained in [24].
number of FN is 8. The FP rate is equal to 3.77%, while the The classification accuracy in [24] for the activities walking,
FN rate is 7.27%. The accuracy and precision of fall detection walking upstairs, walking downstairs, standing, sitting, and
are equal to 96.23% and 100%, respectively. lying are equal to 95.6%, 69.8%, 83.2%, 93%, 96.4%, and
100%, respectively, while the overall accuracy reaches 89.3%.
TABLE VII
C ONFUSION MATRIX OF THE BINARY CLASSIFICATION PROBLEM OF THE In our case, we achieve a better overall accuracy of 93.2%, if
KNN ALGORITHM OBTAINED USING THE FEATURES FROM S UBSET C. we use the QSVM algorithm and the features extracted only
from the acceleration signal. Note that in our case we classify
Actual Non-Fall Actual Fall
Predicted Non-Fall 3085 (TN) 8 (FN) seven different activities compared to six activities in [24]. Our
Predicted Fall 4 (FP) 102 (TP) solution improves the classification accuracy for the activities
walking, walking upstairs, and walking downstairs by 0.3%,
Table VIII illustrates the confusion matrix of the binary 26.1%, and 6.6% compared to the method proposed in [24].
classification problem resulting from using the EBT algorithm On the other hand, the solution in [24] outperforms our method

2169-3536 (c) 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2019.2906693, IEEE Access

13

for the classification of the activities standing and sitting by


3.5% and 9.9%, respectively. less features. In [22], the authors utilize as features the vector
Wearable-based fall detection systems use either threshold- magnitude of the acceleration and the angular velocity. In [23],
ing or machine learning algorithms to detect falls. Threshold- the authors extract features from the magnitude of the accelera-
based algorithms have low complexity and can be easily tion signal, such as the minimum, maximum, mean, variance,
implemented on wearable devices. Their major drawback and signal magnitude area. However, in both [22] and [23]
is that they produce a high number of false alarms [23]. there are no features extracted from the triaxial acceleration
In fact, threshold-based algorithms conclude that a fall has signal, but the features are extracted from the magnitude of
occurred if the magnitude of the acceleration vector exceeds the acceleration data. By computing the magnitude of the
a certain value. Such a simple algorithm confuses falls with acceleration signal, we combine the acceleration data from
activities that yield a large acceleration value, such as walking the x-axis, y-axis, and z-axis into a single value, but we lose
downstairs [34]. The use of machine learning algorithms important information on the orientation of the acceleration
to detect falls is quite popular due to their high accuracy, vector a(t) that helps to recognize falls. Next, we explain this
which is achieved at a larger computational cost compared idea in more detail.
to thresholding algorithms. In Fig. 10, we illustrate the histogram of the mean value
Many studies have investigated the performance of different of the accelerations ax (t) and az (t) for the activities standing
fall detection algorithms using acceleration data [22], [23]. We and falling. For standing, the mean value of the acceleration
compare the performance of our proposed machine learning ax (t) is mainly confined in the interval [9 m/s2 , 10 m/s2 ],
framework to [22], [23]. The choice of these two papers as while for falling, the mean value of ax (t) is mostly between
a benchmark is motivated by two reasons. First, the fall data 0 and 7 m/s2 . This mismatch between the histograms of the
used to assess the performance of our fall detection solution mean value of the acceleration ax (t) for standing and falling
is the same as the fall data used in [22], [23], which makes allows distinguishing between these two activities using a
this comparison fair. Second, the solutions proposed in [22], threshold. Note that for the activities where the body posture
[23] have a high fall detection accuracy and precision. is vertical (i.e., walking, walking upstairs, walking downstairs,
In [22], the acceleration and angular velocity data are sitting, and standing), the mean value of the acceleration ax (t)
collected by two sensors attached to the participants’ chests is approximately equal to the gravity contribution, which is
and thighs. The vector magnitude of the acceleration and measured along the x-axis of the accelerometer11 , and equal
angular velocity obtained from the two sensors are computed to 10 m/s2 . Thus, using the mean value of the acceleration
and stacked in a feature vector of length 4. A decision tree ax (t), we can easily distinguish between falling and all the
algorithm is used to classify fall and non-fall activities. The activities where the body posture is vertical.
performance in terms of fall detection reaches an accuracy of
92% and a precision of 81%.
In [23], the authors built a binary classifier which can
distinguish between fall and non-fall events. A feature vector
of length 23 was provided to the classifier to decide if a fall
has occurred or not. The performance of three classification
algorithms was evaluated, namely, decision tree, logistic re-
gression, and multilayer perceptron. The best performance in
[23] was achieved with the multilayer perceptron classifier
which has a fall detection accuracy of 93.5% and a precision
of 94.2%.
In our proposed solution, by just using the acceleration fall
data obtained from the sensor attached to the chest, we achieve
a fall detection accuracy and precision of 96.8% and 100%,
respectively10 , by utilizing only a feature vector of length 3
(features from Subset A). This feature vector contains the
mean value of the acceleration along the x-axis, y-axis, and z-
axis. Thus, by using less data and a smaller size feature vector, Fig. 10. Histogram of the mean value of the accelerations ax (t) and az (t)
we are able to outperform the fall detection systems proposed for the activities standing and falling.
in [22] and [23]. In our framework, the precision and accuracy
of fall detection are further improved by increasing the size A fall comprises three main stages: (i) the pre-fall, (ii) the
of the feature vector. For instance, using a feature vector of fall, and (iii) the post-fall. In the pre-fall stage, the person
length 66, the EBT algorithm has a fall detection accuracy of is generally walking and has a vertical body posture, while
99.1% and a precision of 100%. for the post-fall stage the person is usually lying on the
In the following, we explain why we achieve a better fall ground. During the fall stage, the body posture changes from
detection accuracy than [22] and [23], even though we use
11 For the activities with vertical body posture, the x-axis of the accelerom-
10 See the confusion matrix in Fig. 5. eter corresponds to the z-axis of the earth-centered coordinate system.

2169-3536 (c) 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2019.2906693, IEEE Access

14

vertical to horizontal. Since the smartphone is attached to the


person’s body, the orientation of the axes of the smartphone’s
accelerometer changes during the fall. For activities with a can to a certain extent interpret the obtained results. However,
vertical body posture, such as standing, the x-axis of the as the dimensionality of the feature vector increases, the
accelerometer coincides with the z-axis of the earth. Therefore, interpretability of the obtained results becomes highly difficult.
for a vertical body posture, the gravity contribution equals to Since the size of the feature vector in this section is large, we
10 m/s2 along the x-axis and 0 m/s2 along the y- and z- cannot explain the reasons behind the obtained classification
axes.12 In contrast, for activities with horizontal body posture, results.
such as lying, the z-axis of the accelerometer coincides with In the following, we assess the accuracy of four classifica-
the z-axis of the earth-centered coordinate system. As a result, tion algorithms using all the extracted features (i.e., the used
the gravity contribution equals to 10 m/s2 along the z-axis and feature vector has a length of 328). These four classification
0 m/s2 along the x- and y-axes. However, for falling, since algorithms are the KNN, the ANN, the QSVM, and the EBT
the accelerometer axes orientation keeps changing during the algorithm. The data are randomly divided into a training set
fall, the contribution of the gravity is non-zero along both the and test set. The training data and the test data represent 70%
x- and z-axes of the accelerometer as shown in Fig. 10. and 30%, respectively, of the total data.
By comparing the histograms of the mean value of the In Fig. 11, we provide the confusion matrix of the KNN
acceleration az (t) for the activities standing and falling in algorithm obtained using all the features extracted from the
Fig. 10, we observe that for standing the mean value of acceleration and the angular velocity. The KNN algorithm
the acceleration az (t) is mainly concentrated around 0 m/s2 , achieves an overall accuracy of 85.8%. We recall that using
whereas for falling the mean value of the acceleration az (t) the features from Subset C, we achieve an overall accuracy of
is generally located in the interval [−8 m/s2 , 8 m/s2 ]. This 81.2%. This implies that by increasing the size of the feature
difference in the histograms of the mean value of az (t) for vector from 66 to 328, while using the KNN algorithm, we can
standing and falling allows the classifier to distinguish these improve the classification accuracy by 4.6%. From Fig. 11, we
two activities. Hence, using the mean value of the triaxial see that for the KNN algorithm the classification accuracy for
acceleration, we can improve the fall detection accuracy as it the activities walking, walking upstairs, walking downstairs,
is shown in Fig. 5, where a fall detection accuracy of 96.8% sitting, standing, lying, and falling are equal to 95%, 87.5%,
is achieved using just a feature vector of size 3. 81.9%, 72.8%, 74.7%, 99.8%, and 98.2%, respectively. On
the other hand, the KNN algorithm achieves a precision of
87.5%, 88%, 90.3%, 72.7%, 75.4%, 99%, and 100% for
C. Classification Based on Acceleration and Angular Velocity
the activities walking, walking upstairs, walking downstairs,
Signals
sitting, standing, lying, and falling, respectively.
In this section, we utilize all the features extracted from the
acceleration and the angular velocity signals. We extract the
mean value of the total triaxial acceleration ai (t) (i = x, y, z),
the triaxial angular velocity ωi (t) (i = x, y, z), the magnitude 1
491 31 39 0 0 0 0 87.5%
15.3% 1.0% 1.2% 0.0% 0.0% 0.0% 0.0% 12.5%
of the body acceleration kab (t)k, and the magnitude of the
17 405 37 1 0 0 0 88.0%
angular velocity kω(t)k. Additionally, we extract the RMS, 2
0.5% 12.7% 1.2% 0.0% 0.0% 0.0% 0.0% 12.0%
the ACF peaks, the PSD peaks, and the energy in different 9 27 345 0 0 0 1 90.3%
frequency bands from the triaxial body acceleration abi (t) (i = 3
0.3% 0.8% 10.8% 0.0% 0.0% 0.0% 0.0% 9.7%

x, y, z), the triaxial angular velocity ωi (t) (i = x, y, z), 0 0 0 389 145 1 0 72.7%
4
the magnitude of the body acceleration kab (t)k, and the 0.0% 0.0% 0.0% 12.2% 4.5% 0.0% 0.0% 27.3%

magnitude of the angular velocity kω(t)k. Finally, we extract 5


0 0 0 139 427 0 0 75.4%
0.0% 0.0% 0.0% 4.3% 13.3% 0.0% 0.0% 24.6%
supplementary features from the triaxial body acceleration
abi (t) (i = x, y, z), such as the cross-correlation peaks and 6
0
0.0%
0
0.0%
0
0.0%
5
0.2%
0
0.0%
582
18.2%
1
0.0%
99.0%
1.0%
the main maxima and minima. Using all these features, we 0 0 0 0 0 0 107 100%
7
construct a feature vector of length 328. 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 3.3% 0.0%

It is important to mention that machine learning algorithms 95.0% 87.5% 81.9% 72.8% 74.7% 99.8% 98.2% 85.8%
5.0% 12.5% 18.1% 27.2% 25.3% 0.2% 1.8% 14.2%
have a nested and non-linear structure, which makes it difficult
to understand how classifiers can achieve a high recognition 1 2 3 4 5 6 7

accuracy. Most researchers in this field use machine learning


methods as a black box [35], [36]. The interpretability of
the decision made by machine learning algorithms is still an Fig. 11. Confusion matrix of the KNN algorithm obtained using 328 features.
open research question [35], [36]. When we are faced with
a problem where the size of the feature vector is small, we Table IX shows the confusion matrix of the binary classifi-
cation problem resulting from using the KNN algorithm with
12 The contribution of the gravity is equal to 10 m/s2 along the z-axis
328 features. From Table IX, we see that the number of FP
of earth-centered coordinate system. In our case, the gravity contribution is
measured along the different axes of the accelerometer, which differ from the equals 0 and the number of FN is 2. The FP rate is equal to
axes of the earth-centered coordinate system. 0%, while the FN rate is 1.83%. The accuracy and precision

2169-3536 (c) 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2019.2906693, IEEE Access

15

TABLE IX TABLE X
C ONFUSION MATRIX OF THE BINARY CLASSIFICATION PROBLEM OF THE C ONFUSION MATRIX OF THE BINARY CLASSIFICATION PROBLEM OF THE
KNN ALGORITHM OBTAINED USING 328 FEATURES . ANN ALGORITHM OBTAINED USING 328 FEATURES .

Actual Non-Fall Actual Fall Actual Non-Fall Actual Fall


Predicted Non-Fall 3090 (TN) 2 (FN) Predicted Non-Fall 3087 (TN) 1 (FN)
Predicted Fall 0 (FP) 107 (TP) Predicted Fall 0 (FP) 112 (TP)

of fall detection are equal to 98.17% and 100%, respectively.


The confusion matrix of the ANN algorithm in Fig. 12 respectively. For the EBT algorithm, the accuracy is 100% for
shows that the ANN algorithm outperforms the KNN algo- the activities lying and falling, and there is no false alarm for
rithm by 6% in terms of overall accuracy. Additionally, the use these activities. Moreover, the accuracy and the precision of
of 328 features instead of 66 features in conjunction with the the EBT algorithm for the activities walking, walking upstairs,
ANN algorithm allows improving the classification accuracy and walking downstairs are above 98%. The activities with
by 4%. On the other hand, the increase of the number of the lowest accuracy and precision are sitting and standing.
features from 66 to 328 results in enhancing the prediction The EBT algorithm can classify the activities sitting and
precision for the activities walking, walking upstairs, walking standing with an accuracy of 94.6% and 95.4%, respectively.
downstairs, and sitting by 9.6%, 7.4%, 6.6%, and 4.1%, The prediction precisions of the EBT algorithm for sitting and
respectively. In terms of accuracy, we observe that the use of standing are equal to 95.1% and 94.8%, respectively. Note that
328 features instead of 66 features yields an improvement in differentiating between sitting and standing is not very critical.
classification accuracy by 6.5%, 13.8%, 6.6%, and 5.3% for
the activities walking, walking upstairs, walking downstairs,
and standing, respectively.
515 1 2 0 0 0 0 99.4%
1
16.1% 0.0% 0.1% 0.0% 0.0% 0.0% 0.0% 0.6%

0 458 4 1 0 0 0 98.9%
2
0.0% 14.3% 0.1% 0.0% 0.0% 0.0% 0.0% 1.1%
498 6 19 4 4 0 0 93.8%
1
15.6% 0.2% 0.6% 0.1% 0.1% 0.0% 0.0% 6.2% 1 2 416 0 0 0 0 99.3%
3
0.0% 0.1% 13.0% 0.0% 0.0% 0.0% 0.0% 0.7%
12 443 21 4 1 0 0 92.1%
2
0.4% 13.8% 0.7% 0.1% 0.0% 0.0% 0.0% 7.9% 0 1 0 488 66 0 0 87.9%
4
0.0% 0.0% 0.0% 15.3% 2.1% 0.0% 0.0% 12.1%
15 16 390 0 0 0 0 92.6%
3
0.5% 0.5% 12.2% 0.0% 0.0% 0.0% 0.0% 7.4% 0 1 0 45 505 0 0 91.7%
5
0.0% 0.0% 0.0% 1.4% 15.8% 0.0% 0.0% 8.3%
0 1 0 445 60 0 0 87.9%
4
0.0% 0.0% 0.0% 13.9% 1.9% 0.0% 0.0% 12.1% 0 0 0 0 0 583 0 100%
6
0.0% 0.0% 0.0% 0.0% 0.0% 18.2% 0.0% 0.0%
0 0 0 96 490 0 0 83.6%
5
0.0% 0.0% 0.0% 3.0% 15.3% 0.0% 0.0% 16.4% 0 0 0 0 0 0 110 100%
7
0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 3.4% 0.0%
0 0 0 1 0 561 1 99.6%
6
0.0% 0.0% 0.0% 0.0% 0.0% 17.5% 0.0% 0.4% 99.8% 98.9% 98.6% 91.4% 88.4% 100% 100% 96.1%
0.2% 1.1% 1.4% 8.6% 11.6% 0.0% 0.0% 3.9%
0 0 0 0 0 0 112 100%
7
0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 3.5% 0.0% 1 2 3 4 5 6 7

94.9% 95.1% 90.7% 80.9% 88.3% 100% 99.1% 91.8%


5.1% 4.9% 9.3% 19.1% 11.7% 0.0% 0.9% 8.2%

1 2 3 4 5 6 7
Fig. 13. Confusion matrix of the QSVM algorithm obtained using 328
features.

Fig. 12. Confusion matrix of the ANN algorithm obtained using 328 features. Table XI illustrates the confusion matrix of the binary clas-
sification problem obtained when using the QSVM algorithm
Table X provides the confusion matrix of the binary classifi- with 328 features. From Table XI, we observe that the number
cation problem resulting from using the ANN algorithm with of FP equals 0 and the number of FN is 0. The FP rate and FN
328 features. From Table X, we see that the number of FP rate are both equal to 0%, this implies that the fall detection
equals 0 and the number of FN is 1. The FP rate is equal to system has zero false alarm and has zero undetected falls.
0%, while the FN rate is 0.88%. The accuracy and precision TABLE XI
for fall detection are equal to 99.12% and 100%, respectively. C ONFUSION MATRIX OF THE BINARY CLASSIFICATION PROBLEM OF THE
QSVM ALGORITHM OBTAINED USING 328 FEATURES .
In Figs. 13 and 14, we provide the confusion matrices
Actual Non-Fall Actual Fall
for the QSVM and the EBT algorithms, respectively. These Predicted Non-Fall 3089 (TN) 0 (FN)
two algorithms have a better performance compared to the Predicted Fall 0 (FP) 110 (TP)
KNN and the ANN algorithms. The QSVM and the EBT
algorithms achieve an overall accuracy of 96.1% and 97.7%, Table XII shows the confusion matrix of the binary classifi-

2169-3536 (c) 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2019.2906693, IEEE Access

16

V. C ONCLUSION
511 0 4 0 0 0 0 99.2% A robust fall detection system is essential to support the
1
16.0% 0.0% 0.1% 0.0% 0.0% 0.0% 0.0% 0.8% independent living of elderlies. In this paper, we have pro-
2
2 457 3 0 0 0 0 98.9% posed a machine learning approach for fall detection and
0.1% 14.3% 0.1% 0.0% 0.0% 0.0% 0.0% 1.1%
ADL recognition. We have tested the performance of four
3 5 415 0 0 0 0 98.1%
3
0.1% 0.2% 13.0% 0.0% 0.0% 0.0% 0.0% 1.9%
algorithms in recognizing the activities falling, walking, walk-
ing upstairs, walking downstairs, sitting, standing, and lying
0 0 0 505 26 0 0 95.1%
4
0.0% 0.0% 0.0% 15.8% 0.8% 0.0% 0.0% 4.9% based on the acceleration and the angular velocity data. We
0 1 0 29 545 0 0 94.8% have proposed new time and frequency domain features and
5
0.0% 0.0% 0.0% 0.9% 17.0% 0.0% 0.0% 5.2% have demonstrated the importance of these features and their
6
0 0 0 0 0 584 0 100% positive impact on enhancing the accuracy and precision of
0.0% 0.0% 0.0% 0.0% 0.0% 18.3% 0.0% 0.0%
the classifier.
0 0 0 0 0 0 109 100%
7
0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 3.4% 0.0%
Moreover, we have tested the performance of the KNN,
ANN, QSVM, and EBT classification algorithms on real-
99.0% 98.7% 98.3% 94.6% 95.4% 100% 100% 97.7%
1.0% 1.3% 1.7% 5.4% 4.6% 0.0% 0.0% 2.3% world acceleration data obtained from public databases. The
1 2 3 4 5 6 7 internal parameters of these algorithms have been optimized
using the training data. Afterwards, the performance of the
trained algorithms has been assessed using the test data. In
Fig. 14. Confusion matrix of the EBT algorithm obtained using 328 features.
a first step, only the acceleration data have been used for
activity recognition. A feature vector of size 66 has been
obtained and has been provided as an input to the classification
algorithm. Our results reveal that the KNN, ANN, QSVM, and
cation problem obtained when using the EBT algorithm with EBT algorithm achieve an overall accuracy of 81.2%, 87.8%,
328 features. From Table XII, we see that the number of FP 93.2%, and 94.1%, respectively.
equals 0 and the number of FN is 0. The FP rate and FN rate In a second step, we have extracted new features from both
are both equal to 0%, thus the fall detection system has an the acceleration and the angular velocity data which has sig-
accuracy of 100% and generates zero false alarm. nificantly improved the performance of the four classification
algorithms. The constructed feature vector has a size of 328.
TABLE XII By using the proposed feature vector, we have shown that the
C ONFUSION MATRIX OF THE BINARY CLASSIFICATION PROBLEM OF THE KNN, ANN, QSVM, and EBT algorithms achieve an overall
EBT ALGORITHM OBTAINED USING 328 FEATURES .
accuracy of 85.8%, 91.8%, 96.1%, and 97.7%, respectively.
Actual Non-Fall Actual Fall It is worth to mention that the accuracy of fall detection for
Predicted Non-Fall 3090 (TN) 0 (FN) QSVM and EBT reaches 100% with no false alarm which is
Predicted Fall 0 (FP) 109 (TP)
the best achievable performance.

By comparing the accuracy of the QSVM and the EBT ACKNOWLEDGEMENT


algorithms for different activities, we notice that the EBT This work was supported by the WiCare Project funded
algorithm outperforms the QSVM algorithm in classifying by the Research Council of Norway under grant number
the activities sitting and standing, while the QSVM algorithm 261895/F20.
outperforms the EBT algorithm in terms of accuracy for the
activities walking, walking upstairs, and walking downstairs. R EFERENCES
Both the EBT and the QSVM algorithms reach an accuracy [1] WHO. (2018, Jun.) World report on ageing and health. [Online].
and a precision of 100% in classifying the activities lying Available: http://apps.who.int/iris/bitstream/handle/
and falling. Note that a 100% precision and accuracy for the 10665/186463/9789240694811 eng.pdf
[2] ——. (2018, Jun.) World Health Organization:
activity falling is a highly desirable performance. In fact, by Global report on falls prevention in older age.
achieving a 100% fall detection accuracy, we can build reliable [Online]. Available: https://extranet.who.int/agefriendlyworld/wp-
fall detection systems that support the independent living of the content/uploads/2014/06/WHo-Global-report-on-falls-prevention-in-
older-age.pdf
elderly, reduce the impact of fall related injuries, and improve [3] G. Bergen, M. Stevens, and E. Burns. (2018, Jun.) Falls and fall
the survival rate for persons that experience falls. On the other injuries among adults aged ≥ 65 years – United States, 2014. [Online].
hand, achieving 100% precision in fall detection means that Available: http://dx.doi.org/10.15585/mmwr.mm6537a2
[4] E. R. Burns, J. A. Stevens, and R. Lee, “The direct costs of fatal and
no false alarm is generated by the algorithm and all detected non-fatal falls among older adults – United States,” Journal of Safety
falls are real falls. Note that if the algorithm generates a false Research, vol. 58, pp. 99–103, Sep. 2016.
alarm, an ambulance will be sent to the person’s house. As [5] R. Igual, C. Medrano, and I. Plaza, “Challenges, issues and trends in
fall detection systems,” BioMedical Engineering OnLine, vol. 12, no. 1,
the number of false alarms increases, the amount of wasted pp. 1–24, Jul. 2013.
money increases. Therefore, it is very important to develop a [6] D. Anguita, A. Ghio, L. Oneto, X. Parra, and J. L. Reyes-Ortiz, “A public
fall detection system without false alarms, which is achieved domain dataset for human activity recognition using smartphones,” in
European Symposium on Artificial Neural Networks, Computational
using the EBT and the QSVM algorithms as shown in Figs. 13 Intelligence and Machine Learning, Bruges, Belgium, Apr. 2013, pp.
and 14. 24–26.

2169-3536 (c) 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2019.2906693, IEEE Access

17

[7] O. Ojetola, E. Gaura, and J. Brusey, “Data set for fall events and criminant analysis,” in 2010 5th International Conference on Future
daily activities from inertial sensors,” in 6th ACM Multimedia Systems Information Technology, Busan, South Korea, May 2010, pp. 1–6.
Conference - MMSys ’15, Portland, OR, USA, Mar. 2015, pp. 243–248. [28] C. M. Bishop, Pattern Recognition and Machine Learning, 1st ed.
[8] A. Wertner, P. Czech, and V. Pammer-Schindler, “An open labelled Cambridge, UK: Springer, 2006.
dataset for mobile phone sensing based fall detection,” in 12th EAI In- [29] I. Steinwart and A. Christmann, Support Vector Machines, 1st ed. New
ternational Conference on Mobile and Ubiquitous Systems: Computing, York, USA: Springer, 2008.
Networking and Services (MOBIQUITOUS 2015), Coimbra, Portugal, [30] T. G. Dietterich, “An experimental comparison of three methods for
Jul. 2015, pp. 277–278. constructing ensembles of decision trees: Bagging, boosting, and ran-
[9] A. Sucerquia, J. D. López, and J. F. Vargas-Bonilla, “SisFall: A fall and domization,” Machine Learning, vol. 40, no. 2, pp. 139–157, Aug. 2000.
movement dataset,” Sensors, vol. 17, no. 1, pp. 1–14, Jan. 2017. [31] R. W. Schafer, “What is a Savitzky-Golay filter? [Lecture notes],” IEEE
[10] E. Casilari, J. A. Santoyo-Ramón, and J. M. Cano-Garcı́a, “Analysis of Signal Processing Magazine, vol. 28, no. 4, pp. 111–117, Jul. 2011.
a smartphone-based architecture with multiple mobility sensors for fall [32] D. Bailey and P. Swarztrauber, “A fast method for the numerical
detection,” PLoS ONE, vol. 11, pp. 1–17, Dec. 2016. evaluation of continuous Fourier and Laplace transforms,” SIAM Journal
[11] O. P. Popoola and K. Wang, “Video-based abnormal human behavior on Scientific Computing, vol. 15, no. 5, pp. 1105–1110, Sep. 1994.
recognition–A review,” IEEE Transactions on Systems, Man, and Cyber- [33] J. M. Knudsen and P. G. Hjorth, Elements of Newtonian Mechanics:
netics, Part C (Applications and Reviews), vol. 42, no. 6, pp. 865–878, Including Nonlinear Dynamics, 3rd ed. Springer-Verlag Berlin Heidel-
Nov. 2012. berg, 2000.
[12] C. Rougier, J. Meunier, A. St-Arnaud, and J. Rousseau, “Robust video [34] M. Vallejo, C. Isaza, and J. López, “Artificial Neural Networks as
surveillance for fall detection based on human shape deformation,” IEEE an alternative to traditional fall detection methods,” in 35th Annual
Transactions on Circuits and Systems for Video Technology, vol. 21, International Conference of the IEEE Engineering in Medicine and
no. 5, pp. 611–622, May 2011. Biology Society (EMBC), Osaka, Japan, Jul. 2013, pp. 1648–1651.
[13] C. Zhang, Y. Tian, and E. Capezuti, “Privacy preserving automatic fall [35] A. Vellido, J. D. M. n Guerrero, and P. J. G. Lisboa, “Making machine
detection for elderly using RGBD cameras,” in International Conference learning models interpretable,” in European Symposium on Artificial
on Computers for Handicapped Persons (ICCHP 2012). Linz, Austria: Neural Networks, Computational Intelligence and Machine Learning
Springer, Berlin, Heidelberg, Jul. 2012, pp. 625–633. (ESANN 2012), Bruges, Belgium, Apr. 2012, pp. 163–172.
[14] I. Charfi, J. Miteran, J. Dubois, M. Atri, and R. Tourki, “Definition and [36] W. Samek, T. Wiegand, and K. Müller, “Explainable artificial in-
performance evaluation of a robust SVM based fall detection solution,” telligence: Understanding, visualizing and interpreting deep learning
in 2012 Eighth International Conference on Signal Image Technology models,” arXiv e-prints, arXiv:1708.08296, Aug. 2017.
and Internet Based Systems. Naples, Italy: IEEE, Nov. 2012, pp. 218–
224.
[15] H. A. Nguyen and J. Meunier, “Gait analysis from video: camcorders
vs. Kinect,” in International Conference Image Analysis and Recognition
(ICIAR 2014). Vilamoura, Portugal: Springer, Oct. 2014, pp. 66–73.
[16] R. K. Tripathy, L. N. Sharma, S. Dandapat, B. Vanrumste, and T. Croo-
nenborghs, “Bridging the gap between real-life data and simulated data
by providing a highly realistic fall dataset for evaluating camera-based
fall detection algorithms,” Healthcare Technology Letters, vol. 3, no. 1,
pp. 6–11, Mar. 2016.
[17] K. Sehairi, F. Chouireb, and J. Meunier, “Comparative study of motion
detection methods for video surveillance systems,” Journal of Electronic
Imaging, vol. 26, no. 2, pp. 26–29, Apr. 2017.
[18] J. Klonovs et al., Distributed Computing and Monitoring Technologies
for Older Patients, 1st ed. London, UK: SpringerBriefs in Computer
Science, 2016.
[19] A. Bourke, P. Van de Ven, A. Chaya, G. OLaighin, and J. Nelson,
“Testing of a long-term fall detection system incorporated into a custom
vest for the elderly,” in 30th Annual International Conference of the
IEEE Engineering in Medicine and Biology Society (EMBS 2008),
Vancouver, BC, Canada, Aug. 2008, pp. 2844–2847.
[20] P. Barralon, I. Dorronsoro, and E. Hernandez, “Automatic fall detection:
Complementary devices for a better fall monitoring coverage,” in IEEE
15th International Conference on e-Health Networking, Applications and
Services (Healthcom 2013), Lisbon, Portugal, Oct. 2013, pp. 590–593.
[21] P. Kostopoulos, T. Nunes, K. Salvi, M. Deriaz, and J. Torrent, “F2D: A
fall detection system tested with real data from daily life of elderly
people,” in 17th International Conference on e-Health Networking,
Application & Services (HealthCom), Boston, MA, USA, Oct. 2015,
pp. 397–403.
[22] O. Ojetola, E. I. Gaura, and J. Brusey, “Fall detection with wearable
sensors–Safe (smart fall detection),” in Seventh International Conference
on Intelligent Environments, Jul. 2011, pp. 318–321.
[23] P. Putra, J. Brusey, and E. Gaura, “A cascade-classifier approach for
fall detection,” in 5th EAI International Conference on Wireless Mobile
Communication and Healthcare (MOBIHEALTH’15), London, UK, Oct.
2015, pp. 94–99.
[24] D. Anguita, A. Ghio, L. Oneto, X. Parra, and J. L. Reyes-Ortiz, “Human
activity recognition on smartphones using a multiclass hardware-friendly
support vector machine,” in International Workshop of Ambient Assited
Living, Vitoria-Gasteiz, Spain, Dec. 2012, pp. 216–223.
[25] A. B. Williams and F. J. Taylor, Electronic Filter Design Handbook,
4th ed. New York, USA: McGraw-Hill, 2006.
[26] J.-Y. Yang, J.-S. Wang, and Y.-P. Chen, “Using acceleration mea-
surements for activity recognition: An effective learning algorithm for
constructing neural classifiers,” Pattern Recognition Letters, vol. 29,
no. 16, pp. 2213–2220, 2008.
[27] A. M. Khan, Y.-K. Lee, S. Y. Lee, and T.-S. Kim, “Human activity
recognition via an accelerometer-enabled-smartphone using kernel dis-

2169-3536 (c) 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

You might also like