(2018) Multimodal Database For Human Activity Recognition and Fall Detection
Abstract: Fall detection can improve the security and safety of older people and raise an alert when a fall occurs. Fall detection systems are mainly based on wearable sensors, ambient sensors, and vision. Each method has commonly known advantages and limitations. Multimodal and data fusion approaches combine several data sources in order to describe falls better. Publicly available multimodal datasets are needed to allow comparison between systems, algorithms, and modality combinations. To address this issue, we present a publicly available dataset for fall detection that considers Inertial Measurement Units (IMUs), ambient infrared presence/absence sensors, and an electroencephalogram (EEG) helmet. It will allow human activity recognition researchers to run experiments with different combinations of sensors.
1. Introduction
Falls are the most common cause of disability and death in older people around the world [1].
The risk of falling increases with age, but it also depends on health status and environmental factors [2]. Along with preventive measures, it is also important to have fall detection solutions in order to reduce the time until a person who has suffered a fall receives assistance and treatment [3].
Fall detection can improve the security and safety of older people and raise an alert when a fall occurs. Surveys in the field of automatic fall detection [3–5] classify fall detection systems into three categories: approaches based on wearable sensors, ambient sensors, and vision. Each method has commonly known advantages and limitations. Wearable sensors are sometimes obtrusive and uncomfortable; smartphones and smartwatches have limited battery, processing power, and storage. Although vision-based methods are cheap, unobtrusive, and require less cooperation from the person, they raise privacy issues, and environmental conditions can affect data collection.
In recent years, owing to the increasing availability of different data modalities and the greater ease of acquiring them, there is a trend toward using multimodal data to study a phenomenon or system of interest [6]. The main idea is that "Due to the rich characteristics of natural processes and environments, it is rare that a single acquisition method provides complete understanding thereof" [6]. Multimodal approaches and data fusion are also trends in health systems. The combination of different data sources, processed through data fusion, is applied to improve the reliability and precision of fall
detection systems. Koshmak et al. [7] presented the challenges and issues of fall detection research with a focus on multisensor fusion. The authors describe fall detection systems based on multisensor fusion approaches; however, each of them reports results on its own dataset, which makes comparison impossible. Igual et al. [8] highlight the need for publicly available datasets with a great diversity of acquisition modalities to enable comparison between systems and the performance of new algorithms.
In this work, we present a publicly available multimodal dataset for fall detection. This is the first attempt in our ongoing project, considering four Inertial Measurement Units (IMUs), four infrared presence/absence sensors, and an EEG helmet. This dataset can benefit researchers in the fields of wearable computing, ambient intelligence, and sensor fusion. New machine learning algorithms can be evaluated with this dataset. It will also allow human activity recognition (HAR) researchers to run experiments with different combinations of sensors in order to determine the best placement of wearable and ambient sensors.
The rest of the paper is organized as follows. In Section 2 we review existing publicly available datasets for fall detection. Our UP-Fall detection and activity recognition dataset is presented in Section 3. Experiments and results are shown in Section 4. Finally, conclusions and future work are discussed in Section 5.
3.1. Data Acquisition System for Fall Detection and Activity Recognition
The main objective of this data acquisition system is to sense data from different body parts (neck, waist, thigh, and wrist), EEG signals, and absence/presence in a delimited area. All data are converted into a JavaScript Object Notation (JSON) structure and then sent to Firebase (a NoSQL database) through a REST API. This provides rich information to classify and detect falls and to recognize activities of daily living (ADLs).
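The exact record structure is shown in Figure 1. Purely as an illustration, with hypothetical field names rather than the schema actually used, a single IMU reading encoded as JSON might look as follows:

{
  "subject": 1,
  "activity": 3,
  "trial": 2,
  "timestamp": "2018-06-12T10:15:30.250Z",
  "sensor": "IMU_waist",
  "accelerometer": {"x": 0.12, "y": -9.81, "z": 0.33},
  "angular_velocity": {"x": 0.01, "y": 0.02, "z": -0.05},
  "ambient_light": 212
}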
The components used for this acquisition system were:
• 4 Inertial Measurement Units (IMUs);
• 1 electroencephalogram (EEG) helmet;
• 4 ambient infrared (absence/presence) sensors;
• 1 Raspberry Pi 3;
• 1 PC with an external USB Bluetooth adapter.
Data acquisition for this project consists of three steps: sensing, collection, and storage.
1. Sensing.—Each component senses the actions with its sensors at the same time. The data to be sensed are: IMUs: accelerometer (X, Y, and Z), ambient light (L), and angular velocity (X, Y, and Z, in rad/s); EEG helmet: EEG signals; infrared sensors: absence/presence as a binary value.
2. Collection.—The collection phase consists in gathering data from the IMUs and the EEG helmet through a Bluetooth connection. Data are converted to a JSON structure (Figure 1) to be sent to the cloud (Firebase). This process is carried out by a C# program that uses the SDKs of the IMUs and the EEG helmet, providing full access to the sensor data. The infrared sensors are connected directly to the Raspberry Pi 3, on which a Python program reads the data and converts them to JSON in order to store them in the cloud.
3. Storage.—Once the information has been collected and packaged in JSON structures, it is sent via a POST request to be stored in Firebase (a NoSQL database). To achieve this, a REST API endpoint was configured to store every POST request in the Firebase database as a new record (a minimal sketch of this step is shown after this list).
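As an informal illustration of the storage step, the following Python sketch appends one JSON record to a Firebase Realtime Database through its REST interface; the project URL and node name are placeholders, not the ones configured in our system:

import requests

# Placeholder URL; a real deployment uses its own Firebase project id
# and, normally, an authentication token.
FIREBASE_URL = "https://<project-id>.firebaseio.com/records.json"

def store_record(record: dict) -> None:
    # POST appends the JSON package as a new child node (new record)
    # in the Firebase Realtime Database.
    response = requests.post(FIREBASE_URL, json=record)
    response.raise_for_status()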
The IMUs were positioned on the neck, waist (belt), thigh, and wrist. These positions were defined after reviewing the placements most commonly used for fall detection in the literature [3]. The acquisition system is shown in Figure 2.
Data are separated by subject, activity, and trial, and are delivered in CSV format. The dataset was collected in a laboratory, and it was decided not to exercise complete control over the environment, with the aim of simulating conditions as realistic as possible. The different sensor signals were synchronized for temporal alignment.
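As a minimal sketch of how the delivered files might be consumed, the following Python fragment loads one trial and segments it into fixed-length windows; the file-naming scheme, column layout, and sampling rate are assumptions for illustration, not the dataset's actual ones:

import pandas as pd

def load_trial(subject: int, activity: int, trial: int) -> pd.DataFrame:
    # Hypothetical naming scheme; adapt it to the actual dataset layout.
    name = f"Subject{subject:02d}_Activity{activity:02d}_Trial{trial:02d}.csv"
    return pd.read_csv(name)

def sliding_windows(df: pd.DataFrame, size: int, step: int):
    # Yield fixed-length windows over the temporally aligned samples;
    # size == step produces non-overlapping windows.
    values = df.to_numpy()
    for start in range(0, len(values) - size + 1, step):
        yield values[start:start + size]

# Example: two-second windows at an assumed 20 Hz sampling rate.
# windows = list(sliding_windows(load_trial(1, 1, 1), size=40, step=40))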
4.2. Classification
The following classifiers were used in the training process: Linear Discriminant Analysis (LDA), CART Decision Trees (CART), Gaussian Naïve Bayes (NB), Support Vector Machines (SVM), Random Forest (RF), K-Nearest Neighbors (KNN), and Neural Networks (NN). A summary of the results, in terms of the accuracy of each machine learning method in the training process, is presented in Table 4.
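A minimal sketch of this training comparison, assuming that X and y already hold the windowed feature vectors and activity labels, is the following; default scikit-learn hyperparameters are used, and the evaluation protocol behind Table 4 may differ:

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "CART": DecisionTreeClassifier(),
    "NB": GaussianNB(),
    "SVM": SVC(),
    "RF": RandomForestClassifier(),
    "KNN": KNeighborsClassifier(),
    "NN": MLPClassifier(),
}
for name, model in models.items():
    # Mean cross-validated training accuracy for each classifier.
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f}")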
Although the difference between using the IMUs alone and the IMUs with context sensors is quite small, it is interesting to observe that activity recognition improves without any enhancement of the dataset or of the feature extraction; simply adding contextual information to this particular dataset improves the performance of human activity recognition. The simulation of realistic conditions may be a factor behind the low metrics obtained with simple machine learning methods.
The results of predictions on the validation sets using the classifiers with the best training performance (i.e., SVM and NN) are presented in Figures 3–5. Each figure shows a comparison between the two experiments, using IMU data alone and IMUs plus context, for the same window size.
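Validation metrics of this kind can be computed, for instance, as in the following sketch, assuming a fitted classifier clf and a validation split X_val, y_val (the macro-averaging choice is an assumption):

from sklearn.metrics import precision_recall_fscore_support

y_pred = clf.predict(X_val)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_val, y_pred, average="macro")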
Figure 3 shows the average precision, recall, and F1-score of the predictions on the validation sets created with two-second windowing. We can see a small improvement in precision and F1-score when using IMUs plus context data.
In Figure 4, on the contrary, we observe that the performance worsens slightly when context information is added to the IMU data in the three-second case. Nevertheless, in the experiments with five-second windowing, a similar improvement in the performance of the IMUs-plus-context scenario can be observed.
[Figures 3 and 4: average precision, recall, and F1-score of predictions on the validation sets for two- and three-second windows.]
Further experimentation can be done with different sensor placements in order to find the best combination and/or placement for fall detection.
[Figure 5: average precision, recall, and F1-score of predictions on the validation sets for five-second windows.]
With a more exhaustive tuning of the machine learning models, better results could be obtained. However, the main goal of our experimentation is to show an example of how the dataset can be used.
In order to improve the diversity of the multimodal dataset, new acquisition modalities will be added, namely cameras and microphones. Further experimentation must be done to verify the complementarity of the various types of data.
Funding: This research has been funded by Universidad Panamericana through the grant “Fomento a la
Investigación UP 2017”, under project code UP-CI-2017-ING-MX-02.
References
1. Gale, C.R.; Cooper, C.; Sayer, A. Prevalence and risk factors for falls in older men and women: The English
Longitudinal Study of Ageing. Age Ageing 2016, 45, 789–794.
2. Alshammari, S.A.; Alhassan, A.M.; Aldawsari, M.A.; Bazuhair, F.O.; Alotaibi, F.K.; Aldakhil, A.A.;
Abdulfattah, F.W. Falls among elderly and its relation with their health problems and surrounding
environmental factors in Riyadh. J. Fam. Commun. Med. 2018, 25, 29.
3. Igual, R.; Medrano, C.; Plaza, I. Challenges, issues and trends in fall detection systems. Biomed. Eng. Online
2013, 12, 66.
4. Noury, N.; Fleury, A.; Rumeau, P.; Bourke, A.K.; Laighin, G.O.; Rialle, V.; Lundy, J.E. Fall detection-principles
and methods. In Proceedings of the 29th Annual International Conference of the Engineering in Medicine
and Biology Society, Lyon, France, 22–26 August 2007; pp. 1663–1666.
5. Mubashir, M.; Shao, L.; Seed, L. A survey on fall detection: Principles and approaches. Neurocomputing
2013, 100, 144–152.
6. Lahat, D.; Adali, T.; Jutten, C. Multimodal data fusion: An overview of methods, challenges, and
prospects. Proc. IEEE 2015, 103, 1449–1477.
7. Koshmak, G.; Loutfi, A.; Linden, M. Challenges and issues in multisensor fusion approach for fall
detection. J. Sens. 2016, doi:10.1155/2016/6931789.
8. Igual, R.; Medrano, C.; Plaza, I. A comparison of public datasets for acceleration-based fall detection. Med.
Eng. Phys. 2015, 37, 870–878.
9. Casilari, E.; Santoyo-Ramón, J.A.; Cano-García, J. Analysis of public datasets for wearable fall detection
systems. Sensors 2017, 17, 1513.
10. Frank, K.; Nadales, V.; Robertson, M.J.; Pfeifer, T. Bayesian recognition of motion related activities with
inertial sensors. In Proceedings of the 12th ACM International Conference Adjunct Papers on Ubiquitous
Computing-Adjunct, Copenhagen, Denmark, 26–29 September 2010.
11. Vavoulas, G.; Pediaditis, M.; Chatzaki, C.; Spanakis, E.G.; Tsiknakis, M. The mobifall dataset: Fall
detection and classification with a smartphone. Int. J. Monit. Surveillance Technol. Res. 2014, 2, 44–56.
12. Medrano, C.; Igual, R.; Plaza, I.; Castro, M. Detecting falls as novelties in acceleration patterns acquired
with smartphones. PLoS ONE 2014, 9, e94811.
13. Vilarinho, T.; Farshchian, B.; Bajer, D.; Dahl, O.; Egge, I.; Hegdal, S.S.; Lønes, A.; Slettevold, J.N.;
Weggersen, S.M. A combined smartphone and smartwatch fall detection system. In Proceedings of the
2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing
and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and
Computing (CIT/IUCC/DASC/PICOM), Liverpool, UK, 26–28 October 2015; pp. 1443–1448.
14. Sucerquia, A.; López, J.D.; Vargas-Bonilla, J.F. SisFall: A fall and movement dataset. Sensors 2017, 17, 198.
15. Kepski, M.; Kwolek, B.; Austvoll, I. Fuzzy inference-based reliable fall detection using kinect and
accelerometer. In Artificial Intelligence and Soft Computing of Lecture Notes in Computer Science; Springer:
Berlin, Germany, 2012; pp. 266–273.
16. Kwolek, B.; Kepski, M. Improving fall detection by the use of depth sensor and accelerometer.
Neurocomputing 2015, 168, 637–645.
17. Ofli, F.; Chaudhry, R.; Kurillo, G.; Vidal, R.; Bajcsy, R. Berkeley MHAD: A comprehensive multimodal
human action database. In Proceedings of the 2013 IEEE Workshop on Applications of Computer Vision
(WACV), Tampa, FL, USA, 15–17 January 2013; pp. 53–60.
18. Khaleghi, B.; Khamis, A.; Karray, F.O.; Razavi, S.N. Multisensor data fusion: A review of the
state-of-the-art. Inf. Fusion 2013, 14, 28–44.
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).