
Proceedings

Multimodal Database for Human Activity Recognition and Fall Detection †
Lourdes Martínez-Villaseñor 1,*, Hiram Ponce 1 and Ricardo Abel Espinosa-Loera 2
1 Facultad de Ingeniería, Universidad Panamericana, Augusto Rodin 498, 03920 Ciudad de México, Mexico;
[email protected]
2 Facultad de Ingeniería, Universidad Panamericana, Josemaría Escrivá de Balaguer 101,
20290 Aguascalientes, Mexico; [email protected]


* Correspondence: [email protected]; Tel.: +52-555-482-1600
† Presented at the 12th International Conference on Ubiquitous Computing and Ambient Intelligence
(UCAmI 2018), Punta Cana, Dominican Republic, 4–7 December 2018.

Published: 22 October 2018

Abstract: Fall detection can improve the security and safety of older people by issuing an alert when a fall occurs. Fall detection systems are mainly based on wearable sensors, ambient sensors, and vision. Each method has commonly known advantages and limitations. Multimodal and data fusion approaches combine several data sources in order to better describe falls. Publicly available multimodal datasets are needed to allow comparison between systems, algorithms, and modality combinations. To address this issue, we present a publicly available dataset for fall detection that considers Inertial Measurement Units (IMUs), ambient infrared presence/absence sensors, and an electroencephalogram (EEG) helmet. It will allow human activity recognition researchers to conduct experiments with different combinations of sensors.

Keywords: fall detection; database; human activity recognition

1. Introduction
Falls are the most common cause of disability and death in older people [1] around the world. The risk of falling increases with age, but it also depends on health status and environmental factors [2]. Along with preventive measures, it is also important to have fall detection solutions in order to reduce the time it takes for a person who has suffered a fall to receive assistance and treatment [3].
Fall detection can improve the security and safety of older people by issuing an alert when a fall occurs. Surveys in the field of automatic fall detection [3–5] classify fall detection systems into three categories: approaches based on wearable sensors, ambient sensors, and vision. Each method has commonly known advantages and limitations. Wearable sensors are sometimes obtrusive and uncomfortable; smartphones and smartwatches have battery limitations and limited processing and storage. Although vision-based methods are cheap, unobtrusive, and require less cooperation from the person, they raise privacy issues, and environmental conditions can affect data collection.
In recent years, due to the increasing availability of different modalities of data and the greater ease of acquiring them, there is a trend to use multimodal data to study a phenomenon or system of interest [6]. The main idea is that "Due to the rich characteristics of natural processes and environments, it is rare that a single acquisition method provides complete understanding thereof" [6]. Multimodal and data fusion approaches are also a trend in health systems. The combination of different data sources, processed through data fusion, is applied to improve the reliability and precision of fall detection systems. Koshmak et al. [7] presented the challenges and issues of fall detection research with a focus on multisensor fusion. The authors describe fall detection systems based on multisensor fusion approaches; however, each of them presents results on its own dataset, making them impossible to
compare. Igual et al. [8] highlight the need for publicly available datasets with a wide diversity of acquisition modalities to enable comparison between systems and of the performance of new algorithms.
In this work, we present a publicly available multimodal dataset for fall detection. This is the first attempt in our ongoing project, considering four Inertial Measurement Units (IMUs), four infrared presence/absence sensors, and an electroencephalogram (EEG) helmet. This dataset can benefit researchers in the fields of wearable computing, ambient intelligence, and sensor fusion. New machine learning algorithms can be evaluated on this dataset. It will also allow human activity recognition (HAR) researchers to conduct experiments with different combinations of sensors in order to determine the best placement of wearable and ambient sensors.
The rest of the paper is organized as follows. In Section 2, we review existing publicly available datasets for fall detection. Our UP-Fall detection and activity recognition dataset is presented in Section 3. Experiments and results are shown in Section 4. Finally, conclusions and future work are discussed in Section 5.

2. Fall Detection Databases Overview


Mubashir et al. [5] divided the approaches for fall detection into three categories: wearable device-based, ambient sensor-based, and vision-based. There are currently few publicly available datasets for fall detection based on sensors [7,9–12], vision [13,14], or multimodal approaches [15–17]. In this section, we present an overview of datasets based on sensors or multimodal approaches. For more extensive surveys, including vision-based approaches, see [3,6,18].

2.1. Sensor-Based Fall Databases


The DLR (German Aerospace Center) dataset [19] is a collection of data from one Inertial Measurement Unit (IMU) worn on the belt by 16 people (6 female and 5 male) whose ages ranged from 20 to 42 years. They performed seven activities (walking, running, standing, sitting, lying, falling, and jumping). The type of fall was not specified.
The most recent version of the MobiFall fall detection dataset [20] was developed by the Biomedical Informatics and eHealth Laboratory of the Technological Educational Institute of Crete. They captured data generated by the inertial sensors of a smartphone (3D accelerometer and gyroscope) from 24 subjects, seventeen male and seven female, with an age range of 22–47 years. The authors recorded four types of falls and nine activities of daily living (ADL).
The tFall dataset, developed by EduQTech (Education, Quality and Technology) at Universidad de Zaragoza [21], collected data from ten participants, three female and seven male, with ages ranging from 20 to 42 years. They obtained data from two smartphones carried by the subjects in everyday life for ADL and eight types of simulated falls.
The Project Gravity dataset [11] was acquired with a smartphone and a smartwatch worn in the thigh pocket and on the wrist, respectively. Three young participants (aged 22 to 32) performed seven ADL activities and 12 types of fall, simulated by performing a natural ADL followed by a sudden fall.
SisFall [22] is a dataset of falls and ADL obtained with a self-developed device based on a Kinetis MKL25Z128VLK4 microcontroller, an Analog Devices ADXL345 accelerometer, a Freescale MMA8451Q accelerometer, and an ITG3200 gyroscope. The device was worn on a belt. The dataset was generated with the collaboration of 38 participants, both elderly people and young adults, with ages ranging from 19 to 75 years. They selected 19 ADL activities and 15 types of fall simulated while doing another ADL activity. It is important to note that this dataset is the only one that includes elderly people in its trials.
These datasets include only wearable sensors, whether commercial, self-developed, or embedded in smart devices. Other context-aware approaches can be found, but they are mostly vision-based or multimodal. A few authors use only a near-field imaging sensor [23], pressure and infrared sensors [24], or only infrared sensors [25]. To our knowledge, no dataset with binary ambient sensors or other types of ambient sensors is publicly available for fall detection.

2.2. Multimodal Fall Databases


The UR (University of Rzeszow) fall detection dataset [26] was generated by collecting data from an IMU device connected via Bluetooth and two Kinect cameras connected via USB. Five volunteers were recorded performing 70 sequences of falls and ADL. Some of these are fall-like activities in typical rooms. There were two kinds of falls: falling from a standing position and falling from sitting on a chair. Each record contains sequences of depth and RGB images from the two cameras and raw accelerometer data.
The Berkeley Multimodal Human Action Database (MHAD) [27], presented by [17], contains 11 actions performed by 12 volunteers (7 male and 5 female). Although the dataset registered very dynamic actions, falls were not considered. Nevertheless, this dataset is important because the actions were simultaneously captured with an optical motion capture system, six wireless accelerometers, and four microphones.

3. UP-Fall Detection and Activity Recognition Database


We present a dataset for fall detection that includes data from ADL and simulated falls collected with wearable and ambient sensors. Four volunteers performed the activities in a controlled environment. The dataset is delivered in CSV format and is publicly available at https://sites.google.com/up.edu.mx/har-up.

3.1. Data Acquisition System for Fall Detection and Activity Recognition
The main objective of this data acquisition system is to sense data from different body parts (neck, waist, thigh, and wrist), EEG signals, and absence/presence in a delimited area. All data are converted into a JavaScript Object Notation (JSON) structure and then sent to Firebase (a NoSQL database) through a REST API. This provides rich information for classifying and detecting falls and for recognizing ADL.
The components used for this acquisition system were:
• 4 Inertial Measurement Units (IMUs)
• 1 Electroencephalogram (EEG) helmet
• 4 ambient infrared (absence/presence) sensors
• 1 Raspberry Pi 3
• 1 PC with an external USB Bluetooth adapter
The data acquisition for this project consists of three steps: sensing, recollection, and storage.
1. Sensing: Each component starts sensing the actions with its sensors at the same time. The data sensed are: IMUs: accelerometer (X, Y and Z), ambient light (L), and angular velocity (X, Y and Z, in rad/s); EEG helmet: raw EEG signals; infrared sensors: absence/presence as a binary value.
2. Recollection: The recollection phase consists in gathering data from the IMUs and the EEG helmet through a Bluetooth connection. Data are converted to a JSON structure (Figure 1) to be sent to the cloud (Firebase). This process is carried out by a C# program using the SDKs of the IMUs and the EEG helmet, which provide full access to the sensor data. The infrared sensors are connected directly to the Raspberry Pi 3, on which a Python program reads the data and converts them to JSON in order to store them in the cloud.
3. Storage: Once the information has been collected and prepared as JSON packages, it is sent via a POST request to be stored in Firebase (a NoSQL database). To achieve this connection, a REST API platform was configured to store every POST request in the Firebase database as a new record.
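As a concrete illustration of steps 2 and 3, the following minimal Python sketch packages one sensor reading using the fields shown in Figure 1 (DATA, DATATYPE, SENSOR, TIME) and sends it to a Firebase-style REST endpoint via a POST request. The endpoint URL, sensor identifiers, and field values are illustrative assumptions, not the project's actual configuration.

import time
import requests  # third-party HTTP client (pip install requests)

# Hypothetical Firebase REST endpoint; the real project URL is not given here.
FIREBASE_URL = "https://example-project.firebaseio.com/har-up/records.json"

def build_record(sensor_id: str, data_type: str, values) -> dict:
    """Package one sensor reading using the JSON structure of Figure 1."""
    return {
        "DATA": values,          # e.g., [ax, ay, az] for one accelerometer sample
        "DATATYPE": data_type,   # e.g., "accelerometer", "infrared", "eeg"
        "SENSOR": sensor_id,     # e.g., "IMU_belt" (hypothetical identifier)
        "TIME": time.time(),     # acquisition timestamp
    }

def send_record(record: dict) -> bool:
    """POST the JSON package to the cloud store; returns True on success."""
    response = requests.post(FIREBASE_URL, json=record)
    return response.ok

if __name__ == "__main__":
    sample = build_record("IMU_belt", "accelerometer", [0.01, -0.98, 0.12])
    print("stored:", send_record(sample))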
The IMUs were positioned on the neck, belt, thigh, and wrist. These positions were defined after reviewing those most commonly used for fall detection according to the literature [3]. The acquisition system is shown in Figure 2.

Figure 1. JSON structure: DATA, DATATYPE, SENSOR, TIME.

Figure 2. Acquisition System.

3.2. Database Description


Four volunteers, two female and two male, aged 22–58 years, each performed three trials of the six activities of daily living (ADL) and the five types of fall shown in Table 1. The volunteers have different body builds and ages, as presented in Table 2. Although the simulations at this early stage of our work were performed by a small group of volunteers, we sought to include young and mature volunteers, two female and two male, with different body builds (Table 2). As falls are very rare events in real life in comparison to all the activities performed even in a year, six ADL were included to help classifiers discriminate falls from ADL.
The types of falls simulated by the volunteers were chosen after a review of the related works reported in Section 2, namely: fall using hands, fall forward on knees, fall backwards, fall sideward, and fall while sitting on an empty chair. These falls are particularly common among the elderly.

Table 1. Activities and falls included in the dataset.

Type   Activity                           Duration
ADL    Walking (W)                        60 s
ADL    Standing (ST)                      60 s
ADL    Sitting (SI)                       10 s
ADL    Laying (L)                         60 s
ADL    Pick up something (P)              10 s
ADL    Jumping (J)                        30 s
Fall   Fall use hands (FH)                10 s
Fall   Fall forward knees (FF)            10 s
Fall   Fall backwards (FB)                10 s
Fall   Fall sideward (FS)                 10 s
Fall   Fall sitting in empty chair (FE)   10 s

Table 2. Volunteers' body builds and ages.

Subject   Gender   Height (m)   Weight (kg)   Age (years)
1         Female   1.65         56            58
2         Female   1.70         82            51
3         Male     1.80         57            32
4         Male     1.72         75            22

Data are separated by subject, activity, and trial, and are delivered in CSV format. The dataset was collected in a laboratory, and it was decided not to control the environment completely, with the aim of simulating conditions as realistic as possible. The different sensor signals were synchronized for temporal alignment.
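A minimal sketch of how such per-subject, per-activity, per-trial CSV files might be loaded with pandas is shown below. The file naming convention and the added tag columns are illustrative assumptions; the actual layout should be checked against the dataset documentation.

import pandas as pd

# Hypothetical naming convention: one CSV file per subject/activity/trial.
def load_trial(subject: int, activity: str, trial: int) -> pd.DataFrame:
    path = f"dataset/subject{subject}_{activity}_trial{trial}.csv"
    df = pd.read_csv(path)
    # Tag the record so trials can later be concatenated into a single table.
    df["subject"] = subject
    df["activity"] = activity
    df["trial"] = trial
    return df

# Example: gather all three trials of activity "W" (walking) for subject 1.
walking = pd.concat(load_trial(1, "W", t) for t in (1, 2, 3))
print(walking.head())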

4. Experiments and Results


The intention of our dataset is to provide a diversity of sensor information to allow comparison between systems and algorithms. It also allows experimentation in which different combinations of sensors are taken into account, in order to determine whether considering inertial and context data together improves the performance of fall detection. In our experiments, the infrared sensors provide the context data.
To show an example of the experiments that can be done with our dataset, two types of experiments were designed. For the first series of experiments, feature datasets were prepared only with data extracted from the IMUs. The second series of experiments was done using the whole dataset, which includes the context and brain helmet information. Three different feature datasets were used for each series of experiments, corresponding to window sizes of 2, 3, and 5 s for feature extraction.

4.1. Feature Extraction


As mentioned above, in order to apply feature extraction, the time series from the sensors were split into windows of 2, 3, and 5 s for experimentation. Relevant temporal and frequency features were extracted from the original raw signals for each window size, generating three datasets. These features are shown in Table 3; a sketch of this feature extraction step is given after the table. The last three features are tags identifying the subject, the activity, and the trial number.

Table 3. Extracted features.

Temporal features (one per each of the 33 original signals):
[1:33]    mean
[34:66]   standard deviation
[67:99]   root mean square
[100:132] maximum
[133:165] minimum
[166:198] median
[199:231] skewness
[232:264] kurtosis
[265:297] quantile 1
[298:330] quantile 3

Frequency features:
[331:363] mean
[364:396] median
[397:429] energy

Tags:
[430] subject number
[431] activity number
[432] trial number
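The following Python sketch illustrates the kind of per-window temporal and frequency features listed in Table 3, computed for a single signal. It is a simplified illustration under assumed settings; the sampling rate, the window stride, and the exact frequency-domain definitions used here are assumptions rather than the paper's specification.

import numpy as np
from scipy import stats

def window_features(x: np.ndarray) -> np.ndarray:
    """Temporal and frequency features for one signal window (cf. Table 3)."""
    temporal = [
        np.mean(x), np.std(x), np.sqrt(np.mean(x ** 2)),  # mean, std, RMS
        np.max(x), np.min(x), np.median(x),
        stats.skew(x), stats.kurtosis(x),
        np.quantile(x, 0.25), np.quantile(x, 0.75),
    ]
    spectrum = np.abs(np.fft.rfft(x))                      # magnitude spectrum
    frequency = [np.mean(spectrum), np.median(spectrum), np.sum(spectrum ** 2)]
    return np.array(temporal + frequency)

# Example: split one signal into non-overlapping 2-second windows.
fs = 100                              # assumed sampling rate in Hz
window = 2 * fs                       # 2-second windows
signal = np.random.randn(10 * fs)     # placeholder signal; replace with real data
features = np.array([window_features(signal[i:i + window])
                     for i in range(0, len(signal) - window + 1, window)])
print(features.shape)                 # (number of windows, 13 features)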

4.2. Classification
The following classifiers were used in the training process: Linear Discriminant Analysis (LDA), CART Decision Trees (CART), Gaussian Naïve Bayes (NB), Support Vector Machines (SVM), Random Forest (RF), K-Nearest Neighbors (KNN), and Neural Networks (NN). A summary of the results, in terms of accuracy, of using each machine learning method in the training process is presented in Table 4; a minimal training sketch follows the table.

Table 4. Results (accuracy) from the training process.

            IMUs                        IMUs + Context
Method    2 s      3 s      5 s       2 s      3 s      5 s
LDA       0.5839   0.5098   0.2858    0.5991   0.4884   0.4160
CART      0.6206   0.5893   0.5691    0.6226   0.5877   0.5691
NB        0.1525   0.1818   0.4413    0.1598   0.2216   0.4719
SVM       0.6908   0.6625   0.6279    0.6918   0.6488   0.6457
RF        0.6907   0.6487   0.6252    0.6816   0.6443   0.6273
KNN       0.6795   0.6502   0.6303    0.6561   0.6396   0.6305
NN        0.6866   0.6626   0.6454    0.6758   0.6488   0.6403
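A minimal sketch of how these classifiers could be trained and compared on the extracted feature matrix with scikit-learn is shown below. The placeholder data, hyperparameters, and train/validation split are illustrative assumptions, not the settings used in the paper.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the real feature matrix (Table 3, columns
# [1:429]) and the activity labels (column 431).
X = np.random.randn(300, 429)
y = np.random.randint(0, 11, size=300)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "CART": DecisionTreeClassifier(),
    "NB": GaussianNB(),
    "SVM": SVC(),
    "RF": RandomForestClassifier(),
    "KNN": KNeighborsClassifier(),
    "NN": MLPClassifier(max_iter=1000),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(f"{name}: training accuracy = {clf.score(X_train, y_train):.4f}")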

Although the difference between using the IMUs alone and the IMUs with context sensors is quite small, it is interesting to observe that adding contextual information to this particular dataset improves the performance of human activity recognition, even without any enhancements to the dataset or to the feature extraction. The simulated conditions may be one factor behind the low scores obtained with simple machine learning methods.
The results of predictions on validation sets, using the classifiers with the best performance in training (i.e., SVM and NN), are presented in Figures 3–5. A comparison between both experiments, using just the IMU data and the IMUs plus context, for the same window size is shown in each figure.
Figure 3 shows the average precision, recall, and F1-score of predictions on validation sets created with two-second windowing. We can see a small improvement in the precision and F1-score when using the IMUs plus context data.
In Figure 4, on the contrary, we observe that the performance worsened slightly when adding context information to the IMU data for the three-second case. Nevertheless, in the results of the experiments with five-second windowing (Figure 5), a similar improvement in the performance of the IMUs plus context scenario can be observed.
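Continuing the training sketch given after Table 4, the averaged precision, recall, and F1-score reported in Figures 3–5 could be obtained from a classifier's validation predictions as follows; the macro averaging shown here is an assumption about how the per-class metrics were aggregated.

from sklearn.metrics import precision_recall_fscore_support

# y_val: true labels of the validation set; y_pred: predictions from a model
# trained as in the previous sketch (here, the SVM).
y_pred = classifiers["SVM"].predict(X_val)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_val, y_pred, average="macro", zero_division=0)
print(f"precision={precision:.2f}  recall={recall:.2f}  f1-score={f1:.2f}")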

Figure 3. Average results of prediction with two-second windowing. IMUs, 2 s, SVM: precision 0.62, recall 0.67, F1-score 0.63. IMUs + Context, 2 s, SVM: precision 0.68, recall 0.67, F1-score 0.64.



Figure 4. Average results of prediction with three-second windowing. IMUs, 3 s, NN: precision 0.67, recall 0.67, F1-score 0.64. IMUs + Context, 3 s, SVM: precision 0.59, recall 0.66, F1-score 0.62.

Further experimentation can be done by combining different sensor placements in order to find the best combination and/or placement for fall detection.

Figure 5. Average results of prediction with five-second windowing. IMUs, 5 s, NN: precision 0.65, recall 0.68, F1-score 0.66. IMUs + Context, 5 s, SVM: precision 0.66, recall 0.69, F1-score 0.67.

5. Conclusions and Future Work


In this work, we presented a publicly available multimodal dataset for fall detection. The group of volunteers includes young and mature volunteers, two females and two males, with different body builds. All data were captured with four IMUs, one electroencephalogram (EEG) helmet, and four infrared sensors. This dataset allows comparison for different purposes: fall detection system performance, new algorithms for fall detection, multimodal complementarity, and sensor placement.
As shown, the dataset is challenging, and thus we encourage the community to apply novel and robust machine learning methods to contextual human activity recognition.
In most cases, a slight improvement in performance was achieved when using the whole multimodal dataset, which includes the IMUs and the contextual data. We believe that with more
exhaustive tuning of the machine learning models, better results can be obtained. However, the main goal of our experimentation is to show an example of how the dataset can be used.
In order to improve the diversity of the multimodal dataset, new acquisition modalities will be added, namely cameras and microphones. Further experimentation must be done to verify the complementarity of the various types of data.

Funding: This research has been funded by Universidad Panamericana through the grant “Fomento a la
Investigación UP 2017”, under project code UP-CI-2017-ING-MX-02.

References
1. Gale, C.R.; Cooper, C.; Sayer, A. Prevalence and risk factors for falls in older men and women: The English
Longitudinal Study of Ageing. Age Ageing 2016, 45, 789–794.
2. Alshammari, S.A.; Alhassan, A.M.; Aldawsari, M.A.; Bazuhair, F.O.; Alotaibi, F.K.; Aldakhil, A.A.;
Abdulfattah, F.W. Falls among elderly and its relation with their health problems and surrounding
environmental factors in Riyadh. J. Fam. Commun. Med. 2018, 25, 29.
3. Igual, R.; Medrano, C.; Plaza, I. Challenges, issues and trends in fall detection systems. Biomed. Eng. Online
2013, 12, 66.
4. Noury, N.; Fleury, A.; Rumeau, P.; Bourke, A.K.; Laighin, G.O.; Rialle, V.; Lundy, J.E. Fall detection-principles
and methods. In Proceedings of the 29th Annual International Conference of the Engineering in Medicine
and Biology Society, Lyon, France, 22–26 August 2007; pp. 1663–1666.
5. Mubashir, M.; Shao, L.; Seed, L. A survey on fall detection: Principles and approaches. Neurocomputing
2013, 100, 144–152.
6. Lahat, D.; Adali, T.; Jutten, C. Multimodal data fusion: An overview of methods, challenges, and
prospects. Proc. IEEE 2015, 103, 1449–1477.
7. Koshmak, G.; Loutfi, A.; Linden, M. Challenges and issues in multisensor fusion approach for fall
detection. J. Sens. 2016, doi:10.1155/2016/6931789.
8. Igual, R.; Medrano, C.; Plaza, I. A comparison of public datasets for acceleration-based fall detection. Med.
Eng. Phys. 2015, 37, 870–878.
9. Casilari, E.; Santoyo-Ramón, J.A.; Cano-García, J. Analysis of public datasets for wearable fall detection
systems. Sensors 2017, 17, 1513.
10. Frank, K.; Nadales, V.; Robertson, M.J.; Pfeifer, T. Bayesian recognition of motion related activities with
inertial sensors. In Proceedings of the 12th ACM International Conference Adjunct Papers on Ubiquitous
Computing-Adjunct, Copenhagen, Denmark, 26–29 September 2010.
11. Vavoulas, G.; Pediaditis, M.; Chatzaki, C.; Spanakis, E.G.; Tsiknakis, M. The mobifall dataset: Fall
detection and classification with a smartphone. Int. J. Monit. Surveillance Technol. Res. 2014, 2, 44–56.
12. Medrano, C.; Igual, R.; Plaza, I.; Castro, M. Detecting falls as novelties in acceleration patterns acquired
with smartphones. PLoS ONE 2014, 9, e94811.
13. Vilarinho, T.; Farshchian, B.; Bajer, D.; Dahl, O.; Egge, I.; Hegdal, S.S.; Lønes, A.; Slettevold, J.N.;
Weggersen, S.M. A combined smartphone and smartwatch fall detection system. In Proceedings of the
2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing
and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and
Computing (CIT/IUCC/DASC/PICOM), Liverpool, UK, 26–28 October 2015; pp. 1443–1448.
14. Sucerquia, A.; López, J.D.; Vargas-Bonilla, J.F. SisFall: A fall and movement dataset. Sensors 2017, 17, 198.
15. Kepski, M.; Kwolek, B.; Austvoll, I. Fuzzy inference-based reliable fall detection using kinect and
accelerometer. In Artificial Intelligence and Soft Computing of Lecture Notes in Computer Science; Springer:
Berlin, Germany, 2012; pp. 266–273.
16. Kwolek, B.; Kepski, M. Improving fall detection by the use of depth sensor and accelerometer.
Neurocomputing 2015, 168, 637–645.
17. Ofli, F.; Chaudhry, R.; Kurillo, G.; Vidal, R.; Bajcsy, R. Berkeley MHAD: A comprehensive multimodal
human action database. In Proceedings of the 2013 IEEE Workshop on Applications of Computer Vision
(WACV), Tampa, FL, USA, 15–17 January 2013; pp. 53–60.
18. Khaleghi, B.; Khamis, A.; Karray, F.O.; Razavi, S.N. Multisensor data fusion: A review of the
state-of-the-art. Inf. Fusion 2013, 14, 28–44.

19. DLR dataset. Available online: www.dlr.de/kn/en/Portaldata/27/Resources/dokumente/04_abteilungen_fs/kooperative_systeme/high_precision_reference_data/Activity_DataSet.zip (accessed on 10 May 2018).
20. MobiFall dataset. Available online: http://www.bmi.teicrete.gr/index.php/research/mobifall (accessed on 10 January 2016).
21. EduQTech, tFall: EduQTech dataset. Published July 2013. Available online: http://eduqtech.unizar.es/fall-adl-data/ (accessed on 10 May 2018).
22. Sistemas Embebidos e Inteligencia Computacional, SISTEMIC: SisFall Dataset. Available online:
http://sistemic.udea.edu.co/investigacion/proyectos/english-falls/?lang=en (accessed on 10 May 2018).
23. Rimminen, H.; Lindström, J.; Linnavuo, M.; Sepponen, R. Detection of falls among the elderly by a floor
sensor using the electric near field. IEEE Trans. Inf. Technol. Biomed. 2010, 14, 1475–1476.
24. Tzeng, H.W.; Chen, M.Y.; Chen, J.Y. Design of fall detection system with floor pressure and infrared
image. In Proceedings of the 2010 International Conference on System Science and Engineering (ICSSE),
Taipei, Taiwan, 1–3 July 2010; pp. 131–135.
25. Mastorakis, G.; Makris, D. Fall detection system using Kinect’s infrared sensor. J. Real-Time Image Process.
2012, 1–12, doi:10.1007/s11554-012-0246-9.
26. URFD: University of Rzeszow Fall Detection Dataset. Available online: http://fenix.univ.rzeszow.pl/mkepski/ds/uf.html (accessed on 10 May 2018).
27. Teleimmersion Lab, University of California, Berkeley, 2013, Berkeley Multimodal Human Action
Database (MHAD). Available online: http://tele-immersion.citris-uc.org/berkeley_mhad (accessed on 10
May 2018).

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
