Sequential Neural Networks For Multi-Resident Activity Recognition
https://doi.org/10.1007/s10489-020-02134-z
Abstract
Advances in smart home technology and IoT devices have made it possible to monitor human activities in a non-intrusive
way. This data, in turn, enables us to predict the health status and energy consumption patterns of the residents of these smart
homes. Machine learning has been an excellent tool for predicting the activities of a single resident from raw sensor data.
However, multi-resident activity recognition is still a challenging task, as sensor values cannot be directly attributed to the
activity of a particular resident. In this paper, we apply deep learning algorithms to the real-world ARAS multi-resident
data set, which consists of data from two houses, each with two residents. We have used different variations of Recurrent
Neural Networks (RNN), Convolutional Neural Networks (CNN), and their combinations on the data set, keeping the labels
separate for both residents. We have evaluated the performance of the models based on several metrics.
Keywords Activities of Daily Life (ADL) · Multi resident · Neural networks · Sequential neural networks · Deep learning ·
Human activity recognition
and usage of temporal approaches for activity recognition, and sequential models such as HMM [5] and CRF [6] are also widely used in this area. Other works showcase the use of IDTs [7] and other machine learning models, like random forests [8], applied to the task of multi-resident activity recognition. In this paper, we examine the use of different kinds of neural networks for the task of activity recognition: Multi-Layer Perceptron (MLP) [9], Recurrent Neural Networks (RNN) [10], Convolutional Neural Networks (CNN) [11], and a combination of CNN and RNN. Later we investigate the impact of data set size on the accuracy and training time of each type of neural network for the task of human activity recognition. This paper focuses on treating the residents' labels as separate labels instead of combining them into one label. This approach scales better in real-world situations as the number of residents in the smart home increases or decreases: only the last layer of the neural network needs to be changed.

We have applied the mentioned approaches to the ARAS data set [12], which consists of data from two smart homes, denoted as A and B. The data is collected over 30 days with the help of 20 ambient sensors, and 27 different activities are performed by two residents. The results show that the combinations of RNN and CNN for one-dimensional data perform consistently, giving excellent results with less variance within results. This work helps in understanding the performance of different kinds of neural networks for the task of human activity recognition and suggests the best possible methods that can be used for making better recognition systems.

2 Related work

In recent years a lot of research work has been done in the field of human activity recognition. In this section, we first discuss the types of sensors used in collecting the data for human activity recognition, and then we discuss the different approaches used to predict activities from the data collected.

2.1 Types of sensors

Different types of sensors are used for the collection of data for activity recognition. These include wearable sensors [3, 13], which comprise sensors like accelerometers and gyroscopes [14], employed for collecting data related to the acceleration and rotation of the subject. The main issues with this type of sensor are that they are mostly limited to physical movements like running, walking, and playing some sport, and that subjects do not feel comfortable wearing them all the time. Another type of sensor is vision-based sensors [2, 15], which use images and videos captured from cameras for activity recognition. The data collected from these sensors can be used for determining the presence and orientation of the subject in the environment. These types of sensors are more effective in human activity recognition, but they are complicated and costly and also raise severe concerns about the subject's privacy. Other types of sensors include ambient sensors [4]; these are passive sensors embedded in the smart environment. They comprise many types of sensors, including photocells, contact sensors, pressure mats, and other sensors used for collecting various types of data related to the environment and how subjects interact with it [12, 16]. These sensors are less intrusive, as they do not infringe on the subject's privacy and do not cause any discomfort to the subject, since they are passively embedded in the smart environment.

2.2 Types of approaches

Many types of approaches are used for human activity recognition. These mainly include logic-based approaches and machine learning-based approaches. In this paper, we have used deep learning for human activity recognition, which comes under machine learning-based approaches.

Firstly, logic-based approaches involve logic-based context models, where we define the context using expressions and use rules to describe the relationships and constraints [17]. Shet et al. [18] proposed a framework integrating computer vision algorithms with logic programming to describe and identify video surveillance activities in a parking lot. Other human activity recognition systems were proposed by [19]; these systems are based on an Event Calculus logic programming implementation [20].

Secondly, approaches based on machine learning involve the use of several machine learning algorithms for human activity recognition. Earlier works show the use of algorithms like naive Bayes [21], kernel methods like SVM [22], decision trees like incremental decision trees [7], ensembles like random forests [8], and clustering algorithms [23]. However, there is much research involving the use of graphical models like Hidden Markov Models [5, 24, 25], Conditional Random Fields [6, 26, 27], Gaussian Mixture Models [28], and Dynamic Bayesian Networks [29]. Due to the complexity of the multi-resident problem, most of the previous works have used graphical models for multi-resident activity recognition. Models like the Hidden Markov Model are widely used for tasks where multiple activity labels can be combined into a single label to be used by the HMM [30].

Activity recognition is much more complicated in the multi-resident setting [31], where sensor states do not directly reflect the particular resident's activity. Much of the research discussed earlier focuses on the activity recognition of a single resident.
However, recent works show the use of ambient sensors for human activity recognition. These sensors cover a wide range of activities, like sleeping, watching television, cooking, or talking [12]. As these sensors collect only one value per time step for the whole environment, the data does not contain any distinction between the activities of the two residents. Tran et al. [30] showcased the use of different types of HMMs, like factorial HMMs, with different labelling strategies for the task of multi-resident activity recognition; their work also includes the use of CRFs for the same task, where they showcased the use of simple CRFs and factorial CRFs for modelling multi-resident activity recognition. Lastly, Al Machot et al. showcased how we can detect human activity from sensor data streams [32].

Deep learning has been used for activity recognition [33–36] in many ways, like using deep convolutional neural networks for activity recognition from RFID data [37]. Convolutional neural networks are also used to identify activities from 3D data collected using corresponding depth sensors [38]. Other works show the use of recurrent neural networks for the task of activity recognition [10]. Liciotti et al. [39] showcased the use of different types of LSTM for human activity recognition on the CASAS dataset [16]. In this paper, we have showcased the use of different types of neural networks for the task of activity recognition. We have used Multi-Layer Perceptrons (MLP), Recurrent Neural Networks (RNN), Convolutional Neural Networks (CNN), and a combination of RNNs and CNNs for human activity recognition. These neural networks perform well in the scenario of a multi-resident environment-based dataset.

3 Technical approach

Let us denote the activity of a resident at a particular time $t$ by $A^{(t)}$ and the sensor states at that time by $s^{(t)}$. Here $A^{(t)} = \{A^{(1,t)}, A^{(2,t)}\}$, i.e. the activities performed by the two residents, and $s^{(t)} = \{s^{(1,t)}, s^{(2,t)}, \ldots, s^{(n,t)}\}$, where $n$ is the total number of sensors. We have used $t_1{:}t_2$ to denote the time between the two time instants $t_1$ and $t_2$. Hence, $A^{(t_1:t_2)}$ means the activities between $t_1$ and $t_2$, and $s^{(t_1:t_2)}$ means the sensor states between $t_1$ and $t_2$. Finally, for prediction we use the sensor readings between two time instants, $s^{(t_1:t_2)}$, to predict the activities $A^{(t_1:t_2)}$.

We have used different types of neural networks to approach the task of multi-resident activity recognition, which we will elaborate one by one.

Figure 1 shows the workflow of the experiment. Firstly, we took the raw data and did feature extraction, then divided it into train and test sets of different sizes, and we trained our model on the train set. Lastly, we measured the performance of the model using various metrics on the test set.

Figure 2 shows the model selection process. Firstly, we divide the training dataset into validation data and train data; then we train our model on the train data, validate it on the validation data, and save the model for inference later.

3.1 Multi-Layer perceptron approach

The Multi-Layer perceptron is the simplest neural network, consisting of one input layer, any number of hidden layers, and an output layer. In this case, we have modified the multi-layer perceptron to have two output layers instead of one to accommodate two residents; by doing this, we can avoid the multi-label approach [40] or the combined label approach [41], which were used earlier for multi-resident activity modeling. We can use the multi-layer perceptron to model activities as

$X_1 = g_1(W^{(1)} \cdot s^{(t)} + b^{(1)})$  (1)

$X_2 = g_1(W^{(2)} \cdot X_1 + b^{(2)})$  (2)

$Y_a = g_2(W^{(3a)} \cdot X_2 + b^{(3a)})$  (3)

$Y_b = g_2(W^{(3b)} \cdot X_2 + b^{(3b)})$  (4)

In (1)–(4), the weight matrix of the $n$th layer is denoted by $W^{(n)}$, and the sensor states at time $t$ are denoted by $s^{(t)}$, which are given as input to the model; the bias is denoted by $b$, and $g_1$, $g_2$ specify the activation functions. $X_1$ denotes the output from the first layer, $X_2$ denotes the output from the second layer, $Y_a$ denotes the output for Resident A, and $Y_b$ denotes the output for Resident B.

We can make several modifications to a multi-layer perceptron network by increasing the number of layers and the number of neurons to tune its performance; here we have 3 layers with 128, 64, and 28 neurons. We have used a rather simple multi-layer perceptron with few layers to model activities, as increasing the number of layers resulted in more training time with no significant gains in accuracy.

One of the benefits of using this network is that it can be trained in very little time compared to the other networks, but this comes at the cost of accuracy, as it captures neither sequential (i.e. previous) information nor spatial information. That is why it performs worse on the task of activity recognition.

Figure 3 shows the model architecture for the multi-layer perceptron network. We have used two hidden layers, which contain 128 and 64 neurons, and we have two output layers for the two residents, each having 28 neurons representing each activity. We have used the standard cross entropy loss to train the neural network.
3.2 Convolutional neural network based approach

Convolutional Neural Networks (CNN) [42] are generally used to map image data to an output variable. The benefit of using a CNN is its ability to make use of spatial information. It directly takes a two-dimension vector as input and contains convolutional layers as hidden layers. We have used two types of convolutional neural networks to model activities as feature maps:

$X_1 = g_1(W^{(1)} * s^{(t)} + b^{(1)})$  (5)

$X_2 = g_1(W^{(2)} * X_1 + b^{(2)})$  (6)

$X_3 = g_1(W^{(3)} \cdot X_2 + b^{(3)})$  (7)

$Y_a = g_2(W^{(4a)} \cdot X_3 + b^{(4a)})$  (8)

$Y_b = g_2(W^{(4b)} \cdot X_3 + b^{(4b)})$  (9)

Here, in (5) and (6), the convolution operation is performed on the input to capture the spatial information; this generates the output for the next layer. Similar to the previous model, the weight matrix of the $n$th layer is denoted by $W^{(n)}$, and the sensor states at time $t$ are denoted by $s^{(t)}$, which are given as input to the model; the bias is denoted by $b$, and $g_1$, $g_2$ specify the activation functions. $X_n$ denotes the output from the $n$th layer, $Y_a$ denotes the output for Resident A, and $Y_b$ denotes the output for Resident B. The remaining equations (7)–(9) are similar to the previous model.

Figure 4 shows the model architecture for the Convolutional Neural Network for one dimension. We have used two one-dimension convolutional layers, giving out 40 and 80 channels and using a kernel of size two, followed by a fully connected layer having 128 neurons; we also used ReLU activation in all the layers. We have two output layers for the two residents, each having 28 neurons representing each activity, followed by a softmax activation function.
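A minimal sketch of this one-dimension variant in PyTorch (channel counts and kernel size follow the Figure 4 description; the flattened size and all names are our own assumptions):

```python
import torch
import torch.nn as nn

class TwoResidentCNN1D(nn.Module):
    """1D conv trunk (Eqs. 5-7) with one output head per resident (Eqs. 8-9)."""
    def __init__(self, n_sensors=20, n_activities=28):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 40, kernel_size=2), nn.ReLU(),   # 40 output channels
            nn.Conv1d(40, 80, kernel_size=2), nn.ReLU(),  # 80 output channels
            nn.Flatten(),
        )
        flat = 80 * (n_sensors - 2)   # two k=2 convs shrink the length by 2
        self.fc = nn.Sequential(nn.Linear(flat, 128), nn.ReLU())
        self.head_a = nn.Linear(128, n_activities)
        self.head_b = nn.Linear(128, n_activities)

    def forward(self, s):                        # s: (batch, n_sensors)
        x = self.fc(self.conv(s.unsqueeze(1)))   # add a channel dimension
        return self.head_a(x), self.head_b(x)

logits_a, logits_b = TwoResidentCNN1D()(torch.rand(4, 20).round())
```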
Figure 5 shows the mapping of one-dimension sensor states to two-dimension feature maps.

Figure 6 shows the model architecture for the Convolutional Neural Network for two dimension. We have used two two-dimension convolutional layers, giving out 6 and 12 channels and using a kernel of size two, followed by a fully connected layer having 64 neurons; we also used ReLU activation. We have two output layers for the two residents, each having 28 neurons representing each activity. The input to this network is in the form of a feature map, as shown in Fig. 5.

3.3 Recurrent neural networks based approach

Recurrent Neural Networks (RNN) [43, 44] use past information to determine the output; this is because in an RNN the current cell state is influenced by the previous cell state. The benefit of using an RNN is that it can make use of past information. In this paper, we have used three different
types of RNN architectures: the Elman RNN, Gated Recurrent Units (GRU), and Long Short-Term Memory networks (LSTM). The Elman RNN is the most straightforward RNN architecture: the output of the current state is generated by combining the hidden value from the previous state with the input to the current state, finally passing the result through a tanh activation function. These kinds of RNNs are unable to learn long-term dependencies and suffer from vanishing gradient issues. In contrast, in GRUs we have additional update and reset gates, which help to tackle the vanishing gradient problem. In a GRU, the update gate is utilized to determine the amount of knowledge from the previous hidden state to be passed on, and the reset gate is utilized to determine the amount of prior knowledge to forget. Lastly, we have the LSTM, in which we have additional forget, input, and output gates, which make the LSTM capable of learning long-term dependencies. In an LSTM, the forget gate is utilized to determine which information to reject from the previous hidden state, the input gate is utilized to determine which information will be updated, and the output gate is utilized to determine which part of the information will go to the next hidden state. The candidate cell state, cell state, and output gate of the LSTM are given by

$\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, s_t] + b_C)$  (15)

$C_t = f_t \circ C_{t-1} + i_t \circ \tilde{C}_t$  (16)

$o_t = \sigma(W_o \cdot [h_{t-1}, s_t] + b_o)$  (17)

Similar to the MLP and CNN, we have multiple output layers for each type of RNN.

Figure 7 shows the model architecture for the different types of Recurrent Neural Networks. In the first layer, we have 256 recurrent units, followed by a fully connected layer having 128 neurons, and we have two output layers for the two residents, each having 28 neurons representing each activity.
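A minimal sketch of this recurrent variant in PyTorch (the 256 recurrent units and 128-neuron fully connected layer follow the Figure 7 description; batch-first sequences and reading out the final time step are our own assumptions):

```python
import torch
import torch.nn as nn

class TwoResidentRNN(nn.Module):
    """Recurrent trunk (LSTM, GRU, or Elman RNN) with one head per resident."""
    def __init__(self, n_sensors=20, n_activities=28, cell="lstm"):
        super().__init__()
        rnn_cls = {"lstm": nn.LSTM, "gru": nn.GRU, "rnn": nn.RNN}[cell]
        self.rnn = rnn_cls(n_sensors, 256, batch_first=True)  # 256 recurrent units
        self.fc = nn.Sequential(nn.Linear(256, 128), nn.ReLU())
        self.head_a = nn.Linear(128, n_activities)
        self.head_b = nn.Linear(128, n_activities)

    def forward(self, s):                 # s: (batch, time, n_sensors)
        out, _ = self.rnn(s)
        x = self.fc(out[:, -1])           # assumed: read out the last time step
        return self.head_a(x), self.head_b(x)

# A window of sensor readings s_(t1:t2) is fed in as one sequence.
logits_a, logits_b = TwoResidentRNN(cell="gru")(torch.rand(4, 30, 20).round())
```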
Equation (19) is called the update gate equation; it determines how much past information needs to be passed along to the future. The update coefficient $z_t$ is determined with the help of the current input $s_t$ and the previous hidden state $h_{t-1}$. Equation (20) is called the reset gate equation, which is used to determine how much past information to forget. The forget coefficient $r_t$ is calculated with the help of the current input and the previous hidden state. Lastly, (21)–(22) are used to determine the hidden state $h_t$ and resolve the vanishing gradient problem.

These networks work very well on the dataset, with the GRU beating all of the previous networks in terms of accuracy, but they fall behind the combination of CNN and RNN, as they do not make use of the spatial information, which can increase accuracy significantly, as we have seen with the CNNs.

3.4 Combination of CNN and RNN

These networks are a combination of convolutional neural networks and recurrent neural networks [45]. They can take advantage of both the RNN and the CNN, as they can use the spatial information as well as past information for determining the output. For the sequential part of these networks, we have used the LSTM, as it worked best during our testing.

These networks gave us the best accuracy results on both datasets. We have used two versions of these networks: one utilizes a CNN that works with one-dimension data, and the other works with two-dimension data.

Figure 8 shows the model architecture for the combination of the CNN for one dimension and LSTM networks. It has the same layers as the CNN for one dimension (Fig. 4), followed by a fully connected layer with 64 neurons, followed by a layer having 256 LSTM units, which is followed by a fully connected layer having 128 neurons. We also used ReLU activation, and the network has two output layers for the two residents, the same as the earlier networks.

Figure 9 shows the model architecture for the combination of the CNN for two dimension (CNN2D) and LSTM networks. It has the same layers as the CNN for two dimension (Fig. 6), followed by a fully connected layer with 64 neurons, followed by a layer having 256 LSTM units, which is followed by a fully connected layer having 128 neurons. We also used ReLU activation, and the network has two output layers for the two residents, the same as the earlier networks.
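A minimal sketch of the one-dimension CNN+LSTM combination (Fig. 8). Layer sizes follow the text above; applying the convolutional trunk independently at each time step before the LSTM, and the final-step readout, are our own assumptions:

```python
import torch
import torch.nn as nn

class TwoResidentCNN1DLSTM(nn.Module):
    """Per-time-step 1D conv features fed to an LSTM, one head per resident."""
    def __init__(self, n_sensors=20, n_activities=28):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 40, kernel_size=2), nn.ReLU(),
            nn.Conv1d(40, 80, kernel_size=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(80 * (n_sensors - 2), 64), nn.ReLU(),  # FC with 64 neurons
        )
        self.lstm = nn.LSTM(64, 256, batch_first=True)       # 256 LSTM units
        self.fc = nn.Sequential(nn.Linear(256, 128), nn.ReLU())
        self.head_a = nn.Linear(128, n_activities)
        self.head_b = nn.Linear(128, n_activities)

    def forward(self, s):                        # s: (batch, time, n_sensors)
        b, t, n = s.shape
        feats = self.conv(s.reshape(b * t, 1, n)).reshape(b, t, -1)
        out, _ = self.lstm(feats)                # spatial features over time
        x = self.fc(out[:, -1])
        return self.head_a(x), self.head_b(x)

logits_a, logits_b = TwoResidentCNN1DLSTM()(torch.rand(4, 30, 20).round())
```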
Fig. 8 Model architecture for combination of CNN for one dimension and LSTM networks
We conducted our analysis on the ARAS multi-resident dataset [12]. The ARAS dataset was collected in two houses, House A and House B, over 30 days. In these homes, 20 different sensors were placed at different locations. The residents living in House A are two males with an average age of 25, whereas the residents living in House B are a married couple with an average age of 34; in both houses, each resident is asked to perform 27 different activities. The ARAS dataset features are binary sensor readings, denoted by 1 when the sensor is activated and 0 when the sensor is deactivated. Each time stamp has two outputs, i.e. the activities performed by Resident A and Resident B. In this paper, we have modeled each output separately, as discussed in the previous sections.

5.2 True positive

A true positive occurs when the model correctly predicts the positive class.

5.3 True negative

A true negative occurs when the model correctly predicts the negative class.

5.4 False positive

A false positive occurs when the model incorrectly predicts the positive class.

5.5 False negative

A false negative occurs when the model incorrectly predicts the negative class.

5.6 Precision

Precision is the fraction of positive predictions that are correct, and recall is the fraction of actual positives that are recovered; the F-Score combines the two:

$\text{F-Score} = \frac{2 \cdot (\text{Precision} \cdot \text{Recall})}{\text{Precision} + \text{Recall}}$  (26)
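For concreteness, a small sketch (our own, not from the paper) computing per-class precision, recall, and the F-Score of Eq. (26) from predicted and true activity labels:

```python
from collections import Counter

def per_class_metrics(y_true, y_pred):
    """Precision, recall, and F-Score per activity class from TP/FP/FN counts."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1        # correctly predicted the positive class
        else:
            fp[p] += 1        # predicted class p where it was not the true class
            fn[t] += 1        # true class t was missed
    metrics = {}
    for c in set(y_true) | set(y_pred):
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        f = 2 * prec * rec / (prec + rec) if prec + rec else 0.0  # Eq. (26)
        metrics[c] = (prec, rec, f)
    return metrics

print(per_class_metrics([1, 1, 2, 3], [1, 2, 2, 3]))
```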
Fig. 9 Model architecture for combination of CNN for two dimension and LSTM networks
Table 2 Accuracy and time taken for 30 days

Model     Accuracy of Res A   Accuracy of Res B   Average   Time Taken (s)
MLP       64.64               75.20               69.92     3163.4
LSTM      77.17               91.43               84.3      11973.6
GRU       81.83               90.13               85.98     11286
RNN       74.25               90.27               82.26     9141.2
CNN2D     72.07               88.12               80.095    3376.9
CNN2DS    76.19               90.38               83.285    12339.2
CNN1D     64.66               75.27               69.965    5636.4
CNN1DS    81.81               90.82               86.315    14043.7

Table 5 Accuracy and time taken for 10 days

Model     Accuracy of Res A   Accuracy of Res B   Average   Time Taken (s)
MLP       91.11               91.99               91.55     1048.2
LSTM      95.93               86.03               90.98     4043.7
GRU       95.93               85.39               90.66     3492.7
RNN       92.66               82.15               87.405    2661.8
CNN2D     94.51               83.39               88.95     1129
CNN2DS    91.79               90.21               91        4135.1
CNN1D     94.49               83.14               88.815    1726
CNN1DS    96.96               86.23               91.595    4592.4
6.2.1 10 Days

(Table: per-activity precision (PR), recall (RE), and F-score (FS) for each activity number.)

6.2.2 30 Days

(Table: per-activity precision (PR), recall (RE), and F-score (FS) for each activity number.)
13. Jordao A, Nazare Jr AC, Sena J, Schwartz WR (2018) Human activity recognition based on wearable sensor data: A standardization of the state-of-the-art. arXiv:1806.05226
14. Zubair M, Song K, Yoon C (2016) Human activity recognition using wearable accelerometer sensors. IEEE, pp 1–5
15. Zhang S, Wei Z, Nie J, Huang L, Wang S, Li Z (2017) A review on human activity recognition using vision-based method. Journal of Healthcare Engineering 2017
16. Cook DJ, Crandall AS, Thomas BL, Krishnan NC (2012) Casas: A smart home in a box. Computer 46(7):62–69
17. Ye J, Stevenson G, Dobson S (2015) Kcar: A knowledge-driven approach for concurrent activity recognition. Pervasive Mob Comput 19:47–70
18. Shet VD, Harwood D, Davis LS (2005) Vidmap: video monitoring of activity with prolog. In: IEEE Conference on Advanced Video and Signal Based Surveillance. IEEE, pp 224–229
19. Artikis A, Sergot M, Paliouras G (2013) A logic-based approach to activity recognition. In: Human Behavior Recognition Technologies: Intelligent Applications for Monitoring and Security. IGI Global, pp 1–13
20. Kowalski R, Sergot M (1989) A logic-based calculus of events. In: Foundations of knowledge base management. Springer, pp 23–55
21. Cook DJ (2010) Learning setting-generalized activity models for smart spaces. IEEE Intell Syst 2010(99):1
22. Cook DJ, Krishnan NC, Rashidi P (2013) Activity discovery and activity recognition: A new partnership. IEEE Trans Cybern 43(3):820–828
23. Fahad LG, Tahir SF, Rajarajan M (2014) Activity recognition in smart homes using clustering based classification. In: 2014 22nd International Conference on Pattern Recognition. IEEE, pp 1348–1353
24. Tran S, Zhang Q, Karunanithi M (2009) Mixed-dependency models for multi-resident activity recognition in smart-homes
25. Chen R, Tong Y (2014) A two-stage method for solving multi-resident activity recognition in smart environments. Entropy 16(4):2184–2203
26. Nazerfard E, Das B, Holder LB, Cook DJ (2010) Conditional random fields for activity recognition in smart environments. In: Proceedings of the 1st ACM International Health Informatics Symposium. ACM, pp 282–286
27. Hsu K-C, Chiang Y-T, Lin G-Y, Lu C-H, Hsu JY-J, Fu L-C (2010) Strategies for inference mechanism of conditional random fields for multiple-resident activity recognition in a smart home. In: International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems. Springer, pp 417–426
28. Zhuang X, Huang J, Potamianos G, Hasegawa-Johnson M (2009) Acoustic fall detection using gaussian mixture models and gmm supervectors. In: 2009 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, pp 69–72
29. Ribaric S, Hrkac T (2012) A model of fuzzy spatio-temporal knowledge representation and reasoning based on high-level petri nets. Inf Syst 37(3):238–256
30. Tran SN, Nguyen D, Ngo T-S, Vu X-S, Hoang L, Zhang Q, Karunanithi M (2019) On multi-resident activity recognition in ambient smart-homes. Artif Intell Rev:1–17
31. Tran SN, Zhang Q (2020) Towards multi-resident activity monitoring with smarter safer home platform. In: Smart Assisted Living: Toward An Open Smart-Home Infrastructure. Springer International Publishing, Cham, pp 249–267
32. Al Machot F, Mosa A, Ali M, Kyamakya K (2017) Activity recognition in sensor data streams for active and assisted living environments. IEEE Trans Circ Syst Video Technol. https://doi.org/10.1109/TCSVT.2017.2764868
33. Hassan MM, Uddin MZ, Mohamed A, Almogren A (2018) A robust human activity recognition system using smartphone sensors and deep learning. Futur Gener Comput Syst 81:307–313
34. Wang J, Chen Y, Hao S, Peng X, Hu L (2019) Deep learning for sensor-based activity recognition: A survey. Pattern Recogn Lett 119:3–11
35. Phyo CN, Zin TT, Tin P (2019) Deep learning for recognizing human activities using motions of skeletal joints. IEEE Trans Consum Electron 65(2):243–252. https://doi.org/10.1109/TCE.2019.2908986
36. Baccouche M, Mamalet F, Wolf C, Garcia C, Baskurt A (2011) Sequential deep learning for human action recognition. In: Salah AA, Lepri B (eds) Human Behavior Understanding. Springer, Berlin, pp 29–39
37. Li X, Zhang Y, Marsic I, Sarcevic A, Burd RS (2016) Deep learning for rfid-based activity recognition. In: Proceedings of the 14th ACM Conference on Embedded Network Sensor Systems CD-ROM. ACM, pp 164–175
38. Wang K, Wang X, Lin L, Wang M, Zuo W (2014) 3d human activity recognition with reconfigurable convolutional neural networks. In: Proceedings of the 22nd ACM international conference on Multimedia. ACM, pp 97–106
39. Liciotti D, Bernardini M, Romeo L, Frontoni E (2019) A sequential deep learning application for recognising human activities in smart homes. Neurocomputing
40. Mohamed R (2017) Multi label classification on multi resident in smart home using classifier chains. Adv Sci Lett 4:400–407
41. Mohamed R, Perumal T, Sulaiman M, Mustapha N (2017) Multi-resident activity recognition using label combination approach in smart home environment. In: 2017 IEEE International Symposium on Consumer Electronics (ISCE). IEEE, pp 69–71
42. LeCun Y, Bottou L, Bengio Y, Haffner P et al (1998) Gradient-based learning applied to document recognition. Proc IEEE 86(11):2278–2324
43. Sherstinsky A (2018) Fundamentals of recurrent neural network (rnn) and long short-term memory (lstm) network. arXiv:1808.03314
44. Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735–1780
45. Donahue J, Anne Hendricks L, Guadarrama S, Rohrbach M, Venugopalan S, Saenko K, Darrell T (2015) Long-term recurrent convolutional networks for visual recognition and description. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2625–2634

Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.