2013 Basia
Maria Basia
Project report submitted in part fulfilment of the requirements for the degree
of Master of Science (Human-Computer Interaction with Ergonomics) in the
Faculty of Brain Sciences, University College London, 2014
ACKNOWLEDGMENTS
Great thanks go to my fellow students for being supportive throughout this
intensive academic year. I am earnestly thankful to my family, and especially
my parents, for always supporting and encouraging me. Without their support I
would not have been able to attend this MSc Programme.
Thanks to Stelios for always being there for me.
ABSTRACT
The study’s findings suggest that people felt lighter when the high frequency
components of their footstep sounds were selectively amplified (in the 1-4 kHz
range) and heavier when the low frequency components were selectively
amplified (in the 63-250 Hz range). Their motor behavior (e.g. acceleration
when lifting the foot off the ground) and affective responses (valence,
arousal, and dominance) were also altered. A series of recommendations for the
design of such applications is also provided.
These findings, together with the widespread use of wearable devices and the
integration into smartphones of sensors that record body activity, suggest
excellent possibilities for the use of manipulated footstep sounds in the
design of technology for fitness and rehabilitation, virtual reality and games.
Contents
1 INTRODUCTION ....................................................................................................... 7
2 LITERATURE REVIEW, RESEARCH QUESTION AND HYPOTHESES.............................. 8
2.1 Literature Review ............................................................................................. 8
Human Gait .............................................................................................................. 8
Sound as a Source of Information: The Perceptual Attributes of Sound.................. 9
Sound and Perception of Body .............................................................................. 12
Sound and Motor Behavior.................................................................................... 14
Sound and Emotions .............................................................................................. 16
Implications for HCI ............................................................................................... 17
2.2 Research Question and Hypotheses .............................................................. 19
3 PROTOTYPE AND METHODS .................................................................................. 23
3.1 Prototype ....................................................................................................... 23
Prototype and System Description ........................................................................ 23
Sensor Setup .......................................................................................................... 26
Exploration ............................................................................................................ 29
3.2 Methods ........................................................................................................ 31
Participants ............................................................................................................ 31
Materials and Apparatus ....................................................................................... 31
Design .................................................................................................................... 39
Procedure .............................................................................................................. 40
3.3 Data Extraction .............................................................................................. 42
Behavioral and Physiological Data ......................................................................... 42
4 DATA ANALYSIS...................................................................................................... 48
4.1 Questionnaires............................................................................................... 50
Perceived body weight as measured by the 3D Body Visualization Application .... 50
Task Experience Questions .................................................................................... 51
Self-Assessment Manikin (SAM) ............................................................................ 58
Spanner Questionnaire .......................................................................................... 61
Self-Efficacy Questionnaire .................................................................................... 62
4.2 Behavioral and Physiological Measures ......................................................... 64
Pressure sensors .................................................................................................... 64
Accelerometer ....................................................................................................... 68
Galvanic Skin Response – GSR ............................................................................... 70
5 DISCUSSION, FUTURE WORK AND LIMITATIONS ................................................... 74
5.1 Results Discussion .......................................................................................... 74
5.2 Applications in HCI and Design recommendations ........................................ 83
Applications in HCI ................................................................................................. 83
Design recommendations ...................................................................................... 83
5.3 Limitations and Future Research ................................................................... 87
6 Conclusion ............................................................................................................. 89
REFERENCES .................................................................................................................. 90
APPENDICES................................................................................................................... 95
1 INTRODUCTION
In this thesis, Chapter 2 reviews the literature on the basics of the gait
mechanism, the perceptual attributes of sound, and the effects of sound on
body perception, motor behaviour and emotions; implications for HCI are also
presented, and the research question and hypotheses are formulated in the
light of the literature review findings. Chapter 3 presents the prototype and
the methodology applied in the experiment. Chapter 4 presents the study
results. Chapter 5 discusses the results and presents the design
recommendations, implications for HCI, limitations and future work. Finally,
Chapter 6 concludes the study.
2 LITERATURE REVIEW, RESEARCH QUESTION AND
HYPOTHESES
Walking and sound are fundamental to our everyday lives. The present study
aims to investigate the implications of footstep auditory cues for the
perception of our own body weight, walking behavior and emotions.
Consequently, in this chapter we review the literature regarding the basics of
the gait mechanism, the nature of the auditory cues elicited by the contact of
the foot with the ground, and the effects of acoustic stimuli on areas such as
perception, gait behavior and human affective states.
Human Gait
Human gait is a periodic movement of each of the lower limbs from one position
of support to the next (Vaughan et al., 1992; Perry, 1992). The time interval
between two successive contacts of the same foot with the ground defines the
gait cycle (Cunado et al., 2003). The gait cycle is divided into two phases, the
Stance and the Swing phase. According to Vaughan et al. (1992), the Stance
phase begins with the “heel strike” of one of the feet and is followed by
“foot flat”, during which the plantar surface of the foot is in contact with
the ground; “midstance”, when the contralateral limb passes the leg that is on
the ground; “heel-off”, when the heel leaves the ground; and “toe-off”, which
marks the end of the Stance phase and during which the foot loses contact with
the ground. The Swing phase comprises three events: “acceleration”, when the
flexor muscles are activated to accelerate the foot forward; “midswing”, which
occurs simultaneously with the “midstance” of the contralateral limb; and
“deceleration”, when the foot decelerates to prepare for the next heel strike.
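The gait events described above can, for illustration, be segmented
automatically from a foot-pressure signal. The sketch below is an assumption
for illustration only, not the analysis pipeline used in this study: it takes
a normalized pressure trace and a hypothetical threshold, and treats a rising
threshold crossing as a heel strike and a falling crossing as a toe-off.

```python
# Hypothetical sketch: detecting heel-strike and toe-off events from a
# sampled vertical foot-pressure signal by simple thresholding.
# The threshold value and the test signal are illustrative assumptions.

def detect_gait_events(pressure, threshold=0.2):
    """Return (heel_strikes, toe_offs) as lists of sample indices.

    A heel strike is a rising crossing of the threshold (foot makes
    contact); a toe-off is a falling crossing (foot leaves the ground).
    """
    heel_strikes, toe_offs = [], []
    on_ground = pressure[0] > threshold
    for i in range(1, len(pressure)):
        if not on_ground and pressure[i] > threshold:
            heel_strikes.append(i)      # contact begins: Stance phase
            on_ground = True
        elif on_ground and pressure[i] <= threshold:
            toe_offs.append(i)          # contact ends: Swing phase
            on_ground = False
    return heel_strikes, toe_offs

# One simulated step: swing, stance, swing
signal = [0.0, 0.1, 0.5, 0.9, 0.8, 0.4, 0.1, 0.0]
hs, to = detect_gait_events(signal)
```

The interval between two successive heel strikes of the same foot would then
give the gait cycle duration defined above.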
The force applied by the lower limb on the ground is a net force F which can
be represented as a spectrum with frequency variations (Visell et al., 2009).
While the low frequency components of this spectrum are related to the Ground
Reaction Force (GRF) and depend mostly on the walker’s weight and walking
rate, the high frequency components are linked to factors such as the impact
between the foot and the ground, sliding friction and the varying degrees of
contact with the ground, and depend on the ground and shoe materials (Ekimov
and Sabatier, 2006). These factors are responsible for the aural cues elicited
during walking.
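The low- and high-frequency contributions described above can be compared
numerically by measuring the spectral energy of a footstep signal in each
band. The following sketch uses the band edges discussed in this thesis, but
the sampling rate and the synthetic test signal are illustrative assumptions.

```python
# Illustrative sketch: comparing the energy in a low-frequency band
# (linked to the ground reaction force) against a high-frequency band
# (linked to foot-ground impact and friction) of a footstep signal.
import numpy as np

def band_energy(signal, fs, f_lo, f_hi):
    """Energy of `signal` within [f_lo, f_hi] Hz, computed via the FFT."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return float(np.sum(np.abs(spectrum[mask]) ** 2))

fs = 8000                     # Hz, assumed sampling rate
t = np.arange(fs) / fs
# Synthetic "heavy" footstep: strong 100 Hz component, weak 2 kHz component
x = 1.0 * np.sin(2 * np.pi * 100 * t) + 0.1 * np.sin(2 * np.pi * 2000 * t)
low = band_energy(x, fs, 63, 250)       # GRF-related band
high = band_energy(x, fs, 1000, 4000)   # impact/friction-related band
```

For this synthetic signal the low band dominates, as would be expected for a
heavy, slow step whose energy is concentrated in the GRF-related range.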
The variety of environmental sounds is remarkably wide, and it is difficult to
generalize empirical results to entire sound categories. Thus, a number of
studies have focused on investigating the relationship between the acoustic
properties of specific types of auditory cues and human acoustic perception:
the hardness of percussion mallets (Freed, 1993), the classification of
bouncing and breaking bottle events (Warren & Verbrugge, 1984), the
configuration of clapping hands (Repp, 1987), and surface texture (Lederman,
1979; Lederman et al., 2002).
Similarly, Zampini & Spence (2003) showed that the perceived crispness and
freshness of crisps being eaten increased when either the overall sound
generated when biting the crisps was amplified or the high frequencies (in the
2-20 kHz range) were boosted.
The above-mentioned studies indicate the potential effects of manipulating the
frequency components of sound on human perception, providing evidence that
manipulating the frequency spectrum of footstep sounds may lead to analogous
effects.
Giordano and Bresin (2006) demonstrated that participants showed a high
ability to recognize the gender, emotions, weight and shoe properties (shoe
size and sole abrasiveness) of a walker. However, greater accuracy was
reported in the identification of gender, weight and shoe characteristics than
in emotions. Consistent with Li et al. (1991), the results demonstrated that
gender classification is based on spectral peaks and high frequency
components.
In more recent research, Pastore and colleagues (2008) investigated the
ability of participants to identify the posture (upright or stooped) of a
walker based on the auditory stimuli produced by the walker’s footsteps. While
their findings reveal that both upright and stooped walking elicit complex and
variable acoustic characteristics, they do provide an indication that pace and
spectral amplitude (in the range of 100-500 Hz) may be associated with the
listeners’ judgments regarding posture.
While the above-mentioned studies give valuable insights into the
characteristics of a walker’s footstep sounds and the way they are perceived
by humans, they do not investigate the provision of real-time self-generated
footstep sounds and their possible effects on the perception of one’s own
body, behavior and emotions.
Sound and Perception of Body
Auditory feedback plays a substantial role in the way we perceive our body
(Kitagawa and Spence, 2006). Although many studies demonstrate that body
representations are continuously updated and affected by sensory inputs such
as vision, touch and proprioception (Botvinick and Cohen, 1998; De Vignemont
et al., 2005; Haggard et al., 2007), the possible effects of self-generated
sounds on the way humans perceive their bodies have only recently been
studied (Tajadura-Jimenez et al., 2012; Senna et al., 2014).
As mentioned above, Jousmäki and Hari (1998) demonstrated that sound can alter
the perception of skin properties. Building on studies like this one, Senna et
al. (2014) showed that acoustic stimuli (the sound of a hammer hitting marble)
provided in synchrony with gentle hits of a hammer on participants’ arms
altered the perceived material of the arm, with participants reporting that
their arm felt stiffer and heavier, as if it were made of marble.
Building on this, Furfaro et al., (2013) found that accompanying the tapping of a
real or virtual surface with real-time tapping sounds which correspond to the
sound elicited by different degrees of tapping strength, influences perceived
body strength, emotions, tapping behaviour and the perception of the surface
properties (hardness).
In a more recent study, Tonetto et al. (2014) demonstrated that the provision
of different combinations of sounds produced by a variety of sole (leather,
propylene) and ground (carpet, ceramic) materials affects both emotions and
bodily sensations, such as feeling “at ease”, “relaxed”, “comfortable”,
“resentful” and “content”.
While the studies described above suggest that sound affects the perception of
our own body, the effects of self-produced walking sounds on perceived body
properties such as weight have not yet been investigated.
Sound and Motor Behavior
Styns et al. (2007) focused their research on exploring how music affects
human gait. The participants were provided with either musical or metronome
feedback, both varying in tempo, and were asked to adjust their gait tempo to
the stimulus tempo. The results revealed that participants adopted a faster
walking pace with the musical stimuli than with the metronome stimuli. This
was attributed to the fact that the information conveyed by music, such as the
sound of specific musical instruments, increased the walker’s energy.
Additionally, Moens et al.’s (2010) study provided evidence that when users’
walking rhythm is close to the music rhythm, they synchronize their gait
pattern to the music’s tempo by taking one step per beat. In a more recent
study, Leman et al. (2013) found that participants who had already
synchronized their gait tempo to musical feedback increased their speed when
provided with more activating music and decreased it with calm music. These
results suggest a relationship between music type and the vigor of the gait.
While the main focus of the studies described above is the effect of musical
stimuli on motor behavior, past studies have also sought to investigate the
relationship between either performance-related or non-performance-related
auditory feedback and gait behavior.
More specifically, Menzer et al. (2010) found that introducing delays of
different lengths, ranging from 16 ms to 1800 ms, in the provision of footstep
auditory feedback changed both participants’ perception of the exact timing of
each footstep and their speed, without them being aware of this change. This
research provides evidence that the provision of performance-related auditory
cues can affect both motor behavior and the perception of body movement.
environments. The research results showed that the provision of both self-
induced walking sounds which were synthesized in real-time from participants’
walking pattern and of 3D environmental sounds improved the participants’
motion by enhancing their whole body movement rate compared to visual
feedback. However the effects of standalone ego walking sounds did not differ
significantly from the effects of visual feedback.
was softer. Even though the results are not statistically significant they do
provide evidence of the effects of sound in human motor behavior.
Considering all the above, it is clear that past studies have focused on
exploring the effects of music and other types of auditory cues on gait
behavior. While Bresin et al. (2010) investigated the effects of real-time
walking-related sounds (ground texture sounds), the fact that they asked
participants to walk with different emotional intentions may have confounded
the results. In our study we seek to investigate the effects of real-time
self-produced footstep sounds and their possible influences on walking
behavior, emotion and perception.
Sound and Emotions
Emotions play a fundamental role in our everyday lives. Sound is an innate
part of our lives, triggering a wide range of emotional reactions and
processes during our daily interactions with the environment. Past studies
have revealed the effects of auditory feedback on emotions (e.g. Bradley and
Lang, 2000). In this section, a few studies that investigate the implications
of sounds associated with our body or our actions for human emotions are
briefly described.
Tonetto et al., (2014) in their study (details in “Sound and Perception of Body”
section) demonstrated that sounds elicited by high heels of different materials
while walking on different types of ground affect women’s emotions (valence,
arousal and dominance). Additionally, Furfaro et al., (2013) also found effects of
sound in human emotions (valence, arousal, dominance) (details in “Sound and
Perception of Body” section).
Additionally, there is an indication that the provision of auditory stimuli closer to
our body (through headphones) can be more arousing.
In all the above-mentioned studies the auditory stimuli provided to the
participants were pre-recorded; the effects of real-time, self-produced
footstep sounds on human affective states have not yet been investigated.
Implications for HCI
As shown above, footstep sounds constitute an informative source and can
affect body perception, motor behavior and emotions. These three aspects are
highly important for the design of interactive systems. The present research’s
findings can provide valuable insights for the design of applications for
virtual reality, video games, rehabilitation and fitness.
More specifically, Turchet et al. (2010) in their study highlight that footstep
sounds are used in both virtual applications which enhance user navigation and
games to increase the sense of action and resemblance to physical environments.
Additionally, past studies have reported that “self-representation” sounds
contribute to the development of a virtual self-representation (Väljamäe et al.,
2008) and can be used in VR environments to enhance presence experience
(Tajadura-Jimenez et al., 2008). Thus, the findings of the present study can be
used in the field of virtual reality to enhance presence, immersion and enable a
more realistic interaction in various situations.
al., (2014) in their study revealed that the use of auditory feedback can motivate
physical activity in patients with chronic pain.
Finally, past studies have demonstrated that music feedback during exercise
decreases perceived physical effort and improves affective states (bike riding,
Becker et al., 1994; karate performance, Ferguson et al., 1994). Consequently,
the use of self-produced footstep sounds in the design of interactive personal
training applications could have similar results.
2.2 Research Question and Hypotheses
In light of the above, it is clear that previous studies provide evidence that
sounds related to both our bodies and our actions may influence the perception
of key body attributes, motor behaviour and emotions. However, there is a lack
of sufficient understanding of the effects of real-time manipulated
self-produced sounds on these three aspects. Thus, the research question of
this study is:
Can we alter the perception of one’s body weight, behavior and emotions by
manipulating self-produced footstep sounds?
Hypothesis 1
1. Participants’ perception of their own body weight will change with two
possible outcomes (Li et al., 1991):
1.1 Participants will perceive their own body weight as higher when the
low frequency components of their footsteps sound increase.
1.2 Participants will perceive their own body weight as lower when the
high frequency components of their footsteps sound increase.
Hypothesis 2
2.1 Participants will decrease their speed (longer time interval between
heel strike and toe-off events) when we increase the low frequency
components of the footsteps sound (Bresin et al., 2010). It is also
expected that both pressure applied on the ground and time of contact
with the ground will be increased. Additionally, the acceleration
while lifting the foot to move forward will be decreased (point-light
walker application, Troje, 2008).
2.2 Participants will increase their speed (shorter time interval between
heel strike and toe-off events) when we increase the high frequency
components of the footsteps sound (Bresin et al., 2010). It is also
expected that both pressure applied on the ground and time of contact
with the ground will be decreased. Additionally, the acceleration
while lifting the foot to move forward will be increased (point-light
walker application, Troje, 2008).
Hypothesis 3
3. Participants’ emotional experience will be affected by the provision of
the manipulated footsteps sound with two possible outcomes (Tonetto et
al., 2014; Tajadura-Jimenez et al., 2008):
Hypothesis 4
4. Considering the relationship between body weight and walking posture
(point-light walker, Troje, 2008) we believe that participants’ walking
posture will be affected (Pastore et al., 2008; Tajadura-Jimenez et al.,
2012) by the provision of manipulated footstep sounds with two possible
outcomes:
Hypothesis 5
5. Participants’ perception of their strength will be affected by the provision
of manipulated footstep sounds (Furfaro et al., 2013) with two possible
outcomes:
5.1 Participants will feel stronger as they perceive their body as heavier
when we increase the low frequency components of the footsteps
sound. Accordingly, they will feel weaker when we increase the high
frequency components.
5.2 Participants will feel weaker, as they will feel unfit for a body that
is perceived as heavier, when we increase the low frequency
components of the footsteps sound. Accordingly, they will feel
stronger when we increase the high frequency components.
Hypothesis 6
6. Considering the relationship between body weight and walking posture
(point-light walker, Troje, 2008) we believe that participants’ perception
of their walking posture (straight-stooped/hunched) will be affected by
the provision of manipulated footstep sounds (Tajadura-Jimenez et al.,
2012) with two possible outcomes:
6.1 Participants will perceive their posture as stooped when the low
frequency components of the footsteps sounds increase.
6.2 Participants will perceive their posture as straight when the high
frequency components of the footsteps sounds increase.
Hypothesis 7
7. Based on the findings in the “Sound and Motor Behaviour” section which
suggest that sound affects motor behaviour (speed) (Bresin et al., 2010)
and Menzer et al.’s (2010) study, which reveals that sound affects
perceived motor behaviour we also believe that participants’ perceived
speed will be affected by the provision of manipulated footstep sounds
with two possible outcomes:
7.1 Participants’ perceived speed will be decreased when we increase the
low frequency components of the footstep sounds.
7.2 Participants’ perceived speed will be increased when we increase the
high frequency components of the footstep sounds.
3 PROTOTYPE AND METHODS
3.1 Prototype
For the purpose of the experiment, a system based on both audio equipment and
sensors was set up. Additionally, a prototype pair of sensored shoes was built.
In this section a detailed description of the prototype and an overview of the
system used to carry out the experiment will be provided. Furthermore, the way
the different sensors were fitted to the participant will be described. Finally,
details will be given for the exploratory process followed in order to choose the
appropriate equipment that comprises the system.
For the purpose of the present study a pair of sensored active sandals was built.
The sandals were enhanced with pressure sensors and one accelerometer. More
specifically, 4 Square Force-Sensing resistors (43.7 mm x 88 mm) (2 on each
shoe) were placed in the front and rear part of the sandal’s insole (Figure 1). One
ribbon cable (1.5 meters long) exited each of the shoes and was soldered to a
prototyping board along with 4 resistors (330 Ω). The board was then connected
to an Arduino board. Additionally, a Sparkfun 3-axis Breakout accelerometer
integrated into the prototype was attached to participants’ left ankle (Figure 2).
The accelerometer was directly connected to the Arduino board.
Figure 2: The sensored-shoes and the accelerometer as it was attached to participants’ left
ankle
The data gathered from both sensors were digitized through Arduino (Figure 3).
More specifically, 3 different boards were used. An Arduino UNO with an
Electronic Brick Shield V4.0 Pro on the top was used. The Electronic Brick
Shield had 5 buckled analog ports. Since we aimed to reduce cabling, more
components were added so that the data would be transferred wirelessly from
the Arduino to the laptop. This was achieved with the addition of an
Arduino XBee board with an XBee wireless component attached to it. An XBee
Explorer USB unit along with an XBee wireless component were also connected
to the laptop to achieve wireless communication with the Arduino board.
Finally, the Arduino UNO board was connected to a 9V battery for power
supply.
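On the laptop side, the wireless link delivers the digitized sensor readings
as a serial stream. The thesis does not specify the wire format, so the
following sketch assumes a hypothetical comma-separated line per sample (four
pressure values followed by three accelerometer axes) as one plausible way
such data could be parsed; the field layout is an illustrative assumption.

```python
# Hypothetical parser for one line of sensor data received over the
# XBee serial link. The "4 pressure values + 3 accelerometer axes"
# layout is an assumption for illustration, not the thesis's protocol.

def parse_sample(line):
    """Parse 'p1,p2,p3,p4,ax,ay,az' into a labelled dict of ints."""
    fields = [int(v) for v in line.strip().split(",")]
    if len(fields) != 7:
        raise ValueError("expected 7 comma-separated values, got %d"
                         % len(fields))
    return {
        "pressure": fields[:4],  # front/rear sensor of each sandal
        "accel": fields[4:],     # accelerometer x, y, z axes
    }

sample = parse_sample("512,88,499,102,330,341,512\n")
```

Grouping the raw values by sensor in this way makes the later per-sensor
analysis (pressure events, acceleration peaks) straightforward.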
A pair of Core Sound binaural lavalier microphones with a frequency response
of 20 Hz – 20 kHz (one on each shoe) was also integrated into the shoes to
capture the footstep sounds of the walker (Figure 4). The microphones were
directly connected to an SP-24B Stereo Microphone Preamplifier to amplify the
acoustic signal of the footstep sounds. To increase the equipment’s
portability, the preamplifier was converted into a portable device by
connecting it to an 8 AA battery box. The amplified signal was then
transmitted to a Behringer MINIFBQ FBQ800 Ultra-Compact Graphic Equalizer with
9 frequency bands (63 Hz, 125 Hz, 250 Hz, 500 Hz, 1 kHz, 2 kHz, 4 kHz, 8 kHz,
16 kHz) and a 24 dB gain range, which was used to manipulate the higher and
lower frequency components of the footstep sounds. The participants were
required to walk down an 8.54-meter corridor for the purpose of the
experiment; for this reason the equalizer’s cable was extended by 15 meters so
that the device could remain plugged into the power supply during the walking
trials. The sound output was rendered through a pair of Sennheiser HDA 300
closed headphones with high passive ambient noise attenuation, directly
connected to the equalizer.
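In this setup the frequency manipulation was performed by the hardware
equalizer. Purely to illustrate the principle, the same kind of manipulation
can be sketched in software by scaling selected FFT bins of a recorded
footstep signal. The band edges below follow the conditions of this study, but
the implementation itself, the sampling rate and the test signal are
simplifying assumptions.

```python
# Simplified software analogue of the graphic-equalizer manipulation:
# boost a chosen frequency band of a signal by scaling its FFT bins.
import numpy as np

def amplify_band(signal, fs, f_lo, f_hi, gain_db):
    """Return a copy of `signal` with [f_lo, f_hi] Hz boosted by gain_db."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    spectrum[mask] *= 10 ** (gain_db / 20.0)  # dB to linear amplitude
    return np.fft.irfft(spectrum, n=len(signal))

fs = 16000                                   # Hz, assumed sampling rate
t = np.arange(fs) / fs
step = np.sin(2 * np.pi * 125 * t)           # a 125 Hz footstep component
heavy = amplify_band(step, fs, 63, 250, 12)      # "heavy": boost low band
light = amplify_band(step, fs, 1000, 4000, 12)   # "light": boost high band
```

Boosting the 63-250 Hz band amplifies the 125 Hz component (the "heavy"
condition), while boosting the 1-4 kHz band leaves it untouched, mirroring the
selective amplification applied by the hardware equalizer.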
body. The sensors were wirelessly connected to their base which was connected
to the laptop with a USB cable. A detailed scheme of the connection between
the different components of the system is presented in Figure 5.
Figure 5: The scheme illustrates the connection between the different components of the
system
Sensor Setup
The audio equipment (preamplifier and graphic equalizer) and the Arduino
prototype board were placed in a small backpack. The backpack was connected
to the shoe prototype. More specifically, 2 ribbon cables and 2 microphone
cables exited the backpack and were connected to the sandals. The accelerometer
cable exited the backpack and was attached to the participants’ ankle with a
hypo-allergenic tape. The participants were first asked to wear the sandals and
they were then assisted to wear the backpack. The cables were attached to the
participant’s legs with the use of two Velcro straps as shown in Figure 6, to
ensure that they could walk comfortably without limitations. The cable
connecting the headphones to the equalizer exited the upper part of the
backpack.
Figure 9: The GSR sensor placed on participant’s non-dominant hand
Exploration
Shoes
Sandals were preferred to regular shoes since they are easy to fashion and can
accommodate a wide range of foot sizes. Two different types of sandals were
tested with regard to the sound elicited by the sole’s impact with the ground.
A pair of Arlington sandals by Earth Spirit, UK size 8 (EU 42), was chosen
primarily due to the hardness of the sole material (hard rubber), which
elicited a clearer and more distinctive footstep sound.
Audio equipment
Regarding the sound input, there was a need for high quality microphones that
could capture a wide range of both the high and low frequency footstep sound
components and could be easily attached to the shoe prototype. For this
reason, four different types of microphones were tested in order to choose the
most effective one (the Core Sound binaural microphone).
Two types of equalizer were initially tested (the Altai Soundlab DEQ31X1
Equalizer and the Audacity 2.0.5 virtual equalizer) to ensure that we could
achieve the desired manipulations of the frequency spectrum. The Altai
Soundlab DEQ31X1 Equalizer was highly effective; however, its large size
didn't allow us to use it for the project, so we purchased an equalizer with
similar characteristics and smaller dimensions (the Behringer MINIFBQ FBQ800
Ultra-Compact 9-Band Graphic Equalizer).
Sensors
3.2 Methods
Participants
Data were gathered from 22 participants (4 males and 18 females) with ages
ranging from 18 to 35 years (M = 24.36; SD = 4.85). The participation criteria
were the following: age between 18 and 35, normal hearing, willingness to
mostly stand upright for about an hour, and no neurological/psychiatric
disorder. All participants were naive as to the purpose of the study. The
participants were given £7.50 for their time.
Software
Figure 12: Real-time plotting of data in Processing environment – the first four are the
pressure sensor signals. The last three are the plots of the accelerometer x,y,z axes.
Additionally, the Q-Sensor software (Figure 13) was used to acquire, plot and
export the GSR data to .csv files so that they could be used for further
analysis in Matlab.
Delsys EMGworks 4.0 Acquisition software was installed for the acquisition of
the Delsys EMG/accelerometer sensor data (Figure 14).
Figure 14: Delsys EMGworks 4.0 Acquisition software – plot of
electromyographic activation and accelerometer data
Figure 15: The 3D Body Visualization application
Sound manipulation
Figure 16: Settings for the neutral condition – no frequency manipulation.
Figure 17: Manipulation of the high frequency components.
Figure 18: Manipulation of the low frequency components.
Measures
Questionnaires
Questionnaire 1
Questionnaire 2 - Spanner questionnaire
A fake task (the spanner task) was introduced to the experiment for two
reasons: to mislead the participants regarding the actual purpose of the
study, and to reduce visual contributions that could lead to biased responses,
i.e. to keep participants focused on the task so that they would not inspect
their gait patterns during the experimental trial. The participants were
presented with three
spanners of different sizes (Figure 19) and were informed that they would be
given one of these spanners before each experimental trial. However, they were
given the same spanner in each trial. The participants were asked to make
approximate assessments of the spanner’s length and weight. Despite this being a
fake task the participants’ responses were later analyzed to detect possible
effects of sound on their assessments. (Full questionnaire in Appendix C-
Spanner Questionnaire).
further explore whether the different sound manipulations affect participants’
perceived strength. (Full questionnaire in Appendix C-Lifting Questionnaire).
Participants’ emotional valence, arousal and dominance levels related to the task
were measured by using the Self-Assessment Manikin (SAM), a 9-point non-
verbal pictorial assessment method introduced by Bradley and Lang (1994). The
specific method is used to measure the aforementioned affective responses in a
variety of stimuli, including sound. Since SAM is a non-verbal method it can be
used across different cultures, and it is quick to fill in and highly reliable
despite its simplicity (Bradley and Lang, 1994). (Full questionnaire in Appendix C-
Questionnaire 2.2).
7-point Likert scales were used to measure the way participants perceived their
walking speed, weight, strength and posture after each experimental trial.
Additionally, the same type of scales was used to measure participants’ degree of
agreement regarding aspects such as the feelings of their body and their ability to
locate their feet. In this case higher values corresponded to higher level of
agreement with each question. (Full questionnaire in Appendix C-Questionnaire
2.1).
The EDE-Q is the self-report version of the Eating Disorder Examination
interview (EDE) introduced by Fairburn and Cooper (1993). The weight and shape
concern subscale items of the specific questionnaire were utilized to explore the participants’
degree of concern regarding their weight and shape and identify possible
correlations between the level of shape/weight concern and manipulation of body
perception. (Full questionnaire in Appendix C-Questionnaire 3).
Pressure sensors
Accelerometer
EMG/accelerometer sensors
Other materials/Environment/Lighting
The experimental sessions were recorded with a camera (JVC Everio). In order
to minimize visual distractions the lighting of the room was lowered during the
experiment. Finally, 4 MDF boards with dimensions 2440mm x 1220mm x
25mm were placed next to each other, forming an 8.54-meter corridor (Figure
20).
Figure 20: The experiment setting
Consent Form
An Informed Interviewee Consent Form was signed by the participants before the
beginning of the experiment (see consent form in Appendix D).
Design
The experiment followed a 3x2 within-participants factorial design. The first
factor was the type of auditory feedback provided to the users as described
above (veridical feedback, high frequencies amplified and low frequencies
amplified); the second was the repetition, as two repetitions were performed for
each type of auditory feedback. Consequently, 6 experimental trials were
performed by each participant. The order of the auditory feedback conditions
across trials was randomized for each participant. The dependent variables were
the perceived body weight/dimensions, assessment of spanner length and weight,
self-efficacy regarding lifting objects, the task experience questions, perceived
valence, arousal and dominance, GSR, acceleration and deceleration of foot
movement, pressure applied by the heel and ball of the foot to the floor, and
walking pace.
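The per-participant randomization of the 6 trials (3 sound conditions x 2 repetitions) can be sketched as follows; the function name and the seeding scheme are illustrative, not part of the thesis materials:

```python
import random

def trial_order(participant_seed):
    """Return a randomly ordered list of the 6 experimental trials
    (3 sound conditions x 2 repetitions), one order per participant."""
    trials = [(cond, rep) for cond in ("NF", "LF", "HF") for rep in (1, 2)]
    rng = random.Random(participant_seed)  # reproducible order per participant
    rng.shuffle(trials)
    return trials

order = trial_order(participant_seed=7)
# 6 trials, each condition appearing exactly twice
```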
Procedure
The experiment lasted approximately one hour. It took place at the Institute of
Philosophy, Senate House, in London. Prior to the experiment participants were
informed that they should wear a pair of shorts and socks. At the moment the
participants arrived in the experiment room, they were asked to read through an
information sheet (Appendix H) with a detailed description of the experimental
procedure and to sign the Informed Interviewee Consent Form. They were also
required to fill in the preliminary questionnaire concerning demographic and
personal information. Thereafter, the GSR sensor was placed on the participant’s
non-dominant hand. After showing the different spanners to the participant the
researcher role-played the whole experimental task. Two initial practice blocks
of 2 trials (one without the equipment on and one with the equipment on) were
performed prior to the experimental block to allow participants to familiarize
themselves with both the equipment and the task. Before the beginning of the
actual experimental sessions 2 small tests were performed to collect the
maximum activation values of the Tibialis Anterior and the Gastrocnemius
Medial Head muscles. These values were essential for the normalization of the
electromyographic data that would be collected subsequently. Test 1 required
participants to lift their heels 5 times to collect the maximum activation values of
Tibialis Anterior. For the collection of the maximum values of Gastrocnemius
Medial Head participants were first asked to lift their toes. Their toes were then
pushed towards the floor and the participant would have to resist the pushing.
Once the tests were carried out the experiment began. Participants were provided
with the spanner. A “click and start” signal was then given to the participants to
press the GSR sensor’s button, which set a marker in the data, and to start
marching in place. The marching task was introduced to achieve longer exposure
to the sound stimuli. After 10 seconds the researcher gave a “go”
signal and the participants started walking towards the end of the corridor where
they were required to press the GSR button again, put the spanner in a non-
transparent bag, adjust the measurements of the 3D body avatar and fill in
Questionnaire 2 described above (“Measures” section). Participants were asked
to walk at their comfortable speed. Subsequently a new session started. This exact
procedure was performed 6 times. After the end of the 6 experimental trials the
researcher helped the participant to take the equipment off. Finally, the EDE-Q
questionnaire was filled in and the participant was debriefed and paid.
3.3 Data Extraction
MATLAB scripts were used to extract the final data for the subsequent analysis.
Please note that the scripts were developed with the supervisor’s help. The
scripts can be found in Appendix E.
The data captured by the pressure sensors were integer numbers whose variation
reflected the different levels of pressure applied on the floor across time. The
values were plotted in MATLAB as illustrated in Figure 21. Thereafter, the start
and end points of the walking session were manually identified for each of the 4
pressure sensors and the signal was cut. Since the marching part was mainly
introduced to increase sound exposure, it was not included in the analysis. The
differentiation between the pressure values recorded during the marching and
the walking sessions is obvious in Figure 21. Five main variables were extracted
from the heel and ball of the foot for each step, as presented in Table 1. Figure
22 illustrates the main variables in a plot of the pressure signal. Subsequently,
seven final variables were calculated and used in the analysis to detect
significant changes in participants’ pace and/or the pressure applied on the
floor. The variables extracted to measure participants’ pace were: the duration of
heel and toe contact with the ground and the time interval between the heel strike
and toe-off events. The pressure applied on the ground was measured by
analyzing the maximum and average pressure applied from both the heel and the
ball of the foot (toe area). The final variables are illustrated in Table 2.
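The per-step extraction described above (t0/t1 contact points, peak and average pressure) was performed with MATLAB scripts (Appendix E). A minimal Python sketch of the same segmentation idea, with an illustrative threshold and variable names, might look like:

```python
import numpy as np

def step_measures(signal, threshold):
    """Segment a pressure trace into steps (contiguous samples above
    `threshold`) and return per-step (duration, peak, mean) tuples.
    The threshold and names are illustrative, not the thesis scripts."""
    sig = np.asarray(signal, dtype=float)
    above = sig > threshold
    # pad with False so every contact run has a detectable start (t0) and end (t1)
    padded = np.r_[False, above, False].astype(int)
    t0 = np.flatnonzero(np.diff(padded) == 1)    # strike: rise above threshold
    t1 = np.flatnonzero(np.diff(padded) == -1)   # off: fall below threshold
    return [(b - a, sig[a:b].max(), sig[a:b].mean()) for a, b in zip(t0, t1)]
```

Durations here are in samples; dividing by the sampling rate gives seconds.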
Figure 21: A plot of the signal extracted from the left toe pressure sensor. Both the
marching and walking session are illustrated.
Figure 22: A plot of the left toe pressure signal during the walking session and the main
variables extracted from the heel and ball of the foot
Accelerometer
The data captured by the accelerometer were a string of positive and negative
numbers. These variations reflect the accelerating and decelerating movements of
the foot in the x, y and z axes during the swing phase. While acceleration data
were gathered from all 3 axes, the resultant of the 3-axis data was calculated as
a = √(ax² + ay² + az²) to facilitate subsequent analysis.
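The resultant is the Euclidean norm of the three axis signals, computed sample by sample; a minimal numerical sketch:

```python
import numpy as np

def resultant(ax, ay, az):
    """Sample-by-sample Euclidean norm of the three accelerometer axes:
    a = sqrt(ax^2 + ay^2 + az^2)."""
    return np.sqrt(np.square(ax) + np.square(ay) + np.square(az))
```

For example, a sample with components (3, 4, 0) yields a resultant of 5.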
Concerning the definition of the start and end points of the walking session, the
same process as the one described above was followed. Figure 23 shows an
acceleration and pressure plot before cutting the signal according to the start/end
points of the walking phase. A plot of the acceleration during the walking trial is
shown in Figure 24. After plotting the data, maximum and minimum values
above and below specific thresholds were identified. Thereafter, 3 final values
were calculated and used for subsequent analysis to detect changes in the
participants’ acceleration of movements, as shown in Table 3.
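The threshold-based identification of maxima and minima can be sketched as follows; the local-extremum rule (a sample exceeding both neighbours) and the thresholds are illustrative, as the actual MATLAB scripts are in Appendix E:

```python
import numpy as np

def threshold_extrema(acc, hi, lo):
    """Return local maxima above `hi` and local minima below `lo` in a
    resultant-acceleration trace. The extremum rule and thresholds are
    illustrative."""
    a = np.asarray(acc, dtype=float)
    mid = a[1:-1]
    # a sample is a local max/min if it exceeds both of its neighbours
    is_max = (mid > a[:-2]) & (mid > a[2:]) & (mid > hi)
    is_min = (mid < a[:-2]) & (mid < a[2:]) & (mid < lo)
    return mid[is_max], mid[is_min]
```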
Figure 23: Plot of acceleration and pressure recorded from all 4 pressure sensors
GSR
A MATLAB script (Appendix F) was used to identify the changes in the GSR signal
during the walking session. Before importing the data into MATLAB, the GSR
files were plotted in the Q-Sensor software and extra markers were added to
differentiate the walking from the marching session. Data were then imported
into MATLAB and values were calculated for subsequent analysis (Table 4).
Measure: Definition
avg_march: Average arousal during marching
avg_walk: Average arousal during walking
max_min_march: Difference between maximum and minimum arousal during marching
max_min_walk: Difference between maximum and minimum arousal during walking
Table 4: GSR main variables
Participants were asked to wear the sensor during the whole experimental
procedure. Thus one GSR file was exported for each participant. In order to
identify the data relevant to each experimental trial, the participants were asked
to press the sensor’s button at both the beginning and the end of each trial so that
markers would be placed on the data files.
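Given the marker positions, the four GSR measures of Table 4 can be computed per trial as in this Python sketch; the marker-index arguments are illustrative stand-ins for the button presses and the extra markers added in the Q-Sensor software:

```python
import numpy as np

def gsr_measures(gsr, m0, m1, m2):
    """Compute the four GSR variables of Table 4 from one trial's trace.
    m0/m1/m2 are illustrative marker sample indices: start of marching,
    start of walking, end of walking."""
    trace = np.asarray(gsr, dtype=float)
    march, walk = trace[m0:m1], trace[m1:m2]
    return {"avg_march": march.mean(),
            "avg_walk": walk.mean(),
            "max_min_march": march.max() - march.min(),
            "max_min_walk": walk.max() - walk.min()}
```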
4 DATA ANALYSIS
Data extraction was followed by the statistical analysis using the software IBM
SPSS 22. Due to time limitations, the analysis of the data gathered using the
Delsys Trigno EMG System and the EDE-Q was postponed to future work. For
the purpose of this report four types of data were analysed: the
questionnaire, the pressure sensors, the accelerometer and the GSR data.
For all the variables, exploratory analyses were initially performed to test
whether the distribution as a whole deviated from a comparable normal
distribution. The objective Shapiro-Wilk test of normality was used, since it is
reported to be more accurate than the Kolmogorov-Smirnov test (Field, 2005).
Where distributions were significantly non-normal, the following
transformations were attempted to normalise the data:
o The log transformation (log(Xi)): takes the logarithms of the initial data.
In the case of zero or negative values, log(Xi+1) was calculated.
o The square root transformation (√Xi): takes the square roots of each of
the scores. In the case of negative values, √(Xi+1) was calculated.
o The reciprocal transformation (1/Xi): divides 1 by each score of the
initial data. In the case of zero values, 1/(Xi+1) was calculated.
o Z-scores: the initial data are converted to individual z-scores calculated
for each participant based on his/her data gathered from the 6 trials.
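The first three transformations, with the zero/negative-value shifts described above, can be sketched in Python (the function name is illustrative):

```python
import numpy as np

def normalise(x, method):
    """Apply one of the transformations listed above, shifting by +1
    when zeros (or negatives) would make the transform undefined,
    as described in the text."""
    x = np.asarray(x, dtype=float)
    if method == "log":
        return np.log(x) if (x > 0).all() else np.log(x + 1)
    if method == "sqrt":
        return np.sqrt(x) if (x >= 0).all() else np.sqrt(x + 1)
    if method == "reciprocal":
        return 1 / x if (x != 0).all() else 1 / (x + 1)
    raise ValueError(f"unknown method: {method}")
```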
In certain cases (avatar task, perceived weight, average time of heel contact,
acceleration when lifting the foot) a further examination to detect outlier data
was conducted, by looking for values that were more than two SD above or
below the mean. However, no outlier data were detected in any of the cases.
According to the results of the normality test either parametric or non-parametric
analysis was carried out, as described below.
In the case of a non-normal data distribution, the two repetitions of each sound
condition were compared against each other by conducting a Wilcoxon test, to
detect possible significant effects of sound repetition. Where no significant
repetition effects were found, the Means and SD were calculated. Thereafter,
Friedman’s test was performed to detect whether there was a significant sound
effect across the different sound conditions. Finally, a Wilcoxon test was
conducted to detect significant sound effects and trends within specific pairs of
conditions. The effect size for the Wilcoxon test is demonstrated by reporting r,
calculated as r = Z/√N, where Z is the test’s standardized statistic and N is the
total number of observations.
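A sketch of this computation, using the normal approximation of the Wilcoxon T statistic (SPSS reports the z value directly; the helper below is illustrative):

```python
import math

def wilcoxon_r(T, n, N):
    """Effect size r = z / sqrt(N) for a Wilcoxon signed-rank test.
    z is obtained from the normal approximation of the T statistic;
    n = number of non-zero-difference pairs, N = total observations."""
    mu = n * (n + 1) / 4                               # mean of T under H0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)  # SD of T under H0
    z = (T - mu) / sigma
    return z / math.sqrt(N)
```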
Results and analysis are reported specifically in the next sections of the chapter.
There is a separate section for each data type (questionnaire, pressure sensors,
accelerometer and GSR sensor). All the detailed tables with the statistical test
outputs are included in Appendix G.
Sound condition: Labelling
No sound manipulation – repetition 1: NF1
No sound manipulation – repetition 2: NF2
Manipulation of High Frequencies – repetition 1: HF1
Manipulation of High Frequencies – repetition 2: HF2
Manipulation of Low Frequencies – repetition 1: LF1
Manipulation of Low Frequencies – repetition 2: LF2
Table 5: Conditions labelling
4.1 Questionnaires
Different types of questionnaires were used for the purpose of the experiment.
Due to time limitations, the Eating Disorder Examination Questionnaire (EDE-
Q) (Fairburn, 2008) was excluded from the current analysis.
The initial Shapiro-Wilk test revealed that the data distribution was significantly
non-normal, p<.05. Normalisation was achieved by performing a reciprocal
transformation (1/Xi) on the initial data. A sound x repetition ANOVA was
conducted to explore possible sound effects and interactions. Mauchly’s test
showed no violation of sphericity, χ²(2)=5.46, p=.065. Thus we are allowed to
report the sphericity-assumed statistic, F(2,42)=3.58, p=.037, which showed significant
sound effects on perceived body weight. No effects of sound repetition or
interaction were found. The t-test revealed that participants perceived their own
body weight as lighter in HF (M = 54.65,SE = 2.72) than in NF condition (M =
57.52, SE = 3.02), t(21)=-2.4, p=.025, r=.46. Additionally, a trend towards
significant sound effects was revealed in HF with respect to the LF condition (M = 56.77,
SE = 2.88), t(21)=-1.77, p=.090<0.1, r=.36. There was no significant difference
in LF with respect to NF, p>.05.
Figure 25: Means and Standard Errors for perceived body weight in each sound condition
– the non-transformed initial values are reported
Perceived speed - Slow-Quick scale
Figure 26: Means and Standard Errors for perceived walking speed values in each sound
condition - for the measurement a 7-point Likert scale was utilised, where 1 was slow and
7 was quick
in Friedman’s test was not far from significance, indicating that possible sound
effects or trends could be revealed by a further Wilcoxon test between the three
sound conditions. The results showed that participants perceived their own body
weight as lower in HF (M = 3.65, SE = .21) than in the LF condition (M = 4.40, SE
= .24), z=-2.02, p=.043, r=-.30. Additionally, there was a trend towards significance in
HF with respect to NF (M = 5.29, SE = 1.19), z=-1.71, p=.087<0.1, r=-.25. No
significant effects of sound were detected in LF with respect to NF condition,
p>.05. The results suggest that participants felt lighter in HF condition and
heavier in NF condition (Figure 27). A variation in perceived body weight across
conditions is also demonstrated. These results are consistent with the ones
yielded previously in the analysis of data regarding the perceived body weight.
Figure 27: Means and Standard errors for perceived body weight in each sound condition -
for the measurement a 7-point Likert scale was utilised, where 1 was light and 7 was heavy
The Shapiro-Wilk test showed that the data were non-normally distributed (p<.05)
and the consequent Wilcoxon test revealed no significant effects of sound
repetition, p>.05. Friedman’s test showed that sound effects were insignificant
but not far from significance, χ²(2)=3.75, p=.153. Hence, paired comparisons
were planned based on our hypothesis. A Wilcoxon test was run to detect
possible significant effects and trends. The results demonstrated a trend towards a
significant difference in participants’ perceived strength in HF (M = 4.59, SE
= .14) with respect to the LF condition (M = 4.11, SE = .18), z=-1.80, p=.071, r=-
.27. No statistically significant difference was found in the remaining paired
comparisons (all p>.05).
Figure 28: Means and Standard Errors for perceived strength values in each sound
condition - for the measurement a 7-point Likert scale was utilised, where 1 was weak and
7 was strong
Perceived posture
Figure 29: Means and Standard Errors for perceived posture values in repetition 1 - for
the measurement a 7-point Likert scale was utilised, where 1 was stooped/hunched and 7
was straight
Figure 30: Means and Standard Errors for perceived posture values in repetition 2 - for
the measurement a 7-point Likert scale was utilised, where 1 was stooped/hunched and 7
was straight
Feet localization
The data were non-normally distributed (all p<.05). The Wilcoxon test revealed
no significant effects of sound repetition (all p>.05). A significant effect of
sound on the participants’ ability to identify the location of their own feet was
found in Friedman’s test, χ²(2)=8.51, p=.014<0.05. A further Wilcoxon test was
run to identify significant effects and trends within pairs of conditions. The
results suggested that participants appeared more confident in identifying their
feet location in HF (M = 5.81, SE = .21) than in LF (M = 5.25, SE = .29), z=-2.06,
p=.039, r=-.31, and NF conditions (M = 5.22, SE = .28), z=-2.29, p=.022, r=-.34.
No significant effects of sound were identified in LF with respect to NF
conditions (p>.05).
Figure 31: Means and Standard Errors for the values of participants’ ability to identify
their feet location - for the measurement a 7-point Likert scale was utilised, where 1 was “I
strongly agree” and 7 was “I strongly disagree” with the statement: During the experience I
could really tell where my feet were
Participants’ ability to identify their own feet as the source of footstep sounds
The data distribution was non-normal (all p<.05). The Wilcoxon test showed no
significant effect of sound repetition (all p>.05). There were no significant
effects of sound on participants’ ability to identify their own feet as the source of
the footstep sounds, χ²(2)=.241, p=.88. Mean values and Standard Errors are
demonstrated in Figure 32.
Figure 32: Means and Standard Errors for participants’ ability to identify their
own feet as the source of the footstep sounds - for the measurement a 7-point Likert
scale was utilised, where 1 was “I strongly agree” and 7 was “I strongly disagree” with the
statement: During the experience I felt the sounds I heard were produced by my own
footsteps/body
Figure 33: Means and Standard Errors for the experience of surprising and unexpected
feelings of the body - for the measurement a 7-point Likert scale was utilised, where 1 was
“I strongly agree” and 7 was “I strongly disagree” with the statement: During the
experience the feelings about my body were surprising and unexpected
The data distribution was non-normal (all p<.05) and no effect of sound
repetition was found (all p>.05). A Friedman’s test showed no significant sound
effects (all p>.05). Means and Standard Errors are demonstrated in Figure 34.
Figure 34: Means and Standard Errors for the degree of vividness of perceived body
feelings in each condition - for the measurement a 7-point Likert scale was utilised, where 1
was “I strongly agree” and 7 was “I strongly disagree” with the statement: During the
experience the feeling of my body was less vivid than normal
The Shapiro-Wilk test was conducted for all the measures in order to identify the
normality of the data distribution. All the variables were significantly non-
normal and normalisation could not be achieved.
SAM – Dominance
different conditions indicated that possible significant sound effects or trends
could be identified. A further Wilcoxon test between the three different sound
conditions showed that participants felt significantly more dominant in HF
(M=6.02, SE=.26) than in the LF condition (M=5.29, SE=.32), z=-2.00, p=.045. A
trend was identified in HF with respect to the NF condition (M=5.27, SE=.29), z=-
1.74, p=.080. No statistically significant sound effect was found between the LF
and NF conditions, p>.05.
Figure 35: Means and Standard Errors for perceived dominance in each condition - for the
measurement a 7-point Likert scale was utilised, where 1 was Submissive/Awed and 7 was
Dominant/Important
SAM – Valence
Figure 36: Means and Standard Errors for perceived valence in each condition -
for the measurement a 7-point Likert scale was utilised, where 1 was Unhappy/Negative
and 7 was Happy/Positive
SAM – Arousal
Figure 37: Means and Standard Errors for perceived arousal values in each condition - for
the measurement a 7-point Likert scale was utilised, where 1 was Unaroused/Calm and 7
was Aroused/Excited
Spanner Questionnaire
The initial Shapiro-Wilk test revealed that the data distribution was significantly
non-normal, p<.05. The initial data were converted to z-scores to achieve
normalisation. A multivariate analysis of variance (MANOVA) was conducted
to detect sound effects, interaction and possible correlations between the weight
and length variables. The results suggested no significant effects or interaction
(all p>.05). Figure 38, 39 illustrate the means and Standard Errors for perceived
spanner length and weight respectively.
Figure 38: Means and Standard Errors for perceived spanner weight values (in grams) in
each condition
Figure 39: Means and Standard Errors for perceived spanner length values (in cm) in each
condition
Self-Efficacy Questionnaire
Cronbach’s α reliability coefficient was calculated for each condition to test the
reliability of the self-efficacy scale of strength. The scale was highly reliable in
each condition (Table 7). Means and Standard Errors were calculated for each
variable. The Shapiro-Wilk test showed a normal distribution of the scale (all
p>.05). The ANOVA demonstrated no significant sound effects or interactions.
However, there was a significant combined effect of sound and repetition,
F(2,42)=3.69, p=.033. Thus, planned t-tests were performed separately for
repetitions 1 and 2. A significant effect of sound was found in HF2
(M=42.56, SE=3.19) with respect to NF2 (M=39.68, SE=2.77), t(21)=-
2.20, p=.039, r=.43, and in HF2 with respect to LF2 (M=40.26, SE=3.01), t(21)=-
2.11, p=.047, r=.43.
Condition: Cronbach’s α
NF1: .868
NF2: .781
LF1: .823
LF2: .839
HF1: .847
HF2: .866
Table 7: Cronbach’s α for self-efficacy values in each experimental trial.
Figure 40: Means and Standard Errors for perceived strength values in sound repetition 1 -
for the measurement a scale from 0-100 was utilised, where 0 was “Cannot lift at all” and
100 was “Highly certain I can lift” a specific weight
Figure 41: Means and Standard Errors for perceived strength values in sound repetition 2 -
for the measurement a scale from 0-100 was utilised, where 0 was “Cannot lift at all” and
100 was “Highly certain I can lift” a specific weight
4.2 Behavioral and Physiological Measures
Pressure sensors
Primarily due to technical complications, only the data gathered from the sensors
integrated in the left shoe were analysed. More specifically, while the
experiment was in progress, the researcher observed that the right foot sensors
were giving significantly lower values than the left foot ones. However,
behavioural differences across conditions could still be noticed. Thus, it was
decided to run the Shapiro-Wilk test for normality and the ANOVA tests on
both feet’s data to confirm the validity of the initial decision to exclude the
specific sensors. The results showed inconsistencies between the right and left
foot. For instance, while in certain cases significant sound effects were
revealed in the left foot data, no significant effects were found in the right foot
data. Moreover, data were normal for most of the left foot variables but not for the
right foot ones. Finally, the fact that the accelerometer was attached to the left foot
also supported conducting the analysis on the sensors of the same
foot. Considering the above, it was concluded that the right foot
sensors should be excluded from the analysis.
There were seven measures of interest per step. Means were calculated for each
variable in each condition. The measures and their labeling are demonstrated in
Table 8.
Overall, significant effects of sound were found in 2 out of 7 measures, average
time of heel contact with the ground and average of toe pressure applied on the
ground. No significant sound effects were found in the rest of the measures. For
the detailed analysis of the non-significant values please see Appendix G.
Condition: Measures
NF1: NF1_left_heel_t1t0_diff, NF1_heel_to_toe, NF1_left_heel_peak, NF1_left_heel_average, NF1_left_toe_t1t0_diff, NF1_left_toe_peak, NF1_left_toe_average
NF2: NF2_left_heel_t1t0_diff, NF2_heel_to_toe, NF2_left_heel_peak, NF2_left_heel_average, NF2_left_toe_t1t0_diff, NF2_left_toe_peak, NF2_left_toe_average
LF1: LF1_left_heel_t1t0_diff, LF1_heel_to_toe, LF1_left_heel_peak, LF1_left_heel_average, LF1_left_toe_t1t0_diff, LF1_left_toe_peak, LF1_left_toe_average
LF2: LF2_left_heel_t1t0_diff, LF2_heel_to_toe, LF2_left_heel_peak, LF2_left_heel_average, LF2_left_toe_t1t0_diff, LF2_left_toe_peak, LF2_left_toe_average
HF1: HF1_left_heel_t1t0_diff, HF1_heel_to_toe, HF1_left_heel_peak, HF1_left_heel_average, HF1_left_toe_t1t0_diff, HF1_left_toe_peak, HF1_left_toe_average
HF2: HF2_left_heel_t1t0_diff, HF2_heel_to_toe, HF2_left_heel_peak, HF2_left_heel_average, HF2_left_toe_t1t0_diff, HF2_left_toe_peak, HF2_left_toe_average
Table 9: Variables in each condition
The Shapiro-Wilk test showed a normal data distribution of the scale. An ANOVA
was run for the left foot sensors. Mauchly’s test indicated that the assumption of
sphericity had been violated, χ²(2)=9.67, p=.008. Therefore, degrees of freedom
were corrected using Greenhouse-Geisser estimates of sphericity, ε=.68. There
was a significant effect of sound, F(1.37,23.38)=3.95, p=.047. No repetition
effects or interaction were indicated (p>.05). A paired sample t-test revealed that
the duration of heel contact tended to be longer in LF (M=57.79,SE=1.57) than
in HF (M=55.96, SE=1.23), t(21)=-2.07, p=.051<0.1, r=.41, and
NF (M=56.24, SE=1.28), t(21)=2.03, p=.055<0.1, r=.40. No significant effects
were found in HF with respect to NF condition, p>.05. The initial data were
further explored to detect outliers. However, no outlier data were identified.
Figure 42: Means and Standard Errors for time of heel contact in each condition
found, p>.05; thus we are allowed to report the sphericity-assumed statistic,
F(2,34)=4.51, p=.018, which showed a significant effect of sound on average toe
values. A paired t-test revealed that the pressure applied on the floor by the ball
of the foot (toe area) was higher in LF (M=552.86,SE=15.88) than in HF
(M=543.48, SE=16.95), t(21)=-2.21, p=.038, r=.43. No significant sound effects
were found in NF with respect to LF and HF (all p>.05).
Figure 43: Means and Standard Errors for the average toe values in each condition – the
original non-transformed values have been used for the graphical representation; the
pressure in Force-Sensitive Resistors is measured in ohms, which can be converted to
measured force (g).
Accelerometer
There were three measures of interest. Means were calculated for each variable
in each condition. The measures and their labeling are demonstrated in Table 10.
Due to technical complications, data were lost in the NF1 condition for participant 7,
in the LF1 condition for participant 15, and in the HF1 condition for participant 19.
Overall, the results suggested that there were significant sound effects in one
measure, the acceleration during lifting the foot to move forward. No significant
sound effects were found in the rest of the measures. For the analysis of the non-
significant measures please see Appendix G.
Measure: Labelling
Acceleration during lifting the foot to move forward: acc_lifting
Acceleration when moving the foot downwards towards the floor: acc_down
Deceleration occurring during lifting the foot: dec_lifting
Table 10: Labelling of the behavioural measures
Figure 44: Means and Standard Errors for acceleration when lifting the foot to move
forward – measurement unit: g-force (g)
GSR
Data from participant 2 were excluded from the analysis due to noise in the GSR
signal. Thus, data from 21 instead of 22 participants were included in the
analysis. Additionally, data collected from participant 1 in the HF2 and LF2
conditions were excluded for the same reason. The measures and their labeling
are demonstrated in Table 12. Overall, the results suggested that there were
significant sound effects in one measure, the difference between maximum and
minimum GSR during marching. No significant sound effects were found in the
rest of the measures. For the analysis of the non-significant measures please see
Appendix G.
A Shapiro-Wilk test of normality revealed that all the GSR measures were
significantly non-normally distributed, p<.05. None of the four transformations
mentioned in the beginning of the analysis chapter successfully normalized the
data distributions. However, since GSR is a highly variable measure, not only
between participants but also within participants, owing to varying levels of skin
moisture and weather conditions, all the original data were individually z-scored
to control for individual variability.
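The individual z-scoring can be sketched as follows; the input layout (one array of 6 trial values per participant) is illustrative:

```python
import numpy as np

def zscore_within(participant_trials):
    """Z-score each participant's trial values against that participant's
    own mean and SD, controlling for individual GSR variability.
    Input layout (dict: participant -> 6 trial values) is illustrative."""
    out = {}
    for p, vals in participant_trials.items():
        v = np.asarray(vals, dtype=float)
        out[p] = (v - v.mean()) / v.std(ddof=1)  # sample SD
    return out
```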
Measure: Labelling
Average arousal during marching: avg_march
Average arousal during walking: avg_walk
Difference between maximum and minimum arousal during marching: max_min_march
Difference between maximum and minimum arousal during walking: max_min_walk
Table 12: Labelling of the GSR behavioural measures
LF1_max_min_walk_z
LF2 LF2_avg_march_z
LF2_avg_walk_z
LF2_max_min_march_z
LF2_max_min_walk_z
HF1 HF1_avg_march_z
HF1_avg_walk_z
HF1_max_min_march_z
HF1_max_min_walk_z
HF2 HF2_avg_march_z
HF2_avg_walk_z
HF2_max_min_march_z
HF2_max_min_walk_z
Table 13: Variables in each condition
The data distribution was non-normal. A Wilcoxon test was run between the two
repetitions of each condition to identify possible effects of sound repetition.
Since no significant effect was found the values of the 2 repetitions for each
condition were merged and their Means were calculated (Figure 45). A
Friedman’s test was run to detect significant effects of sound in the difference
between the minimum and maximum arousal. The results indicated a significant
sound effect, χ²(2)=9.57, p=.008. Paired comparisons between the three sound
conditions were carried out by performing a Wilcoxon test. The results indicated
that there was a significant effect of sound in the difference of the maximum and
minimum arousal values in LF (M = .18,SE = .09) with respect to HF (M =
.13,SE = .05), z=-2.13, p=.033, r=-.32 and in NF (M = .15,SE = .06) with respect
to HF conditions (M = .13,SE = .05), z=-2.09, p=.036, r=-.31. No significant
effects were identified in NF with respect to LF conditions, p>.05.
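The test sequence above (an omnibus Friedman's test followed by pairwise Wilcoxon comparisons with effect sizes r = z/√N) can be sketched with SciPy; this is an illustrative reconstruction on synthetic data, not the SPSS analysis actually run, and the sample size and distribution parameters are assumptions:

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon, norm

# Synthetic max-min arousal differences for N participants in each condition
rng = np.random.default_rng(42)
n = 22  # hypothetical sample size
nf = rng.normal(0.15, 0.06, n)
lf = rng.normal(0.18, 0.09, n)
hf = rng.normal(0.13, 0.05, n)

# Omnibus test: does sound condition affect the max-min arousal difference?
chi2, p_overall = friedmanchisquare(nf, lf, hf)

def wilcoxon_with_r(a, b):
    """Pairwise Wilcoxon signed-rank test with effect size r = z / sqrt(N)."""
    stat, p = wilcoxon(a, b)
    z = norm.isf(p / 2)  # |z| recovered from the two-sided p-value
    return p, z / np.sqrt(len(a))

p_lf_hf, r_lf_hf = wilcoxon_with_r(lf, hf)
```

Running the pairwise tests only after a significant omnibus result, as done above, limits the number of comparisons performed.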
[Figure 45: Means and standard errors of the difference between maximum and minimum arousal values in each condition (NF, LF, HF); the means of the original values are illustrated.]
5 DISCUSSION, FUTURE WORK AND LIMITATIONS
Hypothesis 1
1. Participants’ perception of their own body weight will change, with two
possible outcomes (Li et al., 1991):
1.1 Participants will perceive their own body weight as greater when the low
frequency components of their footstep sounds increase.
1.2 Participants will perceive their own body weight as smaller when the
high frequency components of their footstep sounds increase.
may indicate that the manipulations we performed were quite crude and the
sound output did not feel self-produced, or that participants were too aware of
these manipulations and so were not influenced. However, a trend revealed that
participants felt heavier in the LF than in the HF condition.
Looking at the second self-report measure, the results show a degree of similarity
with the Body Visualization one. The findings suggest that participants felt
lighter in the HF than in the LF condition. A trend also showed that participants
felt lighter in the HF condition than in the NF condition. Similarly to the first
measure, there were no changes in perceived body weight between the LF and
NF conditions, confirming our speculation that the LF manipulations were crude.
Another interesting finding revealed by both measures is that participants felt
heavier in the NF condition. This may indicate that our reference condition (NF)
was not appropriately chosen. While the acoustic feedback provided during the
NF condition was not manipulated, it cannot be considered veridical since it was
amplified and rendered through headphones. The results indicate that the
amplification alone affected perceived weight, and its effect was more evident
than that of the LF manipulation. This is not surprising, since past studies
(Zampini and Spence, 2005; Guest et al., 2002; Jousmäki and Hari, 1998) have
demonstrated that overall sound amplification leads to changes in perceived
sound source characteristics.
Hypothesis 2
2. Participants’ motor behaviour will be affected (“Sound and Motor
Behaviour” section) by the provision of manipulated footstep sounds:
2.1 Participants will decrease their speed (longer time interval between heel-strike
and toe-off events) when we increase the low frequency components of the
footstep sounds. It is also expected that both the pressure applied on the ground
and the time of contact with the ground will increase. Additionally, the
acceleration while lifting the foot to move forward will decrease (point-light
walker application, Troje, 2008).
2.2 Participants will increase their speed (shorter time interval between heel-strike
and toe-off events) when we increase the high frequency components of the
footstep sounds. It is also expected that both the pressure applied on the ground
and the time of contact with the ground will decrease. Additionally, the
acceleration while lifting the foot to move forward will increase (point-light
walker application, Troje, 2008).
Participants’ speed was measured by calculating the time interval between the
left foot’s heel-strike and toe-off events. The results showed that the acoustic
feedback did not significantly change participants’ speed; thus this part of the
hypothesis is disconfirmed. These results are not consistent with previous
research demonstrating effects of auditory feedback on participants’ speed/pace
(Menzer et al., 2010; Bresin et al., 2010). However, possible changes might have
been revealed if we had had time to analyse further variables for detecting
changes in speed (e.g. the time interval between two successive heel strikes).
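The heel-strike/toe-off interval used as the speed measure can be extracted from a foot-pressure trace with simple threshold crossings; a minimal sketch, assuming an already-calibrated single-sensor trace with a hypothetical threshold and 100 Hz sampling:

```python
def stance_intervals(pressure, timestamps, threshold=0.2):
    """Return (heel_strike_t, toe_off_t) pairs from one foot's pressure trace.

    A heel strike is a rising crossing of the threshold; the toe-off is the
    following falling crossing. Threshold and units are hypothetical.
    """
    intervals = []
    strike_t = None
    for prev, cur, t in zip(pressure, pressure[1:], timestamps[1:]):
        if prev < threshold <= cur:          # rising edge: heel strike
            strike_t = t
        elif prev >= threshold > cur and strike_t is not None:  # falling edge: toe-off
            intervals.append((strike_t, t))
            strike_t = None
    return intervals

# Example: two steps sampled at 100 Hz (0.01 s per sample)
pressure = [0.0, 0.1, 0.6, 0.8, 0.5, 0.1, 0.0, 0.7, 0.9, 0.3, 0.1]
ts = [i * 0.01 for i in range(len(pressure))]
steps = stance_intervals(pressure, ts)
stance_times = [toe - heel for heel, toe in steps]
```

The same crossings, taken between two successive heel strikes, would give the stride-time variable suggested above.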
Regarding the pressure applied on the ground, while there were no sound effects
on the heel pressure, the findings reveal that the pressure applied by the ball of
the foot was higher in the LF than in the HF condition, confirming this part of
the hypothesis. It is not surprising that the pressure changes were detected in the
area of the ball of the foot, since the relationship between these two factors is
well established in past research. More specifically, Zhu et al. (1991, as cited in
Titianova et al., 2004) state that the heel-off event is followed by a phase where
the entire body weight is borne on the ball of the foot and toe area.
Since participants’ auditorily perceived weight, as measured with the body
visualization application and the questionnaire, did not significantly change in
the LF condition, these results may indicate an alteration in their unconscious
“body-image” (De Vignemont, 2010) which led them to increase the pressure
exerted on the ground. On the other hand, for the HF condition, changes in
participants’ consciously perceived body weight are consistent with the applied
pressure. The fact that the pressure changes in HF and LF are not significant
with respect to the NF condition again underlines our concerns about the nature
of the sound furnished during the reference condition (NF).
Regarding the time of contact with the ground, no significant effects of sound
were found. However, there is a trend indicating that participants kept their heel
longer on the ground in the LF than in the HF condition. Another trend indicates
that heel contact time was longer in LF than in NF. These results partially
confirm the hypothesis.
Overall, Hypothesis 2 was partially confirmed, since the pressure applied on the
ground, the time of foot contact and the acceleration of the foot changed
according to our assumptions; however, no changes were detected in
participants’ speed. These results extend previous research suggesting that
walking-related sounds (ground texture sounds) affect the walking pattern of
people with specific emotional intentions (Bresin et al., 2010) by demonstrating
that auditorily perceived weight cues in walking sounds can also affect walking
pattern. While we manipulated body weight related auditory cues, there is a
possibility that people perceived the sound output as changes in the ground or
shoe materials, especially when we performed high frequency manipulations.
This assumption is based on past studies demonstrating that overall
amplification or high frequency manipulations alter the perceived sound source
characteristics (Zampini and Spence, 2005; Guest et al., 2002; Jousmäki and
Hari, 1998).
Hypothesis 3
3. Participants’ emotional experience will be affected by the provision of
the manipulated footstep sounds, with two possible outcomes (Tonetto et
al., 2014; Tajadura-Jiménez et al., 2008):
3.1 Participants will feel more negative/unaroused/submissive when we
increase the low frequency components of the footstep sounds.
3.2 Participants will feel more positive/aroused/dominant when we increase
the high frequency components of the footstep sounds.
felt more dominant when presented with more acute sounds (elicited when
people walk on ceramic floors in leather shoes) than with quieter sounds
(elicited when walking on carpet in leather shoes). The fact that participants
felt less dominant in the LF condition may be attributed to the nature of the low
frequency feedback: when the low frequency bands were increased, the sound
produced was heavy and resounding. This may have led participants to feel unfit
in a body that produces this type of sound.
Hypothesis 5
5. Participants’ perception of their strength will be affected by the provision
of manipulated footstep sounds (Furfaro et al., 2013) with two possible
outcomes:
5.1 Participants will feel stronger, as they will perceive their body as heavier,
when we increase the low frequency components of the footstep sounds.
Accordingly, they will feel weaker when we increase the high frequency
components.
5.2 Participants will feel weaker, as they will feel unfit in a body that is
perceived as heavier, when we increase the low frequency components of
the footstep sounds. Accordingly, they will feel stronger when we
increase the high frequency components.
Hypothesis 6
6. Considering the relationship between body weight and walking posture
(point-light walker, Troje, 2008), we believe that participants’ perception
of their walking posture (straight vs stooped/hunched) will be affected by
the provision of manipulated footstep sounds (Tajadura-Jiménez et al.,
2012) with two possible outcomes:
6.1 Participants will perceive their posture as stooped when the low
frequency components of the footstep sounds increase.
6.2 Participants will perceive their posture as straight when the high
frequency components of the footstep sounds increase.
Effects of sound on the perceived walking posture were explored with a
stooped/hunched–straight 7-point Likert scale. The results seem to partially
confirm our hypothesis. More specifically, participants perceived that their
walking posture was straighter in the second repetition of HF (HF2) than in the
second repetition of NF (NF2). These findings agree with Tajadura-Jiménez et
al.’s (2012) research regarding the effects of sound on proprioception.
Additionally, they are consistent with the observations we made in Troje’s
(2008) point-light walker application. However, the fact that the effects were
evident only in the second repetition of the sound conditions may indicate that
changes in perceived posture emerge only after longer exposure to the sound
feedback.
Hypothesis 7
during the NF and LF conditions, participants’ proprioception was influenced by
the provision of footstep sounds, in accordance with previous research
(Tajadura-Jiménez et al., 2012). Additionally, we could assume that since
participants felt more dominant/in control in the HF condition, they were more
confident in localizing their body parts.
Participants’ ability to identify that they were producing the footstep sounds was
also assessed. The results showed no significant effects of sound. The mean
values indicate that participants mostly identified the sounds as produced by
themselves. These results are not consistent with Menzer et al.’s (2010) study:
while they found that delaying footstep sounds changed participants’ feeling of
producing the sounds, our manipulations, intended to evoke the sensation that
the sounds were produced by a body of different weight, did not affect
participants’ sense of agency over their body.
5.2 Applications in HCI and Design recommendations
Applications in HCI
In Chapter 2 the potential of footstep auditory feedback in various fields (VR,
games, rehabilitation and fitness) was discussed, drawing on past research. In
light of the findings mentioned above, possible applications of the results will be
presented. The study findings demonstrate that the provision of manipulated
footstep sounds can influence the perception of one’s own body weight, motor
behaviour and emotions.
More specifically, our study suggests that the provision of footstep sounds with
increased high frequencies results in people feeling lighter, quicker, stronger,
more aroused, happier and more in control. Our findings also revealed that
acceleration when lifting the foot increases, which can be related to increased
performance. These effects, along with the extensive use of wearable audio
devices and the integration into smartphones of multiple sensors which record
body activity, suggest that a refined and compact version of our prototype could
be used to develop fitness and rehabilitation applications. For instance, a system
comprising a pair of compact microphones, wireless headphones and a compact
equalizer could be connected to a smartphone application and used for walking,
running or physical rehabilitation applications that increase users’ performance,
create a more pleasurable running experience and provide motivation.
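The core signal manipulation behind such a system (selectively amplifying a frequency band of the footstep sound, e.g. 1-4 kHz for HF or 63-250 Hz for LF) can be sketched offline with a DFT-based band gain. This is an illustrative reconstruction, not the hardware equalizer used in the study; the sample rate, band edges and gain are assumptions:

```python
import cmath, math

def boost_band(samples, sample_rate, f_lo, f_hi, gain):
    """Scale the magnitude of all spectral bins between f_lo and f_hi by `gain`."""
    n = len(samples)
    # naive DFT for clarity; a real-time app would use an FFT or shelving filters
    spectrum = [sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)) for k in range(n)]
    for k in range(n):
        freq = k * sample_rate / n
        # boost the band and its mirrored negative-frequency bins to keep the output real
        if f_lo <= freq <= f_hi or f_lo <= sample_rate - freq <= f_hi:
            spectrum[k] *= gain
    # inverse DFT; imaginary parts are numerically negligible for a real input
    return [sum(spectrum[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

# Example: a 2 kHz tone at an assumed 8 kHz sample rate, doubled in the 1-4 kHz band
x = [math.sin(2 * math.pi * 2000 * t / 8000) for t in range(64)]
y = boost_band(x, 8000, 1000.0, 4000.0, 2.0)
```

A wearable version would replace the block transform with low-latency shelving filters running continuously on the microphone stream.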
Design recommendations
The effects of each of the three auditory feedback conditions on auditorily
perceived weight, motor behaviour and emotions are summarized in Table 14
below. They can be used as recommendations for the design of systems and
applications which aim to use footstep auditory feedback to change users’
perception of their own body weight, alter their motor behaviour or trigger
emotional responses.
HF (high frequency components increased, low decreased):
- Users will feel lighter when presented with HF footstep sounds than with NF or LF footstep sounds.
- Users will apply less pressure and keep their feet on the ground for a shorter time when presented with HF than with LF footstep sounds. Foot acceleration will be higher when presented with HF than with LF footstep sounds.
- Users will feel more aroused, more pleased and more in control when presented with HF than with NF footstep sounds. Pleasure and the feeling of control will also be increased when presented with HF than with LF footstep sounds.
- Users will feel stronger when presented with HF than with NF and LF footstep sounds.
- Users will feel that they walk more upright when presented with HF than with NF footstep sounds.
- Users will feel quicker when presented with HF than with NF and LF footstep sounds.
LF (low frequency components increased, high decreased):
- Users will feel heavier when presented with LF footstep sounds than with HF ones.
- Users will apply more pressure and keep their feet on the ground for a longer time when presented with LF than with HF footstep sounds. Users’ foot acceleration will be lower when presented with LF than with HF footstep sounds.
- Users will feel less pleasure and less in control when presented with LF than with HF footstep sounds.
- Users will feel weaker when presented with LF than with HF footstep sounds.
- Users will feel slower when presented with LF than with HF footstep sounds.
Table 14: Summary of findings
5.3 Limitations and Future Research
The main limitations of the present study concern the equipment and the
experimental setup. Our initial goal was to develop a fully portable wireless
system. While this was achieved for most of the equipment, the equalizer could
not be converted to a wireless device due to technical complications. As a result,
a long cable ran from the backpack to connect the device to the power outlet.
The contact of this cable with the ground introduced background noise.
Additionally, this cable, along with the cables connecting the shoes to the
Arduino board, may have influenced participants’ walking pattern, even though
participants reported no discomfort when asked. Furthermore, due to the weather
conditions (high temperature) during data collection, most participants’ skin was
sweaty, which caused difficulties in the setup of the wireless EMG
sensors/accelerometer. Overall, the unnatural experimental setting may have
introduced bias into the data collected. Finally, time limitations prevented us
from analysing the data gathered from both lower limbs, the EMG of the lower
limbs, and the body part inclinations (Delsys sensors).
components of the footstep sounds should also be investigated. Additionally,
open-ended questions could be introduced to further investigate the sensations
evoked by the provision of the manipulated footstep sounds. Finally, a more
refined and compact version of our prototype could be used to further explore
the effects of self-produced manipulated footstep sounds on perceived weight,
motor behaviour and emotions in the wild.
6 Conclusion
The main aim of this study was to investigate whether the provision of
manipulated footstep sounds affects the perception of one’s body weight, motor
behaviour and emotions.
REFERENCES
15. Field, A. P. (2009). Discovering statistics using SPSS (and sex and drugs
and rock ‘n’ roll).
16. Freed, D. J. (1990). Auditory correlates of perceived mallet hardness for
a set of recorded percussive sound events. The Journal of the Acoustical
Society of America, 87(1), 311-322.
17. Furfaro, E., Bevilacqua, F., & Tajadura Jimenez, A. (2013). Sonification
of surface tapping: Influences on behaviour, emotion and surface
perception. Interactive Sonification Workshop.
18. Gaver, W. W. (1993). What in the world do we hear?: An ecological
approach to auditory event perception. Ecological psychology, 5(1), 1-29.
19. Giordano, B. L., Visell, Y., Yao, H. Y., Hayward, V., Cooperstock, J. R.,
& McAdams, S. (2012). Identification of walked-upon materials in
auditory, kinesthetic, haptic, and audio-haptic conditions. The Journal
of the Acoustical Society of America, 131(5), 4002-4012.
20. Giordano, B., & Bresin, R. (2006). Walking and playing: What’s the
origin of emotional expressiveness in music. In Proc. Int. Conf. Music
Perception and Cognition.
21. Guest, S., Catmur, C., Lloyd, D., & Spence, C. (2002). Audiotactile
interactions in roughness perception. Experimental Brain
Research, 146(2), 161-171.
22. Haggard, P., Christakou, A., & Serino, A. (2007). Viewing the body
modulates tactile receptive fields. Experimental Brain Research, 180(1),
187-193.
23. http://bodyvisualizer.com/
24. Jaffe, D. L., Brown, D. A., Pierson-Carey, C. D., Buckley, E. L., & Lew,
H. L. (2004). Stepping over obstacles to improve walking in individuals
with poststroke hemiplegia. Journal of Rehabilitation Research and
Development, 41(3), 283-292.
25. Jenkins, J. J. (1985). Acoustic information for objects, places, and
events. Persistence and Change, 115-138.
26. Jousmäki, V., & Hari, R. (1998). Parchment-skin illusion: sound-biased
touch. Current Biology, 8(6), R190-R191.
27. Kitagawa, N., & Spence, C. (2006). Audiotactile multisensory
interactions in human information processing. Japanese Psychological
Research, 48(3), 158-173.
28. Lang, P. J. (1995). The emotion probe: Studies of motivation and
attention. American Psychologist, 50(5), 372.
29. Lederman, S. J. (1979). Auditory texture perception. Perception, 8(1),
93-103.
30. Lederman, S. J., Klatzky, R. L., Morgan, T., & Hamilton, C. (2002).
Integrating multimodal information about surface texture via a probe:
relative contributions of haptic and touch-produced sound sources.
In Haptic Interfaces for Virtual Environment and Teleoperator Systems,
2002. HAPTICS 2002. Proceedings. 10th Symposium on (pp. 97-104).
IEEE.
31. Leman, M., Moelants, D., Varewyck, M., Styns, F., van Noorden, L., &
Martens, J. P. (2013). Activating and relaxing music entrains the speed of
beat synchronized walking. PloS one, 8(7), e67932.
32. Li, X., Logan, R. J., & Pastore, R. E. (1991). Perception of acoustic
source characteristics: Walking sounds. The Journal of the Acoustical
Society of America, 90(6), 3036-3049.
33. Lunn, D., & Harper, S. (2010). Using galvanic skin response measures to
identify areas of frustration for older web 2.0 users. In Proceedings of the
2010 International Cross Disciplinary Conference on Web Accessibility
(W4A) (p. 34). ACM.
34. Menzer, F., Brooks, A., Halje, P., Faller, C., Vetterli, M., & Blanke, O.
(2010). Feeling in control of your footsteps: Conscious gait monitoring
and the auditory consequences of footsteps. Cognitive neuroscience, 1(3),
184-192.
35. Moens, B., van Noorden, L., & Leman, M. (2010). D-jogger: Syncing
music with walking. In 7th Sound and Music Computing Conference (pp.
451-456). Universidad Pompeu Fabra.
36. Montagu, J. D., & Coles, E. M. (1966). Mechanism and measurement of
the galvanic skin response. Psychological Bulletin, 65(5), 261.
37. Nordahl, R. (2006). Increasing the motion of users in photo-realistic
virtual environments by utilising auditory rendering of the environment
and ego-motion. In PRESENCE 2006: The 8th Annual International
Workshop on Presence (pp. 57-63).
38. Norman, D. A. (2002). The design of everyday things. Basic books.
39. Pastore, R. E., Flint, J. D., Gaston, J. R., & Solomon, M. J. (2008).
Auditory event perception: The source—perception loop for posture in
human gait. Perception & psychophysics, 70(1), 13-29.
40. Perala, C. H., & Sterling, B. S. (2007). Galvanic skin response as a
measure of soldier stress (No. ARL-TR-4114). Army Research
Laboratory, Aberdeen Proving Ground, MD: Human Research and
Engineering Directorate.
41. Perry, J. (1992). Gait analysis. Normal and pathological function, 1.
42. Point-light walker application:
http://biomotionlab.ca/Demos/BMLwalker.html
43. Repp, B. H. (1987). The sound of two hands clapping: An exploratory
study. The Journal of the Acoustical Society of America, 81(4), 1100-
1109.
44. Rosati, G., Oscari, F., Reinkensmeyer, D. J., Secoli, R., Avanzini, F.,
Spagnol, S., & Masiero, S. (2011, June). Improving robotics for
neurorehabilitation: enhancing engagement, performance, and learning
with auditory feedback. In Rehabilitation Robotics (ICORR), 2011 IEEE
International Conference on (pp. 1-6). IEEE.
45. Senna, I., Maravita, A., Bolognini, N., & Parise, C. V. (2014). The
Marble-Hand Illusion. PloS one, 9(3), e91688.
46. Serafin, S., Franinovic, K., Hermann, T., Lemaitre, G., Rinott, M.,
Rocchesso, D. (2011). Sonic interaction design. In T. Hermann, A. Hunt,
J. G. Neuhoff (Eds.), The sonification handbook (pp. 87-111). Berlin:
Logos Publishing House.
47. Singh, A., Klapper, A., Jia, J., Fidalgo, A., Tajadura-Jiménez, A.,
Kanakam, N., ... & Williams, A. (2014, April). Motivating people with
chronic pain to do physical activity: opportunities for technology design.
In Proceedings of the 32nd annual ACM conference on Human factors in
computing systems (pp. 2803-2812). ACM.
48. Styns, F., van Noorden, L., Moelants, D., & Leman, M. (2007). Walking
on music. Human movement science, 26(5), 769-785.
49. Suzuki, Y., Gyoba, J., & Sakamoto, S. (2008). Selective effects of
auditory stimuli on tactile roughness perception. Brain research, 1242,
87-94.
50. Tajadura-Jiménez, A., Väljamäe, A., & Västfjäll, D. (2008). Self-
representation in mediated environments: the experience of emotions
modulated by auditory-vibrotactile heartbeat. CyberPsychology &
Behavior, 11(1), 33-38.
51. Tajadura-Jiménez, A., Väljamäe, A., Toshima, I., Kimura, T., Tsakiris,
M., & Kitagawa, N. (2012). Action sounds recalibrate perceived tactile
distance. Current Biology, 22(13), R516-R517.
52. Titianova, E. B., Mateev, P. S., & Tarkka, I. M. (2004). Footprint
analysis of gait using a pressure sensor system. Journal of
Electromyography and Kinesiology, 14(2), 275-281.
53. Tonetto, P. L. M., Klanovicz, C. P., & Spence, P. C. (2014). Modifying
action sounds influences people’s emotional responses and bodily
sensations. i-Perception, 5(3), 153-163.
54. Troje, N. F. (2008). Retrieving information from human movement
patterns. Understanding Events: How Humans See, Represent, and Act
on Events, 1, 308-334.
55. Turchet, L., Nordahl, R., Serafin, S., Berrezag, A., Dimitrov, S., &
Hayward, V. (2010). Audio-haptic physically-based simulation of
walking on different grounds. In Multimedia Signal Processing (MMSP),
2010 IEEE International Workshop on (pp. 269-273). IEEE.
56. Väljamäe, A., Larsson, P., Västfjäll, D., & Kleiner, M. (2008). Sound
representing self-motion in virtual environments enhances linear
vection. Presence, 17(1), 43-56.
57. Vaughan, C. L., Davis, B. L., & O'Connor, J. C. (1992). Dynamics of
Human Gait (pp. 15-43). Champaign, Illinois: Human Kinetics Publishers.
58. Visell, Y., Fontana, F., Giordano, B. L., Nordahl, R., Serafin, S., &
Bresin, R. (2009). Sound design and perception in walking
interactions. International Journal of Human-Computer Studies, 67(11),
947-959.
59. Vlaeyen, J. W., & Linton, S. J. (2000). Fear-avoidance and its
consequences in chronic musculoskeletal pain: a state of the
art. Pain, 85(3), 317-332.
60. Warren, W. H., & Verbrugge, R. R. (1984). Auditory perception of
breaking and bouncing events: a case study in ecological
acoustics. Journal of Experimental Psychology: Human perception and
performance, 10(5), 704.
61. Zampini, M., & Spence, C. (2004). The role of auditory cues in
modulating the perceived crispness and staleness of potato chips. Journal
of Sensory Studies, 19(5), 347-363.
62. Zanotto, D., Rosati, G., Spagnol, S., Stegall, P., & Agrawal, S. K. (2013).
Effects of complementary auditory feedback in robot-assisted lower
extremity motor adaptation. Neural Systems and Rehabilitation
Engineering, IEEE Transactions on, 21(5), 775-786.
APPENDICES
>> http://processing.org/exhibition/features/igoe/
Reads analog inputs and sends them as a series of strings separated with commas,
>> http://www.kobakant.at/DIY/?cat=347
*/
/*
Nathan Seidle
SparkFun Electronics
November 5, 2012
License: This code is public domain but you buy me a beer if you use this and we meet someday
(Beerware license).
This example code shows how to read the X/Y/Z accelerations and basic functions of the
MMA8452. It leaves out all the neat features this IC is capable of (tap, orientation, and
interrupts) and just displays X/Y/Z.
See
Hardware setup:
SDA -------^^(330)^^------- A4
SCL -------^^(330)^^------- A5
The MMA8452 is 3.3V so we recommend using 330 or 1k resistors between a 5V Arduino and the MMA8452 breakout.
The MMA8452 has built in pull-up resistors for I2C so you do not need additional pull-ups.
*/
// The SparkFun breakout board defaults to 1, set to 0 if SA0 jumper on the bottom of the board is set
#define GSCALE 2 // Sets full-scale range to +/-2, 4, or 8g. Used to calc real g values.
int numberOfSensors = 4;
14, 15, 16, 17, 18, 19}; // digital names for analog input pins 0-5
void setup() {
for(int i = 0; i < numberOfSensors; i++) // set analog pins as inputs, although this is also the default
pinMode(sensorPins[i], INPUT);
for(int i = 0; i < numberOfSensors; i++) // set internal pullup resistors for all analog pins in use
digitalWrite(pullupPins[i], HIGH);
Serial.begin(57600);
void loop() {
if (count < numberOfSensors - 1) Serial.print(","); // if this isn't the last sensor to read, print a comma after it
Serial.println(); // after all the sensors have been read print a newline and carriage return
byte rawData[6]; // x/y/z accel register data stored here
readRegisters(OUT_X_MSB, 6, rawData); // Read the six raw data registers into data array
int gCount = (rawData[i*2] << 8) | rawData[(i*2)+1]; // combine the two 8-bit registers into one 12-bit number
gCount >>= 4; //The registers are left align, here we right align the 12-bit integer
// If the number is negative, we have to make it so manually (no 12-bit data type)
gCount = ~gCount + 1;
// See the many application notes for more info on setting all of these registers:
// http://www.freescale.com/webapp/sps/site/prod_summary.jsp?code=MMA8452Q
void initMMA8452()
Serial.println("MMA8452Q is online...");
else
Serial.print("Could not connect to MMA8452Q: 0x");
Serial.println(c, HEX);
writeRegister(XYZ_DATA_CFG, fsr);
//The default data rate is 800Hz and we don't modify it in this example code
// Sets the MMA8452 to standby mode. It must be in standby to change most register settings
void MMA8452Standby()
byte c = readRegister(CTRL_REG1);
// Sets the MMA8452 to active mode. Needs to be in this mode to output data
void MMA8452Active()
byte c = readRegister(CTRL_REG1);
// Read bytesToRead sequentially, starting at addressToRead into the dest byte array
Wire.beginTransmission(MMA8452_ADDRESS);
Wire.write(addressToRead);
while(Wire.available() < bytesToRead); //Hang out until we get the # of bytes we expect
dest[x] = Wire.read();
Wire.beginTransmission(MMA8452_ADDRESS);
Wire.write(addressToRead);
Wire.beginTransmission(MMA8452_ADDRESS);
Wire.write(addressToWrite);
Wire.write(dataToWrite);
APPENDIX B - PROCESSING SCRIPT
/*
by Tom Igoe
Language: Processing
*/
import processing.serial.*;
PrintWriter output;
int lineThickness = 2;
int startTime = 0;
int newTimer = 0;
boolean startRecording = false;
//Time of recording
void setup () {
output = createWriter("Maria.txt");
palette=new color[10];
palette[0]=color(250,0,0);
palette[1]=color(138,155,15);
palette[2]=color(48,139,206);
palette[3]=color(252,252,38);
palette[4]=color(255,255,255);
palette[5]=color(232,16,175);
palette[6]=color(247,134,12);
palette[7]=color(255,255,255);
palette[8]=color(255,255,255);
palette[9]=color(255,255,255);
size(1024, 697);
println(Serial.list());
String portName = Serial.list()[2];
myPort.clear();
myPort.bufferUntil('\n');
//println(fontList);
textFont(myFont);
fontInitialized = true;
background(0);
// turn on antialiasing:
smooth();
void draw () {
if (inString != null) {
inString = trim(inString);
//println(inString);
if(startRecording){
output.println(inString);
noStroke();
fill(0);
// print the sensor numbers to the screen:
fill(palette[i]);
// use text():
if (fontInitialized) {
stroke(127);
stroke(palette[i]);
strokeWeight(lineThickness);
strokeWeight(1);
previousValue[i] = ypos;
// if you've drawn to the edge of the window, start at the beginning again:
xpos = 0;
background(0);
else {
xpos++;
if(startRecording){
//executes the exit function when the timer gets to the end
goExit();
void goExit() {
output.println("Date End: " + day() + " " + hour() + ":" + minute() + ":" + second());
void keyPressed(){
output.println("Date: " + day() + " " + hour() + ":" + minute() + ":" + second());
startRecording = true;
// millis() tells you how much time the program has been running, so we need to subtract it
// when we start the program
startTime = millis();
APPENDIX C - QUESTIONNAIRES
Participant No:
Questionnaire 1
Male
Female
18 – 24
25 – 34
35 – 44
45 – 54
55 – 64
65 years or older
3. What is your weight at present? (Please give your best estimate and circle the
unit of measurement).
………………. pounds / Kg
4. What is your height? (Please give your best estimate and circle the unit of
measurement).
………………. inches / cm
Yes
No
………………………………………………………………………………………
………………………………………………………………………………………
………………………………………………………………………………………
………………………………………………………………………………………
………………………………………………
7. Do you exercise?
Yes
No
8. If yes how many hours per week do you exercise? (e.g. 5 hours)
……………………………...........................................................................................
................
Yes
No
………………………………………………………………………………………
………………………………………………………………………………………
………………………………………………………………………………………
………………………………………………………………………………………
………………………………………………
Condition:
Participant No:
Spanner
Now please think about the experience you have just had. You will have to answer a few
questions about it.
1. What was the approximate weight of the spanner you carried? (Please give
your best estimate and circle the unit of measurement).
2. What was the approximate length of the spanner you carried? (Please give
your best estimate and circle the unit of measurement).
………………………….. inches/ cm
Right
Left
Condition:
Participant No:
Lifting
If you were asked to lift objects of different weights RIGHT NOW, how certain are you
that you can lift each of the weights described below?
Rate your degree of confidence by recording a number from 0 to 100 using the scale given
below. You can write any number from 0 to 100 (e.g. 42).
0 10 20 30 40 50 60 70 80 90 100
Condition:
Participant No:
Questionnaire 2.1
Now please circle the number you think that best expresses your level of agreement with
the sentences below.
1 2 3 4 5 6 7
Slow Quick
1 2 3 4 5 6 7
Light Heavy
1 2 3 4 5 6 7
Weak Strong
1 2 3 4 5 6 7
Stooped, Straight
Hunched
During the experience I felt the sounds I heard were produced by my own footsteps/
body.
1 2 3 4 5 6 7
I strongly Neither agree I strongly
disagree nor disagree agree
During the experience the feeling of my body was less vivid than normal.
1 2 3 4 5 6 7
I strongly Neither agree I strongly
disagree nor disagree agree
During the experience the feelings about my body were surprising and unexpected.
1 2 3 4 5 6 7
I strongly disagree        Neither agree nor disagree        I strongly agree
I could really tell where my feet were.
1 2 3 4 5 6 7
I strongly disagree        Neither agree nor disagree        I strongly agree
Condition:
Participant No:
Questionnaire 2.2
You will have to rate the way you felt by selecting a figure from each of 3 different scales.
Each scale shows a different kind of feeling: happy/positive vs unhappy/negative,
aroused/excited vs unaroused/calm, and dominant/important vs submissive/awed (slightly
frightened).
Participant No:
Questionnaire 3
The following questions are concerned with the past four weeks (28 days) only. Please read
each question carefully. Please answer all the questions. Thank you.
Please circle the appropriate number on the right. Remember that the questions only refer to
the past four weeks (28 days).
…influenced how you think about (judge) yourself as a person?  0 1 2 3 4 5 6

How much would it have upset you if you had been asked to weigh yourself once a
week (no more, or less often) for the next four weeks?  0 1 2 3 4 5 6

How dissatisfied have you been with your weight?  0 1 2 3 4 5 6

How dissatisfied have you been with your shape?  0 1 2 3 4 5 6

How uncomfortable have you felt seeing your body (for example seeing your shape
in the mirror, in a shop window reflection, while undressing or taking a bath or
shower)?  0 1 2 3 4 5 6

How uncomfortable have you felt about others seeing your shape or figure (for
example in communal changing rooms, when swimming or wearing tight clothes)?  0 1 2 3 4 5 6
APPENDIX D – CONSENT FORM
5. If I have questions about the research project or procedures, I know I can contact Dr Ana Tajadura-Jimenez,
supervisor and research fellow at the University College London Interaction Centre; email:
[email protected]
6. This study has been approved by the UCL Research Ethics Committee as Project ID Number Staff/1213/003
(Project: “The Hearing Body”).
For the following please circle “Yes” or “No” and initial each point.
_____ I agree for the videotape and photographs to be used by the researchers in further research studies YES / NO
_____ I agree for the videotape and photographs to be used by the researchers to demonstrate the technology YES / NO
_____ I agree for the videotape and photographs to be used by the researchers for teaching, conferences, presentations,
publications and/or thesis work YES / NO
_____ I agree to be contacted in the future by UCL researchers who would like to invite me to participate in follow-up
studies YES / NO
_____ I understand that my participation will be taped/video recorded/photographed and I am aware of and consent to the
use of the recordings/photographs YES / NO
Address _______________________________________________________________________
APPENDIX E – MATLAB SCRIPTS FOR THE EXTRACTION OF
THE PRESSURE SENSOR AND ACCELEROMETER DATA
% Split the logged data matrix into the four pressure-sensor columns
% and the three accelerometer-axis columns
sensor1=data(:,1);
sensor2=data(:,2);
sensor3=data(:,3);
sensor4=data(:,4);
acc1=data(:,5);
acc2=data(:,6);
acc3=data(:,7);

% Detect acceleration peaks on each axis (baseline offset, window size).
% The original listing assigned all three calls to the same variable
% RESULT_ACC; separate variables are used here so all three axes are kept.
New_result_acc1 = read_acc(acc1,-130,20);
New_result_acc2 = read_acc(acc2,760,20);
New_result_acc3 = read_acc(acc3,1030,20);
return % stop here to check the graphs before setting the indices below

% Start/end sample indices for each stream, chosen after visual inspection
start1=1303; last1=1822;
start2=1330;              % (last2 is missing from the original listing)
start3=1280; last3=1775;
start_acc=1353; last_acc=1822;
dt12=start2-start1;       % time offsets between the sensor streams
dt23=start2-start3;

% For each pressure sensor: detect steps with read_sensor, then collect the
% [time, value] pairs. The loop bodies below are reconstructed from the one
% surviving complete loop (acc3); only the read_sensor call for sensor3
% survives, so the threshold (300) and window (20) for the other sensors,
% and the index range for sensor4, are assumed.
New_result = read_sensor( sensor1(start1:last1), 300, 20);
final_sensor1= [];
for i=1 : length(New_result)
    t=New_result(i,1);
    v=New_result(i,2);
    final_sensor1=[final_sensor1;[t, v]];
end

New_result = read_sensor( sensor2(start2:last2), 300, 20);
final_sensor2= [];
for i=1 : length(New_result)
    t=New_result(i,1);
    v=New_result(i,2);
    final_sensor2=[final_sensor2;[t, v]];
end

New_result = read_sensor( sensor3(start3:last3), 300, 20);
final_sensor3= [];
for i=1 : length(New_result)
    t=New_result(i,1);
    v=New_result(i,2);
    final_sensor3=[final_sensor3;[t, v]];
end

New_result = read_sensor( sensor4(start3:last3), 300, 20);
final_sensor4= [];
for i=1 : length(New_result)
    t=New_result(i,1);
    v=New_result(i,2);
    final_sensor4=[final_sensor4;[t, v]];
end

% For each accelerometer peak, extract the preceding minimum
% (the analogous loops for acc1/acc2 are omitted in the original listing)
final_acc3=[];
for i=1:length(New_result_acc3)
    t=New_result_acc3(i,1);
    v=New_result_acc3(i,2);
    [t_min, v_min]=min_acc_extraction(t,v,acc3);
    final_acc3=[final_acc3;[t_min, v_min]];
end

% Combine the per-axis minima, per-axis peaks, and per-sensor step data
min_acc_all_axis_array= [final_acc1 final_acc2 final_acc3];
max_acc_all_axis_array= [New_result_acc1 New_result_acc2 New_result_acc3];
all_sensor_array= [final_sensor1 final_sensor2 final_sensor3 final_sensor4];
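The `read_sensor` and `read_acc` helpers are not included in the listing above. As an illustration only, the following hypothetical Python sketch shows what a step-extraction helper of this kind might do: find threshold crossings and keep one [sample index, peak value] pair per window. The function name, threshold, and window semantics are assumptions, not the thesis's actual implementation.

```python
# Hypothetical sketch of a read_sensor-style helper: one [index, value]
# pair per above-threshold window, mirroring the [t, v] rows collected
# in the MATLAB loops above.
import numpy as np

def read_sensor(signal, threshold, window):
    """Return an array of [sample_index, peak_value] rows, one per step."""
    signal = np.asarray(signal, dtype=float)
    rows = []
    i = 0
    while i < len(signal):
        if signal[i] > threshold:
            seg = signal[i:i + window]   # one footstep window
            j = int(np.argmax(seg))      # sample of the pressure peak
            rows.append([i + j, seg[j]])
            i += window                  # skip past this step
        else:
            i += 1
    return np.array(rows)
```

For example, a trace with pressure peaks near samples 11 and 50 yields exactly two rows, one per step.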
APPENDIX F – MATLAB SCRIPTS FOR THE EXTRACTION OF
THE GSR DATA
close all
markers=P01_markers_time;   % event markers: [march start, walk start, end] per block
GSR=data(:,7);              % raw galvanic skin response channel
% data=P01timestamp;
final_values=[];
N = length(GSR);
Fs=32;                      % GSR sampling rate in Hz
t = linspace(0, N/Fs, N);   % time vector in seconds
t=t';
for i=1:3:length(markers)-2
    % Find the sample indices of the three markers for this block
    march_time=find(data(:,1)==markers(i,2));
    walk_time=find(data(:,1)==markers(i+1,2));
    end_time=find(data(:,1)==markers(i+2,2));
    time_block=t(march_time:end_time)-t(march_time);
    figure
    block=[time_block GSR(march_time:end_time)];
    % Plot the raw block and its smoothed version
    plot(time_block,GSR(march_time:end_time),'c'), hold on
    GSR_filter=sgolayfilt(GSR(march_time:end_time),1,13);  % 1st-order Savitzky-Golay, 13-sample frame
    plot(time_block,GSR_filter, 'r');
    % Smooth each segment separately: 1 s pre-march baseline, marching, walking
    baseline=sgolayfilt(GSR(march_time-Fs*1:march_time),1,13);
    march=sgolayfilt(GSR(march_time:walk_time),1,13);
    walk=sgolayfilt(GSR(walk_time:end_time),1,13);
    avg_baseline=mean(baseline);
    avg_march=mean(march);
    avg_walk=mean(walk);
    v1=avg_march-avg_baseline;   % mean GSR change while marching
    v2=avg_walk-avg_baseline;    % mean GSR change while walking
    v3=max(march)-min(march);    % GSR range while marching
    v4=max(walk)-min(walk);      % GSR range while walking
    final_values=[final_values;[v1 v2 v3 v4]];
end
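The per-block feature extraction above can be sketched in Python as follows. This is illustrative only: `savgol_filter` is assumed to play the role of MATLAB's `sgolayfilt`, and the function name and its marker arguments (precomputed sample indices) are assumptions.

```python
# Python sketch of the GSR feature extraction above: smooth each segment
# with a 1st-order, 13-sample Savitzky-Golay filter, then compute
# baseline-relative means (v1, v2) and within-segment ranges (v3, v4).
import numpy as np
from scipy.signal import savgol_filter

FS = 32  # GSR sampling rate (Hz)

def gsr_features(gsr, march_i, walk_i, end_i, fs=FS):
    """Return (v1, v2, v3, v4) for one trial block.

    v1/v2: mean of the marching / walking segment minus the 1-s baseline mean;
    v3/v4: range (max - min) of the smoothed marching / walking segment.
    """
    gsr = np.asarray(gsr, dtype=float)
    smooth = lambda seg: savgol_filter(seg, window_length=13, polyorder=1)
    baseline = smooth(gsr[march_i - fs:march_i])  # 1 s before marching starts
    march = smooth(gsr[march_i:walk_i])
    walk = smooth(gsr[walk_i:end_i])
    v1 = march.mean() - baseline.mean()
    v2 = walk.mean() - baseline.mean()
    v3 = march.max() - march.min()
    v4 = walk.max() - walk.min()
    return v1, v2, v3, v4
```

On a piecewise-constant test signal the smoothing leaves each segment unchanged, so the baseline-relative means equal the step sizes and both ranges are zero.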
APPENDIX G
Questionnaires
Perceived body dimensions/weight – 3D Body Visualizing Application
Kolmogorov-Smirnov Shapiro-Wilk
Source df F Sig.
Huynh-Feldt 35.438
Lower-bound 21.000
Table 2: ANOVA for the detection of significant sound effects and interaction
            M         SE
NF_mean     57.5227   3.02523
LF_mean     56.7727   2.88351
HF_mean     54.6591   2.72005
Table 3: Means and Standard Errors for perceived body dimensions - non-transformed
values
                                              t        df    Sig. (2-tailed)
Pair 1   NF_R1_Body_mean - LF_R1_Body_mean    -.547    21    .590
Pair 2   NF_R1_Body_mean - HF_R1_Body_mean    -2.414   21    .025
Pair 3   LF_R1_Body_mean - HF_R1_Body_mean    -1.775   21    .090
Table 4: t-test for the detection of sound effects between specific conditions
                                                         Shapiro-Wilk
                                                         Statistic   df   Sig.
NF_thefeelingsaboutmybodyweresurprisingandunexpected     .923        22   .090
NF_Icouldreallytellwheremyfeetwere                       .907        22   .041
NF2_slow_quick                                           .882        22   .013
NF2_light_heavy                                          .913        22   .053
NF2_weak_strong                                          .904        22   .036
NF2_stoopedhatched_straight                              .892        22   .021
NF2_soundswereproducedbymyownfootsteps                   .745        22   .000
NF2_thefeelingofmybodywaslessvividthannormal             .895        22   .024
NF2_thefeelingsaboutmybodyweresurprisingandunexpected    .919        22   .072
NF2_Icouldreallytellwheremyfeetwere                      .917        22   .065
LF_slow_quick                                            .884        22   .015
LF_light_heavy                                           .935        22   .156
LF_weak_strong                                           .859        22   .005
LF_stoopedhatched_straight                               .887        22   .017
LF_soundswereproducedbymyownfootsteps                    .714        22   .000
LF_thefeelingofmybodywaslessvividthannormal              .850        22   .003
LF_thefeelingsaboutmybodyweresurprisingandunexpected     .937        22   .168
LF_Icouldreallytellwheremyfeetwere                       .882        22   .013
LF2_slow_quick                                           .923        22   .088
LF2_light_heavy                                          .946        22   .264
LF2_weak_strong                                          .876        22   .010
LF2_stoopedhatched_straight                              .934        22   .149
LF2_soundswereproducedbymyownfootsteps                   .696        22   .000
LF2_thefeelingofmybodywaslessvividthannormal             .899        22   .028
LF2_thefeelingsaboutmybodyweresurprisingandunexpected    .929        22   .115
LF2_Icouldreallytellwheremyfeetwere                      .872        22   .009
HF_slow_quick                                            .898        22   .027
HF_light_heavy                                           .938        22   .178
HF_weak_strong                                           .807        22   .001
HF_stoopedhatched_straight                               .932        22   .132
HF_soundswereproducedbymyownfootsteps                    .646        22   .000
HF_thefeelingofmybodywaslessvividthannormal              .899        22   .029
HF_thefeelingsaboutmybodyweresurprisingandunexpected     .932        22   .135
HF_Icouldreallytellwheremyfeetwere                       .848        22   .003
HF2_slow_quick                                           .890        22   .018
HF2_light_heavy                                          .937        22   .175
HF2_weak_strong                                          .870        22   .008
HF2_stoopedhatched_straight                              .843        22   .003
HF2_soundswereproducedbymyownfootsteps                   .592        22   .000
HF2_thefeelingofmybodywaslessvividthannormal             .853        22   .004
HF2_thefeelingsaboutmybodyweresurprisingandunexpected    .927        22   .106
HF2_Icouldreallytellwheremyfeetwere                      .849        22   .003
Table 5: Shapiro-Wilk test of normality for all the task experience questions
N 22
Chi-Square 8.361
df 2
Asymp. Sig. .015
Table 7: Friedman’s ANOVA for the detection of significant sound effects
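As an illustration of how the Friedman and follow-up Wilcoxon tests reported in these tables can be computed, the sketch below uses scipy. The ratings are made-up example data, not the study's measurements; the variable names `nf`, `lf`, `hf` simply stand in for the three sound conditions.

```python
# Illustrative only: Friedman omnibus test across three related samples,
# then a pairwise Wilcoxon signed-rank follow-up, as in the tables above.
from scipy.stats import friedmanchisquare, wilcoxon

# Made-up data: one value per participant for each sound condition
nf = [4, 5, 3, 4, 5, 4, 3, 5]
lf = [5, 6, 4, 5, 6, 5, 4, 6]
hf = [3, 4, 2, 3, 4, 3, 2, 4]

# Omnibus test across the three related samples (df = k - 1 = 2)
chi2, p = friedmanchisquare(nf, lf, hf)

# Pairwise follow-up between two conditions, as in the Wilcoxon tables
w_stat, p_pair = wilcoxon(nf, hf)

print(f"Chi-Square = {chi2:.3f}, df = 2, Asymp. Sig. = {p:.4f}")
print(f"Wilcoxon NF vs HF: statistic = {w_stat}, p = {p_pair:.4f}")
```

With these (perfectly consistent) made-up rankings the Friedman statistic is 16 with 2 degrees of freedom, so the omnibus test is highly significant and the pairwise follow-up rejects as well.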
Perceived weight – Light-Heavy scale
N 22
Chi-Square 4.200
df 2
Asymp. Sig. .122
Table 10: Friedman’s ANOVA
                                      Z          Asymp. Sig. (2-tailed)
NF2_weak_strong - NF_weak_strong      -.492b     .623
LF2_weak_strong - LF_weak_strong      -.731c     .465
HF2_weak_strong - HF_weak_strong      -1.934b    .053
Table 12: Wilcoxon test within each condition and its repetition
N 22
Chi-Square 3.757
df 2
Asymp. Sig. .153
Table 13: Friedman’s ANOVA
Asymp. Sig. (2-tailed) .660 .119 .071
Table 14: Wilcoxon test within the three different sound conditions
N 22
Chi-Square 3.836
df 2
Asymp. Sig. .147
Table 16: Friedman’s ANOVA - repetition 2
N 22
Chi-Square 8.517
df 2
Asymp. Sig. .014
Table 19: Friedman’s ANOVA
                                                                                     Z          Asymp. Sig. (2-tailed)
LF_identification_of_foot_location_mean - NF_identification_of_foot_location_mean    -.095b     .924
HF_identification_of_foot_location_mean - NF_identification_of_foot_location_mean    -2.293b    .022
HF_identification_of_foot_location_mean - LF_identification_of_foot_location_mean    -2.063b    .039
Table 20: Wilcoxon test within the three different sound conditions
Participants’ ability to identify their own feet as the footsteps sound source
N 22
Chi-Square .241
df 2
Asymp. Sig. .886
Table 22: Friedman’s ANOVA
Source df F Sig.
sound Sphericity Assumed 2 .284 .754
Greenhouse-Geisser 1.645 .284 .712
Greenhouse-Geisser 40.554
Huynh-Feldt 42.000
Lower-bound 21.000
Table 24: ANOVA
NF2_vividness_of_body_feelings - NF_vividness_of_body_feelings    LF2_vividness_of_body_feelings - LF_vividness_of_body_feelings    HF2_vividness_of_body_feelings - HF_vividness_of_body_feelings
N 22
Chi-Square .200
df 2
Asymp. Sig. .905
Table 26: Friedman’s ANOVA
Shapiro-Wilk
Statistic df Sig.
SAM – Dominance
N 22
Chi-Square 2.471
df 2
Asymp. Sig. .291
Table 29: Friedman’s ANOVA
SAM – Valence
                                      Z          Asymp. Sig. (2-tailed)
NF2_SAM_Valence - NF_Valence          -.695b     .487
LF2_SAM_Valence - LF_SAM_Valence      -1.203b    .229
HF2_SAM_Valence - HF_SAM_Valence      -1.567b    .117
Table 31: Wilcoxon for exploration of sound repetition effect
N 22
Chi-Square 6.222
df 2
Asymp. Sig. .045
Table 32: Friedman’s ANOVA
SAM – Arousal
Table 34: Wilcoxon for exploration of sound repetition effect
N 22
Chi-Square 2.324
df 2
Asymp. Sig. .313
Table 35: Friedman’s ANOVA
Spanner
Kolmogorov-Smirnov Shapiro-Wilk
Table 37: Multivariate test to detect sound effects and interaction
Greenhouse-Geisser 1.829 1.223 .303
Huynh-Feldt 1.995 1.223 .305
Lower-bound 1.000 1.223 .281
Error(sound*repetition) Sphericity Assumed 42
Greenhouse-Geisser 40.570
Huynh-Feldt 42.000
Lower-bound 21.000
Sphericity Assumed 42
Greenhouse-Geisser 38.408
Huynh-Feldt 41.885
Lower-bound 21.000
Table 38: Univariate test to detect possible effect between the weight and length variables
Self-Efficacy
Kolmogorov-Smirnov Shapiro-Wilk
Source df F Sig.
Lower-bound 21.000
sound * repetition Sphericity Assumed 2 3.698 .033
Greenhouse-Geisser 1.974 3.698 .034
Huynh-Feldt 2.000 3.698 .033
Lower-bound 1.000 3.698 .068
Error(sound*repetition) Sphericity Assumed 42
Greenhouse-Geisser 41.457
Huynh-Feldt 42.000
Lower-bound 21.000
Table 40: sound x repetition ANOVA
Behavioural measures
Time of heel contact with the ground
Kolmogorov-Smirnov Shapiro-Wilk
Source df F Sig.
Huynh-Feldt 1.884 2.534 .100
Lower-bound 1.000 2.534 .132
Error(sound) Sphericity Assumed 30
Greenhouse-Geisser 25.381
Huynh-Feldt 28.261
Lower-bound 15.000
repetition Sphericity Assumed 1 .002 .965
Greenhouse-Geisser 1.000 .002 .965
Huynh-Feldt 1.000 .002 .965
Lower-bound 1.000 .002 .965
Error(repetition) Sphericity Assumed 15
Greenhouse-Geisser 15.000
Huynh-Feldt 15.000
Lower-bound 15.000
foot * sound Sphericity Assumed 2 .766 .474
Greenhouse-Geisser 1.758 .766 .459
Huynh-Feldt 1.973 .766 .472
Lower-bound 1.000 .766 .395
Error(foot*sound) Sphericity Assumed 30
Greenhouse-Geisser 26.368
Huynh-Feldt 29.594
Lower-bound 15.000
foot * repetition Sphericity Assumed 1 .163 .692
Greenhouse-Geisser 1.000 .163 .692
Huynh-Feldt 1.000 .163 .692
Lower-bound 1.000 .163 .692
Error(foot*repetition) Sphericity Assumed 15
Greenhouse-Geisser 15.000
Huynh-Feldt 15.000
Lower-bound 15.000
sound * repetition Sphericity Assumed 2 .519 .601
Greenhouse-Geisser 1.639 .519 .566
Huynh-Feldt 1.812 .519 .584
Lower-bound 1.000 .519 .483
Error(sound*repetition) Sphericity Assumed 30
Greenhouse-Geisser 24.578
Huynh-Feldt 27.187
Lower-bound 15.000
foot * sound * repetition Sphericity Assumed 2 3.123 .059
Greenhouse-Geisser 1.570 3.123 .073
Huynh-Feldt 1.722 3.123 .068
Lower-bound 1.000 3.123 .098
Error(foot*sound*repetition) Sphericity Assumed 30
Greenhouse-Geisser 23.557
Huynh-Feldt 25.832
Lower-bound 15.000
Table 42: foot x sound x repetition ANOVA
Source df F Sig.
Greenhouse-Geisser 30.082
Huynh-Feldt 33.319
Lower-bound 17.000
Table 43: Left foot sensor ANOVA
                                                                   t         df    Sig. (2-tailed)
Pair 1   HF_Left_heel_t1t0diff_mean - LF_Left_heel_t1t0diff_mean   -2.070    21    .051
Pair 2   HF_Left_heel_t1t0diff_mean - NF_Left_heel_t1t0diff_mean   -.686     21    .500
Pair 3   LF_Left_heel_t1t0diff_mean - NF_Left_heel_t1t0diff_mean   2.033     21    .055
Table 44: Paired sample t-test
Kolmogorov-Smirnov Shapiro-Wilk
Source df F Sig.
Greenhouse-Geisser 1.970 2.573 .092
Huynh-Feldt 2.000 2.573 .091
Lower-bound 1.000 2.573 .127
Error(sound*repetition) Sphericity Assumed 34
Greenhouse-Geisser 33.487
Huynh-Feldt 34.000
Lower-bound 17.000
Table 46: ANOVA – sound by repetition
                                                     t         df    Sig. (2-tailed)
Pair 1   HF_L_toe_avg_z_mean - LF_L_toe_avg_z_mean   -2.217    21    .038
Pair 2   HF_L_toe_avg_z_mean - NF_L_toe_avg_z_mean   -.274     21    .787
Pair 3   LF_L_toe_avg_z_mean - NF_L_toe_avg_z_mean   1.613     21    .122
Table 47: Paired t-test for left foot sensors
Average step time – time interval between heel’s initial strike and toe’s last contact with
the ground
Shapiro-Wilk
Statistic df Sig.
Source df F Sig.
Lower-bound 1.000 .795 .385
Error(sound) Sphericity Assumed 34
Greenhouse-Geisser 30.440
Huynh-Feldt 33.790
Lower-bound 17.000
repetition Sphericity Assumed 1 .036 .852
Greenhouse-Geisser 1.000 .036 .852
Huynh-Feldt 1.000 .036 .852
Lower-bound 1.000 .036 .852
Error(repetition) Sphericity Assumed 17
Greenhouse-Geisser 17.000
Huynh-Feldt 17.000
Lower-bound 17.000
sound * repetition Sphericity Assumed 2 1.370 .268
Greenhouse-Geisser 1.519 1.370 .267
Huynh-Feldt 1.637 1.370 .267
Lower-bound 1.000 1.370 .258
Error(sound*repetition) Sphericity Assumed 34
Greenhouse-Geisser 25.829
Huynh-Feldt 27.836
Lower-bound 17.000
Table 49: ANOVA – sound x repetition – no significant sound effects or interaction
Shapiro-Wilk
Statistic df Sig.
NF2_Left_heel_peak_Z .950 16 .487
Table 50: Shapiro-Wilk – z-scores
Source df F Sig.
Greenhouse-Geisser 28.924
Huynh-Feldt 31.809
Lower-bound 17.000
Table 51: ANOVA showed no significant sound effects or interaction
Tests of Normality
                          Kolmogorov-Smirnov            Shapiro-Wilk
                          Statistic   df   Sig.     Statistic   df   Sig.
NF1_Right_heel_average    .147        16   .200*    .914        16   .135
NF2_Right_heel_average    .209        16   .061     .880        16   .038
HF1_Left_heel_average     .138        16   .200*    .954        16   .550
HF2_Left_heel_average     .106        16   .200*    .954        16   .548
LF1_Left_heel_average     .113        16   .200*    .984        16   .987
LF2_Left_heel_average     .107        16   .200*    .952        16   .528
NF1_Left_heel_average     .196        16   .100     .922        16   .180
NF2_Left_heel_average     .103        16   .200*    .973        16   .890
Table 52: Shapiro-Wilk test for both left and right foot sensors – normal distribution in left
foot sensors
Kolmogorov-Smirnov Shapiro-Wilk
Source df F Sig.
Greenhouse-Geisser 25.053
Huynh-Feldt 26.855
Lower-bound 17.000
Table 55: ANOVA showed no significant effects or interaction
Kolmogorov-Smirnov Shapiro-Wilk
HF2_Left_toe_peak_z    .139   16   .200*    .950   16   .498
LF1_Left_toe_peak_z    .192   16   .118     .959   16   .637
LF2_Left_toe_peak_z    .134   16   .200*    .962   16   .699
NF1_Left_toe_peak_z    .111   16   .200*    .955   16   .580
NF2_Left_toe_peak_z    .129   16   .200*    .944   16   .407
Table 56: Shapiro-Wilk - z-scores - test for both left and right foot sensors – normal
distribution in left foot sensors
Source df F Sig.
Greenhouse-Geisser 33.613
Huynh-Feldt 34.000
Lower-bound 17.000
ACCELEROMETER
Kolmogorov-Smirnov Shapiro-Wilk
Statistic df Sig. Statistic df Sig.
Source df F Sig.
Huynh-Feldt 2.000 .707 .500
Lower-bound 1.000 .707 .412
Error(sound*repetition) Sphericity Assumed 36
Greenhouse-Geisser 33.995
Huynh-Feldt 36.000
Lower-bound 18.000
Table 59: ANOVA - sound x repetition
t df Sig.
Source df F Sig.
Greenhouse-Geisser 34.916
Huynh-Feldt 36.000
Lower-bound 18.000
Table 61: ANOVA – sound x repetition – no significant sound effects or interaction were
found
Deceleration while lifting the foot upwards, just before moving it downwards.
Source df F Sig.
Greenhouse-Geisser 33.728
Huynh-Feldt 36.000
Lower-bound 18.000
Table 62: ANOVA – sound x repetition – no significant sound effects or interaction were
found
GSR
                                             Z          Asymp. Sig. (2-tailed)
NF2_max_min_march_z - NF1_max_min_march_z    -1.569b    .117
LF2_max_min_march_z - LF1_max_min_march_z    -.604c     .546
HF2_max_min_march_z - HF1_max_min_march_z    -1.569c    .117
NF2_max_min_walk_z - NF1_max_min_walk_z      -1.730b    .084
LF2_max_min_walk_z - LF1_max_min_walk_z      -.885c     .376
HF2_max_min_walk_z - HF1_max_min_walk_z      -2.575c    .010
Table 63: Wilcoxon test for all variables to identify significant effects of sound repetition – part 1
N 19
Chi-Square 9.579
df 2
Asymp. Sig. .008
Table 65: Friedman’s ANOVA
N 19
Chi-Square 5.158
df 2
Asymp. Sig. .076
Table 67: Friedman’s ANOVA
N 19
Chi-Square 5.158
df 2
Asymp. Sig. .076
Table 68: Friedman’s ANOVA
N 20
Chi-Square .300
df 2
Asymp. Sig. .861
Table 69: Friedman’s ANOVA
APPENDIX H – INFORMATION SHEET
We would like to invite you to participate in this research project. You should only participate if you want to;
choosing not to take part will not disadvantage you in any way. Before you decide whether you want to take
part, please read the following information carefully and discuss it with others if you wish. Ask us if there is
anything that is not clear or you would like more information.
The study aims to explore how sensory stimulation during walking affects behaviour and physiological
responses. This research takes place in three parts. First you will be provided with a questionnaire for collecting
demographic data and information related to the study. Thereafter, you will be asked to perform a task six
different times, each lasting about 5 minutes. For this task you will wear headphones, through which you will
listen to environmental sounds, and a pair of sandals. We will also ask you to carry a small bag with you with
some equipment inside. At the beginning of each task session you will be given a spanner. Then you
will be asked to march in place for 20 seconds until the experimenter gives you a “go” signal, after which you
will be required to walk down a corridor and put the spanner in a box placed at the end of the corridor.
Please focus on this walking task. Between each of the six sessions you will be asked to complete three
small tasks, which include providing three different dimension estimates and filling in a brief questionnaire.
Finally, after the end of all six sessions you will also be asked to fill in a last short questionnaire. Please
note that you may omit any questions that you do not wish to answer.
Your participation will be video recorded. In addition, during the experiment you will be asked to wear a
number of sensors: accelerometer and electromyography sensors, which measure your movements and
will be placed on your legs and on the upper part of your back and chest; a wrist sensor, which measures the
amount of sweat produced at your wrist (EDA – electrodermal activity); and pressure sensors attached to a
pair of sandals.
The procedure is not harmful or painful in any way. This experiment does not involve any risk for you. The
whole session lasts about 1 hour. You will be fully debriefed. You will be paid £7.50 for taking part.
All data will be collected and stored in accordance with the Data Protection Act 1998 and will be kept
anonymous. Researchers working with me will analyze the data collected.
It is up to you to decide whether or not to take part. If you choose not to participate, you won't incur any
penalties or lose any benefits to which you might have been entitled. However, if you do decide to take part,
you will be given this information sheet to keep and asked to sign a consent form. Even after agreeing to
take part, you can still withdraw at any time and without giving a reason.