Studies in Health Technology and Informatics 145
Advanced Technologies
in Rehabilitation
A. Gaggioli et al. (Eds.)
ISBN 978-1-60750-018-6
ISSN 0926-9630
Empowering Cognitive, Physical, Social and
Communicative Skills through Virtual Reality,
Robots, Wearable Systems and
Brain-Computer Interfaces
Volume 145
Recently published in this series
Vol. 144. B.K. Wiederhold and G. Riva (Eds.), Annual Review of Cybertherapy and Telemedicine 2009: Advanced Technologies in the Behavioral, Social and Neurosciences
Vol. 143. J.G. McDaniel (Ed.), Advances in Information Technology and Communication in Health
Vol. 142. J.D. Westwood, S.W. Westwood, R.S. Haluck, H.M. Hoffman, G.T. Mogel, R. Phillips, R.A. Robb and K.G. Vosburgh (Eds.), Medicine Meets Virtual Reality 17: NextMed: Design for/the Well Being
Vol. 141. E. De Clercq et al. (Eds.), Collaborative Patient Centred eHealth. Proceedings of the HIT@HealthCare 2008 joint event: 25th MIC Congress, 3rd International Congress Sixi, Special ISV-NVKVV Event, 8th Belgian eHealth Symposium
Vol. 140. P.H. Dangerfield (Ed.), Research into Spinal Deformities 6
Vol. 139. A. ten Teije, S. Miksch and P. Lucas (Eds.), Computer-based Medical Guidelines and Protocols: A Primer and Current Trends
Vol. 138. T. Solomonides et al. (Eds.), Global Healthgrid: e-Science Meets Biomedical Informatics. Proceedings of HealthGrid 2008
Vol. 137. L. Bos, B. Blobel, A. Marsh and D. Carroll (Eds.), Medical and Care Compunetics 5
Vol. 136. S.K. Andersen, G.O. Klein, S. Schulz, J. Aarts and M.C. Mazzoleni (Eds.), eHealth Beyond the Horizon: Get IT There. Proceedings of MIE2008, the XXIst International Congress of the European Federation for Medical Informatics
Advanced Technologies
in Rehabilitation
Empowering Cognitive, Physical, Social and Communicative
Skills through Virtual Reality, Robots, Wearable Systems and
Brain-Computer Interfaces
Edited by
Andrea Gaggioli
Catholic University of Milan, Milan, Italy
Istituto Auxologico Italiano, Milan, Italy
Emily A. Keshner
Temple University, Philadelphia, USA
and
Giuseppe Riva
Catholic University of Milan, Milan, Italy
Istituto Auxologico Italiano, Milan, Italy
LEGAL NOTICE
The publisher is not responsible for the use which might be made of the following information.
PRINTED IN THE NETHERLANDS
INTRODUCTION
The proportion of the world population over 65 years of age is climbing. Life expectancy in this age group is increasing, and disabling illnesses now occur later in life, so
the burden on the working-age population to support health care costs of aging populations continues to increase. These demographic shifts portend progressively greater
demands for cost effective health care, including long-term care and rehabilitation. The
most influential change in physical rehabilitation practice over the past few decades has
been the rapid development of new technologies that enable clinicians to provide more
effective therapeutic interventions.
New rehabilitation technologies can provide more responsive treatment tools or augment the therapeutic process. However, the absence of education about technological advancements, together with clinicians' apprehensions about the role of technology in the treatment delivery process, puts us at risk of losing the benefit of an essential partner in achieving successful outcomes with the physically disabled and aging population.
There are two reasons that may explain why rehabilitation practitioners do not play an
integral role in the development and evaluation of these new technologies. First, the
engineers who develop these technologies do not recognize the value they could derive
by consulting with rehabilitation professionals in order to make their machine-user
interfaces more efficient, user friendly, and effective for specific disabilities. Second,
many rehabilitation professionals are uncomfortable with technology and fear that it
may take the place of individualized interactions with patients.
Funding challenges, a lack of public awareness about technology's potential, a shortage
of trained experts, and poor collaboration among researchers, clinicians, and users are
often the cause for an absence of clinical trials that demonstrate the value of near-term
and future rehabilitation applications. If technology transfer is to become successful,
we need to establish collaborative interactions in which the goals of each discipline
overlap with the skills and goals of the other fields of endeavor and of the consumer. The rapid pace of technological development is pushing the marketplace, and
it is essential that rehabilitation specialists oversee the quality and validity of these new
applications before they reach the consumer.
It is clear from the chapters in this book that improvements in technology depend on
interdisciplinary cooperation among neuroscientists, engineers, computer programmers,
psychologists, and rehabilitation specialists, and on the adoption and widespread application of objective criteria for evaluating alternative methods. The goal of this book is to bring together ideas from several different disciplines in order to examine the focus and aims
that drive rehabilitation intervention and technology development.
Specifically, the chapters in this book address the questions of what research is currently taking place to further develop technology applied to rehabilitation and how we
have been able to modify and measure responses in both healthy and clinical populations using these technologies. In the following sections we highlight some of the issues raised about emergent technologies and briefly describe the chapters from this book that are dedicated to addressing these issues.
…of impairment and adapting the intervention as performance changed, thereby exploiting the nervous system's capacity for sensorimotor adaptation.
extending the reach of medical rehabilitation service delivery all emphasize the importance of human factors and user-centered design in the planning, developing, and
implementation of their systems. Brennan et al. present a brief history of telerehabilitation and tele-care and offer an overview of the technology used to provide
these remote rehabilitation services. Mataric et al. demonstrate how combining the
technology of non-contact socially assistive robotics and the clinical science of neurorehabilitation and motor learning can promote home-based rehabilitation programs
for stroke and traumatic brain injury. Weiss and Klinger discuss the practical and ethical considerations of using virtual reality for multiple users in co-located settings, single users in remote locations, and multiple users in remote locations.
6. Summary
Although new technologies and applications are rapidly emerging in the area of rehabilitation, there are still issues that must be addressed before these can be used both
effectively and economically. First, we need to demonstrate that these devices are effective through clinical trials. Second, we must determine how to build devices cheaply
enough for mass use. Lastly, we need sufficiently educated physicians and therapists to
drive the technology development and applications. Although considerable engineering
knowledge is required to understand the potential capabilities of the various technologies, engineering alone will not determine the usefulness of these systems. The chapters we have included in this book clearly demonstrate that in order to design appropriate system features and successful interventions, developers and the users need to be
familiar with the scientific rationale for motor learning and motor control, as well as
the motor impairments presented by different clinical populations. Ultimately, the impact of these new technologies will depend very much on mutual communication and
collaboration between clinicians, engineers, scientists, and the people with disabilities
that the technology will most directly impact.
Emily A. Keshner
Temple University
Philadelphia, PA, USA
W. Zev Rymer
Northwestern University
Chicago, Illinois, USA
CONTRIBUTORS
Sergei V. ADAMOVICH
Department of Biomedical Engineering, New Jersey Institute of Technology, NJ, USA
Sergei Adamovich received his Ph.D. degree in physics and mathematics from the Moscow Institute of Physics and Technology. He is currently with the Department of Biomedical Engineering at the New Jersey Institute of Technology, USA. His research is funded by the National Institutes of Health and by the National Institute on Disability and Rehabilitation Research.
Michela AGOSTINI
Laboratory of Robotics and Kinematics, I.R.C.C.S. San Camillo Venezia, Padova, Italy
Michela Agostini obtained degrees in Motor Science and in Physical Therapy at the University of Padova. Her studies focus on the clinical application of virtual reality and telerehabilitation systems for motor recovery after neurological injury, with specific interest in the motor learning principles involved in human-machine interaction.
Alessandro ANTONIETTI
Department of Psychology, Catholic University of the Sacred Heart, Milan, Italy
Alessandro Antonietti is Full Professor of Cognitive Psychology and head of the Department of Psychology at the Catholic University of the Sacred Heart in Milan. He has investigated the role played by media in thinking processes and is interested in the application of cognitive research in the field of education and rehabilitation.
Massimo BERGAMASCO
PERCRO Laboratory, Scuola Superiore Sant'Anna, Pisa, Italy
Massimo Bergamasco is Full Professor of Applied Mechanics at Scuola Superiore Sant'Anna and the current coordinator of the EU IP project SKILLS. His research activity deals with the study and development of haptic interfaces for controlling the interaction between humans and Virtual Environments.
…de Catalunya (UPC) and his PhD from the Swiss Federal Institute of Technology Zürich (ETHZ).
David BRENNAN
Center for Applied Biomechanics and Rehabilitation Research, National Rehabilitation
Hospital, Washington DC, USA
David Brennan, MBE, is a Senior Research Engineer at the National Rehabilitation
Hospital in Washington, DC. He has worked for over 10 years on telerehabilitation
research and development projects with funding from the National Institutes of Health,
and the United States Departments of Education and Defense.
Simon BROWNSELL
School of Health and Related Research, University of Sheffield Regent Court, Sheffield,
UK
Dr. Brownsell is a Research Fellow at the University of Sheffield, UK. He has 12 years' experience working in telecare and telehealth and a particular interest in developing evidence-based services for older people. He has written over 50 articles, two books, and three book chapters.
Mónica S. CAMEIRÃO
SPECS-Institut Universitari de l'Audiovisual (IUA), Universitat Pompeu Fabra, Barcelona, Spain
Mónica Cameirão is a PhD student in the SPECS group at the Universitat Pompeu Fabra in Barcelona. Her main interest is the application of new technologies for rehabilitation, and she is currently working on the development and clinical assessment of interactive systems for the neurorehabilitation of motor impairments such as those caused by stroke.
Roberta CARABALONA
Biomedical Technology Department (Polo Tecnologico), Fondazione Don C. Gnocchi,
Milan, Italy
Roberta Carabalona received a B.Sc. in Biomedical Engineering (1996) from Politecnico di Milano and an M.Sc. in Biostatistics and Experimental Statistics (2005) from Università degli Studi di Milano-Bicocca. She is a researcher in the Biosignal Analysis Area at the Biomedical Technology Department of Fondazione Don Carlo Gnocchi (Milan, Italy). Her research interests include bio-signal analysis and brain-computer interfaces.
Maria Chiara CARBONCINI
Department of Neurosciences, University of Pisa, Pisa, Italy
Maria Chiara Carboncini (MD) is responsible for upper limb rehabilitation and kinesiology at the Neurorehabilitation Unit of the University Hospital of Pisa.
Maura CASADIO
Department of Informatics, Systems and Telematics, University of Genoa, Genoa, Italy
Maura Casadio received a Master's degree in Electronic Engineering (2002) from the University of Pisa, Italy, and a Master's degree in Biomedical Engineering and a Ph.D. in Bioengineering, Materials Science and Robotics (2006) from the University of Genoa, Italy. She is now a postdoctoral fellow at the Rehabilitation Institute of Chicago, USA.
Paolo CASTIGLIONI
Biomedical Technology Department (Polo Tecnologico), Fondazione Don C. Gnocchi, Milan, Italy
Paolo Castiglioni received a Ph.D. in biomedical engineering (1993) from Politecnico di Milano, Italy. He is coordinator of the Biosignal Analysis Area at the Biomedical Technology Department of Fondazione Don Carlo Gnocchi (Milan, Italy). His research interests include bio-signal analysis, physiological mechanisms of cardiovascular control, gravitational physiology, and brain-computer interfaces.
Mauro DAM
Department of Neurosciences, University of Padova, Padova, Italy
Mauro Dam received a specialization in Neurology in 1979. From 1980 to 1982 he was a Visiting Fellow at the National Institute on Aging, N.I.H., Bethesda, USA. He is currently Associate Professor of Neurology and Scientific Vice President of the Italian Scientific Institutes for Research, Hospitalization and Health Care, S. Camillo Hospital, Venice. His research interests include brain metabolism, neuropharmacology, dementia, stroke, and neurorehabilitation.
Judith E. DEUTSCH
Department of Rehabilitation and Movement Sciences, University of Medicine and
Dentistry of New Jersey, USA
Judith E. Deutsch is Professor and Director of Rivers Lab. Her research focuses on the
development and testing of gaming and virtual reality to improve mobility for individuals post-stroke.
Jon ERIKSSON
Computer Science Department, University of Southern California, Los Angeles, USA
Jon Eriksson is a Master's student at the Computer Science Department, University of Southern California.
Antonio FRISOLI
PERCRO Laboratory, Scuola Superiore Sant'Anna, Pontedera (Pisa), Italy
Antonio Frisoli (Eng., PhD) is Assistant Professor of Applied Mechanics at Scuola Superiore Sant'Anna. He is Associate Editor of the IEEE Transactions on Haptics and of the Presence: Teleoperators and Virtual Environments journal. His research interests are in the fields of robot-assisted rehabilitation, robotics, virtual reality and haptic interfaces.
Andrea GAGGIOLI
Faculty of Psychology, Catholic University of Milan, Milan, Italy
Andrea Gaggioli received an MSc in Psychology from the University of Bologna and a Ph.D. from the Faculty of Medicine of the University of Milan. He is a researcher at the
Faculty of Psychology of the Catholic University of Milan and senior researcher at the
Applied Technology for Neuro-Psychology Lab of Istituto Auxologico Italiano (Milan,
Italy). He is the founder of Positive Technology, a field that studies how technology
can be used to promote mental and physical wellbeing.
Psiche GIANNONI
School of Medicine, Master program in physiotherapy, University of Genoa, Genoa,
Italy
Psiche Giannoni is a trained physiotherapist, IBITA Advanced Course Bobath Instructor and EBTA Senior Bobath Instructor (country representative). She teaches and organizes basic and advanced courses on the treatment of adults with hemiplegia and children with cerebral palsy. She is a Professor at the University of Genoa Physiotherapy School and an author of one book and about 30 scientific publications.
Furio GRAMATICA
Polo Tecnologico, Biomedical Technology Department, Fondazione Don Carlo Gnocchi ONLUS, Milan, Italy
Furio Gramatica, physicist, is the coordinator of the Biomedical Technology Department at Fondazione Don Gnocchi, where he also leads a biophysics and nanomedicine
team. His main scientific interest is the application of nanotechnology to diagnosis and
targeted drug delivery. Formerly, he served as researcher and project manager at CERN
(European Laboratory for Particle Physics, Geneva).
Giovanni GREGGIO
School of Physical Medicine and Rehabilitation, University of Padua, Rovigo, Italy
Giovanni Greggio graduated in Medicine at the University of Padua in 2004 and specialized in Physical Medicine and Rehabilitation in 2009. He took part in the European project I-Learning on upper limb rehabilitation after stroke.
Robert KENYON
Department of Computer Science, University of Illinois, Chicago, USA
Robert Kenyon received his Ph.D. in Physiological Optics from the University of California, Berkeley and is a Professor of Computer Science at the University of Illinois at
Chicago. His research spans the areas of sensory-motor adaptation, effects of microgravity on vestibular development, visuo-motor and posture control, flight simulation,
Tele-immersion, sensory/motor integration for navigation and wayfinding, virtual environments, and the melding of robots and virtual reality for rehabilitation.
Emily A. KESHNER
Department of Physical Therapy and Department of Electrical and Computer Engineering, Temple University, Philadelphia, PA, USA
Emily Keshner is Professor and Chair of the Department of Physical Therapy, a Professor in the Department of Electrical Engineering and Computer Science, and Director of
the Virtual Environment and Postural Orientation Laboratory at Temple University.
She is currently President of the International Society for Virtual Rehabilitation. Her
research focuses on how the CNS integrates multiple sensory demands with the biomechanical constraints of postural and spatial orientation tasks.
Evelyne KLINGER
LAMPA, Arts et Métiers ParisTech Angers, Laval, France
Evelyne Klinger, PhD, Eng, is a researcher at Arts et Métiers ParisTech in Laval, France. Her work is dedicated to the design of virtual reality-based methods, concepts and systems for cognitive rehabilitation assessment and intervention. She created the VAP-S, a virtual supermarket for the exploration of executive functions.
Mindy F. LEVIN
School of Physical and Occupational Therapy, McGill University, Montreal, Quebec,
Canada
Mindy Levin is a researcher and neurological physiotherapist (McGill, 1996). She obtained an MSc (Clinical Sciences, University of Montreal, 1985) and a PhD (Physiology, McGill, 1990). She was a Professor in the School of Rehabilitation (UdeM, 1992-2004) and Director of the Physical Therapy Program (McGill, 2004-08). She holds a
Canada Research Chair in Motor Recovery and Rehabilitation.
Eliane C. MAGDALON
Department of Biomedical Engineering, University of Campinas, Campinas, SP, Brazil
Eliane Magdalon obtained a B.Sc. in Physical Therapy from the Methodist University of Piracicaba in 2000 and her Master's degree from the University of Campinas in 2004. She is currently completing her PhD in the Department of Biomedical Engineering (Rehabilitation Engineering) of the University of Campinas, Campinas, SP, Brazil.
Maja MATARIĆ
Computer Science Department, University of Southern California, Los Angeles, USA
Maja J. Mataric is Professor of Computer Science and Neuroscience, Director of the
Center for Robotics and Embedded Systems (CRES), and the Viterbi School of Engineering Senior Associate Dean for Research at the University of Southern California.
She received her Ph.D. in Computer Science and Artificial Intelligence at MIT in 1994.
With the goal of getting robots to help people, her research interests include human-robot interaction and robot control and learning in complex environments.
Sue MAWSON
Center for Health and Social Care Research, Sheffield Hallam University, Sheffield,
UK
Sue Mawson is a Professor of Rehabilitation at Sheffield Hallam University, UK. Her
research focuses on improving quality of life of people with neurological problems,
particularly through exploration of the effectiveness of rehabilitative interventions. She
is a partner in the SMART trial, investigating benefits of technology for stroke rehabilitation.
Andrea MENEGHINI
Advanced Technology in Rehabilitation Lab Padua Teaching Hospital, Rehabilitation
Unit, University of Padua, Padua, Italy
Andrea Meneghini, MD, is a physiatrist specialized in Orthopedics and Traumatology.
He is head and founder of the Advanced Technology in Rehabilitation Lab at Padua
Teaching Hospital. He has more than 25 years of clinical and research experience. He
has been studying the use of virtual reality in the rehabilitation of hemiplegia since the
early 90s.
Alma S. MERIANS
Department of Rehabilitation and Movement Science, University of Medicine and Dentistry of New Jersey, NJ, USA
Dr. Alma Merians is Professor and Chairperson of the Department of Rehabilitation
and Movement Sciences. The major focus of her lab is to study basic mechanisms underlying neuromuscular control of human movement and sensorimotor learning, both in
healthy populations and in people with neurological diseases like stroke or cerebral
palsy.
Pietro MORASSO
Dept. of Informatics, Systems, Telematics, University of Genoa, Genova, Italy
Pietro Morasso is Full Professor of Bioengineering at the University of Genoa. Since 1970 he has been associated with the Neurophysiological laboratory of Emilio Bizzi (MIT). His scientific interests include neural control of movement, motor learning, anthropomorphic robotics, and rehabilitation engineering. He is author or co-author of 7 books and over 300 papers (44 indexed in Medline).
Francesca MORGANTI
Department of Human Science, University of Bergamo, Bergamo, Italy
Francesca Morganti received an MSc in Psychology from Padua University, where she completed a specialization in Neuropsychology and Clinical Psychophysiology. She also
obtained a PhD in Cognitive Science from the University of Turin. Her research focuses on the application of interactive technologies to experimental psychology and
neuroscience, as well as the study of intersubjectivity from the perspectives of neuroscience, cognitive science and social cognition.
Francesco PICCIONE
Department of Neurorehabilitation, IRCCS Hospital San Camillo Alberoni, Venice,
Italy
Francesco Piccione has a degree in Medicine and Surgery and a residency in Neurology and Neurophysiopathology. He is currently Director of the Unit of Neurodegenerative Disorders and Neurophysiopathology at San Camillo Hospital, Venice. He is an expert in EMG, EEG and evoked potentials, and a researcher in neurophysiology applied to the improvement of disability.
Maurizia PIGATTO
Dipartimento di Specialità Medico-Chirurgiche, University of Padua, Padua, Italy
Maurizia Pigatto is a chartered physiotherapist with over 25 years of clinical experience. She has collaborated with the School of Physiotherapy and the Master in Music Therapy program at Padua University. She serves as a senior research collaborator at the Advanced Technology in Rehabilitation Lab at Padua Teaching Hospital.
Lamberto PIRON
Neurorehabilitation Department, I.R.C.C.S. San Camillo Hospital, Venice, Italy
Lamberto Piron is a neurologist. He is the director of the Cerebrovascular Diseases Operative Unit and of the Kinematics and Robotics Laboratory at I.R.C.C.S. San
Camillo Hospital. His research focuses on the use of virtual environments, robotics and
telerehabilitation for training patients with arm motor impairment after neurological
lesions.
Ilaria POZZATO
Rehabilitation Unit, University of Padua, Padua, Italy
Dr. Ilaria Pozzato is currently in postgraduate training at the Medical School of Specialization in Physical Medicine and Rehabilitation at Padua University. She graduated in Medicine at the University of Padua with a thesis on the application of virtual reality and motor imagery training for upper limb rehabilitation of hemiplegic patients.
David J. REINKENSMEYER
Department of Mechanical and Aerospace Engineering, University of California at
Irvine, CA, USA
David J. Reinkensmeyer received his B.S. degree from the Massachusetts Institute of
Technology and his M.S. and Ph.D. degrees from the University of California at Berkeley. He was a research associate at the Rehabilitation Institute of Chicago before joining the University of California at Irvine.
Giuseppe RIVA
Department of Psychology, Catholic University of Milan, Milan, Italy
Giuseppe Riva, Ph.D., is Associate Professor of General Psychology and Communication Psychology at the Catholic University of Milan, Italy; Director of the Interactive Communication and Ergonomics of NEw Technologies (ICE-NET) Lab at the Catholic University of Milan, Italy; and Head Researcher of the Applied Technology for Neuro-Psychology Laboratory, Istituto Auxologico Italiano (Milan, Italy). His research activities focus on methods and assessment tools in psychology and the use of virtual reality in assessment and therapy.
Bruno ROSSI
Neurorehabilitation Unit, Department of Neurosciences, University of Pisa, Pisa, Italy
Bruno Rossi (MD) is Head of the Neurorehabilitation Unit, Department of Neuroscience, University Hospital of Pisa, and Full Professor of Physical Medicine and Rehabilitation. His research interests include clinical neurophysiology, EMG in neuromuscular disorders, brain-stem and spinal reflexology, muscle fatigue analysis, clinical neurology, psychophysiology of consciousness disorders, and neurorehabilitation.
Vittorio SANGUINETI
Dept Informatics Systems Telematics, University of Genoa and Italian Institute of
Technology, Genoa, Italy
Vittorio Sanguineti was born in Genova, Italy, in 1964. He received a Master's degree in Electronic Engineering in 1989 and a PhD in Robotics in 1994, both from the University of Genova. Until 1998 he worked as a post-doctoral fellow at the Institut de la Communication Parlée, INPG (Grenoble, France); at the Department of Psychology, McGill University (Montreal, Canada); and at the Department of Physiology, Northwestern University (Chicago, USA). Since 1999 he has been an assistant professor at the Dipartimento di Informatica, Sistemistica e Telematica (DIST) of the University of Genova.
Valentina SQUERI
Dept Informatics Systems Telematics, University of Genoa and Italian Institute of
Technology, Genoa, Italy
Valentina Squeri received a Master's degree in Bioengineering from the University of Genova in 2006. She is currently a PhD student at the University of Genoa and the Italian Institute of Technology. Her areas of interest include motor control, motor learning, and their application to robot therapy.
Sandeep SUBRAMANIAN
School of Physical and Occupational Therapy, McGill University, Quebec, Canada
Sandeep Subramanian, MSc, PT, is currently enrolled in the PhD program in Rehabilitation Sciences at the School of Physical and Occupational Therapy, McGill University.
His research focuses on the use of feedback for motor learning in patients with chronic
stroke and the use of different environments to maximize motor recovery post-stroke.
Adriana TAPUS
Computer Science Department, University of Southern California, CA, Los Angeles,
USA
Dr. Adriana Tapus is a research associate at the University of Southern California (USC, USA) in the Interaction Lab / Robotics Research Lab, Computer Science Department. She received her Ph.D. in Computer Science from the Swiss Federal Institute of Technology, Lausanne (EPFL) in 2005, her M.S. in Computer Science from University Joseph Fourier, Grenoble, France, in 2002, and her degree of Engineer in Computer Science and Engineering from Politehnica University of Bucharest, Romania. Her current research interests are socially assistive robotics for post-stroke patients and people suffering from cognitive impairment and/or Alzheimer's disease, humanoid robotics, machine learning, and computer vision.
Paolo TONIN
Department of Neurorehabilitation, IRCCS San Camillo S. Polo, Venice, Italy
Paolo Tonin is a neurologist and a physiatrist. He has carried out research in the rehabilitation of stroke, Parkinson's disease, multiple sclerosis, and traumatic brain injury, with particular reference to the role of emerging technologies in neurorehabilitation. Dr. Tonin is a member of the Board of the Italian Society of Neurorehabilitation and of the Management Committee of the World Federation of Neurorehabilitation.
Eugene TUNIK
Department of Rehabilitation and Movement Science, University of Medicine and Dentistry of New Jersey, Newark, NJ, USA
Dr. Tunik completed his degrees in Physical Therapy at Northeastern University and his doctorate at the Center for Molecular and Behavioral Neuroscience at Rutgers University. He studies brain mechanisms involved in motor control and learning in health and disease, and how this information can guide therapeutic interventions.
Andrea TUROLLA
Laboratory of Robotics and Kinematics, I.R.C.C.S. San Camillo Venezia, Noventa Padovana, Italy
Andrea Turolla is a Physical Therapist. He obtained a Master's degree in Science of Rehabilitation Health Professions at the University of Padua. His research focuses on
the application of virtual reality and robotic systems in motor rehabilitation, with specific interest in the motor learning principles involved in human-machine interaction.
Elena VERGARO
Department of Informatics, Systems and Telematics, University of Genoa, Genoa, Italy
Elena Vergaro received a Master's degree in Biomedical Engineering (2006) from the University of Genoa, Italy. She is now a Ph.D. student in Bioengineering at the same
university. Her area of interest is motor control and motor skill learning.
Paul VERSCHURE
Institute of Audiovisual Studies, Universitat Pompeu Fabra, Barcelona, Spain
Paul Verschure is a research professor with the Catalan Institute of Advanced Studies
(ICREA) and the Universitat Pompeu Fabra. Paul uses synthetic and experimental
methods to find a unified theory of mind and brain and applies the outcomes to novel
real-world technologies and quality of life enhancing applications.
Carolee J. WINSTEIN
Division of Biokinesiology and Physical Therapy at the School of Dentistry, University
of Southern California, Los Angeles, USA
Carolee J. Winstein, PhD, PT, FAPTA is Professor and Director of Research in Biokinesiology and Physical Therapy at the University of Southern California. She runs an
interdisciplinary research program focused on understanding control, rehabilitation and
recovery of goal-directed movements that emerge from a dynamic brain-behavior system in brain-damaged conditions.
CONTENTS
Introduction, Emily A. Keshner and W. Zev Rymer
Contributors ix
Section I.
Chapter 1.
25
Chapter 3.
40
Chapter 4.
55
Chapter 6.
Chapter 7.
Chapter 8.
Chapter 9.
65
84
94
109
126
Section IV. Using the Body's Own Signals to Augment Therapeutic Gains
Chapter 10. Advances in Wearable Technology for Rehabilitation, P. Bonato 145
160
Section V.
179
195
209
231
249
Chapter 17. Moving Beyond Single User, Local Virtual Environments for Rehabilitation, P.L. Weiss and E. Klinger 263
Subject Index 279
Author Index 281
Rehabilitation as Empowerment:
The Role of Advanced Technologies
Introduction
The field of rehabilitation is placing increasing emphasis on the construct of
empowerment as a critical element of any treatment strategy. This construct integrates
perceptions of personal control, participation with others to achieve goals, and an awareness of the factors that hinder or enhance one's efforts to exert control in one's
life [1, 2]. The emphasis on empowerment reflects a critical shift in rehabilitation: from
a focus on deficits and dependence toward an emphasis on assets and independence.
The International Classification of Functioning, Disability and Health (ICF) of the
World Health Organization [3] defines disability as "a condition in which people are temporarily or definitively unable to perform an activity in the correct manner and/or at a level generally considered normal for the human being". In this definition the focus
is not on deficits but on assets: a person is disabled when he/she is not able to fully
exploit his/her relationship with everyday contexts [4].
In this chapter we suggest that the new emerging technologies discussed in this book, with particular reference to robotics and virtual reality, have the right features for improving the rehabilitation process. These technologies can improve the quality of
life of the disabled individual through an effective support of his/her activity and
interaction [5].
1.
Empowerment in Rehabilitation

The empowerment process can be described at three levels, each with its own processes, goals and outcomes:

- Levels: Patient (Intrapersonal); Therapist/Caregiver (Interactional); Health Care Institution/System (Social)
- Process: receiving help from the therapist to gain control over his/her life
- Goals: to have decision-making power; to have access to information and resources; to change perceptions of the patient's competency and capacity to act
- Outcomes: self-efficacy; sense of control; participatory behaviors; effective resource management; critical awareness
The interactional component refers to how people think about and relate to their
social environment. This component of any empowering rehabilitation strategy
involves the transactions between people and the environments (family, clinical setting,
work, etc.) that they are involved in. On the one hand, it includes the decision-making
and problem-solving skills necessary to actively engage in one's environment. On the
other, it includes the ability to mobilize and obtain resources.
Again, how is it possible to evaluate the success of an interactional rehabilitation
strategy? According to the psychological literature, the key outcome variables are [6]:
2.
In recent years it has been possible to identify a clear trend in the design and
development of rehabilitation technologies: the shift from a general user-centered
approach to a specific activity-centered approach. In this last perspective, the goal of
technology should be the improvement of the quality of life of the individual, through
an effective support of his/her activity and interaction [4]. In this vision:
"if a person is able to write a paper with a pen and another person is limited in
the pen use but is able to write the same paper using a computer keyboard, none of
them is defined as disabled. On the contrary if both of them will be in a condition in
which the tool, that allows them to write the paper, is not available in a specific
moment they will be both disabled in performing the activity." (p. 286)
This compensatory approach in rehabilitation is usually divided [9] into person-oriented and environmentally oriented interventions (see Figure 1).
However, as remarked by Biocca [12], and agreed upon by most researchers in the
area, "while the design of virtual reality technology has brought the theoretical issue of
Presence to the fore, few theorists argue that the experience of Presence suddenly
emerged with the arrival of virtual reality". Rather, as suggested by Loomis [13],
Presence may be described as a basic state of consciousness: the attribution of
sensation to some distal stimulus, or more broadly to some environment. Due to the
complexity of the topic, and the interest in this concept, different conceptualizations of
Presence have been proposed in the literature.
A first definition of Presence is introduced by the International Society of
Presence Research (ISPR). ISPR researchers define Presence (a shortened version of
the term telePresence) as:
"a psychological state in which even though part or all of an individual's current
experience is generated by and/or filtered through human-made technology, part or all
of the individual's perception fails to accurately acknowledge the role of the
technology in the experience" [14].
This definition suggests that rehabilitation technology should provide a strong
feeling of Presence: the more the user experiences Presence in using a rehabilitation
technology, the more it is transparent to the user, the more it helps the user in coping
with his/her context in an effective way.
Nevertheless, the above definition has two limitations. First, what is Presence for?
Why do we experience Presence? As underlined by Lee [15]:
"Presence scholars may find it surprising and even disturbing that there have
been limited attempts to explain the fundamental reason why human beings can feel
Presence when they use media and/or simulation technologies." (p. 496)
Second, is Presence related to media only? As commented by Biocca [12], and
agreed by most researchers in the area:
"while the design of virtual reality technology has brought the theoretical issue of
Presence to the fore, few theorists argue that the experience of Presence suddenly
emerged with the arrival of virtual reality."
(online: http://jcmc.indiana.edu/vol3/issue2/biocca2.html)
Recent insights from cognitive sciences suggest that Presence is a
neuropsychological process that results in a sense of agency and control [16-18]. For
instance, Slater suggested that presence is a selection mechanism that organizes the
stream of sensory data into an environmental gestalt, or perceptual hypothesis about
the current environment [19, 20].
Within this framework, supported by ecological/ethnographic studies [21-28], any
rehabilitation technology, virtual or real, does not provide undifferentiated information
or ready-made objects in the same way for everyone. It offers different opportunities
and creates different levels of Presence according to its ability to support the users'
intentions.
1.2 Presence: A Second Definition
Recent findings in cognitive science suggest that Presence is a neuropsychological
phenomenon, evolved from the interplay of our biological and cultural inheritance,
that serves two functions.
First, Presence "locates" the self in an external physical and/or cultural space: the
Self is present in a space if he/she can act in it.
Second, Presence provides feedback to the Self about the status of its activity: the
Self perceives the variations in Presence and tunes its activity accordingly.
First, we suggest that the ability to feel present in the interaction with a
rehabilitation technology - an artifact - basically does not differ from the ability to feel
present in our body. Within this view, being present during agency means that 1)
the individual is able to successfully enact his/her intentions 2) the individual is able to
locate him/herself in the physical and cultural space in which the action occurs. When
the subject is present during a mediated action (that is, an action supported by a tool),
he/she incorporates the tool in his/her peri-personal space, extending the action
potential of the body into virtual space [33]. In other words, through the successful
enaction of the actor's intentions using the tool, the subject becomes present in the
tool.
The process of Presence can be described as a sophisticated but covert form of
monitoring action and experience, transparent to the self but critical for its existence.
The result of this process is a sense of agency: the feeling of being both the author and
the owner of one's own actions. The more intense the feeling of Presence, the higher
the quality of experience perceived during the action [34]. However, the agent directly
perceives only the variations in the level of Presence: breakdowns and optimal
experiences [16].
Why do we monitor the level of Presence? Our hypothesis is that this high-level
process has evolved to control the quality of action and behaviors.
According to Csikszentmihalyi [35, 36], individuals preferentially engage in
opportunities for action associated with a positive, complex and rewarding state of
consciousness, defined by him as optimal experience or Flow. The key feature of
this experience is the perceived balance between great environmental opportunities for
action (challenges) and adequate personal resources in facing them (skills). Additional
characteristics are deep concentration, clear rules for and unambiguous feedback from
the task at hand, loss of self-consciousness, control of one's actions and environment,
positive affect and intrinsic motivation. Displays of optimal experience can be
associated with various daily activities, provided that individuals perceive them as
complex opportunities for action and involvement. An example of Flow is the case
where a professional athlete is playing exceptionally well (positive emotion) and
achieves a state of mind where nothing else is attended to but the game (high level of
Presence). From the phenomenological viewpoint, both Presence and Flow are
described as absorbing states, characterized by a merging of action and awareness, loss
of self-consciousness, a feeling of being transported into another reality, and an altered
perception of time. Further, both Presence and optimal experience are associated with
high involvement, focused attention and high concentration on the ongoing activity.
Starting from these theoretical premises, can we design rehabilitation technologies that
elicit a state of Flow by activating a high level of Presence (maximal Presence) [4, 37,
38]? This question will be addressed in the following section.
1.3 The Presence Levels
How can we achieve a high level of Presence during interaction with a rehabilitation
technology? The answer to this question requires a better understanding of what
intentions are.
According to folk psychology, the intention of an agent performing an action is
his/her specific purpose in doing so. However, the latest cognitive studies clearly show
that any action is the result of a complex intentional chain that cannot be analyzed at a
single level [39-41].
Pacherie identifies three different levels or forms of intentions, characterized
by different roles and contents: distal intentions (D-intentions), proximal intentions
(P-intentions) and motor intentions (M-intentions).
Each intentional level has its own role: the rational (D-intentions), situational
(P-intentions) and motor (M-intentions) guidance and control of action. They form an
intentional cascade [40, 41] in which higher intentions generate lower intentions.
The role of the different layers will be related to the complexity of the activity
done: the more complex the activity, the more layers will be needed to produce a high
level of Presence (Figure 3).
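The intentional cascade can be pictured as a small tree in which each level generates the one below it. The following toy sketch mirrors Pacherie's D/P/M terminology; the class names and example contents are invented for illustration:

```python
# A toy model of the intentional cascade: a distal intention (D) generates
# a proximal intention (P), which generates a motor intention (M).
# Example contents are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Intention:
    level: str                 # "D" (rational), "P" (situational), "M" (motor)
    content: str
    children: list = field(default_factory=list)

def build_cascade():
    d = Intention("D", "prepare a drink")                # rational guidance
    p = Intention("P", "open the fridge, here and now")  # situational guidance
    m = Intention("M", "reach and grasp the handle")     # motor guidance
    d.children.append(p)   # higher intentions generate lower intentions
    p.children.append(m)
    return d

root = build_cascade()
for node in (root, root.children[0], root.children[0].children[0]):
    print(f"{node.level}-intention: {node.content}")
```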
At the lower level ("operations") proto Presence is enough to induce a satisfying
feeling of Presence. At the higher level ("activity") the media experience has to support
all three layers.
As suggested by Juarrero [44] high level intentions (Future Intentions/Objects)
channel future deliberation by narrowing the scope of alternatives to be subsequently
considered (cognitive reparsing). In practice, once the subject forms an intention, not
every logical or physically possible alternative remains open, and those that do are
encountered differently: once I decide to do A, non-A is no longer a viable alternative
and should it happen, I will consider non-A as a breakdown [45].
1.4 How to design rehabilitation technologies that foster Presence and Flow
This perspective allows us to predict under which mediated situations the feeling of
Presence can be enhanced or reduced.
First, minimal Presence results from an almost complete lack of integration of the
three layers discussed above, such as is the case when attention is mostly directed
towards contents of extended consciousness that are unrelated to the present external
environment (e.g., Im in the office trying to write a letter but Im thinking about how
to find a nurse for my father). By the same reasoning, maximal Presence arises when
proto Presence, core Presence and extended Presence are focused on the same external
situation or activity [28]. Maximal Presence thus results from the combination of all
three layers with a tight focus on the same content. This experience is supported by a
rehabilitation technology that offers an optimal combination of form and content, able
to support the activity of the user in a meaningful way.
The concepts described above are summarized by the following points:
1) The lower the level of activity, the easier it is to induce maximal Presence. The
object of an activity is wider and less targeted than the goal of an action. So,
its identification and support are more difficult for the designer of a
rehabilitation technology. Furthermore, the easiest level to support is the
operation. In fact, its conditions are more objective and predictable, being
related to the characteristics (constraints and affordances) of the artifact used:
it is easier to automatically open a door in a virtual environment than to help
the user in finding the right path for the exit. At the lower level ("operations")
proto Presence is enough to induce a satisfying feeling of Presence. At the
higher level ("activity") the media experience has to support all three
levels.
2) We have maximal Presence when the environment is able to support the full
intentional chain of the user: this can explain i) the success of the Nintendo
Wii over competing consoles (it is the only one to fully support M-intentions);
ii) the need for a long-term goal to induce a high level of Presence after many
experiences of the same rehabilitation technology.
3) Subjects with different intentions will not experience the same level of
Presence, even when using the same rehabilitation technology: this means that
understanding and supporting the intentions of the user will improve his/her
Presence during the interaction with the technology.
4) Action is more important than perception: I'm more present in a perceptually
poor virtual environment (e.g. a textual MUD) where I can act in many
different ways than in a real-like virtual environment where I cannot do
anything.
2.
Many studies using VR underline the link between this technology and optimal
experiences. However, given the limited space available, we focus on the ones that are
most relevant to the contents of this chapter.
A first set of results comes from the work of Gaggioli [46, 47]. Gaggioli compared
the experience reported by a user immersed in a virtual environment with the
experience reported by the same individual during other daily situations. To assess the
quality of experience the author used a procedure called Experience Sampling Method
(ESM), which is based on repeated on-line assessments of the external situation and
personal states of consciousness [47]. Results showed that the VR experience was the
activity associated with the highest level of optimal experience (22% of self-reports).
Reading, TV viewing and using other media both in the context of learning and of
leisure activities obtained lower percentages of optimal experiences (15%, 8% and
19% of self-reports respectively).
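The percentages reported above are per-activity tallies of self-reports classified as optimal experiences. A sketch of that aggregation over synthetic records (the data and field names are invented for illustration, not Gaggioli's):

```python
# Tallying the share of "optimal experience" self-reports per activity,
# as in an Experience Sampling Method (ESM) analysis. Records are
# synthetic and field names illustrative.
from collections import defaultdict

reports = [
    {"activity": "VR", "optimal": True},
    {"activity": "VR", "optimal": False},
    {"activity": "reading", "optimal": True},
    {"activity": "reading", "optimal": False},
    {"activity": "TV", "optimal": False},
]

counts = defaultdict(lambda: [0, 0])  # activity -> [optimal, total]
for r in reports:
    counts[r["activity"]][1] += 1
    if r["optimal"]:
        counts[r["activity"]][0] += 1

for activity, (opt, total) in counts.items():
    print(f"{activity}: {100 * opt / total:.0f}% optimal ({opt}/{total})")
```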
To verify the link between advanced technologies and optimal experiences, the
V-STORE Project investigated the quality of experience and the feeling of Presence
in a group of 10 patients with Frontal Lobe Syndrome involved in VR-based cognitive
rehabilitation [68]. They used the ITC-Sense of Presence Inventory [69] to evaluate the
feeling of Presence induced by the VR sessions. Findings highlighted the association of
VR sessions with both positive affect and a high level of Presence.
Miller and Reid [70] investigated the personal experiences of children with
cerebral palsy engaging in a virtual reality play intervention program. The results show
that participants experienced a sense of control and mastery over the virtual
environment. Moreover, they reported experiencing Flow, and both peers and family
perceived physical changes and increased social acceptance. These results
were confirmed in two later studies with the same population group [71, 72].
The other hypothesis we suggested in this chapter is that the transformation of
Flow may also exploit the plasticity of the brain, producing some form of functional
reorganization [73]. Optale and his team [74-76] investigated the experience of subjects
with male erectile disorders engaging in a virtual reality rehabilitative experience. The
results obtained - 30 out of 36 patients with psychological erectile dysfunction and 28
out of 37 clients with premature ejaculation maintained partial or complete positive
response after 6-month follow up - showed that this approach was able to hasten the
healing process and reduce dropouts. However, the most interesting part of the work is
the PET analysis carried out in the study. Optale used PET scans to analyze regional
brain metabolism changes from baseline to follow-up in the experimental sample [77].
The analysis of the scans showed, after the VR protocol, different metabolic changes in
specific areas of the brain connected with the erection mechanism.
Recent experimental results from the work of Hoffman and his group in the
treatment of chronic pain [78-81] might also be considered as fostering this vision.
Hoffman and colleagues verified the efficacy of VR as an advanced distraction tool
[82] in different controlled studies. The result showed dramatic drops in pain ratings
during VR compared to controls [83]. Further, using a functional magnetic resonance
imaging (fMRI) scanner they measured pain-related brain activity for each participant
when virtual reality was not present and when virtual reality was present (order
randomized). The team studied five regions of the brain known to be associated with
pain processing - the anterior cingulate cortex, primary and secondary somatosensory
cortex, insula, and thalamus - and found that during VR the activity in all regions
showed significant reductions [84]. In particular, the results showed direct modulation
of human brain pain responses by VR distraction: the amount of reduction in pain-related brain activity ranged from 50 percent to 97 percent.
Interestingly, as predicted by our model, the level of pain reduction was directly
correlated to the level of Presence experienced in VR [79, 85]: the more the Presence,
the less the pain.
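The reported relationship is, at its simplest, a per-subject correlation between Presence ratings and pain reduction. As an illustration, a Pearson correlation over synthetic data (the values are invented, not Hoffman's results):

```python
# Pearson correlation between Presence ratings and pain reduction,
# computed on synthetic data for illustration only.
import statistics

presence = [2.0, 3.5, 5.0, 6.0, 6.5]        # self-rated Presence in VR
pain_drop = [10.0, 25.0, 40.0, 55.0, 60.0]  # % reduction in pain rating

def pearson(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"r = {pearson(presence, pain_drop):.3f}")  # strongly positive here
```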
3.
Although VR certainly has potential as a rehabilitation technology [86] [87, 88], most
of the actual applications in this area are still in the laboratory or at investigation stage.
In a recent review [89], Riva identified four major issues that limit the use of VR in this
field:
1) The lower the level of activity, the easier it is to induce maximal Presence. The
object of an activity is wider and less targeted than the goal of an action. The
virtual exercises developed with NeuroVR can simulate a number of fine-grained
activities, such as opening the fridge, grabbing the water and closing
the fridge. These activities may in turn be broken down to an even finer level
depending on the goals and the complexity of the exercise.
2) We have maximal Presence when the environment is able to support the full
intentional chain of the user. The virtual environments developed using
NeuroVR support the three hierarchical levels indicated by the Presence
theory [19]:
- Extended Presence: NeuroVR allows the presentation of mediated
affordances that support the Self in generating complex action plans;
- Core Presence: the VE can be programmed to present the patient
with direct affordances. For instance, it is possible to program the
appearance/disappearance of virtual objects/images that trigger
the attention of the user. These objects/images can be activated by
the user's actions and behavior or by the therapist's commands.
- Proto Presence: the combined use of sensors and actuators supports
perception-action coupling and permits the patient to use his/her
body for enacting direct affordances in the virtual environment.
Movements can in turn be captured and recorded by means of
different input devices and wearable sensors (e.g., head tracking).
3) Subjects with different intentions will not experience the same level of
Presence, even when using the same rehabilitation technology. Since the
reduction of psychomotor performance can vary significantly among patients
suffering from neurological damage, the complexity of the virtual exercises can be
tailored to match the level of impairment of each patient. In this way, even
patients with a low level of cognitive functioning can successfully accomplish
virtual exercises, thereby increasing their feeling of presence, empowerment
and motivation for therapy.
4) Action is more important than perception: NeuroVR was explicitly designed
to find an optimal trade-off between perceptual realism and naturalness of
interaction. Whilst finding this trade-off was not an easy task, the level of
realism supported by the player is at least adequate to provide patients with
the feeling of "being there". As several Presence scholars have pointed out
[24], [25], [32], the experience of Presence depends to a greater extent on the
ability of a medium to support the user's actions in a transparent and natural way,
and is affected to a lesser extent by the quantity and quality of realism cues
depicted in the simulated environment.
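The Core Presence triggers described in point 2 (virtual objects appearing or disappearing in response to the user's actions or the therapist's commands) amount to a simple event-binding mechanism. The sketch below is a hypothetical illustration of that idea; none of these class or method names belong to the actual NeuroVR API:

```python
# Hypothetical event-trigger sketch for a VR exercise: objects are shown
# or hidden when bound events fire, whether raised by the patient's
# actions or by the therapist's console. Not the actual NeuroVR API.

class VirtualObject:
    def __init__(self, name):
        self.name = name
        self.visible = False

class ExerciseScene:
    def __init__(self):
        self.objects = {}
        self.bindings = {}  # event name -> list of (object name, visibility)

    def add_object(self, name):
        self.objects[name] = VirtualObject(name)

    def bind(self, event, name, visible):
        self.bindings.setdefault(event, []).append((name, visible))

    def fire(self, event, source):
        # `source` is "patient" or "therapist"; both may raise events.
        for name, visible in self.bindings.get(event, []):
            self.objects[name].visible = visible
            print(f"{source} event '{event}': {name} visible={visible}")

scene = ExerciseScene()
scene.add_object("water_bottle")
scene.bind("fridge_opened", "water_bottle", True)   # direct affordance
scene.fire("fridge_opened", source="patient")
```

A therapist's console would simply call `fire` with `source="therapist"`, which is what allows the same triggers to be driven either by the patient's behavior or by clinical supervision.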
4.
Conclusions
The field of rehabilitation is placing increasing emphasis on empowerment: a
construct that integrates perceptions of personal control, participation with others to
achieve goals, and an awareness of the factors that hinder or enhance one's efforts to
exert control in one's life [1, 2].
In this chapter we suggested that the new emerging technologies discussed in the
book, from Virtual Reality to Robotics, have the right features to improve the course
of rehabilitation. Specifically, we claim that they are able to improve the quality of life
of the individual, by improving his/her level of Presence.
To be precise, by enhancing the experienced level of Presence, emerging
technologies can foster optimal (Flow) experiences triggering the empowerment
process (transformation of Flow). The vision underlying this concept arises from
Positive Psychology [91]. According to this vision, rehabilitation technologies should
include positive peak experiences because they serve as triggers for a broader process
of motivation and empowerment. Within this context, the transformation of Flow can
be defined as a person's ability to draw upon an optimal experience and use it to
marshal new and unexpected psychological resources and sources of involvement.
Although different technologies can be used to achieve this goal, one of the most
promising is Virtual Reality. On the one hand, it can be described as an advanced form
of human-computer interface that allows the user to interact with and become
immersed in a computer-generated environment in a naturalistic fashion. On the other,
VR can also be considered as an advanced imaginal system: an experiential form of
imagery that is as effective as reality in inducing emotional responses.
To this end, we developed NeuroVR, an empowering rehabilitation tool that
allows the creation of virtual environments where patients can start to explore and act
without feeling threatened [92, 93]. Nothing the patient fears can really happen to
them in VR. With such assurance, they can freely explore, experiment, feel, live, and
experience feelings and/or thoughts. VR thus becomes a very useful intermediate step
between the therapist's office and the real world [94].
Clearly, further improving NeuroVR and building new virtual environments is
important so that therapists will continue to investigate the application of these tools in
their day-to-day clinical practice. In fact, in most circumstances, the clinical skills of
the rehabilitator remain the key factor in the successful use of VR systems.
Future research should also deepen analysis of the link between cognitive
processes, motor activities, Presence and Flow. This will allow the creation of a new
generation of rehabilitation technologies which are truly able to support the
empowerment process.
References
[1] M.A. Zimmerman, Taking aim on empowerment research: On the distinction between individual and psychological conceptions. American Journal of Community Psychology, (1984), 18(1): p. 169-177.
[2] D.D. Perkins and M.A. Zimmerman, Empowerment theory: Research and applications. American Journal of Community Psychology, (1995), 23: p. 569-579.
[3] WHO, International Classification of Functioning, Disability and Health. 2004, World Health Organization.
[4] F. Morganti and G. Riva, Ambient Intelligence in Rehabilitation, in Ambient Intelligence: The evolution of technology, communication and cognition towards the future of the human-computer interaction, G. Riva, F. Davide, F. Vatalaro, and M. Alcañiz, Editors. 2004, IOS Press: Amsterdam. p. 283-295. Online: http://www.emergingcommunication.com/volume6.html.
[5] R.L. Glueckauf, J.D. Whitton, and D.W. Nickelson, Telehealth: The new frontier in rehabilitation and health care, in Assistive technology: Matching device and consumer for successful rehabilitation, M.J. Scherer, Editor. 2002, American Psychological Association: Washington, DC. p. 197-213.
[6] M.A. Zimmerman and S. Warschausky, Empowerment Theory for Rehabilitation Research: Conceptual and Methodological Issues. Rehabilitation Psychology, (1998), 43(1): p. 3-16.
[7] G. Riva, F. Davide, and W.A. IJsselsteijn, eds., Being There: Concepts, effects and measurements of user presence in synthetic environments. Emerging Communication: Studies on New Technologies and Practices in Communication, ed. G. Riva and F. Davide. 2003, IOS Press: Amsterdam. Online: http://www.emergingcommunication.com/volume5.html.
[8] G. Riva, G. Castelnuovo, and F. Mantovani, Transformation of flow in rehabilitation: the role of advanced communication technologies. Behavior Research Methods, (2006), 38(2): p. 237-44.
[9] L.N. Kirsch, M. Shenton, E. Spirl, J. Rowan, R. Simpson, D. Schreckenghost, and E.F. LoPresti, Web-Based Assistive Technology Interventions for Cognitive Impairments After Traumatic Brain Injury: Selective Review and Two Case Studies. Rehabilitation Psychology, (2004), 49(3): p. 200-212.
[10] B. Crosson, P. Barco, C. Velozo, M.M. Bolesta, P.V. Cooper, D. Wefts, and T.C. Brobeck, Awareness and compensation in post-acute head injury rehabilitation. Journal of Head Trauma Rehabilitation, (1989), 4: p. 46-54.
[11] T.B. Sheridan, Musing on telepresence and virtual presence. Presence, Teleoperators, and Virtual Environments, (1992), 1: p. 120-125.
[12] F. Biocca, The Cyborg's Dilemma: Progressive embodiment in virtual environments. Journal of Computer-Mediated Communication [On-line], (1997), 3(2). Online: http://jcmc.indiana.edu/vol3/issue2/biocca2.html.
[13] J.M. Loomis, Distal attribution and presence. Presence, Teleoperators, and Virtual Environments, (1992), 1(1): p. 113-118.
[14] International Society for Presence Research, The concept of presence: explication statement. 2000.
[15] K.M. Lee, Why Presence Occurs: Evolutionary Psychology, Media Equation, and Presence. Presence, (2004), 13(4): p. 494-505.
[16] G. Riva, Being-in-the-world-with: Presence meets Social and Cognitive Neuroscience, in From Communication to Presence: Cognition, Emotions and Culture towards the Ultimate Communicative Experience. Festschrift in honor of Luigi Anolli, G. Riva, M.T. Anguera, B.K. Wiederhold, and F. Mantovani, Editors. 2006, IOS Press: Amsterdam. p. 47-80. Online: http://www.emergingcommunication.com/volume8.html.
[17] G. Riva, F. Mantovani, and A. Gaggioli, Are robots present? From motor simulation to being there. Cyberpsychology & Behavior, (2008), 11: p. 631-636.
[18] G. Riva, Virtual Reality and Telepresence. Science, (2007), 318(5854): p. 1240-1242.
[19] M. Slater, Presence and the sixth sense. Presence: Teleoperators, and Virtual Environments, (2002), 11(4): p. 435-439.
[20] M.V. Sanchez-Vives and M. Slater, From presence to consciousness through virtual reality. Nature Reviews Neuroscience, (2005), 6(4): p. 332-9.
[21] G. Riva, J.A. Waterworth, and E.L. Waterworth, The Layers of Presence: a bio-cultural approach to understanding presence in natural and mediated environments. Cyberpsychology & Behavior, (2004), 7(4): p. 405-419.
[22] G. Mantovani and G. Riva, "Real" presence: How different ontologies generate different criteria for presence, telepresence, and virtual presence. Presence, Teleoperators, and Virtual Environments, (1999), 8(5): p. 538-548.
[23] G. Mantovani and G. Riva, Building a bridge between different scientific communities: on Sheridan's eclectic ontology of presence. Presence: Teleoperators and Virtual Environments, (2001), 8: p. 538-548.
[24] J.J. Gibson, The ecological approach to visual perception. 1979, Hillsdale, NJ: Erlbaum.
[25] A. Spagnolli and L. Gamberini, A Place for Presence. Understanding the Human Involvement in Mediated Interactive Environments. PsychNology Journal, (2005), 3(1): p. 6-15. Online: www.psychnology.org/article801.htm.
[26] A. Spagnolli, D. Varotto, and G. Mantovani, An ethnographic action-based approach to human experience in virtual environments. International Journal of Human-Computer Studies, (2003), 59(6): p. 797-822.
[27] L. Gamberini and A. Spagnolli, On the relationship between presence and usability: a situated, action-based approach to virtual environments, in Being There: Concepts, Effects and Measurement of User Presence in Synthetic Environments, G. Riva, W.A. IJsselsteijn, and F. Davide, Editors. 2003, IOS Press: Amsterdam. p. 97-107. Online: http://www.emergingcommunication.com/volume5.html.
[28] J.A. Waterworth and E.L. Waterworth, Presence as a Dimension of Communication: Context of Use and the Person, in From Communication to Presence: Cognition, Emotions and Culture towards the Ultimate Communicative Experience, G. Riva, M.T. Anguera, B.K. Wiederhold, and F. Mantovani, Editors. 2006, IOS Press: Amsterdam. p. 80-95. Online: http://www.emergingcommunication.com/volume8.html.
[29] P. Haggard and S. Clark, Intentional action: conscious experience and neural prediction. Conscious
Cogn, (2003), 12(4): p. 695-707.
[30] P. Haggard, S. Clark, and J. Kalogeras, Voluntary action and conscious awareness. Nat Neurosci,
(2002), 5(4): p. 382-5.
[31] F.J. Varela, E. Thompson, and E. Rosch, The embodied mind: Cognitive science and human
experience. 1991, Cambridge, MA: MIT Press.
[32] R. Whitaker, Self-Organization, Autopoiesis, and Enterprises. ACM SIGOIS Illuminations series, (1995). Online: http://www.acm.org/sigs/siggroup/ois/auto/Main.html.
[33] A. Clark, Natural Born Cyborgs: Minds, technologies, and the future of human intelligence. 2003,
Oxford: Oxford University Press.
[34] P. Zahoric and R.L. Jenison, Presence as being-in-the-world. Presence, Teleoperators, and Virtual
Environments, (1998), 7(1): p. 78-89.
[35] M. Csikszentmihalyi, Beyond Boredom and Anxiety. 1975, San Francisco: Jossey-Bass.
[36] M. Csikszentmihalyi, Flow: The psychology of optimal experience. 1990, New York: HarperCollins.
[37] G. Riva, The psychology of Ambient Intelligence: Activity, situation and presence, in Ambient
Intelligence: The evolution of technology, communication and cognition towards the future of the
human-computer interaction, G. Riva, F. Davide, F. Vatalaro, and M. Alcaiz, Editors. 2004, IOS
Press. On-line: http://www.emergingcommunication.com/volume6.html: Amsterdam. p. 19-34.
[38] E.L. Waterworth, M. Häggkvist, K. Jalkanen, S. Olsson, J.A. Waterworth, and W. H., The Exploratorium: An environment to explore your feelings. PsychNology Journal, (2003), 1(3): p. 189-201. On-line: http://www.psychnology.org/File/PSYCHNOLOGY_JOURNAL_1_3_WATERWORTH.pdf.
[39] J. Searle, Intentionality: An essay in the philosophy of mind. 1983, New York: Cambridge University
Press.
[40] E. Pacherie, Toward a dynamic theory of intentions, in Does consciousness cause behavior?, S. Pockett,
W.P. Banks, and S. Gallagher, Editors. 2006, MIT Press: Cambridge, MA. p. 145-167.
[41] E. Pacherie, The phenomenology of action: A conceptual framework. Cognition, (2008), 107(1): p.
179-217.
[42] G. Riva, Enacting Interactivity: The Role of Presence, in Enacting Intersubjectivity: A cognitive and
social perspective on the study of interactions, F. Morganti, A. Carassa, and G. Riva, Editors. 2008, IOS
Press: Online: http://www.emergingcommunication.com/volume10.html: Amsterdam. p. 97-114.
[43] C. Dillon, J. Freeman, and E. Keogh. Dimension of Presence and components of emotion. in Presence
2003. 2003. Aalborg, Denmark: ISPR.
[44] A. Juarrero, Dynamics in action: Intentional behavior as a complex system. (1999.
[45] M.E. Bratman, Shared cooperative activity. Philosophical Review, (1992), 101: p. 327-341.
[46] A. Gaggioli, M. Bassi, and A. Delle Fave, Quality of Experience in Virtual Environments, in Being
There: Concepts, effects and measurement of user presence in synthetic environment, G. Riva, W.A.
IJsselsteijn,
and
F.
Davide,
Editors.
2003,
Ios
Press.
Online:
http://www.emergingcommunication.com/volume5.html: Amsterdam. p. 121-135.
[47] A. Gaggioli, Optimal Experience in Ambient Intelligence, in Ambient Intelligence: The evolution of
technology, communication and cognition towards the future of human-computer interaction, G. Riva,
F. Vatalaro, F. Davide, and M. Alcaiz, Editors. 2004, IOS Press. On-line:
http://www.emergingcommunication.com/volume6.html: Amsterdam. p. 35-43.
[48] J.A. Waterworth, Virtual Realisation: Supporting creative outcomes in medicine and music.
PsychNology
Journal,
(2003),
1(4):
p.
410-427.
http://www.psychnology.org/pnj1(4)_waterworth_abstract.htm.
[49] F. Massimini and A. Delle Fave, Individual development in a bio-cultural perspective. American
Psychologist, (2000), 55(1): p. 24-33.
[50] A. Delle Fave, Il processo di trasformazione di Flow in un campione di soggetti medullolesi [The
process of flow transformation in a sample of subjects with spinal cord injuries], in La selezione
psicologica umana, F. Massimini, A. Delle Fave, and P. Inghilleri, Editors. 1996, Cooperativa Libraria
IULM: Milan. p. 615-634.
[51] N. Doidge, The Brain that Changes Itself: Stories of Personal Triumph from the frontiers of Brain
Science. 2007, New York: Penguin Books.
[52] S. Begley, The Plastic Mind. 2008, London: Constable & Robinson.
[53] S.L. Wolf, C.J. Winstein, J.P. Miller, E. Taub, G. Uswatte, D. Morris, C. Giuliani, K.E. Light, and D.
Nichols-Larsen, Effect of constraint-induced movement therapy on upper extremity function 3 to 9
months after stroke: the EXCITE randomized clinical trial. Jama, (2006), 296(17): p. 2095-104.
[54] L.V. Gauthier, E. Taub, C. Perkins, M. Ortmann, V.W. Mark, and G. Uswatte, Remodeling the brain:
plastic structural brain changes produced by different motor therapies after stroke. Stroke, (2008),
39(5): p. 1520-5.
[55] L. Collier and J. Truman, Exploring the multi-sensory environment as a leisure resource for people with
complex neurological disabilities. NeuroRehabilitation, (2008), 23(4): p. 361-7.
[56] S.B.N. Thompson and S. Martin, Making sense of multi-sensory rooms for people with learning
disabilities. British Journal of Occupational Therapy, (1994), 57: p. 341-344.
[57] K.W. Hope, The effects of multi-sensory environments on older people with dementia. Journal of
Psychiatric and Mental Health Nursing, (1998), 5: p. 377-385.
[58] K.W. Hope and H.A. Waterman, Using Multi-Sensory Environments (MSEs) with people with
dementia. Dementia, (2004), 3(1): p. 45-68.
[59] R. Baker, S. Bell, E. Baker, S. Gibson, J. Holloway, R. Pearce, Z. Dowling, P. Thomas, J. Assey, and
L.A. Waering, A randomized controlled trial of the effects of multi-sensory stimulation (MSS) for
people with dementia. British Journal of Clinical Psychology, (2001), 40(1): p. 81-96.
[60] G.A. Hotz, A. Castelblanco, I.M. Lara, A.D. Weiss, R. Duncan, and J.W. Kuluz, Snoezelen: a
controlled multi-sensory stimulation therapy for children recovering from severe brain injury. Brain Inj,
(2006), 20(8): p. 879-88.
[61] M. Lotan and J. Merrick, Rett syndrome management with Snoezelen or controlled multi-sensory
stimulation. A review. Int J Adolesc Med Health, (2004), 16(1): p. 5-12.
[62] M.M. Behrmann and L. Lahm, Babies and robots: technology to assist learning of young multiple
disabled children. Rehabil Lit, (1984), 45(7-8): p. 194-201.
[63] E. Libin and A. Libin, New diagnostic tool for robotic psychology and robotherapy studies.
Cyberpsychol Behav, (2003), 6(4): p. 369-74.
[64] F. Tanaka, A. Cicourel, and J.R. Movellan, Socialization between toddlers and robots at an early
childhood education center. Proc Natl Acad Sci U S A, (2007), 104(46): p. 17954-8.
[65] M.R. Banks, L.M. Willoughby, and W.A. Banks, Animal-assisted therapy and loneliness in nursing
homes: use of robotic versus living dogs. J Am Med Dir Assoc, (2008), 9(3): p. 173-7.
[66] R. Colombo, F. Pisano, A. Mazzone, C. Delconte, S. Micera, M.C. Carrozza, P. Dario, and G. Minuco,
Design strategies to improve patient motivation during robot-aided rehabilitation. J Neuroeng Rehabil,
(2007), 4: p. 3.
[67] G. Riva and A. Gaggioli, Virtual clinical therapy. Lecture Notes in Computer Sciences, (2008), 4650: p.
90-107.
[68] G. Castelnuovo, C. Lo Priore, D. Liccione, and G. Cioffi, Virtual Reality based tools for the
rehabilitation of cognitive and executive functions: the V-STORE. PsychNology Journal, (2003), 1(3):
p.
311-326.
Online:
http://www.psychnology.org/pnj1(3)_castelnuovo_lopriore_liccione_cioffi_abstract.htm.
[69] J. Lessiter, J. Freeman, E. Keogh, and J. Davidoff, A Cross-Media Presence Questionnaire: The ITCSense of Presence Inventory. Presence: Teleoperators, and Virtual Environments, (2001), 10(3): p. 282297.
[70] S. Miller and D. Reid, Doing play: competency, control, and expression. Cyberpsychol Behav, (2003),
6(6): p. 623-32.
[71] D. Reid, The influence of virtual reality on playfulness in children with cerebral palsy: a pilot study.
Occup Ther Int, (2004), 11(3): p. 131-44.
[72] K. Harris and D. Reid, The influence of virtual reality play on children's motivation. Can J Occup Ther,
(2005), 72(1): p. 21-9.
[73] B.B. Johansson, Brain plasticity and stroke rehabilitation. The Willis lecture. Stroke, (2000), 31(1): p.
223-30.
[74] G. Optale, A. Munari, A. Nasta, C. Pianon, J. Baldaro Verde, and G. Viggiano, Multimedia and virtual
reality techniques in the treatment of male erectile disorders. International Journal of Impotence
Research, (1997), 9(4): p. 197-203.
[75] G. Optale, F. Chierichetti, A. Munari, A. Nasta, C. Pianon, G. Viggiano, and G. Ferlin, PET supports
the hypothesized existence of a male sexual brain algorithm which may respond to treatment combining
psychotherapy with virtual reality. Studies in Health Technology and Informatics, (1999), 62: p. 249251.
[76] G. Optale, Male Sexual Dysfunctions and multimedia Immersion Therapy. CyberPsychology &
Behavior, (2003), 6(3): p. 289-294.
[77] G. Optale, F. Chierichetti, A. Munari, A. Nasta, C. Pianon, G. Viggiano, and G. Ferlin, Brain PET
confirms the effectiveness of VR treatment of impotence. International Journal of Impotence Research,
(1998), 10(Suppl 1): p. 45.
[78] H.G. Hoffman, T.L. Richards, B. Coda, A.R. Bills, D. Blough, A.L. Richards, and S.R. Sharar,
Modulation of thermal pain-related brain activity with virtual reality: evidence from fMRI.
Neuroreport, (2004), 15(8): p. 1245-1248.
[79] H.G. Hoffman, T. Richards, B. Coda, A. Richards, and S.R. Sharar, The illusion of presence in
immersive virtual reality during an fMRI brain scan. CyberPsychology & Behavior, (2003), 6(2): p.
127-131.
[80] H.G. Hoffman, D.R. Patterson, J. Magula, G.J. Carrougher, K. Zeltzer, S. Dagadakis, and S.R. Sharar,
Water-friendly virtual reality pain control during wound care. Journal of Clinical Psychology, (2004),
60(2): p. 189-195.
[81] H.G. Hoffman, T.L. Richards, T. Van Oostrom, B.A. Coda, M.P. Jensen, D.K. Blough, and S.R. Sharar,
The analgesic effects of opioids and immersive virtual reality distraction: evidence from subjective and
functional brain imaging assessments. Anesth Analg, (2007), 105(6): p. 1776-83, table of contents.
[82] H.G. Hoffman, D.R. Patterson, E. Seibel, M. Soltani, L. Jewett-Leahy, and S.R. Sharar, Virtual reality
pain control during burn wound debridement in the hydrotank. Clin J Pain, (2008), 24(4): p. 299-304.
[83] H.G. Hoffman, J.N. Doctor, D.R. Patterson, G.J. Carrougher, and T.A. Furness, 3rd, Virtual reality as
an adjunctive pain control during burn wound care in adolescent patients. Pain, (2000), 85(1-2): p. 3059.
[84] H.G. Hoffman, T.L. Richards, A.R. Bills, T. Van Oostrom, J. Magula, E.J. Seibel, and S.R. Sharar,
Using FMRI to study the neural correlates of virtual reality analgesia. CNS Spectr, (2006), 11(1): p. 4551.
[85] H.G. Hoffman, S.R. Sharar, B. Coda, J.J. Everett, M. Ciol, T. Richards, and D.R. Patterson,
Manipulating presence influences the magnitude of virtual reality analgesia. Pain, (2004), 111(1-2): p.
162-8.
[86] P.L. Weiss and N. Katz, The potential of virtual reality for rehabilitation. J Rehabil Res Dev, (2004),
41(5): p. vii-x.
[87] D. Rand, R. Kizony, and P.T. Weiss, The Sony PlayStation II EyeToy: low-cost virtual reality for use
in rehabilitation. J Neurol Phys Ther, (2008), 32(4): p. 155-63.
[88] A. Rizzo, M.T. Schultheis, K. Kerns, and C. Mateer, Analysis of assets for virtual reality applications in
neuropsychology. Neuropsychological Rehabilitation, (2004), 14(1-2): p. 207-239.
[89] G. Riva, Virtual reality in psychotherapy: review. CyberPsychology & Behavior, (2005), 8(3): p. 22030; discussion 231-40.
[90] G. Riva, A. Gaggioli, D. Villani, A. Preziosa, F. Morganti, R. Corsi, G. Faletti, and L. Vezzadini,
NeuroVR: an open source virtual reality platform for clinical psychology and behavioral neurosciences.
Studies in Health Technology and Informatics, (2007), 125: p. 394-9.
[91] M.E.P. Seligman and M. Csikszentmihalyi, Positive psychology. American Psychologist, (2000), 55: p.
5-14.
[92] C. Botella, C. Perpia, R.M. Baos, and A. Garcia-Palacios, Virtual reality: a new clinical setting lab.
Studies in Health Technology and Informatics, (1998), 58: p. 73-81.
[93] F. Vincelli, From imagination to virtual reality: the future of clinical psychology. CyberPsychology &
Behavior, (1999), 2(3): p. 241-248.
[94] C. Botella, S. Quero, R.M. Banos, C. Perpina, A. Garcia Palacios, and G. Riva, Virtual reality and
psychotherapy. Stud Health Technol Inform, (2004), 99: p. 37-54.
David J. REINKENSMEYER
Department of Mechanical and Aerospace Engineering
University of California at Irvine, CA, USA
Abstract. There has been a rapid increase in the past decade in the number of
robotic devices that are being developed to assist in movement rehabilitation of the
upper extremity following stroke. Many of these devices have produced positive
clinical results. Yet, it is still not well understood how these devices enhance
movement recovery, and whether they have inherent therapeutic value that can be
attributed to their robotic properties per se. This chapter reviews the history of
robotic assistance for upper extremity training after stroke and the current state of
the field. Future advances in the field will likely be driven by scientific studies
focused on defining the behavioral factors that influence motor plasticity.
Keywords. upper extremity, rehabilitation, robotics, motor control, plasticity
Introduction
In the early 1990s there were a handful of robotic devices being developed for upper
extremity training after stroke. Today there are tens of prototypes and several
companies selling commercial devices [1]. However, use of robotic devices in
rehabilitation clinics is still rare. This chapter reviews the history of the field, and
identifies factors that limit clinical acceptance and important directions for future
scientific research. Section 1 reviews why engineers started investigating robots for use
in rehabilitation therapy, and initial reactions by clinicians to these efforts. Section 2
reviews key design decisions that had to be made for the first robotic therapy devices,
which in some ways defined the flow of the field. Section 3 reviews clinical results
from the field and two important scientific questions that these results have raised.
Section 4 discusses recent developments in robotic assistance for the upper
extremity. The chapter concludes by suggesting directions for future research.
Figure 1. Pre-cursors of robotic therapy devices. The three devices on the left (Swedish sling, arm
skateboard, and JAECO mobile arm support) are designed to provide assistance for arm movement without
using actuators. The device on the right is the Biodex Active Dynamometer, a single degree-of-freedom robot that can be adjusted to assist or resist movement around different joints.
Examples of simple, existing technology: therabands, pegboards, blocks.
Figure 2. Some of the first robotic therapy devices for the arm to undergo clinical testing (left to right: MIT-MANUS [2], MIME [4], the ARM Guide [5]). These devices were designed to provide active assistance to
patients during reaching movements with the arm.
developed in the late 1970s and early 1980s (Figure 1). Here we define a robot to be a
device that can move in response to commands (cf. American Heritage Dictionary).
Active dynamometers incorporate a computer-controlled motor, and thus fit this
general definition of a robot. They include a kit of levers and bars that can be attached
to the motor. The levers are designed to work with different limbs and joints (e.g.
elbow flexion/extension, or shoulder abduction/adduction), allowing patients to exercise
a joint while the motor resists or assists movement. The dynamometer senses the torque
and limb rotation that the patient generates, and displays this information to the patient
and therapist for visual feedback and outcomes documentation.
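The dynamometer's assist and resist modes can be pictured as simple single-joint control laws. The following is a minimal sketch, with gains, units, and mode names that are illustrative rather than taken from any particular commercial device:

```python
def joint_torque_command(mode, angle, velocity, target_angle, b=0.5, k=2.0):
    """Illustrative control law for an active dynamometer's motor.

    'resist' applies a viscous load opposing the patient's own motion;
    'assist' applies a spring-like torque pulling the joint toward a target.
    The gains b (N*m*s/rad) and k (N*m/rad) are made-up example values.
    """
    if mode == "resist":
        return -b * velocity  # torque opposes measured joint velocity
    if mode == "assist":
        return k * (target_angle - angle)  # torque reduces angular error
    raise ValueError("mode must be 'resist' or 'assist'")
```

At each control tick, such a device would read the measured joint angle and velocity, compute the torque command, and log the torque and rotation for the visual feedback and outcomes documentation described above.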
Robotics engineers realized that therapy could employ not only one-joint robotic
devices with simple controllers, like active dynamometers, but also more
sophisticated robotic mechanisms with multiple joints and more advanced
controllers (Figure 2). Engineers began to delineate the possible benefits of robots, in a
way that aligned with many of the therapists' technological goals defined above (Table
1). Engineers also explicitly promoted the goal of partial automation: robots had the
potential to allow patients to practice some of the repetitive aspects of rehabilitation
therapy on their own, without the continuous presence of a rehabilitation therapist.
1.3. A Skeptical Reception by Some Clinicians, and a Collaborative Approach by
Others
Some clinicians expressed skepticism toward the idea that robots could help them meet
rehabilitation goals, and they had good reasons for their skepticism, including the
following points:
1) Robots cannot match therapists' expertise and skill. Therapy involves manual
skills that are learned over the course of years of experience under the
guidance of expert mentors. Some of these skills require sophisticated manual
manipulations of complex joints (e.g. mobilizing the patient's scapula). An
alert and perceptive therapist alters her therapy goals and assistance based on a
complex, ongoing consideration of the patient's state and progress. In brief:
hands-on therapy requires expertise and is complex; it seems doubtful that a
robot could replicate hands-on therapy effectively.
2) Robots are unsafe. Robots are dangerous because they can move patients'
limbs but, unlike human therapists, are not intelligent about or sensitive to
contra-indications to imposed movement. They could move a patient in a harmful way.
3) Robots might replace therapists. Also implicit in the dubious reception by
some therapists was a concern that robots might replace therapists, just as
patterns or work to build strength in those patterns? Bimanual, with two robots, or
unimanual? Should they have a functional goal?
The motions used by MIT-MANUS in the first clinical trials were unimanual
pointing movements in the horizontal plane [17]. The patient was instructed to move a
cursor to a target. After attaining the target, the target moved to a new location. The
robot helped the patient to make the movement to the target, following a normative
trajectory (minimum jerk trajectory) [17]. This type of paradigm had been used often
previously in motor control research. It required multiple-joint coordination, and was
functional in a sense, since pointing (or reaching) is a component of many activities of
daily living. MIME and the ARM Guide also focused on unimanual reaching
movements. MIME incorporated some bimanual reaching exercises also.
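The normative minimum jerk trajectory used in those trials has a standard closed form: position interpolates from start to target by the polynomial 10*tau^3 - 15*tau^4 + 6*tau^5 of normalized time tau, which gives zero velocity and acceleration at both endpoints. A minimal sketch of how such a reference trajectory could be generated follows; the 0.5 s, 20 cm reach is an invented example, not a parameter from the cited trials:

```python
def minimum_jerk(x0, xf, t, T):
    """Position at time t along a minimum jerk reach from x0 to xf of duration T."""
    tau = min(max(t / T, 0.0), 1.0)  # normalized time, clamped to [0, 1]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5  # smooth 0-to-1 blend
    return x0 + (xf - x0) * s

# Invented example: reference points for a 0.5 s reach from 0 cm to 20 cm.
# An assisting robot would servo the patient's hand toward the reference
# point for the current time whenever the patient lags behind it.
waypoints = [round(minimum_jerk(0.0, 20.0, i * 0.1, 0.5), 2) for i in range(6)]
```

Because the blend s starts and ends with zero slope, the assistance force ramps up and down smoothly rather than jerking the limb at movement onset and offset.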
Clinical testing of second-generation robotic therapy devices has essentially been
confirmatory of these findings, as reviewed in a recent systematic review [19].
Figure 3. Change in Fugl-Meyer Upper-Extremity Score with one to two months of training several hours
per week after chronic stroke, for three robotic devices (MIT-MANUS [17], MIME [4], and Gentle-S [20]),
and with conventional table-top exercise [21] and with the T-WREX non-robotic exoskeleton [21] (see Figure
4). The Fugl-Meyer score varies from 0 (complete paralysis) to 66 (normal movement ability).
1. The Question of Necessity: Was the robot necessary for the observed
therapeutic benefit? I think the clearest way to express this question is as
follows [22]: Consider a control group for which the motors of the robot are
removed but the joints are allowed to move freely such that the robot allows
movement but does not assist movement. The unactuated robot provides the
same audiovisual stimulation, and the control group undergoes a matched
duration of unactuated therapy. Would this control group recover less than a
group that exercised with the actuated robot? If not, this would suggest that
the robotic properties themselves (i.e. the programmable actuators) were
superfluous. This result is scientifically plausible because, with regard to
motor plasticity after stroke, we know that practice is a key (or perhaps the
key) stimulant for motor plasticity.
2. The Question of Optimization: If one accepts that the robotic properties of
robotic therapy help enhance recovery, a logical follow-up question is: how
sensitive are the therapeutic benefits to optimization of the robotic
parameters? The first robotic therapy devices elicited therapeutic benefits
comparable to each other, even though they were fairly different in their
design and approach (e.g. number of degrees of freedom, details of the form of
assistance provided, stiffness levels). Can tuning the robot geometry and
control algorithm increase the therapeutic benefits? Or will any reasonable
robot (or non-robotic therapy) give approximately the same result?
Figure 4. Recently developed robotic and non-robotic therapy devices. Upper left: NeReBot, a 5 DOF cable
robot that can be used next to a patient's bed [27]. Bottom left: ARMin, a highly responsive robot that allows
naturalistic arm movement, including shoulder translation [28]. Middle top: RUPERT, a lightweight
exoskeleton actuated with pneumatic muscles, which can be worn by the subject [29]. Middle bottom: T-WREX, a non-robotic arm support device [21]. Upper right: HWARD, a 3 DOF hand and wrist robot [23].
Lower right: a cable-driven glove that can be worn, driven by a motor or by the patient's shoulder shrugs
[30].
Other therapeutic paradigms besides active assistance are also being explored
including:
Error amplification strategies [36, 37]: The concept behind this approach
is that movement errors drive motor adaptation, and thus assistance may
be the wrong approach to take if the goal is to enhance motor adaptation,
since assistance reduces movement errors. Amplifying errors may
improve the rate or extent of motor adaptation by better provoking motor
plasticity. Clinically, this technique has only been shown to be effective
in reducing curvature errors during supported-arm reaching in the short term [38].
Virtual environments (see review: [39]): Another alternative therapeutic
paradigm, which differs from the active assistance paradigm that dominates
the field, is to use the robot to create a virtual environment that simulates
different therapeutic activities. In this paradigm, the robot may not
physically assist or resist movement, but instead just provide a training
environment that simulates reality. Potential advantages of training in a
haptic environment over training in physical reality include: a haptic
simulator can create many different interactive environments simulating a
wide range of real-life situations; quickly switch between these
environments without set-up time; automatically grade the difficulty
of the training environment by adding or removing virtual features; make
the environments more interesting than a typical rehabilitation
environment; automatically reset itself if virtual objects are dropped or
misplaced; and provide novel forms of visual and haptic feedback
regarding performance. In this haptic simulation framework, robotics may
benefit rehabilitation therapy not by provoking motor plasticity with
special assisting or resisting control schemes, but rather by providing a
diverse, salient, and convenient environment for semi-autonomous
training.
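The contrast between the dominant active assistance paradigm and error amplification can be sketched as two signs of the same proportional force law. The gain value and function name below are illustrative, not drawn from the cited studies:

```python
def training_force(error, mode, gain=10.0):
    """Force (N) applied by the robot given the hand's deviation 'error' (m)
    from the reference movement path.

    'assist' pushes the hand back toward the path, reducing the error the
    patient experiences; 'amplify' pushes it further away, exaggerating the
    error signal that is thought to drive motor adaptation.
    """
    if mode == "assist":
        return -gain * error
    if mode == "amplify":
        return gain * error
    raise ValueError("mode must be 'assist' or 'amplify'")
```

Under the error-driven-adaptation view summarized above, the 'assist' sign weakens the training signal while the 'amplify' sign strengthens it, which is the motivation behind the strategies of references [36-38].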
3. Rehabilitation Therapists are Accepting Robots as Scientific but not
Clinical Tools. A third trend is that while rehabilitation therapists are not
widely incorporating commercial robotic therapy devices into clinical use, they
are using robots in their research. The research therapists in the conference
that led to this book are setting the pace: they are doing groundbreaking
scientific work using robotics and related technology, as can be read in this
book's other chapters (see the chapter by Mataric, for example).
5. Conclusion
As mentioned in the Introduction, in the early 1990s there were only a handful of
robotic devices being developed for upper extremity training after stroke. In 2008, there
are dozens of devices being developed. However, robotic therapy has not become a
standard therapeutic treatment in most clinics. What impedes clinical acceptance?
One important factor is that the therapeutic benefits of robotic therapy are modest,
and have not been shown to be decisively better than other, less expensive approaches
that can partially automate therapy (Figure 1). In other words, the necessity question
remains unanswered. There is little motivation for most clinics to buy expensive robots
until it is proven that the robots yield therapeutic or cost benefits that are substantially
better than current approaches.
The field seems to be investing the majority of its resources in developing new
devices, rather than in understanding and optimizing the content of robotic therapy.
One explanation for this phenomenon is that there is a lack of devices for certain
movements and applications, such as hand movement and naturalistic arm movement,
and the new technology addresses this lack, as well as improving features such as
portability and force control response (Figure 4). But another possible factor is that
engineers like to build devices and are good at it. Engineers' motivation and expertise
for scientifically exploring the clinical effects of their devices are more limited, and this
may signal the need for an even greater role for clinician scientists.
The field will likely have to evolve to place more focus on scientific studies of the
mechanisms of motor plasticity to optimize technology, improve the benefits of robotic
therapy, and determine if routine clinical use makes sense. The question "What are
the maximum benefits that we can obtain with robotic therapy?" can be illustrated by a
boy playing with a stomp rocket (Figure 5). A dose of robotic therapy is like stomping
on the air bladder. The altitude that the rocket reaches is like the resulting improvement
in motor control. The boy can increase the rocket altitude by stomping harder, just like
a robotic device can increase recovery if it uses an optimal training paradigm, but there
is a limit to how high the rocket, and likely recovery also, can go. For upper extremity
recovery, the limit is probably dictated by the number of spared corticospinal neurons
following stroke. The limit for the rocket is well short of the Eiffel tower, despite the
perspective shown in Figure 5. Does a trick of perspective make us think that the limits
for recovery enhancement that are possible with robotic therapy are higher than they
really are, if indeed the amount of cell loss defines them?
Addressing the following two key questions would help answer this question, and
advance robotic therapy development:
1. What are the fundamental limits to the plasticity that can be provoked with
behavioral signals? Answering this question would define the limits we
should expect of robotic therapy optimization. It would thus allow us to
determine how much time to invest in optimizing robotic therapy itself. If the
cost function is relatively flat and we are already close to an optimum, it may
Figure 5. What are the maximum benefits that we can obtain with robotic therapy?
Acknowledgements
The contents of this chapter were developed in part with support from NIDRR
H133E070013 and NIH N01-HD-3-3352.
References
[1] B.R. Brewer, S.K. McDowell, and L.C. Worthen-Chaudhari, Poststroke upper extremity rehabilitation:
a review of robotic systems and clinical results, Top. Stroke Rehabilitation 14 (2007), 22-44.
[2] H.I. Krebs, N. Hogan, M.L. Aisen, and B.T. Volpe, Robot-aided neurorehabilitation, IEEE
Transactions on Neural Systems and Rehabilitation Engineering 6 (1998), 75-87.
[3] P.S. Lum, D.J. Reinkensmeyer, and S.L. Lehman, Robotic assist devices for bimanual physical therapy:
preliminary experiments, IEEE Transactions on Neural Systems and Rehabilitation Engineering 1
(1993), 185-191.
[4] P.S. Lum, C.G. Burgar, P.C. Shor, M. Majmundar, and M. Van der Loos, Robot-assisted movement training
compared with conventional therapy techniques for the rehabilitation of upper limb motor function
following stroke, Archives of Physical Medicine and Rehabilitation 83 (2002), 952-9.
[5] D. Reinkensmeyer, L. Kahn, M. Averbach, A. McKenna, B. Schmit, and W. Rymer, Understanding and
promoting arm movement recovery after chronic brain injury: Progress with the ARM Guide, Journal
of Rehabilitation Research & Development, submitted (2000).
[6] C.G. Atkeson, Learning arm kinematics and dynamics, Annual Review of Neuroscience 12 (1989), 157-183.
[7] S.M. Schmidt, L. Guo, and S.J. Scheer, Changes in the status of hospitalized stroke patients since
inception of the prospective payment system in 1983, Archives of Physical Medicine and Rehabilitation
83 (2002), 894-898.
[8] R.J. Nudo, B.M. Wise, F. SiFuentes, and G.W. Milliken, Neural substrates for the effects of
rehabilitative training on motor recovery after ischemic infarct, Science 272 (1996), 1791-1794.
[9] G. Kwakkel, R. van Peppen, R.C. Wagenaar, S. Wood-Dauphinee, C. Richards, A. Ashburn, K. Miller,
N. Lincoln, C. Partridge, I. Wellwood, and P. Langhorne, Effects of augmented exercise therapy time
after stroke: A meta-analysis, Stroke 35 (2004), 2529-2539.
[10] C.A. Trombly, Occupational Therapy for Dysfunction, 4th Edition, Baltimore: Williams and Wilkins,
1995.
[11] G. Gresham, P. Duncan, W. Stason, H. Adams, A. Adelman, D. Alexander, D. Bishop, L. Diller, N.
Donaldson, C. Granger, A. Holland, M. Kelly-Hayes, F. McDowell, L. Myers, M. Phipps, E. Roth, H.
Siebens, G. Tarvin, and C. Trombly, Post-Stroke Rehabilitation, Rockville, MD: U.S. Department of
Health and Human Services, Public Health Service, Agency for Health Care Policy and Research, 1995.
[12] M.M. Merzenich and W.M. Jenkins, Reorganization of cortical representations of the hand following
alterations of skin inputs induced by nerve injury, skin island transfers, and experience, Journal of
Hand Therapy 6 (1993), 89-104.
[13] D.J. Reinkensmeyer and S.J. Housman, "If I can't do it once, why do it a hundred times?": Connecting
volition to movement success in a virtual environment motivates people to exercise the arm after stroke,
Proc. Virtual Rehabilitation Conference (2007), 44-48.
[14] J.F. Israel, D.D. Campbell, J.H. Kahn, and T.G. Hornby, Metabolic costs and muscle activity patterns
during robotic- and therapist-assisted treadmill walking in individuals with incomplete spinal cord
injury, Physical Therapy 86 (2006), 1466-78.
[15] R. Shadmehr, and F.A. Mussa-Ivaldi, Adaptive representation of dynamics during learning of a motor
task, Journal of Neuroscience 14 (1994), 3208-3224.
[16] S. Hesse, C. Werner, M. Pohl, S. Rueckriem, J. Mehrholz, and M.L. Lingnau, Computerized arm
training improves the motor control of the severely affected arm after stroke: a single-blinded
randomized trial in two centers, Stroke 36 (2005), 1960-6.
[17] S. Fasoli, H. Krebs, J. Stein, W. Frontera, and N. Hogan, Effects of robotic therapy on motor
impairment and recovery in chronic stroke, Archives of Physical Medicine and Rehabilitation 84
(2003), 477-82.
[18] L.E. Kahn, M.L. Zygman, W.Z. Rymer, and D.J. Reinkensmeyer, Robot-assisted reaching exercise
promotes arm movement recovery in chronic hemiparetic stroke: A randomized controlled pilot study,
Journal of Neuroengineering and Neurorehabilitation 3 (12) (2006).
[19] G. Kwakkel, B.J. Kollen, and H.I. Krebs, Effects of robot-assisted therapy on upper limb recovery after
stroke: a systematic review, Neural Repair 22 (2008), 111-121.
[20] F. Amirabdollahian, R. Loureiro, E. Gradwell, C. Collin, W. Harwin, and G. Johnson, Multivariate
analysis of the Fugl-Meyer outcome measures assessing the effectiveness of GENTLE/S robot-mediated stroke therapy, Journal of Neuroengineering Rehabilitation 19 (2007), 4.
[21] S.J. Housman, V. Le, T. Rahman, R.J. Sanchez, and D.J. Reinkensmeyer, Arm-Training with T-WREX
after Chronic Stroke: Preliminary Results of a Randomized Controlled Trial, To Appear, 2007 IEEE
International Conference on Rehabilitation Robotics, 2007.
[22] L. Kahn, P. Lum, W. Rymer, and D. Reinkensmeyer, Robot-assisted movement training for the stroke-impaired arm: Does it matter what the robot does?, Journal of Rehabilitation Research and
Development 43 (2006), 619-630.
[23] C.D. Takahashi, L. Der-Yeghiaian, V. Le, R.R. Motiwala, and S.C. Cramer, Robot-based hand motor
therapy after stroke, Brain 131 (2008), 425-437.
[24] A.H.A. Stienen, E.E.G. Hekman, F.C.T. Van der Helm, G.B. Prange, M.J.A. Jannink, A.M.M. Aalsma,
and H. Van der Kooij, Freebal: dedicated gravity compensation for the upper extremities, IEEE 10th
International Conference on Rehabilitation Robotics (2007), 804-808.
[25] E. T. Wolbrecht, D.J. Reinkensmeyer, and J.E. Bobrow, Optimizing compliant, model-based robotic
assistance to promote neurorehabilitation, IEEE Transactions Neural Systems and Rehabilitation
Engineering 16 (2008), 286-297.
[26] M. Mihelj, T. Nef, and R. Riener, A novel paradigm for patient-cooperative control of upper-limb
rehabilitation robots, Advanced Robotics 21 (2007), 843-867.
[27] S. Masiero, A. Celia, G. Rosati, and M. Armani, Robotic-assisted rehabilitation of the upper limb after
acute stroke, Archives of Physical Medicine and Rehabilitation 88 (2007), 142-9.
[28] T. Nef, and R. Riener, ARMin: Design of a Novel Arm Rehabilitation Robot, Proceedings of the 2005
IEEE International Conference on Rehabilitation Robotics, Chicago, Illinois, 2005, pp. 57-60.
[29] T.G. Sugar, J. He, E.J. Koeneman, J.B. Koeneman, R. Herman, H. Huang, R.S. Schultz, D.E. Herring,
J. Wanberg, S. Balasubramanian, P. Swenson, and J.A. Ward, Design and control of RUPERT: a device
for robotic upper extremity repetitive therapy, IEEE Transactions Neural Systems and Rehabilitation
Engineering 15 (2007), 336-346.
[30] H.C. Fischer, K. Stubblefield, T. Kline, X. Luo, R.V. Kenyon, and D.G. Kamper, Hand rehabilitation
following stroke: a pilot study of assisted finger extension training in a virtual environment. Top Stroke
Rehabilitation 14 (1)(2007), 1-12.
[31] H. Krebs, J. Palazzolo, L. Dipietro, M. Ferraro, J. Krol, K. Rannekleiv, B. Volpe, and N. Hogan,
Rehabilitation robotics: performance-based progressive robot-assisted therapy, Autonomous Robots 15 (2003), 7-20.
[32] R. Riener, L. Lunenburger, S. Jezernik, M. Anderschitz, G. Colombo, and V. Dietz, Patient-cooperative
strategies for robot-aided treadmill training: first experimental results, IEEE Transactions Neural
Systems and Rehabilitation Engineering 13 (2005), 380-394.
[33] J.L. Emken, R. Benitez, and D.J. Reinkensmeyer, Human-robot cooperative movement training:
learning a novel sensory motor transformation during walking with robotic assistance-as-needed,
Journal of Neuroengineering Rehabilitation 4 (2007), 8.
[34] J.L. Emken, R. Benitez, A. Sideris, J.E. Bobrow, and D.J. Reinkensmeyer, Motor adaptation as a
greedy optimization of error and effort, Journal of Neurophysiology 97 (2007), 3997-4006.
[35] D.J. Reinkensmeyer, E. Wolbrecht, and J. Bobrow, A computational model of human-robot load
sharing during robot-assisted arm movement training after stroke, IEEE Engineering in Medicine and
Biology Society 2007 (2007), 4019-4023.
[36] J.L. Patton, M.E. Phillips-Stoykov, M. Stojakovich, and F.A. Mussa-Ivaldi, Evaluation of robotic
training forces that either enhance or reduce error in chronic hemiparetic stroke survivors, Experimental
Brain Research 168 (2005), 368-383.
[37] J.L. Emken, and D.J. Reinkensmeyer, Robot-enhanced motor learning: accelerating internal model
formation during locomotion by transient dynamic amplification, IEEE Transactions Neural Systems
and Rehabilitation Engineering 13 (2005), 33-9.
[38] J. Patton, M. Kovic, and F. Mussa-Ivaldi, Custom-designed haptic training for restoring reaching ability
to individuals with poststroke hemiparesis. Journal of Rehabilitation Research and Development 43
(2006), 643-56.
[39] J.L. Patton, G. Dawe, C. Scharver, F.A. Muss-Ivaldi, and R. Kenyon, Robotics and virtual reality: A
perfect marriage for motor control research and rehabilitation, Assistive Technology 18 (2006), 181195.
Introduction
Several studies demonstrate the importance of early, constant and intensive
rehabilitation following cerebrovascular accidents. This kind of therapy is
expensive in terms of human resources and time, and the increase in both the life
expectancy of the world population and the incidence of stroke is making the
administration of such therapies more and more important. The impairment of
upper limb function is one of the most common and challenging consequences
of stroke: it limits the patient's autonomy in daily living and may lead to
permanent disability [1]. Well-established traditional stroke rehabilitation
techniques rely on thorough and constant exercise [2, 3], which patients are
required to carry out within the hospital with the help of therapists, as well as
during daily life at home. Early initiation of active movements by means of
repetitive training has proved its efficacy in guaranteeing a good level of motor
recovery [4]. Such techniques allow stroke patients to partially or fully
recover motor function during the acute stroke phase, owing to the clinical
evidence of a period of rapid sensorimotor recovery in the first three months after
stroke, after which improvement occurs more gradually for a period of up to two
years and perhaps longer [5, 6]. However, after usual therapies, permanent
disabilities are likely to be present in the chronic phase; in particular,
satisfactory motor recovery is much harder to obtain for the upper extremities
than for the lower extremities [7].
Several studies have attempted to investigate the efficacy of stroke
rehabilitation approaches [8, 9]. Intensive and task-oriented therapy for the upper
limb, consisting of active, highly repetitive movements, is one of the most effective
approaches to arm function restoration [10, 11]. The driving motivation for applying
robotic technology to stroke rehabilitation is that it may overcome some of the
major limitations of manually assisted movement training, i.e. lack of
repeatability, lack of objective estimation of rehabilitation progress, and high
dependence on the availability of specialized personnel. Robotic devices for rehabilitation
can help reduce the costs associated with therapy and lead to new, effective
therapeutic procedures. In addition, Virtual Reality can provide a unique medium
in which therapy can be delivered within a functional, highly motivating context
that can be readily graded and documented. Cortical reorganization and the
associated functional motor recovery after Virtual Reality treatments in patients with
chronic stroke have also been documented by fMRI [12].
Among leg rehabilitation robots, Lokomat [13] has become a widespread
commercial lower limb robotic rehabilitation device. It is a motorized orthosis
able to guide hip and knee movements while the patient walks on a treadmill.
Concerning arm rehabilitation devices, both Cartesian and exoskeleton-based
devices have been developed over the last 10 years. MIT-Manus [14, 15] and its
commercial version InMotion2 [16] are pantograph-based planar manipulators,
which have been used extensively to train patients on reaching exercises and have
been constantly evaluated by means of clinical data analysis [17]. The device was
designed to be as backdrivable as possible and to have a nearly isotropic
inertia. The ARM Guide [18, 19] is a device which is attached to the patient's forearm
and guides the arm along a linear path whose angle with respect to the
horizontal can be varied. Constraint forces and range of motion are measured throughout
the exercises. The MIME (Mirror Image Movement Enabler) system [20] is a
bimanual robotic device based on an industrial PUMA 560 robot that applies
forces to the paretic limb during 3-dimensional movements. The system is able to
replicate the movements of the non-paretic limb.
Exoskeletons are robotic systems designed to work linked with parts of the
human body and, unlike conventional robots, are not designed to perform specific tasks
autonomously in their workspace [21]. The physical interaction between robot and
human must therefore be addressed primarily in terms of safety. The design
of exoskeleton systems stems from the opposite motivation: the robotic
structure is meant to remain constantly in contact with the human operator's limb. Such a
condition is required by several applications, including master robotic
arms for teleoperation, active orthoses and rehabilitation [22].
Experiments on exoskeletons were performed at JPL during the 1970s
[23]. Sarcos [24] developed a master arm used for the remote control of a robotic
arm, while at PERCRO arm exoskeletons have been developed for interaction with
virtual environments since 1994 [22, 25, 26]. Exoskeletons can be suitably
employed in robot-assisted rehabilitation [27].
is passive and allows free wrist pronation and supination movements. Moreover,
design optimizations allow total arm mobility to a healthy subject wearing the
device.
The structure of the L-Exos is open, the wrist being the only closed joint, and
can therefore be easily worn by post-stroke patients with the help of a therapist.
In order to use the L-Exos system for rehabilitation purposes, an adjustable-height
support was made, and a chair was placed in front of the device support so that
patients could be comfortably seated while performing the tasks. The handle
length is also tunable according to the patient's arm length.
After the subject dons the robotic device, his/her elbow is kept attached to the
robotic structure by means of a belt. If necessary, the wrist may also be tightly
attached to the device end-effector by means of a second belt, which was used for
patients who were not able to fully control their hand movements. A third belt can
easily be employed to restrain the patient's trunk when necessary.
The L-Exos device was integrated with a projector used to display, on a wide
screen placed in front of the patient, the different virtual scenarios in which the
rehabilitation exercises are performed. The VR display is therefore a monoscopic
screen on which a 3D scene is rendered. Three Virtual Rehabilitation scenarios were
developed using the XVR Development Studio [40]. The photo shown in Figure 2
was taken during a therapy session, while one of the admitted patients was
performing the required exercises, and illustrates the final clinical setup.
2. Methods
A clinical pilot study involving 9 subjects, with the main objective of validating
robot-assisted therapy with the L-Exos system, was carried out at the Santa Chiara
Hospital of Pisa, Italy, between March and August 2007. Potential subjects to be
enrolled in the clinical protocol were invited to take part in a preliminary test
session used to evaluate patients' acceptance of the device. Most of the patients
gave enthusiastically positive feedback about the opportunity.
Patients who were declared fit for the protocol and agreed to sign an informed
consent form concerning the novel therapy scheme were admitted to the clinical
trials. The protocol consisted of 3 one-hour rehabilitation sessions per week for a
total of six weeks (i.e., 18 therapy sessions). Each rehabilitation session consisted of
three different VR-mediated exercises. A brief description of the goal of each
exercise is provided in the next paragraphs, whereas a more detailed
description of the VR scenarios may be found in previous works [35,
36]. Some relevant control issues concerning the proposed exercises are
reported as well.
The patient sat as shown in Figure 3(D), wearing the exoskeleton on his/her right
arm, with a video projector displaying the virtual scenario frontally. A preliminary
clinical test was conducted to evaluate the ergonomics of the system and its
functionality as a rehabilitation device on a set of three different
applications. The test was intended to demonstrate that the L-Exos could be
successfully employed by a patient, and to measure the expected performance
during therapy.
To assess the functionality of the device, three different scenarios and
corresponding exercises were devised:
- A reaching task;
- A motion task constrained to a circular trajectory;
- An object manipulation task.
The tasks were designed to be executed in succession within one therapy session
lasting about one hour, repeated three times per week.
Figure 3. The arm exoskeleton during the execution of the reaching task. A: the starting position of the
reaching task; B: a subject in the middle of the path of the reaching task; C: a subject at the end-point of
the path of the reaching task; D: The overall system.
the task (h1 = 0.01 m and h2 = 0.12 m). During each series, the height of the fixed
target is not changed, and the following steps are executed in succession for each
series:
1) The first movement is executed towards the leftmost fixed target;
2) Once the fixed target is reached, the moving marker returns to its start
position, stops for 2 seconds, and then starts again towards the next
target on the right;
3) The last target of each series is the rightmost one.
In order to let the patient actively conduct the task, with the robot guiding
him/her passively only when he/she is unable to complete the reaching
movement, a suitable impedance control was developed. The control of the device is
based on two concurrent impedance controllers acting along the tangential
and orthogonal directions to the trajectory, respectively.
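As an illustration, such a decomposition into two concurrent impedance controllers can be sketched as follows. This is a minimal sketch, not the actual L-Exos controller: the gains `k_tan` and `k_orth`, the straight-line path, and the choice to assist tangentially only when the patient lags behind the moving reference are all assumptions made for illustration.

```python
import numpy as np

def assistance_force(p, start, target, ref, k_tan=20.0, k_orth=300.0):
    """Force from two concurrent impedance controllers (illustrative gains).

    p      : current end-effector position (3,)
    start  : start point of the straight reaching path (3,)
    target : end point of the path (3,)
    ref    : moving reference point on the path (the marker the patient follows)
    """
    u = target - start
    u = u / np.linalg.norm(u)            # unit tangent of the nominal path
    e = ref - p                          # error w.r.t. the moving reference
    e_tan = np.dot(e, u) * u             # component along the trajectory
    e_orth = e - e_tan                   # component orthogonal to it
    # The orthogonal spring keeps the hand on the path; the tangential spring
    # pushes only when the patient lags behind the reference (positive lag).
    f = k_orth * e_orth
    if np.dot(e_tan, u) > 0.0:
        f = f + k_tan * e_tan
    return f
```

A stiff orthogonal gain with a soft tangential gain reproduces the behavior described in the text: the patient conducts the movement, and the robot intervenes mainly to keep the hand on the path.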
2.2. Constrained motion task
In the second exercise the patient is asked to move freely along a circular trajectory,
as shown in Figure 6, to which he/she is constrained by an impedance control. The virtual
constraint is activated through a button located on the handle. The position, orientation
and scale of the circular trajectory can be changed online, thus allowing the patient
to move within different effective workspaces. No guiding force is applied to the
patient's limb as long as he/she moves along the given trajectory, to which the
patient is constrained by means of virtual springs.
Figure 5. The motion profile to be followed by the patient in the reaching task.
In this task too, the therapist can have the device actively compensate the weight
of the patient's arm until the patient is able to perform the task autonomously.
This is accomplished by applying torques at the joints, based on a model of the
human arm with masses distributed along the limb segments in proportions
derived from anatomical data. The mass of each segment is determined according
to the weight of the subject.
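A minimal sketch of this kind of model-based weight compensation for a two-link arm is given below. The segment-mass fractions, segment lengths and mid-segment centers of mass are illustrative values in the spirit of standard anthropometric tables, not the parameters actually used by the device.

```python
import numpy as np

# Illustrative segment-mass fractions in the spirit of standard anthropometric
# tables (NOT the values used by the L-Exos controller, which are not given).
M_UPPER = 0.028   # upper arm mass / body mass
M_FORE  = 0.022   # forearm + hand mass / body mass

def gravity_torques(q1, q2, body_mass, l_upper=0.30, l_fore=0.27, g=9.81):
    """Shoulder/elbow torques that statically balance the arm's weight.

    q1 : shoulder elevation angle from the vertical (rad)
    q2 : elbow flexion angle relative to the upper arm (rad)
    Segment centers of mass are assumed at mid-segment (a simplification).
    """
    m1 = M_UPPER * body_mass
    m2 = M_FORE * body_mass
    s1, s12 = np.sin(q1), np.sin(q1 + q2)
    # Torque of each segment's weight about the shoulder and elbow joints
    tau_shoulder = g * (m1 * 0.5 * l_upper * s1
                        + m2 * (l_upper * s1 + 0.5 * l_fore * s12))
    tau_elbow = g * m2 * 0.5 * l_fore * s12
    return tau_shoulder, tau_elbow
```

With the arm hanging vertically (q1 = q2 = 0) both torques vanish, as expected for a static gravity model; scaling the segment masses by the subject's body weight mirrors the personalization described in the text.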
2.3. Free motion task
In this task the patient is asked to move cubes represented in the virtual
environment, as shown for instance in Figure 7, and to arrange them in an order
decided by the therapist, e.g. placing the cubes with the same symbol or the
same color in a row, or putting together the fragments of one image.
For this task the device is controlled with direct force control, the
interaction force being computed by a physics module based on the Ageia PhysX physics
engine [42]. By pressing a button on the handle, the patient can select
which cube he/she wants to move, and can release the cube through the same button.
Collisions with and among the objects are simulated by the physics engine, so that
all contact forces can actually be perceived during the simulation.
In this task too, the device can apply active compensation of the weight of
the patient's arm, leaving to the therapist the decision on the amount of
weight reduction.
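One control step of this task can be sketched as below. The Jacobian-transpose mapping of the physics-engine force and the blending coefficient `alpha` are illustrative assumptions, since the chapter does not detail the controller's internals.

```python
import numpy as np

def command_torques(J, f_physics, tau_gravity, alpha):
    """One control step of the free motion task (illustrative sketch).

    J           : 3 x n Jacobian of the handle position w.r.t. joint angles
    f_physics   : contact force on the handle computed by the physics engine
    tau_gravity : joint torques that would fully balance the arm's weight
    alpha       : fraction of weight compensation chosen by the therapist (0..1)
    """
    alpha = float(np.clip(alpha, 0.0, 1.0))
    # Contact forces are rendered through the Jacobian transpose; gravity
    # compensation is blended in with the therapist-selected fraction.
    return J.T @ f_physics + alpha * tau_gravity
```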
3. Therapy results
The following paragraphs describe the metrics used to quantitatively
evaluate patients' performance in the reaching task and in the path following task.
No quantitative data were computed for the last proposed task. The first
obvious candidate measure, task completion time, was judged not significant for
evaluating patient performance improvements. This was due to the high variability
of the task difficulty among different therapy sessions (the initial cube
arrangement was randomly chosen by the control PC), and to the high variability in
when patients considered the exercise completed, i.e. in the amount of cube
misalignment they accepted and hence in the amount of time spent on fine
movements to reduce such misalignment.
3.1. Reaching task
Figure 8 shows a typical path followed by a patient during the reaching task. The
cumulative error for each task was chosen as the most significant metric for
analyzing reaching data. After the definition of a target position and of a nominal
task speed, the cumulative error in the reaching task is computed for the iterations
corresponding to the given target position and speed. The cumulative error curves
are then fitted, in a least-squares sense, by the sigmoid-like 3-parameter curve of
Eq. (1), where s is the cumulative error at time t and a, b and
c are the fitting parameters.
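The exact form of Eq. (1) is not reproduced in this copy; one common 3-parameter sigmoid consistent with the description (cumulative error s at time t, fitting parameters a, b and c) is the logistic form s(t) = a / (1 + exp(-(t - b) / c)). The sketch below fits that stand-in, in a least-squares sense, to synthetic samples; both the functional form and the data are assumptions for illustration, not the study's.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, a, b, c):
    """Hypothetical 3-parameter sigmoid: a stand-in for the paper's Eq. (1)."""
    return a / (1.0 + np.exp(-(t - b) / c))

# Synthetic cumulative-error samples (illustrative, not study data)
t = np.linspace(0.0, 10.0, 100)
rng = np.random.default_rng(0)
s_meas = sigmoid(t, 5.0, 4.0, 1.0) + rng.normal(0.0, 0.05, t.size)

# Least-squares fit, as done for each (target, speed) iteration set;
# the initial guess uses simple heuristics (plateau height, mid-time).
popt, _ = curve_fit(sigmoid, t, s_meas, p0=(s_meas.max(), np.median(t), 1.0))
```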
The fitting curves are then grouped and averaged on a therapy-session basis, each
set containing the fitting curves computed for a single rehabilitation session.
Sample data resulting from this kind of analysis are shown in Figure 9, where a
longer dash step indicates a later day on which a given target was required to be
reached with a given peak speed.
It must be noted that statistically significant improvements in the average fitting
curves from Week 1 to Week 6 are recognizable, for more than half of the targets, in
only 4 out of the 9 patients enrolled in the protocol. A typical improvement pattern
for a sample target is shown in Panel A of Figure 9 for Patient 6. This patient
constantly improved his performance in the exercise, leading to a significant
decrease in the final cumulative error for a given target. A reduction of the mean
slope of the central segment of the fitting curve is therefore present, indicating
a higher ability to maintain a constant average error throughout the task.
Figure 8. Typical path followed during a reaching task. Blue straight line: ideal trajectory; red: actual
trajectory.
Panel B of Figure 9 reveals an interesting aspect of the application of the belt
used to prevent undesired trunk movements. During the first therapy sessions no belt
was present, and each session registered a comparable value of the
cumulative error. As soon as the trunk belt was introduced, the error increased
dramatically, since the formerly employed compensatory strategies were no longer
allowed. However, because active patient movements become much more
stimulated, the cumulative error fitting curve then improves significantly. It is
worth noting that, by the end of the therapy, values nearly comparable to the
ones obtained in the no-belt condition were reached.
3.2. Path following task
The total time required to complete a full circular path was the quantitative
parameter used to assess patient improvement in the constrained motion task. 3D
position data were projected onto a best-fitting plane (in the least-squares sense),
and the best-fit circle was computed for the projected points. The time to complete
a turn was then evaluated along this trajectory. Curvature along the trajectory,
which is irregular for these patients, was not evaluated. In particular, owing to the
deliberately low value of the stiffness realizing the motion constraint, patients
sometimes move in an unstable way, bouncing from the internal side of the
trajectory to the external side and vice versa, and requiring some time to regain
control of their movements. This behavior has detrimental effects on curvature computation.
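The plane-projection and circle-fitting steps can be sketched as follows. The SVD plane fit and the algebraic (Kåsa) circle fit are standard choices assumed here, since the chapter does not name the specific fitting method used.

```python
import numpy as np

def fit_plane_and_circle(points):
    """Project 3D samples onto their least-squares plane and fit a circle.

    points : (n, 3) array of handle positions.  Returns the circle centre
    (in plane coordinates, relative to the centroid) and the radius.
    Plane fitting uses the SVD of the centred data; the circle fit is the
    algebraic (Kasa) least-squares fit.
    """
    centroid = points.mean(axis=0)
    q = points - centroid
    # Right singular vectors: the first two span the best-fitting plane,
    # the third is its normal.
    _, _, vt = np.linalg.svd(q, full_matrices=False)
    xy = q @ vt[:2].T                      # 2D coordinates in the plane
    # Kasa fit: solve  2*cx*x + 2*cy*y + d = x^2 + y^2  in least squares
    A = np.column_stack([2.0 * xy, np.ones(len(xy))])
    b = (xy ** 2).sum(axis=1)
    (cx, cy, d), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(d + cx ** 2 + cy ** 2)
    return (cx, cy), r
```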
Although three of the patients showed no significant decrease of the completion
time from Week 1 to Week 6, three patients showed a decrease of about 50% in
task completion time, and three other patients a decrease of about 70% in the
same performance indicator. Such results are statistically significant
(p < 0.001, Student's t-test, for each patient showing improvements).
Sample data from Patient 3 are shown in Figure 10 to visualize the
typical trend found in the patients reporting improvements in the constrained
motion exercise. It is interesting to note that, along with the significant
reduction in the mean time required to complete a circle, a significant reduction of
Figure 9. A: sample reaching results for Patient 6; B: sample reaching results for Patient 3.
4. Clinical results
All patients were evaluated by means of standard clinical evaluation scales:
- Fugl-Meyer scale: this scale [43] is used for the evaluation of motor function,
balance, and some sensation qualities and joint function in hemiplegic patients.
The Fugl-Meyer assessment method applies a cumulative numerical score. The
whole scale consists of 50 items, for a total of 100 points, each item being
evaluated in a range from 0 to 2. 33 items concern upper limb functions (for a
total of 66 points) and are used for the clinical evaluations.
- Modified Ashworth scale: this is the most widely used method for assessing
muscle spasticity in clinical practice and research. Its items are marked with a
score ranging from 0 to 5: the greater the score, the greater the spasticity level.
Only patients with Modified Ashworth scale values ≤ 2 were admitted to this
study.
- Range of Motion: the most classical and evident parameter used to assess the
motor capabilities of impaired patients.
Clinical improvements in each scale have been observed by the end of the
therapy protocol for every patient, and they will now be discussed.
4.1. Fugl-Meyer assessment
The Fugl-Meyer assessment was carried out before and after the robotic therapy.
Every patient showed a significant increment, ranging from 1 to 8 points, with an
average increment of 4 points (out of 66) (p < 0.005, paired Student's t-test). Such
results are fully comparable with the results reported in the scientific
literature [33].
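A paired Student's t-test of this kind can be reproduced as in the sketch below; the scores are hypothetical illustrative numbers for 9 patients, not the study's actual data.

```python
from scipy import stats

# Hypothetical pre/post Fugl-Meyer upper-limb scores for 9 patients
# (illustrative numbers only, NOT the study's data)
pre  = [22, 35, 18, 41, 30, 27, 50, 33, 25]
post = [26, 39, 19, 45, 36, 31, 54, 34, 30]

# Paired Student's t-test on the pre/post increments
t_stat, p_value = stats.ttest_rel(post, pre)
```

A paired test is appropriate here because each patient serves as his/her own control, so only the within-patient increments matter.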
4.2. Ashworth assessment
Slight decrements of some values of the Modified Ashworth scale may be found
by examining the detailed clinician assessments. The following improvement index
was defined for each value of the Ashworth scale:
5. Conclusions
The L-Exos system, a 5-DoF haptic exoskeleton for the right arm, was
successfully tested clinically on a group of nine chronic stroke patients with upper
limb motor impairments. In particular, the extended clinical trial presented in this
paper consisted of a 6-week protocol involving three one-hour robot-mediated
rehabilitation sessions per week.
Although most of the patients enthusiastically reported major subjective benefits
in Activities of Daily Living after the robotic treatment, it must be noted that no
general correlation has yet been found between such reported benefits and the
performance improvements measured in the proposed exercises. In other words,
patients who improve on the
References
[1]
[2]
[3]
[4]
[5]
[6]
[7]
[8]
[9]
[10]
[11]
[12]
[13]
[14]
[15]
[16] J Stein, H I Krebs, W R Frontera, S E Fasoli, R Hughes and N Hogan, Comparison of two
techniques of robot-aided upper limb exercise training after stroke, Am J Phys Med Rehabil 83 (9)
(2004), 720-728.
[17] S E Fasoli, H I Krebs, J Stein, W R Frontera and N Hogan, Effects of robotic therapy on motor
impairment and recovery in chronic stroke, Arch Phys Med Rehabil 84 (4) (2003), 477-482.
[18] D J Reinkensmeyer, J P A Dewald and W Z Rymer, Guidance-Based Quantification of Arm
Impairment Following Brain Injury: A Pilot Study, IEEE Transactions on Rehabilitation
Engineering 7 (1) (1999), 1.
[19] D J Reinkensmeyer, L E Kahn, M Averbuch, A McKenna-Cole, B D Schmit and W Z Rymer,
Understanding and treating arm movement impairment after chronic brain injury: progress with the
ARM guide, J Rehabil Res Dev 37 (6) (2000), 653-662.
[20] P S Lum, C G Burgar, P C Shor, M Majmundar and M Van der Loos, Robot-assisted movement
training compared with conventional therapy techniques for the rehabilitation of upper-limb motor
function after stroke, Arch Phys Med Rehabil 83 (7) (2002), 952-959.
[21] C A Avizzano, and M Bergamasco, Technological Aids for the treatment of tremor. Sixth
International Conference on Rehabilitation Robotics (ICORR), (1999).
[22] M Bergamasco, Force replication to the human operator: the development of arm and hand
exoskeletons as haptic interfaces, Proceedings of 7th International Symposium on Robotics
Research, (1997).
[23] B M Jau, Anthropomorphic Exoskeleton dual arm/hand telerobot controller, IEEE International
Workshop on Intelligent Robots (1988), 715-718.
[24] A Nahvi, D D Nelson, J M Hollerbach and D E Johnson, Haptic manipulation of virtual
mechanisms from mechanical CAD designs, Proceedings of the IEEE International Conference on
Robotics and Automation (1998).
[25] M Bergamasco, B Allotta, L Bosio, L Ferretti, G Parrini, G M Prisco, F Salsedo and G Sartini, An
arm exoskeleton system for teleoperation and virtual environments applications, IEEE Int. Conf.
on Robotics and Automation (1994), 1449-1454.
[26] A Frisoli, F Rocchi, S Marcheschi, A Dettori, F Salsedo and M Bergamasco, A new force-feedback
arm exoskeleton for haptic interaction in virtual environments, WHC 2005, First Joint Eurohaptics
Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator
Systems (2005), 195-201.
[27] T Nef and R Riener, ARMin - Design of a Novel Arm Rehabilitation Robot, ICORR 2005, 9th
International Conference on Rehabilitation Robotics (2005), 57-60.
[28] K Kiguchi, S Kariya, K Watanabe, K Izumi and T Fukuda, An Exoskeletal Robot for Human
Elbow Motion Support - Sensor Fusion, Adaptation, and Control, IEEE Transactions on Systems,
Man and Cybernetics - Part B: Cybernetics 31 (3) (2001), 353.
[29] K Kiguchi, K Iwami, M Yasuda, K Watanabe and T Fukuda, An exoskeletal robot for human
shoulder joint motion assist, IEEE/ASME Transactions on Mechatronics 8 (1) (2003), 125-135.
[30] R Riener, T Nef and G Colombo, Robot-aided neurorehabilitation of the upper extremities,
Medical and Biological Engineering and Computing 43 (1) (2005), 2-10.
[31] T Nef and R Riener, ARMin - Design of a Novel Arm Rehabilitation Robot, 9th
International Conference on Rehabilitation Robotics, ICORR (2005), 57-60.
[32] N G Tsagarakis and D G Caldwell, Development and Control of a Soft-Actuated Exoskeleton for
Use in Physiotherapy and Training, Autonomous Robots 15 (1) (2003), 21-33.
[33] G B Prange, M J Jannink, C G Groothuis-Oudshoorn, H J Hermens and M J Ijzerman, Systematic
review of the effect of robot-aided therapy on recovery of the hemiparetic arm after stroke, J
Rehabil Res Dev 43 (2) (2006), 171-184.
[34] F Salsedo, A Dettori, A Frisoli, F Rocchi, M Bergamasco and M Franceschini, Exoskeleton
Interface Apparatus.
[35] A Frisoli, L Borelli, A Montagner, S Marcheschi, C Procopio, F Salsedo, M Bergamasco, M
Carboncini, M Tolaini and B Rossi, Arm rehabilitation with a robotic exoskeleton in Virtual
Reality, ICORR 2007, 10th International Conference on Rehabilitation Robotics (2007), 631-642.
[36] A Montagner, A Frisoli, L Borelli, C Procopio, M Bergamasco, M Carboncini and B Rossi, A pilot
clinical study on robotic assisted rehabilitation in VR with an arm exoskeleton device, Virtual
Rehabilitation (2007).
[37] A Frisoli, F Rocchi, S Marcheschi, A Dettori, F Salsedo and M Bergamasco, A new force-feedback
arm exoskeleton for haptic interaction in virtual environments, Proceedings of the IEEE WorldHaptics
Conference (2005), 195-201.
[38] S Marcheschi, A Frisoli, C Avizzano and M Bergamasco, A Method for Modeling and Control
Complex Tendon Transmissions in Haptic Interfaces, Proceedings of the 2005 IEEE International
Conference on Robotics and Automation (2005), 1773-1778.
[39] M Cirstea and M Levin, Compensatory strategies for reaching in stroke, Brain 123 (5) (2000),
940-953.
[40] E Ruffaldi, A Frisoli, M Bergamasco, C Gottlieb and F Tecchia, A haptic toolkit for the
development of immersive and web-enabled games, Proceedings of the ACM Symposium on
Virtual Reality Software and Technology (2006), 320-323.
[41] D J Reinkensmeyer, L E Kahn, M Averbuch, A McKenna-Cole, B D Schmit and W Z Rymer,
Understanding and treating arm movement impairment after chronic brain injury: progress with the
ARM guide, J Rehabil Res Dev 37 (6) (2000), 653-662.
[42] http://www.ageia.com/
[43] A Fugl-Meyer, L Jaasko, I Leyman, S Olsson and S Steglind, The post-stroke hemiplegic patient.
A method for evaluation of physical performance, Scand J Rehabil Med 7 (1) (1975), 13-31.
Abstract. The disability deriving from stroke has a heavy economic and social
impact on Western countries, because stroke survivors commonly experience
various degrees of reduced autonomy in the activities of daily living.
Recent developments in neuroscience, neurophysiology and computational science
have led to innovative theories about the brain mechanisms of the motor system.
As a result, innovative, scientifically grounded therapeutic strategies have begun
to arise in the rehabilitation field. Promising results from the application of a
virtual reality based technique for arm rehabilitation are reported.
Keywords. Stroke, Rehabilitation, Motor Learning and Control, Augmented
Feedback, Virtual Reality
Introduction
Stroke is a leading cause of death and disability for men and women of all ages, classes,
and ethnic origins worldwide. Several epidemiological surveys have been conducted on
cerebrovascular disease, especially in the United States, where 500,000 new strokes
occur each year, causing 100,000 deaths and leaving residual disability in 300,000
survivors. Moreover, approximately 3 million Americans have survived a stroke with
some degree of residual disability [1, 3].
Within 2 weeks after stroke, hemiparesis is present in 70-85% of patients, and
between 40% and 75% of patients are completely dependent in their activities of daily
living [4]. There is a lack of epidemiological data for European countries, although in
the United Kingdom the Oxfordshire Community Stroke Project (1983) reported an
annual incidence of 500 new cases in a community of 250,000 people, with a peak in
people older than 75 years [5]. In a recent study conducted in Norway, a total annual
incidence of 2.21 strokes per 1000 people was reported. This rate is congruent with
other European countries, suggesting that there are no regional variations within Western
Europe [6].
Estimates of the total cost of stroke are highly variable, owing to the difficulty of
calculating the indirect costs resulting from disability and mortality. A 1993
estimate placed the total annual cost of stroke at $30 billion in the United States, of
which $17 billion were direct costs (hospital, physician, rehabilitation, equipment) and
$13 billion indirect costs (lost productivity) [7].
The main cost of stroke survivors is related to their residual motor disabilities, which
interfere with personal, social and/or productive activities. Surprisingly, there are few
therapeutic approaches for restoring lost functions. Current efforts in rehabilitation
are therefore directed at developing treatments closely related to motor
learning principles.
Recently, the development of tools for the quantitative analysis of motor deficits has
provided the opportunity to increase the amount of data available in clinical practice,
allowing human motor behavior to be studied more thoroughly, with important
practical implications. First of all, it will be possible to infer the anatomical
structures that modulate the different elements of motor control. Furthermore, it may
help to better characterize motor deficits and, as a consequence, to plan individually
tailored therapeutic approaches. Finally, the quantitative analysis of movement may
make it possible to monitor pharmacological therapies (i.e. drugs interacting with
central neurotransmitter levels) that could modify human motor behavior [8, 9].
1. Rationale
1.1 Neurophysiology of motor learning
Research on the physiological underpinnings of movement dynamics has traditionally
focused most extensively on the primary motor cortex (M1), pointing out that neurons
in M1 are modulated by external dynamic perturbations. Some investigators [10]
indicate that several premotor areas feed M1, which then projects to the spinal cord.
These areas are intensely interconnected with each other and contribute in parallel to
the control of movement [11].
Other work on primates demonstrated that several cortical cells in motor and premotor
areas respond selectively to kinematic variations during motor adaptation tasks.
These cells, clearly identified in the monkey SMA, are involved in the process of
kinematics-to-dynamics transformation, and hence in the learning of new motor tasks
[11]. Doya et al. [11, 12] proposed further correlations between the motor learning
problem and circuitry at the cortical level, suggesting that different brain areas are
involved in three different kinds of learning mechanisms: supervised learning,
reinforcement learning and unsupervised learning.
The cerebellum is thought to be involved in the real-time fine tuning of movement
by means of its feed-forward structure, based on the massive synaptic convergence of
granule cell axons (parallel fibers) onto Purkinje cells, which send inhibitory
connections to the deep cerebellar nuclei and to the inferior olive. The cerebellar
circuit is capable of implementing the supervised learning paradigm, which consists
of error-driven learning behaviors. Reinforcement learning is based on the multiple
inhibitory pathways of the basal ganglia, which permit the reward-predicting activity
of dopamine neurons and the change of behavior in the course of goal-directed task
learning. The extremely complex anatomical features of the cortex suggest that
information coding is established by an unsupervised learning paradigm, in which
activity is determined by the Hebbian rule of synaptic updating. In this paradigm the
environment provides input but gives neither desired targets nor any measure of
reward or punishment [11, 12].
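The Hebbian rule mentioned above can be written as Δw = η·y·x: the weight change depends only on pre- and postsynaptic activity, with no teacher or reward signal. A minimal sketch follows (the learning rate η and the plain, unnormalized form are illustrative assumptions):

```python
import numpy as np

def hebbian_step(w, x, eta=0.01):
    """One synaptic update under the plain Hebbian rule dw = eta * y * x.

    w : synaptic weight vector; x : presynaptic activity; y = w.x is the
    postsynaptic activity.  No desired target and no reward enters the
    update, which is what makes the paradigm unsupervised.
    """
    y = np.dot(w, x)
    return w + eta * y * x
```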
Recent neurophysiologic studies demonstrated that some natural complex systems
have a discrete combinatorial architecture that utilizes finite numbers of primary
elements to create larger structures, such as motor primitives in the spinal cord [13]. Poggio and
Bizzi [14] hypothesized a hierarchical architecture where the motor cortex is endowed
with functional modular structures that change their directional tuning during
adaptation, visuo-motor learning, exposure to mechanical load and reorganization after
lesions, i.e. the circuits of interneurons acting as central pattern generators, unit burst
generators, and spinal motor primitives contributing to motor learning. In the latter case
the force fields stored as synaptic weights in the spinal cord may be viewed as representing
motor field primitives from which, through linear superimposition, a vast number of
movements can be fashioned by impulses conveyed by supraspinal and reflex pathways
[14]. Computational analysis [15] verifies that this proposed mechanism is capable of
learning and controlling a wide repertoire of motor behaviors. This hypothesis suggests
that the cortical lesion induced by a stroke could modify the hierarchical architecture
with negative influences on learning and controlling new motor behaviors.
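The linear-superposition idea of [14, 15] can be sketched numerically. This is a toy construction of our own, not the authors' model: each primitive is taken as a linear force field converging on its own equilibrium point, and descending commands select the mixing coefficients.

```python
# Minimal sketch of the force-field-primitive hypothesis: primitives are
# convergent fields, and supraspinal impulses combine them linearly.
# Equilibria, stiffness and gains are invented for illustration.

def primitive_field(equilibrium, stiffness=1.0):
    """Return a 2D force field converging on the given equilibrium point."""
    ex, ey = equilibrium
    return lambda x, y: (stiffness * (ex - x), stiffness * (ey - y))

def superpose(fields, coeffs):
    """Linear superposition of primitive fields with descending gains c_i."""
    def field(x, y):
        fx = sum(c * f(x, y)[0] for c, f in zip(coeffs, fields))
        fy = sum(c * f(x, y)[1] for c, f in zip(coeffs, fields))
        return fx, fy
    return field

# Two primitives pulling toward (0, 0) and (1, 0); equal gains move the
# combined equilibrium to the midpoint (0.5, 0) -- a "new" movement goal
# fashioned without adding any new primitive.
combined = superpose([primitive_field((0.0, 0.0)),
                      primitive_field((1.0, 0.0))],
                     [0.5, 0.5])
```

Varying only the gains moves the combined equilibrium continuously, which is the sense in which a vast repertoire of movements can be fashioned from a finite set of primitives.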
1.2 Neurophysiopathology of stroke lesion
From the physiopathologic perspective, considerable evidence has demonstrated that
the location of the stroke lesion is related to the severity of the upper limb motor
deficit. Specifically it is
argued that patients with cortical stroke have a better motor outcome than patients with
subcortical stroke. Furthermore, patients with mixed cortical plus subcortical stroke
tended to improve more than patients with pure subcortical stroke despite the expected
larger size of mixed lesions. Although subcortical strokes are normally smaller than
cortical strokes, they are more likely to involve primary (from M1) and secondary
motor pathways (from SMA and premotor area, PMA). The descending fibers from
primary and secondary motor areas converge in the internal capsule maintaining their
somatotopic distribution. Consequently, even small subcortical lesions produce
devastating motor effects. The probability of upper limb motor recovery after stroke is
hence linked strictly with the anatomical lesion: 75% for patients with lesions restricted
to the cortex (M1, PMA, SMA); 38.5% for those with subcortical or mixed cortical plus
subcortical lesions not affecting the posterior limb of the internal capsule (PLIC); and
3.6% for those with involvement of the PLIC plus adjacent corona radiata, basal
ganglia or thalamus [16].
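The recovery figures from [16] quoted above can be summarized as a simple lookup; the category labels are our own shorthand.

```python
# Probability of upper limb motor recovery by lesion location, as reported
# in Shelton and Reding [16]. Key names are our shorthand labels.

RECOVERY_PROBABILITY = {
    "cortical": 0.75,                  # lesion restricted to cortex (M1, PMA, SMA)
    "subcortical_sparing_PLIC": 0.385, # subcortical or mixed lesions sparing the PLIC
    "PLIC_involved": 0.036,            # PLIC plus corona radiata, basal ganglia or thalamus
}
```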
1.3 Computational approach to upper limb rehabilitation
The computational approach to the motor system is a powerful analytical tool in
neuroscience, offering the opportunity to unify experimental data within a
theoretical framework. From the computational perspective, motor behavior is regarded
as the manifestation of an engineering system whose basic task is to manage the
relationship between motor commands and sensory feedback. This management is
necessary for two reasons:
1.
2.
Recently, Han et al. developed a computational model for bilateral hand use in arm
reaching movements to study the interactions between adaptive decision making and
motor learning after motor cortex lesion [17]. This model combines a biologically
plausible neural model of the motor cortex with a non-neural model of reward-based
decision making and physical therapy intervention. The model demonstrated that in the
damaged cortex, during therapy, the supervised learning rules ensured that
underrepresented directions of movement were repopulated, thereby decreasing
average reaching errors.
The authors suggested that after stroke, if no therapy is given, plasticity due to
unsupervised learning may become maladaptive, thereby augmenting the stroke's
negative effect. They also indicated that there is a threshold for the amount of therapy
based on three types of learning mechanisms (unsupervised, supervised and
reinforcement) required for the recovery process; below this threshold motor retraining
is in vain. In other words, use of the arm remains absent or exiguous, exhibiting the
learned non-use phenomenon. In the absence of supervised or reinforcement learning,
subsequent motor performance worsens with any amount of rehabilitation trials. On the
contrary, if unsupervised learning is not present, motor performance improves with any
amount of rehabilitation trials in the late period.
1.4 Virtual reality as an emerging therapy
Virtual Reality (VR) is an innovative technology consisting of a computer based
environment that represents a 3-D artificial world. VR has already been applied in
many fields of human activity. New computer platforms permit human-machine
interaction in real time; hence the possibility of using VR in medicine has arisen.
The present level of technical advances in the computer interface allows the
development of VR systems as therapeutic tools for some neurological and psychiatric
pathologies. For example, stroke survivors may undergo rehabilitative therapeutic
procedures with different VR systems [18, 19]. The use of a VR-based system coupled
to a motion tracking tool allows us to study the kinematics of arm movement in the
restorative process after stroke. Furthermore, the possibility of modifying the artificial
environment, where the patients could interact, may exploit some of the mechanisms of
motor learning.
We know from physiological studies that humans perform a large variety of
constrained and unconstrained movement in a smooth and graceful way because the
CNS enables us to rapidly solve complex computational problems. One hypothesis is
that the CNS needs little information in order to adapt movements to changing
external requirements, provided that it already contains preprogrammed algorithms
for function [20]. These algorithms produce regularities in biological movements that
are not in any way implied by the motor task. According to this view, a given
movement can be characterized by variant and invariant elements. For instance, the
variant part of a reaching movement is the distance of the targets (corresponding to the
amplitude of the movement). The invariant part consists of straight paths with a bell-shaped speed profile in all movements [21, 22].
In our laboratory, we experimented with a VR based setting for the assessment and
treatment of arm motor deficit in patients after stroke. We compared a VR based
(reinforced feedback in virtual environment, RFVE) and traditional physical therapy
technique (conventional therapy, CT) in the treatment of arm motor impairments in
post-stroke patients. The studied population met the following inclusion criteria: a
single ischemic stroke in the region of the middle cerebral artery at least six months
before the study (proven by means of CT scan or MRI); conventional physical therapy
treatment received in the early period after stroke; mild to intermediate motor
impairments of the arm assessed as a Fugl-Meyer Upper Extremity score (F-M UE)
2. Conclusion
In our VR setting, patients were given information about their arm movements during
the performance of motor skills (KP), consisting of a representation of their
end-effector, together with a virtual teacher movement that showed the actual
kinematics of the hand path in order to practice learning by imitation. The teacher,
like other relevant feedback, realizes an ideal environment in which to implement new
predictors or to modify disrupted forward models. These mechanisms are developed by
means of amplification of the actual state. On the other hand, new or better
controllers can be developed by means of the different sensorimotor contexts presented
in each scenario, as well as by the utilization of graphic models that reproduce the
appearance of visual objects, giving coherent contextual information. Furthermore,
instructions imparted by the therapist
during the experimental procedure and the virtual representation of the correct
movement contributed to providing information about motor performance, thereby
References
[1] J.H. Chesebro, V. Fuster, J.L. Halperin, Atrial fibrillation - Risk marker for stroke, The New England Journal of Medicine 323 (1990), 1556-1558.
[2] R.D. Abbott, Y. Yin, D.M. Reed, et al., Risk of stroke in male cigarette smokers, The New England Journal of Medicine 315 (1986), 717-720.
[3] M.L. Olsen, Autoimmune disease and stroke, Stroke (4) (1992), 13-16.
[4] B. Dobkin, Neurologic rehabilitation, Contemporary Neurology Series, 1995.
[5] Oxfordshire Community Stroke Project, Incidence of stroke in Oxfordshire: first year of experience of a community stroke register, British Medical Journal 287 (1983), 713-717.
[6] H. Ellekjaer, J. Holmen, B. Indredavik, A. Terent, Epidemiology of stroke in Innherred, Norway, 1994 to 1996. Incidence and 30-day case-fatality rate, Stroke (1997), 2180-2184.
[7] PORT Study, Duke University Medical Center, Durham, NC, 1994.
[8] R.W.V. Flynn, R.S.M. MacWalter, A.S.F. Doney, The cost of cerebral ischaemia, Neuropharmacology 55 (3) (2008), 250-266.
[9] D.M. Feeney, A.M. De Smet, S. Rai, Noradrenergic modulation of hemiplegia: facilitation and maintenance of recovery, Restorative Neurology and Neuroscience 22 (2004), 175-190.
[10] L.B. Goldstein, Neurotransmitters and motor activity: effects on functional recovery after brain injury, NeuroRx 3 (2006), 451-457.
[11] C. Padoa-Schioppa, C.-S.R. Li, E. Bizzi, Neuronal activity in the supplementary motor area of monkeys adapting to a new dynamic environment, Journal of Neurophysiology 91 (2004), 449-473.
[12] K. Doya, What are the computations of the cerebellum, the basal ganglia and the cerebral cortex?, Neural Networks 12 (1999), 961-974.
[13] K. Doya, Complementary roles of basal ganglia and cerebellum in learning and motor control, Current Opinion in Neurobiology 10 (6) (2000), 732-739.
[14] E. Bizzi, F.A. Mussa-Ivaldi, S. Giszter, Computations underlying the execution of movement: a biological perspective, Science 253 (1991), 287-291.
[15] T. Poggio, E. Bizzi, Generalization in vision and motor control, Nature 431 (2004), 768-774.
[16] F.A. Mussa-Ivaldi, in Proc. 1997 IEEE Int. Symp. on Computational Intelligence in Robotics and Automation, IEEE Computer Society, Los Alamitos, California, 1997, 84-90.
[17] N. Shelton, M.J. Reding, Effect of lesion location on upper limb motor recovery after stroke, Stroke 32 (1) (2001), 107-112.
[18] C.E. Han, M.A. Arbib, N. Schweighofer, Stroke rehabilitation reaches a threshold, PLoS Computational Biology 4 (8) (2008), e1000133.
[19] M.L. Aisen, H.I. Krebs, N. Hogan, F. McDowell, B.T. Volpe, The effect of robot-assisted therapy and rehabilitative training on motor recovery following stroke, Archives of Neurology 54 (1997), 443-446.
[20] E. Todorov, R. Shadmehr, E. Bizzi, Augmented feedback presented in a virtual environment accelerates learning of a difficult motor task, Journal of Motor Behavior 29 (1997), 147-158.
[21] P. Morasso, Spatial control of arm movements, Experimental Brain Research 42 (1981), 223-227.
[22] W. Abend, E. Bizzi, P. Morasso, Human arm trajectory formation, Brain 105 (1982), 331-348.
[23] S.C.A.M. Gielen, Movement dynamics, Current Opinion in Neurobiology 3 (6) (1993), 912-916.
[24] A.R. Fugl-Meyer, L. Jaasko, L. Leyman, S. Olsson, S. Steglind, The post-stroke hemiplegic patient. 1. A method for evaluation of physical performance, Scandinavian Journal of Rehabilitation Medicine 7 (1) (1975), 13-31.
[25] A.E. Russon, Learning by imitation: a hierarchical approach, Behavioral and Brain Sciences 21 (5) (1998), 667-684.
[26] R. Keith, C. Granger, B. Hamilton, G. Sherwin, The functional independence measure: a new tool for rehabilitation, in M.G. Eisenburg, R.C. Grzesiak (Eds.), Advances in Clinical Rehabilitation, Springer Publishing Co., New York, 1987, pp. 6-18.
[27] M. Dam, P. Tonin, S. Casson, et al., The effects of long-term rehabilitation therapy on poststroke hemiplegic patients, Stroke 24 (8) (1993), 1186-1191.
[28] P.T. Tangeman, D.A. Banaitis, A.K. Williams, Rehabilitation of chronic stroke patients: changes in functional performance, Archives of Physical Medicine and Rehabilitation 71 (11) (1990), 876-880.
Abstract. Stroke will become one of the main burdens of disease and loss of
quality of life in the near future. However, we have not yet found rehabilitation
approaches that can scale up to face this challenge. Virtual reality based therapy
systems hold great promise in directly addressing it. Here we review different
approaches based on this technology, their assumptions and their clinical impact. We
will focus on virtual reality based rehabilitation systems that combine hypotheses on
the aftermath of stroke with the neuronal mechanisms of recovery. In particular we
will analyze the so-called Rehabilitation Gaming System (RGS), which proposes the use
of non-invasive multi-modal stimulation to activate intact neuronal systems that
provide direct stimulation to motor areas affected by brain lesions. The RGS is
designed to engage the patients in task specific training scenarios that adapt to
their performance, allowing for an individualized training of graded difficulty and
complexity. Although the RGS represents a generic rehabilitative approach, it has been
specifically tested for the rehabilitation of motor deficits of the upper extremities
of stroke patients. In this chapter we review the main foundations and properties of
the RGS, and report on the major findings extracted from studies with healthy and
stroke subjects. We show that the RGS captures qualitative and quantitative data
on motor deficits, and that these deficits transfer between real and VR tasks.
Additionally, we show how the RGS uses the detailed assessment of the
kinematics and performance of stroke patients to individualize the treatment.
Subsequently, we will discuss how real-time physiology can be used to provide
additional measures to assess the task difficulty and subject engagement. Finally,
we report on preliminary results of an ongoing longitudinal study on acute stroke
patients.
Keywords. Virtual reality, stroke, acute phase, rehabilitation, cortical plasticity,
gaming, individualized training.
Introduction
In the last decade there has been a growing interest in the use of Virtual Reality (VR)
based methods for the rehabilitation of cognitive and motor deficits after lesions to the
nervous system. Stroke patients have become one of the main target populations for
these new rehabilitative methods (see [1, 3] for reviews). This is due to stroke being
one of the major causes of adult disability worldwide [4], with restoration of normal
motor function in the hemiplegic upper limb being observed in less than 15% of
patients with initial paralysis [5]. This has a strong impact on the degree of
independence of these patients and it is associated with high societal costs. In addition,
we should take into account the psychological impact as many of these patients regress
into depression [6].
Rehabilitation following stroke focuses on maximizing the restoration of the lost
motor functions and on relearning skills for the performance of the activities of daily
living (ADLs). Most of the newest rehabilitation techniques rely on the fact that motor
function can be recovered by cortical plasticity [7, 9]. The ability of the brain to
reorganize itself after a brain injury has been observed by a remapping of the
surrounding areas of the lesion [10] and in other cases as a functional shift to the
contralateral hemisphere [11]. To maximize brain plasticity, several rehabilitation
strategies have been proposed that rely on a putative promotion of activity within
surviving motor networks (see [12] for review). Among those strategies we can find
intensive rehabilitation [13], repetitive motor training [14, 15], techniques directed
towards specific deficits of the patients [16], mirror therapy [17], constraint-induced
movement therapy [18], motor imagery [19], action observation [20], etc.
More recently, growing evidence of the positive impact of virtual reality
techniques on recovery following stroke has been shown [1, 2]. These systems allow
for the integration of several of the above mentioned rehabilitation strategies. Different
paradigms have been used, which we can group in different categories: learning by
imitation [21, 22], reinforced feedback [23, 24], haptic feedback [25, 26], augmented
practice and repetition [27, 28], video capture virtual reality [29], exoskeletons [30,
31], mental practice [32], and action execution/observation [33, 35]. The major
findings of these studies suggest that virtual reality technologies will become an
increasingly essential ingredient in the treatment of stroke and other disorders of the
nervous system. Indeed, with VR we can have well controlled training protocols within
specifically defined interactive scenarios that are customized towards the needs of the
patient. However, it is not yet clear which characteristics of these systems are effective
for rehabilitation. Moreover, the quantification of the impact of these novel
rehabilitation technologies on patients' recovery and well-being is in general still
very anecdotal. One problem is that most of the reported studies are performed on small
numbers of chronic stroke patients [1, 2], although most of the cortical reorganization
happens in the first few months after stroke [36, 38]. Since plasticity is a requirement
for functional recovery, intervention at early stages of stroke should be pursued more
vigorously.
The Rehabilitation Gaming System (RGS) is a VR based system that is targeted for
the induction and enhancement of functional recovery after lesions to the nervous
system using non-invasive multi-modal stimulation. Currently, the RGS is being tested in the
context of the rehabilitation of motor deficits of the upper extremities after stroke. The
RGS assumes that neuronal plasticity is a permanent feature of the CNS and that
conditions for recovery can be induced by activating areas of the brain that are affected
by a lesion through the use of non-invasive multi-modal stimulation. In the specific
case of the rehabilitation of motor deficits after stroke, the working hypothesis of the
RGS is that action execution combined with the observation of correlated movements
in a virtual environment may activate undamaged primary or secondary motor areas
recruiting alternative networks that will improve the conditions for functional
reorganization. Indeed, it has been shown that VR stimulation can activate these motor
areas [39]. One candidate network that can provide the interface between multi-modal
stimulation and motor execution is the so-called mirror neuron system [40, 42]. The
mirror neurons have been shown to be active during the execution of hand, foot and
mouth goal oriented movements and also during the observation of these movements
while performed by others. This implies that we can recruit this system not only during
action execution but also during the observation of actions. We will analyze in detail
the impact of the RGS on acute stroke patients [33, 35].
The RGS provides VR tasks performed in a first-person perspective where users
control the movement of two virtual arms with their own arm movements. The choice
of the first-person perspective relies on the fact that it has been shown that the
observation of hand movements produces an increase in cortical excitability modulated
by the orientation with respect to the observer [43]. In particular, those experiments
showed stronger responses when both the orientation of the hand and the orientation of
the observer coincided. This suggests that a first-person perspective could be more
effective than a third-person perspective in driving cortical activation during
performance of virtual tasks. Furthermore, the first-person perspective can recruit the
motor system to a greater extent and allows for the integration of kinesthetic
information [44] that can result in a higher degree of identification with the virtual
representation and thus in a more effective functional reorganization.
In addition to the above described neuronal principles, the RGS incorporates a
number of features that make it a very well suited system for rehabilitation. It proposes
tasks of graded complexity and difficulty. The varying complexity of the tasks allows
the user to re-construct the different elements of instrumental activities of daily living
(IADL) from overall stability to precision movements. In addition, the RGS controls
the individual task performance using a psychometric model of the training scenario.
This model is derived from game performance data obtained from a group of both
patients and healthy control subjects. As a result, the RGS can adapt the difficulty of
the task to the capabilities of the individual user, providing individualized training
while following a single rule for all users. This is relevant, since it reduces the effect of
external uncontrolled influences in choosing game parameters and training protocols,
eliminating important sources of error. For the same reason, the game instructions are
automatically provided by the system in written and auditory form.
In this chapter, we will analyze the RGS as a general strategy for functional
rehabilitation after lesions to the CNS. Our specific examples will be taken from results
of a number of pilot studies we have performed with stroke patients that address issues
such as the transfer of movements, deficits and training from real to virtual
environments. We will assess the validity of the psychometric difficulty model
implemented in the system by investigating the affective responses of the users.
Additionally, we will report on preliminary results of an ongoing clinical study with
acute stroke patients. Here the diagnostic and monitoring capabilities of the RGS will
be discussed as well as the effect of the training paradigm compared to the performance
of two control groups. Finally, we will try to extract which general principles are
behind the impact of the RGS.
Figure 1. The Rehabilitation Gaming System (RGS). The subject, resting the arms on a table surface, faces
a computer screen. The movements of the arms are visually captured by a camera positioned on top of the
display that detects color patches located on wrists and elbows. A pair of data gloves measure finger flexure.
An avatar moving according to the movements of the user performs a task in the virtual scenario. Adapted
from Cameirão et al. [1].
1. Methods
1.1. The Rehabilitation Gaming System
The RGS has been designed with standard, off-the-shelf and inexpensive components
to provide stroke patients with the option to have this system at their homes for further
training and monitoring after discharge from the hospital.
The RGS consists of a standard PC (Intel Core 2 Duo processor, Santa Clara,
California, USA) running the Linux operating system, a 3D graphics accelerator
(nVidia GeForce Go 7300, Santa Clara, California, USA), a 19" 4:3 LCD monitor, a
video camera (VGA) and a pair of data gloves (5DT, Pretoria, South Africa) (Figure 1).
Arm movements are tracked by means of a vision based tracking system (AnTS) that
detects color patches located on the wrists and elbows (see section 1.2). The finger
flexion is captured by optic fiber data gloves that integrate seamlessly with our system
via a USB connection. The lycra textile of the gloves adapts to a wide range of hand
sizes and offers little resistance to finger bending. As many patients do not have the
ability to support their arms against gravity, the task is purposely in 2D and is
performed on a table surface.
1.2. AnTS
A vision based tracking system called AnTS has been adapted to track the movements
of the arms of patients during the training period at an update frequency of 35 Hz [45,
46]. The basic processing stream of AnTS starts with the acquisition of images from
the video camera that is placed on top of a computer screen (Figure 1). In this case, the
goal is the reconstruction of arm motion. Therefore, the head of the subject and a set of
color patches positioned on elbows and wrists are tracked. To locate those objects, a set
of noise filtering and image segmentation techniques based on color, shape and size
features are used. Color detection is performed by transforming the Red, Green and
Blue (RGB) data of the input images to the Hue, Saturation and Value (HSV) color
space, which encodes more robustly the identity of colors in dynamic environments
(changing light conditions, shadows, etc). Thereafter, Bayesian inference techniques
are used to locate the center of mass of objects using a model based on the Hue value,
velocity vector, object size and position that improves performance during occlusions
and target loss [46]. The position of the head and the color patches is subsequently fed
to a biomechanically constrained model of the upper body, and the joint angles are
computed. The biomechanical model imposes restrictions on the possible joint angles
and allows for a 3D approximation of arm movements using a single camera setup.
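The color-segmentation front end described above can be sketched as follows. This is a toy stand-in rather than the AnTS implementation: it shows only the RGB-to-HSV conversion and a centroid computation, omitting the shape/size filters and the Bayesian tracking; all thresholds and names are our assumptions.

```python
# Sketch of hue-based color-patch detection followed by a center-of-mass
# estimate, the first steps of the tracking pipeline described in the text.
# Thresholds, names and pixel data are invented for illustration.

import colorsys

def hue_mask(pixels, hue_lo, hue_hi, min_sat=0.3, min_val=0.2):
    """Indices of (r, g, b) pixels (0-255) whose HSV hue lies in [hue_lo, hue_hi]."""
    hits = []
    for i, (r, g, b) in enumerate(pixels):
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        if hue_lo <= h <= hue_hi and s >= min_sat and v >= min_val:
            hits.append(i)
    return hits

def center_of_mass(coords, hits):
    """Centroid of the selected pixels: a crude stand-in for the patch tracker."""
    xs = [coords[i][0] for i in hits]
    ys = [coords[i][1] for i in hits]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

Working in HSV means the hue band stays roughly stable when lighting changes mostly affect saturation and value, which is the robustness property the text attributes to this color space.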
1.3. The Environment
The Torque Game Engine (www.garagegames.com), a popular, versatile and
multi-platform 3D engine, has been chosen to implement the VR tasks. Torque provides
both a 3D rendering engine and a physics engine, allowing the generation of high
resolution, realistic VR scenarios.
Our environment consists of a spring-like natural highland where the user interacts
in a first-person perspective. A human avatar is rendered in the world in such way that
only its arms are displayed on the screen. The joint angles captured by the tracking
system and the finger flexure provided by the data gloves are mapped to the
corresponding joints of the avatar skeleton. In this way, the user observes on the screen
two virtual arms that move according to his/her own movements.
1.4. Tasks
The task protocol consists of three different stages: two stages of calibration (see
below) and the training game. Both calibration phases allow measuring the properties
of movements in the real and virtual worlds, making possible the analysis of transfer
between both worlds. The training game is the core task of the RGS intervention, and it
deploys an exercise that is individualized for each subject depending on his/her
performance. All the intervention tasks provide automated written and auditory
instructions to minimize the influence of the human operator.
1.4.1. Real Calibration
The real calibration consists of performing a set of motor actions starting from a resting
position, i.e. positioning the palm of the hand on a randomized sequence of numbered
positions on the table surface (Figure 2, left panel). The patient receives auditory and
written instructions during the process, which lasts approximately 2 minutes. This task
allows recording, in every session, basic properties of arm movement such as speed,
reaching distance, precision and reaction time.
1.4.2. Virtual Calibration
The user is asked to perform the same randomized sequence as in the real calibration
task but this time using the virtual arms and a virtual replica of the table displayed on
the screen (Figure 2, right panel).
Figure 2. Real and virtual calibration phases. Left panel: on the table surface, numbered dots are located at
specific positions on the left and right hand sides. The user is asked to place the palm of his/her hand on the
numbered dots in a randomized order. Right panel: the same setup is replicated in the virtual environment
and the user is asked to perform the same task with the virtual arms. The figure text
reads "Place your right hand palm above the number 2 and wait".
To prevent the patients from using the numbered positions on the physical table top as
external cues, the table surface is covered during this phase. This calibration phase
allows for a comparison of how the movements of the real calibration phase are
performed in a virtual world. Together with the analysis of real-to-virtual movement
transfer, the main role of the virtual calibration is to set, each day, the starting
game parameters of the training task.
1.4.3. Training
The main task of the user is to intercept spheres that are flying towards him/her by
hitting them with his/her virtual arms (Hitting). We have purposefully taken a
relatively constrained task since it allows us to fully control all aspects of the training
scenario and understand its impact on recovery. The difficulty of the task is determined
by three gaming parameters: the speed of the spheres, the time interval between
consecutive spheres and the range of dispersion of the spheres. When the game starts,
the difficulty baseline is set by using the parameters measured during the virtual
calibration phase. The system automatically updates the task difficulty during the
game, depending on the performance of the subject. To be able to adjust the difficulty
level in an objective fashion, a difficulty model was developed based on experimental
data on the performance of stroke patients with random game parameters. With such a
model, the parameters are continuously adapted to keep the performance level at
around 70%, keeping patients at a challenging difficulty level but within their
capabilities to sustain motivation.
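The closed-loop difficulty adjustment can be illustrated schematically. The real RGS derives its adaptation from a psychometric model fitted to patient data; this sketch only captures the control idea of steering the hit rate toward the 70% target, with parameter names and step sizes of our own invention.

```python
# Schematic difficulty controller (not the RGS psychometric model): nudge the
# three gaming parameters toward harder settings when the patient performs
# above the 70% target, and relax them when below.

def adapt_difficulty(speed, interval, hit_rate, target=0.70, step=0.05):
    """Return updated sphere speed and inter-sphere interval after one block."""
    if hit_rate > target:
        speed *= 1.0 + step     # faster spheres: harder
        interval *= 1.0 - step  # less time between spheres: harder
    elif hit_rate < target:
        speed *= 1.0 - step     # slower spheres: easier
        interval *= 1.0 + step  # more time between spheres: easier
    return speed, interval
```

Iterating this rule makes the hit rate hover around the target, keeping the task challenging but within the patient's capabilities, as described above.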
Starting from the Hitting task, the RGS sequentially introduces tasks of graded
difficulty that require movement execution with increasing complexity and scoring,
ranging from arm extension/flexion to a coordination task that combines arm
movement with grasp and release (Hitting, Grasping and Placing) (Figure 3).
Figure 3. The 3 RGS training tasks of graded complexity. Left panel: Hitting to train range of movement,
movement speed, and arm and shoulder stability. The approaching virtual spheres have to be intercepted with
the movements of the virtual arms. Middle panel: Grasping to exercise finger flexure on top of movement
range, speed, and arm and shoulder stability. Now, the intercepted spheres can be grasped by flexing the
fingers. Right panel: Placing to train not only grasp but also release. The grasped spheres can now be
released in the basket of the corresponding color. Adapted from Cameirão et al. [33].
2. Results
2.1. Real vs Virtual
A crucial aspect for our research, and consequently for the possible benefits it can
provide users with, is to understand the responses of the patients to these new VR
technologies and the correspondence between task execution in real and virtual worlds.
Figure 4. Timeline of the study. The intervention period has a duration of 12 weeks plus a 12-week follow-up period. The clinical evaluation of the patients is performed at several stages of the process.
Therefore, an analysis of how movements are transferred to the virtual world when
performing the same task as in reality is pivotal. These issues were addressed in a pilot
study with 6 naïve right-handed stroke patients with left hemiparesis, mean age of 61
years (range 32-74), Brunnstrom Stage for upper extremity ranging from II to V [59],
and Barthel Index from 36 to 72 [35]. These naïve patients performed single trials of
the real and virtual calibration tasks (see section 1.4). Out of these 6 patients, two were
excluded from the analysis since they did not complete the execution of the real and/or
virtual tasks within the given time. From the real and virtual tasks we extracted
reaching distance and the speed information from the movements. The reaching
distance is measured as the farthest position the patients were able to reach from the
resting position, and the speed is computed as the mean speed of all the movement
sequences performed by each arm individually.
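For illustration, the two measures just defined could be computed from a sampled hand trajectory as follows; the (t, x, y) sample format and taking the resting position to be the first sample are our assumptions.

```python
# Computing the two calibration measures described in the text from a sampled
# 2D hand trajectory given as a list of (t, x, y) tuples (invented format).

import math

def reaching_distance(traj):
    """Farthest distance reached from the resting (first) position."""
    t0, x0, y0 = traj[0]
    return max(math.hypot(x - x0, y - y0) for t, x, y in traj)

def mean_speed(traj):
    """Mean speed over the whole sequence: path length / total time."""
    path = sum(math.hypot(x2 - x1, y2 - y1)
               for (t1, x1, y1), (t2, x2, y2) in zip(traj, traj[1:]))
    return path / (traj[-1][0] - traj[0][0])
```

Applying these per arm gives the paretic versus non-paretic comparison that the calibration phases are designed to quantify.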
The measurements performed during the real calibration phase show that the task
is a valid method to quantitatively analyze the performance differences between paretic
and non-paretic arms. Thus, the calibration task is well suited to evaluate and monitor
the evolution of patients over sessions, independent of the specific training they are
exposed to (Figure 5). In addition, this allows for a direct comparison between the
performance in both real and virtual tasks.
The results for the real and virtual tasks show that the behavior in the virtual
environment is consistent with the one in the real world. This means that the RGS is
able to assess from both tasks the degree of impairment. This is measured in both the
reaching distance (Figure 5, left panel) and the movement speed, with the only
difference that naïve patients display slower movements in the VR environment for
both the paretic and healthy arms (Figure 5, right panel). This could be due to an
adaptation effect to the virtual environment. Nevertheless, the relative differences
between paretic and non-paretic arms are conserved in real and virtual worlds, meaning
that motor deficits are transferred (Figure 5, right panel). This strongly suggests that
improvements measured within the RGS virtual tasks will translate to measurable
improvements in real world tasks.
Figure 5. Real vs virtual reaching distance and speed of the movements. Left panel: maximum reaching distance
across patients for the paretic and non-paretic arms in real (up) and virtual (bottom) worlds. Adapted from
Cameirão et al. [35]. Right panel: mean speed for the paretic and non-paretic arms of all the patients in real and
virtual worlds. Vertical bars indicate the standard deviation.
Figure 6. Example of recorded time stamped game event data for patient 2. This plot shows over time the
position of both the left (blue line) and right hand (red line), and events (touched and missed spheres) during a
trial. The patient, with left hemiplegia, shows a reduced reaching distance and a higher number of missed
spheres with the paretic arm. Adapted from Cameirão et al. [60].
Figure 7. Example of game performance analysis with patient 2 (left hemiplegia). Top left panel: histogram
of game events (caught and missed spheres and their position in the field). Top right panel: error in sphere
interception for both arms. The bar denotes the median error, and the error bar the standard deviation.
Bottom left panel: sphere interception error histogram of the left arm (paretic). Bottom right panel: sphere
interception error histogram of the right arm (non-paretic). Adapted from Cameirão et al. [35].
Figure 8. Electrodermal response analysis. Top panel: skin conductance level (SCL) during a trial. Middle
panel: galvanic skin response (GSR) during the same trial. Bottom panel: average event centered GSR mean
response for missed (solid line) and touched (dashed line) spheres. The gray area indicates when a sphere is
approaching; 0 marks the time of the event, and the interval between the event and the minimum of the GSR
signal is also indicated. Adapted from Cameirão et al. [61].
These results indicate that there is an arousal prior to a missed sphere event that
could eventually be used to predict when patients are likely to fail. Therefore, it would
be possible to use this biofeedback information in real time to modify game parameters
to keep performance and arousal at a desirable level.
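A minimal sketch of such a closed loop, assuming a normalized arousal estimate derived from the GSR and a simple proportional update (the gain, target, and function name are hypothetical, not values from the RGS):

```python
def adjust_difficulty(difficulty, arousal, target=0.5, gain=0.1,
                      lo=0.0, hi=1.0):
    """Nudge game difficulty toward a target arousal level.

    difficulty: current difficulty in [lo, hi].
    arousal:    normalized arousal estimate in [0, 1], e.g. from GSR.
    Lowers difficulty when arousal exceeds the target, raises it when
    arousal falls below; the result is clamped to [lo, hi].
    """
    new = difficulty - gain * (arousal - target)
    return max(lo, min(hi, new))
```

Called once per trial with the latest arousal estimate, this would keep the patient near the desired performance/arousal operating point.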
In a second study with 5 healthy subjects, we investigated the HR game event
related changes and also the validity of our model of game difficulty. Interestingly,
when the subjects were exposed to a random combination of game parameters, we
found a correlation between the difficulty of the parameters and the HR. The difficulty
model, previously developed from experimental data of the performance of chronic
stroke patients with randomly changing game parameters, was now used to compute
the difficulty of each trial. The difficulty level, measured from 0 (easy) to 1 (hard), had
an impact on the measured HR for all subjects, relating low difficulty to lower stress
levels and higher difficulty to higher levels of stress (Figure 9, left panel).
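The linear relationship reported in Figure 9 (left panel) can be reproduced with an ordinary least-squares fit of HR against difficulty; this is a generic sketch, not the authors' analysis code:

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # slope: covariance of x and y over variance of x
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx
```

Fitting measured HR (bpm) against the model's difficulty levels yields the regression line shown in the plot; a positive slope indicates the monotonically increasing relationship described in the text.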
In all subjects exposed to the task we could detect game event related responses in
either HR or HRV. Nevertheless, although HR or HRV significant changes were found
immediately before and after the event occurred, these were not consistently found for
all subjects (Figure 9, right panels). In addition, no differences were found between
event types (touched or missed spheres), both of which led to comparable
physiological changes.
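Event-related responses of this kind are typically computed by averaging signal segments centered on the event times. The sketch below assumes the percentage change is taken relative to the value at the event itself (the chapter does not specify the exact baseline), and the function name is illustrative:

```python
def event_centered_response(signal, events, window):
    """Mean percent change of a signal around event times.

    signal: evenly sampled values (e.g. HR in bpm).
    events: sample indices of game events.
    window: samples to include before and after each event.
    Returns a list of length 2*window + 1: the mean, across events, of
    the percent change relative to the value at the event (time = 0).
    """
    segments = []
    for e in events:
        if e - window < 0 or e + window >= len(signal):
            continue  # skip events too close to the recording edges
        base = signal[e]
        segments.append([100.0 * (signal[i] - base) / base
                         for i in range(e - window, e + window + 1)])
    n = len(segments)
    return [sum(seg[k] for seg in segments) / n
            for k in range(2 * window + 1)]
```

Applying this separately to touched-sphere and missed-sphere event indices would produce curves comparable to those in Figure 9 (right panels).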
Figure 9. Heart rate event related responses. Left panel: example plot of the difficulty of the game trials vs
the measured Heart Rate (HR) for a healthy subject. The difficulty level (x axis) is assessed by a difficulty
model based on data of stroke patients. The HR (y axis) is measured as beats-per-minute (bpm). There is
a monotonically increasing relationship between difficulty of trials and HR response shown by the linear
regression of the data. Right panels: mean heart rate (HR) (top) and heart rate variability (HRV) (bottom)
responses with respect to the timing of game events (time = 0) (n=31) for a healthy subject. The event related
responses (y axis) are computed as the percentage change of HR or HRV measures. The blue curve
indicates the mean response and green curves +/- standard deviation.
Figure 10. Monitoring of patients using the calibration tasks. Left panel: evolution of the speed of
movements of the paretic and non-paretic arms over sessions for patient ID.1407178. The colored solid
lines correspond to the linear regressions of the data. Right panel: phase plot of the shoulder vs elbow
angles for patient ID.951736. The plot shows two distinct strategies used by the paretic and non-paretic
arms to reach the same distances.
Figure 11. Percentage of improvement in standard evaluation scales obtained at different stages - week 0
(admittance), week 5, week 12 (end of treatment) and week 24 (follow-up) - for two patients with similar
baseline measures. Top panel: Motricity Index for the upper extremity. Middle panel: Fugl-Meyer
Assessment Test for the upper extremity. Bottom panel: Chedoke Arm and Hand Activity Inventory.
whereas this is not the case for the Fugl-Meyer [33]. Although some interesting trends
are observed in the clinical scores, at this point of the study the data are not conclusive
and there is a need to find a better measure to compare patients with different baselines.
As an example, here we show the data of the 2 patients with the closest scores at
admittance (1 RGS and 1 Control A) that completed the entire protocol. The patient in
the RGS group had the following scores at admittance: motor FIM = 28, Barthel
Index=39, Motricity Index = 34, Fugl-Meyer = 27 and CAHAI = 13. The patient in the
Control A group had the following scores at admittance: motor FIM = 31, Barthel
Index=37, Motricity Index = 34, Fugl-Meyer = 24 and CAHAI = 13. The scores of the
three previously discussed clinical scales, namely the Motricity Index, the Fugl-Meyer
Assessment Test for upper extremities and the Chedoke Arm and Hand Activity
Inventory (CAHAI) were used to perform an analysis of the percentage of
improvement over time (Figure 11).
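One plausible way to express improvement as a percentage, normalizing by the room for improvement left at admittance, is shown below; the chapter does not state its exact normalization, so this formula and the scale maximum used in the example are assumptions:

```python
def percent_improvement(score, baseline, max_score):
    """Improvement as a percentage of the room left at baseline.

    E.g. a Fugl-Meyer upper-extremity score of 27 at admittance rising
    to 40, with a scale maximum of 66, gives
    100 * (40 - 27) / (66 - 27), roughly 33%.
    """
    return 100.0 * (score - baseline) / (max_score - baseline)
```

Computed at weeks 0, 5, 12, and 24 for each scale, this yields curves directly comparable between the two patients despite their slightly different admittance scores.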
Regarding the specific properties of the movements, evaluated by the Motricity
Index and the Fugl-Meyer Assessment Test, the Control A patient presented a higher or
similar improvement rate at week 5, but then stabilized over the entire study period; on
the other hand, the patient in the RGS group shows a smaller improvement rate at week
5 but the improvement is sustained over the whole intervention period (Figure 11, top
and middle panels). On the evaluation of the functionality of the paretic arm (CAHAI),
the patient in the RGS group presented a trend similar to the one observed for the other
measures but with accentuated differences when compared to the Control A patient
(Figure 11, bottom panel). This particular measure is relevant because it directly
evaluates the active use of the paretic arm in the performance of daily living activities.
traumatic brain injury patients. At this moment we are exploring the additional benefits
of the RGS when coupled to haptic interfaces or passive exoskeletons.
To conclude, beyond its therapeutic benefits over conventional therapy, the RGS is a
valuable low-cost tool for diagnosis and engaging training that can be widely deployed
in hospitals and at home.
Acknowledgments
The authors would like to thank the occupational therapy and clinical staff at the
Hospital de l'Esperança in Barcelona, especially N. Rueda, S. Redon and A. Morales,
for their help and support in this study.
This research is supported by the European project Presenccia (IST-2006-27731).
References
[1] M.S. Cameirão, S. Bermúdez i Badia, and P.F.M.J. Verschure, Virtual Reality Based Upper Extremity
Rehabilitation following Stroke: a Review, Journal of CyberTherapy & Rehabilitation 1 (2008), 63-74.
[2] M.K. Holden, Virtual environments for motor rehabilitation: review, Cyberpsychology & Behavior 8
(2005), 187-211.
[3] F.D. Rose, B.M. Brooks, and A.A. Rizzo, Virtual reality in brain damage rehabilitation: review,
Cyberpsychology & Behavior 8 (2005), 241-62.
[4] C.D. Mathers and D. Loncar, Projections of global mortality and burden of disease from 2002 to 2030,
PLoS Med 3 (2006), e442.
[5] H.T. Hendricks, J. van Limbeek, A.C. Geurts, and M.J. Zwarts, Motor recovery after stroke: a
systematic review of the literature, Archives of Physical Medicine and Rehabilitation 83 (2002), 1629-37.
[6] S.A. Thomas and N.B. Lincoln, Factors relating to depression after stroke, British Journal of Clinical
Psychology 45 (2006), 49-61.
[7] J.N. Sanes and J.P. Donoghue, Plasticity and primary motor cortex, Annual Review of Neuroscience 23
(2000), 393-415.
[8] C.M. Butefisch, Plasticity in the human cerebral cortex: lessons from the normal brain and from stroke,
Neuroscientist 10 (2004), 163-73.
[9] R.J. Nudo, Plasticity, NeuroRx 3 (2006), 420-7.
[10] R.J. Nudo, B.M. Wise, F. SiFuentes, and G.W. Milliken, Neural substrates for the effects of
rehabilitative training on motor recovery after ischemic infarct, Science 272 (1996), 1791-4.
[11] C.M. Fisher, Concerning the mechanism of recovery in stroke hemiplegia, Canadian Journal of
Neurological Sciences 19 (1992), 57-63.
[12] L. Kalra and R. Ratan, Recent advances in stroke rehabilitation, Stroke 38 (2007), 235-7.
[13] G. Kwakkel, R. van Peppen, R.C. Wagenaar, S. Wood Dauphinee, C. Richards, A. Ashburn, K. Miller,
N. Lincoln, C. Partridge, I. Wellwood, and P. Langhorne, Effects of augmented exercise therapy time
after stroke: a meta-analysis, Stroke 35 (2004), 2529-39.
[14] A. Karni, G. Meyer, P. Jezzard, M.M. Adams, R. Turner, and L.G. Ungerleider, Functional MRI
evidence for adult motor cortex plasticity during motor skill learning, Nature 377 (1995), 155-8.
[15] E.J. Plautz, G.W. Milliken, and R.J. Nudo, Effects of repetitive motor training on movement
representations in adult squirrel monkeys: role of use versus learning, Neurobiology of Learning and
Memory 74 (2000), 27-55.
[16] J.W. Krakauer, Motor learning: its relevance to stroke recovery and neurorehabilitation, Current
Opinion in Neurology 19 (2006), 84-90.
[17] E.L. Altschuler, S.B. Wisdom, L. Stone, C. Foster, D. Galasko, D.M. Llewellyn, and V.S.
Ramachandran, Rehabilitation of hemiparesis after stroke with a mirror, Lancet 353 (1999), 2035-6.
[18] S. Blanton, H. Wilsey, and S.L. Wolf, Constraint-induced movement therapy in stroke rehabilitation:
Perspectives on future clinical applications, NeuroRehabilitation 23 (2008), 15-28.
[19] A. Zimmermann-Schlatter, C. Schuster, M.A. Puhan, E. Siekierka, and J. Steurer, Efficacy of motor
imagery in post-stroke rehabilitation: a systematic review, Journal of NeuroEngineering and
Rehabilitation 5 (2008), 8.
[20] D. Ertelt, S. Small, A. Solodkin, C. Dettmers, A. McNamara, F. Binkofski, and G. Buccino, Action
observation has a positive impact on rehabilitation of motor deficits after stroke, Neuroimage 36
(2007), T164-73.
[21] M.K. Holden, T.A. Dyar, and L. Dayan-Cimadoro, Telerehabilitation using a virtual environment
improves upper extremity function in patients with stroke, IEEE Transactions on Neural Systems and
Rehabilitation Engineering 15 (2007), 36-42.
[22] M.K. Holden, E. Todorov, J. Callahan, and E. Bizzi, Virtual environment training improves motor
performance in two patients with stroke: case report, Neurology Report 23 (1999), 57-67.
[23] L. Piron, P. Tombolini, A. Turolla, C. Zucconi, M. Agostini, M. Dam, G. Santarello, F. Piccione, and P.
Tonin, Reinforced Feedback in Virtual Environment Facilitates the Arm Motor Recovery in Patients
after a Recent Stroke, in Virtual Rehabilitation, Venice, Italy, 2007.
[24] L. Piron, P. Tonin, F. Piccione, V. Laia, E. Trivello, and M. Dam, Virtual Environment Training
Therapy for Arm Motor Rehabilitation, Presence 14 (2005), 732-40.
[25] J. Broeren, M. Rydmark, A. Bjorkdahl, and K.S. Sunnerhagen, Assessment and training in a
3-dimensional virtual environment with haptics: a report on 5 cases of motor rehabilitation in the chronic
stage after stroke, Neurorehabilitation and Neural Repair 21 (2007), 180-9.
[26] J. Broeren, M. Rydmark, and K. S. Sunnerhagen, Virtual reality and haptics as a training device for
movement rehabilitation after stroke: a single-case study, Archives of Physical Medicine and
Rehabilitation 85 (2004), 1247-50.
[27] A.S. Merians, D. Jack, R. Boian, M. Tremaine, G.C. Burdea, S.V. Adamovich, M. Recce, and H.
Poizner, Virtual reality-augmented rehabilitation for patients following stroke, Physical Therapy 82
(2002), 898-915.
[28] A.S. Merians, H. Poizner, R. Boian, G. Burdea, and S. Adamovich, Sensorimotor training in a virtual
reality environment: does it improve functional recovery poststroke?, Neurorehabilitation and Neural
Repair 20 (2006), 252-67.
[29] P.L. Weiss, D. Rand, N. Katz, and R. Kizony, Video capture virtual reality as a flexible and effective
rehabilitation tool, Journal of NeuroEngineering and Rehabilitation 1 (2004), 12.
[30] A. Montagner, A. Frisoli, L. Borelli, C. Procopio, M. Bergamasco, M.C. Carboncini, and B. Rossi, A
pilot clinical study on robotic assisted rehabilitation in VR with an arm exoskeleton device, in Virtual
Rehabilitation, Venice, Italy, 2007.
[31] R.J. Sanchez, J. Liu, S. Rao, P. Shah, R. Smith, T. Rahman, S.C. Cramer, J.E. Bobrow, and D.J.
Reinkensmeyer, Automating arm movement training following severe stroke: functional exercises with
quantitative feedback in a gravity-reduced environment, IEEE Transactions on Neural Systems and
Rehabilitation Engineering 14 (2006), 378-89.
[32] A. Gaggioli, A. Meneghini, F. Morganti, M. Alcaniz, and G. Riva, A strategy for computer-assisted
mental practice in stroke rehabilitation, Neurorehabilitation and Neural Repair 20 (2006), 503-7.
[33] M.S. Cameirão, S. Bermúdez i Badia, E. Duarte Oller, and P.F.M.J. Verschure, Using a Multi-Task
Adaptive VR System for Upper Limb Rehabilitation in the Acute Phase of Stroke, in Virtual
Rehabilitation, Vancouver, Canada, 2008.
[34] M.S. Cameirão, S. Bermúdez i Badia, E. Duarte Oller, and P.F.M.J. Verschure, Stroke Rehabilitation
using the Rehabilitation Gaming System (RGS): initial results of a clinical study, in CyberTherapy, San
Diego, USA, 2008.
[35] M.S. Cameirão, S. Bermúdez i Badia, L. Zimmerli, E. Duarte Oller, and P.F.M.J. Verschure, The
Rehabilitation Gaming System: a Virtual Reality Based System for the Evaluation and Rehabilitation of
Motor Deficits, in Virtual Rehabilitation, Lido, Venice, Italy, 2007.
[36] S.H. Kreisel, H. Bazner, and M.G. Hennerici, Pathophysiology of stroke rehabilitation: temporal
aspects of neuro-functional recovery, Cerebrovascular Diseases 21 (2006), 6-17.
[37] P.W. Duncan, L.B. Goldstein, D. Matchar, G.W. Divine, and J. Feussner, Measurement of motor
recovery after stroke. Outcome assessment and sample size requirements, Stroke 23 (1992), 1084-9.
[38] G. Kwakkel, B. Kollen, and J. Twisk, Impact of time on improvement of outcome after stroke, Stroke
37 (2006), 2348-53.
[39] K. August, J.A. Lewis, G. Chandar, A. Merians, B. Biswal, and S. Adamovich, FMRI analysis of neural
mechanisms underlying rehabilitation in virtual reality: activating secondary motor areas, Conference
Proceedings - IEEE Engineering in Medicine and Biology Society 1 (2006), 3692-5.
[40] G. Rizzolatti, and L. Craighero, The mirror-neuron system, Annual Review of Neuroscience 27 (2004),
169-92.
[41] G. Buccino, F. Binkofski, G.R. Fink, L. Fadiga, L. Fogassi, V. Gallese, R.J. Seitz, K. Zilles, G.
Rizzolatti, and H.J. Freund, Action observation activates premotor and parietal areas in a somatotopic
manner: an fMRI study, European Journal of Neuroscience 13 (2001), 400-4.
[42] M. Iacoboni, and M. Dapretto, The mirror neuron system and the consequences of its dysfunction,
Nature Reviews Neuroscience 7 (2006), 942-51.
[43] F. Maeda, G. Kleiner-Fisman, and A. Pascual-Leone, Motor facilitation while observing hand actions:
specificity of the effect and role of observer's orientation, Journal of Neurophysiology 87 (2002), 1329-35.
[44] P.L. Jackson, A.N. Meltzoff, and J. Decety, Neural circuits involved in imitation and perspective-taking,
Neuroimage 31 (2006), 429-39.
[45] Z. Mathews, S. Bermúdez i Badia, and P.F.M.J. Verschure, A Novel Brain-Based Approach for
Multi-Modal Multi-Target Tracking in a Mixed Reality Space, in INTUITION - International Conference and
Workshop on Virtual Reality, Athens, Greece, 2007.
[46] S. Bermúdez i Badia, The Principles of Insect Navigation Applied to Flying and Roving Robots: From
Vision to Olfaction, Zurich, Switzerland: Eidgenössische Technische Hochschule ETH, 2006.
[47] R.M. Yerkes, and J.D. Dodson, The relation of strength of stimulus to rapidity of habit formation,
Journal of Comparative Neurology 18 (1908), 459-482.
[48] M.M. Bradley, B.N. Cuthbert, and P.J. Lang, Picture media and emotion: effects of a sustained
affective context, Psychophysiology 33 (1996), 662-70.
[49] J.F. Brosschot, and J.F. Thayer, Heart rate response is longer after negative emotions than after positive
emotions, International Journal of Psychophysiology 50 (2003),181-7.
[50] H.D. Critchley, Electrodermal responses: what happens in the brain, Neuroscientist 8 (2002), 132-42.
[51] M.R. Council, Aids to the Examination of the Peripheral Nervous System, London, 1976.
[52] M.F. Folstein, S.E. Folstein, and P.R. McHugh, Mini-mental state. A practical method for grading the
cognitive state of patients for the clinician, Journal of Psychiatric Research 12 (1975), 189-98.
[53] R.A. Keith, C.V. Granger, B.B. Hamilton, and F.S. Sherwin, The functional independence measure: a
new tool for rehabilitation, Advances in Clinical Rehabilitation 1 (1987), 6-18.
[54] F.I. Mahoney, and D.W. Barthel, Functional Evaluation: The Barthel Index, Maryland State Medical
Journal 14 (1965), 61-5.
[55] C. Collin, and D. Wade, Assessing motor impairment after stroke: a pilot reliability study, Journal of
Neurology, Neurosurgery & Psychiatry 53 (1990), 576-9.
[56] A.R. Fugl-Meyer, L. Jaasko, I. Leyman, S. Olsson, and S. Steglind, The post-stroke hemiplegic patient.
1. a method for evaluation of physical performance, Scandinavian Journal of Rehabilitation Medicine 7
(1975), 13-31.
[57] S. Barreca, C. K. Gowland, P. Stratford, M. Huijbregts, J. Griffiths, W. Torresin, M. Dunkley, P.
Miller, and L. Masters, Development of the Chedoke Arm and Hand Activity Inventory: theoretical
constructs, item generation, and selection, Top Stroke Rehabilitation 11 (2004), 31-42.
[58] M. Kellor, J. Frost, N. Silberberg, I. Iversen, and R. Cummings, Hand strength and dexterity, American
Journal of Occupational Therapy 25 (1971), 77-83.
[59] S. Brunnstrom, Recovery stages and evaluation procedures, in Movement Therapy in Hemiplegia: A
Neurophysiological Approach, New York, 1970.
[60] M.S. Cameirão, S. Bermúdez i Badia, L. Zimmerli, E. Duarte Oller, and P.F.M.J. Verschure, A Virtual
Reality System for Motor and Cognitive Neurorehabilitation, in Association for the Advancement of
Assistive Technology in Europe - AAATE, San Sebastian, Spain, 2007.
[61] M.S. Cameirão, S. Bermúdez i Badia, K. Mayank, C. Guger, and P.F.M.J. Verschure, Physiological
Responses during Performance within a Virtual Scenario for the Rehabilitation of Motor Deficits, in
Presence, Barcelona, Spain, 2007.
Judith E. DEUTSCH
Rivers Lab, Doctoral Programs in Physical Therapy, Rehabilitation and Movement
Science, University of Medicine and Dentistry of NJ, USA
Introduction
Rehabilitation of walking for individuals with musculoskeletal and neuromuscular
conditions remains a challenge in rehabilitation. The application of new technologies
such as virtual reality, gaming and robotics has stimulated many approaches to enable
walking for individuals with disabilities. The purpose of this chapter is two-fold, first to
provide a brief overview on technology that incorporates virtual reality (VR) to
promote walking or mobility for individuals with disability; second to describe in some
detail the development and testing of one such system used for individuals with both
musculoskeletal and neurological impairments that interfered with functional mobility.
in a street-crossing task. The emphasis was on safe crossing. [11] The same system
with refinements was further tested on a larger group of patients who all had unilateral
spatial neglect and used wheelchairs as their primary source of mobility. The authors
reported that the individuals with unilateral spatial neglect looked left more often and
had fewer accidents than the group that trained on computer visual scanning tasks. [10]
Transfer to real life crossing was not significantly different between the groups.
These studies illustrate the use of a navigation task for individuals who have mobility
deficits that are heavily influenced by their visual spatial processing abilities.
1.3. Virtual Reality Applications for Individuals with Multiple Sclerosis and Parkinson
Disease Who Have Mobility Challenges
Although not as extensive as the research on virtual reality enabled walking and
mobility for people post-stroke, the literature on VR to improve mobility for people
with multiple sclerosis (MS) and Parkinson Disease (PD) is emerging. As with the
research on stroke there are various methods used to deliver the virtual environments
and train the patients.
For people with MS two contrasting approaches were found in the literature.
Baram and colleagues used VR as a cueing strategy to improve stepping for people
with MS who presented with an ataxic gait. [13] A checkerboard-tiled floor was
displayed through a visor worn over the subject's head. The VR was used as an orthotic
to stabilize the stepping pattern. They demonstrated improvements in walking speed
that had short term carry over. Fulk and colleagues reported on a single case in which
they combined bodyweight supported treadmill training with virtual reality based
balance training using the IREX-GestureTek motion capture system described in the
section on mobility and stroke. The combined treatment resulted in improved walking
and balance outcomes. [14] The case report format allowed the formulation of a
treatment plan that reflected a combined therapeutic approach to meet the patient's
mobility needs. This is in contrast with research studies that require a reductionistic
approach to testing interventions in order to guarantee the internal validity of the study.
It is likely that clinicians would use VR technologies in combination with other
therapeutic modalities to achieve patient goals.
Research on use of virtual environments for individuals with PD has focused on
motor control aspects related to action and navigation as well as performing activities
of daily living, rather than training walking. [15, 16] This more basic research, however,
has implications for practice. Individuals with mild to moderate PD were compared to
healthy controls during a virtual supermarket navigation task. The task involved
navigation and specific actions that occurred using a first person perspective as if
pushing a shopping cart. The individuals with PD achieved similar outcomes in the
virtual environment tasks but required greater distances and more time to complete the
tasks relative to healthy controls. These differences were attributed to planning deficits
that may be amenable to training. [15] Using an HMD and joystick, two individuals with
PD (Hoehn and Yahr Stage 2) and 10 healthy controls navigated through environments
and performed activities of daily living. The goal was to determine, if in the absence of
deficits on paper and pencil neuropsychological tests, the VR tasks could identify
deficits in planning for the individuals with PD. Evaluated on orientation, speed,
hesitation and memory tasks, individuals with PD were found to have the most notable
deficits on speed of execution. This was pronounced when they had to navigate through
a narrow doorway. [16] Both studies suggest that virtual environments can be used for
examination of cognitive deficits that may interfere with mobility. It will be interesting
to see if they can be applied to rehabilitation.
1.4. Gaming and VR to Improve Mobility for Individuals with Cerebral Palsy
Use of virtual reality and gaming has been used to improve selective lower extremity
motor control and improve mobility and balance in adolescents with cerebral palsy
(CP). Bryanton and colleagues demonstrated that individuals with CP were more
motivated and exercised at a greater intensity when working with a Kung Fu Game on
the IREX system compared to standard of care exercises. [17] Deutsch and colleagues
incorporated gaming with the Nintendo Wii Sports software into the summer program of
an adolescent with CP. The individual trained in sitting and standing over 11 sessions
using boxing, baseball, bowling and golfing games designed to improve postural
control, spatial abilities and mobility. They reported gains in standing symmetry and
control, scores on the Test of Visual Perceptual Skills III (a measure of spatial ability)
and walking distance. [18] They had hypothesized direct changes in visual spatial
ability and balance but were uncertain whether they would see transfer to walking. Finally,
there has also been a case report in which neural plasticity was demonstrated after
virtual reality training with an individual with cerebral palsy. [19]
The evidence for use of virtual reality to improve mobility across a variety of
rehabilitation populations is modest. Of interest is the variety of approaches in terms of
the technology used for similar applications. The greatest number of studies and labs
integrating virtual reality or gaming technology for mobility rehabilitation have
focused on individuals post-stroke. Important questions, such as transfer of training and
the right amount of technology, will need to be addressed before these approaches are
widely adopted in the clinic. Such efforts are underway in applying virtual reality to
upper extremity [20] as well as walking rehabilitation in individuals post-stroke. [2]
users were individuals with lower extremity musculoskeletal injuries, individuals
post-stroke and physical therapists. Over the course of six years the system was
conceptualized, developed, refined and tested in feasibility, pilot and user studies,
culminating in a randomized single-blind clinical trial.
The system consists of a six-degree-of-freedom parallel kinematics robot (Stewart
platform) interfaced with a controller and a desktop computer, which displays the
virtual environments. The Stewart platform is instrumented with a force transducer
and linear potentiometers that read forces and displacements of the platform, which are
referenced to the foot movements. Using inverse dynamics, the ankle orientation and
position relative to the floor can be read into the simulation. The robot's pneumatic
actuators also provide forces and torques to the patient's foot and ankle. This force
feedback system allows for the delivery of haptic effects to the foot. The haptic effects
were modeled at a low and a high level. High-level effects allow for manipulation of
augmented sensory input to the user's foot. Thus the robot serves as input into the
virtual environment as well as a recorder of all the movements. Details on the hardware
and haptic modeling can be found elsewhere. [21-24]
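The data flow from the platform's sensors into the simulation might be sketched as follows. The field names, the pre-resolved pose values, and the reduction of the full 6-DOF pose to two ankle angles are illustrative assumptions, not the system's actual interface:

```python
from dataclasses import dataclass

@dataclass
class PlatformSample:
    """One reading derived from the potentiometers and force transducer.

    In the real system the 6-DOF pose would be recovered from the six
    leg displacements; here we assume that step has already been done.
    """
    pitch: float   # rad, toe-up positive (dorsi/plantarflexion)
    roll: float    # rad, inversion/eversion
    z: float       # m, vertical displacement of the plate
    force: float   # N, load measured by the force transducer

def to_sim_input(s, pitch_gain=1.0, roll_gain=1.0):
    """Map a platform sample to an ankle pose for the simulation.

    A minimal sketch: real use would apply per-subject calibration so
    the virtual foot or airplane tracks the patient's actual range.
    """
    return {"ankle_pitch": pitch_gain * s.pitch,
            "ankle_roll": roll_gain * s.roll,
            "height": s.z,
            "load": s.force}
```

Logging each sample alongside the simulation state would also give the movement record the robot is described as keeping.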
The software evolved from a basic representation of a foot moving on a checkerboard pattern to an airplane navigating through a series of simple targets into an
airscape and a seascape complete with visual, auditory and haptic effects to increase
realism, challenge mobility and augment sensory input. It was designed using
principles of exercise and motor learning. [25] We have described the theoretical
rationale for the construction of a robot coupled with a virtual environment elsewhere
[25]. Briefly, it was based on evidence from pre-clinical and basic science studies showing
that animals trained in enriched environments achieve superior task performance
compared to those trained in impoverished environments; on the identification of
the important role of the distal effector, namely the ankle, in walking; and on the
integration of principles of motor learning and exercise. Animals (primarily rats)
trained in enriched environments perform better on functional tasks and in solving
problems when compared to animals trained in impoverished environments. [26] This
difference is accentuated when the complexity of the problem increases. [27] It has
been suggested that the use of virtual environments for rehabilitation may provide the
stimulation to extend the existing benefits of rehabilitation and promote functional
recovery. [28]
The initial goal of developing the system was to create a tool for rehabilitation of
the lower extremity to remediate impairments such as weakness, lack of flexibility, incoordination, decreased endurance and sensory loss. The patient population initially
identified that would benefit from such a device were individuals with musculoskeletal
conditions that primarily affected the ankle. These included but were not limited to
individuals with ankle sprains and fractures. These individuals were selected because
we hypothesized that the training provided by the robot-VR system would target all of
the relevant impairments that may interfere with their recovery of mobility. Training at
the impairment level was a frequently employed therapeutic approach with this
population. Training relevant kinematic features of a movement is also believed to
transfer to whole tasks. [29] For our particular application there was some speculation
about whether training ankle impairments as well relevant kinematic features of
walking, namely the kinetics of ankle push-off, would transfer to improved walking.
alone might produce some impairment level changes but not a transfer to function. In a
single blind, randomized clinical trial we demonstrated that individuals who trained
with the robot-virtual reality system had gait speed and distance increases that were
measured both in the clinic and the real world, that were significantly greater than those
who trained with the robot alone. [37, 38] Another interesting finding from a fully
instrumented gait analysis was the dramatic change in ankle push-off kinetics of the
robot-VR group compared to the robot-alone group. The specificity of training the ankle
kinetics as well as transfer of relevant part-task training is one of several explanations
for the positive outcome of the VR system coupled with the robot. A complex system
like the one that we have just described has many features that remain to be explored.
Probably the most relevant finding of the single-blind randomized trial is the transfer
of training from the lab setting to the community. Using an activity monitor, subjects'
gait was measured for a week in advance of training and a week after training
concluded. Significant improvements in walking distance and velocity were measured
in real world situations for the robot-vr but not the robot alone group. [38] These
findings are important as transfer for the real world from virtual reality training is a
central goal of this rehabilitation approach.
3. Summary
Virtual reality and gaming-based approaches to the rehabilitation of individuals with neuromuscular and musculoskeletal conditions have been reviewed and evaluated. Approaches in terms of technology (hardware and software) have been quite variable. In common, the systems developed use rich, augmented multi-sensory feedback, as well as information about performance and results. Whether it is the hardware that interfaces with the virtual environment (VE), or the stimulus for goal-directed movement that the VE offers, that promotes the changes in motor behavior remains to be elucidated. Most of the work has used custom-built, primarily lab-based systems. These offer the advantages of customization and rich data collection. However, off-the-shelf commercially available systems are also being trialed. Their reduced cost and ease of availability relative to lab-based systems make them appealing. Which of these systems will be adopted in practice remains to be determined. It is likely that exploration of off-the-shelf gaming systems will continue in parallel with the development of lab-based systems. Each will serve an important role in understanding the usefulness of virtual reality and gaming technology in the rehabilitation of walking and mobility.
References
[1] M.K. Holden, and T. Dyar, Virtual environment training: a new tool for neurorehabilitation, Neurology Report 26 (2002), 62-71.
[2] J.E. Deutsch, and A. Mirelman, Virtual reality-based approaches to enable walking for people poststroke, Topics in Stroke Rehabilitation 14 (2007), 45-53.
[3] D.L. Jaffee, D.A. Brown, C. Pierson-Carey, E. Buckley, and H.L. Lew, Stepping over obstacles to improve walking in individuals with poststroke hemiplegia, Journal of Rehabilitation Research & Development 41 (2004), 283-292.
[4] S.H. You, S.H. Jang, Y. Kim, M. Hallett, S.H. Ahn, Y. Kwon, J.H. Kim, and M.Y. Lee, Virtual reality-induced cortical reorganization and associated locomotor recovery in chronic stroke: an experimenter-blind randomized study, Stroke 36 (2005), 1166-71.
[5] J. Fung, C.L. Richards, F. Malouin, B.J. McFadyen, and A. Lamontagne, A treadmill and motion coupled virtual reality system for gait training post-stroke, Cyberpsychology & Behavior 9 (2006), 157-62.
[6] S. Flynn, P. Palma, and A. Bender, Feasibility of using the Sony PlayStation 2 gaming platform for an individual poststroke: a case report, Journal of Neurologic Physical Therapy 31 (2007), 180-189.
[7] P. Weiss, D. Rand, N. Katz, and R. Kizony, Video capture virtual reality as a flexible and effective rehabilitation tool, Journal of NeuroEngineering and Rehabilitation 1 (2004), 12.
[8] Y.R. Yang, M.P. Tsai, T.Y. Chuang, W.H. Sung, and R.Y. Wang, Virtual reality-based training improves community ambulation in individuals with stroke: A randomized controlled trial, Gait & Posture (2008).
[9] Y.S. Lam, D.W. Man, S.F. Tam, and P.L. Weiss, Virtual reality training for stroke rehabilitation, NeuroRehabilitation 13 (2006), 245-53.
[10] N. Katz, H. Ring, Y. Naveh, R. Kizony, U. Feintuch, and P.L. Weiss, Interactive virtual environment training for safe street crossing of right hemisphere stroke patients with unilateral spatial neglect, Disability & Rehabilitation 27 (2005), 1235-43.
[11] P.L.T. Weiss, Y. Naveh, and N. Katz, Design and testing of a virtual environment to train stroke patients with unilateral spatial neglect to cross a street safely, Occupational Therapy International 10 (2003), 39-55.
[12] J. Kim, K. Kim, D.Y. Kim, W.H. Chang, C.I. Park, S.H. Ohn, K. Han, J. Ku, S.W. Nam, I.Y. Kim, and S.I. Kim, Virtual environment training system for rehabilitation of stroke patients with unilateral neglect: crossing the virtual street, Cyberpsychology & Behavior 10 (2007), 7-15.
[13] Y. Baram, and A. Miller, Virtual reality cues for improvement of gait in patients with multiple sclerosis, Neurology 66 (2006), 178-81.
[14] G.D. Fulk, Locomotor training and virtual reality-based balance training for an individual with multiple sclerosis: a case report, Journal of Neurologic Physical Therapy 29 (2005), 34-42.
[15] E. Klinger, I. Chemin, S. Lebreton, and R.M. Marie, Virtual action planning in Parkinson's disease: a control study, Cyberpsychology & Behavior 9 (2006), 342-7.
[16] G. Albani, R. Pignatti, L. Bertella, L. Priano, C. Semenza, E. Molinari, G. Riva, and A. Mauro, Common daily activities in the virtual environment: a preliminary study in parkinsonian patients, Neurological Sciences 23 (2002), S49-50.
[17] C. Bryanton, J. Bosse, M. Brien, J. McLean, A. McCormick, and H. Sveistrup, Feasibility, motivation, and selective motor control: virtual reality compared to conventional home exercise in children with cerebral palsy, Cyberpsychology & Behavior 9 (2006), 123-8.
[18] J.E. Deutsch, M. Borbely, J. Filler, K. Huhn, and P. Guarrera-Bowlby, Use of a low-cost, commercially available gaming console (Wii) for rehabilitation of an adolescent with cerebral palsy, Physical Therapy 88 (2008), 1196-207.
[19] S.H. You, S.H. Jang, Y.H. Kim, Y.H. Kwon, I. Barrow, and M. Hallett, Cortical reorganization induced by virtual reality therapy in a child with hemiparetic cerebral palsy, Developmental Medicine and Child Neurology 47 (2005), 628-35.
[20] A. Henderson, N. Korner-Bitensky, and M. Levin, Virtual reality in stroke rehabilitation: a systematic review of its effectiveness for upper limb motor recovery, Topics in Stroke Rehabilitation 14 (2007), 52-61.
[21] M. Girone, G. Burdea, M. Bouzit, and J.E. Deutsch, Orthopedic rehabilitation using the Rutgers Ankle interface, presented at Medicine Meets Virtual Reality, Newport Beach, California, 2000.
[22] R. Boian, C.S. Lee, J.E. Deutsch, G. Burdea, and J.A. Lewis, Virtual reality-based system for ankle rehabilitation post stroke, presented at the First International Workshop on Virtual Reality Rehabilitation (Mental Health, Neurological, Physical, Vocational), 2002.
[23] M. Girone, G. Burdea, M. Bouzit, V. Popescu, and J.E. Deutsch, A Stewart platform-based system for ankle telerehabilitation, Autonomous Robots 10 (2001), 203-212.
[24] R. Boian, J.E. Deutsch, C.S. Lee, G. Burdea, and J.A. Lewis, Haptic effects for virtual reality-based post-stroke rehabilitation, presented at the Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, Los Angeles, CA, 2003.
[25] J.E. Deutsch, A.S. Merians, S. Adamovich, H. Poizner, and G.C. Burdea, Development and application of virtual reality technology to improve hand use and gait of individuals post-stroke, Restorative Neurology & Neuroscience 22 (2004), 371-86.
[26] A. Risedal, B. Mattsson, P. Dahlqvist, C. Nordborg, T. Olsson, and B.B. Johansson, Environmental influences on functional outcome after a cortical infarct in the rat, Brain Research Bulletin 58 (2002), 315-21.
[27] M.J. Renner, and M.R. Rosensweig, Enriched and Impoverished Environments: Effects on Brain and Behavior, Springer-Verlag, New York, 1987.
[28] F.D. Rose, B.M. Attree, B.M. Brooks, and D.A. Johnson, Virtual environments in brain damage
rehabilitation: A rationale from basic neuroscience, in G. Riva, B.K. Wiederhold, and E. Molinari,
Virtual environments in clinical and neuroscience, Amsterdam, IOS Press, 1998.
[29] C.J. Winstein, Designing Practice for Motor Learning: Clinical Implications, presented at II Step
Conference, Norman, Oklahoma, 1990.
[30] J.E. Deutsch, Rehabilitation of musculoskeletal injuries using the Rutgers Ankle Haptic Interface:
Three Case reports, Eurohaptics (2001), 11-16.
[31] J.E. Deutsch, J. Latonio, G. Burdea, and R. Boian, Post-Stroke Rehabilitation with the Rutgers Ankle System: a case study, Presence 10 (2001), 416-430.
[32] J.E. Deutsch, C. Paserchia, C. Vecchione, J.A. Lewis, R. Boian, and G. Burdea, Improved gait and
elevation speed of individuals post-stroke after lower extremity training in virtual environments,
Journal of Neurologic Physical Therapy 28 (2004), 185-86.
[33] J.A. Lewis, J.E. Deutsch, and G. Burdea, Usability of the remote console for virtual reality
telerehabilitation: formative evaluation, Cyberpsychology & Behavior 9 (2006), 142-7.
[34] J. Deutsch, J.A. Lewis, E. Whitworth, R. Bolan, G. Burdea, and M. Tremaine, Formative evaluation
and preliminary findings of a virtual reality telerehabilitation system for the lower extremity, Presence
14 (2005), 198-213.
[35] J.A. Lewis, R.F. Boian, G. Burdea, and J.E. Deutsch, Remote console for virtual telerehabilitation,
Studies in Health Technology & Informatics 111 (2005), 294-300.
[36] J. Lewis, R. Boian, G. Burdea, and J. Deutsch, Real-time web-based telerehabilitation monitoring,
Studies in Health Technology & Informatics 94 (2003), 190-2.
[37] A. Mirelman, P. Bonato, and J. Deutsch, Effects of Virtual Reality-Robotic Training Compared with Robot Training Alone on the Walking of Individuals with Post-Stroke Hemiplegia, Journal of Neurologic Physical Therapy 31 (2007), 200-201.
[38] A. Mirelman, P. Bonato, and J. Deutsch, Effects of training with a robot-virtual reality system compared with a robot alone on the gait of individuals after stroke, Stroke 40 (2009), 169-74.
Introduction
The motor recovery of the upper limb in patients following congenital or acquired brain
injury remains a persistent problem in neurological rehabilitation. More than 80% of
the approximately 566,000 stroke survivors in the United States experience hemiparesis
resulting in impairment of one upper extremity (UE) immediately after stroke and in
55-75% of survivors, impairments persist beyond the acute stage of stroke. Important
from a rehabilitation perspective is that functional limitations of the upper limb
contribute to disability and are associated with diminished health-related quality of life
[1, 3].
Despite a growing number of studies, there is still a paucity of good-quality evidence for the effectiveness of upper limb motor rehabilitation techniques for patients with stroke.
Figure 1. Arm and hand coordination during a reach-to-grasp task in one healthy subject (top) and one
individual with stroke-related hemiparesis (bottom). The mean peak hand aperture (thin solid lines) generally
occurs after the mean peak hand velocity (thick solid lines) as seen in both examples but the movement is
slower and hand opening is delayed in the individual with hemiparesis. Dotted lines indicate one standard
deviation of the mean traces.
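The temporal landmarks described in the caption, peak hand velocity and peak grip aperture, can be extracted from sampled kinematic data. The following is a minimal illustrative sketch, not the analysis software used in these studies; the function name and data layout are assumptions:

```python
import numpy as np

def kinematic_landmarks(hand_pos, aperture, dt):
    """Times (s) of peak hand velocity and peak grip aperture.

    hand_pos : (N, 3) array of hand positions in metres
    aperture : (N,) array of thumb-index distance in metres
    dt       : sampling interval in seconds
    """
    # Tangential hand velocity from finite differences of successive positions
    vel = np.linalg.norm(np.diff(hand_pos, axis=0), axis=1) / dt
    t_peak_vel = np.argmax(vel) * dt
    t_peak_aperture = np.argmax(aperture) * dt
    return t_peak_vel, t_peak_aperture
```

For a reach like the healthy example in Figure 1, the returned peak-aperture time would be expected to follow the peak-velocity time.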
For more complex movements, individuals with hemiparesis may have several
deficits when attempting to produce coordinated arm, trunk and hand movements. For
example, during trunk-assisted reaching (reaching to objects placed beyond arm's length), patients may have deficits in the timing of the initiation of arm and trunk
movement characterized by delays and increased variability [34, 35]. In addition,
Esparza et al. [35] found differences in the range of trunk displacement between
patients with left and right brain lesions and documented bilateral deficits in the control
of movements involving complex arm-trunk co-ordination.
We are only beginning to understand how complex movements are controlled and
the role of perception-action coupling in the healthy and damaged nervous system. The
healthy nervous system is able to integrate multiple degrees of freedom of the body and
produce invariant hand trajectories when making pointing movements with or without
trunk displacement (Figure 2). In trunk-assisted reaching, Rossi et al. [36] compared
the hand trajectories when healthy subjects reached to a target placed beyond the reach
on a horizontal surface. In some trials, the trunk was free to move and thus contributed
to the endpoint trajectory. In some other trials however, the trunk movement was
unexpectedly arrested before the movement began. They showed that the initial
contribution of the trunk movement to the hand displacement was neutralized by
appropriate compensatory rotations at the shoulder and elbow. Trunk movement began
to contribute to hand displacement only after the peak velocity of the hand movement
was reached. Results such as these highlight the elegant temporal and spatial
coordination used by the healthy nervous system to produce smooth and effective
movement.
Figure 2. Top. For beyond-the-reach experiments, subjects sat in a cut-out section of a plexiglass table.
Goggles obstructed vision of the hand and target after the go signal. Hand starting position was located 30 cm
in front of the sternum. A metal plate attached to the back of the trunk, and an electromagnet attached to the
wall were used to arrest the trunk movement in 30% of randomly selected trials. Middle and lower panels:
Mean hand and trunk trajectories for one healthy (left) and one stroke subject (right) in trunk-blocked (solid
lines) and trunk-free (open lines) movements. The stroke subject had a moderate motor impairment as
indicated by the Fugl-Meyer (FM) Arm Score of 50 out of 66. Despite differences in the trunk motion
between conditions, hand trajectories for blocked-trunk trials initially coincided with those for free-trunk
movements. Hand trajectories for trunk-blocked trials diverged earlier in participants with stroke indicating
that they could not fully compensate for the trunk movement by adjusting their arm movement.
After stroke, control of movement in specific joint ranges is limited and trunk
movement makes a larger and earlier contribution to hand transport for reaches to
objects placed both within and beyond arm's length [26, 29]. The neurologically
damaged system also has deficits in the ability to make appropriate compensatory
adjustments of the arm joints to maintain the desired hand trajectory during trunk-assisted reaching. This was tested using the same paradigm described above for the
study by Rossi et al. [36]. We compared hand trajectories and elbow-shoulder interjoint
coordination during beyond-the-reach pointing movements in healthy and
hemiparetic subjects when the trunk was free to move or when it was unexpectedly
arrested [31]. In approximately half the participants with hemiparesis, hand trajectory
divergence occurred earlier (Figure 2, right panels) while the divergence of interjoint
coordination patterns occurred later than in the control group, suggesting that
compensatory adjustments of the shoulder and elbow joints were not sufficient to
neutralize the influence of the trunk on the hand trajectory. Arm movements only
partially compensated the trunk displacement and this compensation was delayed. This
suggests a deficit in intersegmental temporal coordination that may be partly
responsible for the loss of arm coordination even in well-recovered patients.
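The hand-trajectory divergence described above can be operationalized as the first time the trunk-blocked path deviates from the trunk-free path beyond some tolerance. A minimal illustrative sketch, assuming both trajectories are resampled to a common time base; the helper name and the threshold value are assumptions, not the analysis used in [31]:

```python
import numpy as np

def divergence_time(traj_free, traj_blocked, dt, thresh=0.01):
    """First time (s) the two hand paths differ by more than `thresh` metres.

    traj_free, traj_blocked : (N, 3) hand trajectories resampled to a
    common time base; dt is the sampling interval in seconds.
    Returns None if the paths never diverge beyond the threshold.
    """
    # Pointwise Euclidean distance between the two paths
    gap = np.linalg.norm(traj_free - traj_blocked, axis=1)
    idx = np.flatnonzero(gap > thresh)
    return idx[0] * dt if idx.size else None
```

Earlier divergence times in the hemiparetic group would then indicate less complete compensation for the arrested trunk.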
Individuals with hemiparesis also have spatial and temporal coordination deficits
between movements of adjacent arm joints such as the elbow and shoulder [12, 16, 17,
18, 37], between the transport phase of reaching and aperture formation in grasping [38,
40] and in precision grip force control [39, 41]. For example, using a mathematical
analysis of kinematic variability during whole arm reaching movements, Reisman and
Scholz [42] found that individuals with mild-to-moderate hemiparesis had deficits in
specific patterns of joint coupling, and that they had only partial ability to rapidly
compensate movement errors. This suggestion had previously been proposed for single
joint arm movements by Dancause et al. [43] who further related the error
compensation deficits to impairments in executive functioning in patients with chronic
stroke.
The reduced capacity to produce and coordinate the movements of the arm, hand
and trunk into coherent action [see 44, 45] may lead to clumsy and slow movement
making it less likely that individuals would use their upper limb in daily life activities.
Rehabilitation efforts are aimed at reducing the effects of impairments through repeated
practice of targeted movements, tasks or activities in controlled clinical environments
[46].
An advantage of using VEs is that sensory parameters can be adapted and scaled to the abilities of the user. In so doing, responses to a larger number of situations can be measured in a shorter amount of time than is possible in real-world laboratory experimental set-ups. For
example, in a VE, several object locations and orientations can be reliably and rapidly
reproduced and object properties can be manipulated (i.e., obstacles can be introduced
by quickly changing properties and orientation of the object or the environment). VEs
are especially suited to the study of how individuals interact with objects or situations
that unexpectedly change. Thus, questions about dexterity and coordination that are not
easily accessible in a real-world environment can be more easily addressed. This is of
particular importance in the study of arm functional recovery in post-stroke patients.
Many stroke survivors lack the ability to reliably use the arm and hand during
interactions with objects within changing environments: e.g. catching a ball or picking
up an object while walking. These types of experimental set-ups are difficult to recreate
in the laboratory. Finally, another advantage of using VR is the possibility of studying
movement production in situations that, in the real world, may compromise the safety
of the individual. For example, in obstacle avoidance tasks, the ability to anticipate and
reach around a static obstacle such as the table ledge can be evaluated as well as the
ability to move in a constrained environment without danger of incurring injury due to
impact of the hand with an object.
2.2. The question of haptics
When the arm and hand interact with objects in the physical world, in addition to
proprioceptive feedback related to limb movement, the individual perceives sensory
information about collision of the hand with the objects being manipulated. This
sensory information, combined with task success, provides feedback to the individual
about the adequacy and effectiveness of his or her movement in the virtual
environment. However, haptic information is not easily incorporated into VR
environments created for motor control studies or rehabilitation studies of upper limb
reaching and object manipulation. The use of relevant haptic interfaces is important
because it enhances the user's sense of presence within VEs [62]. Many existing VEs
do not include haptics or include haptic information limited to sensations felt through a
joystick or mouse [63, 64]. These do not provide the nervous system with the most
salient movement-related sensory information. Given this reality, the essential question is whether movements made in VR environments, which lack the haptic sensory cues usually available in physical environments, can be considered valid. In other words, are they
spatially and temporally kinematically similar to equivalent movements made in
physical environments? In order to address this question, several studies have been
done to compare the kinematics of movements made in different types of VEs to those
made in physical environments [65, 69]. The following section of this chapter will
summarize the results of these validation studies.
In this task, subjects grasped a virtual 7 cm diameter ball, reached forward by leaning the trunk and then placed the
ball within a 2 cm x 2 cm yellow square on a real or virtual target. The initial
conditions for the task and the tasks themselves were carefully matched so that
movement extent and direction were as similar as possible. Thus, in both environments,
the initial position of the arm was about 0° flexion, 30° abduction and 0° external rotation (shoulder), 80° flexion and 0° supination (elbow), with the wrist and hand in
the neutral position. The fingers were slightly flexed. The initial position of the ball
was 13 cm in front of the right shoulder, 7 cm above and 3 cm to the left of the subject's hand. The target was placed 31 cm in front of the shoulder, 12.5 cm above
and 14 cm to the right of the initial position of the ball. The VR environment was
displayed in 2 dimensions (2D) on a computer screen placed 75 cm in front of the subject's midline. The ball and hand were displayed on the screen inside a cube. The task was to
place the ball in the upper right far corner of the cube. The virtual representation of the
subject's hand was obtained using a 22-sensor fibre-optic glove (Cyberglove,
Immersion Corp.) and an electromagnetic sensor (Fastrak, Polhemus Corp.) that was
used to orient the glove in the 2D environment. Data from these devices were
synchronized in real time. To enable the subject to "feel" the virtual ball, a prehension
force feedback device (Cybergrasp, Immersion Corp.) was fitted to the dorsal surface
of the hand. The Cybergrasp delivered prehension force feedback in the form of
extension forces to the distal phalanxes of the thumb and each finger. Forces applied to
the fingers were calibrated for each subject while he/she was wearing the Cyberglove
and all subjects perceived that they were holding a spherical object in their hand. To
better compare the performance of participants in each of the two environments, the
glove and grasp devices were worn on the hand in both conditions (Figure 3).
Figure 3. Top: Experimental set up for reaching, grasping and placing experiment in 2D virtual (VE) and
physical (PE) environments. Elbow-shoulder interjoint coordination in the reaching (middle) and transport
(bottom) phase of the task was similar between environments in healthy and stroke subjects.
Figure 4. A. Experimental set-up for comparison of pointing in the physical environment and equivalent 3D
virtual environment. The virtual environment (VE) was designed as two rows of three elevator buttons. The
distances between the buttons and from the body were the same in both environments. B. Examples of
endpoint (hand) and trunk trajectories for pointing movements to three lower targets in one healthy and one
stroke subject. C. Examples of elbow/shoulder interjoint coordination for movements made to middle lower
target in healthy and stroke subjects in the physical (PE) and virtual (VE) environments.
required the subject to use different combinations of arm joint movements for
successful pointing. The center-to-center distance between adjacent targets was 26 cm
in both environments and targets were displayed at a standardized distance equal to the
participant's arm length.
Fifteen adults (4 women, 11 men; aged 59 ± 15.4 years) with chronic poststroke
hemiparesis participated in this study. They had moderate upper limb impairment
according to Chedoke-McMaster Arm Scores which ranged from 3 to 6 out of 7. A
comparison group of 12 healthy subjects (6 women, 6 men, aged 53.3 ± 17.1 years)
also participated in the study.
The task was to point as quickly and as accurately as possible to each of the 6 targets
(12 trials per target) in a random sequence in each of the two environments.
Movements were analyzed in terms of performance outcome measures (endpoint
precision, trajectory and peak velocity) and arm and trunk movement patterns (elbow
and shoulder ranges of motion, elbow/shoulder coordination, trunk displacement and
rotation). There were very few differences in movement kinematics between
environments for healthy subjects. Overall, there were no differences in elbow and
shoulder ranges of motion or interjoint coordination for movements made in both
environments by either group (Figure 5). Healthy subjects, however, made movements
faster, pointed to contralateral targets more accurately and made straighter endpoint
paths in the PE compared to the VE. The participants with stroke made less accurate
and more curved movements in VE and also used less trunk displacement. Thus, the
results of this study suggested that pointing movements in virtual environments were
sufficiently similar to those made in physical environments so that 3D VEs could be
considered as valid training environments for upper limb movements.
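The outcome measures used in such comparisons, endpoint precision, trajectory straightness and peak velocity, can be computed directly from endpoint position samples. A hypothetical helper for one trial, not the authors' analysis code:

```python
import numpy as np

def pointing_outcomes(traj, target, dt):
    """Outcome measures for a single pointing trial.

    traj   : (N, 3) endpoint (hand) positions in metres
    target : (3,) target position in metres
    dt     : sampling interval in seconds
    Returns (endpoint error in m, straightness index, peak velocity in m/s).
    """
    # Endpoint precision: distance from the final hand position to the target
    error = np.linalg.norm(traj[-1] - target)
    steps = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    # Path length over straight-line distance: 1.0 means a perfectly straight path
    straightness = steps.sum() / np.linalg.norm(traj[-1] - traj[0])
    peak_vel = (steps / dt).max()
    return error, straightness, peak_vel
```

Under this convention, the more curved VE trajectories reported above would show straightness indices further above 1.0 than their PE counterparts.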
Figure 5. Results of comparison of pointing movements made in two environments described in Figure 4.
Healthy (A) but not stroke (B) subjects made movements more slowly in the virtual environment (VE)
compared to the physical environment (PE). There were no differences in joint ranges used in either healthy
or stroke subjects in the two environments (C,D).
The appearance of more curved trajectories and the use of less trunk movement
were also features of grasping movements made in a virtual environment while subjects
wore a haptic device on the hand (Cybergrasp, Immersion Corp.). In a study of 12
adults with chronic stroke-related hemiparesis (age 67 ± 10 yrs), reaching and grasping
kinematics to three different objects in a VE and a PE were compared [68]. The 3D
virtual environment was displayed via a head-mounted display (HMD) as in the previous study and the task was
to reach forward, pick-up and transport a virtual/physical object from one surface to
another (Figure 6). Three objects were used that required different grasp types: a can (diameter 65.6 mm) that required a spherical grasp, a screwdriver (diameter 31.6 mm) requiring a power grasp, and a pen (diameter 7.5 mm) requiring a precision finger-thumb grasp. In the VE, the virtual representation of the subject's hand was obtained
using a glove (Cyberglove, Immersion Corp.) and haptic feedback (prehension force
feedback) was provided via an exoskeleton device placed over the glove (Cybergrasp,
Immersion Corp.).
As for the comparison of reaching movements, comparable movement strategies
were used to reach, grasp and transport the virtual and physical objects in the two
environments. Similar to what was found for pointing movements, reaching in VR took
approximately 35% longer compared to PE. This was true especially for the cylindrical
and precision grasps. Thus, reaching and grasping movements that were accomplished
in around 1.5 seconds in PE, took up to 2.2 seconds in the VE. The increase in
movement time was reflected in all the temporal variables compared between the two
environments such as the peak velocity, the time to peak velocity, the time to maximal
grip aperture and the deceleration time as the hand approached the object. In addition to
the temporal differences, movement endpoint trajectories were also more curved in VE.
Overall, participants used more elbow extension and shoulder horizontal adduction in
VE compared to PE and there were slight differences in the amount of supination and
pronation used for reaching the different objects. Despite these differences, subjects
were able to similarly scale hand aperture to object size and the hand was similarly
oriented in the VE compared to the PE.
Figure 6. Representation of virtual environment for comparison of reaching and grasping kinematics in
physical and virtual environments. Inset (upper right) shows the scene as viewed by the subject wearing the
head-mounted display. Bottom: Sequence of movements (1-5) for picking up and moving the can,
screwdriver and pen.
4. Conclusion
Results of these validation studies are encouraging for the incorporation of VEs into
rehabilitation programs aimed at improving upper limb function. They suggest that
movements made in virtual environments can be kinematically similar to those made in
physical environments. This is the first step in the validation of VEs for rehabilitation
applications. A question remains as to how similar movements made in VEs have to be
to movements made in the physical world in order for real functional gains to occur.
Research on the effectiveness of task-specific training versus conventional or non-specific training suggests that rehabilitation outcomes are better when practice is task-oriented and repetitive [4, 46, 70]. Better outcomes are also expected when the learner
is motivated to improve and when the movements practiced are judged to be salient to
the learner [47]. These variables can be optimized in novel environments offered by
virtual reality technology to maximize rehabilitation outcomes.
VR is one of the most innovative and potentially effective technologies that, during the past decade, has begun to be used as an assessment and treatment tool in the rehabilitation of adults and children [49, 50, 52, 71, 72]. Some progress has been made
in the demonstration of the transfer of abilities and skills acquired within VE to real
world performance [50, 69, 73, 75]. Training in virtual reality environments has the
potential to lead to better rehabilitation outcomes than conventional approaches
because of the attributes of VR. Future research is still needed to firmly establish that
motor gains made in VEs are transferable to and will improve functioning and arm use
in the physical world.
Acknowledgements
These studies were supported by the Canadian Foundation for Innovation (CFI), the
Natural Science and Engineering Council of Canada (NSERC) and the Heart and
Stroke Foundation of Canada (HSFC). ECM was supported by CAPES, Brazil. MFL
holds a Tier 1 Canada Research Chair in Motor Recovery and Rehabilitation. Thanks
are extended to the patients and volunteers who participated in these studies and to
Ruth Dannenbaum-Katz, Christian Beaudoin, Valeri Goussev for clinical and technical
expertise.
References
[1] N.E. Mayo, W. Wood-Dauphinee, S. Ahmed, C. Gordon, J. Higgins, S. McEwen, N. Salbach,
Disablement following stroke, Disability & Rehabilitation 21 (1999), 258-268.
[2] J. Carod-Artal, J.A. Egido, J.L. Gonzalez, E. Varela de Seijas, Quality of life among stroke survivors
evaluated 1 year after stroke: experience of a stroke unit, Stroke 31 (2000), 2995-3000.
[3] P. Clarke, S.E. Black, Quality of life following stroke: Negotiating disability, identity, and resources, Journal of Applied Gerontology 24 (2005), 319-336.
[4] Canadian Stroke Network, Evidence-Based Review of Stroke Rehabilitation, http://www.canadianstrokenetwork.ca/eng/research/themefour.php#, accessed 2007.
[5] G. Kwakkel, B.J. Kollen, and R.C. Wagenaar, Therapy impact on functional recovery in stroke
rehabilitation. A critical review of the literature, Physiotherapy 85 (1999), 377-391.
[6] J.G. Broeks, G.J. Lankhorst, K. Rumping, A.J.H. Prevo, The long-term outcome of arm function after
stroke: results of a follow-up study, Disability Rehabilitation 21 (1999), 357-364.
[7] T. Platz, P. Denzler, Do psychological variables modify motor recovery among patients with mild arm
paresis after stroke or traumatic brain injury who receive the Arm Ability Training? Restorative
Neurology and Neuroscience 20 (2002), 37-49.
[8] J.W. Lance, The control of muscle tone, reflexes, and movement: Robert Wartenberg Lecture, Neurology 30 (1980), 1303-1313.
[9] B. Bobath, Adult Hemiplegia. Evaluation and Treatment 2nd ed., Heinemann Medical, London, 1978.
[10] D. Bourbonnais, S. Vanden Noven, Weakness in patients with hemiparesis, American Journal of
Occupational Therapy 43 (1989), 313-317.
[11] B. Conrad, R. Benecke, H.M. Meinck, Gait disturbances in paraspastic patients. In: Restorative
Neurology, Clinical Neurophysiology in Spasticity, P.J. Delwaide, and R.R. Young, Elsevier,
Amsterdam, 1 (1985), 155-174.
[12] J.P.A. Dewald, P.S. Pope, J.D. Given, T.S. Buchanan, and W.Z. Rymer, Abnormal muscle coactivation
patterns during isometric torque generation at the elbow and shoulder in hemiparetic subjects, Brain 118
(1995), 495-510.
[13] J. Filiatrault, D. Bourbonnais, J. Gauthier, D. Gravel, A.B. Arsenault, Spatial patterns of muscle
activation at the lower limb in subjects with hemiparesis and in healthy subjects, Journal of
Electromyography and Kinesiology 2 (1991), 91-102.
[14] M.C. Hammond, G.H. Kraft, S.S. Fitts, Recruitment and termination of electromyographic activity in the
hemiparetic forearm, Archives of Physical Medicine and Rehabilitation 69 (1988), 106-110.
[15] M.F. Levin, M. Dimov, Spatial zones for muscle coactivation and the control of postural stability, Brain
Research 757 (1997), 43-59.
[16] R.F. Beer, J.P. Dewald, W.Z. Rymer, Deficits in the coordination of multijoint arm movements in
patients with hemiparesis: evidence for disturbed control of limb dynamics, Experimental Brain
Research 131 (2000), 305-319.
[17] M.F. Levin, Interjoint coordination during pointing movements is disrupted in spastic hemiparesis, Brain
119 (1996), 281-294.
[18] M.C. Cirstea, A.B. Mitnitski, A.G. Feldman, M.F. Levin, Interjoint coordination dynamics during
reaching in stroke patients, Experimental Brain Research 151 (2003), 289-300.
[19] A. Hufschmidt, K.H. Mauritz, Chronic transformation of muscle in spasticity: a peripheral contribution
to increased tone, Journal of Neurology, Neurosurgery, and Psychiatry 48 (1985), 676-685.
[20] F. Jakobsson, L. Grimby, L. Edstrom, Motoneuron activity and muscle fibre type composition in
hemiparesis, Scandinavian Journal of Rehabilitation Medicine 24 (1992), 115-119.
[21] J.G. Colebatch, S.C. Gandevia, P.J. Spira, Voluntary muscle strength in hemiparesis: distribution of
weakness at the elbow, Journal of Neurology, Neurosurgery, and Psychiatry 49 (1986), 1019-1024.
[22] A. Tang, W.Z. Rymer, Abnormal force-EMG relations in paretic limbs of hemiparetic human subjects,
Journal of Neurology, Neurosurgery, and Psychiatry 44 (1981), 690-698.
[23] C. Gowland, H. deBruin, J.V. Basmajian, N. Plews, I. Burcea, Agonist and antagonist activity during
voluntary upper-limb movement in patients with stroke, Physical Therapy 72 (1992), 624-633.
[24] M.C. Hammond, S.S. Fitts, G.H. Kraft, P.B. Nutter, M.J. Trotter, L.M. Robinson, Co-contraction in the
hemiparetic forearm: Quantitative EMG evaluation, Archives of Physical Medicine and Rehabilitation 69
(1988), 348-351.
[25] M.F. Levin, A.G. Feldman, The role of stretch reflex threshold regulation in normal and impaired motor
control, Brain Research 637 (1994), 23-30.
[26] M.F. Levin, R.W. Selles, M.H.G. Verheul, O.G. Meijer, Deficits in the coordination of agonist and
antagonist muscles in stroke patients: Implications for normal motor control, Brain Research 853 (2000),
352-369.
[27] N. Yanagisawa, R. Tanaka, Reciprocal Ia inhibition in spastic paralysis in man, in: W.A. Cobb, H. van
Duijn (Eds.), Contemporary Clin Neurophysiol EEG Suppl 34, Elsevier, Amsterdam, 1978, pp. 521-526.
[28] M.C. Cirstea, M.F. Levin, Compensatory strategies for reaching in stroke, Brain 123 (2000), 940-953.
[29] M.F. Levin, S. Michaelsen, C. Cirstea, A. Roby-Brami, Use of the trunk for reaching targets placed
within and beyond the reach in adult hemiparesis, Experimental Brain Research 143 (2002), 171-180.
[30] S.M. Michaelsen, R. Dannenbaum, M.F. Levin, Task-specific training with trunk restraint on arm
recovery in stroke. Randomized control trial, Stroke 37 (2006), 186-192.
[31] D. Moro, M.F. Levin, Arm-trunk compensations for beyond-the-reach movements in adults with chronic
stroke, International Society of Electrophysiological Kinesiology, Abstract, Boston, 2004.
[32] A. Roby-Brami, A. Feydy, M. Combeaud, E.V. Biryukova, B. Bussel, M. Levin, Motor compensation
and recovery of reaching in stroke patients, Acta Neurologica Scandinavia 107 (2003), 369-381.
[33] S.M. Michaelsen, E.C. Magdalon, M.F. Levin, Coordination between reaching and grasping in adults
with hemiparesis, Motor Control, in press.
[34] P. Archambault, P. Pigeon, A.G. Feldman, M.F. Levin, Recruitment and sequencing of different degrees
of freedom during pointing movements involving the trunk in healthy and hemiparetic subjects,
Experimental Brain Research 126 (1999), 55-67.
[35] D. Esparza, P.S. Archambault, C.J. Winstein, M.F. Levin, Hemispheric specialization in the coordination of arm and trunk movements during pointing in patients with unilateral brain damage,
Experimental Brain Research 148 (2003), 288-497.
[36] E. Rossi, A. Mitnitski, A.G. Feldman, Sequential control signals determine arm and trunk contributions
to hand transport during reaching, The Journal of Physiology 538 (2002), 659-671.
[37] M.C. Cirstea, A. Ptito, M.F. Levin, Arm reaching improvements with short-term practice depend on the
severity of the motor deficit in stroke, Experimental Brain Research 152 (2003), 476-488.
[38] S.M. Michaelsen, S. Jacobs, A. Roby-Brami, M.F. Levin, Compensation for distal impairments of
grasping in adults with hemiparesis, Experimental Brain Research 157 (2004), 162-173.
[39] R. Wenzelburger, F. Kopper, A. Frenzel, H. Stolze, S. Klebe, A. Brossmann, J. Kuhtz-Buschbeck, M.
Golge, M. Illert, G. Deuschl, Hand coordination following capsular stroke, Brain 128 (2005), 64-74.
[40] R.M. Dannenbaum, M.F. Levin, R. Forget, P. Oliver, S.J. De Serres, Fading of sustained touch-pressure
appreciation in the hand of patients with hemiparesis, Archives of Physical Medicine and Rehabilitation,
in press.
[41] J. Hermsdorfer, K. Laimgruber, G. Kerkhoff, N. Mai, G. Goldenberg, Effects of unilateral brain damage
on coordination, and kinematics of ipsilesional prehension, Experimental Brain Research 128 (1999),
41-51.
[42] D.S. Reisman, J.P. Scholz. Aspects of joint coordination are preserved during pointing in persons with
post-stroke hemiparesis, Brain 126 (11) (2003), 2510-2527.
[43] N. Dancause, A. Ptito, M.F. Levin, Error correction strategies for motor behavior after unilateral brain
damage: Short-term motor learning processes, Neuropsychologia 40 (2002), 1313-1323.
[44] N. St-Onge, A.G. Feldman, Referent configuration of the body: A global factor in the control of multiple
skeletal muscles, Experimental Brain Research 155 (2004), 291-300.
[45] A.G. Feldman, V. Goussev, A. Sangole, M.F. Levin, Threshold position control and the principle of
minimal interaction in motor actions, Brain Research 165 (2007), 267-281.
[46] A. Gentile, Skill acquisition: action, movement, and neuromotor processes, in J. Carr and R. Shepherd
(Eds) Movement Science: Foundations for Physical Therapy in Rehabilitation, Aspen, Rockville, MD,
1987, pp. 93-117.
[47] J.A. Kleim, T.A. Jones, Principles of experience-dependent neural plasticity: Implication for
rehabilitation after brain damage, Journal of Speech, Language, and Hearing Research 51 (2008), 225-239.
[48] M.T. Schultheis, J. Himelstein, A.A. Rizzo, Virtual reality and neuropsychology: Upgrading the current
tools, The Journal of Head Trauma Rehabilitation 17 (2002), 378-394.
[49] A. Rizzo, G.J. Kim, A SWOT analysis of the field of virtual reality rehabilitation and therapy, Presence: Teleoperators & Virtual Environments 14 (2005), 119-146.
[50] H. Sveistrup, Motor rehabilitation using virtual reality, Journal of NeuroEngineering and Rehabilitation
1 (2004), 1-8.
[51] M. Thornton, S. Marshal, J. McComas, H. Finestone, H. McCormick, H. Sveistrup, Benefits of activity
and virtual reality based balance exercise program for adults with traumatic brain injury: Perceptions of
participants and their caregivers, Brain Injury 19 (2005), 989-1000.
[52] P.L. Weiss, N. Katz, The potential of virtual reality for rehabilitation, Journal of Rehabilitation Research
and Development 41 (2004), vii-x.
[53] J. Broeren, M. Dixon, K. Stibrant Sunnerhagen, M. Rydmark, Rehabilitation after stroke using virtual
reality, haptics (force feedback) and telemedicine, Studies in Health Technology and Informatics 124
(2006), 51-56.
[54] J.E. Deutsch, J. Latonio, G.C. Burdea, R. Boian, Rehabilitation of musculoskeletal injuries using the
Rutgers Ankle haptic interface: Three case reports, Eurohaptics 1 (2001), 11-16.
[55] J.E. Deutsch, A.S. Merians, G.C. Burdea, R. Boian, S.V. Adamovich, H. Poizner, Haptics and virtual
reality used to increase strength and improve function in chronic individuals post-stroke: Two case
reports, Neurology Report 26 (2002), 79-86.
[56] M. Holden, E. Todorov, J. Callahan, E. Bizzi, Case report: Virtual environment training improves motor
performance in two stroke patients, Neurology Report 23 (1999), 57-67.
[57] M.K. Holden, T. Dyar, J. Callahan, L. Schwamm, E. Bizzi, Motor learning and generalization following
virtual environment training in a patient with stroke, Neurology Report 24 (2000), 170-171.
[58] M.K. Holden, T. Dyar, J. Callahan, L. Schwamm, E. Bizzi, Quantitative assessment of motor
generalization in the real world following training in a virtual environment in patients with stroke,
Neurology Report 25 (2002), 129-130.
[59] A.S. Merians, D. Jack, R. Boian, M. Tremaine, G.C. Burdea, S.V. Adamovich, M. Recce, H. Poizner,
Virtual reality-augmented rehabilitation for patients following stroke, Physical Therapy 82 (2002), 898-915.
[60] A. Rovetta, F. Lorini, M.R. Canina, Virtual reality in the assessment of neuromotor diseases:
measurement of time response in real and virtual environments, Studies in Health Technology and
Informatics.
1. Introduction
The motor recovery of the upper limb in patients following congenital or acquired brain
injury remains a persistent problem in neurological rehabilitation. More than 80% of
the approximately 566,000 stroke survivors in the United States experience hemiparesis
resulting in impairment of one upper extremity (UE) immediately after stroke, and in
55-75% of survivors these impairments persist beyond the acute stage. Important
from a rehabilitation perspective is that functional limitations of the upper limb
contribute to disability and are associated with diminished health-related quality of life
[1, 3].
Despite a growing number of studies, there is still a paucity of good quality
evidence for the effectiveness of upper limb motor rehabilitation techniques for patients
Figure 1. Arm and hand coordination during a reach-to-grasp task in one healthy subject (top) and one
individual with stroke-related hemiparesis (bottom). The mean peak hand aperture (thin solid lines) generally
occurs after the mean peak hand velocity (thick solid lines) as seen in both examples but the movement is
slower and hand opening is delayed in the individual with hemiparesis. Dotted lines indicate one standard
deviation of the mean traces.
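The timing relation shown in Figure 1 — peak grip aperture occurring after peak hand velocity — can be extracted directly from recorded kinematics. The sketch below computes both peak times; the synthetic minimum-jerk reach and all variable names are illustrative assumptions, not data or code from the original study.

```python
import numpy as np

def peak_times(t, hand_pos, aperture):
    """Return the times of peak tangential hand velocity and peak grip aperture.

    t        : (N,) sample times in seconds
    hand_pos : (N, 3) hand marker positions in metres
    aperture : (N,) thumb-index distance in metres
    """
    # Tangential speed from numerical differentiation of position.
    speed = np.linalg.norm(np.gradient(hand_pos, t, axis=0), axis=1)
    return t[np.argmax(speed)], t[np.argmax(aperture)]

# Synthetic healthy-pattern reach: minimum-jerk transport (velocity peaks at
# mid-movement) with hand aperture peaking later, as described in the caption.
t = np.linspace(0.0, 1.0, 201)
pos = np.zeros((201, 3))
pos[:, 0] = 0.3 * (10 * t**3 - 15 * t**4 + 6 * t**5)      # 30 cm forward reach
aperture = 0.09 * np.exp(-((t - 0.6) / 0.15) ** 2)        # aperture peaks at 0.6 s

t_vel, t_ap = peak_times(t, pos, aperture)                # aperture peak follows velocity peak
```

With these toy profiles the velocity peak falls at mid-movement (0.5 s) and the aperture peak at 0.6 s, reproducing the healthy ordering seen in the figure.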
For more complex movements, individuals with hemiparesis may have several
deficits when attempting to produce coordinated arm, trunk and hand movements. For
example, during trunk-assisted reaching (reaching to objects placed beyond arm's
length), patients may have deficits in the timing of the initiation of arm and trunk
movement characterized by delays and increased variability [34, 35]. In addition,
Esparza et al. [35] found differences in the range of trunk displacement between
patients with left and right brain lesions and documented bilateral deficits in the control
of movements involving complex arm-trunk co-ordination.
We are only beginning to understand how complex movements are controlled and
the role of perception-action coupling in the healthy and damaged nervous system. The
healthy nervous system is able to integrate multiple degrees of freedom of the body and
produce invariant hand trajectories when making pointing movements with or without
trunk displacement (Figure 2). In trunk-assisted reaching, Rossi et al. [36] compared
the hand trajectories when healthy subjects reached to a target placed beyond the reach
on a horizontal surface. In some trials, the trunk was free to move and thus contributed
to the endpoint trajectory. In other trials, however, the trunk movement was
unexpectedly arrested before the movement began. They showed that the initial
contribution of the trunk movement to the hand displacement was neutralized by
appropriate compensatory rotations at the shoulder and elbow. Trunk movement began
to contribute to hand displacement only after the peak velocity of the hand movement
was reached. Results such as these highlight the elegant temporal and spatial
coordination used by the healthy nervous system to produce smooth and effective
movement.
Figure 2. Top. For beyond-the-reach experiments, subjects sat in a cut-out section of a plexiglass table.
Goggles obstructed vision of the hand and target after the go signal. Hand starting position was located 30 cm
in front of the sternum. A metal plate attached to the back of the trunk, and an electromagnet attached to the
wall were used to arrest the trunk movement in 30% of randomly selected trials. Middle and lower panels:
Mean hand and trunk trajectories for one healthy (left) and one stroke subject (right) in trunk-blocked (solid
lines) and trunk-free (open lines) movements. The stroke subject had a moderate motor impairment as
indicated by the Fugl-Meyer (FM) Arm Score of 50 out of 66. Despite differences in the trunk motion
between conditions, hand trajectories for blocked-trunk trials initially coincided with those for free-trunk
movements. Hand trajectories for trunk-blocked trials diverged earlier in participants with stroke indicating
that they could not fully compensate for the trunk movement by adjusting their arm movement.
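As the caption notes, the trunk was arrested in 30% of randomly selected trials so that subjects could not anticipate the perturbation. A minimal way to generate such a schedule — a sketch only, since the chapter does not describe the actual randomization procedure — is:

```python
import random

def trunk_arrest_schedule(n_trials=40, p_blocked=0.3, seed=1):
    """Label a fixed proportion of trials 'blocked' (electromagnet engaged)
    and shuffle them, so the blocked trials are unpredictable to the subject."""
    n_blocked = round(n_trials * p_blocked)
    labels = ["blocked"] * n_blocked + ["free"] * (n_trials - n_blocked)
    random.Random(seed).shuffle(labels)      # seeded here only for reproducibility
    return labels

schedule = trunk_arrest_schedule()           # 12 'blocked' among 40 trials
```

Fixing the count (rather than flipping a 0.3-biased coin per trial) guarantees the intended 30% proportion in every session.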
After stroke, control of movement in specific joint ranges is limited and trunk
movement makes a larger and earlier contribution to hand transport for reaches to
objects placed both within and beyond arm's length [26, 29]. The neurologically
damaged system also has deficits in the ability to make appropriate compensatory
adjustments of the arm joints to maintain the desired hand trajectory during trunk-assisted reaching. This was tested using the same paradigm described above for the
study by Rossi et al. [36]. We compared hand trajectories and elbow-shoulder interjoint
coordination during beyond-the-reach pointing movements in healthy and
hemiparetic subjects when the trunk was free to move or when it was unexpectedly
arrested [31]. In approximately half the participants with hemiparesis, hand trajectory
divergence occurred earlier (Figure 2, right panels) while the divergence of interjoint
coordination patterns occurred later than in the control group, suggesting that
compensatory adjustments of the shoulder and elbow joints were not sufficient to
neutralize the influence of the trunk on the hand trajectory. Arm movements only
partially compensated for the trunk displacement, and this compensation was delayed. This
suggests a deficit in intersegmental temporal coordination that may be partly
responsible for the loss of arm coordination even in well-recovered patients.
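The earlier trajectory divergence described above can be quantified by finding the first time the trunk-blocked hand path leaves a band around the trunk-free mean path. The sketch below uses a ±2 SD criterion with deterministic toy data; this is one plausible rule, not necessarily the statistical criterion used in the study.

```python
import numpy as np

def divergence_time(t, free_trials, blocked_mean, n_sd=2.0):
    """First time the trunk-blocked mean hand path leaves the n_sd band
    around the trunk-free mean path; None if it never does.

    free_trials  : (n_trials, N) hand displacement, trunk-free condition
    blocked_mean : (N,) mean hand displacement, trunk-blocked condition
    """
    mean_free = free_trials.mean(axis=0)
    sd_free = free_trials.std(axis=0)
    outside = np.abs(blocked_mean - mean_free) > n_sd * sd_free
    return t[np.argmax(outside)] if outside.any() else None

# Toy data: 5 trunk-free trials spread symmetrically around a common path,
# and a blocked path that starts to fall short of the target after t = 0.3 s.
t = np.linspace(0.0, 1.0, 101)
base = 30.0 * t**2                                  # hand displacement (cm)
free = base + np.array([-0.2, -0.1, 0.0, 0.1, 0.2])[:, None]
blocked = base.copy()
late = t > 0.3
blocked[late] -= 5.0 * (t[late] - 0.3)              # undercompensated trunk arrest

t_div = divergence_time(t, free, blocked)           # ≈ 0.36 s with these numbers
```

Comparing this divergence time between stroke and control groups is one way to express the "earlier divergence" finding as a single number per subject.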
Individuals with hemiparesis also have spatial and temporal coordination deficits
between movements of adjacent arm joints such as the elbow and shoulder [12, 16, 17,
18, 37], between the transport phase of reaching and aperture formation in grasping [38,
40] and in precision grip force control [39, 41]. For example, using a mathematical
analysis of kinematic variability during whole arm reaching movements, Reisman and
Scholz [42] found that individuals with mild-to-moderate hemiparesis had deficits in
specific patterns of joint coupling, and that they had only partial ability to rapidly
compensate movement errors. This suggestion had previously been proposed for single
joint arm movements by Dancause et al. [43] who further related the error
compensation deficits to impairments in executive functioning in patients with chronic
stroke.
The reduced capacity to produce and coordinate the movements of the arm, hand
and trunk into coherent action [see 44, 45] may lead to clumsy and slow movement
making it less likely that individuals would use their upper limb in daily life activities.
Rehabilitation efforts are aimed at reducing the effects of impairments through repeated
practice of targeted movements, tasks or activities in controlled clinical environments
[46].
One advantage of using VEs is that sensory parameters can be adapted and scaled to the
abilities of the user. In so doing, responses to a larger number of situations can be
measured in a shorter amount of time than is possible in real-world laboratory set-ups. For
example, in a VE, several object locations and orientations can be reliably and rapidly
reproduced and object properties can be manipulated (i.e., obstacles can be introduced
by quickly changing properties and orientation of the object or the environment). VEs
are especially suited to the study of how individuals interact with objects or situations
that unexpectedly change. Thus, questions about dexterity and coordination that are not
easily accessible in a real-world environment can be more easily addressed. This is of
particular importance in the study of arm functional recovery in post-stroke patients.
Many stroke survivors lack the ability to reliably use the arm and hand during
interactions with objects within changing environments: e.g. catching a ball or picking
up an object while walking. These types of experimental set-ups are difficult to recreate
in the laboratory. Finally, another advantage of using VR is the possibility of studying
movement production in situations that, in the real world, may compromise the safety
of the individual. For example, in obstacle avoidance tasks, the ability to anticipate and
reach around a static obstacle such as the table ledge can be evaluated as well as the
ability to move in a constrained environment without danger of incurring injury due to
impact of the hand with an object.
2.2. The question of haptics
When the arm and hand interact with objects in the physical world, in addition to
proprioceptive feedback related to limb movement, the individual perceives sensory
information about collision of the hand with the objects being manipulated. This
sensory information, combined with task success, provides feedback to the individual
about the adequacy and effectiveness of his or her movement in the virtual
environment. However, haptic information is not easily incorporated into VR
environments created for motor control studies or rehabilitation studies of upper limb
reaching and object manipulation. The use of relevant haptic interfaces is important
because it enhances the user's sense of presence within VEs [62]. Many existing VEs
do not include haptics or include haptic information limited to sensations felt through a
joystick or mouse [63, 64]. These do not provide the nervous system with the most
salient movement-related sensory information. Given this reality, the essential question
is whether movements made in VR environments that lack the haptic sensory cues usually
available in physical environments can be considered valid. In other words, are they
kinematically similar, in space and time, to equivalent movements made in
physical environments? In order to address this question, several studies have been
done to compare the kinematics of movements made in different types of VEs to those
made in physical environments [65, 69]. The following section of this chapter will
summarize the results of these validation studies.
virtual 7 cm diameter ball, reached forward by leaning the trunk and then placed the
ball within a 2 cm x 2 cm yellow square on a real or virtual target. The initial
conditions for the task and the tasks themselves were carefully matched so that
movement extent and direction were as similar as possible. Thus, in both environments,
the initial position of the arm was about 0° flexion, 30° abduction and 0° external
rotation (shoulder), 80° flexion and 0° supination (elbow) with the wrist and hand in
the neutral position. The fingers were slightly flexed. The initial position of the ball
was 13 cm in front of the right shoulder, 7 cm above and 3 cm to the left of the
subject's hand. The target was placed 31 cm in front of the shoulder, 12.5 cm above
and 14 cm to the right of the initial position of the ball. The VR environment was
displayed in 2 dimensions (2D) on a computer screen placed 75 cm in front of the subject's
midline. The ball and hand were displayed on the screen inside a cube. The task was to
place the ball in the upper right far corner of the cube. The virtual representation of the
subject's hand was obtained using a 22-sensor fibre-optic glove (Cyberglove,
Immersion Corp.) and an electromagnetic sensor (Fastrak, Polhemus Corp.) that was
used to orient the glove in the 2D environment. Data from these devices were
synchronized in real time. To enable the subject to "feel" the virtual ball, a prehension
force feedback device (Cybergrasp, Immersion Corp.) was fitted to the dorsal surface
of the hand. The Cybergrasp delivered prehension force feedback in the form of
extension forces to the distal phalanxes of the thumb and each finger. Forces applied to
the fingers were calibrated for each subject while he/she was wearing the Cyberglove
and all subjects perceived that they were holding a spherical object in their hand. To
better compare the performance of participants in each of the two environments, the
glove and grasp devices were worn on the hand in both conditions (Figure 3).
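The glove and tracker data streams mentioned above were synchronized in real time by the experimental software. For offline analysis, one common way to put two device streams on a shared timebase — a sketch with invented sampling rates, not the actual Cyberglove/Fastrak API or the authors' pipeline — is per-channel linear interpolation onto a reference clock:

```python
import numpy as np

def resample_to(t_ref, t_src, samples):
    """Linearly interpolate a multi-channel device stream onto a reference timebase.

    t_ref   : (M,) target timestamps (e.g., the position tracker's clock)
    t_src   : (N,) source timestamps (e.g., the glove's clock), increasing
    samples : (N, C) source samples, one column per channel
    """
    return np.column_stack([np.interp(t_ref, t_src, samples[:, ch])
                            for ch in range(samples.shape[1])])

# Toy example: a 60 Hz tracker stream and a 90 Hz glove stream over 1 s.
t_trak = np.linspace(0.0, 1.0, 60, endpoint=False)    # 60 Hz tracker clock
t_glove = np.linspace(0.0, 1.0, 90, endpoint=False)   # 90 Hz glove clock
glove = np.column_stack([np.sin(2 * np.pi * t_glove),  # two dummy joint channels
                         np.cos(2 * np.pi * t_glove)])

glove_on_trak = resample_to(t_trak, t_glove, glove)    # glove data on tracker clock
```

After resampling, every tracker frame has a matching glove sample, so hand posture and hand position can be analyzed frame by frame.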
Figure 3. Top: Experimental set up for reaching, grasping and placing experiment in 2D virtual (VE) and
physical (PE) environments. Elbow-shoulder interjoint coordination in the reaching (middle) and transport
(bottom) phase of the task was similar between environments in healthy and stroke subjects.
Figure 4. A. Experimental set-up for comparison of pointing in the physical environment and equivalent 3D
virtual environment. The virtual environment (VE) was designed as two rows of three elevator buttons. The
distances between the buttons and from the body were the same in both environments. B. Examples of
endpoint (hand) and trunk trajectories for pointing movements to three lower targets in one healthy and one
stroke subject. C. Examples of elbow/shoulder interjoint coordination for movements made to middle lower
target in healthy and stroke subjects in the physical (PE) and virtual (VE) environments.
required the subject to use different combinations of arm joint movements for
successful pointing. The center-to-center distance between adjacent targets was 26 cm
in both environments and targets were displayed at a standardized distance equal to the
participant's arm length.
Fifteen adults (4 women, 11 men; aged 59 ± 15.4 years) with chronic post-stroke
hemiparesis participated in this study. They had moderate upper limb impairment
according to Chedoke-McMaster Arm Scores which ranged from 3 to 6 out of 7. A
comparison group of 12 healthy subjects (6 women, 6 men, aged 53.3 ± 17.1 years)
also participated in the study.
The task was to point as quickly and as accurately as possible to each of the 6 targets
(12 trials per target) in a random sequence in each of the two environments.
Movements were analyzed in terms of performance outcome measures (endpoint
precision, trajectory and peak velocity) and arm and trunk movement patterns (elbow
and shoulder ranges of motion, elbow/shoulder coordination, trunk displacement and
rotation). There were very few differences in movement kinematics between
environments for healthy subjects. Overall, there were no differences in elbow and
shoulder ranges of motion or interjoint coordination for movements made in both
environments by either group (Figure 5). Healthy subjects, however, made movements
faster, pointed to contralateral targets more accurately, and made straighter endpoint
paths in the PE than in the VE. The participants with stroke made less accurate
and more curved movements in VE and also used less trunk displacement. Thus, the
results of this study suggested that pointing movements in virtual environments were
sufficiently similar to those made in physical environments so that 3D VEs could be
considered as valid training environments for upper limb movements.
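The outcome measures named above — endpoint precision and trajectory curvature — have simple kinematic definitions. As an illustrative sketch (the chapter does not give the exact formulas used in the study), path straightness can be indexed by the ratio of traveled path length to the straight-line start-to-end distance, and endpoint precision by the final distance to the target:

```python
import numpy as np

def straightness_index(path):
    """Path length divided by straight-line distance; 1.0 means perfectly straight."""
    segments = np.diff(path, axis=0)
    return np.sum(np.linalg.norm(segments, axis=1)) / np.linalg.norm(path[-1] - path[0])

def endpoint_error(path, target):
    """Distance between the final hand position and the target."""
    return np.linalg.norm(path[-1] - target)

# Toy trajectory: a 30 cm forward reach with a 2 cm lateral bow.
u = np.linspace(0.0, 1.0, 50)
path = np.column_stack([0.30 * u,
                        0.02 * np.sin(np.pi * u),
                        np.zeros_like(u)])

si = straightness_index(path)                      # slightly > 1 because of the bow
err = endpoint_error(path, np.array([0.30, 0.0, 0.0]))
```

Higher straightness indices for VE trials would correspond to the "more curved" endpoint paths reported for the stroke group.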
Figure 5. Results of comparison of pointing movements made in two environments described in Figure 4.
Healthy (A) but not stroke (B) subjects made movements more slowly in the virtual environment (VE)
compared to the physical environment (PE). There were no differences in joint ranges used in either healthy
or stroke subjects in the two environments (C,D).
The appearance of more curved trajectories and the use of less trunk movement
were also features of grasping movements made in a virtual environment while subjects
wore a haptic device on the hand (Cybergrasp, Immersion Corp.). In a study of 12
adults with chronic stroke-related hemiparesis (age 67 ± 10 yrs), reaching and grasping
kinematics to three different objects in a VE and a PE were compared [68]. The 3D
virtual environment was displayed via an HMD as in the previous study, and the task was
to reach forward, pick-up and transport a virtual/physical object from one surface to
another (Figure 6). Three objects were used that required different grasp types: a can
(diameter 65.6 mm) requiring a spherical grasp, a screwdriver (diameter 31.6 mm)
requiring a power grasp, and a pen (diameter 7.5 mm) requiring a precision finger-thumb
grasp. In the VE, the virtual representation of the subject's hand was obtained
using a glove (Cyberglove, Immersion Corp.) and haptic feedback (prehension force
feedback) was provided via an exoskeleton device placed over the glove (Cybergrasp,
Immersion Corp.).
As for the comparison of reaching movements, comparable movement strategies
were used to reach, grasp and transport the virtual and physical objects in the two
environments. Similar to what was found for pointing movements, reaching in VR took
approximately 35% longer than in the PE. This was especially true for the cylindrical
and precision grasps. Thus, reaching and grasping movements that were accomplished
in around 1.5 seconds in the PE took up to 2.2 seconds in the VE. The increase in
movement time was reflected in all the temporal variables compared between the two
environments such as the peak velocity, the time to peak velocity, the time to maximal
grip aperture and the deceleration time as the hand approached the object. In addition to
the temporal differences, movement endpoint trajectories were also more curved in VE.
Overall, participants used more elbow extension and shoulder horizontal adduction in
VE compared to PE and there were slight differences in the amount of supination and
pronation used for reaching the different objects. Despite these differences, subjects
were able to similarly scale hand aperture to object size and the hand was similarly
oriented in the VE compared to the PE.
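The temporal variables compared between environments — movement time, time to peak velocity, deceleration time, and time to maximal grip aperture — can all be derived from the speed and aperture profiles once movement onset and offset are defined. The sketch below uses a 10%-of-peak-speed criterion, one common convention; the chapter does not state which criterion was actually used, and the profiles here are synthetic.

```python
import numpy as np

def temporal_measures(t, speed, aperture, thresh=0.1):
    """Segment a reach with a 10%-of-peak-speed onset/offset rule and return
    the standard temporal variables (in the units of t)."""
    moving = speed >= thresh * speed.max()
    i_on = int(np.argmax(moving))                            # first sample above threshold
    i_off = len(moving) - 1 - int(np.argmax(moving[::-1]))   # last sample above threshold
    i_pk = int(np.argmax(speed))
    return {
        "movement_time": t[i_off] - t[i_on],
        "time_to_peak_velocity": t[i_pk] - t[i_on],
        "deceleration_time": t[i_off] - t[i_pk],
        "time_to_peak_aperture": t[np.argmax(aperture)] - t[i_on],
    }

# Synthetic 1.5 s reach: bell-shaped speed, aperture peaking late in the reach.
t = np.linspace(0.0, 1.5, 151)
speed = np.sin(np.pi * t / 1.5) ** 2
aperture = np.exp(-((t - 0.9) / 0.2) ** 2)

m = temporal_measures(t, speed, aperture)
```

Running the same extraction on VE and PE trials gives matched per-trial values, so the roughly 35% lengthening reported above can be tested variable by variable.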
Figure 6. Representation of virtual environment for comparison of reaching and grasping kinematics in
physical and virtual environments. Inset (upper right) shows the scene as viewed by the subject wearing the
head-mounted display. Bottom: Sequence of movements (1-5) for picking up and moving the can,
screwdriver and pen.
4. Conclusion
Results of these validation studies are encouraging for the incorporation of VEs into
rehabilitation programs aimed at improving upper limb function. They suggest that
movements made in virtual environments can be kinematically similar to those made in
physical environments. This is the first step in the validation of VEs for rehabilitation
applications. A question remains as to how similar movements made in VEs have to be
to movements made in the physical world in order for real functional gains to occur.
Research on the effectiveness of task-specific training versus conventional or non-specific training suggests that rehabilitation outcomes are better when practice is task-oriented and repetitive [4, 46, 70]. Better outcomes are also expected when the learner
is motivated to improve and when the movements practiced are judged to be salient to
the learner [47]. These variables can be optimized in novel environments offered by
virtual reality technology to maximize rehabilitation outcomes.
VR is one of the most innovative and potentially effective technologies to have been
adopted during the past decade as an assessment and treatment tool in the
rehabilitation of adults and children [49, 50, 52, 71, 72]. Some progress has been made
in the demonstration of the transfer of abilities and skills acquired within VE to real
world performance [50, 69, 73, 75]. Training in virtual reality environments has the
potential to lead to better rehabilitation outcomes than conventional approaches
because of the attributes of VR. Future research is still needed to firmly establish that
motor gains made in VEs transfer to the physical world and improve functioning and
arm use.
Acknowledgements
These studies were supported by the Canada Foundation for Innovation (CFI), the
Natural Sciences and Engineering Research Council of Canada (NSERC) and the Heart and
Stroke Foundation of Canada (HSFC). ECM was supported by CAPES, Brazil. MFL
holds a Tier 1 Canada Research Chair in Motor Recovery and Rehabilitation. Thanks
are extended to the patients and volunteers who participated in these studies and to
Ruth Dannenbaum-Katz, Christian Beaudoin, and Valeri Goussev for their clinical and technical
expertise.
References
[1] N.E. Mayo, W. Wood-Dauphinee, S. Ahmed, C. Gordon, J. Higgins, S. McEwen, N. Salbach,
Disablement following stroke, Disability & Rehabilitation 21 (1999), 258-268.
[2] J. Carod-Artal, J.A. Egido, J.L. Gonzalez, E. Varela de Seijas, Quality of life among stroke survivors
evaluated 1 year after stroke: experience of a stroke unit, Stroke 31 (2000), 2995-3000.
[3] P. Clarke, S.E. Black, Quality of life following stroke: Negotiating disability, identity, and resources,
Journal of Applied Gerontology 24 (2005), 319-336.
[4] Canadian Stroke Network, Evidence-Based Review of Stroke Rehabilitation,
http://www.canadianstrokenetwork.ca/eng/research/themefour.php, accessed 2007.
[30] S.M. Michaelsen, R. Dannenbaum, M.F. Levin, Task-specific training with trunk restraint on arm
recovery in stroke. Randomized control trial, Stroke 37 (2006), 186-192.
[31] D. Moro, M.F. Levin, Arm-trunk compensations for beyond-the-reach movements in adults with chronic
stroke, International Society of Electrophysiological Kinesiology, Abstract, Boston, 2004.
[32] A. Roby-Brami, A. Feydy, M. Combeaud, E.V. Biryukova, B. Bussel, M. Levin, Motor compensation
and recovery of reaching in stroke patients, Acta Neurologica Scandinavia 107 (2003), 369-381.
[33] S.M. Michaelsen, E.C. Magdalon, M.F. Levin, Coordination between reaching and grasping in adults
with hemiparesis, Motor Control, in press.
[34] P. Archambault, P. Pigeon, A.G. Feldman, M.F. Levin, Recruitment and sequencing of different degrees
of freedom during pointing movements involving the trunk in healthy and hemiparetic subjects,
Experimental Brain Research 126 (1999), 55-67.
[35] D. Esparza, P.S. Archambault, C.J. Winstein, M.F. Levin, Hemispheric specialization in the coordination of arm and trunk movements during pointing in patients with unilateral brain damage,
Experimental Brain Research 148 (2003), 288-497.
[36] E. Rossi, A. Mitnitski, A.G. Feldman, Sequential control signals determine arm and trunk contributions
to hand transport during reaching, The Journal of Physiology 538 (2002), 659-671.
[37] M.C. Cirstea, A. Ptito, M.F. Levin Arm reaching improvements with short-term practice depend on the
severity of the motor deficit in stroke, Experimental Brain Research 152 (2003), 476-488.
[38] S.M. Michaelsen, S. Jacobs, A. Roby-Brami, M.F. Levin, Compensation for distal impairments of
grasping in adults with hemiparesis, Experimental Brain Research 157 (2004), 162-173.
[39] R. Wenzelburger, F. Kopper, A. Frenzel, H. Stolze, S. Klebe, A. Brossmann, J. Kuhtz-Buschbeck, M.
Golge, M. Illert, G. Deuschl, Hand coordination following capsular stroke, Brain 128 (2005), 64-74.
[40] R.M. Dannenbaum, M.F. Levin, R. Forget, P. Oliver, S.J. De Serres, Fading of sustained touch-pressure
appreciation in the hand of patients with hemiparesis, Archives of Physical Medicine and Rehabilitation,
in press.
[41] J. Hermsdorfer, K. Laimgruber, G. Kerkhoff, N. Mai, G. Goldenberg, Effects of unilateral brain damage
on coordination, and kinematics of ipsilesional prehension, Experimental Brain Research 128 (1999),
41-51.
[42] D.S. Reisman, J.P. Scholz. Aspects of joint coordination are preserved during pointing in persons with
post-stroke hemiparesis, Brain 126 (11) (2003), 2510-2527.
[43] N. Dancause, A. Ptito, M.F. Levin, Error correction strategies for motor behavior after unilateral brain
damage: Short-term motor learning processes, Neuropsychologia 40 (2002), 1313-1323.
[44] N. St-Onge, A.G, Feldman, Referent configuration of the body: A global factor in the control of multiple
skeletal muscles, Experimental Brain Research 155 (2004), 291-300.
[45] A.G. Feldman, V. Goussev, A. Sangole, M.F. Levin, Threshold position control and the principle of
minimal interaction in motor actions, Brain Research 165 (2007), 267-281.
[46] A. Gentile, Skill acquisition: action, movement, and neuromotor processes, in J. Carr and R. Shepherd
(Eds) Movement Science: Foundations for Physical Therapy in Rehabilitation, Aspen, Rockville, MD,
1987, pp. 93-117.
[47] J.A. Kleim, T.A. Jones, Principles of experience-dependent neural plasticity: Implication for
rehabilitation after brain damage, Journal of Speech and Hearing Research 51 (2008), 225-239.
[48] M.T. Schultheis, J. Himelstein, A.A. Rizzo, Virtual reality and neuropsychology: Upgrading the current
tools, The Journal of Head Trauma Rehabilitation 17 (2002), 378-394.
[49] A. Rizzo, G.J. Kim, A SWOT analysis of the field of virtual reality rehabilitation and therapy, PresenceTeleoperators & Virtual Environments 14 (2005), 119-146.
[50] H. Sveistrup, Motor rehabilitation using virtual reality, Journal of NeuroEngineering and Rehabilitation
1 (2004), 1-8.
[51] M. Thornton, S. Marshal, J. McComas, H. Finestone, H. McCormick, H. Sveistrup, Benefits of activity
and virtual reality based balance exercise program for adults with traumatic brain injury: Perceptions of
participants and their caregivers, Brain Injury 19 (2005), 989-1000.
[52] P.L. Weiss, N. Katz, The potential of virtual reality for rehabilitation, Journal of Rehabilitation Research
and Development 41 (2004), vii-x.
[53] J. Broeren, M. Dixon, K. Stibrant Sunnerhagen, M. Rydmark, Rehabilitation after stroke using virtual
reality, haptics (force feedback) and telemedicine, Studies in Health Technology and Informatics 124
(2006), 51-56.
[54] J.E. Deutsch, J. Latonio, G.C. Burdea, R. Boian, Rehabilitation of musculoskeletal injuries using the
Rutgers Ankle haptic interface: Three case reports, Europhaptics 1 (2001), 11-16.
[55] J.E. Deutsch, A.S. Merians, G.C. Burdea, R. Boian, S.V. Adamovich, H. Poizner H, Haptics and virtual
reality used to increase strength and improve function in chronic individuals post-stroke: Two case
reports, Neurology Report 26 (2002), 79-86.
[56] M. Holden, E. Todorov, J. Callahan, E. Bizzi, Case report: Virtual environment training improves motor
performance in two stroke patients, Neurology Report 23 (1999), 57-67.
[57] M.K. Holden, T. Dyar, J. Callahan, L. Schwamm, E. Bizzi, Motor learning and generalization following
virtual environment training in a patient with stroke, Neurology Report 24 (2000), 170-171.
[58] M.K. Holden, T. Dyar, J. Callahan, L. Schwamm, E. Bizzi, Quantitative assessment of motor
generalization in the real world following training in a virtual environment in patents with stroke,
Neurology Report 25 (2002), 129-130.
[59] A.S. Merians, D. Jack, R. Boian, M. Tremaine, G.C. Burdea, S.V. Adamovich, M. Recce, H. Poizner,
Virtual reality-augmented rehabilitation for patients following stroke, Physical Therapy 82 (2002), 898915.
[60] A. Rovetta, F. Lorini, M.R. Canina, Virtual reality in the assessment of neuromotor diseases:
measurement of time response in real and virtual environments, Studies in Health Technology and
Abstract. Stroke patients report hand function as the most disabling motor deficit.
Current evidence shows that learning new motor skills is essential for inducing
functional neuroplasticity and functional recovery. Adaptive training paradigms
that continually and interactively move a motor outcome closer to the targeted skill
are important to motor recovery. Computerized virtual reality simulations, when
interfaced with robots, movement tracking and sensing glove systems, are
particularly adaptable, allowing for online and offline modifications of task-based
activities using the participant's current performance and success rate. We have
developed a second-generation system that can exercise the hand and the arm
together or in isolation and provide for both unilateral and bilateral hand and arm
activities in three-dimensional space. We demonstrate that by providing haptic
assistance for the hand and arm and adaptive anti-gravity support, the system can
accommodate patients with lower level impairments. We hypothesize that
combining training in virtual environments (VE) with observation of motor actions
can bring additional benefits. We present a proof of concept of a novel system that
integrates interactive VE with functional neuroimaging to address this issue. Three
components of this system are synchronized: the presentation of the visual display
of the virtual hands, the collection of fMRI images and the collection of hand joint
angles from the instrumented gloves. We show that interactive VEs can facilitate
activation of brain areas during training by providing appropriately modified
visual feedback. We predict that visual augmentation can become a tool to
facilitate functional neuroplasticity.
Keywords. Virtual Environment, Haptics, fMRI, Stroke, Cerebral Palsy
Introduction
During the past decade the intersection of knowledge gained within the fields of
engineering, neuroscience and rehabilitation has provided the conceptual framework
for a host of innovative rehabilitation treatment paradigms. These newer treatment
interventions are taking advantage of technological advances such as the improvement
in robotic design, the development of haptic interfaces, and the advent of human-machine interaction in virtual reality, and are in accordance with current neuroscience
literature in animals and motor control literature in humans. We therefore find
ourselves on a new path in rehabilitation.
Figure 1. a. Hand & Arm Training System using a CyberGlove and Haptic Master interface that provides
the user with a realistic haptic sensation that closely simulates the weight and force found in upper
extremity tasks. b. Hand & Arm Training System using a CyberGlove, a CyberGrasp and Flock of Birds
electromagnetic trackers. c. Close view of the haptic interface in a bimanual task.
pair of 5DT [15] or CyberGlove [16] instrumented gloves for hand tracking and a
CyberGrasp [16] for haptic effects. The CyberGrasp device is a lightweight, force-reflecting exoskeleton that fits over a CyberGlove data glove and adds resistive force
feedback to each finger. The CyberGrasp is used in our simulations to facilitate
individual finger movement by resisting flexion of the adjacent fingers in patients with
more pronounced deficits, thus allowing for individual movement of the active finger.
The arm simulations utilize the Haptic MASTER [17], a 3-degree-of-freedom
admittance-controlled (force-controlled) robot. Three more degrees of freedom (yaw,
pitch and roll) can be added to the arm by using a gimbal with force feedback available
for pronation/supination (roll). A three-dimensional force sensor measures the external
force exerted by the user on the robot. In addition, the velocity and position of the
robot's endpoint are measured. These variables are used in real time to generate
reactive motion based on the properties of the virtual haptic environment in the vicinity
of the current location of the robot's endpoint, allowing the robotic arm to act as an
interface between the participants and the virtual environments, enabling multiplanar
movements against gravity in a 3D workspace. The haptic interface provides the user
with a realistic haptic sensation that closely simulates the weight and force found in
functional upper extremity tasks [18] (Figure 1).
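To make the admittance-control idea concrete, a point-mass sketch of one update step is given below: the measured endpoint force is converted into motion through a virtual mass and damping. The model, parameter values, and function name are illustrative assumptions, not the Haptic MASTER's actual controller.

```python
import numpy as np

def admittance_step(f_ext, v, x, mass=2.0, damping=10.0, dt=0.001):
    """One admittance-control update: force in, motion out.

    f_ext : measured 3-D external force at the endpoint (N)
    v, x  : current endpoint velocity (m/s) and position (m)
    mass, damping : assumed virtual inertia (kg) and damping (N*s/m)
    """
    # Virtual dynamics m*a + b*v = f_ext  ->  a = (f_ext - b*v) / m
    a = (f_ext - damping * v) / mass
    v_new = v + a * dt     # integrate acceleration over one control tick
    x_new = x + v_new * dt
    return v_new, x_new
```

Pushing on the endpoint with a steady force makes the simulated endpoint accelerate and then settle at a velocity set by the virtual damping, which is what lets the robot feel like a manipulable object rather than a rigid machine.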
Hand position and orientation, as well as finger flexion and abduction, are recorded in
real time and translated into three-dimensional movements of the virtual hands shown
on the screen in a first-person perspective. The Haptic MASTER robot or the
Ascension Flock of Birds motion trackers [19] are used for arm tracking.
1.2 Simulations
We have developed a comprehensive library of gaming simulations: two exercise the
hand alone, five exercise the arm alone, and five exercise the hand and arm together.
Eight of these gaming simulations facilitate bilateral, symmetrical movement of the two
upper extremities. To illustrate the richness of these virtual worlds and the
sophistication of the haptic modifications for each game, we will describe some of
them in detail.
Figure 2. a. The piano trainer consists of a complete virtual piano that plays the appropriate notes as they
are pressed by the virtual fingers. b. Placing Cups displays a three-dimensional room with a haptically
rendered table and shelves. c. Reach/Touch is accomplished in the context of aiming/reaching type
movements in a normal, functional workspace. d. The Hammer Task trains a combination of three-dimensional reaching and repetitive finger flexion and extension. Targets are presented in a scalable 3D
workspace. e. Catching Falling Objects enhances movement of the paretic arm by coupling its motion
with the less impaired arm. f. Hummingbird Hunt depicts a hummingbird as it moves through an
environment filled with trees, flowers and a river. g. The full screen displays a three-dimensional room
containing three shelves and a table.
cues as to which notes are to be played. The activity can be made more challenging by
changing the fractionation angles required for successful key pressing (see 1.3.
Movement Assessment). When playing the songs bilaterally, the notes are key-matched. When playing the scales and the random notes bilaterally, the fingers of both
hands are either key-matched or finger-matched. Knowledge of results and knowledge
of performance are provided with visual and auditory feedback.
1.2.2 Hummingbird Hunt
This simulation depicts a hummingbird as it moves through an environment filled with
trees, flowers and a river. Water and bird sounds provide a pleasant encouraging
environment in which to practice repeated arm and hand movements (Figure 2f). The
game provides practice in the integration of reach, hand-shaping and grasp using a
pincer grip to catch and release the bird while it is perched on different objects located
on different levels and sections of the workspace. The flight path of the bird is
programmed at three different levels (low, medium and high), allowing for progression
in the range of motion required to successfully transport the arm to catch the bird.
Adjusting the target position, as well as the target size, scales the difficulty of the task and the
precision required for a successful grasp and release.
1.2.3 Placing Cups
The goal of the Placing Cups task is to improve upper extremity range and
smoothness of motion in the context of a functional reaching movement. The screen
displays a three-dimensional room with a haptically rendered table and shelves (Figure
2b). The participants use their virtual hand (hemiparetic side) to lift the virtual cups and
place them onto one of nine spots on one of three shelves. Target spots on the shelves
(represented by red squares) are presented randomly for each trial. To accommodate
patients with varying degrees of impairments, there are several haptic effects that can
be applied to this simulation: gravity and antigravity forces can be applied to the cups,
global damping can be provided for dynamic stability and to facilitate smoother
movement patterns, and the three dimensions of the workspace can be calibrated to
increase the range of motion required for successful completion of the task. The
intensity of these effects can be modified to challenge the patients as they improve.
1.2.4 Reach/Touch
The goal of the Reach/Touch game is to improve speed, smoothness and range of
motion of shoulder and elbow movement patterns. This is accomplished in the context
of aiming/reaching type movements (Figure 2c). Subjects view a 3-dimensional
workspace aided by stereoscopic glasses [23] to enhance depth perception, to increase
the sense of immersion and to facilitate the full excursion of upper extremity reach. The
participant moves a virtual cursor (small sphere) through this space in order to touch
ten targets presented randomly. Movement initiation is cued by a haptically rendered
activation target (donut at the bottom of the screen). In this simulation, there are three
algorithms that are used to control the robot to accommodate varying levels of
impairments. The first algorithm is an adjustable spring-like assistance that draws the
participant's arm/hand toward the target if they are unable to reach it within a
predefined time interval. The spring stiffness gradually increases when hand velocity
and force applied by the subject do not exceed predefined thresholds within this time
interval. Current values of active force and hand velocity are compared online with
threshold values and the assistive force increases if both velocity and force are under
threshold. If either velocity or force is above threshold, spring stiffness starts to
decrease in 5 N/m increments. The range of the spring stiffness is from 0 to 10000
N/m. The velocity threshold is predefined for each of the ten target spheres based on
the mean velocity of movement recorded from a group of neurologically healthy
subjects. The second algorithm, a haptic ramp (invisible tilted floor that goes through
the starting point and the target) decreases the force necessary to move the upper
extremity toward the target. This can be added or removed as needed. Finally, a range
restriction limits the participant's ability to deviate from an ideal trajectory toward each
target. This restriction can be decreased to provide less guidance as the participant's
accuracy improves. We have recently adapted this VE to train children with hemiplegia
due to cerebral palsy. To keep the children's attention focused, we modified this game
to make it more dynamic by enhancing the visual and auditory presentation. The
spheres, rather than just disappearing, now explode accompanied by the appropriate
bursting sound. This modification, easily implemented in the framework of VR, has
dramatically increased the children's compliance and engagement [24].
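The spring-stiffness adaptation described for Reach/Touch can be sketched as a single update rule. The 5 N/m decrement and the 0 to 10000 N/m range come from the text; the increment size and the function itself are illustrative assumptions.

```python
K_MIN, K_MAX = 0.0, 10000.0   # stiffness range from the text (N/m)
K_STEP_DOWN = 5.0             # decrement from the text (N/m)
K_STEP_UP = 5.0               # assumed increment (N/m)

def update_stiffness(k, velocity, force, v_thresh, f_thresh):
    """One control-loop update of the assistive spring stiffness (N/m)."""
    if velocity < v_thresh and force < f_thresh:
        k += K_STEP_UP        # subject is stalling: assist more
    else:
        k -= K_STEP_DOWN      # subject is moving or pushing: fade assistance
    return min(max(k, K_MIN), K_MAX)
```

Run once per control interval, this rule keeps assistance near zero while the subject is active and ramps it up only when both velocity and active force fall below their thresholds.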
1.2.5 Hammer Task
The Hammer Task trains a combination of three-dimensional reaching and repetitive
finger flexion and extension. Targets are presented in a scalable 3D workspace (Figure
2d). There are two versions of this simulation. One game exercises movement of the
hand and arm together by having the subjects reach towards a wooden cylinder and
then use their hand (finger extension or flexion) to hammer the cylinders into the floor.
The other uses supination and pronation to hammer the wooden cylinders into a wall.
The haptic effects allow the subject to feel the collision between the hammer and target
cylinders as they are pushed through the floor or wall. Hammering sounds accompany
collisions as well. The subjects receive feedback regarding their time to complete the
series of hammering tasks. Adjusting the size of the cylinders, the amount of antigravity assistance provided by the robot to the arm and the time required to
successfully complete the series of cylinders adaptively modifies the task requirements
and game difficulty.
1.2.6 Catching Falling Objects
The goal of this bilateral task simulation, Catching Falling Objects, is to enhance
movement of the paretic arm by coupling its motion with the less impaired arm (Figure
2e). Virtual hands are presented in a mono-view workspace. Each movement is
initiated by placing both virtual hands on two small circles. The participants' arms then
move in a synchronized symmetrical action to catch virtual objects with both hands as
they drop from the top of the screen. Real-time 3-D position of the less affected arm is
measured from either a Flock of Birds sensor attached to the less impaired hand or a
second Haptic Master robot. The position of the less affected arm guides the movement
of the impaired arm. For the bilateral games, an initial symmetrical (relative to the
patient's midline) relationship between the two arm positions is established prior to the
start of the game and maintained throughout the game using a virtual spring
mechanism. At the highest levels of the virtual spring's stiffness, the Haptic Master
guides the subject's arm in a perfect 1:1 mirrored movement. As the trajectory of the
subject's hemiparetic arm deviates from a mirrored image of the trajectory of the less
involved arm, the assistive virtual spring is stretched, exerting a force on the subject's
impaired arm. This force draws the arm back to the mirrored image of the trajectory of
the uninvolved arm. The Catching Falling Objects simulation requires a quick,
symmetrical movement of both arms towards an object falling along the midline of the
screen. If the subject successfully hits the falling object three times in a row the spring
stiffness diminishes. The subject then has to exert a greater force with their hemiplegic
arm in order to maintain the symmetrical arm trajectory required for continuous
success. If the subject cannot touch the falling object appropriately by exerting the
necessary force, the virtual spring stiffens again to assist the subject. In this way, the
adaptive algorithm maximizes the active force generated by the impaired arm. The
magnitude of the active force measured by the robot defines the progress and success in
the game; therefore, this adaptive algorithm ensures that the patient continually utilizes
their arm and does not rely on the Haptic Master to move it for them.
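A minimal sketch of the virtual-spring coupling, assuming that mirroring means reflecting the less-affected hand's position across the body midline (the x axis below). Only the three-catches fading rule comes from the text; the initial stiffness and step size are invented for illustration.

```python
import numpy as np

class MirrorSpring:
    """Assistive spring tying the impaired hand to the mirror of the other hand."""

    def __init__(self, stiffness=500.0, step=50.0, k_min=0.0):
        self.k = stiffness    # illustrative spring stiffness (N/m)
        self.step = step      # illustrative adaptation step (N/m)
        self.k_min = k_min
        self.streak = 0       # consecutive successful catches

    def assistive_force(self, impaired_pos, unimpaired_pos):
        # Mirror across the midline by negating the mediolateral (x) coordinate.
        target = unimpaired_pos * np.array([-1.0, 1.0, 1.0])
        return self.k * (target - impaired_pos)   # Hooke's-law pull toward mirror

    def register_trial(self, success):
        if success:
            self.streak += 1
            if self.streak >= 3:                  # three in a row: less help
                self.k = max(self.k - self.step, self.k_min)
                self.streak = 0
        else:
            self.streak = 0
            self.k += self.step                   # failure: stiffen again
```

As the spring softens, the impaired arm must generate more of the force itself to keep the mirrored trajectory, which is the mechanism by which the adaptive algorithm maximizes the active force produced by the impaired limb.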
1.3 Movement assessment
Several kinematic measures are derived from the training simulations. Each task in a
simulation consists of a series of movements, e.g., pressing a series of piano keys to
complete a song, or placing 9 cups on the virtual shelves. Time to complete a task,
range of motion and peak velocity for each individual movement can be measured in
each simulation. Accuracy, which denotes the proportion of correct key presses, and
fractionation are measures specific to the hand. Peak fractionation score quantifies the
ability to isolate each finger's motion and is calculated online by subtracting the mean
of the metacarpophalangeal and proximal interphalangeal joint angles of the most
flexed non-active finger from the mean angle of the active finger. When the actual
fractionation score becomes greater than the target score during the trial, a successful
key press will take place (assuming the subject's active finger was over the correct
piano key). The target fractionation score starts at 0 at the beginning of the training.
After each trial, and for each finger, our algorithm averages the fractionation achieved
when the piano key is pressed. If the average fractionation score is greater than 90% of
the target, the target fractionation will increase by 0.005 radians. If the average
fractionation is less than 75% of the target, the target will decrease by the same
amount. Otherwise, the target will remain the same. There is a separate target for each
finger and for each hand (total 10 targets). Once a key is displayed for the subject to
press, the initial threshold will be the set target. This will decrease during the trial
according to the Bezier Progression (interpolation according to a Bezier curve).
Thresholds will start at the target value and decrease to zero or to a predefined negative
number over the course of one minute. Negative limits for the target score will be used
to allow more involved subjects to play the game. To calculate movement smoothness,
we compute the normalized integrated third derivative of hand displacement [25, 26].
Finally, active force denotes the mean force applied by the subject to move the robot to
the target during the movement.
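The per-finger target adaptation reads almost directly as code. The 0.005 rad step and the 90%/75% bands come from the text; the function below is a sketch that assumes a nonnegative target (the text's negative limits for more involved subjects would need an extra case).

```python
STEP = 0.005  # adaptation step in radians, from the text

def update_target(target, mean_fractionation):
    """Adapt one finger's target fractionation score after a trial."""
    if mean_fractionation > 0.90 * target:
        return target + STEP      # above 90% of target: make it harder
    if mean_fractionation < 0.75 * target:
        return target - STEP      # below 75% of target: make it easier
    return target                 # otherwise leave the target unchanged
```

With the target initialized at 0, any positive mean fractionation on the first trial raises it, matching the starting condition described in the text; one such target is kept per finger and per hand.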
2. Training paradigms
2.1 Training the hand
We trained patients using three different paradigms: the hand alone, the hand and arm
separately, and the hand and arm together. We trained the hemiplegic hand of 8
subjects in the chronic phase post-stroke [9, 10]. Examination of the group effects
using analysis of variance of the data from the first two days of training, last two days
of training, and the one-week retention test showed significant changes in performance
in each of the parameters of hand movement that were trained in the virtual
environment. Post-hoc analyses revealed that subjects as a group improved in finger
fractionation (a measurement of finger flexion independence), thumb range of motion,
finger range of motion, thumb speed and finger speed. The Jebsen Test of Hand
Function (JTHF) [27], a timed test of hand function and dexterity, was used to
determine whether the kinematic improvements gained through practice in the VE
measures transferred to real-world functional activities. After training, the average task
completion time for all seven subtests of the JTHF for the affected hand (group mean
(SD)) decreased from 196 (62) sec to 172 (45) sec (paired t-test, t = 2.4, p < .05). In
contrast, no changes were observed for the unaffected hand (t = .59, p = .54). Analysis of
variance of the Jebsen scores from the pre-therapy, post-therapy and one-week
retention test demonstrated significant improvement in the scores. The subjects'
affected hand improved in this test (pre-therapy versus post-therapy) on average by
12%. In contrast, no significant changes were observed for the unaffected hand.
Finally, scores obtained during the retention testing were not significantly different
from post-therapy scores.
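For readers who want to reproduce the statistical comparison, the paired t statistic used above can be computed in a few lines; the completion times below are invented for illustration, not the study's data.

```python
import math

def paired_t(pre, post):
    """Paired t statistic for pre/post task-completion times."""
    d = [a - b for a, b in zip(pre, post)]            # per-subject differences
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance of d
    return mean / math.sqrt(var / n)                  # t = mean / standard error

# Invented completion times (seconds) for four subjects:
pre = [210.0, 180.0, 250.0, 190.0]
post = [185.0, 170.0, 220.0, 175.0]
t = paired_t(pre, post)   # compare against a t table with n - 1 df
```

A positive t indicates shorter post-training times; significance is then read off a t distribution with n - 1 degrees of freedom.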
2.2 Training the hand and arm
Four other subjects (mean age=51; years post stroke =3.5) practiced approximately
three hrs/day for 8 days on simulations that trained the arm and hand separately
(Reach/Touch, Placing Cups, Piano/Hand alone). Four other subjects (mean age=59;
years post stroke =4.75) practiced for the same amount of time on simulations that
trained the arm and hand together (Hammer, Plasma Pong, Piano/Hand/Arm,
Hummingbird Hunt). All subjects were tested pre- and post-training on two of our
primary outcome measures: the JTHF and the Wolf Motor Function Test (WMFT), a
time-based series of tasks that evaluates upper extremity performance [28]. The group
that practiced arm and hand tasks separately (HAS) showed a 14% change in the
WMFT and a 9% change in the JTHF, whereas the group that practiced using the
simulations that trained the arm and hand together (HAT) showed a 23% (WMFT) and
29% change (JTHF) in these tests of real world hand and arm movements.
There were also notable changes in the secondary outcome measures: the
kinematic and force data derived from the virtual reality simulations during training.
These kinematic measures included time to task completion (duration), accuracy,
velocity, smoothness of hand trajectory and force generated by the hemiparetic arm.
Subjects in both groups showed similar changes in the time to complete each game, a
36%-42% decrease, depending on the specific simulation. Additionally, three of the
four subjects in the HAS group improved the smoothness of their hand trajectories (in
the range of 50%-66%) indicating better control [29].
Figure 3. Trajectories of a representative subject performing single repetitions of the cup reaching
simulation. a. The dashed line represents the subject's performance without any haptic effects on Day 1 of
training. The solid line represents the subject's performance with the trajectory stabilized by the damping
effect and with the work against gravity decreased by the robot. Also note the collision with the haptically
rendered shelf during this trial. b. The same subject's trajectory while performing the cup placing task
without haptic assistance following training. Note the coordinated, up-and-over trajectory, consistent with
normal performance of a real world placing task (adapted from [13]).
However, the subjects in the HAT group showed a more pronounced decrease in
the path length. This suggests a reduction in extraneous and inaccurate arm movement
with more efficient limb segment interactions. Figure 3 shows the hand trajectories
generated by a representative subject in the Placing Cup activity pre and post training.
Figure 3a depicts a side view of a trajectory generated without haptic assistance, and
another trajectory generated with additional damping and increased antigravity support.
At the beginning of the training the subject needed the addition of the haptic effects to
stabilize the movement and to provide enough arm support for reaching the virtual
shelf. However, Figure 3b shows that after two weeks of training this subject
demonstrated a more normalized trajectory even without haptic assistance.
2.3 Bilateral training
In the upper arm bilateral games, movement of the unimpaired hand guides the
movement of the impaired hand. Importantly, an adaptive algorithm continually
modifies the amount of force assistance provided by the robot, based upon the force
generated by the subject and their success in the game. This adaptive
algorithm thereby ensures that the patient continually utilizes their hemiplegic arm and
does not rely on the Haptic Master to move it for them. Figure 4 shows the change in
the relationship between the assistive force provided by the robot and the active force
generated by a representative subject on Day 1 and Day 7 of training. With the aid of
this algorithm, the subjects were able to minimize their reliance on the assistance
provided by the robot during training, and greatly increase the force they could
generate to successfully complete the catching task during the final days of training.
Active force was calculated as the amount of force generated to move the robot towards
the target and did not take into account the force required to support the arm against
gravity. The mean active force produced by the impaired upper extremity during this
bilateral elbow-shoulder activity increased by 82% and 95% for two subjects who were
more impaired (pre-training WMFT scores of 180 sec and 146 sec). Two other subjects
who were less impaired, (pre-training WMFT scores of 67 sec and 54 sec) improved
their active force by 17% and 22% respectively.
Figure 4. Interaction between the subject and robot which is coordinated by on-line assistance algorithms.
Figure 4a depicts the performance of a repetition of Reach/Touch. The dashed line plots the hand velocity
over time. As the subject moves toward the target, the assistive force, depicted by the solid line, stays at a
zero level unless the subject fails to reach the target within a predefined time window. As the subject's
progress toward the target slows, the assistive force increases until progress resumes, and then starts to
decrease after velocity exceeds a predefined threshold value. Figures 4b and 4c describe two repetitions of the
bilateral Catching Falling Objects simulation. Performance on Day 1 (b) requires assistive force from the
robot (solid line) when the subject is unable to overcome gravity and move the arm towards the target (the
active force, dashed lines, dips below zero). Figure 4c represents much less assistance from the robot to perform the
same task because the subject is able to exert active force throughout the task.
Questionnaires have been used to assess the subjects' perception of and satisfaction with
the training sessions, the physical and mental effort involved in the training, and their
evaluation of the different exercises. The subjects were eager to participate in the
project. They found the computer sessions required a lot of mental concentration, were
engaging and helped improve their hand motion. They found the exercises to be tiring
but wished this form of training had been part of their original therapy. When
comparing the hand simulations they stated that playing the piano one finger at a time
(fractionation exercise) required the most physical and mental effort.
researched for their role in higher-order representation of action [34-37]. Mirror and
canonical neurons may play a central role. Detailed accounts and physiological
characteristics of these neurons are extensively documented (for review, see [38]),
however, a key property of a mirror cell is that it is equally activated by either
observing or actuating a given behavior. Though initially identified in non-human
primates, there is now compelling evidence for the existence of a human mirror neuron
system [38, 39]. Although the nature of tasks and functions that may most reliably
capture this network remains under investigation (for example, see [37]),
neurophysiological evidence suggests that mirror neurons may be the link that allows
the sensorimotor system to resonate when observing actions, such as for motor
learning. Notably, the pattern of muscle activation evoked by transcranial magnetic
stimulation to the primary motor cortex while observing a grasping action was found to
be similar to the pattern of muscle activation seen during actual execution of that
movement [40, 41] suggesting that the neural architecture for action recognition
overlaps with and can prime the neural architecture for action production [42]. This
phenomenon may have profound clinical implications [43].
Literature on the effects of observing actions performed in the natural world, which
indicates recruitment of specific neural networks [38], allows us to hypothesize that
observing actions performed in a VE may also recruit neural circuits of interest. If
we can show proof of concept for using virtual reality feedback to selectively drive
brain circuits in healthy individuals, then this technology can have profound
implications for use in diagnoses, rehabilitation, and studying basic brain mechanisms
(i.e. neuroplasticity). We have done several pilot experiments using MRI-compatible
data gloves to combine VE experiences with fMRI, to test the feasibility of using VE-based sensory manipulations to recruit select sensorimotor networks. In this chapter, in
addition to the data supporting the feasibility of our enhanced training system, we also
present preliminary data indicating that through manipulations in the VE, one can
activate specific neural networks, particularly those neural networks associated with
sensorimotor learning.
3.1 fMRI Compatible Virtual Reality System
Three components of this system are synchronized: the presentation of the visual
display of the virtual hands, the collection of fMRI images, and the collection of hand
joint angles from the MRI-compatible (5DT) data gloves. We have extracted the
essential elements common to all of our environments, the virtual hands, in order to test
the ability of visual feedback provided through our virtual reality system to affect brain
activity (Figure 5, left panel). Subjects performed simple sequential finger flexion
movements with their dominant right hand (index through pinky fingers) as if they
were pressing imaginary piano keys at a rate of 1 Hz. Subjects' finger motion was
recorded, and the joint angles were transmitted in real time to a computer controlling the
motion of the virtual hands. Thus we measured event-related brain responses in real time as subjects interacted in the virtual environment.
The virtual hand on the display was sized in proportion to the subject's actual hand
and its movement was calibrated for each subject before the experiment. After
calibration, glove data collection was synchronized with the first functional volume of
each functional imaging run by a trigger signal transmitted from the scanner to the
computer controlling the glove. From that point, glove data was collected in a
continuous stream until termination of the visual presentation program at the end of
each functional run. As glove data was acquired, it was time-stamped and saved for
offline analysis. fMRI data was realigned, co-registered, normalized, and smoothed (10
mm Gaussian filter) and analyzed using SPM5 (http://www.fil.ion.ucl.ac.uk/spm/).
Activation was considered significant if it exceeded a threshold of p<0.001 and a
cluster extent of 10 voxels. Finger motion data were analyzed offline using custom-written
Matlab software to confirm that subjects produced the instructed finger sequences and
rested in the appropriate trials. Finger motion amplitude and frequency were analyzed
using standard multivariate statistical approaches to ensure that differences in finger
movement did not account for any differences in brain activation.
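The offline check that subjects actually moved at the instructed ~1 Hz rate can be sketched as follows (in Python rather than the authors' custom Matlab code); the mean-crossing method and all numeric values here are illustrative assumptions.

```python
import math

# Illustrative sketch of the offline motion check: estimate cycle frequency
# from mean-crossings of a joint-angle trace, and amplitude from its range.
# Not the authors' Matlab code; the 1 Hz synthetic trace is for demonstration.

def amplitude_and_frequency(angles, dt):
    """Peak-to-peak amplitude and mean-crossing frequency of a periodic trace."""
    mean = sum(angles) / len(angles)
    centered = [a - mean for a in angles]
    # count sign changes; each full movement cycle produces two mean-crossings
    crossings = sum(1 for a, b in zip(centered, centered[1:]) if a * b < 0)
    duration = dt * (len(angles) - 1)
    return max(angles) - min(angles), crossings / (2.0 * duration)

# synthetic 1 Hz flexion trace (degrees) sampled at 100 Hz for 5 s
dt = 0.01
trace = [30 + 25 * math.sin(2 * math.pi * 1.0 * i * dt + 0.3) for i in range(501)]
amp, freq = amplitude_and_frequency(trace, dt)
```

A real analysis would then compare amplitude and frequency across conditions, as the text describes, to rule out motion confounds.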
First, we investigated whether observing virtual hand actions with the intention to
imitate those actions afterwards activates known frontoparietal observation-execution
networks. After signing institutionally approved consent, eight right-handed subjects
who were naïve to the virtual reality environment and free of neurological disease were
tested in two conditions: 1) Watch Virtual Hands: observe finger sequences performed
by a virtual hand model, with the understanding that they would imitate the sequence
after it was demonstrated (observe with intent to imitate, OTI); 2) Move and Watch
Hands: execute the observed sequence while receiving real-time feedback of the virtual
hands (actuated by the subject's motion). The trials were arranged as 9-second
miniblocks, separated by a random interval lasting between 5 and 10 seconds. Each
subject completed four miniblocks of each condition.
In the Move+Watch condition, significant activation was noted in a distributed
network traditionally associated with motor control: contralateral sensorimotor, premotor, posterior parietal, basal ganglia, and ipsilateral anterior intermediate cerebellum.
In the OTI condition, significant activation was noted in the contralateral dorsal
premotor cortex, the (pre)supplementary motor area, and the parietal cortex. Parietal
activation included regions in the superior and inferior parietal lobules and overlapped
with activation noted in the Move+Watch condition in the rostral extent of the
intraparietal sulcus (see Figure 5). The common activation noted in this region for
intentional observation and execution of action is in line with other reports using video
playback of real hands moving [34, 37], and suggests that well-constructed VEs may tap
into similar neural networks.
Figure 5. Left panel. Subject's view during fMRI experiment (top). The real hand in a 5DT glove is
shown below that. Movement of the virtual hand can be generated as an exact representation of the real
hand, or can be distorted to study action-observation interaction inside a virtual environment. Right panel.
Observing finger sequences with the intention to imitate afterwards. Significant BOLD activity (p<.001)
is rendered on an inflated cortical surface template. Arrows show activation in the dorsal premotor cortex,
BA 5, rostral portion of the IPS, supramarginal gyrus, and (pre)supplementary motor area, likely
associated with planning sequential finger movements.
Figure 6. A representative healthy subject (left panel) and a chronic stroke patient (right panel)
performed a finger sequence with the RIGHT hand. The inset in the right panel shows the lesion location
in the stroke patient (see also [9]). For each subject, the panels show the activations that were
significantly greater when viewing the corresponding finger motion of the LEFT more than the RIGHT
virtual hand (i.e. activation related to mirror viewing). Note that viewing the LEFT virtual hand led to
significantly greater activation of the primary motor cortex IPSILATERAL to the moving hand (i.e.
contralateral to the observed virtual hand) (see arrow). Significant BOLD activity (p<.01) is rendered on
an inflated cortical surface template using Caret software (adapted from [22]).
Fig. 6 shows activation in the ROI that was greater when the LEFT (relative to the
RIGHT) virtual hand was actuated by the subjects' physical movement of their right
hand. In other words, this contrast represents greater activation when seeing the
mirrored virtual hand than the corresponding hand. This simple sensory manipulation was
sufficient to selectively facilitate lateralized activity in the cortex representing the
observed (mirrored) virtual hand. As our preliminary data suggest in the case of stroke
patients, this visual manipulation in a VE may be effective in facilitating the
sensorimotor cortex in the lesioned hemisphere and may help explain the
positive therapeutic effects noted by Altschuler and colleagues [44] when training
stroke patients using mirror therapy.
4. Discussion
Rehabilitation of the upper extremity is difficult. It has been reported that 75%-95% of
patients post-stroke learn to walk again, but 55% have continuing problems with upper
extremity function [47, 48]. The complexity of sensorimotor control required for hand
function as well as the wide range of recovery of manipulative abilities makes
rehabilitation of the hand even more challenging. Moreover, while walking requires
integration of both limbs, ensuring that the affected limb is exercised during
ambulation, some upper extremity tasks can be completed using only the unaffected
limb, creating a situation in which the patient gets used to neglecting the affected side
(learned disuse).
Although we demonstrated positive outcomes with the original system, it was only
appropriate for patients with mild impairments. Our second-generation system,
combining movement tracking, virtual reality therapeutic gaming simulations, and
robotics, appears to be a viable option for patients with more significant
impairments of the upper extremity. The haptic mechanisms, such as the spring
assistance, the damping to stabilize trajectories and the adaptable anti-gravity
assistance allowed patients with greater impairments to successfully participate in
activities in which they could not usually partake. From a clinical perspective,
therapists can tailor the interventions to address the particular needs of the patients;
from the patients' perspective, it was clear throughout the testing of the system that the
patients enjoyed the activities and were challenged by the intervention.
In addition to their use in providing more intense therapy of longer
duration, Brewer [6] suggests that robotics has the potential to address the challenge
of conducting clinically relevant research. An example of this is the comparison we
described above, of training the hand and arm separately versus training them together. It is
controversial whether training the upper extremity as an integrated unit leads to better
outcomes than training the proximal and distal components separately. The current
prevailing paradigm for upper extremity rehabilitation describes the need to develop
proximal control and mobility prior to initiating training of the hand. During recovery
from a lesion, the hand and arm are thought to compete with each other for neural
territory [49]. Therefore, training proximal control first or along with distal control may
actually have deleterious effects on the neuroplasticity and functional recovery of the
hand. However, neural control mechanisms of arm transport and hand-object
interaction are interdependent. Therefore, complex multisegmental motor training is
thought to be more beneficial for skill retention. Our preliminary results demonstrate
that in addition to providing an initial proof of concept, the system allows for the
systematic testing of such controversial treatment interventions.
Our second goal was to design a sensory stimulation paradigm for acute and severely
impaired patients with limited ability to participate in therapy. A practice condition used during a
therapy session is that of visual demonstration or modeling. Current neurological
evidence suggests that the observation of motor actions is more than an opportunity to
understand the requirements of the movement to be executed. Many animal and human
studies have shown activation of the motor cortex during observation of actions done
by others [38]. Observation of motor actions may actually activate similar neural
pathways to those involved in the performance of the observed action. These findings
provide an additional potential avenue of therapeutic intervention to induce neural
activation.
However, some studies indicate that neural processing is not the same when
observing real actions as when observing virtual actions, suggesting that observing
virtual models of human arms could have significantly less of a facilitation effect when
compared to video clips of real arm motion [50]. We found that when our subjects
viewed the movement of the virtual hands, with the intention of imitating that action,
the pre-motor and posterior parietal areas were activated. Furthermore, we showed in
both healthy subjects and in one subject post-stroke, that when the left virtual hand was
actuated by the subject's physical movement of their right hand, activity in the cortex
ipsilateral to the real moving hand (contralateral to the moving virtual hand) was
selectively facilitated.
We hypothesized that viewing a virtual hand corresponding to the patient's
affected side and animated by movement of the patient's unaffected hand could
selectively facilitate the motor areas in the affected hemisphere. This sensory
manipulation takes advantage of the capabilities of virtual reality to induce activation
through observation and to perturb the reality in order to target particular networks. We
are optimistic about our preliminary findings and suggest that this visual manipulation
in a VE should be further explored to determine its effectiveness in facilitating
sensorimotor areas in a lesioned hemisphere.
We believe that VR is a promising tool for rehabilitation. We found that adding
haptic control mechanisms to the system enabled subjects with greater impairments to participate in the training activities.
Acknowledgments
This work was supported in part by Rehabilitation Engineering Research Center grant #
H133E050011 from the National Institute on Disability and Rehabilitation Research.
References
[1] J.L. Patton and F.A. Mussa-Ivaldi, Robot-assisted adaptive training: custom force fields for teaching movement patterns, IEEE Transactions on Biomedical Engineering 51 (2004), 636-646.
[2] P.S. Lum, C.G. Burgar, P.C. Shor, M. Majmundar, and M. Van der Loos, MIME robotic device for upper limb neurorehabilitation in subacute stroke subjects: A follow up study, Journal of Rehabilitation Research and Development 42 (2006), 631-642.
[3] H.I. Krebs, N. Hogan, M.L. Aisen, and B.T. Volpe, Robot-aided neurorehabilitation, IEEE Transactions on Rehabilitation Engineering 6 (1998), 75-87.
[4] L.E. Kahn, P.S. Lum, W.Z. Rymer, and D.J. Reinkensmeyer, Robot assisted movement training for the stroke impaired arm: Does it matter what the robot does?, Journal of Rehabilitation Research and Development 43 (2006), 619-630.
[5] S. McCombe-Waller and J. Whittall, Fine motor coordination in adults with and without chronic hemiparesis: baseline comparison to non-disabled adults and effects of bilateral arm training, Archives of Physical Medicine and Rehabilitation 85 (2004), 1076-1083.
[6] B.R. Brewer, S.K. McDowell, and L.C. Worthen-Chaudhari, Poststroke upper extremity rehabilitation: A review of robotic systems and clinical results, Topics in Stroke Rehabilitation 14 (2007), 22-44.
[7] K.M. Stanney, Handbook of Virtual Environments: Design, Implementation and Applications, London, Lawrence Erlbaum, 2002.
[8] G.C. Burdea and P. Coiffet, Virtual Reality Technology, New Jersey, Wiley, 2003.
[9] A.S. Merians, H. Poizner, R. Boian, G. Burdea, and S. Adamovich, Sensorimotor training in a virtual reality environment: does it improve functional recovery poststroke?, Neurorehabilitation and Neural Repair 20 (2006), 252-267.
[10] S.V. Adamovich, A.S. Merians, R. Boian, M. Tremaine, G.C. Burdea, M. Recce, and H. Poizner, A virtual reality (VR)-based exercise system for hand rehabilitation after stroke, Presence 14 (2005), 161-174.
[11] A.S. Merians, J. Lewis, Q. Qiu, B. Talati, G.G. Fluet, and S.A. Adamovich, Strategies for incorporating bilateral training into a virtual environment, In: IEEE/ICME International Conference on Complex Medical Engineering, Beijing, China, 2007, pp. 1272-1277.
[12] S.A. Adamovich, Q. Qiu, B. Talati, G.G. Fluet, and A.S. Merians, Design of a virtual reality based system for hand and arm rehabilitation, In: IEEE 10th International Conference on Rehabilitation Robotics, Noordwijk, The Netherlands, 2007, pp. 958-963.
[13] S.A. Adamovich, G.G. Fluet, A.S. Merians, A. Mathai, and Q. Qiu, Incorporating haptic effects into three-dimensional virtual environments to train the hemiparetic upper extremity, IEEE Transactions on Neural Systems and Rehabilitation Engineering (2009), in press.
[14] Q. Qiu, D.A. Ramirez, K. Swift, H.D. Parikh, D. Kelly, and S.A. Adamovich, Virtual environment for upper extremity rehabilitation in children with hemiparesis, In: IEEE 34th Annual Northeast Bioengineering Conference, Providence, RI, 2008.
[15] 5DT, 5DT Data Glove 16 MRI, http://www.5dt.com.
[16] Immersion, CyberGlove, http://www.immersion.com, 2006.
[17] Moog FCS Corporation, Haptic Master, http://www.fcs-cs.com, 2006.
[18] R.Q. Van der Linde, P. Lammertse, E. Frederiksen, and B. Ruiter, The HapticMaster, a new high-
Abstract. Robot therapy seems promising for stroke survivors, but it is unclear
which exercises are most effective, and whether other pathologies may benefit
from this technique. In general, exercises should exploit the adaptive nature of the
nervous system, even in chronic patients. Ideally, exercise should involve multiple
sensory modalities and, to promote active subject participation, the level of
assistance should be kept to a minimum. Moreover, exercises should be tailored to
the different degrees of impairment, and should adapt to changing performance.
To this end, we designed three tasks: (i) a hitting task, aimed at improving the
ability to perform extension movements; (ii) a tracking task, aimed at improving
visuo-motor control; and (iii) a bimanual task, aimed at fostering inter-limb
coordination. All exercises are conducted on a planar manipulandum with two
degrees of freedom, and involve alternating blocks of exercises performed with
and without vision. The degree of assistance is kept to a minimum, and adjusted to
each subject's changing performance. All three exercises were tested on chronic
stroke survivors with different levels of impairment. During the course of each
exercise, movements became faster, smoother, more precise, and required
decreasing levels of assistive force. These results point to the potential benefit of
assist-as-needed training with a proprioceptive component in a variety of
clinical conditions.
Keywords. Robot therapy, stroke, rehabilitation
Introduction
During the last few years, considerable effort has been devoted to using robots for
delivering therapy to persons with motor disabilities [1, 2]. Robotic devices have been
frequently used to enforce passive movements (see Figure 1, left). In fact, it has been
shown that repeated passive exercise may help improve recovery [3, 7]. However, a
number of studies [8, 10] point to techniques that take the adaptive nature of the
nervous system into consideration. Such techniques include active-assisted exercises,
in which the robot guides the arm along a desired path (see Figure 1, right). A variant is
1 Corresponding author: Department of Informatics, Systems and Telematics, University of Genoa, Via
Opera Pia 13, 16145 Genoa, Italy. E-mail: [email protected]
Figure 1. Human-robot interaction: in each scheme the robot and the human exchange a force F(t) and a motion x(t).
Figure 2. Mechanism of assistance regulation: a controller compares target performance with the actual performance of the robot+patient system and regulates the degree of assistance.
Are there optimal ways to provide assistance, and to continuously regulate it? If
this is the case, do they depend on the specific task, or do they obey general
principles valid for a wide range of motor learning problems? While optimal solutions
have been proposed for simple, specific tasks, like lifting a weight [18, 19], it would be
desirable to derive general principles and methods that can (in principle) be applied to
any motor learning/re-learning task.
Recently, Wolbrecht and coll. [15] proposed an adaptive control scheme, in which
a controller negotiates between an error-reducing and an effort-reducing component. This
makes it possible to keep assistance to a minimum and to automatically adapt it to task performance,
while still providing enough assistance to support task completion. This technique does not
explicitly aim at augmenting the degree of voluntary control. Such an increase is
assumed to result from the ability to successfully complete the task.
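The error/effort trade-off just described can be caricatured with a scalar update law in which an error term ramps assistance up while a "forgetting" factor steadily decays it. This is a simplified sketch in the spirit of the scheme described above, not Wolbrecht et al.'s actual controller; the gains and the error model are invented.

```python
# Scalar caricature of an assist-as-needed update with forgetting: the error
# term increases assistance when the subject falls short of the task, while
# the forgetting factor decays assistance so the controller never provides
# more force than the current error justifies. Gains are illustrative only.

def adapt_assist(force, error, forget=0.95, gain=0.4):
    """One per-trial update: decay old assistance, add an error correction."""
    return forget * force + gain * error

# a subject who initially misses the target by 10 cm but improves each trial
force, error = 0.0, 0.10
history = []
for trial in range(50):
    force = adapt_assist(force, error)
    error = max(0.0, error - 0.004)   # voluntary control improves over trials
    history.append(force)
```

As the subject's residual error shrinks, assistance first rises to support task completion and then decays back toward zero, which is the qualitative behavior the text attributes to the adaptive scheme.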
Proprioceptive training
In stroke survivors, motor impairment is frequently associated with degraded
proprioceptive and/or somatosensory functions [20]. Stroke subjects may have
difficulty estimating the position of their arm in the absence of vision. Moreover,
they may be unable to integrate visual and proprioceptive information. Furthermore,
when performing assistive training they may not be capable of detecting the presence,
magnitude and direction of assistive forces. Therefore, impaired proprioception may
affect the recovery of motor functions [21]. Like motor deficits, proprioceptive deficits
may decrease through repeated exercise [22]. The nervous system uses flexible
strategies in integrating visual and proprioceptive information [23]: when both visual
and kinesthetic information of a limb are available, vision is usually the dominant
source of information [24, 27]. As a consequence of visual dominance [28, 30],
proprioceptive impairment may be masked by vision if the latter is available. This
would suggest that in subjects with both proprioceptive and motor impairment,
assistive exercise might be more effective if at least part of the training were performed
without vision. In fact, recent studies demonstrate that visual feedback is not necessary
for learning novel dynamics [31].
In this context, the contribution of robotic devices to neuromotor rehabilitation
may turn out to be crucial. Moreover, different training conditions - either presence or
absence of vision - may have different degrees of efficacy in robot therapy protocols in
individual stroke patients.
performance = b0 + b1·session + b2·force + b3·vision + b4·(session·vision)    (1)
where session is the session number (from 0 to max), force is the intensity of the
assistive force (in N), and vision denotes absence (0) or presence (1) of vision.
Model coefficients may be interpreted as follows: (i) b0 is the baseline
performance level, i.e. the performance at the initial session, with zero assistive force;
this corresponds to the initial degree of voluntary control; (ii) b1 is the between-session
rate of improvement; (iii) b2 is a compliance coefficient, measuring the sensitivity of
performance to the assistance level; (iv) b3 is the vision component, which indicates
the contribution to the performance provided by presence of vision; (v) b4 is the
session vision component, which accounts for the differences in the session effect
that are due to vision. In other words, b4 accounts for the different behaviors, in terms
of between-session improvement, of vision and no-vision trials.
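The fixed-effects part of the model can be written out directly from the coefficient definitions above; the coefficient values in the sketch below are invented purely to illustrate how b3 and b4 act, and are not fitted values.

```python
# Fixed-effects part of the mixed model: b0 baseline, b1 session effect,
# b2 compliance to assistance, b3 vision offset, b4 session-by-vision
# interaction. All coefficient values are made up for illustration.

def predict(session, force, vision, b0=2.0, b1=0.5, b2=0.1, b3=0.8, b4=-0.2):
    """Expected performance for a session number, assistive force (N), and
    vision condition (0 = absent, 1 = present)."""
    return b0 + b1 * session + b2 * force + b3 * vision + b4 * session * vision

# per-session improvement without vision is b1 ...
slope_no_vision = predict(1, 0, 0) - predict(0, 0, 0)
# ... and b1 + b4 with vision: b4 captures the vision-dependent session effect
slope_vision = predict(1, 0, 1) - predict(0, 0, 1)
```

In the full model each coefficient also carries a per-subject random component, as the following paragraph explains; this sketch shows only the population-level part.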
The presence of random factors implies that the above model parameters can be
seen as having a constant component (the same for all subjects), and a random
component (different for each subject), which can be estimated separately.
Testing the significance of the fixed components allows one to test hypotheses such as
whether the therapy produces a significant improvement (this would correspond to
testing for the significance of the session effect). If we consider the whole set of
parameters (i.e., fixed plus random), we can look at inter-subject variability. For
instance, we may look at the relationship between the baseline performance (b0) and the
subsequent improvement (b1) in no-vision sessions. Or, we may look at the difference
in baseline performance between vision and no-vision trials (b3) and the corresponding
difference in improvement between the same trials (b4).
For each particular task we need to define a suitable indicator of performance.
Then, the model can be fitted to the data by using a maximum-likelihood procedure
[33]; for instance, in the R statistical package this is done by the lme function
[34]. The fitting procedure provides estimates for the fixed and the random
components of each model coefficient, as well as the corresponding significance scores.
2. Experiments
We carried out three pilot studies to investigate the potential benefit of active-assisted
training in the recovery of arm movements after stroke. The training included an
explicit proprioceptive component. In all cases, subjects performed their movements
under the influence of robot-generated assistive forces.
We focused on chronic stroke survivors, who were initially unable to complete the
required movements with their affected arm without assistance. The inclusion criteria
were chronic conditions (at least 1 year after stroke) and stable clinical conditions for at
least one month before entering the study. The exclusion criteria were the inability to
understand instructions about the exercise protocol, and the presence of other neurocognitive problems.
In all cases, we used an assist-as-needed protocol, in which the therapist initially
sets the magnitude of the assistive force provided by the robot. Assistance allows
patients to initiate the movements, but in no way imposes the trajectory, the reaching
time, or the speed profile. Whenever patient performance improves, the force magnitude
is reduced in the subsequent blocks of trials, either manually or automatically. Part of
the trials are performed without vision of the arm, so that subjects are forced to rely on
proprioception to estimate the position of their arm and the direction/position of the
target by detecting presence and direction of the assistive force.
All studies use the same robot system, specifically designed for robot therapy and
for the evaluation of motor control and motor adaptation. The robot, Braccio di Ferro
(BdF), is a planar manipulandum with two degrees of freedom [35]. It has a large planar
workspace (an 80×40 cm ellipse) and a rigid parallelogram structure, with direct drive by
two brushless motors, that provides low intrinsic mechanical impedance at the end-effector and full backdrivability. Hand trajectory is measured with high resolution
(0.1 mm) through optical encoders, and an impedance controller modulates (from
fractions of 1 N up to 50 N) the force transmitted to the hand. Therefore, motion of the
hand is not imposed, but results from the interaction between the forces generated by
the robot and the forces generated by the patient's muscles. In all experiments, subjects sat
in a chair, with their chest and wrist restrained, and grasped the robot handle. A light,
soft support was connected to the forearm to allow low-friction sliding on the
horizontal surface of a table. In this way, only the shoulder and the elbow were allowed
to move, and motion was restricted to the horizontal plane, with no influence of gravity.
The height of the seat was adjusted so that the arm was kept approximately
horizontal, and its position was adjusted so that the farthest targets
could be reached with an almost fully extended arm. A 19″ LCD screen was positioned in
front of the patients, at a distance of about 1 m, to display the positions of the hand
and of the targets.
Due to the small size of the subject population, these studies are merely intended
as feasibility studies, aimed at demonstrating the proposed approach and the related
analytical tools.
2.1. Hitting Task
This task [36] focuses specifically on facilitating the active execution of arm extension
movements. This is motivated by the observation that many stroke subjects are unable
to actively perform these movements, particularly in specific directions. In contrast,
wide inward movements are dominated by the flexion pattern that characterizes this
pathology. The task consists of hitting a set of targets, arranged in the horizontal plane
(Figure 3, top) according to three layers: inner (A, 3 targets), middle (B, 3 targets), and
outer (C, 7 targets). Reaching the outer targets requires nearly full extension of the arm.
Target sequences were generated according to the scheme A→C→B→A. In
this way, outward movements had to be performed in one step (A→C), whereas inward
movements were performed in two steps (C→B and B→A).
When a target was presented to the subject, the robot generated an assistive force F,
directed toward the target position xT. The assistive force was delivered gradually, with a ramp-and-hold profile R(t) that had a rise time of one second. The force was switched off as
soon as the subject hit the target. The next target was presented after a pause of 1 s.
Assistance also had a speed-dependent component, aimed at improving the interaction
between the subject and the robot. A virtual wall also provided additional haptic
feedback. The force generated by the robot is summarized by Eq. 2:
F(t) = FA · (xT − xH) / ‖xT − xH‖ · R(t) − b·vH − kW·(xH − xW)    (2)
where xT is the vector that identifies the target position in the plane, xH and vH are,
respectively, the hand position and speed vectors; b (12 Ns/m) is the viscous coefficient,
and kW (1000 N/m) is the stiffness coefficient of the wall. xW indicates the projection of
hand position on the wall. The difference (xH - xW ) indicates the degree of penetration
of the hand inside the wall, and is zero outside the wall. The protocol started with a test
phase, during which individual subjects became familiar with the apparatus and in
which a physical therapist selected the minimum force level FA that evoked a
functional response, i.e. a (possibly incomplete) movement in the intended direction.
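Eq. (2) translates directly into code. The sketch below uses the b and kW values quoted in the text; FA is chosen arbitrarily here (in the actual protocol it was selected per subject by the therapist), and the linear ramp is an assumption consistent with the stated one-second rise time.

```python
import math

# Sketch of the assistive force of Eq. (2) for the planar hitting task.
# b = 12 Ns/m and kW = 1000 N/m are the values quoted in the text;
# FA = 10 N and the linear ramp shape are illustrative assumptions.

def ramp(t, rise_time=1.0):
    """Ramp-and-hold profile R(t): rises linearly to 1 over `rise_time`, then holds."""
    return min(max(t / rise_time, 0.0), 1.0)

def assistive_force(t, x_t, x_h, v_h, x_w, FA=10.0, b=12.0, kW=1000.0):
    """Planar force (N): target attraction, viscous damping, wall repulsion.

    x_t, x_h, v_h, x_w are 2-tuples (target position, hand position, hand
    velocity, projection of the hand on the wall); (x_h - x_w) is the wall
    penetration and is (0, 0) when the hand is outside the wall.
    """
    dx = (x_t[0] - x_h[0], x_t[1] - x_h[1])
    dist = math.hypot(dx[0], dx[1])
    unit = (dx[0] / dist, dx[1] / dist) if dist > 0 else (0.0, 0.0)
    fx = FA * unit[0] * ramp(t) - b * v_h[0] - kW * (x_h[0] - x_w[0])
    fy = FA * unit[1] * ramp(t) - b * v_h[1] - kW * (x_h[1] - x_w[1])
    return fx, fy

# stationary hand at the origin, target 0.3 m ahead, halfway up the ramp
fx, fy = assistive_force(0.5, (0.0, 0.3), (0.0, 0.0), (0.0, 0.0), (0.0, 0.0))
```

With the hand stationary and outside the wall, only the ramped attraction term is active; once the hand moves, the viscous term subtracts a force proportional to hand speed.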
One block of trials included repetitions of the A→C→B→A sequence with
different targets in random order, for a total of 3×3×7 = 63 movements. Each block of
trials was performed either with or without vision. In the latter case, the subjects were
blindfolded, but could still feel the target through proprioception. The first training
session started with two blocks of trials (vision, no-vision), using the same level of
force determined in the test session (F1). After a short rest, the therapist considered the
level of performance and asked the subject about fatigue. The decision could be 1) to
terminate the session, 2) to continue with the same force level, or 3) to continue with a
reduced force F2 (10-20% less than F1). The procedure was iterated until the decision to
stop was agreed upon by the patient and the therapist. In the following sessions, the training
always started at F1, and then, if possible, the level of assistance was decreased. If subjects
reached a level of assistance with a force below 4 N, the no-vision blocks were
Figure 3. The targets are arranged on three layers: A, B, C. The C layer is just in front of a virtual wall.
The distance between adjacent layers was 10 cm. A target was considered reached when its distance
from the hand was less than 2 cm.
corresponding time courses of assistive force and hand speed profile. Figure 3 shows
the trajectories in a typical subject. In early sessions, the outward movement (AC) is
segmented into a sequence of sub-movements. The first sub-movement covers only
part of the total distance, thus leaving a residual error which has to be corrected by
additional movements. The motor performance in late training sessions (Figure 3,
bottom right) suggests a visible improvement. At the same time, the level of robot
assistance could be reduced from 12 N to 6 N; movement duration was shorter, and the
number of sub-movements was reduced. The residual error after the first sub-movement decreased as well. In the overall population of subjects, the initial level of
assistance ranged between 25 N and 5 N, and was generally higher for patients who
initially had lower Fugl-Meyer scores (arm part).
To account for the joint effects of session and assistance, we applied the mixed-effects model (see Eq. 1) to the number of sub-movements observed during the outward
phase of each trial. The level of assistance had a significant effect on the number
of sub-movements (p = 0.0026). This is not surprising: the result merely confirms that
assistance has a beneficial effect on performance. The effect of session was also highly
significant (p < 0.0001). In fact, we found a negative b1 (session) coefficient (systematic
part): −0.369 ± 0.098 sub-movements per session. This indicates that the observed effect
of session corresponds to a reduction of sub-movements. The model may also be used
to assess the session effect on each individual subject (Figure 4, left). We found a
strong negative correlation (correlation coefficient: −0.75) between baseline
performance, b0, and the change over sessions, b1: subjects with better initial
performance are closer to maximum performance and therefore improve less. However,
irrespective of the initial conditions, all subjects have a potential for improvement.
With regard to the effect of vision, we found no significant vision or session × vision
effects. This means the presence of vision did not have a systematic effect. However,
the model allows us to investigate the effect of vision on
Figure 4. Effect of robot training on the number of sub-movements. Left: Baseline performance vs
change over sessions. Improvement is greater in subjects with a greater initial impairment. Right:
Different subjects exhibit different impairments with and without vision, but in all cases the effect of
training is to equalize their vision-no vision performance. Dots indicate initial performance, lines the
change over sessions.
individual subjects. A crucial question is how the different subjects compare in terms
of their initial performance with eyes open or eyes closed. Another question, similar to
the one we asked before for the session effect, is whether there is a systematic
relationship between the differential behavior in vision and no-vision baseline behavior
and the differential change in vision and no-vision trials. The former question can be
addressed by comparing, for each subject, the baseline performance with vision (b0+b3)
and without (b0). Figure 4 (right) clearly indicates that some subjects (namely, S1 and
S3) have a better initial performance with eyes closed (data points above the diagonal
line). In contrast, other subjects (S8, S9) have better performance with eyes open (data
points below the diagonal). The remaining subjects have similar performance with both
sensory modalities.
The difference in the baseline performance with and without vision (i.e., parameter
b3) and the relative difference in the performance change over sessions (i.e., parameter
b4) have a strong negative correlation (correlation coefficient: −0.96). This means that
subjects with a more severe impairment in the eyes-closed condition (negative b3) exhibit a
greater improvement in eyes-closed trials (negative b4), and vice versa.
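The kind of analysis described above can be sketched with the MixedLM class from statsmodels, fitted on synthetic data. This is a hedged sketch, not the authors' exact model (their Eq. 1 is not reproduced in this excerpt): session and vision enter as fixed effects with their interaction, and each subject gets a random intercept.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for subj in range(9):                      # 9 subjects, as in Figure 4
    b0 = 8 + rng.normal(0, 2)              # subject-specific baseline (number of peaks)
    for session in range(10):
        for vision in (0, 1):
            # negative session effect: sub-movements decrease over training
            peaks = b0 - 0.4 * session + 0.5 * vision + rng.normal(0, 0.5)
            rows.append(dict(subject=subj, session=session,
                             vision=vision, peaks=peaks))
df = pd.DataFrame(rows)

# Fixed effects: session, vision, session x vision; random intercept per subject
model = smf.mixedlm("peaks ~ session * vision", df, groups="subject")
fit = model.fit()
print(fit.params["session"])   # estimated session slope: negative, close to -0.4
```

With per-subject random effects, the quantities called b0 and b1 in the text correspond to the subject-specific intercept and session slope extracted from such a fit.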
As regards FMA scores, we found a statistically significant change (p = 0.00035,
pairwise t-test) from 15 ± 13 to 20 ± 13, corresponding to an average 4.8 ± 2.4
improvement. This is in line with previous studies [1], which report an average
improvement of 3.7 ± 0.5. Evaluation of the FMA at follow-up resulted in a substantial
preservation of the improvement (FMA = 20 ± 13, no significant difference from that
assessed at the end of treatment). Four subjects even displayed an improvement in their
FMA score. No change was observed in the subjects' Ashworth scores.
2.2. Tracking Task
In this task [40], subjects had to continuously track a visual target moving along a
figure-of-eight trajectory (length = 90 cm, period = 15 s). The target was
represented visually as a small red circle and haptically as an attractive force field
defined by F = K d, where d is the distance of the hand from the target (Figure 5).
The current position of the hand was continuously displayed (as the picture of a small
car). For each subject, the scale factor K was initially selected as the minimum level
capable of inducing the initiation of movement; the range of the assistive force was 3-30
N (from the least to the greatest impairment). The moving target stopped if its distance
from the cursor was greater than 2 cm. The experimental protocol was organized into
blocks of 10 trials, each including 10 repetitions of the figure-of-eight. Within
each training session, two blocks of trials were alternated, with eyes open and eyes
closed. Within each block, half of the trials were clockwise and half were
counterclockwise. One session lasted approximately 45 minutes. At the end of each
block, the robot estimated a performance score, based on the number of stops and the
overall movement duration. If the score exceeded a threshold, the level of assistance
was reduced. Unlike the previous exercise, assistance here is automatically adapted to
the observed performance (see Figure 2).
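The automatic adaptation rule can be sketched as follows. The scoring function and the 10% reduction step are assumptions for illustration; the text states only that the score is based on the number of stops and overall movement duration, and that assistance is reduced when the score exceeds a threshold.

```python
def adapt_assistance(k, n_stops, duration, k_min=3.0, threshold=0.8,
                     reduction=0.9, max_stops=10, max_duration=150.0):
    """Sketch of the performance-based adaptation of the assistive gain K.

    The score in [0, 1] is hypothetical: fewer stops and a shorter
    overall movement duration give a higher score. When the score
    exceeds the threshold, K is reduced (here by 10%, an assumption),
    never dropping below the 3 N lower bound reported in the text.
    """
    score = 1.0 - 0.5 * (min(n_stops, max_stops) / max_stops
                         + min(duration, max_duration) / max_duration)
    if score >= threshold:
        k = max(k_min, k * reduction)
    return k, score
```

Applied once per block, such a rule keeps assistance minimal while performance is good and leaves it unchanged otherwise.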
The therapy cycle included up to 10 sessions (2-3 sessions/week). Improvements
were evaluated with clinical scales (FMA, Ashworth) and movement indicators
(average speed, duration, tracking error, stop time). We used the statistical model
described in Section 1.
Figure 5. Top: Visual tracking task. Bottom: trajectories at early and late training, for vision (left) and
no-vision trials (right).
closed, whereas other subjects (S1, S4, S5, S9) perform better with eyes open. The
remaining subjects have similar performance with both sensory modalities.
We found a negative correlation (r = −0.27) between the initial vision/no-vision
performance difference and the difference in the vision/no-vision improvement. This
means that subjects with an initially more severe impairment with eyes closed exhibited
a greater improvement in eyes-closed trials, and vice versa. Improved performance is
also reflected in the increased FMA score (from 23 ± 14 to 27 ± 15, corresponding to an
average 3.4 ± 1.9 increase). The level of assistance was reduced on average by 28%.
As in the previous experiment, subjects consistently improve their performance.
Moreover, proprioceptive problems - revealed by a discrepancy between initial
performance with eyes open and closed - tend to reduce over training.
2.3. Bi-manual training
Upper limb robot therapies for stroke hemiparesis primarily focus on the paretic limb,
with unilateral exercises to improve motor control of the shoulder and elbow [1].
However, many daily tasks require the coordination of both hands. This points to a
possible benefit of protocols for upper limb robotic rehabilitation that involve the
cooperation of both hands. Few studies have examined the efficacy of bilateral training
in the recovery of paretic limb movements post-stroke [3, 4, 41]. These studies showed
a positive effect on joint power of the affected shoulder and elbow muscles, although
motor control improved to a lesser extent. In these cases, however, the two arms were
not required to cooperate but, rather, to interact in a master-slave fashion.
Here we propose a robot-mediated cooperative exercise, in which subjects make
forward and backward movements with both hands while grasping the handles of a
horizontal bar. Subjects are required to keep the bar horizontal. In this
way, the plegic and non-plegic limbs are required to coordinate and balance their actions
in order to achieve the movement goal. Bi-manual cooperation may be seen as a form
of self-regulated assistance. The non-plegic limb contributes to the forward and
backward translation of the bar, but the contributions of both arms must be balanced in
Figure 6. The bi-manual task. Left: experimental apparatus. Right: assistive force fields.
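A minimal sketch of how such self-balancing assistance could be implemented is given below. The specific force fields of Figure 6 are not described in this excerpt; the shared attractive pull and the corrective term on bar rotation, as well as the gains, are assumptions.

```python
def bimanual_forces(y_left, y_right, y_target, k_assist=5.0, k_tilt=50.0):
    """Hypothetical bi-manual assistive force fields (sketch only).

    y_left, y_right : forward positions of the two handles (m)
    y_target        : target forward position of the bar midpoint (m)
    k_assist        : attractive gain toward the target (N/m, assumed)
    k_tilt          : gain penalizing bar rotation (N/m, assumed)
    """
    mid = 0.5 * (y_left + y_right)          # bar midpoint along the movement axis
    tilt = y_left - y_right                 # deviation from the required orientation
    f_common = k_assist * (y_target - mid)  # shared pull toward the target
    # corrective terms push the leading handle back and the lagging one forward
    f_left = f_common - 0.5 * k_tilt * tilt
    f_right = f_common + 0.5 * k_tilt * tilt
    return f_left, f_right
```

The corrective term captures the cooperative character of the task: any imbalance between the two arms generates opposing forces on the handles, so the goal can only be reached when both limbs contribute in a balanced way.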
3. Discussion
We have presented three examples of active-assisted training protocols, aimed at the
rehabilitation of chronic stroke survivors. These exercises have a number of common
features: (i) problem-solving aspects and a sensory-rich experience; (ii) a mechanism
that regulates the degree of assistance such that it is kept to a minimum; (iii) different
blocks of trials are performed with and without vision, in alternation.
In all three experiments, analysis of performance suggests that all patients
Figure 7. Performance of a typical stroke subject in the bi-manual task, at the beginning (left) and end
(right) of the training protocol. Top: time course of vertical movements. Middle: bar orientation. Bottom:
Attractive (assistive) and resistive forces.
exhibited an increase in the amount of voluntary control, even though some of them
could not achieve complete recovery of autonomous movements. In particular, we
found that proprioceptive training (i.e., training with closed eyes) is beneficial to
patients with abnormal proprioception. Moreover, training different sensory modalities
separately may improve overall recovery.
These results highlight a number of key points, which will need to be accounted
for when trying to improve the efficacy of robots as therapeutic devices. First, robot
therapy should rely on a better understanding of the mechanisms underlying motor
learning and re-learning. In particular, it is crucial to identify optimal ways to provide
assistance and to regulate it. Second, robots may be beneficial to neuromotor
rehabilitation not only for their potential for improving motor control, but also because
they may help to train multi-sensory and sensorimotor integration. Robots are capable
of delivering interactive and repeatable sensorimotor exercises and continuously
monitoring the actual motor performance. They can also be used to simulate new and
controlled haptic environments. Third, therapy robots should ideally possess an
ability to continuously estimate subjects' amount of voluntary control and to regulate
assistance accordingly. Ultimately, during recovery subjects would learn from robots,
and robots would learn from patients.
References
[1] G.B. Prange, M.J. Jannink, C.G. Groothuis-Oudshoorn, H.J. Hermens, and M.J. Ijzerman, Systematic review of the effect of robot-aided therapy on recovery of the hemiparetic arm after stroke, Journal of Rehabilitation Research and Development 43 (2006), 171-84.
[2] G. Kwakkel, B.J. Kollen, and H.I. Krebs, Effects of Robot-Assisted Therapy on Upper Limb Recovery After Stroke: A Systematic Review, Neurorehabilitation and Neural Repair (2007).
[3] C.G. Burgar, P.S. Lum, P.C. Shor, and H.F. Machiel Van der Loos, Development of robots for rehabilitation therapy: the Palo Alto VA/Stanford experience, Journal of Rehabilitation Research and Development 37 (2000), 663-73.
[4] P.S. Lum, C.G. Burgar, P.C. Shor, M. Majmundar, and M. Van der Loos, Robot-assisted movement training compared with conventional therapy techniques for the rehabilitation of upper-limb motor function after stroke, Archives of Physical Medicine and Rehabilitation 83 (2002), 952-9.
[5] H.I. Krebs, N. Hogan, M.L. Aisen, and B.T. Volpe, Robot-aided neurorehabilitation, IEEE Transactions on Rehabilitation Engineering 6 (1998), 75-87.
[6] B.T. Volpe, H.I. Krebs, N. Hogan, L. Edelsteinn, C.M. Diels, and M.L. Aisen, Robot training enhanced motor outcome in patients with stroke maintained over 3 years, Neurology 53 (1999), 1874-6.
[7] D.G. Kamper, A.N. McKenna-Cole, L.E. Kahn, and D.J. Reinkensmeyer, Alterations in reaching after stroke and their relation to movement direction and impairment severity, Archives of Physical Medicine and Rehabilitation 83 (2002), 702-7.
[8] C.D. Takahashi and D.J. Reinkensmeyer, Hemiparetic stroke impairs anticipatory control of arm movement, Experimental Brain Research 149 (2003), 131-40.
[9] J.L. Patton and F.A. Mussa-Ivaldi, Robot-assisted adaptive training: custom force fields for teaching movement patterns, IEEE Transactions on Rehabilitation Engineering 51 (2004), 636-46.
[10] J.L. Patton, M.E. Stoykov, M. Kovic, and F.A. Mussa-Ivaldi, Evaluation of robotic training forces that either enhance or reduce error in chronic hemiparetic stroke survivors, Experimental Brain Research 168 (2006), 368-383.
[11] K.P. Kording, J.B. Tenenbaum, and R. Shadmehr, The dynamics of memory as a consequence of optimal adaptation to a changing body, Nature Neuroscience 10 (2007), 779-86.
[12] R.A. Schmidt and T.D. Lee, Motor Control And Learning: A Behavioral Emphasis, Fourth ed. Champaign, Illinois: Human Kinetics, 2005.
[13] E. Todorov, R. Shadmehr, and E. Bizzi, Augmented Feedback Presented in a Virtual Environment Accelerates Learning of a Difficult Motor Task, Journal of Motor Behavior 29 (1997), 147-158.
[14] E. Todorov and M.I. Jordan, Optimal feedback control as a theory of motor coordination, Nature Neuroscience 5 (2002), 1226-35.
[15] E.T. Wolbrecht, V. Chan, D.J. Reinkensmeyer, and J.E. Bobrow, Optimizing Compliant, Model-Based
Robotic Assistance to Promote Neurorehabilitation, IEEE Transactions on Rehabilitation Engineering
(2008).
[16] J.L. Emken, R. Benitez, A. Sideris, J.E. Bobrow, and D.J. Reinkensmeyer, Motor adaptation as a
greedy optimization of error and effort, Journal of Neurophysiology 97 (2007), 3997-4006.
[17] J.L. Emken, R. Benitez, and D.J. Reinkensmeyer, Human-robot cooperative movement training:
learning a novel sensory motor transformation during walking with robotic assistance-as-needed,
Journal of NeuroEngineering and Rehabilitation 4 (2007), 8.
[18] D. Aoyagi, W.E. Ichinose, S.J. Harkema, D.J. Reinkensmeyer, and J.E. Bobrow, A robot and control
algorithm that can synchronously assist in naturalistic motion during body-weight-supported gait
training following neurologic injury, IEEE Transactions on Rehabilitation Engineering 15 (2007), 387-400.
[19] D.J. Reinkensmeyer, E. Wolbrecht, and J. Bobrow, A Computational Model of Human-Robot Load
Sharing during Robot-Assisted Arm Movement Training after Stroke, Conference Proceedings - IEEE
Engineering in Medicine and Biology Society 1 (2007), 4019-23.
[20] S. Tyson, M. Hanley, J. Chillala, A.B. Selley, and R.C. Tallis, Sensory Loss in Hospital-Admitted
People With Stroke: Characteristics, Associated Factors and Relationship With Function,
Neurorehabilitation and Neural Repair, 2007.
[21] L.M. Carey, T.A. Matyas, and L.E. Oke, Sensory loss in stroke patients: effective training of tactile and
proprioceptive discrimination, Archives of Physical Medicine and Rehabilitation 74 (1993), 602-11.
[22] S. Dechaumont-Palacin, P. Marque, X. De Boissezon, E. Castel-Lacanal, C. Carel, I. Berry, J. Pastor,
J.F. Albucher, F. Chollet, and I. Loubinoux, Neural Correlates of Proprioceptive Integration in the
Contralesional Hemisphere of Very Impaired Patients Shortly After a Subcortical Stroke: An fMRI
Study, Neurorehabil Neural Repair (2007).
[23] S.J. Sober, and P.N. Sabes, Flexible strategies for sensory integration during motor planning, Nature
Neuroscience 8 (2005), 490-7.
[24] J.R. Flanagan, and A.K. Rao, Trajectory adaptation to a nonlinear visuomotor transformation: evidence
of motion planning in visually perceived space, Journal of Neurophysiology 74 (1995), 2174-8.
[25] D.M. Wolpert, Z. Ghahramani, and M.I. Jordan, Are arm trajectories planned in kinematic or dynamic
coordinates? An adaptation study, Experimental Brain Research 103 (1995), 460-70.
[26] D.M. Wolpert, Z. Ghahramani, and M.I. Jordan, Perceptual distortion contributes to the curvature of
human reaching movements, Experimental Brain Research 98 (1994), 153-6.
[27] J.B. Smeets, J.J. van den Dobbelsteen, D.D. de Grave, R.J. van Beers, and E. Brenner, Sensory
integration does not lead to sensory calibration, Proceedings of the National Academy of Sciences U S A
103 (2006), 18781-6.
[28] M. Botvinick, and J. Cohen, Rubber hands 'feel' touch that eyes see, Nature 391 (1998), 756.
[29] R.J. van Beers, A.C. Sittig, and J.J. Denier van der Gon, The precision of proprioceptive position sense,
Experimental Brain Research 122 (1998), 367-77.
[30] R.J. van Beers, A.C. Sittig, and J.J. Denier van der Gon, How humans combine simultaneous
proprioceptive and visual position information, Experimental Brain Research 111 (1996), 253-61.
[31] D.W. Franklin, U. So, E. Burdet, and M. Kawato, Visual feedback is not necessary for the learning of
novel dynamics, PLoS ONE 2 (2007), e1336.
[32] R. Colombo, F. Pisano, S. Micera, A. Mazzone, C. Delconte, M.C. Carrozza, P. Dario, and G. Minuco,
Robotic techniques for upper limb evaluation and rehabilitation of stroke patients, IEEE Transactions
on Rehabilitation Engineering 13 (2005), 311-24.
[33] N.M. Laird, and J.H. Ware, Random-Effects Models for Longitudinal Data, Biometrics 38 (1982), 963-974.
[34] D.M. Bates, and J.C. Pinheiro, lme and nlme - Mixed-Effects Methods and Classes for S and S-PLUS,
Version 3.0., Madison: Bell Labs, Lucent Technologies and University of Wisconsin, 1998.
[35] M. Casadio, P.G. Morasso, V. Sanguineti, and V. Arrichiello, Braccio di Ferro: a new haptic
workstation for neuromotor rehabilitation, Technology Health Care 13 (2006), 1-20.
[36] M. Casadio, P. Morasso, V. Sanguineti, and P. Giannoni, Impedance controlled, minimally assistive
robotic training of severely impaired hemiparetic patients, 1st IEEE / RAS-EMBS International
Conference on Biomedical Robotics and Biomechatronics, Pisa, Italy, 2006.
[37] D.J. Gladstone, C.J. Danells, and S.E. Black, The fugl-meyer assessment of motor recovery after stroke:
a critical review of its measurement properties, Neurorehabilitation and Neural Repair 16 (2002), 232-40.
[38] T. Platz, C. Pinkowski, F. van Wijck, I.H. Kim, P. di Bella, and G. Johnson, Reliability and validity of
arm function assessment with standardized guidelines for the Fugl-Meyer Test, Action Research Arm
Test and Box and Block Test: a multicentre study, Clinical Rehabilitation 19 (2005), 404-11.
[39] R.W. Bohannon, and M.B. Smith, Interrater reliability of a modified Ashworth scale of muscle spasticity, Physical Therapy 67 (1987), 206-7.
[40] E. Vergaro, M. Casadio, V. Squeri, P. Giannoni, P. Morasso, and V. Sanguineti, Robot-therapy of hemiparetic patients with a minimally assistive strategy for tracking movements, 3rd International Symposium on Measurement, Analysis, and Modeling of Human Functions, Lisbon, Portugal, 2007.
[41] S. Hesse, H. Schmidt, C. Werner, and A. Bardeleben, Upper and lower extremity robotic devices for rehabilitation and for studying motor control, Current Opinion in Neurology 16 (2003), 705-10.
Figure 1. Schematic representation of a wearable system to monitor individuals in the home and community
settings.
for instance, via a web-based application. The system is recharged at night via a
docking station that allows for fast communication with a remote clinical center by
means of an access point, thus facilitating the transfer of raw data and the performance of
maintenance tasks. This technology is bound to enable new telemedicine applications
[2] and to facilitate the implementation of the medical home concept [3].
Wearable devices can be divided into two categories: 1) garments with embedded
sensors and 2) body sensor networks. The idea of embedding sensors into garments
was first pursued by a research team at Georgia Institute of Technology led by
Dr. Sundaresan Jayaraman [4, 6]. Research work by this team eventually led to a
product referred to as Smart Shirt (Figure 2). The Smart Shirt is a wearable health
monitoring system by Sensatex, Inc., USA (http://www.sensatex.com/) that monitors
heart rate, body temperature, and motion of the trunk. The monitoring system is
designed as an undershirt with various sensors embedded within it. Data are transmitted
to a pager-size device attached to the waist portion of the shirt where it is sent via a
wireless gateway to the Internet and routed to a data server where the actual monitoring
occurs. The Smart Shirt incorporates a patented technology named 'Wearable
Motherboard', which combines optical fibers, a data bus, a microphone, other
sensors, and a multifunction processor, all embedded in a basic textile grid that can be
laundered.
Figure 2. The Smart Shirt by Sensatex, Inc., USA, a garment with embedded sensors for physiological
function monitoring. (Reproduced with permission)
Figure 3. The MIThril is a platform of sensors and wearable computing technology to gather data in the field.
A description of the platform can be found at http://www.media.mit.edu/wearables/mithril/. (Reproduced
with permission)
Figure 4. Wearable sensor patch designed by Gisela Lin and William Tang as part of a project carried out at
NASA's Jet Propulsion Laboratory. (Reproduced with permission)
In the business sector, several companies took inspiration from the seminal work
by researchers at NASA's Jet Propulsion Laboratory and developed systems
based on body sensor networks for commercialization. Among others, FitLinxx
(http://www.fitlinxx.com/brand.htm) has recently put on the market an ultra low-power
wireless personal area network that provides two-way radio communication that can
control and respond to sensors and actuators, as well as provide wireless connectivity
to the Internet via devices such as a cell phone, personal digital assistant or PC. Based
on this platform, FitLinxx has developed products for health monitoring that integrate
heart rate, blood pressure, a pedometer, and body weight data gathered using a special
weight
scale.
Similar
products
are
offered
by
BodyMedia
Inc
http://www.bodymedia.com/. BodyMedias products are centered on the SenseWear
Armband, a sleek, wireless, wearable body monitor that enables continuous
physiological and lifestyle data collection outside the lab environment. Worn on the
back of the upper arm, it utilizes a unique combination of sensors and technologies that
allows one to gather raw physiological data such as movement, heat flow, skin
temperature, near body ambient temperature, heart rate, and galvanic skin response.
The SenseWear Armband contains a 2-axis accelerometer, temperature sensors for
monitoring heat fluctuation, skin temperature, near body ambient temperature, galvanic
skin response, and heart rate received from a Polar Monitor system. The SenseWear
Armband can be worn continuously for up to 3 days without recharging the battery (at
default sampling rate settings), and stores up to 5 days of continuous physiological and
lifestyle data. Research software is available to offer audio and tactile feedback for
reminders, targets, and alerts. Its ability to provide 2-way communication makes the
SenseWear Armband a hub for collecting data from other third-party products such as a
weight scale or a blood pressure cuff. The manufacturer promotes the product as
eliminating the need for researchers and clinicians to administer and apply
cumbersome sensors to their research subjects.
The research and development work summarized above is bound to facilitate the
development of new clinical applications in telemedicine [2]. However, a fundamental
limitation that hinders the application in rehabilitation of all commercially available
systems and the majority of the body sensor networks developed in the research field is
that most of the available systems are only suitable for managing data gathered at a
sampling rate of a few Hz per channel. The ideal sampling rate for applications in
rehabilitation ranges from 100 Hz for biomechanical data to 1 kHz for surface EMG
data. To our knowledge, only two research groups have focused on the development of
body sensor networks with the performance required by clinical applications in
rehabilitation. Dr. Emil Jovanov at University of Alabama [16] and Dr. Matt Welsh at
Harvard University [17, 18] developed body sensor networks that provide adequate
performance for application in rehabilitation. These groups developed complex data
management architectures with buffering and data transmission that occurs both in real
time as well as offline to meet the specifications of applications in rehabilitation. The
approaches developed by these researchers achieve high bandwidth in ways compatible
with the low-power consumption specification that needs to be met in order to allow
one to implement a wearable system for monitoring patients over days.
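To put these sampling-rate requirements in perspective, a back-of-the-envelope calculation is useful; the channel counts and the 2-byte sample width below are assumptions for illustration, not figures from the text.

```python
def daily_data_volume(channels, rate_hz, bytes_per_sample=2, hours=24):
    """Raw data volume in megabytes for one day of continuous monitoring."""
    return channels * rate_hz * bytes_per_sample * hours * 3600 / 1e6

# e.g. 8 accelerometer channels at 100 Hz (biomechanical data)
biomech = daily_data_volume(8, 100)       # ~138 MB/day
# e.g. 4 surface-EMG channels at 1 kHz
emg = daily_data_volume(4, 1000)          # ~691 MB/day
```

Volumes of this order explain why the cited architectures combine on-node buffering with a mix of real-time and offline transmission: streaming everything continuously would conflict with the low-power budget of a multi-day wearable system.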
Researchers and clinicians with a focus on rehabilitation are demonstrating a
growing interest in the adoption of wearable technology when monitoring individuals
in the home and community settings is relevant to optimize outcomes of rehabilitation
interventions. Three major categories of application of wearable technology are
emerging: 1) with the focus on monitoring motor activities via pedometers or sensor
networks that go beyond the use of simple pedometers; 2) with the emphasis on
medication titration and, more generally, applications in which the severity of
symptoms (e.g. in motor disorders) is assessed and a clinical intervention is adjusted
accordingly; and 3) with the focal point on the assessment of the outcomes of
therapeutic interventions (i.e. physical and occupational therapy) with potential for
gathering information suitable to adjust the intensity and modality of the prescribed
therapeutic exercises. The following three sections summarize recent work by our team
and others in the above-described three areas of development of new applications of
wearable technology in rehabilitation.
average six-minute walking distance was 1170 ± 304 feet. In Part I of this pilot study,
our aim was to automatically identify three exercises comprising the aerobic portion of
the pulmonary rehabilitation exercise program from a continuous data record: walking
on a treadmill, cycling on a stationary bike, and cycling of the upper extremities on an
arm ergometer. Identification was based on the output of a neural network trained with
examples of accelerometer data corresponding to each of the exercise conditions. We
demonstrated that accurate and reliable identification of the exercise activities could be
achieved, thus enabling monitoring of patients' compliance with a prescribed exercise
regimen outside of the rehabilitation environment. For a misclassification rate of 5%,
the sensitivity of the classifier was remarkably high, ranging from 93 to 98% across
subjects. Details concerning the study are provided in Sherrill et al [25] and Moy et al
[26].
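The identification step can be sketched as follows. This is a toy sketch on synthetic features, not the authors' trained network: scikit-learn's MLPClassifier stands in for their neural network, and the feature construction is an assumption.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
labels = ["treadmill", "stationary bike", "arm ergometer"]

# Synthetic feature vectors (e.g. per-axis RMS and dominant frequency of
# accelerometer epochs); each exercise gets its own cluster centre.
centres = rng.normal(0, 5, size=(len(labels), 6))
X = np.vstack([c + rng.normal(0, 0.5, size=(100, 6)) for c in centres])
y = np.repeat(labels, 100)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(Xtr, ytr)
accuracy = clf.score(Xte, yte)
```

On well-separated classes such a classifier reaches the high sensitivities reported above; the practical difficulty lies in the feature extraction and in generalizing across patients, as discussed in Part II.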
In Part II of this preliminary investigation, we extended the protocol to include
typical activities such as climbing stairs, walking indoors, doing household chores, etc.
Identifying these types of activities is relevant for assessing patients' overall mobility.
We collected data by providing patients with a script as described in Sherrill et al
[22]. Due to the physical limitations of the individuals with COPD, it was not feasible to
gather more than a minute of data for tasks such as ascending stairs. Since the number
of features derived from the accelerometer data far exceeded the number of data
segments available, a neural network approach was not considered to be appropriate.
We therefore envisioned compiling data from a large group of patients and performing
all identifications based on an existing database of examples rather than custom-training the classifier for each individual. In order for this to be a workable solution, the
variability across tasks must exceed the variability within tasks due to different
individuals. To show that this was a feasible approach, we sought ways to visualize the
relationships among clusters of data points corresponding to the conditions of interest.
We combined principal components analysis (PCA) and Sammon's mapping. First, a
PCA transformation was applied, and the first 15 PCs (accounting for 90% of the total
variance) were retained. Then, the Sammon map was computed on the transformed
data. Results were viewed as a scatter plot, color-coded by task as shown in Figure 5
utilizing a gray scale. A clear division is evident among tasks. Techniques to assess the
quality of the clusters were then utilized, as reported in Sherrill et al [22], thus
allowing us to conclude that the motor activities of interest can be classified based on
accelerometer data recorded from upper and lower extremities.
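The first step of this visualization pipeline can be sketched with scikit-learn. Sammon's mapping has no scikit-learn implementation, so metric MDS is used here as a stand-in for the 2-D embedding; the data shape is synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
X = rng.normal(size=(140, 60))            # e.g. 140 epochs x 60 features

# Step 1: PCA, keeping enough components for 90% of the total variance
pca = PCA(n_components=0.90)
X_pca = pca.fit_transform(X)

# Step 2: 2-D embedding of the PCA scores for the scatter plot
# (metric MDS as a stand-in for Sammon's map)
mds = MDS(n_components=2, n_init=4, random_state=0)
X_2d = mds.fit_transform(X_pca)
```

Passing a float to `n_components` makes PCA select the smallest number of components whose cumulative explained variance reaches that fraction, mirroring the "first 15 PCs, 90% of the variance" criterion in the text.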
Figure 5. Clustering of features derived from accelerometer data recorded while patients with COPD
performed various motor tasks. Results are plotted for 7 tasks (fold laundry, arm ergometer, climb stairs,
sweep floor, treadmill, stationary bike, walk in hallway) performed by the 5 patients who participated in
the study. Each task is represented by 20 randomly selected data points per patient, projected in two
dimensions using the Sammon map algorithm. Each task occupies a distinct subregion of the data space,
forming distinct clusters. Cluster positions are suggested by circles overlaid on the plots. Axes are the
result of abstract transformations on normalized data; therefore, units are not shown.
a motor fluctuation cycle (i.e. interval between two medication intakes) and the
occurrence of dyskinesia. Dyskinetic movements are observed at certain points of the
cycle. Patients are asked to report the duration of these symptoms in terms of percent of
awake time spent in each state. This retrospective approach is formalized in Subscale
Four of the Unified Parkinson's Disease Rating Scale (UPDRS) [36], referred to as
'Complications of Treatment'. This kind of self-report is subject to both perceptual
bias (e.g. patients often have difficulty distinguishing dyskinesia from other symptoms)
and recall bias. Another approach is the use of patient diaries, which does improve
reliability by recording symptoms as they occur, but does not capture many of the
features that are useful in clinical decision-making [37]. In clinical trials of new
therapies, both the diary-based approach [37] as well as extended direct observations of
the patients in a clinical care setting [38] have been used, but both capture only a small
portion of the patients' daily experience and are burdensome for the subjects.
Based on these considerations, our team [39] and others [40] have developed
methods that rely on wearable technology to monitor longitudinal changes in the
severity of symptoms and motor complications in patients with Parkinson's disease. In
our own study, we recruited twelve individuals, ranging in age from 46 to 75 years,
with a diagnosis of idiopathic Parkinson's disease (Hoehn & Yahr stage 2.5 to 3, i.e.
mild to moderate bilateral disease) [36]. Subjects delayed their first medication intake
in the morning so that they could be tested in a practically-defined 'OFF' state
(baseline trial). This approach is used clinically to observe patients during their most
severe motor symptoms. Subjects were instructed to perform a series of standardized
motor tasks utilized in clinically evaluating patients with Parkinsons disease.
Accelerometer sensors positioned on the upper and lower extremities were used to
gather movement data during performance of the standardized series of motor tasks
mentioned above. The study focused on predicting tremor, bradykinesia, and
dyskinesia based on features derived from accelerometer data. Raw accelerometer data
were high-pass filtered with a cutoff frequency of 1 Hz to remove gross changes in the
orientation of body segments [41]. An additional filter with appropriate characteristics
was applied to isolate the frequency components of interest for estimating each
symptom or motor complication. Specifically, the time series were band-pass filtered
with a bandwidth of 3-8 Hz for the analysis of tremor, and low-pass filtered with a
cutoff frequency of 3 Hz for the analysis of bradykinesia and dyskinesia. All the
filters were implemented as IIR filters based on an elliptic design. The accelerometer
time series were segmented using a rectangular window randomly positioned
throughout the recordings of each motor task [42]. Features were extracted from 30
such data segments (i.e. epochs) for each motor task from the recordings of each
subject during each trial. Five different types of features were estimated from
accelerometer data recorded from different body segments. The features were chosen
to represent characteristics such as intensity, modulation, rate, periodicity, and
coordination of movement. We implemented Support Vector Machines to predict
clinical scores of the severity of Parkinsonian symptoms and motor complications. Our
results demonstrated that an average prediction error not exceeding a few percentage
points can be achieved in the prediction of Parkinsonian symptoms and motor
complications from wearable sensor data. Specifically, average prediction error values
were 3.5 % for tremor, 5.1 % for bradykinesia, and 1.9 % for dyskinesia [43].
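The filtering and segmentation steps described above can be sketched as follows. This is a minimal illustration: the 1 Hz high-pass, the 3-8 Hz and 3 Hz cutoffs, the elliptic IIR design, the rectangular windows, and the 30 random epochs follow the text, while the sampling rate, filter order, ripple values, and window length are assumptions made for the example.

```python
import numpy as np
from scipy import signal

FS = 100.0  # sampling rate in Hz (assumed for this sketch)

def design_filters(fs=FS):
    """Elliptic IIR filters matching the cutoffs given in the text.
    Order, passband ripple and stopband attenuation are illustrative."""
    hp = signal.ellip(4, 0.1, 40, 1.0, btype="highpass", fs=fs, output="sos")
    tremor_bp = signal.ellip(4, 0.1, 40, [3.0, 8.0], btype="bandpass", fs=fs, output="sos")
    brady_lp = signal.ellip(4, 0.1, 40, 3.0, btype="lowpass", fs=fs, output="sos")
    return hp, tremor_bp, brady_lp

def preprocess(acc, fs=FS):
    """High-pass at 1 Hz to remove gross orientation changes, then isolate
    the bands used for tremor and for bradykinesia/dyskinesia analysis."""
    hp, tremor_bp, brady_lp = design_filters(fs)
    detrended = signal.sosfiltfilt(hp, acc)
    return {
        "tremor": signal.sosfiltfilt(tremor_bp, detrended),
        "brady_dysk": signal.sosfiltfilt(brady_lp, detrended),
    }

def random_epochs(x, n_epochs=30, win_len=int(5 * FS), rng=None):
    """Extract rectangular windows at random positions throughout a recording."""
    rng = np.random.default_rng(rng)
    starts = rng.integers(0, len(x) - win_len, size=n_epochs)
    return np.stack([x[s:s + win_len] for s in starts])

# Example: simulated accelerometer trace with a 5 Hz tremor-like component
t = np.arange(0, 60, 1 / FS)
acc = 0.3 * np.sin(2 * np.pi * 5 * t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
bands = preprocess(acc)
epochs = random_epochs(bands["tremor"], rng=0)
print(epochs.shape)  # (30, 500)
```

In the study, features representing intensity, modulation, rate, periodicity, and coordination would then be computed from each epoch and fed to the Support Vector Machines.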
Figure 6. Example of the cyclical pattern of motor abnormalities observed in patients with Parkinson's
disease during a motor fluctuation cycle (diagram states: OFF, ON, Onset Dyskinesia, Peak-Dose
Dyskinesia, End-of-Dose Dyskinesia, Wearing-Off, and Levodopa Intake).
feature-to-instance ratio. We assessed the reliability of the estimates achieved using this
method by deriving the prediction error for each of the investigated motor tasks. The
estimated prediction error for such motor tasks ranged between about 1 % and 13 %.
This is a very encouraging result as it suggests that FAS scores could be estimated via
monitoring motor tasks performed by patients in the home and community settings
using accelerometers.
Our team also studied the feasibility of utilizing a sensorized glove to implement
physical therapy protocols for motor retraining based on the use of video games. The
glove was utilized to implement grasp and release of objects in the video games. This
function was achieved by defining a measure of "hand aperture" and estimating it by
processing data gathered from the data glove. Calibration of the data glove was
achieved by asking individuals to hold a wooden cone-shaped object, with diameter
ranging from 1 cm to 11.8 cm, at different points of the cone corresponding to a known
diameter. The output of the sensors on the glove was used to estimate the diameter of
the section of the cone-shaped object corresponding to the position of the middle finger.
A linear regression model was utilized to estimate the above-defined measure of hand
aperture (dependent variable) using the glove sensor outputs as independent variables.
Encouraging results were achieved from the study. The error in estimating the measure
of hand aperture defined above was smaller than 1.5 cm. We consider this result
satisfactory in the context of the application of interest, i.e. the implementation of
video games to train grasp and release functions in individuals post stroke.
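The calibration procedure can be sketched as an ordinary least-squares fit from the glove sensor outputs to the known cone diameters. This is a minimal illustration: the 1 cm to 11.8 cm diameter range and the linear regression follow the text, while the number of sensors and the simulated sensor readings are assumptions, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Known cone diameters (cm) at the calibration grip positions (1 to 11.8 cm).
diameters = np.linspace(1.0, 11.8, 12)

# Simulated outputs of 5 glove bend sensors: roughly linear in hand aperture,
# with sensor-specific gain and offset plus a little noise (purely illustrative).
n_sensors = 5
gains = rng.uniform(0.5, 2.0, n_sensors)
offsets = rng.uniform(-1.0, 1.0, n_sensors)
sensors = diameters[:, None] * gains + offsets + 0.05 * rng.standard_normal((12, n_sensors))

# Fit aperture = X @ coef via least squares; a column of ones gives the intercept.
X = np.hstack([sensors, np.ones((len(diameters), 1))])
coef, *_ = np.linalg.lstsq(X, diameters, rcond=None)

# Check the residual error of the calibration fit.
predicted = X @ coef
rmse = np.sqrt(np.mean((predicted - diameters) ** 2))
print(f"RMSE: {rmse:.3f} cm")  # well below the 1.5 cm bound reported in the text
```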
Overall, the results herein summarized indicate that the investigated wearable
technologies are suitable to implement telerehabilitation protocols.
Figure 7. A subject testing the data glove herein described in combination with a robotic system for
rehabilitation.
5. Conclusions
The assessment of the impact of rehabilitation interventions on the daily life of
individuals is essential for developing protocols that maximize the impact of
rehabilitation on the quality of life of individuals. The use of questionnaires is
somewhat limited because questionnaires are subject to perceptual bias and recall bias.
Furthermore, relying on questionnaires is bound to introduce a delay in the response to
changes in a patient's status, since the information needed to make a clinical decision
concerning changes in rehabilitation interventions is not available on a continuous
basis; rather, questionnaires are administered sporadically. Wearable
technology has the potential to overcome limitations of existing methodologies to
assess the impact of rehabilitation interventions on the real life of individuals.
Miniature unobtrusive sensors can provide clinicians with quantitative measures of
subjects' status in the home and community settings, thus facilitating clinical
decisions concerning the adequacy of ongoing interventions and possibly allowing
prompt modification of the rehabilitation strategy if needed. In this chapter, we
presented three applications that point at potential areas of use of wearable technology
in rehabilitation.
In the first example, we showed that wearable sensors can provide clinicians with a
tool to monitor exercise compliance in patients with COPD. We also showed that
activities of daily living that are associated with different systemic responses can be
identified with high reliability. It is conceivable that based on trends identified via
analysis of changes in activity level and systemic responses associated with certain
motor tasks, we could achieve early detection of exacerbation episodes. The impact on
our ability to care for patients with COPD would be paramount.
In the second example, we demonstrated that we can monitor the severity of
symptoms and motor complications in patients with Parkinson's disease. This is
important in the late stages of the disease, when motor fluctuations develop. Since
motor fluctuations span an interval of several hours, observations performed in a
clinical setting (typically limited to the duration of the outpatient visit, i.e. about 20-25
minutes) are not sufficient to capture the severity of motor fluctuations. Monitoring
patients in the home and community settings could therefore substantially improve the
clinical management of patients with late-stage Parkinson's disease. Our results suggest
that the technique summarized in this chapter could be extended to monitoring patients
with other neurodegenerative conditions that are accompanied by motor symptoms.
Finally, we demonstrated that wearable technology could provide clinicians with a
means to assess functional ability in individuals post stroke. This is important because
we currently have very limited tools to assess the impact of rehabilitation interventions
on the real life of patients. Although it is expected that therapeutic interventions that
are associated with improvements in impairment level and functional level lead to an
improved quality of life, it would be very useful to quantify such impact and compare
different interventions measuring their impact on the performance of activities of daily
living via processing data gathered in the home and community settings. Tools to
monitor patients in the home and community settings could lead to new criteria for
adjusting interventions that maximize the impact on real life conditions of the adopted
therapeutic intervention. We anticipate that such criteria would allow clinicians to help
patients achieve a higher level of independence and a better quality of life.
All in all, the examples provided in this chapter indicate that wearable technology
has tremendous potential to allow clinicians to improve the quality of care they deliver.
Acknowledgments
The author wishes to thank Dr. Sunderasan Jayaraman, Dr. Alex (Sandy) Pentland, and
Dr. William Tang for allowing him to utilize figures that they utilized in previous
communications and for their input on a draft of this chapter. The work on wireless
technology summarized in this chapter was largely carried out by Dr. Matt Welsh and
his team at the Harvard School of Engineering and Applied Sciences. Applications of
e-textile solutions were pursued jointly with Dr. Danilo De Rossi, University of Pisa,
and his associates Dr. Alessandro Tognetti and Mr. Fabrizio Cutolo. Dr. Rita Paradiso
(Smartex) provided expertise and support in the development of the data glove
discussed in this chapter. The pilot study concerning the application of wearable
technology to monitor patients with COPD was performed with Dr. Marilyn Moy at
Harvard Medical School and Ms. Sherrill Delsey, currently at the MIT Lincoln
Laboratory, who was at Spaulding Rehabilitation Hospital at the time the study
described in this chapter was performed. The development of methodologies to assess
the severity of symptoms and motor complications in patients with Parkinson's disease
was performed with Dr. John Growdon and Ms. Nancy Huggins at Massachusetts
General Hospital. Algorithms for the analysis of accelerometer data were developed by
Mr. Shyamal Patel, Northeastern University. Mr. Richard Hughes, currently with
Partners HomeCare, who was at Spaulding Rehabilitation Hospital at the time the study
described in this chapter was performed, provided clinical scores for all the patients'
recordings. Mr. Richard Hughes also contributed to the pilot study we performed to
assess the use of wearable technology in patients post stroke. Medical expertise for this
project was provided by Dr. Joel Stein, currently at Columbia University, who was at
Spaulding Rehabilitation Hospital at the time the study was performed. Algorithms for
the analysis of data recorded from patients post stroke were developed by Mr. Todd
Hester, currently at University of Texas Austin, who was at Spaulding Rehabilitation
Hospital at the time the study was performed. Mr. Shyamal Patel, Northeastern
University, also contributed to the development of these algorithms.
References
[1] P. Bonato, Advances in wearable technology and applications in physical medicine and rehabilitation,
Journal of NeuroEngineering and Rehabilitation 2 (1) (2005), 2.
[2] E.A. Krupinski, Telemedicine for home health and the new patient: when do we really need to go to the
hospital?, Studies in Health Technology and Informatics 131 (2008), 179-189.
[3] Joint principles of the Patient-Centered Medical Home, Del Med J 80 (1) (2008), 21-22.
[4] S. Park, C. Gopalsamy, R. Rajamanickam, S. Jayaraman, The Wearable Motherboard: a flexible
information infrastructure or sensate liner for medical applications, Studies in Health Technology and
Informatics 62 (1999), 252-258.
[5] S. Park, S. Jayaraman, Enhancing the quality of life through wearable technology, IEEE Engineering in
Medicine and Biology Magazine 22 (3) (2003), 41-48.
[6] S. Park, S. Jayaraman, e-Health and quality of life: the role of the Wearable Motherboard, Studies in
Health Technology and Informatics 108 (2004), 239-252.
[7] A. Pentland, Healthwear: medical technology becomes wearable, Studies in Health Technology and
Informatics 118 (2005), 55-65.
[8] A. Pentland, T. Choudhury, N. Eagle, P. Singh, Human dynamics: computation for organizations,
Pattern Recognition Letters 26 (2005), 503-511.
[9] M. Sung, C. Marci, A. Pentland, Wearable feedback systems for rehabilitation, Journal of
NeuroEngineering and Rehabilitation 2 (2005), 17.
[10] A. Pentland, Social Dynamics: Signals and Behavior, MIT Media Lab, 2005.
[11] D. De Rossi, F. Lorussi, E.P. Scilingo, F. Carpi, A. Tognetti, M. Tesconi, Artificial kinesthetic systems
for telerehabilitation, Studies in Health Technology and Informatics 108 (2004), 209-213.
[12] D. De Rossi, A. Lymberis, New generation of smart wearable health systems and applications, IEEE
Transactions on Biomedical Engineering Letters 9 (3) (2005), 293-294.
[13] A. Tognetti, F. Lorussi, R. Bartalesi, S. Quaglini, M. Tesconi, G. Zupone, D. De Rossi, Wearable
kinesthetic system for capturing and classifying upper limb gesture in post-stroke rehabilitation,
Journal of NeuroEngineering and Rehabilitation 2 (1) (2005), 8.
[14] E. Wade, H. Asada, Cable-free body area network using conductive fabric sheets for advanced
human-robot interaction, Conference of the IEEE Engineering in Medicine and Biology Society 4
(2005), 3530-3533.
[15] P.T. Gibbs, H.H. Asada, Wearable conductive fiber sensors for multi-axis human joint angle
measurements, Journal of NeuroEngineering and Rehabilitation 2 (1) (2005), 7.
[16] E. Jovanov, A. Milenkovic, C. Otto, P.C. de Groen, A wireless body area network of intelligent motion
sensors for computer assisted physical rehabilitation, Journal of NeuroEngineering and Rehabilitation
2 (1) (2005), 6.
[17] T.R. Fulford-Jones, G.Y. Wei, M. Welsh, A portable, low-power, wireless two-lead EKG system,
Conference of the IEEE Engineering in Medicine and Biology Society 3 (2004), 2141-2144.
[18] T. Gao, L. Selavo, M. Welsh, Creating a hospital-wide patient safety net: Design and deployment of
ZigBee vital sign sensors, AMIA Annual Symposium Proceedings (2007), 960.
[19] WHO, World Health Report, Geneva, Switzerland, 2000.
[20] C.J. Murray, A.D. Lopez, Mortality by cause for eight regions of the world: Global Burden of Disease
Study, Lancet 349 (9061) (1997), 1269-1276.
[21] L. Wilson, E.B. Devine, K. So, Direct medical costs of chronic obstructive pulmonary disease: chronic
bronchitis and emphysema, Respiratory Medicine 94 (3) (2000), 204-213.
[22] D.M. Sherrill, M.L. Moy, J.J. Reilly, P. Bonato, Using hierarchical clustering methods to classify motor
activities of COPD patients from wearable sensor data, Journal of NeuroEngineering and
Rehabilitation 2 (2005), 16.
[23] M.L. Moy, S.J. Mentzer, J.J. Reilly, Ambulatory monitoring of cumulative free-living activity, IEEE
Engineering in Medicine and Biology Magazine 22 (3) (2003), 89-95.
[24] M.L. Moy, E. Garshick, K.R. Matthess, R. Lew, J.J. Reilly, Accuracy of uniaxial accelerometer in
chronic obstructive pulmonary disease, Journal of Rehabilitation Research and Development 45 (4)
(2008), 611-617.
[25] D.M. Sherrill, M.L. Moy, J.J. Reilly, P. Bonato, Objective Field Assessment of Exercise Capacity in
Chronic Obstructive Pulmonary Disease, 15th Annual Congress of the International Society of
Electrophysiology and Kinesiology, Boston (Massachusetts), 2004.
[26] M.L. Moy, D.M. Sherrill, P. Bonato, J.J. Reilly, Monitoring Cumulative Free-Living Exercise in COPD,
ATS, Orlando (Florida), 2004.
[27] D.G. Standaert, A.B. Young, Treatment of CNS Neurodegenerative Diseases, in: J.G. Hardman, L.E.
Limbird (Eds.), Goodman and Gilman's Pharmacological Basis of Therapeutics, McGraw-Hill, 2001,
pp. 549-620.
[28] S. Fahn, Levodopa in the treatment of Parkinson's disease, Journal of Neural Transmission 71 (2006),
1-15.
[29] T.N. Chase, Levodopa therapy: consequences of the nonphysiologic replacement of dopamine,
Neurology 50 (5) (1998), S17-25.
[30] J.A. Obeso, C.W. Olanow, J.G. Nutt, Levodopa motor complications in Parkinson's disease, Trends in
Neurosciences 23 (10) (2000), S2-7.
[31] A.E. Lang, A.M. Lozano, Parkinson's disease. First of two parts, The New England Journal of Medicine
339 (15) (1998), 1044-1053.
[32] A.E. Lang, A.M. Lozano, Parkinson's disease. Second of two parts, The New England Journal of
Medicine 339 (16) (1998), 1130-1143.
[33] A. Thomas, L. Bonanni, A. Di Iorio, S. Varanese, F. Anzellotti, A. D'Andreagiovanni, F. Stocchi, M.
Onofrj, End-of-dose deterioration in non ergolinic dopamine agonist monotherapy of Parkinson's
disease, Journal of Neurology 253 (12) (2006), 1633-1639.
[34] W.J. Weiner, Motor fluctuations in Parkinson's disease, Reviews in neurological diseases 3 (3) (2006),
101-108.
[35] T. Muller, H. Russ, Levodopa, motor fluctuations and dyskinesia in Parkinson's disease, Expert
Opinion on Pharmacotherapy 7 (13) (2006), 1715-1730.
[36] S. Fahn, R.L. Elton, Unified Parkinson's Disease Rating Scale, in: S. Fahn (Ed.), Recent Developments
in Parkinson's Disease, MacMillan Healthcare Information, 1987, pp. 153-163.
[37] Evaluation of dyskinesias in a pilot, randomized, placebo-controlled trial of remacemide in advanced
Parkinson disease, Archives of Neurology 58 (10) (2001), 1660-1668.
[38] C.H. Adler, C. Singer, C. O'Brien, R.A. Hauser, M.F. Lew, K.L. Marek, E. Dorflinger, S. Pedder, D.
Deptula, K. Yoo, Randomized, placebo-controlled study of tolcapone in patients with fluctuating
Parkinson disease treated with levodopa-carbidopa. Tolcapone Fluctuator Study Group III, Archives of
Neurology 55 (8) (1998), 1089-1095.
[39] S. Patel, D. Sherrill, R. Hughes, T. Hester, N. Huggins, T. Lie-Nemeth, D. Standaert, P. Bonato,
Analysis of the severity of dyskinesia in patients with Parkinson's disease via wearable sensors,
BSN2006, International Workshop on Wearable and Implantable Body Sensor Networks, Cambridge,
MA, 2006, pp. 123-126.
[40] N.L. Keijsers, M.W. Horstink, S.C. Gielen, Ambulatory motor assessment in Parkinson's disease,
Movement Disorders 21 (1) (2006), 34-44.
[41] J.I. Hoff, A.A. van den Plas, E.A. Wagemans, J.J. van Hilten, Accelerometric assessment of
levodopa-induced dyskinesias in Parkinson's disease, Movement Disorders 16 (1) (2001), 58-61.
[42] P. Bonato, D.M. Sherrill, D.G. Standaert, S.S. Salles, M. Akay, Data mining techniques to detect motor
fluctuations in Parkinson's disease, Conference of the IEEE Engineering in Medicine and Biology
Society 7 (2004), 4766-4769.
[43] S. Patel, K. Lorincz, R. Hughes, N. Huggins, J.H. Growdon, M. Welsh, P. Bonato, Analysis of feature
space for monitoring persons with Parkinson's disease with application to a wireless wearable sensor
system, Conference of the IEEE Engineering in Medicine and Biology Society (2007), 6291-6294.
[44] Heart Disease and Stroke Statistics, American Heart Association, 2005.
[45] Stroke: Hope Through Research, National Institute of Neurological Disorders and Stroke, 2004.
[46] Morbidity and Mortality Weekly Report, Center for Disease Control, 2001.
[47] S.L. Wolf, P.A. Catlin, M. Ellis, A.L. Archer, B. Morgan, A. Piacentino, Assessing Wolf motor
function test as outcome measure for research in patients after stroke, Stroke 32 (7) (2001), 1635-1639.
Introduction
A brain-computer interface (BCI) is a system that interprets brain signals generated by
the user, allowing specific commands from the brain to be executed on an external
device. Therefore such an interface would enable severely disabled people to interact
with their environment without the need for any muscle activation. Indeed, BCI
systems appear to be interesting new assistive devices for people with severe motor
disabilities. However, they differ from other human-machine interfaces in that the user
must learn completely new skills in order to operate them. Years of experimentation
have shown cortical plasticity even in the adult brain, which adapts to BCIs; thus the
combination of rehabilitation and BCIs, both of which exploit cortical plasticity, could
help people regain lost abilities. For this reason, BCI systems appear to be promising
rehabilitation tools.
First, we provide an overview of BCI systems, from the historical and technical
points of view, and then we move on to discuss the application of BCI in rehabilitation,
where we focus on BCI usability in relation to user acceptance. In the final section we
present experimental data concerning two important issues related to the applicability
of BCI in rehabilitation procedures: i) the use of dry electrodes, a technology that has
the potential to improve BCI system usability and comfort; ii) the monitoring of
psychophysiological effects during BCI tasks, thus allowing the quantification of the
cognitive load and mental fatigue of BCI rehabilitation sessions.
Figure 1. Schematic model of a BCI, which considers three basic elements: input signal (brain signal from the
user), input-into-output translator block (signal processing) and output signal (to an external device like an
electric wheelchair).
Mason and Birch [7] proposed a functional model of BCI; it includes the following
components: user, electrodes, EEG amplifier, feature extractor, feature translator,
control interface, device controller, device and physical (operating) environment; the
feature extractor and feature translator can be further divided into subcomponents [8].
The processing done by the feature extractor can be split into signal enhancement (a
preprocessing done to increase the signal to noise ratio), actual feature extraction and,
finally, feature selection (dimensionality reduction).
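The feature-extractor decomposition described by Mason and Birch can be sketched as a simple processing chain. This is a minimal illustration: the three stages (signal enhancement, feature extraction, feature selection) follow the model, while the specific operations chosen here (band-pass enhancement, log band-power features, variance-based selection) and the sampling rate are assumptions made for the example.

```python
import numpy as np
from scipy import signal

FS = 250.0  # EEG sampling rate in Hz (assumed)

def enhance(eeg, fs=FS):
    """Signal enhancement: band-pass 8-30 Hz to raise the SNR of mu/beta activity."""
    sos = signal.butter(4, [8.0, 30.0], btype="bandpass", fs=fs, output="sos")
    return signal.sosfiltfilt(sos, eeg, axis=-1)

def extract_features(eeg):
    """Feature extraction: log band power per channel."""
    return np.log(np.var(eeg, axis=-1))

def select_features(features, k=2):
    """Feature selection (dimensionality reduction): keep the k features
    with the largest variance across trials."""
    idx = np.argsort(features.var(axis=0))[::-1][:k]
    return features[:, idx], idx

# 20 trials x 4 channels x 2 s of synthetic EEG
rng = np.random.default_rng(1)
trials = rng.standard_normal((20, 4, int(2 * FS)))
enhanced = enhance(trials)
feats = extract_features(enhanced)       # shape (20, 4)
selected, kept = select_features(feats)  # shape (20, 2)
print(feats.shape, selected.shape)
```

The selected features would then be passed to the feature translator and, from there, to the control interface.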
Mason and Birch's approach highlights the real complexity of any BCI system.
This is because a BCI is not a human-computer interface in the classical sense, as the
interaction is directly driven by the user's brain signals. In addition, there is an
interaction between two systems (human and machine) that have to learn and to adapt
to each other. From the user viewpoint, this requires the acquisition of new and
complex skills [6], whereas for the computer system there has to be the provision of
reliable and efficient algorithms for feature extraction, selection and classification [7, 8,
9, 10]. Regarding this last point, Schalk et al. [11] suggest the alternative approach of
moving from classification of features to detection of signal change, thus bypassing the
critical concerns related to training both user and algorithm.
1.3. Paradigms
The input signal for a BCI system cannot be simply the EEG signal at rest. This is
because at least two different states are needed to operate an external device. Thus a
cognitive task assigned to the user produces a signal containing features that are
extracted and classified. Different cognitive tasks can be used to produce such features.
The operational framework used to specify them is called the paradigm. The most
commonly used paradigms are listed below, with a brief description of their
physiological rationale.
1.3.1. Motor Imagery
According to Decety [12], motor imagery can be defined as a dynamic state during
which a given action is mentally simulated by a subject. The subject can implement
two different techniques: first person perspective, or motor-kinesthetic, and third
person, or visuo-spatial, perspective. Considering the physiological bases, movement
execution and motor imagery share common mechanisms [13]: in both cases, event-related
desynchronization of mu (or Rolandic) and beta rhythms over the contralateral
side (with respect to the movement) is present [14]. Moreover, it has been shown that
it is possible to discriminate between imagination of right and left hand movements
[15].
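A common way to exploit this effect is to compare mu-band power over the two sensorimotor areas. The sketch below computes a simple lateralization index from two channels; the channel names (C3/C4), the 8-12 Hz mu band, the sampling rate, and the synthetic signals are assumptions for illustration, not the pipeline of any specific study.

```python
import numpy as np
from scipy import signal

FS = 250.0  # sampling rate in Hz (assumed)

def mu_band_power(x, fs=FS, band=(8.0, 12.0)):
    """Average power of a signal in the mu band, estimated via Welch's method."""
    f, pxx = signal.welch(x, fs=fs, nperseg=int(fs))
    mask = (f >= band[0]) & (f <= band[1])
    return pxx[mask].mean()

def laterality_index(c3, c4):
    """Positive when mu power over C3 (left hemisphere) exceeds power over C4.
    Desynchronization contralateral to an imagined right-hand movement
    lowers C3 power, pushing the index negative."""
    p3, p4 = mu_band_power(c3), mu_band_power(c4)
    return (p3 - p4) / (p3 + p4)

# Synthetic example: a 10 Hz mu rhythm attenuated over C3,
# mimicking imagination of a right-hand movement.
t = np.arange(0, 4, 1 / FS)
rng = np.random.default_rng(2)
c3 = 0.3 * np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)  # attenuated
c4 = 1.0 * np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)  # intact
print(f"laterality index: {laterality_index(c3, c4):+.2f}")  # negative here
```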
1.3.2. P300 component of event-related potentials
EEGs show event-related potentials in response to some stimuli. Traditionally, such
potentials are extracted from the EEG by presenting the stimulus repeatedly, and then
averaging epochs which are time-locked to the stimulus or to its response. The
resulting waveform presents peaks of different amplitude at different latencies: the
P300 component is a positive peak with a latency of about 300 ms. It was discovered
by Sutton [16] and can be elicited when the subject is performing an oddball task.
During the task the subject is presented with a series of stimuli comprising two classes
of different relevance and probability of occurrence. The subject has to pay attention to
the target stimulus that belongs to the less frequent class: the P300 component
highlights the recognition of the target-events by the subject [17]. The stimulation can
be either visual or auditory [18].
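The traditional extraction procedure described above, averaging epochs time-locked to the stimulus, can be sketched as follows. The epoch length, sampling rate, number of stimuli, and the synthetic P300 waveform are assumptions made for the example.

```python
import numpy as np

FS = 250.0              # sampling rate in Hz (assumed)
EPOCH = int(0.8 * FS)   # 800 ms epoch following each stimulus (assumed)

def average_erp(eeg, stim_samples, epoch_len=EPOCH):
    """Average epochs of the EEG time-locked to the stimulus onsets."""
    epochs = np.stack([eeg[s:s + epoch_len] for s in stim_samples])
    return epochs.mean(axis=0)

# Synthetic single-channel EEG: noise plus a positive deflection around
# 300 ms after each of 50 target stimuli.
rng = np.random.default_rng(3)
n_stim, gap = 50, int(1.5 * FS)
eeg = rng.standard_normal(n_stim * gap + EPOCH)
onsets = np.arange(n_stim) * gap
p300 = 2.0 * np.exp(-0.5 * ((np.arange(EPOCH) / FS - 0.3) / 0.05) ** 2)
for s in onsets:
    eeg[s:s + EPOCH] += p300

# Averaging cancels the noise and reveals the positive peak near 300 ms.
erp = average_erp(eeg, onsets)
peak_latency_ms = 1000 * np.argmax(erp) / FS
print(f"peak latency: {peak_latency_ms:.0f} ms")
```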
1.3.3. Slow cortical potentials
Slow cortical potentials (SCP) are changes in cortical potential that can last from a
minimum of 300 ms up to several seconds. Typically, any reduction in cortical activity
produces positive changes with such a slow dynamic, while functions associated with
cortical activation, like preparation to voluntary movements, induce negative SCP [6,
19].
1.3.4. Steady State Visual Evoked Potential
Experimental evidence shows that flickering visual stimulation synchronizes human
visual cortex neurons to the frequency of the stimulus. The EEG response to such
visual stimulus is called the Steady State Visual Evoked Potential (SSVEP). This
SSVEP is a periodic oscillation with the same fundamental frequency as the flickering
stimulus, but it can also include higher frequencies [20].
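Detecting which flickering target the user is attending to can therefore be reduced to finding the dominant stimulation frequency in the EEG spectrum. The sketch below uses a simple FFT-peak approach; the candidate frequencies, sampling rate, and synthetic signal are assumptions for illustration, and practical systems often use more robust detectors such as canonical correlation analysis.

```python
import numpy as np

FS = 250.0                            # sampling rate in Hz (assumed)
CANDIDATES = [8.0, 10.0, 12.0, 15.0]  # target flicker frequencies (assumed)

def detect_ssvep(eeg, fs=FS, candidates=CANDIDATES):
    """Pick the candidate flicker frequency with the largest spectral amplitude,
    summing the fundamental and its second harmonic."""
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(len(eeg), d=1 / fs)
    def amp_at(f):
        return spectrum[np.argmin(np.abs(freqs - f))]
    scores = [amp_at(f) + amp_at(2 * f) for f in candidates]
    return candidates[int(np.argmax(scores))]

# Synthetic response to a 12 Hz flicker: fundamental plus a weaker harmonic.
t = np.arange(0, 4, 1 / FS)
rng = np.random.default_rng(4)
eeg = (np.sin(2 * np.pi * 12 * t)
       + 0.4 * np.sin(2 * np.pi * 24 * t)
       + 0.5 * rng.standard_normal(t.size))
print(detect_ssvep(eeg))  # 12.0
```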
computer interfaces: BCIs require skilled users, able to modulate their brain activity,
and should give such users real-time feedback. Whatever the feedback modality
(biofeedback or sensory input), it can induce activity-dependent plasticity, as shown by
Jarosiewicz et al. with an invasive BCI in primates [31].
Thus, learning processes activated by cognitive and sensory experiences related to
feedback from the environment are key elements in promoting neural plasticity and
modifications of brain circuitry. Adaptation to brain damage with compensatory
strategies can also be considered a learning process: thus the brain, although damaged,
triggers a reorganization of its structure. Addressing issues concerning brain structure
modification, and learning capacity due to brain insults, is very important for an
effective translation of neuroscience results into rehabilitation [32].
Interesting results toward the application of BCI in neurorehabilitation come from
Buch and colleagues [33]. They acquired data from eight subjects with chronic hand
plegia resulting from stroke (hand plegia duration was on average 16.7 ± 6.4 months).
They used a BCI system based on magnetoencephalography. During the BCI sessions
the subjects wore a mechanical orthosis on the plegic hand. This study demonstrated
that patients with complete hand paralysis due to stroke can not only use motor
imagery to operate an orthosis, but can also achieve control of the mechanical orthosis
by using signals recorded on the ipsilesional hemisphere. The evidence that increasing
the excitability of the ipsilesional motor cortex can improve the clinical outcome for
stroke patients [34] makes these findings very promising.
Some studies have shown that deafferentation of a body part induces a reduction of
its topographical representation on the somatosensory cortex. This reorganization can
be the result of both structural lesion and disuse. Counteracting such reorganization is
an important rehabilitation goal, particularly in stroke rehabilitation. In the event of
severe motor disorders and impairments, when physical exercise is no longer possible,
motor imagery may be the only possible way to access and train the motor system [35,
36], and since motor imagery is also a BCI paradigm, BCI is itself a candidate as the
rehabilitation tool for this situation. Furthermore, experimental evidence has revealed
functional and neural similarity between motor execution and imagery, which can be
performed from a kinesthetic or a visual perspective. In sports applications, mental
practice with motor imagery enhances performance and facilitates motor learning [13].
Another interesting finding with regard to rehabilitation is found in the study of Stinear
et al. [37], who indicate that only kinesthetic imagery modulates corticomotor
excitability. It is interesting to note that kinesthetic imagery seems to provide the best
performance also in BCI tasks [38]. This observation highlights the importance of
recognizing, in order to provide effective neurorehabilitation treatment, the kind of
motor imagery attitude, if any, of the BCI user [35].
Jackson et al. [39] suggested a model for the use of mental practice with motor
imagery in rehabilitation. According to their model, three elements contribute to the
rehabilitative outcome: physical execution (musculo-skeletal activity), declarative
knowledge (information about the skill the patient has to learn) and nonconscious
processes (elements of the skill, which cannot be explicitly verbalized). Obviously, due
to the interaction of the three components, the outcome improves with physical
execution, but this is not always possible or may be difficult in patients with brain
damage. Thus, motor imagery could be helpful for such cases [35, 39]. Moreover, the
lack of motor execution stresses the role of declarative knowledge and could also be
important in disclosing nonconscious aspects of motor learning [39].
Motor imagery has been used in stroke rehabilitation (though without clear results,
[40]) and in relation to Parkinson's disease [41]. Motor imagery in BCI goes further
because it provides users with feedback related to their cognitive activity, and this can
be exploited to achieve effective treatment. Moreover, BCI provides a quantitative
evaluation of both the subject's engagement and his/her ability to accomplish the
cognitive task.
Since one of the BCI paradigms is based on the P300 potential related to cognitive
events, BCI could also be used for cognitive rehabilitation. People affected by brain
injury or disease often experience cognitive problems [42], which can seriously affect
their quality of life. Cognitive rehabilitation is aimed at mitigating cognitive deficits
arising from neurological insults and diseases. While there is substantial evidence of
the efficacy of cognitive therapies concerning stroke and traumatic brain injury [43],
more research is needed where other diseases are concerned [44].
In cognitive rehabilitation, the use of event-related potentials is traditionally
limited to an assessment of injuries incurred or disorder severity. However, a tentative
biofeedback therapy based on P300 was designed to treat attention-deficit patients with
brain injury [45]: five patients with chronic mental disturbances received a P300 based
biofeedback therapy for a four week period, and all showed remarkable improvement.
However this pioneering work has not yet been followed by a larger clinical study.
A critical issue in the design of every BCI-based rehabilitation protocol concerns
the selection of the paradigm, which is related to the cognitive task proposed to the
user. Wolpaw [46] analyzed BCI system performance under different paradigms,
within controlled settings. He found an intrinsic inter-subject variability, and concluded
that such variability is a fundamental feature of BCI systems, probably related to the
nature of the BCI output pathway. In fact, there is an essential difference between
classic assistive devices and BCI systems: the former rely on the brain's natural
output pathways, while the latter require that the central nervous system control the
cortical neurons instead of the spinal motoneurons. In order to achieve a more natural,
and therefore reliable, BCI system, it could be beneficial to shift the control strategy
from process-control to goal-selection. The reported BCI performance results were
obtained using BCI as assistive technology, and Wolpaw points out that paradigms like
P300 should be preferred to others, like motor imagery, in order to reduce the user's
cognitive load. It is important to consider the issues discussed by Wolpaw also when
endeavoring to design effective neurorehabilitation protocols based on BCI. In fact,
protocols based on motor imagery can imply a relevant cognitive load.
rows and columns of the matrix flash 15 times, every flash lasting 60 ms with a dark
time between two flashes of 10 ms. The flashing order of rows and columns is random.
In each trial a letter is selected for communication, and the subject is asked to count
how many times the cell containing the selected letter flashes. During the training
phase, the subject communicates a predefined (meaningful) word of 10 letters, but, as
already mentioned, does not receive any meaningful feedback from the BCI system.
Therefore, the subject is told in advance to ignore the symbol printed on the screen at
the end of every repetition. During the performance phase, the subject communicates a
different predefined (meaningful) word of 10 letters. However, in this case the subject
receives meaningful feedback, i.e., at the end of each series of random flashes, the
character identified by the signal processing block is printed on the screen.
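The flashing schedule just described can be sketched as follows. The 6x6 matrix size is an assumption (the text gives only the timing and repetition counts), and the helper names are introduced here for illustration:

```python
import random

# Sketch of the P300 speller flashing schedule described above.
# Assumed: a 6x6 symbol matrix (not stated in the text); the timing
# values (60 ms flash, 10 ms dark, 15 repetitions) follow the text.

ROWS, COLS = 6, 6
REPETITIONS = 15          # each row and each column flashes 15 times
FLASH_MS, DARK_MS = 60, 10

def trial_schedule(rng=random):
    """Randomized flash sequence for one trial: a list of
    ('row', i) / ('col', j) events."""
    events = []
    for _ in range(REPETITIONS):
        stimuli = [('row', i) for i in range(ROWS)] + [('col', j) for j in range(COLS)]
        rng.shuffle(stimuli)  # rows and columns flash in random order
        events.extend(stimuli)
    return events

def target_flash_count(events, target_row, target_col):
    """How often the cell containing the target letter lights up
    (the count the subject is asked to keep)."""
    return sum(1 for kind, idx in events
               if (kind == 'row' and idx == target_row)
               or (kind == 'col' and idx == target_col))

events = trial_schedule()
# The target cell flashes once per row pass and once per column pass:
assert target_flash_count(events, 2, 4) == 2 * REPETITIONS   # 30
# Total flash time per trial: 12 flashes per repetition, 70 ms each
print(len(events) * (FLASH_MS + DARK_MS) / 1000, "s")  # 12.6 s
```

Counting the 30 target flashes is what keeps the subject's attention locked on the chosen cell, which is what elicits the P300 response on target events.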
In the motor imagery task, the subject looks at a fixation cross displayed in the
centre of the monitor. After 3 seconds, an arrow (cue stimulus) pointing to the right, or
the left, appears for 1.25s on the fixation cross. In the training phase, the subject is
asked to imagine a right or left hand movement, according to the direction indicated by
the arrow; this trial is repeated 40 times, with the arrow pointing randomly 20 times in
each of the two directions. In the performance phase, feedback appears while the
subject imagines the hand movement: the feedback is a horizontal bar indicating the
direction of the imagined movement (left or right) as identified online by the
computer.
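A minimal sketch of the cue randomisation for one motor imagery run, under the trial counts and timings given above; the function name is hypothetical, introduced only for illustration:

```python
import random

# One motor-imagery run as described above: 40 trials, the arrow cue
# pointing 20 times left and 20 times right in random order.
# Timing constants follow the text.

FIXATION_S, CUE_S = 3.0, 1.25
N_TRIALS = 40

def cue_sequence(rng=random):
    """Balanced, randomly ordered left/right cues for one run."""
    cues = ['left'] * (N_TRIALS // 2) + ['right'] * (N_TRIALS // 2)
    rng.shuffle(cues)
    return cues

cues = cue_sequence()
assert len(cues) == N_TRIALS
assert cues.count('left') == cues.count('right') == 20
```

Shuffling a pre-balanced list (rather than drawing each cue independently) guarantees the 20/20 split the protocol specifies.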
The experimental session also consists of two reference conditions. One is the
basal resting condition, in which the subject is asked to sit quietly in front of the
computer monitor for 6 minutes: for the first 3 minutes with eyes open, looking at a
fixation point on the screen, and for the remaining 3 minutes with eyes closed. The
second reference condition is a mental arithmetic task. The subject is asked to perform
a Paced Auditory Serial Addition Test (PASAT) [78]. First, two series of single digits
are presented verbally every 3 seconds (10 digits for the probe and 60 digits for the test,
in this text named PASAT-3) and the patient must add each new digit to the one
immediately prior to it. The test is then repeated with two different series of single
digits presented every 2 seconds (again, 10 digits for the probe and 60 digits for the
test, in this text named PASAT-2). Sitting at rest and PASAT represent two
reproducible and standardized reference conditions characterized by different levels of
autonomic tone: a low sympathetic tone during rest, and a marked sympathetic
activation during PASAT, induced by the stress associated with the mental arithmetic.
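The PASAT scoring rule (each new digit added to the one immediately before it) can be sketched as follows. Only the series lengths follow the text; the digits themselves are generated randomly here for illustration:

```python
import random

# PASAT scoring as described above: digits are presented one at a
# time and the subject must answer, for each new digit, its sum with
# the immediately preceding digit. Series lengths (10-digit probe,
# 60-digit test) follow the text.

def pasat_expected_answers(digits):
    """Correct responses: each digit added to the one before it."""
    return [a + b for a, b in zip(digits, digits[1:])]

# Worked example: 3,7,2,5 -> 3+7, 7+2, 2+5
assert pasat_expected_answers([3, 7, 2, 5]) == [10, 9, 7]

probe = [random.randint(1, 9) for _ in range(10)]
test = [random.randint(1, 9) for _ in range(60)]
answers = pasat_expected_answers(probe) + pasat_expected_answers(test)
# a 10-digit series yields 9 answers, a 60-digit series yields 59
assert len(answers) == 9 + 59
```

The same answer-generation rule applies to both PASAT-3 and PASAT-2; only the inter-digit interval (3 s vs. 2 s) changes.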
4.2.3. Data Analysis
Here we show some examples which suggest how information on the psychological
and physiological effects produced by BCI tasks can be assessed by analyzing the
signals monitored during the rehabilitation sessions. Examples are derived from signals
recorded in one healthy volunteer (male, 28 years) who followed our previously
described experimental protocol.
4.2.3.1. EEG Spectrum
In the literature, specific components of the EEG spectrum have been used to monitor
the state of alertness or sleepiness of the subject, to reflect the difficulty of a task and to
identify lapses in attention [72]. For this reason we quantified spectral changes during
each BCI task. This was done by computing the power spectrum at Fz, Cz, Pz and Oz
during the two reference conditions (rest and PASAT), and during the training and
performance phases of the P300 speller and motor imagery tasks. Spectra were
normalized with respect to their total power, and expressed as a percentage of the value
measured in the eyes open - sitting at rest condition (our reference condition in this
analysis). As an example, figure 2 shows the results obtained for Fz. Clear differences
appear between the training and performance phases in the weight of the low-frequency
components, and between the BCI tasks and PASAT. These spectral alterations may
help to better understand the mental load of the subject and to monitor his/her
psychophysiological state.

Figure 2. Changes of EEG spectral components during different BCI tasks and during a mental arithmetic
test (PASAT-2), calculated for Fz. Each histogram shows the ratio between the EEG spectrum obtained
during the task and the spectrum obtained in the reference condition (eyes open - sitting at rest). Each
spectrum was normalized by its total power before computing the ratio.
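The normalize-then-ratio computation described above can be sketched on synthetic signals. The sampling rate and the test signals below are illustrative assumptions, not values from the study:

```python
import numpy as np
from scipy.signal import welch

# Sketch of the spectral comparison described above: each spectrum is
# normalized by its total power, and the task spectrum is expressed
# as a percentage of the eyes-open resting spectrum, bin by bin.

FS = 256  # Hz, assumed EEG sampling rate (illustrative)

def normalized_spectrum(eeg, fs=FS):
    f, pxx = welch(eeg, fs=fs, nperseg=2 * fs)
    return f, pxx / pxx.sum()          # normalize by total power

rng = np.random.default_rng(0)
t = np.arange(60 * FS) / FS            # one minute of signal
# Synthetic "rest" dominated by 10 Hz (alpha), "task" by 5 Hz (theta):
rest = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
task = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(t.size)

f, p_rest = normalized_spectrum(rest)
_, p_task = normalized_spectrum(task)
ratio_pct = 100 * p_task / p_rest      # task spectrum as % of rest

theta = (f >= 4) & (f < 8)
alpha = (f >= 8) & (f < 13)
# low-frequency weight rises during the "task", alpha weight falls
assert ratio_pct[theta].mean() > 100
assert ratio_pct[alpha].mean() < 100
```

Normalizing each spectrum by its own total power before taking the ratio (as in Figure 2) makes the comparison reflect the relative weight of each band rather than overall amplitude differences between recordings.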
4.2.3.2. Neurovegetative Responses
Activation of the sympathetic branch of the autonomic nervous system can also be
expected during BCI tasks. Such activation may be related to the stress induced by the
mental tasks required during BCI sessions [79], to the expectation of positive feedback
from the computer (or the frustration when negative feedback appears), and to the
mechanisms of motor anticipation and programming activated by motor imagery
tasks [12, 80]. A way to continuously
monitor the neurovegetative tone is through an analysis of heart rate variability [81]. A
heart rate variability signal can be derived from the ECG. This is done first by
identifying the time of occurrence of the R-peak of each heart beat, and then by
calculating the time intervals between consecutive R peaks on a beat-by-beat basis. The
series of R-R intervals strongly reflects any change in the cardiac outflow of the
parasympathetic and sympathetic systems. Indeed, it has been shown that the spectral
power of the time series of R-R intervals, calculated over a high frequency (HF, from
0.15 to 0.40 Hz) band, mainly reflects the vagal tone, while power in a low frequency
(LF, from 0.04 to 0.15 Hz) band is influenced by both cardiac sympathetic and vagal
outflows. This evidence suggested considering the ratio between power in the LF and
HF bands, the so-called LF/HF power ratio, as an indirect index of the sympatho-vagal
balance [76].
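The LF/HF computation described above can be sketched as follows, assuming the R-peak times have already been detected in the ECG. The beat-by-beat R-R series is interpolated to a uniform 4 Hz grid before spectral estimation (a common implementation choice, not specified in the text); the band limits follow the text:

```python
import numpy as np
from scipy.signal import welch

LF_BAND = (0.04, 0.15)   # Hz: sympathetic and vagal outflows
HF_BAND = (0.15, 0.40)   # Hz: mainly vagal tone

def lf_hf_ratio(r_peak_times, fs_interp=4.0):
    """LF/HF power ratio from R-peak times (seconds)."""
    rr = np.diff(r_peak_times)        # R-R intervals, beat by beat
    t_rr = r_peak_times[1:]           # time stamp of each interval
    # Resample the unevenly spaced tachogram onto a uniform grid:
    t_uni = np.arange(t_rr[0], t_rr[-1], 1.0 / fs_interp)
    rr_uni = np.interp(t_uni, t_rr, rr)
    f, pxx = welch(rr_uni - rr_uni.mean(), fs=fs_interp, nperseg=256)
    def band_power(lo, hi):
        mask = (f >= lo) & (f < hi)
        return pxx[mask].sum()
    return band_power(*LF_BAND) / band_power(*HF_BAND)

def synthetic_beats(mod_freq, n_beats=400, rr_mean=0.8, depth=0.05):
    """Illustrative beat times with the R-R interval modulated at
    mod_freq (Hz) around a 0.8 s mean."""
    approx_t = np.arange(n_beats) * rr_mean
    rr = rr_mean + depth * np.sin(2 * np.pi * mod_freq * approx_t)
    return np.concatenate(([0.0], np.cumsum(rr)))

# Respiratory-rate (0.25 Hz, HF) modulation -> ratio well below 1;
# slow (0.10 Hz, LF) modulation -> ratio above 1, mimicking the
# pattern expected under sympathetic activation.
assert lf_hf_ratio(synthetic_beats(0.25)) < 1.0
assert lf_hf_ratio(synthetic_beats(0.10)) > 1.0
```

Only the ratio of the two band powers matters here, so the simple bin sum suffices in place of a formal integration over frequency.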
Figure 3. Values of the LF/HF power ratio normalized with respect to the sitting at rest condition (100%
basal condition) during a mental arithmetic test (PASAT) and during different BCI tasks.
Figure 3 shows our volunteer's LF/HF power ratio evaluated during the different
tasks of the experimental protocol. Values are expressed as a percentage of the
basal at-rest condition, i.e., one of the two reference conditions (dashed line).
The second reference condition is the mental arithmetic test (PASAT). Mental
arithmetic clearly induced sympathetic activation, which is reflected in a substantially
higher LF/HF power ratio. The training phase of the P300 speller induces a similarly
high sympathetic activation, probably produced by the same mechanisms of mental
stress activated during the mental arithmetic test. The actual performance of the P300
speller is associated with an even larger LF/HF power ratio. Unlike the
training phase, during the performance of this BCI task the subject receives meaningful
feedback from the computer (in this case, the identification of the letter selected for
communication). It is likely that the presence of feedback, particularly the expectation
of correct letter recognition, is responsible for the further sympathetic activation
observed during the P300 performance. As with the P300-based BCI, sessions based
on the motor imagery paradigm are also characterized by a larger LF/HF
ratio when feedback is present. However, it is worth noting that during both the
training and performance phases, motor imagery tasks are associated with a larger
autonomic activation than the corresponding P300-speller tests. It can be hypothesized
that the difference is accounted for by the additional sympathetic activation associated
with motor planning and preparation [11].
Conclusions
The inherent complexity of using any BCI system derives from its peculiar feature: the
interaction between man and machine does not require any muscular activation. This
means that, unlike classical human-computer interfaces, the user's commands do not follow natural output pathways. We have shown that this peculiar feature makes BCI systems
not only valuable assistive devices for people with severe motor disabilities, but also
real rehabilitative tools. In fact, by stimulating patients to acquire new skills, and
activating specific cortical areas, BCIs might also be used for innovative and effective
neurorehabilitation therapies. Indeed, we have reviewed studies investigating this
possible use of BCI, and report the first promising clinical applications.
However, clinical BCI applications are still very limited, and many critical issues
need to be addressed before we can see effective systems for neurorehabilitative BCI
operating in clinical settings or in patients' homes. We have described the most
significant points that need to be considered when designing, selecting and using a BCI
system for neurorehabilitation. Furthermore, we have emphasized the importance of
technology acceptance and usability. The problems that today limit the practical use of
BCI in neurorehabilitation, will probably be overcome when new technologies provide
non-conventional sensors for less obtrusive brain signal recording, and affective
interfaces able to adapt the BCI according to emotional status changes in the patient.
Our experimental results regarding the possible use of dry electrodes, and the online
monitoring of the psychophysiological effects of BCI tasks, suggest a way of
addressing these problems.
References
[1]
[2]
[3]
[4]
[5]
[6]
[7]
[8]
[9]
[10]
[11]
[12]
[13]
[14]
[15]
[16]
[17]
[18]
[19] N. Birbaumer, Slow cortical potentials: plasticity, operant control and behavioral effects, The
Neuroscientist 5 (1999), 74-78.
[20] G.R. Mueller-Putz, R. Scherer, C. Brauneis, and G. Pfurtscheller, Steady-state visual evoked potential
(SSVEP)-based communication: impact of harmonic frequency components, Journal of Neural
Engineering 2 (2005), 123-130.
[21] http://www.who.int/topics/rehabilitation/en/
[22] B.H. Dobkin, Brain-computer interface technology as a tool to augment plasticity and outcomes for
neurological rehabilitation, The Journal of Physiology 579 (2007), 637-642.
[23] N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey et al, A spelling device for the
paralysed, Nature 398 (1999), 297-298.
[24] J. Philips, J. del R. Millán, G. Vanacker, E. Lew, F. Galán, P.W. Ferrez, H. Van Brussel, and M. Nuttin,
Adaptive Shared Control of a Brain-Actuated Simulated Wheelchair, Proceedings of the 2007 IEEE
10th International Conference on Rehabilitation Robotics, Noordwijk, The Netherlands, 408-414.
[25] R. Leeb, D. Friedman, G.R. Mueller-Putz, R. Scherer, Mel Slater and G. Pfurtscheller, Self-Paced
(Asynchronous) BCI Control of a Wheelchair in Virtual Environments: A Case Study with a
Tetraplegic, Computational Intelligence and Neuroscience (2007), 1-8
[26] L.R. Hochberg, M.D. Serruya, G.M. Friehs, J.A. Mukand, M. Saleh et al., Neuronal ensemble control
of prosthetic devices by a human with tetraplegia, Nature 442 (2006), 164-171.
[27] G. Pfurtscheller, G.R. Mueller, J. Pfurtscheller, H.J. Gerner, R. Rupp, Thought control of functional
electrical stimulation to restore hand grasp in a patient with tetraplegia, Neuroscience Letters 351
(2003), 33-36.
[28] D.V. Buonomano, M.M. Merzenich, Cortical plasticity: from synapses to maps, Annual Review of
Neuroscience 21 (1998), 149-186.
[29] C. Kelly, J.J. Foxe, H. Garavan, Patterns of normal human brain plasticity after practice and their
implications for neurorehabilitation, Archives of Physical Medicine and Rehabilitation 87 (2006), S20-S29.
[30] J.J. Daly, J.R. Wolpaw, Brain-computer interfaces in neurological rehabilitation, The Lancet Neurology
7 (2008), 1032-43.
[31] B. Jarosiewicz, S.M. Chase, G.W. Fraser, M. Velliste, R.E. Kass, A.B. Schwartz, Functional network
reorganization during learning in a brain-computer interface paradigm, Proceedings of the National
Academy of Sciences USA 105 (2008), 19486-91.
[32] J.A. Kleim, T.A. Jones, Principles of experience-dependent neural plasticity: implications for
rehabilitation after brain damage, Journal of Speech, Language, and Hearing Research 51 (2008), S225-S239.
[33] E. Buch, C. Weber, L.G. Cohen, C. Braun, M.A. Dimyan et al., Think to move: a neuromagnetic Brain-Computer Interface (BCI) system for chronic stroke, Stroke 39 (2008), 910-917.
[34] C. Stinear, P.A. Barber, J.P. Coxon, M.K. Fleming, B.D. Winston, Priming the motor system enhances
the effects of upper limb therapy in chronic stroke, Brain 131 (2008), 1381-1390.
[35] N. Sharma, V.M. Pomeroy, J.C. Baron, Motor imagery: a backdoor to the motor system after stroke?,
Stroke 37 (2006), 1941-52.
[36] T. Mulder, Motor imagery and action observation: cognitive tools for rehabilitation, Journal of Neural
Transmission 114 (2007), 1265-1278.
[37] C.M. Stinear, W.D. Byblow, M. Steyvers, O. Levin, S.P. Swinnen, Kinesthetic, but not visual, motor
imagery modulates corticomotor excitability , Experimental Brain Research 168 (2006), 157-164.
[38] C. Neuper, R. Scherer, M. Reiner, and G. Pfurtscheller, Imagery of motor actions: differential effects
of kinesthetic and visual-motor mode of imagery in single-trial EEG, Brain research. Cognitive brain
research 25 (2005), 668-77.
[39] P.L. Jackson, M.F. Lafleur, F. Malouin, C. Richards, J. Doyon, Potential role of mental practice using
motor imagery in neurologic rehabilitation, Archives of Physical Medicine and Rehabilitation 82
(2001), 1133-1141.
[40] S.M. Braun, A.J. Beurskens, P.J. Borm, T. Schack, D.T. Wade, The effects of mental practice in stroke
rehabilitation: a systematic review, Archives of Physical Medicine and Rehabilitation 87 (2006), 842-852.
[41] R. Tamir, R. Dickstein, M. Huberman, Integration of motion imagery and physical practice in group
treatment applied to subjects with Parkinson's disease, Neurorehabilitation and Neural Repair 21
(2007), 68-75.
[42] B.A. Wilson, Neuropsychological Rehabilitation, Annual Review of Clinical Psychology 4 (2008), 141-162.
[43] K.D. Cicerone, C. Dahlberg, J.F. Malec, D.M. Langenbahn, T. Felicetti et al, Evidence-based
cognitive rehabilitation: updated review of the literature from 1998 through 2002, Archives of Physical
Medicine and Rehabilitation 86 (2005), 1681-1692.
[44] A.R. O'Brien, N. Chiaravalloti, Y. Goverover, J. DeLuca, Evidence-based cognitive rehabilitation for
persons with multiple sclerosis: a review of the literature, Archives of Physical Medicine and
Rehabilitation 89 (2008), 761-769.
[45] R. Neshige, T. Endou, T. Miyamoto et al, Proposal of P300 biofeedback therapy in patients with mental
disturbances as cognitive rehabilitation, The Japanese Journal of Rehabilitation Medicine 32 (1995),
323-329.
[46] J.R. Wolpaw, Brain-computer interfaces as new brain output pathways, The Journal of Physiology 579
(2007), 613-619.
[47] J.H. Wright, A.S. Wright, AM. Albano, M.R. Basco, L.J. Goldsmith et al, Computer-assisted cognitive
therapy for depression: maintaining efficacy while reducing therapist time, The American Journal of
Psychiatry 162 (2005), 1158-1164.
[48] R.M.E.M. da Costa, L.A.V. de Carvalho, The acceptance of virtual reality devices for cognitive
rehabilitation: a report of positive results with schizophrenia, Computer Methods and Programs in
Biomedicine 73 (2004), 173-182.
[49] W.S. Harwin, J.L. Patton, V.R. Edgerton, Challenges and opportunities for robot-mediated
neurorehabilitation, Proceedings of the IEEE 94 (2006), 1717-1726.
[50] N. Neumann, N. Birbaumer, Predictors of successful self control during brain-computer
communication, Journal of Neurology, Neurosurgery & Psychiatry 74 (2003), 1117-1121.
[51] A. Kuebler, V.K. Mushahwar, L.R. Hochberg, and J.P. Donoghue, BCI Meeting 2005-Workshop on
clinical issues and applications, IEEE Transactions on Neural Systems and Rehabilitation Engineering
14 (2006), 131-134.
[52] N. Neumann, A. Kuebler, Training locked-in patients: a challenge for the use of brain-computer
interfaces, IEEE Transactions on Neural Systems and Rehabilitation Engineering 11 (2003), 169-172.
[53] M. Lee, M. Rittenhouse, and H.A. Abdullah, Design Issues for Therapeutic Robot Systems: Results
from a Survey of Physiotherapists, Journal of Intelligent and Robotic Systems 42 (2005), 239-252.
[54] E. Doherty, G. Cockton, C. Bloor, D. Benigno, Improving the Performance of the Cyberlink Mental
Interface with the Yes/No Program, Proceedings of the SIGCHI conference on Human factors in
computing systems 3 (2001), 69-76.
[55] N. Bevan, Ux, Usability and ISO standards, ACM Proceedings of CHI, Florence, Italy, 2008.
[56] J.N. Fuertes, A. Mislowack, J. Bennett et al, The physician-patient working alliance, Patient
Education and Counseling 66 (2007), 29-36.
[57] H. Sun, P. Zhang, Role of moderating factors in user technology acceptance, Int. J. Human-Computer
Studies 64 (2006), 53-78.
[58] M. Bertrand, S. Bouchard, Applying the Technology Acceptance Model to vr with people who are
favorable to its use, Journal of CyberTherapy & Rehabilitation 1 (2008), 200-210.
[59] E. Hudlicka, To feel or not to feel: the role of affect in human-computer interaction, Int. J. Human-Computer Studies 59 (2003), 1-32.
[60] C. Neuper, A. Schlögl, G. Pfurtscheller, Enhancement of Left-Right Sensorimotor EEG Differences
During Feedback-Regulated Motor Imagery, Journal of Clinical Neurophysiology 16 (1999), 373-382.
[61] M. Pham, T. Hinterberger, N. Neumann, A. Kuebler, N. Hofmayer et al, An Auditory Brain-Computer
Interface Based on the Self-Regulation of Slow Cortical Potentials, Neurorehabilitation and Neural
Repair 19 (2005), 206-218.
[62] G. Schalk, J.R. Wolpaw, D.J. McFarland, G. Pfurtscheller, EEG-based communication: presence of an
error potential, Clinical neurophysiology 111 (2000), 2138-2144.
[63] P.W. Ferrez, J del R. Millan, Error-Related EEG Potentials Generated During Simulated Brain
Computer Interaction, IEEE Transaction on Biomedical Engineering 55 (2008), 923-929.
[64] A.D. Legatt, Intraoperative Neurophysiologic Monitoring: Some Technical Considerations, The
American Journal of EEG Technology 35 (1995), 167-200.
[65] B.A. Taheri, R.T. Knight, R.L. Smith, A dry electrode for EEG recording, Electroencephalography
and Clinical Neurophysiology 90 (1994), 376-383.
[66] M. Matteucci, R. Carabalona, M. Casella, E. Di Fabrizio, F. Gramatica, M. Di Rienzo, E. Snidero, L.
Gavioli, and M. Sancrotti, Micropatterned dry electrodes for brain-computer interface, Microelectronic
Engineering 84 (2007), 1737-1740.
[67] P.M. Wang, M.G. Cornwell, and M.R. Prausnitz, Effects of microneedle tip geometry on injection and
extraction in the skin, Proceedings of the second joint EMBS/BMES Conference, Houston, TX, USA,
2002, 23-26.
[68] M.R. Nuwer, G. Comi, R. Emerson et al, IFCN standards for digital recording of clinical EEG.
International Federation of Clinical Neurophysiology, Electroencephalography and Clinical
Neurophysiology 106 (1998), 259-61.
[69] F. Yamada, Frontal midline theta rhythm and eyeblinking activity during a VDT task and a video game:
useful tools for psychophysiology in ergonomics, Ergonomics 41 (1998), 678-688.
[70] W. Klimesch, EEG alpha and theta oscillations reflect cognitive and memory performance: a review
and analysis, Brain Research Review 29 (1999), 169-195.
[71] A.T. Pope, E.H. Bogart, and D.S. Bartolome, Biocybernetic system evaluates indices of operator
engagement in automated task, Biological Psychology 40 (1995), 187-195.
[72] J. Allanson, and S.H. Fairclough, A research agenda for physiological computing, Interacting with
Computers 16 (2004), 857-878.
[73] R.J. Croft, and R.J. Barry, Removal of ocular artifact from the EEG: a review, Clinical
Neurophysiology 30 (2000), 5-19.
[74] J.A. Stern, D. Boyer, and D. Schroeder, Blink rate: a possible measure of fatigue, Hum. Factors 36
(1994), 285-297.
[75] J. Malmivuo, and R. Plonsey, 12-Lead ECG System, in Bioelectromagnetism - Principles and
Applications of Bioelectric and Biomagnetic Fields, New York, Oxford University Press, 1995, pp. 277-288.
[76] Task Force of the European Society of Cardiology and the North American Society of Pacing and
Electrophysiology, Heart rate variability. Standards of measurement, physiological interpretation, and
clinical use, European Heart Journal 17 (1996), 354-381.
[77] G. Moody, R. Mark, M. Bump, J. Weinstein, A. Berman, J. Mietus, and A. Goldberger, Clinical
Validation of the ECG-Derived Respiration (EDR) Technique, Computers in Cardiology, 13 ed
Washington, D.C. IEEE Computer Society Press, 1986, pp. 507-510.
[78] S.M. Rao, G.J. Leo, V.M. Haughton, P. St Aubin-Faubert, and L. Bernardin, Correlation of magnetic
resonance imaging with neuropsychological testing in multiple sclerosis, Neurology 39 (1989), 161-166.
[79] P. Hjemdahl, U. Freyschuss, A. Juhlin-Dannfelt, and B. Linde, Differentiated sympathetic activation
during mental stress evoked by the Stroop test, Acta Physiologica Scandinavica 527 (1984), 25-29.
[80] K. Oishi, and T. Maeshima, Autonomic nervous system activities during motor imagery in elite
athletes, Journal of Clinical Neurophysiology 21 (2004), 170-179.
[81] S. Akselrod, D. Gordon, F.A. Ubel, D.C. Shannon, A.C. Berger, and R.J. Cohen, Power spectrum
analysis of heart rate fluctuation: a quantitative probe of beat-to-beat cardiovascular control, Science
213 (1981), 220-222.
Alessandro ANTONIETTIa,1
Department of Psychology, Catholic University of the Sacred Heart, Milano, Italy
Introduction
In what sense can music be considered a technology? In a superficial sense, music is
technology because, for it to be produced and enjoyed, it needs tools. Except for
singing, every musical performance is mediated by some artefact, which can be very
simple and primitive (cut reeds, tanned and stretched animal skins, roughly moulded
metal sheets) or very sophisticated, as in the case of electronic equipment that
generates new kinds of sounds and new ways of interacting with sounds [1, 2].
Material devices are also required for music reproduction on all occasions except a
live concert. In this regard, the range of traditional technologies (radio, long-playing
records, audiotapes) has recently been expanded (or replaced) by new technological
opportunities [3, 4, 5]. But music can also be understood as a technology in a less
superficial sense. Music, as a symbolic system, is a cognitive technology, an extension
or prosthesis of intelligence, a form of embodiment of thought whereby mental life
expresses and builds itself. In this perspective music is a tool of the mind and, as such,
it allows for interesting opportunities for rehabilitation.
Attempts at using music for therapeutic goals date back a long time [6]. A
statement Ebbinghaus made about psychology may also be true for this tradition,
which is often labelled music therapy [7, 8]: it has a long past but a recent history.
Indeed, a scientific approach to understanding the benefits people can get from music
has developed only in the last few decades. Actually the variety of music-based
methods employed with therapeutic purposes is quite wide, as is the range of different
situations in which such methods are offered. In the neurorehabilitation field, the
spectrum of potential patients benefitting from music therapy interventions is broad,
ranging from motor deficits to speech disorders, from cognitive deterioration to
1 Corresponding Author: Department of Psychology, Catholic University of the Sacred Heart, Largo
Gemelli 1, 20123 Milano, Italy; E-mail: [email protected].
situations. In short, music has a transformative power; it does things, changes things
and allows things to happen [21]. Then, it is a matter of understanding the reasons why
the specific use of music can result in the achievement of goals in the field of
neurorehabilitation.
with large body movements. Phillips-Silver and Trainor [25] reported that at 7 months
of age infants show a preference for a rhythm associated with a synchronised
rocking of the cradle. At 18 months of age children spontaneously perform rhythmical
movements synchronised with sounds, while they are listening to music [26]. At a later
age, the connection between music and movement does not require involvement of
one's own body. For example, Boone and Cunningham [27] asked 4- and 5-year-olds to
make a teddy bear dance according to the emotional features of short musical segments
while they were hearing them. Afterwards, adults were presented with the videotaped
performances played by the children, without the accompanying musical track, and were
asked to identify the emotion that the body movement intended to express. Results
showed that children succeeded in moving the teddy bear so as to express the
emotional meaning of the music. The detailed analysis of how children manipulated the
teddy bear showed that upward movements, rotations, shifts, as well as the tempo and
the force of the movements, differed significantly according to the expressive meaning
of the corresponding music.
Secondly, music carries an iconic, i.e. a visuospatial, component. Music, at least in
some circumstances, seems to translate spontaneously into images, so much so that in
German the term Tonmalerei (painting with sounds) has been coined in order to
indicate the possibility of depicting visual pictures through musical notes. To
corroborate the fact that, besides a motor element, also an iconic dimension belongs to
music, we can recall that in non-Western musical cultures the performer's activity is
controlled by spatial representations rather than by sound representations. But in the
Western world, too, music is connected with visual thought. For example, it has been
shown that musicians, as compared to non-musicians, have greater visuospatial
memory capacities, and that their hippocampus (the cerebral structure connected with
this kind of memory) is more developed [28]. Practicing music develops visual
memory abilities, probably because of the inherent figural nature of sound patterns.
Even people without any musical training think about music in spatial terms. In an
experiment Halpern (mentioned in [29] p. 202) presented one word, by selecting it
from the lyrics of a song, and subsequently another word from the same song. The task
of the subjects was to compare the pitch of the notes corresponding to the two words.
The reaction time recorded during this task increased as a function of the distance (in
terms of bars) between the two words in the song. This suggests that the listeners
mentally scanned an image of the melody. Hence, music seems to involve an activity
similar to the scanning of visual images.
Thirdly, music carries a verbal component. Between music and verbal language
there are overt analogies:
- there is little variation in the structural aspects of both music and verbal language among cultures;
- the skills required in music and verbal language appear early in ontogenetic development;
- music and language follow similar principles of perceptual organisation;
- music and language can be described in terms of organised time units;
- both consist of complex productions generated by few elements;
- these elements are combined according to rules;
- the rules determine hierarchical structures;
- the rules allow the generation of a potentially infinite number of combinations of elements.
These similarities concern mostly the syntactical aspects of music and have
enabled authors such as Lerdahl and Jackendoff [30] to identify some general cognitive
principles that are the foundations for musical listening. As happens for the syntactical
structure of verbal language (understood in a Chomskian sense), music implies an
unconscious construction of abstract structures that meet the dictates of a generative
grammar with a set of recursive analytical rules. However, the verbal dimension of
music appears not only at the level of syntactical structures, but also in terms of
narrative structures. In the approach of Heinrich Schenker [31] (an author who
anticipated the ideas advocated by Lerdahl and Jackendoff), the diatonic triad is the
Ursatz, i.e. the basic structure, in which (i) the tonic represents the initial balance, (ii)
the dominant introduces tension and (iii) the return to the tonic re-establishes the
balance. One can find a correspondence between this harmonic pattern and the
grammar of stories, some of which imply a transition from (i) the initial situation to (ii)
the appearance of troubles/hindrances/problems to end (iii) with the resolution of the
conflict/struggle/quest/tension.
Finally, the verbal dimension of music appears at the phonetic-prosodic level,
when one attempts to render the inflections of the spoken language through musical
sounds, and at the pragmatic level, when the dynamic of roles, entrances and
alternations of the interlocutors in the development of the discourse is at play.
Music activates in the listener and the performer some mental processes in all three
registers (motor, iconic and verbal) and in this synchronised activation of several
registers we can find the reasons for its therapeutic-rehabilitative efficacy. We now
need to develop this point further.
an aspect of the piece will be better reflected or expressed in the motor register, another
in the verbal one). What is processed within a given register is correlated to and
presents some analogies with what happens in another register. Let us attempt to use a
concrete example to describe the isomorphism between different registers. If we
imagine a stretch of gravel path, on its surface some stones will protrude more than
others and some depressions will form. Let us imagine pressing a piece of cardboard
into the ground. Some features of the gravel path (its protrusions and depressions, etc.)
will be found on the piece of cardboard. Where on the ground there was a sharp stone,
on the card there will be a narrow and high protuberance. In some way, the
characteristics of the ground have been retranscribed in the shape the card has taken.
There are some correspondences between the two surfaces, even though each one is
made of different things (in this example, stones for the former and cellulose mixture
for the latter). If we imagine we pour some coloured paint on the modelled card after it
has been pressed into the ground, the paint will run down along the protrusions of the
card and thicken in its depressions, colouring protrusions and hollows with different
intensity. If we flatten the card now, we can still detect the original roughness of the
ground that has been impressed on it in terms of protrusions and hollows, because the
different intensity of the paint has transcribed the three-dimensional undulations of
the paper. With a different medium (the paint pigment) the characteristics of the ground
have been maintained, since we still find the same set of relationships made of hollows
and protrusions that are on the ground. We have three different planes and three
different materials: stones, paper and coloured pigment. Although in a different
manner, all of them represent the same system of relations, since the same print has
been impressed in these different materials. A transcription is therefore a projection,
on a certain cognitive register, of characteristics emphasized in a different register. The
transcriptions, i.e. the correspondences that are formed between the various registers
(motor, iconic and verbal), contribute to transform the mental processing of music into
a consistent complex of acts which generates an overall strong impression.
The ability to grasp the correspondences between different registers appears quite
early. According to Stern [32], infants show an ability to connect the content of
heterogeneous senses (sight, hearing, touch). For example, infants capture the relation
between the rhythm of a repeated noise and a similar rhythm of a caress and they
associate these rhythms with the switching on and off of a light occurring at the same
pace. At 3 weeks after birth infants grasp the relationship between a time pattern
reaching their hearing and a similar visible time pattern. When the mother tries to
quieten her baby by singing or pronouncing some words with rhythmical and prosodic
inflections and she accompanies her voice with a movement of her hands caressing the
child's body in a manner synchronised with the pace of her voice, the baby perceives
the correspondence between the two experiences (auditory and tactile). Musical
cognition is a multimodal form of knowledge which, through the simultaneous
triggering of several registers, produces a global experience. It is now time to consider
each one of these registers and their consequent potentials in terms of rehabilitation.
external rhythms which are heard, we can notice that the musical rhythm induces
variations in the cardio-vascular and respiratory rates that, in turn, affect other
physiological changes. It has been confirmed that lullabies decrease the heart rate and
the respiratory rate, which synchronise with music [33]. It is not only rhythm that has
these effects; the emotional quality of music also changes the cardio-respiratory rate
[34].
On a different level of the motor register, it has been proven that people grasp the
expressive tension-release dynamisms in music [35]. When subjects were asked to push
a device depending on the tension perceived in the musical piece they were listening to,
the authors noticed that moments of tension and relaxation alternated. Furthermore,
high tension was detected in correspondence with sections of fortissimo, when the
melody was ascending, the density of notes increased, places of dissonance occurred,
rhythmical and harmonic complexity increased, musical segments were repeated, as
well as during the pauses and in the parts in which some musical ideas were developed.
Similar responses can be found at the level of muscular reactions determining the
expression of the face. Usually people respond with subliminal changes of their facial
expression while they are listening to music [34, 36]. These responses can be more
specifically related to the type of music [33]: music with negative emotional meaning
tends to produce a greater corrugating muscular activity, while music with positive
emotional meaning brings about zygomatic activity. These associations between music
and motor responses appear early: 3-4 year olds know how to match musical pieces and
facial expressions congruently with the emotional character of the music [34].
On a more sophisticated level, it has been shown that music generates in the
listener motor responses that allow the person to mirror the gestures performed by the
interpreter [35]. These claims are supported by experiments showing that people are
able to associate music with the corresponding gestures and actions. For instance, only by
watching the videotape of a musical performance without any sound track, people can
successfully rate the expressive intent inherent in the piece [36, 37]. Such a skill
also emerges when observing people making sound-producing gestures in the air without
manipulating any concrete instrument [38]. Similar findings were reported by
considering ballet performances: hearing only the music or seeing only the body
movements produced similar judgments about the beginnings and the ends of the
internal sections of the performance, as well as about the tension and the emotions
conveyed by the stimuli [39]. Visual experience of a musical performance provides
listeners not only with information about the context where it takes place and the
alleged personal features of the musician, but also a variety of cues which can
emphasize the expressive intention of the performer [40, 41]. The performer's
gestures also help decode some structural aspects of music. In an experiment [42] a
singer performed intervals of various ranges and was videotaped. Subsequently, two
groups of judges were presented with either the sound track alone or the soundless filmed
sequence alone. In both conditions the judges adequately identified the range of the different
intervals. In the video condition visual cues, such as the facial expression and gestures
of the singer, were enough to assess the size of the performed melodic interval.
The possibilities of using the link between music and body reactions for
rehabilitation purposes are broad. For example, as far as physiological responses to
sounds are concerned, Antic et al. [43] investigated the effect of music in a sample of
patients with acute ischemic stroke. In almost 80% of participants an increase in the
mean blood flow velocity in the middle cerebral artery was recorded as a consequence
of listening to music for 30 minutes. Elderly people affected by dementia benefitted
from music treatments by showing lower systolic blood pressure and better
maintenance of their physical and mental state than controls did [44]. With regard to
muscle responses, Carrick, Oggero and Pagnacco [45] observed, through computerized
dynamic posturography, how people reacted to music by measuring body stability; they
found positive changes, due to music, in stability scores in individuals with balance
abnormalities, suggesting that music can be a way to prevent falls and/or vertigo and
to rehabilitate persons showing postural disorders. Music has been applied in gait
training addressed to people with brain injuries: results showed improvements in gait
efficiency, supported also by electromyographic measurements [10]. In an even more
evident way, if music is utilised to provide the patient with an auditory pattern as a
basis for organising movement, the synchronisation between sounds and gestures
resulting from it can be applied to teach brain-damaged patients to perform the
appropriate movements required to dress autonomously [46].
Technology enables us to expand the natural link between music and movement or
to recover it when physical disabilities have impaired it. For example, Tam et al. [47]
devised a computer system, called Movement-To-Music, which allows children with
impaired movements to play and create music, resulting in broader horizons and
increased quality of life. Patients with spinal cord injuries were trained to create and
play music by means of an electronic music program: this tool led them to exercise
upper extremities which were connected to a synthesiser through a computer [48].
Motor skills impaired by stroke can be rehabilitated with equipment consisting of
electronic drum pads (to train gross motor skills) and a MIDI piano (to train fine
motor skills), designed to activate auditory-motor representations in the patient's mind
[49]. Stroke patients were induced to use such tools to reproduce a musical pattern with
the impaired arm. Better outcomes were recorded in these patients as compared to the
effects produced by traditional rehabilitation. Equipment able to produce MIDI
sounds can be activated and controlled by muscular contractions, as well as by
biosignals such as electrocardiogram or electroencephalogram: in this way even people
with severe motor impairments can produce music and receive feedback [10]. Other
similar devices are Sound Beam and Wave Rider [10]. In all cases in which music
contributes to restore motor functions [50], music can be conceived as an anticipatory
and continuous temporal template which facilitates the execution of the movement
which has to be rehabilitated, thanks to auditory-motor synchronisation.
The Spatial-Musical Association of Response Codes (SMARC) effect, recently documented [51, 52], is regarded as evidence for it. The
SMARC effect is a form of stimulus-response compatibility effect: the person is facing
a screen where some signals appear; they can appear unpredictably either on the left or
the right sides of the screen. The task is to push a button as soon as one perceives the
appearance of the signal. If the positioning of the signal and the button to be pushed are
compatible (for example, the signal appears on the left side of the screen and the button
is at the left side of the person, so that she uses her left hand to push it), the response
will be quicker than it would be in a situation of incompatibility (the signal appears on
the left and the button for the answer is on the right). If the stimuli are musical notes,
and the subjects are asked to determine whether, compared to a standard note, the
following one is higher or lower, the SMARC effect occurs. If the button
corresponding to the answer "lower" is on the left and the button corresponding to the
answer "higher" is on the right, the response is quicker as compared to what happens if
the buttons are switched. This happens because in the first condition there is
compatibility between the stimulus characteristic (pitch) and the position of the button.
The musical notes are therefore mentally represented in a space vectorially oriented
from left to right, so that low pitches tend to be psychologically located on the left
and high pitches on the right.
The iconic power of music is grasped very early. According to research by Spelke
[53, 54], 3 and 4 month olds are capable of detecting when sound rhythm and visual
rhythm are coordinated and when they are uncoordinated. In these experiments infants
were shown a visual scene in which a puppet representing an animal was making jumps.
A sound was produced either when the jumping puppet was landing or a little later. The
children preferred to watch the visual scene in which jumps and sounds were
coordinated rather than the uncoordinated scene (their preference was assessed
according to the frequency and duration of ocular fixations). A preference was shown
also when the time interval between the puppet landing and the sound, although out of
phase (delayed sound), was constant. Other studies showed that 6-8 month olds are able
to grasp numerical correspondences between sounds and images. For example, given
the choice to look at a scene in which two objects appeared or a scene with three
objects, if the infants heard two sounds, they rather watched the two-object scene;
while they turned their gaze to the three-object scene if there were three sounds. The
skills highlighted by Wagner et al. [55] in 6 to 14 month olds are even more surprising.
The children seem to be able to associate characteristics of sounds (such as pitch) and
characteristics of sound sequences (ascending or descending sequences, sequences of
continuous or intermittent sounds) with analogous characteristics of lines. The children
prefer to watch a low line, a small circle and a dark circle in concomitance with low
pitch, and a high line, a big circle and a light circle in correspondence with a high pitch;
just as they prefer to turn their gaze to an ascending arrow if they are listening to an
ascending melodic line and a descending arrow if the melody is descending, or a
continuous line if the sound sequence is continuous and a broken line if the sound
sequence is intermittent.
Older children, as documented by Walker [56], know how to make even more
complex associations, such as matching weak and strong, low and high, long and short
notes respectively with long and short lines, light and dark lines, low and high lines,
empty and full symbols. Fairly early on children understand that certain characteristics
of sound stimuli can be represented graphically with a variety of devices [57].
What role could the visuospatial components play in listening to music? In some
psychological models [58, 59] these components seem to fulfil a function only in the
preliminary and/or conclusive stages of the process of listening to music. In the former
case, for example, it is emphasized how some of the organisation principles of the
visual field (law of proximity, similarity, continuity, etc.) are true also for the
organisation of sound events: picture-like principles would intervene in the segregation
of the musical flow and in the formation of basic sound clusters. In the latter case,
general patterns of emotional response triggered by listening to music would maintain
characteristics of the iconic type (sense of raising/sinking, opening/closing, etc.).
However, it seems that the figural aspects of musical language can be assigned a role
not only in these peripheral moments of the listening process, respectively incoming
(perceptual organisation) and outgoing (emotional response), but also in the central
moment in which the meaning of the musical piece is formed.
The visual resonances and spatial analogies activated by music are often used
within rehabilitative interventions to induce the patient into a state that favours the
recovery of his remaining cognitive and emotional resources. To this end, a method
called Guided Imagery and Music (GIM) has been increasingly applied. It intentionally
elicits visual imagery in the mind of the person starting from sound stimuli. Since it
had been proven that music therapy can be successfully applied in cardiac rehabilitation
[60], attempts were made to strengthen this technique by associating visual imagery
with musical stimulation. Thus Music-Assisted Relaxation and Imagery was devised,
a variant of GIM which has been proven to be more effective in cardiac rehabilitation
than traditional music therapy [61].
Finally, the iconic register activated by music can be used in rehabilitation with
other goals. For example, music can be used to facilitate recalling visual scenes from
the past. In fact, it has been shown that music can enhance the production of
autobiographical memories in Alzheimer's patients [62].
contained in the musical structure. The musical elements define the implicit event, i.e.
the structure that has a decisive and primary role in determining the range of gestures
suitable for that musical piece. The performer, like a storyteller, has to be loyal to the
structure of the story and, at the same time, has the freedom to modulate the emotions
of the characters. In other words, the performer has the task to create the character so as
to add deep meaning to the literal surface of the musical piece. According to Schaffer,
the details of a musical expression are more fully understood if regarded as
corresponding to the gestures of an implicit main character. In this respect, we can
recall the observation made by Sloboda [26] that people recognize a melody better if,
as they are listening to it, they label it with concrete titles that hint at its dramatisation.
This is a potential way of using music that Noy [65] designated the "narrative path", which
leads the listener to identify with the experience of the composer, feeling his emotions
as if reliving his narrative.
Following Schaffer's suggestion, how can we identify the narrative dimension in
the structure of music itself? Like in a story, the plot unfolds through promises,
creation of preconditions, anticipations, escalation, dramatic turn of events, sudden
resolutions, etc., and similar variations of the arousal levels are produced by the
unfolding of the musical discourse. The emotional "course" of music would be parallel
to that of a story that could overlap it.
As is easy to imagine, the narrative characteristic of music can be exploited
particularly in the therapeutic context to activate dynamisms in terms of affect and
emotion processing. It seems that the understanding of the emotional meaning of music
has its own distinct counterpart in the brain. In this regard, Peretz [66] reported an
interesting dissociation in a patient with damage to the auditory cortex. The patient was
still able to enjoy music emotionally, but not to make simple auditory discriminations.
Notably, the patient knew how to distinguish sad and cheerful melodies, he was
sensitive to speed manipulations of the music and to the distinction between major and
minor modes in order to differentiate sad and cheerful music, but he was not able to
make any distinction between familiar and unfamiliar melodies (for example, he could
not recognize that a piece he listened to was the Adagio in G minor by Albinoni; he
said that this music made him feel sad like the Adagio by Albinoni) and did not
realise the errors in the pitch of the notes purposely introduced in the musical pieces he
was asked to listen to, just as he could not discriminate between consonant and
dissonant music. In this patient the analysis of music was intact as far as the emotional
aspects were concerned, but not with regard to the syntactic ones.
Do such data lead us to believe that an emotional evaluation of music does not
require cortical mediation? According to Peretz [66], this is not the conclusion to be
drawn, because in the above-described case a specific cortical structure could be
damaged. It is well-known that at a cortical level it is possible to identify
neurobiological structures which can be related to the discrimination of the emotional
meaning of music. Left frontal cortical activity is higher when the subject listens to
cheerful music (and when there are variations of mode and tempo in the direction of joy),
while right frontal activity is higher with sad or frightening music (and with its respective
variations). It is also proven that the left ear (which projects to the right hemisphere) is
superior when one judges music as unpleasant (i.e. atonal), while the right ear is
superior in case of pleasant (tonal) music, therefore suggesting a specialisation of the
left hemisphere in perceiving positive emotions and the right one for negative emotions.
This inter-hemisphere asymmetry does not appear when the persons need to judge the
same music not from an emotional standpoint (i.e. as pleasant or unpleasant), but in
6. Conclusions
As we have tried to argue, music is a tool that triggers representations and processes
in different mental registers (motor, iconic and verbal): sounds carry affordances,
forces and vectors which drive the performance of specific actions, images and ways of
speaking, and what occurs in the various registers is reciprocally synchronised. This
justifies both the power of music as a spontaneous elicitor of emotions and as a
natural tool of communication, and the deliberate utilisation of music for rehabilitative
purposes.
Music is constitutively motor, iconic and linguistic, since gestures, images and
words are not elements extrinsic to it. Motor, visuospatial and verbal elements are
already present in the innermost nature of music. The registers that music activates
(movements, figures, words) do not "attach" to music from the outside; they are
imbricated, deeply embedded in music. It is because of this very imbrication
that we can argue that music acts as a vicarious function in the rehabilitation context.
When the processes of motor planning are impaired, music can provide the sequential
and rhythmical patterns required to perform actions that need to be relearned, and this
is possible because these patterns are embedded in music itself. When memory
processes fail in recalling the past, music helps the memory emerge because it suggests
colours, shapes, spatial movements that can be found in visual scenes. If it is the
organisation of verbal language that is impaired, music can assist it, because it contains
discursive patterns. In other words, music, thanks to its multimodal nature, offers
scaffolding on which one can learn to perform movements, carry out cognitive
operations or articulate verbal expressions that need to be rehabilitated.
Recently, the theoretical concepts justifying interventions in the field of music
therapy have been clarified, and reliable evidence of achievable results has started to
be collected. New technologies can expand the range of possible music-based
interventions, but we still have a long way to go to understand better the potential of
music in rehabilitation.
Acknowledgements
Isabella Negri is gratefully acknowledged for her linguistic revision of this chapter.
References
[1] P. Cook, Music, cognition, and computerized sounds, MIT Press, Cambridge (MA), 2001.
[2] C. Roads, Microsound, MIT Press, Cambridge (MA), 2004.
[3] P. Burkart, and T. McCourt, Digital music wars: Ownership and control of the celestial jukebox, Rowman & Littlefield Publishers, Lanham (MD), 2006.
[4] K. Collins, From Pac-Man to pop music: Interactive audio in games and new media, Ashgate, Aldershot (UK)-Burlington (VT), 2008.
[5] A. Williams, Portable music and its functions, Peter Lang, New York, 2007.
[6] P. Horden, Music as medicine: The history of music therapy since antiquity, Ashgate, Aldershot (UK), 2000.
[7] J. Schmidt Peters, Music therapy: An introduction, Charles C. Thomas Publishers, Springfield (IL), 2000.
[8] T. Wigram, B. Saperston, and R. West, The art and science of music therapy: A handbook, Routledge, New York, 1995.
[9] D. Aldridge, Music therapy and neurological rehabilitation, Jessica Kingsley Publishers, Philadelphia (PA), 2005.
[10] S. Paul, and D. Ramsey, Music therapy in physical medicine and rehabilitation, Australian Occupational Therapy Journal 47 (2000), 111-118.
[11] J.A. Sorrell, and J.M. Sorrell, Music as a healing art for older adults, Journal of Psychosocial Nursing and Mental Health Services 46 (2008), 21-24.
[12] C. Pacchetti, F. Mancini, R. Aglieri, C. Fundarò, E. Martignoni, and G. Nappi, Active music therapy in Parkinson's disease: An integrative method for motor and emotional rehabilitation, Psychosomatic Medicine 62 (2000), 386-393.
[13] N. Mammarella, B. Fairfield, and C. Cornoldi, Does music enhance cognitive performance in healthy older adults? The Vivaldi effect, Aging Clinical and Experimental Research 19 (2007), 394-399.
[14] N. Bannan, and C. Montgomery-Smith, "Singing for the brain": Reflections on the human capacity for music arising from a pilot study of group singing with Alzheimer's patients, Journal of the Royal Society for the Promotion of Health 128 (2008), 73-78.
[15] R. Knox, and J. Jutai, Music-based rehabilitation of attention following brain injury, Canadian Journal of Rehabilitation 9 (1996), 169-181.
[16] T. Särkämö, M. Tervaniemi, S. Laitinen, A. Forsblom, S. Soinila, M. Mikkonen, T. Autti, H.M. Silvennoinen, J. Erkkilä, M. Laine, I. Peretz, and M. Hietanen, Music listening enhances cognitive recovery and mood after middle cerebral artery stroke, Brain 131 (2008), 866-876.
[17] W.L. Magee, and C. Bowen, Using music in leisure to enhance social relationships with patients with complex disabilities, Neurorehabilitation 23 (2008), 305-311.
[18] S. Nayak, B.L. Wheeler, S.C. Shiflett, and S. Agostinelli, Effect of music therapy on mood and social interaction among individuals with acute traumatic brain injury and stroke, Rehabilitation Psychology 45 (2000), 274-283.
[19] B.L. Wheeler, S. Shiflett, and S. Nayak, Effects of number of sessions and group or individual music therapy on the mood and behaviour of people who have had strokes or traumatic brain injuries, Nordic Journal of Music Therapy 12 (2003), 139-151.
[20] W.L. Magee, Music as a diagnostic tool in low awareness states: Considering limbic responses, Brain Injury 21 (2007), 593-599.
[21] T. DeNora, Music in everyday life, Cambridge University Press, Cambridge (UK), 2000.
[22] J.S. Bruner et al., Studies in cognitive growth, Wiley, New York, 1966.
[23] J. Blacking, How musical is man?, University of Washington Press, Seattle (WA)-London, 1973.
[24] H. Moog, The development of musical experience in children of preschool age, Psychology of Music 4 (1976), 38-45.
[25] J. Phillips-Silver, and L.J. Trainor, Feeling the beat in music: Movement influences rhythm perception in infants, Science 308 (2005), 1430.
[26] J.A. Sloboda, The musical mind, Clarendon Press, Oxford, 1985.
[27] R.T. Boone, and J.G. Cunningham, Children's expression of emotional meaning in music through expressive body movements, Journal of Nonverbal Behavior 25 (2001), 21-41.
[28] V. Sluming, D. Page, J. Downes, C. Denby, A. Mayes, and N. Roberts, Structural brain correlates of visuospatial memory in musicians, Conference "The neurosciences and music II. From perception to performance" (2005), 8-10.
[29] C.L. Krumhansl, Internal representations for music perception and performance, in M.R. Jones, and S. Holleran, Cognitive bases of musical communication, American Psychological Association, Washington (DC), 1992, pp. 197-211.
[30] F. Lerdahl, and R. Jackendoff, A generative theory of tonal music, MIT Press, Cambridge (MA), 1983.
[31] H. Schenker, Harmony, University of Chicago Press, Chicago (IL), 1954.
[32] D.N. Stern, The interpersonal world of the infant, Basic Books, New York, 2000.
[33] K.R. Scherer, and M.R. Zentner, Emotional effects of music: Production rules, in P.N. Juslin, and J.A. Sloboda, Music and emotion, Oxford University Press, New York, 2001, pp. 361-392.
[34] J.A. Sloboda, and P.N. Juslin, Psychological perspectives on music and emotion, in P.N. Juslin, and J.A. Sloboda, Music and emotion, Oxford University Press, New York, 2001, pp. 71-104.
[35] A. Gabrielsson, and S. Lindström, The influence of musical structure on emotional expression, in P.N.
Juslin, and J.A. Sloboda, Music and emotion, Oxford University Press, New York, 2001, pp. 223-248.
[36] M. Leman, Embodied music cognition and mediation technology, MIT Press, Cambridge (MA), 2008.
[37] J.W. Davidson, Visual perception and performance manner in the movements of solo musicians,
Psychology of Music 21 (1993), 103-113.
[38] I. Molnar-Szakacs, and K. Overy, Music and mirror neurons: from motion to 'e'motion, Scan 1 (2006),
235-241.
[39] J.W. Davidson, What does the visual information contained in music performances offer the observer?
Some preliminary thoughts, in R. Steinberg, Music and the mind machine: Psychophysiology and
psychopathology of the sense of music, Springer, Heidelberg, 1995, pp. 105-114.
[40] R.I. Godøy, E. Haga, and A.R. Jensenius, Playing "air instruments": mimicry of sound-producing
gestures by novices and experts, in S. Gibet, N. Courty, and J.-F. Kamps (Eds.), Gesture in human-computer interaction and simulation, Springer, Berlin, 2006, pp. 256-267.
[41] C.L. Krumhansl, and D.L. Schenck, Can dance reflect the structural and expressive qualities of music?
A perceptual experiment on Balanchine's choreography of Mozart's Divertimento no. 15, Musicae
Scientiae 1 (1997), 63-85.
[42] K. Ohgushi, and M. Hattori, Emotional communication in performance of vocal music. Interaction
between auditory and visual information, in B. Pennycook, and E. Costa-Giomi, Proceedings of the
Fourth International Conference on Music Perception and Cognition, Montreal, 1996, pp. 269-274.
[43] W.F. Thompson, and F.A. Russo, Visual influences on the perception of emotion in music, in S.
Lipscomb, R. Ashley, R. Gjerdingen, and P. Webster, Proceedings of the Eighth International
Conference for Music Perception and Cognition, Northwestern University, 2004, pp. 198-199.
[44] W.F. Thompson, P. Graham, and F.A. Russo, Seeing music performance: Visual influences on
perception and experience, Semiotica 156 (2005), 203-227.
[45] S. Antić, I. Galinović, A. Lovrenčić-Huzjan, V. Vuković, M.J. Jurašić, and V. Demarin, Music as an
auditory stimulus in stroke patients, Collegium Antropologicum 32 (2008), 19-23.
[46] T. Takahashi, and H. Matsushita, Long-term effects of music therapy on elderly with moderate/severe
dementia, Journal of Music Therapy 43 (2006), 317-333.
[47] F.R. Carrick, E. Oggero, and G. Pagnacco, Posturographic changes associated with music listening,
Journal of Alternative and Complementary Medicine 13 (2007), 519-526.
[48] A.P. Gervin, Music therapy compensatory technique utilizing song lyrics during dressing to promote
independence in the patient with a brain injury, Music Therapy Perspectives 9 (1991), 87-90.
[49] C. Tam, H. Schwellnus, C. Eaton, Y. Hamdani, A. Lamont, and T. Chau, Movement-to-music computer
technology: A developmental play experience for children with severe physical disabilities,
Occupational Therapy International 14 (2007), 99-112.
[50] B. Lee, and T. Nantais, Use of electronic music as an occupational therapy modality in spinal cord
injury rehabilitation: An occupational performance model, American Journal of Occupational Therapy
50 (1996), 362-369.
[51] S. Schneider, P.W. Schönle, E. Altenmüller, and T.F. Münte, Using musical instruments to improve
motor skill recovery following a stroke, Journal of Neurology 254 (2007), 1339-1346.
[52] M.H. Thaut, Rhythmic intervention techniques in music therapy with gross motor dysfunctions, The
Arts in Psychotherapy 15 (1988), 127-137.
[53] P. Lidji, R. Kolinsky, A. Lochy, and J. Morais, Spatial associations for musical stimuli: A piano in the
head, Journal of Experimental Psychology: Human Perception and Performance 33 (2007), 1189-1207.
[54] E. Rusconi, B. Kwan, B. Giordano, C. Umiltà, and B. Butterworth, The mental space of pitch height, in
G. Avanzini, L. Lopez, S. Koelsch, and M. Majno, The neurosciences and music II. From perception to
performance, Annals of the New York Academy of Sciences 1060 (2005), 195-197.
[55] E.S. Spelke, Infants' intermodal perception of events, Cognitive Psychology 8 (1976), 553-560.
[56] E.S. Spelke, Perceiving bimodally specified events in infancy, Developmental Psychology 15 (1979),
626-636.
[57] S. Wagner, E. Winner, D. Cicchetti, and H. Gardner, Metaphorical mapping in human infants, Child
Development 52 (1981), 728-731.
[58] R. Walker, The effects of culture, environment, age, and musical training on choices of visual
metaphors for sound, Perception and Psychophysics 42 (1987), 491-502.
[59] J. Bamberger, The mind behind the musical ear: How children develop musical intelligence,
Cambridge University Press, London, 1991.
[60] M. Imberty, Les écritures du temps, Bordas, Paris, 1981.
[61] M.L. Serafine, Music as cognition. The development of thought in sound, Columbia University Press,
New York, 1988.
[62] L.K. Metzger, Assessment of use of music by patients participating in cardiac rehabilitation, Journal of
Music Therapy 41 (2004), 55-69.
[63] S.E. Mandel, S.B. Hanser, M. Secic, and B.A. Davis, Effects of music therapy on health-related
outcomes in cardiac rehabilitation: A randomized controlled trial, Journal of Music Therapy 44 (2007),
176-197.
[64] M. Irish, C.J. Cunningham, J.B. Walsh, D. Coakley, B.A. Lawlor, I.H. Robertson, and R.F. Coen,
Investigating the enhancing effect of music on autobiographical memory in mild Alzheimer's disease,
Dementia and Geriatric Cognitive Disorders 22 (2006), 108-120.
[65] M. Satoh, and S. Kuzuhara, Training in mental singing while walking improves gait disturbance in
Parkinson's disease patients, European Neurology 60 (2008), 237-243.
[66] L.H. Schaffer, How to interpret music, in M.R. Jones, and S. Holleran, Cognitive bases of musical
communication, American Psychological Association, Washington (DC), 1992, pp. 263-278.
[67] P. Noy, How music conveys emotion, in S. Feder, R.L. Karmel, and G.H. Pollock (Eds.),
Psychoanalytic explorations in music, International Universities Press, Madison (CT), 1993, pp. 125-149.
[68] I. Peretz, Listen to the brain: A biological perspective on musical emotions, in P.N. Juslin, and J.A.
Sloboda, Music and emotion, Oxford University Press, New York, 2001, pp. 105-134.
[69] S.E. Trehub, and L.J. Trainor, Singing to infants: Lullabies and play songs, Advances in Infancy
Research 12 (1998), 43-77.
[70] J. Tamplin, A pilot study into the effect of vocal exercises and singing on dysarthric speech,
Neurorehabilitation 23 (2008), 207-216.
[71] R. Sparks, N. Helm, and M. Albert, Aphasia rehabilitation resulting from melodic intonation therapy,
Cortex 10 (1974), 303-316.
[72] G.J. Murrey, Alternate therapies in the treatment of brain injury and neurobehavioral disorders,
Haworth Press, Binghamton (NY), 2006.
[73] J. Kennelly, L. Hamilton, and J. Cross, The interface of music therapy and speech pathology in the
rehabilitation of children with acquired brain injury, Australian Journal of Music Therapy 12 (2001),
13-20.
[74] M. Besson, D. Schön, S. Moreno, A. Santos, and C. Magne, Influence of musical expertise and musical
training on pitch processing in music and language, Restorative Neurology and Neuroscience 25 (2007),
399-410.
1. Introduction
Motor imagery refers to the mental simulation of a motor act in the absence of any
gross muscular activation [1]. The mental process of motor imagery has been
investigated within different areas of research, such as cognitive psychology,
neuroscience and sport psychology, sometimes with different terminology. In the
context of athletic performance studies, a frequently used concept is mental practice.
This term refers to a training technique by which a motor act is cognitively rehearsed
with the goal of improving performance. It is important to distinguish this specific
definition from the broader term mental preparation, which includes a variety of
disparate sport psychology techniques that share a goal of enhancing performance, such
as positive mental imagery, performance cues/concentration, relaxation/activation, self-efficacy statements, and other forms of mental training. A distinction also needs to be
made between the external and internal perspectives in motor imagery. The
external perspective, considered to be mainly visual in nature, involves a third-person
view of the movement, as if watching oneself on a screen. The internal (or kinaesthetic)
perspective, on the other hand, requires a subject to take a first-person view and to
imagine the somesthetic feedback associated with action [2].
Recent studies in neuroscience have provided robust evidence that mental practice
with motor imagery may induce plastic changes in the motor system similar to actual
physical training [3, 4]. This supports the idea that mental training could be effective in
promoting motor recovery after damage to the central nervous system. In this chapter,
we first provide the rationale for using mental training in neurorehabilitation. Next, we
describe results of a pilot clinical trial, in which we examined the technical and clinical
feasibility of using virtual reality technology to support mental practice in stroke
recovery.
2. Motor imagery
Scientific investigation of motor imagery dates back to 1885, when the Viennese
psychologist Stricker collected the first empirical evidence that overt and covert motor
behaviours involve the same processing resources [5]. Over the past thirty-five years, a
number of studies have investigated this hypothesis further, by means of behavioural,
psycho-physiological and neuroimaging methodologies. Overall, these studies have
provided robust evidence about the existence of a striking functional similarity between
real and mentally imagined actions.
2.1. Chronometric studies
Chronometric studies are based on the Mental Chronometry paradigm, which involves
comparing real and imagined movement durations. In general, results of these studies
indicate a close temporal coupling between mentally imagined and executed movement.
Decety and Michel [6] compared actual and imagined movement times in a
graphic task. They found that the time taken by right-handed subjects to write a
sentence was the same whether the task was executed mentally or physically. Also,
subjects took approximately the same time, both physically and mentally, whether they
wrote the text in large letters or in small letters. This observation suggests that the
isochronic principle, which holds for physically performed drawing and writing tasks,
applies also to mentally-simulated motor tasks.
In another experiment, Decety and Jeannerod [7] investigated whether Fitts' law
(which implies an inverse relationship between the accuracy of a movement and the
speed with which it can be performed) also applies to imagined movements. These
authors investigated mentally simulated motor behaviours within a virtual environment.
Participants were instructed to imagine themselves walking in a computer-generated three-dimensional space toward gates of different apparent widths placed at
three different apparent distances. Results showed that response time increased for
decreasing gate widths when the gate was placed at different distances, as predicted by
Fitts' law. According to the authors, these findings support the hypothesis that mentally
simulated actions are governed by central motor rules.
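Fitts' law states that movement time grows linearly with the index of difficulty, ID = log2(2D/W), where D is the distance to the target and W its width. The sketch below illustrates the relationship the authors tested; the coefficients a and b and the gate dimensions are hypothetical illustrations, not values from the study:

```python
import math

# Hypothetical Fitts' law coefficients: intercept (s) and slope (s/bit).
A, B = 0.3, 0.25

def index_of_difficulty(distance: float, width: float) -> float:
    # ID = log2(2D / W), in bits.
    return math.log2(2 * distance / width)

def predicted_time(distance: float, width: float) -> float:
    # Linear speed-accuracy trade-off: MT = a + b * ID.
    return A + B * index_of_difficulty(distance, width)

# Narrower gates at the same distance yield longer (imagined) walking times.
for width in (1.2, 0.9, 0.45):
    mt = predicted_time(5.0, width)
    print(f"gate width {width} m -> ID {index_of_difficulty(5.0, width):.2f} bits, MT {mt:.2f} s")
```

With the distance held constant, decreasing the gate width raises the index of difficulty and hence the predicted time, which is the pattern reported for the imagined walking task.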
The temporal correspondence between real and imagined motion is affected by
moderating variables such as the type of motor task and the time of day. Rodriguez
and colleagues [8] asked a group of healthy subjects to perform or imagine a fast
sequence of finger movements of progressive complexity. Findings showed real-mental
congruency in relatively complex motor sequences (4 to 5 fingers), while in the
simplest sequences (performed with 1 or 2 fingers) real-mental congruency decreased
markedly. The influence of the time of day on real-mental congruency was
investigated by Gueugneau and colleagues [9]. They found that the real-virtual
isochrony was only observable between 2 pm and 8 pm, whereas in the morning and
later in the evening, the durations of mental movements were significantly longer than
the durations of real movements.
2.2. Psycho-physiological studies
Further evidence of the functional similarity between physical and imagined
movements is provided by studies that have measured patterns of autonomic response
during mental simulation of effortful motor actions. Decety and colleagues [10]
measured cardiac and ventilatory activity during actual and mental locomotion at
different speeds. Data analysis showed a strict correlation between heart and respiratory
rates and the degree of imagined effort. For example, the authors found that the amount
of vegetative arousal of a participant mentally running at 12 km/h was similar to that of
a subject physically walking at a speed of 5 km/h. In another study, Decety and
colleagues [11] analysed heart rate, respiration rate and muscular metabolism during
both actual and mental leg exercise. During motor imagery, vegetative activation was
found to be greater than expected from metabolic demands. The authors explained the
additional autonomic activation as the involvement of central mechanisms dedicated to
motor control, which anticipate the need for energetic mobilization required by the
planned movement.
Bonnet et al. [12] investigated changes in the excitability of spinal reflex pathways
during mental simulation and actual motor performance. In their experiment, subjects
were instructed either to exert or to mentally simulate a strong or a weak pressure on a
pedal with the left or the right foot. Modifications in the H- and T reflexes were
measured on both legs by electromyography (EMG). Findings showed that spinal
reflex excitability during motor imagery was only slightly weaker than the reflex
facilitation associated with actual performance. A further interesting result of this
study was that the lateralization and intensity of the imagined movement significantly
modulated the EMG activity during motor imagery.
2.3. Brain imaging studies
A large body of recent research has investigated neural substrates underlying motor
imagery by comparing the brain activation that occurs during mental and physical
execution of movements. Taken together, results derived from these studies suggest
that imagining a motor act is a cognitive task that engages a complex distributed neural
circuit, which includes the activation of primary motor cortex (M1), supplementary
motor area, dorsal and ventral lateral pre-motor cortices, superior and inferior parietal
lobule, pre-frontal areas, inferior frontal gyrus, superior temporal gyrus, primary
sensory cortex, secondary sensory area, insular cortex, anterior cingulate cortex, basal
ganglia and cerebellum [13, 15].
The pattern of cerebral activation associated with motor imagery can be influenced
by the level of motor expertise. Ross and colleagues [16] used fMRI to evaluate motor
imagery of the golf swing in golf players with different handicaps. Results showed
activation of cerebellum, vermis, supplementary motor area, as well as motor and
parietal cortices. Moreover, the authors found a correlation between increased handicap
of participants and an increased number of activated brain areas. According to the
authors of this study, increased brain activity may reflect a failure to learn and become
highly automatic, or be related to a loss of automaticity with the need for compensatory
processing.
3. Mental practice
In the previous section, we reviewed evidence suggesting that mental and physical
actions obey the same biomechanical constraints and share similar
neuromuscular mechanisms. Another stream of research has investigated the effects of
mental rehearsal on motor skill learning. Laboratory experiments involving healthy
individuals have shown that motor learning can occur through mental practice alone,
and that the combination of physical and mental rehearsal can lead to superior
performance compared to physical practice only [27]. Positive effects of mental
practice have been reported in a variety of motor tasks and for different outcome
variables, including performance accuracy, movement speed and muscular force [28,
31].
Neuro-physiological studies have consistently shown that prolonged mental
practice induces plastic changes in the brain similar to those resulting from
physical training. Pascual-Leone and colleagues [3] used transcranial magnetic
stimulation to examine patterns of functional reorganization of the brain after mental or
physical training of a motor skill. Participants practiced a one-handed piano exercise
over a period of five days. Results showed that the size of the contra-lateral cortical
output map for the long finger flexor and extensor muscles increased progressively
each day, and that the increase was equivalent in both physical and mental training.
In a meta-analytic review of this literature, Driskell and colleagues [32] identified
several variables that moderate the effectiveness of mental practice (page numbers
refer to their article):
- Type of task: mental practice seems to be more effective when the task to be learnt requires cognitive or symbolic components/operations, i.e. making decisions, solving problems, generating hypotheses (p. 485);
- Retention interval: the effects of mental practice on performance become weaker over time. To gain the maximum benefit of mental practice, one should refresh training on at least a one- or two-week schedule (p. 489);
- Experience level: while experienced subjects benefit equally well from mental practice, regardless of task type (cognitive or physical), novice subjects benefit more from mental practice on cognitive tasks than on physical tasks (p. 488). Mental practice may be more effective if novice subjects are given schematic knowledge before mental practice of a physical task (p. 489);
- Duration of mental practice: the benefit of mental practice decreases with training duration. To maximize learning outcome, an overall training period of approximately twenty minutes is recommended (p. 488).
Patient ID   Age    Months since stroke   Side of impairment   Hand dominance
VP           46     13                    Right                Right
GT           68     24                    Right                Right
MLR          61     27                    Right                Right
LS           39     24                    Left                 Right
SS           57     20                    Right                Right
RG           68     25                    Left                 Right
LL           63     96                    Left                 Right
TP           40     36                    Right                Bilateral
PZ           27     14                    Left                 Left
Mean         52.1   31.0
SD           14.6   25.3
Range        27-68  13-96
that are generated. The responses to all the questions can be summed to provide an
overall score. The Mental Rotation Test is used to assess the ability to mentally rotate
three-dimensional objects. The Vividness of Movement Imagery Questionnaire
(VMIQ) [47] was used to test patients' ability to perform motor imagery. The VMIQ is
constructed specifically to assess kinaesthetic imagery ability and contains a 24-item
scale of movements that the subject is requested to imagine. The
questionnaire includes a variety of relatively simple upper-extremity, lower extremity,
and whole-body movements. The best score is 120, and the worst score is 24.
5.3. System
The components of the VR Mirror system are shown in Figure 1.
The laboratory intervention using the VR Mirror was integrated with a home-rehabilitation program making use of a DVD. The DVD stored pre-recorded movies that
showed the patient how to perform the motor exercises.
Figure 1. The Virtual Reality Mirror. Top left: the patient is performing the movement with the healthy arm
during the registration phase. Top-right and bottom: positioning of sensors and structure of the prototype.
5.4. Intervention
The day-hospital rehabilitation protocol included a minimum of two weekly sessions,
for eight consecutive weeks. Each therapeutic session at the hospital included half an
hour of standard physiotherapy plus half an hour of VR Mirror training. The treatment
focused on the following motor exercises:
1) flexion-extension of the wrist;
2) supination/pronation;
3) flexion-extension of the elbow with assisted stabilization of the shoulder.
The training procedure with the VR Mirror consisted of the following steps. First,
the therapist shows the patient how to perform the movement with the unaffected arm.
When the patient performs the task, the system registers the movement and generates
its mirrored three-dimensional simulation. Then, the virtual arm is superimposed over
the (unseen) paretic limb, so that the patient can observe a model of the movement to
be imagined. Next, the patient is asked to mentally rehearse the movement he has just
observed, taking a first-person perspective. When the patient starts to imagine the
movement, he presses a button (using his healthy hand), pressing it again when he has
finished. This allows the therapist to measure the time the patient takes to imagine each
movement exercise. Last, the patient has to perform the movement with the affected
arm. During the execution of the physical exercise with the paretic arm, the system
tracks the movement and measures its deviation from the movement performed with
the non-paretic arm. Using this measurement, which is done in real time, the system
provides the patient with audiovisual feedback describing his performance on the task.
The feedback consists of a red bar chart, which changes its shape according to the
precision of the movement. This procedure was repeated at least 5 times within each
practice session for each target exercise.
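The real-time scoring step could be sketched as follows. The chapter does not specify the deviation metric used by the system, so the RMS measure, the 0.15 m tolerance, and the text-mode bar below are illustrative assumptions standing in for the red bar chart shown to the patient:

```python
import math
import random

def precision_score(reference, performed, tolerance=0.15):
    # RMS deviation between the mirrored reference trajectory and the
    # movement performed with the paretic arm; both are lists of
    # (x, y, z) sensor samples. 1.0 = perfect match, 0.0 = deviation
    # at or beyond `tolerance` metres (an assumed cut-off).
    sq = [sum((r - p) ** 2 for r, p in zip(ref_pt, per_pt))
          for ref_pt, per_pt in zip(reference, performed)]
    rmse = math.sqrt(sum(sq) / len(sq))
    return max(0.0, 1.0 - rmse / tolerance)

def feedback_bar(score, slots=20):
    # Text stand-in for the bar chart displayed to the patient.
    filled = round(score * slots)
    return "[" + "#" * filled + "-" * (slots - filled) + "]"

# Example: a performed movement that deviates slightly from the model.
random.seed(0)
reference = [(t / 50, math.sin(t / 50), 0.0) for t in range(50)]
performed = [(x + random.gauss(0, 0.02), y + random.gauss(0, 0.02), z)
             for x, y, z in reference]
print(feedback_bar(precision_score(reference, performed)))
```

A higher score fills more of the bar, giving the patient immediate audiovisual feedback on how closely the paretic-arm movement reproduced the mirrored model.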
In parallel to the hospital-based treatment, patients were asked to practice home-based
exercises using the DVD three times a week, for one hour per session. The DVD stored
pre-recorded movies showing the correct exercise to be performed. After viewing the
movies, the patient was asked to take a first-person perspective and to imagine
executing the movement with the impaired arm.
5.5. Evaluation
Patients were evaluated 4 times:
1) at the beginning of the hospital practice (baseline assessment);
2) four weeks after starting hospital practice (midterm evaluation);
3) eight weeks after starting hospital practice;
4) 12 weeks after the end of hospital practice (follow-up).
Primary pretreatment and post-treatment measures included the Action Research
Arm Test (ARAT) [48] and the Fugl-Meyer Upper Extremity Assessment Scale (FMA-UE) [49]. The ARAT includes four domains (grasp, grip, pinch and gross motor) and
contains 19 items. Each item is graded on a four-point scale with total score ranging
from 0 to 60. Higher scores indicate better upper extremity function. The Fugl-Meyer
Upper Extremity Assessment Scale is composed of 33 items, with total scores ranging
between 0 and 66. Higher FMA-UE scores mean better motor function.
ARAT     Baseline   4 weeks   8 weeks   Follow-up
VP       12         26        29        29
GT       25         26        29        30
MLR      57         57        57        57
LS       0          -         -         5
SS       -          -         -         -
RG       -          -         -         -
LL       -          -         -         -
TP       20         24        28        28
PZ       60         60        60        60
Mean     19.9       22.0      23.1      23.2
SD       23.7       23.6      23.8      23.8
FUGL-MEYER   Baseline   4 weeks   8 weeks   Follow-up
VP           20         34        36        36
GT           25         25        32        32
MLR          59         60        66        66
LS           10         10        10        11
SS           14         15        15        15
RG           -          -         -         -
LL           14         15        15        15
TP           -          -         -         -
PZ           54         54        58        62
Mean         19.1       24.7      27.1      27.7
SD           15.9       20.7      22.5      23.1
Patients LL, LS, SS and RG presented a more negative pattern of results. None of
them showed improvement on the functional scales, and the effect of treatment on
functional recovery was negligible. This result might be partially explained by the low
compliance with the home-based exercise program that was reported by these patients
in their diaries.
When practicing with the VR Mirror, patients were asked to press a button (with
the healthy arm) in order to record imagined movement times (Figure 2). The analysis
of these data did not reveal any significant correlation between real and imagined
movement durations.
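Such a check amounts to correlating the per-exercise real and imagined durations. A minimal sketch, with hypothetical timings rather than the patients' data:

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation between paired duration samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-exercise durations (seconds): physical execution vs.
# imagined execution as timed by the patient's button presses.
real     = [2.1, 2.4, 1.8, 2.9, 2.2, 3.1, 2.6, 2.0]
imagined = [2.8, 2.2, 2.5, 2.4, 3.0, 2.1, 2.7, 2.6]

print(f"r = {pearson_r(real, imagined):.2f}")
```

An r close to zero, as observed here, indicates the absence of the temporal coupling between real and imagined movements that chronometric studies report in healthy subjects.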
Despite post-treatment measures showing moderate gains in motor function, all
patients reported increased well-being and reduced stress. A Wilcoxon signed-rank test
revealed a significant pre-post difference for EuroQol VAS scores (W=28, p < .02).
Furthermore, patients reported improvements of key health status dimensions, with
particular reference to daily activities (t = 1.18, p < .05). In particular, patients MLR
and PZ reported the achievement of remarkable gains in leisure, household and
community tasks.
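For reference, the signed-rank statistic behind this test can be computed in a few lines. The VAS scores below are hypothetical, chosen only to illustrate how a statistic of W = 28 can arise with nine patients; judging significance would still require comparing W against the null distribution (e.g. via scipy.stats.wilcoxon):

```python
def wilcoxon_w(pre, post):
    # Signed-rank statistic: drop zero differences, rank the absolute
    # differences (ties receive their average rank), then sum the
    # ranks belonging to positive differences.
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    ranked = sorted(diffs, key=abs)
    ranks, i = [], 0
    while i < len(ranked):
        j = i
        while j < len(ranked) and abs(ranked[j]) == abs(ranked[i]):
            j += 1
        ranks.extend([(i + 1 + j) / 2] * (j - i))  # average rank for ties
        i = j
    return sum(r for d, r in zip(ranked, ranks) if d > 0)

# Hypothetical EuroQol VAS scores for nine patients: seven improve and
# two are unchanged, which yields the maximal statistic for n = 7.
pre  = [50, 55, 60, 40, 45, 70, 65, 50, 55]
post = [60, 65, 70, 50, 50, 80, 70, 50, 55]
print(wilcoxon_w(pre, post))  # 28.0
```

With seven non-zero differences, all positive, W equals the sum of ranks 1 through 7, i.e. 28, matching the reported statistic.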
Figure 3. Mean EQ-5D scores for the five health dimensions (1= no problems; 3 = severe problems).
Figure 4. EQ-5D assessment of patients own health status before and after treatment (0 = worst imaginable
health state; 100 = best imaginable health state).
6. Conclusions
The main objective of this pilot study was to evaluate the technical and clinical
feasibility of using virtual reality technology to support mental practice in
neurorehabilitation. This strategy was tested in nine post-stroke patients with chronic
motor impairment of the upper limb. After eight weeks of treatment, remarkable
improvement was noted in three cases, slight improvement in two cases, and no
improvement in four cases. The limited number of patients and the absence of a control
condition did not allow us to draw any conclusion about the efficacy of this
intervention. However, results showed a good acceptance of VR Mirror therapy by
both patients and therapists, suggesting that virtual reality technology can be
successfully integrated into mental practice interventions. A future goal is to define
appropriate technology-based strategies for motivating patients to execute mental
practice at home without therapist supervision.
References
[1] M. Jeannerod, The representing brain: Neural correlates of motor intention and imagery, Behavioural
and Brain Sciences 17 (2) (1994), 187-245.
[2] L.P. McAvinue, and I.H. Robertson, Relationship between visual and motor imagery, Perceptual and
Motor Skills 104 (2007), 823-843.
[3] A. Pascual-Leone, D. Nguyet, L.G. Cohen, J.P. Brasil-Neto, A. Cammarota, and M. Hallett, Modulation
of muscle responses evoked by transcranial magnetic stimulation during the acquisition of new fine
motor skills, Journal of Neurophysiology 74 (3) (1995), 1037-1045.
[4] P.L. Jackson, M.F. Lafleur, F. Malouin, C.L. Richards, and J. Doyon, Functional cerebral
reorganization following motor sequence learning through mental practice with motor imagery,
NeuroImage 20 (2) (2003), 1171-1180.
[5] A. Sirigu, and J.R. Duhamel, Motor and visual imagery as two complementary but neurally dissociable
mental processes, Journal of Cognitive Neuroscience 13 (7) (2001), 910-919.
[6] J. Decety, and F. Michel, Comparative analysis of actual and mental movement times in two graphic
tasks, Brain and Cognition 11 (1989), 87-97.
[7] J. Decety, and M. Jeannerod, Mentally simulated movements in virtual reality: does Fitts's law hold in
motor imagery?, Behavioural Brain Research 72 (1-2) (1995), 127-134.
[8] M. Rodriguez, C. Llanos, S. Gonzalez, and M. Sabate, How similar are motor imagery and movement?,
Behavioural Neuroscience 122 (2008), 910-916.
[9] N. Gueugneau, B. Mauvieux, and C. Papaxanthis, Circadian modulation of mentally simulated motor
actions: Implications for the potential use of motor imagery in rehabilitation, Neurorehabilitation
and Neural Repair (2008).
[10] J. Decety, M. Jeannerod, M. Germain, and J. Pastene, Vegetative response during imagined movement
is proportional to mental effort, Behavioural Brain Research 42 (1) (1991), 1-5.
[11] J. Decety, M. Jeannerod, D. Durozard, and G. Baverel, Central activation of autonomic effectors during
mental simulation of motor actions in man, The Journal of Physiology 461 (1993), 549-563.
[12] M. Bonnet, J. Decety, M. Jeannerod, and J. Requin, Mental simulation of an action modulates the
excitability of spinal reflex pathways in man, Brain Research. Cognitive Brain Research 5 (3) (1997),
221-228.
[13] J. Decety, Do imagined and executed actions share the same neural substrate?, Brain Research.
Cognitive Brain Research 3 (2) (1996), 87-93.
[14] P.L. Jackson, M.F. Lafleur, F. Malouin, C. Richards, and J. Doyon, Potential role of mental practice
using motor imagery in neurologic rehabilitation, Archives of Physical Medicine and Rehabilitation 82
(8) (2001), 1133-1141.
[15] M.G. Lacourse, E.L.R. Orr, S.C. Cramer, and M.J. Cohen, Brain activation during execution and motor
imagery of novel and skilled sequential hand movements, NeuroImage 27 (3) (2005), 505-519.
[16] J.S. Ross, J. Tkach, P.M. Ruggieri, M. Lieber, and E. Lapresto, The mind's eye: Functional MR
imaging evaluation of golf motor imagery, American Journal of Neuroradiology 24 (6) (2003), 1036-1044.
[17] R.J. Davidson, and G.E. Schwarz, Brain mechanisms subserving self-generated imagery:
Electrophysiological specificity and patterning, Psychophysiology 14 (6) (1977), 598-601.
[18] A. Guillot, C. Collet, V.A. Nguyen, F. Malouin, C. Richards, and J. Doyon, Brain activity during visual
versus kinesthetic imagery: An fMRI study, Human Brain Mapping (2008).
[19] A. Sirigu, L. Cohen, J.R. Duhamel, B. Pillon, B. Dubois, Y. Agid, and C. Pierrot-Deseilligny,
Congruent unilateral impairments for real and imagined hand movements, Neuroreport 6 (7) (1995),
997-1001.
[20] P. Dominey, J. Decety, E. Broussolle, G. Chazot, and M. Jeannerod, Motor imagery of a lateralized
sequential task is asymmetrically slowed in hemi-Parkinson's patients, Neuropsychologia 33 (6) (1995),
727-741.
[21] R. Cunnington, G.F. Egan, J.D. O'Sullivan, A.J. Hughes, J.L. Bradshaw, and J.G. Colebatch, Motor
imagery in Parkinson's disease: a PET study, Movement Disorders 16 (5) (2001), 849-857.
[22] S. Thobois, P.F. Dominey, J. Decety, P.P. Pollak, M.C. Gregoire, P.D. Le Bars, and E. Broussolle,
Motor imagery in normal subjects and in asymmetrical Parkinson's disease: a PET study, Neurology 55
(7) (2000), 996-1002.
[23] A. Sirigu, J.R. Duhamel, and L. Cohen, The mental representation of hand movements after parietal
cortex damage, Science 273 (1996), 1564-1568.
[24] B. Tomasino, and R.I. Rumiati, Effects of strategies on mental rotation and hemispheric lateralization:
neuropsychological evidence, Journal of Cognitive Neuroscience 16 (2004), 878-888.
[25] N. Sharma, V.M. Pomeroy, and J.C. Baron, Motor imagery: a backdoor to the motor system after
stroke?, Stroke 37 (7) (2006), 1941-1952.
[26] N. Allami, Y. Paulignan, A. Brovelli, and D. Boussaoud, Visuo-motor learning with combination of
different rates of motor imagery and physical practice, Experimental Brain Research 184 (1) (2008),
105-113.
[27] L. Yágüez, D. Nagel, H. Hoffman, A.G.M. Canavan, E. Wist, and V. Hömberg, A mental route to
motor learning: Improving trajectorial kinematics through imagery training, Behavioural Brain
Research 90 (1) (1998), 95-106.
[28] G. Yue, and K.J. Cole, Strength increases from the motor program: comparison of training with
maximal voluntary and imagined muscle contractions, Journal of Neurophysiology 67 (1992), 1114-1123.
[29] V.K. Ranganathan, V. Siemionow, J.Z. Liu, V. Sahgal, and G.H. Yue, From mental power to muscle
power - gaining strength by using the mind, Neuropsychologia 42 (7) (2004), 944-956.
[30] R. Gentili, C. Papaxanthis, and T. Pozzo, Improvement and generalization of arm motor performance
through motor imagery practice, Neuroscience 137 (2006), 761-772.
[31] K. Sacco, F. Cauda, L. Cerliani, D. Mate, S. Duca, and G.C. Geminiani, Motor imagery of walking
following training in locomotor attention. The effect of "the tango lesson", NeuroImage 32 (3) (2006),
1441-1449.
[32] J.E. Driskell, C. Copper, and A. Moran, Does Mental Practice Enhance Performance?, Journal of
Applied Psychology 79 (4) (1994), 481-492.
[33] Y.A. Fery, Differentiating visual and kinesthetic imagery in mental practice, Journal of Experimental
Psychology 57 (1) (2003), 1-10.
[34] C. Hall, E. Buckolz, and G.J. Fishburne, Imagery and the acquisition of motor skills, Journal of Sport
Science 17 (1992), 19-27.
[35] R. Tamir, R. Dickstein, and M. Huberman, Integration of motor imagery and physical practice in group
treatment applied to subjects with Parkinson's disease, Neurorehabilitation and Neural Repair 21
(2007), 68-75.
[36] S.C. Cramer, E.L. Orr, M.J. Cohen, and M.G. Lacourse, Effects of motor imagery training after chronic,
complete spinal cord injury, Experimental brain research 177 (2006), 233-242.
[37] G.L. Moseley, Graded motor imagery is effective for long-standing complex regional pain syndrome: a
randomised controlled trial, Pain 108 (1-2) (2004), 192-198.
[38] A. Zimmermann-Schlatter, C. Schuster, M.A. Puhan, E. Siekierka, and J. Steurer, Efficacy of motor
imagery in post-stroke rehabilitation: a systematic review, Journal of NeuroEngineering and
Rehabilitation 14 (2008).
[39] P.L. Jackson, M.F. Lafleur, F. Malouin, C. Richards, and J. Doyon, Potential role of mental practice
using motor imagery in neurologic rehabilitation, Archives of Physical Medicine and Rehabilitation 82
(8) (2001), 1133-1141.
[40] N. Sharma, V.M. Pomeroy, and J.C. Baron, Motor imagery: a backdoor to the motor system after
stroke?, Stroke 37 (2006), 1941-1952.
[41] F. Malouin, C.L. Richards, A. Durand, and J. Doyon, Clinical assessment of motor imagery after stroke,
Neurorehabilitation and Neural Repair 22 (4) (2008), 330-40.
[42] R. Dickstein, and J.E. Deutsch, Motor Imagery in Physical Therapist Practice, Physical Therapy 87 (7)
(2007), 942 - 953.
[43] J.A. Stevens, and M.E. Stoykov, Using motor imagery in the rehabilitation of hemiparesis, Archives of
Physical Medicine and Rehabilitation 84 (7) (2003), 1090-2.
[44] A. Gaggioli, F. Morganti, R. Walker, A. Meneghini, M. Alcaniz, J.A. Lozano, J. Montesa, J.A. Gil, and
G. Riva, Training with computer-supported motor imagery in post-stroke rehabilitation,
CyberPsychology and Behavior 7 (3) (2004), 327-332.
[45] D. Marks, Visual imagery differences in the recall of pictures, British Journal of Psychology 64
(1973), 17-24.
[46] R.N. Shepard, and J. Metzler, Mental rotation of three-dimensional objects, Science 171 (1971), 701-703.
[47] A. Isaac, D. Marks, and D. Russell, An instrument for assessing imagery of movement: The vividness
of movement imagery questionnaire (VMIQ), Journal of Mental Imagery 10 (1986), 23-30.
[48] R.C. Lyle, Performance test for assessment of upper limb function in physical rehabilitation treatment
and research, International Journal of Rehabilitation Research 4 (1981), 483-492.
[49] A. Fugl-Meyer, L. Jaasko, and I. Leyman, The post-stroke hemiplegic patient, I: a method for
evaluation of physical performance, Scandinavian Journal of Rehabilitation Medicine 7 (1975), 13-31.
[50] R. Rabin, and F. de Charro, EQ-5D: a measure of health status from the EuroQol Group, Annals of
Medicine 33 (2001), 337-43.
1. Introduction
We have created a laboratory that combines two technologies: dynamic posturography
and virtual reality (VR). The purpose of this laboratory is to test and train postural
behaviors in a virtual environment (VE) which simulates real world conditions. Our
goal with this environment is to explore how multisensory inputs influence the
perception of orientation in space, and to determine the consequence of shifts in spatial
perception on postural responses. Drawing from previous findings from aviators in
simulators which indicated that responses to visual disturbances became much stronger
when combined with a physical force disturbance [1], we have assumed that the use of
a VE would elicit veridical reactions.
Traditionally, postural reactions have been studied in controlled laboratory
settings that either remove inputs from specific pathways to determine their
contribution to the postural response (e.g., closing the eyes to remove vision), or test
patients with sensory deficits to determine how the central nervous system (CNS)
compensates for the loss of a particular input [2]. In order to simplify the system, most
studies recorded a single dependent variable such as center of pressure (COP) or center
of mass (COM) when either the physical or visual world moved [3, 4]. There have been
studies comparing the relative influence of two input signals, but conclusions have
mostly been drawn from a single output variable [5, 6]. With our environment, we have
the ability to monitor multiple inputs as well as multiple outputs. Thus our
experimental protocols can be designed to incorporate the complex cortical processing
that occurs when an individual is navigating through the natural world, thereby eliciting
motor behaviors that are presumed to be more analogous to those that take place in the
natural physical environment.
In this chapter we present information about both the technology used in our
laboratory as well as data that demonstrate how we have been able to modify and
measure postural and perceptual responses to multisensory disturbances in both healthy
and clinical populations. First, we will present a background of our rationale for using
VE technology. Second, we will describe the choices we made in developing our
laboratory. Results from experiments in the VE will be offered to support our claim
that our laboratory creates a sense of presence in the environment. Lastly, we will
present evidence of a strong linkage between posture and perception which supports
our belief that the VE is a valuable tool for exploring the organization of postural
behaviors as they would occur in natural conditions. Our laboratory presents the
challenges necessary to evaluate postural disorders and to design interventions that will
fully engage the adaptive properties of the system. We believe the VE has vast
potential as both a diagnostic and treatment tool for patients with poorly diagnosed
balance disorders.
1.1. Unique Features that Motivated Our Use of VR in Posture Control Research
Prior to the advent of virtual environment display technology, experiments using
complex, realistic computer-controlled imagery to study visual information
processing/motor control linkages were difficult to produce. Until the arrival of
affordable high performance computer graphics hardware, the manipulation of visual
stimuli was relegated to real-world conditions that were optically altered using prism
lenses [7, 8] or to artificial visual stimuli depicted in pictures and simple computer-generated images such as random dots moving across a monitor [9, 11]. Each of these
systems had advantages and limitations that constrained the type of experiments that
could be performed. Consequently, any investigation of motor control and its
interaction with vision and the other senses was limited by the available technology.
These limitations severely impeded the study of posture control which incorporates
both feedback about our self-motion and the perception of orientation in space.
Although the role of each sensory feedback system could be studied by either removing
or augmenting its contribution to an action, perception of vertical orientation is more
difficult to discern and its measurement was mostly dependent on subjective feedback
from the subject. If we were to examine how a higher level process like perception
impacted posture control, then we needed to produce a visual environment convincing
enough that our subjects believed that they needed to deal with disturbances presented
in that environment. To create such conditions we modeled our laboratory after one of
the most successful applications of a VE to activate perception, that of pilots in flight
simulators [12]. We needed an environment where subjects accepted that they were
actually present in the environment, thus their responses to the virtual world would be
similar to those elicited in the physical world [13]. This suspension of disbelief […]
[The CAVE is a registered trademark of the Board of Trustees of the University of Illinois.]
In the CAVE, subjects were immersed, but not to the extent provided by an HMD system, since they could see physical objects, such as their own body, in addition to the virtual world. The ability to see yourself within your environment is a trait experienced in the physical world. [Augmented Reality HMD systems that allow the subject to see both physical and virtual objects are currently available but are at least twice the cost of a CAVE.]
Swimming of the scene during head movements was minimal because the entire field-of-view (FOV) was projected on the screen in front of the subject. Swimming was further reduced because the tracking system we used (Motion Analysis, Santa Rosa, CA) produced very short latencies (approximately 10-20 msec), resulting in an image update that is very close to the physiological latency of the vestibulo-ocular reflex during natural head motion [18]. Negative characteristics of the projection-based system are that it requires a much larger physical space than an HMD and that it forces the subjects to be confined to an area near the screens in order to see the image. Also, images are not as bright as in HMDs. However, our decision criteria led us to use a CAVE system rather than an HMD.
We originally started with a one-wall CAVE, i.e., a single projection screen in
front of the subject [19]. Although the 100° FOV was adequate for our experiments, we
expected that a wider FOV would elicit a stronger sense of motion in subjects [20]. In
fact, we have found that narrowing the FOV so that the peripheral field is not
stimulated actually produces greater delays in response to postural disturbances [21].
We currently have a 3 screen passive stereo system, with walls in front and to the sides
of the subject, which permits peripheral as well as central visual field motion (Figure
1). Two projectors are located behind each screen. Each pair of projectors has
circularly polarized filters of different (opposite) directions placed in front of them, and
each projects a full-color workstation field [1280h x 1024v] at 60 Hz onto each screen.
Matching circularly polarized stereo glasses are worn by the subject to deliver the
appropriate left and right eye image to each eye, allowing a 150° stereo FOV. The correct perspective and stereo projections for the scene are computed using values for the subject's inter-pupillary distance (IPD) and the current orientation of the head supplied by position markers attached to the subject's head and scanned by the Motion Analysis infrared camera system. Consequently, virtual objects retain their true perspective and position in space regardless of the subject's movement. The visual
experience is that of being immersed in a realistic scene with textural content and optic
flow. To produce the physical motion disturbances necessary to elicit postural
reactions, we incorporated a moving base of support with two integrated force plates
(NeuroCom International Inc, Clackamas OR) into the environment (Figure 1). In
many posture laboratories and with the popular clinical tools for diagnosis and training
of postural reactions (e.g., the Equitest and Balance Master), the visual axis of rotation
is placed at the ankle and the multi-segmental body is assumed or even constrained to
function as an inverted pendulum [6, 20, 22, 25]. In our laboratory, the visual axis is
referenced to the head as occurs during natural movement, and it is assumed that the
control of posture is a multi-segmental process.
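The head-referenced stereo projection described above depends on two tracked quantities: the subject's IPD and the current head pose. The sketch below is an illustration only, not the authors' code (their rendering was handled by CAVElib): it shows how per-eye viewpoints might be derived from a tracked head pose. The function name and the 0.064 m default IPD are assumptions for the example.

```python
import numpy as np

def eye_positions(head_pos, head_rot, ipd=0.064):
    """Illustrative sketch: compute left/right eye positions in world
    coordinates from a tracked head pose.

    head_pos: (3,) head position from the tracker.
    head_rot: (3, 3) rotation matrix giving head orientation.
    ipd: inter-pupillary distance in metres (0.064 m is an assumed
         default; the chapter uses each subject's measured IPD).
    """
    # The head's local left-right axis, rotated into world coordinates.
    right_axis = head_rot @ np.array([1.0, 0.0, 0.0])
    half = 0.5 * ipd
    left_eye = head_pos - half * right_axis
    right_eye = head_pos + half * right_axis
    return left_eye, right_eye
```

Each eye position would then feed an off-axis projection for its screen, which is why virtual objects keep their true perspective as the subject moves.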
1.4. Is Stereo Vision Necessary?
We have explored whether having stereovision in the VE produced a more compelling
visual experience than just viewing a flat wall or picture. Stereopsis is an effective cue
up to about 30m which encompasses many objects in our scene [26]. We predicted that
stereovision was necessary to produce a sense of immersion in the VE, and that this
perceptual engagement would be reflected in the postural response metrics.
Figure 1. The Virtual Environment and Postural Orientation Laboratory currently at Temple University is a three-wall virtual environment. Each wall measures 2.4 m x 1.7 m. The visual experience is that of being immersed in a realistic scene with textural content and optic flow. Built into the floor is a 3 degree-of-freedom posture platform (NeuroCom Inc., Clackamas, OR) with two integrated force plates (AMTI, Watertown, MA) on which sit reflective markers from the Motion Analysis (Santa Rosa, CA) infrared camera system.
For these
experiments we produced postural instability by having young adults stand on the force
plate with a full (100%) base of support (BOS), or on a rod offering 45% of their BOS
(calculated as a percentage of their foot length), or a rod offering 35% BOS [21, 27].
Subjects viewed the wide FOV visual scene moving fore-aft at 0.1 Hz, viewing either
stereo (IPD ≠ 0) or dioptic (IPD = 0) images. Response power at the frequency of the
scene increased significantly (p < 0.05) with the 35% BOS (Figure 2), suggesting some
critical mechanical limit at which subjects could no longer rely on the inputs provided
by the BOS and, thus, switched to a reliance on vision. There was also an interaction
between BOS and stereovision revealing that when subjects were more reliant on the
visual inputs, stereovision had a greater effect on their motion. Thus, in an unstable
environment, visual feedback and, in particular, stereovision became more influential
on the metrics of the postural response. As a result we chose to retain the stereo
component in our 3-wall environment.
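The dependent measure here, response power at the frequency of the moving scene, can be sketched with a short spectral computation. This is not the authors' analysis code; `power_at_frequency` is a hypothetical helper using a plain FFT periodogram.

```python
import numpy as np

def power_at_frequency(signal, fs, f0):
    """Estimate the power of a sway signal (e.g., head COM displacement)
    at the visual-scene drive frequency f0 (Hz).

    fs: sampling rate in Hz.  Returns the periodogram power at the FFT
    bin nearest f0.  Illustrative only; windowing/averaging omitted.
    """
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()                 # remove DC offset
    n = sig.size
    spectrum = np.fft.rfft(sig)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    power = (np.abs(spectrum) ** 2) / n    # periodogram-style power
    k = np.argmin(np.abs(freqs - f0))      # bin closest to the drive
    return float(power[k])
```

With a 0.1 Hz scene drive, a subject locked to the stimulus shows large power in this bin relative to neighboring frequencies such as 0.4 Hz.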
Figure 2. Power of head, trunk, and shank center of mass for four subjects is normalized to the largest
response of each subject during 0.1 Hz motion of a visual scene with dioptic (2D) and stereo (3D) images
while on a full (100%) and reduced (35%) base of support.
Figure 3. (Left) A subject standing within a field of random dots projected in the VE. The subject is tethered
to three flock-of-birds sensors that are recording 6 axes of motion of the head, trunk, and lower limb. (Right)
Graphs of two subjects (A and B) showing the relationship of the head, trunk, and left ankle during
locomotion. The two gait patterns produced by the subjects walking from the rear of the CAVE (bottom of
the y-axis) to the front wall (top of the y-axis) are shown. (A) The subject takes one step forward and then
walks in the direction of the counterclockwise scene by crossing one limb over the other. (B) The subject
crouches down and stamps his feet to progress forward in the CAVE.
and root mean square (RMS) values were calculated for the head, trunk, and lower
limb. We found that with scene motion in either the pitch and roll planes, subjects
exhibited greater magnitudes of motion in the head and trunk than at the lower limb.
Additionally, the frequency or velocity content of the head and trunk motion was
equivalent to that of the visual input, but this was not the case in the lower limb.
Smaller amplitudes and frequent phase reversals observed at the ankle suggested that
control at the ankle was directed toward keeping the body over the base of support (the
foot) rather than responding to changes in the visual environment. These results
suggested to us that the lower limb postural controller was setting a limit of motion for
postural stabilization while posture of the head and trunk may have been governed by a
perception of the visual vertical driven by the visual scene.
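The two descriptors used in this comparison, RMS amplitude and the frequency content of each segment's motion, are standard computations. The sketch below is an assumed illustration of how they could be obtained per segment, not the authors' code.

```python
import numpy as np

def rms(x):
    """Root mean square amplitude of a segment trajectory about its mean."""
    x = np.asarray(x, dtype=float)
    return float(np.sqrt(np.mean((x - x.mean()) ** 2)))

def dominant_frequency(x, fs):
    """Frequency (Hz) of the largest spectral peak in a segment trajectory.
    A trajectory driven by the 0.25 Hz visual input peaks at 0.25 Hz;
    the lower limb, with smaller amplitudes and phase reversals, does not."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    power[0] = 0.0  # ignore the DC component
    return float(freqs[np.argmax(power)])
```

Comparing these values across head, trunk, and ankle is what supports the claim that the upper body followed the visual input while the ankle did not.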
When our subjects were asked to walk while the visual environment rolled
counterclockwise, all of the subjects compensated for the visual field motion by
exhibiting one of two locomotion strategies. Some subjects exhibited a normal step
length, taking only two or three steps to cover the seven-foot distance which would be
a normal gait for this distance. However, a lateral shift took place so that they walked
sideways in the direction of the rolling scene (Figure 3A). In each case, the subject's
first step was straight ahead and the second step was to the left regardless of which foot
was placed first. For example, one subject who made the first step with the left foot
then made the second step by crossing the right leg over the left leg when responding to
the visual stimulus [in order to move to the left]. When queried about the amount of
translation produced during the walking trials, subjects responded that they recognized
they were moving off center. In fact, these subjects were three feet to the left of center
at the end of their trial but were unable to counteract the destabilizing force.
The other subjects walked with short, vertically projected stamping, taking
approximately seven or eight steps in the seven feet traveled (Figure 3B). These
subjects exhibited an increased frequency of medial-lateral sway of the head and trunk
as though they were rocking over each foot as they stepped forward. These subjects
reported that they were only focused on not falling over. Shortened step lengths and
increased flexion at the knee and ankle implied that these subjects were exerting
cognitive decisions about their locomotion that focused on increasing their awareness
of both the somatosensory feedback and their motor output. This locomotion pattern
was reminiscent of the gait observed in elderly fallers [34] or subjects that have been
walking with reversing prisms [35].
From these results we concluded that subjects could only counteract the effects of
the destabilizing visual stimulus by altering their normal locomotion pattern and,
correspondingly, the altered perception of vertical. Interestingly, the content of the
visual scene did not determine response strategy selection (subjects receiving the
random dot pattern also exhibited the different strategies), thus this paradigm can be
used in laboratories with less advanced technologies than those reported here.
Figure 4. Amplitudes of head, trunk, and ankle to pitch, roll, and A-P motion of the VE. For pitch and roll,
both constant velocity at 5°/s (A) and sinusoidal motion of the VE at 0.1 Hz (B) and 0.5 Hz (C) were used. (A)
Vertical dashed lines indicate the start and termination of constant velocity visual scene motion. (B and C)
Sinusoidal motion of the visual scene is illustrated by the light grey lines in each plot. (D) Sinusoidal motion
of the visual scene at 0.1 Hz is shown in the bottom trace. Time scale shows responses from early and late
portions of the experiment. In all A-P plots, upward peaks represent anterior motion relative to the room;
downward peaks represent posterior motion relative to the room.
With 0.1 Hz sinusoidal roll of the visual scene (Figure 4B), although the
magnitude of motion was greater in the head and trunk than in the ankle, all segments
had similar phases and oscillatory frequencies suggesting that subjects were responding
as a simple pendulum limited only by the constraints of the base of support. With 0.1
Hz sinusoidal pitch of the visual scene, the subject shown in Figure 4B attempted to
maintain a sinusoidal relation with the stimulus with similar magnitudes at all
segments. Segmental responses were more synchronized to a visual scene with a
frequency of 0.1 Hz than 0.5 Hz (Figure 4C). Interestingly, that same frequency (0.1
Hz) with a visual scene moving in A-P (Figure 4D) produced a much more subtle
response of the body segments with lower amplitudes [35]. Differences seen between
the responses of the two subjects presented in Figure 4D are indicative of the variable
response to the visual motion, which is not unexpected if the response is a reflection of each individual's perception of their own movement and that of the environment.
Figure 5. Orientation of the hand-held wand, the head, and the center of pressure (COP) while viewing counterclockwise (CCW) roll motion of the visual scene (bold line) and a stationary visual scene (broken line) in three subjects demonstrates a fluctuating response (top row), a bi-directional response (middle row), and a constant response (bottom row) that is consistent across all three variables. Dashed vertical lines mark the start and end of the scene motion.
The waxing and waning of our subjects' responses were reminiscent of reports in
the literature regarding subjects' perceptions of orientation during scene rotation [20,
36, 38]. Consequently, we wanted to determine whether changes in orientation of the
head and trunk when exposed to a rotating scene correlated with spatial and temporal
characteristics of the perception of self-motion. We recorded head position in space,
center of pressure responses, and perception of self-motion through the orientation of a
hand-held wand during constant velocity rotations of the visual scene about the roll
axis [39]. Although no consistent response pattern emerged across the healthy subjects,
there was a clear relationship between the perception of vertical, the position of the
head in space, and postural sway within each subject (Figure 5). This observed
relationship between spatial perception and postural orientation suggests that spatial
representation during motion in the environment is modified by both ascending and
descending controls. We inferred from these data that postural behaviors generated by
the perception of self-motion are the result of cortical interactions between visual and
vestibular signals as well as input from other somatosensory signals. This probable
real-time monitoring of spatial orientation has implications for rehabilitation
interventions. For example, the recovery of balance following a slip or trip may rely
greatly on the ability to match continuously changing sensory feedback to an initial
model of vertical that could be highly dependent on the visual environment and the
mechanical arrangement during that particular task. Also, we cannot assume that a
patient, particularly one with a sensory deficit who appears to be vertically oriented at
the initiation of motion, will be able to sustain knowledge of that orientation as the task
progresses [3].
Figure 6. Schematic illustration of the vection phenomenon. Gravitational and visual signals stimulate the
otoliths and the visual system, respectively, which, when combined, produce the perception of tilt. Thus, as
seen on the right, when the visual scene is rotating counterclockwise there is a mismatch with the vertically
directed otolith vector. The CNS determines that it doesn't make sense for the world to be moving, thereby resolving this conflict with a perception of tilt. The response is to correct for the perceived tilt (in the direction opposite that of the visual world) by tilting the body in the same direction as the motion of the visual world.
Figure 7. (Top) Power of the relative angles between head, trunk, shank and the moving platform (sled) over
the period of the trial at the relevant frequencies of platform motion (0.25 Hz) and visual scene motion (0.1
Hz) are shown for each protocol for one young adult, one elderly adult, and one labyrinthine deficient adult.
The power at each segment is portrayed as the percentage of the maximum response power (observed in the
trunk) across segments for that subject. (Bottom) Mean area under the power curve ± standard error of the
mean across all young adult subjects at the relevant frequency for platform (sled) motion only (0.25 Hz),
visual scene motion only (0.1 Hz), and both frequencies of combined platform and visual scene motion
(both). Segmental responses significantly increased (*) at 0.1 Hz when platform and scene motion were
combined.
Furthermore, we believe it unlikely that the role of any single pathway contributing
to postural control can be accurately characterized in a static environment if the
function of that pathway is context dependent. We conclude from these data that a
healthy postural system does not selectively switch between inputs but continuously
monitors all environmental signals to update the frequency and magnitude
characteristics of a motor behavior.
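The distinction drawn here, continuous re-weighting of all inputs rather than switching between them, can be caricatured numerically. The sketch below is a deliberately simplified assumption for illustration, not a model proposed in this chapter: each sensory channel contributes an orientation estimate whose weight is adjusted with context but never gated to zero.

```python
import numpy as np

def fused_estimate(visual, somatosensory, vestibular, weights):
    """Illustrative weighted fusion of orientation cues (degrees of tilt).

    weights: dict of channel gains, assumed to be updated continuously
    with context; they are renormalized to sum to 1 rather than being
    switched on/off, mirroring the 'continuous monitoring' conclusion.
    """
    w = np.array([weights['visual'],
                  weights['somatosensory'],
                  weights['vestibular']], dtype=float)
    w = w / w.sum()                         # renormalize, never gate out
    cues = np.array([visual, somatosensory, vestibular], dtype=float)
    return float(w @ cues)
```

Even with vision down-weighted (e.g., weight 0.2 against 0.4 for each of the other channels), a 10° visually signaled tilt still pulls the fused estimate away from zero, so scene motion is never fully suppressed.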
Figure 8. Average head, whole body, and shank COM power for each of the three BOS conditions when the
augmented visual motion was imposed on a stereo virtual scene. Subjects viewed the motion with a narrow
(black line) and wide (dashed) FOV.
For example, when we are moving in the environment rather than standing quietly, we
might expect that feedback generated by our physical motion becomes more heavily
weighted and it should therefore be easier for the postural control system to
differentiate between our own motion and motion of the world. PET and MRI studies
have supported this hypothesis by demonstrating that when both retinal and vestibular
inputs are processed, there are changes in the medial parieto-occipital visual area and
parieto-insular vestibular cortex [44, 46] as well as cerebellar nodulus [47, 48] that
suggest a deactivation of the structures processing object-motion when there is a
perception of physical motion. But we have preliminary data [49] to suggest that
inappropriate visual field motion is not suppressed when it is not matched to actual
physical motion. Instead, during quiet stance, magnitude and power of segmental
motion increased as the velocities of sinusoidal anterior-posterior visual field motion
were increased even to values much greater than that normally observed in postural
sway. In fact, head velocity in space was modulated by the scene velocity regardless of
the velocity of physical body motion.
Figure 9. Average head, whole body, and shank COM power for the 100% (dashed line) and 45% (black
line) BOS conditions in subjects that were able to maintain balance on the reduced BOS (typical subjects)
and those that needed to take a step (steppers) while viewing the scene in stereo.
Visual vertigo [51] describes individuals who complain of dizziness provoked by visual environments with full field-of-view repetitive or moving visual patterns. It is present in some patients with a history of a peripheral vestibular disorder, but there is also a subset of
patients who have no history of vestibular disorder and who test negative for vestibular
deficit on traditional clinical tests. We investigated whether the visual sensitivity
described by these individuals could be quantified by the magnitude of the postural
response in response to an upward pitch of the VE combined with dorsiflexion tilt of
the support surface [52].
We found that the healthy subjects exhibited incremental effects of visual field
velocity on the peak angular velocities of the head, but responses of the visually
sensitive subjects were not linearly modulated by visual field velocity (Figure 10).
Patients with no history of vestibular disorder demonstrated exceedingly large head
velocities whereas patients with a history of vestibular disorder exhibited head
velocities that fell within the bandwidth of healthy subjects. Thus, our results clearly
indicated that the relation between postural kinematics and visual inputs could quantify
the presence of a perceptual disorder. From this we concluded that virtual reality
technology could be useful for differential diagnosis and specifically designed
interventions for individuals whose chief complaint was sensitivity to visual motion.
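The criterion that separated the groups, whether peak head velocity was linearly modulated by visual field velocity, can be sketched as a regression fit. `linear_modulation_r2` is a hypothetical helper, not the authors' statistics; the velocity values in the usage note are assumptions.

```python
import numpy as np

def linear_modulation_r2(scene_velocities, peak_head_velocities):
    """Fit peak head velocity against visual-field velocity and return
    the R^2 of the linear fit.  A high R^2 suggests the incremental
    (linear) modulation seen in healthy subjects; a low R^2 flags the
    non-linear responses reported for visually sensitive subjects."""
    x = np.asarray(scene_velocities, dtype=float)
    y = np.asarray(peak_head_velocities, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)   # least-squares line
    y_hat = slope * x + intercept
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

For example, a subject whose peak head velocity grows proportionally across scene velocities of 0, 30, 45, and 60°/s yields R^2 near 1, while an erratic response profile yields a much lower value.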
We have also started to explore whether the VE could be used to measure
improvements following balance retraining. We have tested one patient with bilateral
vestibular deficit and another with BPPV following a training paradigm that focused on
somatosensory feedback. To test whether balance was improved following treatment,
we placed them in a VE that moved in counterclockwise roll while they were standing
on a platform that was sway-referenced to the motion of their center of mass. At the
same time, they were instructed to point to a target that moved laterally in their visual
field. Although these are preliminary results, we have been able to demonstrate that
visual field motion is less destabilizing following the balance training program than
prior to the training period (Figure 11) which suggests that VR technology holds
particular promise as a clinical evaluation tool.
Figure 11. Center of pressure responses of a BPPV subject before (top traces) and following (bottom traces)
balance training. The subject stood on an unstable support surface while in the dark, viewing a scene matched
to her head motion (still), viewing a scene moving counterclockwise (roll), and while pointing to a target in
the rolling scene (pointing). N.B. the subject was unable to accomplish the pointing task prior to the balance
training.
given our finding that within our VE we could distinguish the postural responses of
patients with visual sensitivity, who present with oscillopsia but have no hard clinical
signs, from a healthy population [52]. We have also had some initial success in using
the VE to test the carryover of a postural training paradigm in patients with vestibular
deficit.
Future directions for our laboratory, and for virtual technology to be considered
seriously as a rehabilitative tool, must include studies to determine how immersive the
VE must be, given the strength of the stimuli, to produce changes in the perception of
vertical and spatial orientation. Does the VE need to project a stereo image and how
wide must the field of view be? Can we identify how to make more economical
systems for treatment and diagnosis of postural disorders? Finally, we must ask how to
make these systems user friendly (and safe) either for the clinic or for home based use.
Acknowledgements
The research reported here was supported by National Institutes of Health (NIH) grants
DC01125 and DC05235 from the National Institute on Deafness and Other Communication
Disorders and grants AG16359 and AG26470 from the National Institute on Aging.
The virtual reality research, collaborations, and outreach programs at the Electronic
Visualization Laboratory (EVL) at the University of Illinois at Chicago are made
possible by major funding from the National Science Foundation (NSF), awards EIA-9802090, EIA-9871058, ANI-9980480, and ANI-9730202, as well as the NSF
Partnerships for Advanced Computational Infrastructure (PACI) cooperative agreement
ACI-9619019 to the National Computational Science Alliance. The authors thank
VRCO Inc. for the use of their CAVElib and Trackd software and our colleagues J.
Streepy, K. Dokka, and J. Langston for their collaboration on some of this work.
References
[1] L.R. Young, Vestibular reactions to spaceflight: human factors issues, Aviation, Space, and Environmental Medicine 71 (2000), A100-4.
[2] F.O. Black, C. Wall, and L.M. Nashner, Effects of visual and support surface orientation references
upon postural control in vestibular deficient subjects, Acta Otolaryngology 95 (1983), 199-201.
[3] E.A. Keshner, J.H. Allum, and C.R. Pfaltz, Postural coactivation and adaptation in the sway stabilizing
responses of normals and patients with bilateral vestibular deficit, Experimental Brain Research 69
(1987), 77-92.
[4] F.B. Horak, and L.M. Nashner, Central programming of postural movements: adaptation to altered
support-surface configurations, Journal of Neurophysiology 55 (1986), 1369-81.
[5] K.S. Oie, T. Kiemel, and J.J. Jeka, Multisensory fusion: simultaneous re-weighting of vision and touch
for the control of human posture, Cognitive Brain Research 14 (2002), 164-76.
[6] R.J. Peterka, Sensorimotor integration in human postural control, Journal of Neurophysiology 88
(2002), 1097-118.
[7] G.M. Stratton, Vision without inversion of the retinal image, Psychological Review 4 (1897), 463-81.
[8] C.M. Oman, O.L. Bock, and J.K. Huang, Visually induced self-motion sensation adapts rapidly to left-right visual reversal, Science 209 (1980), 706-8.
[9] J. Dichgans, and T. Brandt, Visual-vestibular interaction and motion perception, British Journal of
Ophthalmology 82 (1972), 327-38.
[10] L.R. Young, C.M. Oman, D.G. Watt, K.E. Money, B.K. Lichtenberg, R.V. Kenyon, and A.P. Arrott, M.I.T./Canadian vestibular experiments on the Spacelab-1 mission: 1. Sensory adaptation to weightlessness and readaptation to one-g: an overview, Experimental Brain Research 64 (1986), 291-98.
[11] S. Weghorst, J. Prothero, T. Furness, D. Anson, and T. Riess, Virtual images in the treatment of Parkinson's disease akinesia, in K. Morgan, R.M. Satava, H.B. Sieburg, R. Mattheus, and J.P. Christensen (Eds), Medicine Meets Virtual Reality II, 1995, pp. 242-43.
[12] C.C. Ormsby, and L. Young, Perception of static orientation in a constant gravitoinertial environment, Aviation, Space, and Environmental Medicine 47 (1976), 159-64.
[13] W. Sadowski, and K. Stanney, Presence in Virtual Environments, in K.M. Stanney (Ed.), Handbook of Virtual Environments: Design, Implementation, and Applications, London, Lawrence Erlbaum Associates, Inc, 2002, pp. 791-806.
[14] M. Slater, Presence - the view from Marina del Rey, http://www.presence-thoughts.blogspot.com/, 2008.
[15] C.E. Lathan, M.R. Tracey, M.M. Sebrechts, D.M. Clawson, and G.A. Higgins, Using Virtual Environments as Training Simulators: Measuring Transfer, in K.M. Stanney (Ed.), Handbook of Virtual Environments: Design, Implementation, and Applications, London, Lawrence Erlbaum Associates, Inc, 2002, pp. 403-14.
[16] A.E. Thurrell, and A.M. Bronstein, Vection increases the magnitude and accuracy of visually evoked postural responses, Experimental Brain Research 147 (2002), 558-60.
[17] C. Cruz-Neira, D. Sandin, T. DeFanti, R. Kenyon, and J. Hart, The CAVE: Audio Visual Experience Automatic Virtual Environment, Communications of the ACM 35 (1992), 64-72.
[18] T. Swee Aw, M.J. Todd, and G.M. Halmagyi, Latency and initiation of the human vestibuloocular reflex to pulsed galvanic stimulation, Journal of Neurophysiology 96 (2006), 925-30.
[19] E.A. Keshner, and R.V. Kenyon, Using immersive technology for postural research and rehabilitation, Assistive Technology 16 (2004), 54-62.
[20] J. Dichgans, R. Held, L.R. Young, and T. Brandt, Moving visual scenes influence the apparent direction of gravity, Science 178 (1972), 1217-19.
[21] J. Streepey, R.V. Kenyon, and E.A. Keshner, Visual motion combined with base of support width reveals variable field dependency in healthy young adults, Experimental Brain Research 176 (2006), 182-87.
[22] T.M. Dijkstra, G. Schoner, and C.C. Gielen, Temporal stability of the action-perception cycle for postural control in a moving visual environment, Experimental Brain Research 97 (1994), 477-86.
[23] F.H. Previc, The effects of dynamic visual stimulation on perception and motor control, Journal of Vestibular Research 2 (1992), 285-95.
[24] E.A. Keshner, M.H. Woollacott, and B. Debu, Neck, trunk and limb muscle responses during postural perturbations in humans, Experimental Brain Research 71 (1988), 455-66.
[25] J.J. Buchanan, and F.B. Horak, Emergence of postural patterns as a function of vision and translation frequency, Journal of Neurophysiology 81 (1999), 2325-39.
[26] J. Cutting, and P.M. Vishton, Perceiving Layout and Knowing Distances: The Integration, Relative Potency, and Contextual Use of Different Information About Depth, in Handbook of Perception and Cognition: Perception of Space and Motion, 2nd ed, Academic Press, 1995, pp. 69-117.
[27] J. Streepey, R.V. Kenyon, and E.A. Keshner, Field of view and base of support width influence postural responses to visual stimuli during quiet stance, Gait and Posture 25 (2006), 49-55.
[28] A.D. Kuo, R.A. Speers, R.J. Peterka, and F.B. Horak, Effect of altered sensory conditions on multivariate descriptors of human postural sway, Experimental Brain Research 122 (1998), 185-95.
[29] T. Mergner, and S. Glasauer, A simple model of vestibular canal-otolith signal fusion, Annals of the New York Academy of Sciences 871 (1999), 430-34.
[30] T. Mergner, C. Maurer, and R.J. Peterka, A multisensory posture control model of human upright stance, Progress in Brain Research 142 (2003), 189-201.
[31] T. Mergner, and T. Rosemeier, Interaction of vestibular, somatosensory and visual signals for postural control and motion perception under terrestrial and microgravity conditions - a conceptual model, Brain Research Reviews 28 (1998), 118-35.
[32] F.H. Previc, R.V. Kenyon, E.R. Boer, and B.H. Johnson, The effects of background visual roll stimulation on postural and manual control and self-motion perception, Perception & Psychophysics 54 (1993), 93-107.
[33] E.A. Keshner, and R.V. Kenyon, The influence of an immersive virtual environment on the segmental organization of postural stabilizing responses, Journal of Vestibular Research 10 (2000), 207-19.
[34] D.A. Winter, A.E. Patla, J.S. Frank, and S.E. Walt, Biomechanical walking pattern changes in the fit and healthy elderly, Physical Therapy 70 (1990), 340-47.
[35] A. Gonshor, and G.M. Jones, Postural adaptation to prolonged optical reversal of vision in man, Brain Research 192 (1980), 239-48.
[36] A. Thurrell, P. Bertholon, and A.M. Bronstein, Reorientation of a visually evoked postural response
during passive whole body rotation, Experimental Brain Research 133 (2000), 229-32.
[37] J. Dichgans, and T. Brandt, Visual-vestibular interaction: effects on self-motion perception and postural
control., in R. Held, H.W. Leibowitz, and H.L. Teuber (Eds), Handbook of sensory physiology, New
York, Springer, 1978, pp. 755-804.
[38] H. Fushiki, S. Takata, and Y. Watanabe, Influence of fixation on circular vection, Journal of Vestibular
Research 10 (2000), 151-55.
[39] E.A. Keshner, K. Dokka, and R.V. Kenyon, Influences of the perception of self-motion on Postural
parameters in a dynamic visual environment, Cyberpsychology and Behavior 9 (2006), 163-66.
[40] J.R. Lackner, and P. DiZio, Visual stimulation affects the perception of voluntary leg movements
during walking, Perception 17 (1988), 71-80.
[41] E.A. Keshner, R.V. Kenyon, and J. Langston, Postural responses exhibit multisensory dependencies
with discordant visual and support surface motion, Journal of Vestibular Research 14 (2004), 307-19.
[42] E.A. Keshner, R.V. Kenyon, and Y. Dhaher, Postural research and rehabilitation in an immersive
virtual environment, Conference Proceedings IEEE Engineering in Medicine & Biology Society 7
(2004), 4862-65.
[43] E.A. Keshner, R.V. Kenyon, Y.Y. Dhaher, and J.W. Streepey, Employing a virtual environment in
postural research and rehabilitation to reveal the impact of visual information, International Journal on
Disability and Human Development 4 (2005), 177-82.
[44] T. Brandt, P. Bartenstein, A. Janek, and M. Dieterich, Reciprocal inhibitory visual-vestibular
interaction. Visual motion stimulation deactivates the parieto-insular vestibular cortex, Brain 121 (9)
(1998), 1749-58.
[45] T. Brandt, S. Glasauer, T. Stephan, S. Bense, T.A. Yousry, A. Deutschlander, and M. Dieterich, Visual-vestibular and visuovisual cortical interaction: new insights from fMRI and PET, Annals of the New York Academy of Sciences 956 (2002), 230-41.
[46] M. Dieterich, and T. Brandt, Brain activation studies on visual-vestibular and ocular motor interaction,
Current Opinion in Neurology 13 (2000), 13-18.
[47] A. Kleinschmidt, K.V. Thilo, C. Buchel, M.A. Gresty, A.M. Bronstein, and R.S. Frackowiak, Neural
correlates of visual-motion perception as object- or self-motion, Neuroimage 16 (2002), 873-82.
[48] C. Xerri, L. Borel, J. Barthelemy, and M. Lacour, Synergistic interactions and functional working range
of the visual and vestibular systems in postural control: neuronal correlates, Progress in Brain Research
76 (1988), 193-203.
[49] K. Dokka, R. Kenyon, and E.A. Keshner, Influence of visual velocity on head stabilization, Society for
Neuroscience (2006).
[50] S. Lambrey, and A. Berthoz, Combination of conflicting visual and non-visual information for
estimating actively performed body turns in virtual reality, International Journal of Psychophysiology
50 (2003), 101-15.
[51] A.M. Bronstein, The visual vertigo syndrome, Acta Otolaryngol Suppl 520 (1) (1995), 45-8.
[52] E.A. Keshner, J. Streepey, Y. Dhaher, and T. Hain, Pairing virtual reality with dynamic posturography
serves to differentiate between patients experiencing visual vertigo, Journal of NeuroEngineering and
Rehabilitation 4 (2007), 24.
[53] D. Gopher, and E. Donchin, Workload - An Examination of the Concept, in K.R. Boff, L. Kaufman,
and J.P. Thomas (Eds), Handbook of perception and human performance, New York, Wiley, 1986, pp.
41-1 - 41-49.
[54] V.S. Gurfinkel, P. Ivanenko Yu, S. Levik Yu, and I.A. Babakova, Kinesthetic reference for human
orthograde posture, Neuroscience 68 (1995), 229-43.
[55] B. Isableu, T. Ohlmann, J. Cremieux, and B. Amblard, Selection of spatial frame of reference and
postural control variability, Experimental Brain Research 114 (1997), 584-89.
[56] B. Isableu, T. Ohlmann, J. Cremieux, and B. Amblard, Differential approach to strategies of segmental
stabilisation in postural control, Experimental Brain Research 150 (2003), 208-21.
[57] J. Kluzik, F. B. Horak, and R.J. Peterka, Differences in preferred reference frames for postural
orientation shown by after-effects of stance on an inclined surface, Experimental Brain Research 162
(2005), 474-89.
Introduction
Technology has revolutionized all aspects of medical rehabilitation. The use of robotics,
virtual reality, nanotechnologies, embedded sensors, neuro-imaging, and a host of other
technologies have enhanced health outcomes and allowed researchers to break new
ground and expand their knowledge of the processes of neurological and
musculoskeletal recovery. As this research knowledge and technology development is
1 Corresponding Author: National Rehabilitation Hospital, Center for Applied Biomechanics and Rehabilitation Research, 102 Irving Street, NW, Washington, DC, 20010, USA; E-mail: [email protected].
1. Origins of Telemedicine
Some have suggested that telemedicine can trace its origins to the use of bonfires and
smoke signals, carrier pigeons, and horseback-riding letter carriers to transfer
information related to disease outbreaks, military casualty lists, and requests for
medical assistance [1]. These instances are obviously dwarfed by the scope and speed
of modern telemedicine applications, yet they demonstrate the rationale and potential
for health data to be transmitted from one location to another.
The formal history of telemedicine is directly tied to advancements in technology.
The original "no-tech" methods, described above, were gradually replaced, first by the
telephone and telegraph, and later by radio-transmission, closed-circuit television
signals, and satellite communication. The mid- to late-1990s brought about what some consider to be the 'modern' era of telemedicine, with the advent of digital communications, the growth of electronic medical records, and the rapid proliferation of the Internet, all of which continue to be significant driving forces for telemedicine [1, 2]. Figure 1 offers an approximate timeline illustrating when various forms of ICT were first used for transmitting medical information.
Today, telemedicine has grown to include a broad research knowledge base, a
mounting body of evidence on efficacy and effectiveness, and a rising level of
acceptance among clinicians and clients [1]. Radiology, pathology, and other primarily
image-driven diagnostic specialties have strongly embraced telemedicine as a way to
deliver services faster, more efficiently, more accurately (for example, when advanced
image processing techniques or algorithms are applied), and to a greater number of
people [3]. Videoconferencing consults from larger specialty clinics to rural healthcare
providers are becoming increasingly commonplace around the globe, extending the
reach of clinicians and improving client care [2]. Advancements in ICT coupled with
the rapid development of software, sensors, robotics, digital medical records, and other
equipment have helped telemedicine develop into a key component in the evolution of
modern healthcare.
2. Origins of Telerehabilitation
Much as the earliest examples of telemedicine were opportunistic and often driven by
innovative clinicians making use of equipment that existed for another purpose [2],
telerehabilitation sprang from the application of existing telemedicine tools and
3. Technology
All telemedicine applications, while unique in their purpose, involve the exchange of medical or healthcare information. The people involved in the session (e.g. clinician-to-clinician in a tele-consultation, or clinician-to-client in a treatment encounter), the type of information collected, and the ways in which it is transmitted and displayed vary significantly according to the intervention being delivered. Figure 2 illustrates the
typical information exchange in telemedicine. In a live session, the information
transmission occurs in real-time, while in store-and-forward telemedicine information
is collected and transmitted for review at a later time. It should be noted that this
exchange is most typically bi-directional with information flowing both to and from
each site. In many telecare applications, there may be an added step where the
information is analyzed or processed either when it is collected or when it is received
(e.g. to check if data from a sensor is within pre-defined criteria and notify users as
appropriate).
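The analysis step mentioned above (checking whether data from a sensor falls within pre-defined criteria and notifying users as appropriate) can be sketched as follows. This is a minimal illustration; the criterion names and thresholds are hypothetical, not drawn from any system described in this chapter:

```python
# Hypothetical sketch of a telecare analysis step: incoming sensor
# readings are checked against pre-defined criteria, and an alert is
# raised for any value outside the agreed range.

from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    low: float
    high: float

def check_readings(readings, criteria):
    """Return human-readable alerts for out-of-range values."""
    alerts = []
    for c in criteria:
        value = readings.get(c.name)
        if value is None:
            continue  # store-and-forward data may arrive incomplete
        if not (c.low <= value <= c.high):
            alerts.append(f"{c.name}={value} outside [{c.low}, {c.high}]")
    return alerts

# Illustrative criteria only
criteria = [Criterion("heart_rate_bpm", 50, 110),
            Criterion("grip_force_n", 5, 400)]
print(check_readings({"heart_rate_bpm": 132, "grip_force_n": 80}, criteria))
```

The same check could run at the point of collection (alerting the client) or at the point of receipt (alerting the clinician), matching the bi-directional flow described above.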
Real-time verbal and visual interaction between participants in a telemedicine
encounter occurs through the use of videoconferencing technology. There are a number
of different types of videoconferencing systems, each defined by the type of network
over which they connect and the telecommunications standard which they support (e.g.
H.320, H.323, H.324) [24]. Across all technologies, the quality of the link (typically measured by the smoothness of the video and the clarity of the audio) is directly related to the speed, or bandwidth, of the connection being used. In cases where the videoconference is used for basic conversation or to provide macro-level visual interaction with a client, a lower-fidelity connection such as standard PSTN lines may be sufficient (Figure 3). However, in telemedicine applications where higher-quality video is needed (e.g. assessment of fine motor skill, balance, or tremor; detection of facial affect or emotion), a higher-bandwidth connection may be required. Section 6.1 provides information on
how sensors can be used to augment videoconferencing to provide high-fidelity data on
motion and patient performance during a telemedicine session.
While videoconferencing is a powerful tool for bringing people together across
long distances, in many instances, it is not enough to provide the dynamic interaction
between a clinician and a client that lies at the heart of telerehabilitation. By
incorporating additional types of information exchange between users, a wider range of
telerehabilitation interventions can be delivered. Many traditional rehabilitation
assessment and therapy techniques make use of paper-based materials. In a
telerehabilitation application, these materials can be exchanged in a store-and-forward
fashion through the simple use of a fax machine or e-mail, or alternately in real-time
via computer-based data sharing methods where on-screen material can be used
interactively by participants (as illustrated by the interactive telerehabilitation system
described in Section 6.2).
Today, an increasing number of projects are moving beyond basic videoconferencing to include the types of remote hands-on interaction that were once viewed as impossible for telerehabilitation. Multi-axial position and force sensors (the latest of which are small in size, with wireless communication and low power requirements) provide a tangible measure of the physical performance and function of a remote client. Haptic and robotic technologies let therapists feel a client and impart forces and motion. Environmental sensors and other Smart Home equipment monitor a living space and collect information on a client's interaction with the environment. The data from these devices can be used as part of a remote monitoring
environment. The data from these devices can be used as part of a remote monitoring
application and transmitted in real-time (with or without a simultaneous
videoconference) or be collected, processed, and analyzed using store-and-forward
methods.
Despite the ever-evolving potential that advanced technologies afford clinicians and researchers, it is imperative that the presence of technology have no negative impact
on the interaction between clinician and client. Telerehabilitation technologies must be
developed and implemented such that they facilitate the treatment interventions being
delivered and are usable and accepted by both the clients (and their caregivers) who
will receive the services and the clinicians who will provide them.
Given the broad scope of the field of rehabilitation, the dynamic recovery process,
the need to maintain and prevent deterioration in neurological and musculoskeletal
systems, and the inherent variability of clients receiving treatment, it is difficult to
make a one-size-fits-all recommendation of telerehabilitation technologies. Rather, the clinician, researcher, and/or administrator, in collaboration with the target population,
should first carefully identify the clinical need and relevant constraints (e.g. available
bandwidth, cost, etc.) and use them as a basis from which to select the most suitable
and appropriate technology. Figure 4 illustrates a top-down needs-focused approach for
identifying the appropriate technology for a telerehabilitation application.
The importance of including the client users in this process is described by Ballegaard et al., who advocate that the clinical need for health technology must be supplemented with the 'citizen's perspective', focusing on the everyday life activities, values, expertise, and wishes of the person who will utilize the system [25].
"What I liked about it was that he was so eager to do it. He'd ask me 'shall I do it again?
Shall I do it again?...It's really amazing that he really wanted to do these exercises
much more..."
"Having viewed it visually I'm aware that this elbow swings out...people can see the
difference between what they can do and what they should be doing"
Figure 7. Qualitative Feedback from Home Users of the SMART System
The results provided valuable guidance for further development of the system and
proof of the concept that a robust rehabilitation system can be managed at home and
used to provide useful and motivating feedback within the daily routine of a stroke
survivor. Future projects will expand the SMART system and methodology to develop
a personalized remote therapy system for chronic pain, stroke and chronic heart failure.
6.2. Interactive videoconferencing for remote cognitive-communicative treatment
Following stroke or brain injury, many clients exhibit some degree of cognitive-communicative impairment. Treatment of these impairments by a speech-language pathologist (SLP) typically involves skill-based exercises that are largely based on
drill-and-practice, repetition, and the use of treatment materials such as worksheets and
flash cards. Given the highly verbal and visual nature of this treatment, it is well-suited
for delivery using telerehabilitation technology [48]. While early work in the field pointed to the significant potential for telerehabilitation to deliver numerous cognitive-communicative interventions to remote clients [6, 9, 11, 12, 49], there was a clear need
for technology that could enhance and expand the ways in which clinicians could
interact with their remote clients.
Work at the National Rehabilitation Hospital, in Washington, DC, investigated use
of a customized telerehabilitation system that combined videoconferencing with
interactive data sharing features (Figure 8). The goal was to develop a system that
could augment and extend interaction during a telerehabilitation session so as to enable
a wide range of therapeutic interventions to be delivered to remote clients. In addition
to the basic verbal and visual communication afforded by the videoconferencing
connection, data collaboration functions were designed to serve as a 'virtual desktop' on which the client and clinician were able to work together in real-time using on-screen material (e.g. word processing documents, scanned worksheets, computer applications, or digital drawing whiteboards), just as they would use physical treatment materials in a traditional face-to-face session (Figure 9).
Development of the overall system was based on a UCD framework, which
emphasized effective and usable interface design in addition to traditional
software/system development goals of quality, budget, expandability, and timeliness.
The design of the system GUIs was achieved through iterative prototyping that progressed from low-fidelity designs (e.g. drawings, sketches, and storyboards) to high-fidelity interactive prototypes.
Figure 9. Videoconferencing Interaction with Barrier Drawing Task (NOTE: in this example, the client and the clinician share an on-screen drawing whiteboard, such that the clinician can view the client's drawing via the touchscreen in real-time)
Recommendations
8. Conclusion
The need for evolving the delivery of rehabilitation services and incorporating aspects
of self-care and remote monitoring is somewhat magnified in light of the shift in global
demographics to an older population and the increasing prevalence of chronic health
conditions. Telerehabilitation holds significant potential to meet this need and to
provide services that are more accessible to more people, while offering a more affordable, enhanced level of care.
Despite all of its potential, the evolution of telerehabilitation is not inevitable and it
will not occur on its own. Greater adoption of telerehabilitation will likely occur as a
result of the shift towards user-focused and technology-enabled healthcare, and as an
increased emphasis is placed on preventative and continuous care, rather than
traditional episodic and reactive care. Additionally, there is a crucial need for a greater
body of evidence on clinical effectiveness and cost efficiency of telerehabilitation
programs. Research should look to analyze the behavior change that can and does occur
as a result of telerehabilitation interventions and the impact it has on long-term health
outcomes.
The last decade has seen tremendous growth of telerehabilitation, and this trend is
likely to continue. While most of the past and current focus in telerehabilitation has
been on modifying face-to-face treatment methods for remote delivery, future work
will explore the potential for telerehabilitation to enhance and perhaps even improve
care. While this growth occurs and new approaches for remote service delivery are
explored, great care must be taken to ensure that the planning, design, and
implementation of telerehabilitation technologies, systems, and services is strongly
focused on user-centered human factors principles. Telerehabilitation must never be a
result of 'technology push' alone, rather it must be driven by clinical need and a desire
to improve healthcare.
Acknowledgments
The SMART Consortium (www.thesmartconsortium.org) is funded by the Engineering
and Physical Sciences Research Council (www.fp.rdg.ac.uk/equal). Consortium
members include Sheffield Hallam University, University of Sheffield, University of
Bath, University of Essex and the University of Ulster.
References
[1] J. Craig, and V. Patterson, Introduction to the practice of telemedicine, In R. Wootton, J. Craig, and V. Patterson, Introduction to Telemedicine, Second Edition, London: The Royal Society of Medicine Press, Ltd, 2006, pp. 3-14.
[2] A.C. Norris, Essentials of Telemedicine and Telecare, West Sussex: John Wiley and Sons, Ltd, 2002.
[3] V. Della Mea, Prerecorded telemedicine, In R. Wootton, J. Craig, and V. Patterson, Introduction to Telemedicine, Second Edition, London: The Royal Society of Medicine Press, Ltd, 2006, pp. 3-14.
[4] G.R. Vaughn, Tel-communicology: health-care delivery system for persons with communicative disorders, ASHA 18 (1) (1976), 13-17.
[5] N. Korner-Bitensky, S. Wood-Dauphinee, Barthel Index information elicited over the telephone. Is it reliable?, American Journal of Physical Medicine and Rehabilitation 74 (1) (1995), 9-18.
[6] R. Wertz, N. Dronkers, E. Bernstein-Ellis, Y. Shubitowski, R. Elman, G. Shenaut, R. Knight, and J. Deal, Appraisal and diagnosis of neurogenic communication disorders in remote settings, In R.H. Brookshire, Clinical Aphasiology, Minneapolis: BRK Publishers, 1987, pp. 117-123.
[7] R. Wertz, N. Dronkers, E. Bernstein-Ellis, L. Sterling, Y. Shubitowski, R. Elman, G. Shenaut, R. Knight, and J. Deal, Potential of telephonic and television technology for appraising and diagnosing neurogenic communication disorders in remote settings, Aphasiology 6 (2) (1992), 195-202.
[8] J.R. Duffy, G.W. Werven, and A.E. Aronson, Telemedicine and the diagnosis of speech and language disorders, Mayo Clinic Proceedings 72 (12) (1997), 1116-1122.
[9] A. McCullough, Viability and effectiveness of teletherapy for pre-school children with special needs, International Journal of Language & Communication Disorders 36 (1) (2001), 321-326.
[10] L. Savard, A. Borstad, J. Tkachuck, D. Lauderdale, and B. Conroy, Telerehabilitation consultations for clients with neurologic diagnoses: cases from rural Minnesota and American Samoa, NeuroRehabilitation 18 (2) (2003), 93-102.
[11] D. Theodoros, T.G. Russell, A. Hill, L. Cahill, and K. Clark, Assessment of motor speech disorders online: a pilot study, Journal of Telemedicine & Telecare 9 (2) (2003), S66-68.
[12] D. Brennan, A. Georgeadis, C. Baron, L. Barker, The effect of videoconference-based telerehab on story retelling performance by brain injured subjects and its implications for remote speech-language therapy, Telemedicine Journal and e-Health 10 (2) (2004), 147-154.
[13] R.P. Hauber, M.L. Jones, A.J. Temkin, S. Vesmarovich, V.L. Phillips, Extending the Continuum of Care After Spinal Cord Injury Through Telerehabilitation, Topics in Spinal Cord Injury Rehabilitation 5 (3) (1999), 11-20.
[14] N.C. Dreyer, K.A. Dreyer, D.K. Shaw, and P.P. Wittman, Efficacy of telemedicine in occupational therapy: a pilot study, Journal of Allied Health 30 (1) (2001), 39-42.
[15] E.D. Lemaire, Y. Boudrias, and G. Greene, Low-bandwidth, Internet-based videoconferencing for physical rehabilitation consultations, Journal of Telemedicine and Telecare 7 (2) (2001), 82-89.
[16] P.G. Clark, S.J. Dawson, C. Scheideman-Miller, and M.I. Post, Telerehab: Stroke teletherapy and management using two-way interactive video, Neurology Report 26 (2002), 87-93.
[17] T.G. Russell, P. Buttrum, R. Wootton, and G.A. Jull, Low-bandwidth telerehabilitation for patients who have undergone total knee replacement: preliminary results, Journal of Telemedicine & Telecare 9 (2) (2003), S44-47.
[18] V.L. Phillips, S. Vesmarovich, R. Hauber, E. Wiggers, A. Egner, Telehealth: reaching out to newly injured spinal cord patients, Public Health Reports 116 (1) (2001), 94-102.
[19] B.Q. Tran, K.M. Buckley, and C.M. Prandoni, Selection & use of telehealth technology in support of homebound caregivers of stroke patients, CARING Magazine 21 (3) (2002), 16-21.
[20] H. Zheng, R.J. Davies, N.D. Black, Web-based monitoring system for home based rehabilitation with stroke patients, Proceedings of the 18th IEEE International Symposium on Computer-Based Medical Systems, 2005.
[21] R.M. Bendixen, K. Horn, C. Levy, Using telerehabilitation to support elders with chronic illness in their
homes, Topics in Geriatric Rehabilitation 2 (1) (2007), 47-51.
[22] M.K. Holden, T.A. Dyar, and L. Dayan-Cimadoro, Telerehabilitation using a virtual environment
improves upper extremity function in patients with stroke, IEEE Transactions on Neural Systems and
Rehabilitation Engineering 15 (1) (2007), 36-42.
[23] G. Placidi, A smart virtual glove for hand telerehabilitation, Computers in Biology and Medicine 37 (8)
(2007), 1100-1107.
[24] Tandberg, Video Conferencing Standards, Application Notes D10740, Rev 2.3. Available from:
http://www.tandberg.com/collateral/white_papers/whitepaper_Videoconferencing_standards.pdf
[25] S.A. Ballegaard, et al, Healthcare in everyday life - designing healthcare services for daily life, CHI
proceedings, Florence, Italy, 2008, pp. 5-10.
[26] H.M. Government, Department of Health. Our health, our care, our say: a new direction for community
services, 2006.
[27] D.M. Brennan, L.M. Barker, Human factors in the development and implementation of
telerehabilitation systems, Journal of telemedicine and telecare 14 (2008), 55-58.
[28] A.F. Newell, P. Gregor, Design for older and disabled people - where do we go from here?, Univ Access Inf Soc 2 (2002), 3-7.
[29] W. Gaver, T. Dunne, E. Pacenti, Cultural Probes, Interaction 1 (6) (1999), 21-29.
[30] H. Hutchinson, et al., Technology probes inspiring design for and with families, Proceedings of the
conference on human factors in computing systems, 2002.
[31] T. Mattelmäki, Applying probes - from inspirational notes to collaborative insights, CoDesign 2 (1) (2005), 83-102.
[32] P. Yellowlees, Successful development of telemedicine systems - seven core principles, Journal of Telemedicine and Telecare 3 (1997), 215-223.
[33] A.V. Salvemini, Challenges for user-interface designers of telemedicine systems, Journal of
Telemedicine and Telecare 5 (1999), 163-8.
[34] The Health Foundation, Patient-focused interventions: A review of the evidence, A.Coulter, and J.
Ellins, London, 2006.
[35] Department of Health, Self care, 2008, from http://www.dh.gov.uk/en/Healthcare/Selfcare/index.htm.
[36] G. Foster, S.J.C. Taylor, S.E. Eldridge, J. Ramsay, and C.J. Griffiths, Self-management education
programmes by lay leaders for people with chronic conditions, Cochrane Reviews 2 (2007).
[37] C. Gately, A. Rogers, and C. Sanders, Re-thinking the relationship between long-term condition self-management education and the utilisation of health services, Social Science and Medicine 65 (2007), 934-45.
[38] European Commission Information Society and Media, ICT for Health and i2010: Transforming the
European healthcare landscape towards a strategy for ICT for Health, Luxembourg, 10 (2006), ISBN
92-894-7060-7.
[39] W. Rosamond, et al., Heart disease and stroke statistics: a report from the American Heart Association
Statistics Committee and Stroke Statistics Subcommittee, Circulation 117 (4) (2008), e25-146.
[40] National Audit Office, Reducing Brain Damage: Faster access to better stroke care, London: The
Stationery Office, 2005.
[41] G. Kwakkel, B.J. Kollen, J. van der Grond, A.J. Prevo, Probability of regaining dexterity in the flaccid
upper limb: impact of severity of paresis and time since onset in acute stroke, Stroke 34 (9) (2003),
2181-2186.
[42] G. Kwakkel, B.J. Kollen, R.C. Wagenaar, Long term effects of intensity of upper and lower limb
training after stroke: a randomised trial, Journal of Neurology Neurosurgery and Psychiatry 72 (2002),
473-479.
[43] M. Lotze, C. Braun, N. Birbaumer, S. Anders, and L. Cohen, Motor learning elicited by voluntary drive, Brain 126 (4) (2003), 866-872.
[44] G.A. Mountain, P.M. Ware, J. Hammerton, S.J. Mawson, H. Zheng, R. Davies, N.D. Black, H. Zhou, H.
Hu, N. Harris, and C. Eccleston, The SMART Project: A user led approach to developing and testing
technological applications for domiciliary stroke rehabilitation, In P. Clarkson, J. Langdon, and P.
Robinson, Designing Accessible Technology, Springer-Verlag, London, 2006.
[45] H. Zheng, R. Davies, N.D. Black, P.M. Ware, J. Hammerton, S.J. Mawson, G.A. Mountain, and N.
Harris, The SMART Project: An ICT decision platform for home based stroke rehabilitation system,
Proceedings of the International Conference on Smart Homes and Telematics (2006).
[46] T. Creer, K. Holroyd, Self management, In A. Braun, S. Newman, J. Weinman, C. McManus,
Cambridge handbook of Psychology, Health and Medicine, Cambridge: Cambridge University Press,
1997.
[47] F. Jones, Strategies to enhance chronic disease self-management: How can we apply this to stroke,
Disability and Rehabilitation 28 (13-14) (2006), 841-847.
[48] D. Brennan, A. Georgeadis, C. Baron, Telerehabilitation tools for the provision of remote speech-language treatment, Topics in Stroke Rehabilitation 8 (4) (2002), 71-78.
[49] A. Georgeadis, D. Brennan, L. Barker, C. Baron, Telerehabilitation and its effect on story retelling by
adults with neurogenic impairments, Aphasiology 18 (5/6/7) (2004), 639-652.
[50] D. Brennan, L. Barker, A model of client-clinician interaction for telemedicine in speech-language
pathology, Telemedicine Journal and e-Health 11 (2) (2005), 218-219.
[51] D. Brennan, A. Georgeadis, The use of data sharing in speech-language telemedicine following stroke,
Telemedicine Journal and e-Health 12 (2) (2006), 226.
[52] http://www.continuaalliance.org/home
limb neurorehabilitation from stroke, as well as unilateral brain damage from traumatic brain injury, tumors affecting arm function, and Parkinson's disease [1, 2, 3].
As the population ages and the number of sufferers of the above disabilities grows
[4, 5], the need for effective means of supervising and motivating rehabilitation
activities is rapidly increasing. Importantly, the current standard of care cannot meet the growing need for supervision of rehabilitation activities, both in and especially outside of the clinic. The research work we propose is aimed at addressing this
important problem by developing and validating non-contact robotics technology as a
means to improve task-specific practice and functional outcomes. Specifically, we
propose a general and affordable technology that can provide supplemental therapy,
supervision, and encouragement of functional practice for individuals with impaired
movement capability in an effort to significantly augment in- and out-of clinic care.
Socially Assistive Robotics (SAR) focuses on assisting through social, not physical, interaction [6], and a human-robot therapeutic interaction can therefore offer a feasible and cost-effective method of reaching our goal by maximizing the patient's motivation both during and after structured rehabilitation, such that patients continue practicing beyond the physical therapy session itself. Our long-term goal is to show
that such enhancement of sustained motivation can be achieved by incorporating
contact-free robotic therapy during rehabilitation. This creates a critical niche for SAR,
wherein Human-Robot Interaction (HRI) can be used not to replace physical or
occupational therapists, but to become frequently and readily available individualized
rehabilitation aids. By providing the opportunity for time-extended monitoring and
encouragement of rehabilitation activities in any setting (at the clinic or at home), these
systems complement human care [7, 8, 9, 10, 11, 12].
In this developmental/exploratory research work, we illustrate some of the key
factors that impact user acceptance and practice efficacy in improving self-efficacy of
paretic arm use through human-robot social interaction while optimizing functional
performance and recovery. We describe a pilot study involving an autonomous
assistive mobile robot that aids stroke patient rehabilitation by providing monitoring,
encouragement, and reminders. We also show some preliminary results focused on the benefits of mirroring user personality in the robot's behavior and of user modeling for adaptive and natural assistive behaviors. All of these are aimed at improving the human-robot social interaction while at the same time enhancing the user's task performance in daily activities and rehabilitation activities. Finally, we outline and discuss future work and factors toward the development of effective socially assistive rehabilitation robots.
1. Defining the Need and New Insights for the Hands-Off Robotic Rehabilitation
The technology described in this research features a novel, non-contact approach to
robotics-based upper extremity rehabilitation. Our approach is original and promising
in that it combines several ingredients that individually have been shown to be
important for learning and long-term efficacy in motor neurorehabilitation: (1) intensity
of task-specific training and (2) engagement and self-management of goal-directed actions. These two guiding principles are incorporated into the development and testing of an engaging, user-friendly, home-based robotic therapy for accelerated recovery of upper-extremity function after hemiparetic stroke, building on our pilot results in novel human-robot interaction [7, 8, 13, 14, 15].
available to the patient during the recovery process. To make significant advances in
the field of motor rehabilitation, we need a better understanding of the critical factors
that underlie the recovery process at the behavioral, psychological, and pathological
levels, and the specific ways that therapeutic interventions modulate that recovery
process across these levels. For these reasons, we propose a concerted multidisciplinary
collaboration between engineering, computer and clinical sciences that will develop
and evaluate cost-effective, evidence-based upper extremity rehabilitation programs
aimed specifically at the promotion of engaging, motivating human-robot interaction
for accelerated recovery of function.
Robot Test-bed
The robot used for our experiments, shown in Figure 1, consisted of an ActivMedia
Pioneer 2-DX mobile robot base, equipped with a SICK LMS200 laser rangefinder
used to track and identify people in the environment by detecting reflective fiducials
worn by users. A Sony pan-tilt-zoom (PTZ) camera allowed the robot to look at and
away from the participant, shake its head (camera), and make other communicative
actions. A speaker produced pre-recorded or synthesized speech and sound effects. The
IMU-based motion capture unit provided movement data to the robot wirelessly in real
time. The entire robot control software was implemented using the Player robot control
system [33].
Design
This study [7, 8] focused on how different robot behaviors may affect the patient's
willingness to comply with the rehabilitation program. Our main goal was to test different voices, movements, and levels of patience on the part of the robot, and to correlate those with participant compliance, i.e., adherence to the activities.
The robot was able to safely move about the environment without colliding with
objects or people. This was achieved through the use of a laser sensor, which provided high-fidelity information in real time. Moreover, the robot was able to find and follow
the patient, maneuver itself to an appropriate position for monitoring the patient, and
leave when it was not wanted. This was achieved through the use of highly reflective
markers worn on the leg of the patient (Figure 2), in order for the robot to reliably
detect and recognize the patient.
Figure 1. The Pioneer mobile robot base used in the experiments. Shown are the laser (the blue box), camera
(mounted on top of the laser), and speakers (mounted on each side of the laser).
Figure 2. Two hands-off robot-assisted rehabilitation tasks: (a) magazine stacking, showing the x,y,z motion sensor, laser rangefinder, and robot; (b) free movement of the stroke-affected limb, showing the laser-reflective band.
The robot was able to monitor the movement of the stroke-affected limb. We used a lightweight, low-cost inertial measurement unit (IMU). The patient wore a marker on the wrist, which provided its 3D position to the robot in real time through wireless communication. The robot used this information about the movement of the patient's limb to encourage the patient to continue using the limb, or to use the limb more or in a different way, as appropriate given the sensor data and the goal movement.
The robot was capable of using three distinct interaction modes, as follows:
I. The robot said nothing, and gave feedback only with different beeping sounds.
The robot's presence also served to remind the patient of the activity. The robot
kept at a distance from the patient and was not very persistent in encouraging
the patient.
II. The robot used a robotic-sounding synthesized voice for its communication
with the patient. It gave simple verbal feedback, including: "It looks like you
are not using your arm," "Have you already shelved the books?", and "Great,
keep up the good work." It maintained a shorter distance to the patient than in
the first mode and, when the patient was not reacting to the encouragement by
continuing the activity, was more persistent before giving up and going away.
III. The robot used a pre-recorded friendly human voice, with humor and
engagement. It stayed with the patient and followed him/her around,
persistently encouraging the patient to perform the activity. It also used body
movement, wiggling back and forth, side to side, and turning around.
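The three modes differ along a few simple dimensions (voice, distance kept, persistence, body motion). A minimal configuration sketch follows; the field names and numeric values are illustrative assumptions, not taken from the original system:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InteractionMode:
    voice: str            # "none", "synthesized", or "recorded human"
    distance_m: float     # preferred distance from the patient (illustrative)
    persistence: int      # relative persistence of encouragement (0 = low)
    uses_body_motion: bool

# Mode I keeps its distance and barely insists; mode III stays close,
# persists, and adds expressive body movement.
MODES = {
    "I":   InteractionMode("none",           2.0, 0, False),
    "II":  InteractionMode("synthesized",    1.5, 1, False),
    "III": InteractionMode("recorded human", 1.0, 2, True),
}
```

Encoding the modes as data rather than branching logic makes it straightforward to randomize their order across participants, as the experiments below require.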
Experiments
This system was evaluated in three short experiments at the USC Center for Health
Professions on the Health Sciences Campus and in the USC Robotics Lab on the
University Park Campus. Two of these were conducted with patients, and one with
non-patients. Of the six stroke patients, two were women; the participants ranged in
age from 65 to 75. The stroke impairment occurred on
different limbs among the patients but all were sufficiently mobile to perform the
activities in the experiments. All experiments were video recorded and comprised
several experimental runs involving three randomly selected types of interaction for
each participant. The participants were asked by the robot to perform one of the
experimental tasks: shelving books/magazines or any voluntary movement of the
stroke-affected limb. The robot measured arm movement as an averaged derivative of
the arm angle. In the shelving task, the robot counted how many books the patient
put on the shelf by monitoring the movement of the arm. Hence, it was possible to fool
the robot by merely lifting the arm without any books; this was discovered by one of
the patients. In our newly designed experiments this possibility is eliminated. The
overall measure of performance the robot used was the length of time the patient
persisted in the chosen activity.
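The averaged-derivative measure of arm movement described above can be sketched as follows; the function name and the fixed-interval sampling model are assumptions for illustration:

```python
from typing import Sequence

def arm_activity(angles: Sequence[float], dt: float) -> float:
    """Average absolute rate of change of the arm angle (degrees/second).

    `angles` holds arm-angle samples from the wrist-worn motion sensor,
    taken a fixed `dt` seconds apart.
    """
    if len(angles) < 2 or dt <= 0:
        return 0.0
    # Sum of |finite differences| divided by total elapsed time gives
    # the averaged derivative used as the activity measure.
    total = sum(abs(b - a) for a, b in zip(angles, angles[1:]))
    return total / ((len(angles) - 1) * dt)
```

Note that a measure like this responds to any arm motion, which is exactly why lifting the arm without a book could fool the shelving count.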
At the start of the experiment, the patient was presented with a written one-page
introduction to the experiment, followed by a simple questionnaire. Next, the robot was
introduced. The order of presentation of the three different modes of interaction was
randomized. After the patient performed both activities in all three modes (totaling six
experiments per patient), a second questionnaire was presented. Finally, an exit
interview solicited patient impressions and opinions and the experiment was concluded.
Results
We investigated the participants' response to the robot and to the different interaction
modes. The pilot results are positive; generally, the robot was received well by the
participants, and the participants expressed consistent preferences in terms of robot
voices and interface technologies. Some participants continued to perform the activity
beyond the end of the experiment, providing further evidence of improved
compliance in the robot condition well beyond any novelty effect. The design of the
study emphasized the user's response to the robot's behavior. Furthermore, as expected,
there were significant personality differences among the patients; some were highly
compliant but appeared un-engaged by the robot, while others were highly engaged and
even entertained, but got involved in playing with the robot rather than performing the
prescribed exercises. All this leads toward interesting questions of how to define
adaptive robot-assisted rehabilitation protocols that will serve the variety of patients as
well as the time-extended and evolving needs of a single participant. We address
some of these questions in the next study, described below. Video transcripts of the
experiments can be found online [34]. The details of this study have been reported
in [7, 8].
3. Personality-Matching Study
Our previously described experiment with a SAR system we developed, which monitored
and encouraged stroke patients to perform rehabilitation activities, demonstrated that
personality differences had a strong impact on the way the users interacted with the
robot. While all patients reported having enjoyed the robot, task performance ranged
from strict adherence to the robot's instructions but no obvious engagement, to playful
engagement and even repeated attempts to trick the robot. It is known that pre-stroke
personality has a great influence on post-stroke recovery [4]; subjects classified as
extroverted before the stroke mobilize their strength to recover more easily than do
introverted subjects [35]. Further, work in human-computer interaction (HCI) has
demonstrated the similarity-attraction principle, which posits that individuals are more
attracted to others manifesting the same personality as their own [36, 37, 38]. Little
research to date has addressed personality in human-robot social interactions, and no
work has yet addressed the issue of personality in the assistive human-robot interaction
context.
The research question addressed in this study was as follows:
Is there any relationship between the extroversion-introversion personality
spectrum and the challenge-based vs. nurturing style of patient encouragement?
Experimental Design
We [7, 8] performed a series of experiments in which the simple mobile robot depicted
in Figure 1, equipped with a camera and a microphone, interacted with a (healthy,
30-year-old) user in an experimental scenario designed for post-stroke rehabilitation
activities (see Figure 3).

Figure 3. The participant performing the turning-pages-of-a-newspaper task with the robot at a social
distance. The laser fiducial is on the participant's right leg, the motion capture sensor is on the right arm,
and a microphone is worn on standard headphones.

The participants were asked to perform four tasks (designed as
functional activities) similar to those used during standard stroke rehabilitation:
drawing up and down, or left and right on an easel; lifting and moving books from a
desktop to a raised shelf; moving pencils from one bin to another; and turning pages of
a newspaper. The subject pool for this experiment consisted of 19 participants (13 male,
6 female; 7 introverted and 12 extroverted). The participants completed a set of
questionnaires before the experiment, which were used to assess their personality traits
using the Eysenck biologically-based model [39]. The resulting personality assessment,
based specifically on the extroversion-introversion dimension, was used to determine
the robot's personality. Our behavior control architecture is based on Bandura's
model of reciprocal influences on behavior [40]. The robot expressed its personality
through several means: (1) proxemics (the social use of space; the extroverted personalities
used smaller personal distances) [41]; (2) speed and amount of movement (the
extroverted personalities moved more and faster); and (3) vocal content (the
extroverted personalities talked more aggressively ("You have done only x movements,
I'm sure you can do more!"), using a challenge-based style, compared to a nurture-based
style ("I know it's hard, but remember it's for your own good.") on the
introversion end of the personality spectrum). The robot used the arm motion capture
data to monitor user activity and to determine whether the activity was being performed.
The experiment compared personality-matched vs. personality-mismatched (random)
conditions.
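The mapping from the user's extroversion-introversion score to the three behavioral channels can be sketched as a single function; the numeric ranges and the 0-1 score scale are illustrative assumptions, not the study's actual values:

```python
def robot_behavior(extroversion: float) -> dict:
    """Map a user's extroversion score in [0, 1] to robot interaction
    parameters, following the personality-matching scheme in the text.
    All numeric ranges here are illustrative, not the study's values.
    """
    e = min(max(extroversion, 0.0), 1.0)
    return {
        # (1) Proxemics: extroverted personalities used smaller distances.
        "personal_distance_m": 2.0 - 1.2 * e,
        # (2) Speed/amount of movement: extroverts moved more and faster.
        "speed_scale": 0.5 + 0.5 * e,
        # (3) Vocal content: challenge-based vs. nurture-based style.
        "vocal_style": "challenge" if e >= 0.5 else "nurture",
    }
```

In the mismatched (random) condition, the same function would simply be fed a score unrelated to the user's assessed personality.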
Our hypotheses were as follows:
H1: A robot that challenges the user during rehabilitation therapy rather than
praising her/him will be preferred by users with extroverted personalities and
will be less appealing to users with introverted personalities.
H2: A robot that focuses on nurturing praise rather than on challenge-based
motivation during the training program will be preferred by users with
introverted personalities and will be less appealing to users with extroverted
personalities.
Results
The system evaluation was performed based on user introspection (questionnaires).
After each experiment, the participant completed two post-experiment questionnaires
designed to evaluate the impression of the robot's personality (e.g., "Did you find the
robot's character unsociable?") and the interaction with the robot (e.g., "The
robot's personality is a lot like mine."). All questions were presented on a 7-point
Likert scale ranging from "strongly agree" to "strongly disagree". The data obtained
from the questionnaires showed that the robot's personality was
fundamental to the interaction, and two statistically significant results were found
(ANOVA validation): (1) participants consistently performed better on the task (more
pages turned, more sticks moved, etc.) when interacting with the personality-matched
robot; (2) both extroverted and introverted participants reported preferring the
personality-matched robot. More details about this study can be found in [10].
Methodology
The problem was formulated as policy gradient reinforcement learning (PGRL), and it
consisted of the following steps: (a) parameterization of the robot's overall behavior
(including all parametric components listed above); (b) approximation of the gradient
of the reward function in the parameter space; and (c) movement toward a local
optimum. This methodology allowed us to dynamically optimize the interaction
parameters: interaction distance/proxemics, speed, and vocal content (what the robot
says and how it says it) [11]. Proxemics involved three zones (all beyond the minimal
safety area), activity was expressed through the amount of robot movement, and vocal
content varied from nurturing ("You are doing great, please keep up the good work.")
to challenging ("Come on, you can do better than that."), and from extroverted
(higher-pitched tone and louder volume) to introverted (lower-pitched tone and lower
volume), in accordance with the well-established personality theories referred to earlier.
These define the behavior, and thus the personality, of the therapist robot, which is
adapted to the user's personality in order to improve the user's task performance. Task
performance is measured as the number of movements performed and/or time-on-task,
depending on the nature of the trial.
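Steps (a)-(c) can be sketched as a generic PGRL update that estimates the reward gradient from randomly perturbed parameter settings; this is a minimal sketch of the general technique, with illustrative constants, not the study's implementation:

```python
import random

def pgrl_step(params, reward, epsilon=0.05, alpha=0.1, n_probes=8):
    """One policy-gradient RL step: (a) behavior is the vector `params`;
    (b) the reward gradient is approximated by probing perturbed parameter
    settings; (c) the parameters move toward a local optimum.
    `reward(p)` stands in for measured user activity; constants are
    illustrative assumptions."""
    grad = [0.0] * len(params)
    base = reward(params)
    for _ in range(n_probes):
        # Perturb each parameter by +eps, 0, or -eps at random.
        delta = [random.choice((-epsilon, 0.0, epsilon)) for _ in params]
        score = reward([p + d for p, d in zip(params, delta)]) - base
        # A probe that raised the reward pulls each perturbed
        # parameter in the direction it was perturbed.
        for i, d in enumerate(delta):
            if d != 0.0:
                grad[i] += score * (1.0 if d > 0 else -1.0)
    # Ascend the averaged gradient estimate.
    return [p + alpha * g / n_probes for p, g in zip(params, grad)]
```

In the rehabilitation setting, each "probe" corresponds to a stretch of interaction under slightly varied distance, speed, and vocal-content parameters, with the amount of performed activity serving as the reward.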
The robot incrementally adapted its behavior, and thus its expressed personality, as
a function of the (healthy) user's extroversion-introversion level and the amount of
performed activity, attempting to maximize that amount. The result was a novel
stroke/TBI rehabilitation tool that has the potential to provide an individualized and
appropriately challenging/nurturing therapy style that may measurably improve user
task performance.
Experimental Design
We designed two different experiments to test the adaptability of the robot's behavior
to the participant's personality and preferences. The experimental task was a common
object-transfer task: moving pencils from one bin to another (see Figure 4).
Results
The experimental results provided the first evidence for the effectiveness of robot behavior
adaptation to user personality and performance: (non-disabled) users tended to perform
more or longer trials under the personality-matched and therapy-style-matched
conditions. The latter refers to nurturing styles being correlated with the introversion
side of the personality spectrum, and challenging styles with the extroversion side.
A more detailed description is given in [11].
Figure 4. Participant performing the object transfer task: moving pencils from one bin to another.
4. Conclusions
We have presented a research program aimed at developing non-contact socially
assistive robot therapists intended for monitoring, assisting, encouraging, and socially
interacting with users during the motor rehabilitation process. Our first results
demonstrated user acceptance of the robot. Our next round of results validated that
mirroring user personality in the robot's behavior during the hands-off therapy process
improves task performance on rehabilitation activities. Finally, our last round of
results demonstrated the robot's ability to adapt its behavior to the user's personality
and preferences.
Our ongoing work is aimed at evaluating the described approach in a time-extended
user study with a large group of post-stroke participants. The longitudinal
study will allow us to eliminate the effects of novelty, and will also provide the robot
with the opportunity for richer learning and adaptation algorithms. Our robots are
designed to be subordinate to the participants' desires and preferences, thereby promoting
patient-centered practice and avoiding the complex issues of taking control away and
dehumanizing health care [42]. Our ultimate goal is to develop technology-assisted
therapy methods that can augment the current standard of care in order to meet the
growing need for personalized care indicated by population demographics.
Acknowledgements
This work was supported by the USC Women in Science and Engineering (WiSE)
Program, the Okawa Foundation, and National Science Foundation grants #IIS-0713697
and #CNS-0709296.
References
[1] C.J. Winstein and S.L. Wolf, Task-oriented training to promote upper extremity recovery, in J. Stein,
R.L. Harvey, R.F. Macko, C.J. Winstein, and R.D. Zorowitz, Eds., Stroke Recovery and Rehabilitation,
Demos Medical, New York, New York, 2008, pp. 267-290.
[2] B.E. Fisher, A.D. Wu, G.J. Salem, J. Song, C.H. Lin, J. Yip, S. Cen, J. Gordon, M. Jakowec, and G.
Petzinger, The effect of exercise training in improving motor performance and corticomotor excitability
in people with early Parkinson's disease, Archives of Physical Medicine and Rehabilitation 89 (2008),
1221-1229.
[3] M.J. Watson and R. Hitchcock, Recovery of walking late after a severe traumatic brain injury,
Physiotherapy 80 (2008), 103-107.
[4] American Heart Association, Heart disease and stroke statistics, American Heart Association and
American Stroke Association, 2003.
[5] D.J. Thurman, C. Alverson, K.A. Dunn, J. Guerrero, and J.E. Sniezek, Traumatic brain injury in the
United States: A public health perspective, Journal of Head Trauma Rehabilitation 14 (6) (1999),
602-615.
[6] D. Feil-Seifer and M.J. Matarić, Defining socially assistive robotics, in Proc. IEEE International
Conference on Rehabilitation Robotics (ICORR'05), Chicago, IL, USA, 2005, pp. 465-468.
[7] J. Eriksson, M.J. Matarić, and C. Winstein, Hands-off assistive robotics for post-stroke arm
rehabilitation, in Proc. IEEE International Conference on Rehabilitation Robotics (ICORR'05),
Chicago, IL, USA, 2005, pp. 21-24.
[8] M. Matarić, J. Eriksson, D. Feil-Seifer, and C. Winstein, Socially assistive robotics for post-stroke
rehabilitation, Journal of NeuroEngineering and Rehabilitation 4 (5) (2007).
[9] A. Tapus and M.J. Matarić, Towards socially assistive robotics, Journal of the Robotics Society of
Japan 24 (5) (2006), 14-16.
[10] A. Tapus and M.J. Matarić, User personality matching with hands-off robot for post-stroke
rehabilitation therapy, in Proc. International Symposium on Experimental Robotics (ISER'06), Rio de
Janeiro, Brazil, 2006.
[11] A. Tapus, C. Tapus, and M.J. Matarić, User-robot personality matching and assistive robot behavior
adaptation for post-stroke rehabilitation therapy, Intelligent Service Robotics, Special Issue on
Multidisciplinary Collaboration for Socially Assistive Robotics 1 (2) (2008), 169-183.
[12] A. Tapus and M.J. Matarić, Towards active learning for socially assistive robots, in Proc. Neural
Information Processing Systems (NIPS-07), Workshop on Robotics Challenges for Machine
Learning, Vancouver, Canada, 2007.
[13] L. Ada, C.G. Canning, J. Carr, S.L. Kilbreath, and R. Shepherd, Task-specific training of reaching and
manipulation, in K. Bennett and U. Castiello, Eds., Insights into the Reach to Grasp Movement,
Elsevier, Amsterdam, (105) (1994), 239-265.
[14] J. Carr, and R.B. Shepherd, A motor learning model for stroke rehabilitation, Physiotherapy 75 (1989),
372-380.
[15] K. Hellstrom, B. Lindmark, B. Wahlberg, and A.R. Fugl-Meyer, Self-efficacy in relation to
impairments and activities of daily living disability in elderly patients with stroke: A prospective
investigation, Journal of Rehabilitation Medicine 35 (5) (2003), 202-207.
[16] M. Kelly-Hayes, and J.T. Robertson, The American Heart Association stroke outcome classification.
Stroke 29 (6) (1998), 1274-1280.
[17] J.P. Broderick., M. William, Feinberg lecture: Stroke therapy in the year 2025: Burden, breakthroughs,
and barriers to progress, Stroke 35 (1) (2004), 205-211.
[18] T.S. Olsen, Arm and leg paresis as outcome predictors in stroke rehabilitation, Stroke (2) (1990),
247-251.
[19] R.J. Nudo, and E.J. Plautz et al., Role of adaptive plasticity in recovery of function after damage to
motor cortex, Muscle Nerve 24 (8) (2001), 1000- 1019.
[20] E.G. Taub, and G. Uswatte et al., Improved motor recovery after stroke and massive cortical
reorganization following constraint-induced movement therapy, Journal of Medical Physics 14 (1)
(2003), 77-91.
[21] J. Desrosiers, F. Malouin, D. Bourbonnais, C.L. Richards, A. Rochette, and G. Bravo, Arm and leg
impairments and disabilities after stroke rehabilitation: relation to handicap, Clinical Rehabilitation (6)
(2003), 666-673.
[22] H. Smits, and E.C. Smits-Boone, Hand recovery after stroke: exercise and results measurements,
Boston, MA, Butterworth-Heinemann, 2000.
[23] J. Van der Lee, R.C.Wagenaar, G. Lankhorst, T.W. Vogelaar,W.L. Deville, and L.M. Bouter, Forced
use of the upper extremity in chronic stroke patients; results from a single-blind randomized clinical
trial, Stroke 30 (1999), 2369-2375.
[24] D. Reinkensmeyer, M. Averbuch, A McKenna-Cole, B.D. Schmit, W.Z. Rymer, Understanding and
treating arm movement impairment after chronic brain injury: Progress with the arm guide, Journal of
Rehabilitation Research and Development 37(6), 2000.
[25] J. Schaechter, E. Kraft, T.S. Hilliard, R.M. Dijkhuizen, T. Benner, S.P Finklestein, B.R. Rosen, and S.C.
Cramer, Motor recovery and cortical reorganization after constraint-induced movement therapy in
stroke patients: a preliminary study, Neurorehabilitation and Neural Repair 16 (4) (2002),326-338.
[26] C. Winstein, D.K. Rose, S.M. Tan, R. Lewthwaithe, H.C. Chui, and S.P. Azen, A randomized
controlled comparison of upper extremity rehabilitation strategies in acute stroke: a pilot study of
immediate and long-term outcomes, Archives of Physical Medicine and Rehabilitation 85 (2004),
620-628.
[27] S.L. Wolf, C. Winstein, P.J. Miller, P.A. Thompson, E. Taub, G. Uswatte, D. Morris, S. Blanton, D.
Nichols-Larsen, P.C. Clark., Retention of upper limb function in stroke survivors who have received
constraint-induced movement therapy: the EXCITE randomized trial, The Lancet Neurology 7 (1)
(2008), 33-40.
[28] S.L. Wolf, C. Winstein, J.P. Miller, E. Taub, G. Uswatte, D. Morris, C. Giuliani, K.E. Light, D.
Nichols-Larsen, EXCITE Investigators. Effect of constraint-induced movement therapy on upper
extremity function 3 to 9 months after stroke: the EXCITE randomized clinical trial, Journal of the
American Medical Association 296 (17) (2006), 2095-104.
[29] S. Hesse, G. Schulte-Tigges, et al., Robot-assisted arm trainer for the passive and active practice of
bilateral forearm and wrist movements in hemiparetic subjects, Archives of Physical Medicine and
Rehabilitation 84 (6) (2003), 915-20.
[30] S. Hesse, and C. Werner , Poststroke motor dysfunction and spasticity: novel pharmacological and
physical treatment strategies, CNS Drugs 17 (15) (2003), 1093-107.
[31] B.R. Brewer, R. Klatzky, and Y. Matsuoka, Feedback distortion to overcome learned nonuse: A system
overview, IEEE Engineering in Medicine and Biology (2003), 1613-1616.
[32] C. Burgar, P. Shor, and H.F. Van Der Loos, Development of robots for rehabilitation therapy: the Palo
Alto VA/Stanford experience, Journal of Rehabilitation Medicine 37 (6) (2000), 663-673.
[33] B. Gerkey, R. Vaughan, and A. Howard, The Player/Stage Project: Tools for Multi-Robot Distributed
Sensor Systems, in Proceedings of the International Conference on Advanced Robotics, Coimbra,
Portugal; 2003, pp. 317-323.
[34] Interaction Lab: Human Robot Interaction for Post-Stroke Recovery Robot Project Page
[http://www-robotics.usc.edu/interaction/?l=Research:Projects:post_stroke:index]
[35] M. Ghahramanlou, J. Arnoff, M.A. Wozniak, S.J. Kittner, and T.R. Price, Personality influences
psychological adjustment and recovery from stroke, in Proc. of the American Stroke Association's 26th
International Stroke Conference, Fort Lauderdale, USA, 2001.
[36] H. Nakajima, Y. Morishima, R. Yamada, S. Brave, H. Maldonado, C. Nass, and S. Kawaji, Social
intelligence in a human-machine collaboration system: Social responses to agents with mind model and
personality, Journal of the Japanese Society for Artificial Intelligence 19 (3) (2004), 184-196.
[37] H. Nakajima, C. Nass, R. Yamada, Y. Morishima, S. Kawaji, The functionality of human-machine
collaboration systems mind model and social behavior, In Proc. of the IEEE Conference on Systems,
Man and Cybernetics, Washington, USA, 2003, pp. 2381-2387.
[38] C. Nass and M.K. Lee, Does computer-synthesized speech manifest personality? Experimental tests of
recognition, similarity-attraction, and consistency-attraction, Journal of Experimental Psychology:
Applied 7 (3) (2001), 171-181.
[39] H.J. Eysenck, Dimensions of personality: 16, 5 or 3? criteria for a taxonomic paradigm, Personality and
Individual Differences 12 (1991), 773-790.
[40] A. Bandura, Principles of behavior modification, Holt, Rinehart & Wilson, New York, USA, 1969.
[41] E.T. Hall, The Hidden Dimension, Doubleday, Garden City, NY, 1966.
[42] Institute of Medicine, Crossing the quality chasm: A new health care system for the 21st century,
Washington, D.C.: National Academy Press, 2001.
1. Introduction
The rapid development of Virtual Reality (VR)-based technologies over the past
decade is both an asset and a challenge for neuro-rehabilitation. The availability of
novel technologies that provide interactive, functional simulations with multimodal
feedback (visual, auditory and, less frequently, haptic, vestibular, and olfactory
channels) enables clinicians to achieve traditional therapeutic goals that would be
difficult, if not impossible, to attain via conventional therapy. For example, the practice
of functional skills, such as street crossing or supermarket shopping, is inconvenient
and sometimes dangerous for clients with brain damage when it takes place in real
settings. These technologies also lead to the creation of novel clinical paradigms. For
example, the use of instrumented tangible cubes that control virtual building blocks
enables a clinician to assess the constructional ability of children with Developmental
Coordination Disorder under dynamic conditions [1].
In applications of rehabilitation for both motor and cognitive deficits, the main
focus of much of the early exploratory research has been to investigate the use of VR as
an assessment tool [2, 3]. More recently researchers have been striving to develop and
evaluate VR-based intervention strategies. Examples include the use of realistic
functional simulations, tele-rehabilitation, and home-based therapy. For example, the
IREX video capture VR system has been used to improve ankle movements in children
with Cerebral Palsy [4] and a customized speech training program has been used to
augment therapy for clients with stroke who have been discharged home [5].
In this chapter, we begin by providing a short overview of how virtual
environments (VE) first began to be implemented for the purposes of cognitive or
motor rehabilitation. To date such environments are primarily: (a) single user (i.e.,
designed for and used by one clinical client at a time) and (b) used locally within a
clinical or educational setting (see Figure 1). The clinical attributes of such systems
will be illustrated via two examples: the VAP-S [6] and the IREX VMall [7].
Researchers developed these technologies to enhance conventional assessment [8]
and therapy [9, 10] with the aid of VEs; a single user in a particular location
experiences a VR-based clinical session under the local supervision of a therapist. The
potential of VR assets for rehabilitation is now well known [2, 10]. They include real-time
interaction, objective outcome measures that are documented, and repeated
delivery of virtual stimuli within simulated functional environments that are graded in
difficulty and context. A variety of studies have begun to demonstrate the validity of
VR use in neuropsychology and rehabilitation [11, 13].
In recent years, we have observed a "push-pull" phenomenon that is leading to an
increase in the application of VR technologies for rehabilitation. The "push" emanates
from the continuous development of novel technologies, their readier availability
in clinical settings, and lowered costs. The "pull" stems from clients, clinicians, and
third-party payers who recognize the need for treatment that goes beyond conventional
therapy. As indicated above, VR-based therapy given to single users in local settings
has been driven by this push-pull phenomenon. Very recently, however, efforts have
been made to expand to approaches designed to support multiple users and remote
locations. Figure 2 presents a revised version of Figure 1, showing three additional
avenues: multiple users in co-located settings (Arrow 1), single users in remote
locations (Arrow 2), and multiple users in remote locations (Arrow 3). The latter two
approaches are often referred to as tele-rehabilitation. This evolution in the use of VEs
in rehabilitation raises research questions and ethical considerations that we address
below.
Figure 2. Moving beyond single user and local virtual environments for rehabilitation
chooses the same item twice; 2) chooses a check-out counter without any cashier; 3)
leaves the supermarket without purchasing anything or without paying; or 4) stays in
the supermarket after the purchases. A training task which is similar, but not identical,
to the test is also available to enable the user to get acquainted with the VE and the
tools. The task-related instructions are, at first, written on the screen and the target
items to purchase are displayed on the right side of the screen. As the client progresses
with the purchases, the items appear in the cart and disappear from the screen. The
cashier-related instructions are verbal and are given before the beginning of the session.
While sitting in front of a PC monitor, the client enters the supermarket
behind the cart as if pushing it, and moves around freely by pressing the keyboard
arrows. He is able to experience the environment from a first-person perspective
without any intermediating avatar. The client is able to select items by pressing the left
mouse button. If the item selected is one of the items on the list it will transfer to the
cart. At the cashier check-out counter, the client may place the items on the conveyor
belt by pressing the left mouse button with the cursor pointing to the belt. He may also
return an item placed on the conveyor belt to the cart. By clicking on the purse icon, the
client may pay and proceed to the supermarket exit.
The VAP-S records various outcome measures (position, times, actions) while the
participant experiences the VE and executes the task. At least eight variables can be
calculated from the recorded data: the total distance in meters traversed by the patient
(referred to as the trajectory), the total task time in seconds, the number of items
purchased, the number of correct actions, the number of incorrect actions, the number
of pauses, the combined duration of pauses in seconds, and the time to pay (i.e., the
time between when the cost is displayed on the screen and when the participant clicks
on the purse icon). A review of the performance is available from a bird's-eye view,
i.e., from above the scene (see white traces in Figure 4 and Figure 5).
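Several of these variables fall out of a simple position-and-time log. A sketch of how the trajectory, task time, and pause duration could be derived from recorded (time, x, y) samples follows; the function name, log format, and pause-speed threshold are illustrative assumptions, not the VAP-S internals:

```python
from math import hypot

def vaps_measures(log, pause_speed=0.05):
    """Derive three VAP-S-style outcome measures from a log of (t, x, y)
    samples: total trajectory length in meters, total task time in
    seconds, and combined pause duration in seconds. A 'pause' here is
    any interval whose average speed falls below `pause_speed` m/s
    (an illustrative threshold)."""
    trajectory = 0.0
    pause_time = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(log, log[1:]):
        step = hypot(x1 - x0, y1 - y0)   # distance covered this interval
        trajectory += step
        dt = t1 - t0
        if dt > 0 and step / dt < pause_speed:
            pause_time += dt
    task_time = log[-1][0] - log[0][0] if log else 0.0
    return trajectory, task_time, pause_time
```

Counting correct and incorrect actions would additionally require the action events (item selections, payments) recorded alongside the positions.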
The initial design of the VAP-S was carried out in the context of research on
Parkinson's disease (PD) and the elderly. Its purpose was first to test the feasibility of
the VAP-S for elderly people, and second to investigate the capacity of the VAP-S to
discriminate between patients with PD and age-matched control subjects. Five patients
with PD (two females, three males; age 74.0 ± 5.4 years) and five age-matched healthy
controls (four females, one male; age 66.6 ± 7.7 years) were recruited according to the
inclusion criteria [6, 15]. A debriefing period allowed Klinger et al. to collect the
participants' feedback: they understood the task and VAP-S usage well and, thanks to
the training session, easily became familiar with the VR interface. One limitation
(related to the correct distance to maintain from the shelves) was noted and revised in
subsequent versions of the VAP-S. The performance results underlined a behavioral
difference between patients with PD and controls: patients needed more time to execute
the task and covered a longer distance. This difference was not related to motor
difficulties, since both groups navigated with the keyboard keys at the same speed. It is
rather related to the patients' hesitations, numerous stops, and searches for products in
places that did not match the products' positions in the VAP-S (see Figure 5). These
data reveal the alteration of temporal and spatial organization in PD patients [17].
Moreover, the review of the trajectory was appreciated by both the participants and the
therapist.
The original VAP-S was then adapted by E. Klinger in 2005 for use by an Israeli
population; the names of the aisles and grocery items, as well as all the elements of the
task, were translated into Hebrew [18, 19]. The purpose of the study was first to test the
feasibility of the VAP-S for post-stroke patients, and second to examine the
relationships between performance within the VAP-S and standard outcome measures
of executive functions. Twenty-six post-stroke patients participated in the study [18]. In
order to predict problems in everyday activities, they were also assessed with the six
performance subtests of the Behavioural Assessment of the Dysexecutive Syndrome
(BADS) [20], which cover various aspects of the dysexecutive syndrome such as
difficulty in planning and abstract thinking. Performance results showed the feasibility
of the VAP-S for use by post-stroke patients. Analysis of the participants' performance
showed a large variance of the scores within the VAP-S. The relationship between
performance within the VAP-S and the Key Search subtest of the BADS, which
requires planning ability, showed that the supermarket task demands planning, one of
the key executive functions.
The potential of the VAP-S as a predictive tool for executive function profiles is
currently being explored in studies among various populations of patients with central
nervous system deficits [19, 21].
Figure 6. The VMall, the left panel shows selection of a supermarket aisle, the middle panel shows a
shopping cart with purchased food items, and the right panel shows selection of grocery products.
extremity and stated that they used it more in daily life than prior to the intervention.
These data support the potential of the VMall as a motivating and effective intervention tool for the rehabilitation of people with stroke who present multitasking or upper extremity motor deficits.
Figure 7. The StoryTable. The left panel shows a screen shot of one storytelling background; the right panel shows two typically developing children engaged in a multi-touch activation.
render the VE. In the future, such decisions should be also driven by the
therapeutic needs of patients with varying neurological conditions and the
optimal presentation of therapeutic goals as designed by therapists.
Role of virtual presence: It has generally been assumed that increasing the level of virtual presence helps to facilitate the achievement of therapeutic goals due to its impact on motivation and performance. This assumption should be more directly tested in single-user, local-location VEs. It is particularly important to establish the role of virtual presence in multi-user, remote-location VEs, due to the added difficulty of achieving presence in such settings.
Technology considerations: Access to remote locations, especially in real time, adds cost and technical complexity to the design and implementation of VEs. The need for increased bandwidth and for sensors capable of transmitting high-fidelity data must be taken into account.
Compliance: Therapists are well aware that a key issue in the rehabilitation process is the motivation of a patient to be a willing partner in the process. Indeed, one of VR's major assets has been the use of game-like environments to increase motivation, participation, and performance [10]. Whether and how much compliance may be lost due to changes in locality and number of users remains to be determined.
Ethical considerations: The use of VEs in the traditional single-user, local setting retained all elements of privacy that were guarded during conventional rehabilitation. The addition of other users and the transmission of data, images, and communication over the Internet clearly introduces ethical issues not previously considered.
7. Conclusion
The rapid development of VR-based technologies over the past decade has been both
an asset and a challenge for neuro-rehabilitation. The availability of novel technologies
that provide interactive, functional simulations with multimodal feedback enables
clinicians to achieve traditional therapeutic goals that would be difficult, if not
impossible, to attain via conventional therapy. They also lead to the creation of
completely new clinical paradigms which would have been hard to achieve in the past.
In applications of rehabilitation for both motor and cognitive deficits, the main focus of
much of the early exploratory research has been to investigate the use of VR as an
assessment tool. To date such environments are primarily: (a) single user (i.e., designed
for and used by one clinical client at a time) and (b) used locally within a clinical or
educational setting. More recently, researchers have begun the development of new
and more complex VR-based approaches according to two dimensions: the number of
users and the distance between the users. Driven by a push-pull phenomenon, the original approach has now expanded to three additional avenues: multiple users in co-located settings; single users in remote locations; and multiple users in remote locations.
It is clear that the VR rehabilitation research community needs to address the new
concerns that are associated with such novel VEs.
References
[1] S. Jacoby, N. Josman, D. Jacoby, M. Koike, Y. Itoh, N. Kawai, Y. Kitamura, E. Sharlin, and P.L. Weiss, Tangible user interfaces: tools to examine, assess and treat dynamic constructional processes in children with developmental coordination disorders, International Journal of Disability and Human Development 5 (2006), 257-263.
[2] A.A. Rizzo, M.T. Schultheis, K.A. Kerns, and C. Mateer, Analysis of assets for virtual reality applications in neuropsychology, Neuropsychological Rehabilitation 14 (2004), 207-239.
[3] A.A. Rizzo, J.G. Buckwalter, and C. van der Zaag, Virtual environment applications for neuropsychological assessment and rehabilitation, in Handbook of Virtual Environments, K. Stanney, Ed. New York, Lawrence Erlbaum, 2002, pp. 1027-1064.
[4] C. Bryanton, J. Bosse, M. Brien, J. McLean, A. McCormick, and H. Sveistrup, Feasibility, motivation, and selective motor control: virtual reality compared to conventional home exercise in children with cerebral palsy, Cyberpsychology & Behaviour 9 (2006), 123-128.
[5] D.M. Brennan, A.C. Georgeadis, C.R. Baron, and L.M. Barker, The effect of videoconference-based telerehabilitation on story retelling performance by brain-injured subjects and its implications for remote speech-language therapy, Telemedicine Journal and e-Health 10 (2004), 147-154.
[6] E. Klinger, I. Chemin, S. Lebreton, and R.M. Marié, Virtual action planning in Parkinson's disease: a control study, Cyberpsychology & Behaviour 9 (2006), 342-347.
[7] D. Rand, N. Katz, and P.L. Weiss, Evaluation of virtual shopping in the VMall: comparison of post-stroke participants to healthy control groups, Disability and Rehabilitation (2007), 1-10.
[8] R. Martin, Test des commissions (2nde édition), Bruxelles, Editest, 1972.
[9] G. Riva, Virtual reality for health care: the status of research, Cyberpsychology & Behaviour 5 (2002), 219-225.
[10] A.A. Rizzo, and G.J. Kim, A SWOT analysis of the field of virtual reality rehabilitation and therapy, Presence: Teleoperators and Virtual Environments 14 (2005), 119-146.
[11] M.K. Holden, Virtual environments for motor rehabilitation: review, Cyberpsychology & Behaviour 8 (2005), 187-211.
[12] F.D. Rose, B.M. Brooks, and A.A. Rizzo, Virtual reality in brain damage rehabilitation: review, Cyberpsychology & Behaviour 8 (2005), 241-262.
[13] A. Henderson, N. Korner-Bitensky, and M. Levin, Virtual reality in stroke rehabilitation: a systematic review of its effectiveness for upper limb motor recovery, Topics in Stroke Rehabilitation 14 (2007), 52-61.
[14] E. Klinger, I. Chemin, S. Lebreton, and R.M. Marié, A virtual supermarket to assess cognitive planning, Cyberpsychology & Behaviour 7 (2004), 292-293.
[15] R.M. Marié, I. Chemin, S. Lebreton, and E. Klinger, Cognitive planning assessment and virtual environment in Parkinson's disease, presented at VRIC - Laval Virtual, Laval, 2005.
[16] R.M. Marié, E. Klinger, I. Chemin, and M. Josset, Cognitive planning assessed by virtual reality, presented at VRIC 2003, Laval Virtual Conference, Laval, France, 2003.
[17] E. Klinger, Apports de la réalité virtuelle à la prise en charge des troubles cognitifs et comportementaux, PhD Thesis, ENST, 2006.
[18] N. Josman, E. Hof, E. Klinger, R.M. Marié, K. Goldenberg, P.L. Weiss, and R. Kizony, Performance within a virtual supermarket and its relationship to executive functions in post-stroke patients, presented at Proceedings of the International Workshop on Virtual Rehabilitation, 2006.
[19] N. Josman, E. Klinger, and R. Kizony, Performance within the Virtual Action Planning Supermarket
(VAP-S): An executive function profile of three different populations suffering from deficits in the
central nervous system, presented at Proceedings of the 7th Intl Conf. Disability, Virtual Reality &
Assoc. Tech., Maia & Porto, Portugal, 2008.
[20] B.A. Wilson, N. Alderman, P.W. Burgess, H. Emslie, and J.J. Evans, Behavioral Assessment of the
Dysexecutive Syndrome Manual. UK: Thames Valley Test Company, 1996.
[21] E. Klinger, R.M. Mari, S. Lebreton, P.L. Weiss, E. Hof, and N. Josman, The VAP-S: A virtual
supermarket for the assessment of metacognitive functioning, presented at Proceedings of VRIC08,
Laval, France, 2008.
[22] P.L. Weiss, D. Rand, N. Katz, and R. Kizony, Video capture virtual reality as a flexible and effective
rehabilitation tool, Journal of NeuroEngineering and Rehabilitation 1 (2004), 1-12.
[23] R. Kizony, N. Katz, and P.L. Weiss, Adapting an immersive virtual reality system for rehabilitation, The
Journal of Visualization and Computer Animation 14 (2003), 261-268.
[24] R. Kizony, L. Raz, N. Katz, H. Weingarden, and P.L. Weiss, Video-capture virtual reality system for
patients with paraplegic spinal cord injury, Journal of Rehabilitation Research and Development 42
(2005), 595-608.
[25] D. Rand, Performance in a functional virtual environment and its effectiveness for the rehabilitation of
individuals following stroke, Unpublished PhD Thesis, University of Haifa, Israel, 2007.
[26] H. Sveistrup, Motor rehabilitation using virtual reality, Journal of NeuroEngineering and
Rehabilitation 1 (2004), 1-10.
[27] D.T. Reid, Benefits of a virtual play rehabilitation environment for children with cerebral palsy on
perceptions of self-efficacy: a pilot study, Pediatric Rehabilitation 5 (2002), 141-148.
[28] D. Rand, N. Katz, M. Shahar, R. Kizony, and P.L. Weiss, The virtual mall: A functional virtual
environment for stroke rehabilitation, Annual Review of Cybertherapy and Telemedicine: A decade of
VR 3 (2005), 193-198.
[29] D. Rand, S. Basha-Abu Rukan, P.L. Weiss, and N. Katz, Validation of the VMall as an assessment tool
for executive functions, Neuropsychological Rehabilitation, in press.
[30] D. Rand, P.L. Weiss, and N. Katz, Training multitasking in a virtual supermarket: a novel intervention
following stroke, American Journal of Occupational Therapy, in press.
[31] D. Rand, N. Katz, and P.L. Weiss, Intervention using the VMall for improving motor and functional
ability of the upper extremity in post stroke participants, European Journal of Physical and
Rehabilitation Medicine, in press.
[32] E.B. Nash, G.W. Edwards, J.A. Thompson, and W. Barfield, A Review of Presence and Performance in
Virtual Environments, International Journal of Human-Computer Interaction 12 (2000), 1-41.
[33] I. Soderback, I. Bengtsson, E. Ginsburg, and J. Ekholm, Video feedback in occupational therapy: its
effects in patients with neglect syndrome, Archives of Physical Medicine and Rehabilitation 73 (1992),
1140-6.
[34] M. Zancanaro, F. Pianesi, O. Stock, P. Venuti, A. Cappelletti, G. Iandolo, M. Prete, and F. Rossi,
Children in the Museum: an Environment for Collaborative Storytelling, in PEACH - Intelligent
Interfaces for Museum Visits, 2007, pp. 165-184.
[35] P.H. Dietz, and D.L. Leigh, DiamondTouch: A Multi-User Touch Technology, presented at
Proceedings of the 14th annual ACM symposium on User Interface Software and Technology (UIST),
Orlando, Florida, 2001.
[36] E. Gal, D. Goren-Bar, E. Gazit, N. Bauminger, A. Cappelletti, F. Pianesi, O. Stock, M. Zancanaro, and
P. L. Weiss, Enhancing social communication through story-telling among high-functioning children
with autism, Lecture Notes in Computer Science 3814 (2005), 320-323.
[37] N. Bauminger, E. Gal, D. Goren-Bar, J. Kupersmitt, F. Pianesi, O. Stock, P.L. Weiss, R. Yifat, and M.
Zancanaro, Enhancing Social Communication in High-Functioning Children with Autism through a Co-located
Interface, presented at Proceedings of the 6th International Workshop on Social Intelligence Design,
Trento, Italy, 2007.
[38] E. Gal, N. Bauminger, D. Goren-Bar, F. Pianesi, O. Stock, M. Zancanaro, and P.L. Weiss, Enhancing
social communication of children with high functioning autism through a co-located interface, Artificial
Intelligence & Society, in press.
[39] G.C. Burdea, V. Popescu, V. Hentz, and K. Colbert, Virtual reality-based orthopedic telerehabilitation,
IEEE Transactions on Neural Systems and Rehabilitation Engineering 8 (2000), 430-2.
[40] M.K. Holden, T.A. Dyar, L. Schwamm, and E. Bizzi, Virtual-Environment-Based Telerehabilitation in
Patients with Stroke, Presence: Teleoperators & Virtual Environments 14 (2005), 214-233.
[41] D.J. Reinkensmeyer, C.T. Pang, J.A. Nessler, and C.C. Painter, Web-based telerehabilitation for the
upper extremity after stroke, IEEE Transactions on Neural Systems and Rehabilitation Engineering 10
(2002), 102-8.
[42] M.K. Holden, T.A. Dyar, and L. Dayan-Cimadoro, Telerehabilitation using a virtual environment
improves upper extremity function in patients with stroke, IEEE Transactions on Neural Systems and
Rehabilitation Engineering 15 (2007), 36-42.
[43] M.N. Boulos and S. Wheeler, The emerging Web 2.0 social software: an enabling suite of sociable
technologies in health and healthcare education, Health Information and Libraries Journal 24 (2007),
2-23.
[44] M.N. Boulos, L. Hetherington, and S. Wheeler, Second Life: an overview of the potential of 3-D virtual
worlds in medical and health education, Health Information and Libraries Journal 24 (2007), 233-45.
[45] A. Gorini, A. Gaggioli, C. Vigna, and G. Riva, A second life for eHealth: prospects for the use of 3-D
virtual worlds in clinical psychology, Journal of Medical Internet Research 10 (2008), e21.
[46] J. Lester, About Brigadoon. Brigadoon: An innovative online community for people dealing with
Asperger's syndrome and autism, http://braintalk.blogs.com/brigadoon/2005/01/about_brigadoon.html
[accessed 2008 Sept 23]
[47] G. Riva, and A. Gaggioli, CyberEurope Column, Cyberpsychology & Behaviour 10 (2007), 493-494.