HANDBOOK OF VIRTUAL ENVIRONMENTS
Design, Implementation, and Applications
HUMAN FACTORS AND ERGONOMICS
Gavriel Salvendy, Series Editor
Stephanidis, C. (Ed.): User Interfaces for All: Concepts, Methods, and Tools
Smith, M. J., Salvendy, G., Harris, D., and Koubeck, R. J. (Eds.): Usability
Evaluation and Interface Design: Cognitive Engineering, Intelligent Agents and
Virtual Reality
Meister, D., and Enderwick, T.: Human Factors in System Design, Development,
and Testing
For more information on LEA titles, please contact Lawrence Erlbaum Associates,
Publishers, at www.erlbaum.com.
HANDBOOK OF VIRTUAL ENVIRONMENTS
Design, Implementation, and Applications
Edited by
Kay M. Stanney
University of Central Florida
This book was typeset in 10/12 pt. Times, Italic, Bold, Bold Italic. The heads were
typeset in Helvetica Bold, and Helvetica Bold Italic.
The editor, authors, and the publisher have made every effort to provide accurate and
complete information in this handbook but the handbook is not intended to serve as a
replacement for professional advice. Any use of this information is at the reader’s
discretion. The editor, authors, and the publisher specifically disclaim any and all
liability arising directly or indirectly from the use or application of any information
contained in this handbook. An appropriate professional should be consulted
regarding your specific situation.
Series Foreword xi
Foreword xiii
Perspective xv
Preface xix
Acknowledgments xxiii
Advisory Board xxv
About the Editor xxix
Contributors xxxi
I: INTRODUCTION
1 Virtual Environments in the 21st Century 1
Kay M. Stanney and Michael Zyda
2 Virtual Environments Standards and Terminology 15
Richard A. Blade and Mary Lou Padgett
Software Requirements
12 Virtual Environment Models 255
G. Drew Kessler
13 Principles for the Design of Performance-oriented Interaction Techniques 277
Doug A. Bowman
14 Technological Considerations in the Design of Multisensory Virtual
Environments: The Virtual Field of Dreams Will Have to Wait 301
W. Todd Nelson and Robert S. Bolia
15 Embodied Autonomous Agents 313
Jan M. Allbeck and Norman I. Badler
16 Internet-based Virtual Environments 333
Charles E. Hughes, J. Michael Moshell, and Dean Reed
Application Requirements
17 Structured Development of Virtual Environments 353
John R. Wilson, Richard M. Eastgate, and Mirabelle D’Cruz
18 Influence of Individual Differences on Application Design for Individual
and Collaborative Immersive Virtual Environments 379
David B. Kaber, John V. Draper, and John M. Usher
19 Using Virtual Environments as Training Simulators: Measuring Transfer 403
Corinna E. Lathan, Michael R. Tracey, Marc M. Sebrechts,
Deborah M. Clawson, and Gerald A. Higgins
V: EVALUATION
34 Usability Engineering of Virtual Environments 681
Deborah Hix and Joseph L. Gabbard
35 Human Performance Measurement in Virtual Environments 701
Donald Ralph Lampton, James P. Bliss, and Christina S. Morris
36 Virtual Environment Usage Protocols 721
Kay M. Stanney, Robert S. Kennedy, and Kelly Kingdon
37 Measurement of Visual Aftereffects Following Virtual Environment Exposure 731
John P. Wann and Mark Mon-Williams
38 Proprioceptive Adaptation and Aftereffects 751
Paul DiZio and James R. Lackner
39 Vestibular Adaptation and Aftereffects 773
Thomas A. Stoffregen, Mark H. Draper, Robert S. Kennedy,
and Daniel Compton
40 Presence in Virtual Environments 791
Wallace Sadowski and Kay Stanney
41 Ergonomics in Virtual Environments 807
Pamela R. McCauley Bell
VII: CONCLUSION
56 Virtual Environments: History and Profession 1167
Richard A. Blade and Mary Lou Padgett
With the rapid evolution of highly sophisticated computers, communications, service, and
manufacturing systems, a major shift has occurred in the way people use and work with tech-
nology. The objective of this series on human factors and ergonomics is to provide researchers
and practitioners alike with a platform through which to address a succession of human factors
disciplines associated with advancing technologies, by reviewing seminal works in the field,
discussing the current status of major topics, and providing a starting point to focus future
research in these ever evolving disciplines. The guiding vision behind this series is that human
factors and ergonomics should play a preeminent role in ensuring that emerging technologies
provide increased productivity, quality, satisfaction, safety, and health in the context of the
“Information Society.”
The present volume is published at a very opportune time. Now more than ever technology
is becoming pervasive in every aspect of the Information Society, both in the workplace and
in everyday life activities. The field of virtual environments (VEs) emerged some 40 years
ago as a very exotic, extremely expensive technology whose use was difficult to justify. The
discipline has matured, and the cost of VE technology has decreased by over 100-fold, while
computer speed has increased by over 1,000-fold, which makes it a very effective and viable
technology to use in a broad spectrum of applications, from personnel training to task design.
With this viability and broad potential application come numerous issues and opportunities, and
a responsibility on the part of researchers, practitioners, designers, and users of this powerful
technology to ensure that it is deployed appropriately.
The Handbook of Virtual Environments was guided by a distinguished advisory board of
scholars and practitioners, who assisted the editor in ensuring a balanced coverage of the
entire spectrum of issues related to VE technology, from fundamental science and technology
to VE applications. This was achieved in a thorough and stimulating presentation, covered in
56 chapters, authored by 121 individuals from academia, industry, and government laboratories
from Europe, Asia and the United States on topics of system requirements (including hardware
and software), design and evaluation methods, and an extensive discussion of applications. All
this was presented, after careful peer reviews, to the publisher in 1,911 manuscript pages,
including 3,012 references for further in-depth reading, 255 figures, and 76 tables to illustrate
concepts, methods, and applications. Thus, this handbook provides a most comprehensive
account of the state of the art in virtual environments, which will serve as an invaluable source
of reference for practitioners, researchers, and students in this rapidly evolving discipline.
This could not have been achieved without the diligence and insightful work of the editor and
cooperative efforts of the chapter authors who have made it all possible. For this, my sincere
thanks and appreciation go to all of you.
—Gavriel Salvendy
Series Editor
Foreword
An explosion has occurred in recent years in our understanding of virtual environments (VEs)
and in the technologies required to produce them. Virtual environments, as a way for humans
to interact with machines and with complex information sets, will become commonplace in
our increasingly technological world. In order for this to be practical, multimodal system
requirements must be developed and design approaches must be addressed. Potential health
and safety risks associated with VE systems must be fully understood and taken into account.
Finally, ergonomic and psychological concerns must be investigated so people will enjoy using
VE technology, be comfortable using it, and seek out its application.
This book provides an up-to-date discussion of the current research on virtual environments.
It describes the current VE state of the art and points out the many areas where there is still
work to be done. The Handbook of Virtual Environments provides an invaluable comprehen-
sive reference for experts in the field, as well as for students and VE researchers. Both the
theoretical and the practical side of VE technologies are explored.
The National Aeronautics and Space Administration (NASA) has long been interested in
virtual environments. This interest arises from the need for humans to efficiently interact with
complex spacecraft systems and to work with very large data sets generated by satellites. In
the future, when humans travel beyond low Earth orbit to explore the universe, the relationship
between the space-faring crew and the technologies they bring with them must be extremely
intimate. The need for a small number of people to be able to work with a huge number of
different technologies will be unprecedented. Virtual environment trainers, for example, are
particularly attractive as a means of conducting just-in-time training before a crew member
conducts a maintenance procedure not practiced for many months. In terrestrial applications,
the complete life cycle for the design of complex systems such as aerospace vehicles will be
completed virtually, before a single piece of metal is cut.
NASA’s needs are unique in some respects but share much in common with other endeav-
ors in today’s world. Virtual environment technologies will also find military, medical, and
commercial (e.g., in manufacturing and in entertainment) applications. As the science and tech-
nology of VEs progress, full-immersion technologies will likely become a standard interface
between humans and machines. This book will help to make that vision a “real” reality.
—Guy Fogleman
Acting Director, Bioastronautics Research Division
National Aeronautics and Space Administration
Washington, DC
Perspective
We read that we are in a new economy and that the pace of technological change is accelerating.
This seems true because there are so many revolutionary innovations popping up. However,
from the vantage of a given field, the picture can seem very different. It is over 35 years
since Ivan Sutherland gave his address, “The Ultimate Display,” at the National Computer
Conference. Part of the delay is explained by the observation that problems that are not worked
on do not get solved. Only a few years ago, I could argue that progress in the field was being held
back because the enabling devices being used—the Polhemus magnetic tracker, the DataGlove,
and the screens from handheld consumer television sets—were invented decades earlier. No
one ever disagreed with those assertions. Today, it is different. In the best “divide and conquer”
tradition, researchers have deployed themselves around every possible research problem. All
of the easy problems have been run over and very smart people are working on most of the
hard problems. In this book, those same people are reporting where things stand.
In fact, at least one hard problem has been solved, at least in preliminary form. Thirty
years ago, I considered head-mounted displays (HMDs), but reasoned that they would be
unacceptable unless they were wireless and permitted natural ambulation around a large area.
I rejected the approach—not because I thought it was too hard but because I felt that the
encumbering paraphernalia would be too awkward. I did not then appreciate how fast and
how accurate the tracking would have to be and understood little about multipath transmission
problems and diversity receivers. After 20 years of effort, the University of North Carolina
recently demonstrated a wide-area tracking system with the needed performance and reported
that its impact on the user’s experience was every bit as powerful as one would hope it would
be. Finally, it is possible to show what an HMD can do that no other display can.
A host of applications have been attempted in visualization, training, and entertainment.
Even once unlikely opportunities have been gaining traction. In 1970, I received what I suspect
was the first job offer in virtual reality. Dr. Arnold Ludwig came into my interactive installation
and was so impressed with its impact on people’s behavior that he wanted me to join him in
the Department of Psychiatry at the University of Kentucky and to focus my interactions on
psychotherapy. Today, virtual therapy is a thriving field with nothing to fear but the running
out of phobias.
However, many of these “applications” have been motivated by the desire to do research
rather than the expectation that the results would be immediately practical. Some of these
systems are being used to do real work within the organizations that created them. An even
smaller number have been sold to early adopters. But virtual environment (VE) applications are
not yet being sucked into the marketplace by irresistible demand. Thus, while VE technology
may be able to do a job, it is not yet recognized as the best or most cost effective way to do
any job.
On the other hand, virtual environments have been unusually successful at spinning off
its technology into adjacent fields even before it has gotten under way itself. Techniques for
tracking human motion developed for VEs are now standard procedure in the film industry.
Haptic devices are working their way into desktop and automotive systems. Virtual surgery,
once a wild speculation, then a Defense Advanced Research Projects Agency (DARPA) pro-
gram managed by Richard Satava, and now a nascent industry with publicly traded companies,
is selling robotic surgical systems and surgical simulations to hospitals. Finally, the HMD has
been incorporated into eyeglasses and has to be considered a competitor for the mobile display
of the future.
In the background, Moore’s law has continued to operate, assuring the increase in computer
power and the advance of computer graphics. However, it is important to note that while the
film industry routinely spends an hour computing a single frame of graphic animation, a VE
system has only 1/30 or 1/60 of a second to create its necessarily simpler image. This is a 100,000:1
difference in time, or 16 doublings, or a 24-year lag between the time a state-of-the-art image
appears in film and when a similarly complex image can be used in virtual environments. Mere
doublings do not guarantee subjective improvement; only orders of magnitude are discernible.
Nevertheless, we can be confident that ultimately the needed processing and graphics capacity
will be readily affordable.
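The arithmetic behind these figures can be checked with a minimal sketch. The hour-per-frame and 1/30-second figures come from the text above; the 18-month doubling period is an assumed convention for Moore's law, not a value stated here:

```python
import math

film_seconds_per_frame = 60 * 60   # roughly an hour to render one film frame
ve_seconds_per_frame = 1 / 30      # a VE must produce each frame in 1/30 s

ratio = film_seconds_per_frame / ve_seconds_per_frame  # time budget, film vs. VE
doublings = math.log2(ratio)       # Moore's-law doublings needed to close the gap
lag_years = doublings * 1.5        # assuming one doubling every ~18 months

print(f"ratio     ~ {ratio:,.0f}:1")         # ~108,000:1, on the order of 100,000:1
print(f"doublings ~ {doublings:.1f}")        # ~16.7, i.e., about 16 doublings
print(f"lag       ~ {lag_years:.0f} years")  # ~25 years, near the 24-year figure
```

The exact ratio is 108,000:1, which the text rounds to 100,000:1; the 16-doubling and 24-year figures follow from that rounded value.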
At that point, all that remains to be developed is the VE technology itself: the tracking, the
displays, and an answer to the question that I posed well over a decade ago: Would you use
it if it was free? Whatever you are willing to pay for a technology, once you own it, it is free.
How much you choose to use it at this point determines its future as much as the economics
of its purchase or the efficacy of its performance. If it is a pleasure to use, you will try to use
it for everything you can think of. If it is awkward and uncomfortable, you will not use it for
any task where it is not absolutely required. In fact, you will be looking for an alternative way
to perform the tasks for which it is suited.
The question is: How good does it have to be? Not so long ago, I attended a conference at
which the keynote speaker declared computer graphics to be the key to VE technology and
that more realistic images would assure its success. I asked him, “If we turned out the lights,
would we still be here?” The point was that graphic realism does not appear to be necessary
or sufficient. It would seem simple to depict a dark, cloudy night—or just a dark room—with
current graphic technology, but instinct suggests that the experience would not be convincing.
There is still a long distance between depiction and illusion.
In visualization applications, believability is not so important. Training can be useful even
if the experience is not totally persuasive. A game or a story can be entertaining even if
the participant never suspends disbelief, but in each case, there is a threshold at which a
technology goes beyond serviceable and becomes compelling. When this threshold is crossed,
the technology is poised to go from being a niche solution to becoming a way of life.
It is not clear where critical mass will be reached first. Will one of the immersion applications
take hold? Will a VE entertainment system supplant traditional video games? Or will the desire
for portable wearable devices lead to the routine wearing of HMDs unobtrusively integrated
with eyeglasses? At the moment, I would bet on the latter because the standards for success
are so low. Text is easy to display. The screens on cell phones and handheld computers are too
small. And while speech may be ideal for answering questions, it is too slow for presenting
options. Only an eyeglass-mounted display can provide the full screen of information that we
take for granted at the desktop. If popular, such limited devices would inevitably be used for
gaming, just as cell phones are today. Head tracking would make those games more interactive,
if not more convincing, and augmented reality applications could be implemented in specific
locations like grocery stores. More immersive displays could then evolve over a period of time,
always assured of this installed base.
Starting from the other direction, real breakthroughs are needed in HMD design to provide
immersive experiences that are guaranteed to work. A wide field of view and minimal weight
are required before we can be confident that the HMD wearer will forget the apparatus and
embrace the virtual world.
Whatever the path or pace of the technology and its deployment, virtual reality will maintain
its proper role as the best metaphor for the world that is evolving around us. It will continue to
be depicted in films and incorporated into everyday thought to the point that it is so familiar
as a concept that by the time the real thing seeps into our daily lives, we may barely notice.
—Myron Krueger
President
Artificial Reality Corporation
Preface
When computers first permeated the public domain, thoughts of the Turing Test arose yet
were quickly extinguished, as users labored over perplexing interfaces, which often left them
bewildered and thoroughly frustrated. There had to be a better way, and so began the field
of human–computer interaction (HCI). HCI efforts have substantially improved computer
interaction, yet barriers to user friendliness still exist due to the abstract concepts that must
be conquered to successfully use a computer. A user must work through an interface (e.g.,
window, menu, icon, or some other mechanism) to achieve desired goals. They cannot access
these goals directly, but only through their interface surrogates. Until now.
Virtual environments (VEs) allow users to be immersed into three-dimensional digital
worlds, surrounding them with tangible objects to be manipulated and venues to be traversed,
which they experience from an egocentric perspective. Through the concrete and familiar, users
can enact known perceptual and cognitive skills to interact with a virtual world; there is no
need to learn contrived conventions. Virtual environments also extend the realm of computer
interaction, from the purely visual to multimodal communication that more closely parallels
human–human exchanges. VE users not only see visual representations, they can also reach
out and grab objects, “feel” their size, rotate them in any given axis, hear their movement, and
even smell associated aromas. Such experiences do not have to be in solitude, as VE users
can take along artificial autonomous agents or collaborate with other users who also have
representations within the virtual world. Taken together, this multisensory experience affords
natural and intuitive interaction.
The paragraph above describes an ideal, but not, unfortunately, the current state of the
art. In today’s virtual environments, users are immersed into an experience with suboptimal
visual resolution, inadequate spatialization of sound, encumbering interactive devices, and
misregistration of tracking information. These issues are among the scientific and technological
challenges that must be resolved to realize the full potential of VE technology, which were
well defined by Nathaniel Durlach and Anne Mavor in their seminal work, Virtual Reality:
Scientific and Technological Challenges. Chapter 1 of this handbook furthers this definitional
effort by reviewing the recommendations set forth by Durlach and Mavor and identifying the
current status of those objectives, the sine qua non being that VE technology, both hardware
and software, has realized substantial gains in the past decade and is poised to support the
next generation of highly sophisticated VE systems. However, psychological considerations
and VE usability evaluation require additional study to identify how best to design and use
VE technology. In addition, Durlach and Mavor (1995, p. 2) mentioned in their work the
more intuitive and natural manner, with multidisciplinary design teams communicating their
ideas via the VE medium. Advances in information visualization are enabling dynamic inves-
tigation of multidimensional, highly complex data domains. Manufacturing VE applications
have led to advances in the design of manufacturing activities and manufacturing facilities,
execution of planning, control, and monitoring activities, and execution of physical processing
activities. Likely the most popular of all VE applications, the entertainment industry is leading
the way to truly innovative uses of the technology. From interactive arcades to cyber cafes,
the entertainment industry has leveraged the unique characteristics of this communications
medium, providing dynamic experiences to those who come along for the ride.
This handbook closes with a brief review of the history of VE technology, as we must ac-
knowledge the pioneers whose innovativeness and courage provided the keystones for contem-
porary successes. The final chapter also provides information on the VE profession, providing
those interested with a number of sources to further their quest for the keys to developing the
ultimate virtual world.
The main objective of this handbook is to provide practitioners with a reference source
to guide their development efforts. We have endeavored to provide a resource that not only
addresses technology concerns but also tackles the social and business implications with which
those associated with the technology are likely to grapple. While each chapter has a strong
theoretical foundation, practical implications are derived and illustrated via the many tables
and figures presented.
Taken together, the chapters present systematic and extensive coverage of the primary areas
of research and development within VE technology. The handbook brings together a com-
prehensive set of contributed articles that address the principles required to define system
requirements and design, build, evaluate, implement, and manage the effective use of VE
applications. The scope and detail of the handbook are extensive, and no one person could
possibly do justice to the breadth of coverage provided. Thus, the handbook leveraged author-
itative specialists that were able to provide critical insights and principles associated with their
given area of expertise. It is through the collective effort of the many contributing authors that
such a broad body of knowledge was assembled.
If men will not act for themselves, what will they do when the benefit of their effort is for all?
—Elbert Hubbard, A Message to Garcia (p. 23)
In the case of the many contributors to this handbook, the answer to this question is that they will
selflessly endeavor to provide the insights and assistance required to realize this tremendous
effort. In many ways I feel the creation of this handbook is “our Message to Garcia,” one
developed through altruistic dedication, the only impetus being that such a source is direly
needed in the field. Many individuals openly gave of their time, energy, and knowledge in order
to develop this handbook, often when they were fully loaded with their own responsibilities.
The efforts of the many contributing authors and of the advisory board, which helped formulate
the content coverage, are most sincerely appreciated.
To Gavriel Salvendy, who has provided me with many opportunities, including the invitation
to edit this handbook, which have shaped and molded my career, I am forever grateful. I have also
been blessed with the finest of mentors, Robert S. Kennedy, who gives tirelessly of himself—
thank you. I am greatly appreciative of the support of the National Science Foundation, Office
of Naval Research, and Naval Air Warfare Center Training Systems Division, in particular Gary
W. Strong, Helen M. Gigley, and Robert Breaux. The National Science Foundation CAREER
Award and ONR Young Investigator Award have provided me with the opportunity to develop
technical depth in human-computer interaction and virtual environment technology and fos-
tered interchange with experts in the field, many of whom contributed chapters to this handbook.
Each chapter in the handbook was peer reviewed. I would like to thank the many advisory
board members and chapter contributors who assisted with this process, as well as the fol-
lowing individuals who kindly gave of their time to the review process: Andi Cowell, Chuck
Daniels, Nathaniel Durlach, Jason Fox, Thomas Furness, Phillip Hash, Susan Lanham, Dennis
McBride, Dean Owen, Randy Pausch, Leah Reeves, Mario Rodriguez, Randy Stiles, and
Mark Wiederhold.
For the persistent efforts and encouragement of Anne Duffy, our Lawrence Erlbaum senior
editor, who stuck with me even as I missed deadlines and acted out of frustration, I am deeply
grateful.
Much appreciation goes to Branka Wedell, who took my amorphous ideas and, through her
inspired creativity, designed the striking cover art for this handbook.
The efforts of David Bush are greatly appreciated, as he assisted with many of the activities
associated with the handbook and always with a smile on his face. I am also indebted to Kelly
Kingdon, who assisted me with many of my personal responsibilities so that I had more time
to dedicate to this effort.
To those individuals who constitute the fabric of my life, my parents who instilled the work
ethic that allowed me to persevere and see this effort through to completion, my sisters and
brother, my very best friends, my brother-in-law who introduced me to A Message to Garcia
at an opportune moment, and my three sons, Sean, Ryan, and Michael, who fill my world with
sunshine, I have been blessed by your encouragement and confidence.
Above all, I am deeply indebted to my husband, who not only encouraged me as I fretted
that this handbook would remain a virtual reality, but also rolled up his sleeves and assisted
with editing and proofreading. His love is my pillar and his steadfast support of my career is
my forte.
—Kay M. Stanney
Advisory Board
Kay Stanney is an associate professor with the University of Central Florida’s Industrial Engi-
neering and Management Systems Department, which she joined in 1992. She is an editor of
the International Journal of Human–Computer Interaction. She is a cofounder of the Virtual
Environments Technical Group of the Human Factors and Ergonomics Society. Dr. Stanney has
more than 100 scientific publications and has given numerous invited lectures and presen-
tations. Her research into the after-effects associated with virtual environment exposure,
which is funded by the National Science Foundation, Office of Naval Research, and National
Aeronautics and Space Administration, has appeared on MTV Network’s health show Mega-
Dose, NBC Nightly News, the Canadian Broadcasting Company’s Undercurrents, and NBC’s
local Orlando news, as well as receiving front-page coverage in various newspapers. Dr. Stanney
received a bachelor of science in industrial engineering from the State University of New York
at Buffalo in 1986, after which time she spent three years working as a manufacturing/quality
engineer for Intel Corporation in Santa Clara, California. She received her master’s degree
and Ph.D. in industrial engineering, with a focus on human factors engineering, from Purdue
University in 1990 and 1992, respectively.
Contributors
Andrew M. Mead
Research Scientist
Naval Air Warfare Center Training Systems Division
Orlando, FL

W. Todd Nelson
Senior Usability Engineer
divine/Whitman-Hart
Cincinnati, Ohio

Mark Mon-Williams
Lecturer
School of Psychology
University of St. Andrews
St. Andrews, Scotland, United Kingdom

Max M. North
Associate Professor
Computer Science and Information Systems
Kennesaw State University
Kennesaw, GA

Christina S. Morris
Research Administrator
Advanced Learning Technologies
Institute for Simulation and Training
Orlando, FL

Sarah M. North
Director, Human–Computer Interaction Group
Associate Professor
Computer and Information Sciences
Clark Atlanta University
Atlanta, Georgia

J. Michael Moshell
Director, CREAT Digital Media Program and
Professor of Computer Science
University of Central Florida
Orlando, FL

Randy L. Oser
Senior Research Psychologist
Naval Air Warfare Center Training Systems Division
Orlando, FL

Mary Lou Padgett
President
Padgett Computer Innovations, Inc.
Auburn, AL

Eric Muth
Assistant Professor
Department of Psychology
Clemson University
Clemson, SC

Barry Peterson
Research Associate
Department of Computer Science
The MOVES Institute
Naval Postgraduate School
Monterey, CA
Wallace Sadowski
Advisory Human Factors Engineer
IBM Voice Systems
Boca Raton, FL

Barbara Shinn-Cunningham
Assistant Professor
Departments of Cognitive and Neural Systems and Biomedical Engineering
Hearing Research Center
Boston University
Boston, MA

Eduardo Salas
Professor
Department of Psychology
Institute for Simulation & Training
University of Central Florida
Orlando, FL

Stephanie Sides
Research Associate
Spatial Orientation Systems
Naval Aerospace Medical Research Laboratory
Pensacola, FL

Richard M. Satava
Professor of Surgery
Yale University School of Medicine
Program Manager, Advanced Biomedical Technologies
Defense Advanced Research Projects Agency
New Haven, CT

Mandayam A. Srinivasan
Director, Touch Lab
Department of Mechanical Engineering and Research Laboratory of Electronics
Massachusetts Institute of Technology
Cambridge, MA

Marc M. Sebrechts
Professor and Chair
Department of Psychology
The Catholic University of America
Washington, DC

Kay M. Stanney
Associate Professor
Department of Industrial Engineering and Management Systems
University of Central Florida
Orlando, FL
1
Virtual Environments in the 21st Century
1. INTRODUCTION
You see, then, that a doubt about the reality of sense is easily raised, since there may even be a doubt
whether we are awake or in a dream. And as our time is equally divided between sleeping and waking,
in either sphere of existence the soul contends that the thoughts which are present to our minds at the
time are true; and during one half of our lives we affirm the truth of the one, and, during the other half,
of the other; and are equally confident of both.
—Theaetetus, Plato
As Plato so eloquently stated, that which is reality emanates from that which is present to our
minds. In Theaetetus, Plato examines perception, knowledge, truth, and subjectivity. This work
suggests that Forms (i.e., circularity, squareness, and triangularity) have greater reality than
objects in the physical world. This reality is derived because Forms serve as models for our
perceptions. So it is that a virtual environment (i.e., a modeled world) can represent a “truth”
that can educate, train, entertain, and inspire. In their ultimate form, virtual environments (VEs)
immerse users in a fantastic world, one that stimulates multiple senses and provides vibrant
2 STANNEY AND ZYDA
experiences that somehow transform those exposed (e.g., via training, educating, marketing,
or entertaining).
Visions such as The Matrix, written and directed by Andy and Larry Wachowski, have
elevated the status of VE to the level of pop iconography, and some of those associated
with the technology have arguably risen to star status (e.g., Jaron Lanier). Yet, while one
may speak of VE as contemporary, even in vogue, how far has the technology really come
since the pioneering work of Ivan Sutherland, with his 1963 Sketchpad that provided the
first interactive computer graphics, or Morton Heilig’s 1956 engineering marvel Sensorama,
which rambled through Brooklyn’s streets and California’s sand dunes (Rheingold, 1991; Sutherland,
1963)? Sensorama provided a multisensory experience of riding a motorcycle by combining
three-dimensional (3-D) movies seen through a binocularlike viewer, stereo sound, wind, and
enticing aromas (see chap. 56, this volume). Some aspects of the technology have improved
substantially since Sketchpad and Sensorama, such as greater visual resolution (see chap. 3,
this volume), spatialized audio (see chap. 4), and haptic interaction (e.g., the net force and
torque feedback used in tool usage; see chaps. 5 and 6), while others have yet to make any
significant strides. In particular, the small grills placed near the nose of Sensorama’s passenger
that emitted authentic aromas are arguably as sophisticated as today’s olfactory technology
(see chaps. 14, 21, and 40), although DigiScents.com now promises to bring the sense of smell
to our computers. In addition, the generation of tactile sensations (i.e., distribution of force
fields on the skin during contact with objects) remains elusive (see chaps. 5 and 6).
Perhaps a more appropriate yardstick by which to judge the current state of the art in VE
technology would be the agenda set by Durlach and Mavor (1995) a half decade ago in the
seminal National Research Council (NRC) report Virtual Reality: Scientific and Technological
Challenges. That report developed a set of recommendations that, if heeded, should assist
in realizing the full potential of VE technology (see Table 1.1). While this work provided
many suggestions, the importance of improved computer generation of multimodal images and
advancements in hardware technologies that support interface devices were stressed, as was
improvement in the general comfort associated with donning these devices. As the following
sections will discuss, the former objectives have largely been met by astounding technological
advances, yet the latter has yet to be fully realized, as VE users are still impeded by cumbersome
devices and binding tethers (but that will soon change with technologies such as Bluetooth).
This chapter focuses on a number of key recommendations put forth by Durlach and Mavor
(1995), while many others are described in detail in other chapters in this handbook (see status
notes in Table 1.1).
2. TECHNOLOGY
Virtual environments are driven by the technology that is used to design and build these systems.
This technology consists of the human–machine interface devices that are used to present
multimodal information and sense the virtual world, as well as the hardware and software
used to generate the virtual environment. It also includes the techniques and electromechanical
systems used in telerobotics, which can be transferred to the design of VE systems, as well as the
communication networks that can be used to transform VE systems into shared virtual worlds.
data transfer. A solution to the input-device connectivity issue that works on commodity
computing hardware remains the great unsolved problem. At some point this input-port speed
problem needs to be solved, and the resolution must be included on mass-marketed PCs or their
descendants.
Visual displays, especially head-mounted displays (HMDs), have come down substantially in
weight but are still hindered by cumbersome designs, obstructive tethers, suboptimal resolution,
and insufficient field of view (see chap. 3). (Note: For an excellent comparative source on HMDs,
see the “HMD/VR–Helmet Comparison Chart,” Bungert, 2001.) Recent advances in wearable
computer displays (e.g., Microvision, MicroOptical), which can incorporate miniature LCDs
directly into conventional eyeglasses or helmets, should ease cumbersome designs and further
reduce weight (Lieberman, 1999). There are several low- to mid-cost HMDs (InterSense’s
InterTrax i-glasses, Olympus Eye-Trek FMD, Interactive Imaging Systems’ VFX3D, Sony
Cybermind, Sony Glasstron, and Kaiser ProViewXL) that are lightweight (approximately 39 g to
1,000 g) and provide a horizontal field of view (30 to 35 degrees per eye) and resolution (180 K
to 2.4 M pixels/LCD) exceeding predecessor systems. While the resolution range looks
impressive, most consumer-grade HMDs (those around 180 K pixels/LCD) use three pixels (red,
green, and blue) to produce one colored pixel, providing a true resolution of only about 60 K
pixels per LCD (Bungert, 2001).
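The subpixel arithmetic behind that caveat can be made explicit. The sketch below assumes, as Bungert (2001) describes for consumer-grade HMDs, that the vendor’s advertised count tallies red, green, and blue subpixels separately:

```python
def true_color_resolution(advertised_pixel_count, subpixels_per_pixel=3):
    """Convert an advertised LCD 'pixel' count to full-color pixels.

    Consumer-grade HMD spec sheets often count each red, green, and blue
    subpixel separately; three subpixels form one colored pixel, so the
    usable resolution is a third of the headline figure.
    """
    return advertised_pixel_count // subpixels_per_pixel
```

So a 180 K “pixel” LCD delivers `true_color_resolution(180_000)` = 60 K true color pixels, matching the figure cited above.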
Virtual Retinal Displays (VRDs) may bring truly revolutionary advances in display technology.
VRD technology, which was invented in 1991 at the University of Washington’s HIT (Human
Interface Technology) Lab, holds the promise of greatly enhanced optics (Kleweno et al., 1998).
With this technology, an image is scanned directly onto a viewer’s retina using low-power red,
green, and blue light sources, such as lasers or LEDs (Urey, Wine, & Lewis, online). The VRD
system has superior color fidelity, brightness, resolution, and contrast compared to LCDs and
CRTs, as it typically uses spectrally pure lasers as the light source.
With advances in wireless and laser technologies and the miniaturization of LCDs, visual
display technology should, during the next decade, realize the substantial gains necessary to
provide high-fidelity virtual imagery in a lightweight, noncumbersome manner.
In the area of virtual auditory displays there have also been tremendous gains (see chap. 4).
For example, while early spatialized audio solutions (Blauert, 1997) were expensive to
implement, it is currently feasible to include spatialized audio in most VE systems. (For an
excellent source on spatialized audio, see “The Ultimate Spatial Audio Index,”
http://www.speakeasy.org/∼draught/spataudio.html.) On the hardware side, systems are
available that present multiple sound sources to multiple listeners using positional tracking.
Technology for designing complex spatial audio scenarios, including numerous reflections,
real-time convolution, and head tracking, is currently under way. Software solutions are also
under development that provide low-level control of a variety of signal-processing functions,
including the number and position of reflections, as well as allowing for normal head-related
transfer function (HRTF) processing (see chap. 4), manipulation of acoustic radiation patterns,
spherical spreading loss, and atmospheric absorption. HRTFs have yet, however, to effectively
include reverberation or echoes. Adding reverberation to a VE makes auditory sources seem
more realistic and provides robust information about relative source distance; thus, further
research is needed in the development of tractable reverberation algorithms for real-time
systems. Advances in HRTF individualization (i.e., tailoring to the physiological makeup of a
listener’s ear) are also of great importance for localizing sounds, especially for distinguishing
front from back and up from down. In particular, means are needed to tailor HRTFs to an
individual listener without explicitly measuring HRTFs for that individual. This may be
possible, since the transfer functions of the external ear have been found to be similar across
different individuals (Middlebrooks, Makous, & Green, 1989).
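The signal chain just described can be illustrated with a minimal sketch: a mono source is rendered binaurally by convolving it with a left and right head-related impulse response (HRIR) and applying spherical spreading loss. The HRIR arrays, the 1/r gain law, and the reference distance are illustrative assumptions, not a prescription from any particular system:

```python
def convolve(x, h):
    """Direct-form FIR convolution; output has len(x) + len(h) - 1 samples."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def spatialize(mono, hrir_left, hrir_right, distance_m, ref_distance_m=1.0):
    """Binaural rendering of one dry (reverberation-free) source.

    The left/right HRIRs are assumed to have been measured or synthesized
    elsewhere for the source's direction; the 1/r gain models spherical
    spreading loss relative to a reference distance.
    """
    gain = ref_distance_m / max(distance_m, 1e-6)
    left = [s * gain for s in convolve(mono, hrir_left)]
    right = [s * gain for s in convolve(mono, hrir_right)]
    return left, right
```

Reverberation would be layered on top of this dry path, for example by a further convolution with a room impulse response, which is precisely where tractable real-time algorithms are still needed.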
Current haptic technology provides net force and torque feedback (i.e., simulating tool
usage) but has yet to develop effective tactile feedback (e.g., simulating skin contact or dynamic
flexibility, such as the sensation of bumps, scratches, and deformations due to flexion of body
segments; see chaps. 5 and 6). Srinivasan (see chap. 5) suggests that for the foreseeable future
advances in haptic technology will be limited by the development of new actuator hardware.
In addition, hardware required to simulate distributed forces on the skin may require substantial
gains in miniature rotary and linear actuators or advances in alternative technologies, such as
shape memory alloys, piezoelectrics, microfluidics, and other microelectromechanical systems.

TABLE 1.1
Status of Durlach and Mavor’s (1995) Recommendations for Advancing VE Technology
(The status code and relevant chapters follow each recommendation.)

Technology: human–machine interface
• Address issues of information loss due to technology shortcomings (e.g., poor resolution, limited field of view, deficiencies in tracker technology) (S)
• Improvements in spatialization of sounds, especially sounds to the front of a listener and outside of the “sweet spot” surrounding a listener’s head (M; see chap. 4)
• Improvements in sound synthesis for environmental sounds (M)
• Improvements in real-time sound generation (M)
• Better understanding of scene analysis (e.g., temporal sequencing) in the auditory system (M)
• Improvements in tactile displays that convey information through the skin (L; see chaps. 5–6)
• Better understanding of the mechanical properties of soft tissues that come in contact with haptic devices, limits on human kinesthetic sensing and control, and stimulus cues involved in the sensing of contact and object features (L)
• Improvements in locomotion devices beyond treadmills and exercise machines (M; see chap. 11)
• Address fit issues associated with body-based linkage tracking devices; workspace limitations associated with ground-based linkage tracking devices; accuracy, range, latency, and interference issues associated with magnetic trackers; and sensor size and cost associated with inertial trackers (M; see chap. 8)
• Improvements in sensory, actuator, and transmission technologies for sensing object proximity, object surface properties, and applying force (M; see chaps. 5–6)
• Improvements in the vocabulary size, speaker independence, speech continuity, interference handling, and quality of speech production for speech communication interfaces (S)
• Improvements in olfactory stimulation devices (L)
• Improvements in physiological interfaces (e.g., direct stimulation and sensing of neural systems) (M; see chap. 7)
• Address ergonomic issues associated with interaction devices (e.g., excessive weight, poor fit both mechanically and optically) (M; see chap. 41)
• Better understanding of perceptual effects of misregistration of visual images in augmented reality (M; see chap. 37)
• Better understanding of how multimodal displays influence human performance on diverse types of tasks (M; see chaps. 14, 21)

Technology: computer generation of virtual environments
• Improvements in techniques to minimize the load (i.e., polygon flow) on graphics processors (S; see chap. 12)
• Improvements in data access speeds (S)
• Development of operating systems that ensure high-priority processes (e.g., user tracking) receive priority at regular intervals and provide time-critical computing and rendering with graceful degradation (L)
• Improvements in rendering photorealistic time-varying visual scenes at high frame rates (i.e., resolving the trade-off between realistic images and realistic interactivity) (M)
• Development of navigation aids to prevent users from becoming lost (M; see chap. 24)
• Improvements in ability to model psychological and physical models that “drive” autonomous agents (M; see chap. 15)
• Improved means of mapping how user’s control actions update the visual scene (M; see chaps. 12, 13)
• Improvements in active mapping techniques (e.g., scanning-laser range finders, light stripes) (M; see chap. 8)

Technology: telerobotics
• Improvements in the ability to create and maintain accurate registration between the real and virtual worlds in augmented reality applications (M; see chap. 48)
• Development of display and control systems that support distributed telerobotics (M; see chap. 48)
• Improvements in supervisory control and predictive modeling for addressing transport delay issues (M; see chap. 48)

Technology: networks
• Development of network standards that support large-scale distributed VEs (M; see chap. 16)
• Development of an open VE network (M; see chap. 16)
• Improvements in ability to embed hypermedia nodes into VE systems (M; see chap. 16)
• Development of wide-area and local-area networks with the capability (e.g., increased bandwidth, speed, and reliability; reduced cost) to support the high-performance demands of multimodal VE applications (L; see chap. 16)
• Development of VE-specific applications-level network protocols (L; see chap. 16)

Psychological consideration
• Better understanding of sensorimotor resolution, perceptual illusions, human-information-processing transfer rates, and manual tracking ability (M; see chaps. 20, 22, 23)
• Better understanding of the optimal form of multimodal information presentation for diverse types of tasks (M; see chap. 21)
• Better understanding of the effect of fixed sensory transformations and distortions on human performance (M; see chap. 31)
• Better understanding of how VE drives alterations and adaptation in sensorimotor loops and how these processes are affected by magnitude of exposure (M; see chaps. 31, 37–39)
• Better understanding of the cognitive and social side effects of VE interaction (M; see chaps. 19, 20, 33)

Evaluation
• Establish set of VE testing and evaluation standards (M; see chap. 34)
• Determine how VE hardware and software can be developed in cost-effective manner, taking into consideration engineering reliability and efficiency, as well as human perceptual and cognitive features (M; see chap. 28)
• Identify capabilities and limitations of humans to undergo VE exposure (M; see chaps. 29–41)
• Examine medical and psychological side effects of VE exposure, taking into consideration effects on human visual, auditory, and haptic systems, as well as motion sickness and physiological/psychological aftereffects (M; see chaps. 29–33, 37–39)
• Determine if novel aspects of human–VE interaction require new evaluation tools (M; see chap. 34)
• Conduct studies that can lead to generalizations concerning relationships between types of tasks, task presentation modes, and human performance (L; see chap. 35)
• Determine areas in which VE applications can lead to significant gains in experience or performance (M; see chaps. 42–55)
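The net force feedback that current devices do provide is, at its core, simple to compute. Below is a minimal penalty-based sketch for a single-point probe contacting a rigid virtual surface; the stiffness and force-cap values are illustrative assumptions roughly in the range of desktop force-feedback devices, not the parameters of any particular product:

```python
def contact_force(probe_depth_m, stiffness_n_per_m=800.0, max_force_n=6.0):
    """Penalty-based haptic rendering for a point probe.

    Zero force while the probe is above the virtual surface; once it
    penetrates, a spring force proportional to penetration depth pushes
    back, clamped to the actuator's maximum output.
    """
    if probe_depth_m <= 0.0:
        return 0.0
    return min(stiffness_n_per_m * probe_depth_m, max_force_n)
```

Rendering distributed tactile sensations, by contrast, would require an array of such actuators against the skin, which is exactly the hardware gap noted above.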
Advances in tracking technology have been realized in terms of drift-corrected gyroscopic
orientation trackers, outside-in optical tracking for motion capture, and laser scanners (see
chap. 8). The future of tracking technology is likely hybrid tracking systems, with an
acoustic-inertial hybrid on the market (see http://www.isense.com/products/) and several
others in research labs (e.g., magnetic-inertial, optical-inertial, and optical-magnetic). In
addition, ultrawideband radio technology holds promise for an improved method of
omni-directional point-to-point ranging.
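The appeal of such hybrids can be sketched with a one-line complementary filter: the inertial sensor supplies smooth, low-latency updates, while the slower drift-free modality (acoustic, optical, or magnetic) corrects accumulated bias. The blend factor below is an illustrative assumption, not a value from any shipping tracker:

```python
def fuse_orientation(prev_deg, gyro_rate_dps, dt_s, absolute_deg, alpha=0.98):
    """One step of a complementary filter for a hybrid tracker.

    Integrating the gyro rate gives a smooth, low-latency prediction but
    drifts; blending in a small fraction of the drift-free absolute
    reading cancels that drift over time.
    """
    predicted = prev_deg + gyro_rate_dps * dt_s  # dead-reckoned angle, degrees
    return alpha * predicted + (1.0 - alpha) * absolute_deg
```

Run repeatedly, the filter tracks fast motion from the gyro while the absolute sensor slowly pulls any accumulated error back toward zero.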
Led largely by the Information Society Directorate General of the European Union and
the Information Science and Engineering Directorate of the National Science Foundation, the
quality of speech recognition and synthesis systems has made substantial gains in the past
half decade. Speaker-independent continuous speech recognition systems are currently
commercially available (Germain, 1999; Huang, 1998); however, additional advances are
needed in acoustic and language modeling algorithms to improve the accuracy, usability, and
efficiency of spoken language understanding. Synthetic speech can now be produced that
reasonably resembles the acoustic and prosodic characteristics of the original speaker;
however, improvements are required in the areas of naturalness, flexibility, and intelligibility
of synthesized speech (Institution of Electrical Engineers, 2000). Speech recognition and
synthesis are not addressed in detail in this handbook, not due to any implied lack of
importance but rather because there are many significant works whose sole focus is speech
technology (see Gibbon, Mertins, & Moore, 2000; Varile & Zampolli, 1998). (For an excellent
information source on commercial speech recognition, see
http://www.tiac.net/users/rwilcox/speech.html;
see http://www.cs.bham.ac.uk/∼jpi/museum.html for resources on speech synthesis systems;
see http://research.microsoft.com/research/srg/ for the latest in Microsoft’s speech technology
efforts.)
Taken together, these technological advancements, along with those poised for the near
future, provide the infrastructure on which to build complex, immersive multimodal VE
applications.
start. The future promises massive parallelism in computing as we approach the molecular-
feature-size limits in integrated circuits (Appenzeller, 2000).
Software development of VE systems has progressed tremendously, from proprietary and
arcane systems to development kits that run on general-purpose operating systems, such as
Windows in most of its flavors, while still allowing high-end development on Silicon Graphics
workstations (Pountain, 1996). Virtual environment system components are becoming modular
and distributed, thereby allowing VE databases (i.e., editors used to design, build, and
maintain virtual worlds) to run independently of visualizers and other multimodal interfaces
via network links. Standard application program interfaces (APIs; e.g., OpenGL, Direct3D,
Mesa) allow multimodal components to be hardware independent. Virtual environment
programming languages are advancing, with APIs, libraries, and particularly scripting
languages allowing nonprogrammers to develop virtual worlds. Using these tools, commercial
application developers can build a range of VEs, from the most basic mazes to complex
medical simulators, and from low-end single-user PC platform applications to collaborative
applications supported by client–server environments.
A number of 3-D modeling languages and tool kits are available which provide intuitive
interfaces and run on multiple platforms and renderers (e.g., AC3D Modeler, Clayworks,
MR Toolkit, MultiGen Creator and Vega, RealiMation, Renderware, VRML, WorldToolKit).
Beyond these languages, which deal with display devices that paint pixels on the screen
and define higher-level inputs via triangles and polygons, a new approach to the computer
generation of VEs is to use a scene management engine (RealiMation, 2000). This approach
allows programmers to work at a higher level, defining characteristics and behaviors for more
holistic concepts (e.g., attacker, enemy), thereby enabling developers to concentrate on content
design (see chap. 25) without being concerned about how that content is delivered to users.
Photorealistic rendering tools are evolving toward full-featured physics-based global
illumination rendering systems (e.g., Raster3D—http://www.bmsc.washington.edu/raster3d/raster3d.html;
RenderPark—http://www.cs.kuleuven.ac.be/cwis/research/graphics/RENDERPARK/;
Heirich & Arvo, 1997; Merritt & Bacon, 1997). Such physically based rendering techniques
allow quantitative prediction of the illumination in a virtual scene and generation of
photorealistic computer images, in which illumination effects such as soft shadows and glossy
reflections are reproduced with high fidelity (Suykens, 1999).
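The “quantitative prediction” these renderers perform bottoms out in evaluating a physically based reflection model at each visible point. A minimal sketch of the simplest case, a Lambertian (ideal diffuse) surface lit directly by a point light with inverse-square falloff, is below; it is a single term of what a full global illumination solver sums over many light paths:

```python
import math

def lambert_radiance(light_pos, light_intensity, point, normal, albedo):
    """Radiance reflected toward any viewer from a Lambertian surface
    point lit directly by an isotropic point light.

    The diffuse BRDF is albedo / pi; irradiance falls off with the
    inverse square of the light distance and the cosine of the incidence
    angle. Assumes a unit-length normal and that the light does not
    coincide with the surface point.
    """
    dx = light_pos[0] - point[0]
    dy = light_pos[1] - point[1]
    dz = light_pos[2] - point[2]
    r2 = dx * dx + dy * dy + dz * dz
    r = math.sqrt(r2)
    cos_theta = max(0.0, (dx * normal[0] + dy * normal[1] + dz * normal[2]) / r)
    return (albedo / math.pi) * light_intensity * cos_theta / r2
```

Soft shadows and glossy reflections arise when many such terms are integrated over area lights and non-Lambertian BRDFs, which is what makes the photorealism–interactivity trade-off expensive.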
Computer generation of autonomous agents is a key component of many VE applications
involving interaction with other entities, such as adversaries, instructors, or partners. There
has been significant research and development in modeling embodied autonomous agents (see
chap. 15). Notable in this area is a spin-off from the MIT Artificial Intelligence Laboratory,
Boston Dynamics, Inc. (BDI, http://www.bdi.com/). BDI has adapted advances in robotics
systems, such as motion caching, variable motion interpolation, and task-level control
optimization techniques, to display dozens of lifelike articulated agents at one time. BDI’s
products allow system developers to work directly in a 3-D database, interactively specifying
agent behaviors, such as paths to traverse and sensor regions. The resulting agents move
realistically, respond to simple commands, and travel about a VE as directed. Further
integration of telerobotics techniques (see chap. 48) into autonomous agent design is certain to
lead to even more impressive advances. While the aforementioned gains are noteworthy, there
are still a number of unsolved problems in agent design and development (see Table 15.3 of
chap. 15).
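Task-level direction of an agent (“traverse this path”) ultimately reduces to small per-frame steering updates. A hypothetical waypoint-seeking step, far simpler than the motion-caching and interpolation machinery described above, can be sketched as:

```python
def seek_step(pos, target, speed, dt):
    """One update of the simplest steering behavior.

    Moves a 2-D agent at a fixed speed toward its current waypoint,
    snapping onto the waypoint rather than overshooting it.
    """
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    step = speed * dt
    if dist <= step or dist == 0.0:
        return target
    return (pos[0] + dx / dist * step, pos[1] + dy / dist * step)
```

A path is then just a queue of waypoints, with the agent advancing to the next one whenever `seek_step` returns the current target.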
Research in VE navigation has led to the development of design guidelines and aids that
enable wayfinding in virtual worlds. These aids include maps, landmarks, trails, and direction
finding (see chap. 24). In addition, for closed VEs (e.g., buildings), tools that demonstrate
the surrounding area (maps, exocentric 3-D views) are recommended if training or exposure
time is short, while internal landmarks (i.e., along a route) are recommended for longer
exposure durations (Stanney, Chen, & Wedell, 2000). For semiopen (e.g., urban areas) and open
environments (e.g., sea, sky), demonstrating the surround is appropriate for short exposures,
while use of external landmarks (i.e., outside a route) is recommended for long exposure times.
Based on these and other guidelines, aids need to be developed to guide navigation in virtual
environments. One such aid, designed by Stanney, Chen, and Wedell (2000), provides
wayfinders with a “window” of normative color shaded on the edges in a symbolic color (e.g.,
yellow or red), which appears when off-course (see Fig. 1.1). The scene appears in normative
color when wayfinders are on-course, gradually changing to yellow and then to red as a
wayfinder moves further off-course. While this is just an example, and one whose effectiveness
has yet to be validated, it is shared here as the product of a collaborative effort between
engineers and a graphic artist. Such multidisciplinary collaborations are likely to serve as the
crux for truly innovative advances in VE design. More work is needed in the area of
navigational aiding, as making one’s way through a VE has been found to be one of the most
significant usability issues influencing VE task performance (Ellis, 1993; Jul & Furnas, 1997).

FIG. 1.1. Sea scene with a window of normative color encircled by a yellow halo, indicating
the wayfinder is going off-course.
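The color-window aid described above can be sketched as a mapping from route deviation to a surround tint. The deviation thresholds below are illustrative assumptions, not values from the Stanney, Chen, and Wedell (2000) design:

```python
def offcourse_tint(deviation_m, warn_m=5.0, max_m=15.0):
    """Map distance off the planned route to a surround tint.

    On course, the scene keeps its normative colors; past a warning
    threshold, the surround blends toward yellow and then red as the
    wayfinder strays further. Returns (color_name, blend) with blend
    in [0, 1].
    """
    if deviation_m <= warn_m:
        return ("normal", 0.0)
    if deviation_m >= max_m:
        return ("red", 1.0)
    t = (deviation_m - warn_m) / (max_m - warn_m)
    return ("yellow", t) if t < 0.5 else ("red", t)
```

A renderer would use the blend value to interpolate the halo color each frame, giving the gradual normative-to-yellow-to-red transition the aid calls for.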
The NRC report (Durlach & Mavor, 1995) indicated the need for a real-time operating
system for virtual environments, but the committee’s expectation that such an effort would be
funded was low. That proved to be an accurate assessment, as current operating systems (OSs)
are perhaps less supportive of VEs than they were six years ago, at the time of the NRC
report’s debut. Certainly there are a variety of Windows derivatives (Nicholls, 2000b), yet no
one convincingly argues for their use as OSs for VEs, except that a Windows variant allows
for broad usage if one wants wide acceptance. Linux, a less-capable but open-source
derivative of Unix, is available. At the same time, there is diminished use of Silicon Graphics
(SGI) Irix. To many of those on the bleeding edge of VE technology, SGI Irix was the
operating system for developing VEs, and the many features of that system not found
elsewhere are direly missed. So the right OS for developing VEs is still an open issue.
2.3 Telerobotics
Beyond the advantages to autonomous agent design discussed above, there are many areas
(e.g., sensing, navigation, object manipulation) in which VE technology can prosper from
the application of robotics techniques. Yet, if these techniques are to be adopted, issues of
communication time delay (i.e., transport delay) and real-time control architecture design
must be resolved. Chapter 48 discusses a number of techniques for addressing these issues. In
that chapter, Kheddar, Chellali, and Coiffet note that
a cleverly conceived yet “simple” VE intermediary representation contributes to solving the time
delay problem, offers ingenious metaphors for both operator assistance and robot autonomy sharing
problems, enhances operator sensory feedback through multiple sensory modalities admixtures,
enhances operator safety, offers a huge possible combination of strategies for remote control
and data feedback, shifts the well known antagonistic transparency/stability problem into an
operator/VE transparency one without compromising the slave stability, offers the possibility to
enhance—in terms of pure control theory—remote robot controllers, allows new human-centered
teleoperation schemes, permits the production of advanced user-friendly teleoperation interfaces,
makes possible the remote control of actual complex systems, such as mobile robots, nano and
micro robots, surgery robots, etc.
To achieve these gains, however, advances in VE modeling techniques and means of addressing
error detection and recovery inherent to VE–real environment discrepancies are needed (see
discussion in chap. 48).
2.4 Networks
The NRC report (Durlach & Mavor, 1995) suggested that with improvements in
communications networks, virtual environments would become shared experiences, in which
individuals, objects, processes, and autonomous agents from diverse locations interactively
collaborate. Advances in the Internet have been substantial in the time since that report, due
particularly to the U.S. government’s Next Generation Internet (NGI) effort and the University
Corporation for Advanced Internet Development’s (UCAID’s) Internet2 (Langa, 2001). The
NGI initiative (http://www.ngi.gov/) is connecting a number of universities and national labs
at speeds 100 times faster than the 1996 Internet, and a smaller number of institutions at
speeds 1,000 times faster, in order to experiment with collaborative-networking technologies,
such as high-quality video conferencing and audio and video streams. Of particular interest to
VE developers, technologies have been developed to “mark” data streams as having specific
characteristics (e.g., time-critical, lockstep) so that differentiated services can enable different
types of data to be handled with different quality of service levels. Internet2 is using existing
networks (e.g., the National Science Foundation’s VBNS—Very-High-Speed Backbone
Network Service) to determine the transport designs necessary to carry real-time multimedia
data at high speed (http://apps.internet2.edu/). Networked VE applications, which require the
ability to recognize and track the presence and movements of individuals as well as physical
and virtual objects, while projecting them in realistic, multiple, geographically distributed
immersive environments on stereo-immersive surfaces, are ideal for Internet2, as they leverage
its special capabilities (i.e., high bandwidth, low latency, low jitter; Singhal & Zyda, 1999).
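The “marking” of streams referred to above is visible even at the sockets layer: differentiated services work by setting a DSCP codepoint in each packet’s IP header, which routers can use to prioritize traffic. A minimal sketch follows; the choice of codepoint 46 (Expedited Forwarding) for time-critical tracking data is an illustrative assumption:

```python
import socket

def make_marked_udp_socket(dscp=46):
    """Create a UDP socket whose outgoing packets carry a DSCP codepoint.

    DSCP occupies the top six bits of the legacy IP TOS byte, hence the
    left shift by two. Codepoint 46 (Expedited Forwarding) is the
    conventional low-latency class a time-critical VE stream might use.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return sock
```

Whether routers actually honor the mark depends on the network’s differentiated-services policy; end hosts can only request the treatment.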
3. PSYCHOLOGICAL CONSIDERATION
There are a number of psychological considerations associated with the design and use of
VE systems. Some of these focus on techniques and concerns that can be used to augment
or enhance VE interaction and transfer-of-training (e.g., perceptual illusions, design based on
human-information-processing transfer rates), while others focus on adverse effects due to VE
exposure. In terms of the former, we know that perceptual illusions exist, such as
auditory-visual cross-modal perception phenomena (see chap. 22), yet little is known about
how to leverage these phenomena to reduce development costs while enhancing one’s
experience in a virtual environment. Perhaps the one exception is vection (i.e., the illusion of
self-movement), which is known to be related to a number of display factors (see Table 23.1 of
chap. 23). By manipulating these display factors, designers can provide VE users with a
compelling illusion of self-motion throughout a virtual world, thereby enhancing their sense of
presence (see chap. 40), often with the untoward effect of motion sickness as well (see
chap. 23). Other such illusions exist (e.g., visual dominance; see chap. 22) and could likewise
be leveraged.