
Greenleaf, W., Piantanida, T. "Medical Applications of Virtual Reality Technology."
The Biomedical Engineering Handbook: Second Edition.
Ed. Joseph D. Bronzino
Boca Raton: CRC Press LLC, 2000
© 2000 by CRC Press LLC

69
Medical Applications of Virtual Reality Technology

Walter Greenleaf
Greenleaf Medical

Tom Piantanida
Greenleaf Medical

69.1 Overview of Virtual Reality Technology
Instrumented Clothing • Head-Mounted Display (HMD) • 3D Spatialized Sound • Other VR Interface Technology
69.2 VR Application Examples
VR Applications
69.3 Current Status of Virtual Reality Technology
69.4 Overview of Medical Applications of Virtual Reality Technology
Surgical Training and Surgical Planning • Medical Education, Modeling, and Non-Surgical Training • Anatomical Imaging and Medical Image Fusion • Ergonomics, Rehabilitation, and Disabilities • Telesurgery and Telemedicine • Behavioral Evaluation and Intervention
69.5 Summary

Virtual Reality (VR) is the term commonly used to describe a novel human-computer interface that
enables users to interact with computers in a radically different way. VR consists of a computer-generated,
multi-dimensional environment and interface tools that allow users to:
1. immerse themselves in the environment,
2. navigate within the environment, and
3. interact with objects and other inhabitants in the environment.
The experience of entering this environment—this computer-generated virtual world—is compelling.
To enter, the user usually dons a helmet containing a head-mounted display (HMD) that incorporates
a sensor to track the wearer’s movement and location. The user may also wear sensor-clad clothing that
likewise tracks movement and location. The sensors communicate position and location data to a
computer, which updates the image of the virtual world accordingly. By employing this garb, the user
“breaks through” the computer screen and becomes completely immersed in this multi-dimensional
world. Thus immersed, one can walk through a virtual house, drive a virtual car, or run a marathon in
a park still under design. Recent advances in computer processor speed and graphics make it possible
for even desk-top computers to create highly realistic environments. The practical applications are far
reaching. Today, using VR, architects design office buildings, NASA controls robots at remote locations,
and physicians plan and practice difficult operations.



FIGURE 69.1 A complete VR system.

Virtual reality is quickly finding wide acceptance in the medical community as researchers and clini-
cians alike become aware of its potential benefits. Several pioneer research groups have already demon-
strated improved clinical performance using VR imaging, planning, and control techniques.

69.1 Overview of Virtual Reality Technology


The term “Virtual Reality” describes the experience of interacting with data from within the computer-
generated data set. The computer-generated data set may be completely synthetic or remotely sensed,
such as X-ray, MRI, PET, etc. images. Interaction with the data is natural and intuitive and occurs from
a first-person perspective. From a system perspective, VR technology can be segmented as shown in
Fig. 69.1.
The computer-generated environment, or virtual world content, consists of a 3D graphic model, typically
implemented as a spatially organized, object-oriented database; each object in the database represents
an object in the virtual world.
A separate modeling program is used to create the individual objects for the virtual world. For greater
realism, texture maps are used to create visual surface detail.
The data set is manipulated using a real-time dynamics generator that allows objects to be moved within
the world according to natural laws such as gravity and inertia, or according to other variables such as spring-
rate and flexibility that are specified for each particular experience by application-specific programming.
The dynamics generator also tracks the position and orientation of the user’s head and hand using
input peripherals such as a head tracker and DataGlove.
Powerful renderers are applied to present 3D images and 3D spatialized sound in real-time to the
observer.
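The pipeline just described can be made concrete with a short sketch. The following Python fragment is offered purely as an illustration, not as any particular system's implementation: an object-oriented scene database whose objects are advanced each frame by a dynamics step (gravity, with a crude floor constraint) and then rendered from a tracked viewpoint. All names and constants are invented for the example.

```python
import time
from dataclasses import dataclass, field

GRAVITY = (0.0, -9.81, 0.0)  # world-space acceleration, m/s^2 (illustrative)

@dataclass
class WorldObject:
    """One entry in the spatially organized, object-oriented scene database."""
    name: str
    position: list = field(default_factory=lambda: [0.0, 1.0, 0.0])
    velocity: list = field(default_factory=lambda: [0.0, 0.0, 0.0])
    affected_by_gravity: bool = True

def dynamics_step(objects, dt):
    """Real-time dynamics generator: move objects according to natural laws."""
    for obj in objects:
        if obj.affected_by_gravity:
            obj.velocity = [v + g * dt for v, g in zip(obj.velocity, GRAVITY)]
        obj.position = [p + v * dt for p, v in zip(obj.position, obj.velocity)]
        obj.position[1] = max(obj.position[1], 0.0)  # crude floor constraint

def render(objects, head_pose):
    """Stand-in for the renderer: draw the world from the tracked viewpoint."""
    print(f"view from {head_pose}: " +
          ", ".join(f"{o.name}@{[round(p, 2) for p in o.position]}" for o in objects))

world = [WorldObject("ball"),
         WorldObject("table", position=[1.0, 0.0, 0.0], affected_by_gravity=False)]
for frame in range(3):
    head_pose = (0.0, 1.7, 2.0)      # would come from the HMD tracker
    dynamics_step(world, dt=1 / 30)  # fixed 30 Hz simulation step
    render(world, head_pose)
```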
The common method of working with a computer (the mouse/keyboard/monitor paradigm), based as it is on a two-dimensional desk-top metaphor, is inappropriate for the multi-dimensional virtual world.

FIGURE 69.2 The DataGlove™, a VR control device.

Therefore, one long-term challenge for VR developers has been to replace the conventional
computer interface with one that is more natural, intuitive, and allows the computer—not the user—to
carry a greater proportion of the interface burden. Not surprisingly, this search for a more practical
multi-dimensional metaphor for interacting with computers and computer-generated artificial environ-
ments has spawned the development of a new generation of computer interface hardware. Ideally, the
interface hardware should consist of two components: (1) sensors for controlling the virtual world and
(2) effectors for providing feedback to the user. To date, three new computer-interface mechanisms, not
all of which manifest the ideal sensor/effector duality, have evolved: Instrumented Clothing, the Head
Mounted Display (HMD), and 3D Sound Systems.

Instrumented Clothing
The DataGlove™ and DataSuit™ use dramatic new methods to measure human motion dynamically in
real time. The clothing is instrumented with sensors that track the full range of motion of specific activities
of the person wearing the Glove or Suit, for example as the wearer bends, moves, grasps, or waves.
The DataGlove is a thin lycra glove with bend-sensors running along its dorsal surface. When the
joints of the hand bend, the sensors bend and the angular movement is recorded by the sensors. These
recordings are digitized and forwarded to the computer, which calculates the angle at which each joint
is bent. On screen, an image of the hand moves in real time, reflecting the movements of the hand in
the DataGlove and immediately replicating even the most subtle actions.
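The sensor-to-angle pipeline can be illustrated with a brief sketch. The following Python fragment is a hypothetical illustration, not DataGlove firmware: digitized bend-sensor readings are mapped to joint angles through a per-joint linear calibration, and the resulting angles would drive the on-screen hand model. The calibration constants are invented for the example.

```python
# (raw reading at 0 degrees, raw reading at 90 degrees) per joint -- illustrative
CALIBRATION = {
    "index_mcp": (512, 890),
    "index_pip": (498, 930),
}

def joint_angle(joint, raw):
    """Map a digitized bend-sensor reading to a joint angle in degrees."""
    flat, bent = CALIBRATION[joint]
    return (raw - flat) / (bent - flat) * 90.0

frame = {"index_mcp": 700, "index_pip": 610}   # one digitized sample
angles = {j: round(joint_angle(j, r), 1) for j, r in frame.items()}
print(angles)   # these angles drive the on-screen hand model in real time
```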
The DataGlove is often used in conjunction with an absolute position and orientation sensor that
allows the computer to determine the three-space coordinates, as well as the orientation of the hand and
fingers. A similar sensor can be used with the DataSuit and is nearly always used with an HMD.
The DataSuit is a customized body suit fitted with the same sophisticated bend-sensors found in the
DataGlove. While the DataGlove is currently in production as both a VR interface and as a data-collection
instrument, the DataSuit is available only as a custom device. As noted, DataGlove and DataSuit are
utilized as general purpose computer interface devices for VR. There are several potential applications
of this new technology for clinical and therapeutic medicine.

© 2000 by CRC Press LLC


Head-Mounted Display (HMD)
The best-known sensor/effector system in VR is a head-mounted display (HMD). It supports first-person
immersion by generating an image for each eye, which, in some HMDs, may provide stereoscopic vision.
Most lower cost HMDs ($6000 range) use LCD displays; others use small CRTs. The more expensive
HMDs ($60,000 and up) use optical fibers to pipe the images from remotely mounted CRTs. An HMD
requires a position/orientation sensor in addition to the visual display. Some displays, for example the
BOOM System [Bolas, 1994], may be head-mounted or may be used as a remote window into a virtual
world.

3D Spatialized Sound
The impression of immersion within a virtual environment is greatly enhanced by inclusion of 3D
spatialized sound [Durlach, 1994; Hendrix and Barfield, 1996]. Stereo-pan effects alone are inadequate
since they tend to sound as if they are originating inside the head. Research into 3D audio has shown
the importance of modeling the head and pinnae and using this model as part of the 3D sound generation.
A Head-Related Transfer Function (HRTF) can be used to generate the proper acoustics [Begault and
Wenzel, 1992; Wenzel, 1994]. A number of problems remain, such as the “cone of confusion” wherein
sounds behind the head are perceived to be in front of the head [Wenzel, 1992].
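The essential HRTF operation is a convolution of the source signal with a left-ear and a right-ear impulse response. The sketch below uses crude stand-in impulse responses (a simple interaural delay and attenuation) rather than measured HRTFs, so it conveys the mechanism only.

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Convolve a mono signal with left/right head-related impulse responses."""
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)], axis=1)

fs = 44100
t = np.arange(fs // 10) / fs
mono = np.sin(2 * np.pi * 440 * t)       # 100 ms test tone

# Crude stand-in HRIRs for a source to the listener's right: the left ear
# hears the sound later (interaural time difference) and quieter (head shadow).
itd_samples = int(0.0006 * fs)           # ~0.6 ms ITD, illustrative
n = itd_samples + 1
hrir_right = np.zeros(n); hrir_right[0] = 1.0
hrir_left = np.zeros(n); hrir_left[itd_samples] = 0.5

stereo = spatialize(mono, hrir_left, hrir_right)
print(stereo.shape)   # (N, 2) stereo buffer ready for playback
```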

Other VR Interface Technology


A sense of motion can be generated in a VR system by a motion platform. These have been used in flight
simulators to provide cues that the mind integrates with visual and spatialized sound cues to generate
perception of velocity and acceleration.
Haptics is the science of touch. Haptic interfaces generate perception of touch and resistance in VR.
Most systems to date have focused on providing force feedback to enable users to sense the inertial
qualities of virtual objects, and/or kinesthetic feedback to specify the location of a virtual object in the
world [Salisbury and Srinivasan, 1996, 1997]. A few prototype systems exist that generate tactile stimu-
lation, which allow users to feel the surface qualities of virtual objects [Minsky and Lederman, 1996].
Many of the haptic systems developed thus far consist of exo-skeletons that provide position sensing as
well as active force application [Burdea et al., 1992].
Some preliminary work has been conducted on generating the sense of temperature in VR. Small
electrical heat pumps have been developed that produce sensations of heat and cold as part of the
simulated environment. (See, for example, [Caldwell and Gosney, 1993; Ino et al., 1993].)
Olfaction is another sense that provides important cues in the real world. Consider, for example, a
surgeon or dentist examining a patient for a potential bacterial infection. Inflammation and swelling
may be present, but a major deciding factor is the odor of the lesion. Very early in the history of virtual
reality, Mort Heilig patented his Sensorama Simulator, which incorporated olfactory, as well as visual,
auditory, and motion cues (U.S. Patent 3850870, 1961). Recently, another pioneer of virtual reality, Myron
Krueger [Krueger, 1994, 1995a, 1995b], has been developing virtual olfaction systems for use in medical
training applications. The addition of virtual olfactory cues to medical training systems should greatly
enhance both the realism and effectiveness of training.

69.2 VR Application Examples


Virtual reality has been researched for years in government laboratories and universities, but because of
the enormous computing power demands and associated high costs, applications have been slow to
migrate from the research world to other areas. Continual improvements in the price/performance ratio
of graphic computer systems, however, have made VR technology more affordable and, thus, used more
commonly in a wider range of application areas. In fact, there is even a strong “Garage VR” move-
ment—groups of interested parties sharing information on how to build extremely low-cost VR systems using inexpensive off-the-shelf components [Jacobs, 1994]. These home-made systems are often ineffi-
cient, uncomfortable to use (sometimes painful), and slow, but they exist as a strong testament to a
fervent interest in VR technology.

VR Applications
Applications today are diverse and represent dramatic improvements over conventional visualization and
planning techniques:
Public Entertainment: VR made its first major inroads in the area of public entertainment, with
ventures ranging from shopping mall game simulators to low-cost VR games for the home. Major
growth continues in home VR systems, partially as a result of 3D games on the Internet.
Computer-Aided Design: Using VR to create “virtual prototypes” in software allows engineers to test
potential products in the design phase, even collaboratively over computer networks, without
investing time or money for conventional hard models. All of the major automobile manufacturers
and many aircraft manufacturers rely heavily on virtual prototyping.
Military: With VR, the military’s solitary cab-based systems have evolved to extensive networked
simulations involving a variety of equipment and situations. There is an apocryphal story that
General Norman Schwarzkopf was selected to lead the Desert Storm operation on the basis of
his extensive experience with simulations of the Middle East battle arena.
Architecture/Construction: VR allows architects and engineers and their clients to “walk through”
structural blueprints. Designs may be understood more clearly by clients who often have difficulty
comprehending them even with conventional cardboard models. Atlanta, Georgia credits its VR
model for winning it the site of the 1996 Olympics, while San Diego is using a VR model of a
planned convention center addition to compete for the next convention of the Republican Party.
Data Visualization: By allowing navigation through an abstract “world” of data, VR helps users rapidly
visualize relationships within complex, multi-dimensional data structures. This is particularly
important in financial-market data, where VR supports faster decision-making.
VR is commonly associated with exotic “fully immersive applications” because of the over-dramatized
media coverage on helmets, body suits, entertainment simulators, and the like. Equally important are
the “Window into World” applications where the user or operator is allowed to interact effectively with
“virtual” data, either locally or remotely.

69.3 Current Status of Virtual Reality Technology


The commercial market for VR, while taking advantage of advances in VR technology at large, is
nonetheless contending with the lack of integrated systems and the frequent turnover of equipment
suppliers. Over the last few years, VR users in academia and industry have developed different strategies
for circumventing these problems. In academic settings, researchers buy peripherals and software from
separate companies and configure their own systems to maintain the greatest application versatility. In
industry, however, expensive, state-of-the-art VR systems are vertically integrated to address problems
peculiar to the industry.
Each solution is either too costly or too risky for most medical organizations. What is required is a
VR system tailored to the needs of the medical community. Unfortunately, few companies offer integrated
systems that are applicable to the VR medical market. This situation is likely to change in the next few
years as VR-integration companies develop to fill this void.
At the same time, the nature of the commercial VR medical market is changing as the price of high-
performance graphics systems continues to decline. High-resolution graphics monitors are becoming
more cost-effective even for markets that rely solely on desktop computers. Technical advances are also
occurring in networking, visual photo-realism, tracker latency through predictive algorithms, and vari-
able-resolution image generators. Improved database access methods are underway. Hardware advances, such as eye gear that provides an increased field of view at high resolution, untethered VR systems, and inexpensive, intuitive input devices (e.g., DataGloves), have lagged behind advances in computational, communications, and display capabilities.

69.4 Overview of Medical Applications of Virtual Reality Technology
Within the medical community, the first wave of VR development efforts has evolved into six key
categories:
1. Surgical Training and Surgical Planning
2. Medical Education, Modeling, and Non-Surgical Training
3. Anatomical Imaging and Medical Image Fusion
4. Ergonomics, Rehabilitation, and Disabilities
5. Telesurgery and Telemedicine
6. Behavioral Evaluation and Intervention
The potential of VR for education and information dissemination suggests that few areas of medicine will fail to take advantage of this improved computer interface. The deeper potential of VR, however, lies in its capacity to manipulate and combine heterogeneous, multi-dimensional data sets from many sources, for example, MRI, PET, and X-ray images. This capability is most significant and continues to transform the traditional applications environment.

Surgical Training and Surgical Planning


Various projects are underway to utilize VR and imaging technology to plan, simulate, and customize
invasive (as well as minimally invasive) surgical procedures. One example of a VR surgical-planning and
training system is the computer-based workstation developed by Ciné-Med of Woodbury, Connecticut
[McGovern, 1994]. The goal was to develop a realistic, interactive training workstation that helps surgeons
make a more seamless transition from surgical simulation to the actual surgical event.
Ciné-Med focused on television-controlled endosurgical procedures because of the intensive training
required for endosurgery and the adaptability of endosurgery to high quality imaging. Surgeons can gain
clinical expertise by training on this highly realistic and functional surgical simulator. Ciné-Med’s com-
puter environment includes life-like virtual organs that react much like their real counterparts, and
sophisticated details such as the actual surgical instruments that provide system input/output (I/O). To
further enhance training, the simulator allows the instructor to adapt clinical instruction to advance the
technical expertise of learners. Surgical anomalies and emergency situations can be replicated to allow
practicing surgeons to experiment and gain technical expertise on a wide range of surgical problems
using the computer model before using an animal model. Since the steps of the procedure can be repeated
and replayed at a later time, the learning environment surpasses other skills-training modalities.
The current prototype simulates the environment of laparoscopic cholecystectomy for use as a surgical
training device. Development began with the creation of an accurate anatomic landscape, including the
liver, the gallbladder, and related structures. Appropriate surgical instruments are used for the system
I/O and inserted into a fiberglass replica of a human torso. Four surgical incisional ports are assigned
for three scissors grip instruments and camera zoom control. The instruments, retrofitted with switching
devices, read and relay the opening and closing of the tips, with position trackers located within the
simulator. The virtual surgical instruments are graphically generated on a display monitor where they
interact with fully textural, anatomically correct, three-dimensional virtual organs. The organs are created
as independent objects and conform to object-oriented programming.
To replicate physical properties, each virtual organ must be assigned appropriate values to dictate its
reaction when contacted by a virtual surgical instrument. Collision algorithms are established to define
when the virtual organ is touched by a virtual surgical instrument. Additionally, with the creation of spontaneous objects resulting from the dissection of a virtual organ, each new object is calculated to
have independent physical properties using artificial intelligence (AI) subroutines. Collision algorithms
drive the programmed creation of spontaneous objects.
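A simple proximity test conveys the flavor of such collision algorithms. In the sketch below, organs are approximated by bounding spheres carrying illustrative physical properties; a production simulator would test against the actual organ meshes, and all names and values here are invented.

```python
import math

# Each virtual organ carries physical properties that dictate its reaction
# when contacted (values illustrative). Collision is a point-vs-sphere
# proximity test, a common cheap stand-in for full mesh collision.
organs = [
    {"name": "gallbladder", "center": (0.1, 0.0, 0.3), "radius": 0.04,
     "stiffness": 120.0},   # N/m, would govern the deformation response
    {"name": "liver", "center": (0.0, 0.05, 0.25), "radius": 0.12,
     "stiffness": 400.0},
]

def check_collisions(tool_tip):
    """Return (organ name, penetration depth) for every organ the tip touches."""
    hits = []
    for organ in organs:
        d = math.dist(tool_tip, organ["center"])
        if d < organ["radius"]:
            hits.append((organ["name"], organ["radius"] - d))
    return hits

tip = (0.05, 0.02, 0.28)   # tracked instrument-tip position, meters
for name, depth in check_collisions(tip):
    print(f"contact with {name}, penetration {depth * 1000:.1f} mm")
```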
To reproduce the patient’s physiological reactions during the surgical procedure, the simulation
employs an expert system. This software sub-system generates patient reactions and probable outcomes
derived from surgical stimuli, for example, bleeding control, heart rate failure, and, in the extreme, a
death outcome. The acceptable value ranges of these factors are programmed to be constantly updated
by the expert system while important data is displayed on the monitor.
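The behavior of such an expert system can be suggested with a toy rule-based model. In the following sketch, all state variables, thresholds, and rates are invented for illustration; the point is only the stimulus-update-outcome loop described above.

```python
# Patient state variables are updated each tick from surgical stimuli, and
# out-of-range values trigger escalating outcomes. All numbers illustrative.
state = {"heart_rate": 72.0, "blood_volume_ml": 5000.0, "bleeding_ml_per_s": 0.0}

def apply_stimulus(event):
    """Map a surgical stimulus onto the physiological state."""
    if event == "vessel_nicked":
        state["bleeding_ml_per_s"] += 5.0
    elif event == "bleeding_controlled":
        state["bleeding_ml_per_s"] = 0.0

def tick(dt):
    """Advance the physiological model by dt seconds and report status."""
    state["blood_volume_ml"] -= state["bleeding_ml_per_s"] * dt
    lost_fraction = 1.0 - state["blood_volume_ml"] / 5000.0
    state["heart_rate"] = 72.0 + 80.0 * lost_fraction   # compensatory tachycardia
    if lost_fraction > 0.4:
        return "cardiac failure"      # extreme outcome
    if lost_fraction > 0.15:
        return "hypovolemia warning"
    return "stable"

apply_stimulus("vessel_nicked")
for _ in range(200):                  # uncontrolled bleeding for 200 s at 1 Hz
    status = tick(1.0)
print(status, {k: round(v, 1) for k, v in state.items()})
```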
Three-dimensional graphical representation of a patient’s anatomy is a challenge for accurate surgical
planning. Technological progress has been seen in the visualization of bone, brain, and soft tissue.
Heretofore, three-dimensional modeling of soft tissue has been difficult and often inaccurate owing to
the intricacies of the internal organ, its vasculature, ducts, volume, and connective tissues.
As an extension of this surgical simulator, a functional surgical planning device using VR technology
is under development that will enable surgeons to operate on an actual patient, in virtual reality, prior
to the actual operation. With the advent of technological advances in anatomic imaging, the parallel
development of a surgical planning device incorporating real-time interaction with computer graphics
that mimic a patient’s anatomy is possible. Identification of anatomical structures to be modeled consti-
tutes the initial phase for development of the surgical planning device. A spiral CAT scanning device
records multiple slices of the anatomy during a single breath inhalation by the patient. Pin-registered
layers of the anatomy are thus provided for the computer to read.
Individual anatomical structures are defined at the scan level according to gray scale. Once the anatomical structures are identified and labeled, the scans are stacked and connected, yielding a fully volumetric polygonal model of the patient's actual anatomy. Because this model contains too many polygons to be managed by the graphics workstation, a polygon-reduction program is run to eliminate excessive polygons, creating a wire frame that can be texture-mapped and interacted with in real time.
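The scan-to-model pipeline can be approximated with standard tools. The sketch below synthesizes a stack of slices, segments a structure by gray-scale threshold, and extracts a polygonal surface with marching cubes; it is an analogy built on the scikit-image library, not the software described in the text, and the synthetic "anatomy" is invented.

```python
import numpy as np
from skimage import measure   # pip install scikit-image

# Synthesize a stack of 64 "CT slices" containing a bright spherical organ.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
volume = np.where((x - 32)**2 + (y - 32)**2 + (z - 32)**2 < 20**2, 180, 40)
volume = volume + np.random.default_rng(0).normal(0, 5, volume.shape)

# Structures are "defined at the scan level according to gray scale":
# threshold halfway between organ and background intensity, then extract
# a polygonal surface. A production system would follow this with
# polygon reduction to make the mesh manageable in real time.
verts, faces, _, _ = measure.marching_cubes(volume, level=110)
print(f"{len(verts)} vertices, {len(faces)} triangles before reduction")
```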
Key to the planning device is development of a user-friendly software program that will allow the
radiologist to define anatomical structures, incorporate them into graphical representations, and assign
physiological parameters to the anatomy. Surgeons will then be able to diagnose, plan and prescribe
appropriate therapy to their patients using a trial run of a computerized simulation.
Several VR-based systems currently under development allow real-time tracking of surgical instru-
mentation and simultaneous display and manipulation of three-dimensional anatomy corresponding to
the simulated procedure [Hon, 1994; McGovern, 1994; Edmond et al., 1997]. With this design surgeons
can practice procedures and experience the possible complications and variations in anatomy encountered
during surgery. Ranging from advanced imaging technologies for endoscopic surgery to routine hip
replacements, these new developments will have a tremendous impact on improving surgical morbidity
and mortality. According to Merril [1993, 1994], studies show that doctors are more likely to make errors
performing their first few to several dozen diagnostic and therapeutic surgical procedures. Merril claims
that operative risk could be substantially reduced by the development of a simulator that allows trans-
ference of skills from the simulation to the actual point of patient contact.
A preliminary test of Merril's claim was carried out by Taffinder and colleagues [1998]. Following the
observations by McDougall et al. [1996] that 2D representations of anatomic imagery are insufficient to
develop the eye-hand coordination of surgeons, Taffinder et al. conducted randomized, controlled studies
of psychomotor skills developed either through the use of the MIST VR Laparoscopic Simulator or
through a standard laparoscopic-surgery training course. They used the MIST VR Laparoscopic Simulator
to compare the psychomotor skills of experienced surgeons, surgeon trainees, and non-surgeons. When
task speed and the number of correctional movements were compared across subjects, experienced
surgeons surpassed trainees and non-surgeons. Taffinder and colleagues also noted that among trainees,
training on the VR system improved efficiency and reduced errors, but did not increase the speed of the
laparoscopic procedures.



To increase the realism of medical simulators, software tools have been developed to create “virtual
tissues” that reflect the physical characteristics of physiological tissues. This technology operates in real-
time using three-dimensional graphics, on a high speed computer platform. Recent advances in creating
virtual tissues have occurred as force feedback is incorporated into more VR systems and the computa-
tional overhead of finite-element analysis is reduced (see, for example, [Grabowski, 1998; McInerny and
Terzopoulos, 1996]). Bro-Nielsen and Cotin [1996] have been developing tissue models that incorporate
appropriate real-time volumetric deformation in response to applied forces. Their goal was to provide
surgeons with the same “hands-on” feel in VR that they experience in actual surgery. They used mass-
spring systems [Bro-Nielsen, 1997] to simplify finite-element analysis, significantly reducing computational overhead and thereby achieving near-real-time performance. While this work advances the state of the art considerably toward the real-time, hands-on objective, much more work remains to be done.
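The economy of the mass-spring approach is easy to demonstrate. The sketch below implements a three-node, three-spring patch with explicit integration; constants are illustrative, and a real tissue model would use many more nodes plus volumetric constraints.

```python
import numpy as np

positions = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])   # 3 nodes, 2D
velocities = np.zeros_like(positions)
springs = [(0, 1, 1.0), (1, 2, 1.118), (0, 2, 1.118)]        # (i, j, rest length)
K, MASS, DAMP, DT = 50.0, 0.1, 0.9, 0.01                     # illustrative constants

def step(applied=None):
    """One explicit-integration step; `applied` maps node index -> force vector."""
    forces = np.zeros_like(positions)
    for i, j, rest in springs:
        d = positions[j] - positions[i]
        length = np.linalg.norm(d)
        f = K * (length - rest) * d / length   # Hooke's law along the spring
        forces[i] += f
        forces[j] -= f
    for idx, f in (applied or {}).items():
        forces[idx] += np.asarray(f)
    velocities[:] = DAMP * (velocities + forces / MASS * DT)
    positions[:] += velocities * DT

for _ in range(100):                            # poke node 2, let the patch deform
    step(applied={2: (0.0, -2.0)})
print(np.round(positions, 3))
```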
In another study of the effectiveness of force-feedback in surgical simulations, Baur et al. [1998]
reported on the use of VIRGY, a VR endoscopic surgery simulator. VIRGY provides both visual and force
feedback to the user, through the use of the Pantoscope [Baumann, 1996]. Because the Pantoscope is a
remote-center-of-motion force-reflecting system, Baur et al. were able to hide the system beneath the
skin of the endoscope mannequin. Force reflection of tissue interactions, including probing and cutting,
was controlled by nearest neighbor propagation of look-up table values to circumvent the computational
overhead inherent in finite element analysis. Surgeons were asked to assess the tissue feel achieved through
this simulation.
Evaluation of another surgical simulator system was carried out by Weghorst et al. [1998]. This
evaluation involved the use of the Madigan Endoscopic Sinus Surgery (ESS) Simulator (see Edmond
et al. [1997] for a full description), developed jointly by the U.S. Army, Lockheed-Martin, and the Human
Interface Technology Laboratory at the University of Washington. The ESS Simulator, which was designed
to train otolaryngologists to perform paranasal endoscopic procedures, incorporates 3D models of the
naso-pharyngeal anatomy derived from the National Library of Medicine’s Virtual Human Data Base. It
employs two 6-DOF force-reflecting instruments developed by the Immersion Corporation to simulate
the endoscope and 11 surgical instruments used in paranasal surgery.
Weghorst and colleagues pitted three groups of subjects against each other in a test of the effectiveness
of the ESS Simulator: non-MDs, non-ENT MDs, and ENTs with various levels of experience. Training
aids, in the form of circular hoops projected onto the simulated nasal anatomy, identified the desired
endoscope trajectory; targets identified anatomical injection sites; and text labels identified anatomical
landmarks. Successive approximations to the actual task involved training first on an abstract, geometric
model of the nasal anatomy with superimposed training aids (Level 1); next on real nasal anatomy with
superimposed training aids (Level 2); and finally, on real nasal anatomy without training aids (Level 3).
Results of the evaluation indicated that, in general, ENTs performed better than non-ENTs on Level 1
and 2 tasks, and that among the ENTs, those with more ESS experience scored higher than those who
had performed fewer ESS procedures. The researchers also found that deviations from the optimum
endoscope path differentiated ENT and non-ENT subjects. In post-training questionnaires, ENTs rated
the realism of the simulation as high and confirmed the validity and usefulness of training on the ESS
Simulator.
While the ESS Simulator described above requires a 200 MHz Pentium PC and a 4-CPU SGI Onyx
with Reality Engine 2 graphics to synthesize the virtual nasal cavity, a current trend in surgical simulation
is towards lower-end simulation platforms. As computational power of PCs continues to escalate, sim-
ulations are increasingly being ported to or developed on relatively inexpensive computers. Tseng et al.
[1998] used an Intel-based PC as the basis of their laparoscopic surgery simulator. This device incorpo-
rates a 5-DOF force-reflecting effector, a monitor for viewing the end-effector, and a speech-recognition
system for changing viewpoint. The system detects collisions between the end effector and virtual tissues,
and is capable of producing tissue deformation through one of several models including a finite element
model, a displacement model, or a transmission line model. Computational power is provided by dual
Pentium Pro 200 MHz CPUs and a Realizm Graphics Accelerator Board.



Medical Education, Modeling, and Non-Surgical Training
Researchers at the University of California–San Diego are exploring the value of hybridizing elements of
VR, multimedia (MM), and communications technologies into a unified educational paradigm [Hoff-
man, 1994; Hoffman et al. 1995; Hoffman et al. 1997]. The goal is to develop powerful tools that extend
the flexibility and effectiveness of medical teaching and promote lifelong learning. To this end, they have
undertaken a multi-year initiative, named the “VR-MM Synthesis Project.” Based on instructional design
and user-need (rather than technology per se), they have planned a linked 3-computer array representing
the Data Communications Gateway, the Electronic Medical Record System, and the Simulation Environ-
ment. This system supports medical students, surgical residents, and clinical faculty running applications
ranging from full surgical simulation to basic anatomical exploration and review, all via a common
interface. The plan also supports integration of learning and telecommunications resources (such as
interactive MM libraries, online textbooks, databases of medical literature, decision support systems,
electronic mail, and access to electronic medical records).
The first application brought to fruition in the VR-MM Synthesis Project is an anatomical instruc-
tional-aid system called Anatomic VisualizeR [Hoffman et al. 1997] that uses anatomical models derived
from the Visible Human Project of the National Library of Medicine. Using the “Guided Lessons”
paradigm of Anatomic VisualizeR, the student enters a 3D workspace that contains a “Study Guide.”
The student uses the study guide to navigate through the 3D workspace, downloading anatomical models
of interest, as well as supporting resources like diagrammatic representations, photographic material,
and text. Manipulation of the anatomical models is encouraged to provide the student with an intuitive
understanding of anatomical relationships. The study guide also incorporates other instructional material
necessary for the completion of a given lesson.
Another advanced medical training system based on VR technology is VMET—the Virtual Medical
Trainer—jointly developed by the Research Triangle Institute and Advanced Simulation Corp. [Kizakevich
et al., 1998]. VMET was developed to provide training in trauma care. The VMET Trauma Patient
Simulator (VMET-TPS), which was designed to comply with civilian guidelines for both Basic Trauma
Life Support and Advanced Trauma Life Support, incorporates models of (1) physiological systems and
functions, (2) dynamic consequences of trauma to these physiological systems, and (3) medical interven-
tion effects on these physiological systems.
VMET provides a multisensory simulation of a trauma patient, including visible, audible, and behav-
ioral aspects of the trauma patient. To maintain cost-effectiveness, VMET is constrained to providing
“virtual spaces” at the site of several injuries, including a penetrating chest wound, a penetrating arm
wound, laceration to the arm, and a thigh contusion. Within the virtual spaces, the trainee can experience
the physiological effects of the injury and of his intervention in real time, as the physiological engine
updates the condition of the wound.
VMET-TPS was designed to train military personnel in trauma care, so it places emphasis on the
medical technology currently and foreseeably available to the military. However, VMET-TPS will find
applications in civilian teaching hospitals, as well. Partially because of the cost-constraint requirements
of the project, the price of VMET should be within reach of many trauma centers.
In another task-specific program at the Fraunhofer Institute, researchers have been developing an
ultrasonic probe training system based on VR components called the UltraTrainer [Stallkamp and Walper,
1998]. UltraTrainer uses a Silicon Graphics workstation and monitor to present images drawn from a
database of real ultrasonograms, a Polhemus tracker to determine the position of the user’s hand, a
joystick representation of the ultrasound probe, and an ultrasound phantom to provide probe posi-
tion/orientation feedback to the trainee. In using the UltraTrainer, the trainee moves the ultrasound
probe against the phantom, which is used in this system as a representation of the virtual patient’s body.
On the monitor, the trainee sees an image of the probe in relation to the phantom, showing the trainee
the position and orientation of the probe with respect to the virtual patient.
The Polhemus tracker determines and records the real-space position of the virtual ultrasound probe,
and this tracker information is then used to extract stored ultrasonograms from a database of images.



FIGURE 69.3 VR-based rehabilitation workstation.

An ultrasonic image of the appropriate virtual scanfield is presented on the monitor in accordance with
the position of the virtual probe on the phantom. As the probe is moved on the phantom, new virtual
scanfields are extracted from the database and presented on the monitor. The UltraTrainer is able to
present sequential virtual scanfields rapidly enough for the trainee to perceive the virtual ultrasonography
as occurring in real time.
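The pose-to-image lookup at the heart of such a system can be sketched simply. In the following illustration, the database keys, weighting, and file names are all invented; the idea is only that the tracked probe pose selects the nearest stored ultrasonogram for display.

```python
import math

# database: (x, y, z, tilt_degrees) -> stored ultrasound image (illustrative)
SCAN_DATABASE = {
    (0.00, 0.10, 0.00, 0): "liver_long_axis.png",
    (0.02, 0.10, 0.00, 15): "liver_oblique.png",
    (0.05, 0.12, 0.01, 0): "gallbladder.png",
}

def nearest_scanfield(probe_pose):
    """Return the stored ultrasonogram whose recorded pose best matches."""
    def distance(stored):
        pos = math.dist(probe_pose[:3], stored[:3])
        tilt = abs(probe_pose[3] - stored[3]) / 90.0   # crude angle weighting
        return pos + 0.05 * tilt
    return SCAN_DATABASE[min(SCAN_DATABASE, key=distance)]

pose = (0.018, 0.101, 0.002, 12)     # from the Polhemus tracker
print(nearest_scanfield(pose))       # -> "liver_oblique.png"
```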
Planned improvements to the UltraTrainer include a larger database of stored sonograms that can be
customized for different medical specialties, and a reduction in cost by porting the system to a PC and by
using a less expensive tracking system. These improvements should allow the UltraTrainer to be accessed
by a broader range of users and to move from the laboratory to the classroom, or even to the home.
Three groups of researchers have taken different approaches to developing VR-based training systems
for needle insertion, each based on feedback to a different sensory system. At Georgetown University,
Lathan and associates [1998] have produced a spine biopsy simulator based on visual feedback; a team
from Ohio State University and the Ohio Supercomputer Center have demonstrated an epidural needle
insertion simulator based on force feedback [Heimenz et al., 1998]; and Computer Aided Surgery in New
York has developed a blind needle biopsy placement simulator that uses 3D auditory feedback [Wenger
and Karron, 1998]. Each of these innovative systems draws on technological advances in virtual reality.
In the aggregate, they disclose that, given appropriate cognitive constructs, humans are capable of using
diverse sensory input to learn very demanding tasks. Imagine how effective virtual reality could be in
training complex tasks if all of the sensory information could be integrated in a single system. That is
currently one of the goals of the virtual reality community.

Anatomical Imaging and Medical Image Fusion


An anatomically keyed display with real-time data fusion is currently in use at NYU Medical Center’s
Department of Neurosurgery. The system allows both pre-operative planning and real-time tumor visualization [Kelly, 1994; Kall, 1994].

FIGURE 69.4 DataSuit™ for ergonomic and sports medicine applications.

The technology offers a technique for surgeons to plan and simulate
the surgical procedure beforehand in order to reach deep-seated or centrally located brain tumors. The
imaging method (volumetric stereotaxis) gathers, stores, and reformats imaging-derived, three-dimen-
sional volumetric information that defines an intracranial lesion (tumor) with respect to the surgical field.
Computer-generated information is displayed intraoperatively on computer monitors in the operating
room and on a “heads up” display mounted on the operating microscope. These images provide surgeons
with a CT (computed tomography) and MRI defined map of the surgical field area scaled to actual size
and location. This guides the surgeon in finding and defining the boundaries of brain tumors. The
computer-generated images are indexed to the surgical field by means of a robotics-controlled stereotactic
frame which positions the patient’s tumor within a defined targeting area. Simulated systems using VR
models are being advocated for high risk techniques, such as the alignment of radiation sources to treat
cancerous tumors. Where registration of virtual and real anatomical features is not an issue, other display
technologies can be employed. Two recent examples, based on totally different display modalities, are
described below.
Parsons and Rolland [1998] developed a non-intrusive display technique that projects images of virtual
inclusions such as tumors, cysts, or abscesses, or other medical image data into the surgeon’s field of
view on demand. The system relies on retroreflective material, perhaps used as a backing for the surgeon’s
glove or on a surgical instrument, to provide an imaging surface upon which can be presented images
extracted from 2D data sources, for example MRI scans. The surgeon can access the data by placing the
retroreflective screen, e.g., his gloved hand, in the path of an imaging beam. The image of the virtual
MRI scan, for example, will appear superimposed along the surgeon’s line of sight. Although image
registration is not currently possible with this real-time display system, the system does provide a means
for the clinician to access critical information without turning to view a monitor or other screen.
For displaying volumetric data, Senger [1998] devised a system based on the FakeSpace Immersive
Workbench™ and position-tracked StereoGraphics CrystalEyes™ that presents stereoscopic images
derived from CT, MRI, and the Visible Human data sets. The viewer can interact with the immersive
data structure through the use of a position/orientation sensed probe. The probe can be used first to
identify a region of interest within the anatomical data set and then to segment the data so that particular anatomical features are highlighted. The probe can also be used to “inject” digital stain into the data set
and to direct the propagation of the stain into structures that meet predetermined parameters, such as
voxel density, opacity, etc. Through the use of the probe, the vasculature, fascia, bone, etc. may be
selectively stained. This imaging system takes advantage of the recent advances in VR hardware and
software, the advent of programs such as the National Library of Medicine’s Visible Human Project, and
significant cost reductions in computational power to make it feasible for medical researchers to create
accurate, interactive 3D human anatomical atlases.
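The digital-stain operation is essentially a seeded region growing over the voxel data. The sketch below, with an invented volume and thresholds, floods stain from the probe's voxel into 6-connected neighbors whose density lies within the predetermined parameters.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)
volume = rng.integers(0, 80, size=(40, 40, 40))   # background densities
volume[10:30, 15:25, 15:25] = 200                 # a dense "vessel" region

def inject_stain(seed, lo, hi):
    """Flood-fill stain from seed into voxels with lo <= density <= hi."""
    stained = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        v = queue.popleft()
        if stained[v] or not (lo <= volume[v] <= hi):
            continue
        stained[v] = True
        z, y, x = v
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= c < s for c, s in zip(n, volume.shape)) and not stained[n]:
                queue.append(n)
    return stained

stain = inject_stain(seed=(20, 20, 20), lo=150, hi=255)
print(stain.sum(), "voxels stained")   # selectively highlights the dense structure
```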
Suzuki and colleagues [1998] have pushed virtual anatomical imaging a step further by including the
temporal domain. In 1987, Suzuki, Itou, and Okamura developed a 3D human atlas based on serial MRI
scans. Originally designed to be resident on a PC, the 3D Human Atlas has been upgraded during the last
decade to take advantage of major advances in computer graphics. The latest atlas is based on composite
super-conducting MRI scans taken at 4-mm pitch of young male and female subjects. In
addition to volume and surface rendering, this atlas is also capable of dynamic cardiac imaging.
Rendered on a Silicon Graphics Onyx Reality Engine, this atlas presents volumetric images at approx-
imately 10 frames per second. The user interface allows sectioning of the anatomical data along any plane,
as well as extracting organ surface information. Organs can be manipulated individually and rendered
transparent to enable the user to view hidden structures. By rendering the surface of the heart transparent,
it is possible to view the chambers of the heart in 4D at approximately 8 frames per second. Suzuki and
colleagues plan to distribute the atlas over the internet so that users throughout the world will be able
to access it.
Distribution of volumetric medical imagery over the Internet through the World Wide Web (WWW)
has already been examined by Hendin and colleagues [1998]. These authors relied on the Virtual Reality
Modeling Language (VRML) and VRMLscript to display and interact with the medical data sets, and they
used a Java graphical user interface (GUI) to preserve platform independence. They conclude that it is
currently feasible to share and interact with medical image data over the WWW.
Similarly, Silverstein et al. [1998] have demonstrated a system for interacting with 3D radiologic image
data over the Internet. While they conclude that the Web-based system is effective for transmitting useful
imagery to remote colleagues for diagnosis and pretreatment planning, it does not supplant the require-
ment that a knowledgeable radiologist examine the 2D tomographic images. This raises the question of
the effectiveness of the display and interaction with virtual medical images.
The perceived effectiveness of VR-based anatomical imaging systems similar to those described above,
has been assessed by Oyama and colleagues [1998]. They created virtual environments consisting of 3D
images of cancerous lesions and surrounding tissue, derived from CT or MRI scans, and presented these
environments to clinicians in one of three modes. The cancers were from the brain, breast, lung, stomach,
liver, and colon, and the presentation modes were surface-rendered images, real-time volume-
rendered images, or editable real-time volume-rendered images. Clinicians had to rate the effectiveness
of the three modes in providing them with information about (1) the location of the cancer, (2) the shape
of the cancer, (3) the shape of fine vessels, (4) the degree of infiltration of surrounding organs, and (5) the
relationship between the cancer and normal organs. In each of the five categories, the clinicians rated
the editable real-time volume-rendered images as superior to the other modes of presentation. From the
study, the authors conclude that real-time surface rendering, while applicable to many areas of medicine,
is not suitable for use in cancer diagnosis.

Ergonomics, Rehabilitation, and Disabilities


VR offers the possibility to better shape a rehabilitative program to an individual patient. Greenleaf et al.
[1994] have theorized that the rehabilitation process can be enhanced through the use of VR technology.
The group is currently developing a VR-based rehabilitation workstation that will be used to:
(1) decompose rehabilitation into small, incremental functional steps to facilitate the rehabilitation
process; and (2) make the rehabilitation process more realistic and less boring, thus enhancing motivation
and recovery of function.



FIGURE 69.5 VR system used as a disability solution.

DataGlove and DataSuit technologies were originally developed as control devices for VR, but through
improvements they are now being applied to the field of functional evaluation of movement and to
rehabilitation in a variety of ways. One system, for example, uses a glove device coupled with a force
feedback system—The Rutgers Master (RM-I)—to rehabilitate a damaged hand or to diagnose a range
of hand problems [Burdea et al., 1995; Burdea et al, 1997]. The rehabilitation system developed by Burdea
and colleagues uses programmable force feedback to control the level of effort required to accomplish
rehabilitative tasks. This system measures finger-specific forces while the patient performs one of several
rehabilitative tasks, including ball squeezing, DigiKey exercises, and peg insertion.
Another system under development—the RM-II—incorporates tactile feedback to a glove system to
produce feeling in the fingers when virtual objects are “touched” [Burdea, 1994]. In order to facilitate
accurate goniometric assessment, improvements to the resolution of the standard DataGlove have been
developed [Greenleaf, 1992]. The improved DataGlove allows highly accurate measurement of dynamic
range of motion of the fingers and wrist and is in use at research centers such as Johns Hopkins and
Loma Linda University to measure and analyze functional movements.
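Dynamic range-of-motion reporting from such glove data reduces to simple statistics over sampled joint angles, as the following illustrative sketch shows; the sample values are invented.

```python
samples = {                        # degrees, sampled during a grasp cycle (illustrative)
    "wrist_flexion": [5, 12, 25, 41, 38, 20, 8],
    "index_mcp":     [10, 30, 55, 72, 70, 48, 15],
}

def range_of_motion(series):
    """Dynamic ROM = max minus min angle observed over the task."""
    return max(series) - min(series)

for joint, series in samples.items():
    print(f"{joint}: ROM {range_of_motion(series)} deg "
          f"(min {min(series)}, max {max(series)})")
```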
Complementing rehabilitation evaluation systems are systems utilizing the same measurement technology
to provide ergonomic evaluation and injury prevention. Workplace ergonomics has already received a
boost from new VR technologies that enable customized workstations tailored to individual requirements
[Greenleaf, 1994]. In another area, surgical room ergonomics for medical personnel and patients is
projected to reduce the hostile and complicated interface among patients, health care providers, and
surgical spaces [Kaplan, 1994].
Motion Analysis Software (MAS) can assess and analyze upper extremity function from dynamic
measurement data acquired by the improved DataGlove. This technology not only provides highly
objective measurement, but also ensures more accurate methods for collecting data and performing quantitative analyses for physicians, therapists, and ergonomics specialists involved in job site evaluation
and design. The DataGlove/MAS technology is contributing to a greater understanding of upper extremity
biomechanics and kinesiology. Basic DataGlove technology coupled with VR media will offer numerous
opportunities for the rehabilitation sector of the medical market, not the least of which is the positive
implication for enhancing patient recovery by making the process more realistic and participatory.
One exciting aspect of VR technology is the inherent ability to enable individuals with physical
disabilities to accomplish tasks and have experiences otherwise denied them. The strategies currently
employed for disability-related VR research include head-mounted displays; position/orientation sensing;
tactile feedback; eye tracking; 3D sound systems; data input devices; image generation; and optics. For
physically disabled persons, VR will provide a new link to capabilities and experiences heretofore unat-
tainable, such as:
• An individual with cerebral palsy who is confined to a wheelchair can operate a telephone switch-
board, play hand ball, and dance [Greenleaf, 1994] within a virtual environment.
• Patients with spinal cord injury or CNS dysfunction can relearn limb movements through adaptive
visual feedback in virtual environments [Steffin, 1997].
• Disabled individuals can be in one location while their “virtual being” is in a totally different
location. This opens all manner of possibilities for participating in work, study, or leisure activities
anywhere in the world without leaving home.
• Physically disabled individuals could interact with real world activities through robotic devices
they control from within the virtual world.
• Blind persons could practice and plan in advance navigating through or among buildings if the
accesses represented in a virtual world were made up of 3-dimensional sound images and tactile
stimuli [Max and Gonzalez, 1997].
One novel application of VR technology to disabilities has been demonstrated by Inman [1994a,b;
1996] who trained handicapped children to operate wheelchairs without the inherent dangers of such
training. Inman trained children with cerebral palsy or orthopedic impairments to navigate virtual worlds
that contained simulated physical conditions that they would encounter in the real world. By training in
a virtual environment, the children acquired an appreciation of the dynamics of their wheelchairs and
of the dangers that they might encounter in the real world, but in a safe setting. The safety aspect of VR
training is impossible to achieve with other training technologies.
VR will also enable persons with disabilities to experience situations and sensations not accessible in a
physical world. Learning and working environments can be tailored to specific needs with VR. For example,
since the virtual world can be superimposed over the real world, a learner could move progressively from
a highly supported mode of performing a task in the virtual world, through to performing it unassisted
in the real world. One project, “Wheelchair VR” [Trimble, 1992], is a highly specialized architectural
software being developed to aid in the design of barrier-free building for persons with disabilities.
VR also presents unique opportunities for retraining persons who have incurred neuropsychological
deficits. The cognitive deficits associated with neurophysiological insults are often difficult to assess and
for this and other reasons, not readily amenable to treatment. Several groups have begun to investigate
the use of VR in the assessment and retraining of persons with neuropsychological cognitive impairments.
Pugnetti and colleagues [1995a,b; 1996] have begun to develop VR-based scenarios based on standardized
tests of cognitive function, for example card-sorting tasks. Because the tester has essentially absolute control
over the cognitive environment in which the test is administered, the effects of environmental distractors
can be both assessed and controlled. Initial tests by the Italian group suggest that VR-based cognitive tests
offer great promise for both evaluation and retraining of brain-injury-associated cognitive deficits.
Other groups have been developing cognitive tests based on mental rotation of virtual objects [Buckwalter and Rizzo, 1997; Rizzo and Buckwalter, 1997] or spatial memory of virtual buildings [Attree et al.,
1996]. Although VR technology shows promise in both the assessment and treatment of persons with
acquired cognitive deficits, Buckwalter and Rizzo find that the technology is currently too cumbersome,
too unfamiliar, and has too many side effects to allow many patients to benefit from its use.



Attree and associates, however, have begun to develop a taxonomy of virtual reality applications in
cognitive rehabilitation. Their study examined the differences between active and passive navigation of
virtual environments. They report that active navigation improves spatial memory of the route through
the virtual environment, while passive navigation improves object recognition for landmarks within the
virtual environment. This report sends a clear message that VR is capable of improving cognitive function,
but that much more work is required to understand how the technology interacts with the human psyche.
One researcher who is actively pursuing this understanding is Dorothy Strickland [1995, 1996], who
has developed a virtual environment for working with autistic children. Strickland’s premise is that VR
technology allows precise control of visual and auditory stimuli within virtual environments. Frequently,
autistic children find the magnitude of environmental stimuli overwhelming, so it is difficult for them
to focus. Strickland has developed a sparse virtual environment into which stimuli can be introduced as
the autistic child becomes capable of handling more environmental stimulation. Essentially, she produces
successive approximations to a real environment at a rate that the autistic child can tolerate. Her work
has shown that autistic children can benefit greatly by exposure to successively richer environments and
that the advances that they make in virtual environments transfer to the real world.
Max and Burke [1998], also using VR with autistic children, report that contrary to expectations, their
patients were able to focus on the virtual environment, rather than “fidgeting” as they are prone to do
in other environments. These researchers also found that in the absence of competing acoustic stimuli,
autistic children were drawn to loud music and shunned soft choral music. These findings may provide
important insights into the etiology of autism that could not be observed in the “real world” setting.

Telesurgery and Telemedicine


Telepresence is the “sister field” of VR. Classically defined as the ability to act and interact in an off-site
environment by making use of VR technology, telepresence is emerging as an area of development in its
own right. Telemedicine (the telepresence of medical experts) is being explored as a way to reduce the
cost of medical practice and to bring expertise into remote areas [Burrow, 1994; Rosen, 1994; Rovetta
et al., 1997; Rissam et al., 1998].
Telesurgery is a fertile area for development. On the verge of realization, telesurgery (remote surgery)
will help resolve issues that can complicate or compromise surgery, among them:
• The patient is too ill or injured to be moved for surgery.
• A specialist surgeon is located at some distance from the patient requiring specialized attention.
• Accident victims may have a better chance of survival if immediate, on-the-scene surgery can be
performed remotely by an emergency room surgeon at a local hospital.
• Soldiers wounded in battle could undergo surgery on the battlefield by a surgeon located elsewhere.
The surgeon really does operate—on flesh, not a computer animation. And while the distance aspect
of remote surgery is a provocative one, telepresence is proving an aid in non-remote surgery as well. It
can help surgeons gain dexterity over conventional methods of manipulation. This is expected to be
particularly important in laparoscopic surgery. For example, suturing and knot-tying will be as easy to
see in laparoscopic surgery as it is in open surgery because telepresence enables the surgery to look and
feel like open surgery.
As developed at SRI International [Satava 1992; Hill et al., 1995, 1998], telepresence not only offers a
compelling sense of reality for the surgeon, but also allows the surgeon to perform the surgery according
to the usual methods and procedures. There is nothing new to learn. Hand motions are quick and precise.
The visual field, instrument motion, and force feedback can all be scaled to make microsurgery easier
than it would be if the surgeon were at the patient’s side. While the current technology has been
implemented in prototype, SRI and Telesurgical Corporation, based in Redwood City, California, are
collaborating to develop a full system based on this novel concept.
The system uses color video cameras to image the surgical field in stereo, which the remote surgeon
views with stereo shutter glasses. The remote surgeon grasps a pair of force-reflecting 6-DOF remote manipulators linked to a slaved pair of end effectors in the surgical field. Coordinated visual, auditory,
and haptic feedback from the end effectors provides the remote surgeon with a compelling sense of
presence at the surgical site.
Because the remote manipulators and end effectors are only linked electronically, the gain between
them can be adjusted to provide the remote surgeon with microsurgically precise movement of the slaved
instruments with relatively gross movement of the master manipulators. Researchers at SRI have dem-
onstrated the microsurgery capabilities of this system by performing microvascular surgery on rats.
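The motion- and force-scaling idea can be stated in a few lines. The gains in the sketch below are invented for illustration; in the system described above they would be tuned to the procedure.

```python
MOTION_GAIN = 0.1   # 10 cm at the master -> 1 cm at the slave instrument (illustrative)
FORCE_GAIN = 5.0    # 0.1 N at the tissue -> 0.5 N felt at the master (illustrative)

def master_to_slave(master_delta_mm):
    """Scale the surgeon's hand motion down for microsurgical precision."""
    return [d * MOTION_GAIN for d in master_delta_mm]

def slave_to_master_force(sensed_force_n):
    """Scale tissue-interaction forces up for the surgeon's hand."""
    return [f * FORCE_GAIN for f in sensed_force_n]

print(master_to_slave([12.0, -4.0, 1.5]))        # -> [1.2, -0.4, 0.15] mm
print(slave_to_master_force([0.02, 0.0, 0.05]))  # -> [0.1, 0.0, 0.25] N
```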
Again, because only an electronic link exists between manipulator and end effector, separation of the
remote surgeon and the surgical field can be on a global scale. To demonstrate this global remote surgery
capability, Rovetta and colleagues [1998] established Internet and ISDN links between Monterey,
California, and Milan, Italy, and used them to control a surgical robot remotely. By manipulating
controls in Monterey, Professor Rovetta was able to perform simulated biopsies on liver, prostate, and
breast tissue in his laboratory in Milan. Using the Internet and an ISDN network together allowed
large amounts of image data and robotic control signals to be transmitted in a timely manner.
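The division of labor between the two links can be pictured as follows. This schematic Python fragment (not Rovetta's actual software) sends small, time-critical robot commands as datagrams on one channel, while bulky image frames travel on a separate reliable stream; the addresses, ports, and packet formats are invented for illustration.

    import socket
    import struct

    CONTROL_ADDR = ("127.0.0.1", 9000)   # stand-in for the low-latency control link
    IMAGE_ADDR = ("127.0.0.1", 9001)     # stand-in for the high-bandwidth image link

    def send_control(sock: socket.socket, dx: float, dy: float, dz: float) -> None:
        # One small 12-byte datagram per command keeps control latency minimal.
        sock.sendto(struct.pack("!fff", dx, dy, dz), CONTROL_ADDR)

    def send_image(frame: bytes) -> None:
        # Image data tolerates latency but not loss: use a reliable stream and
        # prefix each frame with its length so the receiver can delimit frames.
        with socket.create_connection(IMAGE_ADDR) as conn:
            conn.sendall(struct.pack("!I", len(frame)) + frame)

    control_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_control(control_sock, 0.5, 0.0, -0.2)   # nudge the instrument (mm)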
Prior to the telesurgery demonstration between Monterey and Milan, Rovetta and colleagues [1997]
established a link between European hospitals and hospitals in Africa as part of the Telehealth In Africa
Program of the European Collaboration Group for Telemedicine. Among the objectives of this program
was a demonstration of the feasibility of transmitting large amounts of medical data, including X-ray and
other diagnostic images, case histories, and diagnostic analyses. Once established, the data link could be
used for teleconsulting and teletraining.
Teleconsulting is becoming commonplace. Within the last year, the U.S. Department of Veterans Affairs
has established a network to link rural and urban Vet Centers. This network, which will initially link
20 Vet Centers, will provide teleconsultation for chronic disease screening, trauma outreach, and
psychosocial care. In Europe, where telemedicine is making great strides, the Telemedical Information Society
has been established. (See [Marsh, 1998] for additional information.) At the International Telemedical
Information Society '98 Conference held in Amsterdam in April 1998, a number of important issues
pertaining to teleconsultation were addressed. Among these were the choice of an appropriate network
for telemedicine (e.g., the World Wide Web or ATM) and questions of privacy and liability.
Privacy, that is, the security of transmitted medical information, will be a continuing problem as hackers
hone their ability to circumvent Internet security [Radesovich, 1997]. Aslan and colleagues [1998] have
examined the practicality of using 128-bit encryption to secure the transmission of medical images.
Using software called Photomailer™, Aslan at Duke Medical Center and his colleagues at Boston
University Medical Center and Johns Hopkins encrypted 60 medical images and transmitted them over
the Internet. They then attempted to access the encrypted images on a receiving computer without
Photomailer™ installed. They reported that the scheme was completely successful, with only insignificant
increases in processing time when the encrypting software was installed.
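A minimal sketch of the idea, assuming 128-bit symmetric encryption of the image bytes before transmission: Photomailer's internals are not described here, so the example below uses the Fernet recipe (AES-128 in CBC mode with an HMAC) from the third-party Python cryptography package, and the image bytes are a stand-in.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # shared with the receiver out-of-band
    cipher = Fernet(key)

    plaintext = b"\x00" * 512          # stand-in for the bytes of a medical image file

    token = cipher.encrypt(plaintext)  # ciphertext, safe to send over the open Internet

    # Without the key (i.e., without the decrypting software installed), the token
    # is computationally infeasible to read; with the key, recovery is exact.
    assert cipher.decrypt(token) == plaintext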

Behavioral Evaluation and Intervention
VR technology has been successfully applied to a number of behavioral conditions over the last few years.
Among the greatest breakthroughs attained through the use of this technology is the relief of akinesia, a
symptom of Parkinsonism wherein a patient has progressively greater difficulty initiating and sustaining
walking. The condition can be mitigated by treatment with drugs such as L-dopa, a precursor of the
neurotransmitter dopamine, but usually not without unwanted side effects. Now, collaborators
at the Human Interface Technology Laboratory at the University of Washington, along with the
University's Department of Rehabilitation Medicine and the San Francisco Parkinson's Institute, are
using virtual imagery to simulate an effect called kinesia paradoxa, the triggering of normal walking
behavior in akinetic Parkinson's patients [Weghorst et al., 1994].
Using a commercial, field-multiplexed, “heads-up” video display, the research team has developed an
approach that elicits near-normal walking by presenting collimated virtual images of objects and abstract
visual cues moving through the patient’s visual field at speeds that emulate normal walking. The
combination of image collimation and animation speed reinforces the illusion of space-stabilized visual
cues at the patient’s feet. This novel, VR-assisted technology may also prove to be therapeutically useful
for other movement disorders.
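The geometry of the cue animation can be sketched as follows. In this illustrative Python fragment (the stripe spacing and gait speed are invented parameters, not the published ones), the virtual stripes are fixed in world space at the patient's feet, and scrolling the rendered pattern backward at walking speed is what makes each stripe appear stationary on the floor as the patient advances.

    STRIPE_SPACING_M = 0.45   # assumed spacing, roughly one stride apart
    WALK_SPEED_M_S = 1.2      # assumed comfortable walking speed
    FRAME_RATE_HZ = 60.0

    def stripe_offsets(t, n_stripes=5):
        """Distances (m) ahead of the patient of the nearest stripes at time t.
        Scrolling the whole pattern backward at walking speed keeps each stripe
        fixed in space while the patient advances over it."""
        phase = (WALK_SPEED_M_S * t) % STRIPE_SPACING_M
        return [i * STRIPE_SPACING_M - phase for i in range(1, n_stripes + 1)]

    # One second of frames for the heads-up display renderer.
    for frame in range(int(FRAME_RATE_HZ)):
        offsets = stripe_offsets(frame / FRAME_RATE_HZ)
        # ...render collimated stripe images at these distances...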
In the area of phobia intervention, Lamson [Lamson, 1994, 1997; Lamson and Meisner, 1994] has
investigated the diagnostic and treatment possibilities of VR immersion for anxiety, panic, and fear of
heights. By immersing both patients and controls in computer-generated situations, the researchers
were able to expose subjects to anxiety-provoking situations (such as jumping from a height) in a
controlled manner. Experimental results indicated significant habituation and desensitization, and the
approach appears clinically useful.
The effectiveness of VR as a treatment for acrophobia has been critically examined by Rothbaum and
associates [1995a,b]. They used a procedure in which phobic patients were immersed in virtual
environments that could be altered to present graded fear-inducing situations. Patient responses were monitored
and used to modify the virtual threats accordingly. Rothbaum et al. report that the ability to present
graded exposure to aversive environments through the use of VR technology provides a very effective
means of overcoming acrophobia. Transfer to the real world was quite good.
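The closed loop that Rothbaum and associates describe lends itself to a simple sketch. In this hypothetical Python fragment, the exposure level (virtual height, in floors) is graded up or down from the patient's self-reported Subjective Units of Distress (SUDS) rating; the thresholds and step sizes are invented for illustration and are not the published protocol.

    def next_exposure_level(level, suds):
        """Grade the virtual height (floors above ground) from a 0-100 SUDS
        rating reported by the immersed patient."""
        if suds >= 70:                 # too distressing: back off one floor
            return max(1.0, level - 1.0)
        if suds <= 30:                 # habituated: present a harder situation
            return level + 1.0
        return level                   # hold steady until anxiety declines

    level = 2.0                        # begin two virtual floors up
    for rating in [80, 65, 40, 25, 20]:   # mock series of patient reports
        level = next_exposure_level(level, rating)
        print(f"SUDS {rating:3d} -> exposure level {level:.0f} floors")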
The use of VR technology in the treatment of phobias has expanded in recent years to include fear of
flying [Rothbaum et al., 1996; North et al., 1997], arachnophobia [Carlin et al., 1997], and sexual
dysfunction [Optale et al., 1998]. In each of these areas, patients attained a significant reduction in
their symptoms through graded exposure to fear-provoking virtual environments. In the treatment of
arachnophobia, Carlin and associates devised a clever synthesis of real and virtual environments to
desensitize patients to spiders. Patients viewed virtual spiders while simultaneously touching the
simulated fuzziness of a spider; the tactile aspect of the exposure was produced by having the patient
touch a toupée.
Before leaving the subject of VR treatment of phobias, a word of caution is necessary. Bloom [1997]
has examined the application of VR technology to the treatment of psychiatric disorders and sounds a
note of warning. Bloom notes the advances made in the treatment of anxiety disorders, but also points
out that there are examples of physiological and psychological side effects from immersion in VR. While
the major side effects tend to be physiological, mainly simulator sickness and Sopite Syndrome (malaise,
lethargy), psychological maladaptations, such as “...unpredictable modifications in perceptions of social
context...” and “...fragmentation of self” [Bloom, 1997, p. 12] reportedly occur.

69.5 Summary
VR tools and techniques are being developed rapidly in the scientific, engineering, and medical areas.
This technology will directly affect medical practice. Computer simulation will allow physicians to
practice surgical procedures in a virtual environment in which there is no risk to patients, and where
mistakes can be recognized and rectified immediately by the computer. Procedures can be reviewed from
new, insightful perspectives that are not possible in the real world.
The innovators in medical VR will be called upon to refine technical efficiency and increase physical
and psychological comfort and capability, while keeping an eye on reducing costs for health care. The
mandate is complex, but like VR technology itself, the possibilities are very exciting. While the
possibilities—and the need—for medical VR are immense, realizing them will require diligent,
cooperative efforts among technology developers, medical practitioners, and medical consumers to
establish where future requirements and demand will lie.

Defining Terms
For an excellent treatment of the state of the art of VR and its taxonomy, see the ACM SIGGRAPH
publication “Computer Graphics,” Vol. 26, #3, August 1992. It covers the U.S. National Science
Foundation's invitational workshop on its Interactive Systems Program, March 23-24, 1992, which
served to identify and recommend future research directions in the area of virtual environments. A more
in-depth exposition of VR taxonomy can be found in the MIT Press journal “Presence,” Vol. 1, #2.

References
Aslan, P., Lee, B., Kuo, R., Babayan, R.K., Kavoussi, L.R., Pavlin, K.A., and Preminger, G.M. (1998).
Secured Medical Imaging Over the Internet. Medicine Meets Virtual Reality: Art, Science, Technology:
Healthcare (R)Evolution. (pp. 74-78). IOS Press, San Diego, CA.
Attree, E.A., Brooks, B.M., Rose, F.D., Andrews, T.K., Leadbetter, A.G., and Clifford, B.R., (1996). Memory
Processes and Virtual Environments: I Can’t Remember What Was There, But I Can Remember
How I Got There. Implications for Persons with Disabilities. Proceedings of the European Conference
on Disability, Virtual Reality, and Associated Technology. (pp. 117-121).
Begault, D.R. and Wenzel, E.M. (1992). Techniques and Applications for Binaural Sound Manipulation in
Human-Machine Interfaces. International Journal of Aviation Psychology, 2, 1-22.
Bloom, R.W. (1997). Psychiatric Therapeutic Applications of Virtual Reality Technology (VRT): Research
Prospectus and Phenomenological Critique. Medicine Meets Virtual Reality: Global Healthcare Grid
(pp. 11-16). IOS Press, San Diego, CA.
Bolas, M.T. (1994, January). Human Factors in the Design of an Immersive Display. IEEE Computer
Graphics and Applications, 14(1), 55-59.
Buckwalter, J.G. and Rizzo, A.A. (1997). Virtual Reality and the Neuropsychological Assessment of Persons
with Neurologically Based Cognitive Impairments. Medicine Meets Virtual Reality: Global Health-
care Grid (pp. 17-21). IOS Press, San Diego, CA.
Burdea, G., Zhuang, J., Roskos, E., Silver, D., and Langrana, N. (1992). A Portable Dextrous Master with
Force Feedback. Presence, 1(1), 18-28.
Burdea, G., Goratowski, R., and Langrana, N. (1995). Tactile Sensing Glove for Computerized Hand
Diagnosis. Journal of Medicine and Virtual Reality, 1, 40-44.
Burdea, G., Deshpande, S., Popescu, V., Langrana, N., Gomez, D., DiPaolo, D., and Kanter, M. (1997).
Computerized Hand Diagnostic/Rehabilitation System Using a Force Feedback Glove. Medicine
Meets Virtual Reality: Global Healthcare Grid (pp. 141-150). IOS Press, San Diego, CA.
Burrow, M. (1994). A Telemedicine Testbed for Developing and Evaluating Telerobotic Tools for Rural
Health Care. Medicine Meets Virtual Reality II: Interactive Technology & Healthcare: Visionary
Applications for Simulation Visualization Robotics (pp. 15-18). Aligned Management Associates, San
Diego, CA.
Caldwell, G. and Gosney, C. (1993). Enhanced Tactile Feedback (Tele-taction) Using a Multi-Functional
Sensory System. Proceedings of the IEEE International Conference on Robotics and Automation, 955-960.
Carlin, A.S., Hoffman, H.G., and Weghorst, S. (1997). Virtual Reality and Tactile Augmentation in the
Treatment of Spider Phobia: A Case Study. Behavior Research and Therapy, 35(2), 153-158.
Durlach, N.I., Shinn-Cunningham, B.G. and Held, R.M. (1993, Spring). Supernormal Auditory Local-
ization. I. General Background. Presence: Teleoperators and Virtual Environments. 2(2), 89-103.
Edmond, C.V., Heskamp, D., Sluis, D., Stredney, D., Sessanna, D., Weit, G., Yagel, R., Weghorst, S.,
Openheimer, P., Miller, J., Levin, M., and Rosenberg, L. (1997). ENT Endoscopic Surgical Training
Simulator. Medicine Meets Virtual Reality: Global Healthcare Grid (pp. 518-528). IOS Press, San
Diego, CA.
Grabowski, H.A. (1998). Generating Finite Element Models from Volumetric Medical Images. Medicine
Meets Virtual Reality: Art, Science, Technology: Healthcare (R)Evolution. (pp. 355-356). IOS Press,
San Diego, CA.
Greenleaf, W. (1992). DataGlove, DataSuit and Virtual Reality. Virtual Reality and Persons with Disabilities:
Proceedings of the 7th Annual Conference; (pp 21-24). March 18-21, 1992. Los Angeles, CA.
Northridge, CA. California State University. 1992. Available from: Office of Disabled Student Ser-
vices, California State University, Northridge. 18111 Nordhoff Street—DVSS. Northridge, CA 91330.
Greenleaf, W.J. (1994). DataGlove and DataSuit: Virtual Reality Technology Applied to the Measurement
of Human Movement. Medicine Meets Virtual Reality II: Interactive Technology & Healthcare:
Visionary Applications for Simulation Visualization Robotics (pp. 63- 69). Aligned Management
Associates, San Diego, CA.
Hendin, O., John, N.W., and Shochet, O. (1998). Medical Volume Rendering over the WWW Using
VRML and JAVA. Medicine Meets Virtual Reality: Art, Science, Technology: Healthcare (R)Evolution.
(pp. 34-40). IOS Press, San Diego, CA.
Hendrix, C and Barfield, W. (1996). The Sense of Presence within Audio Virtual Environments. Presence,
5, 295-301.
Hiemenz, L., Stredney, D., and Schmalbrock, P. (1998). Development of a Force-Feedback Model for an
Epidural Needle Insertion Simulator. Surgical Simulation. Medicine Meets Virtual Reality: Art,
Science, Technology: Healthcare (R)Evolution. (pp. 272-277). IOS Press, San Diego, CA.
Hill, J. W., Jensen, J.F, Green, P.S., and Shah, A.S. (1995). Two-Handed Tele-Presence Surgery Demon-
stration System. Proc ANS Sixth Annual Topical Meeting on Robotics and Remote Systems, 2, 713-720.
Monterey, CA.
Hill, J.W., Holst, P.A., Jensen, J.F., Goldman, J., Gorfu, Y., and Ploeger, D.W. (1998). Telepresence Interface
with Applications to Microsurgery and Surgical Simulation. Medicine Meets Virtual Reality: Art,
Science, Technology: Healthcare (R)Evolution. (pp. 96-102). IOS Press, San Diego, CA.
Hoffman, H.M. (1994). Virtual Reality and the Medical Curriculum: Integrating Extant and Emerging
Technologies. Medicine Meets Virtual Reality II: Interactive Technology & Healthcare: Visionary
Applications for Simulation Visualization Robotics (pp. 73-76). Aligned Management Associates, San
Diego, CA.
Hoffman, H.M., Irwin, A.E., Ligon, R., Murray, M., and Tohsaku, C. (1995). Virtual Reality-Multimedia
Synthesis: Next Generation Learning Environments for Medical Education. J. Biocomm. 22 (3), 2-7.
Hoffman, H.M., Murray, M., Danks, M., Prayaga, R., Irwin, A., and Vu, D. (1997). A Flexible and
Extensible Object-Oriented 3D Architecture: Application in the Development of Virtual Anatomy
Lessons. Medicine Meets Virtual Reality: Global Healthcare Grid. (pp. 461-466). IOS Press, San
Diego, CA.
Holler, E. and Breitwieser, H. (1994). Telepresence Systems for Application in Minimally Invasive Surgery.
Medicine Meets Virtual Reality II: Interactive Technology & Healthcare: Visionary Applications for
Simulation Visualization Robotics (pp. 77-80). Aligned Management Associates, San Diego, CA.
Hon, D. (1994). Ixion’s Laparoscopic Surgical Skills Simulator. Medicine Meets Virtual Reality II:
Interactive Technology & Healthcare: Visionary Applications for Simulation Visualization Robotics
(pp. 81-83). Aligned Management Associates, San Diego, CA.
Inman, D.P. (1994a). Use of Virtual Reality to Teach Students with Orthopedic Impairments to Drive
Motorized Wheelchairs. Paper presented at the Fourth Annual Fall Conference of the Oregon
Department of Education, Office of Special Education, Portland, Oregon.
Inman, D.P. (1994b). Virtual Reality Wheelchair Drivers’ Training for Children with Cerebral Palsy. Paper
presented to the New York Virtual Reality Expo ‘94, New York, NY.
Inman, D.P. (1996). Use of Virtual Reality and Computerization in Wheelchair Training. Paper presented
to Shriner’s Hospital for Crippled Children, Portland, OR.
Ino, S., Shimizu, S., Odagawa, T., Sato, M., Takahashi, M., Izumi, T., and Ifukube, T. (1993). A Tactile
Display for Presenting Quality of Materials by Changing the Temperature of Skin Surface. Proceed-
ings of the Second IEEE International Workshop on Robot and Human Communication. (pp. 220-224).
Jacobs, L. (1994). Garage Virtual Reality. Sams Publications, Indianapolis, IN.
Johnson, A.D. (1994). Tactile Feedback Enhancement to Laparoscopic Tools. [abstract]. In Medicine Meets
Virtual Reality II: Interactive Technology & Healthcare: Visionary Applications for Simulation Visu-
alization Robotics (p. 92). Aligned Management Associates, San Diego, CA.
Kall, B.A., Kelly, P.J., Stiving, S.O. and Goerss, S.J. (1994). Integrated Multimodality Visualization in
Stereotactic Neurologic Surgery. Medicine Meets Virtual Reality II: Interactive Technology & Health-
care: Visionary Applications for Simulation Visualization Robotics (pp. 93-94). Aligned Management
Associates, San Diego, CA.
Kaplan, K.L. (1994). Project Description: Surgical Room of the Future. Medicine Meets Virtual Reality II:
Interactive Technology & Healthcare: Visionary Applications for Simulation Visualization Robotics
(pp. 95-98). Aligned Management Associates, San Diego, CA.
Kelly, P.J. (1994). Quantitative Virtual Reality Surgical Simulation, Minimally Invasive Stereotactic Neu-
rosurgery and Frameless Stereotactic Technologies. Medicine Meets Virtual Reality II: Interactive
Technology & Healthcare: Visionary Applications for Simulation Visualization Robotics (pp. 103-108).
Aligned Management Associates, San Diego, CA.
Kizakevich, P.N., McCartney, M.L., Nissman, D.B., Starko, K., and Smith, N.T. (1998). Virtual Medical
Trainer. Medicine Meets Virtual Reality: Art, Science, Technology: Healthcare (R)Evolution.
(pp. 309-315). IOS Press, San Diego, CA.
Kreuger, M. (1994). Olfactory Stimuli in Medical Virtual Reality Applications. Proceedings: Virtual Reality
in Medicine—The Cutting Edge. (pp. 32-33). Sig-Advanced Applications, Inc., New York.
Kreuger, M. (1995a). Olfactory Stimuli in Virtual Reality for Medical Applications. Interactive Technology
and the New Paradigm for Healthcare. (pp. 180-181). IOS Press, Washington, D.C.
Kreuger, M. (1995b). Olfactory Stimuli in Virtual Reality Medical Training. Proceedings: Virtual Reality
in Medicine and Developers’ Expo.
Kuhnapfel, U.G. (1994). Realtime Graphical Computer Simulation for Endoscopic Surgery. Medicine
Meets Virtual Reality II: Interactive Technology & Healthcare: Visionary Applications for Simulation
Visualization Robotics (pp. 114-116). Aligned Management Associates, San Diego, CA.
Lamson, R. (1994). Virtual Therapy of Anxiety Disorders. CyberEdge Journal, 4(2), 1-28.
Lamson, R. (1995). Clinical Application of Virtual Therapy to Psychiatric Disorders. Medicine Meets
Virtual Reality III. IOS Press, San Diego, CA.
Lamson, R. (1997). Virtual Therapy. Polytechnic International Press, Montreal.
Lamson, R., and Meisner, M. (1994). The Effects of Virtual Reality Immersion in the Treatment of Anxiety,
Panic, & Phobia of Heights. Virtual Reality and Persons with Disabilities: Proceedings of the 2nd
Annual International Conference; 1994 June 8-10. San Francisco, CA. Sponsored by the Center on
Disabilities, California State University, Northridge. 18111 Nordhoff Street—DVSS. Northridge, CA.
Lathan, C., Cleary, K., and Greco, R. (1998). Development and Evaluation of a Spine Biopsy Simulator.
Surgical Simulation. Medicine Meets Virtual Reality: Art, Science, Technology: Healthcare (R)Evolu-
tion. (pp. 375-376). IOS Press, San Diego, CA.
Loftin, R.B., Ota, D., Saito, T. and Voss, M. (1994). A Virtual Environment for Laparoscopic Surgical
Training. Medicine Meets Virtual Reality II: Interactive Technology & Healthcare: Visionary Applica-
tions for Simulation Visualization Robotics (pp. 121-123). Aligned Management Associates, San
Diego, CA.
Marsh, A. (1998). Special Double Issue: The Telemedical Information Society. Future Generation Com-
puter Systems, 14 (1-2). Elsevier, Amsterdam.
Max, M.L. and Burke, J.C. (1997). Virtual Reality for Autism Communication and Education, with
Lessons for Medical Training Simulators. Medicine Meets Virtual Reality: Global Healthcare Grid.
(pp. 46-53). IOS Press, San Diego, CA.
Max, M.L. and Gonzalez, J.R. (1997). Blind Persons Navigating in Virtual Reality (VR): Hearing and
Feeling Communicates “Reality.” Medicine Meets Virtual Reality: Global Healthcare Grid.
(pp. 54-59). IOS Press, San Diego, CA.
McGovern, K.T., and McGovern, L.T. (1994, March). Virtual Clinic: A Virtual Reality Surgical Simulator.
Virtual Reality World, 2(2), 41-44.
McInerney, T. and Terzopoulos, D. (1996). Deformable Models in Medical Image Analysis. Medical Image
Analysis 1(2).
Merril, J. R. (1993, November). Surgery on the Cutting Edge. Virtual Reality World, 1(3-4), 34-38.
Merril, J. R. (1994). Presentation Material: Medicine Meets Virtual Reality II. [abstract] Medicine Meets
Virtual Reality II: Interactive Technology & Healthcare: Visionary Applications for Simulation Visu-
alization Robotics (pp. 158-159). Aligned Management Associates, San Diego, CA.
Minsky, M. and Lederman, S.J. (1996). Simulated Haptic Textures: Roughness. Symposium on Haptic
Interfaces for Virtual Environment and Teleoperator Systems. ASME International Mechanical
Engineering Congress and Exposition. Proceedings of the ASME Dynamic Systems and Control
Division, DSC-Vol. 58, 451-458.
North, M.M., North, S.M., and Coble, J.R. (1997). Virtual Reality Therapy for Fear of Flying. American
Journal of Psychiatry, 154 (1), 130.
Optale, G., Munari, A., Nasta, A., Pianon, C., Verde, J.B., and Viggiano, G. (1998). Virtual Reality
Techniques in the Treatment of Impotence and Premature Ejaculation. Medicine Meets Virtual Reality:
Art, Science, Technology: Healthcare (R)Evolution. (pp. 186-192). IOS Press, San Diego, CA.
Oyama, H., Wakao, F., and Takahira, Y. (1998). The Clinical Advantages of Editable Real-Time Volume
Rendering in a Medical Virtual Environment: VolMed. Medicine Meets Virtual Reality: Art, Science,
Technology: Healthcare (R)Evolution. (pp. 341-345). IOS Press, San Diego, CA.
Parsons, J., and Rolland, J.P. (1998). A Non-Intrusive Display Technique for Providing Real-Time Data
within a Surgeon’s Critical Area of Interest. Medicine Meets Virtual Reality: Art, Science, Technology:
Healthcare (R)Evolution. (pp. 246-251). IOS Press, San Diego, CA.
Peifer, J. (1994). Virtual Environment for Eye Surgery Simulation. Medicine Meets Virtual Reality II:
Interactive Technology & Healthcare: Visionary Applications for Simulation Visualization Robotics
(pp. 166- 173). Aligned Management Associates, San Diego, CA.
Preminger, G. M. (1994). Advanced Imaging Technologies for Endoscopic Surgery. Medicine Meets Virtual
Reality II: Interactive Technology & Healthcare: Visionary Applications for Simulation Visualization
Robotics (pp. 177-178). Aligned Management Associates, San Diego, CA.
Pugnetti, L. (1994). Recovery Diagnostics and Monitoring in Virtual Environments. Virtual Reality in
Rehabilitation, Research, Experience and Perspectives. Proceedings of the 1st International Congress
on Virtual Reality in Rehabilitation. 1994 June 13-18. Gubbio, Italy.
Pugnetti, L., Mendozzi, L., Motta, A., Cattaneo, A., Barbieri, E., and Brancotti, S. (1995a). Evaluation
and Retraining of Adults’ Cognitive Impairments: Which Role for Virtual Reality Technology?
Computers in Biology and Medicine 25 (2), 213-227.
Pugnetti, L., Mendozzi, L., Motta, A., Cattaneo, A., Barbieri, E., Brancotti, S., and Cazzullo, C.L. (1995b).
Immersive Virtual Reality to Assist Retraining of Acquired Cognitive Deficits: First Results with a
Dedicated System. Interactive Technology and the New Paradigm for Healthcare. (pp. 455-456). IOS
Press, Washington, D.C.
Pugnetti, L., Mendozzi, L., Barbieri, E., Rose, F.D., and Attree, E.A. (1996). Nervous System Correlates of
Virtual Reality Experience. Proceedings of the European Conference on Disability, Virtual Reality and
Associated Technology. (pp. 239-246).
Rabinowitz, W.M., Maxwell, J., Shao, Y., and Wei, M. (1993, Spring). Sound Localization Cues for a
Magnified Head: Implications from Sound Diffraction about a Rigid Sphere. Presence: Teleoperators
and Virtual Environments. 2(2), 125-129.
Radesovich, L. (1997). Hackers prove 56-bit DES is not enough. InfoWorld, 18 (26), 27.
Rissam, H.S., Kishore, S., Bhatia, M.L., and Trehan, N. (1998). Trans-Telephonic Electro-Cardiographic
Monitoring (TTEM): First Indian Experience. Medicine Meets Virtual Reality: Art, Science, Tech-
nology: Healthcare (R)Evolution. (pp. 361-363). IOS Press, San Diego, CA.
Rizzo, A.A. and Buckwalter, J.G. (1997). The Status of Virtual Reality for the Cognitive Rehabilitation of
Persons with Neurological Disorders and Acquired Brain Injury. Medicine Meets Virtual Reality:
Global Healthcare Grid (pp. 22-33). IOS Press, San Diego, CA.
Rosen, J. (1994). The Role of Telemedicine and Telepresence in Reducing Health Care Costs. Medicine
Meets Virtual Reality II: Interactive Technology & Healthcare: Visionary Applications for Simulation
Visualization Robotics (pp. 187-194). Aligned Management Associates, San Diego, CA.
Rothbaum, B.O., Hodges, L.F., Kooper, I.R., Opdyke, D., Williford, J.S., and North, M. (1995a). Virtual Reality
Graded Exposure in the Treatment of Acrophobia: A Case Report. Behavior Therapy, 26 (3), 547-554.
Rothbaum, B.O., Hodges, L.F., Kooper, I.R., Opdyke, D., Williford, J.S., and North, M. (1995b).
Effectiveness of Computer Generated Graded Exposure in the Treatment of Acrophobia. American
Journal of Psychiatry, 152(4), 626-628.
Rothbaum, B.O., Hodges, L.F., Watson, B.A., Kessler, G.D., and Opdyke, D. (1996). Virtual Reality
Exposure Therapy in the Treatment of Fear of Flying: A Case Report. Behavior Research and Therapy,
34 (5/6), 477-481.
Rovetta, A., Falcone, F., Sala, R., and Garavaldi, M.E. (1997). Telehealth in Africa. Medicine Meets Virtual
Reality: Global Healthcare Grid (pp. 277-285). IOS Press, San Diego, CA.
Rovetta, A., Sala, R., Bressanelli, M., Garavaldi, M.E., Lorini, F., Pegoraro, R., and Canina, M. (1998).
Demonstration of Surgical Telerobotics and Virtual Telepresence by Internet + ISDN From
Monterey (USA) to Milan (Italy). Medicine Meets Virtual Reality: Art, Science, Technology: Health-
care (R)Evolution. (pp. 79-83). IOS Press, San Diego, CA.
Salisbury, J.K. and Srinivasan, M. (1996). Proceedings of the First PHANToM User’s Group Workshop. MIT
Press, Cambridge, MA.
Salisbury, J.K. and Srinivasan, M. (1997). Proceedings of the Second PHANToM User’s Group Workshop.
MIT Press, Cambridge, MA.
Satava, R.M. (1992). Robotics, Telepresence and Virtual Reality: A Critical Analysis of the Future of
Surgery. Minimally Invasive Therapy, 1, 357-363.
Schraft, R.D., Neugebauer, J.G., and Wapler, M. (1994). Virtual Reality for Improved Control in Endo-
scopic Surgery. Medicine Meets Virtual Reality II: Interactive Technology & Healthcare: Visionary
Applications for Simulation Visualization Robotics (pp. 233-236). Aligned Management Associates,
San Diego, CA.
Senger, S. (1998). An Immersive Environment for the Direct Visualization and Segmentation of Volu-
metric Data Sets. Medicine Meets Virtual Reality: Art, Science, Technology: Healthcare (R)Evolution.
(pp. 7-12). IOS Press, San Diego, CA.
Shimoga, K.B., Khosla, P.K, and Sclabassi, R.J. (1994). Teleneurosurgery: An Approach to Enhance the
Dexterity of Neurosurgeons. Medicine Meets Virtual Reality II: Interactive Technology & Healthcare:
Visionary Applications for Simulation Visualization Robotics. (p. 203). Aligned Management Asso-
ciates, San Diego, CA.
Stallkamp, J. and Walper, M. (1998). UltraTrainer: A Training System for Medical Ultrasound Examina-
tion. Medicine Meets Virtual Reality: Art, Science, Technology: Healthcare (R)Evolution. (pp. 298-301).
IOS Press, San Diego, CA.
Steffin, M. (1997). Computer Assisted Therapy for Multiple Sclerosis and Spinal Cord Injury Patients:
Application of Virtual Reality. Medicine Meets Virtual Reality: Global Healthcare Grid. (pp. 64-72).
IOS Press, San Diego, CA.
Strickland, D.C. (1995). Virtual Reality Training for Autism. North Carolina State University, College of
Engineering Updates, May 1995.
Strickland, D.C. (1996). Virtual Reality Helps Children with Autism. Presence, 5(3), 319-329.
Suzuki, N., Ito, M., and Okamura, T. (1987). Morphological Reference System of Human Structure Using
Computer Graphics. World Congress on Medical Physics and Biomedical Engineering, San Antonio,
TX.
Suzuki, N., Takatsu, A., Hattori, A., Ezumi, T., Oda, S., Yanai, T., and Tominaga, H. (1998). 3D and 4D
Atlas System of Living Human Body Structure. Medicine Meets Virtual Reality: Art, Science, Tech-
nology: Healthcare (R)Evolution. (pp. 131-136). IOS Press, San Diego, CA.
Szabo, Z., Hunter, J.G., Berci, G. et al. (1994). Choreographed Instrument Movements During Laparo-
scopic Surgery: Needle Driving, Knot Tying and Anastomosis Techniques. Medicine Meets Virtual
Reality II: Interactive Technology & Healthcare: Visionary Applications for Simulation Visualization
Robotics (pp. 216-217). Aligned Management Associates, San Diego, CA.
Tendick, F., Jennings, R.W., Tharp, G. and Stark, L. (1993). Sensing and Manipulation Problems in
Endoscopic Surgery: Experiment, Analysis, and Observation. Presence: Teleoperators and Virtual
Environments. 2(1), 66-81.
Trimble, J., Morris, T., and Crandall, R. (1992). Virtual Reality. TeamRehab Report, 3(8), 33-37.
Wang, Y. and Sackier, J. (1994). Robotically Enhanced Surgery. Medicine Meets Virtual Reality II: Interactive
Technology & Healthcare: Visionary Applications for Simulation Visualization Robotics (pp. 218- 220).
Aligned Management Associates, San Diego, CA.
Weghorst, S., Airola, C., Openheimer, P., Edmond, C.V., Patience, T., Heskamp, D., and Miller, J. (1998).
Validation of the Madigan ESS Simulator. Medicine Meets Virtual Reality: Art, Science, Technology:
Healthcare (R)Evolution. (pp. 399-405). IOS Press, San Diego, CA.
Weghorst, S., Prothero, J. and Furness, T. (1994). Virtual Images in the Treatment of Parkinson’s Disease
Akinesia. Medicine Meets Virtual Reality II: Interactive Technology & Healthcare: Visionary Applica-
tions for Simulation Visualization Robotics. (pp. 242-243). Aligned Management Associates, San
Diego, CA.
Wenger, K. and Karron, D.B. (1998). Audio-Guided Blind Biopsy Needle Placement. Medicine Meets
Virtual Reality: Art, Science, Technology: Healthcare (R)Evolution. (pp. 90-95). IOS Press, San Diego,
CA.
Wenzel, E.M. (1992). Localization in Virtual Acoustic Displays. Presence, 1, 80-107.
Wenzel, E.M. (1994). Spatial Sound and Sonification. In G. Kramer (Ed.) Auditory Display: Sonification,
Audification, and Auditory Interfaces. Addison-Wesley, Reading, MA (pp. 127-150).

Further Information
Burdea, G. and Coiffet, P. (1994). Virtual Reality Technology. John Wiley & Sons, New York.
HITL (Human Interface Technology Laboratory), University of Washington, FJ-15, Seattle, WA 98195.
UNC Laboratory, University of North Carolina, Chapel Hill, Computer Science Department, Chapel Hill,
NC 27599-3175.
Presence: Teleoperators & Virtual Environments (technical papers and journal), MIT Press Journals,
55 Hayward St., Cambridge, MA 02142.