Engineering College
Ajmer
(A Constituent College of Bikaner Technical University, Bikaner)
CERTIFICATE
This is to certify that the Seminar Report entitled Haptic Technology has
been submitted by Rahul Garg, VII semester, in partial fulfilment of the requirements
for the degree of B.Tech. in Electronics and Communication Engineering
during the academic session 2023-2024.
ACKNOWLEDGEMENT
First of all, I am indebted to GOD ALMIGHTY for giving me an opportunity to excel in
my efforts to complete this seminar on time.
Every seminar, big or small, succeeds largely due to the efforts of a number of wonderful
people who have always given their valuable advice or lent a helping hand. I sincerely
appreciate the inspiration, support and guidance of all those people who have been
instrumental in making this seminar a success.
This is an opportunity to express my heartfelt words for the people who were part of this
seminar in numerous ways, people who gave me unending support right from the beginning
of the seminar.
I wish to express my gratitude to my guide, Dr. Deepak Jhanwar, Assistant Professor,
Department of Electronics and Communication Engineering, for giving me guidance at
every moment during this seminar and for his valuable suggestions.
I express my deep sense of gratitude to Dr. Uma Shankar Modani, Associate Professor, for
his continuous cooperation and encouragement.
I am thankful to Dr. Anurag Garg, Head, Department of ECE, for his continuous support.
I would be failing in my duty if I did not gratefully acknowledge the authors of the
references and other literature referred to in this seminar. Last but not least, I am very
much thankful to my parents, who guided me in every step I took.
RAHUL GARG
21EEAEC204
ABSTRACT
‘Haptics’ is a technology that adds the sense of touch to virtual environments: users are
given the illusion that they are touching or manipulating a real physical object.
This seminar discusses the important concepts in haptics and some of the most commonly
used haptic systems, such as the ‘Phantom’, ‘Cyber glove’, ‘Novint Falcon’ and similar
devices. Following this, a description of how sensors and actuators are used for tracking
the position and movement of haptic systems is provided.
The different types of force rendering algorithms are discussed next. The seminar explains
the blocks in force rendering. Then a few applications of haptic systems are taken up for
discussion.
INDEX
CONTENT PAGE No.
ACKNOWLEDGEMENT………………………………………………………………….i
ABSTRACT……………………………………………………………………………...ii
INDEX………………………………………………………………………………….iii
CHAPTER 1 INTRODUCTION………………………………………………….1-2
1.1 What is Haptics?
1.2 History of Haptics
CHAPTER 2 WORKING OF HAPTICS………………………………………...3-6
2.1 Basic System Configuration
2.2 Haptic Information
2.3 Creation of Virtual Environment (Virtual Reality)
2.4 Haptic Feedback
CHAPTER 3 HAPTIC DEVICES………………………………………………..7-13
3.3.1 PHANTOM…………………………………………………………………11
CHAPTER 4 HAPTIC RENDERING…………………………………………14-19
CHAPTER 5 APPLICATIONS, LIMITATIONS & FUTURE VISION……...20-28
CONCLUSIONS……………………………………………………………………….29
REFERENCES………………………………………………………………………..30
CHAPTER 1 INTRODUCTION
1.1 What is Haptics?
Haptic technology refers to technology that interfaces the user with a virtual environment
via the sense of touch by applying forces, vibrations, and/or motions to the user. This
mechanical stimulation may be used to assist in the creation of virtual objects (objects
existing only in a computer simulation), for control of such virtual objects, and to enhance
the remote control of machines and devices (teleoperators). This emerging technology
promises to have wide-reaching applications, as it already has in some fields. For example,
haptic technology has made it possible to investigate in detail how the human sense of
touch works by allowing the creation of carefully controlled haptic virtual objects. These
objects are used to systematically probe human haptic capabilities, which would otherwise
be difficult to achieve. These new research tools contribute to our understanding of how
touch and its underlying brain functions work. Although haptic devices are capable of
measuring bulk or reactive forces that are applied by the user, they should not be confused
with touch or tactile sensors that measure the pressure or force exerted by the user on the
interface.
1.2 History of Haptics
The term haptic originated from the Greek word ἁπτικός (haptikos), meaning “pertaining
to the sense of touch”, and comes from the Greek verb ἅπτεσθαι (haptesthai), meaning to
“contact” or “touch”.
In the early 20th century, psychophysicists introduced the word haptic to label the subfield
of their studies that addressed human touch-based perception and manipulation. In the
1970s and 1980s, significant research efforts in a completely different field, robotics, also
began to focus on manipulation and perception by touch. Initially concerned with building
autonomous robots, researchers soon found that building a dexterous robotic hand was
much more complex and subtle than their initial naive hopes had suggested.
In time these two communities, one that sought to understand the human hand and one that
aspired to create devices with dexterity inspired by human abilities, found fertile mutual
interest in topics such as sensory design and processing, grasp control and manipulation,
object representation and haptic information encoding, and grammars for describing
physical tasks.
In the early 1990s a new usage of the word haptics began to emerge. The confluence of
several emerging technologies made virtualized haptics, or computer haptics possible.
Much like computer graphics, computer haptics enables the display of simulated objects
to humans in an interactive manner. However, computer haptics uses a display technology
through which objects can be physically palpated.
CHAPTER 2 WORKING OF HAPTICS
2.1 Basic System Configuration
Basically, a haptic system consists of two parts, namely the human part and the machine part.
In the figure shown above, the human part (left) senses and controls the position of the
hand, while the machine part (right) exerts forces on the hand to simulate contact with
a virtual object. Also both the systems will be provided with necessary sensors, processors
and actuators. In the case of the human system, nerve receptors perform sensing, the brain
performs processing and muscles perform actuation of the motion performed by the hand,
while in the case of the machine system, the above mentioned functions are performed by
the encoders, computer and motors respectively.
2.2 Haptic Information
Basically, the haptic information provided by the system will be the combination of (i)
tactile information and (ii) kinesthetic information.
Tactile information refers to the information acquired by the sensors which are actually
connected to the skin of the human body with a particular reference to the spatial
distribution of pressure, or more generally, tractions, across the contact area.
For example when we handle flexible materials like fabric and paper, we sense the pressure
variation across the fingertip. This is actually a sort of tactile information. Tactile sensing
is also the basis of complex perceptual tasks like medical palpation, where physicians
locate hidden anatomical structures and evaluate tissue properties using their hands.
Kinesthetic information refers to the information acquired through the sensors in the joints.
2.3 Creation of Virtual Environment (Virtual Reality)
Virtual reality is the technology which allows a user to interact with a computer-simulated
environment, whether that environment is a simulation of the real world or an imaginary
world. Most current virtual reality environments are primarily visual experiences,
displayed either on a computer screen or through special stereoscopic displays, but some
simulations include additional sensory information, such as sound through speakers or
headphones. Some advanced haptic systems now include tactile information, generally
known as force feedback, in medical and gaming applications. Users can interact with a
virtual environment or a virtual artifact (VA) either through the use of standard input
devices such as a keyboard and mouse, or through multimodal devices such as a wired
glove, the Polhemus boom arm, or an omnidirectional treadmill. The simulated environment
can be similar to the real world, for example, simulations for pilot or combat training, or
it can differ significantly from reality, as in VR games. In practice, it is currently very
difficult to create a high-fidelity virtual reality experience, due largely to technical
limitations on processing power, image resolution and communication bandwidth.
However, those limitations are expected to eventually be overcome as processor, imaging
and data communication technologies become more powerful and cost-effective over time.
Virtual Reality is often used to describe a wide variety of applications, commonly
associated with its immersive, highly visual, 3D environments. The development of CAD
software, graphics hardware acceleration, head-mounted displays, data gloves and
miniaturization have helped popularize the notion. The most successful use of virtual
reality is the computer-generated 3-D simulator. Pilots use flight simulators, which are
designed just like the cockpit of an airplane or helicopter. The screen in front of the pilot
creates the virtual environment, and the trainers outside the simulator command it to adopt
different modes. The pilots are trained to control the planes in difficult situations and
during emergency landings. The simulator provides the environment. These simulators
cost millions of dollars.
Virtual reality games are used in almost the same fashion. The player has to wear special
gloves, headphones, goggles, full-body wear and special sensory input devices. The player
feels that he is in the real environment. The special goggles contain the monitors. The
environment changes according to the movements of the player. These games are very
expensive.
2.4 Haptic feedback
Virtual reality (VR) applications strive to simulate real or imaginary scenes with which
users can interact and perceive the effects of their actions in real time. Ideally the user
interacts with the simulation via all five senses. However, today’s typical VR applications
rely on a smaller subset, typically vision, hearing, and more recently, touch.
Figure below shows the structure of a VR application incorporating visual, auditory, and
haptic feedback.
The human operator typically holds or wears the haptic interface device and perceives
audiovisual feedback from audio (computer speakers, headphones, and so on) and visual
displays (for example, a computer screen or head-mounted display). Whereas audio and
visual channels feature unidirectional information and energy flow (from the simulation
engine toward the user), the haptic modality exchanges information and energy in two
directions, from and toward the user. This bi-directionality is often referred to as the single
most important feature of the haptic interaction modality.
CHAPTER 3 HAPTIC DEVICES
A haptic device is one that provides a physical interface between the user and the virtual
environment by means of a computer. This can be done through an input/output device
that senses the body’s movement, such as a joystick or data glove. By using haptic devices,
the user can not only feed information to the computer but can also receive information
from the computer in the form of a felt sensation on some part of the body.
The term exoskeleton refers to the hard outer shell that exists on many creatures. In a
technical sense, the word refers to a system that covers the user, or that the user has to
wear. Current haptic devices that are classified as exoskeletons are large and immobile
systems to which the user must attach himself or herself.
These wearable devices are smaller exoskeleton-like devices that are often, but not always,
tethered to a large exoskeleton or other immobile device. Since the goal of building a haptic
system is to be able to immerse a user in the virtual or remote environment, it is
important to provide as small a reminder of the user’s actual environment as possible.
The drawback of wearable systems is that, since the weight and size of the devices are a
concern, such systems will have more limited sets of capabilities.
This is a class of devices that are very specialized for performing a particular given task.
Designing a device to perform a single type of task restricts the application of that device
to a much smaller number of functions. However, it allows the designer to focus the device
to perform its task extremely well. These task devices have two general forms: single point
of interface devices and specific task devices.
An interesting application of haptic feedback is full-body force feedback in the form of
locomotion interfaces. Locomotion interfaces are movement-restricting force-feedback
devices in a confined space, simulating unrestrained mobility such as walking and running
for virtual reality. These interfaces overcome the limitations of using joysticks for
maneuvering or whole body motion platforms, in which the user is seated and does not
expend energy, and of room environments, where only short distances can be traversed.
3.2.1 Force feedback devices
Force feedback input devices are usually, but not exclusively, connected to computer
systems and are designed to apply forces to simulate the sensation of weight and resistance
in order to provide information to the user. As such, the feedback hardware represents a
more sophisticated form of input/output device, complementing others such as keyboards,
mice or trackers. Input from the user is in the form of hand (or other body segment)
position, whereas feedback from the computer or other device is in the form of force or
position. These devices translate digital information into physical sensations.
3.2.2 Tactile display devices
3.3.1 PHANTOM
of the virtual object with a very high degree of realism. One of its key features is that it
can model free-floating three-dimensional objects.
3.3.2 Cyber glove
The principle of a Cyber glove is simple. It consists of opposing the movement of the hand
in the same way that an object squeezed between the fingers resists the movement of the
latter. The glove must therefore be capable, in the absence of a real object, of recreating
the forces applied by the object on the human hand with (1) the same intensity and (2) the
same direction. These two conditions can be simplified by requiring the glove to apply an
opposing torque at each interphalangian joint.
The solution that we have chosen uses a mechanical structure with three passive joints
which, with the interphalangian joint, make up a flat four-bar closed-link mechanism. This
solution uses cables placed in the interior of the four-bar mechanism, following a
trajectory identical to that of the extensor tendons which, by nature, oppose the
movement of the flexor tendons in order to harmonize the movement of the fingers.
The main features of the glove are:
• Adapted to different sizes of the fingers
• Located on the back of the hand
• Applies different forces on each phalanx (with the possibility of applying a lateral
force on the fingertip by motorizing the abduction/adduction joint)
• Measures finger angular flexion (the measurements of the joint angles are independent
and can have a good resolution, given the long paths traveled by the cables when the
fingers close)
The glove is made up of five fingers and has 19 degrees of freedom, 5 of which are passive.
Each finger is made up of a passive abduction joint, which links it to the base (palm), and
of 9 rotoid joints which, with the three interphalangian joints, make up 3 four-bar
closed-link mechanisms with 1 degree of freedom each. The structure of the thumb is
composed of only two closed links, for 3 DOF, of which one is passive. The segments of
the glove are made of aluminum and can withstand high loads; their total weight does not
surpass 350 grams. The length of the segments is proportional to the length of the
phalanges. All of the joints are mounted on miniature ball bearings in order to reduce
friction.
The mechanical structure offers two essential advantages: the first is the ease of adapting
to different sizes of the human hand. We have also provided for lateral adjustment in order
to adapt the interval between the fingers at the palm. The second advantage is the presence
of physical stops in the structure, which offer complete security to the operator. The force
sensor is placed on the inside of a fixed support on the upper part of the phalanx. The
sensor is made up of a steel strip on which a strain gauge is glued. The position sensors
used to measure the cable displacements are incremental optical encoders offering an
average theoretical resolution equal to 0.1 deg for the finger joints.
3.3.2.2 Control of Cyber glove
The glove is controlled by 14 DC torque motors which can develop a maximal torque
equal to 1.4 Nm and a continuous torque equal to 0.12 Nm. On each motor we fix a pulley
with an 8.5 mm radius onto which the cable is wound. The continuous force that the motor
can exert on the cable is thus equal to 0.12 Nm / 8.5 mm ≈ 14.0 N, a value sufficient to
ensure opposition to the movement of the finger. The electronic interface of the force
feedback data glove is made of a PC with several acquisition cards. The global scheme of
the control is given in the figure shown below. One can distinguish two command loops:
an internal loop which corresponds to a classic force control with constant gains, and an
external loop which integrates the model of distortion of the virtual object in contact with
the fingers. In this scheme, the action of the human on the position of the finger joints is
taken into consideration by the two control loops. The human is considered as a
displacement generator, while the glove is considered as a force generator.
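In code form, the two-loop scheme can be sketched as follows. This is a minimal
illustrative sketch, not the actual controller: the gain value, the linear spring standing in
for the object-distortion model, and the function names are assumptions; only the pulley
radius and the 14 N force limit follow from the values given above.

```python
# Illustrative sketch of the glove's two command loops; gains, the linear
# spring standing in for the object-distortion model, and function names are
# assumptions. Pulley radius and force limit follow the values given above.

PULLEY_RADIUS_M = 0.0085       # 8.5 mm pulley radius
MAX_CABLE_FORCE_N = 14.0       # continuous limit: 0.12 Nm / 0.0085 m
KP_FORCE = 2.0                 # inner-loop proportional gain (assumed)

def external_loop(finger_displacement_m, object_stiffness_n_per_m):
    """External loop: model of the virtual object's distortion.
    A linear spring stands in for the real deformation model (assumption)."""
    return object_stiffness_n_per_m * finger_displacement_m

def internal_loop(desired_force_n, measured_force_n):
    """Internal loop: classic force control with constant gains.
    Returns the torque command for the motor driving the pulley."""
    error = desired_force_n - measured_force_n
    cable_force = desired_force_n + KP_FORCE * error
    cable_force = max(0.0, min(cable_force, MAX_CABLE_FORCE_N))
    return cable_force * PULLEY_RADIUS_M     # torque = force x radius

# One control cycle: the human acts as a displacement generator, the glove
# as a force generator.
displacement = 0.004                             # finger moved 4 mm into object
desired = external_loop(displacement, 800.0)     # 800 N/m assumed stiffness
torque = internal_loop(desired, measured_force_n=2.9)
print(f"desired force {desired:.2f} N, motor torque {torque * 1000:.1f} mNm")
```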
CHAPTER 4 HAPTIC RENDERING
As illustrated in the figure given above, haptic interaction occurs at an interaction tool of
a haptic interface that mechanically couples two controlled dynamical systems: the haptic
interface with a computer, and the human user with a central nervous system. The two
systems are exactly symmetrical in structure and information: they sense the environment,
make decisions about control actions, and provide mechanical energies to the interaction
tool through motions.
Some desirable characteristics of haptic devices are given below:-
• Balanced range, resolution, and bandwidth of position sensing and force reflection;
and
• Proper ergonomics that let the human operator focus when wearing or
manipulating the haptic interface, since pain, or even discomfort, can distract the user,
reducing overall performance.
An avatar is the virtual representation of the haptic interface through which the user
physically interacts with the virtual environment. Clearly, the choice of avatar depends on
what’s
being simulated and on the haptic device’s capabilities. The operator controls the avatar’s
position inside the virtual environment. Contact between the interface avatar and the
virtual environment sets off action and reaction forces. The avatar’s geometry and the type
of contact it supports regulate these forces.
Within a given application the user might choose among different avatars. For example, a
surgical tool can be treated as a volumetric object exchanging forces and positions with
the user in a 6D space or as a pure point representing the tool’s tip, exchanging forces and
positions in a 3D space.
Haptic-rendering algorithms compute the correct interaction forces between the haptic
interface representation inside the virtual environment and the virtual objects populating
the environment. Moreover, haptic rendering algorithms ensure that the haptic device
correctly renders such forces on the human operator. Several components compose typical
haptic rendering algorithms. We identify three main blocks, illustrated in the figure shown
below.
Collision-detection algorithms detect collisions between objects and avatars in the virtual
environment and yield information about where, when, and ideally to what extent
collisions (penetrations, indentations, contact area, and so on) have occurred.
Force-response algorithms compute the interaction force between avatars and virtual
objects when a collision is detected. This force approximates as closely as possible the
contact forces that would normally arise during contact between real objects.
Force-response algorithms typically operate on the avatars’ positions, the positions of all
objects in the virtual environment, and the collision state between avatars and virtual
objects. Their return values are normally force and torque vectors that are applied at the
device-body interface. Hardware limitations prevent haptic devices from applying the
exact force computed by the force-response algorithms to the user.
Control algorithms command the haptic device in such a way that minimizes the error
between ideal and applicable forces. The discrete-time nature of the haptic-rendering
algorithms often makes this difficult, as explained further below. Desired
force and torque vectors computed by force response algorithms feed the control
algorithms. The algorithms’ return values are the actual force and torque vectors that will
be commanded to the haptic device.
• Low-level control algorithms sample the position sensors at the haptic interface
device joints.
• These control algorithms combine the information collected from each sensor to
obtain the position of the device-body interface in Cartesian space—that is, the
avatar’s position inside the virtual environment.
• The collision-detection algorithm uses position information to find collisions
between objects and avatars and report the resulting degree of penetration.
The simulation engine then uses the same interaction forces to compute their effect on
objects in the virtual environment. Although there are no firm rules about how frequently
the algorithms must repeat these computations, a 1-kHz servo rate is common. This rate
seems to be a subjectively acceptable compromise permitting presentation of reasonably
complex objects with reasonable stiffness. Higher servo rates can provide crisper contact
and texture sensations, but only at the expense of reduced scene complexity (or more
capable computers).
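To make the pipeline concrete, here is a minimal sketch of one servo tick for a point
avatar touching a single sphere. It is an illustration under assumptions, not a production
algorithm: the sphere scene, the penalty stiffness, and the device read/write functions are
invented for the example.

```python
import math

# Minimal sketch of one haptic servo tick for a point avatar and a single
# sphere. Stiffness, sphere parameters, and the device read/write functions
# are illustrative assumptions, not a real device API.

SPHERE_CENTER = (0.0, 0.0, 0.0)
SPHERE_RADIUS = 0.05          # meters
STIFFNESS = 500.0             # N/m, assumed penalty stiffness

def collision_detect(avatar_pos):
    """Return (penetration depth, outward surface normal) or (0, None)."""
    d = [a - c for a, c in zip(avatar_pos, SPHERE_CENTER)]
    dist = math.sqrt(sum(x * x for x in d))
    if dist >= SPHERE_RADIUS or dist == 0.0:
        return 0.0, None
    normal = [x / dist for x in d]
    return SPHERE_RADIUS - dist, normal

def force_response(penetration, normal):
    """Penalty-based contact force pushing the avatar out of the object."""
    if normal is None:
        return [0.0, 0.0, 0.0]
    return [STIFFNESS * penetration * n for n in normal]

def servo_tick(read_device_position, write_device_force):
    """One cycle: sample position -> detect collision -> respond -> command."""
    avatar_pos = read_device_position()          # low-level control: sensors
    pen, normal = collision_detect(avatar_pos)   # collision detection
    force = force_response(pen, normal)          # force response
    write_device_force(force)                    # control: command the device

# Example tick with stubbed-in device I/O:
servo_tick(lambda: (0.0, 0.0, 0.045), lambda f: print("force (N):", f))
```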
Humans perceive contact with real objects through sensors (mechanoreceptors) located in
their skin, joints, tendons, and muscles. We make a simple distinction between the
information these two types of sensors can acquire, i.e., tactile information and kinesthetic
information. A tool-based interaction paradigm provides a convenient simplification
because the system need only render forces resulting from contact between the tool’s
avatar and objects in the environment. Thus, haptic interfaces frequently utilize a tool
handle as the physical interface for the user.
To provide a haptic simulation experience, we’ve designed our systems to recreate the
contact forces a user would perceive when touching a real object. The haptic interfaces
measure the user’s position to recognize if and when contacts occur and to collect
information needed to determine the correct interaction force. Although determining user
motion is easy, determining appropriate display forces is a complex process and a subject
of much research. Current haptic technology effectively simulates interaction forces for
simple cases, but is limited when tactile feedback is involved.
Compliant object response modeling adds a dimension of complexity because of
non-negligible deformations, the potential for self-collision, and the general complexity of
modeling potentially large and varying areas of contact. We distinguish between two types
of forces: forces due to object geometry and forces due to object surface properties, such
as texture and friction.
4.6 Geometry-dependent force-rendering algorithms:-
The first type of force-rendering algorithm aspires to recreate the force interaction a user
would feel when touching a frictionless and textureless object. Such interaction forces
depend on the geometry of the object being touched, its compliance, and the geometry of
the avatar representing the haptic interface inside the virtual environment.
Although exceptions exist, the number of DOF necessary to describe the interaction forces
between an avatar and a virtual object typically matches the actuated DOF of the haptic
device being used. Thus, for simpler devices, such as a 1-DOF force-reflecting gripper, the
avatar consists of a couple of points that can only move and exchange forces along the line
connecting them. For this device type, the force-rendering algorithm computes a simple 1-
DOF squeeze force between the index finger and the thumb, similar to the force you would
feel when cutting an object with scissors. When using a 6-DOF haptic device, the avatar
can be an object of any shape. In this case, the force-rendering algorithm computes all the
interaction forces between the object and the virtual environment and applies the resultant
force and torque vectors to the user through the haptic device. We group current force-
rendering algorithms by the number of DOF necessary to describe the interaction force
being rendered.
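As an illustration of the 1-DOF case just described, the sketch below computes the
squeeze force exchanged along the line connecting the two avatar points; the stiffness and
the virtual object's width are assumed values for the example.

```python
# Sketch of the 1-DOF squeeze force between the thumb and index avatars of a
# force-reflecting gripper. The stiffness and virtual object width are
# assumed values; force is exchanged only along the line joining the points.

STIFFNESS = 300.0        # N/m, assumed
OBJECT_WIDTH = 0.03      # meters; resistance begins below this finger gap

def squeeze_force(gap_m):
    """Return the resistive force felt while closing the gripper."""
    compression = OBJECT_WIDTH - gap_m
    return STIFFNESS * compression if compression > 0.0 else 0.0

print(squeeze_force(0.025))   # 5 mm of compression -> 1.5 N
```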
All real surfaces contain tiny irregularities or indentations. Obviously, it’s impossible to
distinguish each irregularity when sliding a finger over an object. However, tactile sensors
in the human skin can feel their combined effects when rubbed against a real surface.
Micro-irregularities act as obstructions when two surfaces slide against each other and
generate forces tangential to the surface and opposite to motion. Friction, when viewed at
the microscopic level, is a complicated phenomenon. Nevertheless, simple empirical
models exist, such as the one Leonardo da Vinci proposed and Charles-Augustin de
Coulomb later developed in 1785. Such models served as a basis for the simpler frictional
models in 3 DOF. Researchers outside the haptic community have developed many models
to render friction with higher accuracy, for example, the Karnopp model for modeling
stick-slip friction, the Bristle model, and the reset integrator model. Higher accuracy,
however, sacrifices speed, a critical factor in real-time applications. Any choice of
modeling technique must consider this trade-off. Keeping this trade-off in mind,
researchers have developed more accurate haptic-rendering algorithms for friction. A
texture or pattern generally covers real surfaces. Researchers have proposed various
techniques for rendering the forces that touching such textures generates.
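In its simplest form, the Coulomb model states that the tangential friction force opposes
sliding with magnitude proportional to the normal force, Ft = μ·Fn. A minimal sketch
follows; the friction coefficient is an assumed value.

```python
import math

# Sketch of Coulomb friction for haptic rendering: the tangential force
# opposes the avatar's sliding velocity with magnitude mu * |normal force|.
# The friction coefficient is an assumed value.

MU = 0.4   # assumed Coulomb friction coefficient

def coulomb_friction(normal_force_n, tangential_velocity):
    """Return the friction force vector opposing tangential sliding."""
    speed = math.sqrt(sum(v * v for v in tangential_velocity))
    if speed == 0.0:
        return [0.0, 0.0, 0.0]       # no sliding: no kinetic friction force
    scale = -MU * abs(normal_force_n) / speed
    return [scale * v for v in tangential_velocity]

print(coulomb_friction(2.5, [0.01, 0.0, 0.0]))   # -> [-1.0, -0.0, -0.0]
```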
In point interactions, a single point, usually the distal point of a probe, thimble or stylus
used for direct interaction with the user, is employed in the simulation of collisions.
The point penetrates the virtual objects, and the depth of indentation is calculated between
the current point and a point on the surface of the object. Forces are then generated
according to physical models, such as spring stiffness or a spring-damper model.
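A minimal sketch of these two force models follows; the stiffness and damping constants
are assumed example values, and the penetration depth and velocity are taken as already
supplied by collision detection.

```python
# Sketch of the two point-interaction force models named above. The spring
# stiffness K and damping coefficient B are assumed values; penetration depth
# and its rate of change are assumed to come from collision detection.

K = 400.0   # N/m, assumed spring stiffness
B = 5.0     # N*s/m, assumed damping coefficient

def spring_force(penetration_m):
    """Pure spring model: force proportional to the indentation depth."""
    return K * penetration_m

def spring_damper_force(penetration_m, penetration_velocity_m_s):
    """Spring-damper model: damping also resists the rate of indentation."""
    force = K * penetration_m + B * penetration_velocity_m_s
    return max(force, 0.0)     # contact forces can only push, never pull

print(spring_force(0.002))                # -> 0.8 N
print(spring_damper_force(0.002, 0.05))   # -> 1.05 N
```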
In ray-based rendering, the user interface mechanism, for example a probe, is modeled in
the virtual environment as a finite ray. Orientation is thus taken into account, and collisions
are determined between the simulated probe and virtual objects. Collision detection
algorithms return the intersection point between the ray and the surface of the simulated
object.
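To illustrate, the sketch below computes the intersection point a collision-detection
algorithm of this kind would return for a ray and a sphere; the geometry values are
assumptions for the example.

```python
import math

# Sketch of ray-based collision detection against a sphere: return the first
# intersection point between the probe ray and the object surface, as the
# collision-detection step would. Scene values are assumed examples.

def ray_sphere_intersection(origin, direction, center, radius):
    """Return the nearest intersection point, or None if the ray misses."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                    # probe ray misses the object
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    if t < 0.0:
        return None                    # intersection lies behind the probe
    return [o + t * d for o, d in zip(origin, direction)]

hit = ray_sphere_intersection([0.0, 0.0, 0.2], [0.0, 0.0, -1.0],
                              [0.0, 0.0, 0.0], 0.05)
print(hit)   # -> [0.0, 0.0, 0.05], where the probe meets the surface
```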
CHAPTER 5 APPLICATIONS, LIMITATIONS & FUTURE VISION
5.1 APPLICATIONS:-
Video game makers have been early adopters of passive haptics, which takes advantage of
vibrating joysticks, controllers and steering wheels to reinforce on-screen activity. But
future video games will enable players to feel and manipulate virtual solids, fluids, tools
and avatars. The Novint Falcon haptics controller is already making this promise a reality.
The 3-D force feedback controller allows you to tell the difference between a pistol report
and a shotgun blast, or to feel the resistance of a longbow's string as you pull back an
arrow.
Graphical user interfaces, like those that define Windows and Mac operating
environments, will also benefit greatly from haptic interactions. Imagine being able to feel
graphic buttons and receive force feedback as you depress a button. Some touch screen
manufacturers are already experimenting with this technology. Nokia phone designers
have perfected a tactile touch screen that makes on-screen buttons behave as if they were
real buttons. When a user presses the button, he or she feels movement in and movement
out, and also hears an audible click. Nokia engineers accomplished this by placing two
small piezoelectric sensor pads under the screen and designing the screen so it could move
slightly when pressed. Everything, movement and sound, is synchronized perfectly to
simulate real button manipulation.
Various haptic interfaces for medical simulation may prove especially useful for training
of minimally invasive procedures (laparoscopy/interventional radiology) and remote
surgery using teleoperators. In the future, expert surgeons may work from a central
workstation, performing operations in various locations, with machine setup and patient
preparation performed by local nursing staff. Rather than traveling to an operating room,
the surgeon instead becomes a telepresence. A particular advantage of this type of work is
that the surgeon can perform many more operations of a similar type, and with less fatigue.
It is well documented that a surgeon who performs more procedures of a given kind will
have statistically better outcomes for his patients. Haptic interfaces are also used in
rehabilitation robotics.
Reality-based modeling for surgical simulation consists of a continuous cycle. In the figure
given above, the surgeon receives visual and haptic (force and tactile) feedback and
interacts with the haptic interface to control the surgical robot and instrument. The robot
with instrument then operates on the patient at the surgical site per the commands given
by the surgeon. Visual and force feedback is then obtained through endoscopic cameras
and force sensors that are located on the surgical tools and are displayed back to the
surgeon.
From the earliest moments in the history of virtual reality (VR), the United States military
forces have been a driving factor in developing and applying new VR technologies. Along
with the entertainment industry, the military is responsible for the most dramatic
evolutionary leaps in the VR field.
Virtual environments work well in military applications. When well designed, they provide
the user with an accurate simulation of real events in a safe, controlled environment.
Specialized military training can be very expensive, particularly for vehicle pilots. Some
training procedures have an element of danger when using real situations. While the initial
development of VR gear and software is expensive, in the long run it's much more cost
effective than putting soldiers into real vehicles or physically simulated situations. VR
technology also has other potential applications that can make military activities safer.
Today, the military uses VR techniques not only for training and safety enhancement, but
also to analyze military maneuvers and battlefield positions. In the next section, we'll look
at the various simulators commonly used in military training. Out of all the earliest VR
technology applications, military vehicle simulations have probably been the most
successful. Simulators use sophisticated computer models to replicate a vehicle's
capabilities and limitations within a stationary and safe computer station.
Possibly the most well-known of all the simulators in the military are the flight simulators.
The Air Force, Army and Navy all use flight simulators to train pilots. Training missions
may include how to fly in battle, how to recover in an emergency, or how to coordinate air
support with ground operations.
Although flight simulators may vary from one model to another, most of them have a
similar basic setup. The simulator sits on top of either an electronic motion base or a
hydraulic lift system that reacts to user input and events within the simulation. As the pilot
steers the aircraft, the module he sits in twists and tilts, giving the user haptic feedback.
The word "haptic" refers to the sense of touch, so a haptic system is one that gives the user
feedback he can feel. A joystick with force-feedback is an example of a haptic device.
Some flight simulators include a completely enclosed module, while others just have a
series of computer monitors arranged to cover the pilot's field of view. Ideally, the flight
simulator will be designed so that when the pilot looks around, he sees the same controls
and layout as he would in a real aircraft. Because one aircraft can have a very different
cockpit layout from another, there isn't a perfect simulator choice that can accurately
represent every vehicle. Some training centers invest in multiple simulators, while others
sacrifice accuracy for convenience and cost by sticking to one simulator model.
The FCS simulators include three computer monitors and a pair of joystick controllers
attached to a console. The modules can simulate several different ground vehicles,
including non-line-of-sight mortar vehicles, reconnaissance vehicles or an infantry carrier
vehicle.
The Army uses several specific devices to train soldiers to drive specialized vehicles like
tanks or the heavily-armored Stryker vehicle. Some of these look like long-lost twins to
flight simulators. They not only accurately recreate the handling and feel of the vehicle
they represent, but also can replicate just about any environment you can imagine. Trainees
can learn how the real vehicle handles in treacherous weather conditions or difficult terrain.
Networked simulators allow users to participate in complex war games.
5.1.4 Telerobotics
In a telerobotic system, a human operator controls the movements of a robot that is located
some distance away. Some teleoperated robots are limited to very simple tasks, such as
aiming a camera and sending back visual images. Haptics now makes it possible to include
touch cues in addition to audio and visual cues in telepresence models. It won't be long
before astronomers and planetary scientists actually hold and manipulate a Martian rock
through an advanced haptics-enabled telerobot, a high-touch version of the Mars
Exploration Rover.
5.2 LIMITATIONS:-
Limitations of haptic device systems have sometimes made applying the force’s exact
value as computed by force-rendering algorithms impossible. Various issues that
contribute to limiting a haptic device’s capability to render a desired force or, more often,
a desired impedance, are given below:-
• Haptic interfaces can only exert forces with limited magnitude and not equally well
in all directions, thus rendering algorithms must ensure that no output components
saturate, as this would lead to erroneous or discontinuous application of forces to
the user. In addition, haptic devices aren’t ideal force transducers.
• An ideal haptic device would render zero impedance when simulating movement
in free space, and any finite impedance when simulating contact with an object
featuring such impedance characteristics. The friction, inertia, and backlash present
in most haptic devices prevent them from meeting this ideal.
• A third issue is that haptic-rendering algorithms operate in discrete time whereas
users operate in continuous time, as the figure shown below illustrates. While moving
into and out of a virtual object, the sampled avatar position will always lag behind
the avatar’s actual continuous-time position. Thus, when pressing on a virtual
object, a user needs to perform less work than in reality; when the user
releases, however, the virtual object returns more work than its real-world
counterpart would have returned. In other terms, touching a virtual object extracts
energy from it. This extra energy can cause an unstable response from haptic
devices, as the sketch after this list demonstrates.
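The sketch referenced in the last item above is a minimal simulation of this effect under
assumed values (wall stiffness, 1-kHz sampling, sinusoidal hand motion); it shows a
sampled virtual spring wall returning more energy on release than it absorbs while being
pressed.

```python
import math

# Minimal simulation of the discrete-time energy leak: a hand presses into
# and out of a virtual spring wall whose force uses the position sampled at
# the servo rate (zero-order hold). Stiffness, rates, and the motion profile
# are assumed example values, not measurements of any real device.

K = 1000.0                # N/m, assumed wall stiffness
SIM_DT = 1e-5             # fine step approximating continuous hand motion
STEPS_PER_SAMPLE = 100    # 100 fine steps per 1-kHz servo sample

def hand_position(t):
    """Penetration depth: press 1 cm into the wall and back over 0.2 s."""
    return 0.01 * math.sin(math.pi * t / 0.2)

net_work = 0.0
sampled_x = 0.0
for i in range(20000):                       # simulate the full 0.2 s cycle
    t = i * SIM_DT
    if i % STEPS_PER_SAMPLE == 0:
        sampled_x = hand_position(t)         # servo samples the position
    force = -K * max(sampled_x, 0.0)         # wall pushes outward on the hand
    v = (hand_position(t + SIM_DT) - hand_position(t)) / SIM_DT
    net_work += force * v * SIM_DT           # work done by wall on the hand

# A positive result means the wall returned more energy than it absorbed:
# energy was extracted from the virtual object, a source of instability.
print(f"net work returned by the virtual wall: {net_work * 1000:.2f} mJ")
```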
All of these issues, well known to practitioners in the field, can limit a haptic application’s
realism. The first two issues usually depend more on the device mechanics; the third
depends on the digital nature of VR applications.
5.3 FUTURE VISION:-
As haptics moves beyond the buzzes and thumps of today’s video games, technology will
enable increasingly believable and complex physical interaction with virtual or remote
objects. Already, haptically enabled commercial products let designers sculpt digital clay
figures to rapidly produce new product geometry, museum-goers feel previously
inaccessible artifacts, and doctors train for simple procedures without endangering
patients.
Past technological advances that permitted recording, encoding, storage, transmission,
editing, and ultimately synthesis of images and sound profoundly affected society. A wide
range of human activities, including communication, education, art, entertainment,
commerce, and science, were forever changed when we learned to capture, manipulate,
and create sensory stimuli nearly indistinguishable from reality. It’s not unreasonable to
expect that future advancements in haptics will have equally deep effects. Though the field
is still in its infancy, hints of vast, unexplored intellectual and commercial territory add
excitement and energy to a growing number of conferences, courses, product releases, and
invention efforts.
For the field to move beyond today’s state of the art, researchers must surmount a number
of commercial and technological barriers. Device and software tool-oriented corporate
efforts have provided the tools we need to step out of the laboratory, yet we need new
business models. For example, can we create haptic content and authoring tools that will
make the technology broadly attractive?
Can the interface devices be made practical and inexpensive enough to make them widely
accessible? Once we move beyond single-point force-only interactions with rigid objects,
we should explore several technical and scientific avenues. Multipoint, multihand, and
multi-person interaction scenarios all offer enticingly rich interactivity. Adding sub-
modality stimulation such as tactile (pressure distribution) display and vibration could add
subtle and important richness to the experience. Modeling compliant objects, such as for
surgical simulation and training, presents many challenging problems to enable realistic
deformations, arbitrary collisions, and topological changes caused by cutting and joining
actions.
Improved accuracy and richness in object modeling and haptic rendering will require
advances in our understanding of how to represent and render psychophysically and
cognitively germane attributes of objects, as well as algorithms and perhaps specialty
hardware (such as haptic or physics engines) to perform real-time computations.
Development of multimodal workstations that provide haptic, visual, and auditory
engagement will offer opportunities for more integrated interactions. We’re only
beginning to understand the psychophysical and cognitive details needed to enable
successful multimodality interactions. For example, how do we encode and render an
object so there is a seamless consistency and congruence across sensory modalities—that
is, does it look like it feels? Are the object’s densities, compliance, motion, and appearance
familiar and unconsciously consistent with context? Are sensory events predictable
enough that we consider objects to be persistent, and can we make correct inferences about
their properties?
Hopefully, we will find good answers to all these questions in the near future.
CONCLUSIONS
Finally, we shouldn’t forget that touch and physical interaction are among the fundamental
ways in which we come to understand our world and to effect changes in it. This is true on
a developmental as well as an evolutionary level. For early primates to survive in a physical
world, as Frank Wilson suggested, “a new physics would eventually have to come into this
brain, a new way of registering and representing the behavior of objects moving and
changing under the control of the hand. It is precisely such a representational system—a
syntax of cause and effect, of stories, and of experiments, each having a beginning, a
middle, and an end— that one finds at the deepest levels of the organization of human
language.”
Our efforts to communicate information by rendering how objects feel through haptic
technology, and the excitement in our pursuit, might reflect a deeper desire to speak with
an inner, physically based language that has yet to be given a true voice.
REFERENCES
• Kenneth Salisbury, Francois Conti and Federico Barbagli, “Haptic Rendering:
Introductory Concepts,” Stanford University; University of Siena, Italy.
• https://haptics.lcsr.jhu.edu/Research/Tissue_Modeling_and_Simulation
• http://74.125.153.132/search?q=cache:7bpkVLHv4UcJ:science.howstuffworks.com/virtualmilitary.htm/printable+haptics+in+virtual+military+training&cd=9&hl=en&ct=clnk&gl=in&client=firefox-a
• http://portal.acm.org/citation.cfm?id=1231041#abstract
• http://www.psqh.com/julaug08/haptics.html
• http://www.informit.com/articles/article.aspx?p=29226&seq