AVR_Unit-3 Notes

This document discusses environment modeling in virtual reality (VR), covering geometric modeling, behavior simulation, and physically based simulation techniques. It details methods for creating and visualizing 3D objects, including polygonal meshes and texture mapping, as well as interactive techniques like body tracking and hand gestures. Additionally, it explores the importance of grasping actions in VR systems for various applications, including training and rehabilitation.


AUGMENTED AND VIRTUAL REALITY

UNIT 3

Chapter 1: Environment Modelling in VR

Environment Modeling in VR: Geometric Modeling, Behavior Simulation, Physically Based Simulation.

Interactive Techniques in VR: Body Track, Hand Gesture, 3D Manus, Object Grasp

1.1 Geometric Modeling

❖ VR System Architecture:

❖ VR Modeling Cycle:

❖ VR Geometric Modeling:


• Object Surface Shape:
o Polygonal Meshes (Vast Majority)
o Splines (For Curved Surfaces)
• Object Appearance:
o Lighting (Shading)
o Texture Mapping
• The surface polygonal (triangle) mesh: Triangle meshes are preferred since they are memory- and compute-efficient (vertices are shared between adjacent triangles).

• Object spline-based shape: Another way of representing virtual objects. Spline functions are of higher degree than the linear functions that describe polygons; they use less storage and provide increased surface smoothness.
• Parametric splines are represented by points x(t), y(t), z(t), with t in [0,1]; each coordinate is a polynomial in t (for example x(t) = a·t² + b·t + c), where a, b, c are constant coefficients.

• Parametric surfaces are an extension of parametric splines, with point coordinates given by x(s,t), y(s,t), z(s,t), where s in [0,1] and t in [0,1] (a minimal evaluation sketch follows after these bullets).
• β-Splines are controlled indirectly through four control points (more in physical
modeling section).
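The parametric forms above can be evaluated directly. The following is a minimal sketch in Python with NumPy (an illustrative assumption; the notes do not prescribe a language or library) that samples a quadratic parametric spline and a bilinear parametric surface patch:

```python
import numpy as np

def spline_point(a, b, c, t):
    """One quadratic parametric spline coordinate: x(t) = a*t^2 + b*t + c, t in [0, 1]."""
    return a * t**2 + b * t + c

def surface_point(p00, p10, p01, p11, s, t):
    """Bilinear parametric surface patch defined by four corner points, s, t in [0, 1]."""
    return ((1 - s) * (1 - t) * p00 + s * (1 - t) * p10 +
            (1 - s) * t * p01 + s * t * p11)

# Sample the curve at 11 parameter values; each column is one coordinate function.
ts = np.linspace(0.0, 1.0, 11)
curve = np.stack([spline_point(1.0, -0.5, 0.2, ts),   # x(t)
                  spline_point(0.0,  1.0, 0.0, ts),   # y(t)
                  spline_point(0.3,  0.0, 0.0, ts)],  # z(t)
                 axis=1)
print(curve.shape)                                    # (11, 3) points on the spline

# One point on a surface patch spanned by four 3D corner points.
corners = [np.array(p, dtype=float) for p in
           ([0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1])]
print(surface_point(*corners, s=0.5, t=0.5))
```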

• Object Polygonal Shape: Can be programmed from scratch using OpenGL or another graphics toolkit/editor; this is tedious and requires skill.
• Can be obtained from CAD files.
• Can be created using 3D digitizer (stylus), or a 3D scanner (tracker, cameras, and
laser).
• Can be purchased from existing online databases (Viewpoint database). Files have
vertex location and connectivity information but are static.

• Polhemus 3D scanners: Eliminate direct contact with the object. They use two cameras, a laser, and magnetic trackers (if movable objects are scanned). Scanning resolution is 0.5 mm at 200 mm range, scanning speed is 50 lines per second, and the scanner-object range is 75-680 mm.

• Polhemus FastScan 3D scanner can scan objects up to 3 m long.



• Scanners produce a dense “cloud” of vertices (x,y,z). Using packages such as Wrap, the point data are transformed into surface data (including editing and decimation).

[Figure: point cloud from the scanner; polygonal mesh after decimation]

[Figure: (a) polygonal surface; (b) NURBS (Non-Uniform Rational B-Spline) patches; (c) NURBS surface]

• Object Visual Appearance: Scene illumination (local or global), texture mapping, multi-texturing, and the use of textures to do illumination in the rasterizing stage of the pipeline.
• Scene illumination:
o Local methods (flat shaded, Phong shaded) treat objects in isolation. They are computationally faster than global illumination methods (a minimal local-shading sketch follows after these items).
o Global illumination methods treat the influence of one object on another object’s appearance. They are computationally more demanding but produce more realistic scenes.
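As a rough illustration of local illumination (a Python sketch under assumed names; the notes do not give an implementation), Phong-style shading of a single surface point treated in isolation:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong_shade(normal, light_dir, view_dir, base_color,
                ambient=0.1, diffuse=0.7, specular=0.4, shininess=32):
    """Local illumination of one surface point: ambient + diffuse + specular.
    The point is shaded in isolation; no other object influences the result."""
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)
    r = 2 * np.dot(n, l) * n - l                   # reflected light direction
    diff = diffuse * max(np.dot(n, l), 0.0)
    spec = specular * max(np.dot(r, v), 0.0) ** shininess
    return np.clip((ambient + diff) * base_color + spec, 0.0, 1.0)

color = phong_shade(normal=np.array([0.0, 0.0, 1.0]),
                    light_dir=np.array([0.5, 0.5, 1.0]),
                    view_dir=np.array([0.0, 0.0, 1.0]),
                    base_color=np.array([0.8, 0.2, 0.2]))
print(color)
```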

[Figure: flat-shaded Utah Teapot vs. Phong-shaded Utah Teapot]


• Global scene illumination: The inter-reflections and shadows cast by objects on
each other.

• Radiosity illumination: Results in a more realistic looking scene.

[Figure: scene rendered without and with radiosity]

• Texture mapping: It is done in the rasterizer phase of the graphics pipeline, by assigning texture-space coordinates to polygon vertices (or spline control points), then mapping these to pixel coordinates.
• Textures increase scene realism.
• Textures provide better 3D spatial cues (they are perspective-transformed). They reduce the number of polygons in the scene, which increases the frame rate (example: tree models).

[Figure: textured room image for increased realism]

• How to create textures: Texture “libraries” of cars, people, construction materials, etc. are available online. Custom textures can be created from scanned photographs or with an interactive paint program that produces bitmaps.

• Texture minification: Uses various “filters” to approximate the color of a pixel: nearest neighbor (the texel closest to the pixel center is selected), bilinear interpolation, etc. (A minimal filtering sketch follows below.)
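A minimal sketch in Python with NumPy (an illustrative assumption; the notes do not specify an implementation) comparing nearest-neighbor and bilinear texture sampling:

```python
import numpy as np

def sample_nearest(texture, u, v):
    """Nearest-neighbor filter: pick the texel closest to (u, v) in [0,1]^2."""
    h, w = texture.shape[:2]
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y, x]

def sample_bilinear(texture, u, v):
    """Bilinear filter: weighted average of the four surrounding texels."""
    h, w = texture.shape[:2]
    x = u * w - 0.5
    y = v * h - 0.5
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    x0, x1 = np.clip([x0, x0 + 1], 0, w - 1)
    y0, y1 = np.clip([y0, y0 + 1], 0, h - 1)
    top = (1 - fx) * texture[y0, x0] + fx * texture[y0, x1]
    bot = (1 - fx) * texture[y1, x0] + fx * texture[y1, x1]
    return (1 - fy) * top + fy * bot

tex = np.random.rand(4, 4, 3)           # tiny RGB texture
print(sample_nearest(tex, 0.3, 0.7))
print(sample_bilinear(tex, 0.3, 0.7))
```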

• Multi-texturing: Several texels can be overlaid on one pixel. A texture blending cascade is made up of a series of texture stages.

• Multi-texturing for bump mapping: Lighting effects caused by irregularities on the object surface are simulated through “bump mapping”, which encodes the surface irregularities as textures. There is no change in model geometry and no added computation at the geometry stage.

• Multi-texturing for lighting: Several texels can be overlaid on one pixel. One application is more realistic lighting. Polygonal (per-vertex) lighting is real-time but requires many polygons (triangles) for a realistic appearance.

• Multi-texturing (texture blending): Realistic-looking lighting can be done with 2D textures called “light maps”. Light maps are precomputed, so they are not suited to dynamic objects (they need to be recomputed when an object moves).

Behavior Simulation / Modeling:

• Until now our discussion has been limited to the mathematical modeling of object
appearance, kinematics, and physical properties. Whenever objects interacted, it
was assumed that one was controlled by the user. It is also possible to model object
behavior that is independent of the user's actions. This becomes critical in very large
simulation environments, when users cannot possibly control all the interactions
that are taking place.
• Consider the modeling of a virtual office, for example. Such an office could have an
automatic sliding door, a clock, a desk calendar, as well as furniture. The time
displayed by the clock and the date shown on the current calendar page can be
updated by accessing the VR engine system time. Every time the user enters the
virtual office, the sliding door opens and some of the information displayed by the clock, calendar, and window thermometer changes. However, direct user input is limited in this example to just changing the field of view of the simulation.
• This example illustrates one method of modeling object behavior by accessing external sensors (system time, and proximity sensors for the sliding door). This provides virtual objects with a degree of independence from the user's actions, a degree of "intelligence". Many current simulations also model virtual humans, called agents.
• Definition: A virtual human (or agent) is a 3D character that has a human behavior.
Groups of such agents are called crowds and have crowd behavior.

• Fully autonomous agents, such as a virtual football player in a yellow uniform, need to perceive their environment (in this case, the opponent) in order to take appropriate actions.
• The behavior model of the agent includes emotions, behavior rules, and actions.
• Agent behavior has a hierarchy, with reflex behavior at the lowest level.
• A reflex behavior could be to tackle the opponent every time the agent sees him.
• Emotion-based behavior filters perceptual data through likes, dislikes, anger, or
fear.
• It is thus at a higher level than simple reflex behavior. As a consequence, two agents interpreting the same sensory data may take different actions in the simulation (a minimal rule-based sketch follows below).
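As a rough illustration only (the names and rules below are hypothetical, not from the notes), a reflex rule and an emotion filter might be modeled like this in Python:

```python
from dataclasses import dataclass

@dataclass
class Percept:
    opponent_visible: bool
    opponent_distance: float  # meters

@dataclass
class Agent:
    fear: float  # 0.0 (calm) .. 1.0 (terrified), part of the emotion model

    def reflex_action(self, p: Percept) -> str:
        # Reflex level: react directly to the percept.
        return "tackle" if p.opponent_visible else "hold_position"

    def emotional_action(self, p: Percept) -> str:
        # Emotion level: the same percept is filtered through fear,
        # so two agents with different emotions act differently.
        if p.opponent_visible and self.fear > 0.7:
            return "retreat"
        return self.reflex_action(p)

percept = Percept(opponent_visible=True, opponent_distance=3.0)
print(Agent(fear=0.2).emotional_action(percept))  # tackle
print(Agent(fear=0.9).emotional_action(percept))  # retreat
```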

Physically Based Simulation:


• Physically based simulation / animation is an area of interest within computer
graphics concerned with the simulation of physically plausible behaviors at
interactive rates. Advances in physically based animation are often motivated by
the need to include complex, physically inspired behaviors in video games,
interactive simulations, and movies.
• Physically based animation is now common in movies and video games, and many
techniques were pioneered during the development of early special effects scenes
and game engines.
• In games such as Angry Birds, physically based animation is itself the primary
game mechanic and players are expected to interact with or create physically
simulated systems in order to achieve goals.

❖ Rigid Body Simulation:


• Simplified rigid body physics is relatively cheap and easy to implement, which is
why it appeared in interactive games and simulations earlier than most other
techniques.
• Rigid bodies are assumed to undergo no deformation during simulation, so rigid-body motion between time steps can be described as a translation plus a rotation (see the sketch below).
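A minimal 2D sketch in Python (an illustrative assumption, not code from the notes) that integrates a rigid body's translation and rotation over time steps:

```python
import math

class RigidBody2D:
    def __init__(self, x, y, angle, vx, vy, omega):
        self.x, self.y, self.angle = x, y, angle       # pose
        self.vx, self.vy, self.omega = vx, vy, omega   # linear / angular velocity

    def step(self, dt, fx=0.0, fy=0.0, torque=0.0, mass=1.0, inertia=1.0):
        # Semi-implicit Euler: update velocities from forces, then the pose.
        self.vx += fx / mass * dt
        self.vy += fy / mass * dt
        self.omega += torque / inertia * dt
        self.x += self.vx * dt          # translation
        self.y += self.vy * dt
        self.angle += self.omega * dt   # rotation (no deformation)

body = RigidBody2D(0, 0, 0, 1.0, 0.0, math.pi / 4)
for _ in range(60):                     # one second at 60 steps per second
    body.step(dt=1 / 60, fy=-9.81)      # gravity only
print(round(body.x, 2), round(body.y, 2), round(body.angle, 2))
```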

❖ Soft Body Simulation:


• Soft bodies can easily be implemented using spring-mesh systems.
• Spring-mesh systems are composed of individually simulated particles that are attracted to each other by simulated spring forces and experience resistance from simulated dampers (a minimal sketch follows below).
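A minimal one-spring sketch in Python (an illustrative assumption, not from the notes) of the spring-plus-damper force between two particles:

```python
import numpy as np

def spring_damper_force(p_a, p_b, v_a, v_b, rest_length, k=50.0, c=2.0):
    """Force on particle A from a spring to particle B, with damping."""
    delta = p_b - p_a
    dist = np.linalg.norm(delta)
    direction = delta / dist
    spring = k * (dist - rest_length) * direction          # Hooke's law
    damper = c * np.dot(v_b - v_a, direction) * direction  # damp relative motion along the spring
    return spring + damper

# Two particles connected by one spring, integrated with semi-implicit Euler.
pos = np.array([[0.0, 0.0], [1.5, 0.0]])   # rest length is 1.0, so it starts stretched
vel = np.zeros((2, 2))
dt, mass = 0.01, 1.0
for _ in range(200):
    f = spring_damper_force(pos[0], pos[1], vel[0], vel[1], rest_length=1.0)
    vel[0] += f / mass * dt
    vel[1] -= f / mass * dt
    pos += vel * dt
print(np.linalg.norm(pos[1] - pos[0]))  # oscillates toward the rest length of 1.0
```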

❖ Fluid Simulation:
• Computational fluid dynamics can be expensive, and interactions between multiple
fluid bodies or with external objects/forces can require complex logic to evaluate.
• Fluid simulation is generally achieved in video games by simulating only the height
of bodies of water to create the effect of waves, ripples, or other surface features.
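A minimal height-field water sketch in Python (an illustrative assumption; games use many variants) in which each water column's height is pulled toward its neighbors' average, producing ripples:

```python
import numpy as np

def step_heightfield(height, velocity, dt=0.016, wave_speed=4.0, damping=0.99):
    """Advance a 1D column-height water model by one time step."""
    # Each column accelerates toward the average height of its two neighbors.
    neighbors = (np.roll(height, 1) + np.roll(height, -1)) / 2.0
    velocity += wave_speed * (neighbors - height) * dt
    velocity *= damping                 # energy loss so ripples die out
    height += velocity * dt
    return height, velocity

h = np.zeros(64)
v = np.zeros(64)
h[32] = 1.0                             # initial splash in the middle
for _ in range(300):
    h, v = step_heightfield(h, v)
print(h.min(), h.max())                 # the ripple has spread and decayed
```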

❖ Particle Systems:
• Particle systems are an extremely popular technique for creating visual effects in
movies and games because of their ease of implementation, efficiency, extensibility,
and artist control.
• The update cycle of a particle system usually consists of three phases: generation, simulation, and extinction.
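A minimal particle-system update loop in Python (illustrative; the names are assumptions) showing the three phases named above:

```python
import random

particles = []  # each particle is a dict: position, velocity, remaining lifetime

def update(dt, emit_per_step=5):
    # 1. Generation: emit new particles at the origin with random velocities.
    for _ in range(emit_per_step):
        particles.append({
            "pos": [0.0, 0.0],
            "vel": [random.uniform(-1, 1), random.uniform(1, 3)],
            "life": random.uniform(0.5, 2.0),
        })
    # 2. Simulation: integrate motion (gravity) and age every particle.
    for p in particles:
        p["vel"][1] -= 9.81 * dt
        p["pos"][0] += p["vel"][0] * dt
        p["pos"][1] += p["vel"][1] * dt
        p["life"] -= dt
    # 3. Extinction: remove particles whose lifetime has expired.
    particles[:] = [p for p in particles if p["life"] > 0.0]

for _ in range(120):        # two seconds at 60 updates per second
    update(dt=1 / 60)
print(len(particles))       # steady-state particle count
```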

❖ Flocking:
• In physically based simulation, flocking refers to a technique that models the
complex behavior of birds, schools of fish, and swarms of insects using virtual
forces.
• These virtual forces simulate the tendency of flock members to align their velocities, avoid collisions and crowding, and move toward the center of the group.
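A minimal boids-style sketch in Python (illustrative; the parameters and names are assumptions) computing the three virtual forces for one flock member:

```python
import numpy as np

def flocking_force(i, positions, velocities, radius=2.0,
                   w_align=1.0, w_separate=1.5, w_cohere=1.0):
    """Virtual steering force on boid i: alignment, separation, cohesion."""
    diffs = positions - positions[i]
    dists = np.linalg.norm(diffs, axis=1)
    near = (dists < radius) & (dists > 0)          # neighbors within the radius
    if not near.any():
        return np.zeros(2)
    align = velocities[near].mean(axis=0) - velocities[i]          # match neighbors' velocity
    separate = -(diffs[near] / dists[near, None]**2).sum(axis=0)   # avoid collisions / crowding
    cohere = positions[near].mean(axis=0) - positions[i]           # move toward the group center
    return w_align * align + w_separate * separate + w_cohere * cohere

# Tiny flock of 10 boids; compute the steering force on boid 0.
rng = np.random.default_rng(0)
pos = rng.uniform(-1, 1, (10, 2))
vel = rng.uniform(-1, 1, (10, 2))
print(flocking_force(0, pos, vel))
```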

Unit-3: Interactive Techniques in VR

Body Track:

• Full-Body Tracking is often considered the Holy Grail for the virtual reality experience.
• The ability to map your real body movements onto an avatar greatly enhances immersion in VR and provides countless possibilities for new behaviors.
• The most common way to achieve tracking is by attaching special markers to the body that are detected by cameras.
• The more markers are placed, the more accurate the mapping of the avatar's movement. There are two commonly used configurations.
• The first configuration uses 6 markers placed at the head (headset), hands (most often the controllers), belt, and ankles.
• The second one adds additional markers on knees and elbows for a better bending
experience.
• To solve for the position and orientation of the avatar, the system uses inverse kinematics (a minimal two-bone IK sketch is given at the end of this section).
• The easiest way to enjoy full-body tracking is by adding Vive markers.
• Those markers are easy to set up and work directly with SteamVR or any headset that uses base stations.
• Because it is relatively cheap, this solution is used the most. It provides good tracking and works with devices already owned by most VR headset owners.
• It’s the go-to choice for non-industrial full-body tracking. Pricing is around $115 per marker with a strap. Assuming the user already owns a headset with controllers, at least three markers are needed to make full use of full-body tracking.
• OptiTrack is a more industrial approach to full-body tracking. This solution stands out for its high quality, fluidity, and very low tracking latency.
• The general principle of operation remains unchanged: multiple cameras track markers in the space.
• There are several differences between the OptiTrack and Vive solutions.
• The first is that OptiTrack can track multiple objects in much larger areas, which makes it perfect for multiplayer applications or for capturing precise animations of many objects.
• The second is that OptiTrack supports two kinds of markers: active and passive.
• Active markers work exactly like Vive’s. Passive ones are much cheaper and do not need a battery, but they require unique patterns so that the cameras can distinguish objects from each other.
• The third difference is that OptiTrack requires a specialized room with a dedicated camera installation.
• Additionally, the OptiTrack solution requires an external object-tracking program called Motive for data processing.
• In summary, the OptiTrack solution is neither the cheapest nor the easiest to develop for, but it provides exceptional quality.
• https://www.youtube.com/watch?v=FhiAFo9U_sM
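As a rough illustration of the inverse-kinematics step mentioned above (a hypothetical Python sketch; production full-body IK solvers are far more elaborate), analytic two-bone IK for a single elbow or knee:

```python
import math

def two_bone_ik(target_x, target_y, upper_len=0.3, lower_len=0.3):
    """Analytic 2D two-bone IK (e.g., shoulder-elbow-wrist): returns the two
    joint angles in radians that place the end effector at the target."""
    dist = math.hypot(target_x, target_y)
    dist = max(min(dist, upper_len + lower_len - 1e-6), 1e-6)  # clamp to reachable range
    # Law of cosines gives the interior elbow angle; the bend is pi minus that.
    cos_elbow = (upper_len**2 + lower_len**2 - dist**2) / (2 * upper_len * lower_len)
    elbow_bend = math.pi - math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder angle: direction to the target minus the offset caused by the bend.
    cos_offset = (upper_len**2 + dist**2 - lower_len**2) / (2 * upper_len * dist)
    shoulder = math.atan2(target_y, target_x) - math.acos(max(-1.0, min(1.0, cos_offset)))
    return shoulder, elbow_bend

print(two_bone_ik(0.35, 0.25))   # joint angles for a reachable target
```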

Hand Gesture:

• Users taking part in VR experiences rely on a head-mounted display, which makes it challenging to navigate and walk through the experience. With the addition of hand-gesture interaction, virtual reality experiences are becoming even more immersive.
• Devices like joysticks and head orientation tracking are embedded into VR headsets
and are used to help a person walk through the virtual world — but they’re not fully
effective.
• There can be jolts to the view that are irregular and not only disrupt the experience
but can also cause motion sickness for the user.
• The portal method is a technique that has been used to reduce motion sickness,
allowing users to jump from one location to another. The movement isn’t natural, so it
can also cause some disorientation.
• Researchers have experimented with the simple double hand-gesture interaction
method and have found it to be beneficial to users in a VR experience. It allows the
user to be better immersed and comfortable without motion sickness.
• The hand gestures the researchers created are natural, such as raising and opening the left hand to move the avatar forward. Tracing the motions in the air helps combine the actions of moving and turning in VR, and it is more natural and easier to learn than other VR navigation methods.

3D Manus:

• Manus VR is the first virtual reality glove input device created specifically for general consumers. The team behind it wants to bridge our physical world with virtual reality and allow users to experience a never-before-seen level of immersion.
• Manus VR uses an assortment of sensors to track hand movement in real time and uses the captured data to faithfully reproduce the movement in virtual reality.
• It operates completely wirelessly and comes with an open-source SDK that developers
can use to integrate the hand-tracking functionality into their applications and games.
• https://www.manus-vr.com/

Object Grasp:

• Grasping is one of the fundamental actions we perform to interact with objects in real
environments, and in the real world we rarely experience difficulty picking up objects.
• Grasping plays a fundamental role for interactive virtual reality (VR) systems that are
increasingly employed not only for recreational purposes, but also for training in
industrial contexts, in medical tasks, and for rehabilitation protocols.
• To ensure the effectiveness of such VR applications, we must understand whether the
same grasping behaviors and strategies employed in the real world are adopted when
interacting with objects in VR.
• Grasps are visually realistic because the hand is automatically fitted to the object shape from a position and orientation determined by the user with the VR handheld controllers (e.g., Oculus Touch motion controllers).
• The grasping system enables interaction with different objects regardless of their geometries.

End of Unit-3
