CG Unit 6 notes 3

Rendering Pipeline
 The word rendering is generally associated with graphics. It represents the process of
converting the 3D geometric models populating a virtual world into the 2D scene
presented to the user.
 Pipelining the rendering process means dividing it into stages and assigning these
stages to different hardware resources.

Graphics Rendering Pipeline

 The first stage is the application stage, which is performed entirely in software
by the CPU. It reads a geometry database along with user input from devices such
as mice, trackballs, trackers, or sensing gloves. In response to user input, the
application stage can change the view of the scene or the orientation of an
object (such as a virtual hand).
 The results of the application stage are passed to the geometry stage, which may be
implemented in software or hardware. This stage performs model transformations
(translation, rotation, scaling, etc.), lighting computation, scene projection, clipping,
and mapping.
 The lighting substage computes surface color based on the type and number of light
sources in the scene, the lighting model, the surface material properties, atmospheric
effects such as fog, etc. Surfaces turned away from the light appear darker, and the lit
scene thus looks more realistic.
 The final stage in the graphics pipeline is the rasterizing stage, which is implemented
in hardware for speed. This stage converts the vertex information output by the geometry
stage (such as color and texture) into the pixel information required by the video display.
 One important function of the rasterizing stage is to perform antialiasing in order to
smooth out the jagged appearance of polygon edges. Antialiasing divides each pixel
into subpixel regions, each with its own assigned color. The subpixel colors are then
averaged to determine the R, G, B components of the color given to the displayed pixel.
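
A minimal Python sketch of this subpixel-averaging idea (the 2x2 sample grid and the shade_subpixel function are assumptions chosen for illustration, not the pipeline's actual shader):

```python
import numpy as np

def shade_subpixel(x, y):
    """Hypothetical scene shading: returns the (R, G, B) color at
    subpixel coordinates (x, y). Stands in for the rasterizer's output."""
    return np.array([1.0, 0.0, 0.0]) if x + y < 1.0 else np.array([0.0, 0.0, 1.0])

def antialiased_pixel(px, py, grid=2):
    """Supersample one pixel on a grid x grid subpixel pattern and
    average the samples, as described for the rasterizing stage above."""
    samples = [shade_subpixel(px + (i + 0.5) / grid, py + (j + 0.5) / grid)
               for i in range(grid) for j in range(grid)]
    return np.mean(samples, axis=0)  # averaged R, G, B for the display pixel

print(antialiased_pixel(0.0, 0.0))  # a blend of the two sample colors: [0.5 0. 0.5]
```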
Haptic Rendering Pipeline

 Modern VR simulation systems implement supplementary sensory modalities,
such as haptics, that need to meet similar real-time constraints. This can be achieved
through a multistage haptic rendering pipeline.
 In the first stage of the haptic rendering pipeline, the physical characteristics of 3D
objects are loaded from the database. These include surface compliance,
smoothness, weight, surface temperature, etc. The first stage of the pipeline also
performs collision detection to determine which objects are in contact, if any. Unlike the
graphics pipeline, only the colliding objects in the scene are passed to the
subsequent pipeline stage.
 The second stage of the haptic rendering pipeline computes the collision forces,
based on various simulation models. The simplest model is based on Hooke’s law, in
which the contact force has a spring-like dependence on the degree of surface
deformation. Other models add damping and friction forces and are more realistic,
but also more computationally intensive.
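
A minimal Python sketch of the Hooke's-law contact model just described (the stiffness value is an illustrative assumption):

```python
import numpy as np

def hooke_contact_force(penetration_depth, surface_normal, stiffness=500.0):
    """Spring-like contact force F = k * x along the surface normal.
    penetration_depth: how far the probe has deformed the surface (m)
    surface_normal: unit vector pointing out of the surface
    stiffness: spring constant k in N/m (illustrative value)."""
    if penetration_depth <= 0.0:
        return np.zeros(3)          # no contact, no force
    return stiffness * penetration_depth * np.asarray(surface_normal)

print(hooke_contact_force(0.002, [0.0, 1.0, 0.0]))  # [0. 1. 0.] -> 1 N upward
```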
 The more object contacts there are and the more complex the force shading, the higher
the chance of the pipeline becoming force limited. In this case the computation can be
optimized by using simpler force models. The second stage also performs force smoothing
and force mapping. Force smoothing adjusts the direction of the force vector in order to
avoid sharp transitions between polygonal surfaces, while force mapping projects the
computed force onto the characteristics of the particular haptic display system.
 The third and final stage of the haptic rendering pipeline is haptic texturing, which
provides part of the simulated touch response. Effects such as vibration or
temperature are computed, added to the force vector, and sent to the haptic
output display. Matching textures to the features of the haptic interface is important
and determines, for example, whether vibratory feedback can be rendered.
Modeling in Virtual Reality

 An important aspect of creating the virtual world is modelling. It is done after the
I/O devices have been mapped to the simulation scene, and it consists of developing
the object databases that populate the virtual world.
 This means modelling object shape, appearance, kinematic constraints, intelligent
behaviour, and physical characteristics (weight, inertia, hardness, etc.).

1. Geometric Modelling
 Geometric modelling is the term used to describe the shape and appearance of virtual
objects. The shape of an object refers to primitives such as rectangles, circles,
triangles, polygons, splines, etc., whereas its appearance refers to color, surface
pattern, surface texture, surface illumination, etc.
 Virtual Object Shape : A 3D surface is used to determine the shape of a virtual object.
Triangle meshes are most often used to compose the surfaces of virtual objects.

Triangular Mesh :

 Triangle meshes use shared vertices and are faster to render. As shown in the figure,
(x1,y1,z1) and (x2,y2,z2) are the vertices shared by triangles T1 and T2.
 Less memory will be required to store the model using triangle meshes. It will be
loaded faster by the rendering pipeline.
 Some architectures are optimized to process triangle meshes.
 Triangle meshes are convenient for geometric transformation and level of detail
optimization.
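
A small Python sketch of an indexed triangle mesh, showing how the shared vertices mentioned above are stored once and referenced by index (the two-triangle quad is an illustrative example):

```python
import numpy as np

# Four vertices stored once; the shared edge vertices appear only one time.
vertices = np.array([[0.0, 0.0, 0.0],   # v0
                     [1.0, 0.0, 0.0],   # v1  (shared by T1 and T2)
                     [1.0, 1.0, 0.0],   # v2  (shared by T1 and T2)
                     [0.0, 1.0, 0.0]])  # v3

# Each triangle is three indices into the vertex array.
triangles = np.array([[0, 1, 2],   # T1
                      [0, 2, 3]])  # T2

# 4 stored vertices instead of 6: the index buffer avoids duplicating
# shared vertices, which is where the memory saving comes from.
print(vertices[triangles])  # expands to the two triangles' corner coordinates
```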
Parametric Surface

 Parametric surfaces are determined by introducing a second parameter t such that
0 ≤ s, t ≤ 1. The points on a parametric surface are then given by the coordinates
x(s,t), y(s,t), and z(s,t).
 Linear functions are used for describing polygons, whereas higher-order functions
are used for describing parametric surfaces. Parametric surfaces therefore need
comparatively less storage while providing improved surface smoothness.
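
As a sketch of evaluating x(s,t), y(s,t), z(s,t), the following Python example uses a simple bilinear patch (the patch type and corner points are assumptions for illustration; real models typically use higher-order patches):

```python
import numpy as np

def bilinear_patch(s, t, corners):
    """Evaluate a bilinear parametric surface at (s, t), 0 <= s, t <= 1.
    corners: the four corner points P00, P10, P01, P11 of the patch.
    Returns the point (x(s,t), y(s,t), z(s,t)) on the surface."""
    p00, p10, p01, p11 = (np.asarray(c, dtype=float) for c in corners)
    return ((1 - s) * (1 - t) * p00 + s * (1 - t) * p10
            + (1 - s) * t * p01 + s * t * p11)

corners = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 1)]
print(bilinear_patch(0.5, 0.5, corners))  # centre of the patch: [0.5 0.5 0.25]
```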

Object Visual Appearance :

 After modelling the geometry of a virtual object, the next step is to illuminate the
scene so that the object becomes visible.
 The appearance of an object depends on
1. Type and placement of virtual light sources and object’s surface reflectivity
coefficient.
2. Surface texture.

Scene illumination :

 This determines the light intensities on the surface of the object.


 It is classified as,
1. Local illumination
2. Global illumination
 Local scene illumination treats the interactions between objects and light sources in
isolation, irrespective of the interdependencies between objects.
 Global scene illumination models the interreflections between objects and shadows
which results in a more realistic looking scene.
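
As an illustration of local illumination, a minimal Lambertian diffuse shading sketch in Python (the light intensity and reflectivity values are assumptions; interreflections and shadows are deliberately ignored, which is exactly what makes it local rather than global):

```python
import numpy as np

def lambert_diffuse(normal, light_dir, light_intensity, reflectivity):
    """Local illumination: intensity = reflectivity * light * max(0, N . L).
    Each surface point is lit independently of all other objects, so no
    interreflections or shadows are produced."""
    n = np.asarray(normal, dtype=float) / np.linalg.norm(normal)
    l = np.asarray(light_dir, dtype=float) / np.linalg.norm(light_dir)
    return reflectivity * light_intensity * max(0.0, float(np.dot(n, l)))

print(lambert_diffuse([0, 1, 0], [1, 1, 0], 1.0, 0.8))  # ~0.566
```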

Texture Mapping :

 This method improves image realism without using additional surface polygons. It
is a technique used during the rasterization stage of the graphics pipeline to modify the
surface properties (color, specular reflection, pixel normals, etc.) of the object model.
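
A hedged Python sketch of the texture lookup performed during rasterization (the nearest-neighbour filtering and checkerboard texture are illustrative assumptions):

```python
import numpy as np

def sample_texture(texture, u, v):
    """Nearest-neighbour texture lookup: map the (u, v) coordinates
    attached to a surface point into the texel grid.
    texture: H x W x 3 array of RGB texels; u, v in [0, 1]."""
    h, w = texture.shape[:2]
    x = min(int(u * w), w - 1)  # clamp to the texture edge
    y = min(int(v * h), h - 1)
    return texture[y, x]

checker = np.indices((8, 8)).sum(axis=0) % 2          # 8x8 checkerboard pattern
texture = np.stack([checker] * 3, axis=-1).astype(float)
print(sample_texture(texture, 0.1, 0.9))  # texel that colors this surface pixel
```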
2. Kinematic Modeling

 Kinematic modelling determines the location of 3D objects with respect to a
world coordinate system and their motion in the virtual world.
 The aspects of kinematic modelling are:
1. Object kinematics governed by parent-child hierarchical relations; the
motion of a parent object affects that of its children.
2. The way the world is viewed, i.e., the motion of a virtual camera.
3. The transformation and projection of the camera image onto a 2D display window
to provide visual feedback to the user.
 4 x 4 homogeneous transformation matrices are used to express transformations
on objects such as translations, rotations, scaling, etc.
 A homogeneous transformation matrix is given by,

T = | R(3x3)  P(3x1) |
    | 0  0  0    1   |

where R(3x3) is the rotation submatrix expressing the orientation of the system of
coordinates Q with respect to the system of coordinates S, and P(3x1) is the vector
expressing the position of the origin of system Q with respect to the origin of the
system of coordinates S.
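
A short Python sketch of building T from R and P and applying it to a point (the specific rotation and position are illustrative assumptions):

```python
import numpy as np

def homogeneous(R, P):
    """Build the 4 x 4 homogeneous matrix T = [[R, P], [0 0 0 1]]
    from a 3x3 rotation R and a 3x1 position vector P."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = P
    return T

# 90-degree rotation about z, with the origin of Q at (1, 2, 0) in S.
Rz = np.array([[0, -1, 0],
               [1,  0, 0],
               [0,  0, 1]], dtype=float)
T = homogeneous(Rz, [1, 2, 0])

point_in_Q = np.array([1, 0, 0, 1])   # homogeneous coordinates
print(T @ point_in_Q)                 # the same point expressed in S: [1. 3. 0. 1.]
```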

Transformation Invariants :

 Often one of the objects seen in the virtual scene is a 3D hand, whose position
and orientation are mapped to the position and orientation of the 3D tracker attached
to the sensing glove worn by the user. The 3D position of the tracker receiver relative to
the source is given by the time-dependent homogeneous transformation matrix
T(source→receiver)(t).
 Assuming the source is fixed, its position in the world system of
coordinates is given by the matrix T(w→source).
 The virtual hand can be moved in the virtual world by multiplying all its vertices by an
overall transformation matrix:
Vi(t) = T(w→source) T(source→receiver)(t) Vi
where Vi are the vertex coordinates in the hand system of coordinates, and i = 1, ..., n.
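
A Python sketch of this composition (the fixed source pose and receiver offset are illustrative assumptions):

```python
import numpy as np

def move_hand(T_w_source, T_source_receiver, hand_vertices):
    """Vi(t) = T(w->source) @ T(source->receiver)(t) @ Vi for all hand vertices.
    hand_vertices: n x 4 array of homogeneous vertex coordinates in the
    hand (receiver) system of coordinates."""
    T = T_w_source @ T_source_receiver    # overall transformation matrix
    return (T @ hand_vertices.T).T        # world coordinates of every vertex

# Fixed source 1 m above the world origin; receiver 0.5 m in front of the source.
T_w_source = np.eye(4); T_w_source[2, 3] = 1.0
T_source_receiver = np.eye(4); T_source_receiver[0, 3] = 0.5

hand = np.array([[0.0, 0.0, 0.0, 1.0]])  # a single fingertip vertex
print(move_hand(T_w_source, T_source_receiver, hand))  # [[0.5 0.  1.  1. ]]
```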
Object Hierarchies :

 Object hierarchies define groups of objects which move together as a whole and
whose parts can also move independently.
 In VR system, a hierarchy implies at least two levels of virtual objects. The highest
level objects are also called the parent objects and the lower objects are referred to as
child objects.
 The motion of a parent object causes all its children to move. However, a child object
can move independently without affecting the position of the parent object.
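
A minimal Python sketch of this parent-child rule, in which a child's world transform composes the parent's world transform with its own local transform (the car/wheel pairing is an illustrative assumption):

```python
import numpy as np

class SceneNode:
    """Minimal two-level hierarchy: parent motion propagates to children,
    while a child's local transform does not affect its parent."""
    def __init__(self, local, parent=None):
        self.local = local          # 4x4 transform relative to the parent
        self.parent = parent

    def world(self):
        if self.parent is None:     # a parent (top-level) object
            return self.local
        return self.parent.world() @ self.local

def translation(x, y, z):
    T = np.eye(4); T[:3, 3] = [x, y, z]; return T

car = SceneNode(translation(10, 0, 0))           # parent object
wheel = SceneNode(translation(1, -0.5, 0), car)  # child object

car.local = translation(20, 0, 0)  # moving the parent moves the child too
print(wheel.world()[:3, 3])        # child keeps its offset: [21. -0.5  0.]
```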

Viewing the Three-Dimensional World

 The first task in the geometry stage of the rendering pipeline is mapping the virtual
objects from the world system of coordinates to the camera system of coordinates.
 The next consecutive stages are lighting, perspective projection, clipping and screen
mapping.
 The graphics pipeline processes only what the camera sees irrespective of the virtual
world.
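
A brief Python sketch of this world-to-camera mapping, implemented here by inverting the camera's world pose (the camera placement is an illustrative assumption):

```python
import numpy as np

def world_to_camera(T_w_camera, world_points):
    """Map points from the world system of coordinates into the camera
    system of coordinates by inverting the camera's world transform."""
    T_camera_w = np.linalg.inv(T_w_camera)
    return (T_camera_w @ world_points.T).T

# Camera placed 5 m along world z, with identity orientation.
T_w_camera = np.eye(4); T_w_camera[2, 3] = 5.0

pts = np.array([[0.0, 0.0, 0.0, 1.0]])   # a world-space vertex
print(world_to_camera(T_w_camera, pts))  # [[0. 0. -5. 1.]] in camera space
```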
3. Physical Modelling

 More realism can be achieved in the virtual world model by using physical modelling
together with object behaviour modelling.
 Physical modelling is the integration of an object’s physical characteristics such as
inertia, surface roughness, compliance (hard or soft), deformation mode, etc.

Collision Detection :

 The first stage of physical modelling (haptic rendering) is collision detection. It
determines whether two (or more) objects are in contact with each other. Only the
objects that collide are processed by the haptic rendering pipeline, hence this can be
considered a form of haptic clipping.
 Collision detection can be classified as
1. Approximate
2. Exact
 Approximate collision detection is also called bounding-box collision detection. It
approximates objects with simplified 3D volumes (bounding boxes).
 Bounding box is a prism which encloses all the vertices of a given 3D object.
 Bounding boxes are classified as :
1. Oriented
2. Axis-aligned
 An Oriented Bounding Box (OBB) is a prism aligned with the object’s major axes,
whose orientation changes dynamically as the object rotates.
 An Axis-Aligned Bounding Box (AABB) is a prism aligned with the world system of
coordinates.
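
A Python sketch of approximate collision detection with AABBs (the hand/ball boxes are illustrative assumptions):

```python
def aabb_overlap(box_a, box_b):
    """Approximate collision detection with axis-aligned bounding boxes.
    Each box is ((min_x, min_y, min_z), (max_x, max_y, max_z)) in world
    coordinates; two AABBs collide iff they overlap on every axis."""
    (a_min, a_max), (b_min, b_max) = box_a, box_b
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

hand = ((0.0, 0.0, 0.0), (0.2, 0.2, 0.2))
ball = ((0.1, 0.1, 0.1), (0.5, 0.5, 0.5))
print(aabb_overlap(hand, ball))  # True -> pass this pair to haptic rendering
```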

Surface Deformation :

 Collision detection is followed by collision response. If the objects in contact are
nonrigid, then one form of collision response is surface deformation. Surface
deformation changes the 3D object geometry interactively and needs to be
coordinated with the graphics pipeline.
 If the object is modelled using polygons, then the surface deformation is done
directly, through vertex manipulation.
 If the object is modelled by parametric surfaces, then the deformation is done
indirectly by modifying the position of control points surrounding the surface.
 An extreme case of surface deformation is surface cutting.
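
A Python sketch of direct surface deformation through vertex manipulation, as described above (the linear falloff profile, radius, and depth are illustrative assumptions):

```python
import numpy as np

def deform_surface(vertices, contact_point, depth, radius=0.1):
    """Direct surface deformation by vertex manipulation: push vertices
    near the contact point inward, with the displacement falling off
    linearly with distance from the contact."""
    vertices = np.asarray(vertices, dtype=float)
    d = np.linalg.norm(vertices - contact_point, axis=1)
    weight = np.clip(1.0 - d / radius, 0.0, 1.0)  # 1 at contact, 0 at radius
    vertices[:, 2] -= depth * weight              # push along -z
    return vertices

verts = [[0.0, 0.0, 0.0], [0.05, 0.0, 0.0], [0.5, 0.0, 0.0]]
print(deform_surface(verts, np.array([0.0, 0.0, 0.0]), depth=0.02))
```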
Force Computation :

 When users interact with 3D object surfaces, they should feel reaction forces. The
haptic rendering pipeline computes these forces, which are then sent to the haptic
display as force feedback to the user.
 Force computation takes into account:
1. The type of surface contact
2. The kind of surface deformation
3. The object’s physical and kinematic characteristics
 Force computation is performed according to the type of object being haptically
simulated. It varies for different types of objects: elastic virtual objects, plastic
virtual objects, and virtual walls (virtual objects that are neither elastic nor plastic).
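
A hedged Python sketch of a classic virtual-wall force model with spring and damper terms (the gains are illustrative assumptions; the notes above only name the object categories):

```python
def virtual_wall_force(x, v, k=800.0, b=2.0):
    """One-dimensional virtual wall at x = 0: when the probe penetrates
    (x < 0), apply a spring force plus damping; otherwise no force.
    x: probe position (m), v: probe velocity (m/s)
    k, b: illustrative stiffness (N/m) and damping (N*s/m) gains."""
    if x >= 0.0:
        return 0.0               # outside the wall: free motion
    return -k * x - b * v        # push the probe back out of the wall

print(virtual_wall_force(-0.005, 0.1))  # 4.0 - 0.2 = 3.8 N
```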

Force Smoothing and Mapping :

 Force shading changes the direction of the feedback force produced during
interactions with polygonal surfaces to simulate contact with smooth, curved surfaces.
 In force modelling it is assumed that the object surface is frictionless and that a single
point of interaction exists. Under these assumptions the direction of the resistive force
is that of the surface normal at the point of contact.
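
A Python sketch of force shading by interpolating the normals of two adjacent polygons (the linear interpolation weight and the example normals are illustrative assumptions):

```python
import numpy as np

def force_shading_normal(n1, n2, w):
    """Force shading: blend the normals of two adjacent polygons
    (weight w in [0, 1]) so the resistive force direction changes
    smoothly instead of jumping at the shared edge."""
    n = (1.0 - w) * np.asarray(n1) + w * np.asarray(n2)
    return n / np.linalg.norm(n)   # the force acts along this unit normal

n_left = [0.0, 1.0, 0.0]                              # first polygon's normal
n_right = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)      # neighbouring polygon's normal
print(force_shading_normal(n_left, n_right, 0.5))     # a smoothed in-between normal
```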

Haptic Texturing :

 Haptic texturing is the last stage of the haptic rendering pipeline.
 Just as graphics textures make an object look realistic, haptic textures improve the
realism of the physical model of the object surface.
 Haptic textures can also add new information to characterize an object as slippery,
cold, smooth, etc. Haptic textures can be cascaded to create new surface effects.
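
A Python sketch of one haptic texture effect: superimposing a vibration on the contact force vector (the amplitude and frequency values are illustrative assumptions):

```python
import numpy as np

def add_vibration_texture(force, normal, t, amplitude=0.3, freq=250.0):
    """Haptic texturing: add a small sinusoidal vibration to the contact
    force vector to simulate surface roughness.
    amplitude in N and freq in Hz are illustrative values."""
    ripple = amplitude * np.sin(2.0 * np.pi * freq * t)
    return np.asarray(force) + ripple * np.asarray(normal)

base_force = [0.0, 3.8, 0.0]   # e.g. the virtual-wall force computed earlier
print(add_vibration_texture(base_force, [0.0, 1.0, 0.0], t=0.001))  # [0. 4.1 0.]
```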
4. Behaviour Modelling

 In the interactions discussed so far, it was assumed that one of the objects was
controlled by the user. It is also possible to model object behaviour that is independent
of the user’s actions.
 A virtual human, also called an agent, is a 3D character that exhibits human
behaviour. A group of such agents is called a crowd and exhibits crowd behaviour.
 For example, consider the model of a virtual cabin. The cabin has an automatic sliding
door, a wall clock, a calendar, and a thermometer.
 The VR engine updates the clock and calendar displays, while an external temperature
sensor interfaced to the VR engine updates the thermometer display. When the user
enters the virtual cabin, the sliding door opens automatically, and the information
displayed by the clock, calendar, and thermometer is updated. This is an example of
modelling object behaviour by accessing external sensors. A virtual human (agent)
can also be modelled using simulation.
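
A sketch of the cabin example as a behaviour-update loop in Python (the sensor-reading stand-in, update rate, and door trigger are all assumptions for illustration):

```python
import datetime
import random
import time

def read_temperature_sensor():
    """Stand-in for the external temperature sensor interfaced to the VR engine."""
    return 21.0 + random.uniform(-0.5, 0.5)

def update_cabin(user_near_door):
    """One behaviour tick: the clock, calendar, and thermometer update
    independently of the user; only the sliding door reacts to the user."""
    now = datetime.datetime.now()
    print(f"clock/calendar: {now:%H:%M:%S %Y-%m-%d}  "
          f"thermometer: {read_temperature_sensor():.1f} C  "
          f"door: {'open' if user_near_door else 'closed'}")

for step in range(3):          # a few simulation frames
    update_cabin(user_near_door=(step == 2))
    time.sleep(0.1)
```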
