CG Unit 6 notes 3
Rendering Pipeline
The word rendering is generally associated with graphics. It refers to the process of
converting the 3D geometric models populating a virtual world into a 2D scene
presented to the user.
Pipelining the rendering process means dividing it into stages and assigning these
stages to different hardware resources.
The first stage is the application stage, which is performed entirely in software
by the CPU. It reads the geometry database along with user input from devices such
as mice, trackballs, trackers, or sensing gloves. In response to user input, the
application stage can change the view of the scene or the orientation of an
object (such as a virtual hand).
The results of the application stage are fed to the geometry stage, which may be
implemented in software or hardware. This stage performs model transformations
(translation, rotation, scaling, etc.), lighting calculations, perspective projection,
clipping, and screen mapping.
The lighting substage computes vertex colors based on the type and number of light
sources in the scene, the lighting model, surface material properties, atmospheric
effects such as fog, etc. Lit areas of the scene appear shaded and thus look more realistic.
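As a minimal sketch of such a lighting calculation, the diffuse term of Lambert's law can be computed per vertex as I = I_light * kd * max(0, N · L). The function name and parameters below are illustrative assumptions, not part of any real pipeline API:

```python
def lambert_diffuse(light_color, kd, normal, light_dir):
    """Per-vertex diffuse intensity: I = I_light * kd * max(0, N . L).
    All vectors are assumed unit length; colors are (r, g, b) in [0, 1].
    kd is the surface's diffuse reflectivity coefficient."""
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(c * kd * n_dot_l for c in light_color)

# A white light shining straight down onto an upward-facing vertex:
white = (1.0, 1.0, 1.0)
up = (0.0, 1.0, 0.0)
print(lambert_diffuse(white, 0.8, up, up))          # full contribution: (0.8, 0.8, 0.8)
print(lambert_diffuse(white, 0.8, up, (0.0, -1.0, 0.0)))  # light from below: black
```

Surfaces facing away from the light receive no diffuse contribution, which is what makes lit scenes appear shaded.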
The final stage in the graphics pipeline is the rasterizing stage, which is implemented
in hardware for speed. This stage converts the vertex information output by the
geometry stage (such as color and texture) into the pixel information required by the
video display.
One important function of the rasterizing stage is to perform antialiasing in order to
smooth out the jagged appearance of polygon edges. Antialiasing divides each pixel
into subpixel regions, each with its own assigned color. The subpixel colors are then
averaged to determine the resulting R, G, B values of the displayed pixel.
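This subpixel-averaging idea can be sketched as supersampling of a single display pixel. The `sample` callback, function name, and grid size below are assumptions for illustration, not part of any real rasterizer API:

```python
def antialias_pixel(sample, x, y, grid=4):
    """Supersample one display pixel: evaluate `sample` at a grid x grid
    set of subpixel centers and average the resulting R, G, B values."""
    total = [0.0, 0.0, 0.0]
    for i in range(grid):
        for j in range(grid):
            # Subpixel center inside the unit-square pixel at (x, y).
            sx = x + (i + 0.5) / grid
            sy = y + (j + 0.5) / grid
            r, g, b = sample(sx, sy)
            total[0] += r; total[1] += g; total[2] += b
    n = grid * grid
    return tuple(t / n for t in total)

# A hard black/white polygon edge at x = 0.5 crossing pixel (0, 0):
edge = lambda sx, sy: (1.0, 1.0, 1.0) if sx < 0.5 else (0.0, 0.0, 0.0)
print(antialias_pixel(edge, 0, 0))  # gray: half the subpixels are white
```

The edge pixel comes out gray instead of fully black or white, which is exactly the smoothing effect that hides jagged edges.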
Haptic Rendering Pipeline
An important aspect of the virtual world is modelling. This has to be done after
the I/O devices are mapped to the simulation scene. Once the I/O devices are mapped,
object databases are developed to populate the virtual world.
This means modelling object shape, appearance, kinematic constraint, intelligent
behaviour and physical characteristics (weight, inertia, hardness, etc.).
1. Geometric Modelling
Geometric modelling is the term used to describe the shape and appearance of virtual
objects. The shape of an object may be a rectangle, circle, triangle, polygon, spline,
etc., whereas its appearance covers color, surface pattern, surface texture, surface
illumination, etc.
Virtual Object Shape : A 3D surface is used to determine the shape of a virtual object.
Most often, triangle meshes are used to compose the surfaces of virtual objects.
Triangular Mesh :
After modelling the geometry of a virtual object, the next step is to illuminate the
scene so that the object becomes visible.
The appearance of an object depends on
1. The type and placement of virtual light sources and the object’s surface reflectivity
coefficient.
2. Surface texture.
Scene illumination :
Texture Mapping :
This method improves image realism without using additional surface polygons. It
is a technique applied during the rasterization stage of the graphics pipeline to modify
the surface properties (color, specular reflection, pixel normals, etc.) of the object model.
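A minimal sketch of the lookup step behind texture mapping: a (u, v) coordinate selects a texel from a stored image, replacing the polygon's flat color per pixel. The function name and texture layout are illustrative assumptions:

```python
def sample_texture(texture, u, v):
    """Nearest-neighbor texture lookup. `texture` is a row-major grid of
    (r, g, b) texels; (u, v) are texture coordinates in [0, 1]."""
    h = len(texture)
    w = len(texture[0])
    tx = min(int(u * w), w - 1)   # column index, clamped at the edge
    ty = min(int(v * h), h - 1)   # row index, clamped at the edge
    return texture[ty][tx]

# A 2x2 checkerboard applied per pixel instead of adding surface polygons:
B, W = (0.0, 0.0, 0.0), (1.0, 1.0, 1.0)
checker = [[B, W],
           [W, B]]
print(sample_texture(checker, 0.1, 0.1))  # top-left texel: black
print(sample_texture(checker, 0.9, 0.1))  # top-right texel: white
```

The checker pattern appears on the surface even though the underlying model is a single flat polygon, which is the realism gain the text describes.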
2. Kinematic Modeling
The position and orientation of a virtual object are described by a 4x4 homogeneous
transformation matrix, composed of a 3x3 rotation submatrix R and a 3x1 position
vector P:
T = | R(3x3)  P(3x1) |
    |  0  0  0    1  |
Transformation Invariants :
Many times one of the objects seen in the virtual scene is a 3D hand whose position
and orientation are mapped to the position and orientation of the 3D tracker attached
to the sensing glove worn by the user. The 3D position of the tracker receiver relative
to the source is given by the time-dependent homogeneous transformation matrix
T(sourcereceiver)(t).
Let’s assume that the source is fixed; then its position in the world system of
coordinates is given by the matrix T(wsource).
A virtual hand can be moved in the virtual world by multiplying all its vertices by an
overall transformation matrix:
Vi(t) = T(wsource) T(sourcereceiver)(t) Vi
where Vi are the vertex coordinates in the hand system of coordinates, and i = 1, …, n.
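The vertex update above can be sketched with plain 4x4 matrices. For brevity the sketch uses pure translations as stand-ins for full rotation-plus-translation transforms; the helper names and numeric values are illustrative assumptions:

```python
def matmul(a, b):
    """Multiply two 4x4 matrices (row-major lists of lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(m, v):
    """Apply a 4x4 homogeneous matrix to a vertex (x, y, z)."""
    x, y, z = v
    col = (x, y, z, 1.0)
    return tuple(sum(m[i][k] * col[k] for k in range(4)) for i in range(3))

def translation(px, py, pz):
    """Homogeneous matrix with R = identity and P = (px, py, pz)."""
    return [[1, 0, 0, px],
            [0, 1, 0, py],
            [0, 0, 1, pz],
            [0, 0, 0, 1]]

# T(wsource) is fixed; T(sourcereceiver)(t) changes every frame as the
# tracker moves. Their product maps hand vertices into world coordinates.
T_w_source = translation(10.0, 0.0, 0.0)
T_source_receiver = translation(0.0, 2.0, 0.0)   # tracker reading at time t
T = matmul(T_w_source, T_source_receiver)
print(transform(T, (1.0, 1.0, 1.0)))  # hand vertex mapped to (11.0, 3.0, 1.0)
```

Each frame only T(sourcereceiver)(t) is re-read from the tracker; the product with the fixed T(wsource) then moves every hand vertex at once.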
Object Hierarchies :
Object hierarchies define groups of objects which move together as a whole and
whose parts can also move independently.
In a VR system, a hierarchy implies at least two levels of virtual objects. The
highest-level objects are called parent objects and the lower objects are referred to as
child objects.
The motion of a parent object causes all its children to move. However, a child object
can move independently without affecting the position of the parent object.
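This parent/child relationship can be sketched as a small scene graph. To keep the sketch short, translation-only offsets stand in for full transformation matrices; the class and method names are illustrative assumptions:

```python
class Node:
    """Scene-graph node: a parent's transform applies to all its children,
    but a child's local transform does not affect its parent."""
    def __init__(self, offset=(0.0, 0.0, 0.0)):
        self.offset = offset      # local translation (stand-in for a matrix)
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def world_position(self, parent_pos=(0.0, 0.0, 0.0)):
        """Compose the parent's world position with this node's offset."""
        return tuple(p + o for p, o in zip(parent_pos, self.offset))

# A hand (parent) with one finger (child):
hand = Node(offset=(5.0, 0.0, 0.0))
finger = hand.add(Node(offset=(1.0, 0.0, 0.0)))

# Moving the parent moves the child along with it...
print(finger.world_position(hand.world_position()))  # (6.0, 0.0, 0.0)
# ...but moving the child alone leaves the parent where it is.
finger.offset = (2.0, 0.0, 0.0)
print(hand.world_position())                         # (5.0, 0.0, 0.0)
```

A 3D hand is a natural example: the palm is the parent, and each finger moves with the palm yet can also flex independently.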
The first task in the geometry stage of the rendering pipeline is mapping the virtual
objects from the world system of coordinates to the camera system of coordinates.
The next consecutive stages are lighting, perspective projection, clipping and screen
mapping.
The graphics pipeline processes only what the camera sees, irrespective of the rest of
the virtual world.
3. Physical Modelling
More realism can be added to the virtual world model by combining geometric
modelling with object behaviour.
Physical modelling is the integration of an object’s physical characteristics such as
inertia, surface roughness, compliance (hard or soft), deformation mode, etc.
Collision Detection :
Surface Deformation :
When users interact with 3D object surfaces, they should feel reaction forces. The
haptic rendering pipeline is used to compute these forces, which are then sent through
the haptic display as force feedback to the user.
Force computation takes into account :
1. The type of surface contact
2. The kind of surface deformation
3. The object’s physical and kinematic characteristics
The force computation is performed according to the type of object being haptically
simulated. It varies for different types of objects: elastic virtual objects, plastic
virtual objects, and virtual walls (virtual objects that are neither elastic nor plastic).
Force shading changes the direction of the feedback force produced during
interactions with polygonal surfaces to simulate contact with smooth, curved surfaces.
In force modelling it is assumed that the object surface is frictionless and that a single
point of interaction exists. Under these assumptions, the direction of the resistive force
is that of the surface normal at the point of contact.
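For a virtual wall, a common penalty-based sketch of this force computation applies F = k · d along the surface normal, where d is the penetration depth. The function name and stiffness value below are illustrative assumptions, not a definitive haptics implementation:

```python
def wall_force(position, wall_height=0.0, stiffness=500.0):
    """Penalty force for a frictionless virtual wall at y = wall_height.
    When the single interaction point penetrates the wall, push back
    along the surface normal (+y here) in proportion to the penetration
    depth: F = k * d. Outside the wall the force is zero."""
    depth = wall_height - position[1]
    if depth <= 0.0:
        return (0.0, 0.0, 0.0)            # no contact, no force
    return (0.0, stiffness * depth, 0.0)  # resistive force along the normal

print(wall_force((0.0, 0.01, 0.0)))   # above the wall: no force
print(wall_force((0.0, -0.01, 0.0)))  # 1 cm penetration, k = 500 N/m: 5 N along +y
```

Elastic and plastic objects need richer models (the force also depends on how the surface deforms), but the direction of the resistive force still follows the surface normal at the contact point.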
Haptic Texturing :
4. Behaviour Modelling
In the interactions discussed so far, it was assumed that one of the objects was
controlled by the user. It is also possible to model object behaviour that is
independent of the user’s actions.
A virtual human, also called an agent, is a 3D character that has human
behaviour. Groups of such agents are called crowds and have crowd behaviour.
For example, consider the model of a virtual cabin. The cabin has an automatic
sliding door, a wall clock, a calendar, and a thermometer.
The VR engine system will update clock and calendar details. The external
temperature sensor interfaced to the VR engine will update the temperature display of
the thermometer. When the user enters the virtual cabin, the sliding door opens
automatically and the information displayed by clock, calendar and thermometer will
be updated. This is an example of modelling object behaviour by accessing external
sensors. The virtual human (agent) can also be modelled using simulation.