CG Chapter4
by
Tesfamicael.W Arefaine
3D Object Representation
To produce realistic displays of scenes, we need to use
representations that accurately model object characteristics
Polygon and quadric surfaces provide precise descriptions for simple
Euclidean objects such as polyhedrons and ellipsoids.
Spline surfaces are used for structures with curved surfaces.
Representation schemes for solid objects are often divided into two
broad categories, although not all representations fall neatly into one
or the other of these two categories.
1 Boundary representations (B-reps) describe a 3D object as a set of
surfaces that separate the object interior from the environment.
2 Space-partitioning representations are used to describe interior
properties, by partitioning the spatial region containing an object into
a set of small, nonoverlapping, contiguous solids (usually cubes).
Polygon mesh
The most commonly used boundary representation for a 3D graphics
object is a set of surface polygons that enclose the object interior.
This set of surface polygons is called a polygon mesh.
A polygon representation for a polyhedron precisely defines the
surface features of the object.
But for other objects, surfaces are tessellated (tiled) to produce the
polygon-mesh approximation.
The polygon mesh approximation to a curved surface can be
improved by dividing the surface into smaller polygon faces.
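A polygon mesh is commonly stored as a vertex list plus faces that index into it. The sketch below (illustrative Python, not from the chapter; all names are made up) stores a unit cube this way and derives a face normal from the first three vertices of a face:

```python
# Minimal polygon-mesh sketch: a unit cube as a vertex list plus
# faces given as index loops into that list (names are illustrative).
vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),  # bottom (z = 0)
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),  # top    (z = 1)
]
# Each face is a counter-clockwise index loop when viewed from outside.
faces = [
    (0, 3, 2, 1),  # bottom
    (4, 5, 6, 7),  # top
    (0, 1, 5, 4),  # front
    (2, 3, 7, 6),  # back
    (1, 2, 6, 5),  # right
    (0, 4, 7, 3),  # left
]

def face_normal(face):
    """Un-normalized outward normal from the first three face vertices."""
    ax, ay, az = vertices[face[0]]
    bx, by, bz = vertices[face[1]]
    cx, cy, cz = vertices[face[2]]
    ux, uy, uz = bx - ax, by - ay, bz - az          # edge vector u = B - A
    vx, vy, vz = cx - ax, cy - ay, cz - az          # edge vector v = C - A
    return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

print(face_normal(faces[1]))  # top face -> (0, 0, 1)
```

Because faces share vertices through indices, refining the mesh (splitting faces into smaller polygons) only adds entries to the two lists.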
Quadric surfaces
Frequently used class of objects, described with second degree
equations.
Includes spheres, ellipsoids, tori,paraboloids, and hyperboloids.
Quadric surfaces, particularly spheres and ellipsoids, are common
elements of graphics scenes, and they are often available in graphics
packages as primitives from which more complex objects are
constructed.
1.Sphere
A spherical surface with radius r, centered at the coordinate origin, is
defined as the set of points (x, y, z) that satisfy the equation
x^2 + y^2 + z^2 = r^2
2.Ellipsoid
An ellipsoid surface can be described as an extension of a spherical
surface where the radii in 3 mutual perpendicular direction can have
different values.
(x/rx)^2 + (y/ry)^2 + (z/rz)^2 = 1
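The sphere and ellipsoid equations above are easy to test directly; a small Python check (illustrative, not from the chapter):

```python
def on_ellipsoid(x, y, z, rx, ry, rz, eps=1e-9):
    """True if (x, y, z) satisfies (x/rx)^2 + (y/ry)^2 + (z/rz)^2 = 1."""
    return abs((x / rx) ** 2 + (y / ry) ** 2 + (z / rz) ** 2 - 1.0) < eps

# A sphere is the special case rx = ry = rz = r.
print(on_ellipsoid(3, 0, 0, 3, 2, 1))  # True: point lies on the surface
print(on_ellipsoid(0, 0, 0, 3, 2, 1))  # False: interior point
```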
3.Torus
A doughnut shaped object surface
Created by revolving a circle in 3D space about an axis in the same
plane as the circle.
Bezier Curves
The blending functions are built from the binomial coefficient
C(n, k) = n! / (k! (n − k)!)
1 The curve passes through the first and the last control points:
P(0) = P0 and P(1) = Pn
2 The direction of the tangent vector at the end points P0 and Pn is the
same as that of the vector determined by the 1st and the last
segments P0 P1 and Pn−1 Pn of the guiding polyline.
3 The Bézier curve lies entirely within the convex hull of the guiding
polyline. This follows from the properties of the Bézier blending
functions: they are all positive and their sum is always 1.
sum_{k=0}^{n} BEZ_{k,n}(u) = 1
4 Bézier curves are suited for interactive design. They can be pieced
together so as to ensure continuous differentiability at their juncture
by letting the edges of two different guiding polylines that are adjacent
to the common end point be collinear.
Example: a quadratic Bézier curve with control points
(x0 , y0 ) = (1, 1) = P0
(x1 , y1 ) = (3, 6) = P1
(x2 , y2 ) = (5, 4) = P2
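Using the blending functions BEZ_{k,n}(u) = C(n, k) u^k (1 − u)^(n−k), the curve through the example control points can be evaluated with a short Python sketch (illustrative, not from the chapter):

```python
from math import comb  # comb(n, k) = n! / (k! (n - k)!)

def bezier_point(points, u):
    """Evaluate P(u) = sum_k BEZ_{k,n}(u) * P_k for 2D control points,
    with BEZ_{k,n}(u) = C(n, k) * u^k * (1 - u)^(n - k)."""
    n = len(points) - 1
    x = sum(comb(n, k) * u**k * (1 - u)**(n - k) * px
            for k, (px, _) in enumerate(points))
    y = sum(comb(n, k) * u**k * (1 - u)**(n - k) * py
            for k, (_, py) in enumerate(points))
    return (x, y)

ctrl = [(1, 1), (3, 6), (5, 4)]      # P0, P1, P2 from the example
print(bezier_point(ctrl, 0.0))       # (1.0, 1.0)  -> passes through P0
print(bezier_point(ctrl, 1.0))       # (5.0, 4.0)  -> passes through P2
print(bezier_point(ctrl, 0.5))       # (3.0, 4.25) -> a point inside the hull
```

The endpoint outputs confirm property 1 above, and every evaluated point stays inside the convex hull of the control points (property 3).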
Spline Curves
Refers to any smooth curve that passes through a given set of points
and can be expressed using polynomial functions.
The polynomial function and its first and second derivatives are
continuous across the various sections.
Cubic Splines:
Defined using cubic polynomials.
General formula: y=a0 + a1 x + a2 x 2 + a3 x 3
where a0 , a1 , a2 and a3 are constants and a3 ≠ 0
To find a0 , a1 , a2 and a3 we need boundary conditions: either
1 4 points that the cubic spline passes through, or
2 2 end points, a slope (y’) and a curvature (y”)
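For boundary condition 1, the four constants follow from solving a 4×4 linear system built from the four points. A sketch in plain Python (illustrative; `cubic_through` is a made-up helper, not a library call):

```python
def cubic_through(points):
    """Solve for (a0, a1, a2, a3) in y = a0 + a1*x + a2*x^2 + a3*x^3
    given four points (x, y) the cubic passes through.
    Plain Gaussian elimination with partial pivoting."""
    A = [[1.0, x, x**2, x**3] for x, _ in points]   # Vandermonde rows
    b = [float(y) for _, y in points]
    n = 4
    for col in range(n):
        # Pivot: swap in the row with the largest entry in this column.
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    a = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        a[r] = (b[r] - sum(A[r][c] * a[c] for c in range(r + 1, n))) / A[r][r]
    return a

# Points sampled from y = 2 + x - x^2 + 3x^3:
pts = [(0, 2), (1, 5), (2, 24), (-1, -3)]
print([round(v, 6) for v in cubic_through(pts)])  # [2.0, 1.0, -1.0, 3.0]
```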
Depth-buffer method
Commonly used image-space method
Compares surface depths at each pixel position on the projection
plane
Also called the z-buffer method
I Object depth is usually measured from the view plane along the z-axis
I Normalized coordinates: z values range from 0 at the back clipping
plane to zmax at the front clipping plane
Each surface of a scene is processed separately, one point at a time
across the surface
For each pixel position (x, y) on the view plane, object depths can
be compared by comparing their z-values.
If surface s1 has the smallest depth from the view plane, its
surface intensity value at (x, y) is saved.
Implementation
Two buffer areas are required
1 Depth buffer - used to store depth values for each (x,y) position as
surfaces are processed
2 Refresh buffer - stores the intensity values for each position
Initially, all positions in the depth buffer are set to 0 (minimum
depth)
Whenever a new pixel is generated, for example during the
scan-conversion of a surface, the pixel’s Z value is compared with
the corresponding value in the depth-buffer
If the pixel’s depth is greater than that in the buffer, the pixel is
drawn and its depth recorded in the depth buffer, overwriting the
previous value.
Otherwise, the pixel is not drawn and the depth buffer is not
updated.
Example
Suppose that during the scan-conversion of surfaces s1 and s2 , the
same pixel is generated, because s1 and s2 overlap in the scene
If the depth (Z) value of s2 ’s pixel is less than that stored in the
depth buffer, then s2 is further away from the eye than s1 , and s2 is
obscured by s1
Algorithm
1 Initialize the depth buffer and the refresh buffer so that for all screen
positions (x, y),
depth(x, y) = 0, refresh(x, y) = Ibackground
2 For each position on each polygon surface, compare depth values to
previously stored values in the depth buffer to determine visibility
I Calculate the depth z for each (x, y) position on the polygon
I If z > depth(x, y), then
depth(x, y) = z, refresh(x, y) = Isurf (x, y)
where Isurf (x, y) is the projected intensity value of the polygon
surface at position (x, y)
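The two-buffer algorithm above can be sketched in Python (illustrative; the buffer size and intensity values are made up):

```python
# Depth-buffer sketch following the convention above: larger z is
# closer to the viewer, so a pixel wins when its z exceeds the stored depth.
WIDTH, HEIGHT = 4, 3
I_BACKGROUND = 0   # illustrative background intensity

depth = [[0.0] * WIDTH for _ in range(HEIGHT)]           # all 0 = minimum depth
refresh = [[I_BACKGROUND] * WIDTH for _ in range(HEIGHT)]

def plot(x, y, z, intensity):
    """Store the pixel only if this surface is nearer than what is recorded."""
    if z > depth[y][x]:
        depth[y][x] = z
        refresh[y][x] = intensity

plot(1, 1, 0.4, 50)   # surface s1
plot(1, 1, 0.7, 90)   # surface s2 is closer -> overwrites s1
plot(1, 1, 0.2, 30)   # surface s3 is farther -> ignored
print(refresh[1][1])  # 90
```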
Illumination model
Lighting model or shading model
Used to calculate the intensity of light that we should see at a given
point on the surface of an object
Surface-rendering algorithm
Uses the intensity calculations from an illumination model to determine
the light intensity for all projected pixel positions for the various surfaces
in the scene.
Light Source
Point source - rays from such light sources follow radially diverging
paths from the source position
Light on transparent surface - some of the incident light will be
reflected, and some will be transmitted through the material
Light on an opaque surface - part of it is reflected and part of it is
absorbed
I Shiny surfaces reflect more of the incident light
I Dull surfaces absorb more of the incident light
Rough/dull surfaces - scatter the reflected light in all directions
I The scattered light - diffuse reflection
Light source
A very rough matte surface produces primarily diffuse reflections, so
that the surface appears equally bright from all viewing directions
Shiny surfaces (polished metal, an apple, a person’s forehead) -
scatter the reflected light more in one direction
I The scattered light - specular reflection
Ambient light
A surface that is not exposed directly to a light source will still be
visible if nearby objects are illuminated
Ambient light or background light - combination of light
reflections from various surfaces
Ambient light
No spatial or directional characteristics
Each surface is illuminated with a constant level of ambient light -
set as Ia
The resulting reflected light - constant for each surface, independent
of the viewing direction and the spatial orientation of the surface
Intensity of the reflected light - depends on the optical properties of
the surface: how much of the incident energy is absorbed and how
much is reflected
Diffuse reflection
constant over each surface in a scene, independent of the viewing
direction
kd - diffuse-reflection coefficient, or diffuse reflectivity - the
fractional amount of the incident light that is diffusely reflected from
a surface
To simulate a highly reflective surface, set kd near 1 - this produces
a bright surface with the intensity of the reflected light near that of
the incident light
To simulate a surface that absorbs most of the incident light, set kd
near 0
Combined diffuse reflection from point source and from ambient light
ka - ambient-reflection coefficient
Idiff = ka Ia + kd Il (N · L)
Figure: Diffuse reflections from a spherical surface illuminated with
ambient light and a single point light source for values of ka and kd
in the interval (0,1)
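The combined ambient-plus-diffuse formula can be evaluated directly; a Python sketch (illustrative, not from the chapter; clamping N · L at 0 so surfaces facing away from the light receive only ambient light is an added assumption the slides do not state):

```python
def dot(a, b):
    """Dot product of two 3D vectors."""
    return sum(x * y for x, y in zip(a, b))

def diffuse_intensity(ka, Ia, kd, Il, N, L):
    """I_diff = ka*Ia + kd*Il*(N . L); N and L are unit vectors.
    N . L is clamped at 0 for surfaces facing away from the light."""
    return ka * Ia + kd * Il * max(0.0, dot(N, L))

N = (0.0, 0.0, 1.0)   # unit surface normal
L = (0.0, 0.6, 0.8)   # unit vector toward the point light source
print(round(diffuse_intensity(0.1, 1.0, 0.8, 1.0, N, L), 6))  # 0.74
```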
Specular reflection
Result of total, or near total, reflection of the incident light in a
concentrated region around the specular reflection angle
Specular-reflection angle =angle of the incident light = θ
N - the unit normal surface vector
R - the unit vector in the direction of ideal specular reflection
L - the unit vector directed toward the point light source
V - the unit vector pointing to the viewer from the surface position
φ - the viewing angle relative to the specular-reflection direction R
Specular reflection
Ideal reflector (perfect mirror)
I Incident light is reflected only in the direction of R
I We would only see reflected light when the vectors V and R coincide
Objects other than ideal reflectors
I Exhibit specular reflections over a finite range of viewing positions
around the vector R
I Shiny surfaces - have a narrow specular reflection range
I Dull surfaces - have a wider specular reflection range
Figure: Modeling specular reflections (shaded area) with parameter ns
Phong model
Empirical model for calculating the specular-reflection intensity
Specular-reflection intensity ∝ cos^ns φ
φ - in the range 0° to 90° (cos φ - in the range 0 to 1)
ns - specular-reflection parameter - determined by the type of the
surface we want to display
Very shiny surface - modeled with large values of ns , 100 or more
Dull surfaces - modeled with small values of ns , around 1
Phong model
Transparent materials - specular reflection depends on θ
Many opaque materials - specular reflection is nearly constant for all
incidence angles
We can assume a constant specular-reflection coefficient ks - in the
range of 0 and 1 for each surface
I Ispec = ks Il cos^ns φ
I cos φ = V · R
I Ispec = ks Il (V · R)^ns
I R = (2 N · L) N − L
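The specular equations above translate almost line for line into code; a Python sketch (illustrative, not from the chapter; clamping V · R at 0 is an added assumption):

```python
def dot(a, b):
    """Dot product of two 3D vectors."""
    return sum(x * y for x, y in zip(a, b))

def reflect(N, L):
    """R = (2 N.L) N - L, the ideal specular-reflection direction."""
    s = 2.0 * dot(N, L)
    return tuple(s * n - l for n, l in zip(N, L))

def specular_intensity(ks, Il, V, R, ns):
    """I_spec = ks * Il * (V . R)^ns, clamped at 0 when cos(phi) < 0."""
    return ks * Il * max(0.0, dot(V, R)) ** ns

N = (0.0, 0.0, 1.0)   # unit surface normal
L = (0.0, 0.6, 0.8)   # unit vector toward the light
R = reflect(N, L)
print(R)                                             # (0.0, -0.6, 0.8)
V = (0.0, -0.6, 0.8)  # viewer aligned with R -> maximum highlight
print(round(specular_intensity(0.5, 1.0, V, R, 50), 6))  # 0.5
```

With a large ns (a shiny surface), the highlight falls off sharply as V rotates away from R.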
Intensity attenuation
As radiant energy from a point light source travels through space, its
amplitude is attenuated by the factor 1/d^2
d = distance the light has travelled
Surfaces close to the light source - receive higher incident intensity
Surfaces far from the light source - receive lower incident intensity
Our illumination model should take this into account
Intensity attenuation
Common attenuation function:
f (d) = 1 / (a0 + a1 d + a2 d^2)
Coefficients a0 , a1 , a2 can be varied to give different lighting effects
Limit the maximum value of f(d) to 1:
f (d) = min(1, 1 / (a0 + a1 d + a2 d^2))
Using this function, we can write our basic illumination model as
I = ka Ia + sum_{i=1}^{n} f (di ) Ili [kd (N · Li ) + ks (V · Ri )^ns ]
where di is the distance light has travelled from light source i
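The complete model can be sketched in Python (illustrative, not from the chapter; the attenuation coefficients a0, a1, a2 and the clamping of the dot products at 0 are added assumptions):

```python
def dot(a, b):
    """Dot product of two 3D vectors."""
    return sum(x * y for x, y in zip(a, b))

def attenuation(d, a0=1.0, a1=0.0, a2=0.01):
    """f(d) = min(1, 1 / (a0 + a1*d + a2*d^2)); coefficients are illustrative."""
    return min(1.0, 1.0 / (a0 + a1 * d + a2 * d * d))

def illumination(ka, Ia, kd, ks, ns, N, V, lights):
    """I = ka*Ia + sum_i f(di)*Ili*[kd*(N.Li) + ks*(V.Ri)^ns]
    Each light is a tuple (Ili, Li, di); Ri = (2 N.Li) N - Li."""
    I = ka * Ia
    for Ili, L, d in lights:
        s = 2.0 * dot(N, L)
        R = tuple(s * n - l for n, l in zip(N, L))   # specular direction
        I += attenuation(d) * Ili * (kd * max(0.0, dot(N, L))
                                     + ks * max(0.0, dot(V, R)) ** ns)
    return I

N = V = (0.0, 0.0, 1.0)                 # surface facing the viewer
lights = [(1.0, (0.0, 0.0, 1.0), 5.0)]  # one light directly overhead, d = 5
print(round(illumination(0.1, 1.0, 0.6, 0.3, 10, N, V, lights), 4))  # 0.82
```

Here f(5) = 1/1.25 = 0.8, so the single light contributes 0.8 × (0.6 + 0.3) = 0.72 on top of the ambient term 0.1.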
Ray Tracing
Ray-traced displays are highly realistic, but they require
considerable computation time to generate
Best suited for applications where the image can be rendered slowly
ahead of time, such as still images and film and television special
effects
Poorly suited for real-time applications like video games where speed
is critical
Also provides for visible surface detection, shadow effects,
transparency, and multiple light source illuminations
Figure: Ray tracing can achieve a very high degree of visual realism
Figure: The ray tracing algorithm builds an image by extending rays into
a scene
The algorithm
For each pixel ray, we test each surface in the scene to determine if
it is intersected by the ray
If a surface is intersected, we calculate the distance from the pixel to
the surface-intersection point
The smallest calculated intersection distance identifies the visible
surface for that pixel
Then we reflect the ray off the visible surface along a specular path
If the surface is transparent, we also send a ray through the surface
in the refraction direction
I reflection and refraction rays - secondary rays
This procedure is repeated for the secondary rays:
I Objects are tested for intersection, and the nearest surface along a
secondary path is used to recursively produce the next generation of
reflection and refraction paths
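The visible-surface step for primary rays (intersect every surface, keep the nearest positive hit) can be sketched for spheres in Python (illustrative; the scene contents are made up and secondary rays are omitted):

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Smallest positive ray parameter t where the ray origin + t*direction
    hits the sphere, or None. direction is assumed to be a unit vector."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c          # discriminant of the quadratic in t
    if disc < 0.0:
        return None                 # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

# Primary ray from the eye through a pixel; two spheres in the scene.
spheres = [((0.0, 0.0, 5.0), 1.0), ((0.0, 0.0, 10.0), 2.0)]
hits = [(t, s) for s in spheres
        if (t := intersect_sphere((0, 0, 0), (0, 0, 1), *s)) is not None]
print(min(hits)[0])   # 4.0: the nearer sphere is the visible surface
```

A full ray tracer would then spawn reflection and refraction rays at the hit point and recurse, as the procedure above describes.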
Figure: (a)reflection and refraction ray paths through a scene for a screen
pixel (b) binary ray-tracing tree for the paths shown in (a)