CG Unit 1 5 Notes
UNIT I - 2D PRIMITIVES
Introduction
A picture is completely specified by the set of intensities for the pixel positions in
the display. Shapes and colors of the objects can be described internally with pixel
arrays loaded into the frame buffer or with sets of basic geometric structures such as
straight line segments and polygon color areas. The structures used to describe basic
objects are referred to as output primitives.
Each output primitive is specified with input coordinate data and other information
about the way that object is to be displayed. Additional output primitives that can
be used to construct a picture include circles and other conic sections, quadric
surfaces, spline curves and surfaces, polygon color areas and character strings.
Points and Lines

Point plotting is accomplished by converting a single coordinate position furnished
by an application program into appropriate operations for the output device. With a
CRT monitor, for example, the electron beam is turned on to illuminate the screen
phosphor at the selected location.

Line drawing is accomplished by calculating intermediate positions along the line
path between two specified endpoint positions. An output device is then directed
to fill in these positions between the end points.

Digital devices display a straight line segment by plotting discrete points between
the two end points. Discrete coordinate positions along the line path are calculated
from the equation of the line. For a raster video display, the line color (intensity) is
then loaded into the frame buffer at the corresponding pixel coordinates. Reading
from the frame buffer, the video controller then plots the screen pixels.

Pixel positions are referenced according to scan-line number and column number
(pixel position across a scan line). Scan lines are numbered consecutively from 0,
starting at the bottom of the screen, and pixel columns are numbered from 0, left to
right across each scan line.

Figure : Pixel positions referenced by scan line number and column number

To load an intensity value into the frame buffer at a position corresponding to
column x along scan line y, we use

setpixel (x, y)

To retrieve the current frame buffer intensity setting for a specified location we use
a low level function

getpixel (x, y)

Line Drawing Algorithms

Digital Differential Analyzer (DDA) Algorithm
Bresenham's Line Algorithm
Parallel Line Algorithm

The Cartesian slope-intercept equation for a straight line is

y = m.x + b (1)

where m is the slope of the line and b is the y intercept.
Given that the two endpoints of a line segment are specified at positions (x1,y1) and
(x2,y2) as in figure we can determine the values for the slope m and y intercept b
with the following calculations
Figure : Straight line Segment with five sampling positions along the x axis
between x1 and x2
Figure : Line Path between endpoint positions (x1,y1) and (x2,y2)
m = ∆y / ∆x = (y2 - y1) / (x2 - x1) (2)

b = y1 - m.x1 (3)

For any given x interval ∆x along a line, we can compute the corresponding y
interval as

∆y = m.∆x (4)

We can obtain the x interval ∆x corresponding to a specified ∆y as

∆x = ∆y / m (5)

For lines with slope magnitudes |m| < 1, ∆x can be set proportional to a small
horizontal deflection voltage and the corresponding vertical deflection is then set
proportional to ∆y as calculated from Eq (4).

For lines whose slopes have magnitudes |m| > 1, ∆y can be set proportional to a
small vertical deflection voltage with the corresponding horizontal deflection
voltage set proportional to ∆x, calculated from Eq (5).

For lines with m = 1, ∆x = ∆y and the horizontal and vertical deflection voltages
are equal.

Digital Differential Analyzer (DDA) Algorithm

The digital differential analyzer (DDA) is a scan-conversion line algorithm based
on calculating either ∆y or ∆x. The line is sampled at unit intervals in one
coordinate and the corresponding integer values nearest the line path are determined
for the other coordinate.

For a line with positive slope, if the slope is less than or equal to 1, we sample at
unit x intervals (∆x = 1) and compute each successive y value as

yk+1 = yk + m (6)

Subscript k takes integer values starting from 1 for the first point and increases by 1
until the final endpoint is reached. Since m can be any real number between 0 and 1,
the calculated y values must be rounded to the nearest integer.

For lines with a positive slope greater than 1 we reverse the roles of x and y, sample
at unit y intervals (∆y = 1), and calculate each succeeding x value as

xk+1 = xk + (1/m) (7)

Equations (6) and (7) are based on the assumption that lines are to be processed from
the left endpoint to the right endpoint.
If this processing is reversed, so that the starting endpoint is at the right, then
∆x = -1 and

yk+1 = yk - m (8)

Similarly, when the slope is greater than 1 and we start at the right endpoint,
∆y = -1 with

xk+1 = xk - (1/m) (9)

If the absolute value of the slope is less than 1 and the start endpoint is at the left,
we set ∆x = 1 and calculate y values with Eq. (6).

When the start endpoint is at the right (for the same slope), we set ∆x = -1 and
obtain y positions from Eq. (8). Similarly, when the absolute value of a negative
slope is greater than 1, we use ∆y = -1 and Eq. (9), or we use ∆y = 1 and Eq. (7).

The following table illustrates successive DDA calculations (x increment 0.66,
y increment 1), with each position rounded to the nearest integer pixel:

k    x                   y        Plotted point (rounded to integer)
0    0+0.66 = 0.66       0+1 = 1  (1,1)
1    0.66+0.66 = 1.32    1+1 = 2  (1,2)
2    1.32+0.66 = 1.98    2+1 = 3  (2,3)
3    1.98+0.66 = 2.64    3+1 = 4  (3,4)
4    2.64+0.66 = 3.3     4+1 = 5  (3,5)
5    3.3+0.66 = 3.96     5+1 = 6  (4,6)

Algorithm Description:

Step 1: Accept input as two endpoint pixel positions.
Step 2: Horizontal and vertical differences between the endpoint positions are
assigned to parameters dx and dy (calculate dx = xb - xa and dy = yb - ya).
Step 3: The difference with the greater magnitude determines the value of
parameter steps.
Step 4: Starting with pixel position (xa, ya), determine the offset needed at each
step to generate the next pixel position along the line path.
Step 5: Loop the following process for steps number of times:
a. Use a unit of increment or decrement in the x and y direction.
b. If xa is less than xb, the values of increment in the x and y directions are 1 and m.
c. If xa is greater than xb, then the decrements -1 and -m are used.

The core loop of the routine adds these increments and plots the rounded positions:

{
x += xIncrement;
y += yIncrement;
setpixel (ROUND(x), ROUND(y));
}
}
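The loop fragment shown above forms the body of a complete DDA routine. A minimal sketch of the full procedure, assuming the setpixel and ROUND helpers used elsewhere in these notes, is:

void lineDDA (int xa, int ya, int xb, int yb)
{
    int dx = xb - xa, dy = yb - ya, steps, k;
    float xIncrement, yIncrement, x = xa, y = ya;

    /* The coordinate with the larger difference determines the number of steps */
    if (abs (dx) > abs (dy))
        steps = abs (dx);
    else
        steps = abs (dy);

    xIncrement = dx / (float) steps;
    yIncrement = dy / (float) steps;

    /* Plot the starting pixel, then step along the line */
    setpixel (ROUND (x), ROUND (y));
    for (k = 0; k < steps; k++) {
        x += xIncrement;
        y += yIncrement;
        setpixel (ROUND (x), ROUND (y));
    }
}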
Result :

Advantages of DDA Algorithm

The DDA is a faster method for calculating pixel positions than direct use of the
line equation y = m.x + b, since it eliminates the multiplication by taking advantage
of raster characteristics. However, the accumulation of round-off error in successive
additions of the floating-point increment can cause calculated pixel positions to
drift away from the true line path for long line segments.

Bresenham's Line Algorithm

An accurate and efficient raster line-generating algorithm, developed by Bresenham,
uses only incremental integer calculations. In addition, Bresenham's line algorithm
can be adapted to display circles and other curves.

Pixel positions along a line path are determined by sampling at unit x intervals.
Starting from the left endpoint (x0,y0) of a given line, we step to each successive
column (x position) and plot the pixel whose scan-line y value is closest to the line
path.

Assuming the pixel at (xk,yk) is to be displayed, we next need to decide which pixel
to plot in column xk+1 = xk + 1: the choices are (xk+1, yk) and (xk+1, yk+1). At
sampling position xk+1, we label the vertical pixel separations from the
mathematical line path as d1 and d2. The y coordinate on the mathematical line at
pixel column position xk+1 is calculated as

y = m(xk + 1) + b (1)

Then

d1 = y - yk
   = m(xk + 1) + b - yk

d2 = (yk + 1) - y
   = yk + 1 - m(xk + 1) - b

To determine which of the two pixels is closest to the line path, an efficient test is
based on the difference between the two pixel separations:

d1 - d2 = 2m(xk + 1) - 2yk + 2b - 1 (2)

A decision parameter pk for the kth step is obtained by substituting m = ∆y/∆x and
rearranging so that only integer calculations are involved:

pk = ∆x (d1 - d2) = 2∆y.xk - 2∆x.yk + c (3)

The sign of pk is the same as the sign of d1 - d2, since ∆x > 0.
Parameter c is constant and has the value 2∆y + ∆x(2b - 1), which is independent of
the pixel position and will be eliminated in the recursive calculations for pk.

If the pixel at yk is closer to the line path than the pixel at yk+1 (d1 < d2), then
decision parameter pk is negative. In this case, plot the lower pixel; otherwise, plot
the upper pixel.

Coordinate changes along the line occur in unit steps in either the x or y direction.
To obtain the values of successive decision parameters using incremental integer
calculations, at step k+1 the decision parameter is evaluated from equation (3) as

pk+1 = 2∆y.xk+1 - 2∆x.yk+1 + c

Subtracting equation (3) from the preceding equation,

pk+1 - pk = 2∆y (xk+1 - xk) - 2∆x (yk+1 - yk)

But xk+1 = xk + 1, so that

pk+1 = pk + 2∆y - 2∆x (yk+1 - yk) (4)

where the term yk+1 - yk is either 0 or 1, depending on the sign of parameter pk.

This recursive calculation of decision parameters is performed at each integer x
position, starting at the left coordinate endpoint of the line.

The first parameter p0 is evaluated from equation (3) at the starting pixel
position (x0,y0) and with m evaluated as ∆y/∆x:

p0 = 2∆y - ∆x (5)

Bresenham's line drawing for a line with a positive slope less than 1 is given in the
following outline of the algorithm. The constants 2∆y and 2∆y - 2∆x are calculated
once for each line to be scan converted.

Bresenham's Line Drawing Algorithm for |m| < 1

1. Input the two line endpoints and store the left endpoint in (x0,y0).
2. Load (x0,y0) into the frame buffer, i.e. plot the first point.
3. Calculate the constants ∆x, ∆y, 2∆y and 2∆y - 2∆x, and obtain the starting value
for the decision parameter as P0 = 2∆y - ∆x.
4. At each xk along the line, starting at k = 0, perform the following test:
If Pk < 0, the next point to plot is (xk+1, yk) and
Pk+1 = Pk + 2∆y
otherwise, the next point to plot is (xk+1, yk+1) and
Pk+1 = Pk + 2∆y - 2∆x
5. Perform step 4 ∆x times.

Implementation of Bresenham Line Drawing Algorithm

void lineBres (int xa, int ya, int xb, int yb)
{
    int dx = abs (xa - xb), dy = abs (ya - yb);
    int p = 2 * dy - dx;
    int twoDy = 2 * dy, twoDyDx = 2 * (dy - dx);
    int x, y, xEnd;

    /* Determine which point to use as start, which as end */
    if (xa > xb) {
        x = xb;
        y = yb;
        xEnd = xa;
    }
    else {
        x = xa;
        y = ya;
        xEnd = xb;
    }
    setPixel (x, y);

    while (x < xEnd) {
        x++;
        if (p < 0)
            p += twoDy;
        else {
            y++;
            p += twoDyDx;
        }
        setPixel (x, y);
    }
}
Result

For a line with the example values ∆x = 10 and ∆y = 8, the starting decision
parameter is

p0 = 2∆y - ∆x = 6
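As a check on the recurrence, here is a small driver that prints the successive decision parameters and plotted pixels. The endpoints (20,10) and (30,18) are chosen purely for illustration; they are consistent with ∆x = 10 and ∆y = 8 above.

#include <stdio.h>
#include <stdlib.h>

int main (void)
{
    int xa = 20, ya = 10, xb = 30, yb = 18;   /* illustrative endpoints (assumption) */
    int dx = abs (xb - xa), dy = abs (yb - ya);
    int p = 2 * dy - dx;                      /* p0 = 2*8 - 10 = 6 */
    int twoDy = 2 * dy, twoDyDx = 2 * (dy - dx);
    int x = xa, y = ya, k;

    printf ("k\tpk\t(xk+1, yk+1)\n");
    for (k = 0; k < dx; k++) {
        x++;
        if (p < 0) {
            printf ("%d\t%d\t(%d, %d)\n", k, p, x, y);
            p += twoDy;                       /* pk+1 = pk + 2*dy          */
        }
        else {
            y++;
            printf ("%d\t%d\t(%d, %d)\n", k, p, x, y);
            p += twoDyDx;                     /* pk+1 = pk + 2*dy - 2*dx   */
        }
    }
    return 0;
}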
Circle-Generating Algorithms

A general function is available in a graphics library for displaying various kinds of
curves, including circles and ellipses.

Properties of a circle

A circle is defined as the set of points that are all at a given distance r from a center
position (xc,yc). This distance relationship is expressed by the Pythagorean theorem
in Cartesian coordinates as

(x - xc)2 + (y - yc)2 = r2 (1)

To eliminate the unequal spacing that results from stepping along one coordinate
axis and solving this equation for the other coordinate, we can calculate points
along the circle boundary using polar coordinates r and θ. Expressing the circle
equation in parametric polar form yields the pair of equations

x = xc + r cos θ      y = yc + r sin θ

When a display is generated with these equations using a fixed angular step size, a
circle is plotted with equally spaced points along the circumference. To reduce
calculations, use a large angular separation between points along the circumference
and connect the points with straight line segments to approximate the circular path.

Setting the angular step size at 1/r plots pixel positions that are approximately one
unit apart. The shape of the circle is similar in each quadrant. Having determined
the curve positions in the first quadrant, we can generate the circle section in the
second quadrant of the xy plane by noting that the two circle sections are symmetric
with respect to the y axis; circle sections in the third and fourth quadrants can be
obtained from sections in the first and second quadrants by considering symmetry
between octants.

Circle sections in adjacent octants within one quadrant are symmetric with respect
to the 45° line dividing the two octants. A point at position (x, y) on a one-eighth
circle sector is mapped into the seven circle points in the other octants of the
xy plane.

We can generate all pixel positions around a circle by calculating only the points
within the sector from x = 0 to x = y. The slope of the curve in this octant has a
magnitude less than or equal to 1.0: at x = 0 the circle slope is 0, and at x = y the
slope is -1.0.
For a given radius r and screen center position (xc,yc), we set up our algorithm to
calculate pixel positions around a circle path centered at the coordinate origin and
then move each calculated position to its proper screen location by adding xc to x
and yc to y.

To apply the midpoint method we define a circle function as

fcircle (x,y) = x2 + y2 - r2

Any point (x,y) on the boundary of the circle with radius r satisfies the equation
fcircle (x,y) = 0. If the point is in the interior of the circle, the circle function is
negative, and if the point is outside the circle, the circle function is positive.

The tests above are performed for the mid position between pixels near the circle
path at each sampling step. The circle function is the decision parameter
in the midpoint algorithm.
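Briefly, the decision parameter comes from evaluating the circle function at the midpoint between the two candidate pixels for sampling position xk + 1:

pk = fcircle (xk + 1, yk - 1/2) = (xk + 1)2 + (yk - 1/2)2 - r2

If pk < 0, the midpoint is inside the circle and the pixel on scan line yk is closer to the circle boundary; otherwise the pixel on scan line yk - 1 is selected. Successive decision parameters then follow from

pk+1 = fcircle (xk+1 + 1, yk+1 - 1/2) = pk + 2(xk + 1) + (yk+1 2 - yk 2) - (yk+1 - yk) + 1

where yk+1 is either yk or yk - 1, depending on the sign of pk.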
Increments for obtaining pk+1 are either 2xk+1 + 1 (if pk is negative) or
2xk+1 + 1 - 2yk+1 (otherwise).

Evaluation of the terms 2xk+1 and 2yk+1 can also be done incrementally as

2xk+1 = 2xk + 2
2yk+1 = 2yk - 2

At the start position (0,r) these two terms have the values 0 and 2r respectively.
Each successive value for the 2xk+1 term is obtained by adding 2 to the previous
value, and each successive value for the 2yk+1 term is obtained by subtracting 2
from the previous value.

The initial decision parameter is obtained by evaluating the circle function
at the start position (x0,y0) = (0,r):

P0 = fcircle (1, r - 1/2)
   = 1 + (r - 1/2)2 - r2

or

P0 = (5/4) - r

If the radius r is specified as an integer,

P0 = 1 - r (for r an integer)

Midpoint Circle Algorithm

1. Input radius r and circle center (xc,yc) and obtain the first point on the
circumference of a circle centered on the origin as (x0,y0) = (0,r).
2. Calculate the initial value of the decision parameter as P0 = (5/4) - r.
3. At each xk position, starting at k = 0, perform the following test. If Pk < 0, the
next point along the circle centered on (0,0) is (xk+1, yk) and Pk+1 = Pk + 2xk+1 + 1.
Otherwise, the next point along the circle is (xk+1, yk-1) and
Pk+1 = Pk + 2xk+1 + 1 - 2yk+1, where 2xk+1 = 2xk + 2 and 2yk+1 = 2yk - 2.
4. Determine symmetry points in the other seven octants.
5. Move each calculated pixel position (x,y) onto the circular path centered at (xc,yc)
and plot the coordinate values:
x = x + xc,   y = y + yc
6. Repeat steps 3 through 5 until x >= y.

Example : Midpoint Circle Drawing

Given a circle radius r = 10, we demonstrate the algorithm for the circle octant in
the first quadrant from x = 0 to x = y. The initial value of the decision parameter is
P0 = 1 - r = -9.

For the circle centered on the coordinate origin, the initial point is (x0,y0) = (0,10)
and the initial increment terms for calculating the decision parameters are

2x0 = 0,   2y0 = 20

Successive midpoint decision parameter values and the corresponding coordinate
positions along the circle path are listed in the following table.

k    pk    (xk+1, yk+1)    2xk+1    2yk+1
0    -9    (1,10)          2        20
1    -6    (2,10)          4        20
2    -1    (3,10)          6        20
3     6    (4,9)           8        18
4    -3    (5,9)           10       18
5     8    (6,8)           12       16
6     5    (7,7)           14       14
            p += 2 * (x - y) + 1;
        }
        circlePlotPoints (xCenter, yCenter, x, y);
    }
}

void circlePlotPoints (int xCenter, int yCenter, int x, int y)
{
    setpixel (xCenter + x, yCenter + y);
    setpixel (xCenter - x, yCenter + y);
    setpixel (xCenter + x, yCenter - y);
    setpixel (xCenter - x, yCenter - y);
    setpixel (xCenter + y, yCenter + x);
    setpixel (xCenter - y, yCenter + x);
    setpixel (xCenter + y, yCenter - x);
    setpixel (xCenter - y, yCenter - x);
}
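A complete midpoint-circle routine along these lines (a minimal sketch; it assumes the same circlePlotPoints helper above and the integer starting value p0 = 1 - r) can be written as:

void circleMidpoint (int xCenter, int yCenter, int radius)
{
    int x = 0;
    int y = radius;
    int p = 1 - radius;                /* initial decision parameter p0 = 1 - r */

    void circlePlotPoints (int, int, int, int);

    /* Plot the first set of points */
    circlePlotPoints (xCenter, yCenter, x, y);

    while (x < y) {
        x++;
        if (p < 0)
            p += 2 * x + 1;            /* midpoint inside the circle            */
        else {
            y--;
            p += 2 * (x - y) + 1;      /* midpoint outside or on the circle     */
        }
        circlePlotPoints (xCenter, yCenter, x, y);
    }
}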
Ellipse-Generating Algorithms
Ellipse equations are greatly simplified if the major and minor axes are oriented to
align with the coordinate axes. With the major and minor axes oriented parallel to
the x and y axes, parameter rx labels the semi-major axis and parameter ry labels
the semi-minor axis:

((x - xc)/rx)2 + ((y - yc)/ry)2 = 1
1. Start at position (0,ry) and step clockwise along the elliptical path in the first
quadrant, shifting from unit steps in x to unit steps in y when the slope becomes
less than -1.
2. Start at (rx,0) and select points in a counterclockwise order,
2.1 shifting from unit steps in y to unit steps in x when the slope becomes
greater than -1.0, or
2.2 using parallel processors to calculate pixel positions in the two regions
simultaneously.
3. Start at (0,ry) and step along the ellipse path in clockwise order throughout the
first quadrant.

Midpoint Ellipse Algorithm

The midpoint ellipse method is applied throughout the first quadrant in two parts.
The figure below shows the division of the first quadrant according to the slope of
an ellipse with rx < ry.

Using the ellipse function with (xc,yc) = (0,0),

fellipse (x,y) = ry2x2 + rx2y2 - rx2ry2

which has the following properties:

fellipse (x,y) < 0, if (x,y) is inside the ellipse boundary
             = 0, if (x,y) is on the ellipse boundary
             > 0, if (x,y) is outside the ellipse boundary

Thus, the ellipse function fellipse (x,y) serves as the decision parameter in
the midpoint algorithm.

Starting at (0,ry), we take unit steps in the x direction until we reach the boundary
between region 1 and region 2; then we switch to unit steps in the y direction over
the remainder of the curve in the first quadrant.

At each step we test the value of the slope of the curve. The ellipse slope is
calculated from

dy/dx = - (2ry2x / 2rx2y)
The following figure shows the midpoint between the two candidate pixels at
sampling position xk + 1 in the first region. To determine the next position along
the ellipse path, we evaluate the decision parameter at this midpoint:

P1k = fellipse (xk + 1, yk - 1/2)
    = ry2 (xk + 1)2 + rx2 (yk - 1/2)2 - rx2ry2

If P1k < 0, the midpoint is inside the ellipse and the pixel on scan line yk is closer
to the ellipse boundary. Otherwise, the midpoint is outside or on the ellipse
boundary, and we select the pixel on scan line yk - 1.

At the next sampling position (xk+1 + 1 = xk + 2) the decision parameter for
region 1 is evaluated as

p1k+1 = fellipse (xk+1 + 1, yk+1 - 1/2)
      = ry2 [(xk + 1) + 1]2 + rx2 (yk+1 - 1/2)2 - rx2ry2

or

p1k+1 = p1k + 2ry2 (xk + 1) + ry2 + rx2 [(yk+1 - 1/2)2 - (yk - 1/2)2]

where yk+1 is either yk or yk - 1, depending on the sign of P1k.

Increments for the decision parameters can be calculated using only addition and
subtraction, as in the circle algorithm. The terms 2ry2x and 2rx2y can be obtained
incrementally. At the initial position (0,ry) these two terms evaluate to

2ry2x = 0
2rx2y = 2rx2ry

As x and y are incremented, updated values are obtained by adding 2ry2 to the
current value of the first increment term and subtracting 2rx2 from the current
value of the second. The updated increment values are compared at each step, and
we move from region 1 to region 2 when the condition 2ry2x >= 2rx2y is satisfied.

In region 1 the initial value of the decision parameter is obtained by evaluating the
ellipse function at the start position (x0,y0) = (0,ry):

p10 = fellipse (1, ry - 1/2)
    = ry2 + rx2 (ry - 1/2)2 - rx2ry2

or

p10 = ry2 - rx2ry + (1/4)rx2
Over region 2, we sample at unit steps in the negative y direction, and the midpoint
is now taken between horizontal pixels at each step. For this region, the decision
parameter is evaluated as

p2k = fellipse (xk + 1/2, yk - 1)
    = ry2 (xk + 1/2)2 + rx2 (yk - 1)2 - rx2ry2

1. If P2k > 0, the midpoint position is outside the ellipse boundary, and we select
the pixel at xk.
2. If P2k <= 0, the midpoint is inside or on the ellipse boundary, and we select
pixel position xk + 1.

To determine the relationship between successive decision parameters in region 2,
we evaluate the ellipse function at the next sampling step yk+1 - 1 = yk - 2:

P2k+1 = fellipse (xk+1 + 1/2, yk+1 - 1)
      = ry2 (xk+1 + 1/2)2 + rx2 [(yk - 1) - 1]2 - rx2ry2

or

p2k+1 = p2k - 2rx2 (yk - 1) + rx2 + ry2 [(xk+1 + 1/2)2 - (xk + 1/2)2]

with xk+1 set either to xk or to xk + 1, depending on the sign of P2k.

When we enter region 2, the initial position (x0,y0) is taken as the last position
selected in region 1, and the initial decision parameter in region 2 is then

p20 = fellipse (x0 + 1/2, y0 - 1)
    = ry2 (x0 + 1/2)2 + rx2 (y0 - 1)2 - rx2ry2

Midpoint Ellipse Algorithm

1. Input rx, ry, and ellipse center (xc,yc), and obtain the first point on an ellipse
centered on the origin as (x0,y0) = (0,ry).
2. Calculate the initial value of the decision parameter in region 1 as
P10 = ry2 - rx2ry + (1/4)rx2
3. At each xk position in region 1, starting at k = 0, perform the following test.
If P1k < 0, the next point along the ellipse centered on (0,0) is (xk+1, yk) and
p1k+1 = p1k + 2ry2xk+1 + ry2
Otherwise, the next point along the ellipse is (xk+1, yk-1) and
p1k+1 = p1k + 2ry2xk+1 - 2rx2yk+1 + ry2
with
2ry2xk+1 = 2ry2xk + 2ry2
2rx2yk+1 = 2rx2yk - 2rx2
and continue until 2ry2x >= 2rx2y.
4. Calculate the initial value of the decision parameter in region 2 using the last
point (x0,y0) calculated in region 1:
p20 = ry2 (x0 + 1/2)2 + rx2 (y0 - 1)2 - rx2ry2
5. At each yk position in region 2, starting at k = 0, perform the following test.
If p2k > 0, the next point along the ellipse centered on (0,0) is (xk, yk-1) and
p2k+1 = p2k - 2rx2yk+1 + rx2
Otherwise, the next point along the ellipse is (xk+1, yk-1) and
p2k+1 = p2k + 2ry2xk+1 - 2rx2yk+1 + rx2
using the same incremental calculations for x and y as in region 1.
6. Determine symmetry points in the other three quadrants.
7. Move each calculated pixel position (x,y) onto the elliptical path centered on
(xc,yc) and plot the coordinate values:
x = x + xc,   y = y + yc
8. Repeat the steps for region 1 until 2ry2x >= 2rx2y.

Example : Midpoint ellipse drawing

Given input ellipse parameters rx = 8 and ry = 6, we illustrate the midpoint ellipse
algorithm by determining raster positions along the ellipse path in the first
quadrant. Initial values and increments for the decision parameter calculations are

2ry2x = 0 (with increment 2ry2 = 72)
2rx2y = 2rx2ry (with increment -2rx2 = -128)

For region 1, the initial point for the ellipse centered on the origin is
(x0,y0) = (0,6) and the initial decision parameter value is

p10 = ry2 - rx2ry + (1/4)rx2 = -332

Successive midpoint decision parameter values and the pixel positions
along the ellipse are listed in the following table.

k    p1k    xk+1,yk+1    2ry2xk+1    2rx2yk+1
0    -332   (1,6)        72          768
1    -224   (2,6)        144         768
2    -44    (3,6)        216         768
3    208    (4,5)        288         640
4    -108   (5,5)        360         640
5    288    (6,4)        432         512
6    244    (7,3)        504         384

We now move out of region 1, since 2ry2x > 2rx2y.

For region 2, the initial point is (x0,y0) = (7,3) and the initial decision
parameter is

p20 = fellipse (7 + 1/2, 2) = -151

The remaining positions along the ellipse path in the first quadrant are then
calculated as

k    p2k    xk+1,yk+1    2ry2xk+1    2rx2yk+1
0    -151   (8,2)        576         256
1    233    (8,1)        576         128
2    745    (8,0)        -           -

Implementation of Midpoint Ellipse Drawing

#define ROUND(a) ((int)(a+0.5))

void ellipseMidpoint (int xCenter, int yCenter, int Rx, int Ry)
{
    int Rx2 = Rx * Rx;
    int Ry2 = Ry * Ry;
    int twoRx2 = 2 * Rx2;
    int twoRy2 = 2 * Ry2;
    int p;
    int x = 0;
    int y = Ry;
    int px = 0;
    int py = twoRx2 * y;
    void ellipsePlotPoints (int, int, int, int);

    /* Plot the first set of points */
    ellipsePlotPoints (xCenter, yCenter, x, y);

    /* Region 1 */
    p = ROUND (Ry2 - (Rx2 * Ry) + (0.25 * Rx2));
    while (px < py)
    {
        x++;
        px += twoRy2;
        if (p < 0)
            p += Ry2 + px;
        else
        {
            y--;
            py -= twoRx2;
            p += Ry2 + px - py;
        }
        ellipsePlotPoints (xCenter, yCenter, x, y);
    }

    /* Region 2 */
    p = ROUND (Ry2 * (x + 0.5) * (x + 0.5) + Rx2 * (y - 1) * (y - 1) - Rx2 * Ry2);
    while (y > 0)
    {
        y--;
        py -= twoRx2;
        if (p > 0)
            p += Rx2 - py;
        else
        {
            x++;
            px += twoRy2;
            p += Rx2 - py + px;
        }
        ellipsePlotPoints (xCenter, yCenter, x, y);
    }
}

void ellipsePlotPoints (int xCenter, int yCenter, int x, int y)
{
    setpixel (xCenter + x, yCenter + y);
    setpixel (xCenter - x, yCenter + y);
    setpixel (xCenter + x, yCenter - y);
    setpixel (xCenter - x, yCenter - y);
}

Attributes of output primitives

Any parameter that affects the way a primitive is to be displayed is referred to as an
attribute parameter. Example attribute parameters are color, size etc. A line
drawing function, for example, could contain parameters to set color, width and
other properties.

1. Line Attributes
2. Curve Attributes
3. Color and Grayscale Levels
4. Area Fill Attributes
5. Character Attributes
6. Bundled Attributes

Line Attributes

Basic attributes of a straight line segment are its type, its width, and its color. In
some graphics packages, lines can also be displayed using selected pen or brush
options.

Line Type
Line Width
Pen and Brush Options
Line Color

Line Type

Possible selections for the line type attribute include solid lines, dashed lines and
dotted lines. To set line type attributes in a PHIGS application program, a user
invokes the function

setLinetype (lt)

Line Width

Line width is selected with the function

setLinewidthScaleFactor (lw)

Line width parameter lw is assigned a positive number to indicate the relative width
of the line to be displayed. A value of 1 specifies a standard-width line. A user could
set lw to a value of 0.5 to plot a line whose width is half that of the standard line.
Values greater than 1 produce lines thicker than the standard.

Line Cap

We can adjust the shape of the line ends to give them a better appearance by adding
line caps. There are three types of line cap:

Butt cap
Round cap
Projecting square cap

A butt cap is obtained by adjusting the end positions of the component parallel lines
so that the thick line is displayed with square ends that are perpendicular to the line
path.

A round cap is obtained by adding a filled semicircle to each butt cap. The circular
arcs are centered on the line endpoints and have a diameter equal to the line
thickness.

A projecting square cap extends the line and adds butt caps that are positioned
one-half of the line width beyond the specified endpoints.

There are three possible methods for smoothly joining two line segments:

Miter Join
Round Join
Bevel Join

1. A miter join is accomplished by extending the outer boundaries of each of the two
lines until they meet.
2. A round join is produced by capping the connection between the two segments
with a circular boundary whose diameter is equal to the line width.
3. A bevel join is generated by displaying the line segments with butt caps and
filling in the triangular gap where the segments meet.
A polyline routine displays a line in the current color by setting this color value in
the frame buffer at pixel locations along the line path using the setpixel procedure.
We set the line color value in PHIGS with the function

setPolylineColourIndex (lc)

Example:

setLinetype (2);
setLinewidthScaleFactor (2);
setPolylineColourIndex (5);
polyline (n1, wcPoints1);
setPolylineColourIndex (6);
polyline (n2, wcPoints2);

This program segment would display two figures, drawn with double-wide dashed
lines. The first is displayed in a color corresponding to code 5, and the second in
color 6.

Pen and Brush Options

With some packages, lines can be displayed with pen or brush selections. Options in
this category include shape, size, and pattern. Some possible pen or brush shapes
are given in the figure.

Curve Attributes

Parameters for curve attributes are the same as those for line segments. Curves can
be displayed with varying colors, widths, dot-dash patterns and available pen or
brush options.

Color and Grayscale Levels

Various color and intensity-level options can be made available to a user, depending
on the capabilities and design objectives of a particular system.

In a color raster system, the number of color choices available depends on the
amount of storage provided per pixel in the frame buffer.
We can put the color codes in a separate table and use pixel values as an
index into this table
With the direct storage scheme, whenever a particular color code is specified in an
application program, the corresponding binary value is placed in the frame buffer
for each-component pixel in the output primitives to be displayed in that color.
A minimum number of colors can be provided in this scheme with 3 bits of storage
per pixel, as shown in Table
A user can set color-table entries in a PHIGS application program with the
function

setColourRepresentation (ws, ci, colorptr)

where ws identifies the workstation, ci specifies the color index (table position),
and colorptr points to the RGB color values to be stored at that position.
Grayscale
With monitors that have no color capability, color functions can be used in an
application program to set the shades of gray, or grayscale, for displayed primitives.
Numeric values over the range from 0 to 1 can be used to specify grayscale levels,
which are then converted to appropriate binary codes for storage in the raster.
Intensity = 0.5[min(r,g,b)+max(r,g,b)]
Area Fill Attributes

Options for filling a defined region include a choice between a solid color
or a pattern fill, and choices for particular colors and patterns.
Fill Styles
Areas are displayed with three basic fill styles: hollow with a color border, filled
with a solid color, or filled with a specified pattern or design. A basic fill style is
selected in a PHIGS program with the function
setInteriorStyle(fs)
Values for the fill-style parameter fs include hollow, solid, and pattern. Another
value for fill style is hatch, which is used to fill an area with selected hatching
patterns-parallel lines or crossed lines
The color for a solid interior or for a hollow area outline is chosen with

setInteriorColourIndex (fc)

where fill color parameter fc is set to the desired color code.
Pattern Fill
For example, the following set of statements would fill the area defined in the
fillArea command with the second pattern type stored in the pattern table:

setInteriorStyle (pattern);
setInteriorStyleIndex (2);
fillArea (n, points);

Character Attributes

The appearance of displayed characters is controlled by attributes such as font,
size, color and orientation. Attributes can be set both for entire character strings
(text) and for individual characters defined as marker symbols.

Text Attributes

A font (or typeface) is a set of characters with a particular design style, such as
Courier, Helvetica, Times Roman, and various symbol groups.

Control of text color (or intensity) is managed from an application program with

setTextColourIndex (tc)

where text color parameter tc specifies an allowable color code.

Text size can be adjusted without changing the width-to-height ratio of characters
with

setCharacterHeight (ch)

Parameter ch is assigned a real value greater than 0 to set the coordinate height of
capital letters.

The width only of text can be set with the function

setCharacterExpansionFactor (cw)

where the character width parameter cw is set to a positive real value that scales the
body width of characters.
The orientation for a displayed character string is set according to the direction of
the character up vector:

setCharacterUpVector (upvect)

Parameter upvect in this function is assigned two values that specify the x and y
vector components. For example, with upvect = (1, 1), the direction of the up
vector is 45° and text would be displayed as shown in the figure.

To arrange character strings vertically or horizontally, a text path is selected with

setTextPath (tp)

where the text path parameter tp can be assigned the value right, left, up, or down.

Another handy attribute for character strings is alignment. This attribute specifies
how text is to be positioned with respect to the start coordinates. Alignment
attributes are set with

setTextAlignment (h, v)

Text precision is specified with

setTextPrecision (tpr)

where tpr is assigned one of the values string, char, or stroke.

Marker Attributes

A marker symbol is a single character that can be displayed in different colors and
in different sizes. Marker attributes are implemented by procedures that load the
chosen character into the raster at the defined positions with the specified color and
size. We select a particular character to be the marker symbol with
setMarkerType (mt)

where marker type parameter mt is set to an integer code. Typical codes for marker
type are the integers 1 through 5, specifying, respectively, a dot (.), a vertical cross
(+), an asterisk (*), a circle (o), and a diagonal cross (X).

We set the marker size with

setMarkerSizeScaleFactor (ms)

with parameter marker size ms assigned a positive number. This scaling parameter
is applied to the nominal size for the particular marker symbol chosen. Values
greater than 1 produce character enlargement; values less than 1 reduce the marker
size.

Marker color is specified with

setPolymarkerColourIndex (mc)

A selected color code parameter mc is stored in the current attribute list and used to
display subsequently specified marker primitives.

Bundled Attributes

The procedures considered so far each reference a single attribute that specifies
exactly how a primitive is to be displayed; these specifications are called individual
attributes.

A particular set of attribute values for a primitive on each output device is chosen
by specifying the appropriate table index. Attributes specified in this manner are
called bundled attributes. The choice between a bundled or an unbundled
specification is made by setting a switch called the aspect source flag for each of
these attributes:

setIndividualASF (attributeptr, flagptr)

where parameter attributeptr points to a list of attributes and parameter flagptr
points to the corresponding list of aspect source flags. Each aspect source flag can
be assigned a value of individual or bundled.

Bundled Line Attributes

Entries in the bundle table for line attributes on a specified workstation are set with
the function

setPolylineRepresentation (ws, li, lt, lw, lc)

Parameter ws is the workstation identifier and line index parameter li defines the
bundle table position. Parameters lt, lw and lc are then bundled and assigned values
to set the line type, line width, and line color specifications for the designated table
index.

Example:

setPolylineRepresentation (1, 3, 2, 0.5, 1)
setPolylineRepresentation (4, 3, 1, 1, 7)

A polyline that is assigned a table index value of 3 would be displayed using dashed
lines at half thickness in a blue color on workstation 1; while on workstation 4, this
same index generates solid, standard-sized white lines.

Bundled Area-Fill Attributes

Table entries for bundled area-fill attributes are set with

setInteriorRepresentation (ws, fi, fs, pi, fc)

which defines the attribute list corresponding to fill index fi on workstation ws.
Parameters fs, pi and fc are assigned values for the fill style, pattern index and fill
color.

Bundled Text Attributes

setTextRepresentation (ws, ti, tf, tp, te, ts, tc)

bundles values for text font, precision, expansion factor, size and color in a table
position for workstation ws that is specified by the value assigned to text index
parameter ti.

Bundled Marker Attributes
Two Dimensional Geometric Transformations

Translation

A translation repositions an object along a straight-line path by adding translation
distances tx and ty to each coordinate position (x,y):

x' = x + tx,    y' = y + ty

The translation distance pair (tx,ty) is called the translation vector or shift vector.

Translation can be written as a single matrix equation by using column vectors to
represent coordinate positions and the translation vector:

P = (x, y),    T = (tx, ty)

[x']   [x]   [tx]
[y'] = [y] + [ty]

P' = P + T
Rotation

A rotation repositions a point from position (x,y) to position (x',y') through an angle
θ relative to the coordinate origin.

The transformation equations below give the rotation of a point position P when the
pivot point is at the coordinate origin. In the figure, r is the constant distance of the
point from the origin, Ф is the original angular position of the point from the
horizontal, and θ is the rotation angle.

x' = x cos θ - y sin θ
y' = x sin θ + y cos θ

In matrix form, the rotation equation is

P' = R . P
Rotation Matrix

R = [ cos θ   -sin θ ]
    [ sin θ    cos θ ]

so that

[x']   [ cos θ   -sin θ ] [x]
[y'] = [ sin θ    cos θ ] [y]

Note : Positive values for the rotation angle define counterclockwise rotations about
the rotation point, and negative values rotate objects in the clockwise direction.

Scaling

A scaling transformation alters the size of an object. This operation can be carried
out for polygons by multiplying the coordinate values (x,y) of each vertex by
scaling factors Sx and Sy to produce the transformed coordinates (x',y'):

x' = x.Sx    y' = y.Sy

Scaling factor Sx scales the object in the x direction while Sy scales in the
y direction. The transformation equations in matrix form are

[x']   [ sx   0 ] [x]
[y'] = [ 0   sy ] [y]

or

P' = S . P

Turning a square (a) into a rectangle (b) with scaling factors sx = 2 and sy = 1.

Any positive numeric values are valid for scaling factors sx and sy. Values less than
1 reduce the size of the objects and values greater than 1 produce an enlarged
object.

Uniform scaling
Non-uniform scaling

To get uniform scaling it is necessary to assign the same value to sx and sy. Unequal
values for sx and sy result in non-uniform scaling.

Matrix Representation and Homogeneous Coordinates

Many graphics applications involve sequences of geometric transformations. An
animation, for example, might require an object to be translated and rotated at each
increment of the motion. In order to combine sequences of transformations we have
to eliminate the matrix addition. To achieve this we represent matrices as 3 x 3
instead of 2 x 2, introducing an additional dummy coordinate h. Here points are
specified by three numbers instead of two. This coordinate system is called the
homogeneous coordinate system and it allows us to express transformation
equations as matrix multiplications.

A Cartesian coordinate position (x,y) is represented as the homogeneous coordinate
triple (x,y,h).
For translation,

[x']   [1  0  tx] [x]
[y'] = [0  1  ty] [y]
[1 ]   [0  0  1 ] [1]

or P' = T(tx,ty) . P

For scaling,

[x']   [sx  0  0] [x]
[y'] = [0  sy  0] [y]
[1 ]   [0   0  1] [1]

or P' = S(sx,sy) . P

Composite Transformations

Translation

If two successive translation vectors (tx1,ty1) and (tx2,ty2) are applied to a
coordinate position P, the final transformed location P' is calculated as

P' = T(tx2,ty2) . {T(tx1,ty1) . P}
   = {T(tx2,ty2) . T(tx1,ty1)} . P

where

[1  0  tx2] [1  0  tx1]   [1  0  tx1+tx2]
[0  1  ty2] [0  1  ty1] = [0  1  ty1+ty2]
[0  0   1 ] [0  0   1 ]   [0  0     1   ]

or

T(tx2,ty2) . T(tx1,ty1) = T(tx1+tx2, ty1+ty2)

which shows that two successive translations are additive.
Scaling

Concatenating transformation matrices for two successive scaling operations gives

[sx2   0   0] [sx1   0   0]   [sx1.sx2     0       0]
[0   sy2   0] [0   sy1   0] = [0       sy1.sy2     0]
[0     0   1] [0     0   1]   [0           0       1]

or S(sx2,sy2) . S(sx1,sy1) = S(sx1.sx2, sy1.sy2)

General Pivot-Point Rotation

A rotation about any selected pivot point is obtained by first translating the object
so that the pivot point is at the coordinate origin, rotating about the origin, and then
translating the object back so that the pivot point returns to its original position.
The composite transformation matrix for this sequence is obtained with the
concatenation.
General Fixed-Point Scaling

Translate the object so that the fixed point coincides with the coordinate origin.
Scale the object with respect to the coordinate origin.
Use the inverse translation of step 1 to return the object to its original position.

Implementation of composite transformations:

#include <math.h>
#include <graphics.h>

typedef float Matrix3x3 [3][3];

Matrix3x3 theMatrix;
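A sketch of the matrix utilities such a listing typically builds on is shown below. The function names and parameter lists are assumptions made for this example; the routines compose translations, rotations and scalings into theMatrix by 3 x 3 homogeneous-matrix multiplication.

/* Set a 3x3 matrix to the identity */
void matrix3x3SetIdentity (Matrix3x3 m)
{
    int i, j;
    for (i = 0; i < 3; i++)
        for (j = 0; j < 3; j++)
            m[i][j] = (i == j);
}

/* Premultiply m1 into m2: m2 = m1 * m2 */
void matrix3x3PreMultiply (Matrix3x3 m1, Matrix3x3 m2)
{
    int r, c;
    Matrix3x3 tmp;

    for (r = 0; r < 3; r++)
        for (c = 0; c < 3; c++)
            tmp[r][c] = m1[r][0]*m2[0][c] + m1[r][1]*m2[1][c] + m1[r][2]*m2[2][c];
    for (r = 0; r < 3; r++)
        for (c = 0; c < 3; c++)
            m2[r][c] = tmp[r][c];
}

/* Concatenate a translation by (tx, ty) into theMatrix */
void translate2 (float tx, float ty)
{
    Matrix3x3 m;
    matrix3x3SetIdentity (m);
    m[0][2] = tx;
    m[1][2] = ty;
    matrix3x3PreMultiply (m, theMatrix);
}

/* Concatenate a rotation by angle a (radians) about pivot point (px, py) */
void rotate2 (float a, float px, float py)
{
    Matrix3x3 m;
    matrix3x3SetIdentity (m);
    m[0][0] = cos (a);  m[0][1] = -sin (a);
    m[1][0] = sin (a);  m[1][1] =  cos (a);
    m[0][2] = px * (1 - cos (a)) + py * sin (a);
    m[1][2] = py * (1 - cos (a)) - px * sin (a);
    matrix3x3PreMultiply (m, theMatrix);
}

/* Concatenate a scaling (sx, sy) relative to fixed point (fx, fy) */
void scale2 (float sx, float sy, float fx, float fy)
{
    Matrix3x3 m;
    matrix3x3SetIdentity (m);
    m[0][0] = sx;
    m[0][2] = (1 - sx) * fx;
    m[1][1] = sy;
    m[1][2] = (1 - sy) * fy;
    matrix3x3PreMultiply (m, theMatrix);
}

/* Apply theMatrix to a point (homogeneous form with h = 1) */
void transformPoint (float *x, float *y)
{
    float tx = theMatrix[0][0] * (*x) + theMatrix[0][1] * (*y) + theMatrix[0][2];
    float ty = theMatrix[1][0] * (*x) + theMatrix[1][1] * (*y) + theMatrix[1][2];
    *x = tx;
    *y = ty;
}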
Other Transformations

1. Reflection
2. Shear

Reflection

A reflection is a transformation that produces a mirror image of an object.
Reflection about the y axis flips x coordinates while keeping y coordinates the
same; it is accomplished with the transformation matrix

[-1  0  0]
[ 0  1  0]
[ 0  0  1]

Reflection of an object about the y axis

Reflection about the origin is accomplished with the transformation matrix

[-1   0  0]
[ 0  -1  0]
[ 0   0  1]
Reflection about the diagonal line y = -x is accomplished with the transformation
matrix

[ 0  -1  0]
[-1   0  0]
[ 0   0  1]
X-Shear

The x shear preserves the y coordinates, but changes the x values, which causes
vertical lines to tilt right or left as shown in the figure. The x-shear transformation
matrix is

[1  shx  0]
[0   1   0]
[0   0   1]

so that

x' = x + shx . y
y' = y

Y-Shear

The y shear preserves the x coordinates, but changes the y values, which causes
horizontal lines to slope up or down. The y-shear transformation matrix is

[ 1   0  0]
[shy  1  0]
[ 0   0  1]

so that

x' = x
y' = y + shy . x

XY-Shear

Shearing in both directions gives

x' = x + shx . y
y' = y + shy . x

Shearing Relative to Other Reference Lines

We can apply x-shear and y-shear transformations relative to other reference lines.
In x-shear transformations we can use a y reference line, and in y-shear
transformations we can use an x reference line.

We can generate x-direction shears relative to other reference lines with the
transformation matrix

[1  shx  -shx.yref]
[0   1       0    ]
[0   0       1    ]

with coordinate positions transformed as

x' = x + shx (y - yref)
y' = y
Similarly, y-direction shears relative to the reference line x = xref are generated
with the transformation matrix

[ 1    0      0     ]
[shy   1  -shy.xref ]
[ 0    0      1     ]

with coordinate positions transformed as

x' = x
y' = shy (x - xref) + y

Example
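For instance (values chosen here purely for illustration), applying an x-direction shear with shx = 2 and yref = 0 to the unit square with vertices (0,0), (1,0), (1,1) and (0,1) gives

x' = x + 2y,    y' = y

so the vertices map to (0,0), (1,0), (3,1) and (2,1): the square is sheared into a parallelogram whose top edge is shifted two units to the right.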
Two Dimensional Viewing

The Viewing Pipeline

A world-coordinate area selected for display is called a window, and the area on a
display device to which the window is mapped is called a viewport. The window
defines what is to be viewed; the viewport defines where it is to be displayed.

The viewing-coordinate reference frame is used to provide a method for setting up
arbitrary orientations for rectangular windows. Once the viewing reference frame is
established, we can transform descriptions in world coordinates to viewing
coordinates.

We then define a viewport in normalized coordinates (in the range from 0 to 1) and
map the viewing-coordinate description of the scene to normalized coordinates.
At the final step all parts of the picture that lie outside the viewport are clipped, and
the contents of the viewport are transferred to device coordinates. By changing the
position of the viewport, we can view objects at different positions on the display
area of an output device.

A point at position (xw,yw) in a designated window is mapped to viewport
coordinates (xv,yv) so that relative positions in the two areas are the same. The
figure illustrates the window to viewport mapping.
A point at position (xw,yw) in the window is mapped into position (xv,yv) in the
associated viewport. To maintain the same relative placement in the viewport as in
the window, we require

xv = xvmin + (xw - xwmin) . (xvmax - xvmin) / (xwmax - xwmin)
yv = yvmin + (yw - ywmin) . (yvmax - yvmin) / (ywmax - ywmin)

which can be written as

xv = xvmin + (xw - xwmin) sx
yv = yvmin + (yw - ywmin) sy

where the scaling factors are

sx = (xvmax - xvmin) / (xwmax - xwmin)
sy = (yvmax - yvmin) / (ywmax - ywmin)
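A small routine that applies this mapping directly is sketched below; the point type Pt2 and the parameter names are illustrative assumptions for this example.

typedef struct { float x, y; } Pt2;

Pt2 windowToViewport (Pt2 p,
                      float xwMin, float xwMax, float ywMin, float ywMax,
                      float xvMin, float xvMax, float yvMin, float yvMax)
{
    /* Scaling factors sx and sy from the equations above */
    float sx = (xvMax - xvMin) / (xwMax - xwMin);
    float sy = (yvMax - yvMin) / (ywMax - ywMin);
    Pt2 v;

    v.x = xvMin + (p.x - xwMin) * sx;
    v.y = yvMin + (p.y - ywMin) * sy;
    return v;
}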
The viewing reference system in a PHIGS application program is set up with the
following function:

evaluateViewOrientationMatrix (x0, y0, xv, yv, error, viewMatrix)

where x0,y0 are the coordinates of the viewing origin and parameters xv, yv are the
world coordinate positions for the view up vector. An integer error code is
generated if the input parameters are in error; otherwise the view matrix for the
world-to-viewing transformation is calculated. Any number of viewing
transformation matrices can be defined in an application.

To set up the elements of a window-to-viewport mapping, a view representation
function is used in which parameter ws designates the output device and parameter
viewIndex sets an integer identifier for this window-viewport pair. The matrices
viewMatrix and viewMappingMatrix can be concatenated and referenced by
viewIndex.

At the final stage we apply a workstation transformation by selecting a workstation
window-viewport pair:

setWorkstationWindow (ws, xwsWindmin, xwsWindmax, ywsWindmin, ywsWindmax)
setWorkstationViewport (ws, xwsVPortmin, xwsVPortmax, ywsVPortmin, ywsVPortmax)

where ws gives the workstation number. Window-coordinate extents are specified
in the range from 0 to 1 and viewport limits are in integer device coordinates.

Any number of output devices can be open in a particular application, and a
window-to-viewport transformation can be performed for each open output device.
This mapping, called the workstation transformation, is accomplished by selecting a
window area in normalized space and a viewport area in the coordinates of the
display device.

Clipping Operations

Any procedure that identifies those portions of a picture that are inside or outside of
a specified region of space is referred to as a clipping algorithm, or simply clipping.
The region against which an object is to be clipped is called the clip window.

Clipping algorithms can be applied to several primitive types: points, lines, areas
(polygons), curves and text.

Point Clipping: assuming the clip window is a rectangle in standard position, a
point P = (x,y) is saved for display if the following inequalities are satisfied:

xwmin <= x <= xwmax
ywmin <= y <= ywmax

Line Clipping

A line clipping procedure involves several parts. First we test a given line segment
to determine whether it lies completely inside the clipping window. If it does not,
we try to determine whether it lies completely outside the window. Finally, if we
cannot identify a line as completely inside or completely outside, we perform
intersection calculations with one or more clipping boundaries.
Process lines through "inside-outside" tests by checking the line endpoints. A line
with both endpoints inside all clipping boundaries, such as the line from P1 to P2,
is saved. A line with both endpoints outside any one of the clip boundaries, like
line P3P4, is outside the window.

Line clipping against a rectangular clip window

All other lines cross one or more clipping boundaries. For a line segment with
endpoints (x1,y1) and (x2,y2) and one or both endpoints outside the clipping
rectangle, the parametric representation

x = x1 + u (x2 - x1)
y = y1 + u (y2 - y1),    0 <= u <= 1

could be used to determine values of u for an intersection with the clipping
boundary coordinates. If the value of u for an intersection with a rectangle boundary
edge is outside the range 0 to 1, the line does not enter the interior of the window at
that boundary. If the value of u is within the range from 0 to 1, the line segment
does indeed cross into the clipping area. This method can be applied to each
clipping boundary edge in turn to determine whether any part of the line segment is
to be displayed.

Cohen-Sutherland Line Clipping

This is one of the oldest and most popular line-clipping procedures. The method
speeds up the processing of line segments by performing initial tests that reduce the
number of intersections that must be calculated.

Every line endpoint in a picture is assigned a four-digit binary code, called a region
code, that identifies the location of the point relative to the boundaries of the
clipping rectangle.

Binary region codes assigned to line endpoints according to relative position
with respect to the clipping rectangle

Regions are set up in reference to the boundaries. Each bit position in the region
code is used to indicate one of the four relative coordinate positions of the point
with respect to the clip window: to the left, right, top or bottom. By numbering the
bit positions in the region code as 1 through 4 from right to left, the coordinate
regions are correlated with bit positions as

bit 1: left
bit 2: right
bit 3: below
bit 4: above
A value of 1 in any bit position indicates that the point is in that relative position;
otherwise the bit position is set to 0. If a point is within the clipping rectangle the
region code is 0000. A point that is below and to the left of the rectangle has a
region code of 0101.

Bit values in the region code are determined by comparing endpoint coordinate
values (x,y) to the clip boundaries:

(1) Calculate the differences between endpoint coordinates and clipping boundaries.
(2) Use the resultant sign bit of each difference calculation to set the corresponding
value in the region code: bit 1 is the sign bit of x - xwmin, bit 2 is the sign bit of
xwmax - x, bit 3 is the sign bit of y - ywmin, and bit 4 is the sign bit of ywmax - y.

Once we have established region codes for all line endpoints, we can quickly
determine which lines are completely inside the clip window and which are clearly
outside. Any lines that are completely contained within the window boundaries
have a region code of 0000 for both endpoints, and we accept these lines. Any lines
that have a 1 in the same bit position in the region codes for each endpoint are
completely outside the clipping rectangle, and we reject these lines.

We would discard the line that has a region code of 1001 for one endpoint and a
code of 0101 for the other endpoint: both endpoints of this line are left of the
clipping rectangle, as indicated by the 1 in the first bit position of each region code.

A method that can be used to test lines for total clipping is to perform the logical
AND operation with both region codes. If the result is not 0000, the line is
completely outside the clipping region.

Lines that cannot be identified as completely inside or completely outside a clip
window by these tests are checked for intersection with the window boundaries.
Lines extending from one coordinate region to another may pass through the clip
window, or they may intersect clipping boundaries without entering the window.

Cohen-Sutherland clipping of a line from P1 to P2:

Starting with the bottom endpoint of the line from P1 to P2, we check P1 against
the left, right, and bottom boundaries in turn and find that this point is below the
clipping rectangle. We then find the intersection point P1' with the bottom
boundary and discard the line section from P1 to P1'.

The line has now been reduced to the section from P1' to P2. Since P2 is outside the
clip window, we check this endpoint against the boundaries and find that it is to the
left of the window. Intersection point P2' is calculated, but this point is above the
window. So the final intersection calculation yields P2'', and the line from P1' to
P2'' is saved. This completes processing for this line, so we save this part and go on
to the next line.
Point P3 in the next line is to the left of the clipping rectangle, so we determine the
intersection P3' and eliminate the line section from P3 to P3'. By checking region
codes for the line section from P3' to P4 we find that the remainder of the line is
below the clip window and can be discarded also.

Intersection points with a clipping boundary can be calculated using the
slope-intercept form of the line equation. For a line with endpoint coordinates
(x1,y1) and (x2,y2), the y coordinate of the intersection point with a vertical
boundary can be obtained with the calculation

y = y1 + m (x - x1)

where the x value is set either to xwmin or to xwmax, and the slope of the line is
calculated as

m = (y2 - y1) / (x2 - x1)

For the intersection with a horizontal boundary, the x coordinate can be calculated
as

x = x1 + (y - y1) / m

with y set either to ywmin or to ywmax.

Implementation of Cohen-Sutherland Line Clipping

#define ROUND(a) ((int)(a+0.5))

#define LEFT_EDGE 0x1
#define RIGHT_EDGE 0x2
#define BOTTOM_EDGE 0x4
#define TOP_EDGE 0x8
#define TRUE 1
#define FALSE 0

#define INSIDE(a) (!a)
#define REJECT(a,b) (a&b)
#define ACCEPT(a,b) (!(a|b))

unsigned char encode (wcPt2 pt, dcPt winmin, dcPt winmax)
{
    unsigned char code = 0x00;
    if (pt.x < winmin.x)
        code = code | LEFT_EDGE;
    if (pt.x > winmax.x)
        code = code | RIGHT_EDGE;
    if (pt.y < winmin.y)
        code = code | BOTTOM_EDGE;
    if (pt.y > winmax.y)
        code = code | TOP_EDGE;
    return (code);
}

void swappts (wcPt2 *p1, wcPt2 *p2)
{
    wcPt2 tmp;
    tmp = *p1;
    *p1 = *p2;
    *p2 = tmp;
}

void swapcodes (unsigned char *c1, unsigned char *c2)
{
    unsigned char tmp;
    tmp = *c1;
    *c1 = *c2;
    *c2 = tmp;
}

void clipline (dcPt winmin, dcPt winmax, wcPt2 p1, wcPt2 p2)
{
    unsigned char code1, code2;
    int done = FALSE, draw = FALSE;
    float m;

    while (!done)
    {
        code1 = encode (p1, winmin, winmax);
        code2 = encode (p2, winmin, winmax);
        if (ACCEPT (code1, code2))
        {
            done = TRUE;
            draw = TRUE;
        }
        else if (REJECT (code1, code2))
            done = TRUE;
        else
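        {
            /* Sketch of the remaining branch (an assumed reconstruction, using the
               same helpers swappts, swapcodes and the lineDDA routine from earlier):
               make sure p1 is the endpoint outside the window, clip it to the
               boundary it crosses, then repeat the loop with the shortened line. */
            if (INSIDE (code1))
            {
                swappts (&p1, &p2);
                swapcodes (&code1, &code2);
            }
            /* Slope is needed for all but vertical lines */
            if (p2.x != p1.x)
                m = (p2.y - p1.y) / (p2.x - p1.x);

            if (code1 & LEFT_EDGE)
            {
                p1.y += (winmin.x - p1.x) * m;
                p1.x = winmin.x;
            }
            else if (code1 & RIGHT_EDGE)
            {
                p1.y += (winmax.x - p1.x) * m;
                p1.x = winmax.x;
            }
            else if (code1 & BOTTOM_EDGE)
            {
                if (p2.x != p1.x)
                    p1.x += (winmin.y - p1.y) / m;
                p1.y = winmin.y;
            }
            else if (code1 & TOP_EDGE)
            {
                if (p2.x != p1.x)
                    p1.x += (winmax.y - p1.y) / m;
                p1.y = winmax.y;
            }
        }
    }
    if (draw)
        lineDDA (ROUND (p1.x), ROUND (p1.y), ROUND (p2.x), ROUND (p2.y));
}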
Liang-Barsky Line Clipping

In this algorithm the line is written in parametric form, x = x1 + u.∆x and
y = y1 + u.∆y with 0 <= u <= 1, and the point-clipping conditions are expressed as
four inequalities of the form u.pk <= qk for k = 1, 2, 3, 4, where
p1 = -∆x, q1 = x1 - xwmin; p2 = ∆x, q2 = xwmax - x1;
p3 = -∆y, q3 = y1 - ywmin; p4 = ∆y, q4 = ywmax - y1.

For each line, we calculate values for parameters u1 and u2 that define the part of
the line that lies within the clip rectangle. The value of u1 is determined by looking
at the rectangle edges for which the line proceeds from outside to inside (p < 0).
For these edges we calculate

rk = qk / pk

The value of u1 is taken as the largest of the set consisting of 0 and the various
values of r. The value of u2 is determined by examining the boundaries for which
the line proceeds from inside to outside (p > 0). A value of rk is calculated for each
of these boundaries, and the value of u2 is the minimum of the set consisting of 1
and the calculated r values.

If u1 > u2, the line is completely outside the clip window and it can be rejected.

Line intersection parameters are initialized to the values u1 = 0 and u2 = 1. For
each clipping boundary, the appropriate values for p and q are calculated and used
by the function clipTest to determine whether the line can be rejected or whether
the intersection parameter can be adjusted.

When p < 0, the parameter r is used to update u1; when p > 0, the parameter r is
used to update u2. If updating u1 or u2 results in u1 > u2, we reject the line. When
p = 0 and q < 0, we discard the line, since it is parallel to and outside the boundary.
If the line has not been rejected after all four values of p and q have been tested, the
endpoints of the clipped line are determined from the values of u1 and u2.

The Liang-Barsky algorithm is more efficient than the Cohen-Sutherland algorithm,
since intersection calculations are reduced. Each update of parameters u1 and u2
requires only one division, and window intersections of the line are computed only
once. The Cohen-Sutherland algorithm can repeatedly calculate intersections along
a line path, even though the line may be completely outside the clip window, and
each intersection calculation requires both a division and a multiplication.

Implementation of Liang-Barsky Line Clipping

#define ROUND(a) ((int)(a+0.5))

int clipTest (float p, float q, float *u1, float *u2)
{
    float r;
    int retVal = TRUE;

    if (p < 0.0)
    {
        r = q / p;
        if (r > *u2)
            retVal = FALSE;
        else
            if (r > *u1)
                *u1 = r;
    }
    else
        if (p > 0.0)
        {
            r = q / p;
            if (r < *u1)
                retVal = FALSE;
            else
                if (r < *u2)
                    *u2 = r;
        }
        else
            if (q < 0.0)
                retVal = FALSE;
    return (retVal);
}

void clipLine (dcPt winMin, dcPt winMax, wcPt2 p1, wcPt2 p2)
{
    float u1 = 0.0, u2 = 1.0, dx = p2.x - p1.x, dy;

    if (clipTest (-dx, p1.x - winMin.x, &u1, &u2))
        if (clipTest (dx, winMax.x - p1.x, &u1, &u2))
        {
            dy = p2.y - p1.y;
            if (clipTest (-dy, p1.y - winMin.y, &u1, &u2))
                if (clipTest (dy, winMax.y - p1.y, &u1, &u2))
                {
                    if (u2 < 1.0)
                    {
                        p2.x = p1.x + u2 * dx;
                        p2.y = p1.y + u2 * dy;
                    }
                    if (u1 > 0.0)
                    {
                        p1.x = p1.x + u1 * dx;
                        p1.y = p1.y + u1 * dy;
                    }
                    lineDDA (ROUND (p1.x), ROUND (p1.y), ROUND (p2.x), ROUND (p2.y));
                }
        }
}
Nicholl-Lee-Nicholl (NLN) Line Clipping

The four clipping regions used in the NLN algorithm when P1 is inside and P2 is
outside the clip window.

For the third case, when P1 is to the left and above the clip window, we use the
clipping regions shown in the figure below.

Fig : The two possible sets of clipping regions used in the NLN algorithm when P1
is above and to the left of the clip window
In this case, we have the two possibilities shown, depending on the position of P1
relative to the top left corner of the window. If P2 is in one of the regions T, L, TR,
TB, LR, or LB, this determines a unique clip window edge for the intersection
calculations. Otherwise, the entire line is rejected.

To determine the region in which P2 is located, we compare the slope of the line to
the slopes of the boundaries of the clip regions. For example, if P1 is left of the
clipping rectangle (Fig. a), then P2 is in region LT if

slope P1PTR < slope P1P2 < slope P1PTL

or

(yT - y1) / (xR - x1) < (y2 - y1) / (x2 - x1) < (yT - y1) / (xL - x1)

The coordinate difference and product calculations used in the slope tests are saved
and also used in the intersection calculations. From the parametric equations

x = x1 + (x2 - x1) u
y = y1 + (y2 - y1) u

an x-intersection position on the left window boundary is x = xL, with
u = (xL - x1)/(x2 - x1), so that the y-intersection position is

y = y1 + ((y2 - y1) / (x2 - x1)) (xL - x1)

And an intersection position on the top boundary has y = yT and
u = (yT - y1)/(y2 - y1), with

x = x1 + ((x2 - x1) / (y2 - y1)) (yT - y1)

POLYGON CLIPPING

To clip polygons, we need to modify the line-clipping procedures. A polygon
boundary processed with a line clipper may be displayed as a series of unconnected
line segments (Fig.), depending on the orientation of the polygon to the clipping
window.

Display of a polygon processed by a line clipping algorithm

For polygon clipping, we require an algorithm that will generate one or more closed
areas that are then scan converted for the appropriate area fill. The output of a
polygon clipper should be a sequence of vertices that defines the clipped polygon
boundaries.
Sutherland-Hodgeman Polygon Clipping

A polygon is clipped by processing all of its vertices against each clip-window
boundary in turn. There are four possible cases when processing vertices in
sequence around the perimeter of a polygon. As each pair of adjacent polygon
vertices is passed to a window boundary clipper, we make the following tests
(a code sketch of this per-boundary pass is given after the figures below):

1. If the first vertex is outside the window boundary and the second vertex is
inside, both the intersection point of the polygon edge with the window boundary
and the second vertex are added to the output vertex list.
2. If both input vertices are inside the window boundary, only the second vertex is
added to the output vertex list.
3. If the first vertex is inside the window boundary and the second vertex is outside,
only the edge intersection with the window boundary is added to the output vertex
list.
4. If both input vertices are outside the window boundary, nothing is added to the
output list.

Clipping a polygon against the left boundary of a window, starting with vertex 1.
Primed numbers are used to label the points in the output vertex list for this
window boundary.

Clipping a polygon against successive window boundaries.
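A minimal sketch of clipping a polygon against a single boundary, illustrating the four cases above, is shown below. The point type, inside test and intersection helper are assumptions made for this example, not the routines used in the fuller implementation later in these notes.

typedef struct { float x, y; } Pt2;

/* Inside test for one boundary; here we keep points with x >= xLeft (a left clip edge) */
static int inside (Pt2 p, float xLeft) { return p.x >= xLeft; }

/* Intersection of edge p1-p2 with the vertical line x = xLeft */
static Pt2 intersectLeft (Pt2 p1, Pt2 p2, float xLeft)
{
    Pt2 i;
    float u = (xLeft - p1.x) / (p2.x - p1.x);
    i.x = xLeft;
    i.y = p1.y + u * (p2.y - p1.y);
    return i;
}

/* Clip polygon pin[0..n-1] against the boundary x = xLeft; returns new vertex count */
int clipAgainstLeft (Pt2 *pin, int n, Pt2 *pout, float xLeft)
{
    int k, cnt = 0;
    for (k = 0; k < n; k++) {
        Pt2 first = pin[k];               /* first vertex of this polygon edge  */
        Pt2 second = pin[(k + 1) % n];    /* second vertex of this polygon edge */

        if (!inside (first, xLeft) && inside (second, xLeft)) {        /* case 1 */
            pout[cnt++] = intersectLeft (first, second, xLeft);
            pout[cnt++] = second;
        }
        else if (inside (first, xLeft) && inside (second, xLeft))      /* case 2 */
            pout[cnt++] = second;
        else if (inside (first, xLeft) && !inside (second, xLeft))     /* case 3 */
            pout[cnt++] = intersectLeft (first, second, xLeft);
        /* case 4: both vertices outside - add nothing */
    }
    return cnt;
}

Clipping against the full window would apply the same per-boundary pass for the right, bottom and top edges in turn.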
A point is added to the output vertex list only after it has been determined to be
inside or on a window boundary by all boundary clippers. Otherwise the point does
not continue in the pipeline.
void closeclip (dcPt wmin, dcPt wmax, wcPt2 *pout, int *cnt, wcPt2 *first[], wcPt2 *s)
{
    wcPt2 iPt;
    Edge b;

    for (b = left; b <= top; b++)
    {
        if (cross (s[b], *first[b], b, wmin, wmax))
        {
            iPt = intersect (s[b], *first[b], b, wmin, wmax);
            if (b < top)
                clippoint (iPt, b + 1, wmin, wmax, pout, cnt, first, s);
            else
            {
                pout[*cnt] = iPt;
                (*cnt)++;
            }
        }
    }
}

int clippolygon (dcPt wmin, dcPt wmax, int n, wcPt2 *pin, wcPt2 *pout)
{
    wcPt2 *first[N_EDGE] = {0, 0, 0, 0}, s[N_EDGE];
    int i, cnt = 0;

    for (i = 0; i < n; i++)
        clippoint (pin[i], left, wmin, wmax, pout, &cnt, first, s);
    closeclip (wmin, wmax, pout, &cnt, first, s);
    return (cnt);
}

Weiler-Atherton Polygon Clipping

This clipping procedure was developed as a method for identifying visible surfaces,
and so it can be applied with arbitrary polygon-clipping regions.

The basic idea in this algorithm is that instead of always proceeding around the
polygon edges as vertices are processed, we sometimes want to follow the window
boundaries. Which path we follow depends on the polygon-processing direction
(clockwise or counterclockwise) and whether the pair of polygon vertices currently
being processed represents an outside-to-inside pair or an inside-to-outside pair.
For clockwise processing of polygon vertices, we use the following rules:

For an outside-to-inside pair of vertices, follow the polygon boundary.
For an inside-to-outside pair of vertices, follow the window boundary in a
clockwise direction.

In the figure below, the processing direction in the Weiler-Atherton algorithm and
the resulting clipped polygon are shown for a rectangular clipping window.

An improvement on the Weiler-Atherton algorithm is the Weiler algorithm, which
applies constructive solid geometry ideas to clip an arbitrary polygon against any
polygon clipping region.

Curve Clipping

Curve-clipping procedures will involve nonlinear equations, and this requires more
processing than for objects with linear boundaries. The bounding rectangle for a
circle or other curved object can be used first to test for overlap with a rectangular
clip window.

If the bounding rectangle for the object is completely inside the window, we save
the object. If the rectangle is determined to be completely outside the window, we
discard the object. In either case, there is no further computation necessary.

But if the bounding rectangle test fails, we can look for other computation-saving
approaches. For a circle, we can use the coordinate extents of individual
49
CS2401 –Computer Graphics
quadrants and then octants for preliminary testing before calculating curve-window
intersections.
Text clipping

There are several techniques that can be used to provide text clipping in a graphics package. The clipping technique used will depend on the methods used to generate characters and the requirements of a particular application.

The simplest method for processing character strings relative to a window boundary is to use the all-or-none string-clipping strategy shown in Fig. If all of the string is inside a clip window, we keep it. Otherwise, the string is discarded. This procedure is implemented by considering a bounding rectangle around the text pattern. The boundary positions of the rectangle are then compared to the window boundaries, and the string is rejected if there is any overlap. This method produces the fastest text clipping.

A final method for handling text clipping is to clip the components of individual characters. We now treat characters in much the same way that we treated lines. If an individual character overlaps a clip window boundary, we clip off the parts of the character that are outside the window.

Figure : Text clipping performed on the components of individual characters
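As a minimal sketch of the all-or-none string test described above (the Rect type and function name are chosen for this example only, not part of the package in these notes):

typedef struct { float xmin, xmax, ymin, ymax; } Rect;

/* All-or-none string clipping: keep the string only if its bounding
   rectangle lies entirely inside the clip window. */
int keepString (Rect text, Rect window)
{
    return text.xmin >= window.xmin && text.xmax <= window.xmax &&
           text.ymin >= window.ymin && text.ymax <= window.ymax;
}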
Exterior clipping:

Procedure for clipping a picture to the interior of a region by eliminating everything outside the clipping region. By these procedures the inside region of the picture is saved. To clip a picture to the exterior of a specified region, the picture parts to be saved are those that are outside the region. This is called exterior clipping.

Objects within a window are clipped to the interior of the window when other, higher-priority windows overlap these objects. The objects are also clipped to the exterior of overlapping windows.

Parallel Projection:

Parallel projection is a method for generating a view of a solid object by projecting points on the object surface along parallel lines onto the display plane.

In parallel projection, parallel lines in the world coordinate scene project into parallel lines on the two dimensional display plane. This technique is used in engineering and architectural drawings to represent an object with a set of views that maintain relative proportions of the object. The appearance of the solid object can be reconstructed from the major views.
This makes objects further from the viewing position be displayed smaller than objects of the same size that are nearer to the viewing position.

In a perspective projection, parallel lines in a scene that are not parallel to the display plane are projected into converging lines. Scenes displayed using perspective projections appear more realistic, since this is the way that our eyes and a camera lens form images.

Depth Cueing:

Depth information is important to identify the viewing direction, which is the front and which is the back of a displayed object.

Depth cueing is a method for indicating depth with wire frame displays by varying the intensity of objects according to their distance from the viewing position.

Depth cueing is applied by choosing maximum and minimum intensity (or color) values and a range of distances over which the intensities are to vary.

Visible line and surface identification:

A simple way to identify the visible lines is to highlight them or to display them in a different color. Another method is to display the non visible lines as dashed lines.

Surface Rendering:

Surface rendering methods are used to generate a degree of realism in a displayed scene. Realism is attained in displays by setting the surface intensity of objects according to the lighting conditions in the scene and the surface properties.

Lighting conditions include the intensity and positions of light sources and the background illumination. Surface characteristics include the degree of transparency and how rough or smooth the surfaces are to be.

Exploded and Cutaway views:

Exploded and cutaway views of objects can be used to show the internal structure and relationship of the object parts. An alternative to exploding an object into its component parts is the cutaway view, which removes part of the visible surfaces to show the internal structure.

Three-dimensional and Stereoscopic Views:

In stereoscopic views, three dimensional views can be obtained by reflecting a raster image from a vibrating flexible mirror. The vibrations of the mirror are synchronized with the display of the scene on the CRT. As the mirror vibrates, the focal length varies so that each point in the scene is projected to a position corresponding to its depth.

Stereoscopic devices present two views of a scene: one for the left eye and the other for the right eye. The two views are generated by selecting viewing positions that correspond to the two eye positions of a single viewer.

These two views can be displayed on alternate refresh cycles of a raster monitor, and viewed through glasses that alternately darken first one lens then the other in synchronization with the monitor refresh cycles.

2.1.2 Three Dimensional Graphics Packages

The 3D package must include methods for mapping scene descriptions onto a flat viewing surface. There should be some consideration of how surfaces of solid objects are to be modeled, how visible surfaces can be identified, how transformations of objects are performed in space, and how to describe the additional spatial characteristics.

World coordinate descriptions are extended to 3D, and users are provided with output and input routines accessed with specifications such as

o Polyline3(n, WcPoints)
o Fillarea3(n, WcPoints)
o Text3(WcPoint, string)
o Getlocator3(WcPoint)
o Translate3(translateVector, matrixTranslate)
Where points and vectors are specified with 3 components, and transformation matrices have 4 rows and 4 columns.

2.2 Three Dimensional Object Representations

Representation schemes for solid objects are divided into two categories as follows:

1. Boundary Representation (B-reps)

It describes a three dimensional object as a set of surfaces that separate the object interior from the environment. Examples are polygon facets and spline patches.

2. Space Partitioning Representation

It describes the interior properties by partitioning the spatial region containing an object into a set of small, nonoverlapping, contiguous solids (usually cubes).

Eg: Octree Representation

2.2.1 Polygon Surfaces

A polygon surface is a boundary representation for a 3D graphics object: a set of polygons that enclose the object interior.

Polygon Tables

The polygon surface is specified with a set of vertex coordinates and associated attribute parameters. For each polygon input, the data are placed into tables that are to be used in the subsequent processing.

Polygon data tables can be organized into two groups: geometric tables and attribute tables.

Geometric Tables

Contain vertex coordinates and parameters to identify the spatial orientation of the polygon surfaces.

Attribute Tables

Contain attribute information for an object, such as parameters specifying the degree of transparency of the object and its surface reflectivity and texture characteristics.

A convenient organization for storing geometric data is to create three lists:

1. The Vertex Table

Coordinate values for each vertex in the object are stored in this table.

2. The Edge Table

It contains pointers back into the vertex table to identify the vertices for each polygon edge.

3. The Polygon Table

It contains pointers back into the edge table to identify the edges for each polygon.

This is shown in fig
Vertex Table            Edge Table         Polygon Surface Table
V1 : x1, y1, z1         E1 : V1, V2        S1 : E1, E2, E3
V2 : x2, y2, z2         E2 : V2, V3        S2 : E3, E4, E5, E6
V3 : x3, y3, z3         E3 : V3, V1
V4 : x4, y4, z4         E4 : V3, V4
V5 : x5, y5, z5         E5 : V4, V5
                        E6 : V5, V1

Listing the geometric data in three tables provides a convenient reference to the individual components (vertices, edges and polygons) of each object. The object can be displayed efficiently by using data from the edge table to draw the component lines.

Extra information can be added to the data tables for faster information extraction. For instance, the edge table can be expanded to include forward pointers into the polygon table so that common edges between polygons can be identified more rapidly:

E1 : V1, V2, S1
E2 : V2, V3, S1
E3 : V3, V1, S1, S2
E4 : V3, V4, S2
E5 : V4, V5, S2
E6 : V5, V1, S2

This is useful for the rendering procedure that must vary surface shading smoothly across the edges from one polygon to the next. Similarly, the vertex table can be expanded so that vertices are cross-referenced to corresponding edges.

Additional geometric information that is stored in the data tables includes the slope for each edge and the coordinate extents for each polygon. As vertices are input, we can calculate edge slopes, and we can scan the coordinate values to identify the minimum and maximum x, y and z values for individual polygons.

The more information included in the data tables, the easier it is to check for errors. Some of the tests that could be performed by a graphics package are:

1. That every vertex is listed as an endpoint for at least two edges.
2. That every edge is part of at least one polygon.
3. That every polygon is closed.
4. That each polygon has at least one shared edge.
5. That if the edge table contains pointers to polygons, every edge referenced by a polygon pointer has a reciprocal pointer back to the polygon.
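As a rough illustration of how such tables might be held in a program, the sketch below uses simple C structures; the field names and fixed array size are assumptions made for this example, not part of any package described in these notes.

#define MAX_POLY_EDGES 8

typedef struct { float x, y, z; } Vertex;            /* vertex table entry */

typedef struct {                                     /* expanded edge table entry */
    int v1, v2;                                      /* indices into vertex table */
    int s1, s2;                                      /* back-pointers to the polygons
                                                        sharing this edge (-1 if none) */
} EdgeRec;

typedef struct {                                     /* polygon table entry */
    int nEdges;
    int edges[MAX_POLY_EDGES];                       /* indices into edge table */
    float A, B, C, D;                                /* stored plane equation */
} Polygon;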
Plane Equations:

To produce a display of a 3D object, we must process the input data representation for the object through several procedures, such as:

- Transformation of the modeling and world coordinate descriptions to viewing coordinates.
- Then to device coordinates.
- Identification of visible surfaces.
- The application of surface-rendering procedures.

For these processes, we need information about the spatial orientation of the individual surface components of the object. This information is obtained from the vertex coordinate values and the equations that describe the polygon planes.

The equation for a plane surface is

Ax + By + Cz + D = 0 ----(1)

Where (x, y, z) is any point on the plane, and the coefficients A, B, C and D are constants describing the spatial properties of the plane.
We can obtain the values of A, B, C and D by solving a set of three plane equations using the coordinate values for three non collinear points in the plane.

For that, we can select three successive polygon vertices (x1, y1, z1), (x2, y2, z2) and (x3, y3, z3) and solve the following set of simultaneous linear plane equations for the ratios A/D, B/D and C/D:

(A/D)xk + (B/D)yk + (C/D)zk = -1,   k = 1, 2, 3 -----(2)

The solution for this set of equations can be obtained in determinant form, using Cramer's rule, as

      | 1  y1  z1 |            | x1  1  z1 |
A =   | 1  y2  z2 |      B =   | x2  1  z2 |
      | 1  y3  z3 |            | x3  1  z3 |

      | x1  y1  1 |               | x1  y1  z1 |
C =   | x2  y2  1 |      D = -    | x2  y2  z2 |        ------(3)
      | x3  y3  1 |               | x3  y3  z3 |

Expanding the determinants, we can write the calculations for the plane coefficients in the form:

A = y1 (z2 - z3) + y2 (z3 - z1) + y3 (z1 - z2)
B = z1 (x2 - x3) + z2 (x3 - x1) + z3 (x1 - x2)
C = x1 (y2 - y3) + x2 (y3 - y1) + x3 (y1 - y2)
D = -x1 (y2 z3 - y3 z2) - x2 (y3 z1 - y1 z3) - x3 (y1 z2 - y2 z1)        ------(4)

As vertex values and other information are entered into the polygon data structure, values for A, B, C and D are computed for each polygon and stored with the other polygon data.

Plane equations are also used to identify the position of spatial points relative to the plane surfaces of an object. For any point (x, y, z) not on a plane with parameters A, B, C, D, we have

Ax + By + Cz + D ≠ 0

We can identify the point as either inside or outside the plane surface according to the sign (negative or positive) of Ax + By + Cz + D:

If Ax + By + Cz + D < 0, the point (x, y, z) is inside the surface.

If Ax + By + Cz + D > 0, the point (x, y, z) is outside the surface.

These inequality tests are valid in a right handed Cartesian system, provided the plane parameters A, B, C and D were calculated using vertices selected in a counter clockwise order when viewing the surface in an outside-to-inside direction.
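A small C sketch of these calculations, following equation (4) and the inside/outside test above; the type and function names are chosen for this example only.

typedef struct { float x, y, z; } Pt3;

/* Plane coefficients from three non-collinear vertices, Eq. (4). */
void planeCoeff (Pt3 p1, Pt3 p2, Pt3 p3,
                 float *A, float *B, float *C, float *D)
{
    *A = p1.y*(p2.z - p3.z) + p2.y*(p3.z - p1.z) + p3.y*(p1.z - p2.z);
    *B = p1.z*(p2.x - p3.x) + p2.z*(p3.x - p1.x) + p3.z*(p1.x - p2.x);
    *C = p1.x*(p2.y - p3.y) + p2.x*(p3.y - p1.y) + p3.x*(p1.y - p2.y);
    *D = -p1.x*(p2.y*p3.z - p3.y*p2.z)
         - p2.x*(p3.y*p1.z - p1.y*p3.z)
         - p3.x*(p1.y*p2.z - p2.y*p1.z);
}

/* Sign of Ax + By + Cz + D: negative means (x,y,z) is inside the surface,
   positive means outside. */
float planeSide (float A, float B, float C, float D, Pt3 p)
{
    return A*p.x + B*p.y + C*p.z + D;
}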
Polygon Meshes

A single plane surface can be specified with a function such as fillArea. But when object surfaces are to be tiled, it is more convenient to specify the surface facets with a mesh function.

One type of polygon mesh is the triangle strip. This function produces n-2 connected triangles, given the coordinates for n vertices.

Figure : A triangle strip formed with 11 triangles connecting 13 vertices

Another similar function is the quadrilateral mesh, which generates a mesh of (n-1) by (m-1) quadrilaterals, given the coordinates for an n by m array of vertices. Figure shows 20 vertices forming a mesh of 12 quadrilaterals.
Torus
Torus is a doughnut shaped object.
It can be generated by rotating a circle or other conic about a specified
axis.
Figure : A torus with a circular cross section centered on the coordinate origin
The Cartesian representation for points over the surface of a torus can be
written in the form
{ r - [ (x/rx)^2 + (y/ry)^2 ]^(1/2) }^2 + (z/rz)^2 = 1

where r is any given offset value.

Parametric representation for a torus is similar to that for an ellipse, except that angle φ extends over 360°. Using latitude and longitude angles φ and θ, we can describe the torus surface as the set of points that satisfy

x = rx (r + cosφ) cosθ,   -π <= φ <= π
y = ry (r + cosφ) sinθ,   -π <= θ <= π
z = rz sinφ

2.2.4 Spline Representations

A spline is a flexible strip used to produce a smooth curve through a designated set of points. Several small weights are distributed along the length of the strip to hold it in position on the drafting table as the curve is drawn.

The term spline curve refers to any composite curve formed with polynomial sections satisfying specified continuity conditions at the boundary of the pieces. A spline surface can be described with two sets of orthogonal spline curves.

Splines are used in graphics applications to design curve and surface shapes, to digitize drawings for computer storage, and to specify animation paths for the objects or the camera in a scene. CAD applications for splines include the design of automobile bodies, aircraft and spacecraft surfaces, and ship hulls.

Interpolation and Approximation Splines

A spline curve can be specified by a set of coordinate positions, called control points, which indicate the general shape of the curve.

These control points are fitted with piecewise continuous parametric polynomial functions in one of two ways:

1. When polynomial sections are fitted so that the curve passes through each control point, the resulting curve is said to interpolate the set of control points.

Figure : A set of six control points interpolated with piecewise continuous polynomial sections

2. When the polynomials are fitted to the general control point path without necessarily passing through any control points, the resulting curve is said to approximate the set of control points.

Figure : A set of six control points approximated with piecewise continuous polynomial sections

Interpolation curves are used to digitize drawings or to specify animation paths.
Approximation curves are used as design tools to structure object surfaces.

A spline curve is designed, modified and manipulated with operations on the control points. The curve can be translated, rotated or scaled with transformations applied to the control points.

The convex polygon boundary that encloses a set of control points is called the convex hull. To envision the shape of the convex hull, imagine a rubber band stretched around the positions of the control points, so that each control point is either on the perimeter of the hull or inside it.

Figure : Convex hull shapes (dashed lines) for two sets of control points

We set parametric continuity by matching the parametric derivatives of adjoining curve sections at their common boundary.

Zero order parametric continuity, referred to as C0 continuity, means that the curves meet, i.e., the values of x, y, and z evaluated at u2 for the first curve section are equal, respectively, to the values of x, y, and z evaluated at u1 for the next curve section.

First order parametric continuity, referred to as C1 continuity, means that the first parametric derivatives of the coordinate functions in equation (a) for two successive curve sections are equal at their joining point.

Second order parametric continuity, or C2 continuity, means that both the first and second parametric derivatives of the two curve sections are equal at their intersection.

Higher order parametric continuity conditions are defined similarly.

Figure : Piecewise construction of a curve by joining two curve segments using different orders of continuity: a) zero order continuity only
Second order geometric continuity, referred to as G2 continuity, means that both the first and second parametric derivatives of the two curve sections are proportional at their boundary. Here the curvatures of the two sections will match at the joining position.

Figure : Three control points fitted with two curve sections joined with a) parametric continuity, b) geometric continuity, where the tangent vector of curve C3 at point p1 has a greater magnitude than the tangent vector of curve C1 at p1

To illustrate these three equivalent specifications, suppose we have the following parametric cubic polynomial representation for the x coordinate along the path of a spline section:

x(u) = ax u^3 + bx u^2 + cx u + dx,   0 <= u <= 1 ----------(1)

Boundary conditions for this curve might be set on the endpoint coordinates x(0) and x(1) and on the parametric first derivatives at the endpoints, x'(0) and x'(1). These boundary conditions are sufficient to determine the values of the four coefficients ax, bx, cx and dx.

From the boundary conditions we can obtain the matrix that characterizes this spline curve by first rewriting eq (1) as the matrix product

x(u) = [u^3  u^2  u  1] . [ax  bx  cx  dx]T = U . C ----------(2)

where U is the row matrix of powers of parameter u and C is the coefficient column matrix.
Using equation (2) we can write the boundary conditions in matrix form and solve for the coefficient matrix C as

C = Mspline . Mgeom -----(3)

Where Mgeom is a four-element column matrix containing the geometric constraint values on the spline, and Mspline is the 4 x 4 matrix that transforms the geometric constraint values to the polynomial coefficients and provides a characterization for the spline curve. Matrix Mgeom contains control point coordinate values and other geometric constraints.

We can substitute the matrix representation for C into equation (2) to obtain

x(u) = U . Mspline . Mgeom ------(4)

The matrix Mspline, characterizing a spline representation, called the basis matrix, is useful for transforming from one spline representation to another.

Finally, we can expand equation (4) to obtain a polynomial representation for coordinate x in terms of the geometric constraint parameters:

x(u) = ∑ gk . BFk(u)

where gk are the constraint parameters, such as the control point coordinates and the slope of the curve at the control points, and BFk(u) are the polynomial blending functions.

2.3 Visualization of Data Sets

Scientific visualization is used to visually display, enhance and manipulate information to allow better understanding of the data. Similar methods employed by commerce, industry and other nonscientific areas are sometimes referred to as business visualization.

Data sets are classified according to their spatial distribution (2D or 3D) and according to data type (scalars, vectors, tensors and multivariate data).

Visual Representations for Scalar Fields

A scalar quantity is one that has a single value. Scalar data sets contain values that may be distributed in time as well as over spatial positions; the values may also be functions of other scalar parameters. Examples of physical scalar quantities are energy, density, mass, temperature and water content.

A common method for visualizing a scalar data set is to use graphs or charts that show the distribution of data values as a function of other parameters, such as position and time.

Pseudo-color methods are also used to distinguish different values in a scalar data set, and color coding techniques can be combined with graph and chart models. To color code a scalar data set we choose a range of colors and map the range of data values to the color range. Color coding a data set can be tricky, because some color combinations can lead to misinterpretations of the data.

Contour plots are used to display isolines (lines of constant scalar value) for a data set distributed over a surface. The isolines are spaced at some
convenient interval to show the range and variation of the data values over the region of space. Contouring methods are applied to a set of data values that is distributed over a regular grid.

A 2D contouring algorithm traces the isolines from cell to cell within the grid by checking the four corners of grid cells to determine which cell edges are crossed by a particular isoline.

Figure : The path of an isoline across five grid cells

Sometimes isolines are plotted with spline curves, but spline fitting can lead to misinterpretation of the data sets. For example, two spline isolines could cross, or curved isoline paths might not be a true indicator of data trends, since data values are known only at the cell corners.

For 3D scalar data fields we can take cross-sectional slices and display the 2D data distributions over the slices. Visualization packages provide a slicer routine that allows cross sections to be taken at any angle.

Instead of looking at 2D cross sections, we plot one or more isosurfaces, which are simply 3D contour plots. When two overlapping isosurfaces are displayed, the outer surface is made transparent so that we can view the shapes of both isosurfaces.

Volume rendering, which is like an X-ray picture, is another method for visualizing a 3D data set. The interior information about a data set is projected to a display screen using the ray-casting method, along the ray path from each screen pixel.

Figure : Volume visualization of a regular, Cartesian data grid using ray casting to examine interior data values

Data values at the grid positions are averaged so that one value is stored for each voxel of the data space. How the data are encoded for display depends on the application. For this volume visualization, a color-coded plot of the distance to the maximum voxel value along each pixel ray was displayed.

Visual Representations for Vector Fields

A vector quantity V in three-dimensional space has three scalar values (Vx, Vy, Vz), one for each coordinate direction, and a two-dimensional vector has two components (Vx, Vy). Another way to describe a vector quantity is by giving its magnitude |V| and its direction as a unit vector u.

As with scalars, vector quantities may be functions of position, time, and other parameters. Some examples of physical vector quantities are velocity, acceleration, force, electric fields, magnetic fields,
gravitational fields, and electric current.
One way to visualize a vector field is to plot each data point as a small arrow that shows the magnitude and direction of the vector. This method is most often used with cross-sectional slices, since it can be difficult to see the trends in a three-dimensional region cluttered with overlapping arrows. Magnitudes for the vector values can be shown by varying the lengths of the arrows.

Vector values are also represented by plotting field lines or streamlines. Field lines are commonly used for electric, magnetic and gravitational fields. The magnitude of the vector values is indicated by the spacing between field lines, and the direction is the tangent to the field.

Figure : Field line representation for a vector data set

Visual Representations for Tensor Fields

A tensor quantity in three-dimensional space has nine components and can be represented with a 3 by 3 matrix. This representation is used for a second-order tensor, and higher-order tensors do occur in some applications.

Some examples of physical, second-order tensors are stress and strain in a material subjected to external forces, conductivity of an electrical conductor, and the metric tensor, which gives the properties of a particular coordinate space. The stress tensor in Cartesian coordinates, for example, can be represented as a 3 by 3 matrix of the stress components.

Tensor quantities are frequently encountered in anisotropic materials, which have different properties in different directions. The x, xy, and xz elements of the conductivity tensor, for example, describe the contributions of electric field components in the x, y, and z directions to the current in the x direction.

Usually, physical tensor quantities are symmetric, so that the tensor has only six distinct values. Visualization schemes for representing all six components of a second-order tensor quantity are based on devising shapes that have six parameters.

Instead of trying to visualize all six components of a tensor quantity, we can reduce the tensor to a vector or a scalar. By applying tensor-contraction operations, we can obtain a scalar representation.

Visual Representations for Multivariate Data Fields

In some applications, at each grid position over some region of space, we may have multiple data values, which can be a mixture of scalar, vector, and even tensor values.

A method for displaying multivariate data fields is to construct graphical objects, sometimes referred to as glyphs, with multiple parts. Each part of a glyph represents a physical quantity. The size and color of each part can be used to display information about scalar magnitudes. To give directional information for a vector field, we can use a wedge, a cone, or some other pointing shape for the glyph part representing the vector.

2.4 Three Dimensional Geometric and Modeling Transformations

Geometric transformations and object modeling in three dimensions are extended from two-dimensional methods by including considerations for the z coordinate.

2.4.1 Translation

In a three dimensional homogeneous coordinate representation, a point or an object is translated from position P = (x, y, z) to position P' = (x', y', z') with the matrix operation
x'        1  0  0  tx       x
y'    =   0  1  0  ty       y
z'        0  0  1  tz       z        --------(1)
1         0  0  0  1        1

or

P' = T(tx, ty, tz) . P        --------(2)

where the translation parameters tx, ty and tz specify the translation distances, so that

x' = x + tx
y' = y + ty
z' = z + tz        ---------(3)
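As a quick illustration (not a routine from these notes), building the translation matrix of Eq. (1) and applying a 4 x 4 homogeneous matrix to a point can be sketched in C as follows; the row-major matrix layout is an assumption of this example.

/* Multiply a 4x4 homogeneous matrix M by the column point p = [x y z 1]. */
void transformPoint (float M[4][4], float p[4], float result[4])
{
    int r, c;
    for (r = 0; r < 4; r++) {
        result[r] = 0.0f;
        for (c = 0; c < 4; c++)
            result[r] += M[r][c] * p[c];
    }
}

/* Build the translation matrix of Eq. (1). */
void makeTranslate (float tx, float ty, float tz, float M[4][4])
{
    int r, c;
    for (r = 0; r < 4; r++)
        for (c = 0; c < 4; c++)
            M[r][c] = (r == c) ? 1.0f : 0.0f;    /* start from identity */
    M[0][3] = tx;  M[1][3] = ty;  M[2][3] = tz;  /* translation column  */
}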
Figure : Translating a point with translation vector T = (tx, ty, tz)

2.4.2 Rotation
To generate a rotation transformation for an object, an axis of rotation must be designated, about which the object is to be rotated, and the amount of angular rotation must also be specified.
Positive rotation angles produce counter clockwise rotations about a
coordinate axis.
x' = x cos θ - y sin θ
y' = x sin θ + y cos θ
z' = z --------------------------(2)

Parameter θ specifies the rotation angle. In homogeneous coordinate form, the 3D z-axis rotation equations are expressed as

x'        cosθ  -sinθ  0  0       x
y'    =   sinθ   cosθ  0  0       y
z'        0      0     1  0       z        -------(3)
1         0      0     0  1       1

which we can write more compactly as

P' = Rz(θ) . P ------------------(4)

The below figure illustrates rotation of an object about the z axis.

Transformation equations for rotation about the other two coordinate axes can be obtained with a cyclic permutation of the coordinate parameters x, y and z in equation (2), i.e., we use the replacements

x → y → z → x ---------(5)

Substituting permutations (5) in equation (2), we get the equations for an x-axis rotation:

y' = y cosθ - z sinθ
z' = y sinθ + z cosθ ---------------(6)
x' = x

which can be written in the homogeneous coordinate form

x'        1  0     0      0       x
y'    =   0  cosθ  -sinθ  0       y
z'        0  sinθ  cosθ   0       z        -------(7)
1         0  0     0      1       1
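The coordinate-axis rotation equations translate directly into code. The sketch below is illustrative only (angle in radians, function name chosen for this example); it applies Eq. (2) for a z-axis rotation, and the x- and y-axis versions follow from the cyclic replacements of Eq. (5).

#include <math.h>

/* Rotate a point about the z axis by angle theta, following Eq. (2);
   the z coordinate is left unchanged. */
void rotatePointZ (double theta, double *x, double *y)
{
    double xr = *x * cos (theta) - *y * sin (theta);
    double yr = *x * sin (theta) + *y * cos (theta);
    *x = xr;
    *y = yr;
}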
Cyclically permuting coordinates in equation (6) gives the transformation equations for a y-axis rotation:

z' = z cosθ - x sinθ
x' = z sinθ + x cosθ ---------------(9)
y' = y

The matrix representation for y-axis rotation is

x'        cosθ   0  sinθ  0       x
y'    =   0      1  0     0       y
z'        -sinθ  0  cosθ  0       z        --------(10)
1         0      0  0     1       1

An inverse rotation matrix is formed by replacing the rotation angle θ by -θ. Negative values for rotation angles generate rotations in a clockwise direction, so the identity matrix is produced when any rotation matrix is multiplied by its inverse.

Since only the sine function is affected by the change in sign of the rotation angle, the inverse matrix can also be obtained by interchanging rows and columns; i.e., we can calculate the inverse of any rotation matrix R by evaluating its transpose (R-1 = RT).

General Three Dimensional Rotations

A rotation matrix for any axis that does not coincide with a coordinate axis can be set up as a composite transformation involving combinations of translations and the coordinate-axis rotations.
In such case, we need rotations to align the axis with a selected coordinate
axis and to bring the axis back to its original orientation.
Given the specifications for the rotation axis and the rotation angle, we can
accomplish the required rotation in five steps:
1. Translate the object so that the rotation axis passes through the
coordinate origin.
2. Rotate the object so that the axis of rotation coincides with one of
the coordinate axes.
3. Perform the specified rotation about that coordinate axis.
4. Apply inverse rotations to bring the rotation axis back to its
original orientation.
5. Apply the inverse translation to bring the rotation axis back to its
original position.
Figure : Five transformation steps

2.4.3 Scaling

The matrix expression for the scaling transformation of a position P = (x, y, z) relative to the coordinate origin can be written as

x'        sx  0   0   0       x
y'    =   0   sy  0   0       y
z'        0   0   sz  0       z        --------(11)
1         0   0   0   1       1
x' = x . sx
y' = y . sy
z' = z . sz

Scaling an object changes the size of the object and repositions the object relative to the coordinate origin.

If the transformation parameters are not equal, relative dimensions in the object are changed. The original shape of the object is preserved with a uniform scaling (sx = sy = sz).
Scaling with respect to a selected fixed position (xf, yf, zf) can be represented with the following transformation sequence:
1. Translate the fixed point to the origin.
2. Scale the object relative to the coordinate origin using Eq.11.
3. Translate the fixed point back to its original position. This
sequence of transformation is shown in the below figure .
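The composite matrix for this three-step sequence can be written out directly, as in the C sketch below (illustrative only; the row-major matrix layout matches the earlier translation sketch).

/* Composite scaling about a fixed point (xf, yf, zf):
   T(xf, yf, zf) . S(sx, sy, sz) . T(-xf, -yf, -zf). */
void makeScaleAboutPoint (float sx, float sy, float sz,
                          float xf, float yf, float zf, float M[4][4])
{
    int r, c;
    for (r = 0; r < 4; r++)
        for (c = 0; c < 4; c++)
            M[r][c] = 0.0f;
    M[0][0] = sx;  M[0][3] = (1.0f - sx) * xf;
    M[1][1] = sy;  M[1][3] = (1.0f - sy) * yf;
    M[2][2] = sz;  M[2][3] = (1.0f - sz) * zf;
    M[3][3] = 1.0f;
}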
Composite three dimensional transformations can be formed by multiplying the matrix representations for the individual operations in the transformation sequence.

This concatenation is carried out from right to left, where the rightmost matrix is the first transformation to be applied to an object and the leftmost matrix is the last transformation.

A sequence of basic, three-dimensional geometric transformations is combined to produce a single composite transformation, which can be applied to the coordinate definition of an object.

Some of the basic 3D transformation functions are:

translate (translateVector, matrixTranslate)
rotateX (thetaX, xMatrixRotate)
rotateY (thetaY, yMatrixRotate)
rotateZ (thetaZ, zMatrixRotate)
scale3 (scaleVector, matrixScale)

The order of the transformation sequence for the buildTransformationMatrix3 and composeTransformationMatrix3 functions is the same as in 2 dimensions:

1. scale
2. rotate
3. translate

Once a transformation matrix is specified, the matrix can be applied to specified points.

The transformations for hierarchical construction can be set using structures with the function

setLocalTransformation3 (matrix, type)

where parameter matrix specifies the elements of a 4 by 4 transformation matrix, and parameter type can be assigned one of the values: Preconcatenate, Postconcatenate, or replace.
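The concatenation described above is ordinary 4 x 4 matrix multiplication. A minimal sketch (function name chosen for this example):

/* out = A . B; out must not alias A or B. Applied to points right-to-left,
   so the rightmost factor acts on the object first. */
void matMult4 (float A[4][4], float B[4][4], float out[4][4])
{
    int i, j, k;
    for (i = 0; i < 4; i++)
        for (j = 0; j < 4; j++) {
            out[i][j] = 0.0f;
            for (k = 0; k < 4; k++)
                out[i][j] += A[i][k] * B[k][j];
        }
}

With this helper, a scale-rotate-translate sequence would be composed as matMult4(T, R, TR); matMult4(TR, S, M); so that the scaling matrix S is the first transformation applied to object points.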
2.4.7 Modeling and Coordinate Transformations
In modeling, objects are described in a local (modeling) coordinate reference frame; then the objects are repositioned into a world coordinate scene.

For instance, tables, chairs and other furniture, each defined in a local coordinate system, can be placed into the description of a room defined in another reference frame by transforming the furniture coordinates to room coordinates. Then the room might be transformed into a larger scene constructed in world coordinates.

Three dimensional objects and scenes are constructed using structure operations. Object descriptions are transformed from modeling coordinates to world coordinates, or to another system in the hierarchy.

Coordinate descriptions of objects are transferred from one system to another system with the same procedures used to obtain two dimensional coordinate transformations.

A transformation matrix has to be set up to bring the two coordinate systems into alignment:

- First, a translation is set up to bring the new coordinate origin to the position of the other coordinate origin.

- Then a sequence of rotations is made to align the corresponding coordinate axes.

- If different scales are used in the two coordinate systems, a scaling transformation may also be necessary to compensate for the differences in coordinate intervals.

If a second coordinate system is defined with origin (x0, y0, z0) and axis vectors as shown in the figure, relative to an existing Cartesian reference frame, we first construct the translation matrix T(-x0, -y0, -z0); then we can use the unit axis vectors to form the coordinate rotation matrix

        u'x1  u'x2  u'x3  0
R =     u'y1  u'y2  u'y3  0
        u'z1  u'z2  u'z3  0
        0     0     0     1

which transforms the unit vectors u'x, u'y and u'z onto the x, y and z axes, respectively.

Figure : Transformation of an object description from one coordinate system to another

The complete coordinate-transformation sequence is given by the composite matrix R . T. This matrix correctly transforms coordinate descriptions from one Cartesian system to another, even if one system is left-handed and the other is right-handed.

2.5 Three-Dimensional Viewing

In three dimensional graphics applications,

- we can view an object from any spatial position: from the front, from above or from the back.

- we could generate a view of what we could see if we were standing in the middle of a group of objects or inside a single object, such as a building.

2.5.1 Viewing Pipeline:
In the view of a three dimensional scene, to take a snapshot we need to do the following steps:

1. Positioning the camera at a particular point in space.

2. Deciding the camera orientation, i.e., pointing the camera and rotating it around the line of sight to set up the direction for the picture.

3. When we snap the shutter, the scene is cropped to the size of the 'window' of the camera and light from the visible surfaces is projected onto the camera film.

In such a way, the below figure shows the three dimensional transformation pipeline, from modeling coordinates to final device coordinates:

Modeling Co-ordinates → (Modeling Transformation) → World Co-ordinates → (Viewing Transformation) → Viewing Co-ordinates → (Projection Transformation) → Projection Co-ordinates → (Work Station Transformation) → Device Co-ordinates

Processing Steps

1. Once the scene has been modeled, world coordinate positions are converted to viewing coordinates.

2. The viewing coordinate system is used in graphics packages as a reference for specifying the observer viewing position and the position of the projection plane.

3. Projection operations are performed to convert the viewing coordinate description of the scene to coordinate positions on the projection plane, which will then be mapped to the output device.

4. Objects outside the viewing limits are clipped from further consideration, and the remaining objects are processed through visible surface identification and surface rendering procedures to produce the display within the device viewport.

2.5.2 Viewing Coordinates

Specifying the view plane

The view for a scene is chosen by establishing the viewing coordinate system, also called the view reference coordinate system.

A viewplane, or projection plane, is set up perpendicular to the viewing zv axis. World coordinate positions in the scene are transformed to viewing coordinates, then viewing coordinates are projected to the view plane.

The view reference point is a world coordinate position, which is the origin of the viewing coordinate system. It is chosen to be close to or on the surface of some object in a scene.

Then we select the positive direction for the viewing zv axis, and the orientation of the view plane, by specifying the view plane normal vector N. Here the world coordinate position establishes the direction for N relative either to the world origin or to the viewing coordinate origin.
        1  0  0  -x0                 u1  u2  u3  0
T =     0  1  0  -y0          R =    v1  v2  v3  0
        0  0  1  -z0                 n1  n2  n3  0
        0  0  0  1                   0   0   0   1
which transforms u into the world xw axis, v onto the yw axis and n onto
the zw axis.
2. Perspective Projection – Here, object positions are transformed to the view plane along lines that converge to a point called the projection reference point.

Parallel Projections

Parallel projections are specified with a projection vector that defines the direction for the projection lines. When the projection is perpendicular to the view plane, it is said to be an orthographic parallel projection; otherwise it is said to be an oblique parallel projection.

Figure : Orientation of the projection vector Vp to produce an orthographic projection (a) and an oblique projection (b)

Orthographic Projection
Orthographic projections are used to produce the front, side and top views of an object. Front, side and rear orthographic projections of an object are called elevations. A top orthographic projection is called a plan view.

The most commonly used axonometric projection is the isometric projection. It can be generated by aligning the projection plane so that it intersects each coordinate axis in which the object is defined at the same distance from the origin.

Figure : Isometric projection for a cube

This projection gives the measurement of lengths and angles accurately.
Point (x, y, z) is projected to position (xp, yp) on the view plane. The oblique projection line from (x, y, z) to (xp, yp) makes an angle α with the line on the projection plane that joins (xp, yp) and (x, y). This line, of length L, is at an angle φ with the horizontal direction in the projection plane.

The projection coordinates are expressed in terms of x, y, L and φ as

xp = x + L cosφ        - - - -(1)
yp = y + L sinφ

Length L depends on the angle α and the z coordinate of the point to be projected:

tanα = z / L

thus,

L = z / tanα = z L1

where L1 is the inverse of tanα, which is also the value of L when z = 1. The oblique projection equations can then be written as

xp = x + z (L1 cosφ)
yp = y + z (L1 sinφ)

The transformation matrix for producing any parallel projection onto the xv yv plane is

              1  0  L1 cosφ  0
Mparallel =   0  1  L1 sinφ  0
              0  0  1        0
              0  0  0        1

An orthographic projection is obtained when L1 = 0 (which occurs at a projection angle α of 90°). Oblique projections are generated with nonzero values for L1.

Perspective Projections

To obtain a perspective projection of a 3D object, we transform points along projection lines that meet at the projection reference point.

If the projection reference point is set at position zprp along the zv axis, and the view plane is placed at zvp as in fig, we can write equations describing coordinate positions along this perspective projection line in parametric form as

x' = x - x u
y' = y - y u
z' = z - (z - zprp) u

Figure : Perspective projection of a point P with coordinates (x, y, z) to position (xp, yp, zvp) on the view plane.
Parameter u takes values from 0 to 1, and coordinate position (x', y', z') represents any point along the projection line. When u = 0, we are at point P = (x, y, z). At the other end of the line, u = 1, and we have the projection reference point coordinates (0, 0, zprp).

On the view plane, z' = zvp, and z' can be solved for parameter u at this position along the projection line:

u = (zvp - z) / (zprp - z)

Substituting this value of u into the equations for x' and y', we obtain the perspective transformation equations

xp = x ((zprp - zvp) / (zprp - z)) = x (dp / (zprp - z))
yp = y ((zprp - zvp) / (zprp - z)) = y (dp / (zprp - z))        --------------(2)

where dp = zprp - zvp is the distance of the view plane from the projection reference point.

Using a 3D homogeneous coordinate representation, we can write the perspective projection transformation (2) in matrix form as

xh        1  0  0            0                  x
yh    =   0  1  0            0                  y
zh        0  0  -(zvp/dp)    zvp(zprp/dp)       z        --------(3)
h         0  0  -1/dp        zprp/dp            1

In this representation, the homogeneous factor is

h = (zprp - z) / dp --------------(4)

and the projection coordinates on the view plane are calculated from the homogeneous coordinates as

xp = xh / h
yp = yh / h ---------------------(5)

where the original z coordinate value is retained in the projection coordinates for depth processing.

2.6 CLIPPING

An algorithm for three-dimensional clipping identifies and saves all surface segments within the view volume for display on the output device. All parts of objects that are outside the view volume are discarded.

Instead of clipping against straight-line window boundaries, we now clip objects against the boundary planes of the view volume.

To clip a line segment against the view volume, we would need to test the relative position of the line using the view volume's boundary plane equations. By substituting the line endpoint coordinates into the plane equation of each boundary in turn, we could determine whether the endpoint is inside or outside that boundary.

An endpoint (x, y, z) of a line segment is outside a boundary plane if Ax + By + Cz + D > 0, where A, B, C, and D are the plane parameters for that boundary. Similarly, the point is inside the boundary if Ax + By + Cz + D < 0.

Lines with both endpoints outside a boundary plane are discarded, and those with both endpoints inside all boundary planes are saved. The intersection of a line with a boundary is found using the line equations along with the plane equation.
Intersection coordinates (x1, y1, z1) are values that are on the line and that satisfy the plane equation Ax1 + By1 + Cz1 + D = 0.

To clip a polygon surface, we can clip the individual polygon edges. First, we could test the coordinate extents against each boundary of the view volume to determine whether the object is completely inside or completely outside that boundary. If the coordinate extents of the object are inside all boundaries, we save it. If the coordinate extents are outside all boundaries, we discard it. Otherwise, we need to apply the intersection calculations.

Viewport Clipping

Lines and polygon surfaces in a scene can be clipped against the viewport boundaries with procedures similar to those used for two dimensions, except that objects are now processed against clipping planes instead of clipping edges.

The two-dimensional concept of region codes can be extended to three dimensions by considering positions in front of and in back of the three-dimensional viewport, as well as positions that are left, right, below, or above the volume. For three dimensional points, we need to expand the region code to six bits. Each point in the description of a scene is then assigned a six-bit region code that identifies the relative position of the point with respect to the viewport.

For a line endpoint at position (x, y, z), we assign the bit positions in the region code from right to left as

bit 1 = 1, if x < xvmin (left)
bit 2 = 1, if x > xvmax (right)
bit 3 = 1, if y < yvmin (below)
bit 4 = 1, if y > yvmax (above)
bit 5 = 1, if z < zvmin (front)
bit 6 = 1, if z > zvmax (back)

For example, a region code of 101000 identifies a point as above and behind the viewport, and the region code 000000 indicates a point within the volume.

A line segment can be immediately identified as completely within the viewport if both endpoints have a region code of 000000. If either endpoint of a line segment does not have a region code of 000000, we perform the logical AND operation on the two endpoint codes. The result of this AND operation will be nonzero for any line segment that has both endpoints in one of the six outside regions.

As in two-dimensional line clipping, we use the calculated intersection of a line with a viewport plane to determine how much of the line can be thrown away.

The two-dimensional parametric clipping methods of Cyrus-Beck or Liang-Barsky can be extended to three-dimensional scenes. For a line segment with endpoints P1 = (x1, y1, z1) and P2 = (x2, y2, z2), we can write the parametric line equations as

x = x1 + (x2 - x1) u        0 <= u <= 1
y = y1 + (y2 - y1) u
z = z1 + (z2 - z1) u        -------------(1)

Coordinates (x, y, z) represent any point on the line between the two endpoints. At u = 0, we have the point P1, and u = 1 puts us at P2.

To find the intersection of a line with a plane of the viewport, we substitute the coordinate value for that plane into the appropriate parametric expression of Eq. 1 and solve for u. For instance, suppose we are testing a line against the zvmin plane of the viewport. Then

u = (zvmin - z1) / (z2 - z1) ---------------------------- (2)

When the calculated value for u is not in the range from 0 to 1, the line segment does not intersect the plane under consideration at any point between endpoints P1 and P2 (line A in fig).

If the calculated value for u in Eq. 2 is in the interval from 0 to 1, we calculate the intersection's x and y coordinates as
x1' = x1 + (x2 - x1) (zvmin - z1) / (z2 - z1)

y1' = y1 + (y2 - y1) (zvmin - z1) / (z2 - z1)

If either x1' or y1' is not in the range of the boundaries of the viewport, then this line intersects the front plane beyond the boundaries of the volume (line B in Fig.).
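A small C sketch of this front-plane test, following Eqs. (1) and (2); the function name and return convention are assumptions made for this example.

/* Clip a 3D segment P1-P2 against the zvmin plane of the viewport.
   Returns 0 when the segment does not reach the plane between P1 and P2;
   otherwise stores the intersection point in ipt[]. */
int clipFrontPlane (float x1, float y1, float z1,
                    float x2, float y2, float z2,
                    float zvmin, float ipt[3])
{
    float u;
    if (z1 == z2)                       /* segment parallel to the plane */
        return 0;
    u = (zvmin - z1) / (z2 - z1);       /* Eq. (2) */
    if (u < 0.0f || u > 1.0f)           /* line A case: no intersection  */
        return 0;
    ipt[0] = x1 + (x2 - x1) * u;        /* Eq. (1) evaluated at u        */
    ipt[1] = y1 + (y2 - y1) * u;
    ipt[2] = zvmin;
    return 1;
}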
2.7 Three Dimensional Viewing Functions

1. With parameters specified in world coordinates, elements of the matrix for transforming world coordinate descriptions to the viewing reference frame are calculated using the function

EvaluateViewOrientationMatrix3 (x0, y0, z0, xN, yN, zN, xV, yV, zV, error, viewMatrix)

- This function creates the viewMatrix from input coordinates defining the viewing system.
- Parameters x0, y0, z0 specify the origin of the viewing system.
- World coordinate vector (xN, yN, zN) defines the normal to the view plane and the direction of the positive zv viewing axis.
- The world coordinates (xV, yV, zV) give the elements of the view up vector.
- An integer error code is generated in parameter error if input values are not specified correctly.

2. The matrix projMatrix for transforming viewing coordinates to normalized projection coordinates is created with the function

EvaluateViewMappingMatrix3 (xwmin, xwmax, ywmin, ywmax, xvmin, xvmax, yvmin, yvmax, zvmin, zvmax, projType, xprojRef, yprojRef, zprojRef, zview, zback, zfront, error, projMatrix)

- Window limits on the view plane are given in viewing coordinates with parameters xwmin, xwmax, ywmin and ywmax.
- Limits of the 3D viewport within the unit cube are set with normalized coordinates xvmin, xvmax, yvmin, yvmax, zvmin and zvmax.
- Parameter projType is used to choose the projection type, either parallel or perspective.
- Coordinate position (xprojRef, yprojRef, zprojRef) sets the projection reference point. This point is used as the center of projection if projType is set to perspective; otherwise, this point and the center of the viewplane window define the parallel projection vector.
- The position of the viewplane along the viewing zv axis is set with parameter zview.
- Positions along the viewing zv axis for the front and back planes of the view volume are given with parameters zfront and zback.
- The error parameter returns an integer error code indicating erroneous input data.

2.8 VISIBLE SURFACE IDENTIFICATION

A major consideration in the generation of realistic graphics displays is identifying those parts of a scene that are visible from a chosen viewing position.
2.8.1 Classification of Visible Surface Detection Algorithms

These are classified into two types, based on whether they deal with object definitions directly or with their projected images:

1. Object space methods: compare objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, we should label as visible.

2. Image space methods: visibility is decided point by point at each pixel position on the projection plane. Most visible surface detection algorithms use image space methods.

2.8.2 Back Face Detection

A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, and D if

Ax + By + Cz + D < 0 ----------------(1)

When an inside point is along the line of sight to the surface, the polygon must be a back face.

We can simplify this test by considering the normal vector N to a polygon surface, which has Cartesian components (A, B, C). In general, if V is a vector in the viewing direction from the eye position, as shown in Fig., then this polygon is a back face if

V . N > 0

Furthermore, if object descriptions have been converted to projection coordinates and our viewing direction is parallel to the viewing zv axis, then V = (0, 0, Vz) and V . N = Vz C, so that we only need to consider the sign of C, the z component of the normal vector N.

In a right-handed viewing system with viewing direction along the negative zv axis, as in the below Fig., the polygon is a back face if C < 0. Thus, in general, we can label any polygon as a back face if its normal vector has a z component value

C <= 0

By examining parameter C for the different planes defining an object, we can immediately identify all the back faces.
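As a brief illustration (not a routine from these notes), the two tests above can be written as:

/* General back-face test: V is the viewing direction, N = (A, B, C)
   is the polygon normal taken from the stored plane equation. */
int isBackFace (float vx, float vy, float vz, float A, float B, float C)
{
    return (vx * A + vy * B + vz * C) > 0.0f;
}

/* Special case: viewing along the negative zv axis in projection
   coordinates, so only the sign of C matters. */
int isBackFaceAlongZ (float C)
{
    return C <= 0.0f;
}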
2.8.3 Depth Buffer Method

A commonly used image-space approach to detecting visible surfaces is the depth-buffer method, which compares surface depths at each pixel position on the projection plane. This procedure is also referred to as the z-buffer method.

Each surface of a scene is processed separately, one point at a time across the surface. The method is usually applied to scenes containing only polygon surfaces, because depth values can be computed very quickly and the method is easy to implement. But the method can also be applied to nonplanar surfaces.

With object descriptions converted to projection coordinates, each (x, y, z) position on a polygon surface corresponds to the orthographic projection point (x, y) on the view plane.
Therefore, for each pixel position (x, y) on the view plane, object depths can be compared by comparing z values. The figure shows three surfaces at varying distances along the orthographic projection line from position (x, y) in a view plane taken as the (xv, yv) plane. Surface S1 is closest at this position, so its surface intensity value at (x, y) is saved.

We can implement the depth-buffer algorithm in normalized coordinates, so that z values range from 0 at the back clipping plane to Zmax at the front clipping plane.

Two buffer areas are required. A depth buffer is used to store depth values for each (x, y) position as surfaces are processed, and the refresh buffer stores the intensity values for each position. Initially, all positions in the depth buffer are set to 0 (minimum depth), and the refresh buffer is initialized to the background intensity.

We summarize the steps of a depth-buffer algorithm as follows:

1. Initialize the depth buffer and refresh buffer so that for all buffer positions (x, y),

depth (x, y) = 0,   refresh (x, y) = Ibackgnd

2. For each position on each polygon surface, compare depth values to previously stored values in the depth buffer to determine visibility. Calculate the depth z for each (x, y) position on the polygon. If z > depth (x, y), then set

depth (x, y) = z,   refresh (x, y) = Isurf (x, y)

where Ibackgnd is the value for the background intensity, and Isurf (x, y) is the projected intensity value for the surface at pixel position (x, y). After all surfaces have been processed, the depth buffer contains depth values for the visible surfaces and the refresh buffer contains the corresponding intensity values for those surfaces.

Depth values for a surface position (x, y) are calculated from the plane equation for each surface:

z = (-Ax - By - D) / C -----------------------------(1)

For any scan line, adjacent horizontal positions across the line differ by 1, and a vertical y value on an adjacent scan line differs by 1. If the depth of position (x, y) has been determined to be z, then the depth z' of the next position (x + 1, y) along the scan line is obtained from Eq. (1) as

z' = (-A(x + 1) - By - D) / C -----------------------(2)

or

z' = z - A / C -----------------------(3)

On each scan line, we start by calculating the depth on a left edge of the polygon that intersects that scan line, as in the below fig. Depth values at each successive position across the scan line are then calculated by Eq. (3).
We first determine the y-coordinate extents of each polygon, and process the surface from the topmost scan line to the bottom scan line. Starting at a top vertex, we can recursively calculate x positions down a left edge of the polygon as x' = x - 1/m, where m is the slope of the edge.

Depth values down the edge are then obtained recursively as

z' = z + (A/m + B) / C ----------------------(4)

Figure : Intersection positions on successive scan lines along a left polygon edge

If we are processing down a vertical edge, the slope is infinite and the recursive calculations reduce to

z' = z + B / C -----------------------(5)

An alternate approach is to use a midpoint method or Bresenham-type algorithm for determining x values on left edges for each scan line. The method can also be applied to curved surfaces by determining depth and intensity values at each surface projection point.

For polygon surfaces, the depth-buffer method is very easy to implement, and it requires no sorting of the surfaces in a scene. But it does require the availability of a second buffer in addition to the refresh buffer.
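A compact sketch of the core depth-buffer loop described above. It is illustrative only: the buffer size, and the helpers surfaceDepth() and surfaceIntensity() that stand in for the per-surface depth and intensity calculations, are assumptions of this example; a real implementation would also restrict the inner loops to each polygon's projected extent rather than the whole screen.

#define XMAX 640
#define YMAX 480

extern float surfaceDepth (int s, int x, int y);      /* assumed helper */
extern float surfaceIntensity (int s, int x, int y);  /* assumed helper */

float depthBuf[XMAX][YMAX];      /* depth buffer   */
float refreshBuf[XMAX][YMAX];    /* refresh buffer */

void depthBufferMethod (int nSurf, float Ibackgnd)
{
    int x, y, s;

    /* step 1: initialize both buffers */
    for (x = 0; x < XMAX; x++)
        for (y = 0; y < YMAX; y++) {
            depthBuf[x][y]   = 0.0f;        /* minimum depth        */
            refreshBuf[x][y] = Ibackgnd;    /* background intensity */
        }

    /* step 2: process each surface, keeping the closest value seen */
    for (s = 0; s < nSurf; s++)
        for (x = 0; x < XMAX; x++)
            for (y = 0; y < YMAX; y++) {
                float z = surfaceDepth (s, x, y);
                if (z > depthBuf[x][y]) {           /* closer surface */
                    depthBuf[x][y]   = z;
                    refreshBuf[x][y] = surfaceIntensity (s, x, y);
                }
            }
}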
2.8.4 A- BUFFER METHOD
An extension of the ideas in the depth-buffer method is the A-buffer
method. The A buffer method represents an antialiased, area-averaged,
accumulation-buffer method developed by Lucasfilm for implementation in the
surface-rendering system called REYES (an acronym for "Renders Everything You
Ever Saw").
A drawback of the depth-buffer method is that it can only find one visible
surface at each pixel position. The A-buffer method expands
the depth buffer so that each position in the buffer can reference a linked list of surfaces. Thus, more than one surface intensity can be taken into consideration at each pixel position, and object edges can be antialiased.

Each position in the A-buffer has two fields:

1) depth field - stores a positive or negative real number
2) intensity field - stores surface-intensity information or a pointer value.

If the depth field is positive, the number stored at that position is the depth of a single surface overlapping the corresponding pixel area. The intensity field then stores the RGB components of the surface color at that point and the percent of pixel coverage, as illustrated in Fig. A.

If the depth field is negative, this indicates multiple-surface contributions to the pixel intensity. The intensity field then stores a pointer to a linked list of surface data, as in Fig. B.

Figure : Organization of an A-buffer pixel position: (A) single surface overlap of the corresponding pixel area, (B) multiple surface overlap

Data for each surface in the linked list includes:

RGB intensity components
opacity parameter (percent of transparency)
depth
percent of area coverage
surface identifier
other surface-rendering parameters
pointer to next surface

2.8.5 SCAN-LINE METHOD

This image-space method for removing hidden surfaces is an extension of the scan-line algorithm for filling polygon interiors. As each scan line is processed, all polygon surfaces intersecting that line are examined to determine which are visible. Across each scan line, depth calculations are made for each overlapping surface to determine which is nearest to the view plane. When the visible surface has been determined, the intensity value for that position is entered into the refresh buffer.

We assume that tables are set up for the various surfaces, which include both an edge table and a polygon table. The edge table contains coordinate endpoints for each line in the scene, the inverse slope of each line, and pointers into the polygon table to identify the surfaces bounded by each line.

The polygon table contains coefficients of the plane equation for each surface, intensity information for the surfaces, and possibly pointers into the edge table.

To facilitate the search for surfaces crossing a given scan line, we can set up an active list of edges from information in the edge table. This active list will contain only edges that cross the current scan line, sorted in order of increasing x.

In addition, we define a flag for each surface that is set on or off to indicate whether a position along a scan line is inside or outside of the surface. Scan lines are processed from left to right. At the leftmost boundary of a surface, the surface flag is turned on; and at the rightmost boundary, it is turned off.

Figure : Scan lines crossing the projection of two surfaces S1 and S2 in the view plane. Dashed lines indicate the boundaries of hidden surfaces
We make the following tests for each surface that overlaps with S. If any one of these tests is true, no reordering is necessary for that surface. The tests are listed in order of increasing difficulty.

1. The bounding rectangles in the xy plane for the two surfaces do not overlap.
2. Surface S is completely behind the overlapping surface relative to the viewing position.
3. The overlapping surface is completely in front of S relative to the viewing position.
4. The projections of the two surfaces onto the view plane do not overlap.

We can perform tests 2 and 3 with an "inside-outside" polygon test. That is, we substitute the coordinates for all vertices of S into the plane equation for the overlapping surface and check the sign of the result. If the plane equations are set up so that the outside of the surface is toward the viewing position, then S is behind S' if all vertices of S are "inside" S'.

Figure : Surface S is completely behind (inside) the overlapping surface S'

Similarly, S' is completely in front of S if all vertices of S are "outside" of S'. Figure shows an overlapping surface S' that is completely in front of S, but surface S is not completely inside S'.

If tests 1 through 3 have all failed, we try test 4 by checking for intersections between the bounding edges of the two surfaces using line equations in the xy plane. As demonstrated in Fig., two surfaces may or may not intersect even though their coordinate extents overlap in the x, y, and z directions.

Should all four tests fail with a particular overlapping surface S', we interchange surfaces S and S' in the sorted list.
With plane P1,we first partition the space into two sets of objects. One set
of objects is behind, or in back of, plane P1, relative to the viewing direction, and
the other set is in front of P1. Since one object is intersected by plane P1, we divide
that object into two separate objects, labeled A and B.
Objects A and C are in front of P1 and objects B and D are behind P1. We
next partition the space again with plane P2 and construct the binary tree
representation shown in Fig.(b).
In this tree, the objects are represented as terminal nodes, with front
objects as left branches and back objects as right branches.
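Once such a tree is built, a back-to-front traversal yields a painter's-algorithm drawing order. The sketch below is not from the notes; the node layout and the viewerInFrontOf()/drawSurface() helpers are assumed placeholders.

#include <stddef.h>

typedef struct BspNode {
    struct BspNode *front;   /* objects in front of the partitioning plane */
    struct BspNode *back;    /* objects behind the partitioning plane      */
    int             surface; /* identifier of the surface stored here      */
} BspNode;

/* Hypothetical helpers for this sketch */
int  viewerInFrontOf(int surface);   /* 1 if the eye is on the front side */
void drawSurface(int surface);

/* Draw the subtree on the far side of each plane first, then the node,
   then the near side, so nearer surfaces overwrite farther ones.        */
void drawBackToFront(BspNode *node)
{
    if (node == NULL) return;
    if (viewerInFrontOf(node->surface)) {
        drawBackToFront(node->back);
        drawSurface(node->surface);
        drawBackToFront(node->front);
    } else {
        drawBackToFront(node->front);
        drawSurface(node->surface);
        drawBackToFront(node->back);
    }
}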
2.8.8 Area – Subdivision Method
This technique for hidden-surface removal is essentially an image-space
method ,but object-space operations can be used to accomplish depth ordering of
surfaces.
The area-subdivision method takes advantage of area coherence in a scene by locating those view areas that represent part of a single surface. We apply this method by successively dividing the total viewing area into smaller and smaller rectangles until each small area is the projection of part of a single visible surface or no surface at all.

To implement this method, we need to establish tests that can quickly identify the area as part of a single surface or tell us that the area is too complex to analyze easily. Starting with the total view, we apply the tests to determine whether we should subdivide the total area into smaller rectangles. If the tests indicate that the view is sufficiently complex, we subdivide it. Next, we apply the tests to each of the smaller areas, subdividing these if the tests indicate that visibility of a single surface is still uncertain. We continue this process until the subdivisions are easily analyzed as belonging to a single surface or until they are reduced to the size of a single pixel. An easy way to do this is to successively divide the area into four equal parts at each step.

Tests to determine the visibility of a single surface within a specified area are made by comparing surfaces to the boundary of the area. There are four possible relationships that a surface can have with a specified area boundary. We can describe these relative surface characteristics in the following way (Fig.):

Surrounding surface - one that completely encloses the area.
Overlapping surface - one that is partly inside and partly outside the area.
Inside surface - one that is completely inside the area.
Outside surface - one that is completely outside the area.

Figure : Possible relationships between polygon surfaces and a rectangular area

The tests for determining surface visibility within an area can be stated in terms of these four classifications. No further subdivisions of a specified area are needed if one of the following conditions is true:

1. All surfaces are outside surfaces with respect to the area.
2. Only one inside, overlapping, or surrounding surface is in the area.
3. A surrounding surface obscures all other surfaces within the area boundaries.

Test 1 can be carried out by checking the bounding rectangles of all surfaces against the area boundaries.

Test 2 can also use the bounding rectangles in the xy plane to identify an inside surface.

One method for implementing test 3 is to order surfaces according to their minimum depth from the view plane. For each surrounding surface, we then compute the maximum depth within the area under consideration. If the maximum depth of one of these surrounding surfaces is closer to the view plane than the minimum depth of all other surfaces within the area, test 3 is satisfied.

Within a specified area, a surrounding surface with a maximum depth of Zmax obscures all surfaces that have a minimum depth beyond Zmax.
In general, fewer subdivisions are required using this approach, but more
processing is needed to subdivide areas and to analyze the relation of surfaces to the
subdivision boundaries.
Another method for carrying out test 3 that does not require depth sorting
is to use plane equations to calculate depth values at the four vertices of the area for
all surrounding, overlapping, and inside surfaces. If the calculated depths for one of the surrounding surfaces are less than the calculated depths for all other surfaces, test 3 is true. Then the area can be filled with the intensity values of the surrounding surface.
For some situations, both methods of implementing test 3 will fail to
identify correctly a surrounding surface that obscures all the other surfaces. It is
faster to subdivide the area than to continue with more complex testing.
Once outside and surrounding surfaces have been identified for an area,
they will remain outside and surrounding surfaces for all subdivisions of the area.
Furthermore, some inside and overlapping surfaces can be expected to be
eliminated as the subdivision process continues, so that the areas become easier to
analyze.
2.8.9 OCTREE METHODS

When an octree representation is used for the viewing volume, hidden-surface elimination is accomplished by projecting octree nodes onto the viewing surface in a front-to-back order.

In the below Fig. the front face of a region of space (the side toward the viewer) is formed with octants 0, 1, 2, and 3. Surfaces in the front of these octants are visible to the viewer. Any surfaces toward the rear in the back octants (4, 5, 6, and 7) may be hidden by the front surfaces.

Back surfaces are eliminated, for the viewing direction given, by processing data elements in the octree nodes in the order 0, 1, 2, 3, 4, 5, 6, 7. The nodes for the four front suboctants of octant 0 are visited before the nodes for the four back suboctants. The traversal of the octree continues in this order for each octant subdivision.

When a color value is encountered in an octree node, the pixel area in the frame buffer corresponding to this node is assigned that color value only if no values have previously been stored in this area. In this way, only the front colors are loaded into the buffer. Nothing is loaded if an area is void. Any node that is found to be completely obscured is eliminated from further processing, so that its subtrees are not accessed.

Different views of objects represented as octrees can be obtained by applying transformations to the octree representation that reorient the object according to the view selected.

A method for displaying an octree is first to map the octree onto a quadtree of visible areas by traversing octree nodes from front to back in a recursive procedure. Then the quadtree representation for the visible surfaces is loaded into the frame buffer. The below Figure depicts the octants in a region of space and the corresponding quadrants on the view plane.

Contributions to quadrant 0 come from octants 0 and 4. Color values in quadrant 1 are obtained from surfaces in octants 1 and 5, and values in each of the
other two quadrants are generated from the pair of octants aligned with each of these quadrants.

In most cases, both a front and a back octant must be considered in determining the correct color values for a quadrant. But if the front octant is homogeneously filled with some color, we do not process the back octant. If the front is empty, the rear octant is processed. Otherwise, two recursive calls are made, one for the rear octant and one for the front octant.

The recursive octree-to-quadtree procedure survives in the notes only as scattered fragments; the partial reconstruction below keeps those fragments and marks the lost portions with comments:

typedef struct tQuadtree {          /* quadtree node: a color or four children */
    int status;                     /* SOLID or MIXED                          */
    union {
        int color;
        struct tQuadtree *children[4];
    } data;
} Quadtree;

int nQuadtree = 0;

void octreeToQuadtree (Octree *oTree, Quadtree *qTree)
{
    /* The Octree type and the SOLID, MIXED, EMPTY constants are defined
       analogously; their definitions were lost in the original notes.   */
    Octree *front, *back;
    Quadtree *newQuadtree;
    int i;
    /* ... code that selects the front and back suboctants for each
       quadrant i and allocates newQuadtree was lost in the notes ...    */
    if (front->status == SOLID)
        if (front->data.color != EMPTY)
            qTree->data.children[i]->data.color = front->data.color;
        else
            if (back->status == SOLID)
                if (back->data.color != EMPTY)
                    qTree->data.children[i]->data.color = back->data.color;
                else
                    qTree->data.children[i]->data.color = EMPTY;
            else {
                /* back octant is mixed */
                newQuadtree->status = MIXED;
                /* ... the recursive calls on the octants were lost ...  */
            }
    return;
}

2.8.10 RAY CASTING METHOD

If we consider the line of sight from a pixel position on the view plane through a scene, as in the Fig. below, we can determine which objects in the scene intersect this line. After calculating all ray-surface intersections, we identify the visible surface as the one whose intersection point is closest to the pixel. This visibility-detection scheme uses ray-casting procedures.

Ray casting, as a visibility-detection tool, is based on geometric optics methods, which trace the paths of light rays. Since there are an infinite number of light rays in a scene and we are interested only in those rays that pass through pixel positions, we can trace the light-ray paths backward from the pixels through the scene. The ray-casting approach is an effective visibility-detection method for scenes with curved surfaces, particularly spheres.

Figure : A ray along a line of sight from a pixel position through a scene

We can think of ray casting as a variation on the depth-buffer method. In the depth-buffer algorithm, we process surfaces one at a time and calculate depth values for all projection points over the surface. The calculated surface depths are then compared to previously stored depths to determine visible surfaces at each pixel. In ray casting, we process pixels one at a time and calculate depths for all surfaces along the projection path to that pixel.

Ray casting is a special case of ray-tracing algorithms that trace multiple ray paths to pick up global reflection and refraction contributions from multiple objects in a scene. With ray casting, we only follow a ray out from each pixel to the nearest object.

We can also approximate a curved surface as a set of plane, polygon surfaces and use one of the other hidden-surface methods. With some objects, such as spheres, it can be more efficient as well as more accurate to use ray casting and the curved-surface equation.

Curved-Surface Representations

We can represent a surface with an implicit equation of the form f(x, y, z) = 0 or with a parametric representation. Spline surfaces, for instance, are normally described with parametric equations. In some cases, it is useful to obtain an explicit surface equation, as, for example, a height function over an xy ground plane:

z = f(x, y)

Many objects of interest, such as spheres, ellipsoids, cylinders, and cones, have quadratic representations.

Scan-line and ray-casting algorithms often involve numerical approximation techniques to solve the surface equation at the intersection point with a scan line or with a pixel ray. Various techniques, including parallel calculations and fast hardware implementations, have been developed for solving the curved-surface equations.

To obtain an xy plot of a functional surface, we write the surface representation in the form

y = f(x, z) ----------------(1)

A curve in the xy plane can then be plotted for values of z within some selected range, using a specified interval ∆z. Starting with the largest value of z, we plot the curves from "front" to "back" and eliminate hidden sections.

We draw the curve sections on the screen by mapping an xy range for the function into an xy pixel screen range. Then, unit steps are taken in x and the corresponding y value for each x value is determined from Eq. (1) for a given value of z.

One way to identify the visible curve sections on the surface is to maintain a list of ymin and ymax values previously calculated for the pixel x coordinates on the screen. As we step from one pixel x position to the next, we check the calculated y value against the stored range, ymin and ymax, for the next pixel.
If ymin <= y <= ymax, that point on the surface is not visible and we do not plot it. But if the calculated y value is outside the stored y bounds for that pixel, the point is visible. We then plot the point and reset the bounds for that pixel.
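A minimal sketch of this bookkeeping in C (not from the original notes; the surface function f, the screen mapping and the plotting call are assumed placeholders):

#define XRES 640

/* For each pixel column keep the y-range already drawn; yMin[] should be
   initialized to a very large value and yMax[] to a very small one before
   the frontmost curve is drawn. A new point is visible only if it falls
   outside the stored range. */
static double yMin[XRES], yMax[XRES];

void plotCurveForZ(double z, double (*f)(double x, double z),
                   double xStart, double xStep)
{
    for (int px = 0; px < XRES; px++) {
        double y = f(xStart + px * xStep, z);   /* Eq. (1): y = f(x, z)   */
        if (y < yMin[px] || y > yMax[px]) {     /* outside stored bounds  */
            /* plotPoint(px, y);  -- device-dependent call, assumed */
            if (y < yMin[px]) yMin[px] = y;     /* reset the bounds       */
            if (y > yMax[px]) yMax[px] = y;
        }
    }
}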
At each coordinate position, if both line intersections with the projection of a surface boundary have greater depth than the surface at those points, the line segment between the intersections is completely hidden, as in Fig. (a).

This is the usual situation in a scene, but it is also possible to have lines and surfaces intersecting each other. When a line has greater depth at one boundary intersection and less depth than the surface at the other boundary intersection, the line must penetrate the surface interior, as in Fig. (b). In this case, we calculate the intersection point of the line with the surface using the plane equation and display only the visible sections.

Figure : Hidden line sections (dashed) for a line that (a) passes behind a surface and (b) penetrates a surface

2.8.13 VISIBILITY-DETECTION FUNCTIONS

Often, three-dimensional graphics packages accommodate several visible-surface detection procedures, particularly the back-face and depth-buffer methods. A particular function can then be invoked with the procedure name, such as backFace or depthBuffer.
UNIT III - GRAPHICS PROGRAMMING
Color Models – RGB, YIQ, CMY, HSV – Animations – General Computer Animation, Raster, Keyframe - Graphics programming using OPENGL – Basic graphics primitives – Drawing three dimensional objects - Drawing three dimensional scenes

Color Models

Each frequency value within the visible band corresponds to a distinct color. At the low frequency end is a red color (4.3 x 10^14 Hz) and at the highest frequency is a violet color (7.5 x 10^14 Hz).

Spectral colors range from the reds through orange and yellow at the low frequency end to greens, blues and violet at the high end.

Since light is an electromagnetic wave, the various colors are described in terms of either the frequency f or the wavelength λ of the wave.

Intensity is the radiant energy emitted per unit time, per unit solid angle, and per unit projected area of the source.

- Purity describes how washed out or how pure the color of the light appears.
- Pastels and pale colors are described as less pure.

The term chromaticity is used to refer collectively to the two properties, purity and dominant frequency.
Two different color light sources with suitably chosen intensities can be used to produce a range of other colors.

If the 2 color sources combine to produce white light, they are called complementary colors. E.g., red and cyan, green and magenta, and blue and yellow.

Color models that are used to describe combinations of light in terms of dominant frequency use 3 colors to obtain a wide range of colors, called the color gamut.

The 2 or 3 colors used to produce other colors in a color model are called primary colors.

Standard Primaries

XYZ Color Model

The set of primaries is generally referred to as the XYZ or (X,Y,Z) color model, where X, Y and Z represent vectors in a 3D, additive color space.

Cλ = XX + YY + ZZ -------------(1)

where X, Y and Z designate the amounts of the standard primaries needed to match Cλ.

It is convenient to normalize the amounts in equation (1) against luminance (X + Y + Z). Normalized amounts are calculated as

x = X/(X+Y+Z), y = Y/(X+Y+Z), z = Z/(X+Y+Z)

with x + y + z = 1.

Any color can be represented with just the x and y amounts. The parameters x and y are called the chromaticity values because they depend only on hue and purity.

If we specify colors only with x and y, we cannot obtain the amounts X, Y and Z. So, a complete description of a color is given with the 3 values x, y and Y.

X = (x/y)Y, Z = (z/y)Y

where z = 1 - x - y.

Intuitive Color Concepts

Color paintings can be created by mixing color pigments with white and black pigments to form the various shades, tints and tones.

Starting with the pigment for a "pure color", the color is added to black pigment to produce different shades. More black pigment produces darker shades.

Different tints of the color are obtained by adding a white pigment to the original color, making it lighter as more white is added.

Tones of the color are produced by adding both black and white pigments.

Based on the tristimulus theory of vision, our eyes perceive color through the stimulation of three visual pigments in the cones of the retina. These visual pigments have a peak sensitivity at wavelengths of about 630 nm (red), 530 nm (green) and 450 nm (blue). By comparing intensities in a light source, we perceive the color of the light.

This is the basis for displaying color output on a video monitor using the 3 color primaries, red, green, and blue, referred to as the RGB color model. It is represented in the below figure.
Vertices of the cube on the axes represent the primary colors, and the remaining vertices represent the complementary color for each of the primary colors.

The RGB color scheme is an additive model, i.e., intensities of the primary colors are added to produce other colors.

Each color point within the bounds of the cube can be represented as the triple (R,G,B), where values for R, G and B are assigned in the range from 0 to 1.

The color Cλ is expressed in RGB components as

Cλ = RR + GG + BB

The magenta vertex is obtained by adding red and blue to produce the triple (1,0,1), and white at (1,1,1) is the sum of the red, green and blue vertices.

Shades of gray are represented along the main diagonal of the cube from the origin (black) to the white vertex.

YIQ Color Model

The National Television System Committee (NTSC) color model for forming the composite video signal is the YIQ model. A combination of red, green and blue intensities is chosen for the Y parameter to yield the standard luminosity curve. Parameter Q carries green-magenta hue information in a bandwidth of about 0.6 MHz.

An RGB signal can be converted to a TV signal using an NTSC encoder, which converts RGB values to YIQ values as follows:

Y = 0.299 R + 0.587 G + 0.114 B
I = 0.596 R - 0.275 G - 0.321 B
Q = 0.212 R - 0.528 G + 0.311 B

An NTSC video signal can be converted to an RGB signal using an NTSC decoder, which separates the video signal into the YIQ components and then converts them to RGB values.
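As a small worked example, the forward conversion above can be written directly in C (a sketch; the function name is illustrative and the components are assumed to lie in [0,1]):

/* Convert an RGB triple to YIQ using the NTSC matrix given above. */
void rgbToYiq(double r, double g, double b,
              double *y, double *i, double *q)
{
    *y = 0.299 * r + 0.587 * g + 0.114 * b;
    *i = 0.596 * r - 0.275 * g - 0.321 * b;
    *q = 0.212 * r - 0.528 * g + 0.311 * b;
}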
A color model defined with the primary colors cyan, magenta, and yellow (CMY) is useful for describing color output to hard copy devices.
Magenta ink subtracts the green component from incident light and yellow
subtracts the blue component.
Equal amounts of each of the primary colors produce grays along the main
diagonal of the cube.
A combination of cyan and magenta ink produces blue light because the
red and green components of the incident light are absorbed.
The printing process often used with the CMY model generates a color point with a collection of 4 ink dots; one dot is used for each of the primary colors (cyan, magenta and yellow) and one dot is black.
The conversion from an RGB representation to a CMY representation is
expressed as
[C]   [1]   [R]
[M] = [1] - [G]
[Y]   [1]   [B]
Where the white is represented in the RGB system as the unit column vector.
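In code the conversion is just a subtraction from the unit vector (a sketch; names are illustrative):

/* RGB to CMY: subtract each component from 1 (white). */
void rgbToCmy(double r, double g, double b,
              double *c, double *m, double *y)
{
    *c = 1.0 - r;
    *m = 1.0 - g;
    *y = 1.0 - b;
}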
The HSV model uses color descriptions that have a more interactive appeal
to a user.
Color parameters in this model are hue (H), saturation (S), and value (V).
The 3D representation of the HSV model is derived from the RGB cube.
The outline of the cube has the hexagon shape.
HLS Color Model

The HLS model has the double cone representation shown in the below figure. The 3 parameters in this model are called hue (H), lightness (L) and saturation (S).
The remaining colors are specified around the perimeter of the cone in the same order as in the HSV model. Magenta is at 60°, red is at 120°, and cyan is at H = 180°.

The vertical axis is called lightness (L). At L = 0, we have black, and white is at L = 1. Gray scale is along the L axis, and the "pure hues" lie on the L = 0.5 plane.

Saturation parameter S specifies the relative purity of a color. S varies from 0 to 1; pure hues are those for which S = 1 and L = 0.5.

- As S decreases, the hues are said to be less pure.
- At S = 0, we have the gray scale.

Animation

Computer animation refers to any time sequence of visual changes in a scene.

Computer animations can also be generated by changing camera parameters such as position, orientation and focal length.

Applications of computer-generated animation are entertainment, advertising, training and education.

Example : Advertising animations often transition one object shape into another.

Frame-by-Frame animation

Each frame of the scene is separately generated and stored. Later, the frames can be recorded on film or they can be consecutively displayed in "real-time playback" mode.

Design of Animation Sequences

An animation sequence is designed with the following steps:

Story board layout
Object definitions
Key-frame specifications
Generation of in-between frames.

Story board

The story board is an outline of the action. It defines the motion sequences as a set of basic events that are to take place. Depending on the type of animation to be produced, the story board could consist of a set of rough sketches or a list of the basic ideas for the motion.

Object Definition

An object definition is given for each participant in the action. Objects can be defined in terms of basic shapes such as polygons or splines. The associated movements of each object are specified along with the shape.

Key frame

A key frame is a detailed drawing of the scene at a certain time in the animation sequence. Within each key frame, each object is positioned according to the time for that frame. Some key frames are chosen at extreme positions in the action; others are spaced so that the time interval between key frames is not too great.
In-betweens

In-betweens are the intermediate frames between the key frames. The number of in-betweens needed is determined by the media to be used to display the animation. Film requires 24 frames per second, and graphics terminals are refreshed at the rate of 30 to 60 frames per second.

Time intervals for the motion are set up so that there are from 3 to 5 in-betweens for each pair of key frames. Depending on the speed of the motion, some key frames can be duplicated. For a 1 min film sequence with no duplication, 1440 frames are needed.

Other required tasks are

- Motion verification
- Editing
- Production and synchronization of a sound track.

General Computer Animation Functions

Steps in the development of an animation sequence are

1. Object manipulation and rendering
2. Camera motion
3. Generation of in-betweens

Animation packages such as Wavefront provide special functions for designing the animation and processing individual objects.

Object shapes and associated parameters are stored and updated in the database. Motion can be generated according to specified constraints using 2D and 3D transformations. Standard functions can be applied to identify visible surfaces and apply the rendering algorithms. Camera movement functions such as zooming, panning and tilting are used for motion simulation. Given the specification for the key frames, the in-betweens can be automatically generated.

Raster Animations

On raster systems, real-time animation in limited applications can be generated using raster operations. Sequences of raster operations can be executed to produce real-time animation of either 2D or 3D objects.

We can animate objects along 2D motion paths using color-table transformations, as sketched below.

- Predefine the object at successive positions along the motion path, and set the successive blocks of pixel values to color-table entries.
- Set the pixels at the first position of the object to 'on' values, and set the pixels at the other object positions to the background color.
- The animation is accomplished by changing the color-table values so that the object is 'on' at successive positions along the animation path as the preceding position is set to the background intensity.
Computer Animation Languages

Animation functions include a graphics editor, a key frame generator and standard graphics routines.

The graphics editor allows designing and modifying object shapes, using spline surfaces, constructive solid geometry methods or other representation schemes.

Action specification involves the layout of motion paths for the objects and camera.

Keyframe Systems

Morphing

Transformation of object shapes from one form to another is called morphing. Morphing methods can be applied to any motion or transition involving a change in shape. The example is shown in the below figure.
The general preprocessing rules for equalizing keyframes are stated in terms of either the number of edges or the number of vertices to be added to a keyframe.

Suppose we equalize the edge count, and let the parameters Lk and Lk+1 denote the number of line segments in two consecutive frames. We then define

Lmax = max (Lk, Lk+1)
Lmin = min (Lk, Lk+1)
Ns = int (Lmax / Lmin)

If instead the vertex counts are equalized, the parameters Vk and Vk+1 denote the number of vertices in the two consecutive frames, and the corresponding quantities are defined in the same way from Vk and Vk+1.

Simulating Accelerations

Curve-fitting techniques are often used to specify the animation paths between key frames. Given the vertex positions at the key frames, we can fit the positions with linear or nonlinear paths. Figure illustrates a nonlinear fit of key-frame positions. This determines the trajectories for the in-betweens. To simulate accelerations, we can adjust the time spacing for the in-betweens.
For constant speed (zero acceleration), we use equal-interval time spacing for the in-betweens. Suppose we want n in-betweens for key frames at times t1 and t2. The time interval between key frames is then divided into n + 1 subintervals, yielding an in-between spacing of

∆t = (t2 - t1) / (n + 1)

We can calculate the time for any in-between as

tBj = t1 + j ∆t,    j = 1, 2, . . . , n

Motion Specification

There are several ways in which the motions of objects can be specified in an animation system. The most direct method is to give the rotation angles and translation vectors explicitly.

We can approximate the path of a bouncing ball with a damped, rectified sine curve

y(x) = A |sin(ωx + θ0)| e^(-kx)

where A is the initial amplitude, ω is the angular frequency, θ0 is the phase angle and k is the damping constant.
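Both formulas translate directly into code. The sketch below is illustrative only (not from the notes); the parameter values a caller would pass are assumptions.

#include <math.h>

/* Time of the j-th in-between between key frames at t1 and t2,
   with n in-betweens in total (j = 1, 2, ..., n). */
double inBetweenTime(double t1, double t2, int n, int j)
{
    double dt = (t2 - t1) / (n + 1);
    return t1 + j * dt;
}

/* Damped, rectified sine curve for a bouncing ball:
   y(x) = A |sin(w x + theta0)| e^(-k x)               */
double bouncingBallY(double x, double A, double w, double theta0, double k)
{
    return A * fabs(sin(w * x + theta0)) * exp(-k * x);
}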
Goal Directed Systems

We can specify the motions that are to take place in general terms that abstractly describe the actions. These systems are called goal directed because they determine specific motion parameters given the goals of the animation.

Eg., to specify an object to 'walk' or to 'run' to a particular distance.

Kinematics and Dynamics

With a kinematics description, we specify the animation by motion parameters (position, velocity and acceleration) without reference to the forces that cause the motion.

For constant velocity (zero acceleration), we designate the motions of rigid bodies in a scene by giving an initial position and velocity vector for each object.

We can specify accelerations (rate of change of velocity), speed-ups, slow-downs and curved motion paths.

An alternative approach is to use inverse kinematics, where the initial and final positions of the object are specified at specified times and the motion parameters are computed by the system.

Graphics programming using OPENGL

OpenGL is a software interface that allows you to access the graphics hardware without taking care of the hardware details or which graphics adapter is in the system. OpenGL is a low-level graphics library specification. It makes available to the programmer a small set of geometric primitives - points, lines, polygons, images, and bitmaps. OpenGL provides a set of commands that allow the specification of geometric objects in two or three dimensions, using the provided primitives, together with commands that control how these objects are rendered (drawn).

Libraries

OpenGL Utility Library (GLU) contains several routines that use lower-level OpenGL commands to perform such tasks as setting up matrices for specific viewing orientations and projections and rendering surfaces.

OpenGL Utility Toolkit (GLUT) is a window-system-independent toolkit, written by Mark Kilgard, to hide the complexities of differing window APIs.

Include Files

For all OpenGL applications, you want to include the gl.h header file in every file. Almost all OpenGL applications use GLU, the aforementioned OpenGL Utility Library, which also requires inclusion of the glu.h header file. So almost every OpenGL source file begins with:

#include <GL/gl.h>
#include <GL/glu.h>

If you are using the OpenGL Utility Toolkit (GLUT) for managing your window manager tasks, you should include:

#include <GL/glut.h>

The following files must be placed in the proper folder to run an OpenGL program.

Libraries (place in the lib\ subdirectory of Visual C++)

opengl32.lib
glu32.lib
glut32.lib

Include files (place in the include\GL\ subdirectory of Visual C++)

gl.h
glu.h
glut.h

Dynamically-linked libraries (place in the \Windows\System subdirectory)

2. glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB)

The next thing we need to do is call the glutInitDisplayMode() procedure to specify the display mode for a window.

We must first decide whether we want to use an RGBA (GLUT_RGB) or color-index (GLUT_INDEX) color model. The RGBA mode stores its color buffers as red, green, blue, and alpha color components. Color-index mode, in contrast, stores color buffers in indices. And for special effects, such as shading, lighting, and fog, RGBA mode provides more flexibility. In general, use RGBA mode whenever possible. RGBA mode is the default.

Another decision we need to make when setting up the display mode is whether we want to use single buffering (GLUT_SINGLE) or double buffering (GLUT_DOUBLE). If we aren't using animation, stick with single buffering, which is the default.

3. glutInitWindowSize(640,480)

The window is not actually displayed until glutMainLoop() is entered. The very last thing we have to do is call this function.

Event Driven Programming

The method of associating a callback function with a particular type of event is called event-driven programming. OpenGL provides tools to assist with the event management.
Special keys can also be used as triggers. The key passed to the callback function, in this case, takes one of the following values (defined in glut.h).

GLUT_KEY_UP         Up Arrow
GLUT_KEY_RIGHT      Right Arrow
GLUT_KEY_DOWN       Down Arrow
GLUT_KEY_PAGE_UP    Page Up

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(465, 250);
    glutInitWindowPosition(100, 150);
    glutCreateWindow("My First Example");
    glutDisplayFunc(mydisplay);
    glutReshapeFunc(myreshape);
    glutMouseFunc(mymouse);
    glutKeyboardFunc(mykeyboard);
    myinit();
    glutMainLoop();
    return 0;
}

OpenGL provides tools for drawing all the output primitives such as points, lines, triangles, polygons, quads etc., and each of them is defined by one or more vertices.

To draw such objects in OpenGL we pass it a list of vertices. The list occurs between the two OpenGL function calls glBegin() and glEnd(). The argument of glBegin() determines which object is drawn.

glBegin(int mode);
glEnd( void );

The parameter mode of the function glBegin can be one of the following:

GL_POINTS
GL_LINES
GL_LINE_STRIP
GL_LINE_LOOP
GL_TRIANGLES
GL_TRIANGLE_STRIP
GL_TRIANGLE_FAN
GL_QUADS
GL_QUAD_STRIP
GL_POLYGON

glVertex( ) : The main function used to draw objects is named glVertex. This function defines a point (or a vertex) and it can vary from receiving 2 up to 4 coordinates.

Format of glVertex Command

When we wish to refer to the basic command without regard to the specific arguments and datatypes, it is specified as

glVertex*();
glBegin(GL_TRIANGLES);
glVertex3f(100.0f, 100.0f, 0.0f);
glVertex3f(150.0f, 100.0f, 0.0f);
glVertex3f(125.0f, 50.0f, 0.0f);
glEnd( );
glBegin(GL_LINES);
glVertex3f(100.0f, 100.0f, 0.0f); // origin of the line
glVertex3f(200.0f, 140.0f, 5.0f); // ending point of the line
glEnd( );
OpenGL State

OpenGL keeps track of many state variables, such as the current size of a point, the current color of a drawing, the current background color, etc. The value of a state variable remains active until a new value is given.
glPointSize() : The size of a point can be set with glPointSize(), which takes one floating point argument.

Example : glPointSize(4.0);

Example : the following code plots three dots

glBegin(GL_POINTS);
    glVertex2i(100, 50);
    glVertex2i(100, 130);
    glVertex2i(150, 130);
glEnd( );

glClearColor() : establishes what color the window will be cleared to. The background color is set with glClearColor(red, green, blue, alpha), where alpha specifies a degree of transparency.

Example : glClearColor (0.0, 0.0, 0.0, 0.0); //set black background color
glClear() : To clear the entire window to the background color, we use glClear(GL_COLOR_BUFFER_BIT). The argument GL_COLOR_BUFFER_BIT is another constant built into OpenGL.

Example : glClear(GL_COLOR_BUFFER_BIT);

glShadeModel() : Sets the shading model. The mode parameter can be either GL_SMOOTH (the default) or GL_FLAT.

gluOrtho2D() : specifies the coordinate system in two dimensions.

void gluOrtho2D (GLdouble left, GLdouble right, GLdouble bottom, GLdouble top);

Example : gluOrtho2D(0.0, 640.0, 0.0, 480.0);

glOrtho() : specifies the coordinate system in three dimensions.

Example : glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);

glFlush() : ensures that the drawing commands are actually executed rather than stored in a buffer awaiting execution, i.e., it forces all issued OpenGL commands to be executed.

Example : OpenGL Program to draw three dots (2-Dimension)

#include "stdafx.h"
#include "gl/glut.h"
#include <gl/gl.h>

void myInit(void)
{
    glClearColor (1.0, 1.0, 1.0, 0.0);
    glPointSize(4.0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, 640.0, 0.0, 480.0);
}

void Display(void)
{
    glClear (GL_COLOR_BUFFER_BIT);
    glBegin(GL_POINTS);
        glVertex2i(100, 50);
        glVertex2i(100, 130);
        glVertex2i(150, 130);
    glEnd( );
    glFlush();
}

int main (int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(640,480);
    glutInitWindowPosition(100,150);
    glutCreateWindow("Three Dots");   /* window title assumed; original line lost */
    glutDisplayFunc(Display);
    myInit();
    glutMainLoop();
    return 0;
}

Another example program (drawing a polygon):

#include "stdafx.h"
#include "gl/glut.h"
#include <gl/gl.h>

void Display(void)
{
    glClearColor (0.0, 0.0, 0.0, 0.0);
    glClear (GL_COLOR_BUFFER_BIT);
    glColor3f (1.0, 1.0, 1.0);
    glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);
    glBegin(GL_POLYGON);
        /* vertex list lost in the original notes */
    glEnd();
    glFlush();
}

int main (int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(640,480);
    glutCreateWindow("Intro");
    glClearColor(0.0,0.0,0.0,0.0);
    glutDisplayFunc(Display);
    glutMainLoop();
    return 0;
}

A further example program (plotting points):

#include "stdafx.h"
#include "gl/glut.h"
#include <gl/gl.h>

void myInit(void)
{
    glClearColor (0.0, 0.0, 0.0, 0.0);
    glColor3f (1.0, 1.0, 1.0);
    glPointSize(4.0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
}

void Display(void)
{
    glBegin(GL_POINTS);
        glVertex2i(320, 128);
        glVertex2i(239, 67);
        glVertex2i(194, 101);
        glVertex2i(129, 83);
        glVertex2i(75, 73);
        glVertex2i(74, 74);
        glVertex2i(20, 10);
    glEnd( );
    glFlush();
}

int main (int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(640,480);
    glutInitWindowPosition(100,150);
    glutCreateWindow("Points");       /* window title assumed; original line lost */
    glutDisplayFunc(Display);
    myInit();                         /* call assumed; original line lost */
    glutMainLoop();
}
OpenGL makes it easy to draw a line: use GL_LINES as the argument to glBegin(), and pass it the two end points as vertices. Thus to draw a line between (40,100) and (202,96) use:

glBegin(GL_LINES);
    glVertex2i(40, 100);
    glVertex2i(202, 96);
glEnd();

Polyline is a collection of line segments joined end to end. It is described by an ordered list of points. In OpenGL a polyline is called a "line strip", and is drawn by specifying the vertices in turn between glBegin(GL_LINE_STRIP) and glEnd(). For example:

glBegin(GL_LINE_STRIP);   // opening call and exact vertex order lost in the notes
    glVertex2i(20, 10);
    glVertex2i(50, 10);
    glVertex2i(20, 80);
    glVertex2i(50, 80);
glEnd();
glFlush();

OpenGL provides tools for setting the attributes of lines.

To make stippled (dotted or dashed) lines, you use the command glLineStipple() to define the stipple pattern, and then enable line stippling with glEnable():

glLineStipple(1, 0x3F07);
glEnable(GL_LINE_STIPPLE);
Attributes such as color, thickness and stippling may be applied to polylines in the same way they are applied to single lines. If it is desired to connect the last point with the first point to make the polyline into a polygon, simply replace GL_LINE_STRIP with GL_LINE_LOOP, as in the sketch below.

glRecti() draws a rectangle with opposite corners (x1, y1) and (x2, y2):

glRecti(20, 20, 100, 70);
glColor3f(0.2, 0.2, 0.2);   // dark gray
glRecti(70, 50, 150, 130);
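For example, the same vertices drawn with GL_LINE_LOOP close the figure back to the first point automatically (a small illustrative fragment, intended to sit inside a display function; it is not part of the original listing):

glBegin(GL_LINE_LOOP);     // closes the polyline back to the first vertex
    glVertex2i(20, 10);
    glVertex2i(50, 10);
    glVertex2i(50, 80);
    glVertex2i(20, 80);
glEnd();
glFlush();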
To draw a convex polygon based on vertices (x0, y0), (x1, y1), ..., (xn, yn) use the usual list of vertices, but place them between a glBegin(GL_POLYGON) and an glEnd():

glBegin(GL_POLYGON);
    glVertex2f(x0, y0);
    glVertex2f(x1, y1);
    . . . . . .
    glVertex2f(xn, yn);
glEnd();

The following list explains the function of each of the five constants:

GL_TRIANGLES: takes the listed vertices three at a time, and draws a separate triangle for each;

GL_QUADS: takes the vertices four at a time and draws a separate quadrilateral for each;

GL_TRIANGLE_FAN: draws a series of connected triangles based on triplets of vertices: v0, v1, v2, then v0, v2, v3, then v0, v3, v4, etc.;

GL_QUAD_STRIP: draws a series of quadrilaterals based on foursomes of vertices: first v0, v1, v3, v2, then v2, v3, v5, v4, then v4, v5, v7, v6 (in an order so that all quadrilaterals are "traversed" in the same way; e.g. counterclockwise).

#include "gl/glut.h"
void init(void)
{
    /* ... body lost in the original notes ... */
}

void display(void)               /* function header assumed; lost in the notes */
{
    glClear (GL_COLOR_BUFFER_BIT);
    glBegin (GL_TRIANGLES);
        glColor3f (0.0, 1.0, 0.0);
        glColor3f (0.0, 0.0, 1.0);
        glVertex2f (50.0, 250.0);   /* remaining vertices lost in the notes */
    glEnd();
    glFlush ();
}

int main(int argc, char** argv)  /* header assumed; lost in the notes */
{
    glutInit(&argc, argv);
    glutDisplayFunc(display);
    return 0;
}

A filled polygon might be solidly filled, or stippled with a certain pattern. The pattern is specified with a 128-byte array of data type GLubyte. The 128 bytes provide the bits for a mask that is 32 bits wide and 32 bits high.

GLubyte mask[] = { 0xff, 0xfe, ... /* 128 entries */ };
The first 4 bytes prescribe the 32 bits across the bottom row, from left to right; the next 4 bytes give the next row up, and so on.

#include "stdafx.h"
#include "gl/glut.h"
#include <gl/gl.h>

GLubyte mask[]={
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x03, 0x80, 0x01, 0xC0, 0x06, 0xC0, 0x03, 0x60,
0x04, 0x60, 0x06, 0x20, 0x04, 0x30, 0x0C, 0x20,
0x04, 0x18, 0x18, 0x20, 0x04, 0x0C, 0x30, 0x20,
0x04, 0x06, 0x60, 0x20, 0x44, 0x03, 0xC0, 0x22,
0x44, 0x01, 0x80, 0x22, 0x44, 0x01, 0x80, 0x22,
0x44, 0x01, 0x80, 0x22, 0x44, 0x01, 0x80, 0x22,
0x44, 0x01, 0x80, 0x22, 0x44, 0x01, 0x80, 0x22,
0x66, 0x01, 0x80, 0x66, 0x33, 0x01, 0x80, 0xCC,
0x19, 0x81, 0x81, 0x98, 0x0C, 0xC1, 0x83, 0x30,
0x07, 0xe1, 0x87, 0xe0, 0x03, 0x3f, 0xfc, 0xc0,
0x03, 0x31, 0x8c, 0xc0, 0x03, 0x33, 0xcc, 0xc0,
0x06, 0x64, 0x26, 0x60, 0x0c, 0xcc, 0x33, 0x30,
0x18, 0xcc, 0x33, 0x18, 0x10, 0xc4, 0x23, 0x08
/* the remaining rows of the 128-byte mask were lost in the original notes */
};

void myInit(void)
{
    glClearColor (0.0, 0.0, 0.0, 0.0);
    glColor3f (1.0, 1.0, 1.0);
    glPointSize(4.0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, 640.0, 0.0, 480.0);
}

void Display(void)
{
    glClearColor(0.0,0.0,0.0,0.0);
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.0, 1.0, 1.0);
    glRectf(25.0, 25.0, 125.0, 125.0);
    glEnable(GL_POLYGON_STIPPLE);
    glPolygonStipple(mask);
    glRectf (125.0, 25.0, 225.0, 125.0);
    glDisable(GL_POLYGON_STIPPLE);
    glFlush();
}

int main (int argc, char **argv)    /* header assumed; lost in the notes */
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(640,480);
    glutInitWindowPosition(100,150);
    glutCreateWindow("Polygon Stipple");
    glutDisplayFunc(Display);
    myInit();
    glutMainLoop();
    return 0;
}

Simple Interaction with the mouse and keyboard

When the user presses or releases a mouse button, moves the mouse, or presses a keyboard key, an event occurs. Using the OpenGL Utility Toolkit (GLUT), the programmer can register a callback function with each of these events by using the following commands:

glutMouseFunc(myMouse), which registers myMouse() with the event that occurs when the mouse button is pressed or released;
When a mouse event occurs the system calls the registered function, supplying it with values for these parameters. The value of button will be one of GLUT_LEFT_BUTTON, GLUT_MIDDLE_BUTTON or GLUT_RIGHT_BUTTON, with the obvious interpretation, and the value of state will be one of GLUT_UP or GLUT_DOWN. The values x and y report the position of the mouse at the time of the event.

The value of key is the ASCII value of the key pressed. The values x and y report the position of the mouse at the time that the event occurred. (As before, y measures the number of pixels down from the top of the window.)

switch(theKey)
{
    case 'p':
        /* ... handling code lost in the notes ... */
        break;
    default:
        break;   // do nothing
}
Drawing three dimensional objects & Drawing three dimensional scenes

OpenGL has separate transformation matrices for different graphics features.

glMatrixMode(GLenum mode), where mode is one of GL_MODELVIEW, GL_PROJECTION or GL_TEXTURE.

glPushMatrix() : push current matrix stack

gluPerspective() - Format : gluPerspective(angle, asratio, ZMIN, ZMAX);

Example : gluPerspective(60.0, width/height, 0.1, 100.0);

gluLookAt() - view volume that is centered on a specified eyepoint

Example for drawing three dimension Objects
glutWireCube(double size);
glutSolidCube(double size);
glutWireSphere(double radius, int slices, int stacks);
glutSolidSphere(double radius, int slices, int stacks);
glutWireCone(double radius, double height, int slices, int stacks);
glutSolidCone(double radius, double height, int slices, int stacks);
glutWireTorus(double inner_radius, double outer_radius, int sides, int rings);
glutSolidTorus(double inner_radius, double outer_radius, int sides, int rings);
glutWireTeapot(double size);
glutSolidTeapot(double size);

glBegin(GL_QUADS);
    glColor3f(1,0,0); //red
    glVertex3f(-0.5, -0.5, 0.0);
    glColor3f(0,1,0); //green
    glVertex3f(-0.5, 0.5, 0.0);
    glColor3f(0,0,1); //blue
    glVertex3f(0.5, 0.5, 0.0);
    glColor3f(1,1,1); //white
    glVertex3f(0.5, -0.5, 0.0);
glEnd();
glTranslate() : multiply the current matrix by a translation matrix

void glTranslatef(GLfloat x, GLfloat y, GLfloat z);

void glScaled(GLdouble x, GLdouble y, GLdouble z);

x, y, z : Specify scale factors along the x, y, and z axes, respectively.
    glVertex3f( 0.0, 1.0f, 0.0);   // V4 ( 0, 1, 0)
    glEnd();
    glPopMatrix();
    glFlush();
    glutSwapBuffers();
}

void Init(void)
{
    glClearColor(0.0, 0.0, 0.0, 0.0);
}

void Resize(int width, int height)
{
    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, width/height, 0.1, 1000.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(400, 400);
    glutInitWindowPosition(200, 200);
    glutCreateWindow("Polygon in OpenGL");
    Init();
    glutDisplayFunc(Display);
    glutReshapeFunc(Resize);
    glutMainLoop();
    return 0;
}

UNIT IV – RENDERING

Introduction to shading models – Flat and smooth shading – Adding texture to faces – Adding shadows of objects – Building a camera in a program – Creating shaded objects – Rendering texture – Drawing shadows.

4.1 Introduction to Shading Models

The mechanism of light reflection from an actual surface is very complicated; it depends on many factors. Some of these factors are geometric and others are related to the characteristics of the surface.

A shading model dictates how light is scattered or reflected from a surface. The shading models described here focus on achromatic light. Achromatic light has brightness and no color; it is a shade of gray, so it is described by a single value, its intensity.

A shading model uses two types of light source to illuminate the objects in a scene : point light sources and ambient light. Incident light interacts with the surface in three different ways:

Some is absorbed by the surface and is converted to heat.
Some is reflected from the surface.
Some is transmitted into the interior of the object.

If all incident light is absorbed the object appears black and is known as a black body. If all of the incident light is transmitted, the object is visible only through the effects of reflection.

Some amount of the reflected light travels in the right direction to reach the eye, causing the object to be seen. The amount of light that reaches the eye depends on the orientation of the surface, the light and the observer. There are two different types of reflection of incident light:

Diffuse scattering occurs when some of the incident light slightly penetrates the surface and is re-radiated uniformly in all directions. Scattered light interacts strongly with the surface and so its color is usually affected by the nature of the surface material.

Specular reflections are more mirrorlike and highly directional. Incident light is directly reflected from its outer surface. This makes the surface look shiny. In the simplest model the reflected light has the same color as the incident light; this makes the material look like
plastic. In a more complex model the color of the specular light varies, providing a better approximation to the shininess of metal surfaces.

The total light reflected from the surface in a certain direction is the sum of the diffuse component and the specular component. For each surface point of interest we compute the size of each component that reaches the eye.

Each face of a mesh object has two sides. If the object is solid, one is inside and the other is outside. The eye can see only the outside, and it is this side for which we must compute light contributions. We shall develop the shading model for a given side of a face. If that side of the face is turned away from the eye, there is no light contribution.

4.1.1 Geometric Ingredients For Finding Reflected Light

We need to find three vectors in order to compute the diffuse and specular components. The below fig. shows the three principal vectors (s, m and v) required to find the amount of light that reaches the eye from a point P.

Figure : Important directions in computing the reflected light

4.1.2 How to Compute the Diffuse Component

Suppose that light falls from a point source onto one side of a face; a fraction of it is re-radiated diffusely in all directions from this side. Some fraction of the re-radiated part reaches the eye, with an intensity denoted by Id.

An important property assumed for diffuse scattering is that it is independent of the direction from the point P to the location of the viewer's eye. This is called omnidirectional scattering, because scattering is uniform in all directions. Therefore Id is independent of the angle between m and v.

On the other hand, the amount of light that illuminates the face does depend on the orientation of the face relative to the point source: the amount of light is proportional to the area of the face that it sees.
Fig (b) shows the face turned partially away from the light source through angle θ. The area subtended is now only cos(θ), so that the brightness of S is reduced by this same factor. This relationship between the brightness and surface orientation is called Lambert's law.

cos(θ) is the dot product between the normalized versions of s and m. Therefore the strength of the diffuse component is

Id = Is ρd (s · m) / (|s| |m|)

But for simplicity and to reduce computation time, these effects are usually suppressed when rendering images. A reasonable value for ρd is chosen for each surface.

Phong Model

The Phong model is less successful with objects that have a shiny metallic surface. Fig (a) shows a situation where light from a source impinges on a surface and is reflected in different directions.
with beam patterns. The distance from P to the beam envelope shows the relative strength of the light scattered in that direction.

Fig (c) shows how to quantify this beam pattern effect. The direction r of perfect reflection depends on both s and the normal vector m to the surface, according to:

r = -s + 2 ((s · m) / |m|²) m    (the mirror-reflection direction)

For surfaces that are shiny but are not true mirrors, the amount of light reflected falls off as the angle φ between r and v increases. In the Phong model this falloff is said to vary as some power f of the cosine of φ, i.e., (cos(φ))^f, in which f is chosen experimentally and usually lies between 1 and 200.

4.1.4 The Role of Ambient Light

The simple reflection model does not perfectly render a scene. An example: shadows are unrealistically deep and harsh; to soften these shadows we add a third light component called ambient light.

With only diffuse and specular reflections, any parts of a surface that are shadowed from the point source receive no light and so are drawn black, but in reality the scenes around us are always bathed in some soft nondirectional light. This light arrives by multiple reflections from various objects in the surroundings. But it would be computationally very expensive to model this kind of light.

The source is assigned an intensity Ia. Each face in the model is assigned a value for its ambient reflection coefficient ρa, and the term Ia ρa is added to the diffuse and specular light that is reaching the eye from each point P on that face. Ia and ρa are found experimentally.

Too little ambient light makes shadows appear too deep and harsh; too much makes the picture look washed out and bland.
I depends on various source intensities and reflection coefficients and the relative positions of the point P, the eye and the point light source.

Example: A specular highlight seen on a glossy red apple when illuminated by a yellow light is yellow and not red. This is the same for shiny objects made of plastic-like material.

For each color component the shading equation is applied; for the blue component, for instance,

Ib = Iab ρab + Idb ρdb × lambert + Ispb ρsb × phong^f --------------- (1)

The above equation is applied three times to compute the red, green and blue components of the reflected light.

The light sources have three types of color: ambient = (Iar, Iag, Iab), diffuse = (Idr, Idg, Idb) and specular = (Ispr, Ispg, Ispb). Usually the diffuse and the specular light colors are the same. The terms lambert and phong^f do not depend on the color component, so they need to be calculated only once. To do this we need to define nine reflection coefficients: the ambient coefficients (ρar, ρag, ρab), the diffuse coefficients (ρdr, ρdg, ρdb) and the specular coefficients (ρsr, ρsg, ρsb).
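A small sketch of how equation (1) is evaluated per color component (not from the notes; the vectors are assumed to be already normalized, and the lambert and phong factors are assumed to be clamped to zero before the call):

#include <math.h>

/* One color component of I = Ia*ra + Id*rd*lambert + Isp*rs*phong^f.
   lambert and phong are the geometry-dependent factors, computed once
   and reused for the red, green and blue calls.                      */
double shade(double Ia, double ra, double Id, double rd,
             double Isp, double rs, double lambert, double phong, double f)
{
    return Ia * ra + Id * rd * lambert + Isp * rs * pow(phong, f);
}

The same function would be called three times, once per color channel, with the corresponding source intensities and reflection coefficients.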
glEnd();

The call to glNormal3f() sets the "current normal vector", which is applied to all vertices sent using glVertex3f(). The current normal remains current until it is changed with another call to glNormal3f().

All quantities after the modelview transformation are expressed in camera coordinates. At this point the shading model equation (1) is applied and a color is attached to each vertex.

The clipping step is performed in homogeneous coordinates. This may alter some of the vertices. The below figure shows the case where vertex v1 of a triangle is clipped off and two new vertices a and b are created. The triangle becomes a quadrilateral. The color at each new vertex must be computed, since it is needed in the actual rendering step.

Figure : Clipping a polygon against the view volume

The vertices are finally passed through the viewport transformation, where they are mapped into screen coordinates. The quadrilateral is then rendered.

Create a Light Source

In OpenGL we can define up to eight sources, which are referred to through the names GL_LIGHT0, GL_LIGHT1, and so on. Each source has properties and must be enabled. Each property has a default value. For example, to create a source located at (3,6,5) in world coordinates:

GLfloat myLightPosition[] = {3.0, 6.0, 5.0, 1.0};
glLightfv(GL_LIGHT0, GL_POSITION, myLightPosition);
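Having given the source a position, it still has to be switched on. The two standard OpenGL calls below do that; they are shown here as a reminder and are not part of the original listing:

glEnable(GL_LIGHTING);   // turn on the lighting calculations
glEnable(GL_LIGHT0);     // enable this particular source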
Some sources, such as a desk lamp, are in the scene, whereas others, like the sun, are infinitely remote. OpenGL allows us to create both types by using homogeneous coordinates to specify the light position:

(x,y,z,1) : a local light source at the position (x,y,z)

(x,y,z,0) : a vector to an infinitely remote light source in the direction (x,y,z)

Figure : A local source and an infinitely remote source

The above fig. shows a local source positioned at (0,3,3,1) and a remote source "located" along vector (3,3,0,0). Infinitely remote light sources are often called "directional".

In OpenGL you can assign a different color to three types of light that a source emits: ambient, diffuse and specular. Arrays are used to hold the colors emitted by light sources, and they are passed to glLightfv() through the following code:

glLightfv(GL_LIGHT0, GL_AMBIENT, amb0);   //attach them to LIGHT0
glLightfv(GL_LIGHT0, GL_DIFFUSE, diff0);
glLightfv(GL_LIGHT0, GL_SPECULAR, spec0);

Colors are specified in RGBA format, meaning red, green, blue and alpha. The alpha value is sometimes used for blending two colors on the screen. Light sources have various default values. For all sources:

Default ambient = (0,0,0,1); dimmest possible : black

Spotlights

Light sources are point sources by default, meaning that they emit light uniformly in all directions. But OpenGL allows you to make them into spotlights, so they emit light in a restricted set of directions. The fig. shows a spotlight aimed in direction d with a "cutoff angle" of α.

Figure : Properties of an OpenGL spotlight

No light is seen at points lying outside the cutoff cone. For vertices such as P, which lie inside the cone, the amount of light reaching P is attenuated by
the factor cos^ε(β), where β is the angle between d and a line from the source to P, and ε is the exponent chosen by the user to give the desired falloff of light with angle.

GLfloat amb[] = { 0.2, 0.3, 0.1, 1.0 };
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, amb);

This code sets the ambient source to the color (0.2, 0.3, 0.1). The default value is (0.2, 0.2, 0.2, 1.0), so the ambient is always present. Setting the ambient source to a non-zero value makes objects in a scene visible even if you have not invoked any of the lighting functions.

In OpenGL the terms "front faces" and "back faces" are used for "inside" and "outside". A face is a front face if its vertices are listed in counterclockwise order as seen by the eye.

The fig. (a) shows an eye viewing a cube which is modeled using the counterclockwise-order notion. The arrows indicate the order in which the vertices are passed to OpenGL. For an object that encloses some space, all faces that are visible to the eye are front faces, and OpenGL draws them with the correct shading. OpenGL also draws back faces, but they are hidden by closer front faces.
OpenGL's definition of a front face

Fig(b) shows a box with a face removed. Three of the visible faces are back faces. By default, OpenGL does not shade these properly. To do proper shading of back faces we use:

glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);

When this statement is executed, OpenGL reverses the normal vectors of any back face so that they point towards the viewer, and then it performs shading computations properly. Replacing GL_TRUE with GL_FALSE will turn off this facility.

Moving Light Sources

Lights can be repositioned by suitable uses of glRotated() and glTranslated(). The array position, specified by using glLightfv(GL_LIGHT0, GL_POSITION, position), is modified by the modelview matrix that is in effect at the time glLightfv() is called. To modify the position of the light with transformations, and independently move the camera, use code such as the following:

void display()
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glPushMatrix();
    glRotated(….);        // move the light
    glTranslated(…);
    glLightfv(GL_LIGHT0, GL_POSITION, position);
    glPopMatrix();
    gluLookAt(….);        // set the camera position
    // draw the object
    glutSwapBuffers();
}

To move the light source with the camera we use the following code:

GLfloat pos[] = {0, 0, 0, 1};
glMatrixMode(GL_MODELVIEW);
gluLookAt(….); // move the light and the camera

This code establishes the light to be positioned at the eye, and the light moves with the camera.

4.1.9 Working With Material Properties In OpenGL

The effect of a light source can be seen only when light reflects off an object's surface. OpenGL provides methods for specifying the various reflection coefficients. The coefficients are set with variations of the function glMaterial, and they can be specified individually for front and back faces. The code:
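A call of the kind being described here, using a placeholder array myDiffuse (the array name is an assumption for illustration), would look like:

GLfloat myDiffuse[] = {0.8f, 0.2f, 0.0f, 1.0f};   // diffuse reflection coefficients
glMaterialfv(GL_FRONT, GL_DIFFUSE, myDiffuse);    // apply them to front faces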
sets the diffuse reflection coefficients (ρdr, ρdg, ρdb) equal to (0.8, 0.2, 0.0) for all specified front faces. The first parameter of glMaterialfv() can take the following values:

GL_FRONT: Set the reflection coefficient for front faces.
GL_BACK: Set the reflection coefficient for back faces.

The second parameter can take the following values:

GL_AMBIENT: Set the ambient reflection coefficients.
GL_AMBIENT_AND_DIFFUSE: Set both the ambient and the diffuse reflection coefficients.

The emissive color of a face causes it to "glow" in the specified color, independently of any light source.

4.1.10 Shading of Scenes specified by SDL

The scene description language SDL supports the loading of material properties into objects so that they can be shaded properly. The code:

light 3 4 5 .8 .8 ! bright white light at (3,4,5)
ambient .2 .6 0
diffuse .8 .2 1 ! red material
specular 1 1 1 ! bright specular spots - the color of the source
specularExponent 20 ! set the phong exponent

The code above describes a scene containing a sphere with the following material properties:
o ambient reflection coefficients: (ρar, ρag, ρab) = (0.2, 0.6, 0);
o diffuse reflection coefficients: (ρdr, ρdg, ρdb) = (0.8, 0.2, 1.0);
o specular reflection coefficients: (ρsr, ρsg, ρsb) = (1.0, 1.0, 1.0);
o Phong exponent f = 20.

The light source is given a color of (0.8, 0.8, 0.8) for both its diffuse and specular components. The global ambient term is (Iar, Iag, Iab) = (0.2, 0.2, 0.2).

The current material properties are loaded into each object's mtrl field at the time the object is created. When an object is drawn using drawOpenGL(), it first passes its material properties to OpenGL, so that at the moment the object is actually drawn, OpenGL has those properties in its current state.

Painting a Face

A face is colored using a polygon fill routine. A polygon fill routine is sometimes called a tiler because it moves over a polygon pixel by pixel, coloring each pixel. The pixels in a polygon are visited in a regular order, usually from bottom to top of the polygon and from left to right.

The polygons of interest here are convex. A tiler designed to fill only convex polygons can be very efficient, because at each scan line there is an unbroken run of pixels that lie inside the polygon. OpenGL uses this property and always fills convex polygons correctly, whereas nonconvex polygons are not filled correctly.

A convex quadrilateral whose face is filled with color
for( int x = xleft; x <= xright; x++)   // fill across the scan line
{
    find the color c for this pixel
    put c into the pixel at (x,y)
}
}

The main difference between flat and smooth shading is the manner in which the color c is determined for each pixel.

4.2.1 Flat Shading

When a face is flat, like a roof, and the light sources are distant, the diffuse light component varies little over different points on the roof. In such cases we use the same color for every pixel covered by the face.

OpenGL offers a rendering mode in which the entire face is drawn with the same color. In this mode, although a color is passed down the pipeline as part of each vertex of the face, the painting algorithm uses only one color value. So the command find the color c for this pixel is not inside the loops, but appears before the loop, setting c to the color of one of the vertices.

Flat shading is invoked in OpenGL using the command

glShadeModel(GL_FLAT);

When objects are rendered using flat shading, the individual faces are clearly visible on both sides. Edges between faces actually appear more pronounced than they would on an actual physical object, due to a phenomenon in the eye known as lateral inhibition. When there is a discontinuity across an object, the eye manufactures a Mach band at the discontinuity and a vivid edge is seen.

Specular highlights are rendered poorly with flat shading, because the entire face is filled with a color that was computed at only one vertex.

Smooth shading attempts to de-emphasize edges between faces by computing colors at more points on each face. The two types of smooth shading are:

Gouraud shading
Phong shading

Gouraud Shading

Gouraud shading computes a different value of c for each pixel. For the scan line ys in the fig., it finds the color at the leftmost pixel, colorleft, by linear interpolation of the colors at the top and bottom of the left edge of the polygon. For the same scan line the color at the top is color4 and that at the bottom is color1, so colorleft will be calculated as

colorleft = lerp(color1, color4, f),   ----------(1)

where the fraction

f = (ys - ybott) / (y4 - ybott)

varies between 0 and 1 as ys varies from ybott to y4. Eq(1) involves three calculations, since each color quantity has a red, green and blue component.

colorright is found by interpolating the colors at the top and bottom of the right edge. The tiler then fills across the scan line, linearly interpolating between colorleft and colorright to obtain the color at pixel x:

c(x) = lerp(colorleft, colorright, (x - xleft) / (xright - xleft))

To increase the efficiency of the fill, this color is computed incrementally at each pixel; that is, there is a constant difference between c(x+1) and c(x), so that

c(x+1) = c(x) + (colorright - colorleft) / (xright - xleft)
The increment is calculated only once, outside of the innermost loop. The code:

glShadeModel(GL_SMOOTH);

When a sphere and a bucky ball are rendered using Gouraud shading, the bucky ball looks the same as it did when rendered with flat shading, because the same color is associated with each vertex of a face. But the sphere looks smoother, as there are no abrupt jumps in color between the neighboring faces and the edges of the faces are gone, replaced by smoothly varying colors across the object.

Fig.(b) shows how Gouraud shading reveals the underlying surface. The polygonal surface is shown in cross section with vertices V1 and V2. The imaginary smooth surface is also represented. Properly computed vertex normals m1, m2 point perpendicularly to this imaginary surface, so that the correct normal for shading will be used at each vertex and the color thereby found will be correct. The color is then made to vary smoothly between the vertices.

Gouraud shading does not picture highlights well, because colors are found by interpolation. Therefore in Gouraud shading the specular component of intensity is suppressed.
When computing Phong shading we find the normal vector at each point on the face of the object and we apply the shading model there to find the color. We compute the normal vector at each pixel by interpolating the normal vectors at the vertices of the polygon.

The fig shows a projected face with the normal vectors m1, m2, m3 and m4 indicated at the four vertices. For the scan line ys, the vectors mleft and mright are found by linear interpolation.

Bitmap Textures
Procedural Textures

Bitmap Textures
Textures are formed from bitmap representations of images, such as a digitized photo. Such a representation consists of an array txtr[c][r] of color values. If the array has C columns and R rows, the indices c and r vary from 0 to C-1 and R-1 respectively. The function texture(s,t) accesses samples in the array as in the code:

Color3 texture (float s, float t)

Procedural Textures

else return 0.2; //dark background
}

4.3.1 Painting the Textures onto a Flat Surface

Texture space is flat, so it is simple to paste texture on a flat surface.

Mapping texture onto a planar polygon

Example: to define a quadrilateral face and to position a texture on it, we send OpenGL four texture coordinates and four 3D points, as follows:
glEnd();
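A call sequence of the kind being described, with illustrative (not original) texture coordinates and vertex positions, is:

glBegin(GL_QUADS);                                   // one quadrilateral face
    glTexCoord2f(0.0f, 0.0f); glVertex3f(1.0f, 2.5f, 1.5f);
    glTexCoord2f(0.0f, 0.6f); glVertex3f(1.0f, 3.7f, 1.5f);
    glTexCoord2f(0.8f, 0.6f); glVertex3f(2.0f, 3.7f, 1.5f);
    glTexCoord2f(0.8f, 0.0f); glVertex3f(2.0f, 2.5f, 1.5f);
glEnd();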
The fig. shows the use of texture coordinates that tile the texture, making it repeat. To do this, some texture coordinates that lie outside the interval [0,1] are used. When the rendering routine encounters a value of s or t outside the unit square, such as s = 2.67, it ignores the integral part and uses only the fractional part 0.67. So a point on a face that requires (s,t) = (2.6, 3.77) is textured with texture(0.6, 0.77).

The fig. shows a case where the four corners of the texture square are associated with the four corners of a rectangle. In this example, the texture is a 640-by-480 pixel bitmap and it is pasted onto a rectangle with aspect ratio 640/480, so it appears without distortion.

The points inside F will be filled with texture values lying inside P, by finding the internal coordinate values (s,t) through the use of interpolation.
to hold all of the coordinate pairs of the mesh. The two important techniques to treat
texture for an object are:
We compute (sleft, tleft) and (sright, tright) for each scan line in a rapid incremental fashion and interpolate between these values, moving across the scan lines. Linear interpolation produces some distortion in the texture. This distortion is disturbing in an animation when the polygon is rotating. Correct interpolation produces the texture as it should be; in an animation this texture would appear to be firmly attached to the moving or rotating face.

How does motion along corresponding lines operate?

We now find the proper texture coordinates (s,t) at each point on the face being rendered.
The fig. shows the face of a barn. The left edge of the projected face has endpoints a and b. The face extends from xleft to xright across scan line y. We need to find appropriate texture coordinates (sleft, tleft) and (sright, tright) to attach to xleft and xright, which we can then interpolate across the scan line.

Consider finding sleft(y), the value of sleft at scan line y. We know that texture coordinate sA is attached to point a and sB is attached to point b. If the scan line at y is a fraction f of the way between ybott and ytop, so that f = (y - ybott) / (ytop - ybott), the proper texture coordinate to use can be computed from this fraction and the values sA and sB, and similarly for tleft.

Implications for the Graphics Pipeline

The figure shows a refinement of the pipeline. Each vertex V is associated with a texture pair (s,t) and a vertex normal. The vertex is transformed by the modelview matrix, producing a vertex A = (A1, A2, A3) and a normal n' in eye coordinates.

Shading calculations are done using this normal, producing the color c = (cr, cg, cb). The texture coordinates (sA, tA) are attached to A. Vertex A then goes through the perspective transformation, producing a = (a1, a2, a3, a4). The texture coordinates and color c are not altered.

Next, clipping against the view volume is done. Clipping can cause some vertices to disappear and some new vertices to be formed. When a vertex D is created, we determine its position (d1, d2, d3, d4) and attach to it the appropriate color and texture point. After clipping, the face still consists of a number of vertices, to each of which is attached a color and a texture point. For a point A, the information is stored in the array (a1, a2, a3, a4, sA, tA, c, 1). A final term of 1 has been appended; this is used in the next step.

Perspective division is then done. Since we need hyperbolic interpolation, we divide every term in the array that we wish to interpolate hyperbolically by a4, to obtain the array (x, y, z, 1, sA/a4, tA/a4, c, 1/a4). The first three components of the array are (x, y, z) = (a1/a4, a2/a4, a3/a4).

Finally, the rendering routine receives the array (x, y, z, 1, sA/a4, tA/a4, c, 1/a4) for each vertex of the face to be rendered.
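As a sketch of what perspective-correct (hyperbolic) interpolation of a texture coordinate looks like in code, assuming each endpoint of a scan line carries s/a4 and 1/a4 as described above (the function and parameter names here are illustrative, not from the notes):

// linear interpolation helper
float lerp(float a, float b, float f) { return a + (b - a) * f; }

// hyperbolically interpolate s at fraction f across the span,
// given s/a4 and 1/a4 at the left end (A) and the right end (B)
float hyperbolicS(float sOverW_A, float invW_A, float sOverW_B, float invW_B, float f)
{
    float num = lerp(sOverW_A, sOverW_B, f);   // s/a4 interpolates linearly
    float den = lerp(invW_A, invW_B, f);       // 1/a4 interpolates linearly
    return num / den;                          // recover the true s at this pixel
}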
There are three methods to apply the values in the texture map in the rendering calculations.

Creating a Glowing Object

This is the simplest method. The visible intensity I is set equal to the texture value at each spot:

I = texture(s,t)

The object then appears to emit light or glow. Lower texture values emit less light and higher texture values emit more light. No additional lighting calculations are needed. OpenGL does this type of texturing using
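Assuming the standard texture-environment mechanism is meant, the corresponding call would be:

glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);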
The color of an object is the color of its diffuse light component. Therefore we can make the texture appear to be painted onto the surface by varying the diffuse reflection coefficient. The texture function modulates the value of the reflection coefficient from point to point. We replace eq(1) with

I = texture(s,t) [Ia ρa + Id ρd × lambert] + Isp ρs × phong^f

for the appropriate values of s and t. Phong specular reflections are the color of the source and not the object, so highlights do not depend on the texture. OpenGL does this type of texturing using

glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

Simulating Roughness by Bump Mapping

Bump mapping is a technique developed by Blinn to give a surface a wrinkled or dimpled appearance without struggling to model each dimple itself. One problem associated with applying bump mapping to a surface like a teapot is that, since the model does not contain the dimples, the object's outline and the outline of its cast shadow do not show dimples and remain smooth along each face.

The goal is to make a scalar function texture(s,t) disturb the normal vector at each spot in a controlled fashion. This disturbance should depend only on the shape of the surface and the texture.

The fig. shows in cross section how bump mapping works. Suppose the surface is represented parametrically by the function P(u,v) and has unit normal vector m(u,v). Suppose further that the 3D point at (u*,v*) corresponds to texture at (u*,v*).

Blinn's method simulates perturbing the position of the true surface in the direction of the normal vector by an amount proportional to texture(u*,v*); that is,

P'(u*,v*) = P(u*,v*) + texture(u*,v*) m(u*,v*).

Figure(a) shows how this technique adds wrinkles to the surface. The disturbed surface has a new normal vector m'(u*,v*) at each point. The idea is to use this disturbed normal as if it were "attached" to the original undisturbed surface at each
point, as shown in figure (b). Blinn has demonstrated that a good approximation to m'(u*,v*) can be computed from the partial derivatives of texture(u,v) and of P(u,v).

Chrome mapping: A rough and blurry image that suggests the surrounding environment is reflected in the object, as you would see in an object coated with chrome.

Environment mapping: A recognizable image of the surrounding environment is seen reflected in the object. Valuable visual clues are obtained from such reflections, particularly when the object is moving.

4.4 ADDING SHADOWS OF OBJECTS

Shadows make an image more realistic. The way one object casts a shadow on another object gives important visual clues as to how the two objects are positioned with respect to each other. Shadows convey a lot of information; as such, you are getting a second look at the object from the viewpoint of the light source. There are two methods for computing shadows:

Fig(a) shows a box casting a shadow onto the floor. The shape of the shadow is determined by the projections of each of the faces of the box onto the plane of the floor, using the light source as the center of projection.

Fig(b) shows the superposed projections of two of the faces. The top face projects to top' and the front face to front'.

This provides the key to drawing the shadow. After drawing the plane by the use of ambient, diffuse and specular light contributions, draw the six projections
of the box's faces on the plane, using only the ambient light. This technique will draw the shadow in the right shape and color. Finally, draw the box. Each vertex V of a face is projected from the source S onto the plane (which contains point A and has normal n) at the point

V' = S + (V - S) [n·(A - S)] / [n·(V - S)]

4.4.2 Creating Shadows with the use of a Shadow buffer

This method uses a variant of the depth buffer that performs the removal of hidden surfaces. An auxiliary second depth buffer, called a shadow buffer, is used for each light source. This requires a lot of memory.

The method is based on the principle that any points in a scene that are hidden from the light source must be in shadow. If no object lies between a point and the light source, the point is not in shadow.

The shadow buffer contains a depth picture of the scene from the point of view of the light source. Each of the elements of the buffer records the distance from the source to the closest object in the associated direction. Rendering is done in two stages:

1) Loading the shadow buffer

The shadow buffer is initialized with 1.0 in each element, the largest pseudodepth possible. Then, through a camera positioned at the light source, each of the faces in the scene is rasterized, but only the pseudodepth of the point on the face is tested. Each element of the shadow buffer keeps track of the smallest pseudodepth seen so far.

Using the shadow buffer

The fig. shows a scene being viewed by the usual eye camera and a source camera located at the light source. Suppose that point P is on the ray from the source through the shadow buffer pixel d[i][j] and that point B on the pyramid is also on this ray. If the pyramid is present, d[i][j] contains the pseudodepth to B; if the pyramid happens to be absent, d[i][j] contains the pseudodepth to P.

The shadow buffer calculation is independent of the eye position, so in an animation in which only the eye moves, the shadow buffer is loaded only once. The shadow buffer must be recalculated whenever the objects move relative to the light source.

2) Rendering the scene

Each face in the scene is rendered using the eye camera. Suppose the eye camera sees point P through pixel p[c][r]. When rendering p[c][r], we need to find

the pseudodepth D from the source to P,
the index location [i][j] in the shadow buffer that is to be tested, and
the value d[i][j] stored in the shadow buffer.

If d[i][j] is less than D, the point P is in shadow and p[c][r] is set using only ambient light. Otherwise P is not in shadow and p[c][r] is set using ambient, diffuse and specular light.
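A sketch of this per-pixel test in code; the array name d, the helper functions and the small epsilon bias are assumptions for illustration, not taken from these notes:

// d[][] is the shadow buffer, already filled from the source camera.
// D is the pseudodepth of P as seen from the source; [i][j] is its shadow-buffer cell.
float epsilon = 0.01f;                        // small bias to reduce self-shadowing artifacts
if (d[i][j] < D - epsilon)
    setPixelColor(c, r, ambientOnly(P));      // P is hidden from the source: in shadow
else
    setPixelColor(c, r, fullShade(P));        // ambient + diffuse + specular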
4.5 BUILDING A CAMERA IN A PROGRAM

To have fine control over camera movements, we create and manipulate our own camera in a program. After each change to this camera is made, the camera tells OpenGL what the new camera is.

We create a Camera class that does all the things a camera does. In a program we create a Camera object called cam, and adjust it with functions such as the following:

cam.roll(30); // roll it through 30 degrees
cam.yaw(20);  // yaw it through 20 degrees

class Camera {
private:
    Point3 eye;
    Vector3 u, v, n;
    double viewAngle, aspect, nearDist, farDist; // view volume shape
    void setModelViewMatrix();                   // tell OpenGL where the camera is
public:
    void set(Point3 eye, Point3 look, Vector3 up); // like gluLookAt()
    void roll(float angle);   // roll it
    void pitch(float angle);  // increase the pitch
    void yaw(float angle);    // yaw it
    void slide(float delU, float delV, float delN); // slide it
};

The Camera class definition contains fields for eye and the directions u, v and n. Point3 and Vector3 are the basic data types. It also has fields that describe the shape of the view volume: viewAngle, aspect, nearDist and farDist.

      | ux  uy  uz  dx |
V =   | vx  vy  vz  dy |
      | nx  ny  nz  dz |
      | 0   0   0   1  |

This matrix V accounts for the transformation of world points into camera coordinates. The utility routine computes the matrix V on the basis of the current values of eye, u, v and n, and loads the matrix directly into the modelview matrix using glLoadMatrixf().
void Camera :: setModelViewMatrix(void)
{ // load modelview matrix with existing camera values
    float m[16];
    Vector3 eVec(eye.x, eye.y, eye.z); // a vector version of eye
    m[0] = u.x; m[4] = u.y; m[8]  = u.z; m[12] = -eVec.dot(u);
    m[1] = v.x; m[5] = v.y; m[9]  = v.z; m[13] = -eVec.dot(v);
    m[2] = n.x; m[6] = n.y; m[10] = n.z; m[14] = -eVec.dot(n);
    m[3] = 0;   m[7] = 0;   m[11] = 0;   m[15] = 1.0;
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(m); // load OpenGL's modelview matrix
}

void Camera :: set (Point3 Eye, Point3 look, Vector3 up)
{ // create a modelview matrix and send it to OpenGL
    eye.set(Eye); // store the given eye position
    n.set(eye.x - look.x, eye.y - look.y, eye.z - look.z); // make n
    u.set(up.cross(n)); // make u = up X n
    n.normalize();      // make them unit length
    u.normalize();
    v.set(n.cross(u));  // make v = n X u
    setModelViewMatrix(); // tell OpenGL
}

The method set() acts like gluLookAt(): It uses the values of eye, look and up to compute u, v and n according to the equations

n = eye - look,
u = up X n, and
v = n X u.

It places this information in the camera's fields and communicates it to OpenGL.

The routine setShape() is simple. It puts the four argument values into the appropriate camera fields and then calls

gluPerspective(viewAngle, aspect, nearDist, farDist)

along with

glMatrixMode(GL_PROJECTION)

and

glLoadIdentity()

to set the projection matrix.

The central camera functions are slide(), roll(), yaw() and pitch(), which make relative changes to the camera's position and orientation.

4.5.1 Flying the camera

The user flies the camera through a scene interactively by pressing keys or clicking the mouse. For instance,

pressing u will slide the camera up some amount
pressing y will yaw the camera to the left
pressing f will slide the camera forward

The user can see different views of the scene, then change the camera to a better view and produce a picture. Or the user can fly around a scene
taking different snapshots. If the snapshots are stored and then played back, an animation is produced of the camera flying around the scene.

There are six degrees of freedom for adjusting a camera: it can be slid in three dimensions and it can be rotated about any of three coordinate axes.

Sliding the camera means moving it along one of its own axes, that is, in the u, v or n direction, without rotating it. Since the camera is looking along the negative n axis, movement along n is forward or back. Movement along u is left or right, and along v is up or down.

To move the camera a distance D along its u axis, set eye to eye + Du. For convenience, we can combine the three possible slides in a single function slide(delU, delV, delN), which slides the camera amount delU along u, delV along v and delN along n. The code is as follows:

void Camera :: slide(float delU, float delV, float delN)
{
    eye.x += delU * u.x + delV * v.x + delN * n.x;
    eye.y += delU * u.y + delV * v.y + delN * n.y;
    eye.z += delU * u.z + delV * v.z + delN * n.z;
    setModelViewMatrix();
}

Rolling, pitching and yawing the camera each involve a rotation of the camera about one of its own axes.

To roll the camera we rotate it about its own n-axis. This means that both the directions u and v must be rotated, as shown in the fig. We form u' as the appropriate linear combination of u and v, and similarly for v':

u' = cos(α) u + sin(α) v ;
v' = -sin(α) u + cos(α) v

The new axes u' and v' then replace u and v respectively in the camera. The angles are measured in degrees.
void Camera :: roll (float angle)
{ // roll the camera through angle degrees
    float cs = cos(3.14159265/180 * angle);
    float sn = sin(3.14159265/180 * angle);
    Vector3 t(u); // remember old u
    u.set(cs * t.x - sn * v.x, cs * t.y - sn * v.y, cs * t.z - sn * v.z);
    v.set(sn * t.x + cs * v.x, sn * t.y + cs * v.y, sn * t.z + cs * v.z);
    setModelViewMatrix();
}

Implementation of pitch()

void Camera :: pitch (float angle)
{ // pitch the camera through angle degrees around u
    float cs = cos(3.14159265/180 * angle);
    float sn = sin(3.14159265/180 * angle);
    Vector3 t(v); // remember old v
    v.set(cs*t.x - sn*n.x, cs*t.y - sn*n.y, cs*t.z - sn*n.z);
    n.set(sn*t.x + cs*n.x, sn*t.y + cs*n.y, sn*t.z + cs*n.z);
    setModelViewMatrix();
}

void Camera :: yaw (float angle)
{ // yaw the camera through angle degrees around v
    float cs = cos(3.14159265/180 * angle);
    float sn = sin(3.14159265/180 * angle);
    Vector3 t(n); // remember old n
    n.set(cs*t.x - sn*u.x, cs*t.y - sn*u.y, cs*t.z - sn*u.z);
    u.set(sn*t.x + cs*u.x, sn*t.y + cs*u.y, sn*t.z + cs*u.z);
    setModelViewMatrix();
}

The Camera class can be used with OpenGL to fly a camera through a scene. The scene consists of only a teapot. The camera is a global object and is set up in main(). When a key is pressed, myKeyboard() is called and the camera is slid or rotated, depending on which key was pressed.

For instance, if P is pressed, the camera is pitched up by 1 degree. If CTRL F is pressed, the camera is pitched down by 1 degree. After the keystroke has been processed, glutPostRedisplay() causes myDisplay() to be called again to draw the new picture.

This application uses double buffering to produce a fast and smooth transition between one picture and the next. Two memory buffers are used to store the pictures that are generated. The display switches from showing one buffer to showing the other under the control of glutSwapBuffers().

Application to fly a camera around a teapot

#include "camera.h"
{
    case 'F': //slide camera forward
        cam.slide(0, 0, 0.2);
        break;
    case 'F'-64: //slide camera back
        cam.slide(0, 0, -0.2);
        break;
        cam.pitch(1.0);
        break;
    //add roll and yaw controls

    glutSwapBuffers(); //display the screen just made
}

//--------------------------main----------------------------
void main(int argc, char **argv)
{
    glutKeyboardFunc(myKeyboard);
    glutDisplayFunc(myDisplay);
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f); //background is white
Computers are good at repetition. In addition, the high precision with which modern
computers can do calculations allows an algorithm to take closer look at an object,
to get greater levels of details.
Computer graphics can produce pictures of things that do not even exist in
nature or perhaps could never exist. We will study the inherent finiteness of any
computer generated picture. It has finite resolution and finite size, and it must be
made in finite amount of time. The pictures we make can only be approximations,
and the observer of such a picture uses it just as a hint of what the underlying object
really looks like.
5.1 FRACTALS AND SELF-SIMILARITY

Many of the curves and pictures have a particularly important property: they are self-similar. This means that they appear the same at every scale: no matter how much one enlarges a picture of the curve, it has the same level of detail.

Some curves are exactly self-similar, whereby if a region is enlarged the enlargement looks exactly like the original.

Other curves are statistically self-similar, such that the wiggles and irregularities in the curve are the same "on the average", no matter how many times the picture is enlarged. Example: a coastline.

To create K1, divide the line K0 into three equal parts and replace the middle section with a triangular bump having sides of length 1/3. The total length of the line is then 4/3. The second-order curve K2 is formed by building a bump on each of the four line segments of K1.

To form Kn+1 from Kn:

Subdivide each segment of Kn into three equal parts and replace the middle part with a bump in the shape of an equilateral triangle.

In this process each segment is increased in length by a factor of 4/3, so the total length of the curve is 4/3 larger than that of the previous generation. Thus Ki has total length (4/3)^i, which increases as i increases. As i tends to infinity, the length of the curve becomes infinite.
The Koch snowflake of the above figure is formed out of three Koch curves joined together. The perimeter of the ith generation shape Si is three times the length of a Koch curve and so is 3(4/3)^i, which grows forever as i increases. But the area inside the Koch snowflake grows quite slowly. So the edge of the Koch snowflake gets rougher and rougher and longer and longer, but the area remains bounded.

Koch snowflake S3, S4 and S5

We call n the order of the curve Kn, and we say the order-n Koch curve consists of four versions of the order (n-1) Koch curve. To draw K2 we draw a smaller version of K1, then turn left 60°, draw K1 again, turn right 120°, draw K1 a third time, then turn left 60° and draw K1 a final time. For the snowflake this routine is performed just three times, with a 120° turn in between.

The recursive method for drawing any order Koch curve is given in the following pseudocode:

To draw Kn:
    if ( n equals 0 ) Draw a straight line;
    else {
        Draw Kn-1;
        Turn left 60°;
        Draw Kn-1;
        Turn right 120°;
        Draw Kn-1;
        Turn left 60°;
        Draw Kn-1;
    }
Drawing a Koch Curve

void drawKoch (double dir, double len, int n)
{
    if (n == 0)
    dir -= 120;
    dir += 60;

The routine drawKoch() draws Kn on the basis of a parent line of length len that extends from the current position in the direction dir. To keep track of the direction of each child generation, the parameter dir is passed to subsequent calls of drawKoch().
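A sketch of a complete drawKoch() in the same spirit; cvs.lineRel() is an assumed Canvas helper that draws a line relative to the current position, and is not taken from these notes:

void drawKoch(double dir, double len, int n)
{ // Koch curve of order n, of length len, from the current position in direction dir (degrees)
    const double radPerDeg = 0.0174533;          // degrees to radians
    if (n == 0)
        cvs.lineRel(len * cos(radPerDeg * dir), len * sin(radPerDeg * dir));
    else
    {
        n--;            // relate this curve to four next-lower-order curves
        len /= 3.0;     // each one-third as long
        drawKoch(dir, len, n);
        dir += 60;  drawKoch(dir, len, n);
        dir -= 120; drawKoch(dir, len, n);
        dir += 60;  drawKoch(dir, len, n);
    }
}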
In general this copier will have N lenses, each of which performs an affine mapping and then adds its image to the output. The collection of the N affine transformations is called an "iterated function system".

An iterated function system is a collection of N affine transformations Ti, for i = 1, 2, …, N.

5.3.2 Underlying Theory of the Copying Process

Each lens in the copier builds an image by transforming every point in the input image and drawing it on the output image. A black and white image I can be described simply as the set of its black points:

I = set of all black points = { (x,y) such that (x,y) is colored black }

I is the input image to the copier. Then the ith lens, characterized by transformation Ti, builds a new set of points we denote as Ti(I) and adds them to the image being produced at the current iteration. Each added set Ti(I) is the set of all transformed points of I:

Ti(I) = { (x',y') such that (x',y') = Ti(P) for some point P in I }

Upon superposing the three transformed images, we obtain the output image as the union of the outputs from the three lenses:

Output image = T1(I) U T2(I) U T3(I)

The overall mapping from input image to output image is W(.). It maps one set of points - one image - into another, and is given by

W(I) = T1(I) U T2(I) U … U TN(I)

For instance the copy of the first image I0 is the set W(I0).

Since each affine map reduces the size of its image at least slightly, the orbit converges to a unique image called the attractor of the IFS. We denote the attractor by the set A; some of its important properties are:

1. The attractor set A is a fixed point of the mapping W(.), which we write as W(A) = A. That is, putting A through the copier again produces exactly the same image A. The iterates have already converged to the set A, so iterating once more makes no difference.

2. Starting with any input image B and iterating the copying process enough times, we find that the orbit of images always converges to the same A. If Ik = W^(k)(B) is the kth iterate of image B, then as k goes to infinity Ik becomes indistinguishable from the attractor A.

5.3.3 Drawing the kth Iterate

We use graphics to display each of the iterates along the orbit. The initial image I0 can be set, but two choices are particularly suited to the tools developed:

I0 is a polyline. Then successive iterates are collections of polylines.
I0 is a single point. Then successive iterates are collections of points.

Using a polyline for I0 has the advantage that you can see how each polyline is reduced in size in each successive iterate. But more memory and time are required to draw each polyline, and finally each polyline is so reduced as to be indistinguishable from a point.

Using a single point for I0 causes each iterate to be a set of points, so it is straightforward to store these in a list. Then if the IFS consists of N affine maps, the first iterate I1 consists of N points, image I2 consists of N^2 points, I3 consists of N^3 points, etc.

Copier Operation pseudocode (recursive version)

{ //Draw kth iterate of input point list pts for the IFS
    int i;
    else for(i = 1; i <= N; i++) //apply each affine map
        transform(affines[i], pts.pt[j], newpts.pt[j]);
    superCopier(newpts, k - 1);
    }
}

If k = 0 it draws the points in the list.
If k > 0 it applies each of the affine maps Ti, in turn, to all of the points, creating a new list of points, newpts, and then calls

superCopier(newpts, k - 1);

To implement the algorithm we assume that the affine maps are stored in the global array Affine affines[N].

Drawbacks
Inefficient
Huge amount of memory is required.

5.3.4 The Chaos Game

The Chaos Game is a nonrecursive way to produce a picture of the attractor of an IFS.

Set corners of triangle: p[0]=(0,0), p[1]=(1,0), p[2]=(.5,1)
Choose one of the 3 points at random;

A point P is transformed to the midpoint of itself and one of the three fixed points: p[0], p[1] or p[2]. The new point is then drawn as a dot and the process repeats. The picture slowly fills in as the sequence of dots is drawn.

The key is that forming a midpoint based on P is in fact applying an affine transformation. That is,

P = (1/2)(P + p[..])      (find the midpoint of P and p[..])

can be written as

P = | 1/2   0  | P  +  (1/2) p[..]
    |  0   1/2 |

So P is subjected to the affine map, and then the transformed version is written back into P. The offset for this map depends on which point p[i] is chosen.

Drawing the Sierpinski gasket
int index;
do {
    P = transform(aff[index], P);
} while (!bored);
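A sketch of the complete chaos game for the Sierpinski gasket, filling in the steps described above. The drawDot() helper is an assumed plotting routine, and Point2 is used as in the Canvas-based examples elsewhere in these notes:

void chaosGame(int numDots)
{
    Point2 p[3];                                  // the three fixed corner points
    p[0].set(0.0, 0.0);
    p[1].set(1.0, 0.0);
    p[2].set(0.5, 1.0);
    Point2 P = p[0];                              // any starting point will do
    for (int k = 0; k < numDots; k++)
    {
        int index = rand() % 3;                   // choose one of the 3 corners at random
        P.x = 0.5 * (P.x + p[index].x);           // move to the midpoint of P and that corner
        P.y = 0.5 * (P.y + p[index].y);
        drawDot(P);                               // assumed: plots a single dot at P
    }
}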
The IFS uses the simple function

f(z) = z^2 + c   -------------------------------(1)

where c is some constant. The system produces each output by squaring its input and adding c. We assume that the process begins with the starting value s, so the system generates the sequence of values, or orbit,

d1 = (s)^2 + c
d2 = ((s)^2 + c)^2 + c
d3 = (((s)^2 + c)^2 + c)^2 + c
d4 = ((((s)^2 + c)^2 + c)^2 + c)^2 + c   ------------------------------(2)

The orbit depends on two ingredients.

The behavior of the orbits depends on the fixed points of f(.) = (.)^2 + c, that is, those complex numbers z that map into themselves, so that z^2 + c = z. This gives us the quadratic equation z^2 - z + c = 0, and the fixed points of the system are the two solutions of this equation, given by

p+, p- = 1/2 ± √(1/4 - c)   --------------------------------(4)

If an orbit reaches a fixed point p, it gets trapped there forever. The fixed point can be characterized as attracting or repelling. If an orbit flies close to a fixed point p, the next point along the orbit will be forced

closer to p if p is an attracting fixed point
farther away from p if p is a repelling fixed point.

If an orbit gets close to an attracting fixed point, it is sucked into the point. In contrast, a repelling fixed point keeps the orbit away from it.
The Mandelbrot set M is the set of all complex numbers c that produce a finite orbit of 0.

If c is chosen outside of M, the resulting orbit explodes. If c is chosen just beyond the border of M, the orbit usually thrashes around the plane and then goes to infinity.

If the value of c is chosen inside M, the orbit can do a variety of things. For some c's it goes immediately to a fixed point or spirals into such a point.

#define Num 100 // increase this for better pictures

double tmp, dx = cx, dy = cy, fsq = cx*cx + cy*cy;
for(int count = 0; count <= Num && fsq <= 4; count++)
{
    tmp = dx; //save old real part
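A complete dwell() routine along the same lines as the fragment above (the variable names follow the fragment; the return type and the function name are assumptions):

int dwell(double cx, double cy)
{ // compute the dwell of the orbit of 0 for c = cx + cy*i
    double tmp, dx = cx, dy = cy, fsq = cx*cx + cy*cy;
    int count;
    for (count = 0; count <= Num && fsq <= 4; count++)
    {
        tmp = dx;                      // save the old real part
        dx = dx*dx - dy*dy + cx;       // z = z*z + c : real part
        dy = 2*dy*tmp + cy;            //               imaginary part
        fsq = dx*dx + dy*dy;           // |z|^2, used in the escape test
    }
    return count;                      // iterations before escape, capped at Num
}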
The figure shows how color is assigned to a point having dwell d. For very small values of d only a dim blue component is used. As d approaches Num, the red and green components are increased up to a maximum of unity. This could be implemented in OpenGL using:

float v = d / (float)Num;
glColor3f(v * v, v * v, 0.2); // red & green at level v-squared

We need to see how to associate a pixel with a specific complex value of c. A simple approach is suggested in the following figure.

Establishing a window on M and a correspondence between points and pixels.

The user specifies how large the desired image is to be on the screen, that is,

the number of rows, rows
the number of columns, cols

This specification determines the aspect ratio of the image: R = cols/rows. The user also chooses a portion of the complex plane to be displayed: a rectangular region having the same aspect ratio as the image. To do this the user specifies the region's upper left hand corner P and its width W. The rectangle's height is set by the required aspect ratio. The image is displayed in the upper left corner of the display.

To what complex value c = cx + cy·i does the center of the i, jth pixel correspond? Combining we get
A practical problem is that, to study close-up views of the Mandelbrot set, numbers must be stored and manipulated with great precision.

Also, when working close to the boundary of the set, you should use a larger value of Num. The calculation times for each image will increase as you zoom in on a region of the boundary of M. But images of modest size can easily be created on a microcomputer in a reasonable amount of time.

Pseudocode for drawing a region of the Filled-in Julia set

for(j = 0; j < rows; j++)
    for(i = 0; i < cols; i++)
    {
        estimate the dwell of the orbit
        find Color determined by estimated dwell
        setPixel( j , k, Color);
    }

The dwell() function must be passed the starting point s as well as c. Making a high-resolution image of a Kc requires a great deal of computer time, since a complex calculation is associated with every pixel.

5.5.3 Notes on Fixed Points and Basins of Attraction

If an orbit starts close enough to an attracting fixed point, it is sucked into that point. If it starts too far away, it explodes. The set of points that are sucked in forms a so-called basin of attraction for the fixed point p. The set is the filled-in Julia set Kc. The fixed point which lies inside the circle |z| = ½ is the attracting point.

All points outside Kc have orbits that explode. All points inside Kc have orbits that spiral or plunge into the attracting fixed point. If the starting point is inside Kc, then all of the points on the orbit must also be inside Kc and they produce a finite orbit. The repelling fixed point is on the boundary of Kc.

Kc for Two Simple Cases

The set Kc is simple for two values of c:

1. c = 0: Starting at any point s, the orbit is simply s, s^2, s^4, ……, s^(2k), …, so the orbit spirals into 0 if |s| < 1 and explodes if |s| > 1. Thus K0 is the set of all complex numbers lying inside the unit circle, the circle of radius 1 centered at the origin.

2. c = -2: In this case it turns out that the filled-in Julia set consists of all points lying on the real axis between -2 and 2.

For all other values of c, the set Kc is complex. It has been shown that each Kc is one of two types:

Kc is connected, or
Kc is a Cantor set

A theoretical result is that Kc is connected for precisely those values of c that lie in the Mandelbrot set.

5.5.4 The Julia Set Jc

The Julia set Jc, for any given value of c, is the boundary of Kc. Kc is the set of all starting points that have finite orbits, and every point outside Kc has an exploding orbit. We say that the points just along the boundary of Kc are "on the fence". Inside the boundary all orbits remain finite; just outside it, all orbits go to infinity.

Preimages and Fixed Points

If the process started instead at f(s), the image of s, then the two orbits would be:

s, f(s), f^2(s), f^3(s), ….   (orbit of s)

or

f(s), f^2(s), f^3(s), f^4(s), ….   (orbit of f(s))

which have the same values forever. If the orbit of s is finite, then so is the orbit of its image f(s). All of the points in the orbit, if considered as starting points on their own, have orbits with the same behavior: they are all finite or they all explode.

Any starting point whose orbit passes through s has the same behavior as the orbit that starts at s: the two orbits are identical forever. The point "just before" s in the sequence is called the preimage of s and is found with the inverse of the function f(.) = (.)^2 + c. The inverse of f(.) is √(z - c), so the

two preimages of z are given by ±√(z - c)   ------------------(6)

To check that equation (6) is correct, note that if either preimage is passed through (.)^2 + c, the result is z. The test is illustrated in figure(a), where the
orbit of s is shown in black dots and the two preimages of s are marked. The two orbits of these preimages "join up" with that of s.

Each of these preimages has two preimages, and each of these has two, so there is a huge collection of orbits that join up with the orbit of s and thereafter are committed to the same path. The tree of preimages of s is illustrated in fig(b): s has two parent preimages, 4 grandparents, etc. Going back k generations we find that there are 2^k preimages.

The Julia set Jc can be characterized in many ways that are more precise than simply saying it is the "boundary of" Kc. One such characterization that suggests an algorithm for drawing Jc is the following:

The collection of all preimages of any point in Jc is dense in Jc.

Starting with any point z in Jc, we simply compute its two parent preimages, their four grandparent preimages, their eight great-grandparent ones, etc. So we draw a dot at each such preimage, and the display fills in with a picture of the Julia set. To say that these dots are dense in Jc means that for every point in Jc, there is some preimage that is close by.

There are two practical difficulties with this approach:

1. finding a point in Jc
2. keeping track of all the preimages

An approach known as the backward-iteration method overcomes these obstacles and produces good results. The idea is simple: choose some point z in the complex plane. The point may or may not be in Jc. Now iterate in the backward direction: at each iteration choose one of the two square roots randomly, to produce a new z value. The following pseudocode is illustrative:

do {
    if (a random choice is heads) z = +sqrt(z - c);
    else z = -sqrt(z - c);
    draw dot at z;
} while (!bored);
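A sketch of this backward-iteration method using the standard library complex type; drawDot() is an assumed plotting helper, and skipping the first few iterations (so that z has time to settle onto Jc) is a common practical choice, not something stated in these notes:

#include <complex>
#include <cstdlib>

void drawJulia(std::complex<double> c, int numDots)
{
    std::complex<double> z(1.0, 0.0);                // any starting point
    for (int k = 0; k < numDots; k++)
    {
        std::complex<double> w = std::sqrt(z - c);   // one preimage of z
        if (rand() % 2 == 0) w = -w;                 // pick the other root half the time
        z = w;
        if (k > 20) drawDot(z.real(), z.imag());     // assumed plotting helper
    }
}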
computed randomly. If t is positive, the elbow lies to one side of AB; if t is negative it lies to the other side.

For most fractal curves, t is modeled as a Gaussian random variable with a zero mean and some standard deviation. Using a mean of zero causes, with equal probability, the elbow to lie above or below the parent segment.

Fractalizing a Line segment

void fract(Point2 A, Point2 B, double stdDev)
// generate a fractal curve from A to B
{
    double xDiff = A.x - B.x, yDiff = A.y - B.y;
    Point2 C;
    if(xDiff * xDiff + yDiff * yDiff < minLenSq)
        cvs.lineTo(B.x, B.y);
    else
    {
        stdDev *= factor;   // scale the standard deviation at this level
        double t = 0.0;
        // make a gaussian variate t lying between 0 and 12.0
        for(int i = 0; i < 12; i++)
            t += rand() / 32768.0;
        t = (t - 6) * stdDev; // shift the mean to 0 and scale the variance
        C.x = 0.5 * (A.x + B.x) - t * (B.y - A.y);
        C.y = 0.5 * (A.y + B.y) + t * (B.x - A.x);
        fract(A, C, stdDev);
        fract(C, B, stdDev);
    }
}

The routine fract() generates curves that approximate actual fractals. The routine recursively replaces each segment in a random elbow with a smaller random elbow. The stopping criterion used is: when the length of the segment is small enough, the segment is drawn using cvs.lineTo(), where cvs is a Canvas object. The variable t is made to be approximately Gaussian in its distribution by summing together 12 uniformly distributed random values lying between 0 and 1. The result has a mean value of 6 and a variance of 1. The mean value is then shifted to 0 and the variance is scaled as necessary.

The depth of recursion in fract() is controlled by the length of the line segment.

5.6.2 Controlling the Spectral Density of the Fractal Curve

The fractal curve generated using the above code has a "power spectral density" given by

S(f) = 1 / f^β

where β, the power of the noise process, is the parameter the user can set to control the jaggedness of the fractal noise. When β is 2, the process is known as Brownian motion, and when β is 1, the process is called "1/f noise". 1/f noise is self-similar and has been shown to be a good model for physical processes such as clouds. The fractal dimension of such processes is:

D = (5 - β) / 2

In the routine fract(), the scaling factor factor, by which the standard deviation is scaled at each level, is based on the exponent β of the fractal
curve. Values larger than 2 lead to smoother curves and values smaller than 2 lead to more jagged curves. The value of factor is given by:

factor = 2^((1 - β)/2)

The factor decreases as β increases.

Drawing a fractal curve (pseudocode)

void drawFractal (Point2 A, Point2 B)
{
    double beta, StdDev;
    // user inputs beta, MinLenSq and the initial StdDev
    factor = pow(2.0, (1.0 - beta) / 2.0);
    cvs.moveTo(A);
    fract(A, B, StdDev);
}

In this routine factor is computed using the C++ library function pow(…).

One of the features of fractal curves generated by pseudorandom-number generation is that they are repeatable. All that is required is to use the same seed each time the curve is fractalized. A complicated shape can be fractalized and stored in the database by storing only

the polypoint that describes the original line segments,
the values of minLenSq and stdDev, and
the seed.

An exact replica of the fractalized curve can be regenerated at any time using this information.

5.7 INTERSECTING RAYS WITH OTHER PRIMITIVES

First the ray is transformed into the generic coordinates of the object, and then the various intersections with the generic object are computed.

The generic square lies in the z = 0 plane and extends from -1 to 1 in both x and y. The square can be transformed into any parallelogram positioned in space, so it is often used in scenes to provide thin, flat surfaces such as walls and windows. The function hit() first finds where the ray hits the generic plane and then tests whether this hit spot also lies within the square.

2) Intersecting with a Tapered Cylinder

The side of the cylinder is part of an infinitely long wall with a radius of L at z = 0 and a small radius of S at z = 1. This wall has the implicit form

F(x, y, z) = x^2 + y^2 - (1 + (S - 1) z)^2,   for 0 < z < 1

If S = 1, the shape becomes the generic cylinder; if S = 0, it becomes the generic cone. We develop a hit() method for the tapered cylinder, which also provides a hit() method for the cylinder and cone.

3) Intersecting with a Cube (or any Convex Polyhedron)

The convex polyhedron, the generic cube, deserves special attention. It is centered at the origin and has corners at (±1, ±1, ±1), using all eight combinations of +1 and -1. Thus its edges are aligned with the coordinate axes, and its six faces lie in the planes x = ±1, y = ±1 and z = ±1.

The generic cube is important for two reasons.

A large variety of interesting boxes can be modeled and placed in a scene by applying an affine transformation to a generic cube. Then, in ray tracing, each ray can be inverse transformed into the generic cube's coordinate system and we can use a ray-with-generic-cube intersection routine.
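As an illustration of these generic-object intersection tests, here is a sketch of the hit test for the generic square described above, assuming the ray has already been inverse transformed into generic coordinates with start S and direction c (the type and field names are illustrative, not from these notes):

bool hitGenericSquare(const Vector3& S, const Vector3& c, double& tHit)
{
    if (c.z == 0.0) return false;          // ray is parallel to the z = 0 plane
    double t = -S.z / c.z;                 // where the ray meets the plane z = 0
    if (t <= 0.0) return false;            // hit lies behind the ray's start
    double hx = S.x + c.x * t;             // hit spot in the plane
    double hy = S.y + c.y * t;
    if (hx < -1.0 || hx > 1.0 || hy < -1.0 || hy > 1.0)
        return false;                      // outside the unit square
    tHit = t;                              // record the hit time
    return true;
}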
The generic cube can be used as an extent for the other generic primitives, in the sense of a bounding box. Each generic primitive, such as the cylinder, fits snugly inside the cube.

4) Adding More Primitives

To find where the ray S + ct intersects the surface, we substitute S + ct for P in F(P) (the implicit form of the shape):

d(t) = F(S + ct)

This function is

positive at those values of t for which the ray is outside the object,
zero when the ray coincides with the surface of the object, and
negative when the ray is inside the surface.

The generic torus has a quartic implicit function, so the resulting equation d(t) = 0 is quartic.

For quadrics such as the sphere, d(t) has a parabolic shape; for the torus, it has a quartic shape. For other surfaces d(t) may be so complicated that we have to search numerically to locate the t's for which d(.) equals zero. The function for the superellipsoid is

d(t) = ((Sx + Cx t)^n + (Sy + Cy t)^n)^(m/n) + (Sz + Cz t)^m - 1

where n and m are constants that govern the shape of the surface.

5.8 ADDING SURFACE TEXTURE

A fast method for approximating global illumination effects is environment mapping. An environment array is used to store background intensity information for a scene. This array is then mapped to the objects in a scene based on the specified viewing direction. This is called environment mapping or reflection mapping.

To render the surface of an object, we project pixel areas onto the surface and then reflect the projected pixel area onto the environment map to pick up the surface shading attributes for each pixel. If the object is transparent, we can also refract the projected pixel area to the environment map. The environment mapping process for reflection of a projected pixel area is shown in the figure. Pixel intensity is determined by averaging the intensity values within the intersected region of the environment map.

A simple method for adding surface detail is to model structure and patterns with polygon facets. For large scale detail, polygon modeling can give good results. Also, we could model an irregular surface with small, randomly oriented polygon facets, provided the facets are not too small.

Surface-pattern polygons are generally overlaid on a larger surface polygon and are processed with the parent's surface. Only the parent polygon is processed by the visible surface algorithms, but the illumination parameters for the surface-detail polygons take precedence over the parent polygon. When fine surface detail is to be modeled, polygons are not practical.

A better method for adding surface detail is to map texture patterns onto the surfaces of objects. The texture pattern may either be defined in a rectangular array or as a procedure that modifies surface intensity values. This approach is referred to as texture mapping or pattern mapping.

The texture pattern is defined with a rectangular grid of intensity values in a texture space referenced with (s,t) coordinate values. Surface positions in the scene are referenced with uv object space coordinates, and pixel positions on the projection plane are referenced in xy Cartesian coordinates.

Texture mapping can be accomplished in one of two ways. Either we can map the texture pattern to object surfaces, then to the projection plane, or we can map pixel areas onto object surfaces and then to texture space. Mapping a texture pattern to pixel coordinates is sometimes called texture scanning, while the mapping from pixel coordinates to texture space is referred to as pixel-order scanning or inverse scanning or image-order scanning.
To simplify calculations, the mapping from texture space to object space is often specified with parametric linear functions

u = fu(s,t) = au·s + bu·t + cu
v = fv(s,t) = av·s + bv·t + cv

The object-to-image space mapping is accomplished with the concatenation of the viewing and projection transformations.

A disadvantage of mapping from texture space to pixel space is that a selected texture patch usually does not match up with the pixel boundaries, thus requiring calculation of the fractional area of pixel coverage. Therefore, mapping from pixel space to texture space is the most commonly used texture mapping method. This avoids pixel subdivision calculations, and allows anti-aliasing procedures to be easily applied.

The mapping from image space to texture space does require calculation of the inverse viewing-projection transformation MVP^-1 and the inverse texture map transformation MT^-1.

5.8.2 Procedural Texturing Methods

The next method for adding surface texture is to use procedural definitions of the color variations that are to be applied to the objects in a scene. This approach avoids the transformation calculations involved in transferring two dimensional texture patterns to object surfaces.

When values are assigned throughout a region of three dimensional space, the object color variations are referred to as solid textures. Values from texture space are transferred to object surfaces using procedural methods, since it is usually impossible to store texture values for all points throughout a region of space (e.g. wood grain or marble patterns).

Bump Mapping

Although texture mapping can be used to add fine surface detail, it is not a good method for modeling the surface roughness that appears on objects such as oranges, strawberries and raisins. The illumination detail in the texture pattern usually does not correspond to the illumination direction in the scene. A better method for creating surface bumpiness is to apply a perturbation function to the surface normal and then use the perturbed normal in the illumination model calculations. This technique is called bump mapping.

If P(u,v) represents a position on a parametric surface, we can obtain the surface normal at that point with the calculation

N = Pu × Pv

where Pu and Pv are the partial derivatives of P with respect to parameters u and v. To obtain a perturbed normal, we modify the surface position vector by adding a small perturbation function called a bump function:

P'(u,v) = P(u,v) + b(u,v) n.

This adds bumps to the surface in the direction of the unit surface normal n = N/|N|. The perturbed surface normal is then obtained as

N' = Pu' × Pv'

We calculate the partial derivative with respect to u of the perturbed position vector as

Pu' = ∂/∂u (P + b n) = Pu + bu n + b nu

Assuming the bump function b is small, we can neglect the last term and write

Pu' ≈ Pu + bu n

Similarly, Pv' ≈ Pv + bv n, and the perturbed surface normal is

N' = Pu × Pv + bv (Pu × n) + bu (n × Pv) + bu bv (n × n).

But n × n = 0, so that

N' = N + bv (Pu × n) + bu (n × Pv)

The final step is to normalize N' for use in the illumination model calculations.

5.8.3 Frame Mapping
Extension of bump mapping is frame mapping.

In frame mapping, we perturb both the surface normal N and a local coordinate system attached to N. The local coordinates are defined with a surface tangent vector T and a binormal vector B = T x N.

Frame mapping is used to model anisotropic surfaces. We orient T along the grain of the surface and apply directional perturbations in addition to bump perturbations in the direction of N. In this way, we can model wood grain patterns, cross thread patterns in cloth and streaks in marble or similar materials. Both bump and directional perturbations can be obtained with table look-ups.

To incorporate texturing into a ray tracer, two principal kinds of textures are used.

With image textures, a 2D image is pasted onto each surface of the object.

With solid textures, the object is considered to be carved out of a block of some material that itself has texturing. The ray tracer reveals the color of the texture at each point on the surface of the object.

5.8.4 Solid Texture

Solid texture is sometimes called 3D texture. We view an object as being carved out of some texture material such as marble or wood. A texture is represented by a function texture(x, y, z) that produces an (r, g, b) color value at every point in space. Think of this texture as a color or inkiness that varies with position; if you look at different points (x, y, z) you see different colors. When an object of some shape is defined in this space, and all the material outside the shape is chipped away to reveal the object's surface, each point (x, y, z) on the surface is revealed and has the specified texture.

5.8.5 Wood grain texture

The grain in a log of wood is due to concentric cylinders of varying color, corresponding to the rings seen when a log is cut. As the distance of a point from some axis varies, the function jumps back and forth between two values. This effect can be simulated with the modulo function

rings(r) = ((int) r) % 2

where, for rings about the z-axis, the radius is r = √(x² + y²). The value of the function rings() jumps between zero and unity as r increases from zero.

5.8.6 3D Noise and Marble Texture

The grain in materials such as marble is quite chaotic. Turbulent riverlets of dark material course through the stone with random whirls and blotches, as if the stone was formed out of some violently stirred molten material. We can simulate turbulence by building a noise function that produces an apparently random value at each point (x, y, z) in space. This noise field is then stirred up in a well-controlled way to give the appearance of turbulence.

5.8.7 Turbulence

A method for generating more interesting noise is to mix together several noise components: one that fluctuates slowly as you move slightly through space, one that fluctuates twice as rapidly, one that fluctuates four times as rapidly, and so on. The more rapidly varying components are given progressively smaller strengths:

turb(s, x, y, z) = 1/2 noise(s, x, y, z) + 1/4 noise(2s, x, y, z) + 1/8 noise(4s, x, y, z)

The function adds three such components, each half as strong and varying twice as rapidly as its predecessor. The general form of turb() sums terms of this kind:

turb(s, x, y, z) = 1/2 Σk (1/2^k) noise(2^k s, x, y, z)
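The rings() and turb() functions just described might be sketched in C++ as follows. The noise() routine is assumed to be some smooth 3D noise generator (for example Perlin noise) and is only declared here; it is not defined in these notes.

#include <cmath>

// assumed smooth 3D noise generator (e.g. Perlin noise), returning values in [0, 1]
double noise(double s, double x, double y, double z);

// wood grain: jumps between 0 and 1 as the distance from the z-axis increases
int rings(double r) { return ((int) r) % 2; }

double woodGrain(double x, double y, double z) {
    double r = sqrt(x * x + y * y);       // radius about the z-axis
    return rings(r);
}

// turbulence: each component is half as strong and varies twice as
// rapidly as its predecessor
double turb(double s, double x, double y, double z, int terms = 3) {
    double sum = 0.0, amp = 0.5, freq = 1.0;
    for (int k = 0; k < terms; k++) {
        sum += amp * noise(freq * s, x, y, z);
        amp *= 0.5;
        freq *= 2.0;
    }
    return sum;
}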
5.8.8 Marble Texture

Marble shows veins of dark and light material that have some regularity, but that also exhibit strongly chaotic irregularities. We can build up a marble-like 3D texture by giving the veins a smoothly fluctuating behavior in the z-direction and then perturbing them chaotically using turb(). We start with a texture that is constant in x and y and smoothly varying in z, built from a function undulate(), a spline-shaped function that varies between some dark and some light value as its argument varies from -1 to 1.
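One way such a marble texture might be assembled from turb() is sketched below, as a continuation of the previous sketch. The notes do not give the basic vein function, so the sin(2πz) argument, the perturbation strength and the undulate() spline used here are assumptions for illustration only.

// assumed spline-shaped function mapping [-1, 1] to a dark..light intensity
double undulate(double t);

const double PI = 3.14159265358979;

// smooth veins along z, perturbed chaotically by turb() from the sketch above
double marble(double s, double x, double y, double z, double strength) {
    double vein = sin(2.0 * PI * z + strength * turb(s, x, y, z));
    return undulate(vein);   // map the returned intensity to an (r, g, b) color as desired
}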
5.9 REFLECTIONS AND TRANSPARENCY

One of the great strengths of the ray tracing method is the ease with which it can handle both reflection and refraction of light. This allows one to build scenes of exquisite realism, containing mirrors, fishbowls, lenses and the like. There can be multiple reflections, in which light bounces off several shiny surfaces before reaching the eye, or elaborate combinations of refraction and reflection. Each of these processes requires the spawning and tracing of additional rays.

Figure 5.15 shows a ray emanating from the eye in the direction dir and hitting a surface at the point Ph. When the surface is mirror-like or transparent, the light I that reaches the eye may have five components

I = Iamb + Idiff + Ispec + Irefl + Itran

The first three are the familiar ambient, diffuse and specular contributions. The diffuse and specular parts arise from light sources in the environment that are visible at Ph. Irefl is the reflected light component, arising from the light IR that is incident at Ph along the direction -r. This direction is such that the angles of incidence and reflection are equal, so

r = dir - 2 (dir . m) m

where we assume that the normal vector m at Ph has been normalized.

Similarly, Itran is the transmitted light component, arising from the light IT that is transmitted through the transparent material to Ph along the direction -t. A portion of this light passes through the surface and, in so doing, is bent, continuing its travel along -dir. The refraction direction t depends on several factors.

I is a sum of various light contributions, and IR and IT each arise from their own five components - ambient, diffuse and so on. IR is the light that would be seen by an eye at Ph along a ray from P' to Ph. To determine IR, we in fact spawn a secondary ray from Ph in the direction r, find the first object it hits, and repeat the same computation of the light components. Similarly, IT is found by casting a ray in the direction t, seeing what surface is hit first, and then computing the light contributions there.

When a ray of light strikes a transparent object, a portion of the ray penetrates the object. The ray will change direction from dir to t if the speed of light is different in medium 1 than in medium 2. If the angle of incidence of the ray is θ1, Snell's law states that the angle of refraction θ2 satisfies

sin(θ2) / C2 = sin(θ1) / C1

where C1 is the speed of light in medium 1 and C2 is the speed of light in medium 2. Only the ratio C2/C1 is important; it is often called the index of refraction of medium 2 with respect to medium 1. Note that if θ1 equals zero, so does θ2: light hitting an interface at right angles is not bent.

In ray tracing scenes that include transparent objects, we must keep track of the medium through which a ray is passing, so that we can determine the value C2/C1 at the next intersection, where the ray either exits from the current object or enters another one. This tracking is most easily accomplished by adding a field to the ray that holds a pointer to the object within which the ray is travelling.

Several design policies are used:

1) Design Policy 1: No two transparent objects may interpenetrate.
2) Design Policy 2: Transparent objects may interpenetrate.
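The reflection direction and the refraction direction given by Snell's law can be computed directly. The sketch below reuses the Vec3 helpers from the earlier bump-mapping sketch; the function names and the total-internal-reflection handling are illustrative assumptions, not code from these notes.

// dot product helper (Vec3, add() and scale() as in the earlier sketch)
double dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// reflection direction: r = dir - 2 (dir . m) m, with m the unit normal at Ph
Vec3 reflectDir(const Vec3 &dir, const Vec3 &m) {
    return add(dir, scale(-2.0 * dot(dir, m), m));
}

// refraction direction from Snell's law, with ratio = C2 / C1;
// returns false when total internal reflection occurs
bool refractDir(const Vec3 &dir, const Vec3 &m, double ratio, Vec3 &t) {
    double cos1 = -dot(dir, m);                              // cosine of angle of incidence
    double k = 1.0 - ratio * ratio * (1.0 - cos1 * cos1);
    if (k < 0.0) return false;                               // total internal reflection
    double cos2 = sqrt(k);                                   // cosine of angle of refraction
    t = add(scale(ratio, dir), scale(ratio * cos1 - cos2, m));
    return true;
}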
5.10 COMPOUND OBJECTS: BOOLEAN OPERATIONS ON OBJECTS

A ray tracing method to combine simple shapes into more complex ones is known as Constructive Solid Geometry (CSG). Arbitrarily complex shapes are defined by set operations on simpler shapes in a CSG system. Objects such as lenses and hollow fish bowls, as well as objects with holes, are easily formed by combining the generic shapes. Such objects are called compound, Boolean or CSG objects.

The Boolean operators union, intersection and difference are shown in figure 5.17. Two compound objects are built from spheres. The intersection of two spheres is shown as a lens shape: a point is in the lens if and only if it is in both spheres. The lens L, the intersection of S1 and S2, is written as

L = S1 ∩ S2
The difference operation is shown as a bowl. A point is in the difference of sets A and B, denoted A - B, if it is in A and not in B. Applying the difference operation is analogous to removing material, to cutting or carving. The bowl is specified by

B = (S1 - S2) - C

The solid globe S1 is hollowed out by removing all the points of the inner sphere S2, forming a hollow spherical shell. The top is then opened by removing all points in the cone C.

A point is in the union of two sets A and B, denoted A U B, if it is in A or in B or in both. Forming the union of two objects is analogous to gluing them together. The union of two cones and two cylinders is shown as a rocket

R = C1 U C2 U C3 U C4

Cone C1 rests on cylinder C2. Cone C3 is partially embedded in C2 and rests on the fatter cylinder C4.

5.10.1 Ray Tracing CSG objects

We can ray trace objects that are Boolean combinations of simpler objects. The ray is inside lens L from t3 to t2, and the hit time is t3. If the lens is opaque, the familiar shading rules will be applied to find what color the lens is at the hit spot. If the lens is mirror-like or transparent, spawned rays are generated with the proper directions and are traced, as shown in figure 5.18.

Ray 1 first strikes the bowl at t1, the smallest of the times for which it is in S1 but not in either S2 or C. Ray 2, on the other hand, first hits the bowl at t5. Again this is the smallest time for which the ray is in S1 but in neither the other sphere nor the cone. The hits at earlier times are hits with component parts of the bowl, but not with the bowl itself.

5.10.2 Data Structure for Boolean objects

Since a compound object is always the combination of two other objects, say obj1 OP obj2, a binary tree structure provides a natural description.

5.10.3 Intersecting Rays with Boolean Objects

We need to develop a hit() method for each type of Boolean object. The method must form the inside set for the ray with the left subtree, the inside set for the ray with the right subtree, and then combine the two sets appropriately.

bool IntersectionBool::hit(Ray &r, Intersection &inter)
{
    Intersection lftInter, rtInter;
    if (ray misses the extents) return false;
    if ((!left->hit(r, lftInter)) || (!right->hit(r, rtInter)))
        return false;
    // combine lftInter and rtInter into inter
    return (inter.numHits > 0);
}

Extent tests are first made to see if there is an early out. Then the proper hit() routine is called for the left subtree and, unless the ray misses this subtree, the hit list lftInter is formed. If there is a miss, hit() returns the value false immediately, because the ray must hit both subtrees in order to hit their intersection. Then the hit list rtInter is formed.

The code is similar for the UnionBool and DifferenceBool classes. For UnionBool::hit(), the two hit lists are formed using

if ((!left->hit(r, lftInter)) && (!right->hit(r, rtInter)))
    return false;

which provides an early out only if both hit lists are empty.

For DifferenceBool::hit(), we use the code

if (!left->hit(r, lftInter)) return false;
if (!right->hit(r, rtInter))
{
    inter = lftInter;
    return true;
}

which gives an early out if the ray misses the left subtree, since it must then miss the whole object.
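The hit() routines above assume the binary tree organization described in section 5.10.2, in which each Boolean node holds two subtrees. A minimal sketch of such a node is given below; the class and member names are illustrative assumptions, not definitions from these notes.

class Ray;            // ray and intersection record types used by hit()
class Intersection;

// base class for anything that can be ray traced
class Shape {
public:
    virtual bool hit(Ray &r, Intersection &inter) = 0;
};

// a Boolean (CSG) node: an operator applied to two subtrees; leaves of the
// tree are primitive shapes such as spheres, cones and cylinders
class BooleanShape : public Shape {
public:
    Shape *left;      // left operand  (obj1)
    Shape *right;     // right operand (obj2)
    // IntersectionBool, UnionBool and DifferenceBool derive from this class
    // and implement hit() by combining the hit lists of the two subtrees
};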
5.10.4 Building and using Extents for CSG object

Projection, sphere and box extents are created for the CSG object. During a preprocessing step, the tree for the CSG object is scanned and extents are built for each node and stored within the node itself. During ray tracing, the ray can be tested against each extent encountered, with the potential benefit of an early out in the intersection process if it becomes clear that the ray cannot hit the object.
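As an illustration of such an early out, a sphere extent stored in a node might be tested as sketched below before any subtree hit() calls are made. The Ray fields and the SphereExtent structure are assumptions for illustration, reusing the Vec3 helpers from the earlier sketches.

struct Ray { Vec3 start, dir; };                        // assumed ray representation
struct SphereExtent { Vec3 center; double radius; };

// true if the ray start + t * dir could possibly hit the extent,
// i.e. the quadratic for a ray-sphere intersection has real roots
bool rayHitsExtent(const Ray &r, const SphereExtent &e) {
    Vec3 diff = add(r.start, scale(-1.0, e.center));    // start - center
    double a = dot(r.dir, r.dir);
    double b = 2.0 * dot(r.dir, diff);
    double c = dot(diff, diff) - e.radius * e.radius;
    return (b * b - 4.0 * a * c) >= 0.0;                 // reject the node early when negative
}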