Unit 4 - CG (Contd.)
Vertices may be specified directly in the application program or indirectly through an
instantiation of some object.
For example, a cube would typically have its faces aligned with axes of the frame, its center
at the origin, and have a side length of 1 or 2 units.
The coordinates in the corresponding function calls are in object or model coordinates. An
individual scene may comprise hundreds or even thousands of individual objects.
Note that if we do not model with predefined objects or apply any transformations before we
specify our geometry, object and world coordinates are the same.
Object and world coordinates are the natural frames for the application program.
The image that is produced depends on what the camera or viewer sees. Virtually all
graphics systems use a frame whose origin is the center of the camera's lens and whose axes
are aligned with the sides of the camera. This frame is called the camera frame or eye
frame. Because there is an affine transformation that corresponds to each change of frame,
there are 4 × 4 matrices that represent the transformation from model coordinates to world
coordinates and from world coordinates to eye coordinates.
These transformations usually are concatenated together into the model-view transformation,
which is specified by the model-view matrix. Usually, the use of the model-view matrix
instead of the individual matrices should not pose any problems for the application
programmer.
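Concatenating the two transformations amounts to a 4 × 4 matrix product. A minimal sketch in plain C, with no OpenGL calls; `mat4_mul` is an illustrative name, not an OpenGL function:

```c
#include <assert.h>

/* Multiply two 4x4 matrices stored row-major: out = a * b.
   This is how a model matrix and a view matrix are concatenated
   into a single model-view matrix. */
void mat4_mul(const float a[16], const float b[16], float out[16]) {
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++) {
            float s = 0.0f;
            for (int k = 0; k < 4; k++)
                s += a[r * 4 + k] * b[k * 4 + c];
            out[r * 4 + c] = s;
        }
}
```

Because the product is a single matrix, the pipeline applies one transformation per vertex instead of two.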
Once objects are in eye coordinates, OpenGL must check whether they lie within the view
volume.
If an object does not, it is clipped from the scene prior to rasterization. OpenGL can carry out
this process most efficiently if it first carries out a projection transformation that brings all
potentially visible objects into a cube centered at the origin in clip coordinates.
Priyanka H V
Page 1
Computer Graphics
After this transformation, vertices are still represented in homogeneous coordinates. The
division by the w component, called perspective division, yields three-dimensional
representations in normalized device coordinates.
The final transformation takes a position in normalized device coordinates and, taking into
account the viewport, creates a three-dimensional representation in window coordinates.
Window coordinates are measured in units of pixels on the display but retain depth
information. If we remove the depth coordinate, we are working with two-dimensional screen
coordinates.
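These last two stages can be sketched in plain C. This is an illustrative reconstruction, not OpenGL's actual implementation; the viewport parameters (vx, vy, vw, vh) are assumed to follow the glViewport convention:

```c
#include <assert.h>

/* Perspective division: clip coordinates (x, y, z, w) -> normalized
   device coordinates (x/w, y/w, z/w). The viewport transformation
   then maps NDC x and y from [-1, 1] to window (pixel) coordinates,
   while the depth value is retained. */
void clip_to_window(const float clip[4],
                    float vx, float vy, float vw, float vh, /* viewport */
                    float win[3]) {
    float x = clip[0] / clip[3];   /* perspective division */
    float y = clip[1] / clip[3];
    float z = clip[2] / clip[3];
    win[0] = vx + (x + 1.0f) * 0.5f * vw; /* window x in pixels */
    win[1] = vy + (y + 1.0f) * 0.5f * vh; /* window y in pixels */
    win[2] = z;                           /* depth is retained */
}
```

Dropping win[2] leaves the two-dimensional screen coordinates described above.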
From the application programmer's perspective, OpenGL starts with two frames: the camera frame and the world frame.
The model-view matrix positions the world frame relative to the camera frame.
The model-view matrix converts the homogeneous-coordinate representations of points and
vectors to their representations in the camera frame. Because the model-view matrix is part of the
state of the system, there is always a camera frame and a current world frame.
OpenGL provides matrix stacks, so model-view matrices or, equivalently, the frames, can be
stored. The camera is at the origin of its frame.
The three basis vectors correspond to the up direction of the camera, the y direction; the direction
the camera is pointing, the negative z direction; and a third orthogonal direction, x, that is placed
so that the x, y, z directions form a right-handed coordinate system.
Other frames can be obtained in which objects can be placed by performing homogeneous
coordinate transformations that define new frames relative to the camera frame.
These transformations can also be used to position the camera relative to the objects. Because
changes of frame are represented by model-view matrices that can be stored, frames can be
saved, and we can move between frames by changing the current model-view matrix.
E.g., in the default setting, the camera and world frames coincide, with the camera pointing
in the negative z direction.
In many applications, objects are defined near the origin,
such as a square centered at the origin or, perhaps, a group
of objects whose center of mass is at the origin. It is also
natural to set up the viewing conditions so that the camera
sees only those objects that are in front of it.
To form images that contain all these objects, either the
camera must be moved away from the objects or the objects
must be moved away from the camera. Equivalently, the
camera frame is moved relative to the world frame.
If the camera frame is regarded as fixed and the model-view matrix as positioning the world
frame relative to the camera frame, then the model-view matrix moves a point (x, y, z) in the
world frame to the point (x, y, z − d) in the camera frame. Thus, by making d a suitably large
positive number, the objects are moved in front of the camera by moving the world frame
relative to the camera frame (figure (b)).
The model-view matrix takes care of the relative positioning of the frames.
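The model-view matrix described here is the translation T(0, 0, −d); a minimal C sketch (illustrative, not an OpenGL call) applies it to a homogeneous point:

```c
#include <assert.h>

/* Apply the model-view translation
       | 1 0 0  0 |
   T = | 0 1 0  0 |
       | 0 0 1 -d |
       | 0 0 0  1 |
   to a homogeneous point p = (x, y, z, w). */
void move_world(float d, const float p[4], float q[4]) {
    q[0] = p[0];
    q[1] = p[1];
    q[2] = p[2] - d * p[3]; /* third row: (0, 0, 1, -d) */
    q[3] = p[3];
}
```

A point at the world origin ends up at z = −d, i.e., in front of a camera looking down the negative z axis.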
Using this strategy is almost always better than attempting to alter the positions of the objects
by changing their vertices to place them in front of the camera.
In OpenGL, a model-view matrix can be set by sending an array of 16 elements to
glLoadMatrixf. For geometric problems, a change from one frame to another can be
obtained by a sequence of geometric transformations such as rotations, translations, and
scales.
For modeling, the same pipeline approach used in 2-D is followed: objects are defined in
terms of sets of vertices.
The vertices will be passed through a number of transformations before the primitives that
they define are rasterized in the frame buffer.
The use of homogeneous coordinates not only will enable explaining this process, but also
will lead to efficient implementation techniques.
Consider the problem of drawing a rotating cube on the screen of our CRT. One frame of an
animation is shown in figure.
To generate the image, the following tasks are performed:
Modeling
Converting to the camera frame
Clipping
Projecting
Removing hidden surfaces
Rasterizing
The hardware processes the cube as an object consisting of eight vertices. But here a
surface-based model is used: a cube is regarded either as the intersection of six planes,
or as the six polygons that define its faces, called its facets.
Assume that the vertices of the cube are available through an array of vertices.
For example, as follows:
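The original listing did not survive in these notes; a typical declaration, with assumed values for a cube of side length 2 centered at the origin, might be:

```c
#include <assert.h>

typedef float GLfloat; /* stand-in so this sketch compiles without GL headers */

/* Eight corners of a cube, one common numbering; the exact values
   used in the notes are an assumption here. */
GLfloat vertices[8][3] = {
    {-1.0f, -1.0f, -1.0f}, { 1.0f, -1.0f, -1.0f},
    { 1.0f,  1.0f, -1.0f}, {-1.0f,  1.0f, -1.0f},
    {-1.0f, -1.0f,  1.0f}, { 1.0f, -1.0f,  1.0f},
    { 1.0f,  1.0f,  1.0f}, {-1.0f,  1.0f,  1.0f}
};
```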
OpenGL implements all vertices in 4-D homogeneous coordinates. Function calls using a 3-D type, such as glVertex3fv, have the values placed into 4-D form within the graphics
system.
The list of points can then be used to define the faces of the cube.
E.g., the orders (0, 3, 2, 1) and (1, 0, 3, 2) are the same for the first face because
the final vertex in a polygon definition is always linked back to the first. However, the order
(0, 1, 2, 3) is different: although it describes the same boundary, the edges of the polygon
are traversed in the reverse order from (0, 3, 2, 1), as shown in the figure.
Each polygon has two sides. Either or both of them can be displayed.
This method is also known as the right-hand rule because, if the fingers of the right hand are
oriented in the direction the vertices are traversed, the thumb points outward. In the example,
using the order (0, 3, 2, 1) rather than (0, 1, 2, 3) defines the outer side of the back of
the cube correctly.
E.g., call glBegin(GL_POLYGON) six times, each time followed by four vertices (via glVertex)
and a glEnd, or call glBegin(GL_QUADS) once, followed by all 24 vertices.
Both of the above methods work, but both fail to capture the essence of the cube's topology,
as opposed to the cube's geometry.
Considering the cube as a polyhedron, the object is composed of six faces.
The faces are quadrilaterals that meet at vertices; each vertex is shared by three faces.
Pairs of vertices define edges of the quadrilaterals; each edge is shared by two faces.
All of these statements are true regardless of the location of the vertices, i.e., regardless of
the geometry of the object.
If used in building the objects, data structures can separate the topology of the object from
its geometry.
Each vertex can be specified indirectly through its index. This data structure is shown in
figure.
Each geometric location appears only once, instead of being repeated each time it is used for
a facet. If, in an interactive application, the location of a vertex is changed, the application
needs to change that location only once, rather than searching for multiple occurrences of the
vertex.
The vertex list structure is used in this example and can be expanded later if necessary.
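The separation can be sketched in C: geometry in one array, topology in another, with a small check of the shared-vertex property. The arrays and the helper function are illustrative:

```c
#include <assert.h>

typedef float vec3[3];

/* Geometry: each location stored exactly once. */
vec3 verts[8] = {
    {-1,-1,-1}, {1,-1,-1}, {1,1,-1}, {-1,1,-1},
    {-1,-1, 1}, {1,-1, 1}, {1,1, 1}, {-1,1, 1}
};

/* Topology: six quadrilateral faces, each listing four vertex
   indices. Moving a vertex means changing one entry of verts,
   not searching the face list. */
int faces[6][4] = {
    {0,3,2,1}, {2,3,7,6}, {0,4,7,3},
    {1,2,6,5}, {4,5,6,7}, {0,1,5,4}
};

/* How many faces share a given vertex? For a cube this is 3. */
int faces_sharing_vertex(int v) {
    int n = 0;
    for (int f = 0; f < 6; f++)
        for (int i = 0; i < 4; i++)
            if (faces[f][i] == v) n++;
    return n;
}
```

The topological fact "each vertex is shared by three faces" holds no matter what coordinates are stored in verts.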
4.5.4 The Color Cube
Colors of the color solid (black, white, red, green, blue, cyan, magenta, yellow) are assigned
to the vertices.
A quad function is used to draw quadrilateral polygons specified by pointers into the vertex list.
The color cube specifies the six faces, taking care to make them all outward-facing.
void colorcube()
{
    quad(0, 3, 2, 1);
    quad(2, 3, 7, 6);
    quad(0, 4, 7, 3);
    quad(1, 2, 6, 5);
    quad(4, 5, 6, 7);
    quad(0, 1, 5, 4);
}
Note: Include void main and void display
4.5.5 Bilinear Interpolation
It is one of the methods the graphics system adopts to assign colors to points inside the
polygon (i.e. interpolate), using the specified color information.
Consider the polygon in the figure. The colors C0, C1, C2, and C3 are the
ones assigned to the vertices in the application program.
C01(α) = (1 − α)C0 + αC1
C23(α) = (1 − α)C2 + αC3
As α goes from 0 to 1, colors C01(α) and C23(α) are generated along these two edges.
For a given value of α, two colors, C4 and C5, are obtained on these edges.
Colors along the line connecting the two points on the edges corresponding to C4 and C5 can
be interpolated as below:
C45(β) = (1 − β)C4 + βC5
For a flat quadrilateral, each color generated by this method corresponds to a point on the
polygon.
If the four vertices are not all in the same plane, then, although a color is generated, its
location on a surface is not clearly defined.
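A minimal C sketch of this two-step interpolation for a single color channel; the function name and parameter names are illustrative:

```c
#include <assert.h>
#include <math.h>

/* Bilinear interpolation of one color channel over a quad with
   corner values c0..c3: interpolate along two opposite edges
   with alpha, then between the two edge colors with beta. */
float bilerp(float c0, float c1, float c2, float c3,
             float alpha, float beta) {
    float c01 = (1 - alpha) * c0 + alpha * c1; /* C01(alpha) on edge 0-1 */
    float c23 = (1 - alpha) * c2 + alpha * c3; /* C23(alpha) on edge 2-3 */
    return (1 - beta) * c01 + beta * c23;      /* C45(beta) across quad */
}
```

Running this per channel (R, G, B) for each interior point fills the polygon with smoothly varying color.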
A related algorithm, scan-line interpolation, avoids the flatness issue and can be made part
of the scan-conversion process. A polygon is filled only when it is displayed.
OpenGL provides this method, not only for colors, but also for other values that can be
assigned on a vertex-by-vertex basis.
Vertex arrays provide a method for encapsulating the information in the data structure such
that polyhedral objects can be drawn with only a few function calls.
OpenGL provides vertex arrays, a facility that extends the use of arrays in a way that avoids
most of the function calls to draw the cube.
The main idea is that the information can be stored on the client (application program) side
and accessed by a single function call. The information can be stored in a way that retains
the structuring defined earlier, such as the order in which vertices are called to draw the
cube.
OpenGL provides support for six types of arrays: vertex, color, color index, normal, texture
coordinate, and edge flag.
Using vertex arrays requires three steps: enable the arrays, specify the format and location
of the data, and render with the data. The first two steps are usually part of the
initialization phase of the program. In the pointer-setting calls, the second and third
parameters indicate that the data are floats packed in the array given by the fourth
parameter.
Syntax Description:
void glVertexPointer(GLint dim, GLenum type, GLsizei stride, GLvoid *array)
void glColorPointer(GLint dim, GLenum type, GLsizei stride, GLvoid *array)
The data are in array, dim is the dimension of the data (2, 3, or 4), type denotes how the data
are stored (GL_SHORT, GL_INT, GL_FLOAT, or GL_DOUBLE), and stride is the number
of bytes between consecutive data values (0 means that the data are packed in the array).
A new array is needed that stores the indices in the order in which they are used. The
following array contains the necessary information:
GLubyte cubeIndices[24]={0, 3, 2, 1, 2, 3, 7, 6, 0, 4, 7, 3, 1, 2, 6, 5, 4, 5, 6, 7, 0, 1, 5, 4};
Syntax Description:
void glDrawElements(GLenum mode, GLsizei n, GLenum type, void *indices)
It draws n elements of type mode, taking indices from the array indices. When GL_QUADS
is used, each successive group of four vertices determines a new quad. Thus, a single call to
glDrawElements can draw the entire cube.
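What glDrawElements does with GL_QUADS can be sketched on the CPU: walk the index array in groups of four, referencing each listed vertex. The counting stand-in below is illustrative, not OpenGL's implementation:

```c
#include <assert.h>

/* The same index data as the cubeIndices array in the text. */
unsigned char cubeIdx[24] = {0,3,2,1, 2,3,7,6, 0,4,7,3,
                             1,2,6,5, 4,5,6,7, 0,1,5,4};

/* Walk n indices in groups of four; instead of issuing GL vertex
   calls, count how often each of the 8 vertices is referenced.
   Returns the number of quads formed. */
int draw_elements_quads(const unsigned char *indices, int n,
                        int use_count[8]) {
    int quads = 0;
    for (int i = 0; i < n; i += 4) {
        for (int j = 0; j < 4; j++)
            use_count[indices[i + j]]++; /* "send" vertex indices[i+j] */
        quads++;
    }
    return quads;
}
```

The 24 indices yield six quads, and each of the eight vertex locations is referenced three times without being stored more than once.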
A transformation is a function that takes a point (or vector) and maps that point (or vector)
into another point (or vector). In homogeneous coordinates, both points and vectors are
represented as 4-D column matrices, and the transformation can be defined with a single
function
q = f(p), v = f(u),
that transforms the representations of both points and vectors in a given frame.
Advantages/ Characteristics of Affine Transformations:
When working with homogeneous coordinates, A is a 4 × 4 matrix that leaves unchanged the
fourth (w) component of a representation. A is of the form

A = | a11 a12 a13 a14 |
    | a21 a22 a23 a24 |
    | a31 a32 a33 a34 |
    |  0   0   0   1  |

The 12 values a11 through a34 can be set arbitrarily, and hence this transformation has 12
degrees of freedom. However, points and vectors have slightly different representations
in the affine space.
A vector has the representation u = (u1, u2, u3, 0)^T, while a point has the representation
p = (p1, p2, p3, 1)^T. If an arbitrary A is applied to a vector, v = Au, the zero fourth
component means that only nine elements of A (the upper-left 3 × 3 submatrix) affect the
result; thus, there are only 9 degrees of freedom in the transformation of vectors.
Consider the line P(α) = P0 + αd, where P0 is a point and d is a vector. In any frame, the line
can be expressed as p(α) = p0 + αd, where p0 and d are the representations of P0 and d in
that frame. The transformed line can be constructed by first transforming p0 and d, and then
using whatever line-generation algorithm is available for the display.
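That only the endpoint and direction need to be transformed follows from linearity: A(p0 + αd) = Ap0 + α(Ad). A small numerical check in C, using an arbitrary illustrative affine map:

```c
#include <assert.h>

/* Apply an illustrative affine map (scale x by 2, translate x by 1)
   to a homogeneous column (x, y, z, w): w = 1 for points, w = 0 for
   vectors, so vectors are unaffected by the translation. */
void affine4(const float p[4], float q[4]) {
    q[0] = 2 * p[0] + 1 * p[3];
    q[1] = p[1];
    q[2] = p[2];
    q[3] = p[3]; /* affine: fourth component unchanged */
}
```

Transforming the point p0 + 2d directly gives the same result as transforming p0 and d separately and recombining them, so a transformed line can be generated from its transformed endpoint and direction alone.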
The representations of P0 and P1 are transformed, and then the transformed line segment is
constructed. Because there are only 12 elements in M that can be selected arbitrarily, there
are 12 degrees of freedom in the affine transformation of a line or line segment.
Rotation is more difficult to specify than translation because more parameters are involved.
E.g. Rotating a point about the origin in a 2-D plane, as shown below.
A 2-D point at (x, y) in this frame is rotated about the origin by an angle θ to
the position (x′, y′). Writing the original point in polar form, x = ρ cos φ and
y = ρ sin φ, the rotated point is x′ = ρ cos(θ + φ), y′ = ρ sin(θ + φ).
Expanding these terms using the trigonometric identities for the sine and cosine of the sum of
two angles:
x′ = ρ cos φ cos θ − ρ sin φ sin θ = x cos θ − y sin θ
y′ = ρ cos φ sin θ + ρ sin φ cos θ = x sin θ + y cos θ
In this figure, the positive z axis comes out of the page. The positive direction of rotation is
defined as counterclockwise when looking down the positive z axis toward the origin. This
definition is used to define positive rotations about other axes.
2-D rotation in the plane is equivalent to 3-D rotation about the z-axis. Points in
planes of constant z all rotate in a similar manner, leaving their z values unchanged.
These observations can be used to define a general 3-D rotation that is independent of the
frame.
Three entities, shown in the figure, must be specified: a fixed point, a rotation angle, and a
line or vector about which to rotate.
For a given fixed point, there are 3 degrees of freedom: the two angles necessary to specify
the orientation of the vector, and the angle that specifies the amount of rotation about the
vector.
Rigid-body transformations: rotation and translation leave the shape and size of an object unchanged, altering only its position and orientation.
Scaling
For α > 1, the object gets longer in the specified direction; for 0 ≤ α < 1, the object gets
smaller in that direction.
Negative values of α give reflection (figure below) about the fixed point, in the scaling
direction.
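A minimal C sketch of per-axis scaling about the origin, where a negative factor produces a reflection (names illustrative):

```c
#include <assert.h>

/* Scale a point in place about the origin by independent factors
   along each axis. A factor greater than 1 stretches, a factor
   between 0 and 1 shrinks, and a negative factor reflects across
   the corresponding coordinate plane. */
void scale3(float sx, float sy, float sz, float p[3]) {
    p[0] *= sx;
    p[1] *= sy;
    p[2] *= sz;
}
```

Scaling about a fixed point other than the origin would be built from this by a translation to the origin, the scale, and the inverse translation.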
A properly chosen sequence of scaling, translations, and rotations can be combined to form any
affine transformation.