
Computer Graphics

4.4 Frames in OpenGL


The following is the usual order in which the frames occur in the pipeline:
1. Object (or model) coordinates
2. World coordinates
3. Eye (or camera) coordinates
4. Clip coordinates
5. Normalized device coordinates
6. Window (or screen) coordinates

Let's consider what happens when an application program specifies a vertex.

The vertex may be specified directly in the application program or indirectly through an
instantiation of some object.

In most applications, objects are specified with a convenient size, orientation, and location in their own frame, called the model or object frame.

For example, a cube would typically have its faces aligned with the axes of the frame, its center
at the origin, and a side length of 1 or 2 units.

The coordinates in the corresponding function calls are in object or model coordinates. An
individual scene may comprise hundreds or even thousands of individual objects.

The application program generally applies a sequence of transformations to each object to
size, orient, and position it within a frame that is appropriate for the particular application.
For example, if we were using an instance of a square for a window in an architectural
application, we would scale it to have the correct proportions and units, which would
probably be in feet or meters. The origin of application coordinates might be a location in the
center of the bottom floor of the building. This application frame is called the world frame,
and the values are in world coordinates.

Note that if we do not model with predefined objects or apply any transformations before we
specify our geometry, object and world coordinates are the same.

Object and world coordinates are the natural frames for the application program.
The image that is produced depends on what the camera or viewer sees. Virtually all
graphics systems use a frame whose origin is the center of the camera's lens and whose axes
are aligned with the sides of the camera. This frame is called the camera frame or eye
frame. Because there is an affine transformation that corresponds to each change of frame,
there are 4 x 4 matrices that represent the transformation from model coordinates to world
coordinates and from world coordinates to eye coordinates.

These transformations usually are concatenated together into the model-view transformation,
which is specified by the model-view matrix. Usually, the use of the model-view matrix
instead of the individual matrices should not pose any problems for the application
programmer.

Once objects are in eye coordinates, OpenGL must check whether they lie within the view
volume.

If an object does not, it is clipped from the scene prior to rasterization. OpenGL can carry out
this process most efficiently if it first carries out a projection transformation that brings all
potentially visible objects into a cube centered at the origin in clip coordinates.


After this transformation, vertices are still represented in homogeneous coordinates. The
division by the w component, called perspective division, yields three-dimensional
representations in normalized device coordinates.
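
As an illustration (an added sketch, not part of the original notes), perspective division amounts to the following computation on a clip-coordinate vertex:

/* Perspective division: clip coordinates (x, y, z, w) map to
   normalized device coordinates (x/w, y/w, z/w). Assumes w != 0
   for a visible, clipped vertex. */
typedef GLfloat vec4[4];

void perspective_divide(const vec4 clip, GLfloat ndc[3])
{
    ndc[0] = clip[0] / clip[3];
    ndc[1] = clip[1] / clip[3];
    ndc[2] = clip[2] / clip[3];
}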

The final transformation takes a position in normalized device coordinates and, taking into
account the viewport, creates a three-dimensional representation in window coordinates.

Window coordinates are measured in units of pixels on the display but retain depth
information. If we remove the depth coordinate, we are working with two-dimensional screen
coordinates.

From the application programmer's perspective, OpenGL starts with two frames:

The camera frame
The world frame

The model-view matrix positions the world frame relative to the camera frame.
The model-view matrix converts the homogeneous-coordinate representations of points and
vectors to their representations in the camera frame. Because the model-view matrix is part of the
state of the system, there is always a camera frame and a present world frame.
OpenGL provides matrix stacks, so model-view matrices - or, equivalently, frames - can be
stored. The camera is at the origin of its frame.
The three basis vectors correspond to the up direction of the camera, the y direction; the direction
the camera is pointing, the negative z direction; and a third orthogonal direction, x, placed
so that the x, y, z directions form a right-handed coordinate system.
Other frames, in which objects can be placed, are obtained by performing homogeneous
coordinate transformations that define new frames relative to the camera frame.
These transformations can also be used to position the camera relative to the objects. Because
changes of frame are represented by model-view matrices that can be stored, frames can be
saved, and we can move between frames, by changing the present model-view matrix.
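
For example (an added sketch, not from the original notes), the matrix stack saves and restores the present frame:

glMatrixMode(GL_MODELVIEW);
glPushMatrix( );               /* save the present model-view matrix (frame) */
glTranslatef(1.0, 0.0, 0.0);   /* define a new frame and work in it */
/* ... draw objects in the new frame ... */
glPopMatrix( );                /* restore the saved frame */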
E.g. In the default settings shown below, the camera and world frames coincide, with the camera
pointing in the negative z direction.
In many applications, objects are defined near the origin,
such as a square centered at the origin or, perhaps, a group
of objects whose center of mass is at the origin. It is also
natural to set up the viewing conditions so that the camera
sees only those objects that are in front of it.
To form images that contain all these objects, either the
camera must be moved away from the objects or the objects
must be moved away from the camera. Equivalently, the
camera frame is moved relative to the world frame.
If the camera frame is regarded as fixed and the model-view matrix as positioning the world
frame relative to the camera frame, then a model-view matrix that translates by d along the
negative z direction moves a point (x, y, z) in the world frame to the point (x, y, z - d) in the
camera frame. Thus, by making d a suitably large positive number, the objects are moved in
front of the camera by moving the world frame relative to the camera frame (figure (b)).
The model-view matrix takes care of the relative positioning of the frames.

Using this strategy is almost always better than attempting to alter the positions of the objects
by changing their vertices to place them in front of the camera.
In OpenGL, a model-view matrix can be set by sending an array of 16 elements to
glLoadMatrixf. For geometric problems, a change from one frame to another can be obtained
by a sequence of geometric transformations such as rotations, translations, and scales.
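
A minimal sketch of this idea (illustrative values; assumes the fixed-function pipeline):

GLfloat d = 5.0;             /* illustrative distance */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity( );
glTranslatef(0.0, 0.0, -d);  /* the world frame now sits at z = -d in camera coordinates */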

4.5 Modeling a Colored Cube

To model the cube, the pipeline approach used in 2-D is followed, in which objects are defined in
terms of sets of vertices.

The vertices will be passed through a number of transformations before the primitives that
they define are rasterized in the frame buffer.

The use of homogeneous coordinates not only will enable explaining this process, but also
will lead to efficient implementation techniques.

Consider the problem of drawing a rotating cube on the screen of our CRT. One frame of the
animation is shown in the figure.
To generate the image, the following tasks are performed:

Modeling
Converting to the camera frame
Clipping
Projecting
Removing hidden surfaces
Rasterizing

4.5.1 Modeling of a Cube

There are a number of ways to model a cube.

A CSG system regards it as a single primitive.

The hardware processes the cube as an object consisting of eight vertices. Here, however, a
surface-based model is used: the cube is regarded either as the intersection of six planes,
or as the six polygons that define its faces, called its facets.

Assume that the vertices of the cube are available through an array of vertices.
For example, as follows:

GLfloat vertices[8][3] = {{-1.0,-1.0,-1.0}, {1.0,-1.0,-1.0}, {1.0,1.0,-1.0}, {-1.0,1.0,-1.0}, {-1.0,-1.0,1.0}, {1.0,-1.0,1.0}, {1.0,1.0,1.0}, {-1.0,1.0,1.0}};

A more object-oriented form is adopted if a 3-D point type is defined:


typedef GLfloat point3[3];

The vertices of the cube can then be defined as

point3 vertices[8] = {{-1.0,-1.0,-1.0}, {1.0,-1.0,-1.0}, {1.0,1.0,-1.0}, {-1.0,1.0,-1.0}, {-1.0,-1.0,1.0}, {1.0,-1.0,1.0}, {1.0,1.0,1.0}, {-1.0,1.0,1.0}};



OpenGL implements all vertices in 4-D homogeneous coordinates. Function calls using a
3-D type, such as glVertex3fv, have their values placed into 4-D form within the graphics
system.

The list of points can then be used to define the faces of the cube.

E.g. One face is


glBegin(GL_POLYGON);
glVertex3fv(vertices[0]);
glVertex3fv(vertices[3]);
glVertex3fv(vertices[2]);
glVertex3fv(vertices[1]);
glEnd( );

and the other five faces can be defined similarly.


4.5.2 Inward- and Outward-Pointing Faces

The vertices of a 3-D polygon must be specified in a consistent order.

E.g. The vertex orders (0, 3, 2, 1) and (1, 0, 3, 2) are the same for the first face, because
the final vertex in a polygon definition is always linked back to the first. However, the order
(0, 1, 2, 3) is different.

Although it describes the same boundary, the edges of the polygon are traversed in the reverse
of the order (0, 3, 2, 1), as shown in the figure.

Each polygon has two sides. Either or both of them can be displayed.

A face is outward facing if the vertices are traversed in a counterclockwise order when the
face is viewed from the outside.

This method is also known as the right-hand rule because, if the fingers of the right hand are
oriented in the direction the vertices are traversed, the thumb points outward.

By specifying the order (0, 3, 2, 1) rather than (0, 1, 2, 3), the outer side of the back face of
the cube is defined correctly.
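
As an added sketch (not part of the original notes), outward facing can also be checked numerically: compute a normal from the cross product of two edges of the face; for a counterclockwise order viewed from outside, the normal points away from the cube's center.

/* Normal of the face whose first three vertices are a, b, c:
   n = (b - a) x (c - a). */
void face_normal(const point3 a, const point3 b, const point3 c, point3 n)
{
    point3 u, v;
    for (int i = 0; i < 3; i++) { u[i] = b[i] - a[i]; v[i] = c[i] - a[i]; }
    n[0] = u[1] * v[2] - u[2] * v[1];
    n[1] = u[2] * v[0] - u[0] * v[2];
    n[2] = u[0] * v[1] - u[1] * v[0];
}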

4.5.3 Data Structures for Object Representation

The cube is now described through a set of vertex specifications.

E.g. glBegin(GL_POLYGON) six times, each time followed by four vertices (via glVertex)
and a glEnd

or

glBegin(GL_QUADS) followed by 24 vertices and a glEnd.

Need for data structure:

Both of the above methods work, but both fail to capture the essence of the cube's topology,
as opposed to the cube's geometry.

The topology of a six-sided polyhedron can be described by the following statements:

Considering the cube as a polyhedron, the object - i.e. the cube - is composed of six faces.

The faces are each quadrilaterals that meet at vertices; each vertex is shared by three faces.
Pairs of vertices define edges of the quadrilaterals; each edge is shared by two faces.

All of these statements are true regardless of the location of the vertices, i.e. regardless of the
geometry of the object.


Data structures, if used in building the objects, separate the topology of the object from its
geometry.

E.g. The data specifying the locations of the vertices specify the geometry and can be stored
as a simple list or array, such as vertices[8] - the vertex list.

The top-level entity is a cube, considered as being composed of six faces.

Each face consists of four ordered vertices.

Each vertex can be specified indirectly through its index. This data structure is shown in the
figure.
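
A minimal sketch of this separation in C (illustrative; the array names are assumptions): the geometry lives in the vertex list, while the topology is a table of indices into it.

/* Geometry: one entry per vertex (the vertex list). */
point3 vertices[8];

/* Topology: each face is four ordered indices into the vertex list.
   These are the same face orders used by colorcube below. */
GLubyte faces[6][4] = {
    {0,3,2,1}, {2,3,7,6}, {0,4,7,3},
    {1,2,6,5}, {4,5,6,7}, {0,1,5,4}
};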

Advantages of this structure:

Each geometric location appears only once, instead of being repeated each time it is used for
a facet. If, in an interactive application, the location of a vertex is changed, the application
needs to change that location only once, rather than searching for multiple occurrences of the
vertex.
The vertex list structure is used in this example and can be expanded later if necessary.
4.5.4 The Color Cube

The vertex list is used to define a color cube.

Colors of the color solid (black, white, red, green, blue, cyan, magenta, yellow) are assigned
to the vertices.

A quad function is used to draw quadrilateral polygons specified by pointers into the vertex list.

The colorcube function specifies the six faces, taking care to make them all outward-facing.

GLfloat vertices[8][3] = {{-1.0,-1.0,1.0}, {-1.0,1.0,1.0}, {1.0,1.0,1.0}, {1.0,-1.0,1.0},
{-1.0,-1.0,-1.0}, {1.0,-1.0,-1.0}, {1.0,1.0,-1.0}, {-1.0,1.0,-1.0}};

GLfloat colors[8][3] = {{0.0,0.0,0.0}, {1.0,0.0,0.0}, {1.0,1.0,0.0}, {0.0,1.0,0.0},
{0.0,0.0,1.0}, {1.0,0.0,1.0}, {1.0,1.0,1.0}, {0.0,1.0,1.0}};

void quad(int a, int b, int c, int d)
{
glBegin(GL_QUADS);
glColor3fv(colors[a]);
glVertex3fv(vertices[a]);
glColor3fv(colors[b]);
glVertex3fv(vertices[b]);
glColor3fv(colors[c]);
glVertex3fv(vertices[c]);
glColor3fv(colors[d]);
glVertex3fv(vertices[d]);
glEnd( );
}
void colorcube( )
{
quad(0,3,2,1);
quad(2,3,7,6);
quad(0,4,7,3);
quad(1,2,6,5);
quad(4,5,6,7);
quad(0,1,5,4);
}
Note: a complete program also needs main and display functions; see the sketch below.
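A minimal sketch of these two functions, assuming GLUT is used for windowing (illustrative, not part of the original notes):

void display( )
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
colorcube( );
glFlush( );
}

int main(int argc, char **argv)
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH);
glutCreateWindow("colorcube");
glutDisplayFunc(display);
glEnable(GL_DEPTH_TEST);   /* hidden-surface removal */
glutMainLoop( );
}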
4.5.5 Bilinear Interpolation

Bilinear interpolation is one of the methods the graphics system adopts to assign colors to
points inside a polygon (i.e. to interpolate), using the specified color information.

Consider the polygon in the figure. The colors C0, C1, C2, and C3 are the ones assigned to the
vertices in the application program.

Linear interpolation can be used to interpolate colors along the edges between vertices 0 and 1,
and between vertices 2 and 3, by using

C01(α) = (1 - α)C0 + αC1
C23(α) = (1 - α)C2 + αC3

As α goes from 0 to 1, the colors C01(α) and C23(α) are generated along these two edges.

For a given value of α, two colors, C4 and C5, are obtained on these edges.

Colors along the line connecting the two points on the edges corresponding to C4 and C5 can
be interpolated with a second parameter β as

C45(β) = (1 - β)C4 + βC5

For a flat quadrilateral, each color generated by this method corresponds to a point on the
polygon.

If the four vertices are not all in the same plane, then, although a color is generated, its
location on a surface is not clearly defined.
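
A minimal sketch of this computation in C (the function name and parameterization are illustrative assumptions):

/* Bilinearly interpolate the vertex colors c0..c3 at parameters
   alpha (along the 0-1 and 2-3 edges) and beta (between the edges). */
void bilerp(const GLfloat c0[3], const GLfloat c1[3],
            const GLfloat c2[3], const GLfloat c3[3],
            GLfloat alpha, GLfloat beta, GLfloat out[3])
{
    for (int i = 0; i < 3; i++) {
        GLfloat c01 = (1.0f - alpha) * c0[i] + alpha * c1[i]; /* edge 0-1 */
        GLfloat c23 = (1.0f - alpha) * c2[i] + alpha * c3[i]; /* edge 2-3 */
        out[i] = (1.0f - beta) * c01 + beta * c23;            /* between the edges */
    }
}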

A related algorithm, scan-line interpolation, avoids the flatness issue and can be made part
of the scan-conversion process, since a polygon is filled only when it is displayed.

If we wait until rasterization, the polygon is first projected onto the 2-D plane, as in the figure.

If a quadrilateral is filled scan line by scan line, as shown below, then colors are assigned
scan line by scan line on the basis of only two edges.

OpenGL provides this method, not only for colors, but also for other values that can be
assigned on a vertex-by-vertex basis.

4.5.6 Vertex Arrays

Vertex arrays provide a method for encapsulating the information in the data structure such
that polyhedral objects can be drawn with only a few function calls.

OpenGL provides vertex arrays, a facility that extends the use of arrays in a way that avoids
most of the function calls to draw the cube.

The main idea is that the information can be stored in arrays on the client - the application
program - and accessed by a single function call.

The information can be stored in a way that retains the structure defined earlier, such as the
order in which vertices are called to draw the cube.


OpenGL provides support for six types of arrays: vertex, color, color index, normal, texture
coordinate, and edge flag.

Using vertex arrays requires the following three steps. The first two steps are usually part of
the initialization phase of the program.

1. Enabling their functionality:


In this example, only color and vertex arrays are used and they are enabled by
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
Syntax Description:
void glEnableClientState(GLenum array)
void glDisableClientState(GLenum array)

Enables and disables arrays of types GL_VERTEX_ARRAY, GL_COLOR_ARRAY,


GL_INDEX_ARRAY, GL_NORMAL_ARRAY, GL_TEXTURE_COORD_ARRAY, or
GL_EDGE_FLAG_ARRAY.

2. Specifying the format of the arrays.


The form of the arrays is given by
glVertexPointer(3, GL_FLOAT, 0, vertices);
glColorPointer(3, GL_FLOAT, 0, colors);

The first value (3) denotes three-dimensional data.

The second and third parameters indicate that the data are floats packed in the array given by
the fourth parameter.

Syntax Description:
void glVertexPointer(GLint dim, GLenum type, GLsizei stride, GLvoid *array)
void glColorPointer(GLint dim, GLenum type, GLsizei stride, GLvoid *array)

Provides the information on arrays.

The data are in array, dim is the dimension of the data (2, 3, or 4), type denotes how the data
are stored (GL_SHORT, GL_INT, GL_FLOAT, or GL_DOUBLE), and stride is the number
of bytes between consecutive data values (0 means that the data are packed in the array).

3. Using the arrays to render the scene.

A new array is needed that stores the indices in the order in which they are used. The
following array contains the necessary information:
GLubyte cubeIndices[24]={0, 3, 2, 1, 2, 3, 7, 6, 0, 4, 7, 3, 1, 2, 6, 5, 4, 5, 6, 7, 0, 1, 5, 4};

The cube can be drawn through the function glDrawElements( ).

Syntax Description:
void glDrawElements(GLenum mode, GLsizei n, GLenum type, void *indices)

It draws elements of type mode using the n indices in the array indices.

The indices are of type GL_UNSIGNED_BYTE, GL_UNSIGNED_SHORT, or GL_UNSIGNED_INT.

If each face needs to be rendered individually, the following loop can be used:

for(int i = 0; i < 6; i++)
glDrawElements(GL_POLYGON, 4, GL_UNSIGNED_BYTE, &cubeIndices[4*i]);

If GL_QUADS is used instead, each successive group of four vertices determines a new quad,
and a single function call suffices:

glDrawElements(GL_QUADS, 24, GL_UNSIGNED_BYTE, cubeIndices);
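
Putting the three steps together, a minimal sketch (illustrative; assumes the vertices, colors, and cubeIndices arrays above):

void init( )
{
glEnableClientState(GL_COLOR_ARRAY);        /* step 1: enable the arrays */
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);  /* step 2: specify their formats */
glColorPointer(3, GL_FLOAT, 0, colors);
}

void display( )
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDrawElements(GL_QUADS, 24, GL_UNSIGNED_BYTE, cubeIndices);  /* step 3: render */
glFlush( );
}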
4.6 Affine Transformations

A transformation is a function that takes a point (or vector) and maps that point (or vector)
into another point (or vector).

Such a function is shown in the figure.

In the functional form,

Q = T(P) for points, or

v = R(u) for vectors.

Using homogeneous coordinates, both vectors and points can be

represented as 4-D column matrices, and the transformation can be defined with a single
function
q = f(p), v = f(u),
that transforms the representations of both points and vectors in a given frame.
Advantages/ Characteristics of Affine Transformations:

When working with homogeneous coordinates, A is a 4 x 4 matrix that leaves unchanged the
fourth (w) component of a representation.

A is of the form

A = | a11 a12 a13 a14 |
    | a21 a22 a23 a24 |
    | a31 a32 a33 a34 |
    |  0   0   0   1  |

The 12 values a11 through a34 can be set arbitrarily, and hence this transformation can be said
to have 12 degrees of freedom. However, points and vectors have slightly different
representations in the affine space:

Vector representation: u = (u1, u2, u3, 0)^T
Point representation: p = (p1, p2, p3, 1)^T

If an arbitrary A is applied to a vector, v = Au, then because the fourth component of a vector
is 0, the fourth column of A has no effect: only nine of the elements of A affect u, and thus
there are only 9 degrees of freedom in the transformation of vectors.

Affine transformations of points have the full 12 degrees of freedom.

Affine transformations preserve lines.

Consider the line P(α) = P0 + αd, where P0 is a point and d is a vector. In any frame, the line
can be expressed as

p(α) = p0 + αd,

where p0 and d are the representations of P0 and d in that frame.

For any affine transformation matrix A,

Ap(α) = Ap0 + αAd.

The transformed line can be constructed by first transforming p0 and d, and then using
whatever line-generation algorithm is preferred for the display.

Consider the 2-point form of the line,

p(α) = αP0 + (1 - α)P1;

a similar result holds. The representations of P0 and P1 are transformed, and then the
transformed line is constructed. Because there are only 12 elements in A that can be selected
arbitrarily, there are 12 degrees of freedom in the affine transformation of a line or line segment.

Types of Affine Transformations in Computer Graphics:


Translation
Rotation, and
Scaling.
With slight modifications, these results can also be used to describe the standard parallel and
perspective projections.
4.7 Translation, Rotation and Scaling
Translation

Translation is an operation that displaces points by a fixed distance in a given direction.

To specify a translation, only a displacement vector d is needed, because the transformed
points are given by P' = P + d for all points P on the object.

Translation has 3 degrees of freedom, because the three components of the displacement
vector can be specified arbitrarily.
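
In homogeneous coordinates, translation is represented by a 4 x 4 matrix with the components of d in the fourth column. A minimal added sketch (illustrative), building this matrix in the column-major order that glLoadMatrixf expects:

/* Translation matrix T(dx, dy, dz) in column-major order:
   the displacement occupies elements 12-14. */
void translation_matrix(GLfloat dx, GLfloat dy, GLfloat dz, GLfloat m[16])
{
    for (int i = 0; i < 16; i++) m[i] = (i % 5 == 0) ? 1.0f : 0.0f; /* identity */
    m[12] = dx;  m[13] = dy;  m[14] = dz;
}

glTranslatef(dx, dy, dz) builds and concatenates the same matrix.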


Rotation

Rotation is more difficult to specify than translation, because more parameters are involved.

E.g. Rotating a point about the origin in a 2-D plane, as shown below.

Having specified a particular point - the origin - we have a particular frame.

A 2-D point at (x, y) in this frame is rotated about the origin by an angle θ to the position
(x', y').

The standard equations describing this rotation can be obtained by representing (x, y) and
(x', y') in polar form:


x = ρ cos φ
y = ρ sin φ
x' = ρ cos(θ + φ)
y' = ρ sin(θ + φ)

Expanding these terms using the trigonometric identities for the sine and cosine of the sum of
two angles,

x' = ρ cos θ cos φ - ρ sin θ sin φ = x cos θ - y sin θ
y' = ρ cos θ sin φ + ρ sin θ cos φ = x sin θ + y cos θ

These equations can be written in matrix form as

| x' |   | cos θ   -sin θ | | x |
| y' | = | sin θ    cos θ | | y |

This form can be expanded to 3-D later.

Note three features of this transformation that extend to other rotations:

There is one point - the origin, in this case - that is unchanged by the rotation. This
point is called the fixed point of the transformation.
Knowing that the 2-D plane is part of 3-D space, this rotation can be reinterpreted in
3-D. In a right-handed system, when the x and y axes are drawn in the standard way,
the positive z axis comes out of the page. The positive direction of rotation is defined
counterclockwise when looking down the positive z axis toward the origin. This
definition is used to define positive rotations about other axes.
2-D rotation in the plane is equivalent to 3-D rotation about the z axis. Points in
planes of constant z all rotate in a similar manner, leaving their z values unchanged.
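
As an added sketch (not from the original notes; assumes <math.h> for cosf and sinf), the corresponding 4 x 4 homogeneous matrix for rotation about the z axis, in the column-major layout that glLoadMatrixf expects:

/* Rotation by theta radians about the z axis, column-major order. */
void rotation_z(GLfloat theta, GLfloat m[16])
{
    GLfloat c = cosf(theta), s = sinf(theta);
    for (int i = 0; i < 16; i++) m[i] = (i % 5 == 0) ? 1.0f : 0.0f; /* identity */
    m[0] =  c;  m[1] = s;   /* first column:  (cos θ, sin θ, 0, 0)  */
    m[4] = -s;  m[5] = c;   /* second column: (-sin θ, cos θ, 0, 0) */
}

glRotatef(angle, 0.0, 0.0, 1.0), with the angle in degrees, builds and applies the same matrix.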

These observations can be used to define a general 3-D rotation that is independent of the
frame.
The three entities, shown in the figure, that must be specified are:

A fixed point (Pf)
A rotation angle (θ), and
A line or vector about which to rotate.

For a given fixed point, there are 3 degrees of freedom: the two angles necessary to specify
the orientation of the vector, and the angle that specifies the amount of rotation about the
vector.

Rigid-body transformations:

Rotation and translation are known as rigid-body transformations. No combination of
rotations and translations can alter the shape of an object; they can alter only the object's
location and orientation. Consequently, rotation and translation alone cannot give all possible
affine transformations.

Scaling

Scaling is an affine non-rigid-body transformation.

Scaling can make an object bigger or smaller, as shown in the figure below, which illustrates
both uniform scaling in all directions and non-uniform scaling in a single direction.

Non-uniform scaling is needed to build up the full set of affine transformations that are used
in modeling and viewing.

Scaling transformations have a fixed point, as shown below.

To specify a scaling, the fixed point, a direction in which scaling is needed, and a scale
factor (α) must be given.

For α > 1, the object gets longer in the specified direction; for 0 ≤ α < 1, the object gets
smaller in that direction.

Negative values of α give reflection (figure below) about the fixed point, in the scaling
direction.

A properly chosen sequence of scalings, translations, and rotations can be combined to form any
affine transformation.
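
A final added sketch (illustrative, not from the original notes): the homogeneous scaling matrix with its fixed point at the origin, in the same column-major layout:

/* Scaling by factors bx, by, bz along the axes; fixed point at the origin. */
void scaling_matrix(GLfloat bx, GLfloat by, GLfloat bz, GLfloat m[16])
{
    for (int i = 0; i < 16; i++) m[i] = 0.0f;
    m[0] = bx;  m[5] = by;  m[10] = bz;  m[15] = 1.0f;
}

For a fixed point pf other than the origin, the scaling matrix is the concatenation T(pf) S T(-pf): translate the fixed point to the origin, scale, and translate back.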