Computer Graphics UNIT V

Unit V
Visibility, Image and Object Precision, Z-Buffer Algorithm, Floating Horizons - Computer
Animations, Design of Animation Sequences, General Computer-Animation Functions, Raster
Animations, Key-Frame Systems, Morphing, Motion Specifications

When we view a picture containing non-transparent objects and surfaces, we cannot see those
objects that lie behind other objects closer to the eye. We must remove these hidden surfaces to
get a realistic screen image. The identification and removal of these surfaces is called the
hidden-surface problem.

There are two approaches for solving the hidden-surface problem –

Object-space methods
and
Image-space methods.

Object-space methods are implemented in the physical (world) coordinate system, and image-space
methods are implemented in the screen coordinate system.

When we want to display a 3D object on a 2D screen, we need to identify those parts of the scene
that are visible from a chosen viewing position.

Depth Buffer Method

This method was developed by Catmull. It is an image-space approach. The basic idea is to test the
z-depth of each surface to determine the closest visible surface.

In this method, each surface is processed separately, one pixel position at a time across the
surface. The depth values at a pixel are compared, and the closest surface (the one with the
largest normalized z, in the convention below) determines the color to be displayed in the frame buffer.

It is applied very efficiently to polygon surfaces, and the surfaces can be processed in any order.
To let closer polygons override the farther ones, two buffers, named the frame buffer and the depth
buffer, are used.

The depth buffer is used to store a depth value for each (x, y) position as surfaces are processed, with

0 ≤ depth ≤ 1.

The frame buffer is used to store the color (intensity) value at each (x, y) position.
The z-coordinates are usually normalized to the range [0, 1]: the value 0 for the z-coordinate
indicates the back clipping plane, and the value 1 indicates the front clipping plane.


Algorithm

Step 1 − Set the buffer values:

    depthbuffer(x, y) = 0
    framebuffer(x, y) = background color

Step 2 − Process each polygon, one at a time:

    For each projected (x, y) pixel position of the polygon, calculate the depth z.
    If z > depthbuffer(x, y):
        compute the surface color,
        set depthbuffer(x, y) = z,
        framebuffer(x, y) = surfacecolor(x, y)
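
A minimal sketch of this algorithm in Python (assuming, purely for illustration, that each polygon
is supplied as a pre-rasterized list of (x, y, z, color) pixel samples; a real implementation would
rasterize the polygon and interpolate z itself):

    def zbuffer_render(width, height, polygons, background=(0, 0, 0)):
        # Step 1: depth buffer starts at 0 (the back plane), frame buffer at background
        depthbuffer = [[0.0] * width for _ in range(height)]
        framebuffer = [[background] * width for _ in range(height)]
        # Step 2: process each polygon, one pixel position at a time
        for polygon in polygons:
            for x, y, z, color in polygon:
                # larger z is closer under the 0 = back, 1 = front normalization
                if z > depthbuffer[y][x]:
                    depthbuffer[y][x] = z
                    framebuffer[y][x] = color
        return framebuffer

Because each pixel is resolved independently, the surfaces really can be processed in any order, as noted above.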

Advantages
It is easy to implement.
It reduces the speed problem if implemented in hardware.
It processes one object at a time.

Disadvantages
It requires a large amount of memory.
It is a time-consuming process.

Scan-Line Method

It is an image-space method for identifying visible surfaces. This method keeps depth information for
only a single scan-line. Since only one scan-line of depth values is required at a time, we must group
and process all polygons intersecting a given scan-line before processing the next
scan-line. Two important tables, the edge table and the polygon table, are maintained for this.
The Edge Table − It contains the coordinate endpoints of each line in the scene, the inverse slope of
each line, and pointers into the polygon table to connect edges to surfaces.
The Polygon Table − It contains the plane coefficients, surface material properties, other surface
data, and possibly pointers to the edge table.


To facilitate the search for surfaces crossing a given scan-line, an active list of edges is formed.
The active list stores only those edges that cross the scan-line, in order of increasing x. A
flag is also set for each surface to indicate whether a position along a scan-line is inside or
outside the surface. Pixel positions across each scan-line are processed from left to right. At the
left intersection with a surface, the surface flag is turned on; at the right intersection, it is turned off.
Depth calculations are only needed when multiple surfaces have their flags turned on at
a given scan-line position.
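
As a small illustration of the depth calculation this method relies on, the depth of a surface at
pixel (x, y) can be evaluated from its plane coefficients A, B, C, D stored in the polygon table
(a sketch, assuming C ≠ 0):

    def depth_at(x, y, plane):
        # Plane equation A*x + B*y + C*z + D = 0, solved for z.
        A, B, C, D = plane
        return (-A * x - B * y - D) / C

Scan-line coherence makes this cheap: moving from x to x + 1 along the same scan-line changes the
depth by the constant -A/C, so a single addition per pixel suffices once the first depth is known.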
Area-Subdivision Method
The area-subdivision method takes advantage of area coherence by locating those view areas that
represent part of a single surface. We divide the total viewing area into smaller and smaller
rectangles until each small area is the projection of part of a single visible surface or of no surface at all.
Continue this process until the subdivisions are easily analyzed as belonging to a single surface
or until they are reduced to the size of a single pixel. An easy way to do this is to successively
divide the area into four equal parts at each step. There are four possible relationships that a
surface can have with a specified area boundary.

Surrounding surface − One that completely encloses the area.


Overlapping surface − One that is partly inside and partly outside the area.
Inside surface − One that is completely inside the area.
Outside surface − One that is completely outside the area.


The tests for determining surface visibility within an area can be stated in terms of these four
classifications. No further subdivisions of a specified area are needed if one of the following
conditions is true −
All surfaces are outside surfaces with respect to the area.
Only one inside, overlapping, or surrounding surface is in the area.
A surrounding surface obscures all other surfaces within the area boundaries.
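
A minimal sketch of the recursive subdivision in Python (classify, obscures_all, render, and
render_closest are hypothetical helpers standing in for the classification and rendering machinery;
area.split_into_four is likewise assumed):

    def subdivide(area, surfaces, min_size=1):
        # classify(s, area) returns 'outside', 'inside', 'overlapping', or 'surrounding'
        relevant = [s for s in surfaces if classify(s, area) != 'outside']
        if not relevant:
            return                      # condition 1: all surfaces are outside
        if len(relevant) == 1 or obscures_all(relevant, area):
            render(relevant, area)      # conditions 2 and 3: area is simple enough
            return
        if area.width <= min_size and area.height <= min_size:
            render_closest(relevant, area)  # pixel-sized area: resolve by depth
            return
        for quadrant in area.split_into_four():
            subdivide(quadrant, relevant, min_size)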
Back-Face Detection
A fast and simple object-space method for identifying the back faces of a polyhedron is based on
the "inside-outside" test. A point (x, y, z) is "inside" a polygon surface with plane parameters A,
B, C, and D if Ax + By + Cz + D < 0. When an inside point is along the line of sight to the surface,
the polygon must be a back face. We can simplify this test by considering the normal vector N to the
polygon surface, which has Cartesian components (A, B, C).
In general, if V is a vector in the viewing direction from the eye (or "camera") position, then this
polygon is a back face if

V · N > 0
Furthermore, if object descriptions are converted to projection coordinates and the viewing
direction is parallel to the viewing z-axis, then

V = (0, 0, Vz) and V · N = Vz C

so we only need to consider the sign of C, the z component of the normal vector N.
In a right-handed viewing system with the viewing direction along the negative zv axis, the polygon
is a back face if C < 0. Also, we cannot see any face whose normal has z component C = 0, since
the viewing direction grazes that polygon. Thus, in general, we can label any polygon as a
back face if its normal vector has a z-component value

C ≤ 0

Similar methods can be used in packages that employ a left-handed viewing system. In these
packages, the plane parameters A, B, C, and D can be calculated from polygon vertex coordinates
specified in a clockwise direction (unlike the counterclockwise direction used in a right-handed
system).


Also, back faces have normal vectors that point away from the viewing position and are
identified by C ≥ 0 when the viewing direction is along the positive zv axis. By examining
parameter C for the different planes defining an object, we can immediately identify all the back faces.
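
A minimal sketch of the test in Python (assuming a right-handed system with the viewing direction
along the z-axis, so only the sign of C matters):

    def plane_normal(v0, v1, v2):
        # Normal N = (A, B, C) from three counterclockwise vertices,
        # via the cross product (v1 - v0) x (v2 - v0).
        ax, ay, az = (v1[i] - v0[i] for i in range(3))
        bx, by, bz = (v2[i] - v0[i] for i in range(3))
        return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

    def is_back_face(v0, v1, v2):
        # Back face if the normal's z component C <= 0 (viewing along -z).
        _, _, C = plane_normal(v0, v1, v2)
        return C <= 0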

A-Buffer Method
The A-buffer method is an extension of the depth-buffer method. It is a visibility-detection
method developed at Lucasfilm for the rendering system REYES (Renders Everything You Ever Saw).
The A-buffer expands on the depth-buffer method to allow transparent surfaces. The key data structure
in the A-buffer is the accumulation buffer.

Each position in the A-buffer has two fields −


Depth field − It stores a positive or negative real number
Intensity field − It stores surface-intensity information or a pointer value


If depth >= 0, the number stored at that position is the depth of a single surface overlapping the
corresponding pixel area. The intensity field then stores the RGB components of the surface
color at that point and the percent of pixel coverage.
If depth < 0, it indicates multiple-surface contributions to the pixel intensity. The intensity field
then stores a pointer to a linked list of surface data. The surface data in the A-buffer includes −
RGB intensity components
Opacity Parameter
Depth
Percent of area coverage
Surface identifier
The algorithm proceeds just like the depth buffer algorithm. The depth and opacity values are
used to determine the final color of a pixel.
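
A minimal sketch of the per-pixel data structure in Python (the field names are illustrative only,
not from a particular implementation):

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class SurfaceData:
        rgb: Tuple[float, float, float]  # RGB intensity components
        opacity: float                   # opacity parameter (0 transparent, 1 opaque)
        depth: float                     # depth of this surface fragment
        coverage: float                  # percent of pixel area covered
        surface_id: int                  # surface identifier

    @dataclass
    class APixel:
        depth: float = 0.0               # >= 0: depth of the single covering surface
        # Used when depth < 0: the linked list of contributing surfaces,
        # represented here simply as a Python list.
        surfaces: Optional[List[SurfaceData]] = None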
Depth Sorting Method
The depth-sorting method uses both image-space and object-space operations. It performs two basic
functions −
First, the surfaces are sorted in order of decreasing depth.
Second, the surfaces are scan-converted in order, starting with the surface of greatest depth.
The scan conversion of the polygon surfaces is performed in image space. This method for
solving the hidden-surface problem is often referred to as the painter's algorithm. The
following figure shows the effect of depth sorting –


The algorithm begins by sorting by depth. For example, the initial “depth” estimate of a polygon
may be taken to be the closest z value of any vertex of the polygon.
Let us take the polygon P at the end of the list. Consider all polygons Q whose z-extents overlap
P’s.
Before drawing P, we make the following tests. If any of the following tests is positive, then we
can assume P can be drawn before Q.
Do the x-extents not overlap?
Is P entirely on the opposite side of Q’s plane from the viewpoint?
Is Q entirely on the same side of P’s plane as the viewpoint?
Do the projections of the polygons not overlap?
If all the tests fail, then we split either P or Q using the plane of the other. The new cut polygons
are inserted into the depth order and the process continues. Theoretically, this partitioning could
generate O(n²) individual polygons, but in practice the number of polygons is much smaller.
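
A minimal sketch of these tests in Python (the extent test is written out; the plane-side and
projection-overlap tests are left as hypothetical helpers, since they depend on the polygon
representation):

    def can_draw_before(P, Q, viewpoint):
        # Each polygon is assumed to carry precomputed x-extents (x_min, x_max).
        if P.x_max < Q.x_min or Q.x_max < P.x_min:
            return True   # test 1: x-extents do not overlap
        if entirely_behind_plane(P, Q, viewpoint):
            return True   # test 2: P is on the far side of Q's plane
        if entirely_on_viewer_side(Q, P, viewpoint):
            return True   # test 3: Q is on the viewer's side of P's plane
        if not projections_overlap(P, Q):
            return True   # test 4: projections do not overlap
        return False      # all tests fail: split P or Q by the other's plane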
Binary Space Partitioning Trees
Binary space partitioning is used to calculate visibility. To build a BSP tree, one should start
with polygons and label all the edges. Dealing with only one edge at a time, extend each edge so
that it splits the plane in two. Place the first edge in the tree as the root. Add subsequent edges
based on whether they are in front of or behind the partitioning edge. Edges that span the extension
of an edge already in the tree are split into two, and both parts are added to the tree.


From the above figure, first take A as the root.

Make a list of all nodes in figure (a).
Put all the nodes that are in front of root A to the left side of node A, and put all the nodes
that are behind root A to the right side, as shown in figure (b).
Process all the front nodes first and then the nodes at the back.
As shown in figure (c), we first process node B. As there is nothing in front of node B, we put
NIL. However, node C lies behind node B, so node C goes to the right side of node B.
Repeat the same process for node D.
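
A minimal sketch of this construction in Python (for 2-D edges; classify_segment and split_segment
are hypothetical helpers that report which side of the partition an edge lies on and split a
spanning edge in two):

    class BSPNode:
        def __init__(self, edge):
            self.edge = edge      # the partitioning edge stored at this node
            self.front = None     # subtree of edges in front of this edge
            self.back = None      # subtree of edges behind this edge

    def build_bsp(edges):
        if not edges:
            return None
        root, rest = edges[0], edges[1:]
        front, back = [], []
        for e in rest:
            side = classify_segment(e, root)
            if side == 'front':
                front.append(e)
            elif side == 'back':
                back.append(e)
            else:  # the edge spans the partition line: split it in two
                f_part, b_part = split_segment(e, root)
                front.append(f_part)
                back.append(b_part)
        node = BSPNode(root)
        node.front = build_bsp(front)
        node.back = build_bsp(back)
        return node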

Computer Animation
Introduction
* The term computer animation refers to any time sequence of visual changes in a scene.


* In computer-generated animation, various transformations, along with variations in object
color, transparency, or surface texture, are displayed with time variation.
* We can also produce computer animations by changing lighting effects or other parameters and
procedures associated with illumination and rendering.
* Rendering is generating an image from a model by means of a computer program.
Design of animation sequences
* An animation sequence is designed with the following steps:
> Storyboard layout
> Object definitions
> Key frame specifications
> Generation of in - between frames
Storyboard layout:
* It is the outline of the action. It defines the motion sequence as a set of basic events that are to
take place.
* Depending on the type of animation to be produced, the storyboard could consist of a set of
rough sketches or a list of the basic ideas for the motion.


Object Definition:
* Each object participating in the action is given an object definition in terms of basic shapes,
such as polygons or splines.
Frame:
* It is one of the many single photographic images in a motion picture. The individual frames are
separated by frame lines. Normally, 24 frames are needed for one second of film.
Key frame:
* A key frame in animation and filmmaking is a drawing that defines the starting and ending
points of any smooth transition.
* A sequence of key frames defines which movement the spectator will see, while the position of
the key frames on the film defines the timing of the movement. Two or three key frames may be
present for a span of a second.


In-between:
* It is the process of generating intermediate frames between two images to give the appearance
that the first image evolves smoothly into the second. In-betweens are the drawings between the
key frames which help to create the illusion of motion.
* Film requires 24 frames per second, and graphics terminals are refreshed at a rate of 30 to 60
frames per second.
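
A minimal sketch of linear in-betweening in Python (assuming each key frame is a list of (x, y)
vertex positions and the two lists have the same length):

    def in_betweens(key_a, key_b, n_frames):
        # Linearly interpolate every vertex between two key frames.
        frames = []
        for i in range(1, n_frames + 1):
            t = i / (n_frames + 1)    # interpolation parameter in (0, 1)
            frame = [(ax + t * (bx - ax), ay + t * (by - ay))
                     for (ax, ay), (bx, by) in zip(key_a, key_b)]
            frames.append(frame)
        return frames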
[Figures: frames; frames and key frames]

* Apart from the above four steps, the other tasks required are:


> Motion verification
> Editing
> Production and synchronization of sound track.
General computer animation functions
* Animation packages, such as Wavefront, provide special functions for designing the animation
and processing individual objects.
* Some steps included in the development of an animation sequence are
> Object manipulation and rendering
> Camera motions
> Generation of in-betweens
* One function available in animation packages stores and manages the object database (object
shapes and associated parameters are stored and updated in the database).
> Other object functions include:
> Object motion generation (2-D or 3-D transformations)
> Object rendering
* Another function identifies visible surfaces.


* One function is available to simulate camera movements:

> Zooming
> Panning (rotating horizontally)
> Tilting (rotating vertically)
Raster Animations
* On raster systems, we can generate real-time animation in limited applications using raster
operations, such as 2D or 3D transformations on objects.
* We can also animate objects along 2D motion paths using color-table transformations.
* Here the object is predrawn at successive positions along the motion path, each position using a
different color-table entry. The entry for the first position is set to the object color ('on'), and
the entries for the other positions are set to the background color; cycling the color-table values
then moves the object along the path.
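
A minimal sketch of color-table cycling in Python (a toy model of the idea; a real system would
update the hardware color lookup table between refreshes):

    def animate_color_table(n_positions, object_color, background, steps):
        # One color-table entry per predrawn object position.
        table = [background] * n_positions
        table[0] = object_color             # the first position starts 'on'
        for step in range(steps):
            yield list(table)               # this frame's color table
            current = step % n_positions
            nxt = (step + 1) % n_positions
            table[current] = background     # turn the old position off
            table[nxt] = object_color       # turn the next position on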
Computer Animation Languages
* A general-purpose language, such as C, LISP, Pascal, or FORTRAN, is often used to program the
animation functions.
* Animation functions include a graphics editor, a key-frame generator, an in-between generator,
and standard graphics routines.
* A graphics editor allows us to design and modify object shapes.
* A typical animation specification is the scene description, which includes the positions of
objects and light sources, camera parameters, etc.
* Another standard function is action specification, which involves the layout and motion paths of
the objects and camera.
* Key-frame systems are specialized animation languages designed simply to generate the
in-betweens. They also describe the degrees of freedom of an object.
* As an example, a robot arm can have a total of 12 degrees of freedom.
* The human body, in comparison, has over 200 degrees of freedom.


Parameterized systems
* Allow object-motion characteristics to be specified as part of the object definitions.

Key frame systems


* For complex scenes, we can separate the frames into individual components or objects called
cels. Given the animation paths, we can interpolate the positions of individual objects.
Morphing:
* Transformation of object shapes from one form to another is called morphing. Given two key
frames for an object transformation, we first adjust the object specification in one of the frames
so that the number of polygon edges (or vertices) is the same for the two frames.
* Examples of morphing appear in television advertising.
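
A minimal sketch of the vertex interpolation behind morphing in Python (assuming the two shapes
have already been adjusted to the same vertex count, as described above):

    def morph(shape_a, shape_b, t):
        # Blend two equal-length vertex lists; t = 0 gives shape_a, t = 1 gives shape_b.
        return [(ax + t * (bx - ax), ay + t * (by - ay))
                for (ax, ay), (bx, by) in zip(shape_a, shape_b)]

    # A ten-frame morphing sequence between two (hypothetical) shapes:
    # frames = [morph(square, star, i / 9) for i in range(10)]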


Motion Specifications
* There are several ways in which the motions of objects can be specified in an animation
system.
Direct Motion Specification:
* We explicitly give the rotation angles and translation vectors. The geometric transformations
are applied to transform coordinate positions.


We could also use an approximating equation to specify certain kinds of motion, like a bouncing
ball, with a sine curve. One standard form is the damped, rectified sine
y(x) = A |sin(ωx + θ)| e^(−kx).
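
A minimal sketch of this direct specification in Python (A, omega, theta, and k are free parameters
controlling amplitude, bounce frequency, phase, and damping):

    import math

    def bounce_height(x, A=1.0, omega=2.0, theta=0.0, k=0.3):
        # y(x) = A * |sin(omega*x + theta)| * e^(-k*x): bounces that decay with distance
        return A * abs(math.sin(omega * x + theta)) * math.exp(-k * x)

    # Sampling positions along the ball's path:
    # path = [(0.1 * i, bounce_height(0.1 * i)) for i in range(100)]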

Goal – Directed Systems:


* We can specify the motions that are to take place in general terms that abstractly describe the
actions.
> Example: We want an object to walk or to run to a particular destination.
> We want an object to pick up some other specified object.
Kinematics and Dynamics:
* We can construct animation sequences using kinematic or dynamic descriptions. With a kinematic
description, we specify the animation by giving motion parameters such as position, velocity, and
acceleration.
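
A minimal sketch of a kinematic update in Python (plain Euler integration of velocity and position
over one time step):

    def kinematic_step(position, velocity, acceleration, dt):
        # Advance one time step: v += a*dt, then p += v*dt.
        velocity = tuple(v + a * dt for v, a in zip(velocity, acceleration))
        position = tuple(p + v * dt for p, v in zip(position, velocity))
        return position, velocity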
Inverse Kinematics and Dynamics:


* We can specify the initial and final positions of the object, and the intermediate motion is
calculated by the computer.
