
Module - 3 (Clipping and Projections)


Window to viewport transformation. Cohen Sutherland line clipping algorithm.
Sutherland-Hodgman polygon clipping algorithm. Three-dimensional viewing
pipeline. Projections: parallel and perspective projections. Visible surface
detection algorithms: depth buffer algorithm, scan line algorithm.
Window to Viewport Transformation
Coordinate Systems

[Figure: a point (xw, yw) in the world-coordinate window mapped to (xv, yv) in the device-coordinate viewport]

World Coordinates:
• User-defined limits
• Floating point values

Device Coordinates:
• Device-dependent limits
• Positive integer values
Window to Viewport Transformation

Window: A rectangular region in the World Coordinate System.
Viewport: A rectangular region in the Device Coordinate System.

There are two coordinate systems: the World Coordinate system and the Device Coordinate system. The mapping of a world-coordinate scene to device coordinates is called the Window to Viewport Transformation, or Windowing Transformation.
Window to Viewport Transformation

Window to Viewport Transformation is the transformation from world to device coordinates.

It is actually a combination of several transformations: it basically involves translation, scaling and rotation. It also requires methods to delete those portions of the scene which lie outside the selected display area. This is called clipping.

What is a Window?

A window is a world-coordinate area selected for display, so the window defines what is to be viewed. First of all, we have to decide which portion of the real world we want to display.

What is a Viewport?

A viewport is an area on a display device onto which we map a window, so the viewport defines where it is to be viewed. We also have to decide at which portion of the display device we want to view a particular scene.
Window to Viewport Transformation

• We denote the boundaries of the world window by four real values xwmin, ywmin, xwmax, ywmax, denoting its left, bottom, right and top margins respectively.

• Similarly, the boundaries of the viewport on the screen are defined by four integer values xvmin, yvmin, xvmax, yvmax.

• When a graphics display is generated using arbitrary coordinates on the world window, the important problem encountered in viewing this display on the actual viewport on the screen is to have a function which maps every point (xw, yw) in the window to a corresponding point (xv, yv) in the viewport.
Window to Viewport Transformation
• Object descriptions are transferred to normalized device coordinates.

• We do this using a transformation that maintains the same relative placement of an object in normalized space as it had in viewing coordinates.

• If a coordinate position is at the center of the viewing window, it will be displayed at the center of the viewport.

The figure shows the window to viewport mapping: a point at position (xw, yw) in the window is mapped into position (xv, yv) in the associated viewport.
Window to Viewport Transformation
Window to Viewport Mapping

In order to maintain the same relative placement of the point in the viewport as in the window, we require:

(xv - xvmin)/(xvmax - xvmin) = (xw - xwmin)/(xwmax - xwmin)
(yv - yvmin)/(yvmax - yvmin) = (yw - ywmin)/(ywmax - ywmin)

Solving these equations for the viewport position (xv, yv), we have:

xv = xvmin + (xw - xwmin)·sx
yv = yvmin + (yw - ywmin)·sy

where the scaling factors are:

sx = (xvmax - xvmin)/(xwmax - xwmin)
sy = (yvmax - yvmin)/(ywmax - ywmin)
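A minimal sketch of this mapping in Python (the function name and argument layout are illustrative assumptions, not from the slides):

def window_to_viewport(xw, yw, window, viewport):
    """Map a world-coordinate point (xw, yw) into the viewport.
    window   = (xwmin, ywmin, xwmax, ywmax) in world coordinates
    viewport = (xvmin, yvmin, xvmax, yvmax) in device coordinates
    """
    xwmin, ywmin, xwmax, ywmax = window
    xvmin, yvmin, xvmax, yvmax = viewport
    # Scaling factors from the relative-placement equations above
    sx = (xvmax - xvmin) / (xwmax - xwmin)
    sy = (yvmax - yvmin) / (ywmax - ywmin)
    xv = xvmin + (xw - xwmin) * sx
    yv = yvmin + (yw - ywmin) * sy
    return xv, yv

For example, the centre of the window maps to the centre of the viewport, as required by the relative-placement condition.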
Window to Viewport Transformation
Matrix Representation of this transformation:

Step 1: Translate the window to the origin:
Tx = -xwmin, Ty = -ywmin

Step 2: Scale the window to match the size of the viewport:
Sx = (xvmax - xvmin)/(xwmax - xwmin)
Sy = (yvmax - yvmin)/(ywmax - ywmin)

Step 3: Translate the viewport to its correct position on the screen:
Tx = xvmin, Ty = yvmin

The above three steps can be represented in matrix form:

VT = T1 · S · T

where
T  = translation of the window to the origin
S  = scaling of the window to the viewport size
T1 = translation of the viewport to its position on the screen
Window to Viewport Transformation
Matrix Representation of this transformation:

Sx = (xvmax - xvmin)/(xwmax - xwmin)
Sy = (yvmax - yvmin)/(ywmax - ywmin)
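The composite transformation VT = T1 · S · T can be built as a product of homogeneous matrices; a sketch using NumPy (the array layout and helper name are assumptions for illustration):

import numpy as np

def viewing_matrix(window, viewport):
    """Return the 3x3 homogeneous matrix VT = T1 . S . T."""
    xwmin, ywmin, xwmax, ywmax = window
    xvmin, yvmin, xvmax, yvmax = viewport
    sx = (xvmax - xvmin) / (xwmax - xwmin)
    sy = (yvmax - yvmin) / (ywmax - ywmin)
    T  = np.array([[1, 0, -xwmin],   # Step 1: translate window corner to origin
                   [0, 1, -ywmin],
                   [0, 0, 1]], dtype=float)
    S  = np.array([[sx, 0, 0],       # Step 2: scale window to viewport size
                   [0, sy, 0],
                   [0, 0, 1]], dtype=float)
    T1 = np.array([[1, 0, xvmin],    # Step 3: translate to viewport position
                   [0, 1, yvmin],
                   [0, 0, 1]], dtype=float)
    return T1 @ S @ T

# A point (xw, yw) is then mapped with: VT @ [xw, yw, 1]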
Window to Viewport Transformation
• From normalized coordinates, object descriptions are mapped to the various display devices.

• Any number of output devices can be open in a particular application, and the window to viewport transformation can be performed for each open output device.

• This mapping is called the workstation transformation. It is accomplished by selecting a window area in normalized space and a viewport area in the coordinates of the display device.

• With the workstation transformation, we gain additional control over the positioning of parts of a scene on individual output devices.

Window to Viewport Transformation

As shown in the figure, the workstation transformation can partition a view so that different parts of normalized space can be displayed on various output devices.
Window to Viewport Transformation
Exercise:

Find the normalization transformation that maps a window whose lower-left corner is at (1,1) and upper-right corner is at (3,5) onto:

a) A viewport with lower-left corner (0,0) and upper-right corner (1,1)
b) A viewport with lower-left corner (0,0) and upper-right corner (1/2,1/2)

wxmin = 1, wymin = 1, wxmax = 3, wymax = 5

a) vxmin = 0, vymin = 0, vxmax = 1, vymax = 1
b) vxmin = 0, vymin = 0, vxmax = 1/2, vymax = 1/2
Window to Viewport Transformation

a) Viewport with lower-left corner (0,0) and upper-right corner (1,1)

Sx = (vxmax - vxmin) / (wxmax - wxmin), Sy = (vymax - vymin) / (wymax - wymin)

Substituting the values, we get:

Sx = (1-0)/(3-1) = 1/2
Sy = (1-0)/(5-1) = 1/4
Window to Viewport Transformation

b) Viewport with lower-left corner (0,0) and upper-right corner (1/2,1/2)

Sx = (vxmax - vxmin) / (wxmax - wxmin), Sy = (vymax - vymin) / (wymax - wymin)

Substituting the values, we get:

Sx = (1/2-0)/(3-1) = (1/2)/2 = 1/4
Sy = (1/2-0)/(5-1) = (1/2)/4 = 1/8
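These scale factors can be checked numerically with the mapping derived earlier; a short self-contained check for part (b) (variable names are illustrative):

# Check part (b): window (1,1)-(3,5) onto viewport (0,0)-(1/2,1/2)
xwmin, ywmin, xwmax, ywmax = 1, 1, 3, 5
xvmin, yvmin, xvmax, yvmax = 0, 0, 0.5, 0.5

sx = (xvmax - xvmin) / (xwmax - xwmin)   # 0.25
sy = (yvmax - yvmin) / (ywmax - ywmin)   # 0.125

for xw, yw in [(1, 1), (3, 5), (2, 3)]:  # window corners and window centre
    xv = xvmin + (xw - xwmin) * sx
    yv = yvmin + (yw - ywmin) * sy
    print((xw, yw), "->", (xv, yv))
# (1, 1) -> (0.0, 0.0), (3, 5) -> (0.5, 0.5), (2, 3) -> (0.25, 0.25)

The window corners land on the viewport corners and the window centre lands on the viewport centre, as expected.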
Clipping

Clipping and the Clip Window

• First of all we define a region, which we call the clip window.

• We keep all the portions or objects which are inside this window, while everything outside this window is discarded.

• We select a particular portion of a scene and then display only that part; the rest is not displayed.

• So clipping involves identifying which portion lies outside the clip window and then discarding that portion. The clip window may be rectangular, any other polygon, or even a curve.
Clipping
Applications of Clipping

• We can select a part of a scene for display, just as we use a camera to take a picture: we set the camera to focus on a particular portion and discard the things that are out of frame.

• In three-dimensional views, only some surfaces are visible and others are hidden. We can use clipping to identify visible surfaces.

• During rasterization we see the aliasing effect (zig-zag) on line segments and object boundaries. Clipping can be used in antialiasing.

• Solid-modeling procedures also use clipping.

• Clipping is used in multiwindow environments.

• With the help of the clipping operation we can select a part of a picture and then copy, move, or delete that part. Various painting and drawing applications have this feature.
Clipping
Types of Clipping

There are various types of clipping:

• Point Clipping
• Line Clipping
• Polygon Clipping
• Text Clipping
• Curve Clipping
Clipping
Point Clipping

Consider a point P(x, y). The edges of the clip window are specified by (xmin, xmax, ymin, ymax).

If the following inequalities are satisfied, the point is displayed; otherwise it is clipped:

xmin <= x <= xmax
ymin <= y <= ymax
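A one-line sketch of this test in Python (names are illustrative):

def clip_point(x, y, xmin, ymin, xmax, ymax):
    """Return True if the point (x, y) lies inside the clip window."""
    return xmin <= x <= xmax and ymin <= y <= ymax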
Clipping

Line Clipping

We specify a line by its end-points. There are three possible cases for a line that we need to consider.

• First, the line may be completely inside the window; in that case we display it completely.

• Otherwise, the line may be completely outside the window; in that case we clip (discard) the whole line.

• Thirdly, the line may be neither completely inside nor completely outside. Such a line is partially visible: it intersects the clipping boundaries.
Clipping
Cohen Sutherland Line Clipping Algorithm

It uses a four-bit binary code, called the region code, to take the clipping decision. We write the bit pattern as b4 b3 b2 b1.

Now, consider the following cases (a code sketch of the region-code computation is given just after this list):

1. If a point is inside the clip window, the region code is 0000.
2. If the point is to the left of the window, b1 is set to 1: 0001.
3. If the point is to the right of the window, b2 is set to 1: 0010.
4. If the point is below (bottom of) the window, b3 is set to 1: 0100.
5. If the point is above (top of) the window, b4 is set to 1: 1000.

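A minimal sketch of the region-code computation in Python, following the bit assignment above (the constant and function names are illustrative):

LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8   # bits b1, b2, b3, b4

def region_code(x, y, xmin, ymin, xmax, ymax):
    """Return the 4-bit Cohen Sutherland region code of point (x, y)."""
    code = 0                     # 0000: inside the clip window
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code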

Clipping
Cohen Sutherland Line Clipping Algorithm

So, basically we specify nine regions:

1. Inside the clip window: 0000
2. Above (top of) the window: 1000 (b4 is 1)
3. Below (bottom of) the window: 0100 (b3 is 1)
4. Right of the window: 0010 (b2 is 1)
5. Left of the window: 0001 (b1 is 1)
6. Top-left of the window: 1001 (b1 and b4 are 1)
7. Top-right of the window: 1010 (b2 and b4 are 1)
8. Bottom-left of the window: 0101 (b1 and b3 are 1)
9. Bottom-right of the window: 0110 (b2 and b3 are 1)
Clipping
Cohen Sutherland Line Clipping Algorithm
Steps of the Algorithm

In this algorithm, we first find the region codes of the end-points of the given lines, and then check each line.

Step 1: If the region codes of both end-points of a line are 0000, the line is completely inside: display the line completely and exit.

Step 2: Otherwise, calculate the logical AND of the region codes of the end-points of the line. Two cases are possible:

a. If the result is non-zero, the line is completely outside: reject (clip) the whole line and exit.

b. If the result is zero, the line is partially visible, so calculate the intersection points of the line segment with the clipping boundaries.
Clipping
Cohen Sutherland Line Clipping Algorithm
How to calculate intersection points

We specify the clipping boundary by (xmin, xmax, ymin, ymax). The line end-points are (x1, y1) and (x2, y2), and the slope is m = (y2 - y1)/(x2 - x1). The intersection with each clipping boundary is:

Left vertical clipping boundary (x = xmin):     y = y1 + (y2 - y1)/(x2 - x1) · (xmin - x1)
Right vertical clipping boundary (x = xmax):    y = y1 + (y2 - y1)/(x2 - x1) · (xmax - x1)
Top horizontal clipping boundary (y = ymax):    x = x1 + (x2 - x1)/(y2 - y1) · (ymax - y1)
Bottom horizontal clipping boundary (y = ymin): x = x1 + (x2 - x1)/(y2 - y1) · (ymin - y1)

Derivation: from m = (y - y1)/(x - x1),
for a vertical boundary, y - y1 = m(x - x1), so y = y1 + m(x - x1);
for a horizontal boundary, m(x - x1) = y - y1, so x = x1 + (1/m)(y - y1).
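A sketch of how an end-point outside the window is moved onto the boundary it violates, using these formulas (the bit constants repeat the earlier region-code sketch so this runs on its own; it assumes the segment is not parallel to the boundary being tested):

LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

def clip_to_boundary(x1, y1, x2, y2, code, xmin, ymin, xmax, ymax):
    """Return the intersection of segment (x1,y1)-(x2,y2) with the window
    edge indicated by the nonzero region code of the outside end-point."""
    if code & LEFT:                                   # boundary x = xmin
        y = y1 + (y2 - y1) / (x2 - x1) * (xmin - x1)
        x = xmin
    elif code & RIGHT:                                # boundary x = xmax
        y = y1 + (y2 - y1) / (x2 - x1) * (xmax - x1)
        x = xmax
    elif code & BOTTOM:                               # boundary y = ymin
        x = x1 + (x2 - x1) / (y2 - y1) * (ymin - y1)
        y = ymin
    else:                                             # TOP: boundary y = ymax
        x = x1 + (x2 - x1) / (y2 - y1) * (ymax - y1)
        y = ymax
    return x, y

# For the first worked example below, end-point (40, 15) has code 0001 (left)
# and maps to (50, 23.57...); end-point (75, 45) has code 1000 (top) and maps
# to (69.16..., 40), matching the stated answer.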
Clipping
Cohen Sutherland Line Clipping Algorithm

Apply the Cohen Sutherland line clipping algorithm to clip the line segment with coordinates (40,15) and (75,45) against the window with (Xmin, Ymin) = (50,10) and (Xmax, Ymax) = (80,40).

Answer: Clip away the portions of the line outside the window. The intersection points are I1 = (50, 23.57) and I2 = (69.2, 40); display the line between I1 and I2.

Apply the Cohen Sutherland line clipping algorithm to clip the line segment with coordinates (60, 60) and (90, 20) against the window with (Xmin, Ymin) = (40,15) and (Xmax, Ymax) = (85,50).

Answer: Clip away the portions of the line outside the window. The intersection points are I1 = (67.5, 50) and I2 = (85, 26.7); display the line between I1 and I2.
Polygon Clipping
• Sutherland-Hodgman algorithm (a divide-and-conquer strategy)
• The polygon is clipped against each edge of the window, one edge at a time. Edge intersections, if any, are easy to find since the x or y coordinate of the clip boundary is already known.
• Vertices which are kept after clipping against one window edge are saved for clipping against the remaining edges.
• Note that the number of vertices usually changes and will often increase.

Clipping a Polygon Step by Step

[Figure: a polygon clipped successively against the right, bottom, top, and left clip boundaries]
Sutherland-Hodgman Algorithm

• Given a polygon with n vertices, v1, v2, …, vn, the algorithm clips the polygon against a single, infinite clip edge and outputs another series of vertices defining the clipped polygon.

• In the next pass, the partially clipped polygon is clipped against the second clip edge, and so on.

• Consider the polygon edge from vertex vi to vertex vi+1.

• This algorithm requires storage for only two pairs of coordinates for each clipping edge.

Sutherland-Hodgman Polygon Clipping

• Input each edge (vertex pair) successively; the output is a new list of vertices.
• Each edge goes through the 4 clippers.
• The rule for each edge at each clipper is (see the sketch after this list):
  • If the first input vertex is outside and the second is inside, output the intersection point and the second vertex.
  • If both input vertices are inside, output only the second vertex.
  • If the first input vertex is inside and the second is outside, output only the intersection point.
  • If both vertices are outside, output nothing.
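A sketch of one clipper stage following these rules. A window edge is represented here by an inside() predicate and an intersect() helper supplied by the caller; these parameter names are illustrative, not from the slides:

def clip_against_edge(vertices, inside, intersect):
    """Clip a polygon (list of (x, y) vertices) against a single clip edge.
    inside(p)       -> True if vertex p is on the visible side of the edge
    intersect(p, q) -> intersection of segment p-q with the clip edge
    """
    output = []
    for i in range(len(vertices)):
        v1 = vertices[i - 1]              # previous vertex (wraps around)
        v2 = vertices[i]                  # current vertex
        if inside(v2):
            if not inside(v1):            # outside -> inside: intersection, then v2
                output.append(intersect(v1, v2))
            output.append(v2)             # inside -> inside: just v2
        elif inside(v1):                  # inside -> outside: intersection only
            output.append(intersect(v1, v2))
        # outside -> outside: output nothing
    return output

The full algorithm applies this stage once per window edge (left, right, bottom, top), feeding each stage's output polygon into the next stage.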
Sutherland-Hodgman Polygon Clipping:
Four possible scenarios at each clipper

[Figure: the four possible edge cases at a clip boundary]

• Outside to inside: output v1' (the intersection) and v2
• Inside to inside: output v2
• Inside to outside: output v1' (the intersection)
• Outside to outside: output nothing
Sutherland-Hodgman Polygon Clipping

v2’ v2
v3

v2”
v1’
v3’
Left Right Bottom Top
v1 Clipper Clipper Clipper Clipper
Final
v1v2 v2 v2v2’ v2’ v2’v3’ v2” v2”v1’ v1’
v2v3 v2’ v2’v3’ v3’ v3’v1 Nil v1’v2 v2
v3v1 v3’v1 v3’v1 v1 v1v2 v1’v2 v2v2’ v2’
v1v2 v2 v2v2’ v2’ v2’v2” v2”
Edges Output Edges Output Edges Output Edges Output
Sutherland-Hodgman Polygon Clipping

Limitations:

1. The clipping window must be convex, and the algorithm works properly only for convex polygons; clipping a concave polygon can produce extraneous edges.
2. This method requires a considerable amount of memory. First, all polygons are stored in original form; then clipping against each edge is done separately, and the results of all these operations are stored in memory. So memory is wasted in storing intermediate polygons.
Three dimensional viewing

3D Viewing

The steps for computer generation of a view of a three-dimensional scene are somewhat analogous to the processes involved in taking a photograph.

Camera Analogy

1. Viewing position
2. Camera orientation
3. Size of clipping window
Three dimensional viewing pipeline
Viewing Pipeline

The general processing steps for modeling and converting a world-coordinate description of a scene to device coordinates:

Three dimensional viewing pipeline
1. Modeling Transformation:

Construct the shape of individual objects in a scene within modeling coordinates, and place the objects into appropriate positions within the scene (world coordinates).

Three dimensional viewing pipeline
2. Viewing Transformation:

World coordinate positions are converted to viewing coordinates.

Three dimensional viewing pipeline
3. Projection Transformation:

Convert the viewing-coordinate description of the scene to coordinate positions on the projection plane.

Three dimensional viewing pipeline
4. Normalization Transformation and Clipping
5. Viewport Transformation

Positions on the projection plane are then mapped to normalized coordinates, clipped, and displayed on the output device.
Viewing Coordinates
The viewing coordinate system describes 3D objects with respect to a viewer.

A viewing (projection) plane is set up perpendicular to zv and aligned with (xv, yv).

Transformation from World to Viewing Coordinates

To convert world coordinates to viewing coordinates, a series of simple transformations is needed: a translation that maps the coordinate origins onto each other, followed by three rotations so that the coordinate axes also coincide.
Projection
• Projection can be defined as a mapping of a point P(x, y, z) onto its image P′(x′, y′, z′) in the projection plane.

• The mapping is determined by a projector that passes through P and intersects the view plane at P′(x′, y′, z′).

Projection
• Projectors are lines from the center (reference) of projection through each point in the object.

• The result of projecting an object depends on the spatial relationship among the projectors and the view plane.
Projection – Parallel
Parallel Projection:
• Coordinate positions are transformed to the view plane along parallel lines.
• For a parallel projection, the projectors do not converge; the center of projection is at infinity.
• A parallel projection preserves the relative proportions of objects, but does not give a realistic representation of the appearance of the object.

[Figure: parallel projection]
Projection – Parallel
Projection vector: defines the direction of the projection lines (projectors).

Orthographic Projection: the projection vectors are perpendicular to the projection plane.

Oblique Projection: the projection vectors are not perpendicular to the projection plane.
Projection - Perspective
Perspective Projection:

• Object positions are transformed to the view plane along lines that converge at the projection reference point (center of projection).

• Produces realistic views but does not preserve the relative proportions of objects.

[Figure: perspective projection]
Projection - Perspective
Projections of distant objects are smaller than the projections of
objects of the same size that are closer to the projection plane.
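A minimal sketch of a perspective projection onto a view plane at z = d with the projection reference point at the origin; this particular coordinate convention is an assumption for illustration, since the slides do not fix one:

def perspective_project(x, y, z, d):
    """Project (x, y, z) onto the view plane z = d, with the projection
    reference point at the origin (z must be nonzero)."""
    xp = x * d / z
    yp = y * d / z
    return xp, yp, d

# A point twice as far away (same x, y, larger z) projects to coordinates
# half as large, which is why distant objects appear smaller.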
Projection – Parallel and Perspective
Visible Surface Detection Algorithms
• A major consideration in the generation of realistic graphics displays is determining what is visible within a scene from a chosen viewing position.
• Various methods and approaches are used for visible surface detection. Some methods require more memory, some require more processing time, and some can be applied only to special types of objects.
• Which method we select for a particular application depends on factors such as the complexity of the scene, the type of objects to be displayed, the available equipment, and whether static or animated displays are to be generated.
• The various algorithms are referred to as visible surface detection methods. They are also called hidden surface elimination methods.
Visible Surface Detection Algorithms
We can broadly classify visible surface detection algorithms according to whether they deal with the object definitions or with their projected images. These two approaches are called object-space methods and image-space methods, respectively. Object-space methods deal directly with objects and parts of objects, whereas image-space methods deal with the pixels of the image in the projection plane.
Visible Surface Detection Algorithms
Problem definition of Visible-Surface Detection Methods:
To identify those parts of a scene that are visible from a chosen viewing position. Surfaces which are obscured by other opaque surfaces along the line of sight (projection) are invisible to the viewer.

Characteristics of approaches:
- Require large memory size?
- Require long processing time?
- Applicable to which types of objects?

Considerations:
- Complexity of the scene
- Type of objects in the scene
- Available equipment
- Static or animated displays?
Visible Surface Detection Algorithms -
Classification
Classification of Visible-Surface Detection Algorithms:

1. Object-space Methods

Compare objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, should be labeled as visible. Example: the back-face detection algorithm.
Visible Surface Detection Algorithms -
Classification
1. Object-space Methods
• Compare each object with all other objects to determine the visibility of the object parts.
• Since the calculations take place at the object level, the process is unrelated to the display resolution or to individual pixels in the image, and the result is applicable to different display resolutions.
• The display is more accurate but computationally more expensive than image-space methods, because the comparison with all other objects is typically more complex, e.g. due to the possibility of intersection between surfaces.
• Suitable for scenes with a small number of objects, where the objects have simple relationships with each other.
Visible Surface Detection Algorithms -
Classification
2. Image-space Methods (Mostly used)

Visibility is determined point by point at each pixel position on the projection plane.
Example: Depth Buffer Algorithm

• For each pixel, examine all n objects to determine the one closest to the viewer.
• Accuracy of the calculation is bounded by the display resolution.
• A change of display resolution requires re-calculation
Visible Surface Detection Algorithms –
Depth-Buffer Method
Depth-Buffer Method (Z-Buffer Method)

This is an image-space method. The approach compares surface depths at each pixel position on the projection plane and eliminates hidden surfaces.

Object depth is usually measured from the view plane along the z axis of the viewing system.

This method requires 2 buffers: one is the refresh buffer or image buffer, and the other is called the z-buffer (or depth buffer). Each of these buffers has the same resolution as the image to be generated.

As surfaces are processed, the image buffer is used to store the color / intensity value of each pixel position, and the z-buffer is used to store the depth value for each (x, y) position.
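A compact sketch of the method in Python. The surface representation, depth convention, and helper names are assumptions for illustration; here a smaller z value means closer to the viewer:

def depth_buffer_render(width, height, surfaces, background):
    """surfaces: iterable of objects providing
         pixels()    -> the (x, y) positions the surface covers
         depth(x, y) -> z value of the surface at that pixel (smaller = closer)
         color(x, y) -> color to draw at that pixel
    """
    INF = float("inf")
    z_buffer = [[INF] * width for _ in range(height)]          # depth buffer
    image    = [[background] * width for _ in range(height)]   # refresh buffer

    for s in surfaces:
        for (x, y) in s.pixels():
            z = s.depth(x, y)
            if z < z_buffer[y][x]:          # closer than anything drawn so far
                z_buffer[y][x] = z
                image[y][x] = s.color(x, y)
    return image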
Visible Surface Detection Algorithms –
Depth-Buffer Method
[Figure: surfaces S1, S2 and S3 overlapping a pixel position (x, y) on the view plane, with view directions toward -ve z and toward +ve z]

Three surfaces overlap pixel position (x, y) on the view plane. If we are looking in the -ve z direction from +ve z, the visible surface S1 is nearest to the view plane and has the largest depth (z) value. But if we are looking in the +ve z direction from -ve z, the nearest object S1 will have the smallest depth (z) value.
Visible Surface Detection Algorithms –
Depth-Buffer Method
View Direction: Towards negative Z
Visible Surface Detection Algorithms –
Depth-Buffer Method
View Direction: Towards positive Z
Visible Surface Detection Algorithms –
Depth-Buffer Method
Depth Calculation

Given the depth value at a vertex of a polygon, the depth at any other point in the plane containing the polygon can be calculated efficiently (using additions only).

Depth is calculated from the surface (plane) equation: Z(x, y) = (-Ax - By - D)/C.

During the raster scan, x changes by +1 or -1 along a scan line. Therefore:
Z(x+1, y) = [-A(x+1) - By - D]/C = [-Ax - By - D - A]/C = Z - A/C,
and the ratio A/C is a constant for the entire polygon.

Moving to the next scan line, we need the starting x' on that line. With edge slope m = (y - y')/(x - x') and y' = y - 1, we get x' = x - (1/m). Hence:
Z(x', y-1) = [-A(x - 1/m) - B(y-1) - D]/C = [-Ax - By - D + (A/m) + B]/C = Z + [(A/m) + B]/C.
For vertical edges the slope is infinite, so A/m = 0 and Z(x', y-1) = Z + B/C.
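A sketch of the incremental update along one scan line; A, B, C, D are the coefficients of the polygon's plane equation Ax + By + Cz + D = 0, and the function name is illustrative:

def scanline_depths(x_start, x_end, y, A, B, C, D):
    """Yield (x, z) across one scan line using only additions after the
    first full evaluation of z = (-A*x - B*y - D) / C."""
    z = (-A * x_start - B * y - D) / C     # full evaluation once per span
    step = -A / C                          # constant for the whole polygon
    for x in range(x_start, x_end + 1):
        yield x, z
        z += step                          # Z(x+1, y) = Z(x, y) - A/C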
Visible Surface Detection Algorithms –
Scan line Algorithm
SCAN-LINE Algorithm (for visible surface detection)

This is an extension of the 2-D scan-line (polygon fill) algorithm.

In this method, as each scan line is processed, all polygon surfaces intersecting that line are examined to determine which are visible. Across each scan line, depth calculations are made for each overlapping surface to determine which is nearest to the view plane. When the visible surface has been determined, the intensity value for that position is entered into the image buffer.

Here we deal with a set of polygons.

Data structures used:

ET (Edge Table),
AET (Active Edge Table), and
PT (Polygon Table).
Visible Surface Detection Algorithms –
Scan line Algorithm
Structure of each entry in the ET (a sketch of these records follows below):
• X-coordinate of the end point with the smaller Y-coordinate
• Y-coordinate of the edge's other end point (the maximum Y value for that edge)
• ΔX = 1/m
• Polygon ID

Structure of each entry in the PT:
• Coefficients of the plane equation
• Shading or color information of the polygon
• Flag (IN/OUT), initialized to 'false'
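A sketch of these records as Python data classes; the field names are illustrative and simply mirror the lists above:

from dataclasses import dataclass

@dataclass
class EdgeEntry:                 # one entry of the edge table (ET)
    x: float                     # x of the end point with the smaller y
    y_max: float                 # y of the edge's other end point
    inv_slope: float             # delta-x per scan line, i.e. 1/m
    polygon_id: int              # polygon the edge belongs to

@dataclass
class PolygonEntry:              # one entry of the polygon table (PT)
    plane: tuple                 # plane-equation coefficients (A, B, C, D)
    color: tuple                 # shading / color information
    in_flag: bool = False        # IN/OUT flag, initialized to false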
Visible Surface Detection Algorithms –
Scan line Algorithm
[Figure: two overlapping polygon surfaces S1 (edges AB, BC, CA) and S2 (edges DE, EF, FD) crossed by scan lines]

Active edge list at scan line L3:
AB (S1), DE (S1, S2), BC (S1, S2), EF (S2)
Visible Surface Detection Algorithms –
Scan line Algorithm
[Figure: the same two surfaces with scan lines L1–L5 marked]

Active edge lists at the sample scan lines:
L1: AB (S1), BC (S1), DE (S2), EF (S2)
L2, L3: AB (S1), DE (S1, S2), BC (S1, S2), EF (S2)
L4: AB (S1), CA (S1), FD (S2), EF (S2)
L5: AB (S1), CA (S1)
Visible Surface Detection Algorithms –
Scan line Algorithm

We fill every scan line span by span. When polygons overlap on a scan line, we perform depth calculations at their edges to determine which polygon should be visible over which span. Any number of overlapping polygon surfaces can be processed with this method. Depth calculations are performed only where polygons overlap.

This works only if surfaces do not cut through or otherwise cyclically overlap each other. If cyclic overlap happens, we can divide the surfaces to eliminate the overlaps.
Visible Surface Detection Algorithms –
Scan line Algorithm
Step 2 of the pseudocode on the next slide is not efficient, because not all polygons necessarily intersect the scan line.
- The depth calculation in 2a is not needed if only one polygon in the scene is mapped onto a segment of the scan line.
- To speed up the process, the edge table and active edge list are used so that only the polygons actually crossing the current scan line are examined.
Visible Surface Detection Algorithms –
Scan line Algorithm
For each scan line do
Begin
    For each pixel (x, y) along the scan line do                              ------ Step 1
    Begin
        z_buffer(x, y) = infinity
        Image_buffer(x, y) = background_color
    End
    For each polygon in the scene do                                          ------ Step 2
    Begin
        For each pixel (x, y) along the scan line that is covered by the polygon do
        Begin
            2a. Compute the depth z of the polygon at pixel location (x, y).
            2b. If z < z_buffer(x, y) then
                    Set z_buffer(x, y) = z
                    Set Image_buffer(x, y) = polygon's colour
        End
    End
End
Visible Surface Detection – NPTEL Resources

https://www.youtube.com/watch?v=c59E5b0pcyw
https://www.youtube.com/watch?v=gmoC8xC4MGw
https://www.youtube.com/watch?v=cro30gbbV-M
