M3 CST304 Ktunotes.in
(Figure: a window with limits along xw in world coordinates mapped to a viewport with limits along xv in device coordinates.)

World Coordinates:
• User-defined limits
• Floating-point values

Device Coordinates:
• Device-dependent limits
• Positive integer values
Window to Viewport Transformation
There are two coordinate systems: the World Coordinate system and the Device Coordinate system. The mapping of a world-coordinate scene to device coordinates is called the Window-to-Viewport Transformation, or Windowing Transformation.
Window to Viewport Transformation
What is a Window?
A window is a world-coordinate area selected for display, so the window defines what is to be viewed. First of all, we have to decide which portion of the real-world scene we want to display.
View-Port
A viewport is the area on a display device onto which the window contents are mapped; it defines where the selected scene is displayed on the output device.
Window to Viewport Transformation
To keep relative positions the same, a point (xw, yw) in the window maps to the viewport position (xv, yv):
xv = xvmin + (xw - xwmin)·sx
yv = yvmin + (yw - ywmin)·sy
where the scaling factors are
sx = (xvmax - xvmin)/(xwmax - xwmin)
sy = (yvmax - yvmin)/(ywmax - ywmin)
Window to Viewport Transformation
• From normalized coordinates, object descriptions are mapped to the various display devices.

Example: Map the window with wxmin = 1, wymin = 1, wxmax = 3, wymax = 5 onto:
a) a viewport with lower-left corner (0, 0) and upper-right corner (1, 1)
b) a viewport with lower-left corner (0, 0) and upper-right corner (1/2, 1/2)

That is:
a) vxmin = 0, vymin = 0, vxmax = 1, vymax = 1
b) vxmin = 0, vymin = 0, vxmax = 1/2, vymax = 1/2
Window to Viewport Transformation
a) Viewport with lower-left corner (0,0) and upper right corner (1,1)
Sx = (1 - 0)/(3 - 1) = 1/2
Sy = (1 - 0)/(5 - 1) = 1/4
Window to Viewport Transformation
b) Viewport with lower-left corner (0, 0) and upper-right corner (1/2, 1/2)
Sx = (1/2 - 0)/(3 - 1) = 1/4
Sy = (1/2 - 0)/(5 - 1) = 1/8
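The mapping above is easy to check in code. A minimal Python sketch (the function name `window_to_viewport` is my own; the formulas are the ones given above):

```python
def window_to_viewport(xw, yw, win, vp):
    """Map a world point (xw, yw) from window `win` to viewport `vp`.
    Both `win` and `vp` are (xmin, ymin, xmax, ymax) tuples."""
    xwmin, ywmin, xwmax, ywmax = win
    xvmin, yvmin, xvmax, yvmax = vp
    sx = (xvmax - xvmin) / (xwmax - xwmin)   # scaling factor in x
    sy = (yvmax - yvmin) / (ywmax - ywmin)   # scaling factor in y
    xv = xvmin + (xw - xwmin) * sx
    yv = yvmin + (yw - ywmin) * sy
    return xv, yv

# Case (a) from the example: window (1,1)-(3,5) onto viewport (0,0)-(1,1)
print(window_to_viewport(2, 3, (1, 1, 3, 5), (0, 0, 1, 1)))
```

The window's centre (2, 3) lands at the viewport's centre, and the corners map to corners, which confirms that relative placement is preserved.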
• We keep all portions of objects that lie inside the window, while everything outside the window is discarded.
• We can select a part of a scene for display, much as we use a camera to take a picture: we focus the camera on a particular portion and discard whatever falls out of frame.
• In three-dimensional views, only some surfaces are visible and others are hidden; clipping helps identify the visible surfaces.
• With the help of the clipping operation we can select a part of a picture and then copy, move, or delete that part. Many painting and drawing applications offer this feature.
Clipping
Types of Clipping
• Point Clipping
• Line Clipping
• Polygon Clipping
• Text Clipping
• Curve Clipping
Clipping
Point Clipping
A point (x, y) is displayed only if it satisfies both of the following inequalities; otherwise it is clipped:
xmin <= x <= xmax
ymin <= y <= ymax
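The point-clipping test is a direct translation of these two inequalities; a minimal sketch (the function name is illustrative):

```python
def clip_point(x, y, xmin, ymin, xmax, ymax):
    """Keep a point only if it lies inside (or on the border of) the clip window."""
    return xmin <= x <= xmax and ymin <= y <= ymax

print(clip_point(5, 5, 0, 0, 10, 10))   # inside the window
print(clip_point(11, 5, 0, 0, 10, 10))  # outside: fails xmax test
```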
Clipping
Line Clipping
We specify a line by its end-points. There are three possible cases for a line that we need to consider.
First, the line is completely inside the window; it is fully visible and is displayed as it is.
Second, the line is completely outside the window; it is invisible and is discarded.
Third, the line is neither completely inside nor completely outside; it is partially visible, intersecting the clipping boundaries.
Clipping
Cohen Sutherland Line Clipping Algorithm
It uses a four-bit binary code, called the region code, to make the clipping decision. We can write the bit pattern as b4b3b2b1; in one common convention, b1 through b4 correspond to the left, right, bottom, and top boundaries respectively.
1. If a point is inside the clip window, its region code is 0000.
In this algorithm, we first find the region codes of the endpoints of the given lines, and then check each line:
Step 1: If the region codes of both endpoints of a line are 0000, the line is completely inside; display it completely and exit.
Step 2: Otherwise, take the logical AND of the two region codes.
a. If the result is non-zero, the line lies completely outside the window and is rejected.
b. However, if the result is zero, the line is partially visible, so calculate the intersection points of the line segment with the clipping boundaries.
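The region code can be sketched as follows (the bit values chosen here, left = 1, right = 2, bottom = 4, top = 8, are one common convention; the algorithm works with any consistent assignment):

```python
# Bit assignments: b1 = left, b2 = right, b3 = bottom, b4 = top
INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

def region_code(x, y, xmin, ymin, xmax, ymax):
    """Return the 4-bit Cohen-Sutherland region code of point (x, y)."""
    code = INSIDE                 # 0000 for a point inside the clip window
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code
```

If both endpoint codes are 0000 the line is trivially accepted; if their bitwise AND is non-zero they share an outside region and the line is trivially rejected.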
Clipping
Cohen Sutherland Line Clipping Algorithm
How to calculate intersection points
We specify the clipping boundary by (xmin, xmax, ymin, ymax) and the line end-points by (x1, y1) and (x2, y2). With slope m = (y2 - y1)/(x2 - x1), the intersection with each clipping boundary is:

Left vertical clipping boundary: y = y1 + m(xmin - x1)
Right vertical clipping boundary: y = y1 + m(xmax - x1)
Top horizontal clipping boundary: x = x1 + (1/m)(ymax - y1)
Bottom horizontal clipping boundary: x = x1 + (1/m)(ymin - y1)
Derivation. Let (x1, y1) be the known endpoint, (x2, y2) the other endpoint, and (x, y) the intersection with the boundary, so that m = (y - y1)/(x - x1).

Vertical boundary (x = Xmin or Xmax is known):
y - y1 = m(x - x1)  =>  y = y1 + m(x - x1)

Horizontal boundary (y = Ymin or Ymax is known):
m(x - x1) = y - y1  =>  x = x1 + (1/m)(y - y1)
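Putting the region codes and the intersection formulas together gives the full algorithm. A sketch in Python (the function names are my own; the boundary intersections are the formulas derived above):

```python
INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

def code(x, y, xmin, ymin, xmax, ymax):
    """4-bit region code of (x, y) against the clip window."""
    c = INSIDE
    if x < xmin:
        c |= LEFT
    elif x > xmax:
        c |= RIGHT
    if y < ymin:
        c |= BOTTOM
    elif y > ymax:
        c |= TOP
    return c

def cohen_sutherland(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
    """Return the clipped segment (x1, y1, x2, y2), or None if fully outside."""
    c1 = code(x1, y1, xmin, ymin, xmax, ymax)
    c2 = code(x2, y2, xmin, ymin, xmax, ymax)
    while True:
        if not (c1 | c2):          # both codes 0000: trivially accept
            return (x1, y1, x2, y2)
        if c1 & c2:                # common outside region: trivially reject
            return None
        c = c1 or c2               # pick an endpoint that lies outside
        if c & TOP:
            x = x1 + (x2 - x1) * (ymax - y1) / (y2 - y1); y = ymax
        elif c & BOTTOM:
            x = x1 + (x2 - x1) * (ymin - y1) / (y2 - y1); y = ymin
        elif c & RIGHT:
            y = y1 + (y2 - y1) * (xmax - x1) / (x2 - x1); x = xmax
        else:                      # LEFT
            y = y1 + (y2 - y1) * (xmin - x1) / (x2 - x1); x = xmin
        if c == c1:                # replace the outside endpoint and re-code it
            x1, y1, c1 = x, y, code(x, y, xmin, ymin, xmax, ymax)
        else:
            x2, y2, c2 = x, y, code(x, y, xmin, ymin, xmax, ymax)

# First worked example below: line (40,15)-(75,45), window (50,10)-(80,40)
print(cohen_sutherland(40, 15, 75, 45, 50, 10, 80, 40))
```

Running it on the first worked example reproduces the intersection points I1 ≈ (50, 23.57) and I2 ≈ (69.17, 40) quoted in the slides.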
Clipping
Cohen Sutherland Line Clipping Algorithm
Apply the Cohen Sutherland line clipping algorithm to clip the line
segment with coordinates (40,15) and (75,45) against the window with
(Xmin,Ymin)= (50,10) and (Xmax,Ymax)= (80,40).
Answer: Clip away the portions of the segment outside the window. The intersection points are I1 = (50, 23.57) and I2 = (69.2, 40); display the line between I1 and I2.
Apply the Cohen Sutherland line clipping algorithm to clip the line
segment with coordinates (60, 60) and (90, 20) against the window with
(Xmin,Ymin)= (40,15) and (Xmax,Ymax)= (85,50)
Answer: Clip away the portions of the segment outside the window. The intersection points are I1 = (67.5, 50) and I2 = (85, 26.7); display the line between I1 and I2.
Polygon Clipping
• Sutherland-Hodgman algorithm (a divide-and-conquer strategy)
• The polygon is clipped against each edge of the window, one at a time. Edge intersections, if any, are easy to find since the boundary's X or Y coordinate is already known.
• Vertices kept after clipping against one window edge are saved for clipping against the remaining edges.
• Note that the number of vertices usually changes and often increases.
Clipping A Polygon Step by Step
(Figure: a polygon clipped step by step against the right clip boundary.)
• Given a polygon with n vertices, v1, v2,…, vn, the algorithm clips the
polygon against a single, infinite clip edge and outputs another series of
vertices defining the clipped polygon.
• In the next pass, the partially clipped polygon is then clipped against
the second clip edge, and so on.
• This algorithm requires storage for only two pairs of coordinates for each clipping edge.
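The per-edge clipping pass described above can be sketched as follows for an axis-aligned clip window (the function names are my own; each pass walks the polygon's edges and emits kept vertices plus boundary intersections):

```python
def clip_polygon(vertices, xmin, ymin, xmax, ymax):
    """Sutherland-Hodgman: clip `vertices` (list of (x, y)) against each
    window boundary in turn (left, right, bottom, top)."""
    def clip_edge(poly, inside, intersect):
        out = []
        for i, cur in enumerate(poly):
            prev = poly[i - 1]                        # previous vertex (wraps around)
            if inside(cur):
                if not inside(prev):
                    out.append(intersect(prev, cur))  # entering: add crossing point
                out.append(cur)
            elif inside(prev):
                out.append(intersect(prev, cur))      # leaving: add crossing point only
        return out

    def cross_x(x):  # intersection with a vertical boundary x = const
        def f(p, q):
            t = (x - p[0]) / (q[0] - p[0])
            return (x, p[1] + t * (q[1] - p[1]))
        return f

    def cross_y(y):  # intersection with a horizontal boundary y = const
        def f(p, q):
            t = (y - p[1]) / (q[1] - p[1])
            return (p[0] + t * (q[0] - p[0]), y)
        return f

    poly = list(vertices)
    poly = clip_edge(poly, lambda p: p[0] >= xmin, cross_x(xmin))  # left clipper
    poly = clip_edge(poly, lambda p: p[0] <= xmax, cross_x(xmax))  # right clipper
    poly = clip_edge(poly, lambda p: p[1] >= ymin, cross_y(ymin))  # bottom clipper
    poly = clip_edge(poly, lambda p: p[1] <= ymax, cross_y(ymax))  # top clipper
    return poly

# A square larger than the window is clipped down to the window itself.
print(clip_polygon([(-1, -1), (2, -1), (2, 2), (-1, 2)], 0, 0, 1, 1))
```

Note how each pass's output becomes the next pass's input, exactly as in the Left → Right → Bottom → Top pipeline described above.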
Sutherland-Hodgman Polygon Clipping
v2 v2 v2 v2
v1’ v1’
v1 v1
v1 v1
v2’ v2
v3
v2”
v1’
v3’
Left Right Bottom Top
v1 Clipper Clipper Clipper Clipper
Final
v1v2 v2 v2v2’ v2’ v2’v3’ v2” v2”v1’ v1’
v2v3 v2’ v2’v3’ v3’ v3’v1 Nil v1’v2 v2
v3v1 v3’v1 v3’v1 v1 v1v2 v1’v2 v2v2’ v2’
v1v2 v2 v2v2’ v2’ v2’v2” v2”
Edges Output Edges Output Edges Output Edges Output
Sutherland-Hodgman Polygon Clipping
Limitations:
1. The clipping window and the polygon should be convex for this algorithm to work properly; a concave polygon may be clipped into pieces connected by extraneous edges.
2. This method requires a considerable amount of memory. The polygon is first stored in its original form; then clipping against each edge is done separately, and the results of all these operations are stored in memory. Memory is therefore wasted on storing intermediate polygons.
Three dimensional viewing
3D Viewing
Camera Analogy
1. Viewing position
2. Camera orientation
3. Size of clipping window
Three dimensional viewing pipeline
Viewing Pipeline
Projection - Perspective
Projections of distant objects are smaller than the projections of
objects of the same size that are closer to the projection plane.
Projection – Parallel and Perspective
Visible Surface Detection Algorithms
• A major consideration in the generation of realistic graphic
displays is determining what is visible within a scene from a
chosen viewing position.
• Various methods and approaches are used for visible surface
detection. Some methods require more memory, some require
more processing time and some can be applied only to special
type of objects.
• Which method we select for a particular application depends on
factors like the complexity of the scene, type of objects to be
displayed, available equipment and whether static or animated
displays are to be generated.
• The various algorithms are referred to as visible-surface detection methods; they are sometimes also called hidden-surface elimination methods.
Visible Surface Detection Algorithms
We can broadly classify visible surface detection
algorithms according to whether they deal with the
object definitions or with their projected images.
1. Object-space Methods
Compare objects and parts of objects to each other within the scene definition to
determine which surfaces, as a whole, we should label as visible. Example – Back
Face Detection algorithm
Visible Surface Detection Algorithms -
Classification
1. Object-space Methods
• Compare each object with all other objects to determine the visibility of the
object parts.
• Since the calculations take place at the object level, process is unrelated to
display resolution or the individual pixel in the image and the result of the
process is applicable to different display resolutions.
• Display is more accurate but computationally more expensive than image-space methods, because comparison with all other objects is typically complex, e.g. due to the possibility of intersection between surfaces.
• Suitable for scenes with a small number of objects whose relationships with each other are simple.
Visible Surface Detection Algorithms -
Classification
2. Image-space Methods (Mostly used)
Visibility is determined point by point at each pixel position on the projection plane.
Example: Depth Buffer Algorithm
• For each pixel, examine all n objects to determine the one closest to the viewer.
• Accuracy of the calculation is bounded by the display resolution.
• A change of display resolution requires re-calculation
Visible Surface Detection Algorithms –
Depth-Buffer Method
Depth-Buffer Method (Z-Buffer Method)
Object depth is usually measured from the view plane along the z axis of a
viewing system.
This method requires 2 buffers: one is the refresh buffer or image buffer and
the other is called the z-buffer (or the depth buffer). Each of these buffers
has the same resolution as the image to be captured.
As surfaces are processed, the image buffer is used to store the color /
intensity values of each pixel position and the z-buffer is used to store the
depth values for each (x,y) position.
Visible Surface Detection Algorithms –
Depth-Buffer Method
(Figure: surfaces S1, S2, S3 overlapping at the same projection position (x, y) on the view plane; axes xv, yv, zv, with the view direction toward negative z.)
Given the depth value at a vertex of a polygon, the depth of any other point in the plane containing the polygon can be calculated efficiently (additions only).

Stepping down an edge to the next scan line: with edge slope m = (y - y')/(x - x') and y' = y - 1, the edge position on the next scan line is x' = x - (1/m).

Depth is calculated from the surface equation Z(x, y) = (-Ax - By - D)/C. During a raster scan, x changes by +1 or -1 along a scan line. Therefore:

Z(x+1, y) = [-A(x+1) - By - D]/C = [-Ax - By - D - A]/C = Z - A/C

and the ratio A/C is a constant for the entire polygon. Moving down an edge to the next scan line:

Z(x', y-1) = [-A(x - 1/m) - B(y-1) - D]/C = [-Ax - By - D + A/m + B]/C = Z + [(A/m) + B]/C

For vertical edges the slope is infinite, so A/m = 0 and Z(x', y-1) = Z + B/C.
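The incremental update can be verified numerically; the plane coefficients below are made up purely for illustration:

```python
# Plane: A*x + B*y + C*z + D = 0  =>  z(x, y) = (-A*x - B*y - D) / C
A, B, C, D = 2.0, -1.0, 4.0, 8.0   # illustrative coefficients (C != 0)

def z(x, y):
    """Depth of the plane at projection position (x, y)."""
    return (-A * x - B * y - D) / C

x0, y0 = 3.0, 5.0
# Incremental update across a scan line: z(x+1, y) = z(x, y) - A/C,
# so one subtraction replaces a full re-evaluation of the plane equation.
z_incremental = z(x0, y0) - A / C
print(z_incremental, z(x0 + 1, y0))
```

The two printed values agree, confirming that A/C is the only quantity needed per pixel step.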
Visible Surface Detection Algorithms –
Scan line Algorithm
SCAN-LINE Algorithms (for VSD)
In this method, as each scan line is processed, all polygon surfaces intersecting that
line are examined to determine which are visible. Across each scan line, depth
calculations are made for each overlapping surface to determine which is nearest to
the view plane. When the visible surface has been determined, the intensity value
for that position is entered into the image buffer.
ET (Edge Table),
AET (Active Edge Table) and
PT (Polygon Table).
Visible Surface Detection Algorithms –
Scan line Algorithm
Structure of each entry in ET:
• X-coordinate of the end point with the smaller Y-coordinate
• Y-coordinate of the edge’s other end point or the maximum Y value for
that edge
• ΔX = 1/m
• Polygon ID
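Building one ET entry from an edge's endpoints can be sketched as follows (the tuple layout mirrors the fields listed above; the function name is my own):

```python
def make_et_entry(p1, p2, polygon_id):
    """Build one edge-table entry: (x at the lower endpoint, ymax, 1/m, polygon id).
    Horizontal edges are skipped, since they never cross a scan line."""
    (x1, y1), (x2, y2) = p1, p2
    if y1 == y2:
        return None                        # horizontal edge: no ET entry
    if y1 > y2:                            # order so (x1, y1) is the lower endpoint
        x1, y1, x2, y2 = x2, y2, x1, y1
    inv_slope = (x2 - x1) / (y2 - y1)      # ΔX = 1/m, added to x per scan line
    return (x1, y2, inv_slope, polygon_id)

print(make_et_entry((2, 1), (6, 5), "S1"))
```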
Visible Surface Detection Algorithms –
Scan line Algorithm
(Figure: two surfaces, S1 with edges AB, BC, CA and S2 with edges DE, EF, FD, crossed by scan lines L1–L5. For each scan line the active edges and the surfaces covering each span are listed; e.g. on L2 and L3, the span from AB to DE is covered by S1 only, the span from DE to BC by both S1 and S2, and the span from BC to EF by S2 only.)
Visible Surface Detection Algorithms –
Scan line Algorithm
We fill every scan line span by span. When polygons overlap on a scan line, we perform depth calculations at their edges to determine which polygon is visible in which span. Any number of overlapping polygon surfaces can be processed with this method, and depth calculations are performed only where polygons overlap.
This works only if surfaces do not cut through or otherwise cyclically overlap each other. If cyclic overlap happens, we can divide the surfaces to eliminate the overlaps.
Visible Surface Detection Algorithms –
Scan line Algorithm
Step 2 is not efficient, because not all polygons necessarily intersect the scan line.
- The depth calculation in 2a is not needed if only one polygon in the scene is mapped onto a segment of the scan line.
- To speed up the process:
Visible Surface Detection Algorithms –
Scan line Algorithm
For each scan line do
Begin
    For each pixel (x, y) along the scan line do ------------ Step 1
    Begin
        z_buffer(x, y) = ∞
        Image_buffer(x, y) = background_color
    End
    For each polygon in the scene do ----------- Step 2
    Begin
        For each pixel (x, y) along the scan line that is covered by the polygon do
        Begin
            2a. Compute the depth z of the polygon at pixel location (x, y).
            2b. If z < z_buffer(x, y) then
                Set z_buffer(x, y) = z
                Set Image_buffer(x, y) = polygon's colour
        End
    End
End
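The pseudocode above can be turned into runnable Python for a single scan line. Representing each polygon's coverage as a precomputed pixel-to-depth map is a simplification of mine; a real implementation would derive coverage and depth incrementally from the edge tables:

```python
import math

def zbuffer_render(width, polygons, background=' '):
    """Run the scan-line z-buffer loop for one scan line of `width` pixels.
    Each polygon is (colour, {x: depth}) giving its depth at covered pixels."""
    z_buffer = [math.inf] * width                 # Step 1: initialise both buffers
    image = [background] * width
    for colour, coverage in polygons:             # Step 2: process each polygon
        for x, z in coverage.items():
            if z < z_buffer[x]:                   # 2b: closer than the stored depth
                z_buffer[x] = z
                image[x] = colour
    return ''.join(image)

# Two overlapping flat "polygons": B (depth 2) is nearer than A (depth 5)
# wherever they overlap, so B wins pixels 3-5.
print(zbuffer_render(8, [('A', {2: 5, 3: 5, 4: 5, 5: 5}),
                         ('B', {3: 2, 4: 2, 5: 2, 6: 2})]))
```

Because smaller z means nearer here (view direction toward negative z, buffer initialised to ∞), the `z < z_buffer` test keeps exactly the closest surface at each pixel.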
Visible Surface Detection – NPTEL Resources
https://www.youtube.com/watch?v=c59E5b0pcyw
https://www.youtube.com/watch?v=gmoC8xC4MGw
https://www.youtube.com/watch?v=cro30gbbV-M