
COMPUTER GRAPHICS (Important Topics)

Random Scan Display:


A random scan system uses an electron beam which operates like a pencil to create a line image on the CRT screen. The picture is constructed out of a sequence of straight-line segments. Each line segment is drawn on the screen by directing the beam to move from one point on the screen to the next, where each point is defined by its x and y coordinates. After drawing the picture, the system cycles back to the first line and redraws all the lines of the image 30 to 60 times each second. The process is shown in the figure.

Random-scan monitors are also known as vector displays, stroke-writing displays, or calligraphic displays.

Advantages:
1. The electron beam is directed only to the parts of the screen where an image is to be drawn.
2. It produces smooth line drawings.
3. High resolution.

Disadvantages:
1. Random-scan monitors cannot display realistic shaded scenes.

Raster Scan Display:


A raster scan display is based on intensity control of pixels in the form of a rectangular grid, called a raster, on the screen. Information about the on and off (intensity) state of each pixel is stored in a refresh buffer or frame buffer. Televisions in our homes are based on the raster scan method. Because the raster scan system stores information for each pixel position, it is suitable for the realistic display of objects. Raster scan provides a refresh rate of 60 to 80 frames per second.
The frame buffer is also known as the raster or bit map. In the frame buffer, the positions are called picture elements or pixels. Beam retracing is of two types: horizontal retrace and vertical retrace. When the beam finishes a scan line, its return to the start of the next line is called horizontal retrace; when the beam reaches the bottom of the screen, its return to the top left corner is called vertical retrace, as shown in the figure.
Types of scanning (travel of the beam) in raster scan:
1. Interlaced scanning
2. Non-interlaced scanning
In non-interlaced (progressive) scanning, every horizontal line of the screen is traced in order from top to bottom in each refresh cycle. In interlaced scanning, all the odd-numbered lines are traced first, and then, in the next cycle, the even-numbered lines are traced; the two passes together make up one complete image.
A non-interlaced display refreshed at 30 frames per second appears to flicker. Interlacing refreshes the screen in two passes, giving an effective rate of 60 fields per second and reducing the visible flicker.

Advantages:
1. Realistic images can be displayed.
2. Millions of different colors can be generated.
3. Shaded scenes are possible.

Disadvantages:
1. Low Resolution
2. Expensive

Random Scan vs. Raster Scan

1. Random scan has high resolution; raster scan resolution is low.
2. Random scan is more expensive; raster scan is less expensive.
3. In random scan, any required modification is easy; in raster scan, modification is difficult.
4. Solid patterns are difficult to fill with random scan; they are easy to fill with raster scan.
5. In random scan, the refresh rate depends on the complexity of the picture; in raster scan, the refresh rate does not depend on the picture.
6. Random scan traces only the parts of the screen where the image lies; raster scan scans the whole screen.
7. Random scan uses the beam-penetration technique for color; raster scan uses the shadow-mask technique.
8. Random scan does not use interlacing; raster scan uses interlacing.
9. Random scan is restricted to line-drawing applications; raster scan is suitable for realistic displays.
Anti-Aliasing
Antialiasing is a technique used in computer graphics to remove the aliasing effect. The aliasing effect is the appearance of jagged edges or "jaggies" in a rasterized image (an image rendered using pixels). Jagged edges technically occur when scan conversion is done with sampling at a low frequency, which is also known as undersampling. Aliasing occurs when real-world objects composed of smooth, continuous curves are rasterized using pixels. The cause of aliasing is undersampling, which results in a loss of information about the picture. Undersampling occurs when sampling is done at a frequency lower than the Nyquist sampling frequency; to avoid this loss, the sampling frequency must be at least twice the highest frequency occurring in the image.
Anti-Aliasing Methods:

A high-resolution display, post-filtering (super-sampling), pre-filtering (area sampling), and pixel phasing are the techniques used to remove aliasing. The explanations of these are given below:

1. Using a High-Resolution Display - Displaying objects at a higher resolution is one technique to decrease the aliasing effect and boost the sampling rate. At high resolution the jaggies become so small that they are invisible to the human eye; as a result, sharp edges get blurred and appear smooth.
Real-Life Applications:
For example, OLED displays and retina displays in Apple products both have high pixel
densities, which results in jaggies that are so microscopic that they are blurry and invisible to
the human eye.

2. Post-Filtering or Super-Sampling - With this technique, we reduce the effective pixel size while improving the sampling resolution by treating the screen as though it were formed of a much finer grid; the physical screen resolution, however, does not change. The intensity of each subpixel is calculated, and the displayed pixel intensity is then taken as the average of the intensities of its subpixels. Because we sample at a higher resolution and then display the image at the lower screen resolution, the process is known as supersampling. Since this filtering is carried out after the rasterized image has been created, the technique is also known as post-filtering. (A small illustrative sketch of the averaging step is given after this list of methods.)

Real-Life Applications:
The finest image quality in gaming is produced with SSAA (super-sample antialiasing), also called FSAA (full-scene antialiasing). It is frequently referred to as "pure AA"; it is extremely slow and computationally expensive, and it was mainly used in the early days when no better AA techniques were available. SSAA modes such as 2X, 4X and 8X indicate that sampling is done at that multiple of the present resolution.
MSAA (multisample antialiasing) is a better AA type: a quicker, though approximate, variant of super-sampling AA.

Its computational cost is lower. Graphics-card manufacturers have built on these ideas with their own refinements of super-sampling, such as CSAA by NVIDIA and CFAA by AMD.

3. Pre-Filtering or Area-Sampling - In area sampling, the area of each pixel's overlap with the displayed objects is taken into account when calculating pixel intensities; the pixel colour is computed based on the overlap of scene objects with the pixel region.
Example: Suppose a line crosses two pixels. The pixel covering a larger portion of the line (90%) displays 90% intensity, whereas the pixel covering a smaller portion (10%) displays 10% intensity. If a pixel region overlaps several colour areas, the final pixel colour is the area-weighted average of those colours. This technique is also called pre-filtering because it is applied before the image is rasterised; it is carried out using some basic graphics algorithms.

4. Pixel Phasing - It is a method of eliminating aliasing in which pixel positions are shifted to locations closer to the true object geometry. Some systems also allow the size of individual pixels to be adjusted, which helps distribute intensities and aids pixel phasing.
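
As a rough illustration of the averaging step in the super-sampling method (method 2 above), the C sketch below supersamples a single pixel over an n x n subpixel grid. The scene callback, the pixel coordinates, and the example half-plane scene are illustrative assumptions, not part of any particular API.

#include <stdio.h>

/* Hypothetical scene callback: returns an intensity in [0, 1] at point (x, y). */
typedef double (*scene_fn)(double x, double y);

/* Supersample the pixel whose lower-left corner is (px, py):
   sample an n x n grid of subpixel centres and return the average intensity. */
double supersample_pixel(scene_fn sample_scene, double px, double py, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            double sx = px + (i + 0.5) / n;   /* subpixel centre, x */
            double sy = py + (j + 0.5) / n;   /* subpixel centre, y */
            sum += sample_scene(sx, sy);
        }
    return sum / (n * n);                     /* averaged subpixel intensity */
}

/* Example scene: intensity 1 below the line y = x, 0 above it. */
static double half_plane(double x, double y) { return y < x ? 1.0 : 0.0; }

int main(void)
{
    /* The edge pixel is only partly covered, so its averaged intensity
       lies between 0 and 1 (0.375 for this 4 x 4 grid). */
    printf("%.3f\n", supersample_pixel(half_plane, 0.0, 0.0, 4));
    return 0;
}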

Application of Anti-Aliasing:

1. Compensating for Line Intensity Differences - When a horizontal line and a diagonal line are plotted on a raster display, the diagonal line is 1.414 times longer than the horizontal line, yet the number of pixels used to draw the two lines is the same. The longer line therefore appears less intense. Anti-aliasing techniques compensate for this loss of intensity by allocating pixel intensity in proportion to the length of the line.
2. Anti-Aliasing Area Boundaries - Jaggies along area boundaries can be removed using anti-aliasing principles, and these techniques can be applied to smooth area boundaries in scan-line algorithms. If pixel positions can be shifted, boundary pixels are moved to positions nearer the edge of the area. Other techniques adjust the pixel intensity at a boundary position according to the fraction of the pixel area that lies inside the boundary. Area boundaries are effectively smoothed (rounded off) by these techniques.

Aliasing:

In computer graphics, aliasing is the process by which smooth curves and other lines become jagged because the resolution of the graphics device or file is not high enough to represent them accurately.
In line drawing algorithms, we have seen that not all rasterized locations lie exactly on the true line; we have to select the optimum raster locations to represent a straight line. This problem is more severe on low-resolution screens, where a line appears as a stair-step, as shown in the figure below. This effect is known as aliasing, and it is most noticeable for lines with very gentle or very steep slopes.

Transformations

Computer graphics provides the facility of viewing an object from different angles. An architect can study a building from different angles, i.e.
1. Front elevation
2. Side elevation
3. Top plan

A cartographer can change the size of charts and topographical maps. If graphics images are coded as numbers, the numbers can be stored in memory and modified by mathematical operations called transformations.
The purpose of using computers for drawing is to give the user the facility to view an object from different angles and to enlarge or reduce its scale or shape; such operations are called transformations.
Two essential aspects of transformation are given below:

1. Each transformation is a single entity. It can be denoted by a unique name or symbol.


2. It is possible to combine two transformations; after concatenation, a single transformation is obtained. For example, if A is a transformation for translation and B performs scaling, their combination is C = AB, so C is obtained by the concatenation property (a small matrix sketch of this follows).
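
As a small worked sketch of the concatenation property mentioned in point 2, using 3x3 homogeneous-coordinate matrices for 2D transformations (the particular translation and scaling values are illustrative):

#include <stdio.h>

/* Multiply two 3x3 homogeneous 2D transformation matrices: C = A * B. */
void concat3(double A[3][3], double B[3][3], double C[3][3])
{
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            C[i][j] = 0.0;
            for (int k = 0; k < 3; ++k)
                C[i][j] += A[i][k] * B[k][j];
        }
}

int main(void)
{
    /* A translates by (2, 3); B scales by 2 about the origin (illustrative values). */
    double A[3][3] = { {1, 0, 2}, {0, 1, 3}, {0, 0, 1} };
    double B[3][3] = { {2, 0, 0}, {0, 2, 0}, {0, 0, 1} };
    double C[3][3];

    concat3(A, B, C);   /* C = AB: applied to a point, B (scaling) acts first, then A */
    for (int i = 0; i < 3; ++i)
        printf("%g %g %g\n", C[i][0], C[i][1], C[i][2]);
    return 0;
}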

There are two complementary points of view for describing object transformation.
1. Geometric Transformation: The object itself is transformed relative to the coordinate system or
background. The mathematical statement of this viewpoint is defined by geometric
transformations applied to each point of the object.
2. Coordinate Transformation: The object is held stationary while the coordinate system is
transformed relative to the object. This effect is attained through the application of coordinate
transformations.

An example that helps to distinguish these two viewpoints is the movement of an automobile against a scenic background. We can simulate this by:
o Moving the automobile while keeping the background fixed (geometric transformation)
o Keeping the car fixed while moving the background scenery (coordinate transformation)

Types of Transformations:
1. Translation
2. Scaling
3. Rotation
4. Reflection
5. Shearing

Translation

Translation is the straight-line movement of an object from one position to another. The object is repositioned from one coordinate location to another.

Translation of point:

To translate a point from coordinate position (x, y) to another position (x1, y1), we add the translation distances Tx and Ty algebraically to the original coordinates:
x1=x+Tx
y1=y+Ty

The translation pair (Tx, Ty) is called the shift vector.

Translation is a movement of objects without deformation; every position or point is translated by the same amount. When a straight line is translated, it is redrawn using its translated endpoints.

To translate a polygon, each vertex of the polygon is moved to its new position. Curved objects are translated in a similar way: to change the position of a circle or ellipse, its centre coordinates are translated and the object is drawn using the new coordinates.
Let P be a point with coordinates (x, y). It is translated to (x1, y1).

Matrix for Translation:
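
In homogeneous coordinates, this 2D translation can be written in matrix form as:

[x1]   [1  0  Tx] [x]
[y1] = [0  1  Ty] [y]
[1 ]   [0  0   1] [1]

so that x1 = x + Tx and y1 = y + Ty.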


Three Dimensional Transformations
Geometric transformations play a vital role in generating images of three-dimensional objects; with their help, the location of objects relative to one another can be easily expressed. Sometimes the viewpoint changes rapidly, or objects move in relation to each other; for this, a number of transformations can be carried out repeatedly.

Translation

It is the movement of an object from one position to another. Translation is done using translation vectors; in 3D there are three of them instead of two, along the x, y, and z directions. Translation in the x-direction is represented by Tx, translation in the y-direction by Ty, and translation in the z-direction by Tz.
If P is a point with coordinates (x, y, z), then after translation its coordinates become (x1, y1, z1), where Tx, Ty, Tz are the translation distances in the x, y, and z directions respectively:
x1=x+Tx
y1=y+Ty
z1=z+Tz
Three-dimensional transformations are performed by transforming each vertex of the object. If an object has five corners, the translation is accomplished by translating all five points to new locations. Figure 1 shows the translation of a point and figure 2 shows the translation of a cube.

Matrix for translation


Matrix representation of point translation

The point shown in the figure is (x, y, z); it becomes (x1, y1, z1) after translation, where Tx, Ty, Tz are the translation distances.
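
In homogeneous coordinates, the 3D translation is written in matrix form as:

[x1]   [1  0  0  Tx] [x]
[y1] = [0  1  0  Ty] [y]
[z1]   [0  0  1  Tz] [z]
[1 ]   [0  0  0   1] [1]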

Computer Graphics Window to Viewport Co-ordinate Transformation

Once the object description has been transferred to the viewing reference frame, we choose the window extents in viewing coordinates and select the viewport limits in normalized coordinates.

Object descriptions are then transferred to normalized device coordinates. We do this using a transformation that maintains the same relative placement of objects in normalized space as they had in viewing coordinates: if a coordinate position is at the center of the viewing window, it will be displayed at the center of the viewport.

The figure shows the window-to-viewport mapping: a point at position (xw, yw) in the window is mapped into position (xv, yv) in the associated viewport.
To maintain the same relative placement of the point in the viewport as in the window, we require:
(xv - xvmin)/(xvmax - xvmin) = (xw - xwmin)/(xwmax - xwmin)
(yv - yvmin)/(yvmax - yvmin) = (yw - ywmin)/(ywmax - ywmin) ...........equation 1

Solving these expressions for the viewport position (xv, yv), we have
xv=xvmin+(xw-xwmin)sx
yv=yvmin+(yw-ywmin)sy ...........equation 2

where the scaling factors are
sx = (xvmax - xvmin)/(xwmax - xwmin)
sy = (yvmax - yvmin)/(ywmax - ywmin)
Equation (1) and equation (2) can also be derived with a set of transformations that converts the window (world-coordinate) area into the viewport (screen-coordinate) area.

This conversion is performed with the following sequence of transformations:


1. Perform a scaling transformation using a fixed point position (xwmin,ywmin) that scales the
window area to the size of the viewport.
2. Translate the scaled window area to the position of the viewport. Relative proportions of
objects are maintained if the scaling factors are the same (sx=sy).

From normalized coordinates, object descriptions are mapped to the various display devices. Any number of output devices can be open in a particular application, and another window-to-viewport transformation can be performed for each open output device.

This mapping, called the workstation transformation, is accomplished by selecting a window area in normalized space and a viewport area in the coordinates of the display device. As shown in the figure, the workstation transformation can partition a view so that different parts of normalized space are displayed on different output devices.
Matrix Representation of the above three steps of Transformation:

Step 1: Translate the window to the origin:

Tx=-Xwmin Ty=-Ywmin

Step 2: Scale the window to match the size of the viewport:

Sx=(Xvmax-Xvmin)/(Xwmax-Xwmin)
Sy=(Yvmax-Yvmin)/(Ywmax-Ywmin)

Step 3: Again, translate viewport to its correct position on screen.


Tx=Xvmin
Ty=Yvmin

Above three steps can be represented in matrix form:


VT=T * S * T1
T = Translate window to the origin
S=Scaling of the window to viewport size
T1=Translating viewport on screen.

Viewing Transformation= T * S * T1
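
A minimal C sketch of the window-to-viewport mapping of equation (2); the window and viewport limits are passed in as plain parameters, and the example values in main are illustrative only:

#include <stdio.h>

/* Map a window point (xw, yw) to a viewport point (*xv, *yv) using
   xv = xvmin + (xw - xwmin) * sx and yv = yvmin + (yw - ywmin) * sy. */
void window_to_viewport(double xw, double yw,
                        double xwmin, double xwmax, double ywmin, double ywmax,
                        double xvmin, double xvmax, double yvmin, double yvmax,
                        double *xv, double *yv)
{
    double sx = (xvmax - xvmin) / (xwmax - xwmin);   /* scaling factor in x */
    double sy = (yvmax - yvmin) / (ywmax - ywmin);   /* scaling factor in y */

    *xv = xvmin + (xw - xwmin) * sx;
    *yv = yvmin + (yw - ywmin) * sy;
}

int main(void)
{
    double xv, yv;
    /* Illustrative values: window (0..100, 0..100) mapped to viewport (0..1, 0..1). */
    window_to_viewport(50.0, 25.0, 0, 100, 0, 100, 0, 1, 0, 1, &xv, &yv);
    printf("(%.2f, %.2f)\n", xv, yv);   /* prints (0.50, 0.25) */
    return 0;
}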

Advantage of Viewing Transformation:

We can display the picture on a device or display system according to our need and choice.
Note:
o The world coordinate system is selected to suit the application program.
o The screen coordinate system is chosen according to the needs of the design.
o The viewing transformation acts as a bridge between the world and screen coordinate systems.
DDA Algorithm

DDA stands for Digital Differential Analyzer. It is an incremental method of scan-converting a line: the calculation at each step uses the results of the previous step.

Advantage:
1. It is faster than the direct use of the line equation.
2. It does not use multiplication.
3. It allows us to detect changes in the values of x and y, so the same point is not plotted twice.
4. It gives an overflow indication when a point is repositioned.
5. It is an easy method because each step involves just two additions.

Disadvantage:
1. It involves floating-point additions and rounding off; the accumulation of round-off errors causes the plotted points to drift away from the true line for long segments.
2. Rounding-off and floating-point operations are time-consuming.
3. It is more suitable for generating lines in software and less suited for hardware implementation.

Procedure-

Given-
• Starting coordinates = (X0, Y0)
• Ending coordinates = (Xn, Yn)

The points generation using DDA Algorithm involves the following steps-

Step-01:

Calculate ΔX, ΔY and M from the given input.

These parameters are calculated as-


• ΔX = Xn – X0
• ΔY =Yn – Y0
• M = ΔY / ΔX

Step-02:

Find the number of steps or points in between the starting and ending coordinates.

if (absolute(ΔX) > absolute(ΔY))
    Steps = absolute(ΔX);
else
    Steps = absolute(ΔY);

Step-03:

Suppose the current point is (Xp, Yp) and the next point is (Xp+1, Yp+1).
Find the next point using the following three cases:
Case 1: If M < 1, then Xp+1 = Xp + 1 and Yp+1 = Yp + M.
Case 2: If M > 1, then Xp+1 = Xp + (1/M) and Yp+1 = Yp + 1.
Case 3: If M = 1, then Xp+1 = Xp + 1 and Yp+1 = Yp + 1.
(The computed X and Y values are rounded to the nearest integer before plotting.)

Step-04:

Keep repeating Step-03 until the end point is reached or the number of generated points (including the starting and ending points) equals the number of steps.
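
A minimal C sketch of the DDA procedure above, assuming integer endpoints; plot_pixel is a stand-in output routine (here it just prints the coordinates):

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for a real frame-buffer write: just print the pixel coordinates. */
static void plot_pixel(int x, int y) { printf("(%d, %d)\n", x, y); }

/* DDA line from (x_start, y_start) = (X0, Y0) to (x_end, y_end) = (Xn, Yn). */
void dda_line(int x_start, int y_start, int x_end, int y_end)
{
    int dx = x_end - x_start;
    int dy = y_end - y_start;
    int steps = (abs(dx) > abs(dy)) ? abs(dx) : abs(dy);

    if (steps == 0) {                    /* degenerate case: endpoints coincide */
        plot_pixel(x_start, y_start);
        return;
    }

    double x_inc = (double)dx / steps;   /* increment per step in x */
    double y_inc = (double)dy / steps;   /* increment per step in y */

    double x = x_start, y = y_start;
    for (int i = 0; i <= steps; ++i) {   /* plots both endpoints */
        plot_pixel((int)lround(x), (int)lround(y));
        x += x_inc;
        y += y_inc;
    }
}

int main(void)
{
    dda_line(2, 3, 10, 8);               /* illustrative endpoints */
    return 0;
}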

Mid-Point Circle Drawing Algorithm-

The algorithm generates the points of one octant of the circle; the points for the other octants are generated using the eight-way symmetry property.
Procedure-
Given-
• Centre point of circle = (Xc, Yc)
• Radius of circle = R

The points generation using the mid-point circle drawing algorithm involves the following steps-
Step-01:
Assign the starting point coordinates (X0, Y0), taken for a circle centred at the origin, as-
• X0 = 0
• Y0 = R
Step-02:
Calculate the value of initial decision parameter P0 as-
P0 = 1 – R
Step-03:
Suppose the current point is (Xk, Yk) and the next point is (Xk+1, Yk+1).
Find the next point of the first octant depending on the value of the decision parameter Pk, using the following two cases:
Case 1: If Pk < 0, then Xk+1 = Xk + 1, Yk+1 = Yk and Pk+1 = Pk + 2Xk+1 + 1.
Case 2: If Pk >= 0, then Xk+1 = Xk + 1, Yk+1 = Yk - 1 and Pk+1 = Pk + 2Xk+1 + 1 - 2Yk+1.


Step-04:
If the given centre point (Xc, Yc) is not (0, 0), translate each computed point before plotting it-
• Xplot = Xc + Xk
• Yplot = Yc + Yk
Here, (Xk, Yk) denotes the current values of the X and Y coordinates computed for a circle centred at the origin.
Step-05:
Keep repeating Step-03 and Step-04 until Xk >= Yk (the end of the first octant).
Step-06:
Step-05 generates all the points for one octant.
The points for the other seven octants follow from the eight-way symmetry of the circle: if (X, Y) lies on the circle relative to its centre, then so do (Y, X), (-X, Y), (-Y, X), (-X, -Y), (-Y, -X), (X, -Y), and (Y, -X). This is depicted in the accompanying figure.
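
A minimal C sketch of the complete mid-point circle procedure above; plot_pixel is again a stand-in output routine that just prints the coordinates:

#include <stdio.h>

/* Stand-in for a real frame-buffer write: just print the pixel coordinates. */
static void plot_pixel(int x, int y) { printf("(%d, %d)\n", x, y); }

/* Mid-point circle drawing: centre (xc, yc), radius r. */
void midpoint_circle(int xc, int yc, int r)
{
    int x = 0, y = r;                    /* starting point (X0, Y0) = (0, R) */
    int p = 1 - r;                       /* initial decision parameter P0 */

    while (x <= y) {
        /* Plot the point in all eight octants (eight-way symmetry). */
        plot_pixel(xc + x, yc + y);  plot_pixel(xc + y, yc + x);
        plot_pixel(xc - x, yc + y);  plot_pixel(xc - y, yc + x);
        plot_pixel(xc - x, yc - y);  plot_pixel(xc - y, yc - x);
        plot_pixel(xc + x, yc - y);  plot_pixel(xc + y, yc - x);

        ++x;                             /* X always increases by 1 */
        if (p < 0) {
            p += 2 * x + 1;              /* Case 1: Y unchanged */
        } else {
            --y;                         /* Case 2: Y decreases by 1 */
            p += 2 * x + 1 - 2 * y;
        }
    }
}

int main(void)
{
    midpoint_circle(10, 10, 8);          /* illustrative centre and radius */
    return 0;
}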
Perspective Projection

In perspective projection, the farther an object is from the viewer, the smaller it appears. This property of the projection gives an idea of depth. Artists use perspective projection for drawing three-dimensional scenes.

Two main characteristics of perspective projection are vanishing points and perspective foreshortening. Due to foreshortening, objects and lengths appear smaller the farther they are from the center of projection.

Vanishing Point

It is the point where all lines will appear to meet. There can be one point, two point, and three point
perspectives.

One Point: There is only one vanishing point as shown in fig (a)

Two Points: There are two vanishing points, one for each of two principal directions (for example, the x- and z-directions), as shown in fig (b)

Three Points: There are three vanishing points, one for each of the three principal axes.

In perspective projection, the lines of projection do not remain parallel; they converge at a single point called the center of projection. The projected image on the screen is obtained from the points where these converging lines intersect the plane of the screen. The image on the screen is seen as if the viewer's eye were located at the center of projection, so the lines of projection correspond to the paths travelled by light rays originating from the object.
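
As a standard illustration of this convergence, assume the centre of projection is at the origin and the view plane is the plane z = d. A point (x, y, z) then projects to
xp = x * d / z
yp = y * d / z
The division by z is what produces perspective foreshortening: the larger z (the farther the point), the smaller its projected coordinates.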
Important terms related to perspective
1. View plane: The plane (projection plane) onto which the object in world coordinates is projected.
2. Center of projection: The location of the eye, at which the projection rays converge.
3. Projectors: Also called projection vectors; these are rays that start from points of the object in the scene and are used to create the image of the object on the view plane.
Anomalies in Perspective Projection

Perspective projection introduces several anomalies that affect the shape and appearance of objects.
1. Perspective foreshortening: The projected size of an object becomes smaller as its distance from the center of projection increases.
2. Vanishing points: All lines appear to meet at some point in the view plane.
3. Distortion of lines: Lines that extend from in front of the viewer to behind the viewer are projected with severe distortion.

Foreshortening of the z-axis in fig (a) produces one vanishing point, P1. Foreshortening the x and z-
axis results in two vanishing points in fig (b). Adding a y-axis foreshortening in fig (c) adds vanishing
point along the negative y-axis

Parallel Projection

Parallel projection is used to display a picture in its true shape and size. When the projectors are perpendicular to the view plane, it is called an orthographic projection.
A parallel projection is formed by extending parallel lines from each vertex of the object until they intersect the plane of the screen; the point of intersection is the projection of the vertex.
Parallel projections are used by architects and engineers to create working drawings of an object; complete representations require two or more views of the object using different projection planes.
1. Isometric: The direction of projection makes equal angles with all three principal axes; in drawings the receding axes are generally drawn at 30°.

2. Dimetric: The direction of projection makes equal angles with two of the principal axes.

3. Trimetric: The direction of projection makes unequal angles with the three principal axes.

4. Cavalier: All lines perpendicular to the projection plane are projected with no change in length.

5. Cabinet: All lines perpendicular to the projection plane are projected at one half of their length. This gives a more realistic appearance of the object.
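
For reference, a common formulation of the oblique (cavalier/cabinet) projection of a point (x, y, z) onto the z = 0 plane, where α is the angle that the projected receding lines make with the horizontal, is
xp = x + L * z * cos(α)
yp = y + L * z * sin(α)
with L = 1 for a cavalier projection and L = 1/2 for a cabinet projection. (An orthographic projection simply drops the z coordinate: xp = x, yp = y.)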
Here's a detailed overview of common curves in computer graphics:

1. Implicit Curves:
- Definition: Implicit curves are defined by a function of 2 or 3 variables where points satisfying the
function form the curve.
- Representation: Lines, planes, spheres, and other shapes can be represented implicitly.
- Limitations: Lack of control over tangents at connection points and difficulty in finding implicit
functions for complex shapes.

2. Explicit Curves:
- Definition: Represented as explicit mathematical functions such as y = f(x) in 2D, or z = f(x, y) for surfaces.
- Limitations: Inability to describe vertical tangents, and constraints in representing certain curves such as circles or vertical lines.

3. Parametric Curves:
- Definition: Curves represented as functions of a parameter t, where the coordinates (x, y, z) are each functions of t.
- Advantages: Flexibility in representing closed or multi-valued curves. Suitable for animations and smooth transitions.

4. Bezier Curves:
- Definition: Generated using polynomial functions of a set of control points that define the curve's shape.
- Properties: Tangents are controllable using the control points. Types include simple (linear), quadratic, and cubic curves, depending on the number of control points (a small evaluation sketch for the cubic case is given after this list).
- Advantages: Ease of use; the control points have a global effect, so moving one affects the entire curve's shape.

5. B-spline Curves:
- Definition: Curves whose basis functions sum to 1 at any parametric value u.
- Properties: Control points can be modified without altering the polynomial degree; continuity and control over the curve's shape are provided via knots.
- Advantages: Greater flexibility in modifying the curve's shape without changing the degree of the polynomial.
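
A minimal C sketch of evaluating a cubic Bezier curve at a parameter t using the Bernstein polynomial form; the point type and the control-point values in main are illustrative:

#include <stdio.h>

typedef struct { double x, y; } Point2D;

/* Evaluate a cubic Bezier curve defined by control points p0..p3 at parameter t,
   using the Bernstein form:
   B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3,  0 <= t <= 1. */
Point2D cubic_bezier(Point2D p0, Point2D p1, Point2D p2, Point2D p3, double t)
{
    double u  = 1.0 - t;
    double b0 = u * u * u;
    double b1 = 3.0 * u * u * t;
    double b2 = 3.0 * u * t * t;
    double b3 = t * t * t;

    Point2D p;
    p.x = b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x;
    p.y = b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y;
    return p;
}

int main(void)
{
    /* Illustrative control points. */
    Point2D p0 = {0, 0}, p1 = {1, 2}, p2 = {3, 3}, p3 = {4, 0};
    for (int i = 0; i <= 4; ++i) {
        double t = i / 4.0;
        Point2D p = cubic_bezier(p0, p1, p2, p3, t);
        printf("t = %.2f -> (%.3f, %.3f)\n", t, p.x, p.y);
    }
    return 0;
}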

Applications:
- Graphics Rendering: Curves are used extensively to create shapes, surfaces, and animations in 2D
and 3D graphics rendering.
- CAD Systems: Computer-Aided Design systems use curves to model and represent objects,
facilitating design and manufacturing.
- Animation: Parametric curves find applications in motion paths and smooth transitions in animations.

Considerations:
- Control Points: For Bezier and B-spline curves, the positions of control points heavily influence the
resulting curve.
- Degree and Complexity: Different types of curves offer varying degrees of complexity and control,
suitable for different design needs.
Line Clipping:

It is performed by using the line clipping algorithm. The line clipping algorithms are:
1. Cohen Sutherland Line Clipping Algorithm
2. Midpoint Subdivision Line Clipping Algorithm
3. Liang-Barsky Line Clipping Algorithm

Cohen Sutherland Line Clipping Algorithm:


In this algorithm, it is first determined whether a line lies inside the window, lies outside it, or must be clipped.
All lines fall into one of the following categories:
1. Visible
2. Not Visible
3. Clipping Case

1. Visible: If a line lies within the window, i.e., both endpoints of the line lie within the window, the line is visible and is displayed as it is.

2. Not Visible: If a line lies completely outside the window, it is invisible and rejected; such lines are not displayed. Let A (x1, y1) and B (x2, y2) be the endpoints of the line, and let xmin, xmax and ymin, ymax be the boundary coordinates of the window. The line is invisible if any one of the following conditions holds (both endpoints lie on the same outside side of the window):

x1 > xmax and x2 > xmax
y1 > ymax and y2 > ymax
x1 < xmin and x2 < xmin
y1 < ymin and y2 < ymin

3. Clipping Case: If the line is neither completely visible nor completely invisible, it is a clipping candidate. First, the category of a line is found based on the nine regions shown below. Each of the nine regions is assigned a 4-bit code, where bit 1 indicates that the point is to the left of the window, bit 2 to the right, bit 3 below, and bit 4 above (these bits are used in Step 4 of the algorithm below). If both endpoints of the line have the region code 0000, the line is completely visible.
The central region has the code 0000, i.e., region 5 is the rectangular clipping window itself.

The following figure shows lines of various types:

Line AB is a visible case.
Lines OP and PQ are invisible cases.
Lines IJ, MN, and CD are clipping candidates.

Advantages of Cohen-Sutherland Line Clipping:

1. It calculates the end-point region codes very quickly, and accepts or rejects lines quickly.
2. It can clip pictures much larger than the screen size.

Algorithm of Cohen Sutherland Line Clipping:

Step 1: Calculate the region codes of both endpoints of the line.

Step 2: Perform the OR operation on these two region codes.

Step 3: If the OR operation gives 0000,
then
    the line is completely visible;
else
    perform the AND operation on both region codes:
    if AND ≠ 0000
        the line is invisible (rejected);
    else (AND = 0000)
        the line is the clipping case.

Step 4: If the line is a clipping case, find its intersection with the boundaries of the window.
m=(y2-y1)/(x2-x1)
(a) If bit 1 is "1", the line intersects the left boundary of the rectangular window:
y3=y1+m(X-x1)
where X = Xwmin, the minimum value of the X coordinate of the window.
(b) If bit 2 is "1", the line intersects the right boundary:
y3=y1+m(X-x1)
where X = Xwmax, the maximum value of the X coordinate of the window.
(c) If bit 3 is "1", the line intersects the bottom boundary:
x3=x1+(y-y1)/m
where y = Ywmin, the minimum value of the Y coordinate of the window.
(d) If bit 4 is "1", the line intersects the top boundary:
x3=x1+(y-y1)/m
where y = Ywmax, the maximum value of the Y coordinate of the window.
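
A minimal C sketch of the algorithm above, using the same bit convention as these notes (bit 1 = left, bit 2 = right, bit 3 = bottom, bit 4 = top); the window limits are passed in as a small struct, and the line in main is illustrative:

#include <stdio.h>

/* Region-code bits, following the convention used above:
   bit 1 = left, bit 2 = right, bit 3 = bottom, bit 4 = top. */
enum { LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8 };

typedef struct { double xmin, xmax, ymin, ymax; } Window;

static int region_code(double x, double y, Window w)
{
    int code = 0;
    if (x < w.xmin)      code |= LEFT;
    else if (x > w.xmax) code |= RIGHT;
    if (y < w.ymin)      code |= BOTTOM;
    else if (y > w.ymax) code |= TOP;
    return code;
}

/* Clip the segment A(xa, ya) - B(xb, yb) against window w.
   Returns 1 and updates the endpoints if some part is visible, otherwise 0. */
int cohen_sutherland_clip(double *xa, double *ya, double *xb, double *yb, Window w)
{
    int ca = region_code(*xa, *ya, w);
    int cb = region_code(*xb, *yb, w);

    for (;;) {
        if ((ca | cb) == 0) return 1;    /* OR  = 0000: trivially visible    */
        if ((ca & cb) != 0) return 0;    /* AND != 0000: trivially invisible */

        /* Clipping case: pick an endpoint that lies outside the window. */
        int out = ca ? ca : cb;
        double x, y;

        if (out & LEFT)        { x = w.xmin; y = *ya + (*yb - *ya) * (w.xmin - *xa) / (*xb - *xa); }
        else if (out & RIGHT)  { x = w.xmax; y = *ya + (*yb - *ya) * (w.xmax - *xa) / (*xb - *xa); }
        else if (out & BOTTOM) { y = w.ymin; x = *xa + (*xb - *xa) * (w.ymin - *ya) / (*yb - *ya); }
        else                   { y = w.ymax; x = *xa + (*xb - *xa) * (w.ymax - *ya) / (*yb - *ya); }

        if (out == ca) { *xa = x; *ya = y; ca = region_code(*xa, *ya, w); }
        else           { *xb = x; *yb = y; cb = region_code(*xb, *yb, w); }
    }
}

int main(void)
{
    Window w = {0, 10, 0, 10};                 /* illustrative clipping window   */
    double xa = -5, ya = 5, xb = 15, yb = 5;   /* illustrative line A-B          */
    if (cohen_sutherland_clip(&xa, &ya, &xb, &yb, w))
        printf("clipped to (%g, %g)-(%g, %g)\n", xa, ya, xb, yb);
    return 0;
}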
