18cs62 Mod 1
Applications of Computer Graphics:
a. Graphs and Charts
An early application of computer graphics was the display of simple data graphs, usually plotted on a character printer. Data plotting is still one of the most common graphics applications.
Graphs and charts are commonly used to summarize functional, statistical, mathematical, engineering, and economic data for research reports, managerial summaries, and other types of publications.
Typical examples of data plots are line graphs, bar charts, pie charts, surface graphs, contour plots, and other displays showing relationships between multiple parameters in two dimensions, three dimensions, or higher-dimensional spaces.
b. Computer-Aided Design
Wire-frame shapes are useful for quickly testing the performance of a vehicle or system.
c. Virtual-Reality Environments
With virtual-reality systems, designers and others can move about and interact with objects in various ways. Architectural designs can be examined by taking a simulated "walk" through the rooms or around the outside of buildings to better appreciate the overall effect of a particular design.
With a special glove, we can even “grasp” objects in a scene and turn them over or move
them from one place to another.
d. Data Visualizations
Producing graphical representations for scientific, engineering and medical data sets and
processes is another fairly new application of computer graphics, which is generally
referred to as scientific visualization. And the term business visualization is used in
connection with data sets related to commerce, industry and other nonscientific areas.
There are many different kinds of data sets and effective visualization schemes depend on
the characteristics of the data. A collection of data can contain scalar values, vectors or
higher-order tensors.
f. Computer Art
The picture is usually painted electronically on a graphics tablet using a stylus, which can
simulate different brush strokes, brush widths and colors.
Fine artists use a variety of other computer technologies to produce images. To create pictures, the artist uses a combination of 3D modeling packages, texture mapping, drawing programs, and CAD software.
Commercial art also uses these "painting" techniques for generating logos and other designs, page layouts combining text and graphics, TV advertising spots, and other applications.
A common graphics method employed in many television commercials is morphing, where
one object is transformed into another.
g. Entertainment
Television production, motion pictures, and music videos routinely use computer graphics methods.
Sometimes graphics images are combined with live actors and scenes, and sometimes the films are completely generated using computer rendering and animation techniques.
Some television programs also use animation techniques to combine computer generated
figures of people, animals, or cartoon characters with the actor in a scene or to transform
an actor’s face into another shape.
h. Image Processing
Although methods used in computer graphics and image processing overlap, the two areas are concerned with fundamentally different operations.
Image processing methods are used to improve picture quality, analyze images, or
recognize visual patterns for robotics applications.
Image processing methods are often used in computer graphics, and computer graphics
methods are frequently applied in image processing.
Medical applications also make extensive use of image-processing techniques for picture enhancements in tomography and in simulations of surgical operations.
It is also used in computed X-ray tomography (CT), positron emission tomography (PET), and computed axial tomography (CAT).
Each screen display area can contain a different process, showing graphical or nongraphical
information, and various methods can be used to activate a display window.
Using an interactive pointing device, such as a mouse, we can activate a display window on some systems by positioning the screen cursor within the window display area and pressing the left mouse button.
The primary output device in a graphics system is a video monitor.
Historically, the operation of most video monitors was based on the standard cathode-ray tube (CRT) design, but several other technologies exist.
In recent years, flat-panel displays have become significantly more popular due to their
reduced power consumption and thinner designs.
A beam of electrons, emitted by an electron gun, passes through focusing and deflection
systems that direct the beam toward specified positions on the phosphor-coated screen.
The phosphor then emits a small spot of light at each position contacted by the electron
beam and the light emitted by the phosphor fades very rapidly.
One way to maintain the screen picture is to store the picture information as a charge
distribution within the CRT in order to keep the phosphors activated.
The most common method now employed for maintaining phosphor glow is to redraw the
picture repeatedly by quickly directing the electron beam back over the same screen points.
This type of display is called a refresh CRT.
The frequency at which a picture is redrawn on the screen is referred to as the refresh rate.
The primary components of an electron gun in a CRT are the heated metal cathode and a
control grid.
The heat is supplied to the cathode by directing a current through a coil of wire, called the
filament, inside the cylindrical cathode structure.
This causes electrons to be “boiled off” the hot cathode surface.
Inside the CRT envelope, the free, negatively charged electrons are then accelerated toward
the phosphor coating by a high positive voltage.
Intensity of the electron beam is controlled by the voltage at the control grid.
Since the amount of light emitted by the phosphor coating depends on the number of
electrons striking the screen, the brightness of a display point is controlled by varying the
voltage on the control grid.
The focusing system in a CRT forces the electron beam to converge to a small cross section as it strikes the phosphor; this is accomplished with either electric or magnetic fields.
With electrostatic focusing, the electron beam is passed through a positively charged metal cylinder so that electrons along the center line of the cylinder are in an equilibrium position.
Deflection of the electron beam can be controlled with either electric or magnetic fields.
Cathode-ray tubes are commonly constructed with two pairs of magnetic-deflection coils
One pair is mounted on the top and bottom of the CRT neck, and the other pair is mounted
on opposite sides of the neck.
The magnetic field produced by each pair of coils results in a transverse deflection force that is perpendicular to both the direction of the magnetic field and the direction of travel of the electron beam.
Horizontal and vertical deflections are accomplished with these two pairs of coils.
Electrostatic deflection of the electron beam in a CRT
When electrostatic deflection is used, two pairs of parallel plates are mounted inside the
CRT envelope where, one pair of plates is mounted horizontally to control vertical
deflection, and the other pair is mounted vertically to control horizontal deflection.
Spots of light are produced on the screen by the transfer of the CRT beam energy to the
phosphor.
When the electrons in the beam collide with the phosphor coating, they are stopped and
their kinetic energy is absorbed by the phosphor.
Part of the beam energy is converted by friction into heat energy, and the remainder causes electrons in the phosphor atoms to move up to higher quantum-energy levels.
After a short time, the "excited" phosphor electrons begin dropping back to their stable ground state, giving up their extra energy as small quanta of light energy called photons.
What we see on the screen is the combined effect of all the electron light emissions: a glowing spot that quickly fades after all the excited phosphor electrons have returned to their ground energy level.
The frequency of the light emitted by the phosphor is proportional to the energy difference
between the excited quantum state and the ground state.
Lower-persistence phosphors require higher refresh rates to maintain a picture on the screen without flicker.
The maximum number of points that can be displayed without overlap on a CRT is referred to as the resolution.
Resolution of a CRT is dependent on the type of phosphor, the intensity to be displayed,
and the focusing and deflection systems.
High-resolution systems are often referred to as high-definition systems.
As it moves across each row, the beam intensity is turned on and off to create a pattern of
illuminated spots.
This scanning process is called refreshing. Each complete scanning of a screen is normally
called a frame.
The refreshing rate, called the frame rate, is normally 60 to 80 frames per second, or
described as 60 Hz to 80 Hz.
Picture definition is stored in a memory area called the frame buffer.
This frame buffer stores the intensity values for all the screen points. Each screen point is
called a pixel (picture element).
Another property of raster systems is the aspect ratio, which is defined as the number of pixel columns divided by the number of scan lines that can be displayed by the system.
Case 1: In case of black and white systems
On black and white systems, the frame buffer storing the values of the pixels is called a
bitmap.
Each entry in the bitmap is a single bit, which determines whether the intensity of the corresponding pixel is on (1) or off (0).
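As a rough check on frame-buffer storage, a minimal sketch follows; the 1280 x 1024 resolution and 24-bit color depth are example values, not taken from the text:

/* Sketch: frame-buffer storage for a 1-bit bitmap vs. a 24-bit color buffer. */
#include <stdio.h>

int main (void)
{
   long cols = 1280, rows = 1024;        /* pixel columns and scan lines        */
   long bitmapBits = cols * rows * 1;    /* 1 bit per pixel (black and white)   */
   long colorBits  = cols * rows * 24;   /* 24 bits per pixel (full color)      */

   printf ("Aspect ratio   : %.2f\n", (double) cols / rows);
   printf ("Bitmap storage : %ld KB\n", bitmapBits / (8 * 1024));
   printf ("Color storage  : %ld KB\n", colorBits / (8 * 1024));
   return 0;
}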
In a random-scan (vector) display, by contrast, the component lines of a picture can be drawn and refreshed in any specified order.
Raster-scan displays vs. random-scan displays:
Electron beam: In a raster-scan system, the electron beam is swept across the screen one row at a time, from top to bottom. In a random-scan system, the beam is directed only to those parts of the screen where a picture is to be drawn.
Resolution: Raster-scan systems have lower resolution, whereas random-scan systems produce smooth line drawings because the beam directly follows the line path.
Realistic display: The capability of a raster system to store intensity values for each pixel makes it well suited for the realistic display of shaded scenes. Random-scan systems are designed for line drawing and cannot display realistic shaded scenes.
Drawing an image: A raster system uses screen points (pixels) to draw an image, whereas a random-scan system uses mathematical functions to draw an image.
1) Beam-penetration technique
The color depends on how far the electron beam penetrates into the phosphor layer.
A beam of fast electrons penetrates more and excites the inner green layer, while slow electrons excite only the outer red layer.
At intermediate beam speeds we can produce combinations of red and green light, which yield two additional colors, orange and yellow.
The beam acceleration voltage controls the speed of the electrons and hence color of pixel.
Advantage:
It is a low-cost technique for producing color in random-scan monitors.
Disadvantages:
It can display only four colors.
The quality of pictures is not as good as with other techniques.
2) Shadow-mask technique
It produces a wider range of colors than the beam-penetration technique.
This technique is generally used in raster-scan displays, including color TV.
In this technique the CRT has three phosphor color dots at each pixel position.
One dot is for red, one for green, and one for blue light. This arrangement is commonly known as a dot triangle.
Here the CRT has three electron guns, one for each color dot, and a shadow-mask grid just behind the phosphor-coated screen.
The shadow mask grid consists of series of holes aligned with the phosphor dot pattern.
Three electron beams are deflected and focused as a group onto the shadow mask and when
they pass through a hole they excite a dot triangle.
In a dot triangle, the three phosphor dots are arranged so that each electron beam can activate only its corresponding color dot when it passes through the shadow mask.
An activated dot triangle appears as a small colored spot on the screen, whose color is the combination of the three small dots in the triangle.
By changing the intensity of the three electron beams we can obtain different colors in the
shadow mask CRT.
Since we can even write on some flat-panel displays, they are also available as pocket notepads.
We can separate flat panel display in two categories:
1. Emissive displays: Emissive displays (or emitters) are devices that convert electrical energy into light. Examples: plasma panels, thin-film electroluminescent displays, and light-emitting diodes.
2. Non-emissive displays: Non-emissive displays (or non-emitters) use optical effects to convert sunlight or light from some other source into graphics patterns. Example: LCD (liquid crystal display).
a. Plasma panels
A firing voltage applied to a pair of horizontal and vertical conductors causes the gas at the intersection of the two conductors to break down into a glowing plasma of electrons and ions.
Picture definition is stored in a refresh buffer, and the firing voltages are applied to refresh the pixel positions 60 times per second.
Alternating current methods are used to provide faster application of firing voltages and
thus brighter displays.
c. Light Emitting Diode (LED)
In this display, a matrix of multi-color light-emitting diodes is arranged to form the pixel positions, and the picture definition is stored in a refresh buffer.
As in the scan-line refreshing of a CRT, information is read from the refresh buffer and converted to voltage levels that are applied to the diodes to produce the light pattern on the display.
d. Liquid Crystal Display (LCD)
This non-emissive device produces a picture by passing polarized light from the surroundings or from an internal light source through a liquid-crystal material that can be aligned to either block or transmit the light.
The term liquid crystal refers to the fact that these compounds have a crystalline arrangement of molecules, yet they flow like a liquid.
The display consists of two glass plates, each containing a light polarizer aligned at right angles to the other, which sandwich the liquid-crystal material between them.
Rows of horizontal transparent conductors are built into one glass plate, and columns of vertical conductors are put into the other plate.
The intersection of two conductors defines a pixel position.
In the ON state polarized light passing through material is twisted so that it will pass through
the opposite polarizer.
In the OFF state it will reflect back towards source.
These vibrations are synchronized with the display of an object on a CRT so that each point
on the object is reflected from the mirror into a spatial position corresponding to the distance
of that point from a specified viewing location.
This allows us to walk around an object or scene and view it from different sides.
Interactive raster-graphics systems typically employ several processing units.
In addition to the central processing unit (CPU), a special-purpose processor, called the
video controller or display controller, is used to control the operation of the display device.
Organization of a simple raster system is shown in below Figure.
Here, the frame buffer can be anywhere in the system memory, and the video controller
accesses the frame buffer to refresh the screen.
In addition to the video controller, raster systems employ other processors as coprocessors
and accelerators to implement various graphics operations.
Cartesian reference frame:
Frame-buffer locations and the corresponding screen positions, are referenced in Cartesian
coordinates.
In an application (user) program, we use the commands within a graphics software package to set coordinate positions for displayed objects relative to the origin of the Cartesian reference frame.
The coordinate origin is referenced at the lower-left corner of the screen display area by the software commands, although we can typically set the origin at any convenient location for a particular application.
Working:
The figure shows a two-dimensional Cartesian reference frame with the origin at the lower-left screen corner.
The screen surface is then represented as the first quadrant of a two-dimensional system
with positive x and y values increasing from left to right and bottom of the screen to the top
respectively.
Pixel positions are then assigned integer x values that range from 0 to xmax across the
screen, left to right, and integer y values that vary from 0 to ymax, bottom to top.
Basic Video Controller Refresh Operations
The basic refresh operations of the video controller are as follows.
Two registers are used to store the coordinate values for the screen pixels.
Initially, the x register is set to 0 and the y register is set to the value for the top scan line.
The contents of the frame buffer at this pixel position are then retrieved and used to set the
intensity of the CRT beam.
Then the x register is incremented by 1, and the process is repeated for the next pixel on the
top scan line.
This procedure continues for each pixel along the top scan line.
After the last pixel on the top scan line has been processed, the x register is reset to 0 and
the y register is set to the value for the next scan line down from the top of the screen.
The procedure is repeated for each successive scan line.
After cycling through all pixels along the bottom scan line, the video controller resets the
registers to the first pixel position on the top scan line and the refresh process starts over
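A compact sketch of this register-driven refresh cycle; the helper routines getFrameBufferValue and setBeamIntensity are hypothetical names, not part of any real API:

/* Sketch of the basic video-controller refresh cycle described above. */
void refreshScreen (int xmax, int ymax)
{
   int x, y;
   for (y = ymax; y >= 0; y--) {        /* y register: start at the top scan line     */
      for (x = 0; x <= xmax; x++) {     /* x register: step across the scan line      */
         int value = getFrameBufferValue (x, y);   /* retrieve stored pixel value     */
         setBeamIntensity (value);                 /* set beam intensity for this pixel */
      }
   }
   /* After the bottom scan line, the controller resets to the top and starts over. */
}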
a. Speeding up pixel-position processing in the video controller:
Since the screen must be refreshed at a rate of at least 60 frames per second, the simple procedure described above may not be accommodated by RAM chips if the cycle time is too slow.
To speed up pixel processing, video controllers can retrieve multiple pixel values from the refresh buffer on each pass.
When the group of pixels has been processed, the next block of pixel values is retrieved from the frame buffer.
Advantages of video controller:
A video controller can be designed to perform a number of other operations.
For various applications, the video controller can retrieve pixel values from different
memory areas on different refresh cycles.
This provides a fast mechanism for generating real-time animations.
Another video-controller task is the transformation of blocks of pixels, so that screen areas
can be enlarged, reduced, or moved from one location to another during the refresh cycles.
In addition, the video controller often contains a lookup table, so that pixel values in the
frame buffer are used to access the lookup table. This provides a fast method for changing
screen intensity values.
Finally, some systems are designed to allow the video controller to mix the frame-buffer image with an input image from a television camera or other input device.
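A sketch of the lookup-table idea described above (table size and names are illustrative): the frame-buffer value indexes a table of displayable colors, so changing a table entry changes screen intensities without rewriting the frame buffer.

typedef struct { float r, g, b; } RGBColor;

RGBColor lookupTable[256];            /* one entry per possible frame-buffer value */

RGBColor pixelColor (unsigned char frameBufferValue)
{
   return lookupTable[frameBufferValue];   /* frame-buffer value indexes the table */
}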
The purpose of the display processor is to free the CPU from the graphics chores.
In addition to the system memory, a separate display-processor memory area can be
provided.
Scan conversion:
A major task of the display processor is digitizing a picture definition given in an application
program into a set of pixel values for storage in the frame buffer.
This digitization process is called scan conversion.
Example 1: displaying a line
Graphics commands specifying straight lines and other geometric objects are scan
converted into a set of discrete points, corresponding to screen pixel positions.
(Figure: scan-converting a straight-line segment.)
Example 2: displaying a character
Characters can be defined with rectangular pixel grids
The array size for character grids can vary from about 5 by 7 to 9 by 12 or more for higher-
quality displays.
A character grid is displayed by superimposing the rectangular grid pattern onto the frame buffer at a specified coordinate position.
Using outline:
For characters that are defined as outlines, the shapes are scan-converted into the frame
buffer by locating the pixel positions closest to the outline.
Display processors are also designed to perform a number of additional operations.
These functions include generating various line styles (dashed, dotted, or solid), displaying
color areas, and applying transformations to the objects in a scene.
Display processors are typically designed to interface with interactive input devices, such
as a mouse.
i) Run-length encoding:
One way to reduce storage requirements is to store each scan line as a set of number pairs.
The first number in each pair can be a reference to a color value, and the second number
can specify the number of adjacent pixels on the scan line that are to be displayed in that
color.
This technique, called run-length encoding, can result in a considerable saving in storage
space if a picture is to be constructed mostly with long runs of a single color each.
A similar approach can be taken when pixel colors change linearly.
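A minimal sketch of run-length encoding one scan line into (color, run-length) pairs as described above; the function and array names are illustrative:

/* Encode one scan line as (color, run-length) pairs. */
int encodeScanLine (const int *pixels, int width, int colors[], int runs[])
{
   int nPairs = 0, k = 0;
   while (k < width) {
      int color = pixels[k], run = 1;
      while (k + run < width && pixels[k + run] == color)
         run++;
      colors[nPairs] = color;   /* first number: reference to a color value            */
      runs[nPairs]   = run;     /* second number: adjacent pixels displayed in that color */
      nPairs++;
      k += run;
   }
   return nPairs;               /* long runs of a single color give large savings       */
}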
ii) Cell encoding:
Another approach is to encode the raster as a set of rectangular areas (cell encoding).
Disadvantages of encoding:
The disadvantages of encoding runs are that color changes are difficult to record and storage
requirements increase as the lengths of the runs decrease.
In addition, it is difficult for the display controller to process the raster when many short
runs are involved.
Moreover, the size of the frame buffer is no longer a major concern, because of sharp
declines in memory costs
1.4.3 Graphics workstations and viewing systems
Most graphics monitors today operate as raster-scan displays, and both CRT and flat panel
systems are in common use.
Graphics workstations range from small general-purpose computer systems to multi-monitor facilities, often with ultra-large viewing screens.
High-definition graphics systems, with resolutions up to 2560 by 2048, are commonly used in medical imaging, air-traffic control, simulation, and CAD.
Many high-end graphics workstations also include large viewing screens, often with
specialized features.
Multi-panel display screens are used in a variety of applications that require “wall-sized”
viewing areas. These systems are designed for presenting graphics displays at meetings,
conferences, conventions, trade shows, retail stores etc.
A multi-panel display can be used to show a large view of a single scene or several
individual images. Each panel in the system displays one section of the overall picture
A large, curved-screen system can be useful for viewing by a group of people studying a
particular graphics application.
A 360-degree paneled viewing system is used in the NASA control-tower simulator for training and for testing ways to solve air-traffic and runway problems at airports.
Keyboards:
An alphanumeric keyboard is used for entering text strings and for issuing certain commands with a single keystroke. Cursor-control keys are used for selecting a displayed object or a location by positioning the screen cursor.
Mouse Devices:
A mouse is a hand-held device, usually moved around on a flat surface to position the screen cursor. Wheels or rollers on the bottom of the mouse are used to record the amount and direction of movement.
Some mice use optical sensors, which detect movement across horizontal and vertical grid lines.
Since a mouse can be picked up and put down, it is used for making relative changes in the position of the screen cursor.
Most general purpose graphics systems now include a mouse and a keyboard as the primary
input devices.
Joysticks:
A joystick is a positioning device that uses a small vertical lever (stick) mounted on a base. It is used to steer the screen cursor around and to select screen positions with the stick movement.
A push or pull on the stick is measured with strain gauges and converted to movement of the screen cursor in the direction of the applied pressure.
Data Gloves:
A data glove can be used to grasp a virtual object. The glove is constructed with a series of sensors that detect hand and finger motions.
Input from the glove is used to position or manipulate objects in a virtual scene.
Digitizers:
A digitizer is a common device for drawing, painting, or selecting positions.
A graphics tablet is one type of digitizer; it is used to input two-dimensional coordinates by activating a hand cursor or stylus at selected positions on a flat surface.
A hand cursor contains cross hairs for sighting positions, while a stylus is a pencil-shaped device that is pointed at positions on the tablet.
Image Scanners:
Drawings, graphs, photographs, or text can be stored for computer processing with an image scanner by passing an optical scanning mechanism over the information to be stored.
Once we have the internal representation of a picture, we can apply various image-processing methods to modify the representation, and various editing operations can be performed on the stored documents.
Touch Panels:
Touch panels allow displayed objects or screen positions to be selected with the touch of a
finger.
Touch panel is used for the selection of processing options that are represented as a menu
of graphical icons.
An optical touch panel uses LEDs along one vertical edge and one horizontal edge of the frame.
Acoustical touch panels generate high-frequency sound waves in the horizontal and vertical directions across a glass plate.
Light Pens:
Light pens are pencil-shaped devices used to select positions by detecting the light coming
from points on the CRT screen.
To select positions in any screen area with a light pen, we must have some nonzero light intensity emitted from each pixel within that area.
Light pens sometimes give false readings due to background lighting in a room.
Voice Systems:
Speech recognizers are used with some graphics workstations as input devices for voice commands. The voice-system input can be used to initiate operations or to enter data.
A dictionary is set up by speaking the command words several times; the system then analyses each word and stores its pattern, and later voice input is matched against these stored patterns.
Computers on the Internet communicate using TCP/IP.
Resources such as graphics files are identified by URL (Uniform resource locator).
The World Wide Web provides a hypertext system that allows users to locate and view documents, audio, and graphics.
A URL is sometimes also called a universal resource locator.
The URL contains two parts: the protocol for transferring the document, and the server that contains the document.
First we define the shapes of individual objects, such as trees or furniture, in separate reference frames. These reference frames are called modeling coordinates or local coordinates.
Then we place the objects into appropriate locations within a scene reference frame called world coordinates.
After all parts of a scene have been specified, it is processed through various output-device reference frames for display. This process is called the viewing pipeline.
The scene is then stored in normalized coordinates, which range from −1 to 1 or from 0 to 1; normalized coordinates are also referred to as normalized device coordinates.
The coordinate systems for display devices are generally called device coordinates, or
screen coordinates.
NOTE: Geometric descriptions in modeling coordinates and world coordinates can be given in
floating-point or integer values.
Example: Figure briefly illustrates the sequence of coordinate transformations from
modeling coordinates to device coordinates for a display
Modeling transformations are used to construct a scene.
Viewing transformations are used to select a view of the scene, the type of projection to be
used and the location where the view is to be displayed.
Input functions are used to control and process the data flow from interactive devices such as a mouse, tablet, or joystick.
A graphics package also contains a number of housekeeping tasks; we can lump the functions for carrying out these tasks under the heading control operations.
Software Standards
The primary goal of standardized graphics software is portability.
In 1984, the Graphical Kernel System (GKS) was adopted as the first graphics software standard by the International Standards Organization (ISO).
The second software standard to be developed and approved by the standards organizations
was Programmer’s Hierarchical Interactive Graphics System (PHIGS).
An extension of PHIGS, called PHIGS+, was developed to provide 3D surface-rendering capabilities not available in PHIGS.
The graphics workstations from Silicon Graphics, Inc. (SGI), came with a set of routines
called GL (Graphics Library)
Basic OpenGL Syntax
Function names in the OpenGL basic library (also called the OpenGL core library) are prefixed with gl, and each component word within a function name has its first letter capitalized.
For example: glBegin, glClear, glCopyPixels, glPolygonMode.
Symbolic constants that are used with certain functions as parameters are all in capital letters, preceded by GL, with component words separated by underscores.
For example: GL_2D, GL_RGB, GL_CCW, GL_POLYGON, GL_AMBIENT_AND_DIFFUSE.
The OpenGL functions also expect specific data types. For example, an OpenGL function
parameter might expect a value that is specified as a 32-bit integer. But the size of an integer
specification can be different on different machines.
To indicate a specific data type, OpenGL uses special built-in data-type names, such as GLbyte, GLshort, GLint, GLfloat, GLdouble, and GLboolean.
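For instance, variable declarations using these type names might look like this (a brief illustration):

#include <GL/gl.h>

GLint     pointX  = 50;       /* 32-bit integer on every OpenGL implementation */
GLfloat   redComp = 1.0f;     /* single-precision floating point               */
GLdouble  scale   = 0.5;      /* double-precision floating point               */
GLboolean flag    = GL_TRUE;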
Related Libraries
In addition to OpenGL basic(core) library(prefixed with gl), there are a number of associated
libraries for handling special operations:-
1) OpenGL Utility(GLU):- Prefixed with “glu”. It provides routines for setting up
viewing and projection matrices, describing complex objects with line and polygon
approximations, displaying quadrics and B-splines using linear approximations, processing
the surface-rendering operations, and other complex tasks.
-Every OpenGL implementation includes the GLU library
2) Open Inventor: provides routines and predefined object shapes for interactive three-dimensional applications; these routines are written in C++.
3) Window-system libraries: To create graphics we need a display window. We cannot create the display window directly with the basic OpenGL functions, since the core library contains only device-independent graphics functions, while window-management operations are device-dependent. However, there are several window-system libraries that support OpenGL functions for a variety of machines.
Examples: Apple GL (AGL), Windows-to-OpenGL (WGL), Presentation Manager to OpenGL (PGL), and GLX.
4) OpenGL Utility Toolkit(GLUT):- provides a library of functions which acts as
interface for interacting with any device specific screen-windowing system, thus making
our program device-independent. The GLUT library functions are prefixed with “glut”.
Header Files
In all graphics programs, we will need to include the header file for the OpenGL core
library.
In windows to include OpenGL core libraries and GLU we can use the following header
files:-
#include <windows.h>   // Precedes the other header files when including the Microsoft Windows version of the OpenGL libraries.
#include <GL/gl.h>
#include <GL/glu.h>
The above lines can be replaced with a single GLUT header file, which ensures that gl.h and glu.h are included correctly:
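On most systems the replacement is the standard GLUT header (the exact path can differ by platform, e.g. GLUT/glut.h on macOS):

#include <GL/glut.h>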
Step 1: Initialization of GLUT
We first initialize GLUT with the statement glutInit (&argc, argv);
Step 2: Creating a display window
A display window is then created with glutCreateWindow, whose single argument is a character string for the display-window title.
Step 3: Specification of the display window
Then we need to specify what the display window is to contain.
For this, we create a picture using OpenGL functions and pass the picture definition to the
GLUT routine glutDisplayFunc, which assigns our picture to the display window.
Example: suppose we have the OpenGL code for describing a line segment in a procedure
called lineSegment.
Then the following function call passes the line-segment description to the display window:
glutDisplayFunc (lineSegment);
Step 4: one more GLUT function
But the display window is not yet on the screen.
We need one more GLUT function to complete the window-processing operations.
After execution of the following statement, all display windows that we have created,
including their graphic content, are now activated:
3
4
Module 1 Computer Graphics and OpenGL
glutMainLoop ( );
This function must be the last one in our program. It displays the initial graphics and puts
the program into an infinite loop that checks for input from devices such as a mouse or
keyboard.
Step 5: setting display-window parameters using additional GLUT functions
Although the display window that we created will be in some default location and size, we
can set these parameters using additional GLUT functions.
GLUT Function 1:
We use the glutInitWindowPosition function to give an initial location for the upper left
corner of the display window.
This position is specified in integer screen coordinates, whose origin is at the upper-left
corner of the screen.
GLUT Function 2:
After the display window is on the screen, we can reposition and resize it.
GLUT Function 3:
We can also set a number of other options for the display window, such as buffering and a
choice of color modes, with the glutInitDisplayMode function.
Arguments for this routine are assigned symbolic GLUT constants.
Example: the following command specifies that a single refresh buffer is to be used for the
display window and that we want to use the color mode which uses red, green, and blue
(RGB) components to select color values:
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
The background color for the display window is chosen with the glClearColor function, for example:
glClearColor (1.0, 1.0, 1.0, 0.0);
The first three arguments in this function set the red, green, and blue component colors to
the value 1.0, giving us a white background color for the display window.
If, instead of 1.0, we set each of the component colors to 0.0, we would get a black
background.
The fourth parameter in the glClearColor function is called the alpha value for the specified
color. One use for the alpha value is as a “blending” parameter
When we activate the OpenGL blending operations, alpha values can be used to determine
the resulting color for two overlapping objects.
An alpha value of 0.0 indicates a totally transparent object, and an alpha value of 1.0
indicates an opaque object.
For now, we will simply set alpha to 0.0.
Although the glClearColor command assigns a color to the display window, it does not put
the display window on the screen.
To get the assigned window color displayed, we need to invoke the following OpenGL
function:
glClear (GL_COLOR_BUFFER_BIT);
The argument GL_COLOR_BUFFER_BIT is an OpenGL symbolic constant specifying that it is the bit values in the color buffer (refresh buffer) that are to be set to the values indicated in the glClearColor function. (OpenGL has several different kinds of buffers that can be manipulated.)
In addition to the background color, we can choose a color for our displayed objects with the glColor3f function; here the object color is set to a dark green with red = 0.0, green = 0.4, and blue = 0.2.
Example program
For our first program, we simply display a two-dimensional line segment.
To do this, we need to tell OpenGL how we want to “project” our picture onto the display
window because generating a two-dimensional picture is treated by OpenGL as a special
case of three-dimensional viewing.
So, although we only want to produce a very simple two-dimensional line, OpenGL
processes our picture through the full three-dimensional viewing operations.
We can set the projection type (mode) and other viewing parameters that we need with the
following two functions:
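A minimal sketch of these two calls; the world-coordinate extents passed to gluOrtho2D are an assumption, chosen so that the example line endpoints used below fall inside the display area:

glMatrixMode (GL_PROJECTION);           // Select the projection mode.
gluOrtho2D (0.0, 200.0, 0.0, 150.0);    // Assumed x range 0 to 200, y range 0 to 150.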
In the procedure lineSegment, we specify that a line segment is to be drawn with Cartesian endpoint coordinates (180, 15) and (10, 145):
glBegin (GL_LINES);
   glVertex2i (180, 15);
   glVertex2i (10, 145);
glEnd ( );
Now we are ready to put all the pieces together:
The following OpenGL program is organized into three functions.
init: We place all initializations and related one-time parameter settings in function init.
lineSegment: Our geometric description of the "picture" that we want to display is in function lineSegment, which is the function that will be referenced by the GLUT function glutDisplayFunc.
main: The main function contains the GLUT functions for setting up the display window and getting our line segment onto the screen.
glFlush: This is simply a routine to force execution of our OpenGL functions, which are
stored by computer systems in buffers in different locations,depending on how OpenGL is
implemented.
The procedure lineSegment that we set up to describe our picture is referred to as a display
callback function.
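A sketch of the opening portion of the listing (header include plus the init and lineSegment procedures), reconstructed from the fragments quoted earlier (white background, line color 0.0, 0.4, 0.2, endpoints (180, 15) and (10, 145)); the gluOrtho2D extents are assumed, and the surviving lines of the listing continue immediately below:

#include <GL/glut.h>                      // Also includes gl.h and glu.h on most systems.

void init (void)
{
   glClearColor (1.0, 1.0, 1.0, 0.0);     // Set display-window color to white.
   glMatrixMode (GL_PROJECTION);          // Set projection parameters.
   gluOrtho2D (0.0, 200.0, 0.0, 150.0);   // Assumed world-coordinate extents.
}

void lineSegment (void)
{
   glClear (GL_COLOR_BUFFER_BIT);         // Clear display window.
   glColor3f (0.0, 0.4, 0.2);             // Set line-segment color to dark green.
   glBegin (GL_LINES);
      glVertex2i (180, 15);               // Specify line-segment geometry.
      glVertex2i (10, 145);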
   glEnd ( );
   glFlush ( );   // Process all OpenGL routines as quickly as possible.
}

void main (int argc, char** argv)
{
   glutInit (&argc, argv);                          // Initialize GLUT.
   glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);    // Set display mode.
   glutInitWindowPosition (50, 100);                // Set top-left display-window position.
   glutInitWindowSize (400, 300);                   // Set display-window width and height.
   glutCreateWindow ("An Example OpenGL Program");  // Create display window.

   init ( );                        // Execute initialization procedure.
   glutDisplayFunc (lineSegment);   // Send graphics to display window.
   glutMainLoop ( );                // Display everything and wait.
}
The viewing routines identify visible surfaces and map the objects to frame-buffer positions, which are then displayed on the video monitor.
The scan-conversion algorithm stores info about the scene, such as color values, at the
appropriate locations in the frame buffer, and then the scene is displayed on the output
device.
Screen co-ordinates:
Locations on a video monitor are referenced in integer screen coordinates, which
correspond to the integer pixel positions in the frame buffer.
Scan-line algorithms for the graphics primitives use the coordinate descriptions to
determine the locations of pixels
Example: given the endpoint coordinates for a line segment, a display algorithm must
calculate the positions for those pixels that lie along the line path between the endpoints.
Since a pixel position occupies a finite area of the screen, the finite size of a pixel must be
taken into account by the implementation algorithms.
For the present, we assume that each integer screen position references the centre of a pixel
area.
Once pixel positions have been identified the color values must be stored in the frame buffer
Absolute and Relative Coordinate Specifications
Absolute coordinates:
So far, the coordinate references that we have discussed are stated as absolute coordinate
values.
This means that the values specified are the actual positions within the coordinate system
in use.
Relative coordinates:
However, some graphics packages also allow positions to be specified using relative
coordinates.
This method is useful for various graphics applications, such as producing drawings with
pen plotters, artist’s drawing and painting systems, and graphics packages for publishing
and printing applications.
Taking this approach, we can specify a coordinate position as an offset from the last position
that was referenced (called the current position).
We can then designate one or more graphics primitives for display using the coordinate
reference specified in the gluOrtho2D statement.
If the coordinate extents of a primitive are within the coordinate range of the display
window, all of the primitive will be displayed.
Otherwise, only those parts of the primitive within the display-window coordinate limits
will be shown.
Also, when we set up the geometry describing a picture, all positions for the OpenGL
primitives must be given in absolute coordinates, with respect to the reference frame defined
in the gluOrtho2D function.
where:
glBegin indicates the beginning of the primitive that is to be displayed, and
glEnd indicates the end of the primitive.
GL_POINTS
Each vertex is displayed as a point.
The size of the point would be of at least one pixel.
Then this coordinate position, along with other geometric descriptions we may have in our
scene, is passed to the viewing routines.
Unless we specify other attribute values, OpenGL primitives are displayed with a default
size and color.
The default color for primitives is white, and the default point size is equal to the size of a single screen pixel.
Syntax:
Case 1:
glBegin (GL_POINTS);
   glVertex2i (50, 100);
   glVertex2i (75, 150);
   glVertex2i (100, 200);
glEnd ( );
Case 2:
Alternatively, we could specify the coordinate values for the preceding points in arrays such as
int point1 [ ] = {50, 100};
int point2 [ ] = {75, 150};
int point3 [ ] = {100, 200};
and call the OpenGL functions for plotting the three points as
glBegin (GL_POINTS);
   glVertex2iv (point1);
   glVertex2iv (point2);
   glVertex2iv (point3);
glEnd ( );
Case 3:
Here we specify two point positions in a three-dimensional world reference frame, giving the coordinates as explicit floating-point values:
glBegin (GL_POINTS);
glVertex3f (-78.05, 909.72, 14.60);
glVertex3f (261.91, -5200.67, 188.33);
glEnd ( );
Case 1: GL_LINES:
Successive pairs of vertices are considered as endpoints and are connected to form individual line segments.
Note that successive segments usually are disconnected, because the vertices are processed on a pair-wise basis.
Thus we obtain one line segment between the first and second coordinate positions and another line segment between the third and fourth positions.
If the number of specified endpoints is odd, the last coordinate position is ignored.
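For comparison with the cases below, which use the same vertex arrays p1 through p5, a sketch of the corresponding GL_LINES call might be:

glBegin (GL_LINES);
   glVertex2iv (p1);   // p1-p2 form one line segment.
   glVertex2iv (p2);
   glVertex2iv (p3);   // p3-p4 form a second, disconnected segment.
   glVertex2iv (p4);
   glVertex2iv (p5);   // Odd number of endpoints: p5 is ignored.
glEnd ( );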
Case 2: GL_LINE_STRIP:
Successive vertices are connected using line segments. However, the final vertex is not connected to the initial vertex.
glBegin (GL_LINE_STRIP);
   glVertex2iv (p1);
   glVertex2iv (p2);
   glVertex2iv (p3);
   glVertex2iv (p4);
   glVertex2iv (p5);
glEnd ( );
Case 3: GL_LINE_LOOP:
Successive vertices are connected using line segments to form a closed path or loop; that is, the final vertex is connected back to the initial vertex.
glBegin (GL_LINE_LOOP);
   glVertex2iv (p1);
   glVertex2iv (p2);
   glVertex2iv (p3);
   glVertex2iv (p4);
   glVertex2iv (p5);
glEnd ( );
For a raster system: Point size is an integer multiple of the pixel size, so that a large point
is displayed as a square block of pixels
If we activate the antialiasing features of OpenGL, the size of a displayed block of pixels
will be modified to smooth the edges.
The default value for point size is 1.0.
Example program:
Attribute functions may be listed inside or outside of a glBegin/glEnd pair.
Example: the following code segment plots three points in varying colors and sizes.
The first is a standard-size red point, the second is a double-size green point, and the third
is a triple-size blue point:
Ex:
glColor3f (1.0, 0.0, 0.0);
glBegin (GL_POINTS);
glVertex2i (50, 100);
glPointSize (2.0);
glColor3f (0.0, 1.0, 0.0);
glVertex2i (75, 150);
glPointSize (3.0);
glColor3f (0.0, 0.0, 1.0);
glVertex2i (100, 200);
glEnd ( );
1.17 OpenGL Line-Attribute Functions
In OpenGL, a straight-line segment can be displayed with three attribute settings: line color, line width, and line style.
OpenGL provides a function for setting the width of a line and another function for
specifying a line style, such as a dashed or dotted line.
That is, the magnitudes of the horizontal and vertical separations of the line endpoints, ∆x and ∆y, are compared to determine whether to generate a thick line using vertical pixel spans or horizontal pixel spans.
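The width-setting routine itself is not shown at this point in the notes; in OpenGL it is glLineWidth, which also appears in the stippling example later in this section. A minimal usage sketch:

glLineWidth (3.0);   // Request that subsequent lines be drawn 3 pixels wide; the nearest supported width is used.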
Pattern:
Parameter pattern is used to reference a 16-bit integer that describes how the line should be
displayed.
A 1 bit in the pattern denotes an "on" pixel position, and a 0 bit indicates an "off" pixel position.
The pattern is applied to the pixels along the line path starting with the low-order bits in
the pattern.
The default pattern is 0xFFFF (each bit position has a value of 1),which produces a solid
line.
repeatFactor
Integer parameter repeatFactor specifies how many times each bit in the pattern is to be
repeated before the next bit in the pattern is applied.
The default repeat value is 1.
Polyline:
With a polyline, a specified line-style pattern is not restarted at the beginning of each
segment.
It is applied continuously across all the segments, starting at the first endpoint of the
polyline and ending at the final endpoint for the last segment in the series.
Example:
For line style, suppose parameter pattern is assigned the hexadecimal representation
0x00FF and the repeat factor is 1.
This would display a dashed line with eight pixels in each dash and eight pixel positions
that are “off” (an eight-pixel space) between two dashes.
Also, since low order bits are applied first, a line begins with an eight-pixel dash starting
at the first endpoint.
This dash is followed by an eight-pixel space, then another eight-pixel dash, and so forth,
until the second endpoint position is reached.
VTUPulse.com
If we forget to include this enable function, solid lines are displayed; that is, the default
At
pattern 0xFFFF is used to display line segments.
any time, we can turn off the line-pattern feature with glDisable
(GL_LINE_STIPPLE);
This replaces the current line-style pattern with the default pattern (solid lines).
Example Code:
typedef struct { float x, y; } wcPt2D;

wcPt2D dataPts [5];

void linePlot (wcPt2D dataPts [5])
{
   int k;
   glBegin (GL_LINE_STRIP);
      for (k = 0; k < 5; k++)
         glVertex2f (dataPts [k].x, dataPts [k].y);
   glEnd ( );
   glFlush ( );   // Flush after glEnd; glFlush is not valid inside a glBegin/glEnd pair.
}

/* Invoke a procedure here to draw coordinate axes. */

glEnable (GL_LINE_STIPPLE);

/* Input first set of (x, y) data values. */
glLineStipple (1, 0x1C47);   // Plot a dash-dot, standard-width polyline.
linePlot (dataPts);

/* Input second set of (x, y) data values. */
glLineStipple (1, 0x00FF);   // Plot a dashed, double-width polyline.
glLineWidth (2.0);
linePlot (dataPts);

/* Input third set of (x, y) data values. */
glLineStipple (1, 0x0101);   // Plot a dotted, triple-width polyline.
glLineWidth (3.0);
linePlot (dataPts);

glDisable (GL_LINE_STIPPLE);
1.18 Curve Attributes
Parameters for curve attributes are the same as those for straight-line segments.
We can display curves with varying colors, widths, dot-dash patterns, and available pen or
brush options.
Methods for adapting curve-drawing algorithms to accommodate attribute selections are
similar to those for line drawing.
Raster curves of various widths can be displayed using the method of horizontal or vertical
pixel spans.
Case 1: Where the magnitude of the curve slope |m| <= 1.0, we plot vertical spans.
Case 2: When the slope magnitude |m| > 1.0, we plot horizontal spans.
Method 1: Using the circle symmetry property, we generate the circle path with vertical spans in the octant from x = 0 to x = y, and then reflect pixel positions about the line y = x to obtain the remainder of the curve.
Method 2: Another method for displaying thick curves is to fill in the area between two Parallel
curve paths, whose separation distance is equal to the desired width. We could do this using the
specified curve path as one boundary and setting up the second boundary either inside or outside
the original curve path. This approach, however, shifts the original curve path either inward or
outward, depending on which direction we choose for the second boundary.
Method 3:The pixel masks discussed for implementing line-style options could also be used in
raster curve algorithms to generate dashed or dotted patterns
Method 4: Pen (or brush) displays of curves are generated using the same techniques discussed
for straight-line segments.
Method 5: Painting and drawing programs allow pictures to be constructed interactively by using
a pointing device, such as a stylus and a graphics tablet, to sketch various curve shapes.
1.19 Line Drawing Algorithm
A straight-line segment in a scene is defined by coordinate positions for the endpoints of
the segment.
To display the line on a raster monitor, the graphics system must first project the endpoints to integer screen coordinates and determine the nearest pixel positions along the line path between the two endpoints; the line color is then loaded into the frame buffer at the corresponding pixel coordinates.
The Cartesian slope-intercept equation for a straight line is
y = m * x + b ------------>(1)
with m as the slope of the line and b as the y intercept.
Given that the two endpoints of a line segment are specified at positions (x0,y0) and (xend,
yend) ,as shown in fig.
We determine values for the slope m and the y intercept b with the following equations:
m = (yend - y0)/(xend - x0) ------------>(2)
b = y0 - m * x0 ------------>(3)
Algorithms for displaying straight lines are based on the line equation (1) and the calculations given in equations (2) and (3).
For a given x interval δx along a line, we can compute the corresponding y interval δy from equation (2) as
δy = m * δx ------------>(4)
Similarly, we can obtain the x interval δx corresponding to a specified δy as
δx = δy/m ------------>(5)
These equations form the basis for determining deflection voltages in analog displays, such
as vector-scan system, where arbitrarily small changes in deflection voltage are possible.
For lines with slope magnitudes:
|m| < 1: δx can be set proportional to a small horizontal deflection voltage, with the corresponding vertical deflection voltage set proportional to δy from equation (4).
|m| > 1: δy can be set proportional to a small vertical deflection voltage, with the corresponding horizontal deflection voltage set proportional to δx from equation (5).
|m| = 1: δx = δy, and the horizontal and vertical deflection voltages are equal.
A line is sampled at unit intervals in one coordinate and the corresponding integer values
nearest the line path are determined for the other coordinate
The DDA algorithm has three cases, starting from the slope equation m = (yk+1 - yk)/(xk+1 - xk).
Case 1:
If m < 1, x is incremented in unit intervals, i.e., xk+1 = xk + 1. Then
m = (yk+1 - yk)/(xk+1 - xk) = yk+1 - yk, so
yk+1 = yk + m ------------>(1)
where k takes integer values starting from 0 for the first point and increases by 1 until the final endpoint is reached. Since m can be any real number between 0.0 and 1.0, the calculated y values must be rounded to the nearest integer pixel position.
Case 2:
If m > 1, y is incremented in unit intervals, i.e., yk+1 = yk + 1. Then
m(xk+1 - xk) = 1, so
xk+1 = xk + (1/m) ------------>(2)
Case 3:
Equations (1) and (2) are based on the assumption that lines are to be processed from the left endpoint to the right endpoint. If this processing is reversed, so that the starting endpoint is at the right, then either we have δx = -1 and
yk+1 = yk - m ------------>(3)
or (when the slope is greater than 1) we have δy = -1 with
xk+1 = xk - (1/m) ------------>(4)
Similar calculations are carried out using equations (1) through (4) to determine pixel positions along a line with negative slope. Thus, if the absolute value of the slope is less than 1 and the starting endpoint is at the left, we set δx = 1 and calculate y values with equation (1).
when starting endpoint is at the right(for the same slope),we set δx=-1 and obtain y
positions using eq(3).
This algorithm is summarized in the following procedure, which accepts as input two
integer screen positions for the endpoints of a line segment.
Case m < 1 (x is incremented by 1, and yk+1 = yk + m):
Initially, with (x0, y0) as the starting point, we set x = x0 and y = y0.
o Illuminate pixel (x, round(y))
o x1 = x + 1, y1 = y + m
o Illuminate pixel (x1, round(y1))
o x2 = x1 + 1, y2 = y1 + m
o Illuminate pixel (x2, round(y2))
o ... and so on, until the final endpoint is reached.
Case m > 1 (y is incremented by 1, and xk+1 = xk + 1/m):
Initially, with (x0, y0) as the starting point, we set x = x0 and y = y0.
o Illuminate pixel (round(x), y)
o x1 = x + 1/m, y1 = y + 1
o Illuminate pixel (round(x1), y1)
o x2 = x1 + 1/m, y2 = y1 + 1
o Illuminate pixel (round(x2), y2)
o ... and so on, until the final endpoint is reached.
The DDA algorithm is a faster method for calculating pixel positions than one that directly implements the line equation.
It eliminates the multiplication by making use of raster characteristics, so that appropriate
increments are applied in the x or y directions to step from one pixel position to another
along the line path.
The accumulation of round-off error in successive additions of the floating-point increment, however, can cause the calculated pixel positions to drift away from the true line path for long line segments. Furthermore, the rounding operations and floating-point arithmetic in this procedure are still time-consuming.
We can improve the performance of the DDA algorithm by separating the increments m and 1/m into integer and fractional parts so that all calculations are reduced to integer operations.
#include <stdlib.h>
#include <math.h>

inline int round (const float a) { return int (a + 0.5); }

/* setPixel is assumed to store the current color at the given frame-buffer position. */
void lineDDA (int x0, int y0, int xEnd, int yEnd)
{
   int dx = xEnd - x0, dy = yEnd - y0, steps, k;
   float xIncrement, yIncrement, x = x0, y = y0;

   if (fabs (dx) > fabs (dy))
      steps = fabs (dx);
   else
      steps = fabs (dy);
   xIncrement = float (dx) / float (steps);
   yIncrement = float (dy) / float (steps);

   setPixel (round (x), round (y));
   for (k = 0; k < steps; k++) {
      x += xIncrement;
      y += yIncrement;
      setPixel (round (x), round (y));
   }
}
Bresenham’s Algorithm:
It is an efficient raster line-generating algorithm that uses only incremental integer calculations.
To illustrate Bresenham's approach, we first consider the scan-conversion process for lines with positive slope less than 1.0.
Pixel positions along a line path are then determined by sampling at unit x intervals.
Starting from the left endpoint (x0, y0) of a given line, we step to each successive column
(x position) and plot the pixel whose scan-line y value is closest to the line path.
3. Calculate the constants ∆x, ∆y, 2∆y, and 2∆y − 2∆x, and obtain the starting value for the decision parameter as
p0 = 2∆y − ∆x
4. At each xk along the line, starting at k = 0, perform the following test:
If pk < 0, the next point to plot is (xk + 1, yk) and
pk+1 = pk + 2∆y
Otherwise, the next point to plot is (xk + 1, yk + 1) and
pk+1 = pk + 2∆y − 2∆x
5. Repeat step 4 ∆x − 1 more times.
Note:
If |m| > 1.0, then
p0 = 2∆x − ∆y
and:
If pk < 0, the next point to plot is (xk, yk + 1) and
pk+1 = pk + 2∆x
Otherwise, the next point to plot is (xk + 1, yk + 1) and
pk+1 = pk + 2∆x − 2∆y
Code:
#include <stdlib.h>
#include <math.h>

/* Bresenham line-drawing procedure for |m| < 1.0. */
void lineBres (int x0, int y0, int xEnd, int yEnd)
{
   int dx = fabs (xEnd - x0), dy = fabs (yEnd - y0);
   int p = 2 * dy - dx;
   int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);
   int x, y;

   /* Determine which endpoint to use as start position. */
   if (x0 > xEnd) {
      x = xEnd;
      y = yEnd;
      xEnd = x0;
   }
   else {
      x = x0;
      y = y0;
   }
   setPixel (x, y);

   while (x < xEnd) {
      x++;
      if (p < 0)
         p += twoDy;
      else {
         y++;
         p += twoDyMinusDx;
      }
      setPixel (x, y);
   }
}
Properties of Circles
A circle is defined as the set of points that are all at a given distance r from a center position
(xc , yc ).
For any circle point (x, y), this distance relationship is expressed by the Pythagorean theorem in Cartesian coordinates as
(x - xc)^2 + (y - yc)^2 = r^2
We could use this equation to calculate the position of points on a circle circumference by stepping along the x axis in unit steps from xc - r to xc + r and calculating the corresponding y values at each position as
y = yc ± sqrt(r^2 - (x - xc)^2)
One problem with this approach is that it involves considerable computation at each step.
Moreover, the spacing between plotted pixel positions is not uniform.
We could adjust the spacing by interchanging x and y (stepping through y values and
calculating x values) whenever the absolute value of the slope of the circle is greater than
1; but this simply increases the computation and processing required by the algorithm.
Another way to eliminate the unequal spacing is to calculate points along the circular boundary using polar coordinates r and θ.
Expressing the circle equation in parametric polar form yields the pair of equations
x = xc + r cos θ
y = yc + r sin θ
To summarize, with the circle function defined as fcirc(x, y) = x^2 + y^2 - r^2 (for a circle centered at the origin), the relative position of any point (x, y) can be determined by checking the sign of the circle function as follows:
fcirc(x, y) < 0 if (x, y) is inside the circle boundary
fcirc(x, y) = 0 if (x, y) is on the circle boundary
fcirc(x, y) > 0 if (x, y) is outside the circle boundary
Eight way symmetry
The shape of the circle is similar in each quadrant.
Therefore, if we determine the curve positions in the first quadrant, we can generate the circle positions in the second quadrant of the xy plane, because the two circle sections are symmetric with respect to the y axis.
The circle sections in the third and fourth quadrants can be obtained from the sections in the first and second quadrants by considering symmetry about the x axis.
Considering a circle centered at the origin, if the point (x, y) is on the circle, then we can compute seven other points on the circle, as shown in the figure above.
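A sketch of plotting all eight symmetric positions from one computed point (x, y) on a circle centered at (xc, yc); setPixel is the same assumed frame-buffer routine used in the line-drawing code above:

void circlePlotPoints (int xc, int yc, int x, int y)
{
   setPixel (xc + x, yc + y);
   setPixel (xc - x, yc + y);
   setPixel (xc + x, yc - y);
   setPixel (xc - x, yc - y);
   setPixel (xc + y, yc + x);   /* reflections about the line y = x */
   setPixel (xc - y, yc + x);
   setPixel (xc + y, yc - x);
   setPixel (xc - y, yc - x);
}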
Our decision parameter is the circle function evaluated at the midpoint between these two candidate pixels:
pk = fcirc(xk + 1, yk - 1/2) = (xk + 1)^2 + (yk - 1/2)^2 - r^2
The initial decision parameter is obtained by evaluating the circle function at the start position (x0, y0) = (0, r):
p0 = fcirc(1, r - 1/2) = 5/4 - r
or, rounding to an integer, p0 ≈ 1 - r.
3. At each xk position, starting at k = 0, perform the following test:
If pk <0, the next point along the circle centered on (0, 0) is (xk+1, yk ) and
pk+1 = pk + 2xk+1 + 1
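A sketch of the complete midpoint loop built from these steps; circlePlotPoints is the eight-way symmetry routine sketched earlier, and the update used when pk >= 0, namely pk+1 = pk + 2xk+1 + 1 - 2yk+1, is the standard form, assumed here because the remaining steps are cut off in these notes:

void circleMidpoint (int xc, int yc, int r)
{
   int x = 0, y = r;
   int p = 1 - r;                      /* initial decision parameter (integer form of 5/4 - r) */

   circlePlotPoints (xc, yc, x, y);    /* plot the start position (0, r) in all octants */
   while (x < y) {
      x++;
      if (p < 0)
         p += 2 * x + 1;               /* midpoint inside: next point is (x+1, y)    */
      else {
         y--;
         p += 2 * (x - y) + 1;         /* midpoint outside: next point is (x+1, y-1) */
      }
      circlePlotPoints (xc, yc, x, y);
   }
}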