Computer Graphics and Image Processing Lab
SEMESTER - IV (CBCS)
Published by
Director
Institute of Distance and Open Learning,
University of Mumbai, Vidyanagari, Mumbai - 400 098.
Module I
1. Introduction to Graphics
2. Demonstration of Simple Graphic Inbuilt Function
Module II
3. Output Primitives & Its Algorithms
Module III
4. Output Primitives & Its Algorithms
Module IV
5. Output Primitives & Its Algorithms
Unit V
6. Output Primitives & Its Algorithm
Module VI
7. Output Primitives & Its Algorithms
Module VII
8. 2D Geometric Transformations & Clipping
Module VIII
9. 2D Geometric Transformations & Clipping
Module IX
10. Implementation of 3D Transformations (Only Coordinates Calculation)
Unit X
11. Output Primitives & Its Algorithm
Module XI
12. Introduction to Animation
Module XII
13. Image Enhancement Transformation
14. Image Enhancement Transformation
15. Image Enhancement Transformation
*****
Syllabus
*****
MODULE I
1
INTRODUCTION TO GRAPHICS
Unit Structure
1.0 Objectives
1.1 Introduction
1.2 Summary
1.3 References
1.4 Unit End Exercises
1.0 OBJECTIVES
Graphics are defined as any sketch, drawing, or special network that pictorially represents some meaningful information. Computer graphics is used wherever a set of images needs to be manipulated, or an image has to be created and drawn on the computer in the form of pixels. Computer graphics is used in digital photography, film, entertainment, electronic gadgets, and many other core technologies. It is a vast subject and area in the field of computer science, covering UI design, rendering, geometric modelling, animation, and much more. Computer graphics is commonly abbreviated as CG. Several tools are used for implementing computer graphics: the basic one is the <graphics.h> header file in Turbo C, Unity can be used for advanced work, and even OpenGL can be used for its implementation. The term was coined around 1960 by the researchers Verne Hudson and William Fetter of Boeing.
1.1 INTRODUCTION
Types of Computer Graphics:
● Raster Graphics: In raster graphics, pixels are used to draw an image. Such an image is also known as a bitmap image, in which the picture is broken into a grid of small pixels; basically, a bitmap is a large number of pixels taken together.
● Vector Graphics: In vector graphics, mathematical formulae are used to draw different types of shapes, lines, objects, and so on.
Applications:
● Computer-aided design for engineering and architectural systems: used in the design of electrical, automobile, electro-mechanical, mechanical, and electronic devices, for example gears and bolts.
● Computer Art: MS Paint.
● Presentation Graphics: used to summarize financial, statistical, scientific or economic data. For example, bar charts and line charts.
● Entertainment: used in motion pictures, music videos, and television gaming.
● Education and training: used to understand the operations of complex systems. It is also used for specialized systems such as simulators for training captains, pilots and so on.
● Visualization: to study trends and patterns. For example, analyzing satellite photos of the earth.
1.2 SUMMARY
Pixel Coordinates:
A digital image is made up of rows and columns of pixels. A pixel in such
an image can be specified by saying which column and which row
contains it. In terms of coordinates, a pixel can be identified by a pair of
integers giving the column number and the row number. For example, the
pixel with coordinates (3,5) would lie in column number 3 and row
number 5. Conventionally, columns are numbered from left to right,
starting with zero. Most graphics systems, including the ones we will
study in this chapter, number rows from top to bottom, starting from zero.
Some, including OpenGL, number the rows from bottom to top instead.
Note in particular that the pixel that is identified by a pair of coordinates (x,y) depends on the choice of coordinate system. You always need to
know what coordinate system is in use before you know what point you
are talking about.
Row and column numbers identify a pixel, not a point. A pixel contains
many points; mathematically, it contains an infinite number of points. The
goal of computer graphics is not really to color pixels—it is to create and
manipulate images. In some ideal sense, an image should be defined by
specifying a color for each point, not just for each pixel. Pixels are an
approximation. If we imagine that there is a true, ideal image that we want
to display, then any image that we display by coloring pixels is an
approximation. This has many implications.
Note that antialiasing does not give a perfect image, but it can reduce the
"jaggies" that are caused by aliasing (at least when it is viewed on a
normal scale).
There are other issues involved in mapping real-number coordinates to
pixels. For example, which point in a pixel should correspond to integer-
valued coordinates such as (3,5)? The center of the pixel? One of the
corners of the pixel? In general, we think of the numbers as referring to
the top-left corner of the pixel. Another way of thinking about this is to
say that integer coordinates refer to the lines between pixels, rather than to
the pixels themselves. But that still doesn't determine exactly which pixels
are affected when a geometric shape is drawn. For example, here are two
lines drawn using HTML canvas graphics, shown greatly magnified. The
lines were specified to be colored black with a one-pixel line width.
The top line was drawn from the point (100,100) to the point (120,100). In canvas graphics, integer coordinates correspond to the lines between pixels, but when a one-pixel line is drawn, it extends one-half pixel on either side of the infinitely thin geometric line. So for the top line, the line as it is drawn lies half in one row of pixels and half in another row. The graphics system, which uses antialiasing, rendered the line by coloring both rows of pixels gray. The bottom line was drawn from the point (100.5,100.5) to (120.5,100.5). In this case, the line lies exactly along one
line of pixels, which gets colored black. The gray pixels at the ends of the
bottom line have to do with the fact that the line only extends halfway into
the pixels at its endpoints. Other graphics systems might render the same
lines differently.
All this is complicated further by the fact that pixels aren't what they used
to be. Pixels today are smaller! The resolution of a display device can be
measured in terms of the number of pixels per inch on the display, a
quantity referred to as PPI (pixels per inch) or sometimes DPI (dots per
inch). Early screens tended to have resolutions of somewhere close to 72
PPI. At that resolution, and at a typical viewing distance, individual pixels
are clearly visible. For a while, it seemed like most displays had about 100
pixels per inch, but high resolution displays today can have 200, 300 or
even 400 pixels per inch. At the highest resolutions, individual pixels can
no longer be distinguished.
The fact that pixels come in such a range of sizes is a problem if we use
coordinate systems based on pixels. An image created assuming that there
are 100 pixels per inch will look tiny on a 400 PPI display. A one-pixel-
wide line looks good at 100 PPI, but at 400 PPI, a one-pixel-wide line is
probably too thin.
In fact, in many graphics systems, "pixel" doesn't really refer to the size of
a physical pixel. Instead, it is just another unit of measure, which is set by
the system to be something appropriate. (On a desktop system, a pixel is
usually about one one-hundredth of an inch. On a smart phone, which is
usually viewed from a closer distance, the value might be closer to 1/160
inch. Furthermore, the meaning of a pixel as a unit of measure can change
when, for example, the user applies a magnification to a web page.)
Pixels cause problems that have not been completely solved. Fortunately,
they are less of a problem for vector graphics, which is mostly what we
will use in this book. For vector graphics, pixels only become an issue
during rasterization, the step in which a vector image is converted into
pixels for display. The vector image itself can be created using any
convenient coordinate system. It represents an idealized, resolution-
independent image. A rasterized image is an approximation of that ideal
image, but how to do the approximation can be left to the display
hardware.
To specify the coordinate system on a rectangle, you just have to specify
the horizontal coordinates for the left and right edges of the rectangle and
the vertical coordinates for the top and bottom. Let's call these values left,
right, top, and bottom. Often, they are thought of as xmin, xmax, ymin,
and ymax, but there is no reason to assume that, for example, top is less
than bottom. We might want a coordinate system in which the vertical
coordinate increases from bottom to top instead of from top to bottom. In
that case, top will correspond to the maximum y-value instead of the
minimum value.
To allow programmers to specify the coordinate system that they would
like to use, it would be good to have a subroutine such as
setCoordinateSystem(left,right,bottom,top)
The graphics system would then be responsible for automatically
transforming the coordinates from the specified coordinate system into
pixel coordinates. Such a subroutine might not be available, so it's useful
to see how the transformation is done by hand. Let's consider the general
case. Given coordinates for a point in one coordinate system, we want to
find the coordinates for the same point in a second coordinate system.
(Remember that a coordinate system is just a way of assigning numbers to
points. It's the points that are real!) Suppose that the horizontal and
vertical limits are oldLeft, oldRight, oldTop, and oldBottom for the first
coordinate system, and are newLeft, newRight, newTop, and newBottom
for the second. Suppose that a point has coordinates (oldX,oldY) in the
first coordinate system. We want to find the coordinates (newX,newY) of
the point in the second coordinate system.
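The mapping itself is a simple proportional (linear) transformation. As a minimal sketch (not from the text; the function name mapCoord is illustrative, and the old/new limits follow the naming above), each coordinate can be converted like this:

#include <stdio.h>

/* Map a value from the range [oldMin, oldMax] to the range [newMin, newMax]. */
double mapCoord(double oldV, double oldMin, double oldMax, double newMin, double newMax)
{
    /* fraction of the way that oldV lies across the old range, re-applied to the new range */
    return newMin + (oldV - oldMin) / (oldMax - oldMin) * (newMax - newMin);
}

int main()
{
    /* newX and newY for a point (oldX, oldY), using illustrative limits */
    double oldX = 3.0, oldY = 5.0;
    double newX = mapCoord(oldX, /*oldLeft*/ 0, /*oldRight*/ 10, /*newLeft*/ -1, /*newRight*/ 1);
    double newY = mapCoord(oldY, /*oldTop*/ 0, /*oldBottom*/ 10, /*newTop*/ 1, /*newBottom*/ -1);
    printf("(%f, %f)\n", newX, newY);
    return 0;
}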
Example:
Aim: Write a program to draw a circle in C graphics
Code:
The header file graphics.h contains circle() function which draws a circle
with center at (x, y) and given radius.
Syntax:
circle(x, y, radius);
where,
(x, y) is center of the circle.
'radius' is the Radius of the circle.
Examples:
Input: x = 250, y = 200, radius = 50
Output:
Input: x = 300, y = 150, radius = 90
Output:
23. closegraph();
24. return 0;
25. }
Output:
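Only the closing lines of the circle listing survive above. A minimal, self-contained sketch of such a program, assuming a Turbo C / WinBGI style graphics.h environment (the BGI path argument may need to be adjusted), could look like this:

#include <graphics.h>
#include <conio.h>

int main()
{
    /* DETECT asks initgraph to auto-detect the graphics driver */
    int gd = DETECT, gm;
    /* initialize the graphics system; the empty path assumes the BGI files are in the current directory */
    initgraph(&gd, &gm, "");
    /* draw a circle with centre (250, 200) and radius 50, as in the first example input above */
    circle(250, 200, 50);
    getch();        /* wait for a key press */
    closegraph();   /* close graphics mode and free allocated memory */
    return 0;
}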
Example:
Aim: Write a code to draw a line in C graphics
graphics.h library is used to include and facilitate graphical operations in
program. graphics.h functions can be used to draw different shapes,
display text in different fonts, change colors and many more. Using
functions of graphics.h you can make graphics programs, animations,
projects and games. You can draw circles, lines, rectangles, bars and many
other geometrical figures. You can change their colors using the available
functions and fill them.
Examples:
For line 1, Input: x1 = 150, y1 = 150, x2 = 450, y2 = 150
For line 2, Input: x1 = 150, y1 = 200, x2 = 450, y2 = 200
For line 3, Input: x1 = 150, y1 = 250, x2 = 450, y2 = 250
Output:
Explanation: The header file graphics.h contains the line() function, which is described below:
Declaration: void line(int x1, int y1, int x2, int y2);
The line function is used to draw a line from a point (x1,y1) to a point (x2,y2), i.e. (x1,y1) and (x2,y2) are the end points of the line. The code given below draws a line.
Code:
1. // C++ Implementation for drawing line
2. #include <graphics.h>
3. // driver code
4. int main()
5. {
6. // gm is Graphics mode which is a computer display
7. // mode that generates image using pixels.
8. // DETECT is a macro defined in "graphics.h" header file
9. int gd = DETECT, gm;
10. // initgraph initializes the graphics system
11. // by loading a graphics driver from disk
12. initgraph(&gd, &gm, "");
13. // line for x1, y1, x2, y2
14. line(150, 150, 450, 150);
15. // line for x1, y1, x2, y2
16. line(150, 200, 450, 200);
17. // line for x1, y1, x2, y2
18. line(150, 250, 450, 250);
19. getch();
20. // closegraph function closes the graphics
21. // mode and deallocates all memory allocated
22. // by graphics system .
23. closegraph();
24. }
Output:
1.3 REFERENCES
1] Introduction to Computer Graphics: A Practical Learning Approach, by Fabio Ganovelli, Massimiliano Corsini, Sumanta Pattanaik, Marco Di Benedetto.
2] Computer Graphics: Principles and Practice in C (2nd Edition), by James D. Foley, Andries van Dam, Steven K. Feiner, John F. Hughes.
*****
2
DEMONSTRATION OF SIMPLE GRAPHIC
INBUILT FUNCTION
Unit Structure
2.0 Objectives
2.1 Introduction
2.2 Summary
2.3 References
2.4 Unit End Exercises
2.0 OBJECTIVES
C++ graphics functions are used to create different shapes in different colors. The graphics functions require a graphics monitor (nowadays almost all computers have graphics monitors) and a graphics card such as VGA, SVGA or EGA. A colour monitor is recommended for viewing graphics in colour.
Display Mode:
The output of a program can be displayed on the screen in two modes.
These modes are:
1. Text Mode
2. Graphics Mode
C++ Text Mode:
In text mode, the screen is normally divided into 80 columns and 25 rows. The topmost row is 0 and the bottom-most row is 24. Similarly, the leftmost column is 0 and the rightmost column is 79. In text mode, only text can be displayed; images and graphics objects cannot be displayed.
2.1 INTRODUCTION
In graphics mode, the screen is divided into small dots called pixels. For example, in the VGA monitor, the screen is divided into 480 rows and 640 columns of dots. Thus, the VGA monitor screen is divided into 640×480 pixels. The number of dots per inch is called the resolution of the screen. The dots are very close to each other; the more pixels there are, the clearer the graphics.
The monitor types (display adapters) and their respective resolutions are given below.
Adapter   Resolution   Colours
VGA       640×480      16
Driver:
Represents the graphics driver installed on the computer. It may be an
integer variable or an integer constant identifier, e.g. CGA, EGA, SVGA,
etc.
The graphics driver can also be automatically detected by using the keyword "DETECT". Letting the compiler detect the graphics driver is known as auto-detect.
If the driver is to be automatically detected, the variable driver is declared
as of integer type and DETECT value is assigned to it as shown below.
int driver, mode;
driver = DETECT;
This statement must be placed before “initgraph” function. When the
above statement is executed, the computer automatically detects the
graphic driver and the graphics mode.
Mode:
Represents output resolution on the computer screen. The normal mode
for VGA is VGAHI. It gives the highest resolution.
If the driver is auto-detected, then specifying the mode is optional. The computer
automatically detects the driver as well as the mode.
&
represents the addresses of constant numerical identifiers of driver and
mode. If constants (e.g., VGA, VGAHI) are used, then “&” operator is not
used as shown below:
initgraph (VGA, VGAHI, “path”);
Path:
Represents the path of the graphics drivers. It is the directory on the disk where the BGI files are located. Suppose the BGI files are stored in "C:\TC\BGI"; then the complete path is written as:
initgraph (VGA, VGAHI, "C:\\TC\\BGI");
Use of the double backslash "\\" is to be noted: one backslash is used as the escape character and the other for the directory path. If the BGI files are in the current directory, then the path is written as:
initgraph (VGA, VGAHI, "");
1 #include<graphics.h>
2 main()
3 {
4 int d, m;
5 d= DETECT;
6 initgraph(&d, &m, "c:\\tc\\bgi");
7 }
In the above example, the BGI files must be in the specified directory, i.e., in "c:\tc\bgi". If the BGI files are in the directory path "c:\tc", then the above statement is written as:
initgraph(&d, &m, “C:\\TC”);
The “cleardevice” Function:
The “cleardevice” function is used to clear the screen in graphics mode. It
is similar to the "clrscr" function that is used to clear the screen in text mode. Its
syntax is:
cleardevice();
string:
Represents the characters that are to be printed on the screen. It may be a
string variable or string constant. The string constant is enclosed in
double-quotes.
Example of how to use the cleardevice, closegraph and outtext functions and print "Electronic Clinic" in C++ graphics mode:
1 #include<graphics.h>
2 #include<conio.h>
3 main()
4 {
5 int d, m;
6 d=DETECT;
7 initgraph (&d, &m, "");
8 cleardevice();
9 outtext("electronic clinic");
10 getch();
11 closegraph();
12 }
The “moveto” Function:
The “moveto” function is used to move the current cursor position to a
specified location on the screen where the output is to be printed. It is
similar to “gotoxy” function used in text mode. Its syntax is:
moveto (x, y);
Where:
x
Represents the x-coordinate of the screen. It is the horizontal distance in
pixels from the left of the screen. It may be an unsigned int type value or
variable. For VGA, its value is from 0 to 639.
y
represents the y-coordinate of the screen. It is the vertical distance in pixels
from the top of the screen. It may be an unsigned int type value or
variable. For VGA, its value is from 0 to 479.
Example of how to use moveto function using C++ graphics.
1 #include<graphics.h>
2 #include<conio.h>
3 main()
4 {
5 int d,m;
6 d= DETECT;
7 initgraph(&d, &m, "");
8 cleardevice();
9 moveto(400,200);
10 outtext("Electronic Clinic");
11 getch();
12 closegraph();
13 }
The “outtextxy” Function:
The "outtextxy" function is similar to the "outtext" function, but it is used to
print text on the screen at a specified location. This function serves the
purpose of both the “moveto” and “outtext” functions. Its syntax is:
outtextxy (x, y, string);
where:
x
represents the x-coordinate of the screen. It is the horizontal distance in
pixels from the left of the screen. It may be unsigned int type value or
variable. For VGA, its value is from 0 to 639.
Y
represents the y-coordinate of the screen. It is the vertical distance in
pixels from the top of the screen. It may be unsigned int type value or
variable. For VGA, its value is from 0 to 479.
String
represents the string of characters that is to be printed on the computer
screen. It may be a string variable or a string constant. The string constant
is enclosed in double quotes.
Examples of using outtextxy in C++ graphics.
1 #include<graphics.h>
2 #include<conio.h>
3 main()
4 {
5 int d,m;
6 d= DETECT;
7 initgraph(&d, &m, "");
8 cleardevice();
9 outtextxy(100,200, "electronic clinic");
10 getch();
11 closegraph();
12 }
The "settextstyle" Function:
The “settextstyle” function is used to define the text style in graphics
mode. The text style includes the font type, font size and text direction.
The syntax of this function is given as: settextstyle (style, dir, size);
All the three parameters are of int type. These may be int type values or
variables.
Where:
Style:
specifies the font style. Its value range is from 0 to 10.
Dir:
Specifies the direction of the text in which it is to be displayed. Its value is
from 0 to 1. It may be a numerical constant identifier: HORIZ_DIR (for horizontal direction) or VERT_DIR (for vertical direction).
Size:
Specifies the font size of the text. Its value is from 0 to 72.
Example of settextstyle:
1 #include<graphics.h>
2 #include<conio.h>
3 main()
4 {
5 int d,m,c;
6 d= DETECT;
7 initgraph(&d, &m, "");
8 cleardevice();
9 for(c=0; c<=10; c++)
10 {
11 settextstyle(c,0,0);
12 outtextxy(100,20+c*20, "electronic clinic");
13 }
14 getch();
15 closegraph();
16 }
2.2 SUMMARY
The “setcolor” Function:
The "setcolor" function is used to define the colour of the objects and the text in
graphics mode. Its syntax is: setcolor (co);
where:
co
Represents the color. It may be an int type value or variable. For VGA, its
value is from 0 to 15. It may also be a numerical constant identifier, e.g.
BLUE, GREEN, RED etc.
The “setbkcolor” Function
The “setbkcolor” function is used in graphics mode to define the
background color of the screen. Its syntax is: setbkcolor(co);
Where:
co
Specifies the colour. It may be an int type value or variable. For VGA, its value is from 0 to 15. It may also be a numerical constant identifier, e.g. BLUE, GREEN, RED, etc.
Example of how to use the setcolor and setbkcolor functions and print "Electronic Clinic" in C++ graphics mode:
1 #include<graphics.h>
2 #include<conio.h>
3 main()
4 {
5 int d,m,co;
6 d= DETECT;
7 initgraph(&d, &m, "");
8 cleardevice();
9 for(co=0; co<=15; co++)
10 {
11 setbkcolor(co);
12 setcolor(co+1);
13 settextstyle(0,0,2);
14 outtextxy(100,10+co*20, "electronic clinic");
15 outtextxy(200, 200, "press any key....");
16 getch();
17 }
18 closegraph();
19 }
Creating Objects in C++ Graphics Mode:
Different objects, e.g. lines, circles, rectangles and many other shapes are
created in graphics mode using various built-in functions. Following are
the functions that are commonly used to create graphics objects:
The “circle” Function
The “circle” function is used to draw a circle on the screen. Its syntax is:
circle(x, y, radius);
All the three parameters are of int type. These may be int type values or
variables.
Where
x&y
Specifies the center point of the circle. These are the x- coordinate and y-
coordinate of the center of the circle on the screen.
Radius
Specifies the radius of the circle.
Example of how to draw a circle using the circle function in C++ graphics mode:
1 #include<graphics.h>
2 #include<conio.h>
3 main()
4 {
5 int d,m,r,c;
6 d= DETECT;
7 initgraph(&d, &m, "");
8 cleardevice();
9 for(c=1; c<= 15; c++)
10 {
11 setcolor(c);
12 circle(300,200,c*10);
13 }
14 getch();
15 closegraph();
16 }
The "arc" Function:
The "arc" function is used to draw an arc on the screen. Its syntax is:
arc(x, y, stangle, endangle, radius);
where x & y specify the centre point of the arc, and:
Stangle:
Specifies the starting angle of the arc in degrees.
Endangle:
Specifies the ending angle of the arc in degrees.
Radius:
Specifies the radius of the arc.
Note:
The arc function can also be used to draw a circle by giving the starting
angle 0 and ending angle 360. Similarly, it can also be used to draw line
by giving the same values for starting and ending angles.
Example on how to use arc in C++
1 #include<graphics.h>
2 #include<conio.h>
3 main()
4 {
5 int d,m,c;
6 d= DETECT;
7 initgraph(&d, &m, "");
8 cleardevice();
9 for(c=1; c<=15; c++)
10 {
11 setcolor(c);
12 arc(300,200,45,145,c*10);
13 }
14 getch();
15 closegraph();
16
17 }
2.3 REFERENCES
1] Introduction to Computer Graphics: A Practical Learning Approach, by Fabio Ganovelli, Massimiliano Corsini, Sumanta Pattanaik, Marco Di Benedetto.
2] Computer Graphics: Principles and Practice in C (2nd Edition), by James D. Foley, Andries van Dam, Steven K. Feiner, John F. Hughes.
*****
MODULE II
3
OUTPUT PRIMITIVES & ITS
ALGORITHMS
Unit Structure
3.0 Objectives
3.1 Introduction
3.2 Scan Conversion Algorithms
3.3 Line drawing algorithms
3.3.1 Digital Differential Analyser (DDA) Line Drawing Algorithm
3.3.2 Bresenham’s Line Drawing Algorithm
3.4 Summary
3.5 Unit End Exercise
3.6 References for Further Reading
3.0 OBJECTIVES
The objective of scan conversion is to determine, from the intersections of rows & columns, the areas called pixels that must be painted to display any object.
3.1 INTRODUCTION
It is a field of computer science that refers to the creation, storage,
manipulation & drawing of pictures in digital form. It also manipulates
visual content, which helps in manipulating images & objects in two & three dimensions.
3.3.1 Digital Differential Analyser (DDA) Line Drawing Algorithm:
It is an incremental scan conversion method to determine points on a line.
Algorithm:
Step1: Input the coordinates of the two end points A(x1,y1) & B(x2,y2)
for the line AB, such that A and B are not equal.
Step2: Calculate dx=(x2-x1) and dy=(y2-y1)
Step3: Calculate the length (L)
If abs(x2-x1) ≥ abs(y2-y1) then
L= abs(x2-x1)
Else
L= abs(y2-y1)
Step4: Calculate the increment factors
Δx = dx / L
Δy = dy / L
Step5: Initialize
xnew = x1 + 0.5 * Sign(dx)
ynew = y1 + 0.5 * Sign(dy)
Plot (Integer(xnew), Integer(ynew))
Step6: For i = 1 to L, repeat
xnew = xnew + Δx
ynew = ynew + Δy
Plot (Integer(xnew), Integer(ynew))
Next i
Step7: Stop
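A minimal C sketch of the steps above, printing the chosen pixels instead of plotting them so that it runs without graphics.h (the end points are those of Q.1 below):

#include <stdio.h>
#include <stdlib.h>

int main()
{
    int x1 = 0, y1 = 0, x2 = 8, y2 = 4;                   /* Step 1: end points A and B */
    int dx = x2 - x1, dy = y2 - y1, i;                    /* Step 2 */
    int L = abs(dx) >= abs(dy) ? abs(dx) : abs(dy);       /* Step 3: length */
    double xinc = (double)dx / L, yinc = (double)dy / L;  /* Step 4: increment factors */
    double x = x1 + 0.5 * (dx >= 0 ? 1 : -1);             /* Step 5: add rounding offset */
    double y = y1 + 0.5 * (dy >= 0 ? 1 : -1);
    printf("Plot (%d,%d)\n", (int)x, (int)y);
    for (i = 1; i <= L; i++) {                            /* Step 6: walk along the line */
        x = x + xinc;
        y = y + yinc;
        printf("Plot (%d,%d)\n", (int)x, (int)y);
    }
    return 0;
}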
Solved Example:
Q.1. Consider a line AB with A(0,0) and B(8,4) apply a simple DDA
algorithm to calculate the pixels on this line.
Solution:
1. A(0,0) B(8,4) x1=0 y1=0 x2=8 y2=4
2. dx = 8-0 = 8, dy = 4-0 = 4
3. L=8, dx>dy
4. Δx = dx / L = 8/8 = 1
Δy = dy / L = 4/8 = 0.5
5. xnew = x1+ 0.5 = 0+0.5 =0.5
ynew = y1+0.5 = 0+0.5 = 0.5
Plot (0,0)
6. i=1
while (1 ≤ 8)
{
xnew = 0.5 + 1 =1.5
ynew= 0.5+0.5 =1.0
Plot (1,1)
i=i+1
}
while (2 ≤ 8)
{
xnew = 1.5 + 1 = 2.5
ynew= 1.0+0.5 =1.5
Plot (2,1)
i=i+1
}
while (3 ≤ 8)
{
xnew = 2.5 + 1 =3.5
ynew= 1.5+0.5 =2.0
Plot (3,2)
i=i+1
}
while (4 ≤ 8)
{
xnew = 3.5 + 1 =4.5
ynew= 2.0+0.5 =2.5
Plot (4,2)
i=i+1
}
while (5 ≤ 8)
{
xnew = 4.5 + 1 =5.5
ynew= 2.5+0.5 =3.0
Plot (5,3)
i=i+1
}
while (6 ≤ 8)
{
xnew = 5.5 + 1 =6.5
ynew= 3.0+0.5 =3.5
Plot (6,3)
i=i+1
}
while (7 ≤ 8)
{
xnew = 6.5 + 1 =7.5
ynew= 3.5+0.5 =4.0
Plot (7,4)
i=i+1
}
while (8 ≤ 8)
{
xnew = 7.5 + 1 =8.5
ynew= 4.0+0.5 =4.5
Plot (8,4)
i=i+1
}
i    Plot     xnew    ynew
-    (0,0)    0.5     0.5
1    (1,1)    1.5     1.0
2    (2,1)    2.5     1.5
3    (3,2)    3.5     2.0
4    (4,2)    4.5     2.5
5    (5,3)    5.5     3.0
6    (6,3)    6.5     3.5
7    (7,4)    7.5     4.0
8    (8,4)    8.5     4.5
Plot of the Line AB
Q.2. Use DDA Line drawing algorithm draw a line AB for the endpoints
A (1,1) and B(5,3)
Solution:
1. A(1,1) and B(5,3), x1=1 y1=1 x2= 5 y2=3
2. dx=5-1 = 4 , dy =3-1 =2
3. L= 4, dx>dy
4. Δx = dx / L = 4/4 = 1, Δy = dy / L = 2/4 = 0.5
5. xnew = 1+0.5 =1.5
ynew = 1+0.5 =1.5
Plot(1,1)
6. i=1
while (1 ≤ 4)
{
xnew = 1.5 + 1 = 2.5
ynew = 1.5 + 0.5 = 2.0
Plot(2,2)
i=i+1
}
while (2 ≤ 4)
{
xnew = 2.5 + 1 = 3.5
ynew = 2.0 + 0.5 = 2.5
Plot(3,2)
i=i+1
}
while (3 ≤ 4)
{
xnew = 3.5 + 1 = 4.5
ynew=2.5+0.5 =3.0
Plot(4,3)
i=i+1
}
while (4 ≤ 4)
{
xnew = 4.5 + 1 = 5.5
ynew=3.0+0.5 =3.5
Plot(5,3)
i=i+1
}
Q.3. Consider a line AB with A (2,3) and B (6,8). Apply a simple DDA
algorithm and calculate the pixels on the line
Solution:
1. A(2,3) and B(6,8) x1=2, y1 =3 x2=6 y2 =8
2. dx = x2-x1 = 6-2 =4
dy = y2-y1 = 8-3 =5
3. L=5
4. Δx = dx / L = 4/5 = 0.8
Δy = dy / L = 5/5 = 1
3.3.2 Bresenham's Line Drawing Algorithm:
Algorithm:
Step1: Initialize the end points of the line AB with A(x1,y1) and
B(x2,y2). The two end points are assumed to be distinct.
Step2: Calculate dx and dy such that:
dx=x2-x1
dy=y2-y1
Step3: Initialize error term (e)
e= 2*dy-dx
xnew=x1
ynew=y1
Step4: Determine the first pixel on the line and update the error term
For i=1 to dx
Plot(Integer xnew, Integer ynew)
While(e≥0)
{
ynew=ynew+1
e=e-2*dx
}
xnew=xnew+1
e=e+2*dy
Next i
End for loop
Step5: Stop
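A minimal C sketch of the steps above for a line with slope between 0 and 1, printing the pixels instead of plotting them (the end points are those of Q.1 below):

#include <stdio.h>

int main()
{
    int x1 = 5, y1 = 5, x2 = 13, y2 = 9;   /* Step 1: end points A and B */
    int dx = x2 - x1, dy = y2 - y1;        /* Step 2 */
    int e = 2 * dy - dx;                   /* Step 3: initial error term */
    int x = x1, y = y1, i;
    for (i = 1; i <= dx; i++) {            /* Step 4 */
        printf("Plot (%d,%d)\n", x, y);
        while (e >= 0) {                   /* move up while the error term is non-negative */
            y = y + 1;
            e = e - 2 * dx;
        }
        x = x + 1;
        e = e + 2 * dy;
    }
    printf("Plot (%d,%d)\n", x, y);        /* the end point B(13,9) as well */
    return 0;
}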
Solved Examples:
Q.1. Consider a line AB with coordinates A(5,5) and B(13,9). Determine
the line segment using Bresenham’s line drawing algorithm.
Solution:
1. A(x1,y1) = A(5,5)
B(x2,y2)=B(13,9)
2. dx=x2-x1 =13-5 =8
dy = y2-y1 = 9-5 =4
3. e=2*dy-dx = 2*4 -8 = 8-8 =0
xnew=x1 ynew=y1
xnew=5 ynew=5
4. for i=1 to 8
Plot(5,5)
While (e≥0)
{
ynew=ynew+1=5+1=6
e=e-2*dx=0-2*8 = -16
}
xnew=xnew+1 =5+1=6
e=e+2*dy = -16+2*4 = -16+8 = -8
for i=2 to 8
plot(6,6)
while(e≥0)
{
}
xnew=xnew+1 =6+1=7
e=e+2*dy =-8+2*4 = 0
for i=3 to 8
plot(7,6)
while(e≥0)
{
ynew=ynew+1=6+1=7
e=e-2*dx = 0-2*8 = - 16
}
xnew=xnew+1=7+1=8
e=e+2*dy = -16+2*4 = -16+8 = -8
for i=4 to 8
plot(8,7)
while(e≥0)
{
}
xnew=xnew+1 = 8+1 =9
e=e+2*dy = -8+2*4 = -8+8 =0
for i=5 to 8
plot(9,7)
while(e≥0)
{
ynew=ynew +1=7+1 =8
e=e-2*dx = 0-2*8 =-16
}
xnew=xnew+1=9+1 =10
e=e+2*dy =-16+2*4 = -8
for i=6 to 8
plot(10,8)
while(e≥0)
{
}
xnew=xnew+1 = 10+1 = 11
e=e+2*dy = -8+2*4 = 0
for i=7 to 8
Plot(11,8)
while(e≥0)
{
ynew=ynew+1 = 8+1 =9
e=e-2*dx = 0-2*8 = -16
}
xnew=xnew+1 = 11+1 =12
e=e+2*dy = -16+2*4 = -8
for i=8 to 8
plot(12,9)
while(e≥0)
{
}
xnew=xnew+1 =12+1 =13
e=e+2*dy =-8+2*4 =0
Q.2. Consider the line coordinates A(0,0) and B(8,4). Determine the line segment using Bresenham's algorithm.
Solution:
1. A(0,0) and B(8,4) x1=0 y1=0 and x2 =8 y2 = 4
2. dx = x2-x1 = 8-0 = 8, dy = y2-y1 = 4-0 = 4
3. e=2*dy-dx = 2*4-8 =8-8 =0
xnew=x1=0 ynew= y1=0
4. for (i=1) to 8
Plot(0,0)
while(e≥0)
{
ynew=0+1=1
e = 0-2*8 = - 16
}
xnew=0+1=1
e=-16+2*4 = -8
for i=2 to 8
plot(1,1)
while(e≥0)
{
}
xnew=1+1 =2
e= -8+2*4 =0
for i=3 to 8
Plot (2,1)
while(e≥0)
{
ynew=1+1=2
e=0-2*8 = -16
}
xnew =2+1 =3
e= -16+2*4 = -8
for i=4 to 8
Plot(3,2)
while (e≥0)
{
}
xnew = 3+1=4
e=-8+2*4 = 0
for i=5 to 8
plot(4,2)
while(e≥0)
{
ynew=2+1 =3
e=0-2*8 =-16
}
xnew =4+1=5
e=-16+2*4 = -8
for i=6 to 8
plot(5,3)
while(e≥0)
{
}
xnew=5+1=6
e= -8+2*4 =0
for i=7 to 8
plot(6,3)
while(e≥0)
{
ynew= 3+1=4
e=0-2*8 = -16
}
xnew=6+1 =7
e=-16+2*4 =-8
For i=8 to 8
Plot(7,4)
while(e≥0)
{
}
xnew=7+1=8
e= -8+2*4 =0
3.4 SUMMARY
In this chapter, line drawing algorithms are used to describe how a line can be drawn pixel by pixel. Two important line drawing algorithms are explained: the DDA and Bresenham's line drawing algorithms.
*****
MODULE III
4
OUTPUT PRIMITIVES & ITS
ALGORITHMS
Unit Structure
4.0 Introduction
4.1 Implementation of circle drawing Midpoint circle
4.2 Application of Circle drawing algorithm
4.3 Unit End Exercise
4.0 INTRODUCTION
What is scan conversion and how it is utilized for drawing a Circle:
Converting a continuous graphical object into a group of discrete pixels is called scan conversion.
In the process of scan-converting a circle, the circle is divided into eight equal parts; one part is called an octant, and if one part is generated it is easy to replicate the other seven parts, so computing one octant is enough to determine the complete circle.
Properties of Circle:
1. Circle function: f(x, y) = x² + y² – r².
2. Any point (x, y) on the boundary of the circle with radius 'r' satisfies the equation f(x, y) = 0.
3. If the point is in the interior of the circle, the circle function is
negative.
4. If the point is outside of the circle, the circle function is positive.
Fig. 3.4.2 For pixel (x,y) all possible pixels in 8 (eight) octants.
The circle algorithm is based on the circle equation, given by
(x – xc)² + (y – yc)² = r²  (where xc & yc are the coordinates of the centre of the circle).
However, the above equation is non-linear, so square-root evaluations would be required to compute pixel distances from the circular path. This algorithm avoids these square-root calculations by comparing the squares of the pixel separation distances.
A method for direct distance comparison is to test the halfway position
between two pixels to determine if this midpoint is inside or outside the
circle boundary. This method is more easily applied to other conics, and
for an integer circle radius, the midpoint approach generates the same pixel positions.
How does the Mid-Point Circle algorithm differ from the above-defined Bresenham's algorithm?
As per the above Bresenham's algorithm, we deal with integers, so it occupies less memory and less execution time, and it is reliable, accurate and efficient as it avoids using the round function or floating-point calculations. The Mid-Point Circle algorithm likewise avoids square-root or trigonometric calculations by applying integer operations only. This algorithm checks the nearest integer by computing the middle point of the pixels nearer to the given point on the circle.
(1) Input the radius r and circle centre (xc, yc), and obtain the first point on the circumference of a circle centred on the origin as (x0, y0) = (0, r).
(2) Calculate the initial value of the decision parameter as p0 = 5/4 – r (or p0 = 1 – r for an integer radius).
(3) At each xk position, starting at k = 0, perform the following test: if pk < 0, the next point along the circle centred on (0, 0) is (xk+1, yk) and
pk+1 = pk + 2xk+1 + 1
Otherwise, the next point along the circle is (xk+1, yk – 1) and
pk+1 = pk + 2xk+1 + 1 – 2yk+1
where 2xk+1 = 2xk + 2 and 2yk+1 = 2yk – 2.
(4) Determine symmetry points in the other seven octants.
(5) Move each calculated pixel position (x, y) onto the circular path
centered on (xc, yc) and plot the coordinate values x = x + xc, y = y +
yc
void circlemidpoints (int xcenter, int ycenter, int radius)
{ int x=0;
int y=radius;
int p=1-radius;
void circleplotpoints (int ,int ,int ,int );
circleplotpoints (xcenter,ycenter,x,y);
while(x<y)
{ x++;
if(p<0)
p=p+2*x+1;
else
{ y--;
p=p+2*(x-y)+1;
}
circleplotpoints (xcenter,ycenter,x,y);
}
}
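The listing above calls circleplotpoints, whose definition does not appear in the text. A minimal sketch of the usual eight-way symmetry helper (assuming the same graphics.h environment and its putpixel function; the colour WHITE is illustrative) is:

void circleplotpoints (int xcenter, int ycenter, int x, int y)
{
    /* plot the point and its reflections in all eight octants, shifted to the circle centre */
    putpixel(xcenter + x, ycenter + y, WHITE);
    putpixel(xcenter - x, ycenter + y, WHITE);
    putpixel(xcenter + x, ycenter - y, WHITE);
    putpixel(xcenter - x, ycenter - y, WHITE);
    putpixel(xcenter + y, ycenter + x, WHITE);
    putpixel(xcenter - y, ycenter + x, WHITE);
    putpixel(xcenter + y, ycenter - x, WHITE);
    putpixel(xcenter - y, ycenter - x, WHITE);
}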
4.2 APPLICATION OF CIRCLE DRAWING ALGORITHM
1. Concentric Circle Application:
In graphic design, many application areas require the drawing of concentric circles.
Below is program written using C++ to draw a concentric circle with
different colors and at periodic interval.
Program: Write a C++ program to draw a concentric circle of different
colors at periodic interval of time.
#include<iostream.h>
#include<conio.h>
#include<graphics.h>
main()
{
int i,gd,gc,xcen,ycen,color=1;
gd=DETECT;
initgraph(&gd,&gc,"C:\\TURBOC3\\BGI");
xcen=getmaxx()/2;
ycen=getmaxy()/2;
for(i=20;i<=200;i+=20)
{
setcolor(color++);
circle(xcen,ycen,i);
}
getch();
}
Below is a program written in C++ to animate a bouncing ball (a filled ellipse) that changes colour and fill style as it moves within the screen boundaries.
#include<graphics.h>
#include<dos.h>
#include<conio.h>
main()
{
int gd,gm,x=10,y=10,xinc=10,yinc=10,c=1,f=1;
gd=DETECT;
initgraph(&gd,&gm,"C:\\TC\\BGI");
while(!kbhit())
{
x = x + xinc;
y = y + yinc;
if(x<=0 || x>=getmaxx())
xinc=-xinc;
if(y<=0 || y>=getmaxy())
yinc=-yinc;
c++;
f++;
if(c>=15)
c=1;
if(f>=12)
f=0;
setfillstyle(c,f);
fillellipse(x,y,10,10);
delay(100);
cleardevice();
}}
Output of the above program is as below:
The following program uses lines whose end points are computed with trigonometry to display a working analog clock.
#include<dos.h>
#include<math.h>
#include<stdio.h>
#include<conio.h>
#include<graphics.h>
#define x 3.1415
void tick(void); /* forward declaration; tick() is defined after main */
void main()
{
int gdriver=DETECT,gmode;
float sec_x,sec_y,min_x,min_y,hour_x,hour_y,h=0,m=0,s=0;
while(!kbhit())
{
setcolor(WHITE);
outtextxy(420,240,"3");
outtextxy(210,240,"9");
outtextxy(310,130,"12");
outtextxy(310,340,"6");
sec_x=100*cos(2*x/60*s-x/2)+getmaxx()/2; // compute second needle coordinates
sec_y=100*sin(2*x/60*s-x/2)+getmaxy()/2;
min_x=90*cos(2*x/60*m-x/2)+getmaxx()/2; // compute minute needle coordinates
min_y=90*sin(2*x/60*m-x/2)+getmaxy()/2;
hour_x=60*cos(2*x/12*(h+m/60)-x/2)+getmaxx()/2; // compute hour needle coordinates
hour_y=60*sin(2*x/12*(h+m/60)-x/2)+getmaxy()/2;
setcolor(RED);
line(getmaxx()/2,getmaxy()/2,sec_x,sec_y);
setcolor(WHITE);
line(getmaxx()/2,getmaxy()/2,min_x,min_y);
setcolor(YELLOW);
line(getmaxx()/2,getmaxy()/2,hour_x,hour_y);
tick();
delay(1000);
setcolor(BLUE);
line(getmaxx()/2,getmaxy()/2,sec_x,sec_y);
line(getmaxx()/2,getmaxy()/2,min_x,min_y);
line(getmaxx()/2,getmaxy()/2,hour_x,hour_y);
s=s+1;
if(s>=60)
{
s=0;
m=m+1;
h=h+1.0/60; /* 1.0/60 avoids integer division so that the hour hand advances */
}
}
nosound();
getch();
closegraph();
}
/* simulate clock tick sound*/
void tick()
{
int i;
for(i=3500;i<=6500;i++)
sound(i);
nosound();
}
Program: The following program divides the screen into four quadrants and animates a small circle bouncing inside each quadrant.
#include<conio.h>
#include<graphics.h>
main()
{
int gd=DETECT,gm,xc=10,yc=10,xctr=5,yctr=5,
xc2=(int)getmaxx()/2+10,yc2=10,xctr2=5,yctr2=5,
//xc3=(int)getmaxx()/-2+10,yc3=10,xctr3=5,yctr3=5;
xc3=10,yc3=getmaxy()/2,xctr3=5,yctr3=5,xc4,yc4,xctr4=5,yctr4=5;
initgraph(&gd,&gm,"C:\\TurboC3\\BGI");
xc2=(int)getmaxx()/2+10;yc2=10;xctr2=5;yctr2=5;
yc3=getmaxy()/2+10;
xc4=getmaxx()/2;
yc4=getmaxy()/2;
while(!kbhit())
{
line(getmaxx()/2,0,getmaxx()/2,getmaxy());
line(0,getmaxy()/2,getmaxx(),getmaxy()/2);
circle(xc,yc,10);
if(xc<=0 || xc>=getmaxx()/2-20)
xctr=-xctr;
if(yc<=0 || yc>=getmaxy()/2-20)
yctr=-yctr;
xc += xctr;
yc += yctr;
circle(xc2,yc2,10);
if(xc2-4<=(int)getmaxx()/2 || xc2>=getmaxx()-20)
xctr2=-xctr2;
if(yc2<=0 || yc2>=getmaxy()/2-20)
yctr2=-yctr2;
xc2 += xctr2;
yc2 += yctr2;
circle(xc3,yc3,10);
if(xc3<=0 || xc3>=getmaxx()/2-20)
xctr3=-xctr3;
if(yc3<getmaxy()/2 || yc3>=getmaxy())
yctr3=-yctr3;
xc3 += xctr3;
yc3 += yctr3;
circle(xc4,yc4,10);
if(xc4<getmaxx()/2 || xc4>=getmaxx()-20)
xctr4=-xctr4;
if(yc4<getmaxy()/2 || yc4>=getmaxy())
yctr4=-yctr4;
xc4 += xctr4;
yc4 += yctr4;
delay(10);
cleardevice();
}
}
Output of the above program is as below:
Apart from the above applications of the circle algorithm, there are many other applications, some of which are mentioned below.
1. For drawing symbols such as the Olympic rings (as can be drawn, for example, in OpenGL):
Q-3. Explain in brief the Mid-Point Circle Algorithm.
Q-4. For centre x and y coordinates of 20 and 20 respectively and a radius of 10 units, plot a scanning table using the Mid-Point Circle algorithm.
Q-5. Based on the scanning table obtained in Q-4, plot the circle on graph paper.
Q-6. List various applications where the Mid-Point Circle algorithm is used.
Q-7. Write a C++ program using computer graphics to demonstrate how it can be utilized to develop various applications such as an analog clock.
*****
MODULE IV
5
OUTPUT PRIMITIVES & ITS
ALGORITHMS
Unit Structure
5.0 Objectives
5.1 Introduction
5.2 Mid-Point Ellipse
5.3 Implementation of Ellipse Drawing A. Midpoint Ellipse
5.4 Summary
5.5 References for Further Reading
5.0 OBJECTIVES
After this chapter a learner will be able to implement the Mid-Point Ellipse algorithm using a computer program.
5.1 INTRODUCTION
An ellipse is defined as the geometric figure which is the set of all points in a plane the sum of whose distances from two fixed points, known as the foci, remains constant.
It consists of two axes: major and minor axes where the major axis is the
longest diameter and minor axis is the shortest diameter.
Unlike circle, the ellipse has four-way symmetry property which means
that only the quadrants are symmetric while the octants are not.
Let us consider the elliptical curve in the first quadrant.
In region 1, x is always incremented in each step, i.e. xk+1 = xk + 1, and the decision parameter is defined as follows:
Pk = f(xk+1, yk – ½)
   = ry²(xk + 1)² + rx²(yk – ½)² – rx²ry²
1. Input rx, ry, and ellipse center (xc, yc,), and plot the first point as
(x0,y0)=(0,ry)
Region 1:
2. Then, Calculate the initial value of the decision parameter in region 1
as
p0 = ry² – rx²ry + (1/4)rx²
3. At each xk position in region 1, starting at k = 0: if pk < 0, the next point along the ellipse is (xk+1, yk) and pk+1 = pk + 2ry²xk+1 + ry²; otherwise, the next point is (xk+1, yk – 1) and pk+1 = pk + 2ry²xk+1 – 2rx²yk+1 + ry².
Region 2:
4. Calculate initial decision parameter in region 2 with the last point (x0,
y0) calculated in region 1 as
p0 = ry²(x0 + 1/2)² + rx²(y0 – 1)² – rx²ry²
5. At each yk position in region 2, starting at k = 0: if pk > 0, the next point is (xk, yk – 1) and
pk+1 = pk – 2rx²yk+1 + rx²
otherwise, the next point is (xk + 1, yk – 1) and pk+1 = pk + 2ry²xk+1 – 2rx²yk+1 + rx².
C++ Implementation:
#include <graphics.h>
#include <stdlib.h>
#include <math.h>
#include <stdio.h>
#include <conio.h>
#include <iostream.h>
class MidPoint
{
float x,y,a, b,r,p,h,k,p1,p2;
public:
void get ();
void cal ();
};
void main ()
{
MidPoint b;
b.get ();
b.cal ();
getch ();
}
void MidPoint :: get ()
{
cout<<"\n ENTER CENTER OF ELLIPSE";
cout<<"\n ENTER (h, k) ";
cin>>h>>k;
cout<<"\n ENTER LENGTH OF MAJOR AND MINOR AXIS";
cin>>a>>b;
}
void MidPoint :: cal ()
{
/* request auto detection */
int gdriver = DETECT,gmode, errorcode;
int midx, midy, i;
/* initialize graphics and local variables */
initgraph (&gdriver, &gmode, " ");
/* read result of initialization */
errorcode = graphresult ();
if (errorcode != grOK) /* an error occurred */
{
printf("Graphics error: %s \n", grapherrormsg (errorcode);
printf ("Press any key to halt:");
getch ();
exit (1); /* terminate with an error code */
}
x=0;
y=b;
// REGION 1
p1 = (b * b) - (a * a * b) + ((a * a) / 4);
while ((b * b * x) < (a * a * y))   /* region 1 */
{
putpixel (x+h, y+k, RED);
putpixel (-x+h, -y+k, RED);
putpixel (x+h, -y+k, RED);
putpixel (-x+h, y+k, RED);
if (p1 < 0)
p1 += ((2 *b * b) *(x+1))-((2 * a * a)*(y-1)) + (b * b);
else
{
p1+= ((2 *b * b) *(x+1))-((2 * a * a)*(y-1))-(b * b);
y--;
}
x++;
}
//REGION 2
p2 = ((b * b) * (x + 0.5) * (x + 0.5)) + ((a * a) * (y - 1) * (y - 1)) - (a * a * b * b);
while (y>=0)
{
If (p2>0)
p2=p2-((2 * a * a)* (y-1))+(a *a);
else
{
p2=p2-((2 * a * a)* (y-1))+((2 * b * b)*(x+1))+(a * a);
x++;
}
y--;
putpixel (x+h, y+k, RED);
putpixel (-x+h, -y+k, RED);
putpixel (x+h, -y+k, RED);
putpixel (-x+h, y+k, RED);
}
getch();
}
Output:
Java Implementation:
// Java program for implementing
// Mid-Point Ellipse Drawing Algorithm
import java.util.*;
import java.text.DecimalFormat;
class GFG
{
static void midptellipse(float rx, float ry, float xc, float yc)
{
dx = dx + (2 * ry * ry);
dy = dy - (2 * rx * rx);
d1 = d1 + dx - dy + (ry * ry);
}
}
d2 = d2 + (rx * rx) - dy;
}
else {
y--;
x++;
dx = dx + (2 * ry * ry);
dy = dy - (2 * rx * rx);
d2 = d2 + dx - dy + (rx * rx);
}
}
}
// Driver code
public static void main(String args[])
{
// To draw a ellipse of major and
// minor radius 15, 10 centered at (50, 50)
midptellipse(10, 15, 50, 50);
}
}
Python Implementation:
# Python3 program for implementing
# Mid-Point Ellipse Drawing Algorithm
x = 0;
y = ry;
# Initial decision parameter of region 1
d1 = ((ry * ry) - (rx * rx * ry) +
(0.25 * rx * rx));
dx = 2 * ry * ry * x;
dy = 2 * rx * rx * y;
# For region 1
while (dx < dy):
d2 = (((ry * ry) * ((x + 0.5) * (x + 0.5))) +
((rx * rx) * ((y - 1) * (y - 1))) -
(rx * rx * ry * ry));
5.4 SUMMARY
Mid-point Ellipse algorithm is used to draw an ellipse in computer
graphics. The midpoint ellipse algorithm plots (finds) the points of an ellipse in the first quadrant by dividing the quadrant into two regions.
*****
UNIT V
6
OUTPUT PRIMITIVES & ITS
ALGORITHM
Unit Structure
6.0 Objective
6.1 Introduction
6.2 Implementation of curve
6.2.1 Bezier curve and surface
6.2.2 Properties of Bezier curve
6.2.3 Design techniques using Bezier curve
6.2.4 Cubic Bezier curve
6.2.5 Bezier surface
6.3 Summary
6.4 Unit End Exercise
6.5 References for Further Reading
6.0 OBJECTIVE
After this chapter you will be able to understand the following concepts:
Bezier curve and surface
properties of Bezier curve
Design techniques using Bezier curve
Cubic Bezier curve
Bezier surface
6.1 INTRODUCTION
The objects which we see around us are of different shapes, visible either as 2D or 3D. Modelling of objects in computer graphics uses primitives such as lines, circles, ellipses, etc. Modelling of geometric objects generally combines these basic primitives to create other objects. For example, to generate curves in computer graphics, multiple line segments are connected with each other using the same data points.
Curve Representation
Curve Function:
Routines for circles, splines, and other commonly used curves are included in many graphics packages. The PHIGS standard does not provide explicit functions for these curves, but it does include a generalized drawing primitive (GDP) function of the form
generalizedDrawingPrimitive (n, wcpoints, id, datalist)
where wcpoints is a list of n coordinate positions, datalist contains noncoordinate data values, and parameter id selects the desired function. At a
particular installation, a circle might be referenced with id=1, an ellipse
with id=2, and so on.
As an example of the definition of curve through this PHIGS function a,
circle (id=1, say) could be specified by assigning the two center coordinate
Val-uses to wcpoints and assigning the radius value to datalist. The
generalized drawing primitive would then reference the appropriate
algorithm, such as file midpoint method, to generate the circle. With
interactive input, a circle could be defined two coordinate points: the
center position and a point on the circumference. Similarly, interactive
specification of an ellipse can be done with three points. The two foci and
a point on the ellipse boundary, all stored in wcpoints. For an ellipse in
standard position wcpoint could be assigned only the center coordinate,
with datalist assigned the value for rx and ry. Splines defined with control
points would be generated by assigning the control point coordinate to
wcpoints.
Functions to generate circles and ellipses often include the capability of drawing curve sections by specifying parameters for the line endpoints. Expanding the parameter list allows specification of the beginning and ending angular values for an arc, as illustrated in Fig. 3-27. Another method
for designating a circular or elliptical arc is to input the beginning and
ending coordinate positions of the arc.
The curve goes from the first control point to the last, and the control points determine the slope and the shape of the curve. The curve is always associated with the polygon formed by the specified control points; this polygon is called the control polygon or Bézier polygon, as it is the control points that determine the curve. With this property, the control polygon can be formed from any number of control points.
2. Matrix representation:
Using matrix multiplication, we can actually represent the Bézier curve,
which we can use in splitting the Bézier curve.
Matrix M can be used to hold all the information about the quadratic Bézier curve in one matrix. To build this matrix we collect the coefficients that appear in front of each Pi, i.e. the expanded form of the Bernstein polynomials.
3. Interpolation:
Bézier curves drawn from a predefined set of control points appear smooth. The formula P(t) produces points and is not of the form y = f(x), so one x can have multiple y's (basically, the curve can "go backward").
71
As with the interpolation splines, a Bézier curve can be specified with boundary conditions, with a characterizing matrix, or with blending functions. The blending function specification is the most convenient for Bézier curves.
Suppose we are given n+1 control-point positions: pk = (xk, yk, zk), with
k varying from 0 to n. The following position vector P(u) can be produced
by blended coordinate points, which describes the path of an
approximating Bezier polynomial function between P0 and Pn.
P(u) = Σ (k=0 to n) pk BEZk,n(u),   0 ≤ u ≤ 1
The Bézier blending functions BEZk,n(u) are the Bernstein polynomials
BEZk,n(u) = C(n, k) u^k (1 – u)^(n–k)
where the binomial coefficients are
C(n, k) = n! / (k! (n – k)!)
De Casteljau's Algorithm:
1. Draw the control points. In the illustration they are labeled 1, 2, 3.
2. Draw the line segments between control points 1 → 2 → 3.
3. The parameter t moves from 0 to 1. In the illustration a step of 0.05 is used, so the parameter runs over 0, 0.05, 0.1, 0.15, ..., 0.95, 1. For each of these values of t:
On each line segment, take a point located at a distance from its start proportional to t. As there are two segments, we get two points.
For instance, for t = 0 both points are at the start of their segments, for t = 0.25 they are at 25% of the segment length from the start, for t = 0.5 at 50% (the middle), and for t = 1 at the end of the segments. Connect the two points (as illustrated for t = 0.25 and t = 0.5).
4. Now on the new connecting line take a point at the distance proportional to the same value of t. That is, for t = 0.25 (the left picture) we get a point at the end of the left quarter of the line, and for t = 0.5 (the right picture) in the middle of the line.
5. As t runs from 0 to 1, every value of t adds a point to the curve. The set of such points forms the Bézier curve.
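A minimal C sketch of this construction for three control points (the control-point values are illustrative only): for each t it interpolates along the two connecting segments and then along the new segment, printing the resulting curve point.

#include <stdio.h>

typedef struct { double x, y; } Point;

/* point on the segment a-b at a distance proportional to t from a */
static Point lerp(Point a, Point b, double t)
{
    Point r;
    r.x = a.x + (b.x - a.x) * t;
    r.y = a.y + (b.y - a.y) * t;
    return r;
}

int main()
{
    Point p1 = {0, 0}, p2 = {50, 100}, p3 = {100, 0};   /* control points 1, 2, 3 */
    double t;
    for (t = 0.0; t <= 1.0001; t += 0.05) {
        Point a = lerp(p1, p2, t);   /* point on segment 1-2 */
        Point b = lerp(p2, p3, t);   /* point on segment 2-3 */
        Point c = lerp(a, b, t);     /* point on the Bezier curve */
        printf("t=%.2f  curve point (%.2f, %.2f)\n", t, c.x, c.y);
    }
    return 0;
}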
Thus, at the start of the curve the slope is along the line joining the first two control points, and at the end the slope is along the line joining the last two control points. Similarly, the second parametric derivatives of a Bézier curve at the endpoints are
P''(0) = n(n-1)[(p2 – p1) – (p1 – p0)]
P''(1) = n(n-1)[(pn-2 – pn-1) – (pn-1 – pn)]
One of the most attractive properties of a Bézier curve is that the curve always lies within the convex hull (convex polygon boundary) of its control points. This follows from the properties of the Bézier blending functions: they are all positive and their sum is always 1,
Σ (k=0 to n) BEZk,n(u) = 1
so that any curve position is simply a weighted sum of the control-point positions. The convex-hull property for a Bézier curve ensures that the polynomial smoothly follows the control points without erratic oscillations.
Joining the control-point segments decides the shape of the control polygon.
Bézier curves are contained in the convex hull of their defining control points.
The degree of the polynomial defining the curve segment is one less than the number of defining polygon points. Therefore, for 4 control points the degree of the polynomial is 3, i.e. a cubic polynomial.
The Bézier curve generally follows the shape of the defining polygon.
At the endpoints, the tangent vector direction is the same as that of the vector determined by the first and last polygon segments.
Bézier curves exhibit global control: moving a control point alters the shape of the whole curve.
A given Bézier curve can be subdivided at a point u = u0 into two Bézier segments which join together at the point corresponding to the parameter value u = u0.
6.2.3 Design Technique Using Bezier Curve:
Placing the first and the last control points at the same position produces a closed Bézier curve, as shown in the figure.
Also, specifying multiple control points at a single coordinate position gives more weight to that position. In the figure below, two control points are given as input at a single coordinate position, and the resulting curve is pulled nearer to this position.
With the help of higher-degree polynomial calculations, we can draw a Bézier curve with any number n of control points. Complicated Bézier curves can also be generated by piecing together sections of lower degree; working with smaller sections gives better control over the shape of the curve in small regions. Because Bézier curves pass through their endpoints, it is easy to match curve sections (zero-order continuity). Also, by the property of Bézier curves, the tangent to the curve at an endpoint is along the line joining that endpoint to the adjacent control point. Therefore, to obtain first-order continuity between curve sections, we can pick control points p'0 and p'1 of a new section to be along the same straight line as control points pn-1 and pn of the previous section. When the number of control points is the same for two curve sections, we can obtain C1 continuity by choosing the first control point of the new section as the last control point of the previous section and by positioning the second control point of the new section at position
pn + (pn - pn-1)
In the above figure, two Bézier sections form a piecewise approximation curve. Zero-order and first-order continuity are attained between the curve sections by setting p'0 = p2 and by making points p1, p2, and p'1 collinear.
Thus, the three control points involved are collinear and equally spaced. We obtain C2 continuity between two Bézier sections by calculating the position of the third control point of a new section in terms of the positions of the last three control points of the previous section as
pn-2 + 4(pn – pn-1)
Requiring second-order continuity of Bézier curve sections can be unnecessarily restrictive. This is especially true for cubic curves, which have only four control points per section. In that case, second-order continuity fixes the position of the first three control points and leaves only one point that can be used to adjust the shape of the curve segment.
BEZ0,3(u) = (1 – u)³
BEZ1,3(u) = 3u(1 – u)²
BEZ2,3(u) = 3u²(1 – u)
BEZ3,3(u) = u³
Plots of the four cubic Bézier blending functions are given in the figure. The shape of the curve is decided by the blending functions, which show the influence of each control point over the parameter range 0 to 1. At u=0, the only nonzero blending function is BEZ0,3, which has the value 1. At u=1, the
only nonzero function is BEZ3,3,with a value of 1 at that point. Thus, the
cubic Bezier curve will always pass through control points p0 and p3. The
other functions, BEZ1,3 and BEZ2,3, influence the shape of the curve at
intermediate values of parameter u, so that the resulting curve tends
toward points p1 and p2. Blending function BEZ1,3 is maximum at u=1/3,
and BEZ2,3 is maximum at u=2/3.
P”(0)=6(p0-2p1+p2), P”(1)=6(p1-2p2+p3)
In matrix form, P(u) = [u³ u² u 1] · MBEZ · [p0 p1 p2 p3]T, where
MBEZ = [ -1   3  -3   1 ]
       [  3  -6   3   0 ]
       [ -3   3   0   0 ]
       [  1   0   0   0 ]
Additional parameters can also allow adjustment of curve "tension" and "bias", as with interpolating splines; the more useful B-splines, as well as β-splines, provide this capability.
6.2.5 Bezier Surface:
Two sets of orthogonal Bézier curves can be used to design an object surface by specifying an input mesh of control points. The parametric vector function for the Bézier surface is formed as the Cartesian product of Bézier blending functions:
P(u, v) = Σ (j=0 to m) Σ (k=0 to n) pj,k BEZj,m(v) BEZk,n(u)
With pj,k specifying the location of the (m+1) by (n+1) control points.
The above figure illustrates two Bézier surface plots. The control points are connected by dashed lines, and the solid lines show curves of constant u and constant v. Each curve of constant u is plotted by varying v over the interval from 0 to 1, with u fixed at one of the values in this unit interval. Curves of constant v are plotted similarly.
Bézier surfaces share the same properties as Bézier curves, and they also provide a convenient method for interactive design applications. For
each surface patch, we can select a mesh of controls points in the xy
“ground” plane, then we choose elevations above the ground plane for the
z-coordinate values of the control points. Using the boundary constraints,
the Patches can then be pieced together.
The figure illustrates a surface formed with two Bézier sections. With zero-order continuity at the boundary line, one obtains a smooth transition from one section to the other. Zero-order continuity is specified by matching control points at the boundary. First-order continuity is
obtained by choosing control points along a straight line across the
boundary and by maintaining a constant ratio of collinear line segments
for each set of specified control points across section boundaries.
6.3 SUMMARY
The spline approximation method was developed by the French engineer Pierre Bézier for use in the design of Renault automobile bodies. Bézier splines have a number of properties that make them very convenient for curve and surface design. Bézier curves are widely used in painting and drawing packages, as they are easy to implement and powerful for drawing curves, and hence they are also used in CAD systems. Many graphics packages provide only cubic spline functions. Bézier curves and Bézier surfaces share the same properties, and they provide a convenient method for interactive design applications. With the help of higher-degree polynomial calculations, a Bézier curve can be drawn with any number n of control points. The blending functions can be expanded into polynomial expressions, and the cubic Bézier point function can be written in matrix form.
*****
MODULE VI
7
OUTPUT PRIMITIVES & ITS
ALGORITHMS
Unit Structure
7.1 Objectives
7.2 Definition
7.3 Introduction
7.4 Polygon Filling
7.5 Seed fill algorithms
7.6 Boundary Fill Algorithm
7.7 Flood Fill Algorithm
7.8 Scan Line Algorithm
7.9 Summary
7.10 Unit End Exercise
7.11 References for Further Reading
7.1 OBJECTIVES
The purpose is to colour an entire area of connected pixels and thereby obtain the shape of the object on the screen.
Basically, this means filling up polygons using horizontal lines or scan lines.
The purpose is to fill the interior pixels of a polygon given only the vertices of the figure.
7.2 DEFINITION
The process of colouring or highlighting the pixels with any colour which lie inside the polygon is known as polygon filling.
7.3 INTRODUCTION
The process of colouring or highlighting the pixels with any colour which lie inside the polygon is known as polygon filling. For polygon filling the requirements are:
A digital representation of the shape must be closed.
A test for determining if a point is inside or outside of the shape.
A rule or procedure for determining the colours of each point inside the
shape.
Introduction to Solid Area Scan Conversion:
Polygon: It is a figure which is formed by connecting line segments in a
closed manner. Polygons are categorised as concave & convex polygon.
Concave Polygon: A polygon in which a line segment joining two points within the polygon may not lie completely inside the polygon is called a concave polygon.
Convex Polygon: A polygon in which a line segment joining any two points within the polygon always lies completely inside the polygon is called a convex polygon.
a) Even-Odd Test:
Draw a line segment from the point in question to a point known to be outside the polygon. If the number of polygon edges crossed by this line is odd, then the point is an interior point; otherwise the point is an exterior point.
In the diagram, point P1 has one intersection, which is odd, so P1 is an interior point. Point P2 has two intersections, which is even, so P2 is an exterior point.
When the line segment intersects a vertex of the polygon, the following rules are used:
Count is even: if the other endpoints of the two edges meeting at the intersecting vertex lie on the same side of the line segment.
Count is odd: if the other endpoints lie on opposite sides of the line segment.
For vertices: if the (intersection) point is a vertex of the polygon, then check the two edges meeting there. If they lie on the same side of the line, the vertex is counted as an even number of intersections; if the intersecting edges lie on opposite sides, the vertex is counted as an odd number of intersections.
(In the figure, point P3 has a count of 1 + 2, an odd count, hence the point is inside the polygon.)
b) Winding Number Test:
In this test, a line segment is drawn from outside the polygon to the point in question, and we consider the polygon edges it crosses. Assign a direction number to each boundary edge crossed and sum these direction numbers.
Let P1 be the test point, and let a line segment be drawn from outside the polygon up to the point P1. An edge that starts below the line, crosses it and ends above the line is given direction number -1, and an edge that starts above the line, crosses it and ends below the line is given direction number 1.
Take the sum of these direction numbers: if the value is non-zero then the point is inside the polygon, otherwise the point is outside the polygon.
(In the figure, the crossings contribute +1 - 1 + 1 = +1. Since the sum is non-zero, the point is inside the polygon; a sum of 0 would mean the point is outside.)
Point P1 is inside a polygon.
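As a rough sketch of the inside/outside tests described above, the following Python function implements the even-odd (crossing count) rule for a polygon given by its vertices; the polygon and test points are illustrative.

# Minimal sketch of the even-odd (crossing count) point-in-polygon test.
def inside_polygon(point, polygon):
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count the edge if a horizontal ray from (x, y) to the right crosses it.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside   # an odd number of crossings means inside
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(inside_polygon((5, 5), square))   # True  (interior point)
print(inside_polygon((15, 5), square))  # False (exterior point)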
7.5 SEED FILL ALGORITHMS
Algorithm:
1. Select a seed point inside the region.
2. Move outwards from the seed point.
3. If a pixel is not set, set the pixel.
4. Process each neighbour of the pixel that is inside the region.
(Figure: starting from the seed pixel, move towards its neighbouring pixels, left, right, top and bottom, and stop when the entire region is filled with the fill colour.)
Seed fill is further classified into the flood fill algorithm (which fills an interior region) and the boundary fill algorithm (which fills a boundary-defined region).
7.6 BOUNDARY FILL ALGORITHM
Starting with the seed point, i.e. any point inside the polygon, examine the neighbouring pixels to check whether a boundary pixel has been reached. If a boundary pixel has not been reached, the pixels are highlighted and the process continues until a boundary pixel is reached.
Algorithm:
1. Region described by a set of bounding pixels.
2. A seed pixel is set inside the boundary
3. Check if this pixel is a boundary pixel or has already been filled.
4. If no to both, then fill it & make neighbours new seeds.
It is defined with either four-connected or eight-connected regions. In four-connected regions every pixel can be reached by a combination of moves in four directions: left, right, top and bottom.
In eight-connected regions every pixel can be reached by a combination of moves in two horizontal, two vertical and four diagonal directions.
(Figure: starting from the seed pixel, move towards the neighbouring pixels, left, right, top and bottom, and stop when the entire region inside the boundary is filled with colour.)
C++ program to fill the rectangle using boundary fill algorithm:
#include<iostream.h>
#include<conio.h>
#include<dos.h>
#include<graphics.h>
void b_fill(int x, int y, int bc, int fc)
{
int p;
p=getpixel(x,y);
if((p!=bc) && (p!=fc))
{
putpixel(x,y,fc);
b_fill(x,y+1,bc,fc);
b_fill(x,y-1,bc,fc);
b_fill(x+1,y,bc,fc);
b_fill(x-1,y,bc,fc);
}
}
void main()
{
int gd=DETECT,gm;
initgraph(&gd,&gm,"c:\\tc\\bgi");
settextstyle(5,HORIZ_DIR,3);
outtextxy(100,100,"Program to boundary fill");
setcolor(10);
rectangle(260,200,310,260);
delay(1000);
b_fill(280,250,10,12);
getch();
}
2. 4-connected boundary: fill regions can be defined by lines and arcs. By translating the line and arc endpoints we can translate, scale and rotate the whole boundary-fill region. Therefore 4-connected boundary-fill regions are better suited to modelling.
Algorithm:
1. Region is a patch of like-coloured pixels.
2. A seed pixel is set and a range of colours is defined.
3. Check if the pixel is in the colour range.
4. If yes, fill it and make the neighbours new seed.
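A minimal Python sketch of this colour-range seed fill (the flood fill idea) on a small 2-D grid of colour values; the grid, colours and seed position are illustrative, and an explicit stack is used instead of recursion.

# Minimal sketch of flood fill on a 2-D grid of colour values.
def flood_fill(grid, x, y, fill_colour):
    old_colour = grid[y][x]
    if old_colour == fill_colour:
        return
    stack = [(x, y)]                      # explicit stack avoids deep recursion
    while stack:
        cx, cy = stack.pop()
        if 0 <= cy < len(grid) and 0 <= cx < len(grid[0]) and grid[cy][cx] == old_colour:
            grid[cy][cx] = fill_colour    # fill the pixel
            # 4-connected neighbours: left, right, top and bottom
            stack.extend([(cx - 1, cy), (cx + 1, cy), (cx, cy - 1), (cx, cy + 1)])

canvas = [[0] * 5 for _ in range(5)]      # 0 = background colour
canvas[2][2] = 9                          # a pixel of a different colour is not filled
flood_fill(canvas, 0, 0, 7)               # seed at (0, 0), fill colour 7
for row in canvas:
    print(row)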
delay(1);
putpixel(x,y,fillColor);
flood(x+1,y,fillColor,defaultColor);
flood(x-1,y,fillColor,defaultColor);
flood(x,y+1,fillColor,defaultColor);
flood(x,y-1,fillColor,defaultColor);
}
}
Output:
floodfill(x-1,y-1,old,newcol);
}
}
void main()
{
int gd=DETECT,gm;
initgraph(&gd,&gm,"C:\\TURBOC3\\BGI");
rectangle(50,50,150,150);
floodfill(70,70,0,15);
getch();
closegraph();
}
Output:
Flood fill Vs Boundary fill:
Though both Flood fill and Boundary fill algorithms color a given figure
with a chosen color, they differ in one aspect. In Flood fill, all the
connected pixels of a selected color get replaced by a fill color. On the
other hand, in Boundary fill, the program stops when a given color
boundary is found.
7.8 SCAN LINE ALGORITHM
Algorithm:
1. A seed pixel is selected and coloured.
2. The scan line containing the seed pixel is filled to the left and to the right of the seed pixel until a boundary is found.
3. The extreme left and extreme right unprocessed pixels in the span are saved as xleft and xright.
4. The scan lines above and below the current scan line are examined in the range xleft to xright for any contiguous span of non-boundary pixels; each such span provides a new seed pixel, and the process is repeated.
(Figure: s is the seed pixel; red marks xleft, green marks xright, and black marks the boundary region filled with colour. The spans labelled 1, 2 and 3 are processed in turn.)
7.9 SUMMARY
This unit explained how to fill a polygon using seed fill algorithms, along with the flood fill and boundary fill algorithms. The scan line algorithm was also explained.
https://techdifferences.com/difference-between-flood-fill-and-boundary-fill-algorithm.html
Computer Graphics, C Version by Donald Hearn and M. Pauline Baker, II Edition
https://www.tutorialspoint.com/computer_graphics/polygon_filling_algorithm.htm
https://www.javatpoint.com/computer-graphics-boundary-filled-algorithm
*****
MODULE VII
8
2D GEOMETRIC TRANSFORMATIONS &
CLIPPING
Unit Structure
8.1 Implementation of Two Dimensional Transformations
8.1.1 What is Transformation?
8.2 Translation
8.2.1 What is Translation?
8.2.2 Source Code
8.2.3 Output
8.3 Rotation
8.3.1 What is Rotation?
8.3.2 Source Code
8.3.3 Output
8.4 Shearing
8.4.1 What is Shearing?
8.4.2 Source Code
8.4.3 Output
8.5 Scaling
8.5.1 What is Scaling?
8.5.2 Source Code
8.5.3 Output
8.6 Reflection
8.6.1 What is 2D Reflection?
8.6.2 Source Code
8.6.3 Output
8.7 References For Future Reading
2. Scaling
3. Rotation
4. Shearing
5. Reflection
We will discuss each of the above topics in detail
8.2 TRANSLATION
8.2.1 What is Translation?:
● It is repositioning an object along the straight-line path from one
coordinate location to another.
● The translation is a rigid body transformation that moves objects
without deformation.
8.2.2 Source code:
#include<graphics.h>
#include<stdlib.h>
#include<iostream>
#include<conio.h>
#include<math.h>
using namespace std;
int main(){
int gd=DETECT,gm;
int x1,x2,x3,y1,y2,y3,nx1,nx2,nx3,ny1,ny2,ny3,c;
int sx,sy,xt,yt,r;
float t;
initgraph(&gd,&gm,"");
cout<<" \n \t Enter the points of triangle:";
setcolor(15);
cin>>x1>>y1>>x2>>y2>>x3>>y3;
line(x1,y1,x2,y2);
line(x2,y2,x3,y3);
line(x3,y3,x1,y1);
outtextxy(300,300,"before translation");
getch();
cleardevice();
outtextxy(150,200,"after translation");
cout<<" \n Enter the translation factor:";
cin>>xt>>yt;
nx1=x1+xt;
ny1=y1+yt;
nx2=x2+xt;
ny2=y2+yt;
nx3=x3+xt;
ny3=y3+yt;
line(nx1,ny1,nx2,ny2);
line(nx2,ny2,nx3,ny3);
line(nx3,ny3,nx1,ny1);
getch();
closegraph();
}
8.2.3.Output:
8.3 ROTATION
8.3.1 What is rotation?:
● Here we rotate an object with a particular angle θ from origin.
From the following figure, we can see that the point P (x, y) is located at angle φ from the horizontal X coordinate with distance r from the origin.
Let us suppose you want to rotate point P with angle θ. After rotating it to
a new location, we will get a new point P’(x’,y’)
Coordinates of point P can be represented as
x = r cos φ ……(1)
y = r sin φ ……(2)
Similarly, the coordinates of point P' can be represented as
x' = r cos(φ + θ) = r cos φ cos θ − r sin φ sin θ ……(3)
y' = r sin(φ + θ) = r cos φ sin θ + r sin φ cos θ ……(4)
Substituting equations 1 and 2 in 3 and 4, we get
x' = x cos θ − y sin θ
y' = x sin θ + y cos θ
Representing the above equations in matrix form,
P' = P . R
where R is the rotation matrix
8.3.2 Source code:
//rotation
#include<graphics.h>
#include<stdlib.h>
#include<iostream>
#include<conio.h>
#include<math.h>
using namespace std;
int main(){
int gd=DETECT,gm;
int x1,x2,x3,y1,y2,y3,nx1,nx2,nx3,ny1,ny2,ny3,c;
int sx,sy,xt,yt,r;
float t;
initgraph(&gd,&gm,"");
settextstyle(1,0,2);
cout<<" \n \t Enter the points of triangle:";
setcolor(15);
cin>>x1>>y1>>x2>>y2>>x3>>y3;
line(x1,y1,x2,y2);
line(x2,y2,x3,y3);
line(x3,y3,x1,y1);
outtextxy(300,300,"before rotation");
getch();
cleardevice();
outtextxy(150,200,"after rotation");
cout<<" \n Enter the rotation angle:";
cin>>r;
t=3.14*r/180;
nx1=abs(x1*cos(t)-y1*sin(t));
ny1=abs(x1*sin(t)+y1*cos(t));
nx2=abs(x2*cos(t)-y2*sin(t));
ny2=abs(x2*sin(t)+y2*cos(t));
nx3=abs(x3*cos(t)-y3*sin(t));
ny3=abs(x3*sin(t)+y3*cos(t));
line(nx1,ny1,nx2,ny2);
line(nx2,ny2,nx3,ny3);
line(nx3,ny3,nx1,ny1);
getch();
}
8.3.3 Output:
8.4 SHEARING
8.4.1 What is Shearing?:
● Shearing means changing the shape and size of a 2D object along the
x and y-axis
● It is similar to sliding the layers in one direction to change the size of
an object
● There are two types of shearing. Shearing along the x-axis and along
the y-axis
● Shearing can be done on both axes.
● Shearing is also termed Skewing.
Types of Shearing:
i. X-Axis Shearing:
The X-Shear preserves the Y coordinate and changes are made to X
coordinates, which causes the vertical lines to tilt right or left
Shearing in X-axis is achieved by using the following shearing equations
Xnew = Xold + Shx * Yold
Ynew = Yold
In Matrix form, the above shearing equations may be represented as:
int x,y,x1,y1,x2,y2,x3,y3,shear_f;
initgraph(&gd,&gm,"C:\\TURBOC3\\BGI");
cout<<("\n please enter first coordinate = ");
scanf("%d %d",&x,&y);
cout<<("\n please enter second coordinate = ");
scanf("%d %d",&x1,&y1);
cout<<("\n please enter third coordinate = ");
scanf("%d %d",&x2,&y2);
cout<<("\n please enter last coordinate = ");
scanf("%d %d",&x3,&y3);
cout<<("\n please enter shearing factor x = ");
scanf("%d",&shear_f);
cleardevice();
line(x,y,x1,y1);
line(x1,y1,x2,y2);
line(x2,y2,x3,y3);
line(x3,y3,x,y);
setcolor(RED);
x=x+ y*shear_f;
x1=x1+ y1*shear_f;
x2=x2+ y2*shear_f;
x3=x3+ y3*shear_f;
line(x,y,x1,y1);
line(x1,y1,x2,y2);
line(x2,y2,x3,y3);
line(x3,y3,x,y);
getch();
closegraph();
}
8.4.3 Output:
Shearing Along Y-Axis
8.5 SCALING
8.5.1 What is Scaling?:
● Scaling is a transformation technique used for changing the size of a
2D object along the x and y-axis
● In the Scaling process, the size of the 2-D object is either increased or
decreased
Scaling process:
● Now let's consider a triangle having vertices A, B, C
● vertex A has coordinates (x, y)
● x' = x * sx and y' = y * sy
● The scaling factors sx and sy scale the object in the X and Y directions respectively, so the above equations can be represented in matrix form:
● or,
A'(x', y') = A(x, y) . S(sx, sy)
B'(x', y') = B(x, y) . S(sx, sy)
C'(x', y') = C(x, y) . S(sx, sy)
getch();
cleardevice();
outtextxy(150,200,"after scaling");
cout<<" \n Enter the scaling factor:";
cin>>sx>>sy;
nx1=x1*sx;
ny1=y1*sy;
nx2=x2*sx;
ny2=y2*sy;
nx3=x3*sx;
ny3=y3*sy;
line(nx1,ny1,nx2,ny2);
line(nx2,ny2,nx3,ny3);
line(nx3,ny3,nx1,ny1);
getch();
}
8.5.3 Outputs:
8.6 2D REFLECTION
8.6.1 What is 2D reflection?:
● 2D reflection is a kind of rotation, where the angle of rotation is 180 degrees.
● The reflected image is always formed on the other side of the mirror.
● The size of the reflected image formed is the same as the original
object.
Types of 2D reflection:
● Reflection on X-axis
● Reflection on Y-axis
● Reflection on the axis perpendicular to the XY plane and passing
through the origin.
● Reflection on Y = X.
Consider a point object O that has to be reflected in a 2D plane. Let Initial
coordinates of the object O is (X1, Y1) and New coordinates of the
reflected object O after reflection is (X2, Y2).
Reflection on X-axis:
In this transformation the value of x will remain the same whereas the
value of y will become negative. The object will lie on another side of the
x-axis.
X1 = X2
Y1 = -Y2
Reflection on Y-axis:
In this transformation, the value of y will remain the same whereas the
value of x will become negative. The object will lie on another side of the
y-axis.
X1 = -X2
Y1 = Y2
Reflection on the axis perpendicular to the XY plane and passing through the origin.
● Reflection on Y = X:
First of all, the object is rotated by 45°. The direction of rotation is clockwise. After that, reflection is done with respect to the x-axis. The last step is rotating the line y = x back to its original position, that is, counterclockwise by 45°.
The object may be reflected about the line y = x with the help of the following transformation matrix.
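Reflection can also be carried out purely as a coordinate calculation; the following minimal Python sketch (not the unit's Turbo C listing) reflects a triangle about the x-axis, the y-axis and the line y = x, with illustrative vertices.

# Minimal sketch: 2D reflections as coordinate calculations.
triangle = [(2, 3), (6, 3), (4, 7)]                # illustrative vertices

reflect_x = [(x, -y) for (x, y) in triangle]       # about the x-axis
reflect_y = [(-x, y) for (x, y) in triangle]       # about the y-axis
reflect_yx = [(y, x) for (x, y) in triangle]       # about the line y = x

print("About x-axis :", reflect_x)
print("About y-axis :", reflect_y)
print("About y = x  :", reflect_yx)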
8.6.3 Output:
2. https://programmerbay.com/c-program-to-perform-shearing-on-a-rectangle/
3. https://www.geeksforgeeks.org/translation-objects-computer-graphics-reference-added-please-review/
*****
MODULE VIII
9
2D GEOMETRIC TRANSFORMATIONS &
CLIPPING
Unit Structure
9.1 Midpoint Subdivision Algorithm
9.1.1 What is Midpoint Subdivision Algorithm
9.1.2 Algorithm
9.1.3 Source Code
9.1.4 Output
9.2 Cohen Sutherland Line Clipping Algorithm
9.2.1 What is Cohen Sutherland Line Clipping Algorithm?
9.2.2 Algorithm
9.2.3 Source Code
9.2.4 Output
9.3 Sutherland-Hodgman Polygon Clipping Algorithm
9.3.1 What is Sutherland-Hodgman polygon clipping Algorithm
9.3.2 Algorithm
9.3.3 Source Code
9.3.4 Output
9.4 Conclusion
9.5 References for Future Reading
9.1.2 Algorithm:
Step1: Calculate the position of both endpoints of the line
Step 2: Perform the OR operation on both of these endpoints.
Step 3: If the OR operation gives 0000, the line is completely visible, so draw it;
else perform the AND operation on both endpoints:
if AND ≠ 0000, the line is completely invisible, so reject it;
else (AND = 0000) the line is partially visible, so divide it at its midpoint (Xm, Ym) and repeat the process for both halves, where
Xm is the midpoint of the X coordinates, Xm = (X1 + X2)/2, and
Ym is the midpoint of the Y coordinates, Ym = (Y1 + Y2)/2.
typedef struct coordinate
{
int x,y;
char code[4];
}PT;
void drawwindow();
void drawline (PT p1,PT p2,int cl);
PT setcode(PT p);
PT mid;
int v;
p1=setcode(p1);
p2=setcode(p2);
v=visibility(p1,p2);
switch(v)
{
case 0: /* Line completely visible */
drawline(p1,p2,15);
break;
case 1: /* Line completely invisible */
break;
case 2: /* line partly visible */
mid.x = p1.x + (p2.x-p1.x)/2;
mid.y = p1.y + (p2.y-p1.y)/2;
midsub(p1,mid);
mid.x = mid.x+1;
mid.y = mid.y+1;
midsub(mid,p2);
break;
}
}
void drawwindow()
{
setcolor(RED);
line(150,100,450,100);
line(450,100,450,400);
line(450,400,150,400);
line(150,400,150,100);
}
{
if((p1.code[i]==p2.code[i]) &&(p1.code[i]=='1'))
flag=0;
}
if(flag==0)
return(1);
return(2);
}
9.1.4 OUTPUT:
The Cohen-Sutherland algorithm divides the two-dimensional plane into nine regions and then efficiently determines the lines and portions of lines that are visible in the central region of interest (the viewport).
In the algorithm, first of all, it is detected whether line lies inside the
screen or it is outside the screen.
Cohen-Sutherland line clipping has 3 categories:
All lines come under any one of the following categories:
1. Visible
2. Not Visible
3. Clipping Case
1. Visible: If a line lies within the window, i.e., both endpoints of the line lie within the window, the line is visible and will be displayed as it is.
2. Not Visible: If a line lies completely outside the window, it is invisible and rejected; such lines are not displayed. If any one of the following inequalities is satisfied, then the line is considered invisible. Let A (x1, y1) and B (x2, y2) be the endpoints of the line.
3. Clipping Case: If the line is neither visible case nor invisible case. It is
considered to be clipped case. First of all, the category of a line is found
based on nine regions given below. All nine regions are assigned codes.
Each code is of 4 bits. If both endpoints of the line have end bits zero, then
the line is considered to be visible.
The center area has the code 0000, i.e., region 5 is considered the rectangular clipping window.
I. This algorithm uses the clipping window as shown in the following figure. The minimum coordinate for the clipping region is (XWmin, YWmin) and the maximum coordinate for the clipping region is (XWmax, YWmax).
II. We will use 4 bits to divide the entire region. These 4 bits represent the Top, Bottom, Right, and Left of the region as shown in the following figure. Here, the TOP and LEFT bits are set to 1 because it is the TOP-LEFT corner.
9.2.2 Algorithm:
1. Read two end points of the line say P1(x1, y1) and P2 (x2, y2).
2. Read two corners (left-top and right-bottom)of the window, say
(Wx1, Wy1 and Wx2,Wy2).
3. Assign the region codes for two endpoints P1 and P2 using following
steps:
Initialize code with bits 0000
Set Bit 1 - if (x < Wx1)
Set Bit 2 - if (x> Wx2)
Set Bit 3 - if (y < Wy2)
Set Bit 4 - if (y> Wy1)
4. Check for visibility of line P1 P2.
a) If region codes for both endpoints P1 and P2 are zero then the
line is completely visible.Hence draw the line and go to step 9.
b) If region codes for endpoints are not zero and the logical AND of
them is also nonzero then the line is completely invisible, so
reject the line and go to step 9.
c) If region codes for two endpoints do not satisfy the conditions in
4a) and 4b) the line is partially visible.
5. Determine the intersecting edge of the clipping window by inspecting
the region codes of two endpoints.
a) If region codes for both the end points are non-zero, find
intersection points P1’and P2’ with boundary edges of clipping
window with respect to point P1 and point P2, respectively
b) If region code for anyone end point is non- zero then find
intersection point P1’ or P2’ with the boundary edge of the
clipping window with respect to it.
6. Divide the line segments considering intersection points.
7. Reject the line segment if any one end point of it appears outsides the
clipping window.
8. Draw the remaining line segments.
9. Stop.
9.2.3 Program:
#include<iostream>
#include<stdlib.h>
#include<math.h>
#include<graphics.h>
#include<dos.h>
using namespace std;
typedef struct coordinate
{
int x,y; char code[4];
}PT;
void drawwindow();
void drawline(PT p1,PT p2);
PT setcode(PT p);
int visibility(PT p1,PT p2);
PT resetendpt(PT p1,PT p2);
int main()
{
int gd=DETECT,v,gm;
initgraph(&gd,&gm,(char*)"");
settextstyle(1,0,2);
PT p1,p2,p3,p4,ptemp;
cout<<"\nEnter x1 and y1\n";
cin>>p1.x>>p1.y;
cout<<"\nEnter x2 and y2\n";
cin>>p2.x>>p2.y;
drawwindow();
delay(1500);
drawline(p1,p2);
delay(1500);
cleardevice();
delay(1500);
p1=setcode(p1);
p2=setcode(p2);
v=visibility(p1,p2);
delay(1500);
switch(v)
{
case 0:
drawwindow();
delay(1500);
drawline(p1,p2);
break;
case 1:
drawwindow();
delay(1500);
break;
case 2:
p3=resetendpt(p1,p2);
p4=resetendpt(p2,p1);
drawwindow();
delay(1500);
drawline(p3,p4);
break;
}
settextstyle(1,0,2);
delay(5000);
closegraph();
}
void drawwindow()
{
line(150,100,450,100);
line(450,100,450,350);
line(450,350,150,350);
line(150,350,150,100);
}
void drawline(PT p1,PT p2)
{
line(p1.x,p1.y,p2.x,p2.y);
}
PT setcode(PT p) //for setting the 4 bit code
{
PT ptemp;
if(p.y<100)
ptemp.code[0]='1'; //Top
else
ptemp.code[0]='0';
if(p.y>350)
ptemp.code[1]='1'; //Bottom
else
ptemp.code[1]='0';
if(p.x>450)
ptemp.code[2]='1'; //Right
else
ptemp.code[2]='0';
if(p.x<150)
ptemp.code[3]='1'; //Left
else ptemp.code[3]='0';
ptemp.x=p.x;
ptemp.y=p.y;
return(ptemp);
}
9.2.4 OUTPUT:
The primary use of clipping in computer graphics is to remove objects, lines, or line segments that are outside the viewing pane. The viewing transformation is insensitive to the position of points relative to the viewing volume (especially those points behind the viewer), and it is necessary to remove these points before generating the view. The algorithm includes, excludes or partially includes the line based on whether:
Both endpoints are in the viewport region (Bitwise OR of endpoints =
0000): trivial accept.
Both endpoints share at least one non-visible region, which implies that
the line does not cross the visible region. ( Bitwise AND of endpoints ≠
0000): trivial reject.
Both endpoints are in different regions: in case of this nontrivial situation
the algorithm finds one of the two points that is outside the viewport
region (there will be at least one point outside). The intersection of the
outpoint and extended viewport border is then calculated (i.e. with the
parametric equation for the line), and this new point replaces the outpoint.
The algorithm repeats until a trivial accept or reject occurs.
This algorithm uses a four digit (bit) code to indicate which of nine
regions contain the end point of line. The four bit codes are called region
codes or outcodes. These codes identify the Location of the point relative
to the boundaries of the clipping rectangle
Each bit position in the region code is used to indicate one of the four
relative co-ordinate positions of the point with respect to the clipping
window: to the left, right, top or bottom. The rightmost bit is the first bit
and the bits are set to 1 based on the following scheme:
Set Bit 1: if the end point is to the left of the window.
Set Bit 2: if the end point is to the right of the window.
Set Bit 3: if the end point is below the window.
Set Bit 4: if the end point is above the window.
Otherwise, the bit is set to zero.
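A minimal Python sketch of this outcode scheme and of the trivial accept/reject tests follows; it assumes the Top-Bottom-Right-Left bit order described above and reuses the window limits (150, 450, 100, 350) from the drawwindow() routine of the earlier listing.

# Minimal sketch of region (out)codes and the trivial accept / reject tests.
XW_MIN, XW_MAX, YW_MIN, YW_MAX = 150, 450, 100, 350   # same window as drawwindow()

TOP, BOTTOM, RIGHT, LEFT = 8, 4, 2, 1

def outcode(x, y):
    code = 0
    if y < YW_MIN:  code |= TOP      # screen y grows downwards, as in the C listing
    if y > YW_MAX:  code |= BOTTOM
    if x > XW_MAX:  code |= RIGHT
    if x < XW_MIN:  code |= LEFT
    return code

def classify(p1, p2):
    c1, c2 = outcode(*p1), outcode(*p2)
    if c1 == 0 and c2 == 0:
        return "completely visible (trivial accept)"
    if c1 & c2:
        return "completely invisible (trivial reject)"
    return "partially visible (clipping case)"

print(classify((200, 150), (400, 300)))   # both endpoints inside the window
print(classify((10, 10), (20, 40)))       # both share TOP and LEFT -> reject
print(classify((100, 200), (300, 200)))   # crosses the left edge -> clip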
5. It calculates end-points very quickly and rejects and accepts lines quickly.
Disadvantages:
1. Clipping window region can only be in rectangular shape. It does not
allow any other polygonal shaped window.
2. This method requires a considerable amount of memory due to lot of
operations. So wastage of memory for storing intermediate polygons.
3. Sutherland Hodgeman clipping algorithm can’t produce connected
areas.
4. X-axis and Y-axis has to be parallel to the edges of rectangular shaped
window
5. If the end points of the line segment lie diagonally, i.e. one at the right and the other at the left, and one at the top and the other at the bottom, then even if the line does not pass through the clipping region the logical AND of its region codes will be 0000, so it cannot be trivially rejected.
Limitations:
1. Clipping window region can be rectangular in shape only and no other
polygonal shaped window is allowed.
2. Edges of rectangular shaped clipping window has to be parallel to the
x-axis and y axis.
3. If the end points of the line segment lie at extreme limits, i.e. one at the right and the other at the left, and one at the top and the other at the bottom (diagonally), then even if the line does not pass through the clipping region the logical AND of its region codes will be 0000, implying that the line segment needs further processing, when in fact it does not intersect the window.
Uses:
1. Sutherland Hodgeman polygon clipping algorithm is used for polygon
clipping.
2. In this algorithm, all the vertices of the polygon are clipped against each edge of the clipping window.
3. The polygon is first clipped against the left edge of the clipping window to get new vertices of the polygon.
4. These new vertices are then used to clip the polygon against the remaining three edges, i.e. the right edge, top edge and bottom edge, of the clipping window.
Advantages:
1 It is very useful for clipping polygons. It clips a polygon against all
edges of the clipping region.
2 It is easy to implement.
3 It works by extending each line of the convex clip polygon. It steps
from vertex to vertex and adds 0, 1, or 2 vertices at each step to the
output list.
4 It selects only those vertices which are on the visible side of the
subject polygon.
Disadvantages:
1. It clips to each window boundary one at a time.
2. It has a “Random” edge choice
3. It has Redundant edge-line cross calculations
(Worked example: find the region code of the line endpoint X2 = 3, Y2 = 8.)
Step 1: Left Clip: Clip a line of a polygon which lies outside, on the left
side of the clipping window.
Step 2: Right Clip: Clip a line of a polygon which lies outside, on the right
side of the clipping window.
Step 3: Top Clip: Clip a line of a polygon which lies outside, on the upper
part of the clipping window.
Step 4: Bottom Clip: Clip a line of a polygon which lies outside, on the
bottom part of the clipping window.
9.3.3 Program:
#include<iostream.h>
#include<conio.h>
#include<graphics.h>
#define round(a) ((int)(a+0.5))
int k;
float xmin,ymin,xmax,ymax,arr[20],m;
void clipl(float x1,float y1,float x2,float y2)
{
if(x2-x1)
m=(y2-y1)/(x2-x1);
else
m=100000;
if(x1 >= xmin && x2 >= xmin)
{
arr[k]=x2;
arr[k+1]=y2;
k+=2;
}
if(y1 > ymax && y2 <= ymax) 2d Geometric
Transformations & Clipping
{
arr[k]=x1+m*(ymax-y1);
arr[k+1]=ymax;
arr[k+2]=x2;
arr[k+3]=y2;
k+=4;
}
if(y1 <= ymax && y2 > ymax)
{
arr[k]=x1+m*(ymax-y1);
arr[k+1]=ymax;
k+=2;
}
}
void clipr(float x1,float y1,float x2,float y2)
{
if(x2-x1)
{
m=(y2-y1)/(x2-x1);
}
else
{
m=100000;
}
k+=4;
}
if(y1 >= ymin && y2 < ymin)
{
arr[k]=x1+m*(ymin-y1);
arr[k+1]=ymin;
k+=2;
}
}
void main()
{
int gdriver=DETECT,gmode,n,poly[20];
float xi,yi,xf,yf,polyy[20];
clrscr();
cout<<"Coordinates of rectangular clip window :\nxmin,ymin
:";
cin>>xmin>>ymin;
cout<<"xmax,ymax :";
cin>>xmax>>ymax;
cout<<"\n\nPolygon to be clipped :\nNumber of sides :";
cin>>n;
cout<<"Enter the coordinates :";
for(int i=0;i<2*n;i++)
cin>>polyy[i];
polyy[i]=polyy[0];
polyy[i+1]=polyy[1];
for(i=0;i < 2*n+2;i++)
poly[i]=round(polyy[i]);
initgraph(&gdriver,&gmode,"C:\\TC\\BGI");
setcolor(RED);
rectangle(xmin,ymax,xmax,ymin);
cout<<"\t\tUNCLIPPED POLYGON";
setcolor(WHITE);
fillpoly(n,poly);
getch();
cleardevice();
k=0;
for(i=0;i < 2*n;i+=2)
clipl(polyy[i],polyy[i+1],polyy[i+2],polyy[i+3]);
n=k/2;
for(i=0;i < k;i++)
polyy[i]=arr[i];
polyy[i]=polyy[0];
polyy[i+1]=polyy[1];
k=0;
for(i=0;i < 2*n;i+=2)
clipt(polyy[i],polyy[i+1],polyy[i+2],polyy[i+3]);
n=k/2;
for(i=0;i < k;i++)
polyy[i]=arr[i];
polyy[i]=polyy[0];
polyy[i+1]=polyy[1];
k=0;
for(i=0;i < 2*n;i+=2)
clipr(polyy[i],polyy[i+1],polyy[i+2],polyy[i+3]);
n=k/2;
for(i=0;i < k;i++)
polyy[i]=arr[i];
polyy[i]=polyy[0];
polyy[i+1]=polyy[1];
k=0;
for(i=0;i < 2*n;i+=2)
clipb(polyy[i],polyy[i+1],polyy[i+2],polyy[i+3]);
for(i=0;i < k;i++)
poly[i]=round(arr[i]);
if(k)
fillpoly(k/2,poly);
setcolor(RED);
rectangle(xmin,ymax,xmax,ymin);
cout<<"\tCLIPPED POLYGON";
getch();
closegraph();
}
9.3.4 Output:
9.4 CONCLUSION
Polygon clipping is an important operation that computers execute all the
time. Often, it is possible to feed a weird polygon to an algorithm and
retrieve an incorrect result. One of the vertices may disappear, or a ghost
vertex may be created. However, no existing clipping algorithm is totally perfect.
Therefore, the hunt for the perfect clipping algorithm is still open.
https://www.javatpoint.com/sutherland-hodgeman-polygon-clipping
https://www.geeksforgeeks.org/polygon-clipping-sutherland-hodgman-algorithm-please-change-bmp-imagesjpeg-png/
https://www.ques10.com/p/11168/explain-sutherland-hodgeman-algorithm-for-polygo-1/
*****
MODULE IX
10
IMPLEMENTATION OF 3D
TRANSFORMATIONS (ONLY
COORDINATES CALCULATION)
Unit Structure
10.1 Objectives
10.2 Definition
10.3 Introduction
10.4 3-D Transformations
10.4.1 Geometric transformation
10.5 Translation
10.6 Scaling
10.7 Rotation
10.8 Reflection
10.9 Shearing
10.10 Summary
10.11 Unit End Exercise
10.12 Reference for further reading
10.1 OBJECTIVES
In Computer graphics:
Transformation is a process of modifying and re-positioning the
existing graphics.
3D Transformations take place in a three dimensional plane.
3D Transformations are important and a bit more complex than 2D
Transformations.
Transformations are helpful in changing the position, size, orientation,
shape etc of the object.
10.2 DEFINITION
2-D geometric transformations are useful for showing charts, graphs, etc., but various objects need to be shown in three dimensions. The three dimensions are:
x-coordinate - which shows width
y-coordinate - which shows height
z-coordinate - which shows depth
All three axes are perpendicular to each other, hence it is not easy to display a 3-D object on a 2-D plane. Thus there should be a projection from 3-D onto the 2-D plane; this is known as geometric transformation.
10.3 INTRODUCTION
3-D Transformations:
There are 2 types of transformation:
a. Geometric transformation
b. Coordinate transformation
10.5 TRANSLATION
It means shifting the position of an object from one place to another
without changing its shape.
T = [1  0  0  0
     0  1  0  0
     0  0  1  0
     tx ty tz 1]
The homogeneous coordinates are:
[x' y' z' 1] = [x y z 1] [1  0  0  0
                          0  1  0  0
                          0  0  1  0
                          tx ty tz 1]
[x' y' z' 1] = [x+tx  y+ty  z+tz  1]
x' = x + tx, where tx = translation in x-direction
y' = y + ty, where ty = translation in y-direction
z' = z + tz, where tz = translation in z-direction
Q. Perform a translation of an object having coordinates (2, 3, 4) with translation distance (2, 4, 6).
Solution:
T = [1  0  0  0
     0  1  0  0
     0  0  1  0
     tx ty tz 1]
tx= translation distance =2
ty= translation distance =4
tz= translation distance =6
Object Coordinates are A=2, B= 3 & C=4
The homogenous coordinates are:
[A' B' C' 1] = [2 3 4 1] [1 0 0 0
                          0 1 0 0
                          0 0 1 0
                          2 4 6 1]
A’= 4,
B’= 7
C’=10
After translation object position is (4,7,10)
Q. Perform a translation of an object having coordinates (4,10,12) with
translation distance(2,3,4)
Solution:
T = [1  0  0  0
     0  1  0  0
     0  0  1  0
     tx ty tz 1]
tx= translation distance =2
ty= translation distance =3
tz= translation distance =4
Object Coordinates are A=4, B= 10 & C=12
Scaling an object
Q. Perform scaling on an object with coordinates (2,2,1) with scaling
factors (1,2,2)
Solution:
S = [Sx 0  0  0
     0  Sy 0  0
     0  0  Sz 0
     0  0  0  1]
Object coordinate positions are (2, 2, 1)
Scaling factors are Sx = 1, Sy = 2, Sz = 2
[P' Q' R' 1] = [2 2 1 1] * [1 0 0 0
                            0 2 0 0
                            0 0 2 0
                            0 0 0 1]
= [2*1  2*2  1*2  1]
= [2 4 2 1]
Therefore, the coordinates after scaling are (2,4,2)
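The two worked examples above can be checked with a small NumPy sketch (not part of the unit) that builds the translation and scaling matrices in the same row-vector homogeneous form.

# Minimal sketch: 3D translation and scaling as homogeneous row-vector products.
import numpy as np

def translation(tx, ty, tz):
    T = np.identity(4)
    T[3, :3] = [tx, ty, tz]          # translation terms in the last row
    return T

def scaling(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

P = np.array([2, 3, 4, 1])           # homogeneous point (2, 3, 4)
print(P @ translation(2, 4, 6))      # -> [4. 7. 10. 1.]

Q = np.array([2, 2, 1, 1])           # homogeneous point (2, 2, 1)
print(Q @ scaling(1, 2, 2))          # -> [2. 4. 2. 1.]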
10.7 ROTATION ABOUT THE ORIGIN
The 3-dimensional transformation matrix of rotation in the anticlockwise direction about each axis is given below.
(Worked examples: rotation of an object by an angle of 45° about the x-axis, the y-axis and the z-axis, using the rotation matrices Rx, Ry and Rz respectively.)
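For reference, the standard anticlockwise rotation matrices about the three axes, written for the row-vector convention P' = P . R used above, can be sketched in NumPy as follows; the test point is illustrative and the 45° angle matches the worked examples.

# Minimal sketch: 3D rotation matrices (anticlockwise, row-vector convention).
import numpy as np

def rot_x(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0, 0],
                     [0, c, s, 0],
                     [0, -s, c, 0],
                     [0, 0, 0, 1]])

def rot_y(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, -s, 0],
                     [0, 1, 0, 0],
                     [s, 0, c, 0],
                     [0, 0, 0, 1]])

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0, 0],
                     [-s, c, 0, 0],
                     [0, 0, 1, 0],
                     [0, 0, 0, 1]])

P = np.array([1.0, 0.0, 0.0, 1.0])   # homogeneous point (1, 0, 0)
theta = np.radians(45)
print(P @ rot_z(theta))              # -> approx [0.707 0.707 0. 1.]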
10.8 REFLECTION
In 3-D, reflection is performed about a coordinate plane (the xy, yz or xz plane). The point, in homogeneous form, is multiplied by the corresponding reflection matrix, which simply negates the coordinate perpendicular to that plane.
10.9 SHEARING
Shearing is done along the x-axis, the y-axis and the z-axis.
Shearing along the x-axis: the shearing factors are Shxy and Shxz.
Shearing along the y-axis: the shearing factors are Shyz and Shyx.
Shearing along the z-axis: the shearing factors are Shzx and Shzy.
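A minimal NumPy sketch of the x-axis shear as a coordinate calculation, reading Shxy and Shxz as the contributions of y and z to the new x (consistent with the 2-D X-shear equations earlier in this material); the shear factors and point are illustrative.

# Minimal sketch: shear along the x-direction, x' = x + Shxy*y + Shxz*z, y' = y, z' = z.
import numpy as np

def shear_x(shxy, shxz):
    H = np.identity(4)
    H[1, 0] = shxy        # contribution of y to the new x (row-vector convention)
    H[2, 0] = shxz        # contribution of z to the new x
    return H

P = np.array([1.0, 2.0, 3.0, 1.0])
print(P @ shear_x(2, 1))  # -> [1 + 2*2 + 1*3, 2, 3, 1] = [8. 2. 3. 1.]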
10.10 SUMMARY
3-D Transformation is the process of manipulating the view of a three-D
object with respect to its original position by modifying its physical
attributes through various methods of transformation like Translation,
Scaling, Rotation, Shear, etc.
UNIT X
11
OUTPUT PRIMITIVES & ITS
ALGORITHM
Unit Structure
11.0 Objective
11.1 Introduction
11.2 Fractals and self-similarity overview
11.2.1 Geometric Fractals
11.2.2 Generating fractals
11.2.3 Classification of fractals
11.2.4 Characteristics of fractals
11.2.5 Elements of fractals
11.2.6 Application of fractals
11.2.7 Fractals in real life
11.2.8 Algorithms of fractals
11.3 Koch curve
11.3.1 Construction of Koch curve
11.4 Sierpinski Triangle
11.5 Summary
11.6 Unit End Exercise
11.7 References for Future Reading
11.0 OBJECTIVE
This chapter will able you to understand the following concept:
● Fractals and self-similarity
● Characteristics of fractals
● Elements of fractals
● Fractals algorithm
● Fractals applications
● Types of fractals Koch curve and Sirpenski Triangle.
● Koch curve construction and implementation
● Sierpinski Triangle way of fractals
11.1 INTRODUCTION
Man-made or artificial objects usually have either flat surfaces, which can be described with polygons, or smooth curved surfaces, which we have just studied. But objects occurring in nature often have rough, jagged, random edges. Attempting to draw things like mountains, trees, rivers or lightning bolts directly with lines or polygons requires a lot of specification. It is easier to let the machine do the work and draw the jagged lines: we would just give the endpoints to the computer and let the machine draw the jagged line between them. The lines should closely approximate the behaviour of nature, so the result will look better. This whole scenario is what we call fractals.
We can use the computer to easily generate self-similar fractal curves. The self-similar drawing can be done by a self-referencing procedure. A curve is composed of N self-similar pieces, each scaled by 1/s. Of course, a computer routine must terminate, which a true fractal does not.
A fractal line is fine for the path of a lightning bolt, but for something like a three-dimensional mountain range we need a fractal surface. There are several ways to extend the fractal idea to surfaces; the one we can present is based on triangles.
Being able to use the computer to generate fractal curves means that the user can easily generate realistic coast lines, mountain peaks or lightning bolts without concern for all the small bends and wiggles; the user need only give the endpoints.
The algorithm presented is not very efficient in that it calculates each point twice. Care must be taken with the seed values to ensure that the same fractal edge is generated for two bordering triangles.
● It is too irregular to be easily described in traditional Euclidean geometric language.
● Its nature is self-similar (at least approximately or stochastically).
● It has a topological dimension less than its Hausdorff dimension (but this requirement is not met by space-filling curves such as the Hilbert curve).
● It has a simple and recursive definition.
Because of this feature of appearing similar at all levels of magnification, fractals are often considered 'infinitely complex'; examples are mountain ranges, clouds and lightning bolts. However, not every self-similar object is a fractal: the real line (a straight Euclidean line), for instance, is formally self-similar but fails to have other fractal characteristics.
151
Quasi-self-similar fractals, it presents the small copies of the entire Output Primitives & Its
object fractal in distorted and degenerate forms. Fractals can be Algorithm
represented by recurrence relations and it is usually quasi-self-similar
but not exactly self-similar.
▪ Statistical self-similarity: This is the weakest type of self-similarity; the fractal has numerical or statistical measures which are preserved across scales. Most definitions of "fractal" imply some form of statistical self-similarity, fractal dimension itself being a numerical measure preserved across scales. Random fractals are statistically self-similar, but neither exactly nor quasi-self-similar.
• Invariant fractal sets: These fractals are formed with the help of nonlinear transformations. The main examples of this type are self-squaring fractals, in which a squaring function in complex space is used (such as the Mandelbrot set), and self-inverse fractals, formed with inversion procedures.
11.2.6 Application of Fractals:
▪ Classification of histopathology slides in medicine
▪ Generation of new music
▪ Generation of various art forms
▪ Signal and image compression
▪ Seismology
▪ Computer and video game design, especially computer graphics for
organic environments and as part of procedural generation
▪ Fractography and fracture mechanics
▪ Fractal antennas — Small size antennas using fractal shapes
▪ Neo-hippies t-shirts and other fashion.
▪ Generation of patterns for camouflage, such as MARPAT.
▪ Digital sundial
▪ This subdivision is done for each side, resulting in 4 triangles. The central points are then randomly moved up or down, within a defined range. The process is repeated for a finite number of steps, halving the range at each iteration. The recursive nature of the algorithm ensures self-similarity: the pieces of the object are statistically similar to each other.
▪ Fractal patterns have been found in the paintings of the American artist Jackson Pollock. While Pollock's paintings appear to be composed of chaotic dripping and splattering, computer analysis has found fractal patterns in his work.
Algorithm FRACTAL-LINE-SUBDIVIDE (X1, Y1, Z1, X2, Y2, Z2, S, N)
Draws a fractal line between points X1, Y1, Z1 and X2, Y2, Z2
Arguments X1, Y1, Z1 the point to start the line
X2, Y2, Z2 the point to stop the line
S offset scale factor
N the desired depth of recursion
FSEED seed for fractal pattern
Local XMID, YMID, ZMID coordinates at which to break the line
BEGIN
IF N=0 THEN
BEGIN
Recursion stops, so just draw the line segment
LINE-ABSA3 (X2, Y2, Z2)
END
ELSE
BEGIN
Calculate the halfway point
XMID ← (X1 + X2)/2 + S * GAUSS;
YMID ← (Y1 + Y2)/2 + S * GAUSS;
ZMID ← (Z1 + Z2)/2 + S * GAUSS;
Draw the two halves
FRACTAL-SUBDIVIDE (X1, Y1, Z1, XMID, YMID, ZMID, S/2,
N-1);
FRACTAL-SUBDIVIDE (X2, Y2, Z2, XMID, YMID, ZMID, S/2,
N-1);
END;
RETURN;
END
We approximate a Gaussian distribution by averaging several uniformly distributed random numbers. Half the numbers are added and half subtracted to provide a zero mean.
Algorithm GAUSS calculates an approximate Gaussian between -1 and 1.
Local I for summing samples
BEGIN
GAUSS ← 0;
FOR I ← 1 TO 6 DO GAUSS ← GAUSS + RND – RND;
GAUSS ← GAUSS / 6;
RETURN;
END;
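A rough 2-D Python sketch of the FRACTAL-LINE-SUBDIVIDE idea, with the Gaussian approximated by averaged uniform samples as in algorithm GAUSS; the function names and parameter values are illustrative.

# Minimal 2-D sketch of fractal line subdivision by random midpoint displacement.
import random

def gauss_approx():
    # Approximate a zero-mean Gaussian by averaging uniform samples.
    return sum(random.random() - random.random() for _ in range(6)) / 6

def fractal_line(p1, p2, s, n, points):
    if n == 0:
        points.append(p2)                          # recursion stops: keep the endpoint
        return
    xmid = (p1[0] + p2[0]) / 2 + s * gauss_approx()
    ymid = (p1[1] + p2[1]) / 2 + s * gauss_approx()
    mid = (xmid, ymid)
    fractal_line(p1, mid, s / 2, n - 1, points)    # first half, with halved offset scale
    fractal_line(mid, p2, s / 2, n - 1, points)    # second half

pts = [(0.0, 0.0)]
fractal_line((0.0, 0.0), (100.0, 0.0), s=20.0, n=4, points=pts)
print(len(pts), "points on the jagged line")       # 1 + 2**4 = 17 points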
Classification of fractals:
Fractals can also be classified according to their self-similarity. There are
three types of self-similarity found in fractals:
Exact self-similarity: This is the strongest type of self-similarity; the
fractal appears identical at different scales. Fractals defined by iterated
function systems often display exact self-similarity.
Quasi-self-similarity: This is a loose form of self-similarity; the fractal
appears approximately (but not exactly) identical at different scales.
Quasi-self-similar fractals contain small copies of the entire fractal in
distorted and degenerate forms. Fractals defined by recurrence relations
are usually quasi-self-similar but not exactly self-similar.
Statistical self-similarity: This is the weakest type of self-similarity; the
fractal has numerical or statistical measures which are preserved across
scales. Most reasonable definitions of "fractal" trivially imply some form
of statistical self-similarity. (Fractal dimension itself is a numerical
measure which is preserved across scales.) Random fractals are examples
of fractals which are statistically self-similar, but neither exactly nor
quasi-self-similar.
Preliminaries:
The Koch curve can be expressed by the following rewrite system (L-system):
Alphabet: F
Constants: +, -
Axiom: F
Production rule: F → F+F--F+F
Construction:
Step 1: First draw an equilateral triangle and divide each of its sides into three equal parts.
Step 4: Divide each outer side into thirds. You can see that the 2nd generation of triangles covers a bit of the first. These three line segments should not be parted in three.
Step 5: Draw an equilateral triangle on each middle part.
Steps:
1. Take pen and paper and draw an equilateral triangle.
2. Split each edge into two equal parts.
3. Divide the triangle into 4 smaller triangles.
4. Repeat step 3 for the remaining triangles as many times as you want.
At each recursive stage, replace each line segment on the curve with three
shorter ones, each of equal length, such that:
1. the three line segments replacing a single segment from the previous stage always make 120° angles at each junction between two consecutive segments, with the first and last segments of the curve either parallel to the base of the given equilateral triangle or forming a 60° angle with it.
2. no pair of line segments forming the curve at any stage ever intersect,
except possibly at their endpoints.
3. every line segment of the curve remains on, or within, the given equilateral triangle, outside the central downward-pointing equilateral triangular regions that are external to the limiting curve.
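A compact Python sketch (not from the unit) that generates the Koch curve from the L-system F → F+F--F+F given above and converts the resulting string into points, turning 60° at '+' and '-' symbols.

# Minimal sketch: Koch curve points from the L-system F -> F+F--F+F.
import math

def koch_string(iterations):
    s = "F"                                   # axiom
    for _ in range(iterations):
        s = s.replace("F", "F+F--F+F")        # production rule
    return s

def koch_points(iterations, step=1.0):
    x, y, angle = 0.0, 0.0, 0.0
    pts = [(x, y)]
    for ch in koch_string(iterations):
        if ch == "F":
            x += step * math.cos(angle)
            y += step * math.sin(angle)
            pts.append((x, y))
        elif ch == "+":
            angle += math.radians(60)         # turn left 60 degrees
        elif ch == "-":
            angle -= math.radians(60)         # turn right 60 degrees
    return pts

pts = koch_points(3)
print(len(pts), "points")   # 4**3 segments give 65 points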
We can describe the amount of variation in the object detail with a number called the fractal dimension. Unlike the Euclidean dimension, this number is not necessarily an integer. The fractal dimension of an object is sometimes referred to as the fractional dimension.
Special Properties:
The Sierpinski triangle curve is also called the Sierpinski gasket. It was Mandelbrot who first gave it the name "Sierpinski's gasket." Sierpinski described the construction to give an example of "a curve simultaneously Cantorian and Jordanian, of which every point is a point of ramification." Basically, this means that it is a curve that crosses itself at every point.
11.5 SUMMARY
A fractal line is fine for the path of a lightning bolt, but for something like a three-dimensional mountain range we need a fractal surface. There are several ways to extend the fractal idea to surfaces; the one presented here is based on triangles. Fractal characteristics were first introduced by Alain Boutot. All scales and observations are taken into consideration when working with fractals. In the language of Euclidean geometry a fractal cannot easily be described, as it is irregular both locally and globally. One of the oldest fractals described as a mathematical curve is the Koch snowflake (also known as the Koch star and Koch island). The Koch snowflake is based on the Koch curve, which appeared in a 1904 paper titled "On a continuous curve without tangents, constructible from elementary geometry" by the Swedish mathematician Helge von Koch. Mathematicians put much effort into giving a topological characterization of the continuum, and from this we got many examples of topological spaces with additional properties, among which the Sierpinski gasket is the most famous.
4. Explain construction of Koch curve in detail.
5. List and explain the application of fractals.
6. Explain characteristics of fractals.
*****
MODULE XI
12
INTRODUCTION TO ANIMATION
Unit Structure
12.0 Objectives
12.1 Introduction
12.2 Summary
12.3 References
12.4 Unit End Exercises
12.0 OBJECTIVES
Animation refers to the movement on the screen of the display device
created by displaying a sequence of still images. Animation is the
technique of designing, drawing, making layouts and preparation of
photographic series which are integrated into the multimedia and gaming
products. Animation connects the exploitation and management of still
images to generate the illusion of movement. A person who creates animations is called an animator; he or she uses various computer technologies to capture pictures and then animate them in the desired sequence.
Animation includes all the visual changes on the screen of display devices.
These are:
3. Change in color as shown in fig:
12.1 Introduction
Application Areas of Animation:
1. Education and Training: Animation is used in school, colleges and
training centers for education purpose. Flight simulators for aircraft
are also animation based.
2. Entertainment: Animation methods are now commonly used in
making motion pictures, music videos and television shows, etc.
3. Computer Aided Design (CAD): One of the best applications of
computer animation is Computer Aided Design and is generally
referred to as CAD. One of the earlier applications of CAD was
automobile designing. But now almost all types of designing are done
by using CAD application, and without animation, all these work can't
be possible.
4. Advertising: This is one of the significant applications of computer
animation. The most important advantage of an animated
advertisement is that it takes very less space and capture people
attention.
5. Presentation: Animated presentation is the most effective way to represent an idea. It is used to describe financial, statistical, mathematical, scientific and economic data.
Animation Functions:
1. Morphing: Morphing is an animation function which is used to transform an object's shape from one form to another. It is one of the most complicated transformations. This function is commonly used in movies, cartoons, advertisements, and computer games.
3. In the third step, the key point of the first image transforms to a corresponding key point of the second image as shown in the 3rd object of the figure.
2. Warping: The warping function is similar to morphing. It distorts only the initial image so that it matches the final image, and no fade occurs in this function.
3. Tweening: Tweening is the short form of 'inbetweening.' Tweening is the process of generating intermediate frames between the initial and final images. This function is popular in the film industry.
If the window moves in a backward direction, then the object appears to move in the forward direction; if the window moves in a forward direction, then the object appears to move in a backward direction.
5. Zooming: In zooming, the window is fixed on an object and its size is changed, so the object also appears to change in size. When the window is made smaller about a fixed center, the objects inside the window appear more enlarged. This feature is known as Zooming In. When we increase the size of the window about the fixed center, the objects inside the window appear smaller. This feature is known as Zooming Out.
Examples:
Aim: Write a Program to draw animation using increasing circles filled
with different colors and patterns.
Code:
1. #include<graphics.h>
2. #include<conio.h>
3. void main()
4. {
5. int gd=DETECT, gm, i, x, y;
6. initgraph(&gd, &gm, "C:\\TC\\BGI");
7. x=getmaxx()/3;
8. y=getmaxx()/3;
9. setbkcolor(WHITE);
10. setcolor(BLUE);
11. for(i=1;i<=8;i++)
12. {
13. setfillstyle(i,i);
14. delay(20);
15. circle(x, y, i*20);
16. floodfill(x-2+i*20,y,BLUE);
17. }
18. getch();
19. closegraph();
20. }
Output:
12.2 SUMMARY
Aim: Write a Program to make a moving colored car using inbuilt
functions.
Code:
1. #include<graphics.h>
2. #include<conio.h>
3. int main()
4. {
5. int gd=DETECT, gm, i, maxx, cy;
6. initgraph(&gd, &gm, "C:\\TC\\BGI");
7. setbkcolor(WHITE);
8. setcolor(RED);
9. maxx = getmaxx();
10. cy = getmaxy()/2;
11. for(i=0;i<maxx-140;i++)
12. {
13. cleardevice();
14. line(0+i,cy-20, 0+i, cy+15);
15. line(0+i, cy-20, 25+i, cy-20);
16. line(25+i, cy-20, 40+i, cy-70);
17. line(40+i, cy-70, 100+i, cy-70);
18. line(100+i, cy-70, 115+i, cy-20);
19. line(115+i, cy-20, 140+i, cy-20);
20. line(0+i, cy+15, 18+i, cy+15);
21. circle(28+i, cy+15, 10);
22. line(38+i, cy+15, 102+i, cy+15);
23. circle(112+i, cy+15,10);
24. line(122+i, cy+15 ,140+i,cy+15);
25. line(140+i, cy+15, 140+i, cy-20);
26. rectangle(50+i, cy-62, 90+i, cy-30);
27. setfillstyle(1,BLUE);
28. floodfill(5+i, cy-15, RED);
29. setfillstyle(1, LIGHTBLUE);
30. floodfill(52+i, cy-60, RED);
31. delay(10);
32. }
33. getch();
34. closegraph();
35. return 0;
36. }
Output:
Aim: C program for bouncing ball graphics animation
In this program, we first draw a red color ball on screen having center at
(x, y) and then erases it using cleardevice function. We again draw this
ball at center (x, y + 5), or (x, y - 5) depending upon whether ball is
moving down or up. This will look like a bouncing ball. We will repeat
above steps until user press any key on keyboard.
Code:
1. #include <stdio.h>
2. #include <conio.h>
3. #include <graphics.h>
4. #include <dos.h>
5. int main() {
6. int gd = DETECT, gm;
7. int i, x, y, flag=0;
8. initgraph(&gd, &gm, "C:\\TC\\BGI");
9. /* get mid positions in x and y-axis */
10. x = getmaxx()/2;
11. y = 30;
12. while (!kbhit()) {
13. if(y >= getmaxy()-30 || y <= 30)
14. flag = !flag;
15. /* draw the red ball */
16. setcolor(RED);
17. setfillstyle(SOLID_FILL, RED);
18. circle(x, y, 30);
19. floodfill(x, y, RED);
20. /* delay for 50 milli seconds */
21. delay(50);
22. /* clears screen */
23. cleardevice();
24. if(flag){
25. y = y + 5;
26. } else {
27. y = y - 5;
28. }
29. }
30. getch();
31. closegraph();
32. return 0;
33. }
Output:
12.3 REFERENCES
1] Introduction to Computer Graphics: A Practical Learning Approach
By Fabio Ganovelli, Massimiliano Corsini, Sumanta Pattanaik, Marco Di
Benedetto
*****
MODULE XII
13
IMAGE ENHANCEMENT
TRANSFORMATION
Unit Structure
13.0 Objectives
13.1 Introduction
13.2 Summary
13.3 References
13.4 Unit End Exercises
13.0 OBJECTIVES
Intensity transformations are applied on images for contrast manipulation
or image thresholding. These are in the spatial domain, i.e. they are
performed directly on the pixels of the image at hand, as opposed to being
performed on the Fourier transform of the image.
13.1 INTRODUCTION
The following are commonly used intensity transformations:
1. Image Negatives (Linear)
2. Log Transformations
3. Power-Law (Gamma) Transformations
Image Negatives:
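The negative of an image with intensity levels in the range [0, L-1] is obtained by the transformation s = (L - 1) - r, which reverses the intensity levels. A minimal OpenCV/NumPy sketch (the input file name is only illustrative):

# Minimal sketch: image negative, s = (L - 1) - r for an 8-bit image (L = 256).
import cv2
import numpy as np

img = cv2.imread('D:/downloads/forest.jpg')   # file path is illustrative
negative = 255 - img                          # applied element-wise by NumPy
cv2.imshow('negative', negative)
cv2.waitKey(0)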
Log Transformations:
Mathematically, log transformations can be expressed as s = clog(1+r).
Here, s is the output intensity, r>=0 is the input intensity of the pixel, and
c is a scaling constant. c is given by 255/(log (1 + m)), where m is the
maximum pixel value in the image. It is done to ensure that the final pixel
value does not exceed (L-1), or 255.
Practically, log transformation maps a narrow range of low-intensity input values to a wide range of output values.
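A minimal OpenCV/NumPy sketch of the log transformation s = c log(1 + r); the input file name is only illustrative.

# Minimal sketch: log transformation s = c * log(1 + r), with c = 255 / log(1 + max(r)).
import cv2
import numpy as np

img = cv2.imread('D:/downloads/forest.jpg')          # file path is illustrative
c = 255 / np.log(1 + np.max(img))
log_image = np.array(c * np.log(1 + img.astype(np.float64)), dtype='uint8')
cv2.imshow('log transform', log_image)
cv2.waitKey(0)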
Consider the following input image.
13.2 SUMMARY
Power-Law (Gamma) Transformation:
"Gamma correction" is a term most of you might have heard. In this section, we will see what it means and why it matters.
The general form of the power-law (gamma) transformation function is
s = c * r^γ
where s and r are the output and input pixel values, respectively, and c and γ are positive constants. Like the log transformation, power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. Similarly, for γ > 1, we get the opposite result, which is shown in the figure below.
This is also known as gamma correction, gamma encoding or gamma compression. Don't get confused.
The curves below are generated for r values normalized from 0 to 1 and then multiplied by the scaling constant c corresponding to the bit size used. All the curves are scaled accordingly (see below):
But the main question is why we need this transformation, what’s the
benefit of doing so?
To understand this, we first need to know how our eyes perceive light. The human perception of brightness follows an approximate power function (as shown below), according to Stevens' power law for brightness perception.
See from the above figure, if we change input from 0 to 10, the output
changes from 0 to 50 (approx.) but changing input from 240 to 255 does
not really change the output value. This means that we are more sensitive
to changes in dark as compared to bright. You may have realized it
yourself as well!
But our camera does not work like this. Unlike human perception, a camera follows a linear relationship. This means that if the light falling on the camera is increased 2 times, the output will also increase 2-fold. The camera curve looks like this:
Let's verify by code that γ < 1 produces images that are brighter while γ > 1 results in images that are darker than intended.
Code:
import numpy as np
import cv2
# Load the image
img = cv2.imread('D:/downloads/forest.jpg')
# Apply Gamma=2.2 on the normalised image and then multiply by the scaling constant (for 8 bit, c=255)
gamma_two_point_two = np.array(255*(img/255)**2.2, dtype='uint8')
# Similarly, apply Gamma=0.4
gamma_point_four = np.array(255*(img/255)**0.4, dtype='uint8')
# Display the images side by side
img3 = cv2.hconcat([gamma_two_point_two, gamma_point_four])
cv2.imshow('a2', img3)
cv2.waitKey(0)
Output:
Original Image
Gamma Encoded Images
Below is the Python code to apply gamma correction.
import cv2
import numpy as np
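A minimal sketch of such a gamma routine, applying the power-law transformation for the three gamma values labelled below (the helper function name and file name are illustrative):

# Minimal sketch: apply s = 255 * (r / 255) ** gamma for several gamma values.
def apply_gamma(img, gamma):
    # Normalise to [0, 1], apply the power law, and scale back to 8 bits.
    return np.array(255 * (img / 255) ** gamma, dtype='uint8')

img = cv2.imread('D:/downloads/forest.jpg')      # file path is illustrative
for gamma in (0.5, 1.2, 2.2):
    cv2.imshow('Gamma = ' + str(gamma), apply_gamma(img, gamma))
cv2.waitKey(0)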
Gamma = 0.5:
Gamma = 1.2:
Gamma = 2.2:
13.3 REFERENCES
1] Introduction to Computer Graphics: A Practical Learning Approach
By Fabio Ganovelli, Massimiliano Corsini, Sumanta Pattanaik, Marco
Di Benedetto
13.4 UNIT END EXERCISE
Write a python code to perform gamma transformation.
*****
14
IMAGE ENHANCEMENT
TRANSFORMATION
Unit Structure
14.0 Objectives
14.1 Introduction
14.2 Summary
14.3 References
14.4 Unit End Exercises
14.0 OBJECTIVES
Piecewise-Linear Transformation Functions:
These functions, as the name suggests, are not entirely linear in nature.
However, they are linear between certain x-intervals. One of the most
commonly used piecewise-linear transformation functions is contrast
stretching.
Contrast can be defined as:
Contrast = (I_max - I_min)/(I_max + I_min)
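For example (illustrative values), an image whose darkest and brightest pixels have intensities I_min = 50 and I_max = 200 has contrast (200 - 50)/(200 + 50) = 0.6.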
This process expands the range of intensity levels in an image so that it
spans the full intensity of the camera/display. The figure below shows the
graph corresponding to the contrast stretching.
With (r1, s1), (r2, s2) as parameters, the function stretches the intensity
levels by essentially decreasing the intensity of the dark pixels and
increasing the intensity of the light pixels. If r1 = s1 = 0 and r2 = s2 = L-1,
the function becomes a straight dotted line in the graph (which gives no
effect). The function is monotonically increasing so that the order of
intensity levels between pixels is preserved.
14.1 INTRODUCTION
Below is the Python code to perform contrast stretching.
import cv2
import numpy as np
# Define parameters.
r1 = 70
s1 = 0
r2 = 140
s2 = 255
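A minimal sketch completing this snippet (continuing from the imports and parameter definitions above; the filenames are illustrative assumptions), applying the piecewise-linear mapping defined by (r1, s1) and (r2, s2) to every pixel:

# Piecewise-linear contrast stretching using the breakpoints (r1, s1) and (r2, s2)
def pixelVal(pix, r1, s1, r2, s2):
    if pix < r1:
        return (s1 / r1) * pix
    elif pix < r2:
        return ((s2 - s1) / (r2 - r1)) * (pix - r1) + s1
    else:
        return ((255 - s2) / (255 - r2)) * (pix - r2) + s2

img = cv2.imread('input_image.jpg', cv2.IMREAD_GRAYSCALE)  # illustrative filename

# Vectorize the mapping, apply it to the whole image and clip back to the 8-bit range
pixelVal_vec = np.vectorize(pixelVal)
contrast_stretched = np.clip(pixelVal_vec(img, r1, s1, r2, s2), 0, 255).astype('uint8')
cv2.imwrite('contrast_stretched.jpg', contrast_stretched)  # illustrative output filename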
Output:
Contrast Stretching:
Low-contrast images often occur due to improper illumination, non-linearity, or the small dynamic range of an imaging sensor. Contrast stretching increases the dynamic range of grey levels in the image.
The contrast stretching transform is given by:
S = l·r,           0 <= r < a
S = m·(r - a) + v,  a <= r < b
S = n·(r - b) + w,  b <= r <= L-1
where l, m, n are the slopes of the three segments, and v, w are the output values at the breakpoints r = a and r = b respectively.
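For instance (illustrative numbers), taking a = 50, b = 150, v = 30, w = 200 and L = 256, the slopes are l = v/a = 30/50 = 0.6, m = (w - v)/(b - a) = 170/100 = 1.7 and n = (L - 1 - w)/(L - 1 - b) = 55/105 ≈ 0.52, so the middle band of grey levels is stretched (slope > 1) while the dark and bright ends are compressed.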
In the code below:
minI - Minimum pixel value in the input image band
maxI - Maximum pixel value in the input image band
minO - Minimum pixel value in the output image
maxO - Maximum pixel value in the output image
Code:
# Example Python Program for contrast stretching
from PIL import Image

# Method to process the red band of the image
def normalizeRed(intensity):
    iI = intensity
    minI = 86
    maxI = 230
    minO = 0
    maxO = 255
    iO = (iI - minI) * ((maxO - minO) / (maxI - minI)) + minO
    return iO

# Method to process the green band of the image
def normalizeGreen(intensity):
    iI = intensity
    minI = 90
    maxI = 225
    minO = 0
    maxO = 255
    iO = (iI - minI) * ((maxO - minO) / (maxI - minI)) + minO
    return iO

# Method to process the blue band of the image
def normalizeBlue(intensity):
    iI = intensity
    minI = 100
    maxI = 210
    minO = 0
    maxO = 255
    iO = (iI - minI) * ((maxO - minO) / (maxI - minI)) + minO
    return iO

# Open the input image (filename is illustrative)
imageObject = Image.open('./forest.jpg')

# Split the red, green and blue bands from the image
multiBands = imageObject.split()

# Apply point operations that do contrast stretching on each colour band
normalizedRedBand = multiBands[0].point(normalizeRed)
normalizedGreenBand = multiBands[1].point(normalizeGreen)
normalizedBlueBand = multiBands[2].point(normalizeBlue)

# Create a new image from the contrast-stretched red, green and blue bands
normalizedImage = Image.merge("RGB", (normalizedRedBand, normalizedGreenBand, normalizedBlueBand))

# Save the contrast-stretched output (filename is illustrative)
normalizedImage.save('normalized_output.jpg')
Input Image for Contrast Stretching Operation:
14.2 SUMMARY
Clipping:
A special case of contrast stretching is clipping, where l = n = 0. It is used for noise reduction when the amplitude range of the input signal is known. It sets all grey levels below r1 to black (0) and all grey levels above r2 to white (1, i.e. the maximum intensity).
Clipping of arrays can be performed with NumPy's clip() function.
Parameters:
a: Array containing elements to clip.
a_min: Minimum value. If None, clipping is not performed on the lower interval edge.
a_max: Maximum value. If None, clipping is not performed on the upper interval edge.
Not more than one of a_min and a_max may be None.
Code:
# Python3 code to demonstrate the clip() function
import numpy as np

in_array = [1, 2, 3, 4, 5, 6, 7, 8]
print("Input array : ", in_array)

out_array = np.clip(in_array, a_min=2, a_max=6)
print("Output array : ", out_array)
Output:
Input array : [1, 2, 3, 4, 5, 6, 7, 8]
Output array : [2 2 3 4 5 6 6 6]
Code:
# Python3 code to demonstrate the clip() function
import numpy as np

in_array = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print("Input array : ", in_array)

# The clip limits below are illustrative; the result is the array clipped to [3, 8]
out_array = np.clip(in_array, a_min=3, a_max=8)
print("Output array : ", out_array)
Thresholding:
Another special case of contrast stretching is thresholding, where a = b = t. It is also used for noise reduction: grey levels below the threshold t are suppressed, while the grey levels beyond t are preserved.
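As a minimal sketch (the threshold value and filenames are illustrative assumptions), thresholding a grayscale image with NumPy can look like this:

import cv2
import numpy as np

img = cv2.imread('input_image.jpg', cv2.IMREAD_GRAYSCALE)  # illustrative filename
t = 128  # illustrative threshold

# Pixels at or below the threshold become black (0); the rest keep their grey level
preserved = np.where(img > t, img, 0).astype('uint8')

# Alternatively, produce a binary image: above t -> white (255), otherwise black (0)
binary = np.where(img > t, 255, 0).astype('uint8')

cv2.imwrite('thresholded.jpg', binary)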
Case-II:
● Brighten the desired range of grey levels.
● Preserve the background quality of the image.
Bit Extraction:
An 8-bit image can be represented in the form of bit planes. Each plane represents one bit of all pixel values: bit plane 7 contains the most significant bit (MSB) and bit plane 0 contains the least significant bit (LSB). The 4 MSB planes contain most of the visually significant data. This technique is useful for image compression and steganography.
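A minimal sketch of bit-plane slicing with NumPy (the filenames are illustrative assumptions):

import cv2
import numpy as np

img = cv2.imread('input_image.jpg', cv2.IMREAD_GRAYSCALE)  # illustrative filename

# Extract the 8 bit planes: plane 0 is the LSB, plane 7 is the MSB
planes = [((img >> k) & 1) * 255 for k in range(8)]  # scaled to 0/255 for display
cv2.imwrite('bit_plane_7_msb.jpg', planes[7])

# Reconstruct an approximation of the image from the 4 MSB planes only
approx = np.zeros_like(img)
for k in range(4, 8):
    approx |= ((img >> k) & 1) << k
cv2.imwrite('msb_only_approximation.jpg', approx)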
14.3 REFERENCE
1] Introduction to Computer Graphics: A Practical Learning Approach
By Fabio Ganovelli, Massimiliano Corsini, Sumanta Pattanaik, Marco
Di Benedetto
*****
15
IMAGE ENHANCEMENT
TRANSFORMATION
Unit Structure
15.0 Objectives
15.1 Introduction
15.2 Summary
15.3 References
15.4 Unit End Exercises
15.0 OBJECTIVES
Introduction to Histogram Equalization for Digital Image
Enhancement:
15.1 INTRODUCTION
Digital image:
Let's say you wish to draw a yellow, one-eyed creature dressed in blue overalls, called a minion, using only coloured dots. You start drawing by making tiny dots with coloured markers on a piece of paper. The coloured dots are arranged to form the image of the minion you wish to visualize. The resulting image probably looks something like the picture below. This is analogous to how images are rendered on a computer screen.
In common terminology, each possible value is referred to as a bin, and the count is referred to as its frequency. Hence, we just derived the colour histogram of the image with five bins. The bar plot is shown in the figure below; the x-axis is the bins and the y-axis is the frequencies. In histogram equalization, we are interested in the intensity histogram of the image. That means for each possible pixel intensity value (0 to 255), we count the number of pixels having the corresponding value.
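A minimal sketch of this counting with NumPy and Pillow (the filename is an illustrative assumption):

import numpy as np
from PIL import Image

# Load the image as grayscale and count pixels per intensity value 0..255
img_array = np.asarray(Image.open('input_image.jpg').convert(mode='L'))  # illustrative filename
histogram = np.bincount(img_array.flatten(), minlength=256)

print(histogram.shape)                # (256,) - one bin per possible intensity
print(histogram[0], histogram[255])   # counts of pure black and pure white pixels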
This image appears faded. Its pixel intensities are concentrated in the high-intensity region between 125 and 200.
This image appears dark. Its pixel intensities are concentrated in the low-intensity region between 5 and 95.
In both cases, histogram analysis revealed that in low-contrast images:
● the pixel intensities are concentrated in a narrow region, resulting in pixels with similar shades and giving the image a faded or dark appearance, and
● the cumulative histogram increases with a steep slope within a narrow region and is flat elsewhere.
Histogram equalization:
The contrast of an image is enhanced when the various shades in the image become more distinct. We can do so by darkening the darker pixels and brightening the lighter ones. This is equivalent to widening the range of pixel intensities. For good contrast, the following histogram characteristics are desirable:
● the pixel intensities are uniformly distributed across the full range of
values (each intensity value is equally probable), and
● the cumulative histogram is increasing linearly across the full
intensity range.
Histogram equalization modifies the distribution of pixel intensities to
achieve these characteristics.
15.2 SUMMARY
The core algorithm:
Step 1: Calculate normalized cumulative histogram
First, we calculate the normalized histogram of the image. Normalization is performed by dividing the frequency of each bin by the total number of pixels in the image. As a result, the maximum value of the cumulative histogram is 1. The following figure shows the normalized cumulative histogram of the same low-contrast image presented as Case 1 earlier.
Step 2: Derive the pixel-mapping lookup table
Each original intensity value i is mapped to a new value through the normalized cumulative histogram:
mapped_pixel_value(i) = (L-1) * normalized_cumulative_histogram(i)
where L = 256 for a typical 8-bit unsigned integer representation of pixel intensity.
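For example (illustrative numbers), if the normalized cumulative histogram at intensity 150 equals 0.42, that intensity is remapped to floor(255 × 0.42) = 107.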
As an intuition into how the mapping works, let’s refer to the normalized
cumulative histogram shown in the figure above. The minimum pixel
intensity value of 125 is transformed to 0.0. The maximum pixel intensity
value of 200 is transformed to 1.0. All the values in between are mapped
accordingly between these two values. Once multiplied by the maximum
possible intensity value (255), the resulting pixel intensities are now
distributed across the full intensity range.
Step 3: Transform pixel intensity of the original image with the
lookup table:
Once the lookup table is derived, the intensities of all pixels in the image are mapped to the new values. The result is an equalized image.
Python implementation:
Histogram equalization is available as a standard operation in various image processing libraries, such as OpenCV and Pillow. However, we will implement this operation from scratch. We will need two Python libraries: NumPy for numerical calculation and Pillow for image I/O. The easiest way to install these libraries is via the Python package installer pip. Enter the following commands in your terminal and you are set!
pip install numpy
pip install pillow
The full code is shown below, followed by a detailed explanation of the equalization process. To equalize your own image, simply edit img_filename and save_filename accordingly.
Image I/O:
To read from and write to image files, we will use the Pillow library. It reads image files as Image objects. These objects can be converted easily to NumPy arrays, and vice versa. The required I/O operations are coded as follows. For simplicity, let the image filename be input_image.jpg, residing in the same directory as the Python script.
import numpy as np
from PIL import Image

img_filename = 'input_image.jpg'
save_filename = 'output_image.jpg'

# load file as Pillow Image
img = Image.open(img_filename)
# convert to grayscale
imgray = img.convert(mode='L')
# convert to NumPy array
img_array = np.asarray(imgray)

# PERFORM HISTOGRAM EQUALIZATION AND ASSIGN OUTPUT TO eq_img_array

# convert NumPy array to Pillow Image and write to file
eq_img = Image.fromarray(eq_img_array, mode='L')
eq_img.save(save_filename)
Histogram Equalization:
The main algorithm can be implemented in only a few lines of code. In this example, the intensity-mapping lookup table is implemented as a 1D array where the index represents the original pixel intensity and the element at each index is the corresponding transformed value. There are various ways to perform the pixel intensity mapping; here a list comprehension is used, flattening the 2D image array before the mapping and reshaping it afterwards.
"""
STEP 1: Normalized cumulative histogram
"""#flatten image array and calculate histogram via binning
194
Computer Graphics histogram_array = np.bincount(img_array.flatten(),
and Image Processing
minlength=256)#normalize
num_pixels = np.sum(histogram_array)
histogram_array = histogram_array/num_pixels#cumulative histogram
chistogram_array = np.cumsum(histogram_array)
"""
STEP 2: Pixel mapping lookup table
"""
transform_map = np.floor(255 * chistogram_array).astype(np.uint8)
"""
STEP 3: Transformation
"""# flatten image array into 1D list
img_list = list(img_array.flatten())# transform pixel values to equalize
eq_img_list = [transform_map[p] for p in img_list]# reshape and write
back into img_array
eq_img_array = np.reshape(np.asarray(eq_img_list), img_array.shape)
Let's look at the histogram equalization output for the two images presented earlier. For each result, the upper two images show the original and equalized images; the improvement in contrast is clearly observed. The lower two images show the histogram and cumulative histogram, comparing the original and equalized images. After histogram equalization, the pixel intensities are distributed across the whole intensity range. The cumulative histograms increase linearly as expected, while exhibiting a staircase pattern. This is expected, as the pixel intensities of the original image were stretched into a wider range, which creates gaps of bins with zero frequency between adjacent non-zero bins, appearing as flat segments in the cumulative histogram.
Case 1: Unequalized_Hawkes_Bay_NZ.jpg
Case 1: Image, histograms and cumulative histograms before and after equalization.
Case 2: lena_dark.png
Case 2: Image, histograms and cumulative histograms before and after equalization.
15.3 REFERENCE
1] Introduction to Computer Graphics: A Practical Learning Approach
By Fabio Ganovelli, Massimiliano Corsini, Sumanta Pattanaik,
Marco Di Benedetto
2] Computer Graphics: Principles and Practice in C, by James D. Foley, Andries van Dam, Steven K. Feiner and John F. Hughes, 2002.
*****