Introduction to Computer Graphics
Version 1.1, January 2016
David J. Eck
Hobart and William Smith Colleges
Contents

Preface

1 Introduction
   1.1 Painting and Drawing
   1.2 Elements of 3D Graphics
   1.3 Hardware and Software

2 Two-Dimensional Graphics
   2.1 Pixels, Coordinates, and Colors
      2.1.1 Pixel Coordinates
      2.1.2 Real-number Coordinate Systems
      2.1.3 Aspect Ratio
      2.1.4 Color Models
   2.2 Shapes
      2.2.1 Basic Shapes
      2.2.2 Stroke and Fill
      2.2.3 Polygons, Curves, and Paths
   2.3 Transforms
      2.3.1 Viewing and Modeling
      2.3.2 Translation
      2.3.3 Rotation
      2.3.4 Combining Transformations
      2.3.5 Scaling
      2.3.6 Shear
      2.3.7 Window-to-Viewport
      2.3.8 Matrices and Vectors
   2.4 Hierarchical Modeling
      2.4.1 Building Complex Objects
      2.4.2 Scene Graphs
      2.4.3 The Transform Stack
   2.5 Java Graphics2D
      2.5.1 Graphics2D
      2.5.2 Shapes
      2.5.3 Stroke and Fill
      2.5.4 Transforms
      2.5.5 BufferedImage and Pixels
   2.6 HTML Canvas Graphics
      2.6.1 The 2D Graphics Context
      2.6.2 Shapes
      2.6.3 Stroke and Fill
      2.6.4 Transforms
      2.6.5 Auxiliary Canvases
      2.6.6 Pixel Manipulation
      2.6.7 Images
   2.7 SVG: A Scene Description Language
      2.7.1 SVG Document Structure
      2.7.2 Shapes, Styles, and Transforms
      2.7.3 Polygons and Paths
      2.7.4 Hierarchical Models
      2.7.5 Animation

APPENDICES

B Blender
   B.1 Blender Basics
      B.1.1 The 3D View
      B.1.2 Adding and Transforming Objects
      B.1.3 Edit Mode
      B.1.4 Light, Material, and Texture
      B.1.5 Saving Your Work
      B.1.6 More Features
   B.2 Blender Modeling
      B.2.1 Text
      B.2.2 Curves
      B.2.3 Proportional Editing

E Glossary
Preface
I have taught computer graphics every couple of years or so for almost 30 years. As the
field developed, I had to make major changes almost every time I taught the course, but for
much of that time, I was able to structure the course primarily around OpenGL 1.1, a graphics
API that was in common use for an extended period. OpenGL 1.1 supported fundamental
graphics concepts in a way that was fairly easy to use. OpenGL is still widely supported, but,
for various reasons, the parts of it that were easy to use have been officially dropped from
the latest versions (although they are in practice supported on most desktop computers). The
result is a much more powerful API but one that is much harder to learn. In particular, modern
OpenGL in its pure form does not make for a good introduction to graphics programming.
My approach in this book is to use a subset of OpenGL 1.1 to introduce the fundamental
concepts of three-dimensional graphics. I then go on to cover WebGL—a version of OpenGL
that runs in a web browser—as an example of the more modern approach to computer graph-
ics. While OpenGL makes up the major foundation for the course, the real emphasis is on
fundamental concepts such as geometric modeling and transformations; hierarchical modeling
and scene graphs; color, lighting, and textures; and animation.
Chapter 1 is a short overview of computer graphics. It introduces many concepts that will
be covered in much more detail in the rest of the book.
Chapter 2 covers two-dimensional graphics in Java, JavaScript, and SVG, with an emphasis
on ideas such as transformations and scene graphs that carry over to three dimensions.
Chapter 3 and Chapter 4 cover OpenGL 1.1. While OpenGL 1.1 is fairly primitive by
today’s standards, it includes many basic features that are still fundamental to three-dimensional
computer graphics, in a form that is an easier starting point for people new to 3D graphics.
Only part of the API is covered.
Chapter 5 covers three.js, a higher-level 3D graphics API for Web graphics using JavaScript.
This chapter shows how fundamental concepts can be used in a higher-level interface.
Chapter 6 and Chapter 7 cover WebGL, a modern version of OpenGL for graphics on the
Web. WebGL is very low-level, and it requires the programmer to write “shader programs” to
implement many features that are built into OpenGL 1.1. Looking at the implementation is an
opportunity to understand more deeply how computers actually make 3D images.
And Chapter 8 looks briefly at some advanced techniques that are not possible in OpenGL.
Appendix A contains brief introductions to three programming languages that are used in the
book: Java, C, and JavaScript. Appendix B is meant to get readers started with the most basic
uses of Blender, a sophisticated 3D modeling program. I have found that introducing students
to Blender is a good way to help them develop their three-dimensional intuition. Appendix C
contains even briefer introductions to two 2D graphics programs, Gimp and Inkscape.
∗ ∗ ∗
Professor David J. Eck
Department of Mathematics and Computer Science
Hobart and William Smith Colleges
300 Pulteney Street
Geneva, New York 14456, USA
Email: [email protected]
WWW: http://math.hws.edu/eck/
January, 2016
Chapter 1
Introduction
The term “computer graphics” refers to anything involved in the creation or manipulation
of images on computers, including animated images. It is a very broad field, and one in
which changes and advances seem to come at a dizzying pace. It can be difficult for a beginner
to know where to start. However, there is a core of fundamental ideas that are part of the
foundation of most applications of computer graphics. This book attempts to cover those foun-
dational ideas, or at least as many of them as will fit into a one-semester college-level course.
While it is not possible to cover the entire field in a first course—or even a large part of it—this
should be a good place to start.
This short chapter provides an overview and introduction to the material that will be covered
in the rest of the book, without going into a lot of detail.
1.1 Painting and Drawing

An image on a computer screen is made up of small squares called pixels, arranged in rows
and columns, and the color of each pixel is given by numerical color values stored in a part of
memory called the frame buffer. When the color values in the frame buffer are changed,
the pixels on the screen will be changed to match, and the displayed image will change.
A computer screen used in this way is the basic model of raster graphics. The term
“raster” technically refers to the mechanism used on older vacuum tube computer monitors:
An electron beam would move along the rows of pixels, making them glow. The beam was
moved across the screen by powerful magnets that would deflect the path of the electrons. The
stronger the beam, the brighter the glow of the pixel, so the brightness of the pixels could be
controlled by modulating the intensity of the electron beam. The color values stored in the
frame buffer were used to determine the intensity of the electron beam. (For a color screen,
each pixel had a red dot, a green dot, and a blue dot, which were separately illuminated by the
beam.)
A modern flat-screen computer monitor is not a raster in the same sense. There is no
moving electron beam. The mechanism that controls the colors of the pixels is different for
different types of screen. But the screen is still made up of pixels, and the color values for all
the pixels are still stored in a frame buffer. The idea of an image consisting of a grid of pixels,
with numerical color values for each pixel, defines raster graphics.
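As a data structure, a frame buffer can be as simple as a single array holding one color value
per pixel. Here is a minimal sketch in Java (the first of the programming languages used in
this book); it illustrates the concept only and is not the API of any particular system:

    // Conceptual model of a frame buffer: one color value per pixel,
    // stored row by row in a single array.
    class FrameBuffer {
        final int width, height;
        final int[] pixels;   // one 32-bit color value per pixel
        FrameBuffer(int width, int height) {
            this.width = width;
            this.height = height;
            this.pixels = new int[width * height];
        }
        // Change the stored color of the pixel in column x of row y.
        // On a real display, the screen is then redrawn to match.
        void setPixel(int x, int y, int color) {
            pixels[y * width + x] = color;
        }
    }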
∗ ∗ ∗
Although images on the computer screen are represented using pixels, specifying individual
pixel colors is not always the best way to create an image. Another way is to specify the basic
geometric objects that it contains, shapes such as lines, circles, triangles, and rectangles. This
is the idea that defines vector graphics: Represent an image as a list of the geometric shapes
that it contains. To make things more interesting, the shapes can have attributes, such as
the thickness of a line or the color that fills a rectangle. Of course, not every image can be
composed from simple geometric shapes. This approach certainly wouldn’t work for a picture
of a beautiful sunset (or for most any other photographic image). However, it works well for
many types of images, such as architectural blueprints and scientific illustrations.
In fact, early in the history of computing, vector graphics was even used directly on computer
screens. When the first graphical computer displays were developed, raster displays were too
slow and expensive to be practical. Fortunately, it was possible to use vacuum tube technology
in another way: The electron beam could be made to directly draw a line on the screen, simply
by sweeping the beam along that line. A vector graphics display would store a display list
of lines that should appear on the screen. Since a point on the screen would glow only very
briefly after being illuminated by the electron beam, the graphics display would go through the
display list over and over, continually redrawing all the lines on the list. To change the image,
it would only be necessary to change the contents of the display list. Of course, if the display
list became too long, the image would start to flicker because a line would have a chance to
visibly fade before its next turn to be redrawn.
But here is the point: For an image that can be specified as a reasonably small number of
geometric shapes, the amount of information needed to represent the image is much smaller
using a vector representation than using a raster representation. Consider an image made up
of one thousand line segments. For a vector representation of the image, you only need to store
the coordinates of two thousand points, the endpoints of the lines. This would take up only a
few kilobytes of memory. To store the image in a frame buffer for a raster display would require
much more memory. Similarly, a vector display could draw the lines on the screen more quickly
than a raster display could copy the same image from the frame buffer to the screen. (As
soon as raster displays became fast and inexpensive, however, they quickly displaced vector
displays because of their ability to display all types of images reasonably well.)
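To put rough numbers on the memory comparison above (the figures are illustrative
assumptions, not anything specified by a real system): storing two endpoints per segment,
with two coordinates per endpoint at 4 bytes each, gives

    1000 segments * 2 endpoints * 2 coordinates * 4 bytes = 16,000 bytes

while a frame buffer for even a modest 800-by-600 display, at 3 bytes of color data per
pixel, needs

    800 * 600 * 3 bytes = 1,440,000 bytes

which is ninety times as much memory.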
∗ ∗ ∗
The divide between raster graphics and vector graphics persists in several areas of computer
graphics. For example, it can be seen in a division between two categories of programs that
can be used to create images: painting programs and drawing programs. In a painting
program, the image is represented as a grid of pixels, and the user creates an image by assigning
colors to pixels. This might be done by using a “drawing tool” that acts like a painter’s brush,
or even by tools that draw geometric shapes such as lines or rectangles. But the point in a
painting program is to color the individual pixels, and it is only the pixel colors that are saved.
To make this clearer, suppose that you use a painting program to draw a house, then draw a
tree in front of the house. If you then erase the tree, you’ll only reveal a blank background, not
a house. In fact, the image never really contained a “house” at all—only individually colored
pixels that the viewer might perceive as making up a picture of a house.
In a drawing program, the user creates an image by adding geometric shapes, and the image
is represented as a list of those shapes. If you place a house shape (or collection of shapes making
up a house) in the image, and you then place a tree shape on top of the house, the house is
still there, since it is stored in the list of shapes that the image contains. If you delete the tree,
the house will still be in the image, just as it was before you added the tree. Furthermore, you
should be able to select one of the shapes in the image and move it or change its size, so drawing
programs offer a rich set of editing operations that are not possible in painting programs. (The
reverse, however, is also true.)
A practical program for image creation and editing might combine elements of painting and
drawing, although one or the other is usually dominant. For example, a drawing program might
allow the user to include a raster-type image, treating it as one shape. A painting program
might let the user create “layers,” which are separate images that can be layered one on top of
another to create the final image. The layers can then be manipulated much like the shapes in
a drawing program (so that you could keep both your house and your tree in separate layers,
even if in the image the house is in back of the tree).
Two well-known graphics programs are Adobe Photoshop and Adobe Illustrator. Photoshop
is in the category of painting programs, while Illustrator is more of a drawing program. In
the world of free software, the GNU image-processing program, Gimp, is a good alternative to
Photoshop, while Inkscape is a reasonably capable free drawing program. Short introductions
to Gimp and Inkscape can be found in Appendix C.
∗ ∗ ∗
The divide between raster and vector graphics also appears in the field of graphics file
formats. There are many ways to represent an image as data stored in a file. If the original
image is to be recovered from the bits stored in the file, the representation must follow some
exact, known specification. Such a specification is called a graphics file format. Some popular
graphics file formats include GIF, PNG, JPEG, and SVG. Most images used on the Web are
GIF, PNG, or JPEG. Modern web browsers also have support for SVG images.
GIF, PNG, and JPEG are basically raster graphics formats; an image is specified by storing
a color value for each pixel. GIF is an older file format, which has largely been superseded
by PNG, but you can still find GIF images on the web. (The GIF format supports animated
images, so GIFs are often used for simple animations on Web pages.) GIF uses an indexed
color model with a maximum of 256 colors. PNG can use either indexed or full 24-bit color,
while JPEG is meant for full color images.
The amount of data necessary to represent a raster image can be quite large. However,
the data usually contains a lot of redundancy, and the data can be “compressed” to reduce its
size. GIF and PNG use lossless data compression, which means that the original image
can be recovered perfectly from the compressed data. JPEG uses a lossy data compression
algorithm, which means that the image that is recovered from a JPEG file is not exactly the
same as the original image; some information has been lost. This might not sound like a good
idea, but in fact the difference is often not very noticeable, and using lossy compression usually
permits a greater reduction in the size of the compressed data. JPEG generally works well for
photographic images, but not as well for images that have sharp edges between different colors.
It is especially bad for line drawings and images that contain text; PNG is the preferred format
for such images.
SVG, on the other hand, is fundamentally a vector graphics format (although SVG im-
ages can include raster images). SVG is actually an XML-based language for describing two-
dimensional vector graphics images. “SVG” stands for “Scalable Vector Graphics,” and the
term “scalable” indicates one of the advantages of vector graphics: There is no loss of quality
when the size of the image is increased. A line between two points can be represented at any
scale, and it is still the same perfect geometric line. If you try to greatly increase the size of
a raster image, on the other hand, you will find that you don’t have enough color values for
all the pixels in the new image; each pixel from the original image will be expanded to cover a
rectangle of pixels in the scaled image, and you will get multi-pixel blocks of uniform color. The
scalable nature of SVG images makes them a good choice for web browsers and for graphical
elements on your computer’s desktop. And indeed, some desktop environments are now using
SVG images for their desktop icons.
∗ ∗ ∗
A digital image, no matter what its format, is specified using a coordinate system. A
coordinate system sets up a correspondence between numbers and geometric points. In two
dimensions, each point is assigned a pair of numbers, which are called the coordinates of the
point. The two coordinates of a point are often called its x -coordinate and y-coordinate,
although the names “x” and “y” are arbitrary.
A raster image is a two-dimensional grid of pixels arranged into rows and columns. As
such, it has a natural coordinate system in which each pixel corresponds to a pair of integers
giving the number of the row and the number of the column that contain the pixel. (Even in
this simple case, there is some disagreement as to whether the rows should be numbered from
top-to-bottom or from bottom-to-top.)
For a vector image, it is natural to use real-number coordinates. The coordinate system for
an image is arbitrary to some degree; that is, the same image can be specified using different
coordinate systems. I do not want to say a lot about coordinate systems here, but they will be a
major focus of a large part of the book, and they are even more important in three-dimensional
graphics than in two dimensions.
1.2 Elements of 3D Graphics

In 3D graphics, a scene is built out of geometric objects arranged in a three-dimensional world.
A complex shape can be constructed as a collection of more basic shapes, if it is not itself
considered to be basic. To make a two-dimensional image
of the scene, the scene is projected from three dimensions down to two dimensions. Projection
is the equivalent of taking a photograph of the scene. Let’s look at how it all works in a little
more detail.
First, the geometry. . . . We start with an empty 3D space or “world.” Of course, this
space exists only conceptually, but it’s useful to think of it as real and to be able to visualize it
in your mind. The space needs a coordinate system that associates each point in the space with
three numbers, usually referred to as the x, y, and z coordinates of the point. This coordinate
system is referred to as “world coordinates.”
We want to build a scene inside the world, made up of geometric objects. For example,
we can specify a line segment in the scene by giving the coordinates of its two endpoints,
and we can specify a triangle by giving the coordinates of its three vertices. The smallest
building blocks that we have to work with, such as line segments and triangles, are called
geometric primitives. Different graphics systems make different sets of primitives available,
but in many cases only very basic shapes such as lines and triangles are considered primitive.
A complex scene can contain a large number of primitives, and it would be very difficult to
create the scene by giving explicit coordinates for each individual primitive. The solution,
as any programmer should immediately guess, is to chunk together primitives into reusable
components. For example, for a scene that contains several automobiles, we might create a
geometric model of a wheel. An automobile can be modeled as four wheels together with
models of other components. And we could then use several copies of the automobile model in
the scene. Note that once a geometric model has been designed, it can be used as a component
in more complex models. This is referred to as hierarchical modeling.
Suppose that we have constructed a model of a wheel out of geometric primitives. When
that wheel is moved into position in the model of an automobile, the coordinates of all of its
primitives will have to be adjusted. So what exactly have we gained by building the wheel? The
point is that all of the coordinates in the wheel are adjusted in the same way. That is, to place
the wheel in the automobile, we just have to specify a single adjustment that is applied to the
wheel as a whole. The type of “adjustment” that is used is called a geometric transform (or
geometric transformation). A geometric transform is used to adjust the size, orientation, and
position of a geometric object. When making a model of an automobile, we build one wheel.
We then apply four different transforms to the wheel model to add four copies of the wheel
to the automobile. Similarly, we can add several automobiles to a scene by applying different
transforms to the same automobile model.
The three most basic kinds of geometric transform are called scaling, rotation, and translation.
A scaling transform is used to set the size of an object, that is, to make it bigger or
smaller by some specified factor. A rotation transform is used to set an object’s orientation,
by rotating it by some angle about some specific axis. A translation transform is used to set
the position of an object, by displacing it by a given amount from its original position. In
this book, we will meet these transformations first in two dimensions, where they are easier to
understand. But it is in 3D graphics that they become truly essential.
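As a preview of what these look like in a real API, here is a sketch using Java's standard
AffineTransform class, which the book returns to in Section 2.5; the particular factors and
angles are arbitrary examples:

    import java.awt.geom.AffineTransform;

    AffineTransform scale = AffineTransform.getScaleInstance(2, 2);         // double the size
    AffineTransform rotate = AffineTransform.getRotateInstance(Math.PI/4);  // rotate by 45 degrees
    AffineTransform translate = AffineTransform.getTranslateInstance(5, 0); // move 5 units right

    // Transforms can also be combined into one. Applied to a point, this
    // combined transform scales first, then rotates, then translates.
    AffineTransform combined = new AffineTransform();
    combined.translate(5, 0);
    combined.rotate(Math.PI / 4);
    combined.scale(2, 2);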
∗ ∗ ∗
Next, appearance. . . . Geometric shapes by themselves are not very interesting. You
have to be able to set their appearance. This is done by assigning attributes to the geometric
objects. An obvious attribute is color, but getting a realistic appearance turns out to be a lot
more complicated than simply specifying a color for each primitive. In 3D graphics, instead of
color, we usually talk about material. The term material here refers to the properties that
determine the intrinsic visual appearance of a surface. Essentially, this means how the surface
interacts with light that hits the surface. Material properties can include a basic color as well
as other properties such as shininess, roughness, and transparency.
One of the most useful kinds of material property is a texture. In most general terms,
a texture is a way of varying material properties from point-to-point on a surface. The most
common use of texture is to allow different colors for different points. This is done by using
a 2D image as a texture, which can be applied to a surface so that the image looks like it is
“painted” onto the surface. However, texture can also refer to changing values for things like
transparency or “bumpiness.” Textures allow us to add detail to a scene without using a huge
number of geometric primitives; instead, you can use a smaller number of textured primitives.
A material is an intrinsic property of an object, but the actual appearance of the object
also depends on the environment in which the object is viewed. In the real world, you don’t
see anything unless there is some light in the environment. The same is true in 3D graphics:
you have to add simulated lighting to a scene. There can be several sources of light in a
scene. Each light source can have its own color, intensity, and direction or position. The light
from those sources will then interact with the material properties of the objects in the scene.
Support for lighting in a graphics system can range from fairly simple to very complex and
computationally intensive.
∗ ∗ ∗
Finally, the image. . . . In general, the ultimate goal of 3D graphics is to produce 2D
images of the 3D world. The transformation from 3D to 2D involves viewing and projection.
The world looks different when seen from different points of view. To set up a point of view,
we need to specify the position of the viewer and the direction that the viewer is looking. It
is also necessary to specify an “up” direction, a direction that will be pointing upwards in the
final image. This can be thought of as placing a “virtual camera” into the scene. Once the
view is set up, the world as seen from that point of view can be projected into 2D. Projection
is analogous to taking a picture with the camera.
The final step in 3D graphics is to assign colors to individual pixels in the 2D image. This
process is called rasterization, and the whole process of producing an image is referred to as
rendering the scene.
In many cases the ultimate goal is not to create a single image, but to create an animation,
consisting of a sequence of images that show the world at different times. In an animation, there
are small changes from one image in the sequence to the next. Almost any aspect of a scene
can change during an animation, including coordinates of primitives, transformations, material
properties, and the view. For example, an object can be made to grow over the course of an
animation by gradually increasing the scale factor in a scaling transformation that is applied to
the object. And changing the view during an animation can give the effect of moving or flying
through the scene. Of course, it can be difficult to compute the necessary changes. There are
many techniques to help with the computation. One of the most important is to use a “physics
engine,” which computes the motion and interaction of objects based on the laws of physics.
(However, you won’t learn about physics engines in this book.)
1.3 Hardware and Software

The primary graphics API used in this book is OpenGL, which dates back to 1992, when 3D
graphics programming was mostly done on expensive, specialized computers built for high-end
graphics applications. (Today, you probably have more graphics computing power on your smart phone.)
OpenGL is supported by the graphics hardware in most modern computing devices, including
desktop computers, laptops, and many mobile devices. This section will give you a bit of
background about the history of OpenGL and about the graphics hardware that supports it.
In the first desktop computers, the contents of the screen were managed directly by the
CPU. For example, to draw a line segment on the screen, the CPU would run a loop to set the
color of each pixel that lies along the line. Needless to say, graphics could take up a lot of the
CPU’s time. And graphics performance was very slow, compared to what we expect today. So
what has changed? Computers are much faster in general, of course, but the big change is that
in modern computers, graphics processing is done by a specialized component called a GPU,
or Graphics Processing Unit. A GPU includes processors for doing graphics computations; in
fact, it can include a large number of such processors that work in parallel to greatly speed up
graphical operations. It also includes its own dedicated memory for storing things like images
and lists of coordinates. GPU processors have very fast access to data that is stored in GPU
memory—much faster than their access to data stored in the computer’s main memory.
To draw a line or perform some other graphical operation, the CPU simply has to send
commands, along with any necessary data, to the GPU, which is responsible for actually car-
rying out those commands. The CPU offloads most of the graphical work to the GPU, which
is optimized to carry out that work very quickly. The set of commands that the GPU
understands makes up the API of the GPU. OpenGL is an example of a graphics API, and most GPUs
support OpenGL in the sense that they can understand OpenGL commands, or at least that
OpenGL commands can efficiently be translated into commands that the GPU can understand.
OpenGL is not the only graphics API. The best-known alternative is probably Direct3D,
a 3D graphics API used for Microsoft Windows. OpenGL is more widely available, since it is
not limited to Microsoft, but Direct3D is supported by most graphics cards, and it has often
introduced new features earlier than OpenGL.
∗ ∗ ∗
I have said that OpenGL is an API, but in fact it is a series of APIs that have been subject
to repeated extension and revision. The current version, in early 2015, is 4.5, and it is very
different from the 1.0 version from 1992. Furthermore, there is a specialized version called
OpenGL ES for “embedded systems” such as mobile phones and tablets. And there is also
WebGL, for use in Web browsers, which is basically a port of OpenGL ES 2.0. It’s useful to
know something about how and why OpenGL has changed.
First of all, you should know that OpenGL was designed as a “client/server” system. The
server, which is responsible for controlling the computer’s display and performing graphics com-
putations, carries out commands issued by the client. Typically, the server is a GPU, including
its graphics processors and memory. The server executes OpenGL commands. The client is the
CPU in the same computer, along with the application program that it is running. OpenGL
commands come from the program that is running on the CPU. However, it is actually possible
to run OpenGL programs remotely over a network. That is, you can execute an application
program on a remote computer (the OpenGL client), while the graphics computations and
display are done on the computer that you are actually using (the OpenGL server).
The key idea is that the client and the server are separate components, and there is a
communication channel between those components. OpenGL commands and the data that
they need are communicated from the client (the CPU) to the server (the GPU) over that
channel. The capacity of the channel can be a limiting factor in graphics performance. Think
of drawing an image onto the screen. If the GPU can draw the image in microseconds, but it
takes milliseconds to send the data for the image from the CPU to the GPU, then the great speed
of the GPU is irrelevant—most of the time that it takes to draw the image is communication
time.
For this reason, one of the driving factors in the evolution of OpenGL has been the desire
to limit the amount of communication that is needed between the CPU and the GPU. One
approach is to store information in the GPU’s memory. If some data is going to be used several
times, it can be transmitted to the GPU once and stored in memory there, where it will be
immediately accessible to the GPU. Another approach is to try to decrease the number of
OpenGL commands that must be transmitted to the GPU to draw a given image.
OpenGL draws primitives such as triangles. Specifying a primitive means specifying coor-
dinates and attributes for each of its vertices. In the original OpenGL 1.0, a separate command
was used to specify the coordinates of each vertex, and a command was needed each time the
value of an attribute changed. To draw a single triangle would require three or more commands.
Drawing a complex object made up of thousands of triangles would take many thousands of
commands. Even in OpenGL 1.1, it became possible to draw such an object with a single
command instead of thousands. All the data for the object would be loaded into arrays, which
could then be sent in a single step to the GPU. Unfortunately, if the object was going to be
drawn more than once, then the data would have to be retransmitted each time the object was
drawn. This was fixed in OpenGL 1.5 with Vertex Buffer Objects. A VBO is a block of
memory in the GPU that can store the coordinates or attribute values for a set of vertices.
This makes it possible to reuse the data without having to retransmit it from the CPU to the
GPU every time it is used.
Similarly, OpenGL 1.1 introduced texture objects to make it possible to store several
images on the GPU for use as textures. This means that texture images that are going to
be reused several times can be loaded once into the GPU, so that the GPU can easily switch
between images without having to reload them.
∗ ∗ ∗
As new capabilities were added to OpenGL, the API grew in size. But the growth was still
outpaced by the invention of new, more sophisticated techniques for doing graphics. Some of
these new techniques were added to OpenGL, but the problem is that no matter how many
features you add, there will always be demands for new features—as well as complaints that all
the new features are making things too complicated! OpenGL was a giant machine, with new
pieces always being tacked onto it, but still not pleasing everyone. The real solution was to
make the machine programmable. With OpenGL 2.0, it became possible to write programs
to be executed as part of the graphical computation in the GPU. The programs are run on the
GPU at GPU speed. A programmer who wants to use a new graphics technique can write a
program to implement the feature and just hand it to the GPU. The OpenGL API doesn’t have
to be changed. The only thing that the API has to support is the ability to send programs to
the GPU for execution.
The programs are called shaders (although the term doesn’t really describe what most of
them actually do). The first shaders to be introduced were vertex shaders and fragment
shaders. When a primitive is drawn, some work has to be done at each vertex of the primitive,
such as applying a geometric transform to the vertex coordinates or using the attributes and
global lighting environment to compute the color of that vertex. A vertex shader is a program
that can take over the job of doing such “per-vertex” computations. Similarly, some work has
to be done for each pixel inside the primitive. A fragment shader can take over the job of
performing such “per-pixel” computations. (Fragment shaders are also called pixel shaders.)
The idea of programmable graphics hardware was very successful—so successful that in
OpenGL 3.0, the usual per-vertex and per-fragment processing was deprecated (meaning that
its use was discouraged). And in OpenGL 3.1, it was removed from the OpenGL standard,
although it is still present as an optional extension. In practice, all the original features of
OpenGL are still supported in desktop versions of OpenGL and will probably continue to be
available in the future. On the embedded system side, however, with OpenGL ES 2.0 and later,
the use of shaders is mandatory, and a large part of the OpenGL 1.1 API has been completely
removed. WebGL, the version of OpenGL for use in web browsers, is based on OpenGL ES 2.0,
and it also requires shaders to get anything at all done. Nevertheless, we will begin our study of
OpenGL with version 1.1. Most of the concepts and many of the details from that version are
still relevant, and it offers an easier entry point for someone new to 3D graphics programming.
OpenGL shaders are written in GLSL (OpenGL Shading Language). Like OpenGL itself,
GLSL has gone through several versions. We will spend some time later in the course studying
GLSL ES 1.0, the version used with WebGL 1.0 and OpenGL ES 2.0. GLSL uses a syntax
similar to the C programming language.
∗ ∗ ∗
As a final remark on GPU hardware, I should note that the computations that are done for
different vertices are pretty much independent, and so can potentially be done in parallel. The
same is true of the computations for different fragments. In fact, GPUs can have hundreds or
thousands of processors that can operate in parallel. Admittedly, the individual processors are
much less powerful than a CPU, but then typical per-vertex and per-fragment computations
are not very complicated. The large number of processors, and the large amount of parallelism
that is possible in graphics computations, makes for impressive graphics performance even on
fairly inexpensive GPUs.
Chapter 2
Two-Dimensional Graphics
With this chapter, we begin our study of computer graphics by looking at the two-
dimensional case. Things are simpler, and a lot easier to visualize, in 2D than in 3D, but most
of the ideas that are covered in this chapter will also be very relevant to 3D.
The chapter begins with four sections that examine 2D graphics in a general way, without
tying it to a particular programming language or graphics API. The coding examples in these
sections are written in pseudocode that should make sense to anyone with enough programming
background to be reading this book. In the next three sections, we will take quick looks at 2D
graphics in three particular languages: Java with Graphics2D, JavaScript with HTML <canvas>
graphics, and SVG. We will see how these languages use many of the general ideas from earlier
in the chapter.
2.1 Pixels, Coordinates, and Colors

2.1.1 Pixel Coordinates
[Figure: two 12-column, 8-row pixel grids, each with the pixel (3,5) marked. In the first grid
the rows are numbered 0 through 7 from the top; in the second they are numbered from the
bottom, so the coordinates (3,5) pick out a different pixel in each grid.]
Note in particular that the pixel that is identified by a pair of coordinates (x,y) depends on the
choice of coordinate system. You always need to know what coordinate system is in use before
you know what point you are talking about.
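For instance, converting a row number between the top-down and bottom-up conventions of
the figure above is a one-line calculation. A minimal sketch in Java (the helper name is made
up for this illustration):

    // Convert a row number between top-down and bottom-up numbering.
    // numRows is the total number of pixel rows (8 in the grids above).
    static int flipRow(int row, int numRows) {
        return numRows - 1 - row;  // with 8 rows, row 5 from the top is row 2 from the bottom
    }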
Row and column numbers identify a pixel, not a point. A pixel contains many points;
mathematically, it contains an infinite number of points. The goal of computer graphics is not
really to color pixels—it is to create and manipulate images. In some ideal sense, an image
should be defined by specifying a color for each point, not just for each pixel. Pixels are an
approximation. If we imagine that there is a true, ideal image that we want to display, then
any image that we display by coloring pixels is an approximation. This has many implications.
Suppose, for example, that we want to draw a line segment. A mathematical line has no
thickness and would be invisible. So we really want to draw a thick line segment, with some
specified width. Let’s say that the line should be one pixel wide. The problem is that, unless
the line is horizontal or vertical, we can’t actually draw the line by coloring pixels. A diagonal
geometric line will cover some pixels only partially. It is not possible to make part of a pixel
black and part of it white. When you try to draw a line with black and white pixels only,
the result is a jagged staircase effect. This effect is an example of something called “aliasing.”
Aliasing can also be seen in the outlines of characters drawn on the screen and in diagonal or
curved boundaries between any two regions of different color. (The term aliasing likely comes
from the fact that ideal images are naturally described in real-number coordinates. When you
try to represent the image using pixels, many real-number coordinates will map to the same
integer pixel coordinates; they can all be considered as different names or “aliases” for the same
pixel.)
Antialiasing is a term for techniques that are designed to mitigate the effects of aliasing.
The idea is that when a pixel is only partially covered by a shape, the color of the pixel should be
a mixture of the color of the shape and the color of the background. When drawing a black line
on a white background, the color of a partially covered pixel would be gray, with the shade of
gray depending on the fraction of the pixel that is covered by the line. (In practice, calculating
this area exactly for each pixel would be too difficult, so some approximate method is used.)
Here, for example, is a geometric line, shown on the left, along with two approximations of that
line made by coloring pixels. The lines are greatly magnified so that you can see the individual
pixels. The line on the right is drawn using antialiasing, while the one in the middle is not:
Note that antialiasing does not give a perfect image, but it can reduce the “jaggies” that are
caused by aliasing (at least when it is viewed on a normal scale).
There are other issues involved in mapping real-number coordinates to pixels. For example,
which point in a pixel should correspond to integer-valued coordinates such as (3,5)? The center
of the pixel? One of the corners of the pixel? In general, we think of the numbers as referring
to the top-left corner of the pixel. Another way of thinking about this is to say that integer
coordinates refer to the lines between pixels, rather than to the pixels themselves. But that
still doesn’t determine exactly which pixels are affected when a geometric shape is drawn. For
example, here are two lines drawn using HTML canvas graphics, shown greatly magnified. The
lines were specified to be colored black with a one-pixel line width:
The top line was drawn from the point (100,100) to the point (120,100). In canvas graphics,
integer coordinates correspond to the lines between pixels, but when a one-pixel line is
drawn, it extends one-half pixel on either side of the infinitely thin geometric line. So for the
top line, the line as it is drawn lies half in one row of pixels and half in another row. The
graphics system, which uses antialiasing, rendered the line by coloring both rows of pixels gray.
The bottom line was drawn from the point (100.5,100.5) to (120.5,100.5). In this case, the line
lies exactly along one line of pixels, which gets colored black. The gray pixels at the ends of
the bottom line have to do with the fact that the line only extends halfway into the pixels at
its endpoints. Other graphics systems might render the same lines differently.
The interactive demo c2/pixel-magnifier.html lets you experiment with pixels and antialias-
ing. Interactive demos can be found on the web pages in the on-line version of this book. If you
have downloaded the web site, you can also find the demos in the folder named demos. (Note
that in any of the interactive demos that accompany this book, you can click the question mark
icon in the upper left for more information about how to use it.)
∗ ∗ ∗
All this is complicated further by the fact that pixels aren’t what they used to be. Pixels
today are smaller! The resolution of a display device can be measured in terms of the number
of pixels per inch on the display, a quantity referred to as PPI (pixels per inch) or sometimes
DPI (dots per inch). Early screens tended to have resolutions of somewhere close to 72 PPI.
At that resolution, and at a typical viewing distance, individual pixels are clearly visible. For a
while, it seemed like most displays had about 100 pixels per inch, but high resolution displays
today can have 200, 300 or even 400 pixels per inch. At the highest resolutions, individual
pixels can no longer be distinguished.
The fact that pixels come in such a range of sizes is a problem if we use coordinate systems
based on pixels. An image created assuming that there are 100 pixels per inch will look tiny on a
400 PPI display. A one-pixel-wide line looks good at 100 PPI, but at 400 PPI, a one-pixel-wide
line is probably too thin.
In fact, in many graphics systems, “pixel” doesn’t really refer to the size of a physical
pixel. Instead, it is just another unit of measure, which is set by the system to be something
appropriate. (On a desktop system, a pixel is usually about one one-hundredth of an inch. On
a smart phone, which is usually viewed from a closer distance, the value might be closer to
1/160 inch. Furthermore, the meaning of a pixel as a unit of measure can change when, for
example, the user applies a magnification to a web page.)
Pixels cause problems that have not been completely solved. Fortunately, they are less of a
problem for vector graphics, which is mostly what we will use in this book. For vector graphics,
pixels only become an issue during rasterization, the step in which a vector image is converted
into pixels for display. The vector image itself can be created using any convenient coordinate
system. It represents an idealized, resolution-independent image. A rasterized image is an
approximation of that ideal image, but how to do the approximation can be left to the display
hardware.
2.1.2 Real-number Coordinate Systems

When creating an image, it is usually more convenient to work in a coordinate system adapted
to the image than in pixel coordinates, so a graphics system will often provide a subroutine of
the form

setCoordinateSystem(left,right,bottom,top)

The graphics system would then be responsible for automatically transforming the coordinates
from the specified coordinate system into pixel coordinates. Such a subroutine might not be
available, so it’s useful to see how the transformation is done by hand. Let’s consider the general
case. Given coordinates for a point in one coordinate system, we want to find the coordinates
for the same point in a second coordinate system. (Remember that a coordinate system is just
a way of assigning numbers to points. It’s the points that are real!) Suppose that the horizontal
and vertical limits are oldLeft, oldRight, oldTop, and oldBottom for the first coordinate system,
and are newLeft, newRight, newTop, and newBottom for the second. Suppose that a point
has coordinates (oldX,oldY ) in the first coordinate system. We want to find the coordinates
(newX,newY) of the point in the second coordinate system.

[Figure: the same rectangular region shown with its old coordinate limits (oldLeft, oldRight,
oldTop, oldBottom) and its new limits (newLeft, newRight, newTop, newBottom).]
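The formulas follow from the observation that the point's relative position along each axis
must be the same in both coordinate systems. Here is a sketch in Java, using the variable
names from the text (the method itself is hypothetical, not part of any library):

    // Map (oldX,oldY), given in the old coordinate system, to the
    // corresponding point in the new coordinate system.
    static double[] convertCoords(double oldX, double oldY,
            double oldLeft, double oldRight, double oldTop, double oldBottom,
            double newLeft, double newRight, double newTop, double newBottom) {
        double newX = newLeft +
                ((oldX - oldLeft) / (oldRight - oldLeft)) * (newRight - newLeft);
        double newY = newTop +
                ((oldY - oldTop) / (oldBottom - oldTop)) * (newBottom - newTop);
        return new double[] { newX, newY };
    }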
2.1.3 Aspect Ratio

The aspect ratio of a rectangle is the ratio between its two dimensions; when the aspect
ratio of the coordinate system differs from the aspect ratio of the display rectangle, shapes
are stretched in one direction. It is not always a bad thing to use different units of length in the vertical and horizontal
directions. However, suppose that you want to use coordinates with limits left, right, bottom,
and top, and that you do want to preserve the aspect ratio. In that case, depending on the
shape of the display rectangle, you might have to adjust the values either of left and right or
of bottom and top to make the aspect ratios match:
[Figure: the coordinate limits -5 to 5 requested for display rectangles of three different shapes;
to keep the aspect ratios equal, the limits are extended in one direction, for example to -7 and
7 horizontally in a wide rectangle, or vertically in a tall one.]
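One way to make the adjustment is sketched below in Java, assuming that the display
rectangle's size in pixels is known; the method name applyLimits is made up for this
illustration, and only the comparison of the two ratios matters, as long as both are measured
the same way:

    // Expand one pair of coordinate limits symmetrically so that the
    // requested rectangle's aspect ratio matches the display's.
    static double[] applyLimits(double left, double right, double bottom, double top,
                                int displayWidth, int displayHeight) {
        double displayAspect = (double) displayHeight / displayWidth;
        double requestedAspect = Math.abs((top - bottom) / (right - left));
        if (requestedAspect < displayAspect) {
            // Requested region is too wide for the display; stretch it vertically.
            double extra = Math.abs(top - bottom) * (displayAspect / requestedAspect - 1) / 2;
            if (top > bottom) { top += extra; bottom -= extra; }
            else { top -= extra; bottom += extra; }
        } else if (requestedAspect > displayAspect) {
            // Requested region is too tall for the display; stretch it horizontally.
            double extra = Math.abs(right - left) * (requestedAspect / displayAspect - 1) / 2;
            if (right > left) { right += extra; left -= extra; }
            else { right -= extra; left += extra; }
        }
        return new double[] { left, right, bottom, top };
    }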
We will look more deeply into geometric transforms later in the chapter, and at that time, we’ll
see some program code for setting up coordinate systems.
2.1.4 Color Models

Colors on a computer screen are made by combining red, green, and blue light, which works
because the human eye senses color with three kinds of cone cells; there is even some flexibility
in how those colors are chosen. This is just a fact about the way our eyes actually work; it might
have been different. Three basic colors can produce a reasonably large fraction of the set of
perceivable colors, but there are colors that you can see in the world that you will never see on
your computer screen. (This whole discussion only applies to people who actually have three
kinds of cone cell. Color blindness, where someone is missing one or more kinds of cone cell, is
surprisingly common.)
The range of colors that can be produced by a device such as a computer screen is called
the color gamut of that device. Different computer screens can have different color gamuts,
and the same RGB values can produce somewhat different colors on different screens. The color
gamut of a color printer is noticeably different—and probably smaller—than the color gamut
of a screen, which explains why a printed image probably doesn’t look exactly the same as it
did on the screen. (Printers, by the way, make colors differently from the way a screen does it.
Whereas a screen combines light to make a color, a printer combines inks or dyes. Because of
this difference, colors meant for printers are often expressed using a different set of basic colors.
A common color model for printer colors is CMYK, using the colors cyan, magenta, yellow, and
black.)
In any case, the most common color model for computer graphics is RGB. RGB colors are
most often represented using 8 bits per color component, a total of 24 bits to represent a color.
This representation is sometimes called “24-bit color.” An 8-bit number can represent 2^8, or
256, different values, which we can take to be the integers from 0 to 255. A color is
then specified as a triple of integers (r,g,b) in that range.
This representation works well because 256 shades of red, green, and blue are about as many
as the eye can distinguish. In applications where images are processed by computing with color
components, it is common to use additional bits per color component, to avoid visual effects
that might occur due to rounding errors in the computations. Such applications might use a
16-bit integer or even a 32-bit floating point value for each color component. On the other
hand, sometimes fewer bits are used. For example, one common color scheme uses 5 bits for
the red and blue components and 6 bits for the green component, for a total of 16 bits for a
color. (Green gets an additional bit because the eye is more sensitive to green light than to red
or blue.) This “16-bit color” saves memory compared to 24-bit color and was more common
when memory was more expensive.
There are many other color models besides RGB. RGB is sometimes criticized as being
unintuitive. For example, it’s not obvious to most people that yellow is made of a combination
of red and green. The closely related color models HSV and HSL describe the same set of
colors as RGB, but attempt to do it in a more intuitive way. (HSV is sometimes called HSB,
with the “B” standing for “brightness.” HSV and HSB are exactly the same model.)
The “H” in these models stands for “hue,” a basic spectral color. As H increases, the color
changes from red to yellow to green to cyan to blue to magenta, and then back to red. The
value of H is often taken to range from 0 to 360, since the colors can be thought of as arranged
around a circle with red at both 0 and 360 degrees.
The “S” in HSV and HSL stands for “saturation,” and is taken to range from 0 to 1. A
saturation of 0 gives a shade of gray (the shade depending on the value of V or L). A saturation
of 1 gives a “pure color,” and decreasing the saturation is like adding more gray to the color.
“V” stands for “value,” and “L” stands for “lightness.” They determine how bright or dark the
color is. The main difference is that in the HSV model, the pure spectral colors occur for V=1,
while in HSL, they occur for L=0.5.
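Java's standard library happens to include a converter for the essentially identical HSB model,
which makes it easy to check specific colors. A small example (note that Color.HSBtoRGB
takes the hue as a fraction of the color circle, in the range 0.0 to 1.0, rather than in degrees):

    import java.awt.Color;

    public class HsvExample {
        public static void main(String[] args) {
            // Hue 60 degrees (as the fraction 60/360), full saturation and value.
            int rgb = Color.HSBtoRGB(60 / 360.0f, 1.0f, 1.0f);
            Color c = new Color(rgb);
            System.out.println(c.getRed() + "," + c.getGreen() + "," + c.getBlue());
            // Prints 255,255,0 -- yellow really is red plus green at full intensity.
        }
    }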
Let’s look at some colors in the HSV color model. The illustration below shows colors with
a full range of H-values, for S and V equal to 1 and to 0.5. Note that for S=V=1, you get
bright, pure colors. S=0.5 gives paler, less saturated colors. V=0.5 gives darker colors.
It’s probably easier to understand color models by looking at some actual colors and how
they are represented. The interactive demo c2/rgb-hsv.html lets you experiment with the RGB
and HSV color models.
∗ ∗ ∗
Often, a fourth component is added to color models. The fourth component is called alpha,
and color models that use it are referred to by names such as RGBA and HSLA. Alpha is not a
color as such. It is usually used to represent transparency. A color with maximal alpha value is
fully opaque; that is, it is not at all transparent. A color with alpha equal to zero is completely
transparent and therefore invisible. Intermediate values give translucent, or partly transparent,
colors. Transparency determines what happens when you draw with one color (the foreground
color) on top of another color (the background color). If the foreground color is fully opaque, it
simply replaces the background color. If the foreground color is partly transparent, then it
is blended with the background color. Assuming that the alpha component ranges from 0 to 1,
the color that you get can be computed as
new color = (alpha)*(foreground color) + (1 - alpha)*(background color)
This computation is done separately for the red, blue, and green color components. This is
called alpha blending . The effect is like viewing the background through colored glass; the
color of the glass adds a tint to the background color. This type of blending is not the only
possible use of the alpha component, but it is the most common.
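As a concrete sketch in Java (a hypothetical helper, not from any particular graphics library),
with all components given as floats in the range 0.0 to 1.0:

    // Blend a partly transparent foreground color over a background color.
    static float[] alphaBlend(float[] fg, float alpha, float[] bg) {
        float[] result = new float[3];
        for (int i = 0; i < 3; i++) {   // red, green, and blue components
            result[i] = alpha * fg[i] + (1 - alpha) * bg[i];
        }
        return result;
    }

Blending 50% transparent red over a white background gives pink, for example:
alphaBlend(new float[] {1,0,0}, 0.5f, new float[] {1,1,1}) returns {1.0, 0.5, 0.5}.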
An RGBA color model with 8 bits per component uses a total of 32 bits to represent a color.
This is a convenient number because integer values are often represented using 32-bit values. A
32-bit integer value can be interpreted as a 32-bit RGBA color. How the color components are
arranged within a 32-bit integer is somewhat arbitrary. The most common layout is to store the
alpha component in the eight high-order bits, followed by red, green, and blue. (This should
probably be called ARGB color.) However, other layouts are also in use.
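With that common layout, the components can be packed and unpacked with shifts and masks.
A minimal Java sketch (this matches the layout Java itself uses for its “int ARGB” images,
though the helper methods here are made up for illustration):

    // Pack four 8-bit components (each 0-255) into one 32-bit ARGB integer.
    static int packARGB(int a, int r, int g, int b) {
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    // Extract the components again; & 0xFF masks off the other bytes, and
    // >>> shifts in zeros so the alpha byte is not affected by the sign bit.
    static int alpha(int argb) { return (argb >>> 24) & 0xFF; }
    static int red(int argb)   { return (argb >> 16) & 0xFF; }
    static int green(int argb) { return (argb >> 8) & 0xFF; }
    static int blue(int argb)  { return argb & 0xFF; }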
2.2 Shapes
We have been talking about low-level graphics concepts like pixels and coordinates, but
fortunately we don’t usually have to work on the lowest levels. Most graphics systems let you
work with higher-level shapes, such as triangles and circles, rather than individual pixels. And
a lot of the hard work with coordinates is done using transforms rather than by working with
coordinates directly. In this section and the next, we will look at some of the higher-level
capabilities that are typically provided by 2D graphics APIs.
2.2.1 Basic Shapes

The most basic shape is the line segment. Lines can have attributes such as a width, a cap
that determines how the endpoints are drawn, a join that determines how two connected
segments meet, and a dash pattern. On the left of the illustration are three wide lines with
no cap, a round cap, and a square cap. The geometric line
segment is shown as a dotted line. (The no-cap style is called “butt.”) To the right are four
lines with different patterns of dots and dashes. In the middle are three different styles of line
joins: mitered, rounded, and beveled.
∗ ∗ ∗
The basic rectangular shape has sides that are vertical and horizontal. (A tilted rectangle
generally has to be made by applying a rotation.) Such a rectangle can be specified with two
points, (x1,y1) and (x2,y2), that give the endpoints of one of the diagonals of the rectangle.
Alternatively, the width and the height can be given, along with a single base point, (x,y). In
that case, the width and height have to be positive, or the rectangle is empty. The base point
(x,y) will be the upper left corner of the rectangle if y increases from top to bottom, and it will
be the lower left corner of the rectangle if y increases from bottom to top.
[Illustration: a rectangle specified by two corner points (x1,y1) and (x2,y2), and the same rectangle specified by a base point together with its width and height.]
Suppose that you are given points (x1,y1) and (x2,y2), and that you want to draw the rectangle
that they determine. And suppose that the only rectangle-drawing command that you have
available is one that requires a point (x,y), a width, and a height. For that command, x must
be the smaller of x1 and x2, and the width can be computed as the absolute value of x1 minus
x2. And similarly for y and the height. In pseudocode,
DrawRectangle from points (x1,y1) and (x2,y2):
x = min( x1, x2 )
y = min( y1, y2 )
width = abs( x1 - x2 )
height = abs( y1 - y2 )
DrawRectangle( x, y, width, height )
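In Java 2D, for example, the same computation might look like the following sketch (it assumes a Graphics2D object g2, which is how Java represents a drawing context):

import java.awt.Graphics2D;
import java.awt.geom.Rectangle2D;

static void drawRectFromCorners(Graphics2D g2, double x1, double y1,
                                               double x2, double y2) {
    double x = Math.min(x1, x2);   // left edge
    double y = Math.min(y1, y2);   // top edge, if y increases from top to bottom
    double w = Math.abs(x1 - x2);  // width is always positive
    double h = Math.abs(y1 - y2);  // height is always positive
    g2.draw(new Rectangle2D.Double(x, y, w, h));
}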
A common variation on rectangles is to allow rounded corners. For a “round rect,” the
corners are replaced by elliptical arcs. The degree of rounding can be specified by giving the
horizontal radius and vertical radius of the ellipse. Here are some examples of round rects. For
the shape at the right, the two radii of the ellipse are shown:
My final basic shape is the oval. (An oval is also called an ellipse.) An oval is a closed curve
that has two radii. For a basic oval, we assume that the radii are vertical and horizontal. An
oval with this property can be specified by giving the rectangle that just contains it. Or it can
be specified by giving its center point and the lengths of its vertical radius and its horizontal
radius. In this illustration, the oval on the left is shown with its containing rectangle and with
its center point and radii:
[Illustration: on the left, an oval shown with its containing rectangle, its center point, and its radii r1 (horizontal) and r2 (vertical); on the right, a circle.]
The oval on the right is a circle. A circle is just an oval in which the two radii have the same
length.
If ovals are not available as basic shapes, they can be approximated by drawing a large
number of line segments. The number of lines that is needed for a good approximation depends
on the size of the oval. It’s useful to know how to do this. Suppose that an oval has center
point (x,y), horizontal radius r1, and vertical radius r2. Mathematically, the points on the oval
are given by
( x + r1*cos(angle), y + r2*sin(angle) )
CHAPTER 2. TWO-DIMENSIONAL GRAPHICS 22
where angle takes on values from 0 to 360 if angles are measured in degrees or from 0 to 2π if
they are measured in radians. Here sin and cos are the standard sine and cosine functions. To
get an approximation for an oval, we can use this formula to generate some number of points
and then connect those points with line segments. In pseudocode, assuming that angles are
measured in radians and that pi represents the mathematical constant π,
Draw Oval with center (x,y), horizontal radius r1, and vertical radius r2:
for i = 0 to numberOfLines-1:
angle1 = i * (2*pi/numberOfLines)
angle2 = (i+1) * (2*pi/numberOfLines)
a1 = x + r1*cos(angle1)
b1 = y + r2*sin(angle1)
a2 = x + r1*cos(angle2)
b2 = y + r2*sin(angle2)
Draw Line from (a1,b1) to (a2,b2)
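Here is what the same approximation might look like in Java 2D, as a sketch (again assuming a Graphics2D g2 for drawing):

import java.awt.Graphics2D;
import java.awt.geom.Line2D;

static void drawOvalApprox(Graphics2D g2, double x, double y,
                           double r1, double r2, int numberOfLines) {
    for (int i = 0; i < numberOfLines; i++) {
        double angle1 = i * (2 * Math.PI / numberOfLines);
        double angle2 = (i + 1) * (2 * Math.PI / numberOfLines);
        // Connect consecutive points on the oval with a line segment.
        g2.draw(new Line2D.Double(x + r1 * Math.cos(angle1), y + r2 * Math.sin(angle1),
                                  x + r1 * Math.cos(angle2), y + r2 * Math.sin(angle2)));
    }
}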
For a circle, of course, you would just have r1 = r2. This is the first time we have used the
sine and cosine functions, but it won’t be the last. These functions play an important role in
computer graphics because of their association with circles, circular motion, and rotation. We
will meet them again when we talk about transforms in the next section. (Demo)
2.2.2 Stroke and Fill

[Illustration: shapes whose boundaries cross themselves, with the winding number of each region labeled.]
The shapes are also shown filled using the two fill rules. For the shapes in the center, the fill
rule is to color any region that has a non-zero winding number. For the shapes shown on the
right, the rule is to color any region whose winding number is odd; regions with even winding
number are not filled.
There is still the question of what a shape should be filled with. Of course, it can be filled
with a color, but other types of fill are possible, including patterns and gradients. A pattern
is an image, usually a small image. When used to fill a shape, a pattern can be repeated
horizontally and vertically as necessary to cover the entire shape. A gradient is similar in that
it is a way for color to vary from point to point, but instead of taking the colors from an
image, they are computed. There are a lot of variations to the basic idea, but there is always
a line segment along which the color varies. The color is specified at the endpoints of the line
segment, and possibly at additional points; between those points, the color is interpolated. For
other points on the line that contains the line segment, the pattern on the line segment can
be repeated, or the color of the endpoint can simply be extended. For a linear gradient, the
color is constant along lines perpendicular to the basic line segment, so you get lines of solid
color going in that direction. In a radial gradient, the color is constant along circles centered
at one of the endpoints of the line segment. And that doesn’t exhaust the possibilities. To give
you an idea what patterns and gradients can look like, here is a shape, filled with two gradients
and two patterns:
The first shape is filled with a simple linear gradient defined by just two colors, while the second
shape uses a radial gradient.
Patterns and gradients are not necessarily restricted to filling shapes. Stroking a shape is,
after all, the same as filling a band of pixels along the boundary of the shape, and that can be
done with a gradient or a pattern, instead of with a solid color.
Finally, I will mention that a string of text can be considered to be a shape for the purpose
of drawing it. The boundary of the shape is the outline of the characters. The text is drawn
by filling that shape. In some graphics systems, it is also possible to stroke the outline of the
shape that defines the text. In the following illustration, the string “Graphics” is shown, on
top, filled with a pattern and, below that, filled with a gradient and stroked with solid black:
[Illustration: the word “Graphics” shown filled with a pattern, and filled with a gradient and stroked with solid black.]

2.2.3 Polygons, Curves, and Paths

A polygon is a closed shape consisting of a sequence of line segments, where each segment begins at the point where the previous one ends, and the last segment ends where the first one begins. The points where the segments meet are the vertices of the polygon.
Sometimes, polygons are required to be “simple,” meaning that the polygon has no self-
intersections. That is, all the vertices are different, and a side can only intersect another
side at its endpoints. And polygons are usually required to be “planar,” meaning that all the
vertices lie in the same plane. (Of course, in 2D graphics, everything lies in the same plane, so
this is not an issue. However, it does become an issue in 3D.)
How then should we draw polygons? That is, what capabilities would we like to have in a
graphics API for drawing them? One possibility is to have commands for stroking and for filling
polygons, where the vertices of the polygon are given as an array of points or as an array of
x-coordinates plus an array of y-coordinates. In fact, that is sometimes done; for example, the
Java graphics API includes such commands. Another, more flexible, approach is to introduce
the idea of a “path.” Java, SVG, and the HTML canvas API all support this idea. A path is
a general shape that can include both line segments and curved segments. Segments can, but
don’t have to be, connected to other segments at their endpoints. A path is created by giving
a series of commands that tell, essentially, how a pen would be moved to draw the path. While
a path is being created, there is a point that represents the pen’s current location. There will
be a command for moving the pen without drawing, and commands for drawing various kinds
of segments. For drawing polygons, we need commands such as
• createPath() — start a new, empty path
• moveTo(x,y) — move the pen to the point (x,y), without adding a segment to the
path; that is, without drawing anything
• lineTo(x,y) — add a line segment to the path that starts at the current pen location
and ends at the point (x,y), and move the pen to (x,y)
• closePath() — add a line segment from the current pen location back to the starting
point, unless the pen is already there, producing a closed path.
(For closePath, I need to define “starting point.” A path can be made up of “subpaths.” A
subpath consists of a series of connected segments. A moveTo always starts a new subpath.
A closePath ends the current subpath and implicitly starts a new one. So “starting point”
means the position of the pen after the most recent moveTo or closePath.)
Suppose that we want a path that represents the triangle with vertices at (100,100),
(300,100), and (200, 200). We can do that with the commands
createPath()
moveTo( 100, 100 )
lineTo( 300, 100 )
lineTo( 200, 200 )
closePath()
The closePath command at the end could be replaced by lineTo(100,100), to move the pen
back to the first vertex.
A path represents an abstract geometric object. Creating one does not make it visible on
the screen. Once we have a path, to make it visible we need additional commands for stroking
and filling the path.
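In Java 2D, for example, paths are represented by the Path2D class, and the triangle above could be built and made visible like this (a sketch, assuming a Graphics2D g2):

import java.awt.geom.Path2D;

Path2D triangle = new Path2D.Double();
triangle.moveTo(100, 100);
triangle.lineTo(300, 100);
triangle.lineTo(200, 200);
triangle.closePath();
g2.draw(triangle);  // stroke the outline of the triangle
g2.fill(triangle);  // or fill its interior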
Earlier in this section, we saw how to approximate an oval by drawing, in effect, a regular
polygon with a large number of sides. In that example, I drew each side as a separate line
segment, so we really had a bunch of separate lines rather than a polygon. There is no way to
fill such a thing. It would be better to approximate the oval with a polygonal path. For an oval
with center (x,y) and radii r1 and r2:
createPath()
moveTo( x + r1, y )
for i = 1 to numberOfLines-1
angle = i * (2*pi/numberOfLines)
lineTo( x + r1*cos(angle), y + r2*sin(angle) )
closePath()
Using this path, we could draw a filled oval as well as stroke it. Even if we just want to draw
the outline of a polygon, it’s still better to create the polygon as a path rather than to draw
the line segments as separate sides. With a path, the computer knows that the sides are part of
a single shape. This makes it possible to control the appearance of the “join” between consecutive
sides, as noted earlier in this section.
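A sketch of the same polygonal approximation as a Java Path2D, assuming that x, y, r1, r2, and numberOfLines are already defined:

Path2D oval = new Path2D.Double();
oval.moveTo(x + r1, y);  // the point on the oval at angle 0
for (int i = 1; i < numberOfLines; i++) {
    double angle = i * (2 * Math.PI / numberOfLines);
    oval.lineTo(x + r1 * Math.cos(angle), y + r2 * Math.sin(angle));
}
oval.closePath();  // line segment back to the starting point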
∗ ∗ ∗
I noted above that a path can contain other kinds of segments besides lines. For example,
it might be possible to include an arc of a circle as a segment. Another type of curve is a
Bezier curve. Bezier curves can be used to create very general curved shapes. They are fairly
intuitive, so that they are often used in programs that allow users to design curves interactively.
Mathematically, Bezier curves are defined by parametric polynomial equations, but you don’t
need to understand what that means to use them. There are two kinds of Bezier curve in
common use, cubic Bezier curves and quadratic Bezier curves; they are defined by cubic and
quadratic polynomials respectively. When the general term “Bezier curve” is used, it usually
refers to cubic Bezier curves.
A cubic Bezier curve segment is defined by the two endpoints of the segment together with
two control points. To understand how it works, it’s best to think about how a pen would
draw the curve segment. The pen starts at the first endpoint, headed in the direction of the
first control point. The distance of the control point from the endpoint controls the speed of
the pen as it starts drawing the curve. The second control point controls the direction and
speed of the pen as it gets to the second endpoint of the curve. There is a unique cubic curve
that satisfies these conditions.
The illustration above shows three cubic Bezier curve segments. The two curve segments on
the right are connected at an endpoint to form a longer curve. The curves are drawn as thick
black lines. The endpoints are shown as black dots and the control points as blue squares, with
a thin red line connecting each control point to the corresponding endpoint. (Ordinarily, only
the curve would be drawn, except in an interface that lets the user edit the curve by hand.)
Note that at an endpoint, the curve segment is tangent to the line that connects the endpoint
to the control point. Note also that there can be a sharp point or corner where two curve
segments meet. However, one segment will merge smoothly into the next if control points are
properly chosen.
This will all be easier to understand with some hands-on experience. The interactive demo
c2/cubic-bezier.html lets you edit cubic Bezier curve segments by dragging their endpoints and
control points. (Demo)
When a cubic Bezier curve segment is added to a path, the path’s current pen location acts
as the first endpoint of the segment. The command for adding the segment to the path must
specify the two control points and the second endpoint. A typical command might look like
cubicCurveTo( cx1, cy1, cx2, cy2, x, y )
This would add a curve from the current location to point (x,y), using (cx1,cy1) and (cx2,cy2)
as the control points. That is, the pen leaves the current location heading towards (cx1,cy1),
and it ends at the point (x,y), arriving there from the direction of (cx2,cy2).
Quadratic Bezier curve segments are similar to the cubic version, but in the quadratic case,
there is only one control point for the segment. The curve leaves the first endpoint heading
in the direction of the control point, and it arrives at the second endpoint coming from the
direction of the control point. The curve in this case will be an arc of a parabola.
Again, this is easier to understand this with some hands-on experience. Try the interactive
demo c2/quadratic-bezier.html. (Demo)
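For reference, Java's Path2D class names these commands curveTo and quadTo; a sketch, assuming that path is a Path2D and the coordinates are already defined:

path.curveTo(cx1, cy1, cx2, cy2, x, y);  // cubic segment ending at (x,y)
path.quadTo(cx, cy, x, y);               // quadratic segment ending at (x,y)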
2.3 Transforms
In Section 2.1, we discussed coordinate systems and how it is possible to transform
coordinates from one coordinate system to another. In this section, we’ll look at that idea a
little more closely, and also look at how geometric transformations can be used to place graphics
objects into a coordinate system.
2.3.1 Viewing and Modeling

In a typical application, an image is displayed in a rectangle of pixels called the viewport. The objects that appear in the image are defined in a coordinate system of their own; they make up a scene,
or “world,” that we want to view, and the coordinates that we use to define the scene are called
world coordinates.
For 2D graphics, the world lies in a plane. It’s not possible to show a picture of the entire
infinite plane. We need to pick some rectangular area in the plane to display in the image.
Let’s call that rectangular area the window, or view window. A coordinate transform is used
to map the window to the viewport.
[Illustration: a view window with x ranging from -4 to 4 and y from -3 to 3, containing a rectangle with corners (-1,2) and (3,-1); a transformation T maps the window to an 800-by-600 pixel viewport, where those corners map to (300,100) and (700,400).]
In this illustration, T represents the coordinate transformation. T is a function that takes world
coordinates (x,y) in some window and maps them to pixel coordinates T(x,y) in the viewport.
(I’ve drawn the viewport and window with different sizes to emphasize that they are not the
same thing, even though they show the same objects, but in fact they don’t even exist in the
same space, so it doesn’t really make sense to compare their sizes.) In this example, as you can
check,
T(x,y) = ( 800*(x+4)/8, 600*(3-y)/6 )
Look at the rectangle with corners at (-1,2) and (3,-1) in the window. When this rectangle is
displayed in the viewport, it is displayed as the rectangle with corners T(-1,2) and T(3,-1). In
this example, T(-1,2) = (300,100) and T(3,-1) = (700,400).
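As a quick check, here is T written out as a small Java function for this particular window and viewport:

// Window: -4 <= x <= 4 and -3 <= y <= 3. Viewport: 800 by 600 pixels.
static double[] T(double x, double y) {
    return new double[] { 800 * (x + 4) / 8, 600 * (3 - y) / 6 };
}
// T(-1, 2) yields (300, 100), and T(3, -1) yields (700, 400).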
We use coordinate transformations in this way because it allows us to choose a world coor-
dinate system that is natural for describing the scene that we want to display, and it is easier
to do that than to work directly with viewport coordinates. Along the same lines, suppose that
we want to define some complex object, and suppose that there will be several copies of that
object in our scene. Or maybe we are making an animation, and we would like the object to
have different positions in different frames. We would like to choose some convenient coordinate
system and use it to define the object once and for all. The coordinates that we use to define
an object are called object coordinates for the object. When we want to place the object
into a scene, we need to transform the object coordinates that we used to define the object into
the world coordinate system that we are using for the scene. The transformation that we need
is called a modeling transformation. This picture illustrates an object defined in its own
object coordinate system and then mapped by three different modeling transformations into
the world coordinate system:
[Illustration: an object defined in its own coordinate system, mapped by three modeling transformations M1, M2, and M3 into three copies in the world coordinate system.]
Remember that in order to view the scene, there will be another transformation that maps the
object from a view window in world coordinates into the viewport.
Now, keep in mind that the choice of a view window tells which part of the scene is shown
in the image. Moving, resizing, or even rotating the window will give a different view of the
scene. Suppose we make several images of the same car:
What happened between making the top image in this illustration and making the image on
the bottom left? In fact, there are two possibilities: Either the car was moved to the right, or
the view window that defines the scene was moved to the left. This is important, so be sure
you understand it. (Try it with your cell phone camera. Aim it at some objects, take a step
to the left, and notice what happens to the objects in the camera’s viewfinder: They move
to the right in the picture!) Similarly, what happens between the top picture and the middle
picture on the bottom? Either the car rotated counterclockwise, or the window was rotated
clockwise. (Again, try it with a camera—you might want to take two actual photos so that you
can compare them.) Finally, the change from the top picture to the one on the bottom right
could happen because the car got smaller or because the window got larger. (On your camera,
a bigger window means that you are seeing a larger field of view, and you can get that by
applying a zoom to the camera or by backing up away from the objects that you are viewing.)
There is an important general idea here. When we modify the view window, we change
the coordinate system that is applied to the viewport. But in fact, this is the same as leaving
that coordinate system in place and moving the objects in the scene instead. Except that to
get the same effect in the final image, you have to apply the opposite transformation to the
objects (for example, moving the window to the left is equivalent to moving the objects to the
right). So, there is no essential distinction between transforming the window and transforming
the object. Mathematically, you specify a geometric primitive by giving coordinates in some
natural coordinate system, and the computer applies a sequence of transformations to those
coordinates to produce, in the end, the coordinates that are used to actually draw the primitive
in the image. You will think of some of those transformations as modeling transforms and some
as coordinate transforms, but to the computer, it’s all the same.
The on-line version of this section includes the live demo c2/transform-equivalence-2d.html
that can help you to understand the equivalence between modeling transformations and view-
port transformations. Read the help text in the demo for more information. (Demo)
We will return to this idea several times later in the book, but in any case, you can see that
geometric transforms are a central concept in computer graphics. Let’s look at some basic types
of transformation in more detail. The transforms we will use in 2D graphics can be written in
the form
x1 = a*x + b*y + e
y1 = c*x + d*y + f
where (x,y) represents the coordinates of some point before the transformation is applied, and
(x1,y1) are the transformed coordinates. The transform is defined by the six constants a, b, c,
d, e, and f. Note that this can be written as a function T, where
T(x,y) = ( a*x + b*y + e, c*x + d*y + f )
A transformation of this form is called an affine transform. An affine transform has the
property that, when it is applied to two parallel lines, the transformed lines will also be parallel.
Also, if you follow one affine transform by another affine transform, the result is again an affine
transform.
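In Java, affine transforms are represented by the java.awt.geom.AffineTransform class. Here is a sketch, with a through f and the point (x,y) assumed to be already defined; note that Java's constructor takes the six constants in the order a, c, b, d, e, f:

import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

AffineTransform T = new AffineTransform(a, c, b, d, e, f);
Point2D p1 = T.transform(new Point2D.Double(x, y), null);
// p1.getX() is a*x + b*y + e, and p1.getY() is c*x + d*y + f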
2.3.2 Translation
A translation transform simply moves every point by a certain amount horizontally and a
certain amount vertically. If (x,y) is the original point and (x1,y1) is the transformed point,
then the formula for a translation is
x1 = x + e
y1 = y + f
where e is the number of units by which the point is moved horizontally and f is the amount by
which it is moved vertically. (Thus for a translation, a = d = 1, and b = c = 0 in the general
formula for an affine transform.) A 2D graphics system will typically have a function such as
translate( e, f )
to apply a translate transformation. The translation would apply to everything that is drawn
after the command is given. That is, for all subsequent drawing operations, e would be added
to the x-coordinate and f would be added to the y-coordinate. Let’s look at an example.
Suppose that you draw an “F” using coordinates in which the “F” is centered at (0,0). If
you say translate(4,2) before drawing the “F”, then every point of the “F” will be moved
horizontally by 4 units and vertically by 2 units before the coordinates are actually used, so
that after the translation, the “F” will be centered at (4,2):
The light gray “F” in this picture shows what would be drawn without the translation; the dark
red “F” shows the same “F” drawn after applying a translation by (4,2). The top arrow shows
that the upper left corner of the “F” has been moved over 4 units and up 2 units. Every point
in the “F” is subjected to the same displacement. Note that in my examples, I am assuming
that the y-coordinate increases from bottom to top. That is, the y-axis points up.
Remember that when you give the command translate(e,f), the translation applies to all the
drawing that you do after that, not just to the next shape that you draw. If you apply another
transformation after the translation, the second transform will not replace the translation.
It will be combined with the translation, so that subsequent drawing will be affected by the
combined transformation. For example, if you combine translate(4,2) with translate(-1,5), the
result is the same as a single translation, translate(3,7). This is an important point, and there
will be a lot more to say about it later.
Also remember that you don’t compute coordinate transformations yourself. You just spec-
ify the original coordinates for the object (that is, the object coordinates), and you specify
the transform or transforms that are to be applied. The computer takes care of applying the
transformation to the coordinates. You don’t even need to know the equations that are used
for the transformation; you just need to understand what it does geometrically.
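As a sketch in Java 2D, where drawF is a hypothetical routine that draws the "F" centered at (0,0):

g2.translate(4, 2);  // affects all subsequent drawing on g2
drawF(g2);           // the "F" now appears centered at (4,2)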
2.3.3 Rotation
A rotation transform, for our purposes here, rotates each point about the origin, (0,0). Every
point is rotated through the same angle, called the angle of rotation. For this purpose, angles
can be measured either in degrees or in radians. (The 2D graphics APIs that we will look at
later in this chapter use radians, but OpenGL uses degrees.) A rotation with a positive angle
rotates objects in the direction from the positive x-axis towards the positive y-axis. This is
counterclockwise in a coordinate system where the y-axis points up, as it does in my examples
here, but it is clockwise in the usual pixel coordinates, where the y-axis points down rather
than up. Although it is not obvious, when rotation through an angle of r radians about the
origin is applied to the point (x,y), then the resulting point (x1,y1) is given by
x1 = cos(r) * x - sin(r) * y
y1 = sin(r) * x + cos(r) * y
That is, in the general formula for an affine transform, e = f = 0, a = d = cos(r), b = -sin(r),
and c = sin(r). Here is a picture that illustrates a rotation about the origin by the angle
negative 135 degrees:
Again, the light gray “F” is the original shape, and the dark red “F” is the shape that results
if you apply the rotation. The arrow shows how the upper left corner of the original “F” has
been moved.
A 2D graphics API would typically have a command rotate(r) to apply a rotation. The
command is used before drawing the objects to which the rotation applies.
2.3.4 Combining Transformations

More than one transform can be applied to an object. Suppose, for example, that we want to rotate an object through an angle of 90 degrees about the origin and then move it 4 units to the right. To get that effect, we would say

translate(4,0)
rotate(90)

before drawing the object.
Note that transforms are applied to objects in the reverse of the order in which they are given
in the code (because the first transform in the code is applied to an object that has already
been affected by the second transform). And note that the order in which the transforms are
applied is important. If we reverse the order in which the two transforms are applied in this
example, by saying
rotate(90)
translate(4,0)
then the result is as shown on the right in the above illustration. In that picture, the original
“F” is first moved 4 units to the right and the resulting shape is then rotated through an angle
of 90 degrees about the origin to give the shape that actually appears on the screen.
For another example of applying several transformations, suppose that we want to rotate
a shape through an angle r about a point (p,q) instead of about the point (0,0). We can do
this by first moving the point (p,q) to the origin, using translate(-p,-q). Then we can do a
standard rotation about the origin by calling rotate(r). Finally, we can move the origin back
to the point (p,q) by applying translate(p,q). Keeping in mind that we have to write the code
for the transformations in the reverse order, we need to say
translate(p,q)
rotate(r)
translate(-p,-q)
before drawing the shape. (In fact, some graphics APIs let us accomplish this transform with a
single command such as rotate(r,p,q). This would apply a rotation through the angle r about
the point (p,q).)
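Java 2D, for example, offers both forms; the following sketch shows the three-command sequence and the equivalent single call, with g2, r, p, and q assumed to be defined:

// The commands appear in reverse of the order in which they act on points:
g2.translate(p, q);    // applied third: move the origin back to (p,q)
g2.rotate(r);          // applied second: rotate about the origin (radians)
g2.translate(-p, -q);  // applied first: move (p,q) to the origin

g2.rotate(r, p, q);    // equivalent single call: rotate through r about (p,q)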
2.3.5 Scaling
A scaling transform can be used to make objects bigger or smaller. Mathematically, a scaling
transform simply multiplies each x-coordinate by a given amount and each y-coordinate by a
given amount. That is, if a point (x,y) is scaled by a factor of a in the x direction and by a
factor of d in the y direction, then the resulting point (x1,y1) is given by
x1 = a * x
y1 = d * y
If you apply this transform to a shape that is centered at the origin, it will stretch the shape
by a factor of a horizontally and d vertically. Here is an example, in which the original light
gray “F” is scaled by a factor of 3 horizontally and 2 vertically to give the final dark red “F”:
The common case where the horizontal and vertical scaling factors are the same is called
uniform scaling. Uniform scaling stretches or shrinks a shape without distorting it.
When scaling is applied to a shape that is not centered at (0,0), then in addition to being
stretched or shrunk, the shape will be moved away from 0 or towards 0. In fact, the true
description of a scaling operation is that it pushes every point away from (0,0) or pulls every
point towards (0,0). If you want to scale about a point other than (0,0), you can use a sequence
of three transforms, similar to what was done in the case of rotation.
A 2D graphics API can provide a function scale(a,d) for applying scaling transformations.
As usual, the transform applies to all x and y coordinates in subsequent drawing operations.
Note that negative scaling factors are allowed and will result in reflecting the shape as well
as possibly stretching or shrinking it. For example, scale(1,-1) will reflect objects vertically,
through the x-axis.
It is a fact that every affine transform can be created by combining translations, ro-
tations about the origin, and scalings about the origin. I won’t try to prove that, but
c2/transforms-2d.html is an interactive demo that will let you experiment with translations,
rotations, and scalings, and with the transformations that can be made by combining them. (Demo)
I also note that a transform that is made from translations and rotations, with no scaling,
will preserve length and angles in the objects to which it is applied. It will also preserve aspect
ratios of rectangles. Transforms with this property are called “Euclidean.” If you also allow
uniform scaling, the resulting transformation will preserve angles and aspect ratio, but not
lengths.
2.3.6 Shear
We will look at one more type of basic transform, a shearing transform. Although shears
can in fact be built up out of rotations and scalings if necessary, it is not really obvious how
to do so. A shear will “tilt” objects. A horizontal shear will tilt things towards the left (for
negative shear) or right (for positive shear). A vertical shear tilts them up or down. Here is an
example of horizontal shear:
A horizontal shear does not move the x-axis. Every other horizontal line is moved to the left or
to the right by an amount that is proportional to the y-value along that line. When a horizontal
shear is applied to a point (x,y), the resulting point (x1,y1) is given by
x1 = x + b * y
y1 = y
for some constant shearing factor b. Similarly, a vertical shear with shearing factor c is given
by the equations
x1 = x
y1 = c * x + y
Shear is occasionally called “skew.”
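In Java 2D, for instance, shear is available directly on the graphics context; a sketch:

// g2.shear(b, c) transforms points by x1 = x + b*y and y1 = c*x + y.
g2.shear(0.3, 0);  // horizontal shear with shearing factor 0.3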
2.3.7 Window-to-Viewport
The last transformation that is applied to an object before it is displayed in an image is the
window-to-viewport transformation, which maps the rectangular view window in the xy-plane
that contains the scene to the rectangular grid of pixels where the image will be displayed.
I’ll assume here that the view window is not rotated; that is, its sides are parallel to the x-
and y-axes. In that case, the window-to-viewport transformation can be expressed in terms of
translation and scaling transforms. Let’s look at the typical case where the viewport has pixel
coordinates ranging from 0 on the left to width on the right and from 0 at the top to height at
the bottom. And assume that the limits on the view window are left, right, bottom, and top.
In that case, the window-to-viewport transformation can be programmed as:
scale( width / (right-left), height / (bottom-top) );
translate( -left, -top )
These should be the last transforms that are applied to a point. Since transforms are applied
to points in the reverse of the order in which they are specified in the program, they should be
the first transforms that are specified in the program. To see how this works, consider a point
(x,y) in the view window. (This point comes from some object in the scene. Several modeling
transforms might have already been applied to the object to produce the point (x,y), and that
point is now ready for its final transformation into viewport coordinates.) The coordinates (x,y)
are first translated by (-left,-top) to give (x-left,y-top). These coordinates are then multiplied
by the scaling factors shown above, giving the final coordinates
x1 = width / (right-left) * (x-left)
y1 = height / (bottom-top) * (y-top)
Note that the point (left,top) is mapped to (0,0), while the point (right,bottom) is mapped to
(width,height), which is just what we want.
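In Java 2D, this pair of commands can be given directly on the graphics context; a sketch, with left, right, bottom, top, width, and height assumed to be doubles:

g2.scale(width / (right - left), height / (bottom - top));
g2.translate(-left, -top);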
There is still the question of aspect ratio. As noted in Subsection 2.1.3, if we want to force
the aspect ratio of the window to match the aspect ratio of the viewport, it might be necessary
to adjust the limits on the window. Here is pseudocode for a subroutine that will do that, again
assuming that the top-left corner of the viewport has pixel coordinates (0,0):
subroutine applyWindowToViewportTransformation (
left, right, // horizontal limits on view window
bottom, top, // vertical limits on view window
width, height, // width and height of viewport
preserveAspect // should window be forced to match viewport aspect?
)
if preserveAspect :
// Adjust the limits to match the aspect ratio of the drawing area.
displayAspect = abs(height / width);
windowAspect = abs(( top-bottom ) / ( right-left ));
if displayAspect > windowAspect :
// Expand the viewport vertically.
excess = (top-bottom) * (displayAspect/windowAspect - 1)
top = top + excess/2
bottom = bottom - excess/2
else if displayAspect < windowAspect :
// Expand the viewport horizontally.
excess = (right-left) * (windowAspect/displayAspect - 1)
right = right + excess/2
left = left - excess/2
scale( width / (right-left), height / (bottom-top) )
translate( -left, -top )
2.3.8 Matrices and Vectors

Rotation about the origin and scaling about the origin are linear transformations, and a linear transformation of 2D points can be represented as multiplication of the point, written as a column vector, by a 2-by-2 matrix. This is really nice, but there is a gaping problem: Translation is not a linear transfor-
mation. To bring translation into this framework, we do something that looks a little strange
at first: Instead of representing a point in 2D as a pair of numbers (x,y), we represent it as the
triple of numbers (x,y,1). That is, we add a one as the third coordinate. It then turns out that
we can then represent rotation, scaling, and translation—and hence any affine transformation—
on 2D space as multiplication by a 3-by-3 matrix. The matrices that we need have a bottom
row containing (0,0,1). Multiplying (x,y,1) by such a matrix gives a new vector (x1,y1,1). We
ignore the extra coordinate and consider this to be a transformation of (x,y) into (x1,y1 ). For
the record, the 3-by-3 matrices for translation (T_{a,b}), scaling (S_{a,b}), and rotation (R_d) in 2D are

$$T_{a,b} = \begin{pmatrix} 1 & 0 & a \\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix} \qquad S_{a,b} = \begin{pmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & 1 \end{pmatrix} \qquad R_d = \begin{pmatrix} \cos(d) & -\sin(d) & 0 \\ \sin(d) & \cos(d) & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
You can compare multiplication by these matrices to the formulas given above for translation,
scaling, and rotation. However, you won’t need to do the multiplication yourself. For now,
the important idea that you should take away from this discussion is that a sequence of trans-
formations can be combined into a single transformation. The computer only needs to keep
track of a single matrix, which we can call the “current matrix” or “current transformation.”
To implement transform commands such as translate(a,b) or rotate(d), the computer simply
multiplies the current matrix by the matrix that represents the transform.
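Java's AffineTransform class can play the role of such a current matrix; here is a sketch of how a sequence of commands composes into a single transform that is then applied to a point:

import java.awt.geom.AffineTransform;

AffineTransform current = new AffineTransform();  // starts as the identity
current.translate(4, 2);      // multiply current matrix by a translation matrix
current.rotate(Math.PI / 2);  // multiply current matrix by a rotation matrix
double[] p = { 1, 0 };        // a point (x,y)
current.transform(p, 0, p, 0, 1);  // apply the combined transform to the point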
2.4 Hierarchical Modeling

In hierarchical modeling, an object is typically drawn in its own natural coordinate system and is then moved into the scene by scaling it, rotating it, and translating it. Once the object has been scaled and rotated, it’s easy to use a translation to move the reference point
to any desired point in the scene. (Of course, in a particular case, you might not need all three
operations.) Remember that in the code, the transformations are specified in the opposite
order from the order in which they are applied to the object and that the transformations are
specified before drawing the object. So in the code, the translation would come first, followed
by the rotation and then the scaling. Modeling transforms are not always composed in this
order, but it is the most common usage.
The modeling transformations that are used to place an object in the scene should not
affect other objects in the scene. To limit their application to just the one object, we can
save the current transformation before starting work on the object and restore it afterwards.
How this is done differs from one graphics API to another, but let’s suppose here that there
are subroutines saveTransform() and restoreTransform() for performing those tasks. That is,
saveTransform will make a copy of the modeling transformation that is currently in effect and
store that copy. It does not change the current transformation; it merely saves a copy. Later,
when restoreTransform is called, it will retrieve that copy and will replace the current modeling
transform with the retrieved transform. Typical code for drawing an object will then have the
form:
saveTransform()
translate(dx,dy) // move object into position
rotate(r) // set the orientation of the object
scale(sx,sy) // set the size of the object
.
. // draw the object, using its natural coordinates
.
restoreTransform()
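In Java 2D, for example, getTransform() and setTransform() on the Graphics2D can play the roles of saveTransform and restoreTransform; a sketch, with drawObject a hypothetical routine:

import java.awt.geom.AffineTransform;

AffineTransform saved = g2.getTransform();  // save a copy of the current transform
g2.translate(dx, dy);
g2.rotate(r);
g2.scale(sx, sy);
drawObject(g2);          // draw the object in its natural coordinates
g2.setTransform(saved);  // restore the saved transform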
Note that we don’t know and don’t need to know what the saved transform does. Perhaps
it is simply the so-called identity transform, which is a transform that doesn’t modify the
coordinates to which it is applied. Or there might already be another transform in place, such
as a coordinate transform that affects the scene as a whole. The modeling transform for the
object is effectively applied in addition to any other transform that was specified previously.
The modeling transform moves the object from its natural coordinates into its proper place in
the scene. Then on top of that, a coordinate transform that is applied to the scene as a whole
would carry the object along with it.
Now let’s extend this idea. Suppose that the object that we want to draw is itself a complex
picture, made up of a number of smaller objects. Think, for example, of a potted flower made
up of pot, stem, leaves, and bloom. We would like to be able to draw the smaller component
objects in their own natural coordinate systems, just as we do the main object. For example,
we would like to specify the bloom in a coordinate system in which the center of the bloom is
at (0,0). But this is easy: We draw each small component object, such as the bloom, in its own
coordinate system, and use a modeling transformation to move the sub-object into position
within the main object. We are composing the complex object in its own natural coordinate
system as if it were a complete scene.
On top of that, we can apply another modeling transformation to the complex object as
a whole, to move it into the actual scene; the sub-objects of the complex object are carried
along with it. That is, the overall transformation that applies to a sub-object consists of a
modeling transformation that places the sub-object into the complex object, followed by the
transformation that places the complex object into the scene.
In fact, we can build objects that are made up of smaller objects which in turn are made
up of even smaller objects, to any level. For example, we could draw the bloom’s petals in
their own coordinate systems, then apply modeling transformations to place the petals into the
natural coordinate system for the bloom. There will be another transformation that moves the
bloom into position on the stem, and yet another transformation that places the entire potted
flower into the scene. This is hierarchical modeling.
Let’s look at a little example. Suppose that we want to draw a simple 2D image of a cart
with two wheels.
This cart is used as one part of a complex scene in an example below. The body of the cart can
be drawn as a pair of rectangles. For the wheels, suppose that we have written a subroutine
drawWheel()
that draws a wheel. This subroutine draws the wheel in its own natural coordinate system. In
this coordinate system, the wheel is centered at (0,0) and has radius 1.
In the cart’s coordinate system, I found it convenient to use the midpoint of the base of
the large rectangle as the reference point. I assume that the positive direction of the y-axis
points upward, which is the common convention in mathematics. The rectangular body of the
cart has width 6 and height 2, so the coordinates of the lower left corner of the rectangle are
(-3,0), and we can draw it with a command such as fillRectangle(-3,0,6,2). The top of the cart
is a smaller red rectangle, which can be drawn in a similar way. To complete the cart, we need
to add two wheels to the object. To make the size of the wheels fit the cart, they need to be
scaled. To place them in the correct positions relative to body of the cart, one wheel must be
translated to the left and the other wheel, to the right. When I coded this example, I had to
play around with the numbers to get the right sizes and positions for the wheels, and I found
that the wheels looked better if I also moved them down a bit. Using the usual techniques of
hierarchical modeling, we save the current transform before drawing each wheel, and we restore
it after drawing the wheel. This restricts the effect of the modeling transformation for the wheel
to that wheel alone, so that it does not affect any other part of the cart. Here is pseudocode
for a subroutine that draws the cart in its own coordinate system:
subroutine drawCart() :
saveTransform() // save the current transform
translate(-1.65,-0.1) // center of first wheel will be at (-1.65,-0.1)
scale(0.8,0.8) // scale to reduce radius from 1 to 0.8
drawWheel() // draw the first wheel
restoreTransform() // restore the saved transform
saveTransform() // save it again
translate(1.5,-0.1) // center of second wheel will be at (1.5,-0.1)
scale(0.8,0.8) // scale to reduce radius from 1 to 0.8
drawWheel() // draw the second wheel
restoreTransform() // restore the saved transform
fillRectangle(-3,0,6,2) // draw the body of the cart
// (a second, smaller fillRectangle would draw the red top of the cart)
(Demo)
You can probably guess how hierarchical modeling is used to draw the three windmills in
this example. There is a drawWindmill method that draws a windmill in its own coordinate
system. Each of the windmills in the scene is then produced by applying a different modeling
transform to the standard windmill. Furthermore, the windmill is itself a complex object that
is constructed from several sub-objects using various modeling transformations.
∗ ∗ ∗
It might not be so easy to see how different parts of the scene can be animated. In fact,
animation is just another aspect of modeling. A computer animation consists of a sequence
of frames. Each frame is a separate image, with small changes from one frame to the next.
From our point of view, each frame is a separate scene and has to be drawn separately. The
same object can appear in many frames. To animate the object, we can simply apply a different
modeling transformation to the object in each frame. The parameters used in the transformation
can be computed from the current time or from the frame number. To make a cart move from
left to right, for example, we might apply a modeling transformation
translate(frameNumber * 0.1, 0)
to the cart, where frameNumber is the frame number. In each frame, the cart will be 0.1 units
farther to the right than in the previous frame. (In fact, in the actual program, the translation
that is applied to the cart is
translate( -3 + 13*(frameNumber % 300) / 300.0, 0 )
which moves the reference point of the cart from -3 to 13 along the horizontal axis every 300
frames. In the coordinate system that is used for the scene, the x-coordinate ranges from 0 to
7, so this puts the cart outside the scene for much of the loop.)
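A sketch of how this frame-dependent transform might look in Java, with frameNumber supplied by a hypothetical animation loop and drawCart a per-frame version of the routine above:

AffineTransform saved = g2.getTransform();
g2.translate(-3 + 13 * (frameNumber % 300) / 300.0, 0); // -3 up to 13, every 300 frames
drawCart(g2);  // hypothetical routine drawing the cart in its own coordinates
g2.setTransform(saved);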
The really neat thing is that this type of animation works with hierarchical modeling.
For example, the drawWindmill method doesn’t just draw a windmill—it draws an animated
windmill, with turning vanes. That just means that the rotation applied to the vanes depends
on the frame number. When a modeling transformation is applied to the windmill, the rotating
vanes are scaled and moved as part of the object as a whole. This is an example of hierarchical
modeling. The vanes are sub-objects of the windmill. The rotation of the vanes is part of
the modeling transformation that places the vanes into the windmill object. Then a further
modeling transformation is applied to the windmill object to place it in the scene.
The file java2d/HierarchicalModeling2D.java contains the complete source code for a Java
version of this example. The next section of this book covers graphics programming in Java.
Once you are familiar with that, you should take a look at the source code, especially the
paintComponent() method, which draws the entire scene.
2.4.2 Scene Graphs

Logically, the objects in a scene like this one form a structure in which each object can occur, possibly several times, as a sub-object of other objects. Such a structure is called a scene graph. Here is a scene graph for the example:

[Diagram: a scene graph with nodes GROUND, SUN, CART, WINDMILL, WHEEL, VANE, FILLED CIRCLE, FILLED SQUARE, CIRCLE, and LINE, with arrows showing each use of an object in a parent object; two thick arrows marked (12) represent the 12 uses of the line in the sun and in the wheel.]
In this drawing, a single object can have several connections to one or more parent objects.
Each connection represents one occurrence of the object in its parent object. For example, the
“filled square” object occurs as a sub-object in the cart and in the windmill. It is used twice in
the cart and once in the windmill. (The cart contains two red rectangles, which are created as
squares with a non-uniform scaling; the pole of the windmill is made as a scaled square.) The
“filled circle” is used in the sun and is used twice in the wheel. The “line” is used 12 times in
the sun and 12 times in the wheel; I’ve drawn one thick arrow, marked with a 12, to represent
12 connections. The wheel, in turn, is used twice in the cart. (My diagram leaves out, for lack
of space, two occurrences of the filled square in the scene: It is used to make the road and the
line down the middle of the road.)
Each arrow in the picture can be associated with a modeling transformation that places
the sub-object into its parent object. When an object contains several copies of a sub-object,
each arrow connecting the sub-object to the object will have a different associated modeling
transformation. The object is the same for each copy; only the transformation differs.
Although the scene graph exists conceptually, in some applications it exists only implicitly.
For example, the Java version of the program that was mentioned above draws the image
“procedurally,” that is, by calling subroutines. There is no data structure to represent the
scene graph. Instead, the scene graph is implicit in the sequence of subroutine calls that
draw the scene. Each node in the graph is a subroutine, and each arrow is a subroutine
call. The various objects are drawn using different modeling transformations. As discussed in
Subsection 2.3.8, the computer only keeps track of a “current transformation” that represents
all the transforms that are applied to an object. When an object is drawn by a subroutine, the current transformation that is in effect at the time of the call is the one that is applied to that object.