
Computer Graphics:
Programming, Problem Solving,
and Visual Communication

Dr. Steve Cunningham


Computer Science Department
California State University Stanislaus
Turlock, CA 95382

copyright © 2003, Steve Cunningham


All rights reserved
CONTENTS:
Preface
• What is Computer Graphics?
• What is a Graphics API?
• Why do Computer Graphics?
• Overview of the Book

Getting Started
• 3D Geometry
- 3D model coordinate systems
- 3D world coordinate system
- 3D eye coordinate system
- Clipping
- Projections
- 2D eye coordinates
- 2D screen coordinates
• Appearance
- Color
- Texture mapping
- Depth buffering
• The viewing process
- Different implementation, same result
- Summary of viewing advantages
• A basic OpenGL program
- The structure of the main() program using OpenGL
- Model space
- Modeling transformation
- 3D world space
- Viewing transformation
- 3D eye space
- Projections
- 2D eye space
- 2D screen space
- Appearance
- Another way to see the program
• OpenGL extensions
• Summary
• Questions
• Exercises
• Experiments

Chapter 1: Viewing and Projection


• Introduction
• Fundamental model of viewing
• Definitions
- Setting up the viewing environment
- Defining the projection
- View volumes
- Calculating the perspective transformation
- Clipping on the view volume
- Defining the window and viewport

3/15/03 Page 2
• Some aspects of managing the view
- Hidden surfaces
- Double buffering
- Clipping planes
• Stereo viewing
• Implementation of viewing and projection in OpenGL
- Defining a window and viewport
- Reshaping the window
- Defining a viewing environment
- Defining perspective projection
- Defining an orthogonal projection
- Managing hidden surface viewing
- Setting double buffering
- Defining clipping planes
• Implementing a stereo view
• Summary
• Questions
• Exercises
• Experiments

Chapter 2: Principles of Modeling


Simple Geometric Modeling
• Introduction
• Definitions
• Some examples
- Point and points
- Line segments
- Connected lines
- Triangle
- Sequence of triangles
- Quadrilateral
- Sequence of quads
- General polygon
- Polyhedron
- Aliasing and antialiasing
- Normals
- Data structures to hold objects
- Additional sources of graphic objects
- A word to the wise
Transformations and modeling
• Introduction
• Definitions
- Transformations
- Composite transformations
- Transformation stacks and their manipulation
- Compiling geometry
• A word to the wise
Scene graphs and modeling graphs
• Introduction
• A brief summary of scene graphs
- An example of modeling with a scene graph
• The viewing transformation
• The scene graph and depth testing

• Using the modeling graph for coding
- Example
- Using standard objects to create more complex scenes
• Summary
• Questions
• Exercises
• Experiments
• Projects

Chapter 3: Implementing Modeling in OpenGL


• The OpenGL model for specifying geometry
- Point and points mode
- Line segments
- Line strips
- Line loops
- Triangle
- Sequence of triangles
- Quads
- Quad strips
- General polygon
- Antialiasing
- The cube we will use in many examples
• Additional objects with the OpenGL toolkits
- GLU quadric objects
> GLU cylinder
> GLU disk
> GLU sphere
- The GLUT objects
- An example
• A word to the wise
• Transformations in OpenGL
• Code examples for transformations
- Simple transformations
- Transformation stacks
- Inverting the eyepoint transformation
- Creating display lists
• Summary
• Questions
• Exercises
• Experiments
• Projects

Chapter 4: Mathematics for Modeling


• Coordinate systems and points
• Points, lines, and line segments
• Distance from a point to a line
• Line segments and parametric curves
• Vectors
• Dot and cross products of vectors
• Reflection vectors
• Transformations
• Planes and half-spaces
• Distance from a point to a plane
• Polygons and convexity

• Polyhedra
• Collision detection
• Polar, cylindrical, and spherical coordinates
• Higher dimensions?
• Questions
• Exercises
• Experiments

Chapter 5: Color and Blending


• Introduction
• Principles
- Specifying colors
- The RGB cube
- Luminance
- Other color models
- Color depth
- Color gamut
- Color blending with the alpha channel
- Challenges in blending
- Modeling transparency with blending
- Indexed color
- Using color to create 3D images
• Some examples
- An object with partially transparent faces
• Color in OpenGL
- Specifying colors
- Enabling blending
• A word to the wise
• Code examples
- A model with parts having a full spectrum of colors
- The HSV cone
- The HLS double cone
- An object with partially transparent faces
• Summary
• Questions
• Exercises
• Experiments
• Projects

Chapter 6: Visual Communication


• Introduction
• General issues in visual communication
- Use appropriate representation for your information
- Keep your images focused
- Use appropriate presentation levels for your information
- Use appropriate forms for your information
- Be very careful to be accurate with your information
- Understand and respect the cultural context of your audience
- Make your interactions reflect familiar and comfortable relationships between cause
and effect
• Shape
- Comparing shape and color encodings

• Color
- Emphasis colors
- Background colors
- Color deficiencies in audience
- Naturalistic color
- Pseudocolor and color ramps
- Implementing color ramps
- Using color ramps
- To light or not to light
- Higher dimensions
• Dimensions
• Image context
- Choosing an appropriate view
- Legends to help communicate your encodings
- Labels to help communicate your problem
• Motion
- Leaving traces of motion
- Motion blurring
• Interactions
• Cultural context of the audience
• Accuracy
• Output media
• Implementing some of these ideas in OpenGL
- Using color ramps
- Legends and labels
- Creating traces
- Using the accumulation buffer
• A word to the wise

Chapter 7: Graphical Problem Solving in Science


• Introduction
• Examples
• Diffusion
- Temperatures in a bar
- Spread of disease
• Function graphing and applications
• Parametric curves and surfaces
• Graphical objects that are the results of limit processes
• Scalar fields
• Simulation of objects and behaviors
- Gas laws and diffusion principles
- Molecular display
- A scientific instrument
- Monte Carlo modeling process
• 4D graphing
- Volume data
- Vector fields
• Graphing in higher dimensions
• Data-driven graphics
• Code examples
- Diffusion
- Function graphing
- Parametric curves and surfaces
- Limit processes

- Scalar fields
- Representation of objects and behaviors
- Molecular display
- Monte Carlo modeling
- 4D graphing
- Higher dimensional graphing
- Data-driven graphics
• Summary
• Credits
• Questions
• Exercises
• Experiments
• Projects

Chapter 8: Lighting and Shading


Lighting
• Definitions
- Ambient, diffuse, and specular light
- Surface normals
• Light properties
- Light color
- Positional lights
- Spotlights
- Attenuation
- Directional lights
- Positional and moving lights
Materials
Shading
• Definitions
- Flat shading
- Smooth shading
• Examples of flat and smooth shading
• Calculating per-vertex normals
- Averaging polygon normals
- Analytic computations
• Other shading models
- Vertex and pixel shaders
Lighting, Shading, and Materials in Scene Graphs
Global Illumination
Local Illumination
• Lights and materials in OpenGL
- Specifying and defining lights
- Defining materials
- Setting up a scene to use lighting
- Using GLU quadric objects
- An example: lights of all three primary colors applied to a white surface
- Code for the example
• A word to the wise
- Shading example
• Summary
• Questions
• Exercises
• Experiments
• Projects

Chapter 9: Texture Mapping
• Introduction
• Definitions
- 1D texture maps
- 2D texture maps
- 3D texture maps
- Associating a vertex with a texture point
- The relation between the color of the object and the color of the texture map
- Other meanings for texture maps
- Texture mapping in the scene graph
• Creating a texture map
- Getting an image as a texture map
- Generating a synthetic texture map
• Texture mapping and billboards
- Including multiple textures in one texture map
• Interpolation for texture maps
• Antialiasing in texturing
• MIP mapping
• Multitexturing
• Using billboards
• Texture mapping in OpenGL
- Associating vertices and texture points
- Capturing a texture from the screen
- Texture environment
- Texture parameters
- Getting and defining a texture map
- Texture coordinate control
- Texture interpolation
- Texture mapping and GLU quadrics
• Some examples
- The Chromadepth™ process
- Environment maps
• A word to the wise
• Code examples
- A 1D color ramp
- An image on a surface
- An environment map
- Multitexturing code
• Summary
• Questions
• Exercises
• Experiments
• Projects

Chapter 10: The Rendering Pipeline


• Introduction
• The pipeline
• The rendering pipeline for OpenGL
- Texture mapping in the rendering pipeline
- Per-fragment operations
- Some extensions to OpenGL
- An implementation of the rendering pipeline in a graphics card
• The rasterization process

• Some 3D Viewing Operations with Graphics Cards
• Summary
• Questions
• Exercises
• Experiments

Chapter 11: Event Handling


• Definitions
• Some examples of events
- keypress events
- mouse events
- menu events
- window events
- system events
- software events
• The vocabulary of interaction
• Object selection
• Events and the scene graph
• A word to the wise
• Events in OpenGL
• Callback registering
• Some details
- More on menus
• Code examples
- Idle event callback
- Keyboard callback
- Menu callback
- Mouse callback for mouse motion
- Mouse callback for object picking
• Details on picking
- Definitions
- Making picking work
- The pick matrix
- Using the back buffer to do picking
- A selection example
- A summary of picking
The MUI (Micro User Interface) Facility
• Introduction
• Definitions
- Menu bars
- Buttons
- Radio buttons
- Text boxes
- Horizontal sliders
- Vertical sliders
- Text labels
• Using the MUI functionality
• Some examples
• Installing MUI for Windows
• A word to the wise
• Summary
• Questions
• Exercises
• Experiments

• Projects

Chapter 12: Dynamics and Animation


• Introduction
• Definitions
• Keyframe animation
- Temporal aliasing
- Building an animation
• Some examples
- Moving objects in your model
- Moving parts of objects in your model
- Moving the eye point or the view frame in your model
- Changing features of your models
• Some points to consider when doing animations with OpenGL
• Code examples
• A word to the wise
• Summary
• Questions
• Exercises
• Experiments
• Projects

Chapter 13: High-Performance Graphics Techniques


• Definitions
• Techniques
- Hardware avoidance
- Designing out visible polygons
- Culling polygons
- Avoiding depth comparisons
- Front-to-back drawing
- Binary space partitioning
- Clever use of textures
- System speedups
- Level of detail
- Reducing lighting computation
- Fog
- Collision detection
• A word to the wise
• Summary
• Questions
• Exercises
• Experiments
• Projects

Chapter 14: Interpolation and Spline Modeling


• Introduction
- Interpolations
- Extending interpolations to more control points
- Generating normals for a patch
- Generating texture coordinates for a patch
• Interpolations in OpenGL
- Automatic normal and texture generation with evaluators
- Additional techniques
• Definitions

• Some examples
- Spline curves
- Spline surfaces
• A word to the wise
• Summary
• Questions
• Exercises
• Experiments
• Projects

Chapter 15: Per-Pixel Operations


• Introduction
• Definitions
• Ray casting
• Ray tracing
• Ray tracing and global illumination
• Volume rendering
• Fractal images
• Iterated function systems
• Per-pixel operations supported by OpenGL
• Summary
• Questions
• Exercises
• Experiments
• Projects

Chapter 16: Hardcopy


• Introduction
• Definitions
- Digital images
- Print
- Film
- Video
- Digital video
- 3D object prototyping
- The STL file
• Summary
• Questions
• Exercises
• Experiments
• Projects

References and Resources


• References
• Resources

Appendices
• Appendix I: PDB file format
• Appendix II: CTL file format
• Appendix III: STL file format

Index

Evaluation
• Instructor’s evaluation
• Student’s evaluation

Because this is a draft of a textbook for an introductory, API-based computer graphics course, the
author recognizes that there may be some inaccuracies, incompleteness, or clumsiness in the
presentation and apologizes for these in advance. Further development of these materials, as well
as source code for many projects and additional examples, is ongoing. All such
materials will be posted as they are ready on the author’s Web site:
https://www.cs.csustan.edu/~rsc/NSF/
Your comments and suggestions will be very helpful in making these materials as useful as
possible and are solicited; please contact

Steve Cunningham
California State University Stanislaus
[email protected]

While the author retains copyright and other associated rights, permission is given to use this
manuscript and associated materials in both electronic and printed form as a resource for a
beginning computer graphics course.

This work was supported by National Science Foundation grant DUE-9950121. All
opinions, findings, conclusions, and recommendations in this work are those of the author
and do not necessarily reflect the views of the National Science Foundation. The author
also gratefully acknowledges sabbatical support from California State University Stanislaus
and thanks the San Diego Supercomputer Center, most particularly Dr. Michael J. Bailey,
for hosting this work and for providing significant assistance with both visualization and
science content. Ken Brown, a student of the author’s, provided invaluable and much-
appreciated assistance with several figures and concepts in this manuscript. The author
also thanks students Ben Eadington, Jordan Maynard, and Virginia Muncy for their
contributions through examples, and a number of others for valuable conversations and
suggestions on these notes.

Preface
Computer graphics is one of the most exciting ways that computing has made an impact on the
world. From the simple ways that spreadsheets allow you to create charts to see data, to the
ways graphics has enhanced entertainment by providing new kinds of cartoons and special effects,
to the ways graphics has enabled us to see and understand scientific principles, computer
graphics is everywhere we turn. This important presence has come from the greatly improved
graphics hardware and software that is found in current computing systems. With these
advances, computer graphics has evolved from a highly technical field, requiring very expensive
computers and frame buffers and demanding that programmers master all the mathematics and
algorithms needed to create an image, into a field that allows the graphics programmer to think
and work at a much higher level of modeling and to create images that communicate effectively
with the user. We believe that the beginning computer graphics
course should focus on how the student can learn to create effective communications with
computer graphics, including motion and interaction, and that the more technical details of
algorithms and mathematics for graphics should be saved for more advanced courses.

What is Computer Graphics?

Computer graphics is involved in any work that uses computation to create or modify images,
whether those images are still or moving; interactive or fixed; on film, video, screen, or print. It
can also be part of creating objects that are manufactured from the processes that also create
images. This makes it a very broad field, encompassing many kinds of uses in the creative,
commercial, and scientific worlds. This breadth means that there are many kinds of tools to
create and manipulate images for these different areas. The large number of tools and
applications means that there are many different things one could learn about computer graphics.

In this book, we do not try to cover the full scope of work that can be called computer graphics.
Rather, we view computer graphics as the art and science of creating synthetic images by
programming the geometry and appearance of the contents of the images, and by displaying the
results of that programming on appropriate devices that support graphical output and interaction.
This focus on creating images by programming means that we must learn how to think about
how to represent graphical and interaction concepts in ways that can be used by the computer,
which both limits and empowers the graphics programmer.

The work of the programmer is to develop appropriate representations for the geometric objects
that are to make up the images, to assemble these objects into an appropriate geometric space
where they can have the proper relationships with each other as needed for the image, to define
and present the look of each of the objects as part of that scene, to specify how the scene is to be
viewed, and to specify how the scene as viewed is to be displayed on the graphic device. The
programming may be done in many ways, but in current practice it usually uses a graphics API
that supports the necessary modeling and does most of the detailed work of rendering the scene
that is defined through the programming. There are a number of graphics APIs available, but the
OpenGL API is probably most commonly used currently.
In addition to the creation of the modeling, viewing, and look of the scene, the programmer has
two other important tasks. Because a static image does not present as much information as a
moving image, the programmer may want to design some motion into the scene, that is, may
want to define some animation for the image. And because the programmer may want the user
to have the opportunity to control the nature of the image or the way the image is seen, the
programmer may want to design ways for the user to interact with the scene as it is presented.
These additional tasks are also supported by the graphics API.

What is a Graphics API?

An API is an Application Programming Interface—a set of tools that allow a programmer to


work in an application area. The API’s tools are oriented to the tasks of the application area, and
allow a programmer to design applications using the concepts of the area without having to deal
with the details of the computer system. Among the advantages of an API is that it hides the
details of any one computer system and allows the programmer to develop applications that will
work on any of a wide range of systems. Thus a graphics API is a set of tools that allow a
programmer to write applications that include the use of interactive computer graphics without
dealing with system details for tasks such as window handling and interactions.
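To make this concrete, the structure of a minimal interactive OpenGL program can be sketched with the GLUT toolkit, which supplies the window handling and event loop that the API would otherwise leave to the system. This is an illustrative sketch rather than code from the book; the window title and size are arbitrary choices.

```c
/* Minimal OpenGL/GLUT skeleton: GLUT handles window creation and the
   event loop, so the programmer supplies only the drawing code. */
#include <GL/glut.h>

/* Called by GLUT whenever the window needs to be redrawn. */
void display(void) {
    glClear(GL_COLOR_BUFFER_BIT);
    /* modeling, viewing, and appearance code goes here */
    glutSwapBuffers();   /* show the completed image (double buffering) */
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);                       /* system-level setup */
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB); /* double-buffered RGB window */
    glutInitWindowSize(500, 500);
    glutCreateWindow("A basic OpenGL program");
    glutDisplayFunc(display);                    /* register the redraw callback */
    glutMainLoop();                              /* hand control to the event loop */
    return 0;
}
```

Notice that nothing here touches the operating system's windowing interface directly; the API and its toolkit hide those details, which is exactly the advantage described above.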

Besides covering the basic ideas of interactive computer graphics, this book will introduce you to
the OpenGL graphics API and give you a number of examples that will help you understand
the capabilities OpenGL provides and learn how to integrate graphics programming into your
other work.

Why do Computer Graphics?

Computer graphics has many faces, so there are many reasons why one might want to use
computer graphics in his or her work. Many of the most visible uses of computer graphics are to
create images for the sciences (scientific visualization, explanations to the public), for
entertainment (movies, video games, special effects), for creative or aesthetic work (art,
interactive installations), for commercial purposes (advertising, communication, product design),
or for general communication (animated weather displays, information graphics). The processes
described in this book are all fundamental to each of these applications, although some of the
applications will want the kinds of sophistication or realism in images that are not possible
through simple API programming.

In all of these application areas, and more, there is a fundamental role for computer graphics in
solving problems. Problem solving is a basic process in all human activity, so computer graphics
can play a fundamental role in almost any area, as shown in Figure 1. This figure describes what
occurs as someone:
• identifies a problem
• addresses the problem by building a model that represents it and allows it to be considered
more abstractly
• identifies a way to represent the problem geometrically
• creates an image from that geometry so that the problem can be seen visually

• uses the image to understand the problem or the model and to try to understand a possible
solution.

Problem → Model → Geometry → Image

Figure 1: Computer graphics in the problem-solving process

The image that represents a problem can be made in many ways. One of the classical uses of
images in problem solving is simply to sketch an image—a diagram or picture—to communicate
the problem to a colleague so it can be discussed informally. (In the sciences, it is assumed that
restaurants are not happy to see a group of scientists or mathematicians come to dinner because
they write diagrams on the tablecloth!) But an image can also be made with computer graphics,
and this is especially useful when it is important to share the idea with a larger audience. If the
model permits it, this image may be an animation or an interactive display so that the problem
can be examined more generally than a single image would permit. That image, then, can be
used by the problem-solver or the audience to gain a deeper understanding of the model and
hence of the problem, and the problem can be refined iteratively and a more sophisticated model
created, and the process can continue.

This process is the basis for all of the discussions in a later chapter on graphical problem solving
in the sciences, but it may be applied to more general application areas. In allowing us to bring
the visual parts of our brain and our intelligence to a problem, it gives us a powerful tool to think
about the world. In the words of Mike Bailey of the San Diego Supercomputer Center, computer
graphics gives us a “brain wrench” that magnifies the power of our mind, just as a physical
wrench magnifies the power of our hands.

Overview of the Book

This book is a textbook for a beginning computer graphics course for students who have a good
programming background, equivalent to a full year of programming courses. We use C as the
programming language in our examples because it is the most common language for developing
applications with OpenGL. The book can be used by students with no previous computer
graphics experience and with less background in mathematics and advanced computer science
than the traditional computer graphics course requires. Because we focus on graphics
programming rather than
algorithms and techniques, we have fewer instances of data structures and other computer
science techniques. This means that this text can be used for a computer graphics course that can
be taken earlier in a student’s computer science studies than the traditional graphics course, or
for self-study by anyone with a sound programming background. In particular, this book can be
used as a text for a computer graphics course at the community college level.

Many, if not most, of the examples in this book are taken from sources in the sciences, and we
include a chapter that discusses several kinds of scientific and mathematical applications of
computer graphics. This emphasis makes this book appropriate for courses in computational
science or in computer science programs that want to develop ties with other programs on
campus, particularly programs that want to provide science students with a background that will
support development of computational science or scientific visualization work. It is tempting to
use the word “visualization” somewhere in the title of this book, but we would reserve that word
for material that is primarily focused on the science with only a sidelight on the graphics;
because we reverse that emphasis, we treat scientific visualization as an application of the
computer graphics.

The book is organized along fairly traditional lines, treating projection, viewing, modeling,
rendering, lighting, shading, and many other aspects of the field. It also includes an emphasis on
using computer graphics to address real problems and to communicate results effectively to the
viewer. As we move through this material, we describe some general principles in computer
graphics and show how the OpenGL API provides the graphics programming tools that
implement these principles. We do not spend time describing in depth the algorithms behind the
techniques or the way the techniques are implemented; your instructor will provide these if he or
she finds it necessary. Instead, the book focuses on describing the concepts behind the graphics
and on using a graphics API (application programming interface) to carry out graphics
operations and create images.

We have tried to match the sequence of chapters in the book to the sequence we would expect to
be used in a beginning computer graphics course, and in some cases the presentation of one
module will depend on your knowing the content of an earlier chapter. However, in other cases
it will not be critical that earlier chapters have been covered. It should be pretty obvious if other
chapters are assumed, and we may make that assumption explicit in some modules.

The book focuses on computer graphics programming with a graphics API, and in particular uses
the OpenGL API to implement the basic concepts that it presents. Each chapter includes a
general discussion of a topic in graphics as well as a discussion of the way the topic is handled in
OpenGL. However, another graphics API might also be used, with the OpenGL discussion
serving as an example of the way an API could work. Many of the fundamental algorithms and
techniques that are at the root of computer graphics are covered only at the level they are needed
to understand questions of graphics programming. This differs from most computer graphics
textbooks that place a great deal of emphasis on understanding these algorithms and techniques.
We recognize the importance of these for persons who want to develop a deep knowledge of the
subject and suggest that a second graphics course can provide that knowledge. We believe that
the experience provided by API-based graphics programming will help you understand the
importance of these algorithms and techniques as they are developed and will equip you to work
with them more fluently than if you met them with no previous background.

This book includes several features that are not found in most beginning textbooks. These
features support a course that fits the current programming practice in computer graphics. The
discussions in this book will focus on 3D graphics and will almost completely omit uniquely 2D

techniques. It has been traditional for computer graphics courses to start with 2D graphics and
move up to 3D because some of the algorithms and techniques have been easier to grasp at the
2D level, but without that concern it is easier to begin by covering 3D concepts and discuss 2D
graphics as the special case where all the modeling happens in the X-Y plane.

Modeling is a very fundamental topic in computer graphics, and there are many different ways
that one can model objects for graphical display. This book uses the standard beginning
approach of focusing on polygon-based modeling because that approach is supported by
OpenGL and most other graphics APIs. The discussion on modeling in this book places an
important emphasis on the scene graph as a fundamental tool in organizing the work needed to
create a graphics scene. The concept of the scene graph allows the student to design the
transformations, geometry, and appearance of a number of complex components in a way that
they can be implemented quite readily in code, even if the graphics API itself does not support
the scene graph directly. This is particularly important for hierarchical modeling, but it also
provides a unified design approach to modeling and has some very useful applications for
placing the eye point in the scene and for managing motion and animation.

A key feature of this book is an emphasis on using computer graphics to create effective visual
communication. This recognizes the key role that computer graphics has taken in developing an
understanding of complex problems and in communicating this understanding to others, from
small groups of working scientists to the general public. This emphasis is usually missing from
computer graphics textbooks, although we expect that most instructors include this somehow in
their courses. The discussion of effective communication is integrated throughout several of the
basic chapters in the book, because it is an important consideration in graphics modeling,
viewing, color, and interaction. We believe that a systematic discussion of this subject will help
prepare students for more effective use of computer graphics in their future professional lives,
whether this is in technical areas in computing or is in areas where there are significant
applications of computer graphics.

This book also places a good deal of emphasis on creating interactive displays. Most computer
graphics textbooks cover interaction and the creation of interactive graphics. Historically this
was a difficult area to implement because it involved writing or using specialized device drivers,
but with the growing importance of OpenGL and other graphics APIs this area has become much
more common. Because we are concerned with effective communication, we believe it is
critically important to understand the role of interaction in communicating information with
graphics. Our discussion of interaction includes a general treatment of event-driven
programming and covers the events and callbacks used in OpenGL, but it also discusses the role
of interaction in creating effective communications. This views interaction in the context of the
task that is to be supported, not just the technology being studied, and thus integrates it into the
overall context of the book.


Chapter 0: Getting Started
This chapter is intended to give you a basic overview of the concepts of computer graphics so that
you can move forward into the rest of the book with some idea of what the field is about. It gives
some general discussion of the basis of the field, and then has two key content areas.

The first key area is the discussion of three-dimensional geometry, managed by the 3D geometry
pipeline, and the concept of appearance for computer graphics objects, managed by the rendering
pipeline. The geometry pipeline shows you the key information that you must specify to create an
image and the kind of computation a graphics system must do in order to present that image. We
will also discuss some of the ways appearance can be specified, but we will wait until a later
chapter to discuss the rendering pipeline.

The second key area is a presentation of the way a graphics program is laid out for the OpenGL
graphics API, the key API we will use in this book. In this presentation you will see both the
general structure of an OpenGL program and a complete example of a program that models a
particular problem and produces a particular animated image. In that example you will see how the
information for the geometry pipeline and the appearance information are defined for the program
and will be able to try out various changes to the program as part of the chapter exercises.

3D Geometry and the Geometry Pipeline

Computer graphics, at least as we will treat it in this book, is fundamentally three-dimensional, so
a graphics system must deal with three-dimensional geometry. Computer graphics systems do this
by creating the 3D geometry pipeline, a collection of processes that convert 3D points, the basic
building blocks of 3D geometry, from those that are most convenient for the application
programmer into those that are most convenient for the display hardware. We will explore the
details of the steps for the geometry pipeline in the chapter on viewing and projection, but here we
outline the steps of the geometry pipeline to help you understand how it operates. This pipeline is
diagrammed in Figure 0.1, and we will start to sketch some stages in the pipeline in this chapter.
A great deal more detail will be given in the next few chapters.
3D Model Coordinates
  --(Modeling Transformation)--> 3D World Coordinates
  --(Viewing Transformation)--> 3D Eye Coordinates
  --(Clipping)--> 3D Eye Coordinates
  --(Projection)--> 2D Eye Coordinates
  --(Window-to-Viewport Mapping)--> 2D Screen Coordinates

Figure 0.1: The geometry pipeline’s stages and mappings

3D model coordinate systems

In order to create an image, we must define the geometry that represents each part of the image.
The process of creating and defining this geometry is called modeling, and is described in the
chapters below on principles of modeling and on modeling in OpenGL. This is usually done by
defining each object in terms of a coordinate system that makes sense for that particular object, and
then using a set of modeling transformations that places that object in a single world coordinate
system that represents the common space in which all the objects will live. Modeling then creates
the 3D model coordinates for each object, and the modeling transformations place the objects in the
world coordinate system that contains the entire scene.
3D world coordinate system

The 3D coordinate system that is shared by all the objects in the scene is called the world
coordinate system. By placing every component of the scene in this single shared world, we can
treat the scene uniformly as we develop the presentation of the scene through the graphics display
device to the user. The scene is a master design element that contains both the geometry of the
objects placed in it and the geometry of lights that illuminate it. Note that the world coordinate
system often is considered to represent the actual dimensions of a scene because it may be used to
model some real-world environment. This coordinate system exists without any reference to a
viewer, as is the case with any real-world scene. In order to create an image from the scene, the
viewer is added at the next stage.

3D eye coordinate system

Once the 3D world has been created, an application programmer would like the freedom to allow
an audience to view it from any location. But graphics viewing models typically require a specific
orientation and/or position for the eye at this stage. For example, the system might require that the
eye position be at the origin, looking in –Z (or sometimes +Z). So the next step in the geometry
pipeline is the viewing transformation, in which the coordinate system for the scene is changed to
satisfy this requirement. The result is the 3D eye coordinate system. We can think of this process
as grabbing the arbitrary eye location and all the 3D world objects and sliding them around to
realign the spaces so that the eye ends up at the proper place and looking in the proper direction.
The relative positions between the eye and the other objects have not been changed; all the parts of
the scene are simply anchored in a different spot in 3D space. Because standard viewing models
may also specify a standard distance from the eyepoint to some fixed “look-at” point in the scene,
there may also be some scaling involved in the viewing transformation. The viewing
transformation is just a transformation in the same sense as modeling transformations, although it
can be specified in a variety of ways depending on the graphics API. Because the viewing
transformation changes the coordinates of the entire world space in order to move the eye to the
standard position and orientation, we can consider the viewing transformation to be the inverse of
whatever transformation placed the eye point in the position and orientation defined for the view.
We will take advantage of this observation in the modeling chapter when we consider how to place
the eye in the scene’s geometry.

Clipping

At this point, we are ready to clip the object against the 3D viewing volume. The viewing volume
is the 3D volume that is determined by the projection to be used (see below) and that declares what
portion of the 3D universe the viewer wants to be able to see. This happens by defining how much
of the scene should be visible, and includes defining the left, right, bottom, top, near, and far
boundaries of that space. Any portions of the scene that are outside the defined viewing volume
are clipped and discarded. All portions that are inside are retained and passed along to the
projection step. In Figure 0.2, it is clear that some of the world and some of the helicopter lie
outside the viewable space to the left, right, top, or bottom, but note how the front of the image of
the ground in the figure is clipped—is made invisible in the scene—because it is too close to the
viewer’s eye. This is a bit difficult to see, but if you look at the cliffs at the upper left of the scene
you will see a clipped edge.

Clipping is done as the scene is projected to the 2D eye coordinates in projections, as described
next. Besides ensuring that the view includes only the things that should be visible, clipping also
increases the efficiency of image creation because it eliminates some parts of the geometry from the
rest of the display process.



Figure 0.2: Clipping on the Left, Bottom, and Right

Projections

The 3D eye coordinate system still must be converted into a 2D coordinate system before it can be
mapped onto a graphics display device. The next stage of the geometry pipeline performs this
operation, called a projection. Before discussing the actual projection, we must think about what
we will actually see in the graphic device. Imagine your eye placed somewhere in the scene,
looking in a particular direction. You do not see the entire scene; you only see what lies in front of
your eye and within your field of view. This space is called the viewing volume for your scene,
and it includes a bit more than the eye point, direction, and field of view; it also includes a front
plane, with the concept that you cannot see anything closer than this plane, and a back plane, with
the concept that you cannot see anything farther than that plane. In Figure 0.3 we see two viewing
volumes for the two kinds of projections that we will discuss in a moment.

Figure 0.3: Parallel and Perspective Viewing Volumes, with Eyeballs

There are two kinds of projections commonly used in computer graphics. One maps all the points
in the eye space to the viewing plane by simply ignoring the value of the z-coordinate, and as a
result all points on a line parallel to the direction of the eye are mapped to the same point on the
viewing plane. Such a projection is called a parallel projection, and it has the effect that the viewer
can read accurate dimensions in the x- and y-coordinates. It is common for engineering drawings
to present two parallel projections with the second including a 90° rotation of the world space so
accurate z-coordinates can also be seen. The other projection acts as if the eye were a single point
and each point in the scene is mapped along a line from the eye to that point, to a point on a plane
in front of the eye, which is the classical technique of artists when drawing with perspective. Such
a projection is called a perspective projection. And just as there are parallel and perspective
projections, there are parallel (also called orthographic) and perspective viewing volumes. In a
parallel projection, objects stay the same size as they get farther away. In a perspective projection,
objects get smaller as they get farther away. Perspective projections tend to look more realistic,
while parallel projections tend to make objects easier to line up. Each projection will display the
geometry within the region of 3-space that is bounded by the right, left, top, bottom, back, and
front planes described above. The region that is visible with each projection is often called its view
volume. As we see in Figure 0.3, the viewing volume of a parallel projection is a rectangular
region (here shown as a solid), while the viewing volume of a perspective projection has the shape
of a pyramid that is truncated at the top (also shown as a solid). This kind of shape is a truncated
pyramid and is sometimes called a frustum.

While the viewing volume describes the region in space that is included in the view, the actual view
is what is displayed on the front clipping plane of the viewing volume. This is a 2D space and is
essentially the 2D eye space discussed below. Figure 0.4 presents a scene with both parallel and
perspective projections; in this example, you will have to look carefully to see the differences!

Figure 0.4: the same scene as presented by a parallel projection (left) and by a perspective projection (right)

2D eye coordinates

The space that projection maps to is a two-dimensional real-coordinate space that contains the
geometry of the original scene after the projection is applied. Because a single point in 2D eye
coordinates corresponds to an entire line segment in the 3D eye space, depth information is lost in
the projection and it can be difficult to perceive depth, particularly if a parallel projection was used.
Even in that case, however, if we display the scene with a hidden-surface technique, object
occlusion will help us order the content in the scene. Hidden-surface techniques are discussed in a
later chapter.



2D screen coordinates

The final step in the geometry pipeline is to change the coordinates of objects in the 2D eye space
so that the object is in a coordinate system appropriate for the 2D display device. Because the
screen is a digital device, this requires that the real numbers in the 2D eye coordinate system be
converted to integer numbers that represent screen coordinates. This is done with a proportional
mapping followed by a truncation of the coordinate values. It is called the window-to-viewport
mapping, and the new coordinate space is referred to as screen coordinates, or display coordinates.
When this step is done, the entire scene is now represented by integer screen coordinates and can
be drawn on the 2D display device.

Note that this entire pipeline process converts vertices, or geometry, from one form to another by
means of several different transformations. These transformations ensure that the vertex geometry
of the scene is consistent among the different representations as the scene is developed, but
computer graphics also assumes that the topology of the scene stays the same. For instance, if two
points are connected by a line in 3D model space, then those converted points are assumed to
likewise be connected by a line in 2D screen space. Thus the geometric relationships (points,
lines, polygons, ...) that were specified in the original model space are all maintained until we get
to screen space, and are only actually implemented there.

Appearance

Along with geometry, computer graphics is built on the ability to define the appearance of objects,
so you can make them appear naturalistic or can give them colors that can communicate something
to the user.

In the discussion so far, we have only talked about the coordinates of the vertices of a model.
There are many other properties of vertices, though, that are used in rendering the scene, that is, in
creating the actual image defined by the scene. These are discussed in many of the later chapters,
but it is worth noting here that these properties are present when the vertex is defined and are
preserved as the vertex is processed through the geometry pipeline. Some of these properties
involve concepts that we have not yet covered, but these will be defined below. These properties
include:
• a depth value for the vertex, defined as the distance of the vertex from the eye point in the
direction of the view reference point,
• a color for the vertex,
• a normal vector at the vertex,
• material properties for the vertex, and
• texture coordinates for the vertex.
These properties are used in the development of the appearance of each of the objects in the image.
They allow the graphics system to calculate the color of each pixel in the screen representation of
the image after the vertices are converted to 2D screen coordinates. For the details of the process,
see the chapter below on the rendering pipeline.

Appearance is handled by operations that are applied after the geometry is mapped to screen space.
In order to do this, the geometric primitives described above are broken down into very simple
primitives and these are processed by identifying the parts of the window raster that make up each
one. This is done by processing the vertex information described in the previous paragraph into
scanline information, as described in a later chapter. Appearance information is associated with
each vertex, and as the vertex information is processed into scanlines, and as the pixels on each
scanline are processed, appearance information is also processed to create the colors that are used
in filling each primitive. Processes such as depth buffering are also handled at this stage, creating
the appropriate visible surface view of a scene. So the appearance information follows the
geometry information, and the chapters of this book that discuss appearance issues will follow
most of the geometry chapters.

Color

Color can be set directly by the program or can be computed from a lighting model in case your
scene is defined in terms of lights and materials. Most graphics APIs now support what is called
RGBA color: color defined in terms of the emissive primaries red, green, and blue, and with an
alpha channel that allows you to blend items with the background when they are drawn. These
systems also allow a very large number of colors, typically on the order of 16 million. So there are
a large number of possibilities for color use, as described in later chapters on color and on lighting.

Texture mapping

Among the most powerful ways to add visual interest to a scene is texture mapping, a capability
that allows you to add information to objects in a scene from either natural or synthetic complex
images. With texture mapping you can achieve photographic surface effects or other kinds of
images that will make your images much more interesting and realistic. This is discussed in a later
chapter and should be an important facility for you.

Depth buffering

As your scene is developed, you want only the objects nearest the eye to be seen; anything that is
behind these will be hidden by the nearer objects. This is managed in the rendering stage by
keeping track of the distance of each vertex from the eye. If an object is nearer than the previously
drawn part of the scene for the same pixels, then the object will replace the previous part; otherwise
the previous part is retained. This is a straightforward computation that is supported by essentially
all modern graphics systems.

The viewing process

Let’s look at the overall operations on the geometry you define for a scene as the graphics system
works on that scene and eventually displays it to your user. Referring again to Figure 0.1 and
omitting the clipping and window-to-viewport process, we see that we start with geometry, apply
the modeling transformation(s), apply the viewing transformation, and finally apply the projection
to the screen. This can be expressed in terms of function composition as the sequence
projection(viewing(transformation(geometry)))
or, with the associative law for functions and writing function composition as multiplication,
(projection * viewing * transformation) (geometry).
Because the operations nearest the geometry are performed before operations farther from the
geometry, we will want to define the projection first, the viewing next, and the transformations
last, before we define the geometry they are to operate on. This is independent
of whether we want to use a perspective or parallel projection. We will see this sequence as a key
factor in the way we structure a scene through the scene graph in the modeling chapter later in these
notes.

Different implementation, same result

Warning! To this point, our discussion has only shown the concept of how a vertex travels
through the geometry pipeline, but we have not given any details on how this actually is done.
There are several ways of implementing this travel, any of which will produce a correct display.
Do not be surprised if you find that a graphics system does not manage the overall geometry
pipeline process exactly as shown here. The basic principles and stages of the operation are still the same.



For example, OpenGL combines the modeling and viewing transformations into a single
transformation known as the modelview matrix. This will force us to take a slightly different
approach to the modeling and viewing process that integrates these two steps. Also, graphics
hardware systems typically perform a window-to-normalized-coordinates operation prior to
clipping so that hardware can be optimized around a particular coordinate system. In this case,
everything else stays the same except that the final step would be normalized-coordinate-to-
viewport mapping.

In many cases, we simply will not be concerned about the details of how the stages are carried out.
Our goal will be to represent the geometry correctly at the modeling and world coordinate stages, to
specify the eye position appropriately so the transformation to eye coordinates will be correct, and
to define our window and projections correctly so the transformations down to 2D and to screen
space will be correct. Other details will be left to a more advanced graphics course.

Summary of viewing advantages

One of the classic questions beginners have about viewing a computer graphics image is whether to
use perspective or orthographic projections. Each of these has its strengths and its weaknesses.
As a quick guide to start with, here are some thoughts on the two approaches:

Orthographic projections are at their best when:


• Items in the scene need to be checked to see if they line up or are the same size
• Lines need to be checked to see if they are parallel
• We do not care that distance is handled unrealistically
• We are not trying to move through the scene

Perspective projections are at their best when:


• Realism counts
• We want to move through the scene and have a view like a human viewer would have
• We do not need to measure or align parts of the image

In fact, when you have some experience with each, and when you know the expectations of the
audience for which you’re preparing your images, you will find that the choice is quite natural and
will have no problem knowing which is better for a given image.

A basic OpenGL program

Our example programs that use OpenGL have some strong similarities. Each is based on the
GLUT utility toolkit that usually accompanies OpenGL systems, so all the sample codes have this
fundamental similarity. (If your version of OpenGL does not include GLUT, its source code is
available online; check the page at
http://www.reality.sgi.com/opengl/glut3/glut3.h
and you can find out where to get it. You will need to download the code, compile it, and install it
in your system.) Similarly, when we get to the section on event handling, we will use the MUI
(micro user interface) toolkit, although this is not yet developed or included in this first draft
release.

Like most worthwhile APIs, OpenGL is complex and offers you many different ways to express a
solution to a graphical problem in code. Our examples use a rather limited approach that works
well for interactive programs, because we believe strongly that graphics and interaction should be
learned together. When you want to focus on making highly realistic graphics, of the sort that
takes a long time to create a single image, then you can readily give up the notion of interactive
work.



So what is the typical structure of a program that would use OpenGL to make interactive images?
We will display this structure-only example in C, as we will with all our examples in these notes.
We have chosen not to present examples in C++ because OpenGL is not really compatible with the
concept of object-oriented programming because it maintains an extensive set of state information
that cannot be encapsulated in graphics classes, while object-oriented design usually calls for
objects to maintain their own state. Indeed, as you will see when you look at the example
programs, many functions such as event callbacks cannot even deal with parameters and must
work with global variables, so the usual practice is to create a global application environment
through global variables and use these variables instead of parameters to pass information in and
out of functions. (Typically, OpenGL programs use side effects—passing information through
external variables instead of through parameters—because graphics environments are complex and
parameter lists can become unmanageable.)

In the code below, you will see that the main function involves mostly setting up the system. This
is done in two ways: first, setting up GLUT to create and place the system window in which your
work will be displayed, and second, setting up the event-handling system by defining the callbacks
to be used when events occur. After this is done, main calls the main event loop that will drive all
the program operations, as described in the chapter below on event handling.

The full code example that follows this outline also discusses many of the details of these functions
and of the callbacks, so we will not go into much detail here. For now, the things to note are that
the reshape callback sets up the window parameters for the system, including the size, shape, and
location of the window, and defines the projection to be used in the view. This is called first when
the main event loop is entered as well as when any window activity happens (such as resizing or
dragging). The reshape callback requests a redisplay when it finishes, which calls the display
callback function, whose task is to set up the view and define the geometry for the scene. When
this is finished, OpenGL is finished and goes back to your computer system to see if there has
been any other graphics-related event. If there has, your program should have a callback to
manage it; if there has not, then the idle event is generated and the idle callback function is called;
this may change some of the geometry parameters and then a redisplay is again called.

#include <GL/glut.h> // Windows; other includes for other systems


// other includes as needed

// typedef and global data section


// as needed

// function template section


void doMyInit(void);
void display(void);
void reshape(int,int);
void idle(void);
// others as defined

// initialization function
void doMyInit(void) {
set up basic OpenGL parameters and environment
set up projection transformation (ortho or perspective)
}

// reshape function
void reshape(int w, int h) {
set up projection transformation with new window
dimensions w and h
post redisplay
}



// display function
void display(void){
set up viewing transformation as in later chapters
define the geometry, transformations, appearance you need
post redisplay
}

// idle function
void idle(void) {
update anything that changes between steps of the program
post redisplay
}

// other graphics and application functions


// as needed

// main function -- set up the system, turn it over to events


int main(int argc, char** argv) {
// initialize system through GLUT and your own initialization
glutInit(&argc,argv);
glutInitDisplayMode (GLUT_DOUBLE | GLUT_RGB);
glutInitWindowSize(windW,windH);
glutInitWindowPosition(topLeftX,topLeftY);
glutCreateWindow("A Sample Program");
doMyInit();
// define callbacks for events as needed; this is pretty minimal
glutDisplayFunc(display);
glutReshapeFunc(reshape);
glutIdleFunc(idle);
// go into main event loop
glutMainLoop();
}

Now that we have seen a basic structure for an OpenGL program, we will present a complete,
working program and will analyze the way it represents the geometry pipeline described earlier in
this chapter, while describing the details of OpenGL that it uses. The program is the simple
simulation of temperatures in a uniform metal bar that is described in the later chapter on graphical
problem-solving in science, and we will only analyze the program structure, not its function. It
creates the image shown in Figure 0.5.

Figure 0.5: heat distribution in a bar



The code we will discuss is given below. We will segment it into components so you may see the
different ways the individual pieces contribute to the overall graphics operations, and then we will
discuss the pieces after the code listing.
// Example - heat flow in a thin rectangular body
// declarations and initialization of variables and system
#include <GL/glut.h> // for windows; changes for other systems
#include <stdlib.h>
#include <stdio.h>
#include <math.h>

#define ROWS 10 // body is ROWSxCOLS (unitless) squares


#define COLS 30
#define AMBIENT 25.0 // ambient temperature, degrees Celsius
#define HOT 50.0 // hot temperature of heat-source cell
#define COLD 0.0 // cold temperature of cold-sink cell
#define NHOTS 4 // number of hot cells
#define NCOLDS 5 // number of cold cells

GLfloat angle = 0.0;


GLfloat temps[ROWS][COLS], back[ROWS+2][COLS+2];
GLfloat theta = 0.0, vp = 30.0;
// set locations of fixed hot and cold spots on the bar
int hotspots[NHOTS][2] =
{ {ROWS/2,0},{ROWS/2-1,0},{ROWS/2-2,0},{0,3*COLS/4} };
int coldspots[NCOLDS][2] =
{ {ROWS-1,COLS/3}, {ROWS-1,1+COLS/3}, {ROWS-1,2+COLS/3},
{ROWS-1,3+COLS/3}, {ROWS-1,4+COLS/3} };
int myWin;

void myinit(void);
void cube(void);
void display(void);
void setColor(float);
void reshape(int, int);
void animate(void);
void iterationStep(void);

void myinit(void) {
int i,j;

glEnable (GL_DEPTH_TEST);
glClearColor(0.6, 0.6, 0.6, 1.0);

// set up initial temperatures in cells


for (i=0; i<ROWS; i++) {
for (j=0; j < COLS; j++) {
temps[i][j] = AMBIENT;
}
}
for (i=0; i<NHOTS; i++) {
temps[hotspots[i][0]][hotspots[i][1]]=HOT; }
for (i=0; i<NCOLDS; i++) {
temps[coldspots[i][0]][coldspots[i][1]]=COLD; }
}



// create a unit cube in first octant in model coordinates
void cube (void) {
typedef GLfloat point [3];

point v[8] = {
{0.0, 0.0, 0.0}, {0.0, 0.0, 1.0},
{0.0, 1.0, 0.0}, {0.0, 1.0, 1.0},
{1.0, 0.0, 0.0}, {1.0, 0.0, 1.0},
{1.0, 1.0, 0.0}, {1.0, 1.0, 1.0} };

glBegin (GL_QUAD_STRIP);
glVertex3fv(v[4]);
glVertex3fv(v[5]);
glVertex3fv(v[0]);
glVertex3fv(v[1]);
glVertex3fv(v[2]);
glVertex3fv(v[3]);
glVertex3fv(v[6]);
glVertex3fv(v[7]);
glEnd();

glBegin (GL_QUAD_STRIP);
glVertex3fv(v[1]);
glVertex3fv(v[3]);
glVertex3fv(v[5]);
glVertex3fv(v[7]);
glVertex3fv(v[4]);
glVertex3fv(v[6]);
glVertex3fv(v[0]);
glVertex3fv(v[2]);
glEnd();
}

void display( void ) {
int i,j;

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// This short section defines the viewing transformation
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// eye point center of view up
gluLookAt(vp, vp/2., vp/4., 0.0, 0.0, 0.0, 0.0, 0.0, 1.0);

// Set up a rotation for the entire scene
glPushMatrix();
glRotatef(angle, 0., 0., 1.);

// Draw the bars
for (i = 0; i < ROWS; i++) {
for (j = 0; j < COLS; j++) {
setColor( temps[i][j] );
// Here is the modeling transformation for each item in the display
glPushMatrix();
glTranslatef((float)i-(float)ROWS/2.0,
(float)j-(float)COLS/2.0,0.0);
// 0.1 cold, 4.0 hot
glScalef(1.0, 1.0, 0.1+3.9*temps[i][j]/HOT);
cube();
glPopMatrix();
}
}
glPopMatrix(); // pop the scene rotation
glutSwapBuffers();
}

void reshape(int w,int h) {
// This defines the projection transformation
glViewport(0,0,(GLsizei)w,(GLsizei)h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, (float)w/(float)h, 1.0, 300.0);
}

void setColor(float t) {
float r, g, b;
r = t/HOT; g = 0.0; b = 1.0 - t/HOT;
glColor3f(r, g, b);
}

void animate(void) {
// This function is called whenever the system is idle; it calls
// iterationStep() to change the data so the next image is changed
iterationStep();
glutPostRedisplay();
}

void iterationStep(void) {
int i, j, m, n;

float filter[3][3] = { { 0.   , 0.125, 0.    },
                       { 0.125, 0.5  , 0.125 },
                       { 0.   , 0.125, 0.    } };

// increment temperatures throughout the material
for (i=0; i<ROWS; i++) // back up temps so they can be recomputed
for (j=0; j<COLS; j++)
back[i+1][j+1] = temps[i][j]; // leave boundaries on back
// fill boundaries with adjacent values from original temps[][]
for (i=1; i<ROWS+2; i++) {
back[i][0]=back[i][1];
back[i][COLS+1]=back[i][COLS];
}
for (j=0; j<COLS+2; j++) {
back[0][j] = back[1][j];
back[ROWS+1][j]=back[ROWS][j];
}
for (i=0; i<ROWS; i++) // diffusion based on back values
for (j=0; j<COLS; j++) {
temps[i][j]=0.0;
for (m=-1; m<=1; m++)
for (n=-1; n<=1; n++)
temps[i][j]+=back[i+1+m][j+1+n]*filter[m+1][n+1];
}
for (i=0; i<NHOTS; i++) {
temps[hotspots[i][0]][hotspots[i][1]]=HOT; }
for (i=0; i<NCOLDS; i++) {
temps[coldspots[i][0]][coldspots[i][1]]=COLD; }

// update the angle for the rotation
angle += 1.0;
}

int main(int argc, char** argv) {
// Initialize the GLUT system and define the window
glutInit(&argc,argv);
glutInitDisplayMode (GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
glutInitWindowSize(500,500);
glutInitWindowPosition(50,50);
myWin = glutCreateWindow("Temperature in bar");
myinit();

// define the event callbacks and enter main event loop
glutDisplayFunc(display);
glutReshapeFunc(reshape);
glutIdleFunc(animate);
glutMainLoop(); /* enter event loop */
}

The structure of the main() program using OpenGL

The main() program in an OpenGL-based application looks somewhat different from the
programs you have probably seen before. This function has several key operations: it sets up the
display mode, defines the window in which the display will be presented, and does whatever
initialization is needed by the program. It then does something that may not be familiar to you: it
defines a set of event callbacks, which are functions that are called by the system when an event
occurs.

When you set up the display mode, you indicate to the system all the special features that your
program will use at some point. In the example here,
glutInitDisplayMode (GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
you tell the system that you will be working in double-buffered mode, will use the RGB color
model, and will be using depth testing. Some of these have to be enabled before they are actually
used; depth testing, for example, is enabled in the myinit() function with
glEnable(GL_DEPTH_TEST).
Details on depth testing and a discussion of how this is managed in OpenGL are found in the next
chapter.

Setting up the window (or windows—OpenGL will let you have multiple windows open and
active) is handled by a set of GLUT function calls that position the window, define the size of the
window, and give a title to the window. As the program runs, an active window may be reshaped
by the user using the standard techniques of whatever window system is being used, and this is
handled by the reshape() function.

The way OpenGL handles event-driven programming is described in much more detail in a later
chapter, but for now you need to realize that GLUT-based OpenGL (which is all we will describe
in this book) operates entirely from events. For each event that the program is to handle, you need
to define a callback function here in main(). When the main event loop is started, a reshape
event is generated to create the window and a display event is created to draw an image in the
window. If any other events have callbacks defined, then those callback functions are invoked
when the events happen. The reshape callback allows you to move the window or change its size,
and is called whenever you do any window manipulation. The idle callback allows the program to
create a sequence of images by doing recomputations whenever the system is idle (is not creating
an image or responding to another event), and then redisplaying the changed image.
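The callback mechanism described above is, at bottom, a table of function pointers that the event loop consults. The following toy sketch in plain C (entirely our own; none of these names are GLUT functions) shows the pattern: callbacks are registered, and each pass of the loop invokes the idle callback and then performs the redisplay it requests.

```c
#include <assert.h>
#include <stddef.h>

/* A toy model of GLUT-style callback registration; every name here is
   hypothetical, not part of any real API. */
typedef void (*Callback)(void);

static Callback displayCB = NULL;
static Callback idleCB    = NULL;

void registerDisplay(Callback f) { displayCB = f; }
void registerIdle(Callback f)    { idleCB    = f; }

int displayCount = 0;  /* how many times the display callback ran */
int idleCount    = 0;  /* how many times the idle callback ran */

static void myDisplay(void) { displayCount++; }
static void myIdle(void)    { idleCount++;    }

/* A toy event loop: an initial display event, then, with no other
   events pending, each step runs the idle callback followed by the
   redisplay it posts. */
void runToyLoop(int steps) {
    int i;
    if (displayCB) displayCB();          /* initial display event */
    for (i = 0; i < steps; i++) {
        if (idleCB)    idleCB();         /* idle event */
        if (displayCB) displayCB();      /* posted redisplay */
    }
}
```

Registering myDisplay and myIdle and calling runToyLoop(3) runs the idle callback three times and the display callback four times (once for the initial display event), mirroring the way animate() drives display() through glutPostRedisplay().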

Model space

The function cube() above defines a unit cube with sides parallel to the coordinate axes, one
vertex at the origin, and one vertex at (1,1,1). This cube is created by defining an array of points
that are the eight vertices of such a cube, and then using the glBegin()...glEnd()
construction to draw the six squares that make up the cube through two quad strips. This is
discussed in the chapter on modeling with OpenGL; for now, note that the cube uses its own set of
coordinates that may or may not have anything to do with the space in which we will define our
metallic strip to simulate heat transfer.

Modeling transformation

Modeling transformations are found in the display() function or functions called from it, and
are quite simple: they define the fundamental transformations that are to be applied to the basic
geometry units as they are placed into the world. In our example, the basic geometry unit is a unit
cube, and the cube is scaled in Z (but not in X or Y) to define the height of each cell and is then
translated in X and Y (but not Z) to place the cell in the right place. The order of the
transformations, the way each is defined, and the glPushMatrix()/glPopMatrix() operations
you see in the code are described in the later chapter on modeling in OpenGL. For now it suffices
to see that the transformations are defined in order to make a rectangular object of the proper height
to represent the temperature.
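The height computation buried in the glScalef() call can be checked in isolation. The helper below is our own restatement (barHeight is not a function in the program): a temperature of 0 gives the minimum height 0.1, and the HOT temperature gives the maximum height 4.0.

```c
#include <assert.h>
#include <math.h>

#define HOT 50.0 /* hot temperature of heat-source cell, as above */

/* Height of a cell's bar, the z scale factor passed to glScalef():
   0.1 at temperature 0.0, 4.0 at HOT, linear in between. */
float barHeight(float temp) {
    return (float)(0.1 + 3.9 * temp / HOT);
}
```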

3D world space

The 3D world space for this program is the space in which the graphical objects live after they have
been placed by the modeling transformations. The translations give us one hint as to this space; we
see that the x-coordinates of the translated cubes will lie between -ROWS/2 and ROWS/2, while
the y-coordinates of these cubes will lie between -COLS/2 and COLS/2. Because ROWS and
COLS are 10 and 30, respectively, the x-coordinates will lie between -5 and 5 and the y-
coordinates will lie between -15 and 15. The low z-coordinate is 0 because that is never changed
when the cubes are scaled, while the high z-coordinate is never larger than 4. Thus the entire bar
lies in the region between -5 and 5 in x, -15 and 15 in y, and 0 and 4 in z. (Actually, this is not
quite correct, but it is adequate for now; you are encouraged to find the small error.)
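These extents can be confirmed with a few lines of plain C that mimic the translation applied to each unit cube (a check of our own, not part of the program):

```c
#include <assert.h>

#define ROWS 10
#define COLS 30

/* World-space x and y extents of the bar: each unit cube at cell (i,j)
   is translated to x = i - ROWS/2, y = j - COLS/2 and extends one unit
   further in each direction. */
void barExtents(float *xmin, float *xmax, float *ymin, float *ymax) {
    int i, j;
    *xmin = *ymin =  1.0e9f;
    *xmax = *ymax = -1.0e9f;
    for (i = 0; i < ROWS; i++)
        for (j = 0; j < COLS; j++) {
            float x = (float)i - (float)ROWS/2.0f;
            float y = (float)j - (float)COLS/2.0f;
            if (x        < *xmin) *xmin = x;
            if (x + 1.0f > *xmax) *xmax = x + 1.0f;
            if (y        < *ymin) *ymin = y;
            if (y + 1.0f > *ymax) *ymax = y + 1.0f;
        }
}
```

For ROWS = 10 and COLS = 30 this reports x in [-5, 5] and y in [-15, 15].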

Viewing transformation

The viewing transformation is defined at the beginning of the display() function above. The
code identifies that it is setting up the modelview matrix, sets that matrix to the identity (a
transformation that makes no changes to the world), and then specifies the view. A view is
specified in OpenGL with the gluLookAt() call:
gluLookAt( ex, ey, ez, lx, ly, lz, ux, uy, uz );
with parameters that include the coordinates of eye position (ex, ey, ez), the coordinates of the
point at which the eye is looking (lx, ly, lz), and the coordinates of a vector that defines the
“up” direction for the view (ux, uy, uz). This is discussed in more detail in the chapter below
on viewing.

3D eye space

There is no specific representation of the 3D eye space in the program, because this is simply an
intermediate stage in the production of the image. We can see, however, that we had set the center
of view to the origin, which is the center of our image, and we had set our eye point to look at the
origin from a point somewhat above and to the right of the center, so after the viewing
transformation the object seems to be tilted up and to the side. This is the representation in the final
3D space that will be used to project the scene to the eye.

Projections

The projection operation is defined here in the reshape() function. It may be done in other
places, but this is a good location and clearly separates the operation of projection from the
operation of viewing.

Projections are specified fairly easily in the OpenGL system. An orthographic (or parallel)
projection is defined with the function call:
glOrtho( left, right, bottom, top, near, far );
where left and right are the x-coordinates of the left and right sides of the orthographic view
volume, bottom and top are the y-coordinates of the bottom and top of the view volume, and
near and far are the z-coordinates of the front and back of the view volume. A perspective
projection can be defined with the function call:
gluPerspective( fovy, aspect, near, far );
In this function, the first parameter is the field of view in degrees, the second is the aspect ratio for
the window, and the near and far parameters are as above. In this projection, it is assumed that
your eye is at the origin so there is no need to specify the other four clipping planes; they are
determined by the field of view and the aspect ratio.

When the window is reshaped, it is useful to take the width and height from the reshape event and
define your projection to have the same aspect ratio (ratio of width to height) that the window has.
That way there is no distortion introduced into the scene as it is seen through the newly-shaped
window. If you use a fixed aspect ratio and change the window’s shape, the original scene will be
distorted to be seen through the new window, which can be confusing to the user.

2D eye space

This is the real 2D space to which the 3D world is projected, and it corresponds to the forward
plane of the view volume. In order to provide uniform dimensions for mapping to the screen, the
eye space is scaled so it has dimension -1 to 1 in each coordinate.

2D screen space

When the system was initialized, the window for this program was defined to be 500x500 pixels in
size with a top corner at (50, 50), or 50 pixels down and 50 pixels over from the upper-left corner
of the screen. Thus the screen space for the window is the set of pixels in that area of the screen.
In fact, though, the window maintains its coordinate system independently of its location, so the
point that had been (0, 0, 0) in 3D eye space is now (249, 249) in screen space. Note that screen
space has integer coordinates that represent individual pixels and is discrete, not continuous, and
its coordinates start at 0.
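The window-to-viewport step is a linear map from the [-1, 1] range of 2D eye space to window coordinates. The helper below is our own restatement of the standard mapping (the program itself simply relies on glViewport()); note that a normalized coordinate of 0 lands at the continuous center of the window, 250.0 for a 500-pixel dimension, which falls between the discrete pixels numbered 249 and 250.

```c
#include <assert.h>

/* Map a normalized device coordinate in [-1, 1] to a continuous
   window coordinate in [0, size], as the viewport transformation does. */
double toWindow(double ndc, double size) {
    return (ndc + 1.0) * 0.5 * size;
}
```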

Appearance

The appearance of the objects in this program is defined by the function setColor(), called
from the display() function. If you recall that display() is also the place where modeling is
defined, you will see that appearance is really part of modeling—you model both the geometry of
an object and its appearance. The value of the temperature in each cell is used to compute a color
for the cell’s object as it is displayed, using the OpenGL glColor3f() function. This is about
the simplest way to define the color for an object’s appearance, but it is quite effective.
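The blue-to-red ramp inside setColor() can be restated without the OpenGL call and verified directly (this helper is our own; the program passes the components straight to glColor3f()): cold cells come out pure blue, hot cells pure red, and the green component is always zero.

```c
#include <assert.h>
#include <math.h>

#define HOT 50.0 /* hot temperature of heat-source cell, as above */

/* The temperature-to-color ramp used by setColor(): blue at 0.0,
   red at HOT, blending linearly in between. */
void tempColor(float t, float *r, float *g, float *b) {
    *r = (float)(t / HOT);
    *g = 0.0f;
    *b = (float)(1.0 - t / HOT);
}
```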

Another way to see the program

Another way to see how this program works is to consider the code function-by-function instead
of by the properties of the geometry pipeline. We will do this briefly here.

The task of the function myinit() is to set up the fundamental environment for the program and
its scene. This is a good place to compute values for arrays that
define the geometry, to define specific named colors, and the like. At the end of this function you
should set up the initial projection specifications.

The task of the function display() is to do everything needed to create the image. This can
involve manipulating a significant amount of data, but the function does not allow any parameters.
Here is the first place where the data for graphics problems must be managed through global
variables. As we noted above, we treat the global data as a programmer-created environment, with
some functions manipulating the data and the graphical functions using that data (the graphics
environment) to define and present the display. In most cases, the global data is changed only
through well-documented side effects, so this use of the data is reasonably clean. (Note that this
argues strongly for a great deal of emphasis on documentation in your projects, which most people
believe is not a bad thing.) Of course, some functions can create or receive control parameters, and
it is up to you to decide whether these parameters should be managed globally or locally, but even
in this case the declarations are likely to be global because of the wide number of functions that
may use them. You will also find that your graphics API maintains its own environment, called its
system state, and that some of your functions will also manipulate that environment, so it is
important to consider the overall environment effect of your work.

The task of the function reshape() is to respond to user manipulation of the window in which
the graphics are displayed. The function takes two parameters, which are the width and height of
the window in screen space (or in pixels) as it is resized by the user’s manipulation, and should be
used to reset the projection information for the scene. GLUT interacts with the window manager
of the system and allows a window to be moved or resized very flexibly without the programmer
having to manage any system-dependent operations directly. Surely this kind of system
independence is one of the very good reasons to use the GLUT toolkit!

The task of the function animate() is to respond to the “idle” event — the event that nothing has
happened. This function defines what the program is to do without any user activity, and is the
way we can get animation in our programs. Without going into detail that should wait for our
general discussion of events, the process is that the idle() function makes any desired changes
in the global environment and then requests a new display (with these changes) by invoking
glutPostRedisplay(), which posts a “redisplay” event so that the display function is called
when the system can next do so.

The execution sequence of a simple program with no other events would then look something like
what is shown in Figure 0.7. Note that main() does not call the display() function directly;
instead main() calls the event handling function glutMainLoop() which in turn makes the
first call to display() and then waits for events to be posted to the system event queue. We will
describe event handling in more detail in a later chapter.

Figure 0.7: the event loop for the idle event

So we see that in the absence of any other event activity, the program will continue to apply the
activity of the idle() function as time progresses, leading to an image that changes over
time—that is, to an animated image.

A few words on the details of the idle() function might help in seeing what it does. The whole
program presents the behavior of heat in a bar, and the transfer of heat from one place to another is
described by the heat equation. In this program we model heat transfer by a diffusion process.
This uses a filter that sets the current heat at a position to a weighted average of the heat of the
cell’s neighbors, modeled by the filter array in this function. A full description of this is in the
chapter on science applications. At each time step—that is, at each time when the program
becomes idle—this diffusion process is applied to compute a new set of temperatures, and the
angle of rotation of the display is updated. The call to glutPostRedisplay() at the end of
this function then generates a call to the display() function that draws the image with the new
temperatures and new angle.
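Because the filter weights sum to 1, a uniform temperature field must pass through the diffusion step unchanged, a sanity check that needs no graphics at all. The sketch below is our own small test harness; it mirrors the logic of iterationStep() on a reduced grid, without the fixed hot and cold cells.

```c
#include <assert.h>
#include <math.h>

#define R 4  /* small grid for testing; the program uses ROWS x COLS */
#define C 6

/* One diffusion step with replicated boundary values, following
   iterationStep() above but without re-imposing hot/cold cells. */
void diffuse(float t[R][C]) {
    float back[R+2][C+2];
    float filter[3][3] = { { 0.0f  , 0.125f, 0.0f   },
                           { 0.125f, 0.5f  , 0.125f },
                           { 0.0f  , 0.125f, 0.0f   } };
    int i, j, m, n;
    for (i = 0; i < R; i++)              /* copy into bordered array */
        for (j = 0; j < C; j++)
            back[i+1][j+1] = t[i][j];
    for (i = 1; i <= R; i++) {           /* replicate side boundaries */
        back[i][0]   = back[i][1];
        back[i][C+1] = back[i][C];
    }
    for (j = 0; j <= C+1; j++) {         /* replicate top and bottom */
        back[0][j]   = back[1][j];
        back[R+1][j] = back[R][j];
    }
    for (i = 0; i < R; i++)              /* apply the filter */
        for (j = 0; j < C; j++) {
            t[i][j] = 0.0f;
            for (m = -1; m <= 1; m++)
                for (n = -1; n <= 1; n++)
                    t[i][j] += back[i+1+m][j+1+n] * filter[m+1][n+1];
        }
}
```

Filling the grid with a constant 25.0 and calling diffuse() leaves every cell at 25.0; introducing a single hot cell and iterating shows the heat spreading to its neighbors.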

In looking at the execution sequence for the functions in this simple program, it can be useful to
consider a graph that shows which functions are called by which other functions. Bearing in mind
that the program is event-driven and so the event callback functions (animate(), display(),
and reshape()) are not called directly by the program, we have the function caller-callee graph
in Figure 0.8.

Note that the graph is really a tree: functions are called only by event callbacks or the myinit()
function, the myinit() function is called only once from main(), and all the event callbacks are
called from the event handler. For most OpenGL programs, this is the general shape of the graph:
a callback function may use several functions, but any function except a callback will only be called
as part of program initialization or from an event callback.

Figure 0.8: the function caller/callee graph for the example program

Now that we have an idea of the geometry pipeline and know what a program can look like, we
can move on to discuss how we specify the viewing and projection environment, how we define
the fundamental geometry for our image, and how we create the image in the display()
function with the environment that we define through the viewing and projection.

OpenGL extensions

In this chapter, and throughout these notes, we take a fairly limited view of the OpenGL graphics
API. Because this is an introductory text, we focus on the basic features of computer graphics and
of the graphics API, so we do not work with most of the advanced features of the system and we
only consider the more straightforward uses of the parts we cover. But OpenGL is capable of very
sophisticated kinds of graphics, both in its original version and in versions that are available for
specific kinds of graphics, and you should know of these because as you develop your graphics
skills, you may find that the original “vanilla” OpenGL that we cover here will not do everything
you want.

Advanced features of OpenGL include a number of special operations to store or manipulate
information on a scene. These include modeling via polygon tessellation, NURBS surfaces, and
defining and applying your own special-purpose transformations; the scissor test and the more
general stencil buffer and stencil test; rendering in feedback mode to get details on what is being
drawn; and facilities for client/server support. Remember that this is a general text, not a detailed
presentation of OpenGL, and be ready to look further (see the references) for more information.

In addition to standard OpenGL, there are a number of extensions that support more specialized
kinds of operations. These include the ARB imaging subset extension for image processing, the
ARB multitexturing extension, vertex shader extensions, and many others. Some of these might
have just the tools you need to do the very special things you want, so it would be useful for you
to keep up to date on them. You can get information on extensions at the standard OpenGL Web
site, http://www.opengl.org.

Summary

In this chapter we have discussed the geometry pipeline and have indicated what each step involves
and how it contributes to creating the final image you are programming. We have also shown how
appearance fits into the geometry pipeline, although it is actually implemented in a separate
pipeline, and how all of this is implemented through a complete sample OpenGL program. In fact,
you actually have a significant tool in this sample program, because it can be modified and adapted
to serve as a basis for a great deal of other graphics programming. We do not have any
programming projects in this chapter, but these will come along quickly and you will be able to use
this sample program to get started on them.

Questions
1. There are other ways to do graphics besides API-based programming. You can use a number
of different modeling, painting, and other end-user tools. Distinguish between API-based
graphics and graphics done with a tool such as Photoshop™ or a commercial paint program.
The sample program in this chapter can give you an idea of how API-based graphics can look,
although it is only a simple program and much more complex programs are discussed in later
chapters.

2. Trace the behavior of the 3D geometry in the sample program through the steps of the geometry
pipeline from the point where you define a unit cube in model space, through the
transformations that place that point into world space, through the viewing transformation that
places the point in 3D eye space, to the projection that places the point in 2D eye space.
Without doing any of the mathematics, work out what kind of changes are made in the points’
coordinates as these operations are performed.

Exercises
3. Compile and execute the full sample program in the chapter so you can become familiar with
the use of your compiler for graphics programming. Exercise the reshape() function in the
code by dragging and resizing the window. As you manipulate the window, change the shape
of the window (make it narrower but not shorter, for example, or make it shorter but not
narrower) and see how the window and image respond.

Experiments
4. There are many ways you can experiment with the full sample program in this chapter. A few
of them include
(a) changing the size and upper left corner coordinates of the window [function main()]
(b) changing the locations of the hot and cold spots in the bar [function myinit()]
(c) changing the way the color of each bar is computed by changing the function that
determines the color [function setColor()] (see the later chapter on color for more on
this topic)
(d) changing the rate at which the image rotates by changing the amount the angle is increased
[function animate()]
(e) changing the values in the filter array to change the model of how heat diffuses in the bar
[function iterationStep()]
(f) changing the way the edge of the bar is treated, so that instead of simply repeating the
values at the edge, you get the values at the opposite edge of the bar, effectively allowing
temperatures to move from one edge to the other as if the bar were a torus [function
iterationStep()]
(g) changing the view of the bar from a perspective view to an orthogonal view [function
reshape()] (you will probably need to look up the details of orthogonal projections in
the appropriate chapter below).
Take as many of them as you can and add appropriate code changes to the code of the previous
exercise and observe the changes in the program behavior that result. Draw as many
conclusions as you can about the role of these various functions in creating the final image.

5. Continuing with the earlier theme of the reshape() function, look at the code of the
reshape() function and think about how you might make it respond differently. The current
version uses the window dimensions w and h in the perspective projection definition to ensure
that the aspect ratio of the original image is preserved, but the window may cut off part of the
image if it is too narrow. You could, for example, make the projection angle increase as the
window becomes narrower. Change the code in reshape() to try to change the behavior in the
window.

6. There are some sample programs available for this book, and there are an enormous number of
OpenGL programs available on the Web. Find several of these and create the graph of function
calls described in this chapter to verify (or refute) the claim made there that functions tend to be
called from either program initialization or a single event callback. What does this tell you about the way
you develop a graphics program with OpenGL? Where do you find most of the user-defined
functions within the program?

Chapter 1: Viewing and Projection
This chapter looks at two important stages of the graphics pipeline, introduced in the previous
chapter, in detail. It presents the fundamental models for viewing and projection and discusses the
operation of each. Viewing is seen in the context of the overall scene that is being built, and the
key information needed to define a view is presented in terms of the scene. Both perspective and
orthogonal (or parallel) projections are discussed, and again the key information needed for each is
presented. The chapter assumes a basic understanding of 2D and 3D geometry and some
familiarity with simple linear mappings.

Besides discussing viewing and projection, this chapter includes some topics that are related to
basic steps in the graphics pipeline. These include the clipping that is performed during the projection
(as well as the concept of clipping in general), defining the screen window on which the
image is presented, and specifying the viewport in the window that is to contain the actual image.
Other topics include double buffering (creating the image in an invisible window and then swapping
it with the visible window) and managing hidden surfaces. Finally, we show how you can create a
stereo view with two images, computed from viewpoints that represent the left and right eyes,
presented in adjacent viewports so they may be fused by someone with appropriate vision skills.

After discussing the pipeline as a general feature of computer graphics, the chapter moves on to
discuss how each stage is created in OpenGL. We discuss the OpenGL functions that allow you to
define the viewing transformation and the orthogonal and perspective projections, and show how
they are used in a program and how they can respond to window manipulation. We also show
how the concepts of clipping, double buffering, and hidden surfaces are implemented, and show
how to implement the stereo viewing described above.

When the reader is finished with this chapter, he or she should be able to choose an appropriate
view and projection for a scene and should be able to define the view and projection and write the
necessary code to implement them in OpenGL. The reader should also understand the function of
double buffering and hidden surfaces in 3D graphics and be able to use them in graphics
programming.

Introduction

We emphasize 3D computer graphics consistently in this book because we believe that computer
graphics should be encountered through 3D processes and that 2D graphics can be considered
effectively as a special case of 3D graphics. But almost all of the viewing technologies that are
readily available to us are 2D—certainly monitors, printers, video, and film—and eventually even
the active visual retina of our eyes presents a 2D environment. So in order to present the images of
the scenes we define with our modeling, we must create a 2D representation of the 3D scenes. As
we saw in the graphics pipeline in the previous chapter, you begin by developing a set of models
that make up the elements of your scene and set up the way the models are placed in the scene,
resulting in a set of objects in a common world space. You then define the way the scene will be
viewed and the way that view is presented on the screen. In this early chapter, we are concerned
with the way we move from the world space to a 2D image with the tools of viewing and
projection.

We set the scene for this process in the last chapter, when we defined the geometry pipeline. We
begin at the point where we have the 3D world coordinates—that is, where we have a complete
scene fully defined in a 3D world. This point comes after we have done the modeling and model
transformations noted in the previous chapter and discussed in more detail in the two chapters that
follow this one. This chapter is about creating a view of that scene in our display space of a
computer monitor, a piece of film or video, or a printed page, whatever we want. To remind
ourselves of the steps in this process, the geometry pipeline (without the modeling stage) is again
shown in Figure 1.1.
3D World Coordinates -> (Viewing Transformation) -> 3D Eye Coordinates -> (3D Clipping) ->
3D Eye Coordinates -> (Projection) -> 2D Eye Coordinates ->
(Window-to-Viewport Mapping) -> 2D Screen Coordinates

Figure 1.1: the geometry pipeline for creating an image of a scene

Let’s consider an example of a world space and look at just what it means to have a view as a
presentation of that space. One of the author’s favorite places is Yosemite National Park, which is
a wonderful example of a 3D world. Certainly there is a basic geometry in the park, made up of
stone, wood, and water; this geometry can be seen from a number of points. In Figure 1.2 we see
the classic piece of Yosemite geometry, the Half Dome monolith, from below in the valley and
from above at Glacier Point. This gives us an excellent example of two views of the same
geometry.

Figure 1.2: two photographs of Half Dome from different positions

If you think about this area shown in these photographs, we can see the essential components of
viewing. First, you notice that your view depends on where you are standing. If you are
standing on the valley floor, you see the face of the monolith in the classic view; if you are
standing on the rim of Yosemite Valley at about the same height as Half Dome, you get a view that
shows the profile of the rock. So your view depends on your position, which we call your eye
point. Second, the view also depends on the point you are looking at, which we will call the view
reference point. Both photos look generally towards the Half Dome monolith, or more
specifically, towards a point in space behind the dome. This makes a difference not only in the
view of the dome, but in the view of the region around the dome. In the classic Half Dome view
from the valley, if you look off to the right you see the south wall of the valley; in the view from
Glacier Point, if you look off to the right you see Vernal and Nevada falls on the Merced River
and, farther to the right, the high Sierra in the south of the park. The view also depends on the
breadth of field of your view, whether you are looking at a wide part of the scene or a narrow part;
again, the photograph at the left is a view of just Half Dome, while the one at the right is a
panoramic view that includes the dome. While both photos are essentially square, you can
visualize the left-hand photo as part of a photo that’s vertical in layout while the right-hand photo
looks more like it would come from a horizontal layout; this represents an aspect ratio for the image
that can be part of its definition. Finally, although this may not be obvious at first because our
minds process images in context, the view depends on your sense of the up direction in the scene:
whether you are standing with your head upright or tilted (this might be easier to grasp if you think
of the view as being defined by a camera instead of by your vision; it’s clear that if you tilt a
camera at a 45° angle you get a very different photo than one that’s taken by a horizontal or vertical
camera.) The world is the same in any case, but the determining factors for the image are where
your eye is, the point you are looking toward, the breadth of your view, the aspect ratio of your
view, and the way your view is tilted. All these will be accounted for when you define an image in
computer graphics.

Once you have determined your view, it must then be translated into an image that can be presented
on your computer monitor. You may think of this in terms of recording an image on a digital
camera, because the result is the same: each point of the view space (each pixel in the image) must
be given a specific color. Doing that with the digital camera involves only capturing the light that
comes through the lens to that point in the camera’s sensing device, but doing it with computer
graphics requires that we calculate exactly what will be seen at that particular point when the view
is presented. We must define the way the scene is transformed into a two-dimensional space,
which involves a number of steps: taking into account all the questions of what parts are in front of
what other parts, what parts are out of view from the camera’s lens, and how the lens gathers light
from the scene to bring it into the camera. The best way to think about the lens is to compare two
very different kinds of lenses: one is a wide-angle lens that gathers light in a very wide cone, and
the other is a high-altitude photography lens that gathers light only in a very tight cylinder and
processes light rays that are essentially parallel as they are transferred to the sensor. Finally, once
the light from the continuous world comes into the camera, it is recorded on a digital sensor that
only captures a discrete set of points.

This model of viewing is paralleled quite closely by a computer graphics system, and it follows the
graphics pipeline that we discussed in the last chapter. You begin your work by modeling your
scene in an overall world space (you may actually start in several modeling spaces, because you
may model the geometry of each part of your scene in its own modeling space where it can be
defined easily, then place each part within a single consistent world space to define the scene).
This is very different from the viewing we discuss here but is covered in detail in the next chapter.
The fundamental operation of viewing is to define an eye within your world space that represents
the view you want to take of your modeling space. Defining the eye implies that you are defining a
coordinate system relative to that eye position, and you must then transform your modeling space
into a standard form relative to this coordinate system by defining, and applying, a viewing
transformation. The fundamental operation of projection, in turn, is to define a plane within
3-dimensional space, define a mapping that projects the model into that plane, and display that plane
in a given space on the viewing surface (we will usually think of a screen, but it could be a page, a
video frame, or a number of other spaces).

We will think of the 3D space we work in as the traditional X -Y -Z Cartesian coordinate space,
usually with the X - and Y -axes in their familiar positions and with the Z-axis coming toward the
viewer from the X -Y plane. This is a right-handed coordinate system, so-called because if you
orient your right hand with your fingers pointing from the X -axis towards the Y -axis, your thumb
will point towards the Z-axis. This orientation is commonly used for modeling in computer
graphics because most graphics APIs define the plane onto which the image is projected for
viewing as the X -Y plane, and project the model onto this plane in some fashion along the Z-axis.
The mechanics of the modeling transformations, viewing transformation, and projection are
managed by the graphics API, and the task of the graphics programmer is to provide the API with
the correct information and call the API functionality in the correct order to make these operations
work. We will describe the general concepts of viewing and projection below and will then tell
you how to specify the various parts of this process to OpenGL.
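The right-handed orientation described above is easy to verify numerically: in a right-handed system, the cross product of the X-axis direction with the Y-axis direction gives the Z-axis direction. Here is a small sketch in plain Python (illustrative only, not part of any graphics API):

```python
def cross(a, b):
    """Cross product of two 3D vectors given as (x, y, z) tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

x_axis = (1.0, 0.0, 0.0)
y_axis = (0.0, 1.0, 0.0)

# In a right-handed system, X cross Y points along +Z, toward the viewer.
print(cross(x_axis, y_axis))  # -> (0.0, 0.0, 1.0)
```

This is exactly the thumb-and-fingers rule stated above, written as arithmetic.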

Finally, it is sometimes useful to “cut away” part of an image so you can see things that would
otherwise be hidden behind some objects in a scene. We include a brief discussion of clipping
planes, a technique for accomplishing this action, because the system must clip away parts of the
scene that are not visible in the final image.

Fundamental model of viewing

As a physical model, we can think of the viewing process in terms of looking through a rectangular
frame that is held in front of your eye. You can move yourself around in the world, setting your
eye into whatever position and orientation from which you wish to see the scene. This defines your
viewpoint and view reference point. The shape of the frame and the orientation you give it
determine the aspect ratio and the up direction for the image. Once you have set your position in
the world, you can hold up the frame to your eye and this will set your projection; by changing the
distance of the frame from the eye you change the breadth of field for the projection. Between
these two operations you define how you see the world in perspective through the frame. And
finally, if you put a piece of transparent material that is ruled in very small squares behind the
frame (instead of your eye) and you fill in each square to match the brightness you see in the
square, you will create a copy of the image that you can take away from the original location. Of
course, you only have a perspective projection instead of an orthogonal projection, but this model
of viewing is a good place to start in understanding how viewing and projection work.

As we noted above, the goal of the viewing process is to rearrange the world so it looks as it
would if the viewer’s eye were in a standard position, depending on the API’s basic model. When
we define the eye location, we give the API the information it needs to do this rearrangement. In
the next chapter on modeling, we will introduce the important concept of the scene graph, which
will integrate viewing and modeling. Here we give an overview of the viewing part of the scene
graph.

Figure 1.3: the eye coordinate system within the world coordinate system

The key point is that your view is defined by the location, direction, orientation, and field of view
of the eye as we noted above. To understand this a little more fully, consider the situation shown
in Figure 1.3. Here we have a world coordinate system that is oriented in the usual way, and
within this world we have both a (simple) model and an eyepoint. At the eyepoint we have the
coordinate system that is defined by the eyepoint-view reference point-up information that is
specified for the view, so you may see the eyepoint coordinates in context. From this, you should
try to visualize how the model will look once it is displayed with the view.

In effect, you have defined a coordinate system within the world space relative to the eye. There
are many ways to create this definition, but basically they all involve specifying three pieces of data
in 3D space. Once this eye coordinate system is defined, we can apply an operation that changes
the coordinates of everything in the world into equivalent representations in the eye coordinate
system. This change of coordinates is a straightforward mathematical operation, performed by
creating a change-of-basis matrix for the new system and then applying it to the world-space
geometry. The transformation places the eye at the origin, looking along the Z-axis, and with the
Y -axis pointed upwards; this view is similar to that shown in Figure 1.4. The specifications allow
us to define the viewing transformation needed to move from the world coordinate system to the
eye coordinate system. Once the eye is in standard position, and all your geometry is adjusted in
the same way, the system can easily move on to project the geometry onto the viewing plane so the
view can be presented to the user.

In the next chapter we will discuss modeling, and part of that process is using transformations to
place objects that are defined in one position into different positions and orientations in world
space. This can be applied to defining the eye point, and we can think of starting with the eye in
standard position and applying transformations to place the eye where you want it. If we do that,
then the viewing transformation is defined by computing the inverse of the transformation that
placed the eye into the world. (If the concept of computing the inverse seems difficult, simply
think of undoing each of the pieces of the transformation; we will discuss this more in the chapter
on modeling).
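The idea of undoing the eye placement can be seen in the simplest possible case. Suppose the eye was placed in the world by a pure translation to the point (2, 3, 5); the viewing transformation is then the translation by (−2, −3, −5) that inverts it. The sketch below (plain Python; the row-major 4×4 matrix convention is our own assumption for illustration) verifies that composing the two gives the identity:

```python
def translate(tx, ty, tz):
    """4x4 translation matrix (row-major; points treated as column vectors)."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def mat_mul(a, b):
    """Product of two 4x4 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

place_eye = translate(2, 3, 5)     # transformation that moves the eye out into the world
viewing = translate(-2, -3, -5)    # its inverse: moves the world so the eye sits at the origin

identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
# The viewing transformation exactly undoes the placement of the eye.
assert mat_mul(viewing, place_eye) == identity
```

For a general placement involving rotations as well, the same relationship holds: the viewing transformation is the inverse of the whole composite, which is why "undoing each of the pieces" works.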

Once you have organized the viewing information as we have described, you must organize the
information you send to the graphics system to define the way your scene is projected to the
screen. The graphics system provides ways to define the projection and, once the projection is
defined, the system will carry out the manipulations needed to map the scene to the display space.
These operations will be discussed later in this chapter.

Definitions

There are a small number of things that you must consider when thinking of how you will view
your scene. These are independent of the particular API or other graphics tools you are using, but
later in the chapter we will couple our discussion of these points with a discussion of how they are
handled in OpenGL. These considerations are:
• Your world must be seen, so you need to say how the view is defined in your model including
the eye position, view direction, field of view, and orientation. This defines the viewing
transformation that will be used to move from 3D world space to 3D eye space.
• In general, your world must be seen on a 2D surface such as a screen or a sheet of paper, so
you must define how the 3D world is projected into a 2D space. This defines the 3D clipping
and projection that will take the view from 3D eye space to 2D eye space.
• The region of the viewing device where the image is to be visible must be defined. This is the
window, which should not be confused with the concept of window on your screen, though
they often will both refer to the same space.
• When your world is seen in the window on the 2D surface, it must be seen at a particular place,
so you must define the location where it will be seen. This defines the location of the viewport
within the overall 2D viewing space and the window-to-viewport mapping that takes the 2D
eye space to screen space.

We will call these three things setting up your viewing environment, defining your projection, and
defining your window and viewport, respectively, and they are discussed in that order in the
sections below.

Setting up the viewing environment

When you define a scene, you will want to do your work in the most natural world that would
contain the scene, which we called the model space in the graphics pipeline discussion of the
previous chapter. Objects defined in their individual model spaces are then placed in the world
space with modeling transformations, as described in the next chapter on modeling. This world
space is then transformed by the viewing transformation into a 3D space with the eye in standard
position. To define the viewing transformation, you must set up a view by putting your eyepoint
in the world space. This world is defined by the coordinate space you assumed when you modeled
your scene as discussed earlier. Within that world, you define four critical components for your
eye setup: where your eye is located, what point your eye is looking towards, how wide your field
of view is, and what direction is vertical with respect to your eye. When these are defined to your
graphics API, the geometry in your modeling is transformed with the viewing transformation to
create the view as it would be seen with the environment that you defined.

A graphics API defines the computations that transform your geometric model as if it were defined
in a standard position so it could be projected in a standard way onto the viewing plane. Each
graphics API defines this standard position and has tools to create the transformation of your
geometry so it can be viewed correctly. For example, OpenGL defines its viewing to take place in
a right-handed coordinate system and transforms all the geometry in your scene (and we do mean
all the geometry, including lights and directions, as we will see in later chapters) to place your eye
point at the origin, looking in the negative direction along the Z-axis. The eye-space orientation is
illustrated in Figure 1.4.
[Figure: the X-, Y-, and Z-axes, with the eye at the origin looking along the Z-axis in the negative direction.]

Figure 1.4: the standard OpenGL viewing model

Of course, no graphics API assumes that you can only look at your scenes with this standard view
definition. Instead, you are given a way to specify your view very generally, and the API will
convert the geometry of the scene so it is presented with your eyepoint in this standard position.
This conversion is accomplished through the viewing transformation that is defined from your
view definition as we discussed earlier.

The information needed to define your view includes your eye position (its (x, y, z) coordinates),
the direction your eye is facing or the coordinates of a point toward which it is facing, and the
direction your eye perceives as “up” in the world space. For example, the standard view that
would be used unless you define another one has the position at the origin, or (0, 0, 0), the view
direction or the “look-at” point coordinates as (0, 0, –1), and the up direction as (0, 1, 0). You
will probably want to identify a different eye position for most of your viewing, because this is
very restrictive and you probably will not want to define your whole viewable world as lying
somewhere behind the X -Y plane. Your graphics API will give you a function that allows you to
set your eye point as you desire.

The viewing transformation, then, is the transformation that takes the scene as you define it in
world space and aligns the eye position with the standard model, giving you the eye space we
discussed in the previous chapter. The key actions that the viewing transformation accomplishes
are to rotate the world to align your personal up direction with the direction of the Y -axis, to rotate
it again to put the look-at direction in the direction of the negative Z-axis (or to put the look-at point
in space so it has the same X - and Y -coordinates as the eye point and a Z-coordinate less than the
Z-coordinate of the eye point), to translate the world so that the eye point lies at the origin, and
finally to scale the world so that the look-at point or look-at vector has the value (0, 0, –1). This is
a very interesting transformation because what it really does is to invert the set of transformations
that would move the eye point from its standard position to the position you define with your API
function as above. This is very important in the modeling chapter below, and is discussed in some
depth later in this chapter in terms of defining the view environment for the OpenGL API.
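As a sketch of how such a viewing transformation might be built from the eye point, look-at point, and up direction, here is an illustrative look-at construction in plain Python. It mirrors in spirit what a function such as OpenGL's gluLookAt does internally, though the names and conventions here are our own assumptions: we build an orthonormal eye-space basis and combine the change-of-basis rotation with a translation that moves the eye to the origin.

```python
import math

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]
def normalize(v):
    n = math.sqrt(dot(v, v))
    return [c / n for c in v]

def look_at(eye, at, up):
    """Viewing transformation as a 4x4 row-major matrix: the rows hold the
    eye-space basis vectors, and the last column translates the eye to the origin."""
    n = normalize(sub(eye, at))   # eye-space Z-axis: from the look-at point back toward the eye
    u = normalize(cross(up, n))   # eye-space X-axis: perpendicular to both up and n
    v = cross(n, u)               # eye-space Y-axis: the exact, recomputed up direction
    return [u + [-dot(u, eye)],
            v + [-dot(v, eye)],
            n + [-dot(n, eye)],
            [0.0, 0.0, 0.0, 1.0]]
```

With this matrix, the eye point itself maps to the origin and the look-at point maps onto the negative Z-axis, which is exactly the standard position described above.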

Defining the projection

The viewing transformation above defines the 3D eye space, but that cannot be viewed on our
standard devices. In order to view the scene, it must be mapped to a 2D space that has some
correspondence to your display device, such as a computer monitor, a video screen, or a sheet of
paper. The technique for moving from the three-dimensional world to a two-dimensional world
uses a projection operation that you define based on some straightforward fundamental principles.

When you (or a camera) view something in the real world, everything you see is the result of light
that comes to the retina (or the film) through a lens that focuses the light rays onto that viewing
surface. This process is a projection of the natural (3D) world onto a two-dimensional space.
These projections in the natural world operate when light passes through the lens of the eye (or
camera), essentially a single point, and have the property that parallel lines going off to infinity
seem to converge at the horizon so things in the distance are seen as smaller than the same things
when they are close to the viewer. This kind of projection, where everything is seen by being
projected onto a viewing plane through or towards a single point, is called a perspective projection.
Standard graphics references show diagrams that illustrate objects projected to the viewing plane
through the center of view; the effect is that an object farther from the eye is seen as smaller in the
projection than the same object closer to the eye.

On the other hand, there are sometimes situations where you want to have everything of the same
size show up as the same size on the image. This is most common where you need to take careful
measurements from the image, as in engineering drawings. An orthographic projection
accomplishes this by projecting all the objects in the scene to the viewing plane by parallel lines.
For orthographic projections, objects that are the same size are seen in the projection with the same
size, no matter how far they are from the eye. Standard graphics texts contain diagrams showing
how objects are projected by parallel lines to the viewing plane.

In Figure 1.5 we show two images of a wireframe house from the same viewpoint. The left-hand
image of the figure is presented with a perspective projection, as shown by the difference in the
apparent sizes of the front and back ends of the building, and by the way that the lines outlining the
sides and roof of the building get closer as they recede from the viewer. The right-hand image of
the figure is shown with an orthogonal projection, as shown by the equal sizes of the front and
back ends of the building and the parallel lines outlining the sides and roof of the building. The
differences between these two images are admittedly modest, but you should look for the
differences noted above. It could be useful to use both projections on some of your scenes and
compare the results to see how each of the projections works in different situations.

Figure 1.5: perspective image (left) and orthographic image (right) of a simple model

These two projections operate on points in 3D space in rather straightforward ways. For the
orthographic projection, all points are projected onto the (X ,Y )-plane in 3D eye space by simply
omitting the Z-coordinate. Each point in 2D eye space is the image of a line parallel to the Z-axis,
so the orthographic projection is sometimes called a parallel projection. For the perspective
projection, any point is projected onto the plane Z=1 in 3D eye space at the point where the line
from the point to the origin in 3D eye space meets that plane. Because of similar triangles, if the
point (x, y, z) is projected to the point (x′, y′), we must have x′ = x/z and y′ = y/z. Here each
point in 2D eye space is the image of a line through that point and the origin in 3D eye space.

After a projection is applied, your scene is mapped to 2D eye space, as we discussed in the last
chapter. However, the z-values in your scene are not lost. As each point is changed by the
projection transformation, its z-value is retained for later computations such as depth tests or
perspective-corrected textures. In some APIs such as OpenGL, the z-value is not merely retained
but its sign is changed so that positive z-values will go away from the origin in a left-handed way.
This convention allows the use of positive numbers in depth operations, which makes them more
efficient.

View Volumes

A projection is often thought of in terms of its view volume, the region of space that is to be visible
in the scene after the projection. With any projection, the fact that the projection creates an image
on a rectangular viewing device implicitly defines a set of boundaries for the left, right, top, and
bottom sides of the scene; these correspond to the left, right, top, and bottom of the viewing space.
In addition, the conventions of creating images exclude objects that are too close to or too far
from the eye point, and these limits give us the idea of front and back sides of the region of the
scene that can be viewed. Overall, then, the projection defines a region in three-dimensional space
that will contain all the parts of the scene that can be viewed. This region is called the viewing
volume for the projection. The viewing volumes for the perspective and orthogonal projections are
shown in Figure 1.6, with the eye point at the origin; this region is the space within the rectangular
volume (left, for the orthogonal projection) or the pyramid frustum (right, for the perspective
transformation). Note how these view volumes match the definitions of the regions of 3D eye
space that map to points in 2D eye space, and note that each is presented in the left-handed viewing
coordinate system described in Figure 1.4.

[Figure: two view volumes drawn in X-Y-Z axes, each bounded in depth by the planes Znear and Zfar: a rectangular box and a pyramid frustum.]

Figure 1.6: the viewing volumes for the orthogonal (left) and perspective (right) projections

While the perspective view volume is defined only in a specific position relative to the eye, the
orthogonal view volume may be defined wherever you need it because, being independent of the
calculation that makes the world appear from a particular point of view, an orthogonal view can
take in any part of space. This allows you to set up an orthogonal view of any part of your space,
or to move your view volume around to view any part of your model. In fact, this freedom to
place your viewing volume for the orthographic projection is not particularly important because
you could always use simple translations to center the region you will see.
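As an illustration of how freely the orthographic volume can be placed, the sketch below (plain Python, with the canonical [−1, 1] cube as an assumed target, similar in spirit to what OpenGL's glOrtho sets up) maps whatever axis-aligned box you choose onto the same standard volume:

```python
def ortho_project(p, left, right, bottom, top, near, far):
    """Map a point inside the box [left,right] x [bottom,top] x [near,far]
    to the canonical [-1,1] cube. Because the box can be placed anywhere,
    the orthographic view volume is freely movable."""
    x, y, z = p
    return (2.0 * (x - left) / (right - left) - 1.0,
            2.0 * (y - bottom) / (top - bottom) - 1.0,
            2.0 * (z - near) / (far - near) - 1.0)

# The center of any chosen box lands at the center of the canonical volume.
print(ortho_project((1.0, 1.0, 1.0), 0.0, 2.0, 0.0, 2.0, 0.0, 2.0))  # -> (0.0, 0.0, 0.0)
```

Real APIs add sign conventions for the depth axis that we omit here; the point is only that the box boundaries are parameters you choose.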

One of the reasons we pay attention to the view volume is that only objects that are inside the view
volume for your projection will be displayed; anything else in the scene will be clipped, that is, be
identified in the projection process as invisible, and thus will not be handled further by the graphics
system. Any object that is partly within and partly outside the viewing volume will be clipped so
that precisely those parts inside the volume are seen, and we discuss the general concept and
process of clipping later in this chapter. The sides of the viewing volume correspond to the
projections of the sides of the rectangular space that is to be visible, but the front and back of the
volume are less obvious—they correspond to the nearest and farthest space that is to be visible in
the projection. These allow you to ensure that your image presents only the part of space that you
want, and prevent things that might lie behind your eye from being projected into the visible space.

Calculating the perspective projection

The perspective projection is quite straightforward to compute, and although you do not need to
carry this out yourself we will find it very useful later on to understand how it works. Given the
general setup for the perspective viewing volume, let’s look at a 2D version in Figure 1.7. Here
[Figure: a point (X,Y,Z) projected along the line to the eye at (0,0,0), crossing the plane Z = 1 at height Y′.]

Figure 1.7: the perspective projection calculation

we see that Y/Y′ = Z/1, or simplifying, Y′ = Y/Z. Thus with the conventions we have defined, the
perspective projection defined on 3D eye space simply divides the original X and Y values by Z. If
we write this projection as a matrix, we have:

	| 1/Z   0    0 |
	|  0   1/Z   0 |
	|  0    0    1 |

This matrix represents a transformation called the perspective transformation, but because this
matrix involves a variable in the denominator this transformation is not a linear mapping. That will
have some significance later on when we realize that we must perform perspective corrections on
some interpolations of object properties. Note here that we do not make any change in the value of
Z, so that if we have the transformed values of X′ and Y′ and keep the original value of Z, we can
reconstruct the original values as X = X′*Z and Y = Y′*Z. The perspective projection then is
done by applying the perspective transformation and using only the values of X′ and Y′ as output.
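The perspective transformation and the reconstruction it permits can be sketched directly (plain Python, illustrating the arithmetic rather than any API call):

```python
def perspective_transform(p):
    """Apply the perspective transformation: divide X and Y by Z, retaining Z."""
    x, y, z = p
    return (x / z, y / z, z)

def reconstruct(q):
    """Recover the original point, which is possible because Z was retained."""
    xp, yp, z = q
    return (xp * z, yp * z, z)

p = (4.0, 2.0, 2.0)
q = perspective_transform(p)
print(q)                # -> (2.0, 1.0, 2.0)
print(reconstruct(q))   # -> (4.0, 2.0, 2.0), the original point
```

The division by Z is why the transformation is not linear, and why quantities interpolated across a projected polygon may need perspective correction.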

Clipping on the view volume

We noted just a bit earlier that parts of an image outside the view volume are clipped, or removed
from the active scene, before the scene is displayed. Clipping for an orthogonal projection is quite
straightforward because the boundary planes are defined by constant values of single coordinates:
X = Xleft, X = Xright, Y = Ybottom, Y = Ytop, Z = Znear, and Z = Zfar. Clipping a line segment
against any of these planes checks to see whether the line crosses the plane and, if it does, replaces
the entire line segment with the line segment that does not include the part outside the volume.
Algorithms for clipping are very well known and we do not include them here because we do not
want to distract the reader from the ideas of the projection.

On the other hand, clipping on the view volume for the perspective projection would require doing
clipping tests against the side planes that slope, and this is more complex. We can avoid this by
applying a bit of cleverness: apply the perspective transformation before you carry out the
clipping. Because each of the edges of the perspective view volume projects into a single point,
each edge is transformed by the perspective transformation into a line parallel to the Z-axis. Thus
the viewing volume for the perspective projection is transformed into a rectangular volume and the
clipping can be carried out just as it was for the orthogonal projection.
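A quick numeric check illustrates why this trick works. Take points along the sloping frustum edge where x = z and y = z: after the perspective transformation they all share the same x′ and y′ values, so the edge has become a line parallel to the Z-axis and the volume's sides are now axis-aligned planes. A sketch:

```python
def perspective_transform(p):
    """Divide X and Y by Z, keeping Z for later depth computations."""
    x, y, z = p
    return (x / z, y / z, z)

# Points along the sloping frustum edge x = z, y = z, at several depths.
edge_points = [(z, z, z) for z in (1.0, 2.0, 5.0)]
transformed = [perspective_transform(p) for p in edge_points]

# Every transformed point has x' = 1 and y' = 1: the sloping edge has become a
# line parallel to the Z-axis, so the transformed volume has axis-aligned sides.
print(transformed)  # -> [(1.0, 1.0, 1.0), (1.0, 1.0, 2.0), (1.0, 1.0, 5.0)]
```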

Defining the window and viewport

The scene as presented by the projection is still in 2D eye space, and the objects are all defined by
real numbers. However, the display space is discrete, so the next step is a conversion of the
geometry in 2D eye coordinates to discrete coordinates. This requires identifying discrete screen
points to replace the real-number eye geometry points, and introduces some sampling issues that
must be handled carefully, but graphics APIs do this for you. The actual display space used
depends on the window and the viewport you have defined for your image.

To a graphics system, a window is a rectangular region in your viewing space in which all of the
drawing from your program will be done. It is usually defined in terms of the physical units of the
drawing space. The window will be placed in your overall display device in terms of the device’s
coordinate system, which will vary between devices and systems. The window itself will have its
own coordinate system, and the window space in which you define and manage your graphics
content will be called screen space, and is identified with integer coordinates. The smallest
displayed unit in this space will be called a pixel, a shorthand for picture element. Note that the
window for drawing is a distinct concept from the window in a desktop display window system,
although the drawing window may in fact occupy a window on the desktop; we will be
consistently careful to reserve the term window for the graphic display. While the window is
placed in screen space, within the window itself—where we will do all our graphics work—we
have a separate coordinate system that also has integer coordinates that represent pixel coordinates
within the window itself. We will consistently think of the display space in terms of window
points and window coordinates because they are all that matter to our image.

You will recall that we have a final transformation in the graphics pipeline from the 2D eye
coordinate system to the 2D screen coordinate system. In order to understand that transformation,
you need to understand the relation between points in two corresponding rectangular spaces. In
this case, the rectangle that describes the scene to the eye is viewed as one space, and the rectangle
on the screen where the scene is to be viewed is presented as another. The same processes apply
to other situations that are particular cases of corresponding points in two rectangular spaces that
we will see later, such as the relation between the position on the screen where the cursor is when a
mouse button is pressed, and the point that corresponds to this in the viewing space, or points in
the world space and points in a texture space.

[Figure: two rectangles; the left is bounded by XMIN, XMAX, YMIN, YMAX and contains the point (x,y); the right is bounded by L, R, B, T and contains the point (u,v).]

Figure 1.8: correspondences between points in two rectangles

In Figure 1.8, we show two rectangles with boundaries and points named as shown. In this
example, we assume that the lower left corner of each rectangle has the smallest coordinate values
in the rectangle. So the right-hand rectangle has a smallest X -value of L and a largest X -value of
R, and a smallest Y -value of B and a largest Y -value of T, for example (think left, right, top, and
bottom in this case).

With the names that are used in the figures, we have the proportions
	(x − XMIN) : (XMAX − XMIN) :: (u − L) : (R − L)
	(y − YMIN) : (YMAX − YMIN) :: (v − B) : (T − B)
from which we can derive the equations:
	(x − XMIN)/(XMAX − XMIN) = (u − L)/(R − L)
	(y − YMIN)/(YMAX − YMIN) = (v − B)/(T − B)
and finally these two equations can be solved for the variables of either point in terms of the
other, giving x and y in terms of u and v as:
	x = XMIN + (u − L)(XMAX − XMIN)/(R − L)
	y = YMIN + (v − B)(YMAX − YMIN)/(T − B)
or the dual equations that solve for (u,v) in terms of (x, y).
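These equations translate directly into code. The sketch below (plain Python; the function names are our own) implements the mapping, with a variant that truncates to integer pixel coordinates for screen space:

```python
def map_point(u, v, L, R, B, T, XMIN, XMAX, YMIN, YMAX):
    """Map (u, v) in the rectangle [L,R] x [B,T] to the corresponding (x, y)
    in the rectangle [XMIN,XMAX] x [YMIN,YMAX]."""
    x = XMIN + (u - L) * (XMAX - XMIN) / (R - L)
    y = YMIN + (v - B) * (YMAX - YMIN) / (T - B)
    return (x, y)

def map_to_pixel(u, v, L, R, B, T, XMIN, XMAX, YMIN, YMAX):
    """Same mapping into 2D screen space: truncate to integer pixel coordinates."""
    x, y = map_point(u, v, L, R, B, T, XMIN, XMAX, YMIN, YMAX)
    return (int(x), int(y))

# Map the center of the unit square into a 640 x 480 pixel window.
print(map_to_pixel(0.5, 0.5, 0.0, 1.0, 0.0, 1.0, 0, 640, 0, 480))  # -> (320, 240)
```

The same two functions, with different rectangle boundaries plugged in, serve for window-to-viewport mapping, mouse-position conversion, and texture coordinates.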

This discussion was framed in very general terms with the assumption that all our values are real
numbers, because we were taking arbitrary ratios and treating them as exact values. This would
hold if we were talking about 2D eye space, but a moment’s thought will show that these relations
cannot hold in general for 2D screen space because integer ratios are only rarely exact. In the case
of interest to us, one of these is 2D eye space and one is 2D screen space, so we must stop to ask
how to modify our work for that case. To use the equations above for x and y, we regard the
coordinates of the right-hand rectangle as real numbers and those of the left-hand rectangle as
integers; we can then compute exact values for the ratios (u − L)/(R − L) and (v − B)/(T − B),
calculate real values for x and y, and truncate these to get the desired integer values. This means
that we take the view that an integer coordinate pair represents the unit
square with that pair at the lower left of the square.

We noted that the window has a separate coordinate system, but we were not more specific about
it. Your graphics API may use either of two conventions for window coordinates. The window
may have its origin, or (0,0) value, at either the upper left or lower left corner. In the discussion
above, we assumed that the origin was at the lower left because that is the standard mathematical
convention, but graphics hardware often puts the origin at the top left because that corresponds to
the lowest address of the graphics memory. If your API puts the origin at the upper left, you can
make a simple change of variable, Y′ = YMAX − Y; using the Y′ values instead of Y will put
you back into the situation described in the figure.

When you create your image, you can choose to present it in a distinct sub-rectangle of the window
instead of the entire window, and this part is called a viewport. A viewport is a rectangular region
within that window to which you can restrict your image drawing. In any window or viewport,
the ratio of its width to its height is called its aspect ratio. A window can have many viewports,
even overlapping ones if needed to achieve a particular effect, and each viewport can have its own
image. Mapping an image to a viewport is done with exactly the same calculations we described
above, except that the boundaries of the drawing area are the viewport’s boundaries instead of the
window’s. The default behavior of most graphics systems is to use the entire window for the
viewport. A viewport is usually defined in the same terms as the window it occupies, so if the
window is specified in terms of physical units, the viewport probably will be also. However, a
viewport can also be defined in terms of its size relative to the window, in which case its boundary
points will be calculated from the window’s.

If your graphics window is presented in a windowed desktop system, you may want to be able to
manipulate your graphics window in the same way you would any other window on the desktop.
You may want to move it, change its size, and click on it to bring it to the front if another window
has been previously chosen as the top window. This kind of window management is provided by
the graphics API in order to make the graphics window behavior on your system compatible with
the behavior on all the other kinds of windows available. When you manipulate the desktop
window containing the graphics window, the contents of the window need to be managed to
maintain a consistent view. The graphics API tools will give you the ability to manage the aspect
ratio of your viewports and to place your viewports appropriately within your window when that
window is changed. If you allow the aspect ratio of a viewport to become different from what it
was when the viewport was defined, you will see that the image in the viewport appears distorted,
because the program is still trying to draw to the originally-defined viewport.

A single program can manage several different windows at once, drawing to each as needed for the
task at hand. Individual windows will have different identifiers, probably returned when the
window is defined, and these identifiers are used to specify which window will get the drawing
commands as they are given. Window management can be a significant problem, but most
graphics APIs have tools to manage this with little effort on the programmer’s part, producing the
kind of window you are accustomed to seeing in a current computing system—a rectangular space
that carries a title bar and can be moved around on the screen and reshaped. This is the space in
which all of your graphical images will be seen. Of course, other graphical outputs such as video will
handle windows differently, usually treating the entire output frame as a single window without
any title or border.

Some aspects of managing the view

Once you have defined the basic features for viewing your model, there are a number of other
things you can consider that affect how the image is created and presented. We will talk about
many of these over the next few chapters, but here we talk about hidden surfaces, clipping planes,
and double buffering.

Hidden surfaces

Most of the things in our world are opaque, so we only see the things that are nearest to us as we
look in any direction. This obvious observation can prove challenging for computer-generated
images, however, because a graphics system simply draws what we tell it to draw in the order we
tell it to draw them. In order to create images that have the simple “only show me what is nearest”
property we must use appropriate tools in viewing our scene.

Most graphics systems have a technique that uses the geometry of the scene in order to decide what
objects are in front of other objects, and can use this to draw only the part of the objects that are in
front as the scene is developed. This technique is generally called Z-buffering because it uses
information on the z-coordinates in the scene, as shown in Figure 1.4. In some systems it goes by
other names; for example, in OpenGL this is called the depth buffer. This buffer holds the z-value
of the nearest item in the scene for each pixel in the scene, where the z-values are computed from
the eye point in eye coordinates. This z-value is the depth value after the viewing transformation
has been applied to the original model geometry.

This depth value is not merely computed for each vertex defined in the geometry of a scene. When
a polygon is processed by the graphics pipeline, an interpolation process is applied as described in
the interpolation discussion in the chapter on the rendering pipeline. If a perspective projection is
selected, the interpolation can take perspective into account as described there. This process will
define a z-value, which is also the distance of that point from the eye in the z-direction, for each
pixel in the polygon as it is processed. This allows a comparison of the z-value of the pixel to be
plotted with the z-value that is currently held in the depth buffer. When a new point is to be
plotted, the system first makes this comparison to check whether the new pixel is closer to the
viewer than the current pixel in the image buffer and if it is, replaces the current point by the new
point. This is a straightforward technique that can be managed in hardware by a graphics board or
in software by simple data structures. There is a subtlety in this process for some graphics APIs
that should be understood, however. Because it is more efficient to compare integers than floating-
point numbers, the depth values in the buffer may be kept as unsigned integers, scaled to fit the
range between the near and far planes of the viewing volume with 0 as the front plane. This
integer conversion can cause a phenomenon called “Z-fighting” because of aliasing when floating-
point numbers are converted to integers. This can cause the depth buffer to show inconsistent
values for things that are supposed to be at equal distances from the eye. Integer conversion is
particularly a problem if the near and far planes are far apart, because in that case the integer depth
is coarser than if the planes are close. This problem is best controlled by trying to fit the near and
far planes of the view as closely as possible to the actual items being displayed. This makes each
integer value represent a smaller real number and so there is less likelihood of two real depths
getting the same integer representation.

There are other techniques for ensuring that only the genuinely visible parts of a scene are
presented to the viewer, however. If you can determine the depth (the distance from the eye) of
each object in your model, then you may be able to sort a list of the objects so that you can draw
them from back to front—that is, draw the farthest first and the nearest last. In doing this, you will
replace anything that is hidden by other objects that are nearer, resulting in a scene that shows just
the visible content. This is a classical technique called the painter’s algorithm (because it mimics
the way a painter could create an image using opaque paints) that was widely used in earlier
systems with more limited graphics capabilities.

3/15/03 Page 1.13


Random documents with unrelated
content Scribd suggests to you:
CHAPTER II

THE WRECKED SCHOONER

The great boat lay almost against the road. As the buckboard sped
by she loomed above it in the gathering dusk, menacing and
mountainous. Her broken bowsprit swung over the wagon and
creaked in the breeze that had just sprung up. Directly below the
bowsprit was a carved figurehead, larger than life and clearly
outlined against the dull gray of the ship. Sea and rain had washed
away the figure’s paint and worn the wood bone-white. It
represented a demon nailed to the battered prow, its wide ugly grin
and blank eyes peering almost into Ann’s face as the buckboard
passed beneath. Ann was on the side of the wagon which was closer
and could have touched the face if she had reached out her hand to
do so. Helen gave a little shriek of fright at sight of the thing and
Ann felt the cry echoing in her brain as if she had been the one who
called out.

Instinctively she dodged back against Jo, and felt that his muscles
were tense against the tightened reins in his hands.

Jerry needed no urging; with his back flattened down he ran,


swinging his heavy feet swiftly as he mounted the hill toward the
house. Ann glanced up from the strong brown hands holding the
reins and saw that Jo was staring straight ahead as though he had
not looked at the figurehead as he went by and was determined not
to turn and look back at it afterward.

They were past, but as they went up the hill the evening wind
suddenly grew stronger and sighed through the weatherworn boards
that covered the schooner’s hull, and the rattling of their loose ends
was like the sound of clapping hands.
What was this old boat, and why did it impress them so? And yet
Ann did not feel like asking Jo about it. She wished that her father
would say something to quiet this fear that had come over her so
suddenly. She never before had felt anything like this strange
impression that the schooner was more than just a plain ordinary
boat cast up on a narrow strip of beach.

As though Mr. Seymour had read her mind he asked Jo, “Where
did that schooner come from? She wasn’t here last summer when I
was down.”

“No, sir.” Jo had trouble in making his stiff lips move. “She came in
on a blizzard the winter past and stove up on the pond rocks.”

“Whose boat was she? What is her name?”

“She had no cargo on board,” said Jo slowly, as if he did not wish


to say anything about it. “She had no log either. And the waves were
so heavy that her name plate was gone and never came ashore.”

“But wasn’t there somebody on board to tell you who she was?”

“A man had no chance to live in the sea the day she came in,”
explained Jo. “Four of the crew were washed ashore the next day,
but they carried no papers and nobody claimed them. None of the
folks wanted to bury them down in the village churchyard so pop
and I put them up back of the barn where grandpop lies. It didn’t
seem right not to give them a bit of ground to lie in, even though we
didn’t know what brought them in here.”

Mrs. Seymour exclaimed indignantly, “I never heard of anything so


inhuman! Do you really mean that the people in the village refused
to bury those poor shipwrecked sailors in the cemetery? Jo! Not here
in a civilized land?”

“You couldn’t blame the folks,” apologized Jo.


But evidently Mrs. Seymour was quite positive that she could, and
Ann agreed with her most thoroughly.

Jerry had stopped running. He was going uphill and besides they
were almost home now, but Jo had time to say, “Nobody ever
claimed the boat. I guess nobody owns her. And not even the sea
wants her you can make that out by the way it threw her away up
here by the road, just as if it wanted to be free of her. Only the flood
tides reach her now.”

They had reached the house as Jo talked, and he jumped down


from his seat with his face still grim and set. And then everything
changed, for the house door was flung open with a flood of
lamplight over the doorstep and there stood Fred Bailey, Jo’s father.

“Come right in,” he called, striding to meet them. “Don’t mind that
stuff, Mr. Seymour. We’ll take it in for you.”

Ann liked Fred Bailey almost as much as she had liked Jo. As soon
as she saw him standing there, tall and thin and gangling in his
rough clothes, a fisherman and a farmer, all thoughts of the strange
wrecked ship were forgotten. Here was some one who made her feel
at home, some one who was strong and trustworthy and honest as
the good brown earth and the mighty cliffs.

Mr. Seymour had rented the Bailey house and Jo and his father
had moved into the barn for the summer. So presently, when the
baggage had been brought in and when Mr. Bailey had shown Mrs.
Seymour where things were in the pantry and the kitchen and the
woodshed and where the linen and blankets were kept, he and Jo
went off to their summer quarters leaving the Seymours alone.

Provisions had been sent from the village store and Ann and her
mother found the shelves well stocked with all kinds of food, with
big barrels of sugar, flour, and potatoes stored under the shelf in the
pantry. After they had studied the workings of the kerosene stove
they cooked the first meal over it, and Ann loved just such an
opportunity to show how much she knew about cooking. Ben was
ready to admit that she could boil potatoes expertly when she didn’t
forget and let the water boil away. As there was plenty of water this
time, and as Mrs. Seymour knew how to cook the steak deliciously in
a hot pan, and as Fred Bailey had left them a batch of soft yellow
biscuits, the hungry travelers were very well off indeed this evening.

Mr. Seymour was already gloating over the work he meant to do


this summer. “That boat is a find I didn’t expect. I’ll start sketching
her the first thing in the morning. Just think of having a cottage with
a wrecked schooner right in the front yard.”

“I don’t like that boat,” said Helen. Her lips twisted as though she
were going to cry. “It has such big round eyes that stare at you.”

Her mother laughed. “You must have been sleepy when you
passed the boat. That was only the figure of a man cut out of wood.
The eyes didn’t belong to anybody who is actually alive.”

“I don’t know about that, mother,” Ben said soberly. “I saw the
eyes, too, and I was wide-awake, for I pinched myself to make sure.
Those eyes made little holes right through me when they looked
down at me. They were looking at me, really, and not at Helen.”

“They were looking at me!” Helen insisted. “And I don’t like that
ship! I want to go home to Boston.”

Mr. Seymour looked at her in astonishment. “Come, come, my


dear child, you mustn’t let a thing like that frighten you. It is strange
and grotesque but that only makes it more interesting. I’ll tell you
about figureheads. The sailors think of the ship’s figurehead as a
sort of guardian spirit that watches over the boat and protects it
during storms. Even if it were alive it wouldn’t hurt you because it
was created only to protect. But it isn’t alive, Helen, it is made out of
wood. I’ll go with all of you to-morrow and let you touch it and then
you will never be afraid of it again.”
“Do they always put figureheads on big boats, father?” asked Ann.
She would not have been willing to admit that she, too, had those
eyes upon her and had thought they seemed very much alive.

“No, not always,” Mr. Seymour explained. “Sometimes the portion


over the cutwater of a ship is finished off with scrollwork, gilded and
painted. Modern steamers don’t have them now, very often, but the
deep-sea men who are on a sailing vessel months at a time like to
feel that they have a figurehead to watch and care for them while
they are asleep. The owners decide what it will be, and give
directions to the builders. That is, if they name a boat after a man
they will carve a statue of him for the bow, or else they will choose a
saint or an old-time god, like Neptune, who was once supposed to
rule over the sea. Sometimes they will have a mermaid, because
mermaids are gay and dancing and will make the ship travel more
swiftly; no sea could drown a mermaid. When a sailing ship makes a
safe passage through storm and peril and brings the sailors home
happy and well, they are very likely to believe that the figurehead
has had as much to do with it as the captain with his real knowledge
of navigation and charts.”

“It is a mascot, then?” said Ben.

“Yes, a sort of mascot,” his father assented. “And some of the old
figureheads are beautifully made, real works of art. When he retired,
many a sea captain took the figurehead from his ship and nailed it
over the door of his home, for he felt a real affection for it. Perhaps
he thought that since Neptune had taken such good care of the ship
at sea he was entitled to the same enjoyment and rest ashore that
the captain had earned.”

Mr. Seymour seemed to feel that everything was clear now, but
Ann was not satisfied.

“This ship did not get home safely,” she said in a half whisper.
“No, it didn’t,” her father assented. He was perfectly frank in
admitting that even the best of figureheads failed when storms were
too heavy or when sailors made mistakes in calculating the force of
wind and currents. “But that would not be the fault of the
figurehead. I am sure we shall learn that the captain lost track of
where he was and came in too close to shore.”

Ann’s doubts showed in her face. “But the crew and cargo have
disappeared.”

“You mustn’t be superstitious, Ann. There is always a logical


explanation for everything that seems strange and unnatural. There
must be a good reason why that boat had no cargo and probably we
shall learn all about her this summer before we go back to Boston.
Some of the people about here may know more than they care to
admit and have purposely kept it secret from Jo and Mr. Bailey.”

“Wouldn’t it be fun if we could find out all about her!” Her father’s
calm confidence had reassured Ann; her father must be right and
she didn’t want to be silly and timid. Never before had she felt the
least bit afraid of anything.

Ben had been thinking. “Just exactly what does it mean to be


superstitious, dad?” he asked.

“If you try to make yourself believe that the wooden figure out
there is alive, or if you are willing to accept any one else’s belief in
such nonsense, you will be superstitious and not intelligent. For
instance, you may think you see something, or hear something, and
not be able to explain what it is immediately. If instead of working to
learn a true explanation you remember the incident as it first
impressed you——”

“Like thinking a mouse at night is a burglar,” Ann interrupted.

“That is it exactly,” said Mr. Seymour. “Take that figurehead of a


demon on the boat; we passed by it just at twilight when it couldn’t
be seen as plainly as in full sunlight, and because the face was
leaning toward us, with shadows moving over it, it gave you the
impression that the thing was alive and watching you. To-morrow
when the sun comes out you will go back to look at it and see that it
is only a wooden statue, while if we should go home to-night, as
Helen wishes, you children would remember it all your lives as
something evil. And in that case you would be permitting yourselves
to grow superstitious instead of taking this as an opportunity for the
exercise of honest thinking and intelligent observation.”

“Is Jo superstitious?” asked Ben abruptly.

“Jo is too sensible to be superstitious,” answered his father.

“But Jo is afraid of that boat! I saw his face when we went past.
And even Jerry was afraid. He ran.”

Mr. Seymour glanced quickly across the table to where his wife sat
between Ann and Helen. Ann saw the look that passed between him
and her mother and realized that they both were worried. They did
not want Helen and Ben to go on thinking about the boat, nor did
they want the children to know that they, too, had felt the
strangeness of that gray broken boat and that grinning face.

Ann believed with her father that this was nothing more than an
old wooden sailing vessel thrown on the shore by a great storm.
Where had it come from, and for what port was it bound? Where
were the families who were waiting for their men to come home to
them? Were there children who thought that their father would
come back in a few weeks, now that good weather had made the
seas safe? Were there mothers who believed that their sailor sons
would soon be home? How anxious they must be, waiting all this
time since last winter. Something ought to be done about letting
them know the truth. It was tragic, and it was romantic, too.

And if there was a mystery attached to the ship that mystery could
be explained by a detective or by any one else who had the courage
and determination to find out what was at the bottom of this
strangeness. Her father had said there was a reason for everything
that was queer and uncanny. If only she were brave enough to face
that grinning demon! Should she be sensible, or should she let
herself be weak and unintelligent? Intelligent, that was what father
wanted them all to be, it was his favorite expression, “Be intelligent.”

The others began to chatter about other things while they were
finishing supper and washing the dishes afterward, but although Ann
took part in the work and the jokes and laughter and all the
anticipations of a great time to-morrow, she could think in the back
of her mind of nothing but the ship. If Jo would help them, she and
Ben would try to find out all about the wreck. It would be much
more fun than hunting imaginary Indians and bears in the woods.

After supper had been cleared away and the sweet old kitchen put
in order, all the Seymours trooped through every room in the house,
patting the wide soft feather beds that stood so high from the floor
that a little flight of steps was needed to climb into them.

“A tiny stepladder beside my bed!” exclaimed Helen. “What fun! I


love this house.”

The unaccustomedness of the quaint old furniture, the wide floor


boards polished with age, the small-paned windows, the bulky
mahogany chests of drawers that smiled so kindly as they waited for
the children’s clothes to be unpacked, all these things crowded the
ship out of Helen’s mind. She went to bed perfectly happy.

“Don’t you fall out,” called Ben from his room, “because if you
should you’d break your leg, probably, you’re so high.”

“I couldn’t fall out,” Helen called back. “You wait until you try your
bed. It seemed high before I got in, but I sank away down and
down into a nest; I think I’ll pretend I am a baby swan to-night with
billows of my mother swan’s feathers all about me to keep me warm.
I never slept in such a funny bed, but I like it!”
And then Helen’s voice trailed off into silence.

In each room the Seymours found a lamp trimmed and filled ready
for use, with its glass chimney as spotlessly clear as the glass of a
lighthouse.

“How kind the Baileys are!” exclaimed Mrs. Seymour gratefully. “I


don’t feel as if we were renting this house; Jo and his father seem
like old friends already.”

This time it was Ann and her father who exchanged a quick
glance, a flash of understanding and satisfaction. Impulsively Ann
threw her arms around her mother’s neck and kissed her. Her
mother should have a chance to rest here, if Ann’s help could make
it possible, dear mother who still looked so pale and tired after the
long weeks of nursing Helen and bringing her back to health.

“I knew that you’d like the Baileys,” said Mr. Seymour.

“Jo is an unusually nice boy, isn’t he, father?” Ann had already
grown attached to him.

“He certainly is,” Mr. Seymour agreed heartily. “And I know that
you will like him even better as you become better acquainted. His
father couldn’t get along without Jo. He does a man’s work on the
farm and helps bring in the lobsters every morning.”

“I’m going to be just like him,” Ben called from his bed in the next
room. Jo’s sturdy strength and the simple unconscious way the boy
used it had fired Ben’s imagination.

“Nothing could make me happier than to have you as well and


strong as he is, when we go away next fall,” answered Mr. Seymour.

With supper and the lamplight and the homely charm of the old
house, the atmosphere of uncanny strangeness had vanished, but
after Ann had blown out her lamp, just before she was ready to
climb the steps to her bed, she went to the window and peered
through the darkness toward the wrecked ship.

And as she looked a flickering light passed across the deck.

She must be mistaken. It was a firefly. No, there it was again, as


though a man walked carrying a swinging lantern with its wick no
bigger than a candle flame. He passed the bow, and the glow swung
across the figure of the demon.

Was it Jo or his father? That was Ann’s first thought, but she
wanted to make sure. From a second window in her room, across a
corner, she could see the windows of the barn which the Baileys had
made into a living room, and she leaned far out to see clearly. Jo
was there. He was talking to some one at the back of the room.

If Jo and his father were talking together, who could be prowling


around the boat? She crossed the room to look again at the
schooner. And as she watched, the bright pin prick of light
disappeared; the lantern had been carried behind some opaque
object that hid it.

“What’s up, Ann?” Ben stirred restlessly in the adjoining room. “It
will be morning before you get to bed.”

“Oh, I was looking out of the window. The stars are so bright in
Maine!”

“Ann! What do you think about that ship? I feel as if ghosts lived
on her.”

Ann climbed her little flight of steps and slid down between upper
sheet and feathers.

“Nonsense,” she called to Ben. “Ghosts don’t carry lanterns.”


“What?” Ben’s voice sounded much more awake. “What did you
say, Ann?”

“I said I don’t believe in ghosts.”

Ann slid farther into her feather nest and promptly went to sleep.
CHAPTER III

HOW THE BOAT CAME ASHORE

Vaguely Ann heard a bell ringing. She thought that she was
lobstering with Jo and that Jo was pulling up a bell in one of the
heavy lobster pots. They were bobbing about on waves as high as
mountains.

“It is seven o’clock! No farmer stays in bed late, you know.”

It was Mrs. Seymour’s voice.

How could her mother have come away out to sea? Ann sat up in
bed, not awake yet. And then she saw the sun pouring in through
the open windows. Her mother was standing in the hall between
Ann’s room and Ben’s, swinging an old ship’s bell that she must have
found somewhere in the house.

“In one minute, mother!”

How queer to wash in a huge bowl in her room instead of in a


bathroom! And how lovely to dry oneself while standing on a braided
mat before the washstand with the sun pouring down on one’s back
and legs! Bloomers and middy had miraculously appeared from her
baggage; some fairy had been at work while Ann was sleeping.

The smell of breakfast tweaked her hungry nose and she scurried
madly with her dressing, for Ben and Helen would eat everything in
sight if they felt half as starved as she did.

The kitchen seemed altogether different in the daytime. It had


grown smaller without the flickering shadows from the lamps. The
ceiling was low and Mr. Seymour bumped his head as he came
through the doorway; he would have to remember to stoop.

The big kitchen stove hummed merrily with the sweet smell of
wood smoke seeping up through the lids, a delicate fragrant thread
of gray that curled and disappeared. Mrs. Seymour explained that
Mr. Bailey built the fire for her; he had come early to show her how
to make it. Just as she spoke he appeared in the doorway again with
a foaming milk pail in his hand. His face was unsmiling but his blue
eyes were alight.

“So much milk for us?” inquired Mrs. Seymour.

“Drink it down, free as water,” he answered. “That’s what puts the


color in children’s cheeks. Get your milk pans ready.”

“Hello,” said Ann. “Isn’t this a fine morning?”

“Morning? Morning?” said Mr. Bailey. “This be the middle of the


forenoon.”

Ann saw that his eyes were laughing at her although his face
never moved a muscle. “What time is morning up here?” she
demanded.

“Oh—about half past three, these days. That’s dawn.”

“Do we have to get up at half past three?” cried Ben.

“Well, you do if you want to keep up with Jo,” answered his father.

“Where’s Jo now?” Ben asked, getting up from his chair.

“He’s hoein’ corn,” said Mr. Bailey. “Got two rows done already.
He’s not one to lie in bed, not Jo.”

“May I hoe with him? I’d like to, really.”


Fred Bailey looked at Ben’s mother. She nodded permission and
Ben was off like a shot.

“Won’t you sit down and have a cup of coffee with us,” asked Mrs.
Seymour, “to celebrate our first morning?”

“I don’t know but what I might,” said Fred Bailey. “Only don’t
leave that pail o’ milk out there by the door for a minute.” And he
picked it up and handed it to Ann. “It’ll be tipped over the second
you take your eyes off it.”

“Your barn cats come over this far for milk?” inquired Mr. Seymour
laughing. “They can smell a good thing from a long distance.”

“It ain’t no cats that dump it out on me,” said Fred soberly. “And I
think that I’d better warn you, first thing. It’s the spirits, the spirits
from the ship. They pester me almost to death, dumping out the
milk from pails, and they tear up the packages left beside the door.
You don’t want to leave nothin’ about.”

“You think that ship is haunted?” Mrs. Seymour poured out a big
cup of coffee.

Helen had gone already and Ann hoped that neither of her parents
would notice that she had stayed. She made as little noise as
possible with the milk pans and then came and sat down quietly. She
saw her mother’s eye wander toward her but she smiled pleadingly,
hoping that her mother would know she could not be frightened by
any story about ghosts.

Fred was evidently glad to talk, once he had started on the


subject. “I shouldn’t wonder but what something was aboard that
boat that shouldn’t be there. I know this much—I’ve been bothered
uncommon ever since she came ashore, and not by human beings.”

“How did she happen to be wrecked?” Mr. Seymour was as eager


as Ann for the story, now that he felt sure that a story existed.
“She struck last winter in January,” began Fred, settling himself
more comfortably in his chair. “It was during the worst storm we’ve
had in these parts in the last hundred years.”

“It must have been a howler,” commented Mr. Seymour.

Mr. Bailey nodded soberly. “You’re right, I never saw nothin’ like
it,” he said. “The storm had been brewing for days and we could feel
it coming long before it struck us up here; there was warning
enough in the Boston paper. Then the sea grew flat and shining
without a hint of a whitecap on her. The wind was so strong it just
pressed right down and smothered the waves, and it blew straight
off the land. It never let up blowing off the land all through the
storm, and that was one of the queer things that happened.

“We had three days o’ wind, and then the snow broke, all to once,
as though the sky opened and shook all its stuffing right out on us.
With the coming o’ the snow the wind eased up a bit an’ let the
water churn on the top of the sea until it was as white as the falling
snow. Finally I couldn’t tell where the water ended and the snow
began.

“The wind driving the sleet was cruel. Whenever Jo or I ventured


out it cut our faces and made them raw and bleeding. At times the
wind lifted the house right off its stone foundations and shook it,
and I feared it would be blown clear over the bluff and set awash in
the sea.”

“How terrible!” exclaimed Mrs. Seymour.

“It was all of that,” Fred agreed. “The second day of the snow I
thought the wind hove to a mite, it seemed more quiet. I went to
the window to see if the snow had let up. It had—but not in any way
I ever had seen it in all my fifty years of life on this bluff. It was as if
a path had been cut through the flying storm, straight and clear with
the wind sweeping through, so that I could see beyond the bluff
over the water. It was then I had my first glimpse of it, riding over
the waves and coming ashore dead against the gale. It was such a
thing as no mortal ever saw nowadays. I thought I was losing my
wits to see a boat coming toward me, riding in to shore against the
wind and while the tide was running out. I just couldn’t believe what
my eyes were telling me, for no boat that I ever heard tell of had
struck on this section of the coast. Nature built here so that they
can’t come in, what with Douglas Head stretching out to the north
and making a current to sweep wrecks farther down; they strike to
the north or the south of us, but never here.”

“To see a ship coming in and be powerless to help it!” exclaimed Mr. Seymour as Fred paused for a sip of coffee and a bite of
doughnut. “There was nothing that you could do?”

“Not a thing. I was alone with Jo, and even if we had been able to
get out a small boat we couldn’t have done nothin’. She was coming
in too fast. So we bundled up, Jo and I, and went out to stand by on
the shore.”

“Into that storm?” Ann demanded. She had drawn close to her
mother’s chair during the story and now she stood tense against it.
She could almost see the two figures, Fred so tall and Jo a little
shorter, as they ventured out into the wind that threatened to blow
them into the water. How the cutting sleet must have hurt, and how
cold they must have been as they stamped their feet on the ice-
covered rocks and beat their hands to keep from freezing!

“Nothing else to do but try to save the men as they washed ashore, now was there?” Fred asked gently, and Ann shook her
head. She knew that if she had been there she would have gone
with them and borne the cold as best she could.

“We waited and watched,” Fred continued. “And all that time the
narrow path stayed in the storm, swept clear of the driving snow.
And the boat came nearer with no sails set and on even keel. When
she struck she cried like a living thing.

“We couldn’t see a man aboard. We waited all day and when night
closed in I sent Jo down to the village for help, and I listened alone
all night for the cry of some one washed to the beach; but no one
came.

“When dawn broke Jo came back with ten or twelve men. They
hadn’t known a thing about the wreck in the village nor we
shouldn’t, either, if it hadn’t been for that path in the storm; the
snow was falling too thick for any one to see through it. Well, that
morning the storm was over and the sun burst out. And there she
lay, almost as you see her now, but farther out. The water was
boiling all about her. The waves were crashing in pretty high but we
thought we could get one of the boats launched at the mouth of the
river and work it round to the ship. So we left Jo to watch the bluff
here and picked my dory to make the trip as she shipped less water
and rode the waves easier. We got her down the river and around
the point and after a couple of attempts we pulled in under the
schooner’s stern and three of us swung aboard while Les Perkins
and Pete Simonds held the dory.

“When we got on the schooner’s deck we found that the sea had
swept her clean of anything that might have identified her. The
name plates looked as if a mighty hand had wrenched them loose
and great cuts showed in the bow and stern where they had been.
There wasn’t a sound but the pounding of the waves along her side.
It made a queer sussh-sussh that didn’t seem to come from where
the water touched her. We broke open the hatches and went down
in her—two by two. Wasn’t a man of us who dast go down there
alone, for you never can tell what you’re going to find in a wrecked
ship’s cabin. We looked all about, but no one was in the place and I
don’t believe that any one was on her when she struck. The crew’s
quarters were in order but the cabin appeared as if there had been a
struggle there, though the sea might have done it, tossing things
about. Then we searched her careful but found no log nor no
papers. Some clothes were scattered here and there but the pockets
were empty and turned wrongside foremost. She had no cargo and
the fire was still a-going in the stove.”

Mr. Bailey had another cup of coffee and drank it silently while the
Seymours waited for the rest of the story.

“Well, that’s how she came in,” he said at last.

“But what makes you think there are spirits on board?” asked Mr.
Seymour. “There must have been something more than you have
told us, to make you believe that.”

“Yes, there is more to it,” admitted Fred, “but if I was to tell ye you’d think me foolish.”

“We’d never think that, I can assure you,” said Mrs. Seymour
quickly. “If we had been with you on the schooner probably we
should be feeling exactly as you do about her.”

“Perhaps you might, and perhaps you might not. I would think
that the trouble was with me if it hadn’t been for the other men, but
every one of them down to the cove would back me up in what I
say. And I might as well tell you, because if I don’t some one else
will, no doubt.

“We had almost finished searching when I got a sort of feeling that some one or something was peering at me. I kept looking
around behind me, and then I noticed that the other men were
doing the same thing. There was nothin’ there. We kind of looked at
each other and laughed at first. But soon it was all I could do to
keep from running around the next corner to catch whatever was
behind it. We did our search thorough, but I can tell you I was glad
when Les Perkins pulled the dory under the stern and I could drop
into her. None of us hankered to stay aboard that ship.”

In spite of herself Ann shivered and was glad when her mother
hugged her reassuringly.

“Two days after that,” Fred continued, “we picked up four men
who had been washed in by the sea. We are God-fearing people up
here and I couldn’t understand why the folks in the village wouldn’t
put those sailors in the churchyard, but some of the people were
foolish and said those men should not be put in consecrated ground,
coming out of the sea like that. I didn’t know quite what to do, and I
suppose I should have taken them out and put them back into the
sea, the way most sailormen are done by when they’re dead. But I
didn’t decide to do that way; I buried them with my own people,
yonder in the field, and they lie there marked by four bits of
sandstone.

“Jo and I have been back on the boat several times, for we felt we
had a duty by her, lying at our door as she does, but we can’t find a
trace of anything to identify her and we both had that feeling that
something there is wrong. Something was watching us all the time
we were on her. So I’ve given up trying to think where she came
from or who sailed on her, for such things a man like me is not
supposed to know. Spirits from the sea no doubt came on board
during the storm and threw the crew overside. But if those spirits
are there now I don’t understand why the sea don’t claim her and
break her up. Sea seems to be shoving her back on the land as
though it wanted to be rid of her.”

“That is a great story, Fred,” said Mr. Seymour. “And I can sympathize with the way you felt; it must have taken a great deal of
courage to go back to her when you and Jo looked her over. And you
have never seen anything move on the boat?”

Ann wanted to tell about the light she had seen there last night,
but that was her discovery and she so hoped to be the one to solve
the mystery! She said not a word about it.

“Nary a sight of anything have we ever had,” Fred answered.

