INTRODUCTION TO COMPUTER GRAPHICS AND ANIMATION
February 2018
ISIBOR O.O.
COMPUTER SCIENCE DEPARTMENT
LAGOS CITY POLYTECHNIC
IKEJA-NIGERIA.
[email protected], 08063546421.
FEBRUARY 2017
REVISED FEBRUARY 2018.
©IsiborOO2017
DEFINITION AND CONCEPTS OF COMPUTER GRAPHICS
“Perhaps the best way to define computer graphics is to find out what it is not. It is not a
machine. It is not a computer, nor a group of computer programs. It is not the know-how of a
graphic designer, a programmer, a writer, a motion picture specialist, or a reproduction specialist.
Computer graphics is all of these: a consciously managed and documented technology directed
toward communicating information accurately and descriptively.”
In 1963 Ivan Sutherland presented his paper Sketchpad at the Summer Joint Computer Conference.
Sketchpad allowed interactive design on a vector graphics display monitor with a light pen input
device. Most people mark this event as the origins of computer graphics.
5. The '90's
The Intel 486 chipset allowed PCs to achieve reasonable floating-point performance. In 1994,
Silicon Graphics produced the RealityEngine: it had hardware for real-time texture mapping. The
Nintendo 64 game console hit the market, providing RealityEngine-like graphics for the mass of
game players. Scanners were introduced.
9. The '00's
Today most graphicists want an Intel PC with at least 256 MB of memory and a 10 GB hard
drive. Their display should have a graphics board that supports real-time texture mapping. A flatbed
scanner, color laser printer, digital video camera, DVD, and MPEG encoder/decoder are the
peripherals one wants. The environment for program development is most likely Windows
and Linux, with Direct3D and OpenGL, but Java 3D might become more important. Programs
would typically be written in C++ or Java.
What will happen in the near future is difficult to say, but high-definition TV (HDTV) is poised to
take off (after years of hype). Ubiquitous, untethered, wireless computing should become
widespread, and audio and gestural input devices should replace some of the functionality of the
keyboard and mouse.
You should expect 3-D modeling and video editing for the masses, computer vision for
robotic devices and for capturing facial expressions, and realistic rendering of difficult things like a
human face, hair, and water. With any luck, C++ will fall out of favor.
ETHICAL ISSUES
Graphics has had a tremendous effect on society. Things that affect society often lead to ethical and
legal issues. For example, graphics are used in battles and their simulation, medical diagnosis,
crime re-enactment, cartoons and films. The ethical role of graphics programs that may be used for
these and other purposes is discussed and analyzed in the notes on Ethics.
APPLICATIONS OF COMPUTER GRAPHICS
1. Medical Imaging
There are few endeavors more noble than the preservation of life. Today, it can honestly be
said that computer graphics plays a significant role in saving lives. The range of application
spans from tools for teaching and diagnosis all the way to treatment. Computer graphics is a tool in
medical applications rather than a mere artifact. No cheating or tricks allowed.
2. Scientific Visualization
Computer graphics makes vast quantities of data accessible. Numerical simulations frequently
produce millions of data values. Similarly, satellite-based sensors amass data at rates beyond our
abilities to interpret them by any other means than visually. Mathematicians use computer graphics
to explore abstract and high-dimensional functions and spaces. Physicists can use computer graphics
to transcend the limits of scale. With it they can explore both the microscopic and macroscopic worlds.
5. Games
Games are an important driving force in computer graphics. In this class we are going to talk about
games. We'll discuss how they work. We'll also question how they get so much done with so
little to work with.
6. Entertainment
If you can imagine it, it can be done with computer graphics. Obviously, Hollywood has caught on
to this. Each summer, we are amazed by state-of-the-art special effects. Computer graphics is
now as much a part of the entertainment industry as stunt men and makeup. The entertainment
industry plays many other important roles in the field of computer graphics.
2
THE GRAPHICS RENDERING PIPELINE
Classically, the conversion from “model” to “scene” to “image”, broken into finer steps, is called the
graphics pipeline; it is commonly implemented in graphics hardware to achieve interactive speeds.
At a high level, the graphics pipeline usually looks like the diagram below:
Each stage refines the scene, converting primitives in modeling space to primitives in device space,
where they are converted to pixels (rasterized). A number of coordinate systems are used:
Keeping these straight is the key to understanding a rendering system. Transformation between two
coordinate systems is represented with a matrix. Derived information may be added (lighting and
shading) and primitives may be removed (hidden surface removal) or modified (clipping).
3
GRAPHICS HARDWARE, SOFTWARE AND DISPLAY DEVICES
GRAPHICS SOFTWARE
Graphics software (that is, the software tools needed to create graphics applications) has taken the
form of subprogram libraries. The libraries contain functions to do things like: draw points, lines,
and polygons; apply transformations; fill areas with color; and handle user interactions. An important
goal has been the development of standard hardware-independent libraries such as:
CORE
GKS (Graphical Kernel System)
PHIGS (Programmer’s Hierarchical Interactive Graphics System)
X Windows
OpenGL
(Study OpenGL)
GRAPHICS HARDWARE
Graphics Hardware Systems consist of the following:
DISPLAY HARDWARE
An important component is the “refresh buffer” or “frame buffer” which is a random-access
memory containing one or more values per pixel, used to drive the display.
The video controller translates the contents of the frame buffer into signals used by the CRT to
illuminate the screen. It works as follows:
1. The display screen is coated with “phosphors” which emit light when excited by an electron
beam. (There are three types of phosphor, emitting red, green, and blue light.) They are
arranged in rows, with three phosphor dots (R, G, and B) for each pixel.
2. The energy exciting the phosphors dissipates quickly, so the entire screen must be refreshed
60 times per second.
3. An electron gun scans the screen, line by line, mapping out a scan pattern.
4. On each scan of the screen, each pixel is passed over once. Using the contents
of the frame buffer, the controller controls the intensity of the beam hitting each pixel,
producing a certain color.
FLAT-PANEL DISPLAYS:
This is the technology used to replace CRT monitors. Flat panels are characterized by:
Reduced volume, weight and power needs
Thinner: can hang on a wall
Higher resolution (High Definition)
They come in two categories: emissive and non-emissive.
Other technologies require storage of x-y coordinates of pixels, e.g. thin-film electroluminescent
displays, LEDs, and flat CRTs.
VECTOR (RANDOM SCAN) DISPLAYS
Also called random, stroke, or calligraphic displays, they possess the following features:
i. Images are drawn as line segments (vectors)
ii. The beam can be moved to any position on the screen
iii. The refresh buffer stores plotting commands, so the frame buffer is often called a "display file";
it provides the DPU with the needed endpoint coordinates. Pixel size is independent of the frame
buffer, which gives very high resolution.
“Vector graphics”: early graphic devices were line-oriented, for example the “pen plotter”
from H-P. The image is stored as line segments (vectors) that can be drawn anywhere on the display
device; the primitive operation is line drawing.
Advantages of Vector Scan
High resolution (good for detailed line drawings)
Crisp lines (no "jaggies")
High contrast (beam can dwell on a single point for some time ==> very bright)
Selective erase (remove commands from display file)
Animation (change line endpoints slightly after each refresh)
Disadvantages of Vector Scan
Complex drawings can flicker: with many lines, if time to draw > refresh time ==> flicker
High cost: a very fast deflection system is needed
Hard to get colors
No area fill: so it’s difficult to use for realistic (shaded) images
A 1960s technology, only used for special-purpose applications today
Raster scan: the beam continually traces a raster pattern. Its intensity is adjusted as the raster scan
takes place
• In synchronization with the beam
• The beam focuses on each pixel
• Each pixel’s intensity is stored in the frame buffer
• So resolution is determined by the size of the frame buffer
Each pixel on the screen is visited during each scan, and the scan rate must be >= 30 Hz to avoid flicker.
“Raster graphics” is today’s standard. A raster is a 2-dimensional grid of pixels (picture elements).
The image is stored as a 2D array of color values in a memory area called the frame buffer. Each value
stored determines the color/intensity of an accessible point on the display device.
Each pixel may be addressed and illuminated independently. So, the primitive operation is to draw a point;
that is, assign a color to a pixel. Everything else is built upon that. There are a variety of raster devices, both
hardcopy and display. Hardcopy: Laser printer, Ink-jet printer, Film recorder, Electrostatic printer, Pen
plotter.
Scan conversion here refers to the process of determining which pixels need to be turned on in
the frame buffer to draw a given graphics primitive. It needs algorithms to efficiently scan convert
graphics primitives like lines, circles, etc.
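A classic such algorithm is Bresenham's line algorithm, which scan converts a line using only integer arithmetic. The sketch below is illustrative Python (not part of the original notes); it returns the list of pixels to turn on in the frame buffer:

```python
def bresenham_line(x0, y0, x1, y1):
    """Bresenham scan conversion of a line segment.

    Uses only integer additions and comparisons; the accumulated
    error term decides when to step in the minor axis direction.
    Works in all octants via the sign variables sx, sy.
    """
    pixels = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    while True:
        pixels.append((x0, y0))          # turn this pixel on
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:                     # step along x
            err -= dy
            x0 += sx
        if e2 < dx:                      # step along y
            err += dx
            y0 += sy
    return pixels
```

For example, `bresenham_line(0, 0, 3, 3)` yields the diagonal pixels (0,0), (1,1), (2,2), (3,3).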
TYPES OF CRT
1. Direct View Storage Tubes (not CRT, no need for refresh, pictures stored as a permanent
charge on phosphor screen)
2. Calligraphic refresh CRT (line drawing or vector random scan, need refreshing)
3. Raster scan (point by point refreshing)
Refresh rate: # of complete images (frames) drawn on the screen in 1 second. Frames/sec.
Frame time: reciprocal of the refresh rate, time between each complete scan. sec/frame
The electron gun sends a beam aimed (deflected) at a particular point on the screen, tracing out a
path on the screen and hitting each pixel once per cycle: the “scan lines”. Phosphors emit light
(phosphorescence); the output decays rapidly (exponentially, in 10 to 60 microseconds). As a result
of this decay, the entire screen must be redrawn (refreshed) at least 60 times per second. This is
called the refresh rate. If the refresh rate is too slow, we will see a noticeable flicker on the screen.
The CFF (Critical Fusion Frequency) is the minimum refresh rate needed to avoid flicker. This
depends to some degree on the human observer. It also depends on the persistence of the phosphors;
that is, how long it takes for their output to decay. The horizontal scan rate is defined as the number
of scan lines traced out per second.
The most common form of CRT is the shadow-mask CRT. Each pixel consists of a group of three
phosphor dots (one each for red, green, and blue), arranged in a triangular form called a triad. The
shadow mask is a layer with one hole per pixel. To excite one pixel, the electron gun (actually three
guns, one for each of red, green, and blue) fires its electron stream through the hole in the mask
to hit that pixel. The dot pitch is the distance between the centers of two triads. It is used to measure
the resolution of the screen.
(Note: On a vector display, a scan is in the form of a list of lines to be drawn, so the
time to refresh is dependent on the length of the display list.)
A liquid crystal display consists of 6 layers, arranged in the following order (back-to-front):
How it works:
The liquid crystal rotates the polarity of incoming light by 90 degrees. Ambient light is captured,
vertically polarized, rotated to horizontal polarity by the liquid crystal layer, passes through the
horizontal filter, is reflected by the reflective layer, and passes back through all the layers, giving
an appearance of lightness. However, if the liquid crystal molecules are charged, they become
aligned and no longer change the polarity of light passing through them. If this occurs, no light can
pass through the horizontal filter, so the screen appears dark.
The principle of the display is to apply this charge selectively to points in the liquid crystal
layer, thus lighting or not lighting points on the screen. Crystals can be dyed to provide color.
An LCD may be backlit, so as not to be dependent on ambient light. TFT (thin film transistor) is
the most popular LCD technology today.
Vector Displays
Oscilloscopes were some of the first computer displays, used by both analog and digital computers.
Computation results were used to drive the vertical and horizontal axes (X-Y); intensity could also
be controlled (Z-axis). They were used mostly for line drawings, and are called vector, calligraphic
or, affectionately, stroker displays. The display list had to be constantly updated (except for storage
tubes).
(Note: In early PCs, there was no display processor. The frame buffer was part of the physical
address space addressable by the CPU. The CPU was responsible for all display functions.)
Some Typical Examples of Frame Buffer Structures:
1. For a simple monochrome monitor, just use one bit per pixel.
2. A gray-scale monitor displays only one color, but allows for a range of intensity levels at each
pixel. A typical example would be to use 6-8 bits per pixel, giving 64-256 intensity levels.
For a color monitor, we need a range of intensity levels for each of red, green, and blue.
There are two ways to arrange this.
3. A color monitor may use a color lookup table (LUT). For example, we could have a LUT
with 256 entries. Each entry contains a color represented by red, green, and blue values.
We then could use a frame buffer with a depth of 8. For each pixel, the frame buffer
contains an index into the LUT, thus choosing one of the 256 possible colors. This approach
saves memory, but limits the number of colors visible at any one time.
4. A frame buffer with a depth of 24 has 8 bits for each color, thus 256 intensity levels for each
color. 2^24 colors may be displayed. Any pixel can have any color at any time. For a
1024x1024 monitor we would need 3 megabytes of memory for this type of frame buffer.
The display processor can handle some medium-level functions like scan conversion (drawing
lines, filling polygons), not just turning pixels on and off. Other functions: bit block transfer,
display list storage. Use of the display processor reduces CPU involvement and bus traffic,
resulting in faster overall performance. Graphics processors have been increasing in power faster
than CPUs, with a new generation every 6-9 months. Example: NVIDIA GeForce FX
· 125 million transistors (GeForce4: 63 million)
· 128MB RAM
· 128-bit floating point pipeline
· 125 million transistors (GeForce4: 63 million)
· 128MB RAM
· 128-bit floating point pipeline
One of the advantages of a hardware-independent API like OpenGL is that it can be used with
a wide range of CPU-display combinations, from software-only to hardware-only. It also
means that a fast video card may run slowly if it does not have a good implementation of
OpenGL.
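The lookup-table scheme described above can be sketched in a few lines of Python. This is a toy software model, not how any real video hardware is programmed (the lookup happens in silicon); all names and sizes here are illustrative:

```python
# A 1024x768 pseudo-colour frame buffer: 8 bits per pixel, each value
# an index into a 256-entry lookup table (LUT) of (R, G, B) triples.
W, H = 1024, 768

lut = [(0, 0, 0)] * 256          # the palette: 256 displayable colours
lut[17] = (255, 128, 0)          # e.g. entry 17 holds orange

framebuffer = [[0] * W for _ in range(H)]   # one byte-sized index per pixel
framebuffer[100][200] = 17                  # pixel (200, 100) selects LUT entry 17

def pixel_colour(x, y):
    """The colour the video controller would send to the screen for (x, y)."""
    return lut[framebuffer[y][x]]

# The memory saving: one byte per pixel plus a tiny palette,
# versus three bytes per pixel for full true colour.
indexed_bytes = W * H + 256 * 3
true_colour_bytes = W * H * 3
```

Changing one LUT entry instantly recolours every pixel that indexes it, which is why palette animation was a popular trick on indexed-colour hardware.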
4
IMAGE REPRESENTATION
Introduction:
Computer Graphics is principally concerned with the generation of images, with wide ranging
applications from entertainment to scientific visualization. In this unit, we begin our exploration of
Computer Graphics by introducing the fundamental data structures used to represent images on
modern computers. We describe the various formats for storing and working with image data, and
for representing colour on modern machines.
Rasters are used to represent digital images. Modern displays use a rectangular raster, comprised
of W × H pixels. The raster illustrated here contains a greyscale image; its contents are
represented in memory by a greyscale frame buffer.
The values stored in the frame buffer record the intensities of the pixels on a discrete scale
(0=black, 255=white).
The pixel is the atomic unit of the image; it is coloured uniformly, its single colour represents a
discrete sample of light e.g. from a captured image.
In most implementations, rasters take the form of a rectilinear grid often containing many thousands
of pixels.
The raster provides an orthogonal two-dimensional basis with which to specify pixel
coordinates.
By convention, pixel coordinates are zero-indexed and so the origin is located at the top-left of the
image. Therefore pixel (W − 1, H − 1) is located at the bottom-right corner of a raster of width
W pixels and height H pixels. As a note, some graphics applications make use of hexagonal pixels
instead; however, we will not consider these on the course.
The number of pixels in an image is referred to as the image’s resolution.
Modern desktop displays are capable of visualizing images with resolutions around 1024 × 768
pixels (roughly a million pixels, or one mega-pixel). Even inexpensive modern cameras and scanners
are now capable of capturing images at resolutions of several mega-pixels. In general, the greater the
resolution, the greater the level of spatial detail an image can represent.
Resolution
A display’s “resolution” is determined by:
i. number of scan lines (Each left-to-right trace)
ii. number of pixels (Each spot on the screen) per scan line
iii. number of bits per pixel
Resolution is used here to mean total number of bits in a display. It should really refer to the
resolvable dots per unit length.
Examples:
Bitmapped display: 960 x 1152 x 1b = 1/8 MB
NTSC TV: 640 x 480 x 16b = 1/2 MB
Color workstation: 1280 x 1024 x 24b = 4 MB
Laser-printed page at 300 dpi: 8.5 x 11 x 300^2 x 1b = 1 MB
Laser-printed page at 1200 dpi: 8.5 x 11 x 1200^2 x 1b = 17 MB
Film: 4500 x 3000 x 30b = 50 MB
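These figures follow directly from width × height × bit depth, divided by 8 to get bytes. A small Python sketch (illustrative, not part of the original notes) reproduces them:

```python
def display_bytes(width, height, bits_per_pixel):
    """Frame-buffer size in bytes: total pixels times bit depth, over 8."""
    return width * height * bits_per_pixel // 8

MB = 2 ** 20  # one megabyte

bitmapped   = display_bytes(960, 1152, 1)          # ~1/8 MB
ntsc_tv     = display_bytes(640, 480, 16)          # ~1/2 MB
workstation = display_bytes(1280, 1024, 24)        # ~4 MB
laser_300   = display_bytes(int(8.5 * 300), 11 * 300, 1)  # ~1 MB
film        = display_bytes(4500, 3000, 30)        # ~50 MB
```

Note that for printed pages the physical size in inches must first be converted to dots using the resolution in dpi, which is why the dpi figure appears squared in the table above.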
Frame aspect ratio = horizontal / vertical size
TV 4:3
HDTV 16 : 9
Letter-size paper 8.5 : 11 (about 3 : 4)
35mm film 3:2
Panavision 2.35 : 1
Pixel aspect ratio = pixel width / pixel height (nowadays, this is almost always 1.)
Historically, dedicated hardware was needed to store enough data to represent just a single image.
However, we may now manipulate hundreds of images in memory simultaneously, and the term
‘frame buffer’ has fallen into informal use to describe any piece of storage that represents an image.
There are a number of popular formats (i.e. ways of encoding pixels) within a frame
buffer. This is partly because each format has its own advantages, and partly for reasons of
backward compatibility with older systems (especially on the PC). Often video hardware can be
switched between different video modes, each of which encodes the frame buffer in a different
way.
We will describe three common frame buffer formats in the subsequent sections; the greyscale,
pseudo-colour, and true-colour formats. If you do Graphics, Vision or mainstream Windows GUI
programming then you will likely encounter all three in your work at some stage.
5
GEOMETRIC MODELING.
In computer graphics we work with points and vectors defined in terms of some coordinate frame
(a positioned coordinate system). We also need to change coordinate representation of points and
vectors, hence to transform between different coordinate frames.
There are many ways of creating graphical data. The classic way is geometric modeling.
Other approaches are:
3D scanners
Photography for measuring optical properties
Simulations, e.g., for flow data
Geometric modeling is the computer-aided design (CAD) and manipulation of geometric objects.
It is the basis for:
Computation of geometric properties
Rendering of geometric objects
Physics computations (if some physical attributes are given)
Geometric objects convey a part of the real or theoretical world; often, something tangible. They are
described by their geometric and topological properties:
Geometry describes the form and the position/orientation in a coordinate system.
Topology defines the fundamental structure that is invariant against continuous
transformations.
3D models are geometric representations of 3D objects with a certain level of abstraction. Let’s
distinguish between three types of models:
1) Wire frame models: describe an object using boundary lines. No relationship exists between
these curves, and the surfaces between them are not defined.
3) Solid models: describe an object as a solid; that is, they describe the 3D object completely by
covering the solid.
A Cartesian coordinate system is an orthogonal coordinate system with lines as coordinate axes.
A Cartesian coordinate frame is a Cartesian coordinate system positioned in space.
Vectors
A vector (u, v, w, ...) is a directed line segment (with no concept of position). Vectors are
represented in a coordinate system by an n-tuple v = (v1, ..., vn).
The dimension of a vector is dim(v) = n.
The length |v| and direction of a vector are invariant with respect to the choice of coordinate system.
Matrix Algebra
A matrix is a rectangular array of numbers. Both vectors and scalars are degenerate forms of
matrices. By convention we say that an (n×m) matrix has n rows and m columns; i.e. we write
(height × width). In this subsection we will use two 2 × 2 matrices for our examples:
Observe that the notation for addressing an individual element of a matrix is x_row,column (the row
subscript first, then the column).
Matrix Addition
Matrices can be added, if they are of the same size. This is achieved by summing the elements in
one matrix with corresponding elements in the other matrix:
This is identical to vector addition.
Matrix Scaling
Matrices can also be scaled by multiplying each element in the matrix by a scale factor.
Again, this is identical to vector scaling.
Matrix Multiplication
As we will see later, matrix multiplication is a cornerstone of many useful geometric
transformations in Computer Graphics. You should ensure that you are familiar with this
operation.
In general each element cij of the matrix C = AB, where A is of size (n × P) and B is of size
(P × m), has the form: cij = Σ(k=1..P) aik bkj, i.e. the sum over k of the products aik bkj.
Not all matrices are compatible for multiplication. In the above system, A must have as many
columns as B has rows. Furthermore, matrix multiplication is non-commutative, which means that
BA ≠ AB, in general. Given equation 1.27 you might like to write out the multiplication for BA to
satisfy yourself of this.
Finally, matrix multiplication is associative i.e.: ABC = (AB)C = A(BC)
If the matrices being multiplied are of different (but compatible) sizes, then the complexity of
evaluating such an expression varies according to the order of multiplication.
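The definition of cij above, and the non-commutativity of the product, can be checked with a short sketch (illustrative Python, representing matrices as plain nested lists):

```python
def mat_mul(A, B):
    """C = AB for A of size (n x P) and B of size (P x m):
    c[i][j] is the sum over k of a[i][k] * b[k][j]."""
    n, P, m = len(A), len(B), len(B[0])
    assert len(A[0]) == P, "A must have as many columns as B has rows"
    return [[sum(A[i][k] * B[k][j] for k in range(P)) for j in range(m)]
            for i in range(n)]

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]

AB = mat_mul(A, B)   # swaps the columns of A
BA = mat_mul(B, A)   # swaps the rows of A: a different result
```

Multiplying by B on the right permutes columns while multiplying on the left permutes rows, which makes the non-commutativity easy to see by eye.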
For some matrices (the orthonormal matrices), the transpose actually gives us the inverse of the
matrix. We decide if a matrix is orthonormal by inspecting the vectors that make up the matrix’s
columns, e.g. [a11, a21]T and [a12, a22]T . These are sometimes called column vectors of the matrix. If
the magnitudes of all these vectors are one, and if the vectors are orthogonal (perpendicular) to each
other, then the matrix is orthonormal. Examples of orthonormal matrices are the identity matrix, and
the rotation matrix that we will meet in subsequent classes.
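The column-vector test just described is easy to mechanize. The sketch below (illustrative Python) checks that every column has unit length and that distinct columns are perpendicular:

```python
import math

def transpose(M):
    """Swap rows and columns of a matrix stored as nested lists."""
    return [list(col) for col in zip(*M)]

def is_orthonormal(M, eps=1e-9):
    """True if M's columns are unit length and mutually perpendicular."""
    cols = transpose(M)                      # column vectors of M
    for i, u in enumerate(cols):
        if abs(math.hypot(*u) - 1.0) > eps:  # magnitude must be one
            return False
        for v in cols[i + 1:]:
            dot = sum(a * b for a, b in zip(u, v))
            if abs(dot) > eps:               # columns must be orthogonal
                return False
    return True

t = math.radians(30)
rotation = [[math.cos(t), -math.sin(t)],
            [math.sin(t),  math.cos(t)]]
identity = [[1, 0], [0, 1]]
# Both pass the test, so for each of them the transpose is the inverse.
```

For these matrices, computing the inverse therefore costs no more than a transpose, which is one reason rotation matrices are so convenient.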
6
GRAPHICS RENDERING: TRANSFORMATION
In Computer Graphics we most commonly model objects using points, i.e. locations in 2D or 3D
space. For example, we can model a 2D shape as a polygon whose vertices are points. By
manipulating the points, we can define the shape of an object, or move it around in space.
In 3D too, we can model a shape using points. Points might define the locations (perhaps the
corners) of surfaces in space. In this unit, we will describe how to manipulate models of objects
and display them on the screen.
Transformation
Transformations are often considered to be one of the hardest concepts in elementary computer
graphics. But transformations are straightforward, as long as you:
• Have a clear representation of the geometry
• Understand the underlying mathematics
• Are systematic about concatenating transformations
Given a point cloud, polygon, or sampled parametric curve, we can use transformations for
several purposes:
A. TRANSLATION (2D)
This is a transformation on an object that simply moves it to a different position somewhere else
within the same coordinate system. To translate an object, we translate each of its vertices (points). It
involves moving an object along a line from one location to another.
To translate the point (x1, y1) by tx in x and ty in y, the result is (x2, y2) = (x1 + tx, y1 + ty).
• Translations can be represented by adding vectors:
[ x1 ]   [ tx ]   [ x1 + tx ]
[ y1 ] + [ ty ] = [ y1 + ty ]
Suppose we want to move a point from A to B, e.g. the vertex of a polygon. This operation is
called a translation.
To translate point A by (tx, ty), we add (tx, ty) to A’s coordinates.
To translate a 2D shape by (tx, ty):
• Translate each point that defines the shape, e.g. each vertex of a polygon, the center point
of a circle, or the control points of a curve.
Translation by (tx, ty) moves each object point by (tx, ty): (x, y) → (x + tx, y + ty)
Translation is a linear operation: the new coordinates are a linear combination of the previous
coordinates, and the new coordinates are determined from a linear system:
x’ = x + tx
y’ = y + ty
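As a sketch in Python (illustrative; a 2D shape here is just a list of vertex tuples):

```python
def translate(points, tx, ty):
    """Move every vertex of a shape by (tx, ty)."""
    return [(x + tx, y + ty) for x, y in points]

triangle = [(0, 0), (2, 0), (1, 1)]
moved = translate(triangle, 3, 5)   # each vertex shifted by (3, 5)
```

Translating every vertex translates the whole polygon, since the edges are defined by the vertices.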
B. Rotation (2D)
Rotation is a transformation on an object that changes its position by rotating the object some angle
about some axis. Rotations in the x-y plane are about an axis parallel to z. The point of intersection
of the rotation axis with the x-y plane is the pivot point. We need to specify the angle and pivot
point about which the object is to be rotated.
• To rotate an object, we rotate each of its vertices (points).
• Positive angles are in the counterclockwise direction.
• Rotating (x1, y1) by some angle B counterclockwise gives the result (x2, y2):
(x2, y2) = (x1 * cos(B) – y1 * sin(B), y1 * cos(B) + x1 * sin(B))
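The rotation formula above, applied to every vertex (illustrative Python; the pivot here is the origin, and positive angles are counterclockwise):

```python
import math

def rotate(points, angle_deg):
    """Rotate vertices counterclockwise about the origin."""
    b = math.radians(angle_deg)
    c, s = math.cos(b), math.sin(b)
    # Apply (x, y) -> (x*cos(B) - y*sin(B), y*cos(B) + x*sin(B))
    return [(x * c - y * s, y * c + x * s) for x, y in points]

# Rotating the point (1, 0) by 90 degrees should land on (0, 1),
# up to floating-point rounding.
(x2, y2), = rotate([(1.0, 0.0)], 90)
```

To rotate about an arbitrary pivot point, first translate the pivot to the origin, rotate, then translate back.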
C. SCALING (2D)
Scaling is a transformation on an object that changes its size. Just as the translation could have been
different amounts in x and y, you can scale x and y by different factors. Scaling is a transformation
on an object that changes its size within the same coordinate system. To scale an object we scale
each of its vertices (points).
To scale a 2D point (x1, y1) by sx in the x direction and sy in the y direction, we simply calculate
the new coordinates to be: (x2, y2) = (sx·x1, sy·y1).
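And the scaling rule as a sketch (illustrative Python; note that scaling about the origin also moves vertices that are not at the origin):

```python
def scale(points, sx, sy):
    """Scale vertices by sx in x and sy in y, about the origin."""
    return [(sx * x, sy * y) for x, y in points]

square = [(1, 1), (2, 1), (2, 2), (1, 2)]
stretched = scale(square, 2, 3)   # twice as wide, three times as tall
```

As with rotation, scaling about the shape's own centre is done by translating the centre to the origin, scaling, and translating back.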
7
ANIMATION
Motion can bring the simplest of characters to life. Even simple polygonal shapes can convey a
number of human qualities when animated: identity, character, gender, mood, intention, emotion,
and so on.
In general, animation may be achieved by specifying a model with n parameters that identify degrees
of freedom that an animator may be interested in such as
• polygon vertices,
• spline control,
• joint angles,
• muscle contraction,
• camera parameters, or
• color.
With n parameters, this results in a vector q in n-dimensional state space. Parameters may be varied
to generate animation. A model’s motion is a trajectory through its state space or a set of motion
curves for each parameter over time, i.e. q(t), where t is the time of the current frame. Every
animation technique reduces to specifying the state space trajectory.
A. KEYFRAMING
Keyframing is an animation technique where motion curves are interpolated through states at times,
(q1, ..., qT), called keyframes, specified by a user.
Catmull-Rom splines are well suited for keyframe animation because they pass through their
control points.
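The standard Catmull-Rom interpolant can be sketched as follows (illustrative Python for a single motion-curve parameter; in practice the same formula is applied to each of the model's n parameters):

```python
def catmull_rom(q0, q1, q2, q3, t):
    """Interpolate between keyframe values q1 and q2, with t in [0, 1].

    q0 and q3 are the neighbouring keyframes. The curve passes
    through q1 at t=0 and q2 at t=1, which is exactly the property
    animators want from a keyframe interpolant.
    """
    return 0.5 * ((2 * q1)
                  + (-q0 + q2) * t
                  + (2 * q0 - 5 * q1 + 4 * q2 - q3) * t ** 2
                  + (-q0 + 3 * q1 - 3 * q2 + q3) * t ** 3)

keys = [0.0, 1.0, 3.0, 2.0]        # one parameter at four keyframes
mid = catmull_rom(*keys, 0.5)      # smooth in-between value at the midpoint
```

Setting t = 0 or t = 1 returns the interior keyframes q1 and q2 exactly, confirming the spline interpolates rather than merely approximates its control points.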
• Pros:
– Very expressive
– Animator has complete control over all motion parameters
• Cons:
– Very labor intensive
– Difficult to create convincing physical realism
• Uses:
– Potentially everything except complex physical phenomena such as smoke, water, or fire
B. KINEMATICS
Kinematics describe the properties of shape and motion independent of physical forces that cause
motion. Kinematic techniques are used often in keyframing, with an animator either setting joint
parameters explicitly with forward kinematics or specifying a few key joint orientations and having
the rest computed automatically with inverse kinematics.
i. Forward Kinematics
With forward kinematics, a point p̄ is positioned by p̄ = f(Θ), where Θ is a state vector (θ1, θ2,
..., θn) specifying the position, orientation, and rotation of all joints.
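For a planar chain of rigid links this function f is easy to sketch (illustrative Python; link lengths and relative joint angles in radians are the assumed inputs):

```python
import math

def forward_kinematics(link_lengths, joint_angles):
    """End-effector position p = f(theta) for a planar articulated chain.

    Each joint angle is measured relative to the previous link, so the
    absolute direction of link i is the running sum of angles 1..i.
    """
    x = y = 0.0
    total = 0.0
    for length, theta in zip(link_lengths, joint_angles):
        total += theta                 # absolute direction of this link
        x += length * math.cos(total)
        y += length * math.sin(total)
    return x, y

# Two unit links with both joints straight: the arm points along x,
# placing the end effector at (2, 0).
x, y = forward_kinematics([1.0, 1.0], [0.0, 0.0])
```

Bending the second joint by 90 degrees instead swings the second link upward, moving the end effector to roughly (1, 1).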
ii. Inverse Kinematics
With inverse kinematics, a user specifies the position of the end effector, p̄, and the algorithm has
to evaluate the required Θ given p̄. That is, Θ = f −1(p̄).
Usually, numerical methods are used to solve this problem, as it is often nonlinear and either
underdetermined or overdetermined. A system is underdetermined when there is no unique
solution, such as when there are fewer equations than unknowns. A system is overdetermined
when there are more equations than unknowns, so it may be inconsistent and have no solution.
Extra constraints are necessary to obtain unique and stable solutions. For example, constraints may
be placed on the range of joint motion and the solution may be required to minimize the kinetic
energy of the system.
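For the special case of a planar two-link arm, a closed-form inverse exists via the law of cosines; since two mirror-image solutions reach the same target, an extra convention must pick one. The sketch below (illustrative Python) chooses the "elbow-up" branch:

```python
import math

def two_link_ik(x, y, l1, l2):
    """One solution Theta = f^{-1}(p) for a planar 2-link arm.

    Returns (shoulder, elbow) joint angles in radians that place the
    end effector at (x, y). The mirror 'elbow-down' solution also
    exists; picking one branch is the extra constraint that makes
    the answer unique.
    """
    d2 = x * x + y * y                            # squared target distance
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2) # law of cosines
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)                        # elbow angle (elbow-up)
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2

# Reaching (1, 1) with two unit links: shoulder stays level,
# elbow bends 90 degrees.
t1, t2 = two_link_ik(1.0, 1.0, 1.0, 1.0)
```

Feeding these angles back into the forward-kinematics formula recovers the target position, which is the usual sanity check for an IK solver.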
C. MOTION CAPTURE
In motion capture, an actor has a number of small, round markers attached to his or her body that
reflect light in frequency ranges that motion capture cameras are specifically designed to pick up.
• Pros:
– Captures specific style of real actors
• Cons:
– Often not expressive enough
– Time consuming and expensive
– Difficult to edit
• Uses:
– Character animation
– Medicine, such as kinesiology and biomechanics
D. PHYSICALLY-BASED ANIMATION
It is possible to simulate the physics of the natural world to generate realistic motions, interactions,
and deformations. Dynamics rely on the time evolution of a physical system in response to forces.
Forward simulation has the advantage of being reasonably easy to simulate. However, a simulation
is often very sensitive to initial conditions, and it is often difficult to predict paths x(t) without
running a simulation—in other words, control is hard.
With inverse dynamics, constraints on a path x(t) are specified. Then we attempt to solve for the
forces required to produce the desired path. This technique can be very difficult computationally.
Physically-based animation has the advantages of:
• Realism,
• Long simulations are easy to create,
• Natural secondary effects such as wiggles, bending, and so on—materials behave naturally,
• Interactions between objects are also natural.
The main disadvantage of physically-based animation is the lack of control, which can be critical,
for example, when a complicated series of events needs to be modeled or when an artist needs
precise control over elements in a scene.
• Pros:
– Very realistic motion
• Cons:
– Very slow
– Very difficult to control
– Not expressive
• Uses:
– Complex physical phenomena
iv. Particle Systems
A particle system fakes passive dynamics to quickly render complex systems such as fire, flowing
water, and sparks. A particle is a point in space with some associated parameters such as velocity,
time to live, color, or whatever else might be appropriate for the given application. During a
simulation loop, particles are created by emitters that determine their initial properties, and existing
particles are removed if their time to live has been exceeded. The physical rules of the system are
then applied to each of the remaining particles, and they are rendered to the display. Particles are
usually rendered as flat textures, but they may be rendered procedurally or with a small mesh as
well.
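The simulation loop just described can be sketched as follows (illustrative Python; the parameter ranges and the gravity rule are arbitrary example choices):

```python
import random

class Particle:
    def __init__(self, x, y, vx, vy, ttl):
        self.x, self.y = x, y          # position
        self.vx, self.vy = vx, vy      # velocity
        self.ttl = ttl                 # time to live, in frames

def emit(n):
    """Emitter: spawn n particles with randomized initial properties."""
    return [Particle(0.0, 0.0,
                     random.uniform(-1, 1),   # sideways spread
                     random.uniform(1, 2),    # initial upward speed
                     random.randint(5, 10))   # lifetime in frames
            for _ in range(n)]

def step(particles, dt=1.0, gravity=-0.1):
    """One simulation-loop iteration: age, cull, then apply physics."""
    alive = []
    for p in particles:
        p.ttl -= 1
        if p.ttl <= 0:
            continue                    # time to live exceeded: remove
        p.vy += gravity * dt            # physical rule: gravity
        p.x += p.vx * dt
        p.y += p.vy * dt
        alive.append(p)
    return alive                        # then render each, e.g. as a flat texture

random.seed(0)                          # deterministic run for the example
particles = emit(100)
for _ in range(3):                      # three frames of simulation
    particles = step(particles)
```

Because particles never interact with each other here, the cost per frame is linear in the particle count, which is why very large systems (sparks, spray, smoke puffs) stay cheap.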
E. BEHAVIORAL ANIMATION
Particle systems don’t have to model physics, since rules may be arbitrarily specified. Individual
particles can be assigned rules that depend on their relationship to the world and other particles,
effectively giving them behaviors that model group interactions. To create particles that seem to
flock together, only three rules are necessary to simulate separation between particles, alignment
of particle steering direction, and the cohesion of a group of particles.
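The three flocking rules can be sketched as one velocity update per boid (illustrative Python; the rule weights and neighbourhood radius are arbitrary example values, and every boid sees the whole group for simplicity):

```python
def flock_step(positions, velocities, dt=1.0,
               sep=0.05, ali=0.05, coh=0.01):
    """One update of the three classic flocking rules in 2D.

    separation: steer away from flockmates that are too close,
    alignment:  steer toward the group's average heading,
    cohesion:   steer toward the centre of the group.
    """
    n = len(positions)
    cx = sum(x for x, _ in positions) / n        # group centre
    cy = sum(y for _, y in positions) / n
    avx = sum(vx for vx, _ in velocities) / n    # average heading
    avy = sum(vy for _, vy in velocities) / n
    new_v = []
    for (x, y), (vx, vy) in zip(positions, velocities):
        for ox, oy in positions:                 # separation rule
            dx, dy = x - ox, y - oy
            if 0 < dx * dx + dy * dy < 1.0:      # too close: push apart
                vx += sep * dx
                vy += sep * dy
        vx += ali * (avx - vx) + coh * (cx - x)  # alignment + cohesion
        vy += ali * (avy - vy) + coh * (cy - y)
        new_v.append((vx, vy))
    new_p = [(x + vx * dt, y + vy * dt)
             for (x, y), (vx, vy) in zip(positions, new_v)]
    return new_p, new_v

positions = [(0.0, 0.0), (10.0, 0.0)]
velocities = [(0.0, 0.0), (0.0, 0.0)]
positions, velocities = flock_step(positions, velocities)
# Cohesion has pulled both boids toward the centre of the group.
```

Despite each boid following only these local rules, the group as a whole exhibits coordinated flocking, which is the point of behavioral animation.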
More complicated rules of behavior can be designed to control large crowds of detailed characters
that would be nearly impossible to animate by hand. However, it is difficult to program
characters to handle all but simple tasks automatically. Such techniques are usually limited to
animating background characters in large crowds and characters in games.
F. DATA-DRIVEN ANIMATION
Data-driven animation uses information captured from the real world, such as video or captured
motion data, to generate animation. The technique of video textures finds points in a video
sequence that are similar enough that a transition may be made without appearing unnatural to a
viewer, allowing for arbitrarily long and varied animation from video. A similar approach may be
taken to allow for arbitrary paths of motion for a 3D character by automatically finding frames in
motion capture data or keyframed sequences that are similar to other frames. An animator can then
trace out a path on the ground for a character to follow, and the animation is automatically generated
from a database of motion.
• Pros:
– Captures specific style of real actors
– Very flexible
– Can generate new motion in real-time
• Cons:
– Requires good data, and possibly lots of it
• Uses:
– Character animation